The European Union's Artificial Intelligence Act (AI Act) creates a legal framework for the development and use of artificial intelligence in the EU. Its main objectives are to promote the adoption of and trust in AI systems and support innovation while ensuring AI systems are safe and respect fundamental rights. The AI Act categorizes AI systems based on risk levels (unacceptable, high, limited and minimal) and establishes requirements and obligations for AI operators to enhance accountability and transparency.
Satisfying the AI Act’s requirements demands cooperation between customers and their vendors. Our EU customers expect Cloudflare to design our AI-powered products in ways that support their compliance with the AI Act, and we recognize our responsibility in this regard. We have always been, and will remain, fully committed to developing industry-leading, AI-powered security products that align with the requirements of law, including the new AI Act.
Application of the EU AI Act
The AI Act applies to an AI system (defined below) whenever a provider makes it available in the EU. As defined in the AI Act, a provider is the developer of the system. Typically, the AI system will bear the provider’s name or trademark. The AI Act also applies to organizations based in the EU that use AI systems. Such organizations are defined by the AI Act as deployers.
In addition, the AI system-related provisions of the AI Act also apply if the output produced by an AI system outside the EU is used in the EU. In this way, the AI Act may apply to AI system providers and deployers that are not based in the EU.
Finally, the AI Act also regulates the providers of General-Purpose AI (GPAI) models (defined below) made available in the EU.
An AI system is a machine-based system, designed to function with some level of autonomy, which can infer from the input it receives how to generate outputs like predictions, content, recommendations, or decisions. An AI system incorporates one or more AI models.
An AI model is an algorithm that has been trained on a dataset in order to make predictions or perform new tasks on unseen data. However, just as software must be installed on a computer before it can run, an AI model must first be integrated with other components, such as a user interface, data pipelines, and computer hardware, before it can be used. Once integrated, the AI model and those components together make up the AI system.
A GPAI model is a particular type of AI model, generally trained on a very large dataset, that displays significant generality and is capable of performing a wide range of distinct tasks. GPAI models are often fine-tuned or modified to create new AI models. Large language models (LLMs) and other types of generative AI models are the most common examples of GPAI models.
High-risk AI systems
Cloudflare does not provide any high-risk AI systems. The AI Act lists the types of AI systems considered high risk because of their potential to present a significant risk of harm to the health, safety, or fundamental rights of individuals. Cloudflare’s AI-driven products, which are designed to protect our customers against cyber attacks and threats, do not fall within this list.
GPAI models
Cloudflare does not provide General-Purpose AI (GPAI) models trained by Cloudflare as part of its product offerings. The AI models, including GPAI models, made available on Workers AI are provided by third parties, who are responsible for assessing their own compliance with the AI Act.
This is important because GPAI models, such as LLMs, require extensive training data. This process can raise concerns regarding data sourcing and potential privacy implications, as highlighted by recent media attention on data scraping practices. In contrast, predictive ML models, like those used in Cloudflare's security services, are trained on specific datasets with structured outputs (e.g., threat scores), minimizing the risk of inadvertent data disclosure. This distinction underscores Cloudflare's commitment to responsible AI deployment and data security.
Deploying AI systems
We deploy and use AI systems within our own operations, leveraging AI to enhance our internal processes and services. To the extent we rely on vendors who use AI technologies, we perform a cross-functional review of those vendors, applying a dedicated vendor AI risk assessment.
Should we ever deploy a high-risk AI system, we are committed to meeting the compliance obligations the AI Act imposes, in line with the principles of transparency and fairness.