General Purpose AI (GPAI) Models
Chapter V of the EU AI Act (Articles 51–56) creates a separate, dedicated compliance track for General Purpose AI model providers — the companies that train and release foundation models (large language models, image generators, multimodal models, and so on). The GPAI compliance deadline was 2 August 2025. If you provide a GPAI model and have not yet complied, you are in violation.
GPAI compliance deadline: 2 August 2025 — now passed
If you provide a General Purpose AI model and have not yet implemented the Chapter V obligations, you are in violation of the EU AI Act. Seek qualified legal counsel immediately and prioritise compliance.
Articles
Art. 51–56
Compliance deadline
2 Aug 2025
Systemic risk threshold
10²⁵ FLOPs
Max penalty
€15M / 3%
What is a GPAI model?
Under Article 3(63), a GPAI model is an AI model — including where trained with a large amount of data using self-supervision at scale — that displays significant generality and is capable of competently performing a wide range of distinct tasks. In practice, this means:
Examples of GPAI models
- Large language models (GPT, Claude, Gemini, Llama, Mistral)
- Image generation models (Stable Diffusion, DALL-E, Midjourney)
- Multimodal foundation models
- Code generation models
- Audio/speech/video generation models at scale
Important distinctions
- A GPAI model is not the same as a GPAI system (a GPAI system is an AI system built on top of a GPAI model, such as a model wrapped in a chat interface; Art. 3(66))
- You can be both a GPAI model provider AND an AI system provider
- Fine-tuned models may also qualify as GPAI models
- Open-weight models are included (with nuances for open source)
Art. 53
Obligations for All GPAI Model Providers
Plain English
Article 53 sets the baseline obligations for ALL GPAI model providers — regardless of whether the model has systemic risk. You must: (1) prepare and maintain technical documentation (model architecture, training data sources, training compute, evaluation benchmarks, capability assessments); (2) make documentation available to downstream AI system providers who integrate your model; (3) implement a copyright compliance policy — this means respecting text-and-data-mining opt-outs under the DSM Directive; (4) publish a training data summary. These apply to both proprietary and open-weight models, though models released under a free and open-source licence with publicly available weights are exempt from the documentation duties in points (1) and (2) unless they are classified as systemic risk models.
Official Text (EUR-Lex)
1. Providers of general-purpose AI models shall: (a) draw up and keep up-to-date technical documentation including a training process description and the technical parameters of the model; (b) draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to integrate the general-purpose AI model into their AI systems; (c) put in place a policy to comply with Union copyright law, and in particular to identify and comply with a rights reservation expressed pursuant to Article 4(3) of Directive (EU) 2019/790, including through state-of-the-art technologies; (d) publish a sufficiently detailed summary about the content used for training the model.
Key obligations
1. Prepare technical documentation per Annex XI before making the model available
2. Include: model architecture, training data sources, training compute (FLOPs), evaluation results, known limitations
3. Make documentation available to downstream AI system providers upon request
4. Implement a copyright compliance policy — respect rights reservations under DSM Directive Art. 4(3)
5. Publish a publicly available summary of training data content
6. Notify the Commission without delay if the model meets the systemic risk threshold (Art. 52)
7. Keep documentation updated as the model evolves
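The documentation items above can be captured as a structured record with a simple completeness check. A minimal sketch in Python — the field names and schema are our own assumptions; Annex XI prescribes what must be documented, not a machine-readable format:

```python
from dataclasses import dataclass, asdict

@dataclass
class GPAIModelDocumentation:
    """Illustrative Annex XI-style technical documentation record.

    Field names are assumptions for this sketch, not an official schema.
    """
    model_name: str
    architecture: str                     # e.g. decoder-only transformer
    training_data_sources: list[str]      # high-level description of sources
    training_compute_flops: float         # cumulative training compute
    evaluation_results: dict[str, float]  # benchmark name -> score
    known_limitations: list[str]

    def missing_fields(self) -> list[str]:
        """Names of fields left empty: a simple completeness check."""
        return [k for k, v in asdict(self).items() if not v]

doc = GPAIModelDocumentation(
    model_name="example-model-7b",  # hypothetical model
    architecture="decoder-only transformer",
    training_data_sources=["web crawl", "licensed books corpus"],
    training_compute_flops=8.4e22,
    evaluation_results={"MMLU": 0.62},
    known_limitations=["hallucination", "English-centric outputs"],
)
print(doc.missing_fields())  # -> []
```

A record like this can double as the downstream-provider documentation in item 3, with internal-only fields stripped before sharing.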
Source
Official text from EUR-Lex — Regulation (EU) 2024/1689 (EU AI Act). This text is in the public domain.
Art. 51
GPAI Models with Systemic Risk — Classification
Plain English
A GPAI model is classified as having 'systemic risk' if it meets the 10²⁵ FLOPs training compute threshold — which today corresponds roughly to frontier models such as GPT-4, Gemini Ultra, and Claude 3 Opus. The Commission can also designate models below the threshold as systemic, ex officio or following a qualified alert from the scientific panel, if they demonstrate equivalent capabilities. The threshold will be updated over time as the Commission develops guidance. If you are unsure whether your model is systemic, assess your training compute and monitor Commission and AI Office guidance.
Official Text (EUR-Lex)
1. A GPAI model shall be classified as a GPAI model with systemic risk if it meets any of the following conditions: (a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; (b) based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a). 2. A GPAI model shall be presumed to have high impact capabilities pursuant to paragraph 1, point (a), where the cumulative amount of compute used for its training measured in floating point operations (FLOPs) is greater than 10²⁵. 3. The Commission shall develop guidance on the practical implementation of paragraph 1 taking into account scientific developments.
Key obligations
1. Calculate your model's total training compute in FLOPs
2. If training compute exceeds 10²⁵ FLOPs, your model is presumed to have systemic risk
3. Notify the Commission without delay if your model meets the systemic risk threshold (Art. 52)
4. Monitor Commission designation decisions — you may be designated systemic even below the threshold
5. Implement all Art. 55 obligations if classified as systemic
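A back-of-the-envelope check against the threshold can use the common ≈6 × parameters × tokens approximation for dense transformer training compute. A sketch, with hypothetical model sizes — this is an estimate only, and the Act counts cumulative compute across all training runs:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art. 51(2) presumption threshold

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough dense-transformer estimate: ~6 FLOPs per parameter per token
    (forward + backward pass). Ignores MoE sparsity, multiple epochs,
    and fine-tuning runs, all of which affect the real cumulative total."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate exceeds the Art. 51(2) presumption threshold."""
    return estimate_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical examples:
print(presumed_systemic_risk(7e9, 2e12))     # 8.4e22 FLOPs -> False
print(presumed_systemic_risk(400e9, 15e12))  # 3.6e25 FLOPs -> True
```

A result near the threshold is exactly the case where you should notify the Commission and document your compute accounting rather than rely on the estimate.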
Art. 55
Obligations for GPAI Models with Systemic Risk
Plain English
On top of the baseline Art. 53 obligations, systemic risk GPAI providers must: (1) conduct adversarial testing / red-teaming using state-of-the-art methods to identify and mitigate systemic risks; (2) assess systemic risks at EU level that could arise from deployment; (3) report serious incidents to the AI Office without undue delay — including information about corrective measures; (4) implement cybersecurity measures adequate for the scale and risk of the model. These are significantly heavier obligations designed for the most capable frontier AI models.
Official Text (EUR-Lex)
1. Providers of GPAI models with systemic risk shall in addition to the obligations referred to in Articles 53 and 54: (a) perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identify and mitigate systemic risks; (b) assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of GPAI models with systemic risk; (c) keep track of, document and report, without undue delay to the AI Office and, where relevant, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them; (d) ensure an adequate level of cybersecurity protection for the GPAI model with systemic risk and the physical infrastructure of the model.
Key obligations
1. Conduct adversarial testing (red-teaming) using state-of-the-art methodologies before model release
2. Document all adversarial testing protocols, results, and identified risks
3. Assess and mitigate systemic risks at Union level stemming from the model
4. Report serious incidents to the AI Office without undue delay
5. Include corrective measures in incident reports
6. Implement cybersecurity measures commensurate with the model's risk profile
7. Document energy consumption and computational resources used (per Annex XI)
Art. 56
GPAI Code of Practice — Walkthrough
Article 56 mandates the AI Office to facilitate the development of a Code of Practice for GPAI providers. Adherence creates a presumption of conformity with the Chapter V obligations — making it the primary compliance route for most GPAI model providers. Non-signatories must demonstrate compliance by alternative means, which creates significant legal uncertainty.
The Four Commitment Areas
Transparency & Technical Documentation
- ✓ Publish a model card / technical documentation per Annex XI before model release
- ✓ Include: model architecture, intended use, training data description, training compute (FLOPs), known limitations, evaluation benchmarks
- ✓ Make downstream documentation available to AI system providers integrating the model
- ✓ Update documentation whenever the model is significantly updated or fine-tuned
- ✓ Notify the EU AI Office as required (e.g. when the systemic risk threshold is met)
Copyright & Training Data
- ✓ Implement a state-of-the-art copyright compliance policy before training
- ✓ Identify and respect rights reservations under DSM Directive Art. 4(3) (text-and-data-mining opt-outs)
- ✓ Maintain a register of data sources used for training (or a sufficiently detailed description)
- ✓ Publish a public training data summary — including the types of content and sources
- ✓ Document any licensed datasets and the terms under which they were used
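One machine-readable rights-reservation signal encountered in practice is a robots.txt disallow aimed at AI training crawlers. A deliberately simplified sketch of checking such an opt-out — the crawler name is hypothetical, and real robots.txt parsing has more rules (grouped consecutive User-agent lines, Allow directives, wildcards):

```python
def tdm_opt_out(robots_txt: str, agent: str, path: str = "/") -> bool:
    """Return True if robots_txt disallows `path` for `agent`.

    Simplified parser: one User-agent per group, Disallow lines only.
    """
    rules: dict[str, list[str]] = {}
    current = None
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        key, _, value = line.partition(":")
        key, value = key.strip().lower(), value.strip()
        if key == "user-agent":
            current = rules.setdefault(value.lower(), [])
        elif key == "disallow" and current is not None:
            current.append(value)
    # Fall back to the wildcard group if the agent has no specific rules.
    disallows = rules.get(agent.lower(), rules.get("*", []))
    return any(d and path.startswith(d) for d in disallows)

ROBOTS = """
User-agent: ExampleAIBot   # hypothetical training crawler
Disallow: /
User-agent: *
Disallow:
"""

print(tdm_opt_out(ROBOTS, "ExampleAIBot"))  # True  -> rights reserved
print(tdm_opt_out(ROBOTS, "SearchBot"))     # False -> no reservation
```

In production you would use a maintained robots.txt library and log each opt-out decision, since the policy must be demonstrable to regulators.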
Safety Evaluation & Risk Assessment (All GPAI)
- ✓ Conduct capability evaluations prior to model release
- ✓ Test for: harmful content generation, CBRN information, cyberattack facilitation, deception, manipulation
- ✓ Document risk assessment methodology and results
- ✓ Implement mitigations for identified risks before deployment
- ✓ For systemic risk models: conduct adversarial testing (red-teaming) using standardised protocols
Incident Reporting & Cybersecurity (Systemic Risk)
- ✓ Establish an incident monitoring and reporting process
- ✓ Report serious incidents to the AI Office without undue delay
- ✓ Include in reports: nature of incident, affected users, corrective measures taken
- ✓ Implement cybersecurity measures commensurate with model scale and risk
- ✓ Monitor for misuse, jailbreaks, and novel attack vectors post-deployment
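The report contents listed above can be assembled into a simple structured record for submission. A minimal sketch — the field names and JSON shape are our own assumptions; the Act requires the information but does not prescribe a format:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SeriousIncidentReport:
    """Illustrative incident record; fields mirror the checklist above."""
    nature_of_incident: str
    affected_users_estimate: int
    corrective_measures: list[str]
    reported_at: str  # ISO 8601 UTC timestamp

def build_report(nature: str, affected: int, measures: list[str]) -> SeriousIncidentReport:
    return SeriousIncidentReport(
        nature_of_incident=nature,
        affected_users_estimate=affected,
        corrective_measures=measures,
        reported_at=datetime.now(timezone.utc).isoformat(),
    )

# Hypothetical incident:
report = build_report(
    "model produced disallowed CBRN guidance under jailbreak",
    0,
    ["patched system prompt", "added output filter", "expanded red-team suite"],
)
print(json.dumps(asdict(report), indent=2))
```

Timestamping at creation supports the "without undue delay" requirement: the record shows when the incident was logged relative to when it was reported.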
Signing the Code
- Creates presumption of conformity with Chapter V
- Simplifies AI Office audits and inspections
- Signals good faith to regulators
- Provides a structured implementation framework
- Access to AI Office guidance and working groups
Not Signing
- Must demonstrate Art. 53/55 compliance independently
- Higher burden of proof in regulatory investigations
- No safe harbour: cannot rely on the CoP presumption of conformity
- AI Office may scrutinise non-signatories more closely
- No access to Code-based compliance frameworks
Quick Self-Assessment: Are You Prepared?
Classify your AI system type
Use the Risk Classifier to determine whether you are a GPAI provider, AI system provider, or both.
Start Risk Classification →