EU AI Act FAQ
Authoritative answers to the 30 most frequently asked questions about the EU AI Act — covering scope, risk classification, provider and deployer obligations, GPAI models, and enforcement. Each answer references the relevant articles and reflects the Act as in force.
Scope & Applicability
When does the EU AI Act apply to me?
The EU AI Act applies to you if you place an AI system on the EU market, put one into service within the EU, or if your AI system's output is used in the EU — regardless of where your organisation is established. This extraterritorial scope (Art. 2(1)) mirrors the GDPR's approach. If you develop, sell, distribute, or operate an AI system that affects people in the EU, the Act applies. The enforcement timeline is phased: prohibited practices applied from 2 February 2025; GPAI model obligations from 2 August 2025; high-risk AI system obligations under Annex III from 2 August 2026; and remaining high-risk obligations from 2 August 2027.
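For orientation, here is a minimal sketch of this phased timeline as a date lookup. The phase keys are informal shorthand, not terminology from the Regulation; the dates are those set by Art. 113.

```python
# Illustrative only: phased application dates of the EU AI Act (Art. 113).
# Phase keys are informal shorthand, not terms from the Regulation.
from datetime import date

APPLICATION_DATES = {
    "prohibited_practices": date(2025, 2, 2),    # Art. 5 bans
    "gpai_model_obligations": date(2025, 8, 2),  # Chapter V
    "high_risk_annex_iii": date(2026, 8, 2),     # Annex III systems
    "high_risk_annex_i": date(2027, 8, 2),       # Annex I product-embedded AI
}

def phases_in_force(on: date) -> list[str]:
    """Return the obligation phases already applicable on a given date."""
    return [phase for phase, start in APPLICATION_DATES.items() if on >= start]

print(phases_in_force(date(2026, 1, 1)))
# -> ['prohibited_practices', 'gpai_model_obligations']
```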
Does the EU AI Act apply to AI systems used only internally within a company?
Yes, in many cases. The Act applies to 'deployers' — organisations that put AI systems into use under their authority (Art. 3(4)). If your company deploys a high-risk AI system internally (for example, to evaluate job applicants, manage workers, or make credit decisions about employees), you are a deployer subject to Art. 26 obligations. The fact that you are not selling the system externally does not exempt you. The only meaningful internal exemption relates to personal non-professional use (Art. 2(10)), which is irrelevant for organisations.
Are open-source AI models exempt from the EU AI Act?
Partially. Art. 2(12) (read with Recitals 102–104) provides a limited exemption for AI systems released under free and open-source licences. The exemption does not apply where the system is placed on the market or put into service as a high-risk AI system, falls under the prohibited practices in Art. 5, or is subject to the transparency obligations in Art. 50. For GPAI models, Art. 53(2) exempts open-source providers from the technical documentation duties only, and even that partial exemption falls away entirely for models with systemic risk. In practice, if you release a model openly but it meets the GPAI systemic risk threshold (>10^25 FLOPs of training compute), you retain the full Chapter V obligations. Open-source is not a blanket exemption.
Does the EU AI Act apply to AI used for research and development?
Art. 2(6) provides that AI systems and models specifically developed and put into service for the sole purpose of scientific research and development are excluded from the Act's scope. However, this exemption is narrow: once a system transitions from research into a real-world deployment — even a pilot — the exemption falls away. Testing of AI systems in real-world conditions (real-world testing) is covered by Art. 60 and requires specific safeguards when conducted outside of AI regulatory sandboxes.
Are AI systems used by public authorities subject to the EU AI Act?
Yes. Public authorities are expressly within scope both as providers and deployers. Indeed, many of the highest-risk use cases in Annex III relate to public authorities: law enforcement, border management, administration of justice, biometric identification, and access to public services. EU institutions, bodies, offices and agencies are also within scope, with the European Data Protection Supervisor acting as their market surveillance authority. AI systems that are components of the large-scale EU IT systems in the area of freedom, security and justice (used for migration and border management, listed in Annex X) benefit from an extended transitional period under Art. 111(1).
What counts as an 'AI system' under the Act?
Art. 3(1) defines an AI system as 'a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence real or virtual environments.' The definition deliberately excludes traditional rule-based software. Whether a system qualifies depends on whether it uses machine learning, statistical approaches, or reasoning techniques to infer outputs. Simple rule engines and decision trees with fully deterministic logic generally fall outside scope, though the boundary is contested.
Do non-EU companies need to appoint an EU representative?
Yes, if a non-EU provider places a high-risk AI system on the EU market or puts one into service in the EU, they must appoint an authorised representative established in the EU (Art. 22). The authorised representative must be designated by written mandate, can act on behalf of the provider before market surveillance authorities, and is responsible for ensuring compliance with the provider's obligations under the Act. This mirrors the GDPR Art. 27 representative requirement. For GPAI model providers without EU establishment, Art. 54 imposes a similar requirement.
Does GDPR still apply to AI systems, or does the EU AI Act replace it?
Both apply simultaneously. The AI Act expressly preserves EU data protection law: Art. 2(7) states that Union law on the protection of personal data is unaffected, and Recital 9 confirms the Act is not intended to modify it. Wherever an AI system processes personal data (true of most high-risk AI systems), both sets of obligations apply in full. This means conducting both a GDPR Data Protection Impact Assessment (Art. 35 GDPR) and AI Act risk management (Art. 9 AI Act), satisfying the transparency obligations of both instruments (Art. 13–14 GDPR; Art. 13 and Art. 50 AI Act), and ensuring human oversight arrangements satisfy both GDPR Art. 22 and AI Act Art. 14. Compliance with the AI Act does not constitute compliance with the GDPR, and vice versa.
Risk Classification
How do I know if my AI system is high-risk?
High-risk AI systems are defined in Art. 6 and listed in Annex III. There are two routes to being high-risk. First, an AI system that is a safety component of a product covered by the EU harmonisation legislation listed in Annex I (e.g. medical devices, machinery, vehicles), or is itself such a product, and is required to undergo third-party conformity assessment under that legislation. Second, an AI system that falls within one of the eight categories listed in Annex III: (1) biometrics, (2) critical infrastructure, (3) education and vocational training, (4) employment and workers management, (5) access to essential private and public services, (6) law enforcement, (7) migration, asylum and border control, (8) administration of justice and democratic processes. Art. 6(3) allows a provider to self-assess that an Annex III system does not pose a significant risk, but this requires a documented assessment and registration of the system in the EU database (Art. 49(2)). The classification logic is sketched below.
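A minimal sketch of the two-route decision logic, assuming simplified boolean inputs. The parameter names are illustrative, not terms defined in the Act, and a real assessment requires legal analysis rather than a checklist.

```python
# Hedged illustration of the Art. 6 high-risk classification routes.
def is_high_risk(
    annex_i_safety_component: bool,        # route 1: Annex I product or safety component
    third_party_assessment_required: bool, # under the Annex I sectoral legislation
    annex_iii_use_case: bool,              # route 2: listed in Annex III
    performs_profiling: bool,              # profiling is always high-risk (Art. 6(3))
    derogation_documented: bool,           # Art. 6(3) assessment + Art. 49(2) registration
) -> bool:
    # Route 1: safety component of (or itself) an Annex I product that must
    # undergo third-party conformity assessment under sectoral legislation.
    if annex_i_safety_component and third_party_assessment_required:
        return True
    # Route 2: Annex III use case, unless the Art. 6(3) derogation applies.
    if annex_iii_use_case:
        if performs_profiling:
            return True
        return not derogation_documented
    return False
```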
What does Annex III actually include?
Annex III lists eight high-risk use case categories. (1) Biometrics: remote biometric identification (real-time and post); biometric categorisation by sensitive attributes; emotion recognition in the workplace and education. (2) Critical infrastructure: safety components in critical digital infrastructure, road traffic, and the supply of water, gas, heating and electricity. (3) Education: determining access or admission, evaluating learning outcomes, and assessing students. (4) Employment: recruitment screening, CV filtering, promotion and termination decisions, performance monitoring, task allocation. (5) Essential services: eligibility for public assistance benefits, creditworthiness assessment, risk assessment and pricing in life and health insurance, emergency call dispatch and triage. (6) Law enforcement: individual risk assessment, polygraphs, evidence reliability assessment, predicting offending or reoffending, profiling in crime detection. (7) Migration: asylum and visa application examination, irregular migration risk assessment, document authenticity verification. (8) Justice and democratic processes: AI assisting judicial authorities and ADR bodies, and systems intended to influence the outcome of elections or referenda. Each category has specific sub-items; the list identifies specific use patterns rather than covering whole sectors.
What are the four risk tiers under the EU AI Act?
The EU AI Act uses a four-tier risk pyramid. (1) Unacceptable risk (prohibited): AI practices listed in Art. 5, including social scoring (by public and private actors alike), real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions), harmful subliminal manipulation, and exploitation of vulnerabilities. These are banned outright from 2 February 2025. (2) High risk: systems covered by Art. 6 and Annex III, subject to the full compliance regime. (3) Specific transparency risk: certain systems (chatbots, deepfake generators, emotion recognition systems) must meet the disclosure obligations of Art. 50; these can apply on top of another tier. (4) Minimal risk: all other AI systems, which face only voluntary codes of conduct. Most consumer AI applications fall in tier 4.
Can a provider self-certify that their Annex III system is not high-risk?
Yes, under Art. 6(3). A provider whose AI system falls within an Annex III category may determine that it does not pose a significant risk of harm to health, safety, or fundamental rights where the system is intended only to: (a) perform a narrow procedural task; (b) improve the result of a previously completed human activity; (c) detect decision-making patterns or deviations from prior patterns, without replacing or influencing a human assessment absent proper review; or (d) perform a preparatory task to an assessment relevant to an Annex III use case. A system that performs profiling of natural persons is always considered high-risk. The provider must document the assessment before placing the system on the market and register the system in the EU database (Art. 49(2)); the documentation must be provided to national competent authorities on request. This is an opt-out mechanism that requires active analysis, documentation, and registration. Silence is not an opt-out.
Is generative AI automatically high-risk?
No. Generative AI as a category does not automatically trigger high-risk classification. Most consumer-facing generative AI tools (chatbots, image generators, writing assistants) fall in the minimal-risk or specific-transparency-risk tier. However, a generative AI system can become high-risk if it is used in an Annex III context — for example, a generative AI system used to produce evidence assessments for law enforcement, or to evaluate job applications. The risk classification is based on the intended use and deployment context, not the underlying technology. General-purpose AI models that can be used for many purposes are subject to the separate GPAI regime in Chapter V.
Do I need a third-party conformity assessment, or can I self-assess?
It depends on the type of high-risk system. For Annex III systems other than biometrics (points 2 to 8), providers use the internal-control conformity assessment in Annex VI, i.e. self-assessment against the requirements of Chapter III. For biometric systems under Annex III point 1, Art. 43(1) requires assessment by a notified body (Annex VII) unless the provider has applied harmonised standards or common specifications covering all the relevant requirements, in which case internal control remains available. For Annex I product-embedded AI systems (e.g. medical devices), the conformity assessment procedure of the sectoral legislation applies and typically involves a notified body. Harmonised standards are being developed by CEN and CENELEC under a standardisation request from the Commission.
What is a Fundamental Rights Impact Assessment (FRIA)?
Art. 27 requires certain deployers of high-risk AI systems to conduct a Fundamental Rights Impact Assessment before first use. The FRIA must describe the deployer's processes and the period and frequency of use, identify the categories of natural persons and groups likely to be affected, assess the specific risks of harm to fundamental rights (as protected in the EU Charter), describe the human oversight measures, and set out the measures to be taken if the risks materialise. The obligation applies to deployers that are bodies governed by public law or private entities providing public services, and to deployers of the creditworthiness-assessment and life/health-insurance systems listed in Annex III points 5(b) and 5(c). The FRIA is separate from and additional to the GDPR's DPIA, though the two should be coordinated and may build on the same analysis. The AI Office will publish a template questionnaire to assist deployers (Art. 27(5)).
Provider Obligations
What is the difference between a provider and a deployer?
A provider (Art. 3(3)) is any natural or legal person who develops an AI system or has one developed with a view to placing it on the market or putting it into service under their own name or trademark. A deployer (Art. 3(4)) is any natural or legal person who uses an AI system under their authority for professional purposes. The key distinction is creation vs. operation. A software company that builds and sells an HR screening tool is a provider. The HR department that buys and uses it is a deployer. Critically, an organisation can be both simultaneously — if you build AI tools for your own internal use, you are both provider and deployer and must satisfy both sets of obligations.
What are a provider's core obligations for high-risk AI systems?
Providers of high-risk AI systems must satisfy a comprehensive compliance regime under Chapter III. The core obligations are: (1) implement a risk management system (Art. 9); (2) apply appropriate data governance to training, validation and testing data (Art. 10); (3) prepare technical documentation before placing the system on the market (Art. 11 and Annex IV); (4) design the system to automatically record events (logging) over its lifetime (Art. 12); (5) provide instructions for use to deployers (Art. 13); (6) design for effective human oversight (Art. 14); (7) ensure accuracy, robustness, and cybersecurity (Art. 15); (8) implement a quality management system (Art. 17); (9) register the system in the EU database (Art. 49); (10) draw up an EU declaration of conformity and affix the CE marking (Art. 47–48); (11) conduct post-market monitoring (Art. 72).
What must the Art. 13 instructions for use contain?
Art. 13 requires providers to supply deployers with instructions for use that enable the deployer to comply with their own obligations under the Act. The instructions must include: the provider's identity and contact details; the system's characteristics, capabilities, and limitations (including performance on specific persons or groups); the system's intended purpose and foreseeable misuse; the level of human oversight required and how to implement it; the technical measures required to interpret the system's output; maintenance and care requirements; the system's expected lifetime and update requirements; and any known risks and data requirements. The instructions must be written in plain, understandable language. They are a critical compliance document — deployers cannot comply with Art. 26 without adequate instructions.
What is the Art. 17 quality management system?
Art. 17 requires providers of high-risk AI systems to implement a documented quality management system (QMS) that ensures compliance throughout the system's lifecycle. The QMS must cover: the provider's compliance strategy and policies; techniques for design and development; data management procedures; the risk management system; post-market monitoring; incident reporting; personnel skills and accountability; documentation management; internal controls and audit records. For providers subject to EU harmonised legislation (e.g. medical device manufacturers), existing QMS frameworks (such as those under ISO 13485) may be integrated and adapted. The QMS must be documented and made available to market surveillance authorities on request. This is a significant organisational investment for smaller providers.
When must serious incidents be reported, and to whom?
Art. 73 requires providers of high-risk AI systems placed on the EU market to report serious incidents to the market surveillance authority of the Member State where the incident occurred. A 'serious incident' (Art. 3(49)) means an incident or malfunctioning leading to: the death of a person or serious harm to a person's health; a serious and irreversible disruption of the management or operation of critical infrastructure; infringement of obligations under Union law intended to protect fundamental rights; or serious harm to property or the environment. Reporting deadlines run from the moment the provider establishes a causal link between the AI system and the incident (or the reasonable likelihood of one): no later than 15 days for serious incidents generally; no later than 10 days in the event of a person's death; and no later than 2 days for widespread infringements or serious and irreversible disruption of critical infrastructure. An initial incomplete report may be followed by a complete one. These deadlines are summarised in the sketch below.
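The category keys in this sketch are informal shorthand; the Act's triggers are worded more precisely than a lookup table can capture.

```python
# Illustrative mapping of the Art. 73 reporting deadlines described above.
from datetime import date, timedelta

REPORTING_DEADLINES = {
    "serious_incident_default": timedelta(days=15),            # Art. 73(2)
    "widespread_or_critical_infrastructure": timedelta(days=2), # Art. 73(3)
    "death_of_a_person": timedelta(days=10),                    # Art. 73(4)
}

def report_due_by(causal_link_established: date, category: str) -> date:
    """Latest date the report must reach the market surveillance authority."""
    return causal_link_established + REPORTING_DEADLINES[category]

print(report_due_by(date(2026, 9, 1), "death_of_a_person"))  # 2026-09-11
```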
What penalties apply for non-compliance with the EU AI Act?
Art. 99 sets out a tiered penalty regime. The highest tier: up to €35 million or 7% of total worldwide annual turnover (whichever is higher) for violations of the prohibited AI practices in Art. 5. The middle tier: up to €15 million or 3% of worldwide annual turnover for non-compliance with most other obligations, including the high-risk requirements applying to providers, deployers, importers, distributors, and notified bodies. The lowest tier: up to €7.5 million or 1% of worldwide annual turnover for supplying incorrect, incomplete, or misleading information to authorities. For SMEs and start-ups, each fine is capped at whichever of the two amounts is lower (Art. 99(6)). Member States may provide for additional penalties. Market surveillance authorities can also order corrective measures, including withdrawal or recall of a non-compliant system (Art. 79 ff.), and the Commission can fine GPAI model providers up to €15 million or 3% of worldwide turnover (Art. 101). Penalties for AI systems are enforced by national market surveillance authorities; the Commission's AI Office enforces the GPAI regime. A worked sketch of the cap arithmetic follows.
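This sketch is illustrative only: the tier names are mine, and actual fines are set by enforcement authorities within these maxima.

```python
# Hedged illustration of the Art. 99 fine caps. Turnover means total
# worldwide annual turnover of the preceding financial year, in euros.
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Art. 5 violations
    "other_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

def max_fine(tier: str, turnover: float, is_sme: bool = False) -> float:
    fixed_cap, pct = TIERS[tier]
    pct_cap = pct * turnover
    # Default rule: whichever is higher.
    # For SMEs and start-ups: whichever is lower (Art. 99(6)).
    return min(fixed_cap, pct_cap) if is_sme else max(fixed_cap, pct_cap)

print(max_fine("prohibited_practices", 1_000_000_000))        # 70000000.0
print(max_fine("prohibited_practices", 1_000_000_000, True))  # 35000000.0
```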
Deployer Obligations
What are a deployer's obligations under Art. 26?
Art. 26 sets out deployer obligations for high-risk AI systems. Deployers must: (1) use the system in accordance with the provider's instructions for use; (2) assign human oversight to natural persons who have the necessary competence, training, and authority; (3) ensure that input data under their control is relevant and sufficiently representative for the system's intended purpose; (4) monitor the system's operation, suspend use and inform the provider and the market surveillance authority where they have reason to consider the system presents a risk, and report serious incidents; (5) keep the automatically generated logs for an appropriate period, at least six months; (6) inform workers and their representatives before deploying a system that affects them; (7) conduct a Fundamental Rights Impact Assessment (Art. 27) before first use where required; (8) inform affected natural persons where the system is used to make or assist decisions concerning them. These are statutory duties imposed directly on the deployer; they cannot be contractually shifted to the provider.
When does a deployer become a provider?
Art. 25 sets out when a distributor, importer, deployer, or other third party takes on provider obligations. A deployer becomes a provider for EU AI Act purposes when they: (a) put their name or trademark on a high-risk AI system already on the market or in service; (b) make a substantial modification to a high-risk AI system such that it remains high-risk; or (c) modify the intended purpose of an AI system that was not high-risk in such a way that it becomes high-risk. A 'substantial modification' (Art. 3(23)) is a change not foreseen in the provider's initial conformity assessment that affects compliance or alters the intended purpose; routine updates and fine-tuning within the original intended purpose generally do not qualify. If you take a third-party AI system and substantially modify it for a new use case, you become the provider of the modified system and inherit all provider obligations, including technical documentation, conformity assessment, and CE marking.
Do employers have specific obligations when deploying AI that affects workers?
Yes. Art. 26(7) specifically requires deployers to inform workers and their representatives before deploying AI systems that affect those workers. This applies to systems used for worker monitoring, performance management, task allocation, and similar employment contexts — all of which appear in Annex III category 4. Employers are deployers in these scenarios and cannot treat AI deployment as a purely technical matter. In practice this means communicating to affected employees what system is being used, for what purpose, what data it processes, what human oversight exists, and how decisions can be contested. This requirement interacts with national labour law and works council consultation rights, which may impose additional obligations.
Can a deployer contractually transfer their obligations to a provider?
No. Art. 26 obligations are statutory duties imposed directly on the deployer and cannot be transferred by contract. The provider-deployer relationship nonetheless matters: the instructions for use under Art. 13 must be adequate to enable the deployer to comply with its Art. 26 obligations, and providers must cooperate with deployers and competent authorities, for example on incident investigation and corrective actions. Contracts between providers and deployers should clearly allocate operational responsibility for incident reporting, log access, monitoring activities, and update obligations, but these contractual provisions supplement rather than replace the statutory obligations.
What human oversight does Art. 14 actually require?
Art. 14 requires that high-risk AI systems be designed and developed so that natural persons can effectively oversee them during use. Oversight measures must enable the persons assigned to: understand the system's capabilities and limitations; remain aware of automation bias; correctly interpret the system's output; monitor its operation for anomalies, malfunctions, and unexpected performance; and decide not to use the system, or to disregard, override, or reverse its output (Art. 14(4)). This is an obligation on the provider to design for oversight, and an obligation on the deployer to implement it with competent, trained personnel (Art. 26(2)). Oversight measures must be proportionate to the risks, level of autonomy, and context of use. Pure rubber-stamping, where a human formally reviews but cannot practically intervene, does not satisfy Art. 14.
GPAI Models
What is a GPAI model and what obligations apply?
A general-purpose AI (GPAI) model (Art. 3(63)) is an AI model that displays significant generality and can competently perform a wide range of distinct tasks, typically trained on large amounts of data using self-supervision at scale. Chapter V (Art. 51–56) governs GPAI models separately from the high-risk AI system regime. All GPAI model providers must (Art. 53): (1) draw up and keep up to date technical documentation for the AI Office and national competent authorities; (2) provide information and documentation to downstream providers integrating the model; (3) put in place a policy to comply with EU copyright law, including the text-and-data-mining opt-out under Art. 4(3) of Directive (EU) 2019/790; (4) publish a sufficiently detailed summary of the content used for training. Providers of GPAI models with systemic risk (presumed above a training compute threshold of 10^25 FLOPs, or designated by the Commission) face additional obligations under Art. 55: state-of-the-art model evaluation including adversarial testing, assessment and mitigation of systemic risks, tracking and reporting of serious incidents to the AI Office, and adequate cybersecurity protection. The AI Office enforces GPAI obligations directly.
What is the GPAI systemic risk threshold?
A GPAI model is presumed to present systemic risk when the cumulative compute used for its training exceeds 10^25 floating point operations (FLOPs) (Art. 51(1)(a) read with Art. 51(2)). The threshold reflects the compute levels of the largest models available when the Act was finalised in 2024, and the Commission can amend it by delegated act as technology evolves. Additionally, under Art. 51(1)(b) the Commission may designate a model as presenting systemic risk on the basis of the qualitative criteria in Annex XIII (for example the number of parameters, the size of the dataset, and the model's reach and number of users), even if it does not meet the FLOPs threshold. Providers whose models meet the threshold must notify the Commission within two weeks (Art. 52(1)) and may present arguments that the model nonetheless does not present systemic risk. A back-of-envelope compute check is sketched below.
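The 6ND rule of thumb (roughly 6 FLOPs per parameter per training token, for dense transformers) is a common community heuristic, not a method prescribed by the Act, which counts the cumulative compute actually used.

```python
# Illustrative compute estimate against the 10^25 FLOPs presumption.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Art. 51(2) presumption

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough 6*N*D heuristic for dense transformer training compute."""
    return 6 * n_params * n_tokens

# Example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.1e}", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # 6.3e+24 False
```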
How do GPAI model obligations interact with the high-risk AI system regime?
The two regimes are distinct but interact. A GPAI model may be integrated into a downstream AI system that qualifies as high-risk under Annex III. In that scenario, the GPAI model provider is subject to Chapter V obligations, and the downstream provider building the high-risk application is subject to Chapter III obligations. The downstream provider cannot simply rely on the GPAI model provider's documentation to satisfy its own Art. 9–15 obligations; it must assess the combined system. The Act supports this division of labour by requiring the GPAI model provider to give downstream providers the information and documentation they need to understand the model's capabilities and limitations and to comply with their own obligations (Art. 53(1)(b)). Where the same entity is both the GPAI model provider and the provider of a high-risk system built on it, both sets of obligations apply to that entity.
Do GPAI obligations apply to fine-tuned versions of a base model?
Yes, with nuance. If you fine-tune a third-party GPAI model and place the fine-tuned version on the market under your own name, you can become a provider of a GPAI model for the purposes of Chapter V. Recital 109 indicates that, for modifications or fine-tuning of an existing model, the obligations should be limited to the modification, for example by complementing the existing technical documentation with information on the modifications and the fine-tuning data. You may therefore be able to rely on the base model provider's documentation where it remains accurate and applicable. The GPAI Code of Practice (Art. 56) is expected to address how downstream fine-tuners can rely on upstream providers' documentation. The practical question is whether your fine-tuning materially changes the model's capabilities, risk profile, or intended uses.
These FAQs are provided for general informational purposes and reflect the EU AI Act (Regulation (EU) 2024/1689) as in force. They do not constitute legal advice. AI Act guidance from the European Commission, the AI Office, and national market surveillance authorities continues to evolve — always verify against the current official text and any applicable delegated acts or implementing regulations. For specific compliance questions, consult a qualified legal professional.