Sector Guide · Annex III cat. 5(b)–(c) · High-risk deadline: 2 Aug 2026

EU AI Act Guide for Financial Services

Financial services is one of the most directly and heavily exposed sectors under the EU AI Act. Credit scoring and life and health insurance pricing are explicitly listed as high-risk in Annex III, and fraud detection and AML systems can fall into scope depending on how their outputs affect access to services. This guide explains the classification framework, the regulatory overlaps with GDPR and sector-specific law, and what banks and insurers need to do before August 2026.

Why financial services faces disproportionate exposure

Annex III, Category 5 explicitly targets AI systems used in access to and enjoyment of essential private services. The regulation singles out credit scoring and creditworthiness assessment as the paradigmatic example of a high-risk AI system — because errors in these systems have direct, material, and potentially irreversible consequences for natural persons: people are denied mortgages, refused insurance, or blocked from bank accounts.

Financial services firms also face a requirement that, among private-sector deployers, falls almost entirely on them: under Art. 27, deployers of the credit scoring and insurance systems listed in Annex III points 5(b) and 5(c) must conduct a Fundamental Rights Impact Assessment (FRIA) before first use. This obligation applies to deployers, not just providers, meaning a bank that buys a vendor credit scoring model still bears it.

High-Risk AI Systems in Financial Services

These system types are either explicitly listed in Annex III or close enough to its scope that classification must be assessed case by case. Where a system is high-risk, the full Art. 9–17 obligation stack applies, alongside the Art. 26 deployer duties.

Credit scoring & creditworthiness assessment

Annex III, cat. 5(b)

AI systems used to evaluate creditworthiness of natural persons, or to determine their credit score, are explicitly high-risk. This covers retail lending, mortgage eligibility, credit card limits, and buy-now-pay-later affordability checks.

Typical deployers

Banks, building societies, BNPL providers, credit reference agencies deploying scoring models

Borderline considerations

Purely rule-based scorecards with no ML component may fall outside the AI definition (Art. 3(1)), but most modern scoring models will be in scope.

Life & health insurance pricing using AI

Annex III, cat. 5(c)

Where AI is used for risk assessment and pricing in relation to natural persons in the case of life and health insurance (including critical illness cover), the system is high-risk under Annex III point 5(c). The impact on access to essential financial protection is the trigger.

Typical deployers

Life insurers, health insurers, reinsurers feeding pricing models into retail products

Borderline considerations

Purely actuarial tables without an ML inference layer are not AI systems. Hybrid models need case-by-case analysis.

Fraud detection affecting eligibility

Annex III, cat. 5(b) — contested: note the financial-fraud carve-out

Annex III point 5(b) expressly excepts AI systems used for the purpose of detecting financial fraud, so pure fraud detection is generally outside that category. Classification becomes arguable where a system goes beyond detection and effectively determines account freezing, transaction blocking, or denial of services to natural persons. The key test is whether the output significantly affects the person's access to financial services.

Typical deployers

Payment processors, card networks, banks using real-time fraud scoring in authorisation pipelines

Borderline considerations

Fraud systems that only flag for human review, with a human making the final access decision, may reduce risk classification — but human oversight must be genuine and documented.

Anti-money laundering (AML) systems

Annex III, cat. 5(b) — arguable, given the fraud-detection carve-out; also a possible cat. 6 (law enforcement) intersection

AI-based transaction monitoring and customer risk scoring systems used in AML compliance affect access to banking services when they trigger account restrictions or Suspicious Activity Reports. Classification depends on the degree of automation in outcomes.

Typical deployers

Banks, payment institutions, crypto-asset service providers, money transmitters

Borderline considerations

Where the AI output is a risk score reviewed by a compliance analyst who makes the final decision, full high-risk obligations may not apply — but demonstrably human-controlled processes are required.

GDPR Art. 22 — The automated decision-making obligation

Credit decisions and insurance pricing based on AI are, in nearly all cases, “automated decisions with legal or similarly significant effects” under GDPR Art. 22. This creates obligations that stack on top of the AI Act — they are not satisfied by AI Act compliance alone.

What GDPR Art. 22 requires (alongside AI Act)

  • Identify a lawful basis for purely automated decision-making (usually Art. 22(2)(a): contract necessity)
  • Provide meaningful information about the logic involved, significance, and likely consequences (must be specific, not boilerplate)
  • Enable the data subject to obtain human intervention, express their view, and contest the decision
  • Ensure special category data is not used unless Art. 22(4) exception applies

Where AI Act Art. 14 and GDPR Art. 22 overlap

  • Both require human oversight / human review capability for significant automated decisions
  • Build one integrated process: AI Act human oversight procedures can be designed to also satisfy the GDPR Art. 22 'human intervention' requirement (see the sketch after this list)
  • Document the process under both instruments — auditors and DPAs will cross-reference
  • Data subjects have a right to meaningful information about the logic involved (GDPR Arts. 13–15); AI Act Art. 13 'instructions for use' must enable deployers to provide this
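
One way to operationalise this overlap is a single decision record that evidences both regimes at once. The sketch below is a hypothetical Python schema; the field names are illustrative assumptions, not terms mandated by either instrument.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical decision record evidencing both AI Act Art. 14 human
# oversight and GDPR Art. 22(3) human intervention in one place.
# Field names are illustrative assumptions, not mandated by either text.
@dataclass
class ReviewedDecision:
    subject_ref: str          # pseudonymous reference to the data subject
    system: str               # e.g. "retail credit scoring model"
    model_output: str         # e.g. "decline"
    reviewer_id: str          # who exercised oversight / intervention
    reviewer_rationale: str   # why the output was confirmed or overridden
    final_decision: str
    overridden: bool
    contest_channel: str      # how the subject can contest (Art. 22(3))
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

Keeping the reviewer's rationale and the contest channel in the same record is what lets one log satisfy auditors and DPAs who cross-reference the two instruments.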

Regulatory Overlap — AI Act + Sectoral Law

GDPR Art. 22 — Automated decisions

Credit scoring and insurance pricing decisions are typically 'automated decisions with significant effect' under GDPR Art. 22. Lenders relying on Art. 22(2)(a) (contract necessity) must still provide the data subject with meaningful information about the logic, significance, and consequences — and enable them to contest the decision. This stacks on top of AI Act Art. 13 transparency and Art. 14 human oversight obligations.

CRD VI / CRR III — Capital Requirements

The Capital Requirements Directive (CRD VI) and Capital Requirements Regulation (CRR III) require robust model risk management for credit risk models used in regulatory capital calculations. AI-based internal ratings-based (IRB) models already require supervisory approval. The AI Act adds a further layer: conformity assessment and CE marking. For IRB models classified as high-risk AI systems, banks must map AI Act obligations onto existing EBA model risk management expectations.

PSD2 / PSD3 — Payments

Strong Customer Authentication (SCA) and the transaction risk analysis (TRA) exemption under Article 18 of the SCA regulatory technical standards (Delegated Regulation (EU) 2018/389, made under PSD2) commonly rely on AI-based fraud risk scoring. Where those systems result in transaction blocking or access denial, AI Act high-risk classification may be triggered alongside PSD2 compliance obligations, subject to the fraud-detection carve-out discussed above. PSD3 is expected to preserve the same intersection.

MiFID II / MiFIR — Investment services

AI systems used for suitability assessments (determining whether a financial product is suitable for a retail investor) or for automated investment advice are not expressly listed in Annex III; whether they affect access to essential private services within point 5 must be assessed case by case. MiFID II already requires suitability documentation. Where a system is classified high-risk, the AI Act adds technical documentation (Art. 11), risk management (Art. 9), and post-market monitoring (Art. 72) on top.

Solvency II / EIOPA AI governance principles — Insurance

EIOPA's 2021 artificial intelligence governance principles for the insurance sector set supervisory expectations for explainability, fairness, and human oversight in insurance AI. The AI Act formalises and legally mandates many of those expectations. Insurers that have implemented the EIOPA principles will have a head start on AI Act compliance but will need to fill documentation and conformity assessment gaps.

Key Deployer Obligations — Art. 26–27

Most financial institutions deploying AI — even third-party AI — are “deployers” under the AI Act. Art. 26 imposes the following obligations on deployers of high-risk AI systems, and Art. 27 adds the FRIA for the Annex III point 5(b) and 5(c) categories.

Article — Obligation
Art. 26(1) — Use in accordance with the provider's instructions
Art. 26(2) — Assign competent human oversight
Art. 26(5) — Monitor operation; report risks and serious incidents
Art. 26(6) — Retain automatically generated logs (at least six months)
Art. 26(11) — Inform natural persons subject to the system
Art. 27 — FRIA for credit scoring and insurance deployers
Art. 27 — Credit scoring and insurance deployers

Fundamental Rights Impact Assessment (FRIA) — a distinctive financial services obligation

Art. 27 requires certain deployers to carry out a FRIA before first use of a high-risk AI system: bodies governed by public law, private entities providing public services, and deployers of the Annex III point 5(b) and 5(c) systems — that is, credit scoring and life and health insurance pricing. In the private sector, the obligation therefore falls chiefly on banks and insurers. The FRIA must assess the potential impact on fundamental rights — including non-discrimination, data protection, and access to services — and document how risks are mitigated. It is separate from the provider's conformity assessment and from the GDPR DPIA; all three documents may be required for a single AI credit scoring deployment.
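
Art. 27(1) lists the elements a FRIA must cover. The skeleton below restates them as a minimal Python checklist; the dict layout is an illustrative sketch, not an official template (the AI Office is expected to provide a questionnaire template under Art. 27(5)).

```python
# Skeletal FRIA covering the elements listed in Art. 27(1)(a)-(f).
# The dict layout and key names are an illustrative sketch only.
fria = {
    "deployer_processes":   "Processes in which the system will be used",
    "period_and_frequency": "Intended period and frequency of use",
    "affected_persons":     "Categories of natural persons and groups "
                            "likely to be affected",
    "specific_risks":       "Specific risks of harm to those categories",
    "human_oversight":      "Oversight measures per the instructions for use",
    "mitigation_measures":  "Measures if risks materialise, incl. internal "
                            "governance and complaint arrangements",
}

# Flag any section left empty before the system is put into use
missing = [section for section, content in fria.items() if not content.strip()]
assert not missing, f"FRIA incomplete: sections without content: {missing}"
```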

Compliance Timeline for Financial Services

2 February 2025 — Prohibitions in force

Prohibited AI practices (Art. 5) apply. Social scoring is banned outright, and real-time remote biometric identification in publicly accessible spaces is banned for law enforcement purposes subject to narrow exceptions. Most financial services AI is not in this category, but review any customer risk-scoring systems that could constitute social scoring.

2 August 2025 — GPAI obligations

General-purpose AI model obligations apply. Financial institutions using or fine-tuning foundation models as part of credit or insurance AI products must review GPAI provider transparency obligations.

2 August 2026 — Full high-risk enforcement

Full Annex III high-risk obligations apply. Credit scoring, insurance pricing, and any fraud detection or AML AI systems classified as high-risk must be compliant: risk management system, technical documentation, conformity assessment (internal or third-party), CE marking, and registration in the EU AI database (a provider obligation under Art. 49; deployers should verify it has been done).

Practical Compliance Action Plan

A phased approach for compliance teams in banks, insurers, and other financial institutions deploying AI.

Phase 1 — Inventory (now)

  1. Map every AI system in use that touches credit decisions, insurance pricing, fraud outcomes, or AML — whether built in-house or procured from vendors
  2. For each system, determine: is it an AI system under Art. 3(1)? Does its output significantly affect a natural person's access to financial services? (A triage sketch follows this list.)
  3. Flag all systems that are likely high-risk under Annex III cat. 5(b) or 5(c)
  4. For each vendor AI system, request the provider's technical documentation and instructions for use
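
To make step 2 concrete, here is a minimal triage sketch in Python. The record fields and the triage logic are illustrative assumptions, not categories prescribed by the Act, and the vendor and system names are hypothetical; real inventories will need more fields (owner, data sources, review dates) and legal sign-off.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative inventory record; field names are assumptions,
# not terminology prescribed by the AI Act.
@dataclass
class AISystemRecord:
    name: str
    vendor: Optional[str]            # None if built in-house
    use_case: str                    # e.g. "retail credit scoring"
    is_ai_system: bool               # meets the Art. 3(1) definition?
    affects_access: bool             # output significantly affects access
                                     # to financial services?
    annex_iii_point: Optional[str]   # e.g. "5(b)", "5(c)", or None

def triage(record: AISystemRecord) -> str:
    """Rough first-pass classification for the inventory phase only;
    the real determination needs legal review."""
    if not record.is_ai_system:
        return "out of scope: not an AI system under Art. 3(1)"
    if record.annex_iii_point and record.affects_access:
        return "likely high-risk: schedule a full gap assessment"
    return "borderline: document the classification rationale"

# Example: a procured credit scoring model (hypothetical names)
scoring = AISystemRecord(
    name="VendorScore v4",
    vendor="Acme Analytics",
    use_case="retail credit scoring",
    is_ai_system=True,
    affects_access=True,
    annex_iii_point="5(b)",
)
print(triage(scoring))  # -> likely high-risk: schedule a full gap assessment
```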

Phase 2 — Gap assessment (Q3–Q4 2025)

  1. For each high-risk system, assess compliance gaps against Art. 9 (risk management), Art. 10 (data governance), Art. 11 (technical documentation), Art. 13 (transparency), and Art. 14 (human oversight) (a gap-matrix sketch follows this list)
  2. Conduct the Art. 27 Fundamental Rights Impact Assessment for each in-scope credit scoring or insurance system before first use
  3. Map the GDPR Art. 22 compliance position for each automated decision system — confirm lawful basis and information notice adequacy
  4. Review existing EBA model risk management and EIOPA AI governance implementations for reusable compliance evidence
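
A simple gap matrix can keep the step 1 assessment auditable. The sketch below is a minimal illustration; the status values and the system name are assumptions, not prescribed terminology.

```python
# Hypothetical gap matrix: high-risk articles x systems. Status values
# ("compliant" / "partial" / "gap") and the system name are illustrative.
HIGH_RISK_ARTICLES = {
    "Art. 9": "risk management system",
    "Art. 10": "data and data governance",
    "Art. 11": "technical documentation",
    "Art. 13": "transparency and instructions for use",
    "Art. 14": "human oversight",
}

gap_matrix = {
    "VendorScore v4": {
        "Art. 9": "partial",    # MRM framework exists, AI Act mapping pending
        "Art. 10": "gap",       # training-data provenance undocumented
        "Art. 11": "partial",   # vendor docs received, Annex IV mapping open
        "Art. 13": "compliant",
        "Art. 14": "gap",       # override procedure not yet defined
    },
}

# List the open remediation items per system
for system, statuses in gap_matrix.items():
    for art, status in statuses.items():
        if status != "compliant":
            print(f"{system}: {art} ({HIGH_RISK_ARTICLES[art]}) -> {status}")
```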

Phase 3 — Remediation (Q4 2025 – Q2 2026)

  1. Implement or upgrade risk management systems (Art. 9) for each high-risk AI system — integrate with existing model risk management frameworks
  2. Prepare or obtain technical documentation (Annex IV) for each high-risk system
  3. Establish human oversight procedures: define who can override AI decisions, under what circumstances, and how that is logged
  4. Implement post-market monitoring and define performance metrics, drift detection thresholds, and incident escalation procedures (a drift-metric sketch follows this list)
  5. Confirm each high-risk AI system is registered in the EU AI database before the August 2026 deadline (registration is a provider obligation under Art. 49, so deployers should verify vendor registration)
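
For the drift thresholds in step 4, scoring teams commonly use the Population Stability Index (PSI). The sketch below computes PSI between a reference and a live score distribution; the 0.1 / 0.25 thresholds in the comments are conventional industry rules of thumb, not values set by the AI Act, and the simulated scores are purely illustrative.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference score distribution
    (e.g. frozen at model validation) and the live score distribution."""
    # Bin edges from the reference distribution's quantiles
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live scores
    expected_frac = np.histogram(expected, edges)[0] / len(expected)
    actual_frac = np.histogram(actual, edges)[0] / len(actual)
    # Floor the fractions so empty bins don't produce log(0)
    expected_frac = np.clip(expected_frac, 1e-6, None)
    actual_frac = np.clip(actual_frac, 1e-6, None)
    return float(np.sum((actual_frac - expected_frac)
                        * np.log(actual_frac / expected_frac)))

# Conventional rule-of-thumb thresholds (an assumption, not from the Act):
# PSI < 0.1 stable; 0.1-0.25 investigate; > 0.25 escalate per incident procedure
rng = np.random.default_rng(seed=42)
reference = rng.normal(620, 50, size=10_000)  # validation-time scores
live = rng.normal(605, 55, size=10_000)       # shifted live scores
print(f"PSI = {psi(reference, live):.3f}")
```

Whatever metric is chosen, the point for Art. 72 purposes is that the threshold, the measurement cadence, and the escalation path are defined in advance and logged.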

Phase 4 — Conformity assessment & go-live (by 2 August 2026)

  1. Complete internal conformity assessment for Annex III systems (for these financial services categories, internal control under Annex VI applies; external notified-body assessment arises only where Annex I sectoral legislation mandates it)
  2. Where the institution is the provider (e.g. systems built in-house), affix CE marking and draw up the EU Declaration of Conformity
  3. Ensure ongoing post-market monitoring, incident reporting procedures, and staff training are operational
  4. Establish a records retention policy: providers must keep technical documentation and the EU Declaration of Conformity for 10 years after the system is placed on the market (Art. 18), and deployers must retain automatically generated logs for at least six months or per applicable financial services law (Art. 26(6)) (a schedule sketch follows this list)
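
A retention schedule can encode the provider/deployer split from step 4. The mapping below is a minimal sketch; the durations reflect Art. 18 and Art. 26(6), while the record-type keys are illustrative assumptions.

```python
# Illustrative retention schedule. Durations reflect Art. 18 (provider
# documentation, 10 years after placing on the market) and Art. 26(6)
# (deployer logs, at least six months). Financial institutions may
# instead keep logs under the documentation rules of applicable
# financial services law.
RETENTION_SCHEDULE = {
    "technical_documentation":      {"role": "provider", "min_years": 10},
    "eu_declaration_of_conformity": {"role": "provider", "min_years": 10},
    "system_logs":                  {"role": "deployer", "min_months": 6},
}

for record_type, rule in RETENTION_SCHEDULE.items():
    print(f"{record_type}: {rule}")
```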

Build your financial services compliance checklist

Generate a tailored checklist covering Annex III cat. 5, Art. 26 deployer obligations, and the FRIA requirement.
