Omnibus update (7 May 2026): Annex III high-risk provisionally shifted from 2 Aug 2026 to 2 Dec 2027. Art. 50 transparency (chatbots, deepfakes, emotion, biometric) still 2 Aug 2026. Art. 4 / Art. 5 / GPAI live now.
Why this matters

Art. 15 requires documented evidence that the system achieves an appropriate level of accuracy throughout its lifecycle, is resilient to errors and adversarial inputs, and is cybersecure. This evidence forms part of the Annex IV technical documentation and is reviewed by notified bodies during conformity assessment.

How to use it
  1. Record accuracy metrics with threshold values and test conditions
  2. Document robustness testing results (stress testing, edge cases)
  3. Record adversarial testing results and countermeasures
  4. Document cybersecurity measures (authentication, encryption, access controls)
  5. Specify monitoring procedures for accuracy degradation over time
  6. Export as legal-grade evidence of Art. 15 compliance
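The steps above amount to building a structured evidence record. A minimal sketch of such a record in Python, assuming a hypothetical schema (the field names here are illustrative, not the tool's actual export format):

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AccuracyMetric:
    """One accuracy metric with its declared threshold and test conditions."""
    name: str             # e.g. "F1 score"
    value: float          # measured value
    threshold: float      # declared minimum acceptable value
    test_conditions: str  # dataset, sample size, environment

@dataclass
class Art15Record:
    """Illustrative Art. 15 evidence record covering the six steps above."""
    system_name: str
    version: str
    tested_by: str
    accuracy_metrics: list[AccuracyMetric] = field(default_factory=list)
    robustness_results: list[str] = field(default_factory=list)
    adversarial_results: list[str] = field(default_factory=list)
    cybersecurity_measures: list[str] = field(default_factory=list)
    monitoring_procedures: list[str] = field(default_factory=list)

    def export_json(self) -> str:
        """Serialise the record for inclusion in the Annex IV technical file."""
        return json.dumps(asdict(self), indent=2)

record = Art15Record(
    system_name="Example credit-scoring model",
    version="2.3.1",
    tested_by="QA team",
)
record.accuracy_metrics.append(
    AccuracyMetric("F1 score", 0.91, 0.85, "held-out test set, n=10,000")
)
print(record.export_json())
```

Keeping thresholds next to measured values makes degradation over time (step 5) directly checkable against the declared baseline.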
Legal basis
  • Art. 15
  • Art. 15(3)
  • Annex IV §8

Accuracy, Robustness & Cybersecurity Record

Required under Art. 15 of the EU AI Act for all providers of high-risk AI systems.

Accuracy Metrics

Art. 15(1)

Robustness Testing Results

Art. 15(3)

Foreseeable Misuse Testing

Art. 15(3) read with Art. 9(2)(b)

(a) Training-data poisoning resilience

Recital 76 / Art. 15(5)

(b) Model poisoning / supply-chain integrity

Recital 76 / Art. 15(5)

(c) Adversarial / evasion attacks (incl. prompt injection)

Recital 76 / Art. 15(5)

(d) Confidentiality attacks (model inversion / membership inference)

Recital 76 / Art. 15(5)

Fallback & Error-Handling Mechanisms

Art. 15(4) — fail-safe plans for each foreseeable failure mode

For each foreseeable failure mode, document the planned fallback behaviour, the recovery protocol, who is notified and on what timeframe, and whether the plan has been tested. Minimum one row required to export.
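The "minimum one row" rule and the per-row fields can be sketched as a simple pre-export check. This is a hypothetical validation, assuming illustrative field names rather than the tool's actual ones:

```python
# Fields the text requires per failure mode: fallback behaviour, recovery
# protocol, who is notified, timeframe, and whether the plan was tested.
REQUIRED_KEYS = (
    "failure_mode", "fallback_behaviour", "recovery_protocol",
    "notified_party", "notification_timeframe", "tested",
)

def validate_failure_modes(rows: list[dict]) -> list[str]:
    """Return a list of problems; an empty list means the table is exportable."""
    problems = []
    if not rows:
        problems.append("at least one failure-mode row is required to export")
    for i, row in enumerate(rows, start=1):
        for key in REQUIRED_KEYS:
            if key not in row or row[key] in ("", None):
                problems.append(f"row {i}: missing '{key}'")
    return problems

rows = [{
    "failure_mode": "upstream feature service unavailable",
    "fallback_behaviour": "route decision to human reviewer",
    "recovery_protocol": "retry with backoff, then fail over to cached model",
    "notified_party": "on-call ML engineer",
    "notification_timeframe": "within 15 minutes",
    "tested": True,
}]
print(validate_failure_modes(rows))  # [] -> table is complete and exportable
```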

Failure mode #1

General Cybersecurity Measures

Art. 15(5)

Ongoing Monitoring Plan

Art. 72
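A monitoring plan for accuracy degradation can be as simple as comparing a rolling accuracy figure against the threshold declared in the accuracy-metrics section. A minimal sketch, assuming a hypothetical per-prediction correctness signal:

```python
from collections import deque

class AccuracyMonitor:
    """Flag when rolling accuracy drops below the declared Art. 15 threshold."""

    def __init__(self, threshold: float, window: int = 100):
        self.threshold = threshold
        # Each entry is 1 (correct) or 0 (incorrect); old entries roll off.
        self.outcomes = deque(maxlen=window)

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def degraded(self) -> bool:
        """True once a full window of outcomes falls below the threshold."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough data for a stable estimate yet
        return sum(self.outcomes) / len(self.outcomes) < self.threshold

monitor = AccuracyMonitor(threshold=0.85, window=10)
for correct in [True] * 8 + [False] * 2:  # rolling accuracy = 0.80
    monitor.record(correct)
print(monitor.degraded())  # True, since 0.80 < 0.85
```

Logging each `degraded()` transition with a timestamp gives the over-time evidence trail the monitoring plan calls for.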


Important Legal Disclaimer

This tool is a self-assessment aid only and does not constitute legal advice, a formally certified compliance assessment, or an independently audited report.

Outputs — including reports, scores, checklists, and generated documents — are for internal use and should be reviewed by a qualified legal representative or independent AI compliance auditor before being relied upon for regulatory, procurement, or public-disclosure purposes.

This tool does not replace a notified body conformity assessment where one is required under Art. 43(1) of the EU AI Act (e.g. biometric identification systems for law enforcement).

All assessment risk lies with the user. AIAuditRef, its developers, and staff accept no liability for losses arising from use of or reliance on these outputs. Always verify against official sources: the EU AI Act (Regulation 2024/1689) and your national enforcement authority.