Omnibus update (7 May 2026): Annex III high-risk provisionally shifted from 2 Aug 2026 to 2 Dec 2027. Art. 50 transparency (chatbots, deepfakes, emotion, biometric) still 2 Aug 2026. Art. 4 / Art. 5 / GPAI live now.
Why this matters

Art. 14 requires providers to design high-risk AI systems so that natural persons can effectively oversee them. The oversight mechanism must be documented, trained for, and evidenced — this is frequently the weakest point in Annex VII assessments.

How to use it
  1. Identify who is designated as the human oversight person(s)
  2. Document their qualifications, authority, and training received
  3. Describe technical mechanisms enabling intervention or halt
  4. Record how the system displays confidence levels and uncertainty
  5. Define escalation procedures and override capabilities
  6. Export the document as part of your Annex IV technical file
Legal basis
  • Art. 14
  • Art. 14(4)
  • Annex IV §5

Human Oversight Design Document

Required under Art. 14 EU AI Act. Documents the human oversight mechanisms built into the system and the procedures for oversight persons.

Art. 14(5) — Annex III(1)(a) biometric identification

Does this system perform biometric identification within the scope of Annex III(1)(a)? Selecting "Yes" reveals the Art. 14(5) dual-confirmation field, which is mandatory for those systems — no action or decision may be taken by the deployer on the basis of the system's output unless that output is separately verified and confirmed by at least two competent natural persons.
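The dual-confirmation rule can be sketched as a simple gate. This is an illustrative Python sketch only — the class and reviewer names are hypothetical, not part of this tool or of the Regulation's text:

```python
from dataclasses import dataclass, field

@dataclass
class BiometricMatch:
    """A biometric identification output awaiting Art. 14(5) verification."""
    subject_ref: str
    confirmations: set = field(default_factory=set)

    def confirm(self, reviewer: str) -> None:
        """Record a separate verification by one competent natural person."""
        self.confirmations.add(reviewer)

    def actionable(self) -> bool:
        # Art. 14(5): no action or decision on the basis of the output
        # unless separately verified and confirmed by at least two
        # competent natural persons.
        return len(self.confirmations) >= 2

match = BiometricMatch("case-0042")
match.confirm("reviewer.a")
assert not match.actionable()   # one confirmation is not enough
match.confirm("reviewer.a")     # the same reviewer cannot count twice
assert not match.actionable()
match.confirm("reviewer.b")
assert match.actionable()       # two distinct reviewers: output may be acted on
```

Using a set of distinct reviewer identities (rather than a counter) is what makes the "at least two persons" condition robust against the same reviewer confirming twice.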


Designated Oversight Persons

Art. 14(1)

Qualifications & Competence

Art. 14(4)(a)

Training Programme

Art. 14(4)(a)

Understanding of Capabilities & Limitations

Art. 14(4)(b)

Bias & Risk Awareness

Art. 14(4)(b)

Automation-Bias Avoidance Training Programme

Art. 14(4)(b)

Evidence requirement: programme name, version, owner, and recertification cadence (not just a description).

Technical Oversight Mechanisms

Art. 14(4)(c)

Intervention & Override Capability

Art. 14(4)(d)

Procedures to Disregard Outputs — with Quantitative Triggers

Art. 14(4)(e)

Evidence requirement: at least one numeric threshold (confidence, anomaly score, or performance gate).
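A quantitative trigger of the kind this field expects can be expressed as a simple gate. The threshold values below are placeholder examples — each provider must set and justify its own per Art. 14(4)(e):

```python
# Example quantitative triggers — illustrative values, set per system.
CONFIDENCE_FLOOR = 0.85   # below this, the output is not acted on
ANOMALY_CEILING = 3.0     # above this anomaly score, route to a human

def must_disregard(confidence: float, anomaly_score: float) -> bool:
    """Return True when the oversight person must disregard the output
    and escalate, rather than let the deployer act on it."""
    return confidence < CONFIDENCE_FLOOR or anomaly_score > ANOMALY_CEILING

assert must_disregard(0.60, 1.0)        # low confidence: disregard
assert must_disregard(0.99, 5.0)        # anomalous input: disregard
assert not must_disregard(0.95, 1.0)    # within thresholds: may proceed
```

Documenting the trigger as explicit named constants (with their rationale) is what turns a vague "outputs are reviewed when uncertain" into the auditable numeric threshold this field requires.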

Ongoing Monitoring Procedures

Art. 14(5)


Important Legal Disclaimer

This tool is a self-assessment aid only and does not constitute legal advice, a formally certified compliance assessment, or an independently audited report.

Outputs — including reports, scores, checklists, and generated documents — are for internal use and should be reviewed by a qualified legal representative or independent AI compliance auditor before being relied upon for regulatory, procurement, or public-disclosure purposes.

This tool does not replace a notified body conformity assessment where one is required under Art. 43(1) of the EU AI Act (e.g. biometric identification systems for law enforcement).

All assessment risk lies with the user. AIAuditRef, its developers, and staff accept no liability for losses arising from use of or reliance on these outputs. Always verify against official sources: the EU AI Act (Regulation 2024/1689) and your national enforcement authority.