Free Tool

NIST AI RMF Self-Assessment

Evaluate your organisation's AI risk management maturity across the four NIST AI Risk Management Framework functions. Score each practice on a 1–5 maturity scale and identify your highest-priority gaps.

About NIST AI RMF: The NIST AI Risk Management Framework (AI RMF 1.0, published January 2023) provides voluntary guidance for managing risks in AI systems across their full lifecycle. It is widely adopted by US federal agencies and international organisations, and complements EU AI Act compliance by addressing risk management practices that the regulation requires but does not prescribe in detail.

GOVERN

Cultivate a culture of AI risk management across the organisation.

MAP

Establish the context for AI risk by categorising AI systems, identifying stakeholders, and understanding the environment in which AI operates.

MEASURE

Analyse and assess AI risks using quantitative and qualitative methods.

MANAGE

Respond to, prioritise, and treat identified AI risks.


Cultivate a culture of AI risk management across the organisation. Establish policies, accountability structures, and workforce capabilities to govern AI development and deployment responsibly.

GV-1: Policies & Accountability

Organisational policies, roles, and accountability structures for AI risk management.

GV-1.1

AI risk management policies are documented and approved by senior leadership.

Art. 9
GV-1.2

Roles and responsibilities for AI risk management are clearly assigned.

Art. 26
GV-1.3

AI governance is integrated into existing organisational governance structures.

Art. 17
GV-1.4

Senior leadership is briefed on AI risks and approves risk tolerance levels.

GV-2: Culture & Workforce

Building AI risk awareness and capability across the organisation.

GV-2.1

Staff involved in AI development and deployment receive AI risk training.

Art. 4
GV-2.2

There are clear processes for staff to raise AI-related concerns without retaliation.

GV-2.3

The organisation engages diverse stakeholders when designing and evaluating AI systems.

GV-3: Third-Party & Supply Chain

Managing AI risk from vendors, partners, and supply chain participants.

GV-3.1

AI vendor and supplier risk is assessed before procurement and periodically thereafter.

Art. 26(6)
GV-3.2

Contracts with AI vendors include provisions for risk management, audit rights, and incident notification.

Important Legal Disclaimer

This tool is a self-assessment aid only and does not constitute legal advice, a formally certified compliance assessment, or an independently audited report.

Outputs — including reports, scores, checklists, and generated documents — are for internal use and should be reviewed by a qualified legal representative or independent AI compliance auditor before being relied upon for regulatory, procurement, or public-disclosure purposes.

This tool does not replace a notified body conformity assessment where one is required under Art. 43(1) of the EU AI Act (e.g. biometric identification systems for law enforcement).

All assessment risk lies with the user. AIAuditRef, its developers, and staff accept no liability for losses arising from use of or reliance on these outputs. Always verify against official sources: the EU AI Act (Regulation 2024/1689) and your national enforcement authority.