Law Enforcement & Public Safety
Law enforcement AI sits at the intersection of some of the AI Act's strictest provisions: several uses are outright prohibited, most operational AI is high-risk, and fundamental rights safeguards are mandatory. This guide covers the key obligations for AI used by police, prosecution authorities, border control, and criminal justice bodies.
High-stakes sector — read prohibitions first
The AI Act contains specific prohibitions that apply exclusively or primarily to law enforcement contexts. Deploying prohibited systems carries fines of up to €35,000,000 or 7% of total worldwide annual turnover, whichever is higher. Before building or procuring any law enforcement AI, verify it does not fall within Art. 5.
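To make that verification step concrete, a procurement intake process can force an explicit Art. 5 screen before any build-or-buy decision. The sketch below is purely illustrative: the `Art5Flag` category names and the `screen_use_case` helper are our inventions for a hypothetical internal inventory tool, not statutory text or a legal determination.

```python
# Hypothetical pre-procurement screen: forces an explicit Art. 5 check before
# any build-or-buy decision. Category names are illustrative, not statutory text.
from enum import Enum, auto

class Art5Flag(Enum):
    REALTIME_REMOTE_BIOMETRIC_ID = auto()     # real-time RBI in publicly accessible spaces
    PROFILING_ONLY_CRIME_PREDICTION = auto()  # offending risk based solely on profiling
    EMOTION_INFERENCE_WORKPLACE_EDU = auto()  # emotion inference in workplace/education

def screen_use_case(name: str, flags: set[Art5Flag]) -> str:
    """Return a routing decision for a proposed AI use case."""
    if flags:
        hits = ", ".join(f.name for f in sorted(flags, key=lambda f: f.value))
        return f"STOP ({name}): possible Art. 5 practice ({hits}); escalate to legal before any work."
    return f"CONTINUE ({name}): no Art. 5 flag raised; classify against Annex III next."

print(screen_use_case("Live CCTV face matching", {Art5Flag.REALTIME_REMOTE_BIOMETRIC_ID}))
print(screen_use_case("Case-file pattern search", set()))
```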
Prohibited AI practices (Art. 5)
The following practices are prohibited across the EU from February 2, 2025. Providers and deployers must immediately discontinue any system falling within these categories.
Real-time remote biometric identification in public spaces
Law enforcement use of AI for real-time remote biometric identification of natural persons in publicly accessible spaces is prohibited. Three narrow exceptions exist: targeted searches for specific victims, including missing persons; prevention of a specific, substantial and imminent threat to life or physical safety, or of a genuine and present or foreseeable terrorist attack; and localisation or identification of suspected perpetrators of the serious offences listed in Annex II. Each exception requires prior authorisation by a judicial authority or an independent administrative authority; in duly justified urgent cases, use may begin before authorisation, but the authorisation must then be requested without undue delay and at the latest within 24 hours.
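Operationally, a deployment could refuse to start a real-time RBI session unless a current authorisation record is on file. A minimal sketch, assuming hypothetical field names (`RbiAuthorisation` and `may_activate` are our inventions, not a description of any real system):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RbiAuthorisation:
    """Hypothetical record of a judicial or administrative authorisation."""
    issuing_authority: str
    case_reference: str
    valid_from: datetime
    valid_until: datetime
    urgent_pending: bool = False  # urgent use started before authorisation arrived

def may_activate(auth: RbiAuthorisation | None, now: datetime) -> tuple[bool, str]:
    """Gate activation of a real-time RBI session on an authorisation record."""
    if auth is None:
        return False, "Blocked: no authorisation on file."
    if auth.urgent_pending:
        # Urgent use still requires authorisation within 24 hours; surface
        # that obligation instead of silently allowing the session.
        return True, "Allowed provisionally: urgent case; authorisation request is pending follow-up."
    if auth.valid_from <= now <= auth.valid_until:
        return True, f"Allowed: authorised by {auth.issuing_authority} ({auth.case_reference})."
    return False, "Blocked: authorisation expired or not yet valid."

print(may_activate(None, datetime(2025, 6, 1)))  # (False, 'Blocked: no authorisation on file.')
```

Modelling the urgent case as "provisionally allowed with a pending follow-up" keeps the 24-hour authorisation duty visible in the system rather than buried in policy.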
Post-remote biometric identification (retrospective searches)
Post-remote biometric identification (retrospectively searching databases of biometric data) is not an Art. 5 prohibition but a high-risk use subject to strict deployer conditions under Art. 26(10): law enforcement may use it only for the targeted search of a person suspected or convicted of a criminal offence, and must obtain judicial or administrative authorisation, requested ex ante or no later than 48 hours after use, in line with national law.
Individual criminal risk assessment based on profiling
AI systems that assess the risk of an individual committing a criminal offence based solely on profiling or on assessing their personality traits and characteristics are prohibited. This targets actuarial tools that predict future offending from demographic or personal profiles rather than actual behaviour. The prohibition does not apply where the AI system merely supports a human assessment that is already based on objective, verifiable facts directly linked to a criminal activity.
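One way engineering teams could surface this risk early is to audit a model's input features: if every input is a demographic or personality proxy and none is verifiable conduct evidence, the tool sits squarely in the territory this prohibition targets. The feature taxonomy below is invented for illustration; real categorisation needs legal and domain review, not string matching.

```python
# Invented taxonomy for illustration only; real feature categorisation
# requires legal and domain review.
PROFILING_TRAIT_FEATURES = {"age", "gender", "nationality", "postcode", "employment_status"}
OBJECTIVE_CONDUCT_FEATURES = {"verified_prior_conviction", "forensic_link_to_offence"}

def audit_model_inputs(features: set[str]) -> str:
    traits = features & PROFILING_TRAIT_FEATURES
    conduct = features & OBJECTIVE_CONDUCT_FEATURES
    if traits and not conduct:
        return "FAIL: predictions would rest solely on profiling traits; likely Art. 5 territory."
    if traits:
        return "REVIEW: mixed inputs; confirm the system only supports human assessment of objective facts."
    return "PASS (this check only): no profiling-trait inputs detected."

print(audit_model_inputs({"age", "postcode"}))  # FAIL: profiling traits with no conduct evidence
```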
Emotion recognition in law enforcement contexts
The Art. 5 prohibition on AI systems that infer emotions is limited to workplaces and education institutions (except for medical or safety reasons); it is not a blanket ban on emotion inference in policing. Law enforcement tools that analyse facial expressions, voice patterns, or physiological signals to infer emotional states or truthfulness during interviews, interrogations, or surveillance are instead classified as high-risk under Annex III, point 6 (see below).
High-risk AI systems (Annex III, point 6)
AI systems intended for use by law enforcement authorities that are not prohibited are generally classified as high-risk under Annex III, point 6. The full set of high-risk obligations then applies: the requirements of Art. 8–15 (risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy and robustness), plus provider duties under Art. 16 and deployer duties under Art. 26.
Individual risk assessment for law enforcement
AI systems used by law enforcement to assess the risk of an individual becoming the victim of a criminal offence. This is distinct from the prohibited practice of predicting that an individual will commit an offence based solely on profiling.
Lie detection and similar tools
AI systems used for polygraph-type assessments and related tools intended to detect the emotional state or truthfulness of individuals during law enforcement questioning.
Crime analytics and prediction
AI systems for detecting, recognising, or identifying persons using biometric data in ways not covered by the prohibition, and for crime analytics (assessing the likelihood of offences occurring in particular locations or at particular times).
Criminal investigation support
AI systems that process criminal justice data to support crime investigation, including systems for analysing complex data sets to generate leads or detect patterns.
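For inventory purposes, the categories above can be captured as a simple lookup that defaults to treating unmatched law enforcement uses as high-risk pending review, since point 6 casts a wide net. The shorthand tags below are our own labels, not Annex III wording.

```python
# Shorthand tags mapped to the Annex III point 6 categories described above;
# our own labels for an internal inventory, not statutory text.
ANNEX_III_POINT_6 = {
    "victim_risk_assessment": "point 6: victim risk assessment (high-risk)",
    "truthfulness_or_polygraph_tool": "point 6: lie detection and similar tools (high-risk)",
    "crime_analytics_or_biometric_detection": "point 6: crime analytics and prediction (high-risk)",
    "investigation_pattern_analysis": "point 6: criminal investigation support (high-risk)",
}

def classify_le_use(tag: str) -> str:
    """Default conservatively: unmatched law enforcement uses go to review, not 'minimal risk'."""
    return ANNEX_III_POINT_6.get(tag, "UNMATCHED: treat as high-risk pending legal classification")

print(classify_le_use("truthfulness_or_polygraph_tool"))
print(classify_le_use("novel_surveillance_idea"))
```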
Justice and democracy (Annex III, point 8)
AI systems intended to assist judicial authorities in researching and interpreting facts and the law, and in applying the law to a concrete set of facts, are also high-risk under Annex III, point 8. This covers AI tools used by courts and prosecutors to research case law, interpret facts, or support sentencing decisions.
Key obligations for law enforcement AI
Fundamental Rights Impact Assessment (FRIA)
Deployers of high-risk AI systems in law enforcement must conduct a Fundamental Rights Impact Assessment (Art. 27) before first use. The FRIA is especially consequential for law enforcement given the severity of potential impacts on liberty, privacy, and non-discrimination, and deployers must notify the market surveillance authority of its results.
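Art. 27 enumerates what the assessment must describe, and a structured record makes it harder to file an incomplete one. The dataclass below paraphrases those elements under field names of our own choosing; it is a sketch of a record-keeping aid, not a template mandated by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class FriaRecord:
    """Fields paraphrase the Art. 27 FRIA elements; names are ours, not the Act's."""
    deployment_process: str = ""           # processes in which the system will be used
    period_and_frequency: str = ""         # intended period and frequency of use
    affected_groups: list[str] = field(default_factory=list)    # persons/groups likely affected
    risks_of_harm: list[str] = field(default_factory=list)      # specific risks to those groups
    oversight_measures: list[str] = field(default_factory=list)
    mitigation_and_complaints: list[str] = field(default_factory=list)

    def missing_elements(self) -> list[str]:
        """List empty elements so the assessment cannot be quietly filed half-done."""
        return [name for name, value in vars(self).items() if not value]

record = FriaRecord(deployment_process="Custody risk triage pilot")
print(record.missing_elements())  # everything still to be completed before first use
```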
Human oversight is non-negotiable
Human oversight for law enforcement AI must be especially robust. High-risk systems must be overseen by natural persons with the competence, training, and authority to intervene in or override the system (Art. 14, Art. 26). For biometric identification systems, Art. 14(5) goes further: no action may be taken on the basis of an identification unless it has been separately verified by at least two natural persons, subject to narrow exceptions. In law enforcement generally, AI outputs should never automatically trigger enforcement action without meaningful human review.
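In code, this can be enforced as a hard gate: the action path simply does not exist without a recorded human decision. A minimal sketch with invented types (`AiRecommendation`, `HumanDecision`, `authorise_action` are illustrative names):

```python
from dataclasses import dataclass

@dataclass
class AiRecommendation:
    subject_ref: str
    action: str          # e.g. "flag for interview"
    confidence: float

@dataclass
class HumanDecision:
    reviewer_id: str
    approved: bool
    rationale: str       # a documented reason, not a rubber stamp

def authorise_action(rec: AiRecommendation, decision: HumanDecision | None) -> bool:
    """No enforcement action proceeds on the AI output alone."""
    if decision is None:
        raise PermissionError(f"Blocked: no human review recorded for {rec.subject_ref}.")
    if not decision.rationale.strip():
        raise PermissionError("Blocked: reviewer rationale is required.")
    return decision.approved

rec = AiRecommendation("case-4411", "flag for interview", 0.91)
print(authorise_action(rec, HumanDecision("officer-17", True, "Corroborated by witness statement.")))
```

Requiring a non-empty rationale is a deliberate design choice: it turns "a human clicked approve" into a reviewable record.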
Logging and record retention
High-risk law enforcement AI systems must automatically record event logs throughout their lifetime (Art. 12). Deployers must retain the logs under their control for at least six months, or longer where required by other Union or national law (Art. 26). For criminal justice systems, retention may also need to align with criminal procedure rules.
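A retention policy is easiest to honour when the purge job itself enforces the floor. A minimal sketch, assuming a six-month minimum that national rules may lengthen but never shorten (the function and constant names are ours):

```python
from datetime import datetime, timedelta, timezone

SIX_MONTHS = timedelta(days=183)  # floor for "at least six months"

def may_purge(log_created_at: datetime, now: datetime,
              domain_floor: timedelta = SIX_MONTHS) -> bool:
    """Never delete a log younger than the applicable retention floor."""
    floor = max(SIX_MONTHS, domain_floor)  # national rules can lengthen, never shorten
    return (now - log_created_at) >= floor

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
print(may_purge(datetime(2025, 12, 1, tzinfo=timezone.utc), now))  # False: only ~3 months old
print(may_purge(datetime(2025, 6, 1, tzinfo=timezone.utc), now))   # True: past the floor
```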
Data governance and bias assessment
Training, validation, and testing data for law enforcement AI must be examined for possible biases (Art. 10), particularly racial bias, gender bias, and socioeconomic profiling. Datasets must be relevant and sufficiently representative of the populations the system will encounter, and bias audits should be independent of the development team.
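A first-pass quantitative check is to compare selection (flag) rates across groups in a retrospective sample: a ratio far below 1.0 between the least- and most-flagged groups is a signal for deeper, independent auditing, not a verdict. The metric choice and function names below are ours, sketched for illustration.

```python
from collections import Counter

def selection_rates(samples: list[tuple[str, bool]]) -> dict[str, float]:
    """samples: (group_label, was_flagged) pairs from a retrospective audit set."""
    totals: Counter = Counter()
    flagged: Counter = Counter()
    for group, was_flagged in samples:
        totals[group] += 1
        flagged[group] += int(was_flagged)
    return {g: flagged[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Ratio of lowest to highest group selection rate; closer to 1.0 is more balanced."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

rates = selection_rates([("A", True), ("A", False), ("B", True), ("B", True)])
print(rates, disparate_impact_ratio(rates))  # {'A': 0.5, 'B': 1.0} 0.5
```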
Transparency toward individuals
Where individuals are subject to decisions made or materially informed by a high-risk AI system (e.g., risk scoring), deployers must inform them that such a system is being used (Art. 26), and affected persons have a right to a clear, meaningful explanation of the system's role in the decision (Art. 86). This complements GDPR Art. 22 protections on automated decision-making.
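At the interface level, a deployer can generate a standing notice whenever a decision record involves a high-risk system. The template below illustrates the kind of plain-language content involved; the legally required content comes from the Act and the GDPR, not from this sketch, and all names are hypothetical.

```python
def decision_notice(system_name: str, decision: str, main_factors: list[str]) -> str:
    """Illustrative plain-language notice; legal content requirements come from the Act/GDPR."""
    factors = "\n".join(f"  - {f}" for f in main_factors)
    return (
        f"An AI system ({system_name}) contributed to this decision: {decision}.\n"
        f"Main factors the system considered:\n{factors}\n"
        "You may request a meaningful explanation of the system's role and a human review."
    )

print(decision_notice("RiskTriage v2 (hypothetical)", "referred for secondary check",
                      ["prior incident reports", "travel pattern anomaly"]))
```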
Border management and migration (Annex III, point 7)
AI systems used in border management form a separate Annex III category but are often relevant to law enforcement authorities:
- Annex III, 7(a): AI systems used for risk assessment of individuals at borders (irregular migration, security risks)
- Annex III, 7(b): AI for detecting, identifying, or verifying persons at borders, including forgery detection
- Annex III, 7(c): AI used in the examination of asylum and refugee applications and related border decisions
This guide is for informational purposes only and is not legal advice. Law enforcement AI is a particularly sensitive area — consult a qualified EU AI Act specialist.