AI Risk Categories
The AI Act is built on a risk-based approach that classifies AI systems into four categories: unacceptable, high, limited and minimal risk. The categorisation determines which requirements apply to your AI system.
The risk-based approach
The AI Act does not regulate all AI systems in the same way. Instead, it uses a risk-based approach: the higher the risk an AI system poses to people’s health, safety or fundamental rights, the stricter the requirements it must meet.
The approach is proportionate. An AI system that sorts e-mails has virtually no requirements, whilst an AI system that assesses credit applications or assists with medical diagnoses must meet strict rules. This ensures that innovation is not unnecessarily stifled, whilst citizens are protected against the riskiest uses of AI.
The principle is familiar from other EU regulation. GDPR uses a similar approach, where high-risk processing of personal data requires a data protection impact assessment. The AI Act takes this logic further with four clearly defined levels.
The four risk levels
1. Unacceptable risk (prohibited)
Prohibited AI practices are AI applications that pose such a serious threat that they must not be used in the EU at all. This includes social scoring of citizens, AI that exploits the vulnerabilities of specific groups, and real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions). The prohibitions have applied since February 2025.
2. High risk
High-risk AI systems are used in critical areas such as healthcare, education, employment, law enforcement and critical infrastructure. They are not prohibited, but must meet comprehensive requirements for risk management, technical documentation, data governance, transparency, human oversight and cybersecurity. The provider must carry out a conformity assessment and register the system in the EU database. The requirements apply from August 2026.
3. Limited risk
AI systems with limited risk have transparency obligations. This covers three types:
- AI interacting with humans: Chatbots and virtual assistants must inform the user that they are communicating with an AI system (unless it is obvious).
- AI generating synthetic content: Deepfakes and AI-generated images, audio and video must be labelled as AI-generated in a machine-readable format.
- Emotion recognition systems: Systems using emotion recognition or biometric categorisation (outside the prohibited uses) must inform the persons concerned.
4. Minimal risk
The vast majority of AI systems fall into this category. Spam filters, AI in computer games, AI-powered autocorrect and recommendation systems for music and films are all examples. The AI Act imposes no specific requirements on these systems. The European Commission encourages voluntary codes of conduct, but this is not a legal requirement.
How to classify your AI system
Start by mapping all AI systems in your organisation. For each system, ask four questions:
- Is it prohibited? Check the system against the list of prohibited AI practices in Article 5. If so, usage must be stopped immediately.
- Is it high-risk? Check whether the system is a safety component in a regulated product (Annex I) or is used in one of the eight specific areas in Annex III (biometrics, critical infrastructure, education, employment, public services, law enforcement, migration, administration of justice).
- Does it have transparency requirements? Does the system interact directly with humans, generate synthetic content, or use emotion recognition? Then it has transparency obligations.
- None of the above? The system is minimal risk and has no specific AI Act requirements.
Document the classification and rationale for each system. This gives you an overview of your organisation’s overall AI risk profile and is the starting point for planning your compliance work.
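The four-question triage above can be sketched as a simple decision function. This is an illustrative sketch only: the boolean flags (`prohibited_practice`, `annex_iii_area`, and so on) are hypothetical inputs standing in for the legal analysis of Article 5 and Annexes I and III that a real classification requires.

```python
from dataclasses import dataclass

# Hypothetical per-system answers to the four questions. In practice each
# flag is the outcome of a legal assessment, not a property you can look up.
@dataclass
class AISystem:
    name: str
    prohibited_practice: bool = False        # matches an Article 5 prohibition?
    annex_i_safety_component: bool = False   # safety component in an Annex I product?
    annex_iii_area: bool = False             # used in one of the eight Annex III areas?
    interacts_with_humans: bool = False      # chatbot, virtual assistant
    generates_synthetic_content: bool = False  # deepfakes, generated media
    emotion_recognition: bool = False        # emotion recognition / biometric categorisation

def classify(system: AISystem) -> str:
    """Apply the four questions in order; the first match decides the category."""
    if system.prohibited_practice:
        return "unacceptable"
    if system.annex_i_safety_component or system.annex_iii_area:
        return "high"
    if (system.interacts_with_humans
            or system.generates_synthetic_content
            or system.emotion_recognition):
        return "limited"
    return "minimal"

# Example: map a small inventory and record the result per system.
inventory = [
    AISystem("CV screening tool", annex_iii_area=True),
    AISystem("Customer chatbot", interacts_with_humans=True),
    AISystem("Spam filter"),
]
for s in inventory:
    print(f"{s.name}: {classify(s)}")
```

Note that the order of the checks matters: a system that is both high-risk and interacts with humans is classified as high-risk, and the transparency obligations then apply on top of the high-risk requirements.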
Cross-cutting requirements
Regardless of risk category, two requirements apply to all organisations that use AI:
AI literacy: Organisations must ensure that staff and other persons operating or using AI systems on their behalf have a sufficient understanding of AI to use it responsibly. The requirement applies from February 2025.
Other legislation: The AI Act does not replace existing legislation. GDPR still applies to all processing of personal data. Sector-specific legislation applies in parallel. An AI system with minimal risk under the AI Act may well have extensive requirements under other regulatory frameworks.
The risk-based approach makes it possible to prioritise resources correctly. Start by ensuring you are not using prohibited practices, then identify your high-risk systems, and finally ensure that transparency requirements are met.
Frequently Asked Questions about AI Risk Categories
What are the four risk categories in the AI Act?
The four categories are: unacceptable risk (prohibited), high risk (strict requirements), limited risk (transparency requirements) and minimal risk (no specific requirements). The categorisation determines which obligations apply to your AI system.
How do I find out which risk category my AI system belongs to?
First check whether the system falls under the prohibitions in Article 5. Then whether it is used in one of the eight high-risk areas in Annex III or is a safety component in a product under Annex I. If neither applies, it is either limited or minimal risk.
What is limited risk in the AI Act?
Limited risk covers AI systems with transparency obligations. This includes chatbots that must inform the user they are speaking with an AI, and systems generating deepfakes that must be labelled.
Do AI systems with minimal risk have any requirements?
The AI Act imposes no specific requirements on AI systems with minimal risk. However, the AI literacy requirement applies to all organisations using AI, regardless of risk level. Other laws such as GDPR may still impose requirements.
Related Terms
AI Act
The EU's comprehensive regulation on artificial intelligence, classifying AI systems by risk level and imposing requirements from development to deployment.
High-Risk AI System
An AI system used in critical areas that must meet strict requirements for safety, transparency and human oversight under the AI Act.
Prohibited AI Practices
AI systems and applications entirely banned under the EU AI Act due to the unacceptable risk they pose to fundamental rights.