High-Risk AI System
A high-risk AI system is an AI system used in critical areas that can affect people's health, safety or fundamental rights. The AI Act imposes strict requirements on these systems, including documentation, testing and human oversight.
What is a high-risk AI system?
A high-risk AI system is an AI system used in areas where errors or misuse can have serious consequences for people. It is the second-highest risk category in the AI Act, just below prohibited AI practices.
High-risk AI systems are not prohibited. They may be used, but only if they meet a range of strict requirements for quality, safety and transparency. The purpose is to ensure that AI in critical decisions is reliable and can be controlled by humans.
Examples of high-risk AI systems include AI for credit scoring, AI-based recruitment, medical diagnostic tools and AI for allocation of public benefits. What they have in common is that they make or support decisions that directly affect people’s lives.
How is high-risk AI classified?
The AI Act defines two routes to high-risk classification:
Annex I: Product safety. AI systems that function as safety components in products already regulated by EU legislation. This includes medical devices, machinery, toys, lifts and vehicles. If the product must undergo a third-party conformity assessment under that legislation, the AI system is automatically high-risk.
Annex III: Specific use areas. AI systems used in the following eight areas:
- Biometrics: Remote identification of persons (except verification, e.g. facial unlocking of a phone).
- Critical infrastructure: AI for managing water, gas, electricity, heating and transport.
- Education: AI that assesses students’ access to or outcomes in educational institutions.
- Employment: AI for recruitment, screening of candidates, promotions and dismissals.
- Public services: AI that assesses citizens’ access to benefits, credit scoring and insurance calculations.
- Law enforcement: AI for risk assessment of persons, polygraphs and evidence analysis.
- Migration and border management: AI for visa and asylum processing and border surveillance.
- Administration of justice and democratic processes: AI assisting judicial authorities in researching and applying the law, and AI intended to influence elections or voting behaviour.
There is an important exception: an AI system listed in Annex III is not high-risk if it performs a narrow procedural task, improves the result of an already completed human activity, or is a preparatory tool with no influence on the final decision. An Annex III system that performs profiling of natural persons, however, is always considered high-risk. The provider must document this assessment.
Requirements for high-risk AI systems
The requirements for high-risk AI systems are the most comprehensive in the AI Act. They apply from 2 August 2026 (from 2 August 2027 for products under Annex I).
- Risk management system: A continuous risk management system that identifies, analyses and mitigates risks throughout the system’s lifecycle. This resembles the approach you know from ISMS and risk assessment.
- Data governance: Training, validation and test data must meet quality requirements. Data must be relevant, representative and, as far as possible, free from errors and bias.
- Technical documentation: Full documentation of the system’s design, purpose, capabilities and limitations. The documentation must enable authorities to assess the system.
- Logging: Automatic recording of events (logs) that make it possible to trace the system’s decisions and identify risks.
- Transparency: Users must receive clear instructions about the system’s purpose, capabilities, limitations and risks.
- Human oversight: The system must be designed so that humans can effectively monitor and, where necessary, override it. See human oversight of AI.
- Accuracy, robustness and cybersecurity: The system must function correctly and withstand errors, attacks and unexpected situations.
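To make the logging requirement concrete: the Act requires automatic recording of events so that decisions can be traced afterwards. Below is a minimal, hypothetical Python sketch of such an audit trail; the logger name, file path and event fields are illustrative assumptions, not fields mandated by the AI Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger writing one JSON event per line.
audit = logging.getLogger("ai_audit")
handler = logging.FileHandler("decisions.log")
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def log_decision(input_id: str, output: str, model_version: str) -> None:
    """Record each automated decision so it can be traced and reviewed later."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_id": input_id,
        "output": output,
        "model_version": model_version,
    }
    audit.info(json.dumps(event))
```

A structured, append-only log like this is what makes it possible to reconstruct why the system produced a given output, which is the point of the logging obligation.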
Before a high-risk AI system can be placed on the market, the provider must carry out a conformity assessment and register the system in the EU database.
Roles and responsibilities
The provider bears the primary responsibility. This is the organisation that develops a high-risk AI system or has it developed for the purpose of placing it on the market. The provider must fulfil all the requirements above, carry out the conformity assessment and continuously monitor the system after it has been placed on the market.
The deployer also has obligations. If your organisation uses a high-risk AI system, you must:
- Use the system in accordance with the instructions for use.
- Ensure human oversight by the persons who supervise the system.
- Monitor the system’s operation and report risks or serious incidents to the provider.
- Carry out a fundamental rights impact assessment (for certain public bodies and private organisations).
If you use a high-risk AI system for decisions about individuals within the meaning of GDPR, you must also comply with the requirements on automated decision-making in GDPR Article 22. The two sets of rules complement each other.
Frequently Asked Questions about High-Risk AI Systems
What is a high-risk AI system?
A high-risk AI system is an AI system used in critical areas such as healthcare, education, employment or law enforcement. It must meet strict requirements for documentation, testing, transparency and human oversight.
How do I know if my AI system is high-risk?
The AI Act defines two categories: AI systems that are safety components in products regulated by EU legislation (Annex I), and AI systems used in specific areas such as biometrics, critical infrastructure, education, employment, law enforcement and migration (Annex III).
When do the requirements for high-risk AI systems apply?
The majority of requirements apply from 2 August 2026. For high-risk AI systems in regulated sectors (e.g. medical devices), the requirements apply from 2 August 2027.
Who is responsible for complying with high-risk AI requirements?
The provider (the party that develops or markets the system) bears the primary responsibility. Deployers of high-risk AI also have obligations, including using the system correctly and monitoring its operation.
Related Terms
AI Risk Categories
The AI Act’s risk-based classification system dividing AI systems into four levels: unacceptable (prohibited), high, limited and minimal risk.
Conformity Assessment (AI)
The formal process by which a provider documents that a high-risk AI system meets all requirements of the EU AI Act before it can be placed on the market.
Human Oversight of AI
The requirement that high-risk AI systems must be designed so that humans can effectively monitor, understand and override the system.