Human Oversight of AI
Human oversight is the requirement that high-risk AI systems must be designed and used in a way that enables humans to effectively monitor the system and, where necessary, override it. It is one of the central requirements of the AI Act.
What is human oversight of AI?
Human oversight of AI is about ensuring that humans retain genuine decision-making authority when AI systems are used in critical contexts. It is not enough merely to have an "approve" button. The person overseeing the system must understand what the AI system is doing and be able to intervene meaningfully.
The AI Act sets out this requirement in Article 14. It applies specifically to high-risk AI systems and is based on a fundamental idea: the greater the influence an AI system has on people’s lives, the more important it is that a human can control it.
The requirement is closely linked to GDPR’s rules on automated decision-making (Article 22), which give individuals the right to human involvement in decisions with legal effects. The AI Act extends this principle from the data subject’s rights to a design requirement for the system itself.
Requirements in the AI Act
Article 14 sets out specific requirements for how human oversight must function:
- Design requirement: The provider must build the system so that it can be effectively monitored by humans. This includes tools for understanding the system’s output and the ability to override or stop the system.
- Comprehensibility: The system must provide sufficient information for the supervising person to understand its capabilities and limitations, and to detect errors, bias and unexpected results.
- Override capability: It must be possible to override or disregard the system’s output. For certain systems, it must also be possible to stop the system entirely via a "stop" button.
- Competence: The persons exercising oversight must have the necessary training and competence. This is directly linked to the requirement for AI literacy.
The requirement applies to both the provider (who designs the system) and the deployer (who uses it). The provider must make oversight possible. The deployer must ensure that oversight is actually exercised.
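As a rough illustration of what the override and stop capabilities could look like at the design level, the sketch below wraps the system's output in an oversight layer with an audit trail. It is a minimal, illustrative example only; the names (OversightControls, risk_score and so on) are assumptions for the sketch and are not taken from the AI Act.

```python
# Minimal sketch of provider-side oversight hooks (illustrative only; the class
# and field names are hypothetical, not prescribed by the AI Act).
from dataclasses import dataclass, field


@dataclass
class Decision:
    recommendation: str   # what the AI system proposes
    risk_score: float     # confidence / risk indicator shown to the overseer
    rationale: str        # explanation supporting comprehensibility


@dataclass
class OversightControls:
    stopped: bool = False                       # state of the "stop button"
    audit_log: list = field(default_factory=list)

    def stop(self) -> None:
        """Halt the system entirely."""
        self.stopped = True

    def review(self, decision: Decision, overseer: str,
               override: str | None = None) -> str:
        """Record the human review; an override replaces the system's output."""
        if self.stopped:
            raise RuntimeError("System has been stopped by a human overseer")
        final = override if override is not None else decision.recommendation
        self.audit_log.append({
            "overseer": overseer,
            "system_output": decision.recommendation,
            "final_outcome": final,
            "overridden": override is not None,
        })
        return final
```

The point of the sketch is that override and stop are built into the interface the deployer works with, rather than bolted on afterwards, and that every human intervention leaves a trace that can be audited later.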
Three approaches to human oversight
Three levels of human oversight are commonly distinguished in this context. Which approach is appropriate depends on the risk level and context.
Human-in-the-loop (HITL): A human is directly involved in every decision. The AI system provides a recommendation, and the human approves or rejects it. This is the most demanding level and is relevant when the decision has significant impact, e.g. credit assessments or consent-related decisions.
Human-on-the-loop (HOTL): A human monitors the system’s operation on an ongoing basis and can intervene, but does not approve each individual decision. The system runs automatically, but the human can stop it or change its behaviour. This is comparable to the supervisory function known from monitoring of critical infrastructure.
Human-in-command (HIC): A human has overarching control of the system and can decide when and how it is used. The human can permanently disconnect the system, change its role or reverse its decisions. This is the highest level and is relevant for systems with extensive societal impact.
The AI Act requires, at a minimum, that the provider enables an appropriate form of human oversight and that the deployer implements it in practice.
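To make the difference between the first two approaches concrete, here is a purely illustrative sketch: in the human-in-the-loop variant every recommendation passes through an explicit approval step, while in the human-on-the-loop variant cases are processed automatically and a monitor can halt the run. The function names and the escalation outcome are assumptions for the example, not terms from the AI Act.

```python
# Illustrative contrast between HITL and HOTL (hypothetical functions).
from typing import Callable, Iterable


def human_in_the_loop(cases: Iterable[dict],
                      model: Callable[[dict], str],
                      approve: Callable[[dict, str], bool]) -> list[str]:
    """Every recommendation needs explicit human approval before it takes effect."""
    outcomes = []
    for case in cases:
        recommendation = model(case)
        if approve(case, recommendation):      # a human decides each case
            outcomes.append(recommendation)
        else:
            outcomes.append("escalated_to_manual_handling")
    return outcomes


def human_on_the_loop(cases: Iterable[dict],
                      model: Callable[[dict], str],
                      should_halt: Callable[[list[str]], bool]) -> list[str]:
    """Decisions run automatically; a human monitor can halt the system."""
    outcomes = []
    for case in cases:
        outcomes.append(model(case))
        if should_halt(outcomes):              # e.g. an anomaly the overseer notices
            break
    return outcomes
```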
Implementation in practice
Human oversight is not just a legal requirement. It is an organisational challenge. Here are four concrete steps:
- Designate responsible persons: Define who is responsible for overseeing each high-risk AI system in your organisation. Document this as part of your governance structure.
- Ensure competence: The designated persons must understand what the system does and what errors may occur. Invest in AI literacy training specific to the system they oversee.
- Implement processes: Describe procedures for when and how a person should intervene. What should the person exercising oversight do if the system produces an unusual result? When is a case escalated?
- Avoid automation bias: The greatest risk is that the person exercising oversight blindly trusts the AI system's output. Build oversight mechanisms that actively require human assessment, e.g. through spot checks or by displaying alternative results.
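One simple way to build in active human assessment, as mentioned under the last step, is mandatory spot checks: a share of cases is routed to a reviewer regardless of how confident the system is. The sketch below is illustrative only; the 10% sample rate and the function name are assumptions, not regulatory requirements.

```python
# Sketch of a spot-check mechanism to counter automation bias (illustrative).
import random


def requires_human_review(high_risk: bool, sample_rate: float = 0.10) -> bool:
    """All high-risk cases and a random sample of the rest get mandatory human review."""
    return high_risk or random.random() < sample_rate


# Usage: route each decision either to automatic handling or to a reviewer queue.
for case_id, is_high_risk in [("case-001", False), ("case-002", True)]:
    if requires_human_review(is_high_risk):
        print(f"{case_id}: queued for human review")
    else:
        print(f"{case_id}: processed automatically, logged for later audit")
```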
Human oversight is also relevant for personal data processing. If your AI system processes personal data and makes decisions about data subjects, the requirements of GDPR and the AI Act combine. You must ensure both the data subject’s right to human involvement and the AI Act’s design requirements.
Frequently Asked Questions about Human Oversight of AI
What is human oversight in the AI Act?
Human oversight is the requirement that high-risk AI systems must be designed so that humans can monitor the system’s operation, understand its output and, where necessary, intervene or stop it.
Who must perform human oversight of AI?
Both the provider and the deployer have responsibilities. The provider must design the system so that human oversight is possible. The deployer must designate competent persons to carry out oversight in practice.
What is the difference between human-in-the-loop and human-on-the-loop?
Human-in-the-loop means that a human actively approves each decision. Human-on-the-loop means that a human monitors the system and can intervene, but does not approve each individual decision. The AI Act does not prescribe a specific approach; it requires that an appropriate form of oversight is possible and actually exercised for high-risk AI.
Does the human oversight requirement apply to all AI systems?
The specific requirement in Article 14 of the AI Act applies only to high-risk AI systems. However, the principle of human oversight is also relevant for other AI systems, and GDPR imposes similar requirements for automated decision-making.
Related Terms
High-Risk AI System
An AI system used in critical areas that must meet strict requirements for safety, transparency and human oversight under the AI Act.
AI Risk Categories
The AI Act’s risk-based classification system dividing AI systems into four levels: unacceptable (prohibited), high, limited and minimal risk.
AI Act
The EU's comprehensive regulation on artificial intelligence, classifying AI systems by risk level and imposing requirements from development to deployment.