Prohibited AI Practices
Prohibited AI practices are AI applications that the EU considers to pose an unacceptable risk to fundamental rights. They are entirely banned under the AI Act, and the prohibition has been in force since February 2025.
What are prohibited AI practices?
Prohibited AI practices represent the highest risk category in the AI Act. They are AI systems and applications that the EU has determined pose such a serious threat to fundamental rights that they must not be used at all. No conformity assessment or safeguard can make them lawful.
The prohibition took effect on 2 February 2025 and was the first part of the AI Act to become applicable. This means that your organisation must already ensure that none of the prohibited practices are in use.
The prohibitions target AI applications that manipulate people, exploit vulnerabilities or lead to arbitrary surveillance. They protect principles familiar from GDPR and the EU Charter of Fundamental Rights.
The eight prohibitions
Article 5 of the AI Act sets out eight categories of prohibited AI practices:
- Manipulative AI: AI systems that use subliminal techniques or deliberately manipulative methods to distort a person’s behaviour in a way likely to cause harm.
- Exploitation of vulnerabilities: AI that exploits persons on account of their age, disability or socio-economic situation to materially influence their behaviour.
- Social scoring: AI systems that evaluate or classify persons over time based on their social behaviour or personal characteristics, where the resulting treatment is detrimental and unjustified or disproportionate. Unlike early drafts of the AI Act, the prohibition covers both public and private actors.
- Predicting criminality: AI that assesses the risk of a person committing a criminal offence based solely on profiling or personality traits (not concrete behaviour).
- Untargeted scraping of facial images: Building facial recognition databases through untargeted collection of images from the internet or CCTV footage.
- Emotion recognition in workplaces and education: AI that reads employees’ or students’ emotions, unless for medical or safety reasons.
- Biometric categorisation by sensitive characteristics: AI that categorises persons using biometric data to infer race, political opinions, trade union membership, religion or sexual orientation.
- Real-time remote biometric identification in public spaces: Use of AI to identify persons in real time via biometrics in publicly accessible spaces for law enforcement (with narrow exceptions).
Exceptions and grey areas
The prohibitions are broad, but not without nuance. There are three important exceptions for real-time remote biometric identification for law enforcement:
- Searching for victims of abduction, human trafficking or sexual exploitation.
- Prevention of a specific, imminent terrorist threat.
- Localisation and identification of suspects in serious criminal cases (e.g. murder or terrorism).
Even in these cases, prior authorisation by a judicial authority or an independent administrative authority is required, and the use must be strictly necessary and proportionate.
The grey area is greatest with AI for marketing. Personalised advertising using psychological profiling could potentially fall under the prohibition on manipulation if it deliberately exploits vulnerabilities. If your organisation uses AI-driven marketing, you should assess whether the technique respects the boundary between personalisation and manipulation.
The prohibition on emotion recognition applies specifically in workplaces and educational institutions. In other contexts, e.g. for medical purposes, emotion recognition may still be lawful but may fall under the rules for high-risk AI systems.
Sanctions for breaches
Breaching the prohibitions carries the highest fines in the AI Act: up to EUR 35 million or 7% of global annual turnover. This exceeds GDPR’s maximum fine level of EUR 20 million / 4% of turnover.
For SMEs and start-ups, the lower of the two amounts applies. This still makes the sanctions significant even for smaller organisations.
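The two fine rules above can be sketched as a short calculation. This is an illustrative simplification, not legal advice: the function name and the binary SME flag are our own, and an actual fine is set case by case by the supervisory authority up to these ceilings.

```python
def max_fine_eur(annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative ceiling for fines for breaching the Article 5 prohibitions:
    EUR 35 million or 7% of global annual turnover.

    Large organisations face the higher of the two amounts;
    for SMEs and start-ups, the lower amount applies.
    """
    FIXED_CAP = 35_000_000
    turnover_cap = annual_turnover_eur * 7 / 100  # 7% of global annual turnover
    return min(FIXED_CAP, turnover_cap) if is_sme else max(FIXED_CAP, turnover_cap)
```

For example, a large organisation with EUR 1 billion in turnover faces a ceiling of EUR 70 million (7% exceeds the fixed cap), while an SME with EUR 10 million in turnover faces a ceiling of EUR 700,000.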
National supervisory authorities enforce the prohibitions. Your organisation should, at a minimum, review all AI systems in use and ensure that none falls under the eight prohibitions. Document the assessment so you can present it during inspections.
The combination of an early entry date and high fines makes prohibited AI practices the most urgent compliance area in the AI Act. If you have not already mapped your AI systems, you should start now.
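The mapping exercise described above can be structured as a simple, documented inventory. The sketch below is a minimal example of such a record; the class and the shorthand category labels are our own invention, not terminology from the AI Act, and a real assessment would need legal review behind each entry.

```python
from dataclasses import dataclass, field

# Shorthand labels for the eight Article 5 categories (naming is our own).
PROHIBITED_CATEGORIES = {
    "manipulative_ai",
    "exploitation_of_vulnerabilities",
    "social_scoring",
    "predictive_criminality_profiling",
    "untargeted_facial_image_scraping",
    "emotion_recognition_work_education",
    "biometric_categorisation_sensitive",
    "realtime_remote_biometric_id_public",
}

@dataclass
class AISystemAssessment:
    """One row in the AI system inventory, with the rationale documented
    so it can be presented during inspections."""
    name: str
    flagged: list = field(default_factory=list)  # matching prohibited categories
    rationale: str = ""

    @property
    def compliant(self) -> bool:
        return not self.flagged

def noncompliant(systems: list) -> list:
    """Return the names of systems flagged under any prohibited category."""
    return [s.name for s in systems if not s.compliant]
```

A usage example: a support chatbot assessed with no matching category stays in use, while a hypothetical employee emotion monitor would be flagged under emotion recognition in the workplace and returned by `noncompliant`.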
Frequently Asked Questions about Prohibited AI Practices
When did the prohibition on AI practices take effect?
The prohibition on these AI practices took effect on 2 February 2025. It was the first part of the AI Act to become applicable.
What is social scoring in the AI Act?
Social scoring is when AI is used to evaluate or classify persons based on their social behaviour and assign them a score that leads to detrimental or disproportionate treatment. It is entirely prohibited in the EU, for both public and private actors.
What is the penalty for using prohibited AI practices?
The fine for using prohibited AI practices is up to EUR 35 million or 7% of the organisation’s global annual turnover, whichever is higher.
Is facial recognition prohibited in the EU?
Real-time remote biometric identification in public spaces for law enforcement is prohibited as a general rule, but there are narrow exceptions for serious threats such as terrorism and searching for victims.
Related Terms
AI Act
The EU's comprehensive regulation on artificial intelligence, classifying AI systems by risk level and imposing requirements from development to deployment.
AI Risk Categories
The AI Act’s risk-based classification system dividing AI systems into four levels: unacceptable (prohibited), high, limited and minimal risk.
High-Risk AI System
An AI system used in critical areas that must meet strict requirements for safety, transparency and human oversight under the AI Act.