AI Act

The AI Act (EU AI Regulation) is the world's first comprehensive legislation on artificial intelligence. It classifies AI systems by risk level and imposes requirements on everything from development to the use of AI in the EU.



    What is the AI Act?

    The AI Act (Regulation (EU) 2024/1689) is the EU's regulation of artificial intelligence. It was approved by the European Parliament in March 2024 and entered into force on 1 August 2024. Its purpose is to ensure that AI systems in the EU are safe, transparent and respect fundamental rights.

    The Regulation applies directly in all EU Member States, and there is no need to wait for national implementing legislation. If your organisation develops, imports or uses AI systems, you are already covered.

    The AI Act supplements existing legislation such as GDPR, which regulates the processing of personal data. Whereas GDPR focuses on data protection, the AI Act focuses on the safety and reliability of the AI system itself.

    The risk-based approach

    The AI Act is built on a risk-based approach with four risk categories. The higher the risk an AI system poses, the stricter the requirements it must meet.

    • Unacceptable risk: Prohibited AI practices such as social scoring of citizens and manipulation of vulnerable groups. These AI systems are entirely banned in the EU.
    • High risk: AI systems used in critical areas such as healthcare, law enforcement and recruitment. They must meet strict requirements for documentation, testing and human oversight.
    • Limited risk: AI systems such as chatbots must inform the user that they are interacting with AI. The requirements primarily concern transparency.
    • Minimal risk: Most AI systems fall here, e.g. spam filters and AI in computer games. No specific requirements apply.

    This approach ensures that regulation is proportionate. An organisation using AI for e-mail sorting has far fewer obligations than one using AI to assess credit applications.
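
    The four tiers above can be sketched as a simple lookup table. This is an illustrative simplification, not the Regulation's actual classification procedure, and the example use cases are taken from the list above:

    ```python
    # Illustrative sketch of the AI Act's four risk tiers and the kind of
    # obligation attached to each. The assignments are simplified examples,
    # not a legal classification.
    RISK_TIERS = {
        "unacceptable": {"examples": ["social scoring"],
                         "obligation": "prohibited"},
        "high": {"examples": ["recruitment screening", "credit scoring"],
                 "obligation": "conformity assessment, documentation, human oversight"},
        "limited": {"examples": ["chatbot"],
                    "obligation": "transparency (disclose AI use)"},
        "minimal": {"examples": ["spam filter", "game AI"],
                    "obligation": "none"},
    }

    def obligation_for(use_case: str) -> str:
        """Return the tier and obligation for a known example use case."""
        for tier, info in RISK_TIERS.items():
            if use_case in info["examples"]:
                return f"{tier}: {info['obligation']}"
        return "unknown: assess against the Regulation's criteria"

    print(obligation_for("chatbot"))
    ```

    In practice, the classification of a real system requires a legal assessment against Annex III of the Regulation; the point here is only the structure: tier determines obligation.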

    Timeline for entry into force

    The AI Act is being phased in over three years:

    • February 2025: The ban on prohibited AI practices and the requirements on AI literacy take effect.
    • August 2025: Rules for general-purpose AI models (such as GPT and Llama) begin to apply, along with governance and penalty provisions.
    • August 2026: The majority of requirements take effect, including all rules for high-risk AI systems and requirements for providers.
    • August 2027: The final requirements for certain high-risk systems in regulated sectors (e.g. medical devices) take effect.

    Although 2026 and 2027 may seem far off, many of the requirements demand preparation now. A conformity assessment takes time, and you should begin mapping your AI systems today.
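
    As a quick self-check of which obligations already apply on a given date, the timetable above can be expressed as a small helper. The exact application dates (the 2nd of each month listed) come from the Regulation's timetable; the labels are shortened:

    ```python
    from datetime import date

    # Phase-in milestones from the AI Act's timetable (labels abbreviated).
    MILESTONES = [
        (date(2025, 2, 2), "prohibited-practice ban and AI literacy requirements"),
        (date(2025, 8, 2), "general-purpose AI model rules"),
        (date(2026, 8, 2), "most requirements, incl. high-risk AI systems"),
        (date(2027, 8, 2), "remaining high-risk requirements in regulated sectors"),
    ]

    def applicable(on: date) -> list[str]:
        """Milestones already in effect on a given date."""
        return [label for d, label in MILESTONES if d <= on]

    print(applicable(date(2026, 1, 1)))
    # -> the February 2025 and August 2025 milestones
    ```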

    Key requirements for organisations

    Whether you are a provider or a deployer of AI, the Regulation imposes requirements on you. Here are the most important:

    For all organisations: You must ensure that employees who work with AI have sufficient AI literacy. This applies from February 2025. You must also avoid the prohibited AI practices.

    For providers of high-risk AI: You must carry out a conformity assessment, establish a quality management system, maintain technical documentation and ensure human oversight.

    For deployers of high-risk AI: You must use the system in accordance with the instructions for use, monitor its operation and report serious incidents to the provider and the authorities.

    A good starting point is to map all AI systems in your organisation and assess which risk category they fall into.

    Sanctions and enforcement

    The AI Act's maximum fines exceed those under the GDPR:

    • Up to EUR 35 million or 7% of global annual turnover, whichever is higher, for the use of prohibited AI practices.
    • Up to EUR 15 million or 3% of global annual turnover for breaching most other requirements.
    • Up to EUR 7.5 million or 1% of global annual turnover for providing incorrect information to authorities.
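
    For undertakings, each cap is the fixed amount or the turnover percentage, whichever is higher. A minimal sketch of that arithmetic (function name and figures chosen for illustration):

    ```python
    def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
        """Upper bound of an administrative fine for an undertaking:
        the fixed amount or the turnover percentage, whichever is higher."""
        return max(fixed_cap_eur, pct * turnover_eur)

    # Example: a company with EUR 2 billion global annual turnover,
    # for use of a prohibited AI practice (EUR 35 million / 7%):
    cap = max_fine_eur(2_000_000_000, 35_000_000, 0.07)
    print(f"EUR {cap:,.0f}")  # 7% of EUR 2bn = EUR 140,000,000
    ```

    Note that these are maximum fines; the actual amount in a given case is set by the supervisory authority.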

    Each EU Member State must designate a national supervisory authority. Enforcement is coordinated at EU level by the new European AI Office.

    Frequently Asked Questions about AI Act

    When does the AI Act enter into force?

    The AI Act entered into force on 1 August 2024. The ban on certain AI practices and the AI literacy requirements apply from February 2025, rules for general-purpose AI models from August 2025, and most other requirements from August 2026.

    Who is covered by the AI Act?

    All organisations that develop, distribute or use AI systems in the EU are covered. This also applies to organisations outside the EU if the output of their AI systems is used in the EU.

    What are the penalties for breaching the AI Act?

    Fines can reach up to EUR 35 million or 7% of global annual turnover for the most serious infringements, such as the use of prohibited AI practices.

    Does the AI Act apply to all types of AI?

    Yes, but the requirements vary by risk level. AI systems posing minimal risk have virtually no requirements, whilst high-risk systems must meet strict requirements for documentation, transparency and human oversight.
