Everything you need to know about the European AI Act: entry into force, risks and obligations

Implementation
AI
Legislation

On December 9, 2023, Europe reached agreement on the AI Act, a law that restricts the use of AI, algorithms and machine learning and puts the protection of people and society at the center. Curious about what this means for you? We list the most important points and help you get started in a focused way.

This article was last updated on 24.03.2026. Written by Mathijs Oppelaar, Operational Manager & Partner.

Entry into force of the AI Act

After many negotiations, a preliminary agreement on the AI Act was reached on December 9, 2023. Formal approval by the European Parliament and Council is expected before the end of 2023, after which the law is expected to take effect in the first weeks of 2024. Implementation will be gradual, with deadlines depending on the risk level of the AI systems.

Once approved, the final text will be published in the Official Journal. The obligations for companies and institutions take effect two years after publication. However, AI systems with unacceptable risks must be taken off the market within six months. Within one year of publication, each country's supervisory authorities must be designated, and the enforcement and penalty policies must be published.

What does the AI Act mean?

The AI Act aims to regulate the use of artificial intelligence (AI), algorithms and machine learning in the European Union. The law sets rules for autonomous computer systems and algorithms that make decisions, generate content or assist people. In other words, all uses of AI fall under the AI Act. The main goal of the AI Act is to ensure that people and companies in the EU can rely on safe, transparent, traceable, non-discriminatory and environmentally friendly AI systems that are under human supervision.

It is likely that all parties working with AI must comply with these regulations from 2026. The AI Act provides for significant fines for offenders, ranging from €7.5 million to €35 million, depending on the seriousness of the offence.

What is covered by the regulation?

The AI Act regulates all use of AI, regardless of the sector. It follows a risk-based approach and imposes obligations on both providers and users, depending on the risk level of the AI system. The law applies to various sectors, including healthcare, government, finance, education, and entertainment. Only investigative and security services are subject to adjusted rules.

Who does the AI Act apply to?

The new rules apply to everyone involved in developing, marketing or using AI. Manufacturers, integrators, importers and users must be able to demonstrate that their AI systems comply with the rules. This law aligns with and complements existing European laws and regulations in various sectors. Note that the GDPR remains in force alongside the AI Act.

Risk levels and prohibited AI systems

The AI Act categorizes AI systems into three levels of risk:

  • Prohibited AI systems with unacceptable risks:
    AI activities that violate European norms and values, such as predicting criminal behavior and biometric surveillance in public areas, are prohibited on the European market.

  • High-risk AI systems:
    AI systems with significant risks to health, safety, fundamental rights or the environment are allowed under strict conditions. Clear sources of training data, human supervision and thorough technical documentation are required. Examples include assessing applicants, medical devices, and handling insurance claims.

  • Low-risk AI systems:
    AI systems that do not fall under the previous categories may be introduced to the European market without major barriers. However, these must be transparent and not take autonomous decisions.

In addition to these three risk categories, there are also foundation models that do not fall under the risk categories.
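The three tiers above can be sketched as a simple lookup. This is a minimal, hypothetical illustration — the enum labels and example mappings are assumptions for clarity, not definitions from the regulation itself:

```python
from enum import Enum

class RiskLevel(Enum):
    """Hypothetical labels for the AI Act's three risk tiers."""
    UNACCEPTABLE = "prohibited"    # banned from the European market
    HIGH = "strict-conditions"     # allowed under documentation and oversight duties
    LOW = "transparency-only"      # allowed, must stay transparent and non-autonomous

# Illustrative mapping of example use cases to tiers (assumed, not authoritative)
EXAMPLES = {
    "biometric surveillance in public areas": RiskLevel.UNACCEPTABLE,
    "applicant screening": RiskLevel.HIGH,
    "spam filter": RiskLevel.LOW,
}

def tier(use_case: str) -> RiskLevel:
    """Look up the assumed risk tier for a known example use case."""
    return EXAMPLES[use_case]
```

In practice, the classification depends on Annex III and subsequent amendments, so any such mapping would need to be maintained against the published text.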

Obligations for foundation models

Foundation models, such as those behind Bing Chat and ChatGPT, are next in line after the banned systems: developers must comply with the AI Act's obligations within 12 months, beginning in 2025. Large foundation models, such as ChatGPT and Gemini, receive even higher priority because of their potential for abuse and are subject to extensive obligations. For smaller foundation models, only transparency obligations apply.

Preparation for organizations that work with AI or algorithms

  1. Inventory and identify: Start by mapping the AI and algorithms used. Note the suppliers and gather all necessary information from them. Don't forget to also include so-called Shadow-IT, which are the AI services that employees may have purchased on their own initiative.
  2. Research and assess your role: Research what role you have in AI use and what responsibilities come with it. Check that the existing contracts match these responsibilities. Aspects such as liability, notice periods and the ability to request documentation about operations and data sets should be considered.
  3. Evaluate AI output: Provide standardized protocols to evaluate the output of the AI system. Implement periodic review policies to ensure the accuracy of recommendations.
  4. Identify human interference: Research what level of human involvement there is with the AI systems within your organization.
  5. Categorize the risk: Determine the risk category of the AI system used. Consult Annex III for high-risk AI systems and note possible changes to this list.
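The inventory step above could start as something as simple as a structured record per system. The schema below is a hypothetical sketch — field names like `shadow_it` and `risk_category` are our own illustration, not terms from the AI Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in an internal AI/algorithm inventory (hypothetical schema)."""
    name: str
    supplier: str
    role: str                  # your role, e.g. "provider", "deployer", "importer"
    shadow_it: bool = False    # purchased by employees on their own initiative
    human_oversight: str = ""  # description of human involvement with the system
    risk_category: str = "unclassified"  # to be set after an Annex III review

def high_risk(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Filter the inventory down to systems flagged as high-risk."""
    return [r for r in inventory if r.risk_category == "high"]

# Usage: a recruitment tool and a shadow-IT chatbot (invented examples)
inv = [
    AISystemRecord("CV screener", "VendorX", role="deployer", risk_category="high"),
    AISystemRecord("Team chatbot", "VendorY", role="deployer", shadow_it=True),
]
```

Keeping even a lightweight register like this makes steps 2–5 concrete: each record tells you which contracts to check, where human oversight is documented, and which systems need the strict high-risk treatment.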

With the upcoming implementation of the AI Act, a new phase in the era of artificial intelligence is imminent. We understand that these regulations may raise questions about how they will specifically affect your organization and AI applications. Since the legislation is still in the approval process, there is room for further clarification and adjustments. Feel free to contact us with any questions or concerns and keep an eye on our website for updates.
