
All you need to know about the European AI Act: Entry into force, Risks and Obligations

On December 9, 2023, political agreement was reached in Europe on the AI Act, a law that restricts the use of AI, algorithms and machine learning and puts the protection of people and society at its core. Wondering what this means for you? We list the main points for you and help you get started in a focused way.
This article was last updated on 14/5/2024.

Entry into force of the AI Act

After much negotiation, a provisional agreement on the AI Act was reached on December 9, 2023. Formal approval by the European Parliament and the Council is expected in the course of 2024, after which the Act will enter into force. Implementation will be gradual, with deadlines depending on the risk level of AI systems.

Once the final text is approved, it will be published in the Official Journal. The obligations for companies and institutions take effect two years after publication. However, AI systems with unacceptable risks must already be taken off the market within six months. Within a year of publication, each member state must have designated its national supervisory authority, and enforcement and penalty policies must be published.

What does the AI Act entail?

The AI Act aims to regulate the use of artificial intelligence (AI), algorithms and machine learning in the European Union. The law sets rules for autonomous computer systems and algorithms that make decisions, generate content or assist humans. In other words, all uses of AI fall under the AI Act. The main purpose of the AI Act is to ensure that people and businesses in the EU can rely on safe, transparent, traceable, non-discriminatory and environmentally friendly AI systems under human supervision.

It is likely that all parties working with AI will have to comply with these regulations from 2026 onwards. The AI Act provides for significant fines for violators, ranging from 7.5 million to 35 million euros or a percentage of worldwide annual turnover, depending on the severity of the violation.

What does the regulation cover?

The AI Act regulates all deployments of AI, regardless of industry. It takes a risk-based approach and imposes obligations on both providers and users depending on the risk level of the AI system. The law applies to various sectors, including healthcare, government, finance, education and entertainment. Specific rules apply only to law enforcement and national security services.

Who does the AI Act apply to?

The new rules apply to anyone involved in developing, marketing or using AI. Manufacturers, integrators, importers and users must be able to demonstrate that their AI systems comply with the rules. This law follows and complements existing European laws and regulations in various sectors. Importantly, the GDPR remains in force alongside the AI Act.

Risk levels and prohibited AI systems

The AI Act categorizes AI systems into three levels of risk:

  • Banned AI systems with unacceptable risks:
    AI activities that violate European norms and values, such as predicting criminal behavior and biometric surveillance in public places, are banned from the European market.

  • High-risk AI systems:
    AI systems with significant risks to health, safety, fundamental rights or the environment are allowed under strict conditions. Clear provenance of training data, human supervision and thorough technical documentation are required. Examples include assessing job applicants, medical devices and handling insurance claims.

  • Low-risk AI systems:
    AI systems that do not fall into the previous categories may be introduced to the European market without major obstacles. However, they must be transparent and not make autonomous decisions.

In addition to these three risk categories, there are foundation models (general-purpose AI models), which do not fall under the risk categories but are regulated separately.

Obligations for AI foundation models

Foundation models, such as the models behind Bing Chat and ChatGPT, are next in the implementation timeline after the banned systems. Developers must comply with the relevant AI Act obligations within 12 months, starting in 2025. Large foundation models, such as those behind ChatGPT and Gemini, are given even higher priority because of their potential for misuse and are subject to extensive obligations. Smaller foundation models are only subject to transparency obligations.

Preparation for organizations working with AI or algorithms

  1. Inventory and identify: Start by identifying the AI and algorithms used. List the vendors and gather all the necessary information from them. Don't forget to include so-called shadow IT: AI services that employees may have procured on their own initiative. A minimal sketch of such an inventory, including the risk category from step 5, follows after this list.
  2. Examine and assess your role: Determine what role you play in the use of AI and which responsibilities come with it. Check that existing contracts match these responsibilities, considering aspects such as liability, notice periods and the ability to request documentation on how the system works and on the datasets used.
  3. Evaluate AI output: Establish standardized protocols for evaluating the output of AI systems, and implement policies for periodic reviews to ensure the accuracy of recommendations.
  4. Identify human oversight: Investigate the degree of human involvement in and oversight of the AI systems within your organization.
  5. Categorize the risk: Determine the risk category of each AI system used. Refer to Annex III for high-risk AI systems and keep an eye on possible changes to this list.
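
To make steps 1 and 5 concrete, the sketch below shows one possible way to record such an inventory. It is a minimal, hypothetical example in Python: the field names, the risk categories and the example entry are our own assumptions for illustration, not terminology or requirements prescribed by the AI Act.

from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """Risk levels as described in this article, plus foundation models."""
    UNACCEPTABLE = "unacceptable"     # banned from the European market
    HIGH = "high"                     # allowed under strict conditions
    LOW = "low"                       # mainly transparency obligations
    FOUNDATION_MODEL = "foundation model"


@dataclass
class AISystemRecord:
    """One entry in an organization's AI/algorithm inventory (illustrative fields)."""
    name: str
    vendor: str
    purpose: str
    role: str                         # e.g. "provider", "deployer", "importer"
    risk_category: RiskCategory
    human_oversight: str              # how humans stay involved in decisions
    shadow_it: bool = False           # procured by employees on their own initiative?


# Hypothetical example entry; replace with your own systems and assessment.
inventory = [
    AISystemRecord(
        name="CV screening tool",
        vendor="ExampleVendor B.V.",
        purpose="Pre-selecting job applicants",
        role="deployer",
        risk_category=RiskCategory.HIGH,   # assessing job applicants is named as high risk
        human_oversight="A recruiter reviews every automated recommendation",
    ),
]

# Print a simple overview of the registered systems and their risk categories.
for record in inventory:
    print(f"{record.name} ({record.vendor}): {record.risk_category.value} risk, role: {record.role}")

Even a simple register like this makes it easier to answer later questions from regulators about which systems you use, in what role, and under which risk category.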

With the upcoming implementation of the AI Act, a new phase in the era of artificial intelligence is upon us. We understand that these regulations may raise questions about how they will specifically affect your organization and AI applications. Since the legislation is still in the approval process, there is room for further clarification and adjustments. Please feel free to contact us with any questions or uncertainties and keep an eye on our website for updates.

Mathijs Oppelaar
Information Security Consultant
085 773 60 05