Entry into force of the AI Act
After lengthy negotiations, a provisional agreement on the AI Act was reached on 9 December 2023. Formal approval by the European Parliament and the Council is expected before the end of 2023, after which the law is expected to enter into force in the first weeks of 2024. Implementation will be gradual, with deadlines depending on the risk level of the AI systems.
Once the final text is approved, it will be published in the Official Journal of the European Union. The obligations for companies and institutions take effect two years after publication. AI systems posing unacceptable risks, however, must be taken off the market within six months. Within a year of publication, each country must have designated its supervisory authority, and enforcement and penalty policies must be published.
What does the AI Act entail?
The AI Act aims to regulate the use of artificial intelligence (AI), algorithms and machine learning in the European Union. The law sets rules for autonomous computer systems and algorithms that make decisions, generate content or assist humans; in other words, all uses of AI fall under the AI Act. The main purpose of the AI Act is to ensure that people and businesses in the EU can rely on AI systems that are safe, transparent, traceable, non-discriminatory and environmentally friendly, and that operate under human supervision.
It is likely that all parties working with AI will have to comply with these regulations beginning in 2026. The AI Act provides for significant fines for violators, ranging from 7.5 million to 35 million euros or, for companies, a percentage of global annual turnover, depending on the severity of the violation.
What does the regulation cover?
The AI Act regulates all deployments of AI, regardless of industry. It takes a risk-based approach and imposes obligations on both providers and users, depending on the risk level of the AI system. The law applies across sectors, including healthcare, government, finance, education and entertainment; only investigative and security agencies are subject to separate rules.
Who does the AI Act apply to?
The new rules apply to everyone involved in developing, marketing or using AI. Producers, integrators, importers and users must be able to demonstrate that their AI systems comply with the rules. The law ties in with, and supplements, existing European laws and regulations in various sectors. Importantly, the GDPR remains in force alongside the AI Act.
Risk levels and prohibited AI systems
The AI Act categorizes AI systems into three levels of risk:
- Banned AI systems with unacceptable risks:
AI activities that violate European norms and values, such as predicting criminal behavior and biometric surveillance in public places, are banned from the European market.
- High-risk AI systems:
AI systems that pose significant risks to health, safety, fundamental rights or the environment are allowed under strict conditions. Clear provenance of training data, human supervision and thorough technical documentation are required. Examples include systems for assessing job applicants, medical devices and the handling of insurance claims.
- Low-risk AI systems:
AI systems that do not fall into the previous categories may be introduced to the European market without major obstacles. However, they must be transparent and not make autonomous decisions.
In addition to these three risk categories, there are foundation models, which do not fall under any of them.
Obligations for AI foundation models
Foundation models, such as those behind Bing Chat and ChatGPT, are next in line after the banned systems: their developers must comply with the AI Act's obligations within 12 months, beginning in 2025. Large foundation models, such as ChatGPT and Gemini, are given even higher priority because of their potential for abuse and are subject to extensive obligations, while smaller foundation models are subject only to transparency obligations.
Preparation for organizations working with AI or algorithms
- Inventory and identify: Start by identifying the AI and algorithms in use. List the vendors and gather all the necessary information from them. Don't forget to include so-called shadow IT: AI services that employees may have procured on their own initiative.
- Examine and assess your role: Determine what role you play in the use of AI and what responsibilities come with it. Check that existing contracts match these responsibilities, considering aspects such as liability, notice periods and the right to request documentation on operation and datasets.
- Evaluate AI output: Establish standardized protocols for evaluating the output of AI systems. Implement policies for periodic reviews to ensure the accuracy of recommendations.
- Identify human involvement: Investigate the degree of human oversight in the AI systems within your organization.
- Categorize the risk: Determine the risk category of each AI system in use. Refer to Annex III for high-risk AI systems and watch for possible changes to this list. The sketch after this list shows one way to record the outcome of these steps.
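The steps above are organizational rather than technical, but the inventory in step one and the risk categorization in the final step lend themselves to a structured record. Below is a minimal, hypothetical Python sketch of such an AI inventory; the field names, risk labels and compliance flags are our own illustrative choices, not terminology from the AI Act itself.

```python
# Hypothetical sketch of an internal AI-system inventory, assuming an
# organization wants to track vendors, roles and risk categories in one place.
# The AIRisk labels and flag texts are illustrative, not legal terminology.
from dataclasses import dataclass
from enum import Enum


class AIRisk(Enum):
    UNACCEPTABLE = "banned"   # must be taken off the market
    HIGH = "high"             # allowed under strict conditions
    LOW = "low"               # transparency obligations only


@dataclass
class AISystem:
    name: str
    vendor: str
    role: str                 # e.g. "provider", "integrator", "user"
    purpose: str
    human_oversight: bool     # is a human involved in the decisions?
    shadow_it: bool = False   # procured outside central IT?
    risk: AIRisk = AIRisk.LOW


def inventory_report(systems: list[AISystem]) -> None:
    """Print a simple overview, flagging high-risk and shadow-IT entries."""
    for s in systems:
        flags = []
        if s.risk is AIRisk.HIGH:
            flags.append("check Annex III obligations")
        if s.shadow_it:
            flags.append("shadow IT: review contract and data use")
        if not s.human_oversight:
            flags.append("no human oversight documented")
        print(f"{s.name} ({s.vendor}, role={s.role}): {s.risk.value}"
              + (f" -> {'; '.join(flags)}" if flags else ""))


if __name__ == "__main__":
    systems = [
        AISystem("CV screener", "ExampleVendor", "user",
                 "ranking job applicants", human_oversight=True,
                 risk=AIRisk.HIGH),
        AISystem("Chat assistant", "ExampleVendor", "user",
                 "drafting emails", human_oversight=True, shadow_it=True),
    ]
    inventory_report(systems)
```

Running the sketch prints one line per system and flags high-risk entries, shadow IT and missing human oversight, mapping directly onto the preparation steps above.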
With the upcoming implementation of the AI Act, a new phase in the era of artificial intelligence is upon us. We understand that these regulations may raise questions about how they will specifically affect your organization and AI applications. Since the legislation is still in the approval process, there is room for further clarification and adjustments. Please feel free to contact us with any questions or uncertainties and keep an eye on our website for updates.