EU AI Act: A brief overview

As part of its AI strategy, the European Union proposed the EU AI Act to ensure AI is developed and used in a safe, reliable, and transparent way. The new regulation classifies AI systems by the risks they pose and has significant implications for organizations that develop or use AI systems within the EU. This article briefly summarizes the EU AI Act and how you can prepare for it today.

What is the EU AI Act?

Introduced in April 2021 by the European Commission, the EU AI Act is a first-of-its-kind legal framework that regulates the use of AI systems across the EU to ensure safety, reliability, and transparency. The EU wants to make sure that AI is aligned with existing laws on fundamental rights and Union values.
Because different AI applications pose different risks, the proposal follows a risk-based approach, which results in a horizontal regulation (i.e., one applicable across sectors). It distinguishes four risk levels:

  • Unacceptable risk (e.g., social scoring, remote biometric identification, or subliminal manipulation)
  • High risk (e.g., AI in recruitment, law enforcement, finance/insurance, biometric identification, or safety components in regulated systems, such as medical devices)
  • Limited risk (e.g., interaction with a chatbot, deep fakes)
  • Minimal risk (e.g., spam filters)

Depending on its risk classification, an AI system may be prohibited outright, be subject to specific requirements, or simply have to notify users that they are interacting with an AI. This applies to any AI system affecting a natural person in the EU. Read more about how AI risk is classified in this article.
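
To make the tiered approach concrete, here is a minimal sketch (in Python) of how an organization might triage its AI use cases against the four levels above. The `RiskTier` enum, the example mapping, and the `classify_use_case` helper are our own illustrative assumptions, not an official classification:

```python
# Illustrative triage of AI use cases against the AI Act's four risk levels.
# The tier names follow the Act; the use-case mapping is an assumption.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"                 # e.g., social scoring
    HIGH = "conformity assessment required"     # e.g., AI in recruitment
    LIMITED = "transparency obligations"        # e.g., chatbots, deep fakes
    MINIMAL = "no additional obligations"       # e.g., spam filters


# Non-exhaustive example mapping based on the examples in this article.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def classify_use_case(use_case: str) -> RiskTier:
    """Return the assumed risk tier; anything unknown needs legal review."""
    if use_case not in USE_CASE_TIERS:
        raise ValueError(f"No assumed tier for {use_case!r}; review against the Act")
    return USE_CASE_TIERS[use_case]


for case in USE_CASE_TIERS:
    print(f"{case}: {classify_use_case(case).value}")
```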

The EU’s definition of AI

As a result of the trilogue, the EU agreed on a new, broader definition that aligns with the OECD's. Under the EU AI Act, an AI system is now defined as a machine-based system that infers, from the input it receives, how to generate output such as predictions, content, recommendations, or decisions influencing physical or virtual environments. These systems can vary in their levels of autonomy and adaptiveness.

Previously, the EU defined an AI system as software developed with machine learning or logic- and knowledge-based approaches that produces content, predictions, recommendations, decisions, or similar output influencing the environment the AI interacts with, including AI systems that are part of a hardware device. This definition was heavily criticized within the AI ecosystem, as it could also cover simpler software that is not actually AI.

What are the benefits for society? 

The EU wants to make sure that citizens are safe from negative consequences of AI. The EU AI Act therefore aims to ensure that organizations using AI to make decisions do not discriminate against people and that these systems are not biased against certain groups based on race, gender, religion, or any other attribute.

Organizations using AI systems must be able to explain every outcome as well as the decision-making process behind it. This will pose significant challenges, as current AI development processes are often scattered and lack transparency. Solutions that enable structured, transparent processes will be needed.
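
One way to work toward such explainability (a minimal sketch under our own assumptions; neither the record format nor the field names are prescribed by the EU AI Act) is to log every automated decision together with its inputs and a human-readable reason:

```python
# Sketch: store each automated decision as an auditable, explainable record.
import json
from datetime import datetime, timezone


def log_decision(system: str, inputs: dict, outcome: str, reason: str) -> str:
    """Serialize one automated decision so it can be explained and reviewed later."""
    record = {
        "system": system,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,      # the data the decision was based on
        "outcome": outcome,    # what the system decided
        "reason": reason,      # the explanation given on request
    }
    return json.dumps(record)


print(log_decision(
    system="loan-scoring-v2",  # hypothetical system name
    inputs={"income_eur": 42_000, "existing_loans": 1},
    outcome="rejected",
    reason="debt-to-income ratio above policy threshold",
))
```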

The EU AI Act also gives affected individuals the right to challenge decisions made by algorithms and to have them reviewed by data scientists of the responsible organization.

Current status and timeline of the EU AI Act

As of May 2024, the EU AI Act has been passed by the EU and its Member States and will soon enter into force: it becomes effective in August 2024. After that, harmonized standards have to be established and the rules translated into national law.

The provisions and obligations apply on different timelines. Six months after entry into force, the Act's rules on prohibited AI systems and the AI literacy requirements apply (February 2025). After 12 months, providers of general-purpose AI (GPAI) systems must fulfil the new regulatory requirements (August 2025). After 24 months, all other parts apply (August 2026). However, providers of high-risk AI systems that are already regulated by other laws, such as medical devices, vehicles, or machinery (see Annex I of the EU AI Act), have an extended period of 36 months after entry into force to prepare for the AI Act (August 2027).

[Figure: Timeline of the EU AI Act]
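
All of these deadlines derive from the entry-into-force date. As a quick sanity check, this short sketch (our own helper, assuming August 1, 2024 as the entry-into-force date) computes each milestone from the offsets in the text:

```python
# Derive the AI Act's application dates from the entry-into-force date.
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumed entry-into-force date


def add_months(start: date, months: int) -> date:
    """Shift a date forward by a whole number of months (same day of month)."""
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, start.day)


MILESTONES = {
    "Prohibited AI systems and AI literacy": add_months(ENTRY_INTO_FORCE, 6),
    "GPAI provider obligations": add_months(ENTRY_INTO_FORCE, 12),
    "All remaining provisions": add_months(ENTRY_INTO_FORCE, 24),
    "High-risk systems under Annex I product laws": add_months(ENTRY_INTO_FORCE, 36),
}

for name, deadline in MILESTONES.items():
    print(f"{deadline:%B %Y}: {name}")  # February 2025, August 2025, ...
```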

High-risk applications in the EU AI Act

AI systems classified as high-risk can still be developed and used, as long as they fulfil the proposed requirements. This includes a “conformity assessment” (or audit) to ensure that the AI system complies with the EU AI Act, which must be repeated whenever substantial modifications are made to the system.

It will also be mandatory to monitor the risk and quality of the system while it is in use, which includes (see the tracking sketch after this list):

  • Fundamental rights impact assessment
  • Technical documentation and record-keeping
  • Human oversight
  • Data governance
  • Accurate, robust, and secure models
  • Transparency regarding the output and the model
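
A lightweight way to keep track of these obligations per system (a minimal sketch; the `ComplianceRecord` class and its field names are our own illustration, not a format required by the Act) could look like this:

```python
# Sketch: track the monitoring obligations listed above per high-risk system.
from dataclasses import dataclass


@dataclass
class ComplianceRecord:
    system_name: str
    fundamental_rights_assessment_done: bool = False
    technical_documentation_url: str | None = None
    human_oversight_defined: bool = False
    data_governance_in_place: bool = False
    accuracy_robustness_security_tested: bool = False
    output_transparency_documented: bool = False

    def open_items(self) -> list[str]:
        """Return the obligations that still lack evidence."""
        checks = {
            "fundamental rights impact assessment": self.fundamental_rights_assessment_done,
            "technical documentation / record-keeping": self.technical_documentation_url is not None,
            "human oversight": self.human_oversight_defined,
            "data governance": self.data_governance_in_place,
            "accuracy, robustness, and security testing": self.accuracy_robustness_security_tested,
            "output and model transparency": self.output_transparency_documented,
        }
        return [name for name, done in checks.items() if not done]


record = ComplianceRecord("cv-screening-model", human_oversight_defined=True)
print(record.open_items())  # everything still open except human oversight
```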

Non-compliance with the EU AI Act can trigger penalties of up to €35 million or 7% of the organization's global annual revenue. Failing to supply correct information to authorities can trigger penalties of up to €7.5 million or 1.5% of the organization's global annual revenue. Fines for SMEs and start-ups are capped. Consult this article to learn more about the requirements to fulfil.
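
For a quick sense of scale: for larger undertakings, the applicable ceiling is the higher of the fixed amount and the revenue share (for SMEs, the Act caps fines at the lower of the two). A worked example with a hypothetical €2 billion revenue:

```python
# Worked example of the fine ceilings described above.
def fine_cap(global_annual_revenue_eur: float, fixed_eur: float, share: float) -> float:
    """Ceiling for larger undertakings: the higher of the two amounts."""
    return max(fixed_eur, share * global_annual_revenue_eur)


revenue = 2_000_000_000  # hypothetical global annual revenue

# Non-compliance: up to €35M or 7% of revenue -> €140M here.
print(fine_cap(revenue, 35_000_000, 0.07))
# Incorrect information to authorities: up to €7.5M or 1.5% -> €30M here.
print(fine_cap(revenue, 7_500_000, 0.015))
```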

How to prepare

While the impact of the EU AI Act will be drastic and costly for some organizations, the good news is that there is still time to prepare and to adapt your AI systems to the law. Nevertheless, starting as soon as possible is important to ensure that the AI systems you are currently using and developing are compliant once the EU AI Act becomes binding. Otherwise, you risk having to take them offline or pay high fines.

As pointed out earlier, the EU's AI strategy is to make AI trustworthy, and the key element of that is ensuring transparency throughout all development stages. This not only makes compliance easier, but it also helps bring everyone involved in AI development onto the same page.

We at trail want you to fully understand the whole development process, regardless of your (non-)technical background. Check out how we can help you with documentation, audits, and understanding your models and data during development.