The European Union Artificial Intelligence (AI) Act is finally in force, five months after it was passed by the European Parliament. Now that the law has taken effect, it stands as the world's first comprehensive AI regulation and could set the pace for other countries seeking to regulate the emerging sector.
However, most provisions in the AI Act will not apply immediately; the rules take effect on a staggered timeline. This is intended to give companies time to comply as the sector evolves and to allow member states to equip themselves for enforcement.
AI use cases are classified by risk
The Act adopts a risk-based classification system to determine which rules apply to each AI system. The categories are minimal or no risk, limited risk, high risk, and unacceptable risk (prohibited), and the category an AI system falls into determines which obligations apply to it and when they take effect.
For prohibited AI systems, the ban takes effect in February 2025. It covers, among other practices, systems that scrape facial images from the internet or CCTV footage to expand facial recognition databases and systems that use manipulative techniques to distort users' decisions.
Those in the high-risk category have longer to comply with the strict rules applicable to them: 24 months from August 1 for most high-risk systems, and 36 months for those embedded in products already covered by EU safety legislation. AI systems classified as high-risk include those used for facial recognition and other biometrics, critical infrastructure and public services, education, employment, and medical software.
The requirements for this group include presenting training datasets to regulators for audit, providing proof of human oversight, and conducting pre-market conformity assessments. High-risk systems used by government agencies or for public services must also be registered in the EU's database.
Meanwhile, about 85% of AI systems fall into the minimal risk category, which carries far lighter rules. Still, the Act sets out penalties to deter violations: fines range from up to 7% of global annual turnover for deploying prohibited AI systems down to 1.5% for supplying incorrect information to regulators.
Generative AI faces minimal restrictions
Developers of generative AI systems have less to worry about: the Act treats their models as general-purpose AI (GPAI) and classifies most of them as minimal risk. Popular chatbots such as Meta AI, ChatGPT, Perplexity, and Claude would therefore not be significantly affected.
The Act mainly requires greater transparency from these companies, including disclosures about their training data, along with compliance with EU copyright rules. However, a small group of GPAIs is considered capable of posing systemic risks: those trained with computing power above a specified threshold, which the Act presumes is crossed at more than 10^25 floating-point operations.
Interestingly, the EU itself, through the Commission's AI Office, is responsible for enforcing the rules on GPAIs, while each member state is responsible for enforcing the general rules under the Act. Member states have until August 2025 to designate the national bodies that will implement the Act.
Meanwhile, plenty of grey areas remain, including the specific guidelines that GPAI developers must follow. Regulators are drawing up Codes of Practice for this purpose, with the AI Office leading the charge.
The AI Office, the body charged with monitoring the AI ecosystem and supporting its development, recently announced a consultation and called on all stakeholders to participate in the rule-making process. It intends to have the Codes ready by April 2025.