The European Union’s Artificial Intelligence Act (AI Act) officially takes effect on Aug. 1, following its publication in the Official Journal of the EU on July 12.
This landmark legislation marks a significant step toward regulating the rapidly evolving landscape of AI within the boundaries of the EU. As stakeholders across various industries prepare for these new rules, understanding the phased implementation and key aspects of the AI Act is crucial.
AI Act implementation timeline
Under the AI Act’s implementation scheme, the legislation will be introduced gradually, similar to the EU’s phased rollout of its Markets in Crypto-Assets (MiCA) regulation, giving organizations time to adjust and comply.
The EU is renowned for its complex bureaucracy, and the AI Act is no exception: Aug. 1 merely starts the official countdown, with the key stages of implementation set to unfold throughout 2025 and 2026.
The first of these will be the “Prohibitions of Certain AI Systems,” which takes effect in February 2025. This set of rules will prohibit AI applications that exploit individual vulnerabilities, engage in non-targeted scraping of facial images from the internet or CCTV footage, and create facial recognition databases without consent.
Following this, general-purpose AI (GPAI) models will face a new set of requirements starting in August 2025. GPAI models are designed to handle a broad range of tasks rather than a single narrow purpose, such as image identification.
Rules for certain high-risk AI (HRAI) systems, including those posing specific transparency risks, will come into effect by August 2026.
For example, if the HRAI system is part of a product subject to EU health and safety laws, such as toys, the rules will apply by August 2027. For HRAI systems used by public authorities, compliance is mandatory by August 2030, irrespective of any design changes.
Keeping companies in compliance
The enforcement of the AI Act will be robust and multi-faceted. The EU intends to establish and designate national regulatory authorities in each of the 27 member states to oversee compliance.
These authorities will have the power to conduct audits, demand documentation, and enforce corrective actions. The European Artificial Intelligence Board (EAIB) will coordinate and ensure consistent application across the EU.
Companies dealing with AI will have to meet compliance obligations in the categories of risk management, data governance, information transparency, human oversight and post-market monitoring.
Industry insiders have recommended that companies begin preparing now: conducting thorough audits of their AI systems, establishing comprehensive documentation practices, and investing in robust data governance frameworks.
Non-compliance with the AI Act can result in severe penalties, with fines of up to 35 million euros or 7% of a company’s total worldwide annual turnover, whichever is higher.
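The “whichever is higher” cap works out as a simple maximum of the two figures. As an illustration only (the function name and structure are my own, not part of the legislation):

```python
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    """Upper bound on a fine for the most serious AI Act violations:
    the higher of 35 million euros or 7% of worldwide annual turnover."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# A firm with 1 billion euros in annual turnover faces up to 70 million euros,
# since 7% of its turnover exceeds the 35-million-euro floor.
print(max_ai_act_fine(1_000_000_000))
```

For smaller firms whose 7% figure falls below 35 million euros, the fixed amount becomes the applicable ceiling.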
The AI Act complements the General Data Protection Regulation (GDPR), enacted in May 2018, by addressing AI-specific risks and ensuring that AI systems respect fundamental rights.
While GDPR focuses on data protection and privacy, the AI Act emphasizes safe and ethical AI deployment. Already, we’ve seen major tech companies like Meta, the parent company of Facebook and Instagram, delay AI-integrated products in the EU due to “regulatory uncertainty” in regard to the GDPR and AI Act legislation.