Members of the European Parliament (MEPs) have given their preliminary approval to a proposed framework for regulating the use of artificial intelligence (AI) within the European Union. The decision comes after the Internal Market and Civil Liberties Committees voted overwhelmingly, with 71 votes in favor, 8 against, and 7 abstentions, to endorse the outcome of negotiations with EU member states on the EU’s Artificial Intelligence Act.
European lawmakers push AI Act
The primary goal of the regulation is to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI. At the same time, it seeks to foster innovation and consolidate Europe’s position as a global leader in AI development.
The proposed AI Act includes provisions to protect the rights of various stakeholders, including authors, artists, and other creators, in light of the emergence of generative AI models. It also bans AI applications that threaten citizens’ rights, such as biometric categorization based on sensitive characteristics and social scoring.
Additionally, the legislation mandates that deepfake images, audio, and video be clearly labeled as artificially generated or manipulated. One of the central elements of the AI Act is the regulation of “high-risk AI systems,” particularly those deployed in critical sectors such as healthcare, banking, and essential infrastructure.
These systems will be subject to specific obligations to ensure their safety, transparency, and accountability. The legislation also introduces regulatory sandboxes and real-world testing regimes so that innovative AI applications can be trialed before they are placed on the market.
Addressing concerns in AI partnerships
The proposed AI Act is scheduled to undergo a final vote in the European Parliament in either March or April of this year. Once approved, it is expected to become fully applicable 24 months after its entry into force, although certain provisions, such as the bans on specific AI applications and the rules on codes of practice and governance, will take effect earlier.
The European Union has been closely monitoring the rapid development of AI technologies and their potential impact on various sectors. Earlier this year, concerns were raised regarding Microsoft’s substantial investment in OpenAI, the organization behind ChatGPT and other advanced AI models.
This investment prompted scrutiny from EU regulators over potential antitrust violations and its implications for market competition. Margrethe Vestager, the Executive Vice-President responsible for competition policy within the European Commission, emphasized the importance of assessing potential competition issues arising from such partnerships.
She stressed the need to prevent any undue distortion of market dynamics while ensuring that AI collaborations adhere to regulatory standards. As part of this process, the European Commission has initiated a review to determine whether Microsoft’s investment in OpenAI is reviewable under the EU Merger Regulation.
The approval of the preliminary agreement on the EU’s Artificial Intelligence Act represents a significant step towards establishing comprehensive regulations governing AI use within the European Union. By prioritizing the protection of fundamental rights and promoting responsible innovation, the proposed legislation aims to address emerging challenges while harnessing the potential benefits of AI technologies for society as a whole.