According to Cointelegraph, the European Union is advancing its efforts to shape the future of artificial intelligence with the creation of the first 'General-Purpose AI Code of Practice' under its AI Act. This initiative, led by the European AI Office, brings together hundreds of global experts from academia, industry, and civil society to collaboratively draft a framework addressing key issues such as transparency, copyright, risk assessment, and internal governance.

The process began with an online plenary session on September 30, which saw nearly 1,000 participants. This marks the start of a months-long effort that will culminate in a final draft by April 2025. The Code of Practice aims to be a cornerstone for applying the AI Act to general-purpose AI models, including large language models (LLMs) and AI systems used across various sectors.

Four working groups, led by distinguished industry chairs and vice-chairs, have been established to drive the development of the Code of Practice. These groups include experts such as Nuria Oliver, an artificial intelligence researcher, and Alexander Peukert, a German copyright law specialist. They will focus on transparency and copyright, risk identification, technical risk mitigation, and internal risk management. The European AI Office stated that these groups will meet between October 2024 and April 2025 to draft provisions, gather stakeholder input, and refine the Code of Practice through ongoing consultation.

The EU’s AI Act, passed by the European Parliament in March 2024, is a landmark piece of legislation aimed at regulating AI technology across the bloc. It establishes a risk-based approach to AI governance, categorizing systems into different risk levels and mandating specific compliance measures. This act is particularly relevant to general-purpose AI models due to their broad applications and potential for significant societal impact, often placing them in higher-risk categories.

However, some major AI companies, including Meta, have criticized the regulations as too restrictive, arguing that they could stifle innovation. In response, the EU's collaborative approach to drafting the Code of Practice aims to balance safety and ethics with fostering innovation. The multi-stakeholder consultation has already garnered over 430 submissions, which will help shape the drafting of the code.

The EU aims to set a precedent for the responsible development, deployment, and management of general-purpose AI models by April 2025, with an emphasis on minimizing risks and maximizing societal benefits. As the global AI landscape evolves, this effort is likely to influence AI policies worldwide, with more countries looking to the EU for guidance on regulating emerging technologies.