The world has moved a step closer to unifying its goals and values around artificial intelligence, with world leaders meeting to sign the Council of Europe’s AI convention.
The United States, European Union and the United Kingdom are expected to sign the convention on Sept. 5. The convention highlights human rights and democratic values as key to regulating public and private-sector AI models.
United in AI
The convention would be the world’s first international treaty on AI that is legally binding on its signatories. It holds them accountable for any harm or discrimination that results from AI systems.
It also mandates that the outputs of such systems respect citizens’ equality and privacy rights, giving victims of AI-related rights violations legal recourse. However, enforcement measures for violations, such as fines, have yet to be put into place.
For now, compliance with the treaty is enforced only through monitoring.
The UK’s minister for science, innovation and technology, Peter Kyle, said that signing the treaty would be an “important” first step globally.
“The fact that we hope such a diverse group of nations is going to sign up to this treaty shows that actually, we are rising as a global community to the challenges posed by AI.”
The treaty was initially drafted two years ago, with contributions from over 50 countries, including Canada, Israel, Japan, and Australia.
While it may be the first international treaty on AI, individual nations have been moving to implement more localized AI regulations of their own.
Global AI rules
Over the summer, the EU became the first jurisdiction to implement sweeping rules governing the development and deployment of AI models, particularly high-capability models trained with large amounts of computing power.
The EU AI Act came into force on Aug. 1 and introduced significant regulations for AI through phased implementation and key compliance obligations.
Though the law was developed with safety in mind, it has also become a point of contention among AI developers, who say it stifles further innovation in the region.
This can be seen with companies such as Meta, Facebook’s parent company, which built one of the biggest large language models (LLMs), Llama 2.
Meta has stopped rolling out its latest products in the EU, citing the regulations, leaving Europeans without access to its newest and most advanced AI tools.
Tech firms banded together in August to pen a letter to EU leaders, asking for more time to comply with the regulations.
AI in the US
In the US, Congress has not yet implemented a nationwide framework for AI regulation. However, the Biden administration has put committees and task forces for AI safety into place.
Meanwhile, in California, lawmakers have been busy drafting and passing AI regulations. In the last two weeks alone, two bills have passed the State Assembly and are awaiting a final decision from California Governor Gavin Newsom.
The first regulates and penalizes the creation of unauthorized AI-generated digital replicas of deceased personalities.
The second is a highly controversial bill, opposed by many leading AI developers, that mandates safety testing for most of the most advanced AI models and requires developers to outline a “kill switch” for such models.
California’s AI regulation is significant because the state is home to the headquarters of some of the world’s leading developers, such as OpenAI, Meta and Alphabet (Google).