According to Cointelegraph, the Group of Seven (G7) industrialized nations are set to agree on an artificial intelligence (AI) code of conduct for developers on October 30. The code consists of 11 points aimed at promoting 'safe, secure, and trustworthy AI worldwide' and helping to 'seize' the benefits of AI while addressing and mitigating the risks it poses. Drafted by G7 leaders in September, the plan offers voluntary guidance for organizations developing advanced AI systems, including foundation models and generative AI systems.
The code also suggests that companies should publish reports on the capabilities, limitations, use, and misuse of the systems they build, and recommends robust security controls for those systems. The G7 comprises Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States, with the European Union also participating. This year's G7 summit took place in Hiroshima, Japan, where participating digital and tech ministers met on April 29 and 30. Topics covered included emerging technologies, digital infrastructure, and AI, with an agenda item specifically dedicated to responsible AI and global AI governance.
The G7's AI code of conduct comes as governments worldwide try to navigate the emergence of AI, balancing its capabilities against its risks. The EU was among the first governing bodies to establish guidelines with its landmark EU AI Act, whose first draft passed in June. On October 26, the United Nations established a 39-member advisory committee to tackle issues related to the global regulation of AI. The Chinese government also launched its own AI regulations, which began to take effect in August. From within the industry, OpenAI, the developer of the popular AI chatbot ChatGPT, announced plans to create a 'preparedness' team to assess a range of AI-related risks.