A group of leading artificial intelligence scientists has published an open letter warning of the dire risks of losing control of AI. They call for international cooperation to build a global monitoring system to mitigate the risks.

On September 16, AI experts from around the world, including Turing Award winners, warned of the risk of catastrophe if artificial intelligence slips out of human control. They argued that the development of current AI systems has outpaced the oversight and safeguards needed to manage them.

“We do not yet have a strong scientific foundation for ensuring the safety of advanced AI systems,” the group of experts said. Without strict controls, AI could cause irreparable damage on a global scale.

The panel of experts recommended establishing a monitoring and rapid-response system to detect AI-related incidents. This would require international cooperation, with governments around the world playing an active role. They stressed the need for a global contingency plan to address and mitigate potential risks from advanced AI systems.

International cooperation seen as key to ensuring AI safety

The letter stems from concerns discussed at the International Dialogue on AI Safety in Venice in early September. The meeting, organized by the AI Safety Forum, brought together experts from around the world to address the growing risks posed by the unchecked development of AI.

Key recommendations from the group include the creation of emergency preparedness bodies, a comprehensive safety assurance framework, and independent global research into AI safety and verification. This international effort would aim to ensure that AI technology is developed and deployed in a way that minimizes risks while maintaining control over its progress.

Professor Gillian Hadfield from Johns Hopkins University, who shared the statement on X (formerly Twitter), asked: “If a disaster happens in the next six months, if we find that AI models are automatically improving themselves, who will be responsible for solving it?”

However, political tensions between the US and China have sharply curtailed scientific cooperation, making agreement on AI safety measures difficult. This, the experts argue, only underscores the urgency of building a global system of cooperation to address potential risks from AI.

Despite the difficulties, there have been significant steps toward regulating AI. In early September, the US, EU and UK signed the first legally binding international treaty on AI, emphasizing human rights and accountability in the development of the technology. However, many tech companies are concerned that tightening regulations could stifle innovation, especially in regions like the European Union.

The letter, signed by more than 30 AI experts from countries including the US, Canada, China, the UK and Singapore, stressed the importance of a global approach. The experts called on countries to come together to build a strong international regulatory framework to ensure that artificial intelligence develops in a controlled manner and does not pose catastrophic risks to humanity.