Contingency Plans Needed If Humans Lose Control of AI
On 16 September, a coalition of leading AI scientists issued a statement calling for the establishment of a global oversight system to mitigate the risk of "catastrophic outcomes" should AI systems slip beyond human control.
They expressed concerns that the technology they helped create could pose significant threats if not properly managed.
The statement reads:
“Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity. Unfortunately, we have not yet developed the necessary science to control and safeguard the use of such advanced intelligence.”
The scientists emphasized the need for national authorities to be equipped to detect and address AI-related incidents and risks.
They also advocated for the development of a "global contingency plan" to prevent the emergence of AI models with potential global catastrophic risks.
As they stated:
“In the longer term, states should develop an international governance regime to prevent the development of models that could pose global catastrophic risks.”
AI Safety is a Global Public Good
The statement builds on discussions from the International Dialogue on AI Safety held in Venice in early September, the third event organised by the Safe AI Forum, a nonprofit US research organisation.
Professor Gillian Hadfield from Johns Hopkins University highlighted the urgency in a post on X (formerly known as Twitter):
“We don't have to agree on the probability of catastrophic AI events to agree that we should have some global protocols in place in the event of international AI incidents that require coordinated responses.”
The statement underscores that AI safety is a global public good, necessitating international collaboration and governance:
“The global nature of these risks from AI makes it necessary to recognize AI safety as a global public good, and work towards global governance of these risks. Collectively, we must prepare to avert the attendant catastrophic risks that could arrive at any time.”
The AI developers outlined three critical processes: emergency preparedness agreements and institutions, a safety assurance framework, and independent global AI safety and verification research.
The statement was endorsed by over 30 signatories from the US, Canada, China, Britain, Singapore, and other countries, including experts from top AI research institutions and universities, as well as several Turing Award winners.
The scientists noted that the widening scientific rift and deepening distrust between the US and China have made it more challenging to reach consensus on AI risks.