**AI Scientists Call for Global Oversight to Prevent Catastrophic Outcomes**

A group of leading AI scientists is urging nations to establish a global oversight system to avert catastrophic outcomes should humans lose control of AI. In a statement released on Sept. 16, the scientists warned that advanced AI could cause severe harm if not properly managed, and they emphasized the need for international cooperation and governance to detect and respond to AI incidents.

Key proposals include:

- Emergency preparedness agreements
- A safety assurance framework
- Independent global AI safety research

The call to action follows findings from the International Dialogue on AI Safety held in Venice. The scientists stressed that AI safety is a global public good, requiring a unified international approach to mitigate risks.