Ethereum co-founder Vitalik Buterin has raised alarms about the risks associated with superintelligent AI and the need for a strong defense mechanism.
Buterin’s comments come as concerns about AI safety have grown significantly amid the rapid development of artificial intelligence.
Buterin’s AI Regulation Plan: Liability, Pause Buttons, and International Control
In a blog post dated January 5, Vitalik Buterin outlined the idea behind ‘d/acc,’ or defensive acceleration, under which technology should be developed to defend rather than to cause harm. However, this is not the first time Buterin has spoken openly about the risks associated with artificial intelligence.
“One way in which AI gone wrong could make the world worse is (almost) the worst possible way: it could literally cause human extinction,” Buterin said in 2023.
Buterin has now followed up on his warnings from 2023. According to Buterin, superintelligence could be just a few years away.
“It’s looking likely we have three-year timelines until AGI and another three years until superintelligence. And so, if we don’t want the world to be destroyed or otherwise fall into an irreversible trap, we can’t just accelerate the good, we also have to slow down the bad,” Buterin wrote.
To mitigate AI-related risks, Buterin advocates building decentralized AI systems that remain tightly linked to human decision-making. Keeping AI as a tool in human hands, he argues, minimizes the threat of catastrophic outcomes.
Buterin then explained how militaries could be the responsible actors in an ‘AI doom’ scenario. Military use of AI is rising globally, as seen in Ukraine and Gaza. Buterin also believes that any AI regulation that comes into effect would most likely exempt militaries, making them a significant threat.
The Ethereum co-founder further outlined his plans for regulating AI usage. He said the first step in avoiding AI-related risks is to make users liable.
“While the link between how a model is developed and how it ends up being used is often unclear, the user decides exactly how the AI is used,” Buterin explained, highlighting the role played by users.
If the liability rules don’t work, the next step would be to implement “soft pause” buttons that allow regulators to slow the pace of potentially dangerous advancements.
“The goal would be to have the capability to reduce worldwide available compute by ~90-99% for 1-2 years at a critical period, to buy more time for humanity to prepare.”
He said the pause could be implemented through verification and registration of AI hardware locations.
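Buterin’s post does not include an implementation, but a rough sketch of how a registration-and-location check could gate compute during a pause might look like the following. Every name and data structure here is purely illustrative, not something the post prescribes:

```python
# Hypothetical sketch of a "soft pause" enforced via hardware
# registration and location verification. All names are illustrative;
# Buterin's post does not prescribe an implementation.
from dataclasses import dataclass

@dataclass
class ChipRecord:
    chip_id: str
    registered_location: str  # jurisdiction where the chip was registered

# A registry of known AI accelerators (in practice this would be an
# international database, not an in-memory dict).
REGISTRY: dict[str, ChipRecord] = {}
PAUSE_ACTIVE = False  # flipped on during the 1-2 year "critical period"

def register_chip(chip_id: str, location: str) -> None:
    REGISTRY[chip_id] = ChipRecord(chip_id, location)

def authorize_compute(chip_id: str, attested_location: str) -> bool:
    """Grant compute only to registered chips whose attested location
    matches the registry, and deny everything while a pause is active."""
    record = REGISTRY.get(chip_id)
    if record is None:
        return False  # unregistered hardware is never authorized
    if record.registered_location != attested_location:
        return False  # location mismatch suggests the chip was moved covertly
    return not PAUSE_ACTIVE

register_chip("chip-001", "EU")
print(authorize_compute("chip-001", "EU"))  # True while no pause is active
PAUSE_ACTIVE = True
print(authorize_compute("chip-001", "EU"))  # False during the pause
```

In this reading, the registry is what makes a worldwide compute reduction enforceable: unregistered or relocated hardware simply never receives authorization.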
Another approach would be to control AI hardware. Buterin explained that AI hardware could be equipped with a chip to control it.
The chip would allow AI systems to function only if they receive three signatures from international bodies each week. He further added that at least one of those bodies should be non-military affiliated.
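Again, the post gives no technical details, but a hedged sketch of that weekly three-signature gate could look like this. A simple set membership check stands in for real cryptographic signature verification, and the body names are invented:

```python
# Hypothetical sketch of the weekly three-signature gate described above.
# Real hardware would verify cryptographic signatures; a simple set of
# approving bodies stands in for that here, and all names are invented.
import datetime

INTERNATIONAL_BODIES = {
    "body_a": {"military_affiliated": True},
    "body_b": {"military_affiliated": True},
    "body_c": {"military_affiliated": False},  # the non-military signer
}

def chip_allows_operation(signatures: set[str], signed_week: int) -> bool:
    """The chip keeps the AI system running only if it holds at least
    three valid signatures for the current ISO week, including at least
    one from a non-military-affiliated body."""
    current_week = datetime.date.today().isocalendar()[1]
    if signed_week != current_week:
        return False  # signatures expire weekly and must be renewed
    valid = signatures & INTERNATIONAL_BODIES.keys()
    if len(valid) < 3:
        return False  # fewer than three recognized bodies signed off
    return any(
        not INTERNATIONAL_BODIES[b]["military_affiliated"] for b in valid
    )

week = datetime.date.today().isocalendar()[1]
print(chip_allows_operation({"body_a", "body_b", "body_c"}, week))  # True
print(chip_allows_operation({"body_a", "body_b"}, week))  # False: two signers
```

The weekly expiry is the key design choice in Buterin’s description: because authorization lapses by default, withholding signatures is enough to halt systems, rather than requiring an active shutdown order.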
Nevertheless, Buterin admitted that his strategies have holes and are only ‘temporary stopgaps.’