Ethereum (ETH) co-founder Vitalik Buterin has raised concerns about the risks associated with superintelligent artificial intelligence (AI) and the need for a strong defense mechanism.
Buterin's comments come as security concerns have grown significantly alongside the rapid development of artificial intelligence.
Buterin’s AI Regulatory Plan
In a blog post on January 5, Vitalik Buterin outlined his idea of ‘d/acc or defensive acceleration’, under which technology should be developed to defend rather than cause harm. This is not the first time Buterin has addressed the risks associated with artificial intelligence.
“One way AI could go wrong and make the world worse is in (almost) the worst possible way: it could literally cause human extinction,” Buterin said in 2023.
Buterin has now followed up on those 2023 warnings. According to him, superintelligence is potentially just a few years away.
“It seems likely that we will have timelines of three years until AGI and another three years until superintelligence. And so, if we don't want the world to be destroyed or fall into an irreversible trap, we can't just speed up the good, we also have to slow down the bad,” Buterin wrote.
To mitigate AI-related risks, the Ethereum co-founder advocates creating decentralized AI systems that remain tightly linked to human decision-making. By ensuring that AI remains a tool in the hands of humans, the threat of catastrophic outcomes can be minimized.
Buterin then explained how the military could be a prime culprit in an ‘AI doom’ scenario. Military use of AI is increasing globally, as seen in Ukraine and Gaza. Buterin also believes that any AI regulations that come into effect would likely exempt militaries, making them a significant threat.
The Ethereum co-founder further outlined his plans to regulate the use of the technology. He said the first step to avoiding risks associated with AI is to hold users accountable.
“While the link between how a model is developed and how it ends up being used is often murky, the user ultimately decides exactly how the AI is used,” Buterin explained, highlighting the role users play.
If liability rules don't work, the next step would be “soft pause” buttons that would allow regulators to slow the pace of potentially dangerous advances.
“The goal would be to have the ability to reduce globally available computing by ~90-99% for 1-2 years in a critical period, to buy more time for humanity to prepare,” he wrote.
He said the pause could be implemented by verifying and registering the location of AI hardware. Another approach would be to control the hardware itself: Buterin explained that AI chips could be equipped with a mechanism enforcing the pause.
Such a chip would only allow AI systems to keep operating if it receives three signatures from international bodies every week, at least one of which must come from a non-military body.
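To make the mechanism concrete, here is a minimal sketch of how such a weekly check might work. Everything in it is hypothetical (the Authority registry, the body names, the chip_allows_operation function), and stdlib HMAC with shared secrets stands in for the asymmetric signatures a real trusted chip would verify; Buterin's post does not specify an implementation.

```python
import hmac
import hashlib
from dataclasses import dataclass
from datetime import date

# Hypothetical registry of signing bodies. A real design would verify
# asymmetric signatures (e.g. Ed25519) against embedded public keys;
# HMAC shared secrets are used here only to keep the sketch self-contained.
@dataclass(frozen=True)
class Authority:
    name: str
    key: bytes          # shared secret standing in for a public key
    is_military: bool

AUTHORITIES = [
    Authority("body-a", b"secret-a", is_military=False),
    Authority("body-b", b"secret-b", is_military=True),
    Authority("body-c", b"secret-c", is_military=False),
]

def weekly_message(today: date) -> bytes:
    """The message every body signs: the current ISO year and week,
    so a signature automatically expires when the week rolls over."""
    year, week, _ = today.isocalendar()
    return f"allow-run:{year}-W{week:02d}".encode()

def sign(authority: Authority, msg: bytes) -> bytes:
    return hmac.new(authority.key, msg, hashlib.sha256).digest()

def chip_allows_operation(signatures: dict, today: date) -> bool:
    """Enforce the rule as described: three valid signatures over this
    week's message, at least one from a non-military body."""
    msg = weekly_message(today)
    valid = [
        a for a in AUTHORITIES
        if a.name in signatures
        and hmac.compare_digest(sign(a, msg), signatures[a.name])
    ]
    return len(valid) >= 3 and any(not a.is_military for a in valid)

# Example: all three bodies sign this week's message, so the chip
# permits operation; missing or stale signatures would block it.
sigs = {a.name: sign(a, weekly_message(date.today())) for a in AUTHORITIES}
assert chip_allows_operation(sigs, date.today())
```

Binding the signed message to the calendar week is what gives the scheme its “soft pause” property: if the bodies simply stop signing, every chip halts within days without any further coordination.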
However, Buterin admitted that his strategies have flaws and are only 'temporary solutions'.