Vitalik Buterin Sounds the Alarm: The Looming Threat of Superintelligent AI
Imagine a world where artificial intelligence surpasses human intelligence, leading to catastrophic outcomes, including the possibility of human extinction. This isn’t a scene from a sci-fi movie; it’s a real concern raised by Ethereum co-founder Vitalik Buterin. As AI technology rapidly advances, Buterin is urging the world to take action and develop a strong defense mechanism against the risks associated with superintelligent AI.
The Risks of Unchecked AI Growth
Buterin’s warnings come amid mounting concern over AI safety. In a recent blog post, he outlined his concept of “d/acc,” or defensive acceleration, which argues that technology should be developed with defense in mind rather than with a singular focus on progress. This isn’t the first time Buterin has spoken out about AI risks; in 2023, he highlighted the potential for AI to cause human extinction.
A Three-Year Timeline to Superintelligence
Buterin’s latest warnings are more pressing than ever, as he believes superintelligence could be just a few years away. “It’s looking likely we have three-year timelines until AGI and another three years until superintelligence,” he wrote. This timeline underscores the need for immediate action to mitigate AI-related risks.
Decentralized AI Systems: A Potential Solution
To minimize the threat of catastrophic outcomes, Buterin advocates for the creation of decentralized AI systems that remain tightly linked with human decision-making. By ensuring that AI remains a tool in human hands, we can reduce the risk of devastating consequences.
The Role of Militaries in AI Regulation
Buterin also highlights the risks of military use of AI, which is on the rise globally. He believes any AI regulation would likely exempt militaries, making them a significant threat. This raises important questions about the need for international cooperation and regulation in how AI is developed and deployed.
A Three-Step Plan to Regulate AI
Buterin proposes a three-step plan to regulate AI usage:
1. Liability: Make users liable for the consequences of their AI use. Holding users accountable encourages more responsible AI development and deployment.
2. Soft pause buttons: Give regulators a “soft pause” mechanism to slow potentially dangerous advancements, for example by reducing worldwide available compute by 90-99% for one to two years to buy time for humanity to prepare.
3. International control: Equip AI hardware with a chip that requires three signatures each week from international bodies, at least one of them with no military affiliation. A simplified sketch of such a check follows this list.
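To make the third step concrete, here is a minimal illustrative sketch of how a chip might verify the weekly three-signature rule. It is not taken from Buterin’s post or any real hardware standard: the Python names, the weekly-epoch scheme, and the use of HMAC with shared keys as a stand-in for real cryptographic signatures are all assumptions made purely for demonstration.

```python
# Illustrative sketch only: a hypothetical firmware check for the weekly
# three-signature rule Buterin describes. HMAC with shared keys stands in
# for real asymmetric signatures; every name here is invented for the
# example, not part of any real chip or proposed standard.
import hashlib
import hmac
from dataclasses import dataclass

WEEK_SECONDS = 7 * 24 * 3600
REQUIRED_SIGNERS = 3  # the chip refuses to operate without three co-signs

@dataclass(frozen=True)
class Signer:
    name: str
    key: bytes                 # stand-in for a verification key
    military_affiliated: bool

def weekly_message(now: float) -> bytes:
    """Derive the message all signers must co-sign for the current week."""
    epoch = int(now // WEEK_SECONDS)
    return f"allow-epoch:{epoch}".encode()

def sign(signer: Signer, now: float) -> bytes:
    """A body authorizes the current week (HMAC as a toy signature)."""
    return hmac.new(signer.key, weekly_message(now), hashlib.sha256).digest()

def chip_allows_operation(now: float, signatures: dict[str, bytes],
                          registry: dict[str, Signer]) -> bool:
    """Allow operation only if at least three registered bodies signed this
    week's message, and at least one of them is non-military-affiliated."""
    msg = weekly_message(now)
    valid = [
        registry[name]
        for name, sig in signatures.items()
        if name in registry and hmac.compare_digest(
            sig, hmac.new(registry[name].key, msg, hashlib.sha256).digest())
    ]
    if len(valid) < REQUIRED_SIGNERS:
        return False
    return any(not s.military_affiliated for s in valid)

# Example: two military-affiliated bodies plus one civilian body suffice.
registry = {
    "body_a": Signer("body_a", b"key-a", military_affiliated=True),
    "body_b": Signer("body_b", b"key-b", military_affiliated=True),
    "body_c": Signer("body_c", b"key-c", military_affiliated=False),
}
now = 1_700_000_000.0
sigs = {name: sign(s, now) for name, s in registry.items()}
assert chip_allows_operation(now, sigs, registry)
```

In a real system the keys would be asymmetric and the check would run in tamper-resistant hardware; the sketch only shows that a 3-of-N rule with weekly expiry and a non-military quorum requirement is simple to state in code.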
Temporary Stopgaps, Not Permanent Solutions
Buterin acknowledges that his strategies have holes and are only temporary stopgaps. Still, he argues they could buy time for humanity to prepare while more durable safeguards are developed.
As the world hurtles towards a future where AI surpasses human intelligence, it’s essential to take Buterin’s warnings seriously. Will we build defenses against superintelligent AI in time, or succumb to the dangers of unchecked AI growth? The clock is ticking.
Source: Beincrypto.com