Ethereum co-founder Vitalik Buterin has voiced serious concerns about the rapid advancements in artificial intelligence, highlighting the potential risks posed by superintelligent AI systems. As AI development accelerates, Buterin stresses the importance of implementing robust safeguards to mitigate the possibility of catastrophic outcomes. His recent insights, shared in a January 5 blog post, emphasize the necessity of proactive measures to counter potential harm from AI technology.
In his post, Buterin introduces the concept of “defensive acceleration” (d/acc), advocating for technology development focused on protection rather than destruction. He warns that superintelligence—AI surpassing human intelligence—could emerge within the next few years, potentially threatening human survival. “It’s increasingly likely that we have a three-year window until Artificial General Intelligence (AGI) arrives and another three years before superintelligence follows. To avoid irreversible disasters, we must not only accelerate positive advancements but also curb the negative,” Buterin wrote. His vision includes creating decentralized AI systems that prioritize human control, ensuring AI serves humanity rather than endangering it.
Buterin also highlights the risks associated with military applications of AI, citing recent global conflicts like those in Ukraine and Gaza as examples of its growing use in warfare. He warns that military exemptions from AI regulations could amplify these dangers, making militaries key contributors to potential AI-driven disasters. To address such risks, Buterin proposes a multi-pronged strategy for regulating AI usage. This includes making users accountable for how AI systems are employed, implementing “soft pause” mechanisms to temporarily slow down advancements, and controlling AI hardware through specialized chips. These chips would require weekly approval from multiple international organizations, with at least one being non-military, to ensure responsible operation.
While Buterin acknowledges that his proposed measures are not without flaws, he views them as essential interim solutions. By pairing the acceleration of beneficial AI applications with restraints on harmful ones, he underscores the urgency of global collaboration to safeguard humanity’s future in the age of superintelligent AI.