Ethereum co-founder Vitalik Buterin has once again sparked heated discussion with his views on cutting-edge technology. In a blog post on January 5, he made a highly controversial suggestion for addressing the potential threat of superintelligent AI: a 'soft pause' on global AI hardware that would limit industrial-scale computing capacity. The proposal aims to buy humanity time so that the pace of AI development does not outrun our ability to control it.
Superintelligent AI: A Future or a Threat?
Superintelligent AI, meaning AI that surpasses humans at nearly all cognitive tasks, has long been regarded as a 'double-edged sword' in the tech world: it opens up enormous possibilities, but it also raises widespread concern. Buterin predicts that superintelligence could emerge in as little as five years, a pace for which humanity may be unprepared.
His core argument is that the rapid evolution of AI may trigger unpredictable social risks and could even threaten human survival. He therefore proposes a 'soft pause': temporarily and significantly reducing global computing capacity to slow the development of superintelligent AI.
What is a 'soft pause'?
Buterin's proposed 'soft pause' is not simply about halting technological development, but rather limiting the industrial-scale use of computational resources. Its core measures include:
Reduction in Computing Power:
A reduction of up to 99% in global computing resources, effectively slowing down the training and deployment speed of AI models.
The focus is on industrial-scale computing rather than completely restricting the R&D of individuals or small laboratories.
Authorization Mechanism:
Industrial-grade AI hardware needs to receive weekly authorization from international regulatory bodies to operate.
This dynamic approval mechanism is intended to ensure that computing resources are not misused.
Blockchain Technology Verification:
Buterin also proposes using blockchain technology as a tamper-proof verification mechanism for AI hardware, ensuring compliance with the authorization scheme (a toy sketch of what such a check might look like follows below).
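Buterin's post does not spell out an implementation, but the shape of a weekly, multi-party authorization check can be illustrated with a toy model. In the Python sketch below, the regulator names, the 2-of-3 signature threshold, and the use of HMAC in place of whatever real signature scheme and trusted hardware would actually be involved are all illustrative assumptions, not part of the proposal.

```python
# Toy sketch: a device refuses to run unless it holds a fresh, multi-party
# authorization for the current week. Keys, names, and the 2-of-3 threshold
# are hypothetical; a real system would use hardware-backed signatures.
import hashlib
import hmac
import time

WEEK_SECONDS = 7 * 24 * 3600
REGULATOR_KEYS = {            # hypothetical pre-shared keys, one per signing body
    "body_a": b"key-a",
    "body_b": b"key-b",
    "body_c": b"key-c",
}
THRESHOLD = 2                 # a permit needs signatures from at least 2 bodies

def current_week() -> int:
    return int(time.time()) // WEEK_SECONDS

def sign_permit(body: str, device_id: str, week: int) -> bytes:
    msg = f"{device_id}:{week}".encode()
    return hmac.new(REGULATOR_KEYS[body], msg, hashlib.sha256).digest()

def permit_is_valid(device_id: str, week: int, sigs: dict) -> bool:
    if week != current_week():            # stale permits are rejected
        return False
    msg = f"{device_id}:{week}".encode()
    good = sum(
        1 for body, sig in sigs.items()
        if body in REGULATOR_KEYS
        and hmac.compare_digest(sig, hmac.new(REGULATOR_KEYS[body], msg, hashlib.sha256).digest())
    )
    return good >= THRESHOLD

# A device would run this check at startup and refuse large workloads if it fails.
week = current_week()
sigs = {b: sign_permit(b, "gpu-cluster-42", week) for b in ("body_a", "body_b")}
print(permit_is_valid("gpu-cluster-42", week, sigs))   # True: 2 of 3 signatures, current week
```

In the actual proposal the check would presumably be enforced by the hardware itself; the point of the sketch is only that a permit must be both current and backed by multiple authorities.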
This concept aims to buy humanity time to adjust social, legal, and ethical frameworks in preparation for the arrival of superintelligent AI.
The Rationale for a Pause in AI Hardware: Necessary or a Hindrance?
Buterin's proposal has sparked deep reflection in the tech community and the public:
Reasons for Support:
Avoid 'Technological Runaway'
Once superintelligent AI exceeds human control, the consequences could be catastrophic. A pause could help keep the technology on a 'safe track.'
Promote Global Coordination
Authorization mechanisms and hardware regulation help facilitate cooperation among countries to prevent an uncontrolled AI arms race.
Risk Management
If liability rules and other policies prove unable to curb AI risks effectively, a hardware pause would offer a more direct intervention.
Voices of Opposition:
Stifling Innovation
Limiting computing power may undermine competitiveness in the field of AI, especially for research and medical applications that require high-performance computing.
Implementation Difficulty
Persuading the world's major tech powers and companies to jointly limit computing resources would be extremely difficult, and enforcement of such a policy carries significant uncertainty.
Technical Bypass Risk
Hardware limits could push compute onto black markets or be circumvented technically, ultimately leading to regulatory failure.
Buterin's 'Defensive Accelerationism'
This is not Buterin's first statement on the subject. He has long advocated 'defensive accelerationism' (d/acc), which promotes advancing technology prudently and responsibly rather than blindly pursuing speed. His blog post frames the 'soft pause' as an extension of this philosophy: an attempt to turn concern about superintelligent AI into concrete safeguards.
Combining Blockchain and AI Regulation
Notably, Buterin specifically highlights the potential of blockchain technology in AI hardware regulation. A transparent, tamper-proof verification mechanism on a blockchain could make the authorization process more trustworthy. The combination fits his technological philosophy as a leader in the blockchain field and also illustrates how newer technologies might be applied to global governance problems; a minimal illustration of the tamper-evidence idea follows below.
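To make the 'tamper-proof' idea concrete, here is a minimal, hypothetical hash-chained log of authorization records, again in Python. It is not any particular blockchain's API; it only shows why retroactively altering an already-recorded authorization becomes detectable.

```python
# Minimal sketch of tamper evidence: each log entry commits to the hash of the
# previous entry, so editing history breaks the chain. A real deployment would
# put these records on an actual blockchain rather than an in-memory list.
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, record: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"record": record, "hash": entry_hash(prev, record)})

def verify(log: list) -> bool:
    prev = "genesis"
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["record"]):
            return False                  # an altered record breaks the chain
        prev = entry["hash"]
    return True

log = []
append(log, {"week": 2871, "device": "gpu-cluster-42", "authorized": True})
append(log, {"week": 2872, "device": "gpu-cluster-42", "authorized": False})
print(verify(log))                        # True: chain is intact
log[0]["record"]["authorized"] = False    # tampering with recorded history...
print(verify(log))                        # ...is detected: False
```

The design point is that anyone holding the log can re-verify it independently, which is what makes the authorization trail trustworthy without relying on a single record keeper.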
Is a pause a necessary brake or an impossible luxury?
Vitalik Buterin's proposal deeply examines the development path of superintelligent AI from ethical, technological, and social perspectives. His 'soft pause' advocacy offers a worthy option for global consideration, but whether it can be widely accepted remains uncertain.
On the road to superintelligent AI, will humanity hit the brakes? How do you view the feasibility of this 'pause,' and how will it affect the development of technology and society? Feel free to leave comments for discussion!