Ethereum co-founder Vitalik Buterin shared his views on AGI (Artificial General Intelligence) and ASI (Artificial Superintelligence) on Twitter, arguing that development should focus on AI tools rather than on superintelligent beings that could replace humans. He admits that the unchecked development of AGI and ASI frightens him.
AGI is artificial intelligence that can maintain civilization independently
Vitalik defines AGI as AI powerful enough that, if all humans suddenly disappeared and the AI were installed in robots capable of operating independently, it could maintain the development of civilization on its own. He adds that this would mark an evolution from the 'tool-like' nature of traditional AI into a form of 'self-sustaining life.'
Vitalik pointed out that current technology cannot simulate such a scenario: we cannot truly test whether AI could maintain civilization without humans. It is even harder to define what 'civilizational development' means and what conditions count as civilization continuing to operate. These questions are inherently complex, but they may be the key criterion distinguishing AGI from ordinary AI.
(Note: Self-sustaining life forms refer to organisms or life systems that can independently acquire and utilize resources to sustain life activities, adapt to environmental changes, and continue to survive under certain conditions.)
Emphasizing intelligence-enhancing tools, not letting AI replace humans
Vitalik defines artificial superintelligence (ASI) as the stage at which AI's progress surpasses the value that human participation can add, achieving full autonomy and higher efficiency. He cites chess, which only truly entered this stage in the last decade, when AI's level surpassed the best performance achievable through human–AI collaboration. He admits that ASI frightens him because it means humans might truly lose control over AI.
Vitalik states that rather than developing superintelligent beings, we should focus on tools that enhance human intelligence and abilities. He believes AI should assist humans, not replace them, and that this developmental path can reduce the risk of AI becoming uncontrollable while improving society's overall efficiency and stability.
This article, on Vitalik's fears regarding AGI and ASI and his view that humanity should prioritize intelligence-enhancing tools rather than letting AI replace humans, first appeared in Chain News ABMedia.