【Vitalik: Afraid of AGI and ASI, supports building intelligence augmentation tools for humanity rather than superintelligent beings】On December 21, Ethereum founder Vitalik Buterin posted on the X platform about recent AGI-related product developments in the AI field, stating: "My definition of AGI (Artificial General Intelligence) is: AGI is an artificial intelligence powerful enough that, if one day all humans suddenly disappeared and this AI were uploaded into a robotic body, it would be able to continue civilization on its own. Obviously this is a very hard definition to measure, but I think it captures the core intuitive distinction many people have in mind between 'the AI we are used to' and 'AGI': a shift from a tool that constantly depends on human input to a self-sufficient form of life. ASI (Artificial Superintelligence) is a completely different matter; my definition is the point at which humans in the loop no longer add any value to productivity (as in board games, where we actually only reached this point within the past ten years). Yes, ASI scares me, and even the AGI I defined scares me, because it carries obvious risks of loss of control. I support focusing our work on building intelligence augmentation tools for humanity rather than creating superintelligent beings."