🧠#AI Elon Musk and Vitalik Buterin have both voiced support for California's SB 1047 AI safety bill. Vitalik particularly appreciated the bill's introduction of a "critical harm" category, which distinguishes catastrophic risks from other negative outcomes. He also noted that the bill's objective, at least in the medium term, appears to be mandating safety testing: if testing reveals potentially world-threatening capabilities or behaviors in an AI model, that model could not be released.