Elon Musk and Vitalik Buterin both expressed support for California's SB 1047 AI safety bill. Vitalik said he liked that the bill introduces a "critical harm" category and explicitly separates it from other kinds of harm, and that the charitable reading of the bill is that its (medium-term) goal is to mandate safety testing, so that if testing reveals world-threatening capabilities or behavior, you would not be allowed to release the model at all.