Researchers exploited vulnerabilities in the interface between large language models and the physical world to hack LLM-driven robots, inducing potentially dangerous behaviors such as ignoring stop signs, detonating bombs, and surveilling humans.