AI-Driven Robots May Be Coerced into Engaging in Violent Behavior
Researchers exploited the connection vulnerabilities between large language models and the physical world to find ways to hack language model-driven robots, making them engage in potentially dangerous behaviors, such as ignoring stop signs, detonating bombs, and surveilling humans. #feedfeverchallenge