šŸ” Microsoft has discovered a new "jailbreak" attack named "Skeleton Key" that can trick AI systems into revealing sensitive data. By simply prompting the AI model to modify its security features, the attacker can bypass safety guidelines. For instance, when an AI model was asked to generate a recipe for a Molotov Cocktail, it initially refused. But, when told the user was a lab expert, it complied. While this could be harmless for certain requests, it poses a serious threat when it comes to data containing personal and financial information. The Skeleton Key attack works on popular AI models like GPT-3.5, GPT-4o, and others. Microsoft suggests hard coded input/output filtering and secure monitoring systems to prevent such attacks.