On March 28, Microsoft announced the addition of many security features to the Azure AI Studio cloud service, to minimize the possibility of users creating AI chatbot models for unusual or inappropriate activity.

Azure AI Studio is a web-based visual development environment that allows users to train and deploy artificial intelligence (AI) models quickly and easily.

Key added features include Prompt shields, which help prevent “jailbreaking” – where users intentionally make special requests so that the AI ​​returns bad results. For example, crooks can use "jailbreak" to steal data or take control of the system.

Azure AI Studio will soon display warnings when it detects AI samples that are potentially giving false or misleading information.

As OpenAI's largest investor and strategic partner, Microsoft is working to promote the development of safe and responsible AI technologies. These efforts are part of a larger strategy to ensure safety in the AI ​​sector as a whole.

However, Sarah Bird, Product Manager in charge of AI at Microsoft, said that the risk that Large Language Models in particular and AI products in general are still capable of being manipulated will increase with the future. with the popularity of this technology. To ensure safety, a comprehensive AI security strategy is needed, not just relying on the models themselves.

Security enhancements for Azure AI Studio show that Microsoft is proactively responding to AI-related security threats. This is a necessary move to avoid AI abuse and maintain the integrity of future AI interactions.