**Anthropic Launches Expanded Bug Bounty Program**

Artificial intelligence firm Anthropic has unveiled an expanded bug bounty program, offering rewards of up to $15,000 to participants who can "jailbreak" its unreleased, next-generation AI model. The initiative aims to enhance the safety of its flagship AI, Claude 3, by identifying vulnerabilities through "red teaming," a practice in which researchers deliberately attempt to trick the AI into producing unwanted outputs.

Anthropic's new program targets universal jailbreaks: attacks that could consistently bypass safety measures in high-risk domains such as cybersecurity. Experienced AI researchers are encouraged to apply by August 16 for early access to the model; selected participants will help stress-test the robustness of the company's AI safety guardrails.