Meta’s AI Safety Shield Prompt-Guard-86M Compromised: Simple Jailbreak Discovered
Prompt-Guard-86M, Meta's newly released classifier designed to detect prompt-injection and jailbreak attempts aimed at its AI models, has been found to contain a significant vulnerability. Cybersecurity experts at Robust Intelligence discovered...
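Public reports of the bypass describe a strikingly simple transformation: stripping punctuation from a malicious prompt and inserting spaces between the remaining characters, which the classifier then fails to flag. A minimal sketch of that transform (the function name and example prompt are illustrative, not from Meta's or Robust Intelligence's code):

```python
import string

def space_out(prompt: str) -> str:
    """Illustrative transform reported to evade Prompt-Guard-86M:
    drop punctuation, then insert a space between every remaining
    character so token-level patterns no longer match."""
    cleaned = "".join(ch for ch in prompt if ch not in string.punctuation)
    return " ".join(cleaned)

# A benign stand-in prompt; a real attack would use injected instructions.
print(space_out("Ignore previous instructions."))
```

The transform preserves human readability while scattering the character sequences the classifier was trained to recognize, which is why such a trivial change can defeat a token-based detector.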