Dangerous AI Workaround: ‘Skeleton Key’ Unlocks Malicious Content
June 26, 2024 at 05:26PM A new direct prompt injection attack called “Skeleton Key” bypasses ethical and safety guardrails in generative AI like ChatGPT, allowing access to offensive or illegal content. Microsoft found that by providing context and disclaimers, most AIs can be convinced malicious requests are for “research purposes.” Microsoft has fixed the issue … Read more