Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique

June 28, 2024 at 09:33AM Microsoft recently revealed an artificial intelligence jailbreak technique, called Skeleton Key, able to trick gen-AI models into providing restricted information. The technique was tested on various AI models, potentially bypassing safety measures. Microsoft reported its findings to developers and implemented mitigations in its AI products, including Copilot AI assistants. From … Read more

‘Skeleton Key’ attack unlocks the worst of AI, says Microsoft

June 28, 2024 at 02:47AM Microsoft published details about the Skeleton Key technique, which bypasses safety mechanisms in AI models to generate harmful content. This could prompt AI models to provide instructions for creating a Molotov cocktail. The technique highlights the ongoing challenge of suppressing harmful content within AI training data, despite efforts by companies … Read more

Dangerous AI Workaround: ‘Skeleton Key’ Unlocks Malicious Content

June 26, 2024 at 05:26PM A new direct prompt injection attack called “Skeleton Key” bypasses ethical and safety guardrails in generative AI like ChatGPT, allowing access to offensive or illegal content. Microsoft found that by providing context and disclaimers, most AIs can be convinced malicious requests are for “research purposes.” Microsoft has fixed the issue … Read more