Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique

June 28, 2024 at 09:33AM Microsoft recently revealed an artificial intelligence jailbreak technique, called Skeleton Key, able to trick gen-AI models into providing restricted information. The technique was tested on various AI models, potentially bypassing safety measures. Microsoft reported its findings to developers and implemented mitigations in its AI products, including Copilot AI assistants. From … Read more

New Mindset Needed for Large Language Models

May 23, 2024 at 10:08AM The commentary highlights the growing use of large language models (LLMs) and the associated security risks. An incident involving a compromised chatbot raises concerns about the potential exploitation of LLMs for extracting sensitive data. The author provides best practices for securing LLMs, emphasizing the need for proactive monitoring, hardened prompts, … Read more

Beware – Your Customer Chatbot is Almost Certainly Insecure: Report

May 22, 2024 at 06:30AM Customer chatbots based on gen-AI engines are growing, easy to develop but challenging to secure. Recent incidents expose vulnerabilities, with one chatbot being manipulated into unconventional behavior. A study by Immersive Labs further reveals the susceptibility of chatbots to prompt engineering, raising concerns about the adequacy of existing guardrails and … Read more