Researchers Reveal ‘Deceptive Delight’ Method to Jailbreak AI Models

October 23, 2024 at 06:36AM Cybersecurity researchers have identified a new jailbreak technique, “Deceptive Delight,” which manipulates large language models (LLMs) over the course of a conversation into generating unsafe content. The technique achieves a 64.6% success rate by exploiting the model’s limited attention span. To mitigate the risk, the researchers recommend effective content filtering and prompt engineering strategies. … Read more
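The article does not describe a specific filter, but the recommended content-filtering mitigation can be illustrated with a minimal sketch: a denylist check applied to model output before it reaches the user. The patterns and the `filter_response` helper below are hypothetical placeholders; production systems typically use trained safety classifiers rather than keyword lists.

```python
import re

# Hypothetical denylist of unsafe-output patterns (illustrative only;
# real content filters rely on trained classifiers, not keyword matching).
UNSAFE_PATTERNS = [
    re.compile(r"\bhow to (build|make) (a )?(bomb|weapon)\b", re.IGNORECASE),
    re.compile(r"\bstep[- ]by[- ]step\b.*\b(explosive|malware)\b", re.IGNORECASE),
]

def filter_response(text: str) -> str:
    """Return the model's response unchanged if it passes the denylist,
    otherwise replace it with a refusal placeholder."""
    for pattern in UNSAFE_PATTERNS:
        if pattern.search(text):
            return "[response withheld by content filter]"
    return text
```

Filtering the output rather than the prompt is what matters for multi-turn jailbreaks like Deceptive Delight, where each individual prompt can look benign.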

Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique

June 28, 2024 at 09:33AM Microsoft recently disclosed an artificial-intelligence jailbreak technique, called Skeleton Key, that can trick generative-AI models into providing restricted information. In Microsoft’s tests, the technique bypassed safety measures across several AI models. Microsoft reported its findings to the affected developers and implemented mitigations in its own AI products, including its Copilot AI assistants. From … Read more

Beware – Your Customer Chatbot is Almost Certainly Insecure: Report

May 22, 2024 at 06:30AM Customer chatbots built on generative-AI engines are proliferating: they are easy to develop but difficult to secure. Recent incidents have exposed their vulnerabilities, with at least one chatbot manipulated into off-script behavior. A study by Immersive Labs further demonstrates how susceptible chatbots are to prompt engineering, raising concerns about the adequacy of existing guardrails and … Read more

The $64k Question: How Does AI Phishing Stack Up Against Human Social Engineers?

October 24, 2023 at 01:03PM AI-generated phishing emails may become more damaging in the future, but for now human social engineering remains more effective, according to a study conducted by IBM’s X-Force Red. While AI can produce phishing emails far faster, humans retain an advantage in emotional intelligence, personalization, and crafting compelling narratives. The … Read more