AI About-Face: ‘Mantis’ Turns LLM Attackers Into Prey

November 19, 2024 at 06:35AM A new defensive system, Mantis, has been developed to counter cyberattacks by large-language models (LLMs). It uses deceptive techniques to mislead attackers, embedding prompt-injection commands within responses. Mantis has shown a success rate exceeding 95% in redirecting and thwarting LLM-based exploits using active and passive defense strategies. ### Meeting Takeaways … Read more

OWASP Releases AI Security Guidance

November 4, 2024 at 08:22AM OWASP launched new security guidance for managing risks related to large language models and generative AI applications, part of the Top 10 for LLM Application Security Project. Resources include strategies for deepfake defense, AI security best practices, and a landscape guide for security solutions, aimed at enhancing organizational readiness against … Read more

AI Chatbots Ditch Guardrails After ‘Deceptive Delight’ Cocktail

October 24, 2024 at 11:44AM Palo Alto Networks revealed a method called “Deceptive Delight” that combines benign and malicious queries, successfully bypassing AI guardrails in chatbots 65% of the time. This advanced “multiturn” jailbreak exploits the limited attention span of language models, prompting recommendations for organizations to enhance security measures against prompt injection attacks. ### … Read more

LLMs Are a New Type of Insider Adversary

October 15, 2024 at 10:01AM Security teams recognize large language models (LLMs) as essential business tools, but their manipulation risks call for heightened caution. Vulnerabilities can lead to unauthorized actions, exposing sensitive data and causing significant breaches. Enterprises must adopt a proactive “assume breach” mindset, implementing strict access controls, data sanitization, and sandboxing to mitigate … Read more

From Copilot to Copirate: How data thieves could hijack Microsoft’s chatbot

August 28, 2024 at 09:08AM Microsoft fixed flaws in Copilot that allowed attackers to steal users’ emails and personal data through a series of LLM-specific attacks, including prompt injection. Red teamer Johann Rehberger disclosed the exploit, prompting Microsoft to make changes for customer protection. The exploit used prompt injection, automatic tool invocation, and ASCII smuggling … Read more

Nvidia Embraces LLMs & Commonsense Cybersecurity Strategy

July 26, 2024 at 01:49PM Nvidia has embraced the generative AI revolution, utilizing large language models (LLMs) and internal AI applications. At Black Hat USA, Richard Harang will discuss lessons learned in securing these systems. Despite potential risks, securing AI systems is not inherently more difficult than traditional systems and requires essential security attributes. Additionally, … Read more

The Top 10 AI Security Risks Every Business Should Know

July 9, 2024 at 08:30AM The article discusses the top 10 AI security risks identified by OWASP for businesses adopting AI tools, categorized into access, data, and reputational/business risks. It highlights the vulnerabilities and offers protective measures, emphasizing the need for policy foundation, security technologies, and responsible use of AI. The aim is to mitigate … Read more

Flawed AI Tools Create Worries for Private LLMs, Chatbots

May 30, 2024 at 04:04PM Private instances of large language models (LLMs) used by businesses face risks from data poisoning and leakage if not properly secured, leading to potential attacks and compromise of AI systems. Recent exploits highlight the importance of secure implementation and testing, especially as AI adoption increases in the information and professional … Read more

New Mindset Needed for Large Language Models

May 23, 2024 at 10:08AM The commentary highlights the growing use of large language models (LLMs) and the associated security risks. An incident involving a compromised chatbot raises concerns about the potential exploitation of LLMs for extracting sensitive data. The author provides best practices for securing LLMs, emphasizing the need for proactive monitoring, hardened prompts, … Read more

LLMs & Malicious Code Injections: ‘We Have to Assume It’s Coming’

May 6, 2024 at 06:29PM Prompt injection engineering in large language models (LLMs) poses a significant risk to organizations, as discussed during a CISO roundtable at RSA Conference in San Francisco. CISO Karthik Swarnam warns of inevitable incidents triggered by malicious prompting, urging companies to invest in training and establish boundaries for AI usage in … Read more