Who uses LLM prompt injection attacks IRL? Mostly unscrupulous job seekers, jokesters and trolls

August 13, 2024 at 06:51AM Various attempts at prompt injection into large language models (LLMs) have been identified, with the majority coming from job seekers seeking to manipulate automated HR screening systems. Kaspersky’s research found instances of direct and indirect prompt injections, often aiming to influence HR processes or as a form of protest against … Read more

How to Weaponize Microsoft Copilot for Cyberattackers

August 8, 2024 at 02:56PM Enterprises are rapidly adopting Microsoft’s Copilot AI-based chatbots to enhance employee productivity, but security researcher Michael Bargury demonstrated at Black Hat USA how attackers could exploit Copilot for data theft and social engineering. He also released an offensive toolset for Copilot and emphasized the need for better detection of “promptware” … Read more

Meta’s AI safety system defeated by the space bar

July 29, 2024 at 05:09PM Meta’s machine-learning model designed to detect prompt injection attacks, known as Prompt-Guard-86M, has ironically been found vulnerable to such attacks. This model, introduced by Meta in conjunction with its Llama 3.1 generative model, aims to catch problematic inputs for AI models. However, a recent discovery by bug hunter Aman Priyanshu … Read more

Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique

June 28, 2024 at 09:33AM Microsoft recently revealed an artificial intelligence jailbreak technique, called Skeleton Key, able to trick gen-AI models into providing restricted information. The technique was tested on various AI models, potentially bypassing safety measures. Microsoft reported its findings to developers and implemented mitigations in its AI products, including Copilot AI assistants. From … Read more

Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

June 27, 2024 at 05:20AM A high-severity security flaw (CVE-2024-5565, CVSS score: 8.1) has been disclosed in the Vanna.AI library, which could lead to remote code execution via prompt injection techniques. This vulnerability allows the execution of arbitrary commands, posing a significant risk to the security of organizations using this Python-based machine learning library. Prompt … Read more

LLMs & Malicious Code Injections: ‘We Have to Assume It’s Coming’

May 6, 2024 at 06:29PM Prompt injection engineering in large language models (LLMs) poses a significant risk to organizations, as discussed during a CISO roundtable at RSA Conference in San Francisco. CISO Karthik Swarnam warns of inevitable incidents triggered by malicious prompting, urging companies to invest in training and establish boundaries for AI usage in … Read more

Microsoft Beefs Up Defenses in Azure AI

April 1, 2024 at 06:26PM Microsoft introduces new tools to safeguard Azure AI from threats like prompt injection, while empowering developers to enhance the resilience of generative AI applications against model and content manipulation attacks. Based on the meeting notes, the key takeaways are: 1. Microsoft has added tools to protect Azure AI from threats … Read more

Shadow AI – Should I be Worried?

March 14, 2024 at 07:57AM Since November 2022, the use of Generative AI has surged, with around 12,000 AI tools available for over 16,000 job tasks. Many employees are using these tools without employer approval, raising concerns about data protection and compliance. Security issues include privacy policies, prompt injection, and account takeover risks. Educating users … Read more

Three Tips to Protect Your Secrets from AI Accidents

February 26, 2024 at 06:09AM OWASP published the “OWASP Top 10 For Large Language Models,” reflecting the evolving nature of Large Language Models and their potential vulnerabilities. The article discusses techniques like “prompt injection,” the accidental disclosure of secrets, and offers tips such as secret rotation, data cleaning, and regular patching to secure LLMs. From … Read more

Forget Deepfakes or Phishing: Prompt Injection is GenAI’s Biggest Problem

February 2, 2024 at 06:06PM The security community should shift focus to generative artificial intelligence (GenAI) risks, particularly prompt injection, which involves inserting text to manipulate large language models (LLMs). This method allows attackers to trigger unintended actions or access sensitive information. Recognizing prompt injection as a top security concern is crucial as cyber threats … Read more