Who uses LLM prompt injection attacks IRL? Mostly unscrupulous job seekers, jokesters and trolls

August 13, 2024 at 06:51AM Various attempts at prompt injection into large language models (LLMs) have been identified, with the majority coming from job seekers seeking to manipulate automated HR screening systems. Kaspersky’s research found instances of direct and indirect prompt injections, often aiming to influence HR processes or as a form of protest against … Read more

Researchers Highlight Google’s Gemini AI Susceptibility to LLM Threats

March 13, 2024 at 07:03AM Google’s Gemini large language model faces security threats, potentially allowing disclosure of system prompts, generating harmful content, and indirect injection attacks. Vulnerabilities include leak of system prompts, misinformation generation, and potential malicious action control. Findings by HiddenLayer highlight widespread need for testing and safeguarding language models. Google responds by implementing … Read more

Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

October 27, 2023 at 08:00AM Google is expanding its Vulnerability Rewards Program to reward researchers for finding vulnerabilities in generative artificial intelligence systems. The program aims to address concerns such as bias, model manipulation, and data misinterpretation. Additionally, Google is working on securing the AI supply chain through open-source security initiatives. OpenAI has also formed … Read more