Researchers Highlight Google’s Gemini AI Susceptibility to LLM Threats

March 13, 2024 at 07:03AM Google’s Gemini large language model faces security threats, potentially allowing disclosure of system prompts, generating harmful content, and indirect injection attacks. Vulnerabilities include leak of system prompts, misinformation generation, and potential malicious action control. Findings by HiddenLayer highlight widespread need for testing and safeguarding language models. Google responds by implementing … Read more

Google’s Gemini AI Vulnerable to Content Manipulation

March 12, 2024 at 06:03AM Summary: Google’s Gemini large language model (LLM) is found susceptible to attacks that can lead to the generation of harmful content,HiddenLayer researchers manipulate the AI technology to generate election misinformation,detailed instructions on hotwiring a car, and system prompt leakage.They found that Gemini, like other LLMs, is vulnerable to attacks due … Read more