Researchers Highlight Google’s Gemini AI Susceptibility to LLM Threats

Researchers Highlight Google's Gemini AI Susceptibility to LLM Threats

March 13, 2024 at 07:03AM

Google’s Gemini large language model faces security threats, potentially allowing disclosure of system prompts, generating harmful content, and indirect injection attacks. Vulnerabilities include leak of system prompts, misinformation generation, and potential malicious action control. Findings by HiddenLayer highlight widespread need for testing and safeguarding language models. Google responds by implementing safeguards against harmful or misleading responses.

Based on the meeting notes, the following key takeaways can be identified:

1. Google’s Gemini large language model (LLM) has been found to have vulnerabilities that could lead to security threats. These threats include potential leakage of system prompts, generation of harmful content, and potential for indirect injection attacks.

2. The vulnerabilities identified by HiddenLayer impact both consumers using Gemini Advanced with Google Workspace and companies using the LLM API.

3. The vulnerabilities include getting around security guardrails to leak system prompts, using “crafty jailbreaking” techniques to generate misinformation, and causing the LLM to leak information in the system prompt by passing repeated uncommon tokens as input.

4. Google has acknowledged the vulnerabilities and stated that they consistently run red-teaming exercises and train their models to defend against adversarial behaviors. They have also built safeguards to prevent harmful or misleading responses, and are continuously improving their defenses.

5. In response to the vulnerabilities, the company is restricting responses to election-based queries as a precaution, and this policy is expected to be enforced against prompts regarding candidates, political parties, election results, voting information, and notable office holders.

These key takeaways provide a concise summary of the security vulnerabilities identified in Google’s Gemini large language model and the actions being taken to address them.

Full Article