Slack Patches AI Bug That Let Attackers Steal Data From Private Channels

Slack Patches AI Bug That Let Attackers Steal Data From Private Channels

August 22, 2024 at 11:47AM

Salesforce’s Slack AI has patched a flaw identified by security firm PromptArmor, which could have allowed attackers to steal data from private Slack channels or engage in secondary phishing within the platform. The flaw is related to the use of a language model that did not recognize malicious instructions, enabling potential data exposure and phishing attacks. The issue was disclosed to Slack, and the company subsequently deployed a patch to address the problem. This incident raises concerns about the security of AI tools and highlights the importance of implementing measures to protect sensitive data.

From the meeting notes, it is clear that Salesforce’s Slack Technologies has patched a flaw in Slack AI that could have allowed attackers to steal data from private Slack channels or perform secondary phishing within the collaboration platform. The flaw was discovered by researchers from security firm PromptArmor, who found a prompt injection flaw in Slack’s AI-based feature, which adds generative AI capabilities. The flaw occurs because the large language model (LLM) may not distinguish a malicious instruction from a legitimate one, potentially leading to data exfiltration and abuse.

The researchers identified two potential malicious scenarios facilitated by the flaw: data exfiltration from private channels and phishing users within the workspace. The issue is compounded by Slack’s recent update, which allows Slack AI to ingest not only messages but also uploaded documents and Google Drive files, thereby increasing the risk surface area for potential attacks.

PromptArmor disclosed the flaw to Slack on August 14, and after collaborating with the company, Slack deployed a patch to fix a scenario that could allow a threat actor with an existing account to phish users for certain data. However, the researchers pointed out that Slack initially considered the flaw as “intended behavior” before acknowledging and addressing the issue.

The significance of this flaw extends beyond just the Slack platform, as it raises questions about the safety and security of current AI tools. Akhil Mittal, a cybersecurity expert, noted that this vulnerability highlights the potential flaws in AI systems that could allow unauthorized access to sensitive data. He emphasized the need for rigorous security measures and ethical considerations in the development and deployment of AI tools to protect data and maintain trust.

PromptArmor suggested that organizations using Slack can mitigate potential threats by using Slack AI settings to restrict the feature’s ability to ingest documents, thereby limiting access to sensitive data by potential threat actors.

Overall, these meeting notes present a comprehensive overview of the discovered flaw in Slack’s AI and highlight the importance of addressing security vulnerabilities in AI tools to safeguard sensitive data and maintain trust in such platforms.

Full Article