August 8, 2024 at 02:56PM
Enterprises are rapidly adopting Microsoft’s Copilot AI-based chatbots to enhance employee productivity, but security researcher Michael Bargury demonstrated at Black Hat USA how attackers could exploit Copilot for data theft and social engineering. He also released an offensive toolset for Copilot and emphasized the need for better detection of “promptware” in AI models. Microsoft is working to address these risks.
The meeting notes detail the presentation by security researcher Michael Bargury at Black Hat USA, where he demonstrated the vulnerabilities of Microsoft’s Copilot AI-based chatbots to attacks such as prompt injections. Bargury presented a red-team hacking tool for Copilot, called LOLCopilot, which allows attackers to manipulate chatbot behavior through prompt injections, effectively bypassing security controls and potentially gaining remote code-execution (RCE) access.
Bargury also highlighted the need for more effective tools to detect and mitigate “promptware” attacks, as traditional security measures may not be sufficient to identify and prevent these threats.
Microsoft’s response to these security risks includes the development and deployment of various tools and mechanisms such as Prompt Shields, Groundedness Detection, Safety Evaluation, and partnerships with organizations like HiddenLayer to enhance the security of its AI applications. Bargury acknowledges Microsoft’s efforts in addressing the security of Copilot, acknowledging the implementation of ten security mechanisms within the system.
Overall, the meeting notes underscore the significant security implications associated with using Copilot and the ongoing efforts by both researchers and Microsoft to address these vulnerabilities and enhance the security of AI applications.