March 28, 2024 at 03:10AM
Generative AI models hallucinate the names of software packages that do not exist, and those names have already been adopted by businesses such as Alibaba, which unknowingly referenced the non-existent dependencies in their software and installation instructions. The experiment demonstrates that malicious actors could register and distribute malware under these AI-hallucinated package names, endangering unsuspecting developers who follow the AI's suggestions.
These notes summarize a concerning discovery about the potential misuse of generative AI models to seed the software supply chain with malicious packages. Security researcher Bar Lanyado has shown that AI models repeatedly generate non-existent software package names, which an attacker could register and then use to distribute malicious code, with serious consequences. Lanyado's experiments with GPT-3.5, GPT-4, Gemini Pro, and Cohere's Coral demonstrated that hallucinated package names recur persistently across repeated queries, making them exploitable for malicious purposes, particularly in the Python and Node.js ecosystems.
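The core of the technique is mechanical and easy to reproduce: ask a model for package recommendations, then check which suggested names are unregistered and therefore open to squatting. The following Python sketch illustrates that check against PyPI's public JSON API; the candidate names at the bottom are hypothetical placeholders, not names from Lanyado's dataset.

```python
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` resolves to a real PyPI project."""
    url = f"https://pypi.org/pypi/{name}/json"  # PyPI's public JSON metadata API
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unknown project: the name is open to squatting
            return False
        raise  # other HTTP errors are genuine failures, not "missing"

# Hypothetical candidates an assistant might suggest (placeholders only):
for name in ["requests", "totally-made-up-pkg-xyz"]:
    status = "exists" if exists_on_pypi(name) else "UNREGISTERED (squattable)"
    print(f"{name}: {status}")
```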
Furthermore, Lanyado's proof-of-concept (PoC) package, a deliberately harmless placeholder published under the name "huggingface-cli" after ChatGPT repeatedly advised installing it, garnered over 15,000 genuine downloads within three months. That is a stark indication of the technique's potential impact: the fake package name had already found its way into the installation instructions and repositories of major companies, including Alibaba.
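Part of what made the PoC so effective is that the hallucinated name closely resembles a real project (the genuine Hugging Face CLI ships with the huggingface_hub package). As a rough illustration, a near-match check against an allowlist of trusted dependencies can flag such lookalikes before they are installed; the allowlist below is a hypothetical example, not an official list.

```python
import difflib

# Hypothetical allowlist of dependencies a team already trusts:
KNOWN_GOOD = ["huggingface_hub", "transformers", "requests", "numpy"]

def lookalikes(candidate: str, cutoff: float = 0.6) -> list[str]:
    """Return known packages that `candidate` suspiciously resembles."""
    return difflib.get_close_matches(candidate, KNOWN_GOOD, n=3, cutoff=cutoff)

print(lookalikes("huggingface-cli"))  # -> ['huggingface_hub']
```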
Lanyado's findings underscore the urgent need to address the security risks posed by AI-hallucinated package names and the resulting potential for distributing malicious code. Notably, the technique has not yet been identified in a confirmed attack, but that may say more about how difficult such attacks are to detect than about whether they are happening.
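One practical mitigation, sketched below under the assumption that teams vet AI-suggested dependencies before installing them, is to check a package's age via PyPI's JSON API: a project registered only days ago under a name an assistant keeps recommending deserves extra scrutiny. The 30-day threshold here is an arbitrary illustration, not an established policy, and the sketch assumes the project has at least one uploaded file.

```python
import json
import urllib.request
from datetime import datetime, timezone

def first_upload(name: str) -> datetime:
    """UTC timestamp of the earliest file ever uploaded for `name`."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    times = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()  # version -> list of file dicts
        for f in files
    ]
    return min(times)

age_days = (datetime.now(timezone.utc) - first_upload("requests")).days
print("suspiciously new" if age_days < 30 else f"registered {age_days} days ago")
```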
These questions and concerns warrant a thorough review by the relevant stakeholders, package registry operators, development teams, and AI vendors among them, to weigh the implications and decide what actions are needed to close this security gap.