AI Hallucinated Packages Fool Unsuspecting Developers

April 1, 2024 at 11:42AM

A report from Lasso Security warns that AI chatbots can lead software developers to use nonexistent packages, an opening that threat actors can exploit. Researcher Bar Lanyado demonstrated how readily large language model (LLM) chatbots recommend, and thereby spread, hallucinated packages. The research emphasizes cross-verifying uncertain LLM answers and exercising caution when integrating open-source software.

Key takeaways from the report:

– Software developers who build applications with the help of AI chatbots may inadvertently rely on non-existent packages hallucinated by large language models (LLMs).

– Lasso Security’s Bar Lanyado conducted research demonstrating how LLM tools can be used to spread recommendations for non-existent software packages.

– Threat actors could exploit this by publishing malicious packages under the same names as hallucinated ones, which developers may then download on an AI chatbot’s recommendation.

– The research tested four different AI models with over 40,000 “how to” questions and found hallucinated package names at a significant rate, with Gemini peaking at 64.5% (see the detection sketch after this list for how such an existence check can work).

– Notably, an empty package uploaded by the researcher was downloaded over 30,000 times on the strength of AI recommendations and was found to be used or recommended by several large companies, underscoring the issue’s potential real-world impact.

– Lanyado emphasizes the need to cross-verify uncertain answers from LLMs, particularly those naming software packages, and advises caution when relying on open-source software. Developers are urged to verify package repositories, evaluate community engagement, check publication dates, and run comprehensive security scans before integration (see the vetting sketch after this list).

– The research underscores the risk of exploitation and the importance of vigilance in dealing with potentially malicious software packages.
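As a rough illustration of the detection step referenced above, the sketch below scans an LLM answer for `pip install` commands and asks PyPI whether each named package is actually registered. This is a minimal sketch, not the researcher’s actual harness: the Python/PyPI ecosystem, the regex, and the sample name `totally-made-up-pkg-xyz` are all assumptions for illustration.

```python
import re
import urllib.error
import urllib.request

# Naive extraction of package names from an LLM answer. Real answers are
# messier; this only matches the plain "pip install <name>" form.
PIP_INSTALL = re.compile(r"pip install\s+([A-Za-z0-9._-]+)")

def package_exists(name: str) -> bool:
    """True if PyPI's JSON endpoint returns 200 for the package, i.e. it is registered."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:  # not on PyPI -> likely a hallucinated name
            return False
        raise

def hallucinated_packages(llm_answer: str) -> list[str]:
    """Package names the answer tells you to install that PyPI has never heard of."""
    return [n for n in PIP_INSTALL.findall(llm_answer) if not package_exists(n)]

if __name__ == "__main__":
    answer = (
        "First run `pip install requests`, "
        "then `pip install totally-made-up-pkg-xyz` for the client."
    )
    # 'totally-made-up-pkg-xyz' is assumed to be unregistered on PyPI.
    print(hallucinated_packages(answer))
```

Note that a flagged name later returning 200 is exactly the risk the report describes: someone may have since registered it, so existence alone is not a sign of safety.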
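For the vetting advice in the last takeaways, here is a minimal sketch of what checking publication dates and project links could look like, again assuming the PyPI JSON API (its `releases` field carries per-file upload timestamps). The 90-day threshold is an arbitrary example value, not a figure from the report.

```python
import json
import urllib.request
from datetime import datetime, timezone

def vet_package(name: str, min_age_days: int = 90) -> dict:
    """Collect basic trust signals for a PyPI package: age and project links.

    A very young package whose name matches an LLM suggestion is a classic
    squatting red flag. The 90-day threshold is an arbitrary example value.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)

    # The earliest upload across all releases approximates the package's age.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    first_seen = min(uploads) if uploads else None
    age_days = (datetime.now(timezone.utc) - first_seen).days if first_seen else 0

    return {
        "name": data["info"]["name"],
        "first_seen": first_seen.date().isoformat() if first_seen else "no uploads",
        "suspiciously_new": first_seen is not None and age_days < min_age_days,
        "project_urls": data["info"].get("project_urls") or {},
    }

if __name__ == "__main__":
    print(vet_package("requests"))  # a long-established package, for contrast
```

Repository links that resolve, visible community activity, and a clean result from the team’s usual security scanning round out the checks the report recommends; no single signal is decisive on its own.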

In addition, the article references related coverage of chatbot hallucinations being exploited to distribute malicious code packages and of thousands of code packages vulnerable to repojacking attacks.
