Easily Exploitable Critical Vulnerabilities Found in Open Source AI/ML Tools

June 14, 2024 at 03:00AM

A Protect AI report has revealed a dozen critical vulnerabilities in open-source AI/ML tools, including flaws that could lead to information exposure, privilege escalation, and full server takeover. The most severe, CVE-2024-22476 in Intel Neural Compressor, allows remote attackers to escalate privileges. The report stresses timely disclosure to maintainers so fixes can be issued; a number of high-severity vulnerabilities were also reported.

Key takeaways from the report:

1. The Protect AI report highlighted a total of 32 security defects, including critical-severity issues, across various open-source AI/ML tools.
2. The most severe bug is CVE-2024-22476 in Intel Neural Compressor software, with a CVSS score of 10, allowing remote attackers to escalate privileges. This flaw was addressed in mid-May.
3. Multiple critical-severity vulnerabilities were discovered in ChuanhuChatGPT, LoLLMs, Qdrant, and Lunary, posing risks such as stealing sensitive files, arbitrary file reading, and unauthorized access.
4. Other critical-severity flaws include SSRF, IDOR, missing authorization and authentication mechanisms, improper path sanitization, path traversal, and log injection in various AI/ML platforms.
5. Protect AI reported that all vulnerabilities were disclosed to the maintainers at least 45 days before publication, and that it continues to collaborate with them on timely fixes.
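To illustrate the path traversal and improper path sanitization classes mentioned above, here is a minimal, hypothetical Python sketch of the standard defense: resolving a user-supplied path against a base directory and rejecting anything that escapes it. This is an illustrative pattern only, not code from any of the affected projects; the function name `safe_join` is an assumption for this example.

```python
import os


def safe_join(base_dir: str, user_path: str) -> str:
    """Resolve user_path under base_dir, rejecting path traversal.

    Illustrative sketch of path sanitization; not taken from any
    project named in the report.
    """
    base = os.path.realpath(base_dir)
    # realpath collapses "..", symlinks, and redundant separators,
    # so comparisons are made on the canonical path.
    target = os.path.realpath(os.path.join(base, user_path))
    # Reject any resolved path that lands outside the base directory,
    # including absolute user paths and "../" sequences.
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path traversal attempt: {user_path!r}")
    return target
```

A request for `reports/q1.txt` would resolve normally, while `../../etc/passwd` or an absolute path like `/etc/passwd` would be rejected before any file is opened.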

