October 29, 2024, 9:36 AM
Over three dozen security vulnerabilities have been disclosed in open-source AI/ML tools, carrying risks that include remote code execution and data theft. Key flaws include IDOR vulnerabilities in Lunary and a critical path traversal issue in ChuanhuChatGPT. Users are urged to update to the latest versions to protect against potential attacks.
### Meeting Takeaways – AI Security / Vulnerability Discussion (Oct 29, 2024)
1. **Overview of Vulnerabilities**:
– Over **three dozen security vulnerabilities** have been disclosed in various open-source AI and ML models.
– Key tools affected include **ChuanhuChatGPT**, **Lunary**, and **LocalAI**.
2. **Severe Vulnerabilities Identified**:
– **Lunary**:
– Two critical flaws, both rated **CVSS 9.1**: an IDOR vulnerability (CVE-2024-7474) that lets an authenticated user view or delete other users’ data, and an improper access control flaw (CVE-2024-7475) in the SAML configuration that can expose sensitive information.
– A third IDOR vulnerability (CVE-2024-7473, CVSS 7.5) lets attackers update other users’ prompts by tampering with a user-controlled parameter (the IDOR sketch after this list shows the underlying pattern).
– **ChuanhuChatGPT**:
– A critical path traversal flaw in the user upload feature (CVE-2024-5982, CVSS 9.1) could lead to arbitrary code execution, arbitrary directory creation, and exposure of sensitive data (see the path-traversal sketch after this list).
– **LocalAI**:
– Two significant vulnerabilities: arbitrary code execution via upload of a malicious configuration file (CVE-2024-6983, CVSS 8.8), and a timing attack that lets attackers infer valid API keys from response times (CVE-2024-7010, CVSS 7.5; see the timing-safe comparison sketch after this list).
– **Deep Java Library (DJL)**:
– An arbitrary file overwrite rooted in the package’s untar function (CVE-2024-8396, CVSS 7.8) that can lead to remote code execution (see the safe-extraction sketch after this list).
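All three Lunary flaws above share the same IDOR root cause: an object is fetched by a client-supplied ID with no ownership check. Below is a minimal Flask sketch of that pattern and its fix; the route, store, and session helper are hypothetical illustrations, not Lunary's actual code (Lunary itself is not written in Python).

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store standing in for the application's database.
PROMPTS = {1: {"owner_id": 10, "text": "hello"}}

def current_user_id() -> int:
    # Stand-in for real session handling; assume the user is authenticated.
    return int(request.headers.get("X-User-Id", 0))

# VULNERABLE: trusts the client-supplied id and never checks ownership,
# so any authenticated user can read or rewrite another user's prompt.
@app.route("/v1/prompts/<int:prompt_id>", methods=["PUT"])
def update_prompt_vulnerable(prompt_id):
    prompt = PROMPTS[prompt_id]
    prompt["text"] = request.json["text"]
    return jsonify(prompt)

# FIXED: every object lookup is paired with an ownership check.
@app.route("/v2/prompts/<int:prompt_id>", methods=["PUT"])
def update_prompt_fixed(prompt_id):
    prompt = PROMPTS.get(prompt_id)
    if prompt is None or prompt["owner_id"] != current_user_id():
        abort(404)  # 404, not 403, so the endpoint doesn't confirm existence
    prompt["text"] = request.json["text"]
    return jsonify(prompt)
```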
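The ChuanhuChatGPT bug belongs to the path traversal class, where a user-supplied filename containing `../` segments escapes the intended directory. A minimal sketch of the vulnerable shape and a resolution-based check, assuming a hypothetical upload handler rather than ChuanhuChatGPT's real code:

```python
from pathlib import Path

UPLOAD_DIR = Path("/srv/app/uploads")

def save_upload_vulnerable(filename: str, data: bytes) -> None:
    # A name like "../../app/config.json" escapes UPLOAD_DIR entirely.
    (UPLOAD_DIR / filename).write_bytes(data)

def save_upload_fixed(filename: str, data: bytes) -> None:
    target = (UPLOAD_DIR / filename).resolve()
    # Reject any path that resolves outside the upload directory
    # (Path.is_relative_to requires Python 3.9+).
    if not target.is_relative_to(UPLOAD_DIR.resolve()):
        raise ValueError(f"path traversal attempt: {filename!r}")
    target.write_bytes(data)
```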
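CVE-2024-7010 is a classic timing side channel: comparing a candidate API key with `==` returns faster the earlier the mismatch occurs, letting an attacker recover the key character by character. A sketch of the problem and the standard constant-time fix; the key and check functions here are hypothetical, not LocalAI's code:

```python
import hmac

VALID_KEY = "sk-example-not-a-real-key"  # hypothetical stored secret

def check_key_vulnerable(candidate: str) -> bool:
    # '==' returns as soon as one character differs, so response time
    # leaks how many leading characters of the key the attacker has right.
    return candidate == VALID_KEY

def check_key_fixed(candidate: str) -> bool:
    # hmac.compare_digest takes the same time wherever the inputs differ,
    # leaving nothing useful for a timing attack to measure.
    return hmac.compare_digest(candidate.encode(), VALID_KEY.encode())
```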
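DJL is a Java library, but the file-overwrite class behind CVE-2024-8396, an archive entry whose name escapes the extraction directory, is easy to illustrate in Python with `tarfile`. This is a sketch of the defensive check, not DJL's code; a production version would also need to handle symlinked entries (Python 3.12's `extractall(..., filter="data")` performs such checks natively):

```python
import tarfile
from pathlib import Path

def extract_safely(archive: str, dest: str) -> None:
    dest_path = Path(dest).resolve()
    with tarfile.open(archive) as tar:
        for member in tar.getmembers():
            target = (dest_path / member.name).resolve()
            # An entry named "../../etc/cron.d/job" resolves outside dest;
            # refusing it blocks the arbitrary file overwrite.
            if not target.is_relative_to(dest_path):
                raise ValueError(f"unsafe archive entry: {member.name!r}")
        tar.extractall(dest_path)
```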
3. **Remediation Actions**:
– **NVIDIA** has released patches for a path traversal flaw in its **NeMo** generative AI framework (CVE-2024-0129, CVSS 6.3) that could lead to code execution and data tampering; the traversal pattern sketched above applies here as well.
– **Advisory**: Users are urged to update to the latest versions to protect their AI/ML applications.
4. **New Tools for Vulnerability Detection**:
– **Vulnhuntr**: an open-source static code analyzer announced by Protect AI that uses LLMs to find zero-day vulnerabilities in Python codebases (a conceptual sketch follows this item).
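Vulnhuntr's documented approach is to locate code that handles remote user input and have an LLM trace call chains for classes such as LFI, RCE, SSRF, SQLi, XSS, and IDOR. The sketch below conveys only that general idea; `ask_llm` and the crude input heuristic are hypothetical stand-ins, and this is not Vulnhuntr's actual interface or prompting:

```python
import ast
from pathlib import Path

VULN_CLASSES = ["LFI", "RCE", "SSRF", "SQLi", "XSS", "IDOR"]

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in: wire this to whatever model API you use.
    return "(model response would appear here)"

def candidate_functions(source: str):
    """Yield (name, snippet) for functions that appear to touch user input."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            snippet = ast.get_source_segment(source, node)
            if snippet and ("request" in snippet or "input(" in snippet):
                yield node.name, snippet

def analyze(path: str) -> None:
    source = Path(path).read_text()
    for name, snippet in candidate_functions(source):
        prompt = (
            f"Which of {VULN_CLASSES}, if any, does this Python function "
            f"contain? Trace the tainted data flow.\n\n{snippet}"
        )
        print(f"--- {name} ---")
        print(ask_llm(prompt))
```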
5. **Emerging Threats**:
– A newly reported jailbreak technique shows that hex-encoded malicious prompts can bypass **OpenAI ChatGPT** safeguards, underscoring the need for better detection and prevention mechanisms (a short illustration follows this item).
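The bypass works because guardrails that pattern-match on plaintext do not recognize the same instruction once it is hex-encoded, while the model can decode and follow it. A harmless illustration; the blocklist and filter are deliberately naive placeholders, not OpenAI's actual safeguards:

```python
# Naive plaintext filter of the kind hex encoding slips past.
BLOCKLIST = ["write an exploit"]

def naive_filter(prompt: str) -> bool:
    return any(term in prompt.lower() for term in BLOCKLIST)

plain = "write an exploit"            # harmless placeholder phrase
hexed = plain.encode().hex()          # '777269746520616e206578706c6f6974'

print(naive_filter(plain))            # True  - caught in plaintext
print(naive_filter(hexed))            # False - same instruction, unfiltered
print(bytes.fromhex(hexed).decode())  # a model can trivially decode it back
```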
6. **Next Steps**:
– Stay updated on patches for affected tools.
– Monitor new developments in security vulnerabilities associated with AI/ML frameworks and models.
### Recommendations:
– Regularly review and apply security patches for all AI/ML tools.
– Consider integrating tools like **Vulnhuntr** for ongoing vulnerability detection in Python projects.
– Increase awareness and training on potential new exploits, including emerging jailbreak techniques.