June 24, 2024 at 10:24AM
Cybersecurity researchers have disclosed CVE-2024-37032, a security flaw in the Ollama open-source AI platform that can be exploited for remote code execution. The issue was fixed in version 0.1.34. Exploitation involves sending manipulated HTTP requests to the server. Default Linux installations face lower risk, but Docker deployments are at high risk. Wiz identified over 1,000 unprotected Ollama instances.
Summary:
The meeting notes from Jun 24, 2024, discuss a security flaw, tracked as CVE-2024-37032 and codenamed Probllama, affecting the Ollama open-source artificial intelligence (AI) infrastructure platform. This vulnerability could be exploited to achieve remote code execution. The flaw, which was responsibly disclosed on May 5, 2024, was addressed in version 0.1.34 released on May 7, 2024.
The vulnerability stems from insufficient input validation, which results in a path traversal flaw that can be triggered by sending specially crafted HTTP requests to the Ollama API server. Because the traversal allows arbitrary files on the server to be overwritten, it can be escalated to remote code execution. The risk is particularly high in Docker installations, where the API server runs with root privileges and is publicly exposed by default.
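As an added example (not Ollama's own code; the blob directory, the digest format and the function names are assumptions), the following Go sketch contrasts the kind of check whose absence produces a path-traversal write with a hardened version: an identifier taken from an HTTP request is joined into a filesystem path, once naively and once behind an allow-list plus a containment check.

```go
package main

import (
	"errors"
	"fmt"
	"path/filepath"
	"regexp"
	"strings"
)

// Hypothetical base directory under which the server stores model blobs.
var blobDir = "/var/lib/ollama/blobs"

// Allow-list for the identifier's shape: only "sha256:<64 hex chars>" passes,
// so an identifier can never carry path separators or ".." into the join.
var strictDigest = regexp.MustCompile(`^sha256:[0-9a-f]{64}$`)

var ErrTraversal = errors.New("identifier rejected: would escape the blob directory")

// unsafeBlobPath shows the insufficient-validation pattern: the untrusted
// identifier is joined straight into the path, so "../" sequences walk out
// of blobDir (and, when the process runs as root, anywhere on the host).
func unsafeBlobPath(id string) string {
	return filepath.Join(blobDir, strings.ReplaceAll(id, ":", "-"))
}

// safeBlobPath applies two layers of defence: the shape allow-list above, and
// a post-join containment check that the cleaned path still lies inside blobDir.
func safeBlobPath(id string) (string, error) {
	if !strictDigest.MatchString(id) {
		return "", ErrTraversal
	}
	p := filepath.Join(blobDir, strings.ReplaceAll(id, ":", "-"))
	rel, err := filepath.Rel(blobDir, p)
	if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
		return "", ErrTraversal
	}
	return p, nil
}

func main() {
	// Constructed inputs: one well-formed digest, one traversal attempt.
	good := "sha256:" + strings.Repeat("ab", 32)
	bad := "../../outside/target-file"
	fmt.Println("unsafe:", unsafeBlobPath(bad)) // resolves outside blobDir
	p, err := safeBlobPath(good)
	fmt.Println("safe ok:", p, err)
	_, err = safeBlobPath(bad)
	fmt.Println("safe bad:", err) // rejected before any file is touched
}
```

Logging the rejected identifier together with the client address gives a simple detection signal for traversal probes against the API.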
Ollama's lack of authentication makes it easier for attackers to exploit publicly accessible servers to steal or tamper with AI models and to compromise self-hosted AI inference servers. Security researcher Sagi Tzadik emphasized the severity of the issue, highlighting its impact on modern AI infrastructure despite the codebase being relatively new.
Additionally, the meeting notes mention other vulnerabilities affecting open-source AI/ML tools, including the most severe CVE-2024-22476, an SQL injection flaw in Intel Neural Compressor software that was addressed in version 2.5.0.