June 24, 2024 at 04:43PM
A critical vulnerability in Ollama, an open-source project for running LLMs, allows remote code execution. Dubbed Probllama and tracked as CVE-2024-37032, the flaw was fixed in version 0.1.34, yet more than 1,000 vulnerable instances remained exposed to the internet a month after the patch. Wiz Research urges users to update promptly and to put strong authentication in front of their servers, underscoring the need for better security practices in modern AI tooling.
Summary:
– Ollama, an open-source project for running LLMs, was found to have a vulnerability that allows remote code execution. The flaw, dubbed Probllama and tracked as CVE-2024-37032, was disclosed by Wiz Research and fixed in version 0.1.34 released via GitHub.
– The root cause is insufficient input validation on the server side of Ollama's REST API: the digest field of an attacker-supplied model manifest is used to build file paths without sanitization, so a crafted HTTP request to the /api/pull endpoint can achieve path traversal, arbitrary file writes, and ultimately code execution.
– The vulnerability is particularly severe in Docker installations, where the API server runs with root privileges and is often published on all network interfaces, making remote exploitation straightforward for internet-exposed instances.
– Despite the patched version being available for over a month, there were still over 1,000 vulnerable Ollama server instances exposed to the internet as of June 10.
– Recommendations for protecting Ollama instances: update to version 0.1.34 or newer, place the server behind an authenticating reverse proxy (Ollama does not ship authentication of its own), and use firewall rules to keep the API off the public internet.
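For the firewall recommendation, one low-effort mitigation in Docker deployments is to publish the API port on the loopback interface only, so the server is unreachable from the internet. A minimal Compose sketch (the volume layout is illustrative; 11434 is Ollama's default port):

```yaml
# docker-compose.yml — illustrative sketch, not an official configuration.
services:
  ollama:
    image: ollama/ollama:latest     # use an image at version 0.1.34 or newer
    ports:
      - "127.0.0.1:11434:11434"     # bind to loopback only, not 0.0.0.0
    volumes:
      - ollama_data:/root/.ollama
volumes:
  ollama_data:
```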
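The path-traversal class of bug described above can be illustrated with a short sketch. This is not Ollama's code; the function names, blobs directory, and digest format are assumptions for illustration only. The point is that a digest taken from an attacker-controlled manifest must be validated before it is joined onto a filesystem path:

```python
import os
import re

# Illustrative location where model blobs might live in a Docker install.
BLOBS_DIR = "/root/.ollama/models/blobs"

def is_safe_digest(digest: str) -> bool:
    """Accept only well-formed 'sha256-<64 hex chars>' digests (illustrative)."""
    return re.fullmatch(r"sha256-[0-9a-f]{64}", digest) is not None

def blob_path(digest: str) -> str:
    """Resolve the on-disk path for a digest, rejecting traversal attempts.

    Without checks like these, a digest such as '../../../etc/ld.so.preload'
    would escape the blobs directory and allow an arbitrary file write.
    """
    if not is_safe_digest(digest):
        raise ValueError(f"rejected digest: {digest!r}")
    path = os.path.normpath(os.path.join(BLOBS_DIR, digest))
    if not path.startswith(BLOBS_DIR + os.sep):
        raise ValueError("path escapes blobs directory")
    return path
```

The double check (strict format whitelist plus a normalized-path prefix test) is deliberate: either one alone is a common source of bypasses.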
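To audit your own instances against the fixed version, you could compare the version string the server reports; Ollama exposes a GET /api/version endpoint that returns JSON of the form {"version": "0.1.34"}. A minimal sketch, assuming that endpoint shape and omitting error handling:

```python
import json
import urllib.request

def parse_version(v: str) -> tuple:
    """Turn a version string like 'v0.1.34' into (0, 1, 34) for comparison."""
    return tuple(int(part) for part in v.strip().lstrip("v").split("."))

def is_patched(version: str, fixed: str = "0.1.34") -> bool:
    """True if the reported version includes the Probllama fix."""
    return parse_version(version) >= parse_version(fixed)

def check_instance(base_url: str) -> bool:
    """Query an Ollama server's /api/version endpoint (assumed reachable)."""
    with urllib.request.urlopen(f"{base_url}/api/version", timeout=5) as resp:
        version = json.load(resp)["version"]
    return is_patched(version)
```

Note that an unpatched server answering this query is exactly the exposure the bullets above warn about, so any instance you can reach this way from the internet should also be firewalled.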