Researchers Uncover Flaws in Popular Open-Source Machine Learning Frameworks

December 6, 2024 at 07:18AM

Cybersecurity researchers have uncovered multiple vulnerabilities in open-source machine learning tools such as MLflow, H2O, and PyTorch that can enable code execution. Discovered by JFrog, these flaws could allow attackers to access sensitive information and move laterally within organizations, underscoring the need for caution when handling untrusted ML models.

### Meeting Takeaways – December 6, 2024

**Topic:** Vulnerabilities in Open-Source Machine Learning Tools

**Key Points:**

1. **Overview of Vulnerabilities:**
– Multiple security flaws in open-source ML tools (MLflow, H2O, PyTorch, MLeap), reported by JFrog, could allow for code execution.
– These vulnerabilities reside in ML clients, including libraries that handle ostensibly safe model formats.

2. **Risks:**
– An attacker who hijacks an ML client can move laterally within an organization, accessing critical ML services and sensitive information (e.g., model registry credentials).
– The potential for backdooring stored ML models or executing unauthorized code poses significant organizational risks.

3. **Specific Vulnerabilities Identified:**
– **CVE-2024-27132** (CVSS 7.2) – Insufficient sanitization in MLflow, leading to cross-site scripting (XSS) when running an untrusted recipe in a Jupyter Notebook and ultimately to client-side remote code execution (RCE).
– **CVE-2024-6960** (CVSS 7.5) – Unsafe deserialization in H2O when importing untrusted ML models, potentially leading to RCE.
– **PyTorch Issue** – Path traversal in PyTorch's TorchScript feature that could result in denial of service (DoS) or code execution via arbitrary file overwrite (no CVE identifier).
– **CVE-2023-5245** (CVSS 7.5) – Zip Slip vulnerability in MLeap when loading zipped models, resulting in arbitrary file overwrite and potential code execution.
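Several of the flaws above (the MLeap Zip Slip and the TorchScript path traversal) share the same root cause: archive entries or file paths like `../../etc/passwd` that escape the intended extraction directory. As a minimal illustrative sketch, not MLeap's or PyTorch's actual code, a loader can defend against this by resolving each entry's path and rejecting anything that lands outside the destination:

```python
import os
import zipfile


def safe_extract(zip_path: str, dest_dir: str) -> None:
    """Extract a zip archive, rejecting entries that escape dest_dir (Zip Slip)."""
    dest_root = os.path.realpath(dest_dir)
    with zipfile.ZipFile(zip_path) as zf:
        for member in zf.infolist():
            # Resolve the entry's final location; a malicious name like
            # "../evil.txt" resolves to a path outside dest_root.
            target = os.path.realpath(os.path.join(dest_root, member.filename))
            if os.path.commonpath([dest_root, target]) != dest_root:
                raise ValueError(f"blocked path traversal entry: {member.filename!r}")
        zf.extractall(dest_root)
```

The key design point is validating the *resolved* path (`realpath`) rather than the raw entry name, so symlinks and `..` components cannot smuggle writes outside the target directory.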

4. **Recommendations:**
– Always validate the models being used and refrain from loading untrusted ML models, even from supposedly safe repositories.
– Awareness of the potential threats posed by AI and ML tools is crucial to safeguard organizations from extensive damage.
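One simple way to act on the "validate models before loading" recommendation is to pin a known-good SHA-256 digest for each model artifact and check it before deserialization. This is a generic sketch (the function name and workflow are illustrative, not tied to any specific ML framework):

```python
import hashlib


def verify_model_digest(model_path: str, expected_sha256: str) -> bool:
    """Compare a model file's SHA-256 against a pinned, trusted digest.

    Returns True only if the file matches; callers should refuse to
    deserialize the model otherwise.
    """
    h = hashlib.sha256()
    with open(model_path, "rb") as f:
        # Stream in chunks so large model files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

A digest check only proves the file is the one you pinned; it does not make an inherently unsafe format (e.g., pickle-based serialization) safe, so the trusted digest must come from a source you control, not from the same place as the model itself.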

5. **Quotes:**
– Shachar Menashe, VP of Security Research at JFrog, emphasizes the dual nature of AI/ML tools and the importance of caution when using these technologies.

**Action Items:**
– Review the identified vulnerabilities and assess organizational exposure.
– Implement practices to verify the trustworthiness of ML models before use.
– Stay informed by following reliable sources on cybersecurity developments in ML tools.
