Researchers Warn of Privilege Escalation Risks in Google’s Vertex AI ML Platform

November 15, 2024 at 08:30AM Cybersecurity researchers uncovered two vulnerabilities in Google’s Vertex AI platform that could be exploited for privilege escalation and data exfiltration. Attackers could manipulate job permissions to access restricted resources and deploy malicious models to extract sensitive information. Google has addressed these issues, urging organizations to implement stricter model deployment controls. … Read more

Security Flaws in Popular ML Toolkits Enable Server Hijacks, Privilege Escalation

November 11, 2024 at 05:39AM Cybersecurity researchers have identified nearly two dozen vulnerabilities across 15 open-source machine learning projects, including Weave and ZenML. These flaws could allow unauthorized access, remote code execution, and privilege escalation, posing significant risks to ML infrastructure. The discovery follows earlier vulnerability disclosures and the introduction of a new defense framework, Mantis. … Read more

Noma Launches With Plans to Secure Data, AI Life Cycle

October 31, 2024 at 10:08AM Noma has launched a platform to help organizations manage risks associated with AI applications, securing the AI life cycle against issues like misconfigured pipelines and malicious models. The service works across various environments without requiring code changes. Noma raised $32 million in Series A funding and serves Fortune 500 clients. … Read more

Ex-Oracle, Google Engineers Raise $7M From Accel for Public Launch of Simplismart to Empower AI Adoption

October 17, 2024 at 04:57PM OpenAI is expected to generate over $10 billion in revenue in 2025, highlighting the rapid adoption of generative AI. Simplismart has announced a $7 million funding round to enhance its AI deployment infrastructure, addressing challenges faced by enterprises. The platform optimizes machine learning operations, aiming to streamline generative AI adoption in organizations. … Read more

How to Mitigate the Impact of Rogue AI Risks

October 17, 2024 at 04:18PM The article discusses managing Rogue AI risks through Zero Trust architecture and layered defenses. It identifies causal factors behind AI vulnerabilities, such as misconfiguration and excessive autonomy. For effective mitigation, organizations must employ a comprehensive defense strategy and advance through the Zero Trust Maturity Model to strengthen security. … Read more

Researchers Identify Over 20 Supply Chain Vulnerabilities in MLOps Platforms

August 26, 2024 at 07:30AM Cybersecurity researchers have identified over 20 vulnerabilities in the machine learning (ML) software supply chain, posing severe risks like arbitrary code execution and malicious dataset loading. These affect MLOps platforms and ML libraries, like MLflow and Seldon Core, enabling attackers to execute code and move laterally. The disclosure emphasizes the need for … Read more

To Spot Attacks Through AI Models, Companies Need Visibility

March 12, 2024 at 04:03PM The rush to develop AI/ML models often overlooks their security, risking backdoor and hijacking attacks. Companies lack visibility into the 1,600-plus models they may have in production, leaving them vulnerable. Pretrained models from public repositories raise security concerns, with the potential for attackers to compromise systems. Securing ML operations and assessing model security are crucial in … Read more