New Scoring System Helps Secure the Open Source AI Model Supply Chain

October 24, 2024 at 06:09AM AI models from Hugging Face may harbor hidden issues, much like open-source software from platforms such as GitHub. A new scoring system has been introduced to enhance the security of the open-source AI model supply chain and to address potential vulnerabilities in these models. … Read more

Secrets Exposed in Hugging Face Hack

June 3, 2024 at 04:07AM Hugging Face, an AI tool development company, reported unauthorized access to its Spaces platform, potentially exposing a subset of Spaces’ secrets. The company has revoked compromised tokens, advised users to refresh keys and switch to fine-grained access tokens, and engaged external forensics experts. It has also made significant security improvements … Read more
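
For affected users, the rotation Hugging Face recommends can be done through the standard huggingface_hub Python client. Below is a minimal sketch of switching to a newly issued token and confirming it is the one in use; the environment variable name and account details are placeholders. Fine-grained tokens themselves are created in account settings and can be scoped to specific repositories, which limits the damage if one leaks again.

```python
# Minimal sketch: adopting a newly issued (ideally fine-grained) Hugging Face
# token after a rotation, using the standard huggingface_hub client.
import os

from huggingface_hub import login, whoami

# Read the new token from the environment rather than hard-coding it;
# HF_TOKEN is a placeholder variable name.
new_token = os.environ["HF_TOKEN"]
login(token=new_token)

# Confirm the client now authenticates as the expected account.
info = whoami()
print(f"Authenticated as: {info['name']}")
```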

AI platform Hugging Face says hackers stole auth tokens from Spaces

June 2, 2024 at 04:57PM Hugging Face’s Spaces platform was breached, exposing authentication secrets for its members. The company detected unauthorized access and suspects a subset of Spaces’ secrets were compromised. They have revoked authentication tokens and recommend users refresh tokens and switch to fine-grained access tokens for tighter security. The company is working with … Read more

AI Company Hugging Face Notifies Users of Suspected Unauthorized Access

June 1, 2024 at 03:48AM AI company Hugging Face detected unauthorized access to its Spaces platform, affecting users creating, hosting, and sharing AI and machine learning apps. The company is revoking tokens and investigating the breach’s impact on users. The incident underscores the increased risk to AIaaS providers, with previous security flaws exposing potential supply … Read more

Critical Flaw in AI Python Package Can Lead to System and Data Compromise

May 17, 2024 at 09:57AM A critical vulnerability, tracked as CVE-2024-34359 and dubbed Llama Drama, was discovered in the llama-cpp-python package used by many AI developers. The flaw allows arbitrary code execution, putting systems and data at risk. Cybersecurity firm Checkmarx detailed the issue, and a patch has been released with llama_cpp_python 0.2.72. More … Read more
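
For projects that depend on llama-cpp-python, a quick defensive step is comparing the installed version against the patched release mentioned above. The sketch below uses only the standard library and assumes a plain numeric version string.

```python
# Sketch: fail fast if an installed llama-cpp-python predates the 0.2.72 patch
# for CVE-2024-34359. Standard library only; package name as published on PyPI.
from importlib.metadata import PackageNotFoundError, version

PATCHED = (0, 2, 72)

try:
    installed = version("llama-cpp-python")
except PackageNotFoundError:
    print("llama-cpp-python is not installed; nothing to check.")
else:
    # Simple parsing; assumes a plain numeric version like "0.2.72".
    parts = tuple(int(p) for p in installed.split(".")[:3])
    if parts < PATCHED:
        raise SystemExit(
            f"llama-cpp-python {installed} is vulnerable to CVE-2024-34359; "
            "upgrade to 0.2.72 or later."
        )
    print(f"llama-cpp-python {installed} includes the CVE-2024-34359 fix.")
```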

Critical Bugs Put Hugging Face AI Platform in a ‘Pickle’

April 5, 2024 at 04:51PM Two critical security vulnerabilities in the Hugging Face AI platform allowed attackers to access customer data and overwrite images in a shared container registry. Researchers at Wiz found weaknesses in Hugging Face’s Inference API, Inference Endpoints, and Spaces, and demonstrated the flaws by uploading a malicious Pickle-based model. Hugging Face has since … Read more
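
The reason a Pickle-based model makes such an effective attack vehicle is that Python unpickling can invoke arbitrary callables via `__reduce__` before any model code is even used. The stdlib-only sketch below demonstrates the mechanism with a harmless print as the payload.

```python
# Sketch: why loading an untrusted Pickle-based model is dangerous.
# Unpickling can invoke arbitrary callables via __reduce__; here the "payload"
# only prints a message, but an attacker could run any command instead.
import pickle


class MaliciousModel:
    def __reduce__(self):
        # Executed automatically during pickle.loads(), before the "model" is used.
        return (print, ("arbitrary code executed during unpickling",))


payload = pickle.dumps(MaliciousModel())
pickle.loads(payload)  # -> prints the message; a real attack would do far worse
```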

AI-as-a-Service Providers Vulnerable to PrivEsc and Cross-Tenant Attacks

April 5, 2024 at 10:39AM New research has revealed that AI-as-a-service providers, like Hugging Face, are vulnerable to threats allowing attackers to gain access to private AI models and apps. The findings highlight the risk of supply chain attacks on machine learning pipelines. Recommendations include using trusted AI models, enabling multi-factor authentication, and avoiding pickle … Read more
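
One concrete way to follow the "avoid pickle" recommendation is to distribute and load weights in the safetensors format, which stores raw tensors and carries no executable code. A minimal sketch, assuming safetensors and torch are installed and using a placeholder file name:

```python
# Sketch: saving and loading weights as safetensors instead of a pickle checkpoint.
# Assumes `pip install safetensors torch`; "model.safetensors" is a placeholder path.
import torch
from safetensors.torch import load_file, save_file

# Save a small state dict in the safetensors format (tensors only, no code).
state = {"linear.weight": torch.randn(4, 4), "linear.bias": torch.zeros(4)}
save_file(state, "model.safetensors")

# Loading is a pure data read; nothing in the file can execute on load.
loaded = load_file("model.safetensors")
print({name: tuple(t.shape) for name, t in loaded.items()})
```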

ML Model Repositories: The Next Big Supply Chain Attack Target

March 18, 2024 at 06:15PM Machine-learning model platforms, such as Hugging Face, are vulnerable to the same kinds of attacks that threat actors have successfully executed against npm, PyPI, and other open-source repositories for years, making model repositories a likely next target for supply chain attacks … Read more

In the rush to build AI apps, please, please don’t leave security behind

March 17, 2024 at 07:08AM AI developers and data scientists are urged to keep security and supply-chain attacks in mind amid the relentless pace of AI development. With a growing threat of malware hidden in models and libraries, cybersecurity and AI startups are emerging to address these risks. Ensuring supply-chain security in the AI community is … Read more

Over 100 Malicious AI/ML Models Found on Hugging Face Platform

March 4, 2024 at 04:54AM Security researchers have discovered around 100 malicious AI/ML models on the Hugging Face platform. These models pose a significant security threat, potentially allowing attackers to gain control over machines, leading to data breaches and corporate espionage. Furthermore, researchers have developed techniques to manipulate large-language models (LLMs) for harmful purposes, demonstrating … Read more
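
Before loading an unfamiliar checkpoint, one rough triage step is inspecting its pickle opcode stream for imported callables rather than unpickling it. The stdlib-only sketch below assumes a raw pickle file (PyTorch .bin checkpoints are zip archives whose embedded data.pkl would need extracting first) and a placeholder path; it is a heuristic aid, not a substitute for dedicated scanners or for avoiding untrusted pickles entirely.

```python
# Sketch: list disassembly lines that reference imported callables or object
# construction (GLOBAL/STACK_GLOBAL/REDUCE opcodes) in a pickle file, without
# ever loading it. Imports of modules like os or subprocess stand out here.
import io
import pickletools


def suspicious_opcodes(path: str) -> list[str]:
    with open(path, "rb") as f:
        data = f.read()
    buf = io.StringIO()
    pickletools.dis(data, out=buf)
    return [
        line
        for line in buf.getvalue().splitlines()
        if "GLOBAL" in line or "REDUCE" in line
    ]


# "suspect_model.pkl" is a placeholder path for an untrusted checkpoint.
print("\n".join(suspicious_opcodes("suspect_model.pkl")))
```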