Over 100 Malicious AI/ML Models Found on Hugging Face Platform

March 4, 2024 at 04:54AM

Security researchers have discovered around 100 malicious AI/ML models on the Hugging Face platform. These models pose a significant security threat, potentially allowing attackers to take control of machines that load them, which could lead to data breaches and corporate espionage. Separately, researchers have demonstrated techniques for manipulating large language models (LLMs) for harmful purposes, underscoring the growing security risks in the AI ecosystem.

Key takeaways:

1. As many as 100 malicious AI/ML models have been identified hosted on the Hugging Face platform.
2. Backdoors in these rogue models could grant attackers full control over compromised machines, leading to large-scale data breaches and corporate espionage (a defensive pre-load check is sketched after this list).
3. A specific IP address associated with the Korea Research Environment Open Network (KREONET) has been identified in connection with the malicious payload.
4. Researchers have also developed techniques such as BEAST to elicit harmful responses from large language models (LLMs), as well as generative AI worms like Morris II that are capable of spreading malware and stealing data.
5. The attack technique known as ComPromptMized impacts applications reliant on generative AI services and retrieval augmented generation (RAG).
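The summary does not spell out how the backdoors fire, but malicious model files of this kind typically deliver their payload through pickle deserialization, which executes arbitrary code the moment the file is loaded. The sketch below is a minimal illustration of a defensive pre-load check under that assumption: it uses Python's standard pickletools module to flag opcodes (GLOBAL, REDUCE, and similar) that can import and invoke arbitrary callables. The file name pytorch_model.bin is a hypothetical local download, and the opcode list is a rough heuristic, not a complete scanner.

```python
import pickletools
import zipfile

# Pickle opcodes that can import and call arbitrary callables -- the usual
# mechanism behind backdoored model files (assumption: the payload is
# delivered via pickle deserialization, as in most reported incidents).
SUSPICIOUS_OPCODES = {
    "GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX",
}

def scan_pickle_bytes(data: bytes) -> list[str]:
    """Return descriptions of risky opcodes found in a pickle stream."""
    findings = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPCODES:
            findings.append(f"{opcode.name}: {arg!r}")
    return findings

def scan_model_file(path: str) -> list[str]:
    """Scan a downloaded checkpoint *before* deserializing it.

    Handles both a raw pickle file and the zip layout used by torch.save,
    where the pickle lives in an inner *.pkl entry.
    """
    if zipfile.is_zipfile(path):
        findings = []
        with zipfile.ZipFile(path) as zf:
            for name in zf.namelist():
                if name.endswith(".pkl"):
                    findings.extend(scan_pickle_bytes(zf.read(name)))
        return findings
    with open(path, "rb") as f:
        return scan_pickle_bytes(f.read())

if __name__ == "__main__":
    # "pytorch_model.bin" stands in for a local copy of a hub download.
    hits = scan_model_file("pytorch_model.bin")
    if hits:
        print("Potentially code-executing pickle constructs:")
        for hit in hits:
            print("  -", hit)
    else:
        print("No suspicious opcodes found.")
```

Any hit from a check like this is a reason to inspect the file or refuse to load it; safer still is preferring formats such as safetensors that cannot execute code on load.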

