February 29, 2024 at 11:35AM
Approximately 100 machine learning models discovered on the Hugging Face platform could allow attackers to inject malicious code onto user machines. JFrog’s ongoing research found malicious PyTorch models carrying harmful payloads, highlighting the need for constant vigilance and proactive security measures to safeguard AI/ML engineers and organizations.
Key takeaways from the research:
– Researchers discovered about 100 potentially malicious machine learning models uploaded to the Hugging Face AI platform, posing a risk of injecting malicious code onto user machines.
– JFrog Security Research found that models uploaded to Hugging Face harbored malicious payloads, one of which opened a reverse shell connection to a real, hardcoded IP address, indicating a genuine security threat rather than a harmless proof of concept.
– The discovered IP address range belongs to KREONET, a high-speed research network in South Korea, suggesting that AI researchers or practitioners may be involved.
– Loading certain types of ML models, particularly those serialized with Python’s “pickle” format, executes code embedded in the file and can therefore trigger malicious behavior at load time (see the sketch after this list).
– Hugging Face applies various security protections, such as scanning and flagging suspicious models, but it doesn’t outright block or restrict potentially harmful ones, so users must stay vigilant and load untrusted models defensively (safer loading options are sketched below).
– Publicly available, potentially malicious AI/ML models pose a significant supply-chain risk and call for new mechanisms such as Huntr, a bug-bounty platform focused on AI/ML, to strengthen the security posture of AI models and platforms.
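To make the pickle risk concrete, the minimal sketch below shows the underlying mechanism: pickle invokes an object’s `__reduce__` method during deserialization and executes whatever callable it returns. The class name and the `echo` command are illustrative stand-ins, not taken from the actual malicious models, where a real payload would spawn a reverse shell instead.

```python
import os
import pickle


class MaliciousPayload:
    # pickle calls __reduce__ while deserializing and executes the
    # callable it returns -- the victim never has to run model code.
    def __reduce__(self):
        # Harmless stand-in; a real payload would open a reverse shell.
        return (os.system, ("echo 'code executed at load time'",))


blob = pickle.dumps(MaliciousPayload())

# Merely loading the bytes runs the embedded command.
pickle.loads(blob)
```

Because `torch.load` uses pickle under the hood for legacy checkpoint files, simply calling it on an untrusted model file is enough to trigger such a payload.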
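On the defensive side, here is a hedged sketch of two common mitigations, assuming a PyTorch 1.13+ environment with the `safetensors` package installed; the file names are hypothetical placeholders.

```python
import torch
from safetensors.torch import load_file

# Option 1: weights_only=True (PyTorch 1.13+) restricts torch.load to
# plain tensor data and refuses to unpickle arbitrary objects.
state_dict = torch.load("model.bin", map_location="cpu", weights_only=True)

# Option 2: the safetensors format stores raw tensors only and performs
# no object deserialization at all when loading.
state_dict = load_file("model.safetensors")
```

The safetensors format removes the deserialization attack surface entirely, which is why Hugging Face has been promoting it as the preferred way to distribute model weights.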