June 13, 2024 at 10:25AM
A newly disclosed attack method called Sleepy Pickle poses a significant security risk to machine learning (ML) models. The attack corrupts the pickle files used to package ML models, inserting payloads that modify model behavior and output. Loading models only from trusted sources is recommended to mitigate this risk.
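Because a pickle stream is a small program executed at load time, one partial defense is to scan a file for the opcodes that can resolve and invoke arbitrary callables before ever calling pickle.loads. The sketch below uses Python's standard pickletools module for this; the function name suspicious_opcodes and the exact opcode set are illustrative choices, not an established tool's API, and opcode scanning alone is not a complete defense.

```python
import pickle
import pickletools

# Opcodes that can look up and call arbitrary Python objects at load time.
# (Illustrative subset; a real scanner would be more thorough.)
SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def suspicious_opcodes(blob: bytes) -> list[str]:
    """Return the names of risky opcodes found in a pickle stream."""
    return [op.name for op, _, _ in pickletools.genops(blob) if op.name in SUSPICIOUS]

# A pickle of plain data (the shape a benign weights file should have)
# contains none of these opcodes.
plain = pickle.dumps({"weights": [0.1, 0.2]})
print(suspicious_opcodes(plain))

# Pickling any callable by reference requires a GLOBAL/STACK_GLOBAL lookup,
# which is exactly what a payload-bearing file must contain.
print(suspicious_opcodes(pickle.dumps(print)))
```

Scanning happens on the raw bytes, so nothing in the file runs; contrast this with pickle.loads, which executes the stream while parsing it.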
Key Takeaways from the Meeting Notes:
– A new security risk, named Sleepy Pickle, has been discovered, which exploits the Pickle format commonly used to package and distribute machine learning models. This poses a significant supply chain risk to organizations.
– Sleepy Pickle is a stealthy attack that targets the machine learning model itself rather than the underlying system, allowing attackers to corrupt models and manipulate their behavior.
– The attack method involves inserting a payload into a pickle file using open-source tools like Fickling, which can then be delivered to a target host using various techniques like phishing, supply chain compromise, or system weakness exploitation.
– The payload injected into the pickle file can be abused to alter model behavior, tamper with model weights, manipulate input and output data, and generate harmful outputs or misinformation.
– Threat actors can use Sleepy Pickle to maintain surreptitious access to machine learning systems, evade detection, and modify model behavior or outputs dynamically.
– Sleepy Pickle effectively broadens the attack surface: control over any pickle file in the target organization's supply chain is enough to attack its models.
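The payload mechanism the notes above describe rests on a documented pickle feature: any object can define __reduce__ to name a callable that the unpickler invokes during loading. The minimal sketch below uses the harmless builtin print as a stand-in for attacker code (such as code that patches model weights in memory); the class name MaliciousPayload is hypothetical, and real attacks built with tools like Fickling inject such payloads into existing model files rather than fresh pickles.

```python
import pickle

class MaliciousPayload:
    """Demonstrates load-time code execution via the pickle protocol."""
    def __reduce__(self):
        # The unpickler will call print(...) while loading the file.
        # An attacker would substitute arbitrary code here.
        return (print, ("payload executed at load time",))

blob = pickle.dumps(MaliciousPayload())

# Simply loading the bytes runs the attacker's callable -- no attribute
# access or method call on the resulting object is needed.
obj = pickle.loads(blob)
```

Note that the loaded object is just the callable's return value (None for print); the damage is done during deserialization itself, which is why inspection after loading comes too late.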
Please feel free to reach out if you need further details or have additional questions.