Protect AI Releases 3 AI/ML Security Tools as Open Source

Protect AI Releases 3 AI/ML Security Tools as Open Source

October 11, 2023 at 08:42AM

Protect AI, the maker of Huntr, a bug bounty program for open source software, has licensed three of its AI/ML security tools under the permissive Apache 2.0 terms. The first tool, NB Defense, helps protect machine learning projects in Jupyter Notebooks. The second tool, ModelScan, scans ML models for attacks and the third tool, Rebuff, addresses prompt injection attacks. All three tools are available on GitHub.

Protect AI, the company behind Huntr, a bug bounty program for open source software (OSS), is expanding its presence in the OSS world by licensing three AI/ML security tools under the permissive Apache 2.0 terms.

The first tool, NB Defense, was developed by Protect AI to protect machine learning projects created in Jupyter Notebooks. Jupyter Notebooks have become a common application favored by data scientists, making them a target for hackers. NB Defense consists of two tools: the JupityrLab extension, which identifies and fixes security issues within a Notebook, and the CLI tool, which allows for scanning multiple Notebooks simultaneously and automatically scans those being uploaded to a central repository. NB Defense scans for vulnerabilities such as secrets, personally identifiable information (PII), CVE exposures, and code subject to restrictive third-party licenses.

The second tool, ModelScan, addresses the need for secure sharing of ML models across the Internet. It scans popular formats such as Pytorch, Tensorflow, and Keras for model serialization attacks such as credential theft, data poisoning, model poisoning, and privilege escalation.

Rebuff, the third tool, was an open source project that Protect AI acquired in July 2023. Rebuff focuses on detecting and mitigating prompt injection (PI) attacks, where attackers send malicious inputs to large language models (LLMs). It uses a four-layer defense approach, including heuristics, a dedicated LLM for analysis, a database of known attacks, and canary tokens to detect leaks and prevent manipulation of outputs and exposure of sensitive data.

The increasing use of AI and LLMs by organizations of all sizes has created a demand for tools to secure these models. Other companies, such as HiddenLayer and Microsoft, have also developed tools to protect AI systems against adversarial attacks. Recent vulnerabilities in TorchServe highlight the importance of securing AI systems even for major players like Walmart and major cloud service providers.

All three of Protect AI’s tools, NB Defense, ModelScan, and Rebuff, are available on GitHub.

Full Article