Google Announces Bug Bounty Program and Other Initiatives to Secure AI

Google Announces Bug Bounty Program and Other Initiatives to Secure AI

October 26, 2023 at 10:39AM

Google has announced several initiatives to enhance the safety and security of AI. This includes a bug bounty program to reward researchers for identifying vulnerabilities in generative AI, a Secure AI Framework (SAIF) to protect critical components of machine learning, and a $10 million AI Safety Fund in collaboration with Anthropic, Microsoft, and OpenAI.

During the meeting, Google made several announcements regarding their efforts to enhance the safety and security of AI. They introduced a bug bounty program and a $10 million fund as part of these initiatives.

Google’s bug bounty program, called the vulnerability reporting program (VRP), will reward researchers who discover vulnerabilities in generative AI. The aim is to address concerns such as unfair bias, hallucinations, and model manipulation. The program encourages researchers to report attack scenarios, such as prompt injections, data leaks, tampering with model behavior, misclassifications in security controls, and the extraction of confidential model architecture or weights. Google is also open to rewarding researchers for identifying other vulnerabilities or behaviors in AI-powered tools that could pose security or abuse risks. The amounts of these rewards will be determined based on the severity of the attack scenario and the affected target.

To further enhance AI security, Google introduced the Secure AI Framework (SAIF). This framework focuses on securing critical components of the machine learning (ML) supply chain. Google is now announcing prototypes for model signing and attestation verification, which utilize Sigstore and SLSA to verify software identities and improve supply chain resilience. The goal is to increase transparency in the ML supply chain throughout the development lifecycle and mitigate the rise in supply chain attacks. Google believes that the supply chain solutions from SLSA and Sigstore can be applied to ML models to protect them against supply chain attacks.

In collaboration with Anthropic, Microsoft, and OpenAI, Google also announced the establishment of a $10 million AI Safety Fund. This fund is dedicated to promoting research in the field of AI safety.

These initiatives demonstrate Google’s commitment to addressing AI security concerns and ensuring the safe and responsible development of AI technologies.

Full Article