Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

October 27, 2023 at 08:00AM

Google is expanding its Vulnerability Rewards Program to reward researchers who find vulnerabilities in generative artificial intelligence systems. The program aims to address AI-specific concerns such as unfair bias, model manipulation, and misinterpretation of data. Google is also working to secure the AI supply chain through open-source security initiatives. Separately, OpenAI has formed a Preparedness team to protect against risks posed by generative AI.

Google announced the expansion of its Vulnerability Rewards Program (VRP) to reward researchers for discovering attack scenarios tailored specifically to generative artificial intelligence (AI) systems, an initiative intended to improve AI safety and security. Google acknowledges that generative AI raises concerns different from those of traditional digital security, such as unfair bias, model manipulation, and misinterpretation of data. The announcement highlights several categories within the program's scope, including prompt injection, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks, and model theft.

Furthermore, Google established an AI Red Team in July, as part of its Secure AI Framework (SAIF), to address threats to AI systems. The company also mentioned efforts to strengthen the AI supply chain through existing open-source security initiatives like Supply Chain Levels for Software Artifacts (SLSA) and Sigstore. Sigstore allows users to verify that software hasn’t been tampered with or replaced, while SLSA provenance provides metadata about the software’s composition and construction, enabling consumers to ensure license compatibility, identify known vulnerabilities, and detect advanced threats.
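To make the provenance idea concrete, here is a minimal Python sketch of the kind of check a consumer might perform: comparing an artifact's SHA-256 digest against the subjects listed in an in-toto/SLSA-style provenance statement. The statement layout is simplified and the artifact contents are hypothetical; real verification (as with Sigstore) also checks cryptographic signatures, which this sketch omits.

```python
import hashlib

def artifact_digest(data: bytes) -> str:
    """Return the hex-encoded SHA-256 digest of the artifact bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_provenance(artifact: bytes, statement: dict) -> bool:
    """Check whether the artifact's digest appears among the subjects
    of a (simplified) in-toto/SLSA provenance statement."""
    digest = artifact_digest(artifact)
    return any(
        subject.get("digest", {}).get("sha256") == digest
        for subject in statement.get("subject", [])
    )

# Hypothetical artifact and provenance statement, for illustration only.
artifact = b"example release tarball bytes"
statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "predicateType": "https://slsa.dev/provenance/v1",
    "subject": [
        {
            "name": "example-1.0.tar.gz",
            "digest": {"sha256": artifact_digest(artifact)},
        },
    ],
}

print(matches_provenance(artifact, statement))           # True: digest matches
print(matches_provenance(b"tampered bytes", statement))  # False: artifact altered
```

A digest mismatch like the second call is how a consumer would detect that an artifact was tampered with or replaced after the provenance was generated.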

Additionally, OpenAI has formed an internal Preparedness team to monitor and protect against catastrophic risks posed by generative AI, spanning cybersecurity as well as chemical, biological, radiological, and nuclear (CBRN) threats.

Together with Anthropic and Microsoft, Google has also announced the creation of a $10 million AI Safety Fund, which aims to promote research in the field of AI safety.

