Tech Giants Agree to Standardize AI Security

July 19, 2024 at 11:43AM

The largest AI companies have formed CoSAI to prioritize security in the development and use of generative AI. This coalition aims to create guardrails and security technologies, focusing on AI and software supply chain security, protecting AI models from cyberattacks, and developing a framework for AI security. CoSAI will work with other organizations to develop common standards and best practices.

Key takeaways:

1. The coalition, the Coalition for Secure AI (CoSAI), brings together the largest AI companies to prioritize AI security and develop standardized guardrails and security technologies, including a secure framework to protect AI models from cyberattacks.

2. CoSAI’s initial workstreams include AI and software supply chain security, as well as preparing defenders for a changing cyber landscape.

3. CoSAI’s founding members include Google, OpenAI, and Anthropic, along with infrastructure providers like Microsoft, IBM, Intel, Nvidia, and PayPal.

4. There is a focus on AI safety, trust, and transparency, driven by cybersecurity concerns and the need to mitigate misuse and improve threat detection capabilities.

5. In July 2023, the Biden administration secured voluntary commitments from major AI companies to develop safety standards and prevent AI's misuse, including for the creation of biological materials and for fraud and deception.

6. CoSAI aims to work with other organizations, such as the Frontier Model Forum, Partnership on AI, OpenSSF, and MLCommons, to develop common standards and best practices.

7. MLCommons plans to release an AI safety benchmarking suite to rate LLMs on responses related to hate speech, exploitation, child abuse, and sex crimes.

8. CoSAI will be managed by OASIS Open, a standards body known for its work on open standards and open-source development projects.