October 4, 2024 at 05:17PM
MITRE’s Center for Threat-Informed Defense launched the AI Incident Sharing initiative, collaborating with over 15 companies to enhance community knowledge of threats and defenses for AI-enabled systems. The Secure AI project aims to facilitate secure collaboration on AI incidents and has extended the ATLAS threat framework to address generative AI-enabled system threats. The initiative invites incident submissions and aims to enable data-driven risk intelligence and analysis.
MITRE’s Center for Threat-Informed Defense has announced the launch of the AI Incident Sharing initiative, a collaboration with more than 15 companies aimed at increasing community knowledge of threats and defenses for AI-enabled systems.
The initiative is part of the Secure AI project and seeks to enable quick and secure collaboration on threats, attacks, and accidents involving AI-enabled systems. It will expand the MITRE ATLAS community knowledge base by providing protected and anonymized data on real-world AI incidents to a community of collaborators.
Organizations can submit incidents via the web; submitters will then be considered for membership in the community, with the goal of enabling data-driven risk intelligence and analysis at scale. MITRE has also extended the ATLAS threat framework to cover the threat landscape for generative AI-enabled systems, adding new case studies, attack techniques, and mitigation methods.
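To make the idea of protected, anonymized incident data mapped to ATLAS concrete, here is a minimal sketch in Python of what a shared incident record might look like. The AIIncidentRecord class, its field names, and the mapping to technique ID AML.T0051 are illustrative assumptions for this sketch, not MITRE's actual submission schema, which is not described in the announcement.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIIncidentRecord:
    """Hypothetical anonymized AI incident record (illustrative fields only)."""
    incident_date: date          # when the incident was observed
    sector: str                  # submitting organization's industry, anonymized
    system_type: str             # kind of AI-enabled system affected
    summary: str                 # anonymized description of what happened
    atlas_technique_ids: list = field(default_factory=list)   # mapped ATLAS technique IDs
    mitigations_applied: list = field(default_factory=list)   # mitigations taken afterward

# Example: a prompt-injection incident mapped to an ATLAS technique ID
# (AML.T0051 is assumed here to correspond to LLM Prompt Injection in ATLAS).
incident = AIIncidentRecord(
    incident_date=date(2024, 9, 12),
    sector="financial services",
    system_type="generative AI customer-support assistant",
    summary="Attacker-supplied text caused the assistant to ignore its system prompt.",
    atlas_technique_ids=["AML.T0051"],
    mitigations_applied=["input filtering", "output monitoring"],
)
print(incident)

Structured records of this kind are what would allow the community to aggregate incidents across organizations and analyze risk at scale, as the initiative intends.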
The extension of the ATLAS knowledge base focused on generative AI was developed in collaboration with Microsoft. Douglas Robbins, vice president of MITRE Labs, emphasized that standardized and rapid information sharing about incidents will improve the collective defense of such systems and mitigate external harms.
Furthermore, MITRE participates in a public-private partnership built around the Aviation Safety Information Analysis and Sharing database, through which data and safety information are shared to identify and prevent hazards in aviation.
The Secure AI collaborators span industries including financial services, technology, and healthcare: AttackIQ, BlueRock, Booz Allen Hamilton, Cato Networks, Citigroup, Cloud Security Alliance, CrowdStrike, FS-ISAC, Fujitsu, HCA Healthcare, HiddenLayer, Intel, JPMorgan Chase Bank, Microsoft, Standard Chartered, and Verizon Business.