IriusRisk Brings Threat Modeling to Machine Learning Systems

October 26, 2023 at 10:06PM

Organizations are increasingly adopting threat modeling to identify security flaws in software design, a need made more pressing by the rising use of machine learning. Threat modeling helps organizations understand and mitigate the security risks in machine learning systems. IriusRisk offers a threat modeling tool that automates the process and includes an AI & ML Security Library to help organizations analyze and secure their machine learning systems. Organizations should incorporate threat modeling into their software design process to enhance security.

The article discusses the importance of threat modeling in software development, particularly when machine learning is involved. Threat modeling helps identify security risks and flaws in software design, and machine learning systems introduce risks of their own, such as data poisoning, input manipulation, and data extraction. Addressing these risks through threat modeling early in the development process can reduce the need for extensive security testing later on, and the practice is recommended by NIST's Guidelines on Minimum Standards for Developer Verification of Software.
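
To make the first of those risks concrete, here is a minimal sketch of data poisoning against a toy nearest-centroid spam filter. The example is invented for illustration (it is not from the article or from IriusRisk): an attacker who can influence the labeled training data, for instance through a "not spam" feedback button, drags the legitimate-mail centroid toward spam-like values until real spam is misclassified.

```python
# Illustrative sketch of data poisoning against a toy nearest-centroid
# spam filter. The single feature is the fraction of "suspicious" tokens
# in a message (a made-up feature for demonstration purposes).

def centroid(values):
    return sum(values) / len(values)

def classify(x, spam_centroid, ham_centroid):
    # Assign the label of whichever class centroid is nearer.
    return "spam" if abs(x - spam_centroid) < abs(x - ham_centroid) else "ham"

# Clean training data: spam has a high suspicious-token fraction.
spam_train = [0.8, 0.9, 0.85, 0.75]   # centroid 0.825
ham_train = [0.1, 0.2, 0.15, 0.05]    # centroid 0.125

# A clearly spammy message (x = 0.65) is caught.
print(classify(0.65, centroid(spam_train), centroid(ham_train)))  # -> spam

# The attacker submits spam-like messages and marks them "not spam",
# so they enter the training set with the ham label (data poisoning),
# dragging the ham centroid up to 0.5.
poisoned_ham = ham_train + [0.7, 0.75, 0.8, 0.7, 0.75, 0.8]

# The same spammy message now slips through.
print(classify(0.65, centroid(spam_train), centroid(poisoned_ham)))  # -> ham
```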

IriusRisk offers a threat modeling tool that automates both threat modeling and architecture risk analysis. Developers and security teams can import code to generate diagrams and threat models. The tool also provides threat modeling templates to make it accessible for individuals who are not familiar with diagramming tools or risk analysis. Additionally, the newly launched AI & ML Security Library allows organizations using IriusRisk to threat model their machine learning systems and understand the associated security risks and mitigation strategies.
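
The article does not show what a machine-readable architecture input looks like, but IriusRisk supports the Open Threat Model (OTM) standard for describing systems. As a rough, hypothetical sketch, an ML system might be described along these lines; the field names follow the general shape of the OTM standard, while all IDs, component types, and values below are invented for illustration.

```python
import json

# Hypothetical, minimal OTM-style description of an ML system: a client
# in an untrusted zone calling an inference API fed by a training data
# store. Invented for illustration; consult the OTM spec for the schema.
otm = {
    "otmVersion": "0.1.0",
    "project": {"id": "ml-fraud-scoring", "name": "Fraud Scoring Service"},
    "trustZones": [
        {"id": "internet", "name": "Internet", "risk": {"trustRating": 1}},
        {"id": "internal", "name": "Internal Network", "risk": {"trustRating": 80}},
    ],
    "components": [
        {"id": "client", "name": "Client App", "type": "web-client",
         "parent": {"trustZone": "internet"}},
        {"id": "model-api", "name": "Model Inference API", "type": "api-gateway",
         "parent": {"trustZone": "internal"}},
        {"id": "training-data", "name": "Training Data Store", "type": "data-store",
         "parent": {"trustZone": "internal"}},
    ],
    "dataflows": [
        {"id": "df1", "name": "Prediction request",
         "source": "client", "destination": "model-api"},
        {"id": "df2", "name": "Training data ingestion",
         "source": "training-data", "destination": "model-api"},
    ],
}

print(json.dumps(otm, indent=2))
```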

Some essential questions that organizations using AI and ML should consider during threat modeling include:
1. Where did the data used to train the machine learning model come from? Has anyone embedded incorrect or malicious data?
2. How does the machine continue to learn once it is in production? Online machine learning systems that continuously learn from users may have more risks compared to offline systems.
3. Can confidential information be extracted from the model? The system should be designed so that users cannot recover confidential data by querying it (a minimal mitigation sketch follows this list).
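
For the third question, one common class of mitigation is to limit what the prediction endpoint reveals, since full, high-precision probability vectors make model extraction and membership-inference attacks easier. The sketch below is hypothetical (the model and response shape are invented, not from IriusRisk or the article) and shows the idea of coarsening a prediction response.

```python
# Hypothetical sketch: reduce the information a prediction endpoint leaks.
# Returning only the top label with a coarse confidence bucket, instead of
# full high-precision probabilities, is one common hardening step (at some
# cost to legitimate consumers of the API).

def raw_predict(features):
    # Stand-in for a real model; returns per-class probabilities.
    return {"fraud": 0.9731842, "legit": 0.0268158}

def hardened_predict(features):
    probs = raw_predict(features)
    label = max(probs, key=probs.get)          # top label only
    confidence = round(probs[label], 2)        # coarse confidence bucket
    return {"label": label, "confidence": confidence}

print(hardened_predict({"amount": 1200}))
# -> {'label': 'fraud', 'confidence': 0.97}
```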

The AI & ML Security Library is based on the BIML ML Security Risk Framework, developed by Gary McGraw, co-founder of the Berryville Institute of Machine Learning (BIML). The framework offers a taxonomy of machine learning threats and an architectural risk assessment of typical machine learning components, and is designed to assist developers, engineers, and designers during the early phases of machine learning projects. The library is available both to IriusRisk customers and to users of the platform's community edition.

Threat modeling with the AI & ML Security Library can provide significant value, especially for organizations in the finance and technology sectors: it helps teams understand security goals during the design phase of ML projects and guides them through the steps needed to meet them. Organizations first need visibility into their machine learning usage, however, since shadow machine learning, where departments adopt applications and tools without IT or security oversight, may already exist.
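
The article does not describe how to find shadow machine learning, but one rough first pass is to inventory dependency manifests for well-known ML libraries. The sketch below assumes projects live as local Git checkouts with Python requirements files under a hypothetical `/srv/repos` directory, and the library list is illustrative, not exhaustive.

```python
# Rough first-pass inventory of "shadow ML": scan checked-out repositories
# for Python requirements files that pull in well-known ML libraries.
from pathlib import Path

REPO_ROOT = Path("/srv/repos")  # hypothetical location of cloned repos
ML_LIBRARIES = {"torch", "tensorflow", "scikit-learn", "sklearn",
                "xgboost", "transformers", "keras", "lightgbm"}

def find_ml_usage(root: Path) -> dict:
    hits = {}
    for req in root.rglob("requirements*.txt"):
        deps = set()
        for line in req.read_text(errors="ignore").splitlines():
            name = line.split("#")[0].strip()
            # Keep only the package name, dropping version specifiers
            # and extras (e.g. "transformers[torch]>=4.0").
            for sep in ("==", ">=", "<=", "~=", ">", "<", "["):
                name = name.split(sep)[0]
            if name.lower() in ML_LIBRARIES:
                deps.add(name.lower())
        if deps:
            hits[str(req)] = sorted(deps)
    return hits

for path, libs in find_ml_usage(REPO_ROOT).items():
    print(f"{path}: {', '.join(libs)}")
```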

While many organizations do not currently incorporate threat modeling into their software design, those that do can enhance their practice by including machine learning threat modeling, and tools like IriusRisk can help them mature their threat modeling program. The article suggests that organizations not yet conducting threat modeling should start: it is a well-established practice in the industry, and the time to adopt it is now.
