January 19, 2024 at 08:31AM
The growing use of AI in organizations introduces new security risks. When business groups adopt AI tools without informing security teams, the result is “shadow ML” and “shadow AI.” Legit Security’s platform provides visibility into all software components and developer tools in use. Securing machine learning involves discovering where it is used, threat modeling the risks, and implementing controls, with different platforms addressing each of these stages.
Here are the key takeaways from the meeting notes:
1. The growing use of AI in organizations has created challenges in managing the software supply chain and in ensuring that security and data governance policies adapt to the inclusion of AI components.
2. “Shadow ML” and “shadow AI” arise when business groups and individual employees adopt machine learning applications and AI tools without involving security teams, leaving those teams with little visibility into or control over the tools.
3. Legit Security’s platform provides visibility into all software and developer tools in use, addressing concerns about the unauthorized use of AI tools and generative AI technologies.
4. The platform helps identify all AI components in use, threat-model the associated risks, and implement controls to manage those risks effectively.
5. Legit Security, IriusRisk, and Calypso AI are working together on these challenges: Legit Security discovers AI components, IriusRisk handles threat modeling, and Calypso AI implements controls, respectively, to manage the risks of the growing use of AI in organizations.
These takeaways highlight the need for organizations to actively manage and secure the use of AI components and tools within their software supply chain, so that security and governance policies keep pace with the evolving AI landscape.