Startups Scramble to Build Immediate AI Security

January 2, 2024 at 10:07AM

In early 2023, the introduction of ChatGPT made artificial intelligence (AI) security an immediate concern, affecting startups focused on machine-learning security operations (MLSecOps), AppSec remediation, and privacy enhancement through homomorphic encryption. Today's AI faces significant vulnerability challenges, particularly around the security of foundational models. Startups are debating various approaches, including fully homomorphic encryption (FHE), but concerns persist about its practicality given its computational and cost overhead. Despite the potential benefits of AI, only a few early-stage startups are leading the way in addressing AI security, and their progress is worth watching closely.

The key takeaways are as follows:

1. AI Insecurity: Foundational models in AI are open to attack through both existing vulnerabilities and new mathematical threats. Enumerating model versions before release is not yet widespread practice, and the AI establishment has yet to address cybersecurity risks.

2. MLSecOps Startups: Startups within the MLSecOps space are engaging in debates about focusing on different parts of the ML life cycle. There is a focus on addressing adversarial AI attacks on deployed production models, securing bespoke model development, and analyzing foundational models for vulnerabilities.

3. Fully Homomorphic Encryption (FHE): FHE offers potential for secure collaboration and privacy but faces practical hurdles, notably the growth in ciphertext size after encryption and prohibitive computing time and cost. Some startups are therefore targeting specific high-value uses of FHE, such as blockchain encryption.

4. Spiraling Compute Costs: AI is burdened by high compute costs, and only a small number of innovators at early growth startups have coherent visions of AI security.

These takeaways highlight the ongoing concerns and efforts in addressing the security challenges associated with AI and the potential of technologies like FHE within the industry.
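To make the homomorphic-encryption idea and the ciphertext-expansion problem concrete, here is a minimal sketch of a Paillier-style additively homomorphic scheme in Python. This is an illustration only: Paillier supports addition on ciphertexts but is not fully homomorphic, the primes below are toy-sized and insecure, and all names are chosen for this example rather than taken from any startup's product.

```python
import math
import random

def keygen(p=1789, q=2003):
    # Toy primes for illustration only -- real deployments use keys of
    # thousands of bits, which is one source of FHE's cost problem.
    n = p * q
    n2 = n * n
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    # Ciphertexts live in Z_{n^2}: roughly twice the bits of the
    # plaintext space -- the expansion problem mentioned above.
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
# Multiplying ciphertexts adds the underlying plaintexts.
c_sum = (c1 * c2) % (pub[0] ** 2)
print(decrypt(pub, priv, c_sum))  # 42
```

Even this toy scheme shows why FHE startups focus on narrow, high-value workloads: each operation on encrypted data costs large-modulus exponentiations, and ciphertexts are substantially larger than the values they protect.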