Protect AI Raises $60 Million in Series B Funding

August 2, 2024 at 08:12AM Protect AI, an AI and ML security firm, raised $60 million in Series B funding, bringing its total raised to $108.5 million. The investment, led by Evolution Equity Partners, will support its AI Security Posture Management platform, expansion of sales and customer support, R&D, and the hiring of 50 more specialists. The … Read more

How to Write a Generative AI Cybersecurity Policy

July 29, 2024 at 05:52AM Generative AI has become a permanent IT tool, placing pressure on CISOs to develop policies and technologies to address its risks. Practical guidance on establishing AI security practices and policies is urgently needed, with a focus on addressing emerging risks and implementing sensible policies for AI tools and platforms. Corporate … Read more

Nvidia Embraces LLMs & Commonsense Cybersecurity Strategy

July 26, 2024 at 01:49PM Nvidia has embraced the generative AI revolution, utilizing large language models (LLMs) and internal AI applications. At Black Hat USA, Richard Harang will discuss lessons learned in securing these systems. Despite the potential risks, securing AI systems is not inherently more difficult than securing traditional systems, though it requires the same essential security attributes. Additionally, … Read more

Securing AI Around the World

July 23, 2024 at 04:29AM Join Intel, DETASAD, Juniper Networks, and Arqit on July 31 for the webinar “Securing AI in the Middle East: Defend Against Cyber Threats.” Topics include industry awareness, AI threat landscape, security practices, building trust in AI, and enhancing AI security. Tailored for professionals in various sectors, this session aims to … Read more

Tech Giants Agree to Standardize AI Security

July 19, 2024 at 11:43AM The largest AI companies have formed CoSAI to prioritize security in the development and use of generative AI. This coalition aims to create guardrails and security technologies, focusing on AI and software supply chain security, protecting AI models from cyberattacks, and developing a framework for AI security. CoSAI will work … Read more

CoSAI: Tech Giants Form Coalition for Secure AI

July 19, 2024 at 10:12AM Google has launched the Coalition for Secure AI (CoSAI), in partnership with industry players like Amazon, IBM, and Microsoft, to address cybersecurity risks in artificial intelligence. CoSAI aims to establish common security standards, provide guidance on evaluating software supply chains, and develop frameworks for identifying and mitigating AI security impacts. … Read more

Three words to send a chill down your spine: Snowflake. Intrusion. Alert

July 13, 2024 at 11:10AM This week’s Kettle episode features a discussion on security, including the AT&T Snowflake storage account intrusion and marketing claims about AI’s ability to protect systems. The 15-minute episode includes Tobias Mann, Brandon Vigliarolo, Jessica Lyons, and host Iain Thomson. The series is available in various formats for easy access. Find … Read more

The Top 10 AI Security Risks Every Business Should Know

July 9, 2024 at 08:30AM The article discusses the top 10 AI security risks identified by OWASP for businesses adopting AI tools, categorized into access, data, and reputational/business risks. It highlights the vulnerabilities and offers protective measures, emphasizing the need for a policy foundation, security technologies, and responsible use of AI. The aim is to mitigate … Read more

Not-so-OpenAI allegedly never bothered to report 2023 data breach

July 7, 2024 at 09:52PM OpenAI faced backlash this week, following revelations of a 2023 system breach and privacy issues with its ChatGPT app for macOS. Moreover, the departure of key personnel raised concerns about its safety culture. The International Automobile Federation also reported a data breach, and a new ransomware group, Volcano Demon, was … Read more

Microsoft Details ‘Skeleton Key’ AI Jailbreak Technique

June 28, 2024 at 09:33AM Microsoft recently revealed an artificial intelligence jailbreak technique, called Skeleton Key, capable of tricking generative AI models into providing restricted information. The technique was tested against various AI models and could potentially bypass their safety measures. Microsoft reported its findings to developers and implemented mitigations in its AI products, including Copilot AI assistants. From … Read more