Black Hat USA 2024 – Summary of Vendor Announcements

August 12, 2024 at 09:18AM The 2024 Black Hat conference in Las Vegas saw numerous cybersecurity product and service announcements. Highlights include free ICS analysis tools from Claroty, a bug bounty initiative by Anthropic, and new offerings from companies like Sysdig, Cymulate, and Vectra AI. Additionally, findings from various security firms and platform launches were … Read more

Anthropic: Expanding Our Model Safety Bug Bounty Program

August 9, 2024 at 02:04PM To enhance AI model safety, we’re expanding our bug bounty program to focus on identifying and mitigating universal jailbreak attacks that could bypass AI safety measures. The $15,000 reward program, in partnership with HackerOne, invites experienced AI security researchers to apply for an early access test phase before public deployment. … Read more

EQT buys majority share in Swiss cybersecurity biz Acronis

August 7, 2024 at 06:15AM Swiss disaster recovery and cybersecurity firm Acronis has been acquired by EQT, Europe’s largest private equity firm, which is taking a majority stake. Acronis CEO Ezequiel Steiner expressed enthusiasm about the partnership and acknowledged existing investors. The deal is valued at an estimated $3.5 billion, with the transaction expected to close in Q1 or Q2 2025. Co-founders … Read more

Google Introduces Project Naptime for AI-Powered Vulnerability Research

June 24, 2024 at 11:24AM Google has unveiled Project Naptime, a framework that lets an AI agent conduct vulnerability research by mimicking the workflow of human security researchers. It comprises tools including a Code Browser, a Python tool, a Debugger, and a Reporter. Naptime is model-agnostic and better at flagging security flaws, scoring higher than OpenAI’s GPT-4 Turbo in vulnerability tests. It enables LLM … Read more

OpenAI Co-Founder Sutskever Sets up New AI Company Devoted to ‘Safe Superintelligence’

June 20, 2024 at 11:18AM Ilya Sutskever, a co-founder of OpenAI, has started a new company, Safe Superintelligence Inc., focused on developing “superintelligence” safely. Based in Palo Alto and Tel Aviv, the company aims to prioritize safety and security over short-term commercial pressures. Sutskever and his co-founders have resigned from OpenAI to … Read more

AI Companies Make Fresh Safety Promise at Seoul Summit, Nations Agree to Align Work on Risks

May 21, 2024 at 08:06PM Top AI companies including Google, Meta, and OpenAI made voluntary safety commitments at the AI Seoul Summit, agreeing to pull the plug on their cutting-edge systems in extreme cases. World leaders also pledged to establish safety institutes and align their work on AI research. The meeting aims to address the … Read more

A Former OpenAI Leader Says Safety Has ‘Taken a Backseat to Shiny Products’ at the AI Company

May 17, 2024 at 03:37PM Former OpenAI leader Jan Leike resigned, stating that safety has “taken a backseat to shiny products” at the influential AI company. He disagreed with the company’s core priorities, emphasizing the need to focus on safety and the societal impacts of AI. His resignation follows that of co-founder Ilya Sutskever, who is now … Read more

Tech Companies Want to Build Artificial General Intelligence. But Who Decides When AGI is Attained?

April 5, 2024 at 10:48AM The race for artificial general intelligence (AGI) is led by tech giants, but its definition remains elusive. AGI, a system surpassing human cognitive abilities, is still a distant goal. Ethical concerns arise, with researchers debating how its achievement should be measured and its potential existential risks. As companies like OpenAI and Meta prioritize AGI, its implications … Read more

SecurityWeek to Host AI Risk Summit June 25-26 at the Ritz-Carlton, Half Moon Bay CA

March 6, 2024 at 08:31AM SecurityWeek will host the AI Risk Summit on June 25-26, 2024, at the Ritz-Carlton in Half Moon Bay, CA. The summit brings together industry experts to discuss the risks of deploying AI tools, adversarial use of AI technology, compliance and regulations, and cybersecurity. Registration is open with a discounted rate … Read more

Malicious AI models on Hugging Face backdoor users’ machines

February 28, 2024 at 05:16PM JFrog’s security team detected around 100 malicious AI/ML models on the Hugging Face platform, some of which could give attackers persistent backdoor access to victims’ machines. Despite Hugging Face’s security measures, the models evaded detection, indicating significant security risks. JFrog urges heightened vigilance and proactive measures to protect against such threats. Based … Read more