A Former OpenAI Leader Says Safety Has ‘Taken a Backseat to Shiny Products’ at the AI Company

May 17, 2024 at 03:37PM Former OpenAI leader Jan Leike resigned, saying that at the influential AI company, safety has “taken a backseat to shiny products.” He disagreed with the company’s core priorities and emphasized the need to focus on safety and the societal impacts of AI. His resignation follows that of co-founder Ilya Sutskever, who is now …

User Outcry as Slack Scrapes Customer Data for AI Model Training

May 17, 2024 at 01:42PM Slack’s privacy controversy stems from its scraping of customer data, including messages and files, for AI/ML model development without user opt-in. Despite the company’s assurances, Slack admins are seeking ways to opt out of the data scraping. Slack points to its technical controls, but CISOs argue that customers should not bear this burden. Slack offers assurances of platform-level ML model transparency …

Critical Flaw in AI Python Package Can Lead to System and Data Compromise

May 17, 2024 at 09:57AM A critical vulnerability, tracked as CVE-2024-34359 and dubbed Llama Drama, was discovered in a Python package widely used by AI developers. The flaw allows arbitrary code execution, putting systems and data at risk. Cybersecurity firm Checkmarx detailed the issue, and a patch has been released with llama_cpp_python 0.2.72. More …
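
A quick hygiene check is to refuse to run against a pre-patch install. The sketch below is illustrative rather than from the article: it assumes the public PyPI distribution name llama_cpp_python and a plain X.Y.Z version string.

```python
# Minimal sketch: warn if the installed llama-cpp-python predates the
# patched 0.2.72 release mentioned above. Assumes a plain X.Y.Z version
# string; local builds with suffixes may need extra parsing.
from importlib.metadata import PackageNotFoundError, version

PATCHED = (0, 2, 72)

def is_patched(dist: str = "llama_cpp_python") -> bool:
    try:
        installed = tuple(int(part) for part in version(dist).split(".")[:3])
    except PackageNotFoundError:
        return True  # package absent, so nothing to upgrade
    return installed >= PATCHED

if __name__ == "__main__":
    print("OK" if is_patched() else "VULNERABLE: upgrade llama_cpp_python to >= 0.2.72")
```

Pinning `llama_cpp_python>=0.2.72` in a requirements file accomplishes the same thing declaratively.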

Cybercriminals Weigh Options for Using LLMs: Buy, Build, or Break?

April 1, 2024 at 05:07PM Cybercriminals pose a threat by coercing legitimate AI models into malicious behavior, but the greater danger lies in their building of malicious chatbot platforms and their abuse of open-source models. There are also concerns about cybercriminals bypassing security measures to manipulate legitimate …

CISO Corner: Operationalizing NIST CSF 2.0; AI Models Run Amok

March 1, 2024 at 05:44PM CISO Corner provides a weekly digest of cybersecurity articles for security operations readers and leaders. The current issue covers topics such as the NIST Cybersecurity Framework 2.0, quantum-resistant encryption, managing AI models, SEC penalties for data breach disclosure, biometric regulation challenges, an Iranian hacking group targeting aerospace and defense firms, microprocessor security …

It’s 10PM, Do You Know Where Your AI Models Are Tonight?

March 1, 2024 at 04:08PM The explosive growth of AI will immensely complicate software supply chain security. AI and ML models, integral to AI applications, add to that complexity. Developers must understand and secure these models, but existing security tools are ill-equipped for the task. Consequently, a new approach called MLSecOps is needed to address …
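
To make the tooling gap concrete, here is a minimal sketch (my illustration of the general technique, not anything from the article) of one check an MLSecOps pipeline can add: flagging pickle-serialized model files whose opcode stream can import or construct arbitrary objects, a classic code-execution vector in ML artifacts.

```python
# Minimal sketch: list pickle opcodes that can import or construct arbitrary
# objects (a well-known model-file code-execution vector). Real scanners such
# as picklescan go much further; this only illustrates the idea.
import sys
import pickletools

SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ"}

def risky_opcodes(path):
    with open(path, "rb") as fh:
        data = fh.read()
    return [
        f"{op.name} {arg!r}"
        for op, arg, _pos in pickletools.genops(data)
        if op.name in SUSPICIOUS
    ]

if __name__ == "__main__":
    hits = risky_opcodes(sys.argv[1])
    print("\n".join(hits) or "no import/construct opcodes found")
```

A hit is not proof of malice (legitimate models import classes too), but it is exactly the kind of model-aware signal that conventional security tools lack.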

White House Wades Into Debate on ‘Open’ Versus ‘Closed’ Artificial Intelligence Systems

February 24, 2024 at 02:45PM As part of an executive order, the Biden administration is seeking public input on the benefits and risks of open-source versus closed powerful AI systems. Tech companies vary in their approach, with some advocating open models to spur innovation while others prioritize safety. Google has released an open model called Gemma, …

Exposed Hugging Face API tokens offered full access to Meta’s Llama 2

December 4, 2023 at 09:06AM Lasso Security researchers found more than 1,500 API tokens, including some belonging to Meta and Google, exposed on Hugging Face, creating a risk of supply chain attacks and granting access to 723 organizations. Exposed tokens with write permissions could be used to alter files, steal private models, or poison data, affecting more than a million users. All affected parties …
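
A hedged, generic countermeasure (my sketch, not Lasso’s methodology): Hugging Face user access tokens start with the hf_ prefix, so repositories can be swept for hardcoded candidates before they are published. The minimum-length threshold below is an assumption to tune.

```python
# Minimal sketch: flag strings shaped like Hugging Face access tokens
# (they begin with "hf_"). The {20,} length floor is an assumption; real
# secret scanners also check entropy and known provider formats.
import re
import sys
from pathlib import Path

TOKEN_RE = re.compile(rb"hf_[A-Za-z0-9]{20,}")

def scan(root="."):
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            blob = path.read_bytes()
        except OSError:
            continue
        for match in TOKEN_RE.findall(blob):
            # Redact most of the match so the report itself leaks nothing.
            yield path, match[:6].decode() + "..."

if __name__ == "__main__":
    for path, token in scan(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(f"{path}: possible token {token}")
```

Running a sweep like this in CI before every push helps keep write-capable tokens like the ones found here out of public repos.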

A Closer Look at ChatGPT’s Role in Automated Malware Creation

November 14, 2023 at 05:07AM This blog entry discusses the risks of using ChatGPT and other AI technologies in malware development. It examines the effectiveness of the safety filters OpenAI has implemented to prevent misuse, as well as the limitations of current AI models in automated malware creation. The blog …