Researchers Highlight How Poisoned LLMs Can Suggest Vulnerable Code

August 21, 2024 at 08:08AM Developers are increasingly turning to AI programming assistants, but recent research warns against incorporating code suggestions without scrutiny: large language models (LLMs) can be manipulated into suggesting vulnerable code. The CodeBreaker method effectively poisons LLMs so that they propose exploitable code. Developers must critically assess code suggestions and focus … Read more
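One way to apply that scrutiny in practice is a lightweight static check over AI-suggested snippets before accepting them. The sketch below is illustrative only (it is not part of the CodeBreaker research): it uses Python's standard `ast` module to flag a small, hypothetical list of well-known risky calls.

```python
import ast

# Hypothetical deny-list of call names worth a manual review before an
# AI-suggested snippet is accepted; the entries are illustrative, not
# exhaustive, and real reviews should go well beyond name matching.
RISKY_CALLS = {"eval", "exec", "compile", "system", "popen"}

def flag_risky_calls(source: str) -> list[str]:
    """Return the names of risky-looking calls found in a Python snippet."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            # Handles both bare names (eval) and attributes (os.system).
            name = getattr(node.func, "id", None) or getattr(node.func, "attr", None)
            if name in RISKY_CALLS:
                findings.append(name)
    return findings

snippet = "import os\nos.system(user_input)\nprint(eval(expr))"
print(flag_risky_calls(snippet))  # → ['system', 'eval']
```

A check like this catches only superficial patterns; a poisoned model can emit subtly exploitable code that passes name-based filters, which is why the research stresses human review.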

Top Lessons for CISOs From OWASP’s LLM Top 10

April 23, 2024 at 10:05AM OWASP has released its Top 10 list for large language model (LLM) applications, cataloging the key security threats. The framework educates and aligns the industry on potential risks, emphasizing the need for effective authentication and authorization of LLM technologies. The list highlights the importance of preventing misuse and compromise, urging security leaders … Read more

Forward Momentum: Key Learnings From Trend Micro’s Security Predictions for 2024

December 6, 2023 at 03:21AM Trend Micro’s security predictions for 2024 emphasize the need for organizations to adapt their cybersecurity to evolving technologies like cloud, AI/ML, and Web3. Expected threats include cloud-native worms exploiting misconfigurations, sophisticated AI-generated social engineering scams, data poisoning against ML models, CI/CD pipeline attacks on software supply chains, and blockchain-based extortion … Read more

Exposed Hugging Face API tokens offered full access to Meta’s Llama 2

December 4, 2023 at 09:06AM Lasso Security researchers found more than 1,500 API tokens, including tokens belonging to Meta and Google, exposed on Hugging Face, enabling supply chain attacks and granting access to 723 organizations. Exposed tokens with write permissions could alter files, steal private models, or poison data, affecting over a million users. All affected parties … Read more
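Exposures like these usually start with a token hard-coded into a committed file. A minimal sketch of a pre-commit-style scan is shown below; it assumes the current Hugging Face user-access-token format (an `hf_` prefix followed by an alphanumeric string), and the length threshold is an assumption based on observed tokens, not a published spec.

```python
import re

# Candidate Hugging Face tokens: "hf_" prefix plus a long alphanumeric
# tail. The 30-character minimum is an assumption to cut false positives.
HF_TOKEN_RE = re.compile(r"\bhf_[A-Za-z0-9]{30,}\b")

def find_candidate_tokens(text: str) -> list[str]:
    """Return strings in `text` that look like Hugging Face access tokens."""
    return HF_TOKEN_RE.findall(text)

config = 'token = "hf_' + "A" * 34 + '"\nname = "demo"'
for token in find_candidate_tokens(config):
    print("possible exposed token:", token[:6] + "…")
```

Any hit should be treated as exposed: revoke the token and rotate it, since scanners like the one Lasso Security used will find it just as easily.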

IriusRisk Brings Threat Modeling to Machine Learning Systems

October 26, 2023 at 10:06PM Organizations are increasingly adopting threat modeling to identify security flaws in software design, particularly with the rising use of machine learning. Threat modeling helps organizations understand security risks and mitigate them in machine learning systems. IriusRisk offers a threat modeling tool that automates the process and includes an AI & … Read more