Researchers Reveal ‘Deceptive Delight’ Method to Jailbreak AI Models

October 23, 2024 at 06:36AM Cybersecurity researchers have identified a new technique, “Deceptive Delight,” which exploits large language models (LLMs) during conversations to generate unsafe content. Achieving a 64.6% success rate, it utilizes the model’s limited attention span. To mitigate these risks, effective content filtering and prompt engineering strategies are recommended. ### Meeting Takeaways from … Read more

Threat Detection Report: Cloud Attacks Soar, Mac Threats and Malvertising Escalate

March 15, 2024 at 06:57AM Red Canary’s 2024 Threat Detection Report, based on the analysis of 60,000 threats and 216 petabytes of telemetry, highlights the rise of cloud account attacks, Mac malware, and the transformation of malvertising from adware to more dangerous malware. It emphasizes the increasing use of adversarial AI and the growing threats … Read more

Forget Deepfakes or Phishing: Prompt Injection is GenAI’s Biggest Problem

February 2, 2024 at 06:06PM The security community should shift focus to generative artificial intelligence (GenAI) risks, particularly prompt injection, which involves inserting text to manipulate large language models (LLMs). This method allows attackers to trigger unintended actions or access sensitive information. Recognizing prompt injection as a top security concern is crucial as cyber threats … Read more

Researchers Map AI Threat Landscape, Risks

January 24, 2024 at 09:07AM The heart of large language models (LLMs) is a black box, leading to risks from lack of transparency in AI decision-making. A report from BIML outlines 81 risks and aims to help security practitioners understand and address these challenges. NIST also emphasizes the need for a common language to discuss … Read more

Startups Scramble to Build Immediate AI Security

January 2, 2024 at 10:07AM In early 2003, the emergence of artificial intelligence (AI) security became imminent with the introduction of ChatGPT, impacting startups focusing on machine learning security operations, AppSec remediation, and privacy enhancement through homomorphic encryption. Today’s AI faces significant vulnerability challenges, particularly concerning the security of foundational models. Startups are debating various … Read more

How AI Is Shaping the Future of Cybercrime

December 21, 2023 at 10:02AM AI’s increasing influence on cybersecurity is evident from a surge in cyberattacks, with AI tools being used for automated phishing, impersonation, social engineering, and fake customer support chatbots. On the brighter side, the industry is leveraging AI to develop security measures, including creating “good AI,” anomaly detection, and utilizing AI … Read more