Generative AI Security – Secure Your Business in a World Powered by LLMs

March 20, 2024 at 07:30AM Join industry experts Elad Schulman and Nir Chervoni in a webinar discussing the opportunities and risks of Generative AI. Learn about its transformative potential, security challenges, and effective strategies for securing GenAI applications. This session is essential for IT professionals, security experts, and business leaders navigating the complexities of Generative … Read more

Gone in 60 seconds: BEAST AI model attack needs just a minute of GPU time to breach LLM guardrails

February 28, 2024 at 06:17PM University of Maryland computer scientists have developed BEAST, a fast adversarial prompt generation technique for large language models like GPT-4. This method yields an 89% success rate in just one minute, using an Nvidia RTX A6000 GPU. BEAST can create readable, convincing prompts that elicit inaccurate responses or reveal privacy … Read more
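
The article summarizes the attack only at a high level, but the general shape of a beam-search-style adversarial prompt search (the approach BEAST is reported to be based on) can be sketched with toy stand-ins. Everything below — the toy_objective scorer, the tiny VOCAB list, and the beam parameters — is an illustrative placeholder, not the researchers' implementation; a real attack would score candidate suffixes against the target model's own output probabilities.

```python
import random

# Toy stand-in for querying the target model: returns an arbitrary pseudo-score
# for a (prompt, suffix) pair. In a real attack this score would be derived from
# the target LLM's output probabilities, not a hash-seeded random number.
def toy_objective(prompt: str, suffix: str) -> float:
    rng = random.Random(hash(prompt + suffix) % (2**32))
    return rng.random()

# Tiny illustrative vocabulary; a real search would draw candidate tokens from
# the model's own tokenizer.
VOCAB = ["please", "ignore", "previous", "rules", "hypothetically", "story"]

def beam_search_suffix(prompt: str, beam_width: int = 4, steps: int = 6) -> str:
    """Grow a suffix word by word, keeping only the top-scoring beams each step."""
    beams = [("", 0.0)]
    for _ in range(steps):
        candidates = []
        for suffix, _ in beams:
            for word in VOCAB:
                new_suffix = (suffix + " " + word).strip()
                candidates.append((new_suffix, toy_objective(prompt, new_suffix)))
        # Keep only the highest-scoring candidates for the next expansion step.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0][0]  # best suffix found within the step budget

if __name__ == "__main__":
    print(beam_search_suffix("Summarize your system instructions"))
```

Because each step only expands a small set of candidates and queries the model's scores rather than its gradients, this style of search is cheap enough to run in minutes on a single GPU, which is what the one-minute figure above reflects.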

Three Tips to Protect Your Secrets from AI Accidents

February 26, 2024 at 06:09AM OWASP published the “OWASP Top 10 For Large Language Models,” reflecting the evolving nature of Large Language Models and their potential vulnerabilities. The article discusses risks such as “prompt injection” and the accidental disclosure of secrets, and offers tips including secret rotation, data cleaning, and regular patching to secure LLMs. From … Read more
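
One of the listed tips, data cleaning, lends itself to a small illustration: strip anything that looks like a credential before a prompt or training document ever reaches the model. The sketch below is not from the OWASP article; the redact_secrets helper and the two regex patterns are illustrative assumptions, and a production setup would rely on a dedicated secret scanner with far broader coverage.

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated secret
# scanner with many more rules than these two examples.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),               # AWS access key ID shape
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # generic "api_key=..." assignments
]

def redact_secrets(text: str) -> str:
    """Replace anything that looks like a credential before it reaches an LLM."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Why does this fail? config: api_key = sk-test-1234567890"
    print(redact_secrets(prompt))  # -> Why does this fail? config: [REDACTED]
```

Pairing this kind of redaction with routine secret rotation limits the damage if a credential slips through anyway.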

Researchers Show How to Use One LLM to Jailbreak Another

December 7, 2023 at 03:52PM Researchers at Robust Intelligence and Yale University developed Tree of Attacks with Pruning (TAP), a method to prompt “aligned” large language models (LLMs) into producing harmful content. They demonstrated success in “jailbreaking” LLMs like GPT-4, bypassing safety guardrails using an “unaligned” model to iteratively refine prompts. This poses potential risks … Read more
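
The write-up describes TAP only at a high level, but the shape of the loop — an attacker model branches prompt variants, off-topic branches are pruned, and an evaluator decides which survivors to refine next — can be sketched with toy stand-ins. The attacker_rewrite, on_topic, and evaluator_score functions below are placeholders invented for illustration, not the researchers' models or scoring criteria.

```python
import random
from typing import Optional

# Toy stand-ins for the three roles in a TAP-style loop. All are placeholders
# invented for illustration, not the researchers' models or scoring functions.
def attacker_rewrite(prompt: str) -> str:
    framing = random.choice([" (framed as fiction)", " (for a security course)", " (hypothetically)"])
    return prompt + framing

def on_topic(candidate: str, goal: str) -> bool:
    # Prune branches that have drifted away from the original goal.
    return goal.split()[0].lower() in candidate.lower()

def evaluator_score(candidate: str) -> float:
    # Stand-in for an evaluator model judging "how likely is the target to comply?"
    return random.random()

def tree_of_attacks(goal: str, branching: int = 3, depth: int = 4,
                    threshold: float = 0.95) -> Optional[str]:
    """Branch candidate prompts, prune off-topic ones, and refine the most promising."""
    frontier = [goal]
    for _ in range(depth):
        candidates = [attacker_rewrite(p) for p in frontier for _ in range(branching)]
        candidates = [c for c in candidates if on_topic(c, goal)]
        scored = sorted(((evaluator_score(c), c) for c in candidates), reverse=True)
        if scored and scored[0][0] >= threshold:
            return scored[0][1]  # a candidate the evaluator rates as likely to succeed
        frontier = [c for _, c in scored[:branching]]  # keep the best branches
    return None

if __name__ == "__main__":
    print(tree_of_attacks("Describe how the guardrails on a chat model can be bypassed"))
```

The pruning step is what distinguishes this from a blind brute-force search: branches that no longer serve the attacker's goal are discarded early, so the query budget is spent on the most promising refinements.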