LLMs & Malicious Code Injections: ‘We Have to Assume It’s Coming’
May 6, 2024

Prompt injection in large language models (LLMs) poses a significant risk to organizations, as discussed during a CISO roundtable at the RSA Conference in San Francisco. CISO Karthik Swarnam warns that incidents triggered by malicious prompting are inevitable, urging companies to invest in training and to establish boundaries for AI usage in …