LLMs & Malicious Code Injections: ‘We Have to Assume It’s Coming’

LLMs & Malicious Code Injections: 'We Have to Assume It's Coming'

May 6, 2024 at 06:29PM

Prompt injection engineering in large language models (LLMs) poses a significant risk to organizations, as discussed during a CISO roundtable at RSA Conference in San Francisco. CISO Karthik Swarnam warns of inevitable incidents triggered by malicious prompting, urging companies to invest in training and establish boundaries for AI usage in operations and application development.

From the meeting notes, there are several key takeaways:

1. Prompt injection engineering into large language models (LLMs) poses a significant risk to organizations, as discussed during a CISO roundtable at the RSA Conference. Karthik Swarnam, CISO at ArmorCode, emphasized that incidents from prompt injections in code are inevitable and companies should anticipate and prepare for them.

2. Socially engineered text alerts generated by LLMs trained with malicious prompting could lead to unauthorized data sharing, highlighting the potential for nefarious actions triggered by user responses to these alerts.

3. Despite concerns about the risks of using AI, large organizations are widely embracing it for operations such as customer service and marketing. Many organizations may be unknowingly using “shadow AI,” and there is a need to establish boundaries and monitor AI usage through network and firewall logs.

4. The use of AI in incident response and threat analytics is disrupting security information and event management, eliminating the need for triaging at various levels.

5. CISOs and CIOs should carefully consider the practicality and testing aspects of using AI in application development tools based on their organizations’ capabilities and risk tolerance. It’s essential for leaders to track organizational failures and reinforce training in areas where mistakes are consistently made during development work or software development.

These takeaways highlight the need for organizations to proactively address the risks associated with AI usage and to engage in ongoing training and monitoring to mitigate potential threats.

Full Article