Who uses LLM prompt injection attacks IRL? Mostly unscrupulous job seekers, jokesters and trolls

August 13, 2024 at 06:51AM Various attempts at prompt injection into large language models (LLMs) have been identified, with the majority coming from job seekers seeking to manipulate automated HR screening systems. Kaspersky’s research found instances of direct and indirect prompt injections, often aiming to influence HR processes or as a form of protest against … Read more

Forget Deepfakes or Phishing: Prompt Injection is GenAI’s Biggest Problem

February 2, 2024 at 06:06PM The security community should shift focus to generative artificial intelligence (GenAI) risks, particularly prompt injection, which involves inserting text to manipulate large language models (LLMs). This method allows attackers to trigger unintended actions or access sensitive information. Recognizing prompt injection as a top security concern is crucial as cyber threats … Read more