Researchers Highlight How Poisoned LLMs Can Suggest Vulnerable Code
August 21, 2024 at 08:08AM Developers are turning to AI programming assistants, but recent research warns about the risk of incorporating code suggestions without scrutiny, as large language models (LLMs) can be manipulated to release vulnerable code. The CodeBreaker method effectively poisons LLMs to suggest exploitable code. Developers must critically assess code suggestions and focus … Read more