Researchers Highlight How Poisoned LLMs Can Suggest Vulnerable Code

August 21, 2024 at 08:08AM Developers are increasingly turning to AI programming assistants, but recent research warns against incorporating code suggestions without scrutiny, as large language models (LLMs) can be manipulated into suggesting vulnerable code. The CodeBreaker method effectively poisons LLMs so that they propose exploitable code. Developers must critically assess code suggestions and focus … Read more

X begins training Grok AI with your posts, here’s how to disable

July 27, 2024 at 04:33PM X has been quietly training its Grok AI chat platform on users' public posts without alerting them, with the data-sharing setting enabled by default. Users only noticed the new setting on July 25; they can now opt out through the privacy settings. This update indicates the … Read more