How to weaponize LLMs to auto-hijack websites

February 17, 2024

Computer scientists at the University of Illinois Urbana-Champaign have shown that large language models (LLMs) such as GPT-4 can be weaponized to autonomously compromise vulnerable websites. Their agents performed complex tasks without prior knowledge of the specific vulnerabilities, raising concerns about the potential for autonomous hacks by highly capable LLM agents.
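The agents described in this line of research reportedly pair an LLM with tool access (browsing, function calling, planning) in a loop: the model picks an action, the harness executes it, and the observation is fed back until the goal is met. Below is a minimal sketch of that generic tool-calling loop, under stated assumptions: `query_llm` is a hypothetical placeholder for any chat-completion API, and the only tool wired in is a benign page fetch, not the researchers' tooling.

```python
# Minimal sketch of a generic LLM tool-calling agent loop.
# `query_llm` is a hypothetical stand-in for a real chat-completion API;
# the single tool here is a deliberately benign page fetch.
import json
import urllib.request


def query_llm(messages: list[dict]) -> dict:
    """Hypothetical LLM call. Expected to return either
    {"action": <tool name>, "args": {...}} or {"done": <final answer>}."""
    raise NotImplementedError("wire up a real chat-completion API here")


def fetch_page(url: str) -> str:
    """Benign observation tool: fetch up to 5 KB of a page's HTML."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read(5000).decode("utf-8", errors="replace")


TOOLS = {"fetch_page": fetch_page}


def agent_loop(goal: str, max_steps: int = 10):
    messages = [
        {"role": "system", "content": "You are an agent. Reply with JSON."},
        {"role": "user", "content": goal},
    ]
    for _ in range(max_steps):
        decision = query_llm(messages)
        if "done" in decision:                 # model declares the task finished
            return decision["done"]
        tool = TOOLS[decision["action"]]       # model chooses a tool...
        observation = tool(**decision["args"])  # ...the harness executes it...
        messages.append({                      # ...and the result is fed back
            "role": "user",
            "content": json.dumps({"observation": observation}),
        })
    return None
```

The loop itself is ordinary agent plumbing; what the study highlights is that, given capable enough models and richer tools, this pattern can chain many such steps toward a goal without step-by-step human guidance.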