Chatbot Offers Roadmap for How to Conduct a Bioweapons Attack

October 17, 2023 at 05:28PM

A new study from RAND warns that jailbroken large language models (LLMs) and generative AI chatbots could provide instructions for carrying out destructive acts, including bioweapons attacks. The experiment demonstrated that uncensored LLMs were willing to plot out theoretical biological attacks and provide detailed advice on how …