Letting chatbots run robots ends as badly as you’d expect

November 15, 2024 at 07:13PM Isaac Asimov’s laws of robotics appear ineffective in practice: recent research shows that robots, including those powered by large language models (LLMs), can be manipulated through jailbreaking techniques. This raises serious safety concerns and highlights the urgent need for safeguards against such vulnerabilities, particularly in physical robotic applications. … Read more

OpenAI Co-Founder Sutskever Sets up New AI Company Devoted to ‘Safe Superintelligence’

June 20, 2024 at 11:18AM Ilya Sutskever, a co-founder of OpenAI, has started a new company, Safe Superintelligence Inc., based in Palo Alto and Tel Aviv and focused on developing “superintelligence” safely. The company aims to prioritize safety and security over short-term commercial pressures. Sutskever and his co-founders have resigned from OpenAI to … Read more

A Former OpenAI Leader Says Safety Has ‘Taken a Backseat to Shiny Products’ at the AI Company

May 17, 2024 at 03:37PM Jan Leike, a former OpenAI leader, has resigned, saying that safety has taken a backseat to shiny products at the influential AI company. He disagreed with the company’s core priorities, emphasizing the need to focus on safety and the societal impacts of AI. His resignation follows that of co-founder Ilya Sutskever, who is now … Read more