Can AI be Meaningfully Regulated, or is Regulation a Deceitful Fudge?

July 10, 2024 at 09:48AM The rush to regulate artificial intelligence is driven by its emerging potential and associated risks. The dominance of Big Tech in developing AI raises concerns about their profit-driven approach. OpenAI’s transition from a non-profit to a Microsoft-influenced company illustrates the complexities involved and the need for regulation. However, the effectiveness of regulation is in question, given … Read more

Hacker Stole Secrets From OpenAI

July 5, 2024 at 12:42PM OpenAI experienced an undisclosed breach in early 2023, in which an attacker stole employee forum discussions. The event raised internal concerns over security measures. Leopold Aschenbrenner, a former OpenAI employee, expressed concerns over AGI security and was fired. This incident illuminates internal disagreements over OpenAI’s security approach and its impact on national … Read more

OpenAI Appoints Former NSA Director Paul Nakasone to Board of Directors

June 14, 2024 at 10:27AM Retired U.S. Army General and former NSA Director Paul M. Nakasone has joined the Board of Directors and the Safety and Security Committee at OpenAI. His cybersecurity expertise will inform the company’s understanding of AI’s role in strengthening cybersecurity. Nakasone’s experience aligns with OpenAI’s mission to ensure safe and beneficial artificial general intelligence. … Read more

OpenAI Disrupts 5 AI-Powered, State-Backed Influence Ops

May 31, 2024 at 02:42PM OpenAI has flagged five influence operations from China, Iran, Israel, and Russia, all employing AI tools to spread political messaging, though with little measurable impact. Notable activities include Spamouflage from China, Bad Grammar targeting Eastern Europe and the United States, Doppelganger engaging on various platforms, and IUVM from Iran. OpenAI is … Read more

OpenAI’s Altman Sidesteps Questions About Governance, Johansson at UN AI Summit

May 31, 2024 at 07:36AM OpenAI CEO Sam Altman addressed the U.N. telecommunications agency’s annual AI for Good conference, discussing AI’s societal promise, but faced scrutiny over governance and an AI voice controversy. The discontent coincided with concerns about OpenAI’s business practices and AI safety. The two-day event also focused on AI … Read more

OpenAI, Meta, TikTok Disrupt Multiple AI-Powered Disinformation Campaigns

May 31, 2024 at 04:21AM OpenAI revealed five covert influence operations from China, Iran, Israel, and Russia, utilizing AI to manipulate public discourse. These operations involved generating and posting comments, articles, and social media content across various platforms to influence audiences in different regions. Meta also disclosed details of additional influence operations targeting users in … Read more

OpenAI Forms Another Safety Committee After Dismantling Prior Team

May 28, 2024 at 03:08PM OpenAI forms a safety and security committee led by company directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and CEO Sam Altman. The committee will make safety and security recommendations for OpenAI’s projects and operations, starting with a 90-day evaluation period. Concerns have been raised about the potential impact on societal … Read more

OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model

May 28, 2024 at 11:12AM OpenAI announced the establishment of a safety and security committee to advise on critical decisions for its projects and operations. This comes amidst debate on AI safety, following resignations and criticism from researchers. The company is training a new AI model and claims industry-leading capability and safety. The committee, including … Read more

US Intelligence Agencies’ Embrace of Generative AI Is at Once Wary and Urgent

May 23, 2024 at 02:15PM A Silicon Valley firm used generative AI to collect and analyze non-classified data on Chinese fentanyl trafficking. U.S. intelligence agencies embraced the technology, leading to successful results in identifying illicit activities. Despite its potential, concerns remain about security, privacy, and the technology’s limitations. The CIA, in particular, is cautiously experimenting … Read more

A Former OpenAI Leader Says Safety Has ‘Taken a Backseat to Shiny Products’ at the AI Company

May 17, 2024 at 03:37PM Former OpenAI leader Jan Leike resigned, saying that safety had been neglected at the influential AI company in favor of shiny products. He disagreed with the company’s core priorities, emphasizing the need to focus on safety and the societal impacts of AI. His resignation follows that of co-founder Ilya Sutskever, who is now … Read more