Ensuring AI Safety While Balancing Innovation

July 5, 2024 at 10:21AM A panel at Black Hat 2024 in Las Vegas will explore AI safety, emphasizing the responsibility of organizations and security professionals. Led by Nathan Hamiel, it will address the intersection of AI safety and security, technical and human harms, and the importance of organizations taking responsibility for the safety of … Read more

California Advances Unique Safety Regulations for AI Companies Despite Tech Firm Opposition

July 4, 2024 at 12:33PM California lawmakers advanced legislation requiring AI companies to test their systems to prevent potential harm, such as disrupting the electric grid or building chemical weapons. The bill, fiercely opposed by tech companies, aims to regulate AI safety standards and oversight. It also addresses concerns about AI discrimination and data privacy, … Read more

OpenAI’s Altman Sidesteps Questions About Governance, Johansson at UN AI Summit

May 31, 2024 at 07:36AM OpenAI CEO Sam Altman addressed the U.N. telecommunications agency’s annual AI for Good conference, discussing AI’s societal promise but facing scrutiny over governance and an AI voice controversy. The discontent coincided with concerns about OpenAI’s business practices and AI safety. The two-day event also focused on AI … Read more

OpenAI Forms Another Safety Committee After Dismantling Prior Team

May 28, 2024 at 03:08PM OpenAI forms a safety and security committee led by company directors Bret Taylor, Adam D’Angelo, Nicole Seligman, and CEO Sam Altman. The committee will make safety and security recommendations for OpenAI’s projects and operations, starting with a 90-day evaluation period. Concerns have been raised about the potential impact on societal … Read more

OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model

May 28, 2024 at 11:12AM OpenAI announced the establishment of a safety and security committee to advise on critical decisions for its projects and operations. This comes amidst debate on AI safety, following resignations and criticism from researchers. The company is training a new AI model and claims industry-leading capability and safety. The committee, including … Read more

WitnessAI Launches With Guardrails for AI

May 21, 2024 at 11:07PM WitnessAI, a startup in artificial intelligence safety, emerged from stealth to address the barriers hindering organizations from adopting AI tools. Their Secure AI Enablement Platform offers observability, policy enforcement, and data protection for enterprises using AI. The platform, deploying cloud-based instances with unique encryption keys, has secured funding and plans … Read more

AI Companies Make Fresh Safety Promise at Seoul Summit, Nations Agree to Align Work on Risks

May 21, 2024 at 08:06PM Top AI companies including Google, Meta, and OpenAI made voluntary safety commitments at the AI Seoul Summit, agreeing to pull the plug on their cutting-edge systems in extreme cases. World leaders also pledged to establish safety institutes and align their work on AI research. The meeting aims to address the … Read more

In First AI Dialogue, US Cites ‘Misuse’ of AI by China, Beijing Protests Washington’s Restrictions

May 15, 2024 at 10:55PM U.S. and Chinese officials expressed concerns and perspectives on AI during closed-door talks in Geneva. The discussions revealed tension over technology and highlighted differing approaches to AI safety and risk management. The talks stemmed from a November meeting between Presidents Biden and Xi, signaling both countries’ concerns and ambitions regarding … Read more

VP Harris Says US Agencies Must Show Their AI Tools Aren’t Harming People’s Safety or Rights

March 30, 2024 at 04:18AM New White House rules require U.S. federal agencies to ensure their AI tools do not endanger the public, or to stop using them. Agencies must implement safeguards by December covering everything from facial recognition at airports to AI tools that help control the electric grid or decide mortgages. The policy directive aims to … Read more

AI Companies Will Need to Start Reporting Their Safety Tests to the US Government

January 29, 2024 at 09:28AM The Biden administration is enforcing a new mandate for major AI system developers to disclose safety test results to the government. The White House AI Council is reviewing progress made on this executive order, prioritizing safety before public release. Efforts include the development of safety test frameworks and collaboration with … Read more