IBM Boosts Guardium Platform to Address Shadow AI, Quantum Cryptography

October 23, 2024 at 07:42AM IBM is enhancing its Guardium platform to address security challenges related to AI models and quantum safety. The upgrade aims to tackle risks associated with shadow AI and to strengthen quantum-safe cryptography measures. … Read more

AI Pulse: What’s new in AI regulations?

October 1, 2024 at 06:25PM California’s SB 1047 bill to regulate AI drew controversy for its broad scope, with supporters praising the move and critics worried it would stifle innovation. The piece examines the bill’s impact on AI risk assessment, model development, and potential regulatory challenges, as nations grapple with the need for clear frameworks to manage … Read more

California Governor Vetoes Bill to Create First-in-Nation AI Safety Measures

September 29, 2024 at 11:18PM California Gov. Newsom vetoed a bill seeking to regulate large AI models, drawing both criticism and support. The bill aimed to establish safety measures for AI and could have set a precedent for national regulations. Newsom instead plans to collaborate with industry experts to develop guardrails for powerful AI models, spurring … Read more

How Do You Know When AI is Powerful Enough to be Dangerous? Regulators Try to Do the Math

September 5, 2024 at 07:12AM Regulators are grappling with how to set the threshold at which powerful AI systems must be reported to the government, with California and the European Union tying their criteria to the total number of floating-point operations used to train a model. This approach aims to distinguish current AI models from potentially more potent next-generation systems, though it has … Read more
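
As a rough illustration of the math regulators are working with, the sketch below estimates a model's total training compute using the widely cited approximation of roughly 6 × parameters × training tokens and compares it against the thresholds reported for the EU AI Act (10^25 FLOPs) and California's bill (10^26 FLOPs). The model size and token count are hypothetical examples, not figures from the article, and the heuristic is a back-of-the-envelope estimate rather than an official regulatory formula.

```python
# Rough estimate of training compute versus reported regulatory thresholds.
# Assumption: total training FLOPs ~ 6 * parameters * training tokens,
# a common heuristic, not an official formula from either regulation.

EU_AI_ACT_THRESHOLD = 1e25          # FLOPs cited for "systemic risk" models
CALIFORNIA_BILL_THRESHOLD = 1e26    # FLOPs cited in the California bill


def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total floating-point operations used to train a model."""
    return 6 * parameters * tokens


# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Exceeds EU threshold:", flops > EU_AI_ACT_THRESHOLD)
print("Exceeds California threshold:", flops > CALIFORNIA_BILL_THRESHOLD)
```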

California Advances Landmark Legislation to Regulate Large AI Models

August 30, 2024 at 09:00AM California is moving towards establishing groundbreaking safety measures for large artificial intelligence systems. The proposed bill aims to mitigate potential risks by requiring companies to disclose safety protocols and test AI models. Despite opposition from tech firms, the bill could set essential safety rules for AI in the United States. … Read more

Anthropic: Expanding Our Model Safety Bug Bounty Program

August 9, 2024 at 02:04PM To enhance AI model safety, Anthropic is expanding its bug bounty program to focus on identifying and mitigating universal jailbreak attacks that could bypass AI safety measures. The program, run in partnership with HackerOne, offers rewards of up to $15,000 and invites experienced AI security researchers to apply for an early-access test phase before public deployment. … Read more

AI Consortium Plans Toolkit to Rate AI Model Safety

July 17, 2024 at 08:58AM MLCommons plans to run stress tests on large language models to gauge the safety of their responses. The AI Safety suite will assess the models’ output in categories like hate speech and exploitation. By providing safety ratings, the benchmark aims to guide companies and organizations in selecting AI systems, with … Read more

Ensuring AI Safety While Balancing Innovation

July 5, 2024 at 10:21AM A panel at Black Hat 2024 in Las Vegas will explore AI safety, emphasizing the role of organizations and security professionals. Led by Nathan Hamiel, it will address the intersection of AI safety and security, technical and human harms, and the importance of organizations taking responsibility for the safety of … Read more

California Advances Unique Safety Regulations for AI Companies Despite Tech Firm Opposition

July 4, 2024 at 12:33PM California lawmakers advanced legislation requiring AI companies to test their systems for the potential to cause severe harm, such as disrupting the electric grid or helping build chemical weapons. The bill, fiercely opposed by tech companies, aims to set AI safety standards and oversight. It also addresses concerns about AI discrimination and data privacy, … Read more

OpenAI’s Altman Sidesteps Questions About Governance, Johansson at UN AI Summit

May 31, 2024 at 07:36AM OpenAI CEO Sam Altman addressed the U.N. telecommunications agency’s annual AI for Good conference, discussing AI’s societal promise, but faced scrutiny over governance and a controversy involving an AI voice resembling Scarlett Johansson’s. The discontent coincided with concerns about OpenAI’s business practices and AI safety. The two-day event also focused on AI … Read more