Former OpenAI Employees Lead Push to Protect Whistleblowers Flagging Artificial Intelligence Risks

June 4, 2024 at 03:24PM Former and current OpenAI workers urge AI companies, like ChatGPT-maker, to safeguard employees who report AI safety concerns. They seek stronger whistleblower protections to voice worries about developing high-performing AI systems without fear of retaliation. The letter, with 13 signatories, also calls for an end to non-disparagement agreements and has … Read more

OpenAI Forms Safety Committee as It Starts Training Latest Artificial Intelligence Model

May 28, 2024 at 11:12AM OpenAI announced the establishment of a safety and security committee to advise on critical decisions for its projects and operations. This comes amidst debate on AI safety, following resignations and criticism from researchers. The company is training a new AI model and claims industry-leading capability and safety. The committee, including … Read more

Why We Need to Get a Handle on AI

May 23, 2024 at 07:22AM The text discusses the rising concerns around deepfake technology and its potential to deceive through audio and video manipulation. As deepfakes pose a threat to various sectors and can exacerbate disinformation campaigns, security teams are encouraged to adopt a Zero Trust Approach and consider AI labeling as a strategy. The … Read more

Feds: Reducing AI Risks Requires Visibility & Better Planning

May 7, 2024 at 12:32PM The US DoE identified top 10 beneficial applications of AI/ML in critical infrastructure, along with four risk categories. The Biden administration is assessing the benefits and risks of AI, as highlighted by the DoT and DHS. The DHS provided recommendations to mitigate AI risks, focusing on a four-part strategy. Organizations … Read more

LLMs & Malicious Code Injections: ‘We Have to Assume It’s Coming’

May 6, 2024 at 06:29PM Prompt injection engineering in large language models (LLMs) poses a significant risk to organizations, as discussed during a CISO roundtable at RSA Conference in San Francisco. CISO Karthik Swarnam warns of inevitable incidents triggered by malicious prompting, urging companies to invest in training and establish boundaries for AI usage in … Read more

U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

April 30, 2024 at 06:49AM The U.S. government has issued new security guidelines to protect critical infrastructure from AI-related threats. Emphasizing responsible and safe AI usage, the guidelines address the potential risks associated with AI systems and recommend measures such as risk management, secure deployment environment, and identifying AI dependencies. The focus is on protecting … Read more

CISA Rolls Out New Guidelines to Mitigate AI Risks to US Critical Infrastructure

April 29, 2024 at 01:59PM CISA, the US government cybersecurity agency, has released guidelines to enhance critical infrastructure security against AI-related threats. The guidelines identify three types of AI risks and advocate a four-part mitigation strategy, emphasizing a robust organizational culture focused on AI risk management. CISA also stresses the need for contextualized risk evaluation … Read more

IMF: Financial Firms Lost $12 Billion to Cyberattacks in Two Decades

April 11, 2024 at 08:18AM Financial organizations have faced 20,000 cyberattacks in the last 20 years, resulting in over $12 billion in losses, according to the IMF. These attacks, particularly targeting banks, carry a risk of extreme financial losses and potential economic instability. Effective regulations and international collaboration are crucial for mitigating these risks in … Read more

AI-as-a-Service Providers Vulnerable to PrivEsc and Cross-Tenant Attacks

April 5, 2024 at 10:39AM New research has revealed that AI-as-a-service providers, like Hugging Face, are vulnerable to threats allowing attackers to gain access to private AI models and apps. The findings highlight the risk of supply chain attacks on machine learning pipelines. Recommendations include using trusted AI models, enabling multi-factor authentication, and avoiding pickle … Read more

New Bipartisan Bill Would Require Online Identification, Labeling of AI-Generated Videos and Audio

March 21, 2024 at 04:24PM The bipartisan legislation introduced in the House aims to address concerns over deepfake content created by artificial intelligence. It mandates AI developers to mark their content with digital watermarks or metadata to help online platforms identify it. Violators could face civil lawsuits. The bill has garnered support from both tech … Read more