DHS Releases Secure AI Framework for Critical Infrastructure

November 18, 2024 at 08:33AM The U.S. Department of Homeland Security issued voluntary recommendations for securely developing and deploying AI in critical infrastructure. The “Roles and Responsibilities Framework” emphasizes responsibilities for all supply chain participants, focusing on security, governance, and model design. It aims to enhance AI system safety and transparency while adapting to evolving … Read more

Risk Strategies Drawn From the EU AI Act

October 10, 2024 at 08:52AM As AI integration in business increases, organizations must adapt their governance, risk, and compliance strategies to address associated privacy and security risks. The EU AI Act provides a framework categorizing AI systems by risk levels, outlining requirements for High and Limited Risk systems to ensure safety, transparency, and compliance. ### … Read more

Anthropic: Expanding Our Model Safety Bug Bounty Program

August 9, 2024 at 02:04PM To enhance AI model safety, we’re expanding our bug bounty program to focus on identifying and mitigating universal jailbreak attacks that could bypass AI safety measures. The $15,000 reward program, in partnership with HackerOne, invites experienced AI security researchers to apply for an early access test phase before public deployment. … Read more

AI Coding Companions 2024: AWS, GitHub, Tabnine + More

June 28, 2024 at 09:45AM AI coding companions from companies like AWS, GitHub, and Tabnine are rapidly evolving, promising to make software development faster and easier with capabilities such as code completion and automation. Each platform, like Amazon Q Developer from AWS, GitHub Copilot, and Tabnine, offers unique features tailored to different languages and environments. … Read more

Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

June 27, 2024 at 05:20AM A high-severity security flaw (CVE-2024-5565, CVSS score: 8.1) has been disclosed in the Vanna.AI library, which could lead to remote code execution via prompt injection techniques. This vulnerability allows the execution of arbitrary commands, posing a significant risk to the security of organizations using this Python-based machine learning library. Prompt … Read more

Adobe Adds Content Credentials and Firefly to Bug Bounty Program

May 1, 2024 at 11:21AM Adobe recently expanded its bug bounty program to include Content Credentials and Adobe Firefly, offering incentives for hackers to search for and report security defects. The program aims to reinforce the resilience of Adobe’s implementation against traditional risks and unique considerations and to test the resilience of AI models. Interested … Read more

Google Gives Gemini a Security Boost

April 10, 2024 at 08:34AM Google has announced the integration of Mandiant’s security offerings into its AI platform, adding new security capabilities. This includes automated security agents using generative AI to detect, stop, and remediate cybersecurity attacks, and enhance speed of investigations. Additionally, the use of AI tools like Gemini and ChatGPT is seen as … Read more

VP Harris Says US Agencies Must Show Their AI Tools Aren’t Harming People’s Safety or Rights

March 30, 2024 at 04:18AM New White House rules require U.S. federal agencies to ensure their AI tools do not endanger the public or cease using them. Agencies must implement safeguards by December covering everything from facial recognition at airports to AI tools for controlling the electric grid and mortgages. The policy directive aims to … Read more

Security Pros Grapple With Ways to Manage GenAI Risk

December 26, 2023 at 02:02PM Security professionals are excited about the potential of generative AI (GenAI) but express concerns about its impact. A survey by Dark Reading finds high awareness and concern about security risks, unauthorized use by employees, and the need for risk management tools in organizations. Respondents also highlight challenges in regulatory compliance, … Read more

Key Building Blocks to Advance American Leadership in AI

December 20, 2023 at 07:53AM The US government has set voluntary commitments for companies to guide the development and deployment of AI tools focusing on safety, security, and trust. Google, along with other organizations, has signed on to these commitments, making specific progress toward these goals. Secure AI development and deployment will require collaboration between … Read more