Nvidia Patches High-Severity Vulnerabilities in AI, Networking Products

July 25, 2024 at 05:16AM Nvidia announced patches for vulnerabilities impacting AI and networking products. The security bulletins cover high-severity flaws affecting Jetson products, leading to denial of service, code execution, and privilege escalation. Vulnerabilities in Mellanox OS switch OS and successors were also addressed. Nvidia has disclosed over 60 vulnerabilities in its products this … Read more

The Top 10 AI Security Risks Every Business Should Know

July 9, 2024 at 08:30AM The article discusses the top 10 AI security risks identified by OWASP for businesses adopting AI tools, categorized into access, data, and reputational/business risks. It highlights the vulnerabilities and offers protective measures, emphasizing the need for policy foundation, security technologies, and responsible use of AI. The aim is to mitigate … Read more

Flawed AI Tools Create Worries for Private LLMs, Chatbots

May 30, 2024 at 04:04PM Private instances of large language models (LLMs) used by businesses face risks from data poisoning and leakage if not properly secured, leading to potential attacks and compromise of AI systems. Recent exploits highlight the importance of secure implementation and testing, especially as AI adoption increases in the information and professional … Read more

U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

April 30, 2024 at 06:49AM The U.S. government has issued new security guidelines to protect critical infrastructure from AI-related threats. Emphasizing responsible and safe AI usage, the guidelines address the potential risks associated with AI systems and recommend measures such as risk management, secure deployment environment, and identifying AI dependencies. The focus is on protecting … Read more

AI Copilot: Launching Innovation Rockets, But Beware of the Darkness Ahead

April 15, 2024 at 09:39AM The text discusses the security implications of AI in software development, with a focus on GitHub Copilot. It highlights the potential vulnerabilities of AI-generated code and advises on secure coding practices, including strict input validation, managing dependencies, conducting regular security assessments, gradual adoption of AI suggestions, informed decision-making, and continuous … Read more

Hugging Face AI Platform Riddled With 100 Malicious Code-Execution Models

February 29, 2024 at 11:35AM Approximately 100 machine learning models were discovered on the Hugging Face platform, posing a risk of allowing attackers to inject malicious code onto user machines. JFrog’s ongoing research found malicious PyTorch models with potentially harmful payloads, highlighting the need for constant vigilance and proactive security measures to safeguard AI/ML engineers … Read more

Forget Deepfakes or Phishing: Prompt Injection is GenAI’s Biggest Problem

February 2, 2024 at 06:06PM The security community should shift focus to generative artificial intelligence (GenAI) risks, particularly prompt injection, which involves inserting text to manipulate large language models (LLMs). This method allows attackers to trigger unintended actions or access sensitive information. Recognizing prompt injection as a top security concern is crucial as cyber threats … Read more

NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

January 8, 2024 at 04:27AM NIST highlights AI’s security and privacy challenges, including adversarial manipulation of training data, exploitation of model vulnerabilities, and exfiltration of sensitive information. Rapid integration of AI into online services exposes models to threats like corrupted training data and privacy breaches. NIST urges the tech community to develop better defenses against … Read more

Unpatched Critical Vulnerabilities Open AI Models to Takeover

November 28, 2023 at 03:53AM Researchers have discovered multiple critical vulnerabilities in the infrastructure used by AI models, exposing companies to risk as they adopt AI technology. The affected platforms include Ray, MLflow, ModelDB, and H20 version 3. The vulnerabilities could allow attackers unauthorized access to AI models and the network. Companies must prioritize security … Read more