In the rush to build AI apps, please, please don’t leave security behind

March 17, 2024 at 07:08AM AI developers and data scientists are urged not to neglect security and supply-chain risks amid the relentless pace of AI development. With malware increasingly hidden in models and libraries, cybersecurity and AI startups are emerging to address these vulnerabilities. Ensuring supply-chain security in the AI community is … Read more

Google gooses Safe Browsing with real-time protection that doesn’t leak to ad giant

March 14, 2024 at 02:06PM Google has upgraded Safe Browsing in Chrome for desktop, iOS, and soon Android, providing real-time protection against risky websites without sharing browsing history with Google. The enhanced service uses real-time URL lookups and machine learning, while the Standard version now supports privacy-preserving real-time data lookup. It employs a technical enhancement … Read more
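A common way to check URLs against a server-side blocklist without revealing browsing history (and, per Google's public descriptions, the basis of Safe Browsing's hashed checks) is the hash-prefix pattern: the client sends only a few bytes of the URL's hash, the server returns the full hashes matching that prefix, and the final comparison happens locally. A minimal sketch of that pattern, with a hypothetical `server_full_hashes` set standing in for the service response (this is not the Chrome implementation):

```python
import hashlib

def full_hash(url: str) -> bytes:
    """Full SHA-256 digest of the URL, computed locally."""
    return hashlib.sha256(url.encode("utf-8")).digest()

def url_hash_prefix(url: str, prefix_len: int = 4) -> bytes:
    """Short prefix sent to the server; too short to identify the URL."""
    return full_hash(url)[:prefix_len]

def is_flagged(url: str, server_full_hashes: set[bytes]) -> bool:
    """Client-side check against the full hashes the server returned
    for the submitted prefix; the URL itself never leaves the client."""
    return full_hash(url) in server_full_hashes
```

Because many URLs share any given 4-byte prefix, the server learns only that the client visited *some* URL in a large bucket, not which one.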

Dtex Systems Snags $50M from Alphabet’s CapitalG

March 5, 2024 at 11:06AM Dtex Systems, a California-based company, has secured $50 million in late-stage funding, with a total of $138 million raised. The funding aims to accelerate the application of large language models and behavioral science research to disrupt the insider risk management market. Dtex utilizes machine learning and network monitoring to detect … Read more

AI-Generated Patches Could Ease Developer, Operations Workload

February 21, 2024 at 01:40AM Large language models (LLMs) show potential in speeding up software development by detecting and addressing common bugs. Google’s Gemini LLM can fix 15% of bugs found using dynamic application security testing (DAST), helping prioritize vulnerabilities often overlooked by developers. AI-powered bug-fixing systems are crucial as machine learning models produce more … Read more

How to Achieve the Best Risk-Based Alerting (Bye-Bye SIEM)

February 19, 2024 at 07:27AM Network Detection and Response (NDR) has become the most effective technology for detecting cyber threats, offering adaptive cybersecurity with reduced false alerts and efficient threat response. NDR uses risk-based alerting to prioritize alerts based on potential risk, enabling more efficient resource allocation, prompt response to high-risk alerts, and better decision-making. … Read more
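The risk-based triage described above amounts to scoring each alert by its potential impact and handling the highest-risk ones first. A minimal sketch under assumed inputs — the `Alert` fields, the multiplicative scoring, and the threshold are all hypothetical choices for illustration, not any vendor's scheme:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: float           # 0.0-1.0, from the detection engine
    asset_criticality: float  # 0.0-1.0, value of the affected asset
    confidence: float         # 0.0-1.0, detector confidence

def risk_score(alert: Alert) -> float:
    """Combine severity, asset value, and confidence into one score,
    so a confident hit on a critical asset outranks noisy low-value alerts."""
    return alert.severity * alert.asset_criticality * alert.confidence

def triage(alerts: list[Alert], threshold: float = 0.5) -> list[Alert]:
    """Drop alerts below the threshold; return the rest, highest risk first."""
    high = [a for a in alerts if risk_score(a) >= threshold]
    return sorted(high, key=risk_score, reverse=True)
```

The point of the multiplication is that any single low factor (e.g. a high-severity signature firing on a throwaway test host) suppresses the score, which is what cuts the false-alert volume.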

Google open sources file-identifying Magika AI for malware hunters and others

February 16, 2024 at 09:19PM Google has open sourced Magika, a machine-learning-powered file identifier, as part of its AI Cyber Defense Initiative. It aims to provide better automated tools for IT network defenders. Magika uses a trained model to rapidly identify file types from file data, enhancing security. Google also plans to partner with startups … Read more
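Magika replaces the classic approach of matching "magic bytes" at the start of a file with a trained model that generalizes beyond fixed signatures. For contrast, here is a minimal sketch of the traditional magic-byte technique Magika improves on — illustrative only, and not Magika's API or signature table:

```python
# A few well-known leading signatures (far from exhaustive).
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"%PDF-": "pdf",
    b"PK\x03\x04": "zip",
    b"\x7fELF": "elf",   # Linux executable
    b"MZ": "pe",         # Windows executable
}

def identify(data: bytes) -> str:
    """Label file content by its leading magic bytes; 'unknown' otherwise."""
    for magic, label in MAGIC.items():
        if data.startswith(magic):
            return label
    return "unknown"
```

The weakness of this approach for defenders is everything that falls through to `"unknown"` (or lies about its extension), which is the gap a learned classifier aims to close.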

New Offerings From Protect AI, Venafi Tackle Software Supply Chain Security

January 25, 2024 at 11:48AM The growing use of open source software is expanding into the AI market. Venafi offers Stop Unauthorized Code Solution for traditional OSS, while Protect AI’s Guardian secures open source machine learning models. Both products aim to tackle the unique security challenges of their respective markets. They operate as crucial security measures … Read more

Researchers Map AI Threat Landscape, Risks

January 24, 2024 at 09:07AM The heart of large language models (LLMs) is a black box, leading to risks from lack of transparency in AI decision-making. A report from BIML outlines 81 risks and aims to help security practitioners understand and address these challenges. NIST also emphasizes the need for a common language to discuss … Read more

NIST: No Silver Bullet Against Adversarial Machine Learning Attacks

January 8, 2024 at 08:36AM NIST’s report cautions on the vulnerability of AI to adversarial machine learning attacks and emphasizes the absence of foolproof defenses. It covers attack types, including evasion, poisoning, privacy, and abuse, and urges the community to develop better safeguards. Industry experts acknowledge the report’s depth and importance in understanding and mitigating … Read more
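Of the attack types NIST lists, poisoning is easy to demonstrate end to end: an attacker who can inject mislabeled training points drags a class boundary toward a target sample. A toy sketch on a nearest-centroid classifier in pure Python — the data, labels, and attack point are all invented for illustration:

```python
def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def predict(x, centroids):
    """Label of the closest class centroid (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Clean training data: two well-separated classes.
benign = [[0.0, 0.0], [0.2, 0.1], [0.1, 0.2]]
malicious = [[5.0, 5.0], [5.2, 4.9], [4.9, 5.1]]
attack = [3.0, 3.0]  # sample the attacker wants classified as benign

clean = {"benign": centroid(benign), "malicious": centroid(malicious)}

# Poisoning: inject mislabeled copies of the attack point into the
# "benign" training set, pulling its centroid toward the target.
poisoned_benign = benign + [[3.0, 3.0]] * 10
poisoned = {"benign": centroid(poisoned_benign),
            "malicious": centroid(malicious)}
```

On the clean centroids the attack sample lands closer to the malicious class; after poisoning, the shifted benign centroid claims it — which is why the report stresses training-data integrity rather than any single defense.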

NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

January 8, 2024 at 04:27AM NIST highlights AI’s security and privacy challenges, including adversarial manipulation of training data, exploitation of model vulnerabilities, and exfiltration of sensitive information. Rapid integration of AI into online services exposes models to threats like corrupted training data and privacy breaches. NIST urges the tech community to develop better defenses against … Read more