Why Criminals Like AI for Synthetic Identity Fraud

March 5, 2024 at 05:37PM Generative AI technology is expected to boost cybercriminals’ synthetic identity fraud capabilities, and current fraud detection tools are likely insufficient to counter this emerging threat. Cybercriminals leverage generative AI to create fake documents, exploiting its widespread availability and affordability. Fighting synthetic identity fraud requires a multilayered approach, including AI and behavioral … Read more

Alarm Over GenAI Risk Fuels Security Spending in Middle East & Africa

February 23, 2024 at 10:20AM The rapid adoption of Generative Artificial Intelligence (GenAI) in the Middle East and Africa is prompting organizations to increase data privacy and cloud security measures. Concerns about GenAI are driving 24% and 17% increases in budgets for data privacy and cloud security, respectively, according to Gartner. However, the potential … Read more

Microsoft, OpenAI Warn of Nation-State Hackers Weaponizing AI for Cyberattacks

February 14, 2024 at 09:46AM Nation-state actors from Russia, North Korea, Iran, and China are leveraging artificial intelligence and large language models (LLMs) to enhance their cyberattacks. Microsoft and OpenAI published a report detailing disruptions to state-affiliated actors’ malicious cyber activities. The report also highlights the use of AI technologies across various phases of … Read more

Researchers Map AI Threat Landscape, Risks

January 24, 2024 at 09:07AM The heart of large language models (LLMs) is a black box, leading to risks from lack of transparency in AI decision-making. A report from BIML outlines 81 risks and aims to help security practitioners understand and address these challenges. NIST also emphasizes the need for a common language to discuss … Read more

Combating IP Leaks into AI Applications with Free Discovery and Risk Reduction Automation

January 17, 2024 at 09:57AM Wing Security introduces a free discovery tier and a paid tier for automated control over AI SaaS applications, aiming to enhance intellectual property and data protection. 83.2% of companies use GenAI applications, with 99.7% employing AI-powered SaaS. Their solution offers steps to Know, Assess, and Control AI risks while automating workflows … Read more

AI-Powered Misinformation is the World’s Biggest Short-Term Threat, Davos Report Says

January 11, 2024 at 09:43AM The World Economic Forum’s Global Risks Report identified AI-powered misinformation as the top immediate risk to the global economy, with environmental risks posing long-term threats. The report emphasized the potential impact of AI on polarizing societies and eroding democracy, and highlighted the risks associated with deepfake technology and AI-powered … Read more

NIST: No Silver Bullet Against Adversarial Machine Learning Attacks

January 8, 2024 at 08:36AM NIST’s report cautions on the vulnerability of AI to adversarial machine learning attacks and emphasizes the absence of foolproof defenses. It covers attack types, including evasion, poisoning, privacy, and abuse, and urges the community to develop better safeguards. Industry experts acknowledge the report’s depth and importance in understanding and mitigating … Read more

Skynet Ahoy? What to Expect for Next-Gen AI Security Risks

December 28, 2023 at 09:05AM In 2024, the rapid evolution of large language models (LLMs) like OpenAI’s GPT-4 and the upcoming GPT-5 poses significant security risks. Concerns include data leaks, potential misuse for malicious activities, and inaccurate outputs leading to negative consequences. Security experts stress the need for rigorous ethical considerations, risk assessments, and the … Read more

Unpatched Critical Vulnerabilities Open AI Models to Takeover

November 28, 2023 at 03:53AM Researchers have discovered multiple critical vulnerabilities in the infrastructure used by AI models, exposing companies to risk as they adopt AI technology. The affected platforms include Ray, MLflow, ModelDB, and H2O version 3. The vulnerabilities could allow attackers unauthorized access to AI models and the network. Companies must prioritize security … Read more

A Closer Look at ChatGPT’s Role in Automated Malware Creation

November 14, 2023 at 05:07AM This blog entry discusses the risks associated with the use of ChatGPT and other AI technologies, particularly in the development of malware. It explores the effectiveness of safety filters implemented by OpenAI to prevent misuse, as well as the limitations of current AI models in automated malware creation. The blog … Read more