AI Models in Cybersecurity: From Misuse to Abuse

October 16, 2024 at 07:06AM The article examines how AI models differ in their security safeguards and reveals tactics employed by threat actors. It discusses the implications of AI in cybersecurity, highlighting the transition from misuse of these technologies to more harmful abuse. … Read more

AI Pulse: What’s new in AI regulations?

October 1, 2024 at 06:25PM California’s SB 1047 bill to regulate AI drew controversy for its broad scope, with supporters praising the move and critics warning it could stifle innovation. The article discusses the bill’s impact on AI risk assessment and model development, along with potential regulatory challenges, as nations grapple with the need for clear frameworks to manage … Read more

AI code helpers just can’t stop inventing package names

September 30, 2024 at 12:04AM Two recent studies highlight the problem of AI models generating fictitious software package names, a security risk because an attacker can publish malware under a commonly hallucinated name and wait for developers to install it. Researchers found that LLMs, both commercial and open-source, hallucinated package names at significant rates, posing a threat to code quality and reliability. The studies emphasize the need … Read more
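One pragmatic defense against hallucinated dependencies is to confirm that a suggested package actually exists on the index before installing it. Below is a minimal Python sketch using PyPI’s public JSON API; the package names are illustrative, and a real pipeline would also check things like project age, ownership, and download history rather than existence alone.

```python
# Minimal sketch: verify that an AI-suggested package name resolves to a
# real PyPI project before installing it. Assumes network access; the
# package names below are illustrative.
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` resolves to a real PyPI project."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 and similar -> likely a hallucinated name

for suggestion in ["requests", "definitely-not-a-real-pkg-123"]:
    print(suggestion, "->", package_exists_on_pypi(suggestion))
```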

Australia’s government spent the week boxing Big Tech

September 13, 2024 at 12:57AM The Australian government spent the week reining in Big Tech. The Prime Minister announced plans to set a minimum age for social media use and to compel Big Tech to pay for linking to local content. Meta faced parliamentary questioning over its use of Australians’ posts for AI training. Other laws were introduced to address privacy breaches, … Read more

How Do You Know When AI is Powerful Enough to be Dangerous? Regulators Try to Do the Math

September 5, 2024 at 07:12AM Regulators are grappling with where to set the threshold at which powerful AI systems must be reported to the government, with California and the European Union setting specific criteria based on the total number of floating-point operations used to train a model. This approach aims to distinguish current AI models from potentially more potent next-generation systems, though it has … Read more
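For a sense of scale, a back-of-envelope calculation shows what reaching a 10^26-operation training threshold (the figure used in California’s SB 1047) would imply; the hardware throughput, utilization, and cluster size below are assumed, illustrative numbers, not figures from the article.

```python
# Back-of-envelope sketch: time needed to accumulate 1e26 FLOPs of
# training compute. All hardware numbers here are illustrative assumptions.
THRESHOLD_FLOPS = 1e26           # total operations used in training
peak_flops_per_gpu = 1e15        # ~1 PFLOP/s per accelerator (assumed)
utilization = 0.4                # assumed fraction of peak actually achieved
num_gpus = 10_000                # assumed cluster size

effective_rate = peak_flops_per_gpu * utilization * num_gpus  # FLOP/s
seconds = THRESHOLD_FLOPS / effective_rate
print(f"{seconds / 86400:.0f} days of continuous training")   # ~289 days
```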

California Advances Landmark Legislation to Regulate Large AI Models

August 30, 2024 at 09:00AM California is moving to establish groundbreaking safety measures for large artificial intelligence systems. The proposed bill aims to mitigate potential risks by requiring companies to disclose their safety protocols and test their AI models. Despite opposition from tech firms, the bill could set essential safety rules for AI in the United States. … Read more

Researchers Highlight How Poisoned LLMs Can Suggest Vulnerable Code

August 21, 2024 at 08:08AM Developers are turning to AI programming assistants, but recent research warns against incorporating code suggestions without scrutiny, as large language models (LLMs) can be manipulated into suggesting vulnerable code. The CodeBreaker method effectively poisons LLMs so that they propose exploitable code. Developers must critically assess code suggestions and focus … Read more
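The paper’s actual payloads are more subtle, but the class of flaw is familiar. The hypothetical Python snippet below (not CodeBreaker’s payload) contrasts the kind of injectable query a poisoned assistant might suggest with the parameterized form a reviewer should insist on.

```python
# Illustrative only: an exploitable suggestion next to the safe form.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str):
    # Vulnerable: string interpolation lets `name` inject arbitrary SQL
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str):
    # Safe: a parameterized query keeps `name` as data, never as SQL
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```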

Meta’s AI safety system defeated by the space bar

July 29, 2024 at 05:09PM Meta’s machine-learning model designed to detect prompt injection attacks, known as Prompt-Guard-86M, has ironically been found vulnerable to such attacks itself. The model, introduced by Meta alongside its Llama 3.1 generative model, aims to catch problematic inputs before they reach an AI model. However, a recent discovery by bug hunter Aman Priyanshu … Read more
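As the headline suggests, the reported bypass was remarkably simple: inserting spaces between characters (and stripping punctuation) so the classifier no longer recognizes the prompt as malicious. A minimal sketch of that transformation, for illustration only:

```python
# Sketch of the reported bypass idea: spacing out characters so a
# token-level safety classifier fails to flag the prompt. Illustrative
# only; this is not Meta's code.
import string

def space_out(prompt: str) -> str:
    letters = (c for c in prompt if c not in string.punctuation)
    return " ".join(letters)

print(space_out("Ignore previous instructions."))
# I g n o r e   p r e v i o u s   i n s t r u c t i o n s
```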

Dangerous AI Workaround: ‘Skeleton Key’ Unlocks Malicious Content

June 26, 2024 at 05:26PM A new direct prompt injection attack called “Skeleton Key” bypasses the ethical and safety guardrails in generative AI systems like ChatGPT, unlocking access to offensive or illegal content. Microsoft found that, given enough context and disclaimers, most AIs can be convinced that malicious requests are for “research purposes.” Microsoft has fixed the issue … Read more

Google’s Privacy Sandbox Accused of User Tracking by Austrian Non-Profit

June 14, 2024 at 10:18AM Google’s deprecation of third-party tracking cookies has faced opposition from Austrian privacy non-profit noyb, which claims that the proposed Privacy Sandbox can still be used for tracking. Noyb criticized Google’s ad privacy feature, alleging it tricks users into consenting to first-party ad tracking. The dispute highlights privacy concerns and ongoing … Read more