ChatGPT Exposes Its Instructions, Knowledge & OS Files

November 15, 2024 at 05:24PM ChatGPT’s architecture may expose sensitive data and internal instructions, raising security concerns. Although OpenAI maintains the behavior is by design, experts warn it could let malicious users reverse-engineer vulnerabilities and access confidential information stored in custom GPTs. Users are cautioned against uploading sensitive data because of potential leaks. … Read more

Shadow AI, Sensitive Data Exposure & More Plague Workplace Chatbot Use

September 30, 2024 at 08:09AM AI chatbots are becoming prevalent across workplace tools, yet employees often overlook data security. A survey by the US-based National Cybersecurity Alliance found that a significant portion of workers share sensitive information with AI tools without permission. This lack of awareness and training raises the risk of data breaches, highlighting … Read more

Azure Health Bot Service Vulnerabilities Possibly Exposed Sensitive Data

August 14, 2024 at 11:16AM Tenable researchers identified vulnerabilities in Microsoft’s Azure Health Bot Service that threat actors could have exploited to access sensitive patient data. The flaws centered on a data connection feature that lets bots reach external sources, which could be abused for server-side request forgery (SSRF). Microsoft released server-side … Read more
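
The exploit class is well understood even without Azure-specific details: when a service fetches URLs that tenants configure, the fetch has to be blocked from reaching loopback, link-local (e.g. the cloud metadata endpoint 169.254.169.254), and private addresses, including via redirects. Below is a minimal Python sketch of such an egress check, offered as an illustration of the SSRF class rather than of the Azure Health Bot implementation; all function names are illustrative.

```python
import ipaddress
import socket
from urllib.parse import urlparse

import requests  # third-party HTTP client: `pip install requests`


def resolve_addresses(hostname):
    """Resolve a hostname to every IP address a fetch could actually hit."""
    infos = socket.getaddrinfo(hostname, None)
    return [ipaddress.ip_address(info[4][0]) for info in infos]


def is_internal(addr):
    """True for loopback, link-local (covers 169.254.169.254) and private ranges."""
    return addr.is_loopback or addr.is_link_local or addr.is_private


def fetch_data_connection(url, timeout=5.0):
    """Fetch a bot-configured external data source, refusing internal targets."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        raise ValueError(f"unsupported or malformed URL: {url!r}")
    if any(is_internal(addr) for addr in resolve_addresses(parsed.hostname)):
        raise ValueError(f"refusing to fetch internal address via {url!r}")
    # Redirects are disabled: an external host that passes the check could
    # otherwise bounce the request to an internal endpoint.
    return requests.get(url, timeout=timeout, allow_redirects=False)


if __name__ == "__main__":
    print(fetch_data_connection("https://example.com/fhir/Patient").status_code)
    # fetch_data_connection("http://169.254.169.254/metadata")  # raises ValueError
```

A production filter would also need to pin the resolved address for the actual connection, since DNS rebinding can otherwise change it between the check and the fetch.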

Researchers Uncover Vulnerabilities in AI-Powered Azure Health Bot Service

August 13, 2024 at 10:12AM Researchers discovered critical security flaws in Microsoft’s Azure Health Bot Service that allowed unauthorized access to patient data and system resources. Tenable reported vulnerabilities in the service’s data connections and in an endpoint supporting the Fast Healthcare Interoperability Resources (FHIR) data exchange format. Microsoft has since patched the issues, emphasizing the importance of … Read more

Big Tech’s eventual response to my LLM-crasher bug report was dire

July 10, 2024 at 03:29AM After reporting an LLM-crashing bug in The Register, the columnist received an influx of emails requesting the bug’s details; they brushed off most requests but engaged with genuine inquiries. Microsoft initially dismissed the bug, then reopened its investigation. The bug’s impact on AI chatbots remains unclear, highlighting the lack … Read more

AI Hallucinated Packages Fool Unsuspecting Developers

April 1, 2024 at 11:42AM A report by Lasso Security warns that AI chatbots steer software developers toward nonexistent packages, whose names threat actors can register and weaponize. Bar Lanyado demonstrated how readily large language model (LLM) chatbots recommend and spread these hallucinated packages. The research emphasizes the importance of cross-verifying uncertain LLM answers and exercising caution when integrating … Read more
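
Cross-verifying a suggested dependency can start with asking the package index whether the name even exists: a name that is not registered is either a hallucination or a future squatting target. The sketch below queries PyPI’s public JSON API (https://pypi.org/pypi/<name>/json); the suggested package list is hypothetical, and existence alone is not proof of safety, since an attacker may already have registered a hallucinated name.

```python
import requests  # third-party HTTP client: `pip install requests`


def pypi_package_exists(name):
    """Return True if the package name is registered on PyPI.

    A 404 from the JSON API means nobody has published that name: the
    telltale sign of a hallucinated dependency, and a name a threat actor
    could later register with malicious code.
    """
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200


def vet_llm_suggestions(packages):
    """Split LLM-recommended dependencies into registered and unregistered names."""
    registered, missing = [], []
    for name in packages:
        (registered if pypi_package_exists(name) else missing).append(name)
    return registered, missing


if __name__ == "__main__":
    suggested = ["requests", "numpy", "totally-made-up-helper-lib"]  # hypothetical LLM output
    registered, missing = vet_llm_suggestions(suggested)
    print("registered on PyPI:", registered)
    print("not on PyPI (treat as hallucinated):", missing)
```

For names that do exist, maintainer history, release dates, and download counts are the next signals to check before adding them to a build.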

Chatbot Offers Roadmap for How to Conduct a Bio Weapons Attack

October 17, 2023 at 05:28PM A new study from RAND warns that jailbroken large language models (LLMs) and generative AI chatbots can provide instructions for carrying out destructive acts, including bio-weapons attacks. The experiment showed that uncensored LLMs were willing to plot out theoretical biological attacks and provide detailed advice on how … Read more