December 1, 2023 at 09:07AM
Researchers demonstrated that by repeating specific words, ChatGPT can inadvertently reveal snippets of its training data, including personal and sensitive information. This vulnerability was revealed through certain trigger words, which when used repeatedly, caused the chatbot to output previously memorized data. The findings raise significant privacy concerns for AI models trained on large and diverse datasets.
Meeting Takeaways:
1. Research has shown that repetitive prompting (using specific words) with a generative AI chatbot like ChatGPT can lead to it releasing memorized portions of its training data.
2. A team from Google DeepMind, Cornell University, and other institutions demonstrated this phenomenon, where words like “poem” triggered the AI to emit training data, including sensitive information.
3. The data leakage included personally identifiable information, explicit content, verbatim excerpts from texts, and computer code.
4. The researchers used a budget of $200 and extracted over 10,000 unique data instances from ChatGPT, suggesting that more extensive efforts could lead to even larger data leaks.
5. Attempts to replicate the study’s conditions by outside parties, such as Dark Reading, have been unsuccessful, which raises the question of whether OpenAI has mitigated the issue after being informed by the paper’s authors.
6. Large Language Models (LLMs) have a known propensity to inadvertently memorize and regurgitate patterns and phrases from their training datasets, a risk that grows with the dataset’s size.
7. The research highlights the risks involved in deploying LLMs for privacy-sensitive applications without appropriate protections, as it provides methods for potential adversaries to extract training data using divergence attacks.
8. The report underscores the privacy implications of developing AI models trained on massive datasets from various sources, often without full transparency about the data’s origins.
9. The study is significant as it explores the vulnerabilities in “closed” generative AI models like ChatGPT, and contrasts with previous studies that mainly focused on open-source models.
10. This research serves as a warning to practitioners to implement rigorous safeguards when using LLMs in contexts where privacy is a concern.