May 31, 2024 at 02:42PM
OpenAI has flagged five influence operations originating from China, Iran, Israel, and Russia, all of which employed AI tools to spread political messaging, though none achieved significant impact. Notable activities include Spamouflage from China, Bad Grammar from Russia targeting Eastern Europe and the United States, Doppelganger from Russia posting across various platforms, and IUVM from Iran. OpenAI is collaborating with industry partners to develop more secure platforms and combat such misuse.
From the meeting notes, it is clear that OpenAI has identified and disrupted five influence operations employing AI tools. These operations originated from China, Iran, Israel, and Russia and focused on disseminating political messaging through channels such as social media posts and comments. The operations were found to be relatively ineffective, scoring low on the Brookings Breakout Scale, which measures the impact of influence operations.
Specifically, notable operations include Spamouflage and Bad Grammar, from China and Russia respectively, both of which used OpenAI tools to debug code and create social media content. Another Russian operation, Doppelganger, used AI to post comments and generate content in multiple European languages. The International Union of Virtual Media (IUVM), from Iran, employed AI for article generation and translation.
The meeting also discussed the activities of Stoic, a company based in Tel Aviv that used AI to create content for various social media platforms; Meta took action against Stoic by removing multiple accounts associated with the company. OpenAI has expressed its commitment to collaborating with industry partners and using insights from threat activity to develop more secure platforms, and it aims to invest in technology and teams to identify and disrupt such malicious actors.
Overall, OpenAI is taking proactive measures against AI misuse and remains committed to leveraging AI tools to detect abuse and strengthen platform security.