April 2, 2024 at 11:07AM
Ahead of a wave of global elections, Clint Watts, general manager of Microsoft’s Threat Analysis Center, anticipates that AI-driven misinformation will influence outcomes. The deception may not reach the sophistication some fear, but it will remain effective because simple tactics work. Watts’ team monitors government-linked threat groups worldwide and has identified AI techniques Russian actors use to spread fake news. AI-generated audio is a particular concern, since it lacks the contextual clues audiences rely on to judge authenticity.
From the meeting notes, it’s evident that there is growing concern about the use of AI to influence elections. Clint Watts, a general manager at Microsoft’s Threat Analysis Center, emphasized that although AI deception may not be as sophisticated as some fear, it can still be very effective. He predicted that the 2024 elections will see fake content; some of it will be deepfakes, but most will be shallow fakes. He also noted that even simple manipulations can spread widely and have a significant impact online.
Watts’ team has been tracking government-linked threat groups from various countries, including Russia, Iran, and China, and has conducted a deep dive into how these groups use AI to influence elections. They have observed that stamping a fabricated picture with a real news organization’s logo has been an effective technique for Russian actors, generating millions of shares.
Watts also discussed indicators for distinguishing real news from fake, including the setting and the medium. AI-generated content depicting elected officials in private settings is easier to pass off as legitimate, because audiences have no public record to check it against. He also pointed to AI audio as a particularly concerning medium: it is easy to create and offers few contextual clues for evaluating authenticity.
Overall, the meeting notes emphasize the need for vigilance about AI’s potential role in election interference, particularly through the creation and dissemination of fake content.