OpenAI has identified and deactivated a cluster of ChatGPT accounts being used by an Iranian group to create fake news articles and social media comments aimed at influencing the 2024 U.S. elections.
This marks the first time OpenAI has detected and removed an operation specifically targeting the U.S. elections, underscoring the growing concern among experts that AI tools like ChatGPT could accelerate the spread of disinformation campaigns by nation-state adversaries.
The discovered operation
As explained in a blog post, OpenAI identified the accounts in question as part of a group known as Storm-2035, which has been linked to creating fake news websites and spreading their content on social media platforms to sway electoral outcomes. The Iranian operators used ChatGPT both to craft long-form fake news stories and to write comments for social media posts.
The topics covered by the fabricated content ranged from the Israel-Hamas war and Israel’s presence at the Olympic Games to the U.S. presidential election. OpenAI linked the accounts to a dozen X (formerly Twitter) accounts and one Instagram account, which have since been deactivated. Meta has also taken down the identified Instagram account, which was reportedly part of a 2021 Iranian campaign targeting users in Scotland.
In addition to the social media activity, the operators created five websites posing as both progressive and conservative news outlets, publishing election-related content. One example of AI-generated content spotted by OpenAI featured a headline that read, “Why Kamala Harris Picked Tim Walz as Her Running Mate: A Calculated Choice for Unity.”
The impact and future concerns
While most of the social media accounts sharing this AI-fueled disinformation did not gain significant traction, experts warn that the threat is far from over. Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team, emphasized the importance of remaining vigilant but calm.
As the 2024 U.S. elections approach, it remains to be seen whether foreign influence operations will intensify their efforts online. In light of this discovery, OpenAI has stressed the need for continued innovation in detecting and counteracting disinformation campaigns.
The role of AI in disinformation
The use of AI-assisted tools like ChatGPT to create and disseminate disinformation raises significant concerns about the potential scale and impact of future influence operations. By leveraging advanced algorithms, nation-state adversaries may be able to generate more convincing content at an unprecedented pace, making it challenging for both platforms and users to identify and counteract these efforts.
As AI technology continues to evolve, it is essential for developers, policymakers, and social media platforms to work together to address the risks associated with disinformation campaigns and safeguard the integrity of democratic processes.