Marijan Hassan - Tech Journalist

OpenAI shuts down malicious ChatGPT accounts to curb US election interference


OpenAI has taken decisive action against a cluster of ChatGPT accounts tied to an Iranian influence operation that was attempting to interfere in the upcoming U.S. presidential election. In a blog post released on Friday, the company announced that it had banned these accounts, which were found to be generating AI-crafted content, including articles and social media posts.



While the operation appeared to have limited reach, it marked another example of how state-affiliated actors are leveraging generative AI to spread misinformation.


This crackdown is not OpenAI’s first encounter with state-linked actors misusing its technology. In May, the company disrupted five campaigns that were similarly using ChatGPT to manipulate public opinion. These incidents echo the tactics used by state actors in previous election cycles, where social media platforms like Facebook and Twitter were exploited to influence voters.


Now, similar groups, possibly including some of the same actors, are using AI tools like ChatGPT to flood social media with false narratives.


OpenAI’s investigation into this latest cluster of accounts was prompted by a recent Microsoft Threat Intelligence report. The report identified the group, labeled Storm-2035, as part of a broader campaign that has been attempting to influence U.S. elections since 2020.


According to Microsoft, Storm-2035 is an Iranian network that operates multiple websites masquerading as legitimate news outlets, engaging U.S. voter groups with polarizing content on topics such as presidential candidates, LGBTQ rights, and the Israel-Hamas conflict. The goal of these operations appears to be sowing discord rather than promoting a specific agenda.


Among the tactics employed by Storm-2035 was the use of ChatGPT to draft long-form articles under the guise of both progressive and conservative news outlets. One such article falsely claimed that Elon Musk’s X platform was censoring former President Donald Trump’s tweets, despite Musk’s efforts to encourage Trump’s engagement on the platform.


Additionally, the group managed several social media accounts, using ChatGPT to craft misleading political commentary. One such post falsely alleged that Vice President Kamala Harris blamed “increased immigration costs” on climate change, accompanied by the hashtag “#DumpKamala.”


Despite these efforts, OpenAI reported that the operation’s content had minimal impact, with most social media posts receiving little to no engagement. However, the ease and low cost of deploying such campaigns using AI tools mean that similar incidents are likely to recur as the U.S. presidential election approaches and political discourse intensifies. Consequently, the company and other tech platforms will need to remain vigilant in identifying and disrupting these operations to protect the integrity of the democratic process.
