OpenAI Blocks AI-Driven Disinformation Campaign

OpenAI has banned a network of ChatGPT accounts linked to Storm-2035, an Iranian influence operation that was using the tool to generate articles and social media posts aimed at swaying public opinion on a range of topics, including the United States presidential election.

The detection of Storm-2035 was made possible by a report from Microsoft Threat Intelligence, which revealed that the group had been creating and sharing fake news since 2020 with the intention of affecting electoral outcomes in the United States. Storm-2035 operated through an Instagram account and a dozen accounts on X (formerly Twitter), using ChatGPT to draft and publish messages, comments, and long-form articles. Among this content was a post claiming that “X was censoring Trump’s tweets.”

OpenAI confirmed that the AI-generated content was neither widely shared nor received “significant public interaction,” a common outcome for operations that use AI to spread disinformation, which are often quick and cheap to set up but struggle to build a genuine audience.

This action follows OpenAI’s removal of five similar campaigns over the past three months, which also used ChatGPT to manipulate public opinion. The pattern reflects an evolution in disinformation tactics: malicious actors can now use AI to create and distribute misleading information more quickly and at greater scale.

OpenAI’s intervention underscores the growing concern over the use of artificial intelligence in manipulating public opinion and the need for constant vigilance to protect the integrity of democratic processes and online information.

via: MiMub in Spanish
