Are Russia and China Using AI in Covert Campaigns?
OpenAI recently identified and disrupted five online campaigns that used its generative AI to manipulate public opinion and influence geopolitics. The technology was used to generate social media posts, translate and edit articles, write headlines, and debug computer programs, typically to win political campaign support or swing public opinion in geopolitical conflicts.

This marks the first time OpenAI has revealed how its tools were used in such online deception campaigns, offering a concrete look at how generative AI is changing disinformation. Although the campaigns used OpenAI's technology to post political content, none gained much traction. We expect these disinformation attempts to evolve as generative AI becomes increasingly powerful.

OpenAI’s online chatbots and other AI tools can write social media posts, generate photorealistic images, and write computer programs. In the report, the company says its tools have been used in influence campaigns for years, including a Russian campaign called Doppelganger and a Chinese campaign called Spamouflage. Russia also used the tools in a campaign targeting people in Ukraine, Moldova, the Baltic States, and the United States, mainly via the Telegram messaging service. The tools were also used to debug computer code designed to post information to Telegram automatically.

OpenAI said the political comments received few replies and “likes,” and the efforts were at times unsophisticated. At one point, the campaign posted text that had obviously been generated by AI: “As an AI language model, I am here to assist and provide the desired comment.” At other points, it posted in such poor English that OpenAI named the effort “Bad Grammar.”

According to the report, the Iranian campaign used OpenAI tools to produce and translate long-form articles and headlines aimed at spreading pro-Iranian, anti-Israeli, and anti-U.S. sentiment on websites. The Israeli campaign, which OpenAI called “Zeno,” used the tools to generate fictional personas and biographies meant to stand in for real people on social media services in Israel, Canada, and the United States, and to post anti-Islamic messages.
