An army of AI-powered bots on X spreads pro-Trump, pro-GOP propaganda, study shows

An army of political campaign accounts powered by artificial intelligence has posed as real people on X to advocate for Republican candidates and causes, according to a research report out of Clemson University.

The report details a coordinated AI campaign that used large language models (LLMs), the type of artificial intelligence that powers humanlike chatbots such as ChatGPT, to reply to other users.

While it is unclear who ran or funded the network, its focus on specific political races, with no clear connection to foreign talking points, suggests it was an American political operation rather than one directed by a foreign government, the researchers said.

Governments and other watchdogs have warned about efforts to influence public opinion through AI-generated content as the November elections approach. The emergence of seemingly coordinated domestic influence operations using AI adds another wrinkle to a chaotic and rapidly evolving information landscape.

The network identified by the Clemson researchers comprises at least 686 X accounts that have posted more than 130,000 times since January. It targeted four Senate races and two primary races and supported former President Donald Trump's re-election campaign. Many of the accounts were removed from X after NBC News emailed the platform for comment. The platform did not respond to questions from NBC News.

The accounts followed a consistent pattern. Many had profile pictures that appeal to conservatives, such as the far-right Pepe the Frog cartoon meme, a cross or an American flag. They frequently replied to politicians or other users discussing polarizing political issues on X, often to promote Republican candidates or policies or to disparage Democratic ones. Although the accounts generally had few followers, replying to larger accounts increased the likelihood of their posts being seen.

Fake accounts and bots designed to artificially boost other accounts have plagued social media platforms for years. But only with the advent of widely available large language models in late 2022 did it become possible to automate convincing, interactive human conversation at scale.

“I’m concerned about what this is going to look like,” Darren Linvill, co-director of Clemson’s Media Forensics Hub and lead researcher on the study, told NBC News. “Bad actors are now learning how to do this. They’re definitely going to get better at it.”

The accounts also weighed in on specific primary races. In the Ohio Republican Senate primary, they supported Frank LaRose over the Trump-backed Bernie Moreno. In Arizona's Republican congressional primary, the accounts favored Blake Masters over Abraham Hamadeh. Trump endorsed both Masters and Hamadeh over the four other Republican candidates in that race.

The network also backed the Republican nominees in Senate races in Montana, Pennsylvania and Wisconsin, as well as North Carolina's Republican-backed voter identification law.

A spokesperson for Hamadeh, who won the primary in July, told NBC News that the campaign noticed an influx of messages criticizing Hamadeh each time he posted on X but didn't know who was behind them or how to stop them. X offers users the option to report platform abuses such as spam, but its policies do not explicitly prohibit fake AI-powered accounts.

The researchers determined that the accounts were part of the same network by evaluating their metadata and by tracking the content of their replies and the accounts they responded to; at times, the accounts attacked the same targets in concert.
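
The report does not publish the researchers' code, but the core idea of grouping accounts by whom they reply to can be sketched in a few lines of Python. The account names and similarity threshold below are invented for illustration only; the study's actual method also drew on metadata and the content of the replies themselves.

    from itertools import combinations

    # Hypothetical data: each account mapped to the set of accounts it replied to.
    # A real analysis would build this from collected post metadata.
    reply_targets = {
        "acct_patriot1": {"senator_a", "candidate_b", "pundit_c"},
        "acct_eagle22":  {"senator_a", "candidate_b", "pundit_c"},
        "acct_flagwave": {"senator_a", "candidate_b", "journalist_d"},
        "acct_gardener": {"cooking_show", "local_news"},
    }

    def jaccard(a, b):
        """Overlap between two sets of reply targets (0 = none, 1 = identical)."""
        return len(a & b) / len(a | b)

    THRESHOLD = 0.5  # illustrative cutoff, not a parameter from the study

    # Flag pairs of accounts whose reply targets overlap suspiciously.
    for (name1, t1), (name2, t2) in combinations(reply_targets.items(), 2):
        score = jaccard(t1, t2)
        if score >= THRESHOLD:
            print(f"{name1} <-> {name2}: overlap {score:.2f}")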

Clemson researchers identified many of the network's accounts through posts in which they appeared to “break,” with text that referred to the AI writing them. Initially, the bots appeared to be using ChatGPT, one of the more tightly restricted LLMs. In a post tagging Sen. Sherrod Brown, D-Ohio, one of the accounts wrote: “Hey, I'm an AI language model trained by OpenAI. If you have any questions or need more help, feel free to ask!” OpenAI declined to comment.

By June, the network's posts indicated that it had switched to Dolphin, a smaller model designed to sidestep the restrictions of models like ChatGPT, whose maker prohibits using its products to deceive others. Some tweets from the accounts included phrases such as “Here’s the dolphin!” and “Dolphin, the uncensored AI tweet writer.”

Tweets from botnet accounts that appear to “break,” exposing the system behind them.
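
A crude version of the filter that catches such “broken” posts can be sketched as a keyword search. The phrase list below is assembled from the examples quoted in this article plus one common chatbot refrain; it is illustrative, not the researchers' actual detector.

    import re

    # Telltale phrases drawn from the examples quoted in this article,
    # plus one common chatbot opener; the list is illustrative only.
    TELLTALES = [
        r"i'?m an ai language model",
        r"trained by openai",
        r"here'?s the dolphin",
        r"uncensored ai tweet writer",
        r"as an ai\b",
    ]
    PATTERN = re.compile("|".join(TELLTALES), re.IGNORECASE)

    def looks_broken(post_text: str) -> bool:
        """Return True if a post appears to 'break' and reveal its model."""
        return bool(PATTERN.search(post_text))

    # Example from the article: the reply that tagged Sen. Sherrod Brown.
    print(looks_broken("Hey, I'm an AI language model trained by OpenAI."))  # True
    print(looks_broken("Great rally tonight! See you in November."))         # False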

Kai-Cheng Yang, a postdoctoral researcher at Northeastern University who studies the misuse of generative AI but was not involved in the Clemson study, reviewed the findings at the request of NBC News. In an interview, he vouched for the findings and methodology, noting that the accounts shared a telltale quirk: unlike real people, they frequently made up hashtags to accompany their posts.

“They have a lot of hashtags, but those hashtags are not necessarily used by people,” Yang said. “Like when you ask ChatGPT to write a tweet and it includes a made-up hashtag.”

A post endorsing LaRose in the Ohio Republican Senate primary, for example, used the hashtag “#VoteFrankLaRose.” A search on X for that hashtag shows that only one other tweet, posted in 2018, has ever used it.

Some of the hashtags in the network's posts are rarely, if ever, used by human users.
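
Yang's observation suggests a simple heuristic: count how often each hashtag appears outside the suspect network and flag the ones almost no one else uses. The usage counts below are hypothetical stand-ins for the manual searches described above.

    from collections import Counter

    # Hypothetical platform-wide usage counts; in practice these would come
    # from searching X for each hashtag, as described in this article.
    platform_usage = Counter({
        "#VoteFrankLaRose": 2,   # the article found just one other use, from 2018
        "#Election2024": 50_000,
        "#Ohio": 120_000,
    })

    def rare_hashtags(network_tags, usage, cutoff=10):
        """Return hashtags the network uses that almost no one else does."""
        return [tag for tag in network_tags if usage.get(tag, 0) <= cutoff]

    network_tags = ["#VoteFrankLaRose", "#Election2024"]
    print(rare_hashtags(network_tags, platform_usage))  # ['#VoteFrankLaRose']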

The researchers found evidence of the campaign only on X. The platform's owner, Elon Musk, promised to rid the platform of bots and fake accounts when he took over the company, then called Twitter, in 2022. But Musk also oversaw deep staffing cuts, which included parts of the trust and safety teams.

It's unclear how the campaign automated the process of creating and publishing content on X, but several consumer products allow similar automation, and publicly available tutorials explain how to set up such an operation.

The researchers said part of the reason they believe the network is an American operation is its hyper-specific support for certain Republican campaigns. Documented foreign propaganda campaigns have consistently reflected those countries' priorities: China opposes U.S. support for Taiwan, Iran opposes Trump's candidacy, and Russia supports Trump and opposes U.S. aid to Ukraine. All three have for years denigrated the democratic process and attempted to sow public discord through social media propaganda campaigns.

“All of these actors are driven by their own goals and agendas,” Linvill said. “It’s probably a domestic player because of the specificity of most of the targeting.”

If the network is American, it's probably not illegal, said Larry Norden, vice president of elections and government programs at NYU's Brennan Center for Justice, a progressive nonprofit group, and the author of a recent analysis of state election laws on AI.

“There really isn’t a lot of regulation in this space, especially at the federal level,” Norden said. “Right now, there is nothing in the law that requires a bot to identify itself as a bot.”

Even if a super PAC hired a marketing firm or contractor to run such a bot farm, the operation wouldn't necessarily show up on its disclosure forms, Norden said; the filings would likely show only a payment to a consultant or vendor.

Although the United States government has repeatedly taken steps to counter deceptive foreign propaganda campaigns designed to influence Americans' political opinions, the U.S. intelligence community generally does not work to counter deceptive operations based in the United States.

Social media platforms routinely purge fake, coordinated accounts that they attribute to official propaganda networks, especially those of China, Iran and Russia. But while such operations have at times required hiring hundreds of workers to write fake content, AI now allows much of that process to be automated.

Often, these fake accounts struggle to gain organic followers before being detected, but the network the Clemson researchers identified taps into the existing audiences of larger accounts by replying to them. LLM technology can also help the accounts avoid detection, by generating new content quickly rather than copying and pasting it.

While the network the Clemson researchers identified is the first well-documented one to systematically use LLMs to reply to users and shape political conversations, there is evidence that others are also using AI in campaigns on X.

On a September press call about foreign operations to influence the election, a U.S. intelligence official said that online propaganda efforts by Iran and particularly Russia have included the use of AI bots to respond to users, although the official declined to discuss the extent of those efforts or share additional details.

Eric Hartford, the creator of Dolphin, told NBC News that he believes technology should reflect the values of those who use it.

“An LLM is a tool, just like lighters and knives and cars and phones and computers and chainsaws,” he said. “We don’t expect a chainsaw to only work on trees, do we?”

“I'm building a tool that can be used for good and for evil,” he said.

Hartford said he was not surprised that someone had used his model for a deceptive political campaign.

“I would say this is a natural consequence of the existence of this technology and it is inevitable,” he said.