How Russia, China and Iran Are Interfering in the Presidential Election
Eight years after Russia interfered in the 2016 presidential election, foreign influence with American voters has grown more sophisticated. That could have outsize consequences in the 2024 race.
When Russia interfered in the 2016 U.S. presidential election, spreading divisive and inflammatory posts online to stoke outrage, its posts were brash and riddled with spelling errors and strange syntax. They were designed to get attention by any means necessary.
“Hillary is a Satan,” one Russian-made Facebook post read.
Now, eight years later, foreign interference in American elections has become far more sophisticated, and far more difficult to track.
Disinformation from abroad — particularly from Russia, China and Iran — has matured into a consistent and pernicious threat, as the countries test, iterate and deploy increasingly nuanced tactics, according to U.S. intelligence and defense officials, tech companies and academic researchers. The ability to sway even a small pocket of Americans could have outsize consequences for the presidential election, which polls generally show to be a neck-and-neck race.
Russia, according to American intelligence assessments, aims to bolster the candidacy of former President Donald J. Trump, while Iran favors his opponent, Vice President Kamala Harris. China appears to have no preferred outcome.
But the broad goal of these efforts has not changed: to sow discord and chaos in hopes of discrediting American democracy in the eyes of the world. The campaigns, though, have evolved, adapting to a changing media landscape and the proliferation of new tools that make it easy to fool credulous audiences.
Here are the ways that foreign disinformation has evolved:
Now, disinformation is basically everywhere.
Russia was the primary architect of American election-related disinformation in 2016, and its posts ran largely on Facebook.
Now, Iran and China are engaging in similar efforts to influence American politics, and all three are scattering their efforts across dozens of platforms, from small forums where Americans chat about local weather to messaging groups united by shared interests. The countries are taking cues from one another, although there is debate over whether they have directly cooperated on strategies.
There are hordes of Russian accounts on Telegram seeding divisive, sometimes vitriolic videos, memes and articles about the presidential election. There are hundreds more from China that mimicked students to inflame tensions on American campuses this summer over the war in Gaza. Both countries also have accounts on Gab, a less prominent social media platform favored by the far right, where they have worked to promote conspiracy theories.
Russian operatives have also tried to support Mr. Trump on Reddit and forum boards favored by the far right, targeting voters in six swing states along with Hispanic Americans, video gamers and others identified by Russia as potential Trump sympathizers, according to internal documents disclosed in September by the Department of Justice.
One campaign linked to China’s state influence operation, known as Spamouflage, operated accounts under the name Harlan on four platforms — YouTube, X, Instagram and TikTok — to create the impression that the source of the conservative-leaning content was an American.
The content is far more targeted.
The new disinformation being peddled by foreign nations aims not just at swing states, but also at specific districts within them, and at particular ethnic and religious groups within those districts. The more targeted the disinformation is, the more likely it is to take hold, according to researchers and academics who have studied the new influence campaigns.
“When disinformation is custom-built for a specific audience by preying on their interests or opinions, it becomes more effective,” said Melanie Smith, the research director for the Institute for Strategic Dialogue, a research organization based in London. “In previous elections, we were trying to determine what the big false narrative was going to be. This time, it is subtle polarized messaging that stokes the tension.”
Iran in particular has spent its resources setting up covert disinformation efforts to draw in niche groups. A website titled “Not Our War,” which aimed to draw in American military veterans, interspersed articles about the lack of support for active-duty soldiers with virulently anti-American views and conspiracy theories.
Other sites included “Afro Majority,” which created content aimed at Black Americans, and “Savannah Time,” which sought to sway conservative voters in the swing state of Georgia. In Michigan, another swing state, Iran created an online outlet called “Westland Sun” to cater to Arab Americans in suburban Detroit.
“That Iran would target Arab and Muslim populations in Michigan shows that Iran has a nuanced understanding of the political situation in America and is deftly maneuvering to appeal to a key demographic to influence the election in a targeted fashion,” said Max Lesser, a senior analyst at the Foundation for Defense of Democracies.
China and Russia have followed a similar pattern. On X this year, Chinese state media spread false narratives in Spanish about the Supreme Court, which Spanish-speaking users on Facebook and YouTube then circulated further, according to Logically, an organization that monitors disinformation online.
Experts on Chinese disinformation said that inauthentic social media accounts linked to Beijing had become more convincing and engaging and that they now included first-person references to being an American or a military veteran. In recent weeks, according to a report from Microsoft’s Threat Analysis Center, inauthentic accounts linked to China’s Spamouflage targeted House and Senate Republicans seeking re-election in Alabama, Tennessee and Texas.
Artificial intelligence is propelling this evolution.
Recent advances in artificial intelligence have boosted disinformation capabilities beyond what was possible in previous elections, allowing state agents to create and distribute their campaigns with more finesse and efficiency.
OpenAI, whose ChatGPT tool popularized the technology, reported this month that it had disrupted more than 20 foreign operations that had used the company’s products between June and September. They included efforts by Russia, China, Iran and other countries to create and fill websites and to spread propaganda or disinformation on social media — and even to analyze and reply to specific posts. (The New York Times sued OpenAI and Microsoft last year for copyright infringement of news content; both companies have denied the claims.)
“A.I. capabilities are being used to exacerbate the threats that we expected and the threats that we’re seeing,” Jen Easterly, the director of the Cybersecurity and Infrastructure Security Agency, said in an interview. “They’re essentially lowering the bar for a foreign actor to conduct more sophisticated influence campaigns.”
The utility of commercially available A.I. tools can be seen in the efforts of John Mark Dougan, a former deputy sheriff in Florida who now lives in Russia after fleeing criminal charges in the United States.
Working from an apartment in Moscow, he has created scores of websites posing as American news outlets and used them to publish disinformation, effectively doing by himself the work that, eight years ago, would have involved an army of bots. Mr. Dougan’s sites have circulated several disparaging claims about Ms. Harris and her running mate, Gov. Tim Walz of Minnesota, according to NewsGuard, a company that has tracked them in detail.
China, too, has deployed an increasingly advanced tool kit that includes A.I.-manipulated audio files, damaging memes and fabricated voter polls in campaigns around the world. This year, a deepfake video of a Republican congressman from Virginia circulated on TikTok, accompanied by a Chinese caption falsely claiming that the politician was soliciting votes for a critic of Beijing who sought (and later won) the Taiwanese presidency.
It’s becoming much harder to identify disinformation.
All three countries are also becoming better at covering their tracks.
Last month, Russia was caught obscuring its attempts to influence Americans by secretly backing a group of conservative American commentators employed through Tenet Media, a digital platform created in Tennessee in 2023.
The company served as a seemingly legitimate facade for publishing scores of videos with pointed political commentary as well as conspiracy theories about election fraud, Covid-19, immigrants and Russia’s war with Ukraine. Even the influencers who were covertly paid for their appearances on Tenet said they did not know the money came from Russia.
In an echo of Russia’s scheme, Chinese operatives have been cultivating a network of foreign influencers to help spread its narratives, creating a group described as “foreign mouths,” “foreign pens” and “foreign brains,” according to a report last fall by the Australian Strategic Policy Institute.
The new tactics have made it harder for government agencies and tech companies to find and remove the influence campaigns — all while emboldening other hostile states, said Graham Brookie, the senior director at the Atlantic Council’s Digital Forensic Research Lab.
“Where there is more malign foreign influence activity, it creates more surface area, more permission for other bad actors to jump into that space,” he said. “If all of them are doing it, then the cost for exposure is not as high.”
Technology companies aren’t doing as much to stop disinformation.
Foreign disinformation has exploded as tech giants have all but abandoned their efforts to combat it. The largest companies, including Meta, Google, OpenAI and Microsoft, have scaled back their attempts to label and remove disinformation since the last presidential election. Others have no such teams in place at all.
The lack of cohesive policy among the tech companies has made it impossible to form a united front against foreign disinformation, security officials and executives at tech companies said.
“These alternative platforms don’t have the same degree of content moderation and robust trust and safety practices that would potentially mitigate these campaigns,” said Mr. Lesser of the Foundation for Defense of Democracies.
He added that even larger platforms such as X, Facebook and Instagram were trapped in an eternal game of Whac-a-Mole as foreign state operatives quickly rebuilt influence campaigns that had been removed. Alethea, a company that tracks online threats, recently discovered that an Iranian disinformation campaign that used accounts named after hoopoes, the colorful bird, had resurfaced on X despite having been banned twice before.
Sheera Frenkel is a reporter based in the San Francisco Bay Area, covering the ways technology impacts everyday lives with a focus on social media companies, including Facebook, Instagram, Twitter, TikTok, YouTube, Telegram and WhatsApp. More about Sheera Frenkel
Tiffany Hsu reports on misinformation and disinformation and its origins, movement and consequences. She has been a journalist for more than two decades. More about Tiffany Hsu
Steven Lee Myers covers misinformation and disinformation from San Francisco. Since joining The Times in 1989, he has reported from around the world, including Moscow, Baghdad, Beijing and Seoul. More about Steven Lee Myers