Wednesday, Oct. 30, 2024

Hurricane Helene arrived along with a storm of misinformation, including claims that Democrats were controlling the weather to dampen the GOP vote. (NOAA photo)

It’s election season, and as falsehoods rain down like a monsoon, the “rapid research” team of academics at the University of Washington’s Center for an Informed Public has been scrambling to analyze them all. 

They’ve tracked the claim that hundreds of illegal voters in Washington state were registered at a single address. (It turned out to be a homeless services organization.) They cataloged the assertions that thousands of “illegal aliens” were being allowed to vote in Arizona. (The vast majority were likely citizens.) They followed the conspiracy theories that Democrats altered the path of Hurricane Helene to suppress conservative votes in Florida. (Democrats did not.)

Their mission isn’t to check facts. Instead, their goal is to understand how algorithms and influencers can spread claims rapidly around the globe, with careful attention to how a speck of truth can be misinterpreted and manipulated into a wild falsehood. Then, the thinking goes, they can educate the public to become better at parsing fact from fiction.

But in the years since 2020, said the center’s co-founder, Kate Starbird, that mission has become a lot harder. 

As social media sites have cut off low-cost data access, the vast panoramic view researchers once had into the millions of posts on major social media networks has shrunk to a small porthole.

“We are doing our best to continue the research that we’ve been doing since 2013 on crisis events in an environment where it’s harder to get the data,” Starbird said, “and in some cases where the platform owners are hostile to researchers working on their platform.”

Twitter, the site Starbird spent nearly 15 years studying, was purchased by billionaire Elon Musk — a fervent supporter of former President Trump — who rebranded it X and used it to personally spread falsehoods about FEMA’s response to Hurricane Helene and false conspiracy theories about the 2020 election.

Now even trying to elaborate on how she believes X has become more “toxic” feels like a minefield. 

“I’m literally afraid to make a statement on the record, because of the way that it could be used to attack me and my team,” Starbird said. 

It’s not an exaggeration to say that there are powerful figures who have been rooting for these researchers to fail. Trump has made an exhaustively disproven lie — that the 2020 election was stolen from him — a core part of his campaign message, and researchers have become conservative targets. Starbird has been sued. She’s been badgered by internet trolls. She’s been grilled by a Republican-led congressional committee. 

Meanwhile, losing data access meant losing the ability to make sweeping evidence-supported assertions about how the platform had become more toxic. 

“I can’t measure it!” she said with a frustrated half-laugh. “I can’t say there’s more of this than that.” 

The university’s researchers have adapted. As the internet has fragmented, they’re partnering with third-party companies to overcome the data access issues and study a broader array of networks. 

Yet even they can feel disillusionment creep in: Have “large swaths of the public changed what they think about whether we can have a shared reality?” Starbird asks herself.

Or have people always been like this, and now she simply sees the truth? 

The Elon factor  

The researchers at the Stanford Internet Observatory — a similar coalition of misinformation researchers that partnered with UW — have also been reeling from the loss of data access. 

“Has the golden age of quantitative social science on social media passed?” Alex Stamos, founder of the observatory, asked Starbird on a podcast in April. “Do we have to just accept that there’s going to be diminished answers in the future of what’s going on in these platforms?”

As if providing an answer to his questions, three months later Stanford effectively shut down the observatory. 

Amid all the political pressure, the result wasn’t surprising, wrote Renée DiResta, a former research director for the Stanford Internet Observatory, in a June op-ed in the New York Times. 

“Misleading media claims have put students in the position of facing retribution for an academic research project,” DiResta wrote. “Even technology companies no longer appear to be acting together to disrupt election influence operations by foreign countries on their platforms.”

If you could chart the rise and fall of misinformation researchers’ influence, you might place the peak of the graph four years ago, toward the end of 2020. 

Near-unlimited access to social media data had been a dream come true for academics: Billions of conversations between real people were all happening in public view. By plugging into a site’s application programming interface, or API, researchers could rapidly track trends, detailing the ebb and flow of a given rumor on a global scale the way a meteorologist tracks the path of a hurricane. 

“Historically, if data was shared on public accounts in public ways on Twitter, researchers could access that,” Starbird said. “We could search it just like Google could search it. We could download the data and analyze it.” 
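For a sense of what that looked like in practice, here is a minimal sketch of the kind of query a researcher could run. It uses X’s current v2 recent-search endpoint via the Python requests library; the bearer token and search terms are placeholders, and the older, cheaper research tiers this story describes reached far deeper archives than this endpoint does.

```python
import requests

# Placeholder credential: real access requires a developer account and,
# at useful volumes, a paid tier.
BEARER_TOKEN = "YOUR_TOKEN_HERE"

# X (formerly Twitter) API v2 endpoint for searching recent public posts.
URL = "https://api.twitter.com/2/tweets/search/recent"

params = {
    # Example rumor keywords; quotes, OR and -is:retweet are standard v2 operators.
    "query": '"illegal voters" (Arizona OR Maricopa) -is:retweet',
    "max_results": 100,  # per-request cap; page through results via meta.next_token
    "tweet.fields": "created_at,public_metrics",
}

resp = requests.get(
    URL,
    params=params,
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
)
resp.raise_for_status()

# Print a timestamp, retweet count and text snippet for each matching post.
for tweet in resp.json().get("data", []):
    metrics = tweet["public_metrics"]
    print(tweet["created_at"], metrics["retweet_count"], tweet["text"][:80])
```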

Kate Starbird, right, co-founder of the UW’s Center for an Informed Public, speaks at Town Hall Seattle in September with NPR’s Shannon Bond. (University of Washington)

In the summer of 2020, the UW had been able to track the way that discussions about a COVID conspiracy-theory movie called “Plandemic: The Hidden Agenda Behind Covid-19” spread across millions of tweets. 

Starbird’s team charted the movement with a graphic resembling a multicolored petri dish — showing blooms of tweets spreading across overlapping communities: far-right hyperpartisan media Twitter. Pro-Trump political Twitter. Conspiracy-theory Twitter. Anti-vaccine Twitter. One tendril jutted out, spreading across Spanish language Twitter. 
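Graphics like that rest on standard network analysis: accounts become nodes, retweets become edges, and a community-detection algorithm surfaces the overlapping clusters. Here is a toy sketch of the idea in Python using the networkx library; the edge list is invented for illustration, not real tweet data.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Invented (retweeter, original_poster) pairs standing in for a real dataset.
retweets = [
    ("user_a", "hyperpartisan_media"), ("user_b", "hyperpartisan_media"),
    ("user_b", "conspiracy_hub"), ("user_c", "conspiracy_hub"),
    ("user_d", "conspiracy_hub"), ("user_e", "antivax_influencer"),
    ("user_f", "antivax_influencer"), ("user_e", "conspiracy_hub"),
]

# Build an undirected graph where each retweet links two accounts.
G = nx.Graph()
G.add_edges_from(retweets)

# Modularity-based community detection approximates the colored "blooms"
# in a petri-dish-style visualization of overlapping Twitter communities.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {i}: {sorted(community)}")
```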

With COVID spreading and killing hundreds of thousands, social media platforms were paying attention. They began to ban accounts that they concluded had spread COVID misinformation.

UW and Stanford teamed up to form the Election Integrity Partnership, a coalition of researchers tracking false election rumors. They identified thousands of election-related Twitter posts that appeared to violate the company’s policies, and they flagged them to Twitter. In one case, a Twitter executive personally emailed Starbird to inform her that a new policy was being put into place. 

Not fast enough, she’d replied. 

But, inevitably, that meant wading into dangerous political territory. The new COVID and election misinformation policies from social media sites had infuriated conservatives. So did the decision by Facebook and Twitter, in the lead-up to the election, to temporarily restrict links to a story about leaks from a laptop belonging to Hunter Biden, son of then-presidential candidate Joe Biden — suspected misinformation that turned out to be true. 

And in the wake of the Jan. 6, 2021, riots, Facebook and Twitter banned Trump himself. 

So in 2022, when Musk purchased Twitter for $44 billion — far more than most observers thought it was worth — many conservatives treated him as a conquering hero. He killed the site’s COVID misinformation restrictions and introduced a new community-driven fact-checking feature. He leaked internal correspondence between Twitter executives, misinformation researchers and the government to a friendly journalist. 

While the Election Integrity Partnership hadn’t been involved with the decision to censor the Hunter Biden story, it got caught up in the larger backlash. 

In 2023, Starbird was asked to testify for four hours before House Republicans’ Select Subcommittee on the Weaponization of the Federal Government. While Starbird stressed she wasn’t trying to censor anybody, that didn’t stop her from being sued by the state of Louisiana, which accused her of violating the First Amendment. It took two years for the U.S. Supreme Court to declare that Louisiana lacked standing to bring the lawsuit. 

And finally, Musk dramatically increased the cost for access to the site’s data interface, putting it far out of reach for researchers. 

Three years ago, the UW was able to download 100 million Twitter posts a day. Today, unless it wants to pay a fortune, its researchers are limited to downloading about 10,000 tweets a day.

Those sprawling network graphs the university made in 2020 were now impossible to create. 

“We just don’t have enough data,” Starbird wrote in an email to InvestigateWest.

It wasn’t just conservative backlash that caused tech companies to pull back on data access. In Facebook’s case, it was backlash from the left. In 2018, Facebook had been excoriated by congressional Democrats in the wake of revelations that Cambridge Analytica, a political consulting firm, had scraped the profiles of millions of Facebook users to bolster Trump’s 2016 campaign. Facebook started limiting data access as a result. 

Initially, it kept a data portal called CrowdTangle open exclusively for researchers, but this year, Facebook killed that off too. Tech companies had discovered what a gold mine that data could be. 

Websites like the forum site Reddit realized that generative AI companies had been scraping their sites for free, feeding user data into their large language models to train their own apps.

“The Reddit corpus of data is really valuable,” Reddit CEO Steve Huffman told The New York Times. “But we don’t need to give all of that value to some of the largest companies in the world for free.”

A graphic produced by the University of Washington’s Center for an Informed Public depicts the spread on social media of a false claim that elections officials in Arizona had illegally registered thousands of noncitizens to vote. (University of Washington)

Companies understandably wanted to protect their intellectual property and their users’ privacy from being turned into AI fodder for free, Starbird said, but there was another reason: “transparency data tends to not make the platforms look good.”

Adaptation

Joey Schafer, a graduate student who’s been with the Center for an Informed Public since 2020, has seen the impact of the changes firsthand. The old model let researchers draw confident conclusions about the size of conversations — for example, that people were talking about controversies over ballots in Arizona much more than they were talking about ballots in New York or Kansas, Schafer said. 

“You could get a much more reliable sense of the relative size and scale of the conversation,” Schafer said. 

Today, the team has to be choosier, targeting relatively limited numbers of conversations.  

But unlike Stanford, the University of Washington hasn’t given up on the project. 

Their rapid response team of 30 professors, graduate students and undergraduates, drawn from a wide range of academic disciplines, works in shifts. In the morning, one crew may be scouring a slew of social media sites, searching for the latest voting-related rumors that are gaining steam, while another is analyzing a different haul of election rumors. In the afternoon, other teams repeat the process.

Without access to CrowdTangle, said Danielle Lee Tomson, research manager for the center, team members generally “don’t do Facebook research anymore,” though research continues on other platforms in the now-fractured social media landscape.

A wave of Twitter and Facebook bans after the “Stop the Steal” rally on Jan. 6, 2021, had sent many right-wing users scrambling for alternative social media apps like Gab, Telegram and, eventually, the Trump-run Truth Social platform. 

When Musk purchased Twitter, he threw open the door for many banned right-wing trolls to return. He let anyone willing to pay $8 a month snap up the “verified” badges that had once distinguished expert and celebrity voices, and he gave those paying accounts increased visibility on the platform. 

That triggered an exodus of left-wing users from the site toward alternative social media sites like Bluesky, Threads and Mastodon. Some researchers did the same. 

“I can’t be a part of something so actively harmful for society,” Starbird wrote two years ago as she left for Mastodon. 

All those sites were dwarfed by the real growing giant: TikTok. The video site featured an addictive algorithm that was hyper-responsive to user behavior — and that could send a comparative nobody skyrocketing to viral fame. 

“There were so many different conversations that were happening in places that weren’t X,” Starbird said. 

Mert Can Bayar, a postdoctoral researcher at the University of Washington’s Center for an Informed Public, speaks at an event in February about the online discourse around Israel and Hamas. (InvestigateWest)

Researchers can get access to the TikTok data interface, Starbird said, but that would require agreeing to a host of restrictions on how they could work with it. 

Instead, they’re doing smaller-scale experiments to test the algorithms. 

“It’s a lot of human power,” Starbird said. “Setting up phones and having researchers using their phones and writing about what they’re seeing on their phones.”

One researcher lingers on right-wing videos to trigger the algorithm to show what they nickname “RedTok” — the videos the site serves up for conservative viewers — while another lingers on left-wing videos to skew their feed toward “BlueTok.” 

Other team members specialize in seeking insight from smaller platforms. Schafer, for example, focuses on Bluesky. 

The center teamed up with Open Measures, a third-party firm that specializes in tracking communications on fringe and relatively unmoderated platforms, to track behavior on right-wing sites like Truth Social, Gettr and Gab.

“We’re hopefully serving some semblance of a stopgap for an otherwise restricted landscape,” said Hank Teran, CEO of Open Measures.

Schafer and other team members still work to create charts tracking how rumors spread on X. One chart, focused on rumors about illegal voting in Arizona, showed how each time a big conservative account reacted to the story, it created a new boom of reactions. A tweet from Laura Loomer, a 9/11 conspiracy theorist, about “100,000 illegal aliens” potentially voting triggered one spike of discussion in mid-September. Another major spike followed a tweet from an account called “EndWokeness” in early October. Finally, Musk himself shared the story, triggering the last big burst of activity.

But the limitations are apparent, too. 

“We can download 10,000 tweets a day. That’s it. You got to be really, really careful about what you want when you only get 10,000,” Starbird said. “How do we get a similar signal from a lot less data?”

Doing that often requires working with a third-party company to tailor the scraping and make sure they’re getting the most useful sample of tweets. 
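One plausible way to work within a cap like that, sketched below with hypothetical queries and weights rather than the center’s actual method, is to treat the daily limit as a budget and split it across a ranked list of narrow, high-signal searches instead of one broad one.

```python
# Hypothetical allocator for a capped daily download quota.
DAILY_CAP = 10_000

# Narrow, high-signal queries, weighted by current research priority.
QUERIES = [
    ('"noncitizens" (vote OR voting) Arizona', 0.5),
    ('voters "same address" Washington', 0.3),
    ('hurricane "control the weather"', 0.2),
]

def allocate(cap: int, weighted_queries: list[tuple[str, float]]) -> list[tuple[str, int]]:
    """Split the daily cap across queries in proportion to their weights.

    Integer truncation leaves a little slack under the cap, which is safer
    than accidentally exceeding it.
    """
    total = sum(weight for _, weight in weighted_queries)
    return [(q, int(cap * w / total)) for q, w in weighted_queries]

for query, budget in allocate(DAILY_CAP, QUERIES):
    print(f"{budget:>5} tweets/day -> {query}")
```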

Starbird is wary of giving too many details. Given Musk’s clear contempt for misinformation researchers, she’s cautious about even uttering his name, lest she draw his ire. 

“The owner of that platform has a lot of power, and has used that to silence researchers in the past,” Starbird said. 

Years ago, at the beginning of her research career, there was “a certain feeling that at the end of the day people don’t want to be misled, that we’re all searching for the truth,” Starbird said. 

Today that seems naive to her.

As heartbreaking images flooded social media in the wake of Hurricane Helene, one in particular stood out: a little girl in a life jacket, crying and soaked, holding a puppy during the evacuation. 

The image was fake, generated by artificial intelligence. But when it was pointed out, Starbird noted, plenty of people who’d been fooled simply shrugged off the truth, arguing that it didn’t matter that it wasn’t real. It felt true, or stood for a greater truth. 

“People don’t want to be manipulated,” Starbird said. “But they also don’t want to recognize they’ve been manipulated.”

Four Election Claims the University of Washington Tracked

The rapid research team at the University of Washington’s Center for an Informed Public is focused on the torrent of election-related rumors in the lead-up to November’s vote. But unlike many journalistic outfits, the center is less interested in fact-checking the claims than in how and where they are spreading. 

Here are four they’ve tackled this year:

Address questions 

The claim: A thread posted to X in April, flagging suspicious entries in the Washington Voter Registration Database, claimed that more than 200 voters were registered at a single house in Seattle. 

The truth: The database had a typo. The intended address was the site of Seattle’s Compass Housing Alliance, which provides a fixed address for homeless people.  

The spread: By the following day, the tweet had been shared more than 2,000 times on X. It was also posted on Truth Social and linked to as a way of getting around prohibitions on sharing addresses on X. 

Overseas voters

The claim: In a Sept. 23 post on Truth Social, Trump claimed Democrats were “getting ready to CHEAT” by encouraging overseas Americans to vote.

The truth: It’s true that Democrats had been encouraging Americans living overseas to vote, something they can legally do under a federal law, the Uniformed and Overseas Citizens Absentee Voting Act. But federal law still requires that first-time voters prove their identity before voting by mail, and there’s no evidence that overseas voters have become a vector for voter fraud. 

The spread: The rumor likely gathered steam from a Sept. 6 post on the infamously inaccurate far-right Gateway Pundit website. It was boosted by the Federalist, a right-wing media site, and an account apparently impersonating former Trump lawyer Sidney Powell, before eventually being echoed by Trump himself. 

Assassination attempt 

The claim: After Thomas Matthew Crooks’ attempted assassination of Donald Trump on July 13, anti-Trump accounts began stoking speculation that it was all staged for Trump’s political advantage. 

The truth: Reporters quickly confirmed that the assassination attempt was real and that the shooter had been identified and killed. 

The spread: The rumors peaked in the first two hours after the shooting, as confusion and speculation reigned, and largely subsided once reliable reporting emerged. But on TikTok, X and Bluesky, anti-Trump accounts — in both English and Spanish — continued to search for things to be suspicious about, ranging from the position of the photographer to the color of the blood on Trump’s ear. 

Hurricane claims

The claim: Dark and shadowy forces can control the weather, and they intentionally steered the course of Hurricane Helene directly toward Republican-dominated rural areas of Florida and Georgia.

The truth: While man-made climate change likely did contribute to the severity of Hurricane Helene, nothing suggests that anyone did — or could — intentionally control a hurricane.

The spread: The conspiracy theory began heating up on Sept. 28, when @MattWallace888, an X account with over 2.2 million followers, began posting misleading maps and scoffing at the idea that the course of the hurricane was a coincidence. Soon, TikTok videos making the same claim were getting over 1 million views. Rep. Marjorie Taylor Greene, R-Georgia, joined the chorus on Oct. 3, with a tweet claiming “Yes they can control the weather.”

InvestigateWest (invw.org) is an independent news nonprofit dedicated to investigative journalism in the Pacific Northwest. A Report for America corps member, Daniel Walters covers democracy and extremism across the region. He can be reached at daniel@invw.org.
