Sat. Nov 23rd, 2024

Behind the obvious uses of AI in the election, for good and ill, there are “unseen” jobs it is performing, a University of Maryland researcher says. (Photo illustration by Joe Raedle/Getty Images)

Americans are worried about the effect of artificial intelligence on the election, as polls show, but the public probably doesn’t understand the full extent of its influence on what they experience every day, an academic studying the technology says.

There have been obvious examples of AI-generated misinformation, like false audio of President Joe Biden, a fake video about voting irregularities, or memes intended to generate emotion or spread propaganda. AI is also regularly used to generate legitimate campaign messages, like phone calls and texts.

But behind those public examples, there are the “unseen” jobs AI is performing in the election, said Cody Buntain, an assistant professor at the University of Maryland’s College of Information, most prominently in determining the nature of your social media feeds.

“The systems that determine what piece of content is put in front of you, that’s AI at work,” Buntain said. “From TikTok’s For You page to X’s feed or profile page to Facebook’s feed. All that is AI driven.”


Buntain is currently teaching a course on the ways AI is reshaping politics, and said one of the biggest places AI has made an impact is in things we don’t generally see, like your “information diet.”

In a Pew Research Center survey of nearly 10,000 Americans across the political spectrum, released in September, a feeling of unease about artificial intelligence’s role in the presidential election was shared nearly equally by Democrats and Republicans. The survey found that 41% of Republicans and 39% of Democrats feel AI is being used “mostly for bad” during the campaign. Similarly, 56% of Republicans and 58% of Democrats feel “very concerned” about AI’s influence on the election.

A separate Pew study, also released in September, found that many Americans cite social media as their primary news source.

Though general sentiment about AI’s involvement in the election is negative, most Americans probably don’t understand the full scope of how the technologies are being used by campaigns and outside forces, Buntain said. They likely don’t understand the way those systems engineer their social media feeds to reinforce their existing views and preconceptions.

The algorithms are built to promote angry and emotional content in feeds, which can potentially contribute to information silos and echo chambers.
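To make that idea concrete, here is a minimal, hypothetical sketch of engagement-weighted ranking. The post data, field names and weights are all invented for illustration; real platforms use far more complex, proprietary machine-learning models. The point is only that when strong emotional reactions are weighted more heavily than passive ones, provocative content rises to the top of a feed:

```python
# Toy illustration of engagement-weighted feed ranking.
# All numbers and field names here are invented for illustration.

posts = [
    {"id": 1, "likes": 120, "shares": 10, "angry_reactions": 5},
    {"id": 2, "likes": 40,  "shares": 80, "angry_reactions": 300},
    {"id": 3, "likes": 500, "shares": 5,  "angry_reactions": 2},
]

def engagement_score(post):
    # Hypothetical weights: shares and angry reactions count far more
    # than passive likes, so emotionally charged content ranks higher.
    return (post["likes"] * 1.0
            + post["shares"] * 3.0
            + post["angry_reactions"] * 5.0)

# Rank the feed by predicted engagement, highest first.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # the angriest, most-shared post ranks first
```

Under these made-up weights, post 2, which drew the most anger and shares, outranks post 3 despite post 3 having four times as many likes.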

Echo chambers aren’t an inherently bad thing — they can bring a sense of safety and community, Buntain said. And though there’s algorithmic ranking happening on social media, people tend to self-sort into the feeds they identify with. More conservatives have flocked to X since Elon Musk purchased the platform, for example, while more liberal users are spending their time on TikTok.

“Generally, actually, echo chambers in your offline world are much more echoey than echo chambers online,” he said.

But campaign advertising is another system that’s been using “unseen” AI for well over a decade, Buntain said. Although it feels like AI has only been prominent for a few years — especially since the release of ChatGPT in 2022 — this type of information seeking, categorizing and targeted advertising has long been a tool of political campaigns.

The 2012 Obama for America campaign used data, technology and analytics to better reach American television audiences. Those strategies, the foundation of many AI systems today, were further refined and deployed in the 2016 and 2020 elections.

Today’s AI algorithms can extract information about you far beyond general demographics like age and gender, to include unique interests and affiliations. That information is then used by campaigns for targeted advertising to nearly all of your online spaces.

Outside of these “unseen” AI jobs, Buntain zeroed in on the potential harms the Pew study participants were likely worried about. People are often concerned about inequalities and disinformation perpetuated by AI. They’re also concerned about being able to trust the information given to them by AI systems, like chatbots. Many are also probably worried about whether they’re connecting with a real person or a bot throughout the campaign cycle.

People are rightfully concerned about these AI strategies and systems playing a role in the election, but Buntain also worries about the ways AI may be used in the days after the vote, especially if the race is very tight.

“AI tools will allow people to very rapidly create content that makes the situation worse,” he said. “Five years ago, you could still make some sort of misinformation content, but it would take longer, and be much more expensive.”

If you’re not a technologist, there’s a lot about AI that probably mystifies you, and amplifies concerns about society that you already had, Buntain said.

“Is this all just some chat bot behind the scenes that’s trying to get us to donate or trying to get us angry?” Buntain said. “I think that concern about, you know, ‘is this an authentic actor,’ is a concern that AI really amplifies, but it’s a concern that’s been around certainly since 2016.”

Buntain hopes that public perception of AI will change over time. He believes that anxieties about it, especially related to its role in the election, are driven by larger-scale societal issues, like the economy, feeling safe and being able to trust information.

“Being in an increasingly online but yet isolated world, I think, makes us a little bit ripe for … being negative about how these new technologies are likely not going to help us, like we thought,” he said.
