Most Americans are concerned about AI’s impact on the 2024 presidential election, Northeastern survey finds

The poll was the first project put on by the AI Literacy Lab. Photo by Matthew Modoono/Northeastern University

The majority of Americans are concerned that artificial intelligence will be used to spread falsehoods during the next presidential election, according to a recent Northeastern University survey.

The survey was conducted by Northeastern’s new AI Literacy Lab to gauge the general public’s perceptions of AI. It found that 83% of respondents are worried about the proliferation of AI-generated misinformation during the 2024 presidential campaign.

One thousand American adults 18 and older were polled from Aug. 15 to Aug. 29. The lab released the findings as part of its official launch during the Institute for Experiential AI’s business conference last month.

The survey is the first project to come out of the lab, which plans to work collaboratively with computer scientists, journalists and other media professionals to help them understand and use artificial intelligence. 

“What we’re doing is trying to be a bridge between the scientific community and mass media,” says John Wihbey, an assistant professor of journalism and media innovation at Northeastern. 

Wihbey is a co-chair of the lab with Rupal Patel, a professor in the Khoury College of Computer Sciences and Bouvé College of Health Sciences.

The survey is step one and will help inform the AI Literacy Lab as it hosts workshops and stakeholder meetings, Wihbey says. 

“Really the goal of this particular survey was to look at the ways in which people are informing themselves about this emerging technology and to begin to look at areas where the public feels trepidation, anxiety or optimism,” Wihbey says. 

The finding that 83% of Americans worry about the spread of misinformation during elections reflects both the state of online platforms like X/Twitter and the lack of tools to detect AI-generated content, Wihbey says. 

“As a research community, we’re really facing a crisis of certain data access, the ability to really look into some of these systems to detect, let’s say, large-scale disinformation campaigns that are driven by generative AI,” he says. 

“At the same time, we are a year out from what will likely be a very consequential election that could go a whole number of different ways, including sideways if misinformation and disinformation run rampant and really disrupt access to the ballot but also the election process itself,” Wihbey says.

With AI, bad actors have the potential to more easily create “troll farms,” which Garrett Morrow, a Ph.D. researcher in the AI Literacy Lab, describes as an “organization that employs people to make provocative posts, spread disinformation or propaganda, harass people online, or engage in other antisocial behavior, on purpose.”

“The goal of a troll farm is to sow discord, manipulate the public and even make money via ad revenue on their posts,” he says. “A now classic example would be the Russian Internet Research Agency and their actions during the 2016 presidential election, but many different entities have employed troll farms, including the national governments of India and the Philippines, and even political campaigns in Western countries such as the U.S. or the U.K.” 

That is just one of the many ways generative AI, or gen AI, can cause disruption and chaos, Wihbey and Morrow note.

“Gen AI changes the economies of information, making it easier for bad actors to create plausible-sounding/seeming content, in ways that can cross language and cultural barriers,” Wihbey says. 

Another insight from the survey is that women are more skeptical of AI than men, Wihbey says. 

Of those surveyed, 36.5% of men said media reports about AI make them optimistic, compared with 22.2% of women. Similarly, 42.8% of men believe AI will be developed responsibly, compared with 26.2% of women.

Additionally, respondents with STEM backgrounds tend to be more optimistic about AI: 54.6% of those with STEM backgrounds said they are optimistic, compared with 26.2% of those without.

Patel says people who have spent time learning and engaging with AI tend to have a more positive perspective on it. 

“People who have spent a bit more time understanding and learning this technology have a much more balanced view,” she says. “So I think awareness and information is a critical way to demystify anything.” 

The survey found that people are reading about AI in the media (77% consume news about AI on a weekly basis), but most aren’t using it themselves (68% have never tried a large language model like ChatGPT).

“Even though we are seeing a lot about artificial intelligence in the news, people aren’t necessarily engaging with it that much, at least with generative AI,” Morrow says. 

Cesareo Contreras is a Northeastern Global News reporter. Email him at c.contreras@northeastern.edu. Follow him on X/Twitter @cesareo_r and Threads @cesareor.