Is AI killing the social media star?

Companies are cashing in on virtual influencers. Not only does AI image generation bring down costs, it allows companies to have stronger control over messaging, Northeastern experts say.


AI image generation tools are being used to create social media influencers. Photo by Matthew Modoono/Northeastern University

Instagram’s rising stars are a new breed of social media influencer. In fact, many of them aren’t even technically human. 

Take Lil Miquela, otherwise known as Miquela Sousa. The 19-year-old Brazilian-American social media star has amassed over 2.6 million followers on the platform and regularly posts sponsored content in partnership with brands like BMW and Pacsun.

But she’s not a young adult who moved to Los Angeles to try to make it in the creator economy. She was created there and was designed to disrupt that economy.

In other words, she’s virtual and was made using computer-generated imagery (CGI).

Other notable virtual social media influencers include Lu do Magalu, a Brazilian social media star with more than 6.8 million followers on Instagram; Noonoouri, an animated 19-year-old Asian fashionista with over 431,000 followers on the platform; and Bermuda, Miquela’s blond sister, who has over 230,000 Instagram followers. 

Many of these stars have been around for a while. Lil Miquela first appeared back in 2016. But with the rise of AI image generation tools such as DALL-E 2 and Midjourney, virtual influencers are easier than ever to make, and companies are cashing in.

AI modeling agencies like The Clueless have cropped up in the past year to partner with brands to help them create tailor-made avatars, as reported by the Financial Times.

Aitana Lopez, a pink-haired 25-year-old woman with over 265,000 Instagram followers, is one of Clueless’ most well-known virtual stars, raking in thousands of dollars in brand deals on a regular basis despite only having been created last summer.

From a marketing and branding perspective, it’s easy to understand the phenomenon, says Yakov Bart, professor of marketing at Northeastern University and a leadership committee member at Northeastern’s Institute for Experiential AI. Not only does it bring down costs, it allows companies to have stronger control over messaging. 

“Brands are definitely open to leverage whatever advertising technology that they can get their hands on that gives them a good return on investment,” he says. “In some contexts using virtual or synthetic influencers is more efficient when you take into account the return in terms of changes to consumer mindset after interacting with the influencer versus the costs.” 

But where does that leave human influencers who rely on these platforms to make a living? 

Also, should AI influencers be required to disclose they are computer-generated? 

Many of the AI social media stars active today do deliberately state that they are computer-generated. Lil Miquela’s Instagram bio, for example, says, “19-year-old Robot living in LA.”  

Bart says accounts may be doing this in part to highlight the novelty of the technology and help them stand out. 

“Because consumers are bombarded with communications from brands across hundreds of different channels every day, it’s getting harder and harder to stand out in the crowd,” he says. 

But highlighting the technology for marketing purposes and disclosing for transparency purposes are two different things.

Instagram is exploring labels for AI-generated content on its platform. In September, TikTok gave creators the option to label their AI-made content and is exploring automating those labels as well. 

There is a bill making its way through Congress that would require companies to label AI-made content. President Joe Biden’s executive order on regulating AI also provides guidance on labeling AI content.  

But what’s a human social media influencer to do? 

Bart is quick to note that in terms of marketing, humans will continue to play a pivotal role as these technologies develop. 

While these avatars may give the illusion of being sentient, they aren’t. Lil Miquela, for example, has a whole team of humans behind her pulling the strings in the background, Bart explains. 

Human social media influencers are also using AI tools to reduce their workload and collaborate with brands. 

“There are more and more agencies that are utilizing sophisticated machine learning AI tools to match with millions of influencers and thousands of brands looking to connect with the exact right type of influencer,” Bart says. 

But the concern is still there. 

In 2022, clothing brand Pacsun faced major backlash when it announced that Lil Miquela would be its newest ambassador. Critics of the move said the influencer perpetuates stereotypes and unrealistic beauty standards. They argued that a real human should have been hired instead. 

More recently, Formula E team Mahindra this month dropped its female AI influencer, “Ava Beyond Reality,” after fans complained for many of the same reasons. 

Tomo Lazovich, a senior research scientist at the Institute for Experiential AI at Northeastern University, compared the situation to strikes in Hollywood this past summer and fall. 

Among the demands of the actors and writers was to establish guardrails on how and when AI could be used in the creative process.

“It’s not super surprising that we’re seeing the issue in the online content generation space where companies are trying to use generative AI to replace people that would otherwise be creating that content,” Lazovich says.

There have been efforts to help social media influencers unionize. In 2021, for example, SAG-AFTRA released the Influencer Agreement, “a pre-negotiated deal that offers members who work as influencers some protections when negotiating a contract for sponsored content,” as reported by Time magazine. 


But the true impact AI will have on the creator economy is still up in the air as both brands and influencers continue to learn how to use the technology most efficiently. 

Moreover, Lazovich has concerns about how these AI generation tools will be used to spread misinformation and elevate bad actors. These virtual social media stars have also been called out by parents for the potential harm they could cause teens.

“Like many AI tools, it’s going to be another way to amplify existing biases within our society,” says Lazovich. 

The way Michael Ann DeVito, a Northeastern professor of computer sciences and communication studies, sees it, the problem starts on a foundational level, specifically with the mountains of images and text these AI tools are being trained on. 

“The bigger issue I’m seeing, and I would actually say the more insidious issue, is that all of the AI technology that is being used to generate these influencers in the first place are based on massive amounts of copyright violations,” she says. “As far as AI influencers are concerned, this is the core of the problem.”

Many of the AI tools on the market today are built on large generative models, such as the large language models (LLMs) behind chatbots and the image-generation systems behind virtual influencers. To function, these models must be trained on vast amounts of data before they can produce new content. 

The companies behind tools like DALL-E 2 and Midjourney are currently being sued for copyright infringement; the plaintiffs argue that the companies trained their AI models on copyrighted work. 

The companies assert that training their models on this data falls under fair use, a doctrine in U.S. copyright law that allows a party to use another’s copyrighted work for purposes such as “criticism, comment, news reporting, teaching, scholarship, or research,” according to the Copyright Alliance.

DeVito isn’t buying that argument. 

“There are criteria around fair use,” she says. “There are limits on how much you can sample. They scoop up entire web pages, entire videos. They have no internal control.” 

So what’s the best way forward? 

DeVito says the first step is to move away from large, general-purpose models like DALL-E 2 and Midjourney and instead build models trained on smaller, more specific data sets with stronger curation, using data that is accurate and can legally be used.

“Smaller purpose-built models that have more curation in the data that is going into them could be a lot safer,” she says. 

Cesareo Contreras is a Northeastern Global News reporter. Email him at c.contreras@northeastern.edu. Follow him on X/Twitter @cesareo_r and Threads @cesareor.