AI influencer marketing may pose risk to brand trust, new Northeastern research finds

Sian Joel-Edgar, associate professor in human-centered computing, says AI-powered influencers have the potential to damage brand reputation more than their human equivalents.

Virtual influencers — artificial personalities created by developers — like Lil Miquela are being used to advertise brands both in and out of the metaverse (Screenshot via YouTube)

LONDON — The metaverse and virtual reality worlds offer a whole new avenue for advertisers and marketers.

But a new study led by a Northeastern University researcher carries a warning for brands looking to use virtual influencers to help sell their products.

The research found that a consumer’s trust in a brand is likely to suffer greater damage when artificial intelligence-powered influencers, rather than their human equivalents, are involved in selling a product the consumer is unhappy with.

Sian Joel-Edgar, an associate professor in human-centered computing who conducted the study, says the findings should drive home to brands that they must keep close control over the virtual influencers they employ to market their products in the metaverse.

The conclusion comes from a paper she co-authored with three colleagues, titled “Virtual influencers in social media versus the metaverse: mind perception, blame judgments and brand trust” and published in the Journal of Business Research.

Those scrolling through social media are likely to be well aware of human influencers, users with huge followings who often use their popularity to advertise brands on the major online platforms, with the likes of footballer Cristiano Ronaldo and singer Selena Gomez topping the follower charts.

As Joel-Edgar explains, virtual influencers (artificial personalities created by developers) are now doing the same in the world of virtual reality, where users, often represented by avatars, interact with humans and AI alike through VR headsets.

Some virtual influencers have become such big names that they have been used to advertise brands outside of the virtual world. 

Miquela Sousa, also known as Lil Miquela, is an AI robot designed and operated by the Los Angeles-based technology startup Brud. The fictional celebrity, whose TikTok account has 3.4 million followers, has advertised for global brands such as carmaker BMW and fashion brand Calvin Klein, among others, with her ads even gracing cinema screens.

“Miquela has represented Prada and L’Oreal’s new futuristic cosmetics and beauty ranges because she is seen as the epitome of the future,” Joel-Edgar highlights.

The researcher, who teaches on Northeastern’s London campus, has been keen to understand through her work how humans perceive virtual influencers and other emerging technologies.

“Something that has interested me is the artificial nature of what we’re presented with in the media: people being airbrushed and made to look very beautiful,” she continues.

“Well, virtual influencers are the extreme end of that. It’s that uncanny valley, where things seem eerie because they’re actually perfect, from their skin texture to their symmetry. And I thought, I wonder how people will react to this? So that’s how it started: that interest in what it means to be human, with virtual influencers being quite an extreme version of that.”

In a study with 255 participants, Joel-Edgar and her colleagues wanted to find out where metaverse users would place blame if they were sold a faulty product on the back of advertising by a virtual influencer.

The researchers discovered that users were likely to attribute more blame to a human influencer than to a virtual one following a negative experience, as they regarded the human as having “more agency and experience.” Joel-Edgar’s co-authors were Soumyadeb Chowdhury of TBS Business School in Toulouse, France; Peter Nagy of Arizona State University; and Shuang Ren of Queen’s University Belfast in Northern Ireland.

But because users regarded the AI-powered influencers as less responsible for their actions, blame shifted toward the brand itself, with the potential to damage brand trust more severely.

Joel-Edgar gives the example of an influencer falsely advertising that a product comes with a 10-year warranty. “In the case of a human influencer,” she explains, “the brand could dissociate themselves and say, ‘That was the human just saying that and they made an error’. 

“However, if it was a virtual influencer saying it, people would not think it was an error. They would think that had been pre-built in and that it was the brand that was at fault there.”

Joel-Edgar says the study shows that brands will need to maintain a level of control over what virtual influencers say about their products to avoid repercussions if consumers are dissatisfied.

The paper concludes that “organisations must recognise their accountability for the actions of AI-powered virtual influencers, as these directly affect brand trust. Selecting virtual influencers should involve not only their ability to attract followers but also a careful evaluation of potential risks to brand reputation.”

They recommend regularly monitoring what virtual influencers are saying and having a crisis communications plan in place to respond quickly to any negative publicity. “In virtual spaces like the metaverse,” the researchers say, “clearly defining virtual influencers’ roles, capabilities, and limitations can enhance transparency and consumer trust.”

In the long run, legal and regulatory frameworks are likely to be needed to provide oversight of virtual influencer marketing, Joel-Edgar argues.

“I think [the study] raises questions about tighter control of virtual influencers and establishing greater transparency — like actually having to declare that they are powered by AI,” she says.

“Things do move quickly in this field. There is now the legal precedent of influencers having to say when they’re advertising — they have to be very upfront about that. But that is relatively recent legislation that has come in. I think this will be another area that will need to have some level of control around it.”