The AI portrait app Lensa has gone viral, but it might be more problematic than you think

Various images of President Biden in Lensa's AI portrait app
The AI model behind Lensa’s app is trained on artwork from other artists collected from across the internet. By learning the techniques used in those works, and by using facial recognition to map a user’s features, the AI can render whatever images app users upload in those styles. Photo illustrations created by Northeastern University.

If you’ve been scrolling through your social media timeline this week, you’ve probably noticed a shocking number of colorful, artistically rendered selfies. No, an artist didn’t just rake in a huge commission check. These portraits are made using an artificial intelligence portrait app, Lensa, which went viral this week after the company launched its “magic avatar” feature.

With Lensa, users upload 10 to 20 images and pay $7.99, and the app uses a neural network known as Stable Diffusion to map a user’s facial features and produce dozens of portraits in a variety of art styles, like fantasy, science fiction or anime, within 20 minutes. But while the app has taken the internet by storm, it’s also reigniting debates around ethics, representation and data bias that have become increasingly common as AI and AI art have become more pervasive.

Specifically, artists have raised concerns about the murky ethics underlying the technology. The way AI applications like Lensa work is that developers and engineers use large data sets to train a model to recognize and learn certain characteristics or styles. Once the model has learned that information, it can look at a new picture and render that image in one of the styles it was trained on.
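That last step can be illustrated with the open-source Stable Diffusion model itself. The sketch below is not Lensa’s actual code; the model checkpoint, prompt and file paths are placeholders, and Lensa’s “magic avatar” feature reportedly goes further by fine-tuning the model on each user’s uploaded photos. It simply shows, using the Hugging Face diffusers library, how a pretrained model can re-render a single selfie in a style it has already learned.

```python
# Minimal, hypothetical sketch of style-mapping with Stable Diffusion via the
# Hugging Face diffusers library -- not Lensa's actual pipeline. The model
# name, prompt and file paths are placeholders chosen for illustration.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a pretrained Stable Diffusion checkpoint; the styles it can imitate
# are encoded in the weights learned from its training data.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The user's input photo, resized to the resolution the model expects.
selfie = Image.open("selfie.jpg").convert("RGB").resize((512, 512))

# The text prompt steers the output toward a learned art style, while
# "strength" controls how far the result drifts from the original photo.
avatar = pipe(
    prompt="portrait of a person, fantasy art style, highly detailed",
    image=selfie,
    strength=0.6,
    guidance_scale=7.5,
).images[0]

avatar.save("magic_avatar.png")
```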

In this case, Lensa’s app has been trained on artwork created and posted by artists across the internet, and some artists claim this not only devalues their own work, churning out 50 images at a fraction of the cost of a commission, but also potentially appropriates it, right down to traces of their signatures.

“You can learn from other artists and their artwork, but just like I do not have the right to pick my favorite photo from an artist and use it like it’s mine, we should ask: Should it be allowed to use every artwork out there to train AI models to imitate artists’ unique styles so closely?” says Cansu Canca, ethics lead for Northeastern’s Institute for Experiential AI. “How far should the copyright extend given the use of copyrighted artwork for these new technologies?”

Artists in online communities like DeviantArt, which hosts the kind of art that Lensa is drawing from, typically self-regulate. If someone posts art that looks too similar to another artist’s work, that person is usually criticized and ostracized. But it’s more difficult to point the finger at an AI.

“How do you attribute responsibility when an algorithm generated it?” asks Jennifer Gradecki, an associate professor of art and design at Northeastern.

But does the relationship between AI art applications and human artists need to be adversarial? 

Dakuo Wang, an associate professor of art and design at Northeastern, researches human-centered AI with the intent of getting algorithm engineers and designers to consider the impact of AI on individuals and society at large. He argues that the future of AI is collaborative, not competitive.

“I totally believe that AI will alter how people think, how people work, how people create, but it shouldn’t be a competitor,” Wang says. “The artist or writer has a vision, a story, in their mind. They know what they want to write or paint. AI is simply providing them a better brush or painting tool to realize their vision and their storyline in their mind.”

Unfortunately, companies often focus on drumming up hype or investment in their technology, Wang says, portraying “AI technology to be as good or even better than” humans. 

“That relationship cannot be sustainable, and I don’t think it will last,” he says.

But Lensa raises ethical questions that go beyond the relationship between artists and AI. Many have raised privacy and data collection concerns as well. 

Users might just want to use the app to create a new profile picture, but Lensa’s privacy policy allows the company to use their “face data” to train its AI algorithm. The app’s terms of use also grant Lensa a “perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable, sub-licensable license” to use the portraits created in the app.

“They have your face data in perpetuity,” Gradecki says. “They can do what they want with it, train whatever other algorithm. …It’s one thing when you’re training an algorithm to recognize a face and it’s for a photo app. It’s another thing when it’s potentially going to be used by law enforcement to try to find suspects.”

The images and model being used by Lensa are also stored on Amazon’s cloud computing services, which come with their own terms of service.

“It’s quite possible when it’s uploaded to that, Amazon uses that data for its own analytic purposes, and it becomes part of the products that they market and sell,” says Derek Curry, an assistant professor of art and design at Northeastern.

Some users, particularly women, have also said the app produces oversexualized images with no prompting. Other users have criticized Lensa for struggling to portray Asian faces, or even erasing their race altogether.

Gradecki says these representation biases are more an indictment of the art styles the AI was trained on than of the technology itself. Women are frequently oversexualized in fantasy art, she says. The model simply learned what it was taught.

“There are a lot of stereotypes that are built into these styles,” Gradecki says. “Algorithms replicate bias. They’re just built to do that.”

But Wang says biased data is not an excuse to reproduce bias through an app like Lensa. 

There are already technical solutions to these problems, he says. Companies can put more work into scrubbing the data that is scraped and fed to their algorithms, and they can use categories like gender and race as “control signals” to make the model fairer. The real problem is that engineers and designers intentionally or unintentionally “don’t even think about it.”
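One common version of such an intervention, sketched below under assumed data, is to re-weight training examples so that under-represented groups are not drowned out by the majority. The records, group labels and file names here are hypothetical; this is an illustration of the general idea, not Lensa’s or Stable Diffusion’s actual training setup.

```python
# Hypothetical sketch of inverse-frequency re-weighting by demographic group,
# one simple way to use an attribute as a fairness "control signal" during
# training. All records and labels below are made up for illustration.
from collections import Counter

# Hypothetical training records: (image_path, style_label, demographic_group)
records = [
    ("img_001.png", "fantasy", "group_a"),
    ("img_002.png", "fantasy", "group_a"),
    ("img_003.png", "fantasy", "group_a"),
    ("img_004.png", "anime",   "group_b"),
]

# Count how often each group appears in the scraped data.
group_counts = Counter(group for _, _, group in records)

# Inverse-frequency weights: rarer groups get proportionally larger weights,
# so each group contributes roughly equally to the training loss.
weights = [
    len(records) / (len(group_counts) * group_counts[group])
    for _, _, group in records
]

for (path, style, group), w in zip(records, weights):
    print(f"{path} ({group}, {style}): sample weight = {w:.2f}")
```

With these weights, the three majority-group examples and the single minority-group example each contribute the same total weight, which is one way engineers can counteract an imbalanced training set rather than simply reproducing it.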

“[We need] to make sure we can prepare the next generation of engineers and designers to be better prepared and to think more thoughtfully about the human value when they design this,” Wang says.

For media inquiries, please contact media@northeastern.edu