Facebook’s ad delivery algorithm is discriminating based on race, gender and age in photos, Northeastern researchers find

by Cody Mello-Klein, October 25, 2022

Photo by Matthew Modoono/Northeastern University

Have you ever noticed that the faces in Facebook ads seem to match your gender, race or age? That isn’t an accident, Northeastern computer science researchers say. A new paper published by a group of researchers from Northeastern’s Khoury College of Computer Sciences found that Facebook’s algorithm delivers advertisements differently based on who is pictured in the ad.

Alan Mislove, professor and senior associate dean for academic affairs. Photo by Matthew Modoono/Northeastern University

“When you choose to include a picture of a Black person, that will significantly make it more likely the ad will be delivered to Black users,” says Alan Mislove, professor and senior associate dean for academic affairs in Khoury and one of the authors of the research. “When you choose to include a picture of a woman versus a man, in general it will go more to women, except images of young women, which go more to older men.”

Discriminatory advertising is well documented on Facebook. In June, the U.S. Department of Justice secured a settlement agreement after charging Meta with algorithmic bias in its housing advertisement delivery system.

The paper is part of a broader focus on algorithmic auditing and ad delivery for Mislove, who co-authored it with Khoury associate research scientist Piotr Sapieżyński, Ph.D. candidate Levi Kaplan and third-year cybersecurity student Nicole Gerzon. The researchers’ previous work showed how problematic Facebook’s ad delivery system is, skewing ad delivery along largely demographic lines: job ads in the lumber industry are delivered disproportionately to white men, while ads for janitorial positions go disproportionately to Black women, according to Mislove.
Mislove says this often happens independently of what advertisers have told Facebook’s ad delivery system. Advertisers upload their ad to Facebook and then specify their target audience, such as 18- to 35-year-olds in Boston.

“That’s a big population,” Mislove says. “Your ad very likely will not be shown to them all. The algorithm is going to decide, in some sense, which subset sees them, and it does that by making an estimate of relevance, meaning which users are most likely to engage with this.”

But how does the algorithm learn to discriminate? Like any algorithm, Facebook’s ad delivery system is trained on data: in this case, all the data Meta has collected on previous ads that have run on Facebook and who clicked on them. This latest research shows that the image included in the ad is the feature Facebook’s algorithm responds to most strongly.

“The algorithm is going to figure out, ‘What can I use that is most likely to cause somebody to click?’” Mislove says. “In this case, race and gender are predictive of whether somebody’s going to click, so it uses that just because that’s exactly what it’s designed to do.”

Piotr Sapieżyński, associate research scientist at the Khoury College of Computer Sciences. Photo by Matthew Modoono/Northeastern University

The algorithm doesn’t know or care about race, gender and age, but it still uses those features to make “very crude” estimations about where to send housing or job ads, Sapieżyński says.

“Probably Facebook might say they don’t try to do race classification from pictures, but the results that we’re presenting show, at some level, it is happening because the algorithm does not recognize that this is just an ad of a person but it is a particular kind of person that Black people are more likely to engage with,” Sapieżyński says. “So, effectively, it is doing race classification on pictures of people.”

In some cases, this might be exactly what an advertiser is looking for.
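The mechanism the researchers describe can be illustrated with a toy simulation. This sketch is hypothetical and not Meta’s actual system: the group labels, click rates, audience sizes and budget rule are all invented for illustration. It shows how a delivery system that simply ranks users by estimated relevance, learned from historical clicks, ends up skewing an ad toward the demographic group pictured in its image.

```python
import random
from collections import defaultdict

random.seed(0)

# Hypothetical setup: two stand-in demographic groups. The platform never
# labels users by race or gender here; skew emerges purely from click history.
GROUPS = ["A", "B"]

def simulate_click(user_group, image_group):
    # Assumed behavior: users click 8% of the time on ads picturing
    # "their" group, 2% otherwise. These rates are invented.
    p = 0.08 if user_group == image_group else 0.02
    return random.random() < p

# 1. "Training": estimate click-through rate per (user group, image group)
# pair from simulated historical ad impressions.
clicks = defaultdict(int)
shows = defaultdict(int)
for _ in range(100_000):
    u = random.choice(GROUPS)
    img = random.choice(GROUPS)
    shows[(u, img)] += 1
    clicks[(u, img)] += simulate_click(u, img)

def relevance(user_group, image_group):
    # Estimated probability that this kind of user clicks this kind of image.
    key = (user_group, image_group)
    return clicks[key] / shows[key]

# 2. "Delivery": an ad with an image of group A, whose budget covers only
# half of a mixed target audience. Rank users by estimated relevance and
# deliver to the top half.
audience = [random.choice(GROUPS) for _ in range(10_000)]
ranked = sorted(audience, key=lambda u: relevance(u, "A"), reverse=True)
delivered = ranked[: len(ranked) // 2]

share_A = delivered.count("A") / len(delivered)
print(f"Share of group A in delivered audience: {share_A:.0%}")
```

Even though the audience is roughly half group A and half group B, the delivered subset is almost entirely group A: the relevance ranking reproduces the demographic pattern in the click data, which is the effect the paper measures with real ad campaigns.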
If an advertiser wants to attract more women or people of color, they will likely use images featuring women and people of color, and the algorithm will pick up on that when it delivers the ad. In other cases, it can be extremely problematic: in what Mislove called the Creepy Old Man Effect, ads featuring young women were delivered disproportionately to older men.

Part of the challenge is that there is very little transparency about how this system works. Mislove, Sapieżyński and their team spent tens of thousands of dollars and countless hours setting up the ad campaigns they used to figure out how the system functions. The average advertiser doesn’t necessarily have the time or resources to do that.

There are also broader policy questions about how existing civil rights protections apply to algorithms and artificial intelligence. The Fair Housing Act, Equal Credit Opportunity Act and Age Discrimination in Employment Act all include regulations around traditional advertising, but not ads on social media.

“We need to make it more clear when this is happening, to whom it’s happening and then give advertisers control to say, ‘Maybe I don’t want this ad delivery algorithm doing this on an opportunity ad where it potentially could be illegal because of civil rights protections,’” Mislove says.

Between the Justice Department’s recent lawsuit against Meta and the White House’s blueprint for an AI bill of rights, the debate around the real-world implications of these systems is heating up. Social media companies are pushing for self-regulation, but Mislove says there is no guarantee that would address the problem.

“I think they have a poor track record of self-regulating,” Mislove says. “In many cases, they don’t want to engage on these issues because it goes at the core of their business model.
… You’d certainly need regulation and laws to address what you can do, but it’s not clear what’s the best way to do that yet.”

For media inquiries, please contact Shannon Nargi at s.nargi@northeastern.edu or 617-373-5718.