Supreme Court ruling clears path for researchers, journalists who test Facebook and other platforms for discrimination

by Molly Callahan
June 3, 2021

Photo by Ruby Wallau/Northeastern University

In a major victory for researchers and journalists who test online platforms for discrimination, the U.S. Supreme Court on Thursday narrowed the scope of a federal law that had created the potential for criminal and civil liability for violations of website terms of service.

The 6-3 decision rests on the Court's interpretation of a 1986 federal law, the Computer Fraud and Abuse Act, which prohibits people from "intentionally accessing a computer without authorization" or "exceeding authorized access." The case, Van Buren v. United States, concerns a former police officer who, in exchange for cash, agreed to search digital license plate records outside his official duties. It is not the broad strokes of the incident, however, but the implications of the ruling that clear the way for researchers to continue the important work of probing websites and online platforms for evidence of bias and discrimination.

"The ruling removes a significant cloud of legal uncertainty around the kind of work we do," said Alan Mislove, professor of computer science at Northeastern and one of a number of researchers, journalists, and representatives from nonprofit organizations who signed an amicus brief with the American Civil Liberties Union.

Left, Christo Wilson, associate professor in the Khoury College of Computer Sciences. Right, Alan Mislove, professor, senior associate dean of academic affairs, and associate dean of faculty in the Khoury College of Computer Sciences.
Photos by Matthew Modoono/Northeastern University

Mislove, his colleagues at Northeastern, and many others at universities, news organizations, and civil rights groups across the country conduct research that uses data from social media platforms and other websites to determine whether the algorithms powering them behave in discriminatory ways. To do so, the researchers often have to study these websites without the cooperation of the companies themselves, frequently in ways that violate a company's byzantine terms of service.

"Because we study these things that the providers of these systems often don't want to be studied, we're typically running these algorithmic audits from outside the company and often without the company's knowledge," Mislove said.

For example, Mislove and a team of Northeastern researchers recently tested Facebook's advertising system with a series of online advertisements. As the researchers tweaked the images, Facebook's system delivered the ads disproportionately to particular racial and gender groups. The findings pointed to a troubling consequence: the group of users to whom Facebook chose to show an ad could be skewed along gender and racial lines, in potential violation of federal laws that prohibit discrimination in advertising for employment, housing, and credit. (In late 2019, Facebook promised to change the way it manages housing, employment, and credit advertisements on its platform. Subsequent research by Mislove and his colleagues, however, showed that ads could still skew along racial and gender lines.)

Until Thursday, such audits of company algorithms were a risky endeavor: there had been no clear legal interpretation of the Computer Fraud and Abuse Act that accommodated the ethical work the researchers were doing.
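The kind of audit described above boils down to a statistical question: did two otherwise-identical ads reach demographically different audiences? A minimal sketch of that comparison, using entirely hypothetical delivery counts and a standard two-proportion z-test (not the researchers' actual methodology, which is described in their published papers), might look like this:

```python
# Hypothetical sketch of an algorithmic-audit skew check.
# All numbers below are invented for illustration; this is not
# the Northeastern team's actual method or data.
import math

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Two-sided two-proportion z-test.

    hits_a / n_a: impressions of ad A that reached one demographic group,
                  out of all impressions of ad A (likewise for ad B).
    Returns the z statistic and a two-sided p-value.
    """
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p_pool = (hits_a + hits_b) / (n_a + n_b)            # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided tail
    return z, p_value

# Two ads differing only in their creative image: of 1,000 impressions
# each, how many reached users estimated to belong to one group?
z, p = two_proportion_z(620, 1000, 480, 1000)
print(f"z = {z:.2f}, p = {p:.2g}")
```

A small p-value here would indicate that the difference in delivery is unlikely to be chance, which is the kind of evidence an audit uses to argue that the platform's delivery algorithm, not random variation, skewed the audience.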
As it stood, the CFAA "provided a significant chilling effect on the kind of research we do," Mislove said. "We have had students in the past who, when we explained the potential risks based on this 1980s law, decided that they didn't feel comfortable doing this research with us."

Indeed, an overly blunt interpretation of the CFAA could have made criminal "a breathtaking amount of commonplace computer activity," wrote Supreme Court Justice Amy Coney Barrett in the majority opinion.

"Take the workplace," Barrett wrote. "Employers commonly state that computers and electronic devices can be used only for business purposes. So, on the government's reading of the statute, an employee who sends a personal email or reads the news using her work computer has violated" the 1986 law.

In the opinion, Barrett also cited the amicus brief signed by Mislove and his Northeastern colleague Christo Wilson, associate professor of computer science: an overly broad interpretation of the CFAA might "criminalize everything from embellishing an online dating profile to using a pseudonym on Facebook."

Thursday's ruling represented "an important victory for civil liberties and civil rights enforcement in the digital age," said Esha Bhandari, deputy director of the ACLU's Speech, Privacy, and Technology Project, in a press release.

"The Supreme Court's decision will allow researchers and journalists to use common investigative techniques online without fear of CFAA liability," Bhandari said. "It clears away a major barrier to online anti-discrimination testing and research, which is necessary to hold powerful companies and platforms accountable."

"The ruling puts us on firm legal ground," Mislove said. "We don't have to worry about a CFAA prosecution, although we still think very carefully about our methods and the ethics of what we do."