Algorithmic predictions are ubiquitous these days—think of Amazon recommending a book based on past purchases. More controversial uses arise when algorithms incorporate not just personal history, but information about people generally, blurring the line between personal causation and broad, population-level trends.
More and more decisions are made using machine learning algorithms, which, in theory, can be useful and objective. In reality, says Kay Mathiesen, associate professor of philosophy and religion at Northeastern, “data is biased—because it’s data coming from human beings.”
Mathiesen is the lead organizer of the 17th Annual Information Ethics Roundtable, a three-day event that will address the role of artificial intelligence—if it has one at all—in law, employment, and beyond.
This year’s roundtable, titled “Justice and Fairness in Data Use and Machine Learning,” will convene 60 scholars and community members to discuss solutions to a flawed system: the infusion of human bias into artificial intelligence. The event kicks off Friday, April 5, at 909 Renaissance Park on Northeastern’s Boston campus.
Mathiesen emphasizes the timeliness of this event’s topic.
Machine learning is currently used to increase efficiency in granting parole or selecting job candidates. But a model trained on past data (which reflects, for example, higher incarceration rates among people of color and more frequent hiring of men) can perpetuate the criminalization or underrepresentation of the groups those patterns describe.
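The dynamic described above can be sketched with a toy simulation. Everything here is hypothetical and invented for illustration (the qualification scores, the group labels, and the 30-point "historical bias" penalty): a model that simply learns the majority outcome from biased hiring records ends up denying candidates from the disadvantaged group who are identically qualified to candidates it accepts.

```python
import random

random.seed(0)

# Hypothetical synthetic "historical hiring" records: a qualification score
# (0-10) and a group label. The assumed bias: equally qualified group-B
# candidates were hired 30 percentage points less often than group-A ones.
def historical_decision(score, group):
    base = score / 10                        # hire probability tied to qualification
    penalty = 0.3 if group == "B" else 0.0   # injected historical bias
    return random.random() < max(base - penalty, 0.0)

data = [(s, g, historical_decision(s, g))
        for s in range(11) for g in ("A", "B") for _ in range(500)]

# A naive "learned" rule: for each (score, group) cell, predict the
# majority outcome observed in the historical data.
def train(rows):
    cells = {}
    for s, g, hired in rows:
        cells.setdefault((s, g), []).append(hired)
    return {k: sum(v) / len(v) >= 0.5 for k, v in cells.items()}

model = train(data)

# Two candidates with identical qualifications but different groups:
print(model[(6, "A")], model[(6, "B")])
```

The model never sees an explicit instruction to discriminate; it simply reproduces the disparity baked into its training data, which is the failure mode the roundtable is organized around.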
However, there is the potential to correct these biases, says Mathiesen.
Machine learning, she says, has the ability to both consider the most relevant factors in a decision (“instead of just stuff we think is relevant,” Mathiesen says) and produce more tailored analyses, processing more details about individual situations than people can process on their own. “In principle, it could be much more objective,” she says.
As participants discuss this topic over the course of the three-day roundtable, their time will be equally split between formal presentations and discussion. This is meant to facilitate conversation and the exchange of ideas among attendees, who, in the 16 years of the event’s existence, have represented such fields as library and information science, computer science, communications, law, politics, psychology, and philosophy.
Though the event is currently at capacity, the sessions will be recorded and published on the Northeastern Ethics Institute website.
Tina Eliassi-Rad, an associate professor in the Khoury College of Computer Sciences, will deliver one of three keynote speeches. Her research explores the intersection of machine learning and network science and has recently expanded to include the ethics of artificial intelligence.
Other keynotes will be delivered by Solon Barocas, assistant professor of information science at Cornell University, and Reuben Binns, a postdoctoral researcher in computer science at the University of Oxford. All speakers and topic areas can be found in the event schedule.
Mathiesen, who founded the annual roundtable in 2002 with fellow philosophy professor Don Fallis, has helped organize the event all 17 years since its inception. Other organizers of this year’s roundtable include professors Ron Sandler and John Basl of the Northeastern Ethics Institute, one of the co-sponsors of the event.
The 2019 Information Ethics Roundtable is also sponsored by Northeastern’s College of Social Sciences and Humanities; Humanities Center; Center for Law, Innovation and Creativity; and Khoury College of Computer Sciences.
For media inquiries, please contact firstname.lastname@example.org.