Northeastern researchers team up with Accenture to offer a road map for artificial intelligence ethics oversight

by Khalida Sarwari, August 29, 2019

Northeastern professors Ron Sandler and John Basl provide organizations a framework for creating a well-designed and effective artificial intelligence and data ethics committee in a new report produced in collaboration with global professional services company Accenture. Photo by Matthew Modoono/Northeastern University

As artificial intelligence rapidly advances, and concerns over data use and security grow after each high-profile scandal, the discussion among researchers, policymakers, and consumers has centered on how to ensure that human values such as privacy and autonomy are protected as we adopt these systems. These discussions have driven efforts to develop ethical guidelines for our increasingly automated world in the form of oversight committees.

Now, Northeastern professors John Basl and Ron Sandler are offering organizations guidance for how to create a well-designed and effective committee based on similar models used in biomedical research. Maintaining that an ethics committee that is adequately resourced and thoughtfully designed can play an important role in mitigating digital risks and maintaining trust between an organization and the public, the researchers provide a framework for such a system in a new report produced in collaboration with global professional services company Accenture.

“It’s not just that we think it’s a good thing to do,” says Sandler, a philosophy professor who directs the Ethics Institute at Northeastern. “It’s one of the options that is on the table at companies, and policymakers and regulators are thinking about how to do this, but the problem is that guidance on how to effectively build the committees is woefully lacking.
The report is meant to provide some guidance to organizations for how to start setting up these committees.”

“In our conversations with clients across a range of industries, we are consistently seeing this issue as top of mind as organizations implement new technologies,” says Steven Tiell, who leads Responsible Innovation at Accenture Labs. “One of our objectives, in addition to raising awareness of the broad diversity of considerations in this process, was to develop a thoughtful framework for how companies can approach ethics oversight to raise relevant questions and concerns internally and early on.”

In developing the report, the professors drew upon committee-based oversight precedents established in medical research, specifically in the domain of experiments and tests on animals and humans. Existing committee models run the gamut from being mandated by law, to extra-legal, to consultative and advisory. Some are guided by highly specified guidelines and protocols, while others exist in a less legalistic context.

“We’ve been doing research on these issues for some time, and it became really clear about a year ago that there was a significant need for some kind of committee-based oversight related to data and information ethics,” says Sandler. “It was also clear that there wasn’t any good guidance on it, and so we thought, well, why don’t we take what we’ve learned from other contexts and apply it to this context rather than trying to start from scratch.”

The professors considered, for example, institutional review boards, which are committees that oversee research on human subjects that is federally regulated or receives federal funding. These boards arose as a result of ethical misconduct, the professors say, such as in the Tuskegee syphilis studies and the Willowbrook children studies.
The professors also considered committees that oversee embryonic stem-cell research. Unlike institutional review boards, these committees are not federally mandated; instead, the responsibility of developing guidelines for stem-cell research fell upon experts and stakeholders selected by the National Academies of Sciences, Engineering, and Medicine.

The new report emphasizes the importance of assembling panels composed of a diverse field of experts charged with evaluating, from an ethical standpoint, research projects involving personal data; programs that use big data, machine learning, and artificial intelligence; and products and services that employ those technologies. The report encourages organizations that are building ethics committees to ask themselves several fundamental questions: What are the foundational values we care about? Where is this committee going to sit in the organization? How is it going to function? What is its purview, and what are its powers?

“If you want to build a committee that works effectively, and if you really want to build ethical capacity within an organization, it’s a significant undertaking where you can’t just throw together a few people with ethical expertise,” says Sandler.

Added Basl: “We lay out the kinds of experts an organization will need—someone who knows local laws, someone who knows ethics, a variety of technical experts, and members of an affected community. Who those individuals are, or what their particular expertise is, depends on the kind of technology being developed and deployed.”

The collaborators say an oversight committee is only one component among a suite of interventions and governance mechanisms that should be considered to raise the ethical bar within organizations and in the products they create. Accenture’s Tiell builds on this, saying, “in many cases, ethics translates to culture change.
Integrating robust ethics often requires a strong definition of company-wide values, building context-specific principles from those values, communicating those values and principles internally and externally, and ultimately the practice of building those values into products and services. It is critical to establish a strong foundation of governance and risk management that will support the scalable and generative growth of ethical practices and culture while reducing existential risks.”

“This paper is meant to start the process,” says Sandler. “It’s not meant to be the end. We don’t think we’ve got it all figured out. What we’re trying to do is help organizations ask the questions to get started.”

For media inquiries, please contact firstname.lastname@example.org.