Renowned tech scholar and ex-Biden official talks AI hype, White House work and the prospects of ‘superintelligence’

Alondra Nelson at undergraduate commencement at Fenway Park
Photo by Matthew Modoono/Northeastern University

This is part of our coverage of Northeastern’s 2023 commencement exercises. For more information, visit our dedicated commencement page.

Few experts in the world of science and technology are as in tune with developments in artificial intelligence as Alondra Nelson, a renowned scholar, writer and policy expert, who oversaw the drafting of the White House’s “AI Bill of Rights.” 

For the last two years, Nelson served as acting director of the White House Office of Science and Technology Policy and as a deputy assistant to President Joe Biden. The AI Bill of Rights—the first document of its kind as it pertains to the emerging technology—articulates a set of five principles to help steer the development, use and implementation of AI-based tools.

Over the weekend, Nelson was awarded an honorary doctorate at Northeastern’s 2023 undergraduate commencement ceremony for being a “groundbreaking advocate for scientific discovery and technology innovation that focuses on ethics, racial and gender equity, and access.” Northeastern Global News sat down with Nelson for a wide-ranging conversation about so-called generative AI, tech-based “hype cycles” and the promises (and dangers) associated with the current moment of AI enthusiasm.

The conversation has been edited for brevity and clarity.

In October 2021, you and one of your colleagues in the White House’s Office of Science and Technology Policy wrote an op-ed discussing the need for an “AI Bill of Rights.” What developments in the AI space prompted this op-ed? It would be great if you could give us some insight into what you all were seeing then that might give us some context for the present.

The Biden-Harris administration came into office with a kind of tech policy agenda—and a big tech accountability agenda. It’s hard to see all the pieces come together, because there is a lot going on in this space. For example, there was, and continues to be, an antitrust and competition policy push that was happening in the National Economic Council (NEC) and also with folks like Sen. [Amy] Klobuchar. There was this sense that one of the ways that we could have accountability in big tech was through things like competition. The U.S. and the EU established what’s called the Trade and Technology Council that was starting to meet (AI was one of the important workstreams there). 

It was also the case that my colleague Tim Wu, who was on the NEC until January of this year, was working on a project called the Declaration for the Future of the Internet, which was an assertion of internet freedoms by 61 nations. And then there were things like the Summit for Democracy; the first one was in November of 2021, and the Blueprint for the AI Bill of Rights was announced as a deliverable for the second Summit for Democracy, which took place earlier this year.

So there was this larger context, this larger cauldron of things happening; and the Blueprint for the AI Bill of Rights was part of this bigger Biden administration strategy. That said, we also came into office with some of the concerns that people had [about the uses of emerging technologies]. Like information integrity, sometimes called misinformation and disinformation; mental health harms that result from people engaging in social media, particularly young people; concerns about facial recognition technologies and their use in surveillance. These are three distinct examples, but what they have in common is the use of AI and algorithmic amplification.

In the social media space, something like YouTube—it is AI and the algorithms that are used in these systems that sometimes make them more pernicious and less helpful. But those processes also serve us sometimes, right? If you think of YouTube, they help us find things that maybe we want to see; things that we might be really into. It was clear that AI was undergirding a lot of both the tremendous, exciting possibilities for work in the space of science and technology policy, and also those three concerns—and others as well. There was almost a year of engagement on these issues that started with that op-ed.

Are the algorithmic processes that you just mentioned the core of how AI operates?

There are different processes, but what AI does in general is use data that you put into a system, and then that system will make a prediction or a decision, perhaps a decision about a consumer choice that might lead to certain kinds of consumer behavior, or output text or images. What we have with the turn to generative AI, or advanced AI, is these processes at scale. It’s both the velocity and the scale that we haven’t experienced before. We have been, for quite a while, in an AI moment, and what we have now is kind of a step change.
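To make that “data in, prediction out” pattern concrete, here is a minimal, purely illustrative sketch in Python. The toy loan-style data, feature names and model choice are assumptions made for this example, not anything Nelson or the Blueprint describes; the point is only that a system trained on past data emits predictions that can shape decisions about people.

```python
# Hypothetical illustration of "data goes in, a prediction comes out."
# The dataset and features are invented for this example.
from sklearn.linear_model import LogisticRegression

# Toy historical records: [annual_income_in_thousands, years_employed],
# labeled 1 if a past loan was repaid, 0 if not.
X = [[42, 1], [85, 6], [31, 0], [120, 10]]
y = [0, 1, 0, 1]

model = LogisticRegression(max_iter=1000)
model.fit(X, y)  # the system learns patterns from the data it is given

# A new applicant goes in; a decision-shaping prediction comes out.
new_applicant = [[60, 3]]
print(model.predict(new_applicant))        # predicted outcome, e.g. [1]
print(model.predict_proba(new_applicant))  # probabilities behind that prediction
```

Generative systems follow roughly the same loop, only at far greater scale: the data going in is an enormous text or image corpus, and the prediction coming out is the next word or pixel, repeated until a whole passage or picture emerges.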

Let’s talk about the AI Bill of Rights. Thinking about it metaphorically, it sounds almost like a founding document. I’m wondering if you can talk about how the public should think about this document?

The “bill of rights” framing is quite deliberate because part of what we were trying to communicate to lots of different stakeholders was that, even as new technologies come [onboard]—even with GPT-4—it doesn’t change our fundamental rights; it doesn’t change U.S. employment law; it doesn’t change U.S. civil rights law; it doesn’t change [privacy law], to the extent that we have federal privacy law.

It was partly to say that technologies change, but conversations in policy circles around AI always come back to wanting to preserve civil liberties and civil rights—and to wanting to advance these technologies while hanging on to democratic values. 

Part of what the document is attempting to do is to say: what does it look like? What does it mean, both on the level of principles and technical practices, that we are being mindful of democratic principles while doing this? The use of the bill of rights as a frame is to say, not that we need another bill of rights, but that the rights that we already have, and the constraints that we have on people violating the law, exist even if we’re talking about, say, quantum computing, a technology that is quite speculative.

Then [the Blueprint for an AI Bill of Rights] had two other purposes. One is educational. The document is long but it is also intended to be read by a general audience. We hope that it is pleasant to read; it’s meant to be illustrative, with lots of examples. It’s meant to be at a reading level that’s not technical, that doesn’t assume an expert. We were anticipating, much like the stakeholders we engaged, a pretty wide readership, from high school students to parents, to policymakers and state and federal legislators. We really imagined that the American public would be reading it, and we wrote it to be read as such.

And it’s also meant to be aspirational. So, this is a new space of technology, even in October of 2021 [when the Blueprint for an AI Bill of Rights consultation process got underway]. What is the world we want to live in with technology? And technologies, whether it’s AI or the Internet of Things or quantum computing, are tools for us to use. And some of the rhetoric around AI and advanced AI suggests that we’re not in control; that humans are not in control of, so to speak, setting the table and setting the values around how the technology is used.

The aspiration here is to say: you should have a right to privacy, and you should not be subject to algorithmic discrimination, and you should have at least the option to get a human being if you are put into an algorithmic system loop, or an AI system loop. These things are hard to do; sometimes they’re more expensive. Sometimes they require that we go back to the woodshed one more time, and do a bit more on the engineering side. And it was intended to be aspirational. Part of what the White House is supposed to do—what the president does—is set a vision for us at our best.

There seem to be two somewhat disparate objections to the pace of AI development at present—one being that the technology will fall into the hands of bad actors (in the op-ed, you allude to how China has used AI-based facial recognition technology to racially profile Uighurs), and the other being that, without the appropriate guardrails, AI could become “superintelligent” and subjugate or even kill human beings. Where do you stand in relation to these potential dangers, and do you lend credence to the latter claims?

I think they’re related. What we’re seeing is a spectrum of risks. Some of those risks were some of the things I talked about that were very much front of mind for us when we were beginning the public engagement process around the AI Bill of Rights—things like algorithmic discrimination; surveillance and facial recognition technologies keeping people away from benefits and services because they’re being tossed out of employment pools, for example; and then, of course, disinformation and misinformation. With generative AI comes concerns about disinformation and misinformation at scale; concerns about copyright, too. There have been conversations for probably just under a decade about job loss and automation, too. You can even think about late 20th century conversations about the robots taking our jobs, and these sorts of things. 

AI is a quite powerful technology, and job loss is important; but what job loss means potentially is a change in social organization. And that’s actually quite a profound future that we need to think about and anticipate—but also one that we have to shape. As much as some of the risks give one pause, we can still shape what this is going to become, and that’s a tremendous opportunity.

AI is at least a dual-use technology, so you’re always worried about adversarial actors, especially when you’re working in government, where part of what we’re supposed to do is keep the nation safe. I would put the two examples that you raise together by saying: you’re worried about bad actors, but you’re also worried about people who just don’t know what they’re doing; someone who is just sort of messing around with a powerful tool and isn’t some autocratic mastermind.

And I think empirically—I am a researcher, among other things—the jury is still out on superintelligence. I don’t think there’s a lot of clarity on what it even might mean, or what the threshold for reaching it would be. I think for me fundamentally, part of the reason I’ve worked in the space of science and technology is that I think the science and technology is cool, and I also believe in the ability of researchers to ultimately understand it and get a handle on it. I personally am not terribly compelled by arguments that say: we can’t understand it, we can’t control it, and it’s all just going to be unleashed on society, etc.

What advice would you give people about how to separate legitimate concerns about the dangers of AI from what could be described as fear mongering?

I will say, as someone who’s been working in the space of tech policy, on the one hand I’ve been really encouraged that we are talking about AI on the nightly news, and that it is having a big moment in the news cycle because it is a big change. Pick up your iPhone and it was already the case that half or more of the apps were using some form of AI; and we now know that companies like Snapchat, Instacart and a few others are leaning in as early adopters of generative AI. It’s already here with us, and so people need to understand what that is.

I also think that the more people are using it and aware of it, from the perspective of caring about democracy, the less we’re giving away to just a few experts the role of making decisions about what these technologies can be. You might not totally get it, and all of us don’t have to get it at the level of an expert, but what I hope this public conversation of the last few months has been doing is helping people have a stake or an opinion.

With regards to your question, it’s hard because you have a lot of really prominent scientists saying a lot of conflicting things. I would say this. For me, when there are very smart people who have done extraordinary things in the world—Turing Award winners and others who’ve created new innovations—I’m a little skeptical of these same people saying, you know, I don’t have the power to control this thing. These are people who have worked on the very edges of human knowledge, and I want to take them as seriously as possible, but I also want to push them to use their innovation and their brilliance to create an optimal future. And moreover, a future in which we all thrive.

Tanner Stening is a Northeastern Global News reporter. Email him at t.stening@northeastern.edu. Follow him on Twitter @tstening90.