
How should AI be regulated? Northeastern expert explains what safety guardrails are needed in the tech industry

California Gov. Gavin Newsom vetoed a proposed AI safety bill over the weekend. Sept. 25, 2024. AP Photo/Eric Thayer

Over the weekend, California Gov. Gavin Newsom vetoed what many believed would have been a landmark bill in establishing safety guidelines for companies developing artificial intelligence technologies.   

The bill, SB-1047, called for the establishment of a new state entity known as the Board of Frontier Models and would have required companies to follow new safety standards and make certain disclosures and stipulations before and during the development of their AI models. 

Introduced by state Sen. Scott Wiener, the bill drew the ire of the tech industry, with major players like Google, OpenAI, Meta and Microsoft speaking out against the legislation, saying it would stifle innovation.   

In a letter to the state Senate last weekend detailing his decision, Newsom outlined several issues he saw with the bill, including that it focused only on expensive large-scale models without considering smaller, specialized models.

Additionally, the bill did not take into account whether these AI systems are deployed in high-risk environments or use sensitive data, and instead applied “stringent standards to even the most basic functions — so long as a large system deploys it,” he wrote.

Northeastern Global News caught up with Usama Fayyad, executive director of Northeastern University’s Institute for Experiential AI, to get his thoughts on the bill and what advice he has for lawmakers looking to develop guardrails on AI technologies. 

His comments have been edited for brevity and clarity. 

Usama Fayyad, executive director of Northeastern University’s Institute for Experiential AI, said the proposed bill focused too closely on large models and laid much of the blame on developers. Photo by Matthew Modoono/Northeastern University

Do you think Newsom’s issues with the bill were justified?

They are politically astute comments because they are correct. I’m not particularly a fan of restricting based on how much money was spent to develop the model. And he’s correct that blocking these large-scale systems might give a false sense of security for some people. Because these measures wouldn’t restrict the wrong stuff, just some stuff. People will then say, “Oh we have safety guidelines and guardrails. We can check the box and move on.”

A lot of the really damaging stuff is likely happening with the smaller models that are specialized: models for deepfakes, or models specialized for email spam or misinformation. You don’t need a large language model to create misinformation, and that to me is the biggest threat.

His comments are technically correct and defensible, and he seems to imply that he would support more properly oriented legislation. Now whether that’s true or not, I don’t know, but it’s definitely needed.

What problems did you see with the bill?

No. 1 is tying the threshold to how much money was spent on the model. You could argue that most models today, even the big ones, don’t meet that threshold.

The other one is that it places the blame on developers, so I’m worried about what it would do to the open source movement and community. I believe holding a developer liable for a technology (they made) almost never makes sense — this may be a controversial statement. Let’s say you have a developer who works on developing a better disk drive or better storage technology. They have no control over what the storage technology is going to store. Is it going to store good things? Is it going to store evil things? How the technology is used is a whole different matter in itself.

That’s where it becomes very iffy in terms of going after developers. You want a healthy open source movement. You want a healthy development environment. You want a healthy research community. 

As an AI expert who understands how these technologies work, what approach should lawmakers take in developing AI safety legislation that is more comprehensive and realistic? 

Defining liabilities and the liable entities is very important. If you put something out there and you make it available for use, you should define who’s going to be liable, and for what. 

Again, we have to be super careful here. 

Those liability laws work well in many cases. If a car manufacturer puts out a car and we identify who profits from it, then when a certain component explodes or the car fails to brake, we know who to go after. That’s an example of stuff done right, because it gives those companies incentives to be super careful around areas like safety.

Now there are bad examples. Medical liability is one that kind of went to the extreme because it wasn’t exactly designed right. It insists that you typically must identify a single physician who is going to be liable. The truth is that in health care it’s a whole team, and that team can include you yourself, because you might be taking stuff that’s harmful to you and contraindicated.

The second area is usage. I’ll go back to cars, my favorite example. There are such things as traffic laws. There are fines associated with traffic laws, and they go all the way up to imprisonment, depending on the seriousness of the violation. You decided to use the technology in a way that society does not accept, or you broke the rules that go along with the license to use it. That’s why driver’s licenses are a good idea. Those kinds of restrictions make a lot of sense.

The third one is ingredients. Was the data used to train the models obtained legally or illegally? Does the data constitute an intrusion on people’s rights? Do you have full rights to the data? … Data is a huge component here and probably the most important bit in the equation. The algorithms are trivial compared to the data. 

In his letter, Gov. Newsom said California is home to 32 of the world’s 50 leading AI companies. Last month, Newsom signed into law a number of other AI bills related to deepfakes, watermarking AI-generated content, and protecting the digital likeness of performers. What role do you think California lawmakers play in setting the standard for the rest of the country and the world?

California has significance in two ways. 

Number one, it is one of the largest states and typically one of the most progressive ones, so they are willing to try things out and be ahead of the curve. They’ve had an impact on cars, environmental restrictions, and health care.

There is no doubt whatsoever that in the world of AI and technology relating to AI, Silicon Valley is at the center, and, therefore, anything you do in California is going to have both national and international consequences.