Northeastern AI expert urges businesses to ditch the ‘one-model-fits-all’ approach

Usama Fayyad, senior vice provost for AI and data strategy at Northeastern, says companies should start small when introducing AI into their operations.

Usama Fayyad, senior vice provost for AI and data strategy at Northeastern, speaks at Northeastern’s Roux Institute in Portland, Maine. Photo by Matthew Modoono/Northeastern University

With all the hype around chatbots like ChatGPT, Gemini and Claude, many business leaders feel pressured to quickly overhaul their operations using generative AI.

That would be a mistake, says Usama Fayyad, senior vice provost for AI and data strategy at Northeastern University. 

Instead, it is better for companies to slow down, narrow their focus and start small. 

“It’s the opposite philosophy to what ChatGPT and all the others in generative AI are doing,” Fayyad said last week, speaking at the inaugural AI in Action Business Summit on Northeastern’s campus in Portland, Maine.

“They’re basically saying, ‘We want a single model that is a know-it-all model. It can solve any problem. It speaks all languages. It knows all fields of science.’ That is not the right direction,” he said. “In fact, I believe it is the opposite direction to where we should be headed.” 

The most popular and most heavily promoted chatbots in use today are large language models, so called because they are trained on enormous volumes of data. Those models aren’t ideal for the business world, Fayyad said, because they are bloated and energy-intensive.

It is much more practical for companies to turn to smaller, tailor-made models that are increasingly gaining traction in the AI research space.   

“Small language models provide an amazing solution,” Fayyad added. “They’re efficient, high-speed, private — you don’t send your data anywhere — and you can customize them, which to me is the most important part.”  

“Don’t ask, ‘What is the largest model I can afford to build?’ You should be asking, ‘What is the smallest LLM that I can get away with?’”

But even smaller, tailor-made models are only as good as the data they are trained on. And that has been true since researchers began working on machine learning technologies in the 1940s, Fayyad stressed.

In fact, many of the core technologies behind today’s generative AI tools are similar to those researchers used decades ago. The biggest difference isn’t the algorithms — it’s the massive increase in available data for training models.

“Between the internet and the digital transformation, it’s a completely different world than 70 years ago,” he said. “That’s what made the difference.” 

The companies that are most successful at integrating AI into their operations, therefore, are experts at capturing and acting on data, Fayyad said. They understand that their data sets are their secret sauce.

“Many businesses, I don’t care how small you are, you have the kind of data that an OpenAI, Microsoft or Google would kill to have access to,” he said.

For many workers, generative AI can feel confusing, fast-moving and hard to apply effectively on the job. But to stay competitive, both companies and employees must invest in AI literacy and upskilling, Fayyad said.

“‘Will AI replace my job?’ is the eternal question these days,” he said. “The answer is no, but a human using AI will replace your job. … It’s important to use this stuff.”  

At the same time, it’s important to understand that these technologies can be used nefariously. Fayyad pointed to the proliferation of deepfakes as an example.

“Today, we are creating a world where you cannot trust or believe anything you read, see and hear,” he said. “In this world, where you are so dependent on digital, how can you function when anything you see is no longer trustworthy?” 

To combat these issues, Fayyad highlighted the importance of safety regulations and responsible AI frameworks being developed at universities like Northeastern. 

“We think it’s a huge area to think about — how should we use this technology and where should we take it?”