Featured
Hurricane Idalia and the Hawaii firestorm were the most notable weather disasters in the U.S. in 2023, but they were far from the only ones.
In 2023, 25 weather or climate disasters caused at least $1 billion in losses and the deaths of 482 people, according to the National Centers for Environmental Information.
In 2024, artificial intelligence should play a bigger role in predicting those events and saving lives, Northeastern University faculty experts predict.
“In the next 12 months, we are going to see more and more efforts where data-driven systems and artificial intelligence come together,” says Auroop R. Ganguly, professor of civil and environmental engineering and director of AI4CaS (AI for Climate and Sustainability) focus area within Northeastern’s Institute for Experiential AI.
For years, scientists have been using climate prediction models based largely on the rules of physics and chemistry to forecast weather patterns, Ganguly says.
Recently, new hybrid-based models have been developed that also take into account machine learning and other generative AI tools. These models in turn have helped climate scientists create even more accurate and precise systems.
That trend will continue in 2024, Ganguly says, as prediction models continue to improve and as the need for accurate climate data becomes more dire.
“AI will be used with our existing knowledge of physics and processes to help us get better at anticipating and preparing for the disasters of the future,” he says.
You don’t have to look far for an example of the impact of these new hybrid based systems. Doctorate students at Northeastern are working with officials from the Tennessee Valley Authority to provide a more accurate hybrid-based flood prediction system than the one they are using that is based solely on physics.
In the next 12 months, we are going to see more and more efforts where data-driven systems and artificial intelligence come together. Auroop R. Ganguly, professor of civil and environmental engineering and director of AI4CaS within Northeastern’s Institute for Experiential AI
In the next 12 months, we are going to see more and more efforts where data-driven systems and artificial intelligence come together.
In addition to helping better predict weather and climate disasters, government regulations on AI will continue to be refined over the next year, says Sina Fazelpour, assistant professor of philosophy and computer science at Northeastern.
“I think one interesting question for the year ahead will be, ‘What will be the regulatory landscape and the shape of the policies that will come in the U.S.?’” he says.
He points to the Biden administration’s recent executive order on regulating AI. Fazelpour expects that some parts of the order will play a larger role in the next 12 months.
“For instance, there are parts of the executive order that have bipartisan support, like creating an AI Safety Institute,” he says. “Now of course, the particular shape of that institute and what comes out of it is completely unclear.”
Fazelpour serves as an AI fellow at the National Institute of Standards and Technology, a division of the U.S. Department of Commerce that develops technological measurements and standards. The executive order specifically mandated that organization to develop new standards around testing and deploying these new technologies.
While the demand for regulations is clear, Fazelpour says there’s much work to be done in actually coming to an agreement on what those regulations should be.
For example, defining best practices around “red teaming” — a term used to describe companies hacking and breaking their software to look for flaws and vulnerabilities — will be one major challenge, he says.
“There’s so much that we need to create in terms of appropriate evaluation tools for AI systems,” Fazelpour says. “Some of these tools will be technological and will require innovation in philosophy and in human and computer interactions.”
President Joe Biden’s executive order called on Congress to quickly pass regulations around AI privacy. Fazelpour is less clear how these efforts will play out.
“Even when there is stable civil support, the particular ways that we will realize and create technological tools and policy regulations remains to be seen,” he says.
Cansu Canca, research associate professor in philosophy and director of Responsible AI Practice at the Institute for Experiential AI, says businesses have certainly been incentivized over the past years to use more AI-based tools.
“I wouldn’t say it’s an unfounded trend,” she says. “There’s good reason to slowly or rapidly, depending on the sector and function of the company. But of course, that goes back to the issue of what are the limitations and when and where they should use it?”
In the next year, she is hopeful that companies will continue to refine the best practices of what it means to use AI responsibly and integrate ethics adequately into the innovation process.
In terms of user behavior, Fazelpour believes consumers in 2024 will continue to become more savvy about how to best use the technology, taking into account system limits and the fact that they “hallucinate.”
“It’s not supposed to be Wikipedia or Google Search,” he says. “It’s something else.”