Future computer scientists urged to embrace AI collaboration
Tina Eliassi-Rad met with students on the Oakland campus to discuss concerns about possible job loss and environmental impacts of AI

OAKLAND, Calif. — When Northeastern University student Duo Xu thinks about her future career in computer science, she wonders about the impact of AI.
Then she heard a promising perspective from one of the leading minds in the field.
“What can I do to prepare, if so many jobs will be occupied by AI?” Xu asked.
Tina Eliassi-Rad, the inaugural Joseph E. Aoun Professor of Computer Science at Northeastern University, had an answer.
“What I see is more human-AI collaboration, as opposed to AI taking over for humans,” replied Eliassi-Rad, who was in Oakland to speak at the campus’s AI Summit, at which discussions focused on how AI is reshaping learning and work.
Economists have shown that AI has not met its promise in terms of productivity, Eliassi-Rad said, adding that true boosts in human effectiveness will come when humans learn to use AI as collaborators.
“I think how to use AI to make yourself more productive is where we’re going,” she said.
Xu was among a group of students who met with Eliassi-Rad, a core faculty member at Northeastern’s Network Science Institute who works at the intersection of artificial intelligence and network science. She talked with students in the Student Union, a cozy wood-paneled room in the center of campus.
A career designing AI agents should involve more than computing challenges, Eliassi-Rad told the students.
“If you are a student and you want to get into AI, you should broaden your horizon and not just think of it as an engineering endeavor,” she said. “The people who are creating these models are the people who have to be involved in mitigating any kind of risk.”

This requires asking questions, from the design stage, about the social impact an AI agent might have, she said, including why a large language model gives particular outputs and how to interpret what those outputs mean. LLMs are only as good as the data used to train them, she said, and they are never objective.
“Perhaps if you’re on the top of the socio-economic system, the model will do well for you,” she said, “but if you are not, then perhaps it wouldn’t do well because it didn’t see as much of your data or your people’s data. The key is to be skeptical. Take the scientific way of thinking and don’t give up your agency.”
Eliassi-Rad also emphasized this point to an audience of faculty and tech professionals attending the AI Summit in Lisser Hall. Scientists and researchers developing AI-integrated tools should focus less on a tool’s predictive accuracy, she said, and more on how the tool is making its predictions and how it is useful.
“As we train them, we should insist on improving our understanding of the world,” she said. “Instead of using them because they make our jobs more efficient, and I’m not against efficiency, we should know what they are doing.”
Students who attended the more intimate session shared concerns about AI eliminating some of the jobs that new college graduates often occupy, and even about the amount of energy required to run AI agents.
“How do you see the risk of environmental impact being addressed?” asked Russell Sample, a first-year computer science major.
“We’re not addressing it,” answered Eliassi-Rad, “and it’s huge.”
But there are small things that individuals can do to reduce the energy burden, she said. Running a query through an AI agent is already energy intensive, and when a person adds "deep research" to a query, the energy required increases because the search involves multiple steps and more computation time, she said.
“Every time you go to any of these LLMs and say ‘deep research,’ just think about the costs of the planet,” she said. “Simple things. Even adding ‘please.’ Don’t say ‘please,’ just give it a command.”
Of all the risks that AI presents, including hallucinations and privacy breaches, environmental impact is at the top of his list of concerns, Sample said after the panel.
“It’s my biggest concern with AI at the moment,” he said. “The data scraping is a problem, but I feel like it’s being addressed a lot more. The environmental issue isn’t being focused on as much.”
Other students were encouraged by what Eliassi-Rad had to say, especially by her forecast that the relationship between humans and AI agents will become more like a partnership.
First-year computer science and communications major Mitya Nigam said that she has been worried about AI agents replacing humans on the job market, but that the idea of a human-AI collaboration is inspiring.
“She had a relatively optimistic view on AI,” Nigam said. “I thought that was really interesting.”