‘The Agentic AI Book’ opens the black box surrounding large language models
Almost anyone today can build an AI agent, but few understand the mechanics of how these systems work. “The Agentic AI Book” aims to fill that knowledge gap.

Ever since OpenAI unveiled ChatGPT more than three years ago, these large language models that can analyze and generate text have been in the spotlight.
Such programs have become steadily easier to build and deploy, and organizations ranging from retail chains like Walmart to the U.S. military, which recently entered into an agreement with OpenAI, have expressed interest in using them.
However, exactly how LLMs work often remains a mystery even to many who build them, because the models are highly complex while the bar for entry into the field is low, says Ryan Rad, professor of computer science at Northeastern University’s Vancouver campus. Because of this knowledge gap, developers produce LLMs, and the AI agents that run on them, that are prone to errors, inefficiencies and hallucinations — instances in which an AI confidently returns an incorrect or fabricated answer.
Rad’s new book — titled “The Agentic AI Book” — focuses on this knowledge gap, providing practical training for all those who are interested in building or fixing AI agents, from foundational concepts to using multiple agents at once.

Ildar Akhmetov, an associate teaching professor of computer science and the director of computing programs at the Vancouver campus, says that as someone who is versed in computer science but “not an AI guy,” he’s the target audience for Rad’s book.
“I can use agents,” he notes, and even orchestrate multiple AI agents together, but he hasn’t yet found a good book “to understand what’s inside, on the conceptual level.”
The books currently on the market are either too dense and require much deeper prerequisite mathematical knowledge than even he has, or are too general and geared toward an audience not fluent in computer science, Akhmetov says. “I don’t need another book about prompting ChatGPT,” he says with a laugh.

A key concept that Rad’s book delves into is the issue of agency. Agency, the ability of something to take action, gets complicated when you start applying it to computers, he says. When a person makes a decision and acts on it, they are exercising their agency within a situation, but when can we say the same about a piece of software?
Rad says that the conversation around AI and LLMs has to start with the foundational question: What is an agent?
Rad thinks that the agency of AI systems should be considered along a spectrum. At level zero sits most traditional computer software, which follows a rigid set of instructions from beginning to end and therefore has no room to interpret anything on its own. At the other end is what he calls level five, in which an AI can “actually generate [its] own output, and execute [its] own output, in a way that sometimes is not predictable.”
A level five agent can build its own custom software tools to achieve its objective without any additional input from a human, Rad explains.
Truly massive or complex tasks, Rad continues, may even require the use of multiple AI agents. For instance, a financial services firm may need one agent to process transactions while another validates them, according to Microsoft.
This is also where developers’ inexperience or undertraining can get them in trouble. Not every agent in a multi-agent system needs to be level five, for example; sometimes a simple level one agent is all that’s needed to assist a level four or five agent.
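To make the idea concrete, here is a minimal sketch of that division of labor, in plain Python. The class names, rules and transaction fields are hypothetical illustrations, not code from Rad’s book: a higher-agency “processor” decides how to handle each transaction, while a simple rule-based validator, playing the role of a low-agency level one agent, double-checks its output.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account: str
    amount: float

class ProcessorAgent:
    """Hypothetical higher-agency agent: decides how each transaction is handled.
    In a real system this decision might come from an LLM; here it is a stub."""
    def process(self, tx: Transaction) -> dict:
        action = "flag_for_review" if tx.amount > 10_000 else "post"
        return {"account": tx.account, "amount": tx.amount, "action": action}

class ValidatorAgent:
    """Hypothetical low-agency ('level one'-style) agent: applies fixed checks,
    with no room for interpretation."""
    def validate(self, result: dict) -> bool:
        return result["amount"] > 0 and result["action"] in {"post", "flag_for_review"}

# Pipeline: the validator vets every decision the processor makes.
processor, validator = ProcessorAgent(), ValidatorAgent()
transactions = [Transaction("A-1", 250.0), Transaction("B-2", 50_000.0)]
results = [processor.process(tx) for tx in transactions]
approved = [r for r in results if validator.validate(r)]
```

The point of the sketch is the shape, not the rules: only the processor needs real decision-making capability, so only it would carry the cost and unpredictability of a high-agency model.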
When an agent is built with more capability than its users need, power usage can rise dramatically, Rad says, which both costs the organization running it more money and further taxes a power grid already straining under the explosion of AI data centers.
Artificial intelligence data centers consume massive amounts of electricity, with one report from the International Energy Agency estimating that the average center “consumes as much electricity as 100,000 households.”
The training provided by “The Agentic AI Book” could help designers and software engineers mitigate some of these effects by building more efficient AI, Rad says.
“The Agentic AI Book” could also help its readers prevent common cyberattacks and diagnose the causes behind AI hallucinations.
Notably, Rad designed the book’s release schedule to keep its contents relevant in a rapidly changing industry: it is being released chapter by chapter, starting in January, with the final chapter scheduled to drop in July, when the complete physical edition also comes out. Both Rad and Akhmetov note that in the two to three years it typically takes to write and publish a book, the fast-moving world of AI and LLMs would usually have rendered it irrelevant.
By the time the final chapter comes out, updates to the first can begin, Rad notes, and the updated chapters will remain available to purchasers through his website.
Akhmetov says he has long suspected that Rad would produce a book valuable to the field, adding with a grin that he is reading the first chapter now, “while it’s still up to date.”










