Anthropic supply chain risk designation could chill innovation, experts say

Anthropic is the first company in the U.S. to be designated a supply chain risk, an unprecedented move that could change the power balance between Silicon Valley and the federal government.

Defense Secretary Pete Hegseth stands outside the Pentagon during a welcome ceremony for the Japanese defense minister at the Pentagon in Washington, Jan. 15, 2026. (AP Photo/Kevin Wolf, File)

The Pentagon’s designation of the industry-leading AI company Anthropic as a “supply chain risk” suggests that the U.S. government may be using its supply chain authority as leverage in negotiations with U.S. businesses, according to a Northeastern University expert.

For technology companies working with federal agencies, “this creates a new reality,” said Nada Sanders, a professor of supply chain management at Northeastern. 

“This may mean contract terms become less negotiable. That changes the power balance between Silicon Valley and Washington,” she added. 

Anthropic was given the “supply chain risk” designation after failing to reach an agreement with the U.S. Department of War (formerly the U.S. Department of Defense) over a $200 million defense contract. 

The department refused to accept the company’s conditions that its AI models not be used for fully autonomous weapons operating without human intervention or for mass domestic surveillance. It argues that it has the right to use any AI model for any application it deems appropriate as long as it falls under “lawful” use.  

The supply chain risk designation is a significant and unprecedented penalty for an American company, said Sanders.

In the past, the government has given the designation to foreign companies like Chinese technology firms Huawei and ZTE, whose cases “centered around concerns about state influence, data access and critical telecom infrastructure control,” Sanders said. 

“In general terms, when a company is deemed a ‘supply chain risk,’ it typically means governments believe that doing business with that company could create vulnerabilities in national security, critical infrastructure, data integrity or economic stability,” Sanders said. 

But Anthropic is an American company, so the traditional foreign entity framework doesn’t work in this case, Sanders said, and instead the government’s response seems primarily retaliatory in nature. 

“One of the big concerns here is that labeling a U.S. AI company this way — especially in apparent retaliation for its negotiation stance — could put a chill on innovation,” she said. “Companies may hesitate to develop safety or ethical guardrails if doing so risks exclusion from government markets.”

Anthropic has a history with the Pentagon, having been the first AI company to offer its large language models for use on the government’s classified networks. 

But in a statement just a day before the supply chain risk designation, Anthropic CEO Dario Amodei outlined why the company wouldn’t cross its two red lines. 

On the prospect of using Anthropic’s technology for mass domestic surveillance, Amodei wrote that it would be “incompatible with democratic values.” 

In regard to its technology being used for fully autonomous weapons, Amodei stated that today’s frontier AI models — the company’s most advanced and powerful large language models — are “simply not reliable enough” yet.  

In a post on X, Secretary of War Pete Hegseth wrote that no contractor, supplier or partner that works with the U.S. military may use Anthropic’s models and that over the next six months the government would transition away from using the company’s tools. 

President Donald Trump confirmed as much, announcing on his social media platform Truth Social that the government would stop using Anthropic’s models. 

Previously, the department had threatened to use the Defense Production Act to force Anthropic to allow the government to use its technology. The Defense Production Act grants the president the ability to direct private companies in support of national defense, but it remains unclear whether the government will invoke it. 

In response, Anthropic said the Department of War does not have the authority to block organizations that work with the military from doing business with Claude. The designation only applies to Department of War contracts, the company argues. It also says it plans to sue the government over its “supply chain risk” designation.  

Hours after negotiations between Anthropic and the Department of War broke down, OpenAI, another major American AI company, announced that it had signed its own contract with the department. In recent days, however, OpenAI CEO Sam Altman said the deal was, in hindsight, “sloppy” and that he was working to revise it. 

In a message to his staff, Amodei said OpenAI’s deal with the government was “safety theater,” and the Trump Administration was upset with the company because it hadn’t “given dictator-style praise to Trump,” according to a report from The Information. 

Usama Fayyad, senior vice provost for AI and data strategy at Northeastern University, said the U.S. government’s escalation against Anthropic has set a “bad precedent.” 

“It is not clear if it’s legal or will stand, but it will cause major economic, scientific and engineering damage as everyone freezes in fear and the U.S. falls behind other countries pending resolution,” he said. 

By going after Anthropic in this manner, the U.S. government is signaling to other AI companies not to cross it, Fayyad said, “which is not a smart move in a democratic society.” 

And “until resolved, this could cause a huge cratering of Anthropic enterprise business since many other enterprises will hesitate to deal with this ‘scarlet letter’ situation,” he said.