AI Company Anthropic Takes Legal Action Against DOJ Over 'Supply Chain Risk' Label

Mar 10 2026

Artificial intelligence firm Anthropic has initiated legal action against the Trump administration, seeking to overturn the Pentagon's recent classification of the company as a "supply chain risk."

This controversial designation arose after Anthropic declined to allow unrestricted military applications of its technology, particularly its AI chatbot, Claude.

On Monday, Anthropic filed two lawsuits: one in a California federal court and another in the federal appeals court located in Washington, D.C. Each lawsuit addresses different aspects of the Pentagon's decision.

The San Francisco-based company received its risk designation last week following a public dispute regarding the potential military use of its AI technology. The lawsuits seek to annul this designation and halt its enforcement.

A significant conflict over the military's use of artificial intelligence came to light in late February, coinciding with U.S. military actions against Iran. Defense Secretary Pete Hegseth abruptly ended the Pentagon's collaboration with Anthropic, invoking a law aimed at countering foreign supply chain threats to label a domestic company.

Trump and Hegseth have accused the rising AI firm of jeopardizing national security after CEO Dario Amodei stood firm on restricting uses of the company's products, citing concerns that they could facilitate mass surveillance or autonomous weaponry.

In response to Hegseth's unprecedented designation of Anthropic as a supply chain risk, the company pledged to pursue legal action, arguing that this application of law was intended for foreign threats, not American firms.

Anthropic contends that this legally questionable action has "never before publicly applied to an American company," setting a concerning precedent.

The impending legal confrontation could significantly impact the dynamics within Big Tech at a pivotal moment, influencing regulations surrounding military applications of AI and establishing safeguards to prevent technology from endangering human life.
