A federal judge has blocked the Pentagon from branding Anthropic as a national security and supply-chain threat, finding that the military's campaign against the AI company violated Anthropic's First Amendment and due process rights.

What happened

- U.S. District Judge Rita Lin (Northern District of California) issued a preliminary injunction after a brief hearing, saying the government's actions amounted to impermissible retaliation. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," Judge Lin wrote.
- The ruling follows an internal record that critics say undercuts the Pentagon's case. Andrew Rossow, CEO of AR Media Consulting, told Decrypt the designation was "triggered by press conduct, not a security analysis" and called the government's motive "retaliation."

The dispute in brief

- In July 2025, the Department of War's Chief Digital and Artificial Intelligence Office awarded Anthropic a two-year, $200 million contract to deploy its Claude model on the GenAI.Mil platform. Negotiations collapsed when Anthropic insisted on two usage limits: that Claude not be used for mass surveillance of Americans or for lethal autonomous weapons, arguing the model was not safe for those applications.
- At a February 24 meeting, Secretary of War Pete Hegseth demanded Anthropic drop those restrictions by February 27 or face an immediate supply-chain designation. Anthropic refused.
- On February 27, President Trump posted on Truth Social directing federal agencies to "immediately cease" using Anthropic's technology and labeled the company a "radical left, woke company." Shortly after, Hegseth called Anthropic's stance a "master class in arrogance and betrayal" and ordered defense contractors not to do commercial work with Anthropic.
- A formal supply-chain designation letter followed on March 3.
- Anthropic sued on March 9, alleging First Amendment retaliation, due process violations, and violations of the Administrative Procedure Act.

The court's order and immediate effects

- Judge Lin's order, stayed for seven days and requiring a compliance report by April 6, blocks the three government actions and restores the status quo ante (the situation before February 27). She wrote that "punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation."
- The "supply chain risk" label has historically been reserved for foreign intelligence services, terrorist groups, and other hostile actors; it had never before been applied to a domestic company. In the weeks after the label was threatened, defense contractors reportedly began reassessing or terminating relationships with Anthropic.

Reactions and implications

- Critics say the government's internal paperwork essentially documented its motive, making the case vulnerable. Rossow described the move as "weaponization" of the supply-chain statute and warned that accepting the government's theory would set a dangerous precedent: private firms could be blacklisted for adopting safety policies the government dislikes "before any harm occurs," without due process.
- Others see a different, potentially constructive effect. Pichapen Prateepavanich, founder of infrastructure firm Gather Beyond, told Decrypt the ruling could push AI vendors to formalize ethical guardrails when working with governments, and that it shows companies can set clear usage limits without automatically triggering punitive regulatory action, though she added that the broader tension between safety priorities and government demands remains.

Why crypto readers should care

- The case sets an important precedent about government leverage over private tech vendors and the limits on labeling companies as security risks.
- For crypto and blockchain infrastructure providers (custodians, oracles, node operators, and others that often sit between private actors and government contracts), the ruling signals that political or contractual disputes can't easily be escalated into blacklisting without running afoul of constitutional protections.
- It also highlights how safety or ethical guardrails demanded by providers could become flashpoints in procurement talks.

Bottom line

Judge Lin's injunction restores Anthropic's position for now and curtails a government action that had no prior domestic analogue. The ruling may curb federal agencies' ability to use supply-chain designations as a punitive tool in contractual disputes, but it also underscores the unresolved tension between national-security demands and private-sector safety policies, a tension that will matter for AI vendors and infrastructure providers across tech and crypto.