The CEO of AI company Anthropic, Dario Amodei, has responded to the United States Department of Defense and the White House after military contractors working with the Pentagon were ordered to stop using Anthropic’s products.

In an interview with CBS News on Saturday, Amodei said Anthropic objected to the use of its AI models for mass domestic surveillance and fully autonomous weapons capable of firing without any human input.

He stressed that Anthropic supports most of the US government’s proposed use cases for its AI systems, except for surveillance and fully autonomous weapons platforms. According to Amodei, these issues touch on fundamental American principles, including the right not to be spied on by the government and the right for military officers to make decisions about war themselves, rather than delegating them entirely to machines.

The Defense Department recently labeled Anthropic a “supply chain risk,” effectively barring military contractors from using its products in defense-related work. Amodei described the move as “unprecedented” and “punitive.”

However, he clarified that he is not categorically opposed to the development of fully autonomous weapons in the future, particularly if foreign militaries begin deploying them. For now, he argued, AI technology is not reliable enough to operate autonomously in military settings.

Amodei also said the legal framework has not kept pace with the rapid advancement of AI and called on Congress to establish guardrails to prevent the use of AI in domestic mass surveillance programs.

The designation stems from an order issued on Friday by US Defense Secretary Pete Hegseth, who declared Anthropic a “supply-chain risk to national security” and directed that, effective immediately, no contractor, supplier, or partner doing business with the US military may engage in commercial activity with the company.

Hours later, rival AI firm OpenAI accepted a contract with the Defense Department to deploy its own AI models across military networks.