A U.S. appeals court has declined to temporarily block the Pentagon’s decision to blacklist Anthropic, marking a partial legal victory for the Trump administration amid an ongoing dispute over artificial intelligence policy and national security.

The ruling by the U.S. Court of Appeals for the District of Columbia Circuit allows the designation to remain in place while litigation continues. Though not a final judgment, it means Anthropic will remain barred from certain government contracts for now.

The U.S. Department of Defense classified Anthropic as a “national security supply-chain risk” after the company refused to modify safety restrictions on its AI system, Claude, particularly regarding potential military applications such as surveillance or autonomous weapons.

Anthropic argues that the designation is unlawful and retaliatory, claiming it violates constitutional protections including free speech and due process. The company has warned the move could result in billions of dollars in lost business and long-term reputational damage.

The legal situation remains complex. In a separate case, a California federal court previously blocked a related Pentagon order, suggesting the government may have acted improperly. The current case, however, involves a broader legal framework under which restrictions could extend beyond military contracts to government procurement more widely.

U.S. officials maintain that the decision is based on operational concerns rather than the company’s stance on AI ethics. According to the Justice Department, Anthropic’s refusal to adjust its system could create uncertainty in military deployments and potentially disrupt critical operations.

The case represents a rare instance of a U.S. technology firm being formally designated as a supply-chain risk, highlighting growing tensions between national security priorities and private-sector AI governance.