A United States judge has temporarily blocked the Pentagon’s blacklisting of Anthropic, a significant development in the company’s dispute with the military over AI safety in combat. In its lawsuit, filed in a California federal court, Anthropic contends that U.S. Secretary of War Pete Hegseth exceeded his authority by designating the company a national security supply-chain risk, a label typically reserved for companies that could expose military systems to infiltration or sabotage by adversaries.
Anthropic alleged that the government retaliated against its public stance on AI safety, violating its First Amendment right to free speech, and that it was given no opportunity to challenge the designation, violating its Fifth Amendment right to due process. U.S. District Judge Rita Lin, appointed by former President Joe Biden, sided with Anthropic in a 43-page ruling. The ruling will not take immediate effect, however: the administration has a seven-day window to appeal.
The dispute arose after Anthropic objected to the military using its AI chatbot Claude for domestic surveillance or autonomous weapons. Hegseth’s move, which blocks Anthropic from securing certain military contracts, could cost the company substantial revenue and damage its reputation. Anthropic maintains that AI models are not yet reliable enough for use in autonomous weapons and opposes domestic surveillance as an infringement of civil rights. The Pentagon counters that private companies should not dictate limits on military activities, emphasizing that it would employ such technology only in lawful ways.
Judge Lin’s ruling found that the government’s actions appeared aimed at punishing Anthropic rather than at safeguarding national security, as the administration claimed. Anthropic spokesperson Danielle Cohen welcomed the decision, reiterating the company’s commitment to working with the government to deliver safe and dependable AI technology for the benefit of all Americans.
The designation was unprecedented: Anthropic is the first U.S. company classified as a supply-chain risk under a government-procurement statute intended to protect military systems from foreign sabotage. Anthropic’s lawsuit argued that the decision was unlawful, lacked factual support, and contradicted the military’s earlier praise of Claude.
The Justice Department countered that Anthropic’s refusal to accept contractual terms could inject uncertainty into Pentagon operations that rely on Claude, potentially jeopardizing military systems. The government maintained that the designation rested not on Anthropic’s AI safety stance but on its unwillingness to accept those terms.
Anthropic is also litigating a separate Pentagon supply-chain-risk designation in Washington, one that could exclude it from civilian government contracts.
