The Pentagon has formally classified artificial intelligence developer Anthropic as a supply chain risk, a move that immediately restricts the use of its AI technology in U.S. military contracts. This designation, announced on Thursday, effectively prohibits government contractors from integrating Anthropic’s AI platform, Claude, into projects related to the Department of Defense. However, the company’s CEO, Dario Amodei, emphasized that this restriction is narrowly targeted and does not affect the use of Claude in non-military applications or by private sector clients.
This decision comes amid ongoing tensions between Anthropic and the Pentagon, rooted in disagreements over the company’s stringent safeguards limiting certain military applications of its AI. The Defense Department (referred to by some within the company as the “Department of War”) has expressed frustration that these protective measures hinder its operational flexibility. Anthropic has nevertheless maintained its stance, refusing to allow its AI to be used for autonomous weapons systems or mass surveillance, which has remained a significant point of contention.
In his statement, Amodei clarified that the supply chain risk label applies exclusively to the use of Claude within Pentagon contracts and does not extend to all customers who happen to have defense-related agreements. He also revealed that discussions have been ongoing between Anthropic and the Defense Department about how the company might continue collaborating without compromising its ethical safeguards. Nevertheless, Pentagon Chief Technology Officer Emil Michael later stated on social media that no active negotiations are currently underway between the two parties.
The controversy escalated recently when an internal memo from Amodei was leaked, revealing his belief that Pentagon officials’ dissatisfaction with Anthropic partly stemmed from the company’s refusal to offer uncritical praise of President Donald Trump. This disclosure added a political dimension to the dispute and prompted Amodei to apologize after the memo became public. Meanwhile, Anthropic’s investors have been working to manage the fallout from the strained relationship with the Pentagon.
The Pentagon’s designation is particularly notable because it marks a rare instance in which an American technology firm has been publicly labeled a supply chain risk, a designation typically reserved for foreign adversaries such as China’s Huawei. Despite the official restrictions, sources indicate that Anthropic’s AI technology continues to support certain military operations, including intelligence analysis and mission planning related to Iran. This underscores the complex balance between national security interests and ethical considerations in the deployment of advanced AI tools.
Major tech companies involved with Anthropic have responded cautiously to the development. Microsoft, which integrates Anthropic’s Claude into various platforms, confirmed that its legal team has reviewed the Pentagon’s designation and concluded that Anthropic’s products remain accessible to customers outside of Department of Defense contracts. Microsoft affirmed its commitment to continue collaboration with Anthropic on civilian projects. Amazon, another investor and user of Claude, has not yet commented publicly on the situation.
Additionally, Palantir’s Maven Smart System, a software platform that provides intelligence analysis and targeting support to the military, reportedly relies on workflows built with Anthropic’s Claude Code. This highlights how deeply Anthropic’s AI is embedded in U.S. defense technology ecosystems, even as the company faces increasing scrutiny and operational constraints from the government.
Anthropic had distinguished itself among AI firms by actively courting U.S. national security agencies and positioning its technology as a valuable asset for military applications. However, the company’s firm stance on ethical limitations has led to a protracted conflict with the Pentagon over the permissible scope of AI use on the battlefield. The supply chain risk designation formalizes this rift and signals a significant shift in how the U.S. government manages its relationships with domestic AI providers amid growing concerns about security, ethics, and control.
