A prominent coalition of technology giants, including key backers of the artificial intelligence startup Anthropic such as Amazon and Nvidia, voiced serious concerns on Wednesday over the Pentagon's push to label the AI firm a supply-chain risk. The prospect of that designation has sparked a flurry of activity among investors and partners aiming to contain the fallout from the escalating dispute between Anthropic and the U.S. Department of Defense.
The Information Technology Industry Council, an influential industry group that counts Nvidia, Amazon, Apple, and OpenAI among its members, released a letter expressing unease about the Defense Department’s consideration of imposing such a risk classification. Although the letter did not explicitly mention Anthropic by name, it clearly referenced the ongoing procurement conflict that has drawn significant attention within the tech and defense sectors.
In recent days, Anthropic’s CEO Dario Amodei has engaged in discussions with several of the company’s major investors and strategic partners, including Amazon’s CEO Andy Jassy. Venture capital firms like Lightspeed and Iconiq have also been actively communicating with Anthropic’s leadership, exploring possible avenues to resolve the tensions. These investors are reportedly reaching out to their networks, including contacts within the Trump administration, in hopes of de-escalating the situation and preventing a complete ban on Anthropic’s AI technologies from all Pentagon contractors.
Despite the friction, dialogue between Anthropic and the Pentagon continues, although the precise details of these conversations remain undisclosed. President Donald Trump has publicly urged Anthropic to assist the government in phasing out its existing AI systems, adding another layer of complexity to the negotiations. The Pentagon, for its part, has declined to comment on the matter.
The dispute between Anthropic and the Defense Department, which was renamed the Department of War under the Trump administration, has been simmering for several months. At the heart of the conflict lies a fundamental disagreement over the extent to which AI companies should control how their technology is deployed, especially in military contexts. The Pentagon has pushed for AI firms to abandon restrictive conditions in favor of a broad lawful use policy. However, Anthropic has stood firm on its prohibitions against using its Claude AI system for autonomous weapons and mass surveillance within the United States.
Anthropic was among the first AI companies to handle classified information through a cloud supply agreement with Amazon. OpenAI, another major player in the AI field, announced last Friday that it had secured its own classified contract with the Pentagon and argued that Anthropic should not be deemed a supply-chain risk. Connie LaRossa, OpenAI’s national security policy lead, emphasized that both companies share similar ethical boundaries, including rejecting domestic surveillance and autonomous weapons applications. She also revealed efforts to have the Pentagon rescind the risk designation against Anthropic, underscoring the importance of supporting U.S.-based AI innovators.
Investor conversations with Anthropic’s executives have reaffirmed strong backing for the San Francisco-based startup, while simultaneously stressing the need to find a workable solution with the Pentagon. Some investors expressed frustration over CEO Amodei’s approach, suggesting that his handling of the situation has been more confrontational than diplomatic. One insider described the issue as a combination of ego and a lack of political finesse. At the same time, Amodei faces the challenge of maintaining support from employees and customers who admire his principled stance, making any concession to the government a delicate matter.
Amodei has publicly stated that Anthropic cannot, in good conscience, comply with the Pentagon’s demands. Nevertheless, he assured investors that the company remains committed to seeking a resolution. The primary concern among investors is to prevent the official supply-chain risk label, which could severely restrict Anthropic’s ability to sell its products to business clients, including government contractors. Demand for Anthropic’s AI tools, such as the Claude chatbot and Claude Code coding assistant, has surged recently, with Claude topping the Apple App Store’s free app downloads on Monday, surpassing even OpenAI’s ChatGPT.
Defense Secretary Pete Hegseth has warned that if the supply-chain risk designation is applied, all government contractors would be prohibited from using Anthropic’s technology in any capacity. Anthropic has contested these claims, arguing that Hegseth lacks the legal authority to block the use of its AI outside of defense contracts. The Pentagon has not responded to requests for comment on this dispute. Furthermore, Anthropic has declared its intention to legally challenge any supply-chain risk designation imposed by the government.
Despite these assurances, some investors worry that the ongoing conflict could deter prospective customers who want to avoid entanglement in political controversies. These concerns come at a critical juncture for Anthropic, which has raised tens of billions of dollars based on optimistic projections for its enterprise sales. Approximately 80 percent of the company’s revenue is generated from business clients, underscoring the importance of maintaining strong commercial relationships.
Anthropic is currently facilitating secondary share sales for employees to investors, although no final decision has been made regarding an initial public offering. The company’s annualized revenue run rate has recently climbed to around $19 billion, up from $14 billion just weeks prior, highlighting rapid growth. Meanwhile, several U.S. government agencies have begun phasing out Anthropic’s technology, with the State Department switching to OpenAI following the Trump administration’s directive to discontinue Anthropic’s use within six months.
