The United States military reportedly deployed Claude, the artificial intelligence system developed by Anthropic, during recent airstrikes targeting Iran. The move directly contradicts an order issued by President Donald Trump mandating an immediate halt to the use of Anthropic’s AI tools across federal agencies. The decision to proceed with Claude’s involvement in the operation underscores the competing interests and challenges surrounding AI integration within national defense.
The weekend military campaign, conducted in collaboration with Israeli forces, saw Claude AI playing a significant role in various critical tasks. These included analyzing intelligence data, pinpointing strategic targets, and running sophisticated combat simulations to support operational planning. Such functions are deeply embedded in the military’s decision-making processes, demonstrating the increasing reliance on advanced AI technologies to enhance battlefield effectiveness and situational awareness.
Just hours before the airstrikes commenced, President Trump publicly announced a directive requiring all federal departments to immediately discontinue the use of Claude and other AI products developed by Anthropic, citing serious security concerns and labeling the company’s technology as a potential risk to national security. However, because Claude is so deeply integrated into defense systems, the Department of Defense was granted a grace period of up to six months to phase out the AI tool, allowing time to transition to alternative solutions without disrupting ongoing operations.
This presidential ban was part of a broader conflict between the Pentagon and Anthropic. The dispute arose after Anthropic refused to grant the Department of Defense unrestricted access to Claude, particularly for uses involving surveillance and autonomous weaponry. The company’s stance was rooted in ethical considerations and strict adherence to its terms of service, which prohibit certain applications of its AI technology. This principled resistance led to a highly publicized confrontation between Anthropic’s leadership and senior US government officials, highlighting the ethical dilemmas posed by AI deployment in military contexts.
Amid this controversy, OpenAI, a rival artificial intelligence firm, announced it had secured a contract with the Pentagon to provide its own AI technology for classified defense networks. This move positions OpenAI as a potential successor to Anthropic in supplying AI solutions tailored for military use. The shift signals a competitive landscape in the defense AI sector, where companies balance innovation, ethical responsibility, and government demands.
The use of Claude AI in the Iran strikes despite the presidential ban illustrates the ongoing tension between national security imperatives and ethical governance of emerging technologies. As AI continues to play an increasingly pivotal role in military operations, the debate over control, transparency, and responsible use remains a critical issue for policymakers and industry leaders alike.