The United States military reportedly used Claude, the artificial intelligence system developed by Anthropic, during recent strikes targeting Iran. The deployment occurred even after President Donald Trump issued a directive halting the use of Claude and other Anthropic technologies across federal agencies. The decision to proceed with the AI tool in critical military operations underscores the challenges of integrating advanced AI systems into national defense frameworks.
During the weekend’s coordinated air campaign, conducted in close collaboration with Israeli forces, Claude reportedly played a significant role across operational tasks: analyzing large volumes of intelligence data, pinpointing strategic targets, and running combat simulations to optimize mission planning. Such functions are integral to modern warfare, where rapid data processing and predictive modeling can significantly influence the outcome of an engagement.
President Trump’s order to discontinue the use of Anthropic’s AI technologies came just hours before the airstrikes commenced. He publicly labeled Anthropic a potential security threat and instructed all federal departments to stop using the company’s AI tools immediately. Recognizing how deeply Claude was integrated into defense systems, however, the Department of Defense was granted a grace period of up to six months to phase out the technology. This transitional allowance reflects the practical difficulty of swiftly removing AI systems embedded in critical military infrastructure.
The ban was part of a broader dispute between the Pentagon and Anthropic. The Department of Defense had demanded unrestricted access to Claude’s capabilities, including for sensitive applications such as surveillance and autonomous weapons development. Anthropic resisted these demands, citing ethical considerations and adherence to its own terms of service, which prohibit certain uses of its AI models. This impasse led to a public confrontation between the AI firm and senior US government officials, highlighting the growing tensions over ethical AI deployment in defense contexts.
Amid this controversy, OpenAI, a leading competitor in the AI industry, announced it had secured a contract with the Pentagon to provide its technology for classified defense networks. This move positions OpenAI as a potential alternative to Anthropic’s services, signaling a shift in the military’s AI partnerships. The evolving landscape of AI adoption in national security continues to raise important questions about control, ethics, and the future role of artificial intelligence in warfare.