Anthropic’s Claude Reportedly Helped Power Iran Strikes Just After Trump Ordered U.S. Agencies to Cut Ties
Hours after President Donald Trump instructed federal agencies to begin severing their relationship with Anthropic, the U.S. military allegedly leaned on the company’s flagship AI model, Claude, to help execute a large-scale strike on Iran.
According to people familiar with internal operations, U.S. Central Command had already integrated Claude into its decision-making pipelines and continued to use the system during the operation. The AI reportedly supported intelligence analysis, target selection, and the simulation of potential battle scenarios as the strikes unfolded.
The reliance on Claude stood in direct tension with a presidential directive issued the previous day. Trump had ordered government agencies to stop using Anthropic’s products and initiate a six‑month wind‑down period. The move followed the collapse of negotiations between the company and the Department of Defense over the terms under which U.S. defense and intelligence entities could employ commercially developed artificial intelligence.
Despite that mandate, the Iran mission moved ahead with systems that still had Anthropic’s tools deeply woven into their workflows. Military planners, faced with an immediate operational timeline, chose not to disconnect or replace the AI components that had been used in prior planning and simulations, according to individuals briefed on the process.
Insiders describe Claude as having been embedded in “live” intelligence and simulation environments, making it difficult to simply flip a switch and remove it. Once an AI model underpins the dashboards, pipelines, and analytic tools that commanders rely on, extracting it risks breaking mission‑critical systems at the worst possible moment. That technical and operational reality appears to have collided with the political directive from the White House.
The directive itself was the culmination of a fraught back‑and‑forth over guardrails, access, and oversight. Defense officials had pushed for broad flexibility in how they could deploy commercial AI, ranging from large‑scale data analysis to wargaming, targeting support, and logistics optimization. Anthropic, for its part, has publicly positioned itself as a “safety‑first” AI developer and has sought to put limits on the use of its tools in lethal or highly sensitive military contexts. The breakdown in talks reportedly centered on how far those limits should go, and who would ultimately control how the models were used.
In the meantime, the Pentagon had already incorporated Claude into a growing ecosystem of experimental and semi‑operational tools. Within Central Command, the system was reportedly used to sift through vast volumes of satellite imagery, signals intelligence, and human reporting, surfacing patterns that analysts might otherwise overlook. In the context of the Iran strikes, this meant suggesting likely high‑value targets, highlighting anomalies, and running “what‑if” scenario tests to estimate likely enemy responses.
Military planners did not hand over lethal authority to Claude, according to those familiar with the systems. Human officers retained final say over target lists and strike timing. Claude’s role was described as advisory and analytic, accelerating tasks that would otherwise demand many teams working around the clock. Still, the system’s involvement underscores how rapidly AI has become embedded in real‑world military decision chains, even as policymakers are still debating where to draw the line.
The timing has intensified scrutiny. The fact that a presidential order to cut ties with a specific AI vendor was almost immediately followed by a mission heavily reliant on that vendor’s technology raises thorny questions: When does an operational need justify short‑term departures from emerging policy? Who decides when it is “safe” to switch off an AI system that a command staff has come to depend on? And what happens when those decisions must be made in the fog of a crisis with national‑level stakes?
The episode also exposes a structural problem: once a commercially developed model like Claude is integrated into classified tools and workflows, it effectively becomes part of the defense infrastructure. Replace it too hastily and mission performance could suffer; leave it in place and the government risks operating at odds with its own stated rules. This tension is likely to recur as more AI providers intersect with military and intelligence use cases.
Behind the scenes, lawyers and compliance teams are now forced to grapple with a legal gray zone. A directive to “halt use” can clash with contractual obligations, ongoing classified programs, and safety considerations if removing software mid‑operation undermines readiness. Agencies may attempt to argue that a six‑month phase‑out inherently allows for short‑term exceptions during active missions. Critics, however, are likely to counter that ignoring the spirit of the order from day one signals that enforcement will be lax or symbolic.
The controversy around Anthropic’s tools also feeds into a broader global debate over AI in warfare. Governments are wrestling with how to reap the advantages of machine‑speed analysis and simulation without sliding into automated targeting or eroding human accountability. Some technologists warn that even “advisory” systems can subtly shape decisions by framing options, ranking risks, and influencing how time‑pressed commanders interpret ambiguous data.
In practical terms, military AI adoption tends to move in stages. First, tools are tested in low‑risk environments: logistics planning, back‑office analytics, or red‑team simulations. Next, they are brought closer to the operational edge, helping intelligence officers triage information or explore hypothetical battle plans. Over time, as confidence grows, AI output may become one of the default inputs into real missions, precisely the situation described in the Iran operation. By that point, reversing course requires more than deleting an app; it can mean rebuilding entire workflows.
The clash between Trump’s order and Central Command’s use of Claude is also likely to reverberate across the technology sector. Other AI companies watching the episode will be forced to reassess how they negotiate with defense customers. Some may seek tighter contract language to shield themselves from abrupt political reversals. Others may double down on strict usage policies, wary of their systems being cited in controversial or escalatory military actions.
For the government, the incident is a warning about the risks of policy whiplash in a domain where integration cycles are long but crises can erupt overnight. If agencies are encouraged to adopt AI aggressively and then abruptly told to unwind those deployments, the result can be exactly what appears to have happened here: de facto exceptions that arise because the technology is too deeply baked into live systems to remove on command. That dynamic undermines both the credibility of directives and the predictability that operational planners depend on.
There is also the question of accountability within the chain of command. If an AI system mislabels a target, downplays collateral damage, or overestimates the likelihood of a successful strike, who bears responsibility? The contractor that trained the model? The acquisition officers who approved the tool? The commanders who weighed the AI’s recommendations? Incidents like the Iran strikes will likely intensify calls for precise doctrine spelling out when and how AI can be consulted, logged, audited, and challenged.
Ethically, the use of commercial AI in lethal operations is forcing a reexamination of the relationship between private tech firms and the state’s monopoly on the use of force. Companies that market themselves as “general-purpose” platforms are discovering that once their tools are powerful enough, they will inevitably be pulled into national security roles, whether they actively pursue that market or not. Anthropic’s clash with the Pentagon over acceptable use is a preview of many similar conflicts to come.
Finally, the Iran strikes episode underscores a core paradox of modern AI: the very capabilities that make systems like Claude attractive to militaries (speed, scale, pattern recognition, and scenario simulation) also make them risky to regulate through blunt political decrees. Once embedded, these tools are no longer just software; they are part of the operational nervous system. Untangling them will demand not only technical work but a coherent, long‑term strategy for how democratic governments intend to wield AI in war and peace alike.

