Trump orders US agencies to drop Anthropic AI after Pentagon military clash

Trump has ordered every U.S. federal agency to cut ties with Anthropic’s AI tools, dramatically escalating a clash between the company and the Pentagon over how artificial intelligence can be used in military operations.

In a post on Truth Social on Friday, the president said government departments must “immediately cease” using Anthropic’s products, granting a six‑month window for agencies that already rely on the company’s technology to phase it out. According to Trump, no exception will be made for defense or intelligence agencies.

“The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” Trump wrote, arguing that decisions about battlefield tactics and tools “belong to your commander‑in‑chief and the tremendous leaders I appoint to run our military.”

The move came one day after Anthropic rejected a Pentagon request to weaken or remove “safeguards” in its flagship Claude model. Those protections are designed to stop the AI system from being used for “mass domestic surveillance” or for “fully autonomous targeting” and other lethal military applications, according to the government’s characterization of the dispute.

Anthropic, founded by former OpenAI researchers, has positioned itself as a safety‑first AI lab. Its public policies emphasize “constitutional” guardrails intended to limit harmful or abusive uses of its models. In the confrontation with the Pentagon, the company is understood to have held the line on its commitment not to support certain kinds of surveillance and weapons‑related deployments, even when pressed by the U.S. military.

Trump’s directive effectively turns that stance into a red line. By framing Anthropic as a “radical left” and “woke” actor, the administration is signaling that alignment and safety constraints seen as politically or ideologically motivated will be treated as grounds for exclusion from federal work, especially in defense-related contexts.

The order is poised to disrupt ongoing and planned contracts across the federal government. Agencies that have integrated Anthropic’s Claude into internal workflows (for drafting documents, analyzing large datasets, assisting with research, or supporting customer-facing chat systems) will now have to identify alternative vendors, migrate systems, and complete security reviews within six months. That timeline is aggressive for large bureaucracies, particularly in areas like defense, intelligence, and healthcare, where data sensitivity and compliance requirements are strict.

Defense technologists and procurement officials now face a complicated balancing act. On one side is Trump’s insistence that the military must not “take orders” from AI companies about what is or is not permissible in warfare. On the other is a growing consensus among AI researchers and some national‑security experts that unconstrained deployment of advanced models in surveillance, targeting, and cyber operations could introduce serious risks, including escalation, misuse by insiders, and unintended collateral harm.

The Pentagon has in recent years promoted a framework for “responsible AI,” with principles around reliability, traceability, and human oversight. Anthropic’s refusal to loosen Claude’s guardrails effectively created a test case for how far contractors can go in imposing their own safety norms when they conflict with operational demands from military customers. Trump’s response suggests that, under his administration, such resistance will carry heavy commercial and political consequences.

For the broader AI industry, the episode highlights a looming fault line: companies that hard‑code strong limits against military and surveillance use, versus those that are willing to tailor models more closely to government requirements. Startups and major cloud providers alike may now feel pressure to clarify where they stand on enabling offensive cyber tools, battlefield decision support, and large‑scale monitoring of communications, knowing those positions could either unlock or shut down access to lucrative federal contracts.

Inside agencies, the practical impact will vary. Departments that have only experimented with Claude in pilots or noncritical tools can likely switch to other large language models with relatively low friction. But units that built workflows, fine‑tuned models, or integrated Anthropic’s APIs deeply into their infrastructure will have to undertake significant technical work: exporting and transforming data, retraining or re‑prompting alternative systems, re‑validating outputs for accuracy and bias, and re‑certifying security and compliance.

Privacy and civil‑liberties advocates are likely to view the clash through a different lens. From their perspective, Anthropic’s refusal to support “mass domestic surveillance” is not ideological “wokeness” but a baseline ethical constraint. The president’s insistence that no AI vendor should be able to draw that line could be read as an attempt to clear the way for more expansive monitoring capabilities, with fewer built‑in checks coming from private‑sector partners.

There are also legal questions in the background. While the president has broad authority over federal procurement and defense policy, individual contracts are governed by complex regulations and, in some cases, explicit commitments about how technologies may and may not be used. If any existing agreements with Anthropic embed usage‑restriction clauses aligned with the company’s safety policies, attempts to override them could spark disputes over contract termination, penalties, or future liability.

Geopolitically, the decision may reverberate beyond U.S. borders. Allies that collaborate closely with the United States on intelligence sharing and defense technology are watching how Washington handles AI governance inside the military. Some partner nations have adopted stricter stances on autonomous weapons and pervasive surveillance, closer to the position Anthropic is reportedly defending. A hard break from those norms by the U.S. could complicate joint projects or invite calls for stronger international constraints on military AI.

For Anthropic itself, being blacklisted across the federal government is both a commercial setback and a branding moment. Losing access to U.S. public-sector contracts and defense budgets is costly, but the confrontation also sharpens the company’s identity as a firm willing to walk away from government money rather than relax its safety posture. That stance could resonate with corporate customers, researchers, and policy advocates who prioritize ethical constraints on AI use, while simultaneously making Anthropic a political lightning rod.

In the coming months, attention will focus on three fronts: how quickly agencies manage the technical and logistical challenge of removing Anthropic tools; which rival AI vendors step in to fill the gap; and whether Anthropic’s firm line on military safeguards inspires other companies to adopt similar commitments, or convinces them to quietly move in the opposite direction to avoid the same fate.