President Donald Trump is preparing an executive order that would dramatically reshape how artificial intelligence is regulated across the United States, aiming to strip states of much of their power over the emerging technology and to concentrate authority in Washington.
According to a draft summary circulating among policymakers, the order would override or preempt AI regulations adopted by states such as California, where lawmakers have pushed some of the country’s toughest rules on automated decision-making and AI safety. The intent is to prevent what Trump allies describe as a “patchwork” of conflicting state laws, which companies say makes it difficult to build and deploy AI systems at scale.
Under the draft, Attorney General Pam Bondi would be tapped to lead a new federal task force charged with challenging state-level AI statutes and regulations. The task force would be empowered to evaluate whether particular state measures impose “onerous” or excessive requirements on AI developers, cloud providers, or companies integrating AI into their products and services.
One of the most aggressive tools envisioned in the draft order is financial leverage. States that adopt AI rules the administration deems too burdensome could see certain streams of federal funding restricted or conditioned on rolling back or revising those measures. That approach would mirror tactics used in other policy arenas, where Washington has tied highway or education funds to state compliance with federal priorities.
The order would also direct the Federal Trade Commission to take on a larger role in AI oversight. Rather than creating a bespoke federal AI regulator from scratch, the Trump plan leans on existing consumer protection and competition laws. The FTC would be instructed to issue detailed guidance explaining how current statutes—covering deceptive practices, data privacy, and unfair competition—can be applied to AI products and services in ways that effectively supersede, or “preempt,” many state-level rules.
David Sacks, who has emerged as Trump’s point person on both AI and digital assets, is expected to oversee much of the order’s implementation. Sacks would coordinate among the White House, the Justice Department, the FTC, and other agencies to ensure that industry guidance, enforcement priorities, and legal challenges are aligned with the administration’s broader objective: a single, national AI rulebook written in Washington, not 50 different ones authored by state legislatures.
The text of the executive order is still being refined and could change before it is finalized or made public. But even in draft form, it signals a sharp turn away from the more state-driven, experimental approach that has characterized early AI policymaking in the U.S., especially in tech-heavy jurisdictions like California, New York, and Colorado.
Behind the scenes, the executive order effort is running in parallel with moves on Capitol Hill. Republicans in Congress are weighing whether to attach a temporary moratorium on certain forms of AI regulation to a must-pass defense authorization bill. That moratorium could limit states’ ability to enforce or even enact new AI rules while a federal framework is being developed, further reinforcing Washington’s primacy over the issue.
Supporters of Trump’s approach argue that the U.S. cannot maintain its technological edge if companies must design products to comply with dozens of divergent state rules. They point to examples from data privacy and fintech, where varying state standards have forced businesses to adopt the most restrictive rules nationwide or forgo certain markets altogether. In their view, AI’s strategic importance—for national security, economic competitiveness, and global influence—justifies a more centralized regulatory regime.
Critics, however, see the draft order as a sweeping power grab that risks weakening consumer safeguards and undermining democratic experimentation at the state level. States like California have spent years developing rules on algorithmic transparency, bias audits, and workplace AI monitoring. Civil rights advocates argue that those localized protections often move faster than Washington and respond more directly to residents’ concerns about discrimination, surveillance, and job displacement.
The conflict also reflects a deeper philosophical split over how AI should be governed. One camp favors “light-touch” regulation that prioritizes innovation, framing problematic uses of AI as issues to be handled under existing laws on fraud, discrimination, or safety. The other camp believes AI’s complexity and societal reach demand new, sector-specific rules—especially in areas like healthcare, finance, policing, and employment—where algorithmic decisions can have life-altering consequences.
If Trump’s order moves forward largely as described, it would tilt decisively toward the first camp. By relying on agencies like the FTC and the Justice Department, the administration would effectively say that the legal tools to police AI already exist; what’s needed is coordinated federal enforcement and clear guidance, not a tangle of state-by-state experiments.
For businesses deploying AI, a strong federal preemption regime would likely bring both relief and uncertainty. Relief, because a single national standard is easier to navigate than 50 separate ones. Uncertainty, because the contours of that standard would be shaped not by detailed statutes but by evolving agency interpretations, enforcement actions, and court decisions—often after the fact.
States, in turn, would have to decide whether to challenge the order in court. Attorneys general in blue states have previously sued over federal attempts to preempt local authority on issues ranging from environmental rules to labor standards. Similar litigation over AI is likely if the final order substantially curtails state enforcement powers or ties federal dollars to policy compliance in ways states consider coercive.
There is also a practical question of scope. “AI” is not one technology but a broad category spanning chatbots, recommendation engines, facial recognition, predictive policing tools, credit scoring systems, and more. Any federal attempt to preempt state laws will have to grapple with whether all these uses are treated the same, or whether certain high-risk sectors are carved out for stricter rules—or for continued state involvement.
International dynamics are another factor. While the European Union is moving forward with far-reaching, risk-based AI regulation, the U.S. is still debating whether to mirror any part of that approach. A strong federal preemption model would signal to global partners and competitors that America intends to rely substantially on general consumer and competition law rather than creating a dense web of AI-specific rules. That could make the U.S. more attractive for some types of AI research and deployment, but it might also create friction for companies operating across both U.S. and EU markets.
The emerging role of figures like David Sacks underscores how AI and crypto policy are increasingly intertwined in Washington. Both technologies raise questions about data control, systemic risk, market power, and national security. A consolidated White House strategy that spans AI models, digital infrastructure, and tokenized finance suggests that the administration sees them not as siloed issues, but as components of a broader technological and geopolitical contest.
For everyday users, the outcomes of this fight will influence how transparently AI systems operate, how easily people can contest AI-driven decisions, and how much recourse they have if algorithms cause harm. Whether those rights and safeguards are set mainly in state capitols or in federal agencies will shape the balance of power between citizens, companies, and governments for years to come.
As the Trump team refines the order and negotiators on the Hill explore an AI moratorium tied to the defense bill, the central question is no longer whether AI will be regulated, but who will hold the pen. The answer could lock in a federal-first model of AI governance that sidelines states—or trigger a long legal and political struggle over where the boundaries of that authority should lie.

