Lose your job to AI? Under a new proposal from New York State Assembly member and congressional candidate Alex Bores, that could automatically trigger a cash payout from the federal government.
Bores has unveiled what he calls an “AI Dividend” – a policy framework designed to send direct payments to Americans if rapid advances in artificial intelligence and automation begin to meaningfully erode employment. The idea is to tie a new kind of stimulus to measurable signs that machines are replacing human workers at scale.
He outlined the plan in a post on X, describing the AI Dividend as a contingency program rather than an always-on benefit. Instead of permanent universal checks, the system would activate only when specific economic indicators show that AI-driven automation is starting to push people out of jobs or depress wages.
The policy document highlights a growing anxiety in corporate boardrooms and among economists. “CEOs are openly warning that AI will significantly reduce white-collar employment,” the proposal notes, citing projections that as much as half of existing jobs could, in theory, be automated in the coming years. Entry-level and routine positions – often the first rung on the career ladder – are considered especially exposed.
Under the AI Dividend framework, payments would be triggered by a combination of macroeconomic signals that suggest human labor is losing ground to technology. While the full list of indicators has not been made public in detail, Bores points to factors like persistent declines in labor force participation, falling labor share of income, and sudden spikes in productivity that are not matched by wage gains. The goal is to capture the moment when automation is clearly enriching capital but not workers.
Structurally, the AI Dividend is pitched as a kind of automatic stabilizer for the AI era. When the economy is growing and employment is strong, the system would remain dormant. But if AI adoption surges and job losses follow, a pre-set formula would kick in, sending payments directly to households without needing a fresh act of Congress for each downturn. That design aims to avoid the slow, politically contentious stimulus debates that characterized previous crises.
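The proposal has not published its actual formula or thresholds, but the trigger logic described above can be sketched in code. The indicator names, threshold values, and the two-of-three rule below are all hypothetical illustrations, not part of Bores' plan:

```python
from dataclasses import dataclass

@dataclass
class Indicators:
    """Macro signals of the kind the proposal points to (values illustrative)."""
    participation_drop_pct: float   # year-over-year fall in labor force participation
    labor_share_drop_pct: float     # decline in labor's share of national income
    prod_wage_gap_pct: float        # productivity growth minus wage growth

# Hypothetical thresholds -- the proposal does not specify any numbers.
THRESHOLDS = Indicators(
    participation_drop_pct=1.0,
    labor_share_drop_pct=2.0,
    prod_wage_gap_pct=3.0,
)

def dividend_triggered(current: Indicators, required_signals: int = 2) -> bool:
    """Activate payouts when enough indicators breach their thresholds at once.

    Requiring multiple simultaneous signals is one (assumed) way to
    distinguish AI-driven displacement from an ordinary business-cycle dip.
    """
    breaches = sum([
        current.participation_drop_pct >= THRESHOLDS.participation_drop_pct,
        current.labor_share_drop_pct >= THRESHOLDS.labor_share_drop_pct,
        current.prod_wage_gap_pct >= THRESHOLDS.prod_wage_gap_pct,
    ])
    return breaches >= required_signals
```

The key design property this sketch captures is automaticity: once lawmakers agree on the inputs and thresholds up front, payments flow from data rather than from a new vote in each downturn.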
Bores frames the proposal as a way to balance innovation and social stability. He does not argue for slowing down AI development outright; instead, he suggests that if society allows companies to reap enormous gains from automation, the public should share in that “dividend” when the social costs show up as lost jobs or weaker bargaining power for workers. In that sense, the plan implicitly treats the benefits of AI as a kind of shared national resource.
The AI Dividend also reflects a broader shift in the policy conversation around automation. For years, fears of mass technological unemployment were often dismissed as exaggerated. But the recent explosion of generative AI tools capable of writing, coding, designing, and analyzing has pushed the concern from hypothetical to immediate. White-collar roles in law, finance, customer support, media, and software are now seen as vulnerable, not just factory and warehouse jobs.
Compared with traditional proposals like a universal basic income, Bores’ approach is narrower and more conditional. A UBI would pay everyone, all the time, regardless of economic conditions. The AI Dividend, by contrast, is explicitly tied to measurable harm from automation. That could make it more politically palatable to skeptics worried about permanent, expensive entitlement programs, while still providing a safety net if AI causes sharper disruption than expected.
At the same time, the conditional nature of the program raises practical questions. Policymakers would need to agree on which economic indicators truly reflect AI-driven job loss, rather than normal business cycles or other shocks. There is also a tension between making the triggers strict enough to avoid constant payouts and sensitive enough to respond quickly when workers are genuinely displaced.
If implemented, the AI Dividend could have far-reaching implications beyond simple cash transfers. By guaranteeing some level of support in the face of automation, it might give workers more bargaining power when negotiating with employers adopting AI tools. It could also buy time for people to retrain, upskill, or switch careers without falling into immediate financial crisis.
Critics, however, may argue that payments alone are not enough. Without parallel investments in education, re-skilling programs, and stronger labor protections, the checks could become a band-aid over deeper structural changes in the labor market. There is also the risk that businesses could lean even harder into automation if they believe the state will absorb more of the social fallout.
Supporters of ideas like the AI Dividend counter that waiting for disruption to fully arrive before acting would be irresponsible. Building an automatic, data-driven safety mechanism in advance, they contend, is a way to harness the upside of AI while acknowledging its downside risk. In this view, the policy is less about pessimism and more about risk management in a period of unprecedented technological uncertainty.
The debate around the AI Dividend also taps into a larger question: who should benefit from the productivity windfall AI promises? If a small number of firms and investors capture most of the gains while millions face career instability, political pressure for redistribution is likely to intensify. A structured dividend tied to automation metrics is one emerging attempt to preempt that conflict with a rules-based compromise.
Whether Bores’ proposal gains traction in Congress or remains a talking point in a campaign platform, it underscores how rapidly the economic conversation around AI is changing. What was once a speculative question about future robots is now entering legislative drafts and policy playbooks. As generative AI systems continue to advance, more lawmakers are likely to put forward their own versions of an AI-era social contract – and the concept of an “AI Dividend” may be an early signal of where that conversation is headed.

