X halts revenue for creators who post undisclosed AI-generated war footage
X is tightening the rules around how creators can make money on the platform, explicitly targeting undisclosed AI-generated videos related to war and armed conflict. Under an updated version of its Creator Revenue Sharing policy, users who upload synthetic war footage without labeling it as AI-made will temporarily lose access to monetization.
Head of Product Nikita Bier announced that the change takes effect immediately. According to Bier, any creator who posts AI-generated videos depicting combat, military operations, strikes, or other war-related scenes and fails to clearly mark them as artificial will be suspended from the revenue-sharing program for 90 days. In other words, these creators can keep their accounts, but they will stop earning money from their posts during that period.
The platform is also introducing escalating consequences for repeat offenders. Creators who repeatedly publish unlabeled AI war content risk being permanently removed from the monetization program, losing the ability to earn from their audience and engagement on X altogether. This shifts the focus from traditional account bans to cutting off the financial upside of spreading misleading or synthetic war imagery.
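The escalation described resembles a simple strike-based policy. Below is a minimal sketch of how such tiered enforcement might be modeled; the class, field names, and strike threshold are illustrative assumptions, since the announcement specifies a 90-day suspension for a first offense and permanent removal for repeat offenders, but not the exact number of violations that triggers removal.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical model of the tiered enforcement described in the announcement.
SUSPENSION_DAYS = 90
PERMANENT_REMOVAL_THRESHOLD = 3  # assumed; X has not published this number

@dataclass
class CreatorMonetizationState:
    creator_id: str
    strikes: int = 0
    suspended_until: datetime | None = None
    permanently_removed: bool = False

    def record_violation(self, now: datetime) -> str:
        """Apply one undisclosed-AI-war-content violation and return the outcome."""
        if self.permanently_removed:
            return "already permanently removed"
        self.strikes += 1
        if self.strikes >= PERMANENT_REMOVAL_THRESHOLD:
            self.permanently_removed = True
            return "permanently removed from revenue sharing"
        self.suspended_until = now + timedelta(days=SUSPENSION_DAYS)
        return f"monetization suspended until {self.suspended_until:%Y-%m-%d}"

state = CreatorMonetizationState(creator_id="creator_123")
print(state.record_violation(datetime(2025, 1, 15)))  # first strike: 90-day suspension
```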
Bier framed the move as a response to a dangerous new media environment, where generative AI tools can quickly produce highly realistic but entirely fabricated battlefield scenes, explosions, troop movements, or “on-the-ground” clips. During times of war, he emphasized, the public’s access to accurate, verifiable information is especially crucial. When powerful AI systems make it trivial to fabricate convincing footage, the risk of confusion, manipulation, and narrative warfare dramatically increases.
Under the updated rules, enforcement can be triggered in two primary ways. First, a user-added note identifying a post as AI-generated is enough to prompt a review. Second, X will rely on technical signals, such as file metadata and artifacts left behind by generative models, to determine whether a video was likely produced or heavily altered with AI. If the platform concludes that a war-related clip is synthetic and lacks proper disclosure, monetization penalties follow.
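As a rough illustration of how these two trigger paths might combine, consider the sketch below. The signal names and the detection threshold are hypothetical; the announcement describes the inputs (a user note flagging synthetic content, file metadata, model-detection output) but not how X actually weighs them.

```python
from dataclasses import dataclass

# Hypothetical signals; field names and the 0.8 threshold are illustrative
# assumptions, not X's published criteria.
@dataclass
class PostSignals:
    is_war_related: bool          # topic classifier output (assumed)
    user_note_flags_ai: bool      # e.g. a user-added note marking it synthetic
    metadata_indicates_ai: bool   # e.g. generator tags in file metadata
    model_detection_score: float  # forensic detector confidence, 0.0-1.0

DETECTION_THRESHOLD = 0.8  # assumed

def should_review_for_disclosure(p: PostSignals, creator_disclosed_ai: bool) -> bool:
    """Return True if the post should enter review for a possible
    monetization penalty under the disclosure rule."""
    if not p.is_war_related or creator_disclosed_ai:
        return False  # out of scope, or already properly labeled
    # Trigger path 1: a user-added note identifies the content as synthetic.
    if p.user_note_flags_ai:
        return True
    # Trigger path 2: technical signals suggest AI generation or heavy alteration.
    return p.metadata_indicates_ai or p.model_detection_score >= DETECTION_THRESHOLD

signals = PostSignals(True, False, True, 0.4)
print(should_review_for_disclosure(signals, creator_disclosed_ai=False))  # True
```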
X has signaled that it plans to keep refining these detection methods. As AI content becomes more photorealistic and harder to distinguish from real footage, platforms are racing to improve their ability to flag manipulated media at scale. X positions this policy as part of a broader effort to sustain user trust, especially when posts about wars, uprisings, or strikes can influence public opinion, policy debates, and even the behavior of people in active conflict zones.
This new stance reflects a larger struggle across the tech industry. Social platforms are caught between two powerful forces: on one hand, a commitment to open expression and minimal censorship; on the other, rising pressure to limit deceptive media that can mislead millions in seconds. Governments, journalists, watchdogs, and human rights organizations have all warned that synthetic war footage and deepfakes can inflame tensions, justify violence, or obscure real atrocities by flooding timelines with fake “evidence.”
By homing in on monetization rather than outright bans, X is experimenting with a middle path. Instead of removing every account that posts questionable AI content, the platform is targeting the financial incentives that often drive creators to chase virality at any cost. If misleading war videos no longer generate ad revenue shares for their posters, X is betting that the incentive to produce and circulate them will diminish.
The policy is also a signal to advertisers. Brands are increasingly wary of appearing alongside graphic war footage, let alone next to content that later turns out to be fabricated. By restricting monetization on undisclosed AI war videos, X can reassure advertisers that their budgets will be less likely to support creators who blur the line between reality and simulation. That, in turn, may protect the long-term viability of its ad and revenue-sharing ecosystem.
For creators, the new rules introduce a clear obligation: transparency. Posting AI-generated war scenes is not prohibited outright, but failing to disclose that they are synthetic now carries a serious economic cost. Creators who use generative tools to visualize hypothetical scenarios, explain military tactics, or produce commentary on conflict will need to clearly mark their content as AI-made if they want to preserve their revenue share.
This move also raises important questions about how platforms define “war-related” AI content. In practice, enforcement will likely focus on videos that appear to depict real events on actual battlefields: bombings, casualties, troop engagements, and destruction of infrastructure. However, there may be gray areas: fictional war simulations, historical recreations rendered with AI, or satirical clips that mimic news footage. The effectiveness and fairness of the policy will depend on how well X’s systems and human reviewers can distinguish harmful deception from clearly artistic or educational uses.
Another challenge lies in the rapidly evolving nature of generative AI. Tools capable of producing high-resolution, realistic video are advancing quickly, often outpacing detection models. While X is investing in metadata analysis and other forensic signals, bad actors can strip or alter this data, making provenance harder to establish. Over time, the platform may have to introduce more stringent requirements, such as standardized disclosure labels, watermarks, or even cooperation with AI model providers to embed detectable signatures in generated media.
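One concrete form such provenance checks can take is inspecting a video container for embedded metadata boxes, for example the “uuid” boxes in the ISO BMFF/MP4 format where standards like C2PA commonly place signed manifests. The sketch below merely walks the file’s top-level boxes and flags whether any uuid box is present; it is a deliberate simplification (real provenance verification also validates the manifest’s cryptographic signature), and it illustrates exactly the weakness noted above: re-encoding or stripping a file removes the box entirely.

```python
import struct

def top_level_boxes(path: str):
    """Yield (box_type, size) for each top-level box in an ISO BMFF/MP4 file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            if size == 1:  # a 64-bit "largesize" follows the 8-byte header
                size = struct.unpack(">Q", f.read(8))[0]
                header_len = 16
            else:
                header_len = 8
            if size == 0:  # box extends to end of file
                yield box_type.decode("latin-1"), None
                break
            yield box_type.decode("latin-1"), size
            f.seek(size - header_len, 1)  # skip over the box payload

def has_provenance_box(path: str) -> bool:
    """Crude check: does the file carry any top-level 'uuid' box, where
    provenance manifests (e.g. C2PA) are commonly embedded? True still
    requires signature verification; False may simply mean the metadata
    was stripped or lost in re-encoding."""
    return any(box == "uuid" for box, _ in top_level_boxes(path))
```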
The timing of the policy change is not accidental. Around the world, conflicts and geopolitical flashpoints have turned social platforms into real-time information battlegrounds. Citizens and journalists upload raw, unfiltered footage from front lines, while states and non-state actors push their own narratives, sometimes using doctored or staged content. In this context, unlabeled AI videos can be weaponized to fabricate atrocities, fake victories, or simulate attacks that never occurred, pushing public sentiment in specific directions.
X’s decision also underscores an important shift in how platforms think about responsibility. For years, the emphasis was on content takedown and moderation queues. Now, attention is turning to the money that flows around that content. By explicitly linking policy enforcement to revenue eligibility, X is acknowledging that monetization is not neutral; it shapes what people post, how often they post, and how provocative or sensational their material becomes.
From a user perspective, the new rule could gradually change the information environment on X. If creators adapt and start consistently labeling AI-produced war content, audiences may develop a clearer sense of what they are watching. A labeled synthetic video is still powerful and potentially persuasive, but it does not masquerade as raw reality. Over time, more visible disclosure could help people maintain a healthier skepticism when consuming footage from conflict zones.
At the same time, some critics are likely to question whether the policy goes far enough. Cutting revenue for undisclosed AI war videos addresses a specific slice of the problem, but does not touch unlabeled AI content about elections, disasters, crime, or other high-stakes topics. Others may argue the opposite: that focusing specifically on war-related content could create uneven enforcement or political bias, especially when different sides in a conflict accuse each other of spreading fakes.
Creators who operate in news, commentary, and conflict analysis spaces will now have to update their internal workflows. Teams that experiment with AI to illustrate complex military developments or simulate scenarios will need clear labeling guidelines. They may also have to educate their audiences about what “AI-generated” actually means, why they are using such tools, and how viewers should interpret that content relative to verified on-the-ground reporting.
In the long run, X’s policy could become a test case for whether economic levers are effective in curbing misuse of generative AI. If revenue suspensions significantly reduce the volume of unlabeled synthetic war videos, other platforms may follow a similar approach, extending it to other sensitive categories such as elections and public health. If not, companies may be forced to consider harsher options, including wider takedowns or stricter identity and verification standards for accounts posting high-impact media.
What is clear is that the era of unregulated, unlabeled AI-generated war imagery is closing fast. As generative tools spread, social networks are under mounting pressure to ensure that users can distinguish between authentic footage and algorithmic fiction, especially when lives, policies, and international relations may be shaped by what appears in a few seconds of video on a timeline. X’s new monetization rules represent one of the first concrete attempts to draw that line by hitting deceptive content where it often matters most to creators: their earnings.

