Man Admits Using AI to Steal $8 Million in Fake Music Royalties
A man from North Carolina has admitted in federal court that he used artificial intelligence and automated streaming accounts to siphon off more than $8 million in digital music royalties: money prosecutors say should have gone to real, human artists.
Michael Smith pleaded guilty in the Southern District of New York to one count of conspiracy to commit wire fraud following a multi‑year federal investigation, according to the U.S. Department of Justice. As part of his plea, Smith agreed to forfeit the ill-gotten royalty payments and now faces a maximum sentence of five years in prison. His sentencing hearing is scheduled for July 29.
According to prosecutors, Smith built a massive catalog of synthetic tracks by leveraging AI tools capable of composing melodies, generating instrumentals, and producing vocal-like performances. These systems allowed him to quickly create thousands of distinct songs without traditional songwriting, recording sessions, or live performers.
But creating the music was only part of the scheme. Investigators say Smith then deployed networks of automated accounts, essentially bots, to repeatedly stream his AI-generated songs on major platforms. By pushing play counts into the billions, he triggered royalty payouts at scale, diverting millions of dollars from the pool of funds meant to compensate legitimate artists and rights holders.
“Michael Smith generated thousands of fake songs using artificial intelligence and then streamed those fake songs billions of times,” U.S. Attorney Jay Clayton said in a statement announcing the plea. Prosecutors argue the operation was designed from the outset to exploit weaknesses in streaming platforms’ royalty and fraud-detection systems.
How the Scheme Exploited Streaming Economics
The case highlights a vulnerability at the heart of the modern music business. Streaming platforms typically distribute royalties based on a “pro rata” model: all subscription and ad revenue goes into a pool, and artists and rightsholders are paid according to their share of total streams in a given period.
By flooding services with low-cost, AI-generated tracks and artificially inflating their play counts, Smith effectively increased his share of that pool. Every fake stream of his catalog did more than create undeserved income; it also diluted the earnings of genuine artists, whose legitimately streamed music now had to share royalty revenue with a massive, fraudulent catalog.
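The dilution effect follows directly from the pro-rata arithmetic. A toy calculation makes it concrete (all figures below are invented for illustration and do not reflect any real platform's numbers):

```python
# Toy illustration of pro-rata royalty dilution. All figures are invented.
def pro_rata_payouts(revenue_pool, streams_by_artist):
    """Split a revenue pool in proportion to each catalog's share of total streams."""
    total = sum(streams_by_artist.values())
    return {artist: revenue_pool * n / total for artist, n in streams_by_artist.items()}

pool = 1_000_000  # dollars in the royalty pool for the period

# Before the fraud: two legitimate artists share the pool.
honest = {"artist_a": 600_000, "artist_b": 400_000}
before = pro_rata_payouts(pool, honest)

# After: a botted catalog injects 250,000 fake streams into the same pool.
with_bots = dict(honest, fraud_catalog=250_000)
after = pro_rata_payouts(pool, with_bots)

print(before["artist_a"])  # 600000.0
print(after["artist_a"])   # 480000.0 -- the same real listening now earns 20% less
```

The honest artists' play counts never change; only the denominator does, which is why bot streams anywhere in the system reduce payouts everywhere.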
AI tools allowed Smith to scale this model in a way that would have been difficult with human-made content. Instead of hiring producers, engineers, and session musicians, he could programmatically generate tracks at almost no marginal cost, then pair that with automated streaming tools to maximize payouts.
The New Reality of AI-Generated Music
Smith’s guilty plea arrives at a moment when AI-driven music creation tools have become both powerful and accessible. With readily available software, users can now generate entire songs, complete with instrumentals, melodies, lyrics, and synthetic vocals, in a matter of minutes. Some tools can mimic specific genres or moods; others can approximate the style or tone of particular artists.
For hobbyists and independent creators, these technologies can be empowering: they lower the barrier to entry for music production. But the same tools can be weaponized for fraud when combined with automated streaming systems, fake accounts, and opaque royalty mechanisms.
The case underscores a central tension: technology that democratizes creation can also democratize abuse. As the tools improve, the line between human-created and machine-generated content becomes harder to draw, both for listeners and for the platforms tasked with monitoring abuse.
Why This Case Matters for Artists
For working musicians, the Smith scheme is more than an isolated crime; it illustrates how fragile their income streams can be in a digital-first ecosystem. Many artists already rely on razor-thin margins from streaming, where payouts per stream are tiny and sustained income often requires millions of plays.
When a single fraudulent actor can extract more than $8 million through automated plays, it raises fundamental questions about how robust royalty systems really are. Every dollar routed to fake tracks is a dollar not available to legitimate creators. Over time, repeated abuses on this scale could further erode trust in streaming as a viable income source.
It also highlights an asymmetry: while labels and large catalogs have legal and technical resources to monitor and combat fraud, independent and small artists are often left with little recourse or visibility into how their royalties are calculated or potentially impacted by manipulation.
Legal Boundaries in the Age of AI
Smith’s guilty plea to conspiracy to commit wire fraud makes clear that traditional fraud statutes remain powerful tools for prosecutors, even when the alleged misconduct is wrapped in cutting-edge AI technology. The core of the case is not the use of AI itself; it is the deceptive and manipulative use of digital systems to obtain money under false pretenses.
Wire fraud laws generally target schemes that use electronic communications or systems to defraud victims. By using automated accounts and streaming infrastructure to generate false usage data, Smith created a digital illusion of demand, which in turn unlocked royalty payments he was not entitled to receive.
This case sends a signal: using AI or automation as part of a scheme does not place someone outside existing legal frameworks. Courts and enforcement agencies are willing to treat AI tools as just another instrument-no different, legally, than using scripts, bots, or other forms of digital automation to commit fraud.
How Platforms Can Respond to AI-Driven Streaming Fraud
Streaming services now face growing pressure to harden their systems against schemes like the one Smith admitted to. Several concrete measures are likely to become more prominent:
1. Stronger Bot and Anomaly Detection
Platforms can deepen behavioral analytics: identifying patterns such as unnaturally high repeat plays from the same devices or networks, unusual listening times, or sudden spikes for obscure tracks. Machine learning models can flag suspicious behaviors for human review.
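A minimal rule-based version of such a check can be sketched as follows. This is an illustration only, not any platform's actual detection logic, and the account names, event format, and threshold are invented:

```python
# Minimal sketch of a rule-based stream-anomaly check. Thresholds and data invented.
from collections import Counter

def flag_suspicious_accounts(events, max_plays_per_track=50):
    """events: iterable of (account_id, track_id) play events for one period.

    Flags any account that replays a single track an implausible number of
    times. Real systems would layer many more signals (device fingerprints,
    listening hours, geographic spread) and route hits to human review.
    """
    counts = Counter(events)
    return sorted({acct for (acct, track), n in counts.items() if n > max_plays_per_track})

events = [("bot_1", "song_x")] * 120 + [("listener_9", "song_x")] * 7
print(flag_suspicious_accounts(events))  # ['bot_1']
```

Production systems replace the hard threshold with learned models over many such features, but the shape is the same: aggregate behavior per account, score it, and escalate outliers.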
2. Content Provenance and Verification
As AI-generated content proliferates, services may require clearer metadata, including how a track was created and who controls the rights. While AI-made music is not inherently abusive, more transparency makes it easier to detect coordinated manipulation.
3. Refined Royalty Models
Some in the industry advocate for “user-centric” payout models, where each user’s subscription fee is divided only among the artists they actually listen to, rather than pooled. While not a silver bullet, such models can reduce the incentive to flood the system with low-quality, botted content.
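The incentive difference is easy to see in a toy comparison. Under user-centric rules, a bot farm can only redirect its own subscription fees, not a share of every other subscriber's. The figures and account names below are invented for illustration:

```python
# Toy user-centric payout model. All figures and names are invented.
def user_centric_payouts(fee_per_user, listening_by_user):
    """Split each user's fee only among the artists that user streamed."""
    payouts = {}
    for user, streams in listening_by_user.items():
        total = sum(streams.values())
        for artist, n in streams.items():
            payouts[artist] = payouts.get(artist, 0) + fee_per_user * n / total
    return payouts

listening = {
    "fan":      {"artist_a": 100},            # a real fan of one artist
    "bot_farm": {"fraud_catalog": 1_000_000}, # bots hammering fake tracks
}
payouts = user_centric_payouts(10.0, listening)
print(payouts)  # each account's $10 fee stays with what that account played
```

However many fake streams the bot farm generates, it can claim only its own $10; the fan's $10 still goes entirely to artist_a, which is why this model blunts (though does not eliminate) the payoff from botted catalogs.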
4. Tighter Onboarding for Distributors
Services may tighten the requirements for getting music onto their platforms: stricter identity checks for uploaders, better monitoring of bulk submissions, and graduated trust levels for new accounts.
The Ethical Dimension of AI in Creative Industries
Beyond legal and financial concerns, the case raises difficult ethical questions about the place of AI in music and other creative fields. When algorithmically generated songs are used to siphon money away from humans who devote years to their craft, it reinforces fears that AI will not just compete with artists creatively, but undermine their livelihoods structurally.
There is an important distinction between legitimate experimentation, in which AI serves as a tool in the creative process, and the use of AI content solely as a vehicle for extraction and exploitation. The Smith scheme, as described by prosecutors, falls squarely in the latter category: the purpose of the songs was not artistic expression, but financial manipulation.
This draws a line that regulators, platforms, and industry stakeholders will increasingly be forced to clarify: AI can be integrated into creative workflows, but intentional deception and the gaming of compensation systems will not be tolerated.
What This Signals for the Future of AI and Fraud
Smith’s case is likely one of the first high-profile examples of AI-enabled music fraud, but it will not be the last. As generative tools improve, bad actors could replicate similar tactics not only in music, but in other areas where usage-based payments exist-such as video streaming, audiobook platforms, or even advertising networks that pay per view or click.
Enforcement agencies will need to build technical expertise to understand how these schemes operate under the hood. Platforms will need to invest more in security, fraud analytics, and transparency. And policymakers may need to refine regulations to ensure that rights holders, consumers, and smaller creators are protected in an environment where synthetic content is ubiquitous.
Yet, the outcome of this case also shows that AI fraud is not unstoppable. Investigators were ultimately able to trace patterns, establish the scope of the scheme, and secure a guilty plea with forfeiture of proceeds. That signals to would-be imitators that the risk of criminal liability is real.
How Creators and Rightsholders Can Protect Themselves
While much of the responsibility lies with platforms and regulators, artists and rights holders can take several practical steps:
– Monitor Unusual Royalty Activity: Sudden, unexplained swings in streaming data or income, up or down, may warrant closer examination and contact with distributors or collecting agencies.
– Register Works and Contracts Carefully: Clear ownership documentation makes it easier to assert rights and challenge irregularities.
– Engage with Industry Bodies and Unions: Collective pressure can push platforms to adopt stronger anti-fraud measures and more transparent accounting.
– Stay Informed About AI Tools: Understanding how AI generation and automation work helps creators recognize both legitimate opportunities and suspicious patterns of activity.
None of these steps alone can prevent large-scale fraud, but they contribute to an ecosystem where abuse is harder to hide.
A Turning Point for AI in the Music Economy
Michael Smith’s guilty plea encapsulates a pivotal moment for the intersection of AI, digital platforms, and creative labor. It demonstrates how rapidly evolving tools can be misused at scale, but also how traditional legal frameworks can still respond effectively.
As AI-generated music tools continue to spread, this case will likely be referenced in future debates about platform accountability, artist rights, and the design of fair, resilient royalty systems. The challenge ahead is to harness the creative potential of AI without allowing it to become a shortcut for fraud and a drain on the livelihoods of human creators.

