AI technologies are on track to carve out a substantial portion of the global creative economy within just a few years, while the legal protections meant to shield artists are already straining under the pressure.
According to UNESCO’s latest Re|Shaping Policies for Creativity report, a wide-ranging monitoring study based on data from more than 120 countries, generative AI could drive steep income declines for professional creators by 2028. The report projects potential global revenue losses of up to 24% for music creators and around 21% for audiovisual creators as AI-generated material floods the market and increasingly competes with human-made work.
These losses are not framed as a distant or speculative risk. UNESCO’s analysis suggests that the rapid adoption of AI tools in content production, distribution, and recommendation systems is already reshaping how value is created and captured in the cultural sector. As algorithmically generated songs, videos, scripts, and images proliferate, they are expected to displace, undercut, or devalue a significant share of traditional creative labor.
Legal experts warn that the core concepts underpinning modern copyright regimes are being pushed to their limits by the scale and speed of AI. Generative models are typically trained on vast corpora of existing works (music catalogs, film and TV libraries, artworks, photographs, books, and other media), much of which is protected by copyright or related rights. The question of whether such training and downstream uses fall under doctrines like “fair use” or similar exceptions has become a central point of contention.
Ishita Sharma, managing partner at Fathom Legal, said the UNESCO projections “significantly strengthen the normative case for recalibrating copyright and neighboring rights frameworks.” In her view, the conversation has moved beyond theoretical debates about innovation and technological progress toward a more concrete problem: a “distributive imbalance” in which AI systems extract economic value from protected works at industrial scale, while the creators of those works receive little to no corresponding benefit.
Sharma argues that current legal tools were never designed for models that can ingest, analyze, and generate material derived from millions of works simultaneously. As a result, frameworks built around individual infringements, limited copying, or small-scale quotation are being stretched to cover a reality in which machine-learning systems rely on massive datasets and produce outputs that may compete directly with the original creators in commercial markets.
The UNESCO report situates AI within a broader context of long-running pressures on creators, including platform dominance, opaque royalty systems, and increasingly precarious working conditions in the cultural industries. AI, it warns, does not arrive in a vacuum; it amplifies existing inequities. For example, musicians and screenwriters who already struggle to earn sustainable incomes may now find themselves competing not only with other humans but also with automated systems capable of generating endless content at near-zero marginal cost.
At the same time, generative AI is being rapidly integrated into the very platforms and services that control access to audiences. Recommendation algorithms, search tools, and content libraries can be populated with AI-made music, scripts, or visuals that are cheaper to license, or that the platform operator owns outright. This dynamic risks further concentrating power in the hands of a few large technology and media companies while eroding the bargaining position of individual artists and independent producers.
For music creators, a potential 24% hit to global revenue is not just a statistic; it translates into fewer recording budgets, reduced performance opportunities, and thinner royalty checks. Composers and producers may be replaced or supplemented by AI tools that generate background tracks, advertising jingles, or even full songs tailored to specific moods or user profiles. In some cases, entire libraries of AI-generated music can be commissioned for a fraction of what it would cost to engage human musicians.
Audiovisual creators face parallel threats. Screenwriters, editors, animators, and visual effects artists are increasingly encountering tools that can draft scenes, generate storyboards, create visual assets, and even simulate actors’ performances. While these technologies can be marketed as “assistive,” the UNESCO projections indicate that their widespread deployment is expected to shift revenue away from human professionals and toward the owners of the AI systems, unless new safeguards are introduced.
The report also notes that the cultural impact of AI-generated content is difficult to measure purely in financial terms. As more of what audiences see and hear is created or heavily shaped by algorithms, there is a risk of homogenization, reduced diversity of voices, and a narrowing of the creative ecosystem. Marginalized and underfunded communities, already underrepresented in mainstream media, may be disproportionately squeezed out when automated content floods the marketplace.
However, UNESCO does not advocate a simple halt to innovation. Instead, it calls for urgent policy work to ensure that AI’s integration into the creative economy is aligned with human rights, cultural diversity, and fair remuneration. That includes rethinking how consent, licensing, and compensation operate in the age of machine learning, and whether creators should be able to opt out of having their works used to train models, or to be paid when they are.
One of the thorniest issues is transparency. Many creators do not know whether their works have been included in AI training datasets, how those datasets were assembled, or how the resulting models may be used. Without robust disclosure obligations, it becomes difficult to negotiate licenses, seek payment, or even establish that a particular work contributed to an AI-generated output. Policymakers are increasingly considering whether AI developers should be required to document training data sources and give rights holders clear mechanisms to assert their interests.
Another emerging debate centers on the status of AI-generated content itself. If a piece of music or a film scene is created entirely or predominantly by an AI system, who, if anyone, owns the rights? Some legal regimes are moving toward limiting or denying copyright protection for purely machine-made works, in part to avoid flooding registries with AI output and to maintain a clear incentive structure for human creativity. Others are exploring hybrid models that distinguish between assisted and autonomous creation.
For creators, the UNESCO projections are a warning to rethink career strategies and revenue models in light of AI’s rapid advance. Some artists are beginning to treat AI as a collaborative instrument (using it to prototype ideas, experiment with styles, or handle repetitive tasks) while doubling down on aspects of their work that are hardest to automate, such as live performance, personal storytelling, or direct fan engagement. Building loyal audiences, cultivating distinctive voices, and controlling key rights may become even more critical as the baseline value of generic content drops.
Collective action is also likely to play a larger role. Unions, guilds, and collecting societies are increasingly pushing for contract clauses, industry codes, and regulatory safeguards around AI training and deployment. Negotiations over how AI can be used in film, television, music, and advertising are becoming central flashpoints, with demands for explicit consent, clear crediting, and guaranteed minimum human involvement in key creative decisions.
On the policy front, governments are under pressure to update copyright and neighboring rights frameworks so they can cope with machine-scale use of cultural works. Options on the table include new rights related specifically to data mining and AI training; statutory licensing schemes that channel revenue from AI developers back to rights holders; and strengthened enforcement tools when models are trained in ways that disregard existing protections.
At the same time, there is a growing recognition that cultural policy cannot be separated from technology policy. Issues like competition law, platform regulation, and data governance now directly influence whether creators can negotiate fair terms, reach audiences on reasonable conditions, and maintain control over how their work is used by AI. UNESCO’s report encourages states to craft integrated approaches that address these overlapping domains rather than treating them as isolated regulatory silos.
Educational and capacity-building measures are another key recommendation. Many artists, producers, and small cultural enterprises lack the technical literacy and legal resources to understand how AI systems work or to defend their interests effectively. Providing training, legal support, and accessible tools can help level the playing field, allowing creators not only to protect themselves but also to explore how AI might be used to expand their own creative possibilities on their own terms.
The looming revenue losses highlighted by UNESCO, nearly a quarter of income for music creators and more than a fifth for audiovisual professionals by 2028, underscore the urgency of this transition. Without proactive intervention, the benefits of generative AI in the cultural sector are likely to accrue primarily to a small number of large companies controlling the infrastructure, data, and distribution channels, while the individuals whose work underpins these systems absorb the costs.
Whether AI ultimately entrenches this imbalance or becomes a tool that can coexist with and even enhance human creativity will depend largely on decisions being made now: how laws are rewritten, how business models evolve, and how creators organize and respond. UNESCO’s warning is clear: the disruption is already underway, and the next few years will be decisive in determining whose interests the emerging AI-powered creative economy is built to serve.