UNICEF urges global ban on AI‑generated child abuse imagery as deepfake crisis escalates

UNICEF is calling on governments worldwide to urgently outlaw AI‑generated child sexual abuse material, warning that current laws are failing to protect children from a rapidly expanding wave of synthetic exploitation.

According to new research, at least 1.2 million children globally have had their images transformed into sexually explicit deepfakes over the past year alone. Many of these children never met their abusers, never shared intimate images, and yet still became victims—because powerful AI tools can now fabricate convincing abuse content from ordinary photos or videos.

The figures come from Disrupting Harm Phase 2, a large-scale research initiative led by UNICEF’s Office of Strategy and Evidence (Innocenti), ECPAT International, and INTERPOL. Drawing on nationally representative household surveys of around 11,000 children across 11 countries, the findings reveal that in some nations, as many as one in 25 children have had their likeness used in sexually exploitative deepfakes. In classroom terms, that is roughly one child per typical class.

A new era of abuse without physical contact

Researchers stress that AI is transforming the nature of child abuse online. Perpetrators no longer need direct access to a child, compromising situations, or pre‑existing explicit material. Instead, they can harvest images from social media, messaging apps, school websites, or even family photo albums posted online, then feed them into openly available AI tools capable of generating highly realistic synthetic sexual content.

This shift has two devastating implications. First, it drastically lowers the barrier to entry for offenders, as no contact and little technical expertise are required. Second, it creates a universe of abuse material that appears real to viewers but may have no basis in an actual recorded crime—making it harder for law enforcement, platforms, and even families to understand what has happened and how to respond.

Legal systems lag behind AI reality

UNICEF warns that most legal frameworks were written for an era when abuse material was tied to documented, physical crimes against children. Many criminal codes define child sexual abuse material as content showing “real” children being abused. AI-generated imagery, in which no physical contact necessarily occurred, often falls into a grey area.

This legal gap has serious consequences:

– Police and prosecutors may be unable to charge offenders who create or share AI‑generated material if the law only covers images of “actual” abuse.
– Offenders may claim that, because no direct abuse was filmed, the harm is “less serious” or even “victimless.”
– Victims whose everyday photos are turned into deepfakes may have no clear path to justice or removal of the content.

UNICEF’s message is blunt: children suffer real psychological and social harm when their faces and bodies are used in synthetic abuse content, regardless of whether physical abuse occurred in the making of those images. The law must reflect that reality.

One child per classroom: the scale behind the statistics

The Disrupting Harm Phase 2 project deliberately translates data into everyday terms. In some countries, the research estimates that one out of every 25 children has been targeted in this way. For teachers and parents, this means that in almost any school, at least one child may already have been affected.

Beyond the numbers, the impact is deeply personal:

– Children may discover fabricated sexual images of themselves circulating among peers or strangers.
– Victims can face bullying, blackmail, or extortion, even if they never shared a single intimate image.
– The knowledge that “anything you post can be turned into something abusive” can create lasting anxiety, mistrust of technology, and withdrawal from normal online life.

UNICEF argues that these harms should be recognized alongside more conventional forms of online exploitation.

Why AI makes the problem uniquely dangerous

UNICEF and its partners underline several factors that make AI‑driven abuse especially alarming:

1. Speed and scale
A single image of a child can now be turned into countless synthetic variations in minutes. Offenders can create entire collections of fabricated abuse material, amplifying the child’s victimization.

2. Plausible deniability
When confronted, abusers can claim the images are “just AI” or “not real,” which can confuse families, deter victims from reporting, and complicate legal cases.

3. Blurring of real and fake
Synthetic images can be mixed with real abuse material, making it harder for investigators to identify actual victims in need of urgent help.

4. Low technical barrier
Many generative tools are available through simple web or app interfaces. People with no advanced skills can create sophisticated deepfakes, including those targeting children.

5. Persistent circulation
Once shared, AI‑generated images can be downloaded, edited, and re‑uploaded repeatedly across multiple platforms, making removal extremely difficult.

UNICEF’s core demand: criminalize AI‑generated child abuse imagery

In light of these threats, UNICEF is urging governments to:

– Explicitly criminalize the creation, distribution, and possession of AI‑generated child sexual abuse imagery, regardless of whether a “real” abuse act occurred during its production.
– Update legal definitions of child sexual exploitation and abuse material to cover synthetic, manipulated, and deepfake content involving a child’s likeness.
– Recognize AI‑generated abuse as a form of child sexual exploitation in all relevant child protection, cybercrime, and human rights frameworks.

The organization stresses that such laws must focus squarely on the protection of children and on holding perpetrators accountable, not on criminalizing children who may themselves be coerced into sharing images or be victims of image-based manipulation.

Role of tech companies and AI developers

UNICEF’s call is not limited to lawmakers. It also places responsibility on technology and AI firms, urging them to:

– Build robust safeguards into generative AI systems to prevent their use for creating sexual content involving minors.
– Deploy automated detection tools to identify and block attempted generation or upload of child abuse imagery, including synthetic content.
– Maintain clear reporting channels for suspected deepfakes involving children and respond swiftly to removal requests.
– Collaborate closely with law enforcement and child protection organizations under strict human rights and privacy protections.

The agency emphasizes that safety by design—embedding child protection into products from the outset—must become standard practice, not a reactive add‑on.

Supporting victims of AI‑generated abuse

UNICEF also stresses the need for comprehensive support for affected children and families:

– Psychological support to address trauma, shame, or fear associated with the abuse and potential public exposure.
– Legal assistance to help families navigate reporting, takedown requests, and the pursuit of justice where possible.
– Clear procedures in schools and institutions so that when such cases emerge, staff know how to respond sensitively and protect the child from further bullying or victim‑blaming.

The organization warns against dismissing these cases simply because “no physical act” took place in front of a camera. The emotional and social consequences for children can be severe and long‑lasting.

Digital literacy and prevention: what parents and educators can do

While systemic legal and technical reforms are essential, UNICEF highlights the importance of preventive education:

– Teach children that any image they share online can be copied, altered, and misused, even if it seems harmless at the time.
– Encourage critical thinking about image authenticity, helping young people recognize that pictures and videos can be faked and that they have a right to question and report suspicious content.
– Foster open communication at home and in schools, so children feel safe disclosing if they suspect their image has been misused or if they encounter abusive material online.
– Promote privacy practices, such as limiting who can see personal photos, avoiding the public posting of identifiable images in compromising contexts, and using strong security settings on devices and accounts.

Prevention alone cannot solve the crisis, but it can reduce risk and help children respond more confidently if something goes wrong.

International cooperation is critical

Because abuse content—real or synthetic—moves across borders in seconds, UNICEF insists that isolated national efforts are not enough. Harmonized international standards are needed so that offenders cannot simply exploit legal loopholes by operating from jurisdictions with weaker laws.

UNICEF, ECPAT International, and INTERPOL are using the Disrupting Harm Phase 2 project to encourage:

– Alignment of legal definitions of child sexual abuse material, including AI‑generated content.
– Cross‑border information sharing among law enforcement agencies investigating online exploitation.
– Development of common protocols for handling cases involving synthetic imagery and deepfakes.

Such cooperation, they argue, can significantly increase the pressure on networks that produce and distribute abusive material, whether real or AI‑generated.

Why recognizing AI‑generated abuse as “real” harm matters

At the heart of UNICEF’s appeal is a simple principle: if a child’s face, body, or identity is used in abuse imagery—whether captured by a camera or fabricated by an algorithm—the child is a victim and deserves protection, redress, and justice.

Dismissing AI‑generated imagery as “just fake” overlooks the very real consequences:

– Damage to reputation and relationships.
– Increased risk of blackmail or further exploitation.
– Lasting feelings of humiliation, fear, and loss of control over one’s own image.

UNICEF argues that by legally recognizing this form of abuse, societies affirm children’s rights to dignity, privacy, and safety in a digital world that is changing faster than most laws can keep pace.

The bottom line

The rise of generative AI has opened a new front in the fight against child sexual exploitation. With at least 1.2 million children already affected by deepfake abuse in just one year—and in some countries, one child in nearly every classroom—the problem is no longer hypothetical or far‑off.

UNICEF’s message to governments is clear: modernize laws now to explicitly criminalize AI‑generated child abuse material, close the loopholes that leave children unprotected, and work with technology providers to stop this content at the source. The alternative is a future in which every child who appears online, in any context, could be at risk of being turned into the subject of synthetic abuse, with little recourse or recognition of the harm they suffer.