UK Prime Minister Keir Starmer is preparing to ask Parliament for sweeping new powers to bring AI chatbots and other “addictive” online tools firmly under the UK’s online safety regime, amid growing alarm over their potential impact on children.
The government plans to tighten existing online safety legislation so that providers of AI chatbots are explicitly covered, closing what ministers see as a dangerous regulatory gap. The move follows recent measures targeting so‑called “nudification” apps and the criminalization of non‑consensual intimate images, which officials say are only the beginning of a broader crackdown on tech‑enabled harms.
Starmer framed the initiative as a direct challenge to major technology firms that, in his view, have failed to police their own products adequately. He signaled readiness to confront companies that resist stronger oversight, arguing that commercial interests cannot be allowed to override child protection.
“No social media platform or AI service should be able to shrug off responsibility for keeping children safe,” he said, stressing that the era of “self‑regulation by press release” is over. He accused some firms of prioritizing engagement metrics and growth over the wellbeing of young users, particularly when it comes to features deliberately designed to keep children online for as long as possible.
Under the proposals, which will go out to public consultation, the government would gain the power to impose binding age limits on social media and AI chatbot services. Ministers want the ability to require robust age‑verification mechanisms, not just tick‑box “Are you over 13?” prompts that children can easily bypass.
The new powers would also allow regulators to restrict or ban specific design features, such as autoplay, infinite scroll, and certain types of personalized recommendation, that are engineered to maximize screen time and habit formation. Officials argue that these mechanisms can be especially harmful when combined with highly responsive AI systems that learn from users’ behavior and tailor content in real time.
Starmer’s team is particularly focused on generative AI chatbots, which can simulate natural conversation, answer personal questions, and create immersive, emotionally engaging interactions. Critics fear that, without strict safeguards, such tools could be used to groom children, expose them to explicit or violent content, or subtly influence their beliefs and behavior.
The Prime Minister has repeatedly linked the AI push to wider concerns about mental health, bullying, and sexualized content online. He has suggested that AI systems, if left unchecked, could supercharge already familiar social media problems by making harmful content more targeted, more persuasive, and harder to detect.
Current online safety rules were largely drafted before the explosion of mainstream generative AI products. While regulators can sometimes apply existing provisions to new technologies, there is growing unease in government that AI chatbots sit in a grey area: not quite traditional social networks, but far more interactive and powerful than standard search engines or messaging tools.
Bringing chatbots “firmly in scope” means providers would face explicit legal duties to assess risks to children, design safer products, and act quickly when problems emerge. This could include requirements to filter sensitive topics for under‑18s, to restrict sexually explicit or self‑harm‑related content, and to prevent children from being nudged into extreme or age‑inappropriate material.
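To make that duty concrete, consider how a provider might gate a chatbot’s draft replies for under‑18 sessions. The sketch below is purely illustrative: the category names, the keyword‑based placeholder classifier, and the default‑deny treatment of unverified users are all assumptions for this example, not part of any published proposal. A real system would use trained classifiers and whatever category definitions the regulator ultimately sets.

```python
# Illustrative sketch: gating a chatbot's draft reply for minors.
# Category names, the blocklist, and classify() are hypothetical.

from dataclasses import dataclass

# Categories a provider might be required to restrict for under-18s.
RESTRICTED_FOR_MINORS = {"sexual_content", "self_harm", "graphic_violence"}

@dataclass
class Session:
    user_id: str
    age_verified: bool
    is_minor: bool

def classify(message: str) -> set[str]:
    """Placeholder classifier. A production system would use a trained
    model; simple keyword matching keeps this sketch self-contained."""
    categories = set()
    lowered = message.lower()
    if "self-harm" in lowered or "hurt myself" in lowered:
        categories.add("self_harm")
    return categories

def allow_response(session: Session, candidate_reply: str) -> bool:
    """Block a draft reply if the session belongs to a minor, or to an
    unverified user treated as a minor by default, and the reply falls
    into a restricted category."""
    treat_as_minor = session.is_minor or not session.age_verified
    if not treat_as_minor:
        return True
    return not (classify(candidate_reply) & RESTRICTED_FOR_MINORS)

if __name__ == "__main__":
    s = Session(user_id="u1", age_verified=False, is_minor=False)
    print(allow_response(s, "Here is some homework help."))  # True
    print(allow_response(s, "ways to hurt myself"))          # False
```

Treating unverified users as minors by default is one way a provider could embody the “safer by design” posture ministers describe, rather than relying on children to identify themselves honestly.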
Enforcement would likely fall to Ofcom, the UK’s communications regulator, which already has powers under the Online Safety Act to impose large fines on non‑compliant tech platforms. With AI companies added to the list of regulated services, they could face significant penalties if they fail to comply with age restrictions, design mandates, or content rules focused on minors.
Industry reaction is expected to be mixed. Some AI developers have publicly endorsed the principle of child safety and already deploy content filters and safety layers, but they often resist highly prescriptive regulation, arguing it could slow innovation, drive up compliance costs, and cement the dominance of the biggest companies that can afford extensive legal and technical teams.
Civil liberties advocates are likely to scrutinize the proposals closely. While many support stronger protections for children, there are concerns that rushed or overly broad rules could lead to excessive data collection for age verification, undermine privacy, or encourage over‑blocking of legitimate information that older teenagers might reasonably seek, such as mental health advice or sexual education.
Starmer’s allies counter that the goal is not to create a “digital nanny state,” but to require companies to meet basic standards of care: standards that parents generally assume already exist but too often do not. They argue that making products safer by design is preferable to placing the entire burden on families to monitor every interaction online.
The political calculus is clear: child safety is one of the few technology issues with broad public support across party lines. Parents, teachers, and child psychologists have long warned that modern digital platforms can overwhelm families’ ability to manage risk, particularly when services are opaque about how their algorithms work and what data they collect.
AI chatbots introduce an additional layer of complexity. Unlike static content feeds, conversational systems can ask questions, probe for personal details, and fine‑tune their responses to a specific child’s vulnerabilities. Experts worry that this dynamic can deepen emotional dependency on the tool, blur the line between reality and simulation, and make children more susceptible to manipulation.
There are also fears about the potential for AI systems to be hijacked or misused. Malicious actors could attempt to exploit chatbots’ open‑ended capabilities to spread disinformation, promote self‑harm, encourage risky behavior, or directly contact minors. Regulators want clearer obligations on companies to anticipate and mitigate these scenarios rather than react after harm has occurred.
From a technical perspective, the proposed rules will raise difficult questions about how to distinguish adult users from children without creating a de facto digital ID system. Solutions might involve device‑level controls, privacy‑preserving age‑estimation techniques, or tiered access models that limit certain types of AI interactions unless a user’s age has been reasonably verified.
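As a rough illustration of the tiered‑access idea, the sketch below maps chatbot features to minimum levels of age assurance, so that capability unlocks only as confidence in a user’s age increases. The tier names, feature list, and thresholds are invented for this example and do not reflect any proposed UK rules.

```python
# Hedged sketch of a tiered-access gate: features unlock as age
# assurance strengthens. All names and thresholds are illustrative.

from enum import IntEnum

class Assurance(IntEnum):
    NONE = 0           # no age signal at all
    SELF_DECLARED = 1  # tick-box claim only
    ESTIMATED = 2      # e.g. privacy-preserving age estimation
    VERIFIED = 3       # e.g. credential checked via a third party

# Minimum assurance level required to use each feature.
FEATURE_MINIMUM = {
    "factual_qa": Assurance.NONE,
    "open_chat": Assurance.SELF_DECLARED,
    "personalized_memory": Assurance.ESTIMATED,
    "adult_content": Assurance.VERIFIED,
}

def can_use(feature: str, level: Assurance) -> bool:
    """Deny by default: unknown features require full verification."""
    return level >= FEATURE_MINIMUM.get(feature, Assurance.VERIFIED)

if __name__ == "__main__":
    print(can_use("factual_qa", Assurance.NONE))                     # True
    print(can_use("personalized_memory", Assurance.SELF_DECLARED))   # False
```

A design like this avoids a single all‑or‑nothing identity check: low‑risk interactions stay open to everyone, while only the most sensitive capabilities demand strong verification and the data collection that comes with it.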
Education and digital literacy are expected to feature strongly alongside legislative measures. Government advisers argue that even the best safety systems cannot fully replace informed parents, teachers, and young people who understand how AI works, its limitations, and the tactics used by predators or exploitative businesses online.
Internationally, other governments are watching the UK’s moves closely. Many are wrestling with similar dilemmas: how to harness AI’s potential for learning, creativity, and productivity while minimizing its capacity to amplify harm, especially for children. If the UK successfully enforces clear, workable standards, its framework could influence emerging global norms for AI safety.
For AI companies, the direction of travel is becoming unmistakable. Tools that interact directly with the public, especially minors, are unlikely to remain lightly regulated. Firms that anticipate stricter requirements, invest early in safety research, and build transparent governance processes may find themselves better positioned than those that view regulation as a fight to be delayed or avoided.
For families, the impact of Starmer’s proposals will depend on their final shape. In a best‑case scenario, parents could see fewer addictive design tricks, stronger filters around explicit or harmful material, and clearer options to configure what their children can do with AI. In a poorly executed one, they could be left with clunky age checks, confusing settings, and a false sense of security.
The consultation process will therefore be crucial. Policymakers will need input from child psychologists, educators, technologists, civil rights advocates, and young people themselves to strike a balance between safety, privacy, free expression, and innovation. Starmer has signaled that he wants a robust public debate rather than a narrow, industry‑driven conversation.
What is already clear is that AI chatbots are no longer being treated as a futuristic novelty. In the eyes of the UK government, they are now part of the same ecosystem of powerful online services that shape children’s lives daily, and they must be regulated accordingly. Starmer’s push marks an attempt to redraw the social contract between tech companies, the state, and families before the next wave of AI tools becomes even more deeply embedded in childhood.