Baltimore has become the latest city to take Elon Musk-linked companies to court, filing a consumer protection lawsuit that squarely targets the way artificial intelligence tools can be used to generate sexualized deepfakes, including images of children.
In a complaint filed in a Maryland state court, the Mayor and City Council of Baltimore accuse X Corp., Musk’s AI venture xAI, and SpaceX of violating local consumer protection laws through the design and deployment of Grok, xAI’s generative AI chatbot. According to the filing, Grok can be used to create and circulate non-consensual sexually explicit images of real people, and the companies allegedly failed to put in place adequate safeguards to prevent that abuse.
Lawyers for the city argue that Grok allows users to “undress” or digitally alter images of real individuals with very little prompting, turning ordinary photos into pornographic or sexualized content. That, the lawsuit contends, exposes Baltimore residents to serious invasions of privacy, reputational damage, and lasting psychological trauma, particularly when minors are involved.
The suit is being brought by the Baltimore City Law Department together with the firm DiCello Levitt. Their position is that the companies did not merely host or indirectly facilitate harmful content, but actively designed and marketed an AI product with a known, foreseeable potential for misuse, then failed to implement effective guardrails to protect consumers.
At the heart of the case is a legal question that reaches far beyond Baltimore: can city- and state-level consumer protection laws be used to hold AI developers and platform operators responsible for what people do with their tools, especially in a regulatory vacuum at the federal level? One legal expert cited by the city suggests this lawsuit could become an early test of how far local governments can go in policing AI when Congress and federal agencies have not yet set comprehensive rules.
Baltimore’s lawyers frame Grok not just as a neutral piece of software, but as a system whose architecture, training, and user interface make it particularly easy to generate harmful deepfakes. They point to the way generative AI can accept simple text prompts, transform or synthesize images, and rapidly produce convincing photo-realistic output, arguing that this combination dramatically lowers the barrier to creating non-consensual sexual content.
The city also argues that this is not an abstract or purely hypothetical risk. Deepfake pornography has already become a growing global problem, with victims ranging from private citizens to public figures. For minors, the harms are even more acute: the lawsuit notes that sexualized imagery of children can amount to child sexual abuse material, triggering not only civil liability but also potential criminal exposure for those who create, distribute, or enable it.
From Baltimore’s perspective, the harm is both individual and collective. The complaint describes how victims can face bullying, blackmail, social isolation, loss of employment or educational opportunities, and severe mental health consequences. At a community level, the city argues that such abuses undermine trust in digital communication, magnify gender-based and sexual violence, and impose real costs on local services, from law enforcement to counseling and social support.
The city is using consumer protection law as a central tool. These statutes generally prohibit unfair, deceptive, or abusive business practices. Baltimore claims that by releasing Grok without robust safety mechanisms, disclosure, or recourse for victims, the companies engaged in unfair practices and misled the public about the risks of using or being depicted by the technology.
A key aspect of the lawsuit is the assertion that the companies had notice of the risks. Generative AI’s capacity to fabricate realistic images, including sexually explicit ones, is well known, and numerous incidents involving other AI tools have been widely discussed by technologists, regulators, and civil society. Baltimore argues that despite this awareness, the defendants prioritized rapid deployment and product growth over user safety, making the resulting harms both foreseeable and avoidable.
The inclusion of SpaceX as a defendant reflects Baltimore’s attempt to encompass Musk’s broader corporate ecosystem, suggesting that his companies are interlinked in strategy, branding, and sometimes infrastructure. The city’s legal team appears to be signaling that major tech figures and conglomerates will not be able to avoid scrutiny by siloing AI products into separate entities.
Beyond financial penalties or injunctive relief, what Baltimore ultimately wants is a shift in how AI products are built and governed. The suit implicitly demands that AI developers adopt “safety by design”: putting rigorous testing, red-teaming, content filters, and strong controls in place before releasing powerful models to the public. It also hints at the need for clear complaint mechanisms and rapid takedown procedures for victims of AI-generated abuse.
If the city prevails, the ruling could prompt other municipalities and states to bring similar actions, effectively building a patchwork of AI regulation from the bottom up in the absence of a comprehensive federal framework. Companies might respond by creating different product configurations or access rules for certain jurisdictions, or by raising their safety standards across the board to reduce litigation risk.
On the other hand, if the court finds that local consumer laws do not extend to the design choices of AI models, or if protections for online platforms are interpreted broadly enough to shield the defendants, it could limit how aggressively cities and states can police AI-generated content. That outcome might add momentum to calls for explicit federal AI legislation rather than relying on existing, more general statutes.
The case also raises fraught questions about where responsibility lies in the AI ecosystem. Developers often argue that they merely create tools and cannot control every misuse. Critics counter that when a tool’s harmful uses are obvious, repeated, and technically feasible to mitigate, companies have a duty not to turn a blind eye. Baltimore’s complaint is aligned with the latter view, portraying Grok’s design as a catalyst rather than a neutral instrument.
One practical challenge highlighted by this dispute is the difficulty of enforcing rules against deepfakes at scale. Even if AI companies implement filters to block certain prompts or image transformations, motivated users may find workarounds or use third-party wrappers. This reality fuels arguments for a multi-layered approach: stronger corporate safeguards, clearer legal remedies, education for the public about deepfakes, and proactive law enforcement strategies.
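To see why such filters are so easy to evade, consider a deliberately simplified sketch in Python of a keyword-based prompt filter. It is purely illustrative, assuming nothing about any company's actual moderation code, but it captures the cat-and-mouse dynamic the lawsuit implicitly confronts:

```python
# Purely illustrative: a naive keyword blocklist of the kind that is
# trivially easy to bypass. Real moderation pipelines are far more
# sophisticated, but they face the same adversarial dynamic.

BLOCKED_TERMS = {"undress", "nude", "explicit"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

# A direct request is caught...
print(naive_filter("undress the person in this photo"))    # True

# ...but trivial obfuscation defeats simple substring matching:
print(naive_filter("u n d r e s s the person"))             # False
print(naive_filter("remove all clothing from this photo"))  # False
```

Gaps like these are part of why no single technical fix suffices, and why the multi-layered approach described above pairs in-product safeguards with legal remedies, public education, and enforcement.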
Baltimore’s move fits into a broader pattern of local and state governments experimenting with AI-related regulation: some are proposing or passing laws targeting deepfake political ads, biometric surveillance, algorithmic discrimination, or AI use in hiring and housing. By focusing on sexual deepfakes and consumer harm, Baltimore is carving out another front in that emerging landscape.
For residents, the lawsuit is framed as a defense of fundamental rights-privacy, dignity, and safety in a world where an image can be altered and shared globally in seconds. For tech companies, it’s a warning that the era of lightly regulated AI experimentation is ending, and that the social consequences of their products will increasingly be evaluated in courtrooms, not just in press releases and investor decks.
Whatever the eventual outcome, the Baltimore case underscores a central reality of the AI age: the line between innovation and harm is no longer a purely technical question. It is becoming a legal, ethical, and civic battleground, and cities like Baltimore are no longer waiting for Washington to draw that line for them.

