“Obscene”: Grammarly’s AI Now Channels Feedback From Dead Scholars

Grammarly has rolled out a new artificial intelligence feature that is already provoking a strong backlash among academics. The tool, called Expert Review, promises to critique users’ writing in the style of famous scholars, journalists, and other public intellectuals, including many who are no longer alive.

Instead of generic grammar tips, Expert Review analyzes a piece of text and then frames its comments “through the lens” of a specific expert persona. In practice, that means users can choose a named figure, and the system will generate feedback as if that person were reviewing the work. Critics argue that this amounts to a kind of digital ventriloquism, putting new words into the mouths of people who never consented to be represented in this way. One medieval historian has already described the idea as “morbid,” while others have labeled it “obscene.”

The controversy is especially sharp when it comes to deceased scholars. For many academics, a thinker’s work and reputation are tightly intertwined with how they expressed ideas, argued positions, and engaged in debate. Having an AI system simulate that voice on demand, without consent and without any way for the original thinker to object, strikes many as a profound ethical breach, however impressive the underlying technology may be.

Grammarly has long been known as a grammar and style assistant: it launched in 2009 as a tool to catch typos, flag clumsy sentences, and suggest clearer wording. But the company has been rapidly repositioning itself for the age of large language models. In October, it rebranded as Superhuman, signaling a strategic shift away from a single-purpose writing checker toward a broader platform of AI “productivity agents” that also promise help with research, scheduling, email, and workflow automation. Expert Review is one of the flagship features of this new direction.

From a product perspective, Expert Review tries to move beyond surface-level suggestions and into quasi-human mentorship. Instead of merely highlighting passive voice or suggesting synonyms, the tool aims to provide higher-level feedback: the strength of an argument, clarity of structure, coherence of evidence, and appropriateness of tone for a given audience. Wrapping that feedback in the persona of a renowned thinker is presumably meant to make the critique feel more authoritative, engaging, and aspirational.

Yet that same design choice is what has alarmed researchers and writers in the humanities and social sciences. Scholars spend entire careers studying the intellectual contributions of figures whose names are now being used as selectable “styles” inside a commercial AI product. To them, the idea that the voice of a historian, philosopher, or critic can be reduced to a promptable template feels like a flattening of intellectual history into a marketing gimmick.

There is also the issue of accuracy. Even the most advanced language models generate plausible text rather than verified truth. When a system claims to offer feedback “as” a specific expert, it invites users to trust that the advice aligns with that person’s thinking. In reality, the model is synthesizing patterns from training data, not channeling the expert’s authentic judgment. If the AI provides poor or misleading feedback while wearing the mask of a respected scholar, whose credibility is being spent: the company’s or the scholar’s?

For living experts, the problem is partly one of consent and control. Being turned into an AI persona raises questions about rights to one’s name, style, and intellectual brand. Many writers and academics carefully manage how their work is presented and contextualized. Having a tool impersonate them, without their ongoing oversight, risks misrepresentation and reputational harm, even if the system is marketed as an “approximation” rather than a perfect replica.

When it comes to the dead, the questions get even more fraught. Deceased scholars obviously cannot consent, and critics note that their estates may never have anticipated needing to guard against AI impersonation. Others worry that posthumous AI personas will be used to legitimize contemporary positions by retroactively “endorsing” them through simulated commentary. If an AI can claim that a long-dead intellectual would praise or criticize a modern text in a certain way, that blurs the line between historical interpretation and outright fabrication.

Supporters of tools like Expert Review might counter that humans have always imagined how past thinkers would respond to new work. Students are often asked to write essays like “What would this philosopher say about today’s politics?” The difference now is scale, commercialization, and perceived authority. What used to be a classroom exercise or a speculative essay is being packaged as a push-button feature. The risk is that users may treat AI personas as if they were authentic extensions of the original thinkers rather than speculative recreations.

For everyday users, the feature also raises practical questions. Does feedback from a simulated scholar actually improve writing more than conventional AI suggestions? Or does the persona simply add a layer of theatricality? If the advice is generic (“clarify your thesis,” “add more evidence,” “consider counterarguments”), then attaching a famous name may be more about branding than better pedagogy. If it is more specific, users must ask whether that specificity is trustworthy or just a more confident-sounding guess from the model.

There is a broader cultural concern as well: the commodification of intellectual authority. When the voices of scholars, journalists, and experts are turned into selectable presets, expertise risks being perceived as just another aesthetic or filter. This may deepen existing confusion about what constitutes real scholarship, peer review, or professional editorial standards in an era already flooded with AI-generated content.

Some observers are calling for clearer guidelines and guardrails. Possible measures could include:
– Limiting expert personas to fully consenting, currently living individuals with explicit agreements.
– Clearly labeling outputs as AI-generated approximations, not genuine opinions or endorsements.
– Providing transparent explanations of how each persona is constructed and what data informs it.
– Allowing people, especially public intellectuals, to opt out of being used as a style or persona.

Others argue that the entire approach of impersonating specific individuals, living or dead, should be abandoned in favor of more abstract “modes” of feedback: for example, a “rigorous academic critic,” a “supportive writing tutor,” or a “hard-nosed editor.” These could still deliver sophisticated comments and higher-level critique without trading on individual names, legacies, or reputations.

What the backlash around Expert Review ultimately reveals is a growing tension at the heart of AI-assisted creativity. Many people want tools that feel more human, more personal, and more insightful than a basic grammar checker. But the closer those tools come to simulating real people, especially people who never agreed to be simulated, the more they collide with questions of ethics, consent, and respect for intellectual history. Grammarly’s shift to a broader AI productivity platform under the Superhuman brand may be strategically logical, yet Expert Review shows how easily technological ambition can outpace careful consideration of whose voices are being used, and at what cost.