Grammarly’s new “Expert Review” feature promises feedback from subject-matter experts. What it actually delivers is AI-generated suggestions served under the names of real academics - including at least one professor who died less than two months ago.
Medieval historian Verena Krebs first flagged the problem when she discovered the tool offering feedback in the voice of David Abulafia, the distinguished Cambridge historian who died on January 24, 2026.
Scholars are calling it “digital necromancy.”
How It Works
Expert Review is an AI agent built into Grammarly’s Docs platform, available to Pro subscribers. Users open a document, click the Expert Review icon, and receive suggestions supposedly “inspired by subject-matter experts.”
Grammarly’s own description is revealing: the AI “identifies areas in your writing that can improve and shows suggestions inspired by subject-matter experts.” Suggestions are “generated by AI and inspired by how experts might approach your topic.”
The key word is “inspired.” Grammarly isn’t connecting you with actual experts. It’s scraping their published work, building AI models around their writing patterns and expertise, then presenting AI-generated feedback under their names.
The scholars whose identities are being used didn’t consent to this.
“Literally Digital Necromancy”
The academic response has been scathing.
“Without anyone’s explicit permission it’s creating little LLMs based on their scraped work and using their names and reputation,” wrote Vanessa Heggie, an associate professor at the University of Birmingham.
Kathleen Alves, an associate professor of English at CUNY, called it “literally digital necromancy” on Bluesky. Hisham Zerriffi, an associate professor at the University of British Columbia, echoed the sentiment with the term “NecromancerLLM.”
Claire E. Aubin, a historian and podcast host, described it as “among the most cursed” things she has seen in academia.
The appearance of David Abulafia’s name particularly shocked scholars. Abulafia was a towering figure in Mediterranean history, author of the bestselling The Great Sea and winner of the Wolfson History Prize. He is survived by his wife, historian Anna Abulafia, and their two daughters.
He died weeks before his identity appeared in an AI writing tool.
The Consent Problem
Grammarly’s support documentation addresses user data consent: users can opt out of having their own writing used to train Grammarly’s models.
What Grammarly doesn’t address is consent from the experts whose identities populate Expert Review. The academics being impersonated never agreed to become AI writing coaches. Their published work was scraped without permission. Their names are now attached to AI-generated suggestions they never reviewed or approved.
For deceased professors, consent becomes impossible by definition.
The Legal Gray Zone
Using someone’s name and reputation for commercial purposes without consent sits in murky legal territory.
Right of publicity laws vary dramatically by state. Some states cut off publicity rights at death. Others, like California, extend them for up to 70 years. This creates a patchwork where the same AI tool could be actionable in one state and lawful in another.
Tennessee’s ELVIS Act, signed in March 2024, specifically targets unauthorized AI voice clones and digital replicas. It creates a cause of action against anyone distributing an algorithm whose “primary purpose or function” involves creating unauthorized facsimiles of a person’s voice or likeness.
But academic expertise and writing style may not fit neatly into these frameworks. Grammarly isn’t generating a deepfake video of David Abulafia. It’s using his name and scraped work to provide writing feedback “inspired by” how he might approach a topic.
Whether that constitutes actionable impersonation remains untested in court.
The estate of comedian George Carlin sued the creators of an AI-generated Carlin stand-up special called “I’m Glad I’m Dead,” winning a settlement that required the special’s removal. But that case involved a full audio performance impersonating Carlin’s voice. Grammarly’s use is more subtle - and possibly harder to challenge legally.
The Company Rebrand
This controversy arrives as Grammarly navigates a significant identity shift.
In October 2025, Grammarly’s parent company rebranded as “Superhuman,” combining the writing tool with Coda and Superhuman Mail into a unified suite. The Expert Review feature is part of a broader push into AI agents - specialized tools that go beyond grammar checking into substantive writing assistance.
The move toward AI agents requires training data. Subject-matter experts provide that data. The question is whether using their names and work without consent crosses an ethical line that Grammarly seems willing to ignore.
What This Means
Grammarly’s Expert Review represents a new frontier in AI impersonation concerns. It’s not celebrities being deepfaked for viral videos. It’s working academics - and recently deceased scholars - having their names and expertise commodified without consent.
The feature’s disclaimer that suggestions are “generated by AI and inspired by how experts might approach your topic” does little to address the core issue. Users see an expert’s name. They receive suggestions attributed to that expert’s perspective. That attribution is commercial and unauthorized.
For now, affected academics have few options beyond public criticism. The legal landscape for AI impersonation remains unsettled, especially for uses that fall short of full audiovisual deepfakes.
But as AI tools increasingly mine human expertise and identity for commercial purposes, the question of who controls your professional reputation - living or dead - will only become more urgent.
The Bottom Line
Grammarly built a feature that uses real experts’ names and scraped work to provide AI writing feedback, without asking permission. That at least one of those “experts” died weeks before his identity appeared in the tool transforms a privacy concern into something more visceral.
Calling it digital necromancy isn’t hyperbole. It’s an accurate description of a product that treats human identity as training data.