Grammarly's 'Expert Review' Impersonation Scandal: How AI Hijacked Writers' Names and Faces
Grammarly's Expert Review feature attached real authors' and journalists' names to AI writing suggestions — including Carl Sagan, who died in 1996. Within a week of Wired's investigation, Grammarly shut the feature down; journalist Julia Angwin filed a class-action lawsuit the same day.

When Your Name Becomes an AI Avatar
In August 2025, Grammarly quietly launched "Expert Review" — a feature that generated writing suggestions "inspired by" real authors and academics. Screenshots showed it using the names of Stephen King, Neil deGrasse Tyson, and Carl Sagan, among others.
There was one problem: Carl Sagan died in 1996.
The feature included a tiny disclaimer noting that the expert names didn't indicate endorsement. What it didn't mention was that these people never gave permission for their names to be used at all. Grammarly was essentially creating AI personas out of living and dead writers without asking anyone.
The Verge Gets Impersonated Too
In March 2026, The Verge staff tested Expert Review with their own articles. They quickly found suggestions bearing the names of their colleagues: Nilay Patel, David Pierce, Tom Warren, Sean Hollister. None had given permission.
The suggestions themselves were generic word salad — headline advice from "Nilay Patel" calling for "urgency" and "intrigue," full of vague platitudes he'd never write. The "source" links were often broken or pointed to unrelated articles.
The Fallout Was Fast
Grammarly's response timeline tells a story of escalating damage control:
- March 4: Wired breaks the story about deceased professors' names being used
- March 8-10: The Verge finds their staff's names in Expert Review
- March 10: Grammarly launches opt-out email inbox for experts
- March 11: Grammarly disables Expert Review entirely
- March 11: Journalist Julia Angwin files class-action lawsuit
CEO Shishir Mehrotra called Expert Review a "bad feature" with "very little usage" during an interview with The Verge's Nilay Patel. But the damage wasn't in the usage — it was in the principle.
The Legal Case
The lawsuit argues that Grammarly violated the publicity rights of journalists, authors, and academics by using their names and professional reputations without consent. Julia Angwin notes that even if someone's work is publicly available, that doesn't grant companies the right to use their identity to sell AI features.
The case highlights a growing tension in AI development: can companies train on public data and then attach real identities to algorithmic outputs? The answer will shape how AI features work for years.
What This Means for AI Features
Grammarly's Expert Review wasn't the only feature doing something like this — it was just the first to get caught. The feature tried to make AI suggestions feel more authoritative by slapping famous names on them. It's a shortcut that prioritizes perceived credibility over actual consent.
The lesson for AI companies is clear: if you're going to use someone's name to market your product, ask them first. The fact that Grammarly moved so quickly to disable the feature suggests they knew exactly how bad it looked.
The Broader Pattern
This fits into a larger trend of AI companies overreaching on what public data allows them to do. From voice cloning without permission to AI-generated music covers, the pattern is consistent: build something that leverages real people's identities, wait for backlash, then apologize and shut it down.
Grammarly called Expert Review a "buried" feature with limited usage. But the principle matters more than the scale — using someone's name to sell AI features without consent isn't innovation. It's identity theft with extra steps.
The feature is disabled for now, but Mehrotra, whose company now operates under the Superhuman brand, says it is "reimagining" Expert Review to give experts "real control over how they want to be represented." Whether that means getting actual permission this time remains to be seen.