
Grammarly’s ‘Expert Review’ Feature Raises Ethical Concerns
Grammarly, the popular writing assistant, recently introduced a feature called “expert review” that promises to elevate writing with insights from leading professionals. However, a closer look reveals these “insights” aren’t from human experts at all – they’re generated by AI. The practice has ignited a debate about the ethics of using AI to mimic the voices of prominent figures without their consent.
The Discovery and the Concerns
The issue came to light thanks to reporting by The Verge, which found that Grammarly was using the names of well-known authors, scientists, and journalists – including Stephen King, Neil deGrasse Tyson, Carl Sagan, and even this author – without permission. A disclaimer buried within the support pages states that references to experts are “for informational purposes only” and don’t imply any affiliation or endorsement. However, the feature’s design strongly suggests otherwise, leading users to believe they’re receiving feedback from real people.
As Stevie Bonifield of The Verge pointed out, no one asked for permission, nor were they compensated for their AI-generated “expert” labor. This raises serious questions about the responsible use of AI and the potential for misrepresentation.
Testing the ‘Expert’ Advice
Curious about the quality of these AI-generated reviews, I decided to test the feature myself. I pasted in a draft of a recent article by my colleague, Ella Markianos, about a protest at OpenAI. The “expert review” button promised insights, but the results were underwhelming. Instead of a thoughtful critique, I received generic advice, presented as if coming from figures like Shoshana Zuboff, Claire Wardle, and John Carreyrou.
The advice attributed to John Carreyrou, author of Bad Blood – the real Carreyrou had no involvement – suggested using “sensory imagery”: a perfectly valid writing tip, but hardly the groundbreaking insight one would expect from a Pulitzer Prize-winning investigative journalist. Similarly, advice attributed to Kara Swisher, the veteran Silicon Valley journalist, felt out of character and lacked her signature bluntness.
The Response from the ‘Experts’
I shared screenshots of the AI-generated advice with Kara Swisher, who responded with characteristic candor: “You rapacious information and identity thieves better get ready for me to go full McConaughey on you,” she texted. “Also, you suck.”
When I tested the feature with my own writing, the AI drew “inspiration” from AI ethicist Timnit Gebru and New York Times opinion writer Julia Angwin – individuals known for their critical views on AI development. The irony was palpable.
Grammarly’s Response and the Bigger Picture
Grammarly has announced it will allow experts to opt out of the feature by emailing expertoptout@superhuman.com. But an opt-out offered only after the backlash reads as damage control, not a proactive commitment to ethical AI practices.
The situation highlights a broader trend: AI companies are leveraging the work of others without consent or compensation. While chatbots readily offer to write in the style of specific authors, they don’t seek permission or offer payment. Grammarly simply packaged this capability into a paid product, monetizing the identities of real people without their involvement.
The Intersection with National Security
The concerns extend beyond consumer writing tools. Recent reports reveal the U.S. military is using AI systems, such as Maven, powered by Anthropic’s Claude, for targeting and prioritization in military operations – raising questions about accountability and the potential for unintended consequences. Anthropic is currently engaged in a legal battle with the Pentagon over its designation as a “supply chain risk,” a label applied after the company refused to comply with the military’s “all lawful use” standard.
The use of AI in warfare is accelerating, and the lines between innovation and ethical responsibility are becoming increasingly blurred. The events surrounding Grammarly’s “expert review” serve as a cautionary tale about the potential pitfalls of unchecked AI development.
Ultimately, the bigger problem is the invisible ways our work is being used to train these systems, shaping their outputs without our knowledge or consent. Grammarly simply had the “bad manners” to put our names on it.
Further Reading:
- The New York Times – For comprehensive news coverage.
- The Verge – For in-depth tech reporting.
- Platformer – For insights at the intersection of Silicon Valley and democracy.