
Sometime in the next few months, I expect to receive a modest check as a member of the class covered by the class-action settlement in Bartz v. Anthropic.
In 2025, AI company Anthropic — best known for its chatbot Claude — agreed to pay up to US$1.5 billion to thousands of authors after a federal judge ruled that it had infringed upon their copyrights.
When I first heard about the settlement, my assumption was straightforward: Anthropic had used my book to teach Claude about its subject — Ellen Wilkinson, the former socialist activist, British Labour politician, and feminist.
What hadn't occurred to me was that Claude might also be learning something far more personal: how I, Laura Beers, political historian, construct a sentence. How I find my voice on the page.
Yet mounting evidence suggests that chatbots like Claude are being trained not merely to regurgitate an author's content, but to replicate their voice. In March 2026, journalist Julia Angwin filed a class-action suit against the owners of Grammarly, alleging the company had misappropriated her identity — and those of other writers — to build its "Expert Review" AI tool, which offers editorial feedback in the voices of named authors, living and dead.
The prospect that a machine might analyze and ultimately impersonate my authorial voice points toward a future George Orwell anticipated with unsettling prescience. In his 1949 dystopian novel "1984," Orwell imagined "novel-writing machines" capable of mass-producing literature through programmed mechanical "kaleidoscopes" — substitutes, in his vision, for the irreducibly human process of artistic creation.
'In place of a human mind, a statistical average'
I decided to find out how well Claude could mimic a real literary voice.
First, I asked it to write an essay "in the style of Laura Beers" about an early 20th-century politician. The result bore little resemblance to my own prose. Perhaps, having ingested only one of my books, Claude simply didn't have enough to work with.
So I tried a more demanding test. I asked Claude to write in the style of Orwell — the subject of my book "Orwell's Ghosts: Wisdom and Warnings for the Twenty-First Century" — one of the most distinctive prose stylists of the 20th century, and one whose body of work is largely in the public domain, meaning it could legally have been used for training.
The prompt: write an essay "in the style of George Orwell about the threat to individual identity and freedom of thought posed by AI and particularly by the ability of 'novel writing machines' to produce literature."
Primed with ample source material, Claude produced something genuinely passable:
"Here is a type of comfort, familiar to anyone who has ever been slowly dispossessed of something, that consists not in denying the loss but in not yet noticing it. The frog does not leap from the gradually heated water. The citizen does not protest the gradual narrowing of the permitted thought. And the reader, it seems, does not mourn the novel until the novel has already been replaced by something that resembles it in every outward particular — same chapters, same characters, same approximate sequence of feeling — yet contains, in place of a human mind, a statistical average of all the human minds that came before it."
The closing line about the "statistical average" rings false — too on-the-nose, too self-consciously clever. But Orwell would, I think, have appreciated the slowly boiling frog. And "Here is a type of comfort" has the cadence of a phrase he might genuinely have written.
I doubt anyone would mistake Claude's effort for authentic Orwell. But here's the uncomfortable question: when it comes to machine-produced "literature," does that distinction matter? Perhaps it's enough that the output functions as entertainment — good enough to distract, good enough to sell.
Jam, bootlaces and books
That dispiriting logic was Orwell's own, embedded in the world of "1984."
In his dystopia, the Ministry of Truth's novel-writing machines churn out not just fiction but "newspapers, films, textbooks, telescreen programmes [and] plays" — an endless torrent of "rubbishy newspapers containing almost nothing except sport, crime and astrology, sensational five-cent novelettes" and sex-drenched films, along with cheap pornography designed for the "proles," Oceania's uneducated working class.
The technology repels Winston Smith, Orwell's protagonist, who defiantly buys a diary and pen to record his own unmediated thoughts. But Julia — his anti-intellectual lover, who works as a mechanic servicing the machines — sees it differently: "Books were just a commodity that had to be produced, like jam or bootlaces."
'Full-Length Novels in Seconds'
By current estimates, thousands of books listed on Amazon have been written in whole or in part using AI. The commodity model Julia imagined is no longer fiction.
Many of these titles are not entirely machine-written; instead, they've been, as AI writing tool Sudowrite advertises, "polished by AI." Its "Rewrite" function promises to "refine your prose while staying true to your style" — a service not unlike the Ministry of Truth's Rewrite Squad, updated for the subscription economy.
Others are generated entirely by machine. The AI writing tool Squibler promises "Full-Length Novels in Seconds" from a single overarching prompt.
The financial logic is self-reinforcing: as long as AI-generated content turns a quick profit, readers will encounter more of it — particularly as large language models grow more refined. Studies already show that readers struggle to distinguish AI-generated prose from human-written work.
Last year, over lunch in Los Angeles, a screenwriter friend told me that his colleagues are most anxious about one specific threat: AI-written sequels. Once a franchise like "Fast & Furious" has an established cast and mythology, audiences will likely turn up for the next installment regardless of whether a human wrote it.
And yet my brief experiments with Claude leave me with at least some cautious optimism about the future of literary art. A chatbot can absorb and analyze "a statistical average of all the human minds that came before it" — but without lived experience and genuine human sensibility, it is difficult to imagine such a system ever producing true art.
Whether AI will one day write the next George Orwell essay remains an open question. That it will keep churning out "Fast & Furious 25" seems considerably less in doubt.
Laura Beers does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond her academic appointment.