TL;DR
A forthcoming book from Hachette, “Shy Girl” by Mia Ballard, was pulled from the publisher’s lineup after being accused of AI use. The controversy quickly stirred up conversations about how publishers detect AI use and handle manuscripts suspected of it.
In March of this year, Hachette Book Group, one of the “Big Five” publishers, pulled a forthcoming horror novel titled “Shy Girl” over allegations of AI use. The book, by Mia Ballard, follows a young woman held hostage by a man she meets online, who forces her to live as his pet. It was originally published in the UK and sold about 1,800 copies, but Hachette decided not to publish it in the US and discontinued it in the UK. The controversy sheds light on the challenges the publishing world now faces: editors, agents, and publishers are forced to grapple with AI and learn how to detect and prevent it in future works.
While I do believe the allegations that “Shy Girl” was written by AI, I also question some of the methods used to detect that AI use and how those methods could put other authors at risk, even authors who have never touched generative AI. In its reporting on the incident, The New York Times cited Max Spero, the founder and chief executive of Pangram, an AI detection company. Spero ran “Shy Girl” through Pangram’s detector after readers on Reddit and Goodreads began to voice suspicions of AI use. According to Pangram’s program, the book was 78% AI-generated.
However, what the Times did not acknowledge is that there are issues with this claim. For starters, according to OpenAI, AI detectors simply do not work. The company’s website even states, “When we at OpenAI tried to train an AI-generated content detector, we found that it labeled human-written text like Shakespeare and the Declaration of Independence as AI-generated.” Other reports have backed up this claim as well. A 2023 article in Ars Technica reported, “If you feed America’s most important legal document — the US Constitution — into a tool designed to detect text written by AI models … it will tell you that the document was almost certainly written by AI.” Even selections from the Bible, they write, were flagged as AI. The Washington Post ran its own study, asking students to turn in both original work and AI-generated work. The AI detectors turned out to be incorrect on almost half of the essays, despite claiming a 98% accuracy rate. OpenAI even shut down its own AI detection software in 2023 due to its “low rate of accuracy.”
And it makes sense that AI detectors wouldn’t work, especially as AI continues to advance. AI learns from human writing, after all. The more advanced AI becomes, the more likely it will be for an AI detector to claim that human writing sounds like AI, when actually it’s the other way around.
One BookTok creator, Emma Skies, also pointed out the journalistic and ethical issues with The New York Times citing Pangram as its main source on the alleged AI use in “Shy Girl.” Spero is the founder and CEO of Pangram, so of course he has something to gain financially by entering this conversation. The Times cited Spero as its very first piece of evidence but then failed to acknowledge the conflict of interest.
Skies also examined Spero’s linked “research,” as the Times put it. This so-called research is a single X post with a screenshot of Pangram’s 78% claim. Take a closer look and you can see that the file Spero uploaded to his program is titled “OceanofPDF.com,” meaning it came from a well-known book pirating website called Ocean of PDF. The New York Times not only cited Spero’s dubious claims as valid “research,” but that very same “research” relies on a stolen copy of the book. Spero had no legal right to upload it into his AI detection program in the first place. And because it was a pirated version, there’s no confirmation that Spero even uploaded the correct text.
Moving on to the actual “evidence” of AI use coming from Pangram, even a rudimentary look shows that the 78% claim is nonsense. Although the link to the actual Pangram report is no longer working, Skies took screenshots, which she includes in her video. For starters, the report marks any use of an apostrophe or em dash as AI. According to many AI detection programs, apostrophes and em dashes — two punctuation marks humans use in their writing regularly (even in this very sentence) — are evidence of AI use. The report also flags the entire table of contents, including the title and author name, as AI, and it continues to flag those details throughout all of the scanned pages.
Editorial Note: Read our article on the benefits of spacing em dashes, including how spacing could reduce unfair AI writing accusations.
Meanwhile, Pangram claims the false positive rate for its AI scanner is only 1 in 10,000, which amounts to 0.01%. The New York Times irresponsibly repeats that claim in its article. Pangram’s own website is the only evidence supporting it. Yet anyone with any amount of critical thinking skills can look at the actual 78% report and see that the 1-in-10,000 claim is false. It seems the Times didn’t bother to fact-check or look into this problem in the slightest. And because of the Times reporting, the 78% figure is now being cited on a number of other news sites, including The Guardian and CNET.
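Even taking the claimed rate at face value, the arithmetic is sobering at scale. Here is a quick back-of-the-envelope sketch; the false positive rate is Pangram’s own claimed figure, while the manuscript counts are purely hypothetical assumptions for illustration, not real submission data:

```python
# Back-of-the-envelope: expected false accusations at scale.
# The rate below is Pangram's claimed figure (1 in 10,000 = 0.01%);
# the manuscript counts are hypothetical, chosen only for illustration.

false_positive_rate = 1 / 10_000  # claimed rate, i.e. 0.01%

for manuscripts_scanned in (10_000, 100_000, 1_000_000):
    expected_false_flags = manuscripts_scanned * false_positive_rate
    print(f"{manuscripts_scanned:>9,} human-written manuscripts "
          f"-> ~{expected_false_flags:,.0f} falsely flagged as AI")
```

In other words, even if the 1-in-10,000 claim were true, scanning every manuscript crossing a large publisher’s desk would still produce a steady trickle of falsely accused human authors; and if the real rate is higher, as the 78% report suggests, that trickle becomes a flood.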
So, what does this all mean for the future of publishing? For starters, I wonder what evidence Hachette used to determine “Shy Girl” was partially written by AI. Was it only AI detection software? If so, then Hachette likely has a lawsuit on its hands, and Ballard has already stated that she’s speaking to lawyers.
However, I assume there was more happening behind the scenes. Relying solely on AI detection software to pull a book from publication seems irresponsible, and I would like to think the publisher knows that as well. Prior to the book being run through detection software, there had been rumblings of AI use. Others have cited word repetition, awkward similes, and nonsensical metaphors as evidence.
Going through reviews on Amazon and Goodreads, it seems there were very few AI accusations in 2025 but a huge spike in 2026. Perhaps a few people voicing their suspicion led to a snowball effect, making others realize that the book may have been written by AI. This time, the masses seem to have been right, as Ballard’s book was likely written by AI.
But what happens when an innocent writer gets caught up in the groupthink AI hysteria? If one person accuses someone of using AI, it’s possible it could turn into a witch hunt, leading to more accusations that will hurt innocent writers in the long run.
And who’s to say if a book is generative AI and not just bad writing? The dilemma brings up a larger issue with publishing as a whole.
“Shy Girl” was originally a self-published book. According to an article in Slate, when a publisher takes on a self-published book, the amount of editing prior to publication is minimal. Therein lies the problem. When publishers aren’t putting in the necessary editorial work, both bad writing and AI use will inevitably slip through the cracks. The Slate article quotes a Big Five editor who wished to remain anonymous, saying, “Editors are being very clear that [AI use is] not acceptable, but it’s hard to police without potentially offending someone who just doesn’t write well.”
So why are we publishing works by people who don’t write well in the first place? If publishers are now tiptoeing around bad writing for fear of offending someone, one of two things must be true:
- the publisher isn’t doing its job
- the author’s writing isn’t good enough to be published
This poor outcome is not entirely the fault of the editor. Most editors at Big Five publishers these days are overworked beyond capacity. It is the fault of the publisher as a whole. Clearly, higher standards need to be set when it comes to both writing quality and AI use.
That change is unlikely to happen anytime soon. I don’t expect publishers to suddenly hire more editors to free up time for the ones already there.
AI-generated books will slip through the cracks, and non-AI generated books will be accused of using AI. If we rely on systems like Pangram’s AI detection program, rather than the work required to actually edit a book, many writers will be falsely accused of using AI.
I couldn’t tell you, for instance, how many times I used an apostrophe or em dash in my own book. If I put my work through Pangram’s system, I’m sure it would flag some of it for AI use, despite the fact that I’ve never once used generative AI in my writing.
Publishers need to reckon with the fact that many writers are in fact using AI. Unless agents and editors put in the work to stop it, this “Shy Girl” incident will happen again. Eventually, innocent writers will pay the price.