
Sheer prevalence of AI prose is giving rise to a new and extreme kind of suspicion


Earlier this month, Orbit Books, a US imprint of the Hachette publishing giant specialising in genre fiction, cancelled the publication of Shy Girl. The initially self-published novel by debut novelist Mia Ballard became a breakout success among horror fans, via Goodreads and social media buzz; after it was republished in the UK last year, though, readers and commentators began to note aspects of its prose style that were strongly suggestive of generative AI. In January, a book YouTuber using the name Frankie’s Shelf posted a video essay about it bluntly entitled I’m Pretty Sure This Book is AI Slop, outlining in impressive detail an abundance of evidence that the book was substantially created using a large language model (LLM).

(I want to take a parenthetical moment here to marvel at the fact that this YouTube video is two hours and 40 minutes long, almost exactly the same run-time as Paul Thomas Anderson’s Best Picture-winning One Battle After Another, and actually longer than his previous Oscar winner, There Will Be Blood. I wish I could tell you why the hell the video is so long, but I haven’t watched it, because it’s two hours and 40 minutes long.)

Ballard’s attempt to defend her work probably did more harm than good. In an interview with the New York Times, she blamed the LLM usage on a freelance editor she claimed to have hired before self-publishing the book, saying the editor had added the AI elements without her knowledge. Whether or not this defence is convincing is probably beside the point, as Ballard is hardly the main villain of the piece anyway. This isn’t to say she bears no responsibility, but rather that the whole depressing affair emerges out of deeper fault lines in the publishing business, and in our culture more generally.

A report on Shy Girl from Pangram, the AI detection software, apparently flagged certain phrases that were almost certainly generated by AI. According to a recent article in The New York Times Book Review, these included such phrases as “the pause feels like a knife in my chest, sharp and unyielding”, and “I press the phone to my lips, the screen cool and unyielding”. These certainly don’t seem like particularly good or interesting sentences, but they don’t read to me as markedly worse than the kind of sentences you might find in the sort of disposable genre fiction Shy Girl, or the chatbot that generated it, was intending to replicate.


That Times Book Review article claimed “book publishing has few safeguards in place to prevent the unwitting publication of a novel heavily generated by artificial intelligence”. One obvious response to this might be that there used to be a thing called taste, and that it was once a requirement for editors. But I suspect that part of the problem is that editors, especially at imprints specialising in popular fiction, are often obliged to make their own literary sensibilities subordinate to the demonstrated tastes of the market. It’s perfectly possible to imagine an acquiring editor reading those sentences in Shy Girl and recognising them as bad, but also recognising that such literary considerations are beyond the remit of their job.


While thinking about all of this, I came across a quote from a recent interview with the author Colleen Hoover, whose novels have sold more than 50 million copies. “I’m not some highbrow literary writer,” she told a reporter for Elle. “Sure, I could probably spend more time on a sentence and write metaphors and stuff that I don’t do. But I don’t enjoy reading that, and I want to write what I like to read.”

To be clear, I have not read Hoover’s multimillion-selling breakout romance novel It Ends With Us, nor any of the other wildly successful novels she publishes at an average rate of about three volumes per year. There are, clearly, millions of readers who are keen for her to keep them in the style to which they have become accustomed, and for whom her spending more time fussing over metaphors and such would just snarl up the supply chain. My point here is that there isn’t very much distance between this way of thinking about being a writer as essentially owning and operating a content mill, and concluding that the process might as well be automated.

Every other day now seems to bring the revelation that some writer or journalist has prompted their work into existence with an LLM. A couple of weeks ago, we had the sorry case of Peter Vandermeersch, former head of Irish operations at Mediahuis – publisher of, among others, the Irish Independent and the Sunday Independent – who was suspended for publishing quotes “hallucinated” by AI.


In the UK, the failed Reform candidate and political pundit Matt Goodwin was accused on GB News of using AI in writing his new book, which resulted in entirely fabricated quotes being attributed to everyone from Cicero to Roger Scruton. Goodwin attempted to defend his honour by reading out the response he’d received when he uploaded his book to ChatGPT and asked it whether it was written using AI; using ChatGPT to prove that he hadn’t used ChatGPT did not have the desired effect.

Perhaps most gallingly of all, mere days after the New York Times Book Review ran that article about Shy Girl and the problem of authors using AI, it emerged that the Book Review itself had published a review, by the English writer Alex Preston, that was substantially generated by AI. Entire paragraphs were lifted more or less wholesale from a previously published Guardian review of the same book, Watching Over Her by Jean-Baptiste Andrea.


When I read about this, I couldn’t help imagining how annoyed I would be if one of my own books had been reviewed by ChatGPT in the New York Times Book Review. (And then I remembered how annoyed I was when my first book was reviewed by Rahm Emanuel’s older and less likable brother in the New York Times Book Review, and realised I’d have been better off if they’d commissioned ChatGPT to rip off the Guardian’s review, which was at least a positive one.)

Contacted by The Guardian, Preston admitted he had “made a serious mistake”. This framing itself is revealing: it’s as though writers who use ChatGPT to generate a text, and find that what it produces is plagiarism, don’t realise, or don’t want to acknowledge, that automated plagiarism is precisely what AI does.

The sheer prevalence of AI prose is giving rise to a new and extreme variant of what the French philosopher Paul Ricoeur called the hermeneutics of suspicion: the practice of approaching a text with the intention of exposing repressed meanings, unconscious motives. And the motive, unconscious or otherwise, with LLM-generated text is always plagiarism.

We exist, increasingly, in a cultural environment where we can never be quite sure whether what we are reading was written by a human, or generated by some or other LLM. At a time when fewer and fewer people are reading books – let alone book reviews, the space for which is dwindling in periodicals across the world – this amounts to a particularly morbid cultural symptom.


© The Irish Times