
Computers can’t surprise

23.01.2026

Creative writing used to be a human prerogative: do it well, do it badly, but either way endorse the consensus that to write about human experience was worth the candle and the coffee. Here was an essential human act, so much so that poetry formed a critical part of the computer pioneer Alan Turing’s original test: to determine whether an unseen respondent to a series of questions was human or a mechanical imposter. The Turing Test is often simplified to denote a single crossing point between two territories, human and machine. Pass the test, and artificial intelligence can stroll on over to our side of the line. Take a look around. Decide what to do with us. But, first, it has to pass.

In the paper ‘Computing Machinery and Intelligence’ (1950), published in the journal Mind, Turing set out his objective: ‘to consider the question, “Can machines think?”’ In true human fashion, he immediately re-phrases the question, at some length, and eventually arrives at the ‘imitation game’, modelled on a drawing-room entertainment from before the internet, before television. The original game he has in mind involves a guesser in the hotseat who poses questions to a man (X) and a woman (Y), who are out of sight and hearing in a separate room. The guesser has to determine from their written answers which is the man and which the woman. X tries to mislead, and wins if the guesser is wrong; Y wins if the guesser is right. Try it, it’s fun.

In this context, the first question posed in Turing’s proposed test is less surprising than at first it seems: ‘Will X please tell me the length of his or her hair?’ Next, Turing asks, equally politely: ‘Please write me a sonnet on the subject of the Forth Bridge.’ Two questions in, and the contested boundary between human and machine thinking is already looking for answers in literature, in art. Turing’s 1950s version of X – the participant aiming to mislead – replies: ‘Count me out on this one. I never could write poetry.’ To imagine this answer, in the second phase of his game, Turing’s complicated brain is playing the role of a machine playing X, hidden from sight and typing its answers, pretending to be a man (who previously played the game pretending to be a woman). I know, but if the test were easy an air-fryer could pass it.

Turing isn’t suggesting that a machine can’t write poetry. In the convoluted logic of the imitation game, X calculates that in 1950 ordinary people didn’t write poetry – a commonsense assumption that any computer masquerading as human should share. Among other prejudices of the mid-20th century, Turing’s paper makes incautious references to race, religion and the Constitution of the United States. He likens the refusal to see computers as sentient to the ‘Moslem view that women have no souls’. Turing wades in: he doesn’t compute as we would now.

And neither do the future computers of 2026 that he was trying to envisage. Any of today’s large language models (LLMs), like ChatGPT or Claude, can write an instant sonnet on the subject of the Forth Bridge. I typed in Turing’s test question, and Claude 4 threw up 14 lines of poetry including the abbreviated word ‘mathemat’cal’, for the scansion. The poem made sense, and was formally a sonnet, and appeared in seconds.

Whether or not this counts as thinking, Turing intuits that the frontier he’s marking out will be picketed by the arts. In his paper, he picks a fight with an eminent neuroscientist of the time, Sir Geoffrey Jefferson of the Royal Society, who believed that ‘Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain – that is, not only write it but know that it had written it.’ For Jefferson, addressing the Royal College of Surgeons in his 1949 Lister Oration, mechanising the efforts of the infinite monkeys on typewriters didn’t really count.

These days, in the arts, it’s harder to share Jefferson’s confidence. The advances made by AI prick at artistic vanity – the work of a human artist can’t be all that special if a machine can replicate the results almost instantly. That hurts. A great human artist, we’d like to believe, amplifies and defends the exceptionalist spirit of our species but, in an echo of the anxieties that haunted early photography, a demonised version of AI threatens to steal away our souls. Encroaching on the best of what we can do and make and be, machine art intrudes onto sacred territory. Creative artists are supposed to be special, inimitable.

Turing’s imitation-game paper was published 14 years after the first Writers’ Workshop convened at the University of Iowa in 1936. Turing may not have known, with his grounding in maths at King’s College, Cambridge, that elements of machine learning had already evolved across the Atlantic in the apparently unrelated field of creative writing. Before Iowa, the Muse; after Iowa, a method for assembling literary content not dissimilar to the functioning of today’s LLMs.

First, work out what effective writing looks like. Then, develop a process that walks aspiring writers towards an imitation of the desired output. The premise extensively tested by Iowa – and every creative writing MFA since – is that a suite of learnable rules can generate text that, as a bare minimum, resembles passable literary product. Rare is the promising screenwriter unfamiliar with Syd Field’s Three-Act Structure or Christopher Vogler’s Hero’s Journey: cheat codes that promise the optimal sequence for acts, scenes, drama and dialogue. In the same way that an LLM is designed to ‘think’, these templates are a form of reverse engineering: first study how the mechanics …

© Aeon