Are doctors replaceable?
If planes fell from the sky with the regularity of deaths due to medical error, there would be outrage, inquiries and sweeping reform. When doctors make mistakes, however, the narrative is gentler: they are only human. To some extent, this is a justified response; it is also precisely the problem. What is striking is not only the scale of this tragedy but our indifference to it.
Patients are the visible victims of medicine’s hidden ailments, but doctors are its second casualties. Behind the white coats, many physicians are exhausted, depressed and burning out. Around half of doctors in the United States report burnout. In the United Kingdom, 40 per cent say they struggle to provide adequate care at least once a week, and a third feel unable to cope with their workload.
Meanwhile, patient demand is surging. Populations are growing, ageing and living longer with chronic illnesses like cancer, diabetes and dementia. By 2030, the world will face an estimated shortage of around 10 million health workers. In parts of Europe, millions already lack a general practitioner (primary care physician). Shortages and stress form the perfect conditions for error. Burnout and fatigue are linked to mistakes in diagnosis, treatment and prescribing.
However, even in the most resourced health systems, staffed by the most dedicated clinicians, these problems will not entirely go away. Exhaustion and overwork exacerbate mistakes, but the deeper truth is that human beings are limited creatures. We forget, misjudge, and grow overconfident; our moods, biases and blind spots shape what we see and what we judge to be the case. Burnout makes these weaknesses worse, but it does not create them. They are baked into the very psychology that once served us well in small ancestral groups, yet falters in the high-stakes, information-saturated, multitasking environment that is modern medicine. In other words, even at their best, doctors are human – and that means errors are inevitable.
My family has always been on intimate terms with medical error. My brother lived with myotonic dystrophy for two decades before anyone gave it a name. My twin sister, by luck, was diagnosed sooner by a visiting locum. Before that, her doctors had handed her a grab bag of incorrect labels: depressed, tired like everyone else, or simply suffering from ‘wear and tear’. It seemed she was offered everything but the truth, and little candour from the physicians who did not know it. Luck, in medicine, can also be oddly cruel. My late partner’s stomach cancer was discovered only after years of missed signals about his congenital heart condition. By the time doctors recognised the heart problem, the cancer had already taken root.
For me, these are not abstract stories about system failure – they are family history. But they are also part of a wider, more astonishing reality: medical error is among the leading causes of death worldwide. In the US, it is estimated that around 800,000 people die or become permanently disabled each year from diagnostic error alone.
At this point, many argue that the solution lies with technology. If errors are inevitable in human hands, perhaps machines can steady them, or even replace them altogether. Enter Dr Bot. Depending on who you ask, the machine is either a saviour or a saboteur. Most commonly, the vision is one of man and machine working side by side: the algorithm whispering in the doctor’s ear, the human hand guiding the treatment. A doctorly duet, not a duel.
If the purpose of medicine is patient care, then the real question is not who holds the stethoscope, but who – or what – can best deliver safe, reliable and equitable outcomes.
But you will not find in this essay a roll call of AI’s latest feats or a tally of its diagnostic wins and losses. Instead, I want to examine a prior assumption: that doctors themselves must be the arbiters of whether technology can replace them, or even whether doctors should be central to the conversation at all. In the spirit of philosophical enquiry, a question as big as who or what should deliver patient care demands parity and fairness in how it is judged. We rightly scrutinise Big Tech and suspect its motives and methods, but medicine is no less conflicted. To presume that doctors should arbitrate their own indispensability is to let the most interested party preside as judge and jury.
In this essay, then, I turn the lens not on AI itself, but on the presupposition that physicians ought to be the ones deciding whether Dr Bot can – or should – take their place. It is an assumption so common that it tends to move camouflaged through our conversations about the future of the clinic.
Doctors are enmeshed in the very system under scrutiny. Their status, salaries and sense of self are bound up in the debate. Of course they want to believe they’re irreplaceable. But history shows that those most invested in their own survival are rarely the best judges of their indispensability. If we are to think clearly about whether Dr Bot could replace, or even work…