The ant you can save
You notice an ant struggling in a puddle of water. Their legs thrash as they fight to stay afloat. You could walk past, or you could take a moment to tip a leaf or a twig into the puddle, giving them a chance to climb out. The choice may feel trivial. And yet this small encounter, which resembles the ‘drowning child’ case from Peter Singer’s essay ‘Famine, Affluence, and Morality’ (1972), raises big questions. Are ants sentient – able to experience pleasure and pain? Do they deserve moral concern? Should you take a moment out of your day to help one out?
Historically, people have had very different views about such questions. Exclusionary views – dominant in much of 20th-century Western science – err on the side of denying animals sentience and moral status. On this view, only mammals, birds and other animals with strong similarities to humans merit moral concern. Attributions of sentience and moral status require strong evidence. Human exceptionalist perspectives reinforced this view as well, holding that other animals were created for human use.
By contrast, inclusive views – particularly present in various Eastern and Indigenous cultures throughout history – err on the side of affirming sentience and moral status. Traditions like Jain philosophy teach reverence for all life, extending moral concern even to ants and bees. Poets like William Blake have drawn attention to the fragility of insect lives, suggesting kinship with humanity. On this view, when in doubt, we should protect rather than neglect, since ignoring the possibility of sentience risks leading to terrible mistakes.
Both views capture important insights: exclusion guards against misallocating scarce resources, while inclusion guards against neglecting vulnerable beings. However, each view is also one-sided, addressing one risk but not the other. Is there a way to address both risks at the same time, especially when making decisions affecting large populations? After all, these questions are not limited to the occasional ant in a puddle. They extend to the quadrillions of invertebrates killed by humans each year. Soon, they may extend to AI systems too.
This is why we support a middle-ground approach that takes the best from both sides. It goes by different names, but we can call it a probabilistic approach here, since it combines higher or lower probabilities of sentience and moral status with proportionally stronger or weaker forms of protection. This is how we approach high-stakes decisions in other policy domains: by assessing the evidence, estimating the probability and severity of harm, and selecting a proportionate response. We can, and should, do the same here.
Clarity begins with recognising that at least three questions are in play. The first is scientific: are only mammals, birds and other vertebrates sentient, or can invertebrates, AI systems and other beings be sentient too?
The second is ethical: do only sentient beings deserve moral concern, or can (non-sentient) agents, living beings and other entities deserve moral concern too?
The third is practical: what kinds of policies are we able to achieve and sustain, taking into account our responsibilities, limitations, and other relevant factors?
These questions often get blurred. Queries like ‘Do individual ants deserve moral concern?’ risk conflating the scientific question of whether ants are sentient, the ethical question of whether only sentient beings deserve moral concern, and the practical question of whether a policy of caring for ants in a particular way is achievable or sustainable. Making sound decisions requires teasing apart these questions while seeing how they interact.
Fortunately, we have tools for achieving this goal. Scientifically, we can assess how likely particular beings are to possess capacities like sentience, by evaluating the available evidence. Ethically, we can assess how likely these capacities are to matter morally, by evaluating the available arguments. Practically, we can then put it all together to assess how likely these beings are to matter – and how to factor this into the way we live our lives.
We can see how the process works by approaching it step by step.
The first step is to estimate probabilities of sentience. Scientists now agree that many animals once dismissed as ‘mere machines’ display surprising complexity. Elephants mourn their dead, octopuses solve puzzles, and bees can learn to count. Moving forward, AI systems – actual machines – will exhibit increasingly sophisticated behaviours. The question is: how confident can we be that these behaviours are best explained by the capacity for subjective experience?
This question is challenging, since the only mind any of us can directly access is our own (and even then, only imperfectly), making it hard to know what, if anything, it feels like to be anyone else. This ‘problem of other minds’ becomes particularly stark when we consider octopuses with decentralised neural systems, AI systems with silicon-based architectures, and other beings that deviate substantially from the human paradigm.
However, questions about sentience are not beyond the reach of science. Even if we can never know for sure what, if anything, it feels like to be an octopus or a robot, we can still improve our understanding of the distribution of sentience by examining nonhumans for behavioural, computational or anatomical 'markers' of sentience.