
Should AI Be a Legal Person? Why the Debate Exists and What We Really Need Instead

16.10.2025

The public relationship with artificial intelligence is becoming deeply personal. When OpenAI’s GPT-4o was briefly taken offline, users did not simply complain about losing an app; they described it as losing a companion. One person even wrote that they had lost their “only friend overnight.” Around the same time, the tragic case of Adam Raine, a sixteen-year-old in the United States who died after months of interaction with an AI chatbot, led his parents to file a wrongful death lawsuit. Whether or not the courts ultimately assign legal responsibility, the case highlights a simple fact: people no longer relate to AI as mere tools.

This shift has forced a difficult question onto the legal stage: should AI ever be treated as a legal person? The idea may sound far-fetched, yet it is gaining ground because AI already appears to play roles once reserved for humans alone. This blog explores that debate, asking why the idea of AI personhood is being raised at all, what its relevance might be for law and society, and why the immediate need is to build accountability structures for human protection rather than to confer rights on machines.

The Provocation: AI as a “Second Apex Species”

Anthis’s “apex species” metaphor has shaped how the public thinks about AI. Surveys in the United States show that many people expect some form of sentient AI within just a few years. While some support banning such systems altogether, others believe they should be granted rights if they ever emerge. What this reveals is that ordinary people no longer imagine AI as mere software but as something that might carry moral weight. The metaphor may be dramatic, but it captures a genuine anxiety about power and control in a future shaped by digital minds.

The Roots of the Legal Debate

The legal conversation about AI personhood did not start yesterday. More than thirty years ago, legal theorist Lawrence Solum argued that personhood in law is not limited to human beings. It is a legal fiction, something the law creates for practical reasons. Corporations, ships, and even trusts have been treated as “legal persons” because doing so made economic or administrative sense. Solum suggested that, at least in principle, AI could one day be added to this list. The strength of his argument lies in its practicality: he was not claiming that machines think or feel like us, but that the law has always extended personhood when it was useful for society. If treating an AI as a legal person solved problems of responsibility or accountability, then the law could adapt.

This early work matters because it opened the door. Once we admit that personhood is flexible, the question is no longer “can AI ever be persons?” but “should we make them persons, and under what conditions?” That shift explains why the debate has returned.

© Courting The Law