Should AI Have Legal Rights?
The debate around artificial intelligence (AI) has moved far beyond discussions of efficiency or technological progress. We are now confronting questions that, until recently, belonged firmly in the realm of science fiction: can a machine deserve legal rights? If an AI system makes decisions independently, should it still be treated merely as property? And how should responsibility be assigned when AI begins influencing governance, security, and daily life? These issues are becoming increasingly urgent as technology becomes more capable and deeply embedded in society.
Today’s advanced AI systems do far more than follow simple instructions. They learn from experience, recognise patterns, negotiate outcomes, and make decisions that even their creators cannot always predict. They are deployed in banking, healthcare diagnostics, self-driving vehicles, energy grids, and national security systems: areas where their actions can have real and sometimes irreversible consequences. This growing autonomy raises difficult legal and ethical questions about their status and about human accountability for their behaviour.

Current law treats AI as property, no different from a mobile phone or a vehicle. This made sense when software acted purely as a tool. However, as AI begins to operate with increasing degrees of independence, the old classifications are revealing their limitations. A number of legal scholars now argue that AI may eventually require a very narrow form of legal personhood, something closer to a corporation than a human being. Corporations are not sentient, yet they can enter contracts, be sued, own property, and bear responsibility. Extending a similar model to AI, some argue, could help assign responsibility when autonomous systems cause harm.