AI is showing up in court cases – but only a human jury can grapple with the moral weight of assessing guilt
“Mercy,” a film released in January, depicts a dystopian Los Angeles in the near future: a city riddled with violence, homelessness and civic disorder. California’s response is to set up the Mercy Capital Court, run entirely by an AI bot that goes by the name Judge Maddox. The judge can analyze evidence, determine whether the threshold for guilt has been met and execute the defendant – all within 90 minutes.
Actor Chris Pratt plays a police officer named Chris Raven, who stands accused of murdering his wife. If he wants to leave the Mercy Court alive, he must do everything he can to lower his “guilt score” – the AI’s assessment of whether he’s the killer – from 97.5% to 92%.
AI judges may still be in the realm of science fiction, but AI tools are entering the courtroom. Risk-assessment tools now help judges make decisions about bail, and lawyers and judges have used AI to research legal precedent. Some judges are even experimenting with it to formulate rulings, and simulations have used AI tools to stand in for human jurors.
“Mercy” does not appear to take itself too seriously as a commentary on the legal system. But the idea that an AI bot can determine a verdict by assessing evidence distorts the meaning of legal judgment.
As a scholar who studies juries, I believe AI obscures what human decision-makers bring to the task, and why they are essential to the legitimacy of the legal system. Since the Middle Ages, jurors have had to grapple with the weight of determining guilt – wrestling with serious reservations about the quality of the evidence, the legitimacy of punishment and the impossibility of complete knowledge about the case.
Weighing the evidence in a criminal case cannot easily be measured on a …
