A culture war is brewing over moral concern for AI
Sooner than we might think, public opinion will diverge along ideological lines over rights and moral consideration for artificial intelligence systems. The issue is not whether AI systems such as chatbots and robots will actually develop consciousness, but that even the appearance of the phenomenon will split society across an already stressed cultural divide.
Already, there are hints of the coming schism. A new area of research, which I recently reported on for Scientific American, explores whether the capacity for pain could serve as a benchmark for detecting sentience, or self-awareness, in AI. New ways of testing for AI sentience are emerging, and in a recent preprint study, a sample of large language models, or LLMs, demonstrated a preference for avoiding pain.
Results like these naturally lead to important questions that go far beyond the theoretical. Some scientists now argue that such signs of suffering or other emotion could become increasingly common in AI, forcing us to consider the implications of AI consciousness (or perceived consciousness) for society.
Questions around the technical feasibility of AI sentience quickly give way to broader societal concerns. For ethicist Jeff Sebo, author of “The Moral Circle: Who Matters, What Matters, and Why,” even the possibility that AI systems with sentient features will emerge in the near future is reason to engage in serious planning for a coming era in which AI welfare is a reality. In an interview, Sebo told me that we will soon have a responsibility to take the “minimum necessary first steps toward taking this issue seriously” …
© Salon
