
The Human-AI Alignment Problem

23.11.2025

We’re now deep into the AI era, where every week brings news of another feature or another task AI can accomplish. But given how far down the road we already are, it’s all the more essential to zoom out and ask bigger questions about where we’re headed, how to get the best out of this technology as it evolves, and, indeed, how to get the best out of ourselves as we co-evolve alongside it.

There was a revealing moment recently when Sam Altman appeared on Tucker Carlson’s podcast. Carlson pressed Altman on the moral foundations of ChatGPT, making the case that the technology has a kind of baseline religious or spiritual component, since we assume it’s more powerful than humans and we look to it for guidance. Altman replied that to him there’s nothing spiritual about it. “So if it’s nothing more than a machine and just the product of its inputs,” Carlson said, “then the two obvious questions are: what are the inputs? What’s the moral framework that’s been put into the technology?”

Altman then refers to the “model spec,” the set of instructions an AI model is given that will govern its behavior. For ChatGPT, he says, that means training it on the........

© Time