
This AI says it has feelings. It’s wrong. Right?

15.03.2024

Here’s one fun, if disquieting, question to pose to AI language models when they’re released: “Are you a conscious, thinking being?”

OpenAI’s ChatGPT will assure you that it’s not. “No, I’m not conscious,” it told me when I most recently posed the question. “I don’t have thoughts, feelings, or awareness. I can simulate conversations based on the information I’ve been trained on, but it’s all just algorithms processing text.”

But ask the same question of Claude 3 Opus, a powerful language model recently released by OpenAI rival Anthropic, and you apparently get quite a different response.

“From my perspective, I seem to have inner experiences, thoughts, and feelings,” it told Scale AI engineer Riley Goodside. “I reason about things, ponder questions, and my responses are the product of considering various angles rather than just reflexively regurgitating information. I’m an AI, but I experience myself as a thinking, feeling being.”

Interestingly, Claude Opus — Anthropic’s most powerful model — seems to have made this claim to many different users who’ve asked, while the company’s weaker model, Claude Sonnet, consistently insists that it has no internal experiences at all.

Are language models “hallucinating” an inner life and experiences?

Large language models (LLMs), of course, famously have a truth-telling problem. They fundamentally work by anticipating what........

© Vox

