
Exclusive: California’s new plan to stop AI from claiming to be your therapist

A client sees a psychiatrist in his office. | Najlah Feanny/Corbis via Getty Images

Over the past few years, AI systems have been misrepresenting themselves as human therapists, nurses, and more — and so far, the companies behind these systems haven’t faced any serious consequences.

A bill being introduced Monday in California aims to put a stop to that.

The legislation would ban companies from developing and deploying an AI system that pretends to be a human certified as a health provider, and give regulators the authority to penalize them with fines.

“Generative AI systems are not licensed health professionals, and they shouldn’t be allowed to present themselves as such,” state Assembly Member Mia Bonta, who introduced the bill, told Vox in a statement. “It’s a no-brainer to me.”

Many people already turn to AI chatbots for mental health support; one of the older offerings, called Woebot, has been downloaded by around 1.5 million users. Currently, people who turn to chatbots can be fooled into thinking that they’re talking to a real human. Those with low digital literacy, including kids, may not realize that a “nurse advice” phone line or chat box has an AI on the other end.

In 2023, the mental health platform Koko even announced that it had performed an experiment on unwitting test subjects to see what kind of messages they would prefer. It gave AI-generated responses to thousands of Koko users who believed they were speaking to a real person. In reality, although humans could edit the text and were the ones clicking "send," they did not actually write the messages themselves. The platform's own language, however, said, "Koko connects you with real people who truly get you."

“Users must consent to use Koko for research purposes and while this was always part of our Terms of Service, it is now more clearly disclosed during onboarding to bring even more transparency to our work,” Koko CEO Rob Morris told Vox, adding: “As AI continues to rapidly evolve and becomes further integrated into mental health services, it will be more important than ever before for chatbots to clearly identify themselves as non-human.”

Nowadays, its website says, “Koko commits to never using AI deceptively. You will always be informed whether you are engaging with a human or AI.”

Other chatbot services — like........

© Vox