We shouldn’t let kids be friends with ChatGPT
The number of kids getting hurt by AI-powered chatbots is hard to know, but it’s not zero. Yet, for nearly three years, ChatGPT has been free for all ages to access without any guardrails. That sort of changed on Monday, when OpenAI introduced a suite of parental controls, some of which are designed to prevent teen suicides — like that of Adam Raine, a 16-year-old Californian who died by suicide after talking to ChatGPT at length about how to do it. Then, on Tuesday, OpenAI launched a social network with a new app called Sora that looks a lot like TikTok, except it’s powered by “hyperreal” AI-generated videos.
It was surely no accident that OpenAI announced these parental controls alongside an ambitious move to compete with Instagram and YouTube. In a sense, the company was releasing a new app designed to get people even more hooked on AI-generated content but softening the blow by giving parents slightly more control. The new settings apply primarily to ChatGPT, although parents have the option to impose limits on what their kids see in Sora.
And the new ChatGPT controls aren’t exactly straightforward. Among other things, parents can now connect their children’s accounts to theirs and add protections against sensitive content. If at any point OpenAI’s tools determine there’s a serious safety risk, a human moderator will review it and send a notification to the parents if necessary. Parents cannot, however, read transcripts of their child’s conversations with ChatGPT, and the teen can disconnect their account from their parents at any time (OpenAI says the parent will get a notification).
We don’t yet know how all this will play out in practice, though something is bound to be better than nothing. But is OpenAI doing everything it can to keep kids safe?
Several experts I spoke to said no. Even adults have problems regulating themselves when AI chatbots offer a cheerful, sycophantic friend available to chat every hour of the day. …
