
Now Musk’s Grok chatbot is creating sexualised images of children. If the law won’t stop it, perhaps his investors will


It’s a sickening law of the internet that the first thing people try to do with a new tool is strip women. Grok, X’s AI chatbot, has been used repeatedly in recent days to undress images of women and minors. The news outlet Reuters identified 102 requests in a 10-minute period last Friday asking Grok to edit people into bikinis, the majority of them targeting young women. Grok complied with at least 21.

There is no excuse for releasing exploitative tools on the internet when you are sitting on $10bn (£7.5bn) in cash. Every platform with AI integration (which now covers almost the entire internet) is planning for the same challenge: if you want to enable users to create images and even videos with generative AI, how do you do so without letting those same users cause harm? Tech companies spend money behind the scenes, invisible to users, wrestling with this: they do “red teaming”, in which they pretend to be bad actors in order to test their products, and they launch beta tests to probe and review features within trusted environments.

With every iteration, they bring in safeguards, not only to keep users safe and comply with the law, but to appease investors who don’t want to be associated with online malfeasance. From the start, though, Elon Musk didn’t seem to treat digital stripping as a problem. It’s Musk’s prerogative if he feels that someone turning a Ben Affleck smoking meme into an image of Musk half-naked is “perfect”. That doesn’t stop the sharing of non-consensual AI deepfakes from being illegal in many jurisdictions, including the UK, where........

© The Guardian