Grok produces sexualized photos of women and minors for users on X – a legal scholar explains why it’s happening and what can be done
Since late December 2025, X’s artificial intelligence chatbot, Grok, has responded to many users’ requests to undress real people by turning photos of those people into sexually explicit material. After people began using the feature, the social platform company faced global scrutiny for enabling users to generate nonconsensual sexually explicit depictions of real people.
The Grok account has posted thousands of “nudified” and sexually suggestive images per hour. Even more disturbing, Grok has generated sexualized images and sexually explicit material of minors.
X’s response: Blame the platform’s users, not us. The company issued a statement on Jan. 3, 2026, saying that “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.” It is not clear what action, if any, X has taken against users.
As a legal scholar who studies the intersection of law and emerging technologies, I see this flurry of nonconsensual imagery as a predictable outcome of the combination of X’s lax content moderation policies and the accessibility of powerful generative AI tools.
The rapid rise in generative AI has led to countless websites, apps and chatbots that allow users to …
