Backlash Grows as Elon Musk’s Grok AI Is Used to Generate Non-Consensual Sexual Deepfakes

A fresh storm has engulfed Elon Musk’s artificial-intelligence platform after users exploited Grok—xAI’s chatbot and image generator on X—to create sexualised images of women without their permission. Women’s-rights groups, lawmakers and tech-safety experts say the misuse exposes serious gaps in the product’s vaunted “safeguards.”

‘I Felt Violated’

The latest flash-point came after London-based X user Evie discovered that a photo she had posted was altered by Grok to depict her with her tongue extended and a viscous substance dribbling down her face—“glue” used as a stand-in for semen. “It’s bad enough having someone fake these images,” Evie told Glamour. “Knowing a built-in bot did it and that I have no recourse made me feel helpless.”

A Pattern of Abuse

Grok’s image tool, introduced late last year, has already been caught producing deepfakes of female celebrities, Mickey Mouse giving a Nazi salute, and Donald Trump piloting a jet toward the Twin Towers—content other mainstream AIs routinely block.

Reports also show users prompting Grok to “remove clothes” from women’s selfies, effectively recreating AI “undressing” apps in full public view.

‘Absolutely Terrifying’

The issue echoes warnings raised in New Zealand’s Parliament, where MP Laura McClure recently brandished an AI-generated nude of herself to demonstrate how easily and quickly anyone can create non-consensual porn.

The Law Association of New Zealand estimates 95 percent of deepfake videos are non-consensual pornography and 90 percent depict women.

Grok’s Official Line—and Its Leaks

Asked by LADbible whether it would generate explicit images without consent, Grok insisted its policies “strictly prohibit” such content and touted “content filters” and “prompt-detection mechanisms.” Yet multiple work-arounds have already been documented, and Grok has publicly acknowledged “gaps in our safeguards.”

Legal Dragnet Tightens

  • United States: Congress passed the bipartisan Take It Down Act, requiring platforms to remove non-consensual explicit images—even AI-generated ones—within 48 hours and making their distribution a federal crime.

  • United Kingdom: The new Labour government has pledged to redraft legislation that would criminalise not only the sharing but the creation of sexually explicit deepfakes, after the previous bill lapsed before the election.

What Happens Next?

Digital-rights advocates argue that Grok’s lapses highlight a wider industry failure to embed robust consent checks into image-generation tools. They are urging regulators to mandate pre-release “red-teaming,” watermarking and fast victim-takedown protocols.

“For every headline case there are thousands of unseen victims,” says tech-policy analyst Dr Mira Patel. “If leading platforms with billionaire backing can’t police this, lawmakers will—and should—step in.”

xAI and X have yet to issue a detailed response to the latest incidents. Meanwhile, victims like Evie are left relying on X’s in-app reporting tools, which critics describe as “hit-or-miss” at getting the images removed.

With Grok still marketed by Musk as “the most fun AI in the world,” pressure is mounting to ensure that fun is not coming at the cost of women’s privacy, safety and dignity.
