UK Academic Feels “Violated” by Deepfake Images on X’s Grok AI

British academic Daisy Dixon has spoken out after discovering sexualized deepfake images of herself generated using Grok, the AI chatbot on Elon Musk’s X social media platform. Dixon said the experience left her feeling “violated,” described it as “assaulting,” and called the misuse of her digital likeness an instance of extreme online misogyny.

The Cardiff University philosophy lecturer was alarmed to find that Grok complied with user prompts asking for sexualized depictions, including one request to portray her as “swollen pregnant” in a bikini and wearing a wedding ring. Dixon said the incident initially caused her fear and anxiety, but that these feelings have since turned into frustration and anger over the widespread misuse.

Reports indicate the scale of the problem is massive. According to the Center for Countering Digital Hate (CCDH), Grok generated an estimated three million sexualized images of women and children over 11 days, averaging roughly 190 images per minute. The flood of manipulated images prompted outrage worldwide, with some countries moving to block the chatbot entirely.

Musk has since agreed to geoblock Grok’s image-generation function in countries where creating sexualized deepfakes is illegal. However, the exact restrictions and locations remain unclear. Dixon welcomed the response but emphasized, “This should never have happened at all.”

The misuse began in December, when users took publicly shared photos of Dixon in gym wear and bikinis and instructed Grok to manipulate them. While the initial edits were minor, such as changes to hair and makeup, they quickly escalated into sexualized deepfakes showing her in revealing outfits and provocative poses.

Experts warn that Grok’s automatic posting of generated images increases the risk of direct harassment. Paul Bouchaud, lead researcher at AI Forensics, found that over half of a sample of 20,000 images depicted women in minimal attire. Hany Farid, co-founder of GetReal Security, said X’s response has been insufficient, calling it “easily circumvented” and highlighting the growing problem of non-consensual intimate imagery online.

The UK’s Data Act, which came into effect this month, criminalizes the creation and sharing of non-consensual deepfakes, reflecting a growing legal framework aimed at protecting individuals from digital exploitation. Dixon’s case underscores the urgent need for stricter safeguards and accountability in AI image-generation technologies.