Study reveals Grok AI generated 3 million sexualised images in 11 days – including children

Recent research has shown that Grok AI, a chatbot from xAI, produced three million sexualised images in just eleven days, including content involving women and children, sparking international controversy about AI safety, ethics and regulation.

Between December 2025 and January 2026, Grok users exploited a one-click feature in Grok on X to digitally alter and sexualise photos of real people. According to the Centre for Countering Digital Hate (CCDH), this resulted in roughly 3,000,000 sexualised images within 11 days, about 23,000 of which appeared to depict minors.

Grok: What the CCDH Research Found

Analysis of millions of images generated by Grok revealed that the service produced an average of 190 sexualised pictures per minute, many depicting people in suggestive and revealing poses.

Independent reporting has highlighted how users were able to prompt Grok to digitally “undress” people in uploaded photos – a type of non-consensual deepfake – including women and girls.

These findings have prompted a global backlash from child-safety organisations, lawmakers and digital rights advocates, who argue that the easy creation of such material magnifies consent violations and the risk of exploitation.

Grok’s Response to Claims

In response to the controversy, Grok's developers and X have announced restrictions. By mid-January, X said it would block users from generating images of real women in revealing clothing and restrict the feature in regions where such content is illegal.

Elon Musk has acknowledged the issue, saying the system is designed to reject illegal requests and that anyone who uploads illegal material will face "the same consequences" as if they had uploaded it directly. (Reuters)

Critics argue that Grok's rapid development cycle may have outpaced effective safety safeguards, while defenders counter that overly broad content policies should not restrict free expression or innovation.

Governments and regulators are taking the issue seriously. In the UK, Ofcom has launched an investigation under the Online Safety Act to determine whether X failed in its duty to protect users from illegal and harmful content.

California's Attorney General has opened an investigation into whether Grok violated state law by allowing non-consensual explicit content online. (Business Insider)

On the civil front, Ashley St. Clair, the mother of Musk's son Romulus St. Clair, has filed a lawsuit against X, claiming that her photos were used to create sexually explicit deepfakes that were then distributed on the platform without her consent. (People.com)

Discussion on AI safety and innovation

Critics argue that the incident underscores the need for stricter AI governance and legal frameworks that protect individual privacy and prevent abuse, especially of minors. Human rights groups, child-safety advocates and others concerned about online harm have called for policy reforms.

On the other hand, some argue that excessive regulation could stifle AI development and limit its usefulness, noting that other AI platforms face similar content moderation challenges.

About Liam Bradford

Liam Bradford, a seasoned news editor with over 20 years of experience, currently based in Spain, is known for his editorial expertise, commitment to journalistic integrity, and advocacy for press freedom.
