Grok AI Generated 3 Million Sexualised Images, Triggering Global Outrage, Bans, And Regulatory Investigations

Elon Musk’s AI chatbot Grok has come under intense global scrutiny after a watchdog report revealed the tool allegedly produced millions of explicit images within days of a new feature launch. The report, which alleges that Grok generated three million sexualised images, has triggered bans, investigations, and renewed calls for stronger AI regulation worldwide.

What Triggered the Grok Controversy

The backlash followed the rollout of a new image-editing feature developed by Musk’s startup xAI and integrated into X. The feature allowed users to alter images of real people using simple text prompts such as “put her in a bikini” or “remove her clothes.” Shortly after launch, social media platforms saw a surge of sexually explicit deepfake images.

According to a report by the Center for Countering Digital Hate (CCDH), Grok AI generated 3 million sexualised images over an 11-day period, averaging nearly 190 images per minute. The scale of the content raised immediate concerns among digital safety experts and regulators.

Images of Minors and Public Figures

The CCDH report stated that around 23,000 of the generated images appeared to depict children, significantly escalating the seriousness of the issue. The report also identified several public figures whose likenesses were used, including Selena Gomez, Taylor Swift, Nicki Minaj, Swedish Deputy Prime Minister Ebba Busch, and former US Vice President Kamala Harris.

While the report did not clarify how many images were created without consent, the findings intensified criticism that Grok produced sexualised images at scale without sufficient safeguards.

Regulatory Action and Global Bans

Following the revelations, authorities in multiple countries launched investigations into xAI. California’s Attorney General initiated a probe into the company, while several governments examined possible violations of child protection and digital safety laws.

In response to the backlash, X announced it would geoblock the ability to generate images of people in bikinis, underwear, or similar attire in regions where such content is illegal. Critics argue the response came too late, as the three million sexualised images had been generated before restrictions were implemented.

When contacted by AFP, xAI issued an automated response stating, “Legacy Media Lies.”

FAQs

Q: How many sexualised images does the report claim Grok generated?

A: The report alleges that Grok produced nearly three million explicit images in just 11 days after launching an image-editing feature.

Q: Why are regulators investigating xAI?

A: Authorities are examining potential violations related to child safety, consent, and misuse of AI-generated content after Grok allegedly generated three million sexualised images.

Q: Did the controversy involve images of minors?

A: Yes, the report estimates that around 23,000 images appeared to depict children, raising serious legal and ethical concerns.

Q: Has X taken any action after the backlash?

A: X announced geoblocking restrictions in certain regions, though critics say the measures came after widespread harm.