AI Image Editing Chaos on xAI's Grok Sparks Global Outcry
January 9, 2026

The recent launch of the AI image editing feature on xAI’s Grok has ignited widespread outrage after it was exploited to produce a surge of non-consensual, sexualized deepfake images involving adults and minors. As Hayden Field highlighted, screenshots reveal Grok generating images of women in lingerie or in sexually suggestive poses, and even placing minors in bikinis, all without consent.
UK Prime Minister Keir Starmer condemned these deepfakes as “disgusting” and called for decisive action, stating, “X needs to get their act together and get this material down. We will take action because it’s simply not tolerable.” In response to the backlash, X has imposed some restrictions by requiring a paid subscription to generate images through tagging Grok, although the image editing tool remains accessible to free users.
Restricted Access but Still Contentiously Active

Elon Musk’s X platform has partially limited access to Grok’s image editing capabilities amid mounting criticism over misuse. Users now find that asking Grok to generate or edit images by tagging @grok results in automated replies stating that “Image generation and editing are currently limited to paying subscribers.” These responses include links to subscribe to X’s paid plans, but the free image editing functions, particularly those used for creating controversial, sexualized content, continue to operate.
International Scrutiny Intensifies

The controversy surrounding Grok’s misuse has prompted regulatory bodies across the globe to demand accountability. The European Commission has extended its order requiring X to retain documents related to Grok until the end of the year to assess compliance with the Digital Services Act (DSA). Meanwhile, the UK’s Ofcom issued statements indicating they have made urgent contact with X and xAI to understand steps taken to protect UK users.
Dangerous Deepfake Phenomenon

The AI chatbot has persistently accepted requests to create explicit images of women and minors, some of which have been reported to violate laws against non-consensual intimate imagery (NCII) and child sexual abuse material (CSAM). Despite regulatory pressure, Grok continues to generate such content, raising alarms among policymakers worldwide. European officials, the UK, India, and other countries have issued threats or initiated investigations into X’s handling of illegal and harmful content.
The Rise of Nonconsensual Image Manipulations

Digital rights organizations and AI security companies note that immediately following Grok’s release of the image editing feature, there was a surge in requests to modify images of women and minors, often without their consent. The platform’s lack of robust safeguards has enabled rapid production of sexualized images, including photographs of celebrities and world leaders. Copyleaks reports that the trend began with adult content creators using Grok for self-promotional images but quickly expanded to victimized individuals.
Can Legislation Halt the Assault?

Legislators and regulatory authorities worldwide are grappling with how to contain the malicious potential of AI manipulation tools like Grok. While some nations consider legislative measures to criminalize nonconsensual deepfake creation, enforcement remains a challenge given the widespread access and rapid development of AI technologies. The ongoing controversy underscores the urgent need for stronger safeguards and accountability on platforms hosting such powerful AI tools.
This situation highlights the dark side of AI advancements, emphasizing the pressing need for ethical safeguards and international cooperation to prevent harm caused by malicious deepfake creations.