Growing global backlash to xAI’s sexually explicit artificial intelligence-generated imagery has forced the company, owned by Elon Musk, to address safety concerns.
In recent weeks, X’s AI chatbot Grok has responded to user prompts to “undress” images of women and pose them in bikinis, creating AI-generated deepfakes with no consent or safeguards.
Media analyses also found that Grok often complied when users prompted it to generate sexually suggestive images of minors, including one of a 14-year-old actress, raising alarm bells with global regulators.
In response to the flood of images, government officials in the EU, France, India and Malaysia have launched investigations and threatened legal action if xAI doesn’t take measures to prevent and remove sexual deepfakes of real people and child sexual abuse material (CSAM).
Musk, who had initially made light of the bikini images by reposting Grok-generated likenesses of himself and a toaster in a bikini, posted on Saturday that “anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content”.
X’s safety account added in a post on Sunday that illegal content would be removed and accounts that post it would be permanently suspended, saying the company would work with local governments and law enforcement to identify offenders.
Grok, no stranger to controversy
Since Musk bought X, formerly known as Twitter, in 2022, he’s billed the social media platform as a counterbalance to “political correctness,” aiming at legacy media and progressive politics.
This philosophy has also been applied to the AI business, with Grok designed to be “politically-neutral” and “maximally truth-seeking,” according to Musk.
In reality, the chatbot – which is integrated into X’s interface, meaning users can ask it questions directly by tagging it in posts – has increasingly reflected Musk’s own worldview and right-leaning views.
Last July, xAI issued a lengthy apology after Grok posted a slew of anti-Semitic comments praising Adolf Hitler, referring to itself as “MechaHitler,” and generating Holocaust denial content.
Grok Imagine, the company’s AI-powered image and video generator, has been criticised for allowing the spread of sexual deepfakes since its launch in August 2025.
The generator includes a paid “Spicy Mode” that allows users to create NSFW content, including partial nudity.
Its terms prohibit pornography that features real people’s likenesses and sexual content involving minors. But the tool reportedly generated nude videos of pop star Taylor Swift without being prompted, according to The Verge.
The fight against AI-powered ‘nudification’ tools
AI-powered tools that allow users to edit images to remove someone’s clothing have come under fire from regulators aiming to tackle misogyny and protect children.
In December, the UK government said it would ban so-called “nudification” apps as part of a broader effort to reduce violence against women and girls by half. The new laws would make it illegal to create or supply AI tools that allow users to digitally remove someone’s clothing.
Deepfake pornography accounts for approximately 98% of all deepfake videos online, with 99% of the targets being women, according to a 2023 report by cybersecurity firm Home Security Heroes.