The European Union is bringing out the big guns in the face of widespread outrage over uses of Artificial Intelligence (AI) that violate people’s privacy and dignity.
Brussels is considering classifying the creation of sexual deepfakes as a prohibited practice under the Artificial Intelligence Act following the scandal involving sexualised images created by Grok AI, the chatbot integrated into Elon Musk’s X platform.
Grok outrage
Musk’s company xAI – after prolonged international criticism – introduced new restrictions in mid-January on sexually suggestive AI-generated images in Grok. The shift follows criticism that Grok allowed users to digitally replace women’s clothing with bikinis and, in some cases, create sexualised depictions of minors.
The first images of people being stripped naked without consent (“nudification”) began circulating in the days following the release of the feature, but their spread increased particularly around New Year’s Eve. According to CNN, between January 5 and 6 alone, Grok was used to generate at least 6,700 sexual images, often involving women or minors.
“Grok is now offering a ‘spicy mode’ displaying explicit sexual content with some output generated with childlike images. This is not spicy. This is illegal. This is appalling,” EU digital affairs spokesman Thomas Regnier told reporters at the time.
The European Commission, which acts as the bloc’s digital watchdog, said it would take note of the new measures taken by X and would review them. Officials warned that if the steps prove insufficient, the EU will consider applying the full scope of its Digital Services Act (DSA).
European Commission Vice-President Henna Virkkunen has said that the Commission is considering explicitly banning these types of AI-generated sexual images under the AI Act, classifying them as unacceptable risks.
The prohibition of harmful AI practices could be relevant to addressing non-consensual sexual deepfakes and child pornography, said Virkkunen, who is also the commissioner responsible for Tech Sovereignty, Security and Democracy, at this week’s plenary session of the European Parliament in Strasbourg. She also said the DSA mitigated the risk of online sexual material being disseminated without consent.
She also recalled that the Commission had sent a request for information to X regarding Grok as part of its investigation into the platform under the DSA.
The platform was asked to preserve all internal documents and data relating to it until the end of the year. “We are now examining the extent to which X may in any case be in breach of the DSA and will not hesitate to take further action if the evidence suggests it,” she said.
The Commission had previously stepped up pressure on X, which was fined €120 million in early December over transparency violations. The EU has insisted it will enforce its rules despite backlash from the US administration.
“The DSA is very clear in Europe. All platforms have to get their own house in order, because what they’re generating here is unacceptable, and compliance with EU law is not an option. It’s an obligation,” Regnier said at the height of the scandal in early January.
Last week, a group of about 50 MEPs called on the Commission to ban artificial intelligence apps used to create nude images from the EU market.

Can’t live without X
Despite criticism of X, nearly all senior EU officials continue to post there rather than on European alternatives, according to research by dpa.
European Commission President Ursula von der Leyen and other top officials still do not have official accounts on Mastodon, a Germany-based alternative. Virkkunen opened an official Mastodon account in January. High-ranking EU politicians are also active on Bluesky, another US-based platform currently gaining traction.
The Commission justifies its continued use of X by pointing to its reach: Mastodon has roughly 750,000 monthly users, compared with 100 million on X, according to the companies.
The long legal path towards better safety online
The path towards protecting minors in the EU is long and winding, as concerns over privacy, business and protection clash. Several regulations intersect:
The Commission in 2022 proposed a regulation to require platforms to detect and report images and videos of abuse (child sexual abuse material, or CSAM), as well as attempts by predators to contact minors.
Supported by several child protection groups, the plan nicknamed “Chat Control” sparked fierce privacy debates inside the 27-country bloc and led to accusations of mass surveillance.
The final legislation is expected to be negotiated in early 2026, aiming to bridge the gap between Parliament’s privacy-focused approach and the EU Council’s desire for broad voluntary scanning powers.
While extending temporary voluntary scanning measures until April 2026 to avoid a legal vacuum, MEPs have expressed urgency for a permanent solution.

The European Union uses the Digital Services Act to sanction online platforms by imposing massive fines, requiring immediate operational changes, and – as a last resort – temporarily suspending their services. It can apply fines if platforms violate DSA obligations, fail to comply with interim measures or breach commitments.
It is an EU regulation for a safer online world, requiring platforms to tackle illegal content, protect users, and increase transparency.
The AI Act, adopted in 2024, is the world’s first comprehensive legal framework for artificial intelligence. It establishes a risk-based system to regulate AI technologies within the EU, aiming to ensure they are safe, trustworthy, and respect fundamental rights while fostering innovation.
It bans certain unacceptable AI practices, such as social scoring, and sets rules for high-risk uses of AI – like in critical infrastructure or employment. It also sets out restrictions on manipulative uses of AI, such as deepfakes targeting children, among others.
France, which is considering banning social media for children under 15, has been testing an age verification app developed by the European Commission since this summer. This tool is one of several methods used to verify the age of internet users, which is a headache for tech giants and authorities alike.
Individual efforts
The Spanish Minister of Youth and Children, Sira Rego, in early January asked the attorney general’s office to investigate whether Grok may be committing crimes related to the dissemination of child sexual abuse material.
Currently, Spain is developing its own law for the protection of minors in digital environments. The law strengthens the framework for protecting personal integrity and privacy against new forms of violation linked to the use of technologies such as artificial intelligence, reaffirming that the best interests of the child must always prevail over any digital business model.
Bulgaria has stepped up efforts to combat online child sexual abuse through international law enforcement cooperation, national prevention campaigns and policy discussions aligned with EU legislation. In 2025, Bulgarian authorities took part in a major international operation that shut down Kidflix, one of the world’s largest platforms for child sexual exploitation, used between 2022 and 2025 by nearly 2 million users.
Romania has legislative mechanisms in place to combat child sexual abuse material through its criminal code, and the authorities are seeking to expand and modernise these rules.
Since 2025 an important bill on the protection of children online (called the Online Age of Majority Law) has been under parliamentary debate, and Romania is also gradually participating in and implementing EU rules on the prevention and combating of online sexual abutilize. The Online Age of Majority Law introduces mandatory age verification and parental consent for minors (under 16) to access online services such as social media, gaming and streaming platforms.
EU candidate country Bosnia and Herzegovina still has no specific law regulating this area. In BiH, criminal liability for the production, distribution and possession of such material is based on criminal laws that cover the sexual exploitation of children, but do not contain explicit provisions on AI-generated or simulated content.
The EU has put into place a set of complementary tools and measures to protect its citizens – young and old – from harmful practices online, but weak points include challenges in enforcement, algorithmic amplification of harm, inconsistent national implementation, and debates over balancing security with privacy.
This article is an ENR Key Story. The content is based on information published by ENR participating agencies.















