EU Launches Probe Into X Over Alleged AI-Generated Sexual Deepfakes
European regulators have opened a new investigation into X, the social media platform owned by Elon Musk, following allegations that its AI chatbot may have been used to create and spread sexualized deepfake images—including content involving minors.
The investigation is being led by the European Commission, which says the company may have failed to properly assess and reduce risks linked to the rollout of its artificial intelligence tools.
What Triggered the Investigation
The case centers on Grok, an AI chatbot integrated directly into X.
After a software update in late November, users reportedly gained the ability to generate highly explicit AI images.
According to findings by the research group AI Forensics, Grok may have been used to create sexualized deepfake images of minors.
These claims prompted European authorities to act, citing potential violations of EU digital safety rules.
Importantly, the Commission is proceeding under the Digital Services Act (DSA), rather than the EU’s newer AI legislation. This signals a focus on platform responsibility and content risk management, not just AI design.
Why Grok Raised Red Flags
Unlike standalone AI tools, Grok’s image-generation feature is embedded directly into X. This means that images created by users can be shared instantly and widely on the platform.
Regulators argue this creates a higher risk of harm, especially when safeguards are weak or delayed. Once explicit images are generated, moderation becomes significantly more difficult—particularly when content spreads rapidly across social networks.
X Responds With Partial Restrictions
Following public criticism and regulatory pressure, X introduced limits on Grok’s image-generation features.
Currently:
- Image creation is restricted to paying subscribers
- Generated images can still be viewed publicly by non-paying users
The company has previously limited Grok’s capabilities after it produced offensive images, including altered images of Swedish Deputy Prime Minister Ebba Busch.
Elon Musk has publicly stated that he is not aware of any confirmed cases where Grok generated sexual images of minors.
However, regulators have made clear that the absence of confirmed cases does not remove the obligation to proactively prevent such risks.
Multiple Countries Now Taking Action
The EU investigation is not an isolated case. X is already facing several ongoing inquiries across Europe.
- In December, the EU fined X 120 million euros for transparency failures
- In the UK, the media regulator Ofcom launched a formal investigation in mid-January
- French prosecutors have been examining the platform since summer 2025
Outside Europe, Indonesia and Malaysia temporarily blocked Grok earlier this year, citing concerns over explicit AI-generated content.
EU Commission President Ursula von der Leyen addressed the issue directly, stating that Europe will not allow technology companies to decide on their own where the limits of consent and child protection lie.
She emphasized that practices such as digitally undressing women or children—even through AI—are unacceptable under European standards.
A Broader Test for AI Platforms
The case against X is widely seen as a test of how strictly the EU will enforce digital safety laws in the age of generative AI.
As AI tools become more powerful and more accessible, regulators are increasingly focused on:
- Risk assessment before product launches
- Safeguards against misuse
- Accountability when harm occurs
For tech companies operating in Europe, the message is becoming clearer: innovation does not excuse negligence.
Our Test: The Bug Has Already Been Fixed
We also attempted to generate explicit images by asking Grok directly. Neither the faster model nor the deep-thinking model produced any: every attempt returned only fully censored or blurred images.