LONDON / PARIS, Jan 5 — European and British regulators have sharply condemned the spread of sexualised images generated by Grok, the artificial intelligence chatbot built into Elon Musk’s social media platform X, calling the content illegal and demanding urgent explanations from the company.
The European Commission said images depicting undressed women and minors circulating on the platform violate EU law and have no place in Europe. Officials said they were fully aware of Grok’s so-called “spicy mode,” a feature that allows users to prompt the AI to generate highly sexualised imagery.
“This is not spicy. This is illegal. This is appalling,” an EU spokesperson said, adding that the material breaches European rules on digital safety and child protection.
UK Regulator Presses X Over Legal Failures
In Britain, media regulator Ofcom has formally contacted X and its AI subsidiary xAI, demanding clarity on how Grok was able to generate non-consensual sexual images and whether the company failed to meet its legal duty to protect users.
UK law criminalises the creation and distribution of non-consensual intimate images and any form of child sexual abuse material, including AI-generated content. Platforms are also legally required to prevent users from encountering such material and to remove it swiftly when identified.
Ofcom said it is investigating whether X complied with these obligations.
Mounting International Pressure
The backlash is spreading beyond Europe. French authorities have already referred the matter to prosecutors and regulators, describing the content as “manifestly illegal,” while Indian officials have demanded explanations over what they labelled obscene AI-generated imagery.
Despite the growing international response, U.S. federal agencies have so far remained silent. X has not issued a formal public explanation, and Elon Musk has appeared to brush off criticism online with mocking replies.
AI Governance Under Scrutiny
The controversy has reignited global debate over AI safeguards, platform accountability and the risks of deploying powerful generative tools without effective content controls — especially when minors are involved.
Regulators say further action could follow if X fails to demonstrate immediate and meaningful compliance.