The Grok Scandal Forces Reckoning on AI, Consent and Gendered Harms

The Grok AI scandal underscores why regulators must treat dignity and women’s safety as legal, not merely moral, concerns.

April 21, 2026
Evelyne Tauchnitz
The Grok case exposes a persistent gap between moral intuitions and legal norms. (Mike Blake/REUTERS)

In early January, the growing scandal over deepfakes generated by Grok AI triggered sharp reactions, including investigations in several countries. Grok, the generative artificial intelligence (AI) system developed by Elon Musk’s company xAI, is directly integrated into X (formerly Twitter). With Grok’s help, users were able to generate large volumes of sexualized images of identifiable individuals and share them online instantly. While some argue that images have long been manipulated and sexualized online, AI tools such as Grok have made non-consensual sexualized representation faster, cheaper and more scalable than ever before.

On February 10, the world observed Safer Internet Day, which promotes safer and more responsible use of the internet. The Grok incident, however, reminds us that “online safety” is not just about filtering nudity or offensive content; it is also about ensuring consent, dignity and control over one’s digital self.

In many ways, the Grok case exposes a persistent gap between moral intuitions and legal norms. Morally, non-consensual sexualized representations are widely understood as violations of dignity, autonomy and sexual self-determination. Legally, however, such harms are still addressed only indirectly, through privacy, data protection, defamation or platform liability — frameworks that struggle when the image is synthetic and the chain of causation is diffuse. The result is a form of normative lag: what most people intuitively recognize as a serious moral harm may still not be legally sanctioned in any straightforward way.

It is therefore crucial to clarify what is — and what is not — at stake.

To start, this debate is not about nudity or sexual expression per se. What matters is recognizing that the harm stems from three things: the absence of consent, the collapse of context and the exercise of power. Voluntary sexual self-representation is fundamentally different from having sexualized meaning imposed upon oneself by someone else, amplified by algorithmic systems and circulated without control.

These consequences, however, are not evenly distributed. Women — and to a lesser extent, children — are disproportionately affected by AI-generated sexualized deepfakes. Reports on online harassment and image-based abuse show that sexualized attacks are frequently used to intimidate and silence women, particularly those who are publicly visible as journalists and media workers, human rights defenders and activists, writers or politicians.

For women, sexual exposure carries asymmetric risks that range from reputational and professional harm to threats of escalation and victim-blaming. For men, sexualized images are far less likely to undermine credibility or personal safety, reflecting how sexualized representation operates differently within gendered norms of respectability and authority.

Systems like Grok AI combine speed, accessibility, realism and instant shareability; they lower the cost of perpetrating abuse to near zero while maximizing its reach in social environments already shaped by gendered norms, online harassment cultures and unequal vulnerability. In other words, the technology did not create misogyny, but it can operationalize and amplify existing patterns at an unprecedented scale.

When harms are foreseeable, responsibility cannot rest solely with individual users. Design choices, safeguards and institutional duties of care become ethically and legally relevant. This logic already underpins parts of the EU Digital Services Act, which emphasizes systemic risk management rather than treating harm as an endless sequence of individual takedowns.

In the United Kingdom, the Office of Communications (Ofcom) is seeking not only to enforce existing rules but also to translate contested judgments about harm, consent and responsibility into binding regulation and operational standards. Its investigation into Grok under the Online Safety Act has been framed as a test of platform obligations in the age of generative AI. This is why Ofcom’s role, its decisions and the implications of this case matter far beyond the United Kingdom.

At the EU level, regulators have opened proceedings under the Digital Services Act to assess whether X adequately mitigated the systemic risks associated with AI-generated sexual content. In Australia, the government has gone further, announcing plans to introduce a digital duty of care under its Online Safety Act that would require platforms to take reasonable, proactive steps to prevent foreseeable online harms, moving beyond reactive content removal toward anticipatory safety obligations.

On the other side of the Atlantic, strong free speech traditions may limit regulatory reach in the United States, but Canada’s privacy watchdog has already expanded its probe into X and Grok following reports of non-consensual sexualized deepfakes. That Canada still lacks a dedicated online safety regulator makes its handling of this case particularly relevant.

The broader stakes are clear. The danger posed by generative AI is not simply its ability to fabricate images, but its capacity to routinize non-consensual sexualization. If regulators fail to adapt legal norms to these new socio-technical conditions, the result will not merely be more offensive content, but narrower conditions for women’s safe participation in public life.

The Grok AI scandal is therefore not just a test of technical compliance, but of whether AI regulation can grapple seriously with corporate power, human dignity and gendered harm. How the United Kingdom and regulators around the world respond will matter. But what is ultimately at stake is whether these harms, and the will to prevent them, can be translated into binding law, or whether they will remain “only” morally relevant and legally unenforced.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Evelyne A. Tauchnitz is a CIGI senior fellow and a senior researcher at the Institute of Social Ethics ISE, University of Lucerne, Switzerland. Her expertise spans digital technology, global governance, peace and conflict research, ethics and human rights.