March 16, 2026
AI-Generated Deepfakes and the Grok Scandal
By Karine Bédard, Misha Nili, and Mathieu Hergett-Rozier
In recent years, AI-based image and video generation technologies have proliferated and matured rapidly.[1] Within the past 18 months in particular, major tech companies such as Google, OpenAI, Runway, and xAI have aggressively developed and marketed their own powerful AI image and video generation software.[2] These technologies have become so advanced and photo-realistic that AI-generated images and videos are difficult to detect without close scrutiny.[3] Indeed, the content these tools produce can be so convincing that its inauthenticity often goes undetected.[4]
In recent months, the companies behind these AI image and video generators have enabled end users to upload their own photos and videos, including images of real people, to serve as source material for generating new content.[5] Once images of a real individual are uploaded as a source, the user can generate images and videos portraying that individual in almost any way the user directs through text-based prompts, so long as the request is not expressly prohibited by safeguards written into the programming.[6] The source material used to generate intimate images of real people need not be, and typically is not, sexual or explicit in nature.[7] The input may be as benign as a still from a family photoshoot or the image of a stranger captured in a panning shot of a video taken in public.
Synthetic media created by generative AI, commonly known as “deepfakes”, are not new; however, the capacity to generate content derived from the likenesses of members of the general public marks an unprecedented shift in the policy and strategy of big tech.
GROK’S SUSCEPTIBILITY TO ABUSE
As one might expect, these generative AI tools are ripe for exploitation. These tools have become a vehicle for a wide range of abusive and criminal acts, including fraud, spreading political disinformation, and generating sexually explicit material.[8] Of all of these generative AI systems, Grok AI, the generative AI platform developed by Elon Musk’s xAI, may be the most infamous.
In recent weeks, Grok and its creators have faced intense criticism over the creation of countless sexually explicit images of real people – even, and often, children – by users of the social media platform X (formerly Twitter).[9] The generation of “nudified” deepfakes has been facilitated by the combination of Grok users’ ability to generate media of real people by uploading images and videos and the platform’s notoriously relaxed guardrails.[10] Despite fixes intended to prevent abuse and to restrict explicit content generation to paid users (a questionable distinction), the ability to generate abusive content depicting real and underage people has persisted.[11]
INTERNATIONAL REACTIONS TO THE GROK SCANDAL
These developments have provoked major backlash and prompted legal responses in various countries. Several governments, including those of France, Ireland, the UK, and the European Union, have announced regulatory and/or criminal investigations of xAI.[12] Some nations have been ahead of the curve in addressing the threats posed by AI image and video generators. The UK has been alert to the harms of deepfakes and recently enacted legislation prohibiting the creation of sexually explicit images of a person using generative AI without that person’s consent.[13] Denmark has taken a similar approach, tabling legislation that would grant citizens copyright over their likenesses and prohibit the unauthorized creation or distribution of deepfakes using those likenesses.[14]
LEGAL REMEDIES AVAILABLE IN CANADA
For those whose images have been non-consensually generated and disseminated in the Grok scandal – or through any other deepfake generator – navigating the legal landscape is no easy task. Below, we outline some of the options available in Canadian law and highlight their shortcomings.
The federal government has yet to enact legislation addressing the harms posed by generative AI tools like Grok. Legislation that would have addressed such issues by providing various types of support for survivors, including a 24-hour takedown provision, died with the proposed Online Harms Act bill in 2025.[15] The federal government has introduced amendments to the Criminal Code in Bill C-16 to criminalize the non-consensual distribution of sexually explicit deepfakes, though critics fairly point out that this measure is insufficient to address the issue.[16]
In addition, although most provinces have enacted legislation to address the non-consensual disclosure of intimate images, few have included deepfakes within their definitions of intimate images.[17] Ontario, for its part, has yet to introduce any legislation aimed at the non-consensual distribution of intimate images.[18] The Law Commission of Ontario has, however, recently announced a project to investigate the implementation of a civil legal framework to tackle the creation, alteration, and distribution of non-consensual intimate images.[19]
Despite the absence of legislation, the common law – the system of judge-made law, built on precedent, that governs in most Canadian provinces and territories – offers potential, though imperfect, legal avenues for targets of non-consensual, sexually explicit deepfakes.
Misappropriation of personality
The tort of misappropriation of personality protects against the unauthorized exploitation of a person’s identity, including their name, image, or voice.
Misappropriation of personality is traditionally limited to commercial exploitation of individuals with marketable goodwill, leaving many individuals without recourse where the deepfake is created for harassment rather than endorsement. However, this tort could be engaged where AI-generated likenesses are used in advertising or other commercial contexts.
Intrusion upon seclusion
The tort of intrusion upon seclusion addresses intentional invasions of private affairs and provides damages even absent economic loss.
Intrusion upon seclusion is structured around unauthorized access to private domains, whereas most deepfakes are assembled from publicly available images rather than obtained through intrusion. Furthermore, where wide dissemination is involved, courts tend to rely on a different tort: public disclosure of private facts.
Public disclosure of private facts
The tort of public disclosure of private facts targets highly offensive, non-consensual publicity of private information and could theoretically capture the reputational harm caused by deepfakes.
However, public disclosure of private facts has been interpreted to be limited to the publication of truthful information, creating friction where deepfakes fabricate events that never occurred.
Publicity placing a person in a false light
Finally, this tort directly targets highly offensive and misleading portrayals and therefore may be the most promising fit for explicit deepfakes. The tort is centered on reputational and dignity harms arising from deceptive portrayals, and a sexually explicit deepfake that depicts a target engaging in conduct they never performed could plausibly be argued to place that individual in a false and degrading light before the public. The courts have clarified that the wrong lies in representing someone not as worse than they are, but as other than they are. In other words, the harm flows from stripping a person of control over how they present themselves to the world.
Even this tort, however, has its challenges. For one, the “publicity” requirement demands communication to the public at large, or to so many people that the matter is certain to become public knowledge. It is therefore unclear whether deepfakes circulated to a single individual or a small group of people would be captured.
CONCLUSION
Despite the potential remedies available in Canadian common law, significant gaps remain in addressing the wrongs suffered by targets of deepfakes. The existing legal landscape only partially maps onto the unique harms posed by modern generative technologies such as Grok.
A class action is certainly a promising vehicle for addressing the wide-scale harms of deepfakes, encouraging behaviour modification (by ensuring that there are consequences for wrongdoing), and facilitating access to justice. However, a class action is only a procedural tool that must still rely on existing causes of action (such as those outlined above).
In addition, the importance of legislation that proactively addresses the creation and distribution of deepfakes cannot be overstated. From acting as a deterrent to symbolically acknowledging society’s disapproval of the wrongdoing and recognizing the harms caused by deepfakes, legislation complements, rather than substitutes for, existing legal protections and helps ensure that targets of deepfakes are not left behind.
[1] Tristan Wolf, “AI Video Generation: A Year of Drastic Improvements”, Medium (6 Jul 2024).
[2] Victor Dey, “Battle of the Bots: xAI’s Aurora and OpenAI’s Sora Compete for Creative AI Supremacy”, The AI Journal (15 Apr 2025), online at: https://aijourn.com/battle-of-the-bots-xais-aurora-and-openais-sora-compete-for-creative-ai-supremacy/; Ashley Capoot, “Runway rolls out new AI video model that beats Google, OpenAI in key benchmark”, CNBC (1 Dec 2025), online at: https://www.cnbc.com/2025/12/01/runway-gen-4-5-video-model-google-open-ai.html.
[3] Rocket Drew, “AI Videos Nearly Indistinguishable From Real Videos, Runway Finds”, The Information (1 Jan 2026), online at: https://www.theinformation.com/newsletters/ai-agenda/ai-videos-nearly-indistinguishable-real-videos-runway-finds; Katie L. H. Gray, Josh P. Davis, Carl Bunce, Eilidh Noyes & Kay L. Ritchie, “Training human super-recognizers’ detection and discrimination of AI-generated faces” (2025) 12:1 Royal Society Open Science, online at: https://royalsocietypublishing.org/rsos/article/12/11/250921/234220/Training-human-super-recognizers-detection-and.
[4] “Can we still tell what's real? 'Unsettling' new AI tech makes generating ultrarealistic videos easy”, CBC (30 May 2025), online at: https://www.cbc.ca/news/canada/google-ai-videos-1.7545853.
[5] Tiffany Hsu, Stuart A. Thompson and Steven Lee Myers, “OpenAI’s Sora Makes Disinformation Extremely Easy and Extremely Real” The New York Times (3 Oct 2025), online at: https://www.nytimes.com/2025/10/03/technology/sora-openai-video-disinformation.html.
[6] Ibid.
[7] Sara Herschander, “How X became a one-stop shop for deepfake harassment” Vox (9 Jan 2026), online at: https://www.vox.com/future-perfect/474563/grok-x-ai-bikini-deepfake-liability-section-230.
[8] Katie Swyers, “Sask. retiree warns others after losing $3K to crypto fraud using AI video of prime minister”, CBC (4 Dec 2025), online at: https://www.cbc.ca/news/canada/saskatchewan/prime-minister-mark-carney-ai-cryptocurrency-scam-prince-albert-sask-9.6975464; Kate Conger and Lizzie Dearden, “Elon Musk’s A.I. Is Generating Sexualized Images of Real People, Fueling Outrage”, The New York Times (9 Jan 2026), online at: https://www.nytimes.com/2026/01/09/technology/grok-deepfakes-ai-x.html; Owen Senitt, “Tory MP reports deepfake defection video to police”, BBC (18 Oct 2025), online at: https://www.bbc.com/news/articles/c62e7xz02dpo.
[9] Matt Burgess and Maddy Varner, “Grok Is Generating Sexual Content Far More Graphic Than What's on X”, Wired (7 Jan 2026), online at: https://www.wired.com/story/grok-is-generating-sexual-content-far-more-graphic-than-whats-on-x/; Hadas Gold, “Elon Musk’s xAI under fire for failing to rein in ‘digital undressing’”, CNN (8 Jan 2026), online at: https://www.cnn.com/2026/01/08/tech/elon-musk-xai-digital-undressing.
[10] Chase DiBenedetto, “Grok is producing millions of sexualized images of adults and children”, Mashable (22 Jan 2026), online at: https://mashable.com/article/grok-sexualized-imagery-report; Hayden Field, “Grok is undressing children — can the law stop it?”, The Verge (6 Jan 2026), online at: https://www.theverge.com/ai-artificial-intelligence/855832/grok-undressing-children-csam-law-x-elon-musk.
[11] Amelia Gentleman and Robert Booth, “X still allowing users to post sexualised images generated by Grok AI tool” The Guardian (16 Jan 2026), online at: https://www.theguardian.com/technology/2026/jan/16/x-still-allowing-sexualised-images-grok-ai-nudification.
[12] “French police raid X offices as they investigate Elon Musk's social media platform and AI chatbot Grok” CBC (3 Feb 2026), online at: https://www.cbc.ca/news/world/french-police-raid-x-grok-elon-musk-9.7072861; Kelvin Chan, “Musk's Grok chatbot faces EU privacy investigation over sexualized deepfake images” PBS (17 Feb 2026), online at: https://www.pbs.org/newshour/world/musks-grok-chatbot-faces-eu-privacy-investigation-over-sexualized-deepfake-images; Dan Milmo, “UK privacy watchdog opens inquiry into X over Grok AI sexual deepfakes”, The Guardian (3 Feb 2026), online at: https://www.theguardian.com/technology/2026/feb/03/uk-privacy-watchdog-opens-inquiry-into-x-over-grok-ai-sexual-deepfakes.
[13] Ruth Peters, “New Law Criminalises Deepfake Creation”, Olliers Solicitors (9 Feb 2026), online at: https://www.olliers.com/news/new-law-criminalises-deepfake-creation/; Richard Morris, “Tech firms will have 48 hours to remove abusive images under new law”, BBC (18 Feb 2026), online at: https://www.bbc.com/news/articles/cz6ed1549yvo.
[14] James Brooks, “Denmark eyes new law to protect citizens from AI deepfakes”, The Associated Press (6 Nov 2025), online at: https://www.ap.org/news-highlights/spotlights/2025/denmark-eyes-new-law-to-protect-citizens-from-ai-deepfakes/.
[15] Anja Karadeglija, “Sexual deepfakes on X show need for Canadian online regulator, advocates say” CBC (13 Jan 2026), online at: https://www.cbc.ca/news/politics/x-deepfakes-canada-9.7043522.
[16] Government of Canada, “Protecting Victims Act: Proposed legislation to protect victims and keep kids safe from predators” (10 Dec 2025), online at https://www.justice.gc.ca/eng/csj-sjc/pl/c16/index.html.
[17] Yuan Y. Stevens, “Canada Must Do More to Protect Women and Girls from Harmful Deepfakes”, Centre of International Governance Innovation (2 Dec 2025), online at https://www.cigionline.org/articles/canada-must-do-more-to-protect-women-and-girls-from-harmful-deepfakes/; Alessandra Detison, “Bridging the Gap: Addressing the Legislative Gap Surrounding Non-Consensual Deepfakes”, Montreal AI Ethics Institute (15 Sep 2025), online at: https://montrealethics.ai/bridging-the-gap-addressing-the-legislative-gap-surrounding-non-consensual-deepfakes/.
[18] Law Commission of Ontario, “Intimate Images and Deepfakes” (n.d.), online at: https://www.lco-cdo.org/en/our-current-projects/intimate-images-deepfakes/.
[19] Jacqueline So, “Ontario Law Commission Initiates Projects on Deepfakes, Workplace Surveillance”, Law Times (9 Feb 2026).