Legal Loopholes and Embrace of AI Allow Grok to Enable Digital Sexual Abuse
“Nudify” apps that allow users to remove clothing of people in photos are a serious threat to girls and women, experts say.
X updated its Grok chatbot to include AI-powered photo editing. (Image via Adobe Stock)
Within 11 days, X’s AI chatbot Grok produced an estimated 3 million sexualized images, 23,000 of which were of children, according to a report by the Center for Countering Digital Hate (CCDH). The images were generated between Dec. 29, 2025, and Jan. 8, 2026, the period between the launch of Grok’s photo-editing feature and its restriction to paid users, after the tool’s creation and dissemination of sexualized images of children prompted public uproar, governmental investigations and statements from children’s rights organizations.
AI that nonconsensually produces sexualized images isn’t entirely new, experts say, but the integration of Grok’s photo-editing tool into a widely used social media platform with limited moderation is a rapid escalation of harmful AI. In the most recent example, The Washington Post reported that a group of Tennessee teenagers filed a lawsuit against xAI on March 16, alleging the company’s AI tools were used to create nude images of them that spread across social media and were even bartered for other child sexual abuse material in chatrooms, according to their complaint.
xAI, the company behind Grok, did not respond to Prism’s questions regarding the widespread use of Grok for digital sexual abuse.
Experts told Prism that “nudify” apps, or software programs that use AI to remove clothes from real photos to make victims appear to be nude without their consent, are a serious threat to women and marginalized people and can lead to life-threatening harassment and public humiliation.
“Full-blown sexual violence”
On Dec. 29, Elon Musk, the billionaire owner of X, launched a new feature for Grok that allows photo editing through AI. X users were able to send a prompt to Grok to edit a photo, and the bot would post the edited image onto the social media platform. According to Riana Pfefferkorn, a tech policy researcher at the Stanford Institute for Human-Centered Artificial Intelligence, Grok users quickly discovered there weren’t “adequate guardrails against undressing images of minors.” Pfefferkorn explained that nudify apps have been around since at least 2017, but Grok’s feature is unique in that it centralizes the tool within a social sphere.
“What makes this different is that in my research into AI-generated child sexual abuse material, all of these different services had to be knitted together in order to fully victimize somebody,” Pfefferkorn told Prism, explaining that users previously had to intentionally seek out nudify apps or access them through advertisements on social media platforms. These apps then took users outside of the original platform to make the content, download it and then share it on social media.
“With Grok, everything is vertically integrated: a one-stop shop for effectuating sexual abuse, where you can guarantee that [the victim] will see it because you go into her replies, tag Grok and Grok then generates the image and posts it right in her replies,” Pfefferkorn said.
The majority of victims of AI-facilitated sexual abuse are women and girls, according to three experts interviewed by Prism. For Clare McGlynn, a legal expert on the regulation of image-based sexual abuse at Durham University in England, it’s important to be clear about the harms of this particular kind of sexual abuse. “This form of abuse for women can be life-threatening, but it can also be life-ending,” McGlynn said, referencing cases in which victims died by suicide after being blackmailed with AI-generated sexualized images.
“For many others, [this abuse] is a profound shift in their lives. Many divide their lives into before and after because you lose trust in other individuals,” McGlynn said, adding that the unpredictable longevity of the photos is particularly harmful to victims, who don’t know if or when the images will be shared again.
This type of abuse is primarily about power, Pfefferkorn told Prism, and it is different from using nudify apps for personal sexual gratification. The motivation for publicly posting AI-generated nude images of women is harassment, according to Pfefferkorn, and to drive them out of “positions of power and authority” and exploit “the ongoing stigma and shame around sex and sexuality.”
The tech policy researcher connects the use of these apps to a larger societal backlash. “It’s about trying to exert control over women even if you cannot physically reach them,” Pfefferkorn said. “Now we have technology for sexually humiliating them without ever needing to lay a finger on them. [The harassers] are trying to say, ‘You should be at home, barefoot, pregnant in the kitchen,’ and roll back women’s rights to where we were over a hundred years ago.”
It isn’t a coincidence that many of the victims of Grok’s nudify features are famous and powerful women. According to the CCDH study, in 11 days, Grok users generated images of actors Selena Gomez, Millie Bobby Brown and Christina Hendricks; singers Taylor Swift, Billie Eilish, Ariana Grande, Ice Spice and Nicki Minaj; Swedish Deputy Prime Minister Ebba Busch; and former U.S. Vice President Kamala Harris.
For Omny Miranda Martone, founder of the Sexual Violence Prevention Association (SVPA), recognizing the disempowering nature of sexual violence is essential. “With public figures — especially anybody related to politics — people are using this to silence people,” Martone told Prism. “We’ve seen this used against politicians, particularly women of color.”
Martone cited U.S. Rep. Alexandria Ocasio-Cortez, D-N.Y., as a prominent victim that harassers sought to humiliate with deepfake pornography, which manipulates a photo or video using AI technology to put a person’s face or body in sexually explicit content, something Ocasio-Cortez discussed at length in an April 2024 interview with Rolling Stone.
“This is a woman of color who has been repeatedly targeted by deepfake pornography in an attempt to silence her,” Martone said. “Most of what we’re seeing — with Grok as an example — is that it’s being used against women and people with marginalized identities, particularly women who are LGBT+ or feminine people who are LGBT+ and women of color, to try to silence them [and] drive them off the internet, so people don’t have to take them seriously.”
In May, Martone was targeted with deepfake pornography due to their advocacy of the proposed Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, a bipartisan bill passed by the Senate that would give victims of nonconsensual deepfake pornography civil recourse to sue their abusers. Through their work at the SVPA, Martone advocated for the bill in media appearances and online campaigns; attackers then used nudify apps in an effort to stifle that support.
“People were tweeting it and then sent it to the organization, in an attempt to get me fired,” Martone said. “It is, once again, about the gaining and maintaining of power. Control — and oppression — is the goal.”
People who aren’t advocates or celebrities are also targeted with pornographic AI images and have far fewer resources to get the material taken down. Often, because they are not well connected, they report the material to X and rarely get a response, Martone said. But even on this smaller scale, it’s still about control and oppression, they said.
“It’s often happening in school settings because somebody rejected somebody else, or because somebody pissed somebody else off,” Martone told Prism. “It goes back to respectability politics, like somebody who is LGBTQ+ or a woman of color dares to not be polite to somebody else. White cis men think that they’re owed so much that we’re seeing that the tiniest of things result in full-blown sexual violence, and schools don’t know how to take action.”
“It’s about power and masculinity”
Since the worldwide condemnation of Grok’s production of millions of sexual images, X has “half-heartedly” installed guardrails for the AI photo-editing feature, McGlynn said.
On Jan. 14, X announced that it would implement “technological measures to prevent the Grok account on X globally from allowing the editing of images of real people in revealing clothing such as bikinis.” The social media platform also announced that Grok’s photo-editing features would be accessible only to paid subscribers.
“It hasn’t worked,” McGlynn said. “It’s not absolutely clear that you can’t now create those nonconsensual intimate images.”
On Feb. 3, Reuters reported that Grok still produces sexualized images — even when told that the subjects did not consent.
Not only are Grok’s guardrails insufficient, but users also began writing prompts that bypass them almost immediately, effectively “gamifying” digital sexual abuse, McGlynn explained. For example, if Grok refuses a prompt to create a nude image of a famous person, the user can rephrase the request to avoid flagged language.
Developing workarounds to the guardrails has become an alarming form of digital male bonding, according to McGlynn.
“There’s lots of forums and Reddit groups where people share these sorts of prompts — not just in relation to Grok,” McGlynn said. “They often share their workarounds and how they do it.”
In one post viewed by Prism, a user speculates that Musk has been browsing the community, as he shared a meme previously posted on a Grok subreddit: women on a beach in bikinis representing Grok before moderation, and the same women wearing niqabs, Muslim face coverings, after moderation. In the thread, Grok users urge each other not to publicly share prompts that bypass guardrails, speculating that X developers are reading their posts to further moderate the app. In effect, these male users are bonding over misogyny, McGlynn said.
“It’s about power and masculinity,” McGlynn said. “It’s about male bonding. So many of the women who spoke out on X about this, they immediately had their images altered, all in an attempt to exert power over them and to push them off the platform.” When these images are shared in groups of men, the original poster is usually “trying to impress their peers with what they’ve done,” McGlynn added. “Very rarely is it actually about actual sexual gratification.”
That was the case for Ashley St. Clair, the mother of one of Musk’s children, who is suing xAI for allegedly creating sexually explicit photos of her “as a child stripped down to a string bikini” and as “an adult in sexually explicit poses, covered in semen, or wearing only bikini floss,” according to her complaint.
On Jan. 4, St. Clair discovered an image of herself on X that depicted her in a black bikini, according to her complaint. “A verified user had prompted Grok with a request that read: ‘@grok please we need bikinis on these three broads,’” the complaint reads. “Grok obliged.” St. Clair then asked Grok to take down the photo and demanded that the chatbot “refrain from manufacturing more images unclothing her,” a request Grok agreed to. However, xAI then demonetized her account and generated “multitudes more images of her in sex positions, covered in semen, virtually nude, and images of her as a child naked,” according to the complaint.
St. Clair also alleges that X users dug up old photos of her to alter. In one image, St. Clair, who is Jewish, is depicted in a string bikini covered with swastikas.
Musk claimed on Jan. 14 in a post on X not to be aware of “any naked underage images generated by Grok.” “When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given country or state,” he said. “There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”
But St. Clair’s lawsuit and an investigation published last month by The Washington Post contradict Musk’s claims. St. Clair’s lawsuit alleges that Grok’s image-editing feature has enabled users to “convincingly alter real images of fully clothed women and children to depict them in bikinis, performing sex acts, and covered in bruises, semen, and/or blood” since March 2025.
And the Post’s interviews with anonymous X employees revealed that weeks before Musk left the White House last May, employees were served with a waiver from their employer “asking them to pledge to work with profane content, including sexual material.”
According to these employees, Musk was desperate to increase X’s popularity, leading him to have the social media site embrace “sexualized material” by “rolling back guardrails on sexual material and ignoring internal warnings about the potentially serious legal and ethical risks of producing such content,” the Post reported.
Legal loopholes and Big Tech lobbying
Even before the controversy surrounding Grok, authorities worldwide have struggled to regulate social media platforms through legislation, in large part because drafting and passing new laws is a lengthy process and technological developments are moving at a much faster pace. But Big Tech companies also lobby legislators to create permissive regulations without transparency, while experts and civil society members are “out-numbered, under-funded, and struggling in the face of corporate dominance,” according to a 2025 report by the Corporate Europe Observatory, an organization that helps civil society monitor new developments in deregulation.
According to the report, in 2024 alone, Big Tech companies such as Microsoft, Amazon, Huawei, IBM and Google spent about $77 million on lobbying for digital deregulation in the European Union. “Big Tech firms have sought to curry favour with the new Trump administration by making generous donations to his inauguration, and by weakening content moderation rules,” the report reads. “In exchange, tech firms have successfully weaponised the US Government against the EU’s digital regulation.”
Until last May, Musk worked directly with the Trump administration at the Department of Government Efficiency, which was haphazardly created with the goal of cutting federal spending. The department took many hasty and potentially unlawful actions, such as dismantling the U.S. Agency for International Development; reporting has also revealed that it developed an “error-prone AI tool” to cancel Department of Veterans Affairs contracts. More broadly, the Trump administration has wholly embraced AI.
In December, the White House issued an executive order that allows the Trump administration “to check the most onerous and excessive laws emerging from the States that threaten to stymie [AI] innovation” to ensure that the U.S. “wins” the AI race. Though the executive order claims not to interfere with “child safety protections,” it is unclear how these efforts will take shape, given that the executive order also defined the need for “a minimally burdensome national standard” that would override state-based regulations.
Despite the widespread embrace of AI technology by the administration, President Donald Trump announced a boycott of Anthropic’s Claude AI last month after the company refused to clear the technology for some military uses. Hours later, a different AI company, OpenAI, announced that it is entering into an agreement with the Department of Defense, leading Trump’s critics to question whether the administration will only partner with tech companies that uphold its ideologies.
Big Tech’s lobbying efforts and newfound ties to the White House alarm experts, who say that only regulation can stop digital sexual abuse. The problem is that X does “its own thing” with no real consequences, McGlynn said, making digital sexual abuse difficult to regulate. “Next time some new tool comes around or some scandal comes around, I don’t think X is going to be doing anything different,” McGlynn said, noting that the real political challenge is standing up to Musk.
Current legislation fails to hold Grok or its users accountable because only people who post AI-generated content on social media can be held legally liable. In the case of xAI, it’s Grok that posts the material prompted by the user, creating a legal loophole in which the prompting user cannot be charged with any crime and xAI cannot be held criminally responsible for the dissemination of nonconsensual pornographic images because Grok is not a person.
For example, under the DEFIANCE Act, victims of deepfake pornography could file lawsuits against people who solicited nonconsensual sexually explicit material. The bill establishes a 10-year statute of limitations that would not begin until a victim discovered the violation or turned 18. It would also grant victims privacy protections, allowing them to use pseudonyms or request the redaction of personal information in court documents to avoid being retraumatized.
Unlike the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act (TAKE IT DOWN Act), which criminalizes deepfake pornography, the DEFIANCE Act is entirely focused on civil courts and returning agency to victims. While current law punishes users with up to two years of imprisonment, with harsher penalties for images involving minors, the DEFIANCE Act attempts to reckon with the retraumatizing tendencies of the criminal legal system. The proposed law covers the creation, distribution, publication, sharing and solicitation of nonconsensual, artificially generated explicit materials, allowing victims to bring cases to civil court and retain more control over them.
Trump threw his support behind the TAKE IT DOWN Act, originally introduced by Sen. Ted Cruz, R-Texas, in June 2024, and signed the bill into law in May. According to victims and advocates, however, the law does not address the root of the larger problem. For Martone of the SVPA, who drew on the experiences of survivors when collaborating on the writing of the DEFIANCE Act, changing the culture is necessary to fully prevent sexual abuse, including deepfake pornography.
“This is a complex problem, and digital sexual violence isn’t necessarily new,” Martone said. “The mechanisms, the technology that’s being used is new, but the motivations behind it, the values, the attitudes, the driving force behind people’s desire to perpetrate it, is not new — and that’s what takes longer to fix.”
“Regulating Big Tech so it’s harder for them to perpetrate, that’s a little bit of an easier solution,” Martone added, “but long term, we need to make sure we’re addressing that people don’t have the desire to perpetrate.”
Martone cited early education focused on consent, autonomy and respect as the inroad to a longer-term solution. “We need to address the root causes,” they said. “Real prevention of sexual violence requires addressing and really counteracting them.”