The coalition is seeking a federal Grok ban over non-consensual sexual content

A coalition of nonprofit organizations is urging the US government to immediately halt the deployment of Grok, the chatbot from Elon Musk’s xAI, to government agencies, including the Department of Defense.
The open letter, shared exclusively with TechCrunch, follows a string of troubling behavior from the large language model over the past year, including the recent trend of X users asking Grok to turn images of real women, and in some cases children, into pornographic images without their consent. According to some reports, Grok was producing thousands of these compromising images every hour, which were then distributed at scale on X, Musk’s social media platform, which is owned by xAI.
“It is deeply concerning that the federal government would continue to deploy an AI product with system-level failures that result in the production of non-consensual pornography and child sexual abuse material,” reads the letter, signed by advocacy groups including Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America. “Given the administration’s orders, guidance, and the recently passed Take It Down Act supported by the White House, it is alarming that [the Office of Management and Budget] has not yet directed government agencies to withdraw Grok.”
xAI reached an agreement last September with the General Services Administration (GSA), the government’s procurement agency, to sell Grok to government agencies under the executive branch. Two months earlier, xAI — alongside Anthropic, Google, and OpenAI — won a $200 million contract with the Department of Defense.
Amid the X scandal in mid-January, Defense Secretary Pete Hegseth said Grok would join Google Gemini in operating inside the Pentagon’s network, which handles both classified and unclassified documents, a move experts say poses a national security risk.
The letter’s authors argue that Grok has proven incompatible with the administration’s own requirements for AI systems. Under OMB guidance, agencies must stop using systems that exhibit significant foreseeable risks that cannot be adequately mitigated.
“Our biggest concern is that Grok has consistently shown itself to be an unsafe large language model,” JB Branch, Big Tech accountability advocate at Public Citizen and one of the letter’s authors, told TechCrunch. “But there is also a deep history of Grok malfunctioning in a variety of ways, including antisemitic rants, sexism, and sexualized images of women and children.”
Several governments have shown reluctance toward Grok following its behavior in January and a history of incidents that included producing antisemitic posts on X and calling itself “MechaHitler.” Indonesia, Malaysia, and the Philippines all blocked access to Grok (and later lifted those bans), and the European Union, the UK, South Korea, and India are actively investigating xAI and X over data privacy and the distribution of illegal content.
The letter also comes a week after Common Sense Media, a nonprofit that reviews media and technology for families, published a risk assessment that found Grok poses serious risks to children and teenagers. One could argue that, based on the report’s findings, including Grok’s tendency to give unsafe advice, share information about drugs, produce violent and sexual imagery, amplify conspiracy theories, and generate biased outputs, Grok is also unsafe for adults.
“If you know that a large language model has been declared unsafe by AI safety experts, why on earth would you want it handling the most sensitive data we have?” Branch said. “From a national security perspective, that makes absolutely no sense.”
Andrew Christianson, a former National Security Agency contractor and current co-founder of Gobbi AI, a no-code AI agent platform for classified environments, says using closed models is inherently problematic, especially at the Pentagon.
“Closed weights mean you can’t see inside the model; you can’t study how it makes decisions,” he said. “Closed code means you can’t audit the software or control where it runs. At the Pentagon, Grok would be closed on both counts, which is a very bad combination for national security.”
“These AI agents aren’t just chatbots,” Christianson added. “They can take actions, access systems, and move information around. You need to be able to see exactly what they’re doing and how they’re making decisions. Open source gives you that. Proprietary cloud AI doesn’t.”
The dangers of using compromised or unsafe AI systems extend beyond national defense use cases. Branch pointed out that an LLM shown to produce biased and discriminatory outputs can have disproportionate negative effects on people, especially if it is used by agencies that deal with housing, labor, or justice.
While the OMB has yet to publish its consolidated list of 2025 AI use cases, TechCrunch reviewed the use-case inventories of several agencies, most of which either do not use Grok or do not disclose their use of it. Besides the DoD, the Department of Health and Human Services also appears to be actively using Grok, particularly for organizing and managing social media posts and producing first drafts of documents, reports, and other communications materials.
Branch pointed to what he sees as a philosophical affinity between Grok and the administration as one reason it might be willing to overlook the chatbot’s shortcomings.
“Grok’s brand is that of an ‘anti-woke’ large language model, which is consistent with the philosophy of this administration,” Branch said. “If you have an administration that has had plenty of problems with staffers accused of being neo-Nazis or white nationalists, and there’s a large language model associated with that kind of behavior, I would think they might have a tendency to use it.”
This is the coalition’s third letter, after it raised similar concerns in August and October of last year. In August, xAI introduced a “spicy mode” to Grok Imagine, leading to the creation of dozens of non-consensual sexual deepfakes. TechCrunch also reported in August that Grok users’ private conversations were being indexed by Google Search.
Prior to the October letter, Grok had been accused of providing false information about elections, including incorrect deadlines for changing votes, and of generating political deepfakes. xAI also launched Grokipedia, which researchers found endorses scientific racism, HIV/AIDS denialism, and vaccine conspiracies.
In addition to immediately halting the government’s deployment of Grok, the letter asks OMB to formally investigate Grok’s safety failures and whether proper oversight procedures are in place for the chatbot. It also asks the agency to publicly clarify whether Grok has been vetted for compliance with Trump’s executive order requiring LLMs to be truth-seeking and ideologically neutral, and whether it meets OMB’s risk mitigation standards.
“The administration needs to pause and reevaluate whether Grok meets those thresholds,” Branch said.
TechCrunch has reached out to xAI and OMB for comment.



