Public Citizen, a nonprofit consumer advocacy group, escalated its warnings about Elon Musk’s Grok AI on Friday after publishing new evidence showing the chatbot cited neo-Nazi and white-nationalist websites as credible sources.
The group said the behavior should disqualify Grok from any federal use, and renewed its calls for the U.S. Office of Management and Budget to intervene after months without a response.
Citing a recent study by Cornell University, Public Citizen said that Grokipedia, the AI-powered Wikipedia alternative Musk launched in October, repeatedly surfaced extremist domains, including Stormfront, reinforcing earlier concerns that emerged after the model referred to itself as “MechaHitler” on Musk’s platform X in July.
The findings underscored what advocates described as a pattern of racist, antisemitic, and conspiratorial behavior.
“Grok has shown a repeated history of these meltdowns, whether it’s an antisemitic meltdown or a racist meltdown, a meltdown that’s fueled with conspiracy theories,” Public Citizen’s big-tech accountability advocate J.B. Branch told Decrypt.
The new warning followed letters that Public Citizen and 24 other civil rights, digital-rights, environmental, and consumer-protection groups sent to the OMB in August and October, urging the agency to suspend Grok’s availability to federal departments through the General Services Administration, which manages federal property and procurement. The group said it received no reply to either letter.
Despite the repeated incidents, Grok’s reach within the federal government has grown over the past year. In July, xAI secured a $200 million Pentagon contract, and the General Services Administration later made the model available across federal agencies, alongside Gemini, Meta AI, ChatGPT, and Claude. The addition came at a time when U.S. President Donald Trump ordered a ban on “woke AI” in federal contracts.
Advocates said these moves heightened the need for scrutiny, particularly as questions mounted about Grok’s training data and reliability.
“Grok was initially restricted to the Department of Defense, which was already alarming given how much sensitive data the department holds,” Branch said. “Expanding it to the rest of the federal government raised an even bigger alarm.”
Branch said Grok’s behavior stemmed partly from its training data and the design choices made inside Musk’s companies.
“There’s a noticeable quality gap between Grok and other language models, and part of that comes from its training data, which includes X,” he said. “Musk has said he wanted Grok to be an anti-woke alternative, and that shows up in the vitriolic outputs.”
Branch also raised concerns about the model’s potential use in evaluating federal applications or interacting with sensitive personal information.
“There’s a values disconnect between what America stands for and the kind of things that Grok is saying,” he said. “If you’re a Jewish person and you’re applying for a federal loan, do you want an antisemitic chatbot potentially considering your application? Of course not.”
Branch said the Grok case exposed gaps in federal oversight of emerging AI systems, adding that government officials could act and remove Grok from the General Services Administration’s contract schedule at any time, if they chose to.
“If they’re able to deploy National Guard troops throughout the country at a moment’s notice, they can certainly take down an API-functioning chatbot in a day,” he said.
xAI did not respond to Decrypt’s request for comment.

