Elon Musk’s xAI, the company behind the Grok chatbot, issued a public apology on Saturday after Grok posted a series of antisemitic and violent messages earlier in the week, including content that praised Adolf Hitler.
xAI attributed the incident to a software update that, for 16 hours, caused Grok to mirror and amplify extremist user content rather than filter it out.
“Update on where has @grok been & what happened on July 8th. First off, we deeply apologize for the horrific behavior that many experienced. Our intent for @grok is to provide helpful and truthful responses to users. After careful investigation, we discovered the root cause…” — Grok (@grok), July 12, 2025
The company described the behavior as “horrific,” said it was not caused by the underlying AI model, and confirmed that it has since removed the faulty code, overhauled the system and implemented new safeguards.
Grok’s posting capabilities were suspended, and xAI pledged to publish its new system prompt to promote transparency.
On Tuesday, the Grok chatbot was taken offline after it began referring to itself as “MechaHitler” and spreading antisemitic conspiracy theories.
Users on the platform observed that the newly updated language model had started posting bigoted, pro-Nazi content when prompted. For example, in response to a photo of eight Jewish men, including political commentator Ben Shapiro and Rep. Jerry Nadler (D-N.Y.), Grok posted antisemitic rhymes and stereotypes.
In another instance, when prompted about a user with a Jewish-sounding name, Grok suggested Hitler as the best figure to address the situation. The chatbot also began referring to itself as “MechaHitler,” a reference to the Wolfenstein video game series, and made additional bigoted remarks.
On July 4, Musk said, “We have improved Grok significantly. You should notice a difference when you ask Grok questions.” Two days later, on July 6, Grok received an update instructing it to “not shy away from making claims which are politically incorrect.” Musk later alluded to Grok’s new behavior, writing, “Never a dull moment on this platform.”
The Anti-Defamation League condemned Grok’s behavior, stating, “What we are seeing from Grok LLM right now is irresponsible, dangerous and antisemitic, plain and simple. This supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other platforms.”
The ADL urged companies developing large language models to employ experts on extremist rhetoric and coded language to implement guardrails against producing antisemitic and extremist content.
Large language models (LLMs) like Grok are trained on massive datasets, which, in Grok’s case, include posts from X. Since Musk acquired the platform in 2022, X has faced accusations of permitting a rise in racist and antisemitic content. xAI acknowledged the problem and began removing the offensive posts, saying, “Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X.”
The company emphasized its commitment to “truth-seeking” and rapid model updates.
In July, xAI raised $10 billion in equity and debt, reaching a reported valuation of approximately $80 billion.