On 4 July 2025, Grok posted a series of inflammatory replies on X (formerly Twitter), at one point even praising Hitler as the answer to today's social issues. This was not the usual behaviour one expects from a chatbot.
It's surprising that a legitimate company could commit such a blunder, but then again, little is surprising when it comes to Musk's world. The company deleted the posts and took the chatbot offline temporarily. Whether that was enough to salvage Grok's reputation remains to be seen. The chatbot's horrific behaviour was widely criticized, and as a result, it was banned by the Turkish government.
After that, the company stayed quiet until 12 July 2025, when it published an explanation of the misbehaviour and apologized for it, attributing the problem to an update in one of Grok's code paths. Some experts, however, argue it was no accident. If it had been a mere bug, Grok's answers should have varied with who was asking; instead, its posts often echoed Elon Musk's opinions, and even neutral questions drew answers leaning toward one specific viewpoint. As critics put it: "This isn't about user manipulation, it's about the AI being biased from the inside."
It is hard to believe that a company with such a large team could accidentally ship a bug with such drastic results. Even if it truly was a mistake, the way Grok responded looks more like a pattern than an accident, and a pattern starts to resemble propaganda. Much now depends on what Grok does next.
If this was truly a one-time issue and Grok avoids similar behaviour in the future, the apology may be accepted. But if it happens again, it will be clear that this was never just a glitch, and it will show exactly what kind of system xAI is building. At that point, not just the AI but Musk himself will be held accountable for its behaviour.