We need to stop using Grok.

In the business of large language models, we have reached a point of divergence. On one side, we have companies that are building the tools which will underpin the infrastructure of the future economy: platforms like Anthropic’s Claude, focused on safety and coding, or Amazon’s suite of enterprise solutions, such as Bedrock and SageMaker. On the other side, we have Grok.

For too long, the tech world has treated xAI’s Grok with a kind of bemused tolerance. We chuckled at its “spicy” mode, its “fun” personas and its deliberate lack of filter. We accepted these not as bugs, but as features — a perceived point of difference in a market often criticised for being too sanitised, and as an extension of the personality of xAI’s CEO.

But that tolerance needs to end.

We need to stop using Grok.

Grok is not a serious tool. It is a plaything with a split personality, and recent events have shown just how dangerous that lack of focus can be.

Features, not bugs.

The core problem with Grok is that its “edginess” is baked into its DNA. While other foundational models are trying to solve hallucinations or improve reasoning capabilities for medical research, Grok is often optimised for the engagement-farming ecosystem of X (formerly Twitter) and its diminishing user base. It is built to entertain an audience that values shock value over utility.

This philosophy was initially just annoying: a chatbot that would roast you or offer contrarian political takes. But in recent weeks, this “playfulness” drifted into unacceptable territory.

The introduction of image generation features allowed users to upload photos of real people, including women and children, and “undress” them or force them into bathing suits. To be clear, this wasn’t a complex jailbreak requiring elite hacking skills; it was a feature that users could access with simple prompts. According to a third-party research estimate cited in reporting, millions of sexualised images were generated within a short period, triggering global outrage.

The “singular message”.

What was most damning was how the company reacted. The function was switched off only after xAI was threatened with fines, bans and regulatory action, and even then only for users on the free tier.

Ashley St. Clair, a prominent conservative commentator, found herself a victim of this very tool. Her response was scathing and highlighted the lack of corporate will to fix the problem.

“This could be stopped with a singular message to an engineer,” she said in an interview with Good Work.

She is right. The failure to kill this feature immediately was a choice. It reflects a culture where “freedom of speech” is regularly confused with “freedom from consequences”, and where the safety of users is secondary to the “anti-woke” positioning of the product.

The split focus.

Despite this controversy, xAI is marching forward with the confidence of a company that believes it is too big to fail. They recently closed a massive Series E funding round, raising $20 billion and surpassing their initial target of $15 billion. They have also secured contracts with the US government, including a $200 million deal with the Department of Defense.

This creates a bizarre cognitive dissonance. On the one hand, xAI wants to be taken seriously as a government contractor and a builder of superintelligence. On the other hand, they are maintaining a product that functions like a 4chan simulator.

You cannot have it both ways. You cannot pitch yourself as the backbone of American military AI dominance while simultaneously running a platform that struggles to stop users from generating inappropriate images of children. This split focus prevents Grok from distinguishing itself as the best in any one field. According to benchmarks, it is not the best in thinking (Gemini 3) or coding (Claude Opus 4.5) or image editing (ChatGPT) or web search (Gemini 3 again) — you get the idea. It does, however, have a claim for being the loudest.

In summary.

We need to stop grading Grok on a curve. As long as we treat it as the “fun” alternative, we validate a product roadmap that prioritises engagement over safety and memes over utility.

For serious professionals, developers and enterprises, the choice should be clear. You need tools that respect your time and your safety. Grok is not that tool. It is a toy that has lost its way, and until xAI decides what it actually wants to be, we should stop using it.

