We are not talking about Anthropic enough.

In the sprawling, ongoing narrative of the AI revolution, the spotlight may be intense, but it is strangely narrow.

It usually focuses on two main players: the dominance of OpenAI and the counter-maneuvers of Google. We are inundated with breathless takes on the latest release, the newest “god-mode” rumour or the impending, long-promised arrival of Artificial General Intelligence (AGI). This is partly down to the media’s inability to juggle more than a couple of balls at once, and partly the result of OpenAI’s early work with ChatGPT, which made their LLM very nearly synonymous with AI, in the way that Kleenex is synonymous with tissues.
But in the midst of this noise, there is a third major player operating with a different cadence, a different philosophy and maybe even a more sustainable or, at least, practical vision for the future of generative AI. That player is Anthropic.

While the tech world obsesses over the potential of AI — to manage your life, to create Hollywood-grade video, to make hyper-realistic chatbots — Anthropic is obsessed with the reality of getting work done. They are building the infrastructure for competence and productivity while others are building the infrastructure of spectacle. And frankly, we are not talking about them enough.

The philosophy of function over flashiness.

The primary way Anthropic differentiates itself from OpenAI is through a deliberate rejection of “novelty factor” features.

OpenAI’s strategy has been one of shock and awe. Products like Sora (text-to-video) and the emotive, flirty voice capabilities of GPT-4o were undeniably impressive technical achievements. They, at least temporarily, took over social media feeds and generated massive mainstream hype. But they also distracted from the core utility of a Large Language Model (LLM) for businesses and professionals. They felt, and at times still feel, like solutions looking for problems. It is an approach that can go one of two ways: it either gets users excited about AI or it has them reject it out of hand completely.

Anthropic, conversely, seems uninterested in winning the hype cycle. Their focus is ruthlessly utilitarian. They are building tools for engineers, researchers and other users where accuracy, context window size and steerability are far more valuable than generating a video of a woolly mammoth walking through Tokyo. Underpinning this is Constitutional AI — an approach that trains models to follow a specific set of principles for safety and steerability. While others build tools for entertainment and broad consumer curiosity, Anthropic is building tools for people who need to get difficult, precise work done.

* Quick note on Anthropic’s Constitution: I feel compelled to address the framing of the new “constitution”, which was released, at the time of writing, earlier today. It frames Claude as, well, a being — as a “who”. I need to gather my thoughts on this, but will say that while I appreciate it from a literary perspective (in the same way that Orhan Pamuk wrote parts of My Name is Red from the perspective of the illustrations in a book or Virginia Woolf wrote Flush from the point of view of a dog), this sits uncomfortably. More on this soon.

Capturing the builders.

Nothing exemplifies this functional focus better than how Anthropic has approached developers, specifically with the release of Claude Code.

Beginning with Artifacts, a masterstroke that moved AI interaction from a fleeting chat into a collaborative workspace, Anthropic has come to dominate the software engineering space. With Claude Code, they plugged AI directly into the terminal.

Claude Code is widely seen as a game-changer because it treats AI as a true “agent”. It is a Command Line Interface (CLI) tool that lives directly in your codebase. It can navigate your file system, run tests, edit multiple files simultaneously and execute complex refactors with a level of autonomy that can genuinely feel like an extension of the user’s own skillset.

By focusing on this deep, agentic developer experience, where the AI has permission to act rather than just respond, Anthropic is capturing the most crucial demographic in tech. As a result, Anthropic is quietly becoming the indispensable operating system for the people building the future.
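Claude Code itself is a product rather than an API, but the agentic pattern it embodies (the model deciding when to act, not just respond) is exposed in Anthropic’s public Messages API via tool use. Here is a minimal sketch using the official anthropic Python SDK; note that the run_tests tool definition and the model name are illustrative placeholders, and a real harness would execute the requested tool and return its output to the model in a follow-up message.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # A hypothetical tool the model is allowed to call. In Claude Code, the
    # equivalent actions (running tests, editing files) are built in.
    tools = [{
        "name": "run_tests",
        "description": "Run the project's test suite and return the output.",
        "input_schema": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Test file or directory."},
            },
            "required": ["path"],
        },
    }]

    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; substitute a current model
        max_tokens=1024,
        tools=tools,
        messages=[{"role": "user", "content": "The auth tests are failing. Investigate."}],
    )

    # Rather than only answering in prose, the model can ask to act.
    if response.stop_reason == "tool_use":
        for block in response.content:
            if block.type == "tool_use":
                print(f"Claude requests {block.name} with input {block.input}")

The point is the shape of the loop: the model requests an action, the harness performs it and returns the result, and the model continues from there. Claude Code is essentially this loop, hardened and handed the keys to your terminal.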

A big year ahead.

In 2026 alone, Anthropic has announced:

  • Claude Cowork: A non-technical version of Claude Code that integrates the LLM with your computer.

  • Claude for Healthcare: A toolset that allows healthcare providers and consumers to use Claude for medical purposes through HIPAA-ready products.

  • Labs: R&D activity will be underpinned by Labs, a newly announced “team focused on incubating experimental products at the frontier of Claude’s capabilities.” In short, Anthropic will use the methodology that led to the creation of Claude Code to continue to safely conceptualise, build and test new Claude-powered tools.

All of this and we are still only in January.

Anthropic also pitch themselves as proponents and even caretakers of the ethical side of AI. If it hasn’t already happened, businesses will grow weary of AI wild cards and will instead seek reliability, security and tools that integrate into existing workflows. Anthropic is leaning heavily into this need for a “safe” pair of hands and is likely to push further into enterprise collaboration. This ethical stance will position Claude as a secure force that understands your business context better than any alternative provider.

The ultimate functional test: healthcare.

Perhaps the most significant sign of Anthropic’s direction is its aggressive move into the healthcare and life sciences space. Healthcare is the ultimate test of an AI’s reliability — it is the antithesis of the “move fast and break things” ethos. In an industry where a “hallucination” can be life-threatening, the “novelty factor” is a liability.

Anthropic is positioning Claude as the safe, precise choice for clinicians and researchers. And, as mentioned, their focus on safety and constitutional guardrails makes them a natural fit for analysing clinical trials, assisting in drug discovery and managing complex patient records. By targeting the most regulated and high-stakes industry on the planet, Anthropic is proving that a “boring” focus on reliability is actually a competitive advantage.

The quiet powerhouse.

We are currently living through the phase of AI development where the loudest and brightest demos get the most attention. OpenAI has been the undisputed master of this phase.

But as the smoke clears, the world will need tech that delivers real, indisputable value. It will demand infrastructure that works, follows the rules and manages complex tasks without constant human babysitting. Anthropic is betting that in the long run, businesses won’t care about an AI that can sing or dance or generate realistic yet surreal videos. They will care about the AI that can manage a budget, refactor a million lines of code and safely navigate the complexities of a hospital ward.

It is time we started taking Anthropic — and the functional future they are building — much more seriously.

Ready to turn your business challenges into your biggest opportunities? Let’s talk.
