Unpacking the basics: artificial intelligence.

We are reaching the pointy end of our “unpacking” series. We’ve covered the transformation strategy, the people side, the cloud infrastructure and the delivery methods. Now we need to talk about the engine driving the biggest shift in the global economy since the advent of the internet itself… artificial intelligence.

You may already know all about AI. You might be using it proactively, and doing so daily — from using ChatGPT to draft emails all the way to automating and systematising workflows. Or you may have used it inadvertently, in a tool or app that runs AI in the background. Either way, it’s hard to ignore but can also be hard to pin down.

In the business world, the conversation around AI often swings wildly between two extremes:

The utopia: “AI will solve all our problems, double our revenue and we can all retire early.”

The doomsday: “AI is going to take all our jobs, gain consciousness and eventually take over the world.”

The truth, as always, is somewhere in the middle.

But before we can figure out how to use it, we need to strip away the sci-fi baggage and understand where it’s come from and what it actually is.

It’s not magic, it’s maths.

At its simplest level, AI is a prediction engine. It takes a massive amount of data, looks for patterns and uses those patterns to predict an outcome.

But first, let’s go back to the beginning… Ideas about AI and automation have been around for centuries — and the theory behind where we are now is at least 80 years old, laid out in Alan Turing’s papers “On Computable Numbers, with an Application to the Entscheidungsproblem” from 1936 and “Intelligent Machinery” from 1948, among others. These came remarkably close to describing what we are doing today. I also like to remind people that automation was the point of computing from the outset — the fact that you had to cook up a formula in Excel yourself was a bug, not a feature.

Whether we like to admit it or not, AI is simply another stepping stone on this journey.

Back to today… For the last decade, we have mostly been using Predictive AI (or Machine Learning). You interact with this every day. When Netflix recommends a show you might like, that’s AI. When your bank freezes your card because of a “suspicious transaction”, that’s AI. It is looking at historical data and saying, “Based on the past, here is what is likely to happen next.”
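To show just how unmagical this is, here is a deliberately tiny Python sketch (using the scikit-learn library) of the “learn from the past, predict what comes next” loop. The scenario and the numbers, a made-up weekly sales history, are purely illustrative; real systems use far more data and far more sophisticated models.

```python
# A toy "prediction engine": learn a pattern from past data, then predict the future.
# The scenario and numbers are invented purely for illustration.
from sklearn.linear_model import LinearRegression

# Historical data: week number vs. units sold that week.
weeks = [[1], [2], [3], [4], [5], [6]]
units_sold = [120, 135, 150, 160, 178, 190]

# "Look for the pattern": fit a simple model to the historical data.
model = LinearRegression()
model.fit(weeks, units_sold)

# "Predict what happens next": estimate sales for week 7, which hasn't happened yet.
predicted = model.predict([[7]])
print(f"Predicted units for week 7: {predicted[0]:.0f}")
```

Swap “units sold” for “shows watched” or “transactions flagged” and you have the bare skeleton of the recommendation and fraud-detection systems above.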

But in the 2020s, the game changed. We moved from Predictive to Generative.

The shift to Generative AI.

In 2017, researchers at Google published a groundbreaking paper called “Attention Is All You Need”. This laid the foundation for the large language models — the ChatGPTs, Groks and Geminis — we know and love today.

The paper introduced a new deep learning architecture called the Transformer — OpenAI picked this idea up and ran with it (the “T” in ChatGPT stands for Transformer). Then came the ChatGPT moment.
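For the technically curious, the “attention” in that paper’s title boils down to a surprisingly small piece of maths: every word in a sentence scores how relevant every other word is to it, and those scores decide how information gets blended together. Here is a stripped-down NumPy sketch of that single step, with toy numbers standing in for the learned representations real models use.

```python
# Scaled dot-product attention: the core operation behind "Attention Is All You Need".
# Stripped down to the bare maths; real models add learned weights, many heads and many layers.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each row of Q asks "what am I looking for?", each row of K says "what do I contain?",
    # and V holds the information that gets blended together.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # how relevant is each token to every other token
    weights = softmax(scores, axis=-1)  # turn the scores into proportions that sum to 1
    return weights @ V                  # mix the values according to those proportions

# Three "tokens", each represented as a vector of 4 numbers (values invented for illustration).
rng = np.random.default_rng(seed=0)
Q = K = V = rng.normal(size=(3, 4))
print(attention(Q, K, V).shape)  # (3, 4): one blended vector per token
```

Everything else in a modern language model is, roughly speaking, this operation stacked and trained at enormous scale.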

While Predictive AI analyses existing data, Generative AI creates new data.

Think of it like this:

Predictive AI is like a librarian who has read every book in the library. If you ask, “Where can I find a book about history?”, they can point you to the exact shelf instantly.

Generative AI is like an author who has read every book in the library. If you ask, “Write me a new story about history in the style of Shakespeare,” they can write something brand new that has never existed before, drawing on all the knowledge they have absorbed — about Shakespeare himself, the essays and criticism written about his work, the history of literature and so on.
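Under the hood, today’s generative models do something conceptually simple: given everything written so far, predict a plausible next word, then repeat. As a vastly simplified analogy (this is not how ChatGPT actually works, but it captures the flavour), here is a toy Python generator that learns which word tends to follow which from a few sample sentences and then “writes” a new one:

```python
# A toy next-word generator: a vastly simplified analogy for generative AI,
# not how real language models work. The training text is invented for illustration.
import random
from collections import defaultdict

training_text = (
    "the board approved the budget . "
    "the board reviewed the strategy . "
    "the team delivered the strategy on time . "
    "the team reviewed the budget ."
)

# "Absorb the library": record which words tend to follow which in the training text.
follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# "Write something new": repeatedly predict a plausible next word and append it.
random.seed(3)
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows[word])
    output.append(word)
print(" ".join(output))  # a sequence stitched together from the learned patterns
```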

For businesses, this is a major shift. We are now using computers to write code, design images, draft marketing copy and summarise legal documents. You can now talk to your computer as you would to a colleague, a superior or a child — and have it respond to you.

The elephant in the room.

Will AI take my job?

This is the source of the anxiety we touched on in our change management post.

The short answer is: for some tasks, yes.

If your job consists entirely of repetitive, manual data entry or summarising basic text, AI can do that faster and cheaper than you, and probably better than you, too. It doesn’t get tired, it doesn’t have other things on its mind, it doesn’t have beef with its supervisor. It is ready at all times — and it will only get better.

However, for most knowledge workers, AI is not a replacement… it is a force multiplier. It is a co-pilot.

The lawyer who uses AI to scan contracts will beat the lawyer who scans them manually. The developer who uses AI to write boilerplate code will build apps faster than the one typing every character by hand.

The saying currently doing the rounds in the tech industry is accurate: “AI won't replace you. A person using AI will.”

The limitations (and the risks).

However, we need to be careful. Because AI sounds so confident, we tend to trust it blindly.

But AI has flaws.

It hallucinates: Generative AI is designed to be creative, not factual. It can confidently state “facts” that are completely made up, and this keeps happening no matter how advanced the models get.

It is biased: AI learns from the internet. And the internet is full of bias, stereotypes and bad information. If you train a model on biased data, you get biased results — rubbish in, rubbish out.

It is a black box: Often, even the creators of these models don’t know exactly how the AI reached a certain conclusion.

In summary.

Artificial Intelligence is the most powerful and significant tool of our generation. It offers the chance to remove the “drudgery” from our work lives — automating the boring stuff so we can focus on the creative, strategic and human parts of our jobs — and to create productivity gains in our personal lives. For example, you used to have to scroll through Google results to find the answer you were after; now you can just ask Gemini and it will answer you directly.

But it is not a set-and-forget solution. It requires human oversight, strategic application and a healthy dose of scepticism.

As we hand over more of our decision-making to machines, we run into a new set of problems. Who owns the data? Who is responsible when the AI makes a mistake? How do we ensure we use this tech for good?

That brings us to the final, and perhaps most important, piece of the puzzle: ethics. (But that’s a topic for our next and final post).

Is your business looking to integrate AI but unsure where to start? We help you separate the hype from the practical value. Get in touch.
