Unpacking the basics: ethics and technology.

Over the last few weeks, we have unpacked the strategy (digital transformation), the people (change management), the infrastructure (the cloud), the methodology (agile) and the engine (AI). But there is one final piece of the puzzle. It is intent.

In the rush to adopt the latest tools, it is easy to ask, “Can we build this?” It is much harder to ask, “Should we build this?”

As technology becomes more powerful and more invasive, the line between “clever marketing” and “manipulation” is getting very blurry. In the 2020s, the businesses that win will be the ones that use this technology in good faith. I say that for a few reasons…

Let’s unpack the ethics of modern tech.

The ‘dark patterns’ trap.

To understand what good ethical practice looks like, we first need to look at the bad. In the industry, these are called dark patterns.

These are design tricks built into websites and apps, specifically engineered to make you do things you didn’t mean to do. They come in all shapes and sizes: some are more obvious than others, some are serious, some are harmless, and companies large and small engage in these practices.

The roach motel: This is where a service is incredibly easy to sign up for (one click!), but almost impossible to leave. You have to call a phone number, send a fax or navigate through twelve pages of “Are you sure?” to cancel.

The bait and switch: You click a button that looks like it will do one thing (e.g. “close window”), but it actually does another (e.g. “start download”).

False urgency: “Only 2 rooms left at this price!” (when there are actually hundreds).

Then we get into the murky territory of gamifying user behaviour through dopamine triggers, algorithms and gambling-like random number generators that create addictive behaviours.

These tactics might squeeze money out of customers in the short term, but they destroy trust in the long term. And in an age where customers are more data-savvy than ever, losing trust can be a death sentence.

The AI trust gap.

This brings us to Artificial Intelligence. As we discussed in our last post, AI is a prediction engine, not a human… but it is getting very good at sounding like one.

This creates a massive ethical minefield.

If a customer is chatting to a support agent on your website, do they know it’s a bot? If you use AI to write a personalised sales email, is it dishonest to make it look like you typed it yourself?

The core principle here must be transparency.

There’s no need to disclose to clients that you created your banner using Canva’s AI tool. But if you are using AI to make decisions about people’s lives (like approving a loan or screening a CV), you should explain how that decision was made. If you are using a bot, say it’s a bot. If you are tracking data, say you are tracking data.

The golden rule: human-in-the-loop.

So, how do you ensure you are operating in an ethical way?

We often think of automation as a way to remove humans entirely — I’ve had clients make this exact request. But ethical automation keeps a human involved in critical stages, and the best safeguard against ethical failure is a concept called “human-in-the-loop” (HITL). Here’s how it works:

The loop: AI does the heavy lifting (analysing the data, drafting the report).

The human: Reviews the output, checks for bias, applies context and makes the final call.

For example, an AI might flag a large transaction as “fraudulent” because it fits a pattern. A human looks at it and realises, “Oh, no, that’s just a customer buying a wedding ring in a different city.” They can choose to let that transaction pass, or contact that customer to verify it, and so on.

Without the human in the loop, the computer just blocks the card and ruins the customer’s day. The human provides the empathy and the judgement that the technology simply cannot.
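The fraud example above can be sketched as a simple workflow. This is a minimal illustration, not a real fraud-detection API: the function names, the scoring logic and the threshold are all hypothetical stand-ins.

```python
# A minimal human-in-the-loop sketch: the AI scores every transaction,
# but a human makes the final call on anything it flags.
# All names and thresholds here are illustrative.

def model_score(transaction):
    # Stand-in for the AI: returns a fraud-risk score between 0 and 1.
    # A real model would weigh amount, location, purchase history, etc.
    return 0.9 if transaction["amount"] > 5000 else 0.1

def human_review(transaction):
    # Stand-in for the analyst who applies context the model lacks,
    # e.g. "that's just a customer buying a wedding ring in another city".
    print(f"Review needed: {transaction}")
    return "approve"  # the human's judgement call

def process(transaction, threshold=0.8):
    score = model_score(transaction)
    if score < threshold:
        return "approved"  # AI handles the routine cases on its own
    # High-risk cases go to a person instead of being auto-blocked.
    decision = human_review(transaction)
    return "approved" if decision == "approve" else "blocked"

print(process({"amount": 120, "city": "home"}))   # routine case
print(process({"amount": 8000, "city": "away"}))  # flagged, human decides
```

The design point is in `process`: the algorithm never has the power to block a customer outright; it can only escalate to a person.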

I should also mention that HITL gets regular criticism: people see it as flawed, a cop-out, clichéd, a softball approach to governance, and so on. Which it is. But something is absolutely better than nothing, and applying HITL will get a business in the right mindset to take that thinking further.

Ethics as a competitive advantage.

For a long time, “ethics” was seen as a compliance issue, ticking the box to avoid getting sued.

Today, it is a competitive advantage. Why do I say that? Well, because things are about to get crazy.

We are entering an era of deepfakes and algorithmic bias, where the real and the fictional become very nearly inseparable. Customers are becoming suspicious of everything they see on a screen. If your brand puts people first and establishes itself as a beacon of truth, meaning you are transparent about your data, honest about your automation and respectful of your users, you will stand out.

In summary.

Technology is neutral. Use it, don’t use it; the tool doesn’t care one way or the other. It’s up to us as users to define how we use it. And we should make the effort to think those rules through, map them out and communicate them to the broader team or business. In many ways, digital transformation is also about upgrading your responsibility.

Here are some basic guidelines:

  • Don’t mislead: Avoid dark patterns.

  • Be transparent: If it’s AI, say it’s AI. Disclosure will feel less fraught and loaded over time anyway.

  • Keep humans involved: Don’t outsource your values to an algorithm.

This brings us to the end of our “unpacking the basics” series. We hope it has helped demystify some of the jargon and given you a clearer view of the digital landscape.

And remember, the future is going to be built on technology, but it must be driven by humans.

Is your business trying to navigate the complexities of modern tech with integrity? We can help you build a strategy that puts people first. Get in touch.
