The Bias Pickle - Ethical AI (Part 1)
Something which still doesn’t come up nearly enough in conversations about AI use is ethics.
I guess this makes sense; we – as a society – probably don’t discuss ethics and ethical behaviour enough. I have been guilty of this oversight myself. Recently, the work I was doing with a client operating in the health space kept hitting hurdles, and it was only when I realised I was framing problems from a legal perspective while she was framing them from an ethical one that we found common ground.
But even outside of industries like health, a line can logically be drawn from a business’s AI implementation and use, through its strategy, to its business objectives – all of which should be explicitly underpinned by an ethical stance or, at the very least, a set of values that were workshopped, agreed, documented, distributed and championed as principles the business holds to be true.
I have written about ethics and AI before, but it’s worth revisiting; the rapid advancement and pervasive integration of AI systems into the fabric of society mark a transformative era… and we are only at the beginning. This level of adoption raises a complex array of ethical implications that need attention – and understanding. In many ways, I don’t think we are ready for what is already here, let alone what is coming. The power of AI needs to be carefully managed – unchecked AI development and use risks perpetuating and amplifying existing societal discrimination, eroding public trust and creating systemic harms that disproportionately affect vulnerable populations. It can also undermine the structures that businesses have worked hard to build and maintain, and which form the bedrock of their operations.
In this series, I will take a high-level look at the various challenges that AI use throws up – starting, in Part 1, with bias in AI. Understanding the challenges of ethical AI and responsible governance – and the emerging efforts to address them – is paramount to ensuring AI truly serves humanity in the best way possible.
Unpacking unfairness
Bias manifests when AI systems generate skewed or unfair results, whether from flaws in the training data or in the algorithms themselves. These biases can lead to discrimination, reinforce existing societal disparities and significantly erode public trust in AI technologies and the institutions that deploy them. They can also create substantial legal and financial risks for businesses and governments.
Bias can manifest in the following ways:
Training data: Bias in data used to train AI — if this data is unrepresentative, incomplete or historically skewed, the AI system is likely to perpetuate and even amplify these biases in its outputs.
Group attribution: An AI model may make generalisations about individuals based on the characteristics of the group to which they belong, leading to stereotyping and unfair treatment.
Algorithmic design: Bias can be introduced into the design of the algorithm through programming errors, an engineer unknowingly weighting certain factors unfairly in the decision-making process or creating rules based on conscious or unconscious human biases.
Proxy data: AI systems can unintentionally rely on seemingly neutral variables as stand-ins for sensitive attributes – for example, using post codes as a proxy for economic status or ethnicity. This can lead to discriminatory outcomes even when sensitive attributes are not explicitly considered, as the sketch below illustrates.
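To make the proxy problem concrete, here is a minimal audit sketch. The data, column names and values are invented purely for illustration; the point is that a model which never saw the sensitive attribute can still produce very different outcomes for each group, because the post code carries that information for it.

```python
import pandas as pd

# Hypothetical audit data: the model was trained without 'ethnicity',
# but it did see 'postcode', which can act as a stand-in for it.
df = pd.DataFrame({
    "postcode":  ["2010", "2010", "2010", "2770", "2770", "2770"],
    "ethnicity": ["A", "A", "A", "B", "B", "B"],   # collected for the audit only
    "approved":  [1, 1, 1, 1, 0, 0],               # the model's decisions
})

# Approval rate per (excluded) sensitive group.
rates = df.groupby("ethnicity")["approved"].mean()
print(rates)

# A simple disparate-impact style ratio: values well below 1.0 suggest the
# model is reproducing group differences via the post code proxy.
print("disparity ratio:", rates.min() / rates.max())
```

In practice you would run this across the full scored population and a richer set of metrics, but even a check this crude is often enough to flag that a “neutral” feature is doing discriminatory work.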
The pervasive nature of these biases means that algorithmic bias is not merely a technical glitch but a reflection and amplification of existing societal disparities. It originates at multiple points, from the collection of historically flawed data, through the design choices of developers, to the interpretation of results. The feedback loop where biased outputs become new inputs reinforces and exacerbates existing inequalities, making it a self-perpetuating problem if not actively addressed.
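That feedback loop is easier to see in a toy simulation than in prose. The sketch below uses entirely hypothetical numbers, loosely modelled on the predictive policing example discussed later: two areas with identical underlying rates, one given slightly more attention at the outset, with each year’s allocation retrained on the records that attention generates.

```python
import random

random.seed(1)
true_rate = {"area_a": 0.10, "area_b": 0.10}  # identical underlying rates
patrols = {"area_a": 60, "area_b": 40}        # a small historical skew to start

for year in range(5):
    # Recorded incidents depend on where you look, not just on what happens.
    arrests = {a: sum(random.random() < true_rate[a] for _ in range(p))
               for a, p in patrols.items()}
    total = sum(arrests.values()) or 1
    # Next year's allocation is driven by this year's records, so the
    # initial skew compounds even though the areas are identical.
    patrols = {a: round(100 * arrests[a] / total) for a in arrests}
    print(f"year {year}: arrests={arrests} next patrols={patrols}")
```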
Bias Case Studies
There have been a number of instances where biased AI used in high-stakes domains – such as health and finance – has had substantial impacts. These are worth exploring – yes, as clear examples of what to avoid, but also as demonstrations of how easy it is to slip up.
Health
A commercial algorithm which was in wide use by US insurers and hospitals to identify patients for high-risk care management programs exhibited significant racial bias. The algorithm underestimated the care needs of Black patients because it used healthcare spending as a proxy for need, reflecting historical disparities in access to care.
Similarly, Warfarin dosing algorithms, designed to personalise medication dosages, performed poorly for African American patients because critical genetic variations were excluded from their training data, creating a risk of overdosing.
Finance
AI algorithms denying loans or credit without clear explanations can cause major financial harm and exacerbate existing economic inequalities.
An example of this is Apple's credit card algorithm, which reportedly offered significantly lower credit limits to women than to their male spouses, even when the women had higher credit scores and incomes – highlighting a clear gender bias.
Crime
Predictive policing algorithms, when trained on historical arrest data from areas with past racial marginalisation, can reflect and reinforce biases, leading to disproportionate targeting of certain communities.
An algorithm used in US courts to predict the risk of reoffending was found to be racially biased: it incorrectly classified Black defendants as high-risk at almost twice the rate of white defendants, while white defendants were more likely to be mislabelled as low-risk despite going on to reoffend. A rough sketch of how that kind of disparity is measured follows.
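The disparity in that case is essentially a gap in error rates between groups: false positives (labelled high-risk, did not reoffend) and false negatives (labelled low-risk, did reoffend). Here is a minimal sketch of that check; the records are invented purely to show the calculation.

```python
import pandas as pd

# Invented audit records: the tool's "high risk" label versus what actually
# happened, with the defendant's group recorded for the audit only.
df = pd.DataFrame({
    "group":      ["black", "black", "black", "black", "white", "white", "white", "white"],
    "reoffended": [0, 0, 1, 1, 0, 0, 1, 1],
    "high_risk":  [1, 1, 1, 0, 0, 0, 1, 0],
})

def error_rates(g: pd.DataFrame) -> pd.Series:
    # False positive rate: labelled high-risk among those who did not reoffend.
    fpr = g.loc[g["reoffended"] == 0, "high_risk"].mean()
    # False negative rate: labelled low-risk among those who did reoffend.
    fnr = 1 - g.loc[g["reoffended"] == 1, "high_risk"].mean()
    return pd.Series({"false_positive_rate": fpr, "false_negative_rate": fnr})

print(df.groupby("group")[["reoffended", "high_risk"]].apply(error_rates))
```

Equalising these error rates across groups (often called “equalised odds”) is only one of several competing fairness definitions; which one matters most depends on the context in which the system is used.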
Recruitment
Amazon discontinued an AI-powered recruitment tool after it was discovered to be biased against female candidates. The tool penalised CVs with phrases like "women's chess club" and downgraded graduates of all-women's colleges, having learned to prefer male candidates from historical hiring data.
Bias Detection and Mitigation
Addressing bias is challenging.
The bad news is there is no single, magical fix. A complex algorithm often requires an equally complex strategy which blends technical solutions and organisational governance measures. It means interrogating how AI systems are designed, trained and deployed, and building in bias mitigation processes which cover the entire AI lifecycle.
The good news is that there are proven methods and effective tools which can support your efforts to mitigate bias risk.
Here are some approaches:
Google’s What-If Tool: Allows users to probe machine learning models – testing their behaviour, identifying potential biases and exploring how outputs change with different inputs.
Interdisciplinary collaboration: Creating an environment which encourages cross-functional collaboration – among AI researchers, domain experts, ethicists and so on. Different perspectives can help identify potential biases that homogeneous teams might overlook, leading to more comprehensive and fair AI solutions.
Human-in-the-loop reviews: Implementing a "human-in-the-loop" approach is a critical safeguard against unintended consequences. Humans should continuously monitor AI outputs and intervene when biased decisions are detected – a minimal sketch of one common pattern follows this list.
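One common human-in-the-loop pattern is to stop borderline decisions from being released automatically and route them to a reviewer instead. The sketch below is a simplified, hypothetical illustration of that idea; the thresholds and field names are assumptions, not a prescription.

```python
from dataclasses import dataclass

# Assumed review band: scores the model is not clearly sure about go to a person.
LOW, HIGH = 0.15, 0.85

@dataclass
class Decision:
    applicant_id: str
    score: float   # model confidence for the 'approve' outcome
    outcome: str   # "approve", "decline" or "needs_human_review"

def triage(applicant_id: str, score: float) -> Decision:
    # Borderline scores are escalated to a human reviewer rather than being
    # actioned automatically; clear-cut scores pass straight through.
    if LOW < score < HIGH:
        return Decision(applicant_id, score, "needs_human_review")
    return Decision(applicant_id, score, "approve" if score >= HIGH else "decline")

print(triage("A-102", 0.62))  # escalated for review
print(triage("A-103", 0.97))  # auto-approved
```

In a real deployment the review queue would also capture the reviewer’s decision and feed it back into monitoring, so that patterns of overrides can themselves flag systematic bias.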
Looking to Part 2
Imagine you’ve created a powerful, intelligent AI system – it’s working well, and everyone is super pumped. But, as it turns out, it also has inherent biases. There will be a strong temptation to let the thing run, hope for the best and patch up the gaps as they appear. But the initial biases built into AI can compound over time, leading to increasingly skewed results. Addressing bias needs more than simply fixing individual errors.
In Part 2, we will look at why AI audits play a critical role in ensuring AI systems adhere to ethical standards and in detecting prohibited activities or unlawful bias. We will also turn our attention to some of the governance measures being taken in the US, EU and Australia to ensure that AI is developed and used responsibly and ethically.
Stay ahead of the curve on ethical AI — get in touch today.