AI bias isn’t obvious. That’s the problem.

Share this post

We need to talk about a quiet problem in AI – one that’s far more dangerous than a factual error or a clunky sentence.

It’s bias.

And no, I’m not talking about tone mismatches or dodgy analogies. That’s surface-level stuff.

I’m talking about bias that skips over medical symptoms. That sidelines job applicants. That quietly favours one group of people over another – based on patterns it’s learned from an unfair past.

Just recently, a study revealed that some AI tools used in healthcare were downplaying symptoms described by women and people from ethnic minority backgrounds. Not rewording them. Not softening them. Actively minimising them – meaning less care might be recommended simply based on who the patient was.

That’s not just frustrating. That’s potentially fatal.

And it’s a wake-up call for all of us – not just those in the medical world.

Because AI is being rolled out everywhere. In HR teams. In schools. In councils. In legal practices. And it’s doing a lot more than drafting social media posts. It’s summarising case notes. Flagging risks. Recommending who gets the job – and who gets sidelined.

So here’s the problem – if we don’t define fairness, our AI tools won’t either. And if leaders aren’t taking responsibility for that now, they might be building the next crisis without even knowing it.

Trained on history. Blind to fairness.

AI doesn’t invent insights out of thin air. It learns from patterns in the data it’s given – and a lot of that data is soaked in history. Unfortunately, history hasn’t always been fair.

As I say in my book ‘AI-Human Fusion’, “AI is only as fair as the data and systems behind it. After all, it comes down to how the humans have trained it.”

If those systems reflect outdated thinking, skewed assumptions, or exclusionary practices, then the AI absorbs it all. Not with malice, but with complete and unwavering confidence.

That’s the real danger – it doesn’t question the logic behind the patterns. It just runs with them.

So if women have historically been dismissed in medical notes, or certain ethnicities have been underrepresented in formal research, the AI doesn’t raise a flag. It simply assumes that’s the norm. And when that model is asked to summarise a new set of symptoms or prioritise a list of applicants, those biases don’t disappear. They get baked in.

That’s why fairness in AI can’t be left to the developers alone. It needs to be championed by leadership – from the start. Because if we don’t decide what fairness looks like in our context, the algorithm will decide for us.

And its definition might be the last one we’d want.

A clever prompt – or a cover-up?

Now, here’s where things get interesting.

Some AI trainers are already experimenting with ways to reduce bias at the prompt level. I recently heard about a female trainer who starts her sessions with empowering feminist messaging – like ‘Women can do anything!’ and ‘Women rule the world!’ – before giving it a task.

And it works. Kind of.

With the right prompt, you can nudge the tool to rethink its assumptions. You can steer it toward fairness. You can even get more balanced output just by adjusting your tone, your language, or the way you structure your request.

But is that really a solution – or just a clever patch?

Because while prompting can shape what you get in the moment, it doesn’t change the underlying patterns the tool was trained on. You’re still building on top of a foundation that might be cracked.

It’s a bit like putting a coat of paint over water damage. Looks better, sure. But the real problem is still lurking underneath.

And here’s the BIG question – should we really be relying on the user to fix the system?

Not everyone knows how to prompt for fairness. Not everyone has the time or confidence to experiment with language until the output feels right.

That’s why we can’t afford to treat prompt hacks as the end game. They’re helpful. They’re creative. But they’re not enough.

So what does fairness actually look like?

It starts by asking better questions – before you prompt, before you automate, before you trust the output. Questions like –

  • What assumptions might be built into the data or the prompt we’re using? Every prompt has a perspective. Every dataset has a history. Whether it’s the way a question is framed, or the type of language used to describe a task, it all influences the response. Leaders need to look closely at what’s being fed into the tool – not just what’s coming out.
  • Are we regularly reviewing the outputs for bias, gaps, or blind spots? Bias isn’t always obvious. That’s why it’s not enough to do a one-time check. Fairness in AI is an ongoing habit. It means reviewing outputs with fresh eyes, gathering diverse feedback, and staying curious about what might be missing.

As Parul Gupta from Meta says in AI-Human Fusion, “Fairness is contextual and businesses need to define what it means for their specific AI applications.” And that definition needs to be crystal clear before we simply accept the output.

Bias won’t fix itself

AI is evolving fast – but that doesn’t mean we should be handing it the biggest decisions without question.

The truth is, this technology is still in its infancy. It’s clever, yes. Helpful in the right hands, absolutely. But fair? Reliable? Ready to guide life-changing decisions without human oversight? Not yet.

That’s why now is the time to pause. To assess where we’re using these tools – and why. To stop assuming the output is neutral, and start building the skills, awareness, and frameworks that ensure it’s responsible.

AI can be part of the solution. But only if we stop expecting it to lead – when it’s barely started walking.

You can read more about my research and thoughts about the ethical and responsible usage of AI in my book ‘AI-Human Fusion’ via Amazon, Booktopia, or Dymocks.

P.S. This article was written in collaboration between ChatGPT and my human brain.

Grab your FREE
‘AI Humanisation Checklist’!

More to explore

Like the sound of our approach and thoughts on AI?

We’d love to discuss AI training with you.

Call us on +61 2 8860 6552 or apply for training via the form below.

Leanne Shelton Speaking at a keynote