• Core Services
    • One to one coaching
    • Team Coaching
    • Programmes and Workshops
    • Wellbeing and Wellness
  • Executive Leadership
    • Leading Inclusively
    • Leading with Purpose
    • Leading with EI
    • Leading as Coach
    • Leading with Influence
    • Women in Leadership
  • Leadership Skills
    • Leadership Presence
    • Leadership for Change
    • Leadership Connection
  • Events
    • Women in Leadership breakfast briefing
  • Clients
    • What our clients say
  • Resources
  • Contact

Why diversity matters in the age of AI

February 2

One of the key questions facing leaders now is how to interface with generative AI in the workplace. If you’ve decided to embrace it, then you’ll need to give serious thought to how you deal with its shortcomings.

In this post, I’ll outline some of those issues, and how you can ensure your organisation is equipped to respond to them.

The limits of prompt engineering

By now you’ll be aware that you can’t trust every response an AI model gives you. In one of the most famous examples, from 2024, ChatGPT was asked “how many R’s are in the word strawberry” and answered – confidently and incorrectly – “There are two “R”s in the word “strawberry.””

That may seem inexplicable, but if you have a thorough understanding of how AI models are built, you may be able to anticipate these kinds of problems and frame your question differently.

That’s all prompt engineering is: creating inputs for AI models that will result in the most accurate, relevant and effective outputs. Simple in theory, but – as the strawberry example shows – much more complicated in practice.

And no matter how good your prompt engineering is, it’s not the be all and end all. Getting the best out of AI tools requires the same leadership skills you’ve been using all your career, and which a good coach can help you hone: recruitment and people management. You need smart people with relevant knowledge and diverse perspectives who can keep an eye on the AI model’s output, so they can quickly report any errors or worrying trends.

Automation bias

One of the concepts I think is really important when considering generative AI is automation bias. Forbes defines it as “our tendency to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct.”

In that Forbes article, writer Bryce Hoffman gives some truly sobering examples of what overconfidence in machine-generated information can lead to, from catastrophic financial choices to fatal plane crashes. Needless to say, in the age of AI, it’s more important than ever that leaders avoid this blindspot. 

It’s not just the results that you need to be healthily sceptical about. It’s also the AI models themselves. It’d be nice to think that safeguards built into these models would prevent dangerous outputs, but honestly, you can’t rely on that.

While prompt engineering can help you get better results, its efficacy is limited in comparison to the initial training and system prompt (which guides how the model will respond to user input). In one notorious example from July 2025, X’s integrated AI chatbot, Grok, descended into antisemitism and dubbed itself “MechaHitler” within days of a system prompt update.

“Once we begin relying on automated systems, we tend to stop questioning them,” Hoffman points out. “The reliance on automated systems has increased dramatically with advancements in technology, making understanding and mitigating automation bias crucial for maintaining high-quality decision-making processes.”

Unconscious bias

It might seem odd to think of a machine as having bias in the same way a human can, but – who writes those system prompts? Who created all the material AI models are trained on? That’s right: fallible, biased humans.

That bias might come from three different places:

The system prompt The Grok example above is a clear example. You may not have control over this, but you can test each update with thoughtful prompts designed to expose hidden biases.

The material on which the AI model was trained As Satyen Sangani points out in Forbes, “regardless of quality, [the AI model] learns from the data it’s fed and generates new data and insights. As a result, any biases present in the original data will only be strengthened, exacerbating the problem.” You can test for bias by carefully creating prompts and fact- and sense-checking the results (without further use of AI).

The ongoing evolution of the AI model To quote Sangani again, “we’re dealing with a moving target. AI models drift as they transform based on new data, which […] can further eternalize damaging biases such as gender and racial inequality.” So you can’t just rely on checking for bias when a new version is released – you need to keep assessing it, in a process called functional monitoring.

The need for varied voices

One of Hoffman’s three recommendations for overcoming automation bias is to “seek diverse perspectives” on the output. As far as I’m concerned, this is just common sense. Everyone has unconscious bias. So if you want to catch inaccuracies in an AI model, it just makes sense to have people with as wide a range of experiences, perspectives and areas of expertise as possible.

I want to be very clear: this is about DEI, not as a box-ticking exercise or a way of virtue signalling, but as a solid and sensible business practice. In the age of AI, more than ever, diversity is strength – a strength I’ve been helping my clients build for over two decades.

What generative AI spits out can be reactive. Business leaders without a clear sense of purpose will be led by whichever voices – human or AI – are the loudest at that moment, which could have disastrous effects. To avoid that, you need to clarify your purpose; have diverse, well-informed voices around you; follow your compass; and avoid groupthink. It’s vital that you get familiar with your strengths and weaknesses, and avoid yes men at all costs.

One last cautionary note: you need to have a fallback. It’s highly unwise to put all your eggs in the AI-generated basket. Even if you’re closely monitoring every aspect of the input and output, you should never be 100% reliant on an AI model.

Recent Posts


Welcome to 2026!

February 1

Avoiding the perils of perfectionism

February 1

Why Great Innovations Fail to Scale

January 31

Design Processes to Evolve with Emerging Technology

January 31

5 Critical Skills Leaders Need in the Age of AI

January 1

Tags


Categories


AI
Leadership

Contact


  • Core Services
  • Executive Leadership
  • Leadership Skills
  • Privacy Policy
  • Events
  • Blog
  • Video
  • Podcast
  • Downloads

NEELA BETTRIDGE

BUSINESS LEADERSHIP CONSULTING

©Copyright Neela Bettridge 2017