"Because the Algorithm Said So" Isn't Good Enough Anymore


"Because the algorithm said so” isn’t good enough anymore.

When your machine learning model makes a decision that affects someone’s medical treatment, financial security, or legal rights, stakeholders need to understand why.

I first encountered interpretable machine learning while working in insurance - though I didn’t realise it at the time.

The insurer I worked for used machine learning models as part of its premium calculation process. There was an unwritten rule that any models we deployed had to be easily explainable to our policyholders.

My employer was a government work cover insurer, and the motivation behind this rule was simple: for people to have confidence in the work cover system, the calculation process needed to be transparent.

As a result, model interpretability was implicit in everything my team did.

That was more than a decade ago, and the stakes have only gotten higher.

And with deep learning and LLMs pushing AI and machine learning ever further in the black-box direction, the need for interpretability has never been more critical – particularly in high-stakes fields like medicine, law and government, where algorithmic decisions can fundamentally alter the course of someone’s life.

Serg Masis has literally written the book on machine learning interpretability, and with his book, Interpretable Machine Learning with Python, now in its second edition, the topic clearly resonates with many data scientists.

In the latest episode of Value Driven Data Science, Serg joins me to share practical strategies for building interpretable machine learning models that earn stakeholder trust and accelerate AI adoption within your organisation.

You’ll learn:

  1. The crucial distinction between interpretable and explainable models [07:06]
  2. Why feature engineering matters more than algorithm choice [14:56]
  3. How to use models to improve your data quality [17:59]
  4. The underrated technique that builds stakeholder trust [21:20]

Model accuracy means nothing if stakeholders don’t trust the predictions.

Listen now on Apple Podcasts or Spotify, or click the link below:

Episode 98: Building Trust in AI Through Model Interpretability

Talk again soon,

Dr Genevieve Hayes

Data Science Impact Algorithm

Twice weekly, I share proven strategies to help data scientists get noticed, promoted, and valued. No theory — just practical steps to transform your technical expertise into business impact and the freedom to call your own shots.
