"Because the Algorithm Said So" Isn't Good Enough Anymore


"Because the algorithm said so” isn’t good enough anymore.

When your machine learning model makes a decision that affects someone’s medical treatment, financial security, or legal rights, stakeholders need to understand why.

I first encountered interpretable machine learning working in insurance - though I didn’t realise it at the time.

The insurer I worked for used machine learning models as part of its premium calculation process. There was an unwritten rule that any models we deployed had to be easily explainable to our policyholders.

My employer was a government WorkCover insurer, and the motivation behind this rule was simple. For people to have confidence in the WorkCover system, the calculation process needed to be transparent.

As a result, model interpretability was implicit in everything my team did.

That was more than a decade ago, and the stakes have only gotten higher.

And with deep learning and LLMs pushing AI and machine learning ever further in the black-box direction, the need for interpretability has never been more critical – particularly in high-stakes fields like medicine, law and government, where algorithmic decisions can fundamentally alter the course of someone’s life.

Serg Masis has literally written the book on machine learning interpretability, and given that his book, Interpretable Machine Learning with Python, is now in its second edition, the topic clearly resonates strongly with many data scientists.

In the latest episode of Value Driven Data Science, Serg joins me to share practical strategies for building interpretable machine learning models that earn stakeholder trust and accelerate AI adoption within your organisation.

You’ll learn:

  1. The crucial distinction between interpretable and explainable models [07:06]
  2. Why feature engineering matters more than algorithm choice [14:56]
  3. How to use models to improve your data quality [17:59]
  4. The underrated technique that builds stakeholder trust [21:20]

Model accuracy means nothing if stakeholders don’t trust the predictions.

Listen now on Apple Podcasts or Spotify, or click the link below:

Episode 98: Building Trust in AI Through Model Interpretability

Talk again soon,

Dr Genevieve Hayes

Data Science Impact Algorithm

Twice weekly, I share proven strategies to help data scientists get noticed, promoted, and valued. No theory — just practical steps to transform your technical expertise into business impact and the freedom to call your own shots.
