Biased machine learning models don’t just produce poor predictions. They damage reputations, derail projects, and in high-stakes fields like healthcare, they can cause real harm. Yet most data scientists don’t check for bias until it’s too late, missing the opportunity to address it at its source.

Serg Masis, author of Interpretable Machine Learning with Python, puts it bluntly: “Models magnify bias just simply by the way they are. It’s like when you make a caricature of someone - you’re gonna enhance some features that are not necessarily flattering. It’s the same thing with models.”

Your training data has bias. Your model amplifies it. By the time you’re making predictions, the problem is much worse.

In this week’s Value Boost episode of Value Driven Data Science, Serg joins me again to share practical techniques for detecting and mitigating bias before it becomes a major problem - all in just 10 minutes.
Don’t let model bias make a caricature of your data. Listen now on Apple Podcasts or Spotify, or click the link below:

Episode 99: Preventing ML Bias Before it Becomes a Problem

Talk again soon,
Dr Genevieve Hayes
Twice weekly, I share proven strategies to help data scientists get noticed, promoted, and valued. No theory — just practical steps to transform your technical expertise into business impact and the freedom to call your own shots.
The first time I ever presented my work in public was at a finance symposium when I was 27. I was close to submitting my PhD thesis, and my supervisor offered me the opportunity to present as a supporting speaker to a renowned international mathematical finance researcher. I was the final speaker of the day. By the time I took the podium, almost everyone had gone home. Fewer than 10 people remained in the room. But the researcher was still there. Afterwards, I headed to the airport and ran into him in...
“Cheating with artificial intelligence is now rampant at universities.” “University is no longer a test of your intellect. It’s a test of how well you can instruct ChatGPT.” “AI is giving students top grades for zero intellectual work.” These are quotes from a recent article in The Australian Weekend Magazine, which argues that students are now turning to AI en masse to automate learning, and graduating with perfect grades but limited knowledge. The phenomenon has been observed across...
"Because the algorithm said so” isn’t good enough anymore. When your machine learning model makes a decision that affects someone’s medical treatment, financial security, or legal rights, stakeholders need to understand why. I first encountered interpretable machine learning working in insurance - though I didn’t realise it at the time. The insurer I worked for used machine learning models as part of its premium calculation process. There was an unwritten rule that any models we deployed had...