The first time I ever presented my work in public was at a finance symposium when I was 27. I was close to submitting my PhD thesis, and my supervisor offered me the opportunity to present as a supporting speaker to a renowned international mathematical finance researcher.

I was the final speaker of the day. By the time I took the podium, almost everyone had gone home. Fewer than 10 people remained in the room. But the researcher was still there.

Afterwards, I headed to the airport and ran into him in the departure lounge - he was flying to Melbourne, I was flying back to Canberra. While we waited for our flights, we talked for about half an hour. He gave me feedback on my presentation and encouraged me in my future career.

I've long since forgotten what that researcher told me, but I do remember how it made me feel - that my work mattered and that people were paying attention.

Had I been just another attendee at that conference, that conversation never would have happened. It happened because I had shown up and done the work - presented something I'd built, in a room that was mostly empty, for an audience that had largely left.

Next week, the 100th episode of Value Driven Data Science goes live.

Value Driven Data Science also started with a mostly empty room. For a long time, I had no idea whether anyone was listening, whether people cared about what I had to say, or whether I would even be able to convince guests to appear.

But I showed up anyway.

Now, 100 episodes later, the show ranks in the Listen Notes global top 10% (you can find the back catalogue on Apple Podcasts or Spotify, or by clicking HERE). More importantly, it's given me the opportunity to speak with some of the most interesting and influential thinkers in the data profession - people I might never have had the chance to speak to otherwise.

For Episode 100, we're going to be doing something different.
In this episode, the tables are turned and Matt O'Mara, Managing Director of Analysis Paralysis and Director of i3, is going to interview me. I hope you'll tune in and listen.

And if you've been thinking about showing up somewhere - presenting, publishing, putting your ideas out there - let this be the nudge. The room doesn't have to be full. Sometimes, you just have to be in it.

You can listen to the first 99 episodes of Value Driven Data Science now on Apple Podcasts or Spotify, or at https://valuedrivendatascience.com/. Episode 100 goes live on Thursday 9th April.

Talk again soon,

Dr Genevieve Hayes
Twice weekly, I share proven strategies to help data scientists get noticed, promoted, and valued. No theory — just practical steps to transform your technical expertise into business impact and the freedom to call your own shots.
Biased machine learning models don't just produce poor predictions. They damage reputations, derail projects, and in high-stakes fields like healthcare, they can cause real harm. Yet most data scientists don't check for bias until it's too late - missing the opportunity to address it at its source. Serg Masis, author of Interpretable Machine Learning with Python, puts it bluntly: "Models magnify bias just simply by the way they are. It's like when you make a caricature of someone...
"Cheating with artificial intelligence is now rampant at universities." "University is no longer a test of your intellect. It's a test of how well you can instruct ChatGPT." "AI is giving students top grades for zero intellectual work." These are quotes from a recent article in The Australian Weekend Magazine, which argues that students are now turning to AI en masse to automate learning, and graduating with perfect grades but limited knowledge. The phenomenon has been observed across...
"Because the algorithm said so” isn’t good enough anymore. When your machine learning model makes a decision that affects someone’s medical treatment, financial security, or legal rights, stakeholders need to understand why. I first encountered interpretable machine learning working in insurance - though I didn’t realise it at the time. The insurer I worked for used machine learning models as part of its premium calculation process. There was an unwritten rule that any models we deployed had...