Knowledge is No Longer Enough


“Cheating with artificial intelligence is now rampant at universities.”

“University is no longer a test of your intellect. It’s a test of how well you can instruct ChatGPT.”

“AI is giving students top grades for zero intellectual work.”

These are quotes from a recent article in The Australian Weekend Magazine, which argues that students are now turning to AI en masse to automate learning, and graduating with perfect grades but limited knowledge.

The phenomenon has been observed across multiple degrees, so presumably data science students aren’t exempt.

Here’s what this means for working data scientists:

If AI can now ace a university degree, your competition for the next “big opportunity” isn’t just the data scientist at the desk next to you - it’s also the AI that aced the degree.

You can’t compete on knowledge alone anymore. Knowledge is increasingly commodified and automatable.

You need to compete on expertise - the combination of knowledge and experience that’s much harder to automate.

The technical foundations still matter - you can’t fake your way through real work forever. But they’re table stakes now, not differentiators.

What separates you is what you’ve learned by actually doing the work.

Talk again soon,

Dr Genevieve Hayes

Data Science Impact Algorithm

Twice weekly, I share proven strategies to help data scientists get noticed, promoted, and valued. No theory — just practical steps to transform your technical expertise into business impact and the freedom to call your own shots.
