BLOGS
I write for People Leaders who want clarity, not hype.


The Replacement Myth
I get asked three questions quite consistently about AI, and they all focus on one theme — “Will AI replace us?” The three questions are: 1. Which LLM is the best for the type of work I’m doing? 2. What AI course do you recommend? 3. Which jobs will be replaced by AI in the next 12 months? That last question is the one where I feel the most anxiety and tension emanating from the person asking it, and I understand why. Since ChatGPT launched in November 2022, entry-level job p


The AI decision has been made. Now it's your problem.
This is the eleventh post in a series I write for People Leaders who want clarity, not hype. My aim is always the same: to take the parts of the AI conversation that feel technical, abstract, or simply overwhelming, and translate them into something a people leader can actually use. This post is about a decision that has almost certainly already been made in your organisation, either with you or without you. It is about what that decision really commits your people to — and w


The Agent Illusion
There is a growing narrative that “building AI agents” is the next step for progressive People teams, and that if you are not experimenting with agents you are somehow lagging behind. I’m hearing this in leadership conversations, it’s all over LinkedIn, and it’s certainly filling vendors’ sales pipelines. I call this the Agent Illusion. The Agent Illusion is the belief that deploying AI agents will unlock productivity on its own, without redesigning workflows, cleaning up data


The AI capability visibility gap
Why HR doesn’t have an AI tools problem; it has a judgement problem it cannot yet see. In a recent session with a Global Leadership team, I opened with a simple line: HR does not have an AI tools problem. HR has a capability visibility problem. The room went quiet, not because it was controversial, but because it was uncomfortably accurate. Across enterprise and scaling People functions, AI adoption is now measurable. Dashboards exist, licences are activated, prompts are


Brain Skills: The missing layer in AI Strategy
Most organisations are having the same conversation about AI, just with different tools on the agenda. Which platform should we roll out? Which workflows should we automate? Which teams need training first? Those questions are not wrong; they are just incomplete. The harder question, and the one most People Leaders are quietly avoiding, is this: What human capabilities need to be strengthened so people can work with AI without surrendering judgement to it? That is where br


Why AI Training is a poor measure of AI Competence (and what to measure instead)
Most organisations now accept that AI training is necessary. What far fewer are clear on is how to tell whether that training actually worked. Completion rates are high, internal feedback is positive, and there is a perception that collective confidence has gone up. And yet, weeks later, decision quality looks unchanged. That gap is not accidental. It is structural. Why AI competence is so hard to measure: There is no globally recognised scale for AI competence, and that is not a


From AI Literacy to Critical Literacy: Why thinking still matters more than AI tools
Across multiple leadership and workforce studies, critical thinking consistently shows up as the skill that separates leaders who can use AI well from those who inadvertently outsource their judgement to it. In other words, AI isn’t replacing thinking – it’s exposing where it was weak all along. Combine this with most organisations rushing to build AI literacy without building critical literacy of AI, and we’re starting to surface a real problem. AI is exposing a thinking g


Change Fitness: Why organisations need to stop treating change as an event
For years, organisations have treated change as something with a beginning, a middle, and an end. You’ve been there: a transformation programme is launched, a roadmap is created, people are trained, and eventually, the organisation is declared “there”. That mental model is now obsolete. Not because leaders are failing to manage change properly, but because the conditions that made episodic change viable no longer exist. AI has simply exposed what was already true: volatili


AI hasn’t broken employee training. It exposed it.
Employee training hasn’t failed because of AI. It’s failed because AI has exposed how fragile most training models already were. For years, employee training has followed a familiar pattern: set a company-wide goal; design learning for all roles and functions; prioritise virtual delivery for scale; motivate participation through campaigns and incentives; brief managers so they can support the rollout; encourage employees to apply learning through projects. In principle, this sounds


Do you actually know the difference between Predictive and Generative AI?
January 2026 feels like a time to make sure we’re all clear on the language we’re using around AI. AI is no longer experimental, subsidised, or safely abstract. We have started to embed it in real workflows, make real decisions about use cases, and allocate real budgets. And yet, most People Leaders still use Predictive AI and Generative AI as if they mean the same thing. They don’t. That confusion was survivable in 2025. It becomes risky in 2026. Because Predictive and Gen


Prompting isn’t typing. It’s Design Thinking – and your budget will feel the difference in 2026
Most teams still treat prompting as “typing with better outcomes.” That was harmless when GenAI was a novelty. It becomes very expensive in 2026. Because prompting isn’t a writing habit. It’s a capability. And in 2026, capability becomes a cost driver. The shift isn’t philosophical. It’s economic, and it’s already moving faster than most leadership teams realise. Economic Shift: The first wave of AI adoption was subsidised. Flat or fixed subscriptions. Unlimited usage (kind


People Leaders think their job is to adopt AI. It isn’t. It’s to translate it.
Most organisations are sprinting into AI adoption based on the AI-hype narrative. That leads to a flurry of experimentation without purpose, testing without clear success metrics, and all of it happening in silos that limit communication of outcomes. It looks productive. It feels progressive. In reality, it’s neither. Because AI isn’t failing due to lack of effort. It’s failing because no one’s translating what’s actually happening. Teams are building vertical capability with


AI Literacy versus AI Hacks
I posted about AI literacy versus AI Hacks this morning. Like all posts, it is the tip of the iceberg; there is always more to be said, shared and explored. What I shared was based on how I currently understand these concepts. It’s rooted in my own use of GenAI tools, and also in the many conversations I’ve had with peers who are experimenting in different ways. Definitions: Literacy vs Hacks. Let me define how I see it: AI literacy is the longer road. It’s about choosing


Psychological Safety and the Workplace AI Evolution
I co-created and co-hosted the Amplify AI: Fundamentals for People Leaders event on the 28th March 2025 in London, and it was a success for a combination of reasons. The Attendees were engaged, the Speakers and Panelists were open to sharing their knowledge without any filter, and we set a very specific tone at the beginning of the event. We started with a panel on Psychological Safety & Overcoming Resistance to AI, because that panel was the ethos of the event. We wanted every
