From AI Literacy to Critical Literacy: Why thinking still matters more than AI tools
- Glenn Martin

- Jan 27
- 4 min read
Across multiple leadership and workforce studies, critical thinking consistently shows up as the skill that separates leaders who can use AI well from those who inadvertently outsource their judgement to it. In other words, AI isn’t replacing thinking - it’s exposing where it was weak all along.
Combine this with the fact that most organisations are rushing to build AI literacy without building critical literacy of AI, and a real problem starts to surface.
AI is exposing a thinking gap
AI is exposing a critical thinking gap in companies, not just a technical one.
I see organisations racing to roll out tools, training and “AI academies” while neglecting the harder part: upskilling or reskilling their people to question the outputs, spot the risks, and continue to make good decisions.
Across various leadership and workforce studies, critical thinking keeps showing up as the difference between leaders who use AI effectively and those whose judgement is compromised by AI hallucinations because they never challenge the output.
Again, AI isn’t replacing thinking – it’s exposing where we’ve become reliant on its accuracy without question.
Right now, most AI programmes are about how to use the tools, and very few are about how to challenge the tools. We’re building AI literacy without building critical literacy of AI - and that’s becoming a systemic risk, not just a learning design oversight.
This matters most to the people who actually hold the budget: CHROs, Heads of People, People & Culture leaders, and L&D teams who are being asked to “get everyone AI‑ready” based on outdated assumptions.
From AI literacy to ‘critical literacy of AI’
Most AI training stops at competence: prompts, features, workflows, productivity hacks. It looks good on a dashboard - high completion rates, positive feedback, people saying they feel more confident with the tools.
But confidence with a tool is not the same as competence in judgement.
Critical literacy of AI is a different skillset. It’s the ability to ask:
What data might be missing or biased here?
What assumption is the model making about my context?
What’s the downside if this is wrong – and who carries that risk?
Without that, the danger is that AI encourages cognitive laziness. Leaders stop interrogating recommendations, teams stop asking questions, and over time the organisation loses the ability to track and trace its own decision-making process.
We don’t just need people who can use the AI; we need people who can decide whether the output is credible and delivers value.
Why job redesign capability is still immature
This is where job redesign should come in – and where most companies lack the expert capability.
When new technology appears, three patterns tend to show up:
New roles get bolted onto the existing org chart.
Existing roles quietly morph as tools and markets change.
Restructures merge roles to save cost, and people inherit more scope by default.
All three change work, but none of them are genuine job redesign - they’re incremental tweaks or reactions to pressure.
In most companies, the capability to do real job redesign – mapping tasks, modelling AI impact, deliberately reshaping roles and pathways – is still rare and immature, especially at the pace AI is moving.
That’s a problem, because without deliberate job redesign, AI adoption becomes a series of job hacks and capability workarounds. People will wade through the corporate treacle, but they will carry the risk cognitively while the organisation pretends the roles haven’t really changed, or have changed just enough to meet the immediate company need.
Identity and purpose as design constraints
Josh Bersin asked an uncomfortable question in his podcast this week:
If an AI agent can do my job, what happens to my sense of self?
That’s not theoretical - it’s what people are feeling right now in roles being reshaped by automation.
You can design a new role on paper, you can write an updated job description, but none of that guarantees that the person you’re asking to step into it will actually want it.
Two human questions decide that:
Identity – can I see myself in this role? Does it fit who I believe I am?
Purpose – does this role feel meaningful to me, professionally and personally?
If a “redesigned” job clashes with identity or feels empty of purpose, people will resist, disengage, quietly quit, or just leave - and if your critical thinkers are the ones who feel this most strongly, you’ve just redesigned them out of the future of your company.
Identity and purpose shouldn’t be treated as a comms problem after the design is done, they need to be baked into the design brief from the start.
What People Leaders can do next
You’ve probably heard me say or read this in one of my posts: People Leaders are the translators of AI.
So if you’re a CHRO, Head of People, People & Culture leader or L&D lead, this isn’t about becoming an AI engineer, it’s about reshaping how your organisation thinks and designs work.
Here are some starting moves:
Split your AI training in two. Don’t stop at “how to use the tools”; build modules and practice around “how to question the tools”: challenging outputs, testing assumptions, stress‑testing decisions with and without AI.
Commission one real job redesign pilot. Pick a function or team and treat it as a design problem. Map the work, model what AI can realistically take on, and redesign roles and pathways intentionally. Design the role around the real work.
Put identity and purpose into the brief. For any redesigned role, ask: who is this role really for, and can those people see themselves in it? What meaningful purpose does this role serve for them? Design for human connectivity.
Clarify who owns AI governance in people terms. Decide which roles are accountable for how AI is used, and make sure those roles are filled by people with strong critical thinking, not just technical enthusiasm. Then invest in them.
You don’t have to fix everything at once, but as a People Leader, you do have to move beyond teaching people to adopt a new tool.
Final thought
If you’re rolling out AI tools this year, ask yourself three questions:
Where are we building critical literacy of AI, not just AI literacy?
Who actually owns job redesign in our organisation?
How are we baking identity and purpose into the roles we’re reshaping?
If you can’t answer those yet, that’s your next leadership conversation.