
Your AI strategy has a lens problem

I was the Emcee at PX Live last Friday, and I intentionally held back from mentioning AI. It came up anyway. When it did, it was through the familiar ‘tool lens’, which, coming from a room full of People Leaders, was slightly surprising and, at the same time, entirely predictable.


That’s where most People Leaders are heading at the moment: an obsessive search for use cases, practical skills, and the latest frontier-model functionality that will hand them an army of agents. Sorry (not sorry!), I don’t mean to sound facetious, but I’m encountering only a small minority of leaders who are actively thinking about and exploring the ‘people lens’.


What do I mean by the ‘tool lens’ and ‘people lens’? I wrote about this in my last post, but as a shorthand reference, I mean:


  • Tool lens — a focus on selecting a company-wide frontier model for all employees, and making that tool the basis of the entire AI strategy: objectives, productivity goals, market competitiveness, and training and development.


  • People lens — a focus on increasing human capability and competence with AI, so your workforce can learn, adapt, and keep pace with new tools and new ways of working.


Looking through the ‘tool lens’

Why are People Leaders stuck looking through the tool lens? I addressed this in a previous blog post, but the short answer is that the majority of them weren’t in the room when the decision was made about AI strategy; it was considered a “technology decision”.


Now People Leaders are living with the ramifications of that decision, and they are shaping their own strategy around the following:

  • Gathering team-specific use cases for pilot projects and joining a queue of departments in the process.

  • Sourcing or creating training to upskill employees on the company’s chosen frontier model and/or tools.

  • Creating dashboards to measure productivity based on token usage, time saved, cost per licence, and other tool-based metrics.


All this is fine until the following scenarios surface within the business:

  • You’ve chosen Copilot for convenience, because you’re in an MS365 environment, but now you realise some teams, such as Marketing, Content, and Data, are not getting any value from using it.

  • You’ve signed up to an Enterprise ChatGPT licence and the software engineering team wants to use Claude Code.

  • You’ve implemented Langdock to offer your people access to different models, but they cannot access new capabilities like Claude Cowork, Agent mode in ChatGPT, or NotebookLM from Google.


What’s happening is that the ‘tool lens’ shifts with every new model development, whether that’s a new version, feature, or mode. A fixed tool lens leaves no room for that flexibility.


So if the tool keeps shifting, what doesn’t?


‘People lens’ equals more flexibility

The neuroplasticity of the human brain is remarkable and I believe it’s one of the keys to truly unlocking the potential of AI in the workplace.


Neuroplasticity is the brain’s ability to form new neural pathways and reorganise existing ones in response to learning and experience. It doesn’t slow down or stop in adulthood: research from the likes of University College London and Dr Tara Swart has shown that the adult brain remains capable of significant structural change when it’s given the right conditions of practice, repetition, and exposure to new ways of thinking.


This matters because the AI landscape isn’t going to sit still. New models, new features, new ways of working — they’re arriving faster than any training programme can keep pace with. If your strategy is built around a fixed tool, your people will need retraining every time that tool changes or a better one arrives.


But if your strategy is built around increasing human capability — the ability to learn, adapt, and apply new skills quickly — then the tool becomes secondary.


That’s the people lens in practice:

  • Building confidence in prompting as a foundational skill, not a one-off workshop.

  • Developing the ability to critically assess AI output — knowing what to trust, what to question, and what to reject.

  • Creating a workforce that can move between models and new functionality without needing to start from scratch every time.


The goal isn’t to make people experts in one AI tool. It’s to build a growth mindset, one that is eager to learn rather than fixated on any single tool, so that when the landscape shifts (and it will), your people shift with it.


Start with the work, not the tool

What I’m not advocating is a ‘buffet selection’ of AI tools for employees to choose from — that makes no sense from a work or commercial perspective.


My recommendation is to focus on building individual capability and competence through learning modules designed around the people lens, and to support that with tools made available according to work type.


Not every role needs the same AI capability. A marketing team’s relationship with AI looks very different to a software engineering team’s. Trying to force both through the same tool, with the same training, is where a lot of organisations are coming unstuck.


Instead, I think there’s a stronger case for aligning AI tools to broad categories of work:

  • Knowledge work — research, synthesis, summarisation, and decision support. Roles where the primary task is taking large amounts of information and turning it into something usable.

  • Content creation — writing, design, communication, and storytelling. Roles where the output is crafted material intended to inform, persuade, or engage an audience.

  • Data analysis — pattern recognition, reporting, forecasting, and insight generation. Roles where the work centres on interpreting structured and unstructured data to support business decisions.

  • Coding — software development, automation, scripting, and system building. Roles where the output is functional code, integrations, or technical infrastructure.


These aren’t rigid boxes — most roles will span two or more of these categories. A senior marketer might sit across knowledge work, content creation, and data analysis in the same week. The point isn’t to slot people into a single category; it’s to use the nature of the work as the starting point for selecting the right tools, rather than handing everyone the same one and hoping for the best.


This is a working theory and I want to be transparent about that. The nature of AI development, combined with what we know about how people learn and adapt, suggests that taking a fixed and firm position on this isn’t realistic. But I think the principle holds: start with the work, then select the tool — not the other way around.


I’ll be stress-testing this framework against credible sources and real client work over the coming weeks, and I’ll share what I find.


My call to action for People Leaders

If you’re a People Leader reading this, the question worth sitting with is a simple one: is your AI strategy built around a tool, or around your people?


If it’s built around a tool, ask yourself how many times that tool has already changed since you chose it — and how many times your team has had to adjust as a result. Then ask yourself whether that pattern is going to slow down.


I’ll answer that for you. It isn’t.


The organisations that will get the most from AI over the next few years won’t be the ones that picked the best tool first. They’ll be the ones that built a workforce capable of picking up any tool and using it well.


That’s the lens shift, and if People Leaders aren’t the ones driving it, it’s hard to see who else will.


