The blocker between your People and AI is time

The cost of not trying is higher than the cost of trying and failing.


That’s the tension sitting underneath every AI conversation I have with people in organisations right now — and most of them haven’t named it yet.


Time is one of the most consistently cited blockers I hear when delivering AI training and workshops: whether it’s time to learn, initial set-up time, or a general sense of having no time to dedicate to learning, experimenting or exploring.


Without dedicated time, people start to develop concerns like:

  • AI will replace me

  • I need to be an expert before using AI

  • What if I get something wrong?


When we create narratives that lead to self-editing or blocking ourselves from taking action, the price of inaction becomes higher than the cost of trying and failing. That’s not a motivational line — it’s a practical reality.


Blockers that feel real

When I talk about what stops people using AI, there are typically two categories.


  • Tool-lens blockers are practical: friction in the tools themselves, such as set-up time, integrations, cost, token limits, or admin-level rights and permissions.


  • Human-lens blockers are more powerful; they shape how people think the tool will benefit them (or not), and whether their time investment will produce outputs worth the effort.


The human lens operates before, during and after tool use. It is the source of the most consistent friction, and therefore the one that requires the most attention when it comes to training, development and deployment.


Your AI strategy has to consider the human lens to have any chance of success.


Why? Because the innovative minority, the people in your business actively using AI day-to-day, generating the good news stories and sharing use cases across the business, are exactly that: a minority. On Rogers’ diffusion of innovations curve, innovators represent roughly 2.5% of a population and early adopters around 13.5%.


Together they make up a small but highly visible group, and that visibility can create a misleading impression of how broadly adoption is actually progressing, shifting leadership thinking toward “we’re on track” when the reality for the majority is often very different.


For most people in organisations, the pace of change is overwhelming. They don’t know how or where to start, and they are concerned about being replaced, doing something wrong, or needing a level of expertise before they can engage with AI at all.


In the same room as that 59%

In a recent training session, I asked attendees which mindset shift resonated with them most. 59% selected “I need to be an expert first.”


That’s not a small number. That’s the majority of a room full of professionals, telling you exactly where the friction lives — and it has nothing to do with the tools.


Shifting the mindset

With every technology evolution — from agricultural to industrial, industrial to digital, and now digital to AI — a mindset shift has been required. Some people are faster to adopt than others; these are your innovative minority, your early adopters.


But here’s what gets missed: each of those transitions didn’t just require new skills. They required a behavioural shift — a change in how people thought about their work, their role, and their relationship with the tools available to them. AI is no different; it just moves faster.


For the majority of people in your organisation, you need to reframe the mental-model shifts they have to make, helping them move from:

  • Anxiety to action — understanding that small, low-stakes experiments are the starting point, not expertise.

  • Resistance to repetition — recognising that consistent exposure, not a single training session, is what builds confidence.

  • Repetition to preparation — moving from reactive use to proactive integration; starting to anticipate where AI can help before the task arrives.

  • Self-editing to editor-in-chief — shifting from “I don’t think I should try this” to “I’ll try it, assess the output, and decide what to keep.”


How do you make these shifts happen? By showing people examples of others in similar roles who have already made them. Internal comparisons can work, but they can also create expectation-anxiety. In that case, focus on modelling behaviour rather than comparing outputs.


Sharing failure unblocks more action

It’s easy to share success stories, and they are certainly more enjoyable for the storyteller; but a story of failure can benefit the organisation as a whole far more.


In a recent training session, I shared something I had built that didn’t work the way I expected. My intention was to illustrate that deep learning only comes from doing — and that includes the jeopardy of failing, or not getting the result you wanted.


The story got a few laughs and plenty of knowing nods about the model hallucinating and its configuration creating more friction than it removed; but what I sensed overall was relief. People were listening to someone they considered an expert, and they were hearing — in all the uncomfortable details — how I’d failed.


It was relatable. It was human. It was permission to try and fail — with one clear message: take the learnings forward and use them.


Failure is data; it’s not a disaster.


Commitment to action

Your AI strategy — and the training that underpins it — should move through three stages, and it has to end with a clear call to employees: Blocker → Reframe → Action.


Introducing new habits or actions into a routine that is time-pressured requires intention and proportionality. You can’t insert an action that needs 45 minutes into a schedule that can only accommodate 15. Every new action needs a framework that provides:


  • A capability or skill level proportionate to the action.

  • Tools and systems that enable it with zero or limited time-friction.

  • A timeline to trial the implementation.

  • A method of recording the effectiveness or output.

  • A check-in with a neutral person or co-worker to provide a feedback loop.


Without this framework, there will be no real measure of the learning or growth potential. Track the habit, or it won’t stick.


Call to Action for People Leaders

Your people are not failing to adopt AI because they lack intelligence or ambition.


They are failing to adopt it because the conditions for adoption have not been built.

Time is structural. If your working patterns don’t protect space for learning, experimenting and failing safely, your AI adoption will rest entirely on the shoulders of a small minority — the ones already doing it, already visible, and already creating a false picture of where the majority actually are.

Three things you can do this week:


Model failure first. Share something that didn’t work. A prompt that produced hallucinated output. A tool that created more friction than it solved. A workflow you had to abandon. You don’t need to perform vulnerability — just be honest about the reality of learning something new. Your people are watching what you do, not what you say.


Build a proportionate action. Don’t ask people to commit to 45 minutes they don’t have. Identify one task, one tool, one 15-minute window. Pair it with a clear skill level, a short timeline to trial it, and a simple way to record whether it worked. Structured small beats unstructured ambitious.


Create a check-in, not a checkpoint. The difference matters. A checkpoint measures output; a check-in creates the feedback loop that turns a one-off experiment into a repeating habit. Pair people with a neutral colleague or line manager who asks one question: what did you learn?


The shift from blocker to action doesn’t happen because people decide to be braver; it happens because the environment makes action easier than inaction.


That’s your job. As a People Leader, you are the Translator of AI.
