Three types of friction your AI adoption plan hasn't accounted for.
- Glenn Martin

- Apr 1
- 9 min read
Most AI adoption plans are built around access — which tools, which licences, which training programme, which rollout timeline. These are the variables that appear in budget conversations and Board updates — but they are not the whole picture.
What rarely makes it into the plan is friction. Not resistance, which gets plenty of attention: there are workshops for that, plus change management frameworks and communications strategies designed to move people from sceptical to willing.
I am talking about something less immediately visible, but more practically experienced: the accumulation of small obstacles that sit between an employee and productive use of an AI tool.
Over the past six months, I have been using ChatGPT, Claude, and Copilot consistently and deliberately, with a focus on being tool-agnostic and outcome-led rather than committed to any one model. What I have discovered, through my own experience as a practitioner, is a taxonomy of friction that I suspect is playing out across organisations right now, and at a scale that most People Leaders have not yet accounted for.
I call them Set-up friction, Functionality friction, and Usage friction. None of them sound dramatic, and that is precisely the point.
Dramatic friction gets noticed; it triggers escalations, feedback surveys, and project reviews.
The friction I am describing is subtler than that — it erodes confidence, consumes time, and creates unnoticed attrition from tools that employees were never quite sure they had permission to struggle with in the first place.
If you are a People Leader responsible for AI adoption, rollout, or productivity planning, I want to offer you a lens that I do not often see reflected in the strategies I review. If I have experienced these frictions as a motivated practitioner, the question I am asking you to ask yourself is this: what is your workforce experiencing?
1. Set-up friction — the decision layer before first use
Set-up friction is the friction that exists before an employee has used the tool once. It is the practical and cognitive overhead of deciding which tool, at what tier, for what purpose. In most Enterprise organisations, that decision is made for the employee without adequate context; in start-ups, it is usually left entirely to the employee without adequate support.
My own experience of this was navigating the difference between subscription tiers on ChatGPT. The cost difference was not significant in isolation, but the minimum seat requirement on the Business tier raised the total cost to a point that needed justification — and that justification required me to understand the functional differences between tiers well enough to make the case for or against them. This is not a complicated problem, but it is a genuinely time-consuming one, and it is a version of a decision that many leaders or employees will face if your organisation has not made these choices clearly on their behalf.
At scale, set-up friction looks like this: a workforce where different employees are on different tiers of the same tool, with different access to features, producing different outputs, and unable to share context or build consistent workflows because the baseline is inconsistent. It also looks like employees defaulting to free tiers that limit what the tool can do, and then concluding — incorrectly — that the tool itself is not worth the investment.
Set-up friction is largely preventable. It requires procurement clarity, access decisions made with genuine understanding of functional differences between tiers, and onboarding that starts before the first login rather than after it.
2. Functionality friction — when the integration doesn’t deliver what you assumed
Functionality friction is what happens when the tool you have chosen connects to your existing systems less completely, or less usefully, than you expected. It is not a failure of the tool in isolation; it is a failure of the assumption that “integration” is a binary state — either the connection exists, or it doesn’t.
In practice, integrations exist on a spectrum. A read-only connection, a bidirectional sync, and a connection that allows an AI model to take action on your behalf are three very different things, and the difference matters enormously to how useful the tool actually becomes in practice.
My own example: I had been using Fathom as my meeting transcription tool and, when I transitioned to Claude as my primary working model, I switched to Granola because it had a Claude connector. My research suggested it would let me fully automate something I was doing partly by hand: moving meeting notes into Notion, generating a summary, and sending a follow-up email to meeting attendees. The logic was sound, but the reality was more complicated.
The specific issue was that Granola’s transcript export strips much of the speaker identification that is visible within its own interface. When I copied a transcript to pass to Claude, the granularity of individual contributions — who said what, in what order — was largely lost. This mattered because the quality of a meeting summary depends on being able to attribute contributions and follow-up actions to named individuals; without that, you get a coherent summary of what was discussed, but not a reliable record of what was decided and by whom.
The end result was that I needed to revert to Fathom, extract Granola from my workflow, and re-establish the connections I had removed. The total elapsed time was approximately three days (implement, test, decide). Not catastrophic, but disruptive enough to affect my output during that period, and costly enough in configuration effort and mental load that the original decision left behind a general sense of frustration.
Now scale that. An organisation that makes a tool-stack decision on the basis of integration assumptions or a small cohort pilot ends up with functionality that does not meet the requirements of the wider business. That is not a three-day disruption: the business experiences weeks of inconsistency, eroded confidence in the tools, and a reputational problem for the team or function that made the recommendation. The employees who tried the tool, found it wanting, and then returned to their previous behaviour will not necessarily distinguish between “the tool was the wrong choice” and “AI tools don’t work for me” — and that generalisation is much harder to undo than a tool decision.
Functionality friction is most effectively addressed before rollout, through structured integration testing that goes beyond confirming that a connection exists and examines what the connection actually enables, at what level of granularity, and under what conditions.
3. Usage friction — the hidden cost of getting started properly
Usage friction is the most underestimated of the three, because it is the friction that comes after the decision to adopt, and after the tool has been set up. It is the friction of adapting your working patterns to accommodate a new tool — which sounds straightforward until you recognise how much invisible infrastructure your existing working patterns are built on.
When I set up ChatGPT, the personalisation and instruction layer was relatively intuitive — I needed to be clear on what I wanted from the model and what I did not want, and the tool responded accordingly. The friction was low and the adaptation was quick. Copilot was similar. Claude Cowork was a different experience.
To use it well — to use it in a way that would genuinely support my working practice rather than simply add a tool to the stack — I needed to build a folder architecture that the tool could work within, configure Global instructions that reflected how I think and work, and test the Connectors I wanted to add in a systematic way. None of this is unreasonable, but all of it takes time.
I spent approximately four hours on this setup — mapping my primary folder structure, refining the Global instructions, testing integrations, and reviewing what I had built against what I actually needed. Four hours is not a large number in the context of a business tool investment, but it is an enormous number in the context of how most employees experience “AI adoption” — which typically involves a licence, a brief introduction, and an expectation that they will figure out the rest.
Is there a hack or shortcut? Yes: you could copy and paste a standard template for the folder architecture, but that will only get you so far, and generic templates lead to generic outputs.
Here is what I did not expect: that investment of time ultimately made me better at my own work. The process of building a folder architecture for the tool forced me to think more clearly about how I organise my work, what I keep, and what I discard. The tool created the conditions for a kind of deliberate reflection on my working practice that I would not have undertaken on my own.
That is a genuine and somewhat surprising benefit — but I had the time, the motivation, and the self-directed discipline to see it through. The majority of employees in your organisation do not have all three of those things simultaneously, and without them, the setup either does not happen properly, or it happens in a way that limits what the tool can do, leaving the employee with a diminished experience they are likely to attribute to the tool rather than the configuration.
Usage friction is about time, but it is not only about time. It is about protected space — the explicit permission to invest in learning properly, to configure thoughtfully, and to experiment without the pressure of an immediate productivity output. Without that space, employees make the minimum viable investment in setup, encounter limitations that are a function of that underinvestment, and form conclusions about the tool’s value that are almost impossible to reverse.
What these frictions look like at scale
None of the experiences I have described above are exceptional; they are ordinary, and that is what makes them important.
As a practitioner — motivated, curious, and actively choosing to make AI central to my working practice — I have encountered all three types of friction in the course of normal use.
The question for People Leaders is not whether their employees are encountering the same thing; it is how often, how invisibly, and with what consequences for company-wide adoption.
Friction does not scale linearly.
My three-day functionality disruption at an individual level becomes two weeks of inconsistency across a team, and two months of patchy adoption across a function.
Set-up friction that leaves employees on the wrong tool tier creates a ceiling for productivity that no amount of training will raise if the access problem is not addressed first.
Usage friction that is not accounted for in the time available for learning means that the configuration work never happens, and the tool is used at a fraction of its potential — by people who conclude, reasonably but incorrectly, that the potential was never there.
The adoption plans I see tend to focus on access, training completion rates, and reported confidence levels. These are useful measures, but they do not capture friction.
A high training completion rate in the presence of unaddressed set-up friction tells you that people attended the sessions; it does not tell you whether they have the right tool access to apply what they learned.
A high confidence score in the presence of unaddressed usage friction tells you how people felt leaving the workshop; it does not tell you whether they had four hours to configure the tool properly before using it in a live context.
Friction is structural. It does not resolve itself through good intentions or a compelling communications campaign. It resolves through deliberate design decisions made before rollout, sustained by ongoing support that treats the tool environment as a variable rather than a constant.
What People Leaders can do
There are three practical moves that will address friction before it compounds.
Audit access before you audit adoption. Before you measure how well your people are using the tools, examine what they are actually working with — which tier, which features, which integrations, and whether those baseline conditions are consistent enough to produce consistent outputs. A training programme built on top of inconsistent access is a training programme that will underdeliver, regardless of how well it is designed.
Test integrations at the workflow level, not the connection level. For any tool that depends on integration with your existing stack — whether that is a meeting tool, a project management system, a communication platform, or a file environment — test the integration by running a real workflow through it, end to end, before it goes to your employees. Confirm not just that the connection exists, but what it enables, at what level of granularity, and where the gaps are. The gap between what an integration appears to offer and what it actually delivers is where functionality friction lives.
Protect setup time and model it visibly. The configuration investment that makes a tool genuinely useful is not a one-off administrative task, it is a reflection and design process that shapes how the tool fits into real working patterns. If you want your employees to make that investment, you have to protect the time explicitly, make it a sanctioned part of the adoption process, and model it yourself. Tell your people what you set up, how long it took, and what you learned from doing it. Failure stories are more useful than success stories here, because they give people permission to find the process difficult, and they make the investment feel normal rather than exceptional.
The friction your AI adoption plan hasn’t accounted for is not in the tools. It is in the conditions you have or haven’t built around them.
That is yours to address.



