The AI decision has been made. Now it's your problem.
Glenn Martin · Mar 3 · 8 min read
This is the eleventh post in a series I write for People Leaders who want clarity, not hype. My aim is always the same: to take the parts of the AI conversation that feel technical, abstract, or simply overwhelming, and translate them into something a people leader can actually use.
This post is about a decision that has almost certainly already been made in your organisation, either with you or without you. It is about what that decision really commits your people to — and what you can do about it now.
The three (honest) ways companies choose a frontier model
How do leadership teams choose the AI model they are going to implement across their company? In my experience working with organisations of different sizes and sectors, this decision almost always follows one of three paths.
The logical choice. The company already has an enterprise licence with Microsoft or Google, and the associated model becomes the default. You have Microsoft — you choose Copilot. You are on Google Workspace — it is Gemini. No evaluation. No brief. The decision is made inside a procurement contract that predates the question.
The experimental choice. Your employees have already been exploring ChatGPT, Claude, Perplexity, and others on their own initiative. Your CFO now needs a formal commitment. The model that wins is usually the one most employees have heard of, or the one that came up in a leadership away-day conversation.
The budget choice. Cost defines everything. The organisation optimises for return on investment using a mix of models based on team need, preference, and individual cost centres. It looks strategic. It rarely is.
Some argue there is a fourth option: the strategic choice. But for that to be real, your leadership would need to be confident that the model you select today will meet the strategic requirements of the business in six, twelve, or eighteen months — and given the pace at which these models are changing, that is nearly impossible to predict with any certainty.
Here is the part that should concern you if you lead people: in most of these scenarios, the Chief People Officer was not in the room when the decision was made.
Why that decision has longer legs than you think
Choosing a frontier model feels like a technology decision. It is not. It is an organisational commitment that shapes what your employees can do, how they are trained, what data gets used, and what risks you carry — potentially for years.
I recently worked with an enterprise-level company that had taken what looked like a sensible approach: they gave their employees a choice between two frontier models. The intention was good. The execution revealed a problem that I see repeatedly. Employees were being trained to use the tools before anyone had seriously asked whether those tools were the right fit for their specific roles. People were forming opinions — and habits — based on a few hours of exploration and a limited understanding of what each model could actually do. The company had made a licence-level decision, added a bolt-on option, and called it a strategy.
This is the dependency problem.
Once employees are trained on a particular model, once your internal documentation and workflows are built around its interface, once your IT infrastructure is integrated with its API, the cost of changing course becomes significant — not just financially, but culturally. People resist retraining. Teams resist disruption. The model that was chosen for convenience becomes the model you are stuck with.
If you are a CPO reading this and you are currently mid-implementation, this is not a reason to stop. It is a reason to get into the room where the ongoing decisions are being made, and to start asking the questions that nobody else is asking on behalf of your people.
The layer tech is starting to talk about
Beyond the choice of frontier model, there is a second and more consequential decision on the leadership agenda in many organisations right now. It does not have a consistent name yet — some call it an orchestration layer, others call it an AI middleware layer or an enterprise AI fabric — but the simplest description is this: it is the layer that sits between the AI model and your company’s actual systems, data, and information.
Think of it this way. The frontier model — whether it is GPT-4, Gemini, or Claude — is a general-purpose engine. It knows a great deal about the world, but it knows nothing specific about your company: your policies, your people data, your organisational history, your decision-making patterns, your values as an employer. The orchestration layer is what connects the engine to your company’s world.
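To make that slightly more concrete, here is a minimal sketch, in code terms, of what an orchestration layer does. Everything in it is a simplification and an assumption on my part: the document store, the retrieval function, and the model call are hypothetical stand-ins, and a real enterprise layer would add permission checks, auditing, and data governance around every step.

```python
# A minimal, illustrative sketch of an orchestration layer.
# The documents, retrieval logic, and model call are placeholders,
# not a production design.

INTERNAL_DOCS = {
    "parental-leave-policy": "Employees are entitled to 26 weeks of parental leave.",
    "performance-review-guide": "Reviews run twice a year and focus on growth goals.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword lookup over internal documents.
    A real layer would use permission-aware search, versioning, and logging."""
    terms = query.lower().split()
    return [
        text
        for name, text in INTERNAL_DOCS.items()
        if any(term in name or term in text.lower() for term in terms)
    ]

def build_prompt(query: str) -> str:
    """Wrap the employee's question in company-specific context,
    so the general-purpose model answers from your world, not the internet's."""
    context = "\n".join(retrieve(query)) or "No internal context found."
    return (
        "Answer using only the company context below.\n\n"
        f"Company context:\n{context}\n\n"
        f"Question: {query}"
    )

# The finished prompt would then be sent to whichever frontier model the
# organisation has licensed, via a hypothetical call_model() function:
# answer = call_model(build_prompt("How long is parental leave?"))
print(build_prompt("How long is parental leave?"))
```

The detail matters less than the principle: the model only becomes useful, or risky, because of what this layer feeds it.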
This is where the frontier model becomes specific to your organisation. And this is where the people implications become significant.
What goes into that layer? From a People & HR context, it might include internal knowledge bases, HR policy documents, performance data, recruitment criteria, or historical people decisions. Who decides what goes in? Who governs what the model can access? Who is accountable when it produces an output based on information that is out of date, biased, or simply wrong?
These are not technology questions. They are People & HR governance questions. They are people questions. And in a large majority of companies I encounter, they are being answered by IT and engineering teams, without a People leader at the table.
The frontier model is a commodity — it will be updated, replaced, or superseded regardless of what you choose today. The layer that connects it to your organisation’s knowledge and memory is where real value, and real risk, lives. If you are not involved in how that layer is designed and governed, you are handing over a significant slice of people risk management to teams who are not thinking about it.
When fine-tuning enters the picture
Fine-tuning is the process of training a frontier model further on your organisation’s own data, so that it becomes more accurate and more useful for your specific context. In plain terms: you are teaching the model to think more like your company.
That sounds appealing. But it carries a risk that is rarely discussed in leadership briefings: if your organisational data reflects cultural problems, historical biases in how decisions have been made, or gaps in your policies, fine-tuning does not fix those issues; it encodes them, making them faster, more consistent, and harder to see.
It creates a visibility gap.
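To illustrate why, here is a deliberately simplified sketch of how fine-tuning data is often assembled. The records and field names are hypothetical; the point is that the "correct answers" the model learns are simply whatever was decided historically, fair or not.

```python
# Illustrative only: turning historical people decisions into
# fine-tuning examples. Nothing in this step asks whether those
# past decisions were fair; it only teaches the model to repeat them.

historical_decisions = [
    {"candidate_summary": "5 years experience, career break for caring duties",
     "outcome": "rejected"},
    {"candidate_summary": "5 years experience, no career break",
     "outcome": "progressed"},
]

training_examples = [
    {
        "prompt": f"Assess this candidate: {record['candidate_summary']}",
        # The label is whatever was decided at the time, bias included.
        "completion": record["outcome"],
    }
    for record in historical_decisions
]

for example in training_examples:
    print(example)
```

If the historical pattern was biased, the fine-tuned model will reproduce that pattern quickly, consistently, and without the paper trail a human decision-maker would leave.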
The timing question matters as well, because fine-tuning should not happen until an organisation has a clear understanding of how the model is actually being used day-to-day, where it is producing good outputs, and where it is not.
Most organisations are nowhere near that level of clarity yet, and the honest answer in most cases is: it is too early, and the groundwork has not been done.
The evaluation gap is your governance problem
The International AI Safety Report 2026 makes a point that should be required reading for every leadership team implementing AI right now.
Summarising this in my own layman’s terms: the tests used to evaluate AI models before they are released do not reliably predict how those models will behave in real-world use, at scale, under pressure. The inner workings of these models are not fully understood, even by the people who built them. New capabilities, and new failure modes, can emerge unpredictably.
For a CPO, the operational translation of this is uncomfortable but important: the model your employees are being trained to use today is not a fixed, fully understood product. It is a system whose behaviour may change, whose limitations are not fully mapped, and whose real-world performance in your specific organisational context has not been formally tested.
Add to this the institutional dimension the report identifies: AI developers have commercial incentives to move quickly and keep certain information proprietary. The pace of development creates pressure to prioritise speed over thoughtful and intentional governance. This is not a criticism of any particular company, but it is a structural reality of the fast-moving AI market.
What this means practically is that the organisation taking on liability for how this technology affects employees, decisions, and workplace outcomes is yours, not the vendor's. The question is not whether your model passed its pre-deployment tests; the question is whether your organisation has the governance infrastructure to monitor, report on, and respond to how it actually behaves inside your business.
Most do not. Yet!
The retrospective risk problem
I worked with a scale-up that made a decision I see more and more frequently: they chose a frontier model before they had a clear picture of how AI was already being used across their business.
By the time they formalised their approach, employees had already developed individual habits, preferences, and workarounds. Some were using tools the company had not sanctioned, some were putting data into systems that had no data governance in place, and some had already built their daily workflows around a particular model. This created a scenario where people resisted the change to another frontier model and a new way of working.
This is the retrospective risk problem: when formal governance arrives after informal behaviour is already embedded, you are not implementing a new policy; you are asking people to unlearn something they have already internalised, and to accept oversight of something they have been doing freely.
Good risk management around AI includes identifying vulnerabilities, assessing potentially risky model behaviours, and creating incident reporting processes. In reality, all of this is considerably harder to introduce after implementation than before it.
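As one small, hypothetical illustration of what an incident reporting process might capture, here is a sketch of a minimal incident record. The fields are my assumptions, not a standard; the real design work is deciding, with legal, IT, and People input, what gets recorded and who acts on it.

```python
# A minimal, hypothetical AI incident record. A real process would be
# designed with legal, IT, and People input, and would route each
# record to whoever owns workforce risk.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncident:
    reported_by: str
    model_in_use: str
    task_description: str   # what the employee was trying to do
    what_went_wrong: str     # e.g. outdated policy quoted, biased output
    data_involved: str       # what information the model had access to
    impact: str              # who or what was affected
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

incident = AIIncident(
    reported_by="hr-business-partner",
    model_in_use="enterprise copilot",
    task_description="Drafting a response to a flexible-working request",
    what_went_wrong="The model cited a policy that was replaced last year",
    data_involved="Internal HR policy documents",
    impact="An incorrect entitlement was quoted before a manager caught it",
)
print(incident)
```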
The change management challenge is compounded because the people you most need to engage are often the ones who are already most confident in how they use these tools. They do not think they have a problem, because their current experience says they do not.
This is not an argument for analysis paralysis; it is a signpost of increasing urgency. If governance is not already part of your AI implementation, the window to introduce it without significant friction is closing.
What good actually looks like
Organisations that are navigating this well are not necessarily using better models or spending more money; they are making different decisions about who is involved and when.
My suggestions for “what good looks like” are:
People leaders are in the room before implementation, not after. The CPO has a formal role in both the model selection conversation and the governance of the orchestration layer, not as a token voice but as a decision-maker with accountability for workforce risk.
Diagnosis precedes deployment. Before any model is formally adopted, there is a structured effort to understand how AI is already being used informally across the organisation. You cannot govern what you have not mapped and made visible.
Governance is built in, not bolted on. Incident reporting, monitoring for unexpected model behaviour, and regular review of what is feeding the orchestration layer are treated as standard operating practice, not a future phase.
Employee training is role-specific and honest about uncertainty. Rather than training everyone on a generic model overview, good organisations train people on the specific tasks the model will help with in their role, and are transparent about what the model does not do well, and what it might get wrong.
The question you should have asked at the start
The question that prompted this post was: how do leadership teams choose the frontier model they implement across their company? Having thought through what that choice actually means, I think it is the wrong question to start with.
The more important question — and the one most organisations have not yet answered seriously — is this: do we understand what we have committed our people to?
Not just in terms of the model’s features, but in terms of:
The dependency it creates and the data governance it requires.
The behavioural changes it is already producing (with or without formal training).
The risk you are carrying in the gap between what your pre-deployment testing showed and what is actually happening in your organisation every day.
The frontier model decision has been made. In most cases, it was made before you had the information you needed, and possibly before you were asked.
That is the reality of where most organisations are right now.
What you do next is still yours to shape.



