The Agent Illusion

There is a growing narrative that “building AI agents” is the next step for progressive People teams, and that if you are not experimenting with agents you are somehow lagging behind. I’m hearing this in leadership conversations, it is all over LinkedIn, and it is certainly filling vendors’ sales pipelines.


I call this the Agent Illusion.


The Agent Illusion is the belief that deploying AI agents will unlock productivity on its own, without redesigning workflows, cleaning up data, defining evaluation standards, or being explicit about where human judgement and review sits.


Whilst I don’t dispute that AI agents can deliver real gains, I challenge the idea that most organisations have built the conditions required for those gains to be realised.


What the evidence actually suggests

If you step away from marketing language and look at the research based on real case studies, a pattern emerges.

  • Only a small fraction of HR workflows are genuinely AI-ready.

  • Only a minority of organisations have moved agent initiatives beyond pilots.

  • A significant number of projects are expected to be cancelled because costs and risk outpace governance.


At the same time:

  • CEOs are publicly talking about the increased need for agent ecosystems.

  • Vendors are relabelling automation or chatbots as “agentic”.

  • Internal conversations are all about autonomous productivity.


In short, leadership appetite has moved faster than organisational readiness, and that gap is where the illusion lives.


Agents are not plug-and-play

An AI agent is not a clever prompt, and it is not simply “ChatGPT connected to a system”; it is a system that can plan and execute tasks across tools with some autonomy, within guardrails that you define. That requires work.


At minimum, it requires:

  • Clear problem definition

  • End-to-end workflow mapping

  • Integrated and reliable data

  • Escalation logic

  • Pre-defined evaluation criteria


Most People teams have not been trained to operate this way; they are still building AI literacy, debating governance, experimenting with generative tools, and in many cases measuring training attendance rather than measurable capability shift.


Moving from that position straight into multi-agent automation across HR systems is not bold transformation; it is skipping steps that are critical for success.


Three hype patterns

It helps to name what is happening and call it out when you see it. I’m seeing three patterns:


1. The super-agent fantasy

The idea that autonomous HR agents will run recruiting, onboarding and performance management end-to-end. In practice, the strongest results are (and will be) narrow and specific: HR service desk triage, interview scheduling, benefits queries, structured onboarding tasks. High-volume, well-defined processes respond well to automation; where complex human judgement is needed, agents can fail.


2. Vendor relabelling

Chatbots and workflow automations are being rebranded as agents. The language shifts faster than the capability, and leaders assume they are buying autonomy when they are buying assisted automation.


3. Inevitability framing

“Every employee will have an agent.” Perhaps they will, but “inevitability” is not strategy. The existence of a tool does not mean it is well-designed, well-governed, or improving outcomes.


What is actually true

Two things can be true at once.


Agents can deliver measurable gains. There are credible examples of:

  • Significant reductions in HR resolution time

  • High ticket containment rates

  • Faster onboarding

  • Recruiter workload reduction

  • Lower cost-to-serve


When scoped tightly, supported by clean data, and designed with clear escalation paths, agents remove repetitive work and free people to focus on higher-value decisions.


At the same time, most organisations are not structurally ready.


Common constraints include:

  • Fragmented data across systems

  • Immature governance models

  • Managers not trained to lead hybrid human-agent workflows

  • Employees who want oversight, not replacement


The illusion emerges when leaders fixate on other organisations’ success metrics without replicating the conditions that made safe scaling and adoption possible.


The real constraint: build logic

The missing capability is not access to agent technology; it is build logic.


Build logic simply means this: can your team define the problem precisely, map the workflow properly, clean and constrain the data, decide where humans must step in, and define how you will measure whether the system is actually performing? Without that, agents will automate confusion.
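To make “decide where humans must step in” concrete, here is a minimal sketch of explicit escalation logic for a hypothetical HR query agent. The topic categories, confidence threshold and route names are all invented for illustration; the point is that the rules are written down before the agent goes live, not discovered afterwards.

```python
# Minimal sketch of pre-defined escalation logic for a hypothetical HR agent.
# Topics, threshold and route names are illustrative, not a real product's API.

SENSITIVE_TOPICS = {"grievance", "disciplinary", "medical", "redundancy"}
CONFIDENCE_FLOOR = 0.75  # below this, the agent must not answer autonomously

def route(query_topic: str, model_confidence: float) -> str:
    """Decide whether the agent answers or a human takes over."""
    if query_topic in SENSITIVE_TOPICS:
        return "escalate_to_human"   # judgement-heavy cases never stay with the agent
    if model_confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human"   # low confidence means low-quality output risk
    return "agent_handles"

print(route("benefits", 0.92))   # routine and confident -> agent_handles
print(route("grievance", 0.99))  # sensitive regardless of confidence -> escalate_to_human
```

Even a sketch this small forces the useful conversations: which topics are sensitive, who owns the threshold, and who receives the escalations.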


Most failure modes are predictable:

  • An agent is bolted onto a process nobody has mapped clearly.

  • Data is pulled from inconsistent sources, so outputs are fluent but incomplete.

  • Escalation paths are vague, so sensitive cases are mishandled.

  • Managers lack the confidence to challenge or override outputs.

None of this is a model problem; it is a sequencing problem.


A serious counter-argument

I know competitive pressure and the drive for productivity gains are real. People leaders feel this daily as they watch tooling move quickly.


They start to think that People & HR could be sidelined if it does not actively participate in automation design.


I agree that waiting for perfect readiness is not sensible, but the question is not whether to experiment; it is how.


Think about your sequencing approach

If you want to avoid the Agent Illusion, the sequencing needs to look different.

Start with one high-volume, low-risk process and map it fully.


Be explicit about:

  • Inputs and outputs

  • Escalation paths

  • Success metrics

  • Acceptable error thresholds

  • Data quality requirements
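One way to force that explicitness is to write the scope down as a structured record before any build starts. The sketch below is purely illustrative (the process, metrics, thresholds and field names are all hypothetical), but it shows the discipline: every item in the list above gets a concrete value, and an empty value blocks the pilot.

```python
# Hypothetical scoping record for one narrow, high-volume, low-risk agent pilot.
# Every field and value here is invented for illustration; what matters is that
# each item is written down and owned before the agent is built.

pilot_scope = {
    "process": "interview_scheduling",
    "inputs": ["candidate_availability", "interviewer_calendars"],
    "outputs": ["confirmed_interview_slot"],
    "escalation_path": "recruiting_coordinator",   # who handles what the agent cannot
    "success_metrics": {"time_to_schedule_hours": 24},
    "acceptable_error_threshold": 0.02,            # e.g. double-bookings per slot
    "data_quality_requirements": ["calendars synced daily", "no free-text availability"],
}

# A pilot should not proceed with any scope item left undefined.
undefined = [key for key, value in pilot_scope.items() if value in (None, "", [], {})]
assert not undefined, f"Undefined scope items: {undefined}"
```

Nothing about this requires special tooling; a shared document works just as well. The value is in refusing to start until the blanks are filled in.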


Assign clear ownership for performance monitoring, and treat the agent like a new hire: onboard it properly, review its performance, refine its scope, and remove it if it is not adding value.


This is less dramatic than announcing transformation, but it is more likely to survive contact with reality.


The leadership risk

The biggest risk is moving forward without developing build logic.


Without that capability, every new tool looks impressive for a quarter and then quietly under-delivers.

There is also a strategic dimension to this.


IT, Operations and Finance are already building automation capabilities. They are mapping processes, integrating systems and thinking in terms of efficiency and control. If People and HR positions itself as a buyer of AI tools rather than a designer of work systems, it becomes downstream of decisions that shape job design, performance expectations and employee experience.


That is not just a productivity issue; it becomes a credibility issue.


Build logic is what keeps People & HR in the room when decisions are made about:

  • which tasks are automated

  • where escalation sits

  • how performance is defined

  • how risk is governed

  • what remains distinctly human


Invest in build logic, or risk becoming downstream of decisions you should be leading.
