Prompting isn’t typing. It’s Design Thinking – and your budget will feel the difference in 2026
- Glenn Martin

Most teams still treat prompting as "typing with better outcomes." That was harmless when GenAI was a novelty. It becomes very expensive in 2026.
Because prompting isn't a writing habit. It's a capability. And in 2026, capability becomes a cost driver.
The shift isn't philosophical. It's economic, and it's already moving faster than most leadership teams realise.
Economic Shift
The first wave of AI adoption was subsidised. Flat or fixed subscriptions. Unlimited usage (kind of!). An "all-you-can-eat AI" approach.
This was onboarding at scale, not sustainable economics.
Now reality is catching up.
Hybrid pricing models – subscription plus usage – are already creeping into the tools used by non-technical employees. Developers have been living in this world for years through token-based APIs. The rest of the organisation is about to join them.
McKinsey has already signposted the direction: as AI capabilities scale, vendors will need consumption-based pricing in the business model mix if they want to survive long term. Flat or fixed fees don’t align with rising compute demands, model complexity or expanding context windows.
The wider signals point the same way:
- Soaring data centre investment (an estimated $1-$2 trillion invested in 2025)
- Fragile unit economics (rising investment versus flat revenue models)
- A rapidly increasing appetite for AI tokens (which must be paid for somewhere)
The story is consistent. AI economics are tightening. Subsidised usage is ending. Someone has to cover the bill, and it will not be the vendors.
2026: The Tipping Point
Across the organisations I work with, the conversation has already shifted. It’s no longer “should we use AI,” but “how quickly and how safely do we embed it into real workflows.”
Pricing will track that shift.
Flat subscriptions made sense when people were experimenting. But as usage scales, agentic workflows become normal, and models are asked to do more than write emails, “all-you-can-eat” stops being economically viable.
This is why 2026 is shaping up to be the tipping point.
CFOs will begin treating LLM access as a variable cost line item, not a flat licence. They’ll see usage-based economics appear in seat pricing and overages.
Token usage management will go from an edge case to a genuine discipline.
If GenAI mirrors cloud – and all signals suggest it will – the default enterprise model becomes:
A base licence + metered usage.
Which means reality bites: the quality of your prompts will show up in your AI bill.
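To make the "base licence + metered usage" model concrete, here is a minimal sketch of how a hybrid seat price behaves once you exceed the included allowance. Every number in it (seat fee, allowance, per-million-token rate) is an illustrative assumption, not any vendor's actual pricing.

```python
def monthly_cost(tokens_used: int,
                 base_fee: float = 30.0,            # assumed flat seat licence ($/month)
                 included_tokens: int = 1_000_000,  # assumed allowance in the base fee
                 rate_per_million: float = 10.0) -> float:
    """Hybrid 'base licence + metered usage' cost for one seat."""
    overage = max(0, tokens_used - included_tokens)
    return base_fee + (overage / 1_000_000) * rate_per_million

# A careless prompter burning 5M tokens vs a disciplined one staying at 800k:
print(monthly_cost(5_000_000))  # 70.0  (base 30 + 4M overage at $10/M)
print(monthly_cost(800_000))    # 30.0  (inside the allowance)
```

Under flat pricing those two employees cost the same; under hybrid pricing one costs more than double the other. That is the mechanism by which prompt quality becomes a line item.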
Where capability meets cost: The Prompt Maturity Gap
Here’s the part most organisations are missing.
During the training I’ve delivered this year, I’ve been iterating a Prompt Maturity Scale. The capability gap between “I can type into a chatbot” and “I can design a prompt that reliably produces value” is widening, not shrinking.
Most employees still prompt as if they're using Google search – a retrieval mindset. Short. Vague. Under-specified. No constraints. No clarity. This is completely mismatched to systems that now behave more like collaborators than search engines.
Retrieval-era prompting in a usage-priced environment is a very efficient way to waste money.
Because in token economics, inefficiency compounds.
Long, unfocused outputs cost more.
Agentic workflows amplify cost.
Vague prompts generate rework, which generates more tokens, which generates a bigger bill.
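The rework loop can be sketched with a simple geometric retry model: if a vague prompt only produces an acceptable output one time in four, you pay for the failed attempts too. All figures here are illustrative assumptions, not measured benchmarks.

```python
def expected_tokens(tokens_per_attempt: int, first_pass_success: float) -> float:
    """Expected total tokens under a geometric retry model:
    on average 1 / p attempts before an acceptable output."""
    return tokens_per_attempt / first_pass_success

# Vague prompt: long, unfocused output, succeeds 1 time in 4
vague = expected_tokens(tokens_per_attempt=3_000, first_pass_success=0.25)   # 12000.0
# Well-scoped prompt: tighter output, succeeds 4 times in 5
scoped = expected_tokens(tokens_per_attempt=1_200, first_pass_success=0.80)  # 1500.0
print(f"{vague / scoped:.0f}x more tokens for the vague prompter")
```

Even with generous assumptions, the vague prompter consumes several times the tokens for the same result – and agentic workflows multiply that gap at every step.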
Capability is no longer a "nice-to-have." It's directly linked to spend and investment decisions.
Prompting isn’t typing. It’s Design Thinking.
When you remove the hype, prompting is nothing more or less than designing the interaction between a human and a model.
It’s intent.
It’s constraints.
It’s clarity.
It’s iteration.
It’s specifying the outcome you actually want rather than throwing words at a stochastic wall.
Design Thinking shows up naturally in strong prompting:
Frame the problem
Define success
Manage scope
Iterate intentionally
Improve the system through feedback
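One lightweight way to make those five moves habitual is a shared prompt template that forces intent, constraints, success criteria and scope to be stated explicitly. This is a hypothetical format for illustration, not a standard from any vendor or framework.

```python
def design_prompt(intent: str, constraints: list[str],
                  success_criteria: str, out_of_scope: str) -> str:
    """Assemble the design moves into one explicit prompt (illustrative format)."""
    return "\n".join([
        f"Task: {intent}",                                # frame the problem
        "Constraints: " + "; ".join(constraints),         # manage scope
        f"Success looks like: {success_criteria}",        # define success
        f"Out of scope: {out_of_scope}",                  # prevent drift and rework
    ])

print(design_prompt(
    intent="Summarise the Q3 report for the board",
    constraints=["max 200 words", "plain English", "no jargon"],
    success_criteria="three decisions the board can act on",
    out_of_scope="financial forecasting",
))
```

The value isn't the template itself; it's that every field left blank is a token-burning ambiguity made visible before you press enter.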
In a usage-based world, this isn't philosophical. It's financial.
Every unnecessary token has a cost.
Every unclear instruction has a cost.
Every poorly scoped task has a cost.
This is the part leadership teams have not yet internalised. AI capability is about to become a budget issue.
What Founders and People Leaders need to do now
If you’re running a company, department or team, the message is simple:
Treat prompt capability as a business competency.
Upskill your teams before usage-based pricing forces you into reactive behaviour.
Stop assuming “everyone can prompt” because everyone can type.
Build shared prompting standards before agentic workflows magnify inefficiencies.
Prepare for the moment when LLM usage sits alongside cloud, infrastructure and software as a cost category you must actively manage.
The pricing model is already shifting. Whether your teams are ready for it is another matter entirely.
The line I’ll leave you with…
If 2026 is the tipping point, the question is no longer whether your teams can use AI. It's whether they can think with it.



