Last week was about accessing the distributed knowledge inside your organisation. This week is about what you do with it: understanding how your operating model actually creates value, and what changes when AI enters those processes.
This opens a new track: value chain engineering. The first four weeks covered the adoption crisis and its root cause. The decision-making track (weeks five and six) covered where AI decisions actually get made. This week shifts to what happens when AI meets the processes it's supposed to improve.
AI at the edges
Look at where AI gets deployed first. Coding assistants, now at 51% enterprise adoption. Customer support chatbots at 31%. Meeting summarisation at 24%. These tools are widespread (enterprises have AI running across an average of three business functions) and relatively easy to justify.
They also share something beyond being easy to deploy: they don't need organisational context to function. A coding assistant doesn't need to understand which downstream team depends on the API it's helping build. A chatbot can handle the tail end of customer service without knowing how escalation patterns feed into the product development backlog. These tools work at the endpoints because they can operate without understanding how the broader system connects.
The problem starts when you move AI closer to the core — where processes connect across teams, where decisions depend on what happens upstream and downstream, where the right action changes based on context that lives in people's heads rather than in any system. 78% of large organisations use AI in at least one business function. Only 1% describe their implementations as mature.
The agentic acceleration
The ambition is growing faster than the foundations. A third of enterprise software is projected to include agentic AI by 2028, according to Gartner. Agents that schedule, route, approve, and execute across entire workflows, chaining decisions together without a human reviewing each step.
Every one of those decisions needs context. Which approval path applies to this case? What happens downstream if the agent routes differently than a human would? Agents don't just need data; they need the organisational context that tells them what the data means in this specific part of the value chain.
That context doesn't exist in most organisations' infrastructure. It lives in the judgment of the people doing the work — the same judgment that compensated for process gaps before automation removed it from the chain. 60% of AI leaders cite legacy system integration as their primary challenge for agentic AI; the same proportion cite governance and risk management concerns. Both challenges point to the same gap: the context layer that agents need hasn't been built.
I wrote a few weeks ago that only 12% of organisations report sufficient data quality and accessibility for their AI efforts. That was about data governance for conventional AI, where humans still review outputs. Autonomous agents make decisions on that same foundation at speed and scale, without the human judgment that used to compensate.
The conversation around agentic AI focuses on what agents can do, not what they need: a context layer built from how work actually flows. Less than 10% of enterprises have AI agents in production, while Gartner believes that more than 40% of agentic AI projects will be cancelled by the end of 2027 — driven by escalating costs, unclear business value, and infrastructure that can't provide the organisational context autonomous systems require.
Why AI makes old problems worse
Technology hitting process friction isn't new — it's a pattern I've covered before. But AI changes the mechanism in a specific way.
When a process has a flaw and a person runs it, the person compensates using context they carry: knowledge of which cases are exceptions, which steps don't apply in certain situations, which downstream teams need a heads-up. People are remarkably good at routing around dysfunction quietly because they understand the broader system, even when they can't fully articulate it.
AI runs the process as designed, without that context. At scale, consistently, without hesitation. And with agentic AI, those context-free decisions feed directly into the next context-free decision. The compensation layer that made flawed processes workable gets removed from the chain entirely.
42% of executives say AI adoption is "tearing their company apart" due to organisational friction. Most of that friction traces back to the same gap: technology that needs organisational context, deployed into infrastructure that doesn't provide it.
Understanding the value chain first
Value chain engineering inverts the typical sequence. Instead of starting with technology and asking where it fits, you start with the operating model and ask how value actually gets created. Where do the dependencies sit between teams and systems? What are the real handoff points, the ones people actually use rather than the ones in the documentation? What happens downstream when you change something upstream?
The instinct is to commission a consulting engagement or a process mapping exercise. I've written about why that falls short — both produce static snapshots that start ageing on delivery, capturing what's formally documented rather than what actually happens.
What organisations need instead is continuous visibility into how their processes interlock and create value. Something that stays current as teams restructure, priorities shift, and compliance requirements land. Organisations are adaptive systems; they don't sit still while you plan around them.
That's the principle behind AI Readi's value chain impact analysis. It draws on the distributed knowledge of the people doing the work — the same structured aggregation I described last week — and keeps that understanding current as their reality changes. When contributors across the organisation highlight how specific processes connect, where handoffs create friction, and which dependencies are fragile, the picture builds incrementally. No single person holds the full view; the system assembles it from the people closest to each piece. And when a team reorganises or a supplier changes, the understanding updates because the people in those roles are still feeding into it.
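To make the aggregation idea concrete, here is a minimal sketch of how a dependency picture could be assembled incrementally from individual contributor reports. This is a hypothetical illustration only — the class and method names are mine, not AI Readi's actual data model — but it shows the two properties the paragraph describes: no single contributor supplies the full map, and the picture updates when someone's view changes.

```python
from collections import defaultdict

class ValueChainMap:
    """Hypothetical sketch: build a process-dependency picture from
    individual contributor reports. Names are illustrative, not any
    vendor's real API."""

    def __init__(self):
        # contributor -> set of (upstream, downstream) handoffs they reported
        self._reports = defaultdict(set)

    def report(self, contributor, upstream, downstream):
        """A contributor records a handoff they see in their own work."""
        self._reports[contributor].add((upstream, downstream))

    def retract(self, contributor):
        """When a role changes, that contributor's old view is dropped,
        so stale edges don't linger in the assembled picture."""
        self._reports.pop(contributor, None)

    def edges(self):
        """The assembled picture: each handoff weighted by how many
        contributors independently confirmed it."""
        weights = defaultdict(int)
        for report in self._reports.values():
            for edge in report:
                weights[edge] += 1
        return dict(weights)

# No single person holds the full view; the system assembles it.
vc = ValueChainMap()
vc.report("ops_lead", "order_intake", "fulfilment")
vc.report("support_agent", "fulfilment", "returns")
vc.report("ops_lead_2", "order_intake", "fulfilment")  # independent confirmation
```

The edge weights are the useful part: a handoff confirmed by several people in different roles is likely real, while an edge only one person reports may be fragile knowledge — exactly the kind of context an autonomous agent would otherwise act without.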
That's the context layer that AI (and particularly agentic AI) needs before it can operate reliably inside your value chain.
The difference matters because selecting technology based on documented processes produces different choices than selecting it based on how work actually flows. And when agentic AI is the technology in question, the consequences of getting that foundation wrong propagate at machine speed.
But understanding your value chain is only half of it. AI impacts don't stop at the initiative boundary. They cascade through every connected process, team, and system, and most organisations don't see the full reach until the budget is committed.
Next week: the cascade effect, and how the people conducting your processes determine whether AI transformation actually works.
Sources
- 78% of large organisations use AI; 71% deployed generative AI; only 1% describe implementations as "mature"; average deployment across 3 business functions — McKinsey Global Survey on AI (2025)
- Code generation adoption 51%; customer support chatbots 31%; meeting summarisation 24% — Industry surveys compiled in consolidated enterprise AI research (2025)
- 42% of executives report AI adoption "tearing their company apart" due to organisational friction — Industry surveys (2025)
- 60% of AI leaders cite legacy system integration; 60% cite governance/risk concerns for agentic AI — Gartner (2025)
- "Over 40% of agentic AI projects will be scrapped by 2027"; 8.6% of enterprises with agents in production; 33% of enterprise software to include agentic AI by 2028 — Gartner, via Reuters (June 2025)
- "12% of organisations report sufficient data quality and accessibility for AI" — Precisely, AI Adoption and Data Program Success (2025); McKinsey Global Institute corroboration