Last week I argued that the knowledge needed to set good AI (and tech) direction doesn't sit where the authority to sponsor it does. Executive sponsorship is essential; setting direction based on executive knowledge produces systematically incomplete results. That creates an obvious follow-up: if leadership can sponsor but can't direct, where does the knowledge to direct actually live?
It's distributed across the organisation, in the people who do the work. And most of it can't be extracted by asking.
This is the second post in a track on where AI decisions actually get made. Last week covered why top-down direction misses organisational reality. This week: the alternative, and what structured access to distributed knowledge looks like in practice.
Where the knowledge actually lives
Think about what an AI initiative needs to get right. Whether a dataset is reliable enough for a particular use case. Which processes break under pressure and which workarounds people rely on. Where the informal handoffs sit between teams. Whether the documented process matches what actually happens at 4pm on a Friday.
This knowledge exists. Someone in the organisation knows each of these things. Usually several people. But it's scattered across roles, departments, and levels, and much of it is tacit: the kind of knowledge people have but struggle to articulate.
Michael Polanyi described this in 1966: "we can know more than we can tell." The operations manager who's been doing the job for eight years knows which supplier data to trust and which to double-check, but ask her to write that down and you'd get a fraction of what she actually knows. The workaround the support team built three years ago to handle a system limitation: it works, everyone uses it, and it's in nobody's documentation. The analyst who knows that the Monday numbers on the dashboard are unreliable until the batch job finishes at 11am.
This is the knowledge that determines whether an AI initiative will work in production. And it's invisible to anyone not doing the work. Hayek's point from 1945 (I've referenced it in weeks two and four) applies directly: this knowledge never exists in concentrated form. It's dispersed as incomplete fragments that only the people closest to the work possess.
Why asking doesn't work
The obvious response is: go ask them. Conduct interviews. Run workshops. Send surveys. And organisations do this all the time. The results are consistently disappointing, for reasons that have more to do with how knowledge works than with the quality of the questions.
People describe their work differently from how they actually do it. Organisational researchers call this the gap between declared and enacted processes. Ask someone to describe their workflow and you'll get the official version, the one that matches the process document. Watch them work and you'll see something different: shortcuts, informal checks, skipped steps that turned out to be unnecessary, added steps that nobody authorised but everyone relies on. The declared version is what they think you want to hear; the enacted version is what actually matters for a tech initiative.
Group settings make it worse. In workshops and brainstorming sessions, the first person to speak anchors the conversation. Seniority effects mean junior team members defer to senior ones even when they have better operational knowledge. People who disagree stay quiet because contradicting a manager in a group setting has social cost. The diversity and independence that make collective intelligence work (I covered this in week four) are exactly what group settings suppress.
What structured aggregation reveals
Nature solved this problem long before organisations existed. Ant colonies find optimal foraging paths without any ant knowing the full map; each follows local pheromone signals, and colony-level intelligence emerges from thousands of independent local decisions. Neural networks do something similar, aggregating millions of individually weak signals into coherent perception (no single neuron "sees" anything, but the network does). Immune systems work the same way, detecting threats through distributed sensing where individual cells respond locally and system-level defence emerges from the aggregation.
Complexity science calls this emergence: system-level intelligence arising from local interactions. The pattern, documented extensively in the Santa Fe Institute's work on complex adaptive systems, holds across biological, social, and economic systems. Value comes from the interaction of components, not from any single component's contribution.
The same principle applies to organisations. In week four, I described Surowiecki's four conditions for this to work in human groups: diversity, independence, decentralisation, and aggregation. When those conditions are met, the results are measurable: the Iowa Electronic Markets outperformed professional polling organisations 74% of the time, and Ford reduced forecast error by 25% with internal prediction markets. These systems work because they aggregate distributed information that no individual possesses.
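The statistics behind this are easy to demonstrate. Here's a minimal simulation (illustrative only, not drawn from any of the studies above): two hundred contributors each estimate a quantity badly, and the simple average of their estimates beats almost every individual, provided the errors are independent and roughly unbiased.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0   # the quantity being estimated (illustrative)
N_ESTIMATORS = 200   # independent contributors
NOISE = 30.0         # individual error scale: each person is quite unreliable

# Diverse, independent estimates: each one is noisy on its own.
estimates = [random.gauss(TRUE_VALUE, NOISE) for _ in range(N_ESTIMATORS)]

# Aggregation: the simple mean of the group.
crowd = sum(estimates) / len(estimates)

crowd_error = abs(crowd - TRUE_VALUE)
individual_errors = [abs(e - TRUE_VALUE) for e in estimates]
beaten = sum(err > crowd_error for err in individual_errors)

print(f"crowd estimate: {crowd:.1f} (error {crowd_error:.1f})")
print(f"crowd beats {beaten}/{N_ESTIMATORS} individual estimates")
```

The caveat matters: the average only wins because the errors don't share a direction. Let one loud voice anchor everyone's estimate and the errors correlate, which is precisely the failure mode of the workshop format described above.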
The practical application for AI adoption is bottom-up discovery: structured input from practitioners across roles, capability areas, and domains throughout the organisation. Each person evaluates independently, drawing on their own experience and local knowledge, and only then are the results aggregated: no workshops, no anchoring, no conformity effects.
What comes back is different from what interviews or surveys produce. Confidence-weighted aggregation surfaces three things at once: where people agree (consensus areas where the organisation can move forward), where they disagree (risk zones that need investigation before resources get committed), and how confident they are in their own judgments (separating strong convictions from educated guesses).
Two departments evaluate data readiness for the same AI use case. Technology rates it 7 out of 10. Operations rates it 3. In a workshop, one of those voices wins (usually the more senior one) and the other goes unheard. With structured aggregation, that disagreement is the most valuable output. It tells you exactly where to investigate before committing resources. And it surfaces a disconnect that would otherwise show up as a production failure six months later.
Disagreement tells you where to investigate; agreement tells you where to move forward.
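For concreteness, here is a minimal sketch of what confidence-weighted aggregation could look like. The scoring scale, field names, and the disagreement threshold are illustrative assumptions, not a description of AI Readi's actual model.

```python
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Rating:
    contributor: str
    score: float       # 0-10 judgment on the question at hand
    confidence: float  # 0-1 self-reported confidence in that judgment

def aggregate(ratings, disagreement_threshold=1.5):
    """Confidence-weighted aggregation: weighted score, spread, and a flag.

    The threshold is an illustrative assumption, not a calibrated value.
    """
    total_conf = sum(r.confidence for r in ratings)
    weighted = sum(r.score * r.confidence for r in ratings) / total_conf
    spread = pstdev([r.score for r in ratings])  # raw disagreement between raters
    avg_conf = mean(r.confidence for r in ratings)
    status = "investigate" if spread > disagreement_threshold else "consensus"
    return {"score": round(weighted, 1), "spread": round(spread, 1),
            "confidence": round(avg_conf, 2), "status": status}

# The example from the post: Technology rates data readiness 7, Operations 3.
ratings = [Rating("technology", 7, 0.8), Rating("operations", 3, 0.9)]
print(aggregate(ratings))
# {'score': 4.9, 'spread': 2.0, 'confidence': 0.85, 'status': 'investigate'}
```

Run on the example above, the weighted score of 4.9 on its own looks like a mediocre-but-workable rating. The spread of 2.0 is what reveals that two confident groups are looking at different realities, and that's the signal a single averaged number would hide.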
From discovery to impact
This is the approach we built into AI Readi's contributor model: structured bottom-up discovery that aggregates what the people closest to the work actually know, without filtering it through management layers or conformity effects.
But discovering what the organisation knows is only part of it. The next question is how AI flows through the value chain. Process understanding has to come before technology selection; when it doesn't, organisations automate dysfunction rather than creating value.
Next week: value chain engineering, and why the organisations that succeed map their processes before choosing their tools.
Sources
- "We can know more than we can tell": Michael Polanyi, The Tacit Dimension, University of Chicago Press (1966)
- Declared vs enacted processes: Chris Argyris & Donald Schön, "espoused theory" vs "theory-in-use," Organizational Learning: A Theory of Action Perspective, Addison-Wesley (1978)
- Distributed knowledge: Friedrich Hayek, "The Use of Knowledge in Society," American Economic Review (1945)
- Emergence in complex adaptive systems: Santa Fe Institute, "Foundational Papers in Complexity Science: Updated Edition" (2024)
- Four conditions for crowd intelligence: James Surowiecki, The Wisdom of Crowds, Doubleday (2004)
- Prediction markets outperformed professional polling 74% of the time: Berg et al. on the Iowa Electronic Markets, International Journal of Forecasting (2008)
- Ford Motor Company 25% reduction in forecast error: Bo Cowgill & Eric Zitzewitz, "Corporate Prediction Markets: Evidence from Google, Ford, and Firm X," Review of Economic Studies (2015)