80% of enterprise AI projects fail, according to Harvard Business Review. That's nearly double the ~42% failure rate of traditional IT projects. And if you've worked on one, you probably already suspected as much.
Traditional technology adoption has always had what were once considered high failure rates, at around 42%. Add digital, data, or AI to the mix, and that rate doubles.
The misdiagnosis
When AI projects fail, most people blame the technology. The model wasn't accurate enough. The data pipeline broke. The vendor oversold their platform. The team didn't adopt it because they feared being replaced. These explanations feel satisfying because they're specific. They give you something to point at, and some may even be true.
But Boston Consulting Group's research, and probably millions of employees silently screaming "I told you so", tells a different story. After analyzing thousands of AI implementations, BCG found that 70% of the obstacles come from people and processes.
Another 20% come from infrastructure issues. Only 10% are about the AI or algorithms themselves.
This pattern isn't unique to AI. Go back to any major technology wave over the past thirty years: ERP implementations, cloud migrations, digital transformation programs. The same breakdown keeps showing up. Organizations consistently overestimate the technical challenge and underestimate the organizational one.
The difference with AI is that it makes the organizational failure more visible, especially during these hype waves. A botched ERP implementation can limp along in the shadows for years before anyone admits it's broken. An AI project that doesn't deliver results is obvious within months. The feedback loop is faster and more brutal than with any other technology before it.
Getting worse, not better
What's worrying is that the problem seems to be accelerating.
S&P Global's enterprise AI survey found that 42% of companies abandoned most of their AI initiatives in 2025. That's up from 17% in 2024. In one year, the abandonment rate more than doubled.
The trajectory is unacceptable, though not surprising. Organizations are getting worse at this despite spending more money, hiring more talent, and buying more platforms and tools. Investment in AI keeps increasing, but the results aren't keeping pace.
A pattern we've seen before
In my career working on digital, data, and AI transformation programmes, helping organizations make technology choices in industries from financial services to manufacturing, I heard the same thing everywhere: you have to make these decisions with incomplete information. That's just how it works.
Leadership decides based on what they know, which is filtered through management layers. Vendors pitch based on what their platform does well. Consultants interview a few experts and executives and write a report. Everyone operates on partial information and assumes that's the best anyone can do.
I've always had to begrudgingly accept that and move on. The same pattern repeats across every industry and every technology wave: capable people make what seem like good decisions at the time, decisions that turn out badly down the line because the information they needed was scattered across the organization, in places nobody thought to look, or there simply wasn't time to gather it.
That frustration became the seed for AI Readi: a way to address the structural flaws in how organizations approach technology adoption and transformation.
What the 7% do differently
McKinsey's 2025 research is telling. While 71% of organizations now use generative AI regularly, only about 7% have fully scaled their efforts with real business results.
Those 7% aren't working with better technology or bigger budgets than anyone else.
What McKinsey found was that they had a much clearer picture of their own organization before they started: which processes were actually ready for AI, where the dependencies sat, and which teams could absorb change. The organizational reality was mapped before resources got committed.
You've probably seen this yourself. Think back to any project that actually delivered: there was usually someone, or some process, that made sure the team understood how things actually worked on the ground before committing to a plan. The projects that went sideways? Almost always started with assumptions about the organization that turned out to be wrong, but nobody checked until it was too late.
Most businesses skip this step, going straight from "AI is important" to "let's run pilots" or "create an AI strategy" without understanding whether their organization can actually support what they're targeting.
What this means
The 80% failure rate comes down to organizations not understanding their own reality well enough to make good decisions about technology. Better models won't fix that, and neither will another consulting engagement. The ones that succeed took the time to look inward first, honestly and systematically, before committing resources.
Next week: where the disconnect between investment and results actually comes from, and why the traditional approaches to understanding organizational readiness keep falling short.
Sources
- 80% AI project failure rate — Harvard Business Review, "Keep Your AI Projects on Track" (Bojinov, 2023)
- 70% people/process, 20% infrastructure, 10% algorithms — Boston Consulting Group, "From Potential to Profit" (2025)
- 42% abandoning AI in 2025, up from 17% in 2024 — S&P Global Market Intelligence Enterprise AI Survey (2025)
- 7% fully scaled with results; 71% using GenAI regularly — McKinsey Global Institute, "The State of AI" (2025)
- $644 billion enterprise GenAI spending — Gartner forecast (2025)