Quick summary
Most AI projects fail. Up to 80% never deliver their intended outcomes due to common mistakes such as tackling the wrong problems, relying on poor-quality data, and ignoring data governance and privacy. Successful AI implementation depends on aligning AI initiatives with business goals, focusing on critical use cases, and making incremental improvements through the use of quality data, engineering expertise, and continuous learning.
By addressing root causes early and making sure AI projects are built with the right infrastructure, expertise, and feedback loops, New Zealand organisations can dramatically reduce AI project failure and achieve long-term value from artificial intelligence.
Artificial Intelligence implementation in businesses
Artificial intelligence is on every executive's radar. From chatbots to AI Agents, AI promises dramatic gains in productivity, customer insight, and innovation. But here's the sobering truth: most AI projects still fail.
According to global studies, up to 80% of AI and data science projects never deliver their intended outcomes. Some stall in proof-of-concept purgatory. Others unravel due to poor data quality, user resistance, or spiralling costs. As Canon Business Services ANZ's (CBS) Head of Data and AI, Raji Haththotuwegama, puts it: "Many people won't say it out loud, but most AI projects are quietly turned off within six months."
So, why is failure so common, and how can leaders avoid it? Here, we explore the real reasons AI projects fall over and offer guidance grounded in field-tested experience.
The AI hype cycle: Rushing in with no plan
Pressure to act is mounting. Boards are demanding AI strategies, vendors are overpromising, and competitors are making noise. But haste leads to shallow thinking.
"There’s a lot of pressure to adopt this new game-changing technology," says Raji. "You’re rushing into solution mode instead of understanding the problem. It’s a solution looking for a problem."
This is what leads to the "POC graveyard": a glut of pilot projects that go nowhere. They may demonstrate technical feasibility but lack a clear path to deployment or business value.
Mistake #1: Solving the wrong problem
AI is not a universal fix. Yet many projects begin without understanding what AI is actually good at, or where it adds the most value.
"The first thing to do is really understand the right use case," says Raji. "Understand the strengths and weaknesses of AI and then marry that to your biggest challenges. Don’t be afraid to go for the biggest challenge. Just carve off a small, achievable piece."
The CBS AI Accelerator Workshop helps clients identify and prioritise AI opportunities with real-world impact. The workshop brings together stakeholders from across the business (IT, operations, finance, customer service) to surface pain points and opportunities. Through a structured process, CBS consultants help participants assess each opportunity for value, feasibility, and scalability.
"It's not about trying to AI-ify everything," says Raji. "It’s about focusing on the right problems; ones that are valuable, solvable, and scalable."
The output? A prioritised roadmap that aligns technology investment with business value, while avoiding common pitfalls like poor data readiness or low stakeholder engagement.
Mistake #2: Waiting for perfect data
Data fuels AI. But aiming for perfect, enterprise-wide data quality before starting a project is a common trap.
"Instead of solving the entire data quality problem, ask: can we get the right data to solve this problem?" says Raji. "Otherwise, you’ll never get started."
He adds: "What you need is a modern data platform that makes it easy to curate, access, and maintain high-quality data over time."
CBS recommends a phased approach to data modernisation. Start with the data that matters most to your selected use case. Then build incrementally.
Rather than aiming to overhaul the full data estate, organisations should identify the specific data assets that serve a defined use case. From there, the focus shifts to enabling secure, timely, and relevant access.
"Choose a platform," says Raji, "that only helps create quality data and maintains it, enriches it, and makes it accessible to the right people at the right stage of the lifecycle
Crucially, this isn’t just a technology issue. It's about designing an ecosystem where multiple teams, from compliance to marketing to ops, can engage with data at the level they need. Raji’s team helps clients implement platforms that allow curated access, data lineage tracking, and stakeholder-specific insights to make governance and usability seamless.
This phased approach means every investment in data infrastructure is tied to a business outcome. As Raji puts it, "There’s no point building a beautiful warehouse of data if the customer wants something that isn’t even stored there."
Designing AI-ready data environments in incremental, outcome-focused phases enables organisations to avoid common bottlenecks while still laying the foundation for long-term maturity.
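To make the scoping idea concrete, a use-case data manifest might look like the minimal sketch below. All the names, sources, and fields are hypothetical placeholders for illustration, not a CBS template:

```python
# Hypothetical use-case data manifest: scope data work to one outcome
# rather than the whole estate. Every name and field here is illustrative.

use_case = {
    "name": "invoice_query_assistant",
    "business_owner": "accounts_receivable_lead",
    "required_assets": [
        {"source": "erp.invoices", "fields": ["invoice_id", "status", "due_date"],
         "freshness": "daily", "quality_checks": ["no_null_ids", "valid_dates"]},
        {"source": "crm.customers", "fields": ["customer_id", "segment"],
         "freshness": "weekly", "quality_checks": ["deduplicated"]},
    ],
    "access": {
        "readers": ["service_bot", "finance_analysts"],
        "lineage_tracking": True,  # record who changed what, and where data flows
    },
}

# Anything not listed here is out of scope for this phase; it can be
# curated later, when another use case actually needs it.
```

The point of a manifest like this is that every data-quality task it triggers is traceable to a business outcome, which is exactly what the phased approach demands.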
Mistake #3: Forgetting the humans
AI isn’t just a technical project. It impacts workflows, roles, and culture. Failure to engage users early often results in low adoption.
"You need to take a human-centred approach. What are the repetitive, boring tasks that AI can take off their hands? How does it make their work more valuable?"
He warns that adoption falters when users feel displaced rather than empowered. Instead, AI should augment human capability, not replace it.
Resistance is often emotional, not logical. If employees feel sidelined or suspect that automation is a prelude to redundancy, their engagement drops. Early involvement, transparent communication, and co-designing solutions with users can mitigate this risk.
Raji advocates involving frontline teams in the project lifecycle, from defining requirements to testing prototypes. His team also works with HR and change management leads to prepare employees for new ways of working.
This is where "human-in-the-loop" (HITL) design matters. Especially for business-critical tasks and high-stakes decisions in sectors like
finance, healthcare, or government, AI outputs must be reviewed, verified, interpreted, or actioned by people. And that’s not just a technical design decision. It affects cost, training, and trust, and there’s a high impact if the wrong decision is made. Defining the right checkpoints and escalation paths also ensures trust, accountability, and regulatory compliance.
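What a checkpoint might look like in practice is sketched below. It assumes the AI system reports a confidence score; the threshold, risk flags, and routing names are illustrative assumptions, not a prescribed design:

```python
# Minimal human-in-the-loop (HITL) routing gate: a sketch, not a
# production pattern. Assumes the model reports a confidence in [0, 1].

from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    AUTO_APPROVE = "auto_approve"  # low-risk, high-confidence: straight through
    HUMAN_REVIEW = "human_review"  # a person verifies before the output is used
    ESCALATE = "escalate"          # senior reviewer or domain expert signs off


@dataclass
class AIOutput:
    prediction: str
    confidence: float  # model-reported confidence, 0.0 to 1.0
    high_stakes: bool  # e.g. finance, healthcare, or government decisions


def route_output(output: AIOutput, review_threshold: float = 0.90) -> Route:
    """Decide which checkpoint an AI output must pass before it is actioned."""
    if output.high_stakes:
        # Business-critical outputs always get a human checkpoint,
        # however confident the model claims to be.
        if output.confidence < review_threshold:
            return Route.ESCALATE
        return Route.HUMAN_REVIEW
    if output.confidence < review_threshold:
        return Route.HUMAN_REVIEW
    return Route.AUTO_APPROVE


# A high-stakes decision is always reviewed; a confident routine
# classification flows straight through.
print(route_output(AIOutput("approve_loan", 0.97, high_stakes=True)))          # Route.HUMAN_REVIEW
print(route_output(AIOutput("invoice_category_travel", 0.98, high_stakes=False)))  # Route.AUTO_APPROVE
```

The design choice worth noting is that escalation is driven by risk as well as confidence: a confident model on a high-stakes decision still gets a human, which is where trust and compliance come from.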
As Raji says, "When people see how AI can amplify their strengths instead of replacing them, that's when adoption happens."
Mistake #4: Underestimating total cost of ownership
AI may save time eventually. But the upfront and ongoing costs are often underestimated.
"There are algorithms that can solve a problem brilliantly. But if it costs more than the existing process, it won’t get off the ground."
Consumption-based pricing, unpredictable token usage, complicated charging models, human oversight. It all adds up.
Initial proof-of-concept builds are often inexpensive thanks to low-code and citizen-builder tools. But enterprise-grade deployment introduces a raft of new costs: platform integration, data security controls, curating quality knowledge sources, AI Ops, performance monitoring, and ongoing refinement.
Then there's human review. "People often forget," says Raji, "in business-critical use cases every AI output still needs someone to verify, approve, or act on it. That cost doesn't disappear. It just shifts."
Organisations should model true total cost of ownership (TCO) across all stages, from prototype to production. This includes factoring in software licensing, model hosting, governance, user training, and organisational change management.
It also includes the cost of failure: what happens if the model makes the wrong decision?
A realistic TCO model allows businesses to evaluate whether AI-driven automation is truly worth it or whether a simpler rules-based solution would suffice. It also enables better budgeting and stakeholder confidence.
"You need to calculate what AI automates and the cost of human review, platform management, and continuous refinement. That’s your total cost of ownership."
Mistake #5: AI literacy gaps at the top
AI projects often falter because decision-makers don’t fully understand what they’re approving. "The C-suite often nods along, but if they don’t understand how the solution works or what the limitations are, it can lead to misalignment down the track," says Raji.
He urges leaders to ask three simple questions of any partner:
- Have you done this before?
- Have you rolled it out to production?
- Has it been running for at least six months?
"There are a lot of so-called experts. But very few AI solutions have been live in production for over 12 months. The whole field is still new."
Education matters. "We frequently deliver executive briefings and AI upskilling sessions designed to demystify concepts like LLMs, Copilots, and AI Agents. These sessions are tailored to board-level priorities: risk, ROI, and regulatory impact," Raji explains.
"If you want the right investment decisions, you need literacy at the top," he says. "Otherwise, your projects get hijacked by unrealistic expectations or missed risks."
You can bridge the gap between strategy and execution by embedding data translators: people who can connect business goals with AI solutions. This ensures leadership sets the vision while technologists execute against it.
Mistake #6: Ignoring the pace of change
The AI landscape evolves fast. What was state-of-the-art six months ago might now be obsolete.
"Vendors are pumping out features left, right and centre," says Raji. "You need a technology evaluation process. Is this new feature stable? Secure? Useful?"
Mid-project changes in APIs, pricing models, or regulation can derail momentum. Many teams lack a framework to evaluate updates or determine when to adopt versus wait.
"We encourage clients to build adaptive governance structures: cross-functional committees that regularly assess new capabilities, test in sandboxes, and maintain alignment with security, compliance, and budget controls," says Raji.
"You don’t want to be rebuilding every quarter because the foundation keeps shifting," he adds. "You need partners who understand the pace of innovation and how to manage it."
Flexible architecture, modular design, and vendor-agnostic tooling are key to future-proofing your stack while maintaining delivery cadence.
Mistake #7: No clear path beyond proof of concept
A successful pilot isn’t enough. Many organisations fail to define what success looks like upfront or what happens next.
"If the POC works, are you willing to spend money to deploy it? That’s the question that needs answering before you begin," says Raji. "Map it out. Have a plan. Secure commitment."
Too often, AI pilots are treated as experiments with no clear business owner or operational plan. Teams celebrate model accuracy but forget about integration, user training, change management, or KPIs.
The answer is designing with deployment in mind. From the outset, define:
- Who owns the AI solution after it goes live?
- What systems will it integrate with?
- How will performance be tracked?
- What’s the plan if the AI solution underperforms?
These questions aren’t just operational. They’re strategic. A great AI solution with no support structure is a missed opportunity. A functional model with clear governance can deliver value for years.
Don’t just build AI. Build the business around it.
AI is a powerful enabler, but only when applied with rigour, empathy, and a clear eye on business value. The real differentiator? Not the tools, but the thinking behind them.
"We need to stop entertaining fantasies about every company being ‘AI first,’" says Raji. "AI is a tool to deliver outcomes. It’s not the outcome itself."
Canon Business Services ANZ helps organisations move beyond AI hype and into execution, combining deep technical expertise with business pragmatism. Whether you're choosing a use case, modernising your data estate, or validating your POC, CBS brings the strategy, structure and support to make it work.
Ready to explore AI with eyes wide open? Let’s talk about what success really looks like.