
Most AI Strategies Fail Before the First Prompt

I’ve been thinking a lot about AI strategy lately. Not the tools or the models, but the strategy itself.

Specifically, I wondered why so many companies seem to be making progress on paper but aren’t getting the results they expected. From the outside, it all looks pretty good. Pilots are running. Outputs are being generated. There’s a lot of activity. But something about it doesn’t quite add up.

And the more I think about it, the more I’ve come to believe that a lot of these efforts are running into trouble much earlier than people realize, often before anything that really looks like AI is even in place.

The Myth: “Every Company Needs an AI Strategy”

I was reminded of this recently when I came across a post on LinkedIn from a venture capitalist that said, plainly:

Every private equity-backed company needs an AI strategy.

It wasn’t a surprising take. In fact, I’ve seen variations of this sentiment repeatedly over the past year across posts, panels, and conversations. 

The message is consistent: AI is no longer optional. It’s expected. And so companies respond the way you’d expect:

  • AI initiatives are announced
  • Tools are evaluated
  • Pilots are launched
  • “Strategies” are documented

On paper, progress is happening everywhere.  But beneath the surface, a different reality is playing out.

Many of these same companies, despite the urgency, the investment, and the intent, struggle to move beyond early experiments: inconsistent results, questionable insights. Eventually, momentum stalls. Not because they lack ambition or chose the wrong model, but because they’re solving for AI before they’ve solved for something far more fundamental.

The problem isn’t the strategy. It’s the foundation.

The Hidden Bottleneck: Data Access

For most companies, the first real obstacle to executing on an AI strategy isn’t the model. It’s the data.

AI depends on pulling information from across the business in real time. Customer data in your CRM system. Financial data in your ERP system. Support data in your ITSM system. Marketing data in automation platforms. Each system holds part of the story, but no single system tells the whole thing.

In theory, this sounds straightforward. In practice, it rarely is.

Each system has its own API, its own schema, and its own way of handling access and permissions. Even simple questions can require stitching together data from multiple sources, each with different structures and constraints. What starts as a quick experiment often turns into a series of custom integrations, brittle pipelines, and one-off queries that are difficult to maintain.

The result is predictable. Projects slow down. Data becomes inconsistent across environments. Teams spend more time moving and preparing data than actually using it.

This is the first layer most AI strategies run into, whether they recognize it or not:  access to your data.  

Can you reliably reach your data across systems, without friction or constant rework? Until that question is answered, everything that follows is built on shaky ground.

The Missing Layer: Data Architecture

Even when companies solve for access, they still don’t get reliable answers because access alone doesn’t make data usable. AI doesn’t just need data - it needs data that is shaped in a way it can reason over. That means combining information across systems, aligning it in time, and presenting it as a coherent view of the business.

This is where data architecture comes in.

Most operational systems are designed to run the business, not to answer cross-functional questions. Your CRM tracks pipeline. Your ERP tracks invoices and orders. Your support systems track tickets. Each system is internally consistent, but none of them are designed to work together out of the box. So when an AI system tries to answer even a straightforward business question, it runs into a problem.

Consider a simple prompt:

Which customers need immediate attention for revenue impact?

To answer that correctly, you need:

  • Open opportunities from your CRM
  • Unpaid invoices from your ERP
  • Orders that are in progress or pending fulfillment
  • Historical context to understand trends and risk

Now look at what happens without the right architecture in place.

  • If you rely only on live operational data, you get an incomplete picture. Some systems lag. Others don’t expose the right fields in real time.
  • If you rely only on snapshots or a data warehouse, the data is already stale by the time you query it.
  • If you try to stitch systems together on the fly, you often end up with mismatched entities, duplicate records, or conflicting results.

In all three cases, the AI produces an answer. It may even sound confident. But it isn’t reliable.

This is the second layer most companies overlook:  the structure of your data.

Is your data shaped correctly for AI to reason over it? In practice, this often means combining live operational data with curated snapshots in an operational data store or data lake. Not as a replacement for source systems, but as a way to create a consistent, time-aware view of the business.
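As a minimal sketch of what that layer does, here is a pure-Python illustration of merging a live CRM feed with a point-in-time ERP snapshot into one time-aware customer view. The record shapes, field names, and figures are all invented for illustration; a real implementation would sit in an operational data store or data lake rather than in application code.

```python
from datetime import date

# Hypothetical live CRM feed: open opportunities, queried in real time.
crm_live = [
    {"customer_id": "C1", "open_pipeline": 50_000},
    {"customer_id": "C2", "open_pipeline": 12_000},
]

# Hypothetical curated ERP snapshot: unpaid invoices as of a known date.
erp_snapshot = {
    "as_of": date(2025, 1, 31),
    "rows": [
        {"customer_id": "C1", "unpaid_invoices": 30_000},
        {"customer_id": "C3", "unpaid_invoices": 8_000},
    ],
}

def unified_customer_view(crm_rows, snapshot):
    """Merge live and snapshot data into one view per customer,
    carrying the snapshot date so consumers know how fresh it is."""
    view = {}
    for row in crm_rows:
        cust = view.setdefault(
            row["customer_id"], {"open_pipeline": 0, "unpaid_invoices": 0}
        )
        cust["open_pipeline"] += row["open_pipeline"]
    for row in snapshot["rows"]:
        cust = view.setdefault(
            row["customer_id"], {"open_pipeline": 0, "unpaid_invoices": 0}
        )
        cust["unpaid_invoices"] += row["unpaid_invoices"]
    for cust in view.values():
        cust["as_of"] = snapshot["as_of"]
    return view

customers = unified_customer_view(crm_live, erp_snapshot)
# C1 appears in both systems; C2 and C3 each appear in only one,
# yet all three land in a single, consistently keyed view.
```

The point is not the code itself but the shape of the output: one record per customer, aligned on a shared key, stamped with the snapshot date. That is the kind of view an AI system can reason over, rather than three disconnected query results.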

Without that layer, AI isn’t generating insight. It’s assembling fragments.

The Silent Killer: Accuracy and Trust

Even with the right data architecture in place, there’s still one more problem that most companies underestimate: accuracy.  After all, it’s easy to assume that once data is accessible and well-structured, the answers will be correct. In reality, that assumption breaks down quickly.

In a recent analysis, CData evaluated how different approaches to AI-driven data access performed when answering common business questions across systems like CRM and ERP. The results were not close.

  • When the data access layer was tightly controlled and purpose-built, accuracy was approximately 98.5%.
  • Other approaches, including more generic methods of querying across systems, landed closer to 60% to 75%.

At first glance, that gap may not seem catastrophic. But the real issue shows up when you look at how AI systems actually operate.

Most meaningful prompts are not single-step queries. They involve multiple steps. Data is retrieved, filtered, joined, and interpreted before a final answer is produced. And with each step, accuracy compounds.

If you assume 68% accuracy per step, roughly the midpoint of that range, and apply it across just three steps, the effective reliability of the final answer drops to roughly 31%. That means the system is wrong over two-thirds of the time. Worse, it doesn’t signal uncertainty. It presents those answers with confidence.
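The compounding is easy to verify. This back-of-the-envelope calculation assumes errors at each step are independent, so per-step accuracy multiplies across the pipeline:

```python
# Per-step accuracy compounds multiplicatively across a multi-step
# pipeline (assuming independent errors at each step).
def end_to_end_accuracy(per_step: float, steps: int) -> float:
    return per_step ** steps

# A purpose-built access layer at ~98.5% per step stays reliable:
print(round(end_to_end_accuracy(0.985, 3), 3))  # 0.956

# A generic approach at ~68% per step collapses:
print(round(end_to_end_accuracy(0.68, 3), 3))   # 0.314
```

A 30-point gap per step becomes a 64-point gap after three steps, which is why per-step accuracy matters far more than it first appears.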

This is the third layer most companies overlook: the trustworthiness of your results.

If the answer isn’t consistently accurate, it doesn’t matter how fast it was generated or how polished it looks.

AI systems are capable of producing outputs that appear entirely credible. They include trends, explanations, and supporting context that feel complete and well-reasoned. In many cases, they look like something you could confidently present to senior leadership.

But that doesn’t mean they’re correct. The numbers can be wrong. The trends may not exist. The explanation can be convincing without being grounded in reality. And that’s the real risk.

AI doesn’t fail in obvious ways. It fails in ways that look plausible enough to trust, which means inaccurate outputs don’t just get ignored: they get used.

The Reframe: AI Strategy Is a Data Readiness Strategy

At this point, a pattern should be clear.

Companies don’t struggle with AI because they lack tools or ambition. They struggle because the data those systems depend on isn’t ready.

What is often called an “AI strategy” is, in practice, a data readiness problem.

It comes down to three things that must exist together:

  1. Access to your data. Can you reliably reach data across your systems without constant rework?
  2. The structure of your data. Is that data shaped in a way that allows AI to reason over it correctly?
  3. The trustworthiness of your results. Are the outputs accurate enough to support real decisions?

All three are required. Not sequentially. Not eventually. Together.

Most organizations over-index on the visible parts of AI. Models, interfaces, and tools get the attention because they are easy to demo and easy to explain. But the harder, less visible work happens underneath: connecting systems; shaping data into consistent views; ensuring accuracy across multiple steps and sources. That work is what determines whether AI produces insight or noise.

This is why so many initiatives stall. Not because the models aren’t capable, but because the data they depend on is incomplete, inconsistent, or unreliable. Companies are entirely focused on AI models while the real problem is the data those models are being asked to interpret.

And until that gap is addressed, progress will continue to look promising on the surface while failing to deliver meaningful outcomes.

Stop Running Science Experiments

There’s no shortage of AI activity in the market today. Pilots are being launched. Tools are being evaluated. Internal demos are being shared. On the surface, it looks like rapid progress.

But much of it has something in common. The results are inconsistent. The outputs are questioned. And over time, usage drops off.

That doesn’t happen because the ideas were wrong or the technology isn’t capable. It happens because the foundation was never fully in place.

  • Without reliable access to your data, systems can’t see the full picture.
  • Without the right data architecture, they can’t interpret that picture correctly.
  • Without consistent accuracy, the answers can’t be trusted.

And when trust breaks down, adoption follows. What remains are isolated experiments that never quite make it into day-to-day decision making.

If the goal is to make AI part of how the business actually operates, then the focus has to shift: not to better prompts or more tools, but to making the underlying data usable, consistent, and reliable. That is what determines whether AI produces real insight or just convincing output.

If your AI strategy doesn’t explicitly address access to your data, the structure of your data, and the trustworthiness of your results, it isn’t a strategy. It’s a science experiment.

