I’ve been thinking a lot about AI strategy lately. Not the tools or the models - the strategy itself.
Specifically, I wondered why so many companies seem to be making progress on paper but are not getting the kind of results they expected. From the outside, it all looks pretty good. Pilots are running. Outputs are being generated. There’s a lot of activity. But something about it doesn’t quite add up.
And the more I think about it, the more I’ve come to believe that a lot of these efforts are running into trouble much earlier than people realize, often before anything that really looks like AI is even in place.
The Myth: “Every Company Needs an AI Strategy”
I was reminded of this recently when I came across a post on LinkedIn from a venture capitalist that said, plainly:
*[Screenshot of the LinkedIn post. Caption: “Looks like progress. Still needs the right formula.”]*
It wasn’t a surprising take. In fact, I’ve seen variations of this sentiment repeatedly over the past year across posts, panels, and conversations.
The message is consistent: AI is no longer optional. It’s expected. And so companies respond the way you’d expect:
- AI initiatives are announced
- Tools are evaluated
- Pilots are launched
- “Strategies” are documented
On paper, progress is happening everywhere. But beneath the surface, a different reality is playing out.
Many of these same companies, despite the urgency, the investment, and the intent, struggle to move beyond early experiments: results are inconsistent and insights are questionable. Eventually, momentum stalls, not because they lack ambition or chose the wrong model, but because they’re solving for AI before they’ve solved for something far more fundamental.
The problem isn’t the strategy. It’s the foundation.
The Hidden Bottleneck: Data Access
For most companies, the first real obstacle to executing on an AI strategy isn’t the model. It’s the data.
AI depends on pulling information from across the business in real time. Customer data in your CRM system. Financial data in your ERP system. Support data in your ITSM system. Marketing data in automation platforms. Each system holds part of the story, but no single system tells the whole thing.
In theory, this sounds straightforward. In practice, it rarely is.
Each system has its own API, its own schema, and its own way of handling access and permissions. Even simple questions can require stitching together data from multiple sources, each with different structures and constraints. What starts as a quick experiment often turns into a series of custom integrations, brittle pipelines, and one-off queries that are difficult to maintain.
The result is predictable. Projects slow down. Data becomes inconsistent across environments. Teams spend more time moving and preparing data than actually using it.
This is the first layer most AI strategies run into, whether they recognize it or not: access to your data.
Can you reliably reach your data across systems, without friction or constant rework? Until that question is answered, everything that follows is built on shaky ground.
The Missing Layer: Data Architecture
Even when companies solve for access, they still don’t get reliable answers because access alone doesn’t make data usable. AI doesn’t just need data - it needs data that is shaped in a way it can reason over. That means combining information across systems, aligning it in time, and presenting it as a coherent view of the business.
This is where data architecture comes in.
Most operational systems are designed to run the business, not to answer cross-functional questions. Your CRM tracks pipeline. Your ERP tracks invoices and orders. Your support systems track tickets. Each system is internally consistent, but none of them are designed to work together out of the box. So when an AI system tries to answer even a straightforward business question, it runs into a problem.
Consider a simple prompt:
Which customers need immediate attention for revenue impact?
To answer that correctly, you need:
- Open opportunities from your CRM
- Unpaid invoices from your ERP
- Orders that are in progress or pending fulfillment
- Historical context to understand trends and risk
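The assembly those bullets describe can be sketched in a few lines. Everything below is hypothetical sample data standing in for CRM, ERP, and fulfillment extracts; the point is only that answering the prompt is a cross-system join on a shared customer key, not a single query against any one system.

```python
# Hypothetical extracts from three systems, keyed by customer.
# All names, fields, and figures are invented for illustration.

crm_opportunities = [  # CRM: open opportunities
    {"customer": "Acme", "open_pipeline": 120_000},
    {"customer": "Globex", "open_pipeline": 45_000},
]
erp_invoices = [  # ERP: unpaid invoices
    {"customer": "Acme", "unpaid": 30_000},
    {"customer": "Initech", "unpaid": 8_000},
]
pending_orders = [  # Fulfillment: orders still in progress
    {"customer": "Globex", "orders_pending": 3},
]

def merge_by_customer(*sources):
    """Fold every source into one record per customer (a full outer join)."""
    view = {}
    for source in sources:
        for row in source:
            view.setdefault(row["customer"], {}).update(
                {k: v for k, v in row.items() if k != "customer"}
            )
    return view

customer_view = merge_by_customer(crm_opportunities, erp_invoices, pending_orders)

# Rank customers by revenue at risk: open pipeline plus unpaid balance.
at_risk = sorted(
    customer_view.items(),
    key=lambda kv: kv[1].get("open_pipeline", 0) + kv[1].get("unpaid", 0),
    reverse=True,
)
for customer, fields in at_risk:
    print(customer, fields)
```

Even this toy version surfaces the real issues: some customers appear in only one system, fields don’t line up, and nothing guarantees the three extracts describe the same moment in time.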
Now look at what happens without the right architecture in place.
- If you rely only on live operational data, you get an incomplete picture. Some systems lag. Others don’t expose the right fields in real time.
- If you rely only on snapshots or a data warehouse, the data is already stale by the time you query it.
- If you try to stitch systems together on the fly, you often end up with mismatched entities, duplicate records, or conflicting results.
In all three cases, the AI produces an answer. It may even sound confident. But it isn’t reliable.
This is the second layer most companies overlook: the structure of your data.
Is your data shaped correctly for AI to reason over it? In practice, this often means combining live operational data with curated snapshots in an operational data store or data lake. Not as a replacement for source systems, but as a way to create a consistent, time-aware view of the business.
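One way to picture that layer is a rough overlay pattern: keep a curated snapshot for consistency, and layer fresher live records on top where they exist, tracking when each value was observed. The systems, fields, and dates below are invented for illustration; this is a sketch of the idea, not a reference implementation.

```python
from datetime import date

# Hypothetical: a curated nightly snapshot and a partial set of live rows
# pulled from the operational API, both keyed by customer.
snapshot = {  # consistent as of last night
    "Acme":   {"balance": 30_000, "as_of": date(2024, 5, 1)},
    "Globex": {"balance": 12_000, "as_of": date(2024, 5, 1)},
}
live = {  # fresher but incomplete
    "Acme": {"balance": 25_000, "as_of": date(2024, 5, 2)},
}

def current_view(snapshot, live):
    """Overlay live records on the snapshot, keeping the freshest per key."""
    view = dict(snapshot)
    for key, row in live.items():
        if key not in view or row["as_of"] >= view[key]["as_of"]:
            view[key] = row
    return view

view = current_view(snapshot, live)
print(view)
```

The payoff is that every value in the merged view carries an `as_of` date, so downstream reasoning can be time-aware instead of silently mixing yesterday’s numbers with today’s.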
Without that layer, AI isn’t generating insight. It’s assembling fragments.
The Silent Killer: Accuracy and Trust
Even with the right data architecture in place, there’s still one more problem that most companies underestimate: accuracy. After all, it’s easy to assume that once data is accessible and well-structured, the answers will be correct. In reality, that assumption breaks down quickly.
In a recent analysis, CData evaluated how different approaches to AI-driven data access performed when answering common business questions across systems like CRM and ERP. The results were not close.
- When the data access layer was tightly controlled and purpose-built, accuracy was approximately 98.5 percent.
- Other approaches, including more generic methods of querying across systems, landed closer to 60 to 75 percent.
At first glance, that gap may not seem catastrophic. But the real issue shows up when you look at how AI systems actually operate.
Most meaningful prompts are not single-step queries. They involve multiple steps. Data is retrieved, filtered, joined, and interpreted before a final answer is produced. And with each step, accuracy compounds.
If you assume a median 68 percent accuracy rate and apply it across just three steps, the effective reliability of the final answer drops to roughly 31 percent. That means the system is wrong over two-thirds of the time. Worse, it doesn’t signal uncertainty. It presents those answers with confidence.
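That compounding is easy to verify. The sketch below just raises a per-step accuracy to the number of steps, which assumes each step must be independently correct - a simplification, but it reproduces the figures above.

```python
def compounded(accuracy: float, steps: int) -> float:
    """End-to-end reliability when every step must be correct independently."""
    return accuracy ** steps

# The 68 percent figure from the text, compounded over three steps.
print(round(compounded(0.68, 3), 3))   # 0.314, roughly 31 percent reliable
# Compare a tightly controlled, 98.5-percent-accurate access layer.
print(round(compounded(0.985, 3), 3))  # 0.956
```

The asymmetry is the point: a high-accuracy layer degrades gracefully over multiple steps, while a middling one collapses.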
This is the third layer most companies overlook: the trustworthiness of your results.
If the answer isn’t consistently accurate, it doesn’t matter how fast it was generated or how polished it looks.
AI systems are capable of producing outputs that appear entirely credible. They include trends, explanations, and supporting context that feel complete and well-reasoned. In many cases, they look like something you could confidently present to senior leadership.
But that doesn’t mean they’re correct. The numbers can be wrong. The trends may not exist. The explanation can be convincing without being grounded in reality. And that’s the real risk.
AI doesn’t fail in obvious ways. It fails in ways that look plausible enough to trust, which means inaccurate outputs don’t get caught and discarded: they get used.
The Reframe: AI Strategy Is a Data Readiness Strategy
At this point, a pattern should be clear.
Companies don’t struggle with AI because they lack tools or ambition. They struggle because the data those systems depend on isn’t ready.
What is often called an “AI strategy” is, in practice, a data readiness problem.
It comes down to three things that must exist together:
- Access to your data. Can you reliably reach data across your systems without constant rework?
- The structure of your data. Is that data shaped in a way that allows AI to reason over it correctly?
- The trustworthiness of your results. Are the outputs accurate enough to support real decisions?
All three are required. Not sequentially. Not eventually. Together.
Most organizations over-index on the visible parts of AI. Models, interfaces, and tools get the attention because they are easy to demo and easy to explain. But the harder, less visible work happens underneath: connecting systems; shaping data into consistent views; ensuring accuracy across multiple steps and sources. That work is what determines whether AI produces insight or noise.
This is why so many initiatives stall. Not because the models aren’t capable, but because the data they depend on is incomplete, inconsistent, or unreliable. Companies are entirely focused on AI models while the real problem is the data those models are being asked to interpret.
And until that gap is addressed, progress will continue to look promising on the surface while failing to deliver meaningful outcomes.
Stop Running Science Experiments
There’s no shortage of AI activity in the market today. Pilots are being launched. Tools are being evaluated. Internal demos are being shared. On the surface, it looks like rapid progress.
But much of it has something in common. The results are inconsistent. The outputs are questioned. And over time, usage drops off.
It doesn’t happen because the ideas were wrong or the technology isn’t capable. It happens because the foundation was never fully in place.
- Without reliable access to your data, systems can’t see the full picture.
- Without the right data architecture, they can’t interpret that picture correctly.
- Without consistent accuracy, the answers can’t be trusted.
And when trust breaks down, adoption follows. What remains are isolated experiments that never quite make it into day-to-day decision making.
If the goal is to make AI part of how the business actually operates, then the focus has to shift: not to better prompts or more tools, but to making the underlying data usable, consistent, and reliable. That is what determines whether AI produces real insight or just convincing output.
If your AI strategy doesn’t explicitly address access to your data, the structure of your data, and the trustworthiness of your results, it isn’t a strategy. It’s a science experiment.
