The problem usually isn’t the tool.
Recently, I received a message on Teams after a customer demo. The technology worked and the demo went well, but the feedback was telling:
“Finance and sales feel like they wouldn’t know how to prompt… and they’re worried they won’t use it.”
I’ve heard some version of that concern more times than I can count.
On the surface, it sounds reasonable. If people don’t know how to interact with the system, adoption is going to be a challenge. So the natural reaction is to think in terms of training, or user experience, or simplifying the interface.
But that framing misses something important.
These are not inexperienced users. They understand their business. They work with data every day. They’ve been using systems like Salesforce, ERP platforms, and reporting tools for years. Asking them to “learn how to prompt” shouldn’t be the barrier it’s often made out to be.
Which raises a different question: if the technology works and the users are capable, why does this hesitation show up so consistently?
The Problem We Think We Have
At this point, most organizations arrive at a similar conclusion. If people don’t know how to prompt, then the solution must be:
- Better training
- Simpler interfaces
- More guided experiences
Those things can help, but they don’t address the root issue, because this isn’t really a usability problem. If it were, we’d expect to see confusion about how to interact with the system: where to click, what to type, how to structure a request.
But that’s not typically what’s happening. In most cases, users understand the mechanics almost immediately. Instead, the hesitation shows up in the pause before the first question. It shows up in uncertainty about what to ask and, more importantly, whether the answer will actually be useful.
That’s a very different problem, and it’s not unique to AI - it’s just more visible now.
Traditional systems hide this gap. Dashboards, reports, and predefined queries give us answers to questions that have already been decided for us. The structure is built in, so we rarely have to think about it. AI removes that structure and hands the responsibility back to the user. For many organizations, that’s where things start to break down.
We’re Asking the Wrong Questions
The issue isn’t that people don’t know how to prompt - it’s that they don’t know what questions are worth asking. That distinction matters more than it seems at first. I was talking to my daughter - who is a data analyst - about this recently, and she made an observation that stuck with me. She said most people use ChatGPT as a glorified Google search.
And she’s not wrong. A lot of AI usage today looks like:
- Retrieving known information
- Summarizing existing content
- Answering questions we already know how to ask
In other words, it’s being used to get to answers faster.
But that’s not where the real value is. The real value isn’t in the information - it’s in the analysis of that information. That’s the shift that hasn’t fully landed yet.
When AI is used like a search engine, it ends up replacing things we already had. It feels useful, but it doesn’t actually change how decisions are made. It just changes how quickly we get to the same answers. The real opportunity is in asking questions that:
- Connect data points that weren’t previously connected
- Surface patterns that aren’t obvious
- Challenge assumptions instead of confirming them
Questions like:
- What combination of factors tends to predict deal slippage?
- Which accounts look healthy on the surface, but share characteristics with accounts that eventually churn?
Those aren’t questions most systems are designed to answer directly. As a result, people aren’t used to asking questions like these. That’s why “prompting” feels hard - not because the interface is complicated, but because the thinking behind the question is.
This Isn’t a New Problem
If this dynamic feels new, it isn’t.
We’ve been here before, just in a more structured environment. As mentioned earlier, business users have relied on dashboards, reports, and predefined queries to understand what’s happening in their organization for years. Those systems work well, but they come with an important constraint: the questions have already been decided. Someone, somewhere, determined what mattered:
- Which metrics to track
- How to define them
- How to present them
Over time, those decisions became the default way the business sees itself.
The advantage of that model is consistency, but the downside is that it limits exploration. For example, you don’t typically ask:
- What am I missing?
- What patterns exist that I haven’t considered?
...because the system isn’t designed for that.
AI changes that dynamic. It removes the predefined structure and replaces it with something far more flexible. Instead of navigating a fixed set of reports, you’re now interacting with a system that can respond to almost any question, assuming you know how to ask it.
And that’s where the disconnect shows up. Not because people suddenly became less capable, but because the structure they’ve relied on for years is no longer doing the thinking for them.
The Stack Behind the Screen
Up to this point, it’s easy to focus on the interaction: the prompt, the response, the experience. But that’s only the top layer. Underneath is a stack of dependencies that determine whether AI is useful or just “interesting.”
At the foundation is the data itself - access, architecture, and accuracy - which I covered in more detail here. If any of those break down, everything built on top of it becomes questionable. This isn’t new either: it’s the same set of challenges that have always existed. AI just makes the consequences more visible.
On top of that is how the agent is designed.
- What context does it have?
- What assumptions is it making?
- How is it guided when the question is vague or incomplete?
Two systems with the same data can produce very different outcomes depending on how this layer is handled.
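As a hypothetical sketch of one narrow piece of this layer, consider how an agent might notice that a question is vague before it ever touches the data. The context categories, keyword hints, and wording below are all invented for illustration; a real agent would use the model itself for this, not keyword matching.

```python
# Invented sketch: detect missing context in a question and ask for it,
# rather than guessing. Categories and hint words are illustrative only.
REQUIRED_CONTEXT = {
    "time range": ["quarter", "month", "week", "q1", "q2", "q3", "q4", "90 days"],
    "segment": ["region", "team", "product", "account"],
}

def clarify(question: str) -> list[str]:
    """Return a clarifying question for each piece of context the prompt lacks."""
    q = question.lower()
    follow_ups = []
    for label, hints in REQUIRED_CONTEXT.items():
        if not any(hint in q for hint in hints):
            follow_ups.append(f"Which {label} should I use?")
    return follow_ups

print(clarify("How did sales perform?"))
# A fully specified question produces no follow-ups:
print(clarify("How did the region's sales perform last quarter?"))
```

The point isn’t the keyword list; it’s that this decision, how to respond when the question is incomplete, lives in the agent-design layer, not in the user’s prompt.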
And then there’s the part that gets the most attention: the human interaction.
- What is the user trying to understand?
- How clearly can they express it?
- Do they recognize when a result is incomplete or misleading?
This is where most of the friction shows up. But it’s also the layer that depends the most on everything beneath it. When organizations struggle with AI adoption, they tend to focus here, on the interaction. But by the time you’re troubleshooting prompts, the outcome has already been shaped by the layers below. And if those layers aren’t aligned, no amount of prompting guidance is going to fix it.
Making It Real
In previous sections, we’ve talked about “the thinking problem.” Here’s what it looks like in practice.
In most organizations, the first wave of AI usage tends to mirror existing habits. People ask the same questions they’ve always asked, just through a different interface.
- What’s my pipeline this quarter?
- How did sales perform last month?
- Which opportunities are closing this week?
Those are useful questions, but they’re also questions most companies already answer with existing reporting tools. Using AI for those tasks may feel modern, but it doesn’t materially change how the business operates.
Where AI starts to create value is when the question itself changes. Instead of asking for a number, you ask for a pattern. Instead of asking what happened, you ask what tends to happen next.
Questions start to look more like this:
- What combination of factors tends to predict deal slippage?
- Which accounts look healthy on paper, but share characteristics with accounts that eventually churn?
- What changed in the last 90 days among opportunities that moved from “likely” to “at risk”?
Those aren’t dashboard questions - they require context, relationships, and reasoning across multiple dimensions of the business. And they often reveal things you weren’t explicitly looking for.
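To make the “healthy on paper, churn-like in behavior” question concrete, here is a minimal sketch. The account names, behavioral features, and numbers are all invented; a real version would normalize features and draw on far richer signals.

```python
# Invented example: flag active accounts whose behavioral profile
# resembles accounts that previously churned, even if revenue looks fine.
import math

# Features per account: [login_trend, support_tickets_30d, feature_adoption]
churned = [
    [-0.8, 9, 0.2],
    [-0.6, 7, 0.3],
    [-0.9, 8, 0.1],
]

active = {
    "Acme":    [0.5, 2, 0.8],   # genuinely healthy
    "Globex":  [-0.7, 8, 0.2],  # healthy revenue, churn-like behavior
    "Initech": [0.3, 3, 0.7],
}

# Average profile of the churned cohort
centroid = [sum(col) / len(churned) for col in zip(*churned)]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Rank active accounts by similarity to the churn profile
at_risk = sorted(active, key=lambda name: distance(active[name], centroid))
print(at_risk[0])  # the account that most resembles past churners
```

Nothing here is sophisticated; the value is in the framing of the question. No standard dashboard is built to answer “who resembles the accounts we lost?”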
That’s the shift most teams haven’t made yet: they’re still treating AI like a faster way to retrieve known information, instead of using it to surface what they couldn’t see before.
Start With the Decision, Not the Tool
The starting point for an AI strategy can’t be the tool. It has to be the decision. Yet most organizations begin with a question like “How do we use AI in our business?” That sounds reasonable, but it usually leads to vague initiatives, scattered experiments, and a lot of effort with little measurable impact.
A better starting point is much simpler: “What decisions are we trying to make that we can’t make well today?” That question forces clarity. It anchors the conversation in outcomes instead of features, and it immediately exposes whether AI is actually the right fit.
If the decision is already supported by a dashboard, a report, or a standard workflow, AI may not add much value. But if the decision depends on:
- Patterns across disconnected systems
- Subtle changes over time
- Signals that are hard to see in isolation
...then AI can become genuinely useful. That’s the difference between adoption and impact.
Adoption is usage. Impact is helping people make better decisions because the tool revealed something they couldn’t see before.
And in the long run, impact is the only thing that matters.
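The first of the bulleted cases above, patterns across disconnected systems, can be made concrete with a tiny hypothetical example: neither the CRM renewal dates nor the support ticket counts look alarming on their own, but joining them surfaces a renewal that needs attention now. Account names and thresholds are invented.

```python
# Invented example: a signal visible only when two systems are joined.
crm_renewals = {"Acme": 30, "Globex": 200, "Initech": 45}  # days to renewal
support_tickets_30d = {"Acme": 12, "Globex": 14, "Initech": 2}

# Decision: which upcoming renewals need attention right now?
watchlist = [
    account
    for account, days in crm_renewals.items()
    if days <= 60 and support_tickets_30d.get(account, 0) >= 10
]
print(watchlist)
```

Each system answers its own question fine in isolation; the decision only becomes visible across both. That’s the kind of question worth anchoring an AI initiative to.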
A Different Way to Think About It
AI feels new. The interface, interaction, and expectations are different. But at its core, this isn’t a technology problem - it’s a thinking problem.
For years, systems have guided us toward answers. They defined the questions, structured the data, and presented the results in a way that required very little interpretation. That model worked well because it reduced ambiguity.
AI does the opposite. It introduces flexibility, which means it also introduces responsibility. The structure isn’t built in anymore. The value comes from how clearly we can define what we’re trying to understand.
That’s why prompting feels harder than it should. Not because it’s complicated, but because it requires intent. And once that shift happens, everything else starts to fall into place. The interaction becomes more natural. The results become more meaningful. And the technology starts to feel less like a novelty, and more like a tool that actually changes how decisions are made.
Talking to AI is easy.
Knowing what to ask... that is where the real work begins.
