
We’re Asking AI the Wrong Questions

The problem usually isn’t the tool.

Recently, I received a message on Teams after a customer demo.  The technology worked and the demo went well, but the feedback was telling:

“Finance and sales feel like they wouldn’t know how to prompt… and they’re worried they won’t use it.”

I’ve heard some version of that concern more times than I can count.

On the surface, it sounds reasonable. If people don’t know how to interact with the system, adoption is going to be a challenge. So the natural reaction is to think in terms of training, or user experience, or simplifying the interface.

But that framing misses something important.

These are not inexperienced users. They understand their business. They work with data every day. They’ve been using systems like Salesforce, ERP platforms, and reporting tools for years. Asking them to “learn how to prompt” shouldn’t be the barrier it’s often made out to be.

Which raises a different question:  if the technology works and the users are capable, why does this hesitation show up so consistently?

The Problem We Think We Have

At this point, most organizations arrive at a similar conclusion.  If people don’t know how to prompt, then the solution must be:

  • Better training
  • Simpler interfaces
  • More guided experiences

Those things can help, but they don’t address the root issue, because this isn’t really a usability problem.  If it were, we’d expect to see confusion around how to interact with the system itself: where to click, what to type, how to structure a request.

But that’s not typically what’s happening. In most cases, users understand the mechanics almost immediately.  Instead, the hesitation shows up in the pause before the first question.  It shows up in uncertainty about what to ask and, more importantly, whether the answer will actually be useful.

That’s a very different problem, and it’s not unique to AI - it’s just more visible now.

Traditional systems hide this gap. Dashboards, reports, and predefined queries give us answers to questions that have already been decided for us. The structure is built in, so we rarely have to think about it.  AI removes that structure and hands the responsibility back to the user.  For many organizations, that’s where things start to break down.

We’re Asking the Wrong Questions

The issue isn’t that people don’t know how to prompt - it’s that they don’t know what questions are worth asking.  That distinction matters more than it seems at first.  I was talking to my daughter - who is a data analyst - about this recently, and she made an observation that stuck with me. She said most people use ChatGPT as a glorified Google search.  

And she’s not wrong.  A lot of AI usage today looks like:

  • Retrieving known information
  • Summarizing existing content
  • Answering questions we already know how to ask

In other words, it’s being used to get to answers faster.  

But that’s not where the real value is.  The real value isn’t in the information - it’s in the analysis of that information.  That’s the shift that hasn’t fully landed yet.

When AI is used like a search engine, it ends up replacing things we already had.  It feels useful, but it doesn’t actually change how decisions are made; it just changes how quickly we get to the same answers.  The real opportunity is in asking questions that:

  • Connect data points that weren’t previously connected
  • Surface patterns that aren’t obvious
  • Challenge assumptions instead of confirming them

Questions like:

  • What combination of factors tends to predict deal slippage?
  • Which accounts look healthy on the surface, but share characteristics with accounts that eventually churn?

Those aren’t questions most systems are designed to answer directly.  As a result, people aren’t used to asking questions like these.  That’s why “prompting” feels hard - not because the interface is complicated, but because the thinking behind the question is.

This Isn’t a New Problem

If this feels like new territory, it really isn’t.

We’ve been here before, just in a more structured environment.  As mentioned earlier, business users have relied on dashboards, reports, and predefined queries to understand what’s happening in their organization for years. Those systems work well, but they come with an important constraint: the questions have already been decided.  Someone, somewhere, determined what mattered:

  • Which metrics to track
  • How to define them
  • How to present them

Over time, those decisions became the default way the business sees itself.  

The advantage of that model is consistency, but the downside is that it limits exploration.  For example, you don’t typically ask:

  • What am I missing?
  • What patterns exist that I haven’t considered?

...because the system isn’t designed for that.

AI changes that dynamic.  It removes the predefined structure and replaces it with something far more flexible.  Instead of navigating a fixed set of reports, you’re now interacting with a system that can respond to almost any question, assuming you know how to ask it.

And that’s where the disconnect shows up.  It doesn’t happen because people suddenly became less capable, but because the structure they’ve relied on for years is no longer doing the thinking for them.

The Stack Behind the Screen

Up to this point, it’s easy to focus on the interaction: the prompt, the response, the experience.  But that’s only the top layer.  Underneath is a stack of dependencies that determine whether AI is useful or just “interesting.”

At the foundation is the data itself - access, architecture, and accuracy - which I covered in more detail here.  If any of those break down, everything built on top of it becomes questionable.  This isn’t new either:  it’s the same set of challenges that have always existed. AI just makes the consequences more visible.

On top of that is how the agent is designed.

  • What context does it have?
  • What assumptions is it making?
  • How is it guided when the question is vague or incomplete?

Two systems with the same data can produce very different outcomes depending on how this layer is handled.

And then there’s the part that gets the most attention: the human interaction.

  • What is the user trying to understand?
  • How clearly can they express it?
  • Do they recognize when a result is incomplete or misleading?

This is where most of the friction shows up.  But it’s also the layer that depends the most on everything beneath it.  When organizations struggle with AI adoption, they tend to focus here, on the interaction.  But by the time you’re troubleshooting prompts, the outcome has already been shaped by the layers below.  And if those layers aren’t aligned, no amount of prompting guidance is going to fix it.

Making It Real

In previous sections, we’ve talked about “the thinking problem.”  Here’s what it looks like in practice.

In most organizations, the first wave of AI usage tends to mirror existing habits. People ask the same questions they’ve always asked, just through a different interface.

  • What’s my pipeline this quarter?
  • How did sales perform last month?
  • Which opportunities are closing this week?

Those are useful questions, but they’re also questions most companies already answer with existing reporting tools. Using AI for those tasks may feel modern, but it doesn’t materially change how the business operates.

Where AI starts to create value is when the question itself changes.  Instead of asking for a number, you ask for a pattern.  Instead of asking what happened, you ask what tends to happen next.

Questions start to look more like this:

  • What combination of factors tends to predict deal slippage?
  • Which accounts look healthy on paper, but share characteristics with accounts that eventually churn?
  • What changed in the last 90 days among opportunities that moved from “likely” to “at risk”?

Those aren’t dashboard questions - they require context, relationships, and reasoning across multiple dimensions of the business.  And they often reveal things you weren’t explicitly looking for.
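To make that distinction concrete, here’s a toy sketch in Python.  Everything in it is invented for illustration - the field names (`stage_changes`, `days_stalled`, `exec_sponsor`) and the thresholds are assumptions, not real CRM data.  The point is the shape of the question: instead of retrieving a count, it asks which combinations of risk signals tend to co-occur with slipped deals.

```python
# Illustrative only: a toy "pattern question" over hypothetical CRM fields.
# Instead of retrieving a number (how many deals slipped?), we ask which
# combinations of factors show up together in deals that slipped.
from collections import Counter
from itertools import combinations

# Hypothetical opportunity records; all fields and values are invented.
deals = [
    {"stage_changes": 4, "days_stalled": 30, "exec_sponsor": False, "slipped": True},
    {"stage_changes": 1, "days_stalled": 5,  "exec_sponsor": True,  "slipped": False},
    {"stage_changes": 3, "days_stalled": 25, "exec_sponsor": False, "slipped": True},
    {"stage_changes": 2, "days_stalled": 8,  "exec_sponsor": True,  "slipped": False},
    {"stage_changes": 5, "days_stalled": 40, "exec_sponsor": False, "slipped": True},
]

def risk_flags(deal):
    """Turn raw fields into simple boolean risk signals (thresholds are assumptions)."""
    return tuple(name for name, present in [
        ("many_stage_changes", deal["stage_changes"] >= 3),
        ("long_stall", deal["days_stalled"] >= 20),
        ("no_exec_sponsor", not deal["exec_sponsor"]),
    ] if present)

# Count which combinations of risk signals appear in slipped deals.
slipped_combos = Counter()
for deal in deals:
    if deal["slipped"]:
        flags = risk_flags(deal)
        for size in range(2, len(flags) + 1):
            for combo in combinations(flags, size):
                slipped_combos[combo] += 1

for combo, count in slipped_combos.most_common():
    print(count, "slipped deals share:", ", ".join(combo))
```

A dashboard could tell you how many deals slipped.  This kind of question - trivial here, but the same shape at scale - is the one AI is actually suited for: connecting signals that no single report was designed to connect.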

That’s the shift most teams haven’t made yet:  they’re still treating AI like a faster way to retrieve known information, instead of using it to surface what they couldn’t see before.

Start With the Decision, Not the Tool

The starting point for an AI strategy can’t be the tool; it has to be the decision.  Most organizations begin with a question like “How do we use AI in our business?”  That sounds reasonable, but it usually leads to vague initiatives, scattered experiments, and a lot of effort with little measurable impact.

A better starting point is much simpler: “What decisions are we trying to make that we can’t make well today?”  That question forces clarity.  It anchors the conversation in outcomes instead of features, and it immediately exposes whether AI is actually the right fit.

If the decision is already supported by a dashboard, a report, or a standard workflow, AI may not add much value.  But if the decision depends on:

  • Patterns across disconnected systems
  • Subtle changes over time
  • Signals that are hard to see in isolation

...then AI can become genuinely useful.  That’s the difference between adoption and impact.

Adoption is usage.  Impact is helping people make better decisions because the tool revealed something they couldn’t see before.

And in the long run, impact is the only thing that matters.

A Different Way to Think About It

AI feels new.  The interface, interaction, and expectations are different.  But at its core, this isn’t a technology problem - it’s a thinking problem.

For years, systems have guided us toward answers.  They defined the questions, structured the data, and presented the results in a way that required very little interpretation. That model worked well because it reduced ambiguity.

AI does the opposite.  It introduces flexibility, which means it also introduces responsibility.  The structure isn’t built in anymore.  The value comes from how clearly we can define what we’re trying to understand.

That’s why prompting feels harder than it should.  Not because it’s complicated, but because it requires intent.  And once that shift happens, everything else starts to fall into place.  The interaction becomes more natural.  The results become more meaningful.  And the technology starts to feel less like a novelty, and more like a tool that actually changes how decisions are made.

Talking to AI is easy.

Knowing what to ask... that is where the real work begins.
