I’ve been thinking a lot about AI strategy lately. Not the tools or the models - the strategy itself. Specifically, I’ve wondered why so many companies seem to be making progress on paper but aren’t getting the kind of results they expected. From the outside, it all looks pretty good. Pilots are running. Outputs are being generated. There’s a lot of activity. But something about it doesn’t quite add up. And the more I think about it, the more I’ve come to believe that many of these efforts run into trouble much earlier than people realize, often before anything that really looks like AI is even in place.

The Myth: “Every Company Needs an AI Strategy”

I was reminded of this recently when I came across a post on LinkedIn from a venture capitalist that said, plainly: “Every private equity-backed company needs an AI strategy.” It wasn’t a surprising take. In fact, I’ve seen variations of this sentiment repeatedly ...
Every week, there’s a new headline suggesting that AI is about to make large portions of the workforce obsolete. Development cycles are collapsing. Models are writing code. Systems are improving themselves. It’s not unreasonable to ask whether the role of the human professional is shrinking. But that framing misses something important. AI is not simply reducing labor costs - it is raising expectations. And when expectations rise, the need for human judgment doesn’t disappear. It shifts.

Acceleration Without Elimination

Recently, while preparing a business demo, I leaned heavily on AI to troubleshoot and refine parts of the workflow. It generated SQL, suggested configuration changes, and dramatically accelerated the iteration cycle. But it also produced malformed SQL queries and confidently recommended an incorrect fix for a configuration issue. Each time, the system moved me forward faster, but it didn’t actually solve the problem. I still had to diagnose the root cause, constrain th...
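That gap between speed and correctness is exactly where the human review step lives. As a purely illustrative sketch (not the workflow from the demo), here is one way to catch malformed AI-generated SQL before it ever runs: ask SQLite to plan the query with EXPLAIN, which parses and validates it without executing anything. The table and queries below are hypothetical.

```python
import sqlite3

def validate_sql(conn: sqlite3.Connection, query: str) -> bool:
    """Return True if the query parses against the schema, without running it.

    EXPLAIN asks SQLite to compile the statement into a plan, so syntax
    errors and unknown tables/columns surface here with no side effects.
    """
    try:
        conn.execute(f"EXPLAIN {query}")
        return True
    except sqlite3.Error:
        return False

# Hypothetical schema standing in for the demo's data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, total REAL)")

good = "SELECT id, total FROM orders WHERE total > 100"
bad = "SELECT id total FROM WHERE orders total > 100"  # malformed, AI-style

print(validate_sql(conn, good))  # well-formed: passes the planner
print(validate_sql(conn, bad))   # malformed: rejected before execution
```

A check like this filters out the obviously broken output, but it says nothing about whether the query answers the right question - that judgment is still on the person running it.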