Beyond the Hype: Leadership and Trust in AI
By Edward Mangini (@edward_mangini)
AI dominates consulting pitches. It appears on slide one, justifies substantial budgets, and provides executives with a narrative to present to their boards.
But once discovery begins, technologists have to reconcile hype with reality.
None of this is necessarily nefarious. Consulting firms have been hit hard by the economy. Tech spending is down, which means demand for consulting services has contracted across the board.
The popularity of AI arrived at precisely the wrong (or right) time, amplifying the hype cycle. AI has become more than technology. It is a foot in the door, a hook to start conversations.
Unfortunately, executives and buyers often don't fully understand what AI can and cannot do. Miscommunication, misplaced assumptions, and wishful thinking on both sides of the table lead to misaligned agreements and unintentional promises.
Out of this environment, two problems surface again and again:
- Many companies don't need AI at all, yet sales teams have already sold them the story.
- Even the small group of companies that do have viable AI use cases usually aren't ready.
Most companies don’t need AI, and many of those that could benefit from it can’t sustain it yet. That isn’t failure. It’s clarity.
But when consulting firms oversell AI, they undermine both credibility and impact.
Challenge One: Selling AI Before Discovery
Overselling AI creates the biggest credibility trap. Sales teams pitch agents, assistants, and predictive models before anyone investigates the actual problem or its constraints.
Technologists walk in afterward, perform discovery, and realize the solution is much simpler. Now they have to tell the client that what they bought is not only a poor fit, but also a distraction.
“Firms sell AI as the hammer, but the client’s problem isn’t a nail.”
For example, a company was told it needed "AI agents" for order management. The sales narrative promised transformation. Discovery revealed that the real solution was straightforward automation.
Automation cut costs, improved reliability, and avoided the brittleness that comes with more complex systems.
At first glance, that sounds like good news: the client saves money, the solution is easier to implement, and the benefits are real.
But that conversation erodes trust. Executives feel misled. The firm looks incoherent, selling one story and delivering another. The technology itself suffers reputational damage.
The human cost is also easy to overlook. Consider what the client had to do to secure funding for the AI project. Were they under pressure from their board to "do something transformative"? Did they cut staff or reallocate resources from critical operations to free up funds?
Discovering later that a modest automation project would have sufficed doesn't just create disappointment. It creates resentment, embarrassment, and sometimes political fallout inside the organization.
Challenge Two: When AI Fits but Fails
Some companies face problems that align with AI's strengths, such as fraud detection, large-scale personalization, and predictive maintenance. On paper, AI fits.
But when technologists assess readiness, the same obstacles surface:
- Dirty, inconsistent, or incomplete data.
- Core systems that don't integrate or share information.
- Weak or nonexistent governance.
- KPIs that don't align across the business and are still reported manually.
- Missing talent to build and sustain advanced solutions.
Drop AI into that environment, and it produces noise, not insight. Poor data leads to poor predictions. Disconnected systems block meaningful patterns. Weak governance creates security and compliance risks.
This trap is subtler than the first. The problem itself is legitimate, but the organization lacks the foundation to support the solution.
The degree of struggle varies. Some organizations can carve out a thin slice of readiness. They may pull together a limited dataset to tackle one urgent problem. A narrowly scoped pilot may succeed and create momentum for broader readiness.
Others have such entrenched issues that every attempt to build AI on top of them collapses under its own weight.
Either way, the journey is longer and more complicated than the client believed when they signed the deal.
The Double Trap: Trust Debt
Together, these challenges create a serious trust problem.
- Companies are sold AI they don't need. When technologists correct the story, relationships strain. In the worst cases, companies have already taken drastic measures, such as layoffs, budget cuts, or restructuring, only to be told later that the solution requires a smaller, simpler investment.
- Companies that do need AI aren’t ready. Projects stall, limp along, or fail outright.
The result is predictable: wasted budgets, frustrated executives, demoralized delivery teams, and growing skepticism about AI.
The deeper cost is a trust debt.
Sales generate short-term revenue by overselling AI, but delivery teams inherit a relationship already poisoned by disappointment.
Instead of building confidence during discovery, teams spend their early months paying down that debt. They explain why the promised transformation isn't feasible, why timelines need extending, or why expectations need scaling back.
That friction delays value, drives up costs, and damages the consulting firm's long-term reputation.
A Better Way Forward
Consultants and technologists can avoid the trap by reframing the approach.
Lead with discovery. Sell clarity, not AI. Make discovery the product. Frame the engagement as a process of uncovering the right solution, not as pre-packaged transformation.
Leading with discovery also ensures that technologists are invited into the sales conversation early enough to defend against the challenges we’ve discussed.
Anchor on outcomes. Executives don't necessarily want AI. They want fewer errors, faster onboarding, stronger margins, and better customer experience.
The hype machine has whispered to many executives that AI delivers these results. Start with outcomes. Then select the right tools: automation, analytics, workflow redesign, or AI.
Don't pollute the problem space. Simon Sinek's concept of "Start with Why" is terrific, but we must still give "what" and "how" their due. Leading conversations with AI is a classic case of polluting the problem space with the solution space.
“AI belongs in the toolbox, not the opening slide.”
Experimentation, testing, learning, and research are all valid reasons to start a project with AI. The outcome is knowledge: patterns we can apply to real-world problems.
These patterns become the emergent tools of our repertoire, the nouns and verbs we use in conversations with clients after we've built an understanding of their problem.
Tell the truth. If agents aren’t the answer, say so. If the company isn’t ready, explain it plainly. Losing a deal hurts less than losing credibility. Honesty earns the trust that leads to future work.
Keep in mind: your foot is already in the door. If AI isn’t the answer to their problem, the problem doesn’t go away. Solve what’s real, and the relationship strengthens.
Frame AI as a force multiplier. Once data is clean, systems connected, and governance established, AI amplifies value. It makes good foundations better. But it seldom belongs at the beginning of the journey.
Scenarios in Practice
Retailer: Agents vs. Automation
A retailer was sold on AI agents for order management. Discovery revealed that automation solved the problem more cheaply and reliably. AI would have slowed the process and added risk.
Financial Services: A Fit Without Readiness
A bank wanted AI for fraud detection. The use case was legitimate, but fragmented data and inconsistent records made any model unreliable. The project began with consolidation and governance. Only later did AI make sense.
Healthcare: Both Problems at Once
A healthcare provider asked for patient risk prediction. Discovery showed two issues: the actual bottleneck was patient throughput, not prediction, and the data was inconsistent and non-compliant. The solution required operational redesign and governance, not AI.
Reflection
AI is extraordinary, but it isn't a universal solution.
Most companies don't need it. Their problems require automation, analytics, or workflow redesign. Yet AI continues to be sold, and when technologists deliver the truth, trust suffers.
The companies that do have legitimate use cases rarely possess the readiness. Without strong foundations, AI amplifies dysfunction instead of solving it.
Consulting firms can't keep leading with AI as the answer. The firms that will thrive are those that anchor on discovery, tell the truth about fit and readiness, and frame AI as a tool that comes later, not first.
The most valuable thing a consultant can say today isn't "AI will transform your business."
It's: "You don't need AI. And that's okay. Here's what will actually make the difference."
That’s not hype. That’s leadership. And that’s how you build trust.