
Why the Hell Are There So Many AI Tools in Your Enterprise?

7 min read
enterprise-ai, ai-tools, ai-strategy

Walk into any company today and ask five departments which AI tools they use. You'll get five different answers. Marketing is on one platform. Engineering swears by another. Sales has a third. HR is piloting a fourth. Finance built their own with an API nobody else has heard of.

Nobody planned this. It accumulated one expensed subscription, one skipped approval process, one ignored procurement policy at a time.

We Tried to Build Our Own. It Didn't Work.

The first instinct wasn't to buy; it was to build. Internal chatbots, fine-tuned models, "our own GPT." The logic made sense: proprietary data, own the IP, control the experience.

Then GPT-4 dropped. Then Gemini. Then Claude. Then Perplexity.

The gap between what an internal team could ship and what frontier labs were releasing every quarter became impossible to close. Enterprises quietly killed their programs and started buying instead. Fast.

But here's the thing nobody admits: we didn't stop building our own. Every team that couldn't get what they needed from the approved stack just built their own custom GPT for their use case. The chatbot for HR. The assistant for legal. The internal search tool for the product team. Dozens of them. Each one siloed. None of them talking to each other.

The sprawl didn't start with vendors. It started with us.

The Chatbot That Nobody Trusted

For many enterprises, the wake-up call came the same way: you build an internal chatbot, you demo it, everyone's impressed. Then employees actually use it and the responses are confidently wrong.

Outdated policies. Missing context. Hallucinated procedures. The chatbot was only as good as the data behind it, and the data was a mess.

This is the core problem that gets buried under all the tool conversation: garbage in, garbage out. Without a robust RAG (Retrieval-Augmented Generation) system grounded in clean, current, well-structured data, no AI use case, whether chatbot, prototype, or internal tool, will hold up in production.
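
To make that concrete, here's a minimal sketch of the RAG pattern in Python. The `vector_store` and `llm` objects are hypothetical stand-ins for whatever retrieval and model stack you actually run; the shape is what matters: retrieve current, sourced documents first, then force the model to answer from them.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    source: str        # e.g. a Confluence page URL
    last_updated: str  # staleness matters as much as relevance

def answer(question: str, vector_store, llm, top_k: int = 5) -> str:
    # 1. Retrieve: find the documents most relevant to the question.
    docs: list[Doc] = vector_store.search(question, k=top_k)

    # 2. Ground: put the evidence, with sources and dates, into the prompt
    #    so the model answers from it rather than from stale training data.
    context = "\n\n".join(
        f"[{d.source}, updated {d.last_updated}]\n{d.text}" for d in docs
    )
    prompt = (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

    # 3. Generate: the output is only as good as what step 1 retrieved.
    return llm.complete(prompt)
```

If the documents behind `vector_store` are stale, step 2 faithfully grounds the model in garbage; the pattern only works as well as the data pipeline feeding it.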

And the data problem in most enterprises is real. Knowledge is scattered across SharePoint sites that nobody maintains, Confluence pages last updated three years ago, PDFs buried in shared drives with no metadata, and institutional memory that lives only in the heads of people who might leave next quarter. Getting AI to work across that landscape isn't a model problem. It's a data infrastructure problem. Bureaucratic data silos don't just slow down humans; they cripple AI.

Before you can build anything useful, you need to answer a question most orgs haven't: whose job is it to make the data ready?

So We Chased the Market Instead

When the internal build failed, the next move was to evaluate what was out there. Which sounds reasonable until you realize it became its own full-time job.

The AI market doesn't sit still. Tools that looked promising six months ago are now obsolete or acquired. Every vendor claims to solve the same problems differently. Every evaluation takes time, and by the time you finish it, something new has launched. Organizations end up in a permanent cycle of assessment without ever fully committing to anything, which means the stack keeps growing and nobody owns the outcome.

Meanwhile, employees aren't waiting. They're signing up for whatever works, expensing it, and getting on with their day. One $20/month subscription at a time, the shadow stack builds itself.

And Nobody Knows Which One to Use

Once you've assembled the collection (ChatGPT Enterprise here, Microsoft Copilot there, Claude Cowork for documents, Perplexity for research, a custom internal assistant for HR queries), the question employees ask is entirely reasonable:

When am I supposed to use which one?

Nobody has a good answer. No decision tree. No onboarding. Just tribal knowledge and six browser tabs open at once.

The research backs this up clearly. Employees across levels say the same things: "I just use ChatGPT, simple. Looking at the internal tool is so stressful: which LLM to choose, when to use what model. It's just overcomplicating my role." Or: "I will not go and experiment with different tools, my day-to-day is so busy already."

This isn't laziness. It's a rational response to an irrational environment. When there's no clear guidance, people default to whatever they already know. And the cost of that isn't visible: it's a thousand quiet decisions to not explore, not experiment, not grow.
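
For what it's worth, the missing decision tree doesn't have to be sophisticated. Here's a hypothetical sketch, with illustrative mappings rather than recommendations, of the kind of routing guidance that could replace tribal knowledge:

```python
# A hypothetical routing table: task type -> sanctioned tool.
# The entries are illustrative; the point is that somebody
# writes this down, publishes it, and owns keeping it current.
ROUTING = {
    "web research with citations": "Perplexity",
    "drafting and reviewing documents": "Claude Cowork",
    "working inside Office files and email": "Microsoft Copilot",
    "general questions and brainstorming": "ChatGPT Enterprise",
    "HR policy questions": "internal HR assistant",
}

def route(task_type: str) -> str:
    # Default to the general-purpose tool instead of leaving people guessing.
    return ROUTING.get(task_type, "ChatGPT Enterprise")
```

Even a one-page table like this beats six browser tabs and guesswork.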

Now Anyone Can Ship. Which Makes It Worse.

Just as enterprises started getting a handle on the tool conversation, the ground shifted again.

No-code and low-code platforms (Lovable, v0 by Vercel, Claude Code) mean any employee can now ship a working product in minutes, no engineering background required. A designer spins up a web app. A PM builds an internal dashboard. An analyst ships a data tool before the sprint starts.

This is genuinely powerful. It's also a new layer of chaos.

The problem is no longer just too many tools; it's too many *things being brought into the enterprise*, outside any formal process, with no competitor analysis and no user research to evaluate whether they're suited to the enterprise use case. A Lovable app shared via link is invisible until it breaks something.

The Real Issue: AI Literacy Was Never Part of the Plan

Underneath all of this (the sprawl, the failed chatbots, the confusion, the shadow stack) is a problem most enterprises haven't named clearly: we never invested in AI literacy.

The optimism is real. Employees across roles know AI matters, feel the pressure to use it, and genuinely want to keep up. But awareness isn't capability. Knowing that AI is "all around us" and knowing how to actually deploy it for your specific role are two completely different things.

What fills that gap? Hype. LinkedIn posts. YouTube tutorials. Vendor marketing. People are learning AI from external noise because there's no internal signal. And that produces fragmented, uneven understanding: some people highly capable, most left behind, with no shared baseline across the organization.

The result is predictable: employees revert to what they know. The tools they're most comfortable with, even when better ones exist. The workflows that feel safe, even when AI could transform them. Exploration feels risky when you're already at capacity and nobody's given you the time or the framework to learn properly.

The Bureaucracy Problem Nobody Wants to Touch

There's one more layer to this that rarely makes it into the strategy decks: organizational bureaucracy is actively slowing AI adoption down.

Every time an employee needs three approvals to access a data source, every time a promising tool gets stuck in a six-month procurement process, every time knowledge is locked in a SharePoint site that only one team can access (and the same goes for MCP servers), the gap between AI's potential and AI's reality gets wider.

AI works best with connected, accessible, well-governed data and tools, and with fast iteration cycles. Most enterprise environments are built for the opposite: siloed data, slow approvals, rigid processes. The tools are ready. The organizations often aren't.

Fixing this isn't glamorous. It doesn't make a good demo. But data integration across Confluence, SharePoint, internal wikis, and legacy systems is the unglamorous foundation everything else depends on. Without it, you're not building AI-powered workflows; you're building expensive, unreliable autocomplete.
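
As a rough sketch of what that integration work looks like, here's a minimal ingestion pipeline. The connectors, `list_pages()`, and `index.upsert` are assumptions standing in for whatever your stack provides; the real point is normalizing every silo into one schema with ownership and freshness metadata.

```python
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    source_system: str   # "confluence", "sharepoint", "wiki", ...
    url: str
    owner: str           # whose job is it to keep this current?
    last_updated: str

def ingest(sources: dict, index) -> None:
    """Pull from every silo, normalize to one schema, index centrally."""
    for name, client in sources.items():
        # `list_pages()` is a hypothetical connector API; each real system
        # (Confluence, SharePoint, a wiki) gets its own thin adapter.
        for page in client.list_pages():
            index.upsert(Record(
                text=page.body_text,
                source_system=name,
                url=page.url,
                owner=page.owner or "unassigned",  # surface ownership gaps
                last_updated=page.modified_at,
            ))
```

The `owner` field is the answer to the earlier question of whose job it is to make the data ready: if a record comes back "unassigned", you've found the gap before the chatbot does.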

So Where Does That Leave Us?

The ambition is real. The pressure is real. But most enterprises are caught in a loop: chasing hype cycles, evaluating tools, building things that don't work because the data isn't ready, and watching employees disengage because nobody gave them a map.

Getting out of the loop requires being honest about the order of operations:

Data before tools. Clean, connected, governed data is the foundation. Without it, nothing else works.

Literacy before adoption. Employees can't use AI effectively if they don't understand it. Training isn't optional overhead; it's what turns a tool into a capability.

Strategy before sprawl. Before the next tool evaluation, ask whether the current stack is actually being used well. It usually isn't.

The tools are not the hard part. The hard part is the infrastructure, the culture, and the slow, unglamorous work of making an organization actually ready for AI, not just subscribed to it.

You don't have an AI tool problem. You have an AI readiness problem. The tools are just where it shows.


Javier Yong

AI Product Manager at SAP. Writing about product strategy, AI, and building products that scale.