Your AI Is a Brain Without Arms — Here's How to Fix That
A practical guide to Model Context Protocol (MCP) for enterprise teams ready to move AI from advisor to actor.
AI's biggest limitation isn't intelligence — it's containment.
Large language models can reason, analyze, and generate. But by themselves, they're brains without arms and legs: unable to access your systems, query your data, or take action in the tools where your work actually happens. They can tell you what API call to make, but they can't make it. They can draft the email, but they can't send it.
Model Context Protocol (MCP) gives the brain a set of arms and legs, packaged in a standard way so any AI client can pick them up and use them.
And the organizations figuring this out right now are operating in a fundamentally different way: everyone becomes a manager of their own AI agent.
The Integration Wall Every Organization Hits
LLM brains are now so good at pattern recognition that it no longer feels like that's what they're doing. They create, advise, and guide. But before the era of agents, they were missing a key capability: the ability to act.
The industry addressed this through tool use, which is essentially function calling. Engineers define functions, and the LLM brain makes two decisions: whether to use a function, and what inputs to feed it. This promoted the LLM brain to an agent, giving it its first limbs to interact with the outside world. But every set of tools was bespoke. Want Claude to interact with your CRM? Build a Claude-specific integration. Want GPT-4 to query your database? Build an OpenAI-specific integration. Limited reusability throttled efficiency.
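The function-calling pattern described above can be sketched in a few lines of Python. This is an illustration of the shape, not any vendor's actual API; the tool name, schema format, and CRM data are all hypothetical:

```python
import json

# A tool is just a function plus a schema the model can read.
def get_account(account_id: str) -> dict:
    """Illustrative CRM lookup; in practice this would hit a real API."""
    return {"id": account_id, "name": "Anderson Corp", "stage": "negotiation"}

TOOLS = {"get_account": get_account}

# The schema is what the model sees when deciding whether and how to call.
TOOL_SCHEMAS = [{
    "name": "get_account",
    "description": "Fetch a CRM account by id",
    "parameters": {"account_id": {"type": "string"}},
}]

def handle_model_turn(model_output: str):
    """The model replies either with plain text or with a tool request.
    The application, not the model, actually executes the function."""
    msg = json.loads(model_output)
    if msg.get("tool"):                 # decision 1: use a tool at all?
        fn = TOOLS[msg["tool"]]
        return fn(**msg["arguments"])   # decision 2: with what inputs?
    return msg["text"]

# Simulated model output asking for a tool call; the result would be fed
# back to the model for its next turn.
result = handle_model_turn(
    '{"tool": "get_account", "arguments": {"account_id": "42"}}'
)
```

The point of the sketch: the model only ever emits a structured request, and the surrounding application holds the actual limbs.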
Enterprises faced three compounding problems:
Fragmentation. Even when building a tool was straightforward, the infrastructure tax of handling your own auth, rate limiting, and payload formats compounded with every addition, leaving implementations with more than a few tools feeling held together with string.
Vendor lock-in. Your AI capabilities become tightly coupled to specific providers. The integrations you built for one model don't transfer to another unless you built your own abstraction layer.
Capability bottlenecks. Building custom integrations required engineering resources. Getting anything past POC became a bureaucratic challenge as much as a technical one.
So the gap between "AI could theoretically help with this" and "AI is actually doing this" is where most organizations are stuck.
MCP closes that gap.
What MCP Actually Is (The Practical Model)
Model Context Protocol is an open standard — like USB-C for AI connections. When a toolbox is packaged as an MCP server, any MCP-compatible AI client can use it.
The mental model is simple. An MCP Server is a packaged toolbox that defines and executes tools. In most enterprise use cases the server is remote, hosted by the same vendors that run your Google Drive or Slack workspace. An MCP Client sits between the LLM and the server — it tells the LLM what tools are available and handles the back-and-forth when the LLM wants to use one.
You've probably already seen this in action: Claude and ChatGPT connecting to Google Drive, Slack, and other platforms. That's MCP at work.
The Brain
Decides what to do. Sees available tools and chooses which to call based on the user's request. Doesn't talk to servers directly.
The Orchestrator
The app you use — Claude.ai, Cursor, etc. Discovers tools, routes calls to the right server, returns results to the LLM.
The Hands
Defines tools, stores data, runs compute. Can be local or remote. Slack, Google Drive, GitHub — each hosts their own MCP server.
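Under the hood, the orchestrator and the server speak JSON-RPC. The two core exchanges, discovering tools and invoking one, look roughly like this (simplified from the MCP specification; real messages carry more fields, and the tool name and arguments here are illustrative):

```python
import json

# The client asks a server what tools it offers (MCP method: tools/list).
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The client invokes one tool on the model's behalf (MCP method: tools/call).
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_files",                     # illustrative tool name
        "arguments": {"query": "Q3 pricing deck"},  # schema set by the server
    },
}

print(json.dumps(call_request, indent=2))
```

Because every server answers the same two questions in the same format, any compliant client can drive any server, which is the whole USB-C analogy.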
Where MCP servers come from:
- Open source — Community-built servers for common capabilities: web browsing, file systems, databases, code execution. Quality varies. The best (like Playwright MCP for browser automation) are production-ready. Many others are demos.
- Vendor-provided — SaaS providers increasingly expose MCP servers that let AI interact with their platforms. When your CRM vendor offers one, you connect rather than build. The category is still early but clearly growing. For most organizations this is the lowest-hanging fruit: if your SaaS vendors offer MCP servers, make the most of them.
- Custom-built — When off-the-shelf doesn't exist, you wrap your internal systems in the standard interface. Build once, and every MCP-compatible client in your organization can use it. Once it's set up, click-ops work that might consume 10 person-hours a week can be handled by an LLM.
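Conceptually, a custom server is a thin registry that maps tool names onto your internal APIs; the official MCP SDKs add the protocol plumbing (JSON-RPC, transports, schemas) on top. A stdlib-only sketch of that shape, with hypothetical names and a stubbed documentation API:

```python
from typing import Callable

class ToolRegistry:
    """Minimal stand-in for what an MCP server SDK manages for you."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable] = {}

    def tool(self, fn: Callable) -> Callable:
        """Decorator: register a function as an AI-callable tool."""
        self._tools[fn.__name__] = fn
        return fn

    def list_tools(self) -> list[str]:
        return sorted(self._tools)

    def call(self, name: str, **kwargs):
        return self._tools[name](**kwargs)

server = ToolRegistry()

@server.tool
def search_knowledge_base(query: str) -> list[str]:
    """Wraps the internal documentation API (stubbed here)."""
    docs = ["runbook: database failover", "ADR: event sourcing rationale"]
    return [d for d in docs if query.lower() in d.lower()]

print(server.list_tools())  # ['search_knowledge_base']
print(server.call("search_knowledge_base", query="failover"))
```

Swap the stub for a real API call and hand the registry to an MCP SDK, and every client in the organization inherits the capability.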
What MCP is not: It's not an AI model. Not a product you buy. It's an open protocol — the infrastructure that lets AI reach beyond the chat window.
Most notably, it's not an automation platform or a predefined workflow like what Zapier and Workato offer. Predefined workflows work brilliantly for structured tasks that don't change. A workflow is essentially a data pipeline that may or may not use an LLM to process the data along the way, like an analyst with blinders on: it sticks to doing one kind of analysis. Agents (what MCP enables) use the LLM brain to decide which tool to use, so they can act in many different ways depending on what the situation calls for. Same agent, same set of tools, but the approach changes based on what you ask. Tell it to prep for a sales call and it pulls CRM data and recent emails. Tell it to investigate a revenue dip and it queries the data warehouse and cross-references with marketing spend. The tools are the predetermined logic, but which tools the agent uses, and how, is up to the LLM.
What We Built (And What We Learned)
Over the past year, our team has gone deep on MCP: building custom servers for internal systems, deploying open source tools like Playwright for browser automation, and constructing a unified, model-agnostic client that connects to the MCP servers our SaaS vendors expose.
The first custom server was a RAG layer we built over the internal knowledge base — years of accumulated documentation across 200+ products that lived in a system with a decent API but no AI integration path. Before MCP, accessing this meant copy-paste workflows. After MCP, any AI client could query it directly. The build took about a week of engineering time, and the capability has paid for itself many times over — not just in time saved, but in quality. People actually use the knowledge base now because the friction dropped to near zero.
The unified client was more ambitious. The objective was to connect to our own MCP servers and the ones our SaaS providers expose, handle authentication centrally, and surface those capabilities in whatever AI interface we're working in. Our team can now query CRM data, pull financial reports, and check project status, all through conversation with AI that has live access to those systems. Building our own client keeps us vendor-agnostic as models leapfrog each other, and gives us finer control over which auth methods people use.
Open source deployments (particularly Playwright MCP) became core to our workflows for research, data gathering, and automating web-based processes. Before: tasks either didn't get done (too tedious) or required custom scraping solutions. Now: "Go to this site, find the pricing page, extract the plan comparison into a table." UI testing and iterative, agentic development also liberated significant engineering hours.
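Wiring up Playwright MCP is typically a one-line entry in your client's MCP configuration. As an illustration, a Claude Desktop-style config entry looks something like the following; check the Playwright MCP README and your client's documentation for the exact package name and config file location:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Once registered, the client discovers the server's browser tools automatically, and requests like the pricing-page extraction above route through it.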
The key insight: once a capability exists as an MCP server, it's available everywhere. New team members get access automatically without provisioning separate access. When we adopt a new AI tool, it inherits all our existing capabilities. The integration work pays off across every use case.
A few things we learned the hard way:
Quality variance is the biggest ecosystem challenge. The maturity gap between the best and worst MCP servers is enormous. We've encountered servers that work flawlessly in production and servers that break every few weeks even with a vendor support team behind them; servers with excellent documentation and servers with none. Developing evaluation rigor (knowing what to look for, how to test, and when to build instead of adopt) has been as important as any technical skill.
Security and governance can't be afterthoughts. When AI can access your CRM, query your database, and browse the web, the attack surface expands. Permissions, access controls, logging, and guardrails are the key to maintaining quality of output and therefore trust. Build policies before you need them: who authorizes new MCP connections, which actions AI can take autonomously versus with human approval, and how you audit what AI is doing in your systems. Our approach: wrap all the MCPs in our own logging and analytics layer, then dockerize and deploy on our own infrastructure. The additional observability has been worth the effort.
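The wrapping approach can be as simple as a proxy that records every tool call before forwarding it. A minimal stdlib sketch, where the downstream `call_tool` target is a hypothetical stand-in for a real MCP client call:

```python
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mcp-audit")

def audited(call_tool):
    """Wrap a tool-call function with an audit log entry: who called what,
    with which arguments, how long it took, and whether it failed."""
    @functools.wraps(call_tool)
    def wrapper(user: str, tool: str, arguments: dict):
        start = time.monotonic()
        status = "error"
        try:
            result = call_tool(tool, arguments)
            status = "ok"
            return result
        finally:
            log.info(json.dumps({
                "user": user,
                "tool": tool,
                "arguments": arguments,
                "status": status,
                "duration_ms": round((time.monotonic() - start) * 1000),
            }))
    return wrapper

# Hypothetical downstream call into an MCP client; here it just echoes.
@audited
def call_tool(tool: str, arguments: dict):
    return {"tool": tool, "echo": arguments}

result = call_tool("alice@example.com", "query_crm", {"account": "Anderson"})
```

Centralizing the log line like this gives you one audit trail across every server you connect, which is exactly what per-vendor integrations make hard.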
What You Can Actually Do With It Today
Any serious AI strategy for the business will involve MCP. But what separates strategy from boardroom daydreams is knowing what actually works today and what's still risky.
CRM Augmentation (Sales)
When your AI assistant can query your CRM directly, the workflow changes fundamentally.
Before: Sales rep asks AI to help draft a follow-up email. Manually copies in the deal context, account history, recent activity. AI responds with generic output that needs heavy editing.
After: Sales rep asks AI to draft a follow-up for the Anderson deal. AI pulls opportunity details, communication history, and account context directly from the CRM. Generates a draft grounded in actual deal specifics. Rep reviews, tweaks, sends.
This extends to meeting prep ("give me the full context on this account before my call"), pipeline review ("which deals are at risk of slipping this quarter and why?"), and prospect research — where web browsing capabilities combine with CRM access to enrich records automatically. What took 20 minutes of clicking around (even with a standard LLM chat interface) becomes a single request.
Natural Language Data Exploration (Finance)
MCP connections to data warehouses mean business users can explore data through conversation rather than learning SQL or waiting for analyst support. It's self-serve analytics taken to the next level.
"What was our gross margin by product line last quarter, compared to the same period last year?" AI queries the data warehouse, pulls the numbers, does the comparison, presents the analysis. Follow-up questions refine the view: "Break that down by region" or "Exclude the one-time adjustments."
High-value, but requires careful implementation — financial data is sensitive, access controls matter, and you need confidence the AI is querying correctly.
Documentation and Knowledge Access (Engineering)
Engineering teams accumulate documentation that's theoretically accessible but practically impossible to search — architecture decisions, runbooks, API specs, incident post-mortems.
An MCP server wrapping your documentation repository makes that knowledge queryable. "What was the decision rationale for the event sourcing architecture?" "What's the runbook for database failover?" New team members get access to institutional knowledge without playing 20 questions with senior engineers.
This works even with imperfect documentation, and digging through it is one of the biggest time wasters for engineers with a high hourly opportunity cost. AI handles messy reality better than traditional search: it can synthesize across multiple partial sources and surface relevant context even when no single document has the full answer.
The Real Power: Composability
There's an underlying theme from the examples above: Individual MCP capabilities are useful. Stacking them is transformative.
A workflow that queries your CRM for recent deals, pulls relevant context from your knowledge base, checks project capacity in your PM tool, and drafts a proposal — that's not four separate AI interactions that need someone to copy and paste in all the context. It's one conversation with an AI that has access to all four capabilities simultaneously.
The mental model shifts from "AI as assistant I hand information to" to "AI as collaborator with access to our systems." Each new server you add multiplies the potential combinations. This composability is where the real leverage emerges. When we onboard a new hire, we don't give them access to a single system and expect them to do their job with just that. With MCP connections that span broadly, LLMs can work without that constraint too.
What's Not Ready Yet
Autonomous multi-step workflows — MCP excels at single-tool invocations. Complex workflows requiring multiple sequential tool uses with dependencies are less reliable. "AI orchestrates a complex multi-system workflow autonomously" is still experimental. Though frankly, most enterprises aren't ready for that anyway — deterministic guardrails and human-in-the-loop steps are a better fit for how most organizations actually want to operate.
Write operation maturity — Read operations are broadly more mature than writes. Many MCP servers that let you retrieve data don't let you modify it. For workflows that need to take action, not just retrieve information, expect to do more custom work.
Universal coverage — The biggest ERPs, several major CRMs, most HRIS platforms, and significant chunks of the enterprise software stack don't have MCP servers yet. The trajectory though is clearly toward broader adoption.
Authentication granularity — This depends on the exact server. Many have basic authentication but lack fine-grained permissions; others inherit permissions from the user and the auth method. For organizations with strict data governance, this can be a blocker that requires custom builds to address.
These gaps are closing fast. What was experimental six months ago is proven today. But knowing where the boundaries are right now saves you from over-committing.
How to Get Started
We crawl before we walk, and walk before we run. The best initial implementations are low risk, high learning value, and deliver visible impact.
Many organizations have started already by connecting Claude or ChatGPT to Drive or Calendar. The next step is connecting higher value systems like ERPs and CRMs, and integrating them into the workflows of different teams. Team leads can map out the tasks that are still mostly manual clicking through UIs, and begin with read-only MCP access to those systems. As teams get comfortable, they'll naturally start thinking about composing with MCPs from adjacent systems and expanding permissions. Testing is crucial here — understand the common failure modes and build guardrails before you scale.
Once teams have baseline confidence, audit your existing processes, data sources, and required outputs to figure out where custom MCPs are worth building. Custom MCPs can have immense ROI in reviving underutilized company IP, databases, documentation or existing APIs. That's where you start to see what's truly high leverage.
On cost: the barrier is lower than most people assume. Connecting to vendor-provided MCPs is essentially free beyond your existing subscriptions. Open source servers cost nothing to deploy. Custom builds require engineering time — a straightforward API wrapper is days, something more complex is weeks — but the work is bounded, not open-ended. The main ongoing cost is increased token usage from AI making tool calls, which scales with how much your team actually uses it. No org is genuinely priced out of getting started; the first step is connecting what you already pay for.
MCP support is already becoming a competitive differentiator among SaaS vendors. Within 12 months, it'll be a standard checkbox in enterprise software evaluation. The teams already fluent in this world will evaluate and deploy new capabilities in days while everyone else is still figuring out the basics.
We've built MCP infrastructure for our own operations and for clients across sales, finance, operations, and engineering. If you're evaluating MCP for your organization or want to accelerate an implementation, reach out — we're happy to share what we've learned.