· Jiayu Zhang ·
AI Tools

The Missing Layer Between Your AI and Your Business Systems

A practical guide to Model Context Protocol (MCP) for enterprise teams ready to move AI from advisor to actor.


AI's biggest limitation isn't intelligence — it's containment.

Large language models can reason, analyze, and generate. But by themselves, they're brains without arms and legs: unable to access your systems, query your data, or take action in the tools where your work actually happens. They can tell you what API call to make, but they can't make it. They can draft the email, but they can't send it.

Model Context Protocol (MCP) gives the brain a set of arms and legs, and packages them up so you can access them conveniently.

And the organizations figuring this out right now are operating in a fundamentally different way, where everyone is a manager of their own AI agent.


The Integration Wall Every Organization Hits

LLM brains are now so good at pattern recognition that it no longer feels like pattern recognition at all. They create, advise, and guide. But before the era of agents they were missing one key capability: the ability to act.

The industry addressed this through tools, which is just function calling. Engineers define functions, and the LLM brain decides two things: whether to use a given function, and what to feed into it. This promoted the LLM brain to an agent, giving it its first limbs for interacting with the outside world. But every set of tools was bespoke. Want Claude to interact with your CRM? Build a Claude-specific integration. Want GPT-4 to query your database? Build an OpenAI-specific integration. Limited reusability throttled efficiency.
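To make the bespoke-integration problem concrete, here is a minimal Python sketch of pre-MCP function calling. The tool, schema, and CRM data are all hypothetical, and the model's tool-call response is hard-coded rather than fetched from a real API:

```python
import json

# 1. The engineer describes the tool in a provider-specific schema
#    so the model knows it exists and what arguments it takes.
crm_tool_schema = {
    "name": "get_deal",
    "description": "Look up a deal in the CRM by account name.",
    "parameters": {
        "type": "object",
        "properties": {"account": {"type": "string"}},
        "required": ["account"],
    },
}

# 2. The actual implementation lives in your code, not in the model.
FAKE_CRM = {"Anderson": {"stage": "negotiation", "value": 120_000}}

def get_deal(account: str) -> dict:
    return FAKE_CRM.get(account, {})

# 3. The model replies with a tool call; your code executes it.
#    Here we hard-code the kind of response a model would return.
model_tool_call = {
    "name": "get_deal",
    "arguments": json.dumps({"account": "Anderson"}),
}

dispatch = {"get_deal": get_deal}
result = dispatch[model_tool_call["name"]](**json.loads(model_tool_call["arguments"]))
print(result)  # {'stage': 'negotiation', 'value': 120000}
```

Every provider expected its own flavor of this schema and response format, which is exactly the per-vendor glue code MCP standardizes away.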

Enterprises faced three compounding problems:

Fragmentation. Even when individual tools were straightforward to build, handling your own auth, rate limiting, and payload formats imposed a growing infrastructure tax, and implementations with more than a few tools felt held together by string.

Vendor lock-in. Your AI capabilities become tightly coupled to specific providers. The integrations you built for one model don't transfer to another unless you used your own abstraction layer.

Capability bottlenecks. Building custom integrations required engineering resources, so getting anything past a POC became a bureaucratic challenge as much as a technical one.

So the gap between "AI could theoretically help with this" and "AI is actually doing this" is where most organizations are stuck.

MCP closes that gap.


What MCP Actually Is (The Practical Model)

Model Context Protocol is an open standard — like USB-C for AI connections. When a toolbox is packaged as an MCP server, any MCP-compatible AI client can use it.

The mental model is simple. An MCP Server is a packaged toolbox that defines and executes tools. In most enterprise use cases the server is remote, hosted by the same vendors that run your Google Drive or Slack workspace. An MCP Client sits between the LLM and the server — it tells the LLM what tools are available and handles the back-and-forth when the LLM wants to use one.

You've probably already seen this in action: Claude and ChatGPT connecting to Google Drive, Slack, and other platforms. That's MCP at work.
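Under the hood, the conversation between client and server is JSON-RPC 2.0. The two core exchanges are discovering tools and calling one; the tool name and arguments below are hypothetical, but `tools/list` and `tools/call` are the method names the protocol defines:

```python
import json

# Client -> server: "what tools do you offer?"
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# Client -> server: "run this tool with these arguments."
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_drive",                 # hypothetical tool
        "arguments": {"query": "Q3 pricing deck"},
    },
}

print(json.dumps(call_request, indent=2))
```

Because every server speaks this same shape, a client written once can talk to any of them.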

[Diagram: LLM (the brain) → API call → Client (the app) → JSON-RPC over MCP → Server (tools + data)]

The Brain (the LLM)

Decides what to do. Sees available tools and chooses which to call based on the user's request. Doesn't talk to servers directly.

The Orchestrator (the MCP client)

The app you use — Claude.ai, Cursor, etc. Discovers tools, routes calls to the right server, returns results to the LLM.

The Hands (the MCP server)

Defines tools, stores data, runs compute. Can be local or remote. Slack, Google Drive, GitHub — each hosts their own MCP server.

Where MCP servers come from:

  • Open source — Community-built servers for common capabilities: web browsing, file systems, databases, code execution. Quality varies. The best (like Playwright MCP for browser automation) are production-ready. Many others are demos.
  • Vendor-provided — SaaS providers increasingly expose MCP servers that let AI interact with their platforms. When your CRM vendor offers one, you connect rather than build. The category is still in its early stages but clearly growing.
  • Custom-built — When off-the-shelf doesn't exist, you wrap your internal systems in the standard interface. Build once, every MCP-compatible client in your organization can use it.
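The official MCP SDKs handle the protocol plumbing for custom builds, but the server's job is simple enough to sketch with the standard library alone. Everything below (the tool, the docs store, the simplified dispatcher) is illustrative, not the real SDK:

```python
import json

# Hypothetical internal knowledge base the server wraps.
DOCS = {"runbook-auth": "Rotate the signing key, then restart the gateway."}

def search_docs(query: str) -> str:
    """Hypothetical lookup over an internal documentation index."""
    return DOCS.get(query, "not found")

# Tool registry: each entry pairs an implementation with the
# schema advertised to clients (MCP calls this `inputSchema`).
TOOLS = {
    "search_docs": {
        "fn": search_docs,
        "schema": {
            "name": "search_docs",
            "description": "Search internal documentation by key.",
            "inputSchema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }
}

def handle(request: dict) -> dict:
    """Simplified dispatch for the two tool-related JSON-RPC methods."""
    if request["method"] == "tools/list":
        result = {"tools": [t["schema"] for t in TOOLS.values()]}
    elif request["method"] == "tools/call":
        params = request["params"]
        text = TOOLS[params["name"]]["fn"](**params["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

resp = handle({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
               "params": {"name": "search_docs",
                          "arguments": {"query": "runbook-auth"}}})
print(resp["result"]["content"][0]["text"])
```

Wrap an existing internal API in this interface once, and every MCP-compatible client in the organization can call it.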

What MCP is not: It's not an AI model. Not a product you buy. It's an open protocol — the infrastructure that lets AI reach beyond the chat window.

Most notably, it's not an automation platform or a predefined workflow like Zapier or Workato. Predefined workflows excel at structured tasks that don't change. Agents (what MCP enables) use the LLM brain to decide which tool to use, and can act in many different ways depending on what the situation calls for. Same agent, same set of tools, but the approach changes based on what you ask.


What We Built (And What We Learned)

Over the past year, our team has gone deep on MCP: building custom servers for internal systems, deploying open source tools like Playwright for browser automation, and constructing a unified, model-agnostic client that connects to the MCP servers our SaaS vendors expose.

The first custom server was a RAG layer we built over the internal knowledge base — years of accumulated documentation across 200+ products that lived in a system with a decent API but no AI integration path. Before MCP, accessing this meant copy-paste workflows. After MCP, any AI client could query it directly. The build took about a week of engineering time, and the capability has paid for itself many times over.

The unified client was more ambitious. The objective was to connect to both our own MCP servers and the ones our SaaS providers expose, handle authentication centrally, and expose those capabilities to whatever AI interface we're working in. Our team can now query CRM data, pull financial reports, check project status — all through conversation with AI that has live access to those systems.

Open source deployments (particularly Playwright MCP) became core to our workflows for research, data gathering, and automating web-based processes. Before: tasks either didn't get done (too tedious) or required custom scraping solutions. Now: "Go to this site, find the pricing page, extract the plan comparison into a table."
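Deploying an open source server like this is usually just a config entry. The snippet below follows the shape Claude Desktop uses (an `mcpServers` map in its JSON config); the package name is the one the playwright-mcp project documents at the time of writing, so check its README for current options:

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```

Once registered, the client launches the server on demand and the browser-automation tools show up in the model's tool list like any other capability.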

The key insight: once a capability exists as an MCP server, it's available everywhere. New team members get access automatically without provisioning separate access. When we adopt a new AI tool, it inherits all our existing capabilities.

A few things we learned the hard way:

Quality variance is the biggest ecosystem challenge. The maturity gap between the best and worst MCP servers is enormous. We've encountered servers that work flawlessly in production and servers that break every few weeks. Developing evaluation rigor has been as important as any technical skill.

Security and governance can't be afterthoughts. When AI can access your CRM, query your database, and browse the web, the attack surface expands. Permissions, access controls, logging, and guardrails are what maintain quality of output, and therefore trust.


What You Can Actually Do With It Today

CRM Augmentation (Sales)

When your AI assistant can query your CRM directly, the workflow changes fundamentally.

Before: Sales rep asks AI to help draft a follow-up email. Manually copies in the deal context, account history, recent activity. AI responds with generic output that needs heavy editing.

After: Sales rep asks AI to draft a follow-up for the Anderson deal. AI pulls opportunity details, communication history, and account context directly from the CRM. Generates a draft grounded in actual deal specifics. Rep reviews, tweaks, sends.

Natural Language Data Exploration (Finance)

MCP connections to data warehouses mean business users can explore data through conversation rather than learning SQL or waiting for analyst support.

"What was our gross margin by product line last quarter, compared to the same period last year?" AI queries the data warehouse, pulls the numbers, does the comparison, presents the analysis.

Documentation and Knowledge Access (Engineering)

Engineering teams accumulate documentation that's theoretically accessible but practically impossible to search — architecture decisions, runbooks, API specs, incident post-mortems.

An MCP server wrapping your documentation repository makes that knowledge queryable. New team members get access to institutional knowledge without playing 20 questions with senior engineers.

The Real Power: Composability

[Diagram: one AI agent conversation connected over MCP to a CRM server (deals + contacts), a data warehouse (analytics + metrics), a knowledge base (docs + runbooks), and a PM tool (tasks + capacity)]

Each new MCP server multiplies the potential combinations, shifting AI from "assistant you hand information to" to "collaborator with direct system access."

Individual MCP capabilities are useful. Stacking them is transformative. A workflow that queries your CRM for recent deals, pulls relevant context from your knowledge base, checks project capacity in your PM tool, and drafts a proposal — that's not four separate AI interactions. It's one conversation with an AI that has access to all four capabilities simultaneously.
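The mechanics behind that claim are simple: the client aggregates every connected server's tools into one menu for the model, so capabilities compose without any extra integration work. A toy sketch, with all server and tool names hypothetical:

```python
# Each connected MCP server contributes its own tools.
servers = {
    "crm":       ["list_recent_deals"],
    "warehouse": ["run_query"],
    "kb":        ["search_docs"],
    "pm":        ["get_team_capacity"],
}

# The client flattens them into one namespaced list -- this is all
# the LLM sees, so a single conversation can draw on all four systems.
menu = [f"{server}.{tool}" for server, tools in servers.items() for tool in tools]
print(menu)
# ['crm.list_recent_deals', 'warehouse.run_query', 'kb.search_docs', 'pm.get_team_capacity']
```

Adding a fifth server is one more entry in the map, not a new integration project.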


What's Not Ready Yet

Autonomous multi-step workflows — MCP excels at single-tool invocations. Complex workflows requiring multiple sequential tool uses with dependencies are less reliable.

Write operation maturity — Read operations are broadly more mature than writes. For workflows that need to take action, not just retrieve information, expect to do more custom work.

Universal coverage — The biggest ERPs, several major CRMs, and most HRIS platforms don't have MCP servers yet, though the trajectory is clearly toward broader adoption.

Authentication granularity — Many servers have basic authentication but lack fine-grained permissions. For organizations with strict data governance, this can be a blocker.


How to Get Started

We crawl before we walk, and walk before we run. The best initial implementations are low risk, high in learning value, and deliver visible impact.

Many organizations have already started by connecting Claude or ChatGPT to Drive or Calendar. The next step is connecting higher-value systems like ERPs and CRMs and integrating them into the workflows of different teams. Team leads can map out the tasks that are still mostly manual clicking through UIs, and begin with read-only MCP access to those systems.

Once teams have baseline confidence, audit your existing processes, data sources, and required outputs to figure out where custom MCP servers are worth building. Custom servers can have immense ROI by reviving underutilized company IP: databases, documentation, or existing APIs.

On cost: the barrier is lower than most people assume. Connecting to vendor-provided MCPs is essentially free beyond your existing subscriptions. Open source servers cost nothing to deploy. Custom builds require engineering time — a straightforward API wrapper is days, something more complex is weeks — but the work is bounded, not open-ended.

MCP support is already becoming a competitive differentiator among SaaS vendors. Within 12 months, it'll be a standard checkbox in enterprise software evaluation. The teams already fluent in this world will evaluate and deploy new capabilities in days while everyone else is still figuring out the basics.


We've built MCP infrastructure for our own operations and for clients across sales, finance, operations, and engineering. If you're evaluating MCP for your organization or want to accelerate an implementation, reach out — we're happy to share what we've learned.