UnlockLive IT designs and ships custom Model Context Protocol (MCP) servers — the cross-vendor standard for connecting AI agents to your APIs, databases, and internal tools. We build internal MCP servers for engineering teams using Claude Code, Cursor, and Codex CLI; customer-facing remote MCP servers with OAuth 2.1 and multi-tenant scope enforcement; and audit/hardening engagements for existing MCP servers. Default stack: Python or TypeScript on Cloudflare Workers or AWS Lambda, with strict per-tool schemas, audit logging, and a documented threat model. MCP is the new integration surface for AI products — being early here is a distribution advantage.
What we build
Internal MCP servers for engineering teams: Expose your databases, observability stack, internal APIs, and runbooks to Claude Desktop, Cursor, Claude Code, and Codex CLI — so your engineers can query Snowflake, check Datadog, and triage Sentry issues without leaving their AI coding agent.
Customer-facing MCP servers for SaaS products: Ship an MCP server alongside your SaaS so your customers can integrate your product into their AI agents in minutes — the same value proposition as a public API, with one-click installation in Claude, Cursor, and the ChatGPT desktop app.
Database & data-warehouse MCP servers: Read-only (or scoped read/write) MCP wrappers around PostgreSQL, MySQL, Snowflake, BigQuery, Redshift, and ClickHouse — with row-level security, query-cost controls, and audit logging.
Workflow automation MCP servers: Wrap Salesforce, HubSpot, Zendesk, Linear, Jira, GitHub, Slack, Notion, and Google Workspace as MCP tools — turning AI agents into competent ops automation.
Hosted/remote MCP servers (Streamable HTTP / SSE): Production-ready remote MCP servers (Streamable HTTP, or legacy HTTP+SSE transport) deployed on AWS, Cloudflare Workers, or your VPC. OAuth 2.1, rate limiting, multi-tenant isolation, and full observability.
MCP server hardening & security review: Audit existing MCP servers for prompt-injection exposure, scope leakage, over-broad tool permissions, and authentication gaps. We deliver a written threat model and a remediation patch.
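The pattern behind all of these engagements can be sketched in plain Python, without the MCP SDK: every tool call passes through an allowlist check, scope enforcement, and strict input validation, and leaves an audit record before the handler runs. The names here (`AuditedToolRegistry`, the `read_orders` tool, the `orders:read` scope) are illustrative assumptions, not part of any particular client's API.

```python
import json
import time
from typing import Any, Callable

class ToolError(Exception):
    pass

class AuditedToolRegistry:
    """Sketch of the gate every tool call passes through: allowlist,
    per-tool scope check, required-field validation, audit record."""

    def __init__(self) -> None:
        # tool name -> (required scopes, required arg names, handler)
        self._tools: dict[str, tuple[set[str], set[str], Callable[..., Any]]] = {}
        self.audit_log: list[dict[str, Any]] = []

    def register(self, name: str, scopes: set[str], required: set[str],
                 handler: Callable[..., Any]) -> None:
        self._tools[name] = (scopes, required, handler)

    def call(self, name: str, caller_scopes: set[str], args: dict[str, Any]) -> Any:
        if name not in self._tools:
            raise ToolError(f"unknown tool: {name}")
        scopes, required, handler = self._tools[name]
        if not scopes <= caller_scopes:            # per-tool scope enforcement
            raise ToolError(f"missing scopes: {scopes - caller_scopes}")
        if missing := required - args.keys():      # strict input validation
            raise ToolError(f"missing fields: {missing}")
        self.audit_log.append({"ts": time.time(), "tool": name,
                               "args": json.dumps(args, sort_keys=True)})
        return handler(**args)

# Hypothetical read-only tool: the handler never sees an unvalidated call.
registry = AuditedToolRegistry()
registry.register("read_orders", scopes={"orders:read"},
                  required={"customer_id"},
                  handler=lambda customer_id: [{"id": 1, "customer": customer_id}])

rows = registry.call("read_orders", {"orders:read"}, {"customer_id": "c_42"})
```

In a real server the registry sits behind the MCP SDK's tool dispatch; the point is that validation and logging live in one chokepoint rather than inside each handler.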
Our MCP technology stack
MCP SDKs: Official @modelcontextprotocol/sdk for TypeScript, mcp Python SDK, FastMCP (Python ergonomic wrapper)
Observability: OpenTelemetry, Sentry, structured logging, per-tool latency and error-rate dashboards
Security tooling: Per-tool allowlisting, output sanitization, prompt-injection detection, audit logging on every tool call
Compatible clients: Claude Desktop, Claude Code, Cursor, Windsurf, Codex CLI, Continue, Cline, ChatGPT Desktop, custom agents
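For local stdio servers, client wiring is a short config entry. Here is the shape Claude Desktop uses in `claude_desktop_config.json` (Cursor and other clients use a near-identical `mcpServers` block); the server name `internal-tools`, the module `acme_mcp.server`, and the connection string are placeholders:

```json
{
  "mcpServers": {
    "internal-tools": {
      "command": "python",
      "args": ["-m", "acme_mcp.server"],
      "env": {
        "DATABASE_URL": "postgresql://readonly@db.internal/analytics"
      }
    }
  }
}
```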
Our MCP server development process
Use case mapping (3-5 days): What tools do you want to expose, to whom, with what permissions? Most MCP server projects fail because they expose too much, too broadly. We start by writing the threat model and the scope boundaries.
API and tool design (1 week): Design the tool surface — clear, single-purpose tools beat sprawling 'do anything' tools every time. Define inputs (with strict JSON schemas), outputs, error semantics, and idempotency. Document each tool the way you'd document a public API.
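As a sketch of what "strict JSON schemas" means in practice, here is an MCP tool definition with a tightly constrained `inputSchema` — the `create_refund` tool and its field limits are hypothetical, but the structure (name, description, JSON Schema input with `additionalProperties: false`) follows the MCP tool format:

```json
{
  "name": "create_refund",
  "description": "Issue a refund for a single order. Fails if the order is not refundable.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "order_id": { "type": "string", "pattern": "^ord_[a-z0-9]+$" },
      "amount_cents": { "type": "integer", "minimum": 1, "maximum": 500000 },
      "reason": { "type": "string", "enum": ["duplicate", "fraud", "customer_request"] }
    },
    "required": ["order_id", "amount_cents", "reason"],
    "additionalProperties": false
  }
}
```

Tight enums, patterns, and numeric bounds do double duty: they steer the model toward valid calls and they shrink the attack surface the security review has to cover.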
Implementation (2-6 weeks): Build the MCP server with the official SDK, wire up backend integrations, add authentication and scope enforcement, structured logging, and a comprehensive test suite (unit + integration + LLM-driven end-to-end tests).
Security review and red-teaming (3-5 days): Prompt-injection testing, scope-escalation testing, malformed-input fuzzing, and a written threat model with documented mitigations. We deliver both the test results and the security patch.
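A minimal sketch of the malformed-input pass, assuming a hypothetical `validate_args` for a single-field tool — the red-team suite throws adversarial payloads at every tool's validator and asserts they are all rejected before reaching a handler:

```python
def validate_args(args: object) -> bool:
    """Hypothetical strict validator for a tool taking exactly one field."""
    if not isinstance(args, dict) or set(args) != {"order_id"}:
        return False
    oid = args["order_id"]
    return (isinstance(oid, str) and oid.startswith("ord_")
            and len(oid) < 64 and oid.replace("_", "").isalnum())

# Malformed and adversarial payloads a fuzzing pass throws at every tool:
CASES = [
    None, [], {},
    {"order_id": None},                            # wrong type
    {"order_id": 1e308},                           # numeric abuse
    {"order_id": "ord_" + "a" * 10_000},           # oversized input
    {"order_id": "ord_1", "admin": True},          # extra-field smuggling
    {"order_id": "ord_1; DROP TABLE orders;--"},   # injection-shaped string
]

rejected = sum(not validate_args(c) for c in CASES)  # all should be rejected
```

The same harness runs with scope-escalation cases (valid inputs, insufficient scopes) and prompt-injection strings embedded in otherwise-valid fields.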
Deployment and rollout (1-2 weeks): Production deployment with TLS, OAuth, rate limiting, and observability. Distribution via direct config, MCP server registries, or one-click install URLs depending on audience.
Maintenance and evolution (ongoing): MCP is moving fast — new spec features, new transport modes, new client integrations. We maintain MCP servers under our standard retainer, including spec upgrades and new client compatibility.
Frequently asked questions
What is the Model Context Protocol (MCP)?
MCP is an open standard introduced by Anthropic in late 2024 (and now adopted by OpenAI, Google, Cursor, and most of the AI tooling ecosystem) for connecting AI agents to external tools, data, and services. Think of it as 'USB-C for AI tools' — write your tool integration once as an MCP server, and any compatible AI client can use it. It's quickly becoming the default way to extend AI agents.
Why build a custom MCP server instead of an OpenAI plugin or function calls?
OpenAI plugins were deprecated. Function calls work but are vendor-specific and live inside one app's context — you'd rewrite the integration for each LLM provider and each application. MCP is cross-vendor, cross-app, and supports richer primitives (resources, prompts, sampling, elicitation) than plain function calls. If you want one integration that works in Claude Desktop, Cursor, ChatGPT Desktop, Codex CLI, and your own custom agents, build an MCP server.
How long does it take to build an MCP server?
A focused internal MCP server wrapping 3-5 tools typically ships in 2-4 weeks. A customer-facing remote MCP server with OAuth, multi-tenancy, and 10-20 tools ships in 6-10 weeks. Enterprise-grade MCP servers with deep audit, scoped permissions, and one-click marketplace install run 10-16 weeks.
How much does it cost to build an MCP server?
Internal MCP servers (small surface, single transport, no auth) typically range from $12,000 to $35,000. Production remote MCP servers with OAuth, multi-tenant scope enforcement, and 10+ tools range from $35,000 to $90,000. Enterprise-grade servers with security review, SOC 2 controls, and ongoing maintenance contracts start at $80,000.
What about prompt injection? AI agents calling my tools is scary.
It should be. Every MCP server we ship includes (1) strict per-tool input schemas and output validation, (2) per-user and per-client scope enforcement at the tool level, (3) audit logging on every call, (4) destructive-action confirmation prompts, (5) rate limiting and quota enforcement, and (6) a documented threat model. For high-stakes tools we add a human-in-the-loop confirmation step on the AI client side. Security is a first-class concern, not a follow-up ticket.
Where should we host the MCP server?
Local stdio servers need no hosting — they run on the user's machine alongside the desktop client. Remote servers we usually deploy to Cloudflare Workers (best latency profile and easiest auth) or AWS Lambda + API Gateway (if you're already an AWS shop). For high-throughput servers we use ECS Fargate or Fly.io. Self-hosted on Kubernetes is a reasonable choice for enterprise customers with strict data residency.
Can you make our existing REST API into an MCP server?
Yes — this is one of our most common engagements. We don't auto-wrap the entire API as a sprawl of tools (that's a security and UX disaster). Instead we identify the 5-15 most useful agentic workflows your API enables, design clean MCP tools around those workflows, and ship a focused server. Most production REST APIs need significant reshaping for agent ergonomics; we handle that as part of the project.
Tell us what your APIs do today and where AI agents could automate the work. We'll respond within one business day with a candid take on tool design and effort. Book a free strategy call with our Toronto team.