5 MCP Servers We Use to Run Client Outbound on Claude Code
How Stellar Digital connects Claude Code to real business systems using MCP servers -- Supabase, Playwright, Google Sheets, Discord, and custom API integrations that run our entire GTM operation.
MCP servers -- Model Context Protocol servers -- are external integrations that connect Claude Code to real business systems like databases, browsers, spreadsheets, and APIs. For a B2B agency, they are what turn Claude Code from a coding assistant into an operations platform that can actually run your GTM workflows end to end.
Here is what we use, why we use it, and what the actual workflows look like.
Why MCP Servers Change What Claude Code Can Do
Out of the box, Claude Code is powerful for writing, editing, and running code. But running a GTM agency means operating across a dozen systems: a lead database, an email sender, enrichment APIs, a CRM, reporting tools, communication channels. Each handoff between systems is friction. Each manual export-import is a place where things break.
MCP servers close those gaps. When Claude Code is connected to Supabase via MCP, it does not just write a script that queries Supabase -- it can query Supabase directly, see the results in context, reason about them, and take a next action. The intelligence and the data are in the same place at the same time.
According to Anthropic's documentation, MCP is an open standard designed specifically to solve the "tool fragmentation" problem: the fact that AI systems need a consistent way to communicate with the external world. The protocol has seen rapid adoption since its release, with hundreds of community-built servers now available for common platforms.
For us, MCP servers are what made the shift from "Claude Code as a helpful assistant" to "Claude Code as an operational layer" possible.
MCP Server 1: Supabase -- The Central Nervous System
What it connects to: Our Supabase PostgreSQL database, which holds all client leads, campaign records, enrichment cache, and onboarding data.

What we use it for: Reading lead status, updating records, pulling campaign metrics, checking for duplicate leads before imports, and surfacing data during reporting sessions.

Why it matters: Before MCP, getting data into a Claude session meant manually exporting a CSV, pasting it in, and hoping Claude could parse it correctly. With Supabase MCP, Claude can run a query like "show me all leads for this client with status = ready" and get live results it can reason about immediately. No export. No paste. No staleness.

A sanitized example workflow we run weekly:
- Claude queries client_leads for all records where status = 'enriched' and score IS NULL
- It pulls each lead's company data, enrichment fields, and ICP criteria from context
- It runs a scoring pass and writes scores back to the database
- It flags any leads that look like mismatches for human review
- It updates status to ready for leads that clear the threshold
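The scoring and status steps can be sketched as pure functions. This is a minimal sketch only: the field names (employee_count, tech_stack, website_active), the point weights, and the 70-point threshold are illustrative placeholders, not our actual schema or logic.

```python
# Illustrative scoring pass -- field names and weights are placeholders.
THRESHOLD = 70

def score_lead(lead: dict) -> int:
    """Toy ICP score; the real pass weighs enrichment fields against ICP criteria."""
    points = 0
    if lead.get("employee_count", 0) >= 10:
        points += 40
    if "shopify" in [t.lower() for t in lead.get("tech_stack", [])]:
        points += 35
    if lead.get("website_active"):
        points += 25
    return points

def next_status(score: int) -> str:
    """Leads that clear the threshold move to 'ready'; the rest go to human review."""
    return "ready" if score >= THRESHOLD else "needs_review"
```

The useful property is that the scoring logic is deterministic and auditable, while Claude handles the messy parts: pulling the inputs, running the pass, and writing results back through the MCP connection.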
```json
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server-supabase@latest",
               "--supabase-url", "YOUR_SUPABASE_URL",
               "--supabase-key", "YOUR_SERVICE_ROLE_KEY"]
    }
  }
}
```
One important note: in most sessions we connect MCP with a restricted role rather than the service role key. The service role can do anything. For day-to-day reads and status updates, a scoped role with explicit table permissions is safer.
MCP Server 2: Playwright -- Browser Automation and Research
What it connects to: A headless Chromium instance that Claude Code can control like a browser.

What we use it for: Competitor research, lead verification, scraping company data from sites that do not have APIs, and checking whether a lead's website is still active before enrichment.

Why it matters: A large percentage of leads in any list have outdated or incorrect data. A company's website can tell you more about their ICP fit than any data provider -- what language they use, what problems they emphasize, what tech stack they mention. Playwright MCP lets Claude browse and synthesize this in real time rather than relying on static database snapshots.

A sanitized example workflow:
When we import a new batch of leads for a client, Claude runs a verification pass using Playwright. For each company:
- Navigate to the company website
- Check if the site loads (dead sites get flagged immediately)
- Pull the homepage headline and key messaging
- Check the "About" or "Team" page for headcount signals
- Note any technology indicators (Shopify badge, "Built on AWS", etc.)
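Once Playwright has fetched a page, extracting these signals is plain parsing. A minimal sketch of that step, applied to an already-fetched status code and HTML string (the marker list and return shape are illustrative, and a real pass would use a proper HTML parser rather than regexes):

```python
import re

# Illustrative tech markers -- a real list would be much longer.
TECH_MARKERS = {"shopify": "Shopify", "built on aws": "AWS", "wp-content": "WordPress"}

def verify_company(status_code: int, html: str) -> dict:
    """Classify one fetched homepage into the signals described above."""
    if status_code >= 400 or not html.strip():
        # Dead sites get flagged immediately.
        return {"alive": False, "flag": "dead_site"}
    # Pull the headline from the first <h1>, stripping any nested tags.
    m = re.search(r"<h1[^>]*>(.*?)</h1>", html, re.S | re.I)
    headline = re.sub(r"<[^>]+>", "", m.group(1)).strip() if m else None
    lowered = html.lower()
    tech = sorted({label for marker, label in TECH_MARKERS.items() if marker in lowered})
    return {"alive": True, "headline": headline, "tech": tech}
```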
```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    }
  }
}
```
Playwright MCP is one of the more resource-intensive servers -- running a browser is heavier than a database query. We scope it to specific workflows rather than leaving it always-on.
MCP Server 3: Google Drive / Sheets -- Client-Facing Reporting
What it connects to: Google Drive, Google Sheets, and Google Docs in our workspace.

What we use it for: Writing weekly reports directly into client-shared spreadsheets, pulling lead lists from Sheets when a client sends us data in that format, and generating slides for monthly reviews.

Why it matters: Clients do not care what is in our database. They care what shows up in the spreadsheet they can see. The last mile of any reporting workflow is getting data from our system into a format the client can open, understand, and act on. Google Sheets MCP eliminates the manual step of exporting, formatting, and sharing.

A sanitized example workflow:
At the end of each week, Claude runs a reporting pass for each active client:
- Query Supabase for the week's campaign metrics (opens, replies, meetings booked)
- Compare to prior week and flag any significant changes
- Pull into the client's shared Google Sheet -- pre-formatted with their brand colors and column structure
- Add a written summary paragraph at the top noting anything worth their attention
- Drop a notification in Discord with a link to the updated sheet
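The "compare to prior week" step is simple, deterministic logic that runs before anything touches the Sheet. A sketch, assuming a flat dict of metrics per week and a 15% change floor (both are illustrative, not our actual report format):

```python
# Illustrative week-over-week comparison; metric names and the 15%
# floor are assumptions, not our production values.
def flag_changes(this_week: dict, last_week: dict, threshold: float = 0.15) -> list[str]:
    """Return human-readable flags for metrics that moved more than the threshold."""
    flags = []
    for metric, value in this_week.items():
        prev = last_week.get(metric)
        if not prev:
            continue  # no baseline (or zero) -- nothing to compare against
        delta = (value - prev) / prev
        if abs(delta) >= threshold:
            direction = "up" if delta > 0 else "down"
            flags.append(f"{metric} {direction} {abs(delta):.0%} week over week")
    return flags
```

The output strings feed the written summary paragraph at the top of the Sheet; anything that does not clear the threshold is reported as a plain number without commentary.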
```json
{
  "mcpServers": {
    "gdrive": {
      "command": "npx",
      "args": ["-y", "@google/mcp-server-gdrive@latest"],
      "env": {
        "GOOGLE_CLIENT_ID": "YOUR_CLIENT_ID",
        "GOOGLE_CLIENT_SECRET": "YOUR_CLIENT_SECRET",
        "GOOGLE_REFRESH_TOKEN": "YOUR_REFRESH_TOKEN"
      }
    }
  }
}
```
MCP Server 4: Discord / Slack -- Team Notifications and Alerts
What it connects to: Our Discord workspace (and Slack for clients who prefer it).

What we use it for: Notifying the team when workflows complete, alerting when something breaks or hits an edge case, posting session summaries after work sessions, and dropping daily lead pipeline status updates.

Why it matters: One of the failure modes of agentic systems is that they run quietly in the background and you have no idea whether they succeeded, failed, or did something unexpected. Discord MCP gives Claude a voice -- it can surface what it did, flag what it needs help with, and keep the team in the loop without requiring anyone to dig through logs.

A sanitized example workflow:
After a lead import and enrichment run:
- Enrichment completes across 200 leads
- Claude posts a Discord embed: "Enrichment complete -- 200 leads processed, 147 verified, 31 skipped (missing domain), 22 flagged for manual review"
- It includes a link to the flagged leads in Supabase so someone can check them quickly
- If the verification rate dropped more than 10% from the previous run, it adds a note: "Verification rate lower than expected -- possible issue with the data source"
```json
{
  "mcpServers": {
    "discord": {
      "command": "python3",
      "args": ["path/to/discord_mcp_server.py"],
      "env": {
        "DISCORD_BOT_TOKEN": "YOUR_BOT_TOKEN",
        "DISCORD_CHANNEL_ID": "YOUR_CHANNEL_ID"
      }
    }
  }
}
```
We built a lightweight custom MCP server for Discord rather than using an off-the-shelf one because we wanted control over the embed format and which channels get which notifications.
MCP Server 5: Custom API MCPs -- Connecting to Instantly and Enrichment APIs
What it connects to: Instantly (our cold email sending platform), enrichment APIs like Prospeo and Hunter, and any other third-party service that has an API but no official MCP server.

What we use it for: Pulling campaign stats from Instantly directly into reporting workflows, triggering enrichment API calls from within Claude sessions, and querying our email verification service without leaving the workflow.

Why it matters: Most of the tools in a GTM stack do not have official MCP servers yet. But they all have REST APIs. Building a thin MCP wrapper around a REST API is not complicated -- it is usually a 100-150 line Python file. Once it exists, Claude can call that API as naturally as it calls any other tool.

A sanitized example of our Instantly MCP wrapper structure:
```python
# mcp_instantly.py -- simplified example, not production code
import httpx
from mcp.server import Server
from mcp.types import Tool

app = Server("instantly-mcp")

@app.list_tools()
async def list_tools():
    return [
        Tool(name="get_campaign_stats",
             description="Fetch open/reply/bounce rates for a campaign",
             inputSchema={...}),
        Tool(name="list_active_campaigns",
             description="List all active sending campaigns",
             inputSchema={...}),
    ]

@app.call_tool()
async def call_tool(name, arguments):
    if name == "get_campaign_stats":
        # call Instantly API with credentials from env
        # return formatted stats
        ...
```
The key principle: API credentials live in environment variables, never in the MCP server code itself. The MCP server is just a translator between Claude's tool calls and the API's expected format.
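Two small helpers illustrate that translator role. This is a hedged sketch, not Instantly's real API: the raw payload fields (sent, opens, replies, bounces) and the output shape are assumptions for illustration only.

```python
import os

def auth_headers() -> dict:
    # Credentials are read from the environment at call time,
    # never hardcoded in the wrapper itself.
    return {"Authorization": f"Bearer {os.environ['INSTANTLY_API_KEY']}"}

def format_stats(raw: dict) -> dict:
    """Reshape a raw API payload (fields assumed) into the compact form Claude sees."""
    sent = max(raw.get("sent", 0), 1)  # guard against divide-by-zero
    return {
        "open_rate": round(raw.get("opens", 0) / sent, 3),
        "reply_rate": round(raw.get("replies", 0) / sent, 3),
        "bounce_rate": round(raw.get("bounces", 0) / sent, 3),
    }
```

Everything the model sees is already normalized -- rates instead of raw counts -- so it can reason about performance without re-deriving arithmetic on every call.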
A workflow this enables:
- Claude pulls campaign stats from Instantly via the custom MCP
- Compares against last week's numbers from Supabase
- Identifies which sequences have reply rates below threshold
- Drafts revised subject lines and message variants for the underperformers
- Posts recommendations to Discord for human review before anything goes live
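The "below threshold" check in that workflow is simple to express. A sketch, where the 2% reply-rate floor and the campaign dict fields are illustrative assumptions:

```python
# Illustrative threshold check; the 2% floor and field names are assumptions.
def underperformers(campaigns: list[dict], min_reply_rate: float = 0.02) -> list[str]:
    """Return names of campaigns whose reply rate falls below the floor."""
    return [c["name"] for c in campaigns
            if c["replies"] / max(c["sent"], 1) < min_reply_rate]
```

Only the campaigns this function returns get new subject lines drafted; everything above the floor is left alone.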
The Full .mcp.json Configuration
Here is a sanitized version of the configuration file that wires these servers together. In practice this file lives in the project root and is loaded automatically when Claude Code starts in that directory.
```json
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server-supabase@latest",
               "--supabase-url", "${SUPABASE_URL}",
               "--supabase-key", "${SUPABASE_KEY}"]
    },
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    },
    "gdrive": {
      "command": "npx",
      "args": ["-y", "@google/mcp-server-gdrive@latest"],
      "env": {
        "GOOGLE_CLIENT_ID": "${GOOGLE_CLIENT_ID}",
        "GOOGLE_CLIENT_SECRET": "${GOOGLE_CLIENT_SECRET}",
        "GOOGLE_REFRESH_TOKEN": "${GOOGLE_REFRESH_TOKEN}"
      }
    },
    "discord": {
      "command": "python3",
      "args": ["./execution/infrastructure/discord_mcp.py"],
      "env": {
        "DISCORD_BOT_TOKEN": "${DISCORD_BOT_TOKEN}",
        "DISCORD_CHANNEL_ID": "${DISCORD_CHANNEL_ID}"
      }
    },
    "instantly": {
      "command": "python3",
      "args": ["./execution/sequencers/mcp_instantly.py"],
      "env": {
        "INSTANTLY_API_KEY": "${INSTANTLY_API_KEY}"
      }
    }
  }
}
```
All credential references use environment variable interpolation. The .env file stays separate and is never committed to the repo.
What This Actually Changes
Before MCP servers, a Claude Code session looked like this: ask Claude to write a script, run the script in a terminal, copy the output back into Claude, ask it what to do next. Functional, but every step required manual handoffs.
With MCP servers connected, a session looks like this: describe the workflow you want to run, watch Claude execute it across all your systems, get a summary of what happened. The intelligence and the tooling are in the same place.
This is not theoretical. In a recent campaign setup for a SaaS client, what used to take our team 4-5 hours of setup (pulling the lead list, running enrichment, scoring, loading into Instantly, setting up the campaign) now runs in under an hour. Most of that hour is us reviewing Claude's outputs and approving before anything goes live -- which is exactly where our attention should be.
The practical ceiling for MCP-connected workflows is not the AI -- it is the quality of your data and the clarity of your directives. Which is why we pair every MCP integration with a written directive that defines exactly what the workflow is supposed to do, what good output looks like, and what edge cases to handle.
For a deeper look at how we structure those directives, see our post on agentic workflows explained. For what this looks like as a client-facing service, see our go-to-market systems page.
What to Build First
If you are setting up MCP servers for a GTM operation and do not know where to start:
Start with your database. Whether that is Supabase, Postgres, or another data store, connecting Claude Code to your primary data source is the highest-leverage first step. Everything else in your stack flows through data.

Add Playwright second if you do any kind of research or lead verification work. The ability to browse the web in context is worth a lot.

Build custom API MCPs last for the tools specific to your stack. These take a few hours each to build but pay off quickly for any tool you use daily.

The pattern that works for us: build one integration, prove the workflow, then add the next. Trying to wire up five systems at once before you have validated any single workflow is a recipe for a complex system that does not actually work reliably.
MCP servers are not magic. They are pipes. What matters is what you do with the connection.
Frequently Asked Questions
What are MCP servers in Claude Code?
MCP stands for Model Context Protocol -- an open standard that lets Claude Code connect to external tools, databases, and APIs as persistent context providers. Unlike one-off tool calls, MCP servers stay connected across a session, meaning Claude can read from your database, write to a spreadsheet, trigger a browser, and fire an API call in a single coherent workflow. Think of each MCP server as giving Claude a new set of hands that reach into a specific system.
Can Claude Code replace a sales operations team with MCP servers?
Not entirely -- and you should be skeptical of anyone who says otherwise. Claude Code with MCP servers can automate the deterministic, repeatable parts of sales ops: pulling lead data, enriching contacts, generating reports, triggering notifications, and syncing records across systems. The judgment layer -- deciding which accounts to prioritize, interpreting nuanced replies, managing relationships -- still needs a human. What changes is the ratio. One person can now oversee workflows that previously required three or four.
Which MCP server is most valuable for a B2B agency?
For us, Supabase MCP is the highest-leverage single integration because our entire lead and campaign database lives there. When Claude Code can read and write to Supabase directly, it can run multi-step workflows -- checking if a lead already exists, pulling enrichment history, updating status, and pulling campaign results -- without any manual data handoffs. If you only implement one MCP server, make it the one that connects to your primary data store.
How do you secure MCP servers so Claude cannot make destructive changes?
The main controls are: use read-only database credentials for any MCP server where Claude only needs to read data; configure row-level security in Supabase so the MCP user can only touch specific tables; and add explicit safety rules to your CLAUDE.md that require human approval before Claude runs any delete or bulk-update operation. We also maintain a staging environment for testing new workflows before they touch production data.
What is the difference between MCP servers and Claude Code tools?
Claude Code tools are built-in capabilities -- file editing, bash commands, web search. MCP servers are external integrations you configure yourself to connect Claude Code to systems it would not otherwise know about: your CRM, your database, your communication channels, your cold email platform. Tools come with Claude. MCP servers are how you extend Claude into your specific business context.
Want us to build this for you?
30 minutes. We'll tell you what to automate first. No pitch, just the plan.
Book a free audit