5 MCP Servers We Use to Run Client Outbound on Claude Code
How Stellar Digital connects Claude Code to real business systems using MCP servers -- Supabase, Playwright, Google Sheets, Discord, and custom API integrations that run our entire GTM operation.
MCP servers (Model Context Protocol) are external integrations that connect Claude Code to real business systems: databases, browsers, spreadsheets, APIs. For a B2B agency, they're what turn Claude Code from a coding assistant into an operations platform that can actually run your GTM workflows end to end.
Here's what we use, why we use it, and what the workflows actually look like.
Why MCP servers change what Claude Code can do
Out of the box, Claude Code is good at writing, editing, and running code. But running a GTM agency means operating across a dozen systems: a lead database, an email sender, enrichment APIs, a CRM, reporting tools, communication channels. Every handoff is friction. Every manual export-import is a place where things break.
MCP servers close those gaps. When Claude Code is wired into Supabase via MCP, it doesn't just write a script that queries Supabase. It can query Supabase directly, see the results in context, reason about them, and take a next action. The intelligence and the data are in the same place at the same time.
Per Anthropic's documentation, MCP is an open standard built to solve the "tool fragmentation" problem: AI systems need a consistent way to talk to the outside world. The protocol has been adopted quickly since launch, with hundreds of community-built servers now available for common platforms.
For us, MCP servers are what made the shift from "Claude Code as a helpful assistant" to "Claude Code as an operational layer" possible.
MCP server 1: Supabase, the central nervous system
What it connects to: our Supabase PostgreSQL database. It holds all client leads, campaign records, enrichment cache, and onboarding data.
What we use it for: reading lead status, updating records, pulling campaign metrics, checking for duplicates before imports, and surfacing data during reporting sessions.
Why it matters: before MCP, getting data into a Claude session meant exporting a CSV, pasting it in, and hoping Claude could parse it correctly. With Supabase MCP, Claude can run a query like "show me all leads for this client with status = ready" and get live results it can reason about immediately. No export, no paste, no staleness.
A sanitized example workflow we run weekly:
- Claude queries client_leads for records where status = 'enriched' and score IS NULL
- It pulls each lead's company data, enrichment fields, and ICP criteria into context
- It runs a scoring pass and writes scores back to the database
- It flags anything that looks like a mismatch for human review
- It updates status to ready for leads that clear the threshold
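The scoring pass itself reduces to a pure function. A minimal sketch; the field names, weights, and threshold here are illustrative assumptions, not our production ICP criteria:

```python
# Illustrative sketch of the weekly scoring pass. Field names
# (employee_count, industry, tech_stack) and weights are assumptions,
# not our production scoring model.

def score_lead(lead: dict, icp: dict) -> int:
    """Score a lead 0-100 against simple ICP criteria."""
    score = 0
    lo, hi = icp["employee_range"]
    if lo <= lead.get("employee_count", 0) <= hi:
        score += 40
    if lead.get("industry") in icp["industries"]:
        score += 35
    if any(t in lead.get("tech_stack", []) for t in icp["tech_signals"]):
        score += 25
    return score

def triage(lead: dict, icp: dict, threshold: int = 60) -> str:
    """Decide the next status: 'ready' clears the threshold, else flag for review."""
    return "ready" if score_lead(lead, icp) >= threshold else "needs_review"
```

In the real workflow the equivalent reads and status updates run against client_leads through the Supabase MCP, not in a local script.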
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server-supabase@latest",
               "--supabase-url", "YOUR_SUPABASE_URL",
               "--supabase-key", "YOUR_SERVICE_ROLE_KEY"]
    }
  }
}
One important note: we use a restricted role for the MCP connection in most sessions, not the service role key. The service role can do anything. For day-to-day reads and status updates, a scoped role with explicit table permissions is safer.
MCP server 2: Playwright, browser automation and research
What it connects to: a headless Chromium instance that Claude Code can control like a browser.
What we use it for: competitor research, lead verification, scraping company data from sites that don't have APIs, and checking whether a lead's website is still alive before enrichment.
Why it matters: a big chunk of the leads on any list have stale or wrong data, and a company's website tells you more about ICP fit than most data providers do: the language they use, the problems they emphasize, the tech stack they mention. Playwright MCP lets Claude browse and synthesize this in real time instead of relying on stale database snapshots.
A sanitized example workflow:
When we import a new batch of leads for a client, Claude runs a verification pass using Playwright. For each company:
- Navigate to the company website
- Check if it loads (dead sites get flagged immediately)
- Pull the homepage headline and key messaging
- Check the "About" or "Team" page for headcount signals
- Note any tech indicators (Shopify badge, "Built on AWS", etc.)
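Once the page HTML is in hand, the signal-pulling steps above are simple parsing. A minimal sketch with the Python stdlib; the tech markers are illustrative, and in practice Claude does this through the Playwright MCP tools rather than a local script:

```python
# Illustrative verification-pass helpers. Assumes the homepage HTML has
# already been fetched (via Playwright MCP in the actual workflow).
from html.parser import HTMLParser

class HeadlineParser(HTMLParser):
    """Grab the text of the first <h1> on the page."""
    def __init__(self):
        super().__init__()
        self._in_h1 = False
        self.headline = None

    def handle_starttag(self, tag, attrs):
        if tag == "h1" and self.headline is None:
            self._in_h1 = True

    def handle_endtag(self, tag):
        if tag == "h1":
            self._in_h1 = False

    def handle_data(self, data):
        if self._in_h1 and self.headline is None and data.strip():
            self.headline = data.strip()

# Illustrative marker -> label map; extend per client ICP
TECH_MARKERS = {"shopify": "Shopify", "aws": "AWS", "hubspot": "HubSpot"}

def extract_signals(html: str) -> dict:
    """Pull the homepage headline and any tech indicators from raw HTML."""
    parser = HeadlineParser()
    parser.feed(html)
    lowered = html.lower()
    return {
        "headline": parser.headline,
        "tech": [label for marker, label in TECH_MARKERS.items() if marker in lowered],
    }
```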
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    }
  }
}
Playwright MCP is one of the heavier servers. Running a browser costs more than a database query. We scope it to specific workflows rather than leaving it always-on.
MCP server 3: Google Drive / Sheets, client-facing reporting
What it connects to: Google Drive, Google Sheets, and Google Docs in our workspace.
What we use it for: writing weekly reports directly into client-shared spreadsheets, pulling lead lists from Sheets when a client sends us data that way, and generating slides for monthly reviews.
Why it matters: clients don't care what's in our database. They care about what shows up in the spreadsheet they can see. The last mile of any reporting workflow is getting data from our system into a format the client can open, understand, and act on. Google Sheets MCP kills the manual export-format-share step.
A sanitized example workflow:
At the end of each week, Claude runs a reporting pass for each active client:
- Query Supabase for the week's campaign metrics (opens, replies, meetings booked)
- Compare to the prior week and flag anything notable
- Push into the client's shared Google Sheet, pre-formatted with their brand colors and column structure
- Add a written summary paragraph at the top calling out anything worth their attention
- Post a Discord notification with a link to the updated sheet
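The week-over-week comparison in that pass can be sketched as a pure function. The metric names and the 20% "notable" threshold are illustrative assumptions, not our actual reporting criteria:

```python
# Illustrative week-over-week flagging for the reporting pass.
def flag_notable(this_week: dict, last_week: dict, threshold: float = 0.2) -> list:
    """Return human-readable flags for metrics that moved more than `threshold`."""
    flags = []
    for metric, current in this_week.items():
        prior = last_week.get(metric)
        if not prior:  # no baseline (missing or zero) to compare against
            continue
        change = (current - prior) / prior
        if abs(change) >= threshold:
            direction = "up" if change > 0 else "down"
            flags.append(f"{metric} {direction} {abs(change):.0%} vs last week")
    return flags
```

Claude folds these flags into the written summary paragraph at the top of the sheet.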
{
  "mcpServers": {
    "gdrive": {
      "command": "npx",
      "args": ["-y", "@google/mcp-server-gdrive@latest"],
      "env": {
        "GOOGLE_CLIENT_ID": "YOUR_CLIENT_ID",
        "GOOGLE_CLIENT_SECRET": "YOUR_CLIENT_SECRET",
        "GOOGLE_REFRESH_TOKEN": "YOUR_REFRESH_TOKEN"
      }
    }
  }
}
MCP server 4: Discord / Slack, team notifications and alerts
What it connects to: our Discord workspace (and Slack for clients who prefer it).
What we use it for: notifying the team when workflows finish, alerting when something breaks or hits an edge case, posting session summaries after work sessions, and dropping daily pipeline status updates.
Why it matters: a common failure mode of agentic systems is that they run quietly in the background and you have no idea whether they succeeded, failed, or did something weird. Discord MCP gives Claude a voice. It can surface what it did, flag what it needs help with, and keep the team in the loop without anyone digging through logs.
A sanitized example workflow:
After a lead import and enrichment run:
- Enrichment finishes across 200 leads
- Claude posts a Discord embed: "Enrichment complete. 200 leads processed, 147 verified, 31 skipped (missing domain), 22 flagged for manual review."
- It includes a link to the flagged leads in Supabase so someone can check them quickly
- If the verification rate dropped more than 10% from the last run, it adds a note: "Verification rate lower than expected. Possible issue with the data source."
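The summary message and the rate-drop check above can be sketched as a small builder function. The counts mirror the example; the function shape and the interpretation of "10%" as 10 percentage points are illustrative assumptions:

```python
# Illustrative post-run summary builder for the Discord notification.
def build_summary(processed, verified, skipped, flagged, prior_rate=None):
    """Build the message body (and any warning) for the Discord embed."""
    rate = verified / processed if processed else 0.0
    lines = [f"Enrichment complete. {processed} leads processed, "
             f"{verified} verified, {skipped} skipped, "
             f"{flagged} flagged for manual review."]
    # Warn if the verification rate dropped more than 10 points vs the last run
    if prior_rate is not None and prior_rate - rate > 0.10:
        lines.append("Verification rate lower than expected. "
                     "Possible issue with the data source.")
    return {"rate": rate, "message": "\n".join(lines)}
```

The custom Discord MCP server then wraps this message in an embed and posts it to the right channel.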
{
  "mcpServers": {
    "discord": {
      "command": "python3",
      "args": ["path/to/discord_mcp_server.py"],
      "env": {
        "DISCORD_BOT_TOKEN": "YOUR_BOT_TOKEN",
        "DISCORD_CHANNEL_ID": "YOUR_CHANNEL_ID"
      }
    }
  }
}
We built a lightweight custom MCP server for Discord rather than using something off the shelf because we wanted control over the embed format and which channels get which notifications.
MCP server 5: custom API MCPs, connecting Instantly and enrichment APIs
What it connects to: Instantly (our cold email sending platform), enrichment APIs like Prospeo and Hunter, and any other third-party service that has an API but no official MCP server.
What we use it for: pulling campaign stats from Instantly directly into reporting workflows, triggering enrichment API calls from inside Claude sessions, and querying our email verification service without leaving the workflow.
Why it matters: most tools in a GTM stack don't have official MCP servers yet, but they all have REST APIs. Wrapping a REST API with a thin MCP server isn't complicated. It's usually a 100-150 line Python file. Once it exists, Claude can call that API as naturally as it calls any other tool.
A sanitized example of our Instantly MCP wrapper structure:
# mcp_instantly.py -- simplified example, not production code
import httpx
from mcp.server import Server
from mcp.types import Tool

app = Server("instantly-mcp")

@app.list_tools()
async def list_tools():
    return [
        Tool(name="get_campaign_stats",
             description="Fetch open/reply/bounce rates for a campaign",
             inputSchema={...}),
        Tool(name="list_active_campaigns",
             description="List all active sending campaigns",
             inputSchema={...})
    ]

@app.call_tool()
async def call_tool(name, arguments):
    if name == "get_campaign_stats":
        # call Instantly API with credentials from env
        # return formatted stats
        ...
The key principle: API credentials live in environment variables, never in the MCP server code itself. The MCP server is just a translator between Claude's tool calls and the API's expected format.
A workflow this enables:
- Claude pulls campaign stats from Instantly via the custom MCP
- Compares against last week's numbers from Supabase
- Identifies sequences with reply rates below threshold
- Drafts revised subject lines and message variants for the underperformers
- Posts recommendations to Discord for human review before anything goes live
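The threshold check in that workflow is the only numeric step. A minimal sketch; the stats shape and the 2% reply-rate threshold are illustrative assumptions, not our production numbers:

```python
# Illustrative underperformer filter for the campaign-review workflow.
def underperformers(campaigns, reply_threshold=0.02):
    """Return names of campaigns whose reply rate is below the threshold.

    `campaigns` maps campaign name -> {"sent": int, "replies": int},
    an assumed shape for the stats the Instantly wrapper returns.
    """
    flagged = []
    for name, stats in campaigns.items():
        sent = stats.get("sent", 0)
        if sent == 0:
            continue  # nothing sent yet, nothing to judge
        if stats.get("replies", 0) / sent < reply_threshold:
            flagged.append(name)
    return flagged
```

Claude drafts revised copy only for the flagged names, and nothing goes live without human approval.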
The full .mcp.json configuration
A sanitized version of the configuration file that wires these servers together. In practice this lives in the project root and is loaded automatically when Claude Code starts in that directory.
{
  "mcpServers": {
    "supabase": {
      "command": "npx",
      "args": ["-y", "@supabase/mcp-server-supabase@latest",
               "--supabase-url", "${SUPABASE_URL}",
               "--supabase-key", "${SUPABASE_KEY}"]
    },
    "playwright": {
      "command": "npx",
      "args": ["-y", "@playwright/mcp@latest"]
    },
    "gdrive": {
      "command": "npx",
      "args": ["-y", "@google/mcp-server-gdrive@latest"],
      "env": {
        "GOOGLE_CLIENT_ID": "${GOOGLE_CLIENT_ID}",
        "GOOGLE_CLIENT_SECRET": "${GOOGLE_CLIENT_SECRET}",
        "GOOGLE_REFRESH_TOKEN": "${GOOGLE_REFRESH_TOKEN}"
      }
    },
    "discord": {
      "command": "python3",
      "args": ["./execution/infrastructure/discord_mcp.py"],
      "env": {
        "DISCORD_BOT_TOKEN": "${DISCORD_BOT_TOKEN}",
        "DISCORD_CHANNEL_ID": "${DISCORD_CHANNEL_ID}"
      }
    },
    "instantly": {
      "command": "python3",
      "args": ["./execution/sequencers/mcp_instantly.py"],
      "env": {
        "INSTANTLY_API_KEY": "${INSTANTLY_API_KEY}"
      }
    }
  }
}
All credential references use environment variable interpolation. The .env file stays separate and never gets committed to the repo.
What this actually changes
Before MCP servers, a Claude Code session went like this: ask Claude to write a script, run the script in a terminal, copy the output back into Claude, ask it what to do next. Functional, but every step is a manual handoff.
With MCP servers wired in, a session goes like this: describe the workflow you want, watch Claude execute it across all your systems, get a summary of what happened. The intelligence and the tooling are in the same place.
This isn't theoretical. In a recent campaign setup for a SaaS client, what used to take our team 4-5 hours (pulling the lead list, running enrichment, scoring, loading into Instantly, setting up the campaign) now runs in under an hour. Most of that hour is us reviewing Claude's outputs and approving before anything goes live, which is exactly where our attention should be.
The practical ceiling on MCP-connected workflows isn't the AI. It's the quality of your data and the clarity of your directives. That's why we pair every MCP integration with a written directive defining what the workflow is supposed to do, what good output looks like, and what edge cases to handle.
For more on how we structure those directives, see agentic workflows explained. For what this looks like as a client-facing service, see our go-to-market systems page.
What to build first
If you're setting up MCP servers for a GTM operation and don't know where to start:
Start with your database. Supabase, Postgres, whatever. Connecting Claude Code to your primary data source is the highest-leverage first step; everything else in your stack flows through data.
Add Playwright second if you do any kind of research or lead verification. Browsing the web in context is worth a lot.
Build custom API MCPs last for the tools specific to your stack. Each one takes a few hours to build and pays off quickly for any tool you use daily.
The pattern that works: build one integration, prove the workflow, then add the next. Trying to wire up five systems at once before you've validated any single workflow is how you end up with a complex setup that doesn't actually run reliably.
MCP servers aren't magic. They're pipes. What matters is what you do with the connection.
Frequently Asked Questions
What are MCP servers in Claude Code?
MCP stands for Model Context Protocol -- an open standard that lets Claude Code connect to external tools, databases, and APIs as persistent context providers. Unlike one-off tool calls, MCP servers stay connected across a session, meaning Claude can read from your database, write to a spreadsheet, trigger a browser, and fire an API call in a single coherent workflow. Think of each MCP server as giving Claude a new set of hands that reach into a specific system.
Can Claude Code replace a sales operations team with MCP servers?
Not entirely -- and you should be skeptical of anyone who says otherwise. Claude Code with MCP servers can automate the deterministic, repeatable parts of sales ops: pulling lead data, enriching contacts, generating reports, triggering notifications, and syncing records across systems. The judgment layer -- deciding which accounts to prioritize, interpreting nuanced replies, managing relationships -- still needs a human. What changes is the ratio. One person can now oversee workflows that previously required three or four.
Which MCP server is most valuable for a B2B agency?
For us, Supabase MCP is the highest-leverage single integration because our entire lead and campaign database lives there. When Claude Code can read and write to Supabase directly, it can run multi-step workflows -- checking if a lead already exists, pulling enrichment history, updating status, and pulling campaign results -- without any manual data handoffs. If you only implement one MCP server, make it the one that connects to your primary data store.
How do you secure MCP servers so Claude cannot make destructive changes?
The main controls are: use read-only database credentials for any MCP server where Claude only needs to read data; configure row-level security in Supabase so the MCP user can only touch specific tables; and add explicit safety rules to your CLAUDE.md that require human approval before Claude runs any delete or bulk-update operation. We also maintain a staging environment for testing new workflows before they touch production data.
What is the difference between MCP servers and Claude Code tools?
Claude Code tools are built-in capabilities -- file editing, bash commands, web search. MCP servers are external integrations you configure yourself to connect Claude Code to systems it would not otherwise know about: your CRM, your database, your communication channels, your cold email platform. Tools come with Claude. MCP servers are how you extend Claude into your specific business context.
Want us to build this for you?
30 minutes. We'll tell you what to automate first. No pitch, just the plan.
Book a free audit