We Replaced a $5k/Month SaaS Stack With Claude Code. Here's What Changed.
A transparent breakdown of how Stellar Digital cut monthly software costs from $5,200 to under $300 by replacing enrichment orchestration, research, and reporting tools with Claude Code -- and the real tradeoffs that came with it.
We were paying $5,200 a month in SaaS tools to run our GTM operation. Today we spend under $300 a month on the AI and data layer that runs the same workflows -- better output quality, more customization, and full ownership of the code. Here is exactly what changed, what we kept, and the tradeoffs nobody talks about when they write "we replaced X with AI" posts.
This is not a pitch for Claude Code. It is a transparent breakdown of a real migration, including the parts that were harder than expected.
The Old Stack and What It Cost
Before the migration, our monthly SaaS bill looked like this:
| Tool | Monthly Cost | What We Used It For |
|---|---|---|
| Clay | $500 | Enrichment orchestration, waterfall logic |
| Apollo | $400 | Contact data and company search |
| Zapier | $200 | Workflow automation, system connections |
| Make | $100 | More complex multi-step automations |
| Prospeo + Hunter | $300 | Email finding and verification |
| Clearbit | $200 | Company enrichment data |
| Manual research | $1,000 | 20+ hours/month at $50/hr average |
| Total | $2,700/month | |
Add the infrastructure the table leaves out -- Instantly for sending, Supabase, domain services -- and the total real cost was closer to $3,500/month.
The frustration was not the money, though. It was the ceiling. Clay is powerful, but it runs on Clay's logic. When we needed enrichment to do something Clay did not support -- custom scoring based on enrichment results, cross-referencing against our own historical data, conditional branching based on company type -- we hit walls. Zapier and Make have the same problem at a different layer. These tools are designed for common use cases. Our use cases were increasingly uncommon.
What We Replaced and What We Kept
Before going into the numbers, I want to be clear about what this migration actually is. We did not rip out our entire stack and replace it with an AI. We replaced specific components where custom code gave us better outcomes than SaaS tools. The distinction matters.
What Claude Code Replaced
**Enrichment orchestration (Clay, $500/month):** The enrichment waterfall -- try source A, if no result try source B, apply verification, write to database -- is fundamentally a decision tree. Claude Code, backed by a set of Python scripts that call each enrichment API, replicates this with more flexibility and no per-row pricing. Our enrichment scripts call the same APIs Clay was calling, but with custom logic we control entirely.

**Workflow automation (Zapier + Make, $300/month):** Most of our Zaps and Make scenarios were data transformation and routing tasks -- take output from script A, clean it, route to destination B. Claude Code with a well-configured directive structure handles these natively. The "if-then" logic that required visual workflow builders now lives in Python scripts that are faster, more reliable, and easier to debug.

**Manual research (20+ hours/month):** This was the biggest win. The manual research line -- reviewing companies, checking websites, writing personalization notes -- is now handled by Claude Code using Playwright for web research and our enrichment scripts for data retrieval. Output quality is higher than what junior researchers were producing because the process is consistent. No variance in how thoroughly each lead gets researched.

**Report generation:** Our weekly client reports used to require someone pulling data from multiple sources, pasting into a template, writing a summary paragraph, and sending. Now Claude Code pulls from Supabase, formats into a Google Sheet, writes the summary, and drops a link in Discord. Nobody touches it.

What We Kept
**Instantly ($300/month, unchanged):** We still use Instantly for cold email sending. Claude Code is not an email sending platform -- it does not manage inbox rotation, domain health, sending schedules, or deliverability monitoring. Instantly does these things well and the cost is reasonable for what it provides. Replacing dedicated email infrastructure with a general-purpose AI would be the wrong trade.

**Supabase ($50/month):** We still need a database. Supabase is cheap, reliable, and the Supabase MCP server makes it natively accessible from Claude Code sessions. This is not a tool we are trying to replace -- it is infrastructure.

**Domain infrastructure (unchanged):** DNS, warmup services, inbox rotation -- these are not things Claude Code touches. Email deliverability is its own discipline.

**Apollo ($400/month, partial):** We reduced our Apollo usage significantly but kept a lower-tier plan for prospecting and company search. The contact database is genuinely hard to replicate from scratch. We use Apollo more surgically now -- for the initial company and contact pull -- rather than as the center of the enrichment workflow.

The New Cost Structure
| Component | Monthly Cost |
|---|---|
| Claude API tokens | ~$175 |
| Supabase Pro | $50 |
| Apollo (reduced tier) | $99 |
| Prospeo + Hunter | $150 |
| Instantly | $300 |
| Total | ~$775/month |
The comparison that matters most: we eliminated $1,000/month in manual research labor and $800/month in orchestration tooling (Clay, Zapier, Make) and replaced both with approximately $175/month in Claude API tokens and about 40 hours of upfront development to build the execution scripts.
That development investment paid back in month two.
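The payback math is simple enough to sketch. Using the article's own figures ($50/hr blended rate from the manual-research line, 40 hours of upfront build time, $1,800/month of eliminated labor and tooling, ~$175/month in tokens):

```python
# Payback sketch using the numbers quoted above.
HOURLY_RATE = 50               # blended rate from the manual-research line
UPFRONT_HOURS = 40             # upfront development investment
MONTHLY_SAVINGS = 1000 + 800   # manual research + orchestration tooling (Clay, Zapier, Make)
MONTHLY_TOKEN_COST = 175       # Claude API tokens

upfront_cost = UPFRONT_HOURS * HOURLY_RATE                  # $2,000
net_monthly_savings = MONTHLY_SAVINGS - MONTHLY_TOKEN_COST  # $1,625/month
payback_months = upfront_cost / net_monthly_savings

print(f"Payback: {payback_months:.2f} months")
```

At roughly 1.2 months, the investment is recovered partway through month two, which matches the timeline above.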
The Honest Tradeoffs
This is the part most "we replaced X with AI" posts skip. Let me be direct about what we gave up.
Pro: 10x cheaper with better customization
The cost reduction speaks for itself. But the customization improvement is arguably more valuable. We can now build enrichment logic, scoring models, and reporting workflows that are specific to our operation -- not constrained by what a SaaS tool's feature set supports. We added a cross-client deduplication check last month that would have been impossible in Clay. It took about three hours to build in Python.
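For flavor, here is a minimal sketch of what a cross-client deduplication check looks like. The table layout and field names are hypothetical stand-ins -- the real version queries Supabase -- but the logic is the same: drop any new lead whose email already exists under another client.

```python
# Hypothetical in-memory stand-in for per-client contact tables in Supabase.
def dedupe_across_clients(new_leads, existing_by_client, exclude_client):
    """Drop leads whose email already appears under any *other* client."""
    seen = {
        email.lower()
        for client, emails in existing_by_client.items()
        if client != exclude_client
        for email in emails
    }
    return [lead for lead in new_leads if lead["email"].lower() not in seen]

existing = {
    "client_a": ["jane@acme.com"],
    "client_b": ["bob@globex.com"],
}
fresh = [{"email": "Jane@acme.com"}, {"email": "new@initech.com"}]
print(dedupe_across_clients(fresh, existing, exclude_client="client_b"))
# keeps only the lead not already known to another client
```

Per-row pricing aside, this kind of cross-table check is exactly the category of logic a single-table enrichment tool cannot express.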
Con: Technical maintenance is now our responsibility
Every SaaS tool we replaced was handling its own reliability. When Zapier's API changed, Zapier fixed it. When Clay added a new enrichment source, Clay shipped it. Now when an enrichment API changes its response format, we update the script. When a library breaks, we fix it. The maintenance burden is real and ongoing.
We estimate 4-6 hours per month of maintenance and improvement work on the execution scripts. At our cost basis, this is still a favorable trade. For an agency without technical capacity, it might not be.
Pro: We own the code
When you build your enrichment and automation logic in SaaS tools, you are writing your operational playbook in someone else's format. If they change pricing, shut down, or remove a feature, you lose your workflow. Our Python scripts are in a version-controlled repository. We can run them anywhere, hand them off to any developer, and extend them without asking permission.
Con: Not "set and forget"
SaaS tools are designed to be configured once and monitored occasionally. Claude Code requires active engagement. Someone on the team needs to understand how the system works, catch when outputs drift from expectations, and maintain the directive library that governs Claude's behavior. The system is more powerful than what we replaced, but it demands more hands-on involvement.
Pro: Can do things SaaS tools literally cannot
The biggest unreplaceable advantage: cross-tool orchestration with custom logic. Our enrichment workflow now pulls from three sources, cross-references against our historical campaign data to avoid re-enriching contacts we already have, applies a custom ICP scoring model, and updates four different tables in a single coherent workflow. No combination of SaaS tools could do this cleanly. Everything would require manual handoffs between systems.
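The custom ICP scoring step mentioned above can be sketched in a few lines. The field names and weights here are invented for illustration, not our production model:

```python
# Illustrative ICP scoring pass -- fields and weights are hypothetical.
def icp_score(company: dict) -> int:
    score = 0
    if 50 <= company.get("headcount", 0) <= 500:      # target company size band
        score += 30
    if company.get("industry") in {"saas", "agency", "ecommerce"}:
        score += 30
    if company.get("verified_email"):                 # enrichment confirmed a contact
        score += 20
    if company.get("prior_campaign_reply"):           # historical engagement signal
        score += 20
    return score

lead = {"headcount": 120, "industry": "saas", "verified_email": True}
print(icp_score(lead))
```

The point is not the weights -- it is that the score can draw on enrichment output *and* historical campaign data in one pass, which is the part no visual workflow builder handles cleanly.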
Con: Learning curve is steep
Getting Claude Code to behave reliably requires building out a proper CLAUDE.md, a directive library, and a tested execution script library. This is weeks of work, not days. Teams that approach Claude Code as a drop-in replacement for their SaaS tools will be disappointed. It requires building a system, not just installing a tool.
A Real Example: Enrichment Before and After
To make this concrete, here is how a lead enrichment run worked before and after the migration.
Before (Clay-based):
- Export new leads from Apollo as CSV
- Import CSV into Clay table
- Clay runs waterfall enrichment (Prospeo → Hunter → Clearbit)
- Manually review Clay output for errors
- Export enriched leads as CSV
- Import CSV into our CRM/database
- Manually flag low-confidence rows for follow-up
After (Claude Code):
- New leads imported directly to Supabase via `execution/lead_sourcing/import_and_prepare_leads.py`
- Claude reads the enrichment directive and runs `execution/enrichment/enrich.py` against the Supabase records
- Script calls Prospeo, Hunter, and Clearbit APIs in waterfall sequence
- Results written back to Supabase automatically
- Low-confidence rows flagged in a `needs_review` field
- Claude posts a summary to Discord: "Enrichment complete -- 200 leads, 163 verified, 18 flagged, 19 skipped"
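The waterfall step at the center of that flow is a short loop. The provider functions below are stubs for demonstration; the real script wraps the Prospeo, Hunter, and Clearbit HTTP APIs and writes results to Supabase:

```python
# Sketch of a waterfall enrichment step. Provider callables are stand-ins.
def waterfall(lead, providers, min_confidence=0.8):
    """Try each provider in order; flag the row for review if confidence is low."""
    for name, lookup in providers:
        result = lookup(lead)
        if result is None:
            continue  # no hit -- fall through to the next source
        result["source"] = name
        result["needs_review"] = result["confidence"] < min_confidence
        return result
    return {"source": None, "needs_review": True}  # exhausted all sources

# Stub providers: Prospeo misses, Hunter resolves with high confidence.
providers = [
    ("prospeo", lambda l: None),
    ("hunter", lambda l: {"email": f"j.doe@{l['domain']}", "confidence": 0.93}),
    ("clearbit", lambda l: {"email": "fallback@example.com", "confidence": 0.5}),
]
print(waterfall({"domain": "acme.com"}, providers))
```

Because the order, fallbacks, and review threshold are plain Python, changing the logic is an edit and a commit rather than a support ticket.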
Who Should Consider This Switch
Good candidates:
- Agencies doing $1M+ ARR with high-volume outbound
- Teams with at least one person who can write and maintain Python
- Operations where the SaaS tool ceiling is already being hit -- you need custom logic the tools do not support
- Companies paying per-row or per-seat fees that scale painfully with volume
Poor candidates:
- Agencies under $500k ARR (the setup cost does not justify the savings)
- Teams with no technical capacity (no developer, no one who can debug scripts)
- Operations running standard workflows that existing SaaS tools handle well
- Anyone looking for a "plug it in and walk away" solution
What to Do If You Want to Explore This
If you want to see whether this approach makes sense for your operation, I would suggest:
- Audit your actual SaaS bill -- not just the tool costs but the manual labor hours spent managing and working around tool limitations. That number is usually higher than people expect.
- Identify the highest-friction workflows -- where are you spending the most time on manual handoffs, export-import cycles, or fighting tool limitations? Those are the best candidates for replacement.
- Build one script and test it -- before committing to a full migration, pick one workflow and build the replacement. Run it in parallel with your existing tool for 2-3 weeks and compare output quality, reliability, and time investment.
- Model the real ROI -- include setup time, ongoing maintenance, and the value of customization, not just the monthly cost delta.
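For the parallel-run step, you need some way to score agreement between the two systems. One simple approach (field and key names here are illustrative) is to key both output sets by lead email and measure how often a chosen field matches:

```python
# Compare parallel outputs from the SaaS tool and the replacement script.
def match_rate(saas_rows, script_rows, key="email", field="company"):
    """Fraction of shared leads where both systems agree on `field`."""
    saas = {r[key]: r.get(field) for r in saas_rows}
    script = {r[key]: r.get(field) for r in script_rows}
    shared = saas.keys() & script.keys()
    if not shared:
        return 0.0
    agree = sum(1 for k in shared if saas[k] == script[k])
    return agree / len(shared)

a = [{"email": "x@a.com", "company": "A"}, {"email": "y@b.com", "company": "B"}]
b = [{"email": "x@a.com", "company": "A"}, {"email": "y@b.com", "company": "B2"}]
print(match_rate(a, b))
```

Run this per field (email found, company name, title) over the 2-3 week window; disagreements are where you inspect quality by hand.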
The switch was worth it for us. The $4,400/month in savings is real. So is the four weeks it took to build the system properly. Both things can be true.
Frequently Asked Questions
Can Claude Code actually replace Clay for B2B enrichment?
For enrichment orchestration -- the logic layer that decides which sources to try, in what order, and what to do when one source fails -- yes. Claude Code can replicate Clay's waterfall enrichment by calling the same underlying APIs (Prospeo, Hunter, Apollo, Clearbit) through Python scripts, applying custom logic at each step, and writing results directly to a database. What you lose is Clay's visual interface and the ease of setup for non-technical users. What you gain is full customization, no per-row pricing, and logic that can be as complex as you need it to be.
What is the real monthly cost of running Claude Code for a GTM agency?
Our costs break down to roughly: $150-200/month in Claude API tokens (varies with volume), $50/month for Supabase (Pro plan), and existing subscriptions to tools Claude Code does not replace (email sending infrastructure, domain management). Total: $200-250/month for the AI and data layer. This compares to the $2,700/month we were paying before the migration -- Clay, Apollo, Zapier, Make, various enrichment APIs, and roughly $1,000/month in manual research time.
How long does it take to replace a SaaS stack with Claude Code?
The migration took us about six weeks of part-time effort to do properly. The first two weeks were building and testing the core enrichment and scoring scripts. Weeks three and four were rebuilding the reporting workflows and integrating with our database. The final two weeks were running both systems in parallel to verify output quality matched or exceeded what the SaaS tools were producing. The timeline depends heavily on how complex your existing workflows are and whether you have someone technical to build the execution layer.
What are the biggest risks of replacing SaaS tools with Claude Code?
Three main risks: First, Claude Code requires ongoing technical maintenance. SaaS tools handle their own updates, API changes, and reliability -- you own all of that with custom code. Second, there is no support line. When something breaks at 2am, you debug it yourself. Third, the learning curve is real. Teams without technical capacity will struggle. The sweet spot is an agency with at least one person who can write Python, understands REST APIs, and is willing to build and maintain custom tooling.
Should every agency replace their SaaS stack with Claude Code?
No. For agencies under $500k ARR running standard outbound workflows, off-the-shelf SaaS tools are almost certainly the right choice. The setup cost and ongoing maintenance burden of a custom Claude Code stack does not justify the savings at small scale. The calculation changes as you grow: the per-seat and per-row pricing models of most SaaS tools scale linearly with volume, while a Claude Code API stack scales much more efficiently. The crossover point in our experience is around $1M-$2M ARR with high outbound volume.
Want us to build this for you?
30 minutes. We'll tell you what to automate first. No pitch, just the plan.
Book a free audit