
How We Automated 80% of Our Client's Outbound Process (Case Study)

A detailed case study of how Stellar Digital reduced a client's outbound costs by 60% and increased monthly meetings from 15 to 32 by automating their prospecting, enrichment, and outreach pipeline.

March 29, 2026 · 9 min read

In early 2025 a B2B SaaS client came to us with the usual problem: two SDRs working full days, 15 meetings a month, and unit economics that didn't survive a calculator. They were spending roughly $980 per meeting once you factored in salaries, tools, and manager time. They needed more meetings at lower cost, and a system that would scale without hiring a third SDR.

Here's how we built their automated outbound pipeline, what each piece does, and where things stood three months in.


The starting point

Before changing anything, we documented their existing process. What we found:

Their SDR workflow before automation:
  • 2 SDRs, 45 hours per week each
  • Time allocation per SDR per week:
    - 25 hours: manual prospecting (Apollo searches, LinkedIn lookups, building lists in spreadsheets)
    - 8 hours: data cleanup and deduplication
    - 5 hours: writing and sending emails by hand
    - 4 hours: logging activity in HubSpot
    - 3 hours: actual reply handling and meeting booking

Out of 45 hours a week, 38 hours (84%) were going to tasks that require no judgment. Research and data work that a real process handles in minutes per contact was eating entire days.

Their monthly output:
  • Contacts reached: ~800/month (both SDRs combined)
  • Reply rate: 2.8%
  • Positive replies: 22/month
  • Meetings booked from positives: 15/month (7 positives never converted, due to slow follow-up, replies lost in the inbox, or outreach to the wrong contact)
  • Monthly cost: $14,700 (2 SDRs fully burdened plus tools)
  • Cost per meeting: $980

The obvious problems:
  1. 84% of SDR time on admin work
  2. Losing 7 of 22 positive replies because of disorganized inbox management
  3. No A/B testing. Same sequence running unchanged for 6 months.
  4. List quality was inconsistent. Some months the sourcing was tight, others it wasn't.

The architecture we built

We rebuilt their outbound process in five layers. Each one automated a specific chunk of what the SDRs had been doing by hand.

Layer 1: automated lead sourcing

Before. SDRs burned 25 hours a week searching Apollo, filtering results, exporting CSVs, and building lists in Google Sheets. Quality drifted because what they searched for shifted based on mood, available time, and whoever set up the search that week.

After. We defined their ICP tightly: SaaS companies, 50-500 employees, Series A or B, US-based, using Salesforce or HubSpot (a proxy for sales maturity and budget), with at least one SDR or BDR on staff (a signal of active outbound investment).

We built an Apollo-based list-builder that runs weekly and automatically pushes 400-600 new contacts matching these criteria into a staging table in their Supabase database. The script deduplicates against existing contact history and the suppression list before anything is added.
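The deduplication step can be sketched in a few lines. This is a minimal illustration, assuming contacts arrive as dicts with an `email` key; the Apollo fetch and Supabase push plumbing are out of scope, and the function name is ours, not part of any vendor's API.

```python
# A minimal sketch of the weekly staging filter. The fetch/push
# plumbing and the staging table itself are out of scope here.

def stage_new_contacts(candidates, contacted_emails, suppressed_emails):
    """Filter a weekly candidate batch before it enters the staging table."""
    seen = set()
    staged = []
    for contact in candidates:
        email = contact.get("email", "").strip().lower()
        if not email:
            continue  # no address: nothing downstream can verify
        if email in contacted_emails or email in suppressed_emails:
            continue  # already worked, or opted out
        if email in seen:
            continue  # duplicate within this week's batch
        seen.add(email)
        staged.append(contact)
    return staged

batch = [
    {"email": "ana@acme.com"},
    {"email": "ANA@acme.com"},   # in-batch duplicate (case-insensitive)
    {"email": "bob@beta.io"},
    {"email": "carl@gamma.co"},  # already in contact history
]
fresh = stage_new_contacts(batch, contacted_emails={"carl@gamma.co"},
                           suppressed_emails=set())
# fresh now holds only ana@acme.com and bob@beta.io
```

Normalizing to lowercase before comparing matters in practice: the same person exported twice with different capitalization would otherwise slip past the suppression check.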

Time saved. 25 hours per SDR per week down to zero. The list is ready every Monday morning.

List quality. Because the ICP filters are consistent and tight, quality jumped immediately. Contacts per month went from 800 to 1,800 on the same tool budget.

Layer 2: enrichment and verification

Before. SDRs verified emails one at a time in Hunter.io, occasionally pulled direct dials off LinkedIn, and regularly sent to unverified emails that bounced.

After. A waterfall enrichment pipeline:
  1. Step 1: every contact runs through Prospeo for LinkedIn-sourced emails. Finds verified emails for ~65% of contacts.
  2. Step 2: the remaining 35% goes through Hunter.io for domain-pattern matching. Adds another 15-20%.
  3. Step 3: every found email runs through Zerobounce. Anything below an 85% confidence score gets dropped.
  4. Final result: ~75-80% of contacts make it to the verified stage. The rest get discarded rather than risk bounces.
This runs every night on contacts staged the previous day. The SDRs wake up to verified, enriched contacts ready for qualification.

Bounce rate before: 4.2% (domain reputation was suffering). Bounce rate after: 0.6% (well inside safe thresholds).
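The three-step waterfall reduces to a single short function. In this sketch, `find_prospeo`, `find_hunter`, and `zerobounce_score` are stand-ins for the real provider calls, not those vendors' actual client libraries:

```python
# The waterfall: cheap/accurate source first, fallback second,
# verification gate last. Provider callables are illustrative stubs.

CONFIDENCE_FLOOR = 85  # drop anything verification scores below this

def enrich_contact(contact, find_prospeo, find_hunter, zerobounce_score):
    """Return the contact with a verified email, or None to discard it."""
    email = find_prospeo(contact)        # step 1: LinkedIn-sourced lookup
    if email is None:
        email = find_hunter(contact)     # step 2: domain-pattern fallback
    if email is None:
        return None                      # nothing found by either provider
    if zerobounce_score(email) < CONFIDENCE_FLOOR:
        return None                      # step 3: verification gate
    return {**contact, "email": email, "verified": True}
```

The ordering is the point: discarding below-threshold addresses costs some volume up front but protects the sending domains that everything downstream depends on.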

The domain recovery itself, over about 60 days, lifted inbox placement and added roughly 0.8 percentage points to reply rate. Per Zerobounce's 2024 Email Hygiene Report, every 1% reduction in bounce rate improves inbox placement by 2-4%.

Layer 3: AI qualification and personalization

Before. Same 3-4 template emails to everyone. Occasionally an SDR would research a company if the deal looked big enough. It was sporadic.

After. Two AI agents run in sequence on every contact that passes enrichment.

Qualification agent:
  • Reads company data (size, funding, tech stack, recent job postings)
  • Scores the contact against their ICP, 0-100, with reasoning
  • Below 60 routes to a lighter 2-email sequence
  • 60-80 goes to the standard 5-email sequence
  • 80+ goes to a priority sequence and gets flagged for manual research
Personalization agent:
  • For 80+ contacts: pulls company news, recent LinkedIn posts, and job posting changes, then writes a custom opener (e.g., "Saw you just posted a VP of Sales role and expanded your BDR team. The timing on what I'm about to say might be relevant.")
  • For 60-80 contacts: generates an industry-specific opener using company vertical and the usual pain points for that segment
  • Every opener is fact-checked by the system before it lands on the contact record
What it produced:
  • High-priority contacts (80+): 4.1% positive reply rate
  • Standard contacts (60-80): 2.4% positive reply rate
  • Lower-priority (below 60): 0.9% positive reply rate. Still worth sending to, just on a lighter sequence.
Routing breakdown: 18% high-priority, 55% standard, 27% lower-priority.
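The score thresholds above amount to a small routing function. The sequence names here are illustrative; the 0-100 score itself comes from the qualification agent and is out of scope:

```python
# Score-to-sequence routing. Below 60 -> lighter 2-email sequence,
# 60-79 -> standard 5-email sequence, 80+ -> priority sequence plus
# a flag for manual research.

def route_by_score(score):
    """Map an ICP score to (sequence_name, flag_for_manual_research)."""
    if score >= 80:
        return ("priority", True)
    if score >= 60:
        return ("standard", False)
    return ("light", False)
```

Keeping the routing this dumb is deliberate: the judgment lives in the scoring, and the thresholds stay easy to audit and retune as closed-won data comes back.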

Layer 4: sequence execution and A/B testing

Before. One sequence, unchanged for 6 months. No testing. Emails sent manually in batches.

After. Instantly.ai configured with:
  • 4 active sequences running at once, each targeting a different ICP segment
  • 2 variants per sequence: different subject line, different opening paragraph
  • Automated A/B testing: after 100 sends per variant, Instantly picks the winner and routes the rest of the contacts to it
  • Daily sending limits: 35 emails per inbox across 6 sending domains (210/day max)
  • Volume ramp: started at 50/day in week one, hit full speed by week six
Sending schedule:
  • Mon-Thu only (Friday runs 20-30% lower on opens and replies per Salesloft's 2024 data)
  • Send window: 7am-11am in recipient's timezone
  • Auto follow-ups on day 3, 7, 12, and 18 if no response
Sequence structure (standard priority):
  • Email 1: hook on a company-specific angle or industry trigger, 85 words, one soft question
  • Email 2: social proof with a relevant client result, 70 words, direct ask for a 15-minute call
  • Email 3: challenge framing (here's the problem companies like yours face), 90 words, offer to share a framework
  • Email 4: quick bump, 30 words ("Did this get buried?")
  • Email 5: break-up email, 40 words, leaves the door open
Better copy, A/B testing, and improved deliverability (from verified lists and proper warmup) pushed overall reply rate from 2.8% to 4.3%.
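The winner-picking rule from the A/B setup can be sketched as follows. Instantly handles this internally; the data shapes here are illustrative, not its API:

```python
# A/B winner selection: once every variant has 100 test sends, route
# remaining contacts to the variant with the higher reply rate.

TEST_SENDS_PER_VARIANT = 100

def pick_winner(variants):
    """variants: {name: {'sends': int, 'replies': int}} -> name or None."""
    if any(v["sends"] < TEST_SENDS_PER_VARIANT for v in variants.values()):
        return None  # still inside the test phase
    return max(variants,
               key=lambda n: variants[n]["replies"] / variants[n]["sends"])
```

At 100 sends per variant the reply-rate comparison is noisy, so in practice you want a meaningful gap between variants before trusting the winner; a near-tie is a signal to keep both running.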

Layer 5: reply handling and meeting booking

This was the single biggest improvement. The client was losing 32% of positive replies to slow response, inbox chaos, and inconsistent handling.

Before. Two SDRs shared an inbox, read every reply, decided on a response, typed it out, and hoped the prospect was still interested by the time they got back.

After. A reply classification agent reads every incoming reply within 15 minutes and:
  • Positive: posts a Slack notification with the reply and a brief on the prospect's company, auto-sends a response with a Calendly link, marks the deal in HubSpot as "Meeting Requested"
  • Conditional / objection: routes to a pre-written response template (we wrote 8 covering their most common objections)
  • Not interested: pulls from all sequences, adds to suppression list
  • Not the right contact: queries Apollo for the right decision-maker title at that company and creates a task for the SDR to research and re-outreach
  • Out of office: pauses sequences for that contact until 3 days after the return date in the OOO
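The branches above reduce to a dispatch table. The label comes from the AI classifier; the handler names here are illustrative stand-ins for the Slack, HubSpot, Apollo, and Calendly integrations, not real function names from those products:

```python
# Reply-label dispatch. Anything the classifier can't label with
# confidence should land in front of a human, not get guessed at.

def route_reply(label):
    """Map a classifier label to the pipeline action it triggers."""
    actions = {
        "positive": "notify_slack_send_calendly_mark_meeting_requested",
        "objection": "send_matching_objection_template",
        "not_interested": "unenroll_and_suppress",
        "wrong_contact": "find_decision_maker_create_sdr_task",
        "out_of_office": "pause_until_after_return_date",
    }
    return actions.get(label, "escalate_to_sdr")  # unknown -> human review
```

The default branch matters most: a misrouted "not interested" that keeps receiving emails does real deliverability and reputation damage, so ambiguous replies should always fall through to an SDR.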
The Calendly integration auto-books meetings without SDR involvement for 70% of interested responses. The SDR only touches the 30% that involve back-and-forth or unusual scheduling.

Meeting booking rate:
  • Before: 15 meetings from 22 positives (68% conversion)
  • After: 32 meetings from 47 positives (68% conversion)
The gain came from volume: positive replies more than doubled thanks to better targeting and sequences, while fast, consistent reply handling held the positive-to-meeting conversion steady even at more than twice the load.

Results: three months in

The full before/after, three months after the system went live:

Metric                          Before     After      Change
Contacts reached per month      800        1,800      +125%
Overall reply rate              2.8%       4.3%       +54%
Positive replies per month      22         47         +114%
Meetings booked per month       15         32         +113%
Email bounce rate               4.2%       0.6%       -86%
SDR time on admin tasks         84%        18%        -79%
SDR time on actual selling      16%        82%        +413%
Monthly tooling cost            $1,200     $1,850     +$650
Monthly labor cost              $13,500    $13,500    No change
Total monthly cost              $14,700    $15,350    +4%
Cost per meeting                $980       $480       -51%
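The cost-per-meeting rows are derived from the raw counts; a quick check:

```python
# Sanity-checking the derived rows from the raw counts in the table.

meetings_before, meetings_after = 15, 32
total_cost_before, total_cost_after = 14_700, 15_350

cpm_before = total_cost_before / meetings_before    # $980 per meeting
cpm_after = total_cost_after / meetings_after       # ~$480 per meeting
cpm_change = (cpm_after - cpm_before) / cpm_before  # ~-51%
```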
By month six, with sequences tuned to A/B winners and ICP refinements, meetings hit 38 a month and cost per meeting dropped to $367. A 63% reduction from where they started.

What the SDRs do now

This is the part that matters most when you're thinking about headcount.

The two SDRs didn't lose their jobs. They changed jobs. Instead of 38 hours a week on data work, they now spend their time on:

  • Reply handling and warm prospect nurturing: 15 hours/week each. The 30% of interested responses that need a human touch.
  • Pre-call research: 8 hours/week. Going deeper on high-priority accounts before calls.
  • Meeting follow-through: 8 hours/week. Post-call summaries, next-step follow-up, clean handoffs to the AE.
  • Campaign oversight: 4 hours/week. Reviewing A/B test results, flagging weird patterns, making judgment calls on borderline suppression cases.
The improvement isn't just cost. The client's AEs said meetings coming in were better qualified, SDR handoff notes were more detailed (because the SDRs finally had time to write them), and prospects who showed up to calls had warmer context because the reply system engaged them within minutes of their first interest.

What this system cost to build

Being honest: building a system like this isn't instant.

Setup time: 4 weeks (discovery, ICP definition, infrastructure build, sequence writing, testing).

Our build fee: this was part of a retained engagement. For clients who want a similar build from scratch, setup runs $5,000-$8,000 depending on complexity.

Ongoing monthly tools cost: $1,850. Covers Instantly ($97), Apollo ($99), Prospeo ($49), Zerobounce ($49), HubSpot Professional ($450), Calendly Teams ($20), Zapier ($49), Supabase ($25), and AI API costs (~$1,000/month at their sending volume).

Break-even vs the manual process: ROI-positive by month two thanks to higher output and lower cost per meeting.

What would make results even better

Three months in, here's what we're still working on:

ICP refinement. The qualification agent's scoring gets sharper as we feed back which meetings actually became pipeline. We're building a feedback loop where closed-won data adjusts the scoring weights over time.

LinkedIn layer. We haven't added LinkedIn touches yet. A coordinated connection request at step 2 or 3 usually adds 15-25% more meetings from the same contact list.

Intent signal layer. Adding Bombora or G2 intent data to qualification scoring would let us prioritize contacts whose companies are actively researching relevant topics. Early tests suggest another 30-40% lift on positive reply rates.

For how this fits into a broader go-to-market system, see what is a go-to-market engine. For more on the AI agent layer, agentic workflows explained covers the architecture.

If you want a system like this built for your team, our B2B lead generation services page covers how we work.


Frequently Asked Questions

How much can you reduce outbound costs with automation?

Based on our case study with a B2B SaaS client, automating the prospecting, enrichment, personalization, and reply-handling layers of an outbound program reduced per-meeting cost by 63%, from $980 per meeting to $367 per meeting. The primary savings came from eliminating roughly 75 hours per week of manual SDR work on tasks that could be systematized. At the same time, meetings booked increased from 15 to 32 per month, meaning the client got more than twice the output at 37% of the original cost per meeting.

What parts of outbound can be automated?

In a modern outbound program, the following can be automated: lead list sourcing and filtering (Apollo, Sales Navigator exports), data enrichment and email verification (Prospeo, Zerobounce), ICP qualification scoring (AI classifier), personalized opening line generation (AI research agent), email sequence execution and follow-up (Instantly, Smartlead), reply classification and routing (AI agent), meeting booking (Calendly integration), and CRM data sync (Zapier or native integration). The only steps that require humans are writing the initial campaign strategy, handling positive replies that need live conversation, and running the actual sales calls.

How long does it take to set up an automated outbound system?

For a client starting from scratch with no existing outbound infrastructure, a full automated outbound system takes 3-4 weeks to set up properly: one week for ICP definition and list building, one week for domain setup, DNS configuration, and inbox warmup ramp, one week for sequence writing, A/B variant creation, and tool configuration, and then a launch in week four with a gradual volume ramp-up over the following two weeks. Expect the first month to be optimization-heavy as you learn which segments and messages produce the best results.

What results should you expect from an automated outbound program?

Based on our client programs, a well-built automated outbound system targeting a validated ICP typically produces: 2-4% overall reply rate, 0.5-1.5% positive reply rate, 15-40 meetings per month (depending on list size and ICP tightness), and $300-$700 cost per meeting. These ranges vary by industry, offer strength, and target company size. Mid-market SaaS companies targeting operations, sales, or marketing leaders tend to see the higher end of these ranges.

Do you need SDRs if you automate outbound?

Automation replaces the administrative and research work of SDRs, not the judgment work. In our client's case, two SDRs went from spending 84% of their time on prospecting and data tasks to spending 82% of their time on reply handling, meeting prep, and pre-call research. The team size stayed the same; the output more than doubled. For companies without any SDRs, a lean automated system can run with a single person managing campaigns 5-8 hours per week.

Want us to build this for you?

30 minutes. We'll tell you what to automate first. No pitch, just the plan.

Book a free audit