6 AI Signal Agents, 1 Selector, 2 Writers: The Split-Signal Architecture That Changed How I Build Every Campaign

6 AI Signal Agents
859 Companies Enriched
12 Warm-Up Domains
3 Transformation Tiers
30K Monthly Capacity

The Client

A London-based customer experience outsourcer founded in 2014 with operations on four continents. Their model breaks from the legacy BPO playbook: instead of throwing headcount at the problem, they combine human expertise with agentic AI to replace FTE-based models with AI-orchestrated business processes. Their delivery is concierge-style: start small, co-create, scale fast.

The numbers behind the positioning are real: 25M+ customer interactions per year, 95% client satisfaction score, 20+ markets served, and a client portfolio that includes recognizable brands in food delivery, e-commerce, and financial services. Revenue sits around GBP 30-40M. Their minimum engagement is a 5-agent team plus a team leader, and they can stand up a new operation in 10 days.

They came to us for a free 1-month MVP engagement. The KPI was specific: deliver 1-3 SQLs in 4 weeks. Success meant a 6-month paid retainer. Failure meant they would walk, and they had walked from providers before.

The Challenge

This client had a constraint profile that tested every layer of my standard build: a free one-month MVP with a hard KPI of 1-3 SQLs, a contractual obligation to exclude every BPO company from outreach, strict geographic exclusions around their existing offices, and a primary brand domain that could not be exposed to deliverability risk.

The Solution

I built a five-layer system: split-signal AI architecture, transformation maturity scoring, contractual BPO exclusion, four-persona sender infrastructure, and a full intelligence pipeline for 859 companies.

1. The Split-Signal Architecture (6 Agents + 1 Selector + 2 Writers)

This is the technical innovation that came out of this build, and it changed how I approach every campaign after it.

The standard approach in AI-personalized cold email is to give one AI agent a company URL and ask it to find a relevant signal and write a personalization line. That approach produces mediocre output because the model is doing two cognitively different jobs simultaneously: research (broad, exploratory, uncertain) and copywriting (precise, constrained, confident). I split the process into three discrete batches with strict role separation:

Batch 1: Six Signal Finder Agents (one job each):

  • hiring_signal (Workforce scaling): CX/Support/Ops hiring at manager level or above
  • expansion_signal (Market growth): New market launches, funding rounds, acquisitions
  • review_signal (Service quality pressure): Trustpilot/G2 scores below 4.0, complaint themes
  • tech_signal (Technology stack): Zendesk, Salesforce, Genesys, Twilio, Workday, Greenhouse, Snowflake, Looker
  • regulatory_signal (Compliance pressure): Data privacy changes, industry regulation shifts
  • seasonal_signal (Demand volatility): Peak season patterns, holiday staffing, seasonal revenue spikes

Batch 2, Signal Selector: A deterministic priority cascade, not AI, picks the strongest signal. Hiring beats expansion beats review beats tech beats regulatory beats seasonal. If all six return null, a tenure-based fallback hook fires. This column is the QA checkpoint, and I can audit exactly which signal was selected and why.
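The deterministic cascade can be sketched as a plain priority-ordered lookup. This is an illustrative reconstruction, not the production Clay formula; the field names mirror the agent names above.

```python
# Hypothetical sketch of the Batch 2 selector: a fixed priority cascade,
# no model call. Order in the list encodes priority (hiring beats
# expansion beats review, and so on).
SIGNAL_PRIORITY = [
    "hiring_signal",
    "expansion_signal",
    "review_signal",
    "tech_signal",
    "regulatory_signal",
    "seasonal_signal",
]

def select_signal(signals: dict, tenure_fallback: str) -> tuple:
    """Return (signal_name, signal_fact) for the strongest non-null signal."""
    for name in SIGNAL_PRIORITY:
        fact = signals.get(name)
        if fact:  # first non-null wins
            return name, fact
    # All six agents returned null: fire the tenure-based fallback hook.
    return "tenure_fallback", tenure_fallback
```

Because selection is a pure function of the six inputs, every pick is reproducible and auditable after the fact, which is what makes this column work as a QA checkpoint.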

Batch 3, Two Writer Agents: personalization_line takes the clean signal fact and writes a 1-2 sentence cold email opener. subject_hook takes the same signal and writes a subject line. No searching. No deciding. Just writing from a pre-selected fact.
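The role separation for the writers comes down to prompt scope. The wording below is my own illustration, not the production prompts; the point is that each writer receives only the pre-selected fact, with no tools and no decisions left to make.

```python
# Illustrative prompt scaffolding for the two Batch 3 writers.
# Each prompt is grounded in exactly one pre-selected fact.
PERSONALIZATION_PROMPT = (
    "Write a 1-2 sentence cold email opener grounded ONLY in this fact. "
    "Do not add claims the fact does not support.\n\nFact: {fact}"
)
SUBJECT_PROMPT = (
    "Write a short subject line grounded ONLY in this fact.\n\nFact: {fact}"
)

def build_writer_inputs(fact: str) -> dict:
    """Produce the two writer prompts from one selected signal fact."""
    return {
        "personalization_line": PERSONALIZATION_PROMPT.format(fact=fact),
        "subject_hook": SUBJECT_PROMPT.format(fact=fact),
    }
```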

This architecture became the reference pattern for every campaign I built after this engagement. It works because it respects how language models actually perform: narrowly scoped tasks with clean inputs produce better output than broad, ambiguous prompts.

2. Three-Tier Transformation Maturity Model

The ICP was not a single profile. It was three distinct buyer types that needed different messaging, different proof points, and different CTAs:

  • Tier 1, Transactional / Execution-Focused: Low transformation readiness. These companies want reliable, cost-efficient CX delivery. Sales motion: efficiency and volume-led. "We handle 25M+ interactions a year, at 95% satisfaction." This buyer does not care about agentic AI. They care about cost-per-resolution.
  • Tier 2, Change-Driven / Innovation-Oriented: Mid-level readiness. These companies are experimenting with new models but lack execution capacity. Sales motion: evolution, outcome uplift, cost-to-serve reduction. "Your current provider throws bodies at the problem, what if the model itself changed?"
  • Tier 3, Technology-First / Outcome-Focused: High maturity but capacity-constrained. They know what they want and need a partner who can execute. Sales motion: technical credibility and execution speed. "We can stand up a team in 10 days, with agentic AI baked in from day one."

Each company was scored into one of these three tiers based on tech stack signals, hiring patterns, and public statements about their CX strategy. The tier determined which email variant they received, not just different copy, but a structurally different value proposition.
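A minimal sketch of that scoring logic, assuming boolean inputs distilled from the three evidence sources named above. The thresholds and input names are illustrative, not the production scoring model.

```python
# Hypothetical tier scoring from three evidence signals:
# tech stack, hiring patterns, and public CX-strategy statements.
def score_tier(has_modern_stack: bool, hiring_ai_roles: bool,
               public_ai_strategy: bool) -> int:
    """Map maturity evidence to a transformation tier (1-3)."""
    points = sum([has_modern_stack, hiring_ai_roles, public_ai_strategy])
    if points >= 2:
        return 3  # technology-first / outcome-focused
    if points == 1:
        return 2  # change-driven / innovation-oriented
    return 1      # transactional / execution-focused
```

The tier then keys directly into which email variant, proof point, and CTA a company receives.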

3. Contractual BPO Exclusion Gate

The BPO exclusion was not a filter I could apply once and forget. BPO companies come in dozens of shapes. I built a multi-layer exclusion system:

  • Industry classification gate: Clay AI column that classified every company's primary business model. Any company whose core offering involved providing outsourced labor was flagged for exclusion.
  • Keyword exclusion on company descriptions: automated scan for terms like "outsourcing," "BPO," "call center," "contact center," "customer service provider," "managed services," and 20+ variants
  • Competitor exclusion list: hard-coded blocklist of the client's named competitors: Helpware, Concentrix, TaskUs, TTEC, TELUS, and others identified during enrichment
  • Geographic exclusion: South Africa, India, Bangladesh, and Egypt excluded entirely (the client has offices in these regions)
  • Manual review flag: any company that passed automated gates but triggered an ambiguity signal was flagged for human review before entering the send queue
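The keyword and competitor layers can be sketched as a simple scan. The production list ran to 20+ term variants; only a few are shown here, and the helper name is my own.

```python
import re

# Illustrative keyword + competitor exclusion layers. Word boundaries
# prevent false hits inside longer words (e.g. "bpo" inside "bpost").
BPO_TERMS = [
    "outsourcing", "bpo", "call center", "contact center",
    "customer service provider", "managed services",
]
COMPETITOR_BLOCKLIST = {"helpware", "concentrix", "taskus", "ttec", "telus"}

_pattern = re.compile(
    r"\b(?:" + "|".join(re.escape(t) for t in BPO_TERMS) + r")\b"
)

def is_excluded(company_name: str, description: str) -> bool:
    """True if the company hits the competitor blocklist or a BPO keyword."""
    if company_name.lower() in COMPETITOR_BLOCKLIST:
        return True
    return bool(_pattern.search(description.lower()))
```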

The Push_Ready boolean gate required clearing the BPO exclusion, passing DM-title validation, holding a verified email, completing all DNC checks, and finishing AI copy generation. No lead entered SmartLead without passing every layer.
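The Push_Ready gate is just a conjunction of those checks. This is a hypothetical composition; the field names mirror the prose, not an actual Clay schema.

```python
# Sketch of the Push_Ready boolean gate: every layer must pass
# before a lead is eligible for SmartLead.
def push_ready(row: dict) -> bool:
    checks = (
        row.get("bpo_excluded") is False,       # cleared BPO exclusion
        row.get("dm_title_valid") is True,      # DM-title validation
        row.get("email_verified") is True,      # verified email
        row.get("dnc_checks_passed") is True,   # all DNC checks complete
        bool(row.get("personalization_line")),  # AI copy generated
        bool(row.get("subject_hook")),
    )
    return all(checks)
```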

4. Four-Persona Sender Infrastructure (12 Domains, 30K Capacity)

  • 12 domains purchased and warmed through Hypertide, a mix of brand variations that provide domain rotation and protect the primary brand domain from reputation risk
  • 4 sender personas, each with a distinct role and tone: VP of Global Customer Solutions (executive authority), Head of Marketing (thought leadership), Global Marketing Manager (market intelligence), Senior Success Manager (client relationship)
  • ~30,000 email capacity per month across the 12 domains, enough headroom for multi-campaign expansion without pushing any single domain past safe sending thresholds
  • No website in signatures: every sender signed with name, title, and company name only. Pure plain text optimized for inbox placement.
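The headroom claim is easy to sanity-check. Assuming the ~30,000/month figure is spread evenly across the 12 domains, and assuming weekday-only sending (my assumption, not a stated SmartLead setting):

```python
# Back-of-envelope per-domain capacity check.
DOMAINS = 12
MONTHLY_CAPACITY = 30_000
SENDING_DAYS = 22  # assumed weekdays per month

per_domain_month = MONTHLY_CAPACITY / DOMAINS     # 2,500 emails/domain/month
per_domain_day = per_domain_month / SENDING_DAYS  # ~114 emails/domain/day
```

Spread further across 4 personas, the per-sender daily volume stays well inside conservative warm-domain thresholds, which is the headroom the bullet above refers to.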

5. 859 Companies Through the Full Intelligence Pipeline

The company table was not a contact list; it was a scored intelligence asset:

  • 859 companies passed through domain normalization, LinkedIn resolution, Apollo enrichment, industry classification, revenue and headcount filtering, BPO exclusion gates, transformation maturity scoring, and 6-agent signal detection
  • Each company exited the pipeline with a transformation tier assignment, a raw_signal fact, a personalization_line, a subject_hook, and a binary Push_Ready status
  • Companies that failed any gate were retained in the table with a clear exclusion reason, for audit purposes and to prevent re-entry in future campaign cycles
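One row of that scored table can be sketched as a record type. Field names follow the prose (raw_signal, Push_Ready, exclusion reason); this is an illustration of the exit shape, not the actual Clay column schema.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of one company's exit state from the pipeline.
@dataclass
class CompanyRecord:
    domain: str
    tier: Optional[int]                # 1-3 transformation maturity tier
    raw_signal: Optional[str]          # fact chosen by the selector
    personalization_line: Optional[str]
    subject_hook: Optional[str]
    push_ready: bool = False
    exclusion_reason: Optional[str] = None  # retained for audit / re-entry block
```

Keeping excluded companies in the same table with a populated exclusion_reason is what makes the audit trail and re-entry prevention possible.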

The Results

  • Companies enriched through full pipeline: 859
  • AI signal agents deployed: 6 (Nano) + 1 selector + 2 writers (Mini)
  • Transformation maturity tiers modeled: 3
  • Domains warmed and configured: 12
  • Sender personas built: 4
  • Monthly email capacity provisioned: ~30,000
  • BPO exclusion layers: 5 (industry, keyword, competitor, geographic, manual)
  • Tech stack signals tracked: 8 (Zendesk, Salesforce, Genesys, Twilio, Workday, Greenhouse, Snowflake, Looker)
  • Email infrastructure status: Warmed and ready
  • Campaign status: Infrastructure complete, pre-send

The technical deliverable was a production-ready outbound engine: 859 companies scored, tiered, and personalized through a 9-agent AI pipeline, backed by 12 warmed domains with 30K monthly capacity, gated by a 5-layer BPO exclusion system built to contractual specification. The architecture separates signal research from copywriting, uses deterministic selection instead of AI judgment for signal priority, and enforces compliance at the data layer rather than relying on copy-level guardrails.

The reason I include this in my portfolio is not the send volume; it is the methodology. The split-signal architecture solved a quality problem I had seen across every previous engagement: AI-generated personalization that sounded plausible but was not grounded in specific, verifiable company facts. Separating "find the signal" from "write the sentence," with a deterministic selector in between, produced a measurable quality improvement that justified the additional complexity. Every campaign I have built since uses this pattern.

Who Is This For?

This approach works best for teams running outbound under hard constraints: contractual exclusion lists, multi-tier ICPs that demand structurally different messaging, and personalization that must be grounded in verifiable company signals.

Tools Used

  • Clay: Data Enrichment & AI Architecture
  • SmartLead: Email Sequencing
  • Apollo: Lead Database
  • Claude AI: AI Signal Detection
  • Hypertide: Domain Warm-Up
  • BetterContact: Email Enrichment
  • Findymail: Email Verification

Ready to Build an AI-Powered Outbound Engine With Real Intelligence Behind Every Email?

See how I can help you build an outbound system where every personalization line is grounded in a real, verifiable company signal.

Book Your Free Strategy Call