Quick answer. Conventional procurement logic says larger outsourcing contracts always have better unit economics. In outbound voice in 2026, the opposite is often true. A 10-seat pilot lets buyers tune scripts, lists, routing, and compliance posture before committing capital, and on actual cost per booked transfer its unit economics often beat a 50-seat blanket contract. Here is why, what to test, and the four KPIs that determine whether scaling is justified.
Procurement teams default to volume thinking. Larger contract, lower hourly rate, better unit economics. That heuristic still works in steady-state customer service operations. It stopped working in outbound voice somewhere between the September 2024 FCC declaratory ruling under CG Docket No. 02-278 and the persistent industry-typical voice attrition data published by ContactBabel and the Quality Assurance and Training Connection (QATC). What pencils today is a small, tuned, observable program. What does not pencil is a 50-seat blanket contract sized off forecast and locked into annual prepay. This piece walks through the assumption, the six things a pilot actually tests, the modeled unit economics, the four KPIs that matter, the scale triggers, and the pricing structure that aligns vendor and buyer.
Why the "bigger is cheaper" assumption breaks in 2026
The headline rate per seat falls when you commit to more seats. That part is real. Three costs scale faster than the rate decreases, and they swamp the discount on regulated outbound voice.
First, compliance load. The September 2024 FCC declaratory ruling under CG Docket No. 02-278 expanded location-disclosure obligations for offshore-originated calls to US consumers in regulated verticals. Whether disclosure language, list scrubbing, and call monitoring cost the program $X or $5X depends on how cleanly the scripts and routing are tuned. A 50-seat program calling at scale on un-tuned scripts amplifies every compliance defect by a factor of five versus a 10-seat program running the same scripts. Defects do not get cheaper per call at volume. They get more expensive per call at volume because the regulatory and reputational exposure compounds.
Second, attrition replacement. ContactBabel's US Contact Center Decision-Makers' Guide places industry voice attrition broadly in a 45 to 60 percent annualized band, and QATC industry data sits in a similar range. Industry-typical replacement cost per fronter (recruit, screen, train, ramp, lost productivity) is commonly modeled at $3,000 to $5,000. On a 10-seat program at the offshore band, that is roughly $15,000 to $30,000 of annualized churn cost. On a 50-seat program at the same band, it is $75,000 to $150,000, plus the supervisor and trainer load to actually execute the replacements. The unit rate decrease at 50 seats has to absorb that delta before the buyer sees any net savings.
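The attrition arithmetic above is simple enough to sketch. The bands are the public benchmarks cited in the text; note the low ends print slightly under the rounded figures quoted above.

```python
def attrition_cost_range(seats, attrition=(0.45, 0.60), replace_cost=(3_000, 5_000)):
    """Annualized (low, high) replacement cost for a fronter program,
    using the ContactBabel/QATC 45-60% attrition band and the commonly
    modeled $3K-$5K per-replacement cost cited in the text."""
    return (seats * attrition[0] * replace_cost[0],
            seats * attrition[1] * replace_cost[1])

for seats in (10, 50):
    low, high = attrition_cost_range(seats)
    print(f"{seats} seats: ${low:,.0f} to ${high:,.0f} per year")
```

At 10 seats the band is roughly $13.5K to $30K; at 50 seats, $67.5K to $150K, before counting the supervisor and trainer load to execute the replacements.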
Third, ramp risk. The 50-seat program is sized on a contact-rate and pre-qualification assumption the buyer has not yet measured. If the actual pre-qualification rate is 30 percent below forecast, the buyer is now paying for 50 seats to produce the throughput 35 seats would have delivered at the forecast rate. A 10-seat pilot tests the assumption with three weeks of live data before the next 40 seats are committed.
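The ramp-risk arithmetic is one multiplication. A minimal sketch, assuming the shortfall applies uniformly across seats:

```python
def effective_seats(committed, prequal_shortfall):
    """Seats' worth of forecast throughput actually produced when the
    measured pre-qualification rate lands `prequal_shortfall` below forecast."""
    return committed * (1 - prequal_shortfall)

# A 30% pre-qualification miss strands 15 seats at 50 but only 3 at 10.
print(50 - effective_seats(50, 0.30))  # 15.0 over-staffed seats
print(10 - effective_seats(10, 0.30))  # 3.0 over-staffed seats
```

The miss is the same percentage in both programs; only the committed seat count determines how much capital it strands.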
What a properly structured 10-seat pilot actually tests
A pilot is not a small contract. A pilot is a measurement instrument. Six things should be under explicit test:
- Script tuning against live objections. The opening, the qualification questions, the rebuttal tree, and the disclosure language only survive contact with real prospects. Two to three iterations in the first three weeks is normal.
- Dialing posture. Time-of-day windows, day-of-week cadence, list rotation, and re-dial frequency are all measurable inputs. The wrong posture suppresses contact rate before any other variable matters.
- Warm-transfer routing. The transfer from the offshore fronter to the client's licensed US closer needs to land cleanly. Hold time, transfer warmth (full agent-to-agent handoff versus blind transfer), and the closer's acceptance rate all depend on routing setup.
- QA cadence. Call monitoring frequency, scorecard calibration, and feedback loop tightness determine how fast the program converges. A pilot that does not move QA from weekly to daily within two weeks is not learning.
- Compliance review. Every disclosure, every consent capture, every state-level wrinkle (TCPA, FCC CG Docket 02-278, vertical-specific requirements) gets stress-tested at low volume before the volume is real.
- Vendor cultural fit. Cadence of reporting, responsiveness to objection-pattern shifts, willingness to swap an underperforming fronter inside 14 days. These are observable in a pilot and unrecoverable in a 50-seat contract.
Each of these costs money to test at scale. Each can be tested cheaply at 10 seats.
Unit economics: 10 seats vs 50 seats actually modeled
The comparison below uses public benchmarks (ContactBabel attrition band, QATC turnover ranges, US Bureau of Labor Statistics contact-center occupational data on US wage floors) and treats the rate inputs as industry ranges, not CFG-specific numbers. The point is the shape of the curve, not a specific quote.
| Component | 10-Seat Pilot | 50-Seat Blanket |
|---|---|---|
| Headline hourly rate | Baseline | Lower by 8 to 15 percent |
| Annualized attrition cost (45 to 60 percent industry band, $3K to $5K per replacement) | $15K to $30K | $75K to $150K |
| Ramp risk if pre-qualification rate is 30 percent below forecast | 3 over-staffed seats max | 15 over-staffed seats |
| Compliance defect amplification | 1x | 5x |
| Time to first measured pre-qualification rate | 14 to 21 days | 14 to 21 days, but committed to 5x the seat count |
| US in-house comparison (BLS-derived loaded US wage floor) | $30K to $50K per seat per year below US in-house | $30K to $50K per seat per year below US in-house, multiplied by ramp risk |
Versus US in-house, both options save money. Between the two outsourced structures, the 50-seat program saves on hourly rate and loses on attrition exposure, ramp risk, and compliance defect amplification. The net depends entirely on whether the pre-qualification, transfer acceptance, and compliance assumptions are accurate. A pilot tests them. A blanket contract bets on them.
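The shape of that trade can be made concrete. The sketch below nets the 8 to 15 percent rate discount against the extra attrition exposure from the table. The baseline hourly rate and annual productive hours are placeholder assumptions for illustration, not quotes from any vendor; at these inputs the discount can cover the attrition delta, but ramp risk and compliance amplification still sit on top, which is the point.

```python
# Placeholder assumptions (not vendor quotes): offshore loaded hourly
# rate and productive hours per seat per year.
RATE = 12.0
HOURS = 2_000

def rate_savings(seats, discount):
    """Annual labor savings from a volume discount at `seats` seats."""
    return seats * HOURS * RATE * discount

# Discount band from the table: 8-15% at 50 seats.
disc_low, disc_high = rate_savings(50, 0.08), rate_savings(50, 0.15)

# Extra attrition exposure at 50 seats vs 10, per the table rows above:
# ($75K to $150K) minus ($15K to $30K).
attr_low, attr_high = 75_000 - 30_000, 150_000 - 15_000

print(f"Rate savings at 50 seats:  ${disc_low:,.0f} to ${disc_high:,.0f}")
print(f"Added attrition exposure:  ${attr_low:,.0f} to ${attr_high:,.0f}")
```

Whether the left column beats the right depends entirely on the untested pre-qualification and transfer-acceptance assumptions, which is what the pilot measures.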
The 4 metrics that matter in a pilot
Forget cost per hour. Forget cost per call. In an outbound voice pilot built around the fronter model (offshore agent pre-qualifies, warm-transfers to the client's licensed US staff to close), four metrics determine whether the program is working:
- Pre-qualification rate. Percent of contacted prospects that meet the client's filter criteria. Drives the cost-per-qualified-prospect denominator.
- Transfer acceptance rate by the client's licensed staff. Percent of warm transfers the licensed US closer accepts as workable. This is the single best leading indicator of pilot health, because it is the licensed closer's blind verdict on fronter quality.
- Post-transfer disposition close rate. Percent of accepted transfers that close into a sale, enrollment, or downstream qualified outcome. This is the revenue truth metric.
- Compliance error rate. Percent of monitored calls with a flagged disclosure, consent, or scripting deviation. A program that hits the first three metrics and fails this one is a liability, not an asset.
Per-transfer cost and cost per closed deal are derived outputs of these four. Optimize the four inputs and the derived costs land where they need to.
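The derivation is mechanical. A sketch with hypothetical pilot numbers; the spend, contact count, and rates below are illustrative placeholders, not benchmarks:

```python
def derived_costs(monthly_spend, contacts, prequal_rate, accept_rate, close_rate):
    """Derive the cost outputs from the pilot KPIs. Compliance error rate
    gates the program separately; it does not enter the cost math."""
    qualified = contacts * prequal_rate   # pre-qualification rate
    accepted = qualified * accept_rate    # transfer acceptance rate
    closed = accepted * close_rate        # post-transfer close rate
    return {
        "cost_per_qualified": monthly_spend / qualified,
        "cost_per_accepted_transfer": monthly_spend / accepted,
        "cost_per_closed_deal": monthly_spend / closed,
    }

# Hypothetical 10-seat month: $25K spend, 4,000 contacts,
# 12% pre-qual, 80% transfer acceptance, 25% post-transfer close.
costs = derived_costs(25_000, 4_000, 0.12, 0.80, 0.25)
for name, value in costs.items():
    print(f"{name}: ${value:,.2f}")
```

Improving any one of the three input rates lowers every derived cost downstream of it, which is why the pilot tunes the rates rather than chasing the cost figures directly.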
When to scale beyond 10 seats
Three explicit triggers should clear before the next 20 to 50 seats are committed. Skipping any one of them produces the over-build the pilot was designed to prevent.
- The four pilot KPIs stabilize for two consecutive 30-day windows. Pre-qualification rate, transfer acceptance, close rate, and compliance error all need to land inside the modeled band twice in a row. One good month is variance. Two consecutive months is a signal.
- Per-transfer cost lands inside or below the modeled CAC ceiling. The buyer's existing customer acquisition cost ceiling is the budget. The pilot needs to produce qualified transfers below that ceiling on a unit basis before scaling.
- The licensed closing team has confirmed bandwidth. Doubling fronter capacity doubles warm-transfer volume. If the client's licensed US closers cannot accept the next tranche, the additional fronters generate dropped transfers and frustrated prospects, not revenue. This trigger gets skipped more than any other.
When the three triggers all clear, the buyer scales with measured assumptions. When one or two clear, the buyer fixes the gap before scaling. When zero clear, the pilot is the answer and the seat count stays at 10.
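The three-trigger gate reduces to a short decision function. A sketch, assuming KPI stability is tracked as a per-window boolean per metric; the data shape is an assumption for illustration, not a prescribed reporting format:

```python
def ready_to_scale(kpi_windows, per_transfer_cost, cac_ceiling, closers_have_bandwidth):
    """True only when all three scale triggers clear.
    `kpi_windows`: consecutive 30-day windows, each a dict mapping the
    four pilot KPIs to whether they landed inside the modeled band."""
    # Trigger 1: all four KPIs in band for the last two consecutive windows.
    two_stable = len(kpi_windows) >= 2 and all(
        all(window.values()) for window in kpi_windows[-2:]
    )
    # Trigger 2: per-transfer cost at or below the modeled CAC ceiling.
    cost_clears = per_transfer_cost <= cac_ceiling
    # Trigger 3: confirmed bandwidth on the licensed closing team.
    return two_stable and cost_clears and closers_have_bandwidth

# One good month is variance; two consecutive months is a signal.
month_1 = {"prequal": True, "accept": True, "close": True, "compliance": True}
month_2 = dict(month_1)
print(ready_to_scale([month_1], 42.0, 55.0, True))            # False: one window
print(ready_to_scale([month_1, month_2], 42.0, 55.0, True))   # True: all clear
print(ready_to_scale([month_1, month_2], 42.0, 55.0, False))  # False: closers at capacity
```

The hard AND across all three is deliberate: a partial pass means fix the gap, not scale anyway.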
Pilot pricing structure that aligns vendor and buyer
A 10-seat pilot only works if the commercial structure aligns the vendor with the buyer. Three structural choices matter.
- No setup fee. A setup fee compensates the vendor for ramp cost regardless of pilot outcome. That is the wrong incentive. CFG runs pilots with no setup fee, which means CFG is on the hook to perform.
- No annual prepay. Annual prepay protects vendor revenue from underperformance. That is also the wrong incentive. CFG pilots run month-to-month, which means CFG keeps the engagement by hitting the four pilot KPIs every 30 days.
- Live in 7 days from signed pilot. Speed-to-test matters. The faster the pilot produces measured data, the faster the buyer can decide to scale, fix, or pause. CFG's standard pilot ramp is 7 days from signed pilot agreement.
CFG runs fronter-only rooms in Jamaica, Saint Lucia, Trinidad, Belize, and Colombia, with HQ in Toronto. We pre-qualify, we do not close, and we warm-transfer regulated work to the client's licensed US agents. The CFG outsourcing calculator runs a 60-second comparison of a 10-seat pilot against your current vendor's loaded hourly.
Get the pilot design playbook
When we publish updated pilot KPI ranges or new compliance posture notes, we send a short methodology update. No pitch. Unsubscribe anytime.
Sources
- ContactBabel. The US Contact Center Decision-Makers' Guide. Recent editions, industry attrition and turnover benchmarks.
- Quality Assurance and Training Connection (QATC). Industry attrition, turnover, and replacement-cost benchmark data.
- US Bureau of Labor Statistics. Occupational Employment and Wage Statistics, contact-center and customer-service representative occupational categories.
- Federal Communications Commission. Declaratory Ruling, CG Docket No. 02-278. September 2024. Location-disclosure obligations for offshore-originated calls to US consumers.
- Telephone Consumer Protection Act (TCPA) statutory framework and related FCC orders, on consent capture and outbound voice compliance posture.
Frequently Asked Questions
Why does a 10-seat outbound pilot often beat a 50-seat blanket contract in 2026?
In outbound voice, compliance load and attrition replacement scale faster than headline hourly rate decreases. A 10-seat pilot lets the buyer tune scripts, list segmentation, transfer routing, QA cadence, and compliance posture against live data before committing capital. On actual cost per booked transfer (the only metric that matters to revenue), a tuned 10-seat program often beats a 50-seat program that was sized off forecast rather than measured pre-qualification and transfer-acceptance rates. The 50-seat program also carries five times the attrition exposure during ramp.
What does a properly structured 10-seat pilot actually test?
Six things. Script tuning against live objections. Dialing posture (cadence, time-of-day windows, list rotation). Warm-transfer routing to the client's licensed US closers. QA cadence and scorecard calibration. Compliance review against the client's vertical (TCPA, FCC CG Docket 02-278 disclosure language, state-level wrinkles). And vendor cultural fit. None of these can be tested at scale without burning capital.
What four metrics matter most in an outbound voice pilot?
Pre-qualification rate (percent of contacted prospects that meet client's filter criteria), transfer acceptance rate by the client's licensed staff (percent of warm transfers the licensed closer accepts as workable), post-transfer disposition close rate (percent of accepted transfers that close into a sale or enrollment), and compliance error rate (percent of monitored calls with a flagged disclosure or scripting deviation). Per-transfer cost is a derived output, not an input.
When should a buyer scale beyond a 10-seat pilot?
Three triggers. First, the four pilot KPIs stabilize for two consecutive 30-day windows (pre-qualification, transfer acceptance, close rate, compliance error). Second, per-transfer cost lands inside or below the client's modeled CAC ceiling. Third, the client's licensed US closing team confirms it has bandwidth for the warm transfer volume the next seat tranche would generate. Skipping any one of these and scaling on hope produces the 50-seat overspend the pilot was designed to prevent.
What pilot pricing structure aligns vendor and buyer incentives?
No setup fee, no annual prepay, and month-to-month commercial terms. When the vendor has not collected an annual prepay, the only way to keep the program alive is to hit the four pilot KPIs every 30 days. That is the alignment buyers should be optimizing for. CFG runs 10-seat pilots with no setup fee, no annual prepay, and live in 7 days from signed pilot, with warm-transfer to the client's licensed US closers.
Test the assumption
Run a 10-seat pilot against your current vendor
CFG runs fronter-only pilots in Jamaica, Saint Lucia, Trinidad, Belize, and Colombia. Native English, US Eastern overlap, warm-transfer to your licensed US closers. The 60-second CFG calculator compares your current vendor's loaded hourly against a 10-seat pilot. No setup fee, no annual prepay, live in 7 days from signed pilot.
Already modeled it? Book a 20-minute discovery call.