
Connect rate definition for recruiters (canonical formulas + how to measure)

February 3, 2026


By Ben Argeband, Founder & CEO of Heartbeat.ai. This page is kept stable and update-controlled so it can serve as a linkable citation target.

Who this is for

This page is for recruiting ops and leaders who need consistent measurement across teams, tools, and time. If you’ve ever had a weekly meeting derailed by “what counts as a connect?” or “are we measuring replies on sent or delivered?”, this is your standard.

Use it as the canonical link target for dashboards, SOPs, and stakeholder updates.

Quick Answer

  • Core answer: Connect Rate = connected calls / total dials. Report it per 100 dials to separate phone reachability from messaging and keep weekly reporting comparable.
  • Key insight: Most reporting drift comes from denominator changes. Lock definitions, log the raw events, and publish per-100 rates with raw counts.
  • Best for: Recruiting ops and leaders who need consistent measurement.

Compliance & Safety

This method is for legitimate recruiting outreach only. Always respect candidate privacy, opt-out requests, and local data laws. Heartbeat does not provide medical advice or legal counsel.

Framework: “denominator discipline” (pick a denominator and stick to it)

Outreach metrics fail when teams mix denominators. One report uses dials, another uses connected calls, and a third uses “attempts” that exclude voicemails. You can’t compare weeks, teams, or vendors if the denominator moves.

  • Define each metric with a fixed numerator and denominator.
  • Report in a consistent unit: per 100 attempts within each channel, and never blend channels (phone per 100 dials; email deliverability and bounce per 100 sent emails; email reply metrics per 100 delivered emails).
  • Log the event that creates the numerator so the metric is reproducible.
  • Separate data quality from execution (reachability vs timing vs targeting vs message-market fit).

Worked example (no guessing): If you place D dials and get C connected calls, Connect Rate = C/D. Report as (C/D) × 100 connected calls per 100 dials, alongside the raw counts (C and D).
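The worked example above can be sketched as a small helper. This is a minimal illustration, not a specific tool's API; the function name and argument names are our own:

```python
def connect_rate_per_100(connected_calls: int, total_dials: int) -> float:
    """Connect Rate = connected calls / total dials, reported per 100 dials."""
    if total_dials == 0:
        raise ValueError("Cannot compute Connect Rate with zero dials")
    return round(connected_calls / total_dials * 100, 1)

# Always publish the rate alongside the raw counts so volume changes stay visible.
print(connect_rate_per_100(connected_calls=183, total_dials=1200))  # 15.2
```

Guarding the zero-dial case explicitly keeps a quiet week from silently reporting a misleading rate.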

Step-by-step method

Step 1: Standardize your event vocabulary (before you touch a dashboard)

Your dialer, email provider, and ATS/CRM won’t agree on naming. Start by listing the events you can actually capture, then map them to stable definitions.

  • Phone events: dial placed, connected call, human answer, voicemail, busy, no answer, failed, wrong person confirmed, do-not-contact/opt-out.
  • Email events: sent, delivered, bounced (hard/soft), replied, unsubscribed/opt-out, complaint.
  • Identity events: person match confidence, wrong-person confirmed, duplicate merged, suppressed.

Decide what counts as an “attempt” per channel. For phone, an attempt is a dial. For email, an attempt is a sent email. Keep channels separate unless you’re intentionally building a blended model.
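The event mapping in Step 1 can live in code as a single lookup, so "what counts" is versioned alongside the pipeline rather than in a slide deck. A minimal sketch; the raw event codes on the left are invented placeholders, not real dialer or ESP codes:

```python
# Canonical event vocabulary, keyed by channel. Raw tool names (left) are
# illustrative; map whatever your dialer/ESP actually emits to these labels.
CANONICAL_EVENTS = {
    "phone": {
        "CALL_CONNECTED": "connected_call",
        "ANSWERED_HUMAN": "human_answer",
        "VM_DROP": "voicemail",
        "NO_ANSWER": "no_answer",
    },
    "email": {
        "accepted": "delivered",
        "bounce_hard": "hard_bounce",
        "bounce_soft": "soft_bounce",
        "reply_in": "replied",
    },
}

# An "attempt" is channel-specific: a dial for phone, a sent email for email.
ATTEMPT_EVENT = {"phone": "dial_placed", "email": "sent"}

def normalize(channel: str, raw_event: str) -> str:
    """Translate a tool-specific event name into the canonical vocabulary."""
    return CANONICAL_EVENTS[channel].get(raw_event, "unmapped")

print(normalize("phone", "VM_DROP"))  # voicemail
```

Anything that falls through as "unmapped" is a signal to update the table, not to guess in a dashboard.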

Step 2: Use canonical metric definitions (formulas + denominators)

These are the canonical definitions for recruiting outreach measurement. They are designed to be implementable in any reporting stack.

| Metric | Canonical definition | Formula | Report as | Primary diagnostic use |
| --- | --- | --- | --- | --- |
| Connect Rate | Share of dials that result in a connected call (not necessarily a human answer). | connected calls / total dials | Connected calls per 100 dials | Phone reachability + dialer execution |
| Answer Rate | Share of connected calls that are answered by a human. | human answers / connected calls | Human answers per 100 connected calls | Call timing + caller ID trust + targeting |
| Deliverability Rate | Share of sent emails that are delivered (not bounced). | delivered emails / sent emails | Delivered emails per 100 sent emails | List hygiene + domain reputation |
| Bounce Rate | Share of sent emails that bounce (hard + soft). | bounced emails / sent emails | Bounced emails per 100 sent emails | Bad addresses + sending practices |
| Reply Rate | Share of delivered emails that receive a reply. | replies / delivered emails | Replies per 100 delivered emails | Message-market fit + targeting |
| Positive Reply Rate | Share of delivered emails that receive a positive reply (interested, send details, open to talk). | positive replies / delivered emails | Positive replies per 100 delivered emails | Offer fit + targeting precision |
| Hard Bounce Rate | Share of sent emails that permanently fail (invalid address, non-existent domain). | hard bounces / sent emails | Hard bounces per 100 sent emails | Address validity |
| Soft Bounce Rate | Share of sent emails that temporarily fail (mailbox full, transient server issue, rate limiting). | soft bounces / sent emails | Soft bounces per 100 sent emails | Temporary deliverability issues |
| Recency | How recently a contact point (email/phone) was verified or observed as valid. | now − last verified/observed date | Days since last verification | Decay risk |
| Refresh cadence | How often you re-verify and update contact points for your active segments. | scheduled interval | Days between refresh runs | Freshness operations |
| Wrong-person rate | Share of outreach attempts that reach someone other than the intended candidate. | wrong-person confirmations / total attempts (phone: dials; email: delivered emails) | Wrong-person outcomes per 100 attempts | Identity resolution quality |
| Suppression | Operational rule that prevents outreach due to opt-out, complaint, wrong-person, or risk flags. | suppressed records / total records evaluated | Suppressed per 100 evaluated | Compliance + reputation protection |

Variable-only examples (for clean reporting):

  • Answer Rate: if you have H connected calls and A human answers, Answer Rate = A/H (human answers per 100 connected calls = (A/H) × 100).
  • Deliverability Rate: if you send S emails and L are delivered, Deliverability Rate = L/S (delivered per 100 sent = (L/S) × 100).
  • Bounce Rate: if you send S emails and B bounce, Bounce Rate = B/S (bounces per 100 sent = (B/S) × 100).
  • Reply Rate: if you have L delivered emails and R replies, Reply Rate = R/L (replies per 100 delivered = (R/L) × 100).
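The variable-only examples above can be folded into one helper that applies every canonical denominator in one place. A minimal sketch; the function names and argument names are illustrative, not from any specific reporting stack:

```python
def per_100(numerator: int, denominator: int) -> float:
    """Generic per-100 rate; returns 0.0 when the denominator is empty."""
    return round(numerator / denominator * 100, 1) if denominator else 0.0

def scorecard(dials, connected, human, sent, delivered, bounced, replies):
    """Canonical metrics, each with its canonical denominator."""
    return {
        "connect_rate": per_100(connected, dials),    # per 100 dials
        "answer_rate": per_100(human, connected),     # per 100 connected calls
        "deliverability": per_100(delivered, sent),   # per 100 sent emails
        "bounce_rate": per_100(bounced, sent),        # per 100 sent emails
        "reply_rate": per_100(replies, delivered),    # per 100 delivered emails
    }

print(scorecard(dials=1000, connected=150, human=60,
                sent=500, delivered=470, bounced=30, replies=24))
```

Because every rate flows through the same `per_100` helper, a denominator can only change by editing this one definition, which is exactly the "denominator discipline" the framework asks for.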

Step 3: Map tool outcomes to the definitions (so the math is reproducible)

Define two terms in your ops doc and keep them stable:

  • Connected call: the carrier connects the call (a connection event), regardless of whether a person answers.
  • Human answer: a person answers the call (a human-answer event), not voicemail or an automated message.

Then create a mapping table that says which raw outcomes count toward each numerator/denominator. If you change dialers, update this table the same day.

| Raw outcome (example) | Counts toward total dials? | Counts toward connected calls? | Counts toward human answers? | Notes |
| --- | --- | --- | --- | --- |
| Connected (carrier connected) | Yes | Yes | No | Connected does not imply a human answered. |
| Human answered | Yes | Yes | Yes | Use a consistent rule for what qualifies as “human.” |
| Voicemail reached | Yes | Depends on dialer | No | If your dialer marks voicemail as connected, document it and keep it consistent. |
| Busy / no answer / failed | Yes | No | No | Still counts as a dial attempt. |

For email, do the same mapping: sent → denominator for deliverability and bounce; delivered → denominator for reply and positive reply; bounced → numerator for bounce (split hard/soft).
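The phone mapping can be encoded directly, so the same raw outcomes always feed the same counters. A sketch with invented outcome codes; the voicemail row here documents one deliberate, dialer-specific choice (voicemail counted as connected), which your own table may decide differently:

```python
# Each raw dialer code maps to (counts_as_dial, counts_as_connected, counts_as_human).
# Codes are illustrative; version this table with your dialer configuration.
OUTCOME_MAP = {
    "CONNECTED": (True, True, False),
    "HUMAN":     (True, True, True),
    "VOICEMAIL": (True, True, False),  # documented choice: voicemail counts as connected here
    "BUSY":      (True, False, False),
    "NO_ANSWER": (True, False, False),
    "FAILED":    (True, False, False),
}

def tally(outcomes):
    """Aggregate raw outcome codes into the three canonical phone counters."""
    dials = connected = human = 0
    for code in outcomes:
        d, c, h = OUTCOME_MAP[code]
        dials += d
        connected += c
        human += h
    return dials, connected, human

print(tally(["HUMAN", "VOICEMAIL", "BUSY", "NO_ANSWER"]))  # (4, 2, 1)
```

If you change dialers, the only thing that should change is `OUTCOME_MAP`, never the formulas downstream.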

Step 4: Build a weekly scorecard that can’t drift

Publish a scorecard with per-100 rates and raw counts. If someone challenges a number, you should be able to trace it back to events.

  • Phone: total dials, connected calls, human answers, wrong-person confirmations, opt-outs/suppressions.
  • Email: sent, delivered, hard bounces, soft bounces, replies, positive replies, unsubscribes/complaints, suppressions.

The trade-off: per-100 reporting is easy to read, but it can hide volume changes. Always show raw counts next to rates so leaders can see whether performance changed or volume changed.
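One way to enforce "raw counts next to rates" is to make the scorecard formatter emit both in a single string, so a rate can never be published alone. An illustrative sketch:

```python
def scorecard_row(metric: str, numerator: int, denominator: int, unit: str) -> str:
    """One scorecard line: the per-100 rate with its raw counts beside it."""
    rate = numerator / denominator * 100 if denominator else 0.0
    return f"{metric}: {rate:.1f} per 100 {unit} ({numerator}/{denominator})"

print(scorecard_row("Connect Rate", 183, 1200, "dials"))
# Connect Rate: 15.2 per 100 dials (183/1200)
```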

Diagnostic Table:

Use this to triage performance without guessing. It’s written for recruiting ops: what broke, where to look, and what to change.

| Symptom | Most likely cause | Where to check | What to do next |
| --- | --- | --- | --- |
| Connect Rate drops | Number decay, carrier filtering, or dialer config change | Dialer outcome codes; recency distribution; time-zone call blocks | Increase refresh cadence for active segments; adjust call windows; review caller ID strategy |
| Connect Rate stable, Answer Rate drops | Timing mismatch or caller ID trust issue | Answer Rate by hour/day; spam labeling reports; caller ID settings | Shift call blocks; rotate caller IDs; tighten targeting to reduce low-intent connects |
| Deliverability Rate drops | Domain reputation or list hygiene issue | Google Postmaster Tools reputation signals; bounce breakdown (hard vs soft) | Pause risky segments; remove hard bounces; slow sending; improve authentication and warm-up practices |
| Hard Bounce Rate rises | Invalid or stale addresses | Hard bounce logs; email recency | Refresh active segments; suppress known-invalid; verify before sending |
| Reply Rate flat, Positive Reply Rate falls | Offer mismatch or role confusion | Reply sentiment tagging; wrong-person confirmations | Fix role/facility context; tighten filters; include concrete schedule/call details earlier |
| Wrong-person rate increases | Identity resolution drift (shared lines, similar names, recycled numbers) | Wrong-person flags by source; match confidence | Feed wrong-person outcomes into suppression and matching rules; require identity confirmation in first touch |

Weighted Checklist:

Score each item 0–2 (0 = not in place, 1 = partial, 2 = solid). Multiply by weight. This prioritizes changes that reduce wasted attempts and protect deliverability.

| Area | Check | Weight | Why it matters |
| --- | --- | --- | --- |
| Definitions | All teams use the same formulas and denominators (per 100 dials; per 100 sent; per 100 delivered) | 5 | Stops reporting drift and makes tests comparable |
| Instrumentation | ATS/CRM captures dial outcomes, delivery/bounce type, replies, positive replies, and suppression reasons | 5 | Without fields, you can’t audit or improve |
| Phone reachability | Phone records include recency metadata and a refresh cadence for active segments | 4 | Prevents wasted dials and repeated carrier failures |
| Email reputation | Deliverability monitored with Google Postmaster Tools; hard/soft bounces tracked separately | 4 | Protects domain and keeps the delivered denominator real |
| Suppression | Central suppression list enforced across tools (opt-outs, complaints, wrong-person) | 4 | Reduces risk and prevents repeat mistakes |
| Identity quality | Wrong-person outcomes are logged and fed back into matching rules | 3 | Improves candidate experience and recruiter efficiency |
| Execution | Call blocks and email sends are scheduled by time zone and role patterns | 3 | Improves Answer Rate and reply likelihood without more volume |

Outreach Templates:

These templates are designed to reduce wrong-person outcomes and produce clean measurement. Keep the structure stable while you test one variable at a time.

Phone opener (30 seconds)

  • Confirm identity: “Hi Dr. [Last Name]—this is [Name]. Quick check: is this still the best number for you?”
  • If yes: “Thanks. I’m recruiting for a [Role] at [Facility/Group] in [Location]. Do you have 60 seconds now, or should I call back at [two specific windows]?”
  • If wrong person: “Appreciate it—sorry about that. I’ll remove this number.” (Log wrong-person + suppress.)
  • What to log: dial outcome, connected flag, human answer flag, wrong-person flag, suppression reason if applicable.

Email 1 (identity-first)

Subject: Quick confirm — [Role] at [Facility/Group]?

Hi [First Name] — I’m recruiting for [Role] at [Facility/Group] in [Location]. Before I send details, can you confirm this is the right email for you?

If not, reply “no” and I’ll suppress you.

— [Name], [Title] at Heartbeat.ai

What to log: sent, delivered, bounce type (if any), reply, positive reply, suppression reason.

Email 2 (details + next step)

Subject: Details + call window?

Thanks, [First Name]. The role is [2–3 specifics: schedule/call/setting]. If you’re open to a quick call, what’s a good window this week?

If you’d rather not get outreach from me, reply “opt out” and I’ll suppress you.

Common pitfalls

1) Counting “connected” differently across dialers

Some dialers treat voicemail as connected; others don’t. If you change dialers and don’t update your mapping table, your Connect Rate will jump or drop without any real change in reachability.

2) Measuring replies on sent instead of delivered

Reply Rate is replies / delivered emails. If you use sent as the denominator, you can hide list hygiene problems and misread message performance.

3) Blending wrong-person outcomes into “not interested”

Wrong-person is an identity/data failure. “Not interested” is a targeting/offer failure. Keep them separate so you fix the right thing.

4) Treating suppression as a manual side task

Suppression must be enforced across tools. If opt-outs live only in an inbox label, you will re-contact people and create avoidable complaints.

5) Letting definitions live in scattered docs (run your trust hub like a product)

If definitions live in multiple docs, they will drift. Run your trust hub like a product: one stable, linkable definitions page that every metric/claims page points to, with a visible Last reviewed date and a lightweight Change log that records what changed and why.

  • Owner: recruiting ops (or RevOps) owns the definitions; marketing can format, but ops owns meaning.
  • Cadence: review on tool changes (dialer/ESP/ATS) and quarterly otherwise.
  • UI note: publish a hub “card grid” for Definitions / Testing / Sources / Ethics / Not HIPAA / Editorial / Corrections / Security, and show “Last reviewed” + “Change log” on each page.

How to improve results

Improvement starts with measurement hygiene. If you can’t trust the denominator, you can’t trust the change.

Measurement instructions (required)

Measure this by building a weekly scorecard that reports each metric with its canonical denominator and includes raw counts next to the rate.

  • Connect Rate = connected calls / total dials. Report: connected calls per 100 dials + raw counts (connected calls, total dials).
  • Answer Rate = human answers / connected calls. Report: human answers per 100 connected calls + raw counts.
  • Deliverability Rate = delivered emails / sent emails. Report: delivered per 100 sent + raw counts.
  • Bounce Rate = bounced emails / sent emails. Report: bounces per 100 sent, split hard/soft + raw counts.
  • Reply Rate = replies / delivered emails. Report: replies per 100 delivered + raw counts.

Also track wrong-person rate per channel (phone: wrong-person per 100 dials; email: wrong-person per 100 delivered emails) and suppression volume by reason.
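Because wrong-person rate uses a channel-specific denominator, it helps to make the channel explicit in the calculation so phone and email numbers are never accidentally blended. A sketch; names are illustrative:

```python
def wrong_person_per_100(channel: str, wrong_person: int,
                         dials: int = 0, delivered: int = 0) -> float:
    """Wrong-person rate with the canonical per-channel denominator:
    dials for phone, delivered emails for email."""
    denom = {"phone": dials, "email": delivered}[channel]
    return round(wrong_person / denom * 100, 2) if denom else 0.0

print(wrong_person_per_100("phone", wrong_person=9, dials=1200))      # 0.75
print(wrong_person_per_100("email", wrong_person=4, delivered=470))   # 0.85
```

An unknown channel raises a `KeyError` instead of silently picking a denominator, which is the safer failure mode for an auditable metric.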

Operational levers that usually move the numbers

  • Connect Rate: refresh active segments; call in role-appropriate windows; reduce repeated attempts to stale numbers; keep caller ID strategy consistent. Heartbeat.ai supports workflows that rank mobile numbers by answer probability, so recruiters spend dials where answers are more likely.
  • Answer Rate: adjust call blocks; tighten targeting; confirm identity in the opener to reduce defensive screening.
  • Deliverability Rate: monitor domain reputation signals (Google Postmaster Tools), suppress risky segments quickly, and keep bounce handling strict.
  • Reply Rate: shorten the first email, confirm identity, and ask for a specific next step (a call window). Keep the template stable while testing one variable at a time.

ATS/CRM field map (compact)

Here’s a minimal field map that supports the definitions above and keeps reporting auditable across tools.

| Metric | Minimum fields/events to log | System of record |
| --- | --- | --- |
| Connect Rate | dial_timestamp, dial_outcome, connected_flag | Dialer (synced to ATS/CRM) |
| Answer Rate | human_answer_flag, call_duration_seconds (optional), answer_timestamp | Dialer |
| Deliverability Rate | email_sent_timestamp, delivered_flag, bounce_flag, bounce_type | Email provider / outreach tool |
| Bounce Rate | bounce_flag, bounce_type (hard/soft) | Email provider / outreach tool |
| Reply Rate | reply_flag, reply_timestamp, reply_thread_id | Email provider / outreach tool |
| Positive Reply Rate | positive_reply_flag (manual tag or classifier), tag_timestamp | ATS/CRM (review workflow) |
| Wrong-person rate | wrong_person_flag, wrong_person_channel, confirmation_timestamp | ATS/CRM |
| Suppression | suppression_flag, suppression_reason, suppression_timestamp | ATS/CRM (enforced across tools) |
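A minimal suppression record matching the last row of the field map, plus the central check each tool would call before any dial or send. This assumes a shared suppression store; the field names mirror the table, but the class and function are illustrative sketches, not a specific ATS/CRM schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Suppression:
    contact_id: str
    suppression_reason: str     # e.g. "opt_out", "complaint", "wrong_person"
    suppression_timestamp: str  # ISO date of when suppression was recorded

def is_contactable(contact_id: str, suppressions: list) -> bool:
    """Central gate every tool should pass before any outreach attempt."""
    return all(s.contact_id != contact_id for s in suppressions)

sup = [Suppression("c_123", "opt_out", "2026-01-15")]
print(is_contactable("c_123", sup), is_contactable("c_456", sup))  # False True
```

The point of the dataclass is less the code than the contract: if every tool reads the same record shape from the same store, opt-outs can never live only in an inbox label.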

Legal and ethical use

This page defines measurement terms and operational logging. It is not legal advice. Use these metrics to run legitimate recruiting outreach with clear opt-out handling and respectful contact frequency.

  • Honor opt-outs and suppress across all tools.
  • Minimize data: store what you need to recruit and measure; avoid collecting sensitive information you don’t need.
  • Be transparent: identify yourself, your purpose, and provide a simple opt-out path.

Do not represent any dataset as “HIPAA compliant,” “safe harbor,” or “guaranteed accurate.” Heartbeat does not provide legal counsel.

Evidence and trust notes

This page is part of the Heartbeat trust methodology hub. If you want the broader context for how we define and test quality, start here: Heartbeat trust methodology.

Related trust detail: How we test contact data quality.

Implementation notes and editorial stability should follow user-first documentation principles.

FAQs

What is the connect rate definition for recruiters?

Connect Rate = connected calls / total dials. Report it as connected calls per 100 dials, and keep the definition stable across teams and tools.

What’s the difference between connect rate and answer rate?

Connect Rate is connected calls per 100 dials. Answer Rate is human answers per 100 connected calls. Use both so you can separate reachability from call timing and trust.

How do you define deliverability rate and bounce rate for recruiting email?

Deliverability Rate = delivered emails / sent emails (delivered per 100 sent). Bounce Rate = bounced emails / sent emails (bounces per 100 sent), split into hard and soft bounces.

How do you calculate reply rate?

Reply Rate = replies / delivered emails. Use delivered as the denominator so bounces don’t distort the result.

What should we log to make these metrics auditable?

Log the raw events that create the numerators and denominators: dials, connected calls, human answers, sent, delivered, bounce type, replies, positive replies, wrong-person confirmations, and suppression reasons.

About the Author

Ben Argeband is the Founder and CEO of Swordfish.ai and Heartbeat.ai. With deep expertise in data and SaaS, he has built two successful platforms trusted by over 50,000 sales and recruitment professionals. Ben’s mission is to help teams find direct contact information for hard-to-reach professionals and decision-makers, providing the shortest route to their next win. Connect with Ben on LinkedIn.

