Reply rate tracking for physician outreach
By Ben Argeband, Founder & CEO of Heartbeat.ai. An operational playbook: tables, definitions, and logging fields.
Physician outreach fails in two ways that look identical in a pipeline meeting: your emails aren’t getting delivered, or they’re delivered but your targeting/message isn’t connecting. If you don’t measure those separately, you’ll waste time rewriting copy that never reached the inbox. This playbook gives you a clean measurement spine (definitions, ATS field map, ownership, and a weekly report) so you can fix the right thing first.
Who this is for
This is for recruiters and ops teams who want consistent outreach reporting across email/SMS/call sequences—without metric drift when tools or recruiters change.
- Recruiting ops building weekly reporting and coaching loops
- Agency owners who need predictable throughput and fewer wasted touches
- Recruiters who want to know whether to fix list quality, targeting, or copy
Quick Answer
- Core Answer: Track reply rate using delivered emails as the denominator, dedupe to one reply per person per campaign, and label positive replies with a shared rubric.
- Key Insight: Separate deliverability from replies so you don’t confuse list/domain problems with messaging problems.
- Best For: Recruiters and ops teams who want consistent outreach reporting.
Compliance & Safety
This method is for legitimate recruiting outreach only. Always respect candidate privacy, opt-out requests, and local data laws. Heartbeat does not provide medical advice or legal counsel.
Framework: The “Measure Then Fix” Ladder (Deliverability → Reply → Positive Reply)
- Deliverability: did the email get delivered (not bounced)?
- Reply: did a human reply (any reply)?
- Positive Reply: did the reply create a next step (call, details, timing)?
Operational rule: you don’t touch messaging until deliverability is stable, and you don’t celebrate replies until you can separate positive replies from everything else.
Step-by-step method
Step 1: Lock definitions and denominators (non-negotiable)
- Deliverability Rate = delivered emails / sent emails (per 100 sent emails).
- Bounce Rate = bounced emails / sent emails (per 100 sent emails).
- Reply Rate = replies / delivered emails (per 100 delivered emails).
- Positive Reply Rate = positive replies / delivered emails (per 100 delivered emails).
Two rules that prevent bad math:
- Reply metrics use delivered as the denominator so bounces don’t hide inside “low reply.”
- Count max 1 reply per person per campaign unless you explicitly want “total replies.”
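The four definitions and the denominator rules above can be sketched as a small helper. This is a minimal sketch; the function and field names are illustrative, not tied to any specific tool.

```python
def outreach_rates(sent: int, delivered: int, bounced: int,
                   replies: int, positive_replies: int) -> dict:
    """Compute the four core rates per 100 emails.

    Reply metrics use *delivered* as the denominator so bounces
    don't hide inside "low reply"; bounce and deliverability use *sent*.
    """
    def per_100(n, d):
        return round(100 * n / d, 1) if d else 0.0

    return {
        "deliverability_rate": per_100(delivered, sent),
        "bounce_rate": per_100(bounced, sent),
        "reply_rate": per_100(replies, delivered),
        "positive_reply_rate": per_100(positive_replies, delivered),
    }

# Example campaign: 1,000 sent, 920 delivered, 80 bounced, 23 replies, 9 positive
rates = outreach_rates(1000, 920, 80, 23, 9)
# reply_rate is 2.5 per 100 delivered, not 2.3 per 100 sent
```

Note the gap in the example: on a sent denominator the reply rate looks like 2.3; on delivered it is 2.5, and the missing 8 points of deliverability show up where they belong.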
Step 2: Choose your unit of analysis (campaign-first)
Pick a primary rollup and keep it consistent:
- Campaign (recommended): a defined list + sequence + time window (example: “Cardiology-Hospital-2026W03”).
- Recruiter-week: best for coaching and capacity planning.
- Req: best for hiring manager updates, but can hide list quality issues if sourcing is shared.
Step 3: Instrument the funnel (what you must log)
Track these counts per campaign:
- Sent emails
- Delivered emails
- Bounced emails (hard vs soft if available)
- Replies (any)
- Positive replies
- Opt-outs / unsubscribes
- Spam complaint proxy signals (defined in Step 6)
Part of this requires manual review. Most sequencing tools can capture replies, but positive-vs-negative classification and deduping (one reply per person per campaign) still need a lightweight review step. If you skip this, your “positive reply” metric becomes recruiter-dependent and unusable for ops.
Step 4: Standardize what “positive” means (simple rubric)
- Positive: “Yes, send details.” “Call me after clinic.” “I’m open to locums in March.” “Can you share comp and call?”
- Neutral: “Not now.” “Maybe later.”
- Negative: “Not interested.” “Stop emailing me.” “Wrong specialty.”
Rule: if it doesn’t move you toward a call or a defined next step, it’s not positive.
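The rubric above is a human judgment call, but a keyword pre-sort can speed up the weekly review. This is only a first-pass triage sketch; the keyword lists are illustrative assumptions, and the final Positive/Neutral/Negative label should always follow the rubric.

```python
# First-pass triage for reply labels. This pre-sorts replies for the human
# review step; it does NOT replace the rubric. Keyword lists are assumptions.
POSITIVE = ("send details", "call me", "i'm open", "share comp")
NEGATIVE = ("not interested", "stop emailing", "wrong specialty", "remove me")

def triage_reply(text: str) -> str:
    t = text.lower()
    if any(k in t for k in NEGATIVE):
        return "negative"   # also check for opt-out language -> Do Not Contact
    if any(k in t for k in POSITIVE):
        return "positive"   # still needs a concrete next step per the rubric
    return "review"         # neutral or ambiguous: a human decides

triage_reply("Yes, send details after clinic")  # -> "positive"
triage_reply("Please stop emailing me")         # -> "negative"
triage_reply("Maybe later this year")           # -> "review"
```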
Step 5: Implement the ATS field map + data dictionary (ATS_FIELD_MAP)
This is the measurement spine. Your sequencing tool can change; your ATS/CRM is where reporting should survive.
| Metric / Attribute | Where it lives | Field name (example) | How it’s populated | Notes |
|---|---|---|---|---|
| Campaign ID | ATS/CRM | Outreach Campaign | Manual picklist | Naming convention: Specialty-Setting-YYYYW## (no free-text) |
| Campaign Start Date | ATS/CRM | Campaign Start | Auto or manual | Recommended: add Campaign End to enforce a strict campaign window for dedupe and audits |
| Campaign End Date | ATS/CRM | Campaign End | Auto or manual | Recommended for clean weekly rollups and cross-channel dedupe |
| Channel | ATS/CRM | Outreach Channel | Manual picklist | Email / SMS / Call / Mixed |
| Sent count | Sequencer export | Emails Sent | Weekly import | Integer per campaign |
| Delivered count | Sequencer export | Emails Delivered | Weekly import | Integer per campaign; denominator for reply metrics |
| Bounced count | Sequencer export | Emails Bounced | Weekly import | Split hard/soft if possible |
| Reply count | Sequencer + ops review | Email Replies | Weekly import + audit | Deduplicate to 1 reply/person/campaign |
| Positive reply count | ATS/CRM | Positive Replies | Manual tagging | Use the rubric from Step 4 |
| Opt-out | ATS/CRM | Do Not Contact | Immediate manual update | Must suppress across all tools |
| Source | ATS/CRM | Source Detail | Auto or manual | Include Heartbeat.ai when applicable |
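A picklist in the ATS is the right enforcement point for the Campaign ID naming convention, but a validator catches bad IDs in sequencer exports before they pollute rollups. A sketch, assuming single-word specialty/setting tokens and week numbers 01–53; adapt the pattern to your own convention.

```python
import re

# Validates Campaign IDs like "Cardiology-Hospital-2026W03".
# Pattern assumptions: one-word specialty and setting tokens, four-digit
# year, week number 01-53. Adjust to match your naming convention.
CAMPAIGN_ID = re.compile(r"^[A-Za-z]+-[A-Za-z]+-\d{4}W(0[1-9]|[1-4]\d|5[0-3])$")

def valid_campaign_id(cid: str) -> bool:
    return bool(CAMPAIGN_ID.match(cid))

valid_campaign_id("Cardiology-Hospital-2026W03")  # True
valid_campaign_id("cardio list march")            # False: free text, reject
```

Run this over every weekly import; any row that fails goes back to the recruiter before it enters the ledger.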
ATS data dictionary (copy/paste into your ops doc)
- Outreach Campaign (picklist): format Specialty-Setting-YYYYW##. No free-text values.
- Campaign Window: defined by Campaign Start and Campaign End (recommended). If you can’t store Campaign End, define a fixed window rule in ops (for example: “Monday–Sunday of the campaign week”).
- Outreach Channel (picklist): Email, SMS, Call, Mixed.
- Reply (boolean): true if any non-auto reply is received from the person within the campaign window.
- Positive Reply (boolean): true if reply meets Step 4 rubric.
- Reply Dedupe Rule: count max 1 reply per person per campaign regardless of channel.
- Reply Timestamp (datetime): first reply time in campaign window.
- Do Not Contact (boolean): true if opt-out requested; must sync to all tools.
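The Reply Dedupe Rule and Reply Timestamp definitions above can be sketched as a small rollup step. The event shape (person, campaign, channel, timestamp) is an illustrative assumption about your export format.

```python
from datetime import datetime

# Enforce the Reply Dedupe Rule: max 1 reply per person per campaign,
# regardless of channel, keeping the first Reply Timestamp for audit.
def dedupe_replies(events):
    first = {}
    for person, campaign, channel, ts in sorted(events, key=lambda e: e[3]):
        first.setdefault((person, campaign), (channel, ts))  # keep earliest only
    return first

events = [
    ("dr_smith", "Cardiology-Hospital-2026W03", "sms",   datetime(2026, 1, 14, 9, 5)),
    ("dr_smith", "Cardiology-Hospital-2026W03", "email", datetime(2026, 1, 13, 18, 2)),
    ("dr_lee",   "Cardiology-Hospital-2026W03", "email", datetime(2026, 1, 13, 8, 0)),
]
deduped = dedupe_replies(events)
len(deduped)  # 2 replies count for the campaign, not 3
# dr_smith's stored channel/timestamp is the earlier email reply, not the SMS
```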
If your ATS can’t support campaigns (fallback that still works)
- Keep a campaign ledger (spreadsheet) with Campaign ID, Start/End, and weekly counts (sent/delivered/bounced/replies/positive/opt-outs).
- On each candidate record, store only Outreach Campaign (ID) + Reply + Positive Reply + Do Not Contact.
- Do weekly rollups from the ledger, not from ad-hoc recruiter notes.
Ownership & cadence (so this actually runs)
- Recruiter (daily): tag Reply and Positive Reply; apply Do Not Contact immediately when requested.
- Ops (weekly): import sent/delivered/bounced/replies counts; audit a sample of reply labels for consistency.
- Ops (weekly): review Postmaster Tools and proxy signals; log any deliverability incident notes.
- Recruiting lead (weekly): review the one-page report and approve one-variable tests for the next week.
Step 6: Deliverability monitoring + spam complaint proxy (what to track when you can’t see true complaints)
Deliverability is not just bounces. You need domain-level monitoring and internal proxy signals so you can catch reputation issues early. For Google Workspace domains, use Postmaster Tools to monitor trends.
Spam complaint proxy signals (operational)
- Sudden drop in Deliverability Rate (delivered emails / sent emails) without a list-source change
- Sudden rise in Bounce Rate (bounced emails / sent emails), especially hard bounces
- Spike in opt-outs/unsubscribes for a campaign compared to the prior 2–4 weeks for the same sending domain and specialty slice
- Increase in replies like “stop,” “spam,” or “remove me” (tag as negative + compliance)
- Postmaster Tools trend deterioration coinciding with reply-rate drops
Action rule: if proxy signals spike, pause scaling volume, audit suppression, and fix list quality before you test new copy.
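The opt-out proxy signal above can be automated as a weekly check against the prior 2–4 weeks for the same sending domain and specialty slice. A sketch; the 2x threshold is an illustrative assumption you should tune to your own baseline.

```python
# Flag a proxy-signal spike: this week's opt-out rate vs the mean of the
# prior 2-4 weeks for the same sending domain / specialty slice.
# The 2.0x factor is an assumption; tune it against your own history.
def optout_spike(current_optouts, current_delivered, prior_weeks, factor=2.0):
    """prior_weeks: list of (optouts, delivered) tuples for the same slice."""
    baseline = sum(o for o, _ in prior_weeks) / max(1, sum(d for _, d in prior_weeks))
    current = current_optouts / max(1, current_delivered)
    return current > factor * baseline

# 9 opt-outs on 900 delivered vs a ~0.3% baseline over the prior 3 weeks
optout_spike(9, 900, [(3, 1000), (2, 950), (4, 1020)])  # True: pause scaling
```

The same shape works for bounce rate and deliverability rate; when any of them trips, apply the action rule above before touching copy.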
Step 7: Build the weekly report (one table, one page)
| Week | Campaign | Sent | Delivered | Bounced | Replies | Positive Replies | Deliverability Rate | Bounce Rate | Reply Rate | Positive Reply Rate | Notes / Changes |
|---|---|---|---|---|---|---|---|---|---|---|---|
| YYYY-WW | Specialty-Setting-YYYYW## | # | # | # | # | # | Delivered/Sent | Bounced/Sent | Replies/Delivered | PosReplies/Delivered | List source, domain, sequence version, send window, suppression update |
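Each row of the weekly table can be generated straight from the campaign ledger counts so the rates are never hand-computed. A minimal sketch; the column order follows the table above, and the trailing Notes cell is left blank for the human.

```python
def report_row(week, campaign, sent, delivered, bounced, replies, pos):
    """Render one markdown row of the weekly report from raw counts.

    Rates follow Step 1: deliverability/bounce over sent, replies over delivered.
    """
    pct = lambda n, d: f"{100 * n / d:.1f}%" if d else "n/a"
    cells = [week, campaign, sent, delivered, bounced, replies, pos,
             pct(delivered, sent), pct(bounced, sent),
             pct(replies, delivered), pct(pos, delivered), ""]  # Notes left blank
    return "| " + " | ".join(str(c) for c in cells) + " |"

print(report_row("2026-03", "Cardiology-Hospital-2026W03", 1000, 920, 80, 23, 9))
# | 2026-03 | Cardiology-Hospital-2026W03 | 1000 | 920 | 80 | 23 | 9 | 92.0% | 8.0% | 2.5% | 1.0% |  |
```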
Diagnostic Table:
| What you see | Most likely cause | What to check next | Fix (fastest first) |
|---|---|---|---|
| High bounce rate, low delivered | Bad emails / stale data | Bounce type (hard vs soft), list source, recent imports | Verify/suppress risky addresses; tighten sourcing inputs; re-run verification before next send |
| Stable deliverability, low reply rate | Targeting or message mismatch | Specialty fit, geography, clarity of schedule/call/comp, subject line relevance | Adjust targeting slice; rewrite first 2 lines for relevance; test one variable per week |
| Replies exist, but low positive reply | Offer friction or unclear next step | Do replies ask basic questions you didn’t answer? | Add minimum viable clarity; tighten CTA to a specific scheduling ask |
| Reply rate drops suddenly across campaigns | Deliverability incident or suppression failure | Postmaster Tools trend change, opt-out spike, recent tool/domain/list change | Pause scaling volume; audit suppression; fix list quality before copy tests |
| Reply counts look inflated | Double-counting across channels or threads | Are email and SMS replies both being counted for the same person/campaign? | Enforce 1 reply/person/campaign regardless of channel; store Reply Timestamp for audit |
Weighted Checklist:
- (5) Definitions locked: deliverability rate, bounce rate, reply rate, positive reply rate documented and shared.
- (5) Denominators enforced: reply metrics use delivered emails; bounce uses sent emails.
- (4) Campaign naming convention: strict IDs across ATS/CRM and sequencing exports.
- (4) Suppression workflow: opt-outs update a single source of truth and sync to all tools within 24 hours.
- (4) Positive reply rubric: binary rubric + examples; weekly calibration sample.
- (3) Weekly report cadence: same day/time, one table, notes required.
- (3) Deliverability monitoring: Postmaster Tools checked weekly; incidents logged.
- (2) Audit trail: random sample of replies reviewed weekly for misclassification.
Outreach Templates:
Template 1: Email (first touch) — fast yes/no
Subject: Quick question — {Specialty} role near {City}
Dr. {LastName} — recruiting for a {Specialty} opening with {EmployerType} near {City}. Is it worth sending details, or should I close the loop?
If yes, what’s the best number/time to reach you after clinic this week?
— {YourName}
Template 2: Email (follow-up) — constraint-first
Subject: Re: {Specialty} — schedule / call / comp
Keeping this tight: what matters most for you right now—schedule, call, comp, or location?
If you reply with one word (schedule/call/comp/location), I’ll send only the relevant details.
Template 3: SMS (after an email open or call attempt)
Dr. {LastName}, this is {YourName} re: a {Specialty} role near {City}. OK to text you details here, or prefer email?
Template 4: Voicemail (measurable next step)
Hi Dr. {LastName}, {YourName} calling about a {Specialty} opportunity near {City}. If you’re open to details, text me “yes” at {Number} and I’ll send a one-page summary. If not, text “no” and I’ll stop.
If you’re sourcing physicians with Heartbeat.ai, you can operationalize this faster by using mobile numbers ranked by answer probability and keeping campaign IDs consistent from day one.
Common pitfalls
Pitfall 1: Calculating reply rate on “sent”
If you calculate replies / sent, you hide deliverability issues inside “low reply.” Use replies / delivered so you can see whether the message is failing or the email never arrived.
Pitfall 2: Counting auto-replies as replies
Exclude auto-replies/OOO from reply rate. Track them separately if you need an operational reminder to follow up later.
Pitfall 3: Letting each recruiter self-grade “positive”
Without a rubric and calibration, your positive reply metric becomes inconsistent. Use the Step 4 rubric and audit a small sample weekly.
Pitfall 4: Suppression failures
If opt-outs aren’t centralized and synced, you’ll keep contacting people who asked you to stop. That’s a compliance risk and it can damage deliverability.
Pitfall 5: Double-counting cross-channel replies
For clean ops reporting, count one reply per person per campaign regardless of channel and store the first Reply Timestamp for audit.
Once measurement is clean, if you need the behavioral side, read why physicians don’t reply (and what to do about it).
How to improve results
1) Fix deliverability before you touch messaging
- Verify and suppress risky emails before sending, especially older lists.
- Segment reporting by list source and sending domain so you can isolate the problem.
- Use Postmaster Tools trends plus your proxy signals (Step 6) to catch issues early.
If you need a concrete workflow for list hygiene, use email verification for healthcare recruiting.
2) Run one-variable tests on reply rate (campaign-level)
Pick one variable per week per campaign (don’t stack changes):
- Subject line
- First two lines (relevance)
- CTA (specific scheduling ask vs vague ask)
- Targeting slice (setting, geography, seniority)
Measure this by holding the list slice and send window constant, then comparing replies per 100 delivered emails week-over-week for the same campaign ID.
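Week-over-week reply-rate deltas are noisy at typical campaign volumes, so a rough significance check helps before declaring a winner. A sketch using a two-proportion z-test on replies per delivered; the normal approximation is an assumption that holds reasonably well at a few hundred delivered per week.

```python
from math import sqrt

# Rough check for a one-variable test: two-proportion z-test on
# replies per delivered, comparing last week (a) to this week (b).
# Assumption: delivered counts are large enough for the normal approximation.
def reply_rate_z(replies_a, delivered_a, replies_b, delivered_b):
    p_a, p_b = replies_a / delivered_a, replies_b / delivered_b
    p = (replies_a + replies_b) / (delivered_a + delivered_b)  # pooled rate
    se = sqrt(p * (1 - p) * (1 / delivered_a + 1 / delivered_b))
    return (p_b - p_a) / se  # |z| > ~1.96 suggests a real change at ~95%

# v1 subject: 23 replies / 920 delivered (2.5%); v2: 41 / 905 (4.5%)
z = reply_rate_z(23, 920, 41, 905)
# |z| is roughly 2.3 here, so the subject-line change likely mattered
```

If |z| stays under ~2, log the result and keep the variable another week rather than stacking a second change on top.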
3) Improve positive replies by removing “basic questions” from the thread
- State setting (hospital employed vs private group) and schedule shape
- Be explicit about call expectations when known
- Offer two concrete call windows
4) Enforce change-log discipline (so improvements are attributable)
- List source (and whether it was refreshed/verified)
- Sending domain and mailbox used
- Sequence version (v1, v2, etc.)
- Send window (day/time range)
- Suppression update (what changed, when)
If you want a multi-channel workflow that’s easy to log, see a physician recruiting sequence across email, SMS, and calls.
Legal and ethical use
- Honor opt-outs immediately and suppress across all tools.
- Only contact physicians for legitimate recruiting opportunities; avoid deceptive subject lines.
- Minimize data: store what you need to recruit and report.
- Coordinate with your compliance/legal team on applicable privacy and communications rules in your jurisdictions.
Evidence and trust notes
- Google Postmaster Tools (domain-level deliverability and reputation signals; not per-recipient inbox placement)
- Google Workspace deliverability basics (baseline practices and common causes of delivery issues)
How we approach accuracy and sourcing in Heartbeat resources: Heartbeat trust methodology.
Reminder: this page does not provide any guarantee of inbox placement or outcomes.
FAQs
What is reply rate in physician outreach?
Reply Rate = replies / delivered emails (per 100 delivered emails).
Should reply rate be calculated on delivered or sent?
Delivered. If you use sent, you’ll mix deliverability failures into your reply metric and diagnose the wrong problem.
What counts as a positive reply?
Positive Reply is a response that indicates interest or creates a next step (send details, schedule a call, share timing). Track it separately from any reply.
Do auto-replies or out-of-office messages count as replies?
No. Exclude auto-replies/OOO from reply rate. If you track them, track them separately so they don’t inflate performance.
Do unsubscribes count as replies?
No. Track opt-outs/unsubscribes separately as a compliance and deliverability signal. A reply is a message response; an unsubscribe is a suppression event.
How do I avoid double-counting replies across email and SMS?
Use a single rule: count one reply per person per campaign regardless of channel. Store the first Reply Timestamp in the campaign window so ops can audit edge cases.
Next steps
- If you’re considering static lists: they decay fast. Standard ops is Access + Refresh + Verification + Suppression before you scale sends.
- Implement the ATS field map + data dictionary above and run the weekly report for 3 consecutive weeks without changing definitions.
- Then run one-variable tests on targeting and messaging using the same campaign IDs.
- If you want to operationalize sourcing + outreach faster, start free search & preview data in Heartbeat.ai and build campaigns that are measurable from day one.
About the Author
Ben Argeband is the Founder and CEO of Swordfish.ai and Heartbeat.ai. With deep expertise in data and SaaS, he has built two successful platforms trusted by over 50,000 sales and recruitment professionals. Ben’s mission is to help teams find direct contact information for hard-to-reach professionals and decision-makers, providing the shortest route to their next win. Connect with Ben on LinkedIn.