Make automation for recruiting: 3 branching scenarios (enrichment, dedupe, refresh) with retries + suppression

February 3, 2026

Ben Argeband, Founder & CEO of Heartbeat.ai

Who this is for

You’re running recruiting ops (or you’re the recruiter who also owns ops). You want automations that can branch, retry, and keep your ATS/CRM clean without babysitting every run. This is for ops-minded recruiters who want more control than basic linear automations provide.

Scope: Make scenarios for enrichment, dedupe, and scheduled refresh using Heartbeat.ai via the API, with explicit consent and opt-out gates.

Quick Answer

Core Answer
Build three Make scenarios: enrich via API, dedupe using NPI/license matching, and schedule refresh runs with retries and suppression gates.
Key Insight
Make is a good fit when you need conditional routing, error handling, and scheduled refresh so recruiters stop working stale or duplicated records.
Best For
Ops-minded recruiters who want more control than basic linear automations provide.

Compliance & Safety

This method is for legitimate recruiting outreach only. Always respect candidate privacy, opt-out requests, and local data laws. Heartbeat does not provide medical advice or legal counsel.

Framework: The “Power User” Automation (branching workflows without code)

Recruiting automation fails for predictable reasons: duplicates, stale contact fields, and “one-size-fits-all” write-backs that overwrite good data. The fix is routing: different actions for different confidence levels, plus suppression before anything downstream.

The “Power User” loop:

  • Access: pull the minimum identifiers you need to match the right person.
  • Refresh: re-check on a schedule so contactability doesn’t decay.
  • Verify: only overwrite fields when the match is strong (NPI/license).
  • Suppress: enforce consent/opt-out before any downstream action.

The trade-off: you spend more time designing match rules and branches up front, but you stop paying the hidden tax of recruiter cleanup and duplicate outreach.
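
As a quick mental model, the loop can be read as ordered gates on each record: suppression first, then identity verification, then refresh. The Python sketch below is an illustration only, not Make syntax; the field names (opt_out, consent_status, npi, license_state, license_number, stale) are hypothetical stand-ins for whatever your ATS/CRM exposes.

```python
# A minimal sketch of the loop as ordered gates on a single record.
# Field names are hypothetical placeholders; map them to your own ATS/CRM schema.

def route_record(record: dict) -> str:
    # Suppress: consent/opt-out is checked before anything else can happen.
    if record.get("opt_out") or not record.get("consent_status"):
        return "suppressed"
    # Verify: only NPI or a state license counts as a strong identity match.
    strong_match = bool(record.get("npi")) or bool(
        record.get("license_state") and record.get("license_number")
    )
    if not strong_match:
        return "review"          # append only + human review task
    # Refresh: stale records go to the scheduled refresh scenario.
    if record.get("stale"):
        return "refresh"
    return "ok_to_work"          # recruiter can act on this record

# Example: route_record({"opt_out": True}) -> "suppressed"
```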

Step-by-step method

Below are three Make scenarios that cover the workflows most recruiting teams actually need: enrichment, dedupe, and refresh. Each includes branch conditions and retry logic, plus where to place suppression checks.

Step 0: Decide your identifiers and your “golden record” rules

  • Primary match keys: NPI and license matching (state + license number) when available.
  • Fallback match keys: normalized full name + specialty + city/state (only when NPI/license is missing).
  • Golden record: define which system wins on conflicts (ATS vs CRM vs upload). Write it down.
  • Suppression source of truth: decide where consent and opt-out live and which field(s) Make must check every time.
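
As a worked example of a golden-record rule, a field conflict can be resolved by a simple source-precedence order. The sketch below is hypothetical: the system names and the precedence order are assumptions you would replace with your own written rule.

```python
# Hypothetical precedence: ATS wins over CRM, CRM wins over upload. Adjust to your policy.
PRECEDENCE = {"ats": 0, "crm": 1, "upload": 2}

def resolve_conflict(values: list[tuple[str, str]]) -> str | None:
    """Pick the value from the highest-precedence source; values = [(source, value), ...]."""
    ranked = sorted(
        (pair for pair in values if pair[1]),            # ignore empty values
        key=lambda pair: PRECEDENCE.get(pair[0], 99),
    )
    return ranked[0][1] if ranked else None

# Example: resolve_conflict([("upload", "old@x.com"), ("ats", "new@x.com")]) -> "new@x.com"
```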

Field mapping (minimum viable)

This is the simplest mapping that keeps your workflow debuggable and safe.

  • Inputs to the API lookup (send what you have, prefer stable identifiers): NPI; license state + license number; full name; specialty; city/state.
  • Outputs to store in ATS/CRM (for audit and routing): match method (NPI, license, name-only); match timestamp; last checked; last verified (if you maintain it); automation outcome (enriched_high_confidence, enriched_review, suppressed, refresh_updated, refresh_no_change, refresh_failed).
  • Write-back rule: overwrite contact fields only on NPI/license match; otherwise append and create a review task.
  • Suppression fields: opt-out boolean and consent status must be readable by Make before any write-back or outreach action.
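
To make the mapping concrete, here is a small normalization sketch. The field names are assumed ATS/CRM columns, and the payload shape is illustrative rather than the actual Heartbeat.ai request format (check the API docs for that). Dropping empty keys keeps the lookup from matching on blanks and makes failed lookups easier to debug.

```python
# Illustrative input normalization before the API lookup.
# Field names and the payload shape are assumptions, not the vendor's schema.

def normalize_state(value: str | None) -> str | None:
    """Trim and upper-case a state abbreviation, e.g. ' tx ' -> 'TX'."""
    return value.strip().upper() if value else None

def build_lookup_payload(record: dict) -> dict:
    """Prefer stable identifiers; fall back to name + location only when needed."""
    payload = {
        "npi": (record.get("npi") or "").strip() or None,
        "license_state": normalize_state(record.get("license_state")),
        "license_number": (record.get("license_number") or "").strip() or None,
        "full_name": " ".join((record.get("full_name") or "").split()) or None,
        "specialty": (record.get("specialty") or "").strip() or None,
        "city": (record.get("city") or "").strip() or None,
        "state": normalize_state(record.get("state")),
    }
    # Drop empty keys so the lookup only sees what you actually have.
    return {key: value for key, value in payload.items() if value}
```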

Scenario 1: Enrichment + routing + suppression gate

Goal: When a recruiter creates or updates a lead, enrich it via API, then route based on match confidence and compliance.

Trigger

  • ATS/CRM “New lead” or “Updated lead” event, or a Make webhook.

Modules and logic

  1. Normalize input: standardize state abbreviations, trim whitespace, split full name if needed.
  2. Suppression gate (hard stop): if opt-out is true or consent is missing per your policy, route to “Suppressed” and stop.
  3. Heartbeat.ai API lookup: search using best available identifiers (prefer NPI/license; fall back to name + location).
  4. Router: match confidence
    • Branch A (High confidence): NPI match, or license match plus name alignment → write enriched fields back to ATS/CRM.
    • Branch B (Medium confidence): no NPI/license, but strong name + specialty + geography alignment → write limited fields + create a “Review needed” task.
    • Branch C (Low confidence): multiple possible matches or missing key fields → create a review task only; do not overwrite existing data.
  5. Audit log: store match method (NPI/license/name-only), timestamp, and which fields were updated.

Example Router condition: If NPI is present AND match method equals NPI, route to Branch A. If match method equals name-only, route to Branch B or C depending on how many candidates were returned.
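
Expressed outside of Make, the same Router decision looks roughly like the sketch below; match_method, candidate_count, and name_aligned are hypothetical values you would map from the lookup response.

```python
def route_by_confidence(match_method: str, candidate_count: int, name_aligned: bool) -> str:
    """Mirror of the Router: A = overwrite, B = append + review, C = review only."""
    if match_method == "npi" or (match_method == "license" and name_aligned):
        return "A_overwrite"
    if match_method == "name_only" and candidate_count == 1 and name_aligned:
        return "B_append_and_review"
    return "C_review_only"

# Example: route_by_confidence("name_only", candidate_count=3, name_aligned=True) -> "C_review_only"
```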

Retry logic

  • On transient API failures (timeouts/5xx), retry with backoff (increase delay each attempt) and then route to an ops queue after the final attempt.
  • On “no match,” do not loop retries immediately. Instead, let Scenario 3 re-check after you collect better identifiers.
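
Make’s error handlers implement this pattern visually; for reference, the same backoff logic in plain Python might look like the following. The endpoint URL is a placeholder, not the actual Heartbeat.ai API route.

```python
import time

import requests  # any HTTP client works; shown here for concreteness

def lookup_with_backoff(payload: dict, max_attempts: int = 4) -> dict | None:
    """Retry transient failures (timeouts/5xx) with increasing delay, then give up."""
    delay = 2.0  # seconds; doubled after each failed attempt
    for attempt in range(1, max_attempts + 1):
        try:
            # Placeholder endpoint: substitute the real route from the Heartbeat.ai API docs.
            resp = requests.post("https://api.example.com/lookup", json=payload, timeout=15)
            if resp.status_code < 500:
                return resp.json()  # success, or a non-retryable client error
        except requests.RequestException:
            pass  # network errors and timeouts are treated as transient
        if attempt < max_attempts:
            time.sleep(delay)
            delay *= 2
    return None  # caller routes the record to the ops queue after the final attempt
```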

Scenario 2: Dedupe workflow using NPI/license matching

Goal: Prevent duplicate records across ATS/CRM/uploads so recruiters don’t double-contact the same clinician and your opt-out rules don’t get missed.

Trigger

  • Scheduled run (nightly) and/or “New record created” event.

Modules and logic

  1. Pull candidate set: fetch records created/updated in your chosen window.
  2. Compute dedupe keys:
    • Key 1: NPI
    • Key 2: license state + license number
    • Key 3 (fallback): normalized full name + city/state + specialty
  3. Router: dedupe decision
    • Branch A: same NPI across multiple records → merge/link per your system’s capabilities; keep the most recently verified contact fields.
    • Branch B: same license match across multiple records → merge/link; preserve license fields as authoritative.
    • Branch C: name-only collisions → do not auto-merge; create a review task with both record links.
  4. Write back + audit: update a “dedupe status” field and attach an audit note (what matched, what changed, when).
  5. Suppression propagation: if any merged record is opt-out, propagate opt-out to the surviving record.
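
The grouping behind the dedupe Router is easier to see in a few lines of code. The field names below are assumptions, and the merge itself still happens inside your ATS/CRM.

```python
from collections import defaultdict

def dedupe_key(record: dict) -> tuple[str, str]:
    """Return (key_type, key_value) in priority order: NPI, then license, then name fallback."""
    if record.get("npi"):
        return ("npi", record["npi"])
    if record.get("license_state") and record.get("license_number"):
        return ("license", f'{record["license_state"]}-{record["license_number"]}')
    name = " ".join((record.get("full_name") or "").lower().split())
    location = (record.get("city") or "").lower()
    specialty = (record.get("specialty") or "").lower()
    return ("name_only", f"{name}|{location}|{specialty}")

def find_duplicate_groups(records: list[dict]) -> dict:
    """Group records sharing a dedupe key; groups with more than one record need action."""
    groups = defaultdict(list)
    for record in records:
        groups[dedupe_key(record)].append(record)
    return {key: recs for key, recs in groups.items() if len(recs) > 1}

# Routing rule per the branches above: NPI/license groups go to merge/link,
# name_only groups go to a review task, and opt-out on any member propagates
# to the surviving record before the merge.
```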

Retry logic

  • If merge/link fails (permissions, locked record, API error), route to a manual merge queue with the dedupe keys and record URLs.
  • Do not repeatedly retry locked records; tag them and notify ops.

Scenario 3: Scheduled refresh workflow (keep contactability current)

Goal: Refresh key fields on a schedule so recruiters aren’t calling dead numbers or emailing stale addresses.

Trigger

  • Scheduled run (daily/weekly) segmented by priority (hot pipeline vs long-term nurture).

Modules and logic

  1. Select refresh cohort: records where “last verified” is older than your threshold, excluding opt-outs.
  2. Heartbeat.ai API refresh: request updated contact fields and verification metadata.
  3. Router: change detection
    • Branch A: contact fields changed → update ATS/CRM + log “refreshed.”
    • Branch B: no change → update “last checked” only.
    • Branch C: conflicting identity signals → create a review task; do not overwrite.
  4. Suppression enforcement: re-check consent/opt-out before any downstream action that could trigger outreach.
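
In code form, cohort selection and change detection look roughly like this sketch. The threshold, the watched fields, and the identity_conflict flag are assumptions you would tune per cohort and per system, and last_verified is assumed to be stored as a timezone-aware datetime.

```python
from datetime import datetime, timedelta, timezone

def refresh_cohort(records: list[dict], max_age_days: int) -> list[dict]:
    """Select records due for refresh: last_verified older than the threshold, opt-outs excluded."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [
        r for r in records
        if not r.get("opt_out")
        and (r.get("last_verified") is None or r["last_verified"] < cutoff)
    ]

def detect_change(current: dict, refreshed: dict) -> str:
    """Branch decision after the API refresh; field names are illustrative."""
    if refreshed.get("identity_conflict"):
        return "C_review"              # conflicting identity signals: never overwrite
    watched = ("email", "phone", "license_state", "license_number")
    if any(current.get(f) != refreshed.get(f) for f in watched):
        return "A_update"              # contact fields changed: write back + log "refreshed"
    return "B_touch_last_checked"      # no change: update "last checked" only
```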

Retry logic

  • Retry transient failures with backoff; after the final failure, tag the record “refresh_failed” and re-queue for the next run.
  • Rate-limit the scenario to avoid bursts that trip vendor limits.
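
One simple way to avoid bursts is to pace calls explicitly. The sketch below throttles a batch to a fixed number of requests per minute; the limit itself should come from your vendor’s published rate limits, not from this example.

```python
import time

def paced(items, per_minute: int):
    """Yield items no faster than per_minute, spacing calls evenly to avoid bursts."""
    interval = 60.0 / per_minute
    for item in items:
        started = time.monotonic()
        yield item                      # the caller performs the API call for this item here
        elapsed = time.monotonic() - started
        if elapsed < interval:
            time.sleep(interval - elapsed)

# Usage sketch (per_minute is an assumption; use your vendor's published limit):
# for record in paced(cohort, per_minute=60):
#     refresh_record(record)            # hypothetical helper that calls the API
```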

Implementation Notes

This section is here so ops can build and maintain the scenarios without guessing.

  • Use a Router for branching: route by match confidence (NPI/license/name-only) and by suppression status.
  • Use an Error handler for retries: catch API timeouts/5xx, apply increasing delays, then route to an ops queue with the record URL and error summary.
  • Use scheduling for refresh: run refresh by cohort (hot vs nurture) instead of trying to refresh everything at once.
  • Monitoring & alerts: define what “failure” means (refresh_failed, merge_failed, API_error) and route those items to the place ops actually works (ticket queue or email). Keep the payload minimal (record URL, error code, scenario step, timestamp).
  • Keep an audit field: store match method, timestamps, and which fields were updated. Avoid copying sensitive payloads into free-text notes.
  • Reference: Make’s official docs cover scheduling and error handling patterns in the Make help center.
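
To keep failure alerts consistent across all three scenarios, it helps to agree on one minimal payload shape up front. The dictionary below is a suggestion, not a required format.

```python
from datetime import datetime, timezone

def ops_alert(record_url: str, error_code: str, scenario: str, step: str) -> dict:
    """Minimal, consistent failure payload: enough to triage, nothing sensitive."""
    return {
        "record_url": record_url,      # deep link into the ATS/CRM
        "error_code": error_code,      # e.g. refresh_failed, merge_failed, API_error
        "scenario": scenario,          # enrichment | dedupe | refresh
        "step": step,                  # which module or branch failed
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```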

Diagnostic Table:

  • Duplicates across ATS + uploads
    • What breaks in recruiting: two recruiters work the same clinician; notes split; opt-out missed.
    • Make pattern to use: scheduled dedupe with a Router: NPI → merge/link; license → merge/link; name-only → review task.
    • Data/Heartbeat.ai note: prefer NPI and license matching; propagate opt-out on merge.
  • Enrichment overwrites good fields
    • What breaks in recruiting: a weak match replaces a verified phone/email; recruiters lose trust.
    • Make pattern to use: confidence branches: overwrite only on NPI/license; otherwise append + task.
    • Data/Heartbeat.ai note: log match method and timestamp; keep a “last verified” field.
  • Stale contactability
    • What breaks in recruiting: bounces, wrong numbers, wasted call blocks.
    • Make pattern to use: scheduled refresh cohorts + change-detection branches.
    • Data/Heartbeat.ai note: schedule refresh workflows; separate “last checked” vs “last verified.”
  • Silent failures
    • What breaks in recruiting: the scenario fails quietly; recruiters blame the system.
    • Make pattern to use: error handler routes: retry with backoff → ops queue with record link.
    • Data/Heartbeat.ai note: store request IDs and error summaries for traceability.

Weighted Checklist:

Score each item 0–2. If you score high, Make is usually worth the setup time.

  • (2) You need branching based on match confidence (NPI/license vs name-only).
  • (2) You need scheduled refresh runs (not just event triggers).
  • (2) You need retries + error routing (ops queue) with audit notes.
  • (2) You must enforce consent and opt-out before any downstream action.
  • (1) You need to write back to multiple systems (ATS + CRM + upload).
  • (1) You need dedupe logic that treats NPI/license as authoritative.
  • (1) You can define a golden record rule (which system wins on conflicts).
  • (1) You have an ops owner who will maintain fields and mappings.

Interpretation

  • 0–5: Start with Scenario 1 only (enrichment + suppression + audit).
  • 6–10: Add Scenario 2 (dedupe) once your identifiers are consistently captured.
  • 11–14: Add Scenario 3 (refresh) and formalize monitoring and alerts.

Outreach Templates:

These templates are for the handoff points your automation creates: review tasks, suppression stops, and recruiter-ready summaries. Keep them short so they get used.

Template 1: Review task (medium/low confidence match)

Title: Verify identity before outreach (missing or conflicting identifiers)

Body: Automation found possible matches but did not overwrite fields. Verify using NPI or license matching if available. If confirmed, mark “Verified” and re-run enrichment.

  • Record link: [ATS/CRM URL]
  • Match method attempted: [NPI | license | name-only]
  • Next identifier to collect: [NPI | license state/number]
  • Compliance: confirm consent and check opt-out before outreach

Template 2: Suppression stop notice

Title: Outreach blocked (suppression rule)

Body: This record is suppressed due to opt-out or missing consent per our policy. Do not contact. If you believe this is incorrect, escalate to ops with documentation.

Template 3: Recruiter-ready enrichment summary (high confidence)

Title: Enrichment complete (verified identifiers)

Body: Record updated using NPI/license match. Changes logged. Proceed only if not suppressed.

  • Verified by: [NPI | license]
  • Last checked: [timestamp]
  • Audit note field: [field name]

Common pitfalls

  • Overwriting on weak matches: If you don’t branch by confidence, you’ll corrupt your ATS. Only overwrite on NPI/license matches; otherwise append + review.
  • Suppression too late: Put opt-out/consent checks at the top of the scenario and again before any downstream action that could trigger outreach.
  • No audit trail: If you can’t answer “what changed and why,” ops will lose trust and turn the scenario off.
  • Retry storms: Blind retries can create duplicate updates. Use backoff and route to an ops queue after the final attempt.
  • One scenario doing everything: Separate enrichment, dedupe, and refresh so you can debug and ship changes without breaking the whole pipeline.

How to improve results

Automation only matters if it improves contactability and reduces wasted recruiter time. Measure this by tracking outcomes at each branch and tying them to outreach results.

Measurement instructions

  1. Instrument every branch: write a field like “automation_outcome” with values such as enriched_high_confidence, enriched_review, suppressed, refresh_updated, refresh_no_change, refresh_failed.
  2. Track outreach performance separately from automation health:
    • Deliverability Rate = delivered emails / sent emails × 100.
    • Bounce Rate = bounced emails / sent emails × 100.
    • Connect Rate = connected calls / total dials × 100.
    • Answer Rate = human answers / connected calls × 100.
    • Reply Rate = replies / delivered emails × 100.
  3. Required definition: automation branching is routing a workflow into different paths based on conditions (for example, NPI match vs license match vs name-only), with different actions per path.
  4. Run a controlled comparison: pick one team to run refresh + dedupe for a set period while another team stays manual. Compare bounce/connect/reply rates and recruiter time spent on cleanup.
  5. Review the review queue weekly: if too many records land there, tighten match rules or collect NPI/license earlier in intake.
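
The outreach metrics above are simple ratios expressed per 100 events. The helper below shows the arithmetic with made-up numbers, purely for illustration.

```python
def per_100(numerator: int, denominator: int) -> float:
    """Express a ratio per 100 events; returns 0.0 when the denominator is zero."""
    return round(100 * numerator / denominator, 1) if denominator else 0.0

# Made-up numbers, for illustration only:
sent, delivered, bounced, replies = 500, 465, 35, 28
print("Deliverability:", per_100(delivered, sent))   # 93.0 per 100 sent
print("Bounce rate:", per_100(bounced, sent))        # 7.0 per 100 sent
print("Reply rate:", per_100(replies, delivered))    # 6.0 per 100 delivered
```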

Workflow upgrades that usually pay off

  • Identifier-first intake: add NPI/license fields to intake and make them required when possible.
  • Segment refresh cadence: hot pipeline refreshes more often than long-term nurture.
  • Centralize suppression: keep opt-out in one source of truth and have every scenario check it.

Legal and ethical use

Build compliance into the workflow so it doesn’t depend on memory. For jurisdiction-specific requirements, consult your counsel.

  • Consent and opt-out: store them in explicit fields and check them before write-back and before any outreach action.
  • Data minimization: store only what you need for recruiting workflow; avoid copying sensitive data into free-text notes.
  • Auditability: log match method (NPI/license/name-only), timestamps, and what fields changed.
  • Human review for ambiguity: route low-confidence matches to a task instead of forcing an automated merge.

Evidence and trust notes

For how we think about data quality, verification, and responsible outreach, see our trust methodology for recruiting data.

Make implementation behavior (scheduling, error handling, modules) is documented in the Make help center.

If you’re implementing via Heartbeat.ai, start with the Heartbeat.ai API documentation. Keep suppression rules (consent/opt-out) explicit in your system of record and enforce them in every scenario.

FAQs

What should I automate first in Make for recruiting ops?

Start with enrichment + suppression + audit logging (Scenario 1). It gives recruiters cleaner records without risking duplicate merges or noisy refresh runs.

What identifiers work best for dedupe in clinician recruiting?

Use NPI first, then license matching (state + license number). Use name-only matching only as a fallback and route collisions to review.

How do I prevent enrichment from overwriting good ATS data?

Branch by confidence and only overwrite on NPI/license matches. For medium confidence, append fields and create a review task. Always log what changed.

How do I handle API failures and rate limits in Make?

Use an error handler with backoff retries for transient failures, then route to an ops queue after the final attempt. Rate-limit scheduled refresh runs by cohort instead of refreshing everything at once.

Where should consent and opt-out checks live in the scenario?

At the top of the scenario (hard stop) and again before any downstream action that could trigger outreach. Treat suppression as a shared service so every workflow enforces the same rules.

Next steps

Start with Scenario 1 (enrichment + suppression gate + audit logging). Once identifiers are captured consistently, add the dedupe scenario, then the scheduled refresh run with monitoring and alerts.

About the Author

Ben Argeband is the Founder and CEO of Swordfish.ai and Heartbeat.ai. With deep expertise in data and SaaS, he has built two successful platforms trusted by over 50,000 sales and recruitment professionals. Ben’s mission is to help teams find direct contact information for hard-to-reach professionals and decision-makers, providing the shortest route to their next win. Connect with Ben on LinkedIn.


Access 11M+ healthcare candidates directly with Heartbeat. Try it for free.