Blueprint · Hiring & Mobility · Autonomy Tier: 1

A[i]gent Tool Blueprint: Talent Pipeline Health

Turn noisy recruiting data into a single source of truth. This tool sits on top of ATS data and agent-emitted events to produce a consistent, trustworthy view of the funnel — time-to-fill, pass-through, SLAs, and capacity.

What this Blueprint is

This blueprint defines the operating model: the data model, metric families, dashboard blueprints, and change control so every stakeholder gets the same answer every time.

Who this Blueprint is for

TA Leaders, Recruiting Ops, and People Analytics teams who want one trusted set of funnel health numbers — with guardrails that support Tier-1 autonomy.

What this tool solves

  • Detects and troubleshoots bottlenecks in real time, with early-stage and urgent warnings.
  • Automated threat response and immediate investigations.
  • Insights, recommendations, QA, and self-reporting.
  • Core funnel metrics: time-to-fill, time-to-hire, acceptance rates, conversion rates.
  • Benchmark analysis, trend assessment, and resource/headcount planning.

1. Problem & Purpose

1.1 The Problem

Most recruiting teams live in dashboard chaos:

  • Different teams use different definitions for “time-to-fill”, “pipeline”, or “on-time SLAs”.
  • Numbers pulled from the ATS, Excel, and BI tools rarely match.
  • Leaders argue about whose report is correct instead of fixing bottlenecks.
  • AI/automation pilots generate new data fields, but they’re not tied into a shared model.

The result is a lack of trust in the numbers, and a huge missed opportunity to learn from Screening and Workflow A[i]gents.

1.2 Purpose of Metrics A[i]gent

Metrics A[i]gent is the analytics layer of the A[i]gents suite. Its job is to:

  • Standardize metric definitions via a shared Metrics Dictionary.
  • Ingest clean events from ATS and agents (Screening, Workflow, Capacity tools).
  • Produce a coherent set of dashboards and views for different stakeholders.
  • Act as the “truth broker” when numbers don’t match.

2. Scope & Design Principles

2.1 In-Scope

  • Data model and metric definitions for recruiting funnel analytics.
  • Ingestion of ATS and agent-generated events (screening, interviews, SLAs).
  • Dashboards and reports for TA leadership, recruiters, HMs, and People Analytics.
  • Alignment with a central Metrics Dictionary (definitions, formulas, owners).

2.2 Out-of-Scope (v1)

  • HRIS / post-hire analytics (retention, performance, comp) – can be extended later.
  • Budget and financial forecasting (e.g., fully-loaded cost per hire across all cost centers).
  • Vendor procurement and contract analytics.

2.3 Design Principles

  • Dictionary-first. Every metric is defined in the Metrics Dictionary before it appears on a dashboard.
  • Event-based. Time-to-X metrics come from events and timestamps, not hand-maintained spreadsheets.
  • Explainable. Drill-down views show how each number is calculated.
  • Role-aware. Different audiences see different slices and levels of detail.
  • Composable. Metrics can be reused in new dashboards without redefining them.

3. Roles & Responsibilities

  • TA Leader / Head of Recruiting – defines the questions the org needs to answer; sponsors metric standards.
  • Recruiting Operations – owns the Metrics Dictionary; ensures definitions are implemented consistently.
  • People Analytics / BI – implements the data model and dashboards; maintains data pipelines.
  • Recruiters & Coordinators – generate clean data via ATS hygiene; consume dashboards for day-to-day decisions.
  • Hiring Managers – use views tailored to their open roles; provide feedback on usefulness.
  • Owner: Diane Wilkinson – design of metric taxonomy, funnel views, and A[i]gent integration.

4. Data Model & Event Sources

Metrics A[i]gent is built on an event-based model:

  • Each candidate’s journey is a sequence of events (apply, screen, interview, offer, hire, etc.).
  • Each event has timestamps, stage labels, and metadata (role, location, recruiter, source).
  • Agents (Screening, Workflow) generate additional events and fields, rather than separate spreadsheets.

4.1 Core Entities

  • Requisition / Role – unique opening with attributes like department, location, level, hiring manager.
  • Candidate – individual person; can be associated with multiple requisitions over time.
  • Application – candidate + role + lifecycle events from apply to close.
  • Event – time-stamped action (stage change, interview, offer, agent decision, etc.).

4.2 Event Sources

  • ATS events: application created, stage changes, interview scheduled/complete, offer extended, offer accepted/declined, req opened/closed.
  • Screening A[i]gent events: screening_started_at, screening_decision_at, screening_score_overall, screening_recommendation, risk flags.
  • Interview A[i]gent + execution-layer events: stage_entered_at, interview_scheduled_at, interview_start_at, interview_end_at, scorecard_submitted_at, delay_update_sent_at.
  • Planner / Capacity tools: weekly hiring targets, recruiter capacity assumptions, plan vs actual comparisons.

The Metrics Dictionary ties these events to named metrics and reusable formulas.
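
To make the event model concrete, here is a minimal sketch of deriving one time-based metric from events. The record shape and field names (`application_id`, `ts`) are illustrative assumptions, not a prescribed schema; the event names follow the sources listed above.

```python
from datetime import datetime

# Hypothetical event records in the shape described above: each row is a
# time-stamped action tied to one application.
events = [
    {"application_id": "app-1", "event": "application_created",
     "ts": datetime(2024, 3, 1, 9, 0)},
    {"application_id": "app-1", "event": "screening_decision_at",
     "ts": datetime(2024, 3, 2, 15, 0)},
]

def hours_between(events, start_event, end_event):
    """Elapsed hours between two named events for one application."""
    ts = {e["event"]: e["ts"] for e in events}
    return (ts[end_event] - ts[start_event]).total_seconds() / 3600

# Screening SLA: time from application to first screening decision.
print(hours_between(events, "application_created", "screening_decision_at"))  # 30.0
```

Because every time-to-X metric reduces to "difference between two named events", new metrics can reuse the same helper rather than new hand-maintained columns.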

5. Metric Families & Definitions

Metrics A[i]gent organizes metrics into families, each with a use case and owner.

5.1 Volume & Funnel Metrics

  • Applications by week / month / source.
  • Stage counts (pipeline at each stage).
  • Pass-through rates between stages (Apply → Screen, Screen → Interview, Interview → Offer, Offer → Hire).
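
Pass-through rates fall directly out of stage counts. A minimal sketch, with made-up counts for one role family:

```python
# Illustrative stage counts; all numbers are assumptions for the example.
stage_counts = {"Apply": 400, "Screen": 120, "Interview": 40, "Offer": 12, "Hire": 8}

def pass_through(stage_counts, stages):
    """Pass-through rate between each consecutive pair of stages."""
    return {
        f"{a} -> {b}": stage_counts[b] / stage_counts[a]
        for a, b in zip(stages, stages[1:])
    }

rates = pass_through(stage_counts, ["Apply", "Screen", "Interview", "Offer", "Hire"])
print(rates["Apply -> Screen"])  # 0.3
```

Defining the rate once per stage pair (rather than per dashboard) is what makes the metric reusable across views.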

5.2 Speed & SLA Metrics

  • Time from application to first decision (screening SLA).
  • Time in each stage (days in Screen, HM Interview, Panel, Offer).
  • Time-to-offer and time-to-accept.
  • Scorecard SLAs (on-time %, avg hours to complete).
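
As one worked example from this family, scorecard SLA compliance can be computed straight from the `interview_end_at` and `scorecard_submitted_at` timestamps. The records and the 24-hour threshold below are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Hypothetical scorecard records using the timestamp fields named above.
scorecards = [
    {"interview_end_at": datetime(2024, 3, 4, 11, 0),
     "scorecard_submitted_at": datetime(2024, 3, 4, 18, 0)},   # 7h  -> on time
    {"interview_end_at": datetime(2024, 3, 4, 16, 0),
     "scorecard_submitted_at": datetime(2024, 3, 6, 10, 0)},   # 42h -> late
]

def scorecard_sla(scorecards, sla=timedelta(hours=24)):
    """Share of scorecards submitted within the SLA window."""
    on_time = sum(
        1 for s in scorecards
        if s["scorecard_submitted_at"] - s["interview_end_at"] <= sla
    )
    return on_time / len(scorecards)

print(scorecard_sla(scorecards))  # 0.5
```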

5.3 Quality Metrics

  • Offer rate by source, recruiter, and role family.
  • Screening band correlations with late-stage success.
  • Pass-through rate by Screening A[i]gent recommendation (Advance vs HM Review vs Do Not Advance).

5.4 Capacity & Workload Metrics

  • Req load per recruiter and coordinator.
  • Candidates in process per recruiter (by stage).
  • Plan vs actual hires per period.

5.5 Experience & Health Metrics

  • Candidate delay coverage (% of candidates who received proactive updates).
  • No-show rate by stage.
  • Candidate NPS / CSAT (where measured).
  • Regret reasons distribution (structured, not free-text only).

All metrics reference a single definition in the Metrics Dictionary (name, formula, owner, grain, and filterability).
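
A Metrics Dictionary entry can be as simple as a structured record. The exact schema below (key names, owner) is a sketch, not a prescribed format:

```python
# One illustrative Metrics Dictionary entry: name, formula, owner, grain,
# and allowed filters, as described above.
TIME_TO_FILL = {
    "name": "Time-to-Fill",
    "formula": "offer_accept_date - req_open_date (calendar days)",
    "owner": "Recruiting Ops",
    "grain": "per requisition",
    "filters": ["department", "location", "role_family"],
}

def describe(metric):
    """One-line summary a dashboard tooltip could surface."""
    return f'{metric["name"]} ({metric["grain"]}), owned by {metric["owner"]}'

print(describe(TIME_TO_FILL))  # Time-to-Fill (per requisition), owned by Recruiting Ops
```

Keeping the entry machine-readable means the same definition can drive both the dashboard and its drill-down explanation.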

6. Dashboard Blueprints

Metrics A[i]gent surfaces metrics through role-based dashboards.

6.1 TA Leadership – “Health of Hiring” View

  • Time-to-fill and time-to-accept by department and location.
  • Pass-through and conversion rates by role family.
  • Top bottleneck stages and SLA breaches.
  • Hiring vs plan (headcount targets vs actual starts).

6.2 Recruiting Ops – “System Performance” View

  • Stage duration and SLA adherence at a granular level.
  • Scorecard compliance by interviewer and hiring manager.
  • Source mix and quality (offers and hires by source).
  • Screening A[i]gent override patterns and calibration signals.

6.3 Recruiter – “Desk View”

  • Active reqs and candidates by stage.
  • Upcoming interviews and overdue scorecards for their desk.
  • Projected time-to-fill vs current funnel strength.

6.4 Hiring Manager – “Role Snapshot”

  • Pipeline for their open roles.
  • Where candidates are getting stuck.
  • Screening bands for their candidates at a glance.
  • Time since last activity on each candidate.

7. Integration with Screening & Interview A[i]gents

Metrics A[i]gent doesn’t exist in isolation; it’s downstream from other A[i]gents.

7.1 From Screening A[i]gent

  • Hybrid score and band (Strong / Solid / Partial / Weak).
  • Recommendation (Advance / HM Review / Do Not Advance).
  • Risk flags and override flags.

These enable:

  • Analysis of how well screening bands predict late-stage success.
  • Fairness and calibration reviews.
  • Source quality assessments grounded in screening + downstream outcomes.

7.2 From Interview A[i]gent + Execution Layer

  • Stage entry and exit timestamps for each interview stage.
  • Interview scheduled/start/end times.
  • Scorecard submission times.
  • Delay update events.

These enable:

  • Time-in-stage and time-to-offer metrics.
  • Scorecard SLA and escalation impact analysis.
  • Candidate delay coverage and no-show metrics.

7.3 Capacity & Planner Tools

  • Weekly plan vs actual hires and interviews.
  • Pipeline sufficiency (do we have enough candidates to hit goals?).
  • “What-if” scenarios based on conversion rate assumptions.
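
A pipeline-sufficiency check is essentially stage counts multiplied by assumed conversion rates. A back-of-envelope sketch, where every count and rate is an illustrative assumption:

```python
# Current candidates by stage, and assumed stage-to-hire conversion rates.
funnel = {"Screen": 60, "Interview": 20, "Offer": 5}
conversion = {
    "Screen": 0.25 * 0.6 * 0.8,   # Screen -> Interview -> Offer -> Hire
    "Interview": 0.6 * 0.8,       # Interview -> Offer -> Hire
    "Offer": 0.8,                 # Offer -> Hire
}

def projected_hires(funnel, conversion):
    """Expected hires if every stage converts at its assumed rate."""
    return sum(count * conversion[stage] for stage, count in funnel.items())

target = 15
print(projected_hires(funnel, conversion) >= target)  # True
```

Swapping in different conversion assumptions gives the "what-if" scenarios without touching the underlying data.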

8. Implementation & Stack

Metrics A[i]gent can run on whatever BI stack the company already uses.

8.1 Technical Architecture

In practice, Metrics A[i]gent runs on a simple, modular stack: a data warehouse or central data store (e.g., BigQuery, Snowflake, Redshift), a Python or SQL transformation layer that applies Metrics Dictionary definitions, and a BI layer (Looker, Tableau, internal dashboards) for surfacing views to TA, HMs, and leadership.

Screening and Workflow A[i]gents emit structured events (scores, timestamps, recommendations) that land in the same store as ATS data, so the analytics layer is just another consumer of those events — not a separate shadow system.

8.2 Data Flow (Conceptual)

  • Export or sync ATS data (API, scheduled exports, connectors).
  • Ingest Screening and Workflow A[i]gent events into the same warehouse.
  • Apply a semantic layer that maps raw fields to Metrics Dictionary definitions.
  • Build dashboards and reports on top of this semantic model.
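
The semantic-layer step can be sketched as a simple field mapping: raw source columns are renamed to Metrics Dictionary names, and anything undefined is dropped so dashboards never reference raw fields directly. All field names here are assumptions for illustration:

```python
# Toy semantic layer: raw source fields -> Metrics Dictionary names.
FIELD_MAP = {
    "ats.req_opened": "req_open_date",
    "ats.offer_accepted": "offer_accept_date",
    "screening_agent.decision_ts": "screening_decision_at",
}

def to_semantic(raw_row):
    """Rename mapped fields; drop anything the dictionary doesn't define."""
    return {FIELD_MAP[k]: v for k, v in raw_row.items() if k in FIELD_MAP}

row = {"ats.req_opened": "2024-03-01", "ats.internal_flag": True}
print(to_semantic(row))  # {'req_open_date': '2024-03-01'}
```

In practice this mapping would live in the transformation layer (e.g., SQL views or a dbt-style model), but the principle is the same: one mapping, many consumers.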

8.3 Practical Implementation Tips

  • Start with a small set of “must-have” metrics and dashboards.
  • Prioritize correctness and explainability over complexity.
  • Document assumptions (time zones, working days vs calendar days, etc.).
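
The working-days-vs-calendar-days assumption is worth pinning down explicitly, since it changes every time-to-X number. A minimal sketch (holidays deliberately ignored; a real implementation would subtract a company holiday calendar):

```python
from datetime import date, timedelta

def working_days(start, end):
    """Count Mon-Fri days after `start` up to and including `end`."""
    days = 0
    d = start
    while d < end:
        d += timedelta(days=1)
        if d.weekday() < 5:  # 0-4 = Mon-Fri
            days += 1
    return days

# Friday 2024-03-01 to Wednesday 2024-03-06:
# 3 working days, but 5 calendar days.
print(working_days(date(2024, 3, 1), date(2024, 3, 6)))  # 3
```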

9. Governance & Change Management

Metrics without governance are just fancy numbers.

9.1 Metrics Dictionary Ownership

  • Recruiting Ops owns definitions for recruiting metrics.
  • People Analytics owns implementation in the warehouse / BI tool.
  • Any metric change goes through a lightweight review: name, formula, impact.

9.2 Change Log

  • Maintain a simple change log for metric updates.
  • Communicate changes to recruiters and HMs so they understand shifts in numbers.

9.3 Data Quality

  • Set expectations for ATS data hygiene (stage movement, close reasons, etc.).
  • Monitor data completeness and create feedback loops with recruiters.

Appendix A – Example Metric Mapping

Illustrative mapping of a few key metrics to data fields and formulas.

  • Time-to-Fill – days between requisition open and accepted offer. Key fields: req_open_date, offer_accept_date (per role).
  • Time-in-Stage – days a candidate spends in a given stage. Key fields: stage_entered_at, stage_exited_at (per application, per stage).
  • Offer Rate – offers / candidates who reached final interview. Key fields: offer_extended_flag, reached_final_stage_flag.
  • Screen → Interview Conversion – percent of screened candidates who reach interview. Key fields: screen_completed_flag, interview_scheduled_flag.
  • Scorecard SLA Compliance – % of scorecards submitted within the SLA window. Key fields: interview_end_at, scorecard_submitted_at, SLA threshold.

In the actual Metrics Dictionary, each metric also has owner, grain (per role, per candidate, etc.), and allowed filters.
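
As a worked example of one mapping from the table above, Offer Rate can be computed directly from the two flags. The application records below are illustrative:

```python
# Hypothetical application records carrying the flags named in the mapping.
applications = [
    {"reached_final_stage_flag": True,  "offer_extended_flag": True},
    {"reached_final_stage_flag": True,  "offer_extended_flag": False},
    {"reached_final_stage_flag": False, "offer_extended_flag": False},
    {"reached_final_stage_flag": True,  "offer_extended_flag": True},
]

def offer_rate(applications):
    """Offers extended divided by candidates who reached final interview."""
    finalists = [a for a in applications if a["reached_final_stage_flag"]]
    offers = sum(1 for a in finalists if a["offer_extended_flag"])
    return offers / len(finalists)

print(round(offer_rate(applications), 3))  # 0.667
```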

Appendix B – Example Questions & Views

Metrics A[i]gent is designed to answer questions like:

  • “Where are candidates getting stuck for senior AE roles in North America?”
  • “How much faster do candidates screened as ‘Strong Match’ move through the funnel?”
  • “Which sources produce the highest offer rates and fastest time-to-offer?”
  • “Which hiring managers or interviewers consistently break scorecard SLAs?”
  • “Which roles are at risk of missing their hiring targets based on current funnel strength?”

By standardizing metrics and wiring agents into the same data layer, Metrics A[i]gent turns these questions into reusable views instead of one-off spreadsheets.

Let's Connect

Open to roles in People Analytics, Talent Intelligence, People Ops, and Recruiting Operations — especially teams building internal AI capabilities.