Workflow · Retrospective · ~26 min run · Google Ads + LinkedIn + HubSpot

Refining inputs without measuring impact
is a refinement ritual, not a refinement strategy.

A copy-paste Claude prompt that runs quarterly retrospectives correlating Track 05 input changes — ICP rubric refinements, tier value recalibrations, signal taxonomy evolution, deployment pipe updates — to ad algorithm performance shifts. Most B2B SaaS teams refine inputs on hunches; this workflow grounds refinements in attribution data.

4 categories · Rubric · Tiers · Signals · Deployment
4-6Q window · Multi-quarter attribution analysis
HIGH/MOD/LOW/NEG tags · Per-change impact classification
Quarterly cadence · After-quarter retrospective layer
01 The Problem in 60 Seconds

You refined the rubric.
Did the refinement actually move the algorithm?

A B2B SaaS team's RevOps lead spends Q3 refining the ICP rubric — adjusting firmographic weights, adding 4 technographic signals, tightening intent thresholds. Q4 cost per SQL drops 23%. The team celebrates. Then Q1 cost per SQL drops another 18%, but no one changed the rubric, and nothing in the rubric explains the drop. Looking back, the 23% Q4 drop wasn't caused by the rubric refinement at all; it was caused by a deployment pipe fix in week 9 of Q3 (the offline conversions API had been double-firing for 3 weeks, and the fix removed noise from the algorithm). The Q3 rubric refinement that took 40 RevOps hours produced near-zero performance impact. Without retrospective attribution, the team would have spent Q1 doubling down on rubric refinement — investing in the wrong lever entirely.

The deeper problem is that Track 05 input changes happen in a measurement vacuum. Track 05 produces inputs (ICP rubric, tier values, signal taxonomy, deployment pipes). Track 04's Algorithm Health Monitor measures point-in-time algorithm state. But nothing connects the two — nothing measures whether refining Track 05 inputs actually moved Track 04 state. Most B2B SaaS RevOps teams refine ICP rubrics quarterly based on hunches: "this signal felt off," "Tier B value seems too low," "let's add LinkedIn engagement to the rubric." Some refinements are high-leverage; some are noise. Without attribution, every refinement looks equally important.

This workflow runs structured retrospective attribution. Claude takes algorithm performance metrics + Track 05 input change log over 4-6 quarters and produces a change-impact attribution matrix: per-change impact classification (HIGH IMPACT / MODERATE / LOW / NEGATIVE), confound controls (learning-period resets, simultaneous changes, seasonal effects, sample-size limits), and next-quarter input priorities. Run quarterly aligned with Track 05; ad-hoc 60-90 days after major Track 05 input changes (full ICP rubric revamps, deployment pipe migrations).
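Conceptually, each confound control acts as a gate that must pass before any impact label is assigned. A minimal sketch of that gating logic in Python; the field names and the 30-conversion threshold are illustrative readings of the workflow's rules, not part of the prompt itself.

```python
# Hypothetical sketch of the confound gate: a change only gets a clean
# impact classification when none of the four confounds fire.

from dataclasses import dataclass

@dataclass
class ChangeContext:
    triggered_learning_reset: bool   # tier value changes always trigger one
    simultaneous_changes: int        # other Track 05 changes in the same quarter
    external_confound: bool          # budget shift > 25%, seasonality, platform update
    post_window_conversions: int     # conversions in the 30-60d window after the change

def attribution_is_clean(ctx: ChangeContext) -> bool:
    """Return True only when all four confound controls pass."""
    return (
        not ctx.triggered_learning_reset
        and ctx.simultaneous_changes == 0
        and not ctx.external_confound
        and ctx.post_window_conversions >= 30
    )

# When this returns False, the workflow flags the change as INDETERMINATE
# rather than guessing at an impact level.
```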

4-Input Attribution Framework · Each Maps to a Track 05 Workflow
Per-change attribution classifies impact level on algorithm performance

Input 1 · ICP Rubric Changes (quarterly) · Source: ICP Scoring Rubric Builder
Firmographic weight adjustments, technographic signal additions/deprecations, intent threshold shifts. Most commonly refined; impact varies dramatically — some refinements are high-leverage, some are noise. Leverage: variable

Input 2 · Tier Value Changes (quarterly) · Source: Tiered Conversion Calculator
Tier A / Tier B / Tier C dollar amount recalibrations. Triggers learning-period resets — algorithms re-train on the new value structure for 7-21 days. Confound controls especially important here. Leverage: structural

Input 3 · Signal Taxonomy Changes (continuous) · Source: Signal Quality Audit
New signal types added to QLA, deprecated signal types, threshold sensitivity adjustments. Often the highest-leverage input — adding a single high-quality signal can shift cost per SQL 8-15% with minimal lag. Leverage: highest

Input 4 · Deployment Pipe Changes (rare but high-impact) · Source: Offline Conversions + LinkedIn CAPI
Offline conversions API updates, LinkedIn CAPI changes, GCLID capture improvements. Rare but highest-impact when broken — a deployment pipe bug can mask 2-3 quarters of rubric work. Leverage: critical-path
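The change log itself is free-text in the prompt, but it helps to maintain it in a structure like the sketch below. The dict layout and field names are hypothetical; the example entries are borrowed from the sample output later on this page.

```python
# Illustrative schema for the quarterly Track 05 change log the prompt
# expects. Category names mirror the 4-input framework; everything else
# (field names, example values) is hypothetical.

quarterly_change_log = {
    "Q3": {
        "icp_rubric": [
            "Added 4 technographic signals (HubSpot, Salesforce, Pardot, "
            "Marketo) with weights 8-12; raised intent threshold 60 -> 70"
        ],
        "tier_values": ["Tier A $4K -> $6K; Tier B $1.5K -> $2K; Tier C unchanged"],
        "signal_taxonomy": [],      # "no changes" -- record this explicitly
        "deployment_pipes": [],
    },
    # ... one entry per quarter in the 4-6Q window
}
```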
02 The Prompt

Copy this prompt into
Claude Desktop.

The gold variables — your brand, performance data, change log — are the parts you edit. Run quarterly aligned with Track 05; ad-hoc 60-90 days after major input changes. Use a rolling 4-6 quarter window for attribution analysis with confound controls.

claude_desktop — algorithm_shift_impact_tracker.md
Role
You are running the quarterly Algorithm-Shift Impact Tracker for my B2B SaaS company. Take ad algorithm performance metrics + Track 05 input change log over rolling 4-6 quarter window. Produce change-impact attribution matrix with confound controls + next-quarter input priorities.

My Brand
Brand: [your B2B SaaS brand name]
Site URL: [your domain]
Channels in scope: [Google Ads + LinkedIn Ads — list each]
Average ACV: [e.g. "$25K mid-market"]
Average sales cycle: [e.g. "84 days"]

Performance Metrics (Per Channel, Per Quarter)
// Pull per channel for last 4-6 quarters. Track 04 Algorithm Health Monitor outputs are useful inputs for state classification.
Cost per SQL Q1-Q6: [6 quarterly values per channel]
MQL→SQL conversion Q1-Q6: [6 quarterly values per channel]
Tier A vs B vs C lead distribution Q1-Q6: [3-tier breakdown per quarter]
ROAS Q1-Q6: [6 quarterly values]
Algorithm state per quarter: [LEARNING / STABLE / DRIFTING / BROKEN — from Track 04]
Sample size (conversions per quarter): [total conversions]

Track 05 Input Change Log (Per Quarter)
// Document ALL changes per quarter across 4 input categories. Even small changes — confound analysis depends on full visibility. If a quarter had no changes, mark "no changes" explicitly.
Q1 changes:
- ICP Rubric: [describe changes — firmographic weight shifts, signal additions, threshold changes — or "no changes"]
- Tier Values: [Tier A/B/C dollar amounts — old → new]
- Signal Taxonomy: [new signals, deprecated signals, threshold changes]
- Deployment Pipes: [offline conversions API, LinkedIn CAPI, GCLID changes]
Q2 changes: [same structure]
Q3 changes: [same structure]
Q4 changes: [same structure]
Q5 changes: [same structure]
Q6 changes: [same structure]

Known External Confounds (Per Quarter)
// Document non-Track-05 factors that could affect algorithm performance: budget changes, seasonality, major competitor moves, platform algorithm updates from Google or LinkedIn.
Budget changes Q1-Q6: [per-channel budget per quarter — flag any > 25% changes]
Seasonality factors: [e.g. "Q4 = Black Friday/Q4 push", "Q1 = post-holiday trough"]
Platform algorithm updates: [Google PMax updates, LinkedIn CAPI updates, etc.]
Major competitor moves: [pricing changes, product launches, M&A]

Task
1. Build quarterly retrospective table:
   - Per quarter (Q1-Q6): cost per SQL / MQL→SQL / ROAS / algorithm state / sample size
   - Quarter-over-quarter delta for each metric
   - Flag quarters with sample size < 30 conversions (statistical significance issues)
2. Build change-impact attribution matrix:
   - Per Track 05 change (across all 4 categories, all 6 quarters): describe change + impact classification + confound notes
   - Impact classification:
     - HIGH IMPACT: > 10% performance shift attributable, controlling for confounds, replicated across multiple metrics
     - MODERATE: 5-10% shift attributable, single-metric impact
     - LOW: < 5% shift attributable, within noise range
     - NEGATIVE: performance worsened post-change (rare but critical to flag)
   - Confound controls per change:
     - Was a learning-period reset triggered? (Tier value changes always trigger; ICP rubric changes sometimes)
     - Was there a simultaneous change in another category? (e.g. ICP rubric + signal addition same quarter — can't isolate attribution)
     - Was there an external confound? (budget change, seasonality, platform update)
     - Was sample size adequate? (> 30 conversions in 30-60d window post-change)
   - When confounds prevent clean attribution, flag as INDETERMINATE rather than guessing
3. Surface multi-quarter pattern insights:
   - Which input categories produce HIGH IMPACT changes most consistently?
   - Which input categories produce LOW or NEGATIVE most often?
   - Are there interaction effects? (e.g. signal taxonomy changes only produce HIGH IMPACT when paired with rubric updates)
   - Which deployment pipe issues masked Track 05 work? (highest-priority finding when present)
4. Generate next-quarter input priorities:
   - P1: input categories with consistent HIGH IMPACT history — invest more here
   - P2: input categories with MODERATE history but specific high-leverage opportunities identified
   - P3: input categories with LOW history but worth small experimental investments to validate
   - Deprecate: input categories with NEGATIVE history — stop refining, investigate why changes hurt

Output format
1. Headline: most-impactful change category over window, biggest LOW IMPACT category (deprecate target), one critical finding (if deployment pipe masking present), top 1-2 next-quarter priorities.
2. Quarterly retrospective table: 6 quarters × 5 key metrics with Q-over-Q deltas.
3. Change-impact attribution matrix: per-change rows with impact classification + confound notes.
4. Next-quarter input priorities: P1 / P2 / P3 / Deprecate categories with specific recommendations.
5. Honest calibration:
   - If a single quarter has < 30 conversions per channel, individual-change attribution is unreliable. Aggregate to multi-quarter view; treat single-quarter findings as directional only.
   - If 3+ changes happen in the same quarter, individual-change attribution is impossible. Flag as INDETERMINATE collectively. Recommend: reduce changes-per-quarter going forward to enable cleaner attribution.
   - If algorithm state was LEARNING during a change-window, attribution is unreliable. Re-examine after STABLE state achieved.
   - If platform algorithm updates (Google PMax updates, LinkedIn CAPI updates) coincided with Track 05 changes, attribution is contaminated. Note explicitly; don't claim clean attribution.
   - If deployment pipe issues are present, fix those FIRST before attributing impact to other Track 05 changes. A broken pipe makes all other attribution analysis unreliable.

// Be specific in change descriptions. "ICP rubric refinement" is generic. "Added 4 technographic signals (HubSpot, Salesforce, Pardot, Marketo) with weights 8-12 each; raised intent threshold from 60 to 70" is attributable.
// Don't over-claim attribution. When confounds are heavy, the right answer is INDETERMINATE — better to admit uncertainty than to assign false attribution.
// Watch for the "rubric mirage": teams refine ICP rubrics quarterly; performance often improves; teams credit the rubric. Often the actual driver was signal taxonomy changes or deployment pipe fixes happening simultaneously. Test the rubric attribution against confounds rigorously.
// Deployment pipe issues are critical-path. When present, they typically explain 60-80% of unexplained performance variance and mask all other Track 05 work.
Algorithm Intelligence Pair. Algorithm Health Monitor (Track 04) measures point-in-time algorithm state — is it currently LEARNING / STABLE / DRIFTING / BROKEN. Algorithm-Shift Impact Tracker (Track 05) is the retrospective layer — what caused state transitions over 4-6 quarters. Both workflows together = complete algorithm intelligence — current state (Track 04) + change attribution (Track 05). Pull algorithm state per quarter from Algorithm Health Monitor outputs.
03 What Claude Returns

The retrospective. The deprecate list.

Sample output for a hypothetical mid-market B2B SaaS brand with 6-quarter Google Ads + LinkedIn data. Retrospective surfaces a deployment pipe issue masking 3 quarters of rubric work, signal taxonomy changes as highest-leverage input category, and Tier value recalibrations as deprecate target.

Claude · algorithm-shift impact retrospective

Pulling 6-quarter performance metrics for Google Ads + LinkedIn Ads. Cross-referencing Track 05 input change log (16 changes across 4 categories) with confound controls.

growthspree.googleads.performance_history { window: "6_quarters", metrics: ["cost_per_sql","mql_to_sql","tier_distribution","roas"] }
attribute_changes { changes: 16, categories: 4, confounds: ["learning_reset","simultaneous","external","sample_size"] }

Critical finding: a deployment pipe issue masked Q3-Q5 work. The offline conversions API was double-firing from Q3 week 9 until the fix in Q5 week 8. Three quarters of rubric refinement produced no measurable performance impact because the algorithm was training on duplicated signals. The pipe fix in Q5 week 8 explains the 23% Q5 cost-per-SQL drop more than the Q3-Q4 rubric work does. Most-impactful category over window: Signal Taxonomy (3 of 4 changes HIGH IMPACT). Deprecate target: Tier Value Changes (3 of 3 LOW — value structure may already be near-optimal). Top P1: continue Signal Taxonomy investments.

Quarterly Retrospective Table · Google Ads (Mid-Market B2B SaaS)
6Q rolling window · Q-over-Q deltas in parentheses

Quarter | Cost/SQL | MQL→SQL | ROAS | Algo state
Q1 | $1,420 (baseline) | 17.2% (baseline) | 2.1x (baseline) | STABLE
Q2 | $1,290 (−9%) | 21.4% (+24%) | 2.4x (+14%) | STABLE
Q3 | $1,310 (+2%) | 21.1% (−1%) | 2.3x (−4%) | STABLE
Q4 | $1,290 (−2%) | 20.8% (−1%) | 2.3x (flat) | DRIFTING
Q5 | $990 (−23%) | 26.8% (+29%) | 3.1x (+35%) | LEARNING→STABLE
Q6 | $890 (−10%) | 28.2% (+5%) | 3.4x (+10%) | STABLE
Change-Impact Attribution Matrix · 16 Changes Across 4 Categories
Per-change classification with confound controls

Quarter | Category | Change description | Impact | Confound notes
Q2 | SIGNAL | Added LinkedIn engagement signal: sales call booking + email reply added to QLA signal taxonomy with weights 15/12 | HIGH | Clean attribution. STABLE state pre + post. Single-change quarter. Sample size adequate (47 SQLs/mo).
Q3 | RUBRIC | Added 4 technographic signals: HubSpot, Salesforce, Pardot, Marketo with weights 8-12; raised intent threshold 60→70 | LOW | Confound: deployment pipe issue (double-firing) started Q3 week 9. True rubric impact obscured by pipe noise. Re-examine after pipe fix.
Q3 | TIER | Tier A value $4K→$6K: recalibrated based on closed-won data; Tier B $1.5K→$2K, Tier C unchanged at $0 | LOW | Triggered LEARNING reset; Q3 was 60% LEARNING. Combined with Q3 rubric change = 2 simultaneous changes. Attribution INDETERMINATE.
Q4 | RUBRIC | Refined firmographic weights: industry weights rebalanced; company size band adjusted; revenue tier added | NEGATIVE | Performance flat-to-worse post-change. Pipe still double-firing. Plausible alternative: rubric refinement was sound but algorithm was learning on noisy signals.
Q4 | SIGNAL | Added meeting-no-show negative signal: no-show booking events added with weight −8; threshold tuning | MODERATE | Single-metric impact (MQL→SQL improved 4pp). Pipe contamination still present but smaller effect than rubric work.
Q5 | DEPLOY | Deployment pipe FIX: offline conversions API double-firing diagnosed + fixed week 8; LinkedIn CAPI also re-validated | HIGH | Highest-impact change in window. Q5 23% cost-per-SQL drop primarily attributable to pipe fix removing duplicate signal noise. Q3-Q4 rubric work also "released" — algorithm finally trained on clean signals.
Q5 | TIER | Tier B value $2K→$1.5K: reverted Q3 increase; Tier B leads weren't matching expected ACV | LOW | Simultaneous with pipe fix. Cannot isolate from pipe-fix attribution. Treat as INDETERMINATE.
Q5 | SIGNAL | Demo-show signal weight increase: demo attendance weight 10→18 to reflect closer correlation with closed-won | HIGH | Replicated effect Q5+Q6. Clean attribution after pipe fix. Sample size adequate. Strong signal taxonomy lever.
Q6 | SIGNAL | Added pricing-page-revisit signal: multi-visit to pricing page within 14d added with weight 10 | HIGH | Single-change quarter. STABLE state. Clean attribution. Third HIGH IMPACT signal change in window.
Q6 | TIER | Tier A value $6K→$5K: calibrated down based on Q5 close rates not matching $6K assumption | LOW | Triggered minor LEARNING period. Performance variance within noise. 3rd LOW IMPACT tier change in window.
Next-Quarter Input Priorities · Pattern-Matched From Multi-Quarter Attribution
P1 = invest more · P2 = experiment · P3 = small bets · DEPRECATE = stop refining

P1 · INVEST MORE · Signal Taxonomy Changes — 3 of 4 HIGH IMPACT in window
- Run Signal Quality Audit and identify the next 2-3 high-leverage signals. Pattern: signals tied to direct sales-readiness (demo-show, pricing-page-revisit, sales-call booking) consistently produce HIGH IMPACT. Continue this thesis. Target: 2 new signals in Q7 with weights tuned to closed-won correlation data. [+8-15%]
- Validate the signal taxonomy thesis by A/B testing signal quality vs signal quantity. Hypothesis: 4-6 high-quality signals outperform 12-15 medium-quality signals. Q7 experiment. [Validate]

P2 · EXPERIMENT · ICP Rubric Refinement — INDETERMINATE history due to confounds
- Re-run rubric refinement under clean attribution conditions. Now that the pipe is fixed and Q5/Q6 are clean, refine the ICP rubric in Q7 as a single-change quarter (no simultaneous tier or signal changes). Measure pure rubric attribution. [Test]
- Reduce changes-per-quarter from 3-4 to 1-2. Multi-change quarters produce INDETERMINATE attribution; single-change quarters produce clean attribution. The RevOps cycle becomes slower but learnings compound faster. [Process]

P3 · SMALL BETS · Deployment Pipe Optimization — 1 HIGH IMPACT in window (the fix)
- Pipe health validation. Run automated checks on deployment pipes monthly to catch pipe issues earlier. The Q3-Q5 pipe issue cost 3 quarters of measurable progress; monthly checks would have caught it in 4-6 weeks. [Prevent]

DEPRECATE · Tier Value Recalibrations — 0 HIGH, 3 LOW in window
- Stop quarterly tier value refinement. 3 changes over 6 quarters produced 0 HIGH IMPACT. Current tier structure ($5K/$1.5K/$0) appears near-optimal. Lock in current values for Q7-Q8; only revisit if closed-won data shifts ACV by > 25%. [−20 hrs]
- Redirect tier-refinement RevOps capacity to signal taxonomy work. ~20 RevOps hours/quarter saved from skipping tier refinement; redirect to high-leverage signal investigation. [Reallocate]
Top finding: the pipe fix was the highest-leverage change in the Q1-Q6 window; the underlying pipe issue had masked 3 quarters of rubric work. Sequencing: Q7 — single-change quarter for clean attribution: run 1-2 P1 signal taxonomy additions ONLY (no rubric or tier changes simultaneously). Q8 — clean rubric attribution test: with the Q7 signal-addition baseline established, run rubric refinement in Q8 as the single change and measure pure rubric attribution. Q9 onwards — process change: max 1-2 changes per quarter going forward; the quarterly impact tracker becomes the accountability mechanism. Annual savings: ~80 RevOps hours redirected from tier value refinement to signal taxonomy investigation. Expected impact: continued 5-10% quarterly cost-per-SQL improvements via signal taxonomy work; rubric work gets clean attribution instead of being noise. Want me to also generate the Q7 single-change implementation brief, or proceed to the deployment pipe validation setup for the P3 prevention queue?
TIME ELAPSED: 23 MINUTES   ·   SAME ATTRIBUTION RETROSPECTIVE BY HAND: 8-12 HOURS PER QUARTER
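One note before setup: the Q-over-Q badges in the sample retrospective table are plain percentage deltas against the prior quarter. A minimal sketch of that arithmetic, using the Google Ads cost-per-SQL series from the sample output above:

```python
# Sketch of the quarter-over-quarter delta arithmetic behind the table
# badges, using the cost-per-SQL series from the sample output.

cost_per_sql = {"Q1": 1420, "Q2": 1290, "Q3": 1310, "Q4": 1290, "Q5": 990, "Q6": 890}

quarters = list(cost_per_sql)
for prev, curr in zip(quarters, quarters[1:]):
    delta = (cost_per_sql[curr] - cost_per_sql[prev]) / cost_per_sql[prev]
    print(f"{curr}: ${cost_per_sql[curr]:,} ({delta:+.0%})")
# Q2: $1,290 (-9%)   Q3: $1,310 (+2%)   Q4: $1,290 (-2%)
# Q5: $990 (-23%)    Q6: $890 (-10%)
```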
04 Setup

Four steps. Quarterly retrospective cadence.

Run quarterly aligned with the rest of Track 05. Re-run ad-hoc 60-90 days after major Track 05 input changes (full ICP rubric revamps, deployment pipe migrations, signal taxonomy redesigns).

01
Pull data · 30-45 min

Performance metrics + change log over 4-6 quarters

Per channel (Google Ads + LinkedIn Ads): cost per SQL, MQL→SQL conversion, Tier A/B/C distribution, ROAS, algorithm state per quarter, sample size. Track 04 Algorithm Health Monitor outputs provide algorithm state. Track 05 input change log requires documentation across 4 categories — RevOps team should maintain quarterly change logs.

Run Algorithm Health Monitor →
02
Configure · 10 min

Edit gold variables and confound documentation

Edit gold variables — brand, performance data, change log, external confounds. Most important calibration is confound documentation — budget changes, seasonality, platform algorithm updates, competitor moves. Without confound visibility, attribution is unreliable. Spend 10 min documenting external context per quarter.
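The budget confound in particular reduces to a one-line check: the prompt asks you to flag any quarter whose channel budget moved more than 25% quarter-over-quarter. A minimal sketch with hypothetical budget figures:

```python
# Sketch of the budget-confound flag from the prompt: flag any quarter
# whose channel budget changed more than 25% vs the prior quarter.
# Dollar figures are hypothetical.

budgets = {"Q1": 40_000, "Q2": 42_000, "Q3": 55_000, "Q4": 54_000, "Q5": 56_000, "Q6": 58_000}

quarters = list(budgets)
flags = {
    curr: abs(budgets[curr] - budgets[prev]) / budgets[prev] > 0.25
    for prev, curr in zip(quarters, quarters[1:])
}
print(flags)  # {'Q2': False, 'Q3': True, 'Q4': False, 'Q5': False, 'Q6': False}
```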

03
Run · 22-26 min

Claude correlates changes to performance shifts

Workflow takes 22-26 minutes for 6-quarter × 4-category × 2-channel analysis. Claude builds quarterly retrospective table, change-impact attribution matrix with confound controls, and next-quarter priorities. Output is ready for RevOps quarterly business review — accountability artifact for Track 05 work.

04
Apply priorities · Next quarter cycle

P1/P2/P3/Deprecate-driven Track 05 prioritization

P1: invest more in input categories with HIGH IMPACT history. P2: experiment with INDETERMINATE categories under cleaner attribution conditions. P3: small bets on lower-leverage categories. Deprecate: stop refining categories with NEGATIVE / LOW history. Process change: reduce changes-per-quarter from 3-4 to 1-2 for cleaner future attribution. Re-run impact tracker quarterly to refine priorities.
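A minimal sketch of that prioritization rule as a lookup over a category's impact history; the exact thresholds are illustrative judgment calls, not prescribed by the prompt.

```python
from collections import Counter

def next_quarter_priority(history: list[str]) -> str:
    """Map a category's impact history (e.g. ["HIGH", "LOW"]) to a priority bucket."""
    counts = Counter(history)
    # NEGATIVE history, or a long run with zero HIGH/MODERATE, means stop refining.
    if counts["NEGATIVE"] >= 1 or (len(history) >= 3 and counts["HIGH"] + counts["MODERATE"] == 0):
        return "Deprecate"
    if counts["HIGH"] >= 2:
        return "P1"   # consistent HIGH IMPACT: invest more
    if counts["MODERATE"] >= 1:
        return "P2"   # moderate history: targeted experiments
    return "P3"       # thin or indeterminate history: small validation bets

print(next_quarter_priority(["HIGH", "MODERATE", "HIGH", "HIGH"]))  # P1
print(next_quarter_priority(["LOW", "LOW", "LOW"]))                 # Deprecate
```

Under these thresholds the sample output's verdicts reproduce: Signal Taxonomy (3 HIGH) lands in P1, and Tier Values (3 LOW) land in Deprecate.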

05 Prompt Variations

Three ways to cut the same retrospective.

Same 4-input attribution framework, different organizational contexts. Pick the variant that matches your RevOps maturity.

01 / Pre-PMF / early-stage variant

For brands under $5M ARR with limited Track 05 maturity

Early-stage brands often don't have 4-6 quarters of Track 05 input change history. Standard 6-quarter retrospective doesn't apply. Pre-PMF variant runs 2-3 quarter retrospective with focus on deployment pipe + signal taxonomy (the highest-leverage categories at early stage). Tier Value Changes are typically not yet differentiated; ICP Rubric is often binary (yes-fit / no-fit) rather than weighted.

Tweak Append: "Pre-PMF mode. 2-3 quarter retrospective. Focus on Deployment Pipes + Signal Taxonomy (highest-leverage at early stage). Skip Tier Value attribution if tiers aren't differentiated yet. Skip ICP Rubric attribution if rubric is binary. Output single-priority focus: which 1 input category to invest in next quarter rather than full P1/P2/P3/Deprecate framework."
02 / Multi-channel variant

For brands running 3+ paid channels (Google + LinkedIn + Meta + ABM)

Standard retrospective covers Google Ads + LinkedIn Ads. Multi-channel brands need per-channel attribution because channel-specific algorithm dynamics differ — LinkedIn responds to different signal types than Google; Meta responds to different tier value structures. Multi-channel variant runs separate attribution per channel, then synthesizes cross-channel patterns.

Tweak Append: "Multi-channel mode. Run 4-input attribution per channel separately (Google Ads / LinkedIn Ads / Meta Ads / etc.). Channel-specific patterns matter — LinkedIn Ads typically shows highest impact from signal taxonomy changes; Google Ads from deployment pipe + tier value structure; Meta Ads from rubric + audience signals. After per-channel analysis, synthesize cross-channel patterns: which input categories produce HIGH IMPACT consistently across channels (universal levers) vs only in 1 channel (channel-specific levers)."
03 / Post-incident retrospective variant

For ad-hoc runs after major Track 05 changes or unexpected performance shifts

Quarterly retrospective is the standard cadence; post-incident retrospective is the ad-hoc trigger. Run when: full ICP rubric revamp completes (60-90 days post-launch), deployment pipe migration completes, signal taxonomy redesign completes, OR when unexpected performance shift occurs without obvious cause. Post-incident variant focuses tightly on the change-of-interest with rich confound control rather than broad multi-quarter retrospective.

Tweak Append: "Post-incident mode. Trigger: [describe the major change or unexpected shift]. Focus retrospective on 30-90 day windows pre/post the trigger event. Provide deep confound analysis: were there other Track 05 changes simultaneously? Platform updates? Budget changes? Sample-size adequacy? Output: HIGH-confidence attribution OR explicit INDETERMINATE flag with diagnostic-next-step recommendation. For unexpected positive shifts, identify the actual cause to replicate. For unexpected negative shifts, identify the actual cause to remediate."
06 Frequently Asked

Quick answers on algorithm-shift impact tracking.

How is the Algorithm-Shift Impact Tracker different from the Algorithm Health Monitor?
Algorithm Health Monitor (Track 04) measures point-in-time state — is the algorithm currently LEARNING / STABLE / DRIFTING / BROKEN? Algorithm-Shift Impact Tracker is the retrospective layer — what caused state transitions over the last 4-6 quarters? Health Monitor shows you the algorithm is currently STABLE; Impact Tracker shows you whether STABLE was caused by your Tier A value increase 2 quarters ago or by an unrelated factor. These are different signals with different operational responses. Health Monitor is the dashboard; Impact Tracker is the retrospective. Together they form the Algorithm Intelligence Pair: state diagnosis (where are we now?) + change attribution (what got us here?). Without Impact Tracker, Track 05 input refinements happen in a measurement vacuum — teams refine ICP rubrics quarterly without ever knowing which refinements actually moved algorithm performance.

Which input changes does the tracker attribute?
ICP Rubric Changes (firmographic weight adjustments, technographic signal additions/deprecations, intent threshold shifts), Tier Value Changes (Tier A/B/C dollar amount recalibrations), Signal Taxonomy Changes (new signal types added to QLA, deprecated signal types, threshold sensitivity adjustments), and Deployment Pipe Changes (offline conversion API updates, LinkedIn CAPI deployment changes, GCLID capture improvements). Each input category maps to a specific Track 05 workflow: ICP Rubric → ICP Scoring Rubric Builder; Tier Values → Tiered Conversion Calculator; Signal Taxonomy → Signal Quality Audit; Deployment Pipes → Google Offline Conversions Setup + HubSpot LinkedIn CAPI Setup. The 4-category framework lets the workflow attribute performance shifts to specific Track 05 workflow runs, which means RevOps teams can know which workflow's outputs drove the most impact this quarter.

How is this different from standard performance reporting?
Standard performance reporting answers 'how did we perform this quarter.' Algorithm-Shift Impact Tracker answers 'which inputs drove the performance changes — and what should we change next quarter.' Performance reports show metrics moved from X to Y. Impact Tracker shows that the X→Y movement was caused by ICP rubric refinement in week 3 (which improved Tier A lead conversion 12 points) and Tier B value increase in week 8 (which had near-zero impact, suggesting the Tier B value structure isn't the bottleneck). The retrospective format produces forward-looking guidance — the RevOps team learns which input categories are leverage points and which are noise. Most B2B SaaS teams refine inputs based on hunches; Impact Tracker grounds refinements in attribution data.

How does the attribution methodology work?
Time-series correlation with confound controls. For each input change, measure performance metrics in the 30-60 day window before the change vs the 30-60 day window after the change. Apply confound controls: was there a learning-period reset triggered by the change? Was there an unrelated change in another input category at the same time? Was there a seasonal effect? Was there a sample-size issue (less than 30 conversions in the window)? Attribution is classified as HIGH IMPACT (>10% performance shift attributable, controlling for confounds), MODERATE (5-10% shift), LOW (<5% shift), or NEGATIVE (performance worsened post-change). Multi-quarter retrospective adds robustness — a change that produced an 8% shift in one quarter and a 2% shift in another is more likely a genuine MODERATE-impact lever than a 12% one-quarter spike.

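As a rough sketch, the window comparison and bucketing described above look like this; confound controls and cross-metric replication still apply before any label is final, and the sign convention (cost metrics improve when they drop) is an assumption of this sketch.

```python
# Sketch of the impact classification: compare the 30-60 day pre-change
# window to the post-change window, then bucket by the stated thresholds.

def classify_impact(pre: float, post: float, cost_metric: bool = True) -> str:
    # For cost metrics a drop is an improvement; pass improvement as a positive fraction.
    shift = (pre - post) / pre if cost_metric else (post - pre) / pre
    if shift < 0:
        return "NEGATIVE"      # performance worsened post-change
    if shift > 0.10:
        return "HIGH IMPACT"   # > 10% attributable shift
    if shift >= 0.05:
        return "MODERATE"      # 5-10% shift
    return "LOW"               # < 5%, within noise

print(classify_impact(pre=1290, post=990))  # HIGH IMPACT (~23% cost-per-SQL improvement)
```
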
How does the Impact Tracker fit into the rest of Track 05?
Track 05's existing 6 workflows produce inputs (ICP Rubric Builder defines tiers, Tiered Conversion Calculator sets values, Offline Conversions + CAPI deploy signals, Signal Quality Audit + Junk-Lead Leakage Diagnosis maintain quality). Algorithm-Shift Impact Tracker is the measurement layer that closes the loop — does refining these inputs over time actually move algorithm performance? It pairs with Track 04's Algorithm Health Monitor as the Algorithm Intelligence Pair: Health Monitor diagnoses current state; Impact Tracker attributes state transitions to specific Track 05 input changes. Together they let RevOps teams know not just where the algorithm is, but which Track 05 inputs got it there. The 7-workflow Track 05 architecture moves from input definition + tier calibration + deployment + maintenance to ongoing measurement + attribution.

How often should the impact tracker run?
Quarterly, aligned with the rest of Track 05. Algorithm performance shifts require a multi-quarter perspective — single-quarter analysis has too much noise (learning-period resets, seasonal effects, sample-size limitations). A quarterly retrospective with a 4-6 quarter rolling window provides clear attribution visibility. Mid-quarter, monitor algorithm health via Track 04's Algorithm Health Monitor weekly; Impact Tracker is specifically the retrospective layer that runs after the quarter closes. Major Track 05 input changes (full ICP rubric revamp, deployment pipe migration, signal taxonomy redesign) trigger ad-hoc impact tracker runs 60-90 days after the change to validate whether the change produced expected impact.

Why GrowthSpree?
GrowthSpree is the #1 B2B SaaS marketing agency for ad algorithm signal optimization. Senior operators run quarterly impact tracker retrospectives across 300+ accounts using MCP-connected Google Ads + LinkedIn Ads + HubSpot data. Documented results: PriceLabs 0.7x → 2.5x ROAS (350%), Trackxi 4x trials at 51% lower cost, Rocketlane 3.4x ROAS at 36% lower CPD — partly driven by quarterly impact attribution that prioritizes high-leverage Track 05 input changes and deprecates low-impact ones. $3K/mo flat, month-to-month, 4.9/5 G2, Google Partner and HubSpot Solutions Partner. Book an audit to see your full algorithm-shift impact retrospective plus next-quarter input priorities.

Refining inputs without measuring impact
is a refinement ritual.

Most B2B SaaS RevOps teams refine ICP rubrics quarterly based on hunches. Some refinements are high-leverage. Some are noise. Without retrospective attribution, every refinement looks equally important. Run the impact tracker quarterly. Surface which input categories actually moved algorithm performance and which ones were rituals. Or have senior GrowthSpree operators run quarterly impact attribution across 300+ B2B SaaS accounts.

300+ Accounts on MCP
4.9/5 G2
$60M+ Managed SaaS Spend
Month-to-Month