A copy-paste Claude prompt that runs quarterly retrospectives correlating Track 05 input changes — ICP rubric refinements, tier value recalibrations, signal taxonomy evolution, deployment pipe updates — with ad algorithm performance shifts. Most B2B SaaS teams refine inputs on hunches; this workflow grounds refinements in attribution data.
A B2B SaaS team's RevOps lead spends Q3 refining the ICP rubric: adjusting firmographic weights, adding 4 technographic signals, tightening intent thresholds. Q4 cost per SQL drops 23%. The team celebrates. Then Q1 cost per SQL drops another 18%, and no one touched the rubric; nothing in it explains the drop. Looking back, the 23% Q4 drop wasn't caused by the rubric refinement either. It was caused by a deployment pipe fix in week 9 of Q3: the offline conversions API had been double-firing for 3 weeks, and the fix removed noise from the algorithm's training signal. The Q3 rubric refinement that took 40 RevOps hours produced near-zero performance impact. Without retrospective attribution, the team would have spent Q1 doubling down on rubric refinement, investing in the wrong lever entirely.
The deeper problem is that Track 05 input changes happen in a measurement vacuum. Track 05 produces inputs (ICP rubric, tier values, signal taxonomy, deployment pipes). Track 04's Algorithm Health Monitor measures point-in-time algorithm state. But nothing connects the two — nothing measures whether refining Track 05 inputs actually moved Track 04 state. Most B2B SaaS RevOps teams refine ICP rubrics quarterly based on hunches: "this signal felt off," "Tier B value seems too low," "let's add LinkedIn engagement to the rubric." Some refinements are high-leverage; some are noise. Without attribution, every refinement looks equally important.
This workflow runs structured retrospective attribution. Claude takes algorithm performance metrics plus the Track 05 input change log over 4-6 quarters and produces a change-impact attribution matrix: a per-change impact classification (HIGH IMPACT / MODERATE / LOW / NEGATIVE / INDETERMINATE), confound controls (learning-period resets, simultaneous changes, seasonal effects, sample-size limits), and next-quarter input priorities. Run it quarterly, aligned with Track 05, and ad-hoc 60-90 days after major Track 05 input changes (full ICP rubric revamps, deployment pipe migrations).
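To make the matrix concrete, here's a minimal Python sketch of the two records the attribution works over. Field names, categories, and shapes are illustrative assumptions, not a schema the prompt requires:

```python
from dataclasses import dataclass, field
from enum import Enum

class Impact(Enum):
    HIGH = "HIGH IMPACT"
    MODERATE = "MODERATE"
    LOW = "LOW"
    NEGATIVE = "NEGATIVE"
    INDETERMINATE = "INDETERMINATE"  # confounds prevented a clean read

@dataclass
class InputChange:
    """One row of the Track 05 input change log (illustrative fields)."""
    quarter: str       # e.g. "2025-Q3"
    category: str      # ICP Rubric | Tier Values | Signal Taxonomy | Deployment Pipe
    description: str   # what changed, in one line
    week_shipped: int  # week within the quarter

@dataclass
class AttributionRow:
    """One row of the change-impact attribution matrix."""
    change: InputChange
    impact: Impact
    confounds: list = field(default_factory=list)  # e.g. "learning-period reset", "budget +40%"
    evidence: str = ""  # the metric movement supporting the call
```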
The gold variables — your brand, performance data, change log — are the parts you edit. Keep the cadence above, and use a rolling 4-6 quarter window for the attribution analysis, with confound controls.
Sample output for a hypothetical mid-market B2B SaaS brand with 6 quarters of Google Ads + LinkedIn data. The retrospective surfaces a deployment pipe issue masking 3 quarters of rubric work, signal taxonomy changes as the highest-leverage input category, and tier value recalibrations as the deprecate target.
Pulling 6-quarter performance metrics for Google Ads + LinkedIn Ads. Cross-referencing Track 05 input change log (16 changes across 4 categories) with confound controls.
Critical finding: a deployment pipe issue masked Q3-Q5 work. The offline conversions API was double-firing for 11 weeks (Q4 weeks 4-14). 3 quarters of rubric refinement produced no measurable performance impact because the algorithm was training on duplicated signals. The pipe fix in Q5 week 8 explains the 23% Q5 cost-per-SQL drop better than the Q3-Q4 rubric work does. Most impactful category over the window: Signal Taxonomy (3 of 4 changes HIGH IMPACT). Deprecate target: Tier Value Changes (2 of 3 LOW, 1 NEGATIVE; the value structure may already be near-optimal). Top P1: continue Signal Taxonomy investments.
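A double-firing pipe like this one is detectable long before it eats three quarters of work. A minimal sketch, assuming a pandas export of offline conversion uploads; the column names ('click_id', 'conversion_action', 'conversion_time') are placeholders to map onto your own export:

```python
import pandas as pd

def weekly_duplicate_rate(events: pd.DataFrame) -> pd.Series:
    """Share of offline conversion rows that are exact re-uploads, per week.

    Assumes illustrative columns 'click_id', 'conversion_action', and
    'conversion_time'. A sustained spike in this rate is the signature
    of a double-firing pipe.
    """
    key = ["click_id", "conversion_action", "conversion_time"]
    dupes = events.duplicated(subset=key, keep="first")
    weeks = pd.to_datetime(events["conversion_time"]).dt.to_period("W")
    return dupes.groupby(weeks).mean()
```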
Run quarterly aligned with the rest of Track 05. Re-run ad-hoc 60-90 days after major Track 05 input changes (full ICP rubric revamps, deployment pipe migrations, signal taxonomy redesigns).
Per channel (Google Ads + LinkedIn Ads): cost per SQL, MQL→SQL conversion rate, Tier A/B/C distribution, ROAS, algorithm state per quarter, and sample size. Track 04 Algorithm Health Monitor outputs provide the algorithm state. The Track 05 input change log requires documentation across all 4 categories; the RevOps team should maintain quarterly change logs.
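As a sketch, one input row per channel per quarter might look like this; names and types are illustrative, with algorithm_state pulled from the Track 04 Algorithm Health Monitor:

```python
from typing import TypedDict

class QuarterMetrics(TypedDict):
    """One row per channel per quarter. Field names are illustrative."""
    quarter: str           # "2025-Q1"
    channel: str           # "google_ads" or "linkedin_ads"
    cost_per_sql: float    # in account currency
    mql_to_sql_rate: float # 0.0-1.0
    tier_distribution: dict  # {"A": 0.22, "B": 0.51, "C": 0.27}
    roas: float
    algorithm_state: str   # from the Track 04 Algorithm Health Monitor
    sample_size: int       # conversions observed in the quarter
```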
Run Algorithm Health Monitor → edit gold variables (brand, performance data, change log, external confounds). The most important calibration is confound documentation: budget changes, seasonality, platform algorithm updates, competitor moves. Without confound visibility, attribution is unreliable. Spend 10 minutes per quarter documenting external context.
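The confound log can be as lightweight as dated notes. These hypothetical entries show the level of detail that makes attribution defensible:

```python
# Hypothetical quarterly confound log. The four note types mirror the
# calibration advice above: budget, seasonality, platform, competitor.
confound_log = [
    {"quarter": "2025-Q1", "type": "budget",      "note": "LinkedIn budget +40% from week 3"},
    {"quarter": "2025-Q1", "type": "seasonality", "note": "annual-planning season inflates Q1 intent"},
    {"quarter": "2025-Q1", "type": "platform",    "note": "Google Ads bidding update rolled out mid-quarter"},
    {"quarter": "2025-Q1", "type": "competitor",  "note": "competitor launched a branded-search campaign"},
]
```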
The workflow takes 22-26 minutes for a 6-quarter × 4-category × 2-channel analysis. Claude builds a quarterly retrospective table, a change-impact attribution matrix with confound controls, and next-quarter priorities. The output is ready for the RevOps quarterly business review: an accountability artifact for Track 05 work.
P1: invest more in input categories with a HIGH IMPACT history. P2: experiment with INDETERMINATE categories under cleaner attribution conditions. P3: small bets on lower-leverage categories. Deprecate: stop refining categories with a NEGATIVE / LOW history. Process change: reduce changes per quarter from 3-4 to 1-2 for cleaner future attribution. Re-run the impact tracker quarterly to refine priorities.
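One way to encode that priority logic as a rule of thumb; the thresholds here are assumptions to tune, not part of the workflow spec:

```python
from collections import Counter

def next_quarter_priority(impact_history: list) -> str:
    """Map a category's impact history (strings from the attribution
    matrix) to a next-quarter priority. Thresholds are illustrative."""
    if not impact_history:
        return "P2: experiment under cleaner attribution conditions"
    c = Counter(impact_history)
    if c["HIGH IMPACT"] >= 2:
        return "P1: invest more"
    if c["INDETERMINATE"] >= 2:
        return "P2: experiment under cleaner attribution conditions"
    if c["NEGATIVE"] >= 1 or c["LOW"] >= len(impact_history) / 2:
        return "Deprecate: stop refining"
    return "P3: small bets"

# Checked against the sample output above:
# Tier Value Changes  ["LOW", "LOW", "NEGATIVE"]              -> Deprecate
# Signal Taxonomy     ["HIGH IMPACT"] * 3 + ["MODERATE"]      -> P1
```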
Same 4-input attribution framework, different organizational contexts. Pick the variant that matches your RevOps maturity.
Early-stage brands often don't have 4-6 quarters of Track 05 input change history, so the standard 6-quarter retrospective doesn't apply. The pre-PMF variant runs a 2-3 quarter retrospective focused on deployment pipe + signal taxonomy, the highest-leverage categories at early stage. Tier values are typically not yet differentiated, and the ICP rubric is often binary (yes-fit / no-fit) rather than weighted.
The standard retrospective covers Google Ads + LinkedIn Ads. Multi-channel brands need per-channel attribution because channel-specific algorithm dynamics differ: LinkedIn responds to different signal types than Google, and Meta responds to different tier value structures. The multi-channel variant runs separate attribution per channel, then synthesizes cross-channel patterns.
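A sketch of the synthesis step: given per-channel impact calls, flag changes whose classification diverges across channels. The input shape and names are illustrative:

```python
def divergent_changes(per_channel: dict) -> list:
    """Find changes classified differently across channels.

    per_channel maps channel -> {change_id: impact}, e.g.
    {"google_ads": {"chg-07": "HIGH IMPACT"}, "linkedin_ads": {"chg-07": "LOW"}}.
    Divergence (including a change missing from one channel's matrix) is
    the cue that a change interacts with channel-specific algorithm
    dynamics and shouldn't get one blended impact call.
    """
    all_ids = set().union(*(calls.keys() for calls in per_channel.values()))
    return sorted(
        cid for cid in all_ids
        if len({calls.get(cid) for calls in per_channel.values()}) > 1
    )
```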
Quarterly retrospective is the standard cadence; post-incident retrospective is the ad-hoc trigger. Run it when a full ICP rubric revamp completes (60-90 days post-launch), a deployment pipe migration completes, a signal taxonomy redesign completes, or when an unexpected performance shift occurs without an obvious cause. The post-incident variant focuses tightly on the change of interest with rich confound control rather than a broad multi-quarter retrospective.
Most B2B SaaS RevOps teams refine ICP rubrics quarterly based on hunches. Some refinements are high-leverage. Some are noise. Without retrospective attribution, every refinement looks equally important. Run the impact tracker quarterly. Surface which input categories actually moved algorithm performance and which ones were rituals. Or have senior GrowthSpree operators run quarterly impact attribution across 300+ B2B SaaS accounts.