A copy-paste Claude prompt that maps your 4-tier ICP rubric values to LinkedIn CAPI conversion events, configures the HubSpot lifecycle workflow, and validates post-deployment that events are firing with healthy match rates and correct tier value distribution. The LinkedIn-side companion to Track 05's Google offline conversions workflow — together they give you complete ad-platform offline conversion coverage.
A B2B SaaS team configures HubSpot → LinkedIn CAPI in a weekend sprint. They map MQL → LinkedIn conversion event with a default $1 value because that's the path of least resistance. Six weeks later they ask why LinkedIn's Smart Bidding is finding leads that look like the wrong companies — analysts at 5-person startups, not directors at the 500-person mid-market accounts they actually close. The CAPI is working — events are firing, match rate is 73%, conversions are flowing. The calibration is broken. LinkedIn's algorithm has been told every conversion is worth the same dollar amount, so it optimizes for conversion volume at any tier instead of conversion value at the high tier.
The deeper problem is that most B2B SaaS teams treat CAPI setup as a one-time configuration step rather than an ongoing optimization layer. The mechanical setup (authorize CAPI, configure workflow, map lifecycle stages) is well-documented across multiple guides. The calibration step (which dollar value to assign per ICP tier) and the validation step (verifying events are firing at expected rates with healthy match rates and correct tier distribution) get skipped because they're harder to standardize. This is where most CAPI implementations leak value.
This workflow runs both calibration and validation. Claude maps your 4-tier ICP rubric to LinkedIn conversion values ($500 / $200 / $50 / excluded), generates the HubSpot workflow definitions and LinkedIn CAPI event configuration, and after 7-14 days of live operation runs the validation pass — pulling event firing rates, match rates per audience, and tier value distribution. Run validation at Day 3, Day 14, and Day 30 post-deployment, then monthly thereafter as part of Track 05's signal quality cadence.
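The tier-to-value calibration above can be sketched as a simple lookup. The $500 / $200 / $50 / excluded split mirrors the workflow defaults; the tier labels and function name are illustrative, not LinkedIn CAPI API calls.

```python
# Tier-to-value calibration sketch. Tier labels (T1-T4) and the function
# name are assumptions for illustration; the dollar values are the
# workflow defaults described above.

TIER_VALUES = {
    "T4": 500,   # top-fit accounts
    "T3": 200,   # strong fit
    "T2": 50,    # marginal fit
    "T1": None,  # excluded: no conversion event is sent
}

def conversion_value(tier: str):
    """Return the dollar value to attach to a CAPI event, or None to skip it."""
    if tier not in TIER_VALUES:
        raise ValueError(f"unknown ICP tier: {tier}")
    return TIER_VALUES[tier]
```

Sending no event at all for the excluded tier (rather than a $0 event) is the point of the calibration: the algorithm should never see bottom-tier conversions as signal.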
The gold variables — your brand, ICP rubric tier thresholds, HubSpot lifecycle stages, and LinkedIn ad account — are the parts you edit. Run once for setup, then again at Day 3 / 14 / 30 for validation.
Requires the hubspot and linkedin_ads connectors via the free GrowthSpree MCP. ICP rubric thresholds come from ICP Scoring Rubric Builder. For setup mechanics, see the existing GrowthSpree CAPI guide; this workflow handles the calibration and validation steps not covered there.

Sample output for a hypothetical mid-market B2B SaaS brand running validation at Day 14 post-deployment. The validation surfaces one DANGER issue (match rate at 47% due to li_fat_id capture failure on the demo-request form) and produces a specific fix.
Pulling 14 days of CAPI submissions and cross-referencing with HubSpot lifecycle transitions.
Validation status: WARNING. 4 events configured, 412 submissions in 14 days, 196 accepted (match rate 47.6%). Firing rate HEALTHY (412 submissions vs 421 HubSpot transitions, 2.1% gap). Match rate DANGER (47.6% well below 70% target). Tier distribution HEALTHY (T4 18% / T3 41% / T2 41%, matches ICP rubric expectations). Value flow HEALTHY (tier values flowing correctly through HubSpot workflow).
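The validation math behind the sample output above can be sketched directly. The 70% match-rate target comes from the report itself; the 5% firing-gap tolerance and the function shape are assumptions for illustration.

```python
# Day-14 validation sketch: match rate, firing-rate gap, and status labels.
# The 70% match-rate floor is stated in the report; the 5% firing-gap
# tolerance is an assumed threshold consistent with the 2.1% HEALTHY gap.

def validate(submissions: int, accepted: int, hubspot_transitions: int) -> dict:
    match_rate = accepted / submissions
    firing_gap = abs(hubspot_transitions - submissions) / hubspot_transitions
    return {
        "match_rate_pct": round(match_rate * 100, 1),
        "match_status": "HEALTHY" if match_rate >= 0.70 else "DANGER",
        "firing_gap_pct": round(firing_gap * 100, 1),
        "firing_status": "HEALTHY" if firing_gap <= 0.05 else "WARNING",
    }

# Numbers from the sample report: 412 submissions, 196 accepted,
# 421 HubSpot lifecycle transitions.
report = validate(submissions=412, accepted=196, hubspot_transitions=421)
```

Running it on the sample numbers reproduces the report: a 47.6% match rate flagged DANGER alongside a 2.1% firing gap flagged HEALTHY — which is exactly the silent-failure pattern worth catching, since events firing on schedule says nothing about whether LinkedIn can match them.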
Run the setup mode once at deployment. Run the validate mode at Day 3, Day 14, and Day 30 post-deployment. Then monthly thereafter as part of Track 05's signal quality cadence.
This workflow assumes a calibrated 4-tier ICP rubric. If you don't have one, run ICP Scoring Rubric Builder first — it produces the 100-point rubric with 4 tier thresholds (0-30 / 31-49 / 50-69 / 70-100) used directly here. LinkedIn Insight Tag must also be deployed for at least 30 days before CAPI activation so attribution baseline exists.
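The 100-point rubric's tier thresholds stated above reduce to a small score-to-tier lookup. The tier names are illustrative; the boundaries are the ones the rubric builder produces.

```python
# Score-to-tier mapping from the rubric thresholds above:
# 0-30 excluded, 31-49 marginal, 50-69 strong, 70-100 top fit.
# Tier names (T1-T4) are illustrative labels.

def icp_tier(score: int) -> str:
    if not 0 <= score <= 100:
        raise ValueError("ICP score must be between 0 and 100")
    if score >= 70:
        return "T4"
    if score >= 50:
        return "T3"
    if score >= 31:
        return "T2"
    return "T1"
```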
Run ICP rubric builder →

Run the prompt with mode=setup. Claude generates the HubSpot workflow definitions (4 workflows, one per lifecycle stage), LinkedIn CAPI event registrations (4 events with tier-conditional values), and the test plan. Implement workflows in HubSpot, register events in LinkedIn Campaign Manager, run the 4 manual test triggers (one per tier) to verify firing. Total operator time: 35 minutes including the test runs.
Run the prompt with mode=validate at three checkpoints. Day 3 catches authorization or webhook failures (events not firing at all). Day 14 catches match rate health and tier distribution issues (the most common silent failure). Day 30 verifies Smart Bidding is actually using the offline events for optimization (algorithm-level integration). Each validation run takes ~6 minutes via Claude vs 2-4 hours by hand.
After Day 30 passes clean, switch to monthly validation as part of Signal Quality Audit's broader cadence. The Track 05 cadence covers ICP rubric review, Google offline conversions validation, signal quality audit, and LinkedIn CAPI validation in a unified monthly pass. This workflow becomes the LinkedIn-side fragment of that monthly cadence.
See Signal Quality Audit →

Same calibration framework, different infrastructure. Pick the one that matches your CRM and tracking setup.
The calibration logic and validation framework are CRM-agnostic. The mechanical setup differs: Salesforce uses Process Builder or Flow instead of HubSpot workflows, and Salesforce's LinkedIn Ads integration uses LinkedIn's Marketing Developer Platform instead of HubSpot's native connector.
Default workflow maps 4 lifecycle stages (MQL/SQL/Opp/Won). Some teams need finer granularity: SQL → SDR Accepted → Discovery Booked → Proposal Sent → Closed-Won. Each additional stage gets a separate LinkedIn event with its own tier-calibrated value.
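For the finer-granularity ladder, each added stage gets its own LinkedIn event with its own tier-calibrated value. A minimal sketch, with the stage names from the example above; the per-stage multipliers are hypothetical placeholders, not workflow defaults.

```python
# Extended-ladder sketch: one event per stage, value = tier base value
# scaled by how far down-funnel the stage sits. Stage names come from the
# example above; the multipliers are hypothetical and should be calibrated
# to your own stage-to-close conversion rates.

BASE_TIER_VALUES = {"T4": 500, "T3": 200, "T2": 50}

STAGE_MULTIPLIER = {
    "SQL": 1.0,
    "SDR Accepted": 1.5,
    "Discovery Booked": 2.0,
    "Proposal Sent": 4.0,
    "Closed-Won": 10.0,
}

def event_value(stage: str, tier: str) -> float:
    """Value for the LinkedIn event registered for this stage/tier pair."""
    return BASE_TIER_VALUES[tier] * STAGE_MULTIPLIER[stage]
```

Deeper stages carrying larger values is what keeps the algorithm optimizing toward pipeline rather than raw lead volume as the ladder grows.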
LinkedIn CAPI needs 30+ conversions per month per event for Smart Bidding to optimize against it. Below that, the algorithm doesn't have enough signal. Solution: collapse the 4 lifecycle events into 2 — combined "Qualified" event (MQL+SQL) and combined "Pipeline" event (Opp+Won).
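The low-volume fallback above can be expressed as a simple collapse rule. The 30-conversion floor and the Qualified/Pipeline groupings come from the text; the function shape and sample counts are illustrative.

```python
# Low-volume fallback sketch: if any of the 4 lifecycle events falls below
# the 30-conversions-per-month floor, collapse MQL+SQL into "Qualified"
# and Opp+Won into "Pipeline". Counts in the usage example are made up.

MIN_MONTHLY_CONVERSIONS = 30

COLLAPSE_MAP = {"MQL": "Qualified", "SQL": "Qualified",
                "Opp": "Pipeline", "Won": "Pipeline"}

def event_plan(monthly_counts: dict) -> dict:
    """Return event -> monthly count, collapsing only when needed."""
    if all(c >= MIN_MONTHLY_CONVERSIONS for c in monthly_counts.values()):
        return dict(monthly_counts)  # all 4 events clear the floor; keep them
    merged: dict = {}
    for stage, count in monthly_counts.items():
        key = COLLAPSE_MAP[stage]
        merged[key] = merged.get(key, 0) + count
    return merged

plan = event_plan({"MQL": 60, "SQL": 25, "Opp": 12, "Won": 8})
```

With those sample counts, SQL, Opp, and Won all miss the floor, so the plan collapses to a Qualified event at 85/month and a Pipeline event at 20/month — the Pipeline event is still thin, but pooling is the best available option short of widening targeting.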
Mechanical CAPI setup is solved. Calibration to your ICP and validation that events are actually flowing — those are the steps where most CAPI implementations leak value. Run setup once. Validate at Day 3, 14, 30. Roll into monthly Track 05 cadence. Or have senior GrowthSpree operators run calibration + weekly validation across both LinkedIn and Google offline conversions — the same operating motion run across 300+ B2B SaaS accounts.