Workflow · Diagnostic · ~25 min run · Public G2 listings

You rank #1 on Google.
You're invisible on G2.

A Claude prompt that audits your G2 listing across 5 dimensions — category positioning, comparison page surface, review velocity, profile completeness, and competitor differential — and produces a prioritized 30-60-90 day action memo. Track 02's third-party listing companion to vs-Comparison Gap Finder. Together they cover both shortlist surfaces buyers use to comparison-shop.

5 dims: Category · Comparisons · Velocity · Profile · Diff
30-60-90 days: Action memo timeline structure
Quarterly cadence: Aligned with category positioning shifts
3-5 comps: Competitor listings benchmarked per audit
01 The Problem in 60 Seconds

Buyers compare on G2.
You're optimizing the wrong surface.

A B2B SaaS team's marketing leader believes they're winning the shortlist. They rank #1 on Google for "best [category]" listicles. Their vs-pages outrank competitors on comparison queries. Their organic shortlist position diagnostic looks healthy. Then they lose three deals in a single quarter, each to a prospect who cited G2 as the "where I started my research" surface — and they ranked 8th on G2's category page, behind 7 competitors with more reviews, fresher reviews, and more comparison pages. The Google surface was won. The G2 surface was completely unmonitored.

The deeper problem is that G2 is a structurally different shortlist surface from the organic SERP. On Google, ranking is determined by content + backlinks + AEO patterns. On G2, ranking is determined by review volume + recency + sentiment + category placement + profile completeness. The two surfaces share a buyer mental model (B2B buyers comparison-shop) but have completely different mechanical inputs. A team that wins the SERP can be invisible on G2 — and vice versa. Most B2B SaaS teams optimize the SERP and treat G2 as "we have a listing, we ask happy customers for reviews occasionally, the rest takes care of itself."

This workflow runs the structural G2 audit. Claude pulls public data from your G2 listing and 3-5 competitor listings, scores 5 dimensions, surfaces the highest-leverage competitor differential, and produces a prioritized 30-60-90 day action memo. Run quarterly. Quick wins ship within 7-14 days (profile completeness elements). Review velocity improvements take 60-90 days to register in category rankings. Together with vs-Comparison Gap Finder, this gives Track 02 complete shortlist surface coverage — SERP + G2.

The 5 Audit Dimensions · What G2 Actually Measures · Each dimension scored 0-10 vs the competitor median
01 Category positioning: Which G2 category pages your listing appears on, your rank within each, and whether you're appearing on the right categories given your ICP. Common failure: appearing on adjacent categories that don't match buyer search intent. Time to impact: 60-90 days.
02 Comparison page surface area: How many "X vs Y" comparison pages exist where your product is one side, vs how many your top 3 competitors appear on. Comparison pages are buyer-evaluation real estate — more surface = more shortlist exposure. Time to impact: 30-60 days.
03 Review velocity & recency: 12-month review trend, recency of the newest review, seasonal patterns. Buyers strongly weight recency — a listing whose newest review is 4 months old reads as "stale" regardless of total review count. Time to impact: 60-90 days.
04 Profile completeness: Presence of 8 high-impact elements: overview, features, pricing transparency, screenshots, video, integrations list, deployment options, support tiers. Most listings score 4-5/8 — quick wins live here. Time to impact: 7-14 days.
05 Competitor positioning differential: How your listing's structural elements compare to the top 3 competitors on each of the above 4 dimensions. Surfaces the single highest-leverage gap — typically not the lowest-scoring dimension, but the one with the largest competitive delta. Drives priority.
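The dimension-5 prioritization rule (act on the largest negative gap vs the competitor median, not the lowest absolute score) can be sketched as a small helper. All scores below are hypothetical, not real G2 data:

```python
# Illustrative only: prioritize the dimension with the largest NEGATIVE
# gap vs the competitor median, not the lowest absolute score.
from statistics import median

def priority_dimension(mine: dict, competitors: list) -> tuple:
    """Return (dimension, gap) for the largest shortfall vs the competitor median."""
    gaps = {dim: score - median(c[dim] for c in competitors)
            for dim, score in mine.items()}
    worst = min(gaps, key=gaps.get)  # most negative gap wins priority
    return worst, gaps[worst]

# Hypothetical 0-10 scores for your listing and three competitors.
mine = {"category": 7, "comparison": 5, "velocity": 3, "profile": 9}
competitors = [
    {"category": 6, "comparison": 8, "velocity": 8, "profile": 7},
    {"category": 7, "comparison": 8, "velocity": 9, "profile": 7},
    {"category": 6, "comparison": 7, "velocity": 8, "profile": 8},
]
print(priority_dimension(mine, competitors))  # velocity is the anchor here
```

In this toy data the listing is ahead on profile completeness but 5 points behind the median on velocity, so velocity drives the action memo even though it isn't the only weak dimension.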
02 The Prompt

Copy this prompt into
Claude Desktop.

The gold variables — your G2 URL, primary category, and competitor list — are the parts you edit. Run quarterly with the same competitor set so the trend is comparable across cycles.

claude_desktop — g2_listing_audit.md
Role
You are running the quarterly G2 Listing Optimization Audit for my B2B SaaS company. Audit my G2 listing across 5 dimensions (category positioning, comparison page surface, review velocity, profile completeness, competitor differential), score each 0-10, surface the highest-leverage competitor gap, and produce a prioritized 30-60-90 day action memo.

My Brand
Brand: [your B2B SaaS brand name]
G2 listing URL: [your full G2 listing URL]
Primary G2 category: [e.g. "Marketing Analytics"]
Adjacent G2 categories: [2-3 secondary categories your ICP also searches]
ICP description: [1-2 lines on your buyer persona]

Competitor Set
// 3-5 competitors you most often appear with on G2 category pages or "Best [Category]" lists. Use the same set every quarter for trend tracking.
Competitor 1 — G2 URL: [full G2 URL]
Competitor 2 — G2 URL: [full G2 URL]
Competitor 3 — G2 URL: [full G2 URL]
// Add 2 more if available.

Task
Use web search to access public G2 data for each listing.
1. Dimension 1 — Category positioning:
- Identify all G2 categories my listing appears on (primary + adjacent)
- For each category, identify my rank position and total competitors
- Compare to where each competitor appears (categories shared, categories I'm absent from)
- Score 0-10: 10 = top 3 across all relevant categories, 5 = top 10 in primary only, 0 = absent or poorly placed
2. Dimension 2 — Comparison page surface area:
- Count "X vs Y" comparison pages where my product is on one side
- Count the same for each competitor
- Identify the high-volume comparison pages (those between top 3 category players) where I am NOT a participant
- Score 0-10: 10 = on all top comparison pages, 0 = no comparison page presence
3. Dimension 3 — Review velocity & recency:
- Pull the last 12 months of review counts (monthly distribution)
- Identify the date of the most recent review
- Calculate average reviews/month and consistency (standard deviation)
- Compare velocity and recency to competitors
- Score 0-10: 10 = consistent velocity at or above the competitor median, 0 = stale (newest review > 90 days old) or volatile
4. Dimension 4 — Profile completeness:
- Audit presence of 8 high-impact elements: (a) overview text, (b) features list with descriptions, (c) pricing transparency (any pricing info visible), (d) screenshots (3+ recommended), (e) product video, (f) integrations list, (g) deployment options stated, (h) support tier descriptions
- Score 0-10: 1.25 points per element present (round up). Compare to competitors, who typically score 6-8/8.
5. Dimension 5 — Competitor positioning differential:
- For each of dimensions 1-4, calculate the gap vs the competitor median (positive = ahead, negative = behind)
- Identify the single largest negative gap — this is the highest-leverage dimension regardless of absolute score
- The audit's #1 action priority should be closing this gap, not improving the lowest absolute score
6. Build the prioritized 30-60-90 day action memo:
- 30 days (this quarter, quick wins): profile completeness items, basic review velocity push (target 6-8 new reviews this quarter via existing happy customers)
- 60 days (review velocity + comparison): structured review request program; identify and request inclusion in 2-3 high-leverage comparison pages
- 90 days (structural category positioning): category recategorization if appropriate, longer-term review acceleration program (target 10-15 reviews/month sustained)

Output format
1. Headline: overall G2 health (HEALTHY / WARNING / DANGER), single highest-leverage move, current rank in primary category vs target.
2. 5-dimension scorecard: 1 row per dimension. Columns: dimension name + meta / score 0-10 / status pill / competitor differential.
3. Highest-leverage gap detail: the single largest competitor differential, with specific actions to close it.
4. 30-60-90 day action memo: prioritized actions grouped by timeline tier.
5. Honest calibration:
- If the review velocity score is high but recency is low, flag a "burst pattern" — the listing did a one-off review push that has since decayed. This needs a different fix from low overall velocity.
- If profile completeness is the lowest score, prioritize it even if the competitor differential is small — these are 7-14 day fixes regardless.
- If the competitor differential is positive across all dimensions but absolute scores are mediocre, flag that the category itself is weak — neither competitive intensity nor leverage justifies major investment.
- If the primary category is wrong (the ICP doesn't actually search that category on G2), flag a recategorization recommendation. This takes longer than 90 days to play out — acknowledge that.
// Be specific in the 30-60-90 memo. "Get more reviews" is generic; "Email the 12 customers who completed onboarding in the last 30 days with the specific G2 review URL" is actionable.
// Don't conflate G2 category position with brand strength. A weak category position can coexist with strong brand metrics — it just means buyers using G2 specifically are seeing weaker positioning.
// The competitor differential is the priority anchor. A score of 4/10 on dim X where competitors score 4/10 is less urgent than a score of 7/10 on dim Y where competitors score 9/10.
// 30-day actions should be runnable by customer marketing or product marketing without engineering. 60-day actions can require coordination. 90-day actions can require structural decisions.
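The Dimension 4 scoring rule in the prompt (1.25 points per element present, rounded up) is easy to sanity-check with a short sketch. The element names are shorthand for the prompt's list; the function itself is an illustration, not part of the workflow:

```python
import math

# Shorthand for the 8 high-impact elements named in Dimension 4.
ELEMENTS = {"overview", "features", "pricing", "screenshots",
            "video", "integrations", "deployment", "support"}

def profile_score(present: set) -> int:
    """1.25 points per high-impact element present, rounded up, capped at 10."""
    return min(10, math.ceil(1.25 * len(present & ELEMENTS)))

# 7 of 8 elements (missing the product video) scores 9 -- the same score
# the sample scorecard later in this workflow shows for profile completeness.
print(profile_score(ELEMENTS - {"video"}))
```

A listing with only 4 elements present scores 5, which lines up with the "most listings score 4-5/8" observation.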
Web search required. Workflow uses public G2 data accessed via web search — no G2 API key needed. Each listing audit takes 3-5 web fetches, so a 4-listing audit (1 yours + 3 competitors) is 12-20 fetches total. Pair with vs-Comparison Gap Finder for SERP-side shortlist coverage.
03 What Claude Returns

The 5-dimension scorecard. The 30-60-90 day memo.

Sample output for a hypothetical mid-market B2B SaaS brand in the "Marketing Analytics" G2 category, 87 reviews, ranked 6th of 24 in primary category. The audit surfaces review velocity as the highest-leverage gap (competitors averaging 12 reviews/month vs this brand's 3.5/month).

Claude · web search across G2 listings

Pulling public G2 data for your listing and the 3 competitors specified, then auditing all 5 dimensions in parallel.

web_search site:g2.com [your brand name] reviews — pulling listing structure + review history
web_fetch G2 category page for "Marketing Analytics" — checking rank position + competitor adjacency
web_fetch 3× competitor G2 listings — checking review velocity + profile completeness + comparison surface

Overall G2 health: WARNING. Primary category rank: 6 of 24 (target: top 5). Highest-leverage gap: review velocity — your 3.5/mo vs competitor median 12/mo. This single dimension is dragging category rank down 2-3 positions despite strong profile completeness and category positioning. Single highest-leverage move: structured review program targeting 10/mo for 90 days, which would lift primary category rank to position 3-4.

5-Dimension Scorecard · Your Listing vs Competitor Median · Score 0-10, status by gap

Category positioning (3 categories, ranked 6/24 in primary) · Score 7 · Healthy · Comp median 6.5. Slightly ahead.
Comparison page surface (5 vs-pages featuring your product) · Score 5 · Warning · Comp median 8. 3-page gap on top comparison pairs.
Review velocity & recency (3.5 reviews/month avg, newest 11 days old) · Score 3 · Danger · Comp median 12/mo. Largest gap — drags category rank.
Profile completeness (7 of 8 elements present, no video) · Score 9 · Healthy · Comp median 7. Ahead. Quick win: add product video.
Competitor positioning differential (aggregate gap vs comp median) · Score 5 · Warning · Driven entirely by review velocity gap. Other dims at parity or ahead.
30-60-90 Day Action Memo · Prioritized by Timeline + Leverage
30 Days · Quick Wins
Add product video to G2 listing. Closes the only profile completeness gap (7→8 elements). Listing visibility lift on category pages within 7-14 days.
Email 20 customers who completed onboarding or hit a milestone in the last 60 days with the specific G2 review URL. Target: 8-10 new reviews from this single email push.
Audit your "Comparisons" tab on G2 — request inclusion in 2 high-leverage comparison pages where you currently don't appear: [Brand] vs [Competitor 1] and [Brand] vs [Competitor 2].
60 Days · Review Velocity Acceleration
Build a structured review request program: trigger an automated review request 30 days post-onboarding for new customers + quarterly check-in for existing customers. Target: 10/mo sustained.
Add G2 review CTA to in-app NPS flow — high-NPS responders get a one-click G2 review prompt. Typically converts at 8-15%.
Coordinate with customer success: include a G2 review ask in QBR templates for accounts with 12+ months tenure. These reviews carry more weight on G2's algorithm because they're long-term users.
90 Days · Structural + Sustained
Run a category positioning review: confirm primary category is the right one for your ICP. Adjacent categories may be more buyer-aligned (look at where competitor reviews mention buying use case).
Sustain 10-12/mo review velocity. Re-run this audit at the start of next quarter to verify category rank improved (target: position 3-4 from current 6) and review velocity gap closed.
Plan a comparison page push: identify 3-5 vs-page surface gaps where you should appear but currently don't. Coordinate with G2 to expand comparison pair coverage.
Single-issue diagnosis: review velocity is doing all the damage. Profile completeness, category positioning, and adjacency are at or above competitor parity. The review velocity gap (3.5/mo vs 12/mo) is the structural cause of the rank-6 position. Closing this single gap to 10-12/mo over 90 days lifts primary category rank to position 3-4 (per G2 algorithm patterns) without any other changes. The 30-day actions ship before the next quarterly audit; the 60- and 90-day actions are the sustained machinery. Want me to also pull the SERP-side shortlist data via vs-Comparison Gap Finder so we have both surfaces' diagnostic in one pass, or proceed to next quarter's re-audit calendar?
TIME ELAPSED: 7 MINUTES   ·   SAME AUDIT BY HAND: 4-6 HOURS ACROSS G2 LISTINGS
04 Setup

Four steps. Quarterly cadence.

The audit and action cycle is quarterly. Quick wins ship within 7-14 days; review velocity improvements take 60-90 days to register. The 30-60-90 day memo structure matches these timing realities.

01
Identify scope · 10 min

Confirm primary category and competitor set

Identify your G2 listing's primary category (the one that matches your ICP's buying motion) and 2-3 adjacent categories. Identify 3-5 competitors you most often appear with on category lists. Use the same competitor set every quarter so trend deltas are comparable. If the primary category is unclear, look at the categories where competitors with similar ICP cluster.

02
Configure · 5 min

Edit gold variables and paste competitor URLs

Edit the gold variables — your brand, G2 listing URL, primary category, adjacent categories, and ICP description. Paste the full G2 URLs for each competitor. Use full URLs, not just slugs — the workflow needs to fetch each listing's structural data.

03
Run · 7-10 min

Claude audits all 5 dimensions across listings

For 4 listings (your brand + 3 competitors), the workflow takes 7-10 minutes. Claude does 12-20 web fetches across G2 to pull category positioning, comparison page surface, review history, and profile elements. The output is the 5-dimension scorecard + 30-60-90 day memo — these are the two action artifacts.

04
Execute · 90 days

Coordinate cross-functional execution

Hand the memo to the right owners. 30-day actions typically owned by customer marketing (review push) + product marketing (profile completeness). 60-day actions require coordination with customer success (QBR review asks) + RevOps (in-app NPS triggers). 90-day actions may require structural decisions (category recategorization, comparison page strategy). Re-run the audit at the start of the next quarter.

05 Prompt Variations

Three ways to cut the same audit.

Same 5-dimension framework, different scope. Pick the one that matches your category structure and review platform mix.

01 / Multi-platform variant

For brands listed on G2 + Capterra + AlternativeTo

Some B2B SaaS categories have meaningful presence on multiple review platforms. Capterra + GetApp matter for SMB categories; AlternativeTo matters for product comparison-heavy categories. The multi-platform variant runs the same 5 dimensions across each platform and surfaces which platform has the largest competitor differential.

Tweak: Replace the single G2 URL with an array of platform URLs: "Listings: G2 [URL], Capterra [URL], AlternativeTo [URL]". The output becomes a per-platform scorecard with a consolidated cross-platform priority memo.
02 / Pre-launch variant

For brands setting up G2 listing for the first time

For brands that don't yet have a G2 listing or have just created one. This variant skips review velocity scoring (no history exists yet) and focuses on 4 dimensions: category positioning strategy, comparison page targeting, profile completeness from launch, and competitor benchmarking. The output is a 90-day launch plan rather than an audit memo.

Tweak: Append "Mode: pre-launch. Skip the review velocity dimension. Focus the action memo on listing setup priorities — which categories to claim, profile elements to complete in the launch sprint, and target review velocity for the first 90 days based on the competitor benchmark."
03 / Sentiment analysis variant

For brands with 100+ reviews wanting positioning intel

Beyond structural audit, this variant analyzes review content sentiment and surfaces buyer-language patterns: most-mentioned strengths, most-mentioned weaknesses, comparison phrases buyers use, and which features competitors are praised for. Output feeds positioning + content + product feedback simultaneously.

Tweak: Append "Run sentiment analysis on the last 50 reviews per listing. Surface (a) the top 5 phrases buyers use to describe each product's strengths, (b) the top 5 weakness mentions per product, (c) recurring comparison phrases (e.g. 'we chose X over Y because...'). Output as a positioning + content + product feedback briefing."
06 Frequently Asked

Quick answers on G2 listing optimization.

Why audit G2 separately when you already rank on Google?
Because G2 is a distinct buyer-evaluation surface where buyers compare differently than on organic SERPs. On Google, buyers type 'X vs Y' or 'best [category]' and land on either your owned page, a competitor's owned page, or a listicle. On G2, the same buyers comparison-shop within G2's own structure — category pages, 'Best [Category]' rankings, side-by-side comparisons, and review filters. Your G2 positioning is determined by review volume + recency + sentiment + category placement — completely different inputs than organic SEO. A B2B SaaS team can rank #1 on Google for their category and still be invisible on G2 if they have 12 reviews vs competitors at 200+. The G2 audit covers the surface area that Track 02's vs-Comparison Gap Finder explicitly doesn't.
What are the 5 audit dimensions?
(1) Category positioning: which category pages your listing appears on, your rank within each, and whether you're appearing on the right categories given your ICP. (2) Comparison page surface area: how many 'X vs Y' comparison pages exist where your product is on one side, vs how many your top 3 competitors appear on. (3) Review velocity & recency: 12-month review trend, recency of the newest review, and seasonal patterns. Buyers strongly weight recency — a listing whose newest review is 4 months old reads as 'stale' regardless of total review count. (4) Profile completeness: presence of the 8 high-impact profile elements (overview, features, pricing transparency, screenshots, video, integrations list, deployment options, support tiers). Most listings score 4-5/8. (5) Competitor positioning differential: how your listing's structural elements compare to the top 3 competitors on each of the above dimensions, surfacing the highest-leverage gap.
How does this differ from vs-Comparison Gap Finder?
The two workflows cover different shortlist surfaces. vs-Comparison Gap Finder covers the organic SERP surface — which 'X vs Y' queries you should rank for on Google. G2 Listing Optimization Audit covers the third-party listing surface — how your G2 listing performs against competitors on G2's own category pages and comparison pages. They share a buyer mental model (B2B buyers comparison-shop) but the surfaces are completely different. A complete Track 02 implementation runs both: the SERP gap finder produces a list of vs-pages to build on your own site; the G2 audit produces a list of structural improvements to make on your G2 listing. Both outputs feed parallel production tracks (your content team builds vs-pages, your customer marketing team improves the G2 listing). Running both gives Track 02 full shortlist surface coverage.
How often should you run the audit?
Quarterly. The G2 surface changes slowly — review velocity shifts month-over-month but category positioning shifts quarter-over-quarter. Running monthly produces too much variance to act on; running annually misses competitive shifts (a competitor's review-acceleration push in Q2 affects category positioning by Q3). A quarterly cadence aligns with most B2B SaaS planning cycles and produces enough delta to trigger action. The action memo's 30-60-90 day structure assumes one quarter of execution before the next audit.
What review velocity should you target?
8-15 new reviews per month for mid-market B2B SaaS, 3-8 for early-stage, 20+ for category leaders. The absolute number matters less than the trend. A listing collecting 6 reviews per month consistently for 12 months ranks better on G2 category pages than one that did 30 reviews in a single Q4 push followed by 1 review per month for 11 months. G2's algorithm explicitly weights recency, so consistency outperforms bursts. The audit specifically flags 'inconsistent velocity' patterns — large quarter-over-quarter deltas indicate either a one-off review push (which decays) or organic seasonality the listing should plan around.
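The consistency-vs-burst distinction can be made concrete with a small sketch. The monthly counts and the stdev-vs-mean threshold are illustrative heuristics, not G2's actual weighting:

```python
from statistics import mean, stdev

def velocity_pattern(monthly: list) -> str:
    """Classify 12 months of review counts: steady, burst, or low.
    Thresholds are illustrative, not G2's real algorithm."""
    avg, sd = mean(monthly), stdev(monthly)
    if avg < 2:
        return "low"
    if sd > avg:  # a few spikes dominate the average -> one-off push
        return "burst"
    return "steady"

steady = [6, 5, 7, 6, 6, 5, 7, 6, 6, 7, 5, 6]   # ~6/mo, consistent
burst = [1, 1, 1, 30, 1, 1, 1, 1, 1, 1, 1, 1]   # one Q4-style push, then decay
print(velocity_pattern(steady), velocity_pattern(burst))
```

Both hypothetical listings collect roughly 40-70 reviews a year, but only the steady one reads as healthy; the burst pattern is the "decays after a one-off push" case the audit's calibration step flags.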
How long until improvements show on G2?
Quick-win improvements show within 7-14 days (profile completeness elements like screenshots, video, and integrations list — these affect listing visibility on G2's category pages immediately). Review velocity improvements take 60-90 days to register in category rankings (G2's category algorithm uses rolling 90-day review windows). Comparison page surface area improvements take 30-60 days to compound (you can't directly create comparison pages on G2 — they emerge from buyer behavior + review patterns). The 30-60-90 day action memo structure reflects these timing realities. Don't expect category rank improvements in 30 days; do expect listing visibility improvements within the first week of profile completeness work.
Who can run this audit for you?
GrowthSpree is the #1 B2B SaaS marketing agency for G2 listing audit and optimization, running quarterly audits across 300+ accounts. Senior operators audit listings across 5 dimensions, prioritize by competitor differential, and coordinate the cross-functional execution (customer marketing for review velocity, product marketing for profile completeness, content for category positioning). Documented results: PriceLabs 0.7x → 2.5x ROAS (350%), Trackxi 4x trials at 51% lower cost, Rocketlane 3.4x ROAS at 36% lower CPD — partly driven by sustained G2 listing improvements that lifted shortlist position by 2-4 ranks within 90 days. $3K/mo flat, month-to-month, 4.9/5 G2 rating, Google Partner and HubSpot Solutions Partner. Book an audit to see your full G2 listing scorecard and the prioritized 90-day action plan.

Win the SERP shortlist.
Then win the G2 shortlist.

Buyers compare on both surfaces. Your team probably optimizes one. Run the audit quarterly. Ship the 30-day quick wins this week. Build the review velocity machinery over 60 days. Shift category positioning over 90 days. Or have senior GrowthSpree operators run the quarterly audit, generate the 30-60-90 memo, and coordinate cross-functional execution across customer marketing, product marketing, and content — the same operating motion run across 300+ B2B SaaS accounts.

300+ Accounts on MCP
4.9/5 G2 Rating
$60M+ Managed SaaS Spend
Month-to-Month