A copy-paste Claude prompt that runs your buyer's queries through web search, observes which sources get cited in AI answers, and identifies the citation gaps where your competitors get authority and your brand stays invisible. No MCP connector required — just Claude with web search.
A buyer types "best [your category] tools for [their use case]" into ChatGPT, Perplexity, or Google's AI Overview. The answer cites three competitors with embedded source links. Your brand doesn't appear. The same query in regular Google might rank you #2 — but the buyer never gets to that page because they got their shortlist from the AI answer first. By the time they reach your homepage, the deal is already partially decided.
This is the central problem with AEO in 2026: citation share and search rank have diverged. AI engines weight different signals than Google — entity authority, multi-source corroboration, structured answer formats, citation network density. A site with 100 backlinks and decent rankings can be invisible in AI answers. A site with 30 well-structured comparison pages and clear entity signals can dominate.
This workflow runs a 15-25 query battery through web search, tags each query as "cites your brand," "cites competitors but not you," or "cites neither," and ranks competitors by citation frequency across the full battery. The "cites competitors but not you" queries are the citation gaps. The output is a fix-order list: content to publish, entity authority to build, or schema improvements to existing pages.
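If you want to keep the battery in a spreadsheet or script rather than re-reading chat output, the scoring the prompt does in prose is easy to reproduce outside Claude. A minimal Python sketch of that bookkeeping; the QueryResult structure, tag strings, and example rows are illustrative, not part of the prompt itself:

```python
from collections import Counter
from dataclasses import dataclass, field

# One row per query in the battery, tagged after reading the AI answer.
@dataclass
class QueryResult:
    query: str
    tag: str                                      # "CITED", "PARTIAL", or "GAP"
    competitors_cited: list = field(default_factory=list)

def summarize(results):
    total = len(results)
    cited = sum(r.tag == "CITED" for r in results)
    gaps = [r for r in results if r.tag == "GAP"]
    # Rank competitors by how often they are cited on the queries you lose.
    gap_competitors = Counter(c for r in gaps for c in r.competitors_cited)
    return {
        "citation_share_pct": round(100 * cited / total),
        "gap_rate_pct": round(100 * len(gaps) / total),
        "gap_competitor_ranking": gap_competitors.most_common(),
    }

battery = [
    QueryResult("best onboarding tools for mid-market SaaS", "GAP",
                ["Competitor A", "Competitor B"]),
    QueryResult("what is customer onboarding software", "CITED"),
    # ... remaining queries from the 15-25 query battery
]
print(summarize(battery))
```

Ranking competitors only across the GAP queries is what surfaces the "dominant competitor" called out in the sample output below.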
The gold variables — your category, brand, top competitors, and query list — are the parts you edit. List 15-25 buyer-intent queries that your target ICP would actually run.
Claude Desktop or Claude.ai with web search enabled. This is the most portable workflow in the playbook: it works anywhere Claude can browse.

Sample output for a hypothetical mid-market B2B SaaS brand in the customer onboarding category. Real structure, illustrative competitor names. The pattern below, roughly 30% citation share with one dominant competitor pulling away, is overwhelmingly common.
Running the 21-query citation battery now.
[+18 more queries across all 5 categories]
Bottom line: your brand is CITED in 6 of 21 queries (29% citation share), PARTIAL in 4 queries, and GAP in 11 queries (52% gap rate). The dominant competitor across the GAP queries is Competitor A — cited in 9 of 11 gap queries. The single most common content pattern producing your gaps: third-party comparison pages on review sites where Competitor A has a comprehensive entry and you have a stub or no entry at all.
Pattern detected. Competitor A dominates because they have a comprehensive comparison page that gets cited in nearly every "best of" or "comparison" query. Competitor B wins on review-site authority: they're the top-listed entry on G2 and TrustRadius for the category, which AI engines pull as primary citations. Your brand wins on educational content (CITED on "what is" and "how to" queries) but loses on every comparison and vendor-evaluation query.
First run only. Each quarterly re-run takes under 10 minutes: paste, edit, observe.
Open Claude Desktop or Claude.ai. Confirm web search is enabled — usually toggled on by default. No MCP connector required for this workflow. The portability is intentional: AEO is about your brand's external visibility, not your internal data.
Use the five buyer-query categories from section 01 as the framework. Aim for 3-5 queries per category, weighted toward Categories 02 and 03 (solution comparison and vendor evaluation) since those produce the most decisive citation gaps. The accuracy of the audit depends on the queries reflecting real buyer behavior — if you're not sure what your buyers actually ask, ask 3-5 customers what they typed before finding you.
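One way to sanity-check the battery before pasting it into the prompt is to lay it out by category and check the totals. A sketch under the assumption that each category maps to a list of query strings; only categories 02 and 03 are named in this section, so the other keys and all of the example queries are placeholders:

```python
# Hypothetical layout for the query battery. Only categories 02 and 03 are named
# in this section; the other keys stand in for the remaining buyer-query
# categories defined in section 01. All example queries are illustrative.
battery = {
    "category_01": [
        "what is customer onboarding software",
        "customer onboarding best practices",
        "how to reduce time-to-value for new customers",
    ],
    "category_02_solution_comparison": [
        "best customer onboarding tools for mid-market SaaS",
        "top customer onboarding platforms compared",
        "Competitor A alternatives",
        "Competitor A vs Competitor B",
        "customer onboarding software comparison 2026",
    ],
    "category_03_vendor_evaluation": [
        "Competitor A pricing and reviews",
        "is Competitor B good for enterprise onboarding",
        "customer onboarding software with best integrations",
        "most reliable customer onboarding platform",
        "customer onboarding software reviews G2",
    ],
    "category_04": [],  # fill from your own buyer research
    "category_05": [],
}

total = sum(len(queries) for queries in battery.values())
print(f"{total} queries in the battery; aim for 15-25, weighted toward 02 and 03")
```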
Copy the prompt from section 02. Edit the gold variables — brand, category, top 4-6 competitors, ICP, and the 21 queries. The competitor list matters: if you list the wrong competitors (companies you never compete against), the citation frequency table becomes noise. List only competitors who appear in your real win-loss reports.
Citation share moves slower than other metrics — quarterly is the right cadence. AI engines re-crawl and re-index on monthly cycles, so changes from content fixes typically show up 60-90 days later. Save each quarter's output and compare query-by-query to track which fixes are working and which aren't. The first quarterly comparison after deploying fix 01 is usually the most informative — content fixes either work fast or not at all.
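If you save each quarter's tags as a simple query-to-tag mapping, the query-by-query comparison is mechanical. A sketch of that diff; the file names, JSON format, and tag strings are assumptions, not something the prompt produces automatically:

```python
import json

def load_quarter(path):
    """One quarter's audit stored as {query: tag}, tag in {"CITED", "PARTIAL", "GAP"}."""
    with open(path) as f:
        return json.load(f)

def citation_share(tags):
    return round(100 * sum(t == "CITED" for t in tags.values()) / len(tags))

def quarter_over_quarter(prev, curr):
    return {
        "citation_share_prev": citation_share(prev),
        "citation_share_curr": citation_share(curr),
        # Wins: queries that flipped from GAP to CITED since last quarter.
        "flipped_to_cited": [q for q in curr if prev.get(q) == "GAP" and curr[q] == "CITED"],
        # Regressions: queries that flipped the other direction.
        "flipped_to_gap": [q for q in curr if prev.get(q) == "CITED" and curr[q] == "GAP"],
    }

print(quarter_over_quarter(load_quarter("audit_2026_q1.json"),
                           load_quarter("audit_2026_q2.json")))
```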
Same query battery foundation, different angle. Pick the one that matches what you're trying to decide right now.
Once the main audit identifies a dominant competitor, the deep-dive variation analyzes only that competitor across all 21 queries — what content do they have, what entity authority signals do they have, and what's reproducible.
For teams who already know educational content is fine but suspect they're losing on comparison queries. Runs only Categories 02 and 03 (solution comparison + vendor evaluation) with extra depth — typically the highest-CAC queries in B2B SaaS.
Wraps the audit in a quarterly memo format. If you have prior audit output in context, Claude compares Q-over-Q citation share movement, calls out wins (queries that flipped from GAP to CITED), regressions (queries that flipped the other direction), and recommends focus for next quarter.
Open Claude with web search, paste the prompt, edit your category and competitors. Within 25 minutes you can see the 11 queries where your competitors get cited and you don't. Or have senior GrowthSpree operators run the audit, build the comparison content, and execute the entity authority fixes across your stack.