GEO methodology · v1 · 2026

How citation engineering works.

A working description of the GEO lane's pipeline — what we measure, when we measure it, and how we tune the inputs that frontier models cite.


Why GEO exists

For twenty years, SEO assumed a crawler. You optimized titles, schema, page speed, backlinks — and Google's spider rewarded you with a ranked list of ten blue links. The buyer clicked one. You won, or you didn't.

Generative engines don't render a list. They synthesize an answer, and they cite a small set of sources to ground it. ChatGPT cites 3–8 URLs per answer; Perplexity 5–15; Gemini AI Overviews surfaces 4–6; Claude with browsing names 2–6 entities and links a few. If your brand isn't in that small set, the buyer never sees it. The funnel collapsed from "ranked list" to "named or not."

The mechanics are different too. SEO rewards crawlable depth — long pages, structured data, inbound links. Generative-engine citation rewards retrieval-friendly authority: clear entity claims, schema that matches the model's training distribution, llms.txt-style indices that the retrieval layer can ingest cheaply, and consistent corroboration across third-party sources the model already trusts. Most B2B brands sit at under 5% citation-share on the queries that matter to them today. GEO is the lane that fixes that.

The four-model panel

Every brand is tracked across four frontier model surfaces: ChatGPT (GPT-4 family plus its search tool), Claude (Sonnet/Opus with web browsing), Perplexity (Sonar plus cited search), and Gemini (Pro plus AI Overviews in Google Search). These four cover >95% of the agentic and AI-search traffic that intersects buying intent today.

Each model has a different retrieval temperament. ChatGPT favors authoritative entities with rich schema. Perplexity rewards llms.txt and dense citation graphs. Gemini overweights Google's own knowledge graph plus E-E-A-T signals. Claude's browsing tool tends to reach for .edu, .gov, and well-structured, Wikipedia-adjacent pages. The gaps differ by model, and so do the fixes. We don't ship one playbook; we ship four model-aware patches.

Daily — citation polling

A scheduled walker hits each model with the brand's tracked-query set (typically 30–80 queries per brand: feature comparisons, "best X for Y" lookups, vendor evaluation, integration questions, "how do I" workflows). The query set is authored in week 1 of onboarding and tuned monthly as buyer-intent shifts.

For each (model × query × day) triple we capture a structured citation event: hashed query ID, model, ISO timestamp, citation slot, source URL, and a context vector.

Raw walker output lands in your tenant's R2 bucket nightly. We don't store the model responses verbatim long-term (cost and privacy concerns both apply); we keep only the structured citation events. Storage runs ~50–200 KB per brand per day.
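The event record described above can be sketched as a small dataclass. Field names and the hashing scheme here are illustrative, not the actual tenant schema:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class CitationEvent:
    """One (model x query x day) observation. Field names are
    illustrative; the production schema may differ."""
    query_id: str         # hashed, so raw query text never leaves the walker
    model: str            # "chatgpt" | "claude" | "perplexity" | "gemini"
    ts: str               # ISO 8601 timestamp of the poll
    slot: int             # 1-based position in the cited set; 0 = not cited
    source_url: str       # the URL the model actually cited
    context: list[float]  # embedding of the surrounding answer text

def hash_query(query: str) -> str:
    # Stable, non-reversible query ID (truncated SHA-256 hex digest)
    return hashlib.sha256(query.encode("utf-8")).hexdigest()[:16]

event = CitationEvent(
    query_id=hash_query("best crm for mid-market saas"),
    model="perplexity",
    ts="2026-01-15T03:00:00Z",
    slot=2,
    source_url="https://yourbrand.com/compare/crm",
    context=[0.12, -0.08, 0.33],
)
print(json.dumps(asdict(event)))
```

Hashing the query ID is what lets the raw query text stay out of long-term storage while events remain joinable across days.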

If the brand falls out of an answer set on a tracked query — citation-share drops below your configured threshold for that query family — Sentinel-aaS fires a Discord alert in #geo-drift within 15 minutes of detection. (See the Sentinel-aaS methodology for how that bus is wired.)
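A minimal sketch of the threshold check behind that alert, with hypothetical query families, thresholds, and event tuples (the real check runs inside Sentinel-aaS and its configuration surface may differ):

```python
# Hypothetical one-day sample: (query_family, model, cited?) tuples.
events = [
    ("vendor-eval", "chatgpt", True),
    ("vendor-eval", "chatgpt", False),
    ("vendor-eval", "perplexity", False),
    ("vendor-eval", "gemini", False),
    ("how-do-i", "chatgpt", True),
    ("how-do-i", "claude", True),
]

def citation_share(events, family):
    """Fraction of polls in a query family where the brand was cited."""
    rows = [cited for fam, _model, cited in events if fam == family]
    return sum(rows) / len(rows) if rows else 0.0

# Per-family thresholds; values here are placeholders for the
# customer-configured ones.
THRESHOLDS = {"vendor-eval": 0.5, "how-do-i": 0.5}

alerts = [
    fam for fam, thresh in THRESHOLDS.items()
    if citation_share(events, fam) < thresh
]
print(alerts)  # families that would page #geo-drift
```

With this sample, "vendor-eval" sits at 25% citation-share, below its 50% threshold, so it alone would trigger the Discord alert.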

Weekly — signal engineering

Citation gaps are diagnostic. They tell us what authority signals the model couldn't find — or found and discounted. Once a week, the engineer (the same one all month) reviews the gap report and ships structured fixes:

Schema PR pack

A small set of JSON-LD additions (Product, FAQ, Article, Service, Organization with knowsAbout, HowTo when warranted) targeted at the gaps. Delivered as a GitHub pull request if you grant repo access — fully reviewed, ready for your tech lead to merge. Otherwise as a markdown patch your CMS team can drop in. Each schema block carries a comment linking back to the gap report row that justified it, so future engineers can see why the schema is there.
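For illustration, here is the kind of Organization-with-knowsAbout block such a PR might add, built in Python and wrapped for embedding. Names, URLs, and topics are placeholders, not a real deliverable:

```python
import json

# Illustrative schema.org Organization block; every value below is a
# placeholder.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "YourBrand",
    "url": "https://yourbrand.com",
    "sameAs": [
        "https://github.com/yourbrand",
        "https://www.linkedin.com/company/yourbrand",
    ],
    "knowsAbout": [
        "generative engine optimization",
        "citation telemetry",
    ],
}

# Wrap as the <script> tag your CMS team would paste into the page head.
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(org, indent=2)
    + "\n</script>"
)
print(snippet)
```

`knowsAbout` is the property doing the entity-claim work here: it tells the retrieval layer which topics the organization is an authority on.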

llms.txt + AI sitemap

A curated index of canonical sources at yourbrand.com/llms.txt, structured per the emerging llms.txt community spec. We list the canonical URLs you want models to consult, the topics they cover, and their priority. This reduces hallucinated competitor mentions: models that read llms.txt prefer your stated facts to inferred ones.
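A minimal sketch of what that index might look like; the spec is still settling, so the section names, URLs, and layout here are illustrative:

```markdown
# YourBrand

> CRM for mid-market SaaS teams. Canonical facts about features,
> pricing, and integrations live at the URLs below.

## Docs

- [Feature overview](https://yourbrand.com/features): what the product does
- [Pricing](https://yourbrand.com/pricing): current plans and limits

## Comparisons

- [YourBrand vs. alternatives](https://yourbrand.com/compare): maintained comparison tables
```

The format is plain markdown on purpose: a retrieval layer can ingest it in one cheap fetch without rendering the site.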

Citation-bait briefs

Gap-driven content briefs (typically 1–3 pages/month) with the structure that model crawlers reward: a clear entity-relation graph, comparison tables, FAQ JSON-LD, and citation-ready statistics. We don't write the pages — your team or your agency does — but we tell you what's worth writing, what schema to wrap it in, and which third-party sources to seed if the topic isn't yet adequately corroborated externally.
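As one concrete example of the schema a brief might specify, a hypothetical FAQPage block; the Q&A pair is a placeholder, not deliverable copy:

```python
import json

# Illustrative schema.org FAQPage block; question and answer text are
# placeholders a brief would replace.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Does YourBrand integrate with Salesforce?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. The integration syncs contacts and deals "
                        "bidirectionally; setup takes about ten minutes.",
            },
        }
    ],
}
print(json.dumps(faq, indent=2))
```

Each Question/Answer pair is a self-contained, quotable claim, which is exactly the shape a generative engine can lift into a cited answer.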

Third-party signal seeding

For Scale and Enterprise tiers: when the gap report shows a model is consistently citing a competitor's third-party coverage (a Reddit thread, a Stack Overflow answer, a comparison post on a publication) but no equivalent of yours exists, we identify the gap and hand it to your PR/community team with a brief on what the equivalent should look like.

Monthly — executive PDF

On the 1st of each month, a Cloudflare Workflow renders an executive PDF. The shape is always the same, so you can trend across quarters without re-orienting.

Delivered to your inbox + posted to #geo-monthly in your Discord. The Workflow that renders it is part of Cluster Ops — open-source for Garnet customers, auditable end-to-end.

Day 1, Day 30, Day 90

Day 1 — onboarding kickoff

Day 30 — first executive PDF

Day 90 — full panel maturity

What success looks like

Across the first 90 days, GEO Pro typically moves brand citation-share from a sub-5% baseline to 30–60% top-3 placement on tracked queries. GEO Scale ($4,999/mo) targets >60% on a wider tracked set (typically 80–150 queries) plus the third-party signal seeding work above. GEO Enterprise ($14,999/mo) adds adversarial-query monitoring (catching deliberately misleading prompts your category may face), competitive-takedown briefs, and per-region tracking for brands operating across EU/US/APAC, where retrieval temperaments diverge.

What it isn't

FAQ

How fast does citation-share actually move?

The first measurable lift typically lands 3–4 weeks after the first schema PR ships, because that's the lag between schema appearing on your domain and the model's next retrieval index pass picking it up. Pages with strong existing inbound signals move faster (1–2 weeks); cold-start pages take 4–6 weeks. Variance comes mostly from how often each model refreshes its retrieval index against your domain.

Do you need access to our codebase?

For Pro and Scale: helpful but not required. We deliver schema PRs as GitHub pull requests if you grant access, otherwise as markdown patches your team applies. For Enterprise: yes — repo access lets us land schema, llms.txt, and routing fixes without round-tripping through your CMS team, which buys ~5–8 days/month back on the cycle.

What if our content is locked behind login or a paywall?

Generative engines don't cite content they can't retrieve. If your authority lives behind a wall, GEO can't engineer for it directly — we can engineer the public-facing surface (case studies, methodology pages, comparison content) so the model has something to cite that accurately reflects the gated material. Some Enterprise customers run an "AI-public summary" tier of their docs specifically for this.

How is this different from "AI SEO" agencies popping up everywhere?

Most "AI SEO" offerings are SEO agencies that added a chapter on AI Overviews. They optimize for the same Google ranked-list signals and assume those signals carry over to generative engines. They only partially do: citation behavior across the four-model panel diverges from organic ranking by ~30–50% on buyer-intent queries. GEO is built around the actual citation telemetry, not the SEO proxies for it.
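One crude way to quantify that divergence is the overlap between the organically ranked top-k and the model's cited set. The metric below is our illustration with made-up domains, not a standard measure:

```python
def overlap_at_k(organic: list[str], cited: list[str], k: int = 10) -> float:
    """Share of the model's cited domains that also rank organically
    in the top k. Low overlap = high ranking/citation divergence."""
    top = set(organic[:k])
    hits = sum(1 for domain in cited if domain in top)
    return hits / len(cited) if cited else 0.0

# Hypothetical data for one query.
organic = ["a.com", "b.com", "c.com", "d.com"]   # organic top results
cited = ["b.com", "x.com", "y.com"]              # model's cited set

share = overlap_at_k(organic, cited)  # 1/3: only b.com appears in both
print(round(share, 2))
```

In this toy case two of three cited domains never rank organically, which is the pattern an SEO-proxy playbook would miss entirely.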

Can we cancel mid-month?

Yes — monthly subscription, cancel any time, last full report still ships at month-end. The only thing that doesn't carry over after cancellation is the polling itself; your historical data stays in your R2 tenant. Most cancellations we've seen come from acquisitions, not dissatisfaction — but the door is open both ways.

Adjacent lanes

GEO is one of three production lanes. Customers running a serious citation-engineering program often pair it with the other two.

See GEO pricing →   See the 30-day onboarding walkthrough →   or talk to engineering