The four approaches
Brands that want to be cited by ChatGPT / Claude / Perplexity / Gemini today have four paths. Each works for some queries; each leaves others on the floor. The choice depends on your starting point and the query categories that actually translate to revenue.
- Traditional SEO agency — pivots existing tooling toward AI Overviews, treats it as another organic-search surface.
- "AI SEO" agency — newer firms positioning around the AI-search wave; typically SEO agencies that added a chapter on schema and llms.txt.
- In-house schema team — your own engineers writing JSON-LD, often as a side project on the marketing or platform team.
- Garnet GEO — citation-engineering as a recurring engineering retainer, with daily polling against the four-model panel as the feedback loop.
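Concretely, the schema work referenced above means JSON-LD blocks shipped into page templates. A minimal sketch of the shape of that deliverable — the brand name, URLs, and fields here are placeholders, not anyone's actual markup:

```typescript
// Minimal JSON-LD Organization schema of the kind an in-house team or a
// retainer would ship as a PR. All names and URLs are placeholders.
const organizationSchema = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "ExampleCo",                                  // placeholder brand
  url: "https://example.com",
  sameAs: [
    "https://github.com/exampleco",                   // third-party profiles
    "https://www.linkedin.com/company/exampleco",     // reinforce entity identity
  ],
};

// Serialized form, ready for a <script type="application/ld+json"> tag.
const jsonLd = JSON.stringify(organizationSchema, null, 2);
console.log(jsonLd);
```

The difference between the four approaches is less about what this block contains and more about who writes it, who merges it, and whether anyone measures what the models do with it afterward.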
What moves citation share
| | Traditional SEO | "AI SEO" agency | In-house schema | Garnet GEO |
|---|---|---|---|---|
| Daily citation telemetry across 4 models | No | Sometimes (weekly) | No | Yes |
| Per-model retrieval-temperament tuning | No | No | No | Yes |
| Schema delivered as merged GitHub PRs | No (handoff doc) | No (handoff doc) | Yes | Yes |
| llms.txt curation | No | Sometimes | Sometimes | Yes |
| Drift alerts within 15 min of citation loss | No | No | No | Yes |
| Third-party signal seeding (Reddit / SO / HN) | PR-team handoff | PR-team handoff | No | Yes (briefs included at Scale+) |
| Engineering follow-through | Recommendations | Recommendations | Yes (your team) | Yes (Garnet team) |
| Monthly executive PDF | Slide deck | Slide deck | Internal | Auto-rendered |
| Cost (annualized, for an equivalent scope) | $60K–$200K | $72K–$240K | 0.5–1 FTE = $80K–$200K loaded | $24K (Pro) – $180K (Enterprise) |
Where each approach wins
Traditional SEO is right when
- Your buyer-intent queries skew heavily toward Google ranked-list (e.g. local business, consumer e-commerce, content publishers monetizing display ads). The buyer hits a SERP page, not a Perplexity answer.
- You don't yet have citation-share telemetry on the four-model panel — start with SEO, then add GEO once you have a baseline measurement.
- Your category isn't yet well-represented in the open models' training data. Even with perfect schema, the model doesn't have you to cite. Authority groundwork (PR, content, third-party coverage) compounds first.
"AI SEO" agencies are right when
- You want a single-vendor relationship covering both the SEO and GEO surfaces, and the agency has demonstrably moved citation share (not just SERP rank or AI-Overview inclusion) on prior clients. Ask for the polling methodology before signing.
- Your schema work is genuinely greenfield — the agency can ship the first 80% of obvious additions quickly while you decide whether to deepen with a citation-engineering retainer.
In-house schema is right when
- You have engineering bandwidth for monthly schema PR work AND a marketing partner who can author the gap-driven content briefs AND someone running the citation polling feedback loop. Most teams find one of those is missing.
- Your stack has unusual constraints (custom CMS, headless setup, multi-region locale routing) where an outside vendor would burn a quarter just learning the architecture.
- You're past Series C with a serious AI-search line item and the citation operator role is a full-time hire. Above ~$300K/year of AI-search spend, in-housing the operator is usually cheaper than a retainer.
Garnet GEO is right when
- You want citation polling as the feedback loop, not as a quarterly check-in. The daily-polling cadence is what surfaces drift early.
- You want schema and llms.txt as merged code in your repos, not as recommendation slides. The retainer ships PRs.
- You want one engineer accountable across the engagement, not a team of associates with rotating ownership. Same engineer ships the lane every month.
- You don't want to build the polling infrastructure yourself — Garnet's polling Worker, aggregation pipeline, and PDF-rendering Workflow are deployed in your Cloudflare account but operated by Garnet.
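The drift-alert mechanic is simple to sketch once you have a daily citation-share series per tracked query. The threshold and window below are illustrative choices, not Garnet's actual alerting rules:

```typescript
// Compare today's citation share for a query against a trailing baseline
// and flag drift when the drop exceeds a threshold. The 15-point threshold
// and 7-day window are illustrative.
type CitationSample = { date: string; citedRate: number }; // citedRate in [0, 1]

function detectDrift(
  history: CitationSample[],   // oldest → newest; last entry is "today"
  dropThreshold = 0.15,        // alert if share falls 15+ points vs baseline
  baselineWindow = 7,          // trailing days used for the baseline
): boolean {
  if (history.length < baselineWindow + 1) return false; // not enough data yet
  const today = history[history.length - 1].citedRate;
  const window = history.slice(-baselineWindow - 1, -1);
  const baseline = window.reduce((sum, s) => sum + s.citedRate, 0) / window.length;
  return baseline - today >= dropThreshold;
}
```

The cadence argument falls out of this directly: with daily polling the baseline is a week of real samples and a drop fires the next day; with monthly polling the same logic can only fire a month late.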
The 30–50% divergence: why it matters
Across the brands we've measured, the citation behavior on buyer-intent queries diverges 30–50% from organic ranking. A page that ranks #1 on Google for "best vector database for serverless" might be cited 0% by Perplexity (which prefers llms.txt and dense citation graphs over Google's authority signals). A page that doesn't even rank in the top 20 organically might be cited 60% of the time by ChatGPT (which rewards authoritative entities with rich schema, regardless of inbound link count).
That divergence means SEO-optimization-for-AI-Overviews — the implicit play behind most "AI SEO" offerings — is structurally incomplete. It captures the Google-overlap; it misses the divergence. The divergence is where the buyers your competitors aren't reaching live. Citation-engineering is the discipline that targets it.
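One way to quantify that divergence: for each buyer-intent query, map organic rank to a visibility score and compare it against measured citation share. The rank-to-visibility mapping below is a crude proxy (any monotone click curve would do), and the numbers are made up:

```typescript
// Mean absolute divergence between organic-rank-derived visibility and
// measured citation share, averaged over tracked queries. Illustrative.
type QuerySignal = { organicRank: number | null; citationShare: number };

// Crude visibility proxy: rank 1 ≈ 1.0, decaying toward 0; unranked = 0.
function rankVisibility(rank: number | null): number {
  return rank === null ? 0 : 1 / rank;
}

function meanDivergence(queries: QuerySignal[]): number {
  const total = queries.reduce(
    (sum, q) => sum + Math.abs(rankVisibility(q.organicRank) - q.citationShare),
    0,
  );
  return total / queries.length;
}
```

The two examples from the paragraph above — ranks #1 but never cited, unranked but cited 60% of the time — are exactly the cases this metric surfaces and a SERP-rank report hides.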
What we'd recommend by company stage
Pre-product-market-fit: skip GEO, focus on the product. Citation share is a lagging indicator of authority; without authority signals (third-party coverage, well-cited customer content, actual usage volume) there's nothing to engineer.
Post-PMF, pre-Series-B: GEO Pro ($1,999/mo). Tracked-query set focused on category-defining buyer intent. Schema PR pack monthly. The cost is small enough to absorb from the marketing budget; the citation-share lift compounds because your category is still settling.
Series B / mid-market: GEO Scale ($4,999/mo). 80–150 tracked queries, weekly schema PRs, third-party signal seeding, citation-bait briefs. This is where the retainer pays for itself in pipeline impact within a single quarter.
Enterprise / post-Series-D: GEO Enterprise ($14,999/mo). Adversarial-query monitoring, competitive-takedown briefs, per-region tracking. Or hire the citation operator full-time and use Garnet for the infrastructure layer (Cluster Ops + the operator-bus pattern from Sentinel).
How to evaluate any GEO/AI-SEO vendor
If you're talking to a vendor that says they handle "AI search optimization," ask:
- How often do you actually poll the model panels? If the answer is "weekly" or "monthly," you're getting drift detection at a coarser cadence than the models update at. Daily polling is the floor.
- How many engines do you track? ChatGPT alone misses 60% of the citation surface. The four-model panel is the minimum.
- Do you tune signals per model? Each model has a different retrieval temperament. A vendor that ships one schema pack for "all four models" is probably SEO-tooling repurposed.
- What's the deliverable shape — slide deck or merged code? Slide decks become technical debt; merged PRs become production.
- Who runs your polling infrastructure? If it's in their account, cancellation kills your historical data. If it's in your account (the Garnet pattern), your data outlives the vendor relationship.
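The multi-engine question in that checklist is checkable from your own polling data: compute what fraction of queries where you were cited anywhere a single engine would have surfaced on its own. A sketch, with made-up data:

```typescript
// engine → set of tracked queries where the brand was cited by that engine.
type Citations = Record<string, Set<string>>;

// Share of all cited queries (union across the panel) that one engine
// alone covers. A low number means single-engine tracking misses surface.
function singleEngineCoverage(citations: Citations, engine: string): number {
  const union = new Set<string>();
  for (const queries of Object.values(citations)) {
    for (const q of queries) union.add(q);
  }
  if (union.size === 0) return 0;
  return citations[engine].size / union.size; // each engine's set ⊆ union
}
```

If your own data shows single-engine coverage well under 1.0, the vendor's answer to "how many engines do you track?" matters in proportion.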
Adjacent lanes
If your team is also evaluating other lanes:
- Audit Retainer — vs Big-4 architecture audits, in-house platform teams, security consultancies. The same continuous-vs-one-shot argument applies.
- Sentinel-aaS — vs Zapier, vs custom Cloudflare-Worker builds, vs PagerDuty + Slack. The operator-bus pattern is non-obvious until you've seen it work.
- Cluster Ops — vs API-only inference, vs cloud GPU farms, vs in-house DevOps owning your Mac Mini rack. The economic argument flips around $5K–$10K/month of API spend.
See GEO pricing → Read the full methodology → Talk to engineering