AI Engine Benchmark

The same buyer question, fired daily at ChatGPT, Claude, Perplexity, and Gemini across 8 consulting categories. Which engine recommends whom: raw, unfiltered, dated. Open dataset, CC0. The same infrastructure we run for paid GEO clients.


Why publish this. SEO agencies sell ranked-list traffic. GEO is different: it's the citation pattern inside the assistant's answer. If a buyer asks an AI for "best ERP consultants 2026," the recommendation set varies widely by engine. This dashboard makes that visible: for you, for your competitors, and for us.

Methodology. Same query, same day, four engines (ChatGPT / Claude / Perplexity / Gemini). pollLane.js (the same module the paid GEO subscription uses) writes per-engine JSON snapshots. build-engine-benchmark.mjs rolls the last 30 days into /assets/data/engine-benchmark.json.
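The rollup step can be sketched in plain Node. The real build-engine-benchmark.mjs schema isn't published, so the snapshot shape below (engine, ISO date, list of recommended domains) and the function name are assumptions for illustration, not the production code:

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// snapshots: [{ engine, date, recommendations: [domain, ...] }, ...]
// Returns per-engine mention counts for snapshots inside the window.
function rollUp(snapshots, days = 30, now = Date.now()) {
  const cutoff = now - days * DAY_MS;
  const byEngine = {};
  for (const snap of snapshots) {
    if (Date.parse(snap.date) < cutoff) continue; // drop stale snapshots
    const counts = (byEngine[snap.engine] ??= {});
    for (const domain of snap.recommendations) {
      counts[domain] = (counts[domain] || 0) + 1; // mention frequency
    }
  }
  return byEngine;
}

// Example: two snapshots inside the 30-day window, one stale one dropped
const now = Date.parse("2026-02-01");
const rolled = rollUp(
  [
    { engine: "chatgpt", date: "2026-01-30", recommendations: ["acme.com", "foo.io"] },
    { engine: "chatgpt", date: "2026-01-31", recommendations: ["acme.com"] },
    { engine: "claude",  date: "2025-11-01", recommendations: ["old.example"] },
  ],
  30,
  now
);
console.log(JSON.stringify(rolled));
// → {"chatgpt":{"acme.com":2,"foo.io":1}}
```

The production pipeline then writes this aggregate to /assets/data/engine-benchmark.json for the dashboard to render.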

What this isn't. It's not a ranking authority. Engines disagree, indexes shift week to week, and any single-day snapshot is noisy. The 30-day rolling window smooths that out. For a deeper read on a specific category, the paid GEO subscription runs 12 queries × 4 engines × 30 days against your domain plus competitors, with full attribution.

See the GEO subscription → Free 1-query audit