When we first started hearing "GEO" framed as the next big thing, our honest reaction was scepticism mixed with déjà vu. It felt very similar to earlier moments in SEO where a real shift was happening underneath, but the industry response jumped straight to tactics before evidence.
The terminology was ahead of the data. We were naming and productising something before we had stable signals to optimise against.
After actually testing GEO-style approaches with clients across different sectors, our thinking has shifted. Not towards hype, but towards nuance. The most honest answer right now really is "it depends", but the important part is what it depends on.
Here's how to work that out using your own data.
What GEO Actually Means
GEO (Generative Engine Optimisation) means optimising for LLM-powered engines like ChatGPT, Gemini, and Perplexity, which now answer queries directly, often before users click through to any website. It sits alongside AEO (Answer Engine Optimisation), which focuses on AI-generated answers such as Google's snippets, and "AI SEO", a looser umbrella term that covers more or less all of the above.
The context matters because AI platforms are starting to drive measurable referral traffic. Between September 2024 and February 2025, SMB referral traffic from generative AI rose by 123%, and ChatGPT accounts for roughly 87% of all AI referral traffic across major industries.
Set Up Proper Tracking First
Before anyone reallocates a single hour from SEO to GEO, you need a measurement spine that answers one question: If organic clicks go down, do we lose revenue, or do we just lose noise?
Here's the minimum viable measurement setup:
Segment organic traffic by intent, not pages
If you're still looking at "organic sessions" in aggregate, GEO decisions will be guesswork. Classify your top organic queries and pages into informational, commercial investigation, and transactional categories. Track trend lines separately for each segment.
AI answer engines disproportionately affect informational demand. If that segment drops 30% but commercial stays flat, GEO urgency is low.
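As a concrete sketch of that segmentation, here's a minimal rule-based classifier over a Search Console export. The keyword lists and the (query, clicks) row shape are assumptions; tune both to your own query data.

```python
# Sketch: bucket Search Console queries into intent segments using
# simple keyword rules, then total clicks per segment.
# The rule lists below are illustrative assumptions, not a standard.
from collections import defaultdict

INFORMATIONAL = ("what is", "how to", "why", "guide", "examples")
COMMERCIAL = ("best", "vs", "comparison", "review", "alternatives")
TRANSACTIONAL = ("buy", "pricing", "price", "demo", "quote")

def classify_intent(query: str) -> str:
    q = query.lower()
    # Check highest-intent buckets first so "best crm pricing"
    # lands in transactional, not commercial.
    if any(k in q for k in TRANSACTIONAL):
        return "transactional"
    if any(k in q for k in COMMERCIAL):
        return "commercial"
    if any(k in q for k in INFORMATIONAL):
        return "informational"
    return "unclassified"

def clicks_by_segment(rows):
    """rows: iterable of (query, clicks) tuples, e.g. a GSC export."""
    totals = defaultdict(int)
    for query, clicks in rows:
        totals[classify_intent(query)] += clicks
    return dict(totals)
```

Substring rules will misclassify edge cases (e.g. "vs" inside a longer word), so treat this as a triage pass before manual review, not a final taxonomy.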
Monitor branded versus non-branded trends
This is your proxy for mental availability. Track branded search impressions and clicks from Search Console monthly. Look for lagged correlation (4 to 12 weeks) with informational content publication and known AI answer exposure moments.
If AI answers are increasing awareness, branded demand should rise after informational clicks fall.
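One way to check that lagged relationship: shift the branded series back by the lag and compute a plain Pearson correlation. A stdlib-only sketch, assuming two aligned weekly series pulled from Search Console:

```python
# Sketch: correlate informational clicks in week t with branded demand
# in week t + lag. Assumes aligned weekly totals (e.g. from the
# Search Console API).
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def lagged_correlation(informational_clicks, branded_impressions, lag_weeks):
    """Pair week t of informational clicks with week t + lag of branded demand."""
    if lag_weeks >= len(branded_impressions):
        raise ValueError("lag longer than the series")
    x = informational_clicks[: len(informational_clicks) - lag_weeks]
    y = branded_impressions[lag_weeks:]
    n = min(len(x), len(y))  # guard against unequal-length series
    return pearson(x[:n], y[:n])
```

Run it across lags from 4 to 12 weeks and look for where the correlation peaks; a single lag value proves little on its own.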
Connect organic to pipeline, not just leads
Lead volume is a trap here. Track pipeline influenced by organic through first-touch or meaningful-touch attribution. Monitor conversion rates by intent segment. If lost traffic was low-intent, pipeline shouldn't move. If it does, GEO risk is real.
Watch query-level volatility
Create a watchlist of 20 to 50 high-value informational queries. For each, track impression trends, CTR trends, and ranking stability monthly. If impressions are flat but CTR collapses, AI answers are intercepting demand. That tells you where to test GEO, not whether to rewrite everything.
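The "flat impressions, collapsing CTR" pattern is easy to automate across the watchlist. A sketch, assuming a monthly history of (impressions, clicks) per query; the 15% drift and 40% drop thresholds are illustrative assumptions, not benchmarks:

```python
# Sketch: flag watchlist queries where impressions hold roughly steady
# but CTR collapses -- the pattern suggesting AI answers are
# intercepting demand. Thresholds are illustrative assumptions.
def flag_ctr_collapse(history, impression_drift=0.15, ctr_drop=0.40):
    """history: {query: [(impressions, clicks), ...]} oldest to newest.
    Compares the first and last month of each query's history."""
    flagged = []
    for query, months in history.items():
        (imp0, clk0), (imp1, clk1) = months[0], months[-1]
        if imp0 == 0 or clk0 == 0 or imp1 == 0:
            continue  # not enough signal to compare
        impressions_stable = abs(imp1 - imp0) / imp0 <= impression_drift
        ctr0, ctr1 = clk0 / imp0, clk1 / imp1
        ctr_collapsed = (ctr0 - ctr1) / ctr0 >= ctr_drop
        if impressions_stable and ctr_collapsed:
            flagged.append(query)
    return flagged
```

Flagged queries are exactly where a small GEO test is worth running first.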
Capture qualitative sales signals
This is the most underused and most revealing input. Keep a shared document where your sales team drops in verbatim quotes mentioning "ChatGPT", "AI search", or "I saw you mentioned". Mental availability often shows up here before it shows up in dashboards.
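If call notes or CRM exports are available in bulk, a simple keyword scan can seed that shared document. A sketch; the phrase list mirrors the quotes above, and the (deal_id, text) note shape is an assumption:

```python
import re

# Phrases worth surfacing from sales notes; extend with whatever your
# buyers actually say.
AI_PHRASES = ["chatgpt", "ai search", "i saw you mentioned"]
PATTERN = re.compile("|".join(re.escape(p) for p in AI_PHRASES), re.IGNORECASE)

def extract_ai_mentions(notes):
    """notes: iterable of (deal_id, text); returns (deal_id, sentence) hits."""
    hits = []
    for deal_id, text in notes:
        # Naive sentence split; good enough for triage.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if PATTERN.search(sentence):
                hits.append((deal_id, sentence.strip()))
    return hits
```

This only triages; the verbatim quote and its context still belong in the shared document by hand.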
Run a Small, Meaningful Test
If you see some GEO risk but aren't ready to commit full resources, the goal of your first test is signal detection. You're testing whether AI systems will choose you when you deliberately give them the chance to.
Pick a single, high-stakes query cluster. One category, one buyer stage (usually commercial investigation), and 5 to 10 closely related queries like "best X for Y" or "how to choose X when". AI answers generalise, so if you test scattered topics, you won't learn anything.
Choose the right page or create one canonical asset. The page must be authoritative already (top 5 to 10 rankings), sit close to revenue, and have a clear point-of-view opportunity.
Make one strategic change, not many. Introduce a named, opinionated decision framework that defines the category, explicitly states trade-offs, and makes a recommendation conditional (not universal). Don't add schema or reformat the whole page for AI yet.
Reinforce the entity signal lightly. Ensure your brand is explicitly named in the framework. Align internal links to point to this page as the canonical explanation. Reference the framework once on a core commercial page.
Define success before you ship. Look for qualitative and visibility signals first. Does your brand start appearing in AI answers for those queries? Is your framework language paraphrased or echoed? Are competitors framed relative to your criteria?
Traffic recovery is not a success metric here.
Recognise the Variables That Matter
After testing with clients, we've noticed that impact is highly intent-dependent. Informational, top-of-funnel queries are where AI answer engines most clearly intercept traffic. Commercial and high-consideration queries still lean heavily on traditional search journeys.
Brand strength matters more than optimisation tricks. In AI answers, being the cited or synthesised source correlates far more with existing authority, clarity, and consistency than with any new GEO tactic. An Ahrefs study of 75,000 brands found that brand web mentions had a 0.664 correlation with AI Overview mentions, whilst backlinks had only a 0.22 correlation.
Traffic loss doesn't always equal value loss. In some cases, fewer clicks don't hurt pipeline because the queries being displaced were never converting anyway. In others, especially content-led demand generation, the loss is very real. Without tying this back to downstream metrics, GEO discussions stay superficial.
Resource Allocation That Makes Sense
There is no universal percentage to shift from SEO to GEO. But there is a defensible way to bound the decision so it's rational, reversible, and tied to economic reality rather than fear.
Start from risked value, not budget percentages. Quantify how much revenue is influenced by organic search, then narrow it to the portion tied to interceptable intent (informational and commercial investigation queries that trigger AI answers today). Subtract what GEO cannot realistically defend, like pure "what is" queries or commodity definitions.
What's left is your defendable value pool.
GEO investment should not exceed the gross margin of the revenue it can plausibly protect or amplify. If $5m of pipeline is genuinely at risk and your gross margin is 70%, then your upper bound for GEO spend is roughly $3.5m over time. You will never spend that, but it anchors the discussion in economics.
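The bounding arithmetic can be written down explicitly. A sketch; the shares and figures in the example are the article's illustrative numbers, not benchmarks:

```python
def geo_spend_ceiling(organic_revenue, interceptable_share,
                      undefendable_share, gross_margin):
    """Upper bound on GEO spend: gross margin of the defendable value pool."""
    interceptable = organic_revenue * interceptable_share
    defendable_pool = interceptable * (1 - undefendable_share)
    return defendable_pool * gross_margin

# e.g. $10m organic-influenced revenue, half tied to interceptable intent,
# nothing written off, 70% gross margin -> roughly a $3.5m ceiling,
# matching the worked figure above.
```

As the text says, this is an anchor for the discussion, not a budget target.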
Weight the amplification upside, not just defence. If your test showed success, GEO isn't just protecting SEO value. It's acting like a brand accelerator through shortlisting bias, sales efficiency, and brand compounding. That justifies a higher allocation even if traffic never recovers.
Apply practical constraints. If positioning isn't sharp, cap GEO at 10 to 15% of SEO effort and focus on 1 to 2 categories only. If positioning is sharp, 20 to 30% becomes realistic. Beyond that, returns diminish fast.
One Action to Take This Week
If you want to start moving in the right direction, don't write content, change pages, or "test GEO" yet.
Identify the one buying decision in your market that most often happens before a sales conversation, and check whether you are named when that decision is summarised.
Sit down with sales and answer: "What do buyers usually need to decide before they talk to us?" Write it as a decision statement, not a query. Then explore that decision in Google (look at AI overviews) and one or two AI answer engines your buyers use.
Don't look for links. Look for who is named, how the trade-offs are framed, and what criteria are used.
Be brutally honest: If this answer stood alone, would it naturally point someone towards us?
Pull up your best page on the topic and ask whether you define the decision in the same way, whether you impose criteria or just describe options, and whether your explanation could be paraphrased without naming you.
Then write one sentence: "In our market, the right choice depends on ___, and most people get this wrong because ___."
If you can't write that sentence confidently, no amount of GEO work will stick yet. If you can, you've just uncovered your point-of-view gap, your category opportunity, and your first meaningful GEO test.
AI answer engines don't reward activity. They reward decisions made legible. The leader who does this one thing this week won't panic, won't chase tactics, and won't waste budget, because they'll finally be working on the problem that actually matters: Are we shaping how our buyers think, even when we're not in the room?
If you'd like to explore your business's potential GEO play for 2026, reach out to ADMATICians here.