The Sparkle Star Is Google's Trojan Horse

    April 27, 2026 | AI

    TL;DR

    • Gemini in Chrome collapses the research journey into the search interface itself, cutting click-through rates to near zero for many queries
    • AI Overviews now generate an average zero-click rate of 83%, compared to 60% for traditional search queries
    • LinkedIn is now the second most cited domain across major AI platforms, ahead of Wikipedia, YouTube, and every major news publisher
    • Your website is no longer the primary discovery asset. Distributed content and earned signals are now the training data for AI-driven visibility
    • Brands that keep changing their message every week are fragmenting their signal, not building it

    When we first tested Gemini's sparkle star in Chrome, we expected an efficiency tool.

    What we found was something fundamentally different.

    The sparkle star icon sitting in Chrome's toolbar is not making search faster. It is removing the need to search. And for brands built on traffic acquisition strategies, that is not an incremental shift. It is a structural one.

    What Actually Changed

    The behaviour shift is subtle but commercially profound.

    Gemini collapsed the consideration phase into the interface itself. Queries that would typically trigger a comparison journey, such as "best CRM for mid-sized business" or "top mining companies in Australia," were resolved within the SERP layer. Gemini synthesised multiple sources into a single, coherent viewpoint, effectively pre-curating the shortlist.

    For many categories, that now means:

    • No review-site browsing
    • No brand website hopping
    • No "research loop" across tabs

    From a media perspective, that is not a UX improvement. It is a compression of the funnel.

    The numbers back this up. Searches triggering AI Overviews now show an average zero-click rate of 83%, while traditional queries average around 60%. That means more than 8 out of 10 users now get their answer directly inside the search interface.

    It Chose a Different Version of the Internet

    One example crystallised this shift.

    We tested a query for a B2B client in the industrial services category: "Top providers of [specific industrial solution] in Australia."

    Pre-Gemini, this query reliably produced paid search ads, organic listings (client website included), and aggregators. With Gemini active, the same query returned a synthesised answer listing 3 to 5 providers with short summaries, citing sources without requiring a click.

    Here is what surprised us. Our client was included, but not via their website.

    Instead, the references underpinning Gemini's summary were:

    • A recent LinkedIn post from the company page discussing a project case study
    • A thought leadership article from the CEO's LinkedIn profile
    • A Reddit thread where industry professionals discussed vendors and named the client organically

    The company website, despite being well-optimised and historically ranking on page one, was absent from the synthesis layer.

    What we are seeing is Gemini prioritising evidence of real-world relevance over traditional SEO signals. The LinkedIn content signalled recency, authority, and expertise. The CEO's perspective added human credibility. The Reddit discussion validated market perception through peer endorsement.

    This aligns with broader patterns. LinkedIn is now the second most cited domain across major AI platforms, trailing only Reddit. On average, 11% of AI responses reference a LinkedIn URL. That puts LinkedIn ahead of Wikipedia, YouTube, and every major news publisher.

    Your Website Is No Longer the Primary Asset

    The shift underneath all of this is a reweighting of the digital ecosystem.

    Owned media (your website) is no longer the primary authority signal. Earned and distributed content is becoming the training data for discovery.

    If AI is forming opinions before a website visit, then influence is happening upstream of traffic. Most brands are still optimising for click-through rate, landing page conversion, and search rankings. But Gemini is optimising for:

    • Credibility signals across the open web
    • Consistency of narrative across sources
    • Presence in high-trust environments

    The question changes from "How do we get traffic?" to "How do we become part of the answer?"

    What This Means for Your Content Budget

    We are no longer planning against the web as users browse it. We are planning against the web as AI interprets it. Those are now two very different ecosystems.

    Historically, content budgets have followed a simple flow: Create content, rank in search, drive traffic, convert. That model assumes the website is the centre of gravity and that users do the evaluation themselves.

    In an AI-mediated environment, the flow becomes: Create signals, AI aggregates, AI recommends, user decides. The evaluation layer has been outsourced to the interface.

    Most clients today are still weighted like this:

    • 60 to 70% on paid search and performance capture
    • 20 to 30% on website and SEO
    • Less than 10% on thought leadership and distributed content

    That made sense in a click-driven economy. But if discovery is happening before the click, this model underinvests in the very inputs shaping the decision. There are three layers to this:

    Control Layer (Owned): Your website, landing pages, and SEO. Still critical, but increasingly a destination for confirmation rather than discovery. Its role is to validate credibility and convert high-intent demand.

    Influence Layer (Distributed): LinkedIn, creator platforms, editorial partnerships, high-trust environments. This is where the biggest opportunity sits. AI pulls from credible, frequently updated, high-authority environments. LinkedIn content and thought leadership are increasingly cited in AI outputs, yet this layer is often treated as "organic support" rather than a core investment.

    Validation Layer (Community and Earned): Reddit, forums, peer discussions, reviews. This is the hardest to control but increasingly the hardest to ignore. AI does not just look for what brands say. It looks for what others say about them.

    How to Measure What You Cannot See

    There is no Search Console for Gemini. We have moved into an environment where measurement is inferred, not directly reported.

    Instead of waiting for a platform dashboard that may never arrive, we build what we call a composite visibility model, stitching together signals that indicate whether a brand is likely being included in AI-generated answers.

    The closest thing we have to "seeing" Gemini is structured querying. We build a bank of prompts across three categories:

    • Category discovery: "Best X providers in Australia"
    • Comparative evaluation: "X vs Y vs Z"
    • Problem-led queries: "How to solve X in [industry]"

    We then run them across Gemini (and other LLMs) and track which brands appear, how often, and in what context. Over time, this creates a directional metric we call Share of AI Answer (SoAA).
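The tracking loop above can be sketched in a few lines. This is a minimal illustration, not production tooling: the example brand names and captured answers are hypothetical stand-ins, and a real pipeline would query Gemini's API, capture responses over time, and normalise brand mentions (aliases, abbreviations) far more carefully.

```python
from collections import Counter

def share_of_ai_answer(responses, brands):
    """Directional Share of AI Answer: the fraction of captured AI
    responses in which each brand is mentioned at least once."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for brand in brands:
            if brand.lower() in lowered:
                counts[brand] += 1
    total = len(responses) or 1  # avoid division by zero on an empty run
    return {brand: counts[brand] / total for brand in brands}

# Hypothetical answers captured from structured prompt runs
# (stand-ins for real Gemini output).
answers = [
    "Leading providers include Acme and BetaCorp...",
    "Acme is frequently recommended for mid-sized operations...",
    "Consider GammaWorks or BetaCorp for this use case...",
]
soaa = share_of_ai_answer(answers, ["Acme", "BetaCorp", "GammaWorks"])
# Acme appears in 2 of the 3 answers, so its share is 2/3
```

Re-running the same prompt bank on a fixed cadence turns these snapshots into the trend line that matters: whether a brand's share of the answer is rising or falling.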

    We also measure upstream signals that AI is most likely to use:

    • Growth in branded search queries
    • Increase in comparison-style searches that include the brand name
    • Earlier-stage queries that include the brand name unprompted

    If AI is recommending you, users start searching for you by name.
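That branded-search signal can be tracked with nothing more exotic than period-over-period growth on exported query volumes. A minimal sketch, using hypothetical monthly numbers for an assumed brand:

```python
# Hypothetical monthly branded-query volumes from a search analytics
# export (the figures are illustrative, not client data).
monthly_branded_queries = {
    "2026-01": 1200,
    "2026-02": 1260,
    "2026-03": 1450,
}

def period_growth(series):
    """Month-over-month growth rates for a dict of query volumes
    keyed by sortable period labels (e.g. YYYY-MM)."""
    months = sorted(series)
    return {
        months[i]: (series[months[i]] - series[months[i - 1]]) / series[months[i - 1]]
        for i in range(1, len(months))
    }

growth = period_growth(monthly_branded_queries)
# 2026-03 shows roughly 15% growth over 2026-02 (1450 vs 1260)
```

A sustained acceleration in this curve, without a corresponding paid or campaign push, is the upstream fingerprint of AI-driven recommendation.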

    The Single Variable That Moves the Needle

    When we have tracked Share of AI Answer over time for clients, the single biggest variable is not volume. It is not even quality in the traditional sense. It is the consistency of credible signals across multiple trusted environments.

    We have had clients publish more content and see no movement. We have had clients produce higher-quality content by classic standards and still plateau.

    But when SoAA shifts meaningfully, it is almost always because the same narrative starts showing up independently in multiple places that AI trusts. LLMs like Gemini are not ranking pages. They are building confidence in an answer. Confidence does not come from one strong source. It comes from signal convergence.

    A single high-quality blog is a claim. A LinkedIn post, an executive point of view, a publisher mention, and a community discussion together form a verified pattern.

    Where Commercial Impact Shows Up First

    It rarely shows up where brands expect.

    Most assume they will see it in traffic, leads, or campaign performance. But the first place narrative density creates a commercial signal is far earlier and far more qualitative. It shows up in the language prospects use when they enter the conversation.

    Before volume changes, before conversion rates move, you start hearing:

    • "We have been thinking about [your exact framing of the problem]..."
    • "We saw something about [your methodology]..."
    • "We are trying to move away from [the thing you have been critiquing]..."

    These prospects often cannot pinpoint where they heard it. That is the AI layer at work: synthesised understanding, blended sources, no single attribution point.

    The next shift is in who is coming in, not how many. You see fewer "educate me" conversations and more "validate this approach" conversations, with a higher baseline understanding of your category and point of view.

    In practical terms: shorter sales cycles, less time spent reframing the problem, and faster movement to commercial discussion.

    What to Stop Doing Immediately

    When a brand internalises the shift from competing for attention to competing for inclusion, the first thing we tell them to stop is usually uncomfortable, because it has been the backbone of their marketing for years.

    Stop creating net-new content every time you show up.

    Most brands are stuck in a production mindset: new post, new topic, new campaign, new angle, new quarter, new messaging. It feels productive. It looks active. But in an AI-shaped environment, it is destructive. Every time you introduce a new idea, you reset the signal, fragment your narrative, and weaken your association.

    From an AI perspective, you do not look dynamic. You look inconsistent.

    This "always new" approach was built for social algorithms that reward freshness, engagement metrics that reward novelty, and campaign cycles built for short-term bursts. AI does not reward novelty. It rewards pattern recognition over time.

    Replace "What should we say this week?" with "What idea are we reinforcing this week?"

    Instead of cycling through an industry trend, then a product update, then a culture post, then a case study, you move to: define a core tension, reframe the problem, introduce your point of view, show it working. Same idea. Different angles. Repeated intentionally.

    The Shift That Changes Everything

    The most dangerous assumption right now is this: "AI will just surface the best version of our existing content."

    It will not.

    Discovery is no longer a navigation problem. It is a consensus problem. You do not win by being the best page. You win by being the most consistently validated idea.

    When brands stop chasing novelty and start building consistency, narrative density increases, AI association strengthens, sales cycles shorten, and performance media becomes more efficient. Everything else starts to compound.

    If there is one line that sticks, it is this: if your message changes every week, you do not have a message. You have activity. And activity does not get you included in the answer. Only consistent, reinforced ideas do.

    If this is a question your team is working through, it is often useful to pressure-test the thinking with the team at ADMATIC who are navigating the same territory with clients across Australia and New Zealand.

    Frequently Asked Questions

    How does AI Engine Optimisation (AEO/GEO) differ from traditional SEO?

    Traditional SEO optimises for page rankings in search results. AI Engine Optimisation (AEO/GEO) focuses on whether a brand's content is included in AI-generated answers. SEO rewards the best-ranked page, while AEO/GEO rewards the most consistently corroborated idea across multiple trusted sources. For brand visibility, a well-ranked website is no longer sufficient. Brands need distributed, reinforced signals across high-authority environments such as LinkedIn, industry publications, and community forums to appear in AI Overviews and LLM responses. With AI Overviews now generating a zero-click rate of 83%, brands absent from AI-generated answers are invisible at the most critical point in the decision journey.

    What is Share of AI Answer (SoAA) and how is it measured?

    Share of AI Answer (SoAA) is a directional metric that tracks how often and how prominently a brand appears in AI-generated responses across platforms like Gemini, ChatGPT, and Perplexity. Because no native dashboard equivalent to Google Search Console exists for AI platforms, SoAA is measured through structured prompt querying across three categories: category discovery queries, comparative queries, and problem-led queries. These prompts are run regularly across AI platforms and the results tracked to identify which brands appear, how often, and in what context. Upstream signals such as growth in branded search queries and unprompted brand name mentions in early-funnel searches are used to validate SoAA tracking.

    Why does LinkedIn feature so prominently in AI citations?

    LinkedIn has become the second most cited domain across major AI platforms, trailing only Reddit, with 11% of AI responses referencing a LinkedIn URL, placing it ahead of Wikipedia, YouTube, and every major news outlet. AI systems favour LinkedIn because of its recency (frequently updated content), named authorship (executive posts carry identifiable human authority), domain specificity (industry-focused posts demonstrate topical depth), and engagement signals (likes, comments, and shares provide lightweight validation of content quality). For brands, this means LinkedIn is no longer an organic support channel. It is a primary input into AI-driven discovery.

    How should content budgets shift in an AI-mediated environment?

    Most content budgets are currently weighted toward paid search (60 to 70%), website and SEO (20 to 30%), and thought leadership or distributed content (less than 10%). In an AI-mediated environment, the recommended shift is toward signal-based budgeting across three layers: the Control Layer (website, SEO, and landing pages, still necessary for conversion and credibility validation but no longer the primary discovery channel), the Influence Layer (LinkedIn, editorial partnerships, and creator platforms, the most underinvested layer relative to its impact on AI inclusion), and the Validation Layer (Reddit, forums, peer reviews, and industry discussions, which provide uncontrolled third-party signals that AI treats as high-trust corroboration).

    What is the strongest lever for improving AI inclusion?

    The single strongest variable improving AI inclusion is consistency of credible signals across multiple trusted environments, not content volume or individual content quality. Brands that see meaningful improvement in Share of AI Answer typically define one central tension or problem framing and return to it repeatedly, publishing the same core idea from multiple sources including company LinkedIn, executive LinkedIn, earned editorial, and community discussion. Avoiding content calendar thinking that prioritises novelty is critical, as AI rewards pattern recognition over time. The inflection point typically occurs when the same ideas appear independently across multiple high-trust environments within the same period, at which point AI systems treat the narrative as corroborated and increase the likelihood of inclusion.
