Your AI Strategy Isn’t a Strategy — It’s SEO With a Rebrand
The real opportunity isn't a new acronym. It's embedding discoverability into how your organisation operates.
The panic is palpable. Every conference deck, every boardroom discussion, every vendor pitch now includes some variation of “AI Optimisation” or whatever acronym emerged last Tuesday.
Here’s the uncomfortable truth: most of what’s being sold as a revolutionary new discipline is foundational search engine optimisation, repackaged with worse data and better marketing.
But there’s a more interesting conversation buried under all this noise—one that decision-makers and practitioners alike keep missing. The real opportunity isn’t a new technical discipline. It’s an organisational one.
The Technical Reality: Grounding Is Just Retrieval
Let’s dispense with the mysticism.
When an AI system provides a factual answer and cites a source, it isn’t exercising judgment or developing preferences. It’s running a retrieval process—typically Retrieval-Augmented Generation (RAG)—to fetch relevant information before generating a response.
That retrieval step is, mechanically, a search task. It relies on indexing, vector search, and relevance scoring. The same principles we’ve been optimising for in traditional search for two decades.
If your content appears reliably in grounded AI responses, it’s because you’re crawlable, parsable, and topically relevant. If it doesn’t, no amount of “AI-specific optimisation” will save you. The grounding layer doesn’t run on different physics. It runs on the same information retrieval fundamentals, just with a generative interface on top.
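To make that concrete, here is a deliberately tiny sketch of the retrieval step in Python. The scoring is a toy bag-of-words cosine rather than the learned embeddings and vector indexes production systems use, and the documents and query are invented, but the shape of the task is the same: score what’s in the index against the query, keep the top results, and hand them to the generator.

```python
# A minimal sketch of the retrieval step behind grounded answers, using a
# toy bag-of-words relevance score. Production systems use embeddings and
# vector indexes, but the task is the same: score documents against a
# query, take the top results, and pass them to the generator.
import math
import re
from collections import Counter

def vectorise(text: str) -> Counter:
    """Crude term-frequency vector; real systems use learned embeddings."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by relevance to the query and return the top k."""
    q = vectorise(query)
    return sorted(documents, key=lambda d: cosine(q, vectorise(d)), reverse=True)[:k]

# Invented corpus and query, standing in for an index of your content.
documents = [
    "Our pricing page explains plans, billing cycles, and discounts.",
    "The careers page lists open roles in engineering and design.",
    "Structured data helps machines parse product and pricing information.",
]
query = "pricing plans and billing"
context = retrieve(query, documents)

# The generative step only sees what retrieval surfaced: if your content
# wasn't crawlable and parsable enough to be indexed, it never reaches here.
prompt = "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"
print(prompt)
```

Nothing in that loop cares whether the consumer is a search results page or a chatbot. Content that never made it into the index never makes it into the prompt.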
The supposed evidence for “two different rulebooks”—a site appearing in ChatGPT but invisible in Google—isn’t evidence of a new form of authority. It’s evidence of noise.
Google operates on largely deterministic retrieval. LLMs operate on stochastic token prediction. When a chatbot surfaces your brand in a response despite your absence from traditional SERPs, you haven’t unlocked a secret AI ranking factor. You’ve won a probabilistic lottery. That’s not a strategy. That’s a slot machine.
The predictable part (and the part you can actually influence) is grounding. And grounding is retrieval. And retrieval is search. Boy! This sounds familiar…
The Measurement Problem: Your “AI Data” Is Guesswork
This is where the new tooling ecosystem falls apart under scrutiny.
A wave of SaaS products now promise “AI visibility tracking” and “insight into the black box.” What they’re actually offering is inference based on outputs you can’t verify, from systems you can’t interrogate, personalised by context you can’t see.
These models are non-deterministic. They don’t expose query data. They don’t provide reliable attribution. When someone claims their “AI rankings” are up while their search visibility is down, and calls the data “directional,” what they mean is: “This is fabricated, but I hope the noise confirms my priors.”
There is currently no scientific method for measuring the ROI of “optimising for AI” because the engineers building these systems cannot fully explain the inference process themselves. If the scientists can’t map the output path, a dashboard with a confidence score certainly can’t either.
This doesn’t mean you should ignore AI as a discovery surface. It means you should be honest about what you can and cannot measure, and build strategy on the parts that are actually knowable.
The Real Opportunity: Discoverability as Organisational Capability
Here’s where the conversation gets more interesting.
The frantic search for “AI optimisation tactics” misses a larger point: discoverability—whether by traditional search engines, LLMs, or any future retrieval system—is not a channel to be gamed. It’s a capability to be built.
And that capability doesn’t live in the SEO team. It lives in how an organisation structures, surfaces, and communicates its information across every function.
The companies that will actually benefit from AI-driven discovery aren’t the ones hiring “AEO specialists” or buying AI prompt-tracking subscriptions. They’re the ones treating discoverability as a product concern, a content architecture concern, and a communications concern. Not a marketing afterthought bolted on post-launch.
This is the real shift. Not a new acronym. An organisational reframe.
When discoverability is embedded into how a business operates—into product development, into content strategy, into how teams think about information architecture—you stop optimising for individual channels and start building something that works across all of them.
Traditional SEO. AI grounding. Voice interfaces. Whatever retrieval surface comes next. The fundamentals are the same: be crawlable, be parsable, be relevant, be complete.
The question isn’t “how do we optimise for ChatGPT?” It’s “how do we build an organisation where discoverability is a first-class concern?”
Why This Keeps Not Happening
If this reframe is so obvious, why isn’t everyone doing it?
Because the incentive structures of the industry actively work against it.
Commercial pressure favours activity over outcome. Agencies and consultancies often need to demonstrate constant, visible work to justify retainers. It’s far easier to present a report on a new tactic or a reaction to a named algorithm update than to explain the slow, often invisible progress of foundational improvements. Selling “quick wins” is an easier business model than selling architectural discipline, even when the latter delivers more value.
The attention economy rewards novelty over rigour. Personal brands in the SEO space are built on being first to a new technique or trend. “Let’s wait for the data” doesn’t generate engagement. “New ranking factor discovered!” does. This creates a cycle where practitioners feel pressure to adopt new terminology and tactics to appear current, even when critical evaluation would suggest caution.
Tool marketing shapes workflow. Software vendors market features, and practitioners often adopt tool-centric thinking—solving for the tool’s checklist rather than the actual retrieval problem. When your workflow is organised around what a dashboard can measure, you optimise for the dashboard, not for discoverability.
Fear and insecurity drive short-termism. Imposter syndrome is rampant in the industry. Many practitioners, uncertain about their depth of knowledge, believe they must constantly chase the newest thing to avoid appearing outdated. Advocating for “boring” fundamentals feels professionally risky, even when it’s strategically correct.
KPIs are misaligned. When success is measured by narrow metrics (rank for a vanity keyword, visibility in a single tool) rather than business outcomes, the focus naturally shifts to short-term tactics. The architectural work that delivers compounding returns gets deprioritised because it doesn’t move the number this quarter.
Recognising these forces isn’t defeatism. It’s the first step to building something different.
What This Actually Looks Like in Practice
Treating discoverability as organisational capability sounds good in a strategy deck. Here’s what it means in operational terms.
Discoverability in Product Development
If the first time someone thinks about search is after a feature launches, you’ve already lost months.
Discoverability considerations belong in product requirements. Not as an afterthought, but as a core question: How will users find this?
This means asking, at the requirements stage:
What information need does this address, and how are people currently expressing that need in search?
What’s the URL and information architecture strategy for this feature or content?
How does this fit into the existing site structure, and what internal linking is required?
What structured data is needed to make this parsable by machines (search engines and LLMs alike)? A brief illustrative example follows this list.
What are the dependencies on engineering, content, and design to make this discoverable at launch rather than six months later?
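For the structured data question, the answer can be as concrete as a block of JSON-LD agreed at the requirements stage. The sketch below uses Python only to emit that block; the feature name, URL, and description are hypothetical, and which schema.org types apply depends on what is actually being shipped.

```python
# An illustrative answer to "what structured data is needed?" at the
# requirements stage. The feature name and URL are hypothetical; the
# schema.org types and properties are real, but the right ones depend on
# the feature being shipped.
import json

feature_page_schema = {
    "@context": "https://schema.org",
    "@type": "SoftwareApplication",
    "name": "Example Reporting Feature",               # hypothetical feature name
    "url": "https://example.com/features/reporting",   # hypothetical URL, decided with the IA strategy
    "applicationCategory": "BusinessApplication",
    "description": "Scheduled reports with export to CSV and PDF.",
}

# The JSON-LD the page template will embed in a
# <script type="application/ld+json"> tag at launch, not six months later.
print(json.dumps(feature_page_schema, indent=2))
```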
When these questions are part of the product process, you stop playing catch-up. SEO becomes a constraint that shapes better products, not a cleanup crew called in after the damage is done.
This requires SEO (or whoever owns discoverability) to have input at the roadmap and requirements stage. If they’re only consulted post-launch, the org structure is the problem.
Content Architecture as Technical Infrastructure
Most organisations treat content as a marketing asset: something the content team produces, the SEO team “optimises,” and the CMS stores.
This framing is inadequate for a world where machines need to retrieve, parse, and synthesise your information across multiple surfaces.
Content architecture — how information is structured, related, and surfaced — is technical infrastructure. It should be treated with the same rigour as your database schema or API design.
In practice, this means:
Entity modelling. Understanding what concepts your organisation needs to be known for, and ensuring those entities are consistently represented across your content. Not keyword lists — actual semantic entities with defined relationships.
Schema implementation. Structured data isn’t a “nice to have.” It’s how you communicate unambiguously with machines. This should be systematic across the site, maintained as code, and validated in CI/CD pipelines — not manually added to individual pages by the SEO team. A minimal sketch of such a check follows this list.
Internal linking logic. Links aren’t just navigation. They’re signals of relationship and importance. Internal linking should follow a defined architecture, not the whims of whoever last touched a page.
Content inventory discipline. Knowing what content exists, what state it’s in, what entities it covers, and how it relates to other content. Most organisations cannot answer these questions. Without answers, “content strategy” is guesswork.
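As a sketch of what “maintained as code, validated in CI” can mean in practice, the snippet below checks that rendered pages declare the JSON-LD types they are required to carry. The page fixture and required types are illustrative; a real pipeline would derive them from the entity model and render or crawl the pages first.

```python
# A minimal CI-style check: does each page declare the JSON-LD @type values
# it is required to? The page fixture and required types below are
# illustrative stand-ins for output of a crawl or render step.
import json
import re
import sys

LD_JSON = re.compile(
    r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def declared_types(html: str) -> set[str]:
    """Collect every @type declared in the page's JSON-LD blocks."""
    types: set[str] = set()
    for block in LD_JSON.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed JSON-LD is itself a failure worth flagging
        items = data if isinstance(data, list) else [data]
        for item in items:
            if not isinstance(item, dict):
                continue
            t = item.get("@type")
            if isinstance(t, str):
                types.add(t)
            elif isinstance(t, list):
                types.update(t)
    return types

# Hypothetical fixture: path -> (rendered HTML, required @type values).
pages = {
    "/features/reporting": (
        '<script type="application/ld+json">{"@context":"https://schema.org",'
        '"@type":"SoftwareApplication","name":"Example Reporting Feature"}</script>',
        {"SoftwareApplication"},
    ),
}

failures = []
for path, (html, required) in pages.items():
    missing = required - declared_types(html)
    if missing:
        failures.append(f"{path}: missing {sorted(missing)}")

if failures:
    print("\n".join(failures))
    sys.exit(1)  # fail the build, like any other broken contract
print("structured data checks passed")
```

Treated this way, a missing schema type breaks the build the same way a failing unit test does, which is exactly the point of calling content architecture infrastructure.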
When content architecture is treated as infrastructure, it becomes maintainable, auditable, and scalable. When it’s treated as a marketing deliverable, it accrues debt until someone declares SEO bankruptcy and starts again.
Distributed Ownership, Centralised Enablement
“Who owns SEO?” is usually the wrong question. If the answer is “the SEO team,” you’ve created a bottleneck and an excuse.
Discoverability is everyone’s concern: product, engineering, content, design, marketing. The SEO function shouldn’t be doing all the work. It should be enabling others to do the work correctly and auditing the outcomes.
A more useful ownership model:
Product teams are responsible for discoverability of their features. This is part of their success criteria.
Content teams are responsible for content architecture and entity coverage within their domains.
Engineering is responsible for technical foundations: crawlability, rendering, structured data implementation, performance.
The SEO/discoverability function provides standards, tooling, training, and audit. They’re the centre of excellence, not the centre of execution.
This doesn’t mean less SEO expertise. It means SEO expertise is leveraged across the organisation rather than siloed in a team that’s always behind.
The shift is from “SEO team does SEO” to “SEO team makes everyone better at discoverability.”
Measurement That Acknowledges Reality
Here’s what you can measure with reasonable confidence:
Organic traffic and its contribution to business outcomes (conversions, revenue, qualified leads)
Crawl behaviour and indexing status (this KB article on Log-file analysis might help; a minimal log-parsing sketch follows this list)
Ranking positions for defined keyword sets
Technical health metrics (Core Web Vitals, crawl errors, structured data validation)
Content coverage against your entity model
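The crawl-behaviour item is the most mechanical of these, so here is a minimal log-parsing sketch, assuming access logs in common/combined format. The sample lines and crawler list are illustrative, and a real pipeline should verify crawler identity by IP or reverse DNS rather than trusting the user-agent string alone.

```python
# A minimal sketch of log-file analysis: count requests per known crawler
# and per path from raw access-log lines. Sample lines and the crawler list
# are illustrative; verify crawler IPs in a real pipeline.
import re
from collections import Counter

CRAWLERS = ["Googlebot", "Bingbot", "GPTBot", "ClaudeBot", "PerplexityBot"]

LOG_LINE = re.compile(
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def crawler_hits(lines):
    """Count requests per crawler and per (crawler, path) pair."""
    by_bot = Counter()
    by_path = Counter()
    for line in lines:
        m = LOG_LINE.search(line)
        if not m:
            continue
        agent = m.group("agent")
        bot = next((c for c in CRAWLERS if c in agent), None)
        if bot:
            by_bot[bot] += 1
            by_path[(bot, m.group("path"))] += 1
    return by_bot, by_path

# Hypothetical sample lines standing in for a real access.log file.
sample = [
    '66.249.66.1 - - [01/May/2025:10:00:00 +0000] "GET /features/reporting HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '20.15.240.1 - - [01/May/2025:10:00:02 +0000] "GET /pricing HTTP/1.1" 200 2048 "-" "Mozilla/5.0 AppleWebKit/537.36 (compatible; GPTBot/1.0; +https://openai.com/gptbot)"',
]

bots, paths = crawler_hits(sample)
print(bots)   # which crawlers are hitting the site, and how often
print(paths)  # which URLs each crawler actually requests
```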
Here’s what you currently cannot measure reliably:
“AI visibility” or “AI share of voice” (this KB article on Visibility in LLMs has more detail)
Attribution from LLM-driven discovery
The impact of specific “AI optimisation” tactics
Stop pretending bad data is “directional.” It isn’t. It’s noise dressed up as signal.
When stakeholders ask about AI performance, the honest answer is: “We cannot reliably measure this yet because the systems don’t provide attribution data and the outputs are non-deterministic. What we can tell you is that the fundamentals that drive AI grounding are the same fundamentals that drive traditional search, and here’s how we’re performing on those.”
This is harder to sell than a dashboard with an “AI Score.” It’s also true.
Measurement discipline also means focusing on leading indicators over vanity metrics. Rank for a single keyword is a vanity metric. Indexed coverage of your entity model is a leading indicator. Traffic to a page is a vanity metric. Conversion rate from organic entry points is a business metric.
When measurement is honest, strategy becomes clearer. When measurement is theatre, strategy is just storytelling.
The Actual Work
None of this is new. That’s rather the point.
The organisations that will benefit from AI-driven discovery, and from whatever retrieval surfaces come next, aren’t the ones chasing acronyms. They’re the ones doing the foundational work: building crawlable, parsable, semantically rich information architectures and embedding discoverability into how the business operates.
That work is slow. It’s often invisible. It doesn’t produce exciting conference talks or LinkedIn posts about “cracking the AI algorithm.”
It does produce compounding returns across every machine interface that needs to understand what you do and why you matter.
The practitioners who will thrive aren’t the ones mastering “GEO” or “AEO.” They’re the ones who understand information retrieval deeply enough to recognise that the fundamentals haven’t changed—and who have the organisational credibility to embed those fundamentals into product development, content architecture, and business strategy.
The strategic imperative isn’t “optimise for AI.” It’s “build an organisation where discoverability is a first-class concern.”
That’s not a new strategy. It’s the strategy we should have been running all along.



