SEO in the AI Era: Optimizing for LLMs and Answer Engines
For two decades, the game was clear. Write content that matched what people typed into a search box. Acquire links from authoritative sites. Ensure your technical foundation was clean. Rank high on a list of ten blue links. Drive traffic.
In 2025, that game has fundamentally changed, and faster than any previous shift in search. Google's AI Overviews now appear above the organic results for a large share of informational queries. ChatGPT Search, Perplexity, and Claude's browsing features have introduced a generation of users who never see a list of links at all. They ask a question, receive a synthesized answer with citations, and make their decision without clicking.
This is not the end of SEO. It is a profound redefinition of what SEO is optimizing for.
The New Visibility Problem
In the traditional search paradigm, your goal was to appear in the top three organic results for your target queries. Appearing at position one versus position five had measurable impact on click-through rate, but all positions were visible.
In the AI overview paradigm, there are two states: cited or invisible. If Google's AI synthesizes an answer to a query and your content is one of the two or three sources cited, you receive a visible attribution — often with a thumbnail and a direct link. If your content is not cited, you are invisible to users whose queries trigger AI overviews, which is an increasingly large proportion of all informational searches.
The click-through rate data from early AI overview rollouts tells a clear story: organic traffic to informational content declined 15-30% on average when AI overviews were deployed, but traffic to the specifically cited sources increased. The distribution is becoming more winner-take-all.
Understanding how to become a cited source — rather than an invisible one — is the central SEO challenge of 2025.
How LLMs Evaluate Content Quality
Large language models do not evaluate content quality the way a human editor does, or the way Google's historical ranking algorithm did (primarily through link signals). They evaluate content across several dimensions simultaneously:
Factual accuracy and specificity: LLMs are trained to prefer content that makes specific, verifiable claims. Vague statements ("performance is important for conversion") are less useful to an LLM than specific ones ("a 100ms improvement in page load time correlates with a 1% increase in conversion rate"). The more specific and citable your claims, the more likely your content is to be included in a synthesized answer.
Structural clarity: LLMs extract information most reliably from well-structured content. Clear headings that match the semantic topic of their section. Short, dense paragraphs that each address a single point. Numbered lists for sequential processes. Tables for comparative data. Content that looks like a well-organized answer to a specific question is more likely to be surfaced as such.
Topical comprehensiveness: For a piece of content to be considered authoritative on a topic, it needs to cover the topic deeply and completely, not just introduce it. A 300-word blog post about Next.js performance gives an LLM nothing useful to cite for a complex performance question. A 2,500-word post that covers LCP, INP, bundle optimization, rendering strategies, and image delivery provides the specificity needed.
E-E-A-T: Experience, Expertise, Authoritativeness, Trustworthiness
Google's guidelines for evaluating content quality have long referenced E-A-T (Expertise, Authoritativeness, Trustworthiness). In December 2022, Google added a second "E" for Experience, placing it first: E-E-A-T. The distinction matters.
An AI can write factually accurate content about web performance. What it cannot do is describe what it actually looks like when a client sees their Lighthouse score jump from 34 to 97: what the client said, and which specific technical decisions produced that outcome. First-person, experiential content from someone who has actually done the work is the highest quality signal available, and it is one that AI-generated content structurally cannot replicate.
Technical Foundations: What Still Works
While the visibility model is changing, the technical foundations of SEO remain unchanged. These are still required:
Semantic HTML and Structured Data
Schema.org markup tells search engines and LLMs explicitly what type of content a page contains and what each element means. For a blog post, BlogPosting schema communicates the author, publication date, topic, and article body in a machine-readable format that AI systems can parse and cite with confidence.
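As an illustration, the JSON-LD payload for a BlogPosting is just a plain object serialized into a script tag. The sketch below assumes nothing beyond Schema.org's published vocabulary; every value in it is a placeholder, not real Ruberio content:

```typescript
// Minimal BlogPosting JSON-LD object. All values are illustrative
// placeholders; the Schema.org property names are the real vocabulary.
const blogPostingJsonLd = {
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  headline: "SEO in the AI Era",
  datePublished: "2025-01-15",
  author: {
    "@type": "Person",
    name: "Jane Doe", // placeholder author
  },
  articleBody: "…",
};

// Serialized form, ready to embed in a
// <script type="application/ld+json"> element.
const jsonLdScript = JSON.stringify(blogPostingJsonLd);
```

In a Next.js page, the serialized string becomes the body of a `script` element with `type="application/ld+json"`, which crawlers read without executing.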
For a local service business like Ruberio, LocalBusiness and ProfessionalService schemas provide the entity information (address, phone, service area, services offered) that LLMs use when answering queries like "who are the best web design agencies in North Brabant?"
For a FAQ section, FAQPage schema exposes your question-and-answer pairs in a machine-readable form that search engines and LLMs can parse directly, with your brand attached as the source.
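A FAQPage block follows the same pattern. The helper below is a hypothetical convenience, not part of any library, and the sample question is placeholder copy:

```typescript
// Hypothetical helper: builds FAQPage JSON-LD from question/answer pairs.
type Faq = { question: string; answer: string };

function buildFaqJsonLd(faqs: Faq[]) {
  return {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((f) => ({
      "@type": "Question",
      name: f.question,
      acceptedAnswer: { "@type": "Answer", text: f.answer },
    })),
  };
}

// Placeholder content, not real Ruberio copy.
const faqJsonLd = buildFaqJsonLd([
  {
    question: "How do I make my website faster?",
    answer: "Start by measuring LCP and INP on real user traffic.",
  },
]);
```

Keeping the Q&A data in a typed structure means the visible FAQ section and its structured data are generated from the same source, so they can never drift apart.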
These are not optional enhancements. In the LLM era, they are the mechanism by which you communicate structured information to systems that cannot reliably extract it from prose alone.
Page Speed and Core Web Vitals
The correlation between Core Web Vitals scores and AI overview citation rates is an emerging area of research, but the causal logic is sound: Google's AI systems draw on Google's index, and that index has long favored fast, well-performing pages. Pages that earned their rankings partly through speed are likely to keep that advantage as citation sources.
More directly: even when a user's query triggers an AI overview, the cited sources receive click-through traffic. If your page loads in under a second, more of those clicks convert to real engagement. If it loads in five seconds, users bounce before the content renders, and that pattern of abandonment erodes the engagement signals your authority rests on.
Internal Linking and Topic Clusters
The concept of topic clusters — organizing your content into pillar pages and supporting articles, with systematic internal linking between them — remains highly effective in the LLM era. When multiple pages from your domain cover related aspects of a topic with strong internal link connections between them, both search crawlers and LLMs recognize the domain as a topical authority.
A web agency that has published in-depth content on web performance, Core Web Vitals, Next.js rendering strategies, image optimization, and CDN architecture is a more credible citation source for performance-related queries than a competitor with a single surface-level post on the topic.
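The pillar-and-cluster structure can be made concrete with a small model. The slugs below are hypothetical examples, not real Ruberio URLs; the linking rule is the one described above: every supporting article links up to the pillar, and the pillar links down to every supporting article.

```typescript
// Illustrative model of a topic cluster: one pillar page plus its
// supporting articles, and the internal links each page should carry.
type Cluster = { pillar: string; articles: string[] };

function internalLinks(cluster: Cluster): [string, string][] {
  // Pillar links down to each supporting article.
  const down = cluster.articles.map((a): [string, string] => [cluster.pillar, a]);
  // Each supporting article links back up to the pillar.
  const up = cluster.articles.map((a): [string, string] => [a, cluster.pillar]);
  return [...down, ...up];
}

// Hypothetical slugs for a performance-focused cluster.
const performanceCluster: Cluster = {
  pillar: "/journal/web-performance",
  articles: [
    "/journal/core-web-vitals",
    "/journal/nextjs-rendering",
    "/journal/image-optimization",
  ],
};

const links = internalLinks(performanceCluster);
// Three articles produce six internal links: three down, three up.
```

Generating the link map from one data structure also makes it trivial to audit: any article missing its link back to the pillar is a bug in data, not something to hunt for across templates.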
The New Content Strategy: Writing for Humans and Machines Simultaneously
The old SEO advice — "write for humans, not search engines" — needs updating. In 2025, writing for humans and writing for LLMs are more aligned than ever, but with specific requirements:
Answer the question completely. Answer the core question implied by your article title as specifically and directly as possible, early in the article, before covering adjacent topics. LLMs summarizing your content will extract this.
Use the exact language your audience uses. Not keyword stuffing — semantic alignment. If your clients ask "how do I make my website faster," use that language, not "optimizing Core Web Vitals metrics." The latter is accurate; the former is how the query arrives.
Cite your sources and data. When you make a specific claim (conversion rate correlation, performance benchmark, market statistic), link to the source. This signals factual rigor to LLM evaluators and provides users with the ability to verify.
Publish consistently and deeply. Frequency without depth is worse than infrequency with depth. One 3,000-word authoritative article per month outperforms ten 300-word posts in the LLM era, because depth is the primary signal of authority.
What We Are Building at Ruberio
Our content strategy is explicitly designed for the AI era. Every journal article is structured with clear H2 and H3 hierarchies that match the semantic subtopics of the piece. We include specific data, cite sources, and write from direct experience. We implement BlogPosting and BreadcrumbList Schema.org markup on every article. We maintain strong internal linking across our content on related topics.
The goal is simple: when a potential client asks an AI assistant which web agency they should hire in the Netherlands, or what the best approach to building a high-performance Next.js site is, we want to be the source the AI cites.
That is the new definition of ranking number one. We are optimizing for it deliberately.