On AI, SEO, and Why Shortcuts Don’t Age Well

Written by Carolyn Shelby

This interview was originally published in Italian as part of SEO Confidential, a weekly column by Roberto Serra exploring how search, AI, and digital strategy are evolving.

Below is the English version of that conversation, republished with permission.


Q: In recent years, SEO has often rewarded complex structures, hidden information, and “technical” solutions rather than clarity. Today, with engines and LLMs that need to truly understand what a brand offers, aren’t we discovering that much of the optimization done so far has made sites less readable rather than more authoritative?

C: In many cases, yes — but not because clarity suddenly stopped mattering. I think it’s because, for a long time, shortcuts worked.

Optimization aimed purely at aggressive traffic growth caused many teams to build pages that functioned technically for search engines, but didn’t actually communicate anything useful to humans. Once SEOs saw that those tactics delivered results, they doubled down on filler content, over-engineered internal linking, and pages designed to dodge answers in order to inflate word count and keyword usage rather than simply stating the point.

What’s changing now is that both search engines and LLMs have far less tolerance for that kind of noise. They don’t reward clever obfuscation or artificial complexity. They reward content that gets to the point, explains things clearly, and stays consistent.

So I wouldn’t describe this as a new direction so much as a correction. Authority never really came from complexity. It came from being understandable and reliable. AI just makes it harder to pretend otherwise.

Q: If the goal is no longer just ranking but being found, cited, and considered reliable across multiple platforms, how dangerous is inconsistent messaging? Is the real SEO challenge in 2026 still technical, or is it primarily about communication and strategic discipline?

C: I’d say inconsistency is one of the biggest risks brands are underestimating right now.

When a website says one thing, product pages say something slightly different, social profiles tell another story, and third-party sources fill in the gaps, people may shrug. But over time, those inconsistencies stack up. The brand becomes harder to describe and easier to distrust.

You see a related problem with AI-written content that follows a very recognizable cadence. The issue isn’t that LLMs struggle with this style — they don’t. They created it. The issue is that humans find it tiring to read. It sounds synthetic, oddly vague, and imprecise, especially in long-form content.

When people disengage, they stop sharing, citing, and referencing that content. And those human behaviors are exactly what shape the third-party signals AI systems rely on. Models don’t reward good writing directly; they absorb how the world reacts to it.

What works is alignment and restraint: clear language, consistent messaging, and writing that respects the reader's attention. When those ideas are repeated across owned channels and reinforced by credible third-party sources, they compound.

By 2026, SEO will still be technical in the sense that crawlability, structure, and performance matter. But those are table stakes. The harder problem is discipline: knowing who you are, saying it clearly, and saying it the same way everywhere.

SEO isn’t just about being found anymore. It’s about being taken seriously long enough to matter.

Q: In many companies, AI now defines briefs, structures content, and even guides SEO priorities. Where does this delegation become dangerous? What signals show AI has replaced human judgment?

C: In practice, things usually go wrong when AI starts replacing institutional knowledge.

Using AI to accelerate research, summarize inputs, or explore options is reasonable. The risk shows up when it’s allowed to define briefs, shape content, or set priorities without grounding those decisions in what the organization actually knows about its customers, products, and history.

The warning signs are fairly clear. One is when no one can explain why a piece of content exists without referring to the tool that generated it. Another is when content looks polished but generic — familiar language, familiar structure, but no trace of real expertise or experience built over time. You also start to see flattening, where everything sounds the same regardless of audience, market, or context.

In those situations, internal knowledge isn’t being applied. The model fills gaps, but it doesn’t know what actually matters to the business, what’s failed before, which assumptions are risky, or where nuance is required. It can’t weigh trade-offs or remember why previous decisions were made.

AI works best alongside people who genuinely understand their domain. When it’s used as a substitute for that understanding, teams don’t just produce weaker content — they slowly lose the ability to reason about their own strategy.

Q: If a brand publishes incorrect, misleading, or inconsistent AI-generated content, who really pays the price: the algorithm or the person who let it take the wheel?

C: The brand. The brand always pays the price. The algorithm never does.

Search engines don’t lose credibility. Models don’t suffer reputational damage. Companies do — and so do the people responsible for the decisions behind the content.

I think one of the things that gets lost in the “AI for scale” conversation is that publishing is still a choice. Allowing an LLM to generate or influence content doesn’t remove responsibility; it concentrates it. Someone decided to trust the output, skip review, or move faster than their controls allowed.

The consequences aren’t always immediate, but they accumulate. Inconsistent or misleading content gets indexed, summarized, cited, and reused. Customers lose trust. Journalists stop referencing the brand. Internal teams begin operating on shaky assumptions. None of that is fixed with a quick update.

AI can absolutely help teams move faster, but it can’t decide what’s acceptable, accurate, or appropriate. Those are human judgment calls. When brands delegate them to systems that don’t understand context or consequences, they aren’t scaling — they’re gambling.

Responsibility doesn’t disappear when AI enters the picture. It just becomes harder to avoid.

Q: Are we seeing a new form of “keyword stuffing,” this time stylistic, in today’s hyper-dramatic, fragmented AI writing?

C: No — I don’t think it’s really analogous to keyword stuffing. It’s different, especially in who it offends and why it’s everywhere now.

Keyword stuffing failed because it tried to manipulate algorithms, and algorithms eventually pushed back. This problem lands somewhere else. Keyword stuffing offended machines. This style offends readers.

What’s changed is scale. AI hasn’t just introduced a new writing style — it’s multiplied output. Everyone can now produce ten times as much content, and most of it carries the same recognizable cadence: hyper-dramatic, fragmented, performative, and weirdly vague. Once you’ve seen it, you can’t not see it. And because AI makes everything so easy to scale, the toothpaste isn’t just out of the tube. That tube has exploded and the toothpaste is all over the walls and ceiling.

So people aren’t just reacting to a single AI-written article. What they’re reacting to is the saturation.

Remember, marketing works because people buy from brands they feel a connection to. We've spent decades learning how to write copy that builds familiarity, trust, and emotional resonance, because people want to buy from "someone," not from a system. When the writing signals that there is no real human on the other side, or worse, that the reader is being fed AI slop, it triggers the opposite reaction. People feel a little disgust. They feel manipulated. Then they disengage. When they disengage, your conversions dry up.

So where keyword stuffing offended the machines, obviously AI-written content doesn't. LLMs are completely indifferent to the AI cadence because they made it in the first place. Humans are not indifferent. They lose trust, and then they lose interest. Without interest, there is no persuasion, no loyalty, and no sale.

So this isn’t a new form of keyword stuffing. It’s a breakdown in editorial judgment at industrial scale — one that doesn’t get punished by algorithms, but by customers who simply decide they don’t like you enough to spend their money with you.

Q: Many talk about “optimizing for AI” as if it were enough to chase the model of the moment. Isn’t AI simply amplifying those who’ve already built something solid?

C: Yes — and I think that misunderstanding is driving a lot of bad decisions.

"Optimizing for AI" is often treated as something you can bolt on, or as a set of tactics you can chase as models evolve. You can't, because the target won't hold still. Models are changing at an aggressive pace. Interfaces are being rewritten in real time. Tactics aren't going to last a quarter; sometimes they don't even last a month.

At the same time, people are still designing sites smothered in digital jazz hands: heavy animation, layered interactions, clever reveals… just glitter for the sake of glitter. It looks impressive in design reviews, but it makes content harder to access, parse, and reuse — for machines and for users who are already drifting away from the web as an interface.

LLMs don’t engage with spectacle. They ingest structure, language, and clarity. When critical information is buried behind interactions or fragmented across components, it becomes fragile. If content can’t be cleanly extracted, visual polish doesn’t matter because you’ve already been skipped.
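
If you want a quick gut check on extractability, here's a minimal sketch of the kind of test I mean: fetch the raw HTML, without executing any JavaScript, and see whether the statements you actually care about survive. The URL and phrases below are placeholders I made up, not a recommendation of any particular tool.

```python
import requests

# Placeholder values: swap in your own page and the brand statements
# that must be visible to crawlers and LLM ingestion pipelines.
URL = "https://example.com/about"
CRITICAL_STATEMENTS = [
    "Acme Analytics is a data-privacy platform",
    "founded in 2012",
    "serves healthcare and finance teams",
]

def check_extractability(url: str, statements: list[str]) -> None:
    """Fetch raw HTML (no JavaScript execution) and report which
    critical statements a plain parser would actually see."""
    html = requests.get(url, timeout=10).text.lower()
    for statement in statements:
        status = "present" if statement.lower() in html else "MISSING"
        print(f"[{status}] {statement}")

if __name__ == "__main__":
    check_extractability(URL, CRITICAL_STATEMENTS)
```

If a statement only appears after client-side rendering, it may as well not exist for anything that reads the page the way a crawler does.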

What persists is structure, authority, and consistency. If a site is technically accessible, content is clearly organized, and the brand has a coherent voice reinforced outside its own website, AI systems have something stable to anchor to. Without that, no amount of model-aware tuning produces lasting results.

AI doesn’t reward clever hacks. It reflects and amplifies what already exists in the real world.

Q: Is alarmism about the “end of SEO” pushing companies to cut organic investment at exactly the wrong moment?

C: Yes — and I do think it’s a very real risk.

What's striking to me is how disconnected the narrative has become from the data. In most sectors, we're not seeing search collapse so much as rebalance. Traffic is shifting, not disappearing. The real collapses are happening in business models built on scale without value: content aggregators, recycled advice sites, and businesses that monetized traffic without ever building a brand.

Those declines are often blamed on “SEO is dead,” but what actually died were systems that depended on thin, interchangeable content.

Alarmism pushes CMOs toward the wrong reaction. Organic investment is slow, cumulative, and hard to defend in a panic, so it’s frequently the first thing cut. That may feel rational in the short term, but it’s exactly wrong at a moment when brand authority is becoming more decisive, not less.

In an AI-driven environment, organic visibility isn't just about clicks. It's how brands establish credibility, consistency, and presence across the ecosystem that models draw from. When companies pull back, they don't protect themselves from change; they remove themselves from the narrative.

Q: With ads entering ChatGPT, are brands confusing rented visibility with real authority?

C: I think the risk exists, but we need to be precise about what we know and what we don’t.

Using paid visibility while building organic authority makes sense. A dual-track approach has always been reasonable: buy attention while you earn trust. The issue isn’t ads themselves, but the assumptions being made about what they actually produce inside AI systems.

Right now, there’s no clear evidence that paid placement in conversational interfaces contributes meaningfully to long-term brand understanding once the spend stops. Ads create exposure in the moment, but whether they help models “learn” who a brand is remains an open question.

That uncertainty matters even more if pricing approaches premium media levels. At that point, brands aren’t buying efficient discovery; they’re buying expensive moments of attention. If that visibility doesn’t compound, it’s easy to mistake presence for progress.

Paid placement may have a role, especially for awareness. But until there’s evidence it builds durable brand understanding, it should complement organic authority — not replace it.

Q: AI rewrites brand narratives using third-party sources. How can companies defend against hallucinations?

C: This is one of the most important questions in this conversation, and it’s uncomfortable for a very specific reason.

It’s uncomfortable because it forces us to stop treating AI hallucinations as malfunctions and start understanding them as a predictable outcome of missing information.

I like to explain this by comparing LLMs to human memory. Your brain doesn’t store a perfect recording of events. It stores fragments. When you recall a memory, your brain reconstructs a complete story, filling in gaps so the narrative makes sense. That reconstruction can feel accurate even when parts of it are wrong.

LLMs work the same way. They don’t retrieve facts; they assemble narratives. When information about a brand is incomplete, inconsistent, or scattered, the system fills in the gaps using whatever signals it can find — including third-party sources that may be outdated, incorrect, or only loosely related.

That’s not malicious, and it’s not a failure of the model. It’s what the system is designed to do.

This is also why witnesses are cross-examined. Not because we assume they’re lying, but because each memory needs corroboration. Accuracy emerges from consistency and reinforcement across multiple sources. AI systems behave similarly. The more coherent and repeated the signals are, the less the system has to invent.

The mistake companies make is thinking this can be fixed after the fact — by correcting the model or chasing down individual inaccuracies once they surface. By the time a false narrative becomes visible, it has often already been reinforced elsewhere.

The real defense is preventative. Brands need to leave fewer gaps. That means being explicit, repetitive, and consistent about who they are, what they do, and what they’re credible for — across their own content and across trusted third-party references. Someone has to own that narrative over time, not just publish and move on.
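
Part of leaving fewer gaps can be as mundane as machine-readable self-description. As a rough illustration (the organization, names, and URLs below are placeholders I made up), here's one way to emit the same canonical brand facts as schema.org Organization markup on every owned page:

```python
import json

# Placeholder brand facts. The point is a single canonical statement of
# who the brand is, repeated verbatim as JSON-LD across owned pages.
ORGANIZATION = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme Analytics",
    "description": "A data-privacy analytics platform founded in 2012.",
    "url": "https://example.com",
    "sameAs": [
        "https://www.linkedin.com/company/acme-analytics",
        "https://twitter.com/acmeanalytics",
    ],
}

def jsonld_script_tag(data: dict) -> str:
    """Render the payload as the script tag that belongs in the
    <head> of every owned page."""
    payload = json.dumps(data, indent=2)
    return f'<script type="application/ld+json">\n{payload}\n</script>'

print(jsonld_script_tag(ORGANIZATION))
```

Markup like this doesn't stop a model from consulting third-party sources, but it gives every crawl the same explicit answer to "who is this brand," which is exactly the kind of repetition and consistency I'm describing.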

That’s why this answer is uncomfortable. There’s no technical switch to flip and no one else to blame. If a brand doesn’t define itself clearly and often enough, the system will do it instead — constructing a complete story from incomplete memories, without asking permission.


This interview reflects how I think about SEO and AI today: less as a race for visibility, and more as a question of discipline, clarity, and long-term responsibility. As interfaces change, those fundamentals matter more, not less.

Carolyn Shelby

Carolyn Shelby is the Founder & Executive Advisor at CSHEL Search Strategies, providing advisory on search, AI systems, and visibility risk. With more than 25 years of experience across digital infrastructure and search platforms, she works with organizations navigating platform behavior and discovery. She is a frequent industry speaker and a regular contributor to Search Engine Journal and Search Engine Land.
