How well does the Google algo catch spamdexers?


First off, I’m not picking on Google. I’m not, okay? I’m just puzzled, and so I’m asking some questions.

For all the talk lately about how awesome the algo has become at catching and stopping duplicate content, determining a site’s overall theme, evaluating (and factoring in) the quality of links, etc., I’ve come across some things that make me wonder how much of that is actually true.

People’s Exhibit A: Metal Buildings is #1 for “metal buildings” (which are the main kws for a press release client of mine, hence the nosiness). They also rank well for other industry terms like steel buildings, metal garages, etc.

So… why does this make them evil?

This company appears to be spamdexing its way to the top using 5,000-ish domains with identical content. They all seem to use the same template and the same content, then cross-link between one another. Isn’t that supposed to raise some red flags someplace?

How do I come up with 5,000-ish? Check Yahoo and filter to show only inlinks “except from this domain” to “entire site”. You’ll get 9,335 results (the first 1,000 of which you can actually see). 642 of those visible 1,000 are the dupe sites, so if that ratio holds across all 9,335 inlinks, it’s realistic to say that slightly more than 5,000 of them come from the dupe network.
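For the skeptical, the back-of-envelope math works out like this (a quick sketch; the assumption that the visible 1,000 links are representative of the full 9,335 is mine, not Yahoo’s):

```python
# Extrapolating dupe-network inlinks from Yahoo's inlink report.
# Assumption: the 1,000 visible results are a representative sample
# of all 9,335 reported inlinks.
total_inlinks = 9335     # "except from this domain" -> "entire site"
visible_sample = 1000    # Yahoo only lets you page through the first 1,000
dupes_in_sample = 642    # dupe-template sites counted in that sample

dupe_ratio = dupes_in_sample / visible_sample
estimated_dupe_inlinks = round(total_inlinks * dupe_ratio)
print(estimated_dupe_inlinks)  # prints 5993
```

So “slightly more than 5,000” is actually the conservative reading; the straight extrapolation lands closer to 6,000.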

People’s Exhibit B: The complete list of replicated sites and domains
(It’s friggin’ huge, so if you want to jump past it, click here)


Why didn’t Google’s algo catch this replicated (duplicate) content, whose sole function seems to be fluffing up the link count to the real website? I half wonder if all the algo tweaks designed to “prevent/catch” this type of stuff are really just hand tweaks that can’t catch anything unless a person notices there’s a problem and manually addresses it.


On a completely unrelated note (though one designed to trip a particular someone’s Google vanity alert)… Matt Cutts’ cats might be super cute, but my kitty-mooz are by far the fairest in the land.