The recent Google/Twitter spat has received much coverage in the SEO community over the last couple of weeks, particularly from search marketeers such as Sugarrae (otherwise known as Rae Hoffman) and Michael Gray (Graywolf).
Following on from this, a post from Quadzilla over at BlackHatSEO.com examined the possibility of black hole SEO (a phrase that, I have to be honest, I do quite like). According to the blog, a ‘black hole site’ is:
“A black hole site is created when a tier 1 authority site ceases to link out to other sites. If a reference is needed, the information is rewritten and a reference page is created within the black hole. All (or virtually all) external links on the site are made nofollow.
The first example of a black hole site was the wikipedia. The internal links formed a network that passed link juice from one page to another allowing obscure articles with no external links to rank number 1 in the SERPs. This #1 ranking begets natural links from external links. When a webizen wants a quick reference, they consult Google and link to one of the top results. This causes more link juice to flow into the black hole and the body’s trust becomes more and more massive over time.
1. Link juice flows in, but it can never escape.
2. External Sites lose link juice at the expense of the black hole.
3. The relative link juice mass of the black hole expands exponentially”
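The three properties Quadzilla lists can be illustrated with a toy simulation. The sketch below is my own illustration, not anything from Quadzilla's post: it runs a simplified PageRank power iteration over a hypothetical four-site web in which one site (named "hole" here) receives links from everyone but, having nofollowed its external links, only links within itself. The site names, damping factor, and iteration count are all assumptions for demonstration.

```python
# A minimal sketch of the "black hole" effect using a toy
# PageRank-style power iteration (illustrative only).

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page gets the teleport share, then earns shares from inlinks.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page: redistribute its juice evenly.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# Hypothetical web: a, b and c link to each other and to the black
# hole; the hole's followed links point only at itself, so juice
# flows in but never escapes.
web = {
    "a": ["b", "hole"],
    "b": ["c", "hole"],
    "c": ["a", "hole"],
    "hole": ["hole"],  # internal links only; external links nofollowed
}
ranks = pagerank(web)
```

Running this, the "hole" page ends up with the large majority of the total rank while the three normal sites are left splitting the remainder, which is exactly the imbalance the quoted post describes.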
This raised an interesting scenario for me. I am not privy to what happened with the Twitter/Google situation; however, one thing is clear: Twitter has indeed created what Quadzilla termed a black hole. If Google did indeed ‘influence’ this decision, it raises some interesting questions about what life could be like moving forward.
One of the fundamental parts of Google’s algorithm is its reliance on linkage to determine relevancy and influence. Search marketeers cottoned on to this fact and subsequently used it to influence search engine rankings. It was further to this that Google, erm… highlighted the use of the nofollow attribute in order to ‘control’ the benefit potential links could pass, and to give webmasters a way to control ‘webspam’. Now one may (rightly or wrongly) argue that this is a case of throwing the baby out with the bathwater; however, to be honest, some degree of moderation was required, and I would suggest that, used in moderation and in the right context (such as the one Matt Cutts describes in his nofollow post), it is a highly useful tool.
I would, though, add that the overuse of the nofollow attribute raises a completely different concern. (There is no doubting that sites/services such as Twitter would be nothing without the general public helping to raise their profile, and I would suggest many SEOs have been central to this. However, in this context I would suggest there were better ways to control the level of webspam and exploitation.) What would happen if nofollow became a standard tactic, and webmasters decided to hoard internal weighting for themselves? This could potentially have a similar effect to that of paid linkage, in terms of the imbalance created by the lack of linkage between sites. Many websites are already highly guarded about ‘leaking’ PR, utilising advanced redirects aimed at minimising PR loss, and subsequently influencing relevancy as a result by not giving credit where credit is due.
Surely there is a possibility that this sort of activity influences relevancy as much as paid links do?
I can’t help but feel that whilst nofollow has been effective in controlling linkspam, there is a limited lifespan to relying on it to manage linkage. Obtaining prominent rankings within organic search results is a fundamental part of many organisations’ marketing campaigns, and whilst this commercial interest in natural search remains, there will be a pushing of (and stepping over) the mark, a constant testing of the boundaries, in order to obtain these key commercial wins.
[ This post from Peter Young's blog contains only his personal opinions. ]