Earlier today I read a post by Slingshot SEO – Mission ImposSERPble: Establishing Click-through Rates
To start with, I just want to say that I really do approve of posts like this: I love that people are working hard to run experiments to confirm what they have read and any hypotheses they have put together.
That being said, I'm not sure I agree with a few points in the post. I'll start from the top and talk about the methodology in general:
To be clear, nowhere in the post do Slingshot claim to be scientific, so this is really more of a comment than a disagreement: I'm just not sure any conclusion based on 324 keywords can be seen as reliable. I'm not a scientist myself, and I wouldn't say I have masses of experience running tests, but even thinking logically it seems a real stretch to draw a firm conclusion from that amount of data.
There are two more issues I have with the methodology. The first is:

- Using the Google AdWords Keyword Tool (GAKT) for search volume data
  - Having said that, I do think the data in GAKT has improved a lot recently, and I would almost class it as 'semi-accurate'.
  - Personally, for reliable data, I would prefer to use PPC impression data based on a keyword with 100% impression share, set as exact match. I do understand why this wasn't done, though: it would be very costly across a large set of keywords. It's just something to bear in mind when considering the results.
  - In my opinion, the value of using GAKT to establish CTR is not in the CTR percentages themselves, but in using a percentage to calculate estimated traffic, without ever actually presenting the percentage figure. Wow, that sentence was a mouthful; I hope it makes sense!
The second issue is that other factors required further expansion. For example, universal search was mentioned as a factor in CTR, but there was no actual monitoring of what items appeared where:
- Old-style map listings shown?
- 3x video links?
- News links?
- Micro-formats on listings?
- Etc., etc.
Without storing all this data, and getting sufficient occurrences of each to establish CTRs for every possible combination, you can't account for elements that could massively affect CTR. Again, this will be really hard to actually complete; it's just another thing to consider before taking the information at face value.
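To show what I mean by storing the combinations, here is a rough sketch of grouping observed impressions and clicks by the set of SERP features present, then computing a CTR per combination. All the figures and feature names below are hypothetical, purely for illustration:

```python
from collections import defaultdict

# Hypothetical observations: (SERP features present, impressions, clicks).
observations = [
    ({"maps", "video"}, 1200, 96),
    ({"maps"}, 800, 88),
    (set(), 1500, 273),       # plain ten blue links
    ({"news"}, 600, 54),
]

# Accumulate impressions and clicks per feature combination.
totals = defaultdict(lambda: [0, 0])  # combo -> [impressions, clicks]
for features, impressions, clicks in observations:
    key = frozenset(features)  # hashable, order-independent combination key
    totals[key][0] += impressions
    totals[key][1] += clicks

# CTR for each distinct combination of SERP features.
ctr_by_combo = {
    combo: clicks / impressions
    for combo, (impressions, clicks) in totals.items()
}
```

The point of keying on the whole combination, rather than on individual features, is that features interact: a video block next to a map pack may not cost you the same clicks as either one alone.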
Another issue I have with the post, separate from the considerations above, is that I feel Slingshot are wrong to comment on how far their data is out from previous studies.
I may be completely wrong here, but when I look at the other studies I always assume they are based on the volume of searches focused on natural listings only, not the total search volume. When you take that into consideration, you get figures like this:
Search Volume (UK): 74,000
Using the CTRs from Slingshot: 74,000 * 18.2% = 13,468
Using the CTRs from Optify (and the 52% focus on Natural listings that Slingshot mentioned): (74,000 * 52%) * 36.4% = 14,007
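For anyone who wants to replay that arithmetic, here is a minimal sketch using the same figures as above (the volume and CTR percentages are just the ones quoted in this post):

```python
# Compare estimated organic traffic under the two CTR models for one keyword.
search_volume = 74_000  # UK monthly search volume from the example above

# Slingshot's CTR, applied to the total search volume.
slingshot_ctr = 0.182
slingshot_traffic = search_volume * slingshot_ctr

# Optify's CTR, applied only to the 52% of searches that focus on natural listings.
natural_share = 0.52
optify_ctr = 0.364
optify_traffic = search_volume * natural_share * optify_ctr

print(round(slingshot_traffic))  # 13468
print(round(optify_traffic))     # 14007
```

Once the natural-listings share is factored in, the two estimates land within about 4% of each other.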
Looking at that data alone shows me that the whole point of the post, and the reason it has been seen as a "big thing", is that the figures are supposedly so drastically different, which they clearly aren't.
I don't want this post to be seen as having a go at Slingshot. As I said at the beginning, I am very much an advocate of testing things out properly; I just felt I needed to say something, because I think the conclusion of the post, and what people are going to take away from it, is completely incorrect.
And, just so people don't decide to modify the test to include the points I've mentioned and then come out with their own figures, I should add that there are plenty more things I haven't mentioned that need to be considered to establish a full conclusion. I'm impressed that Slingshot took the time to split long-tail terms from head terms; however, there appears to be no breakdown of navigational vs. purchase intent vs. local intent, etc., etc.
In summary, if you're going to run a test on CTR (or anything, really), I would strongly suggest reading up on scientific testing methods and using the methodology you see in "proper testing", to make sure that all possible factors are considered and dealt with as far as possible.
If you can't deal with some factor for whatever reason, or you feel it falls outside the scope of the research you are conducting, always make it clear that you considered it, and explain what you did to try to neutralise it or why you discounted it.
That way you should avoid annoying rebuttals from people who see the smallest excuse to question your results. Then again, if you're just trying to get links to the research, ignore everything I said, make the research up completely and throw a couple of directed insults in there to get people's backs up.