
Wednesday 11 February 2015

Pitfalls of Online Optimisation and Testing 2: Spot the Difference

The second pitfall in online optimisation that I would like to look at is flat results - totally, completely flat results at every level of the funnel.  Every metric reads the same for both recipes - bounce rate, exit rate, cart additions, average order value, order conversion. There is nothing to choose between the two, despite a solid hypothesis and analytics which support your idea.

The most likely cause is that the changes you made in your test recipe were just not dramatic enough.  There are different types of change you could test:
 
*  Visual change (the most obvious) 
*  Path change (where do you take users who click on a "Learn more" link?)
*  Interaction change (do you have a hover state? Is clicking different from hovering? How do you close a pop-up?)


Sometimes the change is dramatic, but the problem is that it was made on an insignificant part of the site or page.  If you carried out an end-to-end customer journey through the control experience and then through the test experience, could you spot the difference?  Worse still, did you test on a page which has traffic but doesn't actively contribute to your overall sales (is its order participation virtually zero)?
Is your hypothesis wrong? Did you think the strapline was important? Have you in fact discovered that something you thought mattered is being overlooked by visitors?
Are you being too cautious - was there too much at stake, so you didn't risk enough?

Is that part of the site getting traffic? And does that traffic convert? Or is it just a traffic backwater or a pathing dead end?  It could be that you have unintentionally uncovered an area of your site which is not contributing to site performance.
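A quick way to check this is to pull page-level visits and order participation from your analytics tool, and compare each page's share of traffic with its share of orders. Here's a minimal sketch in Python with pandas - the page names, column names and figures are entirely made up for illustration:

```python
import pandas as pd

# Hypothetical analytics export: one row per page, with visit counts
# and the number of orders each page participated in.
df = pd.DataFrame({
    "page":                 ["/home", "/product/a", "/blog/post-1", "/support/faq"],
    "visits":               [120_000,      45_000,        30_000,         8_000],
    "participating_orders": [  9_500,       6_200,            40,            15],
})

# Share of total traffic vs share of total order participation.
df["traffic_share"] = df["visits"] / df["visits"].sum()
df["order_participation_share"] = (
    df["participating_orders"] / df["participating_orders"].sum()
)

# Pages with plenty of traffic but near-zero order participation are
# traffic backwaters: a flat test result there tells you very little.
backwaters = df[(df["traffic_share"] > 0.05)
                & (df["order_participation_share"] < 0.01)]
print(backwaters[["page", "traffic_share", "order_participation_share"]])
```

Any page that takes a meaningful slice of traffic but participates in almost no orders is a candidate backwater - worth knowing before you spend a testing slot on it.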

Do your success metrics match your hypothesis? Are you optimising for orders on your Customer Support pages? Are you trying to drive down telephone sales?
Some areas of the site are critical, and small changes make big differences. Others are like background noise that users filter out (which is a shame, when we spend so much time and effort selecting a typeface, colour scheme and imagery to support our brand!). We agonise over the photos we use on our sites, we select the best images and icons... and they're just hygiene factors that users barely glance at.  Other elements, though, are critical - persuasive copy, clear calls to action, product information and specifications.  What we need to know, and can find out through our testing, is what matters and what doesn't.

Another possibility is that you made two counteracting changes - one improved conversion, and the other worsened it, so that the net change is close to zero. For example, did you make it easier for users to compare products by making the comparison link larger, but place it higher on the page, pushing other important information down to where it wasn't seen?  I've mentioned this before in the context of landing page bounce rate - it's possible to improve the click-through rate on an email or advert by promising huge discounts and low prices... but if the landing page doesn't reflect those offers, then people will bounce off it alarmingly quickly.  This should show up in funnel metrics, so make sure you're analysing each step in the funnel, not just cart conversion (user added an item to cart) and order conversion (user completed a purchase).
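To make counteracting changes visible, compare the two recipes step by step down the funnel rather than only at the ends. A minimal sketch, with hypothetical visitor counts at each step:

```python
# Hypothetical visitor counts at each funnel step for each recipe.
funnel = {
    "control": {"landed": 10_000, "viewed_product": 4_000,
                "added_to_cart": 1_200, "ordered": 300},
    "test":    {"landed": 10_000, "viewed_product": 4_800,
                "added_to_cart": 1_150, "ordered": 298},
}

steps = ["landed", "viewed_product", "added_to_cart", "ordered"]

for recipe, counts in funnel.items():
    print(recipe)
    # Conversion rate for each step-to-step transition.
    for prev, curr in zip(steps, steps[1:]):
        rate = counts[curr] / counts[prev]
        print(f"  {prev} -> {curr}: {rate:.1%}")
```

In this invented example the test recipe wins the first step (40% vs 48% of landers view a product) but loses the next (30% vs 24% add to cart), netting out to near-identical order conversion - exactly the pattern that looking only at orders would hide.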


Alternatively:  did you help some users, but deter others?  Segment your data - new vs returning, traffic source, order value...  Did every segment perform exactly as before, or did new visitors benefit from the test recipe while returning visitors found the change unhelpful?
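The same idea in code: break the overall result down by segment before declaring it flat. A sketch with invented numbers, where the two segments move in opposite directions:

```python
# Hypothetical per-segment results for the same test.
# segment: (control_visitors, control_orders, test_visitors, test_orders)
segments = {
    "new":       (5_000, 130, 5_000, 175),
    "returning": (5_000, 170, 5_000, 123),
}

for name, (cv, co, tv, to) in segments.items():
    control_rate = co / cv
    test_rate = to / tv
    lift = (test_rate - control_rate) / control_rate
    print(f"{name:>9}: control {control_rate:.2%}, "
          f"test {test_rate:.2%}, lift {lift:+.1%}")
```

Overall conversion here is near-identical (3.00% control vs 2.98% test), yet new visitors are up by about a third and returning visitors down by about a quarter - a 'flat' result hiding two real effects.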

In conclusion, if your results are showing flat performance, that's not necessarily the same as 'nothing happened'.  If it's genuinely true that nothing happened, then you've proved something different - that your visitors are resilient (or perhaps resistant) to the type of change you're making.  You've shown that the area you tested, and the way you tested it, don't matter to your visitors.  Drill down as far as possible to confirm that your results really are flat, and if they are, you can either test much bigger changes on this part of the site, or stop testing here completely and move on.
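One last check before you walk away: make sure 'flat' isn't simply 'underpowered'. A back-of-the-envelope sketch using the standard two-proportion sample-size approximation - if the lift you were hoping for is smaller than the minimum detectable effect for your traffic, the test couldn't have shown it anyway. The figures are illustrative:

```python
import math

def min_detectable_lift(baseline_rate, n_per_arm,
                        z_alpha=1.96, z_beta=0.84):
    """Approximate smallest relative lift detectable at ~95% confidence
    and ~80% power, assuming equal traffic in each arm."""
    se = math.sqrt(2 * baseline_rate * (1 - baseline_rate) / n_per_arm)
    absolute = (z_alpha + z_beta) * se
    return absolute / baseline_rate

# E.g. a 3% baseline conversion rate and 10,000 visitors per recipe:
print(f"{min_detectable_lift(0.03, 10_000):.1%}")  # prints roughly 22.5%
```

With a 3% baseline and 10,000 visitors per recipe, anything under roughly a 20-25% relative lift is invisible at that sample size - so a 'flat' result may just mean you haven't run the test long enough to see a real, smaller effect.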
