
Thursday, 1 May 2014

Iterative Testing - Follow the Numbers

Testing, as I have said before, is great.  It can be adventurous, exciting and rewarding to try out new ideas for the site (especially if you're testing something that IT can't build out yet) with pie-in-the-sky designs that address every customer complaint that you've ever faced.  Customers and visitors want bigger pictures, more text and clearer calls to action, with product videos, 360 degree views and a new Flash or Scene 7 interface that looks like something from Minority Report or Lost in Space.
Your new user interface, Minority Report style?
That's great - it's exciting to be involved in something futuristic and idealised, but how will it benefit the business teams who have sales targets to reach this month, quarter or year?  They will accept that some future-state testing is necessary, but they will want to optimise the current state, and will probably have identified some key areas from their sales and revenue data.  They can see clearly where they need to focus the business's optimisation efforts, and they will start synthesising their own ideas.

And this is all good news.  You're reviewing your web analytics tools to look at funnels, conversion, page flow and so on; you may also have session replay and voice-of-the-customer information to wade through periodically, looking for a gem of information that will form the basis of a test hypothesis.  Meanwhile, the business and sales teams have already done this (from their own angle, with their own data) and have come up with an idea.

So you run the test - you have a solid hypothesis (either from your analytics or from the business's data) and a good idea of how to improve site performance.

But things don't go quite to plan; the results are negative, conversion is down or the average order value hasn't gone up.  You carry out a thorough post-test analysis and then get everybody together to talk it through.  Everybody gathers around a table (or on a call, with a screen-share ;-) - everybody turns up: the design team, the business managers, the analysts... everybody with a stake in the test.  Sometimes, good tests fail.  Sometimes, the test wins (this is also good, but for some reason wins never get quite as much scrutiny as losses).
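To make that post-test analysis a little more concrete, here's a minimal sketch in Python - the traffic and conversion figures are entirely made up, and conversion_z_test is a helper I've written just for this example - of one check you might run: a two-proportion z-test on control versus test recipe conversion, to see whether the drop you're looking at is real or just noise.

# A minimal post-test sanity check: a two-proportion z-test comparing
# control and test recipe conversion rates.  All figures are invented.
from math import sqrt
from scipy.stats import norm

def conversion_z_test(conv_a, visits_a, conv_b, visits_b):
    """Z-score and two-sided p-value for the difference in conversion
    rate between recipe A (control) and recipe B (test)."""
    p_a = conv_a / visits_a
    p_b = conv_b / visits_b
    p_pool = (conv_a + conv_b) / (visits_a + visits_b)   # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    return z, 2 * norm.sf(abs(z))

# Hypothetical numbers: control converts at 3.0%, the test recipe at 2.7%
z, p = conversion_z_test(conv_a=300, visits_a=10000, conv_b=270, visits_b=10000)
print("z = %.2f, p = %.3f" % (z, p))   # is the drop real, or just noise?

If the p-value is large, the "loss" may be nothing more than normal variation - worth knowing before the review meeting starts apportioning blame.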

And then there's the question:  "Well, we did this in this test recipe, and things improved a bit, and we did that in the other test recipe and this number changed:  what happens if we change this and that?"  Or, "Can we run the test again, but make this change as well?"


These are great questions.  As a test designer, you'll come to love these questions, especially if the idea is supported by the data.  Sometimes, iterative testing isn't sequential testing towards an imagined optimum; sometimes it's brainstorming based on data.  To some extent, iterative testing can be planned out in advance as a long-term strategy, where you analyse a page, look at the key elements in it and address them methodically.  Sometimes, iterative testing can be exciting (it's always exciting, just more so) and take you in directions you weren't expecting.  You may have thought that one part of the page (the product image, the ratings and reviews, the product blurb) was critical to the page's performance, but during the test review meeting you find yourself asking, "Can we change this and that?  Can we run the test with a smaller call to action and more peer reviews?"  And why not?  You already have the makings of a hypothesis and the data to support it - your own test data, in fact - and you can sense that your test plan is going in the right direction (or maybe totally the wrong direction, but at least you now know which way you should be going!).


It reminds me of a quote (attributed to a famous scientist, though I can't recall which one): "The development of scientific theory is not like the construction of fairy castles, but more like the methodical laying of one brick on another."  It's okay - in fact it's good - to have a test strategy lined up, focusing on key page elements or on page templates, but it's even more interesting when a test result throws up questions like, "Can we test X as well as Y?" or "Can we repeat the test with this additional change included?"

Follow the numbers, and see where they take you.  It's a little like a dot-to-dot picture, where you're drawing the picture and plotting the new dots as you go, which is not the same as building the plane while you're flying in it ;-).  
 
Follow the numbers.

One thing you will have to be aware of is whether you really are following the numbers.  During the test review, you may find a colleague who wants to test an idea simply because it's their pet idea (recall the HIPPOthesis I've mentioned previously).  Has the idea come from the data, or an interpretation of it, or has it come totally out of the blue?  Make sure you employ a filter - either during the discussion phase or afterwards - to understand whether a recipe suggestion is backed by data or whether it's just an idea.  You'll still have to do all the prep work - although if you're modifying and iterating, your design and development team will be grateful that they only need to make slight modifications to an existing test design.
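If it helps to make that filter concrete, here's a deliberately simple sketch of how you might triage a backlog of recipe suggestions so that data-backed iterations get built first.  The candidate ideas, the data_backed and effort fields, and the ordering rule are all invented for illustration.

# A hypothetical triage of test ideas: data-backed recipes first,
# and among those, the cheapest to build first.  All entries invented.
candidates = [
    {"idea": "smaller call to action, more peer reviews", "data_backed": True,  "effort": 2},
    {"idea": "Minority Report style interface",           "data_backed": False, "effort": 9},
    {"idea": "re-run last test with one extra change",    "data_backed": True,  "effort": 1},
]

def priority(recipe):
    # Sort key: ideas backed by data come first, then lower effort wins.
    return (not recipe["data_backed"], recipe["effort"])

for recipe in sorted(candidates, key=priority):
    backing = "data" if recipe["data_backed"] else "intuition"
    print("%s (backed by %s, effort %d)" % (recipe["idea"], backing, recipe["effort"]))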

Yes, there's scope for testing brand-new ideas, but be aware that they are just that - ideas, backed by intuition more than data - and they are less likely (on average) to be successful; I've blogged on this before when I discussed iterating versus creating.  If your testing program has limited resources (and whose doesn't?), then you'll want to focus on the test recipes that are more likely to win - and that means following the numbers.
