
Wednesday 7 May 2014

Building Testing Program Momentum

I have written previously about getting a testing program off the ground, and selling the idea of testing to management.  It's not easy, but hopefully you'll be able to start making progress and get a few quick wins under your belt.  Alternatively, you may have some seemingly disastrous tests where everything goes negative, and you wonder if you'll ever get a winner.  Either way, I hope your testing program is starting to provide some business intelligence for you and your company, and that you're demonstrating the value of testing.  Providing positive direction for the future is nice; providing negative direction ("don't ever implement this") is less pleasant, but still useful for the business.

In this article, I'd like to suggest ways of building testing momentum - that is, developing from a few ad-hoc tests into a more systematic testing program.  I've talked about iterative testing a few times now (I'm a big believer), but here I'd like to offer practical advice on starting to scale up your testing efforts.

Firstly, you'll find that you need to prioritise your testing efforts.  Which tests are - potentially - going to give you the best return?  It's not easy to say; after all, if you knew the answer, you wouldn't have to test.  But look at the high-traffic pages, the main entry pages (lots of traffic landing) and the major leaking points in your funnel.  Fixing these pages will certainly help the business.  You'll also need to estimate the monetary cost of not fixing these pages (and remember that management typically pays more attention to £ and $ than to % uplift).
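To make that prioritisation concrete, here's a quick sketch in Python.  All the page names and figures are invented for illustration - you'd plug in your own analytics numbers - but it shows the idea: rank candidate pages by the revenue flowing through them, so a given % uplift translates into the biggest £ figure for management.

    # Rank candidate test pages by the monthly revenue at stake.
    # All numbers below are illustrative, not real data.
    candidates = [
        # (page, monthly sessions, conversion rate, average order value £)
        ("basket page",        60_000, 0.35, 45.0),
        ("product page CTA",  250_000, 0.04, 45.0),
        ("landing page hero", 400_000, 0.02, 45.0),
    ]

    for page, sessions, cvr, aov in sorted(
            candidates, key=lambda c: c[1] * c[2] * c[3], reverse=True):
        revenue = sessions * cvr * aov
        print(f"{page:18} £{revenue:>9,.0f}/month through page; "
              f"a 5% uplift is worth £{revenue * 0.05:>7,.0f}/month")

On these made-up numbers the basket page wins easily, despite having far less traffic than the landing page - it's the money passing through the page that matters, not the raw visitor count.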

Secondly - consider the capacity of your testing team.  Is your testing team made up of you, a visual designer and a single Javascript developer, or perhaps a share of a development team's time when they can spare it?  There's still plenty of potential there, but plan accordingly.  I've mentioned previously that there's plenty of testing opportunity in the wording, position and colour of CTA buttons, and that you don't always need major design changes to see big improvements in site performance.


So many ideas, but which is best? One way to find out: run a test!

Thirdly - it's possible to dramatically increase the speed (and therefore capacity) of your testing program by working on two different areas or directions at the same time: not running two tests simultaneously, but developing and testing in parallel.  For example, let's suppose you want to test the call-to-action buttons on your product pages, and you also want to test how you show discounted prices.  These should be relatively easy to design and develop - it's mostly text and colour changes that you're focusing on.  Do you show the new price in green, and the original price in red?  Do you add a strikethrough to the original price?  What do you call the new price - "offer" or "reduced"?  There's plenty to think about, and it seems everybody does it differently.  And for the call-to-action button - there's wording, shape (rounded or square corners), border, arrow...  the list goes on.

Now, if you want to test just call-to-action buttons, you have to develop the test (two weeks), run the test (two weeks), analyse the results (two weeks) and then develop the next test (two more weeks, which is really the start of the next cycle).  This is a simplified timeline, but it shows that your site will only be testing for two weeks out of every six (the other four are spent analysing and developing).  Similarly, your development resource is only working on testing for two weeks out of six, and if there's capacity available, it makes sense to use it.
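As a sanity check on that arithmetic (using the stage lengths assumed in my simplified timeline):

    # Single-track cycle: 2 weeks to develop, 2 weeks running on site,
    # 2 weeks analysing - then the next develop phase starts the cycle again.
    develop, run, analyse = 2, 2, 2          # weeks, all assumed
    cycle = develop + run + analyse          # 6-week steady-state cycle
    print(f"Testing {run} weeks in every {cycle}: "
          f"{run / cycle:.0%} of site time spent testing")   # 33%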

I have read a little on critical path analysis (and that's it - nothing more), but it occurred to me that you could double the speed of your testing program by running two mini-programs alongside each other - let's call them Track A and Track B.  While Track A is testing, Track B could be in development; then, when the test in Track A is complete, you can switch it off and launch the test in Track B.  That's a little oversimplified, so here's a more plausible timeline:

Start with Track A first, and design the hypothesis.  Then submit it to the development team to write the code, and when it's ready, launch the test - Test A1.  While that test is running, begin the design and hypothesis for the first test in Track B - Test B1.  Then, when it's time to switch off Test A1, you can swap over and launch Test B1.  That test will run, accumulating data, and when it's complete, you can switch it off.  While Test B1 is running, you can review the data from Test A1, work out what went well and what went badly, review the hypothesis and improve it, then design the next iteration - Test A2.
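To show how the swapping plays out week by week, here's a toy sketch of the two-track schedule.  The stage lengths (2 weeks to build, a 2-week minimum run, 2 weeks to analyse before the next build) are assumptions for illustration, not a prescription:

    BUILD, MIN_RUN, ANALYSE = 2, 2, 2          # weeks, all illustrative

    next_ready = {"A": BUILD, "B": BUILD * 2}  # A1 is built first, then B1
    test_no = {"A": 1, "B": 1}
    live = None                                # (track, week it launched)

    for week in range(14):
        if live is None:
            # Nothing on site yet: launch the first test that's built.
            for t in ("A", "B"):
                if week >= next_ready[t]:
                    live = (t, week)
                    print(f"week {week:2}: launch {t}{test_no[t]}")
                    break
        else:
            t, launched = live
            other = "B" if t == "A" else "A"
            # Swap only when the live test has had its minimum run AND
            # the other track's next test is built and waiting.
            if week - launched >= MIN_RUN and week >= next_ready[other]:
                print(f"week {week:2}: stop {t}{test_no[t]}, "
                      f"launch {other}{test_no[other]}")
                # The stopped track now analyses, then builds its next test.
                next_ready[t] = week + ANALYSE + BUILD
                test_no[t] += 1
                live = (other, week)

From week 2 onwards the site always has a test live.  Notice that B1 ends up running for four weeks, because A2 isn't built until week 8 - which is exactly the kind of real-world stretching I describe below.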

If everything works perfectly, you'll reach point X on my diagram and Test A2 will be ready to launch when Test B1 is switched off.


However, we live in the real world, and Test A2 isn't quite as successful as it was meant to be.  It takes quite some time to obtain useful data: the conversion uplift you anticipated hasn't materialised, the test is taking longer to reach statistical significance, and so you have to keep it running.  Meanwhile, Test B2 is ready - you've done the analysis, submitted the new design for development, and the developers have completed the work.  This means that Test B2 is now pending.  Not a problem - you're still utilising all your site traffic for testing, and that's surely an improvement on the 33% usage (two weeks testing, four weeks other activity) you had before.
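It's worth making "reaching significance" concrete.  Here's a rough two-proportion z-test - the visitor and conversion counts are invented, and the normal approximation is just the simplest choice, not necessarily what your testing tool uses - showing why a small uplift needs a lot of data:

    import math

    def z_test(conv_a, n_a, conv_b, n_b):
        """Two-proportion z-test (normal approximation), two-sided."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return p_b - p_a, p_value

    # 10,000 visitors per arm, 4.0% vs 4.3% conversion (invented numbers):
    diff, p = z_test(conv_a=400, n_a=10_000, conv_b=430, n_b=10_000)
    print(f"uplift {diff:+.2%}, p = {p:.2f}")

On those numbers a +0.30% absolute uplift gives p ≈ 0.29 - nowhere near significant - so the test has to stay live, and Test B2 stays pending.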

Eventually, at point Y, Test A2 is complete; you switch it off and launch Test B2, which has been pending for a few days or weeks.  However, Test B2 is a disaster, and conversion goes down very quickly; there's no option to keep it running.  (If it were trending positively, you could let it run.)  Even though the next Track A test is still in development, you have to pull the test - it's clearly hurting site performance, and you need to switch it off as soon as possible.
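The "pull it" decision can use the same statistics, just pointed the other way: stop early only when you're confident the variant is genuinely worse, not merely trending down.  A sketch, again with invented counts:

    import math

    def norm_cdf(x):
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    conv_a, n_a = 400, 10_000      # control: 4.0% conversion (invented)
    conv_b, n_b = 310, 10_000      # Test B2 variant: 3.1%, clearly hurting
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    if norm_cdf(z) < 0.05:         # one-sided: variant significantly worse
        print(f"uplift {p_b - p_a:+.2%}, z = {z:.2f}: pull the test")

One caveat worth knowing: checking a test repeatedly and stopping the moment it crosses a threshold inflates your false-positive rate, so in practice you'd want a more conservative cut-off (or a proper sequential testing method) for early stopping.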

I'm sure parallel processing has been applied in a wide range of other business projects, but the idea translates really well into the world of testing, especially if you're planning to increase the speed and capacity of your testing program.  I will give some thought to other ways of increasing test program capacity, and - hopefully - write about them in the near future.
