
Monday, 14 November 2022

How many of your tests win?

As November heads towards December and the end of the calendar year approaches, we start the season of Annual Reviews.  It's time to identify, classify and quantify our successes and failures (sorry, opportunities) from 2022, and to look forward to 2023.  For a testing program, this usually means counting how many tests we've run and how many recipes they involved; how much money we made; and how many of our tests were winners.

If I asked you, I don't imagine you'd tell me, but consider for a moment:  how many of your tests typically win?  How many won this year?  Was it 50%?  Was it 75%?  Was it 90%?  And how does this reflect on your team's performance?

50% or less

It's probably best to frame this as 'avoiding revenue loss'.  Your company tested a new idea, and you prevented them from implementing it, thereby saving them from losing a (potentially quantifiable) sum of money.  You were, I guess, trying some new ideas, and hopefully pushed the envelope - in the wrong direction, but it was probably worth a try.

Around 75%

If 75% of your tests are winning, then you're in a good position and probably able to start picking and choosing the tests that are implemented by your company.  You'll have happy stakeholders who can see the clear incremental revenue that you're providing, and who can see that they're having good ideas.

90% or more

If you're in this apparently enviable position, you are quite probably running tests that you shouldn't be.  You're probably providing an insurance policy for some very solid changes to your website; you're running tests that have such strong analytical support, clear user research or customer feedback behind them that they're just straightforward changes that should be made.  Either that, or your stakeholders are very lucky, or have very good intuition about the website.  No, seriously ;-)

Your win rate will be determined by the level of risk or innovation that your company are prepared to put into their tests.  Are you testing small changes, well-backed by clear analytics?  Should you be?  Or are you testing off-the-wall, game-changing, future-state, cutting edge designs that could revolutionise the online experience? 

I've said before that your test recipes should be significantly different from the current state - different enough to be easy to distinguish from control, and to give you a meaningful delta.  That's not to say that small changes are 'bad', but a small change usually means a small lift, so if you get a winner it will probably take longer to see it.
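To put rough numbers on that, here's a minimal sketch in Python, using the standard two-proportion sample-size approximation and a hypothetical 4% baseline conversion rate (the figures are illustrative, not from any real programme).  It shows how quickly the traffic you need grows as the expected lift shrinks.

from statistics import NormalDist

def visitors_per_recipe(baseline, lift, alpha=0.05, power=0.80):
    # Approximate visitors needed per recipe to detect a relative lift over a
    # baseline conversion rate (two-sided z-test, normal approximation).
    p1 = baseline
    p2 = baseline * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2

# Hypothetical 4% baseline: a 2% lift needs far more traffic than a 10% lift.
for lift in (0.02, 0.05, 0.10):
    print(f"{lift:.0%} lift: ~{visitors_per_recipe(0.04, lift):,.0f} visitors per recipe")

On those assumptions, detecting a 2% lift takes well over twenty times the traffic of a 10% lift - which is why tests of small tweaks sit in the test slot for so long.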

Another thought:  the win rate is determined by the quality of the test ideas and how adventurous they are, so it's really a measure of the teams who drive those ideas.  If your testing team generates its own ideas and has strengths in web analytics and customer experience metrics, then it will probably have a high win rate.  Conversely, if your team is responsible for executing test ideas produced by other teams, then the fairer measures of its quality are execution, test timing and the quantity of tests you run.  You can't attribute the win rate (high or low) to a team that only develops the tests; for them, the quality of the code is a much better KPI.

What is the optimal test win rate?  I'm not sure that there is one, but it will certainly reflect the character of your test program more than its performance. 

Is there a better metric to look at?  I would suggest "learning rate":  how many of your tests taught you something?  How many of them had a strong, clearly-stated hypothesis that was able to drive your analysis of the test (winner or loser) and lead you to learn something about your website, your visitors, or both?  Did you learn something that you couldn't have identified through web analytics and path analysis?  Or did you just say, "It won", or "It lost", and leave it there?  Was the test recipe so complicated, or did it contain so many changes, that isolating variables and learning something was almost impossible?
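If you do track it, the tally is simple enough.  Here's a minimal sketch (the test records and their fields are hypothetical, purely for illustration) of counting both metrics side by side at year end:

# Hypothetical year-end records: did the test win, and did its hypothesis
# lead to a documented learning about the website or its visitors?
tests = [
    {"name": "Checkout copy", "won": True,  "learned": True},
    {"name": "Homepage hero", "won": False, "learned": True},
    {"name": "Nav redesign",  "won": False, "learned": False},
    {"name": "Search sort",   "won": True,  "learned": True},
]

win_rate = sum(t["won"] for t in tests) / len(tests)
learning_rate = sum(t["learned"] for t in tests) / len(tests)

print(f"Win rate:      {win_rate:.0%}")      # 50% in this made-up example
print(f"Learning rate: {learning_rate:.0%}")  # 75% in this made-up example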

Whatever you choose, make sure (as we do with our test analysis) that the metric matches the purpose, because 'what gets measured gets done'.




