Web Optimisation, Maths and Puzzles: metrics



Monday, 14 November 2022

How many of your tests win?

As November heads towards December and the end of the calendar year approaches, we start the season of Annual Reviews.  It's time to identify, classify and quantify our successes and failures (sorry, 'opportunities') from 2022, and to look forward to 2023.  For a testing program, this usually means reporting the number of tests we've run and how many recipes were involved; how much money we made; and how many of our tests were winners.

You probably wouldn't tell me if I asked, but consider for a moment:  how many of your tests typically win?  How many won this year?  Was it 50%?  Was it 75%?  Was it 90%?  And how does this reflect on your team's performance?

50% or less

It's probably best to frame this as 'avoiding revenue loss'.  Your company tested a new idea, and you prevented them from implementing it, thereby saving your company from losing a (potentially quantifiable) sum of money.  You were, I guess, trying some new ideas, and hopefully pushed the envelope - in the wrong direction, but it was probably worth a try.  Or maybe this shows that your business instincts are usually correct - you're only testing the edge cases.

Around 75%

If 75% of your tests are winning, then you're in a good position and probably able to start picking and choosing the tests that are implemented by your company.  You'll have happy stakeholders who can see the clear incremental revenue that you're providing, and who can see that they're having good ideas.

90% or more

If you're in this apparently enviable position, you are quite probably running tests that you shouldn't be.  You're probably providing an insurance policy for some very solid changes to your website; you're running tests that have such strong analytical support, clear user research or customer feedback behind them that they're just straightforward changes that should be made.  Either that, or your stakeholders are very lucky, or have very good intuition about the website.  No, seriously ;-)

Your win rate will be determined by the level of risk or innovation that your company are prepared to put into their tests.  Are you testing small changes, well-backed by clear analytics?  Should you be?  Or are you testing off-the-wall, game-changing, future-state, cutting edge designs that could revolutionise the online experience? 

I've said before that your test recipes should be significantly different from the current state - different enough to be easy to distinguish from control, and to give you a meaningful delta.  That's not to say that small changes are 'bad', but if you get a winner, it will probably take longer to see it.

Another thought:  the win rate is determined by the quality of the test ideas and how adventurous they are, so it is really a measure of the teams who drive those ideas.  If your testing team generates its own ideas and has strengths in web analytics and customer experience metrics, then it will probably achieve a high win rate.  Conversely, if your team executes test ideas produced by other teams, then its quality should be measured on execution, test timing and the quantity of tests run.  You can't attribute the win rate (high or low) to a team that only develops tests; for them, the quality of the code is a much better KPI.

What is the optimal test win rate?  I'm not sure that there is one, but it will certainly reflect the character of your test program more than its performance. 

Is there a better metric to look at?   I would suggest "learning rate":  how many of your tests taught you something? How many of them had a strong, clearly-stated hypothesis that was able to drive your analysis of your test (winner or loser) and lead you to learn something about your website, your visitors, or both?  Did you learn something that you couldn't have identified through web analytics and path analysis?  Or did you just say, "It won", or "It lost" and leave it there?  Was the test recipe so complicated, or contain so many changes, that isolating variables and learning something was almost completely impossible?
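As a rough illustration, here is a minimal sketch (using invented test records, not figures from any real program) of how the two rates compare: both are simple proportions over the tests you've completed.

```python
# Hypothetical test log: did the recipe beat control, and did the analysis
# produce a documented learning about the site or its visitors?
tests = [
    {"name": "PDP hero image",    "won": True,  "learned": True},
    {"name": "Checkout copy",     "won": False, "learned": True},
    {"name": "Mega-menu rebuild", "won": False, "learned": False},
    {"name": "Free-shipping bar", "won": True,  "learned": True},
]

win_rate = sum(t["won"] for t in tests) / len(tests)
learning_rate = sum(t["learned"] for t in tests) / len(tests)

print(f"Win rate:      {win_rate:.0%}")       # 50%
print(f"Learning rate: {learning_rate:.0%}")  # 75%
```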

Whatever you choose, make sure (as we do with our test analysis) that the metric matches the purpose, because 'what gets measured gets done'.

Similar posts I've written about online testing

Getting an online testing program off the ground
Building Momentum in Online testing
Testing vs Implementing Directly


Tuesday, 23 February 2021

Knowing Your KPI is Key

I've written in the past about KPIs, and today I find myself sitting at my computer about to re-tell a story about KPIs - with another twist.

Two years ago, almost to the day, I introduced you all to Albert, Britney and Charles, my three fictitious car salespeople.  Back in 2019, they were selling hybrid cars, and we had enough KPIs to make sure that each of them was a winner in some way (except Albert.  He was our 'control', and he was only there to make the others look good.  Sorry, Albert).

Well, two years on, selling cars has gone online.  Covid-19 and all that means that sales of cars are now handled remotely - with video views, emails, and Zoom calls - and targets have been realigned as a result.  The management team have realised that KPIs need to change in line with the new targets (which makes sense), and there are now a number of performance indicators being tracked.

Here are the results from January 2021 for our three long-standing (or long-suffering) salespeople.







Metric                         Albert     Britney    Charles
Zoom sessions                  411        225        510
Calls answered                 320        243        366
Leads generated                127        77         198
Cars sold                      40         59         60
Revenue (£)                    201,000    285,000    203,500
Average car value (£)          5,025      4,830      3,391
Conversion (contact to lead)   17.4%      16.5%      22.6%
Conversion (lead to sale)      31.5%      76.6%      30.3%
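(For anyone checking the arithmetic, here is a quick sketch that reproduces the derived rows from the raw figures above, to within rounding.  It assumes a 'contact' means a Zoom session or an answered call, which is consistent with the quoted percentages.)

```python
# Raw figures from the table above (January 2021)
raw = {
    "Albert":  {"zoom": 411, "calls": 320, "leads": 127, "sold": 40, "revenue": 201_000},
    "Britney": {"zoom": 225, "calls": 243, "leads": 77,  "sold": 59, "revenue": 285_000},
    "Charles": {"zoom": 510, "calls": 366, "leads": 198, "sold": 60, "revenue": 203_500},
}

for name, r in raw.items():
    avg_car_value = r["revenue"] / r["sold"]
    contact_to_lead = r["leads"] / (r["zoom"] + r["calls"])  # assumes contacts = Zoom + calls
    lead_to_sale = r["sold"] / r["leads"]
    print(f"{name}: £{avg_car_value:,.0f} per car, "
          f"{contact_to_lead:.1%} contact-to-lead, {lead_to_sale:.1%} lead-to-sale")
```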

And again we ask ourselves:  who was the best salesperson?  And, more important, which of the KPIs is actually the KEY performance indicator?

Albert:  had the highest average car value

Britney:  had the highest revenue (40% more than Albert or Charles) and by far the highest conversion from lead to sale.

Charles:  had the most Zoom sessions, calls answered, leads generated and cars sold, and the highest conversion from contact to lead.

Surely Charles won?  Except that wages, overheads and shareholder dividends aren't paid with Zoom sessions; bonuses aren't paid in phone calls and pensions aren't paid with actual cars.

The KPI of most businesses (and certainly this one) is revenue - or, more specifically, profit margin.  It's very nice to be able to talk about other metrics and to use these to improve the business, but if you're a business and your KPI isn't something related to money, then you're probably not aiming for the right target.  

Yes, you can certainly use other metrics to improve the business:  for example, Charles desperately needs to learn how to sell higher-value cars.  He's extremely productive - even prolific - with the customer contacts, but he's £1,400 down per car compared to Britney, and £1,600 down per car compared to Albert.  Additionally, if Britney learned to improve her sales conversations and Zoom technique so that they were faster and more efficient, her sales volumes would increase.  This use of data to drive action is extremely helpful, and it's what makes your analysis actionable.

So:  metrics and KPIs aren't the same thing.  Select the KPI that actually matches the business aim (typically margin and revenue) and don't get distracted by lesser KPIs that are actually just calculated ratios.  Use all the metrics to improve business performance, but pick your winner based on what really matters to your company.

I have looked at KPIs in some of my other articles:

The Importance of Being Earnest with your KPIs
Why Test Recipe KPIs are Vital
Web Analytics and Testing - A summary so far



Thursday, 16 March 2017

Average Time Spent on Page

The history of Web analytics tools has left a legacy of metrics that we can obtain "out of the box" even if they are of no practical use, and I would argue that a prime candidate for this category is time spent on page, and its troublesome partner average time spent on page. It's available because it's easy to obtain from tag-fires (or server log files) - it's just the time taken between consecutive page loads.  Is it useful? Not by itself, no. 

For example, it can't be measured if the visitor exits from the page.  If a user doesn't load another page on your site, then there are no further tag-fires, and you don't get a time on page.  This means you're measuring a self-selecting group of people who stayed on your site for at least one more page.  It entirely excludes visitors who can immediately tell they have the wrong page and then leave.  It also, sadly, excludes people who consume all the content and then leave.  No net benefit there, then.
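To make the mechanics concrete, here's a minimal sketch (with made-up timestamps) of how a tool derives time on page from consecutive tag-fires; note that the last page of the visit gets no value at all.

```python
from datetime import datetime

# Made-up tag-fires for a single visit: (timestamp, page)
hits = [
    (datetime(2017, 3, 16, 10, 0, 0),  "/home"),
    (datetime(2017, 3, 16, 10, 0, 40), "/category/shoes"),
    (datetime(2017, 3, 16, 10, 3, 5),  "/product/123"),  # visitor exits after this page
]

# Time on page = gap between consecutive tag-fires
for (t1, page), (t2, _) in zip(hits, hits[1:]):
    print(f"{page}: {(t2 - t1).total_seconds():.0f} seconds")

# /home: 40 seconds
# /category/shoes: 145 seconds
# /product/123 never appears: there is no later tag-fire, so no time on page is recorded
```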

Worse still, visitors who immediately realise that they have the wrong page and hit the back button are included.  So, is there any value to the metric at all?  In most cases, I would argue not, although there can be if handled carefully. For example, there is some potential benefit in monitoring pages which require data entry, such as checkout pages or other forms. In these circumstances, faster is definitely better, and slower suggests unnecessarily complicated or lengthy. For most shopping pages, though, you will need a much clearer view of whether more time is better or worse. In an international journey, four hours on an airliner is very different from three hours in an airport.

I mentioned that time on page is not helpful by itself: it can be more informative in conjunction with other metrics such as exit rate, bounce rate or revenue participation. For example, if a page has a high exit rate and high time on page, then it suggests that only a few people are finding the content helpful and are prepared to work through the page to get what they want - and to move forwards. Remember that you can't draw any conclusions about the people who left - either they found everything they needed and then left, or they gave up quickly (or anything in between).

So, if you use and quote average time on page, then I suggest that you make sure you know what it's telling you and what's missing; that you quote it in conjunction with other relevant metrics, and you have decided in advance if longer = better or longer = worse.

Thursday, 28 August 2014

Telling a Story with Web Analytics Data

Management demands actionable insights - not just numbers, but KPIs, words, sentences and recommendations.  It's therefore essential that we, as web analysts and optimisers, are able to transform data into words - and better still, stories.  Consider a report with too much data and too little information - it reads like a science report, not a business readout:

The report concerns four main characters:
Character A: female, aged 7 years old.  Approximately 1.3 metres tall.
Character B:  male, aged 5 years old.
Character C: female, aged 4 years old.
Character D:  male, aged 1 year old.

The main items in the report are a small cottage, a 1.2 kw electric cooker, 4 pints of water, 200 grams of dried cereal and a number of assorted iron and copper vessels, weighing 50-60 grams each.

After combining most of the water and the dried cereal in the largest of the copper vessels, Character B prepared a mixture which reached around 70 degrees Celsius.  He dispensed this unevenly into three of the smaller vessels in order to allow thermal equilibrium to be established between the mixture and its surroundings.  Characters B, C and D then walked 1.25 miles in 30 minutes, averaging just over 4 km/h.  In the interim, Character A took some empirical measurements of the mixture, finding Vessel 1 to still be at a temperature close to 60 degrees Celsius, Vessel 2 to be at 70 degrees Fahrenheit, and Vessel 3 to be at 315 Kelvin, which she declared to be optimal.

The report continues with Character A consuming all of the mixture in Vessel 3, then single-handedly testing (in some cases destruction testing) much of the furniture in the small cottage.

The problem is:  there's too much data and not enough information. 

The information is presented in various formats - lists, sentences and narrative.


Some of the data is completely irrelevant (the height of Character A, for example);
Some of it is misleading (the ages of the other characters lack context);
Some of it is presented in a mish-mash of units (temperatures are stated four times, with three different units).
The calculation of the walking characters' speed is unclear - the distance is given in miles, the time in minutes, and the speed in kilometres per hour (assuming you are familiar with the abbreviation km/h).

Of course, this is an exaggeration; as web analytics professionals, we wouldn't do this kind of thing in our reporting.  Instead, we make sure that:

Visitors are called visitors, and we consistently refer to them as visitors (and we ensure that this definition is understood by our readers)
Conversion rates are based on visitors, even though this may require extra calculation, since our tools report figures based on visits or sessions (see the sketch after this list)
The percentage of traffic coming from search is quoted in visitors (not 'users'), and not visits (whether you use visitors or visits is up to you, but be consistent)
Would you include the number of visitors who use search?  And the conversion rate for those visitors?
And when you say 'conversion', do you consistently mean 'visitor added an item to cart', or 'visitor completed a purchase and saw the thank-you page'?
Are you talking about the most important metrics?
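Here's the kind of extra calculation mentioned above, as a minimal sketch with invented figures; the point is simply that a visit-based rate and a visitor-based rate are different numbers and should be labelled consistently.

```python
# Invented monthly figures, for illustration only
visits = 120_000    # sessions, as reported by most tools
visitors = 80_000   # unique visitors
orders = 2_400

conversion_per_visit = orders / visits        # what the tool gives you
conversion_per_visitor = orders / visitors    # what we quote, recalculated

print(f"Conversion (per visit):   {conversion_per_visit:.1%}")    # 2.0%
print(f"Conversion (per visitor): {conversion_per_visitor:.1%}")  # 3.0%
```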
 
So - make sure, for starters, that your units, data and KPIs are consistent, contextual, or at least make sense.  And then:  add the words to the numbers.  It's only a start to say: "We attracted 500 visitors with paid search, at a total cost of £1,200."  Go on to talk about the cost per visitor, and break it down into key details by covering the most expensive keywords and the ones that drove the most traffic.  But then tell the story:  there's a sequence of events between a user seeing your search term, clicking on your ad, visiting your site, and [hopefully] converting.  Break it down into chronological steps and tell the story!
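For instance, the £1,200 and 500 visitors above break down like this (the keyword-level split is invented purely for illustration):

```python
# Totals from the example above; the keyword breakdown is hypothetical
total_cost = 1_200.0
total_visitors = 500
print(f"Overall cost per visitor: £{total_cost / total_visitors:.2f}")  # £2.40

keywords = {
    "running shoes":       {"cost": 700.0, "visitors": 180},
    "cheap trainers":      {"cost": 350.0, "visitors": 240},
    "marathon shoe guide": {"cost": 150.0, "visitors": 80},
}
for kw, d in keywords.items():
    print(f"{kw}: £{d['cost'] / d['visitors']:.2f} per visitor ({d['visitors']} visitors)")
```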

There are various ways to ensure that you're telling the story; my favourites are to answer these types of questions:
"You say that metric X has increased by 5%.  Is that a lot?  Is that good?"
 "WHY has this metric gone up?"
"What happened to our key site performance indicators (profit, revenue, conversion) as a result?"
and my favourite:
"What should we do about it?"

There are, of course, various ways to hide the story, or to disguise results that are not good (i.e. do not meet sales or revenue targets) - I did this in my anecdote at the start.  However, when management are given incomplete, obscure or irrelevant data, they tend to ask about the data that's "missing"... the truth will out, so it's better to show the data, tell the whole story, and highlight why things are below par.

It's our role to highlight when performance is down - we should be presenting the issues (nobody else has the tools to do so) and then going on to explain what needs to be done - this is where actionable insights become invaluable.  In the end, we present the results and the recommendations and then let the management make the decision - I blogged about this some time ago - web analytics: who holds the steering wheel?

In the case of Characters A, B, C and D, I suggest that Characters B and C buy a microwave oven, and improve their security to prevent Character A from breaking into their house and stealing their breakfast.  In the case of your site, you'll need to use the data to tell the story.

Other articles I've written on Website Analytics that you may find relevant:

Web Analytics - Gathering Requirements from Stakeholders