Web Optimisation, Maths and Puzzles: quantitative



Tuesday, 17 October 2017

Quantitative and Qualitative Testing - Just tell me why!

"And so, you see, we achieved a 197% uplift in conversions with Recipe B!"
"Yes, but why?"
"Well, the page exit rate was down 14% and the click-through-rate to cart was up 12%."

"Yes, but WHY?"

If you've ever been on the receiving end of one of these conversations, you'll probably recognise it immediately.  You're presenting test results, where your new design has won, and you're sharing the good news with the boss.  Or, worse still, the test lost, and you're having to defend your choice of test recipe.  You're showing slide after slide of test metrics - all the KPIs you could think of, and all the ones in every big book you've read - and still you're just not getting to the heart of the matter.  WHY did your test lose?

No amount of numerical data will fully answer the "why" questions, and this is the significant drawback of quantitative testing.  What you need is qualitative testing.


Quantitative testing - think of "quantity" - numbers - will tell you how many, how often, how much, how expensive, or how large.  It can give you ratios, fractions and percentages.

Qualitative testing - think of "qualities" - will tell you what shape, what colour, good, bad, opinions, views and things that can't be counted.  It will tell you the answer to the question you're asking, and if you're asking why, you'll get the answer why.  It won't, however, tell you what the profitability of having a green button instead of a red one will be - it'll just tell you that people prefer green because respondents said it was more calming compared to the angry red one.
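To make the distinction concrete, here's a minimal sketch (in Python, with entirely made-up numbers) of what each kind of data typically looks like: quantitative data arrives as counts you can turn into rates and percentages, while qualitative data arrives as free-text answers that you have to read and interpret.

```python
# Quantitative data: counts, which we can turn into rates and percentages.
visits_to_page = 10_000
clicks_to_cart = 1_200
page_exits = 3_400

click_through_rate = clicks_to_cart / visits_to_page   # 0.12 -> 12%
exit_rate = page_exits / visits_to_page                # 0.34 -> 34%

print(f"Click-through rate to cart: {click_through_rate:.1%}")
print(f"Page exit rate:             {exit_rate:.1%}")

# Qualitative data: free-text answers to "why?" questions.
# These can't be averaged - they have to be read, grouped and interpreted.
survey_responses = [
    "I couldn't see the delivery cost until the last step.",
    "The green button felt calmer than the red one.",
    "I wasn't sure the item was in stock, so I gave up.",
]
for response in survey_responses:
    print("-", response)
```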

Neither is easier than the other to implement well, and neither is less important than the other.  In fact, both can easily be done badly.  Online testing and research may have placed the emphasis on A/B testing, with its rigid, reliable, mathematical nature, in contrast to qualitative testing, where it's harder to provide concise, precise summaries - but a good research facility will require practitioners of both types of testing.

In fact, there are cases where one form of testing is more beneficial than the other.  If you're building a business case to get a new design fully developed and implemented, then A/B testing will tell you how much profit it will generate (which can then be offset against full development costs).  User testing won't give you a revenue figure like that.
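As a rough illustration of the kind of business-case arithmetic an A/B test supports, here's a sketch with hypothetical figures (and a deliberately simplified model): project the extra revenue from the measured uplift and offset it against the quoted development cost.

```python
# Hypothetical business-case sketch: project annual revenue from an A/B test uplift.
baseline_annual_orders = 50_000   # orders per year on the current design (assumed)
average_order_value = 40.00       # revenue per order (assumed)
measured_uplift = 0.05            # 5% relative uplift in conversion, from the test

extra_orders = baseline_annual_orders * measured_uplift
extra_revenue = extra_orders * average_order_value

development_cost = 60_000.00      # quoted cost to build the winning design (assumed)
net_benefit = extra_revenue - development_cost

print(f"Projected extra orders per year:  {extra_orders:,.0f}")
print(f"Projected extra revenue per year: {extra_revenue:,.2f}")
print(f"Net benefit in year one:          {net_benefit:,.2f}")
```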

Going back to my introductory conversation - qualitative testing will tell you why your new design has failed.  Why didn't people click the big green button?  Was it because they didn't see it, or because the wording was unhelpful, or because they didn't have enough information to progress?  A click-through-rate of 5% may be low, but "5%" isn't going to tell you why.  Even if you segment your data, you'll still not get a decent answer to any of those questions.


Let's suppose that 85% of people prefer green apples to red.  
Why?
There's a difference between men and women:  95% of men prefer green apples, compared to just 75% of women.
Great.  Why?  In fact, in the 30-40 year old age group, nearly 98% of men prefer green apples, compared to just 76% of women in that age range.

See?  All this segmentation is getting us no closer to understanding the difference - is it colour, flavour or texture?
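For the curious, the arithmetic behind that example does hang together - here's a quick sketch (assuming, purely for illustration, a 50/50 split of men and women) showing how the overall 85% emerges from the segment figures, and how further slicing only ever produces more percentages, never a reason.

```python
# Reconstructing the overall figure from the segments (assuming a 50/50 gender split).
share_of_men, share_of_women = 0.5, 0.5
green_pref_men, green_pref_women = 0.95, 0.75

overall_green_pref = share_of_men * green_pref_men + share_of_women * green_pref_women
print(f"Overall preference for green apples: {overall_green_pref:.0%}")  # 85%

# Drill down as far as you like - you still only get numbers, never a "why".
segments = {
    ("men", "30-40"): 0.98,
    ("women", "30-40"): 0.76,
}
for (gender, age_band), preference in segments.items():
    print(f"{gender}, aged {age_band}: {preference:.0%} prefer green")
```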


However, qualitative testing will get you the answer pretty quickly - you could just ask people directly.

You could liken quantitative testing to the black-and-white outline of a picture (or, if you're really good, a grey-scale picture), with qualitative testing providing the colours that fill it in.  One gives you a clear outline, the other sets the hues.  You need both to see the full picture.

Thursday, 22 June 2017

The General Election (Inferences from Quantitative Data)

The Election

The UK has just had a general election: the representatives who sit in the House of Commons have all been selected by regional votes.  The UK is split into 650 areas, called constituencies, each of which has an elected Member of Parliament (MP).  Each MP is elected by the voters in their constituency, and the candidate with the highest number of votes represents that constituency in the House of Commons.


There are two main political parties in the UK - the Conservative party (pursuing centre-right capitalist policies, and represented by a blue colour), and the Labour party (which pursues more socialist policies, and is represented by a red colour).  I'll skip the political history, and move directly to the data:  the Conservative party achieved 318 MPs in the election; the Labour party achieved 262; the rest were spread between smaller parties.  With 650 MPs in total, a working majority requires 326 seats, so the Conservative party fell short and has had to reach out to one of the smaller parties to make up the numbers.

Anyway:  as the results for most of the constituencies had been announced, the news reporters started their job of interviewing famous politicians of the past and present.  They asked questions about what this meant for each political party; what this said about the political feeling in the country and so on.

And the Conservative politicians put a brave face on the loss of so many seats.  And the Labour politicians barely contained their delight at gaining so many seats and preventing a Conservative majority.

The pressing issue of the day is Brexit (the UK's departure from the European Union).  Some politicians said, "This tells us that the electorate don't want a 'hard' Brexit [i.e. to cut all ties completely with the EU], and that they want a softer approach" - views that they held personally, and which they thought they could infer from the election result.  Others said, "This shows a vote against austerity," or "This vote shows dissatisfaction with immigration," and so on.

The problem is:  the question on election day is not, "Which of these policies do you like/dislike?" The question is, "Which of these people do you want to represent you in government?"   Anything beyond that is guesswork and supposition - whether that's educated, informed, biased, or speculative.


Website Data

There's a danger in reading too much into quantitative data, and especially bringing your own bias (intentionally or unintentionally) to bear on it.  Imagine on a website that 50% of people who reach your checkout don't complete their purchase.  Can you say why?

- They found out how much you charge for shipping, and balked at it.
- They discovered that you do a three-for-two deal and went back to find another item, which they found much later (or not at all)
- They got called away from their computer and didn't get chance to complete the purchase
- Their mobile phone battery ran out
- They had trouble entering their credit card number

You can view the data; you can look at the other pages they viewed during their visit.  You can even look at the items they had in their basket.  You may be able to write hypotheses about why visitors left, but you can't say for sure.  If you can design a test to study these questions, you may be able to improve your website's performance.  For example, can you devise a way to show visitors your shipping costs before they reach checkout?  Can you provide more contextual links to special offers such as three-for-two deals to make it easier for users to spend more money with you?  Is your credit card validation working correctly?  No amount of quantitative data will truly give you qualitative answers.
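If you did run such a test - say, a variant that reveals shipping costs before checkout - the quantitative read-out might look something like the sketch below (hypothetical numbers, and a standard two-proportion z-test written with only the standard library).  It will tell you whether the change moved the completion rate; it still won't tell you why.

```python
import math

# Hypothetical A/B test: does showing shipping costs before checkout
# change the checkout completion rate? (All numbers are made up.)
visitors_a, completed_a = 8_000, 4_000   # A: costs revealed at checkout
visitors_b, completed_b = 8_000, 4_240   # B: costs shown earlier

rate_a = completed_a / visitors_a
rate_b = completed_b / visitors_b

# Two-proportion z-test with a pooled completion rate.
pooled = (completed_a + completed_b) / (visitors_a + visitors_b)
std_err = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / std_err

# Two-sided p-value from the normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"Completion rate A: {rate_a:.1%}, B: {rate_b:.1%}")
print(f"z = {z:.2f}, p = {p_value:.3f}")
```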

A word of warning:  it doesn't always work out as you'd expect.

The UK, in its national referendum in June 2016, voted to leave the EU.  The votes were counted area by area and then totalled nationally; the overall result was that "leave" won by 52% to 48%.


However, this varied by region, and one of the highest leave percentages was in Stoke-on-Trent Central, where around 69% of voters opted to leave.  This was identified by the United Kingdom Independence Party (UKIP), and their leader, Paul Nuttall, took the opportunity to stand as a candidate for election as an MP in the Stoke-on-Trent Central constituency in February 2017.  His working hypothesis was (I assume) that voters who wanted to leave the EU would also vote for him and his party, which puts forward policies such as zero immigration and reduced or no funding for overseas aid - very UK-centric policies that you might imagine would be consistent with people who want to leave a multi-national group.  However, his hypothesis was disproved when the election results came in:

Labour Party - 7,853
UKIP (Paul Nuttall) - 5,233
Conservative Party - 5,154
Liberal Democrat Party - 2,083




He repeated his attempt in a different constituency in the General Election in June; he took 3,308 votes in Boston and Skegness - more than 10,000 fewer votes than the party's result in 2015.  Shortly afterwards, he stood down as the leader of UKIP.

So, beware: inferring too much from quantitative data - especially if you have a personal bias - can leave you high and dry, in politics and in website analysis.