
Monday, 12 February 2018

Mathematically Explaining Confidence and Levels of Significance

Level of Significance: a more mathematical discussion

In mathematical terms, and according to A Dictionary of Statistical Terms (by F H C Marriott, published for the International Statistical Institute by Longman Scientific and Technical):


"Many statistical tests of hypotheses depend on the use of the probability distributions of a statistic t chosen for the purpose of the particular test. When the hypothesis is true this distribution has a known form, at least approximately, and the probability Pr(t≥ti) or Pr(t≥t0), and Pr(t ≤ ti) and Pr(t ≤ t0) are called levels of significance and are usually expressed as percentages, e.g. 5 per cent.  The actual values are, of course, arbitrary, but popular values are 5, 1 and 0.1 per cent."






In English: we assume that the conversion rate for a particular recipe (e.g. the rate at which it persuades customers to convert and complete a purchase) can be modelled using the Normal Distribution.  We assume that the average conversion rate we measure (e.g. 15%) represents the recipe's typical conversion rate, and the chances of the recipe's true conversion rate being higher (or lower) than this can be calculated using some complex but manageable maths.

More data - more traffic and more orders - gives us the ability to state our average conversion rate with greater precision and confidence.  As we obtain more data, our overall data set is less prone to skewing (being affected by one or two anomalous data points).  The 'spread' of our curve - the degree of variability - decreases; in mathematical terms, the standard deviation of our measured conversion rate (strictly, its standard error) decreases.  This standard deviation is a measure of how far the measured average is likely to sit from the true value, and it takes into account how many data points we have and how much they vary from the average.  More data generally means a lower standard deviation (and that's why we like to have more traffic to achieve confidence).
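To put a rough figure on that shrinking spread, here is a minimal Python sketch using the usual standard-error formula for a proportion (the 15% conversion rate and the visitor counts are just example figures):

    from math import sqrt

    p = 0.15  # example conversion rate (15%)

    # Spread of the measured conversion rate from n visitors: sqrt(p * (1 - p) / n)
    for n in (100, 1_000, 10_000, 100_000):
        spread = sqrt(p * (1 - p) / n)
        print(f"{n:>7} visitors: {p:.1%} +/- {spread:.2%} (one standard deviation)")

The more visitors we feed in, the tighter the curve around the reported average becomes.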


When we run a test between two recipes, we are comparing their average conversion rates (and other metrics), and asking how likely it is that one conversion rate is actually better than the other.  In order to do this, we want to look at where the two conversion rates sit on their normal distribution curves, and how much those curves overlap.


In the diagram above, the conversion rate for Recipe B (green) is over one standard deviation from the mean for Recipe A - it's closer to two standard deviations.  We can use spreadsheets or data tables (remember those?) to translate the number of standard deviations into a probability: how likely is it that the conversion rate for Recipe B is going to be consistently higher than Recipe A's?  This gives us a confidence level.  It depends on the difference between the two (Y% compared to X%) and how many standard deviations apart they are (i.e. how much spread there is in the two data sets, which in turn depends on how many orders and visitors we've received).

Most optimisation tools will carry out the calculation on the number of orders and visitors, and the comparison between the two recipes, as part of their in-built capabilities (it's possible to do it with a spreadsheet, but it's a bit laborious).

The fundamentals are:

- we model the performance (conversion rate) of each recipe using the normal distribution (this tells us how likely it is that the actual performance for the recipe will vary around the reported average);
- we calculate the distance between the conversion rates for the two recipes, and how many standard deviations there are between the two;
- we translate the number of standard deviations into a percentage probability, which is the confidence level that one recipe is actually outperforming the other.
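Putting those three steps together - a minimal Python sketch, assuming a standard two-proportion z-test (the visitor and order figures are invented for illustration):

    from math import erf, sqrt

    def confidence(visitors_a, orders_a, visitors_b, orders_b):
        """Confidence that recipe B's conversion rate is genuinely higher than recipe A's."""
        p_a = orders_a / visitors_a              # conversion rate, recipe A
        p_b = orders_b / visitors_b              # conversion rate, recipe B
        # Spread (standard error) of each measured conversion rate
        se_a = sqrt(p_a * (1 - p_a) / visitors_a)
        se_b = sqrt(p_b * (1 - p_b) / visitors_b)
        # Step 2: how many standard deviations separate the two rates
        z = (p_b - p_a) / sqrt(se_a ** 2 + se_b ** 2)
        # Step 3: translate that distance into a probability via the normal distribution
        return 0.5 * (1 + erf(z / sqrt(2)))

    # Example: 10,000 visitors per recipe, converting at 15.0% and 15.9%
    print(f"{confidence(10_000, 1_500, 10_000, 1_590):.1%}")  # roughly 96%

This is essentially the calculation the testing tools perform for you, give or take the exact statistical test they choose.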

Revisiting our original definition:
"Many statistical tests of hypotheses depend on the use of the probability distributions of a statistic t chosen for the purpose of the particular test..."

...and we typically use the Normal Distribution.  "When the hypothesis is true this distribution has a known form, at least approximately, and the probabilities Pr(t ≥ t1) and Pr(t ≤ t0) are called levels of significance and are usually expressed as percentages, e.g. 5 per cent."

In our example, the statistic is the difference between the two conversion rates, and the probability that the test recipe outperforms the control recipe equates to the proportion of the total curve which is shaded:




You can see here that almost 95% of the area under the Recipe A curve has been shaded; there is only the small amount between t1 and t0 which is not shaded (approx. 5%).  Hence we can say, with 95% confidence, that Recipe B is better than Recipe A.
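In symbols - a sketch of the link back to the standard deviations above, rather than the dictionary's notation - the shaded area is the value of the normal cumulative distribution function Φ at the observed number of standard deviations z:

    \[
      \text{confidence} \;=\; \Pr(t \le z) \;=\; \Phi(z),
      \qquad \Phi(1.645) \approx 0.95 .
    \]

So roughly 1.64 standard deviations of separation corresponds to 95% confidence.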

The dictionary definition continues: "Thus, for example, the expression 't falls above the 5 per cent level of significance' means that the observed value of t is greater than t1, where the probability of all values greater than t1 is 0.05; t1 is called the upper 5 per cent significance point, and similarly for the lower significance point t0."

As I said, most of the heavy maths lifting can be done either by the testing tool or by a spreadsheet, but I hope this article has helped to clarify what confidence means mathematically, and (importantly) how it depends on the sample size (since this improves the accuracy of the overall data and reduces the standard deviation, which, in turn, enables us to quote smaller differences with higher confidence).

Tuesday, 6 February 2018

New Year's Resolution - Don't moan, complain.

One of my New Year's Resolutions for 2018 is this: don't moan, complain.

What's the difference?

We're very good, as a society, at moaning. Social media has made it even easier to bend our friends' ears about the latest irritation that we've had to suffer: long queues; poor service; sub-standard goods; cold food; inept staff; rude checkout assistants... the list goes on. And we think that sharing our dreadful experience with our friends will avenge us on the service provider - we "warn" our friends against giving their money to the same company and encourage them to support their competitors instead.

That is not complaining; that's moaning.

Moaning: telling everyone about a terrible experience - except the people who (1) caused your inconvenience and/or (2) are in a position to fix your situation or provide redress.


Complaining: approaching the person who provided the poor service; the lousy product; the long wait or the cold food, and asking them to please fix it.

I don't tend to complain - I think it's rude; I don't want to cause a scene; I don't want to be an inconvenience; I think I should just tolerate it and make it a character-building opportunity.

However, I think it's time to make a change, and - when necessary  - to complain instead of biting my tongue (I'd like to think I don't moan much, but the principle is the same). Some stores, cinemas and so on ask for feedback - some shops will enter you for a prize draw if you do - which is a good place to start, but how about this: if you think you're going to go home and then tomorrow tell your friends how bad this place/shop/meal was today, why not tell the staff today? Or at least contact their complaints department so that they can actually do something about it. Make a difference, so that they can make a difference too.

My New Year's Resolutions, over the years:

My New Year's Resolutions for 2017
Spend Less Time on Trivial Matters
Give More Than I Receive
Repair, Not Replace
Produce More Than I Consume
A review of my 2017 resolutions
Don't Moan, Complain

Tuesday, 23 January 2018

Explaining Statistical Significance and Confidence in A/B tests

If you've been presenting or listening to A/B test results (from online or offline tests) for a while, you'll probably have been asked to explain what 'confidence' or 'statistical significance' is.

A simple way of describing the measure of confidence is:

The probability (or likelihood) that this result (win or lose) will continue.


100% means this result is certain to continue; 50% means it's 50-50 whether it will win or lose. Please note that this is just a SIMPLE way of describing confidence - it's not mathematically rigorous.

Statistical significance (or just 'significance') is achieved when the confidence reaches a certain pre-agreed level, typically 75%, 80% or 90%.


It's worth mentioning that confidence doesn't give us the likelihood that the magnitude of the win will remain the same.  You can't say that a particular recipe will continue to win at +5.3% revenue per visitor (it might rise to +5.5%, or fall to +4.1%), but you can say that it will continue to outperform control.  As the sample size increases, the magnitude of the win will start to settle down towards a particular figure; the closer you get to 100% confidence, the more settled that figure will be.

A note: noise and anomalous results in the early part of the test may lead you to see large wins with high confidence.  You need to consider the volume of orders (or successes) and traffic in your results, and observe the daily results for your test, until you can see that the effects of these early anomalies have been reduced.
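To see why early readings can mislead, here is a minimal Python simulation (all the figures are invented, and the two recipes genuinely convert at the same rate, so any apparent lift is pure noise):

    import random

    TRUE_RATE = 0.15         # both recipes genuinely convert at 15%
    DAILY_VISITORS = 200     # per recipe per day (an invented figure)

    visitors = {"A": 0, "B": 0}
    orders = {"A": 0, "B": 0}

    for day in range(1, 29):
        for recipe in ("A", "B"):
            visitors[recipe] += DAILY_VISITORS
            orders[recipe] += sum(random.random() < TRUE_RATE for _ in range(DAILY_VISITORS))
        if day in (2, 7, 14, 28):
            lift = (orders["B"] / visitors["B"]) / (orders["A"] / visitors["A"]) - 1
            print(f"day {day:>2}: apparent lift {lift:+.1%}")

The exact numbers change from run to run, but the apparent lift typically swings around in the first few days and then settles towards zero as the volume builds - which is exactly why the daily trend and the order volume matter.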


Online testers frequently ask how long a test should run for: what measures should we look at, and when are we safe to assume that our test is complete (and the data reliable)?  I would say that looking at confidence and at daily trends should give you a good idea.


It's infuriating, but there are occasions when more time means less conclusive results: a test can start with a clear winner, but after time the result starts to flatten out (i.e. the winning lift decreases and confidence falls).  If you see this trend, then it's definitely time to switch the test off.

Conversely, you hope that you'll see flattish results initially, and then a clear winner begin to develop, with one recipe consistently outperforming the other(s).  Feeding more time, more traffic and more orders into the test gives you an increasingly clear picture of the test winner; the lifts will start to stabilise and the confidence will also start to grow.  So the question isn't "How long do I keep my test running?" but "How many days of consistent uplift do I look for?  And what level of confidence do I require to call a recipe a winner?"

What level of confidence do I need to call a test a winner?


Note that you may have different criteria for calling a winner compared to calling a loser.  I'm sure the mathematical purists will cry foul, and say that this sounds like cooking the books, or fiddling the results, but consider this:  if you're looking for a winner that you're going to implement through additional coding (and which may require an investment of time and money) then you'll probably want to be sure that you've got a definite winner that will provide a return on your money, so perhaps the win criteria would be 85% confidence with at least five days of consistent positive trending.

On the other hand, if your test is losing, then every day that you keep it running is going to cost you money (after all, you're funneling a fraction of your traffic through a sub-optimal experience).  So perhaps you'll call a loser with just 75% confidence and five days of consistent under-performing.  Here, the question becomes "How much is it going to cost me in immediate revenue to keep it running for another day?" and the answer is probably "Too much! Switch it off!!"  This is not a mathematical pursuit, along the lines of "How much money do we need to lose to achieve our agreed confidence levels?", this is real life profit-and-loss.

In a future blog post, I'll provide a more mathematical treatment of confidence, explaining how it's calculated from a statistical standpoint, so that you have a clear understanding of the foundations behind the final figures.



Thursday, 18 January 2018

Geometry: Changing the steepness of a hill by zig-zagging

Even if a hill or a road is too steep to climb, there is still a way to make progress, and that's by zig-zagging.  Instead of going directly up the hill in the shortest route, it's possible to take an angled approach up the slope, increasing the path length, but making the climb angle less steep.

It is easier to outline this in a simplified diagram:



This triangular prism represents the face of a hill.
The angle directly up the hill is α and is shown in the pink triangle.
The angle of approach (i.e. the degree of zigzag, the deviation from the straight-up route) is ß, and is shown by the red and pink triangles combined.
The resultant angle (i.e. the actual angle of ascent) is δ and is shown by the blue triangle.

Each of the triangles is right-angled, so standard trigonometry functions can be applied (I haven't shown all the right angles in the diagram, but it is a regular triangular prism).

Considering each of these three angles in turn: the way to get to a simplified expression for δ is to express the three angles in terms of the fewest number of line segments.  It's possible to express α, ß and δ in terms of the external dimensions of the prism (let's call them x, y and z), but this just leads to incompatible expressions that can't be simplified or combined.

Instead, using the lines in the diagram - y for the vertical height of the face, s for the length of the route directly up the slope, and p for the length of the zigzag path across it - each angle can be written in terms of just two of them:

α:  sin α = y / s

ß:  cos ß = s / p

δ:  sin δ = y / p
The strategy here is to substitute for y and p in the expression for δ, and then to simplify.

Firstly, rearrange the expressions for α and ß to make y and p the subjects of those equations: y = s sin α and p = s / cos ß.  Substituting these into sin δ = y / p, the length s cancels, leaving:

sin δ = sin α cos ß

A very simple and elegant equation:  the angle of ascent depends on how steep the hill is, and the amount by which you zigzag, and is completely independent of the size of the hill (i.e. none of the lengths are relevant in the calculation).

A few sanity checks:

If ß is zero, or close to zero, then δ approaches α - i.e. if you don't zigzag, then you approach the hill at its actual angle.

If ß approaches 90 degrees, then  δ approaches zero - you hardly climb at all, but you'll need to travel much further to climb the hill.  In fact, as ß tends towards 90 degrees, path length p tends to infinity.


If α increases, then δ increases for constant ß (something that was worth checking).

An interesting note:

At first glance, you may think that a path (or zigzag) angle of 45 degrees would halve the steepness of the climb (e.g. take a 60 degree slope down to something like 30 degrees), simply because 45 is half of 90.  However, this isn't the case.  The zigzag scales sin δ - the height gained per metre travelled - so to cut that in half, cos ß needs to equal 0.5, and if cos ß = 0.5, then ß = 60 degrees.  A much larger deviation from the straight-up angle is needed.
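A quick numerical check of the relationship - a minimal Python sketch using sin δ = sin α cos ß as derived above:

    from math import asin, cos, degrees, radians, sin

    def ascent_angle(alpha_deg, beta_deg):
        """Actual climb angle for a hill of steepness alpha, zigzagging at angle beta."""
        return degrees(asin(sin(radians(alpha_deg)) * cos(radians(beta_deg))))

    alpha = 60  # a steep hill
    for beta in (0, 45, 60, 85):
        print(f"zigzag at {beta:>2} degrees -> climb at {ascent_angle(alpha, beta):4.1f} degrees")

Zigzagging at 45 degrees still leaves a climb of nearly 38 degrees; even at 60 degrees of zigzag the climb angle is about 26 degrees, although the rate of height gain has halved.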


In conclusion

This question was first put to me when I was in high school (a few years ago now) and it's been nagging at me ever since.  I'm pleased to have been able to solve it, and I'm pleased with how surprisingly simple the final expression is (previously, my 3-D geometry and logic weren't quite up to scratch, and I ended up going round in circles!).


Thursday, 11 January 2018

Calculating the tetrahedral bond angle


Every Chemistry textbook which covers molecular shapes will state with utmost authority that the bond angle in tetrahedral molecules is 109.5 degrees. Methane (CH4) is frequently quoted as the example, shown to be completely symmetrical and tetrahedral. And then the 109.5 degrees.  There's no proof given (after all, Chemistry textbooks aren't dealing with geometry, and there's no need to show something just for the sake of mathematical proof - rightly, the content is all about reactivity and structure).  However, the lack of proof has bugged me on-and-off for about 20 years, and recently I decided it was time to do something about it and prove it for myself.

There are various websites showing the geometry of a tetrahedron and how it relates to a cube, and those sites use the relationship between a cube and a tetrahedron in order to calculate the angle, but I'm going to demonstrate an alternative proof using solely the properties of a tetrahedron  - its symmetry and its equilateral triangular faces.


To start with, calculate the horizontal distance from one of the base vertices to the centre of the base face (the point directly below the central 'atom').  In this diagram, E is the top corner, D is the central "atom" (representing the centre of the tetrahedron) and C is the point directly below D, such that CDE is a straight line and C is the centre of the shaded face (the base).



This gives a large right-angled triangle ACE, where the hypotenuse is one edge of the tetrahedron (length AE = l); one side is the line we'll be calculating (length AC, using the triangle ABC); and the third, CE, is the line extending from the top of the tetrahedron through the central atom down to the centre of the base.

In triangle ABC, length AB = l/2, angle A is 30 degrees, angle B is 90 degrees.  We need to calculate length AC:

cos 30 = (l/2) / AC
AC = l / (2 cos 30)


Since we have two sides and an angle of a right-angled triangle, we can determine the other two angles; we're primarily interested in the angle at the top, labelled α.

sin α = AC / l

And as we know that AC = l / (2 cos 30), this simplifies to

sin α = 1 / (2 cos 30)

Evaluating:  1 / (2 cos 30) = 0.5773

sin α = 0.5773
α = 35.26 degrees.


Looking now at the triangle ADE, which contains the tetrahedral bond angle at D: the bond angle at D can be calculated through symmetry, since ADE is an isosceles triangle (DA and DE are both centre-to-vertex distances), so the angles at A and at E are both 35.26 degrees.

D = 180 - (2*35.26) = 109.47 degrees, as we've been told all along.

QED
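For completeness, a quick numerical check of the same steps - a minimal Python sketch:

    from math import asin, cos, degrees, radians

    l = 1.0                                  # edge length of the tetrahedron
    AC = (l / 2) / cos(radians(30))          # AC = l / (2 cos 30)
    alpha = degrees(asin(AC / l))            # sin(alpha) = AC / l  ->  35.26 degrees
    bond_angle = 180 - 2 * alpha             # isosceles triangle ADE
    print(f"{bond_angle:.2f} degrees")       # 109.47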

Thursday, 21 December 2017

How did a Chemistry Graduate get into Online Testing?

When people examine my CV, they are often intrigued by how a graduate specialising in chemistry transferred into web analytics, and into online testing and optimisation.  Surely there's nothing in common between the two?

I am at a slight disadvantage - after all, I can't exactly say that I always wanted to go into website analysis when I was younger.  No; I was quite happy playing on my home computer, an Acorn Electron with its 32KB of RAM and 8-bit processor running at 1MHz, and the world wide web hadn't been invented yet.  You needed to buy an external interface just to connect it to a temperature gauge or control an electrical circuit - we certainly weren't talking about the 'internet of things'.  But at school, I was good at maths, and particularly good at science, which was something I especially enjoyed.  I carried on my studies, specialising in maths, chemistry and physics, and pursuing them further at university.  Along the way, I bought my first PC - a 286 with 640KB of memory - then upgraded to a 486SX 25MHz with 2MB of RAM, which was enough to support my scientific studies and enabled me to start accessing the information superhighway.

Nearly twenty years later, I'm now an established web optimisation professional, but I still have my interest in science, and in particular chemistry.  Earlier this week, I was reading through a chemistry textbook (yes, it's still that level of interest), and found this interesting passage on experimental method.  It may not seem immediately relevant, but substitute "online testing" or "online optimisation" for chemistry, and read on.

Despite what some theoreticians would have us believe, chemistry is founded on experimental work.   An investigative sequence begins with a hypothesis which is tested by experiment and, on the basis of the observed results, is ratified, modified or discarded.   At every stage of this process, the accurate and unbiased recording of results is crucial to success.  However, whilst it is true that such rational analysis can lead the scientist towards his goal, this happy sequence of events occurs much less frequently than many would care to admit. 

I'm sure you can see how the practice and thought processes behind chemical experiments translate into care and planning for online testing.  I've been blogging about valid hypotheses and tests for years now - clearly the scientific thinking in me successfully made the journey from the lab to the website.  And the comment that the happy sequence of events "occurs much less frequently than many would care to admit" is particularly pertinent, and I would have to agree with it (although I wouldn't like to admit it).  Be honest, how many of your tests win?  After all, we're not doing experimental research purely for academic purposes - we're trying to make money, and our jobs are to get winners implemented and make money for our companies (while upholding our reputations as subject-matter experts).

The textbook continues...

Having made the all important experimental observations, transmitting this information clearly to other workers in the field is of equal importance.   The record of your observations must be made in such a manner that others as well as yourself can repeat the work at a later stage.   Omission of a small detail, such as the degree of purity of a particular reagent, can often render a procedure irreproducible, invalidating your claims and leaving you exposed to criticism.   The scientific community is rightly suspicious of results which can only be obtained in the hands of one particular worker!

The terminology is quite subject-specific here, but with a little translation, you can see how this also applies to online testing.  In the scientific world, there's a far greater emphasis on sharing results with peers - in industry, we tend to keep our major winners to ourselves, unless we're writing case studies (and ask yourself why we read case studies anyway) or presenting at conferences.  But when we do write or publish our results, it's important that we explain exactly how we achieved that massive 197% lift in conversion - otherwise we'll end up invalidating our claims and leaving ourselves exposed to criticism.  "The scientific community [and the online community even more so] is rightly suspicious of results which can only be obtained in the hands of one particular worker!"  Isn't that the truth?

Having faced rigorous scrutiny and peer review of my work in a laboratory, I know how to address questions about the performance of my online tests.   Working with online traffic is far safer than handling hazardous chemicals, but the effects of publishing spurious or inaccurate results are equally damaging to an online marketer or a laboratory-based chemist.  Online and offline scientists alike have to be thoughtful in their experimental practice, rigorous in their analysis and transparent in their methodology and calculations.  


Excerpts taken from Experimental Organic Chemistry: Principles and Practice by L M Harwood and C J Moody, published by Blackwell Scientific Publications in 1989 and reprinted in 1990.

Wednesday, 29 November 2017

Another day I haven't used Algebra

So, there's a meme floating around Facebook, which says, "Well, another day has passed, and I still haven't used algebra."  Really?  If it's true, it's not something to be especially proud of.  And the likelihood is that it's not true anyway.

For starters, there are many things that I learned at school that I don't use on a daily basis any more.  Foreign languages, for a start (although I probably do use them more than I realise).  Do I regularly apply the map-reading skills I learned at school? We have satnavs and apps for that.  And do I refer to the Stuarts and the Tudors?  I suppose I should probably proudly announce that I haven't once consulted a history book this week, and rile all the historians I know.  Somehow though, Maths - probably due to its apparent difficulty or complexity - is seen as something that we should abandon, forget or even be proud of ignoring:

"Why do they make us learn math? It's not like I'll ever use it."
"Yeah, it's not like math teaches you how to work out complex problems logically."


However, Maths (and to some extent algebra) still permeates many areas of our life.  If you want to cook a meal (and you might), then you'll need to know when to start cooking it, in order to achieve a particular mealtime.  Or you might just start cooking as soon as you get home, and eat it as soon as it's ready.  But when will that be?  How long will it take you to get home if you drive at 30 mph?  40 mph?  Are you so sure that another day has passed and you really haven't used algebra?


And then there are those delightful puzzles on Facebook.  You know the sort - if three buckets are equal to 30, and two buckets and two spades are equal to 26, and a bucket and a spade and a flag are equal to 24, what's a flag worth?  I really don't think it's possible to solve that problem without using algebra (call it what you will).  How do you solve those problems?  Here's some help on BODMAS problems (or PEMDAS, if you're from the US).
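For that particular example, the algebra (call it what you will) takes only a few lines - a minimal Python sketch of the working:

    # three buckets are equal to 30
    bucket = 30 / 3                   # bucket = 10
    # two buckets and two spades are equal to 26
    spade = (26 - 2 * bucket) / 2     # spade = 3
    # a bucket, a spade and a flag are equal to 24
    flag = 24 - bucket - spade        # flag = 11
    print(bucket, spade, flag)        # 10.0 3.0 11.0

Substitution, rearranging, solving for an unknown - that's algebra, whatever the pictures are.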

Once you assign a numerical value (or a time, or a price) to an item (or a distance), and then start doing any sort of calculation on it, you are doing algebra.  Have you ever wondered which was better value in the sales?  The Black Friday sales?  The pre- or post-Christmas sales?  3 for 2 offers? Or buy-one-get-one-free? Or buy-one-get-one-half-price?

And if you have a £10 note in your pocket, and you want to know how many widgets you can buy without overspending... you're using algebra.  I think it's fair to say that so far today, I have used algebra numerous times - you might even say X times.