Level of Significance: a more mathematical discussion
In mathematical terms, and according to "A Dictionary of Statistical Terms" (F H C Marriott, published for the International Statistical Institute by Longman Scientific and Technical):

"Many statistical tests of hypotheses depend on the use of the probability distributions of a statistic t chosen for the purpose of the particular test. When the hypothesis is true this distribution has a known form, at least approximately, and the probabilities Pr(t ≥ t1) or Pr(t ≤ t0) are called levels of significance and are usually expressed as percentages, e.g. 5 per cent. The actual values are, of course, arbitrary, but popular values are 5, 1 and 0.1 per cent."

In English: we assume that the probability of a particular event happening (e.g. a particular recipe persuading a customer to convert and complete a purchase) can be modelled using the Normal Distribution. We assume that the average conversion rate (e.g. 15%) represents the recipe's typical conversion rate, and the chance of the recipe driving a higher or lower conversion rate can be calculated using some complex but manageable maths.

More data - more traffic and more orders - gives us the ability to state our average conversion rate with greater precision. As we obtain more data, our overall data set is less prone to skewing (being affected by one or two anomalous data points). The 'spread' of our curve - the degree of variability - decreases; in mathematical terms, the standard deviation of our estimated conversion rate decreases. The standard deviation is a measure of how spread out our data is, taking into account how many data points we have and how much they vary from the average. More data generally means a lower standard deviation (and that's why we like to have more traffic to achieve confidence).
When we run a test between two recipes, we are comparing their average conversion rates (and other metrics), and asking how likely it is that one conversion rate is actually better than the other. To do this, we look at how the two conversion rates compare on their normal distribution curves.
In the diagram above, the conversion rate for Recipe B (green) is more than one standard deviation from the mean - it's closer to two standard deviations. We can use spreadsheets or data tables (remember those?) to translate the number of standard deviations into a probability: how likely is it that the conversion rate for Recipe B will be consistently higher than Recipe A's? This gives us a confidence level. It depends on the difference between the two conversion rates (Y% compared to X%) and how many standard deviations apart they are (which reflects how much spread there is in the two data sets, and that in turn depends on how many orders and visitors we've received).
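Those data tables have a modern equivalent: the cumulative distribution function (CDF) of the normal distribution. A short sketch of that lookup, assuming the scipy library is available:

```python
# Sketch: translating a distance measured in standard deviations (a z-score)
# into a probability. scipy's normal CDF plays the role of the printed table.
from scipy.stats import norm

for z in (1.0, 1.645, 1.96, 2.0):
    confidence = norm.cdf(z)  # proportion of the curve below z standard deviations
    print(f"{z:.3f} standard deviations -> {confidence:.1%} confidence")
```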
Most optimisation tools will carry out the calculation on the number of orders and visitors, and the comparison between the two recipes, as part of their in-built capabilities (it's possible to do it with a spreadsheet, but it's a bit laborious).
The fundamentals are:
- we model the performance (conversion rate) of each recipe using the normal distribution (this tells us how the actual performance of the recipe is likely to vary around the reported average);
- we calculate the distance between the conversion rates of the two recipes, expressed as a number of standard deviations;
- we translate that number of standard deviations into a percentage probability, which is the confidence level that one recipe is actually outperforming the other (the sketch below pulls these three steps together).
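Here is a minimal sketch of those three steps, assuming scipy is available; the function name, visitor counts and order counts are all invented for illustration:

```python
# Sketch: the three steps above, for two hypothetical recipes.
import math
from scipy.stats import norm

def confidence_b_beats_a(visitors_a, orders_a, visitors_b, orders_b):
    # Step 1: model each recipe's conversion rate and the spread of that estimate.
    rate_a = orders_a / visitors_a
    rate_b = orders_b / visitors_b
    se_a = math.sqrt(rate_a * (1 - rate_a) / visitors_a)
    se_b = math.sqrt(rate_b * (1 - rate_b) / visitors_b)

    # Step 2: the distance between the two rates, in standard deviations
    # (the spread of the difference combines both recipes' spreads).
    se_difference = math.sqrt(se_a**2 + se_b**2)
    z = (rate_b - rate_a) / se_difference

    # Step 3: translate that distance into a confidence level.
    return norm.cdf(z)

# Recipe A converts at 15.0%, Recipe B at 15.8%, on 20,000 visitors each.
confidence = confidence_b_beats_a(20_000, 3_000, 20_000, 3_160)
print(f"Confidence that B outperforms A: {confidence:.1%}")
```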
Revisiting our original definition: "Many statistical tests of hypotheses depend on the use of the probability distributions of a statistic t chosen for the purpose of the particular test..."
...and we typically use the Normal Distribution: "When the hypothesis is true this distribution has a known form, at least approximately, and the probabilities Pr(t ≥ t1) or Pr(t ≤ t0) are called levels of significance and are usually expressed as percentages, e.g. 5 per cent."
In our example, the probability associated with the significance point t1 is the probability that the test recipe outperforms the control recipe. It equates to the proportion of the total area under the curve which is shaded:
You can see here that almost 95% of the area under the Recipe A curve has been shaded; only the small amount between t1 and t0 is not shaded (approximately 5%). Hence we can say with confidence (roughly 95%) that Recipe B is better than Recipe A.
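As a quick check of that figure (my own arithmetic, not from the diagram itself), roughly 5% of a normal curve lies beyond the upper 5 per cent significance point t1:

```python
# Sketch: about 5% of the normal curve lies beyond the upper significance point t1.
from scipy.stats import norm

t1 = norm.ppf(0.95)      # upper 5 per cent significance point, ~1.645 SDs
tail = 1 - norm.cdf(t1)  # the unshaded tail beyond t1
print(f"t1 = {t1:.3f} standard deviations, unshaded tail = {tail:.1%}")
```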
Picking up the rest of the definition: "Thus, for example, the expression 't falls above the 5 per cent level of significance' means that the observed value of t is greater than t1, where the probability of all values greater than t1 is 0.05; t1 is called the upper 5 per cent significance point, and similarly for the lower significance point t0."
As I said, most of the heavy maths lifting can be done either by the testing tool or by a spreadsheet, but I hope this article has helped to clarify what confidence means mathematically, and (importantly) how it depends on the sample size - since more data improves the accuracy of the overall estimate and reduces the standard deviation, which, in turn, enables us to quote smaller differences with higher confidence.
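To illustrate that closing point, here is one last sketch (with invented numbers) showing how the smallest lift we could quote with roughly 95% confidence shrinks as traffic grows:

```python
# Sketch: the smallest lift that reaches ~95% confidence, for growing traffic.
import math
from scipy.stats import norm

Z_95 = norm.ppf(0.95)  # ~1.645 standard deviations for 95% (one-sided) confidence

def smallest_confident_lift(base_rate, visitors_per_recipe):
    """Roughly the smallest lift over base_rate we could quote with 95% confidence."""
    # Approximate both recipes' spread with the base rate (fine for small lifts).
    se_difference = math.sqrt(2 * base_rate * (1 - base_rate) / visitors_per_recipe)
    return Z_95 * se_difference

for visitors in (1_000, 10_000, 100_000):
    lift = smallest_confident_lift(0.15, visitors)
    print(f"{visitors:>7} visitors per recipe: lift of ~{lift:.2%} needed")
```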