
Sunday, 23 December 2018

2018: A first time for everything

Having recently passed 40, I thought I was done with firsts. I was wrong; this year has been a year of many firsts. You won't find my highlights on Facebook (I gave up Facebook for Lent, and all social media for September), so I've compiled some firsts here.

2018 saw the first time I played with my church's music group at a funeral. I've played at a number of weddings and christenings, which were joyful occasions. It was a challenge and an honour to be asked to play at a funeral - I had to "put my feelings in my pockets" for the duration of the service, and let them out afterwards. Not only was 2018 the first time I played at a funeral; it included the second time too.

I owned a BigTrak for the first time ever in 2018.  It was a very short association; the new 21st-century BigTrak is actually underpowered compared to its 1980s predecessor (it runs off one fewer battery), which means it doesn't turn with the same accuracy or reliability. An instruction to turn 90 degrees ends up somewhere between 65 and 75 degrees, so there's no way to program a square path accurately. I sent it back barely 30 minutes after opening it. They say you should never revisit your childhood heroes; perhaps they were right.

A major highlight for me was seeing Michael W Smith live for the first time, at the Festival of Hope in Blackpool in September.


I've enjoyed his music and bought his albums (on cassette, even) for over 20 years, but never previously seen him live, since he doesn't cross the Atlantic all that often and I've never seen the dates in advance.  Seeing him live was well worth the wait; they say you should never meet your heroes - they are totally wrong.

Earlier in the year, spring 2018 saw me take the family to see the Red Arrows for the first time.  I've seen them dozens of times before, but the Armed Forces Day in Llandudno was the first occasion where the Leese family en masse attended an air display. 


The trip was very successful, even if it didn't go entirely as planned: one of our children got bored partway through the Red Arrows' display and opted to walk down to the waterline and throw stones into the sea instead, while another fell asleep in the lull between the Red Arrows and the Typhoon display, and had to be provided with ear defenders to help shut out the noise and stay asleep.

I created an account on Soundcloud for the first time this year. It isn't getting much attention, but I am uploading a few miscellaneous tracks to it (all my own work). It's 20 years since I bought my first keyboards, and I've finally decided it's time to upload some of the music I've produced for the wider world.  I've included some additional voice talent in my most recent recordings, which means there's a whole list of firsts: first shout-out to find voice actors; first online auditions; first recordings; and so on.

Unfortunately, 2018 has seen me start taking immunosuppressant drugs, as I have been diagnosed with psoriatic arthritis (a form of rheumatoid arthritis). Back in January, I had severe pain in my left foot, at the base of my toes. After an initial diagnosis of tendonitis that didn't improve, I was eventually referred for blood tests and now attend the Rheumatology department of my local hospital every month for progress checks. The drug I'm on - methotrexate - seems to be working very well, with very few side effects (except occasional bouts of can't-be-bothered, and sometimes one day a week where I have almost no energy). I've also had the chance to see ultrasounds of my hands and feet, which were fascinating.

Summer 2018 was the first time I've volunteered to help at my church's summer club.  In fact, it's the first time I've volunteered to help at any summer club ever.  It was an exhausting three-day club, and somehow I managed to fit my normal work around it as well!  It was a great opportunity to 'give more than I receive' - I was completely drained by the end of it. It was a great experience, volunteering alongside a team of amazing people and sharing the good news of Jesus with dozens of children, and I'm already looking forward to next year's club!

I am not as 'technical' as people think, and I really don't know how to fix your computer. Truth be told, I'm not sure how to fix my own. But when our laptop started beeping incessantly, it fell to me, as the most technical member of our household, to fix it. Short answer: I had to dismantle most of the laptop to get to the CMOS battery (roughly the size of a 10p coin) and replace it. Successfully. On the second attempt. First time performing laptop surgery - check!

All in all, 2018 has been an interesting year. I've done many new things; some new things have happened to me, and it's been a surprising year of firsts.  I intend to keep on growing and doing more firsts next year.

Wednesday, 28 November 2018

The Hierarchy of A/B Testing

As any A/B testing program matures, it becomes important to work out not only what you should test (and why), but also the order in which to run your tests.

For example, let's suppose that your customer feedback team has identified a need for a customer support tool that helps customers choose which of your products best suits them.  Where should it fit on the page?  What should it look like?  What should it say?  What color should it be?  Is it beneficial to customers?  How are you going to unpick all these questions and come up with a testing strategy for this new concept?

These questions should be brought into a sequence of tests, with the most important questions answered first.  Once you've answered the most important questions, then the rest can follow in sequence.
Firstly:  PRESENCE:  is this new feature beneficial to customers?

In our hypothetical example, it's great that the customer feedback team have identified a potential need for customers.  The first question to answer is: does the proposed solution meet customer needs?  And the test that follows from that is:  what happens if we put it on the page?  Not where (top versus bottom), or what it should look like (red versus blue versus green), but should it go anywhere on the page at all?

If you're feeling daring, you might even test removing existing content from the page.  It's possible that content has been added slowly and steadily over weeks, months or even longer, and hasn't been tested at any point.  You may ruffle some feathers with this approach, but if something looks out of place then it's worth asking why it was put there.  If you get an answer similar to "It seemed like a good idea at the time" then you've probably identified a test candidate.


Let's assume that your first test is a success, and it's a winner.  Customers like the new feature, and you can see this because you've looked at engagement with it (how many people click on it, hover near it, enter their search parameters and see the results) and because it leads to improved conversion.
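As a rough sketch, that engagement-and-conversion check might look something like this in Python (the metric names and figures here are invented for illustration, not real test data):

```python
# Compare engagement and conversion between the control and the new-feature
# recipe.  All figures are illustrative, not real test results.

def rate(events: int, visitors: int) -> float:
    """Events per visitor, e.g. clicks / visitors."""
    return events / visitors

control = {"visitors": 10_000, "clicks": 400, "orders": 200}
variant = {"visitors": 10_000, "clicks": 900, "orders": 230}  # with the new tool

ctr_lift = rate(variant["clicks"], variant["visitors"]) / rate(control["clicks"], control["visitors"]) - 1
cvr_lift = rate(variant["orders"], variant["visitors"]) / rate(control["orders"], control["visitors"]) - 1

print(f"Click-through lift: {ctr_lift:+.0%}")  # +125%
print(f"Conversion lift:    {cvr_lift:+.0%}")  # +15%
```

The key point is that both engagement (clicks) and the money metric (orders) move in the same direction - engagement alone wouldn't be enough to call it a winner.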

Next:  POSITION:  where should it fit on the page?

Your first test proved that it should go on the page - somewhere.  The next step is to determine the optimum placement.  Should it get pride of place at the top of the page, above the fold (yes, I still believe in 'the fold' as a concept)?  Or is it a sales support tool that is best placed somewhere below all the marketing banners and product lists?  Or does it even fit at the bottom of the page as a catch-all for customers who are really searching for your products?


Because web pages come in so many different styles...
This test will show you how engagement varies with placement for this tool - but watch out for changes in click-through rates for the other elements on your page.  You can expect your new feature to get more clicks if you place it at the top of the page, but are these at the expense of clicks on more useful page content?  Naturally, the team that have been working on the new feature will have their own view on where it should be placed, but what's best for the page as a whole?  And what's actually best for your customer?

Next:  APPEARANCE:  what should it look like?

This question covers a range of areas that designers will love to tweak and play with.  At this point, you've answered the bigger questions around presence (yes) and position (optimum), and now you're moving on to appearance.  Should it be big and bold?  Should it fit in with the rest of the page design, or should it stand out?  Should it be red, yellow, green or blue?  There are plenty of questions to answer here, and you'll never be short of ideas to test.


Take care:
It is possible to answer multiple questions with one test that has multiple recipes, but take care to avoid addressing the later questions without first answering the earlier ones. 
If you introduce your new feature in the middle of the page (without testing) and then start testing what the headline and copy should say, then you're testing in a blind alley, without knowing whether you have the best placement already.  And if your test recipes all lose, was it because you changed the headline from "Find your ideal sprocket" to "Select the widget that suits you", or was it because the feature simply doesn't belong on the page at all?

Also take care not to become bogged down in fine-detail questions when you're still answering more general ones.  It's all too easy to become tangled up in discussions about whether the feature should be black with white text or white with black text, when you haven't even tested having the feature on the page.  The cosmetic questions around styling and appearance are far more interesting and exciting than the necessary work of getting the new element onto the page and making it function.

For example, NASA recently landed another probe on Mars.  It wasn't easy, and I don't imagine there were many people at NASA quibbling about the colour of the parachute or the colour of the rocket itself.  Most people were focused on actually getting the probe onto the Martian surface.  The same general rule applies in A/B testing - sometimes just getting the new element working and present on the page generates enough difficulties and challenges, especially if it's a dynamic element that involves calling APIs or other third-party services.

In those situations, yes, there are design questions to answer, but 'best guess' is a perfectly acceptable answer.  What should it look like?  Use your judgement; use your experience; maybe even use previous test data, and come back to it in a later test.  


Testing InSight's parachute; Image: NASA
But don't go introducing additional complexity and more variables where they're really not welcome.  What colour was the NASA parachute?  The one that was easiest to produce.

Once the big questions of presence, position and appearance have been answered, it becomes a case of optimizing any remaining details.  CTA button wording and color; smaller elements within the new feature; the 'colour of the parachute' and so on.  You'll find there's more interest in tweaking the design of a winner than there is in actually getting it working, but that's fine... just roll with it!

Wednesday, 31 October 2018

Productivity

It's not just blog posts.

It's about spending time producing something.

This is something I pondered through much of October, as I was working on a number of different projects (none of them related to blogging, web analytics, maths or puzzles).  I aim to produce one post per month for this blog, but October has been so busy that I've just not had time to put two words together.  In fact, I'm editing this in November, so there you go.

But the truth is I've been playing with my children; I've been practising music (and writing some pieces too) and doing so many things that don't feature here that I've just not had time to make a sensible contribution to this blog.

And I guess that's the point - productivity isn't always measurable (especially if you're only measuring one outcome).  My KPI for this blog is to post once a month and see which articles are most popular.  And even then, that's not critical; it's just nice to have.

So go be productive offline.  There's a whole planet out there.

Friday, 21 September 2018

Email Etiquette

I'm going to go completely off-topic in this post, and talk about something that I've started noticing more and more over recent months:  poor email etiquette.  Not poor spelling, or grammar, or style, but just a low standard of communication from people and businesses who send me emails.  Things like missing images, poor titles, wonky meta tags, and pre-header text (the part of an email that you see in your inbox after the subject line).  This is all stuff that can be accepted, ignored or overlooked - it's fine.  But sometimes the content of the email - the writing style, or lack of it - begins to speak more loudly than the words themselves.
Way back in the annals of online history, internet etiquette ("netiquette") was a buzz-word that was bandied around chat rooms, HTML web pages, and the occasional online guide.

According to the BBC, netiquette means "Respecting other users' views and displaying common courtesy when posting your views to online discussion groups", while Wikipedia defines it as "a set of social conventions that facilitate interaction over networks, ranging from Usenet and mailing lists to blogs and forums."  Which is fair enough.  In short, netiquette means "Play nicely!"


Email etiquette is something else - similar, but different.  Email is personal, while online posting is impersonal and has a much wider audience.  Email is, to all intents and purposes, the modern version of writing a letter, and we were all taught how to write a letter, right?  No?  The trouble is that the speed of email means that much of the thought and care that goes into writing a letter (or even word-processing one) has started to disappear.  Here, then, are my suggestions for good email etiquette.

- Check your typing.  You might be banging out a 30-second email, but it's still worth taking an extra five seconds to check that everything is spelt correctly.  "It is not time to launch the product" and "It is now time to launch the product" will both beat a spell-checker, but only one of them is what you meant to say.


- Use the active voice instead of the passive.  Saying "I understand," or "I agree" just reads better and conveys more information than "Understood." or "Agreed."  You're not a robot, and you don't have to lose your personality to communicate effectively via email.

- Write in complete sentences.  Just because you're typing as fast as you think doesn't mean that your recipients will read the incomplete sentences you've written and correctly extrapolate them back to your original thoughts.  The speed of email delivery does not require speedier responses.  Take your time.  If you start dropping I, you, me, then, that, if and other important pronouns and connectives from your sentences, and replacing them with full stops, then you're going to confuse a lot of people.  This ties in with the previous point - just because the passive voice is shorter than the active doesn't mean that it will be easier to understand.  You will also irritate those who have to increase their effort in order to understand you.
"Take your time..."
- Don't use red text, unless you know what you're doing.  Red text says "This is an error", which is fine if you're highlighting an error, but will otherwise frustrate and irritate your readers.  Writing in full capitals is still regarded as shouting (although have you ever noticed that comic book characters shout in almost all their speech bubbles?), which is okay if you want to shout, but not recommended if you want to improve the readability of your message.

- Shorter sentences are better than long ones.  Obviously, your sentences still need to be complete, but this suggestion applies especially if your readers don't read English as their first language.  Break up your longer sentences into shorter ones.  Keep the language concise.  Split your sentences instead of carrying on with an "and...". You're not writing a novel, you're writing a message, so you can probably lose subordinate clauses, unnecessary adverbs and parenthetical statements.  Keep it concise, keep it precise.  This also applies to reports, analyses and recommendations.  Stick to the point, and state it clearly.
"Keep it concise, keep it precise."
- Cool fingers on a calm keyboard.  If you have to reply to an email which has annoyed, irritated or frustrated you, then go away and think about your reply for a few minutes.  Keep calm instead of flying off the handle and hammering your keyboard.  Pick out the key points that need to be addressed, and handle them in a cool, calm and factual manner.  "Yes, my idea is better than yours, and no, I don't agree with your statements, because..." is going to work better in the long term than lots of red text and block capitals.  

- Remember that sarcasm and irony will be almost completely lost by the time your message reaches its recipient(s).  If you're aiming to be sarcastic or ironic, then you'd better be very good at it, or dose it with plenty of smileys or emoticons to help get the message across.  Make use of extra punctuation, go for italics and capital letters, and try not to be too subtle.  If in doubt, or if you're communicating with somebody who doesn't know you very well, then avoid sarcasm completely.  This can apply over the phone, too: subtlety can be totally lost in a phone conversation, so work out what you want to say, and say it clearly.  Obviously!

- Please and thank you go a long, long way.  If you want to avoid sounding heavy handed and rude, then use basic manners.  If you're making a request, then say please.  If you're acknowledging somebody's work, then say thank you.  You'll be amazed at how this improves working relationships with everybody around you - a little appreciation goes a long way.  I know this is hardly earth-shattering, nor specific to email, but it's worth repeating.

- When you've finished, stop.  Don't start wandering around the discussion, bringing up new subjects or changing topic.  Start another email instead.
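As a toy illustration of the 'shorter sentences' suggestion, a few lines of Python can flag overlong sentences before you hit send.  This is a rough sketch with a deliberately naive sentence splitter, and the 25-word threshold is my own arbitrary choice:

```python
import re

def long_sentences(text: str, max_words: int = 25) -> list[str]:
    """Return the sentences in text that exceed max_words.

    Splits naively on . ! ? - good enough for a quick pre-send check,
    not for real natural-language processing.
    """
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    return [s for s in sentences if len(s.split()) > max_words]

draft = ("Thanks for the report. "
         "I agree with the plan and I will send the figures tomorrow.")
print(long_sentences(draft))  # nothing over 25 words, so an empty list
```

Anything the function returns is a candidate for splitting in two.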

FOR EXAMPLE

A potential worst case?  You could start (and potentially end) an email with "Disagree."



Friday, 31 August 2018

Chess Game vs Steve

For this month's post, I'm going to revisit one of the most bizarre chess games I've ever played over the board (face-to-face).  This game was played on 11 March 2014, against Steve (I didn't catch his surname).  I played my standard 1. d4 d5 2. c4 and faced a reply I'd never seen before, namely 2. ... b5.

What's going on?

Steve said after the game that he expected me to capture the "loose" b-pawn; he would then play ...a6, I would capture again, and he would recapture with his bishop.  Then, once I moved my e-pawn, he would capture my bishop on f1 and unleash a massive queenside attack along all his open files, with my king unable to castle to safety.

It's a good job I was having none of it.  I played c4xd5, to keep my pawns in the centre.

 1.d4 d5
2.c4 b5
3.cxd5 Nf6
4.e3 Ba6
5.Nc3 b4
6.Qa4+ Qd7
7.Qxb4 Bxf1

So Steve plays his Ba6 and Bxf1 motif, still looking at trapping my king in the centre.

8.Kxf1 Nxd5
9.Qb7 Nb6
10.Nf3 Nc6
11.d5 Nd8
12.Qa6 Nxd5
13.Ne5 Nb4
14.Qc4 Qf5
15.Qb5+

Black can't play 15. ... Ndc6 or 15. ... Nbc6 here, even though the knights would protect each other: 15. ... Nc6 16. Nxc6 Qxb5 17. Nxb5 Nxc6 18. Nxc7+ wins the rook.



Instead, the game continued...

15 ... c6
16.Nxc6 Qd3+
17.Qxd3 Nxd3
18.Nd4 e6
19.Ke2 Ne5
20.Ncb5  threatening Nc7+ and picking up the rook


20 ... Kd7
21.Rd1 Ke7

Black wastes a move while I continue to develop my pieces.  I was really pleased at this point; a pawn up and with superior development - and I was starting to claim the open files as well.

22.Bd2

22. ...  Ndc6

Black wants to exchange my active knights for his stuck on the back rank, and start mobilising his rooks.
23.Nxc6+ Nxc6
24.Rac1 Ne5
25.Rc7+

I exchange my lead in development for a lead in material, picking up the last of black's queenside pawns, and also giving me two connected passed pawns.

25. ...  Kf6 (tucked in behind the knight, which isn't guaranteed to go well)
26.Rxa7 Rb8
27.a4 Bc5
28.Rc7 Bb6
29.Rc2 g5
30.Bc3

"Pin and win..."

Black doesn't see the threat, and instead continues the kingside expansion.

30. ... h5
31.f4 gxf4
32.exf4 Rhg8

A real blunder.  Not only do I win the knight on the spot, but the unfortunate position of the rook on b8 needed to be addressed at this point. 


33.Bxe5+ Ke7
34.Bxb8 Rxg2+
35.Kd3 Resigned.

The quick sequence of picking up the knight on e5 and then the rook on b8 has completely tipped the scales, and an unorthodox start comes to a swift end.  I enjoyed the way I dodged my opponent's opening preparation, played the middlegame, and developed my pieces in accordance with standard practice, and I think I was fortunate to pick up the knight and rook so quickly.  My longer term strategy was to start advancing my unopposed a- and b-pawns, probably with the support of my rooks, while sheltering my king near my queenside pawns.

A few months later, we had a rematch, and my game was a disaster (I don't think I still have the scoresheet!).

Tuesday, 31 July 2018

Checkout Conversion - A Penalty Shoot-Out

This year's World Cup ended barely a few weeks ago, and already the dust has settled and we've all gone back to our non-football lives.

From an English perspective, there were thankfully few penalty shoot-outs in this year's tournament (I can only remember two, maybe three), and even more thankfully, England won theirs (for the first time in living memory).  Penalty shoot-outs are a test of skill, nerve and determination: five opportunities to score, and five opportunities to miss and lose everything.  It's all or nothing, and it really could be nothing.

It occurred to me, while I was a neutral observer of one of the shoot-outs, that a typical online checkout process is like a penalty shoot-out.

Five opportunities to win or lose.
A test of nerve and skill.
All or nothing.
Practice and experience helps, but isn't always enough.

As website designers (and optimizers), we're always looking to increase the number of conversions - the number of people who successfully complete the penalty shoot-out, score five out of five, and "win".  Each page in a checkout process requires slightly different skills and abilities; each page requires slightly more nerve as you approach the point of completing the purchase, and as our prospective customer hands over increasingly sensitive personal information.

So we need to reassure customers.  Checkout conversion comes down to making things simple and straightforward; and helping users keep their eyes on the goal.

1. Basket - the goal
2. Sign In - does this go in checkout, or at the end?
3. Delivery Details - where are you going to deliver the package?
4. Payment Information - how are you going to pay for it?
5. Confirmation - Winner!
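To stretch the analogy into numbers: overall checkout conversion is the product of the per-page completion rates, so a miss anywhere loses the whole attempt.  A minimal sketch (the completion rates below are invented for illustration, not benchmarks):

```python
# Each checkout page is one 'penalty kick': miss any one and the sale is lost.
# The per-step completion rates are made-up illustrative figures.
funnel = {
    "basket":       0.70,
    "sign_in":      0.85,
    "delivery":     0.90,
    "payment":      0.80,
    "confirmation": 0.99,
}

overall = 1.0
for step, completion in funnel.items():
    overall *= completion

print(f"Overall checkout conversion: {overall:.1%}")  # 42.4%
```

Notice how even respectable per-page rates multiply down to less than half of baskets becoming orders - which is why a small improvement on any single page is worth testing for.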

Monday, 25 June 2018

Data in Context (England 6 - Panama 1)

There's no denying it, England have made a remarkable and unprecedented start to their World Cup campaign.  6-1 is their best ever score in a World Cup competition, exceeding their previous record of 3-0 against Paraguay and against Poland (both achieved in the Mexico '86 competition).  A look at a few data points emphasises the scale of the win:


*  The highest ever England win (any competition) is 13-0 against Ireland in February 1882.
*  England now share the record for most goals in the first half of a World Cup game (five, joint record with Germany, who won 7-1 against Brazil in 2014).
* The last time England scored four or more goals in a World Cup game was in the final of 1966.
*  Harry Kane joins Ron Flowers (1962) as the only players to score in England's first two games at a World Cup tournament.

However, England are not usually this prolific - they scored as many goals against Panama on Sunday as they had in their previous seven World Cup matches in total.  This makes the Panama game an outlier, an unusual result - you could even call it a freak result.  Let's give the data a little more context:

- Panama are playing in their first World Cup ever, and they scored their first ever World Cup goal against England.
- Panama's qualification relied on a highly dubious (and non-existent) "ghost goal"

- Panama's world ranking is 55th (just behind Jamaica) down from a peak of 38th in 2013. England's world ranking is 12th.
- Panama's total population is around 4 million people.  England's is over 50 million.  London alone has 8 million.  (Tunisia has around 11 million people).

Sometimes we do get freak results.  You probably aren't going to convince an England fan about this today, but as data analysts, we have to acknowledge that sometimes the data is just anomalous (or even erroneous).  At the very least, it's not representative.

When we don't run our A/B tests for long enough, or we don't get a large enough sample of data, or we take a specific segment which is particularly small, we leave ourselves open to the problem of getting anomalous results.  We have to remember that in A/B testing, there are some visitors who will always complete a purchase (or successfully achieve a site goal) on our website, no matter how bad the experience is.  And some people will never, ever buy from us, no matter how slick and seamless our website is.  And there are some people who will have carried out days or weeks of research on our site, before we launched the test, and shortly after we start our test, they decide to purchase a top-of-the-range product with all the add-ons, bolt-ons, upgrades and so on.  And there we have it - a large, high-value order for one of our test recipes which is entirely unrelated to our test, but which sits in Recipe B's tally and gives us an almost-immediate winner.
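A quick illustration of how one unrelated high-value order can swing a small-sample test (all the figures below are invented):

```python
from statistics import mean

# Revenue per visitor for two small, otherwise-identical samples.
recipe_a = [0, 0, 0, 50, 0, 0, 60, 0, 0, 0]    # two ordinary orders
recipe_b = [0, 0, 0, 50, 0, 0, 60, 0, 0, 900]  # plus one pre-researched mega-order

print(f"Recipe A revenue per visitor: {mean(recipe_a):.2f}")  # 11.00
print(f"Recipe B revenue per visitor: {mean(recipe_b):.2f}")  # 101.00
```

One shopper who had already decided to buy makes Recipe B look like a nine-fold winner, even though the underlying behaviour is identical - which is why short tests and small samples are so dangerous.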

The aim of a test is to nudge people from the 'probably won't buy' category into the 'probably will buy' category, and from there into the 'yes, I will buy' category.  Testing is about finding the borderline cases, working out what's stopping them from buying, and then fixing that blocker.  It's not about scoring the most wins; it's about getting accurate data and putting that data into context.


Rest assured that if Panama had put half a dozen goals past England, it would widely and immediately be regarded as a freak result (that's called bias, and that's a whole other problem).

Tuesday, 19 June 2018

When Should You Switch A Test Off? (Tunisia 1 - England 2)

Another day yields another interesting and data-rich football game from the World Cup.  In this post, I'd like to look at answering the question, "When should I switch a test off?" and use the Tunisia vs England match as the basis for the discussion.


Now, I'll admit I didn't see the whole match (but I caught a lot of it on the radio and by following online updates), but even without watching it, it's possible to get a picture of the game from looking at the data, which is very intriguing.  Let's kick off with the usual stats:



The result after 90 minutes was 1-1, but it's clear from the data that this was a very one-sided draw, with England having most of the possession, shots and corners.  It also appears that England squandered their chances - the Tunisian goalkeeper made no saves, but England could only get 44% of their 18 shots on target (which rather raises the question of what happened to the others - the answer is that they were blocked by defenders).  There were three minutes of stoppage time, and that's when England got their second goal.

[This example also shows the unsuitability of the horizontal bar graph as a way of representing sports data - you can't compare shot accuracy (44% vs 20% doesn't add up to 100%) and when one team has zero (bookings or saves) the bar disappears completely.  I'll fix that next time.]

So, if the game had been stopped at 90 minutes as a 1-1 draw, it's fair to say that the data indicates that England were the better team on the night and unlucky not to win.  They had more possession and did more with it.

Comparison to A/B testing

If this were a test result and your overall KPI was flat (i.e. no winner, as in the football game), then you could look at a range of supporting metrics and determine if one of the test recipes was actually better, or if it was flat.  If you were able to do this while the test was still running, you could also take a decision on whether or not to continue with the test.

For example, if you're testing a landing page, and you determine that overall order conversion and revenue metrics are flat - no improvement for the test recipe - then you could start to look at other metrics to determine if the test recipe really has identical performance to the control recipe.  These could include bounce rate; exit rate; click-through rate; add-to-cart performance and so on.  These kind of metrics give us an indication of what would happen if we kept the test running, by answering the question: "Given time, are there any data points that would eventually trickle through to actual improvements in financial metrics?"
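One way to put a number on a supporting metric like click-through rate is a simple two-proportion z-test.  This is only a sketch using Python's standard library, and the click counts are invented for illustration:

```python
from math import sqrt, erf

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z statistic for the difference between two proportions (pooled)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

def p_value_two_sided(z: float) -> float:
    """Two-sided p-value from the standard normal CDF."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Orders are flat, but clicks on the new feature look better: is it real?
z = two_proportion_z(x1=500, n1=10_000, x2=585, n2=10_000)
print(f"z = {z:.2f}, p = {p_value_two_sided(z):.3f}")
```

A significant lift in a micro-metric like this doesn't guarantee that the financial metrics will follow, but it's a reasonable basis for deciding whether the test deserves more time.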

Let's look again at the soccer match for some comparable and relevant data points:

*  Tunisia are win-less in their last 12 World Cup matches (D4 L8).  Historic data indicates that they were unlikely to win this match.

*  England had six shots on target in the first half, their most in the opening 45 minutes of a World Cup match since the 1966 semi-final against Portugal.  In this "test", England were trending positively in micro-metrics (shots on target) from the start.

*  Tunisia scored with their only shot on target in this match, their 35th-minute penalty.  Tunisia were not going to score any more goals in this game.

*  England's Kieran Trippier created six goalscoring opportunities tonight, more than any other player has managed so far in the 2018 World Cup.  "Creating goalscoring opportunities" is typically called "assists" and isn't usually measured in soccer, but it shows a very positive result for England again.

As an interesting comparison - would the Germany versus Mexico game have been different if the referee had allowed extra time?  Recall that Mexico won 1-0 in a very surprising result, and the data shows a much less one-sided game.  While Mexico were outgunned by Germany, they put up a much better set of stats than Tunisia (compare Mexico's 13 shots with Tunisia's single shot - which was their penalty).  So Mexico's result, while surprising, shows that they played an attacking game and deserved at least a draw, while Tunisia were overwhelmed by England (who, like Germany, should have done even better given their number of shots).

It's true that Germany were dominating the game, but weren't able to get a decent proportion of shots on target (just 33%, compared to 40% for England) and weren't able to fully shut out Mexico and score.  Additionally, the Mexico goalkeeper was having a good game and according to the data was almost unbeatable - this wasn't going to change with a few extra minutes.


Upcoming games which could be very data-rich:  Russia vs Egypt; Portugal vs Morocco.



Monday, 18 June 2018

The Importance of Being Earnest with Your KPIs


It's World Cup time once again, and a prime opportunity to revisit the importance of having the right KPIs to measure your performance (football team, website, marketing campaign, or whatever it may be).  Take a look at these facts and apparent KPIs, taken from a recent World Cup soccer match, and notice how it's possible to completely miss what your data is actually telling you.

* One goalkeeper made nine saves during the match, which is three more than any other goalkeeper in the World Cup so far.

* One team had 26 shots in the game – without scoring – which is the most so far in this World Cup, and equals Portugal's total in their game against England in 2006.  The other team had just 13 shots, and only four on target.

* One team had just 33% possession: they had the ball for only 30 minutes out of the 90-minute game.

* One team had eight corners; the other managed just one.

A graph may help convey some additional data, and give you a clue as to the game (and the result).



If you look closely, you’ll note that the team in green had four shots on target, while the other team's goalkeeper only managed three saves.

Hence the most important result in the game – the number of goals scored – gets buried (if you’re not careful) and you have to carry out additional analysis to identify that Mexico won 1-0, scoring in the first half and then holding onto their lead with only 33% possession.
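The arithmetic that surfaces the buried result can be made explicit: a shot on target is either saved or scored, so goals drop out of the stats above.  Here's a minimal sketch using the figures from this post (Germany's nine-on-target figure is inferred from the opposing keeper's nine saves; own goals and similar rarities are ignored):

```python
# Match stats from the bullet points above (figures taken from the post;
# the nine-on-target value for Germany is inferred from Mexico's keeper's saves).
# A shot on target is either saved by the opposing keeper or scored, so:
#   goals ~= shots on target - opposing keeper's saves
match = {
    "Germany": {"on_target": 9, "keeper_saves": 3},
    "Mexico":  {"on_target": 4, "keeper_saves": 9},
}

def goals(team, opponent):
    return match[team]["on_target"] - match[opponent]["keeper_saves"]

print("Germany", goals("Germany", "Mexico"))  # 9 on target, all 9 saved -> 0
print("Mexico", goals("Mexico", "Germany"))   # 4 on target, 3 saved -> 1
```

None of the headline KPIs mention goals at all - you only recover the 1-0 scoreline by combining two of them.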



Monday, 11 June 2018

Spoiler-Free Review of Jurassic World: The Fallen Kingdom

Jurassic World: The Fallen Kingdom is the latest addition to the Jurassic Park/Jurassic World franchise, and strikes an uneasy balance between retreading old themes and covering new material.  There are the dinosaurs; there are the heroes and the villains; there's even a child cowering and quaking while a dinosaur approaches.  It's all there - if you've seen and enjoyed the previous films, you'll enjoy this one too.



Universal Pictures
The story moves at a very good pace - yes, there are the slower, plot-development scenes where the villains outline their master plan, and the heroes trade jokes and contemplate the future of dinosaur-kind.  I won't share too much of the plot, but Owen and Claire are persuaded to return to Isla Nublar when it's discovered that the island's volcano is active and all the dinosaurs are going to be killed.  The return to the island is filmed particularly well, as we see a Jurassic World that has fallen into disrepair, death and decay, in stark contrast to the lavish bright colours of the previous film.  The aftermath of the Indominus's rampage is visible everywhere (including in some very neat detail shots).

The visual effects of dinosaurs plus volcano are extremely well executed, and there is the usual quota of running, shouting, chasing, and hiding, all delivered at breakneck speed. In fact, it's so fast that you may miss one or two of the plot developments, but fear not, there's plenty of chance to catch up.  The entire second half of the film takes place off the island - so this is unlike most of the previous films.  Yes, there are comparisons with The Lost World, but this film has a lot more about it than that.


Is the film scary?  Yes.  There are plenty of suspenseful moments... teeth and claws appearing slowly out of the murky darkness; rustling trees getting closer - all that stuff.  This is scarier than the high-speed dinosaur-versus-human or dinosaur-versus-dinosaur action - and there's plenty of that too.  There are two extended scenes in the second half where one particularly nasty dinosaur stalks its human prey, but apart from that there's not much we haven't seen before.

Is it gory?  No.  Despite a body count that puts it on a par with the other films, there isn't much visible blood - one character has his arm bitten off, and the amount of blood is almost too small to be plausible.  There's at least one death on camera, but it's out-of-focus and in the background.  I took two children - aged seven and nine - with me, and the nine-year-old was upset by some of the tragic scenes, but neither of them were particularly scared.


All-in-all, I liked this film: it is exactly what you would expect, with some interesting twists.  I know it's had mixed reviews, but it does a good job of staying true to its roots while expanding the wider storyline in a number of unexpected ways.  The speed at which the film moves through the plot, with some serious and irreversible actions, means that this is - in my view - more than just another sequel and is not as derivative as some make it seem.

Monday, 14 May 2018

Online Optimisation: Testing Sequences

As your online optimisation program grows and develops, it's likely that you'll progress from changing copy, images or colours, and start testing moving content around on the page: changing the order of the products you show; moving content from the bottom of the page to the top; testing to see if you achieve greater engagement (more clicks; lower bounce rate; lower exit rate) and make more money (conversion; revenue per visitor).  A logical next step up from 'moving things around' is to test the sequence of elements in a list or on a page.  After all, there's no new content and no real design change, but there's a lot of potential in changing the sequence of the existing content on the page.
Sequencing tests can look very simple, but there are a number of complexities to think about - and mathematically, the numbers get very large very quickly.  


As an example, here's Ford UK's cars category page, www.ford.co.uk/cars.










[The page scrolls down; I've split it into two halves and shown them side-by-side].


Testing sequences can quickly become a very mathematical process: if you have just three items in a list, then the number of possible sequences is six; if you have four items, then there are 24 different sequences (the number of permutations, n!).  Clearly, some of these will make no sense (either logically or financially), so you can cut out some of the options, but that still leaves you with a large number of potential sequences.  In Ford's example here, with 20 items in the list, there are 20! = 2,432,902,008,176,640,000 different orderings.
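The counting is just factorials - the number of orderings of n distinct items is n!.  A quick check in Python:

```python
import math

# The number of possible orderings (permutations) of n distinct items is n!
for n in (3, 4, 20):
    print(f"{n} items -> {math.factorial(n):,} sequences")
```

Running this confirms the figures above: 6 sequences for three items, 24 for four, and 2,432,902,008,176,640,000 for Ford's twenty.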

Looking at Ford, there appears to be some form of default sorting, roughly price low-to-high (and loosely by size), with a few miscellaneous models tagged onto the end (the Ford GT, for example).  At first glance, there's very little difference between many of the cars - they look very, very similar (there's no sense of scale, or of the specific features of each model).

Since there are two quintillion ways of sequencing this list, we need to look at some 'normal' approaches.  There are, of course, a number of typical sort orders that customers are likely to expect: alphabetical; by price or perceived value (i.e. start with the lower-priced products and move up to the luxury models); and by popularity (most clicks or sales).  Naturally, if your products have another obvious sorting option (such as size, width or length) then this could also be worth testing.
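In code, each of these 'normal' approaches is just a different sort key - which is exactly why it pays to pick a handful of candidate sequences rather than exploring all twenty-factorial orderings.  A sketch with hypothetical product data (the names, prices and sales figures here are invented for illustration):

```python
# Hypothetical product records - names, prices and sales are invented.
products = [
    {"name": "City Hatch", "price": 15000, "sales": 940},
    {"name": "Family Saloon", "price": 24000, "sales": 610},
    {"name": "Luxury SUV", "price": 48000, "sales": 230},
]

by_name = sorted(products, key=lambda p: p["name"])                       # alphabetical
by_price = sorted(products, key=lambda p: p["price"])                     # value, low to high
by_popularity = sorted(products, key=lambda p: p["sales"], reverse=True)  # best sellers first

print([p["name"] for p in by_popularity])
```

Each candidate sequence for the test is then just one sorted list, rather than one of quintillions of arbitrary orderings.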

What are the answers?  As always: plan your test concept in advance.  Are you going to use 'standard' sorting options, such as size or price, or are you going to sort by other metrics (such as click-through rate, revenue or page popularity)?  Which KPIs are you going to measure - clicks, or revenue?  This may lead to non-standard sequences, where there's no apparent logic to the list you produce.  However, once you've decided, the number of sequences falls from quintillions to a handful, and you can start to choose the main sequences you're going to test.


For Ford, price low-to-high (or size large-to-small), popularity (sales), or grouping by model type (hatchback, saloon, off-road/SUV, sports) may all work - and that leads on to sub-categorisation and taxonomy, which I'll probably cover in an upcoming blog.






 

Wednesday, 11 April 2018

Chess and Machine Learning?

Machine learning is a new, exciting and growing area of computer science that looks at whether and how computers can learn without being explicitly taught.  Within the last few months, machine learning programs have mastered games such as Go and Chess, becoming very capable players: Google's AlphaZero beat the well-known Chess engine Stockfish after just 24 hours of learning how to play, and just over a year ago AlphaGo beat the world's strongest human player, Ke Jie, at Go.

AlphaZero is different from all previous Chess engines in that it learns by playing.  Having been programmed with only the rules of Chess (the aims of the game; how the pieces move), it trained by playing games against itself, learning as it went.  The AlphaZero team have published a paper on their research, and it makes for interesting reading.

From a Chess perspective, the data is very interesting, as it shows how AlphaZero discovered some key well-known openings (the English; the Sicilian; the Ruy Lopez), how it used them in games, and how it then discarded some as it found 'better' alternatives.  Table 2 on page 6 shows how the frequency of each opening varied with training time.  There are some interesting highlights in the data:

The English Opening (1. c4 e5 2. g3 d5 3. cxd5 Nf6 4. Bg2 Nxd5 5. Nf3) was a clear favourite with Alpha Zero from very early on, and grew in popularity.


The Queen's Gambit (1. d4 d5 2. c4 c6 3. Nc3 Nf6 4. Nf3 a6 5. g3 dxc4 6. a4) also became a preferred opening.


Interestingly, the Sicilian Defence (1. e4 c5) was not favoured; instead, the preferred reply to 1. e4 was 1... e5, heading for the Ruy Lopez.

It's worth remembering that AlphaZero deduced these well-known and long-played openings and variations by itself in 24 hours - compared to the decades (and centuries) of human play that have gone into developing them.

Apart from the purely academic exercises of building machines that can learn to play games, there are the financially lucrative applications of machine learning: product recommendations.  Amazon and Netflix make extensive use of recommenders, where machines make forecasts about a user, based on users who showed similar behaviour ("people who liked what you like also like this...").  Splitting out and segmenting all users to find users with similar properties is a key part of the machine learning process for this application.
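The "people who liked what you like" step usually comes down to a similarity measure between users' behaviour vectors.  Here's a toy sketch using cosine similarity (the user names and ratings are invented; real recommenders work at vastly larger scale and with more sophisticated models):

```python
import math

# Invented ratings: user -> rating per item (0 = not rated).
ratings = {
    "alice": [5, 4, 0, 1],
    "bob":   [4, 5, 1, 0],
    "carol": [1, 0, 5, 4],
}

def cosine(u, v):
    """Cosine similarity between two rating vectors (1.0 = identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Find the user most similar to alice - their likes become candidate recommendations.
others = {name: vec for name, vec in ratings.items() if name != "alice"}
best = max(others, key=lambda name: cosine(ratings["alice"], others[name]))
print(best)  # bob's tastes line up with alice's
```

Segmenting users is then a matter of grouping together the vectors that score highly against each other.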

In conclusion:  
"It's an exciting time for Machine Learning.  There is ample work to be done at all levels: from the theory end to the framework end, much can be improved.  It's almost as exciting as the creation of the internet."  Ryan Dahl, inventor of Node.js

Monday, 5 March 2018

Why are manhole lids circular?

I remember reading this question - and its answer - in a maths puzzle book in my mid-teens.  The solution is very simple, and very easy to investigate further.  The short answer: manhole lids are circular so that they can't fall down the hole (risking losing the lid, and injuring a worker who is in the hole).  Technically, a circular lid has the same width in every direction, whichever angle you measure it at, so it can never fit through any opening of its own hole.

The same cannot be said of lids based on regular polygons - some quick examples:
Squares: the sides are shorter than the diagonals, so dropping the lid in diagonally will let it fall down the hole.
Pentagons: the gap between the narrowest width and the widest span is smaller, but it's still possible to drop the lid down the hole.
Equilateral triangles sit in between: you do sometimes see triangular manhole lids, but it's the hinge along one side (rather than the shape itself) that stops them disappearing down the hole.
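You can quantify how 'droppable' each regular polygon is by comparing its narrowest width with its widest span - if the ratio is below 1, some orientation of the lid fits through some chord of the hole.  A quick sketch using the standard circumradius formulae for regular polygons:

```python
import math

def width_ratio(n, R=1.0):
    """(narrowest width) / (widest span) for a regular n-gon of circumradius R.
    A circle scores exactly 1 - it can never fall through its own hole."""
    if n % 2 == 0:
        narrow = 2 * R * math.cos(math.pi / n)       # distance between opposite flats
        wide = 2 * R                                  # distance between opposite vertices
    else:
        narrow = R * (1 + math.cos(math.pi / n))      # vertex to opposite flat
        wide = 2 * R * math.cos(math.pi / (2 * n))    # longest diagonal
    return narrow / wide

for n in (3, 4, 5):
    print(n, round(width_ratio(n), 3))  # triangle 0.866, square 0.707, pentagon 0.951
```

All three ratios fall short of 1, so all three lids can be slipped through their own holes; only a shape of constant width scores exactly 1.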

The same principle applies to coins.  In order to function correctly, a vending machine has to identify and distinguish different coins by their diameter, irrespective of how they fall through the slot.  Non-circular coins (such as the British 50p) are based on Reuleaux polygons, like the Reuleaux triangle, where the shape has a constant width - the key requirement for coins, and for manhole covers!

Wednesday, 14 February 2018

Film Review: Star Wars The Last Jedi

I loved it.

My first impression from the first few minutes was that this was a retread of The Empire Strikes Back.  The First Order have tracked down the Resistance base on a remote planet, and the Resistance are trying to evacuate before the First Order land troops and... oh, wait a minute, there is no shield, no cannon, and the base is going to be obliterated from space.  Things seem to go well for the Resistance, as they stall long enough to get almost everybody safely aboard their cruiser and away - but not before Poe Dameron (X-Wing ace turned hot-headed, insubordinate comedian pilot) decides to sacrifice the entire bomber fleet just to destroy a Dreadnought.  Let's hear it for Pyrrhic victories!

Worse still, the First Order have developed a way to track the Resistance through hyperspace: running away is not a way to escape, and hyperspace fuel is in limited supply.

At the end of the previous film, Rey had successfully tracked down Luke Skywalker, and much of this film covers her efforts to persuade him to join the Resistance.  So we have space battles interspersed with the story of a Jedi master and a young Jedi trainee on a remote, green, damp planet.  Like I said, I kept recalling The Empire Strikes Back throughout this film.  I haven't looked online to see if anybody has listed all the parallels between The Last Jedi and The Empire Strikes Back, but I spotted a few (and I'm only a casual movie-goer).  Luke Skywalker has traded his youthful naivety and enthusiasm for jaded cynicism; the way he casually lobs his lightsabre over his shoulder is funny and tragic at the same time.

My only niggle with the film is the amount of time spent on the story with Rey and Luke.  The other storylines were far more exciting and just downright interesting; Luke and Rey - less so.  Luke goes for a walk.  Luke catches a fish.  Luke wanders around his island.  Yawn.

The plot makes a lot of sense, and there's a direct causal link between the Admiral's tight-lipped, need-to-know authoritarian attitude, Poe Dameron's "we have a right to know what's going on" defiance, and the subsequent demise of the Resistance fleet.  If she'd told Poe what her plan was, he wouldn't have sent Finn off to find the code breaker, who wouldn't have subsequently told the First Order about the Resistance's plans and their cloaking frequency (or whatever it was).  If they'd all stayed home, sat tight and waited it out, they might all have survived.  I'm not blaming either of them, but the two characters seem determined to out-stubborn each other - each aiming to be the one who wins, until neither of them does.

Some of my favourite aspects of the film are the ways the script addresses some of the criticisms that were levelled at the first of the new films (The Force Awakens).

"Finn should have had that fight with Captain Phasma, not with some random stormtrooper with a cool elbow-mounted weapon."  Cue a large-scale, violent, hand-to-hand fight between Finn and Phasma.


"Snoke is too much like the Emperor and there's no real explanation for him."  Kill him off - now who saw that coming?

"More Poe Dameron!" - definitely fixed in this episode.  He kicks off the action at the start; we see more of his character throughout this film (borderline arrogant, but still funny) and he commits mutiny.  This is not a replacement for Han Solo; this is a whole new character who has his own ideas, opinions and history.


"Do something different!"  - I saw most of the parallels between The Force Awakens and A New Hope.  In fact, it felt like a rehash of the story with new faces. As I mentioned earlier, The Last Jedi has elements of The Empire Strikes Back in it, but those elements have been rearranged to produce a fresh story (and no, I didn't for one second think "It's salt!", I knew full well it was meant to be snow).

All-in-all, I'm excited for the next instalment; I'm looking forward to the Han Solo movie and I feel even more optimistic about the future of the Star Wars saga.

Monday, 12 February 2018

Mathematically Explaining Confidence and Levels of Significance

Level of Significance: a more mathematical discussion

In mathematical terms, and according to "A Dictionary of Statistical Terms, by F. H. C. Marriott, published for the International Statistical Institute by Longman Scientific and Technical":


"Many statistical tests of hypotheses depend on the use of the probability distributions of a statistic t chosen for the purpose of the particular test. When the hypothesis is true this distribution has a known form, at least approximately, and the probabilities Pr(t ≥ t1) and Pr(t ≤ t0) are called levels of significance and are usually expressed as percentages, e.g. 5 per cent.  The actual values are, of course, arbitrary, but popular values are 5, 1 and 0.1 per cent."






In English: we assume that the probability of a particular event happening (e.g. a particular recipe persuading a customer to convert and complete a purchase) can be modelled using the Normal Distribution.  We assume that the average conversion rate (e.g. 15%) represents the recipe's typical conversion rate, and the chances of the recipe driving a higher conversion rate can be calculated using some complex but manageable maths.  

More data - more traffic and more orders - gives us the ability to state our average conversion rate with greater precision.  As we obtain more data, our overall data set is less prone to skewing (being distorted by one or two anomalous data points).  The 'spread' of our curve - the degree of variability - decreases; in mathematical terms, the standard deviation of our estimate decreases.  The standard deviation is a measure of how spread out our data is, taking into account how many data points we have and how much they vary from the average.  More data generally means a lower standard deviation (and that's why we like more traffic when chasing confidence).
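The "more traffic, less spread" point can be made concrete: if conversion is modelled as a binomial proportion, the standard deviation of the measured rate (the standard error) is sqrt(p(1-p)/n).  A quick sketch using the 15% rate from the example above (the visitor counts are invented for illustration):

```python
import math

def std_error(p, n):
    """Standard error of a measured conversion rate p over n visitors
    (binomial proportion, normal approximation)."""
    return math.sqrt(p * (1 - p) / n)

# The spread shrinks with the square root of the traffic:
for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} visitors -> +/-{std_error(0.15, n):.4f}")
```

Note the square-root relationship: a hundred times the traffic gives only ten times the precision, which is why confidence builds more slowly than you'd hope.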


When we run a test between two recipes, we are comparing their average conversion rate (and other metrics), and how likely it is that one conversion rate is actually better than the other.  In order to achieve this, we want to look at where the two conversion rates compare on their normal distribution curves.


In the diagram above, the conversion rate for Recipe B (green) is over one standard deviation from the mean - closer to two standard deviations.  We can use spreadsheets or data tables (remember those?) to translate the number of standard deviations into a probability: how likely is it that the conversion rate for Recipe B is consistently higher than Recipe A's?  This gives us a confidence level.  It depends on the difference between the two conversion rates (Y% compared to X%) and how many standard deviations apart they are (i.e. how much spread there is in the two data sets, which depends on how many orders and visitors we've received).

Most optimisation tools will carry out the calculation on the number of orders and visitors, and the comparison between the two recipes, as part of their built-in capabilities (it's possible to do it with a spreadsheet, but it's a bit laborious).

The fundamentals are:

- we model the performance (conversion rate) of each recipe using the normal distribution (this tells us how likely it is that the actual performance of the recipe varies around the reported average);
- we calculate the distance between the conversion rates of the two recipes, measured in standard deviations;
- we translate that number of standard deviations into a percentage probability, which is the confidence level that one recipe is genuinely outperforming the other.
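Those three steps can be sketched end-to-end as a two-proportion z-test.  The figures below are invented, and real testing tools add refinements (pooled variance, two-sided tests, multiple-comparison corrections), but the shape of the calculation is this:

```python
import math

def confidence_b_beats_a(orders_a, visitors_a, orders_b, visitors_b):
    """One-sided confidence that recipe B's true conversion rate exceeds A's,
    using the normal approximation to the two-proportion z-test."""
    p_a = orders_a / visitors_a
    p_b = orders_b / visitors_b
    # Step 1-2: combined standard error, then the gap in standard deviations
    se = math.sqrt(p_a * (1 - p_a) / visitors_a + p_b * (1 - p_b) / visitors_b)
    z = (p_b - p_a) / se
    # Step 3: translate standard deviations into a probability (normal CDF)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Invented example: 15.0% vs 18.0% conversion over 1,000 visitors each
print(f"{confidence_b_beats_a(150, 1000, 180, 1000):.1%}")
```

With identical conversion rates the function returns exactly 50% - no evidence either way - and the confidence climbs towards 100% as the gap widens or the traffic grows.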

Revisiting our original definition:

"Many statistical tests of hypotheses depend on the use of the probability distributions of a statistic t chosen for the purpose of the particular test"

...and we typically use the Normal Distribution:

"When the hypothesis is true this distribution has a known form, at least approximately, and the probabilities Pr(t ≥ t1) and Pr(t ≤ t0) are called levels of significance and are usually expressed as percentages, e.g. 5 per cent."

In our example, t1 and t0 are the threshold values used to judge whether the test recipe outperforms the control recipe.  The confidence equates to the proportion of the total curve which is shaded:




You can see here that almost 95% of the area under the Recipe A curve has been shaded; only the small region between t1 and t0 (approximately 5%) is not.  Hence we can say, with roughly 95% confidence, that Recipe B is better than Recipe A.

"Thus, for example, the expression 't falls above the 5 per cent level of significance' means that the observed value of t is greater than t1, where the probability of all values greater than t1 is 0.05; t1 is called the upper 5 per cent significance point, and similarly for the lower significance point t0."

As I said, most of the heavy mathematical lifting can be done either by the testing tool or by a spreadsheet, but I hope this article has helped to clarify what confidence means mathematically, and (importantly) how it depends on the sample size (since more data improves the accuracy of the overall estimate and reduces the standard deviation, which, in turn, enables us to quote smaller differences with higher confidence).