
Thursday, 25 September 2025

Port Vale 0 Arsenal 2 Match Report

Port Vale vs Arsenal, 24 September 2025

There are only two teams in the football league which aren't named after the places where they're located, and yesterday they met for only the second time in 30 years.  Port Vale (Stoke-on-Trent) and Arsenal (London) are currently 61 league places apart, and to be fair it showed: Arsenal just weren't as good as they should have been on paper.  The wage bill for one Arsenal player exceeds the wages of the entire Port Vale squad; after last night's performance, I wonder if Arteta is getting value for money.

It was a disappointing performance from the London side, who seemed to have trouble doing anything with their overwhelming levels of possession.  After grabbing an early goal, they struggled to do anything positive or meaningful with the ball, despite the professional encouragement from their fans, and the consistent support of the officials.  The Arsenal goalkeeper, Kepa Arrizabalaga, passed a pre-game fitness check, which was more than could be said for the linesman monitoring the Port Vale defensive line.  The linesman continued to show signs of an ongoing shoulder injury, which prevented him from lifting his flag anywhere above horizontal for the entire first half, while the Arsenal forwards frequently found themselves receiving the ball with nobody but the goalkeeper to beat.  You think I'm kidding?  There were zero Arsenal offsides in the whole game.

The linesman, whose shoulder shows signs of improving

The referee too seemed in awe of the Premiership team’s visit to Vale Park, admiring their ‘strength on the ball’ and ‘technique’ which frequently left the Vale players getting a close up of the turf as they were ‘tackled’ off the ball.  Strangely, this was not a symmetrical arrangement; whenever an Arsenal player was dispossessed, this was seen as a sign of rough treatment and was typically identified as an illegal challenge. 

The referee reminds the Port Vale strikers to be kind to the Arsenal goalkeeper

He's not offside, ok?

One thing I must confess, though, is that Arsenal didn't employ cynical and overly defensive strategies after obtaining their opening goal.  They kept moving the ball with good technique – they didn't do anything particularly productive with it (they achieved over four times the number of passes that Vale did), and most of their second-half corners were passed all the way back to the halfway line – and wore Vale down with efficiency and energy.  They saved their timewasting for more subtle tactics, and one of the most egregious was the substitutions.  I'm no expert at football, but watching one of the Arsenal players (an England player, too) dawdle his way off the pitch when he was substituted was more gamesmanship than sportsmanship.  Maybe he wasn't looking forward to the long drive home?  Maybe he just wanted to stay and play a bit more?


It's a long walk to the touchline.  So long, that even I could get my camera out, focus it, and take a photo.

He wasn’t the only one in no hurry to leave the field of play, as other substitutions took longer than was probably fair. There were delays taking throw-ins, there were in-team discussions about whose turn it was to take this particular corner, and maybe you’d like to try it this time?
Here's an Arsenal corner.  Coming soon.

Port Vale, for their part, showed considerable effort but also looked to be in awe of their visitors.  During the second half, the Vale front line got a hold of the ball a couple of times – at one point in a very promising position facing goal on the edge of the penalty area, only to flounder at the last minute.  In fact, the data shows that Vale didn’t even achieve a single shot on target.  It was going to be one of those nights.

While being a significant disappointment for the Arsenal, who only scraped an early goal and a late one, the game was entertaining for sure.  One of the funniest parts of the match came in the second half, when, after a Vale substitution, Arsenal were expected to restart play with a throw-in.  There was a breakdown in communication between the referee and the Arsenal player: the referee pointed persistently to where the throw-in should be taken (approximately 5 metres ahead of the halfway line, on the Arsenal left), while the Arsenal player was standing around 10-15 metres further forward of that point.  There then followed a confused discussion between the Arsenal player, trying to take the throw in, and the referee, vehemently pointing 10 metres further back.  This happened in front of where we were sitting, and with the crowd around us (did I mention this was virtually a sell-out?) we did our best to point out the miscommunication.

Taking a throw-in.  It's supposed to be from where the ball went out.  Who knew?

Miscommunication is frequent in football matches, and this one was duly noted among the fans.  Arsenal brought almost 3,000 fans to Vale Park, and they stood, sang and shouted with a high degree of organization and professionalism.  There was genuinely no unpleasantness between the two sets of fans, none of the jeering or rude gesturing I have observed at other grounds, and everybody got on with shouting for their team.  At least I think that's what we were doing – in some cases I struggled to turn the chanted syllables into phrases, or even specific words, and on a couple of occasions I managed it, then regretted it.  Football fans can certainly employ some colourful metaphors.

Speaking of organization and professionalism: the Arsenal players certainly showed this, at a completely different level to the Vale players.  At one point in the first half, Vale gained possession (legally and everything), and in order to hold possession, passed it back from the midfield to the defenders, where it was carefully passed along the line.  But not for very long: with alarming efficiency, the Arsenal players deployed a 10-man press, with the defenders moving up to the halfway line and the forward players squeezing possession.  Vale almost crumbled in the face of this threat, and did well to keep the ball away from their goal:  Arsenal in possession were interesting; Arsenal chasing possession were terrifying.



The size of Arsenal’s squad was clear to see, with players at the match wearing numbers like 41, 49 and 56.  This was probably the Arsenal B-team.  I hope so, for Arteta's sake.  On the other hand, the Port Vale shirts didn't even show a sponsor.

Football is an 11-a-side sport, with shirts numbered 1-56.

The stats tell the story fairly well: Arsenal dominated all the main numbers, and have to be disappointed with the output from their efforts.  A lucky early goal and a late one made it for them.  The game was billed as a David vs Goliath clash, except David forgot his slingshot and Goliath turned up wearing Crocs.  Arsenal, sitting proudly at 2nd in the Premier League, took on Port Vale, languishing at 19th in League One, a full 61 league places below.  The result?  A narrow and nervy 2–0 win for the North London giants.  They say that you can only beat the team in front of you, and that's all that Arsenal managed, when a much more impressive scoreline was expected.  Sad times for all.

Possession
Arsenal 81%
Port Vale 19%

Passes
Arsenal 789  (731 completed, 93%)
Port Vale 183 (115 completed, 63%)

Shots
Arsenal 11  (7 inside box, 4 outside)
Port Vale 3 (2 inside box, 1 outside)

Shots on Target
Arsenal 4
Port Vale 0

Corners
Arsenal 6
Port Vale 1

Offsides
Arsenal 0
Port Vale 2

Wednesday, 3 September 2025

Did You Just Code a Distraction?

In the fast-paced world of web development and digital marketing, creating new features to enhance user experience is pretty much par for the course.  Everybody has ideas about making the website better, and it usually involves some sort of magical feature that will help users find the exact product they want in a matter of seconds - a shopping genie, or something involving AI.  The new feature for your website is almost always visually appealing, interactive, and the designers are confident it will boost user engagement with their favourite persona. Before rolling it out, you wisely decide to run an A/B test to measure its effectiveness.

So you code the test, and you run it for the usual length of time.  You follow all the advice on LinkedIn about statistical significance (we can all describe it and we all have our own ways of calculating it, thank you) and getting a decent sample size. The test results are in, and they're a mixed bag. On one hand, the new feature is a hit in terms of engagement. It receives twice as many clicks as the other clickable elements on the page, such as those lovely banners, the promotional links and the pretty pictures. However, your deeper dive into the data reveals a concerning trend. While the new feature attracts a lot of attention and engagement, the conversion rate for users who interact with it is only around 2.5%. In contrast, the conversion rate for users who engage with the existing content on the page is significantly higher, at around 4.1%.
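Since we all have our own ways of calculating statistical significance, here's a minimal sketch of one common approach, a two-proportion z-test, in Python.  The visitor counts are invented for illustration; substitute your own:

```python
from math import sqrt, erf

def two_proportion_ztest(conversions_a: int, n_a: int,
                         conversions_b: int, n_b: int):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)  # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

# Invented counts: 2.5% of 4,000 feature users vs 4.1% of 12,000 other users
z, p = two_proportion_ztest(100, 4000, 492, 12000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a gap this size on these samples is significant
```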

This is key.  It really is insufficient to look only at engagement data (click through rate) as a success metric.  Yes, it is important, but it is not enough.  After all, if you want to create a banner with a high click rate, then you could simply write "Buy one get one free", or better still, "Buy one get two free - click here."  It's essential that you set expectations with your banners, calls to action and features - what's to stop you from writing "Click here for free money!"?  If your priority in testing is to generate clicks, then you'll degenerate into coding the on-site versions of clickbait, and that's a terrible waste of a potential lead.

So, what went wrong with your test? The short answer is that you coded a pretty distraction. Here’s a breakdown of why this happens and how to address it:

Misalignment with User Intent

The new feature, despite being engaging, may not align with the primary intent of your users. If it diverts their attention away from the main conversion paths, it can reduce overall effectiveness. Users might be intrigued by the new feature but not find it relevant to their immediate needs.  You misunderstood your persona's motivation; it might be time to write your persona with this additional information.

Cognitive Load

Introducing a new element can increase the cognitive load on users. They have to process and understand this new feature, which can be mentally taxing. If the feature doesn’t provide immediate value or clarity, users might get distracted and abandon their original task.  They used up their time, effort and patience while interacting with your new feature, and gave up on their primary purpose (which was to buy something from you).

Disruption of User Flow

A well-designed website guides users smoothly towards conversion goals. A new feature that stands out too much can disrupt this flow, causing users to deviate from their intended path. This disruption can lead to lower conversion rates, as users get sidetracked.  How do I get back to my intended path?  This new feature has proved to be the next big shiny thing, and while it's attracting user engagement, it's confusing users and preventing them from getting to where they wanted to go.

The Solutions

To avoid coding distractions, consider the following strategies:

User-Centric Design

Not my favourite phrase, since it leads to design without A/B testing and designers designing for their favourite personas.  Ensure that any new feature is designed with the user's needs and goals in mind. Conduct user research to understand what your audience values and how they navigate your site, and then align your new features and your development roadmap with these insights.  This will enhance, rather than disrupt, the user experience, and reduce the amount of time wasted on developing the next shiny bauble - it looks nice and impresses senior management, but is not good for users.

Incremental Testing

Instead of launching a fully-fledged feature, start with a minimal viable version and test its impact incrementally. This approach allows you to gather feedback and make necessary adjustments before a full rollout.  Use test data in conjunction with user research to gain a full picture of what you thought was going to happen, and what actually happened.
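One hedged sketch of what 'incremental' can look like in code: deterministic traffic bucketing, so that a given user always sees the same variant while you gradually raise the exposure.  The feature name and the 5% starting ramp are my own illustrative assumptions:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically assign a user to a feature ramp-up bucket.

    Hashing user_id + feature gives a stable pseudo-random value in [0, 100),
    so the same user keeps the same experience as the ramp percentage grows.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100.0  # 0.00 .. 99.99
    return bucket < percent

# Start the new feature at 5% exposure, then raise the number as data comes in
for uid in ("alice", "bob", "carol"):
    print(uid, in_rollout(uid, "shopping-genie", 5.0))
```

The useful property is that raising the percentage only adds users to the rollout; it never reshuffles the ones who have already seen the feature.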

Clear Value Proposition

Make sure the new feature has a clear and compelling value proposition. Users should immediately understand its purpose and how it benefits them. This clarity can help integrate the feature seamlessly into the user journey.  If the test 'fails', then you'll learn that the value proposition you promoted was not what users wanted to read, and you can try something different.

Monitor and Iterate

Continuously monitor the performance of new features and be ready to iterate based on user feedback and data. If a feature is not performing as expected, don’t hesitate to tweak or even remove it to maintain a smooth user experience.  It's time to swallow your pride and start again.  If you change direction now, you'll have less distance to travel than if you wait six months before unimplementing the eye-catching blunder you've launched.

Conclusion

In the quest to innovate and improve user engagement, it’s crucial to strike a balance between novelty and functionality. While new features can attract attention, they must also support the primary goals of your website. By focusing on user-centric design, incremental testing, and clear value propositions, you can avoid coding distractions and create features that truly enhance the user experience.

Other Web Analytics and Testing Articles I've Written

How not to segment test data (even if your stakeholders ask you to)

Designing Personas for Design Prototypes
Web Analytics - Gathering Requirements from Stakeholders
Analysis is Easy, Interpretation Less So
Telling a Story with Web Analytics Data
Reporting, Analysing, Testing and Forecasting
Pages with Zero Traffic

Wednesday, 27 August 2025

Do You Know How Well Your Test Will Perform?

There are various ways of running tests - or more specifically, there are various ways of generating test hypotheses.  One that I've come across over the years, and again more recently, is the 'guess how well your test is going to perform' approach.  It's not called that, but it seems to me to be the most succinct description.

"If we change the pictures on our site from cats to dogs, then we'll see a 3.5% increase in conversion."
"If we promote construction toys ahead of action figures, then we'll see a 4% lift in revenue."

If you know that's going to happen, why don't you do it anyway?

The main underlying challenge I have is that it's almost impossible to quantify the improvement you're going to get.  How do you know?

Well, let's attempt the calculation (with hypothetical numbers all the way through).

Let's say our latest campaign landing page has a bounce rate (user lands on page, then exits without visiting any other pages) of 75%.  10% engage with site search, 10% click on the menus at the top of the page, and 5% click on the content on the page (there are a few banners and a few links).

We've identified that most users aren't scrolling past the first set of banners and links, and we therefore hypothesise that if we make the banners smaller, and reduce the amount of padding around the links, that we can increase engagement with the content in the lower half of the page, and therefore improve the bounce rate.  We believe we can get 50% more links above the fold, and therefore increase the in-page engagement rate from 5% to 7.5%.  We will assume (and this is the fun bit) that this additional traffic converts at the same rate as the 5% we have so far, and therefore, we'll get a revenue lift of 50%.  This sounds like a lot, but given that the engagement rate is going up from a small number to a slightly larger number, it's unlikely to be a huge revenue lift in dollar terms (unless you're pouring in huge volumes of traffic - and watching it bounce at a rate of 75%).
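Here's that back-of-the-envelope calculation laid out in Python.  Every number is hypothetical, as above, including an assumed conversion rate and average order value:

```python
# Hypothetical landing-page figures from the example above
visitors        = 100_000
engage_rate_old = 0.05   # 5% click on in-page content today
engage_rate_new = 0.075  # 7.5% if we get 50% more links above the fold
conv_rate       = 0.02   # assumed: engaged users convert at the same rate as before
aov             = 40.00  # assumed average order value

def content_revenue(engage_rate: float) -> float:
    """Revenue attributed to users who engage with in-page content."""
    return visitors * engage_rate * conv_rate * aov

old, new = content_revenue(engage_rate_old), content_revenue(engage_rate_new)
print(f"old: {old:,.0f}  new: {new:,.0f}  lift: {(new - old) / old:.0%}")
# A 50% lift -- but it's 2,000 on top of 4,000, not a fortune in absolute terms
```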

Perhaps that was an over-simplification.  But if we knew that our test would give us a 5% lift (and we'd still decided to test it), what happens when we launch the test?  Presumably, we'll stop it when it reaches the 5% lift, irrespective of the confidence level.  But what happens if it doesn't get to 5%?  What if it stubbornly sits at 4%?  Or maybe just 3%?  Did the test win, or did it lose?  In classical scientific terms, it lost, since we disproved our overly-specific hypothesis.  But from a business perspective, it still won, just not by as much as we had originally expected.  Would you go into a meeting with the marketing manager and say, "Sorry, Jim, our test only achieved a 3% revenue lift, so we've decided it was a failure."?

For me, it comes down to two arguments: 

If you can forecast your test result with a high degree of certainty, based on considerable evidence for your hypothesis, it's probably not worth testing and you should just implement it already.  Testing is best used for edge-cases with some degree of uncertainty.

If, on the other hand, you have identified a customer problem with your site, and you can see that fixing it will give you a revenue lift - but you don't know how to fix it - then that's very good grounds for testing.  The hypothesis is not, "If we fix this problem, we'll get a 6% revenue lift," but, "If we fix this problem in this way then we'll get a revenue lift".  And that's where you need to encourage the website analysts and the customer feedback department (or the complaints department, or whoever advocates for customers within your company) to come together and find out where the problems are, and what they are, and how to address them.

That will undoubtedly bring good test ideas, and that's what you're looking for, even if you don't know how much revenue lift it will provide.

Other Web Analytics and Testing Articles I've Written

How not to segment test data (when your stakeholders want you to adapt your data)
Web Analytics - Gathering Requirements from Stakeholders
Analysis is Easy, Interpretation Less So (and why it's more valuable)
Telling a Story with Web Analytics Data
Reporting, Analysing, Testing and Forecasting
Pages with Zero Traffic


Friday, 18 July 2025

Over-specific Targeting and Segmentation

 In my previous posts, I've talked about how you can analyze website data, or A/B test data, and use it to identify winning segments of users.

However, your wonderful new test design may be an unfortunate loser.  It may have shown dreadful results - all the KPIs you can think of are in the red.  There's no way to salvage this one: every single metric shows a drop for Recipe B.  And I suppose you have two options: learn from the data and try again, or segment the data and see why it lost.

And so begins a trip down a very dangerous rabbit-hole.  

"If we look at new versus return visitors, we find that return visitors didn't perform as badly as new visitors."

"And if we look at return visitors who were visiting on a mobile device instead of a laptop or desktop, then we see that performance is actually slightly better."

"And if we look at return visitors who visited on a mobile device and were looking for our higher-price products, then we actually see an improvement."

Great.  But after three rounds of filtering, targeting or segmenting (your choice of terminology), you've gone from 50% of traffic (the test population) down to 4.3% of traffic.  Is it really worth spending time, effort and energy to provide 4.3% of your traffic with a unique experience?  If you're a luxury brand like Rolls Royce, or Beaverbrooks, then possibly, yes.  If you're selling discounted pet supplies, perhaps not.
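The arithmetic of that rabbit-hole is simple multiplication: each filter keeps only a fraction of what was left.  The individual fractions below are invented, but they land on the same 4.3%:

```python
# Each segmentation step keeps a fraction of the remaining audience (fractions invented)
population = 0.50  # the test population: 50% of all traffic
filters = [
    ("return visitors",     0.40),
    ("on a mobile device",  0.45),
    ("high-price browsers", 0.48),
]

remaining = population
for name, kept in filters:
    remaining *= kept
    print(f"after '{name}': {remaining:.1%} of all traffic")
# after 'high-price browsers': 4.3% of all traffic
```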

But we can get into this level of detail with our targeting and personalisation campaigns too.  I've previously talked about the challenges of setting up a personalisation campaign - the first is obtaining and analysing the data, the second is having the content to present to user segments.  But assuming we can make a decent effort at both, we don't want to get into too much detail in our segments.  For example:

What do you show in the homepage hero banner?  Or what do you show in your "We think you'll like this..." module?  Do you stand at the front of your virtual store, identifying customers based on data such as previous visits or previous purchases, and say, "I think you'd like to buy this Lego set.  It's not discounted, there's no extra incentive to buy, but we watched you on your previous visit, and we think this Lego set is for you."

Is your targeting that good?


In my experience and conversations with other professionals, Netflix and Amazon are often cited as the leaders in targeting.  "Because you watched Star Trek: Voyager" is a reasonable and transparent explanation of the recommendations that Netflix shows me.  And sure enough, around half of the recommendations are actually interesting to me - some of them I've seen before, some of them aren't my cup of tea.  And when you have the opportunity to present me with 42 options (the screen scrolls horizontally seven times) then you can show me specific examples.  If I don't know what I'm looking for, this is a good place to start.

So you could stand at the front of your virtual toy store and say, "We can recommend these Lego models...." and show 42 from your catalogue.  And why not?

If that's not feasible (perhaps due to challenges with obtaining stock values - there's nothing worse than actively recommending an item that's out of stock) then you can be less specific.  Standing on the virtual front door of your virtual store, you could offer, "Would you like to see our Lego models? Please walk this way" instead of "We think you want this model."  You're more likely to get a positive reaction, for a start; in a world where engagement metrics are the king of the KPIs, you're at least more likely to see better results from being a little less specific.  You can certainly expect to be more successful with a broad recommendation than with no targeting at all.  Compare, for example, "Welcome to our toy shop!  These are our favourite toys!" with "We think you're interested in construction toys."  The first is symptomatic of the "We want to sell you this" which pervades many home-page banners, instead of the notion that we find out what our customers want to buy, and show them that.  I'll leave that one there for now, but at least some level of targeting is better than none (and probably better than over-targeting too).

If we can say with a 75% chance that a visitor is looking for Lego models, but only a 23% chance that it's Lego Technic (the advanced, engineering-level Lego), and only a 5% likelihood that it's a Lego Technic Race Car, then perhaps leading with one specific model is too much.  It would be better to suggest the Lego Technic range, and direct users to a category page and let them find their own way from there.
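One way to encode 'don't be more specific than your data supports' is to walk down the category tree and stop at the deepest level that still clears a confidence threshold.  A minimal sketch using the probabilities above; the 50% threshold is my own assumption:

```python
# Category path, from broad to specific, with the estimated probability
# that each level matches the visitor's intent (numbers from the example)
path = [
    ("Lego models",           0.75),
    ("Lego Technic",          0.23),
    ("Lego Technic Race Car", 0.05),
]

THRESHOLD = 0.50  # assumed cut-off for how sure we want to be before recommending

def pick_recommendation(path, threshold=THRESHOLD):
    """Return the most specific category whose probability clears the threshold."""
    best = None
    for category, prob in path:
        if prob < threshold:
            break  # probabilities only fall as we get more specific
        best = category
    return best

print(pick_recommendation(path))  # -> 'Lego models'
```

In this example only 'Lego models' clears the bar, so the category page is what we lead with - the 'please walk this way' recommendation rather than the single race car.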
     

Your virtual store could be selling electronics, home appliances, books or streaming TV shows, but Lego has the advantage of being widely known globally, and very visual and tangible.  Insert your relevant product subcategory here (I suppose if I had been paying attention to your browsing habits, I could have personalised the content of the blog to make it more relevant to you.  Maybe next time!).


Monday, 30 June 2025

How I'm Fixing the SEO for this Blog

I recently discovered Google Search Console, and learned, to my absolute disappointment, that over two thirds of the pages on this blog aren't indexed by Google.  Well, that would explain why they don't get any traffic.  Worse still, it looks like nothing published since 2020 has been indexed.

So, here's what I've been doing to try and get my pages indexed, and attract more traffic to my blog.

1. Google Search Console - I've submitted my sitemap, several times, and added individual pages that have the best quality content (in my view).
2. Removed extraneous links on the page - the calendar of blog posts was diluting link juice and so that's gone.  Is it really relevant for you to know that I've been blogging here since 2011?  Probably not.
3. Tidied up the category labels - there was a whole cloud of these (literally) and I've reduced them to a manageable list, which I continue to prune.
4. Added group links on similar pages to show that they have a common theme - the Star Trek pages and calculator games pages first.  These now have 'Other pages you may be interested in' with links.  The Star Trek reviews and the Star Wars reviews all have links to the other episodes in the same season, showing Google that they're connected, not just 10 random pages.
5. Created static pages per category - Chess and Maths first - although these aren't getting much interest yet.
6. Submitted my site to Bing's webmaster tools and tracked traffic there - much easier, and much more straightforward.
7. I've created external links to my site - backlinks such as this one on Goodreads for Calculator Fun and Games.

Am I seeing an improvement in traffic? 

Not yet.

Am I giving up?

No!

Tuesday, 1 April 2025

Waterproof Electricity

Researchers at Oxford University proudly announced the development and successful testing of a new material which will conduct electricity even when underwater. The so-called 'waterproof electricity' is the result of a new type of plastic which will conduct an electric current but prevents any "leakage" of electric charges into the water.  Their findings, published in Materials Journal, mark a significant point in global materials development.

Historically, water has always been the biggest enemy of electrical devices. The only way to protect devices which are to be used underwater has been to physically coat them in a waterproof and airtight layer, leading to cumbersome and clunky devices; and as they are to be operated underwater, this additional layer has made them particularly difficult to use.

Professor David Armstrong, the team leader at Oxford, explained, "As per recent information, we have been able to conduct electricity through our new material without any loss of current to the surrounding water. Clearly this opens up all kinds of applications, from underwater research to making domestic mobile devices waterproof." 

Dr Emily Turner, a senior researcher on the team, added, "The potential for this material is immense. We are looking at applications in underwater robotics, marine exploration, and even in everyday consumer electronics. The ability to have devices that are both electrically conductive and waterproof could revolutionize many industries."

Professor Mauro Pasta, another key researcher, emphasized the collaborative effort: "This project has been a true interdisciplinary endeavor, combining expertise from materials science, chemistry, and electrical engineering. The synergy between these fields has been crucial in achieving this breakthrough."

The new polymer is based on PTFE (Teflon), which is water resistant, with additional atom chains that enable it to conduct electricity.  Known as Fluoro-Ortho-Oxy Limonene, it's a highly oxygenated organic molecule formed from the oxidation of limonene. It features a unique structure that includes both closed-shell and open-shell peroxy radicals, which contribute to its exceptional properties.  Part of its structure is shown below. Its full chemical structure and further details will be released in an online article at noon today.


If you'd like to read more of my Chemistry articles, I can recommend my explanation of how I got into online A/B testing as a Chemistry graduate.

If this sounds like something out of Star Trek, there's probably a good reason for it.  

Wednesday, 26 March 2025

Airband Radio Aerials: Maths in Action

I've been interested in aircraft and airshows for over 40 years - anything military or civil - and I've blogged in the past about how to use a spreadsheet to track down where to watch the Red Arrows fly past on their transit flights.  You didn't think that post was about geometry without some real-life applications, did you?  What is this - "Another day I haven't used algebra"?

Anyway - I've been particularly interested in the Red Arrows and their air-to-air chatter, and the communications between pilots and air traffic control.  Yes, I take my airband radio along to airshows and to airports, and listen to the pilots request and receive clearance to take off or land.  Getting to airports is more of a challenge than it used to be - my children aren't as interested as I am in the whole thing, and standing at the end of a runway in poor weather isn't as much fun as it sounds!

So, I've started developing my home-based receiver.  In other words, I spent my birthday money on an airband antenna and an extension cable to connect it from outside (cold and sometimes rainy) to my desk (warm and inside) so that I can listen to pilots flying nearby.

Now: nearby is a relative term.

From Stoke-on-Trent, I've been able to pick up pilot transmissions from about 35 miles away, on the southern edge of Manchester Airport.  That's with a very basic antenna, set on my garden gatepost and about two metres off the ground - not bad for a first attempt.

My dad, on the other hand, has been tracking radio transmissions for decades.  His main areas of interest are long wave (around 200 kHz), medium wave (500-1600 kHz), and TV (UHF, 300 MHz to 3 GHz).

Airband falls into the Very High Frequency (VHF) range; civil airband sits at roughly 118-137 MHz.

Here comes the maths:
All radio transmissions travel at the speed of light, c = 2.998 × 10^8 m/s.
c = f λ

Where f (sometimes written as the Greek letter nu, ν) is the frequency, and λ (the Greek letter lambda) is the wavelength.

So, if we know the frequency range that we want to listen to, we can calculate the wavelength of that transmission.  And this is important, because the length of the antenna (or aerial) that we need will depend on the wavelength.  Ideally, the aerial should be the length of one full wavelength, for maximum reception effectiveness.  Alternatively, a half-wavelength or a quarter-wavelength can be used.

So:  we know the speed of light, c = 2.998 × 10^8 m/s
And we know the frequency of the transmissions we want to receive, which is around 118 MHz.

λ = c/f

λ = (2.998 × 10^8) / (118 × 10^6) = 2.5 metres
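As a sanity check, here's the same sum in a few lines of Python, which also gives the half- and quarter-wave lengths:

```python
C = 2.998e8  # speed of light, m/s

def wavelength(freq_hz: float) -> float:
    """Wavelength in metres for a given frequency in hertz."""
    return C / freq_hz

lam = wavelength(118e6)  # 118 MHz, at the bottom of the civil airband
print(f"full wave:    {lam:.2f} m")  # ~2.54 m
print(f"half wave:    {lam / 2:.2f} m")
print(f"quarter wave: {lam / 4:.2f} m")
```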

Which is feasible for an external, wall-mounted aerial.  Can you see where this is going?

Exactly.  And here it is:  

It's just over two metres from end to end, with a feed at the midpoint.  This is the Mark One; the Mark Two will be the same aerial but even higher up, and closer to vertical (with a bracket that will enable it to dodge the eaves of the roof!).