Wednesday, 3 September 2025

Did You Just Code a Distraction?

In the fast-paced world of web development and digital marketing, creating new features to enhance user experience is not just a common practice, it's pretty much par for the course.  Everybody has ideas about making the website better, and it usually involves some sort of magical feature that will help users find the exact product they want in a mere matter of seconds - a shopping genie, or something involving AI.  The new feature for your website is almost always visually appealing and interactive, and the designers are confident it will boost engagement for their favourite persona. Before rolling it out, you wisely decide to run an A/B test to measure its effectiveness.

So you code the test, and you run it for the usual length of time.  You follow all the advice on LinkedIn about statistical significance (we can all describe it and we all have our own ways of calculating it, thank you) and getting a decent sample size. The test results are in, and they’re a mixed bag. On one hand, the new feature is a hit in terms of engagement. It receives twice as many clicks compared to the other clickable elements on the page, such as those lovely banners, the promotional links and the pretty pictures. However, your deeper dive into the data reveals a concerning trend. While the new feature attracts a lot of attention and engagement, the conversion rate for users who interact with it is only around 2.5%. In contrast, the conversion rate for users who engage with the existing content on the page is significantly higher, at around 4.1%.
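Before reading too much into those two percentages, it's worth checking that the gap between them is statistically meaningful.  Here's a minimal sketch of a two-proportion z-test; the visitor counts are invented for illustration, and only the 2.5% and 4.1% rates come from the example above.

```python
# Two-proportion z-test on the hypothetical numbers above.  The visitor
# counts are invented for illustration; only the 2.5% and 4.1% rates
# come from the example.
from math import sqrt, erfc

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return (z, two-sided p-value) for the difference between two rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    return z, erfc(abs(z) / sqrt(2))  # two-sided p-value

# Hypothetical: 8,000 users clicked the feature, 200 converted (2.5%);
# 12,000 engaged with the existing content, 492 converted (4.1%).
z, p = two_proportion_z_test(200, 8_000, 492, 12_000)
print(f"z = {z:.2f}, p = {p:.5f}")  # a tiny p suggests the gap isn't noise
```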

This is key.  It really is insufficient to look only at engagement data (click-through rate) as a success metric.  Yes, it is important, but it is not enough.  After all, if you want to create a banner with a high click rate, then you could simply write "Buy one get one free", or better still, "Buy one get two free - click here."  It's essential that you set expectations with your banners, calls to action and features - what's to stop you from writing "Click here for free money!"?  If your priority in testing is to generate clicks, then you'll degenerate into coding the on-site versions of clickbait, and that's a terrible waste of a potential lead.

So, what went wrong with your test? The short answer is that you coded a pretty distraction. Here’s a breakdown of why this happens and how to address it:

Misalignment with User Intent

The new feature, despite being engaging, may not align with the primary intent of your users. If it diverts their attention away from the main conversion paths, it can reduce overall effectiveness. Users might be intrigued by the new feature but not find it relevant to their immediate needs.  You misunderstood your persona's motivation; it might be time to rewrite your persona with this additional information.

Cognitive Load

Introducing a new element can increase the cognitive load on users. They have to process and understand this new feature, which can be mentally taxing. If the feature doesn’t provide immediate value or clarity, users might get distracted and abandon their original task.  They used up their time, effort and patience while interacting with your new feature, and gave up on their primary purpose (which was to buy something from you).

Disruption of User Flow

A well-designed website guides users smoothly towards conversion goals. A new feature that stands out too much can disrupt this flow, causing users to deviate from their intended path. This disruption can lead to lower conversion rates, as users get sidetracked.  How do I get back to my intended path?  This new feature has proved to be the next big shiny thing, and while it's attracting user engagement, it's confusing them and preventing them from getting to where they wanted to go.

The Solutions

To avoid coding distractions, consider the following strategies:

User-Centric Design

Not my favourite phrase, since it leads to design without A/B testing and designers designing for their favourite personas.  Ensure that any new feature is designed with the user’s needs and goals in mind. Conduct user research to understand what your audience values and how they navigate your site, and then align your new features and your development roadmap with these insights.  This will enhance, rather than disrupt, the user experience, and reduce the amount of time wasted on developing the next shiny bauble - it looks nice and impresses senior management, but it is not good for users.

Incremental Testing

Instead of launching a fully-fledged feature, start with a minimal viable version and test its impact incrementally. This approach allows you to gather feedback and make necessary adjustments before a full rollout.  Use test data in conjunction with user research to gain a full picture of what you thought was going to happen, and what actually happened.

Clear Value Proposition

Make sure the new feature has a clear and compelling value proposition. Users should immediately understand its purpose and how it benefits them. This clarity can help integrate the feature seamlessly into the user journey.  If the test 'fails', then you'll learn that the value proposition you promoted was not what users wanted to read, and you can try something different.

Monitor and Iterate

Continuously monitor the performance of new features and be ready to iterate based on user feedback and data. If a feature is not performing as expected, don’t hesitate to tweak or even remove it to maintain a smooth user experience.  It's time to swallow your pride and start again.  If you change direction now, you'll have less distance to travel than if you wait six months before unimplementing the eye-catching blunder you've launched.

Conclusion

In the quest to innovate and improve user engagement, it’s crucial to strike a balance between novelty and functionality. While new features can attract attention, they must also support the primary goals of your website. By focusing on user-centric design, incremental testing, and clear value propositions, you can avoid coding distractions and create features that truly enhance the user experience.

Other Web Analytics and Testing Articles I've Written

How not to segment test data (even if your stakeholders ask you to)

Designing Personas for Design Prototypes
Web Analytics - Gathering Requirements from Stakeholders
Analysis is Easy, Interpretation Less So
Telling a Story with Web Analytics Data
Reporting, Analysing, Testing and Forecasting
Pages with Zero Traffic

Wednesday, 27 August 2025

Do You Know How Well Your Test Will Perform?

There are various ways of running tests - or more specifically, there are various ways of generating test hypotheses.  One that I've come across over the years, and more recently, is the 'guess how well your test is going to perform' approach.  It's not called that, but it seems to me to be the most succinct description.

"If we change the pictures on our site from cats to dogs, then we'll see a 3.5% increase in conversion."
"If we promote construction toys ahead of action figures, then we'll see a 4% lift in revenue."

If you know that's going to happen, why don't you do it anyway?

The main underlying challenge I have is that it's almost impossible to quantify the improvement you're going to get.  How do you know?

Well, let's attempt the calculation (with hypothetical numbers all the way through).

Let's say our latest campaign landing page has a bounce rate (user lands on page, then exits without visiting any other pages) of 75%.  10% engage with site search, 10% click on the menus at the top of the page, and 5% click on the content on the page (there are a few banners and a few links).

We've identified that most users aren't scrolling past the first set of banners and links, and we therefore hypothesise that if we make the banners smaller, and reduce the amount of padding around the links, that we can increase engagement with the content in the lower half of the page, and therefore improve the bounce rate.  We believe we can get 50% more links above the fold, and therefore increase the in-page engagement rate from 5% to 7.5%.  We will assume (and this is the fun bit) that this additional traffic converts at the same rate as the 5% we have so far, and therefore, we'll get a revenue lift of 50%.  This sounds like a lot, but given that the engagement rate is going up from a small number to a slightly larger number, it's unlikely to be a huge revenue lift in dollar terms (unless you're pouring in huge volumes of traffic - and watching it bounce at a rate of 75%).
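That back-of-envelope forecast is easy to sketch in code.  In the Python below, the traffic volume, conversion rate and average order value are invented assumptions; only the 5% and 7.5% engagement rates come from the example.

```python
# Back-of-envelope forecast.  Only the 5% and 7.5% engagement rates come
# from the example; traffic, conversion rate and order value are assumed.
visits = 100_000        # assumed monthly landing-page visits
engage_before = 0.05    # 5% currently click the in-page content
engage_after = 0.075    # 7.5% forecast after shrinking the banners
conv_rate = 0.02        # assumed conversion rate of content engagers
aov = 40.00             # assumed average order value, in dollars

rev_before = visits * engage_before * conv_rate * aov
rev_after = visits * engage_after * conv_rate * aov  # same conv rate assumed

print(f"before: ${rev_before:,.0f}, after: ${rev_after:,.0f}")
print(f"lift: {rev_after / rev_before - 1:.0%}")  # 50%, by construction
# 50% of a small number is still a small number: $4,000 -> $6,000 here.
```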

Perhaps that was an over-simplification.  But if we knew that our test will give us a 5% lift (and we've still decided to test it), what happens when we launch the test?  Presumably, we'll stop it when it reaches the 5% lift, irrespective of the confidence level.  But what happens if it doesn't get to 5%?  What if it stubbornly sits at 4%?  Or maybe just 3%?  Did the test win, or did it lose?  In classical scientific terms, it lost, since we disproved our overly-specific hypothesis.  But from a business perspective, it still won, just not by as much as we had originally expected.  Would you go into a meeting with the marketing manager and say, "Sorry, Jim, our test only achieved a 3% revenue lift, so we've decided it was a failure."?

For me, it comes down to two arguments: 

If you can forecast your test result with a high degree of certainty, based on considerable evidence for your hypothesis, it's probably not worth testing and you should just implement it.  Testing is best used for edge-cases with some degree of uncertainty.

If, on the other hand, you have identified a customer problem with your site, and you can see that fixing it will give you a revenue lift - but you don't know how to fix it - then that's very good grounds for testing.  The hypothesis is not, "If we fix this problem, we'll get a 6% revenue lift," but, "If we fix this problem in this way then we'll get a revenue lift".  And that's where you need to encourage the website analysts and the customer feedback department (or the complaints department, or whoever advocates for customers within your company) to come together and find out where the problems are, and what they are, and how to address them.

That will undoubtedly bring good test ideas, and that's what you're looking for, even if you don't know how much revenue lift it will provide.

Other Web Analytics and Testing Articles I've Written

How not to segment test data (when your stakeholders want you to adapt your data)
Web Analytics - Gathering Requirements from Stakeholders
Analysis is Easy, Interpretation Less So (and why it's more valuable)
Telling a Story with Web Analytics Data
Reporting, Analysing, Testing and Forecasting
Pages with Zero Traffic


Friday, 18 July 2025

Over-specific Targeting and Segmentation

In my previous posts, I've talked about how you can analyze website data, or A/B test data, and use it to identify winning segments of users.

However, your wonderful new test design may be an unfortunate loser.  It may have shown dreadful results - all the KPIs you can think of are in the red.  There's no way to salvage this one, every single metric shows a drop for Recipe B.  And I suppose you have two options:  learn from the data and try again, or segment the data and see why it lost.

And so begins a trip down a very dangerous rabbit-hole.  

"If we look at new versus return visitors, we find that return visitors didn't perform as badly as new visitors."

"And if we look at return visitors who were visiting on a mobile device instead of a laptop or desktop, then we see that performance is actually slightly better."

"And if we look at return visitors who visited on a mobile device and were looking for our higher-price products, then we actually see an improvement."

Great.  But after three rounds of filtering, targeting or segmenting (your choice of terminology), you've gone from 50% of traffic (the test population) down to 4.3% of traffic.  Is it really worth spending time, effort and energy to provide 4.3% of your traffic with a unique experience?  If you're a luxury brand like Rolls Royce, or Beaverbrooks, then possibly, yes.  If you're selling discounted pet supplies, perhaps not.
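The arithmetic behind that shrinkage is just repeated multiplication, and it's worth making explicit.  In this sketch the individual segment shares are invented, chosen so that the product lands near the 4.3% figure above.

```python
# Three rounds of segmentation shrink the audience multiplicatively.
# The individual shares are invented; they're chosen so the product
# lands near the 4.3% figure in the text.
population = 0.50  # the test recipe saw 50% of traffic

segment_filters = [
    ("return visitors", 0.45),                 # assumed share
    ("on a mobile device", 0.55),              # assumed share
    ("browsing higher-price products", 0.35),  # assumed share
]

for name, share in segment_filters:
    population *= share
    print(f"after '{name}': {population:.1%} of all traffic")

# 0.50 * 0.45 * 0.55 * 0.35 = 4.3% -- a lot of work for a small audience
```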

But we can get into this level of detail with our targeting and personalisation campaigns too.  I've previously talked about the challenges of setting up a personalisation campaign - the first is obtaining and analysing the data, the second is having the content to present to user segments.  But assuming we can make a decent effort at both, we don't want to get into too much detail in our segments.  For example:

What do you show in the homepage hero banner?  Or what do you show in your "We think you'll like this..." module?  Do you stand at the front of your virtual store, identifying customers based on data such as previous visits or previous purchases, and say, "I think you'd like to buy this Lego set.  It's not discounted, there's no extra incentive to buy, but we watched you on your previous visit, and we think this Lego set is for you."

Is your targeting that good?


In my experience and conversations with other professionals, Netflix and Amazon are often cited as the leaders in targeting.  "Because you watched Star Trek: Voyager" is a reasonable and transparent explanation of the recommendations that Netflix shows me.  And sure enough, around half of the recommendations are actually interesting to me - some of them I've seen before, some of them aren't my cup of tea.  And when you have the opportunity to present me with 42 options (the screen scrolls horizontally seven times) then you can show me specific examples.  If I don't know what I'm looking for, this is a good place to start.

So you could stand at the front of your virtual toy store and say, "We can recommend these Lego models...." and show 42 from your catalogue.  And why not?

If that's not feasible (perhaps due to challenges with obtaining stock values - there's nothing worse than actively recommending an item that's out of stock) then you can be less specific.  Standing at the virtual front door of your virtual store, you could offer, "Would you like to see our Lego models? Please walk this way" instead of "We think you want this model."  You're more likely to get a positive reaction, for a start; in a world where engagement metrics are the king of the KPIs, you're at least more likely to see better results from being a little less specific.  You can certainly expect to be more successful with a broad recommendation than with no targeting at all.  Compare, for example, "Welcome to our toy shop!  These are our favourite toys!" with "We think you're interested in construction toys."  The first is symptomatic of the "We want to sell you this" attitude which pervades many home-page banners, instead of the notion that we should find out what our customers want to buy, and show them that.  I'll leave that one there for now, but at least some level of targeting is better than none (and probably better than over-targeting too).

If we can say with a 75% chance that a visitor is looking for Lego models, but with only a 23% chance that it's Lego Technic (the advanced, engineering-level Lego), and only a 5% likelihood that it's a Lego Technic Race Car, then perhaps leading with one specific model is too much.  It would be better to suggest the Lego Technic range, and direct users to a category page and let them find their own way from there.
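One way to encode that "back off to a broader suggestion" rule is a confidence threshold.  In this sketch, the page names, probabilities and threshold are all illustrative assumptions rather than output from any real targeting engine.

```python
# Back off to the most specific recommendation that clears a confidence
# threshold.  The page names, probabilities and threshold are all
# illustrative assumptions, not output from a real targeting engine.
CONFIDENCE_THRESHOLD = 0.20

# (page to recommend, predicted probability), most specific first
candidates = [
    ("Lego Technic Race Car product page", 0.05),
    ("Lego Technic category page", 0.23),
    ("Lego category page", 0.75),
]

def pick_recommendation(options, threshold):
    for page, probability in options:  # most specific first
        if probability >= threshold:
            return page
    return "untargeted homepage content"  # fall back to no targeting

print(pick_recommendation(candidates, CONFIDENCE_THRESHOLD))
# -> "Lego Technic category page": confident enough for the range,
#    but not for one specific model
```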

Your virtual store could be selling electronics, home appliances, books or streaming TV shows, but Lego has the advantage of being widely known globally, and very visual and tangible.  Insert your relevant product subcategory here (I suppose if I had been paying attention to your browsing habits, I could have personalised the content of this blog to make it more relevant to you.  Maybe next time!).


Monday, 30 June 2025

How I'm Fixing the SEO for this Blog

I recently discovered Google Search Console, and learned, to my absolute disappointment, that over two thirds of the pages on this blog aren't indexed by Google.  Well, it would explain why they don't get any traffic.  Worse still, it looks like nothing since 2020 has been indexed.

So, here's what I've been doing to try and get my pages indexed, and attract more traffic to my blog.

1. Google Search Console - I've submitted my sitemap, several times, and added individual pages that have the best quality content (in my view).
2. Removed extraneous links on the page - the calendar of blog posts was diluting link juice and so that's gone.  Is it really relevant for you to know that I've been blogging here since 2011?  Probably not.
3. Tidied up the category labels - there was a whole cloud of these (literally) and I've reduced them to a manageable list, which I continue to prune.
4. Added group links on similar pages to show that they have a common theme - the Star Trek pages and calculator games pages first.  These now have 'Other pages you may be interested in' with links.  The Star Trek reviews and the Star Wars reviews all have links to the other episodes in the same season, showing Google that they're connected, they're not just 10 random pages.
5. Created static pages per category - Chess and Maths first - although these aren't getting much interest yet.
6. Submitted my site to Bing's webmaster tools and tracked traffic there - much easier, and much more straightforward.
7. I've created external links to my site - backlinks such as this one on Goodreads for Calculator Fun and Games.

Am I seeing an improvement in traffic? 

Not yet.

Am I giving up?

No!

Tuesday, 1 April 2025

Waterproof Electricity

Researchers at Oxford University proudly announced the development and successful testing of a new material which will conduct electricity even when underwater. The so-called 'waterproof electricity' is the result of a new type of plastic which will conduct an electric current but prevents any "leakage" of electric charges into the water.  Their findings, published in Materials Journal, mark a significant point in global materials development.

Historically, water has always been the biggest enemy of electrical devices. The only way to protect devices which are to be used underwater has been to physically coat them in a waterproof and airtight layer, leading to cumbersome and clunky devices, and as they are to be operated underwater, this additional layer has made them particularly difficult to use.

Professor David Armstrong, the team leader at Oxford, explained, "As per recent information, we have been able to conduct electricity through our new material without any loss of current to the surrounding water. Clearly this opens up all kinds of applications, from underwater research to making domestic mobile devices waterproof." 

Dr Emily Turner, a senior researcher on the team, added, "The potential for this material is immense. We are looking at applications in underwater robotics, marine exploration, and even in everyday consumer electronics. The ability to have devices that are both electrically conductive and waterproof could revolutionize many industries."

Professor Mauro Pasta, another key researcher, emphasized the collaborative effort: "This project has been a true interdisciplinary endeavor, combining expertise from materials science, chemistry, and electrical engineering. The synergy between these fields has been crucial in achieving this breakthrough."

The new polymer is based on PTFE (Teflon), which is water resistant, with additional atom chains that enable it to conduct electricity.  Known as Fluoro-Ortho-Oxy Limonene, it's a highly oxygenated organic molecule formed from the oxidation of limonene. It features a unique structure that includes both closed-shell and open-shell peroxy radicals, which contribute to its exceptional properties.  Part of its structure is shown below. Its full chemical structure and further details will be released in an online article at noon today.


If you'd like to read more of my Chemistry articles, I can recommend my explanation of how I got into online A/B testing as a Chemistry graduate.

If this sounds like something out of Star Trek, there's probably a good reason for it.  

Wednesday, 26 March 2025

Airband Radio Aerials: Maths in Action

I've been interested in aircraft and airshows for over 40 years - anything military or civil, and I've blogged in the past about how to use a spreadsheet to track down where to watch the Red Arrows fly past on their transit flights.  You didn't think that post was about geometry without some real-life applications?  What is this - "Another day I haven't used algebra"?

Anyway - I've been particularly interested in the Red Arrows and their air-to-air chatter, and the communications between pilots and air traffic control.  Yes, I take my airband radio along to airshows and to airports, and listen to the pilots request and receive clearance to take off or land.  Getting to airports is more of a challenge than it used to be - my children aren't as interested as I am in the whole thing, and standing at the end of a runway in poor weather isn't as much fun as it sounds!

So, I've started developing my home-based receiver.  In other words, I spent my birthday money on an airband antenna and an extension cable to connect it from outside (cold and sometimes rainy) to my desk (warm and inside) so that I can listen to pilots flying nearby.

Now: nearby is a relative term.

From Stoke on Trent, I've been able to pick up pilot transmissions from about 35 miles away, on the southern edge of Manchester Airport.  That's with a very basic antenna, set on my garden gatepost and about two metres off the ground - not bad for a first attempt.

My dad, on the other hand, has been tracking radio transmissions for decades.  His main areas of interest are long wave (around 200 kHz), medium wave (500-1600 kHz), and TV (UHF, 300 MHz to 3 GHz).

Airband falls into the Very High Frequency range, around 100-200 MHz.

Here comes the maths:
All radio transmissions travel at the speed of light, c = 2.998 × 10^8 m/s.

c = f λ

where f (sometimes written as the Greek letter nu, ν) is the frequency, and λ (the Greek letter lambda) is the wavelength.

So, if we know the frequency range that we want to listen to, we can calculate the wavelength of that transmission.  And this is important, because the length of the antenna (or aerial) that we need will depend on the wavelength.  Ideally, the aerial should be the length of one full wavelength, for maximum reception effectiveness.  Alternatively, a half-wavelength or a quarter-wavelength can be used.

So: we know the speed of light, c = 2.998 × 10^8 m/s.
And we know the frequency of the transmissions we want to receive, which is around 118 MHz.

λ = c/f = (2.998 × 10^8) / (1.18 × 10^8) ≈ 2.5 metres
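If you want to repeat the sum for other bands, it's a one-line calculation; here's a small Python sketch that also prints the half-wave and quarter-wave lengths mentioned above.

```python
# Wavelength, and the common fractional aerial lengths, for any frequency.
C = 2.998e8  # speed of light, m/s

def wavelength_m(freq_hz):
    return C / freq_hz

lam = wavelength_m(118e6)  # 118 MHz, the bottom of the airband
print(f"full wave:    {lam:.2f} m")      # ~2.54 m
print(f"half wave:    {lam / 2:.2f} m")  # ~1.27 m
print(f"quarter wave: {lam / 4:.2f} m")  # ~0.64 m
```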

Which is feasible for an external, wall-mounted aerial.  Can you see where this is going?

Exactly.  And here it is:  

It's just over two metres from end to end, with a feed at the midpoint.  This is the Mark One; the Mark Two will be the same aerial but even higher up, and closer to vertical (with a bracket that will enable it to dodge the eaves of the roof)!


Tuesday, 18 March 2025

Calculator Games: Ulam Sequences: Up, Up and Away!

Up, Up and Away With Ulam Sequences

This article in the ongoing series of ‘mathematical puzzles you could investigate with a calculator’ (that’s why I just call it ‘Calculator Fun and Games’) covers the Ulam Sequence.  Ulam sequences, named after mathematician Stanisław Ulam, are fascinating numerical sequences that begin with two specified integers. Each subsequent number in the sequence is defined as the smallest integer that is the sum of two distinct earlier numbers, where such a sum is unique within the sequence. This uniqueness constraint shapes the sequence's progression in an intricate way.

Ulam sequences are studied for their intriguing mathematical properties and their unpredictable, non-linear behavior, which challenges patterns typically found in additive sequences. They have applications in number theory and combinatorics, offering rich grounds for exploration and research.

Let's have a look at them in more detail, and start with the simplest.

How to Generate an Ulam Sequence

Start with two numbers (specifically, positive integers).  A good place to start is with a = 1 and b = 2.  To find the next number in the sequence, find the smallest integer that can be written as the sum of two distinct earlier numbers in just one way.  Continue with the next number, and the next.

For example, let’s start the Ulam Sequence with the numbers 1 and 2.  These are the first two terms in the sequence.

The next term is 3 (since 1+2=3).
After that comes 4 (since 1+3=4).

The next term is not 5: we can write 5 as 1+4 and as 2+3 using the terms that we’ve generated already, so it is skipped.

The next term is 6 (since 2+4=6), and we can only write this in one way using our terms.

We can write 7 as 4+3 and as 6+1, so we skip 7.

The next term is 8 (since 2+6=8).

So, the beginning of the sequence is: 1, 2, 3, 4, 6, 8, … and it continues: 1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28, 36, 38, 47, 48, 53, 57, 62, 69, 72, 77, 82, 87, 97, 99, 102.

Note that 5, 7, 9, 10, 12, 14, 15, 17, 19, 20, 21 and 22 can be obtained in multiple ways using the terms before them.

However:  23 is not in the sequence because it cannot be obtained using the previous terms at all!  24 can be written as 8+16, 6+18 and 11+13, while 25 is not obtainable.
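If you'd rather let a computer do the bookkeeping, here's a minimal Python sketch of the rule above: for each candidate it counts the representations as a sum of two distinct earlier terms, and keeps the candidate only when there is exactly one.

```python
# Generate an Ulam sequence: after the two seeds, each new term is the
# smallest integer with exactly one representation as the sum of two
# distinct earlier terms.
def ulam(a, b, n_terms):
    terms = [a, b]  # assumes a < b
    candidate = b
    while len(terms) < n_terms:
        candidate += 1
        # count representations x + y = candidate with x < y, both in terms
        reps = sum(1 for i, x in enumerate(terms)
                   for y in terms[i + 1:]
                   if x + y == candidate)
        if reps == 1:
            terms.append(candidate)
    return terms

print(ulam(1, 2, 27))
# [1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28, 36, 38, 47, 48, 53, 57,
#  62, 69, 72, 77, 82, 87, 97, 99, 102]
```

The same function reproduces the 1,3 sequence (ulam(1, 3, 20)) and the other sequences further down this post.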

The sequence grows in an irregular, almost random pattern.  Let’s see what happens when we start with 1 and 3 instead of 1 and 2.

4 = 1 + 3 only
5 = 1 + 4 only
6 = 1 + 5 only

7 can be written as 3 + 4 and as 1 + 6, so it is skipped.
8 = 3 + 5 only.
9 can be written as 1 + 8, 3 + 6 and 4 + 5, so it is skipped.
10 = 4 + 6 only.
11 can be written as 1 + 10, 3 + 8 and 5 + 6, so it is skipped.

The first 20 terms for the 1,3 Ulam Sequence are:

1, 3, 4, 5, 6, 8, 10, 12, 17, 21, 23, 28, 32, 34, 39, 43, 48, 52, 59 and 63.

The Ulam Sequence is an interesting example of how simple rules can lead to complex and intriguing mathematical structures, which makes it ideal for calculator (or spreadsheet) exercises.  For example, here’s a comparison of the Ulam sequences for 1,2 and 1,3 (as I’ve calculated above), and then 1,4 and 1,5.  Interestingly, the 1,5 sequence does not race ahead of the 1,2 sequence as I had originally expected.

Term | Ulam (1,2) | Ulam (1,3) | Ulam (1,4) | Ulam (1,5)
---- | ---------- | ---------- | ---------- | ----------
1 | 1 | 1 | 1 | 1
2 | 2 | 3 | 4 | 5
3 | 3 | 4 | 5 | 6
4 | 4 | 5 | 6 | 7
5 | 6 | 6 | 7 | 8
6 | 8 | 8 | 8 | 9
7 | 11 | 10 | 10 | 10
8 | 13 | 12 | 16 | 12
9 | 16 | 17 | 18 | 20
10 | 18 | 21 | 19 | 22
11 | 26 | 23 | 21 | 23
12 | 28 | 28 | 31 | 24
13 | 36 | 32 | 32 | 26
14 | 38 | 34 | 33 | 38
15 | 47 | 39 | 42 | 39
16 | 48 | 43 | 46 | 40
17 | 53 | 48 | 56 | 41
18 | 57 | 52 | 57 | 52
19 | 62 | 59 | 66 | 57
20 | 69 | 63 | 70 | 69

There’s a balance between the ability to leap to larger numbers (1,5) initially – from 1 to 5 – and the need to fill in more numbers between 5 and 10 (because there are very few smaller numbers that can be made in multiple ways).

A quick comparison of the Ulam Sequences for (2,b) is even more interesting.  We have to start with 2,3 since 2,1 is the same as 1,2 above, and 2,2 will only produce the even numbers (which is cute but dull).  In fact, any even number paired with 2 will produce uninteresting results!

Let’s compare 2,3 and 2,5:  These grow at a slower rate compared to the 1,b sequences.  Interestingly, they contain far fewer even numbers than the 1,b sequences; in fact 2,5 only contains the even numbers 2 and 12 in the first 20 terms (with no indication that there are any more even numbers further along the sequence).


Term | 2,3 | 2,5
---- | --- | ---
1 | 2 | 2
2 | 3 | 5
3 | 5 | 7
4 | 7 | 9
5 | 8 | 11
6 | 9 | 12
7 | 13 | 13
8 | 14 | 15
9 | 18 | 19
10 | 19 | 23
11 | 24 | 27
12 | 25 | 29
13 | 29 | 35
14 | 30 | 37
15 | 35 | 41
16 | 36 | 43
17 | 40 | 45
18 | 41 | 49
19 | 46 | 51
20 | 51 | 55


So there’s plenty of scope for investigation with a spreadsheet for the larger numbers.  For example, I haven’t found anybody else listing the Ulam sequence for 10,11… so here it is: the Ulam Sequence for (10,11)

10, 11, 21, 31, 32, 41, 43, 51, 54, 61, 63, 65…

After huge initial leaps of +10 or +11 between consecutive terms, the growth rate of the sequence starts to slow down.  There is only one term in the 20s, then two in the 30s, 40s and 50s, then three in the 60s.

Further reading:
Wolfram has lists and links for many of the 1,b and 2,b Ulam sequences.

Other articles on this blog on similar themes:
Snakes and Ladders (Collatz Conjecture)
Crafty Calculator Calculations (numerical anagrams with five digits)
More Multiplications (numerical anagrams, four digits)
Over and Out (reduce large numbers to zero as rapidly as possible)
Calculator Games: Front to Back
Calculator Games: The Kaprekar Constant