Web Optimisation, Maths and Puzzles: emetrics


Showing posts with label emetrics. Show all posts

Thursday, 6 November 2014

Building Momentum in Online Testing - Key Takeaways

As I mentioned in my previous post, I was recently invited to speak at the eMetrics Summit in London, and from the discussions afterwards it was clear that the attendees found the content genuinely useful.  Here, I'd like to share some of the key points that I raised (and some that I forgot to mention).

Image Credit: eMetrics Summit official photography

There are a large number of obstacles to building momentum with an optimisation program, but most of them can be grouped into one of these categories:


A.  Lack of development resource (HTML and JavaScript developers)
B.  Lack of management buy-in and access to resource
C.  Tests take too long to develop, run, or call a winner
D.  Tests keep losing (or, perversely, tests keep winning and the view is that "testing is completed")
E.  Lack of design resource (UXers or designers)

These issues can be addressed in a number of ways, and the general ideas I outlined were:

1.  If you need to improve your win rate, or if you don't have much development resource, re-use your existing mboxes and iterate.  You won't need to wait for IT deployments or for a developer to code new 'mboxes'; you can use them again - test, learn, and test again.

2.  If you need to improve the impact of your tests (i.e. your tests are producing flat results, or the wins are very small), then make more dramatic changes to your test recipes.  I commented that, generally speaking, the more differences there are between control and the test recipe, the greater the difference in performance (which may be positive or negative).  If you keep iterating and making small changes, you'll probably see smaller lifts or falls; if you take a leap into the unknown, you'll either fly or crash.

Remember not to throw out your analytics just because you're being creative - you'll still need to look carefully at the analytics, as always, and at any and all VOC data you have.  The key difference is that you're testing bigger changes, more changes, or both - you shouldn't be trying new ideas just because they seem good (you'll still need some reason for each recipe).

3.  If you need to get tests moving more quickly, then reduce the number of recipes per test.  More recipes means more time to develop, more time to run (less traffic per recipe per day) and more time to analyse the results afterwards.  Be selective - each recipe should address the original test hypothesis in a different way; you shouldn't need to add recipe after recipe just because each looks like a good idea.  Also, only test on high-traffic or critical pages - where there's plenty of traffic, or where the page is mission-critical (for example, cart pages, or key landing pages).  As a bonus, if you work on optimising conversion or bounce rate for your PPC or display marketing traffic, you'll have an automatic champion in your online marketing department.
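The "more recipes means more time to run" point is easy to quantify: with an even traffic split, each extra recipe shrinks every recipe's daily sample, so the whole test takes longer.  Here's a minimal sketch - the visitor numbers are purely illustrative, not from any real test:

```python
def days_to_complete(daily_visitors, recipes, needed_per_recipe):
    """Days until every recipe (including control) reaches its target
    sample size, assuming traffic is split evenly across all recipes."""
    visitors_per_recipe_per_day = daily_visitors / recipes
    return needed_per_recipe / visitors_per_recipe_per_day

# e.g. 10,000 visitors/day to the test page, 5,000 visitors needed per recipe
for n_recipes in (2, 3, 5):
    print(n_recipes, "recipes:", days_to_complete(10_000, n_recipes, 5_000), "days")
```

With these assumed numbers, going from an A/B test (2 recipes) to an A/B/C/D/E test (5 recipes) takes the run time from 1 day to 2.5 days - and the effect compounds on lower-traffic pages.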

Extra:  If you do decide to run with a large number of recipes, then monitor the recipes' performance more frequently.  As soon as you can identify a recipe which is clearly and significantly underperforming vs control, switch it off.  This has two benefits:  a) you drive a larger share of traffic through the remaining recipes, and b) you save the business money, because you've stopped sending traffic through a low-converting (or otherwise low-performing) recipe.
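One common way to decide whether a recipe is "significantly underperforming vs control" is a two-proportion z-test on conversion rates.  This is a sketch of that general technique, not the specific method from my talk, and the conversion numbers are hypothetical:

```python
import math

def z_vs_control(conv_c, n_c, conv_t, n_t):
    """Two-proportion z-statistic for a test recipe vs control.
    conv_* = number of conversions, n_* = number of visitors.
    A z below about -1.96 (roughly 95% confidence) suggests the
    recipe really is converting worse than control."""
    p_c, p_t = conv_c / n_c, conv_t / n_t
    p_pool = (conv_c + conv_t) / (n_c + n_t)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_t))
    return (p_t - p_c) / se

# Hypothetical figures: control converts 500/10,000; recipe converts 400/10,000
z = z_vs_control(500, 10_000, 400, 10_000)
if z < -1.96:
    print(f"z = {z:.2f}: recipe significantly underperforms - switch it off")
```

In practice you'd also guard against peeking too early (checking significance repeatedly inflates the false-positive rate), which is why the advice is to switch off only the clear, definite losers.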

4.  Getting management buy-in and support on an ongoing basis:  this is not easy, especially when analysts are, stereotypically, numbers-people rather than people-people.  We find it easier to work with numbers than with people, since numbers are clear-cut and well-defined, and people can be... well... messy and unpredictable.  Brooks Bell have recently released a blog post about five ways to manage up, which I recommend.  The main recommendation is to get out there and share.  Share your winners (pleasant) and your losers (unpleasant), but also explain why you think a test is winning or losing.  This kind of discussion will lead naturally on to, "Well, it lost because this component was too big/too small/in the wrong place," and start to inform your next test.

I also talked through my ideas on what makes a good test idea and what makes a bad one; here's the diagram I shared on 'good test ideas'.

In this diagram, the top circle defines what your customers want, based on your analysis; the lower left circle defines your coding capabilities and the lower right defines ideas that are aligned with your company brand and which are supported by your management team.

So where are the good test ideas?  You might think that they are in segment D.  In fact, those are recommendations for immediate action.  The best test ideas are close to segment D, but not actually in it; the areas around segment D are the best places - where two of the three circles intersect, and the third is nearly aligned too.  For example, in segment F, we have ideas that the developers can produce, and which management are aligned with, but where there is doubt about whether they will improve the customer experience.  Here, the idea may be a new way of customising or personalising your product in your order process - upgrading the warranty or guarantee, adding a larger battery or a special waterproof coating (whatever your product may be).  This may work well on your site, but it may also be too complex.  Your customer experience data may show that users want more options for customising and configuring their purchase - but is this the best way to do it?  Let's test!


I also briefly covered bad test ideas - things that should not be tested.  There's a short list:

Don't test obvious improvements such as bug fixes, broken links, broken image links, and spelling and grammar mistakes.  There's no point - the fix is a clear winner.

Don't test fixes for historic bugs in your page templates - for example, where you're integrating newer designs or elements (product videos, say) that weren't catered for when the layout was originally built.  The alignment of the elements on the page is a little off; things don't fit or line up vertically or horizontally.  These could be improved with a test, but that wouldn't address the main issue, which is that the page needs fixing.  The test will show the financial upside of making the fix (and this would be the only valid case for running it), but the bottom line is that a test will only prove what you already know.

I wrapped up my keynote by mentioning the need to select your KPIs for the test, and for that, I have to confess that I borrowed from a blog post I wrote earlier this year, which used a sporting example to explain metrics.
Presenting the "metrics in sport" slide, Image Credit: Aurelie Pols
I'm already looking forward to the next conference, which will probably be in 2015!

Tuesday, 4 November 2014

Building momentum in your online optimisation program (eMetrics UK)

At the end of October, I spoke at eMetrics London.  I was invited by Peter O'Neill to present at the conference, and I anticipated that I would be speaking as part of a track on optimisation or testing.  However, Peter put me on the agenda with the keynote at the start of the second day - a slot I feel very honoured to have been given.


Jim Sterne, my Web Analytics hero, presenting
Selfie: a quick last-minute practice
Peter O'Neill, eMetrics UK organiser
I thoroughly enjoyed presenting - I'm still learning about giving formal web analytics presentations (and probably always will be) - but for me the highlight of the Summit was meeting and talking with Jim Sterne, the Founding President and current Chairman of the Digital Analytics Association, and the Founder of the eMetrics Marketing Optimization Summit.  I've been following him since before Twitter and Facebook, through his email newsletter "Sterne Measures" - and, as he kindly pointed out when I mentioned this, "Oh, you're old!"  Jim gave a great keynote presentation on going from "Bits and Bytes to Insights", which has to be one of the clearest and most comprehensive presentations on the history and future of web analytics that I've ever heard.

My topic for the keynote was "Building momentum in your online optimisation program."  From my discussions at various other conferences, I've noted that people are no longer concerned with getting an online testing program started and overcoming the initial obstacles; instead, many analysts are struggling to keep one running.  I've previously blogged about getting a testing program off the ground, and this topic is more about keeping it up in the air.

While putting the final parts of the presentation together, I was determined to re-use as little material from my blog as possible.  The emphasis of my presentation was on how to take the first few tests and move towards critical mass as quickly as possible - the point where test ideas and test velocity increase enough that there's continuous, ongoing interest in your tests, winners and losers alike, so that you can make a significant, consistent improvement to your company's website.


I'm just getting resettled back into the routine of normal work, but I'll share the key points (including some parts I missed) from my presentation in a future blog post as soon as I can.