It occurred to me recently that there are some similarities between playing a game of Chess and optimising a website through testing (apart from the considerable differences between them).
Many years ago, when I started this blog, I envisaged it as a place where I would type up my chess games, with commentary, lessons and points for improvement. A glance through my archive of blog posts will quickly show that the blog really hasn't matched its original plan. It evolved and changed, in particular in 2011, when I shared my first few posts about web analytics and realised that I could attract significantly more readers by blogging about web analytics than I ever had with Chess. I figure that Chess is a much more mature subject, with many more people who are considerably more experienced than me, whereas web analytics (and especially my main area of interest, testing) is a much younger field, with scope for sharing ideas and experiences that are still novel and interesting. However, there are some similarities between the two - here are a few thoughts.
Strategies and Tactics
Plans in Chess can be broadly separated into strategies (long-term aims) and tactics (short-term, two-or-three-move plans). The long-term aim is to checkmate your opponent's king, and as the game progresses that becomes the more prominent goal. In the short term, as you work through the opening moves, you'll identify opportunities to capture your opponent's pieces, to put your pieces on good squares and to restrict your opponent's chances of beating you. Some of these are short term, some are long term. For example, it may be possible to win one of your opponent's bishops with a cunning trap (if your opponent is not vigilant). This will make it easier to achieve your long-term goal of winning the game by checkmating your opponent.
[Cartoon: Black prepares a tactic.]
Testing offers the same range of opportunities: are you looking for quick wins (who isn't?) or are you planning to redesign an entire page, or even an entire site? Will you be implementing wins as soon as you have confirmed results, or are you going to iterate and try to do even better? Are you compiling wins to launch in one big bang, or are you testing, implementing and then repeating? How far ahead are you planning? Neither approach is necessarily better, as long as you're planning, and everybody is agreed on the plan!
Aims and Goals
As I've just mentioned, in Chess there's a clear goal: checkmate your opponent's king, by threatening to capture it and leaving your opponent with no way to escape. The aim for your online testing program may not be so clear cut, and if it isn't, then it might be time to settle on one single aim for it. You might call it the mission statement for your optimisation program, but whatever you call it, it's important that everybody who's involved understands the purpose of the testing team. The same applies to each test, as I've mentioned before - each test should have a clear aim, one that everybody understands and that can be measured in some clearly-defined way.
[Cartoon: White threatens checkmate.]
Values and KPIs
Each piece in Chess has a nominal value; the values aren't precise, but they provide a meaningful comparison of the strength and ability of each piece. For example, the rook is worth five pawns, the knight and bishop are worth three pawns each, and the queen is worth nine. This enables players to evaluate a position quickly and say which player is winning, or whether the position is roughly equal. It's slightly more complicated than that, as you have to take the placement of the pieces into consideration and so on, but a quick comparison of the total material value each player has will give a good idea of who's winning.
[Cartoon: Rooks are worth five pawns; bishops are worth three pawns and queens are worth nine.]
This also means that it's possible to determine if a plan or a strategy is likely to win and is worth pursuing. It may be possible to trap your opponent's rook (worth five pawns), but if doing so will mean losing two knights (each worth three pawns) and a pawn, then the trap is not really beneficial to you. If, on the other hand, you could trap your opponent's king (winning the game) at the cost of two rooks and a knight, then that's definitely worth doing.
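To make that arithmetic concrete, here's a minimal sketch in Python. It's purely illustrative - the values come from the text above, but the function names and the trade comparison are my own.

```python
# Conventional piece values, as given above (measured in pawns).
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def material_score(pieces):
    """Total material value for a list of piece names."""
    return sum(PIECE_VALUES[piece] for piece in pieces)

def trade_is_favourable(gained, lost):
    """A trade is worth making if what you win outweighs what you give up."""
    return material_score(gained) > material_score(lost)

# The trap from the text: winning a rook (5) at the cost of
# two knights and a pawn (3 + 3 + 1 = 7) is a bad deal.
print(trade_is_favourable(["rook"], ["knight", "knight", "pawn"]))  # False
```

(Trapping the king doesn't fit this arithmetic, of course - checkmate ends the game, whatever the material count.)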
The key performance indicator for a game of Chess is your opponent's king, and if you can measure how close you are to capturing (or checkmating) your opponent's king, then you can see how close you are to winning the game. You also need to keep your own king safe, but that's where the analogy breaks down :-)
[Cartoon: White wins despite less material.]
In online testing, your plan, your strategy and your tests each need to have KPIs. Once you've established your long-term aim, you can set KPIs against each test which are connected to achieving it. If you want to improve the return on investment (ROI) of your online marketing, you could look at the landing page: reduce the bounce rate and the exit rate, and encourage more people to move further into your site and view your products. Alternatively, you could look at improving conversion of visitors from cart (basket) to checkout, or perhaps the flow of visitors through your checkout process. Providing you can tie each of your tests and tactics to the overall strategy ("improve ROI for online marketing"), you can measure whether or not it's succeeding.
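As a rough illustration, here's how a couple of those KPIs might be computed from raw counts. Everything here is hypothetical - the figures and field names are invented for the sketch.

```python
def bounce_rate(single_page_visits, total_visits):
    # Share of visits that left from the landing page without going further.
    return single_page_visits / total_visits

def cart_to_checkout_rate(checkouts_started, carts_created):
    # Share of carts that progressed into the checkout.
    return checkouts_started / carts_created

# Invented figures for a control and a test recipe:
control = {"visits": 10_000, "bounces": 5_200, "carts": 900, "checkouts": 450}
recipe  = {"visits": 10_000, "bounces": 4_700, "carts": 950, "checkouts": 500}

for name, d in [("control", control), ("recipe", recipe)]:
    print(name,
          f"bounce rate: {bounce_rate(d['bounces'], d['visits']):.1%},",
          f"cart-to-checkout: {cart_to_checkout_rate(d['checkouts'], d['carts']):.1%}")
```

Each function maps straight onto one of the tactics above, and both roll up to the same "improve ROI" strategy.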
Ranking your KPIs in order of importance also matters - as we saw in Chess, if you can win your opponent's pieces but lose several of yours in the process, then it's probably not a good idea. In testing, what would you do if your test recipe had a worse bounce rate but higher overall conversion (a situation that's not impossible)? Which is the more important metric - conversion, or bounce rate? Would your answer be the same if it was improved conversion but lower revenue (people not spending as much per order)? Are you going to capture your opponent's knight but lose your queen?
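One way to arbitrate conflicts like that is a composite metric such as revenue per visitor, which folds conversion and order value into a single number. A sketch, with invented figures:

```python
def revenue_per_visitor(visits, orders, avg_order_value):
    # Conversion (orders/visits) and spend per order, combined into one KPI.
    return orders * avg_order_value / visits

control_rpv = revenue_per_visitor(visits=10_000, orders=300, avg_order_value=50.0)
recipe_rpv  = revenue_per_visitor(visits=10_000, orders=330, avg_order_value=42.0)

# Conversion improved (300 -> 330 orders) but order value fell (50 -> 42),
# so the recipe actually takes less money: the knight captured, the queen lost.
print(f"control: {control_rpv:.2f} per visitor, recipe: {recipe_rpv:.2f} per visitor")
```

Whether revenue per visitor is the right arbiter is itself a judgment call - the point is to decide the pecking order before the test reports, not after.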
Win, lose or draw?
In Chess, there are clear rules that determine the outcome of a game. Either one player wins (so the other loses), or it's a draw, and there are various ways of drawing: by agreement (both players decide neither can win); by stalemate (the player to move is not in check but has no legal move); or by a position where it's clear that neither player has enough material to checkmate the other.
[Cartoon: Queen takes King: checkmate!]
I hate losing at Chess. However, truth be told, I'm not much above average as a Chess player, and I'm the weakest player at my club (this has not deterred me, and I still play for fun). This means that I get plenty of chances to analyse my losses and see where I could improve. Do I make the same mistakes in future games? Not usually, no.
However, I'm not satisfied to stay as the weakest player in my club - I just see this as an opportunity to do some giant-slaying in my future matches. I read books, I visit Chess websites, I practise against other people and against computers, but I especially review my own games. Occasionally, I win. And do I analyse my winning games? Absolutely - I may have already seen during the game that my opponent missed a chance to beat me, but did I also miss a chance to win more easily?
In online optimisation, the rules for calling a test a win, a loss or a draw are still up for debate, and they vary between companies - and so they should. Each company will have its own testing program, with its own tactics, strategies and requirements. Do you want 99.9% confidence that a test will win, or do you want some directional test data to support something you already believed based on other data sources? How quickly do you want to run the next test, or implement the winner? Providing that the rules for calling the win, lose or draw are agreed in advance, I might even suggest that they could vary between tests. This is, of course, totally different from Chess, where the centuries-old rules of the game clearly state the requirements for a win or a draw. Otherwise, though, I think it's fair to say that KPIs, metrics and strategy have their approximate equivalents in pieces, pawns and plans - and that thinking and planning are definitely the way forward!
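(A footnote on that 99.9% figure: one common way to put a confidence number on a test result is a two-proportion z-test on conversion rates. The sketch below is illustrative only - the counts are invented, and companies differ on which test and which threshold they use.)

```python
from math import sqrt
from statistics import NormalDist

def confidence_recipe_beats_control(conv_a, n_a, conv_b, n_b):
    # Two-proportion z-test: how confident can we be that the recipe's
    # conversion rate (b) is genuinely higher than the control's (a)?
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return NormalDist().cdf(z)  # one-sided confidence

# Invented counts: 400 conversions from 10,000 control visits,
# 460 conversions from 10,000 recipe visits.
print(f"{confidence_recipe_beats_control(400, 10_000, 460, 10_000):.1%}")
# ~98.2% - a clear directional signal, but short of a 99.9% bar.
```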
Chess cartoons taken from the 1971 printing of Chess for Children, originally published 1960.