
50% or less
It's probably best to frame this as 'avoiding revenue loss'. Your company tested a new idea, and you prevented them from implementing it, saving them from losing a (potentially quantifiable) sum of money. You were, I guess, trying some new ideas, and hopefully pushed the envelope - in the wrong direction, but it was probably worth a try.
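As a very rough illustration of what 'potentially quantifiable' might look like (every figure below is a made-up assumption, not data from any real test), the avoided loss can be sketched in a few lines of Python:

```python
# Rough, illustrative estimate of revenue loss avoided by not shipping a losing variant.
# All of these figures are hypothetical assumptions for the sake of the arithmetic.

monthly_visitors = 500_000        # visitors who would have seen the change
baseline_conversion = 0.03        # current conversion rate (3%)
observed_lift = -0.05             # the variant converted 5% worse (relative)
average_order_value = 60.00       # in whatever currency you report in

lost_orders_per_month = monthly_visitors * baseline_conversion * abs(observed_lift)
avoided_loss_per_month = lost_orders_per_month * average_order_value

print(f"Orders saved per month:        {lost_orders_per_month:,.0f}")
print(f"Revenue loss avoided per month: {avoided_loss_per_month:,.2f}")
# -> roughly 750 orders and 45,000 per month under these made-up numbers
```

The point isn't the precision of the numbers; it's that a losing test stopped before rollout has a value you can put in front of stakeholders.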
Around 75%
If 75% of your tests are winning, then you're in a good position and probably able to start picking and choosing the tests that are implemented by your company. You'll have happy stakeholders who can see the clear incremental revenue that you're providing, and who can see that they're having good ideas.
90% or more
If you're in this apparently enviable position, you are quite probably running tests that you shouldn't be. You're probably providing an insurance policy for some very solid changes to your website; you're running tests that have such strong analytical support, clear user research or customer feedback behind them that they're just straightforward changes that should be made. Either that, or your stakeholders are very lucky, or have very good intuition about the website. No, seriously ;-)
Your win rate will be determined by the level of risk or innovation that your company are prepared to put into their tests. Are you testing small changes, well-backed by clear analytics? Should you be? Or are you testing off-the-wall, game-changing, future-state, cutting edge designs that could revolutionise the online experience?
What is the optimal test win rate? I'm not sure that there is one, but it will certainly reflect the character of your test program more than its performance.
Is there a better metric to look at? I would suggest "learning rate": how many of your tests taught you something? How many of them had a strong, clearly-stated hypothesis that was able to drive your analysis of the test (winner or loser) and lead you to learn something about your website, your visitors, or both? Did you learn something that you couldn't have identified through web analytics and path analysis? Or did you just say, "It won" or "It lost" and leave it there? Was the test recipe so complicated, or did it contain so many changes, that isolating variables and learning something was almost impossible?
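As a minimal sketch of how the two metrics differ (the test records and the 'learned_something' flag below are invented for illustration; in practice that flag would be a judgement made against each test's stated hypothesis at analysis time):

```python
# Illustrative comparison of win rate vs. learning rate across a set of tests.
# The records are hypothetical examples, not real test results.

tests = [
    {"name": "Hero banner copy",       "won": True,  "learned_something": True},
    {"name": "Checkout button colour", "won": False, "learned_something": False},
    {"name": "Simplified navigation",  "won": False, "learned_something": True},
    {"name": "Free delivery message",  "won": True,  "learned_something": True},
]

win_rate = sum(t["won"] for t in tests) / len(tests)
learning_rate = sum(t["learned_something"] for t in tests) / len(tests)

print(f"Win rate:      {win_rate:.0%}")       # 50% - looks mediocre on its own
print(f"Learning rate: {learning_rate:.0%}")  # 75% - the programme is still teaching you something
```

A programme like the one above looks unremarkable on win rate alone, but the learning rate shows it is still telling you something about your visitors.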
Whatever you choose, make sure (as we do with our test analysis) that the metric matches the purpose, because 'what gets measured gets done'.