A/B Testing – What Not To Do

by Tom Farrell, June 8, 2016

Everybody is A/B testing these days. Or at least, it sometimes feels like that. That’s a positive in one sense: making decisions on the back of user data rather than on the word of ‘the most persuasive person in the room’ is certainly a step in the right direction. Particularly on mobile, where one false move can mean the end of the consumer relationship, it’s worth getting the experience right.


But here’s the truth: many organizations are not getting the value that they feel they should from their testing programs. If that sounds familiar, here’s a very quick guide to some of the more obvious pitfalls that can scupper A/B testing programs – and a few ways to ensure you avoid them. We’ve worked with some of the world’s leading digital and mobile businesses, so we know what we’re talking about!


Insufficient Analysis of Results


Many A/B testing platforms handle the operational side of running tests well enough, but an unsettling number fall short on what comes after: proper analysis of the resulting numbers. Here are two clear examples of what to avoid:



  • Simply presenting numbers without any analysis whatsoever. Don’t judge A/B test results on raw numbers alone – they frequently tell you nothing.
  • Presenting a ‘winning’ variant based on uncertain calculations or in the absence of statistical significance.

Simply put: always ensure that someone who understands statistics is either assessing the product you’re using or doing the math on your raw numbers. If that isn’t happening, you are wasting your time!
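As a rough illustration of that math, here is a minimal sketch of a two-proportion z-test in Python. The conversion counts and the 0.05 threshold are assumptions chosen for the example, not figures from any real campaign.

    # Minimal significance check for an A/B conversion test (illustrative numbers).
    # Assumes samples are large enough for the normal approximation to hold.
    from math import sqrt, erf

    def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
        """Return the z score and two-sided p-value for a difference in conversion rates."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error of the difference
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))    # two-sided p-value from the normal CDF
        return z, p_value

    # Hypothetical results: variant A converted 120 of 2,400 users, variant B 150 of 2,380.
    z, p = two_proportion_z_test(120, 2400, 150, 2380)
    print(f"z = {z:.2f}, p = {p:.3f}")   # only call a winner if p clears your pre-agreed threshold (e.g. 0.05)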

Designing Tests That Don’t Produce Any Real Insight

Running A/B tests is great, and marketers enjoy seeing and reporting on real results. But those results are pointless unless they are teaching the organization something new. To really make improvements within a company, you’ll need to take the right approach when designing your tests.



  • If possible, run any given A/B test against a small percentage of the total audience first (see the sketch after this list). This enables the remainder to be shown the winning variant, and thus makes the campaign itself more effective.
  • Ensure that the two variants are distinctly styled: photography vs cartoon, for example. If photography ‘wins’, this will help improve future designs and give you a ‘finding’ to refine further.
  • Start each A/B test with a clear idea of what you are testing or hope to learn, so that you’ll be examining all variants with that in mind. Don’t try to retrofit logic to a completed test!
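For the first point above, here is a minimal sketch of one way to split an audience deterministically so that only a small share of users sees the test. The user-ID format, the hashing approach and the 10% test share are assumptions for illustration, not anything prescribed here.

    # Stable bucket assignment: hash a persistent user ID so each user always
    # lands in the same bucket. Share sizes here are illustrative only.
    import hashlib

    def assign_bucket(user_id: str, test_share: float = 0.10) -> str:
        h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
        if h < test_share * 10_000 / 2:
            return "variant_a"     # first half of the test slice
        if h < test_share * 10_000:
            return "variant_b"     # second half of the test slice
        return "holdout"           # shown the winning variant once the test concludes

    print(assign_bucket("user-12345"))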

Ignoring The Results


You wouldn’t think that this would be a common mistake made by organizations when running A/B tests, but in our experience, it’s probably the number one reason why A/B testing programs fail.


In truth, few of us are innocent of this. So how do we make sure that our tests are really changing the way that we build mobile apps?


Before you begin anything, make sure you have a clear understanding of exactly what you are hoping to achieve. This makes it less likely that the results will be picked apart by people who don’t like the outcome (often driven by the biases and opinions of senior team members) and more likely that they will be accepted for what they are.


Remember not to place too much weight on ‘being right’ as it discourages contribution and teamwork and leads to defensiveness that will ultimately slow down your whole process.


Testing The Wrong Things


A common mistake in any A/B testing program is focusing too much energy on the little things. Of course small things can make a difference, but don’t let this stop you from thinking about the bigger picture.


When planning what to A/B test, think about where you can really make a difference to your app, or the aspects of your app experience that are closest to the points of monetization. It makes sense to initially focus on these areas.


Nothing is more likely to stall an A/B testing program than focusing only on superficial, visual aspects of an app. Instead, focus on rethinking core experiences and configurations, and on testing how these new variants alter app performance.


By doing this, rather than relentlessly optimizing what is already there, we can think bigger and make major changes to the whole structure and mechanics of the app.


Measuring The Wrong Things


When it comes to measuring the results of A/B testing, it’s often tempting to measure what’s easy rather than what’s important.


For example, it can be actively misleading to judge the success of a campaign on click-through rates rather than on the conversions the campaign set out to achieve.


It would certainly be easier to declare a clear winner based on nothing more than the number of clicks, but don’t forget that a variant can deliver outstanding click-through rates and rather poor conversions, so measure and determine the winner according to the metrics that really matter.


Ideally, start from revenue and work backward.
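To make that concrete, here is a small illustration with entirely made-up numbers, showing how the ‘winner’ changes depending on whether you rank variants by clicks, by conversions or by revenue per user.

    # Made-up campaign results: A 'wins' on click rate,
    # but B wins on conversions and on revenue per user.
    variants = {
        "A": {"users": 10_000, "clicks": 900, "conversions": 45, "revenue": 2250.0},
        "B": {"users": 10_000, "clicks": 600, "conversions": 60, "revenue": 3600.0},
    }

    for name, v in variants.items():
        click_rate = v["clicks"] / v["users"]
        conv_rate = v["conversions"] / v["users"]
        rev_per_user = v["revenue"] / v["users"]
        print(f"{name}: click rate {click_rate:.1%}, conversion {conv_rate:.2%}, "
              f"revenue/user ${rev_per_user:.2f}")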
