Golden Rules of A/B Testing

May 1, 2016

A/B Testing in email marketing has been around for nearly as long as email itself. You will often see the mantra 'Test, test and test again', suggesting that you should always be testing everything. We don't agree, despite our own platform having the most comprehensive range of Split and Multi-variate Testing tools for email marketing.




Here are our five tips for making the most of A/B testing without wasting your efforts:


Test just one thing at a time


This is THE golden rule of A/B split testing. Make sure each test changes just one thing in one particular area; then you can be sure that any difference in performance is down to that change. If you want to test several things, either plan them as separate tests or look to Multi-variate Testing, which also lets you see how the different changes perform in combination. Everything else has to stay the same, though, right down to when you send the email.
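
The article doesn't describe a mechanism for splitting the list, but as a minimal sketch, one common way to keep everything else identical is to assign each recipient to a variant deterministically, so the same person always lands in the same group for a given test. The function name and test id below are hypothetical:

```python
import hashlib

def assign_variant(email: str, test_id: str) -> str:
    """Assign a recipient to variant A or B, stably for a given test.

    Hashing email + test_id keeps a recipient in the same group across
    resends of this test, while re-randomising for the next test.
    """
    digest = hashlib.sha256(f"{test_id}:{email}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Split a (made-up) list; only the element under test should then differ.
recipients = ["ann@example.com", "bob@example.com", "cat@example.com"]
groups = {"A": [], "B": []}
for email in recipients:
    groups[assign_variant(email, "2016-05-newsletter-subject")].append(email)
```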


What is a suitable sample size?


There are plenty of calculators out there to determine how many email addresses each sample should be sent to in order to get reliable results. Arbitrary numbers like 2,500 are often bandied about, but it isn't just the size of the send that determines how reliable the results are: it also depends on how engaged the audience is, the baseline results you expect, and how large a difference you are trying to detect.
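
To make that concrete, here is a standard two-proportion sample size calculation; this is a minimal sketch assuming a z-test on open rates, not a method the article prescribes. It shows how quickly the required size outgrows an arbitrary figure like 2,500 as the difference you want to detect shrinks:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline: float, uplift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Recipients needed per variant to detect an absolute uplift
    in a rate (e.g. opens) with a two-proportion z-test."""
    p1, p2 = baseline, baseline + uplift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided 5% level
    z_beta = NormalDist().inv_cdf(power)            # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Detecting a 2-point lift on a 20% open rate:
print(sample_size_per_variant(0.20, 0.02))  # 6510 per variant, not 2,500
```

A more engaged list with a higher baseline rate, or a larger expected difference, brings the number down sharply, which is the point: the right sample size depends on your audience and expected results, not on a fixed figure.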


How long do you wait for results to be conclusive?


A common misunderstanding is that an hour is all you need to make a reliable decision on a subject line test. In fact, unless the change is fairly significant, such as a different offer or design, we normally recommend waiting at least 12 hours, which usually means a full 24 hours until the next suitable send time comes along before making your decision. Some systems claim they can make the decision earlier, but you have to factor in the reliability of the results, which is why many A/B tests aren't practical to send to a small sample first and then roll out to the rest of the database.
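
The article doesn't name a statistical method, but one standard way to check whether an early result is already reliable is a pooled two-proportion z-test on the counts so far; the figures below are invented for illustration:

```python
import math
from statistics import NormalDist

def two_proportion_p_value(opens_a: int, n_a: int,
                           opens_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in open rates (pooled z-test)."""
    p_a, p_b = opens_a / n_a, opens_b / n_b
    p_pool = (opens_a + opens_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# One hour in, variant A looks like a clear winner...
print(two_proportion_p_value(520, 2500, 480, 2500))  # ~0.16: not significant
```

Checking the numbers repeatedly and declaring a winner as soon as they look good also inflates the chance of a false positive, which is another reason to wait for a full send cycle rather than calling the test after an hour.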


Is it significant?


When designing your tests, don't get too hung up on the detail. Make sure there is a real difference between the subject lines, copy, calls-to-action or other test areas, one that someone will actually notice.


Are the learnings useful and repeatable?


Another common mistake is to run A/B tests that produce some nice differences but simply can't be used again. Subject line tests often fall into this trap: you test three or four subject lines on your latest newsletter, but when you try to repeat the test next time, the results are different because the actual content in the subject lines is different.
