
How to avoid 7 mistakes that limit the success of your A/B test

A/B testing your email campaigns is a great way to uncover high-impact changes in your subject lines, email designs, landing pages, and more, and to improve your conversions. But only if you use it correctly. To a certain degree, A/B testing is very simple: you present two variations to two different groups of your subscribers and measure which one performs better. However, it’s also rather easy to mess things up and either draw the wrong conclusion or undermine your results completely. Steer clear of these 7 common mistakes to make sure that your email A/B tests yield the best possible results.

Mistake #1: not having a clear hypothesis

Don’t start split testing just for the sake of seeing what might work. Work out an A/B testing hypothesis: a theory that explains why you’re getting particular results on a web page and how you can improve those results. Know what you’re trying to achieve and have a clear reason why you’re testing. It will help you achieve the desired results.

Mistake #2: split testing too many aspects

This is one of the key A/B testing mistakes lots of people make: trying to split test too much at once. It may seem like you’re saving time by testing several aspects in the same run, but in the end you’ll find that you aren’t. You’ll have a tough time sorting out which change was responsible for the results. Follow one of the basic rules of A/B testing: test one item at a time, and test it against another version of that same item.

Mistake #3: isolating your testing metrics from your overall goal

The main purpose of an email campaign is to generate email conversions or sales conversions. Keep those goals in mind when you’re A/B testing, too. That’s exactly where some marketers go wrong. They think subject lines only affect opens, email content only influences clicks, and only landing page content can lead to conversions. However, the different stages of an email interaction don’t operate in isolation. They all work together, because your contacts experience them all together. Once you know that, you realise that the goal of a subject line isn’t to generate opens. It’s to generate openers who are likely to convert.
Not convinced? It’s easy to check for yourself. Just run a few subject line A/B tests and look at how the different subject lines affect activity all the way down the email interaction funnel.
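
To make that concrete, here is a minimal Python sketch of such a funnel comparison. The subject lines and numbers are made up purely for illustration:

# A minimal sketch with hypothetical numbers: two subject lines compared
# not just on open rate, but all the way down to conversions.

variants = {
    "A: 'Last chance: 20% off'": {"sent": 5000, "opens": 1250, "clicks": 300, "conversions": 45},
    "B: 'Your cart misses you'": {"sent": 5000, "opens": 1100, "clicks": 320, "conversions": 58},
}

for name, v in variants.items():
    open_rate = v["opens"] / v["sent"]
    conv_rate = v["conversions"] / v["sent"]   # conversions per email sent
    print(f"{name}: open rate {open_rate:.1%}, conversion rate {conv_rate:.1%}")

# Variant A "wins" on opens, but B produces more converters per email sent --
# exactly the effect described above: subject lines influence the whole funnel.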

Mistake #4: getting your timing wrong

When A/B testing, timing is everything, and getting it wrong often leads to these common A/B testing mistakes…

Not running the test long enough

You need to run an A/B test for a certain timespan to achieve statistical significance. One hour probably won’t suffice. It’s better to aim for a testing period of at least 4 hours.
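
What counts as “significant”? As a rough illustration (the numbers are invented), here’s how you could check an observed difference with a standard two-proportion z-test, using only Python’s standard library:

from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results after a too-short, one-hour test window:
z, p = two_proportion_z_test(conv_a=30, n_a=400, conv_b=42, n_b=400)
print(f"z = {z:.2f}, p = {p:.3f}")  # p > 0.05: not significant yet, keep running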

Comparing completely different time periods

If you get most of your website traffic on a Friday, it doesn’t make sense to compare split testing results for that day with the results of a low-traffic day. And a week may even be too narrow a window. If you’re an eCommerce retailer, you can’t compare split testing results from the holiday boom with the results you get during a ‘normal’ month.

Mistake #5: testing too small a selection of your audience

Choosing the right testing period is crucial, but selecting the right number of people from your database is equally important. Basically, you need to test your campaigns with enough people to get meaningful results. If your testing audience is too small, the outcome of your test might be determined by pure chance.
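
How many people is “enough”? A common approach is to size the test for the smallest lift you care about detecting. Here is a minimal sketch using the standard two-proportion sample-size formula; the 20% baseline open rate and the hoped-for lift to 23% are assumptions for illustration:

from math import sqrt, ceil
from statistics import NormalDist

def sample_size_per_variant(p_base, p_target, alpha=0.05, power=0.80):
    """Recipients needed per variant to detect a move from p_base to p_target
    with a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # 0.84 for 80% power
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_base - p_target) ** 2)

print(sample_size_per_variant(0.20, 0.23))  # roughly 3,000 recipients per variant

The smaller the lift you want to detect, the larger each test group has to be, which is exactly why a tiny testing audience so often produces results that are just noise.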

Mistake #6: skipping an A/B testing plan

Ad hoc A/B testing is inefficient, sporadic and unfocused. To get the most out of your A/B tests, set up an A/B testing plan. Develop a testing schedule which records:
  • the hypothesis you are trying to confirm
  • which emails you are using to test each theory
  • the results of each test and how they impact your future testing plans
 
This plan will help you find out whether or not your hypothesis is correct.
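
One lightweight way to keep that plan honest is to record it as structured data rather than in your head. A minimal sketch; the field names are illustrative, not a prescribed schema:

from dataclasses import dataclass
from typing import List

@dataclass
class ABTestRecord:
    """One entry in an A/B testing plan."""
    hypothesis: str        # the theory you are trying to confirm
    emails: List[str]      # which emails you are using to test it
    result: str = ""       # filled in after the test has run
    next_steps: str = ""   # how the outcome impacts future testing plans

plan = [
    ABTestRecord(
        hypothesis="A personalised subject line yields more converting openers",
        emails=["January newsletter", "January promo"],
    ),
]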

Mistake #7: not confirming your results

Always confirm the results of what you’re testing. A single A/B test is never conclusive for all time. In the short term, any change you see may be the result of the novelty effect: subscribers are attracted to what’s new, which can give any change you make an artificial lift. That novelty effect will eventually fade. So if you run the same test two or three times over a certain period, you’ll level out any novelty effect and see the true impact of the change(s) you’ve made.
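
In practice, confirming a result can be as simple as re-running the test and checking whether the winner stays the winner. A sketch with made-up numbers:

# Hypothetical repeated runs of the same subject-line test, weeks apart.
# Each tuple: (conversions_a, sent_a, conversions_b, sent_b)
runs = [
    (45, 5000, 58, 5000),   # initial test -- B wins, maybe novelty
    (40, 5000, 55, 5000),   # re-run two weeks later
    (43, 5000, 52, 5000),   # re-run a month later
]

winners = ["B" if cb / nb > ca / na else "A" for ca, na, cb, nb in runs]
print(winners)  # ['B', 'B', 'B'] -- B keeps winning once novelty has faded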

Conclusion

A/B testing will tell you a lot, if not everything, about what works and what doesn’t in your email campaigns. Always take the right parameters into account when you’re planning an A/B test. Respecting them will lead to more representative figures, clearer insights and, in the end, better results.

Posted on Jan 8, 2019