Stephanie Donelson

Content & social media marketing manager

To test or not to test

A/B testing seems like a pretty common practice in digital marketing these days, but is it always in your best interest?

A/B testing can be valuable for improving your conversions, generating more leads, and enhancing your email campaigns, and it helps you make future decisions based on data.

A/B testing, or split testing, is a marketing experiment where you split your audience into two groups and serve each a variation of the same campaign. One group sees version A, the other sees version B, and then you compare which version performs best by whatever metrics you've set up in advance.

For a successful A/B test, you must change only one variable.
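
One simple way to enforce that rule is to build both versions from the same base and override exactly one field. Here's a minimal Python sketch; the campaign fields, sender, and copy are all made up for illustration:

```python
# Both versions start from the same base campaign; only the subject line differs.
# All field names and copy here are hypothetical.
BASE_CAMPAIGN = {
    "from_line": "Stephanie at Example Co",
    "body_html": "<p>This week's tips on email marketing...</p>",
    "cta_label": "Read the guide",
}

version_a = {**BASE_CAMPAIGN, "subject": "5 tips to improve your emails"}
version_b = {**BASE_CAMPAIGN, "subject": "Are your emails underperforming?"}

# Sanity check: the two versions differ in exactly one key.
diff = {k for k in version_a if version_a[k] != version_b[k]}
assert diff == {"subject"}
```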

What kinds of variables should you be testing?

Landing page and form A/B tests

  • Headlines or subheadings
  • Copy
  • Call to action (CTA)
  • Images or video
  • Layout
  • Colors

Email A/B tests

  • From line
  • Subject line
  • Body content
  • CTAs

These are all great things to test, but not all tests are valuable or produce statistically meaningful results. Many marketers have simply fallen into the habit of A/B testing even when they don't know what they're hoping to get out of the test. So, how can we make sure the A/B tests we're running provide real results and actionable insights?

5 best practices for A/B testing


1. Test the right things

Focus your efforts on testing the right marketing tactics or campaigns. I concentrate on lead gen tactics, like lead magnet pages (ebooks, guides, checklists, etc.), webinar or event registration pages, and the contact us page.

What are you trying to achieve with your A/B test? Make sure what you’re hoping to see matches where you’re running your test. 

2. Have a big enough sample size

Only have 25 people on a segmented email marketing list? That’s nowhere near a large enough sample size to run a test. You need to have enough people to get reliable results.

For emails, I tend to run a 15/15/70 split, so 15% get version A, 15% get version B, and the remaining 70% are sent the winner.
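
Most email platforms handle this split for you, but the logic is simple enough to sketch yourself. A rough Python version, assuming your list is just a set of contact IDs (the function name and percentages are illustrative, not any particular platform's API):

```python
import random

def holdout_split(contacts, test_share=0.15, seed=7):
    """Shuffle a contact list and carve off 15% for version A and 15%
    for version B, leaving the remaining 70% to receive the winner later."""
    rng = random.Random(seed)   # fixed seed keeps the split reproducible
    shuffled = contacts[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_share)
    return (shuffled[:n_test],             # version A
            shuffled[n_test:2 * n_test],   # version B
            shuffled[2 * n_test:])         # holdout for the winning version

contacts = [f"user{i}@example.com" for i in range(2000)]
a, b, holdout = holdout_split(contacts)
print(len(a), len(b), len(holdout))  # 300 300 1400
```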

Use Optimizely’s free sample size calculator to see whether your list is large enough for the test to reach statistical significance.

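If you'd rather sanity-check the math yourself, the textbook formula for comparing two conversion rates is easy to code. A rough sketch in Python; note that commercial calculators, including Optimizely's, may use different or more sophisticated statistical methods, so treat this as a ballpark figure:

```python
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variant(p_baseline, p_expected, alpha=0.05, power=0.8):
    """Normal-approximation sample size for a two-proportion test.

    p_baseline: your current conversion rate (0.03 means 3%)
    p_expected: the rate you hope the variant will hit
    """
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided test at the 5% level
    z_beta = norm.ppf(power)           # 80% power by default
    p_bar = (p_baseline + p_expected) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_baseline * (1 - p_baseline)
                                 + p_expected * (1 - p_expected))) ** 2
    return ceil(numerator / (p_baseline - p_expected) ** 2)

# Detecting a lift from 3% to 4% needs roughly 5,300 people per variant.
print(sample_size_per_variant(0.03, 0.04))
```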

3. Statistical significance

Remember the days of college and talking about statistical significance? Most of us don’t use it in our everyday lives, but if we’re running A/B tests, statistical significance is very important. It simply means the difference we measured is real and not the product of random chance.

Here’s another free tool, this one by Visual Website Optimizer, that tells you whether your results are statistically significant.

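Under the hood, calculators like this run a hypothesis test on the two conversion rates. Here's a sketch of the classic two-proportion z-test in Python; VWO's own tool may use a different method (such as a Bayesian approach), so this is the textbook version rather than a re-implementation of their calculator:

```python
from math import sqrt
from scipy.stats import norm

def ab_p_value(conversions_a, n_a, conversions_b, n_b):
    """Two-sided two-proportion z-test; returns the p-value."""
    p_pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (conversions_b / n_b - conversions_a / n_a) / se
    return 2 * norm.sf(abs(z))  # sf is the survival function, 1 - CDF

# Example: 120 of 3,000 recipients converted on A vs. 160 of 3,000 on B.
p = ab_p_value(120, 3000, 160, 3000)
print(f"p-value: {p:.3f}")  # ~0.014, below the conventional 0.05 threshold
```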

4. Schedule and duration

Compare apples to apples by running your A/B tests over comparable periods. A Labor Day discount campaign can’t be compared to a regular newsletter. Make sure the timing and duration of the test are appropriate to yield reliable results.

Some tests only need to run for a few hours and others need to run for days, weeks, or months. I typically run email tests for three to four hours (based on when opens usually happen), but landing page or form tests need more time to evaluate their effectiveness.

5. One and done

As I mentioned earlier, a successful A/B test experiments on one single variable. Test headline A against headline B with everything else the same, or test CTA A against CTA B with everything else the same.

Set a goal or metric you’ll measure your results by, like:

  • Open rate
  • Click-through rate
  • Conversion rate
  • Website traffic increase
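
These metrics are all simple ratios; what matters is defining them the same way for version A and version B. A quick sketch with made-up numbers (teams define these denominators differently, so pick one convention and stick with it for both versions):

```python
# Hypothetical results for one version of an email test.
delivered, opens, clicks, conversions = 1500, 450, 90, 27

open_rate = opens / delivered            # 0.30 -> 30%
click_through_rate = clicks / delivered  # 0.06 -> 6% (some teams use clicks/opens)
conversion_rate = conversions / clicks   # 0.30 -> 30% of clickers converted
```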

Once you’ve set up your test, do not make changes in the middle of it. Let the test run its course so you can be confident it produced real, statistically significant data. Your results won’t be reliable if you make changes midway through or add new variables. By changing only one thing, you can be sure that variable is actually responsible for any difference you see.

Remember, you don’t have to set up A/B tests for every marketing campaign, and most likely shouldn’t. Only run tests when you have a big enough sample size, have a specific goal in mind (like lifting conversion rates because they’ve stagnated), and have a plan for how you’ll use the results.

What other best practices do you follow for A/B testing? Tell me in the comments or on Twitter!
