
How to A/B test email campaigns

January 15, 2019

Richard

To improve the performance of your email campaigns, it is important to be data-driven. When you are sending email campaigns, be proactive and run A/B tests constantly; there is always something new to test. First, you need to define what you are going to test. Some variables we usually test are:

  • Subject line
  • Body of the email
  • Value proposition
  • Call-to-action
  • Images
  • Examples of clients
  • Audience
  • Time of day
  • Day of the week
  • Job title
  • Industry

For each one of these variables there is a metric you should track:

  • Open Rate - measures the percentage of people who opened your email. If you are A/B testing subject lines, this is the metric to track.
  • Reply Rate - measures the percentage of people who replied to your email. Track this metric if you are A/B testing changes to the body of your email (e.g. value proposition, call-to-action, examples of clients) or your audience.
  • Conversion Rate - measures the percentage of people who converted on a given goal (e.g. scheduling a call, signing up for a free trial, clicking on a link). The variables with the most impact on this metric are the call-to-action, the audience, job title, and industry.
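
As a rough illustration, here is how these three rates might be computed from raw campaign counts. This is a minimal sketch; the function and field names are hypothetical, not taken from any particular tool.

    def campaign_metrics(sent, opened, replied, converted):
        # Each rate is the share of recipients who took the corresponding action
        return {
            "open_rate": opened / sent,          # track when testing subject lines
            "reply_rate": replied / sent,        # track when testing body or audience
            "conversion_rate": converted / sent, # track when testing CTA, job title, industry
        }

    print(campaign_metrics(sent=200, opened=96, replied=70, converted=22))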

Once you decide what you want to A/B test, it is important to define the sample size for your test. You can use an online tool such as the Sample Size Calculator to compute it automatically. This tells you how many emails you need to send in order to determine which hypothesis performs better.

Suppose you are testing two different calls-to-action; in that case, the metric you should be tracking is the reply rate. You should define:

  • Metric (Baseline) - the current reply rate of your email campaigns.
  • Minimum detectable effect - the smallest relative difference between the reply rates of the two calls-to-action that you want your test to be able to detect.
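
For reference, here is a minimal sketch of the math such calculators typically run, using the standard two-proportion power formula. The alpha and power defaults below are assumptions; calculators differ in their defaults (especially power), so their outputs will not necessarily match this function's.

    import math
    from scipy.stats import norm

    def sample_size_per_variant(baseline, relative_mde, alpha=0.05, power=0.80):
        # Reply rate today (p1) and the improved rate we want to detect (p2)
        p1 = baseline
        p2 = baseline * (1 + relative_mde)
        p_bar = (p1 + p2) / 2
        z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
        z_beta = norm.ppf(power)
        n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
              + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
             / (p2 - p1) ** 2)
        return math.ceil(n)

    # 35% baseline reply rate, 30% relative minimum detectable effect
    print(sample_size_per_variant(0.35, 0.30))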

Once you choose the variable to test, the metric to measure and the sample size, the next step is to create a strategy. Best practices are:

  • Change only one variable at a time. If you test a new call-to-action and a new value proposition at the same time, you won't be able to decouple their effects on your results, so you won't know which variable is responsible for a given outcome.
  • Split your leads into equal, randomly assigned batches when testing a new variable (see the sketch after this list). When running A/B tests, never focus all of your effort on a single assumption. If you have a baseline approach, run your tests against that approach; if you don't, test different assumptions simultaneously.
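
A simple way to do the random split might look like the following sketch (not tied to any particular tool):

    import random

    def split_into_batches(leads, n_variants, seed=42):
        # Shuffle a copy so assignment is random, then deal leads round-robin;
        # the fixed seed keeps the split reproducible
        shuffled = list(leads)
        random.Random(seed).shuffle(shuffled)
        return [shuffled[i::n_variants] for i in range(n_variants)]

    batches = split_into_batches(range(340), n_variants=2)
    print(len(batches[0]), len(batches[1]))   # 170 170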

To decide which hypothesis is performing best, the difference in the metric you are measuring needs to be at least equal to the minimum detectable effect. If you can't achieve a significant difference, you can't say for sure that one variant is better than the other; when that happens, keep iterating and test new assumptions.
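
In code, that decision rule might look like the following hypothetical helper, assuming the minimum detectable effect is expressed as a relative lift over the baseline:

    def exceeds_mde(baseline_rate, variant_rate, relative_mde):
        # Relative lift of the variant over the baseline
        lift = (variant_rate - baseline_rate) / baseline_rate
        return lift >= relative_mde

    # A 46% variant reply rate over a 35% baseline is a ~31% relative lift
    print(exceeds_mde(0.35, 0.46, relative_mde=0.30))   # True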

Let's take a look at the example below. Suppose you decide to test two different calls-to-action, with a baseline of 35% (the average reply rate of your email campaigns), a minimum detectable effect of 30%, and a statistical significance of 95%; the calculator returns a sample size of 170. This means you have to send 170 emails for each call-to-action and analyze the results.

After testing both calls-to-action, imagine that this was the result of your test:

[Figure: reply rates for call-to-action 1 vs. call-to-action 2]

As you can see, call-to-action 2 had a higher reply rate than call-to-action 1. Since we set our minimum detectable effect at 30% of our baseline (35%), and the lift between the two calls-to-action exceeded that threshold, call-to-action 2 is the winner.
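
If you want to double-check significance yourself, a standard two-proportion z-test works; the sketch below uses statsmodels, and the reply counts are made up for illustration:

    from statsmodels.stats.proportion import proportions_ztest

    replies = [60, 82]   # hypothetical reply counts for CTA 1 and CTA 2
    sent = [170, 170]    # 170 emails sent per variant, as in the example above
    z_stat, p_value = proportions_ztest(replies, sent)
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
    # If p < 0.05, the difference is significant at the 95% confidence level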

Summary

To run successful A/B tests, you need to clearly define your goals, decide which metrics you are going to measure, and create different scenarios to test. Analyze your results and keep iterating. Make A/B testing a recurring practice in order to improve the performance of your email campaigns.
