As martech magicians, we at CMO love A/B testing.
Any time we can ground our marketing in data is a good time, because the most effective decisions are the ones backed by data. Email marketing is no exception.
So, what exactly is A/B testing for email? It’s one of the best email marketing practices you can implement: the process of comparing two versions of a single email to see which performs better.
In other words, you create a piece of content and change one variable. Then you send each variation out to different contacts. For example, you might send Version 1 out to 25% of your database, and Version 2 out to a different 25% of your database. The version that garners the best results will then be sent to the remaining 50%.
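The split described above is easy to sketch in code. Here's a minimal Python illustration (our own hypothetical helper, not how any particular email tool implements it), using a 50% test group divided evenly between the two versions:

```python
import random

def ab_split(contacts, test_fraction=0.5):
    """Randomly split contacts into Version A, Version B,
    and a holdout group that will receive the winning version."""
    shuffled = contacts[:]  # copy so the original list is untouched
    random.shuffle(shuffled)
    test_size = int(len(shuffled) * test_fraction)
    half = test_size // 2
    group_a = shuffled[:half]           # e.g., 25% receive Version A
    group_b = shuffled[half:test_size]  # e.g., 25% receive Version B
    holdout = shuffled[test_size:]      # remaining 50% get the winner
    return group_a, group_b, holdout
```

The random shuffle matters: if you split your list alphabetically or by signup date, the two groups may differ in ways that skew the results.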
We (of course) love HubSpot’s concise definition of it:
A/B testing is “the inbound answer to a controlled experiment. A method of comparing two versions of something...to determine which one performs better.”
Why A/B Testing is One of the Best Email Marketing Practices
Small tweaks can have a big impact.
For example, when HubSpot decided to A/B test their sender, they found that using a personalized sender name resulted in a 0.53% higher open rate and a 0.23% higher CTR. That might not sound like a lot, but it translated into 130+ more leads.
Therein lies the power of A/B testing: When executed correctly, it can help you create better performing emails.
The alternative is one big guessing game that’s bound to be ineffective. As humans, we’re highly fallible and biased.
Put another way, we’re marketers; not mind-readers.
A/B testing empowers you to home in on what your audience actually responds to, not what you think they will. It stops the guessing games and makes the numbers do the work for you (pssst...that's the #1 secret behind the best email marketing campaigns).
And this is more important than ever because it’s exceedingly difficult to cut through the digital noise.
Think of it this way. The average person is exposed to somewhere between 4,000-10,000 media messages a day. They might pay attention to fifty. And remember four.
So, you need to be as effective as you can with your communications. You must optimize.
And you can’t optimize without iterating.
Enter A/B testing.
It’s easy, it’s inexpensive, and it will make you better at what you do.
Convinced?
Great. Let’s dive in.
The Best Email Marketing Process for A/B Testing
As with any experiment, the process for A/B testing harks back to the scientific method. Think back to high school:
- Make an observation
- Create a hypothesis
- Develop an experiment
- Collect data
- Analyze data
- Report
With one tiny tweak, this is the exact process we’ll use to create a stellar A/B test.
- Identify a goal
- Create a hypothesis
- Develop an experiment
- Collect data
- Analyze data
- Report
Identify a Goal: The Best Email Marketing Goals & A/B Variables
You must start by identifying a goal (or “winning metric,” in HubSpot’s terms), because without intent, you’re not optimizing for anything. Have a clear idea of what you want to achieve so you can measure your progress toward that goal.
When it comes to email, your goal will probably be one of the Big Three:
- Open rate. How many leads opened your email.
- Click-through rate (CTR). How many leads clicked the call to action (CTA).
- Click-to-open rate (CTOR). Of the leads who opened, how many clicked through.
Each of these tends to come with its own set of variables. For example, if you’re testing for open rate, you can play around with the:
- Subject line
- Preview text
- Sender
For click rates, you can explore:
- Design
- Format
- Fonts
- Text size
- Colors
- Images
- Copy
  - Body copy
  - CTA copy
- Email signature
Another, higher-level metric you might consider at some point is conversion rate (CVR). It will help you achieve your best email marketing campaigns to date, but it's a little trickier.
CVR is trickier because it’s a more ambiguous metric; the meaning of “conversion” varies. A conversion refers to a lead taking a desired action, and that could be downloading a guide, signing up for a free trial, buying a product, etc. Essentially, you have to determine what counts as a conversion for a given campaign.
To calculate CVR, you can use this equation:
CVR = (Number of Conversions / Number of Email Recipients) × 100
Make sure you’re using the number of people who actually received the email (i.e., don’t include bounces).
And note that this might be more effective when used at the campaign level as opposed to individual emails.
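As a quick sanity check on the formula, here's what that calculation looks like in Python (a sketch of our own; the function and variable names are ours):

```python
def conversion_rate(conversions, sent, bounces=0):
    """CVR = conversions / delivered recipients * 100.
    Bounced emails never arrived, so they're excluded from the denominator."""
    delivered = sent - bounces
    if delivered <= 0:
        raise ValueError("No delivered emails to measure against")
    return conversions / delivered * 100

# 40 conversions from 2,000 sent with 50 bounces:
# 40 / 1,950 * 100 ≈ 2.05%
```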
Create a Hypothesis
Whatever you want to measure, be sure that you’re only testing one variable at a time. We’re running an A/B test. Not an A/B/C test. Not a multivariate test. Just two options here.
Of those two options, make your best guess as to which version (A or B) will perform better. This gives you an opportunity to consider the user experience, and it may force you to question your assumptions when the results start rolling in.
But this isn’t about being right or wrong: ultimately, the goal is to create your best email marketing campaigns, so if you’re not correct in your assumption you still win (as long as you come away from the experiment with some actionable data).
Develop the Experiment
Next, consider where you want to test something. It needs to be an email with a decent amount of traffic, like:
- A welcome email
- A newsletter
- Churn prevention emails
- “Expiring soon” follow-ups for free trials
- Cancellation follow-ups (maybe)
However, you can implement A/B testing anywhere the audience is large enough.
In HubSpot, you need at least 1,000 contacts in order to send an A/B test.
This is because of statistical significance.
Statistical significance is the likelihood that a relationship between variables is caused by something other than chance.
With a smaller population (a.k.a. sample size), there’s less certainty as to whether a set of results is accurate; other factors may be at play. The larger the population, the more certain you can be that the results are conclusive rather than a product of chance, which is why HubSpot requires a minimum of 1,000 contacts.
If you actually want to see how accurate your test is, you can use an online statistical significance calculator.
If there is a statistically significant difference, you’re good to go. If not, you may need to either (a) change the variable you’re testing, or (b) test with a bigger sample size.
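If you'd rather check significance yourself than use a calculator, the standard approach for comparing two rates is a two-proportion z-test. Here's a self-contained Python sketch (our own illustration, using only the standard library; this is the textbook test, not any specific tool's implementation):

```python
from math import sqrt, erf

def two_proportion_p_value(successes_a, sent_a, successes_b, sent_b):
    """Two-sided z-test for a difference between two rates
    (e.g., open rates). Returns the p-value; a value below 0.05
    is conventionally treated as statistically significant."""
    p_a = successes_a / sent_a
    p_b = successes_b / sent_b
    # Pooled proportion under the null hypothesis (no real difference)
    pooled = (successes_a + successes_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
```

For example, 150 opens out of 500 versus 100 opens out of 500 yields a p-value well below 0.05, so that difference is unlikely to be chance; identical rates yield a p-value near 1.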
Collect & Analyze Data: The Best Email Marketing Process
In HubSpot, testing is easy.
Simply navigate to Marketing > Email > Create Email. On the left-hand sidebar, the icon at the bottom will be A/B if you have Marketing Hub Professional or Enterprise.
Click on it, name your B variation, and then you can toggle between your A and B versions with the button at the top.
On this tab, you’ll also be able to use the slider to determine the A/B distribution. Distribution refers to the ratio of your audience that will receive Version A, Version B, or the winning version.
Here, you’ll be able to set your winning metric. Remember from before that this is either your:
- Open rate, or “opens by delivered”
- Click-through rate (CTR), or “clicks by delivered”
- Click-to-open rate (CTOR), or “clicks by opens”
Then, set the test duration and—if the results are inconclusive—which version should be sent. Hit send and HubSpot will do the rest.
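Those three winning metrics are simple ratios. A tiny helper (ours, not HubSpot's) makes the definitions concrete:

```python
def winning_metrics(delivered, opens, clicks):
    """Compute the three common A/B winning metrics as percentages."""
    return {
        "open_rate": opens / delivered * 100,  # opens by delivered
        "ctr": clicks / delivered * 100,       # clicks by delivered
        "ctor": clicks / opens * 100,          # clicks by opens
    }

# Example: 1,000 delivered, 250 opens, 50 clicks
# -> open rate 25%, CTR 5%, CTOR 20%
```

Note that CTOR is always at least as high as CTR, since its denominator (opens) can never exceed delivered emails.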
Report: Keep Track of Results to Create the Best Email Marketing Campaigns
The smaller your database, the longer it will take to get results. Testing takes time, because a certain number of eyeballs have to see your content before statistical significance is achieved.
When the results do start rolling in, keep track of everything you learn in a good old fashioned spreadsheet. Tease out the lessons from each test you run, and you’ll gain some serious insight into your audience and how to market to them.
And the work is never done—when you’re out to create the best email marketing campaigns that you can, there’s always more you can test, refine, and optimize.