It’s time to remove the mystery around A/B testing so you can get the most out of your email marketing.
Marketing is a mix of art and science. Before the Internet, marketing was mostly seen as an art. Have you seen Mad Men? Don Draper meditates on the beach and somehow comes up with an amazing idea that entices millions of people to buy Coke.
How did he know his idea would work? The answer is that he had no idea. But that has changed now that science has emerged to test the art of marketing.
Today, you can measure and test almost EVERYTHING in email marketing. This is one of the reasons why email is so effective. You can evaluate your ideas using an A/B test to compare two email ideas and determine which elements work best.
Most people skip testing because they don’t know how or what to test. If this is you, you are missing out on a huge opportunity to improve your campaigns.
A/B split testing is a way of evaluating and comparing two things. In email marketing, you simply set up two emails that are exactly the same except for one variable, such as a different subject line. You then send the two emails to a small sample of your subscribers to see which email is more effective.
Half of your test group receives Email A and the other half gets Email B. The winner is determined by what you are trying to measure. For instance, if you want more people to open your emails, you use open rate as your success metric.
You can then send the winning email to the rest of your subscriber list with the confidence that it is your best option.
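The random 50/50 split at the heart of an A/B test can be sketched in a few lines. This is a simplified illustration, not MailerLite's implementation; the `subscribers` list and the sample size are hypothetical stand-ins:

```python
import random

def ab_split(subscribers, sample_size):
    """Randomly draw a test sample and split it 50/50 into groups A and B."""
    sample = random.sample(subscribers, sample_size)  # random draw, no duplicates
    half = sample_size // 2
    return sample[:half], sample[half:]  # group A, group B

# Hypothetical list of 5,000 subscribers; test on a sample of 1,000
subscribers = [f"user{i}@example.com" for i in range(5000)]
group_a, group_b = ab_split(subscribers, 1000)
# Send Email A to group_a and Email B to group_b, then compare your success metric.
```

The key detail is that the split is random: any non-random rule (e.g. first half of the list vs. second half) risks baking a hidden difference into the two groups.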
MailerLite offers an A/B split test feature that guides you step-by-step through the entire process. We keep it simple.
It is time to go back to high school science class for a few minutes. An A/B test is basically an experiment where you are testing one variable within the email (e.g. a subject line).
In school, you learn to always follow the same process when conducting an experiment to ensure consistency and accuracy of the results. The same is true with A/B testing.
We’ve broken down email A/B testing into 5 steps that will help you avoid any over-complication and allow you to conduct a successful test.
The first step of any experiment is to define the problem (or question) that you are trying to solve and then determine what metrics will determine if the problem was solved.
For example, let’s say you need to go to the grocery store and there are two different routes. You want to take the fastest route, but you don’t know which it is. The problem you need to solve is finding the fastest route.
In this case, the success metric is time. How much time does each route take to get to the store?
There are a bunch of questions you can ask about your emails that would lead to improvements. The three most popular questions with A/B testing are:
How do I get more people to open my emails? Success Metric = Open Rate
How do I get more people to reply to my emails? Success Metric = Reply Rate
How do I get more people to click the links in my emails? Success Metric = Click-Through Rate
These are metrics that every email marketer wants to improve, and A/B testing gives them the insights to create emails that work better to achieve success.
Sometimes the problem you are trying to solve doesn’t have a clear success metric.
Let’s say you want to test the best time of day to send an email. If the email sent out at 11 AM has the best open rate, but the email sent at 4 PM has a better click-through rate, which time should you send your email?
The answer depends on your goal for the campaign. If your goal is for people to download an e-book, your success metric should be the number of downloads.
You have to look beyond the email metrics to the campaign's ultimate goal. What if the email with fewer click-throughs yielded more downloads? It is counterintuitive, but that often happens in email marketing.
Don’t assume an email is performing better by just looking at the top-level metrics.
Once you have a clear goal and success metrics, it’s time to have a bit of fun. You get to form a hypothesis of what you think you should change to achieve your goal.
A hypothesis is your theory of how to solve the problem. Let’s say you want to improve clicks within your newsletter. One hypothesis could be that rewriting the call-to-action with stronger language will increase clicks. Another could be that you need to make the button larger or change its color.
The only way to prove your hypothesis is to test!
What would your hypothesis be?
It depends on your email design and content. This is where your critical-thinking skills and creativity come into the experiment. You need to look at the various elements of your email and make some educated guesses.
If you are not sure what to change, you can run separate tests to evaluate each element. But don’t make the mistake of changing more than one variable per test. Every A/B test should only evaluate one thing.
That takes us to the next step of the process: setting up your control and test emails.
The purpose of testing your email is to learn and make improvements. To improve something, you need to start with an original version. This is your control email. It gives you a baseline of measurement, so you know what improvement looks like.
This email should stay exactly the same for every test. Your test email is the control email that includes one variation that you want to test. This will allow you to test different elements of the email and compare the results to this original email.
For example, if you are wondering whether your opens would improve by changing either the subject line or the preheader text, you would run two separate tests. One test would compare the original (control) email to a test email with a different subject line. The other would compare the original to a test email with a changed preheader.
Your control email tells you what happens typically. When you change something in your test email, you can easily compare to the control to see if that independent variable is better or worse.
If you are like most people, you want to test a few things to see what moves the needle. It is tempting to change two or three elements of your email to save time, but doing this will get you nowhere.
How would you know which of the three variables made the impact? It is impossible to tell. And what if changing only one variable would have made an even bigger impact?
This is where people tend to overcomplicate A/B testing. Changing more than one variable will leave you with more questions than answers, and you will not know what to do next. If you want to test multiple elements, conduct an individual test for each one against the control.
And don’t forget, you can test variables beyond the elements in your newsletter. Things outside the newsletter such as your audience and the time you send the campaign can have an impact. Make sure you send your A/B test to the same audience simultaneously to avoid introducing more variables such as time of day.
An A/B test is designed to isolate a variable to see if it moves the needle. Keep it simple and test one thing at a time.
You can test every aspect of your campaign, but that is not a good use of your time. Start by testing the common elements that will have a direct impact on your metrics.
While some variables might not be as recognizable, such as testing segments of your target audience or the best time to run a promotion, there are a set of standard email elements to test before you dig deeper.
Here are the top 5 variables that email marketers commonly test:
Subject line: The most popular variable to test, since it’s often the first thing people see. A good subject line can make all the difference, especially with open rates.
Call to action (CTA): Your CTA can have a big impact on click-throughs. There are a few things you can test, including the CTA text, where it is placed and how it is designed.
Headline: Grabs the reader’s attention and entices them to engage. In addition to testing wording, think about different text sizes and colors.
Images: You can test which images work best, the size of each image and whether you should use multiple images.
Content layout: Experiment with call-outs, lists and different types of subtitles. Emails should be scannable, but you also want people to engage and take action.
As we mentioned earlier, a poorly planned sample audience for your A/B test emails can skew the results. But don’t stress! Here are the three most important things to remember when planning your sample.
You want a large sample audience to increase the statistical significance of the results. When your group is too small, the results are more likely to be random, because the behavior of a small group varies more than that of a large one.
While there is no standard sample size, due to all the possible variations of a test, many experts recommend having at least 1,000 subscribers to test. If you currently have fewer than 1,000 subscribers, you can still run the test, but if the results are close, you might not have a clear winner.
We found this super simple A/B sample size calculator that you can use to plug in your numbers to see the statistical significance of your results.
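Under the hood, calculators like this typically run a significance test on the difference between the two groups. Here is a back-of-the-envelope sketch using a two-proportion z-test (one common approach, not necessarily the one that calculator uses); the open counts are made-up illustrative numbers:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(opens_a, n_a, opens_b, n_b):
    """Two-sided z-test for a difference in open rates between groups A and B."""
    p_a, p_b = opens_a / n_a, opens_b / n_b
    pooled = (opens_a + opens_b) / (n_a + n_b)          # combined open rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))           # two-sided p-value

# Hypothetical test: 500 subscribers per group, 130 opens (26%) vs 100 opens (20%)
p = two_proportion_p_value(130, 500, 100, 500)
print(f"p-value: {p:.3f}")  # below 0.05, so the difference is unlikely to be chance
```

A common rule of thumb is to call the result significant when the p-value is below 0.05; with smaller samples or smaller differences, the p-value rises and you no longer have a clear winner.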
Pick your audience carefully. Avoid skewing your results by drawing your audience from a single list of similar subscribers.
For example, you can send the test to all new subscribers, base it on geography or certain types of customers. The point is that both test emails must go out to the same group. And when you select the group, the split should be random.
The whole point of the test is for people to engage with your emails so you can learn more. If you send the emails to inactive users, you won’t see meaningful results. And if one group happens to contain fewer inactive subscribers than the other, that group is more likely to win for the wrong reason.
The only time to use inactive subscribers is for testing a re-engagement campaign. If you are testing emails targeting inactive subscribers, then it makes sense to use them as the sample audience.
When your A/B test is complete, you will have access to several metrics such as opens, replies, forwards and click-throughs. All of these metrics are important for judging an email’s performance, but most of them might have nothing to do with your test.
Remember, you started the A/B test with a specific goal and hypothesis. The winner of the test should only be judged by the success metrics that you put in place when defining your test.
Also, the results of your test are helpful for that specific campaign, but one result doesn’t mean it will hold true forever. If one email beats another by a wide margin, the result will probably hold for a while, but people’s behaviors continuously change.
Keep regularly testing to ensure your email tactics remain the most effective they can be to grow your business.
There is nothing mysterious or complicated about A/B testing, and email marketing is much harder without it. Your campaigns will never improve without learning what works and what doesn’t.
MailerLite makes it easy to set up an A/B test. Start small and try testing your subject line to improve your open rates. Once you get the hang of it, you can go deeper.
To set up an A/B test on MailerLite, check out our tutorial video to get started.