A third factor to take into account is the timing window.
When do you normally open an email? Your answer is probably: it depends.
You might be online, see the email come in, and click within 5 minutes. Or you might first see the newsletter 2 hours after it landed in your inbox. Or perhaps the subject line didn’t grab you, and you leave the email unopened.
These are all real scenarios, which is why you should have an adequate time window when running an A/B test.
With variables like subject lines and open rates, you can send the winner as early as 2 hours after sending. If you’re measuring click-throughs, however, you’ll want to wait longer. And when you’re testing your newsletter on active subscribers, you can shorten the waiting time.
Research has shown that waiting 2 hours gives the test an accuracy of around 80%. The longer you wait beyond that, the more accurate your results become; to hit 99% accuracy, it’s best to wait an entire day.
Be aware that a longer waiting time is not always better. Some newsletters are time-sensitive and should be sent as soon as possible. In other situations, waiting too long will result in the winning email being sent at the weekend. A weekday versus a Saturday or Sunday can make a big difference in your email stats (check out this article if you’re wondering when the best time is to send your email).
The main rule for defining the right testing window is this: every business is different, so it’s essential to monitor your metrics and keep testing.
Your fourth factor is the delivery time.
Keep in mind that the winning email is automatically sent to the remaining subscribers once the testing period is complete. Since this group likely contains most of your subscribers, schedule the test so that the winning email reaches them at the ideal delivery time.
Let’s say you’re testing 2 subject lines on 20% of your subscribers (each group contains 10%). You want the winning newsletter to arrive in people’s inboxes at 10 AM and you want to test the open rate for 2 hours.
This means you have to start your test at 8 AM, so your A/B test can run for 2 hours before the winning variant is sent out at 10 AM.
Finally, the fifth factor is to test only one variable at a time.
Imagine you’re sending two emails at the same time. The content and sender’s name are identical. The only thing that differs is the subject line. After a few hours, you see that version A has a much better open rate.
When you test only one thing at a time and you see a clear difference in the metric you’re analyzing, you can draw an accurate conclusion. However, if you had also changed the sender’s name, it would be impossible to tell whether the subject line or the name made the difference.