Email Marketing A/B Testing

Email marketing is one of the most common ways for marketers to engage with their subscribers. From small local businesses to large international companies, government institutions, and even political campaigns, everyone uses it.

Among the reasons email marketing is so successful is the ability to track its effectiveness in real time through reports and tools such as A/B testing. This allows marketers to find out which subject lines work best, what type of content gets engagement, or which day of the week brings in the most subscriptions.

What is A/B Testing?

The short answer: A method for comparing two versions of a campaign against each other to determine which performs better.

The long answer: Often used in combination with analytics tools like Google Analytics, A/B testing lets marketers test variables to determine which version of their email (e.g., subject line or layout) yields more conversions, such as sign-ups and purchases, than the other.

A simple example: If your original web page has a button that says “Buy,” you could split test changing this to “Pre-order” and see which one gets more clicks. You can continue to make further changes like this until you find what works best (the highest click rate).
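To make the mechanics concrete, here is a minimal Python sketch of such a split test. The click rates and visitor counts are invented purely for illustration: each simulated visitor is randomly shown one button label, and the click-through rate of each variant is tallied at the end.

```python
import random

# Simulated split test between two button labels. The "true" click
# probabilities below are made-up numbers, not real data.
TRUE_CLICK_RATE = {"Buy": 0.040, "Pre-order": 0.048}

clicks = {variant: 0 for variant in TRUE_CLICK_RATE}
impressions = {variant: 0 for variant in TRUE_CLICK_RATE}

for _ in range(10_000):  # simulated visitors
    variant = random.choice(list(TRUE_CLICK_RATE))  # random assignment
    impressions[variant] += 1
    if random.random() < TRUE_CLICK_RATE[variant]:  # did they click?
        clicks[variant] += 1

for variant, shown in impressions.items():
    rate = clicks[variant] / shown
    print(f"{variant}: {rate:.2%} click rate over {shown} impressions")
```

Run enough visitors through a test like this and the measured rates converge on the underlying ones, which is exactly why sample size matters (more on that below).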

Once you’ve found a compelling message, it can be used across your entire marketing campaign, from emails to social media posts and everything in between!

How Do I Run An A/B Test?

To run an A/B test, all you need to do is create two (or more) versions of the same email or web page. The ‘A’ version is your control and should stay as close to your existing design as possible. This will be your original version – the one that has already proven effective in past campaigns, or that you think will work best for this particular campaign.

The ‘B’ version should then be created with a single change from the first version; it could be changing the color of a button, adding some text to the copy, altering an image, etc. Each version needs enough differentiation to make clear which one performed better than the other, but keep the changed parameters to a minimum (two at most) so the results are easy to interpret.
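One practical detail these steps imply is that each version must go to a comparable, randomly chosen slice of your audience. Below is a minimal Python sketch of that split; the subscriber addresses and the 50/50 division are assumptions for illustration, and most email platforms handle this step for you.

```python
import random

def split_audience(subscribers, seed=42):
    """Shuffle a copy of the list and split it into two halves:
    one half receives version A (control), the other version B."""
    shuffled = subscribers[:]  # copy so the original list is untouched
    random.Random(seed).shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical subscriber addresses, for illustration only.
subscribers = [f"user{i}@example.com" for i in range(1_000)]
group_a, group_b = split_audience(subscribers)
print(f"{len(group_a)} subscribers get version A, {len(group_b)} get version B")
```

Shuffling before splitting is what keeps the two groups comparable: any ordering in your list (by sign-up date, for example) would otherwise bias one variant.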

Sending your test to existing subscribers is a great way to get statistically significant results quickly. If you are creating new versions of your email, keeping track of past performance can give you an insight into how large your sample size needs to be for your A/B test to be reliable. For example, if you expect one version to get roughly 10% more clicks than the other, a sample size calculation will tell you how many recipients each variation needs before that difference can be trusted as real rather than random noise.
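For readers who want to put a number on “large enough,” here is a rough Python sketch of that calculation using the standard two-proportion z-test approximation (it requires scipy). The 4% baseline click rate and 10% relative lift in the example are assumed values, not figures from this article.

```python
from scipy.stats import norm

def sample_size_per_variation(p_control, relative_lift, alpha=0.05, power=0.80):
    """Approximate recipients needed per variation to detect a relative
    lift in click rate, via the standard two-proportion z-test formula."""
    p_variant = p_control * (1 + relative_lift)
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # desired statistical power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = p_variant - p_control
    return int((z_alpha + z_beta) ** 2 * variance / effect ** 2) + 1

# e.g. a 4% baseline click rate and a hoped-for 10% relative lift:
print(sample_size_per_variation(0.04, 0.10))
```

Note how quickly the required sample grows as the expected lift shrinks: small improvements on low click rates can demand tens of thousands of recipients per variation before the result is trustworthy.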