As most marketers know, A/B testing (also called split testing or bucket testing) is the process of testing variations of web pages or emails sent to customers, to ascertain which elements on a page convert, and to build landing pages or mails that boost your marketing ROI. A/B testing helps marketers ensure they are optimizing the design and messaging of their campaigns. So what goes into creating a good A/B test? Let’s consider a few points.
It’s not about the tool
Let’s address the elephant in the room first. Getting A/B testing right does not depend on the tool, but on other factors that we will discuss below. There are plenty of tools that let you run and analyse A/B tests, and I keep hearing marketers lament how the use of a specific tool helped or hindered their A/B test. At the risk of repetition, I will say this again: beyond a point, as long as the hygiene factors are met, focus on your metrics and test attributes rather than on the tool used.
Identify the right metrics to track
The starting point for a successful A/B test is to identify the metrics you wish to validate. Bear in mind that this does not imply starting off with a hypothesis, for that may influence your choice of attributes and prevent you from creating an unbiased test case. Whether you are optimizing for increased time on page, clicks, or offline actions influenced by your page content, ensure that the right metrics for your business are identified at the start of the test. Remember, you cannot improve what you cannot measure.
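Once you have picked a metric such as conversion rate, you also need a way to tell whether the difference between variants is real or just noise. As a rough sketch (the variant names and numbers below are hypothetical, and many testing tools do this calculation for you), a two-proportion z-test is one common way to compare conversion rates:

```python
import math

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing conversions of variants A and B.

    Returns the z-statistic; |z| > 1.96 suggests a significant
    difference at the 95% confidence level.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: variant B converted 60/1000 visitors vs A's 40/1000
z = conversion_z_test(40, 1000, 60, 1000)
print(round(z, 2))  # |z| is above 1.96, so the lift looks significant
```

If |z| stays below the threshold, the honest conclusion is "no detectable difference yet", not "B wins".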
Test key elements, one change at a time
Do not change too many page attributes at once. An A/B test is meant to check one change at a time. Having multiple moving parts will prevent you from being able to accurately attribute the improvements to a specific change.
Think like your user
At the end of the day, whatever you do as a marketer, you do to improve your engagement with your target users. Recognize who your target audience is and ensure that all your content speaks to them. Experiment with “My” versus “Your” variations on your page (e.g. “Download your free copy” versus “Download my free copy” call-to-action buttons) and see what impact it has on your conversions.
Manage tracking pixels properly
This cannot be overstated. You do all the work, create the framework, and drive the traffic for your test, only to find out that the tracking code was inserted improperly. Take the help of your technical team if required, but use some test traffic to ensure that the corresponding data is showing up in your dashboard.
Test mobile versus desktop traffic
Realize that mobile users on your site react differently from desktop users. Ensure you segregate your traffic by device type to get a clear idea of how each cohort is behaving.
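If your tool exports raw visit records, segmenting them is a simple tally. A minimal sketch, using made-up event data of the form (device, variant, converted):

```python
from collections import defaultdict

# Hypothetical raw test records: (device, variant, converted)
events = [
    ("mobile", "A", True), ("mobile", "A", False),
    ("mobile", "B", True), ("mobile", "B", True),
    ("desktop", "A", False), ("desktop", "B", True),
]

# Tally visits and conversions per (device, variant) cohort
totals = defaultdict(lambda: [0, 0])  # cohort -> [visits, conversions]
for device, variant, converted in events:
    totals[(device, variant)][0] += 1
    totals[(device, variant)][1] += int(converted)

for (device, variant), (visits, convs) in sorted(totals.items()):
    print(f"{device}/{variant}: {convs}/{visits} converted")
```

Comparing variants within each device cohort, rather than in aggregate, prevents a mobile-heavy traffic mix from masking a desktop-only win (or vice versa).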
Test traffic from various sources
Look at your Google Analytics dashboard to get a view of how users are coming to your site. Your site attracts traffic from multiple sources, so be sure to test traffic from each of them when you conduct an A/B test, depending on the profile of customers you are targeting. Each user has a different persona and a different context in which they interact with your brand on a particular channel. Draw up a traffic behavior versus source chart to see if any interesting cohorts are thrown up. That may make for interesting reading.
Account for various time of day and day of week events
Traffic on your site is typically not uniform through the week, or even through the day. Ensure you give your test enough time to run, to negate any time-of-day or day-of-week variations that may sully the results.
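"Enough time" can be estimated up front from your baseline conversion rate, the smallest lift you care about, and your daily traffic. A rough sketch using the standard normal-approximation sample-size formula (the baseline rate, target lift, and traffic figure below are hypothetical placeholders):

```python
import math

def required_sample_size(baseline_rate, min_lift):
    """Approximate visitors needed per variant to detect a relative lift.

    Uses z-values for a two-sided 5% significance level (1.96)
    and 80% statistical power (0.84).
    """
    z_alpha, z_power = 1.96, 0.84
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_lift)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Hypothetical: 4% baseline conversion, aiming to detect a 20% relative lift
n = required_sample_size(0.04, 0.20)
days = math.ceil(n / 500)  # at a hypothetical 500 visitors per variant per day
print(n, days)
```

Whatever the formula says, it is good practice to run the test in whole-week multiples so every day of the week is represented equally in both variants.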
Anything else we missed in our list above? Do let us know in the comments.