We are asked more and more frequently about A/B testing: how and when should it be used? So I decided to write a post on the topic, starting with the obvious question for those not yet familiar: what is A/B testing really, and why should I use it?
Let’s start with the Wikipedia definition since it is quite good:
A/B testing is a way to compare two versions of a single variable typically by testing a subject’s response to variable A against variable B, and determining which of the two variables is more effective.
The test is performed by randomly sending traffic to variants A and B. By doing so, differences in factors such as target audience and timing are eliminated as far as possible. What you get at the end is an answer to the question of which of the two variants maximises your desired outcome. This can be traffic for the attract phase, leads for the convert phase, and qualified leads or customers for the close phase, if mapped to the Inbound methodology.
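The random split above can be sketched in a few lines. This is a minimal illustration (the function and visitor ids are hypothetical, not from any particular tool): each visitor id is hashed so the assignment is both random-looking across visitors and sticky, meaning the same visitor always sees the same variant.

```python
import hashlib

# Illustrative sketch: assign each visitor to variant A or B by hashing
# their id. The hash spreads visitors roughly 50/50 between variants,
# and a given visitor always gets the same variant on repeat visits.
def assign_variant(visitor_id: str) -> str:
    digest = hashlib.sha256(visitor_id.encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"visitor-{i}")] += 1

print(counts)  # close to a 50/50 split
```

Real testing tools handle this assignment for you, but the principle is the same: the split must be random with respect to audience and time, or the comparison is biased.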
The above leads us to the why. A/B testing is a proven way to improve, with evidence, the business outcome of a given piece of content or functionality. In metric-driven organisations where investments need to be tested against set levels of Return on Investment (ROI), A/B testing is often the key to achieving maximum results and increased budgets. Common subjects for A/B tests include landing pages, calls-to-action, headlines and email subject lines.
Now, A/B testing should never be your number one priority: creating content should be! But once you have the content and have gained a little traction and experience in how that content is performing, dedicating time to A/B testing is well worth the investment, simply because you can maximise the return on investments already made. For example, doubling the conversion rate means cutting the cost per lead in half, which greatly increases the lifetime ROI of your original investment.
A/B testing is a basic form of optimisation that works well when used properly. The golden rule of A/B testing is to use what is often referred to as DDD, or Dramatically Different Design. To get statistically significant results, you can't make overly subtle changes between variants A and B and hope they make a difference. Instead, you need radically different designs to test against each other: you want clearly different results to make the A/B testing investment worth the time and effort.
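A rough sketch of the statistics behind those "statistically significant results": one common approach (not the only one) is a two-proportion z-test on the conversion counts of the two variants. The conversion numbers below are invented for illustration:

```python
from math import erf, sqrt

# Two-proportion z-test: is variant B's conversion rate genuinely
# different from variant A's, or could the gap be random noise?
def z_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# A dramatically different B converting at 6% vs A's 4%:
z, p = z_test(conv_a=200, n_a=5_000, conv_b=300, n_b=5_000)
print(round(z, 2), p)  # a p-value below 0.05 is conventionally "significant"
```

This is exactly why DDD matters: the bigger the real difference between the variants, the larger the z-score and the less traffic you need before the result is trustworthy.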
Once you’ve found your winner, A or B, continue to fine-tune the winning variant using more subtle changes, potentially applying Multivariate Testing (MVT) rather than another round of A/B testing. In MVT, multiple variations of the same overall design are tested at the same time, or as Wikipedia puts it: ‘It can be thought of in simple terms as numerous A/B tests performed on one page at the same time.’
Whereas A/B testing is usually used to determine the better of two variations, MVT can test a very large number of possible combinations. Think of it this way: rather than testing page A against page B as a whole, you test a page containing elements A, B, C and D, and have the test randomly serve up pages built from every possible combination. What sets the limit is the time it takes to get a statistically solid result: the more combinations, the longer it takes.
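The combinatorics explain why MVT needs so much traffic. With just a few options per page element (the element names below are purely illustrative), the number of page variants is the product of the option counts:

```python
from itertools import product

# Each page element has a few options; MVT tests every combination.
elements = {
    "headline": ["short", "long"],
    "image": ["photo", "illustration"],
    "cta_text": ["Buy now", "Learn more", "Get started"],
    "cta_colour": ["green", "orange"],
}

variants = list(product(*elements.values()))
print(len(variants))  # 2 * 2 * 3 * 2 = 24 page variants to serve
```

Every one of those 24 variants needs enough traffic for a solid result, which is why MVT works best on high-traffic pages and why A/B testing with dramatically different designs is the better starting point.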
Want to learn more about A/B testing? Please get in touch!