A/B testing is a term commonly used in web development, online marketing, and other forms of advertising to describe simple randomized experiments with two variants, A and B, which are the control and treatment in the controlled experiment. The formal or scientific name for this and related processes is hypothesis testing. Other names include randomized controlled experiments, online controlled experiments, and split testing. In online settings, such as web design (especially user experience design), the goal is to identify changes to web pages that increase or maximize an outcome of interest (e.g., click-through rate for a banner advertisement).
As the name implies, two versions (A and B) are compared, which are identical except for one variation that might affect a user's behavior. Version A might be the currently used version (control), while version B is modified in some respect (treatment). For instance, on an e-commerce website the purchase funnel is typically a good candidate for A/B testing, as even marginal improvements in drop-off rates can represent a significant gain in sales. Significant improvements can sometimes be seen through testing elements such as copy text, layouts, images, and colors, but not always. The much broader family of techniques referred to as multivariate or multinomial testing is similar to A/B testing, but may test more than two versions at the same time and/or include additional controls. Simple A/B tests are not valid for observational, quasi-experimental, or other non-experimental situations, as is common with survey data, offline data, and other more complex phenomena.
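As an illustration of how such a two-way split might be implemented (the text does not prescribe any particular mechanism), the following Python sketch assigns each visitor to version A or B by hashing a user identifier together with an experiment label; the function name, experiment label, and user IDs are hypothetical:

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "checkout-button") -> str:
    """Deterministically assign a user to variant 'A' (control) or 'B' (treatment).

    Hashing the user ID together with an experiment label gives a stable,
    roughly 50/50 split without storing any assignment state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same user always lands in the same bucket for a given experiment.
print(assign_variant("user-42"))  # 'A' or 'B', stable across calls
print(assign_variant("user-42"))  # same result every time for this user
```

Deterministic, hash-based assignment is only one possible approach; simple random assignment at first visit, with the chosen variant stored per user, achieves the same randomization.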
A/B testing has been marketed by some as a change in philosophy and business strategy in certain niches, though the approach is identical to a between-subjects design, which is commonly used in a variety of research traditions. A/B testing as a philosophy of web development brings the field into line with a broader movement toward evidence-based practice.
An email campaign example
A company with a customer database of 2000 people decides to create an email campaign with a discount code in order to generate sales through its website. It creates an email and then modifies the Call To Action (the part of the copy which encourages customers to do something — in the case of a sales campaign, make a purchase).
- To 1000 people it sends the email with the Call To Action stating "Offer ends this Saturday! Use code A1",
- and to another 1000 people it sends the email with the Call To Action stating "Offer ends soon! Use code B1".
All other elements of the email's copy and layout are identical. The company then monitors which campaign has the highest success rate by analysing the use of the promotional codes. The email using the code A1 has a 5% response rate (50 of the 1000 people emailed used the code to buy a product), and the email using the code B1 has a 3% response rate (30 of the recipients used the code to buy a product). The company therefore determines that in this instance, the first Call To Action is more effective and will use it in future sales.
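Beyond simply comparing the raw response rates, the company would typically check whether the observed difference is larger than chance. A two-proportion z-test is one common way to do this; the Python sketch below applies it to the figures from the example (50 of 1000 for code A1, 30 of 1000 for code B1). With these numbers the difference is statistically significant at the conventional 5% level (p ≈ 0.02).

```python
from math import sqrt, erfc

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test using a pooled standard error."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = erfc(abs(z) / sqrt(2))
    return p_a, p_b, z, p_value

# Figures from the example: 50 of 1000 used code A1, 30 of 1000 used code B1.
rate_a, rate_b, z, p = two_proportion_z_test(50, 1000, 30, 1000)
print(f"A1: {rate_a:.1%}  B1: {rate_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```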
In the example above, the purpose of the test is to determine which is the more effective way to prompt customers to make a purchase. If, however, the aim of the test had been to see which email would generate the higher click rate (that is, the number of people who actually click through to the website after receiving the email), then the results might have been different.
More of the customers receiving code B1 may have accessed the website after receiving the email, but because the Call To Action did not state the end date of the promotion, there was less incentive for them to make an immediate purchase. If the purpose of the test had simply been to see which email would bring more traffic to the website, then the email containing code B1 might have been more successful. An A/B test should have a defined, measurable outcome, e.g. number of sales made, click-through rate, or number of people signing up/registering.
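The short sketch below illustrates how the choice of metric can change the conclusion. The purchase figures are taken from the example above; the click counts are purely hypothetical, invented only for illustration, since the text gives no click data:

```python
# Purchases come from the example above; click counts are hypothetical,
# chosen only to show that a different metric can favour a different variant.
emails = {
    "A1": {"sent": 1000, "clicks": 120, "purchases": 50},
    "B1": {"sent": 1000, "clicks": 180, "purchases": 30},
}

for name, stats in emails.items():
    click_rate = stats["clicks"] / stats["sent"]
    purchase_rate = stats["purchases"] / stats["sent"]
    print(f"{name}: click rate {click_rate:.1%}, purchase rate {purchase_rate:.1%}")

# In this hypothetical scenario A1 wins on purchases (5% vs 3%) while B1 wins
# on clicks, so the "better" email depends on the outcome chosen in advance.
```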
Many companies use the "designed experiment" approach to making marketing decisions, with the expectation that results from representative samples can improve conversion rates. The practice is increasingly common as tools and expertise in this area grow, and published A/B testing case studies show that testing is also becoming popular with small and medium-sized businesses.
- Choice modelling
- Google Analytics Content Experiments (formerly Google Website Optimizer)
- Multivariate testing
- "Split Testing Guide for Online Stores". webics.com.au. August 27, 2012. Retrieved 2012-08-28.
- "Only 7 out of 10 tests fail". convert.com. Sep 1, 2013. Retrieved 2013-09-01.
- Kohavi, Ron; Longbotham, Roger, Sommerfield, Dan, Henne, Randal M. (2009). "Controlled experiments on the web: survey and practical guide". Data Mining and Knowledge Discovery (Berlin: Springer) 18 (1): 140–181. doi:10.1007/s10618-008-0114-1. ISSN 1384-5810.
- "A Simple Approach to Relevant A/B Testing". LEWIS Pulse. Retrieved 2013-09-24.
- "A/B Split Testing | Multivariate Testing | Case Studies". Visual Website Optimizer. Retrieved 2011-07-10.