A/B testing

In marketing, A/B testing is a simple randomized experiment with two variants, A and B, which serve as the control and the treatment in a controlled experiment. It is a form of two-sample statistical hypothesis testing. Other names include randomized controlled experiments, online controlled experiments, and split testing. In online settings, such as web design (especially user experience design), the goal is to identify changes to web pages that increase or maximize an outcome of interest (e.g., click-through rate for a banner advertisement).

As the name implies, two versions (A and B) are compared; they are identical except for one variation that might affect a user's behavior. Version A might be the currently used version (control), while Version B is modified in some respect (treatment). For instance, on an e-commerce website the purchase funnel is typically a good candidate for A/B testing, as even marginal reductions in drop-off rates can represent a significant gain in sales. Significant improvements can sometimes be seen through testing elements such as copy text, layouts, images and colors,[1] but not always.[2] The broader family of techniques referred to as multivariate or multinomial testing is similar to A/B testing, but may compare more than two versions at the same time or use additional controls. Simple A/B tests are not valid for observational, quasi-experimental or other non-experimental situations, which are common with survey data, offline data, and other, more complex phenomena.
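
The defining step is that each user is assigned to one of the two versions at random. A minimal sketch of such an assignment is shown below in Python; the hash-based bucketing, the user IDs, and the 50/50 split are illustrative assumptions rather than a prescribed method.

    # Minimal sketch: deterministically assign each user to variant A or B.
    # The user IDs and the 50/50 split are illustrative assumptions.
    import hashlib

    def assign_variant(user_id: str) -> str:
        """Return 'A' or 'B' for a given user, stable across repeat visits."""
        digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    for uid in ("user-001", "user-002", "user-003"):
        print(uid, "->", assign_variant(uid))

Hashing the user identifier (rather than flipping a coin on every visit) keeps each user in the same variant for the duration of the test, which is one common way to avoid mixing exposures.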

A/B testing has been marketed by some as a change in philosophy and business strategy in certain niches, though the approach is identical to a between-subjects design, which is commonly used in a variety of research traditions.[3][4][5] A/B testing as a philosophy of web development brings the field into line with a broader movement toward evidence-based practice.

An email campaign example

A company with a customer database of 2,000 people decides to create an email campaign with a discount code in order to generate sales through its website. It creates an email and then modifies the call to action (the part of the copy which encourages customers to do something; in the case of a sales campaign, to make a purchase).

  • To 1000 people it sends the email with the call to action stating, "Offer ends this Saturday! Use code A1",
  • and to another 1000 people it sends the email with the call to action stating, "Offer ends soon! Use code B1".

All other elements of the email's copy and layout are identical. The company then monitors which campaign has the higher success rate by analysing the use of the promotional codes. The email using code A1 has a 5% response rate (50 of the 1,000 people emailed used the code to buy a product), and the email using code B1 has a 3% response rate (30 of the recipients used the code to buy a product). The company therefore determines that in this instance, the first call to action is more effective and will use it in future sales. A more nuanced approach would involve applying a statistical test to determine whether the difference in response rates between A1 and B1 is statistically significant, that is, unlikely to be due to random chance alone.[6]
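
As a sketch of that statistical check, the following Python snippet applies a pooled two-proportion z-test to the figures above; the choice of test is an assumption made here for illustration (a chi-squared or Fisher's exact test would serve equally well).

    # Pooled two-proportion z-test comparing 50/1000 (code A1) with 30/1000 (code B1).
    from math import sqrt, erf

    def two_proportion_z_test(x1, n1, x2, n2):
        """Return (z, two-sided p-value) for H0: both groups have the same response rate."""
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)                          # response rate under H0
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal CDF via erf
        return z, p_value

    z, p = two_proportion_z_test(50, 1000, 30, 1000)
    print(f"z = {z:.2f}, p = {p:.3f}")   # about z = 2.28, p = 0.022

With a p-value of roughly 0.02, the difference between the 5% and 3% response rates would be considered significant at the conventional 0.05 level.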

In the example above, the purpose of the test is to determine which is the more effective way to impel customers to make a purchase. If, however, the aim of the test had been to see which email would generate the higher click rate (that is, the number of people who actually click through to the website after receiving the email), then the results might have been different.

More of the customers receiving code B1 might have accessed the website after receiving the email, but because the call to action did not state the end date of the promotion, there was less incentive for them to make an immediate purchase. If the purpose of the test had been simply to see which email would bring more traffic to the website, then the email containing code B1 might have been more successful. An A/B test should have a defined, measurable outcome, e.g. number of sales made, click-through rate, or number of people signing up or registering.[7]

Segmentation and targeting

A more advanced output of A/B testing is a segmented strategy rather than a global strategy. That is, while code A1 may have had a higher response rate overall, code B1 may have actually had a higher response rate within a specific segment of the customer base.[8]

For example, the breakdown of the response rates by gender could have been:

                  Overall           Men             Women
Total sends       2,000             1,000           1,000
Total responses   80                35              45
Code A1           50 / 1,000 (5%)   10 / 500 (2%)   40 / 500 (8%)
Code B1           30 / 1,000 (3%)   25 / 500 (5%)   5 / 500 (1%)

In this case, we can see that while code A1 had a higher response rate overall, code B1 actually had a higher response rate among men. Based on this, the company might select a segmented strategy following the A/B test, sending code B1 to men and code A1 to women going forward. In this example, a segmented strategy would raise the expected response rate from 5% ((40 + 10) / (500 + 500)) to 6.5% ((40 + 25) / (500 + 500)), a 30% relative increase.
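
The calculation above can be reproduced with a short script. The sketch below encodes the table's figures as (responses, sends) pairs and compares a global strategy (sending the overall winner, A1, to everyone) with a segmented one; the data structure is simply an illustrative encoding of the table.

    # Responses and sends for each code within each gender segment, taken from the table above.
    results = {
        "men":   {"A1": (10, 500), "B1": (25, 500)},
        "women": {"A1": (40, 500), "B1": (5, 500)},
    }

    def rate(responses, sends):
        return responses / sends

    # Global strategy: send the overall winner (A1) to everyone.
    global_resp = sum(results[g]["A1"][0] for g in results)
    global_sends = sum(results[g]["A1"][1] for g in results)
    print(f"global strategy (A1 only): {rate(global_resp, global_sends):.1%}")    # 5.0%

    # Segmented strategy: send each segment the code that performed best for it.
    seg_resp = seg_sends = 0
    for segment, codes in results.items():
        best = max(codes, key=lambda c: rate(*codes[c]))
        responses, sends = codes[best]
        seg_resp += responses
        seg_sends += sends
        print(f"{segment}: best code is {best} at {rate(responses, sends):.1%}")
    print(f"segmented strategy: {rate(seg_resp, seg_sends):.1%}")                 # 6.5%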

If segmented results are expected from the A/B test, the test should be properly designed at the outset so that it is evenly distributed across key customer attributes, such as gender. That is, the test should both (a) contain a representative sample of men and women, and (b) assign men and women randomly to each “treatment” (code A1 vs. code B1). Failure to do so could introduce experiment bias and lead to inaccurate conclusions being drawn from the test.[9]
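
One common way to meet both requirements is stratified (blocked) randomization, sketched below: customers are shuffled within each gender stratum and then split evenly between the two codes. The customer records, attribute names, and exact 50/50 split are illustrative assumptions, not a method prescribed by the cited source.

    import random
    from collections import Counter

    # Hypothetical customer records; in practice these would come from the company's database.
    customers = [{"id": i, "gender": "male" if i % 2 == 0 else "female"} for i in range(2000)]

    def stratified_assign(customers, strata_key, variants=("A1", "B1"), seed=42):
        """Shuffle within each stratum, then alternate variants for an even, random split."""
        rng = random.Random(seed)
        strata = {}
        for c in customers:
            strata.setdefault(c[strata_key], []).append(c)
        assignments = {}
        for group in strata.values():
            rng.shuffle(group)
            for i, c in enumerate(group):
                assignments[c["id"]] = variants[i % len(variants)]
        return assignments

    assignments = stratified_assign(customers, "gender")
    # Each (gender, variant) cell ends up with 500 randomly chosen customers.
    print(Counter((c["gender"], assignments[c["id"]]) for c in customers))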

This segmentation and targeting approach can be further generalized to include multiple customer attributes rather than a single one (for example, customer age and gender together) to identify more nuanced patterns that may exist in the test results.
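
The segment-level comparison above extends directly to multi-attribute segments by keying each segment on a tuple of attributes. The sketch below illustrates only the shape of such an analysis; the (age band, gender) segments and their figures are purely hypothetical.

    # Purely hypothetical (responses, sends) figures, keyed by (age band, gender).
    segments = {
        ("18-34", "men"):   {"A1": (4, 250),  "B1": (15, 250)},
        ("18-34", "women"): {"A1": (22, 250), "B1": (3, 250)},
        ("35+",   "men"):   {"A1": (6, 250),  "B1": (10, 250)},
        ("35+",   "women"): {"A1": (18, 250), "B1": (2, 250)},
    }

    for segment, codes in segments.items():
        best = max(codes, key=lambda c: codes[c][0] / codes[c][1])
        print(segment, "->", best)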

Acceptance

Many companies use the "designed experiment" approach to making marketing decisions, with the expectation that results from relevant samples can improve conversion rates.[10] The practice is increasingly common as tools and expertise in this area grow. Many A/B testing case studies show that testing is becoming increasingly popular with small and medium-sized businesses as well.[11]

References

  1. ^ "Split Testing Guide for Online Stores". webics.com.au. August 27, 2012. Retrieved 2012-08-28. 
  2. ^ "Only 7 out of 10 tests fail". convert.com. Sep 1, 2013. Retrieved 2013-09-01. 
  3. ^ Christian, Brian (2000-02-27). "The A/B Test: Inside the Technology That's Changing the Rules of Business | Wired Business". Wired.com. Retrieved 2014-03-18. 
  4. ^ Christian, Brian. "Test Everything: Notes on the A/B Revolution | Wired Enterprise". Wired.com. Retrieved 2014-03-18. 
  5. ^ Doctorow, Cory (2012-04-26). "A/B testing: the secret engine of creation and refinement for the 21st century". Boing Boing. Retrieved 2014-03-18. 
  6. ^ Amazon.com. "The Math Behind A/B Testing". Developer.amazon.com. Retrieved 2014-03-18. 
  7. ^ Kohavi, Ron; Longbotham, Roger; Sommerfield, Dan; Henne, Randal M. (2009). "Controlled experiments on the web: survey and practical guide". Data Mining and Knowledge Discovery (Berlin: Springer) 18 (1): 140–181. doi:10.1007/s10618-008-0114-1. ISSN 1384-5810. 
  8. ^ "Advanced A/B Testing Tactics That You Should Know | Testing & Usability". Online-behavior.com. Retrieved 2014-03-18. 
  9. ^ "Eight Ways You’ve Misconfigured Your A/B Test". Dr. Jason Davis. 2013-09-12. Retrieved 2014-03-18. 
  10. ^ "A Simple Approach to Relevant A/B Testing". LEWIS Pulse. Retrieved 2013-09-24. 
  11. ^ "A/B Split Testing | Multivariate Testing | Case Studies". Visual Website Optimizer. Retrieved 2011-07-10.