Maximum difference scaling (MaxDiff) is a discrete choice model first described by Jordan Louviere in 1987 while on the faculty at the University of Alberta. The first working papers and publications occurred in the early 1990s. With MaxDiff, survey respondents are shown a set of the possible items and are asked to indicate the best and worst items (or most and least important, or most and least appealing, etc.). According to Louviere, MaxDiff assumes that respondents evaluate all possible pairs of items within the displayed set and choose the pair that reflects the maximum difference in preference or importance. MaxDiff may be thought of as a variation of the method of Paired Comparisons. Consider a set in which a respondent evaluates four items: A, B, C and D. If the respondent says that A is best and D is worst, these two responses inform us on five of six possible implied paired comparisons:
- A > B, A > C, A > D, B > D, C > D
The only paired comparison that cannot be inferred is B vs. C. In a choice among five items, MaxDiff questioning informs on seven of ten implied paired comparisons. MaxDiff questionnaires are relatively easy for most respondents to understand. Furthermore, humans are much better at judging items at extremes than in discriminating among items of middling importance or preference. And since the responses involve choices of items rather than expressing strength of preference, there is no opportunity for scale use bias.
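The implied-pair bookkeeping described above can be sketched in a few lines of Python (the function name `implied_pairs` is illustrative, not part of any MaxDiff package):

```python
def implied_pairs(items, best, worst):
    """Ordered (preferred, less-preferred) pairs implied by one
    best/worst choice from the displayed set `items`."""
    # The chosen best item beats every other displayed item ...
    pairs = {(best, other) for other in items if other != best}
    # ... and every other displayed item beats the chosen worst item.
    pairs |= {(other, worst) for other in items if other not in (best, worst)}
    return pairs

# Four items, A best, D worst: five of the six pairs are implied;
# only B vs. C remains unknown.
print(sorted(implied_pairs(["A", "B", "C", "D"], "A", "D")))
```

In general, for a set of n displayed items, a single best/worst response pins down 2n - 3 of the n(n - 1)/2 possible pairs: 5 of 6 for n = 4, and 7 of 10 for n = 5, as noted above.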
Steve Cohen introduced MaxDiff to the marketing research world in a paper presented at an ESOMAR Conference in Barcelona in 2002 entitled, "Renewing market segmentation: Some new tools to correct old problems." This paper was nominated for Best Methodological paper at that conference.
In 2003, at the ESOMAR Latin America Conference in Punta del Este, Uruguay, Steve Cohen and his co-author, Dr. Leopoldo Neira, compared MaxDiff results to those obtained by rating-scale methods. This paper won Best Methodological Paper at that conference. Later the same year it was selected as winner of the John and Mary Goodyear Award for Best Paper across all ESOMAR conferences in 2003, and it was then published as the lead article in "Excellence in International Research 2004," published by ESOMAR.
At the 2003 Sawtooth Software Conference, Steve Cohen's paper "Maximum Difference Scaling: Improved Measures of Importance and Preference for Segmentation" was selected as Best Presentation. Cohen and Bryan Orme of Sawtooth Software agreed that MaxDiff should be part of the Sawtooth Software product line, and it was introduced later that year.
In 2004, Steve Cohen and Bryan Orme of Sawtooth Software won the David K. Hardin Award from the American Marketing Association for their paper published in Marketing Research Magazine, "What's your preference? Asking survey respondents about their preferences creates new scaling decisions."
The basic steps are:
- select products to be tested
- show products to potential consumers (textually or visually)
- respondents choose the best and worst from each task
- input the data from a representative sample of potential customers into a statistical software program and run the MaxDiff analysis procedure. The software produces a utility score for each item. In addition to utility scores, raw counts can be requested, which simply sum the number of times each item was chosen as best and as worst. The utility scores indicate the perceived value of each item at the individual level and how sensitive consumer perceptions and preferences are to changes in product features.
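The raw-counts step can be illustrated with a minimal Python sketch (the data layout and function name here are hypothetical, not the API of any actual statistical package):

```python
from collections import Counter

def maxdiff_counts(tasks):
    """tasks: list of (shown_items, best, worst) tuples, one per MaxDiff task.
    Returns a per-item net score: (times chosen best - times chosen worst),
    normalized by the number of times the item was shown."""
    best, worst, shown = Counter(), Counter(), Counter()
    for items, b, w in tasks:
        shown.update(items)   # every displayed item counts as one exposure
        best[b] += 1
        worst[w] += 1
    return {i: (best[i] - worst[i]) / shown[i] for i in shown}

tasks = [
    (["A", "B", "C"], "A", "C"),
    (["A", "B", "D"], "A", "D"),
    (["B", "C", "D"], "B", "D"),
]
print(maxdiff_counts(tasks))  # A scores highest, D lowest
```

Counts are only a rough summary; the utility scores from the choice-model estimation described below are the usual basis for analysis.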
Why use maximum difference scaling?
MaxDiff is an antidote to standard rating or importance scales. Respondents find rating scales very easy, but such scales tend to deliver results indicating that everything is "quite important," making the data not especially actionable. MaxDiff, on the other hand, forces respondents to make choices between options while still delivering a ranking of the relative importance of the items being rated.
Estimation of the utility function is performed using multinomial discrete choice analysis, in particular the multinomial logit model. Several algorithms can be used in this estimation, including maximum likelihood, neural networks, and Hierarchical Bayes. Hierarchical Bayes is attractive because it borrows information across respondents, yielding more stable individual-level utility estimates from relatively few choice tasks per respondent.
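A pooled (aggregate-level) version of this estimation can be sketched as follows. This is a plain maximum-likelihood multinomial logit fit by gradient ascent in pure Python, for illustration only; it is not Hierarchical Bayes and not any package's actual implementation. It uses the common best-worst formulation in which, over the displayed set, the best choice has probability exp(u_i)/Σ exp(u_j) and the worst choice has probability exp(-u_i)/Σ exp(-u_j):

```python
import math

def estimate_maxdiff_mnl(tasks, items, n_iter=2000, lr=0.05):
    """Pooled MNL estimation for MaxDiff best/worst data by gradient ascent.
    tasks: list of (shown_items, best, worst) tuples.
    Returns zero-centered utility scores, one per item."""
    u = {it: 0.0 for it in items}
    for _ in range(n_iter):
        grad = {it: 0.0 for it in items}
        for shown, best, worst in tasks:
            # Best choice: logit over u; worst choice: logit over -u.
            eb = {i: math.exp(u[i]) for i in shown}
            zb = sum(eb.values())
            ew = {i: math.exp(-u[i]) for i in shown}
            zw = sum(ew.values())
            grad[best] += 1.0          # d/du of the best-choice log-likelihood
            grad[worst] -= 1.0         # d/du of the worst-choice log-likelihood
            for i in shown:
                grad[i] -= eb[i] / zb  # minus best-choice probability
                grad[i] += ew[i] / zw  # plus worst-choice probability
        for it in items:
            u[it] += lr * grad[it]
        mean = sum(u.values()) / len(u)
        for it in items:
            u[it] -= mean              # zero-sum constraint for identification
    return u
```

Pooling all respondents this way yields one set of utilities for the whole sample; the Hierarchical Bayes approach mentioned above instead estimates respondent-level utilities shrunk toward the population distribution.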
- Almquist, Eric and Jason Lee (April 2009), "What Do Customers Really Want?", Harvard Business Review, retrieved 15 February 2010.
- Cohen, Steven H. and Paul Markowitz (2002), "Renewing Market Segmentation: Some New Tools to Correct Old Problems," ESOMAR 2002 Congress Proceedings, 595-612, ESOMAR: Amsterdam, The Netherlands.
- Cohen, Steven H. (2003), "Maximum Difference Scaling: Improved Measures of Importance and Preference for Segmentation," Proceedings of the Sawtooth Software Conference, San Antonio, TX, 61-74.
- Cohen, Steven H. and Leopoldo Neira (2003), "Measuring Preferences for Product Benefits Across Countries: Overcoming Scale Usage Bias with Maximum Difference Scaling," paper presented at the ESOMAR Latin America Conference, Punta del Este, Uruguay. Reprinted in Excellence in International Research 2004, ESOMAR: Amsterdam, The Netherlands, 1-22.
- Cohen, Steven H. and Bryan Orme (2004), "What's Your Preference? Asking Survey Respondents About Their Preferences Creates New Scaling Decisions," Marketing Research Magazine. Winner of the 2004 David K. Hardin Award from the American Marketing Association.
- Louviere, J. J. (1991), "Best-Worst Scaling: A Model for the Largest Difference Judgments," Working Paper, University of Alberta.
- Thurstone, L. L. (1927), "A Law of Comparative Judgment," Psychological Review, 34, 273-286.