# Ordered logit


In statistics, the ordered logit model (also ordered logistic regression or proportional odds model) is an ordinal regression model—that is, a regression model for ordinal dependent variables—first considered by Peter McCullagh.[1] For example, if one question on a survey is to be answered by a choice among "poor", "fair", "good", "very good", and "excellent", and the purpose of the analysis is to see how well that response can be predicted by the responses to other questions, some of which may be quantitative, then ordered logistic regression may be used. It can be thought of as an extension of the logistic regression model, which applies to dichotomous dependent variables, allowing for more than two (ordered) response categories.

## The model and the proportional odds assumption

The model only applies to data that meet the proportional odds assumption, the meaning of which can be exemplified as follows. Suppose the proportions of members of the statistical population who would answer "poor", "fair", "good", "very good", and "excellent" are respectively p1, p2, p3, p4, p5. Then the logarithms of the odds (not the logarithms of the probabilities) of answering in certain ways are:

${\displaystyle {\begin{array}{rll}{\text{poor}},&\log {\frac {p_{1}}{p_{2}+p_{3}+p_{4}+p_{5}}},&0\\[8pt]{\text{poor or fair}},&\log {\frac {p_{1}+p_{2}}{p_{3}+p_{4}+p_{5}}},&1\\[8pt]{\text{poor, fair, or good}},&\log {\frac {p_{1}+p_{2}+p_{3}}{p_{4}+p_{5}}},&2\\[8pt]{\text{poor, fair, good, or very good}},&\log {\frac {p_{1}+p_{2}+p_{3}+p_{4}}{p_{5}}},&3\end{array}}}$

The proportional odds assumption states that the number added to each of these logarithms to get the next is the same in every case; in other words, these logarithms form an arithmetic sequence.[2] The model states that the number in the last column of the table (the number of times the common difference must be added to the first log-odds) is some linear combination of the other observed variables.
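The equal-spacing property can be illustrated numerically. The sketch below (pure Python; the cutpoint values are hypothetical, chosen only for illustration) constructs five category probabilities whose cumulative log-odds are equally spaced, so the proportional odds assumption holds exactly:

```python
import math

def cumulative_logits(probs):
    """Log-odds of answering in category j or below, for j = 1..K-1."""
    logits = []
    cum = 0.0
    for p in probs[:-1]:
        cum += p
        logits.append(math.log(cum / (1.0 - cum)))
    return logits

# Hypothetical cumulative log-odds -2, -1, 0, 1 (an arithmetic sequence
# with common difference 1), inverted to get the five category probabilities
cutpoints = [-2.0, -1.0, 0.0, 1.0]
cdf = [1.0 / (1.0 + math.exp(-a)) for a in cutpoints] + [1.0]
probs = [cdf[0]] + [cdf[j] - cdf[j - 1] for j in range(1, 5)]

logits = cumulative_logits(probs)
# Under proportional odds, successive cumulative log-odds differ by a constant
gaps = [logits[j + 1] - logits[j] for j in range(3)]
```

Recovering the cumulative logits from the probabilities and differencing them returns the constant step, which is what the "number in the last column" of the table above counts.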

The coefficients in the linear combination cannot be consistently estimated using ordinary least squares; they are usually estimated by maximum likelihood, with the maximum-likelihood estimates computed using iteratively reweighted least squares.
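The objective that maximum likelihood optimizes can be written down directly: each observation contributes the log of the probability of its observed category, which is a difference of two logistic CDF values at adjacent cutpoints. The sketch below (pure Python; the function name and parameterization are illustrative, not a standard API) computes that negative log-likelihood:

```python
import math

def sigmoid(t):
    """Logistic CDF."""
    return 1.0 / (1.0 + math.exp(-t))

def ordered_logit_nll(beta, cutpoints, X, y):
    """Negative log-likelihood of an ordered logit model.

    Categories are coded 0..K-1 with cutpoints mu_1 < ... < mu_{K-1};
    P(y <= j | x) = sigmoid(mu_{j+1} - x'beta).
    """
    mus = [-math.inf] + list(cutpoints) + [math.inf]
    nll = 0.0
    for x, yi in zip(X, y):
        eta = sum(b * xk for b, xk in zip(beta, x))
        upper = 1.0 if mus[yi + 1] == math.inf else sigmoid(mus[yi + 1] - eta)
        lower = 0.0 if mus[yi] == -math.inf else sigmoid(mus[yi] - eta)
        nll -= math.log(upper - lower)
    return nll

# With no covariate effect and one cutpoint at 0, the two categories
# each have probability 1/2, so one observation contributes log(2)
nll_example = ordered_logit_nll([0.0], [0.0], [[1.0]], [0])
```

An optimizer (in practice, iteratively reweighted least squares or a general-purpose Newton-type method) would then minimize this function jointly over `beta` and the cutpoints, subject to the cutpoints remaining ordered.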

Examples of multiple ordered response categories include bond ratings, opinion surveys with responses ranging from "strongly agree" to "strongly disagree," levels of state spending on government programs (high, medium, or low), the level of insurance coverage chosen (none, partial, or full), and employment status (not employed, employed part-time, or fully employed).[3]

Suppose the underlying process to be characterized is

${\displaystyle y^{*}=\mathbf {x} ^{\mathsf {T}}\beta +\varepsilon ,\,}$

where ${\displaystyle y^{*}}$ is the exact but unobserved dependent variable (perhaps the exact level of agreement with the statement proposed by the pollster); ${\displaystyle \mathbf {x} }$ is the vector of independent variables, ${\displaystyle \varepsilon }$ is the error term, and ${\displaystyle \beta }$ is the vector of regression coefficients which we wish to estimate. Further suppose that while we cannot observe ${\displaystyle y^{*}}$, we instead can only observe the categories of response

${\displaystyle y={\begin{cases}0&{\text{if }}y^{*}\leq \mu _{1},\\1&{\text{if }}\mu _{1}<y^{*}\leq \mu _{2},\\2&{\text{if }}\mu _{2}<y^{*}\leq \mu _{3},\\\vdots \\N&{\text{if }}y^{*}>\mu _{N},\end{cases}}}$

where the parameters ${\displaystyle \mu _{i}}$ are the externally imposed endpoints of the observable categories. Then the ordered logit technique will use the observations on y, which are a form of censored data on y*, to fit the parameter vector ${\displaystyle \beta }$.

## Estimation

For details on how the equation is estimated, see the article Ordinal regression.