Score (statistics)


In statistics, the score, score function, efficient score[1] or informant[2] indicates how sensitively a likelihood function L(\theta; X) depends on its parameter \theta. Explicitly, the score for \theta is the gradient of the log-likelihood with respect to \theta.

The score plays an important role in several aspects of inference. For example:

  • in formulating a test statistic for a locally most powerful test;[3]
  • in approximating the error in a locally most powerful test;[4]
  • in demonstrating the asymptotic sufficiency of a maximum likelihood estimate;[4]
  • in the formulation of confidence intervals;[5]
  • in demonstrations of the Cramér–Rao inequality.[6]

The score function also plays an important role in computational statistics, as it can form part of the computation of maximum likelihood estimates.

Definition

The score or efficient score[1] is the gradient (the vector of partial derivatives), with respect to some parameter \theta, of the logarithm (commonly the natural logarithm) of the likelihood function (the log-likelihood). If the observation is X and its likelihood is L(\theta;X), then the score V can be found through the chain rule:


V \equiv V(\theta, X) = \frac{\partial}{\partial\theta} \log L(\theta;X) = \frac{1}{L(\theta;X)} \frac{\partial L(\theta;X)}{\partial\theta}.

Thus the score V indicates the sensitivity of L(\theta;X) (its derivative normalized by its value). Note that V is a function of \theta and the observation X, so that, in general, it is not a statistic. However in certain applications, such as the score test, the score is evaluated at a specific value of \theta (such as a null-hypothesis value, or at the maximum likelihood estimate of \theta), in which case the result is a statistic.
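
As an illustrative numerical check (a sketch only, assuming a normal model with unit variance, which is not otherwise used in this article), the analytic score of a single observation can be compared with a finite-difference derivative of the log-likelihood:

 import numpy as np

 # Log-likelihood of a single observation x under an assumed N(theta, 1) model.
 def log_likelihood(theta, x):
     return -0.5 * np.log(2 * np.pi) - 0.5 * (x - theta) ** 2

 # Analytic score for this model: d/dtheta log L(theta; x) = x - theta.
 def score(theta, x):
     return x - theta

 theta, x, h = 0.3, 1.7, 1e-6
 finite_diff = (log_likelihood(theta + h, x) - log_likelihood(theta - h, x)) / (2 * h)
 print(score(theta, x), finite_diff)  # both approximately 1.4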

In older literature, the term "linear score" may be used to refer to the score with respect to infinitesimal translation of a given density. This convention arises from a time when the primary parameter of interest was the mean or median of a distribution. In this case, the likelihood of an observation is given by a density of the form L(\theta;X)=f(X+\theta). The "linear score" is then defined as


V_{\rm linear} = \frac{\partial}{\partial X} \log f(X).
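
As a brief worked example (taking f to be the standard normal density, an illustrative choice not made above), f(X) \propto \exp(-X^2/2), so

V_{\rm linear} = \frac{\partial}{\partial X}\left(-\tfrac{X^2}{2} + \text{const}\right) = -X,

i.e. the linear score of a standard normal observation is simply its negation.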

Properties

Mean

Under some regularity conditions, the expected value of V with respect to the observation x, given \theta, written \mathbb{E}(V\mid\theta), is zero. To see this, rewrite the likelihood function L as a probability density function L(\theta; x) = f(x; \theta). Then:


\mathbb{E}(V\mid\theta)
=\int_{-\infty}^{+\infty} \frac{\partial}{\partial\theta} \log L(\theta; x) \, f(x; \theta) \, dx
=\int_{-\infty}^{+\infty} \frac{1}{f(x; \theta)}\frac{\partial f(x; \theta)}{\partial \theta} f(x; \theta)\, dx
=\int_{-\infty}^{+\infty} \frac{\partial f(x; \theta)}{\partial \theta} \, dx

If certain differentiability conditions are met (see Leibniz integral rule), the integral may be rewritten as


\frac{\partial}{\partial\theta} \int_{-\infty}^{+\infty} f(x; \theta) \, dx
= \frac{\partial}{\partial\theta} 1 = 0.

It is worth restating the above result in words: the expected value of the score is zero. Thus, if one were to repeatedly sample from some distribution and repeatedly calculate the score, the mean value of the scores would tend to zero as the number of repeated samples approached infinity.
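
This behaviour can be illustrated by simulation. A minimal sketch, assuming the same N(θ, 1) model as in the sketch above, for which the score of one observation is x − θ:

 import numpy as np

 rng = np.random.default_rng(0)
 theta = 2.0                        # true parameter value
 x = rng.normal(loc=theta, size=1_000_000)

 # Score of each observation under N(theta, 1): d/dtheta log f(x; theta) = x - theta.
 scores = x - theta
 print(scores.mean())               # close to 0, consistent with E(V | theta) = 0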

Variance

Main article: Fisher information

The variance of the score is known as the Fisher information and is written \mathcal{I}(\theta). Because the expectation of the score is zero, this may be written as


\mathcal{I}(\theta) = \mathbb{E}\left\{\left.\left[\frac{\partial}{\partial\theta} \log L(\theta;X)\right]^2 \right| \theta \right\}.

Note that the Fisher information, as defined above, is not a function of any particular observation, as the random variable X has been averaged out. This concept of information is useful when comparing two methods of observation of some random process.
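
Continuing the illustrative N(θ, 1) sketch from above (a model for which the Fisher information of a single observation is 1), the sample variance of simulated scores estimates \mathcal{I}(\theta):

 import numpy as np

 rng = np.random.default_rng(0)
 theta = 2.0
 x = rng.normal(loc=theta, size=1_000_000)

 scores = x - theta     # score of each observation under the assumed N(theta, 1) model
 print(scores.var())    # close to 1, the Fisher information of N(theta, 1)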

Examples

Bernoulli process

Consider a Bernoulli process with n trials, yielding A successes and B failures; the probability of success in each trial is θ.

Then the likelihood L is


L(\theta;A,B)=\frac{(A+B)!}{A!B!}\theta^A(1-\theta)^B,

so the score V is


V=\frac{1}{L}\frac{\partial L}{\partial\theta} = \frac{A}{\theta}-\frac{B}{1-\theta}.
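
The algebra can be verified numerically. A minimal sketch with illustrative values of A, B and θ, comparing the closed form above with a finite-difference derivative of log L:

 from math import comb, log

 A, B, theta = 7, 3, 0.4

 # Log of the likelihood (A+B choose A) * theta^A * (1 - theta)^B.
 def log_L(t):
     return log(comb(A + B, A)) + A * log(t) + B * log(1 - t)

 analytic = A / theta - B / (1 - theta)             # A/theta - B/(1-theta)
 h = 1e-6
 numeric = (log_L(theta + h) - log_L(theta - h)) / (2 * h)
 print(analytic, numeric)                           # both approximately 12.5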

We can now verify that the expectation of the score is zero. Noting that the expectation of A is nθ and the expectation of B is n(1 − θ) [recall that A and B are random variables], we can see that the expectation of V is


\mathbb{E}(V) = \frac{n\theta}{\theta} - \frac{n(1-\theta)}{1-\theta} = n - n = 0.

We can also check the variance of V. We know that A + B = n (so B = n − A) and that the variance of A is nθ(1 − θ), so the variance of V is


\begin{align}
\operatorname{var}(V) & =\operatorname{var}\left(\frac{A}{\theta}-\frac{n-A}{1-\theta}\right)
=\operatorname{var}\left(A\left(\frac{1}{\theta}+\frac{1}{1-\theta}\right)\right) \\
& =\left(\frac{1}{\theta}+\frac{1}{1-\theta}\right)^2\operatorname{var}(A)
=\frac{n}{\theta(1-\theta)}.
\end{align}
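
Both moments can also be checked by simulation. A sketch with illustrative values of n and θ:

 import numpy as np

 rng = np.random.default_rng(0)
 n, theta, reps = 50, 0.3, 200_000

 A = rng.binomial(n, theta, size=reps)        # successes in each repetition
 V = A / theta - (n - A) / (1 - theta)        # score evaluated at the true theta

 print(V.mean())                              # close to 0
 print(V.var(), n / (theta * (1 - theta)))    # both close to n / (theta (1 - theta))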

Binary outcome model

For models with binary outcomes (Y = 1 or 0), the model can be scored with the logarithm of its predictions

 S = Y \log( p ) + ( 1 - Y ) \log( 1 - p )

where p is the probability of Y = 1 under the model to be estimated and S is the score.[7]
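
A minimal sketch of this scoring rule applied to a set of predictions (the outcome and probability values below are illustrative only):

 import numpy as np

 y = np.array([1, 0, 1, 1, 0])             # observed binary outcomes
 p = np.array([0.9, 0.2, 0.6, 0.8, 0.4])   # predicted probabilities of Y = 1

 # Per-observation log score: Y log(p) + (1 - Y) log(1 - p)
 S = y * np.log(p) + (1 - y) * np.log(1 - p)
 print(S.sum())                            # total log score of the model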

Applications

Scoring algorithm

Main article: Scoring algorithm

The scoring algorithm is an iterative method for numerically determining the maximum likelihood estimator.
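
A minimal sketch of such an iteration, assuming the Bernoulli example from above, where the score is A/θ − (n − A)/(1 − θ) and the Fisher information is n/(θ(1 − θ)):

 # Fisher scoring for the Bernoulli success probability theta:
 # repeatedly update theta <- theta + score(theta) / information(theta).
 def fisher_scoring(A, n, theta=0.5, steps=10):
     for _ in range(steps):
         score = A / theta - (n - A) / (1 - theta)
         info = n / (theta * (1 - theta))
         theta = theta + score / info
     return theta

 print(fisher_scoring(A=30, n=100))   # converges to the maximum likelihood estimate A/n = 0.3

For this particular model a single update already lands on the maximum likelihood estimate; in more complicated models the update is iterated until the score is sufficiently close to zero.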

Score test

Main article: Score test

Notes

  1. Cox & Hinkley (1974), p. 107
  2. Chentsov, N.N. (2001), "Informant", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
  3. Cox & Hinkley (1974), p. 113
  4. Cox & Hinkley (1974), p. 295
  5. Cox & Hinkley (1974), pp. 222–223
  6. Cox & Hinkley (1974), p. 254
  7. Steyerberg, E.W., Vickers, A.J., Cook, N.R., Gerds, T., Gonen, M., Obuchowski, N., Pencina, M.J., Kattan, M.W. (2010) "Assessing the performance of prediction models: a framework for traditional and novel measures". Epidemiology 21 (1), 128–138. DOI: 10.1097/EDE.0b013e3181c30fb2

References

  • Cox, D.R., Hinkley, D.V. (1974) Theoretical Statistics, Chapman & Hall. ISBN 0-412-12420-3
  • Schervish, Mark J. (1995). Theory of Statistics. New York: Springer. Section 2.3.1. ISBN 0-387-94546-6.