Point estimation

In statistics, point estimation involves the use of sample data to calculate a single value (known as a point estimate since it identifies a point in some parameter space) which is to serve as a "best guess" or "best estimate" of an unknown population parameter (for example, the population mean). More formally, it is the application of a point estimator to the data to obtain a point estimate.
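
For example, if the unknown parameter is a population mean, the sample mean is a natural point estimator. A minimal sketch (the data values below are hypothetical):

```python
# The sample mean as a point estimate of an unknown population mean.
# The observations are hypothetical.
sample = [4.9, 5.3, 5.1, 4.7, 5.6, 5.0]

# The point estimator is the sample-mean function; applying it to the
# observed data yields a single number, the point estimate.
point_estimate = sum(sample) / len(sample)
print(point_estimate)  # 5.1
```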

Point estimation can be contrasted with interval estimation: interval estimates are typically either confidence intervals, in the case of frequentist inference, or credible intervals, in the case of Bayesian inference.

Point estimators

There are a variety of point estimators, each with different properties.

Bayesian point estimation

Bayesian inference is typically based on the posterior distribution. Many Bayesian point estimators are statistics of central tendency of the posterior distribution, e.g., its mean, median, or mode:

  • Posterior mean, which minimizes the (posterior) risk (expected loss) for a squared-error loss function; in Bayesian estimation, the risk is defined in terms of the posterior distribution, as observed by Gauss (a short derivation follows this list).[1]
  • Posterior median, which minimizes the posterior risk for the absolute-value loss function, as observed by Laplace.[1][2]
  • Maximum a posteriori (MAP), which finds a maximum of the posterior distribution; for a uniform prior, the MAP estimator coincides with the maximum-likelihood estimator.
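
The squared-error case can be made explicit with the standard decision-theoretic argument, with μ denoting the posterior mean E[θ | x]:

```latex
\begin{aligned}
r(a) &= \mathbb{E}\left[(\theta - a)^2 \mid x\right] \\
     &= \mathbb{E}\left[(\theta - \mu)^2 \mid x\right]
        + 2(\mu - a)\,\mathbb{E}\left[\theta - \mu \mid x\right]
        + (\mu - a)^2 \\
     &= \mathbb{E}\left[(\theta - \mu)^2 \mid x\right] + (\mu - a)^2,
\end{aligned}
```

The cross term vanishes because E[θ − μ | x] = 0, so the posterior risk r(a) is minimized exactly at a = μ. An analogous argument with the absolute-value loss yields the posterior median.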

The MAP estimator has good asymptotic properties, even for many difficult problems on which the maximum-likelihood estimator runs into difficulties. For regular problems, where the maximum-likelihood estimator is consistent, the maximum-likelihood estimator ultimately agrees with the MAP estimator.[3][4][5] Bayesian estimators are admissible, by Wald's theorem.[4][6]
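
The three estimators above can be compared directly in a conjugate model. A minimal sketch, assuming a Beta(2, 2) prior and hypothetical binomial data (7 successes in 10 trials):

```python
# Posterior mean, median, and MAP for a Beta-Binomial model.
from scipy.stats import beta

alpha0, beta0 = 2.0, 2.0   # prior hyperparameters (hypothetical)
k, n = 7, 10               # observed successes / trials (hypothetical)

# By conjugacy, the posterior is Beta(alpha0 + k, beta0 + n - k).
a, b = alpha0 + k, beta0 + (n - k)
posterior = beta(a, b)

post_mean = posterior.mean()        # minimizes squared-error loss
post_median = posterior.median()    # minimizes absolute-value loss
post_map = (a - 1) / (a + b - 2)    # posterior mode (valid for a, b > 1)

print(post_mean, post_median, post_map)  # ≈ 0.643, 0.650, 0.667
```

Three different point estimates arise from the same posterior; with a uniform Beta(1, 1) prior, the MAP estimate reduces to k/n, the maximum-likelihood estimate.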

The Minimum Message Length (MML) point estimator is grounded in Bayesian information theory and is not so directly related to the posterior distribution.

Special cases of Bayesian filters are important, notably the Kalman filter (which computes the exact posterior for linear-Gaussian state-space models) and the particle filter: each recursively updates the posterior distribution of a latent state, from which a point estimate such as the posterior mean can be read off at every time step.
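
A minimal sketch of the one-dimensional Kalman filter, assuming a random-walk state model; all numeric values are hypothetical:

```python
# One-dimensional Kalman filter: the posterior over the latent state
# stays Gaussian, and its mean serves as the point estimate at each step.

def kalman_step(mean, var, z, process_var=1.0, obs_var=2.0):
    """One predict/update cycle for a random-walk state model."""
    # Predict: the state drifts, so uncertainty grows.
    mean_pred, var_pred = mean, var + process_var
    # Update: fold in the new noisy observation z.
    gain = var_pred / (var_pred + obs_var)
    mean_new = mean_pred + gain * (z - mean_pred)
    var_new = (1.0 - gain) * var_pred
    return mean_new, var_new

mean, var = 0.0, 10.0           # diffuse Gaussian prior on the state
for z in [1.2, 0.9, 1.4, 1.1]:  # hypothetical noisy observations
    mean, var = kalman_step(mean, var, z)
    print(f"point estimate: {mean:.3f}  (posterior variance {var:.3f})")
```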

Several methods of computational statistics have close connections with Bayesian analysis, notably Markov chain Monte Carlo (MCMC): when the posterior cannot be computed in closed form, point estimates such as the posterior mean are approximated from simulated draws.
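
A minimal sketch of this idea, using a random-walk Metropolis sampler on a conjugate normal model (the prior, likelihood, data, and tuning constants are all hypothetical) so the answer can be checked analytically:

```python
# Estimating a posterior mean with a random-walk Metropolis sampler.
import math
import random

random.seed(0)
data = [1.1, 0.8, 1.4, 0.9, 1.2]  # hypothetical observations

def log_post(theta):
    # Standard-normal prior and N(theta, 1) likelihood, unnormalized.
    log_lik = -0.5 * sum((x - theta) ** 2 for x in data)
    return log_lik - 0.5 * theta ** 2

theta, samples = 0.0, []
for _ in range(20000):
    proposal = theta + random.gauss(0.0, 0.5)  # symmetric random walk
    if math.log(random.random()) < log_post(proposal) - log_post(theta):
        theta = proposal                        # accept the move
    samples.append(theta)

kept = samples[2000:]                           # discard burn-in
print(sum(kept) / len(kept))                    # ≈ 0.9
```

For this conjugate setup the exact posterior mean is sum(data) / (n + 1) = 5.4 / 6 = 0.9, so the sampler's output can be verified directly.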

Properties of point estimates

Desirable properties of a point estimator include unbiasedness (its expected value equals the parameter being estimated), consistency (it converges to the parameter as the sample size grows), and efficiency (it has low variance among competing estimators). No single estimator is best by every criterion; the sketch below illustrates bias for a familiar example.
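
A minimal simulation (hypothetical parameters) showing that the maximum-likelihood variance estimator, which divides by n, is biased, while dividing by n − 1 removes the bias:

```python
# Bias of the divide-by-n variance estimator versus the divide-by-(n-1)
# sample variance, checked by simulation.
import random

random.seed(1)
true_var, n, trials = 4.0, 5, 20000
biased_sum = unbiased_sum = 0.0

for _ in range(trials):
    xs = [random.gauss(0.0, true_var ** 0.5) for _ in range(n)]
    m = sum(xs) / n
    ss = sum((x - m) ** 2 for x in xs)
    biased_sum += ss / n          # MLE of the variance (biased)
    unbiased_sum += ss / (n - 1)  # sample variance (unbiased)

print(biased_sum / trials)    # ≈ 3.2, i.e. true_var * (n - 1) / n
print(unbiased_sum / trials)  # ≈ 4.0, matching true_var
```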

Notes

  1. ^ a b Dodge, Yadolah, ed. (1987). Statistical data analysis based on the L1-norm and related methods: Papers from the First International Conference held at Neuchâtel, August 31–September 4, 1987. North-Holland Publishing.
  2. ^ Jaynes, E. T. (2007). Probability Theory: The Logic of Science (5th printing ed.). Cambridge University Press. p. 172. ISBN 978-0-521-59271-0.
  3. ^ Ferguson, Thomas S. (1996). A Course in Large Sample Theory. Chapman & Hall. ISBN 0-412-04371-8.
  4. ^ a b Le Cam, Lucien (1986). Asymptotic Methods in Statistical Decision Theory. Springer-Verlag. ISBN 0-387-96307-3.
  5. ^ Ferguson, Thomas S. (1982). "An inconsistent maximum likelihood estimate". Journal of the American Statistical Association. 77 (380): 831–834. doi:10.1080/01621459.1982.10477894. JSTOR 2287314.
  6. ^ Lehmann, E. L.; Casella, G. (1998). Theory of Point Estimation (2nd ed.). Springer. ISBN 0-387-98502-6.
