In statistics, a nuisance parameter is any parameter which is not of immediate interest but which must be accounted for in the analysis of those parameters which are of interest. The classic example of a nuisance parameter is the variance, σ², of a normal distribution, when the mean, μ, is of primary interest.
Nuisance parameters are often variances, but not always; for example, in an errors-in-variables model, the unknown true location of each observation is a nuisance parameter. In general, any parameter which intrudes on the analysis of another may be considered a nuisance parameter. A parameter may also cease to be a "nuisance" if it becomes the object of study, as the variance of a distribution may be.
The general treatment of nuisance parameters can be broadly similar between frequentist and Bayesian approaches to theoretical statistics. It relies on an attempt to partition the likelihood function into components representing information about the parameters of interest and information about the other (nuisance) parameters. This can involve ideas about sufficient statistics and ancillary statistics. When this partition can be achieved, it may be possible to complete a Bayesian analysis for the parameters of interest by determining their joint posterior distribution algebraically. The partition also allows frequentist theory to develop general estimation approaches in the presence of nuisance parameters. If the partition cannot be achieved exactly, it may still be possible to make use of an approximate partition.
In some special cases, it is possible to formulate methods that circumvent the presence of nuisance parameters. The t-test provides a practically useful test because its statistic does not depend on the unknown variance: it is a case where use can be made of a pivotal quantity. However, in other cases no such circumvention is known.
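As an illustration of this pivotal property, the following sketch computes the one-sample t statistic from first principles; because the unknown σ cancels between the numerator and the sample standard deviation, rescaling the data (and the hypothesized mean) leaves the statistic unchanged:

```python
import math

def t_statistic(xs, mu0):
    """One-sample t statistic: t = (xbar - mu0) / (s / sqrt(n)).

    The sample standard deviation s estimates sigma, so sigma cancels
    out of the sampling distribution: under the null hypothesis, t
    follows a Student t distribution with n - 1 degrees of freedom
    whatever the true variance is.  That is what makes t pivotal.
    """
    n = len(xs)
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / (n - 1)  # unbiased sample variance
    return (xbar - mu0) / math.sqrt(s2 / n)
```

Scale invariance can be checked directly: `t_statistic([10, 20, 30], 0)` equals `t_statistic([1, 2, 3], 0)`, since multiplying the data and the null mean by a constant leaves t unchanged.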
Practical approaches to statistical analysis treat nuisance parameters somewhat differently in frequentist and Bayesian methodologies.
A general approach in a frequentist analysis can be based on likelihood-ratio tests using maximum likelihood estimates. These provide both significance tests and confidence intervals for the parameters of interest which are approximately valid for moderate to large sample sizes and which take account of the presence of nuisance parameters. See Basu (1977) for some general discussion and Spall and Garner (1990) for some discussion relative to the identification of parameters in linear dynamic (i.e., state space representation) models.
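A minimal sketch of this idea, for the normal model with the variance as nuisance parameter: the variance is re-estimated by maximum likelihood both under the null hypothesis and unrestrictedly ("profiled out"), and the likelihood-ratio statistic reduces to a simple ratio of the two variance estimates:

```python
import math

def lrt_normal_mean(xs, mu0):
    """Likelihood-ratio statistic for H0: mu = mu0 in a N(mu, sigma^2)
    model, treating sigma^2 as a nuisance parameter.

    sigma^2 is maximized out separately under H0 (mu fixed at mu0) and
    under the alternative.  The statistic simplifies to
    n * log(s0sq / s1sq) and is approximately chi-squared with one
    degree of freedom for large n.
    """
    n = len(xs)
    xbar = sum(xs) / n
    s1sq = sum((x - xbar) ** 2 for x in xs) / n  # unrestricted MLE of sigma^2
    s0sq = sum((x - mu0) ** 2 for x in xs) / n   # MLE of sigma^2 under H0
    return n * math.log(s0sq / s1sq)
```

When mu0 equals the sample mean the two variance estimates coincide and the statistic is zero, as expected.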
In Bayesian analysis, a generally applicable approach creates random samples from the joint posterior distribution of all the parameters: see Markov chain Monte Carlo. Given these, the joint distribution of only the parameters of interest can be readily found by marginalizing over the nuisance parameters. However, this approach may not always be computationally efficient if some or all of the nuisance parameters can be eliminated on a theoretical basis.
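The marginalization step can be sketched for the conjugate normal model, where the joint posterior can be sampled exactly (assuming, for illustration, the standard noninformative prior p(mu, sigma^2) proportional to 1/sigma^2); discarding the nuisance-parameter draws leaves samples from the marginal posterior of the parameter of interest:

```python
import numpy as np

def posterior_mu_samples(xs, n_draws=10_000, rng=None):
    """Draw from the marginal posterior of mu in a N(mu, sigma^2) model
    under the noninformative prior p(mu, sigma^2) ~ 1/sigma^2.

    The joint posterior is sampled in two stages: sigma^2 from a scaled
    inverse-chi-squared distribution, then mu given sigma^2 from a
    normal.  Marginalizing over the nuisance parameter sigma^2 is then
    just a matter of discarding its draws.
    """
    rng = rng or np.random.default_rng()
    xs = np.asarray(xs, dtype=float)
    n, xbar = len(xs), xs.mean()
    ss = ((xs - xbar) ** 2).sum()
    # sigma^2 | data  ~  ss / chi^2_{n-1}  (scaled inverse chi-squared)
    sigma2 = ss / rng.chisquare(n - 1, size=n_draws)
    # mu | sigma^2, data  ~  Normal(xbar, sigma^2 / n)
    mu = rng.normal(xbar, np.sqrt(sigma2 / n))
    return mu  # the sigma^2 draws are simply dropped
```

In this conjugate case the marginal posterior of mu is known in closed form (a Student t distribution centered at the sample mean), so the sampling approach is unnecessary here; it stands in for the general MCMC situation where no such closed form exists.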
- Basu, D. (1977), "On the Elimination of Nuisance Parameters," Journal of the American Statistical Association, vol. 72, pp. 355–366. doi:10.1080/01621459.1977.10481002
- Bernardo, J. M., Smith, A. F. M. (2000) Bayesian Theory. Wiley. ISBN 0-471-49464-X
- Cox, D.R., Hinkley, D.V. (1974) Theoretical Statistics. Chapman and Hall. ISBN 0-412-12420-3
- Spall, J. C. and Garner, J. P. (1990), "Parameter Identification for State-Space Models with Nuisance Parameters," IEEE Transactions on Aerospace and Electronic Systems, vol. 26(6), pp. 992–998.
- Young, G. A., Smith, R. L. (2005) Essentials of Statistical Inference, CUP. ISBN 0-521-83971-8