A quasi-maximum likelihood estimate (QMLE, also known as a pseudo-likelihood estimate or a composite likelihood estimate) is an estimate of a parameter θ in a statistical model that is formed by maximizing a function related to the logarithm of the likelihood function, but not equal to it. In contrast, the maximum likelihood estimate maximizes the actual log-likelihood function for the data and model. The function maximized to form a QMLE is often a simplified version of the actual log-likelihood function. A common way to form such a simplified function is to use the log-likelihood of a misspecified model that treats certain data values as independent, even when in reality they are not. This removes from the model any parameters used to characterize those dependencies. Doing so makes sense only if the dependency structure is a nuisance parameter with respect to the goals of the analysis.
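As an illustrative sketch (not from the source; the AR(1) data-generating process, parameter values, and function names are assumptions chosen for the example), the following treats serially dependent observations as if they were i.i.d. normal and maximizes the resulting independence log-likelihood. The autocorrelation parameter drops out of the misspecified model entirely, yet the mean is still estimated sensibly:

```python
import numpy as np
from scipy.optimize import minimize

def simulate_ar1(n, mu, rho, sigma, rng):
    """Draw n observations from a stationary AR(1) process with mean mu."""
    x = np.empty(n)
    x[0] = mu + rng.normal(scale=sigma / np.sqrt(1 - rho**2))
    for t in range(1, n):
        x[t] = mu + rho * (x[t - 1] - mu) + rng.normal(scale=sigma)
    return x

def independence_neg_loglik(params, data):
    """Negative log-likelihood of a misspecified model that treats the
    (actually serially dependent) data as i.i.d. N(mu, s2).
    Note the dependency parameter rho appears nowhere here."""
    mu, log_s = params          # log_s keeps the scale positive
    s2 = np.exp(2 * log_s)
    return 0.5 * np.sum(np.log(2 * np.pi * s2) + (data - mu) ** 2 / s2)

rng = np.random.default_rng(0)
data = simulate_ar1(5000, mu=2.0, rho=0.6, sigma=1.0, rng=rng)

# The QMLE maximizes the simplified (independence) log-likelihood.
res = minimize(independence_neg_loglik, x0=[0.0, 0.0], args=(data,))
mu_qmle = res.x[0]
```

Under this particular quasi-likelihood the QMLE of the mean coincides with the sample mean; the simplification costs efficiency, not validity of the point estimate.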
As long as the quasi-likelihood function being maximized is not oversimplified, the QMLE (or composite likelihood estimate) is consistent and asymptotically normal. It is less efficient than the maximum likelihood estimate, but may be only slightly so if the quasi-likelihood is constructed to minimize the loss of information relative to the actual likelihood. Standard approaches to statistical inference used with maximum likelihood estimates, such as forming confidence intervals and statistics for model comparison, can be generalized to the quasi-maximum likelihood setting.
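One such generalization of standard inference is replacing the naive (misspecified-model) variance of the estimate with a robust "sandwich" variance. The sketch below (an illustration under assumed parameter values; the Newey–West/Bartlett weighting and the truncation lag are one common but arbitrary choice, not prescribed by the source) compares the naive variance of the sample-mean QMLE with a long-run variance estimate that accounts for the serial dependence the quasi-likelihood ignored:

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 20000, 0.6
# AR(1) data with mean 0: dependent, though the quasi-likelihood assumed i.i.d.
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0] / np.sqrt(1 - rho**2)
for t in range(1, n):
    x[t] = rho * x[t - 1] + eps[t]

mu_hat = x.mean()                      # QMLE of the mean under the i.i.d. model
naive_var = x.var(ddof=1) / n          # variance implied by the misspecified model

# Sandwich-style long-run variance (Newey-West with Bartlett weights);
# L is a truncation lag chosen arbitrarily for this sketch.
L = 40
d = x - mu_hat
lrv = np.mean(d * d)
for k in range(1, L + 1):
    w = 1 - k / (L + 1)
    lrv += 2 * w * np.mean(d[k:] * d[:-k])
sandwich_var = lrv / n
```

With positive autocorrelation the naive variance understates the true sampling variability of the QMLE, so confidence intervals built from it would be too narrow; the sandwich estimate corrects this without requiring the dependence structure to be modeled in the likelihood.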