Spectral density estimation
In statistical signal processing, the goal of spectral density estimation is to estimate the spectral density (also known as the power spectrum) of a random signal from a sequence of time samples of the signal. Intuitively speaking, the spectral density characterizes the frequency content of the signal. The purpose of estimating the spectral density is to detect any periodicities in the data, by observing peaks at the frequencies corresponding to these periodicities.
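As an illustrative sketch (not from the article), the simplest non-parametric estimate is the periodogram: the squared magnitude of the DFT of the samples. The 5 Hz tone, sample rate, and noise level below are made-up values chosen so that a peak appears at the frequency of the periodicity, assuming NumPy is available:

```python
import numpy as np

# Hypothetical test signal: a 5 Hz sinusoid sampled at 100 Hz, plus white noise.
fs = 100.0
n = 1024
rng = np.random.default_rng(0)
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.standard_normal(n)

# Periodogram: squared magnitude of the DFT, scaled to a power density.
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
pxx = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)

# The periodicity shows up as a peak near 5 Hz.
peak_freq = freqs[np.argmax(pxx)]
print(peak_freq)
```

The raw periodogram is an inconsistent estimator (its variance does not shrink as the record grows), which motivates the averaging and smoothing methods listed below.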
SDE should be distinguished from the field of frequency estimation, which assumes a limited (usually small) number of generating frequencies plus noise and seeks to find their frequencies. SDE makes no assumption on the number of components and seeks to estimate the whole generating spectrum.
Techniques for spectrum estimation can generally be divided into parametric and non-parametric methods. The parametric approaches assume that the underlying stationary stochastic process has a certain structure which can be described using a small number of parameters (for example, using an auto-regressive or moving average model). In these approaches, the task is to estimate the parameters of the model that describes the stochastic process. By contrast, non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure.
Following is a partial list of spectral density estimation techniques:
- Welch's method
- Bartlett's method
- Autoregressive moving average estimation, based on fitting to an ARMA model
- Maximum entropy spectral estimation
- Least-squares spectral analysis, based on least-squares fitting to known frequencies
- Non-uniform discrete Fourier transform
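Two of the methods above can be sketched with SciPy, assuming it is available; the signal, tone frequencies, and segment length are made-up values. Welch's method averages periodograms of overlapping windowed segments, trading frequency resolution for reduced variance; Bartlett's method is the special case of non-overlapping rectangular segments:

```python
import numpy as np
from scipy import signal

# Hypothetical signal: tones at 50 Hz and 120 Hz in white noise.
fs = 1000.0
n = 8192
rng = np.random.default_rng(1)
t = np.arange(n) / fs
x = (np.sin(2 * np.pi * 50.0 * t)
     + 0.5 * np.sin(2 * np.pi * 120.0 * t)
     + rng.standard_normal(n))

# Welch's method: averaged periodograms of overlapping Hann-windowed segments.
f_welch, p_welch = signal.welch(x, fs=fs, nperseg=1024)

# Bartlett's method: the same idea with non-overlapping rectangular segments.
f_bart, p_bart = signal.welch(x, fs=fs, window='boxcar', nperseg=1024, noverlap=0)

print(f_welch[np.argmax(p_welch)])
```

The choice of `nperseg` sets the resolution/variance trade-off: longer segments resolve closer tones, shorter segments average more periodograms and smooth the estimate.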
Finite number of tones
Frequency estimation is the process of estimating the complex frequency components of a signal in the presence of noise, given assumptions about the number of components. This contrasts with the general methods above, which do not make prior assumptions about the components.
The most common methods involve identifying the noise subspace to extract these components. The most popular noise-subspace methods of frequency estimation are Pisarenko's method, MUSIC, the eigenvector method, and the minimum-norm method.
For example, consider a signal x(n) consisting of a sum of p complex exponentials in the presence of white noise, w(n). This may be represented as

x(n) = \sum_{i=1}^{p} A_i e^{j n \omega_i} + w(n).

Thus, the power spectrum of x(n) consists of p impulses in addition to the power spectral density of the noise.
The noise subspace methods of frequency estimation are based on eigen decomposition of the autocorrelation matrix into a signal subspace and a noise subspace. After these subspaces are identified, a frequency estimation function is used to find the component frequencies from the noise subspace.
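The procedure can be sketched for the MUSIC pseudospectrum, assuming NumPy; the frequencies, snapshot length, and noise level below are made-up values. The autocorrelation matrix is estimated from overlapping snapshots, its eigendecomposition is split into signal and noise subspaces, and the pseudospectrum peaks where candidate steering vectors are orthogonal to the noise subspace:

```python
import numpy as np

# Hypothetical signal: p = 2 complex exponentials in complex white noise.
rng = np.random.default_rng(2)
n = 512
w_true = np.array([0.6, 1.7])           # true angular frequencies (rad/sample)
t = np.arange(n)
x = sum(np.exp(1j * w * t) for w in w_true) + 0.3 * (
    rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Estimate an m x m autocorrelation matrix from overlapping length-m snapshots.
m = 16
snapshots = np.lib.stride_tricks.sliding_window_view(x, m)
r = snapshots.T @ snapshots.conj() / snapshots.shape[0]

# Eigendecomposition: the m - p smallest eigenvalues span the noise subspace
# (np.linalg.eigh returns eigenvalues in ascending order).
p = 2
vals, vecs = np.linalg.eigh(r)
noise = vecs[:, : m - p]

# MUSIC pseudospectrum: large where steering vectors a(w) = [1, e^{jw}, ...]
# are orthogonal to the noise subspace.
w_grid = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)
steer = np.exp(1j * np.outer(np.arange(m), w_grid))          # m x grid
pseudo = 1.0 / np.sum(np.abs(noise.conj().T @ steer) ** 2, axis=0)

# Take the p largest local maxima as the frequency estimates.
is_peak = (pseudo > np.roll(pseudo, 1)) & (pseudo > np.roll(pseudo, -1))
peaks = np.where(is_peak)[0]
est = np.sort(w_grid[peaks[np.argsort(pseudo[peaks])[-p:]]])
print(est)
```

Pisarenko's method is the limiting case m = p + 1, where the noise subspace is a single eigenvector; the eigenvector and minimum-norm methods differ mainly in how they weight the noise eigenvectors in the denominator.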
If one only wants to estimate the single loudest frequency, one can use a pitch detection algorithm.
If one wants to know all the (possibly complex) frequency components of a received signal (including transmitted signal and noise), one uses a discrete Fourier transform or some other Fourier-related transform.