In signal processing, oversampling is the process of sampling a signal with a sampling frequency significantly higher than twice the bandwidth or highest frequency of the signal being sampled. Oversampling helps avoid aliasing, improves resolution and reduces noise.
Oversampling factor 
An oversampled signal is said to be oversampled by a factor of β, defined as

β = fs / (2B)

where
- fs is the sampling frequency
- B is the bandwidth or highest frequency of the signal; the Nyquist rate is 2B.
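With fs the sampling frequency and B the bandwidth, the factor can be computed directly; a minimal Python sketch:

```python
def oversampling_factor(fs: float, bandwidth: float) -> float:
    """Oversampling factor beta = fs / (2 * B), where 2 * B is the Nyquist rate."""
    return fs / (2.0 * bandwidth)

# A signal with 100 Hz bandwidth sampled at 800 Hz is oversampled by beta = 4.
print(oversampling_factor(800.0, 100.0))  # → 4.0
```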
There are three main reasons for performing oversampling:
Oversampling can make it easier to realize analog anti-aliasing filters. Without oversampling, it is very difficult to implement filters with the sharp cutoff necessary to maximize use of the available bandwidth without exceeding the Nyquist limit. By increasing the bandwidth of the sampled signal, design constraints for the anti-aliasing filter may be relaxed. Once sampled, the signal can be digitally filtered and downsampled to the desired sampling frequency. In modern integrated circuit technology, digital filters are easier to implement than comparable analog filters.
In practice, oversampling is implemented in order to achieve cheaper higher-resolution A/D and D/A conversion. For instance, to implement a 24-bit converter, it is sufficient to use a 20-bit converter that can run at 256 times the target sampling rate. Combining 256 consecutive 20-bit samples can increase the signal-to-noise ratio by a factor of 16 (the square root of the number of samples averaged), adding 4 bits to the resolution, producing a single sample with 24-bit resolution.
The number of samples required to get n bits of additional data precision is:

N = (2^n)^2 = 2^(2n)

The sum of the N samples is then divided by 2^n to get the mean sample, scaled up to an integer with n additional bits:

mean = (sum of 2^(2n) samples) / 2^n
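As a sketch of this scheme, assuming a hypothetical `read_adc` callable that returns one raw converter sample: summing 2^(2n) samples and dividing by 2^n yields a mean with n extra bits.

```python
import random

def oversample_average(read_adc, n_extra_bits: int) -> int:
    """Gain n extra bits of precision: sum 2^(2n) samples, divide by 2^n.

    `read_adc` is a hypothetical callable returning one raw ADC sample;
    the scheme only works if the input carries enough dither noise.
    """
    num_samples = 2 ** (2 * n_extra_bits)   # N = 2^(2n)
    total = sum(read_adc() for _ in range(num_samples))
    return total // (2 ** n_extra_bits)     # scaled mean with n extra bits

# Simulated coarse ADC: a constant level plus ~1 LSB of dither noise.
random.seed(0)
true_level = 1000.25  # in units of the coarse converter's LSB
adc = lambda: round(true_level + random.uniform(-0.5, 0.5))
print(oversample_average(adc, 2))  # ≈ 4001, i.e. 1000.25 with 2 extra bits
```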
Note that this averaging is possible only if the signal contains uniformly distributed noise large enough to be registered by the A/D converter. If not, all samples will have the same value, the average will be identical to that value, and the oversampling will have no effect: the conversion result will be as inaccurate as a single measurement by the low-resolution core A/D converter. This is a counter-intuitive case where adding some dithering noise can improve the result instead of degrading it.
If multiple samples of the same quantity are taken with uncorrelated noise added to each sample, then averaging N samples reduces the noise power by a factor of 1/N. If, for example, we oversample by a factor of 4, the signal-to-noise ratio improves by a factor of 4 in terms of power, which corresponds to a factor of 2 improvement in terms of voltage.
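The 1/N noise-power reduction can be checked numerically; a small simulation using only the standard library (the signal level 1.0 and noise sigma 0.1 are arbitrary choices for illustration):

```python
import random
import statistics

random.seed(1)
N = 4  # oversampling factor
noisy = lambda: 1.0 + random.gauss(0.0, 0.1)  # signal 1.0 plus noise, sigma = 0.1

# Average N uncorrelated samples per output point, repeated many times:
averaged = [statistics.fmean(noisy() for _ in range(N)) for _ in range(100_000)]

noise_power_single = 0.1 ** 2
noise_power_avg = statistics.variance(averaged)
print(noise_power_single / noise_power_avg)  # ≈ 4: noise power improves by N
```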
Certain kinds of A/D converters known as delta-sigma converters produce disproportionately more quantization noise in the upper portion of their output spectrum. By running these converters at some multiple of the target sampling rate and low-pass filtering the oversampled signal down to half the target sampling rate, it is possible to obtain a result with less noise than the average over the entire band of the converter. Delta-sigma converters use a technique called noise shaping to move the quantization noise to higher frequencies.
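A first-order delta-sigma modulator can be sketched in a few lines: the quantization error is fed back through an integrator, so the 1-bit output toggles in a way whose local average tracks the slowly varying input while the error energy is pushed toward high frequencies. This is a toy illustration, not a production converter design:

```python
def sigma_delta_1bit(samples):
    """First-order delta-sigma modulator producing a ±1 bitstream.

    The integrator accumulates the error between input and feedback;
    the 1-bit quantizer output is fed back, shaping the quantization
    noise toward high frequencies.
    """
    integrator = 0.0
    feedback = 0.0
    bits = []
    for x in samples:                 # x expected in [-1, 1]
        integrator += x - feedback    # accumulate the quantization error
        feedback = 1.0 if integrator >= 0.0 else -1.0
        bits.append(feedback)         # 1-bit quantizer output
    return bits

# For a constant input of 0.5, the bitstream's average converges to 0.5:
bits = sigma_delta_1bit([0.5] * 1000)
print(sum(bits) / len(bits))  # ≈ 0.5
```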
For example, consider a signal with a bandwidth or highest frequency of B = 100 Hz. The sampling theorem states that the sampling frequency must be greater than 200 Hz. Sampling at exactly 200 Hz would give β = 1; sampling at four times that rate (β = 4) requires a sampling frequency of 800 Hz. This gives the anti-aliasing filter a transition band of 300 Hz ((fs/2) − B = (800 Hz/2) − 100 Hz = 300 Hz) instead of 0 Hz if the sampling frequency were 200 Hz.
After being sampled at 800 Hz, the signal (ostensibly with a bandwidth of 400 Hz) could be digitally filtered to have a bandwidth of 100 Hz and then downsampled to a 200 Hz sample frequency without aliasing.
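The filter-then-downsample step can be sketched as follows. The moving-average filter used here is only a crude stand-in for a proper FIR low-pass design; it serves to illustrate the two-step structure, not a real anti-aliasing filter:

```python
def decimate(signal, factor):
    """Crude decimation sketch: moving-average low-pass of length `factor`,
    then keep every `factor`-th sample. A real design would use a proper
    FIR low-pass filter before discarding samples."""
    filtered = [
        sum(signal[i - factor + 1:i + 1]) / factor
        for i in range(factor - 1, len(signal))
    ]
    return filtered[::factor]

# One second of a dummy signal captured at 800 Hz, reduced to 200 Hz:
x = [float(i % 8) for i in range(800)]
y = decimate(x, 4)
print(len(y))  # 200 samples → 200 Hz output rate
```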
References
- Nauman Uppal (2004-08-30). "Upsampling vs. Oversampling for Digital Audio". Retrieved 2012-10-06. "Without increasing the sample rate, we would need to design a very sharp filter that would have to cutoff at just past 20kHz and be 80-100dB down at 22kHz. Such a filter is not only very difficult and expensive to implement, but may sacrifice some of the audible spectrum in its rolloff."
- See standard error (statistics).
- John Watkinson, The Art of Digital Audio, ISBN 0-240-51320-7.