The resolution of an optical imaging system – a microscope, telescope, or camera – can be limited by factors such as imperfections in the lenses or misalignment. However, there is a fundamental limit to the resolution of any optical system, set by the physics of diffraction. An optical system whose resolution performance reaches the instrument's theoretical limit is said to be diffraction-limited.
The diffraction-limited angular resolution of a telescopic instrument is proportional to the wavelength of the light being observed, and inversely proportional to the diameter of its objective's entrance aperture. For telescopes with circular apertures, the size of the smallest feature in an image that is diffraction limited is the size of the Airy disk. As one decreases the size of the aperture of a telescopic lens, diffraction proportionately increases. At small apertures, such as f/22, most modern lenses are limited only by diffraction and not by aberrations or other imperfections in the construction.
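The scaling described above is commonly expressed through the Rayleigh criterion, θ ≈ 1.22 λ/D, for a circular aperture (the 1.22 coefficient reappears later in this article). The following is an illustrative sketch, with the function name and example telescope size chosen for the example:

```python
import math

def rayleigh_angular_resolution(wavelength_m, aperture_diameter_m):
    """Diffraction-limited angular resolution (radians) of a circular
    aperture, using the Rayleigh criterion: theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_diameter_m

# Green light (500 nm) through a 100 mm objective (a small amateur telescope):
theta = rayleigh_angular_resolution(500e-9, 0.1)  # 6.1e-6 rad
arcsec = math.degrees(theta) * 3600
print(f"{arcsec:.2f} arcseconds")  # about 1.26 arcseconds
```

Doubling the aperture diameter halves the angle, while doubling the wavelength doubles it, matching the proportionalities stated above.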
For microscopic instruments, the diffraction-limited spatial resolution is proportional to the light wavelength, and inversely proportional to the numerical aperture of either the objective or the object illumination source, whichever is smaller.
In astronomy, a diffraction-limited observation is one that achieves the resolution of a theoretically ideal objective of the size of the instrument used. However, most observations from Earth are seeing-limited due to atmospheric effects. Optical telescopes on Earth work at a much lower resolution than the diffraction limit because of the distortion introduced by the passage of light through several kilometres of turbulent atmosphere. Advanced observatories have started using adaptive optics technology, resulting in greater image resolution for faint targets, but it is still difficult to reach the diffraction limit using adaptive optics.
Radio telescopes are frequently diffraction-limited, because the wavelengths they use (from millimeters to meters) are so long that the atmospheric distortion is negligible. Space-based telescopes (such as Hubble, or a number of non-optical telescopes) always work at their diffraction limit, if their design is free of optical aberration.
The beam from a laser with near-ideal beam propagation properties may be described as being diffraction-limited. A diffraction-limited laser beam, passed through diffraction-limited optics, will remain diffraction-limited, and will have a spatial or angular extent essentially equal to the resolution of the optics at the wavelength of the laser.
The Abbe diffraction limit for a microscope
The observation of sub-wavelength structures with microscopes is difficult because of the Abbe diffraction limit. Ernst Abbe found in 1873 that light with wavelength λ, traveling in a medium with refractive index n and converging to a spot with half-angle θ, will have a minimum resolvable distance of

d = λ / (2n sin θ)
The portion of the denominator, n sin θ, is called the numerical aperture (NA) and can reach about 1.4–1.6 in modern optics, hence the Abbe limit is d = λ/2.8. Considering green light around 500 nm and an NA of 1, the Abbe limit is roughly d = λ/2 = 250 nm (0.25 μm), which is small compared to most biological cells (1 μm to 100 μm), but large compared to viruses (100 nm), proteins (10 nm) and less complex molecules (1 nm). To increase the resolution, shorter wavelengths can be used, as in UV and X-ray microscopes. These techniques offer better resolution but are expensive, suffer from a lack of contrast in biological samples, and may damage the sample.
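Plugging numbers into the Abbe formula reproduces the figures above. A minimal sketch (the helper function is illustrative, not from any particular library):

```python
def abbe_limit(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit for a microscope: d = lambda / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

print(abbe_limit(500, 1.0))  # 250.0 nm, the lambda/2 figure in the text
print(abbe_limit(500, 1.4))  # about 178.6 nm with a high-NA (oil immersion) objective
```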
Implications for digital photography
In a digital camera, diffraction effects interact with the effects of the regular pixel grid. The combined effect of the different parts of an optical system is determined by the convolution of the point spread functions (PSF). The point spread function of a diffraction-limited lens is simply the Airy disk. The point spread function of the camera, also called the instrument response function (IRF), can be approximated by a rectangle function with a width equivalent to the pixel pitch. A more complete derivation of the modulation transfer function (derived from the PSF) of image sensors is given by Fliegel. Whatever the exact instrument response function, it is largely independent of the f-number of the lens. Thus at different f-numbers a camera may operate in three different regimes, as follows:
- If the spread of the IRF is small with respect to the spread of the diffraction PSF, the system may be said to be essentially diffraction limited (so long as the lens itself is diffraction limited).
- If the spread of the diffraction PSF is small with respect to the IRF, the system is instrument limited.
- If the spreads of the PSF and IRF are similar, both impact the available resolution of the system.
The spread of the diffraction-limited PSF is approximated by the diameter of the first null of the Airy disk,

d = 2.44λN,
where λ is the wavelength of the light and N is the f-number of the imaging optics. For f/8 and green (0.5 μm wavelength) light, d = 9.76 μm. This is similar to the pixel size for the majority of commercially available 'full frame' (43 mm sensor diagonal) cameras and so these will operate in regime 3 for f-numbers around 8 (few lenses are close to diffraction limited at f-numbers smaller than 8). Cameras with smaller sensors will tend to have smaller pixels, but their lenses will be designed for use at smaller f-numbers and it is likely that they will also operate in regime 3 for those f-numbers for which their lenses are diffraction limited.
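The f/8 figure quoted above can be checked directly from the first-null diameter formula; a short sketch:

```python
def airy_first_null_diameter(wavelength_um, f_number):
    """Diameter of the first null of the Airy disk: d = 2.44 * lambda * N."""
    return 2.44 * wavelength_um * f_number

d = airy_first_null_diameter(0.5, 8)  # green light at f/8
print(d)  # 9.76 (micrometres), matching the figure in the text
```

Stopping the same lens down to f/16 doubles the spot diameter, which is why small apertures push a camera toward the diffraction-limited regime.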
Obtaining higher resolution
There are techniques for producing images that appear to have higher resolution than allowed by simple use of diffraction-limited optics. Although these techniques improve some aspect of resolution, they generally come at an enormous increase in cost and complexity. Usually the technique is only appropriate for a small subset of imaging problems, with several general approaches outlined below.
Extending numerical aperture
The effective resolution of a microscope can be improved by illuminating from the side.
In conventional microscopes such as bright-field or differential interference contrast, this is achieved by using a condenser. Under spatially incoherent conditions, the image is understood as a composite of images illuminated from each point on the condenser, each of which covers a different portion of the object's spatial frequencies. This effectively improves the resolution by, at most, a factor of two.
Simultaneously illuminating from all angles (fully open condenser) drives down interferometric contrast. In conventional microscopes, the maximum resolution (fully open condenser, at NA = 1) is rarely used. Further, under partially coherent conditions, the recorded image is often non-linear with the object's scattering potential, especially when looking at non-self-luminous (non-fluorescent) objects. To boost contrast, and sometimes to linearize the system, unconventional microscopes (with structured illumination) synthesize the condenser illumination by acquiring a sequence of images with known illumination parameters. Typically, these images are composited to form a single image with data covering a larger portion of the object's spatial frequencies when compared to using a fully closed condenser (which is also rarely used).
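The factor-of-two improvement from condenser illumination is commonly estimated with the partially coherent resolution formula d = λ/(NA_objective + NA_condenser); the sketch below uses that estimate (the function name and example NAs are illustrative):

```python
def condenser_extended_limit(wavelength_nm, na_objective, na_condenser):
    """Resolution estimate under partially coherent illumination:
    d = lambda / (NA_obj + NA_cond). Matching the condenser NA to the
    objective NA halves the coherent-illumination limit d = lambda / NA_obj."""
    return wavelength_nm / (na_objective + na_condenser)

print(condenser_extended_limit(500, 1.4, 0.0))  # closed condenser: ~357 nm
print(condenser_extended_limit(500, 1.4, 1.4))  # fully open condenser: ~179 nm
```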
Another technique, 4Pi microscopy, uses two opposing objectives to double the effective numerical aperture, effectively halving the diffraction limit, by collecting the forward- and backward-scattered light. When a transparent sample is imaged with a combination of incoherent or structured illumination, while collecting both forward- and backward-scattered light, it is possible to image the complete scattering sphere.
Unlike methods relying on localization, such systems are still limited by the diffraction limit of the illumination (condenser) and collection optics (objective), although in practice they can provide substantial resolution improvements compared to conventional methods.
Near-field techniques

The diffraction limit is only valid in the far field as it assumes that no evanescent fields reach the detector. Various near-field techniques that operate less than ≈1 wavelength of light away from the image plane can obtain substantially higher resolution. These techniques exploit the fact that the evanescent field contains information beyond the diffraction limit which can be used to construct very high resolution images, in principle beating the diffraction limit by a factor proportional to how well a specific imaging system can detect the near-field signal. For scattered light imaging, instruments such as near-field scanning optical microscopes and Nano-FTIR, which are built atop atomic force microscope systems, can be used to achieve resolutions of 10–50 nm. The data recorded by such instruments often requires substantial processing, essentially solving an optical inverse problem for each image.
In fluorescence microscopy the excitation and emission are typically on different wavelengths. In total internal reflection fluorescence microscopy, a thin portion of the sample located immediately on the cover glass is excited with an evanescent field and recorded with a conventional diffraction-limited objective, improving the axial resolution.
However, because these techniques cannot image beyond one wavelength, they cannot be used to image into objects thicker than one wavelength, which limits their applicability.
Far-field techniques

Far-field imaging techniques are most desirable for imaging objects that are large compared to the illumination wavelength but that contain fine structure. This includes nearly all biological applications in which cells span multiple wavelengths but contain structure down to molecular scales. In recent years several techniques have shown that sub-diffraction limited imaging is possible over macroscopic distances. These techniques usually exploit optical nonlinearity in a material's reflected light to generate resolution beyond the diffraction limit.
Among these techniques, the STED microscope has been one of the most successful. In STED, multiple laser beams are used to first excite, and then quench, fluorescent dyes. The nonlinear response to illumination caused by the quenching process, in which adding more light causes the image to become less bright, generates sub-diffraction-limited information about the location of dye molecules, allowing resolution far beyond the diffraction limit provided high illumination intensities are used.
Laser beams

The limits on focusing or collimating a laser beam are very similar to the limits on imaging with a microscope or telescope. The only difference is that laser beams are typically soft-edged. This non-uniformity in light distribution leads to a coefficient slightly different from the 1.22 value familiar in imaging, but the scaling is exactly the same.
The beam quality of a laser beam is characterized by how well its propagation matches an ideal Gaussian beam at the same wavelength. The beam quality factor M squared (M2) is found by measuring the size of the beam at its waist, and its divergence far from the waist, and taking the product of the two, known as the beam parameter product. The ratio of this measured beam parameter product to that of the ideal is defined as M2, so that M2=1 describes an ideal beam. The M2 value of a beam is conserved when it is transformed by diffraction-limited optics.
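A sketch of the M² calculation described above, assuming the beam parameter product is taken as waist radius times far-field half-angle divergence, for which an ideal Gaussian beam gives λ/π (the function and example numbers are illustrative):

```python
import math

def m_squared(waist_radius_m, divergence_half_angle_rad, wavelength_m):
    """M^2 = (measured beam parameter product) / (ideal Gaussian BPP).
    The BPP is the waist radius times the far-field half-angle divergence;
    an ideal Gaussian beam has BPP = lambda / pi, so M^2 = 1 is ideal."""
    bpp_measured = waist_radius_m * divergence_half_angle_rad
    bpp_ideal = wavelength_m / math.pi
    return bpp_measured / bpp_ideal

# A 1064 nm beam with a 0.5 mm waist radius: an ideal Gaussian would diverge
# at lambda/(pi * w0) ~ 0.68 mrad; a measured divergence of 0.81 mrad gives:
print(m_squared(0.5e-3, 0.81e-3, 1064e-9))  # about 1.2
```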
The outputs of many low and moderately powered lasers have M2 values of 1.2 or less, and are essentially diffraction-limited.
Other waves

The same equations apply to other wave-based sensors, such as radar and the human ear.
As opposed to light waves (i.e., photons), massive particles have a different relationship between their quantum mechanical wavelength and their energy. This relationship indicates that the effective "de Broglie" wavelength is inversely proportional to the momentum of the particle. For example, an electron at an energy of 10 keV has a wavelength of about 0.012 nm, allowing the electron microscope (SEM or TEM) to achieve high-resolution images. Other massive particles such as helium, neon, and gallium ions have been used to produce images at resolutions beyond what can be attained with visible light. Such instruments provide nanometer-scale imaging, analysis and fabrication capabilities at the expense of system complexity.
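The 10 keV electron figure follows from the non-relativistic de Broglie relation λ = h/√(2mE); a sketch using CODATA-rounded constants (relativistic corrections, roughly 1% at this energy, are ignored):

```python
import math

H = 6.626e-34    # Planck constant, J s
M_E = 9.109e-31  # electron rest mass, kg
EV = 1.602e-19   # joules per electronvolt

def de_broglie_wavelength(energy_ev, mass_kg=M_E):
    """Non-relativistic de Broglie wavelength: lambda = h / sqrt(2 m E)."""
    energy_j = energy_ev * EV
    return H / math.sqrt(2 * mass_kg * energy_j)

lam = de_broglie_wavelength(10e3)  # 10 keV electron
print(f"{lam * 1e9:.4f} nm")  # about 0.012 nm, far below optical wavelengths
```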
- Born, Max; Emil Wolf (1997). Principles of Optics. Cambridge University Press. ISBN 0-521-63921-2.
- Lipson, Lipson and Tannhauser (1998). Optical Physics. United Kingdom: Cambridge. p. 340. ISBN 978-0-521-43047-0.
- Fliegel, Karel (December 2004). "Modeling and Measurement of Image Sensor Characteristics" (PDF). Radioengineering. 13 (4).
- Niek van Hulst (2009). "Many photons get more out of diffraction". Optics & Photonics Focus. 4 (1).
- Streibl, Norbert (February 1985). "Three-dimensional imaging by a microscope". Journal of the Optical Society of America A. 2 (2): 121–127. Bibcode:1985JOSAA...2..121S. doi:10.1364/JOSAA.2.000121.
- Sheppard, C.J.R.; Mao, X.Q. (September 1989). "Three-dimensional imaging in a microscope". Journal of the Optical Society of America A. 6 (9): 1260–1269. Bibcode:1989JOSAA...6.1260S. doi:10.1364/JOSAA.6.001260.