Super-resolution imaging

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Gwestheimer (talk | contribs) at 18:00, 27 May 2012 (fixed disambiguation, added figures and more explamations). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Superresolution is the term applied to a class of techniques that extend the ordinary resolution limits of imaging systems; it has been used in two connotations:

  • optical or diffractive superresolution, in which object details are transmitted that are beyond the traditional diffraction limit, and
  • geometrical or image-processing superresolution, where various techniques are employed to improve information transfer within the standard resolution limits.

Basic Concepts

Because some of the ideas surrounding superresolution raise serious fundamental issues, there is need at the outset to examine the relevant physical and information theoretical principles.

Diffraction Limit The detail of a physical object that an optical instrument can reproduce in an image has limits that are mandated by inviolate laws of physics, whether formulated by the diffraction equations in the wave theory of light [1] or the Uncertainty Principle for photons in quantum mechanics. [2] Information transfer can never be increased beyond this boundary, but, using clever techniques, packets outside the limits can be swapped for (or multiplexed with) some inside it. [3] One does not so much "break" as "make an end-run around" the diffraction limit. Powerful new procedures probing electromagnetic disturbances at the molecular level (in the so-called near-field) [4] remain fully consonant with Maxwell's equations.


A particularly succinct expression of the diffraction limit is given in the spatial-frequency domain. In Fourier optics, light distributions are expressed as superpositions of a series of grating light patterns in a range of fringe widths, technically spatial frequencies. It is generally taught that diffraction theory stipulates an upper limit, the cut-off spatial frequency, beyond which pattern elements fail to be transferred into the optical image, i.e., are not resolved. But in fact what is set by diffraction theory is the width of the passband, not a fixed upper limit. No fundamental principles of physics are transgressed when a spatial-frequency band beyond the cut-off spatial frequency is swapped for one inside it. In fact this has been implemented in dark-field microscopy for a long time. Nor are information-theoretical rules broken in the inventive procedures for superimposing more than one band [5], [6]: disentangling them in the received image needs assumptions of object invariance during multiple exposures, i.e., the substitution of one kind of uncertainty for another.
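As a numerical illustration of the passband idea (the values below are assumed for the example, not taken from the article), the Abbe relation gives the incoherent cut-off spatial frequency as twice the numerical aperture divided by the wavelength:

```python
# Sketch: the diffraction-limited passband width for incoherent imaging,
# via the Abbe relation f_c = 2 NA / lambda (illustrative values only).
def cutoff_spatial_frequency(wavelength_nm, numerical_aperture):
    """Incoherent cut-off frequency in cycles per micrometre."""
    return 2.0 * numerical_aperture / (wavelength_nm * 1e-3)

# Assumed example: green light through a high-NA oil-immersion objective.
fc = cutoff_spatial_frequency(500, 1.4)
print(round(fc, 1))      # 5.6 cycles/um pass into the image
print(round(1000 / fc))  # 179 nm: the finest transmitted fringe period
```

Any fringe pattern finer than this period lies outside the passband; the band-swapping techniques described above reposition the passband rather than widen it.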


Information When the term superresolution is used for techniques of inferring object details from a statistical treatment of the image within standard resolution limits, for example by averaging multiple exposures, it involves an exchange of one kind of information (extracting signal from noise) for another (the assumption that the target has remained invariant).

Resolution and localization True resolution involves the distinction whether a target, e.g. a star or a spectral line, is single or double, ordinarily requiring separable peaks in the image. When a target is known to be single, its location can be determined with higher precision than the image width by finding the centroid (center of gravity) of its image light distribution. The word ultra-resolution had been proposed for this process [7] but it did not catch on, and the high-precision localization procedure now mostly sails under the flag of superresolution.

In summary: the impressive technical achievements of enhancing the performance of image-forming and image-sensing devices now classified as superresolution utilize to the fullest, but always stay within, the bounds imposed by the laws of physics and information theory.

Techniques to which the term Superresolution has been applied

Optical or diffractive superresolution

Substituting spatial-frequency bands. Though the bandwidth allowable by diffraction is fixed, it can be positioned anywhere in the spatial-frequency spectrum. Dark-field illumination in microscopy is an example. See also aperture synthesis.

The "structured illumination" technique of superresolution is related to Moiré patterns. The target, a band of fine fringes (bottom two-thirds), is beyond the diffraction limit. When a band of somewhat coarser fringes (top two-thirds) is artificially superimposed, a broad set of fringes, easily resolved, is generated (middle third), signifying the presence of the fine fringes, which are not themselves represented in the image.

Multiplexing spatial-frequency bands, e.g., structured illumination (see figure on the left). An image is formed using the normal passband of the optical device. Then some known light structure, for example a set of light fringes that is also within the passband, is superimposed on the target. [6] The image now contains components resulting from the combination of the target and the superimposed light structure, e.g. Moiré fringes, and carries information about target detail which simple, unstructured illumination does not. The "superresolved" components, however, need disentangling to be revealed.
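The Moiré principle behind this can be sketched in one dimension with synthetic fringes (all frequencies below are assumed example values): a target fringe beyond an assumed cut-off, multiplied by an illumination fringe inside it, produces a beat at the difference frequency that does fall within the passband.

```python
import numpy as np

# Sketch of the structured-illumination Moire effect (assumed parameters):
# a target fringe at f_t beyond the cut-off f_c, multiplied by an
# illumination fringe at f_i inside it, yields a beat at |f_t - f_i|.
x = np.linspace(0, 1, 4096, endpoint=False)
f_c, f_t, f_i = 50, 60, 45               # cut-off, target, illumination (cycles)
target = 1 + np.cos(2 * np.pi * f_t * x)
illum = 1 + np.cos(2 * np.pi * f_i * x)
moire = target * illum                   # the product carries the beat term

spectrum = np.abs(np.fft.rfft(moire))
beat = abs(f_t - f_i)
print(beat)                              # 15: inside the passband, f_t is not
# A distinct spectral peak sits at the beat frequency:
print(bool(spectrum[beat] > 100 * spectrum[beat + 2]))
```

The beat component encodes the unresolvable target fringe, but, as the article notes, recovering it requires knowing the illumination pattern and disentangling the mixture.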

Multiple parameter use within the traditional diffraction limit: if a target has no special polarization or wavelength properties, two polarization states or non-overlapping wavelength regions can be used to encode target details, one in a spatial-frequency band inside the cut-off limit, the other beyond it. Both would utilize normal passband transmission but are then separately decoded to reconstitute the target structure with extended resolution.

Probing near-field electromagnetic disturbance. The usual discussion of superresolution involves conventional imaging of an object by an optical system. But modern technology allows probing the electromagnetic disturbance within molecular distances of the source [4], which has superior resolution properties; see also evanescent waves and the development of the new superlens.


Geometrical or image-processing superresolution

Compared to a single image marred by noise during its acquisition or transmission (left), resolution is improved by a suitable combination of several separately obtained images (right). This can be achieved only within the intrinsic resolution capability of the imaging process for revealing such detail.

Multi-exposure image noise reduction. When an image is degraded by noise, there can be more detail in the average of many exposures, even within the diffraction limit. See example on the right.
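A minimal synthetic sketch of this trade (assuming additive Gaussian noise and a target that stays invariant across exposures, as the Information section requires): averaging N frames reduces the noise by roughly the square root of N, without touching the diffraction passband.

```python
import numpy as np

# Sketch (synthetic data): averaging N noisy exposures of an invariant
# scene suppresses additive noise by about sqrt(N), revealing detail the
# single frame hid -- all within the existing resolution limits.
rng = np.random.default_rng(0)
scene = np.zeros(256)
scene[100:110] = 1.0                        # a faint assumed feature
N, sigma = 100, 1.0
frames = scene + rng.normal(0.0, sigma, size=(N, scene.size))

single_noise = np.std(frames[0] - scene)
stacked_noise = np.std(frames.mean(axis=0) - scene)
print(round(single_noise / stacked_noise, 1))  # roughly sqrt(N) = 10
```

The improvement is purely statistical: it buys signal-to-noise with the assumption of target invariance, exactly the information exchange described above.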

Single-frame deblurring. Known defects in a given imaging situation, such as defocus or aberrations, can sometimes be mitigated in whole or in part by suitable spatial-frequency filtering of even a single image. Such procedures all stay within the diffraction-mandated passband, and do not extend it.
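One common form of such spatial-frequency filtering is Wiener-style deconvolution; the sketch below (synthetic 1-D data, assumed blur kernel and noise level) re-weights the frequencies a known blur attenuated, while frequencies the passband removed stay unrecoverable.

```python
import numpy as np

# Sketch: single-frame deblurring by spatial-frequency (Wiener-style)
# filtering of a synthetic 1-D signal with a known 9-pixel boxcar blur.
rng = np.random.default_rng(1)
n = 256
signal = np.zeros(n)
signal[120:136] = 1.0                       # assumed sharp-edged feature
kernel = np.zeros(n)
kernel[:9] = 1.0 / 9.0                      # known defocus-like blur
kernel = np.roll(kernel, -4)                # centre the kernel at index 0

H = np.fft.rfft(kernel)                     # blur transfer function
blurred = np.fft.irfft(np.fft.rfft(signal) * H, n)
noisy = blurred + rng.normal(0, 0.01, n)

eps = 1e-2                                  # regularisation: limits noise gain
wiener = np.conj(H) / (np.abs(H) ** 2 + eps)
restored = np.fft.irfft(np.fft.rfft(noisy) * wiener, n)

# Restoration error falls well below the blur-induced error:
print(bool(np.mean((restored - signal) ** 2)
           < np.mean((noisy - signal) ** 2)))
```

Note that the filter only rebalances amplitudes inside the diffraction-mandated passband, in line with the article's point that such procedures do not extend it.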

Sub-pixel Image Localization. The location of a single source can be determined by computing the "center of gravity" (centroid) of the light distribution extending over several adjacent pixels. Provided that there is enough light, this can be achieved with arbitrary precision, very much better than the pixel width of the detecting apparatus and the resolution limit for deciding whether the source is single or double. This technique, which requires the presupposition that all the light comes from a single source, is at the basis of what has become known as superresolution microscopy, e.g. STORM, where fluorescent probes attached to molecules give nanoscale distance information. It is also the mechanism underlying visual hyperacuity.
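The centroid computation is simple enough to sketch with a synthetic Gaussian spot (position and width below are assumed example values): the true position lies between pixel centres, yet the estimate recovers it to a small fraction of a pixel.

```python
import numpy as np

# Sketch: locating a single point source to sub-pixel precision by taking
# the centroid of its (synthetic, Gaussian) image spot. Valid only under
# the presupposition that all light comes from one source.
pixels = np.arange(40.0)                    # pixel centre coordinates
true_pos, width = 20.3, 2.0                 # assumed spot centre and width
spot = np.exp(-((pixels - true_pos) ** 2) / (2 * width ** 2))

centroid = np.sum(pixels * spot) / np.sum(spot)
print(round(centroid, 2))                   # 20.3: far below 1-pixel error
```

With photon noise added, the achievable precision scales with the spot width divided by the square root of the photon count, which is why "enough light" is the operative condition above.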

Bayesian Induction from within to beyond traditional diffraction limit. Some object features, though beyond the diffraction limit, may be known to be associated with other object features that are within the limits and hence contained in the image. Then conclusions can be drawn, using statistical methods, from the available image data about the presence of the full object [8]. The classical example is Toraldo di Francia's proposition [9] of judging whether an image is that of a single or double star by determining whether its width exceeds the spread from a single star. This can be achieved at separations well below the classical resolution bounds, and requires the prior limitation to the choice "single or double?"
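Toraldo di Francia's width criterion can be sketched with synthetic 1-D Gaussian images (widths and separation below are assumed): two sources separated well below the resolution limit still broaden the combined image, so comparing the measured width against the known single-source width decides the restricted "single or double?" question.

```python
import numpy as np

# Sketch of the single-or-double decision (synthetic Gaussian profiles):
# an unresolved pair is detectably wider than a known single source.
x = np.linspace(-10, 10, 2001)
sigma = 1.0                                  # known single-source image width

def rms_width(profile):
    """RMS width of an intensity profile about its centroid."""
    mean = np.sum(x * profile) / np.sum(profile)
    return np.sqrt(np.sum((x - mean) ** 2 * profile) / np.sum(profile))

single = np.exp(-x**2 / (2 * sigma**2))
sep = 0.6                                    # well below the ~2*sigma limit
double = (np.exp(-(x - sep / 2)**2 / (2 * sigma**2))
          + np.exp(-(x + sep / 2)**2 / (2 * sigma**2)))

print(round(rms_width(single), 3))           # 1.0, the single-source width
print(bool(rms_width(double) > rms_width(single)))  # broadened: "double"
```

In a noisy measurement this becomes a statistical decision between the two width hypotheses, which is the sense in which the prior restriction to "single or double?" supplies the missing information.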

References

  1. ^ Born M, Wolf E. Principles of Optics. Cambridge Univ. Press, any edition.
  2. ^ Fox M, 2007. Quantum Optics. Oxford.
  3. ^ Zalevsky Z, Mendlovic D, 2003. Optical Superresolution. Springer.
  4. ^ a b Betzig E, Trautman JK, 1992. Near-field optics: microscopy, spectroscopy, and surface modification beyond the diffraction limit. Science 257, 189-195.
  5. ^ Lukosz W, 1966. Optical systems with resolving power exceeding the classical limit. J. Opt. Soc. Am. 56, 1463-1472.
  6. ^ a b Gustafsson M, 2000. Surpassing the lateral resolution limit by a factor of two using structured illumination microscopy. J. Microscopy 198, 82-87.
  7. ^ Cox IJ, Sheppard CJR, 1986. Information capacity and resolution in an optical system. J. Opt. Soc. Am. A 3, 1152-1158.
  8. ^ Harris JL, 1964. Resolving power and decision making. J. Opt. Soc. Am. 54, 606-611.
  9. ^ Toraldo di Francia G, 1955. Resolving power and information. J. Opt. Soc. Am. 45, 497-501.