Multispectral image

From Wikipedia, the free encyclopedia
Video by the Solar Dynamics Observatory (SDO) simultaneously showing sections of the Sun at various wavelengths

A multispectral image is one that captures image data at specific frequencies across the electromagnetic spectrum. The wavelengths may be separated by filters or detected with instruments that are sensitive to particular wavelengths, including light from frequencies beyond the visible range, such as infrared. Spectral imaging can allow extraction of additional information that the human eye fails to capture with its receptors for red, green, and blue. It was originally developed for space-based imaging.

Multispectral images are the main type of image acquired by remote sensing (RS) radiometers. Dividing the spectrum into many bands, multispectral imaging is the opposite of panchromatic imaging, which records only the total intensity of radiation falling on each pixel. Usually, satellites carry three or more radiometers (Landsat has seven). Each acquires one digital image (in remote sensing, called a 'scene') in a small band of the spectrum, from the visible region of 0.4 µm to 0.7 µm, called the red-green-blue (RGB) region, out to infrared wavelengths of 0.7 µm to 10 µm or more, classified as near infrared (NIR), middle infrared (MIR) and far infrared (FIR, or thermal). In the Landsat case, the seven scenes comprise a seven-band multispectral image. Spectral imaging with more numerous bands, finer spectral resolution, or wider spectral coverage may be called hyperspectral or ultraspectral.
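The distinction between a multispectral cube and a panchromatic image can be sketched with a small array example. This is an illustrative NumPy sketch only: the 7-band count echoes the Landsat example above, but the values are fabricated.

```python
import numpy as np

# A multispectral image can be modeled as a 3-D array: height x width x bands.
# Here we fabricate a tiny 4x4 scene with 7 bands and reflectances in [0, 1].
rng = np.random.default_rng(0)
cube = rng.random((4, 4, 7))

# A panchromatic sensor records only the total intensity per pixel;
# averaging across the band axis approximates that collapse.
panchromatic = cube.mean(axis=2)

print(cube.shape)          # (4, 4, 7) -- one 'scene' per band
print(panchromatic.shape)  # (4, 4)   -- a single intensity per pixel
```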

This technology has also assisted in the interpretation of ancient papyri, such as those found at Herculaneum, by imaging the fragments in the infrared range (1000 nm). Often, the text on such documents appears to the naked eye as black ink on black paper. At 1000 nm, the difference in how paper and ink reflect infrared light makes the text clearly readable. The technique has also been used to image the Archimedes palimpsest, by photographing the parchment leaves at wavelengths from 365 to 870 nm and then using advanced digital image processing techniques to reveal the undertext containing Archimedes' work.

The availability of wavelengths for remote sensing and imaging is limited by the infrared window and the optical window.

Spectral bands

The wavelengths are approximate; exact values depend on the particular satellite's instruments:

  • Blue, 450-515/520 nm, is used for atmospheric and deep-water imaging, and can reach depths of up to 150 feet (46 m) in clear water.
  • Green, 515/520-590/600 nm, is used for imaging vegetation and deep-water structures, up to 90 feet (27 m) in clear water.
  • Red, 600/630-680/690 nm, is used for imaging man-made objects, water up to 30 feet (9 m) deep, soil, and vegetation.
  • Near infrared, 750-900 nm, is used primarily for imaging vegetation.
  • Mid-infrared, 1550-1750 nm, is used for imaging vegetation, soil moisture content, and some forest fires.
  • Mid-infrared, 2080-2350 nm, is used for imaging soil, moisture, geological features, silicates, clays, and fires.
  • Thermal infrared, 10400-12500 nm, uses emitted instead of reflected radiation to image geological structures, thermal differences in water currents, and fires, and for night studies.
  • Radar and related technologies are useful for mapping terrain and for detecting various objects.
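The red and near-infrared bands listed above are often combined into the widely used normalized difference vegetation index (NDVI), which exploits the fact that healthy vegetation absorbs red light but reflects strongly in the near infrared. A minimal NumPy sketch, with made-up reflectance values:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance.
    NDVI = (NIR - red) / (NIR + red), in the range [-1, 1]."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    denom = nir + red
    # Guard against division by zero where both bands are zero.
    return np.where(denom == 0, 0.0, (nir - red) / np.where(denom == 0, 1, denom))

# Vegetation reflects strongly in NIR and absorbs red, so its NDVI is high;
# bare soil has a much lower index. (Reflectance values are illustrative.)
veg = ndvi(red=0.05, nir=0.60)   # ~0.85
soil = ndvi(red=0.30, nir=0.40)  # ~0.14
```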

Spectral band usage

Further information: False-color.

Different combinations of spectral bands are used for different purposes. They are usually displayed through the red, green, and blue channels; the mapping of bands to colors depends on the purpose of the image and on the analyst's preferences. Thermal infrared is often omitted from consideration due to its poor spatial resolution, except for special purposes.

  • True-color uses only the red, green, and blue channels, mapped to their respective colors. Like a plain color photograph, it is good for analyzing man-made objects and is easy for beginner analysts to understand.
  • Green-red-infrared, where the blue channel is replaced with near infrared, is used for vegetation, which is highly reflective in near IR and therefore shows as blue. This combination is often used to detect vegetation and camouflage.
  • Blue-NIR-MIR, where the blue channel uses visible blue, green uses NIR (so vegetation stays green), and MIR is shown as red. Such images allow the water depth, vegetation coverage, soil moisture content, and the presence of fires to be seen, all in a single image.

Many other combinations are in use. NIR is often shown as red, causing vegetation-covered areas to appear red.
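A false-color composite is simply a re-mapping of spectral bands onto the three display channels. A minimal NumPy sketch; the 7-band ordering in the comment is an assumption for illustration, not a standard:

```python
import numpy as np

# Assumed band ordering for this sketch (not a sensor standard):
# 0=blue, 1=green, 2=red, 3=NIR, 4=MIR(1550-1750 nm), 5=MIR(2080-2350 nm), 6=thermal.
def composite(cube, band_to_channel):
    """Stack three bands of an HxWxB cube into an HxWx3 display image.
    band_to_channel = (band shown as red, band shown as green, band shown as blue)."""
    r, g, b = band_to_channel
    return np.stack([cube[..., r], cube[..., g], cube[..., b]], axis=-1)

cube = np.zeros((2, 2, 7))                # tiny placeholder scene
true_color = composite(cube, (2, 1, 0))   # red, green, blue bands in their own channels
cir = composite(cube, (3, 2, 1))          # NIR shown as red: vegetation appears red
```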

Classification

Since remote sensing images record the multispectral responses of many different surface features, it is hard to identify a feature type directly by visual inspection. The remote sensing data therefore has to be classified first, and then processed with various data-enhancement techniques to help the user understand the features present in the image.

Such classification is a complex task involving rigorous validation of the training samples, depending on the classification algorithm used. The techniques can be grouped into two main types:

  • Supervised classification techniques
  • Unsupervised classification techniques

Supervised classification makes use of training samples: areas on the ground for which ground truth is available, that is, whose contents are known. The spectral signatures of the training areas are used to search for similar signatures in the remaining pixels of the image, which are then classified accordingly. This use of training samples is what makes the classification supervised. Expert knowledge is very important in this method, since poor selection of the training samples can introduce bias and badly affect the accuracy of classification. One popular technique is maximum likelihood classification, which computes the probability of a pixel belonging to each class (i.e., feature) and assigns the pixel to its most probable class.
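A minimal sketch of maximum likelihood classification in NumPy, under the simplifying assumption of Gaussian classes with independent bands (diagonal covariance). The class means and variances below are stand-ins for statistics that would be estimated from real training samples:

```python
import numpy as np

def max_likelihood_classify(pixels, means, variances):
    """Assign each pixel spectrum to the class with the highest Gaussian
    log-likelihood, assuming independent bands (diagonal covariance)."""
    pixels = np.asarray(pixels, float)        # (N, B) pixel spectra
    means = np.asarray(means, float)          # (C, B) per-class band means
    variances = np.asarray(variances, float)  # (C, B) per-class band variances
    # log N(x | mu, sigma^2), per band, for every pixel/class pair: (N, C, B)
    diff = pixels[:, None, :] - means[None, :, :]
    log_lik = -0.5 * (np.log(2 * np.pi * variances)[None] + diff**2 / variances[None])
    # Sum over bands, then pick the most probable class for each pixel.
    return np.argmax(log_lik.sum(axis=2), axis=1)

# Two hypothetical classes in (red, NIR) space, e.g. vegetation vs. bare soil:
means = [[0.1, 0.6], [0.4, 0.2]]
variances = [[0.01, 0.01], [0.01, 0.01]]
labels = max_likelihood_classify([[0.12, 0.55], [0.38, 0.25]], means, variances)
# labels -> [0, 1]: each pixel is allotted to the nearer class distribution.
```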

Unsupervised classification requires no prior knowledge to classify the features of the image. Instead, the natural clustering or grouping of the pixel values (the gray levels of the pixels) is observed. A threshold is then defined to fix the number of classes: the finer the threshold, the more classes result, though beyond a certain limit a single underlying class ends up split across several clusters, so that only within-class variation is being represented. After the clusters are formed, ground truth validation is done to identify the class each image pixel belongs to. Thus, in unsupervised classification, no a priori information about the classes is required. One popular unsupervised method is the K-means clustering algorithm.
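The K-means idea can be sketched in a few lines. This is a toy NumPy implementation for illustration, not a production clusterer, and the pixel spectra are made up:

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal K-means on pixel spectra: alternate nearest-centre assignment
    and centre updates until the grouping stabilizes."""
    pixels = np.asarray(pixels, float)  # (N, B) pixel spectra
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest spectral cluster centre.
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels, centers

# Two obvious spectral groupings; K-means separates them with no labels given.
pixels = np.array([[0.1, 0.6], [0.12, 0.58], [0.4, 0.2], [0.42, 0.22]])
labels, centers = kmeans(pixels, k=2)
# Pixels 0 and 1 share one cluster; pixels 2 and 3 share the other.
```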

Multispectral data analysis software

  • MicroMSI is endorsed by the NGA.
  • Opticks is an open-source remote sensing application.
  • Multispec is an established freeware multispectral analysis software.
  • Gerbil is a newer open-source multispectral visualization and analysis framework.
