Image fusion


Image fusion is the process of gathering all the important information from multiple images and including it in fewer images, usually a single one. This single image is more informative and accurate than any individual source image, and it contains all the necessary information. The purpose of image fusion is not only to reduce the amount of data but also to construct images that are more appropriate and understandable for human and machine perception.[1] In computer vision, multisensor image fusion is the process of combining relevant information from two or more images into a single image.[2] The resulting image is more informative than any of the input images.[3]

In remote sensing applications, the increasing availability of spaceborne sensors motivates the development of image fusion algorithms. Several situations in image processing require both high spatial and high spectral resolution in a single image, yet most of the available equipment is not capable of providing such data convincingly. Image fusion techniques allow the integration of different information sources, and the fused image can have complementary spatial and spectral resolution characteristics. However, standard image fusion techniques can distort the spectral information of the multispectral data while merging.

In satellite imaging, two types of images are available. The panchromatic image acquired by satellites is transmitted at the maximum resolution available, while the multispectral data are transmitted at a coarser resolution, typically two to four times lower. At the receiving station, the panchromatic image is merged with the multispectral data to convey more information.

Many methods exist to perform image fusion. The most basic is the high-pass filtering technique. Later techniques are based on the discrete wavelet transform, uniform rational filter banks, and the Laplacian pyramid.
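
As an illustration, the following is a minimal Python sketch of the high-pass filtering idea (assuming NumPy and SciPy are available; hpf_fuse is a hypothetical name, and the inputs are a panchromatic image and one already-upsampled multispectral band as float arrays of the same shape):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def hpf_fuse(pan, ms_band, size=5):
        """High-pass-filter fusion: inject the panchromatic image's
        high-frequency detail into one (already upsampled) multispectral band.

        pan, ms_band : 2-D float arrays of identical shape
        size         : side of the box filter that defines "low frequency"
        """
        detail = pan - uniform_filter(pan, size=size)  # high-pass part of pan
        return ms_band + detail                        # add detail to the band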

Multi-Focus Image Fusion

Multi-focus image fusion is used to collect useful and necessary information from input images with different focus depths in order to create an output image that ideally contains all the information of the input images.[1][4] In a visual sensor network (VSN), the sensors are cameras that record images and video sequences. In many VSN applications, a single camera cannot give a perfect illustration of the scene, including all of its details, because of the limited depth of focus of its optical lens.[5] Only the objects located at the focal plane of the camera appear focused and sharp, while the other parts of the image are blurred. A VSN can capture images with different depths of focus in the scene using several cameras. Because cameras generate large amounts of data compared to other sensors, such as pressure and temperature sensors, and because of limitations such as bandwidth, energy consumption, and processing time, it is essential to process the local input images to reduce the amount of data transmitted. These reasons underline the necessity of multi-focus image fusion. Multi-focus image fusion is a process that combines the input multi-focus images into a single image containing all the important information of the input images, giving a more accurate description of the scene than any single input image.[1]
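
The DCT-domain methods cited above select, block by block, the input with the higher variance or Laplacian energy. The following is a simplified spatial-domain sketch of the same select-the-sharper-region idea in Python (not the method of [1]; multifocus_fuse is a hypothetical name, and the inputs are assumed to be registered grayscale float arrays):

    import numpy as np
    from scipy.ndimage import laplace, uniform_filter

    def multifocus_fuse(img_a, img_b, win=7):
        """Per pixel, keep whichever input is locally sharper.

        Sharpness is measured as the local energy of the Laplacian,
        a common focus measure for multi-focus fusion.
        """
        focus_a = uniform_filter(laplace(img_a) ** 2, size=win)
        focus_b = uniform_filter(laplace(img_b) ** 2, size=win)
        return np.where(focus_a >= focus_b, img_a, img_b)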

Why Image Fusion

Multi-sensor data fusion has become a discipline that demands more general formal solutions for a range of applications. Several situations in image processing require both high spatial and high spectral information in a single image. This is important in remote sensing. However, the instruments are not capable of providing such information, either by design or because of observational constraints. One possible solution is data fusion.

Standard Image Fusion Methods

Image fusion methods can be broadly classified into two groups: spatial domain fusion and transform domain fusion.

Fusion methods such as averaging, the Brovey method, principal component analysis (PCA), and IHS-based methods fall under spatial domain approaches. Another important spatial domain fusion method is the high-pass filtering technique, in which high-frequency detail is injected into an upsampled version of the multispectral (MS) images. The disadvantage of spatial domain approaches is that they produce spatial distortion in the fused image. Spectral distortion becomes a negative factor in further processing, such as classification. Spatial distortion can be handled very well by frequency domain approaches to image fusion. Multiresolution analysis has become a very useful tool for analysing remote sensing images, and the discrete wavelet transform in particular has become a very useful tool for fusion. Other fusion methods also exist, such as those based on the Laplacian pyramid or the curvelet transform. These methods show better spatial and spectral quality in the fused image than other spatial methods of fusion.
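
As a sketch of a transform domain approach, the following one-level wavelet fusion in Python averages the approximation coefficients and keeps whichever detail coefficient has the larger magnitude (it assumes the PyWavelets package; dwt_fuse is a hypothetical name, and the inputs are registered grayscale float arrays):

    import numpy as np
    import pywt  # PyWavelets

    def dwt_fuse(img_a, img_b, wavelet="db2"):
        """One-level DWT fusion of two registered images: average the
        approximation subband; in each detail subband take the
        larger-magnitude coefficient (the stronger edge)."""
        cA_a, det_a = pywt.dwt2(img_a, wavelet)
        cA_b, det_b = pywt.dwt2(img_b, wavelet)
        fused_det = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                          for da, db in zip(det_a, det_b))
        return pywt.idwt2(((cA_a + cA_b) / 2.0, fused_det), wavelet)

In practice a multi-level decomposition (e.g. pywt.wavedec2) is usually used, but the per-subband selection rule is the same.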

The images used in image fusion should already be registered. Misregistration is a major source of error in image fusion. Some well-known image fusion methods are:

  • High pass filtering technique
  • IHS transform based image fusion
  • PCA based image fusion (a sketch follows this list)
  • Wavelet transform image fusion
  • Pair-wise spatial frequency matching
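
As noted in the list above, a minimal sketch of PCA-based fusion weights two registered grayscale sources by the leading eigenvector of their 2x2 covariance matrix (pca_fuse is a hypothetical name):

    import numpy as np

    def pca_fuse(img_a, img_b):
        """PCA-based fusion: treat each image as one variable, then weight
        the images by the (normalised) leading eigenvector of the 2x2
        covariance matrix of their pixel values."""
        cov = np.cov(np.stack([img_a.ravel(), img_b.ravel()]))
        _, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
        w = np.abs(eigvecs[:, -1])         # leading eigenvector, sign-safe
        w = w / w.sum()                    # weights summing to one
        return w[0] * img_a + w[1] * img_b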

Remote Sensing Image Fusion

Image fusion in remote sensing has several application domains. An important one is multi-resolution image fusion, commonly referred to as pan-sharpening. In satellite imagery, two types of images are available:

  • Panchromatic images - An image collected in the broad visual wavelength range but rendered in black and white.
  • Multispectral images - Images optically acquired in more than one spectral or wavelength interval. Each individual image is usually of the same physical area and scale but of a different spectral band.

The SPOT PAN satellite provides high-resolution (10 m pixel) panchromatic data, while the Landsat TM satellite provides low-resolution (30 m pixel) multispectral images. Image fusion attempts to merge these images to produce a single high-resolution multispectral image.

The standard merging methods of image fusion are based on the Red-Green-Blue (RGB) to Intensity-Hue-Saturation (IHS) transformation. The usual steps involved in satellite image fusion are as follows (a code sketch follows the list):

  1. Resize the low resolution multispectral images to the same size as the panchromatic image.
  2. Transform the R, G and B bands of the multispectral image into IHS components.
  3. Modify the panchromatic image with respect to the multispectral image. This is usually performed by histogram matching of the panchromatic image, with the intensity component of the multispectral image as the reference.
  4. Replace the intensity component by the panchromatic image and perform inverse transformation to obtain a high resolution multispectral image.
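
The following Python sketch condenses steps 1-4 under simplifying assumptions: a linear (triangular) IHS model whose intensity is the mean of R, G, and B, and mean/standard-deviation matching in place of full histogram matching. Under that model, substituting the intensity component and inverting the transform reduces to adding the same detail term to every band (ihs_pansharpen is a hypothetical name; ms is assumed to be already resampled to the panchromatic grid):

    import numpy as np

    def ihs_pansharpen(ms, pan):
        """ms  : (H, W, 3) float RGB multispectral image (step 1 done)
        pan : (H, W) float panchromatic image"""
        intensity = ms.mean(axis=2)  # step 2: the I component of the IHS triple
        # Step 3 (simplified): match pan's mean/std to the intensity component.
        pan_m = (pan - pan.mean()) / pan.std() * intensity.std() + intensity.mean()
        # Step 4: replace I by pan_m and invert the linear IHS transform,
        # which amounts to injecting the same detail into every band.
        return ms + (pan_m - intensity)[:, :, None]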

An explanation of how to do pan-sharpening in Photoshop is available online. For other applications of image fusion in remote sensing, interested readers can refer to Beyond Pan-sharpening: Pixel-level Fusion in Remote Sensing Applications.

Medical Image Fusion

Image fusion has become a common term used within medical diagnostics and treatment.[6] The term is used when multiple images of a patient are registered and overlaid or merged to provide additional information. Fused images may be created from multiple images from the same imaging modality,[7] or by combining information from multiple modalities,[8] such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and single photon emission computed tomography (SPECT). In radiology and radiation oncology, these images serve different purposes. For example, CT images are used more often to ascertain differences in tissue density, while MRI images are typically used to diagnose brain tumors.

For accurate diagnoses, radiologists must integrate information from multiple image formats. Fused, anatomically consistent images are especially beneficial in diagnosing and treating cancer. With the advent of these new technologies, radiation oncologists can take full advantage of intensity-modulated radiation therapy (IMRT). Being able to overlay diagnostic images onto radiation planning images results in more accurate IMRT target tumor volumes.

Image Fusion Metrics

Comparative analysis of image fusion methods shows that different metrics support different user needs, are sensitive to different image fusion methods, and need to be tailored to the application. Categories of image fusion metrics are based on information theory,[3] features, structural similarity, or human perception.[9]
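
As an illustration of the information-theoretic category (a simplified relative of the metric in [3]), the following Python sketch scores a fused image by the mutual information it shares with each source, estimated from joint histograms (fusion_mi and mutual_information are hypothetical names):

    import numpy as np

    def mutual_information(x, y, bins=64):
        """Mutual information (in bits) between two images, estimated
        from their joint grey-level histogram."""
        joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)   # marginal of x
        py = pxy.sum(axis=0, keepdims=True)   # marginal of y
        nz = pxy > 0                          # avoid log(0)
        return (pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum()

    def fusion_mi(src_a, src_b, fused):
        """Higher is better: total information the fused image
        shares with the two sources."""
        return mutual_information(src_a, fused) + mutual_information(src_b, fused)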

References


  1. ^ a b c Amin-Naji, M.; Aghagolzadeh, A. (2018). "Multi-Focus Image Fusion in DCT Domain using Variance and Energy of Laplacian and Correlation Coefficient for Visual Sensor Networks". Journal of AI and Data Mining. 6 (2): 233–250. doi:10.22044/jadm.2017.5169.1624. ISSN 2322-5211. 
  2. ^ Haghighat, M. B. A.; Aghagolzadeh, A.; Seyedarabi, H. (2011). "Multi-focus image fusion for visual sensor networks in DCT domain". Computers & Electrical Engineering. 37 (5): 789–797. doi:10.1016/j.compeleceng.2011.04.016. 
  3. ^ a b Haghighat, M. B. A.; Aghagolzadeh, A.; Seyedarabi, H. (2011). "A non-reference image fusion metric based on mutual information of image features". Computers & Electrical Engineering. 37 (5): 744–756. doi:10.1016/j.compeleceng.2011.07.012. 
  4. ^ Naji, M. A.; Aghagolzadeh, A. (November 2015). "Multi-focus image fusion in DCT domain based on correlation coefficient". 2015 2nd International Conference on Knowledge-Based Engineering and Innovation (KBEI): 632–639. doi:10.1109/KBEI.2015.7436118. 
  5. ^ Naji, M. A.; Aghagolzadeh, A. (November 2015). "A new multi-focus image fusion technique based on variance in DCT domain". 2015 2nd International Conference on Knowledge-Based Engineering and Innovation (KBEI): 478–484. doi:10.1109/KBEI.2015.7436092. 
  6. ^ James, A.P.; Dasarathy, B.V. (2014). "Medical Image Fusion: A survey of state of the art". Information Fusion. 19: 4–19. arXiv:1401.0166. doi:10.1016/j.inffus.2013.12.002. 
  7. ^ Gooding, M.J.; et al. (2010). "Investigation into the fusion of multiple 4-D fetal echocardiography images to improve image quality". Ultrasound in Medicine and Biology. 36 (6): 957–66. doi:10.1016/j.ultrasmedbio.2010.03.017. 
  8. ^ Maintz, J.B.; Viergever, M.A. (1998). "A survey of medical image registration". Medical Image Analysis. 2 (1): 1–36. doi:10.1016/s1361-8415(01)80026-8. PMID 10638851. 
  9. ^ Liu, Z.; Blasch, E.; Xue, Z.; Langaniere, R.; Wu, W. (2012). "Objective Assessment of Multiresolution Image Fusion Algorithms for Context Enhancement in Night Vision: A Comparative Survey". IEEE Transactions on Pattern Analysis and Machine Intelligence. 34 (1): 94–109. doi:10.1109/tpami.2011.109.