
Underwater computer vision

From Wikipedia, the free encyclopedia

Underwater computer vision is a subfield of computer vision. In recent years, with the development of underwater vehicles (ROVs, AUVs, gliders), the need to record and process huge amounts of information has become increasingly important. Applications range from the inspection of underwater structures for the offshore industry to the identification and counting of fish for biological research. However large its potential impact on industry and research may be, the field is still at a very early stage of development compared to traditional computer vision. One reason is that, the moment the camera goes into the water, a whole new set of challenges appears. On the one hand, cameras have to be made waterproof, marine corrosion deteriorates materials quickly, and access to and modification of experimental setups are costly in both time and resources. On the other hand, the physical properties of water make light behave differently, changing the appearance of the same object with variations in depth, organic material, currents, temperature, etc.


Medium differences


In air, light comes from the whole hemisphere on cloudy days and is dominated by the sun on clear days. Underwater, light reaches the scene only through a finite cone above it. This phenomenon is called Snell's window.
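The half-angle of this cone follows directly from Snell's law: beyond the critical angle at the water–air interface, no light from above the surface can reach an underwater viewer. A minimal sketch, assuming a refractive index of about 1.33 for water:

```python
import math

def snells_window_half_angle(n_water: float = 1.33, n_air: float = 1.0) -> float:
    """Half-angle of Snell's window in degrees: the critical angle beyond
    which light from above the surface cannot reach an underwater viewer."""
    # Snell's law at grazing incidence: sin(theta_c) = n_air / n_water.
    return math.degrees(math.asin(n_air / n_water))

half = snells_window_half_angle()
print(f"{half:.1f} degrees")  # about 48.8 degrees, i.e. a cone roughly 97.5 degrees wide
```

For seawater the window is thus a cone of roughly 97 degrees; everything outside it appears dark or mirrors the sea floor.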

Light attenuation

Unlike air, water attenuates light exponentially. This results in hazy images with very low contrast. The main causes of light attenuation are absorption (in which energy is removed from the light) and scattering (in which the direction of the light is changed). Scattering can further be divided into forward scattering, which results in increased blurriness, and backward scattering, which limits the contrast and is responsible for the characteristic veil of underwater images. Both scattering and attenuation are heavily influenced by the amount of organic matter dissolved or suspended in the water.

Another problem with water is that light attenuation is a function of wavelength: different colours are attenuated at different rates, leading to colour degradation. Red and orange light is the first to be attenuated, followed by yellows and greens; blue is the least attenuated visible wavelength.
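This wavelength-dependent decay can be sketched with the Beer–Lambert law, applying a separate exponential attenuation coefficient to each colour channel. The coefficients below are illustrative assumptions, not measured values:

```python
import numpy as np

# Illustrative (assumed) attenuation coefficients in 1/m for red, green, blue;
# real values depend strongly on water type and dissolved organic matter.
ATTENUATION = np.array([0.60, 0.10, 0.02])

def attenuate(rgb, distance_m):
    """Beer-Lambert attenuation: each colour channel decays exponentially
    with path length, red fastest and blue slowest."""
    return np.asarray(rgb, dtype=float) * np.exp(-ATTENUATION * distance_m)

# A white surface seen through 5 m of water loses almost all of its red:
print(attenuate([1.0, 1.0, 1.0], 5.0))
```

Even over a few metres the red channel collapses towards zero while blue survives almost intact, which is why raw underwater footage looks blue-green.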


In high-level computer vision, man-made structures are frequently used as image features for image matching in different applications. However, the sea bottom lacks such features, making it hard to find correspondences between two images.

In order to use a camera in the water, a watertight housing is required. However, refraction occurs at the water–glass and glass–air interfaces due to the different refractive indices of the media. This has the effect of introducing a non-linear image deformation.
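The bending at each interface follows Snell's law; because a flat port has parallel faces, the glass layer only offsets the ray laterally, and the net angular change is governed by the water–air index ratio. A small sketch (the refractive indices are assumed round figures):

```python
import math

N_WATER, N_GLASS, N_AIR = 1.33, 1.49, 1.00  # assumed round refractive indices

def refract(theta_deg, n_in, n_out):
    """Snell's law: n_in * sin(theta_in) = n_out * sin(theta_out)."""
    return math.degrees(math.asin(n_in * math.sin(math.radians(theta_deg)) / n_out))

# Trace a ray hitting the flat port at 20 degrees from the port normal:
in_glass = refract(20.0, N_WATER, N_GLASS)
in_air = refract(in_glass, N_GLASS, N_AIR)

# With parallel port faces, the net bending equals a direct
# water-to-air refraction; only a lateral offset remains.
print(f"{in_air:.2f} degrees")
```

Because the deviation grows faster than linearly with the incidence angle, a standard pinhole camera model no longer fits exactly, which is the non-linear deformation mentioned above.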

The motion of the vehicle presents another special challenge. Underwater vehicles are constantly moving due to currents and other phenomena, which introduces additional uncertainty for algorithms, since small motions may appear in all directions. This can be especially important for video tracking. To reduce this problem, image stabilization algorithms may be applied.
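One generic way to estimate the frame-to-frame translation that such stabilization must undo is phase correlation; this NumPy sketch is a standard technique, not a method prescribed by any particular underwater system:

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the integer (dy, dx) translation between two frames by
    phase correlation; returns the shift that realigns `curr` with `prev`
    when applied via np.roll."""
    cross = np.fft.fft2(prev) * np.conj(np.fft.fft2(curr))
    cross /= np.abs(cross) + 1e-12       # keep only the phase
    corr = np.fft.ifft2(cross).real      # impulse at the displacement
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape                    # map peak position to signed shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Applying the estimated shift to each incoming frame cancels the translational component of the vehicle's jitter; rotation and scale need more elaborate models.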

Frequent methods


Image restoration

Image restoration[2][3] aims to model the degradation process and then invert it, solving for the undegraded image. It is generally a complex approach that requires many model parameters, which vary considerably between different water conditions.
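As an illustration, a commonly used simplified formation model combines attenuated scene radiance with backscattered background light, and restoration inverts it. This is a sketch under that simplified model, not the specific algorithms of the cited works:

```python
import numpy as np

def restore(image, backscatter, c, distance):
    """Invert the simplified image-formation model
        I = J * t + B * (1 - t),   with transmission t = exp(-c * distance),
    where J is the scene radiance and B the backscatter colour,
    recovering J from the observed image I."""
    t = np.exp(-np.asarray(c, dtype=float) * distance)
    J = (np.asarray(image, dtype=float) - backscatter * (1.0 - t)) / np.maximum(t, 1e-6)
    return np.clip(J, 0.0, 1.0)
```

In practice the attenuation coefficient, the scene distance, and the backscatter colour all have to be estimated per image (or per pixel), which is exactly where the parameter sensitivity mentioned above comes from.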

Image enhancement

Image enhancement[4] only tries to produce a visually more appealing image, without taking the physical image-formation process into account. These methods are usually simpler and less computationally intensive.
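A classic enhancement step of this kind is global histogram equalization, which stretches the narrow intensity range of hazy footage without any physical model. A minimal NumPy sketch:

```python
import numpy as np

def equalize_hist(gray):
    """Global histogram equalization for an 8-bit grayscale image: remaps
    intensities so that their cumulative distribution becomes roughly
    linear, spreading a narrow, low-contrast range over [0, 255]."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(hist)[0][0]]    # cdf at the darkest used level
    scale = 255.0 / max(gray.size - cdf_min, 1)
    lut = np.clip(np.round((cdf - cdf_min) * scale), 0, 255).astype(np.uint8)
    return lut[gray]
```

The output is more pleasing to look at but not physically meaningful, which is the essential trade-off against restoration methods.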

Color correction

Different algorithms exist that perform automatic color correction.[5][6] The UCM (Unsupervised Color Correction Method), for example, proceeds in the following steps: first, it reduces the color cast by equalizing the color values; then it enhances contrast by stretching the red histogram towards the maximum; finally, the saturation and intensity components are optimized.
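The first two UCM steps can be sketched roughly as follows; the channel scaling and stretching below are simplified assumptions, and the final saturation/intensity optimization step is omitted:

```python
import numpy as np

def ucm_sketch(img):
    """Rough sketch of the first two UCM steps (simplified assumptions):
    1) reduce the color cast by scaling each channel towards the overall
       mean (a gray-world style equalization);
    2) enhance contrast by stretching the red channel to the full range."""
    img = np.asarray(img, dtype=float)
    means = img.reshape(-1, 3).mean(axis=0)
    balanced = img * (means.mean() / np.maximum(means, 1e-6))
    red = balanced[..., 0]
    lo, hi = red.min(), red.max()
    balanced[..., 0] = (red - lo) / max(hi - lo, 1e-6)
    return np.clip(balanced, 0.0, 1.0)
```

On a typical blue-green underwater frame this pulls the weak red channel back towards the other two, reducing the overall cast.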

Underwater stereo vision

It is usually assumed that stereo cameras have been calibrated previously, both geometrically and radiometrically, which leads to the assumption that corresponding pixels should have the same color. However, this cannot be guaranteed in an underwater scene because of the scattering and backscatter mentioned earlier. It is, however, possible to model these phenomena digitally and create a virtual image with their effects removed.

Other application fields

In recent years, imaging sonars[7][8] have become increasingly accessible and have gained resolution, delivering better images. Sidescan sonars are used to produce complete maps of regions of the sea floor by stitching together sequences of sonar images. However, imaging sonar images often lack proper contrast and are degraded by artefacts and distortions due to noise, attitude changes of the AUV/ROV carrying the sonar, or non-uniform beam patterns. Another common problem in sonar computer vision is the comparatively low frame rate of sonar images.[9]


  1. ^ Horgan, Jonathan; Toal, Daniel (2009). "Computer Vision Applications in the Navigation of Unmanned Underwater Vehicles". Underwater Vehicles. doi:10.5772/6703. ISBN 978-953-7619-49-7. S2CID 2940888.
  2. ^ Schechner, Yoav Y.; Karpel, Nir. "Clear Underwater Vision". Proc. Computer Vision & Pattern Recognition. I: 536–543.
  3. ^ Hou, Weilin; Gray, Deric J.; Weidemann, Alan D.; Arnone, Robert A. (2008). "Comparison and Validation of Point Spread Models for Imaging in Natural Waters". Optics Express. 16 (13): 9958–9965. Bibcode:2008OExpr..16.9958H. doi:10.1364/OE.16.009958. PMID 18575566.
  4. ^ Schettini, Raimondo; Corchs, Silvia (2010). "Underwater Image Processing: State of the Art Image Enhancement Methods". EURASIP Journal on Advances in Signal Processing. 2010: 14. doi:10.1155/2010/746052.
  5. ^ Akkaynak, Derya; Treibitz, Tali (2019). "Sea-Thru: A Method for Removing Water From Underwater Images". Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  6. ^ Iqbal, K.; Odetayo, M.; James, A.; Salam, R.A. "Enhancing the low quality images using Unsupervised Color Correction Methods" (PDF). Systems, Man and Cybernetics.
  7. ^ Mignotte, M.; Collet, C. (2000). "Markov Random Field and Fuzzy Logic Modeling in Sonar Imagery". Computer Vision and Image Understanding. 79: 4–24. doi:10.1006/cviu.2000.0844.
  8. ^ Cervenka, Pierre; de Moustier, Christian (1993). "Sidescan Sonar Image Processing Techniques". IEEE Journal of Oceanic Engineering. 18 (2): 108. Bibcode:1993IJOE...18..108C. doi:10.1109/48.219531.
  9. ^ Trucco, E.; Petillot, Y.R.; Tena Ruiz, I. (2000). "Feature Tracking in Video and Sonar Subsea Sequences with Applications". Computer Vision and Image Understanding. 79: 92–122. doi:10.1006/cviu.2000.0846.