1. Liu T, Zhang S, Yu Z. Redefining Accuracy: Underwater Depth Estimation for Irregular Illumination Scenes. Sensors (Basel) 2024; 24:4353. PMID: 39001132; PMCID: PMC11244248; DOI: 10.3390/s24134353.
Abstract
Acquiring underwater depth maps is essential, as they provide indispensable three-dimensional spatial information for visualizing the underwater environment. These depth maps serve various purposes, including underwater navigation, environmental monitoring, and resource exploration. While most current depth estimation methods work well in ideal underwater environments with homogeneous illumination, few consider the risks posed by irregular illumination, which is common in practical underwater environments. On the one hand, low-light underwater conditions reduce image contrast, making it harder for depth estimation models to accurately differentiate among objects. On the other hand, overexposure caused by reflection or artificial illumination can degrade the textures of underwater objects, which are crucial to geometric constraints between frames. To address these issues, we propose an underwater self-supervised monocular depth estimation network integrating image enhancement and auxiliary depth information. We use a Monte Carlo image enhancement module (MC-IEM) to tackle the inherent uncertainty in low-light underwater images through probabilistic estimation. Once pixel values are enhanced, objects become easier to recognize, allowing more precise acquisition of distance information and thus more accurate depth estimation. Next, we extract additional geometric features through transfer learning, infusing prior knowledge from a supervised large-scale model into the self-supervised network to refine the loss functions and the depth network and thereby address the overexposure issue. Experiments on two public datasets show superior performance compared with existing underwater depth estimation approaches.
Affiliation(s)
- Tong Liu, Sainan Zhang, Zhibin Yu: Key Laboratory of Ocean Observation and Information of Hainan Province, Sanya Oceanographic Institution, Ocean University of China, Sanya 572024, China; Faculty of Information Science and Engineering, Ocean University of China, Qingdao 266100, China
2. Hu X, Liu J, Li H, Liu H, Xue X. An effective transformer based on dual attention fusion for underwater image enhancement. PeerJ Comput Sci 2024; 10:e1783. PMID: 38855239; PMCID: PMC11157557; DOI: 10.7717/peerj-cs.1783.
Abstract
Underwater images suffer from color shift, low contrast, and blurred details as a result of the absorption and scattering of light in water. Such degraded images can significantly interfere with underwater vision tasks. Existing data-driven underwater image enhancement methods fail to sufficiently consider the inconsistent attenuation across spatial areas and the degradation of color channel information. In addition, the datasets used for model training are small in scale and monotonous in scene. We therefore approach the problem from two directions: network architecture design and the training dataset. We propose a fusion attention block that integrates the non-local modeling ability of the Swin Transformer block with the local modeling ability of the residual convolution layer; importantly, it can adaptively fuse non-local and local features carrying channel attention. Moreover, we synthesize underwater images with multiple water body types and different degradations using the underwater imaging model and by adjusting its degradation parameters. We also introduce perceptual loss functions to improve visual quality. Experiments on synthetic and real-world underwater images show that our method is superior, making the network suitable for practical applications.
Affiliation(s)
- Xianjie Hu, Jing Liu, Heng Li, Hui Liu, Xiaojun Xue: Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
3. Mei X, Ye X, Wang J, Wang X, Huang H, Liu Y, Jia Y, Zhao S. UIEOGP: an underwater image enhancement method based on optical geometric properties. Opt Express 2023; 31:36638-36655. PMID: 38017810; DOI: 10.1364/oe.499684.
Abstract
Due to the inconsistent absorption and scattering of different wavelengths of light, underwater images often suffer from color casts, blurred details, and low visibility. To address this degradation, we propose a robust and efficient underwater image enhancement method named UIEOGP. It proceeds in three steps. First, according to the light attenuation described by the Beer-Lambert law, combined with the change in variance after attenuation, we estimate the depth of field in the underwater image. Then, we propose a local color correction algorithm, based on the statistical distribution of pixel values, to address the color cast in underwater images. Finally, drawing inspiration from the law of light propagation, we propose two detail enhancement algorithms based on the geometric properties of circles and ellipses, respectively. The enhanced images produced by our method feature vibrant colors, improved contrast, and sharper detail. Extensive experiments show that our method outperforms current state-of-the-art methods; further experiments found it beneficial for downstream underwater image processing tasks, such as the detection of keypoints and edges in underwater images.
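The depth-of-field step above rests on Beer-Lambert attenuation, I = I0 * exp(-c * d). As a rough sketch of that idea only (the attenuation coefficient `c` and unattenuated intensity `i0` are illustrative placeholders, not the paper's calibrated estimates, and the paper additionally exploits variance changes), inverting the law gives a per-pixel relative depth proxy:

```python
import numpy as np

def beer_lambert_depth(channel, i0=1.0, c=0.05, eps=1e-6):
    """Relative depth proxy from Beer-Lambert attenuation I = i0 * exp(-c*d),
    solved for d. Both `c` (attenuation coefficient, 1/m) and `i0` are
    hypothetical values chosen for illustration."""
    channel = np.clip(np.asarray(channel, dtype=float), eps, i0)
    return -np.log(channel / i0) / c
```

Darker (more strongly attenuated) pixels map to larger depth values, matching the qualitative behavior the abstract describes.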
4. Yang X, Li J, Liang W, Wang D, Zhao J, Xia X. Underwater image quality assessment. J Opt Soc Am A 2023; 40:1276-1288. PMID: 37706727; DOI: 10.1364/josaa.485307.
Abstract
To obtain high-visual-quality underwater images by image post-processing, many underwater image restoration and enhancement methods have been proposed. Underwater image quality assessment (UIQA) methods have been developed to compare these restoration and enhancement methods. This paper comprehensively summarizes the subjective and objective UIQA methods, metrics, and datasets. Experiments are conducted on two underwater image datasets to analyze the performance of several typical UIQA metrics. Suggestions for further research directions are put forward as well.
5. Monterroso Muñoz A, Moron-Fernández MJ, Cascado-Caballero D, Diaz-del-Rio F, Real P. Autonomous Underwater Vehicles: Identifying Critical Issues and Future Perspectives in Image Acquisition. Sensors (Basel) 2023; 23:4986. PMCID: PMC10222422; DOI: 10.3390/s23104986.
Abstract
Underwater imaging has been present for many decades due to its relevance in vision and navigation systems. In recent years, advances in robotics have led to the availability of autonomous and unmanned underwater vehicles (AUVs, UUVs). Despite the rapid development of new studies and promising algorithms in this field, there is currently a lack of research toward standardized, general-approach proposals; this has been identified in the literature as a limiting factor to be addressed in the future. The key starting point of this work is to identify a synergistic effect between professional photography and scientific fields by analyzing image acquisition issues. Subsequently, we discuss underwater image enhancement and quality assessment, image mosaicking, and algorithmic concerns as the last processing step. Along this line, statistics from 120 AUV articles spanning recent decades have been analyzed, with a special focus on state-of-the-art papers from recent years. The aim of this paper is therefore to identify critical issues in autonomous underwater vehicles across the entire process, starting from optical issues in image sensing and ending with issues related to algorithmic processing. In addition, a global underwater workflow is proposed, extracting future requirements, outcome effects, and new perspectives in this context.
Affiliation(s)
- Maria-Jose Moron-Fernández, Daniel Cascado-Caballero, Fernando Diaz-del-Rio: Department of Computer Architecture and Technology, Universidad de Sevilla, 41012 Sevilla, Spain
- Pedro Real: Department of Applied Mathematics I, Universidad de Sevilla, 41012 Sevilla, Spain
6. Papa L, Proietti Mattia G, Russo P, Amerini I, Beraldi R. Lightweight and Energy-Aware Monocular Depth Estimation Models for IoT Embedded Devices: Challenges and Performances in Terrestrial and Underwater Scenarios. Sensors (Basel) 2023; 23:2223. PMID: 36850825; PMCID: PMC9966799; DOI: 10.3390/s23042223.
Abstract
The knowledge of environmental depth is essential in multiple robotics and computer vision tasks for both terrestrial and underwater scenarios. Moreover, the hardware on which this technology runs, generally IoT and embedded devices, is limited in terms of power consumption, so models with a low energy footprint are required. Recent works aim at enabling depth perception from single RGB images using deep architectures, such as convolutional neural networks and vision transformers, which are generally unsuitable for real-time inference on low-power embedded hardware. Moreover, such architectures are trained to estimate depth maps mainly in terrestrial scenarios due to the scarcity of underwater depth data. To this end, we present two lightweight architectures based on optimized MobileNetV3 encoders and a specifically designed decoder to achieve fast inference and accurate estimation on embedded devices, a feasibility study for predicting depth maps in underwater scenarios, and an energy assessment of the effective energy consumption during inference. Precisely, we propose the MobileNetV3S75 configuration for inference on the 32-bit ARM CPU and MobileNetV3LMin for the 8-bit Edge TPU hardware. In underwater settings, the proposed designs achieve estimations comparable to state-of-the-art methods with fast inference performance. Moreover, we show statistically that the model architecture affects the energy footprint, in watts required by the device, during inference. The proposed architectures are therefore a promising approach for real-time monocular depth estimation, offering the best trade-off between inference performance, estimation error, and energy consumption, with the aim of improving environment perception for underwater drones, lightweight robots, and Internet of Things devices.
7. Er MJ, Chen J, Zhang Y, Gao W. Research Challenges, Recent Advances, and Popular Datasets in Deep Learning-Based Underwater Marine Object Detection: A Review. Sensors (Basel) 2023; 23:1990. PMID: 36850584; PMCID: PMC9966468; DOI: 10.3390/s23041990.
Abstract
Underwater marine object detection, as one of the most fundamental techniques in the community of marine science and engineering, has shown tremendous potential for exploring the oceans in recent years. It has been widely applied in practical applications, such as monitoring of underwater ecosystems, exploration of natural resources, and management of commercial fisheries. However, due to the complexity of the underwater environment, the characteristics of marine objects, and the limitations imposed by exploration equipment, detection performance in terms of speed, accuracy, and robustness can be dramatically degraded when conventional approaches are used. Deep learning has had a significant impact on a variety of applications, including marine engineering. In this context, we offer a review of deep learning-based underwater marine object detection techniques. Underwater object detection can be performed by different sensors, such as acoustic sonar or optical cameras; in this paper, we focus on vision-based object detection due to its several significant advantages. To facilitate a thorough understanding of this subject, we organize the research challenges of vision-based underwater object detection into four categories: image quality degradation, small object detection, poor generalization, and real-time detection. We review recent advances in underwater marine object detection and highlight the advantages and disadvantages of existing solutions for each challenge. In addition, we provide a detailed critical examination of the most extensively used datasets. Finally, we present comparative studies with previous reviews, notably those approaches that leverage artificial intelligence, as well as future trends related to this hot topic.
8. Qi Q, Li K, Zheng H, Gao X, Hou G, Sun K. SGUIE-Net: Semantic Attention Guided Underwater Image Enhancement with Multi-Scale Perception. IEEE Trans Image Process 2022; 31:6816-6830. PMID: 36288230; DOI: 10.1109/tip.2022.3216208.
Abstract
Due to wavelength-dependent light attenuation, refraction, and scattering, underwater images usually suffer from color distortion and blurred details. Moreover, because of the limited number of underwater images paired with undistorted reference images, training deep enhancement models for diverse degradation types is quite difficult. To boost the performance of data-driven approaches, it is essential to establish more effective learning mechanisms that mine richer supervised information from limited training samples. In this paper, we propose a novel underwater image enhancement network, called SGUIE-Net, in which we introduce semantic information as high-level guidance via region-wise enhancement feature learning. Accordingly, we propose a semantic region-wise enhancement module to better learn local enhancement features for semantic regions with multi-scale perception. These features are used as complementary features and fed to the main branch, which extracts global enhancement features at the original image scale; the fused features yield semantically consistent and visually superior enhancements. Extensive experiments on publicly available datasets and our proposed dataset demonstrate the impressive performance of SGUIE-Net. The code and proposed dataset are available at https://trentqq.github.io/SGUIE-Net.html.
9. Senshina D, Polevoy D, Ershov E, Kunina I. Experimental Study of Radial Distortion Compensation for Camera Submerged Underwater Using Open SaltWaterDistortion Data Set. J Imaging 2022; 8:289. PMID: 36286383; PMCID: PMC9604812; DOI: 10.3390/jimaging8100289.
Abstract
This paper describes a new open data set, consisting of images of a chessboard collected underwater with different refractive indices, which allows for investigation of the quality of different radial distortion correction methods. The refractive index is regulated by the degree of salinity of the water. The collected data set consists of 662 images, and the chessboard cell corners are manually marked for each image (for a total of 35,748 nodes). Two different mobile phone cameras were used for the shooting: telephoto and wide-angle. With the help of the collected data set, the practical applicability of the formula for correction of the radial distortion that occurs when the camera is submerged underwater was investigated. Our experiments show that the radial distortion correction formula makes it possible to correct images with high precision, comparable to the precision of classical calibration algorithms. We also show that this correction method is resistant to small inaccuracies in the indication of the refractive index of water. The data set, as well as the accompanying code, are publicly available.
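For intuition about why a closed-form correction can work here, a minimal sketch follows, assuming a flat-port housing, paraxial optics, and a one-parameter Brown radial model with coefficient `k1` (all illustrative assumptions, not the formula validated in the paper): submersion magnifies image coordinates by roughly the refractive index of water, so undistortion first removes that scaling and then inverts the radial model by fixed-point iteration.

```python
def undistort_point(x, y, k1, n_water=1.34):
    """Sketch of underwater radial-distortion correction: undo the apparent
    flat-port magnification (divide by the refractive index ratio), then invert
    the one-parameter Brown model x_d = x_u * (1 + k1 * r_u^2) by fixed-point
    iteration. Illustrative only; not the paper's exact formula."""
    x, y = x / n_water, y / n_water          # remove refractive magnification
    xu, yu = x, y
    for _ in range(20):                      # fixed-point inversion of the model
        r2 = xu * xu + yu * yu
        xu, yu = x / (1 + k1 * r2), y / (1 + k1 * r2)
    return xu, yu
```

For small `k1` the iteration is a contraction and converges in a few steps; the refractive-index division also shows why the method can tolerate small inaccuracies in the stated index.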
Affiliation(s)
- Daria Senshina (correspondence): Evocargo LLC, 129085 Moscow, Russia; Moscow Institute of Physics and Technology (National Research University), 141701 Dolgoprudny, Russia
- Dmitry Polevoy: Smart Engines Service LLC, 117312 Moscow, Russia; Federal Research Center "Computer Science and Control" of Russian Academy of Sciences, 119333 Moscow, Russia; National University of Science and Technology MISIS, 119049 Moscow, Russia
- Egor Ershov: Institute for Information Transmission Problems of Russian Academy of Sciences, 127051 Moscow, Russia
- Irina Kunina: Smart Engines Service LLC, 117312 Moscow, Russia; Institute for Information Transmission Problems of Russian Academy of Sciences, 127051 Moscow, Russia
10. Li Y, Zhu C, Peng J, Bian L. Fusion-based underwater image enhancement with category-specific color correction and dehazing. Opt Express 2022; 30:33826-33841. PMID: 36242409; DOI: 10.1364/oe.463682.
Abstract
Underwater imaging is usually affected by water scattering and absorption, resulting in image blur and color distortion. To achieve color correction and dehazing across different underwater scenes, in this paper we report a fusion-based underwater image enhancement technique. First, statistics of the hue channel are used to divide underwater images into two categories: color-distorted images and non-distorted images. Category-specific combinations of color compensation and color constancy algorithms are then used to remove the color shift. Second, a ground dehazing algorithm using the haze-line prior is employed to remove haze in the underwater image. Finally, a channel-wise fusion method based on the CIE L*a*b* color space is used to fuse the color-corrected and dehazed images. For experimental validation, we built a setup to acquire underwater images. The experimental results validate that the category-specific color correction strategy is robust to different categories of underwater images and that the fusion strategy simultaneously removes haze and corrects color casts. Quantitative metrics on the UIEBD and EUVP datasets validate its state-of-the-art performance.
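The final fusion step can be sketched as follows, assuming both inputs have already been converted to CIE L*a*b* arrays by some library. The channel assignment (L* from the dehazed image for contrast, a*/b* from the color-corrected image for chromaticity) is one plausible reading of "channel-wise fusion", not necessarily the paper's exact weighting:

```python
import numpy as np

def fuse_lab(color_corrected_lab, dehazed_lab):
    """Channel-wise fusion in CIE L*a*b*: lightness (L*) from the dehazed
    image, chromaticity (a*, b*) from the color-corrected image.
    Inputs are HxWx3 float Lab arrays; the assignment is an assumption."""
    fused = np.empty_like(color_corrected_lab)
    fused[..., 0] = dehazed_lab[..., 0]            # L*: contrast and detail
    fused[..., 1:] = color_corrected_lab[..., 1:]  # a*, b*: color cast removal
    return fused
```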
11. Liu R, Jiang Z, Yang S, Fan X. Twin Adversarial Contrastive Learning for Underwater Image Enhancement and Beyond. IEEE Trans Image Process 2022; 31:4922-4936. PMID: 35849672; DOI: 10.1109/tip.2022.3190209.
Abstract
Underwater images suffer from severe distortion, which degrades the accuracy of object detection performed in underwater environments. Existing underwater image enhancement algorithms focus on restoring contrast and scene reflection; in practice, the enhanced images may not benefit detection and can even lead to a severe performance drop. In this paper, we propose an object-guided twin adversarial contrastive learning based underwater enhancement method to achieve both visually friendly and task-oriented enhancement. Concretely, we first develop a bilateral constrained closed-loop adversarial enhancement module, which eases the requirement for paired data through unsupervised training and preserves more informative features by coupling with the twin inverse mapping. In addition, to confer a more realistic appearance on the restored images, we adopt contrastive cues in the training phase. To narrow the gap between visually oriented and detection-favorable target images, a task-aware feedback module is embedded in the enhancement process, where the coherent gradient information of the detector is incorporated to guide the enhancement in a detection-pleasing direction. To validate the performance, we plug a series of prolific detectors into our framework. Extensive experiments demonstrate that our method yields remarkable improvements in visual quality, and the accuracy of different detectors on our enhanced images is promoted notably. Moreover, we conduct a study on semantic segmentation to illustrate how object guidance improves high-level tasks. Code and models are available at https://github.com/Jzy2017/TACL.
12. Ge W, Lin Y, Wang Z, Yang T. Multi-prior underwater image restoration method via adaptive transmission. Opt Express 2022; 30:24295-24309. PMID: 36236987; DOI: 10.1364/oe.463865.
Abstract
Captured underwater images usually suffer from severe color cast and low contrast due to wavelength-dependent light absorption and scattering. These degradations affect the accuracy of target detection and visual understanding. The underwater image formation model is widely used to improve the visual quality of underwater images, and accurate transmission map and background light estimation are the keys to obtaining clear images. We develop a multi-prior underwater image restoration method with adaptive transmission (MUAT). Concretely, we first propose a calculation method for the dominant-channel transmission to cope with pixel interference, combining two priors: the difference between atmospheric light and pixel values, and the difference between the red channel and the blue-green channels. Besides, the attenuation ratio between the superior and inferior channels is adaptively calculated from the background light to resolve the color distortion and detail blur caused by imaging distance. Ultimately, a global white balance method is introduced to correct the remaining color distortion. Experiments on several underwater scene images show that our method obtains accurate transmission and yields better visual results than state-of-the-art methods.
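Restoration methods of this family invert the underwater image formation model I(x) = J(x) * t(x) + B * (1 - t(x)). A minimal sketch of that inversion, assuming the transmission map and background light are already estimated (the estimation itself is the paper's contribution, not shown here):

```python
import numpy as np

def restore(image, transmission, background_light, t_min=0.1):
    """Invert the formation model I = J*t + B*(1-t), i.e. J = (I-B)/t + B.
    `image` is HxWx3 in [0,1], `transmission` is HxW, `background_light` a
    per-channel triple; all values are assumed given for illustration."""
    t = np.clip(transmission, t_min, 1.0)[..., None]  # floor t to avoid blow-up
    B = np.asarray(background_light, dtype=float)
    return np.clip((image - B) / t + B, 0.0, 1.0)
```

The `t_min` floor is a common safeguard: where transmission approaches zero, the division would otherwise amplify noise without bound.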
13. Multi-Level Wavelet-Based Network Embedded with Edge Enhancement Information for Underwater Image Enhancement. J Mar Sci Eng 2022; 10:884. DOI: 10.3390/jmse10070884.
Abstract
As an image processing method, underwater image enhancement (UIE) plays an important role in underwater resource detection and engineering research. Currently, convolutional neural network (CNN)- and Transformer-based methods are the mainstream approaches for UIE. However, CNNs usually use pooling to expand the receptive field, which may cause information loss that hinders feature extraction and analysis; in addition, edge blurring easily occurs in images enhanced by existing methods. To address these issues, this paper proposes a framework that combines CNN and Transformer, employs the wavelet transform and inverse wavelet transform for encoding and decoding, and progressively embeds the edge information of the raw image during encoding. Specifically, features of the raw image and its edge detection image are first extracted step by step using a convolution module and a residual dense attention module, respectively, to obtain mixed feature maps of different resolutions. Next, a residual-structure Swin Transformer group extracts global features. Then, the resulting feature map and the encoder's hybrid feature maps are used by the decoder to reconstruct a high-resolution feature map. The experimental results show that the proposed method achieves an excellent effect in edge information protection and visual reconstruction of images, and ablation experiments verify the effectiveness of each component of the proposed model.
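The wavelet encoding/decoding the framework relies on can be illustrated with a one-level 2-D Haar transform. This is a generic sketch of the operation (splitting an image into half-resolution sub-bands and reconstructing it losslessly), not the paper's specific multi-level network:

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform: split an even-sized
    grayscale image into approximation (LL) and detail (LH, HL, HH)
    sub-bands at half resolution."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # approximation
    lh = (a - b + c - d) / 2.0   # horizontal detail
    hl = (a + b - c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: perfectly reconstructs the original image,
    which is why the encoder loses no information, unlike pooling."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w), dtype=float)
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out
```

The perfect-reconstruction property is the point of using wavelets for downsampling here: resolution drops by half per level, yet nothing is discarded.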
14. A boundary migration model for imaging within volumetric scattering media. Nat Commun 2022; 13:3234. PMID: 35680924; PMCID: PMC9184484; DOI: 10.1038/s41467-022-30948-7.
Abstract
Effectively imaging within volumetric scattering media is of great importance and challenging especially in macroscopic applications. Recent works have demonstrated the ability to image through scattering media or within the weak volumetric scattering media using spatial distribution or temporal characteristics of the scattered field. Here, we focus on imaging Lambertian objects embedded in highly scattering media, where signal photons are dramatically attenuated during propagation and highly coupled with background photons. We address these challenges by providing a time-to-space boundary migration model (BMM) of the scattered field to convert the scattered measurements in spectral form to the scene information in the temporal domain using all of the optical signals. The experiments are conducted under two typical scattering scenarios: 2D and 3D Lambertian objects embedded in the polyethylene foam and the fog, which demonstrate the effectiveness of the proposed algorithm. It outperforms related works including time gating in terms of reconstruction precision and scattering strength. Even though the proportion of signal photons is only 0.75%, Lambertian objects located at more than 25 transport mean free paths (TMFPs), corresponding to the round-trip scattering length of more than 50 TMFPs, can be reconstructed. Also, the proposed method provides low reconstruction complexity and millisecond-scale runtime, which significantly benefits its application.
15. Zhou J, Yang T, Zhang W. Underwater vision enhancement technologies: a comprehensive review, challenges, and recent trends. Appl Intell 2022. DOI: 10.1007/s10489-022-03767-y.
16. Medium Transmission Map Matters for Learning to Restore Real-World Underwater Images. Appl Sci (Basel) 2022; 12:5420. DOI: 10.3390/app12115420.
Abstract
Low illumination, light reflection, scattering, absorption, and suspended particles inevitably lead to critically degraded underwater image quality, which poses great challenges for recognizing objects in underwater images. Existing underwater enhancement methods that aim to improve underwater visibility suffer heavily from poor image restoration performance and generalization ability. To reduce the difficulty of underwater image enhancement, we introduce the medium transmission map as guidance for image enhancement. Unlike existing frameworks that also introduce the medium transmission map for better distribution modeling, we explicitly formulate the interaction between the underwater visual images and the transmission map to obtain better enhancement results. Meanwhile, our network only requires supervision for the medium transmission map during training, and the corresponding prediction map can be generated in subsequent tests, which reduces the operational difficulty of downstream tasks. Thanks to this formulation, the proposed method with a very lightweight network configuration produces very promising results of 22.6 dB on the challenging Test-R90 at an impressive 30.3 FPS, faster than most current algorithms. Comprehensive experimental results demonstrate its superiority in underwater perception.
17. Li S, Liu F, Wei J. Dehazing and deblurring of underwater images with heavy-tailed priors. Appl Opt 2022; 61:3855-3870. PMID: 36256430; DOI: 10.1364/ao.452345.
Abstract
The common problems of underwater images include color cast, the haze effect, and motion blur caused by turbulence and camera shake. To address these problems, this paper studies color cast together with the haze and blur effects. Because red light attenuates significantly underwater, which can cause a color cast, this paper proposes a red channel compensation method. The approach adaptively compensates the red channel according to its pixel values, successfully preventing excessive compensation. To address the haze effect, a variational method combined with the physical model of underwater imaging is introduced; it can not only recover clear underwater images but also refine the transmission map at the same time. Furthermore, blind deconvolution is adopted to deblur underwater images: the blur kernel of an underwater image is first estimated, and then a clear image is recovered using the obtained kernel. Finally, qualitative and quantitative comparisons of the underwater images recovered by different methods are carried out. Qualitatively, the images recovered by our method have higher sharpness and more outstanding details; quantitatively, they score higher under various criteria. On the whole, our method therefore presents clear advantages over the others.
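The adaptive red-channel compensation idea can be sketched roughly as below; this follows the widely used Ancuti-style formulation (compensating red from green, damped for already-bright red pixels), and the paper's exact adaptive rule may differ:

```python
import numpy as np

def compensate_red(img, alpha=1.0):
    """Compensate the attenuated red channel using the green channel.

    The boost is proportional to (mean_g - mean_r) and damped by (1 - r),
    so pixels whose red value is already high receive little compensation,
    avoiding over-compensation.
    """
    r, g = img[..., 0], img[..., 1]
    mean_r, mean_g = r.mean(), g.mean()
    out = img.copy()
    out[..., 0] = r + alpha * (mean_g - mean_r) * (1.0 - r) * g
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(1)
img = rng.uniform(0.0, 1.0, size=(8, 8, 3))
img[..., 0] *= 0.3                                # simulate red attenuation
out = compensate_red(img)
print(out[..., 0].mean() > img[..., 0].mean())    # red channel is lifted
```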
18
Liu S, Fan H, Lin S, Wang Q, Ding N, Tang Y. Adaptive Learning Attention Network for Underwater Image Enhancement. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3156176] [Indexed: 11/07/2022]
19
Liu J, Liu Z, Wei Y, Ouyang W. Recovery for underwater image degradation with multi-stage progressive enhancement. OPTICS EXPRESS 2022; 30:11704-11725. [PMID: 35473109 DOI: 10.1364/oe.453387] [Received: 01/10/2022] [Accepted: 03/09/2022] [Indexed: 06/14/2023]
Abstract
Optical absorption and scattering result in quality degradation of underwater images, which hampers the performance of underwater vision tasks. In practice, well-posed underwater image recovery requires a combination of scene specificity and adaptability. To this end, this paper breaks the overall recovery process down into in-situ enhancement and data-driven correction modules and proposes a Multi-stage Underwater Image Enhancement (MUIE) method to cascade them. In the in-situ enhancement module, a channel compensation with scene-relevant supervision is designed to address different degrees of unbalanced attenuation, and a duality-based computation then inverts the result of running an enhancement on inverted intensities to recover the degraded textures. In response to different scenarios, a data-driven correction, encoding corrected color-constancy information under data supervision, is performed to correct the improper color appearance of the in-situ enhanced results. Under this collaboration between scene and data information, MUIE avoids ill-posed responses and reduces dependence on scene-specific priors, resulting in robust performance across different underwater scenes. Comparison results confirm the superiority of MUIE in scene clarity, realistic color appearance, and evaluation scores. With MUIE, the Underwater Image Quality Measurement (UIQM) scores of recovery-challenging images in the UIEB dataset improved from 1.59 to 3.92.
20
Li T, Rong S, Zhao W, Chen L, Liu Y, Zhou H, He B. Underwater image enhancement using adaptive color restoration and dehazing. OPTICS EXPRESS 2022; 30:6216-6235. [PMID: 35209562 DOI: 10.1364/oe.449930] [Received: 12/01/2021] [Accepted: 01/31/2022] [Indexed: 06/14/2023]
Abstract
Underwater images captured by optical cameras can be degraded by light attenuation and scattering, which leads to deteriorated visual image quality. Underwater image enhancement plays an important role in a wide range of subsequent applications such as image segmentation and object detection. To address this issue, we propose an underwater image enhancement framework consisting of an adaptive color restoration module and a haze-line based dehazing module. First, we employ an adaptive color restoration method to compensate the deteriorated color channels and restore the colors. The color restoration module consists of three steps: background light estimation, color recognition, and color compensation. The background light estimation determines whether the image is bluish or greenish, and compensation is applied to the red-green or red-blue channels accordingly. Second, the haze-line technique is employed to remove the haze and enhance image details. Experimental results show that the proposed method can restore the color and remove the haze at the same time, and it also outperforms several state-of-the-art methods on three publicly available datasets. Moreover, experiments on an underwater object detection dataset show that the proposed enhancement method improves the accuracy of the subsequent underwater object detection framework.
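A common way to estimate the background light from the brightest pixels can be sketched as follows; this is a simplified stand-in, since the paper's estimator additionally performs color recognition before choosing which channels to compensate:

```python
import numpy as np

def estimate_background_light(img, top_frac=0.001):
    """Estimate background light as the mean color of the brightest pixels.

    Brightness is taken as the per-pixel channel mean; the paper's
    estimator is more elaborate (it also classifies the water color).
    """
    h, w, _ = img.shape
    brightness = img.mean(axis=2).ravel()
    k = max(1, int(top_frac * h * w))
    idx = np.argsort(brightness)[-k:]          # indices of brightest pixels
    return img.reshape(-1, 3)[idx].mean(axis=0)

rng = np.random.default_rng(2)
img = rng.uniform(0.0, 0.5, size=(32, 32, 3))
img[0, 0] = [0.2, 0.8, 0.9]                    # plant a bright, bluish pixel
B = estimate_background_light(img)
print(B[2] > B[0])                             # estimated light is bluish
```

Once `B` is known, a bluish estimate (B > R) would trigger red-blue compensation and a greenish one (G > B) red-green compensation, per the paper's color-recognition step.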
21
An Overview of Underwater Vision Enhancement: From Traditional Methods to Recent Deep Learning. JOURNAL OF MARINE SCIENCE AND ENGINEERING 2022. [DOI: 10.3390/jmse10020241] [Indexed: 01/22/2023]
Abstract
Underwater video images, as the primary carriers of underwater information, play a vital role in human exploration and development of the ocean. Due to the optical characteristics of water bodies, underwater video images generally suffer from problems such as color bias and unclear image quality, and the degradation can be severe. Degraded images adversely affect the visual tasks of underwater vehicles, such as recognition and detection. It is therefore vital to obtain high-quality underwater video images. This paper first analyzes the imaging principle of underwater images and the reasons for their quality degradation, and briefly classifies the various existing methods. It then focuses on currently popular deep learning techniques for underwater image enhancement, and underwater video enhancement technologies are also covered. Standard underwater datasets, common video image evaluation indexes, and underwater-specific image indexes are introduced as well. Finally, the paper discusses possible future developments in this area.
22
Abstract
Due to refraction, absorption, and scattering of light by suspended particles in water, underwater images are characterized by low contrast, blurred details, and color distortion. In this paper, a fusion algorithm to restore and enhance underwater images is proposed. It consists of a color restoration module, an end-to-end defogging module, and a brightness equalization module. In the color restoration module, a color balance algorithm based on the CIE Lab color model is proposed to alleviate the color deviation of underwater images. The end-to-end defogging module takes the input image at one end and produces the output image at the other; a CNN is proposed to connect the two ends and improve the contrast of underwater images. In the CNN, a sub-network reduces the network depth needed to obtain the same features, and several depthwise separable convolutions reduce the number of parameters to be computed during training. A basic attention module is introduced to highlight important areas in the image. To improve the defogging network's ability to extract global information, a cross-layer connection and a pooling pyramid module are added. In the brightness equalization module, a contrast-limited adaptive histogram equalization method is used to balance the overall brightness. The proposed fusion algorithm for underwater image restoration and enhancement is verified by experiments and comparison with previous deep learning models and traditional methods. The comparison results show that the color correction and detail enhancement of the proposed method are superior.
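The brightness-equalization step uses contrast-limited adaptive histogram equalization (CLAHE). As a simplified stand-in, plain global histogram equalization on a single channel can be sketched below; real CLAHE adds tiling and a clip limit (e.g. OpenCV's `cv2.createCLAHE`), which this sketch omits:

```python
import numpy as np

def equalize_hist(channel, bins=256):
    """Global histogram equalization of a uint8 channel via the CDF."""
    hist = np.bincount(channel.ravel(), minlength=bins)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = np.round(cdf * (bins - 1)).astype(np.uint8)   # lookup table
    return lut[channel]

rng = np.random.default_rng(3)
dark = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)  # underexposed
eq = equalize_hist(dark)
print(eq.mean() > dark.mean())   # intensities spread over the full range
```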
23
Block-Greedy and CNN Based Underwater Image Dehazing for Novel Depth Estimation and Optimal Ambient Light. WATER 2021. [DOI: 10.3390/w13233470] [Indexed: 11/16/2022]
Abstract
A lack of adequate consideration of underwater image enhancement leaves room for more research in the field. In particular, the global background light has not been adequately addressed in the presence of backscattering. This paper presents a technique based on pixel differences between global and local patches for scene depth estimation. The pixel variance is based on the green-red, green-blue, and red-blue channel differences, in addition to absolute mean intensity functions. The global background light is extracted from a moving average of the impact of suspended light and the brightest pixels within the image color channels. We introduce a block-greedy algorithm in a novel Convolutional Neural Network (CNN) to normalize the attenuation ratios of the different color channels and to select regions with the lowest variance. We address the discontinuity associated with underwater images by transforming both local and global pixel values, and we minimize energy in the proposed CNN via a novel Markov random field to smooth edges and improve the final underwater image features. A comparison against existing state-of-the-art algorithms using entropy, Underwater Color Image Quality Evaluation (UCIQE), Underwater Image Quality Measure (UIQM), Underwater Image Colorfulness Measure (UICM), and Underwater Image Sharpness Measure (UISM) indicates better performance of the proposed approach in terms of average and consistency. On average, UICM values are higher for the proposed technique than for the reference methods, which explains its better color balance. The mean (μ) values of UCIQE, UISM, and UICM of the proposed method supersede those of the existing techniques. The proposed method achieved improvements of 0.4%, 4.8%, 9.7%, 5.1%, and 7.2% in entropy, UCIQE, UIQM, UICM, and UISM, respectively, compared with the best existing techniques.
Consequently, the dehazed images have sharp, colorful, and clear features in most cases when compared with those produced by the existing state-of-the-art methods. Stable standard-deviation (σ) values explain the consistency, in terms of sharpness of color and clarity of features, of most of the proposed image results when compared with the reference methods. Our own assessment is that the only weakness of the proposed technique is that it applies only to underwater images. Future research could seek to establish edge strengthening without color saturation enhancement.
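The depth cue from inter-channel differences (red attenuates fastest underwater, so a large green/blue vs. red gap suggests distance) can be sketched as a crude prior; the paper's block-greedy CNN and Markov-random-field smoothing are far more involved than this stand-in:

```python
import numpy as np

def depth_prior(img):
    """Crude relative-depth cue: max(G, B) - R, normalized to [0, 1].

    Red light attenuates fastest underwater, so a large gap between the
    red channel and the stronger of green/blue hints at greater distance.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    d = np.maximum(g, b) - r
    d = d - d.min()
    return d / d.max() if d.max() > 0 else d

img = np.zeros((2, 2, 3))
img[0, 0] = [0.8, 0.8, 0.8]    # near: red barely attenuated
img[1, 1] = [0.1, 0.6, 0.7]    # far: red strongly attenuated
d = depth_prior(img)
print(d[1, 1] > d[0, 0])       # farther pixel gets the larger depth cue
```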
24
Pelletier D, Rouxel J, Fauvarque O, Hanon D, Gestalin JP, Lebot M, Dreano P, Furet E, Tardivel M, Le Bras Y, Royaux C, Leguen G. KOSMOS: An Open Source Underwater Video Lander for Monitoring Coastal Fishes and Habitats. SENSORS (BASEL, SWITZERLAND) 2021; 21:7724. [PMID: 34833799 PMCID: PMC8619907 DOI: 10.3390/s21227724] [Received: 10/14/2021] [Revised: 11/09/2021] [Accepted: 11/16/2021] [Indexed: 11/16/2022]
Abstract
BACKGROUND Monitoring the ecological status of coastal ecosystems is essential to track the consequences of anthropogenic pressures and assess conservation actions. Monitoring requires periodic measurements collected in situ, replicated over large areas, and able to capture spatial distributions over time. This means developing tools and protocols that are cost-effective and provide consistent, high-quality data, which is a major challenge. A new tool and protocol with these capabilities for non-extractively assessing the status of fishes and benthic habitats is presented here: the KOSMOS 3.0 underwater video system. METHODS The KOSMOS 3.0 was conceived from the pre-existing and successful STAVIRO lander and developed within a digital fabrication laboratory, where collective intelligence was contributed mostly voluntarily within a managed project. A suite of mechanical, electrical, and software engineering skills was combined with ecological knowledge and fieldwork experience. RESULTS Pool and aquarium tests of the KOSMOS 3.0 satisfied all the required technical specifications and operational testing. The prototype demonstrated high optical performance and high consistency with image data from the STAVIRO. The project's outcomes are shared under a Creative Commons Attribution CC-BY-SA license. The low cost of a KOSMOS unit (~1400 €) makes multiple units affordable for modest research or monitoring budgets.
Affiliation(s)
- Dominique Pelletier
- Ifremer, Unité Ecologie et Modèles Pour l’Halieutique, Centre Atlantique, F-44311 Nantes, France
- Justin Rouxel
- Ifremer, Laboratoire Détection Capteurs et Mesures, Centre Bretagne, F-29280 Plouzané, France
- Olivier Fauvarque
- Ifremer, Laboratoire Détection Capteurs et Mesures, Centre Bretagne, F-29280 Plouzané, France
- David Hanon
- Konk Ar Lab, F-29900 Concarneau, France
- Jean-Paul Gestalin
- Konk Ar Lab, F-29900 Concarneau, France
- Paul Dreano
- Konk Ar Lab, F-29900 Concarneau, France
- Enora Furet
- Konk Ar Lab, F-29900 Concarneau, France
- Morgan Tardivel
- Ifremer, Laboratoire Détection Capteurs et Mesures, Centre Bretagne, F-29280 Plouzané, France
- Yvan Le Bras
- Pôle National de Données de Biodiversité, UMS 2006 PatriNat, Station Marine de Concarneau, Muséum National d’Histoire Naturelle, F-29900 Concarneau, France
- Coline Royaux
- Pôle National de Données de Biodiversité, UMS 2006 PatriNat, Station Marine de Concarneau, Muséum National d’Histoire Naturelle, F-29900 Concarneau, France
- Guillaume Leguen
- Konk Ar Lab, F-29900 Concarneau, France
25
Tao Y, Dong L, Xu L, Xu W. Effective solution for underwater image enhancement. OPTICS EXPRESS 2021; 29:32412-32438. [PMID: 34615313 DOI: 10.1364/oe.432756] [Received: 06/17/2021] [Accepted: 09/07/2021] [Indexed: 06/13/2023]
Abstract
Degradation of underwater images severely limits the exploration and understanding of the underwater world, and has become a fundamental yet vital issue to be addressed in underwater optics. In this paper, we develop an effective solution for underwater image enhancement. We first employ adaptive-adjusted artificial multi-exposure fusion (A-AMEF) and parameter adaptive-adjusted local color correction (PAL-CC) to generate a contrast-enhanced version and a color-corrected version of the input, respectively. We then pass the contrast-enhanced version through the well-known guided filter to generate a smooth base layer and a detail layer containing detail information. After that, a color channel transfer operation transfers color information from the color-corrected version to the base layer. Finally, the color-corrected base layer and the detail layer are simply added together to reconstruct the final enhanced output. In our comprehensive quantitative and qualitative evaluations, results from the proposed solution exhibit better visual quality than those dehazed by current techniques. The solution can also be utilized for dehazing fogged images or for improving the accuracy of other optical applications such as image segmentation and local feature point matching.
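The base/detail split and recombination steps above can be sketched as follows; a naive box filter stands in for the guided filter, and the color transfer is reduced to a per-channel mean shift, which is cruder than the paper's transfer operation:

```python
import numpy as np

def box_blur(img, k=3):
    """Naive box filter as a stand-in for the guided filter."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(4)
contrast_enhanced = rng.uniform(0.0, 1.0, size=(16, 16, 3))
color_corrected = rng.uniform(0.0, 1.0, size=(16, 16, 3))

base = box_blur(contrast_enhanced)            # smooth base layer
detail = contrast_enhanced - base             # high-frequency detail layer

# Crude per-channel color transfer: shift base-layer channel means toward
# the color-corrected version (the paper's transfer operation is richer).
base_ct = base + (color_corrected.mean(axis=(0, 1)) - base.mean(axis=(0, 1)))

result = base_ct + detail                     # reconstruct the final output
print(result.shape == (16, 16, 3))
```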
26
An Extensive Literature Review on Underwater Image Colour Correction. SENSORS 2021; 21:s21175690. [PMID: 34502585 PMCID: PMC8433714 DOI: 10.3390/s21175690] [Received: 06/18/2021] [Revised: 08/13/2021] [Accepted: 08/16/2021] [Indexed: 11/16/2022]
Abstract
The topic of underwater (UW) image colour correction and restoration has gained significant scientific interest in the last couple of decades. A vast number of disciplines, from marine biology to archaeology, can and need to utilise true information about the UW environment, and a significant number of scientists have accordingly contributed to the topic. In this paper, we make an unbiased and extensive review of some of the most significant contributions from the last 15 years. After considering the optical properties of water, as well as light propagation and the haze it causes, the focus is on the different methods that exist in the literature. The criteria for which most of them were designed, as well as the quality evaluation used to measure their effectiveness, are underlined.
27
Wang K, Shen L, Lin Y, Li M, Zhao Q. Joint Iterative Color Correction and Dehazing for Underwater Image Enhancement. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3070253] [Indexed: 11/08/2022]
28
Raveendran S, Patil MD, Birajdar GK. Underwater image enhancement: a comprehensive review, recent trends, challenges and applications. Artif Intell Rev 2021. [DOI: 10.1007/s10462-021-10025-z] [Indexed: 11/29/2022]
29
Li C, Anwar S, Hou J, Cong R, Guo C, Ren W. Underwater Image Enhancement via Medium Transmission-Guided Multi-Color Space Embedding. IEEE TRANSACTIONS ON IMAGE PROCESSING 2021; 30:4985-5000. [PMID: 33961554 DOI: 10.1109/tip.2021.3076367] [Indexed: 06/12/2023]
Abstract
Underwater images suffer from color casts and low contrast due to wavelength- and distance-dependent attenuation and scattering. To solve these two degradation issues, we present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor. Concretely, we first propose a multi-color space encoder network, which enriches the diversity of feature representations by incorporating the characteristics of different color spaces into a unified structure. Coupled with an attention mechanism, the most discriminative features extracted from multiple color spaces are adaptively integrated and highlighted. Inspired by underwater imaging physical models, we design a medium transmission-guided decoder network (the medium transmission indicates the percentage of scene radiance reaching the camera) to enhance the network's response to quality-degraded regions. As a result, our network can effectively improve the visual quality of underwater images by exploiting multi-color space embedding and the advantages of both physical model-based and learning-based methods. Extensive experiments demonstrate that Ucolor achieves superior performance against state-of-the-art methods in terms of both visual quality and quantitative metrics. The code is publicly available at: https://li-chongyi.github.io/Proj_Ucolor.html.
30
Ngo D, Lee S, Ngo TM, Lee GD, Kang B. Visibility Restoration: A Systematic Review and Meta-Analysis. SENSORS 2021; 21:s21082625. [PMID: 33918021 PMCID: PMC8069147 DOI: 10.3390/s21082625] [Received: 03/03/2021] [Revised: 03/29/2021] [Accepted: 04/06/2021] [Indexed: 11/16/2022]
Abstract
Image acquisition is a complex process that is affected by a wide variety of internal and environmental factors. Hence, visibility restoration is crucial for many high-level applications in photography and computer vision. This paper provides a systematic review and meta-analysis of visibility restoration algorithms, with a focus on those pertinent to poor weather conditions. It starts with an introduction to optical image formation and then provides a comprehensive description of existing algorithms as well as a comparative evaluation. Subsequently, there is a thorough discussion of current difficulties that are worthy of scientific effort. Moreover, this paper proposes a general framework for visibility restoration in hazy weather conditions using haze-relevant features and maximum likelihood estimates. Finally, a discussion of the findings and future developments concludes the paper.
Affiliation(s)
- Dat Ngo
- Department of Electronics Engineering, Dong-A University, Busan 49315, Korea
- Seungmin Lee
- Department of Electronics Engineering, Dong-A University, Busan 49315, Korea
- Tri Minh Ngo
- Faculty of Electronics and Telecommunication Engineering, The University of Danang—University of Science and Technology, Danang 550000, Vietnam
- Gi-Dong Lee
- Department of Electronics Engineering, Dong-A University, Busan 49315, Korea
- Bongsoon Kang
- Department of Electronics Engineering, Dong-A University, Busan 49315, Korea
- Correspondence: Tel.: +82-51-200-7703
31
Robust Chromatic Adaptation Based Color Correction Technology for Underwater Images. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10186392] [Indexed: 11/16/2022]
Abstract
Recovering correct, or at least realistic, colors of underwater scenes is a challenging issue for image processing due to unknown imaging conditions, including the optical water type, scene location, illumination, and camera settings. Assuming the illumination of the scene is uniform, a chromatic adaptation-based color correction technology is proposed in this paper to remove the color cast using a single underwater image without any other information. First, the underwater RGB image is linearized to make its pixel values proportional to the light intensities arriving at the pixels. Second, the illumination is estimated in a uniform chromatic space based on the white-patch hypothesis. Third, the chromatic adaptation transform is implemented in the device-independent XYZ color space. Qualitative and quantitative evaluations both show that the proposed method outperforms the other tested methods in terms of color restoration, especially for images with a severe color cast. The proposed method is simple yet effective and robust, and is helpful for obtaining in-air images of underwater scenes.
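The white-patch illuminant estimate and a diagonal (von Kries-style) chromatic adaptation can be sketched directly in RGB as below; the paper works on linearized values in the device-independent XYZ space, which this simplified stand-in omits:

```python
import numpy as np

def white_patch_correct(img, percentile=99):
    """Estimate the illuminant from near-maximum channel responses
    (white-patch hypothesis) and divide it out, i.e. a von Kries-style
    diagonal chromatic adaptation applied directly in RGB."""
    illum = np.percentile(img.reshape(-1, 3), percentile, axis=0)
    illum = np.maximum(illum, 1e-6)          # guard against division by zero
    return np.clip(img / illum, 0.0, 1.0)

rng = np.random.default_rng(5)
scene = rng.uniform(0.1, 1.0, size=(32, 32, 3))
cast = scene * np.array([0.4, 0.8, 1.0])     # simulate a blue-green cast
corrected = white_patch_correct(cast)
means = corrected.mean(axis=(0, 1))
print(means)                                  # channel means rebalanced
```

Using a high percentile rather than the strict per-channel maximum makes the estimate more robust to isolated specular or noisy pixels.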