1. Jia T, Li J, Zhuo L, Zhang J. Self-guided disentangled representation learning for single image dehazing. Neural Netw 2024;172:106107. PMID: 38232424. DOI: 10.1016/j.neunet.2024.106107.
Abstract
Image dehazing has received extensive research attention as images collected in hazy weather are limited by low visibility and information dropout. Recently, disentangled representation learning has made excellent progress in various vision tasks. However, existing networks for low-level vision tasks lack efficient feature interaction and delivery mechanisms in the disentanglement process or an evaluation mechanism for the degree of decoupling in the reconstruction process, rendering direct application to image dehazing challenging. We propose a self-guided disentangled representation learning (SGDRL) algorithm with a self-guided disentangled network to realize multi-level progressive feature decoupling through sharing and interaction. The self-guided disentangled (SGD) network extracts image features using the multi-layer backbone network, and attribute features are weighted using the self-guided attention mechanism for the backbone features. In addition, we introduce a disentanglement-guided (DG) module to evaluate the degree of feature decomposition and guide the feature fusion process in the reconstruction stage. Accordingly, we develop SGDRL-based unsupervised and semi-supervised single image dehazing networks. Extensive experiments demonstrate the superiority of the proposed method for real-world image dehazing. The source code is available at https://github.com/dehazing/SGDRL.
Affiliation(s)
- Tongyao Jia: Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China
- Jiafeng Li: Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing University of Technology, Beijing, 100124, China
- Li Zhuo: Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing University of Technology, Beijing, 100124, China
- Jing Zhang: Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, China; Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing University of Technology, Beijing, 100124, China
2. Shi X, Huang F, Ju L, Fan Z, Zhao S, Chen S. Hierarchical deconvolution dehazing method based on transmission map segmentation. Opt Express 2023;31:43234-43249. PMID: 38178422. DOI: 10.1364/oe.510100.
Abstract
Images captured in fog are often affected by scattering. Due to the absorption and scattering of light by aerosols and water droplets, image quality is seriously degraded, manifesting as decreased brightness, reduced contrast, image blur, and increased noise. In single-image dehazing, the image degradation model is essential. In this paper, an effective image degradation model is proposed, in which a hierarchical deconvolution strategy based on transmission map segmentation effectively improves the accuracy of image restoration. Specifically, the transmission map is obtained using the dark channel prior (DCP) method, and its histogram is fitted. The image is then divided into regions according to the fitting results. Furthermore, to more accurately recover images of complex objects with a large depth of field, different levels of deconvolution are applied to different regions. Finally, the sub-images of the different regions are fused to obtain the dehazed image. We tested the proposed method on both synthetic and natural fog images, comparing it with eight advanced image dehazing methods on quantitative indexes such as peak signal-to-noise ratio (PSNR), structural similarity (SSIM), image entropy, the natural image quality evaluator (NIQE), and the blind/referenceless image spatial quality evaluator (BRISQUE). Both subjective and objective evaluations show that the proposed method achieves competitive results.
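Since the method above starts from a dark channel prior transmission map, a minimal sketch of the standard DCP transmission estimate may be useful for orientation; the function names and the conventional weight omega = 0.95 are common DCP conventions assumed here, not details taken from this paper:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, followed by a local
    minimum filter over a patch x patch neighborhood."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, A, omega=0.95, patch=15):
    """Standard DCP estimate: t(x) = 1 - omega * dark_channel(I(x) / A)."""
    return 1.0 - omega * dark_channel(img / A, patch)
```

A histogram of the resulting transmission map would then be fitted and segmented into regions, as the abstract describes.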
3. Shi Z, Huo J, Meng Z, Yang F, Wang Z. An Adversarial Dual-Branch Network for Nonhomogeneous Dehazing in Tunnel Construction. Sensors (Basel) 2023;23:9245. PMID: 38005632. PMCID: PMC10675628. DOI: 10.3390/s23229245.
Abstract
The tunnel construction area poses significant challenges for vision technology due to the presence of nonhomogeneous haze fields and low-contrast targets. Existing dehazing algorithms generalize poorly in this scenario, leading to dehazing failures, incomplete dehazing, or color distortion. Therefore, an adversarial dual-branch convolutional neural network (ADN) is proposed in this paper to address these challenges. The ADN processes the hazy image through two branches, a knowledge transfer sub-network and a multi-scale dense residual sub-network, and then aggregates their channels. The aggregated output is passed through a discriminator that judges real versus fake, driving the network to improve its performance. Additionally, a tunnel haze field simulation dataset (Tunnel-HAZE) is established based on the characteristics of nonhomogeneous dust distribution and artificial light sources in tunnels. Comparative experiments with existing advanced dehazing algorithms show improvements of 4.07 dB in PSNR (peak signal-to-noise ratio) and 0.032 in SSIM (structural similarity). Furthermore, a binocular measurement experiment conducted in a simulated tunnel environment demonstrated a 50.5% reduction in relative measurement error compared to the hazy image. The results demonstrate the effectiveness and application potential of the proposed method in tunnel construction.
Affiliation(s)
- Junzhou Huo: School of Mechanical Engineering, Dalian University of Technology, Dalian 116024, China (Z.S.; Z.M.; F.Y.; Z.W.)
4. Frants V, Agaian S, Panetta K. QCNN-H: Single-Image Dehazing Using Quaternion Neural Networks. IEEE Trans Cybern 2023;53:5448-5458. PMID: 37022843. DOI: 10.1109/tcyb.2023.3238640.
Abstract
Single-image haze removal is challenging due to its ill-posed nature. The breadth of real-world scenarios makes it difficult to find an optimal dehazing approach that works well across applications. This article addresses this challenge with a novel, robust quaternion neural network architecture for single-image dehazing. The architecture's dehazing performance and its impact on downstream applications, such as object detection, are presented. The proposed single-image dehazing network is based on an encoder-decoder architecture that takes advantage of quaternion image representation without interrupting the quaternion dataflow end-to-end. We achieve this by introducing a novel quaternion pixel-wise loss function and a quaternion instance normalization layer. The performance of the proposed QCNN-H quaternion framework is evaluated on two synthetic datasets, two real-world datasets, and one real-world task-oriented benchmark. Extensive experiments confirm that QCNN-H outperforms state-of-the-art haze removal methods in visual quality and quantitative metrics. Furthermore, the evaluation shows increased accuracy and recall of state-of-the-art object detection in hazy scenes when using QCNN-H. To our knowledge, this is the first application of a quaternion convolutional network to the haze removal task.
5. Zhang J, Chen C, Chen K, Ju M, Zhang D. Local Adaptive Image Filtering Based on Recursive Dilation Segmentation. Sensors (Basel) 2023;23:5776. PMID: 37447626. PMCID: PMC10346767. DOI: 10.3390/s23135776.
Abstract
This paper introduces a simple but effective image filtering method, local adaptive image filtering (LAIF), built on an image segmentation method, recursive dilation segmentation (RDS). The algorithm is motivated by the observation that, when smoothing a pixel, only the similar pixels nearby should contribute to the filtering result. Relying on this observation, similar pixels are partitioned by RDS before a locally adaptive filter smooths the image. More specifically, by directly taking the spatial relationship between adjacent pixels into consideration in a recursive dilation manner, RDS first partitions the guide image into several regions, so that pixels belonging to the same segmented region share similar properties. Then, guided by the segmentation results, the input image is filtered via a local adaptive filtering technique that smooths each pixel by selectively averaging its local similar pixels. Notably, RDS makes full use of multiple integrated cues, including pixel intensity, hue, and especially spatial adjacency, leading to more robust filtering results. The application of LAIF in the remote sensing field has also achieved outstanding results, in areas such as image dehazing, denoising, enhancement, and edge preservation. Experimental results show that the proposed LAIF can be successfully applied to various filtering-based tasks with favorable performance against state-of-the-art methods.
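The "selectively averaging its local similar pixels" step can be sketched as an intensity-gated box filter. This is a hypothetical single-channel illustration (the `tol` threshold and edge-padded window are assumptions), not the authors' RDS-guided implementation:

```python
import numpy as np

def local_adaptive_filter(img, radius=1, tol=0.1):
    """Smooth each pixel by averaging only the nearby pixels whose
    intensity lies within `tol` of the center pixel, so edges between
    dissimilar regions are preserved."""
    padded = np.pad(img, radius, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            similar = window[np.abs(window - img[i, j]) <= tol]
            out[i, j] = similar.mean()
    return out
```

On a step edge this filter averages each side separately, which is the edge-preserving behavior the abstract attributes to selective averaging.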
Affiliation(s)
- Jialiang Zhang: School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210046, China
- Chuheng Chen: School of Bell Honors, Nanjing University of Posts and Telecommunications, Nanjing 210046, China
- Kai Chen: School of Bell Honors, Nanjing University of Posts and Telecommunications, Nanjing 210046, China
- Mingye Ju: School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210046, China
- Dengyin Zhang: School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210046, China
6. Xiao X, Ren Y, Li Z, Zhang N, Zhou W. Self-supervised zero-shot dehazing network based on dark channel prior. Front Optoelectron 2023;16:7. PMID: 37055622. PMCID: PMC10102283. DOI: 10.1007/s12200-023-00062-7.
Abstract
Most learning-based methods previously used in image dehazing employ a supervised learning strategy, which is time-consuming and requires a large-scale dataset. However, large-scale datasets are difficult to obtain. Here, we propose a self-supervised zero-shot dehazing network (SZDNet) based on dark channel prior, which uses a hazy image generated from the output dehazed image as a pseudo-label to supervise the optimization process of the network. Additionally, we use a novel multichannel quad-tree algorithm to estimate atmospheric light values, which is more accurate than previous methods. Furthermore, the sum of the cosine distance and the mean squared error between the pseudo-label and the input image is applied as a loss function to enhance the quality of the dehazed image. The most significant advantage of the SZDNet is that it does not require a large dataset for training before performing the dehazing task. Extensive testing shows promising performances of the proposed method in both qualitative and quantitative evaluations when compared with state-of-the-art methods.
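The loss described above, the sum of the cosine distance and the mean squared error between the re-synthesized hazy pseudo-label and the input hazy image, can be sketched as follows. This is a NumPy illustration assuming cosine similarity over flattened image vectors; it is not the authors' exact formulation:

```python
import numpy as np

def pseudo_label_loss(rehazed, hazy_input):
    """Self-supervision signal: cosine distance plus MSE between the
    hazy image regenerated from the dehazed output (pseudo-label path)
    and the original hazy input."""
    a = rehazed.ravel()
    b = hazy_input.ravel()
    cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    cosine_distance = 1.0 - cos_sim
    mse = np.mean((a - b) ** 2)
    return cosine_distance + mse
```

When the re-synthesized hazy image matches the input exactly, both terms vanish, which is what drives the zero-shot optimization toward a consistent dehazed estimate.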
Affiliation(s)
- Xinjie Xiao: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Yuanhong Ren: College of Information Science and Technology, Donghua University, Shanghai 201620, China
- Zhiwei Li: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Nannan Zhang: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Wuneng Zhou: College of Information Science and Technology, Donghua University, Shanghai 201620, China
7. Guo F, Yang J, Liu Z, Tang J. Haze Removal for Single Image: A Comprehensive Review. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.03.061.
8. Hsieh CH, Chen ZY, Chang YH. Using Whale Optimization Algorithm and Haze Level Information in a Model-Based Image Dehazing Algorithm. Sensors (Basel) 2023;23:815. PMID: 36679610. PMCID: PMC9861576. DOI: 10.3390/s23020815.
Abstract
Single image dehazing has been a challenge in the fields of image restoration and computer vision, and many model-based and non-model-based dehazing methods have been reported. This study focuses on a model-based algorithm. A popular model-based method is the dark channel prior (DCP), which has attracted much attention because of its simplicity and effectiveness. In DCP-based methods, the model parameters should be appropriately estimated for good performance. Previously, we found that appropriate scaling factors for the model parameters improved dehazing performance, and we proposed an improved DCP (IDCP) method that uses heuristic scaling factors for the model parameters (atmospheric light and initial transmittance). Building on the IDCP, this paper presents an approach to find optimal scaling factors using the whale optimization algorithm (WOA) and haze level information. The WOA uses ground truth images as a reference in a fitness function to search for the optimal scaling factors in the IDCP; the resulting method is termed IDCP/WOA. We observed that the performance of IDCP/WOA was significantly affected by hazy ground truth images. Thus, based on haze level information, a hazy image discriminator was developed to exclude hazy ground truth images from the dataset used in IDCP/WOA. To avoid using ground truth images in the application stage, hazy image clustering was introduced to group hazy images with their corresponding optimal scaling factors obtained by IDCP/WOA, and the average scaling factors for each haze level were found. The resulting dehazing algorithm is called optimized IDCP (OIDCP). Three datasets commonly used in the image dehazing field, RESIDE, O-HAZE, and KeDeMa, were used to evaluate the proposed OIDCP against five recent haze removal methods.
On the RESIDE dataset, the OIDCP achieved a PSNR of 26.23 dB, better than IDCP by 0.81 dB, DCP by 8.03 dB, RRO by 5.28 dB, AOD by 5.6 dB, and GCAN by 1.27 dB. On the O-HAZE dataset, the OIDCP had a PSNR of 19.53 dB, better than IDCP by 0.06 dB, DCP by 4.39 dB, RRO by 0.97 dB, AOD by 1.41 dB, and GCAN by 0.34 dB. On the KeDeMa dataset, the OIDCP obtained the best overall performance and produced dehazed images with stable visual quality. These results suggest that this study may benefit model-based dehazing algorithms.
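As a rough sketch of what scaling factors on the atmospheric light and initial transmittance look like inside the standard atmospheric-scattering-model recovery: the function name, the parameter names `s_A` and `s_t`, and the transmittance floor `t_min` are illustrative assumptions, not the IDCP's actual heuristics:

```python
import numpy as np

def recover_scaled(I, A, t, s_A=1.0, s_t=1.0, t_min=0.1):
    """ASM recovery J = (I - A') / t' + A', where A' = s_A * A and
    t' = clip(s_t * t, t_min, 1). The scaling factors are the knobs a
    search procedure such as the WOA could tune against ground truth."""
    A_s = s_A * A
    t_s = np.clip(s_t * t, t_min, 1.0)
    return (I - A_s) / t_s[..., None] + A_s
```

With both factors equal to 1 and full transmittance this reduces to the identity, so any dehazing effect comes entirely from the estimated transmittance and the chosen scalings.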
Affiliation(s)
- Cheng-Hsiung Hsieh: Department of Computer Science and Information Engineering, Chaoyang University of Technology, No. 168, Jifong E. Rd., Taichung 413, Taiwan
- Ze-Yu Chen: Department of Computer Science and Information Engineering, Chaoyang University of Technology, No. 168, Jifong E. Rd., Taichung 413, Taiwan
- Yi-Hung Chang: Macronix International Co., No. 19, Lihsin Rd., Science Park, Hsinchu 300, Taiwan
9. Xiao X, Li Z, Ning W, Zhang N, Teng X. LFR-Net: Local feature residual network for single image dehazing. Array 2023. DOI: 10.1016/j.array.2023.100278.
10. From depth-aware haze generation to real-world haze removal. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-08101-8.
11. Li F, Wang Z, He G. AP Shadow Net: A Remote Sensing Shadow Removal Network Based on Atmospheric Transport and Poisson's Equation. Entropy (Basel) 2022;24:1301. PMID: 36141187. PMCID: PMC9497877. DOI: 10.3390/e24091301.
Abstract
Shadows are a fundamental feature of remote sensing images and can cause loss of, or interference with, target data. As a result, shadow detection and removal against complicated backgrounds has become a research hotspot. In this paper, AP ShadowNet, a model combining the atmospheric transport model (ATM) with the Poisson equation, is proposed for unsupervised shadow detection and removal in remote sensing images. The network consists of a preprocessing network based on the ATM (A Net) and a network based on the Poisson equation (P Net). First, a correspondence between shadowed and unshaded areas is generated by the ATM. The brightened image then undergoes adversarial discrimination in P Net. Finally, the reconstructed image is optimized for color consistency and edge transitions via the Poisson equation. Most current neural-network-based shadow removal models are heavily data-driven; the proposed model frees unsupervised shadow detection and removal from the data source restrictions of the remote sensing images themselves. Verifying shadow removal with our model shows satisfying results from both qualitative and quantitative angles. Qualitatively, our results are notably strong in tone consistency and the removal of detailed shadows. Quantitatively, we adopt no-reference evaluation indicators: gradient structure similarity (NRSS) and the natural image quality evaluator (NIQE). Considering evaluation factors such as inference speed and memory occupation, the model stands out among current algorithms.
Affiliation(s)
- Fan Li: School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
- Zhiyi Wang: UoG-UESTC Joint School, University of Electronic Science and Technology of China, Chengdu 611731, China
- Guoliang He: School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 611731, China
12. Chemin N, Shan FP, Marsani MF. Traffic image haze removal based on optimized retinex model and dark channel prior. J Intell Fuzzy Syst 2022. DOI: 10.3233/jifs-221240.
Abstract
GPS monitoring systems and the development of driverless vehicles are almost inseparable from camera images. Images taken by traffic cameras often contain sky areas and noise, and the traditional dark channel prior (DCP) algorithm easily produces color distortion and halo effects when processing hazy traffic images with sky and high-brightness areas. An optimized Retinex model and dark channel prior algorithm (ORDCP) is proposed in this paper. First, by adjusting the calculation of the dark channel image, the proportion of the dark channel is improved. Then, the transmittance image is corrected and smoothed by guided filtering and mean filtering. Finally, the Retinex model is fused in to preserve details. ORDCP corrects the inaccurate estimation of scene transmittance in the DCP algorithm and mitigates common dehazing problems such as loss of detail, halo effects, and contrast and color distortion. Using information entropy (IE) as the objective evaluation index, combined with subjective evaluation, the proposed algorithm effectively retains detailed image information and eliminates halo effects. It also better matches the visual characteristics of the human eye and has practical applicability in traffic control and intelligent detection.
Affiliation(s)
- Ni Chemin: Department of Mathematical Statistics, Zhejiang Yuexiu University; School of Mathematical Sciences, Universiti Sains Malaysia
- Fam Pei Shan: School of Mathematical Sciences, Universiti Sains Malaysia
13. Han J, Zhang S, Fan N, Ye Z. Local patchwise minimal and maximal values prior for single optical remote sensing image dehazing. Inf Sci 2022. DOI: 10.1016/j.ins.2022.05.033.
14. Attention-Gate-Based Model with Inception-like Block for Single-Image Dehazing. Appl Sci (Basel) 2022. DOI: 10.3390/app12136725.
Abstract
In recent decades, haze has become an environmental issue due to its effects on human health. It also reduces visibility and degrades the performance of computer vision algorithms in autonomous driving applications, which may jeopardize car driving safety. Therefore, it is extremely important to instantly remove the haze effect on an image. The purpose of this study is to leverage useful modules to achieve a lightweight and real-time image-dehazing model. Based on the U-Net architecture, this study integrates four modules, including an image pre-processing block, inception-like blocks, spatial pyramid pooling blocks, and attention gates. The original attention gate was revised to fit the field of image dehazing and consider different color spaces to retain the advantages of each color space. Furthermore, using an ablation study and a quantitative evaluation, the advantages of using these modules were illustrated. Through existing indoor and outdoor test datasets, the proposed method shows outstanding dehazing quality and an efficient execution time compared to other state-of-the-art methods. This study demonstrates that the proposed model can improve dehazing quality, keep the model lightweight, and obtain pleasing dehazing results. A comparison to existing methods using the RESIDE SOTS dataset revealed that the proposed model improves the SSIM and PSNR metrics by at least 5–10%.
15. Wei H, Wu Q, Li H, Ngan KN, Li H, Meng F, Xu L. Non-Homogeneous Haze Removal via Artificial Scene Prior and Bidimensional Graph Reasoning. IEEE Trans Image Process 2021;30:9136-9149. PMID: 34735342. DOI: 10.1109/tip.2021.3122806.
Abstract
Due to the lack of natural scene and haze prior information, it is greatly challenging to completely remove the haze from a single image without distorting its visual content. Fortunately, the real-world haze usually presents non-homogeneous distribution, which provides us with many valuable clues in partial well-preserved regions. In this paper, we propose a Non-Homogeneous Haze Removal Network (NHRN) via artificial scene prior and bidimensional graph reasoning. Firstly, we employ the gamma correction iteratively to simulate artificial multiple shots under different exposure conditions, whose haze degrees are different and enrich the underlying scene prior. Secondly, beyond utilizing the local neighboring relationship, we build a bidimensional graph reasoning module to conduct non-local filtering in the spatial and channel dimensions of feature maps, which models their long-range dependency and propagates the natural scene prior between the well-preserved nodes and the nodes contaminated by haze. To the best of our knowledge, this is the first exploration to remove non-homogeneous haze via the graph reasoning based framework. We evaluate our method on different benchmark datasets. The results demonstrate that our method achieves superior performance over many state-of-the-art algorithms for both the single image dehazing and hazy image understanding tasks. The source code of the proposed NHRN is available on https://github.com/whrws/NHRNet.
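The "gamma correction applied iteratively to simulate artificial multiple shots under different exposure conditions" idea can be sketched very simply. The specific gamma values below are illustrative assumptions, not those used by NHRN:

```python
import numpy as np

def gamma_exposures(img, gammas=(0.5, 1.0, 2.0)):
    """Simulate artificial shots at different exposures via gamma
    correction on an image normalized to [0, 1]: gamma < 1 brightens
    (revealing dark, haze-contaminated regions), gamma > 1 darkens."""
    img = np.clip(img, 0.0, 1.0)
    return [img ** g for g in gammas]
```

Each synthetic exposure carries a different effective haze degree, which is the enriched scene prior the abstract refers to.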
16. Ju M, Ding C, Guo CA, Ren W, Tao D. IDRLP: Image Dehazing Using Region Line Prior. IEEE Trans Image Process 2021;30:9043-9057. PMID: 34714745. DOI: 10.1109/tip.2021.3122088.
Abstract
In this work, a novel and ultra-robust single image dehazing method called IDRLP is proposed. It is observed that when an image is divided into n regions, each with a similar scene depth, the brightness of both the hazy image and its haze-free counterpart is positively related to the scene depth. Based on this observation, this work shows that the hazy input and its haze-free counterpart exhibit a quasi-linear relationship after this region segmentation, which is named the region line prior (RLP). By combining the RLP with the atmospheric scattering model (ASM), a recovery formula (RF) is easily obtained with only two unknown parameters, i.e., the slope of the linear function and the atmospheric light. A 2D joint optimization function with two constraints is then designed to solve the RF. Unlike comparable works, this joint optimization strategy makes efficient use of information across the entire image, leading to more accurate results with ultra-high robustness. Finally, a guided filter is introduced in the RF to eliminate adverse interference caused by the region segmentation. The proposed RLP and IDRLP are evaluated from various perspectives and compared with related state-of-the-art techniques. Extensive analysis verifies the superiority of IDRLP over state-of-the-art image dehazing techniques in terms of both recovery quality and efficiency. A software release is available at https://sites.google.com/site/renwenqi888/.
17. Zhang L, Zhu A, Zhao S, Zhou Y. Simulation of Atmospheric Visibility Impairment. IEEE Trans Image Process 2021;30:8713-8726. PMID: 34665730. DOI: 10.1109/tip.2021.3120044.
Abstract
Changes in aerosol composition and its proportions can cause changes in atmospheric visibility. Vision systems deployed outdoors must take into account the negative effects brought by visibility impairment. In order to develop vision algorithms that can adapt to low atmospheric visibility conditions, a large-scale dataset containing pairs of clear images and their visibility-impaired versions (along with other annotations if necessary) is usually indispensable. However, it is almost impossible to collect large amounts of such image pairs in a real physical environment. A natural and reasonable solution is to use virtual simulation technologies, which is the focus of this paper. We first analyze in depth the limitations and irrationalities of existing work specializing in the simulation of atmospheric visibility impairment, pointing out that many simulation schemes actually violate the assumptions of Koschmieder's law. Second, and more importantly, based on a thorough investigation of the relevant studies in atmospheric science, we present simulation strategies for the five most commonly encountered visibility impairment phenomena: mist, fog, natural haze, smog, and Asian dust. Our work establishes a direct link between the fields of atmospheric science and computer vision. In addition, as a byproduct, the proposed simulation schemes yield a large-scale synthetic dataset comprising 40,000 clear source images and their 800,000 visibility-impaired versions. To make our work reproducible, source code and the dataset have been released at https://cslinzhang.github.io/AVID/.
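Koschmieder's law, whose assumptions the paper says many simulation schemes violate, is the standard haze synthesis model. A minimal sketch (the scattering coefficient beta and the depth map are illustrative inputs, not values from the paper):

```python
import numpy as np

def synthesize_haze(J, depth, A=1.0, beta=1.0):
    """Koschmieder's law: I = J * t + A * (1 - t), with transmission
    t = exp(-beta * depth). J is the clear image, A the airlight."""
    t = np.exp(-beta * depth)[..., None]
    return J * t + A * (1.0 - t)
```

At zero depth the synthesized image equals the clear source, and as depth grows every pixel converges to the airlight, which is the physically consistent behavior a valid simulation scheme must preserve.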
18. A Stacking Ensemble Deep Learning Model for Building Extraction from Remote Sensing Images. Remote Sens 2021. DOI: 10.3390/rs13193898.
Abstract
Automatically extracting buildings from remote sensing images with deep learning is of great significance to urban planning, disaster prevention, change detection, and other applications. Various deep learning models have been proposed to extract building information, showing both strengths and weaknesses in capturing the complex spectral and spatial characteristics of buildings in remote sensing images. To integrate the strengths of individual models and obtain fine-scale spatial and spectral building information, this study proposed a stacking ensemble deep learning model. First, an optimization method for the prediction results of the basic model is proposed based on fully connected conditional random fields (CRFs). On this basis, a stacking ensemble model (SENet) based on a sparse autoencoder integrating U-NET, SegNet, and FCN-8s models is proposed to combine the features of the optimized basic model prediction results. Utilizing several cities in Hebei Province, China as a case study, a building dataset containing attribute labels is established to assess the performance of the proposed model. The proposed SENet is compared with three individual models (U-NET, SegNet and FCN-8s), and the results show that the accuracy of SENet is 0.954, approximately 6.7%, 6.1%, and 9.8% higher than U-NET, SegNet, and FCN-8s models, respectively. The identification of building features, including colors, sizes, shapes, and shadows, is also evaluated, showing that the accuracy, recall, F1 score, and intersection over union (IoU) of the SENet model are higher than those of the three individual models. This suggests that the proposed ensemble model can effectively depict the different features of buildings and provides an alternative approach to building extraction with higher accuracy.
Collapse
|
19
|
Fernández-Carvelo S, Martínez-Domingo MÁ, Valero EM, Romero J, Nieves JL, Hernández-Andrés J. Band Selection for Dehazing Algorithms Applied to Hyperspectral Images in the Visible Range. SENSORS 2021; 21:s21175935. [PMID: 34502824 PMCID: PMC8434606 DOI: 10.3390/s21175935] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Revised: 08/27/2021] [Accepted: 08/30/2021] [Indexed: 11/16/2022]
Abstract
Images captured under bad weather conditions (e.g., fog, haze, mist, or dust) suffer from poor contrast, low visibility, and color distortions. The severity of this degradation depends on the distance, the density of the atmospheric particles, and the wavelength. We analyzed eight single-image dehazing algorithms, representative of different strategies and originally developed for RGB images, over a database of hazy spectral images in the visible range. We carried out a brute-force search to find the optimum three wavelengths according to a new combined image quality metric. The optimal triplet of monochromatic bands depends on the dehazing algorithm used and, in most cases, the different bands are quite close to each other. According to our proposed combined metric, the best method is artificial multiple exposure image fusion (AMEF). If all wavelengths within the range 450-720 nm are used to build an sRGB rendering of the images, the two best-performing methods are AMEF and contrast-limited adaptive histogram equalization (CLAHE), with very similar quality of the dehazed images. Our results show that the performance of the algorithms critically depends on the signal balance and the information present in the three channels of the input image. The capture time can be considerably shortened, and the capture device simplified, by using a triplet of bands instead of the full wavelength range for dehazing purposes, although the selection of the bands must be performed specifically for a given algorithm.
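The brute-force search described above — scoring every possible triplet of spectral bands with a quality metric and keeping the best — can be sketched as follows. The `quality` callable is a hypothetical stand-in for the paper's combined image-quality metric, and the toy cube and wavelength list are illustrative assumptions.

```python
import itertools
import numpy as np

def best_band_triplet(cube, wavelengths, quality):
    """Exhaustively score every 3-band combination of a hyperspectral
    cube (bands, H, W) with a quality metric and return the winning
    wavelength triplet and its score."""
    best_score, best_bands = -np.inf, None
    for i, j, k in itertools.combinations(range(cube.shape[0]), 3):
        score = quality(cube[[i, j, k]])       # (3, H, W) candidate image
        if score > best_score:
            best_score = score
            best_bands = (wavelengths[i], wavelengths[j], wavelengths[k])
    return best_bands, best_score

# toy cube: 5 bands of 4x4 "radiance"; quality = mean per-channel contrast
rng = np.random.default_rng(0)
cube = rng.random((5, 4, 4))
wl = [450, 520, 580, 650, 720]
bands, score = best_band_triplet(
    cube, wl, lambda c: float(c.std(axis=(1, 2)).mean()))
```

With n bands this evaluates C(n, 3) candidates, which is tractable for visible-range cubes with a few dozen bands; as the abstract notes, the search must be rerun per dehazing algorithm, since the optimal triplet is algorithm-dependent.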
Collapse
|
21
|
A Review of Remote Sensing Image Dehazing. SENSORS 2021; 21:s21113926. [PMID: 34200320 PMCID: PMC8201244 DOI: 10.3390/s21113926] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Revised: 05/29/2021] [Accepted: 06/02/2021] [Indexed: 11/16/2022]
Abstract
Remote sensing (RS) is a data collection technology that helps reveal more information about the Earth's surface. However, RS data captured by satellites are susceptible to suspended atmospheric particles during the imaging process, especially data in the visible-light bands. To compensate for this deficiency, numerous dehazing efforts have been made recently, most of which aim to restore a single hazy image directly, without any extra information. In this paper, we first classify the currently available algorithms into three categories: image enhancement, physical dehazing, and data-driven approaches. The advantages and disadvantages of each type of algorithm are then summarized in detail. Finally, the evaluation indicators used to rank recovery performance and the application scenarios of RS haze removal techniques are discussed. In addition, some common deficiencies of current methods and directions for future research are elaborated.
Collapse
|