1
Hu S, Gao Q, Xie K, Wen C, Zhang W, He J. Efficient detection of driver fatigue state based on all-weather illumination scenarios. Sci Rep 2024; 14:17075. [PMID: 39048601] [PMCID: PMC11269596] [DOI: 10.1038/s41598-024-67131-5]
Abstract
Among the causes of annual traffic accidents, driver fatigue is the main culprit, so research on driving fatigue detection and early-warning systems is of great practical significance. However, two problems remain in current driving fatigue detection methods: a single source of information cannot precisely reflect the driver's actual state across different fatigue phases, and detection performs poorly, or fails entirely, under abnormal illumination. In this paper, multi-task cascaded convolutional networks (MTCNN) and infrared-based remote photoplethysmography (rPPG) are used to extract the driver's facial and physiological information, the fatigue-specific information in each modality is mined in depth, and a multi-modal feature fusion model is constructed to comprehensively analyze the driver's fatigue trend. To address low detection accuracy under abnormal illumination, the multi-modal features extracted from visible-light and infrared images are fused by a multi-loss reconstruction (MLR) module, and a driving fatigue detection module based on a Bi-LSTM model is established to exploit the temporal dynamics of fatigue. The experiments were validated under all-weather illumination scenarios on the NTHU-DDD, UTA-RLDDD and FAHD datasets. The results show that the multi-modal driving fatigue detection model outperforms the single-modal model, improving accuracy by 8.1%. Under abnormal illumination such as strong and weak light, the accuracy of the method reaches 91.7% at the highest and 83.6% at the lowest; under normal illumination, it reaches 93.2%.
Affiliation(s)
- Siyang Hu
- School of Electronic Information and Electrical Engineering, Yangtze University, Jingzhou, 434023, China
- Qihuang Gao
- School of Electronic Information and Electrical Engineering, Yangtze University, Jingzhou, 434023, China
- Kai Xie
- School of Electronic Information and Electrical Engineering, Yangtze University, Jingzhou, 434023, China
- Chang Wen
- School of Computer Science, Yangtze University, Jingzhou, 434023, China
- Wei Zhang
- School of Electronic Information, Central South University, Changsha, 410004, China
- Jianbiao He
- School of Computer Science, Central South University, Changsha, 410083, China
2
Choi CH, Han J, Cha J, Choi H, Shin J, Kim T, Oh HW. Contrast Enhancement Method Using Region-Based Dynamic Clipping Technique for LWIR-Based Thermal Camera of Night Vision Systems. Sensors (Basel) 2024; 24:3829. [PMID: 38931613] [PMCID: PMC11207256] [DOI: 10.3390/s24123829]
Abstract
In the autonomous driving industry, there is a growing trend to employ long-wave infrared (LWIR)-based uncooled thermal-imaging cameras, capable of robustly collecting data even in extreme environments. Consequently, both industry and academia are actively researching contrast-enhancement techniques to improve the quality of LWIR-based thermal-imaging cameras. However, most published results showcase experiments only on mass-produced products that already incorporate contrast-enhancement techniques. Put differently, there is a lack of experimental data on contrast enhancement applied after the non-uniformity correction (NUC) and temperature compensation (TC) processes that generate the images seen in final products. To bridge this gap, we propose a histogram equalization (HE)-based contrast enhancement method that incorporates a region-based clipping technique, and we present experimental results on images obtained after the NUC and TC processes. We conducted both visual and quantitative performance evaluations on these images. The visual evaluation confirmed that the proposed method improves image clarity and contrast ratio compared with conventional HE-based methods, even in challenging driving scenarios such as tunnels. In the quantitative evaluation, the proposed method achieved upper-middle rankings in both image quality and processing speed metrics. Our method therefore proves effective for the essential contrast-enhancement stage in LWIR-based uncooled thermal-imaging cameras intended for autonomous driving platforms.
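The clipping-before-equalization idea this abstract builds on can be sketched in a few lines. This is a global-clip illustration only: the paper's method uses region-based dynamic clip limits, and the clip fraction here is an assumed parameter, not a value from the paper.

```python
import numpy as np

def clipped_hist_equalize(img, clip_frac=0.02):
    """Histogram equalization with a global clip limit: histogram bins
    above the limit are truncated and the excess mass is redistributed
    uniformly, which tames the over-amplification plain HE causes in
    large flat regions (e.g. uniform thermal backgrounds)."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    clip = clip_frac * img.size                      # clip limit in pixel counts
    excess = np.maximum(hist - clip, 0).sum()        # mass removed above the limit
    hist = np.minimum(hist, clip) + excess / 256.0   # redistribute uniformly
    cdf = np.cumsum(hist)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)  # equalization mapping
    return lut[img]
```

Redistributing the clipped histogram mass before building the CDF is what keeps plain HE from blowing up noise in uniform regions, the failure mode the paper targets for thermal imagery.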
Affiliation(s)
- Cheol-Ho Choi
- Pangyo R&D Center, Hanwha Systems Co., Ltd., 188, Pangyoyeok-ro, Bundang-gu, Seongnam-si 13524, Gyeonggi-do, Republic of Korea; (J.H.); (J.C.); (H.C.); (J.S.); (T.K.); (H.W.O.)
3
Zhang F, Liu X, Gao C, Sang N. Color and Luminance Separated Enhancement for Low-Light Images with Brightness Guidance. Sensors (Basel) 2024; 24:2711. [PMID: 38732817] [PMCID: PMC11086088] [DOI: 10.3390/s24092711]
Abstract
Existing Retinex-based low-light image enhancement strategies focus heavily on crafting complex networks for Retinex decomposition but often produce imprecise estimations. To overcome the limitations of previous methods, we introduce a straightforward yet effective strategy for Retinex decomposition, dividing images into colormaps and graymaps as new estimations of the reflectance and illumination maps. These maps are then enhanced separately using a diffusion model for improved restoration. Furthermore, we address the dual challenge of perturbation removal and brightness adjustment in the illumination maps by incorporating brightness guidance, which aids in precisely adjusting the brightness while eliminating disturbances and ensures a more effective enhancement process. Extensive quantitative and qualitative experimental analyses demonstrate that our proposed method improves performance by approximately 4.4% on the LOL dataset compared with other state-of-the-art diffusion-based methods, while also validating the model's generalizability across multiple real-world datasets.
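The colormap/graymap split described above can be illustrated with a minimal sketch. The mean-based graymap and ratio colormap below are assumptions for illustration; the paper's exact definitions, and its diffusion-based enhancement of each map, are not reproduced here.

```python
import numpy as np

def split_color_luminance(img):
    """Split an RGB image (float values in [0, 1]) into a graymap
    (per-pixel mean intensity, standing in for illumination) and a
    colormap (the chromatic ratio image, standing in for reflectance)."""
    eps = 1e-6
    graymap = img.mean(axis=2, keepdims=True)  # H x W x 1 luminance proxy
    colormap = img / (graymap + eps)           # per-channel ratios, roughly illumination-free
    return colormap, graymap

def recombine(colormap, graymap):
    """Inverse of the split: reconstruct the image from both maps."""
    return colormap * graymap
```

Because the two maps multiply back to the original, each can be restored by its own model (here, the paper's separate diffusion enhancements) and recombined without re-estimating the other.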
Affiliation(s)
- Changxin Gao
- Key Laboratory of Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China; (F.Z.); (X.L.); (N.S.)
4
Li T. Restoration of UAV-Based Backlit Images for Geological Mapping of a High-Steep Slope. Sensors (Basel) 2024; 24:1586. [PMID: 38475123] [DOI: 10.3390/s24051586]
Abstract
Unmanned aerial vehicle (UAV)-based geological mapping is important for understanding the geological structure of high-steep slopes, but images obtained in these areas are inevitably affected by backlighting because of the undulating terrain and the changing viewpoint of the UAV-mounted camera. To address this, a novel backlit image restoration method is proposed that accounts for the real-world application and corrects the color distortion present in backlit images captured in high-steep slope scenes. The method has two main steps: backlit removal, which eliminates the backlit effect using a Retinex strategy, followed by color and detail enhancement, which improves image color and sharpness. The author designs extensive comparison experiments from multiple angles and applies the proposed method to different engineering applications. The experimental results show that the proposed method compares favorably with other mainstream methods in both qualitative visual effects and standard quantitative evaluation metrics. Backlit images processed by the proposed method show significantly improved feature key-point matching, which is very conducive to constructing fine 3D geological models of high-steep slopes.
Affiliation(s)
- Tengyue Li
- Key Laboratory of Geophysical Exploration Equipment, Ministry of Education of China, Jilin University, 938 West Democracy Street, Changchun 130026, China
- College of Construction Engineering, Jilin University, 938 West Democracy Street, Changchun 130026, China
- Badong National Observation and Research Station of Geohazards, China University of Geosciences, Wuhan 430074, China
5
Feng W, Wu G, Zhou S, Li X. Low-light image enhancement based on Retinex-Net with color restoration. Appl Opt 2023; 62:6577-6584. [PMID: 37706788] [DOI: 10.1364/ao.491768]
Abstract
Low-light images often suffer from degradations such as loss of detail, color distortion, and prominent noise. In this paper, a Retinex-Net model and a loss function with color restoration are proposed to reduce color distortion in low-light image enhancement. The model trains a decom-net and a color recovery-net to perform, respectively, decomposition of low-light images and color restoration of the reflectance images. First, a convolutional neural network and the designed loss functions are used in the decom-net to decompose the input low-light/normal-light image pair into optimal reflectance and illumination images, with the reflectance decomposed from the normal-light image serving as the label. Then, an end-to-end color recovery network with reduced model size and time complexity is learned and combined with the color recovery loss function to obtain a corrected reflectance map with higher perceptual quality, while gamma correction is applied to the decomposed illumination image. Finally, the corrected reflectance image and the illumination image are recombined to produce the enhanced image. Experimental results show that the proposed network model yields lower brightness-order-error (LOE) and natural image quality evaluator (NIQE) values: the average LOE and NIQE values on the low-light dataset images can be reduced to 942 and 6.42, respectively, a significant improvement in image quality over other low-light enhancement methods. Overall, our method effectively improves image illuminance and restores color information in an end-to-end learning process for low-light images.
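The final step the abstract describes (gamma-correct the illumination, then recombine with the restored reflectance) can be sketched as follows, assuming a Retinex decomposition I = R * L is already available; the learned decom-net and color recovery-net are not reproduced here, and the default gamma is an assumption.

```python
import numpy as np

def enhance_from_decomposition(reflectance, illumination, gamma=2.2):
    """Given a Retinex decomposition I = R * L (float arrays in [0, 1]),
    brighten the result by gamma-correcting the illumination layer and
    multiplying it back with the reflectance layer."""
    adjusted = np.clip(illumination, 1e-6, 1.0) ** (1.0 / gamma)  # lift dark illumination
    return np.clip(reflectance * adjusted, 0.0, 1.0)              # recombined enhanced image
```

Raising the illumination to 1/gamma lifts dark regions strongly while leaving well-lit regions (L near 1) nearly unchanged, which is why gamma correction is applied to L rather than to the full image.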
6
Tian J, Zhang J. A Zero-Shot Low Light Image Enhancement Method Integrating Gating Mechanism. Sensors (Basel) 2023; 23:7306. [PMID: 37631842] [PMCID: PMC10458961] [DOI: 10.3390/s23167306]
Abstract
Photographs taken under poor ambient lighting can suffer from several image quality degradations due to insufficient exposure, including reduced brightness, loss of detail, noise, and color distortion. To solve these problems, researchers have proposed many deep learning-based methods for improving image illumination, but most existing methods face the difficulty of obtaining paired training data. In this context, a zero-reference image enhancement network for low-light conditions is proposed in this paper. First, an improved Encoder-Decoder structure extracts image features to generate feature maps, from which a parameter matrix of enhancement factors is produced. An enhancement curve is then constructed from this parameter matrix, and the image is iteratively enhanced using the curve and its parameters. Second, because the algorithm is unsupervised, training requires no-reference image loss functions: four no-reference losses are introduced to train the parameter estimation network. Experiments on several datasets containing only low-light images show that the proposed network improves on other methods in the NIQE, PIQE, and BRISQUE no-reference evaluation indices, and ablation experiments on the key components prove the effectiveness of the method. The performance of the method on PC and mobile devices is also investigated and analyzed, demonstrating its feasibility in practical applications.
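The iterative curve-based enhancement the abstract describes follows the zero-reference (Zero-DCE) family. A minimal sketch of the curve-application step, with the curve parameters supplied directly rather than predicted by the network as in the paper:

```python
import numpy as np

def apply_enhancement_curve(img, alphas):
    """Iteratively apply the quadratic enhancement curve
    LE(x) = x + a * x * (1 - x) used by zero-reference methods such as
    Zero-DCE. `alphas` holds one curve parameter (scalar or per-pixel
    map) per iteration; in the paper these come from the network."""
    x = np.clip(np.asarray(img, dtype=np.float64), 0.0, 1.0)
    for a in alphas:
        # 0 and 1 are fixed points, so the output range is preserved for a in [-1, 1]
        x = x + a * x * (1.0 - x)
    return x
```

Stacking several small curve steps gives a flexible monotone brightening map without ever needing a paired ground-truth image, which is what makes the zero-reference training setup possible.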
Affiliation(s)
- Jianwei Zhang
- School of Computer Science, Sichuan University, Chengdu 610065, China
7
Liu R, Ma L, Ma T, Fan X, Luo Z. Learning With Nested Scene Modeling and Cooperative Architecture Search for Low-Light Vision. IEEE Trans Pattern Anal Mach Intell 2023; 45:5953-5969. [PMID: 36215366] [DOI: 10.1109/tpami.2022.3212995]
Abstract
Images captured in low-light scenes often suffer from severe degradations, including low visibility, color casts, intense noise, etc. These factors not only degrade image quality but also hurt the performance of downstream Low-Light Vision (LLV) applications. A variety of deep networks have been proposed to enhance the visual quality of low-light images; however, they mostly rely on significant architecture engineering and often carry a high computational burden. More importantly, the field still lacks an efficient paradigm that uniformly handles the various tasks in LLV scenarios. To partially address these issues, we establish Retinex-inspired Unrolling with Architecture Search (RUAS), a general learning framework that addresses the low-light enhancement task and has the flexibility to handle other challenging downstream vision tasks. Specifically, we first establish a nested optimization formulation, together with an unrolling strategy, to explore the underlying principles of a series of LLV tasks. Furthermore, we design a differentiable strategy to cooperatively search scene- and task-specific architectures for RUAS. Last but not least, we demonstrate how to apply RUAS to both low- and high-level LLV applications (e.g., enhancement, detection, and segmentation). Extensive experiments verify the flexibility, effectiveness, and efficiency of RUAS.
8
Leng H, Fang B, Zhou M, Wu B, Mao Q. Low-Light Image Enhancement with Contrast Increase and Illumination Smooth. Int J Pattern Recogn 2023; 37. [DOI: 10.1142/s0218001423540034]
Abstract
In image enhancement, maintaining texture while attenuating noise is worth discussing. To address these problems, we propose a low-light image enhancement method with contrast increase and illumination smoothing. First, we calculate the per-pixel maximum and minimum maps of the RGB channels, set the maximum map as the initial illumination estimate, and introduce the minimum map to smooth the illumination. Second, we use the histogram-equalized version of the input image to construct the weight for the illumination map. Third, we formulate an optimization problem to obtain the smoothed illumination and refined reflectance. Experimental results show that our method achieves better performance than state-of-the-art methods.
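The first step above (max/min channel maps as the illumination initialization and smoothing guide) is easy to sketch; the subsequent weighted optimization that refines illumination and reflectance is the paper's contribution and is not reproduced here.

```python
import numpy as np

def channel_extrema_maps(img):
    """Per-pixel maximum and minimum over the RGB channels of a float
    image in [0, 1]. The abstract uses the maximum map as the initial
    illumination estimate and the minimum map to guide its smoothing."""
    img = np.asarray(img, dtype=np.float64)
    max_map = img.max(axis=2)   # initial illumination estimate
    min_map = img.min(axis=2)   # smoothing guidance
    return max_map, min_map
```

The max map upper-bounds every channel, so dividing the image by it never pushes reflectance above 1, which is why it is a common initial illumination estimate in Retinex-style methods.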
Affiliation(s)
- Hongyue Leng
- College of Computer Science, Chongqing University, Chongqing 400044, P. R. China
- Bin Fang
- College of Computer Science, Chongqing University, Chongqing 400044, P. R. China
- Mingliang Zhou
- College of Computer Science, Chongqing University, Chongqing 400044, P. R. China
- Bin Wu
- Aerospace Science and Technology Industry, Microelectronics System Institute Co., Ltd., No. 269, North Section of Hupan Road, Chengdu, Sichuan 610213, P. R. China
- Qin Mao
- School of Computer and Information, Qiannan Normal College for Nationalities, Doupengshan Road, Duyun, Guizhou 558000, P. R. China
- Key Laboratory of Complex Systems and Intelligent Optimization of Guizhou Province, Duyun, Guizhou 558000, P. R. China
9
Khan RA, Luo Y, Wu FX. Multi-level GAN based enhanced CT scans for liver cancer diagnosis. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104450]
10
Guo J, Ma J, García-Fernández ÁF, Zhang Y, Liang H. A survey on image enhancement for low-light images. Heliyon 2023; 9:e14558. [PMID: 37025779] [PMCID: PMC10070385] [DOI: 10.1016/j.heliyon.2023.e14558]
Abstract
In real scenes, low light and unsuitable views often cause images to exhibit a variety of degradations, such as low contrast, color distortion, and noise. These degradations affect not only visual quality but also computer vision tasks. This paper focuses on the combination of traditional algorithms and machine learning algorithms in the field of image enhancement. The traditional methods, including their principles and improvements, are introduced in three categories: gray-level transformation, histogram equalization, and Retinex methods. Machine learning-based algorithms are divided into end-to-end learning and unpaired learning, and are further grouped into decomposition-based learning and fusion-based learning according to the image processing strategies they apply. Finally, the surveyed methods are comprehensively compared using multiple image quality assessment measures, including mean square error, natural image quality evaluator, structural similarity, and peak signal-to-noise ratio.
Affiliation(s)
- Jiawei Guo
- Department of Computer Science, University of Liverpool, Liverpool, UK
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University (XJTLU), Suzhou, China
- Jieming Ma
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University (XJTLU), Suzhou, China
- Corresponding author.
- Ángel F. García-Fernández
- Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool, UK
- ARIES research center, Universidad Antonio de Nebrija, Madrid, Spain
- Yungang Zhang
- School of Information Science, Yunnan Normal University, Kunming, China
- Haining Liang
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University (XJTLU), Suzhou, China
11
Zhang J, Wang Z, He Y. Dataset artificial augmentation with a small number of training samples for reflectance estimation. Opt Express 2023; 31:8005-8019. [PMID: 36859919] [DOI: 10.1364/oe.479723]
Abstract
The accuracy of spectral reflectance estimation approaches depends strongly on the amount, coverage, and representativeness of valid samples in the training dataset. We present a dataset augmentation approach that artificially expands a small number of actual training samples by tuning the light-source spectra. Reflectance estimation is then carried out with our augmented color samples on commonly used datasets (IES, Munsell, Macbeth, Leeds). Finally, the impact of the number of augmented color samples is investigated by varying it. The results show that the proposed approach can artificially augment the 140 CCSG color samples to 13,791 color samples and beyond. Reflectance estimation performance with the augmented color samples is much higher than with the benchmark CCSG dataset on all tested datasets (IES, Munsell, Macbeth, Leeds, as well as a real-scene hyperspectral reflectance database), indicating that the proposed dataset augmentation approach is practical for improving reflectance estimation.
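A schematic reading of the light-source-spectra-tuning augmentation might look like the following. The tuning curves, the multiplicative illuminant model, and the sample format are all assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

def augment_by_illuminant_tuning(reflectances, base_illuminant, scales):
    """Generate augmented training samples by tuning the light-source
    spectrum: each scale curve reshapes the base illuminant SPD, and
    every (reflectance, tuned illuminant) pair yields one new synthetic
    per-wavelength radiance sample."""
    samples = []
    for s in scales:                                   # one tuned SPD per scale curve
        tuned = base_illuminant * s                    # reshaped illuminant spectrum
        samples.append(reflectances * tuned[None, :])  # radiance = reflectance x illuminant
    return np.concatenate(samples, axis=0)
```

With N reflectances and K tuning curves, the augmented set has N*K samples, which is how a small measured set (e.g. the 140 CCSG patches) can be expanded by orders of magnitude.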
12
Han R, Tang C, Xu M, Lei Z. A Retinex-based variational model for noise suppression and nonuniform illumination correction in corneal confocal microscopy images. Phys Med Biol 2023; 68. [PMID: 36577141] [DOI: 10.1088/1361-6560/acaeef]
Abstract
Objective. Corneal confocal microscopy (CCM) is a non-invasive in vivo clinical imaging technique that can quantify corneal nerve fiber damage. However, the acquired CCM images are often accompanied by speckle noise and nonuniform illumination, which seriously affect the analysis and diagnosis of disease. Approach. In this paper, we first propose a variational Retinex model for inhomogeneity correction and noise removal in CCM images. In this model, the Beppo Levi space is introduced for the first time to constrain the smoothness of the illumination layer, and a fractional-order differential is adopted as the regularization term constraining the reflectance layer. A denoising regularization term built on Block-Matching 3D (BM3D) is also constructed to suppress noise. Finally, by adjusting the uneven illumination layer, we obtain the final results. Second, an image quality evaluation metric is proposed to objectively evaluate the illumination uniformity of images. Main results. To demonstrate its effectiveness, the proposed method is tested on 628 low-quality CCM images from the CORN-2 dataset. Extensive experiments show that it outperforms four related methods in terms of noise removal and uneven illumination suppression. Significance. This suggests the proposed method may be helpful for the diagnosis and analysis of eye diseases.
Affiliation(s)
- Rui Han
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Chen Tang
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Min Xu
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Zhenkun Lei
- State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian 116024, People's Republic of China
13
Han R, Tang C, Xu M, Liang B, Wu T, Lei Z. Enhancement method with naturalness preservation and artifact suppression based on an improved Retinex variational model for color retinal images. J Opt Soc Am A Opt Image Sci Vis 2023; 40:155-164. [PMID: 36607085] [DOI: 10.1364/josaa.474020]
Abstract
Retinal images are widely used for the diagnosis of various diseases. However, low-quality retinal images with uneven illumination, low contrast, or blurring may seriously interfere with diagnosis by ophthalmologists. This study proposes an enhancement method for low-quality color retinal images. First, an improved variational Retinex model for color retinal images is proposed and applied to each channel of the RGB color space to obtain the illumination and reflectance layers. Subsequently, the Naka-Rushton equation is introduced to correct the illumination layer, and an enhancement operator is constructed to improve the clarity of the reflectance layer. Finally, the corrected illumination and enhanced reflectance are recombined, and contrast-limited adaptive histogram equalization is introduced to further improve clarity and contrast. To demonstrate its effectiveness, the method is tested on 527 images from four publicly available datasets and 40 local clinical images from Tianjin Eye Hospital (China). Experimental results show that the proposed method outperforms four other enhancement methods and has obvious advantages in naturalness preservation and artifact suppression.
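The Naka-Rushton equation mentioned in the abstract has the standard form V = I^n / (I^n + sigma^n). A minimal sketch follows; defaulting the semi-saturation constant sigma to the mean illumination is a common convention assumed here, not necessarily the paper's choice.

```python
import numpy as np

def naka_rushton(illumination, sigma=None, n=1.0):
    """Naka-Rushton response V = I^n / (I^n + sigma^n), used here to
    compress an uneven illumination layer: inputs near sigma map to
    about 0.5, dark values are lifted, and bright values saturate."""
    I = np.asarray(illumination, dtype=np.float64)
    if sigma is None:
        sigma = I.mean()            # semi-saturation constant (assumed default)
    return I**n / (I**n + sigma**n + 1e-12)
```

Because the curve is monotone and bounded in [0, 1), it evens out illumination without inverting local ordering, which preserves the naturalness the paper emphasizes.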
14
Li C, Guo C, Han L, Jiang J, Cheng MM, Gu J, Loy CC. Low-Light Image and Video Enhancement Using Deep Learning: A Survey. IEEE Trans Pattern Anal Mach Intell 2022; 44:9396-9416. [PMID: 34752382] [DOI: 10.1109/tpami.2021.3126387]
Abstract
Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination. Recent advances in this area are dominated by deep learning-based solutions, where many learning strategies, network structures, loss functions, training data, etc. have been employed. In this paper, we provide a comprehensive survey to cover various aspects ranging from algorithm taxonomy to unsolved open issues. To examine the generalization of existing methods, we propose a low-light image and video dataset, in which the images and videos are taken by different mobile phones' cameras under diverse illumination conditions. Besides, for the first time, we provide a unified online platform that covers many popular LLIE methods, of which the results can be produced through a user-friendly web interface. In addition to qualitative and quantitative evaluation of existing methods on publicly available and our proposed datasets, we also validate their performance in face detection in the dark. This survey together with the proposed dataset and online platform could serve as a reference source for future study and promote the development of this research field. The proposed platform and dataset as well as the collected methods, datasets, and evaluation metrics are publicly available and will be regularly updated. Project page: https://www.mmlab-ntu.com/project/lliv_survey/index.html.
15
Lecca M, Gianini G, Serapioni RP. Mathematical insights into the original Retinex algorithm for image enhancement. J Opt Soc Am A Opt Image Sci Vis 2022; 39:2063-2072. [PMID: 36520703] [DOI: 10.1364/josaa.471953]
Abstract
The Retinex theory, originally developed by Land and McCann as a computation model of the human color sensation, has become, with time, a pillar of digital image enhancement. In this area, the Retinex algorithm is widely used to improve the quality of any input image by increasing the visibility of its content and details, enhancing its colorfulness, and weakening, or even removing, some undesired effects of the illumination. The algorithm was originally described by its creators in terms of a sequence of image processing operations and was not fully formalized mathematically. Later, works focusing on aspects of the original formulation and adopting some of its principles tried to frame the algorithm within a mathematical formalism: this yielded every time a partial rendering of the model and resulted in several interesting model variants. The purpose of the present work is to fill a gap in the Retinex-related literature by providing a complete mathematical formalization of the original Retinex algorithm. The overarching goals of this work are to provide mathematical insights into the Retinex theory, promote awareness of the use of the model within image enhancement, and enable better appreciation of differences and similarities with later models based on Retinex principles. For this purpose, we compare our model with others proposed in the literature, paying particular attention to the work published in 2005 by Provenzi and others.
16
Ma L, Liu R, Zhang J, Fan X, Luo Z. Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement. IEEE Trans Neural Netw Learn Syst 2022; 33:5666-5680. [PMID: 33929967] [DOI: 10.1109/tnnls.2021.3071245]
Abstract
Enhancing the quality of low-light (LOL) images plays a very important role in many image processing and multimedia applications. In recent years, a variety of deep learning techniques have been developed to address this challenging task. A typical framework simultaneously estimates the illumination and reflectance, but disregards the scene-level contextual information encapsulated in feature spaces, causing many unfavorable outcomes, e.g., detail loss, color desaturation, and artifacts. To address these issues, we develop a new context-sensitive decomposition network (CSDNet) architecture to exploit scene-level contextual dependencies across spatial scales. More concretely, we build a two-stream estimation mechanism comprising reflectance and illumination estimation networks. We design a novel context-sensitive decomposition connection to bridge the two streams by incorporating the physical principle. Spatially varying illumination guidance is further constructed to achieve the edge-aware smoothness property of the illumination component. According to different training patterns, we construct CSDNet (paired supervision) and a context-sensitive decomposition generative adversarial network (CSDGAN, unpaired supervision) to fully evaluate the designed architecture. We test our method on seven benchmarks (including MIT-Adobe FiveK, LOL, ExDark, and naturalness preserved enhancement (NPE)) through extensive analytical and evaluative experiments. Thanks to the designed context-sensitive decomposition connection, we achieve excellent enhanced results (with sufficient detail, vivid colors, and little noise), which indicates our superiority over existing state-of-the-art approaches. Finally, considering the practical need for high efficiency, we develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels. Furthermore, by sharing an encoder between the two components, we obtain an even lighter version (SLiteCSDNet for short). SLiteCSDNet contains just 0.0301M parameters yet achieves almost the same performance as CSDNet. Code is available at https://github.com/KarelZhang/CSDNet-CSDGAN.
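The Retinex decomposition underlying this entry and several below can be sketched minimally. This is a generic single-scale sketch with a box-filter illumination estimate and illustrative helper names, not CSDNet's learned two-stream network:

```python
import numpy as np

def box_blur(img, k=5):
    # separable box filter: a crude stand-in for a smoothness prior on illumination
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, blurred)
    return blurred

def retinex_decompose(img, eps=1e-6):
    # Retinex model: I = R * L; estimate L as a smoothed I, then recover R = I / L
    L = np.clip(box_blur(img), eps, None)
    R = img / L
    return R, L
```

Enhancement methods in this family then adjust the estimated illumination L (e.g., gamma correction) and recompose R * L_adjusted.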
|
17
|
Wang X, Hu R, Xu X. Single low-light image brightening using learning-based intensity mapping. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.08.042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/15/2022]
|
18
|
Zhuang P, Wu J, Porikli F, Li C. Underwater Image Enhancement With Hyper-Laplacian Reflectance Priors. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:5442-5455. [PMID: 35947571 DOI: 10.1109/tip.2022.3196546] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Underwater image enhancement aims to improve the visibility and eliminate the color distortions of underwater images degraded by light absorption and scattering in water. Recently, retinex variational models have shown a remarkable capacity for enhancing images by estimating reflectance and illumination through a retinex decomposition. However, ambiguous details and unnatural color still challenge the performance of retinex variational models on underwater image enhancement. To overcome these limitations, we propose a retinex variational model inspired by hyper-Laplacian reflectance priors to enhance underwater images. Specifically, the hyper-Laplacian reflectance priors are established with an l1/2-norm penalty on the first-order and second-order gradients of the reflectance. Such priors yield a sparsity-promoting, comprehensive reflectance that enhances both salient structures and fine-scale details and recovers the naturalness of authentic colors. Besides, the l2 norm is found to be suitable for accurately estimating the illumination. As a result, we turn a complex underwater image enhancement problem into simple subproblems that separately and simultaneously estimate the reflectance and the illumination, which are harnessed to enhance underwater images in a retinex variational model. We mathematically analyze and solve for the optimal solution of each subproblem. In the optimization, we develop an alternating minimization algorithm that relies on efficient element-wise operations and is independent of additional prior knowledge of underwater conditions. Extensive experiments demonstrate the superiority of the proposed method over existing methods in both subjective results and objective assessments. The code is available at: https://github.com/zhuangpeixian/HLRP.
|
19
|
Li X, Shang J, Song W, Chen J, Zhang G, Pan J. Low-Light Image Enhancement Based on Constraint Low-Rank Approximation Retinex Model. SENSORS (BASEL, SWITZERLAND) 2022; 22:6126. [PMID: 36015886 PMCID: PMC9412568 DOI: 10.3390/s22166126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/30/2022] [Revised: 08/11/2022] [Accepted: 08/12/2022] [Indexed: 06/15/2023]
Abstract
Images captured in a low-light environment are strongly influenced by noise and low contrast, which is detrimental to tasks such as image recognition and object detection. Retinex-based approaches have been continuously explored for low-light enhancement. Nevertheless, Retinex decomposition is a highly ill-posed problem. The estimation of the decomposed components should be combined with proper constraints. Meanwhile, the noise mixed in the low-light image causes unpleasant visual effects. To address these problems, we propose a Constraint Low-Rank Approximation Retinex model (CLAR). In this model, two exponential relative total variation constraints were imposed to ensure that the illumination is piece-wise smooth and that the reflectance component is piece-wise continuous. In addition, the low-rank prior was introduced to suppress the noise in the reflectance component. With a tailored separated alternating direction method of multipliers (ADMM) algorithm, the illumination and reflectance components were updated accurately. Experimental results on several public datasets verify the effectiveness of the proposed model subjectively and objectively.
|
20
|
Lin Q, Zheng Z, Jia X. UHD Low-light image enhancement via interpretable bilateral learning. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.07.051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
21
|
Liu R, Ma L, Zhang Y, Fan X, Luo Z. Underexposed Image Correction via Hybrid Priors Navigated Deep Propagation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:3425-3436. [PMID: 33513118 DOI: 10.1109/tnnls.2021.3052903] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Enhancing the visual quality of underexposed images is a task of broad concern that plays an important role in many areas of multimedia and computer vision. Most existing methods fail to generate high-quality results with appropriate luminance and abundant details. To address these issues, we develop a novel framework for underexposed image correction that integrates both knowledge from physical principles and implicit distributions learned from data. More concretely, we propose a new perspective that formulates this task as an energy-inspired model with advanced hybrid priors. A propagation procedure navigated by the hybrid priors is designed to simultaneously propagate the reflectance and illumination toward the desired results. We conduct extensive experiments to verify the necessity of integrating both underlying principles (i.e., knowledge) and distributions (i.e., from data) in the navigated deep propagation. Extensive experimental results on underexposed image correction demonstrate that our proposed method performs favorably against state-of-the-art methods on both subjective and objective assessments. In addition, we apply the framework to face detection to further verify the naturalness and practical value of underexposed image correction. Moreover, we apply our method to single-image haze removal, where the experimental results further demonstrate its advantages.
|
22
|
Luo J, Ren W, Wang T, Li C, Cao X. Under-Display Camera Image Enhancement via Cascaded Curve Estimation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:4856-4868. [PMID: 35709110 DOI: 10.1109/tip.2022.3182278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The new trend of full-screen devices encourages manufacturers to position the camera behind the screen, i.e., the newly defined Under-Display Camera (UDC). UDC image restoration has therefore become a new, realistic single-image enhancement problem. In this work, we propose a curve estimation network operating on the hue (H) and saturation (S) channels to perform adaptive enhancement of degraded images captured by UDCs. The proposed network aims to match the complicated relationship between images captured by under-display and display-free cameras. To extract effective features, we cascade the proposed curve estimation network with shared weights, and we introduce a spatial and channel attention module in each curve estimation network to exploit attention-aware features. In addition, we learn the curve estimation network in a semi-supervised manner to relax the requirement for large amounts of labeled images and to improve generalization to unseen degraded images in various realistic scenes. The semi-supervised network consists of a supervised branch trained on labeled data and an unsupervised branch trained on unlabeled data. To train the proposed model, we build a new dataset comprising real-world labeled and unlabeled images. Extensive experiments demonstrate that our proposed algorithm performs favorably against state-of-the-art image enhancement methods for UDC images in terms of accuracy and speed, especially on ultra-high-definition (UHD) images.
|
23
|
Joint-Prior-Based Uneven Illumination Image Enhancement for Surface Defect Detection. Symmetry (Basel) 2022. [DOI: 10.3390/sym14071473] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Images in real surface defect detection scenes often suffer from uneven illumination. Retinex-based image enhancement methods can effectively eliminate the interference caused by uneven illumination and improve the visual quality of such images. However, these methods suffer from the loss of defect-discriminative information and a high computational burden. To address the above issues, we propose a joint-prior-based uneven illumination enhancement (JPUIE) method. Specifically, a semi-coupled retinex model is first constructed to accurately and effectively eliminate uneven illumination. Furthermore, a multiscale Gaussian-difference-based background prior is proposed to reweight the data consistency term, thereby avoiding the loss of defect information in the enhanced image. Last, by using the powerful nonlinear fitting ability of deep neural networks, a deep denoised prior is proposed to replace existing physics priors, effectively reducing the time consumption. Various experiments are carried out on public and private datasets, which are used to compare the defect images and enhanced results in a symmetric way. The experimental results demonstrate that our method is more conducive to downstream visual inspection tasks than other methods.
|
24
|
Kumar R, Bhandari AK. Spatial mutual information based detail preserving magnetic resonance image enhancement. Comput Biol Med 2022; 146:105644. [DOI: 10.1016/j.compbiomed.2022.105644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Revised: 05/08/2022] [Accepted: 05/14/2022] [Indexed: 11/28/2022]
|
25
|
Ahn S, Shin J, Lim H, Lee J, Paik J. CODEN: combined optimization-based decomposition and learning-based enhancement network for Retinex-based brightness and contrast enhancement. OPTICS EXPRESS 2022; 30:23608-23621. [PMID: 36225037 DOI: 10.1364/oe.459063] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Accepted: 06/04/2022] [Indexed: 06/16/2023]
Abstract
In this paper, we present a novel low-light image enhancement method by combining optimization-based decomposition and enhancement network for simultaneously enhancing brightness and contrast. The proposed method works in two steps including Retinex decomposition and illumination enhancement, and can be trained in an end-to-end manner. The first step separates the low-light image into illumination and reflectance components based on the Retinex model. Specifically, it performs model-based optimization followed by learning for edge-preserved illumination smoothing and detail-preserved reflectance denoising. In the second step, the illumination output from the first step, together with its gamma corrected and histogram equalized versions, serves as input to illumination enhancement network (IEN) including residual squeeze and excitation blocks (RSEBs). Extensive experiments prove that our method shows better performance compared with state-of-the-art low-light enhancement methods in the sense of both objective and subjective measures.
|
26
|
A predictive intelligence approach for low-light enhancement. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109168] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|
27
|
Ganesan A, Santhanam SM. A novel feature descriptor based coral image classification using extreme learning machine with ameliorated chimp optimization algorithm. ECOL INFORM 2022. [DOI: 10.1016/j.ecoinf.2021.101527] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
|
28
|
Xia W, Chen E, Pautler S, Peters T. Laparoscopic image enhancement based on distributed retinex optimization with refined information fusion. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.142] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
29
|
GEVE: A generative adversarial network for extremely dark image/video enhancement. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2021.10.030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
30
|
N2PN: Non-reference two-pathway network for low-light image enhancement. APPL INTELL 2022. [DOI: 10.1007/s10489-021-02627-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
31
|
Han R, Tang C, Xu M, Li J, Lei Z. Joint enhancement and denoising in electronic speckle pattern interferometry fringe patterns with low contrast or uneven illumination via an oriented variational Retinex model. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2022; 39:239-249. [PMID: 35200960 DOI: 10.1364/josaa.433747] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Accepted: 12/21/2021] [Indexed: 06/14/2023]
Abstract
Simultaneous speckle reduction and contrast enhancement for electronic speckle pattern interferometry (ESPI) fringe patterns is a challenging task. In this paper, we propose a joint enhancement and denoising method based on the oriented variational Retinex model for ESPI fringe patterns with low contrast or uneven illumination. In our model, we use the structure prior to constrain the illumination and introduce a fractional-order differential to constrain the reflectance for enhancement, then use the second-order partial derivative of the reflectance as the denoising term to reduce noise. The proposed model is solved using the sequential method to obtain piecewise smoothed illumination and noise-suppressed reflectance sequentially, which avoids remaining noise in the illumination and reflectance map. After obtaining the refined illuminance and reflectance, we substitute the gamma-corrected illuminance into the camera response function to further adjust the reflectance as the final enhancement result. We test our proposed method on two non-uniform illumination computer-simulated and two low-contrast experimentally obtained ESPI fringe patterns. Finally, we compare our method with three other joint enhancement and denoising variational Retinex methods.
|
32
|
Wang W, Wang A, Liu C. Variational Single Nighttime Image Haze Removal With a Gray Haze-Line Prior. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:1349-1363. [PMID: 35025742 DOI: 10.1109/tip.2022.3141252] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Influenced by glowing effects, nighttime haze removal is a challenging ill-posed task. Existing nighttime dehazing methods usually result in glowing artifacts, color shifts, overexposure, and noise amplification. Thus, through statistical and theoretical analyses, we propose a simple and effective gray haze-line prior (GHLP) to identify accurate hazy feature areas. This prior demonstrates that haze is concentrated on the haze line in the RGB color space and can be accurately projected into the gray component in the Y channel of the YUV color space. Based on this prior, we establish a new unified nighttime haze removal framework and then decompose a nighttime hazy image into color and gray components in the YUV color space. Glowing color correction and haze removal are two important consecutive steps in the nighttime dehazing process. The glowing color correction method is designed to separately remove glow in the color component and enhance illumination in the gray component. After obtaining a refined nighttime hazy image, we propose a new structure-aware variational framework to simultaneously estimate the inverted scene radiance and the transmission in the gray component. This approach can not only recover the high-quality nighttime scene radiance but also preserve the significant structural information and intrinsic color of the scene. Quantitative and qualitative comparisons validate the excellent effectiveness of the proposed nighttime dehazing method against previous state-of-the-art methods. In addition, the proposed approach can be extended to achieve image enhancement for inclement weather scenes, such as sandstorm scenes and extreme daytime hazy scenes.
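The color/gray split in YUV space that this prior relies on can be illustrated with a plain BT.601 conversion. The matrix is the standard one, not the paper's exact pipeline, and the function name is illustrative:

```python
import numpy as np

# BT.601 RGB -> YUV matrix: the Y row extracts the gray (luma) component,
# while the U/V rows carry the color component handled separately
M_YUV = np.array([[ 0.299,    0.587,    0.114  ],
                  [-0.14713, -0.28886,  0.436  ],
                  [ 0.615,   -0.51499, -0.10001]])

def rgb_to_yuv(rgb):
    # rgb: array of shape (..., 3) with values in [0, 1]; returns (..., 3) YUV
    return rgb @ M_YUV.T
```

For a neutral gray pixel, U and V vanish and Y equals the gray level, which is why haze concentrated on the gray haze-line projects into the Y channel.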
|
33
|
Low-Light Image Enhancement Under Mixed Noise Model with Tensor Representation. ARTIF INTELL 2022. [DOI: 10.1007/978-3-031-20497-5_48] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
|
34
|
Zhang T, Dong J, Yang L, Liu S, Lu R. Automatic defect inspection of thin film transistor-liquid crystal display panels using robust one-dimensional Fourier reconstruction with non-uniform illumination correction. THE REVIEW OF SCIENTIFIC INSTRUMENTS 2021; 92:103701. [PMID: 34717417 DOI: 10.1063/5.0060636] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Accepted: 09/11/2021] [Indexed: 06/13/2023]
Abstract
Automatic inspection of micro-defects in thin film transistor-liquid crystal display (TFT-LCD) panels is a critical task in LCD manufacturing. To meet the practical demand of online inspection of one-dimensional (1D) line images captured by a line-scan visual system, we propose a robust 1D Fourier reconstruction method that automatically determines the period Δx of the periodic pattern of a spatial-domain line image and the neighboring length r of the frequency peaks of the corresponding frequency-domain line image. Moreover, to alleviate the difficulty of discriminating between defects and a non-uniformly illuminated background, we present an effective way to correct the non-uniform background using robust locally weighted smoothing combined with polynomial curve fitting. As a proof of concept, we built a line-scan visual system and tested the captured line images. The results reveal that the proposed method corrects the non-uniform illumination background in a way that not only avoids false alarms in defect inspection but also preserves complete information about the defects in terms of brightness, darkness, and shape, indicating its distinct advantage in defect inspection of TFT-LCD panels.
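The core idea of reconstructing the periodic background in the frequency domain and flagging defects in the residual can be sketched as follows. This is a simplified version with a hypothetical `keep` parameter; the paper additionally automates the choice of the period Δx and the peak-neighborhood length r:

```python
import numpy as np

def fourier_background(line, keep=3):
    # Keep only the `keep` strongest frequency components to model the
    # periodic background; the residual (line - background) then
    # highlights aperiodic structures such as defects.
    F = np.fft.rfft(line)
    idx = np.argsort(np.abs(F))[::-1][:keep]
    mask = np.zeros_like(F)
    mask[idx] = F[idx]
    return np.fft.irfft(mask, n=len(line))
```

A usage sketch: for a sinusoidal line pattern with one spike defect, the residual `line - fourier_background(line)` peaks at the defect location.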
Affiliation(s)
- Tengda Zhang, Jingtao Dong, Lei Yang, Shanlin Liu, Rongsheng Lu: Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, School of Instrument Science and Optoelectronics Engineering, Hefei University of Technology, Hefei 230009, Anhui, China
|
35
|
Cao W, Wu S, Wang D, Wu J. A High Visibility and SNR Image From One Single-Shot Low-Light Image. IEEE COMPUTER GRAPHICS AND APPLICATIONS 2021; 41:124-137. [PMID: 32078537 DOI: 10.1109/mcg.2020.2972522] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Achieving high visibility and a high signal-to-noise ratio (SNR) from a single-shot image captured in a low-light environment is an under-constrained problem. To cope with this issue, the intrinsic relationship between the image domain and the radiance domain is first established based on the human visual model, the atmospheric scattering model, and the camera imaging model, and the ideal exposure is derived. Using an illumination-reflection-noise prior, a new convex optimization employing a gradient constraint and the Kirsch operator is then presented to estimate the noise-reduced illumination and reflection components. A high-SNR image at the optimal exposure is generated in the radiance domain, which is finally inversely mapped to obtain a high-SNR image in the image domain. Experimental results in subjective and objective tests show that the proposed algorithm achieves a high SNR and pleasant perception in comparison with state-of-the-art methods.
|
36
|
Devignetting fundus images via Bayesian estimation of illumination component and gamma correction. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.06.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
37
|
|
38
|
Deng X, Zhang Y, Wang H, Hu H. Robust underwater image enhancement method based on natural light and reflectivity. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2021; 38:181-191. [PMID: 33690528 DOI: 10.1364/josaa.400199] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Accepted: 12/09/2020] [Indexed: 06/12/2023]
Abstract
The poor visibility of underwater images is caused not only by scattering and absorption effects but also by lighting conditions. To improve robustness, a novel underwater image enhancement method based on natural light and reflectivity is proposed. To address the scattering effects on reflectivity, a dehazing process based on the non-correlation of the foreground scene and the background light is first conducted. A more precise reflectivity can then be estimated by substituting the dehazed image for the captured image. Moreover, classical methods often regard the dehazed image as the final result, ignoring the fact that it still contains attenuated natural light and non-uniform artificial light, which lead to insufficient brightness and halo effects and are not robust across scenes. This observation enables us to remove the artificial light disturbance by introducing the dehazed image into the Lambertian model, and to compensate for the loss of natural light energy by exploiting the light attenuation ratio map. Thus, the least-attenuated natural light can be further derived. Experimental results demonstrate that our method produces more pleasing results under various circumstances.
|
39
|
Xu Y, Yang C, Sun B, Yan X, Chen M. A novel multi-scale fusion framework for detail-preserving low-light image enhancement. Inf Sci (N Y) 2021. [DOI: 10.1016/j.ins.2020.09.066] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
40
|
Attention Guided Retinex Architecture Search for Robust Low-light Image Enhancement. ARTIF INTELL 2021. [DOI: 10.1007/978-3-030-93046-2_38] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
41
|
Low-Light Image Enhancement Based on Quasi-Symmetric Correction Functions by Fusion. Symmetry (Basel) 2020. [DOI: 10.3390/sym12091561] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Sometimes it is very difficult to obtain high-quality images because of the limitations of image-capturing devices and the environment. Gamma correction (GC) is widely used for image enhancement. However, traditional GC may not preserve image details and can even reduce local contrast within high-illuminance regions. Therefore, we first define two pairs of quasi-symmetric correction functions (QCFs) to solve these problems. Moreover, we propose a novel low-light image enhancement method based on the proposed QCFs by fusion, which combines a globally enhanced image produced by the QCFs with a locally enhanced image produced by contrast-limited adaptive histogram equalization (CLAHE). Extensive experimental results showed that our method significantly enhances detail and improves the contrast of low-light images. Our method also outperforms other state-of-the-art methods in both subjective and objective assessments.
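The global-plus-local fusion scheme can be illustrated with plain power-law gamma correction and a convex combination. Global histogram equalization stands in here for CLAHE (which adds tiling and clip limits), and all function names are illustrative, not the paper's QCFs:

```python
import numpy as np

def gamma_correct(img, gamma=2.2):
    # classic power-law correction on a [0, 1] image; 1/gamma < 1 brightens
    return np.power(np.clip(img, 0.0, 1.0), 1.0 / gamma)

def hist_equalize(img, bins=256):
    # global histogram equalization via the empirical CDF
    # (CLAHE additionally works on tiles with contrast clipping)
    hist, edges = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

def fuse(global_enh, local_enh, w=0.5):
    # convex combination of the globally and locally enhanced images
    return w * global_enh + (1.0 - w) * local_enh
```

In fusion methods of this kind, the weight w (or a per-pixel weight map) trades off global brightness lift against local contrast.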
|
42
|
He R, Guo X, Shi Z. SIDE-A Unified Framework for Simultaneously Dehazing and Enhancement of Nighttime Hazy Images. SENSORS 2020; 20:s20185300. [PMID: 32947978 PMCID: PMC7570461 DOI: 10.3390/s20185300] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/03/2020] [Revised: 08/29/2020] [Accepted: 09/08/2020] [Indexed: 11/16/2022]
Abstract
Single-image dehazing is a difficult problem because of its ill-posed nature, and it has received increasing attention recently owing to its high potential in many visual tasks. Although single-image dehazing has made remarkable progress in recent years, existing methods are mainly designed for haze removal in daytime. Nighttime dehazing is more challenging: most daytime methods become invalid due to multiple scattering phenomena and non-uniformly distributed, dim ambient illumination. While a few approaches have been proposed for nighttime image dehazing, low ambient light is largely ignored. In this paper, we propose a novel unified nighttime hazy image enhancement framework that addresses haze removal and illumination enhancement simultaneously. Specifically, both the halo artifacts caused by multiple scattering and the non-uniformly distributed ambient illumination of low-light hazy conditions are considered for the first time in our approach. More importantly, most current daytime dehazing methods can be effectively incorporated into the nighttime dehazing task under our framework. First, we decompose the observed hazy image into a halo layer and a scene layer to remove the influence of multiple scattering. After that, we estimate the spatially varying ambient illumination based on Retinex theory. We then employ classic daytime dehazing methods to recover the scene radiance. Finally, we generate the dehazing result by combining the adjusted ambient illumination and the scene radiance. Compared with various daytime dehazing methods and state-of-the-art nighttime dehazing methods, both quantitative and qualitative experimental results on real-world and synthetic hazy image datasets demonstrate the superiority of our framework in terms of halo mitigation, visibility improvement, and color preservation.
Affiliation(s)
- Renjie He (Correspondence), Zhongke Shi: School of Automation, Northwestern Polytechnical University, Xi’an 710129, China
- Xintao Guo: School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China
|
43
|
Zhao Y, Zhang J, Pereira E, Zheng Y, Su P, Xie J, Zhao Y, Shi Y, Qi H, Liu J, Liu Y. Automated Tortuosity Analysis of Nerve Fibers in Corneal Confocal Microscopy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2725-2737. [PMID: 32078542 DOI: 10.1109/tmi.2020.2974499] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Precise characterization and analysis of corneal nerve fiber tortuosity are of great importance in facilitating the examination and diagnosis of many eye-related diseases. In this paper we propose a fully automated method for image-level tortuosity estimation, comprising image enhancement, exponential curvature estimation, and tortuosity level classification. The image enhancement component is based on an extended Retinex model, which not only corrects imbalanced illumination and improves image contrast, but also models noise explicitly to aid removal of imaging noise. Afterwards, we take advantage of exponential curvature estimation in the 3D space of positions and orientations to measure curvature directly from the enhanced images, rather than relying on the explicit segmentation and skeletonization steps of a conventional pipeline, which usually accumulate pre-processing errors. The proposed method has been applied to two corneal nerve microscopy datasets to estimate a tortuosity level for each image. The experimental results show that it performs better than several selected state-of-the-art methods. Furthermore, we have performed manual tortuosity-level gradings of 403 corneal nerve microscopy images, and this dataset has been released for public access to facilitate further research in the community on this and related topics.
|
44
|
Liu R, Jiang Z, Fan X, Luo Z. Knowledge-Driven Deep Unrolling for Robust Image Layer Separation. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:1653-1666. [PMID: 31329566 DOI: 10.1109/tnnls.2019.2921597] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Single-image layer separation aims to decompose an observed image into two independent components according to different application demands. Many vision and multimedia applications can be (re)formulated as such a separation problem. Due to the fundamentally ill-posed nature of these separations, existing methods tend to elaborate model priors on the separated components. Nevertheless, it is difficult to optimize a cost function with complicated model regularizations: effectiveness is compromised by the fixed iteration mechanism, and adaptation cannot be guaranteed due to poor data fitting. What is more, for a universal framework, the most taxing issue is that one type of visual cue cannot be shared across different tasks. To partly overcome these weaknesses, we delve into a generic optimization unrolling technique that incorporates deep architectures into the iterations for adaptive image layer separation. First, we propose a general energy model with implicit priors based on maximum a posteriori estimation, and employ the widely accepted alternating direction method of multipliers to determine our elementary iteration mechanism. By unrolling with one general residual architecture prior and one task-specific prior, we attain a straightforward, flexible, and data-dependent image separation framework. We apply our method to four different tasks, including single-image rain streak removal, high-dynamic-range tone mapping, low-light image enhancement, and single-image reflection removal. Extensive experiments demonstrate that the proposed method is applicable to multiple tasks and outperforms the state of the art by a large margin both qualitatively and quantitatively.
|
45
|
Ren X, Yang W, Cheng WH, Liu J. LR3M: Robust Low-Light Enhancement via Low-Rank Regularized Retinex Model. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 29:5862-5876. [PMID: 32286975 DOI: 10.1109/tip.2020.2984098] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Noise causes unpleasant visual effects in low-light image/video enhancement. In this paper, we aim to make the enhancement model and method noise-aware throughout the whole process. To deal with heavy noise, which previous methods do not handle, we introduce a robust low-light enhancement approach that enhances low-light images/videos while jointly suppressing intensive noise. Our method is based on the proposed Low-Rank Regularized Retinex Model (LR3M), which is the first to inject a low-rank prior into the Retinex decomposition process to suppress noise in the reflectance map. Our method estimates a piece-wise smooth illumination and a noise-suppressed reflectance sequentially, avoiding the residual noise in the illumination and reflectance maps that usually remains in alternating decomposition methods. After obtaining the estimated illumination and reflectance, we adjust the illumination layer and generate our enhancement result. Furthermore, we apply LR3M to low-light video enhancement: we consider inter-frame coherence of illumination maps and find similar patches across the reflectance maps of successive frames to form the low-rank prior, thereby exploiting temporal correspondence. Our method performs well on a wide variety of images and videos, and achieves better quality in both enhancement and denoising compared with state-of-the-art methods.
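The sequential illumination-then-reflectance pipeline with a low-rank prior can be sketched in a few lines of numpy. This toy stand-in is not LR3M itself: the piece-wise smooth illumination is approximated by a box filter, and the paper's low-rank regularizer is replaced by a hard truncated-SVD projection; `rank` and `eps` are illustrative parameters.

```python
import numpy as np

def box_smooth(img, k=3):
    # Crude piece-wise smooth illumination estimate via a box filter.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def lr3m_sketch(img, rank=2, eps=1e-3):
    """Sequential Retinex split in the spirit of LR3M (toy stand-in).

    1. estimate a smooth illumination map
    2. divide it out to obtain a noisy reflectance map
    3. suppress noise with a truncated-SVD low-rank projection
    """
    illum = np.maximum(box_smooth(img), eps)
    refl = img / illum
    u, s, vt = np.linalg.svd(refl, full_matrices=False)
    s[rank:] = 0.0                      # keep only the leading components
    refl_lr = u @ np.diag(s) @ vt
    return illum, np.clip(refl_lr, 0.0, 1.0)
```

Estimating the illumination first and denoising only the reflectance mirrors the paper's point that sequential estimation avoids noise leaking between the two maps.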
|
46
|
Low-Light Image Enhancement Based on Deep Symmetric Encoder–Decoder Convolutional Networks. Symmetry (Basel) 2020. [DOI: 10.3390/sym12030446] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
A low-light image enhancement method based on a deep symmetric encoder–decoder convolutional network (LLED-Net) is proposed in this paper. In surveillance and tactical reconnaissance, collecting visual information from a dynamic environment and accurately processing that data is critical to making the right decisions and ensuring mission success. However, due to the cost and technical limitations of camera sensors, it is difficult to capture clear images or videos in low-light conditions. In this paper, a special encoder–decoder convolutional network is designed that exploits multi-scale feature maps and adds skip connections to avoid vanishing gradients. In order to preserve image texture as far as possible, the model is trained with a structural similarity (SSIM) loss on datasets with different brightness levels, so that it can adaptively enhance low-light images. The results show that the proposed algorithm provides significant improvements in quantitative comparison with RED-Net and several other representative image enhancement algorithms.
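The SSIM loss mentioned here is worth spelling out. A minimal numpy sketch of the statistic follows; practical SSIM averages it over local sliding windows (and the abstract's network is omitted entirely), so this single-window version only illustrates the loss term 1 - SSIM, with the standard stabilizing constants `c1`, `c2` assumed.

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM between two images in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()  # cross-covariance of the two images
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def ssim_loss(pred, target):
    # Training minimizes 1 - SSIM so that identical images give zero loss.
    return 1.0 - ssim_global(pred, target)
```

Because SSIM compares local means, variances, and covariance rather than raw pixel differences, minimizing 1 - SSIM rewards preserved structure and texture, which is why it suits this texture-preservation goal better than plain MSE.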
|
47
|
Xu J, Hou Y, Ren D, Liu L, Zhu F, Yu M, Wang H, Shao L. STAR: A Structure and Texture Aware Retinex Model. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 29:5022-5037. [PMID: 32167892 DOI: 10.1109/tip.2020.2974060] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Retinex theory is developed mainly to decompose an image into illumination and reflectance components by analyzing local image derivatives. In this theory, larger derivatives are attributed to changes in reflectance, while smaller derivatives emerge in the smooth illumination. In this paper, we utilize exponentiated local derivatives (with an exponent γ) of an observed image to generate its structure map and texture map. The structure map is produced by amplifying the derivatives with γ > 1, while the texture map is generated by shrinking them with γ < 1. To this end, we design exponential filters for the local derivatives and demonstrate their ability to extract accurate structure and texture maps under different choices of the exponent γ. The extracted structure and texture maps are employed to regularize the illumination and reflectance components in the Retinex decomposition. A novel Structure and Texture Aware Retinex (STAR) model is further proposed for illumination and reflectance decomposition of a single image. We solve the STAR model by an alternating optimization algorithm; each sub-problem is transformed into a vectorized least-squares regression with a closed-form solution. Comprehensive experiments on commonly tested datasets demonstrate that the proposed STAR model produces better quantitative and qualitative performance than previous competing methods on illumination and reflectance decomposition, low-light image enhancement, and color correction. The code is publicly available at https://github.com/csjunxu/STAR.
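The core exponentiation idea is compact enough to sketch directly. The following is a toy numpy illustration of the γ > 1 / γ < 1 split, not the STAR model itself (the exponential filters, the Retinex regularization, and the alternating solver are omitted); `gamma_s`, `gamma_t`, and `eps` are illustrative values.

```python
import numpy as np

def exponentiated_maps(img, gamma_s=1.5, gamma_t=0.5, eps=1e-6):
    """Structure/texture maps from exponentiated local derivatives (STAR idea).

    Raising the gradient magnitude to gamma > 1 makes large (structural)
    edges dominate, while gamma < 1 lifts small (textural) variations
    relative to them.
    """
    gy, gx = np.gradient(img)            # local derivatives along rows/cols
    mag = np.sqrt(gx**2 + gy**2) + eps   # gradient magnitude, kept positive
    structure = mag ** gamma_s           # gamma > 1: strong edges amplified
    texture = mag ** gamma_t             # gamma < 1: weak variations boosted
    return structure, texture
```

On an image with one strong step edge, the structure map concentrates on the edge while near-zero gradients are suppressed far harder than in the texture map, which is the asymmetry the two exponents are designed to produce.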
|
48
|
Zhang XS, Yang KF, Zhou J, Li YJ. Retina inspired tone mapping method for high dynamic range images. OPTICS EXPRESS 2020; 28:5953-5964. [PMID: 32225854 DOI: 10.1364/oe.380555] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/15/2019] [Accepted: 02/10/2020] [Indexed: 06/10/2023]
Abstract
The limited dynamic range of regular screens restricts the display of high dynamic range (HDR) images. Inspired by retinal processing mechanisms, we propose a tone mapping method to address this problem. In the retina, horizontal cells (HCs) adaptively adjust their receptive field (RF) size based on the local stimuli to regulate the visual signals absorbed by photoreceptors. Using this adaptive mechanism, the proposed method compresses the dynamic range locally in different regions and is able to avoid halo artifacts around edges of high luminance contrast. Moreover, the proposed method introduces the center-surround antagonistic RF structure of bipolar cells (BCs) to enhance local contrast and details. Extensive experiments show that the proposed method performs robustly on a wide variety of images, providing competitive results against state-of-the-art methods in terms of visual inspection, objective metrics, and observer scores.
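The local-adaptation principle can be illustrated with a toy numpy operator. This is not the paper's HC/BC model (the adaptive RF sizing and the bipolar-cell stage are omitted); it is a Naka-Rushton-style compression whose semi-saturation constant is the local surround mean, with the box-filter size `k` an illustrative assumption.

```python
import numpy as np

def box_blur(img, k=5):
    # Uniform surround estimate; the retina model adapts this RF size locally.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def tone_map(hdr, k=5, eps=1e-6):
    """Local tone operator in the spirit of HC adaptation (toy stand-in).

    Each pixel is divided by itself plus its surround mean, so bright
    regions are compressed far more strongly than dark ones and the
    output always lies in [0, 1).
    """
    surround = box_blur(hdr, k) + eps
    return hdr / (hdr + surround)
```

With a surround equal to the local mean, a dim region and a region four orders of magnitude brighter both map near 0.5 in their interiors, which is exactly the region-wise compression the abstract describes.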
|
50
|
Luo X, Zeng HQ, Wan Y, Zhang XB, Du YP, Peters TM. Endoscopic Vision Augmentation Using Multiscale Bilateral-Weighted Retinex for Robotic Surgery. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2863-2874. [PMID: 31094684 DOI: 10.1109/tmi.2019.2916101] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Endoscopic vision plays a significant role in minimally invasive surgical procedures. The visibility and maintenance of such direct in situ vision is paramount not only for safety, by preventing inadvertent injury, but also to improve precision and reduce operating time. Unfortunately, endoscopic vision is unavoidably degraded by illumination variations during surgery. This paper aims to restore or augment such degraded visualization and to evaluate it quantitatively during robotic surgery. A multiscale bilateral-weighted retinex method is proposed to remove non-uniform and highly directional illumination and enhance surgical vision, while an objective no-reference image visibility assessment method is defined in terms of sharpness, naturalness, and contrast to quantitatively and objectively evaluate the endoscopic visualization on surgical video sequences. The methods were validated on surgical data, with the experimental results showing that our method outperforms existing retinex approaches. In particular, the combined visibility score was improved from 0.81 to 1.06, while three surgeons generally agreed that the results were restored with much better visibility.
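The multiscale retinex baseline this method builds on can be sketched briefly. The following numpy version uses a uniform box surround for brevity; the paper additionally weights each scale with bilateral terms to respect edges, and the scale set in `scales` is an illustrative assumption.

```python
import numpy as np

def box_blur(img, k):
    # Surround estimate at one scale; the paper uses bilateral weighting here.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def multiscale_retinex(img, scales=(3, 7, 15), eps=1e-6):
    """Plain multiscale retinex: average of log(I) - log(blur_k(I))."""
    logi = np.log(img + eps)
    msr = np.zeros_like(img)
    for k in scales:
        msr += logi - np.log(box_blur(img, k) + eps)
    return msr / len(scales)
```

Subtracting the log of the blurred surround divides out slowly varying (non-uniform) illumination at each scale, so a uniformly lit region maps to zero while local detail against its surround is what survives.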
|