1. Roldán D, Redenbach C, Schladitz K, Kübel C, Schlabach S. Image quality evaluation for FIB-SEM images. J Microsc 2024; 293:98-117. [PMID: 38112173] [DOI: 10.1111/jmi.13254]
Abstract
Focused ion beam scanning electron microscopy (FIB-SEM) tomography is a serial sectioning technique in which an FIB mills off slices from the material sample being analysed. After each slice is removed, an SEM image is taken of the newly exposed layer of the sample. By combining all slices in a stack, a 3D image of the material is generated. However, artefacts caused by the imaging technique distort the images, hampering the morphological analysis of the structure. Typical quality problems in microscopy imaging are noise and lack of contrast or focus; in addition, the FIB milling causes specific artefacts, namely curtaining and charging. We propose quality indices for evaluating the quality of FIB-SEM data sets. The indices are validated on real and experimental data of different structures and materials.
Affiliation(s)
- Katja Schladitz
- Fraunhofer Institute of Industrial Mathematics, Kaiserslautern, Germany
- Christian Kübel
- Institute of Nanotechnology (INT), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Karlsruhe Nano Micro Facility (KNMFi), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Research group in-situ electron microscopy, Joint Research Laboratory Nanomaterials, Department of Materials & Earth Sciences, Technical University Darmstadt, Darmstadt, Germany
- Sabine Schlabach
- Institute of Nanotechnology (INT), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Karlsruhe Nano Micro Facility (KNMFi), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
- Institute for Applied Materials (IAM), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
2. Fan G, Gan M, Fan B, Chen CLP. Multiscale Cross-Connected Dehazing Network With Scene Depth Fusion. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:1598-1612. [PMID: 35776818] [DOI: 10.1109/TNNLS.2022.3184164]
Abstract
In this article, we propose a multiscale cross-connected dehazing network with scene depth fusion. We focus on the correlation between a hazy image and the corresponding depth image. The model encodes and decodes the hazy image and the depth image separately and includes cross connections at the decoding end to directly generate a clean image in an end-to-end manner. Specifically, we first construct an input pyramid to obtain the receptive fields of the depth image and the hazy image at multiple levels. Then, we add the features of the corresponding dimensions in the input pyramid to the encoder. Finally, the two paths of the decoder are cross-connected. In addition, the proposed model uses wavelet pooling and residual channel attention modules (RCAMs) as components. A series of ablation experiments shows that the wavelet pooling and RCAMs effectively improve the performance of the model. We conducted extensive experiments on multiple dehazing datasets, and the results show that the model is superior to other advanced methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and subjective visual effects. The source code and supplementary materials are available at https://github.com/CCECfgd/MSCDN-master.
3. Tang L, Ma J, Zhang H, Guo X. DRLIE: Flexible Low-Light Image Enhancement via Disentangled Representations. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:2694-2707. [PMID: 35853059] [DOI: 10.1109/TNNLS.2022.3190880]
Abstract
Low-light image enhancement (LIME) aims to convert images with unsatisfactory lighting into desired ones. Unlike existing methods that manipulate illumination in uncontrollable ways, we propose a flexible framework that takes user-specified guide images as references to improve practicability. To achieve this goal, this article models an image as the combination of two components, content and exposure attribute, from an information decoupling perspective. Specifically, we first adopt a content encoder and an attribute encoder to disentangle the two components. Then, we combine the scene content information of the low-light image with the exposure attribute of the guide image to reconstruct the enhanced image through a generator. Extensive experiments on public datasets demonstrate the superiority of our approach over state-of-the-art alternatives. In particular, the proposed method allows users to enhance images according to their preferences by providing specific guide images. Our source code and the pretrained model are available at https://github.com/Linfeng-Tang/DRLIE.
4. Zhao M, Yang R, Hu M, Liu B. Deep Learning-Based Technique for Remote Sensing Image Enhancement Using Multiscale Feature Fusion. Sensors (Basel) 2024; 24:673. [PMID: 38276366] [PMCID: PMC11154389] [DOI: 10.3390/s24020673]
Abstract
The present study proposes a novel deep-learning model for remote sensing image enhancement that maintains image details while enhancing brightness in the feature extraction module. An improved hierarchical model named Global Spatial Attention Network (GSA-Net), based on U-Net, is proposed to improve performance. To circumvent the issue of insufficient sample data, gamma correction is applied to create low-light images, which are then used as training examples. A loss function is constructed from the Structural Similarity (SSIM) and Peak Signal-to-Noise Ratio (PSNR) indices. The GSA-Net network and loss function are used to restore images obtained via low-light remote sensing. The proposed method was tested on the Northwestern Polytechnical University Very-High-Resolution 10 (NWPU VHR-10) dataset, and its overall superiority over other state-of-the-art algorithms was demonstrated on objective assessment indicators such as PSNR, SSIM, and Learned Perceptual Image Patch Similarity (LPIPS). Furthermore, in high-level visual tasks such as object detection, this method provides remote sensing images with more distinct details and higher contrast than the competing methods.
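The gamma-correction step used here to synthesize low-light training images can be sketched as follows. This is a minimal illustration, not the paper's implementation; the gamma value and the 8-bit normalization are assumptions.

```python
import numpy as np

def synthesize_low_light(image: np.ndarray, gamma: float = 2.5) -> np.ndarray:
    """Darken an 8-bit image by gamma correction I' = I**gamma on normalized
    intensities (gamma > 1 darkens, gamma < 1 brightens)."""
    normalized = image.astype(np.float64) / 255.0
    darkened = np.power(normalized, gamma)
    return np.clip(darkened * 255.0, 0, 255).astype(np.uint8)

# A mid-gray image becomes much darker under gamma = 2.5, giving a
# synthetic low-light counterpart for supervised training.
img = np.full((4, 4), 128, dtype=np.uint8)
low = synthesize_low_light(img)
```

Pairs of `(low, img)` would then serve as (input, target) training examples.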
Affiliation(s)
- Min Hu
- School of Computer Science, Yangtze University, Jingzhou 434023, China; (M.Z.); (R.Y.); (B.L.)
5. Siracusano G, La Corte A, Nucera AG, Gaeta M, Chiappini M, Finocchio G. Effective processing pipeline PACE 2.0 for enhancing chest x-ray contrast and diagnostic interpretability. Sci Rep 2023; 13:22471. [PMID: 38110512] [PMCID: PMC10728198] [DOI: 10.1038/s41598-023-49534-y]
Abstract
Preprocessing is an essential step for the correct analysis of digital medical images. In particular, X-ray images may contain artifacts, low contrast, diffraction effects, or intensity inhomogeneities. We previously developed a procedure named PACE that improves chest X-ray (CXR) images and supports the clinical evaluation of pneumonia caused by COVID-19. During clinical benchmarking of that tool, certain conditions were found to reduce detail over large bright regions (as in ground-glass opacities and in pleural effusions in bedridden patients), resulting in oversaturated areas. Here, we significantly improve the overall performance of the original approach, including in those specific cases, with PACE 2.0. It combines 2D image decomposition, non-local means denoising, gamma correction, and recursive algorithms to improve image quality. The tool was evaluated using four metrics: contrast improvement index (CII), information entropy (ENT), effective measure of enhancement (EME), and BRISQUE, with average gains over the original radiographs of 35% in CII, 7.5% in ENT, 95.6% in EME, and 13% in BRISQUE. Additionally, the enhanced images were fed to a pre-trained DenseNet-121 model for transfer learning, increasing classification accuracy from 80% to 94% and recall from 89% to 97%. These improvements potentially enhance the interpretability of lesion detection in CXRs. PACE 2.0 could become a valuable tool for clinical decision support and help healthcare professionals detect pneumonia more accurately.
Affiliation(s)
- Giulio Siracusano
- Department of Electric, Electronic and Computer Engineering, University of Catania, Viale Andrea Doria 6, 95125, Catania, Italy
- Aurelio La Corte
- Department of Electric, Electronic and Computer Engineering, University of Catania, Viale Andrea Doria 6, 95125, Catania, Italy
- Annamaria Giuseppina Nucera
- Unit of Radiology, Department of Advanced Diagnostic-Therapeutic Technologies, "Bianchi-Melacrino-Morelli" Hospital, Via Giuseppe Melacrino 21, 89124, Reggio Calabria, Italy
- Michele Gaeta
- Department of Biomedical Sciences, Dental and of Morphological and Functional Images, University of Messina, Via Consolare Valeria 1, 98125, Messina, Italy
- Massimo Chiappini
- Istituto Nazionale di Geofisica e Vulcanologia (INGV), Via di Vigna Murata 605, 00143, Rome, Italy
- Maris Scarl, Via Vigna Murata 606, 00143, Rome, Italy
- Giovanni Finocchio
- Istituto Nazionale di Geofisica e Vulcanologia (INGV), Via di Vigna Murata 605, 00143, Rome, Italy
- Department of Mathematical and Computer Sciences, Physical Sciences and Earth Sciences, University of Messina, V.le F. Stagno D'Alcontres 31, 98166, Messina, Italy
6. Bhimavarapu U, Chintalapudi N, Battineni G. Automatic Detection and Classification of Diabetic Retinopathy Using the Improved Pooling Function in the Convolution Neural Network. Diagnostics (Basel) 2023; 13:2606. [PMID: 37568969] [PMCID: PMC10416913] [DOI: 10.3390/diagnostics13152606]
Abstract
Diabetic retinopathy (DR) is an eye disease associated with diabetes that can lead to blindness. Early diagnosis is critical to ensure that patients with diabetes do not lose their sight. Deep learning plays an important role in diagnosing DR, reducing the human effort needed to diagnose and classify diabetic and non-diabetic patients. The main objective of this study was to provide an improved convolution neural network (CNN) model for automatic DR diagnosis from fundus images. The pooling function increases the receptive field of convolution kernels over layers and reduces computational complexity and memory requirements, because it lowers the resolution of feature maps while preserving the characteristics required by subsequent layers. In this study, an improved pooling function combined with an activation function in the ResNet-50 model was applied to retina images for autonomous lesion detection with reduced loss and processing time. The improved ResNet-50 model was trained and tested on two datasets (APTOS and Kaggle), achieving an accuracy of 98.32% on APTOS and 98.71% on Kaggle, higher than that of state-of-the-art works on diagnosing DR from retinal fundus images.
Affiliation(s)
- Usharani Bhimavarapu
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram 522302, India
- Nalini Chintalapudi
- Clinical Research Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
- Gopi Battineni
- Clinical Research Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
- The Research Centre of the ECE Department, V. R. Siddhartha Engineering College, Vijayawada 520007, India
7. Qiu J, Wang Z, Huang H. High dynamic range image compression based on the multi-peak S-shaped tone curve. Optics Express 2023; 31:9841-9853. [PMID: 37157546] [DOI: 10.1364/OE.483448]
Abstract
Tone mapping methods compress high dynamic range (HDR) images so that they can be displayed on common devices. The tone curve plays a key role in many tone mapping methods, since it directly adjusts the range of the HDR image, and S-shaped tone curves perform impressively thanks to their flexibility. However, a single conventional S-shaped tone curve tends to over-compress dense grayscale areas, losing detail there, while under-compressing sparse grayscale areas, which lowers the contrast of the tone-mapped image. This paper proposes a multi-peak S-shaped (MPS) tone curve to address these problems. Specifically, the grayscale interval of the HDR image is divided according to the significant peak and valley distribution of the grayscale histogram, and each interval is tone mapped by an S-shaped tone curve. We further propose an adaptive S-shaped tone curve based on the luminance adaptation mechanism of the human visual system, which effectively reduces the compression in dense grayscale areas and increases it in sparse grayscale areas, preserving details while improving the contrast of tone-mapped images. Experiments show that replacing the single S-shaped tone curve in relevant methods with our MPS tone curve improves their performance and outperforms state-of-the-art tone mapping methods.
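An S-shaped tone curve of the kind this family of methods builds on can be sketched as a logistic function on log-luminance. This is a generic illustration, not the paper's MPS construction; the midpoint and slope parameters are assumptions.

```python
import numpy as np

def s_shaped_tone_curve(luminance: np.ndarray, midpoint: float = 0.18,
                        slope: float = 4.0) -> np.ndarray:
    """Map HDR luminance (> 0) into [0, 1] with a logistic curve in log space.
    Values near `midpoint` receive the most contrast; extremes are compressed."""
    log_l = np.log2(luminance)
    log_mid = np.log2(midpoint)
    return 1.0 / (1.0 + np.exp(-slope * (log_l - log_mid)))

# Luminances spanning several stops are squeezed into the displayable range.
hdr = np.array([0.01, 0.18, 1.0, 50.0])
ldr = s_shaped_tone_curve(hdr)
```

The MPS idea is then to fit one such curve per histogram interval instead of a single curve for the whole range.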
8. Li L, Li D, Wang S, Jiao Q, Bian L. Tuning-free and self-supervised image enhancement against ill exposure. Optics Express 2023; 31:10368-10385. [PMID: 37157585] [DOI: 10.1364/OE.484628]
Abstract
Complex lighting conditions and the limited dynamic range of imaging devices result in captured images with ill exposure and information loss. Existing image enhancement methods based on histogram equalization, Retinex-inspired decomposition models, and deep learning suffer from manual tuning or poor generalization. In this work, we report an image enhancement method against ill exposure based on self-supervised learning, enabling tuning-free correction. First, a dual illumination estimation network is constructed to estimate the illumination for under- and over-exposed areas, yielding the corresponding intermediate corrected images. Second, given the intermediate corrected images with different best-exposed areas, Mertens' multi-exposure fusion strategy is used to fuse them into a well-exposed image. This correction-fusion approach allows adaptive handling of various types of ill-exposed images. Finally, a self-supervised learning strategy is studied that learns global histogram adjustment for better generalization. Compared to training on paired datasets, we only need ill-exposed images, which is crucial when paired data are inaccessible or imperfect. Experiments show that our method reveals more details with better visual perception than other state-of-the-art methods. Furthermore, the weighted average scores of the image naturalness metrics NIQE and BRISQUE and the contrast metrics CEIQ and NSS on five real-world image datasets are boosted by 7%, 15%, 4%, and 2%, respectively, over the recent exposure correction method.
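The Mertens-style fusion step can be illustrated by weighting each intermediate image by its per-pixel well-exposedness and normalizing. This is a simplified single-scale sketch under stated assumptions: the full method also uses contrast and saturation weights and a multiresolution pyramid, and the sigma value here is illustrative.

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Fuse grayscale images in [0, 1] by well-exposedness weights: pixels
    near 0.5 intensity are considered well exposed and weighted highest."""
    stack = np.stack(images)                                    # (N, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))  # Gaussian around 0.5
    weights /= weights.sum(axis=0, keepdims=True)               # normalize per pixel
    return (weights * stack).sum(axis=0)

# Fuse under-, over-, and well-exposed renderings of the same scene.
under = np.full((2, 2), 0.1)
over = np.full((2, 2), 0.9)
mid = np.full((2, 2), 0.5)
fused = fuse_exposures([under, over, mid])
```

Because the well-exposed rendering dominates the weights, the fused result stays close to the mid-tone version wherever it is available.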
9. Khan RA, Luo Y, Wu FX. Multi-level GAN based enhanced CT scans for liver cancer diagnosis. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104450]
10. Finlayson G, McVey J. TM-Net: A Neural Net Architecture for Tone Mapping. J Imaging 2022; 8:325. [PMID: 36547490] [PMCID: PMC9785189] [DOI: 10.3390/jimaging8120325]
Abstract
Tone mapping functions are applied to images to compress the dynamic range, to make image details more conspicuous, and, most importantly, to produce a pleasing reproduction. Contrast Limited Histogram Equalization (CLHE) is one of the simplest and most widely deployed tone mapping algorithms. CLHE works by iteratively refining an input histogram (to meet certain conditions) until convergence; the cumulative histogram of the result then defines the tone map used to enhance the image. This paper makes three contributions. First, we show that CLHE can be exactly formulated as a deep tone mapping neural network (which we call the TM-Net). The TM-Net has as many layers as there are refinements in CLHE (i.e., 60+ layers, since CLHE can take up to 60 refinements to converge). Second, we show that we can train a fixed 2-layer TM-Net to compute CLHE, making CLHE up to 30× faster to compute. Third, we take a more complex tone mapper (one that uses quadratic programming) and show that it too can be implemented, without loss of visual accuracy, using a bespoke trained 2-layer TM-Net. Experiments on a large corpus of 40,000+ images validate our methods.
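The iterative histogram refinement at the heart of CLHE can be sketched as repeated clipping with even redistribution of the excess, with the tone map taken from the cumulative histogram. This is a minimal grayscale sketch; the clip limit and iteration cap are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def clhe_tone_map(image: np.ndarray, clip_limit: float = 1.5,
                  n_bins: int = 256, max_iters: int = 60) -> np.ndarray:
    """Contrast Limited Histogram Equalization for an 8-bit grayscale image.
    The normalized histogram is repeatedly clipped at `clip_limit` times the
    uniform bin height, the clipped excess redistributed evenly, until it
    stops changing; the cumulative histogram then defines the tone map."""
    hist, _ = np.histogram(image, bins=n_bins, range=(0, 256))
    hist = hist.astype(np.float64) / hist.sum()
    ceiling = clip_limit / n_bins
    for _ in range(max_iters):
        excess = np.clip(hist - ceiling, 0.0, None).sum()
        new_hist = np.minimum(hist, ceiling) + excess / n_bins
        if np.allclose(new_hist, hist):
            break                              # converged
        hist = new_hist
    cdf = np.cumsum(hist)                      # tone curve = cumulative histogram
    lut = np.clip(np.round(cdf * 255.0), 0, 255).astype(np.uint8)
    return lut[image]

# Tone map a small intensity ramp; the mapping is monotone in the input.
ramp = (np.arange(64, dtype=np.uint8) * 4).reshape(8, 8)
mapped = clhe_tone_map(ramp)
```

Each loop iteration corresponds to one "refinement" of CLHE, which is what the TM-Net unrolls into layers.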
11. Lee SH, Park HG, Kwon KH, Kim BH, Kim MY, Jeong SH. Accurate Ship Detection Using Electro-Optical Image-Based Satellite on Enhanced Feature and Land Awareness. Sensors (Basel) 2022; 22:9491. [PMID: 36502193] [PMCID: PMC9739475] [DOI: 10.3390/s22239491]
Abstract
This paper proposes an algorithm that improves ship detection accuracy using preprocessing and post-processing. To achieve this, high-resolution electro-optical satellite images with a wide range of shape and texture information were considered. Existing detectors suffer from unreliable detection of ships owing to clouds, large waves, weather influences, and shadows from large terrain features, and false detections are frequent in land areas whose image content resembles ships. Therefore, this study combines three algorithms: global feature enhancement pre-processing (GFEP), a multiclass ship detector (MSD), and false detected ship exclusion by sea-land segmentation image (FDSESI). First, GFEP enhances the contrast of the high-resolution electro-optical satellite images. Second, the MSD extracts many primary ship candidates. Third, falsely detected ships in the land region are excluded using a mask image that separates sea from land. A series of experiments was performed using the proposed method on a database of 1984 images covering five ship classes, so the method focuses on improving accuracy across a variety of ships. The results show a mean average precision (mAP) improvement from 50.55% to 63.39% compared with other deep learning-based detection algorithms.
Affiliation(s)
- Sang-Heon Lee
- School of Electronics Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- The Korea Institute of Industrial Technology, Cheonan 31056, Republic of Korea
- Hae-Gwang Park
- The Oceanlightai. Co., Ltd., Daegu 41260, Republic of Korea
- Ki-Hoon Kwon
- School of Electronics Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Byeong-Hak Kim
- The Korea Institute of Industrial Technology, Cheonan 31056, Republic of Korea
- Min Young Kim
- School of Electronics Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
- Research Center for Neurosurgical Robotic System, Daegu 41566, Republic of Korea
- Seung-Hyun Jeong
- School of Mechatronics, Korea University of Technology and Education, Cheonan 31253, Republic of Korea
12. Cao Y, Tong X, Wang F, Yang J, Cao Y, Strat ST, Tisse CL. A deep thermal-guided approach for effective low-light visible image enhancement. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.12.007]
13. Prasitpuriprecha C, Pitakaso R, Gonwirat S, Enkvetchakul P, Preeprem T, Jantama SS, Kaewta C, Weerayuth N, Srichok T, Khonjun S, Nanthasamroeng N. Embedded AMIS-Deep Learning with Dialog-Based Object Query System for Multi-Class Tuberculosis Drug Response Classification. Diagnostics (Basel) 2022; 12:2980. [PMID: 36552987] [PMCID: PMC9777254] [DOI: 10.3390/diagnostics12122980]
Abstract
A person infected with drug-resistant tuberculosis (DR-TB) is one who does not respond to typical TB treatment. DR-TB necessitates a longer treatment period and a more difficult treatment protocol, and it can spread and infect individuals in the same manner as regular TB; early detection of DR-TB could therefore reduce the cost and length of treatment. This study provides a fast and effective classification scheme for four subtypes of TB: drug-sensitive tuberculosis (DS-TB), drug-resistant tuberculosis (DR-TB), multidrug-resistant tuberculosis (MDR-TB), and extensively drug-resistant tuberculosis (XDR-TB). The drug response classification system (DRCS) has been developed as a classification tool for DR-TB subtypes. As the classification method, ensemble deep learning (EDL) was built from two image preprocessing methods, four convolutional neural network (CNN) architectures, and three decision fusion methods. The EDL model is then embedded in a dialog-based object query system (DBOQS), enabling DRCS to assist medical professionals in diagnosing DR-TB. EDL yields an improvement of 1.17-43.43% over existing methods for classifying DR-TB and is 31.25% more accurate than classic deep learning. DRCS increased accuracy to 95.8% and user trust to 95.1%, and after the trial period, 99.70% of users were interested in continuing to use the system as a supportive diagnostic tool.
Affiliation(s)
- Rapeepan Pitakaso
- Department of Industrial Engineering, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
- Sarayut Gonwirat
- Department of Computer Engineering and Automation, Kalasin University, Kalasin 46000, Thailand
- Prem Enkvetchakul
- Department of Information Technology, Buriram Rajabhat University, Buriram 31000, Thailand
- Thanawadee Preeprem
- Faculty of Pharmaceutical Sciences, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
- Chutchai Kaewta
- Department of Computer Science, Ubon Ratchathani Rajabhat University, Ubon Ratchathani 34000, Thailand
- Nantawatana Weerayuth
- Department of Mechanical Engineering, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
- Thanatkij Srichok
- Department of Industrial Engineering, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
- Surajet Khonjun
- Department of Industrial Engineering, Ubon Ratchathani University, Ubon Ratchathani 34190, Thailand
- Natthapong Nanthasamroeng
- Department of Engineering Technology, Ubon Ratchathani Rajabhat University, Ubon Ratchathani 34000, Thailand
14. Ma S, Yang C, Bao S. Contrast Enhancement Method Based on Multi-Scale Retinex and Adaptive Gamma Correction. Journal of Advanced Computational Intelligence and Intelligent Informatics 2022. [DOI: 10.20965/jaciii.2022.p0875]
Abstract
The most common methods for improving the quality of images with insufficient visibility are retinex-based and gamma correction methods. The fundamental assumption of retinex theory is that the color of an object can be represented as the product of its illumination and reflectance, and retinex-based methods improve an insufficiently visible image by repairing its illumination. Multi-scale retinex (MSR) is a classic retinex-based method: although it enhances image details well, it sometimes reverses lightness values. Adaptive gamma correction with weighting distribution (AGCWD) modifies image visibility with a gamma function; it enhances low-contrast areas well, but it also brightens highlight regions too much. In this paper, a method that combines the advantages of MSR and AGCWD is proposed. First, the advantages of MSR and AGCWD are preserved in a detail image through an illumination-aware weight. Then, the image constructed by combining the detail and original images maintains the contrast of highlight regions while enhancing details in low-light regions. The validity of the proposed method is shown by experiments on several images.
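The MSR component described above can be sketched as the average over scales of log(I) minus log of a Gaussian-blurred I. This is a grayscale sketch under stated assumptions: the simple separable Gaussian and the final min-max rescaling are illustrative, not the paper's exact pipeline.

```python
import numpy as np

def gaussian_blur(img: np.ndarray, sigma: float) -> np.ndarray:
    """Minimal separable Gaussian blur with reflective padding."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()
    padded = np.pad(img, radius, mode="reflect")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

def multi_scale_retinex(img: np.ndarray, sigmas=(15, 80, 250)) -> np.ndarray:
    """MSR: average log(I) - log(blur_sigma(I)) over scales, rescaled to [0, 1]."""
    img = img.astype(np.float64) + 1.0          # offset avoids log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        msr += np.log(img) - np.log(gaussian_blur(img, sigma))
    msr /= len(sigmas)
    return (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)

# Small scales for a small demo image; (15, 80, 250) are the classic choices.
rng = np.random.default_rng(0)
demo = rng.integers(0, 256, size=(16, 16))
enhanced = multi_scale_retinex(demo, sigmas=(1, 2))
```

The paper's contribution is then how this MSR output is weighted against an AGCWD-corrected image, which the sketch does not cover.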
15. Bi X, Wang P, Wu T, Zha F, Xu P. Non-uniform illumination underwater image enhancement via events and frame fusion. Applied Optics 2022; 61:8826-8832. [PMID: 36256018] [DOI: 10.1364/AO.463099]
Abstract
Absorption and scattering by aqueous media attenuate light and make underwater optical imaging difficult. Artificial light sources are usually used to aid deep-sea imaging, but owing to the limited dynamic range of standard cameras, they often leave underwater images underexposed or overexposed. By contrast, event cameras have a high dynamic range and high temporal resolution but cannot provide frames with rich color characteristics. In this paper, we exploit the complementarity of the two camera types to propose an efficient yet simple image enhancement method for uneven underwater illumination, which generates enhanced images containing better scene detail and colors similar to standard frames. Additionally, we create a dataset recorded by the Dynamic and Active-pixel Vision Sensor that includes both event streams and frames, enabling testing of the proposed method and frame-based image enhancement methods. Qualitative and quantitative experimental results on our dataset demonstrate that the proposed method outperforms the compared enhancement algorithms.
16. Gao X, Zhang M, Luo J. Low-Light Image Enhancement via Retinex-Style Decomposition of Denoised Deep Image Prior. Sensors (Basel) 2022; 22:5593. [PMID: 35898096] [PMCID: PMC9332408] [DOI: 10.3390/s22155593]
Abstract
Low-light images commonly result from taking photos in dim environments with inadequate camera equipment, and they suffer from low contrast, color distortion, uneven brightness, and heavy loss of detail. These shortcomings are not only subjectively annoying but also degrade many computer vision systems; enhanced low-light images can be better used for image recognition, object detection, and image segmentation. This paper proposes a novel RetinexDIP method to enhance images. Noise is treated as a component in the image decomposition, using deep-learning generative strategies. Modeling the noise makes the decomposition more realistic, weakens the coupling among the components, avoids overfitting, and improves generalization. Extensive experiments demonstrate that our method outperforms existing methods qualitatively and quantitatively.
Affiliation(s)
- Xianjie Gao
- Department of Basic Sciences, Shanxi Agricultural University, Jinzhong 030801, China;
- Mingliang Zhang
- School of Mathematics and Statistics, Qilu University of Technology (Shandong Academy of Sciences), Jinan 250353, China
- Jinming Luo
- School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, China
17. Kumar R, Bhandari AK. Spatial mutual information based detail preserving magnetic resonance image enhancement. Comput Biol Med 2022; 146:105644. [DOI: 10.1016/j.compbiomed.2022.105644]
18. Venugopal V, Joseph J, Vipin Das M, Kumar Nath M. An EfficientNet-based modified sigmoid transform for enhancing dermatological macro-images of melanoma and nevi skin lesions. Computer Methods and Programs in Biomedicine 2022; 222:106935. [PMID: 35724474] [DOI: 10.1016/j.cmpb.2022.106935]
Abstract
BACKGROUND AND OBJECTIVE During the initial stages, skin lesions may not have sufficient intensity difference or contrast from the background region on dermatological macro-images. The lack of proper light exposure at the time of capturing the image also reduces the contrast. Low contrast between lesion and background regions adversely impacts segmentation. Enhancement techniques for improving the contrast between lesion and background skin on dermatological macro-images are limited in the literature. An EfficientNet-based modified sigmoid transform for enhancing the contrast on dermatological macro-images is proposed to address this issue. METHODS A modified sigmoid transform is applied in the HSV color space. The crossover point in the modified sigmoid transform, which divides the macro-image into lesion and background, is predicted using a modified EfficientNet regressor to exclude manual intervention and subjectivity. The modified EfficientNet regressor is constructed by replacing the classifier layer in the conventional EfficientNet with a regression layer. Transfer learning is employed to reduce the training time and the size of the dataset required to train the modified EfficientNet regressor. For training, the value components extracted from the HSV color space representation of the macro-images in the training dataset are fed as input. The corresponding ideal crossover points, at which the Dice similarity coefficient (DSC) between the ground-truth images and the segmented outputs of Otsu's thresholding is maximal, are defined as the target. RESULTS On images enhanced with the proposed framework, the DSC of segmented results obtained by Otsu's thresholding increased from 0.68 ± 0.34 to 0.81 ± 0.17.
CONCLUSIONS The proposed algorithm consistently improved the contrast between lesion and background on a comprehensive set of test images, justifying its application in the automated analysis of dermatological macro-images.
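The core of the method above is a sigmoid applied to the HSV value channel, centred at the predicted crossover point. A minimal pure-Python sketch; the gain parameter and the endpoint rescaling are illustrative assumptions, not the paper's exact formulation:

```python
import math

def sigmoid(v, crossover, gain=10.0):
    # Sigmoid centred at the crossover point that separates
    # lesion intensities from background intensities.
    return 1.0 / (1.0 + math.exp(-gain * (v - crossover)))

def enhance_value_channel(values, crossover, gain=10.0):
    """Stretch contrast around the crossover point on value-channel
    intensities in [0, 1], rescaled so 0 and 1 stay fixed."""
    lo = sigmoid(0.0, crossover, gain)
    hi = sigmoid(1.0, crossover, gain)
    return [(sigmoid(v, crossover, gain) - lo) / (hi - lo) for v in values]
```

Values below the crossover are pushed darker and values above it brighter, which is what widens the lesion/background gap before Otsu's thresholding.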
Affiliation(s)
- Vipin Venugopal
- Department of Electronics and Communication Engineering, National Institute of Technology Puducherry, Karaikal, Puducherry 609609, India.
- Justin Joseph
- School of Bioengineering, VIT Bhopal University, Sehore, Madhya Pradesh 466114, India.
- M Vipin Das
- Department of Dermatology, Kerala Health Services, Trivandrum, Kerala 695035, India.
- Malaya Kumar Nath
- Department of Electronics and Communication Engineering, National Institute of Technology Puducherry, Karaikal, Puducherry 609609, India.
|
19
|
Agrawal S, Panda R, Mishro P, Abraham A. A novel joint histogram equalization based image contrast enhancement. JOURNAL OF KING SAUD UNIVERSITY - COMPUTER AND INFORMATION SCIENCES 2022. [DOI: 10.1016/j.jksuci.2019.05.010] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
20
|
Xia W, Chen E, Pautler S, Peters T. Laparoscopic image enhancement based on distributed retinex optimization with refined information fusion. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.08.142] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
21
|
Multi-exposure microscopic image fusion-based detail enhancement algorithm. Ultramicroscopy 2022; 236:113499. [DOI: 10.1016/j.ultramic.2022.113499] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2021] [Revised: 12/16/2021] [Accepted: 02/16/2022] [Indexed: 02/04/2023]
|
22
|
Huang H, Yang W, Hu Y, Liu J, Duan LY. Towards Low Light Enhancement With RAW Images. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:1391-1405. [PMID: 35038292 DOI: 10.1109/tip.2022.3140610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
In this paper, we make the first benchmark effort to elaborate on the superiority of using RAW images for low-light enhancement and develop a novel alternative route to utilize RAW images in a more flexible and practical way. Guided by a full consideration of the typical image processing pipeline, we develop a new evaluation framework, the Factorized Enhancement Model (FEM), which decomposes the properties of RAW images into measurable factors and provides a tool for empirically exploring how these properties affect enhancement performance. The empirical benchmark results show that the linearity of the data and the exposure time recorded in the metadata play the most critical roles, bringing distinct performance gains in various measures over approaches that take sRGB images as input. With the insights obtained from the benchmark results in mind, a RAW-guiding Exposure Enhancement Network (REENet) is developed, which trades off the advantages and inaccessibility of RAW images in real applications by using RAW images only in the training phase. REENet projects sRGB images into linear RAW domains to apply constraints with the corresponding RAW images and thereby reduce the difficulty of training. In the testing phase, REENet does not rely on RAW images. Experimental results demonstrate not only the superiority of REENet over state-of-the-art sRGB-based methods but also the effectiveness of the RAW guidance and all components.
|
23
|
|
24
|
Kumar R, Kumar Bhandari A. Luminosity and contrast enhancement of retinal vessel images using weighted average histogram. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103089] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
25
|
Thierry S, Colince W, Pascal NE, Alexendre N. Shock filter coupled with a high-order PDE for additive noise removal and image quality enhancement. ARRAY 2021. [DOI: 10.1016/j.array.2021.100105] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
|
26
|
Gao Q, Wu X. Real-Time Deep Image Retouching Based on Learnt Semantics Dependent Global Transforms. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:7378-7390. [PMID: 34424843 DOI: 10.1109/tip.2021.3104173] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Although artists' actions in photo retouching appear to be highly nonlinear in nature and very difficult to characterize analytically, we find that the net effects of interactively editing a mundane image to a desired appearance can be modeled, in most cases, by a parametric monotonically non-decreasing global tone mapping function in the luminance axis and by a global affine transform in the chrominance plane, both weighted by saliency. This allows us to simplify the machine learning problem of mimicking artists in photo retouching to constructing a deep artful image transform (DAIT) using convolutional neural networks (CNNs). The CNN design of DAIT aims to learn the image-dependent parameters of the luminance tone mapping function and the affine chrominance transform, rather than learning the end-to-end pixel-level mapping as in the mainstream methods of image restoration and enhancement. The proposed DAIT approach reduces the computational complexity of the neural network by two orders of magnitude, which also, as a side benefit, improves robustness and generalization capability at the inference stage. The high throughput and robustness of DAIT lend themselves readily to real-time video enhancement as well, after simple temporal processing. Experiments and a Turing-type test are conducted to evaluate the proposed method and its competitors.
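The decomposition described above, a monotone luminance curve plus a global affine chrominance transform, can be illustrated with a toy per-pixel sketch; the gamma curve and the near-identity affine matrix below are assumed stand-ins for the learned, saliency-weighted parameters:

```python
def retouch_pixel(y, cb, cr, gamma=0.8,
                  A=((1.1, 0.0), (0.0, 1.1)), t=(0.0, 0.0)):
    """Apply a monotone non-decreasing tone curve to luminance y and an
    affine transform (matrix A, offset t) to chroma (cb, cr), all in
    [0, 1] with chroma centred at 0.5."""
    y2 = y ** gamma                      # monotone non-decreasing on [0, 1]
    c0, c1 = cb - 0.5, cr - 0.5          # centre chroma around 0
    cb2 = A[0][0] * c0 + A[0][1] * c1 + t[0] + 0.5
    cr2 = A[1][0] * c0 + A[1][1] * c1 + t[1] + 0.5
    return y2, cb2, cr2
```

Learning a handful of such global parameters per image, rather than a per-pixel mapping, is the source of the complexity reduction claimed above.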
|
27
|
|
28
|
Attention Guided Low-Light Image Enhancement with a Large Scale Low-Light Simulation Dataset. Int J Comput Vis 2021. [DOI: 10.1007/s11263-021-01466-8] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
29
|
Yang W, Wang S, Fang Y, Wang Y, Liu J. Band Representation-Based Semi-Supervised Low-Light Image Enhancement: Bridging the Gap Between Signal Fidelity and Perceptual Quality. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:3461-3473. [PMID: 33656992 DOI: 10.1109/tip.2021.3062184] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
It has been widely acknowledged that under-exposure causes a variety of visual quality degradations because of intensive noise, decreased visibility, biased color, etc. To alleviate these issues, a novel semi-supervised learning approach is proposed in this paper for low-light image enhancement. More specifically, we propose a deep recursive band network (DRBN) to recover a linear band representation of an enhanced normal-light image under the guidance of paired low/normal-light images. This design philosophy enables the principled network to generate a quality-improved image by reconstructing the given bands with another learnable linear transformation that is perceptually driven by an image quality assessment neural network. On one hand, the proposed network is delicately designed to obtain a variety of coarse-to-fine band representations, whose estimations mutually benefit each other in a recursive process. On the other hand, the band representation of the enhanced image extracted in the recursive band learning stage of DRBN is capable of bridging the gap between the restoration knowledge of paired data and the perceptual quality preference for high-quality images. Subsequently, the band recomposition learns to recompose the band representation towards fitting the perceptual regularization of high-quality images with the perceptual guidance. The proposed architecture can be flexibly trained with both paired and unpaired data. Extensive experiments demonstrate that our method produces better enhanced results with visually pleasing contrast and color distributions, as well as well-restored structural details.
|
30
|
|
31
|
Karadeniz AS, Erdem E, Erdem A. Burst Photography for Learning to Enhance Extremely Dark Images. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:9372-9385. [PMID: 34788215 DOI: 10.1109/tip.2021.3125394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Capturing images under extremely low-light conditions poses significant challenges for the standard camera pipeline. Images become too dark and too noisy, which makes traditional enhancement techniques almost impossible to apply. Recently, learning-based approaches have shown very promising results for this task since they have substantially more expressive capabilities to allow for improved quality. Motivated by these studies, in this paper, we aim to leverage burst photography to boost the performance and obtain much sharper and more accurate RGB images from extremely dark raw images. The backbone of our proposed framework is a novel coarse-to-fine network architecture that generates high-quality outputs progressively. The coarse network predicts a low-resolution, denoised raw image, which is then fed to the fine network to recover fine-scale details and realistic textures. To further reduce the noise level and improve the color accuracy, we extend this network to a permutation invariant structure so that it takes a burst of low-light images as input and merges information from multiple images at the feature-level. Our experiments demonstrate that our approach leads to perceptually more pleasing results than the state-of-the-art methods by producing more detailed and considerably higher quality images.
|
32
|
Pipeline for Advanced Contrast Enhancement (PACE) of Chest X-ray in Evaluating COVID-19 Patients by Combining Bidimensional Empirical Mode Decomposition and Contrast Limited Adaptive Histogram Equalization (CLAHE). SUSTAINABILITY 2020. [DOI: 10.3390/su12208573] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
COVID-19 is a new pulmonary disease that is placing considerable stress on hospitals owing to the large number of cases worldwide. Imaging of the lungs can play a key role in monitoring health status. Non-contrast chest computed tomography (CT) has been used for this purpose, mainly in China, with significant success. However, this approach cannot be used on a massive scale, mainly because of its high risk and cost; moreover, in some countries this tool is not widely available. Alternatively, chest X-ray, although less sensitive than CT scanning, can provide important information about the evolution of pulmonary involvement during the disease; this aspect is very important for verifying a patient's response to treatment. Here, we show how to improve the sensitivity of chest X-ray via a nonlinear post-processing tool, named PACE (Pipeline for Advanced Contrast Enhancement), that properly combines Fast and Adaptive Bidimensional Empirical Mode Decomposition (FABEMD) and Contrast Limited Adaptive Histogram Equalization (CLAHE). The results show an enhancement of image contrast, as confirmed by three widely used metrics: (i) contrast improvement index, (ii) entropy, and (iii) measure of enhancement. This improvement enables the detection of more lung lesions, as identified by two radiologists who evaluated the images separately, and is confirmed by CT scans. The results show that this method is a flexible and effective approach for medical image enhancement and can be used as a post-processing tool for medical image understanding and analysis.
|
33
|
Biswas B, Bhattacharyya S, Chakrabarti A, Dey KN, Platos J, Snasel V. Colonoscopy contrast-enhanced by intuitionistic fuzzy soft sets for polyp cancer localization. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106492] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
34
|
Ni Z, Yang W, Wang S, Ma L, Kwong S. Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; PP:9140-9151. [PMID: 32960763 DOI: 10.1109/tip.2020.3023615] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Improving the aesthetic quality of images is challenging and eagerly sought by the public. To address this problem, most existing algorithms are based on supervised learning methods that learn an automatic photo enhancer from paired data consisting of low-quality photos and corresponding expert-retouched versions. However, the style and characteristics of photos retouched by experts may not meet the needs or preferences of general users. In this paper, we present an unsupervised image enhancement generative adversarial network (UEGAN), which learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner, rather than learning from a large number of paired images. The proposed model is based on a single deep GAN that embeds modulation and attention mechanisms to capture richer global and local features. Based on the proposed model, we introduce two losses to deal with unsupervised image enhancement: (1) a fidelity loss, defined as an l2 regularization in the feature domain of a pre-trained VGG network, to ensure that the content of the enhanced image matches that of the input image, and (2) a quality loss, formulated as a relativistic hinge adversarial loss, to endow the input image with the desired characteristics. Both quantitative and qualitative results show that the proposed model effectively improves the aesthetic quality of images.
|
35
|
Zhang Y, Mou X, Chandler DM. Learning No-Reference Quality Assessment of Multiply and Singly Distorted Images with Big Data. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 29:2676-2691. [PMID: 31794396 DOI: 10.1109/tip.2019.2952010] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Previous research on no-reference (NR) quality assessment of multiply-distorted images focused mainly on three distortion types (white noise, Gaussian blur, and JPEG compression), while in practice images can be contaminated by many other common distortions due to the various stages of processing. Although MUSIQUE (MUltiply- and Singly-distorted Image QUality Estimator) [Zhang et al., TIP 2018] is a successful NR algorithm, it is still limited to the three distortion types. In this paper, we extend MUSIQUE to MUSIQUE-II to blindly assess the quality of images corrupted by five distortion types (white noise, Gaussian blur, JPEG compression, JPEG2000 compression, and contrast change) and their combinations. The proposed MUSIQUE-II algorithm builds upon the classification and parameter-estimation framework of its predecessor by using more advanced models and a more comprehensive set of distortion-sensitive features. Specifically, MUSIQUE-II relies on a three-layer classification model to identify 19 distortion types. To predict the five distortion parameter values, MUSIQUE-II extracts an additional 14 contrast features and employs a multi-layer probability-weighting rule. Finally, MUSIQUE-II employs a new most-apparent-distortion strategy to adaptively combine five quality scores based on the outputs of three classification models. Experimental results on three multiply-distorted and six singly-distorted image quality databases show that MUSIQUE-II yields not only a substantial improvement in quality-prediction performance compared with its predecessor, but also highly competitive performance relative to other state-of-the-art FR/NR IQA algorithms.
|
36
|
Kansal S, Tripathi RK. Adaptive Geometric Filtering Based on Average Brightness of the Image and Discrete Cosine Transform Coefficient Adjustment for Gray and Color Image Enhancement. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2019. [DOI: 10.1007/s13369-019-04151-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
|
37
|
Abstract
SIGNIFICANCE Head-mounted low vision devices have received considerable attention in recent years owing to rapidly developing technology, facilitating ease of use and functionality. Systematic clinical evaluations of such devices remain rare but are needed to steer future device development. PURPOSE The purpose of this study was to investigate, in a multicenter prospective trial, the short- and medium-term effects of a head-worn vision enhancement device (eSight Eyewear). METHODS Participants aged 13 to 75 years with stable vision (distance acuity, 20/60 to 20/400; visual field diameter >20°) were recruited across six sites. Data were collected at baseline (no device), at fitting (with device), and after 3 months of everyday use. Outcome measures were visual ability measured by the Veterans Affairs Low Vision Visual Functioning Questionnaire 48, distance acuity (Early Treatment Diabetic Retinopathy Study), reading performance (MNREAD chart), contrast sensitivity (MARS chart), face recognition, and a modified version of the Melbourne Low Vision Activities of Daily Living (ADL) Index. RESULTS Among the 51 participants, eSight introduction immediately improved distance acuity (0.74 ± 0.28 logMAR), contrast sensitivity (0.57 ± 0.53 log units), and critical print size (0.52 ± 0.43 logMAR), all P < .001, without any further change after 3 months; reading acuity improved at fitting (0.56 ± 0.35 logMAR) and by one additional line after 3 months, whereas reading speed only slightly increased across all three time points. The Melbourne ADL score and face recognition improved at fitting (P < .01) with trends toward further improvement at 3 months. After 3 months of use, Veterans Affairs Low Vision Visual Functioning Questionnaire 48 person measures (in logits) improved: overall, 0.84, P < .001; reading, 2.75, P < .001; mobility, 0.04, not statistically significant; visual information, 1.08, P < .001; and visual motor, 0.48, P = .02. 
CONCLUSIONS eSight introduction yields immediate improvements in visual ability, with face recognition and ADLs showing a tentative benefit of further use. Overall, visual ability, reading, and visual information showed greatest benefit with device use. Further studies need to examine benefits of practice and training and possible differential effects of underlying pathology or baseline vision.
|
38
|
Abstract
This paper proposes a single image haze removal algorithm that shows a marked improvement on the color attenuation prior-based method. Through a vast number of experiments on a wide variety of images, it is discovered that there are problems in the color attenuation prior, such as color distortion and background noise, which arise due to the fact that the priors do not hold true in all circumstances. Successful resolution of these problems using the proposed algorithm shows its superior performance to other state-of-the-art methods in terms of both subjective visual quality and quantitative metrics, on both synthetic and natural hazy image datasets. The proposed algorithm also is computationally friendly, due to the use of an efficient quad-decomposition algorithm for atmospheric light estimation and a simple modified hybrid median filter for depth map refinement.
|
39
|
Wu HT, Wu Y, Guan Z, Cheung YM. Lossless Contrast Enhancement of Color Images with Reversible Data Hiding. ENTROPY 2019. [PMCID: PMC7515439 DOI: 10.3390/e21090910] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
Recently, lossless contrast enhancement (CE) has been proposed so that a contrast-changed image can be converted back to its original version by maintaining the information entropy in it. As most lossless CE methods are proposed for grayscale images, artifacts are likely to be introduced when they are applied directly to color images. For instance, color distortions may be caused when CE is conducted separately in each channel of the RGB (red, green, and blue) model. To cope with this issue, a new scheme is proposed based on the HSV (hue, saturation, and value) color model. Specifically, both the hue and saturation components are kept unchanged while only the value component is modified. More precisely, the ratios between the RGB components are maintained while a reversible data hiding method is applied to the value component to achieve CE effects. The experimental results clearly show the CE effects obtained with the proposed scheme, while the original color images can be perfectly recovered. Several metrics, including image entropy, were adopted to measure the changes made in the CE procedure, and the performance was compared with that of an existing scheme. The evaluation results demonstrate that better image quality and increased information entropy can be achieved simultaneously with our proposed scheme.
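The HSV trick above, changing only the value component while hue and saturation stay fixed, amounts to scaling all three RGB components by the same factor. A toy per-pixel sketch; the reversible data hiding step that actually drives the value change in the paper is omitted:

```python
def set_value_keep_ratios(rgb, new_value):
    """Move the HSV value (the max RGB component) of an 8-bit pixel to
    new_value while preserving the ratios between R, G and B, so hue
    and saturation are untouched."""
    r, g, b = rgb
    old_value = max(r, g, b)
    if old_value == 0:
        return (0, 0, 0)          # black pixel: nothing to scale
    k = new_value / old_value
    return tuple(min(255, round(c * k)) for c in rgb)
```

For example, brightening the pixel (120, 60, 30) to value 240 gives (240, 120, 60): same hue and saturation, doubled value.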
Affiliation(s)
- Hao-Tian Wu
- School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China; (Y.W.); (Z.G.)
- Correspondence:
- Yue Wu
- School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China; (Y.W.); (Z.G.)
- Zhihao Guan
- School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China; (Y.W.); (Z.G.)
- Yiu-ming Cheung
- Department of Computer Science, Hong Kong Baptist University, Hong Kong, China;
|
40
|
Zhang C, Wang K, An Y, He K, Tong T, Tian J. Improved generative adversarial networks using the total gradient loss for the resolution enhancement of fluorescence images. BIOMEDICAL OPTICS EXPRESS 2019; 10:4742-4756. [PMID: 31565522 PMCID: PMC6757480 DOI: 10.1364/boe.10.004742] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2019] [Revised: 08/16/2019] [Accepted: 08/16/2019] [Indexed: 05/09/2023]
Abstract
Because of the optical properties of medical fluorescence images (FIs) and hardware limitations, light scattering and diffraction constrain image quality and resolution. In contrast to device-based approaches, we developed a post-processing method for FI resolution enhancement employing improved generative adversarial networks. To overcome the drawback of fake-texture generation, we proposed a total gradient loss for network training. A fine-tuning training procedure was applied to further improve the network architecture. Finally, the resulting network was applied to actual FIs, producing sharper and clearer boundaries than in the original images.
Affiliation(s)
- Chong Zhang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China
- Authors contributed equally to this article
- Kun Wang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China
- Authors contributed equally to this article
- Yu An
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China
- Kunshan He
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China
- Tong Tong
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Beijing Key Laboratory of Molecular Imaging, Beijing 100190, China
- BUAA-CCMU Advanced Innovation Center for Big Data-Based Precision Medicine, Beijing 100083, China
|
41
|
Ren W, Liu S, Ma L, Xu Q, Xu X, Cao X, Du J, Yang MH. Low-Light Image Enhancement via a Deep Hybrid Network. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 28:4364-4375. [PMID: 30998467 DOI: 10.1109/tip.2019.2910412] [Citation(s) in RCA: 54] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Camera sensors often fail to capture clear images or videos in a poorly lit environment. In this paper, we propose a trainable hybrid network to enhance the visibility of such degraded images. The proposed network consists of two distinct streams to simultaneously learn the global content and the salient structures of the clear image in a unified network. More specifically, the content stream estimates the global content of the low-light input through an encoder-decoder network. However, the encoder in the content stream tends to lose some structure details. To remedy this, we propose a novel spatially variant recurrent neural network (RNN) as an edge stream to model edge details, with the guidance of another auto-encoder. The experimental results show that the proposed network favorably performs against the state-of-the-art low-light image enhancement algorithms.
|
42
|
Xu L, Zhao D, Yan Y, Kwong S, Chen J, Duan LY. IDeRs: Iterative dehazing method for single remote sensing image. Inf Sci (N Y) 2019. [DOI: 10.1016/j.ins.2019.02.058] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
43
|
Adaptive Contrast Enhancement for Infrared Images Based on the Neighborhood Conditional Histogram. REMOTE SENSING 2019. [DOI: 10.3390/rs11111381] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In this paper, an adaptive contrast enhancement method based on the neighborhood conditional histogram is proposed to improve the visual quality of thermal infrared images. Existing block-based local contrast enhancement methods usually suffer from over-enhancement of smooth regions or the loss of some details. To address these drawbacks, we first introduce a neighborhood conditional histogram to adaptively enhance the contrast and avoid the over-enhancement caused by the original histogram. The clip-redistributed histogram of contrast-limited adaptive histogram equalization (CLAHE) is then replaced by the neighborhood conditional histogram. In addition, the local mapping function of each sub-block is updated based on the global mapping function to further eliminate block artifacts. Lastly, an optimized local contrast enhancement process, which combines the global and local enhanced results, is employed to obtain the desired result. Experiments are conducted to evaluate the performance of the proposed method, and five other methods are introduced for comparison. Qualitative and quantitative evaluation results demonstrate that the proposed method outperforms the other block-based methods in local contrast enhancement, visual quality improvement, and noise suppression.
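The clip-and-redistribute step that CLAHE-style methods (and the neighborhood-conditional variant above) build on can be sketched in isolation. This toy version works on a flat list of 8-bit pixels and equalizes globally rather than per sub-block, with a simplified one-pass redistribution; these simplifications are assumptions for illustration:

```python
def clip_limited_equalize(pixels, levels=256, clip_limit=0.01):
    """Histogram equalization with a clip limit: bin counts above the
    limit are clipped and the excess is spread uniformly, capping the
    slope of the mapping and so limiting noise amplification."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    limit = max(1, int(clip_limit * n))
    excess = 0
    for i in range(levels):
        if hist[i] > limit:
            excess += hist[i] - limit
            hist[i] = limit
    # Redistribute the clipped mass uniformly over all bins (one pass).
    bonus = excess // levels
    hist = [h + bonus for h in hist]
    # Build the cumulative mapping onto [0, levels - 1].
    total = sum(hist)
    cdf, acc = [], 0
    for h in hist:
        acc += h
        cdf.append(round((levels - 1) * acc / total))
    return [cdf[p] for p in pixels]
```

Clipping each bin bounds the slope of the cumulative mapping, which is exactly what prevents over-enhancement of near-uniform (smooth) regions.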
|
44
|
Cuckoo search algorithm-based brightness preserving histogram scheme for low-contrast image enhancement. Soft comput 2019. [DOI: 10.1007/s00500-019-03992-7] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
45
|
Optimized Contrast Enhancement for Infrared Images Based on Global and Local Histogram Specification. REMOTE SENSING 2019. [DOI: 10.3390/rs11070849] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
In this paper, an optimized contrast enhancement method combining global and local enhancement results is proposed to improve the visual quality of infrared images. Global and local contrast enhancement methods each have their merits and demerits. The proposed method exploits the complementary characteristics of the two to achieve noticeable contrast enhancement without artifacts. First, the 2D histogram, which contains both global and local gray-level distribution characteristics of the original image, is computed. Then, based on the 2D histogram, the global and local enhanced results are obtained by applying histogram specification globally and locally. Lastly, the enhanced result is computed by solving an optimization equation subject to global and local constraints. The pixel-wise regularization parameters of the optimization equation are adaptively determined based on the edge information of the original image. Thus, the proposed method is able to enhance local contrast while preserving the naturalness of the original image. Qualitative and quantitative evaluation results demonstrate that the proposed method outperforms block-based methods in improving the visual quality of infrared images.
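The final fusion step above solves an optimization subject to global and local constraints; when those constraints are quadratic penalties toward the two enhanced results, each pixel has the closed form x = w*g + (1 - w)*l from minimizing w*(x - g)^2 + (1 - w)*(x - l)^2. A toy sketch under that assumption; the edge-driven weighting here is illustrative, not the paper's exact regularization:

```python
def fuse_global_local(global_px, local_px, edge_strength):
    """Pixel-wise fusion of globally and locally enhanced results.
    The weight w comes from an edge-strength map in [0, 1], so flat
    regions follow the global result and edges follow the local one."""
    fused = []
    for g, l, e in zip(global_px, local_px, edge_strength):
        w = 1.0 - min(1.0, max(0.0, e))   # strong edge -> favor local
        fused.append(w * g + (1.0 - w) * l)
    return fused
```

With edge strength 0 a pixel takes the global value, with edge strength 1 the local value, and intermediate strengths blend linearly between them.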
|
46
|
Investigation of a novel automatic micro image-based method for the recognition of animal fibers based on Wavelet and Markov Random Field. Micron 2019; 119:88-97. [DOI: 10.1016/j.micron.2019.01.009] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2018] [Revised: 01/23/2019] [Accepted: 01/23/2019] [Indexed: 11/20/2022]
|
47
|
Gómez P, Semmler M, Schützenberger A, Bohr C, Döllinger M. Low-light image enhancement of high-speed endoscopic videos using a convolutional neural network. Med Biol Eng Comput 2019; 57:1451-1463. [DOI: 10.1007/s11517-019-01965-4] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Accepted: 02/20/2019] [Indexed: 12/31/2022]
|
48
|
Fu Q, Zhang Z, Celenk M, Wu A. A POSHE-Based Optimum Clip-Limit Contrast Enhancement Method for Ultrasonic Logging Images. SENSORS (BASEL, SWITZERLAND) 2018; 18:s18113954. [PMID: 30445698 PMCID: PMC6263424 DOI: 10.3390/s18113954] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/10/2018] [Revised: 11/05/2018] [Accepted: 11/12/2018] [Indexed: 06/09/2023]
Abstract
Ultrasonic logging images acquired with piezoceramic transducers often suffer from low contrast and indistinct local details, which makes it difficult to analyze and interpret the geologic features they depict. In this work, we propose a novel partially overlapped sub-block histogram-equalization (POSHE)-based optimum clip-limit contrast enhancement (POSHEOC) method to highlight the local details hidden in such images. The algorithm introduces contrast-limited enhancement to modify the cumulative distribution functions used in POSHE and builds a new quality evaluation index that combines the mean gradient and the mean structural similarity. This index is used to select the optimal clip-limit value for the histogram equalization of each sub-block, so the clip limit is chosen automatically according to the input image. Experimental results based on visual perceptual evaluation and quantitative measures demonstrate that, compared with seven other histogram equalization-based techniques from the literature, the proposed method better enhances contrast and emphasizes local details while preserving brightness and restricting excessive enhancement. This study provides a feasible and effective method to enhance ultrasonic logging images obtained through piezoceramic transducers and is significant for the interpretation of actual ultrasonic logging data.
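The two building blocks named in the abstract — clip-limited histogram equalization and automatic selection of the clip limit via a quality index — can be sketched as follows. This is a hedged NumPy illustration, not the paper's POSHE implementation: it works on a whole image rather than overlapping sub-blocks, and it uses the mean gradient alone as a stand-in for the paper's combined gradient/structural-similarity index:

```python
import numpy as np

def clipped_equalize(img, clip_limit):
    """Histogram equalization with a clipped histogram (CLAHE-style contrast limiting).

    Bins above clip_limit are truncated and the excess mass is redistributed
    uniformly, which caps the slope of the mapping and restrains over-enhancement.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess / 256.0
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1e-12)
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]

def mean_gradient(img):
    """Simple sharpness/detail index: mean gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy).mean()

def optimum_clip_limit(img, candidates):
    """Pick the candidate clip limit whose enhanced result maximizes the index."""
    return max(candidates, key=lambda c: mean_gradient(clipped_equalize(img, c)))
```

A small clip limit leaves the image nearly unchanged; a very large one reduces to plain histogram equalization. Searching the candidates with a quality index, as above, automates that trade-off per input image — the same mechanism the paper applies per sub-block.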
Affiliation(s)
- Qingqing Fu
- Electronics and Information School, Yangtze University, Jingzhou 434023, China.
- National Demonstration Center for Experimental Electrical & Electronic Education, Yangtze University, Jingzhou 434023, China.
- Zhengbing Zhang
- Electronics and Information School, Yangtze University, Jingzhou 434023, China.
- National Demonstration Center for Experimental Electrical & Electronic Education, Yangtze University, Jingzhou 434023, China.
- Mehmet Celenk
- School of Electrical Engineering and Computer Science, Ohio University, Athens, OH 45701, USA.
- Aiping Wu
- Electronics and Information School, Yangtze University, Jingzhou 434023, China.
- National Demonstration Center for Experimental Electrical & Electronic Education, Yangtze University, Jingzhou 434023, China.
|
49
|
Framelet regularization for uneven intensity correction of color images with illumination and reflectance estimation. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2018.06.063] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
|
50
|
An improved contrast equalization technique for contrast enhancement in scanning electron microscopy images. Microsc Res Tech 2018; 81:1132-1142. [DOI: 10.1002/jemt.23100] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2018] [Revised: 06/09/2018] [Accepted: 06/28/2018] [Indexed: 11/07/2022]
|