1
Meng R, Yu Z, Fu Q, Fan Y, Fu L, Ding Z, Yang S, Cao Z, Jia L. Smartphone-based colorimetric detection platform using color correction algorithms to reduce external interference. Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy 2024; 316:124350. [PMID: 38692108] [DOI: 10.1016/j.saa.2024.124350]
Abstract
Smartphone-based digital image colorimetry is a powerful, fast, low-cost approach to detecting target analytes. However, lighting conditions and camera parameters easily affect the detection results, significantly curtailing its applicability across scenarios. In this study, an Android-based mobile application (SMP-CC) is developed that offers a comprehensive package including image acquisition, color correction, and colorimetric analysis functions. Using a custom color card, a built-in algorithm in SMP-CC reduces the color difference between standard color blocks imaged by different smartphones under different lighting conditions and the reference values measured with an LS171 colorimeter to less than 4.36. The algorithm largely eliminates the impact of external lighting conditions and of differences between smartphone models. Furthermore, the feasibility of SMP-CC was verified by successful colorimetric detection of urine pH, glucose, and protein, demonstrating its potential in smartphone-based digital image colorimetry.
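Any colour-card-based correction of this kind ultimately fits a transform from captured patch values to reference patch values. The paper does not disclose SMP-CC's algorithm; the sketch below (Python/NumPy, with hypothetical patch values) shows only the simplest generic variant, a least-squares 3×3 matrix.

```python
import numpy as np

def fit_color_correction(measured, reference):
    """Least-squares 3x3 matrix M such that measured @ M approximates reference."""
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M

# Hypothetical calibration-card patches: a camera that renders every
# channel 20 % too dark.
reference = np.array([[200.0, 40.0, 40.0],
                      [40.0, 200.0, 40.0],
                      [40.0, 40.0, 200.0],
                      [128.0, 128.0, 128.0]])
measured = 0.8 * reference

M = fit_color_correction(measured, reference)
corrected = measured @ M
assert np.allclose(corrected, reference)
```

In practice a higher-order transform would typically be fitted and evaluated with a perceptual colour difference such as CIEDE2000 rather than raw RGB error.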
Affiliation(s)
- Ruidong Meng, Zhicheng Yu, Qiang Fu, Yi Fan, Li Fu, Zixuan Ding, Shuo Yang, Li Jia: Ministry of Education Key Laboratory of Laser Life Science & Guangdong Provincial Key Laboratory of Laser Life Science & Guangzhou Key Laboratory of Spectral Analysis and Functional Probes, College of Biophotonics, South China Normal University, Guangzhou 510631, China
- Zhanmao Cao: School of Computer Science, South China Normal University, Guangzhou 510631, China
2
Madeira T, Oliveira M, Dias P. Neural Colour Correction for Indoor 3D Reconstruction Using RGB-D Data. Sensors (Basel) 2024; 24:4141. [PMID: 39000926] [PMCID: PMC11243902] [DOI: 10.3390/s24134141]
Abstract
With the rise in popularity of different human-centred applications using 3D reconstruction data, generating photo-realistic models has become an important task. In a multiview acquisition system, particularly for large indoor scenes, the acquisition conditions vary across the environment, causing colour differences between captures and unappealing visual artefacts in the produced models. We propose a novel neural-based approach to colour correction for indoor 3D reconstruction. It is a lightweight and efficient approach that can be used to harmonize colour from sparse captures over complex indoor scenes. Our approach uses a fully connected deep neural network to learn an implicit representation of the colour in 3D space, while capturing camera-dependent effects. We then leverage this continuous function as reference data to estimate the transformations required to regenerate the pixels in each capture. Experiments on several scenes of the MP3D dataset show that the proposed method outperforms other relevant state-of-the-art approaches.
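The central idea above, a network that maps a 3D position to a colour, can be illustrated with a toy one-hidden-layer MLP fitted by plain gradient descent. This is a minimal stand-in, not the paper's architecture; the sample points, the synthetic "scene colour" function, and the network sizes are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for an implicit colour field: 3D sample points and a
# smooth synthetic "scene colour" at each point.
X = rng.uniform(-1.0, 1.0, size=(256, 3))
Y = 0.5 + 0.4 * np.sin(3.0 * X)

# One-hidden-layer MLP f(x, y, z) -> (r, g, b).
W1 = rng.normal(0.0, 0.5, (3, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.2, (32, 3)); b2 = np.zeros(3)

def forward(X):
    H = np.tanh(X @ W1 + b1)
    return H, H @ W2 + b2

loss0 = np.mean((forward(X)[1] - Y) ** 2)

lr = 0.01
for _ in range(1500):                      # plain gradient descent on MSE
    H, P = forward(X)
    G = 2.0 * (P - Y) / len(X)             # dLoss/dP
    GH = (G @ W2.T) * (1.0 - H ** 2)       # back-propagate through tanh
    W2 -= lr * (H.T @ G); b2 -= lr * G.sum(0)
    W1 -= lr * (X.T @ GH); b1 -= lr * GH.sum(0)

loss1 = np.mean((forward(X)[1] - Y) ** 2)
assert loss1 < loss0                       # the implicit fit improves
```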
Affiliation(s)
- Tiago Madeira, Paulo Dias: Institute of Electronics and Informatics Engineering of Aveiro (IEETA), Intelligent System Associate Laboratory (LASI), University of Aveiro, 3810-193 Aveiro, Portugal; Department of Electronics, Telecommunications and Informatics (DETI), University of Aveiro, 3810-193 Aveiro, Portugal
- Miguel Oliveira: Institute of Electronics and Informatics Engineering of Aveiro (IEETA), Intelligent System Associate Laboratory (LASI), University of Aveiro, 3810-193 Aveiro, Portugal; Department of Mechanical Engineering (DEM), University of Aveiro, 3810-193 Aveiro, Portugal
3
Vazquez-Corral J, Finlayson GD, Herranz L. Improving the perception of low-light enhanced images. Optics Express 2024; 32:5174-5190. [PMID: 38439250] [DOI: 10.1364/oe.509713]
Abstract
Improving images captured under low-light conditions has become an important topic in computational color imaging, as it has a wide range of applications. Most current methods are either based on handcrafted features or on end-to-end training of deep neural networks that mostly focus on minimizing some distortion metric, such as PSNR or SSIM, on a set of training images. However, minimizing distortion metrics does not mean that the results are optimal in terms of perception (i.e., perceptual quality). For example, the perception-distortion trade-off states that, close to the optimal results, improving distortion worsens perception. This means that current low-light image enhancement methods, which focus on distortion minimization, cannot be optimal in terms of perceptual quality. In this paper, we propose a post-processing approach in which, given the original low-light image and the result of a specific method, we obtain a result that resembles that of the original method as closely as possible while improving the perception of the final image. In more detail, our method follows the hypothesis that, to minimally modify the perception of an input image, any modification should be a combination of a local change in the shading across a scene and a global change in illumination color. We demonstrate the ability of our method quantitatively using blind perceptual image metrics such as BRISQUE, NIQE, and UNIQUE, and through user preference tests.
4
Kucuk A, Finlayson GD, Mantiuk R, Ashraf M. Performance Comparison of Classical Methods and Neural Networks for Colour Correction. J Imaging 2023; 9:214. [PMID: 37888321] [PMCID: PMC10607821] [DOI: 10.3390/jimaging9100214]
Abstract
Colour correction is the process of converting RAW RGB pixel values of digital cameras to a standard colour space such as CIE XYZ. A range of regression methods including linear, polynomial and root-polynomial least-squares have been deployed. However, in recent years, various neural network (NN) models have also started to appear in the literature as an alternative to classical methods. In the first part of this paper, a leading neural network approach is compared and contrasted with regression methods. We find that, although the neural network model supports improved colour correction compared with simple least-squares regression, it performs less well than the more advanced root-polynomial regression. Moreover, the relative improvement afforded by NNs, compared to linear least-squares, is diminished when the regression methods are adapted to minimise a perceptual colour error. Problematically, unlike linear and root-polynomial regressions, the NN approach is tied to a fixed exposure (and when exposure changes, the afforded colour correction can be quite poor). We explore two solutions that make NNs more exposure-invariant. First, we use data augmentation to train the NN for a range of typical exposures and second, we propose a new NN architecture which, by construction, is exposure-invariant. Finally, we look into how the performance of these algorithms is influenced when models are trained and tested on different datasets. As expected, the performance of all methods drops when tested with completely different datasets. However, we noticed that the regression methods still outperform the NNs in terms of colour correction, even though the relative performance of the regression methods does change based on the train and test datasets.
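The degree-2 root-polynomial expansion referenced above is exposure-invariant by construction: every term has degree one, so scaling the RGB by k scales every feature by k. A minimal NumPy sketch (the RGB-to-XYZ mixing matrix and sample data are hypothetical, not the paper's):

```python
import numpy as np

def root_poly_features(rgb):
    """Degree-2 root-polynomial expansion: r, g, b, sqrt(rg), sqrt(rb), sqrt(gb).
    Every term has degree 1, so scaling RGB by k scales each feature by k --
    the source of the exposure invariance discussed above."""
    r, g, b = rgb.T
    return np.stack([r, g, b,
                     np.sqrt(r * g), np.sqrt(r * b), np.sqrt(g * b)], axis=1)

rng = np.random.default_rng(1)
rgb = rng.uniform(0.1, 1.0, (50, 3))          # synthetic RAW responses
sensor = np.array([[0.41, 0.21, 0.02],        # hypothetical RGB -> XYZ mixing
                   [0.36, 0.72, 0.12],
                   [0.18, 0.07, 0.95]])
xyz = rgb @ sensor

M, *_ = np.linalg.lstsq(root_poly_features(rgb), xyz, rcond=None)

# Halving the exposure halves the corrected output exactly.
pred_full = root_poly_features(rgb) @ M
pred_half = root_poly_features(0.5 * rgb) @ M
assert np.allclose(pred_half, 0.5 * pred_full)
```

A plain degree-2 polynomial expansion (terms like rg, r²) lacks this property, which is why changing exposure degrades it, and, per the abstract, also degrades the fixed-exposure NN.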
Affiliation(s)
- Abdullah Kucuk, Graham D. Finlayson: School of Computing Sciences, University of East Anglia, Norwich NR4 7TJ, UK
- Rafal Mantiuk, Maliha Ashraf: Department of Computer Science and Technology, University of Cambridge, Cambridge CB3 0FD, UK
5
Wu YL, Wang CS, Weng WC, Lin YC. Development of a Cloud-Based Image Processing Health Checkup System for Multi-Item Urine Analysis. Sensors (Basel) 2023; 23:7733. [PMID: 37765790] [PMCID: PMC10535996] [DOI: 10.3390/s23187733]
Abstract
With the busy pace of modern life, an increasing number of people are afflicted by lifestyle diseases. Going directly to the hospital for medical checks is not only time-consuming but also costly. Fortunately, the emergence of rapid tests has alleviated this burden. Accurately interpreting test results is extremely important; misinterpreting the results of rapid tests could lead to delayed medical treatment. The URS-10 is a rapid test capable of detecting 10 distinct parameters in urine samples, and assessing these parameters can offer insights into the subject's physiological condition, including metabolism, renal function, diabetes, urinary tract disorders, hemolytic diseases, and acid-base balance. Although the operational procedure is straightforward, the varied color changes exhibited by the individual parameters make it challenging for lay users to deduce causal factors solely from color variations, and visual discrepancies can lead to misinterpretation. In this study, we successfully developed a cloud-based health checkup system for indoor use. The user places a URS-10 test strip on a colorimetric board developed for this study, then uses a smartphone application to take images, which are uploaded to a server for cloud computing. Finally, the interpretation results are stored in the cloud and sent back to the smartphone for the user to check. Furthermore, to confirm that the color calibration technology eliminates color differences between cameras, and that the colorimetric board and urine test strips can be compared correctly under different light intensities, indoor environments simulating specific light intensities were established for testing. When comparing the experimental results to real test strips, only two groups failed to reach an identification success rate of 100%, and in both cases the success rate reached 95%. The experimental results confirmed that the system was able to eliminate color differences between camera devices and could be used without special technical requirements or training.
Affiliation(s)
- Yu-Lin Wu, Chien-Shun Wang, Wei-Chien Weng: Department of Engineering Science, National Cheng Kung University, 1 University Road, Tainan 70101, Taiwan
- Yu-Cheng Lin: Department of Engineering Science, National Cheng Kung University, 1 University Road, Tainan 70101, Taiwan; Institute of Innovative Semiconductor Manufacturing, National Sun Yat-sen University, 70 Lien-hai Road, Kaohsiung 804, Taiwan
6
Chu ML, Ge XYM, Eastham J, Nguyen T, Fuji RN, Sullivan R, Ruderman D. Assessment of Color Reproducibility and Mitigation of Color Variation in Whole Slide Image Scanners for Toxicologic Pathology. Toxicol Pathol 2023; 51:313-328. [PMID: 38288712] [DOI: 10.1177/01926233231224468]
Abstract
Digital pathology workflows in toxicologic pathology rely on whole slide images (WSIs) from histopathology slides. Inconsistent color reproduction by WSI scanners of different models and from different manufacturers can result in different color representations and inter-scanner color variation in the WSIs. Although pathologists can accommodate a range of color variation during their evaluation of WSIs, color variability can degrade the performance of computational applications in digital pathology. In particular, color variability can compromise the generalization of artificial intelligence applications to large volumes of data from diverse sources. To address these challenges, we developed a process that includes two modules: (1) assessing the color reproducibility of our scanners and the color variation among them and (2) applying color correction to WSIs to minimize the color deviation and variation. Our process ensures consistent color reproduction across WSI scanners and enhances color homogeneity in WSIs, and its flexibility enables easy integration as a post-processing step following scanning by WSI scanners of different models and from different manufacturers.
Affiliation(s)
- Mei-Lan Chu, Xing-Yue M Ge, Trung Nguyen, Reina N Fuji, Ruth Sullivan: Genentech Inc., South San Francisco, California, USA
7
Zhao S, Liu L, Feng Z, Liao N, Liu Q, Xie X. Colorimetric Characterization of Color Imaging System Based on Kernel Partial Least Squares. Sensors (Basel) 2023; 23:5706. [PMID: 37420871] [DOI: 10.3390/s23125706]
Abstract
Colorimetric characterization is the basis of color information management in color imaging systems. In this paper, we propose a colorimetric characterization method based on kernel partial least squares (KPLS) for color imaging systems. The method takes the kernel function expansion of the three-channel response values (RGB) in the device-dependent space of the imaging system as input feature vectors, and CIE-1931 XYZ as output vectors. We first establish a KPLS color-characterization model for color imaging systems, then determine the hyperparameters by nested cross-validation and grid search, realizing the color space transformation model. The proposed model is validated experimentally, using the CIELAB, CIELUV and CIEDE2000 color differences as evaluation metrics. The results of the nested cross-validation test on the ColorChecker SG chart show that the proposed model is superior to a weighted nonlinear regression model and a neural network model, with good prediction accuracy.
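As a rough illustration of the kernel-based RGB-to-XYZ characterization idea, the sketch below fits a kernel ridge regression instead. Note this is only a stand-in for KPLS (which extracts latent components rather than solving a single regularized system), and the nonlinear device model, RBF parameters, and sample data are all invented.

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    """RBF (Gaussian) kernel matrix between two sets of RGB vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
rgb = rng.uniform(0.0, 1.0, (80, 3))          # device-dependent RGB responses
mix = np.array([[0.40, 0.20, 0.02],           # hypothetical mixing matrix
                [0.35, 0.70, 0.12],
                [0.18, 0.07, 0.95]])
xyz = np.tanh(rgb @ mix)                      # invented nonlinear device model

lam = 1e-6                                    # ridge regularization
K = rbf_kernel(rgb, rgb)
alpha = np.linalg.solve(K + lam * np.eye(len(rgb)), xyz)

pred = K @ alpha                              # fitted XYZ on the training chart
err = np.mean(np.abs(pred - xyz))
baseline = np.mean(np.abs(xyz - xyz.mean(axis=0)))
assert err < baseline                         # kernel fit beats a constant predictor
```

In the paper's setting, the kernel hyperparameters would be chosen by the nested cross-validation and grid search described above rather than fixed by hand.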
Affiliation(s)
- Siyu Zhao, Lu Liu, Zibing Feng, Xufen Xie: School of Information Science and Engineering, Dalian Polytechnic University, Dalian 116034, China
- Ningfang Liao: National Key Lab of Colour Science and Engineering, Beijing Institute of Technology, Beijing 100081, China
- Qiang Liu: School of Information Science and Engineering, Dalian Polytechnic University, Dalian 116034, China; Research Center of Graphic Communication, Printing and Packaging, Wuhan University, Wuhan 430079, China
8
Chen S, Lü B, Wu X, Liu W, Lü Q. Filter design and color correction for the X-cube prism 3CCD camera. Applied Optics 2023; 62:979-988. [PMID: 36821156] [DOI: 10.1364/ao.472758]
Abstract
For the X-cube prism three-charge-coupled-device (3CCD) camera, the spectra of the designed dichroic films in the X-cube prism shift with changes in the angle of incident light, producing non-uniformity of color on the image plane. We considered the influence of the incident angle on color performance in filter design and directly optimized the thin film to improve color consistency. An optical model was constructed to calculate the distribution of camera spectral sensitivity and independently correct the non-uniform color on the image plane. Results showed that the optimization and correction methods could significantly improve the color performance of the X-cube prism 3CCD camera.
9
Dal’Col L, Coelho D, Madeira T, Dias P, Oliveira M. A Sequential Color Correction Approach for Texture Mapping of 3D Meshes. Sensors (Basel) 2023; 23:607. [PMID: 36679413] [PMCID: PMC9865480] [DOI: 10.3390/s23020607]
Abstract
Texture mapping can be defined as the colorization of a 3D mesh using one or multiple images. In the case of multiple images, this process often results in textured meshes with unappealing visual artifacts, known as texture seams, caused by the lack of color similarity between the images. The main goal of this work is to create textured meshes free of texture seams by color correcting all the images used. We propose a novel color-correction approach, called sequential pairwise color correction, capable of color correcting multiple images from the same scene, using a pairwise-based method. This approach consists of sequentially color correcting each image of the set with respect to a reference image, following color-correction paths computed from a weighted graph. The color-correction algorithm is integrated with a texture-mapping pipeline that receives uncorrected images, a 3D mesh, and point clouds as inputs, producing color-corrected images and a textured mesh as outputs. Results show that the proposed approach outperforms several state-of-the-art color-correction algorithms, both in qualitative and quantitative evaluations. The approach eliminates most texture seams, significantly increasing the visual quality of the textured meshes.
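The sequential idea above, correcting each image toward a reference by composing pairwise corrections along a path through the image-overlap graph, can be sketched as follows. The per-channel-gain image model, the adjacency, and the use of unweighted BFS in place of the paper's weighted-graph paths are all simplifications for illustration.

```python
import numpy as np
from collections import deque

# Toy scene patch seen by three images, each with different per-channel
# gains (hypothetical values); image 0 is the reference.
scene = np.random.default_rng(3).uniform(0.2, 0.8, (100, 3))
gains = {0: np.array([1.0, 1.0, 1.0]),
         1: np.array([1.2, 1.0, 0.9]),
         2: np.array([1.4, 1.1, 0.8])}
images = {i: scene * g for i, g in gains.items()}

# Overlap graph: image 2 shares pixels only with image 1, which in turn
# shares pixels with the reference image 0.
adjacency = {0: [1], 1: [0, 2], 2: [1]}

def bfs_path(src, dst):
    """Shortest correction path (unweighted BFS stand-in for the paper's
    weighted-graph colour-correction paths)."""
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    path = [dst]
    while prev[path[-1]] is not None:
        path.append(prev[path[-1]])
    return path[::-1]

def pairwise_gain(src, dst):
    """Per-channel correction mapping image src towards image dst,
    estimated from their overlapping pixels."""
    return np.mean(images[dst] / images[src], axis=0)

path = bfs_path(2, 0)                  # correction path: 2 -> 1 -> 0
corrected = images[2].copy()
for a, b in zip(path, path[1:]):       # compose corrections along the path
    corrected = corrected * pairwise_gain(a, b)

assert np.allclose(corrected, images[0])
```

Composing corrections along a path is what lets image 2 be harmonized with the reference even though the two never overlap directly.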
Affiliation(s)
- Lucas Dal’Col, Daniel Coelho, Miguel Oliveira: Intelligent System Associate Laboratory (LASI), Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro, 3810-193 Aveiro, Portugal; Department of Mechanical Engineering (DEM), University of Aveiro, 3810-193 Aveiro, Portugal
- Tiago Madeira, Paulo Dias: Intelligent System Associate Laboratory (LASI), Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro, 3810-193 Aveiro, Portugal; Department of Electronics, Telecommunications and Informatics (DETI), University of Aveiro, 3810-193 Aveiro, Portugal
10
Wen YC, Wen S, Hsu L, Chi S. Irradiance Independent Spectrum Reconstruction from Camera Signals Using the Interpolation Method. Sensors (Basel) 2022; 22:8498. [PMID: 36366197] [PMCID: PMC9656597] [DOI: 10.3390/s22218498]
Abstract
The spectrum of light captured by a camera can be reconstructed using the interpolation method: the reconstructed spectrum is a linear combination of the reference spectra, where the weighting coefficients are calculated from the signals of the pixel and the reference samples by interpolation. This method is known as the look-up table (LUT) method. It is irradiance-dependent because the shape of the reconstructed spectrum depends on the sample irradiance. Since the irradiance can vary in field applications, an irradiance-independent LUT (II-LUT) method is required to recover spectral reflectance. This paper proposes an II-LUT method that interpolates the spectrum in the normalized signal space. Munsell color chips irradiated with D65 were used as samples, and a tricolor camera and a quadcolor camera were used as examples. Results show that the proposed method achieves irradiance-independent spectrum reconstruction and saves computation time, at the expense of a larger recovered spectral reflectance error. Considering that irradiance variation introduces additional errors, the actual mean error of the II-LUT method may be smaller than that of the irradiance-dependent (ID) LUT method. The proposed method also outperformed the weighted principal component analysis method in both accuracy and computation speed.
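The irradiance independence comes from working in a normalized signal space: scaling a signal by any positive factor leaves its normalized coordinates, and hence the interpolation weights, unchanged. A toy sketch using inverse-distance weighting in place of the paper's LUT interpolation; the reference spectra and camera sensitivities are random stand-ins.

```python
import numpy as np

def normalize(sig):
    """Divide out the channel sum; any positive scaling of the signal
    (i.e. the irradiance) cancels here."""
    return sig / sig.sum(axis=-1, keepdims=True)

def reconstruct(signal, ref_signals, ref_spectra, eps=1e-9):
    """Inverse-distance-weighted mix of reference spectra, computed in
    the normalized signal space (toy stand-in for LUT interpolation)."""
    d = np.linalg.norm(normalize(ref_signals) - normalize(signal), axis=1)
    w = 1.0 / (d + eps)
    w /= w.sum()
    return w @ ref_spectra

rng = np.random.default_rng(4)
ref_spectra = rng.uniform(0.0, 1.0, (20, 31))   # 31-band reference reflectances
sens = rng.uniform(0.0, 1.0, (31, 3))           # hypothetical camera sensitivities
ref_signals = ref_spectra @ sens

sig = ref_signals[5]
rec_bright = reconstruct(sig, ref_signals, ref_spectra)
rec_dim = reconstruct(0.3 * sig, ref_signals, ref_spectra)  # 70 % less irradiance
assert np.allclose(rec_bright, rec_dim)         # irradiance does not change the result
```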
Affiliation(s)
- Yu-Che Wen, Long Hsu: Department of Electrophysics, National Yang Ming Chiao Tung University, No. 1001 University Road, Hsinchu 30010, Taiwan
- Senfar Wen: Department of Electrical Engineering, Yuan Ze University, No. 135 Yuan-Tung Road, Taoyuan 32003, Taiwan
- Sien Chi: Department of Photonics, National Yang Ming Chiao Tung University, No. 1001 University Road, Hsinchu 30010, Taiwan
11
Wen YC, Wen S, Hsu L, Chi S. Spectral Reflectance Recovery from the Quadcolor Camera Signals Using the Interpolation and Weighted Principal Component Analysis Methods. Sensors (Basel) 2022; 22:6288. [PMID: 36016049] [PMCID: PMC9416231] [DOI: 10.3390/s22166288]
Abstract
The recovery of surface spectral reflectance using a quadcolor camera was numerically studied. The RGB channels of the quadcolor camera are assumed to be the same as those of the Nikon D5100 tricolor camera, and the spectral sensitivity of the fourth signal channel was tailored using a color filter. Munsell color chips were used as reflective surfaces. When the interpolation method or the weighted principal component analysis (wPCA) method is used to reconstruct spectra, the quadcolor camera effectively reduces the mean spectral error of the test samples compared to the tricolor camera. Except for computation time, the interpolation method outperforms the wPCA method in spectrum reconstruction. A long-pass optical filter can be applied to the fourth channel to reduce the mean spectral error; a short-pass optical filter can instead reduce the mean color difference, at the cost of a larger mean spectral error. Owing to its small color difference, the quadcolor camera with an optimized short-pass filter may be suitable as an imaging colorimeter. An empirical design rule for keeping the color difference small is to reduce the error in fitting the color-matching functions with the camera spectral sensitivity functions.
Affiliation(s)
- Yu-Che Wen, Long Hsu: Department of Electrophysics, National Yang Ming Chiao Tung University, No. 1001 University Road, Hsinchu 30010, Taiwan
- Senfar Wen: Department of Electrical Engineering, Yuan Ze University, No. 135 Yuan-Tung Road, Taoyuan 320, Taiwan
- Sien Chi: Department of Photonics, National Yang Ming Chiao Tung University, No. 1001 University Road, Hsinchu 30010, Taiwan
12
Particle Swarm Optimisation in Practice: Multiple Applications in a Digital Microscope System. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12157827]
Abstract
We demonstrate that particle swarm optimisation (PSO) can be used to solve a variety of problems arising during operation of a digital inspection microscope. This is a use case for the feasibility of heuristics in a real-world product. We show solutions to four measurement problems, all based on PSO. This allows for a compact software implementation solving different problems. We have found that PSO can solve a variety of problems with small software footprints and good results in a real-world embedded system. Notably, in the microscope application, this eliminates the need to return the device to the factory for calibration.
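For reference, a minimal global-best PSO of the generic kind the paper applies; the objective here is a toy quadratic bowl, not any of the microscope measurement problems, and the swarm parameters are conventional textbook values.

```python
import numpy as np

def pso(f, dim=2, n_particles=20, iters=100, seed=5):
    """Minimal global-best particle swarm optimisation with standard
    inertia (0.7) and acceleration (1.5, 1.5) coefficients."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))                 # particle velocities
    pbest = x.copy()                                 # per-particle best positions
    pval = np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()              # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        val = np.array([f(p) for p in x])
        improved = val < pval
        pbest[improved] = x[improved]
        pval[improved] = val[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest

# Toy objective: quadratic bowl with its minimum at (1, 1).
best = pso(lambda p: ((p - 1.0) ** 2).sum())
assert ((best - 1.0) ** 2).sum() < 1e-2
```

The same small solver can be pointed at very different objectives, which is exactly the "compact software implementation solving different problems" property the abstract highlights.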
13
Wang L, Wu B, Wang X, Zhu Q, Xu K. Endoscopic image luminance enhancement based on the inverse square law for illuminance and retinex. Int J Med Robot 2022; 18:e2396. [PMID: 35318786] [DOI: 10.1002/rcs.2396]
Abstract
Background: In a single-port robotic system where the 3D endoscope possesses two bending segments, only point light sources can be integrated at the tip due to space limitations. However, point light sources usually provide non-uniform illumination, causing endoscopic images to appear bright in the centre and dark near the corners.
Methods: Based on the inverse square law for illuminance, an initial luminance weighting is first proposed to increase the image luminance uniformity. A saturation-based model then finalises the luminance weighting to avoid overexposure and colour discrepancy, while the single-scale retinex (SSR) scheme is employed for noise control.
Results: In qualitative and quantitative comparisons, the proposed method effectively enhances the luminance and uniformity of endoscopic images, in terms of both visual perception and objective assessment.
Conclusions: The proposed method can effectively reduce the image degradation caused by point light sources.
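The first step, a radial luminance weighting that counteracts point-source falloff, can be sketched with a toy gain map. The quadratic-in-radius boost below merely mimics inverse-square falloff compensation and is not the paper's exact weighting or saturation model.

```python
import numpy as np

def falloff_weight(h, w, strength=0.5):
    """Radial gain map: pixels far from the illuminated centre get a
    larger boost (quadratic-in-radius toy model of inverse-square
    falloff compensation; `strength` is an invented tuning knob)."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = (yy - cy) ** 2 + (xx - cx) ** 2
    r2 = r2 / r2.max()                  # normalised squared radius in [0, 1]
    return 1.0 + strength * r2

img = np.full((64, 64), 0.6)            # uniformly lit target scene
weight = falloff_weight(64, 64)
vignetted = img / weight                # simulate corner darkening
restored = vignetted * weight           # apply the luminance weighting

assert vignetted[0, 0] < vignetted[32, 32]   # corners darker than the centre
assert np.allclose(restored, img)            # the weighting undoes the falloff
```

In the paper, this initial weighting is further constrained by the saturation-based model so that boosting dark corners does not overexpose or shift colours.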
Affiliation(s)
- Longfei Wang, Baibo Wu, Xiang Wang, Kai Xu: School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Qingyi Zhu: Department of Urology, The Second Affiliated Hospital of Nanjing Medical University, Nanjing, China
14
Auxiliary Reference Samples for Extrapolating Spectral Reflectance from Camera RGB Signals. Sensors (Basel) 2022; 22:4923. [PMID: 35808412] [PMCID: PMC9269503] [DOI: 10.3390/s22134923]
Abstract
Surface spectral reflectance is useful for color reproduction. In this study, the reconstruction of spectral reflectance using a conventional camera was investigated. The spectrum reconstruction error could be reduced by interpolating camera RGB signals, in contrast to methods based on basis spectra, such as principal component analysis (PCA). The disadvantage of the interpolation method is that it cannot interpolate samples outside the convex hull of reference samples in the RGB signal space. An interpolation method utilizing auxiliary reference samples (ARSs) to extrapolate the outside samples is proposed in this paper. The ARSs were created using reference samples and color filters. The convex hull of the reference samples and ARSs was expanded to enclose outside samples for extrapolation. A commercially available camera was taken as an example. The results show that with the proposed method, the extrapolation error was smaller than that of the computationally time-consuming weighted PCA method. A low cost and fast detection speed for spectral reflectance recovery can be achieved using a conventional camera.
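The paper's method interpolates (and, with ARSs, extrapolates) in RGB signal space. As a point of reference, the simpler linear least-squares recovery that basis-spectra methods build on can be sketched as follows; all data here are synthetic, and the spectra are deliberately confined to a 3-D subspace so that a linear RGB-to-spectrum map can recover them exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

bands, n_refs = 31, 24
# Stand-in camera spectral sensitivities (bands x 3).
sensitivities = rng.random((bands, 3))
# Synthetic reference reflectances confined to a 3-D spectral subspace,
# so a linear RGB -> spectrum map can recover them exactly.
basis = rng.random((bands, 3))
reflectances = rng.random((n_refs, 3)) @ basis.T          # n_refs x bands
rgbs = reflectances @ sensitivities                       # n_refs x 3

# Fit W (3 x bands) minimising ||rgbs @ W - reflectances||_F.
W, *_ = np.linalg.lstsq(rgbs, reflectances, rcond=None)

def recover_spectrum(rgb):
    """Estimate a reflectance spectrum from a camera RGB triplet."""
    return rgb @ W

# Recover a held-out spectrum drawn from the same subspace.
target = rng.random(3) @ basis.T
estimate = recover_spectrum(target @ sensitivities)
```

Real reflectances do not live in a 3-D subspace, which is exactly why interpolation over reference samples, and extrapolation via ARSs for points outside their convex hull, can outperform a single global linear or basis fit.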
|
15
|
Hu Z, Nsampi NE, Wang X, Wang Q. PNRNet: Physically-Inspired Neural Rendering for Any-to-Any Relighting. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:3935-3948. [PMID: 35635816 DOI: 10.1109/tip.2022.3177311] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Existing any-to-any relighting methods suffer from the task-aliasing effects and the loss of local details in the image generation process, such as shading and attached-shadow. In this paper, we present PNRNet, a novel neural architecture that decomposes the any-to-any relighting task into three simpler sub-tasks, i.e. lighting estimation, color temperature transfer, and lighting direction transfer, to avoid the task-aliasing effects. These sub-tasks are easy to learn and can be trained with direct supervisions independently. To better preserve local shading and attached-shadow details, we propose a parallel multi-scale network that incorporates multiple physical attributes to model local illuminations for lighting direction transfer. We also introduce a simple yet effective color temperature transfer network to learn a pixel-level non-linear function which allows color temperature adjustment beyond the predefined color temperatures and generalizes well to real images. Extensive experiments demonstrate that our proposed approach achieves better results quantitatively and qualitatively than prior works.
|
16
|
Zhu Y, Finlayson GD. Matched illumination: using light modulation as a proxy for a color filter that makes a camera more colorimetric. OPTICS EXPRESS 2022; 30:22006-22024. [PMID: 36224909 DOI: 10.1364/oe.451839] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Accepted: 03/10/2022] [Indexed: 06/16/2023]
Abstract
In previous work, it was shown that a camera can theoretically be made more colorimetric-its RGBs become more linearly related to XYZ tristimuli-by placing a specially designed color filter in the optical path. While the prior art demonstrated the principle, the optimal color-correction filters were not actually manufactured. In this paper, we provide a novel way of creating the color filtering effect without making a physical filter: we modulate the spectrum of the light source by using a spectrally tunable lighting system to recast the prefiltering effect from a lighting perspective. According to our method, if we wish to measure color under a D65 light, we relight the scene with a modulated D65 spectrum where the light modulation mimics the effect of color prefiltering in the prior art. We call our optimally modulated light, the matched illumination. In the experiments, using synthetic and real measurements, we show that color measurement errors can be reduced by about 50% or more on simulated data and 25% or more on real images when the matched illumination is used.
|
17
|
Abstract
Launched in March 2021, the 3U CubeSat nanosatellite was the first ever to use an ultra-lightweight harmonic diffractive lens for Earth remote sensing. We describe the CubeSat platform we used; the synthesis, design, and manufacturing of our 10 mm diameter, 70 mm focal length lens; a custom 3D-printed camera housing built from a zero-thermal-expansion metal alloy; and the on-Earth image post-processing with a convolutional neural network, resulting in images comparable in quality to those of the classical refractive optics previously used for remote sensing.
|
18
|
Coelho D, Dal’Col L, Madeira T, Dias P, Oliveira M. A Robust 3D-Based Color Correction Approach for Texture Mapping Applications. SENSORS (BASEL, SWITZERLAND) 2022; 22:1730. [PMID: 35270879 PMCID: PMC8914668 DOI: 10.3390/s22051730] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/22/2022] [Revised: 02/17/2022] [Accepted: 02/18/2022] [Indexed: 11/16/2022]
Abstract
Texture mapping of 3D models using multiple images often results in textured meshes with unappealing visual artifacts known as texture seams. These artifacts can be more or less visible, depending on the color similarity between the used images. The main goal of this work is to produce textured meshes free of texture seams through a process of color correcting all images of the scene. To accomplish this goal, we propose two contributions to the state-of-the-art of color correction: a pairwise-based methodology, capable of color correcting multiple images from the same scene; the application of 3D information from the scene, namely meshes and point clouds, to build a filtering procedure, in order to produce a more reliable spatial registration between images, thereby increasing the robustness of the color correction procedure. We also present a texture mapping pipeline that receives uncorrected images, an untextured mesh, and point clouds as inputs, producing a final textured mesh and color corrected images as output. Results include a comparison with four other color correction approaches. These show that the proposed approach outperforms all others, both in qualitative and quantitative metrics. The proposed approach enhances the visual quality of textured meshes by eliminating most of the texture seams.
Affiliation(s)
- Daniel Coelho
- Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal; (L.D.); (M.O.)
- Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, 3810-193 Aveiro, Portugal; (T.M.); (P.D.)
| | - Lucas Dal’Col
- Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal; (L.D.); (M.O.)
- Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, 3810-193 Aveiro, Portugal; (T.M.); (P.D.)
| | - Tiago Madeira
- Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, 3810-193 Aveiro, Portugal; (T.M.); (P.D.)
| | - Paulo Dias
- Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, 3810-193 Aveiro, Portugal; (T.M.); (P.D.)
- Department of Electronics, Telecommunications and Informatics, University of Aveiro, 3810-193 Aveiro, Portugal
| | - Miguel Oliveira
- Department of Mechanical Engineering, University of Aveiro, 3810-193 Aveiro, Portugal; (L.D.); (M.O.)
- Institute of Electronics and Informatics Engineering of Aveiro, University of Aveiro, 3810-193 Aveiro, Portugal; (T.M.); (P.D.)
|
19
|
Accurate Quantification of Anthocyanin in Red Flesh Apples Using Digital Photography and Image Analysis. HORTICULTURAE 2022. [DOI: 10.3390/horticulturae8020145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Red fleshed apples (Malus × domestica Borkh.) differ in colour intensity between cultivars, seasons and sites. The objective of this study was to develop a procedure for predicting anthocyanin content from digital images of flesh discs. Flesh cylinders of uniform colour were excised, scanned and their colours determined in the R, G and B and the L*a*b* colour spaces. Anthocyanin content was also quantified chemically. A calibration line was constructed to predict anthocyanin content of flesh discs of varying colour from a scan or a photograph in the studio or outdoors. Anthocyanin concentration was linearly related to the logarithms of G, B and L*. From these relationships, the anthocyanin content of a flesh disc was predicted, pixel by pixel. Colour corrections were applied using a reference colour chart included in all images. The Finlayson algorithm was most effective for correcting the G parameter obtained by a flatbed scanner. For variable imaging methods (scanning or photography), the Vandermonde algorithm for correcting the L* parameter and the Finlayson algorithm for correcting the G parameter were most effective in predicting anthocyanin content. The procedure allows accurate prediction of anthocyanin content of red fleshed apples from simple colour scans or photographs.
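The calibration line described above (anthocyanin content linearly related to the logarithm of G) amounts to an ordinary least-squares fit. The sample values below are hypothetical, not the study's data:

```python
import math

# Hypothetical calibration pairs (mean green-channel value G of a flesh
# disc, chemically measured anthocyanin content) -- illustrative only.
samples = [(200.0, 5.0), (150.0, 12.0), (100.0, 22.0), (60.0, 35.0)]

# Fit anthocyanin = a * log(G) + b by ordinary least squares.
xs = [math.log(g) for g, _ in samples]
ys = [c for _, c in samples]
n = len(samples)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

def predict_anthocyanin(g):
    """Predict anthocyanin content from a green-channel reading."""
    return a * math.log(g) + b
```

Applied pixel by pixel after colour correction against the reference chart, such a line yields a per-pixel anthocyanin map of the flesh disc.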
|
20
|
Which Features Are More Correlated to Illuminant Estimation: A Composite Substitute. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12031175] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Computational color constancy (CCC) aims to endow computers or cameras with the capability to remove the color bias caused by different scene illuminations. The first step of CCC is illuminant estimation, i.e., calculating the illuminant color for a given image scene. Recently, methods that directly map image features to an illuminant estimate have provided an effective and robust solution to this problem. Nevertheless, given the diversity of image features, it is unclear which features should be selected to model the illuminant color. In this research, a series of artificial features woven into a mapping-based illuminant estimation framework is extensively investigated. This framework employs a multi-model structure and integrates kernel-based fuzzy c-means (KFCM) clustering, non-negative least squares regression (NLSR), and fuzzy weighting. By comparing the resulting performance of different features, the features more correlated with illuminant estimation are identified within the candidate feature set. Furthermore, composite features are designed to achieve outstanding illuminant estimation performance. Extensive experiments are performed on typical benchmark datasets and the effectiveness of the proposed method is validated. The proposed method makes illuminant estimation an explicit transformation of suitable image features with regressed and fuzzy weights, which has significant potential for both competitive performance and fast implementation against state-of-the-art methods.
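For context, the simplest statistics-based illuminant estimator that mapping-based methods such as this are routinely compared against is the gray-world algorithm, sketched below. This is a classic baseline, not the paper's KFCM + NLSR framework:

```python
import numpy as np

def gray_world(img):
    """Gray-world illuminant estimate: assume the average scene
    reflectance is achromatic, so the mean RGB of the image is
    proportional to the illuminant colour (returned unit-normalised)."""
    est = img.reshape(-1, 3).mean(axis=0)
    return est / np.linalg.norm(est)

# A flat gray scene lit by a reddish illuminant (2, 1, 1).
scene = np.ones((8, 8, 3)) * np.array([2.0, 1.0, 1.0])
estimate = gray_world(scene)
```

Gray-world fails whenever the scene's average reflectance is not neutral, which is precisely the failure mode that learned feature-to-illuminant mappings are designed to overcome.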
|
21
|
Bhatnagar V, Bansod PP. Challenges and Solutions in Automated Tongue Diagnosis Techniques: A Review. Crit Rev Biomed Eng 2022; 50:47-63. [PMID: 35997110 DOI: 10.1615/critrevbiomedeng.2022044392] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Tongue diagnosis is used in various traditional medicine cultures as a non-invasive method for assessing an individual's health. Tongue image analysis has the potential to assess the metabolism and functionality of the internal organs, making it a quick method of diagnosis. As automated systems give quantitative and objective results and thereby facilitate diagnosis, a review was conducted to evaluate the literature on current methods of tongue diagnosis. Different methods of tongue diagnosis in the literature were identified and compared. Information on automated tongue diagnosis systems, such as image acquisition, color correction, segmentation, feature extraction, and classification, particularly in traditional medicine, was reviewed. The aim of the review was to identify effective image processing techniques compatible with an automated tongue diagnosis system using imaging devices that are readily available to everyone, rather than dedicated state-of-the-art acquisition systems, which may not be accessible to the general public. All methods identified were either being researched or developed, and no specific system was identified that is currently available for routine use in clinics or for home monitoring of patients. The healthcare sector could benefit from access to validated and automated tongue diagnosis systems, and the feasibility of a mobile-enabled platform that intelligently exploits this traditional method of diagnosis should be explored. By providing cheap and quick preliminary diagnosis for clinical practice, automation of this non-invasive traditional technique could prove to be a boon for the healthcare sector.
Affiliation(s)
- Vibha Bhatnagar
- Department of Biomedical Engineering, Shri. G.S. Institute of Technology & Science, Indore 452003, India
| | - Prashant P Bansod
- Department of Biomedical Engineering, Shri. G.S. Institute of Technology & Science, Indore 452003, India
|
22
|
Rocha I, Azevedo F, Carvalho PH, Peixoto PS, Segundo MA, Oliveira HP. An Edge-Based Computer Vision Approach for Determination of Sulfonamides in Water. PATTERN RECOGNITION AND IMAGE ANALYSIS 2022. [DOI: 10.1007/978-3-031-04881-4_33] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
23
|
Lo IC, Shih KT, Chen HH. Efficient and Accurate Stitching for 360° Dual-Fisheye Images and Videos. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 31:251-262. [PMID: 34855594 DOI: 10.1109/tip.2021.3130531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Back-to-back dual-fisheye cameras are the most cost-effective devices to capture 360° visual content. However, image and video stitching for such cameras often suffer from the effect of fisheye distortion, photometric inconsistency between the two views, and non-collocated optical centers. In this paper, we present algorithms for geometric calibration, photometric compensation, and seamless stitching to address these issues for back-to-back dual-fisheye cameras. Specifically, we develop a co-centric trajectory model for geometric calibration to characterize both intrinsic and extrinsic parameters of the fisheye camera to fifth-order precision, a photometric correction model for intensity and color compensation to provide efficient and accurate local color transfer, and a mesh deformation model along with an adaptive seam carving method for image stitching to reduce geometric distortion and ensure optimal spatiotemporal alignment. The stitching algorithm and the compensation algorithm can run efficiently for 1920×960 images. Quantitative evaluation of geometric distortion, color discontinuity, jitter, and ghost artifact of the resulting image and video shows that our solution outperforms the state-of-the-art techniques.
|
24
|
Abebe MA, Hardeberg JY, Vartdal G. Smartphones’ Skin Colour Reproduction Analysis for Neonatal Jaundice Detection. J Imaging Sci Technol 2021. [DOI: 10.2352/j.imagingsci.technol.2021.65.6.060407] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/01/2022]
Abstract
In recent years, smartphone-based colour imaging systems have been increasingly used for neonatal jaundice detection. These systems estimate bilirubin concentration levels from newborns' skin colour images, correlated with total serum bilirubin (TSB) and transcutaneous bilirubinometry (TcB) measurements. However, the colour reproduction capacity of smartphone cameras is known to be influenced by various factors, including technological and acquisition-process variability. To estimate bilirubin accurately, irrespective of the type of smartphone and the illumination conditions used to capture the newborns' skin images, an inclusive and complete model, or data set, representing all possible real-world acquisition scenarios needs to be utilized. Owing to the challenges of generating such a model or data set, some solutions rely on a reduced data set (designed for reference conditions and devices only) together with colour correction systems (to transform other smartphones' skin images to the reference space). Such approaches make the bilirubin estimation methods highly dependent on the accuracy of the employed colour correction systems and on their capability to reduce device-to-device colour reproduction variability. However, state-of-the-art methods with similar methodologies have only been evaluated and validated on a single smartphone camera. The vulnerability of these systems to making an incorrect jaundice diagnosis can only be shown by thoroughly investigating colour reproduction variability across an extended number of smartphones and illumination conditions. Accordingly, this work presents and discusses the results of such a broad investigation, covering seven smartphone cameras, ten light sources, and three different colour correction approaches. The overall results show statistically significant colour differences among devices, even after colour correction, and indicate that further analysis of the clinical significance of such differences is required for skin-colour-based jaundice diagnosis.
Affiliation(s)
| | | | - Gunnar Vartdal
- Colour and Visual Computing Laboratory, Picterus AS; Gjøvik, Norway
|
25
|
Cao Y, Zhao B, Tong X, Chen J, Yang J, Cao Y, Li X. Data-driven framework for high-accuracy color restoration of RGBN multispectral filter array sensors under extremely low-light conditions. OPTICS EXPRESS 2021; 29:23654-23670. [PMID: 34614627 DOI: 10.1364/oe.426940] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Accepted: 06/25/2021] [Indexed: 06/13/2023]
Abstract
RGBN multispectral filter array provides a cost-effective and one-shot acquisition solution to capture well-aligned RGB and near-infrared (NIR) images which are useful for various optical applications. However, signal responses of the R, G, B channels are inevitably distorted by the undesirable spectral crosstalk of the NIR bands, thus the captured RGB images are adversely desaturated. In this paper, we present a data-driven framework for effective spectral crosstalk compensation of RGBN multispectral filter array sensors. We set up a multispectral image acquisition system to capture RGB and NIR image pairs under various illuminations which are subsequently utilized to train a multi-task convolutional neural network (CNN) architecture to perform simultaneous noise reduction and color restoration. Moreover, we present a technique for generating high-quality reference images and a task-specific joint loss function to facilitate the training of the proposed CNN model. Experimental results demonstrate the effectiveness of the proposed method, outperforming the state-of-the-art color restoration solutions and achieving more accurate color restoration results for desaturated and noisy RGB images captured under extremely low-light conditions.
|
26
|
Trombini M, Ferraro F, Manfredi E, Petrillo G, Dellepiane S. Camera Color Correction for Cultural Heritage Preservation Based on Clustered Data. J Imaging 2021; 7:115. [PMID: 39080903 PMCID: PMC8321384 DOI: 10.3390/jimaging7070115] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 07/09/2021] [Accepted: 07/10/2021] [Indexed: 11/30/2022] Open
Abstract
Cultural heritage preservation is a crucial topic for our society. When dealing with fine art, color is a primary feature that encompasses much information related to the artwork's conservation status and to the pigments' composition. As an alternative to more sophisticated devices, the analysis and identification of color pigments may be addressed via a digital camera, i.e., a non-invasive, inexpensive, and portable tool for studying large surfaces. In the present study, we propose a new supervised approach to camera characterization based on clustered data in order to address the homoscedasticity of the acquired data. The experimental phase is conducted on a real pictorial dataset, where pigments are grouped according to their chromatic or chemical properties. The results show that such a procedure leads to better characterization with respect to state-of-the-art methods. In addition, the present study introduces a method to deal with organic pigments in a quantitative visual approach.
Affiliation(s)
- Marco Trombini
- Department of Electrical, Electronics and Telecommunication Engineering and Naval Architecture, Università degli Studi di Genova, Via All’Opera Pia 11A, 16145 Genoa, Italy; (M.T.); (F.F.)
| | - Federica Ferraro
- Department of Electrical, Electronics and Telecommunication Engineering and Naval Architecture, Università degli Studi di Genova, Via All’Opera Pia 11A, 16145 Genoa, Italy; (M.T.); (F.F.)
| | - Emanuela Manfredi
- Department of Chemistry and Industrial Chemistry, Università degli Studi di Genova, Via Dodecaneso 31, 16146 Genoa, Italy; (E.M.); (G.P.)
| | - Giovanni Petrillo
- Department of Chemistry and Industrial Chemistry, Università degli Studi di Genova, Via Dodecaneso 31, 16146 Genoa, Italy; (E.M.); (G.P.)
| | - Silvana Dellepiane
- Department of Electrical, Electronics and Telecommunication Engineering and Naval Architecture, Università degli Studi di Genova, Via All’Opera Pia 11A, 16145 Genoa, Italy; (M.T.); (F.F.)
|
27
|
Finlayson GD, Zhu Y. Designing Color Filters That Make Cameras More Colorimetric. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 30:853-867. [PMID: 33226947 DOI: 10.1109/tip.2020.3038523] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
When we place a colored filter in front of a camera, the effective camera response functions are equal to the given camera spectral sensitivities multiplied by the filter spectral transmittance. In this article, we solve for the filter which makes the modified sensitivities as close to being a linear transformation of the color matching functions of the human visual system as possible. When this linearity condition - sometimes called the Luther condition - is approximately met, the 'camera+filter' system can be used for accurate color measurement. Then, we reformulate our filter design optimisation to make the sensor responses as close to the CIE XYZ tristimulus values as possible given knowledge of real measured surface and illuminant spectra. This data-driven method is in turn extended to incorporate constraints on the filter (smoothness and bounded transmission). Also, because how the optimisation is initialised is shown to impact the performance of the solved-for filters, a multi-initialisation optimisation is developed. Experiments demonstrate that, by taking pictures through our optimised color filters, we can make cameras significantly more colorimetric.
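How closely a 'camera+filter' system satisfies the Luther condition can be scored by the residual of the best linear map from the filtered sensitivities to the colour matching functions. The sketch below uses random stand-in curves rather than real measured sensitivities; solving for the optimal filter itself is the paper's (nontrivial) optimisation and is not attempted here:

```python
import numpy as np

rng = np.random.default_rng(1)

bands = 31
cmfs = rng.random((bands, 3))     # stand-in colour matching functions
camera = rng.random((bands, 3))   # stand-in camera sensitivities

def luther_residual(transmittance):
    """Relative error of the best linear map taking the filtered camera
    sensitivities to the colour matching functions (0 means the Luther
    condition holds exactly, i.e. a perfectly colorimetric camera+filter)."""
    filtered = camera * transmittance[:, None]
    M, *_ = np.linalg.lstsq(filtered, cmfs, rcond=None)
    return np.linalg.norm(filtered @ M - cmfs) / np.linalg.norm(cmfs)

no_filter = luther_residual(np.ones(bands))
```

Note that a spectrally flat (neutral-density) filter cannot change this residual, since the linear map simply absorbs the uniform scale; only a spectrally varying transmittance can move the system closer to the Luther condition.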
|
28
|
Clouet A, Vaillant J, Alleysson D. The Geometry of Noise in Color and Spectral Image Sensors. SENSORS 2020; 20:s20164487. [PMID: 32796625 PMCID: PMC7471994 DOI: 10.3390/s20164487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Revised: 08/04/2020] [Accepted: 08/08/2020] [Indexed: 11/16/2022]
Abstract
Digital images are always affected by noise, and reducing its impact is an active field of research. Noise due to the random arrival of photons at the sensor is unavoidable but can be amplified by the camera's image processing, such as in the color correction step. Color correction is expressed as the combination of a spectral estimation and a computation of color coordinates in a display color space. We then use geometry to depict raw, spectral, and color signals and noise. The geometry is calibrated on the physics of image acquisition and the spectral characteristics of the sensor to study the impact of the sensor-space metric on noise amplification. Since spectral channels are non-orthogonal, we introduce the contravariant signal-to-noise ratio for noise evaluation at the spectral reconstruction level. Having defined a signal-to-noise ratio for each step of spectral or color reconstruction, we compare the performance of different types of sensors (RGB, RGBW, RGBWir, CMY, RYB, RGBC).
Affiliation(s)
- Axel Clouet
- CEA, Univ. Grenoble Alpes, LETI 38054 Grenoble CEDEX 9, France;
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105 Grenoble, France;
| | - Jérôme Vaillant
- CEA, Univ. Grenoble Alpes, LETI 38054 Grenoble CEDEX 9, France;
| | - David Alleysson
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105 Grenoble, France;
|
29
|
Choi W, Park HS, Kyung CM. Color reproduction pipeline for an RGBW color filter array sensor. OPTICS EXPRESS 2020; 28:15678-15690. [PMID: 32403590 DOI: 10.1364/oe.391253] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2020] [Accepted: 04/28/2020] [Indexed: 06/11/2023]
Abstract
Many types of RGBW color filter array (CFA) have been proposed for various purposes. Most studies utilize white pixel intensity for improving the signal-to-noise ratio of the image and demosaicing the image, but we note that the white pixel intensity can also be utilized to improve color reproduction. In this paper, we propose a color reproduction pipeline for RGBW CFA sensors based on a fast, accurate, and hardware-friendly gray pixel detection using white pixel intensity. The proposed color reproduction pipeline was tested on a dataset captured from an OPA sensor with an RGBW CFA. Experimental results show that the proposed pipeline estimates the illumination more accurately and preserves achromatic colors better than conventional methods which do not use white pixel intensity.
|
30
|
Accurate device-independent colorimetric measurements using smartphones. PLoS One 2020; 15:e0230561. [PMID: 32214340 PMCID: PMC7098568 DOI: 10.1371/journal.pone.0230561] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Accepted: 03/04/2020] [Indexed: 11/19/2022] Open
Abstract
Smartphones provide an ideal platform for colorimetric measurements due to their low cost, portability and image quality. As with any imaging-based colorimetry system, ambient light and device variations introduce error which must be dealt with. We propose a novel processing method consisting of a one-time calibration stage to account for inter-phone variations, and an innovative use of ambient light subtraction with image pairs to account for variation in ambient light. Data collection is kept very simple, making it particularly useful for use in the field, since nothing additional is required in the images. Ambient subtraction is first demonstrated for a range of colors and phones (Samsung S8 and LG Nexus 5X), and the Subtracted Signal to Noise Ratio (SSNR) is defined as a metric for assessing whether an image pair is appropriate at the time of image capture. The experimentally determined SSNR threshold below which to suggest retaking the images is 3.4. The classification accuracy for results using the proposed calibration pipeline is then compared to the simplest image metadata-based alternative and is found to be greatly superior. Finally, a custom colorcard is shown to improve the accuracy of device-independent results for known smaller ranges of colors over a standard colorcard, making this a possible application-specific modification to the overall processing pipeline.
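The ambient-subtraction idea can be sketched as follows. The paper's exact SSNR definition is not reproduced here; the ratio below (mean subtracted signal over the spread of the ambient-only patch) is an illustrative assumption, as are the patch and image values:

```python
import numpy as np

rng = np.random.default_rng(2)

def ambient_subtract(lit, ambient):
    """Remove the ambient contribution using an image pair captured
    with and without the controlled light source."""
    return np.clip(lit.astype(float) - ambient.astype(float), 0.0, None)

def ssnr(lit, ambient, patch):
    """Illustrative subtracted-signal-to-noise ratio over a patch:
    mean subtracted signal divided by the ambient patch's spread."""
    signal = ambient_subtract(lit, ambient)[patch].mean()
    noise = max(ambient[patch].std(), 1e-9)
    return signal / noise

patch = (slice(0, 16), slice(0, 16))
ambient = rng.normal(10.0, 1.0, (32, 32))  # ambient-only exposure
lit = ambient + 50.0                       # controlled light adds 50 counts
```

A threshold on such a ratio (the paper reports 3.4 for its metric) lets the app decide at capture time whether the pair is usable or the images should be retaken.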
|
31
|
Color Calibration of Proximal Sensing RGB Images of Oilseed Rape Canopy via Deep Learning Combined with K-Means Algorithm. REMOTE SENSING 2019. [DOI: 10.3390/rs11243001] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Plant color is a key feature for estimating parameters of plants grown under different conditions using remote sensing images. In this case, the variation in plant color should be due only to the influence of the growing conditions and not to external confounding factors like the light source. Hence, the impact of the light source on plant color should be alleviated using color calibration algorithms. This study aims to develop an efficient, robust, and cutting-edge approach for automatic color calibration of three-band (red green blue: RGB) images. Specifically, we combined the k-means model and deep learning for accurate color calibration matrix (CCM) estimation. A dataset of 3150 RGB images of oilseed rape was collected by a proximal sensing technique under varying illumination conditions and used to train, validate, and test our proposed framework. Firstly, we manually derived CCMs by mapping the RGB color values of each patch of a color chart in an image to the standard RGB (sRGB) color values of that chart. Secondly, we grouped the images into clusters according to the CCM assigned to each image using the unsupervised k-means algorithm. Thirdly, the images with the new cluster labels were used to train and validate a deep learning convolutional neural network (CNN) for automatic CCM estimation. Finally, the estimated CCM was applied to the input image to obtain an image with calibrated color. The performance of our model for estimating the CCM was evaluated using the Euclidean distance between the standard and estimated color values of the test dataset. The experimental results showed that our deep learning framework can efficiently extract useful low-level features for discriminating images with inconsistent colors, achieving overall training and validation accuracies of 98.00% and 98.53%, respectively. Further, the final CCM provided an average Euclidean distance of 16.23 ΔE and outperformed previously reported methods. This proposed technique can be used in real-time plant phenotyping at multiple scales.
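The first step described above, manually deriving a CCM by mapping measured chart RGBs to their standard sRGB values, reduces to a least-squares fit. Below is a minimal sketch on synthetic, linear-RGB data (the deep-learning part of the pipeline, which predicts the CCM without a chart in the image, is not shown):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 24-patch chart: reference sRGB values and the "measured"
# camera RGBs, simulated by applying the inverse of a known 3x3 CCM.
true_ccm = np.array([[ 1.6, -0.4, -0.2],
                     [-0.3,  1.5, -0.2],
                     [-0.1, -0.5,  1.6]])
reference = rng.random((24, 3))                  # chart sRGB (linear)
measured = reference @ np.linalg.inv(true_ccm)   # colour-cast camera RGBs

# Least-squares CCM: the 3x3 matrix minimising ||measured @ M - reference||.
ccm, *_ = np.linalg.lstsq(measured, reference, rcond=None)
calibrated = measured @ ccm
```

With real chart measurements the fit is not exact, and one such CCM is derived per image; clustering these CCMs with k-means is what gives the CNN its training labels.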
|
32
|
Molada-Tebar A, Riutort-Mayol G, Marqués-Mateu Á, Lerma JL. A Gaussian Process Model for Color Camera Characterization: Assessment in Outdoor Levantine Rock Art Scenes. SENSORS (BASEL, SWITZERLAND) 2019; 19:E4610. [PMID: 31652795 PMCID: PMC6866521 DOI: 10.3390/s19214610] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/23/2019] [Revised: 10/17/2019] [Accepted: 10/21/2019] [Indexed: 11/16/2022]
Abstract
In this paper, we propose a novel approach to undertake the colorimetric camera characterization procedure based on a Gaussian process (GP). GPs are powerful and flexible nonparametric models for multivariate nonlinear functions. To validate the GP model, we compare the results achieved with a second-order polynomial model, which is the most widely used regression model for characterization purposes. We applied the methodology on a set of raw images of rock art scenes collected with two different Single Lens Reflex (SLR) cameras. A leave-one-out cross-validation (LOOCV) procedure was used to assess the predictive performance of the models in terms of CIE XYZ residuals and ΔE*ab color differences. Values of less than 3 CIELAB units were achieved for ΔE*ab. The output sRGB characterized images show that both regression models are suitable for practical applications in cultural heritage documentation. However, the results show that colorimetric characterization based on the Gaussian process provides significantly better results, with lower values for residuals and ΔE*ab. We also analyzed the noise induced into the output image after applying the camera characterization. As the noise depends on the specific camera, proper camera selection is essential for the photogrammetric work.
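A minimal GP-regression sketch of the RGB-to-XYZ characterization idea (posterior mean only; the RBF kernel, its `length` scale, the `noise` level, and the toy training colors are assumptions, not the paper's actual priors or data):

```python
import numpy as np

def rbf_kernel(a, b, length=0.5, var=1.0):
    """Squared-exponential covariance between the rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    """GP posterior mean: k(x*, X) [k(X, X) + noise*I]^-1 y.
    y_train may be (n, 3), giving one GP per XYZ output."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf_kernel(x_test, x_train) @ np.linalg.solve(K, y_train)

# Toy characterization: 5 RGB training colors with known XYZ targets.
x_train = np.array([[0.1, 0.1, 0.1], [0.9, 0.1, 0.1], [0.1, 0.9, 0.1],
                    [0.1, 0.1, 0.9], [0.5, 0.5, 0.5]])
y_train = np.array([[0.05, 0.05, 0.05], [0.40, 0.21, 0.02],
                    [0.36, 0.72, 0.12], [0.18, 0.07, 0.95],
                    [0.48, 0.50, 0.54]])
pred = gp_predict(x_train, y_train, x_train)  # near-exact at training colors
```

A LOOCV loop as in the paper would simply refit with one sample held out at a time and accumulate the residuals.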
Collapse
Affiliation(s)
- Adolfo Molada-Tebar
- Department of Cartographic Engineering, Geodesy, and Photogrammetry, Universitat Politècnica de València, València, 46022, Spain.
| | - Gabriel Riutort-Mayol
- Department of Cartographic Engineering, Geodesy, and Photogrammetry, Universitat Politècnica de València, València, 46022, Spain.
| | - Ángel Marqués-Mateu
- Department of Cartographic Engineering, Geodesy, and Photogrammetry, Universitat Politècnica de València, València, 46022, Spain.
| | - José Luis Lerma
- Department of Cartographic Engineering, Geodesy, and Photogrammetry, Universitat Politècnica de València, València, 46022, Spain.
| |
Collapse
|
33
|
Gao SB, Zhang M, Li YJ. Improving color constancy by selecting suitable set of training images. OPTICS EXPRESS 2019; 27:25611-25633. [PMID: 31510431 DOI: 10.1364/oe.27.025611] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Accepted: 08/13/2019] [Indexed: 06/10/2023]
Abstract
With very simple implementation, regression-based color constancy (CC) methods have recently obtained very competitive performance by applying a correction matrix to the results of some low-level CC algorithms. However, most regression-based methods, e.g., Corrected Moment (CM), apply the same correction matrix to all the test images. Considering that the captured image color is usually determined by various factors (e.g., illuminant and surface reflectance), it is not reasonable to apply the same correction to different test images without considering the intrinsic differences among images. In this work, we first mathematically analyze the key factors that may influence the performance of regression-based CC, and then we design principled rules to automatically select suitable training images to learn an optimal correction matrix for each test image. With this strategy, the original regression-based CC (e.g., CM) is clearly improved and obtains more competitive performance on four widely used benchmark datasets. We also show that although this work focuses on improving the regression-based CM method, a noteworthy aspect of the proposed automatic training data selection strategy is its applicability to several representative regression-based approaches to the color constancy problem.
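The core idea of a per-image rather than global correction can be sketched as fitting the matrix only on the nearest training samples (a heavily simplified stand-in for the paper's selection rules; `per_image_correction`, the toy linear map `A`, and the nearest-neighbor criterion are illustrative assumptions):

```python
import numpy as np

def per_image_correction(test_est, train_ests, train_truths, k=3):
    """Correct a low-level illuminant estimate with a matrix learned
    only from the k training estimates nearest to it, instead of one
    global correction matrix shared by all test images."""
    dist = np.linalg.norm(train_ests - test_est, axis=1)
    nearest = np.argsort(dist)[:k]
    M, *_ = np.linalg.lstsq(train_ests[nearest], train_truths[nearest],
                            rcond=None)
    corrected = test_est @ M
    return corrected / np.linalg.norm(corrected)

# Toy data: ground-truth illuminants relate to the low-level estimates
# by one linear map A, so a local 3-point fit recovers it exactly.
A = np.array([[0.9, 0.1, 0.0],
              [0.0, 1.0, 0.1],
              [0.1, 0.0, 0.95]])
train_ests = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                       [0.0, 0.0, 1.0], [0.577, 0.577, 0.577]])
train_truths = train_ests @ A
test_est = np.array([0.5, 0.6, 0.7])
test_est = test_est / np.linalg.norm(test_est)
result = per_image_correction(test_est, train_ests, train_truths)
```

The paper's actual selection rules are derived from a mathematical analysis of the regression, not from plain Euclidean proximity of estimates.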
Collapse
|
34
|
Finlayson G, Gong H, Fisher RB. Color Homography: Theory and Applications. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2019; 41:20-33. [PMID: 29990184 DOI: 10.1109/tpami.2017.2760833] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Images of co-planar points in 3-dimensional space taken from different camera positions are a homography apart. Homographies are at the heart of geometric methods in computer vision and are used in geometric camera calibration, 3D reconstruction, stereo vision, and image mosaicking, among other tasks. In this paper we show the surprising result that homographies are the apposite tool for relating image colors of the same scene when the capture conditions (illumination color, shading, and device) change. Three applications of color homographies are investigated. First, we show that color calibration is correctly formulated as a homography problem. Second, we compare the chromaticity distributions of an image of colorful objects to a database of object chromaticity distributions using homography matching. Third, in the color transfer problem, the colors in one image are mapped so that the resulting image color style matches that of a target image; we show that natural image color transfer can be re-interpreted as a color homography mapping. Experiments demonstrate that solving the color homography problem leads to more accurate calibration and improved color-based object recognition, and we present a new direction for developing natural color transfer algorithms.
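A color homography relates RGBs up to a per-pixel shading scale, which suggests an alternating least-squares fit; the following is a hedged sketch of that idea (the function name, the iteration count, and the toy data are assumptions, and the authors' actual estimator may differ in detail):

```python
import numpy as np

def fit_color_homography(rgb_a, rgb_b, iters=20):
    """Alternating least squares for diag(d) @ rgb_a @ H ~ rgb_b:
    H is a 3x3 color homography, d holds one shading scale per pixel.
    Rows of rgb_a / rgb_b are corresponding RGB triplets."""
    d = np.ones(len(rgb_a))
    H = np.eye(3)
    for _ in range(iters):
        # H-step: least squares with the current shading folded in
        H, *_ = np.linalg.lstsq(d[:, None] * rgb_a, rgb_b, rcond=None)
        # d-step: closed-form per-row scale minimizing the residual
        pred = rgb_a @ H
        d = (pred * rgb_b).sum(1) / np.maximum((pred * pred).sum(1), 1e-12)
    return H, d

# With no shading change, the true homography is recovered exactly.
rgb_a = np.array([[0.20, 0.10, 0.05], [0.10, 0.40, 0.20],
                  [0.30, 0.30, 0.60], [0.50, 0.20, 0.10]])
H_true = np.array([[0.90, 0.05, 0.00],
                   [0.10, 0.80, 0.10],
                   [0.00, 0.10, 1.00]])
H, d = fit_color_homography(rgb_a, rgb_a @ H_true)
```

With varying shading the same loop still monotonically decreases the fitting residual, since each step solves its subproblem exactly.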
Collapse
|
35
|
Berry JC, Fahlgren N, Pokorny AA, Bart RS, Veley KM. An automated, high-throughput method for standardizing image color profiles to improve image-based plant phenotyping. PeerJ 2018; 6:e5727. [PMID: 30310752 PMCID: PMC6174877 DOI: 10.7717/peerj.5727] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2018] [Accepted: 09/10/2018] [Indexed: 12/11/2022] Open
Abstract
High-throughput phenotyping has emerged as a powerful method for studying plant biology. Large image-based datasets are generated and analyzed with automated image analysis pipelines. A major challenge associated with these analyses is variation in image quality that can inadvertently bias results. Images are made up of tuples of data called pixels, which consist of R, G, and B values arranged in a grid. Many factors, for example image brightness, can influence the quality of the image that is captured. These factors alter the values of the pixels within images and consequently can bias the data and downstream analyses. Here, we provide an automated method to adjust an image-based dataset so that brightness, contrast, and color profile are standardized. The correction method is a collection of linear models that adjusts pixel tuples based on a reference panel of colors. We apply this technique to a set of images taken in a high-throughput imaging facility and successfully detect variance within the image dataset. In this case, variation resulted from temperature-dependent light intensity throughout the experiment. Using this correction method, we were able to standardize images throughout the dataset, and we show that this correction enhanced our ability to accurately quantify morphological measurements within each image. We implement this technique in a high-throughput pipeline available with this paper, and it is also implemented in PlantCV.
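A "collection of linear models adjusting pixel tuples from a reference panel" can be sketched as one least-squares line per channel (a minimal illustration, not PlantCV's actual implementation; the function names and the toy four-patch panel are assumptions):

```python
import numpy as np

def fit_channel_models(observed, reference):
    """One least-squares line (gain, offset) per channel, mapping
    observed color-card patch values to their reference values.
    Inputs are (n_patches, 3) arrays."""
    models = []
    for c in range(3):
        A = np.column_stack([observed[:, c], np.ones(len(observed))])
        sol, *_ = np.linalg.lstsq(A, reference[:, c], rcond=None)
        models.append(sol)          # (gain, offset)
    return models

def apply_correction(img, models):
    """Apply the per-channel linear models to a (..., 3) image."""
    out = np.empty(img.shape, dtype=float)
    for c, (gain, offset) in enumerate(models):
        out[..., c] = gain * img[..., c] + offset
    return out

# Simulated drift: each channel gains a scale and an offset, which
# the reference panel lets us undo exactly.
reference = np.array([[ 50.0,  60.0,  70.0], [100.0, 120.0, 140.0],
                      [150.0, 180.0, 200.0], [200.0, 220.0, 240.0]])
observed = reference * np.array([0.8, 0.9, 1.1]) + np.array([10.0, -5.0, 3.0])
models = fit_channel_models(observed, reference)
corrected = apply_correction(observed, models)
```

The same fitted models would then be applied to every pixel of the full image, not just the card patches.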
Collapse
Affiliation(s)
- Jeffrey C. Berry
- Donald Danforth Plant Science Center, Saint Louis, MO, United States of America
| | - Noah Fahlgren
- Donald Danforth Plant Science Center, Saint Louis, MO, United States of America
| | | | - Rebecca S. Bart
- Donald Danforth Plant Science Center, Saint Louis, MO, United States of America
| | - Kira M. Veley
- Donald Danforth Plant Science Center, Saint Louis, MO, United States of America
| |
Collapse
|
36
|
Li Z, Xiong N, Liu J, Gao W, Shamey R. Determining the colorimetric attributes of multicolored materials based on a global correction and unsupervised image segmentation method. APPLIED OPTICS 2018; 57:7482-7491. [PMID: 30461814 DOI: 10.1364/ao.57.007482] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/07/2018] [Accepted: 08/07/2018] [Indexed: 06/09/2023]
Abstract
Fast and accurate measurement of colors in multicolored prints using commercial instruments or existing computer vision systems remains a challenge due to limitations in image segmentation methods and the size and complexity of the colored patterns. To determine the colorimetric attributes (L*a*b*) of multicolored materials, an approach based on global color correction and an effective unsupervised image segmentation method is presented. The colorimetric attributes of all patches in a ColorChecker chart were measured spectrophotometrically, and an image of the chart was also captured. Images were segmented using a modified Chan-Vese method, and the sRGB values of each patch were extracted and then transformed into L*a*b* values. In order to optimize the transformation process, the performance of 10 models was examined by minimizing the average color differences between measured and calculated colorimetric values. To assess the performance of the model, a set of printed samples was employed and the color differences between the predicted and measured L*a*b* values of the samples were compared. The results show that the modified Chan-Vese method, with suitable settings, generates satisfactory segmentation of the printed images, with mean and maximum ΔE00 values of 2.43 and 4.28 between measured and calculated values.
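The sRGB-to-L*a*b* transformation step mentioned above follows the standard sRGB and CIELAB formulas; a self-contained sketch is below (D65 white assumed; the paper evaluates with ΔE00, whose formula is much longer, so the simpler Euclidean ΔE*ab is shown instead):

```python
import numpy as np

def srgb_to_lab(rgb, white=(0.95047, 1.0, 1.08883)):
    """sRGB in [0, 1] -> CIE L*a*b* (D65 white), via the standard
    sRGB inverse companding and the XYZ -> Lab formulas."""
    rgb = np.asarray(rgb, dtype=float)
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ M.T
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t),
                 t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def delta_e76(lab1, lab2):
    """Euclidean color difference (CIE76 ΔE*ab)."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

lab_white = srgb_to_lab([1.0, 1.0, 1.0])   # ~ [100, 0, 0]
lab_black = srgb_to_lab([0.0, 0.0, 0.0])   # ~ [0, 0, 0]
```

The paper additionally optimizes among 10 candidate transformation models; the fixed matrix above is only the nominal sRGB case.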
Collapse
|
37
|
Qiu J, Xu H, Ye Z, Diao C. Image quality degradation of object-color metamer mismatching in digital camera color reproduction. APPLIED OPTICS 2018; 57:2851-2860. [PMID: 29714292 DOI: 10.1364/ao.57.002851] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/10/2018] [Accepted: 03/07/2018] [Indexed: 06/08/2023]
Abstract
Metamer mismatching is a phenomenon where two objects that are colorimetrically indistinguishable under one lighting condition become distinguishable under another one. Due to the unavailability of spectral information, metamer mismatching introduces an inherent uncertainty into cameras' color reproduction. To investigate the degree of image quality degradation caused by metamer mismatching, a large spectral reflectance database was compiled in this study to search the object-color metamer sets of the spectra in hyperspectral images. Then, metamer-degraded images were constructed and compared with the ground-truth images using a directional-statistics-based color similarity index image quality assessment metric to evaluate the perceptual image degradation. The results indicate that object-color metamer mismatching has only a minor impact on image quality degradation, whereas the inappropriate selection of color correction matrices involved with illumination metamerism is the primary factor in the accuracy decrease of digital camera color reproduction.
Collapse
|
38
|
Alsam A, Rivertz HJ. A mathematical approach to best luminance maps. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2018; 35:B239-B243. [PMID: 29603984 DOI: 10.1364/josaa.35.00b239] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/24/2017] [Accepted: 01/31/2018] [Indexed: 06/08/2023]
Abstract
An algorithm to calculate the best global mapping from color to grayscale is presented. We assert that the best mapping minimizes the difference between the multi-channel local tensor and the tensor of the resultant monochromatic image. To minimize the objective function, we represent the grayscale image as a weighted sum of the RGB channels; of the three channels and their second-order polynomial terms; and of the three channels and their root-polynomial terms. The optimization searches for the best weights to combine the linear, polynomial, and root-polynomial functions. Our results show that the optimal weights can halve the root mean square difference between the color gradients and those achieved by the conventional luminance transformation. Further improvement is achieved by adding the squared and root-squared channels to the solution. The improvements are also visually evident.
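A simplified version of this weight search can be posed as linear least squares: match the gray image's gradients to the color-gradient magnitude, signed by the luminance gradient (a sketch under stated assumptions, not the authors' tensor formulation; the sign convention, the 9-channel feature set, and all function names are illustrative):

```python
import numpy as np

def channel_gradients(img):
    """Finite-difference x/y gradients of each channel, stacked as rows."""
    gx = np.diff(img, axis=1)[:-1].reshape(-1, img.shape[-1])
    gy = np.diff(img, axis=0)[:, :-1].reshape(-1, img.shape[-1])
    return np.vstack([gx, gy])

def gradient_target(img):
    """Gradients of the extended channel set (R, G, B, their squares
    and square roots) and the per-pixel color-gradient magnitude,
    signed by the luminance gradient."""
    feats = np.concatenate([img, img ** 2, np.sqrt(img)], axis=-1)
    G = channel_gradients(feats)
    lum = img @ np.array([0.299, 0.587, 0.114])
    sign = np.sign(channel_gradients(lum[..., None])).ravel()
    target = sign * np.linalg.norm(channel_gradients(img), axis=1)
    return G, target

def best_gray_weights(img):
    """Least-squares weights over the 9 extended channels whose
    gray-image gradients best match the signed color-gradient target."""
    G, target = gradient_target(img)
    w, *_ = np.linalg.lstsq(G, target, rcond=None)
    return w

rng = np.random.default_rng(1)
img = rng.random((8, 8, 3))   # toy image in [0, 1]
w = best_gray_weights(img)
```

By construction the least-squares weights can do no worse on this objective than the fixed luminance weights, mirroring the abstract's comparison.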
Collapse
|
39
|
Securing Color Fidelity in 3D Architectural Heritage Scenarios. SENSORS 2017; 17:s17112437. [PMID: 29068359 PMCID: PMC5712986 DOI: 10.3390/s17112437] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/28/2017] [Revised: 10/07/2017] [Accepted: 10/22/2017] [Indexed: 11/17/2022]
Abstract
Ensuring color fidelity in image-based 3D modeling of heritage scenarios is nowadays still an open research matter. Image colors are important during data processing as they affect algorithm outcomes; therefore their correct treatment, reduction, and enhancement is fundamental. In this contribution, we present an automated solution developed to improve the radiometric quality of an image dataset and the performance of two main steps of the photogrammetric pipeline (camera orientation and dense image matching). The suggested solution aims to achieve a robust automatic color balance and exposure equalization, stability of the RGB-to-gray image conversion, and faithful color appearance of a digitized artifact. The innovative aspects of the article are: complete automation, better color target detection, a MATLAB implementation of the ACR scripts created by Fraser, and the use of a specific weighted polynomial regression. A series of tests are presented to demonstrate the efficiency of the developed methodology and to evaluate color accuracy (‘color characterization’).
Collapse
|
40
|
Stets JD, Dal Corso A, Nielsen JB, Lyngby RA, Jensen SHN, Wilm J, Doest MB, Gundlach C, Eiriksson ER, Conradsen K, Dahl AB, Bærentzen JA, Frisvad JR, Aanæs H. Scene reassembly after multimodal digitization and pipeline evaluation using photorealistic rendering. APPLIED OPTICS 2017; 56:7679-7690. [PMID: 29047754 DOI: 10.1364/ao.56.007679] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2017] [Accepted: 08/15/2017] [Indexed: 06/07/2023]
Abstract
Transparent objects require acquisition modalities that are very different from the ones used for objects with more diffuse reflectance properties. Digitizing a scene where objects must be acquired with different modalities requires scene reassembly after reconstruction of the object surfaces. This reassembly of a scene that was picked apart for scanning seems unexplored. We contribute with a multimodal digitization pipeline for scenes that require this step of reassembly. Our pipeline includes measurement of bidirectional reflectance distribution functions and high dynamic range imaging of the lighting environment. This enables pixelwise comparison of photographs of the real scene with renderings of the digital version of the scene. Such quantitative evaluation is useful for verifying acquired material appearance and reconstructed surface geometry, which is an important aspect of digital content creation. It is also useful for identifying and improving issues in the different steps of the pipeline. In this work, we use it to improve reconstruction, apply analysis by synthesis to estimate optical properties, and to develop our method for scene reassembly.
Collapse
|
41
|
Mackiewicz M, Andersen CF, Finlayson G. Method for hue plane preserving color correction. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2016; 33:2166-2177. [PMID: 27857433 DOI: 10.1364/josaa.33.002166] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Hue plane preserving color correction (HPPCC), introduced by Andersen and Hardeberg [Proceedings of the 13th Color and Imaging Conference (CIC) (2005), pp. 141-146], maps device-dependent color values (RGB) to colorimetric color values (XYZ) using a set of linear transforms, realized by white point preserving 3×3 matrices, where each transform is learned and applied in a subregion of color space, defined by two adjacent hue planes. The hue plane delimited subregions of camera RGB values are mapped to corresponding hue plane delimited subregions of estimated colorimetric XYZ values. Hue planes are geometrical half-planes, where each is defined by the neutral axis and a chromatic color in a linear color space. The key advantage of the HPPCC method is that, while offering an estimation accuracy of higher order methods, it maintains the linear colorimetric relations of colors in hue planes. As a significant result, it therefore also renders the colorimetric estimates invariant to exposure and shading of object reflection. In this paper, we present a new flexible and robust version of HPPCC using constrained least squares in the optimization, where the subregions can be chosen freely in number and position in order to optimize the results while constraining transform continuity at the subregion boundaries. The method is compared to a selection of other state-of-the-art characterization methods, and the results show that it outperforms the original HPPCC method.
Collapse
|
42
|
Andersen CF, Connah D. Weighted Constrained Hue-Plane Preserving Camera Characterization. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2016; 25:4329-4339. [PMID: 27416591 DOI: 10.1109/tip.2016.2590303] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Color correction relates device-dependent sensor responses (RGB) to device-independent color values (XYZ). Here we present a new approach to Hue-plane Preserving Color Correction (HPPCC) using weighted constrained 3 × 3 matrices. Hue-plane preservation was introduced in [1] in conjunction with an HPPCC method. That method maps using a finite number of local white point preserving 3 × 3 matrices, each of which operates in a hue-angle delimited subregion of device space defined by the white and two adjacent chromatic training set colors. However, that formulation does not leave room for optimization or for continuity beyond C0 in the transitions between the subregions. To remedy this, our new method uses hue-angle specific weighted matrixing: given a device RGB from which a device hue-angle is derived, a corresponding transformation matrix is found as the normalized weighted sum of all precalculated constrained white point and training color preserving matrices. Each weight is calculated as a power function of the minimum difference between the device and the training color hue-angle. The weighting function provides local influence to the matrices that are in close hue-angle proximity to the device color. The power of the function is optimized for global accuracy. We call this hue-plane preserving color correction by weighted constrained matrixing (HPPCC-WCM). Experiments performed using different input spectra show that our method consistently improves on both stability and accuracy compared to state-of-the-art methods.
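The white-point-preserving constraint that both HPPCC variants build on is an equality-constrained least-squares problem; a sketch of one such constrained 3 × 3 matrix (one building block, not the full hue-angle weighting scheme; the KKT formulation and the toy data are illustrative assumptions):

```python
import numpy as np

def white_preserving_matrix(rgb, xyz, white_rgb, white_xyz):
    """3x3 matrix M (applied as rgb @ M) fitted by least squares
    under the hard constraint white_rgb @ M == white_xyz, solved
    column-wise via the KKT system of equality-constrained LS."""
    RtR = rgb.T @ rgb
    M = np.zeros((3, 3))
    for j in range(3):
        KKT = np.block([[2.0 * RtR, white_rgb[:, None]],
                        [white_rgb[None, :], np.zeros((1, 1))]])
        rhs = np.concatenate([2.0 * rgb.T @ xyz[:, j], [white_xyz[j]]])
        M[:, j] = np.linalg.solve(KKT, rhs)[:3]
    return M

# Training colors whose XYZ follow one exact linear map: the fit
# recovers that map, and the white point is reproduced exactly.
A_true = np.array([[0.41, 0.21, 0.02],
                   [0.36, 0.72, 0.12],
                   [0.18, 0.07, 0.95]])
rgb = np.array([[0.9, 0.1, 0.1], [0.1, 0.9, 0.1], [0.1, 0.1, 0.9],
                [0.5, 0.5, 0.2], [1.0, 1.0, 1.0]])
xyz = rgb @ A_true
white_rgb = np.array([1.0, 1.0, 1.0])
M = white_preserving_matrix(rgb, xyz, white_rgb, white_rgb @ A_true)
```

The WCM method then blends many such constrained matrices with hue-angle-dependent weights rather than switching between them at subregion boundaries.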
Collapse
|
43
|
Qiu J, Xu H. Camera response prediction for various capture settings using the spectral sensitivity and crosstalk model. APPLIED OPTICS 2016; 55:6989-6999. [PMID: 27607275 DOI: 10.1364/ao.55.006989] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
In this paper, a camera response formation model is proposed to accurately predict the responses of images captured under various exposure settings. Differing from earlier works that estimated the camera's relative spectral sensitivity, our model constructs the physical spectral sensitivity curves and the device-dependent parameters that convert the absolute spectral radiances of target surfaces into camera readout responses. With this model, the camera responses to miscellaneous combinations of surfaces and illuminants can be accurately predicted, making it convenient to build an 'imaging simulator' for colorimetric and photometric research based on cameras.
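The core of such a response formation model is a spectral integral scaled by exposure parameters; a deliberately simplified sketch (the paper's crosstalk terms and nonlinearities are omitted, and the Gaussian sensitivity, flat radiance, and all parameter names here are assumptions):

```python
import numpy as np

def predict_response(sensitivity, radiance, dlam, exposure, gain=1.0, black=0.0):
    """Simplified linear response model:
    readout = gain * exposure * sum(S(lambda) * L(lambda)) * dlam + black,
    i.e. a Riemann-sum spectral integral plus a black-level offset."""
    return gain * exposure * float(np.sum(sensitivity * radiance)) * dlam + black

# Toy channel: Gaussian sensitivity over 400-700 nm, flat radiance.
wavelengths = np.arange(400.0, 701.0, 10.0)
sens = np.exp(-0.5 * ((wavelengths - 550.0) / 30.0) ** 2)
radiance = np.ones_like(wavelengths)
r1 = predict_response(sens, radiance, 10.0, exposure=1.0, black=5.0)
r2 = predict_response(sens, radiance, 10.0, exposure=2.0, black=5.0)
```

In this linear regime, doubling the exposure doubles the response above the black level, which is the kind of cross-setting prediction the model enables.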
Collapse
|