1
Marvie JE, Nehmé Y, Graziosi D, Lavoué G. Crafting the MPEG metrics for objective and perceptual quality assessment of volumetric videos. Quality and User Experience 2023; 8:4. [PMID: 37304060] [PMCID: PMC10242241] [DOI: 10.1007/s41233-023-00057-4]
Abstract
Efficient objective and perceptual metrics are valuable tools for evaluating the visual impact of compression artifacts on the quality of volumetric videos (VVs). In this paper, we present some of the MPEG group's efforts to create, benchmark and calibrate objective quality assessment metrics for volumetric videos represented as textured meshes. We created a challenging dataset of 176 volumetric videos impaired with various distortions and conducted a subjective experiment to gather human opinions (more than 5896 subjective scores were collected). We adapted two state-of-the-art model-based metrics for point cloud evaluation to our context of textured mesh evaluation by selecting efficient sampling methods. We also present a new image-based metric for evaluating such VVs, whose purpose is to reduce the cumbersome computation times inherent to the point-based metrics, which stem from their use of multiple kd-tree searches. Each metric presented above is calibrated (i.e., the best values are selected for parameters such as the number of views or the grid sampling density) and evaluated on our new ground-truth subjective dataset. For each metric, the optimal selection and combination of features is determined by logistic regression through cross-validation. This performance analysis, combined with MPEG experts' requirements, led to the validation of two selected metrics and to recommendations, via the learned feature weights, on which features matter most.
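As a sketch of the feature-combination step the abstract describes (fitting a logistic mapping from objective metric features to subjective scores), the following toy example fits such a mapping by gradient descent. The feature names and data are synthetic placeholders, not the paper's; this is an illustration of the technique, not the authors' code.

```python
import numpy as np

# Combine two objective metric features into a predicted quality score via a
# logistic mapping, MOS_hat = sigmoid(w0 + w1*f1 + w2*f2), fitted by
# gradient descent on the mean squared error.
def fit_logistic(F, mos, lr=0.5, steps=20000):
    """F: (n, k) feature matrix; mos: subjective scores scaled to (0, 1)."""
    X = np.hstack([np.ones((F.shape[0], 1)), F])  # prepend intercept column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        pred = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ ((pred - mos) * pred * (1 - pred)) / len(mos)  # dMSE/dw
        w -= lr * grad
    return w

def predict(F, w):
    X = np.hstack([np.ones((F.shape[0], 1)), F])
    return 1.0 / (1.0 + np.exp(-X @ w))

# Synthetic example: two hypothetical features (say, a geometry term and a
# color term) that jointly drive perceived quality.
rng = np.random.default_rng(0)
F = rng.uniform(0, 1, size=(200, 2))
mos = 1.0 / (1.0 + np.exp(-(3.0 * F[:, 0] + 2.0 * F[:, 1] - 2.5)))
w = fit_logistic(F, mos)
r = np.corrcoef(predict(F, w), mos)[0, 1]  # Pearson correlation with MOS
```

In the paper's setting, the same fit would be repeated across cross-validation folds and the learned weights inspected to rank features.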
Affiliation(s)
- Yana Nehmé
- INSA Lyon, Univ Lyon, CNRS, UCBL, LIRIS, UMR5205, Lyon, France
- Guillaume Lavoué
- Centrale Lyon, Univ Lyon, CNRS, INSA Lyon, UCBL, LIRIS, UMR5205, ENISE, Lyon, France
2
Reinhard J, Urban P. Perceptually Optimizing Color Look-up Tables. IEEE Transactions on Image Processing 2022; PP:403-414. [PMID: 37015403] [DOI: 10.1109/tip.2022.3228498]
Abstract
The quality of ICC profiles with embedded look-up tables (LUTs) depends on multiple factors: (1) the accuracy of the optical printer model; (2) the exploitation of the available gamut, combined with the quality of the gamut-mapping approach encoded in the B2A (backward) LUTs; and (3) the tonal smoothness as well as the color accuracy of the backward LUTs. It can be shown that optimizing the smoothness of the LUTs comes at the expense of color accuracy and requires gamut reduction because of internal tonal edges. We present a method to optimize the backward LUTs of existing ICC profiles w.r.t. accuracy, smoothness, gamut exploitation and mapping, which can be extended beyond color, e.g., to joint color-and-translucency backward LUTs. The approach is based on a perceptual difference metric that is used to optimize the LUT's tonal smoothness under the constraint of preserving both the accuracy of, and the relationships between, colors.
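One ingredient of such an optimization can be sketched as follows (this is not the authors' method): score the tonal smoothness of a 3D LUT as the mean squared second difference along each grid axis, so a smoothing pass lowers the score while an accuracy term (omitted here) would pull nodes back toward their measured values.

```python
import numpy as np

def lut_roughness(lut):
    """lut: (N, N, N, 3) array mapping grid nodes to output coordinates.
    Lower values mean a tonally smoother table."""
    score = 0.0
    for axis in range(3):
        d2 = np.diff(lut, n=2, axis=axis)  # discrete second derivative
        score += np.mean(d2 ** 2)
    return score

rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 9)
base = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
noisy = base + 0.02 * rng.standard_normal(base.shape)   # jagged LUT
# one box-filter smoothing pass over interior nodes along the first axis
smoothed = noisy.copy()
smoothed[1:-1] = (noisy[:-2] + noisy[1:-1] + noisy[2:]) / 3.0
```

An identity (perfectly linear) LUT scores zero; the noisy table scores higher than its smoothed counterpart, which is the gradient the optimization would exploit.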
3
Ragab M, Choudhry H, Al-Rabia MW, Binyamin SS, Aldarmahi AA, Mansour RF. Early and accurate detection of melanoma skin cancer using hybrid level set approach. Front Physiol 2022; 13:965630. [PMID: 36545278] [PMCID: PMC9760861] [DOI: 10.3389/fphys.2022.965630]
Abstract
Digital dermoscopy is used to identify cancer in skin lesions, and sun exposure is one of the leading causes of melanoma. When using computerised lesion detection and classification, it is crucial to distinguish between healthy skin and malignant lesions, and the quality of lesion segmentation influences classification accuracy and precision. This study introduces a novel way of classifying lesions. Artifacts such as hairs, gel, bubbles, and specular reflection are first filtered out, and an innovative method is employed for detecting and removing hairs. The lesion is then distinguished from the surrounding skin by an adaptive sigmoidal function that accounts for the localised intensity distribution of each lesion image. The article proposes an improved technique for separating a lesion from the surrounding tissue, followed by a classifier over the available features, which achieved 94.40% accuracy and a 93% success rate. The results indicate that a well-chosen combination of feature selection and classification can produce more accurate predictions before and during treatment. When put to the test on the Melanoma Skin Cancer Dataset, the recommended technique outperforms the alternatives.
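The adaptive sigmoidal contrast step described above can be sketched as follows (the paper's exact adaptive function is not reproduced here; the cutoff is simply taken from the image's own intensity distribution, so the stretch adapts per lesion image):

```python
import numpy as np

def adaptive_sigmoid(img, gain=10.0):
    """img: grayscale image scaled to [0, 1]. Returns a contrast-stretched
    image whose midpoint adapts to the local intensity distribution."""
    cutoff = img.mean()                      # image-adaptive midpoint
    out = 1.0 / (1.0 + np.exp(-gain * (img - cutoff)))
    # rescale so the output exactly spans [0, 1]
    return (out - out.min()) / (out.max() - out.min())

# synthetic low-contrast "lesion" image clustered around mid gray
img = np.clip(np.random.default_rng(2).normal(0.5, 0.1, (64, 64)), 0, 1)
enh = adaptive_sigmoid(img)
```

The stretch preserves intensity ordering while widening the spread around the adaptive midpoint, which is what makes the lesion boundary easier to segment.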
Affiliation(s)
- Mahmoud Ragab
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia; Centre for Artificial Intelligence in Precision Medicines, King Abdulaziz University, Jeddah, Saudi Arabia; Mathematics Department, Faculty of Science, Al-Azhar University, Nasr City, Egypt
- Hani Choudhry
- Centre for Artificial Intelligence in Precision Medicines, King Abdulaziz University, Jeddah, Saudi Arabia; Biochemistry Department, Faculty of Science, King Abdulaziz University, Jeddah, Saudi Arabia
- Mohammed W. Al-Rabia
- Department of Medical Microbiology and Parasitology, Faculty of Medicine, King Abdulaziz University, Jeddah, Saudi Arabia; Health Promotion Center, King Abdulaziz University, Jeddah, Saudi Arabia
- Sami Saeed Binyamin
- Computer and Information Technology Department, The Applied College, King Abdulaziz University, Jeddah, Saudi Arabia
- Ahmed A. Aldarmahi
- Basic Science Department, College of Science and Health Professions, King Saud Bin Abdulaziz University for Health Sciences, Jeddah, Saudi Arabia; King Abdullah International Medical Research Center, Ministry of National Guard—Health Affairs, Jeddah, Saudi Arabia
- Romany F. Mansour
- Department of Mathematics, Faculty of Science, New Valley University, El-Kharga, Egypt
4
Muñoz-Postigo J, Valero EM, Martínez-Domingo MA, Gomez-Robledo L, Huertas R, Hernández-Andrés J. CVD-MET: an image difference metric designed for analysis of color vision deficiency aids. Optics Express 2022; 30:34665-34683. [PMID: 36242474] [DOI: 10.1364/oe.456346]
Abstract
Color vision deficiency (CVD) has gained in relevance in the last decade, with a surge of proposals for aid systems that aim to improve the color discrimination capabilities of CVD subjects. This paper proposes a new metric, called CVD-MET, which can evaluate the efficiency and naturalness of these systems over a set of images using a simulation of the subject's vision. In the simulation, the effect of chromatic adaptation is introduced via CIECAM02, which is relevant for the evaluation of passive aids (color filters). To demonstrate the potential of CVD-MET, a representative set of passive and active aids is evaluated both with conventional image quality metrics and with CVD-MET. The results suggest that the active aids (recoloration algorithms) are in general more efficient and produce more natural images, although the changes they introduce do not shift CVD subjects' perception of the scene towards that of the normal observer.
5
Deng L, Yao B, Yang Y, Zhu L, Wang G, Gu C, Xu L. Color-speckle assessment in multi-primary laser-projection systems based on a 3D Jzazbz color space. Optics Express 2022; 30:33374-33394. [PMID: 36242377] [DOI: 10.1364/oe.465619]
Abstract
We propose and demonstrate a color-speckle assessment method based on the three-dimensional Jzazbz color space, which is appropriate for both three-primary and multi-primary systems. In the proposed scheme, new physical quantities are defined to describe the color-speckle characteristics, providing a general and intuitive color-speckle evaluation for different laser projectors. Experimental verification is also performed using three-primary and six-primary laser projectors. The simulation and measurement results are consistent.
6
Urzúa AR, Wolf KB. Unitary rotation of pixellated polychromatic images. Journal of the Optical Society of America A 2022; 39:1323-1329. [PMID: 36215575] [DOI: 10.1364/josaa.462530]
Abstract
Unitary rotations of polychromatic images on finite two-dimensional pixellated screens provide invertibility, group composition, and thus conservation of information. Rotations have previously been applied to monochromatic image data sets; here we examine more closely the Gibbs-like oscillations that appear due to discrete "discontinuities" of the input images under unitary transformations. Extending the approach to three-color images, we examine the display of color at pixels where, due to these oscillations, some color values may fall outside their required common numerical range [0,1], between absence and saturation of the red, green, and blue formant colors chosen to represent the images.
7
Abstract
For over 100 y, the scientific community has adhered to a paradigm, introduced by Riemann and furthered by Helmholtz and Schrödinger, where perceptual color space is a three-dimensional Riemannian space. This implies that the distance between two colors is the length of the shortest path that connects them. We show that a Riemannian metric overestimates the perception of large color differences because large color differences are perceived as less than the sum of small differences. This effect, called diminishing returns, cannot exist in a Riemannian geometry. Consequently, we need to adapt how we model color differences, as the current standard, ΔE, recognized by the International Commission for Weights and Measures, does not account for diminishing returns in color difference perception.

The scientific community generally agrees on the theory, introduced by Riemann and furthered by Helmholtz and Schrödinger, that perceived color space is not Euclidean but rather a three-dimensional Riemannian space. We show that the principle of diminishing returns applies to human color perception. This means that large color differences cannot be derived by adding a series of small steps, and therefore, perceptual color space cannot be described by a Riemannian geometry. This finding is inconsistent with the current approaches to modeling perceptual color space. Therefore, the assumed shape of color space requires a paradigm shift. Consequences of this apply to color metrics that are currently used in image and video processing, color mapping, and the paint and textile industries. These metrics are valid only for small differences. Rethinking them outside of a Riemannian setting could provide a path to extending them to large differences. This finding further hints at the existence of a second-order Weber–Fechner law describing perceived differences.
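The incompatibility the abstract describes can be stated in two lines (a sketch of the argument, not the paper's notation):

```latex
% Riemannian additivity: along a geodesic from A to C through an
% intermediate color B, segment lengths add exactly:
d(A,C) \;=\; d(A,B) + d(B,C), \qquad B \in \mathrm{geodesic}(A,C).
% Diminishing returns, as observed for large color differences:
% the perceived total is strictly less than the sum of the parts,
d(A,C) \;<\; d(A,B) + d(B,C),
% even when B lies on the path from A to C. Since any Riemannian
% metric forces equality for such B, no Riemannian geometry can
% reproduce the observed inequality.
```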
8
Tang R, Xiao Y, Luo H, Qiao X, Hou J. One-step electrospinning PMMA-SPO with hierarchical architectures as a multi-functional transparent screen window. New J Chem 2022. [DOI: 10.1039/d2nj02851d]
Abstract
A fascinating multifunctional screen window containing air filtration, rain-flow transportation and photochromic functions.
Affiliation(s)
- Rongxing Tang
- Key Laboratory of Automobile Materials of Ministry of Education, College of Materials Science and Engineering, Jilin University, Changchun, 130025, China
- Yanan Xiao
- Key Laboratory of Automobile Materials of Ministry of Education, College of Materials Science and Engineering, Jilin University, Changchun, 130025, China
- Hao Luo
- Key Laboratory of Automobile Materials of Ministry of Education, College of Materials Science and Engineering, Jilin University, Changchun, 130025, China
- Xiaolan Qiao
- State Key Laboratory for Modification of Chemical Fibers and Polymer Materials, College of Materials Science and Engineering, Donghua University, Shanghai, 201620, People's Republic of China
- Jiazi Hou
- Key Laboratory of Automobile Materials of Ministry of Education, College of Materials Science and Engineering, Jilin University, Changchun, 130025, China
9
Nehme Y, Dupont F, Farrugia JP, Le Callet P, Lavoue G. Visual Quality of 3D Meshes With Diffuse Colors in Virtual Reality: Subjective and Objective Evaluation. IEEE Transactions on Visualization and Computer Graphics 2021; 27:2202-2219. [PMID: 33166254] [DOI: 10.1109/tvcg.2020.3036153]
Abstract
Surface meshes associated with diffuse texture or color attributes are becoming popular multimedia contents. They provide a high degree of realism and allow six degrees of freedom (6DoF) interactions in immersive virtual reality environments. Just like other types of multimedia, 3D meshes are subject to a wide range of processing, e.g., simplification and compression, which result in a loss of quality of the final rendered scene. Thus, both subjective studies and objective metrics are needed to understand and predict this visual loss. In this work, we introduce a large dataset of 480 animated meshes with diffuse color information, and associated with perceived quality judgments. The stimuli were generated from 5 source models subjected to geometry and color distortions. Each stimulus was associated with 6 hypothetical rendering trajectories (HRTs): combinations of 3 viewpoints and 2 animations. A total of 11520 quality judgments (24 per stimulus) were acquired in a subjective experiment conducted in virtual reality. The results allowed us to explore the influence of source models, animations and viewpoints on both the quality scores and their confidence intervals. Based on these findings, we propose the first metric for quality assessment of 3D meshes with diffuse colors, which works entirely on the mesh domain. This metric incorporates perceptually-relevant curvature-based and color-based features. We evaluate its performance, as well as a number of Image Quality Metrics (IQMs), on two datasets: ours and a dataset of distorted textured meshes. Our metric demonstrates good results and a better stability than IQMs. Finally, we investigated how the knowledge of the viewpoint (i.e., the visible parts of the 3D model) may improve the results of objective metrics.
10
Safdar M, Emmel P. Perceptually uniform cross-gamut mapping between surface colors. Journal of the Optical Society of America A 2021; 38:140-147. [PMID: 33362161] [DOI: 10.1364/josaa.411618]
Abstract
Gamut mapping is an important part of the color reproduction pipeline. A color's appearance depends on the gamut achievable by the reproduction device (e.g., display, printer, etc.) or the reproduction material (e.g., plastics, paints, textiles, etc.). In the surface color industry, often a single color is managed such that, if it lies outside of the reproduction gamut, it is mapped to a visually similar color on the boundary of the reproduction gamut using a gamut mapping algorithm. The algorithm's performance mainly depends on the uniformity of the working color space and/or the selection of a focal point, inside the reproduction gamut, towards which the mapping line should be directed. To date, the CIE standard color-difference formula CIEDE2000 is the best-known perceptual color-difference metric for the standard dynamic range. In this paper, a method is proposed with the aim of achieving perceptually uniform mapping of a source color to the reproduction gamut using CIEDE2000 as the reference for uniformity. The proposed method, named UNIMAP00, is independent of the uniformity of the working color space, and no focal points are needed. The results closely agree with experimental findings previously reported by other researchers.
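The core mapping idea can be sketched in a few lines (this is not UNIMAP00 itself): given an out-of-gamut source color and a sampled reproduction-gamut boundary, pick the boundary color that minimizes a perceptual difference. For brevity, the Euclidean CIELAB distance (ΔE*ab) stands in here for CIEDE2000, which the paper uses as its uniformity reference.

```python
import numpy as np

def map_to_gamut(src_lab, boundary_lab):
    """src_lab: (3,) CIELAB color; boundary_lab: (m, 3) boundary samples.
    Returns the minimizing boundary color and its distance to the source."""
    d = np.linalg.norm(boundary_lab - src_lab, axis=1)
    return boundary_lab[np.argmin(d)], d.min()

# Hypothetical boundary samples on a small spherical gamut shell around
# mid gray (a stand-in for a real, irregular reproduction gamut).
rng = np.random.default_rng(3)
dirs = rng.standard_normal((500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
boundary = np.array([50.0, 0.0, 0.0]) + 30.0 * dirs   # sphere, radius 30
src = np.array([50.0, 80.0, 0.0])                     # far outside the gamut
mapped, de = map_to_gamut(src, boundary)
```

Because the minimization is over the perceptual metric itself, no focal point and no uniform working space are required, which is the property the abstract emphasizes.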
11
Zhao B, Luo MR. Hue linearity of color spaces for wide color gamut and high dynamic range media. Journal of the Optical Society of America A 2020; 37:865-875. [PMID: 32400722] [DOI: 10.1364/josaa.386515]
Abstract
Hue linearity is an important property of uniform color spaces such that hues perceived to be similar should be located on a straight line, an iso-hue line, in that space. Previously derived hue linearity data only cover a limited color gamut. Two new psychophysical experiments were conducted that used a wide color gamut (WCG) and high dynamic range (HDR) display to extend the color range using both hue matching and unitary hue estimation methods. The new data were used to evaluate the CIELAB, CAM16-UCS, IPT, and Jzazbz uniform color spaces. The experimental results indicated that IPT and Jzazbz outperformed the other two, especially in the blue region. The same method was used to test these spaces using the other published data sets. The results from different data sets gave similar results. Finally, all results were combined to form a normalized data set to represent the data under HDR and WCG display conditions. Furthermore, the four unitary hue data can be used to develop or refine color appearance models.
12
Le Moan S, Pedersen M. A Three-Feature Model to Predict Colour Change Blindness. Vision (Basel) 2019; 3:61. [PMID: 31735862] [PMCID: PMC6969898] [DOI: 10.3390/vision3040061]
Abstract
Change blindness is a striking shortcoming of our visual system which is exploited in the popular ‘Spot the difference’ game, as it makes us unable to notice large visual changes happening right before our eyes. Change blindness illustrates the fact that we see much less than we think we do. In this paper, we introduce a fully automated model to predict colour change blindness in cartoon images based on image complexity, change magnitude and observer experience. Using linear regression with only three parameters, the predictions of the proposed model correlate significantly with measured detection times. We also demonstrate the efficacy of the model to classify stimuli in terms of difficulty.
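A three-parameter linear model of the kind described above can be sketched as follows; the feature names and data are synthetic placeholders (not the paper's measurements), chosen only to show the fit-and-evaluate loop.

```python
import numpy as np

# detection_time ~ w1*complexity + w2*change_magnitude + w3*experience + b
rng = np.random.default_rng(4)
n = 120
complexity = rng.uniform(0, 1, n)
magnitude = rng.uniform(0, 1, n)
experience = rng.uniform(0, 1, n)
# hypothetical ground truth: complex scenes slow detection, large changes
# and observer experience speed it up, plus a little measurement noise
t = (5.0 + 4.0 * complexity - 3.0 * magnitude - 1.5 * experience
     + 0.1 * rng.standard_normal(n))

X = np.column_stack([complexity, magnitude, experience, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, t, rcond=None)   # ordinary least squares
pred = X @ coef
r = np.corrcoef(pred, t)[0, 1]                 # correlation with data
```

With only three slopes and an intercept, the model stays interpretable: the sign of each coefficient directly states whether a feature makes the change easier or harder to spot.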
Affiliation(s)
- Steven Le Moan
- Department of Mechanical and Electrical Engineering, Massey University, 4410 Palmerston North, New Zealand
- Marius Pedersen
- Department of Computer Science, Norwegian University of Science and Technology, 2815 Gjøvik, Norway
13
Lu W, Zeng M, Wang L, Luo H, Mukherjee S, Huang X, Deng Y. Navigation Algorithm Based on the Boundary Line of Tillage Soil Combined with Guided Filtering and Improved Anti-Noise Morphology. Sensors 2019; 19:3918. [PMID: 31514382] [PMCID: PMC6766904] [DOI: 10.3390/s19183918]
Abstract
An improved anti-noise morphology vision navigation algorithm is proposed for intelligent tractor tillage in complex agricultural field environments. First, the two key steps, guided filtering and improved anti-noise morphological navigation-line extraction, are addressed in detail. Then, experiments were carried out to verify the effectiveness and advancement of the presented algorithm. Finally, the optimal template and its application conditions were studied to improve the image-processing speed. The comparison experiments show that the YCbCr color space has the minimum time consumption, 0.094 s, in comparison with the HSV, HSI, and 2R-G-B color spaces. The guided filtering method can effectively distinguish the tillage soil boundary compared to competing methods such as Tarel, multi-scale retinex, wavelet-based retinex, and homomorphic filtering, while also having the fastest processing speed, 0.113 s. The soil boundary line extracted by the improved anti-noise morphology algorithm has the best precision and speed compared to other operators such as Sobel, Roberts, Prewitt, and Log. After comparing image templates of different sizes, the optimal template, 140 × 260 pixels, achieved high-precision vision navigation while the course deviation angle was no more than 7.5°. The maximum tractor speeds for the optimal template and the global template were 51.41 km/h and 27.47 km/h, respectively, which meets the real-time vision navigation requirement of smart tractor tillage operation in the field. The experimental results demonstrate the feasibility of autonomous vision navigation for tractor tillage using the tillage soil boundary line extracted by the proposed improved anti-noise morphology algorithm, which has broad application prospects.
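The YCbCr conversion the comparison refers to is commonly the full-range BT.601 (JPEG) form for 8-bit RGB; the paper's exact variant is not specified here, so the following is a conventional sketch.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr for 8-bit values.
    Broadcasts over an image of shape (H, W, 3) or a single (3,) pixel."""
    rgb = np.asarray(rgb, dtype=np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

gray = rgb_to_ycbcr([200, 200, 200])   # achromatic pixel
```

For an achromatic pixel the chroma channels land exactly at the 128 midpoint, which is why luma-driven boundary extraction in this space is cheap: the soil/tilled-soil contrast concentrates in Y.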
Affiliation(s)
- Wei Lu
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China.
- Key Laboratory of Intelligent Agricultural Equipment in Jiangsu Province, Nanjing Agricultural University, Nanjing 210031, China.
- Mengjie Zeng
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China.
- Key Laboratory of Intelligent Agricultural Equipment in Jiangsu Province, Nanjing Agricultural University, Nanjing 210031, China.
- Ling Wang
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China.
- Key Laboratory of Intelligent Agricultural Equipment in Jiangsu Province, Nanjing Agricultural University, Nanjing 210031, China.
- Hui Luo
- College of Engineering, Nanjing Agricultural University, Nanjing 210031, China.
- Key Laboratory of Intelligent Agricultural Equipment in Jiangsu Province, Nanjing Agricultural University, Nanjing 210031, China.
- Subrata Mukherjee
- NDE Laboratory, College of Engineering, Michigan State University, East Lansing, MI 48824, USA.
- Xuhui Huang
- NDE Laboratory, College of Engineering, Michigan State University, East Lansing, MI 48824, USA.
- Yiming Deng
- NDE Laboratory, College of Engineering, Michigan State University, East Lansing, MI 48824, USA.
14
Martins I, Carvalho P, Corte-Real L, Alba-Castro JL. BMOG: boosted Gaussian Mixture Model with controlled complexity for background subtraction. Pattern Anal Appl 2018. [DOI: 10.1007/s10044-018-0699-y]
15
Oliveira RB, Pereira AS, Tavares JMRS. Computational diagnosis of skin lesions from dermoscopic images using combined features. Neural Comput Appl 2018. [DOI: 10.1007/s00521-018-3439-8]
16
Oliveira RB, Pereira AS, Tavares JMRS. Skin lesion computational diagnosis of dermoscopic images: Ensemble models based on input feature manipulation. Computer Methods and Programs in Biomedicine 2017; 149:43-53. [PMID: 28802329] [DOI: 10.1016/j.cmpb.2017.07.009]
Abstract
BACKGROUND AND OBJECTIVES: The number of deaths worldwide due to melanoma has risen in recent times, in part because melanoma is the most aggressive type of skin cancer. Computational systems have been developed to assist dermatologists in early diagnosis of skin cancer, or even to monitor skin lesions. However, there still remains a challenge to improve classifiers for the diagnosis of such skin lesions. The main objective of this article is to evaluate different ensemble classification models based on input feature manipulation to diagnose skin lesions.
METHODS: Input feature manipulation processes are based on feature subset selections from shape properties, colour variation and texture analysis to generate diversity for the ensemble models. Three subset selection models are presented here: (1) a subset selection model based on specific feature groups, (2) a correlation-based subset selection model, and (3) a subset selection model based on feature selection algorithms. Each ensemble classification model is generated using an optimum-path forest classifier and integrated with a majority voting strategy. The proposed models were applied on a set of 1104 dermoscopic images using a cross-validation procedure.
RESULTS: The best results were obtained by the first ensemble classification model that generates a feature subset ensemble based on specific feature groups. The skin lesion diagnosis computational system achieved 94.3% accuracy, 91.8% sensitivity and 96.7% specificity.
CONCLUSIONS: The input feature manipulation process based on specific feature subsets generated the greatest diversity for the ensemble classification model with very promising results.
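The ensemble idea can be sketched as follows. The paper uses optimum-path forest classifiers; a hand-rolled nearest-centroid classifier stands in here so the sketch stays self-contained. Each member sees only one feature subset, and the ensemble takes a majority vote.

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean of the training rows (nearest-centroid model)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroid(model, X):
    classes = np.array(sorted(model))
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return classes[np.argmin(d, axis=0)]

def ensemble_predict(X, models, subsets):
    votes = np.stack([predict_centroid(m, X[:, s])
                      for m, s in zip(models, subsets)])
    # majority vote over members (binary labels 0/1 assumed)
    return (votes.mean(axis=0) > 0.5).astype(int)

# toy data: class 1 shifted in features 0-1; feature 2 is pure noise
rng = np.random.default_rng(5)
X0 = rng.normal(0.0, 0.3, (50, 3))
X1 = rng.normal(0.0, 0.3, (50, 3)); X1[:, :2] += 2.0
X = np.vstack([X0, X1]); y = np.array([0] * 50 + [1] * 50)
subsets = [[0], [1], [0, 1]]   # hypothetical "specific feature groups"
models = [fit_centroids(X[:, s], y) for s in subsets]
pred = ensemble_predict(X, models, subsets)
acc = (pred == y).mean()
```

The subsets play the role of the paper's shape/colour/texture groups: members trained on different views of the data disagree in different places, which is exactly the diversity the voting step exploits.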
Affiliation(s)
- Roberta B Oliveira
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Departamento de Engenharia Mecânica, Faculdade de Engenharia, Universidade do Porto, rua Dr. Roberto Frias, Porto 4200-465, Portugal.
- Aledir S Pereira
- Departamento de Ciências de Computação e Estatística, Instituto de Biociências, Letras e Ciências Exatas, Universidade Estadual Paulista, rua Cristóvão Colombo, 2265, São José do Rio Preto, SP 15054-000, Brazil.
- João Manuel R S Tavares
- Instituto de Ciência e Inovação em Engenharia Mecânica e Engenharia Industrial, Departamento de Engenharia Mecânica, Faculdade de Engenharia, Universidade do Porto, rua Dr. Roberto Frias, Porto 4200-465, Portugal.
17
Safdar M, Cui G, Kim YJ, Luo MR. Perceptually uniform color space for image signals including high dynamic range and wide gamut. Optics Express 2017; 25:15131-15151. [PMID: 28788944] [DOI: 10.1364/oe.25.015131]
Abstract
A perceptually uniform color space has long been desired for a wide range of imaging applications. Such a color space should represent a color pixel with three unique and independent attributes (lightness, chroma, and hue); be perceptually uniform over a wide gamut; be linear in iso-hue directions; predict both small and large color differences, as well as lightness, in high dynamic range environments; and have minimal computational cost for real-time or quasi-real-time processing. Presently available color spaces do not achieve these goals satisfactorily and comprehensively. In this study, a uniform color space is proposed and its performance in predicting a wide range of experimental data is presented in comparison with other state-of-the-art color spaces.
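The forward transform of the proposed space (widely known as Jzazbz) can be sketched as below, using the constants as published; this is a transcription from memory and worth double-checking against the paper before use. Input is CIE XYZ for D65, with absolute luminance Y in cd/m² up to 10,000.

```python
import numpy as np

M_LMS = np.array([[ 0.41478972, 0.579999, 0.0146480],
                  [-0.2015100,  1.120649, 0.0531008],
                  [-0.0166008,  0.264800, 0.6684799]])
M_IAB = np.array([[0.5,       0.5,       0.0      ],
                  [3.524000, -4.066708,  0.542708 ],
                  [0.199076,  1.096799, -1.295875 ]])
b, g = 1.15, 0.66
c1, c2, c3 = 3424 / 4096, 2413 / 128, 2392 / 128
n, p = 2610 / 16384, 1.7 * 2523 / 32
d, d0 = -0.56, 1.6295499532821566e-11

def xyz_to_jzazbz(X, Y, Z):
    # pre-adaptation of X and Y reduces the blue-hue curvature
    Xp = b * X - (b - 1) * Z
    Yp = g * Y - (g - 1) * X
    lms = M_LMS @ np.array([Xp, Yp, Z])        # assumed non-negative here
    t = (lms / 10000.0) ** n
    lms_pq = ((c1 + c2 * t) / (1 + c3 * t)) ** p   # PQ-style nonlinearity
    Iz, az, bz = M_IAB @ lms_pq
    Jz = (1 + d) * Iz / (1 + d * Iz) - d0      # d0 pins black to Jz = 0
    return Jz, az, bz

jz_w, _, _ = xyz_to_jzazbz(95.047, 100.0, 108.883)  # D65 white, 100 cd/m^2
jz_k, _, _ = xyz_to_jzazbz(0.0, 0.0, 0.0)           # black
```

The PQ-style nonlinearity is what gives the space its high dynamic range behavior: lightness Jz stays monotone in luminance all the way to 10,000 cd/m².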
18
Kimpe T, Rostang J, Van Hoey G, Xthona A. Color standard display function: A proposed extension of DICOM GSDF. Med Phys 2017; 43:5009. [PMID: 27587031] [DOI: 10.1118/1.4959544]
Abstract
PURPOSE: Color images are being used more and more in medical imaging, across a broad range of modalities and applications. While in the past color was mostly used for annotations, today it is also widely used for diagnostic purposes. Surprisingly, there is as yet no agreed-upon standard that describes how color medical images should be visualized and how calibration and quality assurance of color medical displays should be performed. This paper proposes the color standard display function (CSDF), an extension of the DICOM GSDF standard toward color. CSDF defines how color medical displays should be calibrated and how QA can be performed to obtain perceptually linear behavior not only for grayscale but also for color.
METHODS: The proposed CSDF algorithm uses DICOM GSDF calibration as a starting point and subsequently uses a color visual difference metric to redistribute colors in order to obtain perceptual linearity not only for grayscale but also for color behavior. A calibration and quality assurance algorithm is defined and validated on a wide range of display systems.
RESULTS: A detailed description of the proposed CSDF calibration and quality assurance algorithms is provided. These algorithms have been tested extensively on three types of display systems: consumer, professional, and medical-grade displays. Test results are reported both for the calibration algorithm and for the quantitative and visual quality assurance methods. The tests confirm that the described algorithm generates consistent results and increases perceptual linearity for color and grayscale visualization. Moreover, the proposed algorithms work well on a wide range of display systems.
CONCLUSIONS: CSDF has been proposed as an extension of the DICOM GSDF standard toward color. Calibration and QA algorithms for CSDF have been described in detail. The proposed algorithms have been tested on several types of display systems, and the results confirm that CSDF largely increases the perceptual linearity of visualized colors while remaining compliant with DICOM GSDF.
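The grayscale half of such a calibration can be sketched in simplified form: given a display's measured luminance response, build a lookup table whose output levels are equally spaced in a perceptual quantity. DICOM GSDF uses JND indices from the Barten model; CIE L* stands in here (applied outside its low-luminance validity range, as a simplification) to keep the sketch self-contained.

```python
import numpy as np

def perceptual_lut(measured_lum, lut_size=256):
    """measured_lum: luminance (cd/m^2) at each native driving level,
    assumed monotonically increasing. Returns fractional driving levels
    whose lightness steps are (nearly) equal."""
    Ymax = measured_lum[-1]
    Lstar = 116.0 * (measured_lum / Ymax) ** (1 / 3) - 16.0  # CIE lightness
    targets = np.linspace(Lstar[0], Lstar[-1], lut_size)     # equal L* steps
    levels = np.arange(len(measured_lum), dtype=float)
    # invert the measured lightness curve by piecewise-linear interpolation
    return np.interp(targets, Lstar, levels)

# hypothetical gamma-2.2 panel, 0.5-250 cd/m^2, 256 native levels
dv = np.arange(256) / 255.0
lum = 0.5 + (250.0 - 0.5) * dv ** 2.2
lut = perceptual_lut(lum)
# verify: lightness at the LUT's output levels is (nearly) equally spaced
L_out = 116.0 * (np.interp(lut, np.arange(256.0), lum) / lum[-1]) ** (1 / 3) - 16.0
steps = np.diff(L_out)
```

CSDF extends this one-dimensional redistribution to color by using a color visual difference metric in place of the scalar lightness scale.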
Affiliation(s)
- Tom Kimpe
- Barco NV, Healthcare Division, Beneluxpark 21, 8500 Kortrijk, Belgium
- Johan Rostang
- Barco NV, Healthcare Division, Beneluxpark 21, 8500 Kortrijk, Belgium
- Gert Van Hoey
- Barco NV, Healthcare Division, Beneluxpark 21, 8500 Kortrijk, Belgium
- Albert Xthona
- Barco NV, Healthcare Division, Beneluxpark 21, 8500 Kortrijk, Belgium
Collapse
|
19
|
Evaluation of green tea sensory quality via process characteristics and image information. FOOD AND BIOPRODUCTS PROCESSING 2017. [DOI: 10.1016/j.fbp.2016.12.004] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
20
|
BMOG: Boosted Gaussian Mixture Model with Controlled Complexity. PATTERN RECOGNITION AND IMAGE ANALYSIS 2017. [DOI: 10.1007/978-3-319-58838-4_6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
21
|
Computer Based Melanocytic and Nevus Image Enhancement and Segmentation. BIOMED RESEARCH INTERNATIONAL 2016; 2016:2082589. [PMID: 27774454 PMCID: PMC5059650 DOI: 10.1155/2016/2082589] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/10/2015] [Accepted: 07/18/2016] [Indexed: 01/25/2023]
Abstract
Digital dermoscopy aids dermatologists in monitoring potentially cancerous skin lesions. Melanoma is the fifth most common form of skin cancer; although relatively rare, it is the most dangerous, yet it is curable if detected at an early stage. Automated segmentation of a cancerous lesion from normal skin is the most critical yet difficult step in computerized lesion detection and classification, and the effectiveness and accuracy of lesion classification depend critically on the quality of lesion segmentation. In this paper, we propose a novel approach that automatically preprocesses the image and then segments the lesion. The system filters unwanted artifacts including hairs, gel, bubbles, and specular reflection. A novel wavelet-based approach is presented for detecting and inpainting the hairs present in the images. The contrast between lesion and skin is enhanced using an adaptive sigmoidal function that accounts for the localized intensity distribution within a given lesion image. We then present a segmentation approach that precisely separates the lesion from the background. The proposed approach is tested on the European database of dermoscopic images, and results are compared with competing methods to demonstrate its superiority.
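The adaptive sigmoidal enhancement step can be illustrated with a minimal sketch. This is not the paper's exact formulation: the `gain` steepness parameter and the choice of the image mean as the sigmoid midpoint are our own assumptions.

```python
import math

def adaptive_sigmoid(pixels: list[float], gain: float = 10.0) -> list[float]:
    """Remap intensities in [0, 1] with a logistic curve whose midpoint
    adapts to this image's mean, stretching contrast around the lesion's
    own intensity distribution."""
    mid = sum(pixels) / len(pixels)  # adapt the cutoff to the local distribution
    out = [1.0 / (1.0 + math.exp(-gain * (p - mid))) for p in pixels]
    # Rescale the result to span the full [0, 1] range.
    lo, hi = min(out), max(out)
    return [(v - lo) / (hi - lo) for v in out] if hi > lo else out
```

Because the logistic curve is monotone, pixel ordering is preserved while contrast around the adaptive midpoint is amplified.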
|
22
|
Khalid S, Jamil U, Saleem K, Akram MU, Manzoor W, Ahmed W, Sohail A. Segmentation of skin lesion using Cohen-Daubechies-Feauveau biorthogonal wavelet. SPRINGERPLUS 2016; 5:1603. [PMID: 27652176 PMCID: PMC5028360 DOI: 10.1186/s40064-016-3211-4] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/11/2016] [Accepted: 09/02/2016] [Indexed: 11/10/2022]
Abstract
This paper presents a novel technique for segmenting skin lesions in dermoscopic images based on the wavelet transform combined with morphological operations. Acquired dermoscopic images may include artifacts in the form of gel, dense hairs, and water bubbles, which make accurate segmentation more challenging; we therefore also incorporate an efficient approach for artifact removal and hair inpainting to improve the overall segmentation results. The choice of color space is analyzed as well, and selecting the blue channel for lesion segmentation is confirmed to perform better than techniques that rely on grayscale conversion. We tackle the problem by finding the most suitable mother wavelet for skin lesion segmentation: the performance achieved with the 'bior6.8' Cohen-Daubechies-Feauveau biorthogonal wavelet is superior to that of other wavelet families. The proposed methodology achieves 93.87% accuracy on dermoscopic images from the PH2 dataset, acquired at the Dermatology Service of Hospital Pedro Hispano, Matosinhos, Portugal.
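The channel-selection and thresholding stages can be sketched as follows. This simplified illustration applies Otsu's classical threshold to the blue channel; the paper's bior6.8 wavelet decomposition and morphological cleanup are omitted.

```python
def otsu_threshold(channel: list[int]) -> int:
    """Otsu's method on 8-bit values: pick the threshold that maximizes
    between-class variance of the histogram."""
    hist = [0] * 256
    for v in channel:
        hist[v] += 1
    total = len(channel)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = 0       # background pixel count
    sum_b = 0.0   # background intensity sum
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b
        m_f = (total_sum - sum_b) / w_f
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def segment_blue(rgb_pixels: list[tuple[int, int, int]]) -> list[bool]:
    """True where the blue value falls at or below the Otsu threshold
    (the typically darker lesion), using only the blue channel."""
    blue = [p[2] for p in rgb_pixels]
    t = otsu_threshold(blue)
    return [b <= t for b in blue]
```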
Affiliation(s)
- Shehzad Khalid
- Department of Computer Engineering, Bahria University, Islamabad, Pakistan
- Uzma Jamil
- Department of Computer Engineering, Bahria University, Islamabad, Pakistan; Government College University, Faisalabad, Pakistan
- Kashif Saleem
- Department of Computer Engineering, Bahria University, Islamabad, Pakistan
- M Usman Akram
- National University of Sciences and Technology, Islamabad, Pakistan
- Waleed Manzoor
- Department of Computer Engineering, Bahria University, Islamabad, Pakistan
- Waqas Ahmed
- Department of Computer Engineering, Bahria University, Islamabad, Pakistan
- Amina Sohail
- Department of Computer Engineering, Bahria University, Islamabad, Pakistan
|
23
|
|
24
|
Lee D, Plataniotis KN. Towards a Full-Reference Quality Assessment for Color Images Using Directional Statistics. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2015; 24:3950-3965. [PMID: 26186778 DOI: 10.1109/tip.2015.2456419] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
This paper presents a novel computational model for quantifying the perceptual quality of color images consistently with subjective evaluations. The proposed full-reference color metric, namely, a directional statistics-based color similarity index, is designed to consistently perform well over commonly encountered chromatic and achromatic distortions. In order to accurately predict the visual quality of color images, we make use of local color descriptors extracted from three perceptual color channels: 1) hue; 2) chroma; and 3) lightness. In particular, directional statistical tools are employed to properly process hue data by considering their periodicities. Moreover, two weighting mechanisms are exploited to accurately combine locally measured comparison scores into a final score. Extensive experimentation performed on large-scale databases indicates that the proposed metric is effective across a wide range of chromatic and achromatic distortions, making it better suited for the evaluation and optimization of color image processing algorithms.
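The directional-statistics handling of hue can be illustrated with a short sketch. Hue is an angle, so ordinary averaging fails near the 0/360 wrap-around (the mean of 350° and 10° should be 0°, not 180°). The circular mean and cosine-based hue similarity below are textbook constructions, not the paper's exact descriptors.

```python
import math

def circular_mean_deg(hues: list[float]) -> float:
    """Mean direction of hue angles in degrees, in [0, 360)."""
    s = sum(math.sin(math.radians(h)) for h in hues)
    c = sum(math.cos(math.radians(h)) for h in hues)
    return math.degrees(math.atan2(s, c)) % 360.0

def hue_similarity(h1: float, h2: float) -> float:
    """1 for identical hues, 0 for opposite hues, respecting periodicity."""
    return 0.5 * (1.0 + math.cos(math.radians(h1 - h2)))
```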
|
25
|
Guan T, Zhou D, Xu C, Liu Y. A novel RGB Fourier transform-based color space for optical microscopic image processing. ACTA ACUST UNITED AC 2014. [DOI: 10.1186/s40638-014-0016-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
|
26
|
Le Moan S, Urban P. Image-difference prediction: from color to spectral. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2014; 23:2058-2068. [PMID: 24710405 DOI: 10.1109/tip.2014.2311373] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
We propose a new strategy to evaluate the quality of multi- and hyperspectral images from the perspective of human perception. We define the spectral image difference as the overall perceived difference between two spectral images under a set of specified viewing conditions (illuminants). First, we analyze the stability of seven image-difference features across illuminants by means of an information-theoretic strategy. We demonstrate, in particular, that for common spectral distortions (spectral gamut mapping, spectral compression, spectral reconstruction), chromatic features vary much more than achromatic ones, even when chromatic adaptation is taken into account. We then propose two computationally efficient spectral image-difference metrics and compare them with the results of a subjective visual experiment. A significant improvement is shown over existing metrics such as the widely used root-mean-square error.
|
27
|
Preiss J, Fernandes F, Urban P. Color-image quality assessment: from prediction to optimization. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2014; 23:1366-1378. [PMID: 24723533 DOI: 10.1109/tip.2014.2302684] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
While image-difference metrics show good prediction performance on visual data, they often yield artifact-contaminated results if used as objective functions for optimizing complex image-processing tasks. We investigate in this regard the recently proposed color-image-difference (CID) metric particularly developed for predicting gamut-mapping distortions. We present an algorithm for optimizing gamut mapping employing the CID metric as the objective function. Resulting images contain various visual artifacts, which are addressed by multiple modifications yielding the improved color-image-difference (iCID) metric. The iCID-based optimizations are free from artifacts and retain contrast, structure, and color of the original image to a great extent. Furthermore, the prediction performance on visual data is improved by the modifications.
|
28
|
Shen R, Cheng I, Basu A. QoE-based multi-exposure fusion in hierarchical multivariate Gaussian CRF. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2013; 22:2469-2478. [PMID: 23288338 DOI: 10.1109/tip.2012.2236346] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Many state-of-the-art fusion methods that combine details from images taken under different exposures into one well-exposed image can be found in the literature. However, little work has explored how perceptual factors can give viewers a better quality of experience with fused images. We propose two perceptual quality measures, perceived local contrast and color saturation, which are embedded in our novel hierarchical multivariate Gaussian conditional random field model to improve multi-exposure fusion. We show that our method generates images of better quality than existing methods for a variety of scenes.
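The weighting idea behind such fusion can be sketched per scanline. The code below is a much-simplified illustration, not the paper's hierarchical CRF model: each exposure gets a quality weight from local contrast and color saturation, the weights are normalized across exposures, and the fused pixel is the weighted average. The `1e-6` weight floor and the 1-D Laplacian contrast are our own simplifications.

```python
def luma(rgb: tuple[float, float, float]) -> float:
    """Rec. 709 luma of an (r, g, b) pixel in [0, 1]."""
    return 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]

def saturation(rgb: tuple[float, float, float]) -> float:
    """Standard deviation of the channels: gray pixels score 0."""
    mean = sum(rgb) / 3.0
    return (sum((c - mean) ** 2 for c in rgb) / 3.0) ** 0.5

def local_contrast(lumas: list[float], i: int) -> float:
    """Absolute Laplacian along a scanline (edges clamped)."""
    left = lumas[max(i - 1, 0)]
    right = lumas[min(i + 1, len(lumas) - 1)]
    return abs(left - 2.0 * lumas[i] + right)

def fuse(exposures: list[list[tuple[float, float, float]]]):
    """Fuse aligned scanlines (one per exposure) of (r, g, b) pixels."""
    n = len(exposures[0])
    lumas = [[luma(p) for p in exp] for exp in exposures]
    fused = []
    for i in range(n):
        # Quality weight per exposure; small floor avoids division by zero.
        w = [local_contrast(lumas[k], i) + saturation(exposures[k][i]) + 1e-6
             for k in range(len(exposures))]
        total = sum(w)
        fused.append(tuple(
            sum(w[k] * exposures[k][i][c] for k in range(len(exposures))) / total
            for c in range(3)))
    return fused
```

A flat gray exposure contributes almost nothing, so the fused result follows the more saturated, higher-contrast exposure.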
Affiliation(s)
- Rui Shen
- Department of Computing Science, University of Alberta, Edmonton, AB T6G 2E8, Canada.
|
29
|
Lissner I, Preiss J, Urban P, Lichtenauer MS, Zolliker P. Image-difference prediction: from grayscale to color. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2013; 22:435-446. [PMID: 23008252 DOI: 10.1109/tip.2012.2216279] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Existing image-difference measures show excellent accuracy in predicting distortions, such as lossy compression, noise, and blur. Their performance on certain other distortions could be improved; one example of this is gamut mapping. This is partly because they either do not interpret chromatic information correctly or they ignore it entirely. We present an image-difference framework that comprises image normalization, feature extraction, and feature combination. Based on this framework, we create image-difference measures by selecting specific implementations for each of the steps. Particular emphasis is placed on using color information to improve the assessment of gamut-mapped images. Our best image-difference measure shows significantly higher prediction accuracy on a gamut-mapping dataset than all other evaluated measures.
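The framework's three stages can be sketched for a single achromatic feature. The min-max normalization, the SSIM-style lightness comparison, and the plain averaging combination below are hypothetical choices for illustration, not the specific implementations selected in the paper.

```python
def normalize(img: list[float]) -> list[float]:
    """Stage 1: map intensities to [0, 1] (min-max normalization)."""
    lo, hi = min(img), max(img)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in img]

def lightness_feature(a: float, b: float, c: float = 0.01) -> float:
    """Stage 2: SSIM-style comparison of two normalized lightness values;
    1 when identical, smaller as they diverge (c stabilizes near zero)."""
    return (2.0 * a * b + c) / (a * a + b * b + c)

def image_difference(ref: list[float], test: list[float]) -> float:
    """Stage 3: combine per-pixel scores into one number; 0 = identical."""
    ref_n, test_n = normalize(ref), normalize(test)
    scores = [lightness_feature(a, b) for a, b in zip(ref_n, test_n)]
    return 1.0 - sum(scores) / len(scores)
```

Swapping any single stage (e.g. a chromatic feature for stage 2) yields a different concrete measure within the same framework, which is precisely the modularity the abstract describes.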
Affiliation(s)
- Ingmar Lissner
- Institute of Printing Science and Technology, Technische Universität Darmstadt, Darmstadt 64289, Germany.
|
30
|
Abbas Q, Garcia IF, Emre Celebi M, Ahmad W, Mushtaq Q. A perceptually oriented method for contrast enhancement and segmentation of dermoscopy images. Skin Res Technol 2012; 19:e490-7. [DOI: 10.1111/j.1600-0846.2012.00670.x] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/04/2012] [Indexed: 01/23/2023]
Affiliation(s)
- Irene Fondón Garcia
- Department of Signal Theory and Communications, School of Engineering, Path of Discovery, Seville, Spain
- M. Emre Celebi
- Department of Computer Science, Louisiana State University, Shreveport, Louisiana, USA
|