1. Dinmeung N, Sirisathitkul Y, Sirisathitkul C. Colorimetric parameters for bloodstain characterization by smartphone. Arab J Basic Appl Sci 2023. DOI: 10.1080/25765299.2023.2194129
Affiliation(s)
- Natthinee Dinmeung
- Division of Physics, School of Science, Walailak University, Nakhon Si Thammarat, Thailand
- Yaowarat Sirisathitkul
- Division of Computer Engineering and Electronics, School of Engineering and Technology, Walailak University, Nakhon Si Thammarat, Thailand
2. Zhang G, Song S, Panescu J, Shapiro N, Dannemiller KC, Qin R. A novel systems solution for accurate colorimetric measurement through smartphone-based augmented reality. PLoS One 2023; 18:e0287099. PMID: 37319291; PMCID: PMC10270580; DOI: 10.1371/journal.pone.0287099
Abstract
Quantifying the colors of objects is useful in a wide range of applications, including medical diagnosis, agricultural monitoring, and food safety. Accurate colorimetric measurement of objects is a laborious process normally performed through a color matching test in the laboratory. A promising alternative is to use digital images for colorimetric measurement, due to their portability and ease of use. However, image-based measurements suffer from errors caused by the non-linear image formation process and unpredictable environmental lighting. Solutions to this problem often perform relative color correction among multiple images through discrete color reference boards, which may yield biased results due to the lack of continuous observation. In this paper, we propose a smartphone-based solution that couples a designated color reference board with a novel color correction algorithm to achieve accurate and absolute color measurements. Our color reference board contains multiple color stripes with continuous color sampling at the sides. A novel correction algorithm is proposed that uses a first-order spatially varying regression model to perform the color correction, leveraging both the absolute color magnitude and scale to maximize correction accuracy. The proposed algorithm is implemented as a "human-in-the-loop" smartphone application, in which users are guided by an augmented reality scheme with a marker tracking module to take images at an angle that minimizes the impact of non-Lambertian reflectance. Our experimental results show that our colorimetric measurement is device independent and can reduce color variance by up to 90% for images collected under different lighting conditions. In the application of reading pH values from test papers, we show that our system performs 200% better than human reading.
The designed color reference board, the correction algorithm, and our augmented reality guiding approach form an integrated system offering a novel solution for measuring color with increased accuracy. This technique has the flexibility to improve color reading performance in systems beyond the existing applications, as evidenced by both qualitative and quantitative experiments on example applications such as pH-test reading.
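The correction algorithm summarized above can be sketched as a least-squares fit. The abstract does not give the exact parameterization, so the per-channel gain-and-offset model below (coefficients varying linearly with normalized board coordinates x, y) and the function names are illustrative assumptions:

```python
import numpy as np

def fit_spatial_color_correction(obs, ref, xy):
    """Fit a per-channel, first-order spatially varying regression:
        corrected = (a0 + a1*x + a2*y) * observed + (b0 + b1*x + b2*y)
    obs, ref : (N, 3) observed and reference RGB patch colors in [0, 1]
    xy       : (N, 2) normalized patch coordinates on the board
    Returns one (6,) coefficient vector per channel."""
    x, y = xy[:, 0], xy[:, 1]
    coeffs = []
    for ch in range(3):
        c = obs[:, ch]
        # Columns: spatially varying gain (c, c*x, c*y) and offset (1, x, y)
        A = np.stack([c, c * x, c * y, np.ones_like(c), x, y], axis=1)
        w, *_ = np.linalg.lstsq(A, ref[:, ch], rcond=None)
        coeffs.append(w)
    return coeffs

def apply_correction(rgb, xy, coeffs):
    """Correct one RGB value observed at normalized location xy."""
    x, y = xy
    return np.array([w @ np.array([rgb[ch], rgb[ch] * x, rgb[ch] * y, 1.0, x, y])
                     for ch, w in enumerate(coeffs)])
```

Because the coefficients themselves depend on (x, y), a single fit can compensate for illumination that varies smoothly across the reference board, rather than assuming one global correction.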
Affiliation(s)
- Guixiang Zhang
- Department of Civil, Environmental and Geodetic Engineering, The Ohio State University, Columbus, Ohio, United States of America
- Department of Electrical and Computer Engineering, The Ohio State University, Columbus, Ohio, United States of America
- Geospatial Data Analytics Lab, The Ohio State University, Columbus, Ohio, United States of America
- Shuang Song
- Department of Civil, Environmental and Geodetic Engineering, The Ohio State University, Columbus, Ohio, United States of America
- Geospatial Data Analytics Lab, The Ohio State University, Columbus, Ohio, United States of America
- Jenny Panescu
- Department of Civil, Environmental and Geodetic Engineering, The Ohio State University, Columbus, Ohio, United States of America
- Nicholas Shapiro
- Institute for Society and Genetics, University of California, Los Angeles, Los Angeles, California, United States of America
- Karen C. Dannemiller
- Department of Civil, Environmental and Geodetic Engineering, The Ohio State University, Columbus, Ohio, United States of America
- Environmental Health Sciences, The Ohio State University, Columbus, Ohio, United States of America
- Sustainability Institute, The Ohio State University, Columbus, Ohio, United States of America
- Rongjun Qin
- Department of Civil, Environmental and Geodetic Engineering, The Ohio State University, Columbus, Ohio, United States of America
- Department of Electrical and Computer Engineering, The Ohio State University, Columbus, Ohio, United States of America
- Geospatial Data Analytics Lab, The Ohio State University, Columbus, Ohio, United States of America
- Translational Data Analytics Institute, The Ohio State University, Columbus, Ohio, United States of America
3. Cugmas B, Štruc E, Kovče U, Lužar K, Olivry T. Evaluation of native canine skin color by smartphone-based dermatoscopy. Skin Res Technol 2022; 28:299-304. PMID: 35064590; PMCID: PMC9907669; DOI: 10.1111/srt.13130
Abstract
BACKGROUND Human skin color, predominantly determined by the chromophores melanin, hemoglobin, and exogenous carotenoids, is often measured for various medical and cosmetic applications. Although colorimetry has been used to evaluate skin erythema in allergic dogs, the native canine skin color remains unknown. METHODS We measured the skin color of 101 healthy dogs using a calibrated optical system consisting of a smartphone and a DermLite DL1 mobile dermatoscope. The results were retrieved in the CIELAB color system and compared to the human color ranges. RESULTS The lightness (L*) of canine skin ranged from 28.5 to 78.3, slightly broader than that of human skin. There was a difference of 3.9 in redness (a*) between canine and human skin, but this variation could be attributed to the colorimetric error of the optical system, which is of similar magnitude. Nonetheless, skin yellowness was significantly different between dogs and humans (median b* of 12.3 versus 16.6, respectively; p < 0.01). This difference might arise because canids cannot accumulate the typically yellowish carotenoids. Furthermore, the native canine skin color did not exhibit the typical dependence between the lightness (L*) and yellowness (b*) coordinates, known as the individual typology angle (°ITA). CONCLUSION We reported the first dataset of native canine skin color in the CIELAB color space. We found a similarity in skin lightness and a difference in skin yellowness. However, further studies are needed for a more precise comparison of skin redness.
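The individual typology angle mentioned in this abstract is conventionally computed from the CIELAB coordinates as ITA = arctan((L* − 50)/b*) · 180/π; a minimal sketch follows (the function name is ours, and atan2 is used in place of the plain arctangent as a safeguard for b* = 0):

```python
import math

def individual_typology_angle(L_star, b_star):
    """Individual typology angle (degrees):
        ITA = arctan((L* - 50) / b*) * 180 / pi
    Computed with atan2 so that b* = 0 is handled gracefully.
    Higher ITA corresponds to lighter skin."""
    return math.degrees(math.atan2(L_star - 50.0, b_star))
```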
Affiliation(s)
- Blaž Cugmas
- Biophotonics laboratory, Institute of Atomic Physics and Spectroscopy, University of Latvia, Rīga, Latvia
- Veterinary clinic Zamba, Vets4science Ltd, Celje, Slovenia
- Urška Kovče
- Veterinary clinic Zamba, Vets4science Ltd, Celje, Slovenia
- Katja Lužar
- Veterinary clinic Zamba, Vets4science Ltd, Celje, Slovenia
- Thierry Olivry
- Department of Clinical Sciences, College of Veterinary Medicine, NC State University, Raleigh, North Carolina, USA
4. Abebe MA, Hardeberg JY, Vartdal G. Smartphones’ Skin Colour Reproduction Analysis for Neonatal Jaundice Detection. J Imaging Sci Technol 2021. DOI: 10.2352/j.imagingsci.technol.2021.65.6.060407
Abstract
In recent years, smartphone-based colour imaging systems have been increasingly used for neonatal jaundice detection. These systems estimate bilirubin concentration levels that correlate newborns' skin colour images with total serum bilirubin (TSB) and transcutaneous bilirubinometry (TcB) measurements. However, the colour reproduction capacity of smartphone cameras is known to be influenced by various factors, including technological and acquisition-process variability. To make an accurate bilirubin estimation irrespective of the type of smartphone and the illumination conditions used to capture the newborns' skin images, an inclusive and complete model, or data set, representing all possible real-world acquisition scenarios needs to be used. Owing to the challenges of generating such a model or data set, some solutions rely on a reduced data set (designed for reference conditions and devices only) together with colour correction systems (for transforming other smartphones' skin images into the reference space). Such approaches make the bilirubin estimation methods highly dependent on the accuracy of the employed colour correction systems and on their capability to reduce device-to-device colour reproduction variability. However, state-of-the-art methods with similar methodologies have only been evaluated and validated on a single smartphone camera. The vulnerability of these systems to making an incorrect jaundice diagnosis can only be shown by a thorough investigation of colour reproduction variability across an extended number of smartphones and illumination conditions. Accordingly, this work presents and discusses the results of such a broad investigation, covering seven smartphone cameras, ten light sources, and three different colour correction approaches. The overall results show statistically significant colour differences among devices, even after colour correction, and indicate that further analysis of the clinical significance of such differences is required for skin-colour-based jaundice diagnosis.
Affiliation(s)
- Gunnar Vartdal
- Colour and Visual Computing Laboratory, Picterus AS; Gjøvik, Norway
5. Almadhor A, Rauf HT, Lali MIU, Damaševičius R, Alouffi B, Alharbi A. AI-Driven Framework for Recognition of Guava Plant Diseases through Machine Learning from DSLR Camera Sensor Based High Resolution Imagery. Sensors 2021; 21:3830. PMID: 34205885; PMCID: PMC8198251; DOI: 10.3390/s21113830
Abstract
Plant diseases can cause a considerable reduction in the quality and quantity of agricultural products. Guava, well known as the apple of the tropics, is a significant fruit cultivated in tropical regions. It is attacked by 177 pathogens, including 167 fungal ones and others such as bacteria, algae, and nematodes. In addition, postharvest diseases may cause crucial production loss. Because the symptoms of the various guava diseases differ only slightly, expert opinion is required for disease analysis, and improper diagnosis may cause economic losses to farmers through the improper use of pesticides. Automatic detection of diseases as soon as they emerge on the plants' leaves and fruit is required to maintain high crop yields. In this paper, an artificial intelligence (AI) driven framework is presented to detect and classify the most common guava plant diseases. The proposed framework employs ΔE color-difference image segmentation to segregate the areas infected by disease. Furthermore, color (RGB, HSV) histogram and textural (LBP) features are extracted to form rich, informative feature vectors. The combined color and textural features are used for identification and attain similar outcomes compared to the individual channels, while disease recognition is performed by advanced machine-learning classifiers (fine KNN, complex tree, boosted tree, bagged tree, cubic SVM). The proposed framework is evaluated on a high-resolution (18 MP) image dataset of guava leaves and fruit. The best recognition results were obtained by the bagged tree classifier on a set of RGB, HSV, and LBP features (99% accuracy in recognizing four guava fruit diseases (canker, mummification, dot, and rust) against healthy fruit). The proposed framework may help farmers avoid possible production loss by taking early precautions.
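The ΔE segmentation step named in this abstract can be illustrated with the classic CIE76 color difference; the threshold value and function names below are illustrative assumptions, not the settings used in the paper:

```python
import numpy as np

def delta_e_76(lab_image, lab_reference):
    """CIE76 color difference between every pixel of a CIELAB image
    (H, W, 3) and a single reference CIELAB color (3,)."""
    return np.sqrt(np.sum((lab_image - lab_reference) ** 2, axis=-1))

def segment_by_delta_e(lab_image, lab_healthy, threshold=20.0):
    """Boolean mask of pixels whose CIE76 difference from the
    healthy-tissue color exceeds the threshold (candidate lesions)."""
    return delta_e_76(lab_image, lab_healthy) > threshold
```

Working in CIELAB rather than RGB makes a Euclidean threshold roughly perceptual, which is why ΔE is a natural criterion for separating diseased from healthy regions.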
Affiliation(s)
- Ahmad Almadhor
- Department of Computer Engineering and Networks, Jouf University, Sakaka 72388, Saudi Arabia
- Hafiz Tayyab Rauf
- Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent ST4 2DE, UK
- Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
- Bader Alouffi
- Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Abdullah Alharbi
- Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
6. Hunt B, Ruiz AJ, Pogue BW. Smartphone-based imaging systems for medical applications: a critical review. J Biomed Opt 2021; 26:040902. PMID: 33860648; PMCID: PMC8047775; DOI: 10.1117/1.jbo.26.4.040902
Abstract
SIGNIFICANCE Smartphones offer an enormous array of functionality and are increasingly being used with specialized attachments in a range of healthcare applications. A review of key developments and uses, with an assessment of strengths and limitations in various clinical workflows, was completed. AIM Our review studies how smartphone-based imaging (SBI) systems are designed and tested for specialized applications in medicine and healthcare. An evaluation of current research studies is used to provide guidelines for improving the impact of these research advances. APPROACH First, the established and emerging smartphone capabilities that can be leveraged for biomedical imaging are detailed. Then, methods and materials for fabricating optical, mechanical, and electrical interface components are summarized. Recent systems are categorized into four groups based on their intended application and clinical workflow: ex vivo diagnostic, in vivo diagnostic, monitoring, and treatment guidance. Lastly, strengths and limitations of current SBI systems within these applications are discussed. RESULTS The native smartphone capabilities relevant to biomedical imaging include cameras, touchscreens, networking, computation, 3D sensing, audio, and motion, in addition to commercial wearable peripheral devices. Through user-centered design of custom hardware and software interfaces, these capabilities have the potential to enable portable, easy-to-use, point-of-care biomedical imaging systems. However, due to barriers in programming custom software and on-board image analysis pipelines, many research prototypes fail to achieve a prospective clinical evaluation as intended. Effective clinical use cases appear to be those in which handheld, noninvasive image guidance is needed and accommodated by the clinical workflow. Handheld systems for in vivo, multispectral, and quantitative fluorescence imaging are a promising development for diagnostic and treatment guidance applications. CONCLUSIONS A holistic assessment of SBI systems must include interpretation of their value for the intended clinical settings and of how their implementations enable better workflow. A set of six guidelines is proposed to evaluate the appropriateness of smartphone utilization in terms of clinical context, completeness, compactness, connectivity, cost, and claims. Ongoing work should prioritize realistic clinical assessments with quantitative and qualitative comparison to non-smartphone systems to clearly demonstrate the value of smartphone-based systems. Improved hardware design to accommodate the rapidly changing smartphone ecosystem, creation of open-source image acquisition and analysis pipelines, and adoption of robust calibration techniques to address phone-to-phone variability are three high-priority areas to move SBI research forward.
Affiliation(s)
- Brady Hunt
- Dartmouth College, Thayer School of Engineering, Hanover, New Hampshire, United States
- Address all correspondence to Brady Hunt
- Alberto J. Ruiz
- Dartmouth College, Thayer School of Engineering, Hanover, New Hampshire, United States
- Brian W. Pogue
- Dartmouth College, Thayer School of Engineering, Hanover, New Hampshire, United States
7. Cugmas B, Viškere D, Štruc E, Olivry T. Evaluation of Erythema Severity in Dermatoscopic Images of Canine Skin: Erythema Index Assessment and Image Sampling Reliability. Sensors 2021; 21:1285. PMID: 33670225; PMCID: PMC7916917; DOI: 10.3390/s21041285
Abstract
Regular monitoring of erythema, one of the most important skin lesions in atopic (allergic) dogs, is essential for successful anti-allergic therapy. Smartphone-based dermatoscopy is a convenient way to acquire quality images of erythematous skin. However, image sampling to evaluate erythema severity is still done manually, introducing variability into the results. In this study, we investigated the correlation between the most popular erythema indices (EIs) and dermatologists' erythema perception, and we measured the intra- and inter-rater variability of the currently used manual image-sampling methods (ISMs). We showed that EI_BRG, based on all three RGB (red, green, and blue) channels, performed best, with an average Spearman coefficient of 0.75 and a typical absolute disagreement of less than 14% with the erythema assessed by clinicians. On the other hand, two image-sampling methods, based on selecting either specific pixels or small skin areas, performed similarly well. They achieved high intra- and inter-rater reliability, with the intraclass correlation coefficient (ICC) and Krippendorff's alpha well above 0.90. These results indicate that smartphone-based dermatoscopy can be a convenient and precise way to evaluate skin erythema severity. However, better-outlined, or even automated, ISMs are likely to improve intra- and inter-rater reliability in severe erythematous cases.
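The abstract does not give the exact EI_BRG definition, so the sketch below uses a classic red/green log-ratio erythema index as a stand-in to illustrate how an EI is computed from a manually sampled region; the study's three-channel index should be taken from the paper itself:

```python
import numpy as np

def erythema_index_log_ratio(pixels):
    """Illustrative red/green log-ratio erythema index for a sampled
    skin region: EI = 100 * log10(mean R / mean G).
    Erythema raises red reflectance relative to green, so redder
    (more erythematous) skin yields a larger index.
    pixels : (N, 3) RGB samples with values in (0, 255]."""
    mean = pixels.mean(axis=0).astype(float)
    return 100.0 * np.log10(mean[0] / mean[1])
```

Averaging over all sampled pixels before taking the ratio is one way an ISM's pixel or area selection feeds into the index, which is why the sampling step itself affects reliability.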
Affiliation(s)
- Blaž Cugmas
- Biophotonics laboratory, Institute of Atomic Physics and Spectroscopy, University of Latvia, 19 Raiņa Blvd., LV-1586 Rīga, Latvia
- Correspondence: Tel. +371-67-033-848
- Daira Viškere
- Biophotonics laboratory, Institute of Atomic Physics and Spectroscopy, University of Latvia, 19 Raiņa Blvd., LV-1586 Rīga, Latvia
- Faculty of Veterinary Medicine, Latvia University of Life Sciences and Technologies, 8 Kristapa Helmaņa Str., LV-3004 Jelgava, Latvia
- Eva Štruc
- Vetamplify SIA, veterinary services, 57/59-32 Krišjāņa Valdemāra Str., LV-1010 Rīga, Latvia
- Thierry Olivry
- Department of Clinical Sciences, College of Veterinary Medicine, NC State University, 1060 William Moore Dr., Raleigh, NC 27607, USA
- Comparative Medicine Institute, NC State University, Raleigh, NC 27606, USA