1. Jung G, Lee J, Kim S. Spectrum-based deep learning framework for dermatological pigment analysis and simulation. Comput Biol Med 2024;178:108741. [PMID: 38879933] [DOI: 10.1016/j.compbiomed.2024.108741]
Abstract
BACKGROUND Deep learning in dermatology presents promising tools for automated diagnosis but faces challenges, including labor-intensive ground truth preparation and a primary focus on visually identifiable features. Spectrum-based approaches offer professional-level information like pigment distribution maps, but encounter practical limitations such as complex system requirements.
METHODS This study introduces a spectrum-based framework for training a deep learning model to generate melanin and hemoglobin distribution maps from skin images. This approach eliminates the need for manually prepared ground truth by synthesizing output maps into skin images for regression analysis. The framework is applied to acquire spectral data, create pigment distribution maps, and simulate pigment variations.
RESULTS Our model generated reflectance spectra and spectral images that accurately reflect pigment absorption properties, outperforming spectral upsampling methods. It produced pigment distribution maps with correlation coefficients of 0.913 for melanin and 0.941 for hemoglobin compared to the VISIA system. Additionally, the model's simulated images of pigment variations exhibited a proportional correlation with adjustments made to pigment levels. These evaluations are based on pigment absorption properties, the Individual Typology Angle (ITA), and pigment indices.
CONCLUSION The model produces pigment distribution maps comparable to those from specialized clinical equipment and simulated images with numerically adjusted pigment variations. This approach demonstrates significant promise for developing professional-level diagnostic tools for future clinical applications.
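The Individual Typology Angle used in the evaluation above is a standard CIELAB-derived skin-tone measure, independent of this paper's model. As a quick reference, a minimal sketch of the standard formula (function name and example values are illustrative; L* and b* are assumed to come from a colorimeter or a converted RGB image):

```python
import math

def individual_typology_angle(L_star: float, b_star: float) -> float:
    """Individual Typology Angle (ITA, in degrees) from CIELAB L* and b*.

    ITA = arctan((L* - 50) / b*) * 180 / pi. Higher values indicate
    lighter skin; atan2 is used so b* near zero is handled gracefully.
    """
    return math.degrees(math.atan2(L_star - 50.0, b_star))

# Hypothetical measurement of a light skin patch:
print(individual_typology_angle(70.0, 15.0))  # ~53.1 degrees
```

By convention, ITA > 55° is classified as very light skin and ITA < -30° as dark skin; the grouping thresholds vary slightly across studies.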
Affiliation(s)
- Geunho Jung
- AI R&D center, lululab Inc., 318 Dosan-daero, Gangnam-gu, Seoul, 06054, Republic of Korea
- Jongha Lee
- AI R&D center, lululab Inc., 318 Dosan-daero, Gangnam-gu, Seoul, 06054, Republic of Korea
- Semin Kim
- AI R&D center, lululab Inc., 318 Dosan-daero, Gangnam-gu, Seoul, 06054, Republic of Korea
2. Lin Q, Guo X, Feng B, Guo J, Ni S, Dong H. A novel multi-task learning network for skin lesion classification based on multi-modal clues and label-level fusion. Comput Biol Med 2024;175:108549. [PMID: 38704901] [DOI: 10.1016/j.compbiomed.2024.108549]
Abstract
In this paper, we propose a multi-task learning (MTL) network based on the label-level fusion of metadata and hand-crafted features, using unsupervised clustering to generate new clustering labels as an optimization goal. We propose an MTL module (MTLM) that incorporates an attention mechanism to enable the model to learn more integrated, variable information. We propose a dynamic strategy to adjust the loss weights of different tasks and trade off the contributions of multiple branches. Instead of feature-level fusion, we propose label-level fusion and combine the results of our proposed MTLM with the results of the image classification network to achieve better lesion prediction on multiple dermatological datasets. We verify the effectiveness of the proposed model by quantitative and qualitative measures. The MTL network using multi-modal clues and label-level fusion yields significant performance improvements for skin lesion classification.
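The abstract does not spell out the dynamic loss-weighting rule. One common generic strategy, shown purely as an illustration and not necessarily the authors' method, is Dynamic Weight Averaging-style reweighting: tasks whose loss is shrinking more slowly get larger weights in the next training round (all names and the temperature value here are assumptions):

```python
import math

def dynamic_task_weights(loss_history, temperature=2.0):
    """Dynamic Weight Averaging-style task weights (a generic MTL
    strategy, not necessarily the one used in the paper).

    loss_history: one list of epoch losses per task, oldest -> newest.
    Returns K weights summing to K; slower-descending tasks get more.
    """
    K = len(loss_history)
    if any(len(h) < 2 for h in loss_history):
        return [1.0] * K  # warm-up: equal weights until two epochs exist
    ratios = [h[-1] / h[-2] for h in loss_history]  # relative descent rate
    exps = [math.exp(r / temperature) for r in ratios]
    total = sum(exps)
    return [K * e / total for e in exps]

# Task 0 is plateauing, task 1 is still improving fast,
# so task 0 receives the larger weight:
print(dynamic_task_weights([[1.0, 0.99], [1.0, 0.50]]))
```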
Affiliation(s)
- Qifeng Lin
- College of Software, Jilin University, 2699 Qianjin Street, Changchun, 130012, China
- Xiaoxin Guo
- Key Laboratory of Symbol Computation and Knowledge Engineering of Ministry of Education, Jilin University, 2699 Qianjin Street, Changchun, 130012, China; College of Computer Science and Technology, Jilin University, 2699 Qianjin Street, Changchun, 130012, China
- Bo Feng
- College of Computer Science and Technology, Jilin University, 2699 Qianjin Street, Changchun, 130012, China
- Juntong Guo
- College of Software, Jilin University, 2699 Qianjin Street, Changchun, 130012, China
- Shuang Ni
- College of Software, Jilin University, 2699 Qianjin Street, Changchun, 130012, China
- Hongliang Dong
- College of Computer Science and Technology, Jilin University, 2699 Qianjin Street, Changchun, 130012, China
3. Goetz L, Seedat N, Vandersluis R, van der Schaar M. Generalization-a key challenge for responsible AI in patient-facing clinical applications. NPJ Digit Med 2024;7:126. [PMID: 38773304] [PMCID: PMC11109198] [DOI: 10.1038/s41746-024-01127-3]
Affiliation(s)
- Lea Goetz
- Artificial Intelligence and Machine Learning, GSK, London, UK
- Nabeel Seedat
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK
- Mihaela van der Schaar
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK
- Cambridge Centre for AI in Medicine, University of Cambridge, Cambridge, UK
4. Hoang DT, Shulman ED, Turakulov R, Abdullaev Z, Singh O, Campagnolo EM, Lalchungnunga H, Stone EA, Nasrallah MP, Ruppin E, Aldape K. Prediction of DNA methylation-based tumor types from histopathology in central nervous system tumors with deep learning. Nat Med 2024. [PMID: 38760587] [DOI: 10.1038/s41591-024-02995-8]
Abstract
Precision in the diagnosis of diverse central nervous system (CNS) tumor types is crucial for optimal treatment. DNA methylation profiles, which capture the methylation status of thousands of individual CpG sites, are state-of-the-art data-driven means to enhance diagnostic accuracy but are also time consuming and not widely available. Here, to address these limitations, we developed Deep lEarning from histoPathoLOgy and methYlation (DEPLOY), a deep learning model that classifies CNS tumors to ten major categories from histopathology. DEPLOY integrates three distinct components: the first classifies CNS tumors directly from slide images ('direct model'), the second initially generates predictions for DNA methylation beta values, which are subsequently used for tumor classification ('indirect model'), and the third classifies tumor types directly from routinely available patient demographics. First, we find that DEPLOY accurately predicts beta values from histopathology images. Second, using a ten-class model trained on an internal dataset of 1,796 patients, we predict the tumor categories in three independent external test datasets including 2,156 patients, achieving an overall accuracy of 95% and balanced accuracy of 91% on samples that are predicted with high confidence. These results showcase the potential future use of DEPLOY to assist pathologists in diagnosing CNS tumors within a clinically relevant short time frame.
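The headline numbers above (95% overall accuracy, 91% balanced accuracy on high-confidence samples) rest on two simple computations: balanced accuracy is the mean of per-class recalls, and the high-confidence subset keeps only predictions clearing a probability threshold. A minimal sketch with hypothetical labels, predictions, and confidences (the threshold 0.9 is an assumption, not the paper's value):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall over the classes present in y_true."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

def high_confidence_subset(y_true, y_pred, confidence, threshold=0.9):
    """Keep only samples whose predicted-class probability clears the bar."""
    keep = [i for i, p in enumerate(confidence) if p >= threshold]
    return [y_true[i] for i in keep], [y_pred[i] for i in keep]

y_true = [0, 0, 1, 1, 2, 2]            # hypothetical tumor categories
y_pred = [0, 1, 1, 1, 2, 0]            # hypothetical model outputs
conf   = [0.99, 0.55, 0.97, 0.93, 0.98, 0.60]
t, p = high_confidence_subset(y_true, y_pred, conf)
print(balanced_accuracy(t, p))          # accuracy improves on the confident subset
```

Reporting accuracy only on high-confidence samples is a common triage design: uncertain cases are deferred to the pathologist rather than scored.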
Affiliation(s)
- Danh-Tai Hoang
- Biological Data Science Institute, College of Science, Australian National University, Canberra, Australian Capital Territory, Australia
- Eldad D Shulman
- Cancer Data Science Laboratory, Center for Cancer Research, National Cancer Institute, Bethesda, MD, USA
- Rust Turakulov
- Laboratory of Pathology, Center for Cancer Research, National Cancer Institute, Bethesda, MD, USA
- Zied Abdullaev
- Laboratory of Pathology, Center for Cancer Research, National Cancer Institute, Bethesda, MD, USA
- Omkar Singh
- Laboratory of Pathology, Center for Cancer Research, National Cancer Institute, Bethesda, MD, USA
- Emma M Campagnolo
- Cancer Data Science Laboratory, Center for Cancer Research, National Cancer Institute, Bethesda, MD, USA
- H Lalchungnunga
- Laboratory of Pathology, Center for Cancer Research, National Cancer Institute, Bethesda, MD, USA
- Eric A Stone
- Biological Data Science Institute, College of Science, Australian National University, Canberra, Australian Capital Territory, Australia
- MacLean P Nasrallah
- Division of Neuropathology, Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Eytan Ruppin
- Cancer Data Science Laboratory, Center for Cancer Research, National Cancer Institute, Bethesda, MD, USA
- Kenneth Aldape
- Laboratory of Pathology, Center for Cancer Research, National Cancer Institute, Bethesda, MD, USA
5. Haykal D, Garibyan L, Flament F, Cartier H. Hybrid cosmetic dermatology: AI generated horizon. Skin Res Technol 2024;30:e13721. [PMID: 38696225] [PMCID: PMC11064925] [DOI: 10.1111/srt.13721]
Affiliation(s)
- Lilit Garibyan
- Wellman Center for Photomedicine, Massachusetts General Hospital, Boston, Massachusetts, USA
- Department of Dermatology, Harvard Medical School, Boston, Massachusetts, USA
6. Yu Z, Flament F, Jiang R, Houghton J, Kroely C, Cabut N, Haykal D, Sehgal C, Jablonski NG, Jean A, Aarabi P. The relevance and accuracy of an AI algorithm-based descriptor on 23 facial attributes in a diverse female US population. Skin Res Technol 2024;30:e13690. [PMID: 38716749] [PMCID: PMC11077572] [DOI: 10.1111/srt.13690]
Abstract
BACKGROUND The response of AI in situations that mimic real-life scenarios is poorly explored in highly diverse populations.
OBJECTIVE To assess the accuracy and validate the relevance of an automated, algorithm-based analysis of facial attributes relevant to the adornment routines of women.
METHODS In a cross-sectional study, two diversified groups with similar distributions of age, ancestry, skin phototype, and geographical location were created from the selfie images of 1041 women in a US population. 521 images were analyzed as part of a new training dataset aimed at improving the original algorithm, and 520 were used to validate the performance of the AI. Across a total of 23 facial attributes (16 continuous and 7 categorical), all images were analyzed by 24 make-up experts and by the automated descriptor tool.
RESULTS For all facial attributes, both the new and the original automated tool surpassed the grading of the experts on a diverse population of women. For the 16 continuous attributes, the gradings obtained by the new system strongly correlated with the assessments made by make-up experts (r ≥ 0.80; p < 0.0001) with a low error rate. For the seven categorical attributes, the overall accuracy of the AI facial descriptor was improved via enrichment of the training dataset, although weaker performance in spotting some specific facial attributes was noted.
CONCLUSION The AI-based automatic facial descriptor tool was deemed accurate for the analysis of facial attributes in diverse women, although some skin complexion, eye color, and hair features require further fine-tuning.
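The r ≥ 0.80 agreement reported above is a plain Pearson correlation between the automated gradings and the expert gradings of each continuous attribute. A self-contained sketch with hypothetical grade values (the variable names and numbers are made up for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical mean expert grades vs. AI grades for one attribute:
expert = [1.0, 2.0, 3.0, 4.0, 5.0]
ai     = [1.2, 1.9, 3.3, 3.8, 5.1]
print(pearson_r(expert, ai))  # close to 1.0: strong agreement
```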
Affiliation(s)
- Zhi Yu
- Modiface – A L'Oréal Group Company, Toronto, Canada
- Nina G Jablonski
- Department of Anthropology, The Pennsylvania State University, University Park, Pennsylvania, USA
7. Chen M, Zhou AE, Jain N, Gronbeck C, Feng H, Grant-Kels JM. Ethics of artificial intelligence in dermatology. Clin Dermatol 2024;42:313-316. [PMID: 38401700] [DOI: 10.1016/j.clindermatol.2024.02.003]
Abstract
The integration of artificial intelligence (AI) in dermatology holds promise for enhancing clinical accuracy, enabling earlier detection of skin malignancies, suggesting potential management of skin lesions and eruptions, and promoting improved continuity of care. AI implementation in dermatology, however, raises several ethical concerns. This review explores the current benefits and challenges associated with AI integration, underscoring ethical considerations related to autonomy, informed consent, and privacy. We also examine the ways in which beneficence, nonmaleficence, and distributive justice may be impacted. Clarifying the role of AI, striking a balance between security and transparency, fostering open dialogue with our patients, collaborating with developers of AI, implementing educational initiatives for dermatologists and their patients, and participating in the establishment of regulatory guidelines are essential to navigating ethical and responsible AI incorporation into dermatology.
Affiliation(s)
- Maggie Chen
- Department of Dermatology, University of Maryland School of Medicine, Baltimore, Maryland, USA
- Albert E Zhou
- Department of Dermatology, University of Connecticut School of Medicine, Farmington, Connecticut, USA
- Neelesh Jain
- Department of Dermatology, University of Connecticut School of Medicine, Farmington, Connecticut, USA
- Christian Gronbeck
- Department of Dermatology, University of Connecticut School of Medicine, Farmington, Connecticut, USA
- Hao Feng
- Department of Dermatology, University of Connecticut School of Medicine, Farmington, Connecticut, USA
- Jane M Grant-Kels
- Department of Dermatology, University of Connecticut School of Medicine, Farmington, Connecticut, USA; Department of Dermatology, University of Florida College of Medicine, Gainesville, Florida, USA
8. Hirani R, Noruzi K, Khuram H, Hussaini AS, Aifuwa EI, Ely KE, Lewis JM, Gabr AE, Smiley A, Tiwari RK, Etienne M. Artificial Intelligence and Healthcare: A Journey through History, Present Innovations, and Future Possibilities. Life (Basel) 2024;14:557. [PMID: 38792579] [PMCID: PMC11122160] [DOI: 10.3390/life14050557]
Abstract
Artificial intelligence (AI) has emerged as a powerful tool in healthcare, significantly impacting practices from diagnostics to treatment delivery and patient management. This article examines the progress of AI in healthcare, starting from the field's inception in the 1960s to present-day innovative applications in areas such as precision medicine, robotic surgery, and drug development. It also explores how the COVID-19 pandemic accelerated the use of AI in technologies such as telemedicine and chatbots to enhance accessibility and improve medical education. Looking forward, the paper speculates on the promising future of AI in healthcare while critically addressing the ethical and societal considerations that accompany the integration of AI technologies. Furthermore, the potential to mitigate health disparities and the ethical implications surrounding data usage and patient privacy are discussed, emphasizing the need for evolving guidelines to govern AI's application in healthcare.
Affiliation(s)
- Rahim Hirani
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Graduate School of Biomedical Sciences, New York Medical College, Valhalla, NY 10595, USA
- Kaleb Noruzi
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Hassan Khuram
- College of Medicine, Drexel University, Philadelphia, PA 19129, USA
- Anum S. Hussaini
- Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, MA 02115, USA
- Esewi Iyobosa Aifuwa
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Kencie E. Ely
- Kirk Kerkorian School of Medicine, University of Nevada Las Vegas, Las Vegas, NV 89106, USA
- Joshua M. Lewis
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Ahmed E. Gabr
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Abbas Smiley
- School of Medicine and Dentistry, University of Rochester, Rochester, NY 14642, USA
- Raj K. Tiwari
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Graduate School of Biomedical Sciences, New York Medical College, Valhalla, NY 10595, USA
- Mill Etienne
- School of Medicine, New York Medical College, 40 Sunshine Cottage Road, Valhalla, NY 10595, USA
- Department of Neurology, New York Medical College, Valhalla, NY 10595, USA
9. Chen M, Zhang M, Yin L, Ma L, Ding R, Zheng T, Yue Q, Lui S, Sun H. Medical image foundation models in assisting diagnosis of brain tumors: a pilot study. Eur Radiol 2024. [PMID: 38627290] [DOI: 10.1007/s00330-024-10728-1]
Abstract
OBJECTIVES To build self-supervised foundation models for multicontrast MRI of the whole brain and evaluate their efficacy in assisting diagnosis of brain tumors.
METHODS In this retrospective study, foundation models were developed using 57,621 enhanced head MRI scans through self-supervised learning with a pretext task of cross-contrast context restoration with two different content dropout schemes. Downstream classifiers were constructed based on the pretrained foundation models and fine-tuned for brain tumor detection, discrimination, and molecular status prediction. Metrics including accuracy, sensitivity, specificity, and area under the ROC curve (AUC) were used to evaluate the performance. Convolutional neural networks trained exclusively on downstream task data were employed for comparative analysis.
RESULTS The pretrained foundation models demonstrated their ability to extract effective representations from multicontrast whole-brain volumes. The best classifiers, endowed with pretrained weights, showed remarkable performance with accuracies of 94.9, 92.3, and 80.4%, and corresponding AUC values of 0.981, 0.972, and 0.852 on independent test datasets in brain tumor detection, discrimination, and molecular status prediction, respectively. The classifiers with pretrained weights outperformed the convolutional classifiers trained from scratch by approximately 10% in terms of accuracy and AUC across all tasks. The saliency regions in the correctly predicted cases are mainly clustered around the tumors. Classifiers derived from the two dropout schemes differed significantly only in the detection of brain tumors.
CONCLUSIONS Foundation models obtained from self-supervised learning have demonstrated encouraging potential for scalability and interpretability in downstream brain tumor-related tasks and hold promise for extension to neurological diseases with diffusely distributed lesions.
CLINICAL RELEVANCE STATEMENT The application of our proposed method to the prediction of key molecular status in gliomas is expected to improve treatment planning and patient outcomes. Additionally, the foundation model we developed could serve as a cornerstone for advancing AI applications in the diagnosis of brain-related diseases.
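The metrics reported above (sensitivity, specificity, AUC) can be computed without any ML framework. A minimal sketch using the Mann-Whitney formulation of AUC, with hypothetical binary labels and scores (all values here are made up for illustration):

```python
def sensitivity_specificity(y_true, y_pred):
    """Binary sensitivity (recall on positives) and specificity."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

def roc_auc(y_true, scores):
    """AUC as the probability that a random positive outscores a random
    negative (Mann-Whitney U formulation; ties count as 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]                 # hypothetical tumor/no-tumor labels
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]     # hypothetical model scores
print(roc_auc(y_true, scores))
```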
Affiliation(s)
- Mengyao Chen
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Huaxi MR Research Center (HMRRC), West China Hospital of Sichuan University, Chengdu, China
- Lijuan Yin
- Department of Pathology, West China Hospital of Sichuan University, Chengdu, China
- Lu Ma
- Department of Neurosurgery, West China Hospital of Sichuan University, Chengdu, China
- Renxing Ding
- IT center, West China Hospital of Sichuan University, Chengdu, China
- Tao Zheng
- IT center, West China Hospital of Sichuan University, Chengdu, China
- Qiang Yue
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Huaxi MR Research Center (HMRRC), West China Hospital of Sichuan University, Chengdu, China
- Su Lui
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Huaxi MR Research Center (HMRRC), West China Hospital of Sichuan University, Chengdu, China
- Huaiqiang Sun
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
- Huaxi MR Research Center (HMRRC), West China Hospital of Sichuan University, Chengdu, China
10. Li Q, Yang Z, Chen K, Zhao M, Long H, Deng Y, Hu H, Jia C, Wu M, Zhao Z, Zhu H, Zhou S, Zhao M, Cao P, Zhou S, Song Y, Tang G, Liu J, Jiang J, Liao W, Zhou W, Yang B, Xiong F, Zhang S, Gao X, Jiang Y, Zhang W, Zhang B, He YL, Ran L, Zhang C, Wu W, Suolang Q, Luo H, Kang X, Wu C, Jin H, Chen L, Guo Q, Gui G, Li S, Si H, Guo S, Liu HY, Liu X, Ma GZ, Deng D, Yuan L, Lu J, Zeng J, Jiang X, Lyu X, Chen L, Hu B, Tao J, Liu Y, Wang G, Zhu G, Yao Z, Xu Q, Yang B, Wang Y, Ding Y, Yang X, Kai H, Wu H, Lu Q. Human-multimodal deep learning collaboration in 'precise' diagnosis of lupus erythematosus subtypes and similar skin diseases. J Eur Acad Dermatol Venereol 2024. [PMID: 38619440] [DOI: 10.1111/jdv.20031]
Abstract
BACKGROUND Lupus erythematosus (LE) is a spectrum of autoimmune diseases. Owing to the complexity of cutaneous LE (CLE), artificial intelligence based on clinical skin images alone still has difficulty distinguishing LE subtypes.
OBJECTIVES We aimed to develop a multimodal deep learning system (MMDLS) for human-AI collaboration in the diagnosis of LE subtypes.
METHODS This is a multi-centre study based on 25 institutions across China, covering LE subtypes, eight other similar skin diseases, and healthy subjects. In total, 446 cases with 800 clinical skin images, 3786 multicolor-immunohistochemistry (multi-IHC) images, and clinical data were collected, and EfficientNet-B3 and ResNet-18 were utilized in this study.
RESULTS In the multi-classification task, the overall performance of the MMDLS on 13 skin conditions is much higher than that of single or dual modalities (Sen = 0.8288, Spe = 0.9852, Pre = 0.8518, AUC = 0.9844). Furthermore, MMDLS-based diagnostic support improved the accuracy of dermatologists from 66.88% ± 6.94% to 81.25% ± 4.23% (p = 0.0004).
CONCLUSIONS These results highlight the benefit of a human-MMDLS collaborative framework in telemedicine, assisting dermatologists and rheumatologists in the differential diagnosis of LE subtypes and similar skin diseases.
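The abstract does not give the paper's exact label-level fusion rule. A generic late-fusion sketch, combining hypothetical per-modality class probabilities by weighted averaging (the label names, modality names, and equal weights are all assumptions made for illustration):

```python
def late_fusion(prob_maps, weights=None):
    """Label-level (late) fusion: weighted average of per-modality
    class-probability dicts, then argmax. A generic sketch, not the
    paper's exact fusion rule."""
    if weights is None:
        weights = [1.0 / len(prob_maps)] * len(prob_maps)
    classes = prob_maps[0].keys()
    fused = {c: sum(w * pm[c] for w, pm in zip(weights, prob_maps))
             for c in classes}
    return max(fused, key=fused.get), fused

# Hypothetical per-modality outputs for three LE-related labels:
skin_img  = {"DLE": 0.6, "SCLE": 0.3, "SLE": 0.1}
multi_ihc = {"DLE": 0.2, "SCLE": 0.7, "SLE": 0.1}
clinical  = {"DLE": 0.3, "SCLE": 0.5, "SLE": 0.2}
label, fused = late_fusion([skin_img, multi_ihc, clinical])
print(label)  # the modalities jointly favor "SCLE"
```

Fusing at the label level (rather than concatenating features) lets each modality keep its own specialized backbone, which matches the abstract's description of combining per-branch results.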
Collapse
Affiliation(s)
- Qianwen Li
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Zhi Yang
- Key Laboratory of Intelligent Computing and Information Processing of Ministry of Education, Xiangtan University, Xiangtan, China
| | - Kaili Chen
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Ming Zhao
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Hai Long
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Yueming Deng
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Haoran Hu
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Chen Jia
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Meiyu Wu
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Zhidan Zhao
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Huan Zhu
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Suqing Zhou
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Mingming Zhao
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Pengpeng Cao
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Shengnan Zhou
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Yang Song
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Guishao Tang
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Juan Liu
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Jiao Jiang
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Wei Liao
- Department of Dermatology, Hunan Children's Hospital, Changsha, China
| | - Wenhui Zhou
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Bingyi Yang
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Feng Xiong
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Suhan Zhang
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
| | - Xiaofei Gao
- Department of Dermatology, Hunan Children's Hospital, Changsha, China
| | - Yiqun Jiang
- Institute of Dermatology, Chinese Academy of Medical Sciences and Peking Union Medical College, Nanjing, China
| | - Wei Zhang
- Institute of Dermatology, Chinese Academy of Medical Sciences and Peking Union Medical College, Nanjing, China
| | - Bo Zhang
- Institute of Dermatology, Chinese Academy of Medical Sciences and Peking Union Medical College, Nanjing, China
| | - Yan-Ling He
- Department of Dermatology, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
| | - Liwei Ran
- Department of Dermatology, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
| | - Chunlei Zhang
- Department of Dermatology, Peking University Third Hospital, Beijing, China
| | - Wenting Wu
- Department of Dermatology, Peking University Third Hospital, Beijing, China
| | - Quzong Suolang
- Department of Dermatology, People's Hospital of Tibet Autonomous Region, Lhasa, China
| | - Hanhuan Luo
- Department of Dermatology, People's Hospital of Tibet Autonomous Region, Lhasa, China
| | - Xiaojing Kang
- Department of Dermatology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, China
| | - Caoying Wu
- Department of Dermatology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, China
| | - Hongzhong Jin
- Department of Dermatology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
| | - Lei Chen
- Department of Dermatology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Qing Guo
- Department of Dermatology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
- Guangji Gui
- Department of Dermatology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
- Shanshan Li
- Department of Dermatology, The First Bethune Hospital of Jilin University, Changchun, China
- Henan Si
- Department of Dermatology, The First Bethune Hospital of Jilin University, Changchun, China
- Shuping Guo
- Department of Dermatology, The First Hospital of Shanxi Medical University, Taiyuan, China
- Hong-Ye Liu
- Department of Dermatology, The First Hospital of Shanxi Medical University, Taiyuan, China
- Xiguang Liu
- Department of Dermatology, The Hei Long Jiang Provincial Hospital, Harbin, China
- Guo-Zhang Ma
- Department of Dermatology, The Hei Long Jiang Provincial Hospital, Harbin, China
- Danqi Deng
- Department of Dermatology, The Second Affiliated Hospital of Kunming Medical University, Kunming, China
- Limei Yuan
- Department of Dermatology, The Second Affiliated Hospital of Kunming Medical University, Kunming, China
- Jianyun Lu
- Department of Dermatology, The Third Xiangya Hospital, Central South University, Changsha, China
- Jinrong Zeng
- Department of Dermatology, The Third Xiangya Hospital, Central South University, Changsha, China
- Xian Jiang
- Department of Dermatology, West China Hospital, Sichuan University, Chengdu, China
- Xiaoyan Lyu
- Department of Dermatology, West China Hospital, Sichuan University, Chengdu, China
- Liuqing Chen
- Department of Dermatology, Wuhan No. 1 Hospital, Wuhan, China
- Bin Hu
- Department of Dermatology, Wuhan No. 1 Hospital, Wuhan, China
- Juan Tao
- Department of Dermatology, Wuhan Union Hospital of China, Wuhan, China
- Yuhao Liu
- Department of Dermatology, Wuhan Union Hospital of China, Wuhan, China
- Gang Wang
- Department of Dermatology, Xijing Hospital, Xi'an, China
- Guannan Zhu
- Department of Dermatology, Xijing Hospital, Xi'an, China
- Zhirong Yao
- Department of Dermatology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qianyue Xu
- Department of Dermatology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Bin Yang
- Dermatology Hospital of Southern Medical University, Guangzhou, China
- Yu Wang
- Dermatology Hospital of Southern Medical University, Guangzhou, China
- Yan Ding
- Hainan Provincial Hospital of Skin Disease, Haikou, China
- Xianxu Yang
- Hainan Provincial Hospital of Skin Disease, Haikou, China
- Hu Kai
- Key Laboratory of Intelligent Computing and Information Processing of Ministry of Education, Xiangtan University, Xiangtan, China
- Haijing Wu
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
- Qianjin Lu
- Department of Dermatology, Hunan Key Laboratory of Medical Epigenomics, The Second Xiangya Hospital of Central South University, Changsha, China
- Institute of Dermatology, Chinese Academy of Medical Sciences and Peking Union Medical College, Nanjing, China
- Key Laboratory of Basic and Translational Research on Immune-Mediated Skin Diseases, Chinese Academy of Medical Sciences, Nanjing, China
- Jiangsu Key Laboratory of Molecular Biology for Skin Diseases and STIs, Nanjing, China
11
Fliorent R, Fardman B, Podwojniak A, Javaid K, Tan IJ, Ghani H, Truong TM, Rao B, Heath C. Artificial intelligence in dermatology: advancements and challenges in skin of color. Int J Dermatol 2024; 63:455-461. [PMID: 38444331 DOI: 10.1111/ijd.17076]
Abstract
Artificial intelligence (AI) uses algorithms, including large language models, to simulate human-like problem-solving and decision-making. AI programs have recently gained widespread popularity in dermatology through online tools for the assessment, diagnosis, and treatment of skin conditions. A literature review was conducted using PubMed and Google Scholar, analyzing recent literature (from the last 10 years through October 2023) to evaluate current AI programs used for dermatologic purposes, identify the challenges this technology faces when applied to skin of color (SOC), and propose future steps to enhance the role of AI in dermatologic practice. Challenges surrounding AI and its application to SOC stem from the underrepresentation of SOC in datasets and from issues with image quality and standardization. Given these issues, current AI programs inevitably perform worse at identifying lesions in SOC. Additionally, only 30% of the programs identified in this review had reported data on their use in dermatology specifically for SOC. Significant development of these applications, including accurate representation of darker skin tones in datasets, is required. More research is warranted to better understand the efficacy of AI in aiding diagnosis and treatment options for SOC patients.
Affiliation(s)
- Brian Fardman
- Rowan-Virtua School of Osteopathic Medicine, Stratford, NJ, USA
- Kiran Javaid
- Rowan-Virtua School of Osteopathic Medicine, Stratford, NJ, USA
- Isabella J Tan
- Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Hira Ghani
- Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Thu M Truong
- Center for Dermatology, Rutgers Robert Wood Johnson, Somerset, NJ, USA
- Babar Rao
- Center for Dermatology, Rutgers Robert Wood Johnson, Somerset, NJ, USA
- Candrice Heath
- Lewis Katz School of Medicine at Temple University, Philadelphia, PA, USA
12
Ktena I, Wiles O, Albuquerque I, Rebuffi SA, Tanno R, Roy AG, Azizi S, Belgrave D, Kohli P, Cemgil T, Karthikesalingam A, Gowal S. Generative models improve fairness of medical classifiers under distribution shifts. Nat Med 2024; 30:1166-1173. [PMID: 38600282 PMCID: PMC11031395 DOI: 10.1038/s41591-024-02838-6]
Abstract
Domain generalization is a ubiquitous challenge for machine learning in healthcare. Model performance in real-world conditions might be lower than expected because of discrepancies between the data encountered during deployment and development. Underrepresentation of some groups or conditions during model development is a common cause of this phenomenon. This challenge is often not readily addressed by targeted data acquisition and 'labeling' by expert clinicians, which can be prohibitively expensive or practically impossible because of the rarity of conditions or the available clinical expertise. We hypothesize that advances in generative artificial intelligence can help mitigate this unmet need in a steerable fashion, enriching our training dataset with synthetic examples that address shortfalls of underrepresented conditions or subgroups. We show that diffusion models can automatically learn realistic augmentations from data in a label-efficient manner. We demonstrate that learned augmentations make models more robust and statistically fair in-distribution and out of distribution. To evaluate the generality of our approach, we studied three distinct medical imaging contexts of varying difficulty: (1) histopathology, (2) chest X-ray and (3) dermatology images. Complementing real samples with synthetic ones improved the robustness of models in all three medical tasks and increased fairness by improving the accuracy of clinical diagnosis within underrepresented groups, especially out of distribution.
13
Park H, Park SR, Lee S, Hwang J, Lee M, Jang SI, Jung Y, Yeon Y, Kang N, Suh BF, Kim E. Development and application of artificial intelligence-based facial skin image diagnosis system: Changes in facial skin characteristics with ageing in Korean women. Int J Cosmet Sci 2024; 46:199-208. [PMID: 37881146 DOI: 10.1111/ics.12924]
Abstract
OBJECTIVE To develop and validate an artificial intelligence (AI)-based diagnostic system for analysing facial skin images using expert judgements, and to explore its feasibility for skin ageing research by evaluating facial skin changes in Korean women of various ages. METHODS Our AI-based facial skin diagnosis system (Dr. AMORE®) uses facial images of Korean women to analyse wrinkles, pigmentation, skin pores, and other skin red spots. The system is trained on clinical expert evaluations using deep learning. We assessed the system's precision and sensitivity by analysing the correlation between the AI system's diagnoses and those of the experts. We used 120 images of Korean women aged 10-60 years to evaluate changes in various facial skin characteristics with ageing. RESULTS The precision and sensitivity of the developed system were excellent (both >0.9), and the diagnosis scores derived from the detected area and intensity of each item correlated strongly with the clinical experts' visual evaluations (>0.8, p < 0.001). We also analysed facial images of Korean women aged 10-60 years to quantify changes in wrinkle, pigmentation, and skin pore scores with age. The age range with the most pronounced changes was the 20s to 30s. Analysis of the detailed skin characteristics of each item showed that wrinkles and pigmentation changed significantly from the 20s to the 30s, whereas skin pores increased significantly from the 10s to the 20s. Skin red spots showed no significant correlation with age or change across age groups. CONCLUSION The developed AI-based facial skin diagnosis system can automatically diagnose skin conditions, informed by clinical expert judgement, using only photographic images, and can analyse various items in detail, quantitatively, and visually.
This AI system can provide a new and useful approach in research areas that require substantial resources and diverse characterizations, such as the study of facial skin ageing.
Affiliation(s)
- Hyeokgon Park
- Clinical Research Lab, AMOREPACIFIC Research and Innovation Center, Yongin, Korea
- Sae-Ra Park
- Clinical Research Lab, AMOREPACIFIC Research and Innovation Center, Yongin, Korea
- Sangran Lee
- AI Solution Team, AMOREPACIFIC, Seoul, Korea
- Myeongryeol Lee
- Clinical Research Lab, AMOREPACIFIC Research and Innovation Center, Yongin, Korea
- Sue Im Jang
- Clinical Research Lab, AMOREPACIFIC Research and Innovation Center, Yongin, Korea
- Yuchul Jung
- Clinical Research Lab, AMOREPACIFIC Research and Innovation Center, Yongin, Korea
- Yeongmin Yeon
- Clinical Research Lab, AMOREPACIFIC Research and Innovation Center, Yongin, Korea
- Nayoung Kang
- Clinical Research Lab, AMOREPACIFIC Research and Innovation Center, Yongin, Korea
- Byung-Fhy Suh
- AMOREPACIFIC Research and Innovation Center, Yongin, Korea
- Eunjoo Kim
- Clinical Research Lab, AMOREPACIFIC Research and Innovation Center, Yongin, Korea
14
Schaekermann M, Spitz T, Pyles M, Cole-Lewis H, Wulczyn E, Pfohl SR, Martin D, Jaroensri R, Keeling G, Liu Y, Farquhar S, Xue Q, Lester J, Hughes C, Strachan P, Tan F, Bui P, Mermel CH, Peng LH, Matias Y, Corrado GS, Webster DR, Virmani S, Semturs C, Liu Y, Horn I, Cameron Chen PH. Health equity assessment of machine learning performance (HEAL): a framework and dermatology AI model case study. EClinicalMedicine 2024; 70:102479. [PMID: 38685924 PMCID: PMC11056401 DOI: 10.1016/j.eclinm.2024.102479]
Abstract
Background Artificial intelligence (AI) has repeatedly been shown to encode historical inequities in healthcare. We aimed to develop a framework to quantitatively assess the performance equity of health AI technologies and to illustrate its utility via a case study. Methods Here, we propose a methodology to assess whether health AI technologies prioritise performance for patient populations experiencing worse outcomes, that is complementary to existing fairness metrics. We developed the Health Equity Assessment of machine Learning performance (HEAL) framework designed to quantitatively assess the performance equity of health AI technologies via a four-step interdisciplinary process to understand and quantify domain-specific criteria, and the resulting HEAL metric. As an illustrative case study (analysis conducted between October 2022 and January 2023), we applied the HEAL framework to a dermatology AI model. A set of 5420 teledermatology cases (store-and-forward cases from patients of 20 years or older, submitted from primary care providers in the USA and skin cancer clinics in Australia), enriched for diversity in age, sex and race/ethnicity, was used to retrospectively evaluate the AI model's HEAL metric, defined as the likelihood that the AI model performs better for subpopulations with worse average health outcomes as compared to others. The likelihood that AI performance was anticorrelated to pre-existing health outcomes was estimated using bootstrap methods as the probability that the negated Spearman's rank correlation coefficient (i.e., "R") was greater than zero. Positive values of R suggest that subpopulations with poorer health outcomes have better AI model performance. Thus, the HEAL metric, defined as p (R >0), measures how likely the AI technology is to prioritise performance for subpopulations with worse average health outcomes as compared to others (presented as a percentage below). 
Health outcomes were quantified as disability-adjusted life years (DALYs) when grouping by sex and age, and years of life lost (YLLs) when grouping by race/ethnicity. AI performance was measured as top-3 agreement with the reference diagnosis from a panel of 3 dermatologists per case. Findings Across all dermatologic conditions, the HEAL metric was 80.5% for prioritizing AI performance of racial/ethnic subpopulations based on YLLs, and 92.1% and 0.0% respectively for prioritizing AI performance of sex and age subpopulations based on DALYs. Certain dermatologic conditions were significantly associated with greater AI model performance compared to a reference category of less common conditions. For skin cancer conditions, the HEAL metric was 73.8% for prioritizing AI performance of age subpopulations based on DALYs. Interpretation Analysis using the proposed HEAL framework showed that the dermatology AI model prioritised performance for race/ethnicity, sex (all conditions) and age (cancer conditions) subpopulations with respect to pre-existing health disparities. More work is needed to investigate ways of promoting equitable AI performance across age for non-cancer conditions and to better understand how AI models can contribute towards improving equity in health outcomes. Funding Google LLC.
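The HEAL metric described in this abstract reduces to a simple bootstrap computation: resample subgroups, compute the negated Spearman rank correlation R between per-subgroup AI performance and baseline health outcomes, and report the fraction of resamples with R > 0. The sketch below is an illustrative reconstruction from the abstract's description, not the authors' code; the function name `heal_metric` and the convention that higher `health_outcome` values mean better baseline health (DALYs or YLLs would enter with the opposite sign) are assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def heal_metric(performance, health_outcome, n_boot=10_000, seed=0):
    """Bootstrap estimate of p(R > 0), where R is the negated Spearman rank
    correlation between per-subgroup AI performance and pre-existing health
    outcomes (higher = better health). Values near 1 mean the model tends to
    perform better for subgroups with worse average health outcomes."""
    rng = np.random.default_rng(seed)
    performance = np.asarray(performance, dtype=float)
    health_outcome = np.asarray(health_outcome, dtype=float)
    n = len(performance)
    positive = valid = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample subgroups with replacement
        rho, _ = spearmanr(performance[idx], health_outcome[idx])
        if np.isnan(rho):                 # degenerate resample: no rank variation
            continue
        valid += 1
        positive += (-rho) > 0            # R = negated Spearman correlation
    return positive / max(valid, 1)

# Toy example: performance is highest exactly where baseline health is worst,
# so the estimated HEAL metric should be close to 1.
perf = [0.90, 0.85, 0.70, 0.60, 0.55, 0.40]
outcome = [1, 2, 3, 4, 5, 6]              # higher = better baseline health
print(heal_metric(perf, outcome))
```

Grouping by sex, age, or race/ethnicity just changes which subgroup vector is passed in, matching the separate DALY- and YLL-based figures reported above.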
Affiliation(s)
- Malcolm Pyles
- Advanced Clinical, Deerfield, IL, USA
- Department of Dermatology, Cleveland Clinic, Cleveland, OH, USA
- Yuan Liu
- Google Health, Mountain View, CA, USA
- Jenna Lester
- Advanced Clinical, Deerfield, IL, USA
- Department of Dermatology, University of California, San Francisco, CA, USA
- Peggy Bui
- Google Health, Mountain View, CA, USA
- Yun Liu
- Google Health, Mountain View, CA, USA
- Ivor Horn
- Google Health, Mountain View, CA, USA
15
Kim DH, Kim Y, Yun SY, Yu HS, Ko HC, Kim M. Risk factors for scabies in hospital: a systematic review. BMC Infect Dis 2024; 24:353. [PMID: 38575893 PMCID: PMC10993523 DOI: 10.1186/s12879-024-09167-6]
Abstract
BACKGROUND Annually, 175.4 million people are infected with scabies worldwide. Although parasitic infections are important nosocomial infections, they are under-recognized compared with bacterial, fungal, and viral infections. In particular, nonspecific cutaneous manifestations of scabies lead to delayed diagnosis and frequent nosocomial transmission. Hospital-based studies on the risk factors for scabies have not yet been systematically reviewed. METHODS The study followed the PRISMA guidelines and was prospectively registered in PROSPERO (CRD42023363278). Literature searches were conducted in three international (PubMed, Embase, and CINAHL) and four Korean (DBpia, KISS, RISS, and Science ON) databases. We included hospital-based studies that reported risk estimates with 95% confidence intervals for risk factors for scabies infection. The quality of the studies was assessed using the Joanna Briggs Institute critical appraisal tools. Two authors independently performed the screening and quality assessment. RESULTS A total of 12 studies were included. Personal characteristics were categorized into demographic, economic, residential, and behavioral factors; the identified risk factors were low economic status and unhygienic behavioral practices. Being a patient in a long-term care facility or institution was an important factor. Frequent patient contact and lack of personal protective equipment were also identified as risk factors. Clinical characteristics were categorized into personal health and hospital environment factors. People who had contact with others experiencing itching were at higher risk of developing scabies. Patients with more severe illness and those with multiple catheters were also at increased risk of scabies infection. CONCLUSIONS Factors contributing to scabies in hospitals range from personal to clinical.
We emphasize the importance of performing a full skin examination when patients present with scabies symptoms and are transferred from settings such as nursing homes and assisted-living facilities, to reduce the transmission of scabies. In addition, patient education to prevent scabies and infection control systems for healthcare workers, such as wearing personal protective equipment, are needed.
Affiliation(s)
- Dong-Hee Kim
- College of Nursing · Research Institute of Nursing Science, Pusan National University, Mulgeum-eup, Yangsan-si, Gyeongsangnam-do, Korea
- Yujin Kim
- College of Nursing, Pusan National University, Mulgeum-eup, Yangsan-si, Gyeongsangnam-do, Korea
- Sook Young Yun
- College of Nursing, Pusan National University, Mulgeum-eup, Yangsan-si, Gyeongsangnam-do, Korea
- Hak Sun Yu
- Department of Parasitology and Tropical Medicine, School of Medicine, Pusan National University, Mulgeum-eup, Yangsan-si, Gyeongsangnam-do, Korea
- Hyun-Chang Ko
- Department of Dermatology, School of Medicine, Pusan National University, Mulgeum-eup, Yangsan-si, Gyeongsangnam-do, Korea
- MinWoo Kim
- Department of Biomedical Convergence Engineering, Pusan National University, Mulgeum-eup, Yangsan-si, Gyeongsangnam-do, Korea
16
Wei ML, Tada M, So A, Torres R. Artificial intelligence and skin cancer. Front Med (Lausanne) 2024; 11:1331895. [PMID: 38566925 PMCID: PMC10985205 DOI: 10.3389/fmed.2024.1331895]
Abstract
Artificial intelligence is poised to rapidly reshape many fields, including that of skin cancer screening and diagnosis, both as a disruptive and assistive technology. Together with the collection and availability of large medical data sets, artificial intelligence will become a powerful tool that can be leveraged by physicians in their diagnoses and treatment plans for patients. This comprehensive review focuses on current progress toward AI applications for patients, primary care providers, dermatologists, and dermatopathologists, explores the diverse applications of image and molecular processing for skin cancer, and highlights AI's potential for patient self-screening and improving diagnostic accuracy for non-dermatologists. We additionally delve into the challenges and barriers to clinical implementation, paths forward for implementation and areas of active research.
Affiliation(s)
- Maria L. Wei
- Department of Dermatology, University of California, San Francisco, San Francisco, CA, United States
- Dermatology Service, San Francisco VA Health Care System, San Francisco, CA, United States
- Mikio Tada
- Institute for Neurodegenerative Diseases, University of California, San Francisco, San Francisco, CA, United States
- Alexandra So
- School of Medicine, University of California, San Francisco, San Francisco, CA, United States
- Rodrigo Torres
- Dermatology Service, San Francisco VA Health Care System, San Francisco, CA, United States
17
Wu A, Ngo M, Thomas C. Assessment of patient perceptions of artificial intelligence use in dermatology: A cross-sectional survey. Skin Res Technol 2024; 30:e13656. [PMID: 38481072 PMCID: PMC10938028 DOI: 10.1111/srt.13656]
Affiliation(s)
- Alexander Wu
- Department of Dermatology, University of Texas Southwestern Medical Center, Dallas, USA
- Madeline Ngo
- Department of Dermatology, University of Texas Southwestern Medical Center, Dallas, USA
- Cristina Thomas
- Department of Dermatology, University of Texas Southwestern Medical Center, Dallas, USA
- Department of Internal Medicine, University of Texas Southwestern Medical Center, Dallas, USA
18
Gomes RFT, Schmith J, de Figueiredo RM, Freitas SA, Machado GN, Romanini J, Almeida JD, Pereira CT, Rodrigues JDA, Carrard VC. Convolutional neural network misclassification analysis in oral lesions: an error evaluation criterion by image characteristics. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 137:243-252. [PMID: 38161085 DOI: 10.1016/j.oooo.2023.10.003]
Abstract
OBJECTIVE This retrospective study analyzed the errors generated by a convolutional neural network (CNN) when performing automated classification of oral lesions according to their clinical characteristics, seeking to identify patterns in systemic errors in the intermediate layers of the CNN. STUDY DESIGN A cross-sectional analysis nested in a previous trial in which automated classification by a CNN model of elementary lesions from clinical images of oral lesions was performed. The resulting CNN classification errors formed the dataset for this study. A total of 116 real outputs were identified that diverged from the estimated outputs, representing 7.6% of the total images analyzed by the CNN. RESULTS The discrepancies between the real and estimated outputs were associated with problems relating to image sharpness, resolution, and focus; human errors; and the impact of data augmentation. CONCLUSIONS From qualitative analysis of errors in the process of automated classification of clinical images, it was possible to confirm the impact of image quality, as well as identify the strong impact of the data augmentation process. Knowledge of the factors that models evaluate to make decisions can increase confidence in the high classification potential of CNNs.
Affiliation(s)
- Rita Fabiane Teixeira Gomes
- Department of Oral Pathology, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil
- Jean Schmith
- Polytechnic School, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil; Technology in Automation and Electronics Laboratory-TECAE Lab, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
- Rodrigo Marques de Figueiredo
- Polytechnic School, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil; Technology in Automation and Electronics Laboratory-TECAE Lab, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
- Samuel Armbrust Freitas
- Department of Applied Computing, University of Vale do Rio dos Sinos-UNISINOS, São Leopoldo, Brazil
- Juliana Romanini
- Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, Rio Grande do Sul, Brazil
- Janete Dias Almeida
- Department of Biosciences and Oral Diagnostics, São Paulo State University, Campus São José dos Campos, São Paulo, Brazil
- Jonas de Almeida Rodrigues
- Department of Surgery and Orthopaedics, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil
- Vinicius Coelho Carrard
- Department of Oral Pathology, Faculdade de Odontologia-Federal University of Rio Grande do Sul-UFRGS, Porto Alegre, Brazil; TelessaudeRS-UFRGS, Federal University of Rio Grande do Sul, Porto Alegre, Rio Grande do Sul, Brazil; Oral Medicine, Otorhinolaryngology Service, Hospital de Clínicas de Porto Alegre (HCPA), Porto Alegre, Rio Grande do Sul, Brazil
19
Savulescu J, Giubilini A, Vandersluis R, Mishra A. Ethics of artificial intelligence in medicine. Singapore Med J 2024; 65:150-158. [PMID: 38527299 PMCID: PMC7615805 DOI: 10.4103/singaporemedj.smj-2023-279]
Abstract
ABSTRACT This article reviews the main ethical issues that arise from the use of artificial intelligence (AI) technologies in medicine. Issues around trust, responsibility, risks of discrimination, privacy, autonomy, and potential benefits and harms are assessed. For better or worse, AI is a promising technology that can revolutionise healthcare delivery. It is up to us to make AI a tool for the good by ensuring that ethical oversight accompanies the design, development and implementation of AI technology in clinical practice.
Affiliation(s)
- Julian Savulescu
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Alberto Giubilini
- Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK
- Robert Vandersluis
- Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK
- Abhishek Mishra
- Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, UK
20
Chang CT, Daneshjou R. Disentangling Hype from Reality for Artificial Intelligence-Based Skin Cancer Diagnosis: Comment on a Narrative Review. J Invest Dermatol 2024; 144:444-445. [PMID: 38244023 DOI: 10.1016/j.jid.2023.11.020]
Affiliation(s)
- Crystal T Chang
- Department of Dermatology, Stanford University, Stanford, California, USA; Clinical Excellence Research Center, School of Medicine, Stanford University, Palo Alto, California, USA
- Roxana Daneshjou
- Department of Dermatology, Stanford University, Stanford, California, USA; Department of Biomedical Data Science, Stanford University, Stanford, California, USA
21
Shen Y, Li H, Sun C, Ji H, Zhang D, Hu K, Tang Y, Chen Y, Wei Z, Lv J. Optimizing skin disease diagnosis: harnessing online community data with contrastive learning and clustering techniques. NPJ Digit Med 2024; 7:28. [PMID: 38332257 PMCID: PMC10853166 DOI: 10.1038/s41746-024-01014-x]
Abstract
Skin diseases pose significant challenges in China. Internet health forums offer a platform for millions of users to discuss skin diseases and share images for early intervention, leaving a large amount of valuable dermatology images. However, data quality and annotation challenges limit the potential of these resources for developing diagnostic models. In this study, we proposed a deep-learning model that utilizes unannotated dermatology images from diverse online sources. We adopted a contrastive learning approach to learn general representations from unlabeled images and fine-tuned the model on coarsely annotated images from Internet forums. Our model classifies 22 common skin diseases. To improve annotation quality, we used a clustering method with a small set of standardized validation images. We tested the model on images collected by 33 experienced dermatologists from 15 tertiary hospitals and achieved 45.05% top-1 accuracy, outperforming the published baseline model by 3%. Accuracy increased with additional validation images, reaching 49.64% with 50 images per category. Our model also demonstrated transferability to new tasks, such as detecting monkeypox, achieving 61.76% top-1 accuracy with only 50 additional images in the training process. We also tested our model on benchmark datasets to demonstrate its generalization ability. Our findings highlight the potential of unannotated images from online forums for future dermatology applications and demonstrate the effectiveness of our model for early diagnosis and potential outbreak mitigation.
Affiliation(s)
- Yue Shen
- Simulation of Complex Systems Lab, Department of Human and Engineered Environmental Studies, Graduate School of Frontier Sciences, The University of Tokyo, Chiba, Japan
- Huanyu Li
- Shanghai Beforteen AI Lab, Shanghai, China
- Can Sun
- Institution of Aix-Marseille, Wuhan University of Technology WHUT, Wuhan City, China
- Hongtao Ji
- Shanghai Business School, No. 6333 Oriental Meigu Avenue, Shanghai, China
- Daojun Zhang
- The Third Affiliated Hospital of CQMU, Chongqing, China
- Kun Hu
- Shanghai Beforteen AI Lab, Shanghai, China
- Yiqi Tang
- Shanghai Beforteen AI Lab, Shanghai, China
- Yu Chen
- Simulation of Complex Systems Lab, Department of Human and Engineered Environmental Studies, Graduate School of Frontier Sciences, The University of Tokyo, Chiba, Japan
- Zikun Wei
- Shanghai Beforteen AI Lab, Shanghai, China
- Junwei Lv
- Shanghai Beforteen AI Lab, Shanghai, China
22
Mirza FN, Lim RK, Yumeen S, Wahood S, Zaidat B, Shah A, Tang OY, Kawaoka J, Seo SJ, DiMarco C, Muglia J, Goldbach HS, Wisco O, Qureshi AA, Libby TJ. Performance of Three Large Language Models on Dermatology Board Examinations. J Invest Dermatol 2024; 144:398-400. [PMID: 37541614 DOI: 10.1016/j.jid.2023.06.208]
Affiliation(s)
- Fatima N Mirza
- Department of Dermatology, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA.
- Rachel K Lim
- Department of Dermatology, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA; The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Sara Yumeen
- Department of Dermatology, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Samer Wahood
- The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Bashar Zaidat
- Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Asghar Shah
- The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Oliver Y Tang
- Department of Dermatology, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- John Kawaoka
- Department of Dermatology, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Su-Jean Seo
- Department of Dermatology, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Christopher DiMarco
- Department of Dermatology, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Jennie Muglia
- Department of Dermatology, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Hayley S Goldbach
- Department of Dermatology, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Oliver Wisco
- Department of Dermatology, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Abrar A Qureshi
- Department of Dermatology, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Tiffany J Libby
- Department of Dermatology, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
23
Colonnese F, Di Luzio F, Rosato A, Panella M. Bimodal Feature Analysis with Deep Learning for Autism Spectrum Disorder Detection. Int J Neural Syst 2024; 34:2450005. [PMID: 38063381] [DOI: 10.1142/s0129065724500059]
Abstract
Autism Spectrum Disorder (ASD) is a complex and heterogeneous neurodevelopmental disorder that affects a significant proportion of the population, with estimates suggesting that about 1 in 100 children worldwide are affected by ASD. This study introduces a new deep neural network for identifying ASD in children through gait analysis, using features extracted from frames of video recordings of their walking patterns. The method is image-based and combines gait analysis with deep learning, offering a noninvasive and objective assessment of neurodevelopmental disorders while delivering high accuracy in ASD detection. Our model proposes a bimodal approach based on the concatenation of two distinct convolutional neural networks that process two feature sets extracted from the same videos. The features obtained from the convolutions of both networks are subsequently flattened and merged into a single vector, serving as input for the fully connected layers in the binary classification process. This approach demonstrates the potential for effective ASD detection in children through the combination of gait analysis and deep learning techniques.
Affiliation(s)
- Federica Colonnese
- Department of Information Engineering, Electronics and Telecommunications, University of Rome "La Sapienza" Via Eudossiana 18, 00184 Rome, Italy
- Francesco Di Luzio
- Department of Information Engineering, Electronics and Telecommunications, University of Rome "La Sapienza" Via Eudossiana 18, 00184 Rome, Italy
- Antonello Rosato
- Department of Information Engineering, Electronics and Telecommunications, University of Rome "La Sapienza" Via Eudossiana 18, 00184 Rome, Italy
- Massimo Panella
- Department of Information Engineering, Electronics and Telecommunications, University of Rome "La Sapienza" Via Eudossiana 18, 00184 Rome, Italy
24
Physician-machine partnerships boost diagnostic accuracy, but bias persists. Nat Med 2024; 30:356-357. [PMID: 38317022] [DOI: 10.1038/s41591-023-02733-6]
25
Groh M, Badri O, Daneshjou R, Koochek A, Harris C, Soenksen LR, Doraiswamy PM, Picard R. Deep learning-aided decision support for diagnosis of skin disease across skin tones. Nat Med 2024; 30:573-583. [PMID: 38317019] [PMCID: PMC10878981] [DOI: 10.1038/s41591-023-02728-3]
Abstract
Although advances in deep learning systems for image-based medical diagnosis demonstrate their potential to augment clinical decision-making, the effectiveness of physician-machine partnerships remains an open question, in part because physicians and algorithms are both susceptible to systematic errors, especially for diagnosis of underrepresented populations. Here we present results from a large-scale digital experiment involving board-certified dermatologists (n = 389) and primary-care physicians (n = 459) from 39 countries to evaluate the accuracy of diagnoses submitted by physicians in a store-and-forward teledermatology simulation. In this experiment, physicians were presented with 364 images spanning 46 skin diseases and asked to submit up to four differential diagnoses. Specialists and generalists achieved diagnostic accuracies of 38% and 19%, respectively, but both specialists and generalists were four percentage points less accurate for the diagnosis of images of dark skin as compared to light skin. Fair deep learning system decision support improved the diagnostic accuracy of both specialists and generalists by more than 33%, but exacerbated the gap in the diagnostic accuracy of generalists across skin tones. These results demonstrate that well-designed physician-machine partnerships can enhance the diagnostic accuracy of physicians, illustrating that success in improving overall diagnostic accuracy does not necessarily address bias.
Affiliation(s)
- Matthew Groh
- Northwestern University Kellogg School of Management, Evanston, IL, USA.
- MIT Media Lab, Cambridge, MA, USA.
- Omar Badri
- Northeast Dermatology Associates, Beverly, MA, USA
- Roxana Daneshjou
- Stanford Department of Biomedical Data Science, Stanford, CA, USA
- Stanford Department of Dermatology, Redwood City, CA, USA
- Luis R Soenksen
- Wyss Institute for Bioinspired Engineering at Harvard, Boston, MA, USA
- P Murali Doraiswamy
- MIT Media Lab, Cambridge, MA, USA
- Duke University School of Medicine, Durham, NC, USA
26
Howell MD, Corrado GS, DeSalvo KB. Three Epochs of Artificial Intelligence in Health Care. JAMA 2024; 331:242-244. [PMID: 38227029] [DOI: 10.1001/jama.2023.25057]
Abstract
Importance Interest in artificial intelligence (AI) has reached an all-time high, and health care leaders across the ecosystem are faced with questions about where, when, and how to deploy AI and how to understand its risks, problems, and possibilities. Observations While AI as a concept has existed since the 1950s, not all AI is the same. Capabilities and risks of various kinds of AI differ markedly, and on examination 3 epochs of AI emerge. AI 1.0 includes symbolic AI, which attempts to encode human knowledge into computational rules, as well as probabilistic models. The era of AI 2.0 began with deep learning, in which models learn from examples labeled with ground truth. This era brought about many advances both in people's daily lives and in health care. Deep learning models are task-specific, meaning they do one thing at a time, and they primarily focus on classification and prediction. AI 3.0 is the era of foundation models and generative AI. Models in AI 3.0 have fundamentally new (and potentially transformative) capabilities, as well as new kinds of risks, such as hallucinations. These models can do many different kinds of tasks without being retrained on a new dataset. For example, a simple text instruction will change the model's behavior. Prompts such as "Write this note for a specialist consultant" and "Write this note for the patient's mother" will produce markedly different content. Conclusions and Relevance Foundation models and generative AI represent a major revolution in AI's capabilities, offering tremendous potential to improve care. Health care leaders are making decisions about AI today. While any heuristic omits details and loses nuance, the framework of AI 1.0, 2.0, and 3.0 may be helpful to decision-makers because each epoch has fundamentally different capabilities and risks.
27
Beam K, Sharma P, Levy P, Beam AL. Artificial intelligence in the neonatal intensive care unit: the time is now. J Perinatol 2024; 44:131-135. [PMID: 37443271] [DOI: 10.1038/s41372-023-01719-z]
Abstract
Artificial intelligence (AI) has the potential to revolutionize neonatal intensive care unit (NICU) care by leveraging the large-scale, high-dimensional data generated by NICU patients. There is an emerging recognition that the confluence of technological progress, commercialization pathways, and rich data sets provides a unique opportunity for AI to make a lasting impact on the NICU. In this perspective article, we discuss four broad categories of AI applications in the NICU: imaging interpretation, prediction modeling of electronic health record data, integration of real-time monitoring data, and documentation and billing. By enhancing decision-making, streamlining processes, and improving patient outcomes, AI holds the potential to transform the quality of care for vulnerable newborns, making the excitement surrounding AI advancements well-founded and the potential for significant positive change stronger than ever before.
Affiliation(s)
- Kristyn Beam
- Department of Neonatology, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Puneet Sharma
- Division of Newborn Medicine, Department of Pediatrics, Boston Children's Hospital, Boston, MA, USA
- Phil Levy
- Division of Newborn Medicine, Department of Pediatrics, Boston Children's Hospital, Boston, MA, USA
- Andrew L Beam
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA, USA.
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA.
28
Zhang D, Li A, Wu W, Yu L, Kang X, Huo X. CR-Conformer: a fusion network for clinical skin lesion classification. Med Biol Eng Comput 2024; 62:85-94. [PMID: 37653185] [DOI: 10.1007/s11517-023-02904-0]
Abstract
Deep convolutional neural network (DCNN) models have been widely used to diagnose skin lesions, and some have achieved diagnostic results comparable to, or even better than, those of dermatologists. Most publicly available skin lesion datasets used to train DCNNs consist of dermoscopic images, yet expensive dermoscopic equipment is rarely available in rural clinics or small hospitals in remote areas. It is therefore of great significance to rely on clinical images for computer-aided diagnosis of skin lesions. This paper proposes an improved dual-branch fusion network called CR-Conformer. It integrates a DCNN branch that effectively extracts local features and a Transformer branch that extracts global features to capture more valuable features in clinical skin lesion images. In addition, we improved the DCNN branch to extract enhanced features in four directions through a convolutional rotation operation, further improving the classification performance on clinical skin lesion images. To verify the effectiveness of our proposed method, we conducted comprehensive tests on a private dataset named XJUSL, which contains ten types of clinical skin lesions. The test results indicate that our proposed method reduced the number of parameters by 11.17 M and improved the accuracy of clinical skin lesion image classification by 1.08%. It has the potential to enable automatic diagnosis of skin lesions on mobile devices.
Affiliation(s)
- Dezhi Zhang
- Department of Dermatology and Venereology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, 830000, China
- Xinjiang Clinical Research Center for Dermatologic Diseases, Urumqi, China
- Xinjiang Key Laboratory of Dermatology Research (XJYS1707), Urumqi, China
- Aolun Li
- School of Information Science and Engineering, Xinjiang University, Urumqi, China
- Weidong Wu
- Department of Dermatology and Venereology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, 830000, China.
- Xinjiang Clinical Research Center for Dermatologic Diseases, Urumqi, China.
- Xinjiang Key Laboratory of Dermatology Research (XJYS1707), Urumqi, China.
- Long Yu
- School of Information Science and Engineering, Xinjiang University, Urumqi, China
- Xiaojing Kang
- Department of Dermatology and Venereology, People's Hospital of Xinjiang Uygur Autonomous Region, Urumqi, 830000, China
- Xinjiang Clinical Research Center for Dermatologic Diseases, Urumqi, China
- Xinjiang Key Laboratory of Dermatology Research (XJYS1707), Urumqi, China
- Xiangzuo Huo
- School of Information Science and Engineering, Xinjiang University, Urumqi, China
29
Hwang JK, Del Toro NP, Han G, Oh DH, Tejasvi T, Lipner SR. Review of Teledermatology: Lessons Learned from the COVID-19 Pandemic. Am J Clin Dermatol 2024; 25:5-14. [PMID: 38062339] [DOI: 10.1007/s40257-023-00826-z]
Abstract
Utilization of telemedicine for dermatology has greatly expanded since the start of the COVID-19 pandemic, with over 500 new teledermatology studies published since 2020. An updated review on teledermatology is necessary to incorporate new findings and perspectives, and educate dermatologists on effective utilization. We discuss teledermatology in terms of diagnostic accuracy and clinical outcomes, patient and physician satisfaction, considerations for special patient populations, published practice guidelines, cost effectiveness and efficiency, as well as administrative regulations and policies. Our findings emphasize the need for dermatologist education, prioritization of reliable reimbursement systems, and technological innovations to support the continued development of teledermatology in the post-pandemic era.
Affiliation(s)
- Jonathan K Hwang
- Department of Dermatology, Weill Cornell Medicine, 1305 York Avenue, New York, NY, 10021, USA
- Natalia Pelet Del Toro
- Department of Dermatology, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, 1991 Marcus Ave, New Hyde Park, NY, 11042, USA
- George Han
- Department of Dermatology, Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, 1991 Marcus Ave, New Hyde Park, NY, 11042, USA
- Dennis H Oh
- Department of Dermatology, University of California, San Francisco, 4150 Clement Street, San Francisco, CA, 94121, USA
- Trilokraj Tejasvi
- Department of Dermatology, University of Michigan Medicine, 1910 Taubman Center, Ann Arbor, MI, 48109, USA
- Shari R Lipner
- Department of Dermatology, Weill Cornell Medicine, 1305 York Avenue, New York, NY, 10021, USA.
30
Zhou M, Jie W, Tang F, Zhang S, Mao Q, Liu C, Hao Y. Deep learning algorithms for classification and detection of recurrent aphthous ulcerations using oral clinical photographic images. J Dent Sci 2024; 19:254-260. [PMID: 38303872] [PMCID: PMC10829559] [DOI: 10.1016/j.jds.2023.04.022]
Abstract
Background/purpose The application of artificial intelligence diagnosis based on deep learning in the medical field has been widely accepted. We aimed to evaluate convolutional neural networks (CNNs) for automated classification and detection of recurrent aphthous ulcerations (RAU), normal oral mucosa, and other common oral mucosal diseases in clinical oral photographs. Materials and methods The study included 785 clinical oral photographs, which were divided into 251 images of RAU, 271 images of normal oral mucosa, and 263 images of other common oral mucosal diseases. Four CNN models were used for the classification task and three for the detection task. A total of 628 images were randomly selected as training data, and 78 and 79 images were assigned as validation and testing data, respectively. Main outcome measures included precision, recall, F1 score, specificity, sensitivity, and area under the receiver operating characteristic curve (AUC). Results In the classification task, the pretrained ResNet50 model had the best performance, with a precision of 92.86%, a recall of 91.84%, an F1 score of 92.24%, a specificity of 96.41%, a sensitivity of 91.84%, and an AUC of 98.95%. In the detection task, the pretrained YOLOv5 model had the best performance, with a precision of 98.70%, a recall of 79.51%, an F1 score of 88.07%, and an area under the precision-recall curve of 90.89%. Conclusion The pretrained ResNet50 and YOLOv5 algorithms showed superior performance and acceptable potential in the classification and detection of RAU lesions based on non-invasive oral images, which may prove useful in clinical practice.
Affiliation(s)
- Mimi Zhou
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Weiping Jie
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Fan Tang
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Shangjun Zhang
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Qinghua Mao
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Chuanxia Liu
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
- Yilong Hao
- Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Cancer Center of Zhejiang University, Hangzhou, China
31
DeGrave AJ, Cai ZR, Janizek JD, Daneshjou R, Lee SI. Auditing the inference processes of medical-image classifiers by leveraging generative AI and the expertise of physicians. Nat Biomed Eng 2023. [PMID: 38155295] [DOI: 10.1038/s41551-023-01160-9]
Abstract
The inferences of most machine-learning models powering medical artificial intelligence are difficult to interpret. Here we report a general framework for model auditing that combines insights from medical experts with a highly expressive form of explainable artificial intelligence. Specifically, we leveraged the expertise of dermatologists for the clinical task of differentiating melanomas from melanoma 'lookalikes' on the basis of dermoscopic and clinical images of the skin, and the power of generative models to render 'counterfactual' images to understand the 'reasoning' processes of five medical-image classifiers. By altering image attributes to produce analogous images that elicit a different prediction by the classifiers, and by asking physicians to identify medically meaningful features in the images, the counterfactual images revealed that the classifiers rely both on features used by human dermatologists, such as lesional pigmentation patterns, and on undesirable features, such as background skin texture and colour balance. The framework can be applied to any specialized medical domain to make the powerful inference processes of machine-learning models medically understandable.
Affiliation(s)
- Alex J DeGrave
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
- Medical Scientist Training Program, University of Washington, Seattle, WA, USA
- Zhuo Ran Cai
- Program for Clinical Research and Technology, Department of Dermatology, Stanford University School of Medicine, Stanford, CA, USA
- Joseph D Janizek
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA
- Medical Scientist Training Program, University of Washington, Seattle, WA, USA
- Roxana Daneshjou
- Department of Dermatology, Stanford University School of Medicine, Stanford, CA, USA.
- Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA, USA.
- Su-In Lee
- Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA.
32
Park SR, Park H, Lee S, Hwang J, Suh BF, Kim E. Facial age evaluated by artificial intelligence system, Dr.AMORE®: An objective, intuitive, and reliable new skin diagnosis technology. J Cosmet Dermatol 2023. [PMID: 38149689] [DOI: 10.1111/jocd.16146]
Affiliation(s)
- Sae-Ra Park
- Clinical Research Lab, AMOREPACIFIC R&I Center, Yongin-si, Korea
- Hyeokgon Park
- Clinical Research Lab, AMOREPACIFIC R&I Center, Yongin-si, Korea
- Sangran Lee
- AI Solution Team, AMOREPACIFIC Corporation, Seoul, Korea
- Joongwon Hwang
- AI Solution Team, AMOREPACIFIC Corporation, Seoul, Korea
- Eunjoo Kim
- Clinical Research Lab, AMOREPACIFIC R&I Center, Yongin-si, Korea
33
He X, Zheng X, Ding H. Existing Barriers Faced by and Future Design Recommendations for Direct-to-Consumer Health Care Artificial Intelligence Apps: Scoping Review. J Med Internet Res 2023; 25:e50342. [PMID: 38109173] [PMCID: PMC10758939] [DOI: 10.2196/50342]
Abstract
BACKGROUND Direct-to-consumer (DTC) health care artificial intelligence (AI) apps hold the potential to bridge the spatial and temporal disparities in health care resources, but they also come with individual and societal risks due to AI errors. Furthermore, the manner in which consumers interact directly with health care AI is reshaping traditional physician-patient relationships. However, the academic community lacks a systematic comprehension of the research overview for such apps. OBJECTIVE This paper systematically delineated and analyzed the characteristics of included studies, identified existing barriers and design recommendations for DTC health care AI apps mentioned in the literature and also provided a reference for future design and development. METHODS This scoping review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews guidelines and was conducted according to Arksey and O'Malley's 5-stage framework. Peer-reviewed papers on DTC health care AI apps published until March 27, 2023, in Web of Science, Scopus, the ACM Digital Library, IEEE Xplore, PubMed, and Google Scholar were included. The papers were analyzed using Braun and Clarke's reflective thematic analysis approach. RESULTS Of the 2898 papers retrieved, 32 (1.1%) covering this emerging field were included. The included papers were recently published (2018-2023), and most (23/32, 72%) were from developed countries. The medical field was mostly general practice (8/32, 25%). In terms of users and functionalities, some apps were designed solely for single-consumer groups (24/32, 75%), offering disease diagnosis (14/32, 44%), health self-management (8/32, 25%), and health care information inquiry (4/32, 13%). Other apps connected to physicians (5/32, 16%), family members (1/32, 3%), nursing staff (1/32, 3%), and health care departments (2/32, 6%), generally to alert these groups to abnormal conditions of consumer users. 
In addition, 8 barriers and 6 design recommendations related to DTC health care AI apps were identified. Several subtler obstacles in consumer-facing health care AI systems, together with their corresponding design recommendations, were discussed further, including enhancing human-centered explainability, establishing calibrated trust and addressing overtrust, demonstrating empathy in AI, improving the specialization of consumer-grade products, and expanding the diversity of the test population. CONCLUSIONS The booming DTC health care AI apps present both risks and opportunities, which highlights the need to explore their current status. This paper systematically summarized and sorted the characteristics of the included studies, identified existing barriers faced by, and made future design recommendations for such apps. To the best of our knowledge, this is the first study to systematically summarize and categorize academic research on these apps. Future studies conducting the design and development of such systems could refer to the results of this study, which is crucial to improve the health care services provided by DTC health care AI apps.
Affiliation(s)
- Xin He
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, China
- Xi Zheng
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, China
- Huiyuan Ding
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, China
34
Xu X, Jia Q, Yuan H, Qiu H, Dong Y, Xie W, Yao Z, Zhang J, Nie Z, Li X, Shi Y, Zou JY, Huang M, Zhuang J. A clinically applicable AI system for diagnosis of congenital heart diseases based on computed tomography images. Med Image Anal 2023; 90:102953. [PMID: 37734140] [DOI: 10.1016/j.media.2023.102953]
Abstract
Congenital heart disease (CHD) is the most common type of birth defect. Without timely detection and treatment, approximately one-third of children with CHD would die in the infant period. However, due to the complicated heart structures, early diagnosis of CHD and its types is quite challenging, even for experienced radiologists. Here, we present an artificial intelligence (AI) system that achieves performance comparable to that of human experts in the critical task of classifying 17 categories of CHD types. We collected the first large CT dataset from three different CT machines, including more than 3750 CHD patients over 14 years. Experimental results demonstrate that the system achieves diagnostic accuracy (86.03%) comparable with that of junior cardiovascular radiologists (86.27%) at a World Health Organization-appointed research and cooperation center in China on most types of CHD, and obtains a higher sensitivity (82.91%) than junior cardiovascular radiologists (76.18%). The accuracy of the combination of our AI system and senior radiologists (97.20%) is comparable to that of junior radiologists and senior radiologists together (97.16%), which is the current clinical routine. Our AI system can further provide 3D visualization of hearts to senior radiologists for interpretation and flexible review, to surgeons for precise intuition of heart structures, and to clinicians for more precise outcome prediction. We demonstrate the potential of our model to be integrated into current clinical practice to improve the diagnosis of CHD globally, especially in regions where experienced radiologists can be scarce.
Affiliation(s)
- Xiaowei Xu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Qianjun Jia
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Catheterization Lab, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Haiyun Yuan
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Hailong Qiu
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
- Yuhao Dong
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Catheterization Lab, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
| | - Wen Xie
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
| | - Zeyang Yao
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
| | - Jiawei Zhang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
| | - Zhiqiang Nie
- Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China
| | - Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong Special Administrative Region
| | - Yiyu Shi
- Computer Science and Engineering, University of Notre Dame, IN, 46656, USA
| | - James Y Zou
- Department of Computer Science, Stanford University, Stanford, CA, 94305, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA.
| | - Meiping Huang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Catheterization Lab, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China.
| | - Jian Zhuang
- Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Department of Cardiovascular Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China.
| |
|
35
|
Ali R, Tang OY, Connolly ID, Zadnik Sullivan PL, Shin JH, Fridley JS, Asaad WF, Cielo D, Oyelese AA, Doberstein CE, Gokaslan ZL, Telfeian AE. Performance of ChatGPT and GPT-4 on Neurosurgery Written Board Examinations. Neurosurgery 2023; 93:1353-1365. [PMID: 37581444 DOI: 10.1227/neu.0000000000002632] [Citation(s) in RCA: 29] [Impact Index Per Article: 29.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2023] [Accepted: 05/19/2023] [Indexed: 08/16/2023] Open
Abstract
BACKGROUND AND OBJECTIVES Interest surrounding generative large language models (LLMs) has rapidly grown. Although ChatGPT (GPT-3.5), a general LLM, has shown near-passing performance on medical student board examinations, the performance of ChatGPT or its successor GPT-4 on specialized examinations and the factors affecting accuracy remain unclear. This study aims to assess the performance of ChatGPT and GPT-4 on a 500-question mock neurosurgical written board examination. METHODS The Self-Assessment Neurosurgery Examinations (SANS) American Board of Neurological Surgery Self-Assessment Examination 1 was used to evaluate ChatGPT and GPT-4. Questions were in single best answer, multiple-choice format. χ2, Fisher exact, and univariable logistic regression tests were used to assess performance differences in relation to question characteristics. RESULTS ChatGPT (GPT-3.5) and GPT-4 achieved scores of 73.4% (95% CI: 69.3%-77.2%) and 83.4% (95% CI: 79.8%-86.5%), respectively, relative to the user average of 72.8% (95% CI: 68.6%-76.6%). Both LLMs exceeded last year's passing threshold of 69%. Although scores between ChatGPT and question bank users were equivalent (P = .963), GPT-4 outperformed both (both P < .001). GPT-4 correctly answered every question that ChatGPT answered correctly, as well as 37.6% (50/133) of the questions ChatGPT missed. Among 12 question categories, GPT-4 significantly outperformed users in each but performed comparably with ChatGPT in 3 (functional, other general, and spine) and outperformed both users and ChatGPT for tumor questions. Increased word count (odds ratio = 0.89 of answering a question correctly per +10 words) and higher-order problem-solving (odds ratio = 0.40, P = .009) were associated with lower accuracy for ChatGPT, but not for GPT-4 (both P > .005). Multimodal input was not available at the time of this study; hence, on questions with image content, ChatGPT and GPT-4 answered 49.5% and 56.8% of questions correctly based on contextual clues alone. CONCLUSION LLMs achieved passing scores on a mock 500-question neurosurgical written board examination, with GPT-4 significantly outperforming ChatGPT.
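The 95% CIs reported for these proportions can be approximated with a Wilson score interval. The abstract does not state which interval method the authors used, so the sketch below is an illustrative check, not their computation; the counts 367/500 and 417/500 are back-calculated from the stated percentages (73.4% and 83.4% of 500 questions).

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# ChatGPT: assumed 367/500 correct (73.4%); GPT-4: assumed 417/500 correct (83.4%)
lo_gpt35, hi_gpt35 = wilson_ci(367, 500)  # ≈ (0.694, 0.771), near the reported 69.3%-77.2%
lo_gpt4, hi_gpt4 = wilson_ci(417, 500)    # ≈ (0.799, 0.864), near the reported 79.8%-86.5%
```

The small residual differences from the published bounds suggest the authors may have used an exact binomial interval instead.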
Affiliation(s)
- Rohaid Ali
- Department of Neurosurgery, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
| | - Oliver Y Tang
- Department of Neurosurgery, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
| | - Ian D Connolly
- Department of Neurosurgery, Massachusetts General Hospital, Boston, Massachusetts, USA
| | - Patricia L Zadnik Sullivan
- Department of Neurosurgery, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
| | - John H Shin
- Department of Neuroscience, Norman Prince Neurosciences Institute, Rhode Island Hospital, Providence, Rhode Island, USA
| | - Jared S Fridley
- Department of Neurosurgery, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
| | - Wael F Asaad
- Department of Neurosurgery, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
- Department of Neuroscience, Norman Prince Neurosciences Institute, Rhode Island Hospital, Providence, Rhode Island, USA
- Department of Neuroscience, Brown University, Providence, Rhode Island, USA
- Department of Neuroscience, Carney Institute for Brain Science, Brown University, Providence, Rhode Island, USA
| | - Deus Cielo
- Department of Neurosurgery, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
| | - Adetokunbo A Oyelese
- Department of Neurosurgery, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
| | - Curtis E Doberstein
- Department of Neurosurgery, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
| | - Ziya L Gokaslan
- Department of Neurosurgery, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
| | - Albert E Telfeian
- Department of Neurosurgery, The Warren Alpert Medical School of Brown University, Providence, Rhode Island, USA
| |
|
36
|
Wang Z, Zhang L, Shu X, Wang Y, Feng Y. Consistent representation via contrastive learning for skin lesion diagnosis. Comput Methods Programs Biomed 2023; 242:107826. [PMID: 37837885 DOI: 10.1016/j.cmpb.2023.107826] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Revised: 09/19/2023] [Accepted: 09/21/2023] [Indexed: 10/16/2023]
Abstract
BACKGROUND Skin lesions are a prevalent ailment, with melanoma emerging as a particularly perilous variant. Encouragingly, artificial intelligence displays promising potential in early detection, yet its integration within clinical contexts, particularly involving multi-modal data, presents challenges. While multi-modal approaches enhance diagnostic efficacy, the influence of modal bias is often disregarded. METHODS In this investigation, a multi-modal feature learning technique termed "Contrast-based Consistent Representation Disentanglement" for dermatological diagnosis is introduced. This approach employs adversarial domain adaptation to disentangle features from distinct modalities, fostering a shared representation. Furthermore, a contrastive learning strategy is devised to incentivize the model to preserve uniformity in common lesion attributes across modalities. Emphasizing the learning of a uniform representation among models, this approach circumvents reliance on supplementary data. RESULTS Assessment of the proposed technique on a 7-point criteria evaluation dataset yields an average accuracy of 76.1% for multi-classification tasks, surpassing researched state-of-the-art methods. The approach tackles modal bias, enabling the acquisition of a consistent representation of common lesion appearances across diverse modalities, which transcends modality boundaries. This study underscores the latent potential of multi-modal feature learning in dermatological diagnosis. CONCLUSION In summation, a multi-modal feature learning strategy is posited for dermatological diagnosis. This approach outperforms other state-of-the-art methods, underscoring its capacity to enhance diagnostic precision for skin lesions.
Affiliation(s)
- Zizhou Wang
- College of Computer Science, Sichuan University, Chengdu 610065, China; Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore.
| | - Lei Zhang
- College of Computer Science, Sichuan University, Chengdu 610065, China.
| | - Xin Shu
- College of Computer Science, Sichuan University, Chengdu 610065, China.
| | - Yan Wang
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore.
| | - Yangqin Feng
- Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore 138632, Singapore.
| |
|
37
|
Lakdawala N, Lakdawala N, Gronbeck C, Grant-Kels JM. Ethical considerations in patient-directed artificial intelligence platforms. J Am Acad Dermatol 2023:S0190-9622(23)03229-2. [PMID: 38008409 DOI: 10.1016/j.jaad.2023.11.032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2023] [Revised: 11/18/2023] [Accepted: 11/20/2023] [Indexed: 11/28/2023]
Affiliation(s)
- Nehal Lakdawala
- University of Connecticut School of Medicine, Farmington, Connecticut
| | - Nikita Lakdawala
- The Ronald O. Perelman Department of Dermatology, New York University Langone Health, New York, New York
| | - Christian Gronbeck
- Department of Dermatology, University of Connecticut, Farmington, Connecticut
| | - Jane M Grant-Kels
- Department of Dermatology, University of Connecticut, Farmington, Connecticut; Department of Dermatology, University of Florida, Gainesville, Florida.
| |
|
38
|
Siddalingappa R, Kanagaraj S. K-nearest-neighbor algorithm to predict the survival time and classification of various stages of oral cancer: a machine learning approach. F1000Res 2023; 11:70. [PMID: 38046542 PMCID: PMC10690040 DOI: 10.12688/f1000research.75469.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 10/16/2023] [Indexed: 12/05/2023] Open
Abstract
Background: For years now, cancer treatments have entailed tried-and-true methods, with oncologists and clinicians recommending a series of surgeries, chemotherapy, and radiation therapy. Yet, even amidst these treatments, the number of deaths due to cancer increases at an alarming rate. The prognosis of cancer patients is influenced by mutations, age, and cancer stage; however, the association between these variables is unclear. Methods: The present work adopts a machine learning technique, k-nearest neighbor, for both regression and classification tasks: regression for predicting the survival time of oral cancer patients, and classification for classifying patients into one of the predefined oral cancer stages. Two cross-validation approaches, hold-out and k-fold, were used to examine the prediction results. Results: The experimental results show that the k-fold method performs better than the hold-out method, providing the least mean absolute error score of 0.015. Additionally, the model classifies patients into valid groups. Of the 429 records, 97 (out of 106), 99 (out of 119), 95 (out of 113), and 77 (out of 91) were classified to their correct labels as stages 1, 2, 3, and 4, respectively. The accuracy, recall, precision, and F-measure obtained for the classification are 0.84, 0.85, 0.85, and 0.84, respectively. Conclusions: The study showed that aged patients with a higher number of mutations than young patients have a higher risk of short survival, and senior patients with a more significant number of mutations have an increased risk of progressing to the last cancer stage.
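As a sketch of the core technique described above (not the authors' code or data), a k-nearest-neighbor model can serve both tasks by aggregating the k closest training points: averaging neighbor targets for regression, majority-voting neighbor labels for classification. The features, labels, and k value below are invented for illustration.

```python
from collections import Counter
from math import dist

def knn(train_X, train_y, x, k=3, task="classify"):
    """Predict for point x from its k nearest training points (Euclidean distance)."""
    nearest = sorted(zip(train_X, train_y), key=lambda pair: dist(pair[0], x))[:k]
    targets = [y for _, y in nearest]
    if task == "regress":
        return sum(targets) / k                   # mean of neighbor targets, e.g. survival time
    return Counter(targets).most_common(1)[0][0]  # majority vote, e.g. cancer stage

# Toy data: (age, mutation count) -> survival months / stage label (hypothetical)
X = [(40, 2), (45, 3), (70, 9), (75, 10), (50, 4), (68, 8)]
months = [60, 55, 12, 10, 50, 15]
stages = [1, 1, 4, 4, 2, 4]

print(knn(X, months, (72, 9), k=3, task="regress"))   # averages the three nearest (older, high-mutation) cases
print(knn(X, stages, (72, 9), k=3, task="classify"))  # -> 4
```

The same predictor can then be scored under hold-out or k-fold cross-validation by repeatedly splitting `X` into training and validation folds.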
Affiliation(s)
- Rashmi Siddalingappa
- Computational and Data Sciences, Indian Institute of Science, Bangalore, Karnataka, 560012, India
| | - Sekar Kanagaraj
- Computational and Data Sciences, Indian Institute of Science, Bangalore, Karnataka, 560012, India
| |
|
39
|
Del Amor R, Pérez-Cano J, López-Pérez M, Terradez L, Aneiros-Fernandez J, Morales S, Mateos J, Molina R, Naranjo V. Annotation protocol and crowdsourcing multiple instance learning classification of skin histological images: The CR-AI4SkIN dataset. Artif Intell Med 2023; 145:102686. [PMID: 37925214 DOI: 10.1016/j.artmed.2023.102686] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Revised: 10/11/2023] [Accepted: 10/13/2023] [Indexed: 11/06/2023]
Abstract
Digital Pathology (DP) has experienced a significant growth in recent years and has become an essential tool for diagnosing and prognosis of tumors. The availability of Whole Slide Images (WSIs) and the implementation of Deep Learning (DL) algorithms have paved the way for the appearance of Artificial Intelligence (AI) systems that support the diagnosis process. These systems require extensive and varied data for their training to be successful. However, creating labeled datasets in histopathology is laborious and time-consuming. We have developed a crowdsourcing-multiple instance labeling/learning protocol that is applied to the creation and use of the CR-AI4SkIN dataset. CR-AI4SkIN contains 271 WSIs of 7 Cutaneous Spindle Cell (CSC) neoplasms with expert and non-expert labels at region and WSI levels. It is the first dataset of these types of neoplasms made available. The regions selected by the experts are used to learn an automatic extractor of Regions of Interest (ROIs) from WSIs. To produce the embedding of each WSI, the representations of patches within the ROIs are obtained using a contrastive learning method, and then combined. Finally, they are fed to a Gaussian process-based crowdsourcing classifier, which utilizes the noisy non-expert WSI labels. We validate our crowdsourcing-multiple instance learning method in the CR-AI4SkIN dataset, addressing a binary classification problem (malignant vs. benign). The proposed method obtains an F1 score of 0.7911 on the test set, outperforming three widely used aggregation methods for crowdsourcing tasks. Furthermore, our crowdsourcing method also outperforms the supervised model with expert labels on the test set (F1-score = 0.6035). The promising results support the proposed crowdsourcing multiple instance learning annotation protocol. It also validates the automatic extraction of interest regions and the use of contrastive embedding and Gaussian process classification to perform crowdsourcing classification tasks.
Affiliation(s)
- Rocío Del Amor
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, Universitat Politècnica de València, Valencia, Spain
| | - Jose Pérez-Cano
- Department of Computer Science and Artificial Intelligence, University of Granada, 18010 Granada, Spain
| | - Miguel López-Pérez
- Department of Computer Science and Artificial Intelligence, University of Granada, 18010 Granada, Spain.
| | - Liria Terradez
- Pathology Department. Hospital Clínico Universitario de Valencia, Universidad de Valencia, Spain
| | | | - Sandra Morales
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, Universitat Politècnica de València, Valencia, Spain
| | - Javier Mateos
- Department of Computer Science and Artificial Intelligence, University of Granada, 18010 Granada, Spain
| | - Rafael Molina
- Department of Computer Science and Artificial Intelligence, University of Granada, 18010 Granada, Spain
| | - Valery Naranjo
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, Universitat Politècnica de València, Valencia, Spain
| |
|
40
|
Kololgi SP, Lahari CS. Harnessing the Power of Artificial Intelligence in Dermatology: A Comprehensive Commentary. Indian J Dermatol 2023; 68:678-681. [PMID: 38371574 PMCID: PMC10868991 DOI: 10.4103/ijd.ijd_581_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/20/2024] Open
Abstract
This special article provides a comprehensive commentary on the significant role of artificial intelligence (AI) in the field of dermatology. It explores the potential of AI in various aspects of dermatologic practice, including diagnosis, treatment planning, research and patient management. The article discusses the current state of AI in dermatology, its challenges and the ethical considerations surrounding its implementation. It highlights the transformative impact of AI on dermatologic care and offers insights into the future directions of AI in the field.
Affiliation(s)
- Shreyas P. Kololgi
- From the Department of Dermatology, Venerology and Leprosy, SS Institute of Medical Sciences and Research Centre, Bengaluru, Karnataka, India
| | - CS Lahari
- From the Department of Dermatology, Venerology and Leprosy, SS Institute of Medical Sciences and Research Centre, Bengaluru, Karnataka, India
| |
|
41
|
Jiang Y, Wang C, Zhou S. Artificial intelligence-based risk stratification, accurate diagnosis and treatment prediction in gynecologic oncology. Semin Cancer Biol 2023; 96:82-99. [PMID: 37783319 DOI: 10.1016/j.semcancer.2023.09.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2022] [Revised: 08/27/2023] [Accepted: 09/25/2023] [Indexed: 10/04/2023]
Abstract
As a data-driven science, artificial intelligence (AI) has paved a promising path toward an evolving health system teeming with thrilling opportunities for precision oncology. Notwithstanding the tremendous success of oncological AI in such fields as lung carcinoma, breast tumor and brain malignancy, less attention has been devoted to investigating the influence of AI on gynecologic oncology. Hereby, this review sheds light on the ever-increasing contribution of state-of-the-art AI techniques to the refined risk stratification and whole-course management of patients with gynecologic tumors, in particular, cervical, ovarian and endometrial cancer, centering on information and features extracted from clinical data (electronic health records), cancer imaging including radiological imaging, colposcopic images, cytological and histopathological digital images, and molecular profiling (genomics, transcriptomics, metabolomics and so forth). However, there are still noteworthy challenges beyond performance validation. Thus, this work further describes the limitations and challenges faced in the real-world implementation of AI models, as well as potential solutions to address these issues.
Affiliation(s)
- Yuting Jiang
- Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China; Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
| | - Chengdi Wang
- Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China; Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
| | - Shengtao Zhou
- Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China; Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China.
| |
|
42
|
Cassidy B, Hoon Yap M, Pappachan JM, Ahmad N, Haycocks S, O'Shea C, Fernandez CJ, Chacko E, Jacob K, Reeves ND. Artificial intelligence for automated detection of diabetic foot ulcers: A real-world proof-of-concept clinical evaluation. Diabetes Res Clin Pract 2023; 205:110951. [PMID: 37848163 DOI: 10.1016/j.diabres.2023.110951] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/18/2023] [Revised: 10/02/2023] [Accepted: 10/11/2023] [Indexed: 10/19/2023]
Abstract
OBJECTIVE Conduct a multicenter proof-of-concept clinical evaluation to assess the accuracy of an artificial intelligence system on a smartphone for automated detection of diabetic foot ulcers. METHODS The evaluation was undertaken with patients with diabetes (n = 81) from September 2020 to January 2021. A total of 203 foot photographs were collected using a smartphone, analysed using the artificial intelligence system, and compared against expert clinician judgement, with 162 images showing at least one ulcer, and 41 showing no ulcer. Sensitivity and specificity of the system against clinician decisions was determined and inter- and intra-rater reliability analysed. RESULTS Predictions/decisions made by the system showed excellent sensitivity (0.9157) and high specificity (0.8857). Merging of intersecting predictions improved specificity to 0.9243. High levels of inter- and intra-rater reliability for clinician agreement on the ability of the artificial intelligence system to detect diabetic foot ulcers was also demonstrated (Kα > 0.8000 for all studies, between and within raters). CONCLUSIONS We demonstrate highly accurate automated diabetic foot ulcer detection using an artificial intelligence system with a low-end smartphone. This is the first key stage in the creation of a fully automated diabetic foot ulcer detection and monitoring system, with these findings underpinning medical device development.
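The two headline metrics above follow directly from a confusion matrix: sensitivity is recall on the positive (ulcer) class, specificity is recall on the negative class. The counts below are hypothetical, chosen only so the ratios land near the reported 0.9157 and 0.8857; the study's raw per-prediction tallies are not given in the abstract.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical tallies for ulcer (positive) vs. no-ulcer (negative) predictions
sens, spec = sensitivity_specificity(tp=152, fn=14, tn=31, fp=4)
print(f"sensitivity={sens:.4f}, specificity={spec:.4f}")
```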
Affiliation(s)
- Bill Cassidy
- Department of Computing Mathematics, Manchester Metropolitan University, John Dalton Building, Manchester M1 5GD, UK.
| | - Moi Hoon Yap
- Department of Computing Mathematics, Manchester Metropolitan University, John Dalton Building, Manchester M1 5GD, UK.
| | - Joseph M Pappachan
- Lancashire Teaching Hospitals NHS Foundation Trust, Preston PR2 9HT, UK.
| | - Naseer Ahmad
- Manchester University NHS Foundation Trust, Manchester M13 9WL, UK.
| | | | - Claire O'Shea
- Te Whatu Ora Health New Zealand Waikato, Pembroke Street, Hamilton 3240, New Zealand.
| | - Cornelious J Fernandez
- Department of Endocrinology and Metabolism, Pilgrim Hospital, United Lincolnshire Hospitals NHS Trust, Boston LN2 5QY, UK.
| | - Elias Chacko
- Jersey General Hospital, The Parade, St Helier, JE1 3QS Jersey, UK.
| | - Koshy Jacob
- Eastbourne District General Hospital, Kings Drive, Eastbourne BN21 2UD, UK.
| | - Neil D Reeves
- Faculty of Science & Engineering, Manchester Metropolitan University, John Dalton Building, Manchester M1 5GD, UK.
| |
|
43
|
Karlin J, Gai L, LaPierre N, Danesh K, Farajzadeh J, Palileo B, Taraszka K, Zheng J, Wang W, Eskin E, Rootman D. Ensemble neural network model for detecting thyroid eye disease using external photographs. Br J Ophthalmol 2023; 107:1722-1729. [PMID: 36126104 DOI: 10.1136/bjo-2022-321833] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 08/22/2022] [Indexed: 11/03/2022]
Abstract
PURPOSE To describe an artificial intelligence platform that detects thyroid eye disease (TED). DESIGN Development of a deep learning model. METHODS 1944 photographs from a clinical database were used to train a deep learning model. 344 additional images ('test set') were used to calculate performance metrics. Receiver operating characteristic, precision-recall curves and heatmaps were generated. From the test set, 50 images were randomly selected ('survey set') and used to compare model performance with ophthalmologist performance. 222 images obtained from a separate clinical database were used to assess model recall and to quantitate model performance with respect to disease stage and grade. RESULTS The model achieved test set accuracy of 89.2%, specificity 86.9%, recall 93.4%, precision 79.7% and an F1 score of 86.0%. Heatmaps demonstrated that the model identified pixels corresponding to clinical features of TED. On the survey set, the ensemble model achieved accuracy, specificity, recall, precision and F1 score of 86%, 84%, 89%, 77% and 82%, respectively. 27 ophthalmologists achieved mean performance of 75%, 82%, 63%, 72% and 66%, respectively. On the second test set, the model achieved recall of 91.9%, with higher recall for moderate to severe (98.2%, n=55) and active disease (98.3%, n=60), as compared with mild (86.8%, n=68) or stable disease (85.7%, n=63). CONCLUSIONS The deep learning classifier is a novel approach to identify TED and is a first step in the development of tools to improve diagnostic accuracy and lower barriers to specialist evaluation.
Affiliation(s)
- Justin Karlin
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
| | - Lisa Gai
- Department of Computer Science, University of California, Los Angeles, California, USA
| | - Nathan LaPierre
- Department of Computer Science, University of California, Los Angeles, California, USA
| | - Kayla Danesh
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
| | - Justin Farajzadeh
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
| | - Bea Palileo
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
| | - Kodi Taraszka
- Department of Computer Science, University of California, Los Angeles, California, USA
| | - Jie Zheng
- Department of Computer Science, University of California, Los Angeles, California, USA
| | - Wei Wang
- Department of Computer Science, University of California, Los Angeles, California, USA
| | - Eleazar Eskin
- Department of Computer Science, University of California, Los Angeles, California, USA
- Department of Human Genetics, University of California, Los Angeles, California, USA
| | - Daniel Rootman
- Division of Orbital and Ophthalmic Plastic Surgery, Stein and Doheny Eye Institutes, University of California, Los Angeles, CA, USA
| |
|
44
|
du Crest D, Garibyan L, Hædersdal M, Zink A, Madhumita M, Harth Y, Bechstein S, Friis J, Riemer C, Kumar N, Parkkinen S, Shpudeiko V. Skin & Digital-the 2022 startups. Dermatologie (Heidelberg) 2023; 74:899-903. [PMID: 37550513 DOI: 10.1007/s00105-023-05204-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 07/12/2023] [Indexed: 08/09/2023]
Affiliation(s)
| | - Lilit Garibyan
- Wellman Center for Photomedicine, Massachusetts General Hospital, Department of Dermatology, Harvard Medical School, Boston, USA
| | - Merete Hædersdal
- Department of Dermatology, Copenhagen University Hospital-Bispebjerg, Copenhagen, Denmark
| | - A Zink
- Department of Dermatology and Allergy, School of Medicine, Technical University of Munich, Munich, Germany
| | | | | | - Sarah Bechstein
- Evident Medizin GmbH, Carl-Benz-Str. 3, 68723, Schwetzingen, Germany
| | | | | | - Neal Kumar
- Piction Health and Andover Dermatology, Boston, USA
| | | | | |
|
45
|
Cho SI, Navarrete-Dechent C, Daneshjou R, Cho HS, Chang SE, Kim SH, Na JI, Han SS. Generation of a Melanoma and Nevus Data Set From Unstandardized Clinical Photographs on the Internet. JAMA Dermatol 2023; 159:1223-1231. [PMID: 37792351 PMCID: PMC10551819 DOI: 10.1001/jamadermatol.2023.3521] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Accepted: 06/16/2023] [Indexed: 10/05/2023]
Abstract
Importance Artificial intelligence (AI) training for diagnosing dermatologic images requires large amounts of clean data. Dermatologic images have different compositions, and many are inaccessible due to privacy concerns, which hinder the development of AI. Objective To build a training data set for discriminative and generative AI from unstandardized internet images of melanoma and nevus. Design, Setting, and Participants In this diagnostic study, a total of 5619 (CAN5600 data set) and 2006 (CAN2000 data set; a manually revised subset of CAN5600) cropped lesion images of either melanoma or nevus were semiautomatically annotated from approximately 500 000 photographs on the internet using convolutional neural networks (CNNs), region-based CNNs, and large mask inpainting. For unsupervised pretraining, 132 673 possible lesions (LESION130k data set) were also created with diversity by collecting images from 18 482 websites in approximately 80 countries. A total of 5000 synthetic images (GAN5000 data set) were generated using the generative adversarial network (StyleGAN2-ADA; training, CAN2000 data set; pretraining, LESION130k data set). Main Outcomes and Measures The area under the receiver operating characteristic curve (AUROC) for determining malignant neoplasms was analyzed. In each test, 1 of the 7 preexisting public data sets (total of 2312 images; including Edinburgh, an SNU subset, Asan test, Waterloo, 7-point criteria evaluation, PAD-UFES-20, and MED-NODE) was used as the test data set. Subsequently, a comparative study was conducted between the performance of the EfficientNet Lite0 CNN on the proposed data set and that trained on the remaining 6 preexisting data sets. 
Results The EfficientNet Lite0 CNN trained on the annotated or synthetic images achieved higher or equivalent mean (SD) AUROCs to the EfficientNet Lite0 trained using the pathologically confirmed public data sets, including CAN5600 (0.874 [0.042]; P = .02), CAN2000 (0.848 [0.027]; P = .08), and GAN5000 (0.838 [0.040]; P = .31 [Wilcoxon signed rank test]) and the preexisting data sets combined (0.809 [0.063]), owing to the increased size of the training data set. Conclusions and Relevance The synthetic data set in this diagnostic study was created using various AI technologies from internet images. A neural network trained on the created data set (CAN5600) performed better than the same network trained on preexisting data sets combined. Both the annotated (CAN5600 and LESION130k) and synthetic (GAN5000) data sets could be shared for AI training and consensus between physicians.
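The study's headline metric, AUROC, has a simple rank interpretation: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case (ties count as half). A minimal pure-Python sketch of that computation; the labels and scores below are illustrative, not from the paper:

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) interpretation:
    the fraction of positive/negative pairs ranked correctly (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative example")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative example: a classifier that ranks every melanoma above every
# nevus achieves a perfect AUROC of 1.0.
labels = [0, 0, 1, 1]          # 1 = melanoma, 0 = nevus (hypothetical)
scores = [0.10, 0.35, 0.62, 0.91]
print(auroc(labels, scores))   # -> 1.0
```

The same pairwise definition explains why AUROC is insensitive to monotone rescaling of the classifier's output scores.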
Affiliation(s)
- Roxana Daneshjou
- Department of Dermatology, Stanford University, Stanford, California
- Hye Soo Cho
- Department of Dermatology, Asan Medical Center, Ulsan University College of Medicine, Seoul, Korea
- Sung Eun Chang
- Department of Dermatology, Asan Medical Center, Ulsan University College of Medicine, Seoul, Korea
- Seong Hwan Kim
- Department of Plastic and Reconstructive Surgery, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea
- Jung-Im Na
- Department of Dermatology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seoul, Korea
- Seung Seog Han
- Department of Dermatology, I Dermatology Clinic, Seoul, Korea
- IDerma Inc, Seoul, Korea
46
Omiye JA, Gui H, Daneshjou R, Cai ZR, Muralidharan V. Principles, applications, and future of artificial intelligence in dermatology. Front Med (Lausanne) 2023; 10:1278232. [PMID: 37901399] [PMCID: PMC10602645] [DOI: 10.3389/fmed.2023.1278232] [Received: 08/16/2023] [Accepted: 09/27/2023] [Indexed: 10/31/2023]
Abstract
This paper provides an overview of artificial intelligence (AI) as applied to dermatology. We focus our discussion on methodology, AI applications for various skin diseases, limitations, and future opportunities. We review how the current image-based models are being implemented in dermatology across disease subsets, and highlight the challenges facing widespread adoption. Additionally, we discuss how the future of AI in dermatology might evolve, including the emerging paradigm of large language and multimodal models, and emphasize the importance of developing responsible, fair, and equitable models in dermatology.
Affiliation(s)
- Haiwen Gui
- Department of Dermatology, Stanford University, Stanford, CA, United States
- Roxana Daneshjou
- Department of Dermatology, Stanford University, Stanford, CA, United States
- Department of Biomedical Data Science, Stanford University, Stanford, CA, United States
- Zhuo Ran Cai
- Department of Dermatology, Stanford University, Stanford, CA, United States
47
Li J, Du D, Zhang J, Liu W, Wang J, Wei X, Xue L, Li X, Diao P, Zhang L, Jiang X. Development and validation of an artificial intelligence-powered acne grading system incorporating lesion identification. Front Med (Lausanne) 2023; 10:1255704. [PMID: 37869155] [PMCID: PMC10587552] [DOI: 10.3389/fmed.2023.1255704] [Received: 07/09/2023] [Accepted: 09/12/2023] [Indexed: 10/24/2023]
Abstract
Background The management of acne requires the consideration of its severity; however, a universally adopted evaluation system for clinical practice is lacking. Artificial intelligence (AI) evaluation systems offer the potential to enhance the efficiency and reproducibility of assessments in this domain. While the identification of skin lesions represents a crucial component of acne evaluation, existing AI systems often overlook lesion identification or fail to integrate it with severity assessment. This study aimed to develop an AI-powered acne grading system and compare its performance with physician image-based scoring. Methods A total of 1,501 acne patients were included in the study, and standardized pictures were obtained using the VISIA system. The initial evaluation involved 40 stratified sampled frontal photos assessed by seven dermatologists. Subsequently, the three doctors with the highest inter-rater agreement annotated the remaining 1,461 images, which served as the dataset for the development of the AI system. The dataset was randomly divided into two groups: 276 images were allocated for training the acne lesion identification platform, and 1,185 images were used to assess the severity of acne. Results The average precision of our model for skin lesion identification was 0.507 and the average recall was 0.775. The AI severity grading system achieved good agreement with the true label (linear weighted kappa = 0.652). After integrating the lesion identification results into the severity assessment with fixed weights and learnable weights, the kappa rose to 0.737 and 0.696, respectively, and the entire evaluation on a Linux workstation with a Tesla K40m GPU took less than 0.1 s per picture.
Conclusion This study developed a system that detects various types of acne lesions and correlates them well with acne severity grading, and the good accuracy and efficiency make this approach potentially an effective clinical decision support tool.
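The linearly weighted kappa used above to score agreement on ordinal severity grades penalizes adjacent-grade disagreements less than distant ones. A self-contained sketch of the computation (the grade vectors and class count are made up for illustration, not taken from the study):

```python
def linear_weighted_kappa(rater_a, rater_b, n_classes):
    """Cohen's kappa with linear weights w[i][j] = |i - j| / (n_classes - 1),
    suited to ordinal labels such as acne severity grades."""
    n = len(rater_a)
    # Observed confusion matrix as joint probabilities.
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1.0 / n
    # Chance-expected matrix comes from the two raters' marginals.
    marg_a = [sum(obs[i][j] for j in range(n_classes)) for i in range(n_classes)]
    marg_b = [sum(obs[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = abs(i - j) / (n_classes - 1)
            num += w * obs[i][j]
            den += w * marg_a[i] * marg_b[j]
    return 1.0 - num / den

# Perfect agreement on four ordinal grades gives kappa = 1.0.
print(linear_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4))  # -> 1.0
```

With two classes the linear weighting reduces to plain (unweighted) Cohen's kappa, which is a useful sanity check for an implementation.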
Affiliation(s)
- Jiaqi Li
- Department of Dermatology, West China Hospital, Sichuan University, Chengdu, China
- Laboratory of Dermatology, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Clinical Institute of Inflammation and Immunology, Sichuan University, Chengdu, China
- Med-X Center for Informatics, Sichuan University, Chengdu, China
- Dan Du
- Department of Dermatology, West China Hospital, Sichuan University, Chengdu, China
- Laboratory of Dermatology, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Clinical Institute of Inflammation and Immunology, Sichuan University, Chengdu, China
- Med-X Center for Informatics, Sichuan University, Chengdu, China
- Jianwei Zhang
- Med-X Center for Informatics, Sichuan University, Chengdu, China
- College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Wenjie Liu
- Med-X Center for Informatics, Sichuan University, Chengdu, China
- College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Junyou Wang
- Med-X Center for Informatics, Sichuan University, Chengdu, China
- College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Xin Wei
- Med-X Center for Informatics, Sichuan University, Chengdu, China
- College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Li Xue
- Department of Dermatology, West China Hospital, Sichuan University, Chengdu, China
- Laboratory of Dermatology, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Clinical Institute of Inflammation and Immunology, Sichuan University, Chengdu, China
- Xiaoxue Li
- Department of Dermatology, West China Hospital, Sichuan University, Chengdu, China
- Laboratory of Dermatology, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Clinical Institute of Inflammation and Immunology, Sichuan University, Chengdu, China
- Ping Diao
- Department of Dermatology, West China Hospital, Sichuan University, Chengdu, China
- Laboratory of Dermatology, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Clinical Institute of Inflammation and Immunology, Sichuan University, Chengdu, China
- Lei Zhang
- Med-X Center for Informatics, Sichuan University, Chengdu, China
- College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Xian Jiang
- Department of Dermatology, West China Hospital, Sichuan University, Chengdu, China
- Laboratory of Dermatology, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Clinical Institute of Inflammation and Immunology, Sichuan University, Chengdu, China
- Med-X Center for Informatics, Sichuan University, Chengdu, China
48
Derekas P, Spyridonos P, Likas A, Zampeta A, Gaitanis G, Bassukas I. The Promise of Semantic Segmentation in Detecting Actinic Keratosis Using Clinical Photography in the Wild. Cancers (Basel) 2023; 15:4861. [PMID: 37835555] [PMCID: PMC10571759] [DOI: 10.3390/cancers15194861] [Received: 08/18/2023] [Revised: 10/01/2023] [Accepted: 10/02/2023] [Indexed: 10/15/2023]
Abstract
Actinic keratosis (AK) is a common precancerous skin condition that requires effective detection and treatment monitoring. To improve the monitoring of the AK burden in clinical settings with enhanced automation and precision, the present study evaluates the application of semantic segmentation based on the U-Net architecture (i.e., AKU-Net). AKU-Net employs transfer learning to compensate for the relatively small dataset of annotated images and integrates a recurrent process based on convLSTM to exploit contextual information and address the challenges related to the low contrast and ambiguous boundaries of AK-affected skin regions. We used an annotated dataset of 569 clinical photographs from 115 patients with actinic keratosis to train and evaluate the model. From each photograph, patches of 512 × 512 pixels were extracted using translation lesion boxes that encompassed lesions in different positions and captured different contexts of perilesional skin. In total, 16,488 translation-augmented crops were used for training the model, and 403 lesion center crops were used for testing. To demonstrate the improvements in AK detection, AKU-Net was compared with plain U-Net and U-Net++ architectures. The experimental results highlighted the effectiveness of AKU-Net, improving upon both automation and precision over existing approaches, paving the way for more effective and reliable evaluation of actinic keratosis in clinical settings.
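The translation augmentation described above, extracting fixed-size crops whose offsets place the lesion at different positions within the patch, reduces to simple coordinate arithmetic. A minimal sketch under our own assumptions (function name, shift grid, and clamping policy are ours, not from the paper; image-boundary handling is omitted):

```python
def translation_crops(box, crop=512, shifts=(-96, 0, 96)):
    """Return top-left corners of crop x crop windows that each contain the
    lesion box (x0, y0, x1, y1) at a translated position, mimicking
    translation augmentation around a lesion."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
    corners = []
    for dx in shifts:
        for dy in shifts:
            left = cx - crop // 2 + dx
            top = cy - crop // 2 + dy
            # Clamp so the lesion box remains fully inside the window.
            left = min(max(left, x1 - crop), x0)
            top = min(max(top, y1 - crop), y0)
            corners.append((left, top))
    return corners

# A hypothetical 100 x 80 lesion box yields a 3 x 3 grid of crop origins.
crops = translation_crops((350, 260, 450, 340))
print(len(crops))  # -> 9
```

Each returned window is guaranteed to contain the whole lesion box, so every augmented crop remains a valid positive training patch.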
Affiliation(s)
- Panagiotis Derekas
- Department of Computer Science & Engineering, School of Engineering, University of Ioannina, 45110 Ioannina, Greece; (P.D.); (A.L.)
- Panagiota Spyridonos
- Department of Medical Physics, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
- Aristidis Likas
- Department of Computer Science & Engineering, School of Engineering, University of Ioannina, 45110 Ioannina, Greece; (P.D.); (A.L.)
- Athanasia Zampeta
- Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece; (A.Z.); (G.G.); (I.B.)
- Georgios Gaitanis
- Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece; (A.Z.); (G.G.); (I.B.)
- Ioannis Bassukas
- Department of Skin and Venereal Diseases, Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece; (A.Z.); (G.G.); (I.B.)
49
Khader F, Müller-Franzes G, Wang T, Han T, Tayebi Arasteh S, Haarburger C, Stegmaier J, Bressem K, Kuhl C, Nebelung S, Kather JN, Truhn D. Multimodal Deep Learning for Integrating Chest Radiographs and Clinical Parameters: A Case for Transformers. Radiology 2023; 309:e230806. [PMID: 37787671] [DOI: 10.1148/radiol.230806] [Indexed: 10/04/2023]
Abstract
Background Clinicians consider both imaging and nonimaging data when diagnosing diseases; however, current machine learning approaches primarily consider data from a single modality. Purpose To develop a neural network architecture capable of integrating multimodal patient data and compare its performance to models incorporating a single modality for diagnosing up to 25 pathologic conditions. Materials and Methods In this retrospective study, imaging and nonimaging patient data were extracted from the Medical Information Mart for Intensive Care (MIMIC) database and an internal database comprising chest radiographs and clinical parameters of inpatients in the intensive care unit (ICU) (January 2008 to December 2020). The MIMIC and internal data sets were each split into training (n = 33 893, n = 28 809), validation (n = 740, n = 7203), and test (n = 1909, n = 9004) sets. A novel transformer-based neural network architecture was trained to diagnose up to 25 conditions using nonimaging data alone, imaging data alone, or multimodal data. Diagnostic performance was assessed using area under the receiver operating characteristic curve (AUC) analysis. Results The MIMIC and internal data sets included 36 542 patients (mean age, 63 years ± 17 [SD]; 20 567 male patients) and 45 016 patients (mean age, 66 years ± 16; 27 577 male patients), respectively. The multimodal model showed improved diagnostic performance for all pathologic conditions. For the MIMIC data set, the mean AUC was 0.77 (95% CI: 0.77, 0.78) when both chest radiographs and clinical parameters were used, compared with 0.70 (95% CI: 0.69, 0.71; P < .001) for only chest radiographs and 0.72 (95% CI: 0.72, 0.73; P < .001) for only clinical parameters. These findings were confirmed on the internal data set. Conclusion A model trained on imaging and nonimaging data outperformed models trained on only one type of data for diagnosing multiple diseases in patients in an ICU setting.
© RSNA, 2023 Supplemental material is available for this article. See also the editorial by Kitamura and Topol in this issue.
Affiliation(s)
- Firas Khader
- From the Department of Diagnostic and Interventional Radiology (F.K., G.M.F., T.W., S.T.A., C.K., S.N., D.T.) and Department of Medicine III (J.N.K.), University Hospital Aachen, Pauwelsstraße 30, 52074 Aachen, Germany; Physics of Molecular Imaging Systems, Institute of Experimental Molecular Imaging (T.H.), and Institute of Imaging and Computer Vision (J.S.), RWTH Aachen University, Aachen, Germany; Ocumeda, Munich, Germany (C.H.); Department of Diagnostic and Interventional Radiology, Charité-Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany (K.B.); Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany (J.N.K.); Division of Pathology and Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK (J.N.K.); and Department of Medical Oncology, National Center for Tumor Diseases, University Hospital Heidelberg, Heidelberg, Germany (J.N.K.)
- Gustav Müller-Franzes
- Tianci Wang
- Tianyu Han
- Soroosh Tayebi Arasteh
- Christoph Haarburger
- Johannes Stegmaier
- Keno Bressem
- Christiane Kuhl
- Sven Nebelung
- Jakob Nikolas Kather
- Daniel Truhn
- (All authors share the combined affiliation block listed above under Firas Khader.)
50
Lu J, Tong X, Wu H, Liu Y, Ouyang H, Zeng Q. Image classification and auxiliary diagnosis system for hyperpigmented skin diseases based on deep learning. Heliyon 2023; 9:e20186. [PMID: 37809588] [PMCID: PMC10559947] [DOI: 10.1016/j.heliyon.2023.e20186] [Received: 12/10/2022] [Revised: 09/11/2023] [Accepted: 09/13/2023] [Indexed: 10/10/2023]
Abstract
Background and aim Melasma (ML), naevus fusco-caeruleus zygomaticus (NZ), freckles (FC), café-au-lait spots (CS), nevus of Ota (NO), and lentigo simplex (LS) are common skin diseases causing hyperpigmentation. Deep learning algorithms learn the inherent laws and representation levels of sample data and can analyze the internal details of the image and classify it objectively to be used for image diagnosis. However, deep learning algorithms that can assist clinicians in diagnosing skin hyperpigmentation conditions are lacking. Methods The optimal deep-learning image recognition algorithm was explored for the auxiliary diagnosis of hyperpigmented skin disease. Pretrained models, such as VGG-19, GoogLeNet, InceptionV3, ResNet50V2, ResNet101V2, ResNet152V2, InceptionResNetV2, DenseNet201, MobileNet, and NASNetMobile were used to classify images of six common hyperpigmented skin diseases. The best deep learning algorithm for developing an online clinical diagnosis system was selected by using accuracy and area under the curve (AUC) as evaluation indicators. Results In this research, the parameters of the above-mentioned ten deep learning algorithms were 18333510, 5979702, 21815078, 23577094, 42638854, 58343942, 54345958, 18333510, 3235014, and 4276058, respectively, and their training time was 380, 162, 199, 188, 315, 511, 471, 697, 101, and 144 min, respectively. The respective accuracies of the training set were 85.94%, 99.72%, 99.61%, 99.52%, 99.52%, 98.84%, 99.61%, 99.13%, 99.52%, and 99.61%. The accuracy rates of the test set data were 73.28%, 57.40%, 70.04%, 71.48%, 68.23%, 71.11%, 71.84%, 73.28%, 70.39%, and 43.68%, respectively. Finally, the AUC values were 0.93, 0.86, 0.93, 0.91, 0.91, 0.92, 0.93, 0.92, 0.93, and 0.82, respectively. Conclusions The experimental parameters, training time, accuracy, and AUC of the above models suggest that MobileNet provides a good clinical application prospect in the auxiliary diagnosis of hyperpigmented skin diseases.
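For a multi-class problem like this six-disease classifier, a single AUC figure is commonly obtained by macro-averaging one-vs-rest binary AUCs: each class in turn is treated as positive and all others as negative. A minimal pure-Python sketch (the labels and probabilities below are illustrative, not from the study):

```python
def binary_auroc(labels, scores):
    """Rank-based AUROC: fraction of positive/negative pairs ordered correctly
    (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_ovr_auc(labels, probs, n_classes):
    """Macro-averaged one-vs-rest AUC: average the binary AUROC obtained by
    treating each class as the positive class in turn."""
    aucs = []
    for c in range(n_classes):
        binary = [1 if y == c else 0 for y in labels]
        class_scores = [p[c] for p in probs]
        aucs.append(binary_auroc(binary, class_scores))
    return sum(aucs) / n_classes

# Three classes, four samples; the model assigns the highest probability to
# the true class every time, so the macro OvR AUC is 1.0.
labels = [0, 1, 2, 1]
probs = [[0.8, 0.1, 0.1],
         [0.2, 0.7, 0.1],
         [0.1, 0.2, 0.7],
         [0.3, 0.6, 0.1]]
print(macro_ovr_auc(labels, probs, 3))  # -> 1.0
```

Macro averaging weights every class equally, which matters when, as here, some pigmentary conditions are much rarer than others in the dataset.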
Affiliation(s)
- Jianyun Lu
- Department of Dermatology, Third Xiangya Hospital, Central South University, Changsha 410013, PR China
- Xiaoliang Tong
- Department of Dermatology, Third Xiangya Hospital, Central South University, Changsha 410013, PR China
- Hongping Wu
- Vocational Teachers College, Jiangxi Agricultural University, NanChang 330045, PR China
- Yaoxinchuan Liu
- Vocational Teachers College, Jiangxi Agricultural University, NanChang 330045, PR China
- Huidan Ouyang
- Vocational Teachers College, Jiangxi Agricultural University, NanChang 330045, PR China
- Qinghai Zeng
- Department of Dermatology, Third Xiangya Hospital, Central South University, Changsha 410013, PR China