201
Han T, Nebelung S, Pedersoli F, Zimmermann M, Schulze-Hagen M, Ho M, Haarburger C, Kiessling F, Kuhl C, Schulz V, Truhn D. Advancing diagnostic performance and clinical usability of neural networks via adversarial training and dual batch normalization. Nat Commun 2021; 12:4315. PMID: 34262044; PMCID: PMC8280105; DOI: 10.1038/s41467-021-24464-3.
Abstract
Unmasking the decision-making process of machine learning models is essential for implementing diagnostic support systems in clinical practice. Here, we demonstrate that adversarially trained models can significantly enhance the usability of pathology detection compared with their standard counterparts. We let six experienced radiologists rate the interpretability of saliency maps in datasets of X-rays, computed tomography and magnetic resonance imaging scans. Significant improvements are found for our adversarial models, which are further improved by the application of dual batch normalization. Contrary to previous research on adversarially trained models, we find that the accuracy of such models is equal to that of standard models when sufficiently large datasets and dual batch norm training are used. To ensure transferability, we additionally validate our results on an external test set of 22,433 X-rays. These findings elucidate that different paths for adversarial and real images are needed during training to achieve state-of-the-art results with superior clinical interpretability.
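The dual batch normalization idea described in this abstract can be sketched in a few lines: the network keeps two independent sets of normalization parameters and routes each forward pass through the set matching the batch type (clean or adversarial). The NumPy sketch below is purely illustrative and is not the authors' implementation; all names are our own.

```python
import numpy as np

class DualBatchNorm:
    """Toy 1-D batch norm with separate affine parameters per path
    (clean vs. adversarial batches); illustrative sketch only."""

    def __init__(self, num_features, eps=1e-5):
        self.eps = eps
        # one independent scale/shift pair per path
        self.params = {
            path: {"gamma": np.ones(num_features), "beta": np.zeros(num_features)}
            for path in ("clean", "adversarial")
        }

    def __call__(self, x, path="clean"):
        # normalize with the current batch's statistics,
        # then apply the path-specific affine transform
        p = self.params[path]
        x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + self.eps)
        return p["gamma"] * x_hat + p["beta"]

bn = DualBatchNorm(num_features=3)
clean_batch = np.array([[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]])
out = bn(clean_batch, path="clean")  # per-feature mean is ~0 after normalization
```

In a real training loop, clean batches would use the "clean" path while adversarially perturbed batches use the "adversarial" path, so the two distributions never share normalization statistics.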
Affiliation(s)
- Tianyu Han
- Physics of Molecular Imaging Systems, Experimental Molecular Imaging, RWTH Aachen University, Aachen, Germany.
- Sven Nebelung
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Federico Pedersoli
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Markus Zimmermann
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Maximilian Schulze-Hagen
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Fabian Kiessling
- The Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany; Comprehensive Diagnostic Center Aachen (CDCA), University Hospital RWTH Aachen, Aachen, Germany
- Christiane Kuhl
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Volkmar Schulz
- Physics of Molecular Imaging Systems, Experimental Molecular Imaging, RWTH Aachen University, Aachen, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany; Comprehensive Diagnostic Center Aachen (CDCA), University Hospital RWTH Aachen, Aachen, Germany
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
202
203
Bergier H, Duron L, Sordet C, Kawka L, Schlencker A, Chasset F, Arnaud L. Digital health, big data and smart technologies for the care of patients with systemic autoimmune diseases: Where do we stand? Autoimmun Rev 2021; 20:102864. PMID: 34118454; DOI: 10.1016/j.autrev.2021.102864.
Abstract
The past decade has seen tremendous development in digital health, including innovative technologies such as electronic health records, telemedicine, virtual visits, wearable technology and sophisticated analytical tools such as artificial intelligence (AI) and machine learning for the deep integration of big data. In the field of rare connective tissue diseases (rCTDs), these opportunities include increased access to scarce and remote expertise, improved patient monitoring, increased participation and therapeutic adherence, better patient outcomes and patient empowerment. In this review, we discuss opportunities and key barriers to improving the application of digital health technologies in the field of autoimmune diseases. We also describe what a fully digital pathway for rCTD patients could look like. Smart technologies can be used to provide real-world evidence about the natural history of rCTDs, determine real-life drug utilization, provide advanced efficacy and safety data for rare diseases and highlight significant unmet needs. Yet digitalization remains one of the most challenging issues faced by rCTD patients, their physicians and healthcare systems. Digital health technologies offer enormous potential to improve autoimmune rCTD care, but this potential has so far been largely unrealized because of these significant obstacles. Robust assessment of the efficacy, affordability and scalability of AI in the context of digital health is crucial to improving the care of patients with rare autoimmune diseases.
Affiliation(s)
- Hugo Bergier
- Service de rhumatologie, Centre National de Référence des Maladies Auto-immunes Systémiques Rares Est Sud-Ouest (RESO), Hôpitaux Universitaires de Strasbourg, Strasbourg, France
- Loïc Duron
- Department of Neuroradiology, A. Rothschild Foundation Hospital, Paris, France
- Christelle Sordet
- Service de rhumatologie, Centre National de Référence des Maladies Auto-immunes Systémiques Rares Est Sud-Ouest (RESO), Hôpitaux Universitaires de Strasbourg, Strasbourg, France
- Lou Kawka
- Service de rhumatologie, Centre National de Référence des Maladies Auto-immunes Systémiques Rares Est Sud-Ouest (RESO), Hôpitaux Universitaires de Strasbourg, Strasbourg, France
- Aurélien Schlencker
- Service de rhumatologie, Centre National de Référence des Maladies Auto-immunes Systémiques Rares Est Sud-Ouest (RESO), Hôpitaux Universitaires de Strasbourg, Strasbourg, France
- François Chasset
- Sorbonne Université, Faculté de médecine, Service de dermatologie et Allergologie, Hôpital Tenon, Paris, France
- Laurent Arnaud
- Department of Neuroradiology, A. Rothschild Foundation Hospital, Paris, France
204
Park S, Saw SN, Li X, Paknezhad M, Coppola D, Dinish US, Ebrahim Attia AB, Yew YW, Guan Thng ST, Lee HK, Olivo M. Model learning analysis of 3D optoacoustic mesoscopy images for the classification of atopic dermatitis. Biomed Opt Express 2021; 12:3671-3683. PMID: 34221687; PMCID: PMC8221944; DOI: 10.1364/boe.415105.
Abstract
Atopic dermatitis (AD) is an inflammatory skin disease affecting 10% of the population worldwide. Raster-scanning optoacoustic mesoscopy (RSOM) has recently shown promise in dermatological imaging. We conducted a comprehensive analysis using three machine-learning models, random forest (RF), support vector machine (SVM) and convolutional neural network (CNN), for classifying healthy versus AD conditions and for sub-classifying AD severities using RSOM images and clinical information. The CNN model differentiated healthy subjects from AD patients with 97% accuracy. With limited data, RF achieved 65% accuracy in sub-classifying AD patients into mild versus moderate-severe cases. Identification of disease severity is vital in managing AD treatment.
Affiliation(s)
- Sojeong Park
- Bioinformatics Institute, Agency of Science, Technology and Research, ASTAR, 30 Biopolis Street, #07-01 Matrix, 138671, Singapore
- Co-first authors
| | - Shier Nee Saw
- Bioinformatics Institute, Agency of Science, Technology and Research, ASTAR, 30 Biopolis Street, #07-01 Matrix, 138671, Singapore
- Current address: Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, Malaysia
- Co-first authors
| | - Xiuting Li
- Laboratory of Bio-Optical Imaging, Singapore Bioimaging Consortium, ASTAR, 11 Biopolis Way, 138667, Singapore
- Co-first authors
| | - Mahsa Paknezhad
- Bioinformatics Institute, Agency of Science, Technology and Research, ASTAR, 30 Biopolis Street, #07-01 Matrix, 138671, Singapore
| | - Davide Coppola
- Bioinformatics Institute, Agency of Science, Technology and Research, ASTAR, 30 Biopolis Street, #07-01 Matrix, 138671, Singapore
| | - U S Dinish
- Laboratory of Bio-Optical Imaging, Singapore Bioimaging Consortium, ASTAR, 11 Biopolis Way, 138667, Singapore
| | | | - Yik Weng Yew
- National Skin Centre, 1 Mandalay, 308205, Singapore
| | | | - Hwee Kuan Lee
- Bioinformatics Institute, Agency of Science, Technology and Research, ASTAR, 30 Biopolis Street, #07-01 Matrix, 138671, Singapore
- School of Computing, National University of Singapore, 13 Computing Drive, Singapore, 117417, Singapore
- Singapore Eye Research Institute (SERI), 11 Third Hospital Ave, Singapore, 168751, Singapore
- Image and Pervasive Access Laboratory (IPAL), 1 Fusionopolis Way, #21-01 Connexis (South Tower), 138632, Singapore
- Rehabilitation Research Institute of Singapore, 11 Mandalay Road #14-03, Clinical Sciences Building, 308232, Singapore
| | - Malini Olivo
- Laboratory of Bio-Optical Imaging, Singapore Bioimaging Consortium, ASTAR, 11 Biopolis Way, 138667, Singapore
| |
205
AI-based pathology predicts origins for cancers of unknown primary. Nature 2021; 594:106-110. PMID: 33953404; DOI: 10.1038/s41586-021-03512-4.
Abstract
Cancer of unknown primary (CUP) origin is an enigmatic group of diagnoses in which the primary anatomical site of tumour origin cannot be determined [1,2]. This poses a considerable challenge, as modern therapeutics are predominantly specific to the primary tumour [3]. Recent research has focused on using genomics and transcriptomics to identify the origin of a tumour [4-9]. However, genomic testing is not always performed and lacks clinical penetration in low-resource settings. Here, to overcome these challenges, we present a deep-learning-based algorithm, Tumour Origin Assessment via Deep Learning (TOAD), that can provide a differential diagnosis for the origin of the primary tumour using routinely acquired histology slides. We used whole-slide images of tumours with known primary origins to train a model that simultaneously identifies the tumour as primary or metastatic and predicts its site of origin. On our held-out test set of tumours with known primary origins, the model achieved a top-1 accuracy of 0.83 and a top-3 accuracy of 0.96, whereas on our external test set it achieved top-1 and top-3 accuracies of 0.80 and 0.93, respectively. We further curated a dataset of 317 cases of CUP for which a differential diagnosis was assigned. Our model predictions resulted in concordance for 61% of cases and a top-3 agreement of 82%. TOAD can be used as an assistive tool to assign a differential diagnosis to complicated cases of metastatic tumours and CUPs and could be used in conjunction with or in lieu of ancillary tests and extensive diagnostic work-ups to reduce the occurrence of CUP.
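The top-1 and top-3 accuracies reported above simply ask whether the true origin is the model's single best guess, or among its three highest-scored guesses. A minimal, self-contained sketch (the scores and labels are hypothetical, not TOAD outputs):

```python
def top_k_accuracy(scores, labels, k):
    """Fraction of cases whose true label is among the k highest-scored classes.

    scores: list of per-class score lists, one per case
    labels: true class index per case
    """
    hits = 0
    for case_scores, label in zip(scores, labels):
        # indices of the k classes with the highest scores
        top_k = sorted(range(len(case_scores)),
                       key=lambda i: case_scores[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

scores = [[0.1, 0.7, 0.2], [0.5, 0.3, 0.2], [0.2, 0.3, 0.5]]
labels = [1, 1, 0]
top1 = top_k_accuracy(scores, labels, k=1)  # only the first case is a top-1 hit: 1/3
top2 = top_k_accuracy(scores, labels, k=2)  # the second case becomes a hit: 2/3
```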
206
Liu Y, Adamson R, Galan M, Hubbi B, Liu X. Quantitative characterization of human breast tissue based on deep learning segmentation of 3D optical coherence tomography images. Biomed Opt Express 2021; 12:2647-2660. PMID: 34123494; PMCID: PMC8176808; DOI: 10.1364/boe.423224.
Abstract
In this study, we performed dual-modality optical coherence tomography (OCT) characterization (volumetric OCT imaging and quantitative optical coherence elastography) on human breast tissue specimens. We trained and validated a U-Net for automatic image segmentation. Our results demonstrate that U-Net segmentation can be used to assist clinical diagnosis of breast cancer and is a powerful enabling tool for advancing our understanding of the characteristics of breast tissue. Based on the results obtained from U-Net segmentation of 3D OCT images, we demonstrated significant morphological heterogeneity in small breast specimens acquired through diagnostic biopsy. We also found that breast specimens affected by different pathologies had different structural characteristics. By correlating U-Net analysis of structural OCT images with mechanical measurements provided by quantitative optical coherence elastography, we showed that changes in the mechanical properties of breast tissue are not directly due to changes in the amount of dense or porous tissue.
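Segmentation quality of a U-Net is commonly scored with the Dice coefficient, the overlap between the predicted and reference masks. The abstract does not state the authors' evaluation metric, so the sketch below is a generic, hypothetical example:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

pred  = np.array([[1, 1, 0],
                  [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
score = dice_coefficient(pred, truth)  # 2*2 / (3+3) = 0.666...
```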
Affiliation(s)
- Yuwei Liu
- Department of Electrical and Computer Engineering, New Jersey Institute of Technology, University Heights, Newark, New Jersey 07105, USA
- Roberto Adamson
- Department of Electrical and Computer Engineering, New Jersey Institute of Technology, University Heights, Newark, New Jersey 07105, USA
- Mark Galan
- Rutgers University/New Jersey Medical School, Newark, New Jersey 07103, USA
- Basil Hubbi
- Overlook Medical Center, Summit, New Jersey 07901, USA
- Xuan Liu
- Department of Electrical and Computer Engineering, New Jersey Institute of Technology, University Heights, Newark, New Jersey 07105, USA
207
Liu JL, Li SH, Cai YM, Lan DP, Lu YF, Liao W, Ying SC, Zhao ZH. Automated Radiographic Evaluation of Adenoid Hypertrophy Based on VGG-Lite. J Dent Res 2021; 100:1337-1343. PMID: 33913367; DOI: 10.1177/00220345211009474.
Abstract
Adenoid hypertrophy is a pathological hyperplasia of the adenoids, which may cause snoring and apnea, as well as impede breathing during sleep. The lateral cephalogram is commonly used by dentists to screen for adenoid hypertrophy, but it is tedious and time-consuming to measure the ratio of adenoid width to nasopharyngeal width for adenoid assessment. The purpose of this study was to develop a screening tool to automatically evaluate adenoid hypertrophy from lateral cephalograms using deep learning. We proposed the deep learning model VGG-Lite, using the largest data set (1,023 X-ray images) yet described to support the automatic detection of adenoid hypertrophy. We demonstrated that our model was able to automatically evaluate adenoid hypertrophy with a sensitivity of 0.898, a specificity of 0.882, positive predictive value of 0.880, negative predictive value of 0.900, and F1 score of 0.889. The comparison of model-only and expert-only detection performance showed that the fully automatic method (0.07 min) was about 522 times faster than the human expert (36.6 min). Comparison of human experts with or without deep learning assistance showed that model-assisted human experts spent an average of 23.3 min to evaluate adenoid hypertrophy using 100 radiographs, compared to an average of 36.6 min using an entirely manual procedure. We therefore concluded that deep learning could improve the accuracy, speed, and efficiency of evaluating adenoid hypertrophy from lateral cephalograms.
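The manual measurement being automated here is the adenoid-nasopharyngeal (A/N) ratio: adenoid width divided by nasopharyngeal width, compared against a cutoff. A minimal sketch of that screening rule; the 0.71 cutoff is a commonly cited value used here only for illustration, as the abstract does not state the paper's threshold:

```python
def an_ratio(adenoid_width_mm, nasopharynx_width_mm):
    """Adenoid-nasopharyngeal ratio from lateral-cephalogram measurements."""
    return adenoid_width_mm / nasopharynx_width_mm

def screen_adenoid_hypertrophy(adenoid_width_mm, nasopharynx_width_mm,
                               cutoff=0.71):
    # flag hypertrophy when the A/N ratio meets or exceeds the cutoff
    return an_ratio(adenoid_width_mm, nasopharynx_width_mm) >= cutoff

flagged = screen_adenoid_hypertrophy(16.0, 20.0)  # ratio 0.80 -> True
cleared = screen_adenoid_hypertrophy(10.0, 20.0)  # ratio 0.50 -> False
```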
Affiliation(s)
- J L Liu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- S H Li
- National Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Y M Cai
- Department of Dental Technology, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- D P Lan
- Department of Dental Technology, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Y F Lu
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- W Liao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- S C Ying
- College of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Z H Zhao
- State Key Laboratory of Oral Diseases & National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
208
Sekimitsu S, Zebardast N. Glaucoma and Machine Learning: A Call for Increased Diversity in Data. Ophthalmol Glaucoma 2021; 4:339-342. PMID: 33879422; DOI: 10.1016/j.ogla.2021.03.002.
209
Zhu CY, Wang YK, Chen HP, Gao KL, Shu C, Wang JC, Yan LF, Yang YG, Xie FY, Liu J. A Deep Learning Based Framework for Diagnosing Multiple Skin Diseases in a Clinical Environment. Front Med (Lausanne) 2021; 8:626369. PMID: 33937279; PMCID: PMC8085301; DOI: 10.3389/fmed.2021.626369.
Abstract
Background: Numerous studies have attempted to apply artificial intelligence (AI) in the dermatological field, mainly to the classification and segmentation of various dermatoses; however, research under real clinical settings is scarce. Objectives: This study aimed to construct a novel framework based on deep learning, trained on a dataset representing the real clinical environment in a tertiary hospital in China, for better adaptation of AI applications to clinical practice among Asian patients. Methods: Our dataset was composed of 13,603 dermatologist-labeled dermoscopic images covering 14 categories of diseases, namely lichen planus (LP), rosacea (Rosa), viral warts (VW), acne vulgaris (AV), keloid and hypertrophic scar (KAHS), eczema and dermatitis (EAD), dermatofibroma (DF), seborrheic dermatitis (SD), seborrheic keratosis (SK), melanocytic nevus (MN), hemangioma (Hem), psoriasis (Pso), port wine stain (PWS) and basal cell carcinoma (BCC). We applied Google's EfficientNet-b4 with weights pretrained on ImageNet as the backbone of our CNN architecture. The final fully connected classification layer was replaced with 14 output neurons, and seven auxiliary classifiers were added to the intermediate layer groups. The modified model was retrained on our dataset and implemented using PyTorch. We constructed saliency maps to visualize the network's attention area for each input image, and examined the internal image features learned by the proposed framework using t-SNE (t-distributed stochastic neighbor embedding) to explore the visual characteristics of different clinical classes. Results: Test results showed that the proposed framework achieved a high level of classification performance, with an overall accuracy of 0.948, a sensitivity of 0.934 and a specificity of 0.950. We also compared our algorithm with the three most widely used CNN models; our model outperformed them with the highest area under the curve (AUC) of 0.985. We further compared the model with 280 board-certified dermatologists and found a comparable performance level in an 8-class diagnostic task. Conclusions: The proposed framework, retrained on a dataset representing the real clinical environment in our department, could accurately classify most common dermatoses encountered during outpatient practice, including infectious and inflammatory dermatoses and benign and malignant cutaneous tumors.
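The retraining step described above, replacing the final fully connected layer with 14 output neurons while keeping the pretrained backbone, amounts to fitting a fresh softmax head on top of frozen features. In this hedged sketch, random vectors stand in for the EfficientNet-b4 embeddings; everything here (names, sizes, learning rate) is a hypothetical illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for frozen backbone features: in the real setup these would come
# from the pretrained EfficientNet-b4; here they are random (hypothetical).
n_samples, n_features, n_classes = 60, 8, 14
features = rng.normal(size=(n_samples, n_features))
labels = rng.integers(0, n_classes, size=n_samples)

# The "new head": one fully connected layer with 14 output neurons.
W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

lr = 0.5
loss_start = cross_entropy(softmax(features @ W + b), labels)
for _ in range(200):  # train only the head; the backbone stays frozen
    probs = softmax(features @ W + b)
    grad = probs.copy()
    grad[np.arange(n_samples), labels] -= 1.0
    grad /= n_samples
    W -= lr * features.T @ grad
    b -= lr * grad.sum(axis=0)
loss_end = cross_entropy(softmax(features @ W + b), labels)
# loss_end < loss_start: the new 14-way head fits the training data
```

With zero-initialized weights the starting loss is exactly ln(14), the uniform-guess baseline over 14 classes; gradient descent on the new head then reduces it.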
Affiliation(s)
- Chen-Yu Zhu
- Department of Dermatology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yu-Kun Wang
- Department of Dermatology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Chang Shu
- Department of Dermatology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jun-Cheng Wang
- Department of Dermatology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yi-Guang Yang
- Image Processing Center, School of Astronautics, Beihang University, Beijing, China
- Feng-Ying Xie
- Image Processing Center, School of Astronautics, Beihang University, Beijing, China
- Jie Liu
- Department of Dermatology, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
210
Using deep learning to predict microvascular invasion in hepatocellular carcinoma based on dynamic contrast-enhanced MRI combined with clinical parameters. J Cancer Res Clin Oncol 2021; 147:3757-3767. PMID: 33839938; DOI: 10.1007/s00432-021-03617-3.
Abstract
PURPOSE Microvascular invasion (MVI) is a critical determinant of early recurrence and poor prognosis in patients with hepatocellular carcinoma (HCC). Prediction of MVI status is clinically significant for the choice of treatment strategy and the assessment of patient prognosis. A deep learning (DL) model was developed to predict MVI status and grade in HCC patients based on preoperative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and clinical parameters. METHODS HCC patients with pathologically confirmed MVI status from January to December 2016 were enrolled, and their preoperative DCE-MRI scans were collected. Patients were randomly divided into training and testing cohorts. A DL model with eight convolutional neural network (CNN) branches, one per MRI sequence, was built to predict the presence of MVI, and was further combined with clinical parameters for better prediction. RESULTS Among 601 HCC patients, 376 were pathologically MVI absent and 225 were MVI present. For predicting the presence of MVI, the DL model based only on images achieved an area under the curve (AUC) of 0.915 in the testing cohort, compared with an AUC of 0.731 for the radiomics model. The DL model combined with clinical parameters (DLC) yielded the best predictive performance, with an AUC of 0.931. For MVI-grade stratification, the DLC models achieved an overall accuracy of 0.793. Survival analysis demonstrated that DLC-predicted MVI status was associated with poor overall survival (OS) and recurrence-free survival (RFS). Further investigation showed that hepatectomy with a wide resection margin contributed to better OS and RFS in DLC-predicted MVI-present patients. CONCLUSION The proposed DLC model provides a non-invasive approach to evaluating MVI before surgery, which can help surgeons choose surgical strategies and assess patient prognosis.
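The AUC values above have a useful rank interpretation (the Mann-Whitney formulation): AUC is the probability that a randomly chosen MVI-present case scores higher than a randomly chosen MVI-absent case, counting ties as half. A small self-contained sketch with made-up scores:

```python
def auc_from_scores(pos_scores, neg_scores):
    """Area under the ROC curve via pairwise comparison:
    P(score_pos > score_neg), counting ties as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

mvi_present = [0.9, 0.8, 0.6]  # hypothetical model outputs
mvi_absent  = [0.7, 0.4, 0.2]
auc = auc_from_scores(mvi_present, mvi_absent)  # 8 of 9 pairs ranked correctly
```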
211
Jain A, Way D, Gupta V, Gao Y, de Oliveira Marinho G, Hartford J, Sayres R, Kanada K, Eng C, Nagpal K, DeSalvo KB, Corrado GS, Peng L, Webster DR, Dunn RC, Coz D, Huang SJ, Liu Y, Bui P, Liu Y. Development and Assessment of an Artificial Intelligence-Based Tool for Skin Condition Diagnosis by Primary Care Physicians and Nurse Practitioners in Teledermatology Practices. JAMA Netw Open 2021; 4:e217249. PMID: 33909055; PMCID: PMC8082316; DOI: 10.1001/jamanetworkopen.2021.7249.
Abstract
Importance Most dermatologic cases are initially evaluated by nondermatologists such as primary care physicians (PCPs) or nurse practitioners (NPs). Objective To evaluate an artificial intelligence (AI)-based tool that assists with the diagnosis of dermatologic conditions. Design, Setting, and Participants This multiple-reader, multiple-case diagnostic study developed an AI-based tool and evaluated its utility. Primary care physicians and NPs retrospectively reviewed an enriched set of cases representing 120 different skin conditions. Randomization was used to ensure each clinician reviewed each case either with or without AI assistance; each clinician alternated between batches of 50 cases in each modality. The reviews occurred from February 21 to April 28, 2020. Data were analyzed from May 26, 2020, to January 27, 2021. Exposures An AI-based assistive tool for interpreting clinical images and associated medical history. Main Outcomes and Measures The primary analysis evaluated agreement with reference diagnoses provided by a panel of 3 dermatologists for PCPs and NPs. Secondary analyses included diagnostic accuracy for biopsy-confirmed cases, biopsy and referral rates, review time, and diagnostic confidence. Results Forty board-certified clinicians, including 20 PCPs (14 women [70.0%]; mean experience, 11.3 [range, 2-32] years) and 20 NPs (18 women [90.0%]; mean experience, 13.1 [range, 2-34] years), reviewed 1048 retrospective cases (672 female [64.2%]; median age, 43 [interquartile range, 30-56] years; 41 920 total reviews) from a teledermatology practice serving 11 sites and provided 0 to 5 differential diagnoses per case (mean [SD], 1.6 [0.7]). The PCPs were located across 12 states, and the NPs practiced in primary care without physician supervision across 9 states. Artificial intelligence assistance was significantly associated with higher agreement with reference diagnoses. For PCPs, the increase in diagnostic agreement was 10% (95% CI, 8%-11%; P < .001), from 48% to 58%; for NPs, the increase was 12% (95% CI, 10%-14%; P < .001), from 46% to 58%. In secondary analyses, agreement with biopsy-obtained diagnosis categories of malignant, precancerous, or benign increased by 3% (95% CI, -1% to 7%) for PCPs and by 8% (95% CI, 3%-13%) for NPs. Rates of desire for biopsies decreased by 1% (95% CI, 0-3%) for PCPs and 2% (95% CI, 1%-3%) for NPs; the rate of desire for referrals decreased by 3% (95% CI, 1%-4%) for PCPs and NPs. Diagnostic agreement on cases not indicated for a dermatologist referral increased by 10% (95% CI, 8%-12%) for PCPs and 12% (95% CI, 10%-14%) for NPs, and median review time increased slightly, by 5 (95% CI, 0-8) seconds for PCPs and 7 (95% CI, 5-10) seconds for NPs per case. Conclusions and Relevance Artificial intelligence assistance was associated with improved diagnoses by PCPs and NPs for 1 in every 8 to 10 cases, indicating potential for improving the quality of dermatologic care.
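The "increase in diagnostic agreement ... (95% CI ...)" figures above are differences between two agreement proportions with a confidence interval. The study itself used a paired design; the sketch below instead shows the simplest possible version, a Wald interval for the difference of two independent proportions with made-up counts, purely to illustrate how such an interval is formed:

```python
from math import sqrt

def diff_proportion_ci(hits_a, n_a, hits_b, n_b, z=1.96):
    """Wald 95% CI for the difference of two independent proportions.
    (Illustrative only; the study's analysis was paired.)"""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    diff = p_b - p_a
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, (diff - z * se, diff + z * se)

# hypothetical counts: 480/1000 agreement unassisted, 580/1000 AI-assisted
diff, (lo, hi) = diff_proportion_ci(480, 1000, 580, 1000)
# diff = 0.10, with a CI of roughly (0.056, 0.144)
```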
Affiliation(s)
- David Way
- Google Health, Palo Alto, California
- Yi Gao
- Google Health, Palo Alto, California
- Clara Eng
- Google Health, Palo Alto, California
- Lily Peng
- Google Health, Palo Alto, California
- David Coz
- Google Health, Palo Alto, California
- Susan J. Huang
- Google Health via Advanced Clinical, Deerfield, Illinois
- Yun Liu
- Google Health, Palo Alto, California
- Peggy Bui
- Google Health, Palo Alto, California
- Division of Hospital Medicine, University of California, San Francisco
- Yuan Liu
- Google Health, Palo Alto, California
212
Alzubaidi L, Al-Amidie M, Al-Asadi A, Humaidi AJ, Al-Shamma O, Fadhel MA, Zhang J, Santamaría J, Duan Y. Novel Transfer Learning Approach for Medical Imaging with Limited Labeled Data. Cancers (Basel) 2021; 13:1590. PMID: 33808207; PMCID: PMC8036379; DOI: 10.3390/cancers13071590.
Abstract
Deep learning requires a large amount of data to perform well. However, the field of medical image analysis suffers from a lack of sufficient data for training deep learning models. Moreover, medical images require manual labeling, usually provided by human annotators from various backgrounds, and the annotation process is time-consuming, expensive and prone to errors. Transfer learning was introduced to reduce the need for annotation by taking a deep learning model with knowledge from a previous task and fine-tuning it on a relatively small dataset for the current task. Most methods of medical image classification employ transfer learning from models pretrained on natural-image datasets such as ImageNet, which has been shown to be ineffective due to the mismatch in learned features between natural and medical images; it also results in the use of unnecessarily elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring the knowledge to train it on a small amount of labeled medical images. Additionally, we propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios: skin and breast cancer classification. The reported results show empirically that the proposed approach can significantly improve performance in both scenarios. For skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach; for breast cancer, it achieved accuracies of 85.29% and 97.51%, respectively. We conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available and labeled data is limited, and that it can be used to improve performance on medical imaging tasks in the same domain. To demonstrate this, we used the pretrained skin cancer model to train on foot-skin images and classify them into two classes: normal or abnormal (diabetic foot ulcer, DFU). This task achieved an F1-score of 86.0% when trained from scratch, 96.25% with transfer learning, and 99.25% with double-transfer learning.
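The F1-scores quoted above are the harmonic mean of precision and recall. A small sketch for the binary normal-vs-DFU case (the confusion counts are made up, not the paper's):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# hypothetical confusion counts for normal-vs-DFU classification
score = f1_score(tp=96, fp=4, fn=4)  # precision = recall = 0.96 -> F1 = 0.96
```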
Collapse
Affiliation(s)
- Laith Alzubaidi
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia;
- AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq;
| | - Muthana Al-Amidie
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA; (M.A.-A.); (A.A.-A.); (Y.D.)
| | - Ahmed Al-Asadi
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA; (M.A.-A.); (A.A.-A.); (Y.D.)
| | - Amjad J. Humaidi
- Control and Systems Engineering Department, University of Technology, Baghdad 10001, Iraq;
| | - Omran Al-Shamma
- AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq;
| | - Mohammed A. Fadhel
- College of Computer Science and Information Technology, University of Sumer, Thi Qar 64005, Iraq;
| | - Jinglan Zhang
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia;
| | - J. Santamaría
- Department of Computer Science, University of Jaén, 23071 Jaén, Spain;
| | - Ye Duan
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA; (M.A.-A.); (A.A.-A.); (Y.D.)
| |
Collapse
|
213
|
Taylor M, Liu X, Denniston A, Esteva A, Ko J, Daneshjou R, Chan AW. Raising the Bar for Randomized Trials Involving Artificial Intelligence: The SPIRIT-Artificial Intelligence and CONSORT-Artificial Intelligence Guidelines. J Invest Dermatol 2021; 141:2109-2111. [PMID: 33766511 DOI: 10.1016/j.jid.2021.02.744] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2020] [Revised: 01/25/2021] [Accepted: 02/05/2021] [Indexed: 01/16/2023]
Abstract
Artificial intelligence (AI)-based applications have the potential to improve the quality and efficiency of patient care in dermatology. Unique challenges in the development and validation of these technologies may limit their generalizability and real-world applicability. Before the widespread adoption of AI interventions, randomized trials should be conducted to evaluate their efficacy, safety, and cost effectiveness in clinical settings. The recent Standard Protocol Items: Recommendations for Interventional Trials-AI extension and Consolidated Standards of Reporting Trials-AI extension guidelines provide recommendations for reporting the methods and results of trials involving AI interventions. High-quality trials will provide gold standard evidence to support the adoption of AI for the benefit of patient care.
Collapse
Affiliation(s)
- Matthew Taylor
- College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom; Health Data Research UK, London, United Kingdom
| | - Xiaoxuan Liu
- Health Data Research UK, London, United Kingdom; Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom; Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, United Kingdom
| | - Alastair Denniston
- Health Data Research UK, London, United Kingdom; Academic Unit of Ophthalmology, Institute of Inflammation and Ageing, College of Medical and Dental Sciences, University of Birmingham, Birmingham, United Kingdom; Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom; Birmingham Health Partners Centre for Regulatory Science and Innovation, University of Birmingham, Birmingham, United Kingdom; NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, United Kingdom
| | - Andre Esteva
- Salesforce AI Research, Palo Alto, California, USA
| | - Justin Ko
- Department of Dermatology, Stanford University School of Medicine, Stanford, California, USA
| | - Roxana Daneshjou
- Department of Dermatology, Stanford University School of Medicine, Stanford, California, USA; Department of Biomedical Data Sciences, Stanford University School of Medicine, Palo Alto, California, USA
| | - An-Wen Chan
- Division of Dermatology, Women's College Research Institute, Women's College Hospital, University of Toronto, Toronto, Ontario, Canada.
| | | |
Collapse
|
214
|
Qu Y, Wang P, Liu B, Song C, Wang D, Yang H, Zhang Z, Chen P, Kang X, Du K, Yao H, Zhou B, Han T, Zuo N, Han Y, Lu J, Yu C, Zhang X, Jiang T, Zhou Y, Liu Y. AI4AD: Artificial intelligence analysis for Alzheimer's disease classification based on a multisite DTI database. BRAIN DISORDERS 2021. [DOI: 10.1016/j.dscb.2021.100005] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022] Open
|
215
|
Wu H, Yin H, Chen H, Sun M, Liu X, Yu Y, Tang Y, Long H, Zhang B, Zhang J, Zhou Y, Li Y, Zhang G, Zhang P, Zhan Y, Liao J, Luo S, Xiao R, Su Y, Zhao J, Wang F, Zhang J, Zhang W, Zhang J, Hu K, Yuan L, Deng D, Liang Y, Yang B, Lu Q. A deep learning-based smartphone platform for cutaneous lupus erythematosus classification assistance: Simplifying the diagnosis of complicated diseases. J Am Acad Dermatol 2021; 85:792-793. [PMID: 33610594 DOI: 10.1016/j.jaad.2021.02.043] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Revised: 01/19/2021] [Accepted: 02/14/2021] [Indexed: 01/16/2023]
Affiliation(s)
- Haijing Wu
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China.
| | - Heng Yin
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China
| | | | | | | | - Yizhou Yu
- Deepwise AI Lab, Beijing, China; Department of Computer Science, The University of Hong Kong, Hong Kong.
| | - Yang Tang
- Guanlan Networks (Hangzhou) Co, Ltd, Hangzhou, Zhejiang, China
| | - Hai Long
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China
| | - Bo Zhang
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China
| | - Jing Zhang
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China
| | - Ying Zhou
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China
| | - Yaping Li
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China
| | - Guiyuing Zhang
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China
| | - Peng Zhang
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China
| | - Yi Zhan
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China
| | - Jieyue Liao
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China
| | - Shuaihantian Luo
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China
| | - Rong Xiao
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China
| | - Yuwen Su
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China
| | - Juanjuan Zhao
- Guanlan Networks (Hangzhou) Co, Ltd, Hangzhou, Zhejiang, China
| | - Fei Wang
- Guanlan Networks (Hangzhou) Co, Ltd, Hangzhou, Zhejiang, China
| | - Jing Zhang
- Guanlan Networks (Hangzhou) Co, Ltd, Hangzhou, Zhejiang, China
| | - Wei Zhang
- Guanlan Networks (Hangzhou) Co, Ltd, Hangzhou, Zhejiang, China
| | - Jin Zhang
- Guanlan Networks (Hangzhou) Co, Ltd, Hangzhou, Zhejiang, China
| | - Kai Hu
- Key Laboratory of Intelligent Computing and Information Processing of Ministry of Education, Xiangtan University, Xiangtan, Hunan, China
| | - Limei Yuan
- Second Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Danqi Deng
- Second Affiliated Hospital of Kunming Medical University, Kunming, China
| | - Yunsheng Liang
- Dermatology Hospital of Southern Medical University, Guangzhou, China
| | - Bin Yang
- Dermatology Hospital of Southern Medical University, Guangzhou, China
| | - Qianjin Lu
- Department of Dermatology, Second Xiangya Hospital, Central South University, Hunan Key Laboratory of Medical Epigenomics, Changsha, Hunan, China; Institute of Dermatology, Chinese Academy of Medical Sciences and Peking Union Medical College, Nanjing, China.
| |
Collapse
|
216
|
Daneshjou R, He B, Ouyang D, Zou JY. How to evaluate deep learning for cancer diagnostics - factors and recommendations. Biochim Biophys Acta Rev Cancer 2021; 1875:188515. [PMID: 33513392 DOI: 10.1016/j.bbcan.2021.188515] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2020] [Revised: 11/08/2020] [Accepted: 01/18/2021] [Indexed: 12/23/2022]
Abstract
The large volume of data used in cancer diagnosis presents a unique opportunity for deep learning algorithms, which improve in predictive performance with increasing data. When applying deep learning to cancer diagnosis, the goal is often to learn how to classify an input sample (such as images or biomarkers) into predefined categories (such as benign or cancerous). In this article, we examine examples of how deep learning algorithms have been implemented to make predictions related to cancer diagnosis using clinical, radiological, and pathological image data. We present a systematic approach for evaluating the development and application of clinical deep learning algorithms. Based on these examples and the current state of deep learning in medicine, we discuss the future possibilities in this space and outline a roadmap for implementations of deep learning in cancer diagnosis.
Collapse
Affiliation(s)
- Roxana Daneshjou
- Department of Dermatology, Stanford University School of Medicine, Redwood City, CA, USA; Department of Biomedical Data Science, Stanford University, Stanford, CA, USA.
| | - Bryan He
- Department of Computer Science, Stanford University, Stanford, CA, USA
| | - David Ouyang
- Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA
| | - James Y Zou
- Department of Computer Science, Stanford University, Stanford, CA, USA; Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA, USA.
| |
Collapse
|
217
|
Wang Z, Zhang L, Zhao M, Wang Y, Bai H, Wang Y, Rui C, Fan C, Li J, Li N, Liu X, Wang Z, Si Y, Feng A, Li M, Zhang Q, Yang Z, Wang M, Wu W, Cao Y, Qi L, Zeng X, Geng L, An R, Li P, Liu Z, Qiao Q, Zhu W, Mo W, Liao Q, Xu W. Deep Neural Networks Offer Morphologic Classification and Diagnosis of Bacterial Vaginosis. J Clin Microbiol 2021; 59:e02236-20. [PMID: 33148709 PMCID: PMC8111127 DOI: 10.1128/jcm.02236-20] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Accepted: 11/01/2020] [Indexed: 11/20/2022] Open
Abstract
Bacterial vaginosis (BV) is caused by the excessive and imbalanced growth of bacteria in the vagina, affecting 30 to 50% of women. Gram staining followed by Nugent scoring of bacterial morphotypes under the microscope is considered the gold standard for BV diagnosis; however, this method is labor-intensive and time-consuming, and results vary from examiner to examiner. We developed and optimized a convolutional neural network (CNN) model and evaluated its ability to automatically identify and classify three categories of Nugent scores from microscope images. The CNN model was first established with a panel of microscopic images whose Nugent scores had been determined by experts. The model was trained by minimizing the cross-entropy loss function and optimized using a momentum optimizer. Separate test sets of images collected from three hospitals were then evaluated by the CNN model. The CNN model consisted of 25 convolutional layers, 2 pooling layers, and a fully connected layer. When altered vaginal flora and BV were considered the positive samples, the model achieved 82.4% sensitivity and 96.6% specificity on the 5,815 validation images, better than the rates achieved by top-level technologists and obstetricians in China. The model also generalized well, reaching 75.1% accuracy across the three Nugent-score categories on an independent test set of 1,082 images, 6.6% higher than the average of three technologists who hold bachelor's degrees in medicine and are qualified to make diagnostic decisions. When the three technologists each ran one specimen in triplicate, the precision across the three Nugent-score categories was 54.0%. One hundred three samples diagnosed by two technologists on different days showed a repeatability of 90.3%. The CNN model outperformed human health care practitioners in terms of accuracy and stability for three-category Nugent score diagnosis.
The deep learning model may offer translational applications in automating diagnosis of bacterial vaginosis with proper supporting hardware.
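The sensitivity and specificity figures above treat altered vaginal flora and BV as positive. A minimal sketch of that binarization, assuming the conventional Nugent bands (0-3 normal, 4-6 altered, 7-10 BV) rather than the authors' exact mapping:

```python
# Illustrative sketch (not the authors' code): collapse three Nugent
# categories into a binary label -- altered flora and BV are positive,
# normal flora is negative -- then compute sensitivity and specificity.

# Conventional Nugent bands, assumed here: 0-3 normal, 4-6 altered, 7-10 BV.
def nugent_category(score):
    if score <= 3:
        return "normal"
    if score <= 6:
        return "altered"
    return "bv"

def sensitivity_specificity(true_scores, pred_categories):
    """Compare true Nugent scores against model-predicted categories."""
    tp = fn = tn = fp = 0
    for score, pred in zip(true_scores, pred_categories):
        positive = nugent_category(score) in ("altered", "bv")
        pred_positive = pred in ("altered", "bv")
        if positive and pred_positive:
            tp += 1
        elif positive:
            fn += 1
        elif pred_positive:
            fp += 1
        else:
            tn += 1
    return tp / (tp + fn), tn / (tn + fp)
```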
Collapse
Affiliation(s)
- Zhongxiao Wang
- Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
| | - Lei Zhang
- Department of Obstetrics and Gynecology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
| | - Min Zhao
- Peking University First Hospital, Beijing, China
| | - Ying Wang
- Department of Obstetrics and Gynecology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
| | - Huihui Bai
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University Beijing Maternal and Child Health Care Hospital, Beijing, China
| | - Yufeng Wang
- Department of Obstetrics and Gynecology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
| | - Can Rui
- Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Nanjing, China
| | - Chong Fan
- Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Nanjing, China
| | - Jiao Li
- The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
| | - Na Li
- The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
| | - Xinhuan Liu
- Peking University Third Hospital, Beijing, China
| | - Zitao Wang
- The Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
| | - Yanyan Si
- Binzhou Medical University Hospital, Binzhou, China
| | - Andrea Feng
- Beijing HarMoniCare Women's and Children's Hospital, Beijing, China
| | - Mingxuan Li
- Suzhou Turing Microbial Technologies Co., Ltd., Suzhou, China
- Beijing Turing Microbial Technologies Co., Ltd., Beijing, China
| | - Qiongqiong Zhang
- Department of Obstetrics and Gynecology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- School of Clinical Medicine, Tsinghua University, Beijing, China
| | - Zhe Yang
- Department of Physics, Tsinghua University, Beijing, China
| | - Mengdi Wang
- Department of Operations Research and Financial Engineering, Princeton University, Princeton, New Jersey, USA
| | - Wei Wu
- Suzhou Turing Microbial Technologies Co., Ltd., Suzhou, China
- Beijing Turing Microbial Technologies Co., Ltd., Beijing, China
| | - Yang Cao
- Suzhou Turing Microbial Technologies Co., Ltd., Suzhou, China
- Beijing Turing Microbial Technologies Co., Ltd., Beijing, China
| | - Lin Qi
- The Second Affiliated Hospital of Soochow University, Suzhou, China
| | - Xin Zeng
- Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Nanjing, China
| | - Li Geng
- Peking University Third Hospital, Beijing, China
| | - Ruifang An
- The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
| | - Ping Li
- Women's Hospital of Nanjing Medical University, Nanjing Maternity and Child Health Care Hospital, Nanjing, China
| | - Zhaohui Liu
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University Beijing Maternal and Child Health Care Hospital, Beijing, China
| | - Qiao Qiao
- The Affiliated Hospital of Inner Mongolia Medical University, Hohhot, China
| | - Weipei Zhu
- The Second Affiliated Hospital of Soochow University, Suzhou, China
| | - Weike Mo
- Suzhou Turing Microbial Technologies Co., Ltd., Suzhou, China
- Beijing Turing Microbial Technologies Co., Ltd., Beijing, China
- Shanghai East Hospital, School of Life Sciences and Technology, Tongji University, Shanghai, China
| | - Qinping Liao
- Department of Obstetrics and Gynecology, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- School of Clinical Medicine, Tsinghua University, Beijing, China
| | - Wei Xu
- Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
| |
Collapse
|
218
|
Stress testing reveals gaps in clinic readiness of image-based diagnostic artificial intelligence models. NPJ Digit Med 2021; 4:10. [PMID: 33479460 PMCID: PMC7820258 DOI: 10.1038/s41746-020-00380-6] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Accepted: 12/09/2020] [Indexed: 02/03/2023] Open
Abstract
Artificial intelligence models match or exceed dermatologists in melanoma image classification. Less is known about their robustness against real-world variations, and clinicians may incorrectly assume that a model with an acceptable area under the receiver operating characteristic curve or related performance metric is ready for clinical use. Here, we systematically assessed the performance of dermatologist-level convolutional neural networks (CNNs) on real-world, non-curated images by applying computational "stress tests". Our goal was to create a proxy environment in which to comprehensively test the generalizability of off-the-shelf CNNs developed without training or evaluation protocols specific to individual clinics. We found inconsistent predictions on images captured repeatedly in the same setting or subjected to simple transformations (e.g., rotation). Such transformations resulted in false-positive or false-negative predictions for 6.5-22% of skin lesions across test datasets. Our findings indicate that models meeting conventionally reported metrics need further validation with computational stress tests to assess clinic readiness.
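The rotation stress test described above can be sketched as a generic consistency harness; the predictor and transform below are hypothetical stand-ins, not the study's model or pipeline:

```python
# Hypothetical stress-test harness in the spirit of the abstract above:
# apply a simple transformation (e.g., rotation) to each input and count
# how often the model's predicted label flips.

def flip_rate(predict, transform, inputs):
    """Fraction of inputs whose predicted label changes under `transform`."""
    flips = 0
    for x in inputs:
        if predict(x) != predict(transform(x)):
            flips += 1
    return flips / len(inputs)
```

With a real classifier and an image-rotation function substituted for the stand-ins, the returned fraction corresponds to the prediction-flip rates reported above.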
Collapse
|
219
|
Esteva A, Chou K, Yeung S, Naik N, Madani A, Mottaghi A, Liu Y, Topol E, Dean J, Socher R. Deep learning-enabled medical computer vision. NPJ Digit Med 2021; 4:5. [PMID: 33420381 PMCID: PMC7794558 DOI: 10.1038/s41746-020-00376-2] [Citation(s) in RCA: 232] [Impact Index Per Article: 77.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Accepted: 12/01/2020] [Indexed: 02/07/2023] Open
Abstract
A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields, including medicine, to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques, powered by deep learning, for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit, including cardiology, pathology, dermatology, and ophthalmology, and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles that must be overcome for real-world clinical deployment of these technologies.
Collapse
Affiliation(s)
| | | | | | - Nikhil Naik
- Salesforce AI Research, San Francisco, CA, USA
| | - Ali Madani
- Salesforce AI Research, San Francisco, CA, USA
| | | | - Yun Liu
- Google Research, Mountain View, CA, USA
| | - Eric Topol
- Scripps Research Translational Institute, La Jolla, CA, USA
| | - Jeff Dean
- Google Research, Mountain View, CA, USA
| | | |
Collapse
|
220
|
Chandrasekaran AC, Fu Z, Kraniski R, Wilson FP, Teaw S, Cheng M, Wang A, Ren S, Omar IM, Hinchcliff ME. Computer vision applied to dual-energy computed tomography images for precise calcinosis cutis quantification in patients with systemic sclerosis. Arthritis Res Ther 2021; 23:6. [PMID: 33407814 PMCID: PMC7788847 DOI: 10.1186/s13075-020-02392-9] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2020] [Accepted: 12/09/2020] [Indexed: 01/12/2023] Open
Abstract
Background Although treatments have been proposed for calcinosis cutis (CC) in patients with systemic sclerosis (SSc), a standardized and validated method for CC burden quantification is necessary to enable valid clinical trials. We tested the hypothesis that computer vision applied to dual-energy computed tomography (DECT) finger images is a useful approach for precise and accurate CC quantification in SSc patients. Methods De-identified 2-dimensional (2D) DECT images from SSc patients with clinically evident lesser finger CC lesions were obtained. An expert musculoskeletal radiologist confirmed accurate manual segmentation (subtraction) of the phalanges for each image as a gold standard, and a U-Net Convolutional Neural Network (CNN) computer vision model for segmentation of healthy phalanges was developed and tested. A validation study was performed in an independent dataset whereby two independent radiologists manually measured the longest length and perpendicular short axis of each lesion and then calculated an estimated area by assuming the lesion was elliptical, using the formula (long axis/2) × (short axis/2) × π, and a computer scientist used a region growing technique to calculate the area of CC lesions. Spearman's correlation coefficient, Lin's concordance correlation coefficient with 95% confidence intervals (CI), and a Bland-Altman plot (Stata V 15.1, College Station, TX) were used to test for equivalence between the radiologists' and the CNN algorithm-generated area estimates. Results Forty de-identified 2D DECT images from SSc patients with clinically evident finger CC lesions were obtained and divided into training (N = 30, with image rotation × 3 to expand the set to N = 120) and test sets (N = 10). In the training set, five hundred epochs (iterations) were required to train the CNN algorithm to segment phalanges from adjacent CC, and accurate segmentation was evaluated using the ten held-out images.
To test model performance, CC lesional area estimates calculated by two independent radiologists and a computer scientist were compared (radiologist 1 vs. radiologist 2, and radiologist 1 vs. the computer vision approach) using an independent test dataset of 31 images (8 index finger and 23 other fingers). For the radiologist vs. radiologist and radiologist vs. computer vision comparisons, Spearman's rho was 0.91 and 0.94, respectively (both p < 0.0001); Lin's concordance correlation coefficient was 0.91 (95% CI 0.85–0.98, p < 0.001) and 0.95 (95% CI 0.91–0.99, p < 0.001); and Bland-Altman plots demonstrated mean differences between area estimates of − 0.5 mm2 (95% limits of agreement − 10.0–9.0 mm2) and 1.7 mm2 (95% limits of agreement − 6.0–9.5 mm2), respectively. Conclusions We demonstrate that CNN quantification has a high degree of correlation with expert radiologist measurement of finger CC area. Future work will include segmentation of 3-dimensional (3D) images for volumetric and density quantification, as well as validation in larger, independent cohorts.
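The elliptical area estimate and the Bland-Altman analysis described above reduce to a few lines; this sketch is illustrative only, not the study's Stata code:

```python
import math

# Illustrative sketch (not the study's code): the elliptical lesion-area
# estimate used by the radiologists above, and Bland-Altman bias with
# 95% limits of agreement for comparing two sets of area measurements.

def ellipse_area(long_axis, short_axis):
    """Area estimate assuming an elliptical lesion: (a/2) * (b/2) * pi."""
    return (long_axis / 2) * (short_axis / 2) * math.pi

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement (mean +/- 1.96 SD)."""
    diffs = [x - y for x, y in zip(a, b)]
    mean = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd
```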
Collapse
Affiliation(s)
- Anita C Chandrasekaran
- Yale School of Medicine, Section of Rheumatology, Allergy & Immunology, The Anlyan Center, 300 Cedar Street, PO BOX 208031, New Haven, CT, 06520, USA
| | - Zhicheng Fu
- Department of Computer Science, Illinois Institute of Technology, 10 W 31st St, Chicago, IL, 60616, USA.,Motorola Mobility LLC, 222 W Merchandise Mart Plaza #1800, Chicago, IL, 60654, USA
| | - Reid Kraniski
- Department of Radiology, Yale School of Medicine, 330 Cedar St, New Haven, CT, 06520, USA
| | - F Perry Wilson
- Clinical and Translational Research Accelerator, Department of Medicine, Yale School of Medicine, Temple Medical Center, 60 Temple Street Suite 6C, New Haven, CT, 06510, USA
| | - Shannon Teaw
- Yale School of Medicine, Section of Rheumatology, Allergy & Immunology, The Anlyan Center, 300 Cedar Street, PO BOX 208031, New Haven, CT, 06520, USA
| | - Michelle Cheng
- Yale School of Medicine, Section of Rheumatology, Allergy & Immunology, The Anlyan Center, 300 Cedar Street, PO BOX 208031, New Haven, CT, 06520, USA
| | - Annie Wang
- Department of Radiology, Yale School of Medicine, 330 Cedar St, New Haven, CT, 06520, USA
| | - Shangping Ren
- Department of Computer Science, Illinois Institute of Technology, 10 W 31st St, Chicago, IL, 60616, USA.,Department of Computer Science, San Diego State University, 5500 Campanile Drive, San Diego, CA, 92182, USA
| | - Imran M Omar
- Department of Radiology, Northwestern University Feinberg School of Medicine, 676 N St Clair St, Chicago, IL, 60611, USA
| | - Monique E Hinchcliff
- Yale School of Medicine, Section of Rheumatology, Allergy & Immunology, The Anlyan Center, 300 Cedar Street, PO BOX 208031, New Haven, CT, 06520, USA. .,Clinical and Translational Research Accelerator, Department of Medicine, Yale School of Medicine, Temple Medical Center, 60 Temple Street Suite 6C, New Haven, CT, 06510, USA. .,Department of Medicine, Division of Rheumatology, Northwestern University Feinberg School of Medicine, 240 E. Huron Street, Suite M-300, Chicago, IL, 60611, USA.
| |
Collapse
|
221
|
Chen L, Liu Y, Xu H, Ma L, Wang Y, Wang F, Zhu J, Hu X, Yi K, Yang Y, Shen H, Zhou F, Gao X, Cheng Y, Bai L, Duan Y, Wang F, Zhu Y. Touchable cell biophysics property recognition platforms enable multifunctional blood smart health care. MICROSYSTEMS & NANOENGINEERING 2021; 7:103. [PMID: 34963817 PMCID: PMC8651774 DOI: 10.1038/s41378-021-00329-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Revised: 10/25/2021] [Accepted: 11/06/2021] [Indexed: 05/10/2023]
Abstract
As a crucial biophysical property, red blood cell (RBC) deformability is pathologically altered in numerous disease states, and biochemical and structural changes occur over time in stored samples of otherwise normal RBCs. However, it has yet to be applied in point-of-care blood devices because of the bulky external equipment required (a high-resolution microscope and a microfluidic pump), the associated operational difficulties, and the need for professional analysis. Herein, we propose a smart optofluidic system that provides a differential diagnosis for blood testing via precise recognition of cell biophysical properties, both mechanical and morphological. Deformation of the RBC population is induced by pressing the hydrogel via an integrated mechanical transfer device, and the biophysical properties of the cell population are extracted by the designed smartphone algorithm. Artificial intelligence-based modeling of cell biophysical properties related to blood diseases and blood quality was developed for online testing. We achieve 100% diagnostic accuracy for five typical clinical blood diseases (90 megaloblastic anemia, 78 myelofibrosis, 84 iron deficiency anemia, 48 thrombotic thrombocytopenic purpura, and 48 thalassemia cases) in real-world prospective implementation; furthermore, personalized blood-quality monitoring (for transfusion in cardiac surgery) is achieved with an accuracy of 96.9%. This work suggests a potential basis for next-generation smart blood health care devices.
Collapse
Affiliation(s)
- Longfei Chen
- School of Physics & Technology, Key Laboratory of Artificial Micro/Nano Structure of Ministry of Education, Wuhan University, Wuhan, 430072 China
- Shenzhen Research Institute, Wuhan University, Shenzhen, 518000 China
| | - Yantong Liu
- School of Physics & Technology, Key Laboratory of Artificial Micro/Nano Structure of Ministry of Education, Wuhan University, Wuhan, 430072 China
- Shenzhen Research Institute, Wuhan University, Shenzhen, 518000 China
| | - Hongshan Xu
- School of Physics & Technology, Key Laboratory of Artificial Micro/Nano Structure of Ministry of Education, Wuhan University, Wuhan, 430072 China
| | - Linlu Ma
- Department of Hematology, Zhongnan Hospital, Wuhan University, Wuhan, 430071 China
| | - Yifan Wang
- School of Physics & Technology, Key Laboratory of Artificial Micro/Nano Structure of Ministry of Education, Wuhan University, Wuhan, 430072 China
| | - Fang Wang
- School of Physics & Technology, Key Laboratory of Artificial Micro/Nano Structure of Ministry of Education, Wuhan University, Wuhan, 430072 China
| | - Jiaomeng Zhu
- School of Physics & Technology, Key Laboratory of Artificial Micro/Nano Structure of Ministry of Education, Wuhan University, Wuhan, 430072 China
| | - Xuejia Hu
- School of Physics & Technology, Key Laboratory of Artificial Micro/Nano Structure of Ministry of Education, Wuhan University, Wuhan, 430072 China
| | - Kezhen Yi
- Department of Laboratory Medicine, Zhongnan Hospital, Wuhan University, Wuhan, 430071 China
| | - Yi Yang
- School of Physics & Technology, Key Laboratory of Artificial Micro/Nano Structure of Ministry of Education, Wuhan University, Wuhan, 430072 China
- Shenzhen Research Institute, Wuhan University, Shenzhen, 518000 China
| | - Hui Shen
- Department of Hematology, Zhongnan Hospital, Wuhan University, Wuhan, 430071 China
| | - Fuling Zhou
- Department of Hematology, Zhongnan Hospital, Wuhan University, Wuhan, 430071 China
| | - Xiaoqi Gao
- School of Physics & Technology, Key Laboratory of Artificial Micro/Nano Structure of Ministry of Education, Wuhan University, Wuhan, 430072 China
| | - Yanxiang Cheng
- Renmin Hospital of Wuhan University, Wuhan University, Wuhan, 430060 China
| | - Long Bai
- School of Medicine, Zhejiang University, Hangzhou, Zhejiang, 310002 China
| | - Yongwei Duan
- Department of Laboratory Medicine, Zhongnan Hospital, Wuhan University, Wuhan, 430071 China
| | - Fubing Wang
- Department of Laboratory Medicine, Zhongnan Hospital, Wuhan University, Wuhan, 430071 China
| | - Yimin Zhu
- School of Medicine, Zhejiang University, Hangzhou, Zhejiang, 310002 China
| |
Collapse
|
222
|
AIM in Dermatology. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_188-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
223
|
|
224
|
Muñoz-López C, Ramírez-Cornejo C, Marchetti MA, Han SS, Del Barrio-Díaz P, Jaque A, Uribe P, Majerson D, Curi M, Del Puerto C, Reyes-Baraona F, Meza-Romero R, Parra-Cares J, Araneda-Ortega P, Guzmán M, Millán-Apablaza R, Nuñez-Mora M, Liopyris K, Vera-Kellet C, Navarrete-Dechent C. Performance of a deep neural network in teledermatology: a single-centre prospective diagnostic study. J Eur Acad Dermatol Venereol 2020; 35:546-553. [PMID: 33037709 DOI: 10.1111/jdv.16979] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Accepted: 09/22/2020] [Indexed: 12/13/2022]
Abstract
BACKGROUND The use of artificial intelligence (AI) algorithms for the diagnosis of skin diseases has shown promise in experimental settings but has not yet been tested in real-life conditions. OBJECTIVE To assess the diagnostic performance and potential clinical utility of a 174-multiclass AI algorithm in a real-life telemedicine setting. METHODS Prospective diagnostic accuracy study including consecutive patients who submitted images for teledermatology evaluation. The treating dermatologist chose a single image to upload to a web application during teleconsultation. A follow-up reader study including nine healthcare providers (3 dermatologists, 3 dermatology residents and 3 general practitioners) was performed. RESULTS A total of 340 cases from 281 patients met study inclusion criteria. The mean (SD) age of patients was 33.7 (17.5) years; 63% (n = 177) were female. Exposure to the AI algorithm results was considered useful in 11.8% of visits (n = 40), and the teledermatologist correctly modified the real-time diagnosis in 0.6% (n = 2) of cases. The overall top-1 accuracy of the algorithm (41.2%) was lower than that of the dermatologists (60.1%), residents (57.8%) and general practitioners (49.3%) (all comparisons P < 0.05 in the reader study). When the analysis was limited to the diagnoses on which the algorithm had been explicitly trained, the balanced top-1 accuracy of the algorithm (47.6%) was comparable to that of the dermatologists (49.7%) and residents (47.7%) but superior to that of the general practitioners (39.7%; P = 0.049). Algorithm performance was associated with patient skin type and image quality. CONCLUSIONS A 174-disease class AI algorithm appears to be a promising tool in the triage and evaluation of lesions with patient-taken photographs via telemedicine.
Collapse
Affiliation(s)
- C Muñoz-López
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - C Ramírez-Cornejo
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - M A Marchetti
- Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA
| | - S S Han
- I Dermatology Clinic, Seoul, Korea
| | - P Del Barrio-Díaz
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - A Jaque
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - P Uribe
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile.,Melanoma and Skin Cancer Unit, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - D Majerson
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - M Curi
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - C Del Puerto
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - F Reyes-Baraona
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - R Meza-Romero
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - J Parra-Cares
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - P Araneda-Ortega
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - M Guzmán
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - R Millán-Apablaza
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - M Nuñez-Mora
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - K Liopyris
- Department of Dermatology, University of Athens, Andreas Syggros Hospital of Skin and Venereal Diseases, Athens, Greece
| | - C Vera-Kellet
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| | - C Navarrete-Dechent
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile.,Melanoma and Skin Cancer Unit, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
| |
Collapse
|
225
|
Deep learning based classification of facial dermatological disorders. Comput Biol Med 2020; 128:104118. [PMID: 33221639 DOI: 10.1016/j.compbiomed.2020.104118] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2020] [Revised: 11/03/2020] [Accepted: 11/07/2020] [Indexed: 01/11/2023]
Abstract
Dermatological diseases commonly present as lesions with abnormal patterns and skin color changes (usually redness). Dermatology is therefore one of the most appropriate areas in medicine for automated diagnosis from images using pattern recognition techniques, which can provide accurate, objective, and early diagnosis and intervention. Automated techniques also provide diagnosis independent of location and time, and can reduce both the number of patients in dermatology departments and the cost of dermatologist visits. In this work, an automated method is proposed to classify dermatological diseases from color digital photographs. The efficiency of the proposed approach derives from two stages. In the first stage, lesions are detected and extracted using a variational level-set technique after noise reduction and intensity normalization. In the second stage, lesions are classified using a pre-trained DenseNet201 architecture with an efficient loss function. This study addresses five common facial dermatological diseases, which can also cause anxiety, depression, and even suicide. The main contributions of this work are: (i) a comprehensive survey of state-of-the-art work on the classification of dermatological diseases using deep learning; (ii) a new fully automated lesion detection and segmentation method based on level sets; (iii) a new adaptive, hybrid, and non-symmetric loss function; (iv) use of a pre-trained DenseNet201 structure with the new loss function to classify skin lesions; and (v) a comparative evaluation of ten convolutional networks for skin lesion classification. Experimental results indicate that the proposed approach can classify lesions with high performance (95.24% accuracy).
Collapse
|
226
|
Raza SA, Al-Niaimi F, Ali FR. The advent of artificial intelligence for the identification of skin lesions. Clin Exp Dermatol 2020; 46:413-415. [PMID: 33145783 DOI: 10.1111/ced.14405] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/15/2020] [Indexed: 11/28/2022]
Affiliation(s)
- S A Raza
- Birmingham Medical School, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
| | - F Al-Niaimi
- Department of Dermatology, Aalborg University Hospital, Aalborg, Denmark
| | - F R Ali
- Vernova Healthcare Community Interest Company, Macclesfield, Cheshire, UK.,Dermatological Surgery and Laser Unit, Guy's Cancer Centre, Guy's and St Thomas' NHS Foundation Trust, London, UK
| |
Collapse
|
227
|
Han SS, Kim SH, Na JI. Seems to Be Low, but Is it Really Poor? Need for Cohort and Comparative Studies to Clarify the Performance of Deep Neural Networks. J Invest Dermatol 2020; 141:1329-1331. [PMID: 33075350 DOI: 10.1016/j.jid.2020.08.024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2020] [Revised: 08/17/2020] [Accepted: 08/19/2020] [Indexed: 11/24/2022]
Affiliation(s)
- Seung Seog Han
- Department of Dermatology, I Dermatology Clinic, Seoul, Korea
| | - Seong Hwan Kim
- Department of Plastic and Reconstructive Surgery, Kangnam Sacred Hospital, Hallym University College of Medicine, Seoul, Korea
| | - Jung-Im Na
- Department of Dermatology, Seoul National University Bundang Hospital, Seongnam, Korea.
| |
Collapse
|
228
|
Burlina PM, Joshi NJ, Mathew PA, Paul W, Rebman AW, Aucott JN. AI-based detection of erythema migrans and disambiguation against other skin lesions. Comput Biol Med 2020; 125:103977. [DOI: 10.1016/j.compbiomed.2020.103977] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2020] [Revised: 08/14/2020] [Accepted: 08/15/2020] [Indexed: 12/28/2022]
|
229
|
Thomsen K, Christensen AL, Iversen L, Lomholt HB, Winther O. Deep Learning for Diagnostic Binary Classification of Multiple-Lesion Skin Diseases. Front Med (Lausanne) 2020; 7:574329. [PMID: 33072786 PMCID: PMC7536339 DOI: 10.3389/fmed.2020.574329] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2020] [Accepted: 08/24/2020] [Indexed: 11/13/2022] Open
Abstract
Background: Diagnosis of skin diseases is often challenging, and computer-aided diagnostic tools are urgently needed to underpin decision making. Objective: To develop a convolutional neural network model to classify clinically relevant selected multiple-lesion skin diseases, in accordance with the STARD guidelines. Methods: This was an image-based retrospective study using multi-task learning for binary classification. A VGG-16 model was trained on 16,543 non-standardized images. Image data were distributed into a training set (80%), a validation set (10%), and a test set (10%). All images were collected from a clinical database of a Danish population attending one dermatological department. Included were patients categorized with ICD-10 codes related to acne, rosacea, psoriasis, eczema, and cutaneous T-cell lymphoma. Results: Acne was distinguished from rosacea with a sensitivity of 85.42% (CI 72.24–93.93%) and a specificity of 89.53% (CI 83.97–93.68%); cutaneous T-cell lymphoma was distinguished from eczema with a sensitivity of 74.29% (CI 67.82–80.05%) and a specificity of 84.09% (CI 80.83–86.99%); and psoriasis was distinguished from eczema with a sensitivity of 81.79% (CI 78.51–84.76%) and a specificity of 73.57% (CI 69.76–77.13%). All results were based on the test set. Conclusion: The performance rates reported were equal or superior to those reported for general practitioners with dermatological training, indicating that computer-aided diagnostic models based on convolutional neural networks may potentially be employed for diagnosing multiple-lesion skin diseases.
Collapse
Affiliation(s)
- Kenneth Thomsen
- Department of Dermatology and Venereology, Aarhus University Hospital, Aarhus, Denmark
| | - Anja Liljedahl Christensen
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark
| | - Lars Iversen
- Department of Dermatology and Venereology, Aarhus University Hospital, Aarhus, Denmark
| | | | - Ole Winther
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark.,Center for Genomic Medicine, Rigshospitalet, Copenhagen University Hospital, Copenhagen, Denmark.,Department of Biology, Bioinformatics Centre, University of Copenhagen, Copenhagen, Denmark
| |
Collapse
|
230
|
Pouly M, Koller T, Gottfrois P, Lionetti S. [Artificial intelligence in image analysis-fundamentals and new developments]. Hautarzt 2020; 71:660-668. [PMID: 32789670 DOI: 10.1007/s00105-020-04663-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
BACKGROUND Since 2017, there have been several reports of artificial intelligence (AI) achieving performance comparable to human experts on medical image analysis tasks. With the first approval of a computer vision algorithm as a medical device in 2018, the way was paved for these methods to eventually become an integral part of modern clinical practice. OBJECTIVES The purpose of this article is to review the main developments of the last few years in AI for image analysis, in relation to clinical applications and dermatology. MATERIALS AND METHODS Taking the annual ImageNet challenge as a guide, we review classical methods of machine learning for image analysis and demonstrate how these methods incorporated human expertise but failed to meet industrial requirements regarding performance and scalability. With the rise of deep learning based on artificial neural networks, these limitations could be overcome. We discuss important aspects of this technology, including transfer learning, and report on recent developments such as explainable AI and generative models. RESULTS Deep learning models achieved performance on a par with human experts in a broad variety of diagnostic tasks and were shown to be suitable for industrialization. Current developments therefore focus less on further improving accuracy and instead address open issues such as interpretability and applicability under clinical conditions. Upcoming generative models allow for entirely new applications. CONCLUSIONS Deep learning has a history of remarkable success and has become the new technical standard for image analysis. The dramatic improvement these models brought over classical approaches enables applications in a rapidly increasing number of clinical fields. In dermatology, as in many other domains, artificial intelligence still faces considerable challenges but is undoubtedly developing into an essential tool of modern medicine.
Collapse
Affiliation(s)
- Marc Pouly
- Computer Science, Hochschule Luzern, Suurstoffi 1, 6343 Rotkreuz, Switzerland.
| | - Thomas Koller
- Computer Science, Hochschule Luzern, Suurstoffi 1, 6343 Rotkreuz, Switzerland
| | - Philippe Gottfrois
- Department of Biomedical Engineering, University of Basel, Gewerbestraße 14, 4123 Allschwil, Switzerland
| | - Simone Lionetti
- Computer Science, Hochschule Luzern, Suurstoffi 1, 6343 Rotkreuz, Switzerland
| |
Collapse
|
231
|
Ankrum J. Diagnosing skin diseases using an AI-based dermatology consult. Sci Transl Med 2020. [DOI: 10.1126/scitranslmed.abc8946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
Deep learning outperforms general practitioners in diagnosing 26 common skin conditions.
Collapse
Affiliation(s)
- James Ankrum
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52242, USA
| |
Collapse
|