1. Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024;15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357.
Abstract
Computational Pathology (CPath) is an interdisciplinary science that applies computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive computer-aided diagnosis (CAD) system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, a considerable gap remains in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath development as a cycle of stages that must be cohesively linked together to address the challenges of such a multidisciplinary science. We overview this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey review paper and access to the original model-card repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
2. Lyakhova UA, Lyakhov PA. Systematic review of approaches to detection and classification of skin cancer using artificial intelligence: Development and prospects. Comput Biol Med 2024;178:108742. PMID: 38875908; DOI: 10.1016/j.compbiomed.2024.108742.
Abstract
In recent years, there has been a significant improvement in the accuracy of the classification of pigmented skin lesions using artificial intelligence algorithms. Intelligent analysis and classification systems are significantly superior to visual diagnostic methods used by dermatologists and oncologists. However, the application of such systems in clinical practice is severely limited due to a lack of generalizability and risks of potential misclassification. Successful implementation of artificial intelligence-based tools into clinicopathological practice requires a comprehensive study of the effectiveness and performance of existing models, as well as further promising areas for potential research development. The purpose of this systematic review is to investigate and evaluate the accuracy of artificial intelligence technologies for detecting malignant forms of pigmented skin lesions. For the study, 10,589 scientific research and review articles were selected from electronic scientific publishers, of which 171 articles were included in the presented systematic review. All selected scientific articles are distributed according to the proposed neural network algorithms, from machine learning to multimodal intelligent architectures, and are described in the corresponding sections of the manuscript. This research aims to explore automated skin cancer recognition systems, from simple machine learning algorithms to multimodal ensemble systems based on advanced encoder-decoder models, vision transformers (ViT), and generative and spiking neural networks. In addition, as a result of the analysis, future directions of research, prospects, and potential for further development of automated neural network systems for classifying pigmented skin lesions are discussed.
Affiliation(s)
- U A Lyakhova
- Department of Mathematical Modeling, North-Caucasus Federal University, 355017, Stavropol, Russia
- P A Lyakhov
- Department of Mathematical Modeling, North-Caucasus Federal University, 355017, Stavropol, Russia; North-Caucasus Center for Mathematical Research, North-Caucasus Federal University, 355017, Stavropol, Russia
3. McGenity C, Clarke EL, Jennings C, Matthews G, Cartlidge C, Freduah-Agyemang H, Stocken DD, Treanor D. Artificial intelligence in digital pathology: a systematic review and meta-analysis of diagnostic test accuracy. NPJ Digit Med 2024;7:114. PMID: 38704465; PMCID: PMC11069583; DOI: 10.1038/s41746-024-01106-8.
Abstract
Ensuring the diagnostic performance of artificial intelligence (AI) before introduction into clinical practice is essential. Growing numbers of studies using AI for digital pathology have been reported over recent years. The aim of this work is to examine the diagnostic accuracy of AI on digital pathology images for any disease. This systematic review and meta-analysis included diagnostic accuracy studies using any type of AI applied to whole slide images (WSIs) for any disease. The reference standard was diagnosis by histopathological assessment and/or immunohistochemistry. Searches were conducted in PubMed, EMBASE and CENTRAL in June 2022. Risk of bias and concerns of applicability were assessed using the QUADAS-2 tool. Data extraction was conducted by two investigators and meta-analysis was performed using a bivariate random effects model, with additional subgroup analyses also performed. Of 2976 identified studies, 100 were included in the review and 48 in the meta-analysis. The included studies came from a range of countries and together covered over 152,000 WSIs, representing many diseases. These studies reported a mean sensitivity of 96.3% (CI 94.1-97.7) and mean specificity of 93.3% (CI 90.5-95.4). There was heterogeneity in study design, and 99% of studies identified for inclusion had at least one area at high or unclear risk of bias or applicability concerns. Details on selection of cases, division of model development and validation data, and raw performance data were frequently ambiguous or missing. AI is reported as having high diagnostic accuracy in the reported areas but requires more rigorous evaluation of its performance.
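The pooled measures in such a meta-analysis are built up from per-study 2x2 confusion counts. As a minimal illustration of the two underlying metrics (this is not the bivariate random-effects model the authors used, and the counts below are hypothetical):

```python
def sensitivity(tp, fn):
    """True-positive rate: fraction of diseased cases the test flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of disease-free cases the test clears."""
    return tn / (tn + fp)

# Hypothetical single-study counts: 96 true positives, 4 false negatives,
# 93 true negatives, 7 false positives.
sens = sensitivity(96, 4)  # 0.96
spec = specificity(93, 7)  # 0.93
```

The bivariate model then pools the logit-transformed pairs (sens, spec) across studies while modelling their correlation, which is why it cannot be reduced to simple averaging of these per-study values.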
Affiliation(s)
- Clare McGenity
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Emily L Clarke
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Charlotte Jennings
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Darren Treanor
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Department of Clinical Pathology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Centre for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
4. K AK, T Y S, Ahmed ST, Mathivanan SK, Varadhan S, Shah MA. Trained neural networking framework based skin cancer diagnosis and categorization using grey wolf optimization. Sci Rep 2024;14:9388. PMID: 38654051; DOI: 10.1038/s41598-024-59979-4.
Abstract
Skin cancer arises from mutational differences in the epidermis and changes in patch appearance. Many studies have focused on the design and development of effective approaches for the diagnosis and categorization of skin cancer, with decisions made on independent training datasets under limited conditions and scenarios. In this research, Kaggle-based datasets are optimized and categorized into a labeled data array for indexing using federated learning (FL). The technique is built on the grey wolf optimization (GWO) algorithm to ensure that dataset attribute dependencies are extracted and dimensional mapping is processed. The threshold-value validation of the dimensionally mapped datasets is optimized and trained in a neural networking framework, further expanded via federated learning standards. The technique demonstrated 95.82% accuracy with the GWO technique and 94.9% with the combination of the Trained Neural Networking (TNN) framework and Recessive Learning (RL).
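Grey wolf optimization itself is a population-based metaheuristic in which candidate solutions move toward the three current best "wolves" (alpha, beta, delta). A generic NumPy sketch of that update rule follows; it is illustrative only (the function name, hyperparameters, and the sphere test function are not from the paper, which embeds GWO in a federated training pipeline):

```python
import numpy as np

def gwo(objective, dim, n_wolves=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Minimize `objective` over a box [lb, ub]^dim with grey wolf optimization."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.array([objective(w) for w in wolves])
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]   # three best wolves lead the pack
        a = 2 - 2 * t / iters                    # exploration factor decays 2 -> 0
        for i in range(n_wolves):
            x_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A = 2 * a * r1 - a               # large |A| explores, small exploits
                C = 2 * r2
                D = np.abs(C * leader - wolves[i])
                x_new += leader - A * D
            wolves[i] = np.clip(x_new / 3, lb, ub)  # average pull of the 3 leaders
    fitness = np.array([objective(w) for w in wolves])
    return wolves[fitness.argmin()], float(fitness.min())

# Sanity check on a toy objective: the sphere function, minimum 0 at the origin.
best_x, best_f = gwo(lambda x: float(np.sum(x ** 2)), dim=2)
```

On this toy problem the pack collapses onto the leaders as the exploration factor decays, driving the best fitness close to zero.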
Affiliation(s)
- Amit Kumar K
- School of Engineering, CMR University, Bengaluru, India
- Satheesha T Y
- School of Computer Science and Engineering, REVA University, Bengaluru, India
- Syed Thouheed Ahmed
- Department of Electrical Engineering, Indian Institute of Technology Hyderabad, Hyderabad, India
- Sangeetha Varadhan
- Department of Computer Applications, Dr. MGR Educational and Research Institute, Chennai, 600095, India
- Mohd Asif Shah
- Kebri Dehar University, Kebri Dehar, Somali, 250, Ethiopia
- Division of Research and Development, Lovely Professional University, Phagwara, Punjab, 144001, India
5. Yee J, Rosendahl C, Aoude LG. The role of artificial intelligence and convolutional neural networks in the management of melanoma: a clinical, pathological, and radiological perspective. Melanoma Res 2024;34:96-104. PMID: 38141179; PMCID: PMC10906187; DOI: 10.1097/cmr.0000000000000951.
Abstract
Clinical dermatoscopy and pathological slide assessment are essential in the diagnosis and management of patients with cutaneous melanoma. For those presenting with stage IIC disease and beyond, radiological investigations are often considered. The dermatoscopic, whole-slide, and radiological images used during clinical care are often stored digitally, enabling artificial intelligence (AI) and convolutional neural networks (CNNs) to learn from, analyse, and contribute to clinical decision-making. A keyword search of the Medline database was performed to assess the progression, capabilities, and limitations of AI and CNNs in the diagnosis and management of cutaneous melanoma. Full-text articles were reviewed if they related to dermatoscopy, pathological slide assessment, or radiology. Through analysis of 95 studies, we demonstrate that the diagnostic accuracy of AI/CNNs can be superior, or at least equal, to that of clinicians. However, variability in image acquisition, pre-processing, segmentation, and feature extraction remains challenging. With current technological abilities, AI/CNNs and clinicians working together synergistically perform better than either alone in all subspecialty domains relating to cutaneous melanoma. AI has the potential to enhance the diagnostic capabilities of junior dermatology trainees, primary care skin cancer clinicians, and general practitioners. For experienced clinicians, AI provides a cost-efficient second opinion. From a pathological and radiological perspective, CNNs have the potential to improve workflow efficiency, allowing clinicians to achieve more in a finite amount of time. Until the challenges of AI/CNNs are reliably met, however, they can only remain an adjunct to clinical decision-making.
Affiliation(s)
- Joshua Yee
- Faculty of Medicine, University of Queensland, St Lucia
- Cliff Rosendahl
- Primary Care Clinical Unit, Medical School, The University of Queensland, Herston
- Lauren G. Aoude
- Frazer Institute, The University of Queensland, Woolloongabba, QLD, Australia
6. Wei ML, Tada M, So A, Torres R. Artificial intelligence and skin cancer. Front Med (Lausanne) 2024;11:1331895. PMID: 38566925; PMCID: PMC10985205; DOI: 10.3389/fmed.2024.1331895.
Abstract
Artificial intelligence is poised to rapidly reshape many fields, including that of skin cancer screening and diagnosis, as both a disruptive and an assistive technology. Together with the collection and availability of large medical data sets, artificial intelligence will become a powerful tool that can be leveraged by physicians in their diagnoses and treatment plans for patients. This comprehensive review focuses on current progress toward AI applications for patients, primary care providers, dermatologists, and dermatopathologists; explores the diverse applications of image and molecular processing for skin cancer; and highlights AI's potential for patient self-screening and improving diagnostic accuracy for non-dermatologists. We additionally delve into the challenges and barriers to clinical implementation, paths forward for implementation, and areas of active research.
Affiliation(s)
- Maria L. Wei
- Department of Dermatology, University of California, San Francisco, San Francisco, CA, United States
- Dermatology Service, San Francisco VA Health Care System, San Francisco, CA, United States
- Mikio Tada
- Institute for Neurodegenerative Diseases, University of California, San Francisco, San Francisco, CA, United States
- Alexandra So
- School of Medicine, University of California, San Francisco, San Francisco, CA, United States
- Rodrigo Torres
- Dermatology Service, San Francisco VA Health Care System, San Francisco, CA, United States
7. Desale RP, Patil PS. An efficient multi-class classification of skin cancer using optimized vision transformer. Med Biol Eng Comput 2024;62:773-789. PMID: 37996627; DOI: 10.1007/s11517-023-02969-x.
Abstract
Skin cancer is a pervasive and deadly disease, prompting a surge in research efforts towards utilizing computer-based techniques to analyze skin lesion images to identify malignancies. This paper introduces an optimized vision transformer approach for effectively classifying skin tumors. The methodology begins with a pre-processing step aimed at preserving color constancy, eliminating hair artifacts, and reducing image noise, using a combination of techniques such as piecewise linear bottom-hat filtering, adaptive median filtering, Gaussian filtering, and an enhanced gradient intensity method. Afterwards, the segmentation phase is initiated by applying the self-sparse watershed algorithm to the pre-processed image. Subsequently, the segmented image is passed through a feature extraction stage employing the hybrid Walsh-Hadamard Karhunen-Loeve expansion technique. The final step applies an improved vision transformer for skin cancer classification. The entire methodology is implemented in the Python programming language, and the International Skin Imaging Collaboration (ISIC) 2019 database is used for experimentation. The experimental results demonstrate strong performance across metrics: accuracy 99.81%, precision 96.65%, sensitivity 98.21%, F-measure 97.42%, specificity 99.88%, recall 98.21%, Jaccard coefficient 98.54%, and Matthews correlation coefficient (MCC) 98.89%. The proposed methodology outperforms existing methods.
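The vision-transformer ingredient in pipelines like this starts by cutting the input image into fixed-size patches, each flattened into a token vector before linear projection and attention. A minimal NumPy sketch of that first patch-embedding step (illustrative only; the function name and sizes are not from the paper's optimized architecture):

```python
import numpy as np

def patchify(image, patch):
    """Split an (H, W, C) image into non-overlapping flattened patches,
    as in the input stage of a vision transformer (ViT)."""
    H, W, C = image.shape
    assert H % patch == 0 and W % patch == 0, "image must tile evenly"
    return (image
            .reshape(H // patch, patch, W // patch, patch, C)
            .transpose(0, 2, 1, 3, 4)          # group rows/cols of patches
            .reshape(-1, patch * patch * C))   # one flattened vector per patch

# A toy 4x4 RGB image split into 2x2 patches -> 4 tokens of length 12.
image = np.arange(48, dtype=float).reshape(4, 4, 3)
patches = patchify(image, 2)
```

Each row of `patches` would then be linearly projected to the transformer's embedding dimension and combined with a positional encoding.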
Affiliation(s)
- R P Desale
- E&TC Engineering Department, SSVPS's Bapusaheb Shivajirao Deore College of Engineering, Dhule, Maharashtra, 424005, India
- P S Patil
- E&TC Engineering Department, SSVPS's Bapusaheb Shivajirao Deore College of Engineering, Dhule, Maharashtra, 424005, India
8. Chanda T, Hauser K, Hobelsberger S, Bucher TC, Garcia CN, Wies C, Kittler H, Tschandl P, Navarrete-Dechent C, Podlipnik S, Chousakos E, Crnaric I, Majstorovic J, Alhajwan L, Foreman T, Peternel S, Sarap S, Özdemir İ, Barnhill RL, Llamas-Velasco M, Poch G, Korsing S, Sondermann W, Gellrich FF, Heppt MV, Erdmann M, Haferkamp S, Drexler K, Goebeler M, Schilling B, Utikal JS, Ghoreschi K, Fröhling S, Krieghoff-Henning E, Brinker TJ. Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma. Nat Commun 2024;15:524. PMID: 38225244; PMCID: PMC10789736; DOI: 10.1038/s41467-023-43095-4.
Abstract
Artificial intelligence (AI) systems have been shown to help dermatologists diagnose melanoma more accurately; however, they lack transparency, hindering user acceptance. Explainable AI (XAI) methods can help to increase transparency, yet often lack precise, domain-specific explanations. Moreover, the impact of XAI methods on dermatologists' decisions has not yet been evaluated. Building upon previous research, we introduce an XAI system that provides precise, domain-specific explanations alongside its differential diagnoses of melanomas and nevi. Through a three-phase study, we assess its impact on dermatologists' diagnostic accuracy, diagnostic confidence, and trust in the XAI support. Our results show strong alignment between XAI and dermatologist explanations. We also show that dermatologists' confidence in their diagnoses and their trust in the support system increase significantly with XAI compared to conventional AI. This study highlights dermatologists' willingness to adopt such XAI systems, promoting future use in the clinic.
Affiliation(s)
- Tirtha Chanda
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Katja Hauser
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Sarah Hobelsberger
- Department of Dermatology, University Hospital, Technical University Dresden, Dresden, Germany
- Tabea-Clara Bucher
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Carina Nogueira Garcia
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Christoph Wies
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Medical Faculty of University Heidelberg, Heidelberg, Germany
- Harald Kittler
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Philipp Tschandl
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Cristian Navarrete-Dechent
- Department of Dermatology, Escuela de Medicina, Pontificia Universidad Católica de Chile, Santiago, Chile
- Sebastian Podlipnik
- Dermatology Department, Hospital Clínic of Barcelona, University of Barcelona, IDIBAPS, Barcelona, Spain
- Emmanouil Chousakos
- 1st Department of Pathology, Medical School, National & Kapodistrian University of Athens, Athens, Greece
- Iva Crnaric
- Department of Dermatovenereology, Sestre milosrdnice University Hospital Center, Zagreb, Croatia
- Linda Alhajwan
- Department of Dermatology, Dubai London Clinic, Dubai, United Arab Emirates
- Sandra Peternel
- Department of Dermatovenereology, Clinical Hospital Center Rijeka, Faculty of Medicine, University of Rijeka, Rijeka, Croatia
- İrem Özdemir
- Department of Dermatology, Faculty of Medicine, Gazi University, Ankara, Turkey
- Raymond L Barnhill
- Department of Translational Research, Institut Curie, Unit of Formation and Research of Medicine University of Paris, Paris, France
- Gabriela Poch
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Dermatology, Venereology and Allergology, Berlin, Germany
- Sören Korsing
- Department of Dermatology, University Hospital Essen, University Duisburg-Essen, Essen, Germany
- Wiebke Sondermann
- Department of Dermatology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Markus V Heppt
- Department of Dermatology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Michael Erdmann
- Department of Dermatology, Uniklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Sebastian Haferkamp
- Department of Dermatology, University Hospital Regensburg, Regensburg, Germany
- Konstantin Drexler
- Department of Dermatology, University Hospital Regensburg, Regensburg, Germany
- Matthias Goebeler
- Department of Dermatology, Venereology and Allergology, University Hospital Würzburg, Würzburg, Germany
- Bastian Schilling
- Department of Dermatology, Venereology and Allergology, University Hospital Würzburg, Würzburg, Germany
- Jochen S Utikal
- Department of Dermatology, Venereology and Allergology, University Medical Center Mannheim, Ruprecht-Karl University of Heidelberg, Mannheim, Germany
- Kamran Ghoreschi
- Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Dermatology, Venereology and Allergology, Berlin, Germany
- Stefan Fröhling
- Division of Translational Medical Oncology, National Center for Tumor Diseases (NCT) Heidelberg and German Cancer Research Center (DKFZ), Heidelberg, Germany
- Eva Krieghoff-Henning
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Titus J Brinker
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
9. Kourounis G, Elmahmudi AA, Thomson B, Hunter J, Ugail H, Wilson C. Computer image analysis with artificial intelligence: a practical introduction to convolutional neural networks for medical professionals. Postgrad Med J 2023;99:1287-1294. PMID: 37794609; PMCID: PMC10658730; DOI: 10.1093/postmj/qgad095.
Abstract
Artificial intelligence tools, particularly convolutional neural networks (CNNs), are transforming healthcare by enhancing predictive, diagnostic, and decision-making capabilities. This review provides an accessible and practical explanation of CNNs for clinicians and highlights their relevance in medical image analysis. CNNs have shown themselves to be exceptionally useful in computer vision, a field that enables machines to 'see' and interpret visual data. Understanding how these models work can help clinicians leverage their full potential, especially as artificial intelligence continues to evolve and integrate into healthcare. CNNs have already demonstrated their efficacy in diverse medical fields, including radiology, histopathology, and medical photography. In radiology, CNNs have been used to automate the assessment of conditions such as pneumonia, pulmonary embolism, and rectal cancer. In histopathology, CNNs have been used to assess and classify colorectal polyps and gastric epithelial tumours, and to assist in the assessment of multiple malignancies. In medical photography, CNNs have been used to assess retinal diseases and skin conditions, and to detect gastric and colorectal polyps during endoscopic procedures. In surgical laparoscopy, they may provide intraoperative assistance to surgeons, helping interpret surgical anatomy and demonstrate safe dissection zones. The integration of CNNs into medical image analysis promises to enhance diagnostic accuracy, streamline workflow efficiency, and expand access to expert-level image analysis, contributing to the ultimate goal of delivering further improvements in patient and healthcare outcomes.
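For readers wanting the mechanics behind such primers: the core CNN operation slides a small kernel across the image, producing a feature map that responds where the kernel's pattern occurs. A toy NumPy sketch of one convolutional layer with a ReLU activation (illustrative only, not code from the reviewed article):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in most
    deep-learning frameworks): at each position, sum the elementwise
    product of the kernel and the image window under it."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Nonlinearity: keep positive responses, zero out the rest."""
    return np.maximum(x, 0)

# A vertical-edge detector applied to an image with a sharp vertical edge:
# the feature map lights up only along the edge column.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
feature_map = relu(conv2d(image, edge_kernel))
```

A trained CNN stacks many such learned kernels with pooling layers, so deeper feature maps respond to increasingly abstract patterns rather than hand-designed edges.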
Affiliation(s)
- Georgios Kourounis
- NIHR Blood and Transplant Research Unit, Newcastle University and Cambridge University, Newcastle upon Tyne, NE1 7RU, United Kingdom
- Institute of Transplantation, The Freeman Hospital, Newcastle upon Tyne, NE7 7DN, United Kingdom
- Ali Ahmed Elmahmudi
- Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
- Brian Thomson
- Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
- James Hunter
- Nuffield Department of Surgical Sciences, University of Oxford, Oxford, OX3 9DU, United Kingdom
- Hassan Ugail
- Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
- Colin Wilson
- NIHR Blood and Transplant Research Unit, Newcastle University and Cambridge University, Newcastle upon Tyne, NE1 7RU, United Kingdom
- Institute of Transplantation, The Freeman Hospital, Newcastle upon Tyne, NE7 7DN, United Kingdom
10. Huang X, Chen X, Zhong X, Tian T. The CNN model aided the study of the clinical value hidden in the implant images. J Appl Clin Med Phys 2023;24:e14141. PMID: 37656066; PMCID: PMC10562019; DOI: 10.1002/acm2.14141.
Abstract
PURPOSE This article aims to construct a new method to evaluate radiographic image identification results based on artificial intelligence, which can complement the limited vision of researchers when studying the effect of various factors on clinical implantation outcomes. METHODS We constructed a convolutional neural network (CNN) model using clinical implant radiographic images and used gradient-weighted class activation mapping (Grad-CAM) to obtain thermal maps presenting identification differences before performing statistical analyses. To verify whether the differences highlighted by the Grad-CAM algorithm would be of value to clinical practice, we measured the bone thickness around the identified sites. Finally, we analyzed the influence of implant type on implantation according to the measurement results. RESULTS (1) The thermal maps showed that the sites with significant differences between Straumann BL and Bicon implants, as identified by the CNN model, were mainly the thread and neck areas. (2) The heights of the mesial, distal, buccal, and lingual bone of the Bicon implant post-op were greater than those of the Straumann BL (P < 0.05). (3) Between the first and second stages of surgery, the bone-thickness variation at the buccal and lingual sides of the Bicon implant platform was greater than that of the Straumann BL implant (P < 0.05). CONCLUSION We found that the identified neck area of the Bicon implant was placed deeper than that of the Straumann BL implant, and that there was more bone resorption on the buccal and lingual sides of the Bicon implant platform between the first and second stages of surgery. In summary, this study shows that a CNN classification model can identify differences that complement our limited vision.
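The Grad-CAM step used here can be summarized: the channel weights are the spatially averaged gradients of the class score with respect to a convolutional layer's activations, and the heatmap is the ReLU of the weighted channel sum. A framework-agnostic NumPy sketch with synthetic activation and gradient tensors (illustrative; the study computed these tensors from a real trained network):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations (C, H, W) and the
    gradients of the class score w.r.t. those activations (C, H, W)."""
    weights = gradients.mean(axis=(1, 2))             # (C,) pooled gradients
    cam = np.tensordot(weights, activations, axes=1)  # (H, W) weighted sum
    cam = np.maximum(cam, 0)                          # keep positive evidence
    if cam.max() > 0:
        cam /= cam.max()                              # normalise to [0, 1]
    return cam

# Synthetic example: channel 0 fires top-left and has positive gradients,
# channel 1 fires bottom-right but has negative gradients, so only the
# top-left region should survive in the heatmap.
activations = np.zeros((2, 4, 4))
activations[0, 0, 0] = 5.0
activations[1, 3, 3] = 5.0
gradients = np.stack([np.ones((4, 4)), -np.ones((4, 4))])
cam = grad_cam(activations, gradients)
```

Upsampling such a heatmap to the input resolution and overlaying it on the radiograph yields the "thermal maps" the authors inspected.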
Affiliation(s)
- Xinxu Huang
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Xingyu Chen
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Xinnan Zhong
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
- Taoran Tian
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China

11
Wang X, Li N, Yin X, Xing L, Zheng Y. Classification of metastatic hepatic carcinoma and hepatocellular carcinoma lesions using contrast-enhanced CT based on EI-CNNet. Med Phys 2023; 50:5630-5642. [PMID: 36869656 DOI: 10.1002/mp.16340] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Revised: 02/24/2023] [Accepted: 02/24/2023] [Indexed: 03/05/2023] Open
Abstract
BACKGROUND Imaging is one of the main diagnostic methods for hepatocellular carcinoma and metastatic hepatic carcinoma. In clinical practice, diagnosis relies mainly on experienced imaging physicians, which is inefficient and cannot meet the demand for rapid and accurate diagnosis. How to classify these two types of liver cancer efficiently and accurately from imaging is therefore an urgent problem. PURPOSE The purpose of this study was to use a deep learning classification model to help radiologists distinguish single metastatic hepatic carcinoma from hepatocellular carcinoma based on enhanced features of contrast-enhanced CT (computed tomography) portal-phase images of the liver. METHODS In this retrospective study, 52 patients with metastatic hepatic carcinoma and 50 patients with hepatocellular carcinoma were among the patients who underwent preoperative enhanced CT examinations from 2017 to 2020. A total of 565 CT slices from these patients were used to train and validate the classification network (EI-CNNet, training/validation: 452/113). First, the EI block was used to extract edge information from CT slices to enrich fine-grained information for classification. Then, the ROC (receiver operating characteristic) curve was used to evaluate the performance, accuracy, and recall of EI-CNNet. Finally, the classification results of EI-CNNet were compared with those of popular classification models. RESULTS Using 80% of the data for model training and 20% for validation, the average accuracy of this experiment was 98.2% ± 0.62 (mean ± standard deviation (SD)), the recall was 97.23% ± 2.77, the precision was 98.02% ± 2.07, the network parameters amounted to 11.83 MB, and the validation time was 9.83 s/sample. Classification accuracy was improved by 20.98% compared to the base CNN network, whose validation time was 10.38 s/sample. Among the other classification networks, InceptionV3 showed the best classification results, but with a larger number of parameters and a validation time of 33 s/sample; EI-CNNet improved classification accuracy by 6.51% over it. CONCLUSION EI-CNNet demonstrated promising diagnostic performance, has the potential to reduce the workload of radiologists, and may help determine in time whether a tumor is primary or metastatic, which might otherwise be missed or misjudged.
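The ROC-based evaluation applied to EI-CNNet can be reproduced from first principles: the AUC equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one (the Mann-Whitney formulation). A small self-contained sketch with illustrative scores, not the paper's data:

```python
import numpy as np

def roc_auc(labels: np.ndarray, scores: np.ndarray) -> float:
    """AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs ranked correctly (ties count half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Illustrative labels and classifier scores for six samples.
labels = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.5, 0.3, 0.1])
auc = roc_auc(labels, scores)   # 8 of 9 pairs ranked correctly
```

The pairwise form is O(n²) and suits small validation sets; large-scale evaluation would use a rank-based implementation instead.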
Affiliation(s)
- Xuehu Wang
- College of Electronic and Information Engineering, Hebei University, Baoding, China
- Research Center of Machine Vision Engineering & Technology of Hebei Province, Baoding, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Baoding, China
- Nie Li
- College of Electronic and Information Engineering, Hebei University, Baoding, China
- Research Center of Machine Vision Engineering & Technology of Hebei Province, Baoding, China
- Key Laboratory of Digital Medical Engineering of Hebei Province, Baoding, China
- Xiaoping Yin
- Affiliated Hospital of Hebei University, Baoding, China
- Lihong Xing
- CT/MRI room, Affiliated Hospital of Hebei University, Baoding, Hebei Province, China
- Yongchang Zheng
- Department of Liver Surgery, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College (CAMS & PUMC), Beijing, China
12
Sauter D, Lodde G, Nensa F, Schadendorf D, Livingstone E, Kukuk M. Deep learning in computational dermatopathology of melanoma: A technical systematic literature review. Comput Biol Med 2023; 163:107083. [PMID: 37315382 DOI: 10.1016/j.compbiomed.2023.107083] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Revised: 05/10/2023] [Accepted: 05/27/2023] [Indexed: 06/16/2023]
Abstract
Deep learning (DL) has become one of the major approaches in computational dermatopathology, evidenced by a significant increase in publications on the topic in the current literature. We aim to provide a structured and comprehensive overview of peer-reviewed publications on DL applied to dermatopathology focused on melanoma. Compared with well-published DL methods on non-medical images (e.g., classification on ImageNet), this field of application poses a specific set of challenges, such as staining artifacts, large gigapixel images, and various magnification levels. Thus, we are particularly interested in the pathology-specific technical state of the art. We also aim to summarize the best performances achieved thus far with respect to accuracy, along with an overview of self-reported limitations. Accordingly, we conducted a systematic literature review of peer-reviewed journal and conference articles published between 2012 and 2022 in the databases ACM Digital Library, Embase, IEEE Xplore, PubMed, and Scopus, expanded by forward and backward searches, to identify 495 potentially eligible studies. After screening for relevance and quality, a total of 54 studies were included. We qualitatively summarized and analyzed these studies from technical, problem-oriented, and task-oriented perspectives. Our findings suggest that the technical aspects of DL for histopathology in melanoma can be further improved. DL methodology was adopted later in this field, which still lacks the wider adoption of DL methods already shown to be effective for other applications. We also discuss upcoming trends toward ImageNet-based feature extraction and larger models. While DL has achieved human-competitive accuracy in routine pathological tasks, its performance on advanced tasks is still inferior to wet-lab testing. Finally, we discuss the challenges impeding the translation of DL methods to clinical practice and provide insight into future research directions.
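A practical note on the gigapixel-image challenge the review highlights: whole-slide images are routinely cut into fixed-size tiles at a chosen magnification before any DL processing. A minimal coordinate-generation sketch; the slide dimensions and the 512-pixel tile size are illustrative assumptions, not settings from any reviewed study:

```python
def tile_coordinates(width: int, height: int, tile: int, overlap: int = 0):
    """Yield top-left (x, y) corners of tiles covering a slide.

    Tiles step by (tile - overlap); a final tile is clamped to each
    edge so every pixel is covered without reading past the boundary.
    """
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    for y in ys:
        for x in xs:
            yield (x, y)

# A 100k x 80k pixel slide cut into 512 px tiles.
coords = list(tile_coordinates(100_000, 80_000, 512))
```

In practice each coordinate is passed to a WSI reader (e.g., an OpenSlide-style `read_region` call) and background tiles are filtered out before training.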
Affiliation(s)
- Daniel Sauter
- Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany.
- Georg Lodde
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Felix Nensa
- Institute for AI in Medicine (IKIM), University Hospital Essen, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, 45147 Essen, Germany
- Dirk Schadendorf
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Markus Kukuk
- Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany
13
Zhou J, Foroughi Pour A, Deirawan H, Daaboul F, Aung TN, Beydoun R, Ahmed FS, Chuang JH. Integrative deep learning analysis improves colon adenocarcinoma patient stratification at risk for mortality. EBioMedicine 2023; 94:104726. [PMID: 37499603 PMCID: PMC10388166 DOI: 10.1016/j.ebiom.2023.104726] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2023] [Revised: 06/19/2023] [Accepted: 07/10/2023] [Indexed: 07/29/2023] Open
Abstract
BACKGROUND Colorectal cancers are the fourth most diagnosed cancer and the second leading cancer in number of deaths. Many clinical variables, pathological features, and genomic signatures are associated with patient risk, but reliable patient stratification in the clinic remains a challenging task. Here we assess how image, clinical, and genomic features can be combined to predict risk. METHODS We developed and evaluated integrative deep learning models combining formalin-fixed, paraffin-embedded (FFPE) whole slide images (WSIs), clinical variables, and mutation signatures to stratify colon adenocarcinoma (COAD) patients based on their risk of mortality. Our models were trained using a dataset of 108 patients from The Cancer Genome Atlas (TCGA) and were externally validated on a newly generated dataset of 123 COAD patients from Wayne State University (WSU) and on rectal adenocarcinoma (READ) patients in TCGA (N = 52). FINDINGS We first observe that deep learning models trained on FFPE WSIs of TCGA-COAD separate high-risk (OS < 3 years, N = 38) and low-risk (OS > 5 years, N = 25) patients (AUC = 0.81 ± 0.08, 5-year survival p < 0.0001, 5-year relative risk = 1.83 ± 0.04), though such models are less effective at predicting overall survival (OS) for moderate-risk (3 years < OS < 5 years, N = 45) patients (5-year survival p = 0.5, 5-year relative risk = 1.05 ± 0.09). We find that our integrative models combining WSIs, clinical variables, and mutation signatures can improve patient stratification for moderate-risk patients (5-year survival p < 0.0001, 5-year relative risk = 1.87 ± 0.07). Our integrative model combining image and clinical variables is also effective on an independent pathology dataset (WSU-COAD, N = 123) generated by our team (5-year survival p < 0.0001, 5-year relative risk = 1.52 ± 0.08) and on the TCGA-READ data (5-year survival p < 0.0001, 5-year relative risk = 1.18 ± 0.17). Our multicenter integrative image and clinical model trained on combined TCGA-COAD and WSU-COAD data is effective in predicting risk on TCGA-READ (5-year survival p < 0.0001, 5-year relative risk = 1.82 ± 0.13). Pathologist review of image-based heatmaps suggests that nuclear size pleomorphism, intense cellularity, and abnormal structures are associated with high risk, while low-risk regions have more regular and smaller cells. Quantitative analysis shows that high cellularity, high ratios of tumor cells, large tumor nuclei, and low immune infiltration are indicators of high-risk tiles. INTERPRETATION The improved stratification of colorectal cancer patients by our computational methods can be beneficial for treatment planning and enrollment of patients in clinical trials. FUNDING This study was supported by the National Cancer Institute (Grants R01CA230031 and P30CA034196). The funders had no role in study design, data collection and analysis, or preparation of the manuscript.
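The 5-year relative risk quoted throughout the abstract compares event rates between the model's predicted risk groups. A toy computation under that standard definition, using synthetic counts rather than the study's data:

```python
def relative_risk(events_high: int, n_high: int,
                  events_low: int, n_low: int) -> float:
    """Relative risk: the event rate in the predicted high-risk group
    divided by the event rate in the predicted low-risk group."""
    rate_high = events_high / n_high
    rate_low = events_low / n_low
    return rate_high / rate_low

# Synthetic 5-year mortality counts: 30/50 deaths among predicted
# high-risk patients vs. 16/50 among predicted low-risk patients.
rr = relative_risk(events_high=30, n_high=50, events_low=16, n_low=50)
# rr = 0.60 / 0.32 = 1.875
```

A relative risk near 1 (as for the unaided moderate-risk group above) means the stratification carries little prognostic signal.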
Affiliation(s)
- Jie Zhou
- The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA; Department of Genetics and Genome Sciences, UCONN Health, Farmington, CT, USA
- Hany Deirawan
- Department of Pathology, Wayne State University, Detroit, MI, USA; Department of Dermatology, Wayne State University, Detroit, MI, USA
- Fayez Daaboul
- Department of Pathology, Wayne State University, Detroit, MI, USA
- Thazin Nwe Aung
- Department of Pathology, Yale University, New Haven, CT, USA
- Rafic Beydoun
- Department of Pathology, Wayne State University, Detroit, MI, USA
- Jeffrey H Chuang
- The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA; Department of Genetics and Genome Sciences, UCONN Health, Farmington, CT, USA.
14
Doeleman T, Hondelink LM, Vermeer MH, van Dijk MR, Schrader AMR. Artificial intelligence in digital pathology of cutaneous lymphomas: a review of the current state and future perspectives. Semin Cancer Biol 2023:S1044-579X(23)00095-0. [PMID: 37331571 DOI: 10.1016/j.semcancer.2023.06.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2022] [Revised: 06/01/2023] [Accepted: 06/12/2023] [Indexed: 06/20/2023]
Abstract
Primary cutaneous lymphomas (CLs) represent a heterogeneous group of T-cell and B-cell lymphomas that present in the skin without evidence of extracutaneous involvement at the time of diagnosis. CLs are largely distinct from their systemic counterparts in clinical presentation, histopathology, and biological behavior and, therefore, require different therapeutic management. An additional diagnostic burden is added by the fact that several benign inflammatory dermatoses mimic CL subtypes, requiring clinicopathological correlation for definitive diagnosis. Due to the heterogeneity and rarity of CL, adjunct diagnostic tools are welcomed, especially by pathologists without expertise in this field or with limited access to a centralized specialist panel. The transition into digital pathology workflows enables artificial intelligence (AI)-based analysis of patients' whole-slide pathology images (WSIs). AI can be used to automate manual processes in histopathology but, more importantly, can be applied to complex diagnostic tasks, making it especially suitable for rare diseases like CL. To date, AI-based applications for CL have been minimally explored in the literature. However, in other skin cancers and systemic lymphomas, disciplines recognized here as the building blocks for CL, several studies have demonstrated promising results using AI for disease diagnosis and subclassification, cancer detection, specimen triaging, and outcome prediction. Additionally, AI allows discovery of novel biomarkers and may help quantify established biomarkers. This review summarizes and blends applications of AI in the pathology of skin cancer and lymphoma and proposes how these findings can be applied to the diagnostics of CL.
Affiliation(s)
- Thom Doeleman
- Department of Pathology, Leiden University Medical Centre, Leiden, the Netherlands; Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands.
- Liesbeth M Hondelink
- Department of Pathology, Leiden University Medical Centre, Leiden, the Netherlands
- Maarten H Vermeer
- Department of Dermatology, Leiden University Medical Center, Leiden, the Netherlands
- Marijke R van Dijk
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Anne M R Schrader
- Department of Pathology, Leiden University Medical Centre, Leiden, the Netherlands
15
Liu N, Rejeesh MR, Sundararaj V, Gunasundari B. ACO-KELM: Anti Coronavirus Optimized Kernel-based Softplus Extreme Learning Machine for Classification of Skin Cancer. EXPERT SYSTEMS WITH APPLICATIONS 2023:120719. [PMID: 37362255 PMCID: PMC10268820 DOI: 10.1016/j.eswa.2023.120719] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Revised: 04/28/2023] [Accepted: 06/03/2023] [Indexed: 06/28/2023]
Abstract
Due to the presence of redundant and irrelevant features in large-dimensional biomedical datasets, the prediction accuracy of disease diagnosis can often be decreased. Therefore, it is important to adopt feature extraction methodologies that can deal with problem structures and identify underlying data patterns. In this paper, we propose a novel approach called the Anti Coronavirus Optimized Kernel-based Softplus Extreme Learning Machine (ACO-KSELM) to accurately predict different types of skin cancer by analyzing high-dimensional datasets. To evaluate the proposed ACO-KSELM method, we used four different skin cancer image datasets: ISIC 2016, ACS, HAM10000, and PAD-UFES-20. These dermoscopic image datasets were preprocessed using Gaussian filters to remove noise and artifacts, and relevant features based on color, texture, and shape were extracted using color histogram, Haralick texture, and Hu moment extraction approaches, respectively. Finally, the proposed ACO-KSELM method accurately predicted and classified the extracted features into Basal Cell Carcinoma (BCC), Squamous Cell Carcinoma (SCC), Actinic Keratosis (ACK), Seborrheic Keratosis (SEK), Bowen's disease (BOD), Melanoma (MEL), and Nevus (NEV) categories. The analytical results showed that the proposed method achieved a higher rate of prediction accuracy of about 98.9%, 98.7%, 98.6%, and 97.9% for the ISIC 2016, ACS, HAM10000, and PAD-UFES-20 datasets, respectively.
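Of the three hand-crafted descriptors this pipeline extracts, the color histogram is the simplest to illustrate: per-channel intensity counts concatenated into one normalized feature vector. A NumPy sketch; the 8-bin resolution and the random patch are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Per-channel intensity histograms of an (H, W, 3) uint8 image,
    normalized per channel and concatenated into one feature vector."""
    feats = []
    for c in range(3):
        hist, _ = np.histogram(image[:, :, c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())   # each channel sums to 1
    return np.concatenate(feats)          # shape (3 * bins,)

# Synthetic 32x32 RGB patch standing in for a preprocessed lesion image.
rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
vec = color_histogram(patch)
```

Haralick texture and Hu moment descriptors would be computed analogously and concatenated with this vector before classification.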
Affiliation(s)
- Nannan Liu
- School of Electronic and Information Engineering, Ningbo University of Technology, Ningbo, 315211, China
- M R Rejeesh
- REVIRE Intelligence LLP, Eraviputoorakadi, Tamil Nadu, India
- B Gunasundari
- Department of IT, REVIRE Intelligence LLP, Tamil Nadu, India
16
Grossarth S, Mosley D, Madden C, Ike J, Smith I, Huo Y, Wheless L. Recent Advances in Melanoma Diagnosis and Prognosis Using Machine Learning Methods. Curr Oncol Rep 2023; 25:635-645. [PMID: 37000340 PMCID: PMC10339689 DOI: 10.1007/s11912-023-01407-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/13/2023] [Indexed: 04/01/2023]
Abstract
PURPOSE OF REVIEW The purpose was to summarize the current role and state of artificial intelligence and machine learning in the diagnosis and management of melanoma. RECENT FINDINGS Deep learning algorithms can identify melanoma from clinical, dermoscopic, and whole slide pathology images with increasing accuracy. Efforts to provide more granular annotation to datasets and to identify new predictors are ongoing. There have been many incremental advances in both melanoma diagnostics and prognostic tools using artificial intelligence and machine learning. Higher quality input data will further improve these models' capabilities.
Affiliation(s)
- Sarah Grossarth
- Quillen College of Medicine, East Tennessee State University, Johnson City, TN, USA
- Christopher Madden
- Department of Dermatology, Vanderbilt University Medical Center, Nashville, TN, USA
- State University of New York Downstate College of Medicine, Brooklyn, NY, USA
- Jacqueline Ike
- Department of Dermatology, Vanderbilt University Medical Center, Nashville, TN, USA
- Meharry Medical College, Nashville, TN, USA
- Isabelle Smith
- Department of Dermatology, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt University, Nashville, TN, USA
- Yuankai Huo
- Department of Computer Science and Electrical Engineering, Vanderbilt University, Nashville, TN, 37235, USA
- Lee Wheless
- Department of Dermatology, Vanderbilt University Medical Center, Nashville, TN, USA.
- Department of Medicine, Division of Epidemiology, Vanderbilt University Medical Center, Nashville, TN, USA.
- Tennessee Valley Healthcare System VA Medical Center, Nashville, TN, USA.
17
Schneider L, Wies C, Krieghoff-Henning EI, Bucher TC, Utikal JS, Schadendorf D, Brinker TJ. Multimodal integration of image, epigenetic and clinical data to predict BRAF mutation status in melanoma. Eur J Cancer 2023; 183:131-138. [PMID: 36854237 DOI: 10.1016/j.ejca.2023.01.021] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 01/20/2023] [Accepted: 01/25/2023] [Indexed: 02/05/2023]
Abstract
BACKGROUND In machine learning, multimodal classifiers can provide more generalised performance than unimodal classifiers. In clinical practice, physicians usually also rely on a range of information from different examinations for diagnosis. In this study, we used BRAF mutation status prediction in melanoma as a model system to analyse the contribution of different data types in a combined classifier because BRAF status can be determined accurately by sequencing as the current gold standard, thus nearly eliminating label noise. METHODS We trained a deep learning-based classifier by combining individually trained random forests of image, clinical and methylation data to predict BRAF-V600 mutation status in primary and metastatic melanomas of The Cancer Genome Atlas cohort. RESULTS With our multimodal approach, we achieved an area under the receiver operating characteristic curve of 0.80, whereas the individual classifiers yielded areas under the receiver operating characteristic curve of 0.63 (histopathologic image data), 0.66 (clinical data) and 0.66 (methylation data) on an independent data set. CONCLUSIONS Our combined approach can predict BRAF status to some extent by identifying BRAF-V600 specific patterns at the histologic, clinical and epigenetic levels. The multimodal classifiers have improved generalisability in predicting BRAF mutation status.
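The paper's combiner is a trained model over per-modality random forests; the simplest late-fusion baseline it generalizes is a (weighted) average of per-modality predicted probabilities. A sketch with synthetic classifier outputs standing in for the image, clinical, and methylation models:

```python
import numpy as np

def late_fusion(prob_by_modality: np.ndarray, weights=None) -> np.ndarray:
    """Fuse per-modality probabilities into one score per sample.

    prob_by_modality: shape (n_modalities, n_samples), each row the
    mutation probabilities from one modality's classifier.
    weights: optional per-modality weights (default: uniform average).
    """
    m = prob_by_modality.shape[0]
    weights = np.full(m, 1.0 / m) if weights is None else np.asarray(weights, float)
    weights = weights / weights.sum()     # normalize to a convex combination
    return weights @ prob_by_modality     # shape (n_samples,)

# Illustrative BRAF-V600 probabilities for 4 samples from 3 modalities.
probs = np.array([
    [0.70, 0.20, 0.55, 0.90],   # histopathologic image model
    [0.60, 0.30, 0.50, 0.80],   # clinical model
    [0.65, 0.25, 0.45, 0.85],   # methylation model
])
fused = late_fusion(probs)
```

Replacing the fixed average with a small learned network over the three per-modality scores is the step the authors take to obtain the combined classifier.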
Affiliation(s)
- Lucas Schneider
- Digital Biomarkers for Oncology Group, German Cancer Research Centre (DKFZ), Heidelberg, Germany
- Christoph Wies
- Digital Biomarkers for Oncology Group, German Cancer Research Centre (DKFZ), Heidelberg, Germany
- Eva I Krieghoff-Henning
- Digital Biomarkers for Oncology Group, German Cancer Research Centre (DKFZ), Heidelberg, Germany
- Tabea-Clara Bucher
- Digital Biomarkers for Oncology Group, German Cancer Research Centre (DKFZ), Heidelberg, Germany
- Jochen S Utikal
- Skin Cancer Unit, German Cancer Research Center (DKFZ), Heidelberg, Germany; Department of Dermatology, Venereology and Allergology, University Medical Center Mannheim, Ruprecht-Karl University of Heidelberg, Mannheim, Germany; DKFZ Hector Cancer Institute at the University Medical Center Mannheim, Mannheim, Germany
- Dirk Schadendorf
- Department of Dermatology, University Hospital Essen, Essen, Germany
- Titus J Brinker
- Digital Biomarkers for Oncology Group, German Cancer Research Centre (DKFZ), Heidelberg, Germany.
18
Clarke EL, Wade RG, Magee D, Newton-Bishop J, Treanor D. Image analysis of cutaneous melanoma histology: a systematic review and meta-analysis. Sci Rep 2023; 13:4774. [PMID: 36959221 PMCID: PMC10036523 DOI: 10.1038/s41598-023-31526-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Accepted: 03/14/2023] [Indexed: 03/25/2023] Open
Abstract
The current subjective histopathological assessment of cutaneous melanoma is challenging. The application of image analysis algorithms to histological images may facilitate improvements in workflow and prognostication. To date, several individual algorithms applied to melanoma histological images have been reported, with variations in approach and reported accuracy. Histological digital images can be created using a camera mounted on a light microscope or through whole slide image (WSI) generation using a whole slide scanner. Before any such tool could be integrated into the clinical workflow, the accuracy of the technology should be carefully evaluated and summarised. Therefore, the objective of this review was to evaluate the accuracy of existing image analysis algorithms applied to digital histological images of cutaneous melanoma. Database searching of PubMed and Embase from inception to 11th March 2022 was conducted, alongside citation checking and examining reports from organisations. All studies reporting the accuracy of any image analysis applied to histological images of cutaneous melanoma were included. The reference standard was any histological assessment of haematoxylin and eosin-stained slides and/or immunohistochemical staining. Citations were independently deduplicated and screened by two review authors, and disagreements were resolved through discussion. Data were extracted on study demographics, type of image analysis, type of reference standard, conditions included, and the test statistics needed to construct 2 × 2 tables, in accordance with our protocol and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses-Diagnostic Test Accuracy (PRISMA-DTA) Statement. A bivariate random-effects meta-analysis was used to estimate summary sensitivities and specificities with 95% confidence intervals (CI). Assessment of methodological quality was conducted using a tailored version of the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. The primary outcome was the pooled sensitivity and specificity of image analysis applied to cutaneous melanoma histological images. Sixteen studies were included in the systematic review, representing 4,888 specimens. Six studies were included in the meta-analysis. The mean sensitivity and specificity of automated image analysis algorithms applied to melanoma histological images were 90% (CI 82%, 95%) and 92% (CI 79%, 97%), respectively. Based on limited and heterogeneous data, image analysis appears to offer high accuracy when applied to histological images of cutaneous melanoma. However, given the early exploratory nature of these studies, further development work is necessary to improve performance.
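Each study's contribution to the pooled estimates starts from the sensitivity and specificity computed on its 2 × 2 table. The per-study quantities are straightforward; a minimal sketch with illustrative counts (chosen to echo the pooled 90%/92%, not taken from any included study):

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity and specificity from a 2x2 diagnostic table."""
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    return sensitivity, specificity

# Illustrative counts for one study: 90 TP, 8 FP, 10 FN, 92 TN.
sens, spec = diagnostic_accuracy(tp=90, fp=8, fn=10, tn=92)
```

The bivariate random-effects model then pools these per-study pairs jointly, accounting for the correlation between sensitivity and specificity across studies.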
Affiliation(s)
- Emily L Clarke
- Department of Histopathology, Leeds Teaching Hospitals NHS Trust, Leeds, UK.
- Division of Pathology and Data Analytics, Leeds Institute of Cancer and Pathology, University of Leeds, Beckett Street, Leeds, LS9 7TF, UK.
- Ryckie G Wade
- Leeds Institute for Medical Research, University of Leeds, Leeds, UK
- Derek Magee
- School of Computing, University of Leeds, Leeds, UK
- Julia Newton-Bishop
- Division of Pathology and Data Analytics, Leeds Institute of Cancer and Pathology, University of Leeds, Beckett Street, Leeds, LS9 7TF, UK
- Darren Treanor
- Department of Histopathology, Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Division of Pathology and Data Analytics, Leeds Institute of Cancer and Pathology, University of Leeds, Beckett Street, Leeds, LS9 7TF, UK
- Department of Clinical Pathology, and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
19
Deep Learning for Skin Melanocytic Tumors in Whole-Slide Images: A Systematic Review. Cancers (Basel) 2022; 15:cancers15010042. [PMID: 36612037 PMCID: PMC9817526 DOI: 10.3390/cancers15010042] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 12/05/2022] [Accepted: 12/16/2022] [Indexed: 12/24/2022] Open
Abstract
The rise of Artificial Intelligence (AI) has shown promising performance as a support tool in clinical pathology workflows. In addition to the well-known interobserver variability between dermatopathologists, melanomas present a significant challenge in their histological interpretation. This study aims to analyze all previously published studies on whole-slide images of melanocytic tumors that rely on deep learning techniques for automatic image analysis. Embase, PubMed, Web of Science, and Virtual Health Library were used to search for relevant studies for the systematic review, in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. Articles from 2015 to July 2022 were included, with an emphasis placed on the artificial intelligence methods used. Twenty-eight studies that fulfilled the inclusion criteria were grouped by clinical objective into four categories: pathologists versus deep learning models (n = 10), diagnostic prediction (n = 7), prognosis (n = 5), and histological features (n = 6). These were then analyzed to draw conclusions on the general parameters and conditions of AI in pathology, as well as the factors necessary for better performance in real scenarios.
20
Diagnostic and Prognostic Deep Learning Applications for Histological Assessment of Cutaneous Melanoma. Cancers (Basel) 2022; 14:cancers14246231. [PMID: 36551716 PMCID: PMC9776963 DOI: 10.3390/cancers14246231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Revised: 12/08/2022] [Accepted: 12/14/2022] [Indexed: 12/23/2022] Open
Abstract
Melanoma is among the most devastating human malignancies. Accurate diagnosis and prognosis are essential to offer optimal treatment. Histopathology is the gold standard for establishing melanoma diagnosis and prognostic features. However, discrepancies often exist between pathologists, and analysis is costly and time-consuming. Deep-learning algorithms are being deployed to improve melanoma diagnosis and prognostication from histological images. In recent years, the development of these machine-learning tools has accelerated, and machine learning is poised to become a clinical tool to aid melanoma histology. Nevertheless, a review of the advances in machine learning in melanoma histology was lacking. We performed a comprehensive literature search to provide a complete overview of the recent advances in machine learning in the assessment of melanoma based on hematoxylin-eosin digital pathology images. In this work, we review 37 recent publications, compare the methods and performance of the reviewed studies, and highlight the variety of promising machine-learning applications in melanoma histology.
21
Kim I, Kang K, Song Y, Kim TJ. Application of Artificial Intelligence in Pathology: Trends and Challenges. Diagnostics (Basel) 2022; 12:diagnostics12112794. [PMID: 36428854 PMCID: PMC9688959 DOI: 10.3390/diagnostics12112794] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Revised: 11/03/2022] [Accepted: 11/11/2022] [Indexed: 11/16/2022] Open
Abstract
Given the recent success of artificial intelligence (AI) in computer vision applications, many pathologists anticipate that AI will be able to assist them in a variety of digital pathology tasks. At the same time, tremendous advancements in deep learning have enabled a synergy with AI, allowing for image-based diagnosis on the background of digital pathology. Efforts are underway to develop AI-based tools that save pathologists time and eliminate errors. Here, we describe the elements in the development of computational pathology (CPATH), its applicability to AI development, and the challenges it faces, such as algorithm validation and interpretability, computing systems, reimbursement, ethics, and regulations. Furthermore, we present an overview of novel AI-based approaches that could be integrated into pathology laboratory workflows.
Affiliation(s)
- Inho Kim
- College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
| | - Kyungmin Kang
- College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
| | - Youngjae Song
- College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
| | - Tae-Jung Kim
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Republic of Korea
- Correspondence: ; Tel.: +82-2-3779-2157
| |
Collapse
|
22
|
Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:6702380. [PMID: 36124675 PMCID: PMC9677480 DOI: 10.1093/bib/bbac367] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 07/27/2022] [Accepted: 08/05/2022] [Indexed: 12/14/2022] Open
Abstract
In common medical procedures, the time-consuming and expensive nature of obtaining test results plagues doctors and patients. Digital pathology research allows computational technologies to be used for data management, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more up-to-date and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review investigates the use of the most popular image data, hematoxylin-eosin-stained tissue slide images, as a strategic solution to the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology plays in assisting doctors' work and discusses the opportunities and challenges of AI.
Collapse
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Lianhe Zhao
- Corresponding authors: Yi Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356; E-mail: ; Lianhe Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324; E-mail:
| | - Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
| | - Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
| | - Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
| | - Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
| | - Yi Zhao
- Corresponding authors: Yi Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences. Tel.: +86 10 6260 0822; Fax: +86 10 6260 1356; E-mail: ; Lianhe Zhao, Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences. Tel.: +86 18513983324; E-mail:
| |
Collapse
|
23
|
Histologic Screening of Malignant Melanoma, Spitz, Dermal and Junctional Melanocytic Nevi Using a Deep Learning Model. Am J Dermatopathol 2022; 44:650-657. [PMID: 35925282 DOI: 10.1097/dad.0000000000002232] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVE The integration of an artificial intelligence tool into pathologists' workflow may lead to a more accurate and timely diagnosis of melanocytic lesions, directly benefiting patient care. The objective of this study was to create and evaluate the performance of such a model in achieving clinical-grade diagnoses of Spitz nevi, dermal and junctional melanocytic nevi, and melanomas. METHODS We created a beginner-level training environment by teaching our algorithm to perform cytologic inferences on 136,216 manually annotated tiles of hematoxylin and eosin-stained slides consisting of unequivocal melanocytic nevi, Spitz nevi, and invasive melanoma cases. We sequentially trained and tested our network to provide a final diagnostic classification on 39 cases in total. Positive predictive value (precision) and sensitivity (recall) were used to measure our performance. RESULTS The tile-classification algorithm predicted the 136,216 irrelevant, melanoma, melanocytic nevi, and Spitz nevi tiles at sensitivities of 96%, 93%, 94%, and 73%, respectively. The final trained model correctly classified and predicted the correct diagnosis in 85.7% of unseen cases (n = 28), reporting at or near screening-level performances for precision and recall of melanoma (76.2%, 100.0%), melanocytic nevi (100.0%, 75.0%), and Spitz nevi (100.0%, 75.0%). CONCLUSIONS Our pilot study shows that convolutional networks trained on cellular morphology to classify melanocytic proliferations can be used as a powerful tool to assist pathologists in screening for melanoma versus other benign lesions.
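The per-class precision (positive predictive value) and sensitivity (recall) figures reported above follow the standard definitions; a minimal sketch, with toy labels that are illustrative only and not taken from the study:

```python
def precision_recall(y_true, y_pred, positive):
    """Precision (PPV) and recall (sensitivity) for one class in a multi-class task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Toy three-class example (class names mirror the study's categories)
y_true = ["melanoma", "nevus", "spitz", "melanoma", "nevus"]
y_pred = ["melanoma", "nevus", "nevus", "melanoma", "spitz"]
p, r = precision_recall(y_true, y_pred, "melanoma")
```

Computing the pair per class, as the study reports it, makes the screening trade-off explicit: 100% recall for melanoma at 76.2% precision means no melanoma is missed at the cost of some false alarms.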
Collapse
|
24
|
Dong X, Li M, Zhou P, Deng X, Li S, Zhao X, Wu Y, Qin J, Guo W. Fusing pre-trained convolutional neural networks features for multi-differentiated subtypes of liver cancer on histopathological images. BMC Med Inform Decis Mak 2022; 22:122. [PMID: 35509058 PMCID: PMC9066403 DOI: 10.1186/s12911-022-01798-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Accepted: 02/21/2022] [Indexed: 11/10/2022] Open
Abstract
Liver cancer is a malignant tumor with high morbidity and mortality, which has a tremendous negative impact on human survival. However, recognizing tens of thousands of histopathological images of liver cancer by the naked eye is a challenging task that poses numerous difficulties for inexperienced clinicians. In addition, factors such as the long time required, the tedious work, and the huge number of images impose a great burden on clinical diagnosis. Therefore, our study combines convolutional neural networks with histopathology images and adopts a feature fusion approach to help clinicians efficiently discriminate the differentiation types of primary hepatocellular carcinoma histopathology images, thus improving their diagnostic efficiency and relieving their work pressure. In this study, for the first time, 73 patients with different differentiation types of primary liver cancer tumors were classified. We performed a thorough classification evaluation of liver cancer differentiation types using four pre-trained deep convolutional neural networks and nine different machine learning (ML) classifiers on a dataset of liver cancer histopathology images with multiple differentiation types. Test set accuracy, validation set accuracy, running time under different strategies, precision, recall, and F1 value were used for comparative evaluation. Experimental results show that the fusion network (FuNet) structure is a good choice: it covers both channel attention and spatial attention, suppressing interference from channels that carry less information, and it clarifies the importance of each spatial location by learning location-specific weights, which is then applied to the classification of multi-differentiated types of liver cancer. In addition, in most cases, the stacking-based ensemble learning classifier outperforms the other ML classifiers in this task when the FuNet fusion strategy is used and the fused features are reduced by principal component analysis (PCA); a satisfactory accuracy of 72.46% is achieved on the test set, which has certain practicality.
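The PCA-then-stacking pipeline described above can be sketched with scikit-learn. The synthetic features, component count, and base learners below are illustrative stand-ins, since the paper's fused CNN features and exact ensemble configuration are not reproduced here:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in for fused CNN features: 3 classes mimic differentiation grades
X, y = make_classification(n_samples=200, n_features=64, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(
    PCA(n_components=16),                       # reduce the fused feature vector
    StackingClassifier(                         # stacking-based ensemble learner
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("svm", SVC(probability=True, random_state=0))],
        final_estimator=LogisticRegression(max_iter=1000)))
acc = model.fit(X_tr, y_tr).score(X_te, y_te)   # test-set accuracy
```

Running PCA before the ensemble, as the study does, keeps the stacked base learners from overfitting the high-dimensional fused features.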
Collapse
Affiliation(s)
- Xiaogang Dong
- Department of Hepatopancreatobiliary Surgery, Cancer Affiliated Hospital of Xinjiang Medical University, Ürümqi, Xinjiang, China
| | - Min Li
- Key Laboratory of Signal Detection and Processing, Xinjiang University, Ürümqi, 830046, China; College of Information Science and Engineering, Xinjiang University, Ürümqi, 830046, China
| | - Panyun Zhou
- College of Software, Xinjiang University, Ürümqi, 830046, China
| | - Xin Deng
- College of Software, Xinjiang University, Ürümqi, 830046, China
| | - Siyu Li
- College of Software, Xinjiang University, Ürümqi, 830046, China
| | - Xingyue Zhao
- College of Software, Xinjiang University, Ürümqi, 830046, China
| | - Yi Wu
- College of Software, Xinjiang University, Ürümqi, 830046, China
| | - Jiwei Qin
- College of Information Science and Engineering, Xinjiang University, Ürümqi, 830046, China.
| | - Wenjia Guo
- Cancer Institute, Affiliated Cancer Hospital of Xinjiang Medical University, Ürümqi, 830011, China; Key Laboratory of Oncology of Xinjiang Uyghur Autonomous Region, Ürümqi, 830011, China
| |
Collapse
|
25
|
Bao Y, Zhang J, Zhao X, Zhou H, Chen Y, Jian J, Shi T, Gao X. Deep Learning-Based Fully Automated Diagnosis of Melanocytic Lesions by Using Whole Slide Images. J DERMATOL TREAT 2022; 33:2571-2577. [PMID: 35112978 DOI: 10.1080/09546634.2022.2038772] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Background Erroneous diagnoses of melanocytic lesions (benign, atypical, and malignant types) result in inappropriate surgical treatment plans. Objective To propose a deep learning (DL)-based fully automated diagnostic method using whole slide images (WSIs) for melanocytic lesions. Methods The method consisted of patch prediction using a DL model and patient diagnosis using an aggregation module. The method was developed with 745 WSIs, and evaluated using internal and external testing sets comprising 182 WSIs and 54 WSIs, respectively. The results were compared with those of the classification by one junior and two senior pathologists. Furthermore, we compared the performance of the three pathologists in the classification of melanocytic lesions with and without the assistance of our method. Results The method achieved an accuracy of 0.963 and 0.930 on the internal and external testing sets, respectively, which was significantly higher than that of the junior pathologist (0.419 and 0.535). With assistance from the method, all three pathologists achieved higher accuracy on the internal and external testing sets; the accuracy of the junior pathologist increased by 39.0% and 30.2%, respectively (p < 0.05). Conclusion This generalizable method can accurately classify melanocytic lesions and effectively improve the diagnostic accuracy of pathologists.
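The patch-prediction-plus-aggregation design above can be illustrated with a simple mean-pooling rule. The class names and the pooling choice are assumptions for illustration; the paper's actual aggregation module is not detailed here:

```python
def aggregate_slide(patch_probs, classes=("benign", "atypical", "malignant")):
    """Average patch-level class probabilities into one slide-level diagnosis.

    patch_probs: list of per-patch probability vectors, one entry per class.
    Mean pooling is one simple aggregation rule among several possibilities.
    """
    n = len(patch_probs)
    mean = [sum(p[i] for p in patch_probs) / n for i in range(len(classes))]
    call = classes[max(range(len(classes)), key=mean.__getitem__)]
    return call, mean

# Three hypothetical patches from one WSI: [benign, atypical, malignant] probs
diagnosis, slide_probs = aggregate_slide([
    [0.7, 0.2, 0.1],
    [0.2, 0.2, 0.6],
    [0.1, 0.1, 0.8],
])
```

Pooling patch scores lets a model trained on small tiles scale to gigapixel WSIs without ever loading the full slide into memory.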
Collapse
Affiliation(s)
- Yongyang Bao
- Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, China
| | - Jiayi Zhang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China
| | - Xingyu Zhao
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China
| | - Henghua Zhou
- Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, China
| | - Ying Chen
- Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, China
| | - Junming Jian
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China
| | - Tianlei Shi
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China
| | - Xin Gao
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu, China; Jinan Guoke Medical Engineering and Technology Development Co., Ltd., Jinan, Shandong, China
| |
Collapse
|
26
|
Korfiati A, Grafanaki K, Kyriakopoulos GC, Skeparnias I, Georgiou S, Sakellaropoulos G, Stathopoulos C. Revisiting miRNA Association with Melanoma Recurrence and Metastasis from a Machine Learning Point of View. Int J Mol Sci 2022; 23:1299. [PMID: 35163222 PMCID: PMC8836065 DOI: 10.3390/ijms23031299] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 01/20/2022] [Accepted: 01/20/2022] [Indexed: 02/07/2023] Open
Abstract
The diagnostic and prognostic value of miRNAs in cutaneous melanoma (CM) has been broadly studied and supported by advanced bioinformatics tools. From early studies using miRNA arrays with several limitations, to the recent NGS-derived miRNA expression profiles, an accurate diagnostic panel of a comprehensive pre-specified set of miRNAs that could aid timely identification of specific cancer stages is still elusive, mainly because of the heterogeneity of the approaches and the samples. Herein, we summarize the existing studies that report several miRNAs as important diagnostic and prognostic biomarkers in CM. Using publicly available NGS data, we analyzed the correlation of specific miRNA expression profiles with the expression signatures of known gene targets. Combining network analytics with machine learning, we developed specific non-linear classification models that could successfully predict CM recurrence and metastasis, based on two newly identified miRNA signatures. Subsequent unbiased analyses and independent test sets (i.e., a dataset not used for training, as a validation cohort) using our prediction models resulted in 73.85% and 82.09% accuracy in predicting CM recurrence and metastasis, respectively. Overall, our approach combines detailed analysis of miRNA profiles with heuristic optimization and machine learning, which facilitates dimensionality reduction and optimization of the prediction models. Our approach provides an improved prediction strategy that could serve as an auxiliary tool towards precision treatment.
Collapse
Affiliation(s)
- Aigli Korfiati
- Department of Medical Physics, School of Medicine, University of Patras, 26504 Patras, Greece; (A.K.); (G.S.)
| | - Katerina Grafanaki
- Department of Dermatology, School of Medicine, University of Patras, 26504 Patras, Greece;
| | | | - Ilias Skeparnias
- Laboratory of Molecular Biology, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD 20892, USA;
| | - Sophia Georgiou
- Department of Dermatology, School of Medicine, University of Patras, 26504 Patras, Greece;
| | - George Sakellaropoulos
- Department of Medical Physics, School of Medicine, University of Patras, 26504 Patras, Greece; (A.K.); (G.S.)
| | | |
Collapse
|
27
|
Lightweight convolutional neural network with knowledge distillation for cervical cells classification. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103177] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
|
28
|
Schneider L, Laiouar-Pedari S, Kuntz S, Krieghoff-Henning E, Hekler A, Kather JN, Gaiser T, Fröhling S, Brinker TJ. Integration of deep learning-based image analysis and genomic data in cancer pathology: A systematic review. Eur J Cancer 2021; 160:80-91. [PMID: 34810047 DOI: 10.1016/j.ejca.2021.10.007] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2021] [Accepted: 10/11/2021] [Indexed: 02/07/2023]
Abstract
BACKGROUND Over the past decade, the development of molecular high-throughput methods (omics) increased rapidly and provided new insights for cancer research. In parallel, deep learning approaches revealed the enormous potential for medical image analysis, especially in digital pathology. Combining image and omics data with deep learning tools may enable the discovery of new cancer biomarkers and a more precise prediction of patient prognosis. This systematic review addresses different multimodal fusion methods of convolutional neural network-based image analyses with omics data, focussing on the impact of data combination on the classification performance. METHODS PubMed was screened for peer-reviewed articles published in English between January 2015 and June 2021 by two independent researchers. Search terms related to deep learning, digital pathology, omics, and multimodal fusion were combined. RESULTS We identified a total of 11 studies meeting the inclusion criteria, namely studies that used convolutional neural networks for haematoxylin and eosin image analysis of patients with cancer in combination with integrated omics data. Publications were categorised according to their endpoints: 7 studies focused on survival analysis and 4 studies on prediction of cancer subtypes, malignancy or microsatellite instability with spatial analysis. CONCLUSIONS Image-based classifiers already show high performances in prognostic and predictive cancer diagnostics. The integration of omics data led to improved performance in all studies described here. However, these are very early studies that still require external validation to demonstrate their generalisability and robustness. Further and more comprehensive studies with larger sample sizes are needed to evaluate performance and determine clinical benefits.
Collapse
Affiliation(s)
- Lucas Schneider
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Sara Laiouar-Pedari
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Sara Kuntz
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Eva Krieghoff-Henning
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Achim Hekler
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Jakob N Kather
- Department of Medicine III, RWTH Aachen University Hospital, Aachen, Germany; Medical Oncology, National Center for Tumour Diseases, University Hospital Heidelberg, Heidelberg, Germany
| | - Timo Gaiser
- Institute of Pathology, University Medical Centre Mannheim, University of Heidelberg, Mannheim, Germany
| | - Stefan Fröhling
- Translational Medical Oncology, National Center for Tumour Diseases, German Cancer Research Center, Heidelberg, Germany
| | - Titus J Brinker
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany.
| |
Collapse
|
29
|
Kiehl L, Kuntz S, Höhn J, Jutzi T, Krieghoff-Henning E, Kather JN, Holland-Letz T, Kopp-Schneider A, Chang-Claude J, Brobeil A, von Kalle C, Fröhling S, Alwers E, Brenner H, Hoffmeister M, Brinker TJ. Deep learning can predict lymph node status directly from histology in colorectal cancer. Eur J Cancer 2021; 157:464-473. [PMID: 34649117 DOI: 10.1016/j.ejca.2021.08.039] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2021] [Revised: 08/23/2021] [Accepted: 08/30/2021] [Indexed: 12/11/2022]
Abstract
BACKGROUND Lymph node status is a prognostic marker and strongly influences therapeutic decisions in colorectal cancer (CRC). OBJECTIVES The objective of the study is to investigate whether image features extracted by a deep learning model from routine histological slides and/or clinical data can be used to predict CRC lymph node metastasis (LNM). METHODS Using histological whole slide images (WSIs) of primary tumours of 2431 patients in the DACHS cohort, we trained a convolutional neural network to predict LNM. In parallel, we used clinical data derived from the same cases in logistic regression analyses. Subsequently, the slide-based artificial intelligence predictor (SBAIP) score was included in the regression. WSIs and data from 582 patients of the TCGA cohort were used as the external test set. RESULTS On the internal test set, the SBAIP achieved an area under receiver operating characteristic (AUROC) of 71.0%, the clinical classifier achieved an AUROC of 67.0% and a combination of the two classifiers yielded an improvement to 74.1%. Whereas the clinical classifier's performance remained stable on the TCGA set, performance of the SBAIP dropped to an AUROC of 61.2%. Performance of the clinical classifier depended strongly on the T stage. CONCLUSION Deep learning-based image analysis may help predict LNM of patients with CRC using routine histological slides. Combination with clinical data such as T stage might be useful. Strategies to increase performance of the SBAIP on external images should be investigated.
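AUROC, the metric used above to compare the image-based (SBAIP) and clinical classifiers, can be computed from ranks alone via the Mann-Whitney formulation; a minimal sketch, not the study's code:

```python
def auroc(y_true, scores):
    """AUROC as the probability that a random positive outranks a random negative."""
    pos = [s for s, t in zip(scores, y_true) if t == 1]
    neg = [s for s, t in zip(scores, y_true) if t == 0]
    # Count pairwise wins; ties count half
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# One positive (0.35) ranks below one negative (0.4): 3 of 4 pairs won
score = auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 0.75
```

Because AUROC depends only on the ranking of scores, the image-based SBAIP score and the clinical logistic-regression output can be compared on the same scale before and after combination.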
Collapse
Affiliation(s)
- Lennard Kiehl
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Sara Kuntz
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Julia Höhn
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Tanja Jutzi
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Eva Krieghoff-Henning
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Jakob N Kather
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
| | - Tim Holland-Letz
- Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | | | - Jenny Chang-Claude
- Division of Cancer Epidemiology, German Cancer Research Center (DKFZ), Heidelberg, Germany; Cancer Epidemiology Group, University Cancer Center Hamburg (UCCH), University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
| | - Alexander Brobeil
- Institute of Pathology, University of Heidelberg, Heidelberg, Germany; Tissue Bank of the National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Christof von Kalle
- Berlin Institute of Health (BIH) and Charité University Medicine, Berlin, Germany
| | - Stefan Fröhling
- Department of Translational Medical Oncology, National Center for Tumor Diseases (NCT), Heidelberg, Germany
| | - Elizabeth Alwers
- Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Hermann Brenner
- Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), Heidelberg, Germany; Division of Preventive Oncology, German Cancer Research Center (DKFZ) and National Center for Tumor Diseases (NCT), Heidelberg, Germany; German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Michael Hoffmeister
- Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Titus J Brinker
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany; German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Heidelberg, Germany.
| |
Collapse
|
30
|
Skin cancer classification via convolutional neural networks: systematic review of studies involving human experts. Eur J Cancer 2021; 156:202-216. [PMID: 34509059 DOI: 10.1016/j.ejca.2021.06.049] [Citation(s) in RCA: 79] [Impact Index Per Article: 26.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2021] [Revised: 06/18/2021] [Accepted: 06/28/2021] [Indexed: 12/23/2022]
Abstract
BACKGROUND Multiple studies have compared the performance of artificial intelligence (AI)-based models for automated skin cancer classification to human experts, thus setting the cornerstone for a successful translation of AI-based tools into clinicopathological practice. OBJECTIVE The objective of the study was to systematically analyse the current state of research on reader studies involving melanoma and to assess their potential clinical relevance by evaluating three main aspects: test set characteristics (holdout/out-of-distribution data set, composition), test setting (experimental/clinical, inclusion of metadata) and representativeness of participating clinicians. METHODS PubMed, Medline and ScienceDirect were screened for peer-reviewed studies published between 2017 and 2021 and dealing with AI-based skin cancer classification involving melanoma. The search terms skin cancer classification, deep learning, convolutional neural network (CNN), melanoma (detection), digital biomarkers, histopathology and whole slide imaging were combined. Based on the search results, only studies that considered direct comparison of AI results with clinicians and had a diagnostic classification as their main objective were included. RESULTS A total of 19 reader studies fulfilled the inclusion criteria. Of these, 11 CNN-based approaches addressed the classification of dermoscopic images; 6 concentrated on the classification of clinical images, whereas 2 dermatopathological studies utilised digitised histopathological whole slide images. CONCLUSIONS All 19 included studies demonstrated superior or at least equivalent performance of CNN-based classifiers compared with clinicians. However, almost all studies were conducted in highly artificial settings based exclusively on single images of the suspicious lesions. Moreover, test sets mainly consisted of holdout images and did not represent the full range of patient populations and melanoma subtypes encountered in clinical practice.
Collapse
|
31
|
Kuntz S, Krieghoff-Henning E, Kather JN, Jutzi T, Höhn J, Kiehl L, Hekler A, Alwers E, von Kalle C, Fröhling S, Utikal JS, Brenner H, Hoffmeister M, Brinker TJ. Gastrointestinal cancer classification and prognostication from histology using deep learning: Systematic review. Eur J Cancer 2021; 155:200-215. [PMID: 34391053 DOI: 10.1016/j.ejca.2021.07.012] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Accepted: 07/06/2021] [Indexed: 02/07/2023]
Abstract
BACKGROUND Gastrointestinal cancers account for approximately 20% of all cancer diagnoses and are responsible for 22.5% of cancer deaths worldwide. Artificial intelligence-based diagnostic support systems, in particular convolutional neural network (CNN)-based image analysis tools, have shown great potential in medical computer vision. In this systematic review, we summarise recent studies reporting CNN-based approaches for digital biomarkers for characterization and prognostication of gastrointestinal cancer pathology. METHODS PubMed and Medline were screened for peer-reviewed papers dealing with CNN-based gastrointestinal cancer analyses from histological slides, published between 2015 and 2020. Seven hundred and ninety titles and abstracts were screened, and 58 full-text articles were assessed for eligibility. RESULTS Sixteen publications fulfilled our inclusion criteria dealing with tumor or precursor lesion characterization or prognostic and predictive biomarkers: 14 studies on colorectal or rectal cancer, three studies on gastric cancer and none on esophageal cancer. These studies were categorised according to their end-points: polyp characterization, tumor characterization and patient outcome. Regarding the translation into clinical practice, we identified several studies demonstrating generalization of the classifier with external tests and comparisons with pathologists, but none presenting clinical implementation. CONCLUSIONS Results of recent studies on CNN-based image analysis in gastrointestinal cancer pathology are promising, but studies were conducted in observational and retrospective settings. Large-scale trials are needed to assess performance and predict clinical usefulness. Furthermore, large-scale trials are required for approval of CNN-based prediction models as medical devices.
Collapse
Affiliation(s)
- Sara Kuntz
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Eva Krieghoff-Henning
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Jakob N Kather
- Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany
| | - Tanja Jutzi
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Julia Höhn
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Lennard Kiehl
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Achim Hekler
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Elizabeth Alwers
- Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Christof von Kalle
- Department of Clinical-Translational Sciences, Charité University Medicine and Berlin Institute of Health (BIH), Berlin, Germany
| | - Stefan Fröhling
- Department of Translational Medical Oncology, National Center for Tumor Diseases (NCT), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Jochen S Utikal
- Department of Dermatology, Heidelberg University, Mannheim, Germany; Skin Cancer Unit, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Hermann Brenner
- Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), Heidelberg, Germany; Division of Preventive Oncology, German Cancer Research Center (DKFZ), National Center for Tumor Diseases (NCT), Heidelberg, Germany; German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Michael Hoffmeister
- Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Titus J Brinker
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany.
| |
Collapse
|