1
Lin S, Tran C, Bandari E, Romagnoli T, Li Y, Chu M, Amirthakatesan AS, Dallmann A, Kostiukov A, Panizo A, Hodgson A, Laury AR, Polonia A, Stueck AE, Menon AA, Morini A, Özamrak B, Cooper C, Trinidad CMG, Eisenlöffel C, Suleiman DE, Suster D, Dorward DA, Aljufairi EA, Maclean F, Gul G, Sansano I, Erana-Rojas IE, Machado I, Kholova I, Karunanithi J, Gibier JB, Schulte JJ, Li JJ, Kini JR, Collins K, Galea LA, Muller L, Cima L, Nova-Camacho LM, Dabner M, Muscara MJ, Hanna MG, Agoumi M, Wiebe NJP, Oswald NK, Zahra N, Folaranmi OO, Kravtsov O, Semerci O, Patil NN, Muthusamy Sundar P, Charles P, Kumaraswamy Rajeswaran P, Zhang Q, van der Griend R, Pillappa R, Perret R, Gonzalez RS, Reed RC, Patil S, Jiang X"S", Qayoom S, Prendeville S, Baskota SU, Tran TT, San TH, Kukkonen TM, Kendall TJ, Taskin T, Rutland T, Manucha V, Cockenpot V, Rosen Y, Rodriguez-Velandia YP, Ordulu Z, Cecchini MJ. The 1000 Mitoses Project: A Consensus-Based International Collaborative Study on Mitotic Figures Classification. Int J Surg Pathol 2024; 32:1449-1458. [PMID: 38627896; PMCID: PMC11497755; DOI: 10.1177/10668969241234321]
Abstract
Introduction. The identification of mitotic figures is essential for the diagnosis, grading, and classification of various tumors. Despite its importance, there is a paucity of literature reporting the consistency of mitotic figure interpretation among pathologists. This study leveraged publicly accessible datasets and social media to recruit an international group of pathologists to collectively score an image database of more than 1000 mitotic figures. Materials and Methods. Pathologists were instructed to randomly select a digital slide from The Cancer Genome Atlas (TCGA) datasets and annotate 10-20 mitotic figures within a 2 mm² area. The first 1010 submitted mitotic figures were used to create an image dataset, with each figure transformed into an individual tile at 40× magnification. The dataset was redistributed to all pathologists to review and determine whether each tile constituted a mitotic figure. Results. Overall, pathologists had a median agreement rate of 80.2% (range 42.0%-95.7%). Individual mitotic figure tiles had a median agreement rate of 87.1%, and inter-rater agreement across all tiles was fair (kappa = 0.284). Mitotic figures in prometaphase had lower percentage agreement rates than figures in other phases of mitosis. Conclusion. This dataset stands as the largest international consensus study of mitotic figures to date and can be utilized as a training set for future studies. The agreement range reflects a spectrum of criteria that pathologists use to decide what constitutes a mitotic figure, which may have implications for tumor diagnostics and clinical management.
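For readers who want to reproduce agreement statistics of this kind, the sketch below computes per-tile percentage agreement and a Fleiss-style multi-rater kappa from a rater-vote count matrix. It is a minimal illustration with toy data; the study's exact kappa variant and vote handling are not specified in the abstract, so the choice of Fleiss' kappa, the function names, and the numbers are assumptions.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for an (n_items x n_categories) matrix of rating counts.

    counts[i, j] = number of raters who assigned item i to category j.
    Assumes every item was rated by the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()

    # Proportion of all assignments falling into each category.
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    # Per-item observed agreement.
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))

    p_bar = p_i.mean()              # mean observed agreement
    p_e = np.square(p_j).sum()      # chance agreement
    return (p_bar - p_e) / (1.0 - p_e)

def percent_agreement(counts: np.ndarray) -> np.ndarray:
    """Per-item agreement rate: fraction of raters in the modal category."""
    counts = np.asarray(counts, dtype=float)
    return counts.max(axis=1) / counts.sum(axis=1)

if __name__ == "__main__":
    # Toy example: 5 tiles, 2 categories (mitosis / not mitosis), 10 raters each.
    votes = np.array([[9, 1], [7, 3], [10, 0], [4, 6], [8, 2]])
    print("median agreement:", np.median(percent_agreement(votes)))
    print("Fleiss kappa:", round(fleiss_kappa(votes), 3))
```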
2
Jiang S, Hondelink L, Suriawinata AA, Hassanpour S. Masked pre-training of transformers for histology image analysis. J Pathol Inform 2024; 15:100386. [PMID: 39006998; PMCID: PMC11246055; DOI: 10.1016/j.jpi.2024.100386]
Abstract
In digital pathology, whole-slide images (WSIs) are widely used for applications such as cancer diagnosis and prognosis prediction. Vision transformer (ViT) models have recently emerged as a promising method for encoding large regions of WSIs while preserving spatial relationships among patches. However, due to the large number of model parameters and limited labeled data, applying transformer models to WSIs remains challenging. In this study, we propose a pretext task to train the transformer model in a self-supervised manner. Our model, MaskHIT, uses the transformer output to reconstruct masked patches, measured by contrastive loss. We pre-trained the MaskHIT model using over 7000 WSIs from TCGA and extensively evaluated its performance in multiple experiments covering survival prediction, cancer subtype classification, and grade prediction tasks. Our experiments demonstrate that the pre-training procedure enables context-aware understanding of WSIs, facilitates the learning of representative histological features based on patch positions and visual patterns, and is essential for the ViT model to achieve optimal results on WSI-level tasks. The pre-trained MaskHIT surpasses various multiple instance learning approaches by 3% and 2% on the survival prediction and cancer subtype classification tasks, respectively, and also outperforms recent state-of-the-art transformer-based methods. Finally, a comparison of the attention maps generated by the MaskHIT model with pathologists' annotations indicates that the model can accurately identify clinically relevant histological structures on the whole slide for each task.
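The masked-patch pretext task described here can be illustrated with a short sketch: patch features are randomly masked, a transformer encoder predicts the masked positions, and an InfoNCE-style contrastive loss scores the reconstructions against the original features. The module names, dimensions, temperature, and the omission of positional embeddings are simplifying assumptions, not the authors' MaskHIT implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedPatchPretrainer(nn.Module):
    """Sketch: transformer encoder over patch embeddings with masked-patch prediction."""

    def __init__(self, dim: int = 256, depth: int = 4, heads: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.proj = nn.Linear(dim, dim)  # projection head for the contrastive loss
        # Positional embeddings are omitted here for brevity.

    def forward(self, patches: torch.Tensor, mask_ratio: float = 0.5):
        # patches: (batch, n_patches, dim) pre-extracted patch features
        b, n, d = patches.shape
        mask = torch.rand(b, n, device=patches.device) < mask_ratio  # True = masked

        inp = torch.where(mask.unsqueeze(-1), self.mask_token.expand(b, n, d), patches)
        out = self.encoder(inp)

        # InfoNCE-style loss: each reconstructed masked position should match its own
        # original patch feature rather than the other masked patches in the batch.
        pred = F.normalize(self.proj(out[mask]), dim=-1)      # (n_masked, dim)
        target = F.normalize(patches[mask].detach(), dim=-1)  # (n_masked, dim)
        logits = pred @ target.t() / 0.07                     # temperature is an assumption
        labels = torch.arange(pred.size(0), device=pred.device)
        return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    model = MaskedPatchPretrainer()
    feats = torch.randn(2, 64, 256)          # 2 WSI regions x 64 patches x 256-dim features
    loss = model(feats)
    loss.backward()
    print("pre-training loss:", float(loss))
```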
3
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. [PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357]
Abstract
Computational Pathology (CPath) is an interdisciplinary science that advances the development of computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms in clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community to locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We review this cycle from the perspectives of data-centric, model-centric, and application-centric problems. Finally, we sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub; an updated version of this draft can also be found on arXiv.
4
Huang J, Zhang X, Jin R, Xu T, Jin Z, Shen M, Lv F, Chen J, Liu J. Wavelet-based selection-and-recalibration network for Parkinson's disease screening in OCT images. Comput Methods Programs Biomed 2024; 256:108368. [PMID: 39154408; DOI: 10.1016/j.cmpb.2024.108368]
Abstract
BACKGROUND AND OBJECTIVE Parkinson's disease (PD) is one of the most prevalent neurodegenerative brain diseases worldwide. Therefore, accurate PD screening is crucial for early clinical intervention and treatment. Recent clinical research indicates that changes in pathology, such as the texture and thickness of the retinal layers, can serve as biomarkers for clinical PD diagnosis based on optical coherence tomography (OCT) images. However, the pathological manifestations of PD in the retinal layers are subtle compared to the more salient lesions associated with retinal diseases. METHODS Inspired by textural edge feature extraction in frequency domain learning, we aim to explore a potential approach to enhance the distinction between the feature distributions in retinal layers of PD cases and healthy controls. In this paper, we introduce a simple yet novel wavelet-based selection and recalibration module to effectively enhance the feature representations of the deep neural network by aggregating the unique clinical properties, such as the retinal layers in each frequency band. We combine this module with the residual block to form a deep network named Wavelet-based Selection and Recalibration Network (WaveSRNet) for automatic PD screening. RESULTS The extensive experiments on a clinical PD-OCT dataset and two publicly available datasets demonstrate that our approach outperforms state-of-the-art methods. Visualization analysis and ablation studies are conducted to enhance the explainability of WaveSRNet in the decision-making process. CONCLUSIONS Our results suggest the potential role of the retina as an assessment tool for PD. Visual analysis shows that PD-related elements include not only certain retinal layers but also the location of the fovea in OCT images.
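One plausible reading of a wavelet-based selection-and-recalibration block is sketched below: a single-level Haar decomposition splits a feature map into frequency sub-bands, learned weights select among the bands, and an SE-style gate recalibrates channels. This is an illustrative interpretation under those assumptions, not the published WaveSRNet architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def haar_dwt2(x: torch.Tensor):
    """Single-level 2D Haar decomposition of a (B, C, H, W) feature map.

    Returns the four sub-bands (LL, LH, HL, HH), each of size (B, C, H/2, W/2).
    Assumes H and W are even.
    """
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a - b + c - d) / 2
    hl = (a + b - c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

class WaveletSelectRecalibrate(nn.Module):
    """Sketch of a wavelet-based selection-and-recalibration block (SE-style gating)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.band_logits = nn.Parameter(torch.zeros(4))  # learned selection over sub-bands
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        bands = haar_dwt2(x)                                            # 4 x (B, C, H/2, W/2)
        descriptors = torch.stack(
            [band.abs().mean(dim=(2, 3)) for band in bands], dim=1)     # (B, 4, C)
        weights = F.softmax(self.band_logits, dim=0).view(1, 4, 1)      # select sub-bands
        pooled = (weights * descriptors).sum(dim=1)                     # (B, C)
        gate = self.fc(pooled).view(x.size(0), x.size(1), 1, 1)         # channel recalibration
        return x * gate

if __name__ == "__main__":
    block = WaveletSelectRecalibrate(channels=32)
    feat = torch.randn(2, 32, 64, 64)   # e.g. OCT feature maps from a residual stage
    print(block(feat).shape)            # torch.Size([2, 32, 64, 64])
```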
5
Stathonikos N, Aubreville M, de Vries S, Wilm F, Bertram CA, Veta M, van Diest PJ. Breast cancer survival prediction using an automated mitosis detection pipeline. J Pathol Clin Res 2024; 10:e70008. [PMID: 39466133; DOI: 10.1002/2056-4538.70008]
Abstract
Mitotic count (MC) is the most common measure to assess tumor proliferation in breast cancer patients and is highly predictive of patient outcomes. It is, however, subject to inter- and intraobserver variation and reproducibility challenges that may hamper its clinical utility. In past studies, artificial intelligence (AI)-supported MC has been shown to correlate well with traditional MC on glass slides. Considering the potential of AI to improve reproducibility of MC between pathologists, we undertook the next validation step by evaluating the prognostic value of a fully automatic method to detect and count mitoses on whole slide images using a deep learning model. The model was developed in the context of the Mitosis Domain Generalization Challenge 2021 (MIDOG21) grand challenge and was expanded by a novel automatic area selector method to find the optimal mitotic hotspot and calculate the MC per 2 mm². We employed this method on a breast cancer cohort with long-term follow-up from the University Medical Centre Utrecht (N = 912) and compared predictive values for overall survival of AI-based MC and light-microscopic MC, previously assessed during routine diagnostics. The MIDOG21 model was prognostically comparable to the original MC from the pathology report in uni- and multivariate survival analysis. In conclusion, a fully automated MC AI algorithm was validated in a large cohort of breast cancer with regard to retained prognostic value compared with traditional light-microscopic MC.
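The hotspot-based counting step can be illustrated independently of the detector: given detected mitosis coordinates, slide a window of 2 mm² area over the slide and report the densest region. The rectangular window shape, aspect ratio, and stride below are illustrative assumptions rather than the paper's area-selector method.

```python
import numpy as np

def mitotic_count_hotspot(coords_um: np.ndarray, area_mm2: float = 2.0,
                          aspect: float = 2.0, stride_um: float = 250.0):
    """Return the highest mitotic count found in any rectangular window of `area_mm2`.

    coords_um : (N, 2) array of detected mitosis centroids in micrometres (x, y).
    aspect    : assumed width/height ratio of the counting window (illustrative).
    """
    area_um2 = area_mm2 * 1e6                      # 1 mm^2 = 1e6 um^2
    h = np.sqrt(area_um2 / aspect)                 # window height in um
    w = aspect * h                                 # window width in um

    if len(coords_um) == 0:
        return 0, (0.0, 0.0)

    best_count, best_origin = 0, (0.0, 0.0)
    xs = np.arange(coords_um[:, 0].min(), coords_um[:, 0].max() + stride_um, stride_um)
    ys = np.arange(coords_um[:, 1].min(), coords_um[:, 1].max() + stride_um, stride_um)
    for x0 in xs:
        for y0 in ys:
            inside = ((coords_um[:, 0] >= x0) & (coords_um[:, 0] < x0 + w) &
                      (coords_um[:, 1] >= y0) & (coords_um[:, 1] < y0 + h))
            count = int(inside.sum())
            if count > best_count:
                best_count, best_origin = count, (float(x0), float(y0))
    return best_count, best_origin

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    detections = rng.uniform(0, 20000, size=(300, 2))   # toy detections on a 20x20 mm slide
    mc, origin = mitotic_count_hotspot(detections)
    print(f"MC per 2 mm^2: {mc} (window origin {origin})")
```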
6
Mezei T, Kolcsár M, Joó A, Gurzu S. Image Analysis in Histopathology and Cytopathology: From Early Days to Current Perspectives. J Imaging 2024; 10:252. [PMID: 39452415; PMCID: PMC11508754; DOI: 10.3390/jimaging10100252]
Abstract
Both pathology and cytopathology still rely on recognizing microscopical morphologic features, and image analysis plays a crucial role, enabling the identification, categorization, and characterization of different tissue types, cell populations, and disease states within microscopic images. Historically, manual methods have been the primary approach, relying on expert knowledge and experience of pathologists to interpret microscopic tissue samples. Early image analysis methods were often constrained by computational power and the complexity of biological samples. The advent of computers and digital imaging technologies challenged the exclusivity of human eye vision and brain computational skills, transforming the diagnostic process in these fields. The increasing digitization of pathological images has led to the application of more objective and efficient computer-aided analysis techniques. Significant advancements were brought about by the integration of digital pathology, machine learning, and advanced imaging technologies. The continuous progress in machine learning and the increasing availability of digital pathology data offer exciting opportunities for the future. Furthermore, artificial intelligence has revolutionized this field, enabling predictive models that assist in diagnostic decision making. The future of pathology and cytopathology is predicted to be marked by advancements in computer-aided image analysis. The future of image analysis is promising, and the increasing availability of digital pathology data will invariably lead to enhanced diagnostic accuracy and improved prognostic predictions that shape personalized treatment strategies, ultimately leading to better patient outcomes.
7
Boulogne LH, Lorenz J, Kienzle D, Schön R, Ludwig K, Lienhart R, Jégou S, Li G, Chen C, Wang Q, Shi D, Maniparambil M, Müller D, Mertes S, Schröter N, Hellmann F, Elia M, Dirks I, Bossa MN, Berenguer AD, Mukherjee T, Vandemeulebroucke J, Sahli H, Deligiannis N, Gonidakis P, Huynh ND, Razzak I, Bouadjenek R, Verdicchio M, Borrelli P, Aiello M, Meakin JA, Lemm A, Russ C, Ionasec R, Paragios N, van Ginneken B, Revel MP. The STOIC2021 COVID-19 AI challenge: Applying reusable training methodologies to private data. Med Image Anal 2024; 97:103230. [PMID: 38875741; DOI: 10.1016/j.media.2024.103230]
Abstract
Challenges drive the state-of-the-art of automated medical image analysis. The quantity of public training data that they provide can limit the performance of their solutions. Public access to the training methodology for these solutions remains absent. This study implements the Type Three (T3) challenge format, which allows for training solutions on private data and guarantees reusable training methodologies. With T3, challenge organizers train a codebase provided by the participants on sequestered training data. T3 was implemented in the STOIC2021 challenge, with the goal of predicting from a computed tomography (CT) scan whether subjects had a severe COVID-19 infection, defined as intubation or death within one month. STOIC2021 consisted of a Qualification phase, where participants developed challenge solutions using 2000 publicly available CT scans, and a Final phase, where participants submitted their training methodologies with which solutions were trained on CT scans of 9724 subjects. The organizers successfully trained six of the eight Final phase submissions. The submitted codebases for training and running inference were released publicly. The winning solution obtained an area under the receiver operating characteristic curve for discerning between severe and non-severe COVID-19 of 0.815. The Final phase solutions of all finalists improved upon their Qualification phase solutions.
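The headline metric, the area under the ROC curve for discerning severe from non-severe COVID-19, has a simple rank-based definition; the snippet below computes it on toy predictions (the labels and scores are illustrative, not challenge data).

```python
import numpy as np

def roc_auc(labels: np.ndarray, scores: np.ndarray) -> float:
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formulation:
    the probability that a randomly chosen positive case is scored above a
    randomly chosen negative case (ties counted as 0.5)."""
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

if __name__ == "__main__":
    # Toy example: 1 = severe COVID-19 (intubation or death within one month), 0 = non-severe.
    y = np.array([1, 0, 1, 0, 0, 1, 0, 1])
    p = np.array([0.9, 0.2, 0.6, 0.4, 0.1, 0.8, 0.55, 0.3])
    print(round(roc_auc(y, p), 3))
```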
8
Ganz J, Ammeling J, Jabari S, Breininger K, Aubreville M. Re-identification from histopathology images. Med Image Anal 2024; 99:103335. [PMID: 39316996; DOI: 10.1016/j.media.2024.103335]
Abstract
In numerous studies, deep learning algorithms have proven their potential for the analysis of histopathology images, for example, for revealing the subtypes of tumors or the primary origin of metastases. These models require large datasets for training, which must be anonymized to prevent possible patient identity leaks. This study demonstrates that even relatively simple deep learning algorithms can re-identify patients in large histopathology datasets with substantial accuracy. In addition, we compared a comprehensive set of state-of-the-art whole slide image classifiers and feature extractors for the given task. We evaluated our algorithms on two TCIA datasets including lung squamous cell carcinoma (LSCC) and lung adenocarcinoma (LUAD). We also demonstrate the algorithm's performance on an in-house dataset of meningioma tissue. We predicted the source patient of a slide with F1 scores of up to 80.1% and 77.19% on the LSCC and LUAD datasets, respectively, and with 77.09% on our meningioma dataset. Based on our findings, we formulated a risk assessment scheme to estimate the risk to the patient's privacy prior to publication.
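At its core, the re-identification task reduces to matching a query slide's representation against a gallery of slides with known patient identity. The sketch below shows cosine nearest-neighbour matching on pre-computed slide-level embeddings; the embeddings, patient names, and matching rule are placeholders rather than the paper's feature extractors or classifiers.

```python
import numpy as np

def reidentify(query: np.ndarray, gallery: np.ndarray, gallery_patients: list[str]) -> list[str]:
    """Assign each query slide embedding to the patient of its nearest gallery slide.

    query   : (Q, D) slide-level embeddings of unlabeled slides.
    gallery : (G, D) embeddings of slides with known patient identity.
    """
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = q @ g.T                       # cosine similarity matrix (Q, G)
    nearest = sims.argmax(axis=1)
    return [gallery_patients[i] for i in nearest]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy setup: 3 patients, two slides each; slides of a patient share a latent vector.
    latents = rng.normal(size=(3, 128))
    gallery = latents + 0.1 * rng.normal(size=(3, 128))
    queries = latents + 0.1 * rng.normal(size=(3, 128))
    print(reidentify(queries, gallery, ["patient_A", "patient_B", "patient_C"]))
```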
9
Colomer R, González-Farré B, Ballesteros AI, Peg V, Bermejo B, Pérez-Mies B, de la Cruz S, Rojo F, Pernas S, Palacios J. Biomarkers in breast cancer 2024: an updated consensus statement by the Spanish Society of Medical Oncology and the Spanish Society of Pathology. Clin Transl Oncol 2024. [PMID: 38869741; DOI: 10.1007/s12094-024-03541-1]
Abstract
This revised consensus statement of the Spanish Society of Medical Oncology (SEOM) and the Spanish Society of Pathological Anatomy (SEAP) updates the recommendations for biomarkers use in the diagnosis and treatment of breast cancer that we first published in 2018. The expert group recommends determining in early breast cancer the estrogen receptor (ER), progesterone receptor (PR), Ki-67, and Human Epidermal growth factor Receptor 2 (HER2), as well as BReast CAncer (BRCA) genes in high-risk HER2-negative breast cancer, to assist prognosis and help in indicating the therapeutic options, including hormone therapy, chemotherapy, anti-HER2 therapy, and other targeted therapies. One of the four available genetic prognostic platforms (Oncotype DX®, MammaPrint®, Prosigna®, or EndoPredict®) may be used in ER-positive patients with early breast cancer to establish a prognostic category and help decide with the patient whether adjuvant treatment may be limited to hormonal therapy. In second-line advanced breast cancer, in addition, phosphatidylinositol-4,5-bisphosphate 3-kinase catalytic subunit alpha (PIK3CA) and estrogen receptor 1 (ESR1) should be tested in hormone-sensitive cases, BRCA gene mutations in HER2-negative cancers, and in triple-negative breast cancer (TNBC), programmed cell death-1 ligand (PD-L1). Newer biomarkers and technologies, including tumor-infiltrating lymphocytes (TILs), homologous recombination deficiency (HRD) testing, serine/threonine kinase (AKT) pathway activation, and next-generation sequencing (NGS), are at this point investigational.
10
Seoni S, Shahini A, Meiburger KM, Marzola F, Rotunno G, Acharya UR, Molinari F, Salvi M. All you need is data preparation: A systematic review of image harmonization techniques in Multi-center/device studies for medical support systems. Comput Methods Programs Biomed 2024; 250:108200. [PMID: 38677080; DOI: 10.1016/j.cmpb.2024.108200]
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) models trained on multi-centric and multi-device studies can provide more robust insights and research findings compared to single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes image appearance to enable reliable AI analysis of multi-source medical imaging. METHODS A literature search following PRISMA guidelines was conducted to identify relevant papers published between 2013 and 2023 analyzing multi-centric and multi-device medical imaging studies that utilized image harmonization approaches. RESULTS Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42%), resampling (increasing the percentage of robust radiomics features from 59.5% to 89.25%), and color normalization (enhancing AUC by up to 0.25 in external test sets). Initially, mathematical and statistical methods dominated, but machine and deep learning adoption has risen recently. Color imaging modalities such as digital pathology and dermatology have remained prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. In all the modalities covered by this review, image harmonization improved AI performance, with increases of up to 24.42% in classification accuracy and 47% in segmentation Dice scores. CONCLUSIONS Continued progress in image harmonization represents a promising strategy for advancing healthcare by enabling large-scale, reliable analysis of integrated multi-source datasets using AI. Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.
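Grayscale (intensity) normalization, one of the simplest harmonization techniques discussed in the review, can be sketched as rescaling every image to a common reference mean and standard deviation before pooling multi-scanner data. The reference statistics below are arbitrary placeholders.

```python
import numpy as np

def normalize_intensity(image: np.ndarray, ref_mean: float = 128.0,
                        ref_std: float = 32.0) -> np.ndarray:
    """Match an image's global mean/std to reference statistics (z-score harmonization)."""
    img = image.astype(np.float32)
    z = (img - img.mean()) / (img.std() + 1e-8)
    out = z * ref_std + ref_mean
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scanner_a = rng.normal(100, 20, size=(256, 256)).clip(0, 255)   # toy "scanner A" image
    scanner_b = rng.normal(160, 45, size=(256, 256)).clip(0, 255)   # toy "scanner B" image
    harmonized = [normalize_intensity(x) for x in (scanner_a, scanner_b)]
    print([(round(h.mean(), 1), round(h.std(), 1)) for h in harmonized])
```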
11
Sun K, Zheng Y, Yang X, Jia W. A novel transformer-based aggregation model for predicting gene mutations in lung adenocarcinoma. Med Biol Eng Comput 2024; 62:1427-1440. [PMID: 38233683; DOI: 10.1007/s11517-023-03004-9]
Abstract
In recent years, predicting gene mutations from whole-slide images (WSIs) has gained prominence. The primary challenge is extracting global information and achieving unbiased semantic aggregation. To address this challenge, we propose a novel Transformer-based aggregation model, employing a self-learning weight aggregation mechanism to mitigate the semantic bias caused by the abundance of features in a WSI. Additionally, we adopt a random patch training method, which enriches model learning by randomly extracting feature vectors from the WSI, thus addressing the issue of limited data. To demonstrate the model's effectiveness in predicting gene mutations, we leverage the lung adenocarcinoma dataset from Shandong Provincial Hospital for prior knowledge learning. Subsequently, we assess TP53, CSMD3, LRP1B, and TTN gene mutations using lung adenocarcinoma tissue pathology images and clinical data from The Cancer Genome Atlas (TCGA). The results indicate a notable increase in the AUC (area under the ROC curve) value, averaging 4%, attesting to the model's improved performance. Our research offers an efficient model for exploring the correlation between pathological image features and molecular characteristics in lung adenocarcinoma patients. This model introduces a novel approach to clinical genetic testing that is expected to improve the efficiency of identifying molecular features in lung adenocarcinoma patients, ultimately providing more accurate and reliable results for related studies.
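The aggregation idea, randomly sampling patch features from a WSI and letting a transformer with a learnable class token pool them into a slide-level mutation prediction, can be sketched as follows. The dimensions, sampling size, and gene list in the comments are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class TransformerAggregator(nn.Module):
    """Sketch: aggregate a random subset of WSI patch features with a transformer
    and a learnable class token, then predict gene-mutation labels."""

    def __init__(self, dim: int = 256, depth: int = 2, heads: int = 4, n_genes: int = 4):
        super().__init__()
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, n_genes)   # one logit per gene (e.g. TP53, CSMD3, LRP1B, TTN)

    def forward(self, patch_feats: torch.Tensor, n_sample: int = 128) -> torch.Tensor:
        # patch_feats: (n_patches, dim) features of one WSI; sample a random subset
        # ("random patch training") to keep memory bounded and regularize training.
        idx = torch.randperm(patch_feats.size(0))[:n_sample]
        bag = patch_feats[idx].unsqueeze(0)                 # (1, n_sample, dim)
        tokens = torch.cat([self.cls, bag], dim=1)          # prepend class token
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])                     # logits from the class token

if __name__ == "__main__":
    model = TransformerAggregator()
    wsi_features = torch.randn(5000, 256)        # e.g. CNN features of 5000 patches
    logits = model(wsi_features)
    print(torch.sigmoid(logits))                 # per-gene mutation probabilities
```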
12
Aubreville M, Stathonikos N, Donovan TA, Klopfleisch R, Ammeling J, Ganz J, Wilm F, Veta M, Jabari S, Eckstein M, Annuscheit J, Krumnow C, Bozaba E, Çayır S, Gu H, Chen X'A, Jahanifar M, Shephard A, Kondo S, Kasai S, Kotte S, Saipradeep VG, Lafarge MW, Koelzer VH, Wang Z, Zhang Y, Yang S, Wang X, Breininger K, Bertram CA. Domain generalization across tumor types, laboratories, and species - Insights from the 2022 edition of the Mitosis Domain Generalization Challenge. Med Image Anal 2024; 94:103155. [PMID: 38537415; DOI: 10.1016/j.media.2024.103155]
Abstract
Recognition of mitotic figures in histologic tumor specimens is highly relevant to patient outcome assessment. This task is challenging for algorithms and human experts alike, with deterioration of algorithmic performance under shifts in image representations. Considerable covariate shifts occur when assessment is performed on different tumor types, images are acquired using different digitization devices, or specimens are produced in different laboratories. This observation motivated the inception of the 2022 challenge on MItosis Domain Generalization (MIDOG 2022). The challenge provided annotated histologic tumor images from six different domains and evaluated the algorithmic approaches for mitotic figure detection provided by nine challenge participants on ten independent domains. Ground truth for mitotic figure detection was established in two ways: a three-expert majority vote and an independent, immunohistochemistry-assisted set of labels. This work represents an overview of the challenge tasks, the algorithmic strategies employed by the participants, and potential factors contributing to their success. With an F1 score of 0.764 for the top-performing team, we summarize that domain generalization across various tumor domains is possible with today's deep learning-based recognition pipelines. However, we also found that domain characteristics not present in the training set (feline as new species, spindle cell shape as new morphology and a new scanner) led to small but significant decreases in performance. When assessed against the immunohistochemistry-assisted reference standard, all methods resulted in reduced recall scores, with only minor changes in the order of participants in the ranking.
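Mitotic-figure detection in challenges of this kind is typically scored with an F1 computed after matching each prediction to at most one ground-truth annotation within a distance tolerance. The sketch below implements such a greedy matching; the tolerance and coordinates are illustrative, and this is not the official MIDOG evaluation code.

```python
import numpy as np

def detection_f1(pred: np.ndarray, gt: np.ndarray, tol: float = 30.0):
    """Greedy one-to-one matching of predicted vs. ground-truth points within `tol` pixels.

    pred, gt : (N, 2) arrays of (x, y) coordinates. Returns (precision, recall, f1).
    """
    used = np.zeros(len(gt), dtype=bool)   # each ground-truth point may be matched once
    tp = 0
    for p in pred:
        if len(gt) == 0:
            break
        d = np.linalg.norm(gt - p, axis=1)
        d[used] = np.inf
        j = int(np.argmin(d))
        if d[j] <= tol:
            used[j] = True
            tp += 1
    fp = len(pred) - tp
    fn = len(gt) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

if __name__ == "__main__":
    gt = np.array([[100, 100], [400, 250], [900, 700]], dtype=float)
    pred = np.array([[105, 98], [410, 260], [600, 600], [905, 710]], dtype=float)
    print(detection_f1(pred, gt))   # 3 true positives, 1 false positive -> F1 ~ 0.857
```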
13
van Diest PJ, Flach RN, van Dooijeweert C, Makineli S, Breimer GE, Stathonikos N, Pham P, Nguyen TQ, Veta M. Pros and cons of artificial intelligence implementation in diagnostic pathology. Histopathology 2024; 84:924-934. [PMID: 38433288; DOI: 10.1111/his.15153]
Abstract
The rapid introduction of digital pathology has greatly facilitated the development of artificial intelligence (AI) models in pathology that have shown great promise in assisting morphological diagnostics and the quantitation of therapeutic targets. We are now at a tipping point where companies have started to bring algorithms to the market, and questions arise as to whether the pathology community is ready to implement AI in routine workflow. At the same time, concerns have been raised about the use of AI in pathology. This article reviews the pros and cons of introducing AI in diagnostic pathology.
14
Jahanifar M, Shephard A, Zamanitajeddin N, Graham S, Raza SEA, Minhas F, Rajpoot N. Mitosis detection, fast and slow: Robust and efficient detection of mitotic figures. Med Image Anal 2024; 94:103132. [PMID: 38442527; DOI: 10.1016/j.media.2024.103132]
Abstract
Counting of mitotic figures is a fundamental step in grading and prognostication of several cancers. However, manual mitosis counting is tedious and time-consuming. In addition, variation in the appearance of mitotic figures causes a high degree of discordance among pathologists. With advances in deep learning models, several automatic mitosis detection algorithms have been proposed but they are sensitive to domain shift often seen in histology images. We propose a robust and efficient two-stage mitosis detection framework, which comprises mitosis candidate segmentation (Detecting Fast) and candidate refinement (Detecting Slow) stages. The proposed candidate segmentation model, termed EUNet, is fast and accurate due to its architectural design. EUNet can precisely segment candidates at a lower resolution to considerably speed up candidate detection. Candidates are then refined using a deeper classifier network, EfficientNet-B7, in the second stage. We make sure both stages are robust against domain shift by incorporating domain generalization methods. We demonstrate state-of-the-art performance and generalizability of the proposed model on the three largest publicly available mitosis datasets, winning the two mitosis domain generalization challenge contests (MIDOG21 and MIDOG22). Finally, we showcase the utility of the proposed algorithm by processing the TCGA breast cancer cohort (1,124 whole-slide images) to generate and release a repository of more than 620K potential mitotic figures (not exhaustively validated).
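The detect-then-refine pattern described in this abstract can be illustrated with a short sketch: a fast stage proposes candidate locations from a low-resolution probability map, and a heavier classifier re-scores a crop around each candidate. This is a generic reconstruction under stated assumptions (placeholder models, a 0.5 score cutoff, a 64-pixel crop), not the published EUNet/EfficientNet-B7 pipeline or its weights.

```python
# Minimal sketch of a two-stage "detect fast, refine slow" mitosis pipeline.
# Stage 1: a lightweight segmentation net proposes candidate locations on a
# downsampled image; Stage 2: a larger classifier re-scores each candidate crop.
# Models and thresholds here are placeholders, not the published EUNet weights.
import torch
import torch.nn.functional as F
import torchvision

def propose_candidates(prob_map: torch.Tensor, thresh: float = 0.5):
    """Return (y, x) coordinates of local maxima above `thresh` in an HxW probability map."""
    pooled = F.max_pool2d(prob_map[None, None], kernel_size=9, stride=1, padding=4)[0, 0]
    peaks = (prob_map == pooled) & (prob_map > thresh)
    return peaks.nonzero(as_tuple=False)  # N x 2 tensor of candidate centres

def refine_candidates(image: torch.Tensor, coords: torch.Tensor, classifier, patch: int = 64):
    """Crop a patch around each candidate and keep those the classifier calls mitosis."""
    kept = []
    for y, x in coords.tolist():
        y0, x0 = max(0, y - patch // 2), max(0, x - patch // 2)
        crop = image[..., y0:y0 + patch, x0:x0 + patch]
        crop = F.interpolate(crop[None], size=(224, 224), mode="bilinear")
        score = torch.sigmoid(classifier(crop))[0, 0]
        if score > 0.5:
            kept.append((y, x, float(score)))
    return kept

# Example wiring with an off-the-shelf backbone standing in for the refinement stage.
classifier = torchvision.models.efficientnet_b7(weights=None)
classifier.classifier[-1] = torch.nn.Linear(classifier.classifier[-1].in_features, 1)
classifier.eval()

image = torch.rand(3, 512, 512)        # stand-in H&E tile
prob_map = torch.zeros(512, 512)       # stand-in stage-1 output with two "peaks"
prob_map[100, 200], prob_map[300, 400] = 0.9, 0.8
with torch.no_grad():
    detections = refine_candidates(image, propose_candidates(prob_map), classifier)
```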
Affiliation(s)
- Mostafa Jahanifar
- Tissue Image Analytic (TIA) Center, Department of Computer Science, University of Warwick, UK
- Adam Shephard
- Tissue Image Analytic (TIA) Center, Department of Computer Science, University of Warwick, UK
- Neda Zamanitajeddin
- Tissue Image Analytic (TIA) Center, Department of Computer Science, University of Warwick, UK
- Simon Graham
- Tissue Image Analytic (TIA) Center, Department of Computer Science, University of Warwick, UK; Histofy Ltd, Birmingham, UK
- Shan E Ahmed Raza
- Tissue Image Analytic (TIA) Center, Department of Computer Science, University of Warwick, UK
- Fayyaz Minhas
- Tissue Image Analytic (TIA) Center, Department of Computer Science, University of Warwick, UK
- Nasir Rajpoot
- Tissue Image Analytic (TIA) Center, Department of Computer Science, University of Warwick, UK; Histofy Ltd, Birmingham, UK

15
Jin D, Liang S, Shmatko A, Arnold A, Horst D, Grünewald TGP, Gerstung M, Bai X. Teacher-student collaborated multiple instance learning for pan-cancer PDL1 expression prediction from histopathology slides. Nat Commun 2024; 15:3063. [PMID: 38594278 PMCID: PMC11004138 DOI: 10.1038/s41467-024-46764-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2023] [Accepted: 03/08/2024] [Indexed: 04/11/2024] Open
Abstract
Programmed cell death ligand 1 (PDL1), as an important biomarker, is quantified by immunohistochemistry (IHC) with few established histopathological patterns. Deep learning aids in histopathological assessment, yet heterogeneity and lacking spatially resolved annotations challenge precise analysis. Here, we present a weakly supervised learning approach using bulk RNA sequencing for PDL1 expression prediction from hematoxylin and eosin (H&E) slides. Our method extends the multiple instance learning paradigm with the teacher-student framework, which assigns dynamic pseudo-labels for intra-slide heterogeneity and retrieves unlabeled instances using temporal ensemble model distillation. The approach, evaluated on 12,299 slides across 20 solid tumor types, achieves a weighted average area under the curve of 0.83 on fresh-frozen and 0.74 on formalin-fixed specimens for 9 tumors with PDL1 as an established biomarker. Our method predicts PDL1 expression patterns, validated by IHC on 20 slides, offering insights into histologies relevant to PDL1. This demonstrates the potential of deep learning in identifying diverse histological patterns for molecular changes from H&E images.
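The teacher-student multiple instance learning idea summarized above can be sketched as follows: a student is trained with the weak bag (slide) label plus instance pseudo-labels from a frozen teacher, and the teacher tracks an exponential moving average of the student (temporal ensembling). The architecture, loss weighting, and 0.5 pseudo-label cutoff below are illustrative assumptions rather than the authors' configuration.

```python
# Minimal sketch of a teacher-student MIL loop with an EMA teacher assigning
# instance pseudo-labels; placeholder architecture and hyperparameters.
import torch
import torch.nn as nn

class InstanceScorer(nn.Module):
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, feats):            # feats: (n_instances, feat_dim)
        return self.net(feats).squeeze(-1)

student, teacher = InstanceScorer(), InstanceScorer()
teacher.load_state_dict(student.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def train_step(bag_feats: torch.Tensor, bag_label: torch.Tensor, ema: float = 0.99):
    # Bag-level (weak) supervision: the slide label is applied to the top-scoring instance.
    scores = student(bag_feats)
    bag_loss = nn.functional.binary_cross_entropy_with_logits(scores.max(), bag_label)
    # Instance-level pseudo-labels from the frozen teacher handle intra-slide heterogeneity.
    with torch.no_grad():
        pseudo = (torch.sigmoid(teacher(bag_feats)) > 0.5).float()
    inst_loss = nn.functional.binary_cross_entropy_with_logits(scores, pseudo)
    loss = bag_loss + 0.5 * inst_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Temporal ensembling: the teacher tracks an exponential moving average of the student.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(ema).add_(s, alpha=1 - ema)
    return float(loss)

loss = train_step(torch.randn(200, 512), torch.tensor(1.0))  # one positive bag of 200 patch features
```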
Affiliation(s)
- Darui Jin
- Image Processing Center, Beihang University, Beijing, 102206, China
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Shen Yuan Honors College, Beihang University, Beijing, 100191, China
- Shangying Liang
- Image Processing Center, Beihang University, Beijing, 102206, China
- Artem Shmatko
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Alexander Arnold
- Charité - Universitätsmedizin Berlin, Institute of Pathology, 10117, Berlin, Germany
- David Horst
- Charité - Universitätsmedizin Berlin, Institute of Pathology, 10117, Berlin, Germany
- German Cancer Consortium (DKTK), partner site Berlin, a partnership between DKFZ and Charité-Universitätsmedizin Berlin, Berlin, Germany
- Thomas G P Grünewald
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany
- Division of Translational Pediatric Sarcoma Research, German Cancer Research Center (DKFZ), German Cancer Consortium (DKTK), Heidelberg, Germany
- Hopp Children's Cancer Center (KiTZ) Heidelberg, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and Heidelberg University Hospital, Heidelberg, Germany
- Moritz Gerstung
- Division of AI in Oncology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Xiangzhi Bai
- Image Processing Center, Beihang University, Beijing, 102206, China
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China
- Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100083, China

16
Leon-Ferre RA, Carter JM, Zahrieh D, Sinnwell JP, Salgado R, Suman VJ, Hillman DW, Boughey JC, Kalari KR, Couch FJ, Ingle JN, Balkenhol M, Ciompi F, van der Laak J, Goetz MP. Automated mitotic spindle hotspot counts are highly associated with clinical outcomes in systemically untreated early-stage triple-negative breast cancer. NPJ Breast Cancer 2024; 10:25. [PMID: 38553444 PMCID: PMC10980681 DOI: 10.1038/s41523-024-00629-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2023] [Accepted: 03/08/2024] [Indexed: 04/02/2024] Open
Abstract
Operable triple-negative breast cancer (TNBC) has a higher risk of recurrence and death compared to other subtypes. Tumor size and nodal status are the primary clinical factors used to guide systemic treatment, while biomarkers of proliferation have not demonstrated value. Recent studies suggest that subsets of TNBC have a favorable prognosis, even without systemic therapy. We evaluated the association of fully automated mitotic spindle hotspot (AMSH) counts with recurrence-free (RFS) and overall survival (OS) in two separate cohorts of patients with early-stage TNBC who did not receive systemic therapy. AMSH counts were obtained from areas with the highest mitotic density in digitized whole slide images processed with a convolutional neural network trained to detect mitoses. In 140 patients from the Mayo Clinic TNBC cohort, AMSH counts were significantly associated with RFS and OS in a multivariable model controlling for nodal status, tumor size, and tumor-infiltrating lymphocytes (TILs) (p < 0.0001). For every 10-point increase in AMSH counts, there was a 16% increase in the risk of an RFS event (HR 1.16, 95% CI 1.08-1.25), and a 7% increase in the risk of death (HR 1.07, 95% CI 1.00-1.14). We corroborated these findings in a separate cohort of systemically untreated TNBC patients from Radboud UMC in the Netherlands. Our findings suggest that AMSH counts offer valuable prognostic information in patients with early-stage TNBC who did not receive systemic therapy, independent of tumor size, nodal status, and TILs. If further validated, AMSH counts could help inform future systemic therapy de-escalation strategies.
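The reported hazard ratios are per 10-point increase in the AMSH count, so under a proportional-hazards model they compound multiplicatively for larger differences. A back-of-envelope illustration (arithmetic on the published point estimates only, not a re-analysis of the cohorts):

```python
# HR of 1.16 (RFS) and 1.07 (OS) per 10-point AMSH increase; a 25-point difference
# therefore corresponds to HR ** 2.5. Arithmetic on the abstract's estimates only.
hr_per_10 = {"RFS": 1.16, "OS": 1.07}
delta_amsh = 25
for endpoint, hr in hr_per_10.items():
    print(f"{endpoint}: HR for +{delta_amsh} AMSH ≈ {hr ** (delta_amsh / 10):.2f}")
# RFS ≈ 1.45, OS ≈ 1.18 for this example
```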
Affiliation(s)
- Roberto Salgado
- GZA-ZNA-Hospitals, Antwerp, Belgium
- Peter MacCallum Cancer Centre, Melbourne, Australia
- Jeroen van der Laak
- Radboud University Medical Center, Nijmegen, Netherlands
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden

17
Fernandez-Martín C, Silva-Rodriguez J, Kiraz U, Morales S, Janssen EAM, Naranjo V. Uninformed Teacher-Student for hard-samples distillation in weakly supervised mitosis localization. Comput Med Imaging Graph 2024; 112:102328. [PMID: 38244279 DOI: 10.1016/j.compmedimag.2024.102328] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Revised: 11/02/2023] [Accepted: 12/12/2023] [Indexed: 01/22/2024]
Abstract
BACKGROUND AND OBJECTIVE Mitotic activity is a crucial biomarker for diagnosing and predicting outcomes for different types of cancers, particularly breast cancer. However, manual mitosis counting is challenging and time-consuming for pathologists, with moderate reproducibility due to biopsy slide size, low mitotic cell density, and pattern heterogeneity. In recent years, deep learning methods based on convolutional neural networks (CNNs) have been proposed to address these limitations. Nonetheless, these methods have been hampered by the available data labels, which usually consist only of the centroids of mitosis, and by the incoming noise from annotated hard negatives. As a result, complex algorithms with multiple stages are often required to refine the labels at the pixel level and reduce the number of false positives. METHODS This article presents a novel weakly supervised approach for mitosis detection that utilizes only image-level labels on histological hematoxylin and eosin (H&E) images, avoiding the need for complex labeling scenarios. In addition, an Uninformed Teacher-Student (UTS) pipeline is introduced to detect and distill hard samples by comparing weakly supervised localizations and the annotated centroids, using strong augmentations to enhance uncertainty. Additionally, an automatic proliferation score is proposed that mimics the pathologist-annotated mitotic activity index (MAI). The proposed approach is evaluated on three publicly available datasets for mitosis detection on breast histology samples, and two datasets for mitotic activity counting in whole-slide images. RESULTS The proposed framework achieves competitive performance with relevant prior literature in all the datasets used for evaluation without explicitly using the mitosis location information during training. This approach challenges previous methods that rely on strong mitosis location information and multiple stages to refine false positives. Furthermore, the proposed pipeline for hard-sample distillation demonstrates promising dataset-specific improvements. Concretely, when the annotation has not been thoroughly refined by multiple pathologists, the UTS model offers improvements of up to ∼4% in mitosis localization, thanks to the detection and distillation of uncertain cases. Concerning the mitosis counting task, the proposed automatic proliferation score shows a moderate positive correlation with the MAI annotated by pathologists at the biopsy level on two external datasets. CONCLUSIONS The proposed Uninformed Teacher-Student pipeline leverages strong augmentations to distill uncertain samples and measure dissimilarities between predicted and annotated mitoses. Results demonstrate the feasibility of the weakly supervised approach and highlight its potential as an objective evaluation tool for tumor proliferation.
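One way to picture the hard-sample idea above is to score how strongly a teacher's prediction on a clean patch disagrees with a student's predictions on strongly augmented views of the same patch; patches with high disagreement are the uncertain cases a distillation step would target. The augmentation recipe, disagreement measure, and toy model below are assumptions for illustration, not the published UTS pipeline.

```python
# Rank "hard" training patches by teacher-student disagreement under strong augmentation.
import torch
import torchvision.transforms as T
import torchvision.transforms.functional as TF

strong_aug = T.Compose([
    T.RandomResizedCrop(224, scale=(0.5, 1.0)),
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    T.RandomHorizontalFlip(),
])

@torch.no_grad()
def disagreement_score(patch, teacher, student, n_views: int = 4) -> float:
    """Mean |teacher(clean) - student(strongly augmented)|; higher = harder / more uncertain."""
    clean = TF.resize(patch, [224, 224]).unsqueeze(0)
    p_clean = torch.sigmoid(teacher(clean)).flatten()[0]
    diffs = [
        (p_clean - torch.sigmoid(student(strong_aug(patch).unsqueeze(0))).flatten()[0]).abs()
        for _ in range(n_views)
    ]
    return float(torch.stack(diffs).mean())

# Toy model standing in for both networks, applied to a random float "patch" in [0, 1].
toy_model = torch.nn.Sequential(torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(3, 1))
print(disagreement_score(torch.rand(3, 256, 256), toy_model, toy_model))
```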
Affiliation(s)
- Claudio Fernandez-Martín
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-tech, Universitat Politècnica de València, Valencia, Spain
- Umay Kiraz
- Department of Chemistry, Bioscience and Environmental Engineering, University of Stavanger, Stavanger, Norway; Department of Pathology, Stavanger University Hospital, Stavanger, Norway
- Sandra Morales
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-tech, Universitat Politècnica de València, Valencia, Spain
- Emiel A M Janssen
- Department of Chemistry, Bioscience and Environmental Engineering, University of Stavanger, Stavanger, Norway; Department of Pathology, Stavanger University Hospital, Stavanger, Norway
- Valery Naranjo
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano, HUMAN-tech, Universitat Politècnica de València, Valencia, Spain

18
Li Z, Li X, Wu W, Lyu H, Tang X, Zhou C, Xu F, Luo B, Jiang Y, Liu X, Xiang W. A novel dilated contextual attention module for breast cancer mitosis cell detection. Front Physiol 2024; 15:1337554. [PMID: 38332988 PMCID: PMC10850563 DOI: 10.3389/fphys.2024.1337554] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Accepted: 01/03/2024] [Indexed: 02/10/2024] Open
Abstract
Background: Mitotic count (MC) is a critical histological parameter for accurately assessing the degree of invasiveness in breast cancer, holding significant clinical value for cancer treatment and prognosis. However, accurately identifying mitotic cells poses a challenge due to their morphological and size diversity. Objective: We propose a novel end-to-end deep-learning method for identifying mitotic cells in breast cancer pathological images, with the aim of enhancing the performance of recognizing mitotic cells. Methods: We introduced the Dilated Cascading Network (DilCasNet) composed of detection and classification stages. To enhance the model's ability to capture distant feature dependencies in mitotic cells, we devised a novel Dilated Contextual Attention Module (DiCoA) that utilizes sparse global attention during detection. For reclassifying mitotic cell areas localized in the detection stage, we integrate the EfficientNet-B7 and VGG16 pre-trained models (InPreMo) in the classification step. Results: Based on the canine mammary carcinoma (CMC) mitosis dataset, DilCasNet demonstrates superior overall performance compared to the benchmark model. The specific metrics of the model's performance are as follows: F1 score of 82.9%, Precision of 82.6%, and Recall of 83.2%. With the incorporation of the DiCoA attention module, the model exhibited an improvement of over 3.5% in F1 score during the detection stage. Conclusion: DilCasNet achieved favorable detection performance for mitotic cells in breast cancer and provides a solution for detecting mitotic cells in pathological images of other cancers.
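The module names in this abstract (DiCoA, InPreMo) are specific to the paper; as a rough illustration of the general idea of using dilated context to gate features, a sketch is shown below. Note that the published DiCoA uses sparse global attention, which this simplified convolutional gating does not reproduce.

```python
# Generic "dilated contextual gating" block: parallel dilated convolutions enlarge the
# receptive field, and their fused response gates the input features (with a residual path).
# This is an illustrative reconstruction, not the published DiCoA module.
import torch
import torch.nn as nn

class DilatedContextAttention(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d, bias=False)
            for d in dilations
        ])
        self.fuse = nn.Sequential(
            nn.Conv2d(channels * len(dilations), channels, kernel_size=1),
            nn.Sigmoid(),                       # attention map in [0, 1]
        )

    def forward(self, x):
        context = torch.cat([b(x) for b in self.branches], dim=1)
        return x * self.fuse(context) + x       # gated features plus residual

feats = torch.randn(1, 64, 32, 32)
out = DilatedContextAttention(64)(feats)        # same shape as the input
```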
Affiliation(s)
- Zhiqiang Li
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- Xiangkui Li
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Weixuan Wu
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- He Lyu
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- Xuezhi Tang
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- Chenchen Zhou
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- Fanxin Xu
- Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing, China
- Bin Luo
- Sichuan Huhui Software Co., LTD., Mianyang, Sichuan, China
- Yulian Jiang
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- Xingwen Liu
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- Wei Xiang
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China

19
Gu H, Yang C, Al-Kharouf I, Magaki S, Lakis N, Williams CK, Alrosan SM, Onstott EK, Yan W, Khanlou N, Cobos I, Zhang XR, Zarrin-Khameh N, Vinters HV, Chen XA, Haeri M. Enhancing mitosis quantification and detection in meningiomas with computational digital pathology. Acta Neuropathol Commun 2024; 12:7. [PMID: 38212848 PMCID: PMC10782692 DOI: 10.1186/s40478-023-01707-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2023] [Accepted: 12/10/2023] [Indexed: 01/13/2024] Open
Abstract
Mitosis is a critical criterion for meningioma grading. However, pathologists' assessment of mitoses is subject to significant inter-observer variation due to challenges in locating mitosis hotspots and accurately detecting mitotic figures. To address this issue, we leverage digital pathology and propose a computational strategy to enhance pathologists' mitosis assessment. The strategy has two components: (1) a depth-first search algorithm that quantifies the mathematically maximum mitotic count in 10 consecutive high-power fields, which improves precision, especially in cases with a borderline mitotic count; and (2) a collaborative sphere that groups a set of pathologists to detect mitoses in each high-power field, which can mitigate subjective random errors in mitosis detection originating from individual detection errors. Using the depth-first search algorithm (1), we analyzed 19 meningioma slides and discovered that the proposed algorithm upgraded two borderline cases verified at consensus conferences. This improvement is attributed to the algorithm's ability to quantify the mitotic count more comprehensively than conventional methods of counting mitoses. In implementing the collaborative sphere (2), we evaluated the correctness of mitosis detection from grouped pathologists and/or pathology residents, where each member of the group independently annotated a set of 48 high-power field images for mitotic figures. We report that groups of three can achieve an average precision of 0.897 and sensitivity of 0.699 in mitosis detection, higher than that of the average individual pathologist in this study (precision: 0.750, sensitivity: 0.667). The proposed computational strategy can be integrated with artificial intelligence workflows, pointing toward rapid and robust mitosis assessment by interactive assisting algorithms that can ultimately benefit patient management.
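Component (1) amounts to maximizing the summed mitotic count over runs of consecutive high-power fields. A simplified 1-D version (a sliding window along one strip of fields) is sketched below; the published method searches connected runs of adjacent fields on the slide via depth-first search, which is not reproduced here, and the counts are made up.

```python
# Maximum total mitoses over any `window` consecutive high-power-field (HPF) counts,
# computed with prefix sums. Simplified 1-D stand-in for the published 2-D search.
from itertools import accumulate

def max_count_in_consecutive_hpfs(counts, window: int = 10) -> int:
    """Maximum sum over any `window` consecutive HPF counts."""
    if len(counts) < window:
        return sum(counts)
    prefix = [0, *accumulate(counts)]
    return max(prefix[i + window] - prefix[i] for i in range(len(counts) - window + 1))

# Per-HPF mitosis counts along one strip of a slide (made-up numbers).
counts = [0, 1, 0, 2, 3, 1, 0, 0, 4, 2, 1, 0, 1, 5, 0, 0]
print(max_count_in_consecutive_hpfs(counts))   # 17 for this example
```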
Affiliation(s)
- Hongyan Gu
- Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Chunxu Yang
- Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Issa Al-Kharouf
- Pathology and Laboratory Medicine, The University of Kansas Medical Center, Kansas City, KS, 66160, USA
- Shino Magaki
- Pathology and Laboratory Medicine, UCLA David Geffen School of Medicine, Los Angeles, CA, 90095, USA
- Nelli Lakis
- Pathology and Laboratory Medicine, The University of Kansas Medical Center, Kansas City, KS, 66160, USA
- Christopher Kazu Williams
- Pathology and Laboratory Medicine, UCLA David Geffen School of Medicine, Los Angeles, CA, 90095, USA
- Sallam Mohammad Alrosan
- Pathology and Laboratory Medicine, The University of Kansas Medical Center, Kansas City, KS, 66160, USA
- Ellie Kate Onstott
- Pathology and Laboratory Medicine, The University of Kansas Medical Center, Kansas City, KS, 66160, USA
- Wenzhong Yan
- Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Negar Khanlou
- Pathology and Laboratory Medicine, UCLA David Geffen School of Medicine, Los Angeles, CA, 90095, USA
- Inma Cobos
- Department of Pathology, Stanford Medical School, Stanford, CA, 94305, USA
- Harry V Vinters
- Pathology and Laboratory Medicine, UCLA David Geffen School of Medicine, Los Angeles, CA, 90095, USA
- Xiang Anthony Chen
- Electrical and Computer Engineering, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Mohammad Haeri
- Pathology and Laboratory Medicine, The University of Kansas Medical Center, Kansas City, KS, 66160, USA

20
Schreiber BA, Denholm J, Jaeckle F, Arends MJ, Branson KM, Schönlieb CB, Soilleux EJ. Rapid artefact removal and H&E-stained tissue segmentation. Sci Rep 2024; 14:309. [PMID: 38172562 PMCID: PMC10764721 DOI: 10.1038/s41598-023-50183-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2023] [Accepted: 12/16/2023] [Indexed: 01/05/2024] Open
Abstract
We present an innovative method for rapidly segmenting haematoxylin and eosin (H&E)-stained tissue in whole-slide images (WSIs) that eliminates a wide range of undesirable artefacts such as pen marks and scanning artefacts. Our method involves taking a single-channel representation of a low-magnification RGB overview of the WSI in which the pixel values are bimodally distributed such that H&E-stained tissue is easily distinguished from both background and a wide variety of artefacts. We demonstrate our method on 30 WSIs prepared from a wide range of institutions and WSI digital scanners, each containing substantial artefacts, and compare it to segmentations provided by Otsu thresholding and Histolab tissue segmentation and pen filtering tools. We found that our method segmented the tissue and fully removed all artefacts in 29 out of 30 WSIs, whereas Otsu thresholding failed to remove any artefacts, and the Histolab pen filtering tools only partially removed the pen marks. The beauty of our approach lies in its simplicity: manipulating RGB colour space and using Otsu thresholding allows for the segmentation of H&E-stained tissue and the rapid removal of artefacts without the need for machine learning or parameter tuning.
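The core of the method is a single-channel representation whose values separate tissue from background and artefacts, followed by Otsu thresholding. The exact colour-space manipulation is not given in this abstract, so the sketch below substitutes the HSV saturation channel purely as a stand-in; it shows the thresholding pattern, not the published transform.

```python
# Illustrative tissue-vs-background segmentation via Otsu thresholding on a single
# channel of a low-magnification overview. The paper derives its own single-channel
# representation designed to suppress pen marks and scanner artefacts; that transform
# is not reproduced here, so saturation is used only as a stand-in.
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def tissue_mask(rgb_overview: np.ndarray, min_size: int = 500) -> np.ndarray:
    """Boolean mask of likely H&E tissue in a low-mag RGB overview (H x W x 3, uint8)."""
    saturation = rgb2hsv(rgb_overview)[..., 1]          # stained tissue is more saturated
    mask = saturation > threshold_otsu(saturation)      # bimodal split: tissue vs background
    return remove_small_objects(mask, min_size=min_size)

# Toy example: grey background with a crudely H&E-like blob.
overview = np.full((200, 200, 3), 235, dtype=np.uint8)
overview[60:140, 50:160] = (220, 120, 160)
print(tissue_mask(overview).sum())                      # pixels flagged as tissue
```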
Affiliation(s)
- B A Schreiber
- Department of Pathology, University of Cambridge, Tennis Court Road, Cambridge, CB2 1QP, Cambridgeshire, UK
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, Cambridgeshire, UK
- J Denholm
- Department of Pathology, University of Cambridge, Tennis Court Road, Cambridge, CB2 1QP, Cambridgeshire, UK
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, Cambridgeshire, UK
- Lyzeum Ltd., Cambridge, CB1 2LA, Cambridgeshire, UK
- F Jaeckle
- Department of Pathology, University of Cambridge, Tennis Court Road, Cambridge, CB2 1QP, Cambridgeshire, UK
- Lyzeum Ltd., Cambridge, CB1 2LA, Cambridgeshire, UK
- M J Arends
- Edinburgh Pathology, Institute of Genetics and Cancer, University of Edinburgh, Crewe Road, Edinburgh, EH4 2XR, UK
- K M Branson
- Artificial Intelligence and Machine Learning, GSK plc., Great West Road, Brentford, TW8 9GS, Middlesex, UK
- C-B Schönlieb
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, Cambridgeshire, UK
- Lyzeum Ltd., Cambridge, CB1 2LA, Cambridgeshire, UK
- E J Soilleux
- Department of Pathology, University of Cambridge, Tennis Court Road, Cambridge, CB2 1QP, Cambridgeshire, UK
- Lyzeum Ltd., Cambridge, CB1 2LA, Cambridgeshire, UK

21
Farooq H, Saleem S, Aleem I, Iftikhar A, Sheikh UN, Naveed H. Toward interpretable and generalized mitosis detection in digital pathology using deep learning. Digit Health 2024; 10:20552076241255471. [PMID: 38778869 PMCID: PMC11110526 DOI: 10.1177/20552076241255471] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2023] [Accepted: 05/01/2024] [Indexed: 05/25/2024] Open
Abstract
Objective The mitotic activity index is an important prognostic factor in the diagnosis of cancer. The task of mitosis detection is difficult as the nuclei are microscopic in size and partially labeled, and there are many more non-mitotic nuclei compared to mitotic ones. In this paper, we highlight the challenges of current mitosis detection pipelines and propose a method to tackle these challenges. Methods Our proposed methodology is inspired by recent research on deep learning and an extensive analysis of the dataset and training pipeline. We first used the MIDOG'22 dataset for training, validation, and testing. We then tested the methodology without fine-tuning on the TUPAC'16 dataset and on a real-time case from Shaukat Khanum Memorial Cancer Hospital and Research Centre. Results Our methodology has shown promising results both quantitatively and qualitatively. Quantitatively, our methodology achieved an F1-score of 0.87 on the MIDOG'22 dataset and an F1-score of 0.83 on the TUPAC dataset. Qualitatively, our methodology is generalizable and interpretable across various datasets and clinical settings. Conclusion In this paper, we highlight the challenges of current mitosis detection pipelines and propose a method that can accurately predict mitotic nuclei. We illustrate the accuracy, generalizability, and interpretability of our approach across various datasets and clinical settings. Our methodology can speed up the adoption of computer-aided digital pathology in clinical settings.
Affiliation(s)
- Hasan Farooq
- Computational Biology Research Lab, National University of Computer & Emerging Sciences, Islamabad, Pakistan
- Saira Saleem
- Shaukat Khanum Memorial Cancer Hospital and Research Centre, Lahore, Pakistan
- Iffat Aleem
- Shaukat Khanum Memorial Cancer Hospital and Research Centre, Lahore, Pakistan
- Ayesha Iftikhar
- Shaukat Khanum Memorial Cancer Hospital and Research Centre, Lahore, Pakistan
- Umer Nisar Sheikh
- Shaukat Khanum Memorial Cancer Hospital and Research Centre, Lahore, Pakistan
- Hammad Naveed
- Computational Biology Research Lab, National University of Computer & Emerging Sciences, Islamabad, Pakistan

22
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509 DOI: 10.1016/j.neunet.2023.11.006] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 10/21/2023] [Accepted: 11/04/2023] [Indexed: 11/19/2023]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues. Hence, detecting cancer at an early stage is highly essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely, the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNN) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, future trends, and challenges in this domain are discussed. The ultimate goal of this paper is to provide comprehensive and insightful information to researchers who have a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
- School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray
- Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer
- Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak
- School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India

23
Wagner SJ, Matek C, Shetab Boushehri S, Boxberg M, Lamm L, Sadafi A, Winter DJE, Marr C, Peng T. Built to Last? Reproducibility and Reusability of Deep Learning Algorithms in Computational Pathology. Mod Pathol 2024; 37:100350. [PMID: 37827448 DOI: 10.1016/j.modpat.2023.100350] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Revised: 10/02/2023] [Accepted: 10/03/2023] [Indexed: 10/14/2023]
Abstract
Recent progress in computational pathology has been driven by deep learning. While code and data availability are essential to reproduce findings from preceding publications, ensuring a deep learning model's reusability is more challenging. For that, the codebase should be well-documented and easy to integrate into existing workflows and models should be robust toward noise and generalizable toward data from different sources. Strikingly, only a few computational pathology algorithms have been reused by other researchers so far, let alone employed in a clinical setting. To assess the current state of reproducibility and reusability of computational pathology algorithms, we evaluated peer-reviewed articles available in PubMed, published between January 2019 and March 2021, in 5 use cases: stain normalization; tissue type segmentation; evaluation of cell-level features; genetic alteration prediction; and inference of grading, staging, and prognostic information. We compiled criteria for data and code availability and statistical result analysis and assessed them in 160 publications. We found that only one-quarter (41 of 160 publications) made code publicly available. Among these 41 studies, three-quarters (30 of 41) analyzed their results statistically, half of them (20 of 41) released their trained model weights, and approximately a third (16 of 41) used an independent cohort for evaluation. Our review is intended for both pathologists interested in deep learning and researchers applying algorithms to computational pathology challenges. We provide a detailed overview of publications with published code in the field, list reusable data handling tools, and provide criteria for reproducibility and reusability.
Affiliation(s)
- Sophia J Wagner
- Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; School of Computation, Information and Technology, Technical University of Munich, Garching, Germany
- Christian Matek
- Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Institute of Pathology, University Hospital Erlangen, Erlangen, Germany
- Sayedali Shetab Boushehri
- School of Computation, Information and Technology, Technical University of Munich, Garching, Germany; Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Data & Analytics (D&A), Roche Pharma Research and Early Development (pRED), Roche Innovation Center Munich, Germany
- Melanie Boxberg
- Institute of Pathology, Technical University Munich, Munich, Germany; Institute of Pathology Munich-North, Munich, Germany
- Lorenz Lamm
- Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; Helmholtz Pioneer Campus, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Ario Sadafi
- School of Computation, Information and Technology, Technical University of Munich, Garching, Germany; Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Dominik J E Winter
- Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany; School of Life Sciences, Technical University of Munich, Weihenstephan, Germany
- Carsten Marr
- Institute of AI for Health, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany
- Tingying Peng
- Helmholtz AI, Helmholtz Munich-German Research Center for Environmental Health, Neuherberg, Germany

24
Priya C V L, V G B, B R V, Ramachandran S. Deep learning approaches for breast cancer detection in histopathology images: A review. Cancer Biomark 2024; 40:1-25. [PMID: 38517775 PMCID: PMC11191493 DOI: 10.3233/cbm-230251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/24/2024]
Abstract
BACKGROUND Breast cancer is one of the leading causes of death in women worldwide. Histopathology analysis of breast tissue is an essential tool for diagnosing and staging breast cancer. In recent years, there has been a significant increase in research exploring the use of deep-learning approaches for breast cancer detection from histopathology images. OBJECTIVE To provide an overview of the current state-of-the-art technologies in automated breast cancer detection in histopathology images using deep learning techniques. METHODS This review focuses on the use of deep learning algorithms for the detection and classification of breast cancer from histopathology images. We provide an overview of publicly available histopathology image datasets for breast cancer detection. We also highlight the strengths and weaknesses of these architectures and their performance on different histopathology image datasets. Finally, we discuss the challenges associated with using deep learning techniques for breast cancer detection, including the need for large and diverse datasets and the interpretability of deep learning models. RESULTS Deep learning techniques have shown great promise in accurately detecting and classifying breast cancer from histopathology images. Although the accuracy levels vary depending on the specific data set, image pre-processing techniques, and deep learning architecture used, these results highlight the potential of deep learning algorithms in improving the accuracy and efficiency of breast cancer detection from histopathology images. CONCLUSION This review has presented a thorough account of the current state-of-the-art techniques for detecting breast cancer using histopathology images. The integration of machine learning and deep learning algorithms has demonstrated promising results in accurately identifying breast cancer from histopathology images. The insights gathered from this review can act as a valuable reference for researchers in this field who are developing diagnostic strategies using histopathology images. Overall, the objective of this review is to spark interest among scholars in this complex field and acquaint them with cutting-edge technologies in breast cancer detection using histopathology images.
Affiliation(s)
- Lakshmi Priya C V
- Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Biju V G
- Department of Electronics and Communication Engineering, College of Engineering Munnar, Kerala, India
- Vinod B R
- Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Sivakumar Ramachandran
- Department of Electronics and Communication Engineering, Government Engineering College Wayanad, Kerala, India

25
Sauter D, Lodde G, Nensa F, Schadendorf D, Livingstone E, Kukuk M. A Systematic Comparison of Task Adaptation Techniques for Digital Histopathology. Bioengineering (Basel) 2023; 11:19. [PMID: 38247897 PMCID: PMC10813343 DOI: 10.3390/bioengineering11010019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2023] [Revised: 12/20/2023] [Accepted: 12/21/2023] [Indexed: 01/23/2024] Open
Abstract
Due to an insufficient amount of image annotation, artificial intelligence in computational histopathology usually relies on fine-tuning pre-trained neural networks. While vanilla fine-tuning has been shown to be effective, research on computer vision has recently proposed improved algorithms, promising better accuracy. While initial studies have demonstrated the benefits of these algorithms for medical AI, in particular for radiology, there is no empirical evidence for improved accuracy in histopathology. Therefore, based on the ConvNeXt architecture, our study performs a systematic comparison of nine task adaptation techniques, namely, DELTA, L2-SP, MARS-PGM, Bi-Tuning, BSS, MultiTune, SpotTune, Co-Tuning, and vanilla fine-tuning, on five histopathological classification tasks using eight datasets. The results are based on external testing and statistical validation and reveal a multifaceted picture: some techniques are better suited for histopathology than others, but depending on the classification task, a significant relative improvement in accuracy was observed for five advanced task adaptation techniques over the control method, i.e., vanilla fine-tuning (e.g., Co-Tuning: P(≫) = 0.942, d = 2.623). Furthermore, we studied the classification accuracy for three of the nine methods with respect to the training set size (e.g., Co-Tuning: P(≫) = 0.951, γ = 0.748). Overall, our results show that the performance of advanced task adaptation techniques in histopathology is affected by factors such as the specific classification task or the size of the training dataset.
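For orientation, the control condition in this comparison (vanilla fine-tuning of a pre-trained ConvNeXt) looks roughly like the sketch below; the advanced techniques being compared (DELTA, L2-SP, Co-Tuning, and so on) are not shown, and the class count, optimizer settings, and data are placeholders.

```python
# Vanilla fine-tuning of a pre-trained ConvNeXt on a placeholder histopathology task.
import torch
import torch.nn as nn
import torchvision

num_classes = 5                                           # e.g., tissue categories (placeholder)
model = torchvision.models.convnext_tiny(weights="IMAGENET1K_V1")  # downloads ImageNet weights
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, num_classes)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One vanilla fine-tuning step: all weights are updated with a plain task loss."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return float(loss)

# Dummy batch standing in for histopathology tiles.
loss = fine_tune_step(torch.rand(4, 3, 224, 224), torch.randint(0, num_classes, (4,)))
```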
Affiliation(s)
- Daniel Sauter
- Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany
- Georg Lodde
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Felix Nensa
- Institute for AI in Medicine (IKIM), University Hospital Essen, 45131 Essen, Germany
- Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, 45147 Essen, Germany
- Dirk Schadendorf
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Elisabeth Livingstone
- Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Markus Kukuk
- Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany

26
Bertram CA, Bartel A, Donovan TA, Kiupel M. Atypical Mitotic Figures Are Prognostically Meaningful for Canine Cutaneous Mast Cell Tumors. Vet Sci 2023; 11:5. [PMID: 38275921 PMCID: PMC10821277 DOI: 10.3390/vetsci11010005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2023] [Revised: 12/15/2023] [Accepted: 12/18/2023] [Indexed: 01/27/2024] Open
Abstract
Cell division through mitosis (microscopically visible as mitotic figures, MFs) is a highly regulated process. However, neoplastic cells may exhibit errors in chromosome segregation (microscopically visible as atypical mitotic figures, AMFs), resulting in aberrant chromosome structures. AMFs have been shown to be of prognostic relevance for some neoplasms in humans but not in animals. In this study, the prognostic relevance of AMFs was evaluated for canine cutaneous mast cell tumors (ccMCT). Histological examination was conducted by one pathologist in whole slide images of 96 cases of ccMCT with a known survival time. Tumor-related death occurred in 11/18 high-grade and 2/78 low-grade cases (2011 two-tier system). The area under the curve (AUC) was 0.859 for the AMF count and 0.880 for the AMF to MF ratio with regard to tumor-related mortality. In comparison, the AUC for the mitotic count was 0.885. Based on our data, a prognostically meaningful threshold of ≥3 per 2.37 mm2 for the AMF count (sensitivity: 76.9%, specificity: 98.8%) and >7.5% for the AMF:MF ratio (sensitivity: 76.9%, specificity: 100%) is suggested. While a mitotic count threshold of ≥6 resulted in six false positive cases, these could be eliminated when combined with the AMF to MF ratio. In conclusion, the results of this study suggest that AMF enumeration is a prognostically valuable test, particularly due to its high specificity with regard to tumor-related mortality. Additional validation and reproducibility studies are needed to further evaluate AMFs as a prognostic criterion for ccMCT and other tumor types.
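The suggested cutoffs translate into a simple decision rule; the helper below applies them to placeholder counts. The AMF:MF ratio is taken here as the AMF count divided by the total mitotic count, which is an assumption about the exact denominator, and the counts are assumed to come from the standardized 2.37 mm2 area.

```python
# Apply the abstract's suggested thresholds: AMF count >= 3 per 2.37 mm2 or
# AMF:MF ratio > 7.5% flags a tumor as high risk. Inputs are placeholder counts.
def amf_flags(mf_count: int, amf_count: int) -> dict:
    ratio = amf_count / mf_count if mf_count else 0.0
    return {
        "amf_count": amf_count,
        "amf_to_mf_ratio_pct": round(100 * ratio, 1),
        "high_risk_by_count": amf_count >= 3,
        "high_risk_by_ratio": ratio > 0.075,
    }

print(amf_flags(mf_count=40, amf_count=4))
# {'amf_count': 4, 'amf_to_mf_ratio_pct': 10.0, 'high_risk_by_count': True, 'high_risk_by_ratio': True}
```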
Affiliation(s)
- Christof A. Bertram
- Institute of Veterinary Pathology, University of Veterinary Medicine Vienna, 1210 Vienna, Austria
- Alexander Bartel
- Institute for Veterinary Epidemiology and Biostatistics, Freie Universität Berlin, 14163 Berlin, Germany
- Taryn A. Donovan
- Department of Anatomic Pathology, The Schwarzman Animal Medical Center, New York, NY 10065, USA
- Matti Kiupel
- Veterinary Diagnostic Laboratory, Michigan State University, Lansing, MI 48910, USA

27
Jesus R, Bastião Silva L, Sousa V, Carvalho L, Garcia Gonzalez D, Carias J, Costa C. Personalizable AI platform for universal access to research and diagnosis in digital pathology. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 242:107787. [PMID: 37717524 DOI: 10.1016/j.cmpb.2023.107787] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Revised: 08/22/2023] [Accepted: 09/01/2023] [Indexed: 09/19/2023]
Abstract
BACKGROUND AND MOTIVATION Digital pathology has been evolving over the last years, proposing significant workflow advantages that have fostered its adoption in professional environments. Patient clinical and image data are readily available in remote data banks that can be consumed efficiently over standard communication technologies. The appearance of new imaging techniques and advanced artificial intelligence algorithms has significantly reduced the burden on medical professionals by speeding up the screening process. Despite these advancements, the usage of digital pathology in professional environments has been slowed down by poor interoperability between services resulting from a lack of standard interfaces and integrative solutions. This work addresses this issue by proposing a cloud-based digital pathology platform built on standard and open interfaces. METHODS The work proposes and describes a vendor-neutral platform that provides interfaces for managing digital slides, and medical reports, and integrating digital image analysis services compatible with existing standards. The solution integrates the open-source plugin-based Dicoogle PACS for interoperability and extensibility, which grants the proposed solution great feature customization. RESULTS The solution was developed in collaboration with iPATH research project partners, including the validation by medical pathologists. The result is a pure Web collaborative framework that supports both research and production environments. A total of 566 digital slides from different pathologies were successfully uploaded to the platform. Using the integration interfaces, a mitosis detection algorithm was successfully installed into the platform, and it was trained with 2400 annotations collected from breast carcinoma images. CONCLUSION Interoperability is a key factor when discussing digital pathology solutions, as it facilitates their integration into existing institutions' information systems. Moreover, it improves data sharing and integration of third-party services such as image analysis services, which have become relevant in today's digital pathology workflow. The proposed solution fully embraces the DICOM standard for digital pathology, presenting an interoperable cloud-based solution that provides great feature customization thanks to its extensible architecture.
Affiliation(s)
- Rui Jesus
- University of A. Coruña, A Coruña, Spain; BMD Software, Aveiro, Portugal
- Vítor Sousa
- Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- Lina Carvalho
- Faculty of Medicine, University of Coimbra, Coimbra, Portugal
- João Carias
- Center for Computer Graphics, Braga, Portugal
- Carlos Costa
- IEETA/DETI, University of Aveiro, Aveiro, Portugal

28
Gao W, Wang D, Huang Y. Designing a Deep Learning-Driven Resource-Efficient Diagnostic System for Metastatic Breast Cancer: Reducing Long Delays of Clinical Diagnosis and Improving Patient Survival in Developing Countries. Cancer Inform 2023; 22:11769351231214446. [PMID: 38033362 PMCID: PMC10683375 DOI: 10.1177/11769351231214446] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Accepted: 10/27/2023] [Indexed: 12/02/2023] Open
Abstract
Breast cancer is one of the leading causes of cancer mortality. Breast cancer patients in developing countries, especially sub-Saharan Africa, South Asia, and South America, suffer from the highest mortality rate in the world. One crucial factor contributing to the global disparity in mortality rate is long delays in diagnosis due to a severe shortage of trained pathologists, which has led to a large proportion of late-stage presentation at diagnosis. To tackle this critical healthcare disparity, we have developed a deep learning-based diagnosis system for metastatic breast cancer that can achieve high diagnostic accuracy as well as computational efficiency and mobile readiness suitable for an under-resourced environment. We evaluated 4 Convolutional Neural Network (CNN) architectures: MobileNetV2, VGG16, ResNet50 and ResNet101. The MobileNetV2-based diagnostic model outperformed the more complex VGG16, ResNet50 and ResNet101 models in diagnostic accuracy, model generalization, and model training efficiency. The ROC AUC of MobileNetV2 (0.933, 95% CI: 0.930, 0.936) was higher than VGG16 (0.911, 95% CI: 0.908, 0.915), ResNet50 (0.869, 95% CI: 0.866, 0.873), and ResNet101 (0.873, 95% CI: 0.869, 0.876). The time per inference step for the MobileNetV2 model (15 ms/step) was substantially lower than that of VGG16 (48 ms/step), ResNet50 (37 ms/step), and ResNet101 (56 ms/step). The visual comparisons between the model prediction and ground truth have demonstrated that the MobileNetV2 diagnostic models can identify very small cancerous nodes embedded in a large area of normal cells, which is challenging for manual image analysis. Equally important, the lightweight MobileNetV2 models were computationally efficient and ready for mobile devices or devices of low computational power. These advances empower the development of a resource-efficient and high-performing AI-based metastatic breast cancer diagnostic system that can adapt to under-resourced healthcare facilities in developing countries.
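The per-inference-step comparison can be reproduced in spirit with a small timing harness over the same four backbone families; absolute numbers depend entirely on hardware and batch size, and the models below are instantiated without the study's trained weights.

```python
# Rough timing harness: average forward-pass latency of each backbone on identical batches.
import time
import torch
import torchvision.models as models

backbones = {
    "MobileNetV2": models.mobilenet_v2(weights=None),
    "VGG16": models.vgg16(weights=None),
    "ResNet50": models.resnet50(weights=None),
    "ResNet101": models.resnet101(weights=None),
}
batch = torch.rand(2, 3, 224, 224)

for name, net in backbones.items():
    net.eval()
    with torch.no_grad():
        net(batch)                              # warm-up pass
        start = time.perf_counter()
        for _ in range(3):
            net(batch)
        ms = (time.perf_counter() - start) / 3 * 1000
    print(f"{name}: {ms:.0f} ms per step")
```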
Affiliation(s)
- William Gao
- Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore, MD, USA
- Yi Huang
- Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore, MD, USA

29
AlGhamdi R. Mitotic Nuclei Segmentation and Classification Using Chaotic Butterfly Optimization Algorithm with Deep Learning on Histopathology Images. Biomimetics (Basel) 2023; 8:474. [PMID: 37887605 PMCID: PMC10604189 DOI: 10.3390/biomimetics8060474] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2023] [Revised: 09/05/2023] [Accepted: 09/20/2023] [Indexed: 10/28/2023] Open
Abstract
Histopathological grading of tumors provides insight into the patient's disease condition and helps in customizing treatment plans. Mitotic nuclei classification involves the categorization and identification of nuclei in histopathological images based on whether they are undergoing the cell division (mitosis) process or not. This is an essential procedure in several research and medical contexts, especially in the diagnosis and prognosis of cancer. Mitotic nuclei classification is a challenging task because the nuclei are very small and mitotic figures vary considerably in appearance. Automated counting of mitotic nuclei is further complicated by their great similarity to non-mitotic nuclei and their heteromorphic appearance. Both Computer Vision (CV) and Machine Learning (ML) approaches are used in the automated identification and categorization of mitotic nuclei in histopathological images that are undergoing cell division (mitosis). With this background, the current research article introduces the mitotic nuclei segmentation and classification using the chaotic butterfly optimization algorithm with deep learning (MNSC-CBOADL) technique. The main objective of the MNSC-CBOADL technique is to perform automated segmentation and classification of mitotic nuclei. In the presented MNSC-CBOADL technique, the U-Net model is initially applied for the purpose of segmentation. Additionally, the MNSC-CBOADL technique applies the Xception model for feature vector generation. For the classification process, the MNSC-CBOADL technique employs the deep belief network (DBN) algorithm. In order to enhance the detection performance of the DBN approach, the CBOA is used for hyperparameter tuning. The proposed MNSC-CBOADL system was validated through simulation using the benchmark database. The extensive results confirmed the superior performance of the proposed MNSC-CBOADL system in the classification of mitotic nuclei.
Affiliation(s)
- Rayed AlGhamdi
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia

30
Ahn JS, Shin S, Yang SA, Park EK, Kim KH, Cho SI, Ock CY, Kim S. Artificial Intelligence in Breast Cancer Diagnosis and Personalized Medicine. J Breast Cancer 2023; 26:405-435. [PMID: 37926067 PMCID: PMC10625863 DOI: 10.4048/jbc.2023.26.e45] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2023] [Revised: 09/25/2023] [Accepted: 10/06/2023] [Indexed: 11/07/2023] Open
Abstract
Breast cancer is a significant cause of cancer-related mortality in women worldwide. Early and precise diagnosis is crucial, and clinical outcomes can be markedly enhanced. The rise of artificial intelligence (AI) has ushered in a new era, notably in image analysis, paving the way for major advancements in breast cancer diagnosis and individualized treatment regimens. In the diagnostic workflow for patients with breast cancer, the role of AI encompasses screening, diagnosis, staging, biomarker evaluation, prognostication, and therapeutic response prediction. Although its potential is immense, its complete integration into clinical practice is challenging. Particularly, these challenges include the imperatives for extensive clinical validation, model generalizability, navigating the "black-box" conundrum, and pragmatic considerations of embedding AI into everyday clinical environments. In this review, we comprehensively explored the diverse applications of AI in breast cancer care, underlining its transformative promise and existing impediments. In radiology, we specifically address AI in mammography, tomosynthesis, risk prediction models, and supplementary imaging methods, including magnetic resonance imaging and ultrasound. In pathology, our focus is on AI applications for pathologic diagnosis, evaluation of biomarkers, and predictions related to genetic alterations, treatment response, and prognosis in the context of breast cancer diagnosis and treatment. Our discussion underscores the transformative potential of AI in breast cancer management and emphasizes the importance of focused research to realize the full spectrum of benefits of AI in patient care.
Affiliation(s)
- Seokhwi Kim
- Department of Pathology, Ajou University School of Medicine, Suwon, Korea
- Department of Biomedical Sciences, Ajou University Graduate School of Medicine, Suwon, Korea

31
Chen Y, Shao Z, Bian H, Fang Z, Wang Y, Cai Y, Wang H, Liu G, Li X, Zhang Y. dMIL-Transformer: Multiple Instance Learning Via Integrating Morphological and Spatial Information for Lymph Node Metastasis Classification. IEEE J Biomed Health Inform 2023; 27:4433-4443. [PMID: 37310831 DOI: 10.1109/jbhi.2023.3285275] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Automated classification of lymph node metastasis (LNM) plays an important role in diagnosis and prognosis. However, it is very challenging to achieve satisfactory performance in LNM classification, because both the morphology and spatial distribution of tumor regions should be taken into account. To address this problem, this article proposes a two-stage dMIL-Transformer framework, which integrates both the morphological and spatial information of the tumor regions based on the theory of multiple instance learning (MIL). In the first stage, a double Max-Min MIL (dMIL) strategy is devised to select the suspected top-K positive instances from each input histopathology image, which contains tens of thousands of patches (primarily negative). The dMIL strategy enables a better decision boundary for selecting the critical instances compared with other methods. In the second stage, a Transformer-based MIL aggregator is designed to integrate all the morphological and spatial information of the selected instances from the first stage. The self-attention mechanism is further employed to characterize the correlation between different instances and learn the bag-level representation for predicting the LNM category. The proposed dMIL-Transformer effectively handles the challenging LNM classification task while offering good visualization and interpretability. We conduct various experiments over three LNM datasets and achieve a 1.79%-7.50% performance improvement compared with other state-of-the-art methods.
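The two-stage structure described above (instance selection followed by Transformer aggregation) can be sketched generically as below. Plain top-K scoring stands in for the double Max-Min selection rule, and the feature dimension, K, and head counts are placeholders.

```python
# Select top-K instances from a large bag of patch features, then aggregate them with a
# small Transformer encoder into a bag-level prediction. Generic sketch, not the published dMIL.
import torch
import torch.nn as nn

class TopKTransformerMIL(nn.Module):
    def __init__(self, feat_dim: int = 512, k: int = 32, n_classes: int = 2):
        super().__init__()
        self.k = k
        self.instance_score = nn.Linear(feat_dim, 1)          # stage 1: rank instances
        encoder_layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.aggregator = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.cls_head = nn.Linear(feat_dim, n_classes)

    def forward(self, bag_feats: torch.Tensor):               # (n_instances, feat_dim)
        scores = self.instance_score(bag_feats).squeeze(-1)
        top_idx = scores.topk(min(self.k, bag_feats.shape[0])).indices
        selected = bag_feats[top_idx].unsqueeze(0)             # (1, k, feat_dim)
        encoded = self.aggregator(selected)                    # self-attention over instances
        return self.cls_head(encoded.mean(dim=1))              # bag-level logits

logits = TopKTransformerMIL()(torch.randn(5000, 512))          # one slide with 5000 patch features
```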
32
Liu Q, Jiang N, Hao Y, Hao C, Wang W, Bian T, Wang X, Li H, Zhang Y, Kang Y, Xie F, Li Y, Jiang X, Feng Y, Mao Z, Wang Q, Gao Q, Zhang W, Cui B, Dong T. Identification of lymph node metastasis in pre-operation cervical cancer patients by weakly supervised deep learning from histopathological whole-slide biopsy images. Cancer Med 2023; 12:17952-17966. [PMID: 37559500 PMCID: PMC10523985 DOI: 10.1002/cam4.6437] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2023] [Revised: 07/28/2023] [Accepted: 07/31/2023] [Indexed: 08/11/2023] Open
Abstract
BACKGROUND Lymph node metastasis (LNM) significantly impacts the prognosis of individuals diagnosed with cervical cancer, as it is closely linked to disease recurrence and mortality and thereby influences therapeutic choices for patients. However, accurately predicting LNM prior to treatment remains challenging. Consequently, this study seeks to utilize digital pathological features extracted from histopathological slides of primary cervical cancer patients to preoperatively predict the presence of LNM. METHODS A deep learning (DL) model was trained using the Vision Transformer (ViT) and recurrent neural network (RNN) frameworks to predict LNM. This prediction was based on the analysis of 554 histopathological whole-slide images (WSIs) obtained from Qilu Hospital of Shandong University. To validate the model's performance, an external test was conducted using 336 WSIs from four other hospitals. Additionally, the efficiency of the DL model was evaluated using 190 cervical biopsy WSIs in a prospective set. RESULTS In the internal test set, our DL model achieved an area under the curve (AUC) of 0.919, with sensitivity and specificity values of 0.923 and 0.905, respectively, and an accuracy (ACC) of 0.909. The performance of the DL model remained strong in the external test set. In the prospective cohort, the AUC was 0.91, and the ACC was 0.895. Additionally, the DL model exhibited higher accuracy compared with imaging examination in the evaluation of LNM. By utilizing the transformer visualization method, we generated a heatmap that illustrates the local pathological features in primary lesions relevant to LNM. CONCLUSION DL-based image analysis has demonstrated efficiency in predicting LNM in early operable cervical cancer using biopsy WSIs. This approach has the potential to enhance therapeutic decision-making for patients diagnosed with cervical cancer.
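As a rough illustration of a ViT-plus-RNN pipeline of the kind described in the abstract, the sketch below encodes each patch with a vision transformer backbone and aggregates the patch sequence with a GRU; the torchvision vit_b_16 backbone, the GRU hidden size, and the random weights are assumptions, not the trained model from the study.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

class ViTRNNSlideClassifier(nn.Module):
    """Toy slide-level classifier: a ViT encodes each patch, a GRU aggregates the patch sequence."""
    def __init__(self, hidden=256, n_classes=2):
        super().__init__()
        self.encoder = vit_b_16(weights=None)   # randomly initialised backbone for this sketch
        self.encoder.heads = nn.Identity()      # expose the 768-d patch embeddings
        self.rnn = nn.GRU(input_size=768, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, patches):                 # patches: (n_patches, 3, 224, 224)
        feats = self.encoder(patches)           # (n_patches, 768)
        _, h = self.rnn(feats.unsqueeze(0))     # aggregate the ordered patch sequence
        return self.head(h[-1])                 # (1, n_classes) slide-level logits

if __name__ == "__main__":
    patches = torch.randn(8, 3, 224, 224)       # 8 patches sampled from one biopsy WSI
    print(ViTRNNSlideClassifier()(patches).shape)
```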
Affiliation(s)
- Qingqing Liu
- Cheeloo College of Medicine, Shandong University, Jinan City, China
| | - Nan Jiang
- Cheeloo College of Medicine, Shandong University, Jinan City, China
| | - Yiping Hao
- Cheeloo College of Medicine, Shandong University, Jinan City, China
| | - Chunyan Hao
- Department of Pathology, School of Basic Medical Science, Cheeloo College of Medicine, Shandong University, Jinan City, China
- Department of Pathology, Qilu Hospital of Shandong University, Jinan City, China
| | - Wei Wang
- Department of Pathology, Affiliated Hospital of Jining Medical University, Jining City, China
| | - Tingting Bian
- Department of Medical Imaging, Affiliated Hospital of Jining Medical University, Jining City, China
| | - Xiaohong Wang
- Department of Obstetrics and Gynecology, Jinan People's Hospital, Jinan City, China
| | - Hua Li
- Department of Obstetrics and Gynecology, Tai'an City Central Hospital, Tai'an City, China
| | - Yan Zhang
- Department of Obstetrics and Gynecology, Weifang People's Hospital, Weifang City, China
| | - Yanjun Kang
- Department of Obstetrics and Gynecology, Women and Children's Hospital, Qingdao University, Qingdao City, China
| | - Fengxiang Xie
- Department of Pathology, KingMed Diagnostics, Jinan City, China
| | - Yawen Li
- Department of Pathology, Qilu Hospital of Shandong University, Jinan City, China
| | - XuJi Jiang
- Cheeloo College of Medicine, Shandong University, Jinan City, China
| | - Yuan Feng
- Cheeloo College of Medicine, Shandong University, Jinan City, China
| | - Zhonghao Mao
- Cheeloo College of Medicine, Shandong University, Jinan City, China
| | - Qi Wang
- Department of Obstetrics and Gynecology, Shandong Provincial Qianfoshan Hospital, Shandong University, Jinan City, China
| | - Qun Gao
- Department of Obstetrics and Gynecology, The Affiliated Hospital of Qingdao University, Qingdao City, China
| | - Wenjing Zhang
- Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China
| | - Baoxia Cui
- Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China
| | - Taotao Dong
- Department of Obstetrics and Gynecology, Qilu Hospital of Shandong University, Jinan City, China
33
Cooper M, Ji Z, Krishnan RG. Machine learning in computational histopathology: Challenges and opportunities. Genes Chromosomes Cancer 2023; 62:540-556. [PMID: 37314068 DOI: 10.1002/gcc.23177] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 05/18/2023] [Accepted: 05/20/2023] [Indexed: 06/15/2023] Open
Abstract
Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images is an important part of the oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and deep learning in particular, as a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have resulted in automated models for prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have found success in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
Affiliation(s)
- Michael Cooper
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
| | - Zongliang Ji
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
| | - Rahul G Krishnan
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
34
Aubreville M, Wilm F, Stathonikos N, Breininger K, Donovan TA, Jabari S, Veta M, Ganz J, Ammeling J, van Diest PJ, Klopfleisch R, Bertram CA. A comprehensive multi-domain dataset for mitotic figure detection. Sci Data 2023; 10:484. [PMID: 37491536 PMCID: PMC10368709 DOI: 10.1038/s41597-023-02327-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2023] [Accepted: 06/22/2023] [Indexed: 07/27/2023] Open
Abstract
The prognostic value of mitotic figures in tumor tissue is well established for many tumor types, and automating this task is of high research interest. However, deep learning-based methods in particular face performance deterioration in the presence of domain shifts, which may arise from different tumor types, slide preparation, and digitization devices. We introduce the MIDOG++ dataset, an extension of the MIDOG 2021 and 2022 challenge datasets. We provide region-of-interest images from 503 histological specimens of seven different tumor types with variable morphology, with labels for a total of 11,937 mitotic figures: breast carcinoma, lung carcinoma, lymphosarcoma, neuroendocrine tumor, cutaneous mast cell tumor, cutaneous melanoma, and (sub)cutaneous soft tissue sarcoma. The specimens were processed in several laboratories using diverse scanners. We evaluated the extent of the domain shift with state-of-the-art approaches, observing notable differences with single-domain training. In a leave-one-domain-out setting, generalizability improved considerably. This mitotic figure dataset is the first to incorporate a wide domain shift based on different tumor types, laboratories, whole slide image scanners, and species.
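The leave-one-domain-out evaluation mentioned above can be written down in a few lines: train on all tumor-type domains except one, test on the held-out domain, and rotate through every domain. The tumor-type names come from the dataset description; the record structure below is an assumed placeholder for illustration.

```python
# Minimal leave-one-domain-out split over tumor-type domains.
from collections import defaultdict

DOMAINS = ["breast carcinoma", "lung carcinoma", "lymphosarcoma", "neuroendocrine tumor",
           "cutaneous mast cell tumor", "cutaneous melanoma", "soft tissue sarcoma"]

def leave_one_domain_out(records):
    """records: iterable of dicts like {'image_id': ..., 'domain': ...} (assumed structure)."""
    by_domain = defaultdict(list)
    for r in records:
        by_domain[r["domain"]].append(r)
    for held_out in DOMAINS:
        train = [r for d in DOMAINS if d != held_out for r in by_domain[d]]
        test = by_domain[held_out]
        yield held_out, train, test

if __name__ == "__main__":
    fake = [{"image_id": i, "domain": DOMAINS[i % len(DOMAINS)]} for i in range(21)]
    for held_out, train, test in leave_one_domain_out(fake):
        print(f"hold out {held_out!r}: {len(train)} train / {len(test)} test images")
```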
Affiliation(s)
| | - Frauke Wilm
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Nikolas Stathonikos
- Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands
| | - Katharina Breininger
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | | | - Samir Jabari
- Department of Neuropathology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Mitko Veta
- Medical Image Analysis Group, Eindhoven University of Technology, Eindhoven, the Netherlands
| | - Jonathan Ganz
- Technische Hochschule Ingolstadt, Ingolstadt, Germany
| | | | - Paul J van Diest
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Robert Klopfleisch
- Institute of Veterinary Pathology, Freie Universität Berlin, Berlin, Germany
| | - Christof A Bertram
- Institute of Pathology, University of Veterinary Medicine Vienna, Vienna, Austria
35
Fu Y, Karanian M, Perret R, Camara A, Le Loarer F, Jean-Denis M, Hostein I, Michot A, Ducimetiere F, Giraud A, Courreges JB, Courtet K, Laizet Y, Bendjebbar E, Du Terrail JO, Schmauch B, Maussion C, Blay JY, Italiano A, Coindre JM. Deep learning predicts patients outcome and mutations from digitized histology slides in gastrointestinal stromal tumor. NPJ Precis Oncol 2023; 7:71. [PMID: 37488222 PMCID: PMC10366108 DOI: 10.1038/s41698-023-00421-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Accepted: 06/29/2023] [Indexed: 07/26/2023] Open
Abstract
Risk assessment of gastrointestinal stromal tumor (GIST) according to the AFIP/Miettinen classification and mutational profiling are major tools for patient management. However, the AFIP/Miettinen classification depends heavily on mitotic counts, which is laborious and sometimes inconsistent between pathologists. It has also been shown to be imperfect in stratifying patients. Molecular testing is costly and time-consuming and is therefore not systematically performed in all countries. New methods to improve risk and molecular predictions are hence crucial to improve the tailoring of adjuvant therapy. We have built deep learning (DL) models on digitized HES-stained whole slide images (WSI) to predict patients' outcome and mutations. Models were trained with a cohort of 1233 GIST and validated on an independent cohort of 286 GIST. DL models yielded comparable results to the Miettinen classification for relapse-free-survival prediction in localized GIST without adjuvant Imatinib (C-index=0.83 in cross-validation and 0.72 for independent testing). DL split Miettinen intermediate risk GIST into high/low-risk groups (p value = 0.002 in the training set and p value = 0.29 in the testing set). DL models achieved an area under the receiver operating characteristic curve (AUC) of 0.81, 0.91, and 0.71 for predicting mutations in KIT, PDGFRA and wild type, respectively, in cross-validation and 0.76, 0.90, and 0.55 in independent testing. Notably, the PDGFRA exon 18 D842V mutation, which is resistant to Imatinib, was predicted with an AUC of 0.87 and 0.90 in cross-validation and independent testing, respectively. Additionally, novel histological criteria predictive of patients' outcome and mutations were identified by reviewing the tiles selected by the models. As a proof of concept, our study showed the possibility of implementing DL with digitized WSI and may represent a reproducible way to improve the tailoring of therapy and precision medicine for patients with GIST.
Affiliation(s)
- Yu Fu
- Owkin, Inc., New York, NY, USA
| | - Marie Karanian
- Cancer Research Center of Lyon, Centre Léon Bérard, Lyon, France
| | - Raul Perret
- Department of Biopathology, Institut Bergonié, Bordeaux, France
| | | | - François Le Loarer
- Department of Biopathology, Institut Bergonié, Bordeaux, France
- Faculty of Medicine, University of Bordeaux, Bordeaux, France
| | | | | | - Audrey Michot
- Department of Surgical Oncology, Institut Bergonié, Bordeaux, France
| | | | - Antoine Giraud
- Clinical Research and Clinical Epidemiology Unit, Institut Bergonié, Bordeaux, France
| | | | - Kevin Courtet
- Department of Biopathology, Institut Bergonié, Bordeaux, France
| | - Yech'an Laizet
- Department of Biopathology, Institut Bergonié, Bordeaux, France
| | | | | | | | | | - Jean-Yves Blay
- Cancer Research Center of Lyon, Centre Léon Bérard, Lyon, France
| | - Antoine Italiano
- Faculty of Medicine, University of Bordeaux, Bordeaux, France
- Department of Medicine, Institut Bergonié, Bordeaux, France
| | - Jean-Michel Coindre
- Department of Biopathology, Institut Bergonié, Bordeaux, France
- Faculty of Medicine, University of Bordeaux, Bordeaux, France
36
Distante A, Marandino L, Bertolo R, Ingels A, Pavan N, Pecoraro A, Marchioni M, Carbonara U, Erdem S, Amparore D, Campi R, Roussel E, Caliò A, Wu Z, Palumbo C, Borregales LD, Mulders P, Muselaers CHJ. Artificial Intelligence in Renal Cell Carcinoma Histopathology: Current Applications and Future Perspectives. Diagnostics (Basel) 2023; 13:2294. [PMID: 37443687 DOI: 10.3390/diagnostics13132294] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Revised: 07/01/2023] [Accepted: 07/04/2023] [Indexed: 07/15/2023] Open
Abstract
Renal cell carcinoma (RCC) is characterized by its diverse histopathological features, which pose possible challenges to accurate diagnosis and prognosis. A comprehensive literature review was conducted to explore recent advancements in the field of artificial intelligence (AI) in RCC pathology. The aim of this paper is to assess whether these advancements hold promise in improving the precision, efficiency, and objectivity of histopathological analysis for RCC, while also reducing costs and interobserver variability and potentially alleviating the labor and time burden experienced by pathologists. The reviewed AI-powered approaches demonstrate effective identification and classification abilities regarding several histopathological features associated with RCC, facilitating accurate diagnosis, grading, and prognosis prediction and enabling precise and reliable assessments. Nevertheless, implementing AI in renal cell carcinoma generates challenges concerning standardization, generalizability, benchmarking performance, and integration of data into clinical workflows. Developing methodologies that enable pathologists to interpret AI decisions accurately is imperative. Moreover, establishing more robust and standardized validation workflows is crucial to instill confidence in AI-powered systems' outcomes. These efforts are vital for advancing current state-of-the-art practices and enhancing patient care in the future.
Affiliation(s)
- Alfredo Distante
- Department of Urology, Catholic University of the Sacred Heart, 00168 Roma, Italy
- Department of Urology, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA Nijmegen, The Netherlands
| | - Laura Marandino
- Department of Medical Oncology, IRCCS Ospedale San Raffaele, 20132 Milan, Italy
| | - Riccardo Bertolo
- Department of Urology, San Carlo Di Nancy Hospital, 00165 Rome, Italy
| | - Alexandre Ingels
- Department of Urology, University Hospital Henri Mondor, APHP (Assistance Publique-Hôpitaux de Paris), 94000 Créteil, France
| | - Nicola Pavan
- Department of Surgical, Oncological and Oral Sciences, Section of Urology, University of Palermo, 90133 Palermo, Italy
| | - Angela Pecoraro
- Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, 10043 Turin, Italy
| | - Michele Marchioni
- Department of Medical, Oral and Biotechnological Sciences, G. d'Annunzio University of Chieti, 66100 Chieti, Italy
| | - Umberto Carbonara
- Andrology and Kidney Transplantation Unit, Department of Emergency and Organ Transplantation-Urology, University of Bari, 70121 Bari, Italy
| | - Selcuk Erdem
- Division of Urologic Oncology, Department of Urology, Istanbul University Istanbul Faculty of Medicine, Istanbul 34093, Turkey
| | - Daniele Amparore
- Department of Urology, San Luigi Gonzaga Hospital, University of Turin, Orbassano, 10043 Turin, Italy
| | - Riccardo Campi
- Urological Robotic Surgery and Renal Transplantation Unit, Careggi Hospital, University of Florence, 50121 Firenze, Italy
| | - Eduard Roussel
- Department of Urology, University Hospitals Leuven, 3000 Leuven, Belgium
| | - Anna Caliò
- Section of Pathology, Department of Diagnostic and Public Health, University of Verona, 37134 Verona, Italy
| | - Zhenjie Wu
- Department of Urology, Changhai Hospital, Naval Medical University, Shanghai 200433, China
| | - Carlotta Palumbo
- Division of Urology, Maggiore della Carità Hospital of Novara, Department of Translational Medicine, University of Eastern Piedmont, 13100 Novara, Italy
| | - Leonardo D Borregales
- Department of Urology, Weill Cornell Medicine, New York-Presbyterian Hospital, New York, NY 10032, USA
| | - Peter Mulders
- Department of Urology, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA Nijmegen, The Netherlands
| | - Constantijn H J Muselaers
- Department of Urology, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA Nijmegen, The Netherlands
37
Hu W, Li X, Li C, Li R, Jiang T, Sun H, Huang X, Grzegorzek M, Li X. A state-of-the-art survey of artificial neural networks for Whole-slide Image analysis: From popular Convolutional Neural Networks to potential visual transformers. Comput Biol Med 2023; 161:107034. [PMID: 37230019 DOI: 10.1016/j.compbiomed.2023.107034] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2022] [Revised: 04/13/2023] [Accepted: 05/10/2023] [Indexed: 05/27/2023]
Abstract
In recent years, with the advancement of computer-aided diagnosis (CAD) technology and whole slide imaging (WSI), histopathological WSIs have gradually come to play a crucial role in the diagnosis and analysis of diseases. To increase the objectivity and accuracy of pathologists' work, artificial neural network (ANN) methods have become widely needed for the segmentation, classification, and detection of histopathological WSIs. However, existing review papers focus only on equipment hardware, development status, and trends, and do not summarize in detail the artificial neural networks used for whole-slide image analysis. In this paper, WSI analysis methods based on ANNs are reviewed. First, the development status of WSI and ANN methods is introduced. Second, we summarize the common ANN methods. Next, we discuss publicly available WSI datasets and evaluation metrics. The ANN architectures for WSI processing are divided into classical neural networks and deep neural networks (DNNs) and then analyzed. Finally, the application prospects of these analytical methods in this field are discussed, with visual transformers emerging as a particularly promising direction.
Affiliation(s)
- Weiming Hu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Xintong Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China.
| | - Rui Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
| | - Hongzan Sun
- Shengjing Hospital of China Medical University, Shenyang, China
| | - Xinyu Huang
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany
| | - Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
| | - Xiaoyan Li
- Cancer Hospital of China Medical University, Shenyang, China.
38
Plass M, Kargl M, Kiehl TR, Regitnig P, Geißler C, Evans T, Zerbe N, Carvalho R, Holzinger A, Müller H. Explainability and causability in digital pathology. J Pathol Clin Res 2023. [PMID: 37045794 DOI: 10.1002/cjp2.322] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 02/17/2023] [Accepted: 03/16/2023] [Indexed: 04/14/2023]
Abstract
The current move towards digital pathology enables pathologists to use artificial intelligence (AI)-based computer programmes for the advanced analysis of whole slide images. However, currently, the best-performing AI algorithms for image analysis are deemed black boxes since it remains - even to their developers - often unclear why the algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to provide medical experts with insights on the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning shall nurture the reader's understanding of why explainability is a specific issue in this field. Addressing this issue of explainability, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods to make black-box machine-learning systems more transparent. These XAI methods are a first step towards making black-box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field since explainability and causability play a crucial role also for compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology, which enable contextual understanding and allow the medical expert to ask interactive 'what-if'-questions. In pathology, such user interfaces will not only be important to achieve a high level of causability. They will also be crucial for keeping the human-in-the-loop and bringing medical experts' experience and conceptual knowledge to AI processes.
Affiliation(s)
- Markus Plass
- Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria
| | - Michaela Kargl
- Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria
| | - Tim-Rasmus Kiehl
- Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Pathology, Berlin, Germany
| | - Peter Regitnig
- Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria
| | - Christian Geißler
- DAI-Labor, Agent Oriented Technologies (AOT), Technische Universität Berlin, Berlin, Germany
| | - Theodore Evans
- DAI-Labor, Agent Oriented Technologies (AOT), Technische Universität Berlin, Berlin, Germany
| | - Norman Zerbe
- Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Pathology, Berlin, Germany
| | - Rita Carvalho
- Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Pathology, Berlin, Germany
| | - Andreas Holzinger
- Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria
- Human-Centered AI Lab, University of Natural Resources and Life Sciences Vienna, Vienna, Austria
| | - Heimo Müller
- Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria
39
AI-Powered Diagnosis of Skin Cancer: A Contemporary Review, Open Challenges and Future Research Directions. Cancers (Basel) 2023; 15:cancers15041183. [PMID: 36831525 PMCID: PMC9953963 DOI: 10.3390/cancers15041183] [Citation(s) in RCA: 16] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2022] [Revised: 02/07/2023] [Accepted: 02/08/2023] [Indexed: 02/15/2023] Open
Abstract
Skin cancer remains one of the major healthcare issues across the globe. If diagnosed early, skin cancer can be treated successfully. While early diagnosis is paramount for an effective cure, the current process requires the involvement of skin cancer specialists, which makes it an expensive procedure that is not easily available and affordable in developing countries. This dearth of skin cancer specialists has given rise to the need to develop automated diagnosis systems. In this context, Artificial Intelligence (AI)-based methods have been proposed. These systems can assist in the early detection of skin cancer and can consequently lower its morbidity and, in turn, the mortality rate associated with it. Machine learning and deep learning are branches of AI that deal with statistical modeling and inference and that progressively learn from the data fed into them to predict desired objectives and characteristics. This survey focuses on machine learning and deep learning techniques deployed in the field of skin cancer diagnosis, while maintaining a balance between both techniques. A comparison is made with widely used datasets and with prior review papers discussing automated skin cancer diagnosis. The study also discusses the insights and lessons yielded by prior works. The survey culminates with future directions and scope, which will subsequently help in addressing the challenges faced in automated skin cancer diagnosis.
40
Rauf Z, Sohail A, Khan SH, Khan A, Gwak J, Maqbool M. Attention-guided multi-scale deep object detection framework for lymphocyte analysis in IHC histological images. Microscopy (Oxf) 2023; 72:27-42. [PMID: 36239597 DOI: 10.1093/jmicro/dfac051] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 09/21/2022] [Accepted: 10/13/2022] [Indexed: 11/14/2022] Open
Abstract
Tumor-infiltrating lymphocytes are specialized lymphocytes that can detect and kill cancerous cells. Their detection poses many challenges due to significant morphological variations, overlapping occurrence, artifact regions, and high resemblance between clustered areas and artifacts. In this regard, a Lymphocyte Analysis Framework based on a deep convolutional neural network (DC-Lym-AF) is proposed to analyze lymphocytes in immunohistochemistry images. The proposed framework comprises (i) pre-processing, (ii) a screening phase, (iii) a localization phase, and (iv) post-processing. In the screening phase, a custom convolutional neural network architecture (lymphocyte dilated network) is developed to screen lymphocytic regions by performing a patch-level classification. This architecture uses dilated convolutions and shortcut connections to capture multi-level variations and ensure reference-based learning. The localization phase then utilizes an attention-guided multi-scale lymphocyte detector to detect lymphocytes. The proposed detector extracts refined and multi-scale features by exploiting dilated convolutions, an attention mechanism, and a feature pyramid network (FPN) using its custom attention-aware backbone. The proposed DC-Lym-AF shows exemplary performance on the NuClick dataset compared with existing detection models, with an F-score and precision of 0.84 and 0.83, respectively. We verified the generalizability of our proposed framework by participating in the publicly open LYON'19 challenge. Results in terms of detection rate (0.76) and F-score (0.73) suggest that the proposed DC-Lym-AF can effectively detect lymphocytes in immunohistochemistry-stained images collected from different laboratories. In addition, its promising generalization on several datasets implies that it can be turned into a medical diagnostic tool to investigate various histopathological problems.
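A dilated-convolution screening block of the general kind described above can be sketched as follows; the channel widths, dilation rates, and shortcut placement are illustrative assumptions rather than the published lymphocyte dilated network configuration.

```python
import torch
import torch.nn as nn

class DilatedPatchScreener(nn.Module):
    """Toy patch-level classifier: stacked dilated convolutions enlarge the receptive field,
    and a shortcut connection preserves reference (input-level) information."""
    def __init__(self, in_ch=3, width=32, n_classes=2):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, width, 3, padding=1)
        self.dilated = nn.Sequential(
            nn.Conv2d(width, width, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(width, n_classes))

    def forward(self, x):
        x = torch.relu(self.stem(x))
        x = x + self.dilated(x)              # shortcut connection
        return self.head(x)

if __name__ == "__main__":
    patch = torch.randn(4, 3, 128, 128)      # a batch of IHC patches
    print(DilatedPatchScreener()(patch).shape)   # torch.Size([4, 2])
```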
Affiliation(s)
- Zunaira Rauf
- Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan
| | - Anabia Sohail
- Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan; Department of Computer Science, Faculty of Computing and Artificial Intelligence, Air University, E-9, Islamabad 44230, Pakistan
| | - Saddam Hussain Khan
- Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan; Department of Computer Systems Engineering, University of Engineering and Applied Sciences, Swat, Khyber Pakhtunkhwa 19130, Pakistan
| | - Asifullah Khan
- Pattern Recognition Lab, Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan; PIEAS Artificial Intelligence Center, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan; Center for Mathematical Sciences, Pakistan Institute of Engineering and Applied Sciences, Nilore, Islamabad 45650, Pakistan
| | - Jeonghwan Gwak
- Department of Software, Korea National University of Transportation, Chungju 27469, Republic of Korea
| | - Muhammad Maqbool
- The University of Alabama at Birmingham, 1720 2nd Ave South, Birmingham, AL 35294, USA
41
Aubreville M, Stathonikos N, Bertram CA, Klopfleisch R, Ter Hoeve N, Ciompi F, Wilm F, Marzahl C, Donovan TA, Maier A, Breen J, Ravikumar N, Chung Y, Park J, Nateghi R, Pourakpour F, Fick RHJ, Ben Hadj S, Jahanifar M, Shephard A, Dexl J, Wittenberg T, Kondo S, Lafarge MW, Koelzer VH, Liang J, Wang Y, Long X, Liu J, Razavi S, Khademi A, Yang S, Wang X, Erber R, Klang A, Lipnik K, Bolfa P, Dark MJ, Wasinger G, Veta M, Breininger K. Mitosis domain generalization in histopathology images - The MIDOG challenge. Med Image Anal 2023; 84:102699. [PMID: 36463832 DOI: 10.1016/j.media.2022.102699] [Citation(s) in RCA: 18] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2022] [Revised: 10/28/2022] [Accepted: 11/17/2022] [Indexed: 11/27/2022]
Abstract
The density of mitotic figures (MF) within tumor tissue is known to be highly correlated with tumor proliferation and thus is an important marker in tumor grading. Recognition of MF by pathologists is subject to a strong inter-rater bias, limiting its prognostic value. State-of-the-art deep learning methods can support experts but have been observed to strongly deteriorate when applied in a different clinical environment. The variability caused by using different whole slide scanners has been identified as one decisive component in the underlying domain shift. The goal of the MICCAI MIDOG 2021 challenge was the creation of scanner-agnostic MF detection algorithms. The challenge used a training set of 200 cases, split across four scanning systems. As test set, an additional 100 cases split across four scanning systems, including two previously unseen scanners, were provided. In this paper, we evaluate and compare the approaches that were submitted to the challenge and identify methodological factors contributing to better performance. The winning algorithm yielded an F1 score of 0.748 (CI95: 0.704-0.781), exceeding the performance of six experts on the same task.
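The F1 score used to rank challenge entries can be computed from a one-to-one matching between predicted and annotated mitotic figure centers; the greedy matching and the 30-pixel distance threshold below are simplifying assumptions for illustration, not the official MIDOG evaluation code.

```python
import math

def detection_f1(predictions, ground_truth, max_dist=30.0):
    """Greedy one-to-one matching of predicted to annotated (x, y) centers within max_dist pixels."""
    unmatched_gt = list(ground_truth)
    tp = 0
    for px, py in predictions:
        best, best_d = None, max_dist
        for g in unmatched_gt:
            d = math.hypot(px - g[0], py - g[1])
            if d <= best_d:
                best, best_d = g, d
        if best is not None:
            unmatched_gt.remove(best)        # a ground-truth figure can be matched only once
            tp += 1
    fp = len(predictions) - tp
    fn = len(ground_truth) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

if __name__ == "__main__":
    gt = [(100, 100), (400, 250)]
    preds = [(105, 98), (900, 900)]
    print(round(detection_f1(preds, gt), 3))   # 0.5: one true positive, one false positive, one miss
```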
Affiliation(s)
| | | | - Christof A Bertram
- Institute of Pathology, University of Veterinary Medicine, Vienna, Austria
| | - Robert Klopfleisch
- Institute of Veterinary Pathology, Freie Universität Berlin, Berlin, Germany
| | | | - Francesco Ciompi
- Computational Pathology Group, Radboud UMC, Nijmegen, The Netherlands
| | - Frauke Wilm
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Christian Marzahl
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Taryn A Donovan
- Department of Anatomic Pathology, Schwarzman Animal Medical Center, NY, USA
| | - Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Jack Breen
- CISTIB Center for Computational Imaging and Simulation Technologies in Biomedicine, School of Computing, University of Leeds, Leeds, UK
| | - Nishant Ravikumar
- CISTIB Center for Computational Imaging and Simulation Technologies in Biomedicine, School of Computing, University of Leeds, Leeds, UK
| | - Youjin Chung
- Korea Advanced Institute of Science and Technology, Daejeon, South Korea
| | - Jinah Park
- Korea Advanced Institute of Science and Technology, Daejeon, South Korea
| | - Ramin Nateghi
- Electrical and Electronics Engineering Department, Shiraz University of Technology, Shiraz, Iran
| | - Fattaneh Pourakpour
- Iranian Brain Mapping Biobank (IBMB), National Brain Mapping Laboratory (NBML), Tehran, Iran
| | | | | | - Mostafa Jahanifar
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Warwick, UK
| | - Adam Shephard
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Warwick, UK
| | - Jakob Dexl
- Fraunhofer-Institute for Integrated Circuits IIS, Erlangen, Germany
| | | | | | - Maxime W Lafarge
- Department of Pathology and Molecular Pathology, University Hospital and University of Zurich, Zurich, Switzerland
| | - Viktor H Koelzer
- Department of Pathology and Molecular Pathology, University Hospital and University of Zurich, Zurich, Switzerland
| | - Jingtang Liang
- School of Life Science and Technology, Xidian University, Shaanxi, China
| | - Yubo Wang
- School of Life Science and Technology, Xidian University, Shaanxi, China
| | - Xi Long
- Histo Pathology Diagnostic Center, Shanghai, China
| | - Jingxin Liu
- Xi'an Jiaotong-Liverpool University, Suzhou, China
| | - Salar Razavi
- Image Analysis in Medicine Lab (IAMLAB), Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, ON, Canada
| | - April Khademi
- Image Analysis in Medicine Lab (IAMLAB), Electrical, Computer and Biomedical Engineering, Ryerson University, Toronto, ON, Canada
| | - Sen Yang
- Tencent AI Lab, Shenzhen 518057, China
| | - Xiyue Wang
- College of Computer Science, Sichuan University, Chengdu 610065, China
| | - Ramona Erber
- Institute of Pathology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| | - Andrea Klang
- Institute of Pathology, University of Veterinary Medicine, Vienna, Austria
| | - Karoline Lipnik
- Institute of Pathology, University of Veterinary Medicine, Vienna, Austria
| | - Pompei Bolfa
- Ross University School of Veterinary Medicine, Basseterre, Saint Kitts and Nevis
| | - Michael J Dark
- College of Veterinary Medicine, University of Florida, Gainesville, FL, USA
| | - Gabriel Wasinger
- Department of Pathology, General Hospital of Vienna, Medical University of Vienna, Vienna, Austria
| | - Mitko Veta
- Medical Image Analysis Group, TU Eindhoven, Eindhoven, The Netherlands
| | - Katharina Breininger
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
42
A generalizable and robust deep learning algorithm for mitosis detection in multicenter breast histopathological images. Med Image Anal 2023; 84:102703. [PMID: 36481608 DOI: 10.1016/j.media.2022.102703] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2022] [Revised: 09/16/2022] [Accepted: 11/21/2022] [Indexed: 11/24/2022]
Abstract
Mitosis counting of biopsies is an important biomarker for breast cancer patients, which supports disease prognostication and treatment planning. Developing a robust mitotic cell detection model is highly challenging due to the complex growth pattern of mitotic cells and their high similarity to non-mitotic cells. Most mitosis detection algorithms have poor generalizability across image domains and lack reproducibility and validation in multicenter settings. To overcome these issues, we propose a generalizable and robust mitosis detection algorithm (called FMDet), which is independently tested on multicenter breast histopathological images. To capture more refined morphological features of cells, we convert the object detection task into a semantic segmentation problem. The pixel-level annotations for mitotic nuclei are obtained by taking the intersection of the masks generated from a well-trained nuclear segmentation model and the bounding boxes provided by the MIDOG 2021 challenge. In our segmentation framework, a robust feature extractor is developed to capture the appearance variations of mitotic cells, which is constructed by integrating a channel-wise multi-scale attention mechanism into a fully convolutional network structure. Benefiting from the fact that changes in the low-frequency spectrum do not affect high-level semantic perception, we employ a Fourier-based data augmentation method to reduce domain discrepancies by exchanging the low-frequency spectrum between two domains. Our FMDet algorithm was tested in the MIDOG 2021 challenge and ranked first. Further, our algorithm is also externally validated on four independent datasets for mitosis detection, where it exhibits state-of-the-art performance in comparison with previously published results. These results demonstrate that our algorithm has the potential to be deployed as an assistive decision support tool in clinical practice. Our code has been released at https://github.com/Xiyue-Wang/1st-in-MICCAI-MIDOG-2021-challenge.
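The Fourier-based augmentation described above, exchanging the low-frequency amplitude spectrum between a source and a target image while keeping the source phase, can be sketched with NumPy; the band-width parameter beta is an illustrative assumption.

```python
import numpy as np

def low_freq_swap(source, target, beta=0.1):
    """Replace the central (low-frequency) amplitude band of `source` with that of `target`.
    Both inputs are float arrays of identical shape (H, W) or (H, W, C)."""
    fft_src = np.fft.fftshift(np.fft.fft2(source, axes=(0, 1)), axes=(0, 1))
    fft_tgt = np.fft.fftshift(np.fft.fft2(target, axes=(0, 1)), axes=(0, 1))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    h, w = source.shape[:2]
    bh, bw = int(h * beta / 2), int(w * beta / 2)
    ch, cw = h // 2, w // 2
    amp_src[ch - bh:ch + bh, cw - bw:cw + bw] = amp_tgt[ch - bh:ch + bh, cw - bw:cw + bw]

    mixed = amp_src * np.exp(1j * pha_src)     # keep the source phase, mix the amplitudes
    out = np.fft.ifft2(np.fft.ifftshift(mixed, axes=(0, 1)), axes=(0, 1))
    return np.real(out)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src, tgt = rng.random((256, 256, 3)), rng.random((256, 256, 3))
    print(low_freq_swap(src, tgt).shape)       # (256, 256, 3)
```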
43
Piansaddhayanaon C, Santisukwongchote S, Shuangshoti S, Tao Q, Sriswasdi S, Chuangsuwanich E. ReCasNet: Improving consistency within the two-stage mitosis detection framework. Artif Intell Med 2023; 135:102462. [PMID: 36628784 DOI: 10.1016/j.artmed.2022.102462] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 10/11/2022] [Accepted: 11/23/2022] [Indexed: 11/26/2022]
Abstract
Mitotic count (MC) is an important histological parameter for cancer diagnosis and grading, but the manual process for obtaining MC from whole-slide histopathological images is very time-consuming and prone to error. Therefore, deep learning models have been proposed to facilitate this process. Existing approaches utilize a two-stage pipeline: a detection stage for identifying the locations of potential mitotic cells and a classification stage for refining prediction confidences. However, this pipeline formulation can lead to inconsistencies in the classification stage due to the poor prediction quality of the detection stage and the mismatches in training data distributions between the two stages. In this study, we propose a Refine Cascade Network (ReCasNet), an enhanced deep learning pipeline that mitigates the aforementioned problems with three improvements. First, window relocation was used to reduce the number of poor-quality false positives generated during the detection stage. Second, object re-cropping was performed with another deep learning model to adjust poorly centered objects. Third, improved data selection strategies were introduced during the classification stage to reduce the mismatches in training data distributions. ReCasNet was evaluated on two large-scale mitotic figure recognition datasets, canine cutaneous mast cell tumor (CCMCT) and canine mammary carcinoma (CMC), which resulted in up to 4.8 percentage point improvements in the F1 scores for mitotic cell detection and 44.1% reductions in mean absolute percentage error (MAPE) for MC prediction. Techniques that underlie ReCasNet can be generalized to other two-stage object detection pipelines and should contribute to improving the performance of deep learning models in broad digital pathology applications.
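The overall two-stage layout (a detector proposes candidate locations, a second model re-centers each crop, and a classifier refines the confidence) can be outlined as below; the window size, the stand-in callables, and the confidence averaging are assumptions for illustration, not the published ReCasNet configuration.

```python
import numpy as np

def recrop(image, center, size=64):
    """Crop a size x size window around `center`, clamped to the image bounds."""
    h, w = image.shape[:2]
    cy = int(np.clip(center[0], size // 2, h - size // 2))
    cx = int(np.clip(center[1], size // 2, w - size // 2))
    return image[cy - size // 2:cy + size // 2, cx - size // 2:cx + size // 2]

def two_stage_mitosis_count(image, detector, recenterer, classifier, threshold=0.5):
    """Detector -> re-crop around a refined center -> classifier; average the two confidences."""
    count = 0
    for (cy, cx), det_conf in detector(image):           # stage 1: candidate locations
        refined = recenterer(recrop(image, (cy, cx)))    # adjust poorly centered candidates
        cls_conf = classifier(recrop(image, refined))    # stage 2: refined confidence
        if (det_conf + cls_conf) / 2 >= threshold:
            count += 1
    return count

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((512, 512, 3))
    # Stand-in callables so the sketch runs end to end.
    detector = lambda im: [((100, 100), 0.9), ((300, 400), 0.1)]
    recenterer = lambda crop: (102, 98)
    classifier = lambda crop: 0.8
    print(two_stage_mitosis_count(img, detector, recenterer, classifier))   # 1
```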
Affiliation(s)
- Chawan Piansaddhayanaon
- Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, Thailand; Chula Intelligent and Complex Systems, Faculty of Science, Chulalongkorn University, Bangkok, Thailand
| | - Sakun Santisukwongchote
- Department of Pathology, King Chulalongkorn Memorial Hospital and Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand
| | - Shanop Shuangshoti
- Department of Pathology, King Chulalongkorn Memorial Hospital and Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand
| | | | - Sira Sriswasdi
- Chula Intelligent and Complex Systems, Faculty of Science, Chulalongkorn University, Bangkok, Thailand; Center for Artificial Intelligence in Medicine, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand; Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand.
| | - Ekapol Chuangsuwanich
- Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, Thailand; Chula Intelligent and Complex Systems, Faculty of Science, Chulalongkorn University, Bangkok, Thailand; Center of Excellence in Computational Molecular Biology, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand.
44
Tavolara TE, Gurcan MN, Niazi MKK. Contrastive Multiple Instance Learning: An Unsupervised Framework for Learning Slide-Level Representations of Whole Slide Histopathology Images without Labels. Cancers (Basel) 2022; 14:5778. [PMID: 36497258 PMCID: PMC9738801 DOI: 10.3390/cancers14235778] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2022] [Revised: 11/16/2022] [Accepted: 11/19/2022] [Indexed: 11/25/2022] Open
Abstract
Recent methods in computational pathology have trended towards semi- and weakly-supervised methods requiring only slide-level labels. Yet, even slide-level labels may be absent or irrelevant to the application of interest, such as in clinical trials. Hence, we present a fully unsupervised method to learn meaningful, compact representations of WSIs. Our method initially trains a tile-wise encoder using SimCLR, from which subsets of tile-wise embeddings are extracted and fused via an attention-based multiple-instance learning framework to yield slide-level representations. The resulting intra-slide and inter-slide embeddings are attracted and repelled via a contrastive loss, respectively. This results in slide-level representations learned with self-supervision. We applied our method to two tasks: (1) non-small cell lung cancer (NSCLC) subtyping as a classification prototype and (2) breast cancer proliferation scoring (TUPAC16) as a regression prototype, and achieved an AUC of 0.8641 ± 0.0115 and a correlation (R2) of 0.5740 ± 0.0970, respectively. Ablation experiments demonstrate that the resulting unsupervised slide-level feature space can be fine-tuned with small datasets for both tasks. Overall, our method approaches computational pathology in a novel manner, where meaningful features can be learned from whole-slide images without the need for slide-level label annotations. The proposed method stands to benefit computational pathology, as it theoretically enables researchers to benefit from completely unlabeled whole-slide images.
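The core ingredients described above, attention-based pooling of tile embeddings into a slide embedding and a contrastive loss that attracts two views of the same slide while repelling other slides, can be sketched in PyTorch as follows; the dimensions and the SimCLR-style NT-Xent loss are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionMILPool(nn.Module):
    """Attention pooling: weight tile embeddings and sum them into one slide embedding."""
    def __init__(self, dim=128, hidden=64):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, tiles):                             # tiles: (n_tiles, dim)
        w = torch.softmax(self.attn(tiles), dim=0)        # (n_tiles, 1) attention weights
        return (w * tiles).sum(dim=0)                     # (dim,) slide embedding

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss: two views of the same slide attract, views of different slides repel."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)    # (2B, D)
    sim = z @ z.t() / temperature                         # cosine similarity matrix
    n = z1.shape[0]
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])  # index of each positive
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    pool = AttentionMILPool()
    slides = [torch.randn(200, 128) for _ in range(4)]    # tile embeddings for 4 WSIs
    # Two "views" per slide: pooled embeddings from two random tile subsets.
    view1 = torch.stack([pool(s[torch.randperm(200)[:100]]) for s in slides])
    view2 = torch.stack([pool(s[torch.randperm(200)[:100]]) for s in slides])
    print(nt_xent(view1, view2).item())
```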
Affiliation(s)
- Thomas E. Tavolara
- Center for Biomedical Informatics, Wake Forest School of Medicine, Winston-Salem, NC 27101, USA
45
Kim I, Kang K, Song Y, Kim TJ. Application of Artificial Intelligence in Pathology: Trends and Challenges. Diagnostics (Basel) 2022; 12:2794. [PMID: 36428854 PMCID: PMC9688959 DOI: 10.3390/diagnostics12112794] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Revised: 11/03/2022] [Accepted: 11/11/2022] [Indexed: 11/16/2022] Open
Abstract
Given the recent success of artificial intelligence (AI) in computer vision applications, many pathologists anticipate that AI will be able to assist them in a variety of digital pathology tasks. Simultaneously, tremendous advancements in deep learning have enabled a synergy with AI, allowing for image-based diagnosis against the background of digital pathology. Efforts are underway to develop AI-based tools that save pathologists time and eliminate errors. Here, we describe the elements in the development of computational pathology (CPATH), its applicability to AI development, and the challenges it faces, such as algorithm validation and interpretability, computing systems, reimbursement, ethics, and regulations. Furthermore, we present an overview of novel AI-based approaches that could be integrated into pathology laboratory workflows.
Affiliation(s)
- Inho Kim
- College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
| | - Kyungmin Kang
- College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
| | - Youngjae Song
- College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
| | - Tae-Jung Kim
- Department of Hospital Pathology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, 10, 63-ro, Yeongdeungpo-gu, Seoul 07345, Republic of Korea
46
Mercan C, Balkenhol M, Salgado R, Sherman M, Vielh P, Vreuls W, Polónia A, Horlings HM, Weichert W, Carter JM, Bult P, Christgen M, Denkert C, van de Vijver K, Bokhorst JM, van der Laak J, Ciompi F. Deep learning for fully-automated nuclear pleomorphism scoring in breast cancer. NPJ Breast Cancer 2022; 8:120. [DOI: 10.1038/s41523-022-00488-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 10/21/2022] [Indexed: 11/11/2022] Open
Abstract
To guide the choice of treatment, every new breast cancer is assessed for aggressiveness (i.e., graded) by an experienced histopathologist. Typically, this tumor grade consists of three components, one of which is the nuclear pleomorphism score (the extent of abnormalities in the overall appearance of tumor nuclei). The degree of nuclear pleomorphism is subjectively classified from 1 to 3, where a score of 1 most closely resembles epithelial cells of normal breast epithelium and 3 shows the greatest abnormalities. Establishing numerical criteria for grading nuclear pleomorphism is challenging, and inter-observer agreement is poor. Therefore, we studied the use of deep learning to develop fully automated nuclear pleomorphism scoring in breast cancer. The reference standard used for training the algorithm consisted of the collective knowledge of an international panel of 10 pathologists on a curated set of regions of interest covering the entire spectrum of tumor morphology in breast cancer. To fully exploit the information provided by the pathologists, a first-of-its-kind deep regression model was trained to yield a continuous scoring rather than limiting the pleomorphism scoring to the standard three-tiered system. Our approach preserves the continuum of nuclear pleomorphism without necessitating a large data set with explicit annotations of tumor nuclei. Once translated to the traditional system, our approach achieves top pathologist-level performance in multiple experiments on regions of interest and whole-slide images, compared to a panel of 10 and 4 pathologists, respectively.
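The continuous-scoring idea, regressing a pleomorphism score and only afterwards translating it to the three-tier system, can be illustrated briefly; the tiny convolutional backbone and the binning cut-offs below are assumptions, not the published model.

```python
import torch
import torch.nn as nn

class PleomorphismRegressor(nn.Module):
    """Toy regressor: predicts a continuous pleomorphism score in [1, 3] for an image region."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.regressor = nn.Linear(32, 1)

    def forward(self, x):
        return 1.0 + 2.0 * torch.sigmoid(self.regressor(self.features(x)))  # continuous score

def to_three_tier(score):
    """Translate the continuous score back to the standard 1/2/3 system (illustrative cut-offs)."""
    return 1 if score < 1.67 else (2 if score < 2.33 else 3)

if __name__ == "__main__":
    region = torch.randn(1, 3, 256, 256)
    s = PleomorphismRegressor()(region).item()
    print(round(s, 2), to_three_tier(s))
```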
47
Ahmed AA, Abouzid M, Kaczmarek E. Deep Learning Approaches in Histopathology. Cancers (Basel) 2022; 14:5264. [PMID: 36358683 PMCID: PMC9654172 DOI: 10.3390/cancers14215264] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Revised: 10/10/2022] [Accepted: 10/24/2022] [Indexed: 10/06/2023] Open
Abstract
The revolution of artificial intelligence and its impact on our daily life has led to tremendous interest in the field and its related subtypes: machine learning and deep learning. Scientists and developers have designed machine learning- and deep learning-based algorithms to perform various tasks related to tumor pathologies, such as tumor detection, classification, grading with variant stages, diagnostic forecasting, recognition of pathological attributes, pathogenesis, and genomic mutations. Pathologists are interested in artificial intelligence to improve diagnostic precision and impartiality and to minimize the workload and time consumed, which affect the accuracy of the decisions taken. Regrettably, there are certain obstacles to overcome in connection with artificial intelligence deployment, such as the applicability and validation of algorithms and computational technologies, in addition to the ability to train pathologists and doctors to use these machines and their willingness to accept the results. This review paper provides a survey of how machine learning and deep learning methods could be implemented in health care providers' routine tasks, and of the obstacles and opportunities for artificial intelligence application in tumor morphology.
Affiliation(s)
- Alhassan Ali Ahmed
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
| | - Mohamed Abouzid
- Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Department of Physical Pharmacy and Pharmacokinetics, Faculty of Pharmacy, Poznan University of Medical Sciences, Rokietnicka 3 St., 60-806 Poznan, Poland
| | - Elżbieta Kaczmarek
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland
48
Brancati N, Anniciello AM, Pati P, Riccio D, Scognamiglio G, Jaume G, De Pietro G, Di Bonito M, Foncubierta A, Botti G, Gabrani M, Feroce F, Frucci M. BRACS: A Dataset for BReAst Carcinoma Subtyping in H&E Histology Images. Database (Oxford) 2022; 2022:6762252. [PMID: 36251776 PMCID: PMC9575967 DOI: 10.1093/database/baac093] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 09/16/2022] [Accepted: 10/01/2022] [Indexed: 11/11/2022]
Abstract
Breast cancer is the most commonly diagnosed cancer and registers the highest number of deaths for women. Advances in diagnostic activities combined with large-scale screening policies have significantly lowered the mortality rates for breast cancer patients. However, the manual inspection of tissue slides by pathologists is cumbersome, time-consuming and is subject to significant inter- and intra-observer variability. Recently, the advent of whole-slide scanning systems has empowered the rapid digitization of pathology slides and enabled the development of Artificial Intelligence (AI)-assisted digital workflows. However, AI techniques, especially Deep Learning, require a large amount of high-quality annotated data to learn from. Constructing such task-specific datasets poses several challenges, such as data-acquisition level constraints, time-consuming and expensive annotations and anonymization of patient information. In this paper, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of annotated Hematoxylin and Eosin (H&E)-stained images to advance AI development in the automatic characterization of breast lesions. BRACS contains 547 Whole-Slide Images (WSIs) and 4539 Regions Of Interest (ROIs) extracted from the WSIs. Each WSI and respective ROIs are annotated by the consensus of three board-certified pathologists into different lesion categories. Specifically, BRACS includes three lesion types, i.e., benign, malignant and atypical, which are further subtyped into seven categories. It is, to the best of our knowledge, the largest annotated dataset for breast cancer subtyping both at WSI and ROI levels. Furthermore, by including the understudied atypical lesions, BRACS offers a unique opportunity for leveraging AI to better understand their characteristics. We encourage AI practitioners to develop and evaluate novel algorithms on the BRACS dataset to further breast cancer diagnosis and patient care. Database URL: https://www.bracs.icar.cnr.it/
49
Wakili MA, Shehu HA, Sharif MH, Sharif MHU, Umar A, Kusetogullari H, Ince IF, Uyaver S. Classification of Breast Cancer Histopathological Images Using DenseNet and Transfer Learning. Comput Intell Neurosci 2022; 2022:8904768. [PMID: 36262621 PMCID: PMC9576400 DOI: 10.1155/2022/8904768] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 06/19/2022] [Accepted: 07/30/2022] [Indexed: 11/22/2022]
Abstract
Breast cancer is one of the most common invasive cancers in women. Analyzing breast cancer is nontrivial and may lead to disagreements among experts. Although deep learning methods have achieved excellent performance in classification tasks involving breast cancer histopathological images, the existing state-of-the-art methods are computationally expensive and may overfit because they extract features from in-distribution images. In this paper, our contribution is mainly twofold. First, we perform a short survey of deep-learning-based models for classifying histopathological images to investigate the most popular and best-performing training-testing ratios. Our findings reveal that the most popular training-testing ratio for histopathological image classification is 70%:30%, whereas the best performance (e.g., accuracy) is achieved by using a training-testing ratio of 80%:20% on an identical dataset. Second, we propose a method named DenTnet to classify breast cancer histopathological images. DenTnet utilizes the principle of transfer learning to address the problem of extracting features from the same distribution, using DenseNet as a backbone model. The proposed DenTnet method is shown to be superior to a number of leading deep learning methods in terms of detection accuracy (up to 99.28% on the BreaKHis dataset using a training-testing ratio of 80%:20%) with good generalization ability and computational speed. The limitations of existing methods, including high computational requirements and reliance on the same feature distribution, are mitigated by DenTnet.
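Transfer learning with a DenseNet backbone and an 80%:20% training-testing split, as discussed above, can be set up roughly as follows; the random tensor dataset, the frozen feature extractor, and the two-class head are placeholder assumptions, not the published DenTnet code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split
from torchvision.models import densenet121

# Placeholder dataset standing in for histopathology patches (benign vs malignant).
images = torch.randn(100, 3, 224, 224)
labels = torch.randint(0, 2, (100,))
dataset = TensorDataset(images, labels)

# 80%:20% training-testing split, the ratio reported above to give the best performance.
n_train = int(0.8 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])

model = densenet121(weights=None)         # in practice, pretrained weights would be transferred
for p in model.features.parameters():     # freeze the transferred feature extractor
    p.requires_grad = False
model.classifier = nn.Linear(model.classifier.in_features, 2)   # new task-specific head

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
loader = DataLoader(train_set, batch_size=16, shuffle=True)

model.train()
for x, y in loader:                       # one illustrative pass over the training split
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(f"trained on {len(train_set)} / tested on {len(test_set)} patches")
```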
Affiliation(s)
| | - Harisu Abdullahi Shehu
- School of Engineering and Computer Science, Victoria University of Wellington, Wellington 6012, New Zealand
| | - Md. Haidar Sharif
- College of Computer Science and Engineering, University of Hail, Hail 2440, Saudi Arabia
| | - Md. Haris Uddin Sharif
- School of Computer & Information Sciences, University of the Cumberlands, Williamsburg, KY 40769, USA
| | - Abubakar Umar
- Abubakar Tafawa Balewa University, Bauchi 740272, Nigeria
| | - Huseyin Kusetogullari
- Department of Computer Science, Blekinge Institute of Technology, Karlskrona 37141, Sweden
| | - Ibrahim Furkan Ince
- Department of Digital Game Design, Nisantasi University, 34485 Istanbul, Turkey
| | - Sahin Uyaver
- Department of Energy Science and Technologies, Turkish-German University, 34820 Istanbul, Turkey
50
Deep learning models for histologic grading of breast cancer and association with disease prognosis. NPJ Breast Cancer 2022; 8:113. [PMID: 36192400 PMCID: PMC9530224 DOI: 10.1038/s41523-022-00478-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Accepted: 09/01/2022] [Indexed: 12/02/2022] Open
Abstract
Histologic grading of breast cancer involves review and scoring of three well-established morphologic features: mitotic count, nuclear pleomorphism, and tubule formation. Taken together, these features form the basis of the Nottingham Grading System which is used to inform breast cancer characterization and prognosis. In this study, we develop deep learning models to perform histologic scoring of all three components using digitized hematoxylin and eosin-stained slides containing invasive breast carcinoma. We first evaluate model performance using pathologist-based reference standards for each component. To complement this typical approach to evaluation, we further evaluate the deep learning models via prognostic analyses. The individual component models perform at or above published benchmarks for algorithm-based grading approaches, achieving high concordance rates with pathologist grading. Further, prognostic performance using deep learning-based grading is on par with that of pathologists performing review of matched slides. By providing scores for each component feature, the deep-learning based approach also provides the potential to identify the grading components contributing most to prognostic value. This may enable optimized prognostic models, opportunities to improve access to consistent grading, and approaches to better understand the links between histologic features and clinical outcomes in breast cancer.