1
Cruz‐Guerrero IA, Campos‐Delgado DU, Mejía‐Rodríguez AR, Leon R, Ortega S, Fabelo H, Camacho R, Plaza MDLL, Callico G. Hybrid brain tumor classification of histopathology hyperspectral images by linear unmixing and an ensemble of deep neural networks. Healthc Technol Lett 2024; 11:240-251. [PMID: 39100499] [PMCID: PMC11294933] [DOI: 10.1049/htl2.12084]
Abstract
Hyperspectral imaging (HSI) has demonstrated its potential to provide correlated spatial and spectral information about a sample through a non-contact, non-invasive technology. In the medical field, especially in histopathology, HSI has been applied to the classification and identification of diseased tissue and to the characterization of its morphological properties. In this work, we propose a hybrid scheme to classify non-tumor and tumor histological brain samples by HSI. The proposed approach is based on the identification of characteristic components in a hyperspectral image by linear unmixing, as a feature engineering step, followed by classification with a deep learning approach. For this last step, an ensemble of deep neural networks is evaluated through a cross-validation scheme on an augmented dataset and a transfer learning scheme. The proposed method classifies histological brain samples with an average accuracy of 88% and with reduced variability, computational cost, and inference time, an advantage over state-of-the-art methods. Hence, this work demonstrates the potential of hybrid classification methodologies to achieve robust and reliable results by combining linear unmixing for feature extraction and deep learning for classification.
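The unmixing-then-classify pipeline described in the abstract can be caricatured in a few lines. Below is a hedged sketch of the linear-unmixing step only, with two invented endmember spectra and ordinary least squares solved via the 2x2 normal equations (real pipelines use non-negativity-constrained solvers and many more spectral bands); the resulting abundances are the kind of features that would feed the downstream ensemble.

```python
def unmix_pixel(spectrum, e1, e2):
    """Estimate abundances a1, a2 so that spectrum ≈ a1*e1 + a2*e2,
    solving the 2x2 normal equations of ordinary least squares.
    Negative solutions are clipped to zero, a crude stand-in for a
    proper non-negativity-constrained solver."""
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    g11, g12, g22 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
    b1, b2 = dot(e1, spectrum), dot(e2, spectrum)
    det = g11 * g22 - g12 * g12
    a1 = (g22 * b1 - g12 * b2) / det
    a2 = (g11 * b2 - g12 * b1) / det
    return max(a1, 0.0), max(a2, 0.0)

# Hypothetical 4-band endmembers (e.g., two tissue-component spectra).
e1 = [1.0, 0.8, 0.2, 0.1]
e2 = [0.1, 0.3, 0.9, 1.0]
# A pixel that is a 70/30 mixture of the two endmembers.
pixel = [0.7 * a + 0.3 * b for a, b in zip(e1, e2)]
a1, a2 = unmix_pixel(pixel, e1, e2)
```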
Affiliation(s)
- Inés A. Cruz‐Guerrero
- Facultad de Ciencias, Universidad Autonoma de San Luis Potosí (UASLP), San Luis Potosi, Mexico
- Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Colorado, USA
- Raquel Leon
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Spain
- Samuel Ortega
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Spain
- Himar Fabelo
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Spain
- Rafael Camacho
- Department of Pathological Anatomy, University Hospital Doctor Negrin of Gran Canaria, Las Palmas de Gran Canaria, Spain
- Maria de la Luz Plaza
- Department of Pathological Anatomy, University Hospital Doctor Negrin of Gran Canaria, Las Palmas de Gran Canaria, Spain
- Gustavo Callico
- Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Spain
2
Liu M, Liu Y, Xu P, Cui H, Ke J, Ma J. Exploiting Geometric Features via Hierarchical Graph Pyramid Transformer for Cancer Diagnosis Using Histopathological Images. IEEE Trans Med Imaging 2024; 43:2888-2900. [PMID: 38530716] [DOI: 10.1109/tmi.2024.3381994]
Abstract
Cancer is widely recognized as the primary cause of mortality worldwide, and pathology analysis plays a pivotal role in achieving an accurate cancer diagnosis. The intricate representation of features in histopathological images encompasses abundant information crucial for disease diagnosis, regarding cell appearance, tumor microenvironment, and geometric characteristics. However, recent deep learning methods have not adequately exploited geometric features for pathological image classification due to the absence of effective descriptors that can capture both cell distribution and gathering patterns, which often serve as potent indicators. In this paper, inspired by clinical practice, a Hierarchical Graph Pyramid Transformer (HGPT) is proposed to guide pathological image classification by effectively exploiting a geometric representation of tissue distribution that was ignored by existing state-of-the-art methods. First, a graph representation is constructed according to the morphological features of the input pathological image, and a geometric representation is learned through the proposed multi-head graph aggregator. Then, the image and its graph representation are fed into the transformer encoder layer to model long-range dependency. Finally, a locality feature enhancement block is designed to enhance the 2D local representation of the feature embedding, which is not well explored in existing vision transformers. An extensive experimental study is conducted on Kather-5K, MHIST, NCT-CRC-HE, and GasHisSDB for binary or multi-category classification of multiple cancer types. Results demonstrate that our method consistently achieves superior classification outcomes for histopathological images, providing an effective diagnostic tool for malignant tumors in clinical practice.
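The graph-construction step the abstract describes (nodes from tissue morphology, then a graph aggregator) can be sketched with a plain k-nearest-neighbour graph and a mean-pooling aggregator; this is a deliberately simplified stand-in for the paper's multi-head graph aggregator, and all coordinates and features below are invented.

```python
import math

def knn_edges(points, k):
    """Connect each node to its k nearest neighbours (Euclidean distance)."""
    edges = {}
    for i, p in enumerate(points):
        dists = sorted(
            (math.dist(p, q), j) for j, q in enumerate(points) if j != i
        )
        edges[i] = [j for _, j in dists[:k]]
    return edges

def aggregate(features, edges):
    """Mean-pool each node's neighbour features (simplified aggregator)."""
    out = []
    for i, nbrs in edges.items():
        out.append([sum(features[j][d] for j in nbrs) / len(nbrs)
                    for d in range(len(features[i]))])
    return out

# Hypothetical nuclei centroids and 2-D per-nucleus features.
pts = [(0, 0), (1, 0), (0, 1), (5, 5)]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0]]
agg = aggregate(feats, knn_edges(pts, 2))
```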
3
Li Y, El Habib Daho M, Conze PH, Zeghlache R, Le Boité H, Tadayoni R, Cochener B, Lamard M, Quellec G. A review of deep learning-based information fusion techniques for multimodal medical image classification. Comput Biol Med 2024; 177:108635. [PMID: 38796881] [DOI: 10.1016/j.compbiomed.2024.108635]
Abstract
Multimodal medical imaging plays a pivotal role in clinical diagnosis and research, as it combines information from various imaging modalities to provide a more comprehensive understanding of the underlying pathology. Recently, deep learning-based multimodal fusion techniques have emerged as powerful tools for improving medical image classification. This review offers a thorough analysis of the developments in deep learning-based multimodal fusion for medical classification tasks. We explore the complementary relationships among prevalent clinical modalities and outline three main fusion schemes for multimodal classification networks: input fusion, intermediate fusion (encompassing single-level fusion, hierarchical fusion, and attention-based fusion), and output fusion. By evaluating the performance of these fusion techniques, we provide insight into the suitability of different network architectures for various multimodal fusion scenarios and application domains. Furthermore, we delve into challenges related to network architecture selection, the handling of incomplete multimodal data, and the potential limitations of multimodal fusion. Finally, we spotlight the promising future of Transformer-based multimodal fusion techniques and give recommendations for future research in this rapidly evolving field.
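The three fusion schemes can be illustrated with placeholder encoders and a trivial prediction head; none of this is from the review, it is only a shape-level sketch of where the concatenation or averaging happens in each scheme.

```python
def encode(x):
    """Placeholder encoder: summarizes a feature vector as (sum, max).
    Stands in for a per-modality neural network."""
    return [sum(x), max(x)]

def predict(z):
    """Placeholder head: mean of the encoding, squashed into [0, 1)."""
    m = sum(z) / len(z)
    return m / (1 + m)

# Invented features from two modalities (e.g., MRI and histopathology).
mri, histo = [0.2, 0.4], [0.9, 0.1]

# Input fusion: concatenate raw inputs, then encode once.
p_input = predict(encode(mri + histo))
# Intermediate fusion: encode each modality, concatenate the encodings.
p_inter = predict(encode(mri) + encode(histo))
# Output fusion: average the per-modality predictions.
p_output = (predict(encode(mri)) + predict(encode(histo))) / 2
```

With these toy encoders the three schemes already yield three different scores, which is the point: the choice of fusion level changes what information the head can exploit.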
Affiliation(s)
- Yihao Li
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Mostafa El Habib Daho
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Rachid Zeghlache
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Hugo Le Boité
- Sorbonne University, Paris, France; Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France
- Ramin Tadayoni
- Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France; Paris Cité University, Paris, France
- Béatrice Cochener
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France; Ophthalmology Department, CHRU Brest, Brest, France
- Mathieu Lamard
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
4
Jensen MP, Qiang Z, Khan DZ, Stoyanov D, Baldeweg SE, Jaunmuktane Z, Brandner S, Marcus HJ. Artificial intelligence in histopathological image analysis of central nervous system tumours: A systematic review. Neuropathol Appl Neurobiol 2024; 50:e12981. [PMID: 38738494] [DOI: 10.1111/nan.12981]
Abstract
The convergence of digital pathology and artificial intelligence could assist histopathology image analysis by providing tools for rapid, automated morphological analysis. This systematic review explores the use of artificial intelligence for histopathological image analysis of digitised central nervous system (CNS) tumour slides. Comprehensive searches were conducted across EMBASE, Medline and the Cochrane Library up to June 2023 using relevant keywords. Sixty-eight suitable studies were identified and qualitatively analysed. The risk of bias was evaluated using the Prediction model Risk of Bias Assessment Tool (PROBAST) criteria. All the studies were retrospective and preclinical. Gliomas were the most frequently analysed tumour type. The majority of studies used convolutional neural networks or support vector machines, and the most common goal of the model was for tumour classification and/or grading from haematoxylin and eosin-stained slides. The majority of studies were conducted when legacy World Health Organisation (WHO) classifications were in place, which at the time relied predominantly on histological (morphological) features but have since been superseded by molecular advances. Overall, there was a high risk of bias in all studies analysed. Persistent issues included inadequate transparency in reporting the number of patients and/or images within the model development and testing cohorts, absence of external validation, and insufficient recognition of batch effects in multi-institutional datasets. Based on these findings, we outline practical recommendations for future work including a framework for clinical implementation, in particular, better informing the artificial intelligence community of the needs of the neuropathologist.
Affiliation(s)
- Melanie P Jensen
- Pathology Department, Charing Cross Hospital, Imperial College Healthcare NHS Trust, London, UK
- Briscoe Lab, The Francis Crick Institute, London, UK
- Zekai Qiang
- School of Medicine and Population Health, University of Sheffield Medical School, Sheffield, UK
- Danyal Z Khan
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Computer Science, University College London, London, UK
- Danail Stoyanov
- Department of Computer Science, University College London, London, UK
- Stephanie E Baldeweg
- Department of Diabetes and Endocrinology, University College London Hospitals, London, UK
- Centre for Obesity and Metabolism, Department of Experimental and Translational Medicine, Division of Medicine, University College London, London, UK
- Zane Jaunmuktane
- Division of Neuropathology, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Neurodegenerative Disease, University College London Queen Square Institute of Neurology, London, UK
- Department of Clinical and Movement Neurosciences, University College London Queen Square Institute of Neurology, London, UK
- Sebastian Brandner
- Division of Neuropathology, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Neurodegenerative Disease, University College London Queen Square Institute of Neurology, London, UK
- Hani J Marcus
- Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
- Department of Computer Science, University College London, London, UK
5
Calin VL, Mihailescu M, Petrescu GE, Lisievici MG, Tarba N, Calin D, Ungureanu VG, Pasov D, Brehar FM, Gorgan RM, Moisescu MG, Savopol T. Grading of glioma tumors using digital holographic microscopy. Heliyon 2024; 10:e29897. [PMID: 38694030] [PMCID: PMC11061684] [DOI: 10.1016/j.heliyon.2024.e29897]
Abstract
Gliomas are the most common type of cerebral tumor; their incidence has increased over the last decade, and they have a high mortality rate. For efficient treatment, fast and accurate diagnosis and grading of tumors are imperative. Presently, the grading of tumors is established by histopathological evaluation, which is a time-consuming procedure that relies on the pathologist's experience. Here we propose a supervised machine learning procedure for tumor grading which uses quantitative phase images of unstained tissue samples acquired by digital holographic microscopy. The algorithm uses an extensive set of statistical and texture parameters computed from these images. The procedure was able to classify six classes of images (normal tissue and five glioma subtypes) and to distinguish between glioma types from grades II to IV (with the highest sensitivity and specificity for grade II astrocytoma and grade III oligodendroglioma, and very good scores in recognizing grade III anaplastic astrocytoma and grade IV glioblastoma). The procedure bolsters clinical diagnostic accuracy, offering a swift and reliable means of tumor characterization and grading, ultimately enhancing the treatment decision-making process.
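A toy flavour of the statistical/texture feature extraction described above, using an invented 4x4 "phase image": first-order statistics plus a crude horizontal-contrast measure (real pipelines use far richer texture descriptors, e.g. GLCM families).

```python
import statistics

def phase_features(img):
    """First-order statistics plus a simple horizontal-texture measure
    (mean absolute difference between horizontally adjacent pixels,
    a crude relative of GLCM contrast)."""
    flat = [v for row in img for v in row]
    diffs = [abs(row[j + 1] - row[j])
             for row in img for j in range(len(row) - 1)]
    return {
        "mean": statistics.mean(flat),
        "stdev": statistics.pstdev(flat),
        # Crude asymmetry proxy: +1 if mean exceeds median, else -1.
        "skew_sign": 1 if statistics.mean(flat) > statistics.median(flat) else -1,
        "h_contrast": statistics.mean(diffs),
    }

# Invented quantitative phase map with a bright central region.
img = [
    [0.1, 0.2, 0.2, 0.1],
    [0.2, 0.9, 0.8, 0.2],
    [0.2, 0.8, 0.9, 0.2],
    [0.1, 0.2, 0.2, 0.1],
]
feats = phase_features(img)
```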
Affiliation(s)
- Violeta L. Calin
- Biophysics and Cellular Biotechnology Dept., Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Excellence Center for Research in Biophysics and Cellular Biotechnology, Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Mona Mihailescu
- Digital Holography Imaging and Processing Laboratory, Physics Department, Faculty of Applied Sciences, National University for Science and Technology Politehnica Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
- Centre for Fundamental Sciences Applied in Engineering, National University for Science and Technology Politehnica Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
- George E.D. Petrescu
- Department of Neurosurgery, “Bagdasar-Arseni” Clinical Emergency Hospital, 12 Berceni st., 041915, Bucharest, Romania
- Department of Neurosurgery, Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Mihai Gheorghe Lisievici
- Department of Pathology, “Bagdasar Arseni” Clinical Emergency Hospital, 12 Berceni st., 041915, Bucharest, Romania
- Nicolae Tarba
- Doctoral School of Automatic Control and Computers, National University for Science and Technology Politehnica Bucharest, 313 Splaiul Independentei, 060042, Bucharest, Romania
- Daniel Calin
- Biophysics and Cellular Biotechnology Dept., Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Victor Gabriel Ungureanu
- Biophysics and Cellular Biotechnology Dept., Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Diana Pasov
- Department of Pathology, “Bagdasar Arseni” Clinical Emergency Hospital, 12 Berceni st., 041915, Bucharest, Romania
- Felix M. Brehar
- Department of Neurosurgery, “Bagdasar-Arseni” Clinical Emergency Hospital, 12 Berceni st., 041915, Bucharest, Romania
- Department of Neurosurgery, Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Radu M. Gorgan
- Department of Neurosurgery, “Bagdasar-Arseni” Clinical Emergency Hospital, 12 Berceni st., 041915, Bucharest, Romania
- Department of Neurosurgery, Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Mihaela G. Moisescu
- Biophysics and Cellular Biotechnology Dept., Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Excellence Center for Research in Biophysics and Cellular Biotechnology, Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Tudor Savopol
- Biophysics and Cellular Biotechnology Dept., Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
- Excellence Center for Research in Biophysics and Cellular Biotechnology, Faculty of Medicine, University of Medicine and Pharmacy Carol Davila, 8 Eroii Sanitari Blvd., 050474, Bucharest, Romania
6
Elazab N, Gab-Allah WA, Elmogy M. A multi-class brain tumor grading system based on histopathological images using a hybrid YOLO and RESNET networks. Sci Rep 2024; 14:4584. [PMID: 38403597] [PMCID: PMC10894864] [DOI: 10.1038/s41598-024-54864-6]
Abstract
Gliomas are primary brain tumors arising from glial cells. The classification and grading of these cancers are crucial for prognosis and treatment planning. Deep learning (DL) can potentially improve the digital pathology investigation of brain tumors. In this paper, we developed a technique for visualizing a predictive tumor grading model on histopathology images to help guide doctors by emphasizing characteristics and heterogeneity in forecasts. The proposed technique is a hybrid model based on YOLOv5 and ResNet50. The function of YOLOv5 is to localize and classify the tumor in large histopathological whole slide images (WSIs). The suggested technique incorporates ResNet into the feature extraction of the YOLOv5 framework, and the detection results show that our hybrid network is effective for identifying brain tumors in histopathological images. Next, we estimate the glioma grades using an extreme gradient boosting classifier, which handles well the high-dimensional characteristics and nonlinear interactions present in histopathology images. DL techniques have been used in previous computer-aided diagnosis systems for brain tumor diagnosis. However, by combining the YOLOv5 and ResNet50 architectures into a hybrid model specifically designed for accurate tumor localization and predictive grading within histopathological WSIs, our study presents a new approach that advances the field. By utilizing the advantages of both models, this integration goes beyond traditional techniques to produce improved tumor localization accuracy and thorough feature extraction. Additionally, our method ensures stable training dynamics and strong model performance by integrating ResNet50 into the YOLOv5 framework, addressing concerns about gradient explosion. The proposed technique is tested on the Cancer Genome Atlas dataset, where our model outperforms other standard methods.
Our results indicate that the proposed hybrid model substantially improves tumor subtype discrimination between low-grade glioma (LGG) II and LGG III. With 97.2% accuracy, 97.8% precision, 98.6% sensitivity, and a Dice similarity coefficient of 97%, the proposed model performs well in classifying four grades. These results outperform current approaches for distinguishing LGG from high-grade glioma and provide competitive performance in classifying the four glioma categories reported in the literature.
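The reported metrics can be computed for any label set with a few lines; here is a generic sketch of accuracy and a macro-averaged Dice similarity coefficient on invented glioma-grade labels (per-class Dice averaged over classes is one common convention; the paper may use another).

```python
def accuracy(y_true, y_pred):
    """Fraction of labels predicted exactly right."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_dice(y_true, y_pred, classes):
    """Per-class Dice = 2*TP / (2*TP + FP + FN), averaged over classes."""
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        scores.append(2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0)
    return sum(scores) / len(scores)

# Toy glioma grades (II, III, IV) for eight hypothetical slides.
y_true = [2, 2, 3, 3, 4, 4, 2, 3]
y_pred = [2, 2, 3, 4, 4, 4, 2, 3]
acc = accuracy(y_true, y_pred)
dice = macro_dice(y_true, y_pred, classes=[2, 3, 4])
```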
Affiliation(s)
- Naira Elazab
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Wael A Gab-Allah
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
7
Egemen D, Perkins RB, Cheung LC, Befano B, Rodriguez AC, Desai K, Lemay A, Ahmed SR, Antani S, Jeronimo J, Wentzensen N, Kalpathy-Cramer J, De Sanjose S, Schiffman M. Artificial intelligence-based image analysis in clinical testing: lessons from cervical cancer screening. J Natl Cancer Inst 2024; 116:26-33. [PMID: 37758250] [PMCID: PMC10777665] [DOI: 10.1093/jnci/djad202]
Abstract
Novel screening and diagnostic tests based on artificial intelligence (AI) image recognition algorithms are proliferating. Some initial reports claim outstanding accuracy followed by disappointing lack of confirmation, including our own early work on cervical screening. This is a presentation of lessons learned, organized as a conceptual step-by-step approach to bridge the gap between the creation of an AI algorithm and clinical efficacy. The first fundamental principle is specifying rigorously what the algorithm is designed to identify and what the test is intended to measure (eg, screening, diagnostic, or prognostic). The second principle is designing the AI algorithm to minimize the most clinically important errors. For example, many equivocal cervical images cannot yet be labeled because the borderline between cases and controls is blurred. To avoid a misclassified case-control dichotomy, we have isolated the equivocal cases and formally included an intermediate, indeterminate class (severity order of classes: case>indeterminate>control). The third principle is evaluating AI algorithms like any other test, using clinical epidemiologic criteria. Repeatability of the algorithm at the borderline, for indeterminate images, has proven extremely informative. Distinguishing between internal and external validation is also essential. Linking the AI algorithm results to clinical risk estimation is the fourth principle. Absolute risk (not relative) is the critical metric for translating a test result into clinical use. Finally, generating risk-based guidelines for clinical use that match local resources and priorities is the last principle in our approach. We are particularly interested in applications to lower-resource settings to address health disparities. We note that similar principles apply to other domains of AI-based image analysis for medical diagnostic testing.
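The fourth principle, linking algorithm output to absolute risk, reduces to a simple calculation: for each output class (including the intermediate, indeterminate class), the absolute risk is the fraction of patients in that class who truly have disease. All counts below are invented.

```python
def absolute_risk(counts):
    """counts: {class: (n_diseased, n_total)} -> {class: absolute risk}.
    Absolute risk is the observed disease fraction within each class."""
    return {cls: d / n for cls, (d, n) in counts.items()}

# Hypothetical screening outcomes per algorithm output class,
# using the severity ordering case > indeterminate > control.
counts = {
    "control": (5, 1000),        # 0.5% absolute risk
    "indeterminate": (30, 300),  # 10% absolute risk
    "case": (120, 200),          # 60% absolute risk
}
risk = absolute_risk(counts)
```

The risk estimates, not the raw class labels, are then mapped to management guidelines appropriate to local resources.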
Affiliation(s)
- Didem Egemen
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
- Rebecca B Perkins
- Department of Obstetrics and Gynecology, Boston Medical Center/Boston University School of Medicine, Boston, MA, USA
- Li C Cheung
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
- Brian Befano
- Information Management Services Inc, Calverton, MD, USA
- Department of Epidemiology, School of Public Health, University of Washington, Seattle, WA, USA
- Ana Cecilia Rodriguez
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
- Kanan Desai
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
- Andreanne Lemay
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Syed Rakin Ahmed
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Harvard Graduate Program in Biophysics, Harvard Medical School, Harvard University, Cambridge, MA, USA
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Geisel School of Medicine at Dartmouth, Dartmouth College, Hanover, NH, USA
- Sameer Antani
- National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
- Jose Jeronimo
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
- Nicolas Wentzensen
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
- Jayashree Kalpathy-Cramer
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Silvia De Sanjose
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
- ISGlobal, Barcelona, Spain
- Mark Schiffman
- Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD, USA
8
Tu C, Du D, Zeng T, Zhang Y. Deep Multi-Dictionary Learning for Survival Prediction With Multi-Zoom Histopathological Whole Slide Images. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:14-25. [PMID: 37788195] [DOI: 10.1109/tcbb.2023.3321593]
Abstract
Survival prediction based on histopathological whole slide images (WSIs) is of great significance for risk-benefit assessment and clinical decision-making. However, complex microenvironments and heterogeneous tissue structures in WSIs make it challenging to learn informative prognosis-related representations. Additionally, previous studies mainly focus on modeling with mono-scale WSIs, which commonly ignores useful subtle differences that exist across multi-zoom WSIs. To this end, we propose a deep multi-dictionary learning framework for cancer survival prediction with multi-zoom histopathological WSIs. The framework can recognize and learn discriminative clusters (i.e., microenvironments) based on multi-scale deep representations for survival analysis. Specifically, we learn multi-scale features from multi-zoom tiles of WSIs via a stacked deep autoencoder network, then group different microenvironments with a clustering algorithm. Based on the multi-scale deep features of the clusters, a multi-dictionary learning method with a post-pruning strategy is devised to learn discriminative representations from selected prognosis-related clusters in a task-driven manner. Finally, a survival model (i.e., EN-Cox) is constructed to estimate the risk index of an individual patient. The proposed model is evaluated on three datasets derived from The Cancer Genome Atlas (TCGA), and the experimental results demonstrate that it outperforms several state-of-the-art survival analysis approaches.
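A Cox-type risk index such as EN-Cox produces is usually evaluated with the concordance index; here is a minimal sketch with invented survival times, censoring flags, and risks (ignoring tie-handling and weighting refinements used by production implementations).

```python
def concordance_index(times, events, risks):
    """Fraction of comparable patient pairs where the patient with the
    shorter survival time has the higher predicted risk. A pair (i, j)
    is comparable when the earlier time is an observed event, not a
    censored follow-up."""
    concordant = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties in risk count half
    return concordant / comparable

# Invented follow-up data for four patients.
times = [5, 10, 12, 20]   # months
events = [1, 1, 0, 1]     # 1 = death observed, 0 = censored
risks = [0.9, 0.7, 0.8, 0.1]
cindex = concordance_index(times, events, risks)
```

A c-index of 0.5 corresponds to random risk ordering and 1.0 to perfect ordering.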
9
Prabhune A, Bhat S, Mallavaram A, Mehar Shagufta A, Srinivasan S. A Situational Analysis of the Impact of the COVID-19 Pandemic on Digital Health Research Initiatives in South Asia. Cureus 2023; 15:e48977. [PMID: 38111408] [PMCID: PMC10726017] [DOI: 10.7759/cureus.48977]
Abstract
The objective of this paper was to evaluate and compare the quantity and sustainability of digital health initiatives in the South Asia region before and during the COVID-19 pandemic. The study used a two-step methodology: (a) a descriptive analysis of digital health research articles published from South Asia between 2016 and 2021, stratified by the diseases and conditions for which the initiatives were developed, geography, and the tasks to which each initiative was applied; and (b) a simple and replicable tool developed by the authors to assess the sustainability of digital health initiatives using experimental or observational study designs. The results of the descriptive analysis highlight the following: (a) there was a 40% increase in the number of studies reported in 2020 compared to 2019; (b) the three areas where the most substantive digital health research has been focused are health systems strengthening, ophthalmic disorders, and COVID-19; and (c) remote consultation, health information delivery, and clinical decision support systems are the three most commonly developed tools. We estimated the inter-rater reliability of the sustainability assessment tool we developed, obtaining a Kappa value of 0.806 (±0.088). We conclude that the COVID-19 pandemic has had a positive impact on digital health research, with an increase in the number of digital health initiatives and an improvement in the sustainability score of studies published during the pandemic.
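The reported inter-rater statistic is Cohen's kappa; here is a minimal sketch with two hypothetical raters scoring ten studies.

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance).
    Chance agreement is computed from each rater's marginal label counts."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (po - pe) / (1 - pe)

# Two hypothetical raters scoring ten studies: sustainable (S) or not (N).
rater1 = list("SSSNNSSNNS")
rater2 = list("SSSNNSNNNS")
kappa = cohens_kappa(rater1, rater2)
```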
Affiliation(s)
- Akash Prabhune
- Health and Information Technology, Institute of Health Management Research, Bangalore, IND
- Sachin Bhat
- Health and Information Technology, Institute of Health Management Research, Bangalore, IND
- Surya Srinivasan
- Health and Information Technology, Institute of Health Management Research, Bangalore, IND
10
Khalili N, Kazerooni AF, Familiar A, Haldar D, Kraya A, Foster J, Koptyra M, Storm PB, Resnick AC, Nabavizadeh A. Radiomics for characterization of the glioma immune microenvironment. NPJ Precis Oncol 2023; 7:59. [PMID: 37337080] [DOI: 10.1038/s41698-023-00413-9]
Abstract
Increasing evidence suggests that besides mutational and molecular alterations, the immune component of the tumor microenvironment also substantially impacts tumor behavior and complicates treatment response, particularly to immunotherapies. Although the standard method for characterizing tumor immune profile is through performing integrated genomic analysis on tissue biopsies, the dynamic change in the immune composition of the tumor microenvironment makes this approach infeasible, especially for brain tumors. Radiomics is a rapidly growing field that uses advanced imaging techniques and computational algorithms to extract numerous quantitative features from medical images. Recent advances in machine learning methods are facilitating biological validation of radiomic signatures and allowing them to "mine" for a variety of significant correlates, including genetic, immunologic, and histologic data. Radiomics has the potential to be used as a non-invasive approach to predict the presence and density of immune cells within the microenvironment, as well as to assess the expression of immune-related genes and pathways. This information can be essential for patient stratification, informing treatment decisions and predicting patients' response to immunotherapies. This is particularly important for tumors with difficult surgical access such as gliomas. In this review, we provide an overview of the glioma microenvironment, describe novel approaches for clustering patients based on their tumor immune profile, and discuss the latest progress in the use of radiomics for immune profiling of glioma based on the current literature.
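A toy flavour of the radiomic feature extraction the review surveys: first-order intensity statistics and histogram entropy over a region of interest, with invented intensities (standardized radiomics pipelines extract hundreds of such features, plus shape and texture families).

```python
import math

def first_order_features(roi, bins=4, lo=0, hi=255):
    """Mean, variance, and histogram (Shannon) entropy of ROI intensities."""
    n = len(roi)
    mean = sum(roi) / n
    var = sum((v - mean) ** 2 for v in roi) / n
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in roi:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    probs = [h / n for h in hist if h]
    entropy = -sum(p * math.log2(p) for p in probs)
    return mean, var, entropy

# Invented intensities from a tumor region of interest.
roi = [10, 20, 30, 200, 210, 220, 120, 130]
mean, var, entropy = first_order_features(roi)
```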
Affiliation(s)
- Nastaran Khalili: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Anahita Fathi Kazerooni: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA; AI2D Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, PA, USA; Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Neurosurgery, Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Ariana Familiar: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Debanjan Haldar: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA; Institute of Translational Medicine and Therapeutics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Adam Kraya: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Jessica Foster: Division of Oncology, Children's Hospital of Philadelphia, Philadelphia, PA, USA; Department of Pediatrics, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
- Mateusz Koptyra: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Phillip B Storm: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Adam C Resnick: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA
- Ali Nabavizadeh: Center for Data-Driven Discovery in Biomedicine (D3b), Children's Hospital of Philadelphia, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
11
Wang X, Han H, Xu M, Li S, Zhang D, Du S, Xu M. STNet: shape and texture joint learning through two-stream network for knowledge-guided image recognition. Front Neurosci 2023; 17:1212049. [PMID: 37397450] [PMCID: PMC10309034] [DOI: 10.3389/fnins.2023.1212049]
Abstract
Introduction: The human brain processes shape and texture information separately, through different neurons in the visual system. In intelligent computer-aided imaging diagnosis, pre-trained feature extractors are commonly used in medical image recognition methods, but common pre-training datasets such as ImageNet tend to improve a model's texture representation while causing it to ignore many shape features. Weak shape feature representation is a disadvantage for tasks in medical image analysis that depend on shape. Methods: Inspired by the function of neurons in the human brain, we propose a shape- and texture-biased two-stream network to enhance shape feature representation in knowledge-guided medical image analysis. First, the two streams of the network, a shape-biased stream and a texture-biased stream, are constructed through classification and segmentation multi-task joint learning. Second, we propose pyramid-grouped convolution to enhance texture feature representation and introduce deformable convolution to enhance shape feature extraction. Third, we use a channel-attention-based feature selection module during shape and texture feature fusion to focus on the key features and eliminate the information redundancy caused by fusion. Finally, to address the difficulty of optimizing the model under the imbalance between benign and malignant samples in medical images, we introduce an asymmetric loss function to improve the model's robustness. Results and conclusion: We applied our method to melanoma recognition on the ISIC-2019 and XJTU-MM datasets, which focus on both the texture and the shape of lesions. Experimental results on dermoscopic and pathological image recognition datasets show that the proposed method outperforms the compared algorithms, demonstrating its effectiveness.
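Channel-attention-based feature selection of the kind the abstract describes is in the family of squeeze-and-excitation gating. The NumPy sketch below illustrates that general idea for fusing two feature streams; the weights are random stand-ins for learned layers, and the paper's exact module may differ.

```python
import numpy as np

def channel_attention_fuse(shape_feat, texture_feat, w1, w2):
    """Fuse two (C, H, W) feature streams with squeeze-and-excitation-style
    channel attention: global-average-pool, two small dense layers, then a
    sigmoid gate per channel."""
    fused = np.concatenate([shape_feat, texture_feat], axis=0)  # (2C, H, W)
    squeeze = fused.mean(axis=(1, 2))                           # global average pool
    hidden = np.maximum(w1 @ squeeze, 0.0)                      # "FC + ReLU"
    scores = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))               # per-channel gate in (0, 1)
    return fused * scores[:, None, None]

# Toy example with random features and random stand-in weights.
rng = np.random.default_rng(1)
C, H, W = 4, 8, 8
shape_f = rng.standard_normal((C, H, W))
texture_f = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((2, 2 * C))   # squeeze: 2C -> 2
w2 = rng.standard_normal((2 * C, 2))   # excite: 2 -> 2C
out = channel_attention_fuse(shape_f, texture_f, w1, w2)
```

The gate suppresses redundant channels after concatenation, which is the stated purpose of the paper's feature selection module.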
Affiliation(s)
- Xijing Wang: National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Hongcheng Han: National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Mengrui Xu: National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China; The School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Shengpeng Li: National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Dong Zhang: National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China; The School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, China
- Shaoyi Du: National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Meifeng Xu: The Second Affiliated Hospital of Xi'an Jiaotong University (Xibei Hospital), Xi'an, China
12
Foucart A, Debeir O, Decaestecker C. Shortcomings and areas for improvement in digital pathology image segmentation challenges. Comput Med Imaging Graph 2023; 103:102155. [PMID: 36525770] [DOI: 10.1016/j.compmedimag.2022.102155]
Abstract
Digital pathology image analysis challenges have been organised regularly since 2010, often with events hosted at major conferences and results published in high-impact journals. These challenges mobilise a lot of energy from organisers, participants, and expert annotators (especially for image segmentation challenges). This study reviews image segmentation challenges in digital pathology and the top-ranked methods, with a particular focus on how reference annotations are generated and how the methods' predictions are evaluated. We found important shortcomings in the handling of inter-expert disagreement and the relevance of the evaluation process chosen. We also noted key problems with the quality control of various challenge elements that can lead to uncertainties in the published results. Our findings show the importance of greatly increasing transparency in the reporting of challenge results, and the need to make publicly available the evaluation codes, test set annotations and participants' predictions. The aim is to properly ensure the reproducibility and interpretation of the results and to increase the potential for exploitation of the substantial work done in these challenges.
Affiliation(s)
- Adrien Foucart: Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium
- Olivier Debeir: Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium
- Christine Decaestecker: Laboratory of Image Synthesis and Analysis, Université Libre de Bruxelles, Av. F.D. Roosevelt 50, 1050 Brussels, Belgium; Center for Microscopy and Molecular Imaging, Université Libre de Bruxelles, Charleroi, Belgium
13
Familiar AM, Mahtabfar A, Fathi Kazerooni A, Kiani M, Vossough A, Viaene A, Storm PB, Resnick AC, Nabavizadeh A. Radio-pathomic approaches in pediatric neuro-oncology: Opportunities and challenges. Neurooncol Adv 2023; 5:vdad119. [PMID: 37841693] [PMCID: PMC10576517] [DOI: 10.1093/noajnl/vdad119]
Abstract
With medical software platforms moving to cloud environments with scalable storage and computing, the translation of predictive artificial intelligence (AI) models to aid in clinical decision-making and facilitate personalized medicine for cancer patients is becoming a reality. Medical imaging, namely radiologic and histologic images, has immense analytical potential in neuro-oncology, and models utilizing integrated radiomic and pathomic data may yield a synergistic effect and provide a new modality for precision medicine. At the same time, the ability to harness multi-modal data is met with challenges in aggregating data across medical departments and institutions, as well as significant complexity in modeling the phenotypic and genotypic heterogeneity of pediatric brain tumors. In this paper, we review recent pathomic and integrated pathomic, radiomic, and genomic studies with clinical applications. We discuss current challenges limiting translational research on pediatric brain tumors and outline technical and analytical solutions. Overall, we propose that to empower the potential residing in radio-pathomics, systemic changes in cross-discipline data management and end-to-end software platforms to handle multi-modal data sets are needed, in addition to embracing modern AI-powered approaches. These changes can improve the performance of predictive models, and ultimately the ability to advance brain cancer treatments and patient outcomes through the development of such models.
Affiliation(s)
- Ariana M Familiar: Center for Data-Driven Discovery in Biomedicine, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Aria Mahtabfar: Center for Data-Driven Discovery in Biomedicine, Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Department of Neurosurgery, Thomas Jefferson University Hospital, Philadelphia, PA, USA
- Anahita Fathi Kazerooni: Center for Data-Driven Discovery in Biomedicine, Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Department of Neurosurgery, Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Mahsa Kiani: Center for Data-Driven Discovery in Biomedicine, Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Arastoo Vossough: Center for Data-Driven Discovery in Biomedicine, Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Angela Viaene: Department of Pathology and Laboratory Medicine, Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Phillip B Storm: Center for Data-Driven Discovery in Biomedicine, Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Department of Neurosurgery, Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Adam C Resnick: Center for Data-Driven Discovery in Biomedicine, Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Department of Neurosurgery, Children’s Hospital of Philadelphia, Philadelphia, PA, USA
- Ali Nabavizadeh: Center for Data-Driven Discovery in Biomedicine, Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
14
Hewitt KJ, Löffler CML, Muti HS, Berghoff AS, Eisenlöffel C, van Treeck M, Carrero ZI, El Nahhas OSM, Veldhuizen GP, Weil S, Saldanha OL, Bejan L, Millner TO, Brandner S, Brückmann S, Kather JN. Direct image to subtype prediction for brain tumors using deep learning. Neurooncol Adv 2023; 5:vdad139. [PMID: 38106649] [PMCID: PMC10724115] [DOI: 10.1093/noajnl/vdad139]
Abstract
Background: Deep learning (DL) can predict molecular alterations of solid tumors directly from routine histopathology slides. Since the 2021 update of the World Health Organization (WHO) diagnostic criteria, the classification of brain tumors has integrated both histopathological and molecular information. We hypothesize that DL can predict molecular alterations as well as WHO subtyping of brain tumors from hematoxylin and eosin-stained histopathology slides. Methods: We used weakly supervised DL and applied it to three large cohorts of brain tumor samples, comprising N = 2845 patients. Results: We found that the key molecular alterations for subtyping, IDH and ATRX, as well as 1p/19q codeletion, were predictable from histology with an area under the receiver operating characteristic curve (AUROC) of 0.95, 0.90, and 0.80 in the training cohort, respectively. These findings were upheld in external validation cohorts, with AUROCs of 0.90, 0.79, and 0.87 for prediction of IDH, ATRX, and 1p/19q codeletion, respectively. Conclusions: In the future, such DL-based implementations could ease diagnostic workflows, particularly where advanced molecular testing is not readily available.
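The AUROC values reported above can be computed directly from the rank-sum (Mann-Whitney U) identity: the probability that a randomly chosen positive is scored above a randomly chosen negative. A small self-contained sketch on synthetic labels and scores (no tie handling, which production metrics libraries do include):

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the rank-sum identity; assumes distinct scores (no ties)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    ranks = scores.argsort().argsort() + 1.0   # 1-based ranks of each score
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    # Mann-Whitney U statistic for the positive class, normalised to [0, 1].
    u = ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2.0
    return u / (n_pos * n_neg)

y = [0, 0, 1, 1]
s = [0.1, 0.4, 0.35, 0.8]
score = auroc(y, s)   # one positive-negative pair is mis-ordered
```

Here one of the four positive-negative pairs is mis-ordered, giving an AUROC of 0.75.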
Affiliation(s)
- Katherine J Hewitt: Department of Medicine III, University Hospital RWTH Aachen, Aachen, North Rhine-Westphalia, Germany; Clinical Artificial Intelligence, Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Saxony, Germany
- Chiara M L Löffler: Department of Medicine III, University Hospital RWTH Aachen, Aachen, North Rhine-Westphalia, Germany; Clinical Artificial Intelligence, Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Saxony, Germany; Department of Internal Medicine I, University Hospital Carl Gustav Carus, Dresden, Saxony, Germany
- Hannah Sophie Muti: Clinical Artificial Intelligence, Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Saxony, Germany; Department of Visceral, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus Dresden, Dresden, Saxony, Germany
- Anna Sophie Berghoff: Department of Medicine 1, Division of Oncology, Medical University of Vienna, Vienna, Austria
- Christian Eisenlöffel: Department of Pathology, St. Georg Teaching Hospital, University of Leipzig, Leipzig, Saxony, Germany
- Marko van Treeck: Clinical Artificial Intelligence, Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Saxony, Germany
- Zunamys I Carrero: Clinical Artificial Intelligence, Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Saxony, Germany
- Omar S M El Nahhas: Clinical Artificial Intelligence, Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Saxony, Germany
- Gregory P Veldhuizen: Clinical Artificial Intelligence, Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Saxony, Germany
- Sophie Weil: Neurology Clinic, Department of Neurology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Baden-Württemberg, Germany; Clinical Cooperation Unit Neuro-oncology, Department of Neurology, German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Heidelberg, Baden-Württemberg, Germany
- Oliver Lester Saldanha: Department of Medicine III, University Hospital RWTH Aachen, Aachen, North Rhine-Westphalia, Germany
- Laura Bejan: School of Medicine, Faculty of Medicine and Dentistry, University College London, London, Greater London, UK
- Thomas O Millner: Division of Neuropathology, Queen Square Institute of Neurology, University College London, London, Greater London, UK; Blizard Institute, Faculty of Medicine and Dentistry, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, Greater London, UK
- Sebastian Brandner: Division of Neuropathology, Queen Square Institute of Neurology, University College London, London, Greater London, UK
- Sascha Brückmann: Institut für Pathologie, University Hospital Carl Gustav Carus, Dresden, Saxony, Germany
- Jakob Nikolas Kather: Department of Medicine III, University Hospital RWTH Aachen, Aachen, North Rhine-Westphalia, Germany; Clinical Artificial Intelligence, Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Saxony, Germany; Department of Internal Medicine I, University Hospital Carl Gustav Carus, Dresden, Saxony, Germany; Pathology & Data Analytics, Faculty of Medicine and Health, Leeds Institute of Medical Research at St James’s, University of Leeds, Leeds, West Yorkshire, UK; Department of Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Baden-Württemberg, Germany
15
Kosaraju S, Park J, Lee H, Yang JW, Kang M. Deep learning-based framework for slide-based histopathological image analysis. Sci Rep 2022; 12:19075. [PMID: 36351997] [PMCID: PMC9646838] [DOI: 10.1038/s41598-022-23166-0]
Abstract
Digital pathology coupled with advanced machine learning (e.g., deep learning) has been changing the paradigm of whole-slide histopathological image (WSI) analysis. Major applications of machine learning in digital pathology include automatic cancer classification, survival analysis, and subtyping from pathological images. While most pathological image analyses are patch-wise because of the extremely large size of histopathology images, several applications predict a single clinical outcome or perform a pathological diagnosis per slide (e.g., cancer classification, survival analysis). However, current slide-based analyses are task-dependent, and a general framework for slide-based analysis of WSIs has seldom been investigated. We propose a novel slide-based histopathology analysis framework that creates a WSI representation map, called HipoMap, which can be applied to any slide-based problem when coupled with convolutional neural networks. HipoMap converts a WSI of arbitrary shape and size into a structured image-type representation. HipoMap outperformed existing methods in intensive experiments with various settings and datasets: an area under the curve (AUC) of 0.96±0.026 (a 5% improvement) for lung cancer classification, and a c-index of 0.787±0.013 (a 3.5% improvement) and a coefficient of determination (R²) of 0.978±0.032 (a 24% improvement) for survival analysis and survival prediction with TCGA lung cancer data, respectively. These results represent significant improvements over the current state-of-the-art methods on each task. We further discuss the experimental results from a pathological viewpoint and verify the performance using the publicly available TCGA datasets. A Python package is available at https://pypi.org/project/hipomap and can be installed with pip.
The open-source code in Python is available at: https://github.com/datax-lab/HipoMap
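The c-index reported above is Harrell's concordance index, the standard discrimination metric for survival models. A minimal sketch on synthetic data follows; it illustrates the metric only, not HipoMap itself.

```python
import numpy as np

def concordance_index(times, events, risk):
    """Harrell's c-index: among comparable pairs (the earlier time is an
    observed event), the fraction where the higher predicted risk fails
    earlier. Ties in risk count as 0.5; censored subjects (event = 0)
    cannot serve as the earlier member of a pair."""
    times, events, risk = map(np.asarray, (times, events, risk))
    num = den = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:  # comparable pair (i, j)
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# Synthetic cohort: risk scores perfectly anti-ordered with survival time.
t = [2, 4, 6, 8]
e = [1, 1, 0, 1]          # the third subject is censored
r = [0.9, 0.7, 0.5, 0.1]
cidx = concordance_index(t, e, r)
```

A perfectly ranked cohort yields a c-index of 1.0; random risk scores yield about 0.5.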
Affiliation(s)
- Sai Kosaraju: Department of Computer Science, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
- Jeongyeon Park: Department of Computer Science, Sun Moon University, Asan, 336708, South Korea
- Hyun Lee: Department of Computer Science, Sun Moon University, Asan, 336708, South Korea
- Jung Wook Yang: Department of Pathology, Gyeongsang National University Hospital, Gyeongsang National University College of Medicine, Jinju, South Korea
- Mingon Kang: Department of Computer Science, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
16
Herbsthofer L, Tomberger M, Smolle MA, Prietl B, Pieber TR, López-García P. Cell2Grid: an efficient, spatial, and convolutional neural network-ready representation of cell segmentation data. J Med Imaging (Bellingham) 2022; 9:067501. [PMID: 36466076] [PMCID: PMC9709305] [DOI: 10.1117/1.jmi.9.6.067501]
Abstract
Purpose: Cell segmentation algorithms are commonly used to analyze large histologic images because they facilitate interpretation, but they also complicate hypothesis-free spatial analysis. Many applications therefore train convolutional neural networks (CNNs) on high-resolution images that resolve individual cells instead, but their practical application is severely limited by computational resources. In this work, we propose and investigate an alternative spatial data representation based on cell segmentation data for direct training of CNNs. Approach: We introduce and analyze the properties of Cell2Grid, an algorithm that generates compact images from cell segmentation data by placing individual cells into a low-resolution grid and resolving possible cell conflicts. For evaluation, we present a case study on colorectal cancer relapse prediction using fluorescent multiplex immunohistochemistry images. Results: We could generate Cell2Grid images at 5-µm resolution that were 100 times smaller than the originals. Cell features, such as phenotype counts and nearest-neighbor cell distances, remained similar to those of the original cell segmentation tables (p < 0.0001). These images could be fed directly to a CNN for predicting colon cancer relapse. Our experiments showed that the test-set error rate was reduced by 25% compared with CNNs trained on images rescaled to 5 µm with bilinear interpolation. Compared with images at 1-µm resolution (bilinear rescaling), our method reduced CNN training time by 85%. Conclusions: Cell2Grid is an efficient spatial data representation algorithm that enables the use of conventional CNNs on cell segmentation data. Its cell-based representation additionally opens a door to simplified model interpretation and synthetic image generation.
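The core idea of placing segmented cells into a low-resolution grid and resolving conflicts can be sketched in a few lines. This toy version (nearest-free-neighbour conflict handling, a single integer phenotype channel, cells silently dropped if the neighbourhood is full) is an assumption-laden simplification, not the published implementation.

```python
import numpy as np

def cell2grid(coords_um, phenotypes, resolution_um=5.0):
    """Place cell centroids into a coarse grid, one cell per pixel,
    nudging conflicting cells to the first free 8-neighbour.
    coords_um: (N, 2) centroids in micrometres.
    phenotypes: (N,) integer labels; 0 is reserved for 'empty'."""
    cells = np.floor(np.asarray(coords_um) / resolution_um).astype(int)
    h = cells[:, 0].max() + 1
    w = cells[:, 1].max() + 1
    grid = np.zeros((h, w), dtype=int)
    for (r, c), ph in zip(cells, phenotypes):
        if grid[r, c] == 0:
            grid[r, c] = ph
            continue
        # Conflict: scan the 8-neighbourhood for a free pixel.
        placed = False
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and grid[rr, cc] == 0:
                    grid[rr, cc] = ph
                    placed = True
                    break
            if placed:
                break
    return grid

# Two cells share a 5-µm pixel; the second is nudged to a neighbour.
coords = [(2.0, 2.0), (3.0, 3.0), (40.0, 40.0)]
grid = cell2grid(coords, phenotypes=[1, 2, 3])
```

The resulting compact integer image is the kind of CNN-ready input the paper evaluates (the real algorithm encodes multiple marker channels per pixel).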
Affiliation(s)
- Laurin Herbsthofer: CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria; BioTechMed, Graz, Austria
- Martina Tomberger: CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria
- Maria A. Smolle: Medical University of Graz, Department of Orthopaedics and Trauma, Graz, Austria
- Barbara Prietl: CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria; BioTechMed, Graz, Austria; Medical University of Graz, Division of Endocrinology and Diabetology, Graz, Austria
- Thomas R. Pieber: CBmed, Center for Biomarker Research in Medicine GmbH, Graz, Austria; BioTechMed, Graz, Austria; Medical University of Graz, Division of Endocrinology and Diabetology, Graz, Austria; Health Institute for Biomedicine and Health Sciences, Joanneum Research Forschungsgesellschaft mbH, Graz, Austria
17
Differentiating Glioblastoma Multiforme from Brain Metastases Using Multidimensional Radiomics Features Derived from MRI and Multiple Machine Learning Models. Biomed Res Int 2022; 2022:2016006. [PMID: 36212721] [PMCID: PMC9534611] [DOI: 10.1155/2022/2016006]
Abstract
Because treatment strategies differ, it is extremely important to differentiate between glioblastoma multiforme (GBM) and brain metastases (MET). Distinguishing GBM from MET using MRI often proves difficult because of their similar appearance on imaging. Despite the importance of magnetic resonance imaging in detecting, characterizing, and monitoring brain tumors, surgical methods are still necessary for definitive diagnosis. We introduce an accurate, convenient, and user-friendly method to differentiate between GBM and MET through routine MRI sequences and radiomics analyses. We collected 91 patients from one institution, including 50 with GBM and 41 with MET, all proven pathologically. The tumors were segmented separately on all MRI images (T1-weighted imaging (T1WI), contrast-enhanced T1-weighted imaging (T1C), T2-weighted imaging (T2WI), and fluid-attenuated inversion recovery (FLAIR)) to form the volume of interest (VOI). Eight machine learning models and feature reduction strategies were evaluated on the routine MRI sequences in two settings: radiomics without wavelet transform (first model) and with wavelet transform (second model). The optimal model was selected based on each model’s accuracy, AUC-ROC, and F1-score. The second model achieved an accuracy, AUC-ROC, and F1-score of 0.98, 0.99, and 0.98, respectively, outperforming the first model. In most of the investigated models, the multidimensional-wavelet model improved significantly on its non-wavelet counterpart. Multidimensional discrete wavelet transform can analyze hidden features of the MRI from a different perspective and generate accurate features that are highly correlated with model accuracy.
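The wavelet step can be illustrated with a single-level 2-D Haar transform, whose four subbands (approximation LL and details LH, HL, HH) radiomic features are then computed on. This averaging-convention Haar sketch is a minimal stand-in for the study's multidimensional wavelet step; real pipelines typically use a library such as PyWavelets.

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar DWT (averaging convention), returning the
    approximation (LL) and detail (LH, HL, HH) subbands."""
    x = np.asarray(x, dtype=float)
    # Pairwise averages/differences along rows, then along columns.
    lo_r = (x[0::2, :] + x[1::2, :]) / 2.0
    hi_r = (x[0::2, :] - x[1::2, :]) / 2.0
    ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0
    lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0
    hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0
    hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0
    return ll, lh, hl, hh

# A smooth ramp image: details are constant, the diagonal band vanishes.
img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
```

Feature sets computed on each subband capture texture at different scales and orientations, which is the stated source of the wavelet model's advantage.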
18
Verdugo E, Puerto I, Medina MÁ. An update on the molecular biology of glioblastoma, with clinical implications and progress in its treatment. Cancer Commun (Lond) 2022; 42:1083-1111. [PMID: 36129048] [DOI: 10.1002/cac2.12361]
Abstract
Glioblastoma multiforme (GBM) is the most aggressive and common malignant primary brain tumor. Patients with GBM often have poor prognoses, with a median survival of ∼15 months. Enhanced understanding of the molecular biology of central nervous system tumors has led to modifications in their classifications, the most recent of which classified these tumors into new categories and made some changes in their nomenclature and grading system. This review aims to give a panoramic view of the last 3 years' findings in glioblastoma characterization, its heterogeneity, and current advances in its treatment. Several molecular parameters have been used to achieve an accurate and personalized characterization of glioblastoma in patients, including epigenetic, genetic, transcriptomic and metabolic features, as well as age- and sex-related patterns and the involvement of several noncoding RNAs in glioblastoma progression. Astrocyte-like neural stem cells and outer radial glial-like cells from the subventricular zone have been proposed as agents involved in GBM of IDH-wildtype origin, but this remains controversial. Glioblastoma metabolism is characterized by upregulation of the PI3K/Akt/mTOR signaling pathway, promotion of the glycolytic flux, maintenance of lipid storage, and other features. This metabolism also contributes to glioblastoma's resistance to conventional therapies. Tumor heterogeneity, a hallmark of GBM, has been shown to affect the genetic expression, modulation of metabolic pathways, and immune system evasion. GBM's aggressive invasion potential is modulated by cell-to-cell crosstalk within the tumor microenvironment and altered expressions of specific genes, such as ANXA2, GBP2, FN1, PHIP, and GLUT3. Nevertheless, the rising number of active clinical trials illustrates the efforts to identify new targets and drugs to treat this malignancy. 
Immunotherapy remains relevant for research purposes, given the number of ongoing clinical trials based on this strategy to treat GBM, and neoantigen and nucleic acid-based vaccines are gaining importance due to their antitumoral activity by inducing the immune response. Furthermore, there are clinical trials focused on the PI3K/Akt/mTOR axis, angiogenesis, and tumor heterogeneity for developing molecular-targeted therapies against GBM. Other strategies, such as nanodelivery and computational models, may improve drug pharmacokinetics and the prognosis of patients with GBM.
Affiliation(s)
- Elena Verdugo: Department of Molecular Biology and Biochemistry, University of Málaga, Málaga, Málaga, E-29071, Spain
- Iker Puerto: Department of Molecular Biology and Biochemistry, University of Málaga, Málaga, Málaga, E-29071, Spain
- Miguel Ángel Medina: Department of Molecular Biology and Biochemistry, University of Málaga, Málaga, Málaga, E-29071, Spain; Biomedical Research Institute of Málaga (IBIMA-Plataforma Bionand), Málaga, Málaga, E-29071, Spain; Spanish Biomedical Research Network Center for Rare Diseases (CIBERER), Spanish Health Institute Carlos III (ISCIII), Málaga, Málaga, E-29071, Spain
19
Martin PCN, Kim H, Lövkvist C, Hong BW, Won KJ. Vesalius: high-resolution in silico anatomization of spatial transcriptomic data using image analysis. Mol Syst Biol 2022; 18:e11080. [PMID: 36065846] [PMCID: PMC9446088] [DOI: 10.15252/msb.202211080]
Abstract
Characterization of tissue architecture promises to deliver insights into development, cell communication, and disease. In silico spatial domain retrieval methods have been developed for spatial transcriptomics (ST) data assuming transcriptional similarity of neighboring barcodes. However, domain retrieval approaches with this assumption cannot work in complex tissues composed of multiple cell types. This task becomes especially challenging in cellular-resolution ST methods. We developed Vesalius to decipher tissue anatomy from ST data by applying image processing technology. Vesalius uniquely detected territories composed of multiple cell types and successfully recovered tissue structures in high-resolution ST data, including in mouse brain, embryo, liver, and colon. Utilizing this tissue architecture, Vesalius identified tissue morphology-specific gene expression and regionally specific gene expression changes for astrocytes, interneurons, oligodendrocytes, and entorhinal cells in the mouse brain.
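One way to make ST data amenable to image processing, in the spirit of (but not identical to) Vesalius, is to project per-barcode expression embeddings onto their first principal components and paint them into an image at the barcodes' grid coordinates. The function name, the PCA-to-RGB choice, and the toy data below are all illustrative assumptions; the published pipeline is an R package with additional smoothing and segmentation steps.

```python
import numpy as np

def embeddings_to_rgb(coords, embeddings, shape):
    """Project embeddings onto 3 principal components, rescale each
    channel to [0, 1], and paint barcodes into an RGB image."""
    X = np.asarray(embeddings, dtype=float)
    X = X - X.mean(axis=0)
    # PCA via SVD: rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    pcs = X @ vt[:3].T                      # (N, 3) component scores
    span = np.ptp(pcs, axis=0)
    span[span == 0] = 1.0
    pcs = (pcs - pcs.min(axis=0)) / span    # per-channel rescale to [0, 1]
    img = np.zeros(shape + (3,))
    for (r, c), colour in zip(coords, pcs):
        img[r, c] = colour
    return img

# Toy 4x4 array of barcodes with 10-dimensional random "expression".
rng = np.random.default_rng(2)
coords = [(r, c) for r in range(4) for c in range(4)]
emb = rng.standard_normal((16, 10))
img = embeddings_to_rgb(coords, emb, (4, 4))
```

Once the tissue is an image, standard segmentation and territory-detection tools become applicable, which is the core trick the abstract describes.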
Affiliation(s)
- Patrick C N Martin
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Hollywood, CA, USA; Biotech Research and Innovation Centre (BRIC), University of Copenhagen, Copenhagen, Denmark
- Hyobin Kim
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Hollywood, CA, USA; Biotech Research and Innovation Centre (BRIC), University of Copenhagen, Copenhagen, Denmark
- Cecilia Lövkvist
- Biotech Research and Innovation Centre (BRIC), University of Copenhagen, Copenhagen, Denmark
- Byung-Woo Hong
- Computer Science Department, Chung-Ang University, Seoul, Korea
- Kyoung Jae Won
- Department of Computational Biomedicine, Cedars-Sinai Medical Center, Hollywood, CA, USA; Biotech Research and Innovation Centre (BRIC), University of Copenhagen, Copenhagen, Denmark
20
Xie Y, Zaccagna F, Rundo L, Testa C, Agati R, Lodi R, Manners DN, Tonon C. Convolutional Neural Network Techniques for Brain Tumor Classification (from 2015 to 2022): Review, Challenges, and Future Perspectives. Diagnostics (Basel) 2022; 12:diagnostics12081850. [PMID: 36010200 PMCID: PMC9406354 DOI: 10.3390/diagnostics12081850] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2022] [Revised: 07/20/2022] [Accepted: 07/28/2022] [Indexed: 12/21/2022] Open
Abstract
Convolutional neural networks (CNNs) constitute a widely used deep learning approach that has frequently been applied to the problem of brain tumor diagnosis. Such techniques still face critical challenges on the path to clinical application. The main objective of this work is to present a comprehensive review of studies using CNN architectures to classify brain tumors from MR images, with the aim of identifying useful strategies for, and possible impediments to, the development of this technology. Relevant articles were identified using a predefined, systematic procedure. For each article, data were extracted regarding training data, target problems, network architecture, validation methods, and the reported quantitative performance criteria. The clinical relevance of the studies was then evaluated to identify limitations, by considering the merits of convolutional neural networks and the remaining challenges that need to be solved to promote the clinical application and development of CNN algorithms. Finally, possible directions for future research are discussed for researchers in the biomedical and machine learning communities. A total of 83 studies were identified and reviewed. They differed in terms of the precise classification problem targeted and the strategies used to construct and train the chosen CNN. Consequently, the reported performance varied widely, with accuracies of 91.63–100% in differentiating meningiomas, gliomas, and pituitary tumors (26 articles) and of 60.0–99.46% in distinguishing low-grade from high-grade gliomas (13 articles). The review provides a survey of the state of the art in CNN-based deep learning methods for brain tumor classification. Many networks demonstrated good performance, and it is not evident that any specific methodological choice greatly outperforms the alternatives, especially given the inconsistencies encountered in the reporting of validation methods, performance metrics, and training data. Few studies have focused on clinical usability.
Affiliation(s)
- Yuting Xie
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Fulvio Zaccagna
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
- Leonardo Rundo
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, Italy
- Claudia Testa
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
- Department of Physics and Astronomy, University of Bologna, 40127 Bologna, Italy
- Raffaele Agati
- Programma Neuroradiologia con Tecniche ad elevata complessità, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
- Raffaele Lodi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
- David Neil Manners
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Caterina Tonon
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
- Functional and Molecular Neuroimaging Unit, IRCCS Istituto delle Scienze Neurologiche di Bologna, Bellaria Hospital, 40139 Bologna, Italy
21
Saednia K, Tran WT, Sadeghi-Naini A. A Cascaded Deep Learning Framework for Segmentation of Nuclei in Digital Histology Images. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:4764-4767. [PMID: 36086360 DOI: 10.1109/embc48229.2022.9871996] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Accurate segmentation of nuclei is an essential step in the analysis of digital histology images for diagnostic and prognostic applications. Despite recent advances in automated frameworks for nuclei segmentation, this task remains challenging. Specifically, detecting small nuclei in large-scale histology images and accurately delineating the borders of touching nuclei is complicated even for advanced deep neural networks. In this study, a cascaded deep learning framework is proposed to segment nuclei accurately in digitized microscopy images of histology slides. A U-Net-based model with a customized pixel-wise weighted loss function is adapted in the proposed framework, followed by a U-Net-based model with a VGG16 backbone and a soft Dice loss function. The model was pretrained on the public Post-NAT-BRCA dataset before training and independent evaluation on the MoNuSeg dataset. The cascaded model outperformed the other state-of-the-art models with an AJI of 0.72 and an F1-score of 0.83 on the MoNuSeg test set.
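The soft Dice loss named in this abstract is a standard segmentation objective; as a minimal sketch (an illustration of the loss, not the authors' implementation), it can be written as:

```python
def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary segmentation.

    pred:   predicted foreground probabilities in [0, 1]
    target: ground-truth mask values (0 or 1), same length
    Returns 1 - Dice coefficient, so 0.0 means perfect overlap.
    """
    intersection = sum(p * t for p, t in zip(pred, target))
    denominator = sum(pred) + sum(target)
    dice = (2.0 * intersection + eps) / (denominator + eps)
    return 1.0 - dice
```

In practice the sums run over all pixels of an image (or batch); the epsilon term keeps the loss defined when both the prediction and the mask are empty.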
22
Machine learning in neuro-oncology: toward novel development fields. J Neurooncol 2022; 159:333-346. [PMID: 35761160 DOI: 10.1007/s11060-022-04068-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Accepted: 06/11/2022] [Indexed: 10/17/2022]
Abstract
PURPOSE Artificial intelligence (AI) encompasses several different techniques able to elaborate a large amount of data in pursuit of a specific planned outcome. There are several possible applications of this technology in neuro-oncology. METHODS We reviewed, according to PRISMA guidelines, available studies adopting AI in different fields of neuro-oncology, including neuro-radiology, pathology, surgery, radiation therapy, and systemic treatments. RESULTS Neuro-radiology accounted for the largest number of studies assessing AI. However, this technology is also being successfully tested in other operative settings, including surgery and radiation therapy. In these contexts, AI has been shown to significantly reduce resources and costs while maintaining an elevated qualitative standard. Pathological diagnosis and the development of novel systemic treatments are two other fields in which AI has shown promising preliminary data. CONCLUSION It is likely that AI will quickly be included in some aspects of daily clinical practice. The possible applications of these techniques are impressive and cover all aspects of neuro-oncology.
23
Hsu WW, Guo JM, Pei L, Chiang LA, Li YF, Hsiao JC, Colen R, Liu P. A weakly supervised deep learning-based method for glioma subtype classification using WSI and mpMRIs. Sci Rep 2022; 12:6111. [PMID: 35414643 PMCID: PMC9005548 DOI: 10.1038/s41598-022-09985-1] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Accepted: 03/30/2022] [Indexed: 11/09/2022] Open
Abstract
Accurate glioma subtype classification is critical for the treatment management of patients with brain tumors. Developing an automatic computer-aided algorithm for glioma subtype classification is challenging due to many factors. One of the difficulties is the label constraint: each case is simply labeled with the glioma subtype, without precise annotation of lesion regions. In this paper, we propose a novel hybrid fully convolutional neural network (CNN)-based method for glioma subtype classification using both whole slide imaging (WSI) and multiparametric magnetic resonance images (mpMRIs). It comprises two methods: a WSI-based method and an mpMRI-based method. For the WSI-based method, we categorize the glioma subtype using a 2D CNN on WSIs. To overcome the label constraint issue, we extract the truly representative patches for glioma subtype classification in a weakly supervised fashion. For the mpMRI-based method, we develop a 3D CNN-based method by analyzing the mpMRIs; it consists of brain tumor segmentation and classification. Finally, to enhance the robustness of the predictions, we fuse the WSI-based and mpMRI-based results guided by a confidence index. Experimental results on the validation dataset of the CPM-RadPath 2020 competition show that comprehensive judgments from both modalities achieve better performance than using WSI or mpMRIs alone. Furthermore, our result using the proposed method ranked third in CPM-RadPath 2020 in the testing phase. The proposed method demonstrates a competitive performance, creditable to the success of the weakly supervised approach and the strategy of label agreement from multi-modality data.
Affiliation(s)
- Wei-Wen Hsu
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, ROC
- Jing-Ming Guo
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, ROC
- Linmin Pei
- Imaging and Visualization Group, ABCS, Frederick National Laboratory for Cancer Research, Frederick, MD, 21702, USA
- Ling-An Chiang
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, ROC
- Yao-Feng Li
- Department of Pathology, Tri-Service General Hospital and National Defense Medical Center, Taipei, 11490, Taiwan, ROC
- Jui-Chien Hsiao
- Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, ROC
- Rivka Colen
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, 15232, USA; Hillman Cancer Center, University of Pittsburgh Medical Center, Pittsburgh, PA, 15260, USA
- Peizhong Liu
- College of Engineering, Huaqiao University, Quanzhou, China
24
Su CH, Chung PC, Lin SF, Tsai HW, Yang TL, Su YC. Multi-Scale Attention Convolutional Network for Masson Stained Bile Duct Segmentation from Liver Pathology Images. Sensors (Basel) 2022; 22:s22072679. [PMID: 35408293 PMCID: PMC9003085 DOI: 10.3390/s22072679] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Revised: 03/24/2022] [Accepted: 03/29/2022] [Indexed: 05/07/2023]
Abstract
In clinical practice, the Ishak scoring system is adopted to evaluate the grading and staging of hepatitis according to whether portal areas show fibrous expansion, bridging with other portal areas, or bridging with central veins. Based on these staging criteria, it is necessary to identify portal areas and central veins when performing Ishak staging. Bile ducts have variant types and are very difficult to detect under a single magnification, hence pathologists must observe bile ducts at different magnifications to obtain sufficient information. This routine pathologic examination, however, is labor-intensive and expensive. Automatic quantitative analysis for pathologic examinations has therefore seen increased demand and attracted significant attention recently. A multi-scale-input attention convolutional network is proposed in this study to simulate pathologists' examination procedure of observing bile ducts under different magnifications in liver biopsy. The proposed multi-scale attention network integrates cell-level information and adjacent structural feature information for bile duct segmentation. In addition, the attention mechanism of the proposed model enables the network to focus the segmentation task on the high-magnification input, reducing the influence of the low-magnification input while still providing a wider field of surrounding information. In comparison with existing models, including FCN, U-Net, SegNet, DeepLabv3, and DeepLabv3-plus, the experimental results demonstrate that the proposed model improves segmentation performance on the Masson bile duct segmentation task, with 72.5% IoU and an 84.1% F1-score.
Affiliation(s)
- Chun-Han Su
- Institute of Computer and Communication Engineering, National Cheng Kung University, Tainan City 701, Taiwan
- Pau-Choo Chung
- Institute of Computer and Communication Engineering, National Cheng Kung University, Tainan City 701, Taiwan
- Sheng-Fung Lin
- Division of Hematology and Oncology, Department of Internal Medicine, E-Da Hospital, Kaohsiung 824, Taiwan
- Hung-Wen Tsai
- Department of Pathology, National Cheng Kung University Hospital, Tainan City 704, Taiwan
- Tsung-Lung Yang
- Kaohsiung Veterans General Hospital, Kaohsiung 813414, Taiwan
- Yu-Chieh Su
- Division of Hematology and Oncology, Department of Internal Medicine, E-Da Hospital, Kaohsiung 824, Taiwan
- School of Medicine, I-Shou University, Kaohsiung 824, Taiwan
25
Güley O, Pati S, Bakas S. Classification of Infection and Ischemia in Diabetic Foot Ulcers Using VGG Architectures. Diabetic Foot Ulcers Grand Challenge: Second Challenge, DFUC 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, September 27, 2021: Proceedings 2022; 13183:76-89. [PMID: 35465060 PMCID: PMC9026672 DOI: 10.1007/978-3-030-94907-5_6] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Diabetic foot ulceration (DFU) is a serious complication of diabetes and a major challenge for healthcare systems around the world. Further infection and ischemia in DFU can significantly prolong treatment and often result in limb amputation, with more severe cases resulting in terminal illness. Thus, early identification and regular monitoring are necessary to improve care and reduce the burden on healthcare systems. With that in mind, this study addresses the problem of infection and ischemia classification in diabetic foot ulcers across four distinct classes. We evaluated a series of VGG architectures with different layers, following numerous training strategies, including k-fold cross-validation, data pre-processing options, augmentation techniques, and weighted loss calculations. In favor of transparency and reproducibility, we make all the implementations available through the Generally Nuanced Deep Learning Framework (GaNDLF, github.com/CBICA/GaNDLF). Our best model was evaluated during the DFU Challenge 2021 and was ranked 2nd, 5th, and 7th based on the macro-averaged AUC (area under the curve), macro-averaged F1-score, and macro-averaged recall metrics, respectively. Our findings support that current state-of-the-art architectures provide good results for the DFU image classification task, and further experimentation is required to study the effects of pre-processing and augmentation strategies.
Affiliation(s)
- Orhun Güley
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Informatics, Technical University of Munich, Munich, Germany
- Sarthak Pati
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Informatics, Technical University of Munich, Munich, Germany
- Spyridon Bakas
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
26
Chunduru P, Phillips JJ, Molinaro AM. Prognostic risk stratification of gliomas using deep learning in digital pathology images. Neurooncol Adv 2022; 4:vdac111. [PMID: 35990705 PMCID: PMC9389424 DOI: 10.1093/noajnl/vdac111] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Background Evaluation of tumor-tissue images stained with hematoxylin and eosin (H&E) is pivotal in diagnosis, yet only a fraction of the rich phenotypic information is considered for clinical care. Here, we propose a survival deep learning (SDL) framework to extract this information to predict glioma survival. Methods Digitized whole slide images were downloaded from The Cancer Genome Atlas (TCGA) for 766 diffuse glioma patients, including isocitrate dehydrogenase (IDH)-mutant/1p19q-codeleted oligodendroglioma, IDH-mutant/1p19q-intact astrocytoma, and IDH-wildtype astrocytoma/glioblastoma. Our SDL framework employs a residual convolutional neural network with a survival model to predict patient risk from H&E-stained whole-slide images. We used statistical sampling techniques and randomized the transformation of images to address challenges in learning from histology images. The SDL risk score was evaluated in traditional and recursive partitioning (RPA) survival models. Results The SDL risk score demonstrated substantial univariate prognostic power (median concordance index of 0.79 [se: 0.01]). After adjusting for age and World Health Organization 2016 subtype, the SDL risk score was significantly associated with overall survival (OS; hazard ratio = 2.45; 95% CI: 2.01 to 3.00). Four distinct survival risk groups were characterized by RPA based on SDL risk score, IDH status, and age with markedly different median OS ranging from 1.03 years to 14.14 years. Conclusions The present study highlights the independent prognostic power of the SDL risk score for objective and accurate prediction of glioma outcomes. Further, we show that the RPA delineation of patient-specific risk scores and clinical prognostic factors can successfully demarcate the OS of glioma patients.
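The concordance index reported above measures how well risk scores order patients by observed survival; a minimal sketch of Harrell's C-index for right-censored data (an illustration of the metric, not the study's code) is:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable patient pairs in which
    the higher-risk patient has the shorter observed survival time.

    times:  observed follow-up times
    events: 1 = death observed, 0 = censored
    risks:  model risk scores (higher = worse predicted outcome)
    """
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if the shorter time ended in an event.
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    ties += 1
    if comparable == 0:
        raise ValueError("no comparable pairs")
    return (concordant + 0.5 * ties) / comparable
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect ordering, so the reported 0.79 indicates substantial discriminative power.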
Affiliation(s)
- Pranathi Chunduru
- Department of Neurological Surgery, University of California San Francisco, San Francisco, California, USA
- Joanna J Phillips
- Department of Neurological Surgery, University of California San Francisco, San Francisco, California, USA
- Department of Pathology, University of California San Francisco, San Francisco, California, USA
- Annette M Molinaro
- Department of Neurological Surgery, University of California San Francisco, San Francisco, California, USA
- Department of Epidemiology and Biostatistics, University of California San Francisco, San Francisco, California, USA
27
Radiomics and radiogenomics in gliomas: a contemporary update. Br J Cancer 2021; 125:641-657. [PMID: 33958734 PMCID: PMC8405677 DOI: 10.1038/s41416-021-01387-w] [Citation(s) in RCA: 82] [Impact Index Per Article: 27.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2020] [Revised: 03/10/2021] [Accepted: 03/31/2021] [Indexed: 02/03/2023] Open
Abstract
The natural history and treatment landscape of primary brain tumours are complicated by the varied tumour behaviour of primary or secondary gliomas (high-grade transformation of low-grade lesions), as well as the dilemmas with identification of radiation necrosis, tumour progression, and pseudoprogression on MRI. Radiomics and radiogenomics promise to offer precise diagnosis, predict prognosis, and assess tumour response to modern chemotherapy/immunotherapy and radiation therapy. This is achieved by a triumvirate of morphological, textural, and functional signatures, derived from a high-throughput extraction of quantitative voxel-level MR image metrics. However, the lack of standardisation of acquisition parameters and inconsistent methodology between working groups have made validations unreliable, hence multi-centre studies involving heterogenous study populations are warranted. We elucidate novel radiomic and radiogenomic workflow concepts and state-of-the-art descriptors in sub-visual MR image processing, with relevant literature on applications of such machine learning techniques in glioma management.
28
Diagnostic performance of deep-learning-based screening methods for diabetic retinopathy in primary care-A meta-analysis. PLoS One 2021; 16:e0255034. [PMID: 34375355 PMCID: PMC8354436 DOI: 10.1371/journal.pone.0255034] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Accepted: 07/09/2021] [Indexed: 02/01/2023] Open
Abstract
Background Diabetic retinopathy (DR) affects 10–24% of patients with diabetes mellitus type 1 or 2 in the primary care (PC) sector. As early detection is crucial for treatment, deep learning screening methods in the PC setting could potentially aid in an accurate and timely diagnosis. Purpose The purpose of this meta-analysis was to determine the current state of knowledge regarding deep learning (DL) screening methods for DR in PC. Data sources A systematic literature search was conducted using Medline, Web of Science, and Scopus to identify suitable studies. Study selection Suitable studies were selected by two researchers independently. Studies assessing DL methods and the suitability of these screening systems (diagnostic parameters such as sensitivity and specificity, information on datasets and setting) in PC were selected. Excluded were studies focusing on lesions, applying conventional diagnostic imaging tools, conducted in secondary or tertiary care, and all publication types other than original research studies on human subjects. Data extraction The following data were extracted from included studies: authors, title, year of publication, objectives, participants, setting, type of intervention/method, reference standard, grading scale, outcome measures, dataset, risk of bias, and performance measures. Data synthesis and conclusion The pooled sensitivity of all included studies was 87% and specificity was 90%. Given a DR prevalence of 10% in patients with type 2 diabetes in PC, the negative predictive value is 98% while the positive predictive value is 49%. Limitations The selected studies showed high variation in sample size and in the quality and quantity of available data.
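The predictive values quoted above follow directly from Bayes' rule applied to the reported sensitivity, specificity, and assumed prevalence; a quick arithmetic check:

```python
def predictive_values(sens, spec, prev):
    """Compute (PPV, NPV) from sensitivity, specificity, and prevalence."""
    tp = sens * prev              # true-positive fraction of the population
    fp = (1 - spec) * (1 - prev)  # false-positive fraction
    tn = spec * (1 - prev)        # true-negative fraction
    fn = (1 - sens) * prev        # false-negative fraction
    return tp / (tp + fp), tn / (tn + fn)

# Figures from the abstract: 87% sensitivity, 90% specificity, 10% prevalence
ppv, npv = predictive_values(0.87, 0.90, 0.10)
print(round(ppv, 2), round(npv, 2))  # 0.49 0.98, matching the reported values
```

The low PPV despite high sensitivity and specificity is a direct consequence of the 10% prevalence: most positives in a low-prevalence population are false positives.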
29
Liu D, Chen J, Hu X, Yang K, Liu Y, Hu G, Ge H, Zhang W, Liu H. Imaging-Genomics in Glioblastoma: Combining Molecular and Imaging Signatures. Front Oncol 2021; 11:699265. [PMID: 34295824 PMCID: PMC8290166 DOI: 10.3389/fonc.2021.699265] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Accepted: 06/23/2021] [Indexed: 12/12/2022] Open
Abstract
Based on artificial intelligence (AI), computer-assisted medical diagnosis can scientifically and efficiently deal with large quantities of medical imaging data. AI technologies, including deep learning, have shown remarkable progress across medical image recognition and genome analysis. Imaging-genomics attempts to explore the associations between potential gene expression patterns and specific imaging phenotypes. These associations provide potential cellular pathophysiology information, allowing sampling of the lesion habitat with high spatial resolution. Glioblastoma (GB) exhibits spatially and temporally heterogeneous characteristics, which challenge current precise diagnosis and treatment of the disease. Imaging-genomics provides a powerful tool for non-invasive global assessment of GB and its response to treatment. Imaging-genomics also has the potential to advance our understanding of underlying cancer biology, gene alterations, and corresponding biological processes. This article reviews recent progress in the utilization of imaging-genomics analysis in GB patients, focusing on its implications and prospects in individualized diagnosis and management.
Affiliation(s)
- Dongming Liu
- Department of Neurosurgery, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
- Jiu Chen
- Institute of Neuropsychiatry, The Affiliated Brain Hospital of Nanjing Medical University, Fourth Clinical College of Nanjing Medical University, Nanjing, China; Department of Neurosurgery, Institute of Brain Sciences, The Affiliated Nanjing Brain Hospital of Nanjing Medical University, Nanjing, China
- Xinhua Hu
- Department of Neurosurgery, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China; Department of Neurosurgery, Institute of Brain Sciences, The Affiliated Nanjing Brain Hospital of Nanjing Medical University, Nanjing, China
- Kun Yang
- Department of Neurosurgery, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
- Yong Liu
- Department of Neurosurgery, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
- Guanjie Hu
- Department of Neurosurgery, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
- Honglin Ge
- Department of Neurosurgery, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China
- Wenbin Zhang
- Department of Neurosurgery, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China; Department of Neurosurgery, Institute of Brain Sciences, The Affiliated Nanjing Brain Hospital of Nanjing Medical University, Nanjing, China
- Hongyi Liu
- Department of Neurosurgery, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing, China; Department of Neurosurgery, Institute of Brain Sciences, The Affiliated Nanjing Brain Hospital of Nanjing Medical University, Nanjing, China
30
Crouzet C, Jeong G, Chae RH, LoPresti KT, Dunn CE, Xie DF, Agu C, Fang C, Nunes ACF, Lau WL, Kim S, Cribbs DH, Fisher M, Choi B. Spectroscopic and deep learning-based approaches to identify and quantify cerebral microhemorrhages. Sci Rep 2021; 11:10725. [PMID: 34021170 PMCID: PMC8140127 DOI: 10.1038/s41598-021-88236-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2020] [Accepted: 03/25/2021] [Indexed: 02/04/2023] Open
Abstract
Cerebral microhemorrhages (CMHs) are associated with cerebrovascular disease, cognitive impairment, and normal aging. One method to study CMHs is to analyze histological sections (5-40 μm) stained with Prussian blue. Currently, users manually and subjectively identify and quantify Prussian blue-stained regions of interest, which is prone to inter-individual variability and can lead to significant delays in data analysis. To improve this labor-intensive process, we developed and compared three digital pathology approaches to identify and quantify CMHs from Prussian blue-stained brain sections: (1) ratiometric analysis of RGB pixel values, (2) phasor analysis of RGB images, and (3) deep learning using a mask region-based convolutional neural network. We applied these approaches to a preclinical mouse model of inflammation-induced CMHs. One hundred CMHs were imaged using a 20× objective and an RGB color camera. To determine the ground truth, four users independently annotated Prussian blue-labeled CMHs. The deep learning and ratiometric approaches performed better than the phasor analysis approach when compared with the ground truth. The deep learning approach had the highest precision of the three methods. The ratiometric approach was the most versatile and maintained accuracy, albeit with less precision. Our data suggest that implementing these methods to analyze CMH images can drastically increase processing speed while maintaining precision and accuracy.
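The ratiometric approach above compares color-channel intensities pixel by pixel; as a hedged sketch (the channel ratio and the 1.5 threshold are illustrative assumptions, not the paper's published values), a blue-dominant stain such as Prussian blue might be flagged like this:

```python
def ratiometric_mask(pixels, ratio_threshold=1.5):
    """Flag pixels whose blue/red intensity ratio exceeds a threshold,
    a simple ratiometric test for blue-dominant staining.

    pixels: iterable of (r, g, b) values; returns a list of booleans.
    The 1.5 threshold is an illustrative assumption, not a published value.
    """
    mask = []
    for r, g, b in pixels:
        # max(r, 1) guards against division by zero on pure-blue pixels
        mask.append(b / max(r, 1) > ratio_threshold)
    return mask
```

A stained-area fraction then follows directly from the mask, and per-lesion counts from connected-component labeling of it.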
Affiliation(s)
- Christian Crouzet
- Beckman Laser Institute and Medical Clinic, University of California-Irvine, Irvine, CA, USA; Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, USA
- Gwangjin Jeong
- Department of Biomedical Engineering, Beckman Laser Institute Korea, Dankook University, Cheonan, 31116, Republic of Korea
- Rachel H. Chae
- Massachusetts Institute of Technology, Cambridge, MA, USA
- Krystal T. LoPresti
- Beckman Laser Institute and Medical Clinic, University of California-Irvine, Irvine, CA, USA; Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, USA
- Cody E. Dunn
- Beckman Laser Institute and Medical Clinic, University of California-Irvine, Irvine, CA, USA; Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, USA
- Danny F. Xie
- Beckman Laser Institute and Medical Clinic, University of California-Irvine, Irvine, CA, USA; Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, USA
- Chiagoziem Agu
- Albany State University, Albany, GA, USA
- Chuo Fang
- Neurology and Pathology and Laboratory Medicine, University of California-Irvine, Irvine, CA, USA
- Ane C. F. Nunes
- Department of Medicine, Division of Nephrology, University of California-Irvine, Irvine, CA, USA
- Wei Ling Lau
- Department of Medicine, Division of Nephrology, University of California-Irvine, Irvine, CA, USA
- Sehwan Kim
- Department of Biomedical Engineering, Beckman Laser Institute Korea, Dankook University, Cheonan, 31116, Republic of Korea
- David H. Cribbs
- Institute for Memory Impairments and Neurological Disorders, University of California-Irvine, Irvine, CA, USA
- Mark Fisher
- Neurology and Pathology and Laboratory Medicine, University of California-Irvine, Irvine, CA, USA
- Bernard Choi
- Beckman Laser Institute and Medical Clinic, University of California-Irvine, Irvine, CA, USA; Department of Biomedical Engineering, University of California-Irvine, Irvine, CA, USA; Department of Surgery, University of California-Irvine, Irvine, CA, USA; Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California-Irvine, Irvine, CA, USA
31
Im S, Hyeon J, Rha E, Lee J, Choi HJ, Jung Y, Kim TJ. Classification of Diffuse Glioma Subtype from Clinical-Grade Pathological Images Using Deep Transfer Learning. SENSORS 2021; 21:s21103500. [PMID: 34067934 PMCID: PMC8156672 DOI: 10.3390/s21103500] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/09/2021] [Revised: 05/06/2021] [Accepted: 05/14/2021] [Indexed: 11/16/2022]
Abstract
Diffuse gliomas are the most common primary brain tumors and they vary considerably in their morphology, location, genetic alterations, and response to therapy. In 2016, the World Health Organization (WHO) provided new guidelines for an integrated diagnosis of diffuse gliomas that incorporates both morphologic and molecular features. In this study, we demonstrate how deep learning approaches can be used for automatic classification and grading of glioma subtypes using whole-slide images that were obtained from routine clinical practice. A deep transfer learning method using the ResNet50V2 model was trained to classify subtypes and grades of diffuse gliomas according to the WHO's new 2016 classification. The balanced accuracy of the diffuse glioma subtype classification model with majority voting was 0.8727. These results highlight an emerging role of deep learning in the future practice of pathologic diagnosis.
Affiliation(s)
- Sanghyuk Im
- Department of Neurosurgery, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Jonghwan Hyeon
- School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Eunyoung Rha
- Department of Plastic and Reconstructive Surgery, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Janghyeon Lee
- School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Ho-Jin Choi
- School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Yuchae Jung
- School of Computing, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea
- Tae-Jung Kim
- Department of Hospital Pathology, Yeouido St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- Correspondence: Tel.: +82-2-3779-2157
32
Darrigues E, Elberson BW, De Loose A, Lee MP, Green E, Benton AM, Sink LG, Scott H, Gokden M, Day JD, Rodriguez A. Brain Tumor Biobank Development for Precision Medicine: Role of the Neurosurgeon. Front Oncol 2021; 11:662260. [PMID: 33981610 PMCID: PMC8108694 DOI: 10.3389/fonc.2021.662260] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Accepted: 03/29/2021] [Indexed: 12/18/2022] Open
Abstract
Neuro-oncology biobanks are critical for the implementation of a precision medicine program. In this perspective, we review our first-year experience of a brain tumor biobank with integrated next generation sequencing. From our experience, we describe the critical role of the neurosurgeon in diagnosis, research, and precision medicine efforts. In the first year of implementation of the biobank, 117 patients (Female: 62; Male: 55) had 125 brain tumor surgeries. Seventy-five percent of patients had tumors biobanked, and 16% were of minority race/ethnicity. Tumors biobanked were as follows: diffuse gliomas (45%), brain metastases (29%), meningioma (21%), and other (5%). Among biobanked patients, 100% also had next generation sequencing. Eleven patients qualified for targeted therapy based on identification of actionable gene mutations. One patient with a hereditary cancer predisposition syndrome was also identified. An iterative quality improvement process was implemented to streamline the workflow between the operating room, pathology, and the research laboratory. Dedicated tumor bank personnel in the department of neurosurgery greatly improved standard operating procedures. Intraoperative selection and processing of tumor tissue by the neurosurgeon was integral to increasing success with cell culture assays. Currently, our institutional protocol integrates standard histopathological diagnosis, next generation sequencing, and functional assays on surgical specimens to develop precision medicine protocols for our patients. This perspective reviews the critical role of neurosurgeons in brain tumor biobank implementation and success as well as future directions for enhancing precision medicine efforts.
Affiliation(s)
- Emilie Darrigues
- Winthrop P. Rockefeller Cancer Institute, University of Arkansas for Medical Sciences, Little Rock, AR, United States; Department of Neurosurgery, University of Arkansas for Medical Sciences, Little Rock, AR, United States
- Benjamin W Elberson
- Department of Neurosurgery, University of Arkansas for Medical Sciences, Little Rock, AR, United States
- Annick De Loose
- Winthrop P. Rockefeller Cancer Institute, University of Arkansas for Medical Sciences, Little Rock, AR, United States; Department of Neurosurgery, University of Arkansas for Medical Sciences, Little Rock, AR, United States
- Madison P Lee
- Winthrop P. Rockefeller Cancer Institute, University of Arkansas for Medical Sciences, Little Rock, AR, United States; Department of Neurosurgery, University of Arkansas for Medical Sciences, Little Rock, AR, United States
- Ebonye Green
- Department of Neurosurgery, University of Arkansas for Medical Sciences, Little Rock, AR, United States
- Ashley M Benton
- Winthrop P. Rockefeller Cancer Institute, University of Arkansas for Medical Sciences, Little Rock, AR, United States
- Ladye G Sink
- Winthrop P. Rockefeller Cancer Institute, University of Arkansas for Medical Sciences, Little Rock, AR, United States
- Hayden Scott
- Winthrop P. Rockefeller Cancer Institute, University of Arkansas for Medical Sciences, Little Rock, AR, United States
- Murat Gokden
- Division of Neuropathology, Department of Pathology, University of Arkansas for Medical Sciences, Little Rock, AR, United States
- John D Day
- Department of Neurosurgery, University of Arkansas for Medical Sciences, Little Rock, AR, United States
- Analiz Rodriguez
- Winthrop P. Rockefeller Cancer Institute, University of Arkansas for Medical Sciences, Little Rock, AR, United States; Department of Neurosurgery, University of Arkansas for Medical Sciences, Little Rock, AR, United States
33
Paijens ST, Vledder A, de Bruyn M, Nijman HW. Tumor-infiltrating lymphocytes in the immunotherapy era. Cell Mol Immunol 2021; 18:842-859. [PMID: 33139907 PMCID: PMC8115290 DOI: 10.1038/s41423-020-00565-9] [Citation(s) in RCA: 390] [Impact Index Per Article: 130.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Accepted: 09/24/2020] [Indexed: 02/07/2023] Open
Abstract
The clinical success of cancer immune checkpoint blockade (ICB) has refocused attention on tumor-infiltrating lymphocytes (TILs) across cancer types. The outcome of immune checkpoint inhibitor therapy in cancer patients has been linked to the quality and magnitude of T cell, NK cell, and more recently, B cell responses within the tumor microenvironment. State-of-the-art single-cell analysis of TIL gene expression profiles and clonality has revealed a remarkable degree of cellular heterogeneity and distinct patterns of immune activation and exhaustion. Many of these states are conserved across tumor types, in line with the broad responses observed clinically. Despite this homology, not all cancer types with similar TIL landscapes respond similarly to immunotherapy, highlighting the complexity of the underlying tumor-immune interactions. This observation is further confounded by the strong prognostic benefit of TILs observed for tumor types that have so far responded poorly to immunotherapy. Thus, while a holistic view of lymphocyte infiltration and dysfunction on a single-cell level is emerging, the search for response and prognostic biomarkers is just beginning. Within this review, we discuss recent advances in the understanding of TIL biology, their prognostic benefit, and their predictive value for therapy.
Affiliation(s)
- Sterre T Paijens
- Department of Obstetrics and Gynecology, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Annegé Vledder
- Department of Obstetrics and Gynecology, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Marco de Bruyn
- Department of Obstetrics and Gynecology, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Hans W Nijman
- Department of Obstetrics and Gynecology, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
34
Machine learning and augmented human intelligence use in histomorphology for haematolymphoid disorders. Pathology 2021; 53:400-407. [PMID: 33642096 DOI: 10.1016/j.pathol.2020.12.004] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2020] [Accepted: 12/21/2020] [Indexed: 02/06/2023]
Abstract
Advances in digital pathology have opened up a number of opportunities, such as decision support using artificial intelligence (AI). The application of AI to digital pathology data shows promise as an aid for pathologists in the diagnosis of haematological disorders. AI-based applications have been developed for benign haematology, the diagnosis of leukaemia and lymphoma, and ancillary testing modalities including flow cytometry. In this review, we highlight the progress made to date in machine learning applications in haematopathology, summarise important studies in this field, and highlight key limitations. We further present our outlook on the future direction and trends for AI to support diagnostic decisions in haematopathology.
35
Convergence of Digital Pathology and Artificial Intelligence Tools in Anatomic Pathology Practice: Current Landscape and Future Directions. Adv Anat Pathol 2020; 27:221-226. [PMID: 32541593 DOI: 10.1097/pap.0000000000000271] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]