1
Abas Mohamed Y, Ee Khoo B, Shahrimie Mohd Asaari M, Ezane Aziz M, Rahiman Ghazali F. Decoding the black box: Explainable AI (XAI) for cancer diagnosis, prognosis, and treatment planning-A state-of-the art systematic review. Int J Med Inform 2025; 193:105689. [PMID: 39522406] [DOI: 10.1016/j.ijmedinf.2024.105689]
Abstract
OBJECTIVE Explainable Artificial Intelligence (XAI) is increasingly recognized as a crucial tool in cancer care, with significant potential to enhance diagnosis, prognosis, and treatment planning. However, the holistic integration of XAI across all stages of cancer care remains underexplored. This review addresses this gap by systematically evaluating the role of XAI in these critical areas, identifying key challenges and emerging trends. MATERIALS AND METHODS Following the PRISMA guidelines, a comprehensive literature search was conducted across Scopus and Web of Science, focusing on publications from January 2020 to May 2024. After rigorous screening and quality assessment, 69 studies were selected for in-depth analysis. RESULTS The review identified critical gaps in the application of XAI within cancer care, notably the exclusion of clinicians in 83% of studies, which raises concerns about real-world applicability and may lead to explanations that are technically sound but clinically irrelevant. Additionally, 87% of studies lacked rigorous evaluation of XAI explanations, compromising their reliability in clinical practice. The dominance of post-hoc visual methods like SHAP, LIME and Grad-CAM reflects a trend toward explanations that may be inherently flawed due to specific input perturbations and simplifying assumptions. The lack of formal evaluation metrics and standardization constrains broader XAI adoption in clinical settings, creating a disconnect between AI development and clinical integration. Moreover, translating XAI insights into actionable clinical decisions remains challenging due to the absence of clear guidelines for integrating these tools into clinical workflows. 
CONCLUSION This review highlights the need for greater clinician involvement, standardized XAI evaluation metrics, clinician-centric interfaces, context-aware XAI systems, and frameworks for integrating XAI into clinical workflows for informed clinical decision-making and improved outcomes in cancer care.
Affiliation(s)
- Yusuf Abas Mohamed
- School of Electrical & Electronic Engineering, Engineering Campus, Universiti Sains Malaysia (USM), Malaysia
- Bee Ee Khoo
- School of Electrical & Electronic Engineering, Engineering Campus, Universiti Sains Malaysia (USM), Malaysia
- Mohd Shahrimie Mohd Asaari
- School of Electrical & Electronic Engineering, Engineering Campus, Universiti Sains Malaysia (USM), Malaysia
- Mohd Ezane Aziz
- Department of Radiology, School of Medical Sciences, Health Campus, Universiti Sains Malaysia (USM), Kelantan, Malaysia
- Fattah Rahiman Ghazali
- Department of Radiology, School of Medical Sciences, Health Campus, Universiti Sains Malaysia (USM), Kelantan, Malaysia
2
Isosalo A, Inkinen SI, Prostredná L, Heino H, Ipatti PS, Reponen J, Nieminen MT. Imaging phenotype evaluation from digital breast tomosynthesis data: A preliminary study. Comput Biol Med 2024; 183:109285. [PMID: 39454527] [DOI: 10.1016/j.compbiomed.2024.109285]
Abstract
BACKGROUND Digital breast tomosynthesis (DBT) has been widely adopted as a supplemental imaging modality for diagnostic evaluation of breast cancer and confirmation studies. In this study, a deep learning-based method for characterizing breast tissue patterns in DBT data is presented. METHODS A set of 5388 2D image patches was produced from 230 right mediolateral oblique, 259 left mediolateral oblique, 18 right craniocaudal, and 15 left craniocaudal single-breast DBT studies, using slice-wise annotations of abnormalities and normal tissue. We implemented a patch classifier to predict samples according to two different scenarios and trained it using the patch dataset. First, tissue samples were classified as malignant, benign, or normal breast tissue. Second, tissue samples were classified as malignant mass, benign mass, benign architectural distortion, malignant architectural distortion, or normal breast tissue. We employed transfer learning and initialized the model base layers with pre-trained weights obtained from the Globally-Aware Multiple Instance Classifier. RESULTS High class-wise recall values of 0.8906, 0.8541 and 0.7345 and specificities of 0.9558, 0.9575 and 0.8830 were obtained for normal, benign, and malignant classification, respectively. The more fine-grained classification yielded class-wise recall values of 0.8708, 0.8299, 0.9444 and 0.5723 and specificities of 0.9406, 0.9833, 0.8943 and 0.9652 for benign mass, normal, malignant architectural distortion, and malignant mass, respectively. However, benign architectural distortion was confused with benign mass and malignant architectural distortion. CONCLUSIONS Combining the proposed phenotype classifier with the commonly used malignant-benign-normal classification enables a more detailed assessment of digital breast tomosynthesis images.
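The class-wise recall and specificity values reported above are derived from a multi-class confusion matrix. As a minimal illustration (not the authors' code; the counts below are made up), they can be computed as follows:

```python
# Class-wise recall and specificity from a multi-class confusion matrix.
# Rows = true class, columns = predicted class. The counts below are
# illustrative, not the paper's data.
def recall_specificity(cm):
    n = len(cm)
    total = sum(sum(row) for row in cm)
    stats = {}
    for k in range(n):
        tp = cm[k][k]
        fn = sum(cm[k]) - tp                       # true k, predicted other
        fp = sum(cm[i][k] for i in range(n)) - tp  # other class, predicted k
        tn = total - tp - fn - fp
        stats[k] = (tp / (tp + fn), tn / (tn + fp))
    return stats

cm = [[90, 8, 2],    # e.g. normal
      [10, 80, 10],  # benign
      [5, 15, 80]]   # malignant
for cls, (rec, spec) in recall_specificity(cm).items():
    print(cls, round(rec, 4), round(spec, 4))
```

The same per-class one-vs-rest decomposition underlies the figures quoted in the abstract.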
Affiliation(s)
- Antti Isosalo
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
- Satu I Inkinen
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland; HUS Diagnostic Center, Radiology, Helsinki University and Helsinki University Hospital, Helsinki, Finland
- Lucia Prostredná
- Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- Helinä Heino
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
- Pieta S Ipatti
- Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- Jarmo Reponen
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
- Miika T Nieminen
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland; Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
3
Zaccaria GM, Berloco F, Buongiorno D, Brunetti A, Altini N, Bevilacqua V. A time-dependent explainable radiomic analysis from the multi-omic cohort of CPTAC-Pancreatic Ductal Adenocarcinoma. Comput Methods Programs Biomed 2024; 257:108408. [PMID: 39342876] [DOI: 10.1016/j.cmpb.2024.108408]
Abstract
BACKGROUND AND OBJECTIVE In Pancreatic Ductal Adenocarcinoma (PDA), multi-omic models are emerging to answer unmet clinical needs and derive novel quantitative prognostic factors. We developed a pipeline that relies on survival machine-learning (SML) classifiers and explainability based on patients' follow-up (FU) to stratify prognosis from the publicly available multi-omic datasets of the CPTAC-PDA project. MATERIALS AND METHODS Analyzed datasets included tumor-annotated radiologic images, clinical data, and mutational data. Feature selection was based on univariate (UV) and multivariate (MV) survival analyses according to Overall Survival (OS) and recurrence (REC). In this study, we considered seven multi-omic datasets and compared four SML classifiers: Cox, survival random forest, generalized boosted, and support vector machines (SVM). For each classifier, we assessed the concordance (C) index on the validation set. The best classifiers on the validation set for both OS and REC underwent explainability analyses using SurvSHAP(t), which extends SHapley Additive exPlanations (SHAP). RESULTS According to OS, after UV and MV analyses we selected 18/37 and 10/37 multi-omic features, respectively. According to REC, based on UV and MV analyses we selected 10/35 and 5/35 determinants, respectively. Generally, SML classifiers including radiomics outperformed those modelled on clinical or mutational predictors. For OS, the Cox model encompassing radiomic, clinical, and mutational features reached a C index of 75 %, outperforming the other classifiers. For REC, the SVM model including only radiomics emerged as the best-performing, with a C index of 68 %. For OS, SurvSHAP(t) identified the first-order median Gray Level (GL) intensities, gender, tumor grade, the Joint Energy of the GL Co-occurrence Matrix (GLCM), and the GLCM Informational Measures of Correlation of type 1 as the most important features.
For REC, the first-order median GL intensities, the GL size zone matrix Small Area Low GL Emphasis, and the first-order variance of GL intensities emerged as the most discriminative. CONCLUSIONS In this work, radiomics showed the potential for improving patients' risk stratification in PDA. Furthermore, a deeper understanding of how radiomics can contribute to prognosis in PDA was achieved through time-dependent explainability of the top multi-omic predictors.
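The concordance (C) index used above to compare the SML classifiers measures how often predicted risks order patient pairs consistently with observed survival. A minimal pure-Python sketch of Harrell's C index, with illustrative data (not the study's cohort):

```python
# Harrell's concordance index (C-index): the fraction of comparable
# patient pairs whose predicted risks are ordered consistently with
# observed survival times. Censored patients form comparable pairs only
# when the earlier patient had an observed event.
def c_index(times, events, risks):
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i had an event before time j
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5   # ties count half
    return concordant / comparable

times  = [5, 10, 12, 20]        # follow-up (e.g. months)
events = [1, 1, 0, 1]           # 1 = event observed, 0 = censored
risks  = [0.9, 0.6, 0.4, 0.2]   # higher = predicted shorter survival
print(c_index(times, events, risks))  # perfectly concordant -> 1.0
```

A C index of 0.5 corresponds to random ordering; the 75 % and 68 % figures above sit between random and perfect concordance.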
Affiliation(s)
- Gian Maria Zaccaria
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari, 70126, Italy
- Francesco Berloco
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari, 70126, Italy
- Domenico Buongiorno
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari, 70126, Italy; Apulian Bioengineering srl, Via delle Violette, 14, Modugno, 70026, Italy
- Antonio Brunetti
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari, 70126, Italy; Apulian Bioengineering srl, Via delle Violette, 14, Modugno, 70026, Italy
- Nicola Altini
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari, 70126, Italy
- Vitoantonio Bevilacqua
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari, 70126, Italy; Apulian Bioengineering srl, Via delle Violette, 14, Modugno, 70026, Italy
4
Ghasemi A, Hashtarkhani S, Schwartz DL, Shaban-Nejad A. Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review. Cancer Innov 2024; 3:e136. [PMID: 39430216] [PMCID: PMC11488119] [DOI: 10.1002/cai2.136]
Abstract
With the advances in artificial intelligence (AI), data-driven algorithms are becoming increasingly popular in the medical domain. However, due to the nonlinear and complex behavior of many of these algorithms, their decision-making is not trustworthy for clinicians and is considered a black-box process. Hence, the scientific community has introduced explainable artificial intelligence (XAI) to remedy the problem. This systematic scoping review investigates the application of XAI in breast cancer detection and risk prediction. We conducted a comprehensive search on Scopus, IEEE Xplore, PubMed, and Google Scholar (first 50 citations) using a systematic search strategy. The search spanned from January 2017 to July 2023, focusing on peer-reviewed studies implementing XAI methods on breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. The results revealed that SHapley Additive exPlanations (SHAP) is the most widely used model-agnostic XAI technique in breast cancer research, applied to explaining model predictions, diagnosis and classification of biomarkers, and prognosis and survival analysis. SHAP was primarily used to explain tree-based ensemble machine learning models. The most common reason is that SHAP is model-agnostic, which makes it both popular and useful for explaining any model's predictions; it is also relatively easy to implement and well suited to high-performing models such as tree-based ensembles. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI-enabled health systems and medical devices and, ultimately, the quality of care and outcomes.
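SHAP's feature attributions are grounded in Shapley values from cooperative game theory. As an illustration of the underlying idea (not the SHAP library itself; the toy model and inputs are made up), exact Shapley values can be computed by enumerating feature coalitions:

```python
# Exact Shapley values by enumerating feature coalitions -- the idea
# underlying SHAP. Feasible only for a handful of features; real SHAP
# implementations approximate or exploit model structure (e.g. trees).
from itertools import combinations
from math import factorial

def shapley(predict, x, baseline):
    n = len(x)
    def v(S):
        # model output with features in S set to x, the rest to baseline
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return predict(z)
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # classic Shapley coalition weight |S|!(n-|S|-1)!/n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (v(set(S) | {i}) - v(set(S)))
        phi.append(total)
    return phi

predict = lambda z: 2 * z[0] + 3 * z[1]  # toy linear "model"
print(shapley(predict, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # [2.0, 3.0]
```

For this linear toy model the attributions recover the coefficients exactly, which is why SHAP's additive explanations are intuitive for clinicians.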
Affiliation(s)
- Amirehsan Ghasemi
- Department of Pediatrics, Center for Biomedical Informatics, College of Medicine, University of Tennessee Health Science Center, Memphis, Tennessee, USA
- The Bredesen Center for Interdisciplinary Research and Graduate Education, University of Tennessee, Knoxville, Tennessee, USA
- Soheil Hashtarkhani
- Department of Pediatrics, Center for Biomedical Informatics, College of Medicine, University of Tennessee Health Science Center, Memphis, Tennessee, USA
- David L. Schwartz
- Department of Radiation Oncology, College of Medicine, University of Tennessee Health Science Center, Memphis, Tennessee, USA
- Arash Shaban-Nejad
- Department of Pediatrics, Center for Biomedical Informatics, College of Medicine, University of Tennessee Health Science Center, Memphis, Tennessee, USA
- The Bredesen Center for Interdisciplinary Research and Graduate Education, University of Tennessee, Knoxville, Tennessee, USA
5
Zaccaria GM, Altini N, Mezzolla G, Vegliante MC, Stranieri M, Pappagallo SA, Ciavarella S, Guarini A, Bevilacqua V. SurvIAE: Survival prediction with Interpretable Autoencoders from Diffuse Large B-Cells Lymphoma gene expression data. Comput Methods Programs Biomed 2024; 244:107966. [PMID: 38091844] [DOI: 10.1016/j.cmpb.2023.107966]
Abstract
BACKGROUND In Diffuse Large B-Cell Lymphoma (DLBCL), several methodologies are emerging to derive novel biomarkers to be incorporated into risk assessment. We developed a pipeline that relies on autoencoders (AE) and Explainable Artificial Intelligence (XAI) to stratify prognosis and derive a gene-based signature. METHODS An AE was exploited to learn an unsupervised representation of gene expression (GE) from three publicly available datasets, each with its own technology. A multi-layer perceptron (MLP) was used to classify prognosis from the latent representation. GE data were preprocessed with normalization, scaling, and standardization. Four AE architectures (Large, Medium, Small and Extra Small) were compared to find the most suitable for GE data. The joint AE-MLP classified patients on six different outcomes: overall survival at 12, 36, and 60 months and progression-free survival (PFS) at 12, 36, and 60 months. XAI techniques were used to derive a gene-based signature aimed at refining the Revised International Prognostic Index (R-IPI) risk, which was validated in a fourth independent publicly available dataset. We named our tool SurvIAE: Survival prediction with Interpretable AE. RESULTS From the latent space of the AEs, we observed that scaled and standardized data reduced the batch effect. SurvIAE models outperformed R-IPI, with a Matthews Correlation Coefficient of up to 0.42 vs. 0.18 on the validation set (PFS36) and 0.30 vs. 0.19 on the test set (PFS60). We selected SurvIAE-Small-PFS36 as the best model and, from its gene signature, stratified patients into three risk groups: R-IPI Poor patients with high levels of GAB1, R-IPI Poor patients with low levels of GAB1 or R-IPI Good/Very Good patients with low levels of GPR132, and R-IPI Good/Very Good patients with high levels of GPR132. CONCLUSIONS SurvIAE showed the potential to derive a gene signature with translational purpose in DLBCL.
The pipeline was made publicly available and can be reused for other pathologies.
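The Matthews Correlation Coefficient (MCC) used above to compare SurvIAE against R-IPI balances all four cells of the binary confusion matrix, which makes it robust on imbalanced outcomes. A minimal sketch with illustrative counts (not the paper's data):

```python
# Matthews Correlation Coefficient for a binary classifier.
# Ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect).
from math import sqrt

def mcc(tp, tn, fp, fn):
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# illustrative confusion-matrix counts
print(round(mcc(tp=40, tn=30, fp=10, fn=20), 2))  # 0.41
```

Unlike accuracy, MCC stays near 0 when a model merely predicts the majority class, which is why it is a common choice for prognostic classifiers with skewed outcome prevalence.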
Affiliation(s)
- Gian Maria Zaccaria
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari 70126, Italy
- Nicola Altini
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari 70126, Italy
- Giuseppe Mezzolla
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari 70126, Italy
- Maria Carmela Vegliante
- Hematology and Cell Therapy Unit, IRCCS Istituto Tumori "Giovanni Paolo II", Via O. Flacco, 65, Bari 70124, Italy
- Marianna Stranieri
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari 70126, Italy
- Susanna Anita Pappagallo
- Hematology and Cell Therapy Unit, IRCCS Istituto Tumori "Giovanni Paolo II", Via O. Flacco, 65, Bari 70124, Italy
- Sabino Ciavarella
- Hematology and Cell Therapy Unit, IRCCS Istituto Tumori "Giovanni Paolo II", Via O. Flacco, 65, Bari 70124, Italy
- Attilio Guarini
- Hematology and Cell Therapy Unit, IRCCS Istituto Tumori "Giovanni Paolo II", Via O. Flacco, 65, Bari 70124, Italy
- Vitoantonio Bevilacqua
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona, 4, Bari 70126, Italy; Apulian Bioengineering srl, Via delle Violette, 14, Modugno 70026, Italy
6
Champendal M, Müller H, Prior JO, Dos Reis CS. A scoping review of interpretability and explainability concerning artificial intelligence methods in medical imaging. Eur J Radiol 2023; 169:111159. [PMID: 37976760] [DOI: 10.1016/j.ejrad.2023.111159]
Abstract
PURPOSE To review eXplainable Artificial Intelligence (XAI) methods available for medical imaging (MI). METHOD A scoping review was conducted following the Joanna Briggs Institute's methodology. The search was performed on PubMed, Embase, CINAHL, Web of Science, BioRxiv, MedRxiv, and Google Scholar. Studies published in French and English after 2017 were included. Keyword combinations and descriptors related to explainability and MI modalities were employed. Two independent reviewers screened titles, abstracts, and full texts, resolving differences through discussion. RESULTS 228 studies met the criteria. XAI publications are increasing, targeting MRI (n = 73), radiography (n = 47), and CT (n = 46). Lung (n = 82) and brain (n = 74) pathologies, Covid-19 (n = 48), Alzheimer's disease (n = 25), and brain tumors (n = 15) are the main pathologies explained. Explanations are presented visually (n = 186), numerically (n = 67), rule-based (n = 11), textually (n = 11), and example-based (n = 6). Commonly explained tasks include classification (n = 89), prediction (n = 47), diagnosis (n = 39), detection (n = 29), segmentation (n = 13), and image quality improvement (n = 6). The explanations provided were mostly local (78.1 %), 5.7 % were global, and 16.2 % combined both local and global approaches. Post-hoc approaches were predominantly employed. Terminology varied, with explainable (n = 207), interpretable (n = 187), understandable (n = 112), transparent (n = 61), reliable (n = 31), and intelligible (n = 3) sometimes used interchangeably. CONCLUSION The number of XAI publications in medical imaging is increasing, primarily focusing on applying XAI techniques to MRI, CT, and radiography for classifying and predicting lung and brain pathologies. Visual and numerical output formats are predominantly used. Terminology standardisation remains a challenge, as terms like "explainable" and "interpretable" are sometimes used interchangeably.
Future XAI development should consider user needs and perspectives.
Affiliation(s)
- Mélanie Champendal
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, CH, Switzerland; Faculty of Biology and Medicine, University of Lausanne, Lausanne, CH, Switzerland.
- Henning Müller
- Informatics Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Sierre, CH, Switzerland; Medical Faculty, University of Geneva, CH, Switzerland
- John O Prior
- Faculty of Biology and Medicine, University of Lausanne, Lausanne, CH, Switzerland; Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV), Lausanne, CH, Switzerland
- Cláudia Sá Dos Reis
- School of Health Sciences HESAV, HES-SO, University of Applied Sciences Western Switzerland, Lausanne, CH, Switzerland
7
Ali S, Akhlaq F, Imran AS, Kastrati Z, Daudpota SM, Moosa M. The enlightening role of explainable artificial intelligence in medical & healthcare domains: A systematic literature review. Comput Biol Med 2023; 166:107555. [PMID: 37806061] [DOI: 10.1016/j.compbiomed.2023.107555]
Abstract
In domains such as medical and healthcare, the interpretability and explainability of machine learning and artificial intelligence systems are crucial for building trust in their results. Errors caused by these systems, such as incorrect diagnoses or treatments, can have severe and even life-threatening consequences for patients. To address this issue, Explainable Artificial Intelligence (XAI) has emerged as a popular area of research, focused on understanding the black-box nature of complex and hard-to-interpret machine learning models. While humans can increase the accuracy of these models through technical expertise, understanding how these models actually function during training can be difficult or even impossible. XAI algorithms such as Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) can provide explanations for these models, improving trust in their predictions by providing feature importance and increasing confidence in the systems. Many articles have been published that propose solutions to medical problems by using machine learning models alongside XAI algorithms to provide interpretability and explainability. In our study, we identified 454 articles published from 2018-2022 and analyzed 93 of them to explore the use of these techniques in the medical domain.
Affiliation(s)
- Subhan Ali
- Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway.
- Filza Akhlaq
- Department of Computer Science, Sukkur IBA University, Sukkur, 65200, Sindh, Pakistan
- Ali Shariq Imran
- Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway
- Zenun Kastrati
- Department of Informatics, Linnaeus University, Växjö, 351 95, Sweden
- Muhammad Moosa
- Department of Computer Science, Norwegian University of Science & Technology (NTNU), Gjøvik, 2815, Norway
8
Altini N, Puro E, Taccogna MG, Marino F, De Summa S, Saponaro C, Mattioli E, Zito FA, Bevilacqua V. Tumor Cellularity Assessment of Breast Histopathological Slides via Instance Segmentation and Pathomic Features Explainability. Bioengineering (Basel) 2023; 10:396. [PMID: 37106583] [PMCID: PMC10135772] [DOI: 10.3390/bioengineering10040396]
Abstract
The segmentation and classification of cell nuclei are pivotal steps in pipelines for the analysis of bioimages. Deep learning (DL) approaches are leading the digital pathology field in the context of nuclei detection and classification. Nevertheless, the features that DL models exploit to make their predictions are difficult to interpret, hindering the deployment of such methods in clinical practice. On the other hand, pathomic features can be linked to an easier description of the characteristics exploited by the classifiers for making the final predictions. Thus, in this work, we developed an explainable computer-aided diagnosis (CAD) system that can be used to support pathologists in the evaluation of tumor cellularity in breast histopathological slides. In particular, we compared an end-to-end DL approach that exploits the Mask R-CNN instance segmentation architecture with a two-step pipeline, where the features are extracted while considering the morphological and textural characteristics of the cell nuclei. Classifiers based on support vector machines and artificial neural networks were trained on top of these features in order to discriminate between tumor and non-tumor nuclei. Afterwards, the SHAP (SHapley Additive exPlanations) explainable artificial intelligence technique was employed to perform a feature importance analysis, which led to an understanding of the features processed by the machine learning models in making their decisions. An expert pathologist validated the employed feature set, corroborating the clinical usability of the model. Even though the models resulting from the two-stage pipeline are slightly less accurate than those of the end-to-end approach, the interpretability of their features is clearer and may help build trust among pathologists for adopting artificial intelligence-based CAD systems in their clinical workflow.
To further show the validity of the proposed approach, it has been tested on an external validation dataset, which was collected from IRCCS Istituto Tumori "Giovanni Paolo II" and made publicly available to ease research concerning the quantification of tumor cellularity.
Affiliation(s)
- Nicola Altini
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona n. 4, 70126 Bari, Italy
- Emilia Puro
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona n. 4, 70126 Bari, Italy
- Maria Giovanna Taccogna
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona n. 4, 70126 Bari, Italy
- Francescomaria Marino
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona n. 4, 70126 Bari, Italy
- Simona De Summa
- Molecular Diagnostics and Pharmacogenetics Unit, IRCCS Istituto Tumori “Giovanni Paolo II”, Via O. Flacco n. 65, 70124 Bari, Italy
- Concetta Saponaro
- Laboratory of Preclinical and Translational Research, Centro di Riferimento Oncologico della Basilicata (IRCCS-CROB), Via Padre Pio n. 1, 85028 Rionero in Vulture, Italy
- Eliseo Mattioli
- Pathology Department, IRCCS Istituto Tumori “Giovanni Paolo II”, Via O. Flacco n. 65, 70124 Bari, Italy
- Francesco Alfredo Zito
- Pathology Department, IRCCS Istituto Tumori “Giovanni Paolo II”, Via O. Flacco n. 65, 70124 Bari, Italy
- Vitoantonio Bevilacqua
- Department of Electrical and Information Engineering (DEI), Polytechnic University of Bari, Via Edoardo Orabona n. 4, 70126 Bari, Italy
- Apulian Bioengineering s.r.l., Via delle Violette n. 14, 70026 Modugno, Italy
9
Sheu RK, Pardeshi MS. A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System. Sensors (Basel) 2022; 22:8068. [PMID: 36298417] [PMCID: PMC9609212] [DOI: 10.3390/s22208068]
Abstract
The emerging field of eXplainable AI (XAI) is considered to be of utmost importance in the medical domain. Incorporating explanations that respect legal and ethical AI requirements is necessary to understand detailed decisions, results, and the current status of a patient's condition. We present a detailed survey of medical XAI covering model enhancements, evaluation methods, an overview of case studies with open-box architectures, medical open datasets, and future improvements. Differences between AI and XAI methods are outlined, with recent XAI methods grouped as (i) local and global methods for preprocessing, (ii) knowledge-base and distillation algorithms, and (iii) interpretable machine learning. XAI characteristics and future healthcare explainability are covered in detail, and the prerequisites section provides insights for brainstorming sessions before beginning a medical XAI project. A practical case study traces recent XAI progress leading to advanced developments in the medical field. Ultimately, this survey proposes ideas surrounding a user-in-the-loop approach, with an emphasis on human-machine collaboration, to better produce explainable solutions. The XAI feedback system for human rating-based evaluation offers a constructive method to produce human-enforced explanation feedback. Limitations of XAI ratings, scores, and grading have long been present; therefore, a novel XAI recommendation system and XAI scoring system are designed and proposed in this work. Additionally, this paper emphasizes the importance of implementing explainable solutions in the high-impact medical field.
Affiliation(s)
- Ruey-Kai Sheu
- Department of Computer Science, Tunghai University, No. 1727, Section 4, Taiwan Blvd, Xitun District, Taichung 407224, Taiwan
- Mayuresh Sunil Pardeshi
- AI Center, Tunghai University, No. 1727, Section 4, Taiwan Blvd, Xitun District, Taichung 407224, Taiwan
10
Time-Series Clustering of Single-Cell Trajectories in Collective Cell Migration. Cancers (Basel) 2022; 14:4587. [PMID: 36230509] [PMCID: PMC9559181] [DOI: 10.3390/cancers14194587]
Abstract
Simple Summary: In this study, we normalized trajectories containing both mesenchymal and epithelial cells to remove the effect of cell location on clustering, and performed dimensionality reduction on the time-series data before clustering. When the clustering results were superimposed on the trajectories prior to normalization, the results still showed similarities in location, indicating that this method can find cells with similar migration patterns. These data highlight the reliability of this method in identifying consistent migration patterns in collective cell migration.
Abstract: Collective invasion drives multicellular cancer cells to spread to surrounding normal tissues. To fully comprehend metastasis, methodologies for analyzing individual cell migration in tissue must be well developed. Extracting and classifying cells with similar migratory characteristics in a colony would facilitate an understanding of complex cell migration patterns. Here, we used electrospun fibers as the extracellular matrix for in vitro modeling of collective cell migration, clustered mesenchymal and epithelial cells based on their trajectories, and analyzed collective migration patterns based on trajectory similarity. We normalized the trajectories to eliminate the effect of cell location on clustering and used uniform manifold approximation and projection (UMAP) to perform dimensionality reduction on the time-series data before clustering. When the clustering results were superimposed on the trajectories before normalization, the results still exhibited positional similarity, thereby demonstrating that this method can identify cells with similar migration patterns. The same cluster contained both mesenchymal and epithelial cells, and this result was related to cell location and cell division. These data highlight the reliability of this method in identifying consistent migration patterns during collective cell migration.
This provides new insights into the epithelial–mesenchymal interactions that affect migration patterns.
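The normalize-reduce-cluster pipeline described in this abstract can be sketched as follows. This is a minimal hypothetical illustration, not the authors' code: the paper uses UMAP for dimensionality reduction, but plain PCA (via SVD) stands in here to keep the sketch dependency-light, and the function name `cluster_trajectories`, its parameters, and the k-means step are illustrative choices.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def cluster_trajectories(trajectories, n_clusters=3, n_components=2, seed=0):
    """Cluster single-cell trajectories by migration pattern.

    trajectories: array of shape (n_cells, n_timepoints, 2) holding (x, y)
    positions over time for each cell.
    """
    traj = np.asarray(trajectories, dtype=float)
    # Normalize: translate each trajectory to start at the origin, so that
    # absolute cell location does not drive the clustering.
    normalized = traj - traj[:, :1, :]
    # Flatten each (n_timepoints, 2) trajectory into one feature vector.
    features = normalized.reshape(len(traj), -1)
    # Dimensionality reduction before clustering (the paper uses UMAP; plain
    # PCA via SVD stands in here as a generic reduction step).
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    embedded = centered @ vt[:n_components].T
    # k-means on the embedded time-series representations.
    _, labels = kmeans2(embedded, n_clusters, seed=seed, minit="++")
    return labels, embedded
```

Because the start point is subtracted before clustering, two cells that migrate with the same speed and direction fall into the same cluster regardless of where in the colony they started, which is exactly the location-independence the authors verify by superimposing the cluster labels on the un-normalized trajectories.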
|
11
|
NDG-CAM: Nuclei Detection in Histopathology Images with Semantic Segmentation Networks and Grad-CAM. Bioengineering (Basel) 2022; 9:bioengineering9090475. [PMID: 36135021 PMCID: PMC9495364 DOI: 10.3390/bioengineering9090475] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Revised: 09/07/2022] [Accepted: 09/13/2022] [Indexed: 11/17/2022] Open
Abstract
Nuclei identification is a fundamental task in many areas of biomedical image analysis related to computational pathology applications. Deep learning is now the primary approach to nuclei segmentation, but its accuracy is closely tied to the amount of histological ground-truth data available for training. In addition, most hematoxylin and eosin (H&E)-stained microscopy images of nuclei contain complex and irregular visual characteristics. Moreover, conventional semantic segmentation architectures based on convolutional neural networks (CNNs) are unable to separate overlapping and clustered nuclei. To overcome these problems, we present a method based on gradient-weighted class activation mapping (Grad-CAM) saliency maps for image segmentation. The proposed solution comprises two steps. The first is semantic segmentation with a CNN; the detection step then computes local maxima of the Grad-CAM analysis evaluated on the nucleus class, yielding the positions of the nuclei centroids. This approach, which we denote NDG-CAM, performs in line with state-of-the-art methods, especially in isolating distinct nuclei instances, and can be generalized to different organs and tissues. Experimental results demonstrated a precision of 0.833, a recall of 0.815, and a Dice coefficient of 0.824 on the publicly available validation set. When combined with instance segmentation architectures such as Mask R-CNN, the method surpasses state-of-the-art approaches, with a precision of 0.838, a recall of 0.934, and a Dice coefficient of 0.884. Furthermore, performance on the external, locally collected validation set, with a Dice coefficient of 0.914 for the combined model, shows the generalization capability of the implemented pipeline, which can detect nuclei belonging not only to tumor or normal epithelium but also to other cytotypes.
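The detection step described in this abstract, locating nuclei centroids as local maxima of a Grad-CAM-style saliency map, can be sketched as follows. This is a hypothetical illustration, not the NDG-CAM implementation: it assumes the saliency map is already available as a 2D array in [0, 1], and the name `centroids_from_saliency` and its threshold/neighborhood parameters are assumptions for the sketch.

```python
import numpy as np
from scipy.ndimage import maximum_filter, label, center_of_mass

def centroids_from_saliency(saliency, threshold=0.5, size=3):
    """Detect candidate nuclei centroids as local maxima of a saliency map.

    saliency: 2D array of class-activation values in [0, 1];
    threshold: discard weak responses before peak picking;
    size: side of the square neighborhood used for the local-maximum test.
    """
    saliency = np.asarray(saliency, dtype=float)
    # A pixel is a local maximum if it equals the maximum of its neighborhood
    # and its response exceeds the threshold.
    local_max = (saliency == maximum_filter(saliency, size=size)) & (saliency > threshold)
    # Group adjacent maximum pixels and take the centroid of each group.
    labeled, n_peaks = label(local_max)
    return [center_of_mass(local_max, labeled, i) for i in range(1, n_peaks + 1)]
```

Returned coordinates are in (row, column) order; in a full pipeline these peak positions would be intersected with the CNN's semantic segmentation mask so that only maxima inside predicted nucleus regions are kept.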
|
12
|
A Fusion Biopsy Framework for Prostate Cancer Based on Deformable Superellipses and nnU-Net. Bioengineering (Basel) 2022; 9:bioengineering9080343. [PMID: 35892756 PMCID: PMC9394419 DOI: 10.3390/bioengineering9080343] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Revised: 07/13/2022] [Accepted: 07/21/2022] [Indexed: 11/24/2022] Open
Abstract
In prostate cancer, fusion biopsy, which couples magnetic resonance imaging (MRI) with transrectal ultrasound (TRUS), forms the basis for targeted biopsy by allowing information from both imaging modalities to be compared at the same time. Compared with the standard clinical procedure, it provides a less invasive option for patients and increases the likelihood of sampling cancerous tissue regions for subsequent pathology analyses. As a prerequisite to image fusion, segmentation must be performed in both the MRI and TRUS domains. Automatic contour delineation of the prostate gland from TRUS images is challenging due to several factors, including unclear boundaries, speckle noise, and the variety of prostate anatomical shapes. Automatic methodologies, such as those based on deep learning, require large quantities of training data to achieve satisfactory results. In this paper, the authors propose a novel optimization formulation to find the best-fitting superellipse, a deformable model that can accurately represent the prostate shape. The advantage of the proposed approach is that it does not require extensive annotations and can be used independently of the specific transducer employed during prostate biopsies. Moreover, to show the clinical applicability of the method, this study also presents a module for the automatic segmentation of the prostate gland from MRI, exploiting the nnU-Net framework. Lastly, segmented contours from both imaging domains are fused with a customized registration algorithm to create a tool that helps the physician perform a targeted prostate biopsy through a graphical user interface.
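A superellipse is the curve |x/a|^n + |y/b|^n = 1, which interpolates between a diamond, an ellipse, and a rounded rectangle as the exponent n varies. The sketch below shows a generic least-squares fit of a superellipse to contour points by minimizing the implicit-equation residual; it is a hypothetical illustration under that simple formulation, not the authors' optimization method (which handles deformable variants and TRUS-specific constraints), and the names `superellipse` and `fit_superellipse` are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def superellipse(theta, a, b, n):
    """Boundary point of |x/a|^n + |y/b|^n = 1 at parameter angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    # Standard parametrization: x = a * sign(cos t) * |cos t|^(2/n), etc.
    x = a * np.sign(c) * np.abs(c) ** (2.0 / n)
    y = b * np.sign(s) * np.abs(s) ** (2.0 / n)
    return x, y

def fit_superellipse(points, init=(1.0, 1.0, 2.0)):
    """Fit semi-axes (a, b) and exponent n to contour points by least squares."""
    pts = np.asarray(points, dtype=float)

    def cost(params):
        a, b, n = params
        # Boundary points satisfy |x/a|^n + |y/b|^n = 1; penalize deviation.
        f = np.abs(pts[:, 0] / a) ** n + np.abs(pts[:, 1] / b) ** n
        return np.sum((f - 1.0) ** 2)

    res = minimize(cost, init, method="Nelder-Mead")
    return res.x
```

In practice such a low-parameter model needs far less annotated data than a deep segmentation network, which is the annotation-efficiency argument the abstract makes for the TRUS side of the pipeline.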
|