1
Parvaiz A, Nasir ES, Fraz MM. From Pixels to Prognosis: A Survey on AI-Driven Cancer Patient Survival Prediction Using Digital Histology Images. Journal of Imaging Informatics in Medicine 2024; 37:1728-1751. PMID: 38429563; PMCID: PMC11300721; DOI: 10.1007/s10278-024-01049-2.
Abstract
Survival analysis is an integral part of medical statistics that is extensively utilized to establish prognostic indices for mortality or disease recurrence, assess treatment efficacy, and tailor effective treatment plans. The identification of prognostic biomarkers capable of predicting patient survival is a primary objective in the field of cancer research. With the recent integration of digital histology images into routine clinical practice, a plethora of Artificial Intelligence (AI)-based methods for digital pathology has emerged in the scholarly literature, facilitating patient survival prediction. These methods have demonstrated remarkable proficiency in analyzing and interpreting whole slide images, yielding results comparable to those of expert pathologists. The complexity of AI-driven techniques is magnified by the distinctive characteristics of digital histology images, including their gigapixel size and diverse tissue appearances. Consequently, advanced patch-based methods are employed to effectively extract features that correlate with patient survival. These computational methods significantly enhance survival prediction accuracy and augment prognostic capabilities in cancer patients. The review discusses the methodologies employed in the literature, their performance metrics, ongoing challenges, and potential solutions for future advancements. It also explains survival analysis and the feature extraction methods used for cancer patient data, and compiles essential acronyms related to cancer precision medicine. Notably, this is the first review paper in the field. The target audience for this interdisciplinary review comprises AI practitioners, medical statisticians, and progressive oncologists who are enthusiastic about translating AI-driven solutions into clinical practice. We expect this comprehensive review to guide future research directions in cancer research.
Affiliation(s)
- Arshi Parvaiz
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
- Esha Sadia Nasir
- National University of Sciences and Technology (NUST), Islamabad, Pakistan
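As a rough, self-contained illustration of the survival-prediction pipeline surveyed in the entry above (not code from the review), the sketch below fits a Cox proportional-hazards model on slide-level features and reports the concordance index, the usual evaluation metric for such models. The feature columns are random stand-ins for aggregated patch embeddings, all names are invented, and the `lifelines`, `numpy`, and `pandas` packages are assumed to be available.

```python
# Minimal sketch: Cox proportional-hazards survival model on slide-level features.
# The features are random stand-ins for aggregated patch embeddings (hypothetical data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "feat_1": rng.normal(size=n),          # e.g. mean patch-embedding component
    "feat_2": rng.normal(size=n),          # e.g. tumour-stroma ratio estimate
    "time": rng.exponential(36, size=n),   # follow-up time in months
    "event": rng.integers(0, 2, size=n),   # 1 = death/recurrence observed, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")

# Concordance index: probability that predicted risk orders patient pairs correctly.
c_index = concordance_index(df["time"], -cph.predict_partial_hazard(df), df["event"])
print(f"c-index: {c_index:.3f}")
```

In the deep-learning methods covered by the survey, the Cox layer is typically attached to the end of the network rather than fitted separately, but the concordance-index evaluation is the same.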
2
Sculthorpe D, Denton A, Rusnita D, Fadhil W, Ilyas M, Mukherjee A. Advantages of automated immunostain analyses for complex membranous immunostains: An exemplar investigating loss of E-cadherin expression in colorectal cancer. Pathol Res Pract 2024; 260:155470. PMID: 39032383; DOI: 10.1016/j.prp.2024.155470.
Abstract
As pathology moves towards digitisation, biomarker profiling through automated image analysis provides potentially objective and time-efficient means of assessment. This study set out to determine how a complex membranous immunostain, E-cadherin, assessed using an automated digital platform fares in comparison to manual evaluation in terms of clinical correlations and prognostication. Tissue microarrays containing 1000 colorectal cancer samples, stained with clinical E-cadherin antibodies, were assessed through both manual scoring and automated image analysis. Both manual and automated scores were correlated to clinicopathological and survival data. E-cadherin data generated through digital image analysis were superior to manual evaluation when investigating clinicopathological correlations in colorectal cancer. Loss of membranous E-cadherin, assessed on automated platforms, correlated with: right-sided tumours (p < 0.001), higher T-stage (p < 0.001), higher grade (p < 0.001), N2 nodal stage (p < 0.001), intramural lymphovascular invasion (p = 0.006), perineural invasion (p = 0.028), infiltrative tumour edge (p = 0.001), high tumour budding score (p = 0.038), distant metastasis (p = 0.035), and poorer 5-year survival status (p = 0.042). Manual assessment correlated only with higher-grade tumours, though other correlations became apparent only when assessed for morphological expression pattern (circumferential, basolateral, parallel) irrespective of intensity. Digital assessment of E-cadherin is effective for prognostication of colorectal cancer and may potentially offer benefits of improved objectivity, accuracy, and economy of time. Incorporating tools to assess patterns of staining may further improve such digital assessment in the future.
Affiliation(s)
- Declan Sculthorpe
- Translational Medical Sciences, Biodiscovery Institute, School of Medicine, University of Nottingham, Nottingham, United Kingdom.
- Amy Denton
- Translational Medical Sciences, Biodiscovery Institute, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Dewi Rusnita
- Translational Medical Sciences, Biodiscovery Institute, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Wakkas Fadhil
- Translational Medical Sciences, Biodiscovery Institute, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Mohammad Ilyas
- Translational Medical Sciences, Biodiscovery Institute, School of Medicine, University of Nottingham, Nottingham, United Kingdom; Department of Histopathology, Nottingham University Hospitals NHS Trust, Queen's Medical Centre, Nottingham, United Kingdom
- Abhik Mukherjee
- Translational Medical Sciences, Biodiscovery Institute, School of Medicine, University of Nottingham, Nottingham, United Kingdom; Department of Histopathology, Nottingham University Hospitals NHS Trust, Queen's Medical Centre, Nottingham, United Kingdom
3
McGenity C, Clarke EL, Jennings C, Matthews G, Cartlidge C, Freduah-Agyemang H, Stocken DD, Treanor D. Artificial intelligence in digital pathology: a systematic review and meta-analysis of diagnostic test accuracy. NPJ Digit Med 2024; 7:114. PMID: 38704465; PMCID: PMC11069583; DOI: 10.1038/s41746-024-01106-8.
Abstract
Ensuring diagnostic performance of artificial intelligence (AI) before introduction into clinical practice is essential. Growing numbers of studies using AI for digital pathology have been reported over recent years. The aim of this work is to examine the diagnostic accuracy of AI in digital pathology images for any disease. This systematic review and meta-analysis included diagnostic accuracy studies using any type of AI applied to whole slide images (WSIs) for any disease. The reference standard was diagnosis by histopathological assessment and/or immunohistochemistry. Searches were conducted in PubMed, EMBASE and CENTRAL in June 2022. Risk of bias and concerns of applicability were assessed using the QUADAS-2 tool. Data extraction was conducted by two investigators, and meta-analysis was performed using a bivariate random effects model, with additional subgroup analyses also performed. Of 2976 identified studies, 100 were included in the review and 48 in the meta-analysis. Studies were from a range of countries and included over 152,000 WSIs, representing many diseases. These studies reported a mean sensitivity of 96.3% (CI 94.1-97.7) and mean specificity of 93.3% (CI 90.5-95.4). There was heterogeneity in study design, and 99% of studies identified for inclusion had at least one area at high or unclear risk of bias or applicability concerns. Details on selection of cases, division of model development and validation data, and raw performance data were frequently ambiguous or missing. AI is reported as having high diagnostic accuracy in the reported areas but requires more rigorous evaluation of its performance.
Affiliation(s)
- Clare McGenity
- University of Leeds, Leeds, UK.
- Leeds Teaching Hospitals NHS Trust, Leeds, UK.
- Emily L Clarke
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Charlotte Jennings
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Darren Treanor
- University of Leeds, Leeds, UK
- Leeds Teaching Hospitals NHS Trust, Leeds, UK
- Department of Clinical Pathology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Centre for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
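The review above pooled accuracy with a bivariate random-effects model, which jointly models sensitivity and specificity across studies. The sketch below shows only a deliberately simplified, univariate version of that idea (DerSimonian-Laird pooling of logit sensitivity), with invented study counts, so it illustrates the mechanics rather than reproducing the authors' analysis; `numpy` and `scipy` are assumed.

```python
# Simplified sketch: random-effects pooling of per-study sensitivity on the logit
# scale (DerSimonian-Laird). The cited meta-analysis used a bivariate model;
# this univariate version only illustrates the general idea. Counts are invented.
import numpy as np
from scipy.special import expit, logit

tp = np.array([90, 45, 120, 60])   # true positives per study (hypothetical)
fn = np.array([10, 5, 15, 12])     # false negatives per study (hypothetical)

sens = tp / (tp + fn)
y = logit(sens)                    # per-study logit sensitivity
v = 1.0 / tp + 1.0 / fn            # approximate within-study variance

w = 1.0 / v                        # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)                                  # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1.0 / (v + tau2)            # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
ci = expit([pooled - 1.96 * se, pooled + 1.96 * se])
print(f"pooled sensitivity: {expit(pooled):.3f} (95% CI {ci[0]:.3f}-{ci[1]:.3f})")
```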
4
Evans RP, Bryant LD, Russell G, Absolom K. Trust and acceptability of data-driven clinical recommendations in everyday practice: A scoping review. Int J Med Inform 2024; 183:105342. PMID: 38266426; DOI: 10.1016/j.ijmedinf.2024.105342.
Abstract
BACKGROUND Increasing attention is being given to the analysis of large health datasets to derive new clinical decision support systems (CDSS). However, few data-driven CDSS are being adopted into clinical practice. Trust in these tools is believed to be fundamental for acceptance and uptake but to date little attention has been given to defining or evaluating trust in clinical settings. OBJECTIVES A scoping review was conducted to explore how and where acceptability and trustworthiness of data-driven CDSS have been assessed from the health professional's perspective. METHODS Medline, Embase, PsycInfo, Web of Science, Scopus, ACM Digital, IEEE Xplore and Google Scholar were searched in March 2022 using terms expanded from: "data-driven" AND "clinical decision support" AND "acceptability". Included studies focused on healthcare practitioner-facing data-driven CDSS, relating directly to clinical care. They included trust or a proxy as an outcome, or in the discussion. The preferred reporting items for systematic reviews and meta-analyses extension for scoping reviews (PRISMA-ScR) is followed in the reporting of this review. RESULTS 3291 papers were screened, with 85 primary research studies eligible for inclusion. Studies covered a diverse range of clinical specialisms and intended contexts, but hypothetical systems (24) outnumbered those in clinical use (18). Twenty-five studies measured trust, via a wide variety of quantitative, qualitative and mixed methods. A further 24 discussed themes of trust without it being explicitly evaluated, and from these, themes of transparency, explainability, and supporting evidence were identified as factors influencing healthcare practitioner trust in data-driven CDSS. CONCLUSION There is a growing body of research on data-driven CDSS, but few studies have explored stakeholder perceptions in depth, with limited focused research on trustworthiness. Further research on healthcare practitioner acceptance, including requirements for transparency and explainability, should inform clinical implementation.
Affiliation(s)
- Ruth P Evans
- University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK.
- Gregor Russell
- Bradford District Care Trust, Bradford, New Mill, Victoria Rd, BD18 3LD, UK.
- Kate Absolom
- University of Leeds, Woodhouse Lane, Leeds LS2 9JT, UK.
5
Calderón-Díaz M, Silvestre Aguirre R, Vásconez JP, Yáñez R, Roby M, Querales M, Salas R. Explainable Machine Learning Techniques to Predict Muscle Injuries in Professional Soccer Players through Biomechanical Analysis. Sensors (Basel) 2023; 24:119. PMID: 38202981; PMCID: PMC10780883; DOI: 10.3390/s24010119.
Abstract
There is a significant risk of injury in sports and intense competition due to the demanding physical and psychological requirements. Hamstring strain injuries (HSIs) are the most prevalent type of injury among professional soccer players and are the leading cause of missed days in the sport. These injuries stem from a combination of factors, making it challenging to pinpoint the most crucial risk factors and their interactions, let alone find effective prevention strategies. Recently, there has been growing recognition of the potential of tools provided by artificial intelligence (AI). However, current studies primarily concentrate on enhancing the performance of complex machine learning models, often overlooking their explanatory capabilities. Consequently, medical teams have difficulty interpreting these models and are hesitant to trust them fully. In light of this, there is an increasing need for advanced injury detection and prediction models that can aid doctors in diagnosing or detecting injuries earlier and with greater accuracy. Accordingly, this study aims to identify the biomarkers of muscle injuries in professional soccer players through biomechanical analysis, employing several ML algorithms such as decision tree (DT) methods, discriminant methods, logistic regression, naive Bayes, support vector machine (SVM), K-nearest neighbor (KNN), ensemble methods, boosted and bagged trees, artificial neural networks (ANNs), and XGBoost. In particular, XGBoost is also used to obtain the most important features. The findings highlight that the variables that most effectively differentiate the groups and could serve as reliable predictors for injury prevention are the maximum muscle strength of the hamstrings and the stiffness of the same muscle. With regard to the 35 techniques employed, a precision of up to 78% was achieved with XGBoost, indicating that by considering scientific evidence, suggestions based on various data sources, and expert opinions, it is possible to attain good precision, thus enhancing the reliability of the results for doctors and trainers. Furthermore, the obtained results strongly align with the existing literature, although further specific studies about this sport are necessary to draw a definitive conclusion.
Affiliation(s)
- Mailyn Calderón-Díaz
- Faculty of Engineering, Universidad Andres Bello, Santiago 7550196, Chile
- Ph.D. Program in Health Sciences and Engineering, Universidad de Valparaiso, Valparaiso 2362735, Chile
- Millennium Institute for Intelligent Healthcare Engineering (iHealth), Valparaiso 2362735, Chile
- Rony Silvestre Aguirre
- Laboratorio de Biomecánica, Centro de Innovación Clínica MEDS, Santiago 7691236, Chile
- Juan P. Vásconez
- Faculty of Engineering, Universidad Andres Bello, Santiago 7550196, Chile
- Roberto Yáñez
- Laboratorio de Biomecánica, Centro de Innovación Clínica MEDS, Santiago 7691236, Chile
- Matías Roby
- Laboratorio de Biomecánica, Centro de Innovación Clínica MEDS, Santiago 7691236, Chile
- Marvin Querales
- School of Medical Technology, Universidad de Valparaiso, Valparaiso 2362735, Chile
- Rodrigo Salas
- Ph.D. Program in Health Sciences and Engineering, Universidad de Valparaiso, Valparaiso 2362735, Chile
- Millennium Institute for Intelligent Healthcare Engineering (iHealth), Valparaiso 2362735, Chile
- School of Biomedical Engineering, Universidad de Valparaiso, Valparaiso 2362735, Chile
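The XGBoost step described in the entry above (fitting a classifier and ranking feature importances) can be sketched as follows. The biomechanical feature names, the simulated data, and the hyperparameters are invented stand-ins; the `xgboost` and `scikit-learn` packages are assumed.

```python
# Minimal sketch: gradient-boosted classifier on biomechanical features with
# feature-importance ranking (illustration only, hypothetical data).
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X = pd.DataFrame({
    "hamstring_max_strength": rng.normal(300, 40, 500),
    "hamstring_stiffness": rng.normal(20, 4, 500),
    "quadriceps_strength": rng.normal(350, 50, 500),
    "sprint_speed": rng.normal(8.5, 0.6, 500),
})
y = rng.integers(0, 2, 500)  # 1 = injured, 0 = not injured (simulated labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(X_tr, y_tr)

print("test accuracy:", model.score(X_te, y_te))
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))  # global ranking of predictors
```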
6
Gallos IK, Tryfonopoulos D, Shani G, Amditis A, Haick H, Dionysiou DD. Advancing Colorectal Cancer Diagnosis with AI-Powered Breathomics: Navigating Challenges and Future Directions. Diagnostics (Basel) 2023; 13:3673. PMID: 38132257; PMCID: PMC10743128; DOI: 10.3390/diagnostics13243673.
Abstract
Early detection of colorectal cancer is crucial for improving outcomes and reducing mortality. While there is strong evidence of effectiveness, currently adopted screening methods present several shortcomings which negatively impact the detection of early stage carcinogenesis, including low uptake due to patient discomfort. As a result, developing novel, non-invasive alternatives is an important research priority. Recent advancements in the field of breathomics, the study of breath composition and analysis, have paved the way for new avenues for non-invasive cancer detection and effective monitoring. Harnessing the utility of Volatile Organic Compounds in exhaled breath, breathomics has the potential to disrupt colorectal cancer screening practices. Our goal is to outline key research efforts in this area focusing on machine learning methods used for the analysis of breathomics data, highlight challenges involved in artificial intelligence application in this context, and suggest possible future directions which are currently considered within the framework of the European project ONCOSCREEN.
Affiliation(s)
- Ioannis K. Gallos
- Institute of Communication and Computer Systems, National Technical University of Athens, Zografos Campus, 15780 Athens, Greece
- Dimitrios Tryfonopoulos
- Institute of Communication and Computer Systems, National Technical University of Athens, Zografos Campus, 15780 Athens, Greece
- Gidi Shani
- Laboratory for Nanomaterial-Based Devices, Technion—Israel Institute of Technology, Haifa 3200003, Israel
- Angelos Amditis
- Institute of Communication and Computer Systems, National Technical University of Athens, Zografos Campus, 15780 Athens, Greece
- Hossam Haick
- Laboratory for Nanomaterial-Based Devices, Technion—Israel Institute of Technology, Haifa 3200003, Israel
- Dimitra D. Dionysiou
- Institute of Communication and Computer Systems, National Technical University of Athens, Zografos Campus, 15780 Athens, Greece
7
Okada Y, Ning Y, Ong MEH. Explainable artificial intelligence in emergency medicine: an overview. Clin Exp Emerg Med 2023; 10:354-362. PMID: 38012816; PMCID: PMC10790070; DOI: 10.15441/ceem.23.145.
Abstract
Artificial intelligence (AI) and machine learning (ML) have potential to revolutionize emergency medical care by enhancing triage systems, improving diagnostic accuracy, refining prognostication, and optimizing various aspects of clinical care. However, as clinicians often lack AI expertise, they might perceive AI as a "black box," leading to trust issues. To address this, "explainable AI," which conveys to end users how an AI model arrives at its outputs, is important. This review presents the definitions, importance, and role of explainable AI, as well as potential challenges in emergency medicine. First, we introduce the terms explainability, interpretability, and transparency of AI models. These terms sound similar but have different roles in discussions of AI. Second, we indicate that explainable AI is required in clinical settings for reasons of justification, control, improvement, and discovery, and provide examples. Third, we describe three major categories of explainability (pre-modeling explainability, interpretable models, and post-modeling explainability) and present examples, especially for post-modeling explainability, such as visualization, simplification, text justification, and feature relevance. Last, we show the challenges of implementing AI and ML models in clinical settings and highlight the importance of collaboration between clinicians, developers, and researchers. This paper summarizes the concept of "explainable AI" for emergency medicine clinicians and may help them understand explainable AI in emergency contexts.
Affiliation(s)
- Yohei Okada
- Health Services and Systems Research, Duke-NUS Medical School, Singapore
- Preventive Services, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Yilin Ning
- Centre for Quantitative Medicine, Duke-NUS Medical School, Singapore
- Marcus Eng Hock Ong
- Health Services and Systems Research, Duke-NUS Medical School, Singapore
- Department of Emergency Medicine, Singapore General Hospital, Singapore
8
Chattopadhyay S, Singh PK, Ijaz MF, Kim S, Sarkar R. SnapEnsemFS: a snapshot ensembling-based deep feature selection model for colorectal cancer histological analysis. Sci Rep 2023; 13:9937. PMID: 37336964; PMCID: PMC10279666; DOI: 10.1038/s41598-023-36921-8.
Abstract
Colorectal cancer is the third most common type of cancer diagnosed annually, and the second leading cause of death due to cancer. Early diagnosis of this ailment is vital for preventing the tumour from spreading and for planning treatment that can possibly eradicate the disease. However, population-wide screening is stunted by the requirement for medical professionals to analyse histological slides manually. Thus, an automated computer-aided detection (CAD) framework based on deep learning is proposed in this research that uses histological slide images for predictions. Ensemble learning is a popular strategy for fusing the salient properties of several models to make the final predictions. However, such frameworks are computationally costly since they require the training of multiple base learners. Instead, in this study, we adopt a snapshot ensemble method wherein, rather than fusing decision scores from the snapshots of a Convolutional Neural Network (CNN) model in the traditional way, we extract deep features from the penultimate layer of the CNN model. Since the deep features are extracted from the same CNN model but for different learning environments, there may be redundancy in the feature set. To alleviate this, the features are fed into Particle Swarm Optimization, a popular meta-heuristic, for dimensionality reduction of the feature space and better classification. Upon evaluation on a publicly available colorectal cancer histology dataset using a five-fold cross-validation scheme, the proposed method achieves a best accuracy of 97.60% and F1-score of 97.61%, outperforming existing state-of-the-art methods on the same dataset. Further, qualitative investigation of class activation maps provides visual explainability to medical practitioners and justifies the use of the CAD framework in the screening of colorectal histology. Our source code is publicly accessible at https://github.com/soumitri2001/SnapEnsemFS.
Affiliation(s)
- Soumitri Chattopadhyay
- Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata, 700106, West Bengal, India
- Pawan Kumar Singh
- Department of Information Technology, Jadavpur University, Jadavpur University Second Campus, Plot No. 8, Salt Lake Bypass, LB Block, Sector III, Salt Lake City, Kolkata, 700106, West Bengal, India
- Muhammad Fazal Ijaz
- Department of Mechanical Engineering, Faculty of Engineering and Information Technology, The University of Melbourne, Grattam Street, Parkville, VIC, 3010, Australia.
- SeongKi Kim
- National Centre of Excellence in Software, Sangmyung University, Seoul, 03016, Korea.
- Ram Sarkar
- Department of Computer Science & Engineering, Jadavpur University, Kolkata, 700032, India
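The snapshot-ensembling idea in the entry above can be sketched as follows: snapshots of one CNN are saved at the end of each cosine-annealing cycle, and penultimate-layer features from all snapshots are concatenated. This is an illustration of the general technique, not the authors' code; the toy network, the single repeated batch, and the cycle length are all assumptions, and the Particle Swarm Optimization feature-selection step is omitted. PyTorch is assumed.

```python
# Sketch: snapshot ensembling with penultimate-layer feature extraction.
import copy
import torch
import torch.nn as nn

model = nn.Sequential(                      # toy CNN for 3x64x64 histology patches
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 64), nn.ReLU(),   # penultimate layer (64-d)
    nn.Linear(64, 8),                              # 8 tissue classes
)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sched = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(opt, T_0=5)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 3, 64, 64)              # dummy batch of patches
y = torch.randint(0, 8, (32,))
snapshots = []
for epoch in range(15):                      # 3 cosine cycles of 5 epochs
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    sched.step()
    if (epoch + 1) % 5 == 0:                 # end of a cycle -> save a snapshot
        snapshots.append(copy.deepcopy(model))

# Drop the classification head and concatenate penultimate features of all snapshots.
feature_extractors = [nn.Sequential(*list(m.children())[:-1]) for m in snapshots]
with torch.no_grad():
    deep_features = torch.cat([f(x) for f in feature_extractors], dim=1)
print(deep_features.shape)                   # (32, 64 * number_of_snapshots)
```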
9
Jung J, Lee H, Jung H, Kim H. Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review. Heliyon 2023; 9:e16110. PMID: 37234618; PMCID: PMC10205582; DOI: 10.1016/j.heliyon.2023.e16110.
Abstract
Background Significant advancements in the field of information technology have influenced the creation of trustworthy explainable artificial intelligence (XAI) in healthcare. Despite improved performance of XAI, XAI techniques have not yet been integrated into real-time patient care. Objective The aim of this systematic review is to understand the trends and gaps in research on XAI through an assessment of the essential properties of XAI and an evaluation of explanation effectiveness in the healthcare field. Methods A search of PubMed and Embase databases for relevant peer-reviewed articles on development of an XAI model using clinical data and evaluating explanation effectiveness published between January 1, 2011, and April 30, 2022, was conducted. All retrieved papers were screened independently by the two authors. Relevant papers were also reviewed for identification of the essential properties of XAI (e.g., stakeholders and objectives of XAI, quality of personalized explanations) and the measures of explanation effectiveness (e.g., mental model, user satisfaction, trust assessment, task performance, and correctability). Results Six out of 882 articles met the criteria for eligibility. Artificial Intelligence (AI) users were the most frequently described stakeholders. XAI served various purposes, including evaluation, justification, improvement, and learning from AI. Evaluation of the quality of personalized explanations was based on fidelity, explanatory power, interpretability, and plausibility. User satisfaction was the most frequently used measure of explanation effectiveness, followed by trust assessment, correctability, and task performance. The methods of assessing these measures also varied. Conclusion XAI research should address the lack of a comprehensive and agreed-upon framework for explaining XAI and standardized approaches for evaluating the effectiveness of the explanation that XAI provides to diverse AI stakeholders.
Affiliation(s)
- Jinsun Jung
- College of Nursing, Seoul National University, Seoul, Republic of Korea
- Center for Human-Caring Nurse Leaders for the Future by Brain Korea 21 (BK 21) Four Project, College of Nursing, Seoul National University, Seoul, Republic of Korea
- Hyungbok Lee
- College of Nursing, Seoul National University, Seoul, Republic of Korea
- Emergency Nursing Department, Seoul National University Hospital, Seoul, Republic of Korea
- Hyunggu Jung
- Department of Computer Science and Engineering, University of Seoul, Seoul, Republic of Korea
- Department of Artificial Intelligence, University of Seoul, Seoul, Republic of Korea
- Hyeoneui Kim
- College of Nursing, Seoul National University, Seoul, Republic of Korea
- Research Institute of Nursing Science, College of Nursing, Seoul National University, Seoul, Republic of Korea
10
Nazir S, Dickson DM, Akram MU. Survey of explainable artificial intelligence techniques for biomedical imaging with deep neural networks. Comput Biol Med 2023; 156:106668. PMID: 36863192; DOI: 10.1016/j.compbiomed.2023.106668.
Abstract
Artificial Intelligence (AI) techniques based on deep learning have revolutionized disease diagnosis with their outstanding image classification performance. In spite of the outstanding results, the widespread adoption of these techniques in clinical practice is still taking place at a moderate pace. One of the major hindrances is that a trained Deep Neural Network (DNN) model provides a prediction, but questions about why and how that prediction was made remain unanswered. Establishing this link is of utmost importance in the regulated healthcare domain to increase the trust in the automated diagnosis system among practitioners, patients and other stakeholders. The application of deep learning to medical imaging has to be interpreted with caution due to health and safety concerns, similar to blame attribution in the case of an accident involving an autonomous car. The consequences of both false positive and false negative cases are far reaching for patients' welfare and cannot be ignored. This is exacerbated by the fact that state-of-the-art deep learning algorithms comprise complex interconnected structures, millions of parameters, and a 'black box' nature, offering little understanding of their inner workings, unlike traditional machine learning algorithms. Explainable AI (XAI) techniques help to understand model predictions, which helps develop trust in the system, accelerate disease diagnosis, and meet adherence to regulatory requirements. This survey provides a comprehensive review of the promising field of XAI for biomedical imaging diagnostics. We also provide a categorization of the XAI techniques, discuss the open challenges, and provide future directions for XAI which would be of interest to clinicians, regulators and model developers.
Affiliation(s)
- Sajid Nazir
- Department of Computing, Glasgow Caledonian University, Glasgow, UK.
- Diane M Dickson
- Department of Podiatry and Radiography, Research Centre for Health, Glasgow Caledonian University, Glasgow, UK
- Muhammad Usman Akram
- Computer and Software Engineering Department, National University of Sciences and Technology, Islamabad, Pakistan
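One widely used post-hoc explanation method of the kind covered by surveys like the one above is Grad-CAM, which weights the last convolutional feature maps of a network by the gradient of the target class score. The sketch below is a minimal, generic implementation on an untrained ResNet-18 with a random input, not code from the survey; PyTorch and torchvision are assumed.

```python
# Minimal Grad-CAM sketch: class-discriminative heatmap from the last conv block.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()      # random weights; enough for the sketch
acts, grads = {}, {}

def fwd_hook(_m, _i, out):
    acts["value"] = out                     # feature maps of the hooked layer

def bwd_hook(_m, _gi, gout):
    grads["value"] = gout[0]                # gradients w.r.t. those feature maps

layer = model.layer4[-1]                    # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

img = torch.randn(1, 3, 224, 224)           # stand-in for a medical image
scores = model(img)
scores[0, scores.argmax()].backward()       # gradient of the top class score

weights = grads["value"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
cam = F.relu((weights * acts["value"]).sum(dim=1))         # weighted sum of maps
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear",
                    align_corners=False)[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalise to [0, 1]
print(cam.shape)                             # heatmap to overlay on the input image
```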
11
Kumar A, Vishwakarma A, Bajaj V. CRCCN-Net: Automated framework for classification of colorectal tissue using histopathological images. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104172.
12
Liu CF, Chen ZC, Kuo SC, Lin TC. Does AI explainability affect physicians’ intention to use AI? Int J Med Inform 2022; 168:104884. DOI: 10.1016/j.ijmedinf.2022.104884.
13
Hanis TM, Ruhaiyem NIR, Arifin WN, Haron J, Wan Abdul Rahman WF, Abdullah R, Musa KI. Over-the-Counter Breast Cancer Classification Using Machine Learning and Patient Registration Records. Diagnostics (Basel) 2022; 12:2826. PMID: 36428886; PMCID: PMC9689364; DOI: 10.3390/diagnostics12112826.
Abstract
This study aims to determine the feasibility of using machine learning (ML) and patient registration records to develop an over-the-counter (OTC) screening model for breast cancer risk estimation. Data were retrospectively collected from women who came to the Hospital Universiti Sains Malaysia, Malaysia, for breast-related problems. Eight ML models were used: k-nearest neighbour (kNN), elastic-net logistic regression, multivariate adaptive regression splines, artificial neural network, partial least square, random forest, support vector machine (SVM), and extreme gradient boosting. Features utilised for the development of the screening models were limited to information in the patient registration form. The final model was evaluated in terms of performance across mammographic density groups. Additionally, the feature importance of the final model was assessed using a model-agnostic approach. kNN had the highest Youden J index, precision, and PR-AUC, while SVM had the highest F2 score. The kNN model was selected as the final model. The model had a balanced performance in terms of sensitivity, specificity, and PR-AUC across the mammographic density groups. The most important feature was the age at examination. In conclusion, this study showed that ML and patient registration information can feasibly be used as an OTC screening model for breast cancer.
Affiliation(s)
- Tengku Muhammad Hanis
- Department of Community Medicine, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
- Wan Nor Arifin
- Biostatistics and Research Methodology Unit, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
- Juhara Haron
- Department of Radiology, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
- Breast Cancer Awareness and Research Unit, Hospital Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
- Wan Faiziah Wan Abdul Rahman
- Breast Cancer Awareness and Research Unit, Hospital Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
- Department of Pathology, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
- Rosni Abdullah
- School of Computer Sciences, Universiti Sains Malaysia, Gelugor 11800, Penang, Malaysia
- Kamarul Imran Musa
- Department of Community Medicine, School of Medical Sciences, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
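The kNN model selection and the metrics named in the entry above (sensitivity, specificity, Youden's J, PR-AUC) can be illustrated with a short sketch. The data are synthetic stand-ins for registration-form features, not the study's dataset; scikit-learn is assumed.

```python
# Sketch: k-nearest-neighbour screening model with the metrics used for selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, average_precision_score

X, y = make_classification(n_samples=1000, n_features=6, weights=[0.8, 0.2],
                           random_state=0)   # imbalanced, like a screening setting
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=7).fit(X_tr, y_tr)
pred = knn.predict(X_te)
proba = knn.predict_proba(X_te)[:, 1]

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print("sensitivity:", round(sensitivity, 3))
print("specificity:", round(specificity, 3))
print("Youden J:", round(sensitivity + specificity - 1, 3))
print("PR-AUC:", round(average_precision_score(y_te, proba), 3))
```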
14
Loh HW, Ooi CP, Seoni S, Barua PD, Molinari F, Acharya UR. Application of explainable artificial intelligence for healthcare: A systematic review of the last decade (2011-2022). Comput Methods Programs Biomed 2022; 226:107161. PMID: 36228495; DOI: 10.1016/j.cmpb.2022.107161.
Abstract
BACKGROUND AND OBJECTIVES Artificial intelligence (AI) has branched out to various applications in healthcare, such as health services management, predictive medicine, clinical decision-making, and patient data and diagnostics. Although AI models have achieved human-like performance, their use is still limited because they are seen as a black box. This lack of trust remains the main reason for their low use in practice, especially in healthcare. Hence, explainable artificial intelligence (XAI) has been introduced as a technique that can provide confidence in the model's prediction by explaining how the prediction is derived, thereby encouraging the use of AI systems in healthcare. The primary goal of this review is to provide areas of healthcare that require more attention from the XAI research community. METHODS Multiple journal databases were thoroughly searched using the PRISMA 2020 guidelines. Studies that did not appear in Q1 journals, which are highly credible, were excluded. RESULTS In this review, we surveyed 99 Q1 articles covering the following XAI techniques: SHAP, LIME, GradCAM, LRP, fuzzy classifiers, EBM, CBR, rule-based systems, and others. CONCLUSION We discovered that detecting abnormalities in 1D biosignals and identifying key text in clinical notes are areas that require more attention from the XAI research community. We hope this review will encourage the development of a holistic cloud system for a smart city.
Affiliation(s)
- Hui Wen Loh
- School of Science and Technology, Singapore University of Social Sciences, Singapore
- Chui Ping Ooi
- School of Science and Technology, Singapore University of Social Sciences, Singapore
- Silvia Seoni
- Department of Electronics and Telecommunications, Biolab, Politecnico di Torino, Torino 10129, Italy
- Prabal Datta Barua
- Faculty of Engineering and Information Technology, University of Technology Sydney, Australia; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia
- Filippo Molinari
- Department of Electronics and Telecommunications, Biolab, Politecnico di Torino, Torino 10129, Italy
- U Rajendra Acharya
- School of Science and Technology, Singapore University of Social Sciences, Singapore; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia; School of Engineering, Ngee Ann Polytechnic, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taiwan; Research Organization for Advanced Science and Technology (IROAST), Kumamoto University, Kumamoto, Japan
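Of the XAI techniques listed in the entry above, SHAP feature attribution is straightforward to sketch for a tree-based clinical model. The features and labels below are invented stand-ins, and the `shap` package is assumed to be installed; the exact return shape of `shap_values` can differ for multi-output models, so this is a sketch rather than a drop-in recipe.

```python
# Sketch: SHAP attributions for a gradient-boosted classifier on tabular data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "age": rng.integers(20, 90, 300),
    "systolic_bp": rng.normal(130, 15, 300),
    "lab_marker": rng.normal(1.0, 0.3, 300),
})
y = (X["age"] + 20 * X["lab_marker"] + rng.normal(0, 10, 300) > 100).astype(int)

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # per-sample, per-feature attributions

# Mean absolute SHAP value per feature gives a global importance ranking.
global_importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(global_importance.sort_values(ascending=False))
```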
15
Chen H, Gomez C, Huang CM, Unberath M. Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review. NPJ Digit Med 2022; 5:156. PMID: 36261476; PMCID: PMC9581990; DOI: 10.1038/s41746-022-00699-2.
Abstract
Transparency in Machine Learning (ML), often also referred to as interpretability or explainability, attempts to reveal the working mechanisms of complex models. From a human-centered design perspective, transparency is not a property of the ML model but an affordance, i.e., a relationship between algorithm and users. Thus, prototyping and user evaluations are critical to attaining solutions that afford transparency. Following human-centered design principles in highly specialized and high stakes domains, such as medical image analysis, is challenging due to the limited access to end users and the knowledge imbalance between those users and ML designers. To investigate the state of transparent ML in medical image analysis, we conducted a systematic review of the literature from 2012 to 2021 in PubMed, EMBASE, and Compendex databases. We identified 2508 records and 68 articles met the inclusion criteria. Current techniques in transparent ML are dominated by computational feasibility and barely consider end users, e.g. clinical stakeholders. Despite the different roles and knowledge of ML developers and end users, no study reported formative user research to inform the design and development of transparent ML models. Only a few studies validated transparency claims through empirical user evaluations. These shortcomings put contemporary research on transparent ML at risk of being incomprehensible to users, and thus, clinically irrelevant. To alleviate these shortcomings in forthcoming research, we introduce the INTRPRT guideline, a design directive for transparent ML systems in medical image analysis. The INTRPRT guideline suggests human-centered design principles, recommending formative user research as the first step to understand user needs and domain requirements. Following these guidelines increases the likelihood that the algorithms afford transparency and enable stakeholders to capitalize on the benefits of transparent ML.
Affiliation(s)
- Haomin Chen
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Catalina Gomez
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Chien-Ming Huang
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA.
16
|
Deep Neural Network Models for Colon Cancer Screening. Cancers (Basel) 2022; 14:cancers14153707. [PMID: 35954370 PMCID: PMC9367621 DOI: 10.3390/cancers14153707] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Revised: 07/26/2022] [Accepted: 07/27/2022] [Indexed: 12/24/2022] Open
Abstract
Simple Summary: Deep learning models have been shown to achieve high performance in diagnosing colon cancer compared to conventional image processing and hand-crafted machine learning methods. Hence, several studies have focused on developing hybrid learning, end-to-end, and transfer learning techniques to reduce manual interaction and the need for labelling regions of interest. However, these weak learning techniques do not always provide a clear diagnosis. Therefore, it is necessary to develop a clear, explainable learning method that can highlight the factors that form the basis of clinical decisions. However, there has been little research carried out employing such transparent approaches. This study discussed the aforementioned models for colon cancer diagnosis.
Abstract: Early detection of colorectal cancer can significantly facilitate clinicians’ decision-making and reduce their workload. This can be achieved using automatic systems with endoscopic and histological images. Recently, the success of deep learning has motivated the development of image- and video-based polyp identification and segmentation. Currently, most diagnostic colonoscopy rooms utilize artificial intelligence methods that are considered to perform well in predicting invasive cancer. Convolutional neural network-based architectures, together with image patches and preprocessing, are widely used. Furthermore, transfer learning and end-to-end learning techniques have been adopted for detection and localization tasks, which improve accuracy and reduce user dependence with limited datasets. However, explainable deep networks that provide transparency, interpretability, reliability, and fairness in clinical diagnostics are preferred. In this review, we summarize the latest advances in such models, with or without transparency, for the prediction of colorectal cancer and also address the knowledge gap in the upcoming technology.
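The transfer-learning setup mentioned in the entry above, reusing a pretrained backbone and training only a new classification head, can be sketched as follows. The class count, input sizes and training snippet are illustrative assumptions, not code from the review; PyTorch and torchvision are assumed.

```python
# Sketch: transfer learning with a frozen backbone and a new classification head.
import torch
import torch.nn as nn
from torchvision.models import resnet18

num_classes = 2                               # e.g. neoplastic vs non-neoplastic
model = resnet18(weights=None)                # in practice, ImageNet weights would be loaded
for p in model.parameters():
    p.requires_grad = False                   # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, num_classes)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 224, 224)          # dummy colonoscopy/histology patches
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```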
17
Time Is Money: Considerations for Measuring the Radiological Reading Time. J Imaging 2022; 8:208. PMID: 35893086; PMCID: PMC9394242; DOI: 10.3390/jimaging8080208.
Abstract
Timestamps in the Radiology Information System (RIS) are a readily available and valuable source of information whose significance is increasing, among other reasons, because of the current focus on the clinical impact of artificial intelligence applications. We aimed to evaluate timestamp-based radiological dictation time, introduce timestamp modeling techniques, and compare these with prospectively measured reporting times. Dictation time was calculated from RIS timestamps between 05/2010 and 01/2021 at our institution (n = 108,310). We minimized contextual outliers by simulating the raw data by iteration (1000, vector size (µ/sd/λ) = 100/loop), assuming normally distributed reporting times. In addition, 329 reporting times were prospectively measured by two radiologists (1 and 4 years of experience). Altogether, 106,127 of 108,310 exams were included after simulation, with a mean dictation time of 16.62 min. Mean dictation time was 16.05 min for head CT (44,743/45,596), 15.84 min for chest CT (32,797/33,381), 17.92 min for abdominal CT (n = 22,805/23,483), 10.96 min for CT foot (n = 937/958), 9.14 min for lumbar spine (881/892), 8.83 min for shoulder (409/436), 8.83 min for CT wrist (1201/1322), and 39.20 min for a polytrauma patient (2127/2242), without a significant difference from the prospectively measured reporting times. In conclusion, timestamp analysis is useful for measuring current reporting practice, although body region and radiological experience are confounders. This could aid in cost–benefit assessments of workflow changes (e.g., AI implementation).
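The core timestamp arithmetic in the entry above, taking the difference between two RIS timestamps and trimming implausible outliers before summarising, amounts to a few lines of pandas. The column names, timestamps and the 1–120-minute plausibility window below are invented for illustration and are not from the paper.

```python
# Sketch: dictation time from RIS timestamps with simple outlier trimming.
import pandas as pd

ris = pd.DataFrame({
    "dictation_start": pd.to_datetime(["2021-01-04 08:01", "2021-01-04 08:20",
                                       "2021-01-04 09:00"]),
    "report_saved":    pd.to_datetime(["2021-01-04 08:17", "2021-01-04 08:39",
                                       "2021-01-04 11:55"]),
    "body_region": ["head CT", "chest CT", "polytrauma CT"],
})

# Dictation time in minutes per examination.
ris["dictation_min"] = (ris["report_saved"] - ris["dictation_start"]).dt.total_seconds() / 60

# Trim contextual outliers (e.g. interrupted dictations) before summarising.
plausible = ris[ris["dictation_min"].between(1, 120)]
print(plausible.groupby("body_region")["dictation_min"].mean())
```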
18
|
Infusing common-sensical prior knowledge into topological representations of learning robots. ARTIFICIAL LIFE AND ROBOTICS 2022. [DOI: 10.1007/s10015-022-00776-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
19
Buchsbaum JC, Jaffray DA, Ba D, Borkon LL, Chalk C, Chung C, Coleman MA, Coleman CN, Diehn M, Droegemeier KK, Enderling H, Espey MG, Greenspan EJ, Hartshorn CM, Hoang T, Hsiao HT, Keppel C, Moore NW, Prior F, Stahlberg EA, Tourassi G, Willcox KE. Predictive Radiation Oncology - A New NCI-DOE Scientific Space and Community. Radiat Res 2022; 197:434-445. PMID: 35090025; DOI: 10.1667/rade-22-00012.1.
Abstract
With a widely attended virtual kickoff event on January 29, 2021, the National Cancer Institute (NCI) and the Department of Energy (DOE) launched a series of four interactive, interdisciplinary workshops, concluding with a "World Café" on March 29, 2021, focused on advancing computational approaches for predictive oncology in the clinical and research domains of radiation oncology. These events reflect 3,870 human hours of virtual engagement with representation from 8 DOE national laboratories and the Frederick National Laboratory for Cancer Research (FNL), 4 research institutes, 5 cancer centers, 17 medical schools and teaching hospitals, 5 companies, 5 federal agencies, 3 research centers, and 27 universities. Here we summarize the workshops, first describing their background. Participants identified twelve key questions, along with collaborative parallel ideas, as the focus of work going forward to advance the field. These were then used to define short-term and longer-term "Blue Sky" goals. In addition, the group determined key success factors for predictive oncology in the context of radiation oncology, if not the future of all of medicine. These are: cross-discipline collaboration, targeted talent development, development of mechanistic mathematical and computational models and tools, and access to high-quality multiscale data that bridges mechanisms to phenotype. The workshop participants reported feeling energized and highly motivated to pursue next steps together to address the unmet needs in radiation oncology specifically and in cancer research generally, and noted that NCI and DOE project goals align at the convergence of radiation therapy and advanced computing.
Affiliation(s)
- David A Jaffray
- The University of Texas, MD Anderson Cancer Center, Houston, Texas 77030
- Demba Ba
- Harvard University, Cambridge, Massachusetts 02138
- Lynn L Borkon
- Frederick National Laboratory for Cancer Research, Frederick, Maryland, 21701
- Caroline Chung
- The University of Texas, MD Anderson Cancer Center, Houston, Texas 77030
- Heiko Enderling
- H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida 33612
- Thuc Hoang
- U.S. Department of Energy, Washington, DC 20585
- H Timothy Hsiao
- American Society for Radiation Oncology (ASTRO), Arlington, Virginia 22202
- Nathan W Moore
- Sandia National Laboratories, Albuquerque, New Mexico 87123
- Fred Prior
- University of Arkansas for Medical Sciences, Little Rock, Arkansas 72205
- Eric A Stahlberg
- Frederick National Laboratory for Cancer Research, Frederick, Maryland, 21701
20
Zhang Y, Weng Y, Lund J. Applications of Explainable Artificial Intelligence in Diagnosis and Surgery. Diagnostics (Basel) 2022; 12:237. PMID: 35204328; PMCID: PMC8870992; DOI: 10.3390/diagnostics12020237.
Abstract
In recent years, artificial intelligence (AI) has shown great promise in medicine. However, explainability issues make AI applications in clinical usages difficult. Some research has been conducted into explainable artificial intelligence (XAI) to overcome the limitation of the black-box nature of AI methods. Compared with AI techniques such as deep learning, XAI can provide both decision-making and explanations of the model. In this review, we conducted a survey of the recent trends in medical diagnosis and surgical applications using XAI. We have searched articles published between 2019 and 2021 from PubMed, IEEE Xplore, Association for Computing Machinery, and Google Scholar. We included articles which met the selection criteria in the review and then extracted and analyzed relevant information from the studies. Additionally, we provide an experimental showcase on breast cancer diagnosis, and illustrate how XAI can be applied in medical XAI applications. Finally, we summarize the XAI methods utilized in the medical XAI applications, the challenges that the researchers have met, and discuss the future research directions. The survey result indicates that medical XAI is a promising research direction, and this study aims to serve as a reference to medical experts and AI scientists when designing medical XAI applications.
Affiliation(s)
- Yiming Zhang
- School of Computer Science, Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315100, China
- School of Medicine, University of Nottingham, Nottingham NG7 2RD, UK
- Ying Weng
- School of Computer Science, Faculty of Science and Engineering, University of Nottingham Ningbo China, Ningbo 315100, China
- Jonathan Lund
- School of Medicine, University of Nottingham, Nottingham NG7 2RD, UK
21
Alheejawi S, Berendt R, Jha N, Maity SP, Mandal M. Detection of malignant melanoma in H&E-stained images using deep learning techniques. Tissue Cell 2021; 73:101659. PMID: 34634635; DOI: 10.1016/j.tice.2021.101659.
Abstract
Histopathological images are widely used to diagnose diseases including skin cancer. As digital histopathological images are typically of very large size, in the order of several billion pixels, automated identification of all abnormal cell nuclei and their distribution within multiple tissue sections would assist rapid comprehensive diagnostic assessment. In this paper, we propose a deep learning-based technique to segment the melanoma regions in Hematoxylin and Eosin (H&E) stained histopathological images. In this technique, the nuclei in the image are first segmented using a Convolutional Neural Network (CNN). The segmented nuclei are then used to generate melanoma region masks. Experimental results with a small melanoma dataset show that the proposed method can potentially segment the nuclei with more than 94% accuracy and segment the melanoma regions with a Dice coefficient of around 85%. The proposed technique also has a short execution time, making it suitable for clinical diagnosis with a fast turnaround time.
Affiliation(s)
- Salah Alheejawi
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada.
- Richard Berendt
- Department of Medicine, University of Alberta, Edmonton, AB, Canada.
- Naresh Jha
- Department of Medicine, University of Alberta, Edmonton, AB, Canada.
- Santi P Maity
- Department of Information Technology, Indian Institute of Engineering Science and Technology, Shibpur, India.
- Mrinal Mandal
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada.
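The Dice coefficient used to evaluate the melanoma region masks in the entry above can be computed as in the following sketch; the toy masks are invented for illustration and are not the authors' data.

```python
# Sketch: Dice coefficient for binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

pred_mask = np.zeros((4, 4), dtype=int)
true_mask = np.zeros((4, 4), dtype=int)
pred_mask[1:3, 1:3] = 1          # predicted melanoma region (4 pixels)
true_mask[1:4, 1:3] = 1          # ground-truth melanoma region (6 pixels)
print(dice_coefficient(pred_mask, true_mask))   # 0.8 for this toy example
```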
22
23
Xu X, Li J, Guan Y, Zhao L, Zhao Q, Zhang L, Li L. GLA-Net: A global-local attention network for automatic cataract classification. J Biomed Inform 2021; 124:103939. PMID: 34752858; DOI: 10.1016/j.jbi.2021.103939.
Abstract
Cataracts are the leading cause of blindness among all ophthalmic diseases. Convenient and cost-effective early cataract screening is urgently needed to reduce the risks of visual loss. To date, many studies have investigated automatic cataract classification based on fundus images. However, existing methods mainly rely on global image information while ignoring various local and subtle features. Notably, these local features are highly helpful for the identification of cataracts of different severities. To avoid this disadvantage, we introduce a deep learning technique to learn multilevel feature representations of the fundus image simultaneously. Specifically, a global-local attention network (GLA-Net) is proposed to handle the cataract classification task, which consists of two levels of subnets: the global-level attention subnet pays attention to the global structure information of the fundus image, while the local-level attention subnet focuses on the local discriminative features of specific regions. These two types of subnets extract retinal features at different attention levels, which are then combined for final cataract classification. Our GLA-Net achieves the best performance on all metrics (90.65% detection accuracy, 83.47% grading accuracy, and 81.11% classification accuracy for grades 1 and 2). The experimental results on a real clinical dataset show that the combination of global-level and local-level attention models is effective for cataract screening and provides significant potential for other medical tasks.
Affiliation(s)
- Xi Xu
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Yu Guan
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Linna Zhao
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Qing Zhao
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China.
- Li Zhang
- Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Li Li
- National Center for Children's Health, Beijing Children's Hospital, Capital Medical University, Beijing, China
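The two-level idea described in the entry above, one branch for the whole image and one for a discriminative local region, fused for classification, can be sketched in a highly simplified form. The toy branches, dimensions and class count below are assumptions for illustration and do not reproduce the published GLA-Net architecture or its attention modules; PyTorch is assumed.

```python
# Highly simplified sketch of a global-local, two-branch classifier.
import torch
import torch.nn as nn

class GlobalLocalNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32), nn.ReLU(),
            )
        self.global_branch = branch()   # whole-image structure
        self.local_branch = branch()    # discriminative local region
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, full_image, local_patch):
        fused = torch.cat([self.global_branch(full_image),
                           self.local_branch(local_patch)], dim=1)
        return self.classifier(fused)

net = GlobalLocalNet()
full = torch.randn(2, 3, 224, 224)      # whole fundus image
patch = torch.randn(2, 3, 64, 64)       # e.g. a lens-region crop
print(net(full, patch).shape)           # (2, num_classes)
```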
24
Alheejawi S, Berendt R, Jha N, Maity SP, Mandal M. An efficient CNN based algorithm for detecting melanoma cancer regions in H&E-stained images. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3982-3985. PMID: 34892103; DOI: 10.1109/embc46164.2021.9630443.
Abstract
Histopathological images are widely used to diagnose diseases such as skin cancer. As digital histopathological images are typically very large, on the order of several billion pixels, automated identification of abnormal cell nuclei and their distribution across multiple tissue sections would enable rapid, comprehensive diagnostic assessment. In this paper, we propose a deep learning-based technique to segment melanoma regions in Hematoxylin and Eosin-stained histopathological images. In this technique, the nuclei in an image are first segmented using a deep neural network. The segmented nuclei are then used to generate melanoma region masks. Experimental results show that the proposed method provides nuclei segmentation accuracy of around 90% and melanoma region segmentation accuracy of around 98%. The proposed technique also has low computational complexity.
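The two-stage idea described above (segment nuclei first, then derive a melanoma region mask from them) can be illustrated with a simple morphological post-processing step. The sketch below is not the authors' method: the dilation and hole-filling heuristic and its parameters are assumptions standing in for the paper's mask-generation stage.

import numpy as np
from scipy import ndimage

def region_mask_from_nuclei(nuclei_mask, dilation_iters=10):
    """Expand a binary nuclei mask into a contiguous candidate region."""
    dilated = ndimage.binary_dilation(nuclei_mask, iterations=dilation_iters)
    return ndimage.binary_fill_holes(dilated)

if __name__ == "__main__":
    toy = np.zeros((64, 64), dtype=bool)
    toy[20:24, 20:24] = True   # a fake nucleus
    toy[30:33, 28:31] = True   # another fake nucleus
    mask = region_mask_from_nuclei(toy, dilation_iters=5)
    print(int(mask.sum()), "pixels in the candidate region")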
25
Oliveira SP, Neto PC, Fraga J, Montezuma D, Monteiro A, Monteiro J, Ribeiro L, Gonçalves S, Pinto IM, Cardoso JS. CAD systems for colorectal cancer from WSI are still not ready for clinical acceptance. Sci Rep 2021; 11:14358. [PMID: 34257363 PMCID: PMC8277780 DOI: 10.1038/s41598-021-93746-z] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2021] [Accepted: 06/28/2021] [Indexed: 02/07/2023] Open
Abstract
Most oncological cases can be detected by imaging techniques, but diagnosis is based on the pathological assessment of tissue samples. In recent years, pathology has entered a digital era in which tissue samples are digitised and evaluated on screen. As a result, digital pathology has opened up many research opportunities, allowing the development of more advanced image processing techniques as well as artificial intelligence (AI) methodologies. Nevertheless, despite colorectal cancer (CRC) being the second deadliest cancer type worldwide, with increasing incidence rates, the application of AI to CRC diagnosis, particularly on whole-slide images (WSI), is still a young field. In this review, we analyse relevant works published on this particular task and highlight the limitations that hinder their application in clinical practice. We also empirically investigate the feasibility of using weakly annotated datasets to support the development of computer-aided diagnosis systems for CRC from WSI. Our study underscores the need for large datasets in this field and for a learning methodology suited to gaining the most benefit from partially annotated data. The CRC WSI dataset used in this study, containing 1,133 colorectal biopsy and polypectomy samples, is available upon reasonable request.
Affiliation(s)
- Sara P Oliveira: INESCTEC, 4200-465, Porto, Portugal; Faculty of Engineering (FEUP), University of Porto, 4200-465, Porto, Portugal
- Pedro C Neto: INESCTEC, 4200-465, Porto, Portugal; Faculty of Engineering (FEUP), University of Porto, 4200-465, Porto, Portugal
- João Fraga: IMP Diagnostics, 4150-146, Porto, Portugal
- Diana Montezuma: IMP Diagnostics, 4150-146, Porto, Portugal; ICBAS, University of Porto, 4050-313, Porto, Portugal; Cancer Biology and Epigenetics Group, IPO-Porto, 4200-072, Porto, Portugal
- Jaime S Cardoso: INESCTEC, 4200-465, Porto, Portugal; Faculty of Engineering (FEUP), University of Porto, 4200-465, Porto, Portugal
26
de Souza LA, Mendel R, Strasser S, Ebigbo A, Probst A, Messmann H, Papa JP, Palm C. Convolutional Neural Networks for the evaluation of cancer in Barrett's esophagus: Explainable AI to lighten up the black-box. Comput Biol Med 2021; 135:104578. [PMID: 34171639 DOI: 10.1016/j.compbiomed.2021.104578] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Revised: 06/11/2021] [Accepted: 06/12/2021] [Indexed: 01/10/2023]
Abstract
Even though artificial intelligence and machine learning have demonstrated remarkable performance in medical image computing, their accountability and transparency must also be established. The reliability of machine learning predictions must be explained and interpreted, especially when diagnostic support is addressed. For this task, the black-box nature of deep learning techniques must be lightened up to transfer their promising results into clinical practice. Hence, we investigate the use of explainable artificial intelligence techniques to quantitatively highlight discriminative regions during the classification of early-cancerous tissues in patients diagnosed with Barrett's esophagus. Four Convolutional Neural Network models (AlexNet, SqueezeNet, ResNet50, and VGG16) were analyzed using five different interpretation techniques (saliency, guided backpropagation, integrated gradients, input × gradients, and DeepLIFT) to compare their agreement with experts' prior annotations of cancerous tissue. We show that saliency attributions match the experts' manual delineations best. Moreover, there is a moderate to high correlation between a model's sensitivity and its agreement with the human annotations: the higher the model's sensitivity, the stronger the agreement between human and computational segmentation. We observed a relevant relation between computational learning and experts' insights, demonstrating how human knowledge may influence correct computational learning.
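Of the five attribution techniques compared in this abstract, vanilla saliency (the gradient of the predicted class score with respect to the input pixels) is the simplest to reproduce. A hedged PyTorch sketch follows; the randomly initialised ResNet-18 backbone and the single-image batch are placeholders, not the trained AlexNet, SqueezeNet, ResNet50 or VGG16 models evaluated in the paper.

import torch
import torchvision.models as models

def saliency_map(model, image):
    """Return |d(top-class score)/d(input)|, reduced over colour channels."""
    model.eval()
    image = image.clone().requires_grad_(True)
    scores = model(image)
    top_class = scores.argmax(dim=1).item()  # assumes a batch of one image
    scores[0, top_class].backward()
    return image.grad.abs().max(dim=1).values  # shape (1, H, W)

if __name__ == "__main__":
    net = models.resnet18()          # untrained stand-in for the paper's CNNs
    x = torch.randn(1, 3, 224, 224)  # placeholder endoscopic image
    print(saliency_map(net, x).shape)  # torch.Size([1, 224, 224])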
Affiliation(s)
- Luis A de Souza: Department of Computing, São Carlos Federal University - UFSCar, Brazil; Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany
- Robert Mendel: Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany
- Sophia Strasser: Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany
- Alanna Ebigbo: Medizinische Klinik III, Universitätsklinikum Augsburg, Germany
- Andreas Probst: Medizinische Klinik III, Universitätsklinikum Augsburg, Germany
- Helmut Messmann: Medizinische Klinik III, Universitätsklinikum Augsburg, Germany
- João P Papa: Department of Computing, São Paulo State University, UNESP, Brazil
- Christoph Palm: Regensburg Medical Image Computing (ReMIC), Ostbayerische Technische Hochschule Regensburg (OTH Regensburg), Germany; Regensburg Center of Health Sciences and Technology (RCHST), OTH Regensburg, Germany
27
Castillo-Secilla D, Gálvez JM, Carrillo-Perez F, Verona-Almeida M, Redondo-Sánchez D, Ortuno FM, Herrera LJ, Rojas I. KnowSeq R-Bioc package: The automatic smart gene expression tool for retrieving relevant biological knowledge. Comput Biol Med 2021; 133:104387. [PMID: 33872966 DOI: 10.1016/j.compbiomed.2021.104387] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Revised: 04/05/2021] [Accepted: 04/05/2021] [Indexed: 02/07/2023]
Abstract
The KnowSeq R/Bioc package is designed as a powerful, scalable and modular software package focused on automating and assembling renowned bioinformatic tools with new features and functionalities. It comprises a unified environment to perform complex gene expression analyses, covering all the processing steps needed to identify a gene signature for a specific disease and to gather interpretable knowledge. This process may be initiated from raw files, either available on well-known platforms or provided by the users themselves, and in either case coming from different information sources and different transcriptomic technologies. The pipeline makes use of a set of advanced algorithms, including an adaptation of a novel procedure for selecting the most representative genes in a given multiclass problem. Similarly, an intelligent system able to classify new patients is embedded, giving the user the opportunity to choose among a number of well-known and widespread classification and feature selection methods in bioinformatics. Furthermore, KnowSeq automatically generates a complete and detailed HTML report of the whole process, which is likewise modular and scalable. Biclass breast cancer and multiclass lung cancer case studies were addressed to rigorously assess the usability and efficiency of KnowSeq. The models built using the Differentially Expressed Genes from both experiments reach high classification rates. Furthermore, biological knowledge was extracted in terms of Gene Ontologies, pathways and related diseases to help the expert in the decision-making process. KnowSeq is available at Bioconductor (https://bioconductor.org/packages/KnowSeq), GitHub (https://github.com/CasedUgr/KnowSeq) and Docker (https://hub.docker.com/r/casedugr/knowseq).
Affiliation(s)
- Daniel Castillo-Secilla: Department of Computer Architecture and Technology, University of Granada, C.I.T.I.C., Periodista Rafael Gómez Montero 2, 18014, Granada, Spain
- Juan Manuel Gálvez: Department of Computer Architecture and Technology, University of Granada, C.I.T.I.C., Periodista Rafael Gómez Montero 2, 18014, Granada, Spain
- Francisco Carrillo-Perez: Department of Computer Architecture and Technology, University of Granada, C.I.T.I.C., Periodista Rafael Gómez Montero 2, 18014, Granada, Spain
- Marta Verona-Almeida: Department of Computer Architecture and Technology, University of Granada, C.I.T.I.C., Periodista Rafael Gómez Montero 2, 18014, Granada, Spain
- Daniel Redondo-Sánchez: Instituto de Investigación Biosanitaria de Granada, Non-Communicable Disease and Cancer Epidemiology Group, ibs.GRANADA, Avda. de Madrid 15, Pabellón de Consultas Externas 2, 2a Planta, CP 18012, Granada, Spain
- Francisco Manuel Ortuno: Clinical Bioinformatics Area, Fundación Andaluza Progreso y Salud (FPS), Hospital Universitario Virgen del Rocío, Avenida Manuel Siurot s/n, 41013, Sevilla, Spain
- Luis Javier Herrera: Department of Computer Architecture and Technology, University of Granada, C.I.T.I.C., Periodista Rafael Gómez Montero 2, 18014, Granada, Spain
- Ignacio Rojas: Department of Computer Architecture and Technology, University of Granada, C.I.T.I.C., Periodista Rafael Gómez Montero 2, 18014, Granada, Spain
28
Farouq MW, Boulila W, Hussain Z, Rashid A, Shah M, Hussain S, Ng N, Ng D, Hanif H, Shaikh MG, Sheikh A, Hussain A. A Novel Coupled Reaction-Diffusion System for Explainable Gene Expression Profiling. SENSORS (BASEL, SWITZERLAND) 2021; 21:2190. [PMID: 33801002 PMCID: PMC8003942 DOI: 10.3390/s21062190] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/20/2021] [Revised: 03/06/2021] [Accepted: 03/08/2021] [Indexed: 12/20/2022]
Abstract
Machine learning (ML)-based algorithms are playing an important role in cancer diagnosis and are increasingly being used to aid clinical decision-making. However, these commonly operate as 'black boxes' and it is unclear how decisions are derived. Recently, techniques have been applied to help us understand how specific ML models work and to explain the rationale for their outputs. This study aims to determine why a given type of cancer has a certain phenotypic characteristic. Cancer results in cellular dysregulation, and a thorough consideration of cancer regulators is required. This would increase our understanding of the nature of the disease and help discover more effective diagnostic, prognostic, and treatment methods for a variety of cancer types and stages. Our study proposes a novel explainable analysis of potential biomarkers denoting tumorigenesis in non-small cell lung cancer. A number of these biomarkers are known to appear following various treatment pathways. An enhanced analysis is enabled through a novel mathematical formulation for the regulators of mRNA, the regulators of ncRNA, and the coupled mRNA-ncRNA regulators. Temporal gene expression profiles are approximated in a two-dimensional spatial domain for the transition states before converging to the stationary state, using a system of coupled reaction-diffusion partial differential equations. Simulation experiments demonstrate that the proposed mathematical gene-expression profile provides the best fit for the population abundance of these oncogenes. In future, our proposed solution could lead to the development of alternative interpretable approaches, through the application of ML models to discover unknown dynamics in gene regulatory systems.
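The abstract does not reproduce the system's equations, but a generic coupled reaction-diffusion model for two interacting species (say, an mRNA regulator u and an ncRNA regulator v) takes the following schematic form; the specific reaction terms f and g used by the authors are not given here, so this LaTeX block is an assumption-laden sketch rather than the paper's formulation.

% u, v: spatio-temporal expression profiles; D_u, D_v: diffusion coefficients;
% f, g: coupled reaction (regulation) terms whose exact form is model-specific.
\begin{align}
  \frac{\partial u}{\partial t} &= D_u \nabla^{2} u + f(u, v), \\
  \frac{\partial v}{\partial t} &= D_v \nabla^{2} v + g(u, v).
\end{align}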
Affiliation(s)
- Muhamed Wael Farouq: Department of Statistics, Mathematics and Insurance, University of Ain Shams, Cairo 11566, Egypt; School of Computing, Edinburgh Napier University, Edinburgh EH11 4BN, UK
- Wadii Boulila: RIADI Laboratory, National School of Computer Sciences, University of Manouba, Manouba 2010, Tunisia; IS Department, College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- Zain Hussain: College of Medicine and Veterinary Medicine, University of Edinburgh, Edinburgh EH8 9YL, UK
- Moiz Shah: NHS Greater Glasgow and Clyde, Glasgow G12 0XH, UK
- Sajid Hussain: Albany Gastroenterology Consultants, Albany, NY 12206, USA
- Nathan Ng: College of Medicine and Veterinary Medicine, University of Edinburgh, Edinburgh EH8 9YL, UK
- Dominic Ng: Faculty of Biology, Medicine and Health, University of Manchester, Manchester M13 9PL, UK
- Haris Hanif: Faculty of Biology, Medicine and Health, University of Manchester, Manchester M13 9PL, UK
- Aziz Sheikh: College of Medicine and Veterinary Medicine, University of Edinburgh, Edinburgh EH8 9YL, UK
- Amir Hussain: School of Computing, Edinburgh Napier University, Edinburgh EH11 4BN, UK
29
Nguyen HG, Blank A, Dawson HE, Lugli A, Zlobec I. Classification of colorectal tissue images from high throughput tissue microarrays by ensemble deep learning methods. Sci Rep 2021; 11:2371. [PMID: 33504830 PMCID: PMC7840737 DOI: 10.1038/s41598-021-81352-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Accepted: 01/05/2021] [Indexed: 12/13/2022] Open
Abstract
Tissue microarray (TMA) core images are a treasure trove for artificial intelligence applications. However, a common problem of TMAs is multiple sectioning, which can change the content of the intended tissue core and requires re-labelling. Here, we investigate different ensemble methods for colorectal tissue classification using high-throughput TMAs. Hematoxylin and Eosin (H&E) core images of 0.6 mm or 1.0 mm diameter from three international cohorts were extracted from 54 digital slides (n = 15,150 cores). After TMA core extraction and color enhancement, five different flows of independent and ensemble deep learning were applied. Training and testing data with 2144 and 13,006 cores covered three classes: tumor, normal or "other" tissue. Ground-truth data were collected from 30 ngTMA slides (n = 8689 cores). Test-time augmentation is applied to reduce prediction uncertainty. The predictive accuracy of the best method, a soft-voting ensemble of one VGG and one CapsNet model, was 0.982, 0.947 and 0.939 for normal, "other" and tumor tissue, respectively, outperforming both independent learning and ensembles built on a single base estimator. Our high-accuracy algorithm for colorectal tissue classification in high-throughput TMAs copes with images from different institutions, core sizes and stain intensities. It helps to reduce errors in the evaluation of TMA cores with previously assigned labels.
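Soft voting, the ensembling rule behind the best-performing model in this abstract, simply averages the per-class probabilities of the member networks before taking the argmax. The short Python sketch below illustrates the rule with made-up probability arrays; it is not the authors' code, and the VGG/CapsNet outputs shown are placeholders.

import numpy as np

def soft_vote(prob_list):
    """Average class probabilities across models, then pick the argmax class."""
    mean_probs = np.mean(np.stack(prob_list, axis=0), axis=0)  # (n_samples, n_classes)
    return mean_probs.argmax(axis=1)

if __name__ == "__main__":
    vgg_probs = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])      # placeholder outputs
    capsnet_probs = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])  # placeholder outputs
    print(soft_vote([vgg_probs, capsnet_probs]))  # [0 1]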
Affiliation(s)
- Huu-Giao Nguyen: Institute of Pathology, University of Bern, Murtenstrasse 31, 3008, Bern, Switzerland
- Annika Blank: Institute of Pathology, University of Bern, Murtenstrasse 31, 3008, Bern, Switzerland; Institute of Pathology, Triemli City Hospital, Birmensdorferstrasse 497, 8063, Zurich, Switzerland
- Heather E Dawson: Institute of Pathology, University of Bern, Murtenstrasse 31, 3008, Bern, Switzerland
- Alessandro Lugli: Institute of Pathology, University of Bern, Murtenstrasse 31, 3008, Bern, Switzerland
- Inti Zlobec: Institute of Pathology, University of Bern, Murtenstrasse 31, 3008, Bern, Switzerland
30
Debelee TG, Kebede SR, Schwenker F, Shewarega ZM. Deep Learning in Selected Cancers' Image Analysis-A Survey. J Imaging 2020; 6:121. [PMID: 34460565 PMCID: PMC8321208 DOI: 10.3390/jimaging6110121] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 10/19/2020] [Accepted: 10/26/2020] [Indexed: 02/08/2023] Open
Abstract
Deep learning algorithms have become the first-choice approach to medical image analysis, face recognition, and emotion recognition. In this survey, several deep-learning-based approaches applied to breast cancer, cervical cancer, brain tumor, and colon and lung cancers are studied and reviewed. Deep learning has been applied to almost all of the imaging modalities used for cervical and breast cancers, and to MRI for brain tumors. The review indicates that deep learning methods have achieved state-of-the-art performance in tumor detection, segmentation, feature extraction and classification. As presented in this paper, the deep learning approaches were used in three different modes: training from scratch, transfer learning through freezing some layers of the network, and modifying the architecture to reduce the number of parameters in the network. Moreover, the application of deep learning to imaging devices for the detection of various cancers has mainly been studied by researchers affiliated with academic and medical institutes in economically developed countries, while such work has received far less attention in Africa despite the dramatic rise in cancer risk across the continent.
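One of the three usage modes mentioned above, transfer learning by freezing early layers, can be summarised in a few lines of PyTorch. The sketch below is illustrative only: the ResNet-18 backbone, the untrained weights and the class count are assumptions, not a specific model from the surveyed papers.

import torch.nn as nn
import torchvision.models as models

def frozen_backbone_classifier(num_classes=2):
    """Freeze the feature extractor and attach a new trainable classification head."""
    model = models.resnet18()            # in practice, a pretrained backbone
    for param in model.parameters():
        param.requires_grad = False      # freeze all backbone layers
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head
    return model

if __name__ == "__main__":
    clf = frozen_backbone_classifier(num_classes=4)
    trainable = sum(p.numel() for p in clf.parameters() if p.requires_grad)
    print("trainable parameters:", trainable)  # only the new head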
Affiliation(s)
- Taye Girma Debelee: Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia; College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, 120611 Addis Ababa, Ethiopia
- Samuel Rahimeto Kebede: Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia; Department of Electrical and Computer Engineering, Debreberhan University, 445 Debre Berhan, Ethiopia
- Friedhelm Schwenker: Institute of Neural Information Processing, University of Ulm, 89081 Ulm, Germany