1
Kondamuri SR, Thadikemalla VSG, Suryanarayana G, Karthik C, Reddy VS, Sahithi VB, Anitha Y, Yogitha V, Valli PR. Chest CT Image based Lung Disease Classification - A Review. Curr Med Imaging 2024; 20:1-14. [PMID: 38389342] [DOI: 10.2174/0115734056248176230923143105]
Abstract
Computed tomography (CT) scans are widely used to diagnose lung conditions because they provide a detailed view of the respiratory system. Despite this popularity, visual examination of CT images can lead to misinterpretations that delay diagnosis, and automating image-based disease detection poses challenges of its own. As a result, there is significant demand for more advanced systems that can accurately classify lung diseases from CT scan images. In this work, we provide an extensive analysis of different approaches and their performance that can help young researchers build more advanced systems. First, we briefly introduce diagnosis and treatment procedures for various lung diseases, followed by a brief description of existing methods for classifying them. We then outline the general procedure for lung disease classification using machine learning (ML) and review recent progress in ML-based classification. Finally, we present the open challenges of ML techniques. We conclude that deep learning techniques have revolutionized the early identification of lung disorders, and we expect this work to equip medical professionals with the awareness they need to recognize and classify these disorders.
Affiliation(s)
- Shri Ramtej Kondamuri
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- Gunnam Suryanarayana
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- Chandran Karthik
- Department of Robotics and Automation, Jyothi Engineering College, Thrissur, Kerala 679531, India
- Vanga Siva Reddy
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- V Bhuvana Sahithi
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- Y Anitha
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- V Yogitha
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- P Reshma Valli
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
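The general ML pipeline this review surveys — extract features from an image, then classify — can be sketched minimally. The histogram features and nearest-centroid classifier below are illustrative stand-ins chosen here, not methods taken from the review:

```python
# Minimal illustrative pipeline: intensity-histogram features plus a
# nearest-centroid classifier. Images are plain 2D lists of pixel values.

def histogram_features(image, bins=4, max_val=256):
    """Flatten an image into a normalized intensity histogram."""
    counts = [0] * bins
    pixels = [p for row in image for p in row]
    for p in pixels:
        counts[min(p * bins // max_val, bins - 1)] += 1
    return [c / len(pixels) for c in counts]

def nearest_centroid_fit(features, labels):
    """Average the feature vectors of each class into one centroid."""
    grouped = {}
    for f, y in zip(features, labels):
        grouped.setdefault(y, []).append(f)
    return {y: [sum(col) / len(fs) for col in zip(*fs)]
            for y, fs in grouped.items()}

def nearest_centroid_predict(centroids, f):
    """Assign the label of the closest centroid (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda y: dist(centroids[y], f))
```

Real systems of the kind the review covers replace both stages with learned deep features and classifiers, but the train-then-predict shape is the same.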
2
Plass M, Kargl M, Kiehl TR, Regitnig P, Geißler C, Evans T, Zerbe N, Carvalho R, Holzinger A, Müller H. Explainability and causability in digital pathology. J Pathol Clin Res 2023. [PMID: 37045794] [DOI: 10.1002/cjp2.322]
Abstract
The current move towards digital pathology enables pathologists to use artificial intelligence (AI)-based computer programmes for the advanced analysis of whole slide images. However, the best-performing AI algorithms for image analysis are currently deemed black boxes, since it often remains unclear - even to their developers - why the algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to provide medical experts with insights into the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning nurtures the reader's understanding of why explainability is a specific issue in this field. To address it, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods that make black-box machine-learning systems more transparent. These XAI methods are a first step towards making black-box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and to achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field, since explainability and causability also play a crucial role in compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology that enable contextual understanding and allow the medical expert to ask interactive 'what-if' questions. In pathology, such user interfaces will not only be important for achieving a high level of causability; they will also be crucial for keeping the human in the loop and bringing medical experts' experience and conceptual knowledge into AI processes.
Affiliation(s)
- Markus Plass
- Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria
- Michaela Kargl
- Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria
- Tim-Rasmus Kiehl
- Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Pathology, Berlin, Germany
- Peter Regitnig
- Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria
- Christian Geißler
- DAI-Labor, Agent Oriented Technologies (AOT), Technische Universität Berlin, Berlin, Germany
- Theodore Evans
- DAI-Labor, Agent Oriented Technologies (AOT), Technische Universität Berlin, Berlin, Germany
- Norman Zerbe
- Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Pathology, Berlin, Germany
- Rita Carvalho
- Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Pathology, Berlin, Germany
- Andreas Holzinger
- Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria
- Human-Centered AI Lab, University of Natural Resources and Life Sciences Vienna, Vienna, Austria
- Heimo Müller
- Diagnostic and Research Institute of Pathology, Medical University of Graz, Graz, Austria
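One common family of XAI techniques of the kind this review discusses is perturbation-based: occlude part of the input and measure how much the model's output drops. The sketch below is a generic occlusion-sensitivity map with a toy mean-intensity "model" so it is self-contained; it is not the authors' method:

```python
# Occlusion sensitivity: relevance of a region = score drop when that
# region is replaced by a baseline value. Works with any scoring function.

def occlusion_map(model, image, patch=1, baseline=0):
    """Per-pixel relevance map for a model over a 2D list image."""
    h, w = len(image), len(image[0])
    base_score = model(image)
    relevance = [[0.0] * w for _ in range(h)]
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = [row[:] for row in image]          # copy the image
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    occluded[di][dj] = baseline           # blank one patch
            drop = base_score - model(occluded)
            for di in range(i, min(i + patch, h)):
                for dj in range(j, min(j + patch, w)):
                    relevance[di][dj] = drop
    return relevance

def toy_model(image):
    """Stand-in scorer: mean pixel intensity (a real model would be a CNN)."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)
```

The resulting map is exactly the kind of raw explanation that, per the review's argument, still needs an explanation interface before it yields causability for the pathologist.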
3
Nabaei M. Cerebral aneurysm evolution modeling from microstructural computational models to machine learning: A review. Comput Biol Chem 2022; 98:107676. [DOI: 10.1016/j.compbiolchem.2022.107676]
4
Hudec M, Mináriková E, Mesiar R, Saranti A, Holzinger A. Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.106916]
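The ordinal-sum construction named in this title has a standard form: each subinterval of [0, 1] carries its own conjunctive function (a t-norm) acting on rescaled arguments, and arguments falling outside every subinterval combine by min. A minimal sketch of that textbook definition follows; the authors' specific mixing of conjunctive and disjunctive functions is not reproduced here:

```python
# Ordinal sum of t-norms: on each non-degenerate subinterval [a, b] of
# [0, 1] a local t-norm acts on linearly rescaled arguments; elsewhere
# the ordinal sum behaves as min (the Gödel t-norm).

def product_tnorm(x, y):
    return x * y

def ordinal_sum(intervals):
    """intervals: list of (a, b, tnorm) with disjoint [a, b] and a < b."""
    def t(x, y):
        for a, b, tn in intervals:
            if a <= x <= b and a <= y <= b:
                # rescale into [0, 1], apply the local t-norm, rescale back
                return a + (b - a) * tn((x - a) / (b - a), (y - a) / (b - a))
        return min(x, y)
    return t
```

For example, with the product t-norm on [0, 0.5], arguments 0.25 and 0.25 combine to 0.125, while 0.25 and 0.75 fall outside a common subinterval and simply give min = 0.25.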
5
Chen W, Wang X, Duan H, Zhang X, Dong T, Nie S. [Application of deep learning in cancer prognosis prediction model]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2020; 37:918-929. [PMID: 33140618] [PMCID: PMC10320539] [DOI: 10.7507/1001-5515.201909066]
Abstract
In recent years, deep learning has provided new methods for cancer prognosis analysis. This paper systematically reviews the latest research on deep learning models for cancer prognosis and analyzes the strengths and weaknesses of the relevant methods. First, the construction approach and performance evaluation indices of deep learning cancer prognosis models are clarified. Second, the basic network structures are introduced, and the data types, data volumes, and specific architectures, together with their merits and demerits, are discussed. The mainstream approach to building deep learning cancer prognosis models is then validated and the experimental results are analyzed. Finally, the challenges and future research directions in this field are summarized. Compared with previous models, deep learning models can better predict the prognosis of cancer patients. Future work should continue to explore deep learning for cancer recurrence rates, treatment planning, and drug efficacy evaluation, so as to fully exploit its value and potential in cancer prognosis modeling, establish efficient and accurate prognosis models, and advance the goal of precision medicine.
Affiliation(s)
- Wen Chen
- Institute of Medical Imaging, University of Shanghai for Science and Technology, Shanghai 200093, P.R. China
- Xu Wang
- Institute of Medical Imaging, University of Shanghai for Science and Technology, Shanghai 200093, P.R. China
- Huihong Duan
- Institute of Medical Imaging, University of Shanghai for Science and Technology, Shanghai 200093, P.R. China
- Xiaobing Zhang
- Institute of Medical Imaging, University of Shanghai for Science and Technology, Shanghai 200093, P.R. China
- Ting Dong
- Institute of Medical Imaging, University of Shanghai for Science and Technology, Shanghai 200093, P.R. China
- Shengdong Nie
- Institute of Medical Imaging, University of Shanghai for Science and Technology, Shanghai 200093, P.R. China
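Survival-prognosis models of the kind this review covers are commonly evaluated with Harrell's concordance index (C-index); the abstract mentions performance evaluation indices without prescribing one, so the C-index below is only a representative example:

```python
# Harrell's C-index: a pair (i, j) is comparable when patient i has an
# observed event strictly before time j; it is concordant when the model
# assigns patient i the higher risk. Ties in risk count half.

def concordance_index(times, events, risks):
    """times: observed times; events: 1 = event, 0 = censored; risks: scores."""
    concordant, ties, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable
```

A perfectly anti-ranked model scores 0, random scoring tends to 0.5, and a perfect ranking scores 1, which is what makes the index convenient for comparing prognosis models across studies.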
6
Azuaje F. Artificial intelligence for precision oncology: beyond patient stratification. NPJ Precis Oncol 2019; 3:6. [PMID: 30820462] [PMCID: PMC6389974] [DOI: 10.1038/s41698-019-0078-1]
Abstract
The data-driven identification of disease states and treatment options is a crucial challenge for precision oncology. Artificial intelligence (AI) offers unique opportunities for enhancing such predictive capabilities in the lab and the clinic. AI, including its best-known branch of research, machine learning, has significant potential to enable precision oncology well beyond relatively well-known pattern recognition applications, such as the supervised classification of single-source omics or imaging datasets. This perspective highlights key advances and challenges in that direction. Furthermore, it argues that AI's scope and depth of research need to be expanded to achieve ground-breaking progress in precision oncology.
Affiliation(s)
- Francisco Azuaje
- Bioinformatics and Modelling Research Group, Department of Oncology, Luxembourg Institute of Health (LIH), L-1445 Strassen, Luxembourg
- Present Address: Computational Biomedicine Research Group, Center for Quantitative Biology, Luxembourg Institute of Health (LIH), L-1445 Strassen, Luxembourg
7
8
Tschandl P, Argenziano G, Razmara M, Yap J. Diagnostic accuracy of content-based dermatoscopic image retrieval with deep classification features. Br J Dermatol 2018; 181:155-165. [PMID: 30207594] [PMCID: PMC7379719] [DOI: 10.1111/bjd.17189]
Abstract
BACKGROUND: Automated classification of medical images through neural networks can reach high accuracy rates but lacks interpretability.
OBJECTIVES: To compare the diagnostic accuracy obtained by using content-based image retrieval (CBIR) to retrieve visually similar dermatoscopic images with corresponding disease labels against predictions made by a neural network.
METHODS: A neural network was trained to predict disease classes on dermatoscopic images from three retrospectively collected image datasets containing 888, 2750 and 16,691 images, respectively. Diagnosis predictions were made based on the most commonly occurring diagnosis in visually similar images, or based on the top-1 class prediction of the softmax output from the network. Outcome measures were area under the receiver operating characteristic curve (AUC) for predicting a malignant lesion, multiclass accuracy and mean average precision (mAP), measured on unseen test images of the corresponding dataset.
RESULTS: In all three datasets the skin cancer predictions from CBIR (evaluating the 16 most similar images) showed AUC values similar to softmax predictions (0.842, 0.806 and 0.852 vs. 0.830, 0.810 and 0.847, respectively; P > 0.99 for all). Similarly, the multiclass accuracy of CBIR was comparable with softmax predictions. Compared with softmax predictions, networks trained for detecting only three classes performed better on a dataset with eight classes when using CBIR (mAP 0.184 vs. 0.368 and 0.198 vs. 0.403, respectively).
CONCLUSIONS: Presenting visually similar images based on features from a neural network shows comparable accuracy with the softmax probability-based diagnoses of convolutional neural networks. CBIR may be more helpful than a softmax classifier in improving diagnostic accuracy of clinicians in a routine clinical setting.
Affiliation(s)
- P Tschandl
- School of Computing Science, Simon Fraser University, Burnaby, Canada; Department of Dermatology, Medical University of Vienna, Vienna, Austria
- G Argenziano
- Department of Dermatology, University of Campania, Naples, Italy
- M Razmara
- MetaOptima Technology Inc., Vancouver, BC, Canada
- J Yap
- MetaOptima Technology Inc., Vancouver, BC, Canada
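The CBIR decision rule described in the abstract — retrieve the k most similar images in feature space and take the most common diagnosis among them — can be sketched as follows. The toy vectors here stand in for the paper's CNN features:

```python
# CBIR diagnosis by majority label among the k nearest gallery images,
# ranked by cosine similarity in feature space.

from collections import Counter

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

def cbir_diagnose(query, gallery, labels, k=16):
    """Majority diagnosis among the k images most similar to the query."""
    ranked = sorted(range(len(gallery)),
                    key=lambda i: cosine_similarity(query, gallery[i]),
                    reverse=True)
    top_labels = [labels[i] for i in ranked[:k]]
    return Counter(top_labels).most_common(1)[0][0]
```

With k = 16, as evaluated in the paper, and real CNN embeddings in place of the toy vectors, this rule produced AUCs comparable to the softmax classifier — while also letting the clinician inspect the retrieved neighbours.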
9
O'Sullivan S, Holzinger A, Wichmann D, Saldiva PHN, Sajid MI, Zatloukal K. Virtual autopsy: Machine Learning and AI provide new opportunities for investigating minimal tumor burden and therapy resistance by cancer patients. Autops Case Rep 2018. [PMID: 29515978] [PMCID: PMC5828285] [DOI: 10.4322/acr.2018.003]
Affiliation(s)
- Shane O'Sullivan
- University of São Paulo, Faculty of Medicine, Department of Pathology, São Paulo, SP, Brazil
- Andreas Holzinger
- Medical University of Graz, Institute for Medical Informatics/Statistics, Holzinger Group, Graz, Austria
- Dominic Wichmann
- University Hospital Hamburg Eppendorf, Department of Intensive Care, Hamburg, Germany
- Mohammed Imran Sajid
- Wirral University Teaching Hospital, Department of Upper GI Surgery, United Kingdom
- Kurt Zatloukal
- Medical University of Graz, Institute of Pathology, Graz, Austria
10
Masino AJ, Grundmeier RW, Pennington JW, Germiller JA, Crenshaw EB. Temporal bone radiology report classification using open source machine learning and natural langue processing libraries. BMC Med Inform Decis Mak 2016; 16:65. [PMID: 27267768] [PMCID: PMC4896018] [DOI: 10.1186/s12911-016-0306-3]
Abstract
Background: Radiology reports are a rich resource for biomedical research. Prior to utilization, trained experts must manually review reports to identify discrete outcomes. The Audiological and Genetic Database (AudGenDB) is a public, de-identified research database that contains over 16,000 radiology reports. Because the reports are unlabeled, it is difficult to select those with specific abnormalities. We implemented a classification pipeline using a human-in-the-loop machine learning approach and open source libraries to label the reports with one or more of four abnormality region labels: inner, middle, outer, and mastoid, indicating the presence of an abnormality in the specified ear region.
Methods: Trained abstractors labeled radiology reports taken from AudGenDB to form a gold standard. These were split into training (80%) and test (20%) sets. We applied open source libraries to normalize and convert every report to an n-gram feature vector. We trained logistic regression, support vector machine (linear and Gaussian), decision tree, random forest, and naïve Bayes models for each ear region. The models were evaluated on the hold-out test set.
Results: Our gold-standard data set contained 726 reports. The best classifiers were linear support vector machine for inner and outer ear, logistic regression for middle ear, and decision tree for mastoid. Classifier test set accuracy was 90%, 90%, 93%, and 82% for the inner, middle, outer and mastoid regions, respectively. The logistic regression method was very consistent, achieving accuracy scores within 2.75% of the best classifier across regions and a receiver operating characteristic area under the curve of 0.92 or greater across all regions.
Conclusions: Our results indicate that the applied methods achieve accuracy scores sufficient to support our objective of extracting discrete features from radiology reports to enhance cohort identification in AudGenDB. The models described here are available in several free, open source libraries that make them more accessible and simplify their utilization as demonstrated in this work. We additionally implemented the models as a web service that accepts radiology report text in an HTTP request and provides the predicted region labels. This service has been used to label the reports in AudGenDB and is freely available.
Electronic supplementary material: The online version of this article (doi:10.1186/s12911-016-0306-3) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Aaron J Masino
- Department of Biomedical and Health Informatics, The Children's Hospital of Philadelphia, 3535 Market Street, Suite 1024, Philadelphia, PA, 19104, USA
- Robert W Grundmeier
- Department of Biomedical and Health Informatics, The Children's Hospital of Philadelphia, 3535 Market Street, Suite 1024, Philadelphia, PA, 19104, USA; Department of Pediatrics, Perelman School of Medicine at the University of Pennsylvania, 34th Street & Civic Center Boulevard, Philadelphia, PA, 19104, USA
- Jeffrey W Pennington
- Department of Biomedical and Health Informatics, The Children's Hospital of Philadelphia, 3535 Market Street, Suite 1024, Philadelphia, PA, 19104, USA
- John A Germiller
- Center for Childhood Communication, The Children's Hospital of Philadelphia, 34th Street & Civic Center Boulevard, Philadelphia, PA, 19104, USA; Department of Otorhinolaryngology: Head and Neck Surgery, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA, 19104, USA
- E Bryan Crenshaw
- Center for Childhood Communication, The Children's Hospital of Philadelphia, 34th Street & Civic Center Boulevard, Philadelphia, PA, 19104, USA; Department of Otorhinolaryngology: Head and Neck Surgery, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA, 19104, USA
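The featurization step described in the methods — normalize each report, then convert it to an n-gram count vector — can be sketched as follows. The hand-rolled normalization (lowercasing, stripping punctuation) is an assumption for illustration; the paper used existing open source libraries rather than this version:

```python
# Convert free-text report snippets into n-gram count vectors over a
# fixed vocabulary, the input representation for the paper's classifiers.

import re

def normalize(text):
    """Lowercase and keep only alphanumerics, then tokenize on whitespace."""
    return re.sub(r"[^a-z0-9 ]", " ", text.lower()).split()

def ngrams(tokens, n):
    """All contiguous n-token sequences, joined into single strings."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def featurize(text, vocabulary, max_n=2):
    """Count vector of uni- through max_n-grams over a fixed vocabulary."""
    tokens = normalize(text)
    grams = []
    for n in range(1, max_n + 1):
        grams.extend(ngrams(tokens, n))
    return [grams.count(term) for term in vocabulary]
```

Vectors like these feed directly into the logistic regression, SVM, and tree models the paper compares; only the vocabulary and the downstream classifier change.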
11
Havaei M, Guizard N, Larochelle H, Jodoin PM. Deep Learning Trends for Focal Brain Pathology Segmentation in MRI. Lecture Notes in Computer Science 2016. [DOI: 10.1007/978-3-319-50478-0_6]