1. Kaur J, Kaur P. A systematic literature analysis of multi-organ cancer diagnosis using deep learning techniques. Comput Biol Med 2024; 179:108910. PMID: 39032244. DOI: 10.1016/j.compbiomed.2024.108910.
Abstract
Cancer has become one of the deadliest diseases worldwide, and its rising mortality rate has driven rapid progress in diagnostic technologies. Manual segmentation and classification across large sets of imaging modalities is challenging, so there is a pressing need for computer-assisted diagnostic systems for early cancer identification. This article offers a systematic review of deep learning approaches applied to various image modalities for detecting multi-organ cancers from 2012 to 2023, focusing on the five most prevalent tumor sites: breast, brain, lung, skin, and liver. Research articles, conference papers, and book chapters meeting the quality-evaluation criteria were collected from reputed international databases (Springer Link, IEEE Xplore, Science Direct, PubMed, and Wiley). The review summarizes the convolutional neural network architectures and datasets used for identifying and classifying diverse categories of cancer, with particular attention to ensemble deep learning models that have achieved strong results in classifying images as cancerous or healthy. It aims to give researchers in medical imaging a broad understanding of which deep learning techniques perform best on which types of dataset, how features are extracted, and the principal challenges and their anticipated solutions. Finally, open challenges and issues affecting this health emergency are discussed.
Affiliation(s)
- Jaspreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India.
- Prabhpreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India.
2. Chen JX, Shen YC, Peng SL, Chen YW, Fang HY, Lan JL, Shih CT. Pattern classification of interstitial lung diseases from computed tomography images using a ResNet-based network with a split-transform-merge strategy and split attention. Phys Eng Sci Med 2024; 47:755-767. PMID: 38436886. DOI: 10.1007/s13246-024-01404-1.
Abstract
In patients with interstitial lung disease (ILD), accurate pattern assessment from computed tomography (CT) images can help track lung abnormalities and evaluate treatment efficacy. Owing to their excellent image classification performance, convolutional neural networks (CNNs) have been widely investigated for classifying and labeling pathological patterns in the CT images of ILD patients. However, previous studies rarely considered the three-dimensional (3D) structure of the pathological patterns of ILD and used two-dimensional network input. In addition, ResNet-based networks with high classification performance, such as SE-ResNet and ResNeXt, have not been used for pattern classification of ILD. This study proposed SE-ResNeXt-SA-18 for classifying pathological patterns of ILD. The SE-ResNeXt-SA-18 integrates the multipath design of ResNeXt and the feature weighting of the squeeze-and-excitation network with split attention. Its classification performance was compared with ResNet-18 and SE-ResNeXt-18, and the influence of input patch size on classification performance was also evaluated. Results show that classification accuracy increased with patch size. With a 32 × 32 × 16 input, SE-ResNeXt-SA-18 achieved the highest performance, with average accuracy, sensitivity, and specificity of 0.991, 0.979, and 0.994. High-weight regions in its class activation maps also matched the specific pattern features. In comparison, the performance of SE-ResNeXt-SA-18 is superior to previously reported CNNs for classifying ILD patterns. We conclude that SE-ResNeXt-SA-18 could help track or monitor the progression of ILD through accurate pattern classification.
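The network described above combines ResNeXt's split-transform-merge convolutions with squeeze-and-excitation (SE) channel gating. A minimal NumPy sketch of the SE gating step follows; the weight matrices `w1`/`w2` and the reduction ratio are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def squeeze_excite(feature_map, w1, w2):
    """Squeeze-and-excitation channel reweighting.

    feature_map: (C, H, W) array; w1: (C//r, C) and w2: (C, C//r)
    are the bottleneck weights for reduction ratio r.
    """
    # Squeeze: global average pool each channel to one scalar
    z = feature_map.mean(axis=(1, 2))              # (C,)
    # Excite: bottleneck MLP, ReLU then sigmoid gate per channel
    s = np.maximum(w1 @ z, 0.0)                    # (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))         # (C,)
    # Rescale each channel by its learned gate
    return feature_map * gate[:, None, None]
```

With zero excitation weights the gate is sigmoid(0) = 0.5 for every channel, so the block halves each channel, which makes the mechanism easy to verify by hand.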
Affiliation(s)
- Jian-Xun Chen
- Department of Thoracic Surgery, China Medical University Hospital, Taichung, Taiwan
- Yu-Cheng Shen
- Department of Thoracic Surgery, China Medical University Hospital, Taichung, Taiwan
- Shin-Lei Peng
- Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan
- Yi-Wen Chen
- x-Dimension Center for Medical Research and Translation, China Medical University Hospital, Taichung, Taiwan
- Department of Bioinformatics and Medical Engineering, Asia University, Taichung, Taiwan
- Graduate Institute of Biomedical Sciences, China Medical University, Taichung, Taiwan
- Hsin-Yuan Fang
- x-Dimension Center for Medical Research and Translation, China Medical University Hospital, Taichung, Taiwan
- School of Medicine, China Medical University, Taichung, Taiwan
- Joung-Liang Lan
- School of Medicine, China Medical University, Taichung, Taiwan
- Rheumatology and Immunology Center, China Medical University Hospital, Taichung, Taiwan
- Cheng-Ting Shih
- Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan
- x-Dimension Center for Medical Research and Translation, China Medical University Hospital, Taichung, Taiwan
3. Alshamrani K, Alshamrani HA. Classification of Chest CT Lung Nodules Using Collaborative Deep Learning Model. J Multidiscip Healthc 2024; 17:1459-1472. PMID: 38596001. PMCID: PMC11002784. DOI: 10.2147/jmdh.s456167.
Abstract
Background: Early detection of lung cancer through accurate diagnosis of malignant lung nodules on chest CT scans offers patients the highest chance of successful treatment and survival. Despite advances in computer vision through deep learning algorithms, detection of malignant nodules remains challenging when training datasets are insufficient. Methods: This study introduces a collaborative deep learning (CDL) model to differentiate cancerous from non-cancerous nodules in chest CT scans with limited available data. The model decomposes a nodule into six characteristic feature patches, allowing it to learn detailed features of lung nodules. Each type of feature patch feeds a CDL submodel that fine-tunes a network pre-trained with ResNet-50, and an adaptive weighting method learned through error backpropagation combines these submodels for improved accuracy. Results: The CDL model classified lung nodules with an accuracy of 93.24%, a significant improvement over current state-of-the-art methods, indicating the effectiveness of the proposed approach. Conclusion: The CDL model, with its unique structure and adaptive weighting method, offers a promising solution to the challenge of accurately detecting malignant lung nodules with limited data. This approach not only improves diagnostic accuracy but also contributes to the early detection and treatment of lung cancer, potentially saving lives.
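The adaptive weighting described in this abstract amounts to fusing the six feature-patch submodels with learned weights. A hedged sketch using softmax-normalized weights; in the paper the weights are learned by error backpropagation, and the values here are illustrative only:

```python
import numpy as np

def fuse_submodels(probs, weight_logits):
    """Combine malignancy probabilities from six feature-patch submodels.

    probs: (6,) per-submodel probabilities.
    weight_logits: (6,) unnormalized fusion parameters (learned via
    backpropagation in the paper; arbitrary here).
    """
    w = np.exp(weight_logits - np.max(weight_logits))
    w /= w.sum()                    # softmax-normalized submodel weights
    return float(np.dot(w, probs))  # weighted ensemble probability
```

With equal logits the fusion reduces to a plain average; a dominant logit lets one submodel drive the decision, which is the behavior adaptive weighting is meant to learn.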
Affiliation(s)
- Khalaf Alshamrani
- Radiological Sciences Department, Najran University, Najran, Saudi Arabia
- Department of Oncology and Metabolism, University of Sheffield, Sheffield, UK
4. Chang M, Reicher JJ, Kalra A, Muelly M, Ahmad Y. Analysis of Validation Performance of a Machine Learning Classifier in Interstitial Lung Disease Cases Without Definite or Probable Usual Interstitial Pneumonia Pattern on CT Using Clinical and Pathology-Supported Diagnostic Labels. J Imaging Inform Med 2024; 37:297-307. PMID: 38343230. DOI: 10.1007/s10278-023-00914-w.
Abstract
We previously validated Fibresolve, a machine learning classifier system that non-invasively predicts idiopathic pulmonary fibrosis (IPF) diagnosis. The system incorporates an automated deep learning algorithm that analyzes chest computed tomography (CT) imaging for features associated with IPF. Here, we assess its performance on patterns beyond the characteristic features of the usual interstitial pneumonia (UIP) pattern. The classifier was previously developed and validated using standard training, validation, and test sets, with clinical plus pathologically determined ground truth. The multi-site 295-patient validation dataset was used for focused subgroup analysis in this investigation to evaluate the classifier's performance range in cases with and without radiologic UIP and probable UIP designations. Radiologic pattern was assigned from the presence and distribution of specific UIP features: reticulation, ground glass, bronchiectasis, and honeycombing. Classifier output was assessed within the various UIP subgroups. The classifier identified cases not meeting the criteria for UIP or probable UIP as IPF with an estimated sensitivity of 56-65% and an estimated specificity of 92-94%. Example cases showed non-basilar-predominant and ground-glass patterns that were indeterminate for UIP by subjective imaging criteria but that the classifier correctly identified as IPF, as confirmed by multidisciplinary discussion generally inclusive of histopathology. The machine learning classifier Fibresolve may therefore be helpful in the diagnosis of IPF in cases without radiologic UIP and probable UIP patterns.
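The sensitivity and specificity ranges reported above follow the standard confusion-matrix definitions. A small illustrative sketch for computing them from case-level labels (not part of Fibresolve itself):

```python
def sens_spec(y_true, y_pred):
    """Sensitivity and specificity from binary labels (1 = IPF, 0 = not IPF).

    Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

The high-specificity / moderate-sensitivity profile reported for Fibresolve corresponds to a classifier that rarely calls non-IPF cases positive but misses a fraction of true IPF cases.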
Affiliation(s)
- Marcello Chang
- Stanford School of Medicine, 291 Campus Drive, Stanford, CA, USA
- Yousef Ahmad
- Department of Pulmonary and Critical Care, University of Cincinnati Medical Center, Cincinnati, USA
5. Lai Y, Liu X, Hou F, Han Z, E L, Su N, Du D, Wang Z, Zheng W, Wu Y. Severity-stratification of interstitial lung disease by deep learning enabled assessment and quantification of lesion indicators from HRCT images. J Xray Sci Technol 2024; 32:323-338. PMID: 38306087. DOI: 10.3233/xst-230218.
Abstract
BACKGROUND: Interstitial lung disease (ILD) represents a group of chronic heterogeneous diseases, and current clinical assessment of ILD severity and progression relies mainly on radiologist-based visual screening, whose accuracy is limited by high inter- and intra-observer variability. OBJECTIVE: To address these problems, we propose a deep learning driven framework that assesses and quantifies lesion indicators and predicts ILD severity. METHODS: We first present a convolutional neural network that segments and quantifies five types of lesions (HC, RO, GGO, CONS, and EMPH) from HRCT of ILD patients, then conduct quantitative analysis to select ILD-related features from the segmented lesions and clinical data. Finally, a multivariate nomogram-based prediction model combining multiple typical lesions is established to predict ILD severity. RESULTS: Three lesion types (HC, RO, and GGO) accurately predicted ILD staging, independently or combined with other HRCT features. Based on HRCT, the multivariate model achieved AUC values ranging from 0.701 (RO) to 0.755 (HC) in stage I, and from 0.733 (RO) to 0.803 (HC) in stage II. Additionally, our ILD scoring model achieved an average accuracy of 0.812 (0.736-0.888) in predicting ILD severity under cross-validation. CONCLUSIONS: Our proposed method provides effective segmentation of ILD lesions through a comprehensive deep learning approach and shows potential for improving diagnostic accuracy for clinicians.
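A multivariate model of the kind described above maps quantified lesion indicators to a stage probability through a logistic combination. A minimal sketch; the coefficients and intercept below are placeholders, not the fitted nomogram:

```python
import math

def stage_probability(lesion_fractions, coefs, intercept=0.0):
    """Logistic model mapping quantified lesion fractions
    (e.g. HC, RO, GGO volume fractions) to the probability of
    a given ILD stage. Coefficients are illustrative placeholders.
    """
    z = intercept + sum(c * x for c, x in zip(coefs, lesion_fractions))
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid link
```

With zero inputs the model returns the baseline probability sigmoid(intercept); a positive coefficient times a growing lesion fraction pushes the predicted stage probability upward, which is the monotonic behavior a nomogram axis encodes.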
Affiliation(s)
- Yexin Lai
- College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Xueyu Liu
- College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Fan Hou
- College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Zhiyong Han
- College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Linning E
- Department of Radiology, People's Hospital of Longhua, Shenzhen, China
- Ningling Su
- Department of Radiology, Shanxi Bethune Hospital, Taiyuan, Shanxi, China
- Dianrong Du
- College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Zhichong Wang
- College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Wen Zheng
- College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Yongfei Wu
- College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
6. Suman G, Koo CW. Recent Advancements in Computed Tomography Assessment of Fibrotic Interstitial Lung Diseases. J Thorac Imaging 2023; 38:S7-S18. PMID: 37015833. DOI: 10.1097/rti.0000000000000705.
Abstract
Interstitial lung disease (ILD) is a heterogeneous group of disorders with complex and varied imaging manifestations and prognoses. High-resolution computed tomography (HRCT) is the current standard-of-care imaging tool for ILD assessment. However, visual evaluation of HRCT is limited by interobserver variation and poor sensitivity to subtle changes. These challenges have driven strong recent research interest in objective and reproducible methods for examining ILDs. Computer-aided CT analysis, including texture analysis and machine learning methods, has recently been shown to be a viable supplement to traditional visual assessment through improved characterization and quantification of ILDs. These quantitative tools not only correlate well with pulmonary function tests and patient outcomes but are also useful in disease diagnosis, surveillance, and management. In this review, we provide an overview of recent computer-aided tools for diagnosis, prognosis, and longitudinal evaluation of fibrotic ILDs, while outlining the pitfalls and challenges that have held back further advancement of these tools, as well as potential solutions and further endeavors.
Affiliation(s)
- Garima Suman
- Division of Thoracic Imaging, Mayo Clinic, Rochester, MN
7. Jiang X, Su N, Quan S, E L, Li R. Computed Tomography Radiomics-based Prediction Model for Gender-Age-Physiology Staging of Connective Tissue Disease-associated Interstitial Lung Disease. Acad Radiol 2023; 30:2598-2605. PMID: 36868880. DOI: 10.1016/j.acra.2023.01.038.
Abstract
PURPOSE: To analyze the feasibility of predicting gender-age-physiology (GAP) staging in patients with connective tissue disease-associated interstitial lung disease (CTD-ILD) using radiomics based on chest computed tomography (CT). MATERIALS AND METHODS: Chest CT images of 184 patients with CTD-ILD were retrospectively analyzed. GAP staging was performed on the basis of gender, age, and pulmonary function test results; GAP I, II, and III comprised 137, 36, and 11 cases, respectively. The GAP II and III cases were combined into one group, and the two groups of patients were randomly divided into training and testing sets at a 7:3 ratio. Radiomics features were extracted using AK software, and multivariate logistic regression analysis was conducted to establish a radiomics model. A nomogram model was then built from the Rad-score and clinical factors (age and gender). RESULTS: Four significant radiomics features were selected to construct the radiomics model, which showed excellent ability to differentiate GAP I from GAP II and III in both the training group (area under the curve [AUC] = 0.803, 95% confidence interval [CI]: 0.724-0.874) and the testing group (AUC = 0.801, 95% CI: 0.663-0.912). The nomogram model combining clinical factors and radiomics features achieved higher accuracy in both training (88.4% vs. 82.1%) and testing (83.3% vs. 79.2%). CONCLUSION: Disease severity in patients with CTD-ILD can be assessed by applying radiomics to CT images, and the nomogram model demonstrates better performance for predicting GAP staging.
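The AUCs reported for the radiomics and nomogram models can be computed directly from case-level scores with the Mann-Whitney formulation (probability that a randomly chosen positive case outscores a negative one, ties counting half). A minimal sketch, independent of any particular model:

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC as P(score_pos > score_neg), with ties counted as 0.5.

    pos_scores: model scores for positive cases (e.g. GAP II/III).
    neg_scores: model scores for negative cases (e.g. GAP I).
    """
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

Perfect separation yields 1.0 and indistinguishable scores yield 0.5, bracketing the reported values of roughly 0.80.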
Affiliation(s)
- Xiaopeng Jiang
- Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Third Hospital of Shanxi Medical University, Taiyuan, 030032, China; Tongji Hospital, Tongji Medical College, Huazhong University, China
- Ningling Su
- Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Third Hospital of Shanxi Medical University, Taiyuan, 030032, China; Tongji Hospital, Tongji Medical College, Huazhong University, China
- Shuai Quan
- GE HealthCare China (Shanghai), Shanghai, 210000, China
- Linning E
- Affiliated Longhua People's Hospital, Southern Medical University (Longhua People's Hospital), Shenzhen, 518110, China
- Rui Li
- Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Tongji Shanxi Hospital, Third Hospital of Shanxi Medical University, Taiyuan, 030032, China; Tongji Hospital, Tongji Medical College, Huazhong University, China
8. Wang P, Vasconcelos N. A Generalized Explanation Framework for Visualization of Deep Learning Model Predictions. IEEE Trans Pattern Anal Mach Intell 2023; 45:9265-9283. PMID: 37022375. DOI: 10.1109/tpami.2023.3241106.
Abstract
Attribution-based explanations are popular in computer vision but of limited use for the fine-grained classification problems typical of expert domains, where classes differ by subtle details. In these domains, users also seek to understand "why" a class was chosen and "why not" an alternative class. A new GenerAlized expLanatiOn fRamEwork (GALORE) is proposed to satisfy all these requirements by unifying attributive explanations with two other explanation types. The first is a new class of explanations, denoted deliberative, proposed to address the "why" question by exposing the network's insecurities about a prediction. The second is the class of counterfactual explanations, which have been shown to address the "why not" question but are now computed more efficiently. GALORE unifies these explanations by defining them as combinations of attribution maps with respect to various classifier predictions and a confidence score. An evaluation protocol that leverages object recognition (CUB200) and scene classification (ADE20K) datasets, combining part and attribute annotations, is also proposed. Experiments show that confidence scores can improve explanation accuracy, that deliberative explanations provide insight into the network's deliberation process, that this process correlates with that performed by humans, and that counterfactual explanations enhance the performance of human students in machine teaching experiments.
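Attribution maps of the kind GALORE builds on assign each input region a contribution to a class score. One simple model-agnostic way to obtain such a map is occlusion, sketched below; this illustrates the general idea only and is not GALORE's own procedure:

```python
import numpy as np

def occlusion_attribution(score_fn, x):
    """Attribution map: the score drop caused by zeroing each input element.

    score_fn: callable mapping an array to a scalar class score.
    x: input array (e.g. a small image patch).
    """
    base = score_fn(x)
    attr = np.zeros_like(x, dtype=float)
    for idx in np.ndindex(x.shape):
        occluded = x.copy()
        occluded[idx] = 0.0                    # occlude one element
        attr[idx] = base - score_fn(occluded)  # its contribution to the score
    return attr
```

For a linear score such as a plain sum, the attribution of each element is exactly its own value, a useful sanity check for any attribution method.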
9. Cai GW, Liu YB, Feng QJ, Liang RH, Zeng QS, Deng Y, Yang W. Semi-Supervised Segmentation of Interstitial Lung Disease Patterns from CT Images via Self-Training with Selective Re-Training. Bioengineering (Basel) 2023; 10:830. PMID: 37508857. PMCID: PMC10375953. DOI: 10.3390/bioengineering10070830.
Abstract
Accurate segmentation of interstitial lung disease (ILD) patterns from computed tomography (CT) images is an essential prerequisite to treatment and follow-up. However, segmenting ILD patterns pixel by pixel from CT scans with hundreds of slices is highly time-consuming for radiologists. Consequently, large amounts of well-annotated data are hard to obtain, which poses a major challenge for data-driven deep learning methods. To alleviate this problem, we propose an end-to-end semi-supervised learning framework for the segmentation of ILD patterns (ESSegILD) from CT images via self-training with selective re-training. The proposed ESSegILD model is trained on a large CT dataset with slice-wise sparse annotations, i.e., only a few slices in each CT volume are labeled with ILD patterns. Specifically, we adopt the popular Mean-Teacher semi-supervised framework, which consists of a teacher model and a student model and uses consistency regularization to encourage consistent outputs from the two models under different perturbations. Furthermore, we introduce a self-training technique with a selective re-training strategy that retains only reliable pseudo-labels generated by the teacher model; these expand the training samples and promote the student model during iterative training. By leveraging consistency regularization and self-training with selective re-training, ESSegILD can effectively utilize unlabeled data from a partially annotated dataset to progressively improve segmentation performance. Experiments on a dataset of 67 pneumonia patients with incomplete annotations, containing over 11,000 CT images with eight different ILD lung patterns, indicate that our proposed method is superior to the state-of-the-art methods.
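The two ingredients described above, a teacher updated as an exponential moving average (EMA) of the student and selective re-training on confident pseudo-labels, can be sketched as follows. The decay and confidence threshold are illustrative values, not the paper's settings:

```python
def ema_update(teacher_params, student_params, decay=0.99):
    """Mean-Teacher update: teacher weights are an exponential moving
    average of the student's weights after each training step."""
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_params, student_params)]

def select_pseudo_labels(teacher_probs, threshold=0.9):
    """Selective re-training: keep only samples whose teacher prediction
    is confident enough to serve as a pseudo-label.

    teacher_probs: list of per-class probability lists, one per sample.
    Returns (sample index, predicted class) pairs above the threshold.
    """
    selected = []
    for i, p in enumerate(teacher_probs):
        conf = max(p)
        if conf >= threshold:
            selected.append((i, p.index(conf)))
    return selected
```

The threshold trades pseudo-label quantity against quality: lowering it admits more unlabeled slices into training but risks reinforcing the teacher's mistakes.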
Affiliation(s)
- Guang-Wei Cai
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Yun-Bi Liu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qian-Jin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Rui-Hong Liang
- Department of Medical Imaging Center, Nanfang Hospital of Southern Medical University, Guangzhou 510515, China
- Qing-Si Zeng
- Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
- Yu Deng
- Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
10. Exarchos KP, Gkrepi G, Kostikas K, Gogali A. Recent Advances of Artificial Intelligence Applications in Interstitial Lung Diseases. Diagnostics (Basel) 2023; 13:2303. PMID: 37443696. DOI: 10.3390/diagnostics13132303.
Abstract
Interstitial lung diseases (ILDs) comprise a rather heterogeneous group of diseases varying in pathophysiology, presentation, epidemiology, diagnosis, treatment and prognosis. Even though they have been recognized for several years, there are still areas of research debate. In the majority of ILDs, imaging modalities and especially high-resolution Computed Tomography (CT) scans have been the cornerstone in patient diagnostic approach and follow-up. The intricate nature of ILDs and the accompanying data have led to an increasing adoption of artificial intelligence (AI) techniques, primarily on imaging data but also in genetic data, spirometry and lung diffusion, among others. In this literature review, we describe the most prominent applications of AI in ILDs presented approximately within the last five years. We roughly stratify these studies in three categories, namely: (i) screening, (ii) diagnosis and classification, (iii) prognosis.
Affiliation(s)
- Konstantinos P Exarchos
- Respiratory Medicine Department, University of Ioannina School of Medicine, 45110 Ioannina, Greece
- Georgia Gkrepi
- Respiratory Medicine Department, University of Ioannina School of Medicine, 45110 Ioannina, Greece
- Konstantinos Kostikas
- Respiratory Medicine Department, University of Ioannina School of Medicine, 45110 Ioannina, Greece
- Athena Gogali
- Respiratory Medicine Department, University of Ioannina School of Medicine, 45110 Ioannina, Greece
11. Das S, Ayus I, Gupta D. A comprehensive review of COVID-19 detection with machine learning and deep learning techniques. Health Technol 2023; 13:1-14. PMID: 37363343. PMCID: PMC10244837. DOI: 10.1007/s12553-023-00757-z.
Abstract
Purpose: Coronavirus transmission to humans was first reported in Wuhan, China, and grew into the pandemic known as Coronavirus Disease 2019 (COVID-19), posing a major threat to the entire world. Researchers have been applying artificial intelligence (machine learning and deep learning models) for efficient COVID-19 detection. This review surveys the existing machine learning (ML) and deep learning (DL) models used for COVID-19 detection, which may help researchers explore new directions. Its main purpose is to present research experts with a compact overview of the application of artificial intelligence and the future scope for improvement. Methods: Researchers have used various machine learning, deep learning, and combined models to extract significant features and classify health conditions in COVID-19 patients, using different image modalities such as CT scans and X-rays. This study collected over 200 research papers from repositories such as Google Scholar, PubMed, and Web of Science; after several levels of scrutiny, 50 research articles were selected. Results: In the listed articles, the ML/DL models achieved accuracies of 99% and above in classifying COVID-19. This study also presents various clinical applications of this research and highlights the importance of machine and deep learning models in medical diagnosis and research. Conclusion: ML/DL models have made significant progress in recent years, but limitations remain; overfitting is one such limitation, which can lead to incorrect predictions and overburdened models. The research community must continue working to overcome these limitations and make machine and deep learning models even more effective and efficient. Through this ongoing research and development, we can expect even greater advances in the future.
Affiliation(s)
- Sreeparna Das
- Department of Computer Science and Engineering, National Institute of Technology Arunachal Pradesh, Jote, Arunachal Pradesh 791113, India
- Ishan Ayus
- Department of Computer Science and Engineering, ITER, Siksha ‘O’ Anusandhan Deemed to be University, Bhubaneswar, Odisha 751030, India
- Deepak Gupta
- Department of Computer Science and Engineering, Motilal Nehru National Institute of Technology Allahabad, Prayagraj, UP 211004, India
12. Haubold J, Zeng K, Farhand S, Stalke S, Steinberg H, Bos D, Meetschen M, Kureishi A, Zensen S, Goeser T, Maier S, Forsting M, Nensa F. AI co-pilot: content-based image retrieval for the reading of rare diseases in chest CT. Sci Rep 2023; 13:4336. PMID: 36928759. PMCID: PMC10020154. DOI: 10.1038/s41598-023-29949-3.
Abstract
The aim of this study was to evaluate the impact of the newly developed Similar Patient Search (SPS) web service, which supports the reading of complex lung diseases on computed tomography (CT), on the diagnostic accuracy of residents. SPS is an image-based search engine for pre-diagnosed cases along with related clinical reference content ( https://eref.thieme.de ). The reference database was constructed from 13,658 annotated regions of interest (ROIs) from 621 patients, covering 69 lung diseases. For validation, 50 CT scans were evaluated by five radiology residents without SPS and, three months later, with SPS. The residents could give a maximum of three diagnoses per case; a case scored the maximum of 3 points when the correct diagnosis was given without any additional diagnoses. The residents achieved an average score of 17.6 ± 5.0 points without SPS. With SPS, they increased their score by 81.8% to 32.0 ± 9.5 points, a highly significant per-case improvement (p = 0.0001). The residents required an average of 205.9 ± 350.6 s per case (a 21.9% increase) when SPS was used; however, in the second half of the cases, after the residents became more familiar with SPS, this increase dropped to 7%. Residents' average score in reading complex chest CT scans thus improved by 81.8% when the AI-driven SPS with integrated clinical reference content was used, with only a minimal increase in time per case.
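At its core, an image-based search engine like SPS ranks pre-diagnosed reference ROIs by similarity to the query image in some feature space. A minimal cosine-similarity sketch; the embeddings, index structure, and `k` here are illustrative assumptions, since the abstract does not describe SPS's internals:

```python
import numpy as np

def retrieve_similar(query, reference, k=3):
    """Rank reference embeddings by cosine similarity to a query embedding.

    query: (D,) feature vector of the query ROI.
    reference: (N, D) feature vectors of pre-diagnosed reference ROIs.
    Returns the indices of the top-k most similar references.
    """
    q = query / np.linalg.norm(query)
    refs = reference / np.linalg.norm(reference, axis=1, keepdims=True)
    sims = refs @ q                 # cosine similarity to each reference
    order = np.argsort(-sims)       # descending similarity
    return order[:k].tolist()
```

A production system would replace the brute-force scan with an approximate nearest-neighbor index, but the ranking principle is the same.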
Collapse
Affiliation(s)
- Johannes Haubold
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany.
- Ke Zeng
- Siemens Medical Solutions Inc., Malvern, PA, USA
- Hannah Steinberg
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Denise Bos
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Mathias Meetschen
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Anisa Kureishi
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Sebastian Zensen
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Tim Goeser
- Department of Radiology and Neuroradiology, Kliniken Maria Hilf, Viersener Str. 450, 41063, Mönchengladbach, NRW, Germany
- Sandra Maier
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Michael Forsting
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Felix Nensa
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
- Institute of Artificial Intelligence in Medicine, University Hospital Essen, Hufelandstraße 55, 45147, Essen, Germany
13
Pawar SP, Talbar SN. Maximization of lung segmentation of generative adversarial network for using taguchi approach. The Imaging Science Journal 2023. [DOI: 10.1080/13682199.2023.2172525]
Affiliation(s)
- Swati P. Pawar
- SVERI’s College of Engineering Pandharpur, Pandharpur, Maharashtra, India
- Sanjay N. Talbar
- Center of Excellence in Signal and Image Processing, SGGS Nanded, Nanded, Maharashtra, India
14
Alhares H, Tanha J, Balafar MA. AMTLDC: a new adversarial multi-source transfer learning framework to diagnosis of COVID-19. Evolving Systems 2023; 14:1-15. [PMID: 38625255] [PMCID: PMC9838404] [DOI: 10.1007/s12530-023-09484-2]
Abstract
In recent years, deep learning techniques have been widely used to diagnose diseases. However, in some tasks, such as the diagnosis of COVID-19, the model is not properly trained due to insufficient data and, as a result, its generalizability decreases. For example, if the model is trained on one CT scan dataset and tested on another, it produces near-random predictions. To address this, data from several different sources can be combined using transfer learning, taking into account the intrinsic and natural differences between existing datasets obtained with different medical imaging tools and approaches. In this paper, to improve the transfer learning technique and achieve better generalizability across multiple data sources, we propose a multi-source adversarial transfer learning model, namely AMTLDC. In AMTLDC, representations are learned that are similar across the sources. In other words, the extracted representations are general and not dependent on any particular dataset domain. We apply AMTLDC to predict COVID-19 from medical images using a convolutional neural network. We show that accuracy can be improved using the AMTLDC framework, surpassing the results of current successful transfer learning approaches. In particular, we show that AMTLDC works well when using different dataset domains or when there is insufficient data.
Affiliation(s)
- Hadi Alhares
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471 Iran
- Jafar Tanha
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471 Iran
- Mohammad Ali Balafar
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, 29th Bahman Blvd, Tabriz, 5166616471 Iran
15
Chan J, Auffermann WF. Artificial Intelligence in the Imaging of Diffuse Lung Disease. Radiol Clin North Am 2022; 60:1033-1040. [DOI: 10.1016/j.rcl.2022.06.014]
16
Multi Level Approach for Segmentation of Interstitial Lung Disease (ILD) Patterns Classification Based on Superpixel Processing and Fusion of K-Means Clusters: SPFKMC. Computational Intelligence and Neuroscience 2022; 2022:4431817. [PMID: 36317075] [PMCID: PMC9617705] [DOI: 10.1155/2022/4431817]
Abstract
During the COVID-19 pandemic, a huge number of interstitial lung disease (ILD) lung images has been captured. Efficient segmentation techniques are therefore needed to separate the anatomical structures and ILD patterns for disease and infection-level identification. The effectiveness of disease classification directly depends on the accuracy of initial stages such as preprocessing and segmentation. This paper proposes a hybrid segmentation algorithm designed for ILD images that takes advantage of superpixel and K-means clustering approaches. Segmented superpixel images better adapt to irregular local and spatial neighborhoods, which helps improve the performance of K-means clustering-based ILD image segmentation. To overcome the limitations of multiclass belonging, semiadaptive wavelet-based fusion is applied over selected K-means clusters. The performance of the proposed SPFKMC was compared with that of 3-class Fuzzy C-Means clustering (FCM) and K-means clustering in terms of accuracy, Jaccard similarity index (JSI), and Dice similarity coefficient (DSC). The SPFKMC algorithm gives an accuracy of 99.28%, a DSC of 98.72%, and a JSI of 97.87%. The proposed fused clustering gives better results than traditional K-means clustering segmentation thanks to the wavelet-based fused cluster results.
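The clustering core of the pipeline this abstract describes is ordinary K-means over (superpixel-averaged) intensities. A minimal sketch of that step on synthetic data follows; the `kmeans_1d` helper and quantile initialisation are illustrative choices, not the authors' code.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=20):
    """Plain K-means on scalar intensities -- the clustering step applied
    to superpixel-averaged ILD image regions (sketch, not SPFKMC itself)."""
    # Deterministic quantile initialisation avoids degenerate random starts.
    centers = np.quantile(values, np.linspace(0.05, 0.95, k))
    for _ in range(iters):
        # Assign each value to its nearest centre, then recompute centres.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, np.sort(centers)

# Synthetic "image": three well-separated intensity populations.
pixels = np.concatenate([np.full(50, 10.0), np.full(50, 120.0), np.full(50, 240.0)])
labels, centers = kmeans_1d(pixels, k=3)
print(centers)  # cluster centres recover the three tissue intensities
```

SPFKMC additionally fuses selected clusters with a wavelet-based step; that fusion is beyond this sketch.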
17
18
Helen Sulochana C, Praylin Selva Blessy SA. Interstitial lung disease detection using template matching combined sparse coding and blended multi class support vector machine. Proc Inst Mech Eng H 2022; 236:1492-1501. [DOI: 10.1177/09544119221113722]
Abstract
Interstitial lung disease (ILD), representing a collection of disorders, is considered among the deadliest, increasing the human mortality rate. In this paper, an automated scheme for the detection and classification of ILD patterns is presented, which eliminates the low inter-class feature variation and high intra-class feature variation in patterns caused by translation and illumination effects. A novel and efficient feature extraction method named Template-Matching Combined Sparse Coding (TMCSC) is proposed, which extracts features invariant to translation and illumination effects from defined regions of interest (ROIs) within the lung parenchyma. The translated image patch is compared with all possible templates of the image using a template matching process. The corresponding sparse matrix for the set of translated image patches and their nearest template is obtained by minimizing the objective function of the similarity matrix of the translated image patch and the template. A novel Blended Multi-Class Support Vector Machine (B-MCSVM) is designed to tackle high intra-class feature variation problems, providing improved classification accuracy. Regions of interest (ROIs) of five lung tissue patterns (healthy, emphysema, ground glass, micronodule, and fibrosis), selected from an internal multimedia database containing high-resolution computed tomography (HRCT) image series, are identified and utilized in this work. The performance of the proposed scheme outperforms most state-of-the-art multi-class classification algorithms.
Affiliation(s)
- C Helen Sulochana
- St. Xaviers Catholic College of Engineering, Chunkankadai, Tamil Nadu, India
19
A computer aided diagnosis framework for detection and classification of interstitial lung diseases using computed tomography (CT) images. Applied Nanoscience 2022. [DOI: 10.1007/s13204-022-02512-8]
20
Yaqub M, Jinchao F, Arshid K, Ahmed S, Zhang W, Nawaz MZ, Mahmood T. Deep Learning-Based Image Reconstruction for Different Medical Imaging Modalities. Computational and Mathematical Methods in Medicine 2022; 2022:8750648. [PMID: 35756423] [PMCID: PMC9225884] [DOI: 10.1155/2022/8750648]
Abstract
Image reconstruction in magnetic resonance imaging (MRI) and computed tomography (CT) is a mathematical process that generates images from data acquired at many different angles around the patient. Image reconstruction has a fundamental impact on image quality. In recent years, the literature has focused on deep learning and its applications in medical imaging, particularly image reconstruction. Due to the performance of deep learning models in a wide variety of vision applications, a considerable amount of work has recently been carried out on image reconstruction in medical images. In this age of rapidly advancing technology, MRI and CT remain the scientifically appropriate imaging modes for identifying and diagnosing many diseases. This study surveys a number of deep learning image reconstruction approaches and comprehensively reviews the most widely used databases. We also discuss the challenges and promising future directions for medical image reconstruction.
Affiliation(s)
- Muhammad Yaqub
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Feng Jinchao
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Kaleem Arshid
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Shahzad Ahmed
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Wenqian Zhang
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Muhammad Zubair Nawaz
- College of Science and Shanghai Institute of Intelligent Electronics and Systems, Donghua University, 24105 Songjiang District, Shanghai, China
- Tariq Mahmood
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Division of Science and Technology, University of Education, Lahore, Pakistan
21
Multiple instance learning for lung pathophysiological findings detection using CT scans. Med Biol Eng Comput 2022; 60:1569-1584. [PMID: 35386027] [DOI: 10.1007/s11517-022-02526-y]
Abstract
Lung diseases affect the lives of billions of people worldwide, and each year 4 million people die prematurely from these conditions. These pathologies are characterized by specific imaging findings in CT scans. Traditional Computer-Aided Diagnosis (CAD) approaches have shown promising results in helping clinicians; however, CADs normally consider only a small part of the medical image for analysis, excluding possibly relevant information for clinical evaluation. The Multiple Instance Learning (MIL) approach takes into consideration the different small pieces that are relevant to the final classification and creates a comprehensive analysis of pathophysiological changes. This study uses MIL-based approaches to identify the presence of lung pathophysiological findings in CT scans for the characterization of lung disease development. This work focused on the detection of fibrosis, emphysema, satellite nodules in the primary lesion lobe, nodules in the contralateral lung, and ground glass, with fibrosis and emphysema achieving the most outstanding results, reaching an Area Under the Curve (AUC) of 0.89 and 0.72, respectively. Additionally, the MIL-based approach was used for EGFR mutation status prediction - the most relevant oncogene in lung cancer - with an AUC of 0.69. The results showed that this comprehensive approach can be a useful tool for lung pathophysiological characterization.
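The MIL idea in this abstract, a whole scan (bag) classified from many small pieces (instances), is often implemented with max-pooling over instance scores, and the reported AUC is a ranking metric. A toy sketch of both follows; the max-pooling aggregation and the tiny rank-based AUC are standard textbook forms, not the paper's exact method.

```python
import numpy as np

def bag_probability(instance_probs):
    """Max-pooling MIL aggregation: a bag (scan) is positive if its most
    suspicious instance (patch) is positive."""
    return float(np.max(instance_probs))

def auc(scores, labels):
    """Rank-based AUC: probability a positive bag outscores a negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

bags = [np.array([0.1, 0.2, 0.9]),   # positive: one suspicious patch
        np.array([0.1, 0.15, 0.2]),  # negative
        np.array([0.8, 0.1, 0.1]),   # positive
        np.array([0.05, 0.2, 0.3])]  # negative
labels = [1, 0, 1, 0]
scores = [bag_probability(b) for b in bags]
print(auc(scores, labels))  # positives rank above negatives here
```

Real MIL variants replace the max with learned attention pooling, but the bag-level supervision is the same.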
22
Sousa J, Pereira T, Neves I, Silva F, Oliveira HP. The Influence of a Coherent Annotation and Synthetic Addition of Lung Nodules for Lung Segmentation in CT Scans. Sensors 2022; 22:s22093443. [PMID: 35591132] [PMCID: PMC9100675] [DOI: 10.3390/s22093443]
Abstract
Lung cancer is a highly prevalent pathology and a leading cause of cancer-related deaths. Most patients are diagnosed when the disease has already manifested itself, usually a sign of lung cancer in an advanced stage, and as a consequence the 5-year survival rates are low. To increase the chances of survival, improving the capacity for early cancer detection is crucial, and computed tomography (CT) scans play a key role here. The manual evaluation of CTs is a time-consuming task, and computer-aided diagnosis (CAD) systems can help relieve that burden. The segmentation of the lung is one of the first steps in these systems, yet it is very challenging given the heterogeneity of the lung diseases usually present and associated with cancer development. In our previous work, a segmentation model based on a ResNet34 and U-Net combination was developed on a cross-cohort dataset; it yielded good segmentation masks for multiple pathological conditions but misclassified some of the lung nodules. The multiple datasets used for model development originated from different annotation protocols, which generated inconsistencies in the learning process, and the annotations are usually not adequate for lung cancer studies since they do not comprise lung nodules. In addition, the initial datasets used for training presented a reduced number of nodules, which was shown to be insufficient for the segmentation model to learn to include them as part of the lung. In this work, an objective protocol for lung mask segmentation was defined, and the previous annotations were carefully reviewed and corrected to create consistent and adequate ground-truth masks for the development of the segmentation model. Data augmentation with domain knowledge was used to create lung nodules in the cases used to train the model. The model developed achieved a Dice similarity coefficient (DSC) above 0.9350 for all test datasets, and it showed an ability to cope not only with a variety of lung patterns but also with the presence of lung nodules. This study shows the importance of using consistent annotations for the supervised learning process, a very time-consuming task but one of great importance for healthcare applications. Given the lack of massive datasets in the medical field, and the consequent lack of wide representativity, data augmentation with domain knowledge is a promising way to overcome this limitation when developing learning models.
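The Dice similarity coefficient (DSC) that this entry (and several others in this list) reports is a simple overlap measure between a predicted mask and the ground truth. A minimal sketch on toy binary masks:

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

pred  = np.array([[1, 1, 0],
                  [0, 1, 0]])
truth = np.array([[1, 1, 0],
                  [0, 0, 0]])
print(dice(pred, truth))  # 2*2 / (3+2) = 0.8
```

A DSC above 0.93, as reported, means the predicted and reference lung masks overlap almost completely.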
Affiliation(s)
- Joana Sousa
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Tania Pereira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Inês Neves
- ICBAS—Abel Salazar Biomedical Sciences Institute, University of Porto, 4050-313 Porto, Portugal
- Francisco Silva
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
- Hélder P. Oliveira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
23
Zhu Z, Mittendorf A, Shropshire E, Allen B, Miller C, Bashir MR, Mazurowski MA. 3D Pyramid Pooling Network for Abdominal MRI Series Classification. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022; 44:1688-1698. [PMID: 33112740] [DOI: 10.1109/tpami.2020.3033990]
Abstract
Recognizing and organizing the different series in an MRI examination is important both for clinical review and for research, but it is poorly addressed by the current generation of picture archiving and communication systems (PACSs) and post-processing workstations. In this paper, we study the problem of using deep convolutional neural networks for the automatic classification of abdominal MRI series into one of many series types. Our contributions are three-fold. First, we created a large abdominal MRI dataset containing 3717 MRI series, including 188,665 individual images, derived from liver examinations; 30 different series types are represented in this dataset. The dataset was annotated by consensus readings from two radiologists, and both the MRIs and the annotations were made publicly available. Second, we proposed a 3D pyramid pooling network, which can elegantly handle abdominal MRI series with varied sizes in each dimension, and achieved state-of-the-art classification performance. Third, we performed the first-ever comparison between the algorithm and the radiologists on an additional dataset and made several meaningful findings.
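The key property of pyramid pooling, which lets the network "elegantly handle series with varied sizes", is that adaptive pooling over a fixed set of grid levels yields a fixed-length feature vector regardless of input dimensions. A sketch of that general idea on 3D volumes (illustrative only, not the paper's network; the `levels=(1, 2)` pyramid is an assumption):

```python
import numpy as np

def pyramid_pool(volume, levels=(1, 2)):
    """Adaptive average pooling to fixed grids: whatever the input shape,
    the concatenated output length depends only on `levels`."""
    feats = []
    for level in levels:
        # Split each axis into `level` roughly equal bins, average each cell.
        edges = [np.array_split(np.arange(n), level) for n in volume.shape]
        for ix in edges[0]:
            for iy in edges[1]:
                for iz in edges[2]:
                    feats.append(volume[np.ix_(ix, iy, iz)].mean())
    return np.array(feats)

small = np.random.default_rng(0).random((4, 6, 3))
large = np.random.default_rng(1).random((9, 12, 5))
print(pyramid_pool(small).shape, pyramid_pool(large).shape)  # equal lengths
```

Here both volumes map to 1 + 8 = 9 features, so a downstream classifier sees a constant input size.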
24
Szmul A, Chandy E, Veiga C, Jacob J, Stavropoulou A, Landau D, Hiley CT, McClelland JR. A Novel and Automated Approach to Classify Radiation Induced Lung Tissue Damage on CT Scans. Cancers (Basel) 2022; 14:1341. [PMID: 35267649] [PMCID: PMC8909378] [DOI: 10.3390/cancers14051341]
Abstract
Radiation-induced lung damage (RILD) is a common side effect of radiotherapy (RT). The ability to automatically segment, classify, and quantify different types of lung parenchymal change is essential to uncovering the underlying patterns of RILD and their evolution over time. A RILD-dedicated tissue classification system was developed to describe lung parenchymal tissue changes at the voxel level. The classification system was automated for the segmentation of five lung tissue classes on computed tomography (CT) scans describing incrementally increasing tissue density, ranging from normal lung (Class 1) to consolidation (Class 5). For ground-truth data generation, we employed a two-stage data annotation approach, akin to active learning. Manual segmentation was used to train a stage-one auto-segmentation method. These results were manually refined and used to train the stage-two auto-segmentation algorithm, an ensemble of six 2D U-Nets using different loss functions and numbers of input channels. The development dataset used in this study consisted of 40 cases, each with a pre-radiotherapy scan and 3-, 6-, 12-, and 24-month follow-up CT scans (n = 200 CT scans). The method was assessed on a hold-out test dataset of 6 cases (n = 30 CT scans). The global Dice similarity coefficients (DSC) achieved for each tissue class were: Class 1, 99% and 98%; Class 2, 71% and 44%; Class 3, 56% and 26%; Class 4, 79% and 47%; and Class 5, 96% and 92%, for the development and test subsets, respectively. The lowest values for the test subsets were caused by imaging artefacts or reflected subgroups that occurred infrequently and with smaller overall parenchymal volumes. We performed a qualitative evaluation on the test dataset, presenting manual and auto-segmentations to a blinded independent radiologist to rate them as 'acceptable', 'minor disagreement' or 'major disagreement'. The auto-segmentation ratings were similar to those of the manual segmentation, with approximately 90% of cases rated as acceptable for both. The proposed framework for the auto-segmentation of different lung tissue classes produces acceptable results in the majority of cases and has the potential to facilitate future large studies of RILD.
Affiliation(s)
- Adam Szmul
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- Edward Chandy
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- Sussex Cancer Centre, Royal Sussex County Hospital, Brighton BN2 5BE, UK
- UCL Cancer Institute, University College London, London WC1E 6BT, UK
- Catarina Veiga
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- Joseph Jacob
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- UCL Respiratory Department, University College London Hospital, London NW1 2PG, UK
- Alkisti Stavropoulou
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- David Landau
- UCL Cancer Institute, University College London, London WC1E 6BT, UK
- Crispin T. Hiley
- UCL Cancer Institute, University College London, London WC1E 6BT, UK
- University College Hospital, University College London, London NW1 2BU, UK
- Jamie R. McClelland
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
25
Lung Segmentation in CT Images: A Residual U-Net Approach on a Cross-Cohort Dataset. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12041959]
Abstract
Lung cancer is one of the most common causes of cancer-related mortality, and since the majority of cases are diagnosed when the tumor is in an advanced stage, the 5-year survival rate is dismally low. Nevertheless, the chances of survival can increase if the tumor is identified early on, which can be achieved through screening with computed tomography (CT). The clinical evaluation of CT images is a very time-consuming task, and computer-aided diagnosis systems can help reduce this burden. The segmentation of the lungs is usually the first step taken in automatic image-analysis models of the thorax. However, this task is very challenging, since the lungs present high variability in shape and size. Moreover, the co-occurrence of other respiratory comorbidities alongside lung cancer is frequent, and each pathology can present its own scope of CT imaging appearances. This work investigated the development of a deep learning model whose architecture combines two structures, a U-Net and a ResNet34. The proposed model was designed on a cross-cohort dataset, and it achieved a mean Dice similarity coefficient (DSC) higher than 0.93 for the 4 different cohorts tested. The segmentation masks were qualitatively evaluated by two experienced radiologists to identify the main limitations of the developed model, despite the good overall performance obtained. The performance per pathology was assessed, and the results confirmed a small degradation for consolidation and pneumocystis pneumonia cases, with a DSC of 0.9015 ± 0.2140 and 0.8750 ± 0.1290, respectively. This work represents a relevant assessment of the lung segmentation model, taking into consideration the pathological cases that can be found in clinical routine, since a global assessment cannot detail the fragilities of the model.
26
Aria M, Nourani E, Golzari Oskouei A. ADA-COVID: Adversarial Deep Domain Adaptation-Based Diagnosis of COVID-19 from Lung CT Scans Using Triplet Embeddings. Computational Intelligence and Neuroscience 2022; 2022:2564022. [PMID: 35154300] [PMCID: PMC8826267] [DOI: 10.1155/2022/2564022]
Abstract
Rapid diagnosis of COVID-19 with high reliability is essential in the early stages. To this end, recent research often uses medical imaging combined with machine vision methods to diagnose COVID-19. However, the scarcity of medical images and the inherent differences between existing datasets, which arise from different medical imaging tools, methods, and specialists, may affect the generalization of machine learning-based methods. Also, most of these methods are trained and tested on the same dataset, reducing generalizability and causing low reliability of the obtained model in real-world applications. This paper introduces an adversarial deep domain adaptation-based approach for diagnosing COVID-19 from lung CT scan images, termed ADA-COVID. The domain adaptation-based training process receives multiple datasets with different input domains to generate domain-invariant representations for medical images. Also, because medical images are structurally far more similar to one another than typical image data in machine vision tasks, we use the triplet loss function to generate similar representations for samples of the same class (infected cases). The performance of ADA-COVID is evaluated and compared with other state-of-the-art COVID-19 diagnosis algorithms. The obtained results indicate that ADA-COVID achieves classification improvements of at least 3%, 20%, 20%, and 11% in accuracy, precision, recall, and F1 score, respectively, compared to the best results of competitors, even without directly training on the same data. The implementation source code of ADA-COVID is publicly available at https://github.com/MehradAria/ADA-COVID.
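The triplet loss this abstract relies on pulls same-class (infected) embeddings together while pushing different-class embeddings at least a margin apart. A minimal sketch with toy 2-D embeddings follows; the standard margin formulation is shown, not ADA-COVID's full training loop.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: max(0, d(a, p) - d(a, n) + margin), where the
    positive shares the anchor's class and the negative does not."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same class: already close to the anchor
n = np.array([3.0, 0.0])   # other class: already far from the anchor
print(triplet_loss(a, p, n))  # constraint satisfied, zero loss
```

During training the loss is nonzero whenever a negative sits closer than `positive distance + margin`, which is what forces class-separated, domain-invariant embeddings.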
Affiliation(s)
- Mehrad Aria
- Faculty of Information Technology and Computer Engineering, Azarbaijan Shahid Madani University, Tabriz, Iran
- Esmaeil Nourani
- Faculty of Information Technology and Computer Engineering, Azarbaijan Shahid Madani University, Tabriz, Iran
- Amin Golzari Oskouei
- Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
27
Aliboni L, Dias OM, Pennati F, Baldi BG, Sawamura MVY, Chate RC, Carvalho CRR, de Albuquerque ALP, Aliverti A. Quantitative CT Analysis in Chronic Hypersensitivity Pneumonitis: A Convolutional Neural Network Approach. Acad Radiol 2022; 29 Suppl 2:S31-S40. [DOI: 10.1016/j.acra.2020.10.009]
Abstract
RATIONALE AND OBJECTIVES Chronic hypersensitivity pneumonitis (cHP) is a heterogeneous condition in which both small-airway involvement and fibrosis may occur simultaneously. Computer-aided analysis of CT lung imaging is increasingly used to improve tissue characterization in interstitial lung diseases (ILD), quantifying disease extension and progression. We aimed to quantify, via a convolutional neural network (CNN) method, the extent of different pathological classes in cHP, and to determine their correlation with pulmonary function tests (PFTs) and the mosaic attenuation pattern. MATERIALS AND METHODS The extent of six textural features, including consolidation (C), ground glass opacity (GGO), fibrosis (F), low attenuation areas (LAA), reticulation (R), and healthy regions (H), was quantified on full-inspiration HRCT in 27 cHP patients (age: 56 ± 11.5 years, forced vital capacity [FVC]% = 57 ± 17). The extent of each class was correlated with PFTs and with the mosaic attenuation pattern. RESULTS H showed a positive correlation with FVC%, FEV1% (forced expiratory volume), total lung capacity%, and diffusion of carbon monoxide (DLCO)% (r = 0.74, r = 0.78, r = 0.73, and r = 0.60, respectively, p < 0.001). GGO, R, and C correlated negatively with FVC% and FEV1%, with the highest correlations found for R (r = -0.44 and r = -0.46, respectively, p < 0.05); F correlated negatively with DLCO% (r = -0.42, p < 0.05). Patients with a mosaic attenuation pattern had significantly more H (p = 0.04), lower R (p = 0.02) and C (p = 0.0009) areas, and more preserved lung function indices (higher FVC%, p = 0.04, and DLCO%, p = 0.05), but did not show more air trapping in lung function tests. CONCLUSION CNN quantification of pathological tissue extent in cHP improves its characterization and correlates with PFTs. LAA can be overestimated by visual, qualitative CT assessment, and mosaic attenuation pattern areas in cHP represent patchy ILD rather than small-airways disease.
28
Soffer S, Morgenthau AS, Shimon O, Barash Y, Konen E, Glicksberg BS, Klang E. Artificial Intelligence for Interstitial Lung Disease Analysis on Chest Computed Tomography: A Systematic Review. Acad Radiol 2022; 29 Suppl 2:S226-S235. [PMID: 34219012 DOI: 10.1016/j.acra.2021.05.014] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 05/10/2021] [Accepted: 05/11/2021] [Indexed: 12/22/2022]
Abstract
RATIONALE AND OBJECTIVES High-resolution computed tomography (HRCT) is paramount in the assessment of interstitial lung disease (ILD). Yet, HRCT interpretation of ILDs may be hampered by inter- and intra-observer variability. Recently, artificial intelligence (AI) has revolutionized medical image analysis. This technology has the potential to advance patient care in ILD. We aimed to systematically evaluate the application of AI for the analysis of ILD in HRCT. MATERIALS AND METHODS We searched MEDLINE/PubMed databases for original publications of deep learning for ILD analysis on chest CT. The search included studies published up to March 1, 2021. The risk of bias evaluation included tailored Quality Assessment of Diagnostic Accuracy Studies and the modified Joanna Briggs Institute Critical Appraisal checklist. RESULTS Data was extracted from 19 retrospective studies. Deep learning techniques included detection, segmentation, and classification of ILD on HRCT. Most studies focused on the classification of ILD into different morphological patterns. Accuracies of 78%-91% were achieved. Two studies demonstrated near-expert performance for the diagnosis of idiopathic pulmonary fibrosis (IPF). The Quality Assessment of Diagnostic Accuracy Studies tool identified a high risk of bias in 15/19 (78.9%) of the studies. CONCLUSION AI has the potential to contribute to the radiologic diagnosis and classification of ILD. However, the accuracy performance is still not satisfactory, and research is limited by a small number of retrospective studies. Hence, the existing published data may not be sufficiently reliable. Only well-designed prospective controlled studies can accurately assess the value of existing AI tools for ILD evaluation.
29
Two-Stage Hybrid Approach of Deep Learning Networks for Interstitial Lung Disease Classification. BIOMED RESEARCH INTERNATIONAL 2022; 2022:7340902. [PMID: 35155680 PMCID: PMC8826206 DOI: 10.1155/2022/7340902] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Revised: 01/14/2022] [Accepted: 01/21/2022] [Indexed: 11/18/2022]
Abstract
High-resolution computed tomography (HRCT) images in interstitial lung disease (ILD) screening can help improve healthcare quality. However, most earlier ILD classification work involves time-consuming manual identification of the region of interest (ROI) from the lung HRCT image before applying the deep learning classification algorithm. This paper develops a two-stage hybrid approach of deep learning networks for ILD classification. In the first stage, a conditional generative adversarial network (c-GAN) segments the lung from the HRCT images. The c-GAN with a multiscale feature extraction module is used for accurate lung segmentation from HRCT images with lung abnormalities. In the second stage, a pretrained ResNet50 extracts features from the segmented lung image, which are classified into six ILD classes using a support vector machine classifier. The proposed two-stage algorithm takes a whole HRCT image as input, eliminating the need to extract the ROI, and classifies the given HRCT image into an ILD class. The performance of the proposed two-stage deep learning network-based ILD classifier improves considerably owing to the stage-wise improvement of the individual deep learning algorithms.
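The second stage above (deep features fed into an SVM) can be sketched as follows. Note that the histogram-based `extract_features` is only a runnable stand-in for the pretrained ResNet50 embedding the paper actually uses, and all function names are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(image, bins=32):
    """Stand-in for the ResNet50 embedding: a normalized intensity histogram.
    In the real pipeline, each segmented lung image would be passed through
    a pretrained ResNet50 and its pooled activations used as the feature vector."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0), density=True)
    return hist

def train_ild_classifier(images, labels):
    """Stage two: fit an SVM on per-image feature vectors."""
    X = np.stack([extract_features(im) for im in images])
    clf = SVC(kernel="rbf")  # SVM classifier, as named in the abstract
    clf.fit(X, labels)
    return clf
```

Swapping `extract_features` for a real CNN embedding leaves the rest of the stage unchanged, which is the appeal of the hybrid design.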
30
Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. MULTIMEDIA SYSTEMS 2022; 28:881-914. [PMID: 35079207 PMCID: PMC8776556 DOI: 10.1007/s00530-021-00884-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Accepted: 12/23/2021] [Indexed: 05/07/2023]
Abstract
Medical images are a rich source of invaluable information used by clinicians. Recent technologies have introduced many advancements for exploiting this information to generate better analyses. Deep learning (DL) techniques have empowered medical image analysis in computer-assisted imaging contexts, offering many solutions and improvements to how radiologists and other specialists analyze these images. In this paper, we present a survey of DL techniques used for a variety of tasks across different medical imaging modalities, providing a critical review of recent developments in this direction. We have organized the paper to highlight the significant contributions of deep learning and explain its concepts, which is in turn helpful for non-experts in the medical community. We then present several applications of deep learning (e.g., segmentation, classification, detection, etc.) commonly used for clinical purposes at different anatomical sites, and we cover the main key terms of DL, such as basic architectures, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be the mainstream in the coming years, and novel DL techniques are predicted to become the core of medical image analysis. We conclude by addressing some research challenges and the solutions suggested for them in the literature, as well as future promises and directions for further development.
Affiliation(s)
- Rammah Yousef
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229, Himachal Pradesh, India
- Gaurav Gupta
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan, 173229, Himachal Pradesh, India
- Nabhan Yousef
- Electronics and Communication Engineering, Marwadi University, Rajkot, Gujarat, India
- Manju Khari
- Jawaharlal Nehru University, New Delhi, India

31
Zhang H, Guo W, Zhang S, Lu H, Zhao X. Unsupervised Deep Anomaly Detection for Medical Images Using an Improved Adversarial Autoencoder. J Digit Imaging 2022; 35:153-161. [PMID: 35013826 PMCID: PMC8921374 DOI: 10.1007/s10278-021-00558-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2021] [Revised: 11/28/2021] [Accepted: 11/29/2021] [Indexed: 12/12/2022] Open
Abstract
Anomaly detection has been applied to various diseases in medical practice, such as breast cancer, retinal disease, lung lesions, and skin disease. However, in real-world anomaly detection there exist a large number of healthy samples but very few sick samples. To alleviate this data imbalance, this paper proposes an unsupervised learning method for deep anomaly detection based on an improved adversarial autoencoder, in which a module called a chain of convolutional blocks (CCB) is employed instead of the conventional skip-connections used in adversarial autoencoders. Such CCB connections provide considerable advantages via direct connections, not only preserving both global and local information but also alleviating the semantic disparity between the encoding features and the corresponding decoding features. The proposed method is thus able to capture the distribution of normal samples within both the image space and the latent vector space. Because the reconstruction error within both spaces is minimized during the training phase, a higher reconstruction error during the test phase is indicative of an anomaly. Our method is trained only on healthy subjects in order to learn the distribution of normal samples, and can detect sick samples based on high deviation from this distribution of normality in an unsupervised way. Experimental results for multiple datasets from different fields demonstrate that the proposed method yields superior performance to state-of-the-art methods.
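The training principle in this abstract — fit on normal samples only, then score test samples by reconstruction error — can be illustrated compactly. The sketch below substitutes a linear (PCA-style) autoencoder for the paper's adversarial autoencoder with CCB connections, so it demonstrates the detection criterion rather than the network itself:

```python
import numpy as np

class LinearAutoencoder:
    """Linear stand-in for a deep autoencoder: encode by projecting onto the
    top principal components of the normal data, decode by projecting back."""

    def __init__(self, n_components):
        self.n_components = n_components

    def fit(self, X):
        # X: (n_samples, n_features), healthy samples only.
        self.mean_ = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.n_components]
        return self

    def anomaly_score(self, X):
        z = (X - self.mean_) @ self.components_.T     # encode
        recon = z @ self.components_ + self.mean_     # decode
        return np.mean((X - recon) ** 2, axis=1)      # reconstruction MSE
```

Samples far from the learned normal manifold reconstruct poorly, so thresholding `anomaly_score` yields the unsupervised sick/healthy decision.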
Affiliation(s)
- Haibo Zhang
- Taizhou Central Hospital (Taizhou University Hospital), Taizhou University, Zhejiang, 318000, China
- Wenping Guo
- Taizhou Central Hospital (Taizhou University Hospital), Taizhou University, Zhejiang, 318000, China
- College of Computer and Information, Hohai University, Nanjing, 210098, China
- Shiqing Zhang
- Taizhou Central Hospital (Taizhou University Hospital), Taizhou University, Zhejiang, 318000, China
- Hongsheng Lu
- Taizhou Central Hospital (Taizhou University Hospital), Taizhou University, Zhejiang, 318000, China
- Xiaoming Zhao
- Taizhou Central Hospital (Taizhou University Hospital), Taizhou University, Zhejiang, 318000, China

32
Suzuki Y, Kido S, Mabu S, Yanagawa M, Tomiyama N, Sato Y. Segmentation of Diffuse Lung Abnormality Patterns on Computed Tomography Images using Partially Supervised Learning. ADVANCED BIOMEDICAL ENGINEERING 2022. [DOI: 10.14326/abe.11.25] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022] Open
Affiliation(s)
- Yuki Suzuki
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine
- Shoji Kido
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine
- Shingo Mabu
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University
- Masahiro Yanagawa
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine
- Noriyuki Tomiyama
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine
- Yoshinobu Sato
- Division of Information Science, Graduate School of Science and Technology, Nara Institute of Science and Technology

33
Kumar A, Dhara AK, Thakur SB, Sadhu A, Nandi D. Special Convolutional Neural Network for Identification and Positioning of Interstitial Lung Disease Patterns in Computed Tomography Images. PATTERN RECOGNITION AND IMAGE ANALYSIS 2021. [PMCID: PMC8711684 DOI: 10.1134/s1054661821040027] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
In this paper, automated detection of interstitial lung disease patterns in high-resolution computed tomography images is achieved by developing a faster region-based convolutional network (Faster R-CNN) detector with GoogLeNet as a backbone. GoogLeNet is simplified by removing a few inception modules and is used as the backbone of the detector network. The proposed framework is developed to detect several interstitial lung disease patterns without performing lung field segmentation. The proposed method is able to detect the five most prevalent interstitial lung disease patterns: fibrosis, emphysema, consolidation, micronodules, and ground-glass opacity, as well as normal tissue. Five-fold cross-validation has been used to avoid bias and reduce over-fitting. The proposed framework's performance is measured in terms of F-score on the publicly available MedGIFT database, where it outperforms state-of-the-art techniques. Detection is performed at the slice level and could be used for screening and differential diagnosis of interstitial lung disease patterns using high-resolution computed tomography images.
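The evaluation protocol named here — five-fold cross-validation scored with F-score — is standard and can be sketched with scikit-learn. The logistic-regression stand-in and the precomputed feature matrix are assumptions of this sketch, since the paper's actual model is a Faster R-CNN detector evaluated per slice:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def five_fold_f1(X, y):
    """Mean and spread of macro-F1 under five-fold cross-validation."""
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, y, cv=5, scoring="f1_macro")
    return scores.mean(), scores.std()
```

For classifiers, `cv=5` uses stratified folds by default, which keeps the class balance of patterns like fibrosis vs. normal comparable across folds.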
Affiliation(s)
- Abhishek Kumar
- School of Computer and Information Sciences, University of Hyderabad, 500046 Hyderabad, India
- Ashis Kumar Dhara
- Electrical Engineering, National Institute of Technology, 713209 Durgapur, India
- Sumitra Basu Thakur
- Department of Chest and Respiratory Care Medicine, Medical College, 700073 Kolkata, India
- Anup Sadhu
- EKO Diagnostic, Medical College, 700073 Kolkata, India
- Debashis Nandi
- Computer Science and Engineering, National Institute of Technology, 713209 Durgapur, India

34
Aliboni L, Dias OM, Baldi BG, Sawamura MVY, Chate RC, Carvalho CRR, de Albuquerque ALP, Aliverti A, Pennati F. A Convolutional Neural Network Approach to Quantify Lung Disease Progression in Patients with Fibrotic Hypersensitivity Pneumonitis (HP). Acad Radiol 2021; 29:e149-e156. [PMID: 34794883 DOI: 10.1016/j.acra.2021.10.005] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Revised: 10/08/2021] [Accepted: 10/10/2021] [Indexed: 11/01/2022]
Abstract
Rationale and Objectives To evaluate associations between longitudinal changes of quantitative CT parameters and spirometry in patients with fibrotic hypersensitivity pneumonitis (HP). Materials and Methods Serial CT images and spirometric data were retrospectively collected in a group of 25 fibrotic HP patients. Quantitative CT analysis included histogram parameters (median, interquartile range, skewness, and kurtosis) and a pretrained convolutional neural network (CNN)-based textural analysis, aimed at quantifying the extent of consolidation (C), fibrosis (F), ground-glass opacity (GGO), low attenuation areas (LAA) and healthy tissue (H). Results At baseline, FVC was 61(44-70) %pred. The median follow-up period was 1.4(0.8-3.2) years, with 3(2-4) visits per patient. Over the study, 8 patients (32%) showed an FVC decline of more than 5%, a significant worsening of all histogram parameters (p≤0.015) and an increased extent of fibrosis via CNN (p=0.038). On histogram analysis, decreased skewness and kurtosis were the parameters most strongly associated with worsened FVC (respectively, r2=0.63 and r2=0.54, p<0.001). On CNN classification, increased extent of fibrosis and consolidation were the measures most strongly correlated with FVC decline (r2=0.54 and r2=0.44, p<0.001). Conclusion CT histogram and CNN measurements provide sensitive measures of functional changes in fibrotic HP patients over time. Increased fibrosis was associated with FVC decline, providing an index of disease progression. CNN may help improve fibrotic HP follow-up, providing a sensitive tool for detecting progressive interstitial changes, which can potentially contribute to clinical decisions for individualizing disease management.
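The four first-order histogram parameters listed in the Materials and Methods (median, interquartile range, skewness, kurtosis) are straightforward to compute from the Hounsfield-unit values of a segmented lung. A small sketch, with an illustrative function name:

```python
import numpy as np
from scipy import stats

def histogram_parameters(hu_values):
    """First-order histogram metrics over the lung's Hounsfield-unit values."""
    hu = np.asarray(hu_values, dtype=float)
    q1, med, q3 = np.percentile(hu, [25, 50, 75])
    return {
        "median": med,
        "iqr": q3 - q1,                    # interquartile range
        "skewness": stats.skew(hu),
        "kurtosis": stats.kurtosis(hu),    # Fisher (excess) kurtosis
    }
```

Worsening fibrosis shifts density toward less negative HU values, which is why decreasing skewness and kurtosis track FVC decline in the study.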
35
Silva F, Pereira T, Morgado J, Cunha A, Oliveira HP. The Impact of Interstitial Diseases Patterns on Lung CT Segmentation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:2856-2859. [PMID: 34891843 DOI: 10.1109/embc46164.2021.9630354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Lung segmentation represents a fundamental step in the development of computer-aided decision systems for the investigation of interstitial lung diseases. In a holistic lung analysis, eliminating background areas from Computed Tomography (CT) images is essential to avoid including noise information and spending unnecessary computational resources on non-relevant data. However, the major challenge in this segmentation task lies in the ability of the models to deal with imaging manifestations associated with severe disease. Based on U-net, a general biomedical image segmentation architecture, we proposed a lightweight and faster architecture. In this 2D approach, experiments were conducted with a combination of two publicly available databases to improve the heterogeneity of the training data. Results showed that, when compared to the original U-net, the proposed architecture maintained performance levels, achieving 0.894 ± 0.060, 4.493 ± 0.633 and 4.457 ± 0.628 for the DSC, HD and HD-95 metrics, respectively, when using all patients from the ILD database for testing only, while allowing more efficient computational usage. Quantitative and qualitative evaluations of the ability to cope with high-density lung patterns associated with severe disease were conducted, supporting the idea that more representative and diverse data are necessary to build robust and reliable segmentation tools.
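Of the three metrics reported (DSC, HD, HD-95), the Dice similarity coefficient is the simplest to state. A minimal sketch on binary masks (the Hausdorff distances require surface point sets and are left out here):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

A DSC of 0.894 therefore means the predicted and reference lung masks share roughly 89% of their combined area, averaged over the test set.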
36
Li P, Kong X, Li J, Zhu G, Lu X, Shen P, Shah SAA, Bennamoun M, Hua T. A Dataset of Pulmonary Lesions With Multiple-Level Attributes and Fine Contours. Front Digit Health 2021; 2:609349. [PMID: 34713070 PMCID: PMC8521952 DOI: 10.3389/fdgth.2020.609349] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2020] [Accepted: 12/09/2020] [Indexed: 11/13/2022] Open
Abstract
Lung cancer is a life-threatening disease and its diagnosis is of great significance. Data scarcity and the unavailability of datasets are a major bottleneck in lung cancer research. In this paper, we introduce a dataset of pulmonary lesions for designing computer-aided diagnosis (CAD) systems. The dataset has fine contour annotations and nine attribute annotations. We define the structure of the dataset in detail, then discuss the relationship between the attributes and pathology, and the correlation among the nine attributes using the chi-square test. To demonstrate the contribution of our dataset to computer-aided system design, we define four tasks that can be developed using it. We then use our dataset to model multi-attribute classification tasks, and discuss the performance of the classification model with 2D, 2.5D, and 3D input modes. To improve performance, we introduce two attention mechanisms and verify their principles through visualization. Experimental results show the relationship between different models and different levels of attributes.
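The chi-square test of association between two categorical attribute annotations, as used in this abstract, can be sketched with SciPy. Encoding the attributes as small integers (and the function name itself) is an assumption of the sketch:

```python
import numpy as np
from scipy.stats import chi2_contingency

def attribute_association(attr_a, attr_b):
    """Chi-square test of independence between two categorical attribute
    columns, encoded as integers 0..k. Returns statistic, p-value, dof."""
    table = np.zeros((max(attr_a) + 1, max(attr_b) + 1), dtype=int)
    for a, b in zip(attr_a, attr_b):
        table[a, b] += 1                      # build the contingency table
    chi2, p, dof, _ = chi2_contingency(table)
    return chi2, p, dof
```

A small p-value indicates the two annotated attributes co-occur more (or less) often than chance, i.e., they are correlated in the dataset.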
Affiliation(s)
- Ping Li
- Shanghai BNC, Shanghai, China
- Xiangwen Kong
- Embedded Technology & Vision Processing Research Center, Xidian University, Xi'an, China
- Johann Li
- Embedded Technology & Vision Processing Research Center, Xidian University, Xi'an, China
- Guangming Zhu
- Embedded Technology & Vision Processing Research Center, Xidian University, Xi'an, China
- Syed Afaq Ali Shah
- College of Science, Health, Engineering and Education, Murdoch University, Perth, WA, Australia
- Mohammed Bennamoun
- School of Computer Science and Software Engineering, The University of Western Australia, Perth, WA, Australia
- Tao Hua
- PET Center, Huashan Hospital, Fudan University, Shanghai, China
37

38
Osadebey M, Andersen HK, Waaler D, Fossaa K, Martinsen ACT, Pedersen M. Three-stage segmentation of lung region from CT images using deep neural networks. BMC Med Imaging 2021; 21:112. [PMID: 34266391 PMCID: PMC8280386 DOI: 10.1186/s12880-021-00640-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2021] [Accepted: 07/06/2021] [Indexed: 11/23/2022] Open
Abstract
BACKGROUND Lung region segmentation is an important stage of automated image-based approaches for the diagnosis of respiratory diseases. Manual methods executed by experts are considered the gold standard, but they are time consuming and their accuracy depends on radiologists' experience. Automated methods are relatively fast and reproducible, with the potential to facilitate physician interpretation of images. However, these benefits are possible only after overcoming several challenges. Traditional methods that are formulated as a three-stage segmentation demonstrate promising results on normal CT data but perform poorly in the presence of pathological features and variations in image quality attributes. The implementation of deep learning methods that can demonstrate superior performance over traditional methods depends on the quantity, quality, cost, and time it takes to generate training data. Thus, an efficient and clinically relevant automated segmentation method is desired for the diagnosis of respiratory diseases. METHODS We implement each of the three stages of traditional methods using deep learning methods trained on five different configurations of training data, with ground truths obtained from the 3D Image Reconstruction for Comparison of Algorithm Database (3DIRCAD) and the Interstitial Lung Diseases (ILD) database. The data were augmented with the Lung Image Database Consortium (LIDC-IDRI) image collection and a realistic phantom. A convolutional neural network (CNN) at the preprocessing stage classifies the input into lung and non-lung regions. The processing stage was implemented using a CNN-based U-net, while the postprocessing stage utilizes another U-net and a CNN for contour refinement and filtering out false positives, respectively. RESULTS The performance of the proposed method was evaluated on 1230 and 1100 CT slices from the 3DIRCAD and ILD databases. We investigate the performance of the proposed method on five configurations of training data and three configurations of the segmentation system: the full three-stage segmentation, and the three-stage segmentation without a CNN classifier and without contrast enhancement, respectively. The Dice scores recorded by the proposed method range from 0.76 to 0.95. CONCLUSION The clinical relevance and segmentation accuracy of deep learning models can improve through deep learning-based three-stage segmentation, image quality evaluation and enhancement, and augmentation of the training data with a large volume of inexpensive, good-quality training data. We also propose a novel deep learning-based method of contour refinement.
Affiliation(s)
- Michael Osadebey
- Department of Computer Science, Norwegian University of Science and Technology, Gjøvik, Norway
- Hilde K. Andersen
- Department of Diagnostic Physics, Oslo University Hospital, Oslo, Norway
- Dag Waaler
- Department of Health Sciences, Norwegian University of Science and Technology, Gjøvik, Norway
- Kristian Fossaa
- Department of Diagnostic Physics, Oslo University Hospital, Oslo, Norway
- Anne C. T. Martinsen
- The Faculty of Health Sciences, Oslo Metropolitan University, Oslo, Norway
- Sunnaas Rehabilitation Hospital, Nesoddtangen, Norway
- Marius Pedersen
- Department of Computer Science, Norwegian University of Science and Technology, Gjøvik, Norway

39
Ozyurt F, Tuncer T, Subasi A. An automated COVID-19 detection based on fused dynamic exemplar pyramid feature extraction and hybrid feature selection using deep learning. Comput Biol Med 2021; 132:104356. [PMID: 33799219 PMCID: PMC7997855 DOI: 10.1016/j.compbiomed.2021.104356] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2020] [Revised: 03/20/2021] [Accepted: 03/21/2021] [Indexed: 12/16/2022]
Abstract
The new coronavirus disease known as COVID-19 has caused a pandemic that has spread throughout the whole world. Several methods have been presented to detect COVID-19 disease. Computer vision methods have been widely utilized to detect COVID-19 using chest X-ray and computed tomography (CT) images. This work introduces a model for the automatic detection of COVID-19 using CT images. A novel handcrafted feature generation technique and a hybrid feature selector are used together to achieve better performance. The primary goal of the proposed framework is to achieve a higher classification accuracy than convolutional neural networks (CNN) using handcrafted features of the CT images. The proposed framework has four fundamental phases: preprocessing; fused dynamic-sized exemplar-based pyramid feature generation; ReliefF and iterative neighborhood component analysis based feature selection; and deep neural network classification. In the preprocessing phase, CT images are converted into 2D matrices and resized to 256 × 256 images. The proposed feature generation network uses dynamic-sized exemplars and pyramid structures together, and two basic feature generation functions extract statistical and textural features. The selected most informative features are forwarded to artificial neural network (ANN) and deep neural network (DNN) classifiers, which achieved 94.10% and 95.84% classification accuracy, respectively. The proposed fused feature generator and iterative hybrid feature selector achieved the best success rate according to the results obtained using CT images.
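The feature-selection stage of such a framework can be sketched generically: generate simple per-image statistics, then keep only the most informative ones. Mutual information stands in here for the paper's ReliefF plus iterative neighborhood component analysis combination, and the five statistical features are illustrative, so treat this as the shape of the pipeline rather than the authors' method:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

def statistical_features(image):
    """Five simple first-order statistics per image (an illustrative set)."""
    x = np.asarray(image, dtype=float).ravel()
    return np.array([x.mean(), x.std(), x.min(), x.max(), np.median(x)])

def select_informative(images, labels, k=3):
    """Score every feature against the labels and keep the top k."""
    X = np.stack([statistical_features(im) for im in images])
    score = lambda X, y: mutual_info_classif(X, y, random_state=0)
    selector = SelectKBest(score, k=k).fit(X, labels)
    return selector.transform(X), selector.get_support(indices=True)
```

The reduced feature matrix would then be passed to the ANN/DNN classifier of the final phase.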
Affiliation(s)
- Fatih Ozyurt
- Department of Software Engineering, College of Engineering, Firat University, Elazig, Turkey
- Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey
- Abdulhamit Subasi
- Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, 20520, Finland
- Department of Computer Science, College of Engineering, Effat University, Jeddah, 21478, Saudi Arabia

40
Yoo SJ, Yoon SH, Lee JH, Kim KH, Choi HI, Park SJ, Goo JM. Automated Lung Segmentation on Chest Computed Tomography Images with Extensive Lung Parenchymal Abnormalities Using a Deep Neural Network. Korean J Radiol 2021; 22:476-488. [PMID: 33169549 PMCID: PMC7909864 DOI: 10.3348/kjr.2020.0318] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2020] [Revised: 05/31/2020] [Accepted: 06/28/2020] [Indexed: 01/12/2023] Open
Abstract
OBJECTIVE We aimed to develop a deep neural network for segmenting lung parenchyma with extensive pathological conditions on non-contrast chest computed tomography (CT) images. MATERIALS AND METHODS Thin-section non-contrast chest CT images from 203 patients (115 males, 88 females; age range, 31-89 years) between January 2017 and May 2017 were included in the study, of which 150 cases had extensive lung parenchymal disease involving more than 40% of the parenchymal area. Parenchymal diseases included interstitial lung disease (ILD), emphysema, nontuberculous mycobacterial lung disease, tuberculous destroyed lung, pneumonia, lung cancer, and other diseases. Five experienced radiologists manually drew the margin of the lungs, slice by slice, on CT images. The dataset used to develop the network consisted of 157 cases for training, 20 cases for development, and 26 cases for internal validation. Two-dimensional (2D) U-Net and three-dimensional (3D) U-Net models were used for the task. The network was trained to segment the lung parenchyma as a whole and segment the right and left lung separately. The University Hospitals of Geneva ILD dataset, which contained high-resolution CT images of ILD, was used for external validation. RESULTS The Dice similarity coefficients for internal validation were 99.6 ± 0.3% (2D U-Net whole lung model), 99.5 ± 0.3% (2D U-Net separate lung model), 99.4 ± 0.5% (3D U-Net whole lung model), and 99.4 ± 0.5% (3D U-Net separate lung model). The Dice similarity coefficients for the external validation dataset were 98.4 ± 1.0% (2D U-Net whole lung model) and 98.4 ± 1.0% (2D U-Net separate lung model). In 31 cases, where the extent of ILD was larger than 75% of the lung parenchymal area, the Dice similarity coefficients were 97.9 ± 1.3% (2D U-Net whole lung model) and 98.0 ± 1.2% (2D U-Net separate lung model). 
CONCLUSION The deep neural network achieved excellent performance in automatically delineating the boundaries of lung parenchyma with extensive pathological conditions on non-contrast chest CT images.
Affiliation(s)
- Seung Jin Yoo
- Department of Radiology, Hanyang University Medical Center, Hanyang University College of Medicine, Seoul, Korea
- Soon Ho Yoon
- Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea
- Jong Hyuk Lee
- Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea
- Ki Hwan Kim
- Department of Radiology, Myongji Hospital, Goyang, Korea
- Sang Joon Park
- Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea
- MEDICALIP Co. Ltd., Seoul, Korea
- Jin Mo Goo
- Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea

41
Islam MM, Karray F, Alhajj R, Zeng J. A Review on Deep Learning Techniques for the Diagnosis of Novel Coronavirus (COVID-19). IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:30551-30572. [PMID: 34976571 PMCID: PMC8675557 DOI: 10.1109/access.2021.3058537] [Citation(s) in RCA: 114] [Impact Index Per Article: 38.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Accepted: 02/06/2021] [Indexed: 05/03/2023]
Abstract
The novel coronavirus (COVID-19) outbreak has raised a calamitous situation all over the world and has become one of the most acute and severe ailments of the past hundred years. The prevalence of COVID-19 is rising rapidly every day throughout the globe. Although no vaccines for this pandemic have been discovered yet, deep learning techniques have proved to be a powerful tool in the arsenal used by clinicians for the automatic diagnosis of COVID-19. This paper aims to overview the recently developed systems based on deep learning techniques using different medical imaging modalities such as computed tomography (CT) and X-ray. This review specifically discusses the systems developed for COVID-19 diagnosis using deep learning techniques and provides insights into well-known datasets used to train these networks. It also highlights the data partitioning techniques and various performance measures developed by researchers in this field. A taxonomy is drawn to categorize the recent works for proper insight. Finally, we conclude by addressing the challenges associated with the use of deep learning methods for COVID-19 detection and probable future trends in this research area. The aim of this paper is to facilitate experts (medical or otherwise) and technicians in understanding how deep learning techniques are used in this regard and how they can potentially be further utilized to combat the outbreak of COVID-19.
Affiliation(s)
- Md. Milon Islam
- Centre for Pattern Analysis and Machine Intelligence, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Fakhri Karray
- Centre for Pattern Analysis and Machine Intelligence, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Reda Alhajj
- Department of Computer Science, University of Calgary, Calgary, AB T2N 1N4, Canada
- Jia Zeng
- Institute for Personalized Cancer Therapy, MD Anderson Cancer Center, Houston, TX 77030, USA

42
Pawar SP, Talbar SN. LungSeg-Net: Lung field segmentation using generative adversarial network. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102296]
43
LaLonde R, Xu Z, Irmakci I, Jain S, Bagci U. Capsules for biomedical image segmentation. Med Image Anal 2021; 68:101889. [PMID: 33246227] [PMCID: PMC7944580] [DOI: 10.1016/j.media.2020.101889]
Abstract
Our work expands the use of capsule networks to the task of object segmentation for the first time in the literature. This is made possible via the introduction of locally-constrained routing and transformation matrix sharing, which reduces the parameter/memory burden and allows for the segmentation of objects at large resolutions. To compensate for the loss of global information in constraining the routing, we propose the concept of "deconvolutional" capsules to create a deep encoder-decoder style network, called SegCaps. We extend the masked reconstruction regularization to the task of segmentation and perform thorough ablation experiments on each component of our method. The proposed convolutional-deconvolutional capsule network, SegCaps, shows state-of-the-art results while using a fraction of the parameters of popular segmentation networks. To validate our proposed method, we perform experiments segmenting pathological lungs from clinical and pre-clinical thoracic computed tomography (CT) scans and segmenting muscle and adipose (fat) tissue from magnetic resonance imaging (MRI) scans of human subjects' thighs. Notably, our experiments in lung segmentation represent the largest-scale study in pathological lung segmentation in the literature, conducted across five extremely challenging datasets containing both clinical and pre-clinical subjects and nearly 2000 CT scans. Our newly developed segmentation platform outperforms other methods across all datasets while utilizing less than 5% of the parameters of the popular U-Net for biomedical image segmentation. Further, we demonstrate capsules' ability to generalize to unseen rotations and reflections on natural images.
44
Abstract
The interest in artificial intelligence (AI) has ballooned within radiology in the past few years primarily due to notable successes of deep learning. With the advances brought by deep learning, AI has the potential to recognize and localize complex patterns from different radiological imaging modalities, many of which even achieve comparable performance to human decision-making in recent applications. In this chapter, we review several AI applications in radiology for different anatomies: chest, abdomen, pelvis, as well as general lesion detection/identification that is not limited to specific anatomies. For each anatomy site, we focus on introducing the tasks of detection, segmentation, and classification with an emphasis on describing the technology development pathway with the aim of providing the reader with an understanding of what AI can do in radiology and what still needs to be done for AI to better fit in radiology. Combining with our own research experience of AI in medicine, we elaborate how AI can enrich knowledge discovery, understanding, and decision-making in radiology, rather than replacing the radiologist.
45
Comelli A, Coronnello C, Dahiya N, Benfante V, Palmucci S, Basile A, Vancheri C, Russo G, Yezzi A, Stefano A. Lung Segmentation on High-Resolution Computerized Tomography Images Using Deep Learning: A Preliminary Step for Radiomics Studies. J Imaging 2020; 6:125. [PMID: 34460569] [PMCID: PMC8321165] [DOI: 10.3390/jimaging6110125]
Abstract
BACKGROUND: The aim of this work is to identify an automatic, accurate, and fast deep learning segmentation approach for the lung parenchyma, using a very small dataset of high-resolution computed tomography images of patients with idiopathic pulmonary fibrosis. In this way, we aim to enhance the methodology used by healthcare operators in radiomics studies, where operator-independent segmentation methods are required to correctly identify the target and, consequently, the texture-based prediction model. METHODS: Two deep learning models were investigated: (i) U-Net, already used in many biomedical image segmentation tasks, and (ii) E-Net, used for image segmentation in self-driving cars, where hardware availability is limited and accurate segmentation is critical for user safety. Our small image dataset comprises 42 studies of patients with idiopathic pulmonary fibrosis, of which only 32 were used for the training phase. We compared the performance of the two models in terms of the similarity of their segmentation outcome with the gold standard and in terms of their resource requirements. RESULTS: E-Net can be used to obtain accurate (Dice similarity coefficient = 95.90%), fast (20.32 s), and clinically acceptable segmentation of the lung region. CONCLUSIONS: We demonstrated that deep learning models can be efficiently applied to rapidly segment and quantify the parenchyma of patients with pulmonary fibrosis, without any radiologist supervision, to produce operator-independent results.
46
Extracting Lungs from CT Images via Deep Convolutional Neural Network Based Segmentation and Two-Pass Contour Refinement. J Digit Imaging 2020; 33:1465-1478. [PMID: 33057882] [DOI: 10.1007/s10278-020-00388-0]
Abstract
Lung segmentation is a key step of thoracic computed tomography (CT) image processing, and it plays an important role in computer-aided pulmonary disease diagnostics. However, the presence of image noise, pathologies, vessels, individual anatomical variation, and so on makes lung segmentation a complex task. In this paper, we present a fully automatic algorithm for accurately segmenting lungs from thoracic CT images. An input image is first split into a set of non-overlapping fixed-sized image patches, and a deep convolutional neural network model is constructed to extract initial lung regions by classifying image patches. Superpixel segmentation is then performed on the preprocessed thoracic CT image, and the lung contours are locally refined according to corresponding superpixel contours with our adjacent point statistics method. Segmented lung contours are further globally refined by an edge direction tracing technique for the inclusion of juxta-pleural lesions. Our algorithm is tested on a group of thoracic CT scans with interstitial lung diseases. Experiments show that our algorithm achieves an average Dice similarity coefficient of 97.95% and Jaccard similarity index of 94.48%, with a 2.8% average over-segmentation rate and 3.3% under-segmentation rate compared with manually segmented results. Meanwhile, it shows better performance than several feature-based machine learning methods and other recent lung segmentation methods.
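The Dice and Jaccard overlap metrics reported in entries like this one have simple set-based definitions. As a minimal illustration (a toy sketch on flat 0/1 lists, not any paper's implementation):

```python
def dice(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks given as 0/1 sequences."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def jaccard(pred, truth):
    """Jaccard = |A∩B| / |A∪B|; related to Dice by J = D / (2 - D)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(dice(pred, truth))     # 2*2 / (3+3) ≈ 0.667
print(jaccard(pred, truth))  # 2 / 4 = 0.5
```

Both metrics are overlap ratios, so the identity J = D / (2 − D) lets one be recovered from the other when only one is reported.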
47
Draelos RL, Dov D, Mazurowski MA, Lo JY, Henao R, Rubin GD, Carin L. Machine-learning-based multiple abnormality prediction with large-scale chest computed tomography volumes. Med Image Anal 2020; 67:101857. [PMID: 33129142] [DOI: 10.1016/j.media.2020.101857]
Abstract
Machine learning models for radiology benefit from large-scale data sets with high quality labels for abnormalities. We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients. This is the largest multiply-annotated volumetric medical imaging data set reported. To annotate this data set, we developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports with an average F-score of 0.976 (min 0.941, max 1.0). We also developed a model for multi-organ, multi-disease classification of chest CT volumes that uses a deep convolutional neural network (CNN). This model reached a classification performance of AUROC >0.90 for 18 abnormalities, with an average AUROC of 0.773 for all 83 abnormalities, demonstrating the feasibility of learning from unfiltered whole volume CT data. We show that training on more labels improves performance significantly: for a subset of 9 labels - nodule, opacity, atelectasis, pleural effusion, consolidation, mass, pericardial effusion, cardiomegaly, and pneumothorax - the model's average AUROC increased by 10% when the number of training labels was increased from 9 to all 83. All code for volume preprocessing, automated label extraction, and the volume abnormality prediction model is publicly available. The 36,316 CT volumes and labels will also be made publicly available pending institutional approval.
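The AUROC figures quoted above can be understood via the rank-sum (Mann-Whitney U) identity: AUROC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counted as 1/2. A plain-Python sketch of that definition (illustrative only, unrelated to the paper's code):

```python
def auroc(scores, labels):
    """AUROC = P(score of random positive > score of random negative),
    ties counted as 1/2. labels are 0/1."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:            # O(n_pos * n_neg); fine for small examples
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

print(auroc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))  # 3 of 4 pairs ordered correctly → 0.75
```

Production code would use a sorting-based O(n log n) version, but the pairwise form makes the probabilistic meaning of "AUROC > 0.90" explicit.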
48
Jin C, Chen W, Cao Y, Xu Z, Tan Z, Zhang X, Deng L, Zheng C, Zhou J, Shi H, Feng J. Development and evaluation of an artificial intelligence system for COVID-19 diagnosis. Nat Commun 2020; 11:5088. [PMID: 33037212] [DOI: 10.1101/2020.03.20.20039834]
Abstract
Early detection of COVID-19 based on chest CT enables timely treatment of patients and helps control the spread of the disease. We proposed an artificial intelligence (AI) system for rapid COVID-19 detection and performed extensive statistical analysis of CTs of COVID-19 based on the AI system. We developed and evaluated our system on a large dataset with more than 10 thousand CT volumes from COVID-19, influenza-A/B, non-viral community acquired pneumonia (CAP), and non-pneumonia subjects. In this difficult multi-class diagnosis task, our deep convolutional neural network-based system achieves an area under the receiver operating characteristic curve (AUC) of 97.81% for multi-way classification on a test cohort of 3,199 scans, and AUCs of 92.99% and 93.25% on two publicly available datasets, CC-CCII and MosMedData, respectively. In a reader study involving five radiologists, the AI system outperforms all of the radiologists on the more challenging tasks, at a speed two orders of magnitude faster. Diagnosis performance on chest X-ray (CXR) is compared to that on CT. Detailed interpretation of the deep network is also performed to relate system outputs to CT presentations. The code is available at https://github.com/ChenWWWeixiang/diagnosis_covid19 .
49
Ozsahin I, Sekeroglu B, Musa MS, Mustapha MT, Uzun Ozsahin D. Review on Diagnosis of COVID-19 from Chest CT Images Using Artificial Intelligence. Comput Math Methods Med 2020; 2020:9756518. [PMID: 33014121] [PMCID: PMC7519983] [DOI: 10.1155/2020/9756518]
Abstract
The COVID-19 diagnostic approach is mainly divided into two broad categories: a laboratory-based approach and a chest radiography approach. The last few months have witnessed a rapid increase in the number of studies using artificial intelligence (AI) techniques to diagnose COVID-19 from chest computed tomography (CT). In this study, we review AI-based approaches to diagnosing COVID-19 from chest CT. We searched ArXiv, MedRxiv, and Google Scholar using the terms "deep learning", "neural networks", "COVID-19", and "chest CT". At the time of writing (August 24, 2020), there were nearly 100 such studies, of which 30 were selected for this review. We categorized the studies by classification task: COVID-19/normal, COVID-19/non-COVID-19, COVID-19/non-COVID-19 pneumonia, and severity. The reported sensitivity, specificity, precision, accuracy, area under the curve, and F1 score were as high as 100%, 100%, 99.62%, 99.87%, 100%, and 99.5%, respectively. However, the presented results should be compared carefully, owing to the differing difficulty of the classification tasks.
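The metrics compared in reviews like this one all derive from the binary confusion matrix. A toy illustration of the definitions (the numbers below are made up, not taken from any cited study):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall / true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    precision   = tp / (tp + fp)   # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, precision, f1

# Hypothetical test set: 90 true positives, 10 false positives,
# 80 true negatives, 20 false negatives.
sens, spec, prec, f1 = binary_metrics(tp=90, fp=10, tn=80, fn=20)
print(round(sens, 3), round(spec, 3), round(prec, 3), round(f1, 3))
```

Because each metric conditions on a different slice of the matrix, a model can score near 100% on one while being mediocre on another, which is one reason the review cautions against comparing headline numbers across tasks.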
50
Hatabu H, Hunninghake GM, Richeldi L, Brown KK, Wells AU, Remy-Jardin M, Verschakelen J, Nicholson AG, Beasley MB, Christiani DC, San José Estépar R, Seo JB, Johkoh T, Sverzellati N, Ryerson CJ, Graham Barr R, Goo JM, Austin JHM, Powell CA, Lee KS, Inoue Y, Lynch DA. Interstitial lung abnormalities detected incidentally on CT: a Position Paper from the Fleischner Society. Lancet Respir Med 2020; 8:726-737. [PMID: 32649920] [DOI: 10.1016/s2213-2600(20)30168-5]
Abstract
The term interstitial lung abnormalities refers to specific CT findings that are potentially compatible with interstitial lung disease in patients without clinical suspicion of the disease. Interstitial lung abnormalities are increasingly recognised as a common feature on CT of the lung in older individuals, occurring in 4-9% of smokers and 2-7% of non-smokers. Identification of interstitial lung abnormalities will increase with implementation of lung cancer screening, along with increased use of CT for other diagnostic purposes. These abnormalities are associated with radiological progression, increased mortality, and the risk of complications from medical interventions, such as chemotherapy and surgery. Management requires distinguishing interstitial lung abnormalities that represent clinically significant interstitial lung disease from those that are subclinical. In particular, it is important to identify the subpleural fibrotic subtype, which is more likely to progress and to be associated with mortality. This multidisciplinary Position Paper by the Fleischner Society addresses important issues regarding interstitial lung abnormalities, including standardisation of the definition and terminology; predisposing risk factors; clinical outcomes; options for initial evaluation, monitoring, and management; the role of quantitative evaluation; and future research needs.