1. Chen J, Zeng H, Cheng Y, Yang B. Identifying radiogenomic associations of breast cancer based on DCE-MRI by using Siamese Neural Network with manufacturer bias normalization. Med Phys 2024. [PMID: 38922986] [DOI: 10.1002/mp.17266]
Abstract
BACKGROUND AND PURPOSE The immunohistochemical test (IHC) for Human Epidermal Growth Factor Receptor 2 (HER2) and hormone receptors (HR) provides prognostic information and guides treatment for patients with invasive breast cancer. The objective of this paper is to establish a non-invasive system for identifying HER2 and HR in breast cancer using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). METHODS In light of the absence of high-performance algorithms and external validation in previously published methods, this study utilizes 3D deep features and radiomics features to represent the information of the Region of Interest (ROI). A Siamese Neural Network was employed as the classifier, with the 3D deep features and radiomics features serving as the network input. To neutralize manufacturer bias, a batch effect normalization method, ComBat, was introduced. To enhance the reliability of the study, two datasets from the Investigation of Serial Studies to Predict Your Therapeutic Response with Imaging and moLecular Analysis trials, I-SPY 1 and I-SPY 2, were incorporated: I-SPY 2 was utilized for model training and validation, while I-SPY 1 was exclusively employed for external validation. Additionally, a breast tumor segmentation network was trained to improve radiomic feature extraction. RESULTS The results indicate that our approach achieved an average Area Under the Curve (AUC) of 0.632, with a Standard Error of the Mean (SEM) of 0.042, for HER2 prediction in the I-SPY 2 dataset. For HR prediction, our method attained an AUC of 0.635 (SEM 0.041), surpassing other published methods in the AUC metric. Moreover, the proposed method yielded competitive results in other metrics. In external validation using the I-SPY 1 dataset, our approach achieved an AUC of 0.567 (SEM 0.032) for HR prediction and 0.563 (SEM 0.033) for HER2 prediction. CONCLUSION This study proposes a non-invasive system for identifying HER2 and HR in breast cancer.
Although the results do not conclusively demonstrate superiority in both tasks, they indicate that the proposed method achieves good performance and is competitive with other reference methods. Ablation studies demonstrate that both the radiomics features and the deep features fed to the Siamese Neural Network benefit the model, and the introduced manufacturer bias normalization method has been shown to enhance performance. Furthermore, the external validation enhances the reliability of this research. Source code, the pre-trained segmentation network, radiomics and deep features, data for statistical analysis, and Supporting Information are available online at: https://github.com/FORRESTHUACHEN/Siamese_Neural_Network_based_Brest_cancer_Radiogenomic.
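The ComBat step above removes scanner-manufacturer batch effects from the feature matrix before classification. As a rough illustration of the idea, the sketch below aligns each manufacturer's per-feature mean and variance to the pooled estimates; full ComBat additionally applies empirical-Bayes shrinkage to the batch estimates, and the function name and interface here are ours, not the authors' released code:

```python
import numpy as np

def combat_like_normalize(features, batches):
    """Simplified ComBat-style harmonization: shift and rescale each
    batch so its per-feature mean/std match the pooled estimates."""
    features = np.asarray(features, dtype=float)
    batches = np.asarray(batches)
    grand_mean = features.mean(axis=0)
    pooled_std = features.std(axis=0) + 1e-8
    adjusted = np.empty_like(features)
    for b in np.unique(batches):
        idx = batches == b
        mu_b = features[idx].mean(axis=0)
        sd_b = features[idx].std(axis=0) + 1e-8
        # remove batch location/scale, restore pooled location/scale
        adjusted[idx] = (features[idx] - mu_b) / sd_b * pooled_std + grand_mean
    return adjusted
```

After this adjustment, radiomic features from different manufacturers share a common location and scale, so the classifier cannot exploit scanner identity as a shortcut.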
Affiliation(s)
- Junhua Chen
- School of Medicine, Shanghai University, Shanghai, China
- Haiyan Zeng
- Department of Radiation Oncology, Division of Thoracic Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Yanyan Cheng
- Medical Engineering Department, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Shandong, China
- Banghua Yang
- School of Medicine, Shanghai University, Shanghai, China
- School of Mechatronic Engineering and Automation, Research Center of Brain Computer Engineering, Shanghai University, Shanghai, China
2. Kumar N, Srivastava R. Deep learning in structural bioinformatics: current applications and future perspectives. Brief Bioinform 2024; 25:bbae042. [PMID: 38701422] [PMCID: PMC11066934] [DOI: 10.1093/bib/bbae042]
Abstract
In this review article, we explore the transformative impact of deep learning (DL) on structural bioinformatics, emphasizing its pivotal role in a scientific revolution driven by extensive data, accessible toolkits and robust computing resources. As big data continue to advance, DL is poised to become an integral component in healthcare and biology, revolutionizing analytical processes. Our comprehensive review provides detailed insights into DL, featuring specific demonstrations of its notable applications in bioinformatics. We address challenges tailored for DL, spotlight recent successes in structural bioinformatics and present a clear exposition of DL, from basic shallow neural networks to advanced models such as convolutional, recurrent and transformer neural networks. This paper discusses the emerging use of DL for understanding biomolecular structures, anticipating ongoing developments and applications in the realm of structural bioinformatics.
Affiliation(s)
- Niranjan Kumar
- School of Computational and Integrative Sciences, Jawaharlal Nehru University, New Delhi, India
- Rakesh Srivastava
- Center for Computational Natural Sciences and Bioinformatics, International Institute of Information Technology, Hyderabad, India
3. Sheikh TS, Cho M. Segmentation of Variants of Nuclei on Whole Slide Images by Using Radiomic Features. Bioengineering (Basel) 2024; 11:252. [PMID: 38534526] [DOI: 10.3390/bioengineering11030252]
Abstract
The histopathological segmentation of nuclear types is a challenging task because nuclei exhibit distinct morphologies, textures, and staining characteristics. Accurate segmentation is critical because it affects the diagnostic workflow for patient assessment. In this study, a framework was proposed for segmenting various types of nuclei from different organs of the body. The proposed framework improved the segmentation performance for each nuclear type using radiomics. First, we used distinct radiomic features to extract and analyze quantitative information about each type of nucleus, and subsequently trained various classifiers on the best input sub-features of each radiomic feature, selected by a LASSO operator. Second, we fed the outputs of the best classifier into various segmentation models to learn the variants of nuclei. Using the MoNuSAC2020 dataset, we achieved state-of-the-art segmentation performance for each category of nuclei despite complex, overlapping, and obscure regions. The generalized adaptability of the proposed framework was verified by the consistent performance obtained on whole slide images of different organs of the body and across radiomic features.
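LASSO-based sub-feature selection of the kind described above can be sketched with scikit-learn: the L1 penalty drives uninformative coefficients to exactly zero, and the surviving indices form the selected sub-feature set. This is a minimal illustration of the general technique, not the authors' pipeline; the `alpha` value and the synthetic data in the usage note are assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

def select_radiomic_features(X, y, alpha=0.05):
    """Return indices of features whose LASSO coefficients are non-zero.

    Standardizing first puts all radiomic features on a comparable
    scale, so the L1 penalty treats them evenly.
    """
    Xs = StandardScaler().fit_transform(X)
    lasso = Lasso(alpha=alpha).fit(Xs, y)
    return np.flatnonzero(lasso.coef_)
```

On synthetic data where only the first two of ten columns drive the target, the returned index set should contain those two columns and discard most of the rest.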
Affiliation(s)
- Taimoor Shakeel Sheikh
- AIMI-Artificial Intelligence and Medical Imaging Laboratory, Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Republic of Korea
- Migyung Cho
- AIMI-Artificial Intelligence and Medical Imaging Laboratory, Department of Computer & Media Engineering, Tongmyong University, Busan 48520, Republic of Korea
4. Abdulahi AT, Ogundokun RO, Adenike AR, Shah MA, Ahmed YK. PulmoNet: a novel deep learning based pulmonary diseases detection model. BMC Med Imaging 2024; 24:51. [PMID: 38418987] [PMCID: PMC10903074] [DOI: 10.1186/s12880-024-01227-2]
Abstract
Pulmonary diseases are pathological conditions that affect respiratory tissues and organs, impairing gas exchange during inhalation and exhalation. They range from mild, self-limiting illnesses, such as the common cold and catarrh, to life-threatening ones, such as viral pneumonia (VP), bacterial pneumonia (BP), and tuberculosis, as well as severe acute respiratory syndromes such as coronavirus disease 2019 (COVID-19). The cost of diagnosing and treating pulmonary infections is high, especially in developing countries, and since radiography images (X-ray and computed tomography (CT) scan images) have proven beneficial in detecting various pulmonary infections, many machine learning (ML) models and image processing procedures have been utilized to identify them. Timely and accurate detection can be lifesaving, especially during a pandemic. This paper therefore proposes a deep convolutional neural network (DCNN)-based image detection model, optimized with an image augmentation technique, to detect three different pulmonary diseases (COVID-19, bacterial pneumonia, and viral pneumonia). A dataset containing four classes (healthy (10,325), COVID-19 (3,749), BP (883), and VP (1,478)) was utilized as training/testing data for the suggested model. The model's performance indicates high potential in detecting the three classes of pulmonary diseases: it recorded average detection accuracies of 94%, 95.4%, 99.4%, and 98.30%, and a training/detection time of about 60/50 s. This result indicates the proficiency of the suggested approach compared to traditional texture-descriptor techniques for pulmonary disease recognition using X-ray and CT scan images. This model, notable for its accuracy and efficiency, promises significant advancements in medical diagnostics, particularly in developing countries, due to its potential to surpass traditional diagnostic methods.
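Image augmentation of the kind used to optimize the DCNN above expands a small radiography dataset with label-preserving geometric transforms. A minimal sketch using flips and rotations only; the paper's exact augmentation recipe is not specified here, so this is an illustrative assumption:

```python
import numpy as np

def augment_batch(images):
    """Expand a batch of 2D images with horizontal flips and 90/180-degree
    rotations, a common recipe for radiograph classifiers."""
    out = []
    for img in images:
        out.append(img)                 # original
        out.append(np.fliplr(img))      # horizontal flip
        out.append(np.rot90(img))       # 90-degree rotation
        out.append(np.rot90(img, 2))    # 180-degree rotation
    return np.stack(out)
```

Each input image yields four training samples, which helps a CNN generalize when some disease classes (here BP with 883 images) are far smaller than others.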
Affiliation(s)
- AbdulRahman Tosho Abdulahi
- Department of Computer Science, Institute of Information and Communication Technology, Kwara State Polytechnic, Ilorin, Nigeria
- Roseline Oluwaseun Ogundokun
- Department of Multimedia Engineering, Kaunas University of Technology, Kaunas, Lithuania
- Department of Computer Science, Landmark University Omu Aran, Omu Aran, Nigeria
- Ajiboye Raimot Adenike
- Department of Statistics, Institute of Applied Sciences, Kwara State Polytechnic, Ilorin, Nigeria
- Mohd Asif Shah
- Department of Economics, Kebri Dehar University, Kebri Dehar, 250, Somali, Ethiopia
- Centre of Research Impact and Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India
- Chitkara Centre for Research and Development, Chitkara University, Baddi, Himachal Pradesh, 174103, India
- Yusuf Kola Ahmed
- Department of Biomedical Engineering, University of Ilorin, Ilorin, Nigeria
- Department of Occupational Therapy, University of Alberta, Edmonton, Canada
5. Pan F, Feng L, Liu B, Hu Y, Wang Q. Application of radiomics in diagnosis and treatment of lung cancer. Front Pharmacol 2023; 14:1295511. [PMID: 38027000] [PMCID: PMC10646419] [DOI: 10.3389/fphar.2023.1295511]
Abstract
Radiomics has become a research field that involves converting standard-of-care images into quantitative image data, which can be combined with other data sources and subsequently analyzed using traditional biostatistics or artificial intelligence (AI) methods. Because radiomics features capture biological and pathophysiological information, these quantitative features have been proven to provide fast and accurate non-invasive biomarkers for lung cancer risk prediction, diagnosis, prognosis, treatment response monitoring, and tumor biology. In this review, radiomics in lung cancer research is emphasized and discussed, including its advantages, challenges, and drawbacks.
Affiliation(s)
- Feng Pan
- Department of Radiation Oncology, China-Japan Union Hospital of Jilin University, Changchun, China
- Department of CT, Jilin Province FAW General Hospital, Changchun, China
- Li Feng
- Department of Radiation Oncology, China-Japan Union Hospital of Jilin University, Changchun, China
- Baocai Liu
- Department of Radiation Oncology, China-Japan Union Hospital of Jilin University, Changchun, China
- Yue Hu
- Department of Biobank, China-Japan Union Hospital of Jilin University, Changchun, China
- Qian Wang
- Department of Radiation Oncology, China-Japan Union Hospital of Jilin University, Changchun, China
6. Liang H, Hu M, Ma Y, Yang L, Chen J, Lou L, Chen C, Xiao Y. Performance of Deep-Learning Solutions on Lung Nodule Malignancy Classification: A Systematic Review. Life (Basel) 2023; 13:1911. [PMID: 37763314] [PMCID: PMC10532719] [DOI: 10.3390/life13091911]
Abstract
OBJECTIVE For several years, computer technology has been utilized to diagnose lung nodules. Compared to traditional machine learning methods for image processing, deep-learning methods can improve the accuracy of lung nodule diagnosis by avoiding the laborious image pre-processing step (hand-crafted feature extraction, etc.). Our goal is to investigate how well deep-learning approaches classify lung nodule malignancy. METHOD We evaluated the performance of deep-learning methods on lung nodule malignancy classification via a systematic literature search. We searched the PubMed and ISI Web of Science databases for appropriate articles and chose those that employed deep learning to classify or predict lung nodule malignancy. The figures were plotted and the data were extracted using SAS version 9.4 and Microsoft Excel 2010, respectively. RESULTS Sixteen studies that met the criteria were included. The articles classified or predicted pulmonary nodule malignancy using convolutional neural networks (CNN), autoencoders (AE), and deep belief networks (DBN). The AUC of the deep-learning models was typically greater than 90% in the included articles, demonstrating that deep learning performs well in the diagnosis and forecasting of lung nodules. CONCLUSION This is a thorough analysis of the most recent advancements in deep-learning technologies for lung nodules. Image processing techniques, traditional machine learning techniques, deep-learning techniques, and other techniques have all been applied to pulmonary nodule diagnosis. Although the deep-learning model has demonstrated distinct advantages in the detection of pulmonary nodules, it also carries significant drawbacks that warrant additional research.
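The AUC figures compared across these studies have a simple probabilistic reading: the AUC equals the probability that a randomly chosen malignant case is scored above a randomly chosen benign one. A minimal sketch of this rank-based (Mann-Whitney) computation:

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC as the normalized Mann-Whitney U statistic: the probability
    that a random positive outranks a random negative (ties count half)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

For example, `auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` evaluates to 0.75: three of the four positive-negative pairs are ranked correctly.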
Affiliation(s)
- Hailun Liang
- School of Public Administration and Policy, Renmin University of China, Beijing 100872, China
- Meili Hu
- Department of Gynecology, Baoding Maternal and Child Health Care Hospital, Baoding 071000, China
- Yuxin Ma
- School of Public Administration and Policy, Renmin University of China, Beijing 100872, China
- Lei Yang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Beijing Office for Cancer Prevention and Control, Peking University Cancer Hospital & Institute, Beijing 100142, China
- Jie Chen
- School of Public Administration and Policy, Renmin University of China, Beijing 100872, China
- Liwei Lou
- School of Statistics, Renmin University of China, Beijing 100872, China
- Chen Chen
- School of Public Administration and Policy, Renmin University of China, Beijing 100872, China
- Yuan Xiao
- Blockchain Research Institute, Renmin University of China, Beijing 100872, China
7. Shivwanshi RR, Nirala N. Hyperparameter optimization and development of an advanced CNN-based technique for lung nodule assessment. Phys Med Biol 2023; 68:175038. [PMID: 37567211] [DOI: 10.1088/1361-6560/acef8c]
Abstract
Objective. This paper aims to propose an advanced methodology for assessing lung nodules using automated techniques with computed tomography (CT) images to detect lung cancer at an early stage. Approach. The proposed methodology utilizes a fixed-size 3 × 3 kernel in a convolutional neural network (CNN) for relevant feature extraction. The network architecture comprises 13 layers, including six convolution layers for deep local and global feature extraction. The nodule detection architecture is enhanced by incorporating a transfer learning-based EfficientNetV_2 network (TLEV2N) to improve training performance. The classification of nodules is achieved by integrating the EfficientNet_V2 architecture of CNN for more accurate benign and malignant classification. The network architecture is fine-tuned to extract relevant features using a deep network while maintaining performance through suitable hyperparameters. Main results. The proposed method significantly reduces the false-negative rate, with the network achieving an accuracy of 97.56% and a specificity of 98.4%. Using the 3 × 3 kernel provides valuable insights into minute pixel variation and enables the extraction of information at a broader morphological level. The continuous responsiveness of the network to fine-tuning of initial values allows for further optimization, leading to the design of a standardized system capable of assessing diversified thoracic CT datasets. Significance. This paper highlights the potential of non-invasive techniques for the early detection of lung cancer through the analysis of low-dose CT images. The proposed methodology offers improved accuracy in detecting lung nodules and has the potential to enhance the overall performance of early lung cancer detection. By reconfiguring the proposed method, further advancements can be made to optimize outcomes and contribute to developing a standardized system for assessing diverse thoracic CT datasets.
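The fixed 3 × 3 kernel at the heart of this approach extracts local features by sliding a small window over the image. A minimal single-channel "valid" convolution sketch (no padding or stride, loop-based for clarity; an illustration of the operation, not the paper's implementation):

```python
import numpy as np

def conv2d_3x3(image, kernel):
    """'Valid' 2D cross-correlation with a 3x3 kernel: each output pixel
    is the weighted sum of the 3x3 neighborhood under the kernel."""
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out
```

With an identity kernel (1 at the center, 0 elsewhere) the output is just the inner crop of the input, which makes the operation easy to sanity-check.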
8. Riaz Z, Khan B, Abdullah S, Khan S, Islam MS. Lung Tumor Image Segmentation from Computer Tomography Images Using MobileNetV2 and Transfer Learning. Bioengineering (Basel) 2023; 10:981. [PMID: 37627866] [PMCID: PMC10451633] [DOI: 10.3390/bioengineering10080981]
Abstract
BACKGROUND Lung cancer is one of the most fatal cancers worldwide, and malignant tumors are characterized by the growth of abnormal cells in the tissues of the lungs. Usually, symptoms of lung cancer do not appear until it is already at an advanced stage. The proper segmentation of cancerous lesions in CT images is the primary method of detection towards achieving a completely automated diagnostic system. METHOD In this work, we developed an improved hybrid neural network via the fusion of two architectures, MobileNetV2 and UNET, for the semantic segmentation of malignant lung tumors from CT images. The transfer learning technique was employed: the pre-trained MobileNetV2 was utilized as the encoder of a conventional UNET model for feature extraction. The proposed network is an efficient segmentation approach that performs lightweight filtering to reduce computation and pointwise convolution for building more features. Skip connections were established with the ReLU activation function to improve model convergence, connecting the encoder layers of MobileNetV2 to the decoder layers in UNET and allowing the concatenation of feature maps with different resolutions from encoder to decoder. Furthermore, the model was trained and fine-tuned on the training dataset acquired from the Medical Segmentation Decathlon (MSD) 2018 Challenge. RESULTS The proposed network was tested and evaluated on 25% of the dataset obtained from the MSD, and it achieved a Dice score of 0.8793, recall of 0.8602 and precision of 0.93. It is pertinent to mention that our technique outperforms currently available networks, which have several phases of training and testing.
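The Dice score reported above measures overlap between the predicted and ground-truth tumor masks: twice the intersection divided by the total mask size. A minimal sketch for binary masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

Dice is preferred over plain pixel accuracy for tumor segmentation because lesions occupy a tiny fraction of a CT slice, so an all-background prediction would score high accuracy but a Dice of 0.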
Affiliation(s)
- Zainab Riaz
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong SAR, China
- Bangul Khan
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong SAR, China
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Saad Abdullah
- Division of Intelligent Future Technologies, School of Innovation, Design and Engineering, Mälardalen University, P.O. Box 883, 721 23 Västerås, Sweden
- Samiullah Khan
- Center for Eye & Vision Research, 17W Science Park, Hong Kong SAR, China
- Md Shohidul Islam
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong SAR, China
9. Thanoon MA, Zulkifley MA, Mohd Zainuri MAA, Abdani SR. A Review of Deep Learning Techniques for Lung Cancer Screening and Diagnosis Based on CT Images. Diagnostics (Basel) 2023; 13:2617. [PMID: 37627876] [PMCID: PMC10453592] [DOI: 10.3390/diagnostics13162617]
Abstract
One of the most common and deadly diseases in the world is lung cancer. Only early identification of lung cancer can increase a patient's probability of survival. A frequently used modality for the screening and diagnosis of lung cancer is computed tomography (CT) imaging, which provides a detailed scan of the lung. In line with the advancement of computer-assisted systems, deep learning techniques have been extensively explored to help interpret CT images for lung cancer identification. Hence, the goal of this work is to provide a detailed review of the deep learning techniques that were developed for screening and diagnosing lung cancer. This review covers an overview of deep learning (DL) techniques, the suggested DL techniques for lung cancer applications, and the novelties of the reviewed methods. It focuses on the two main deep learning methodologies in screening and diagnosing lung cancer: classification and segmentation. The advantages and shortcomings of current deep learning models are also discussed. The resultant analysis demonstrates that there is significant potential for deep learning methods to provide precise and effective computer-assisted lung cancer screening and diagnosis using CT scans. At the end of this review, a list of potential future works regarding improving the application of deep learning is provided to spearhead the advancement of computer-assisted lung cancer diagnosis systems.
Affiliation(s)
- Mohammad A. Thanoon
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- System and Control Engineering Department, College of Electronics Engineering, Ninevah University, Mosul 41002, Iraq
- Mohd Asyraf Zulkifley
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- Muhammad Ammirrul Atiqi Mohd Zainuri
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- Siti Raihanah Abdani
- School of Computing Sciences, College of Computing, Informatics and Media, Universiti Teknologi MARA, Shah Alam 40450, Malaysia
10. Qiao P, Li H, Song G, Han H, Gao Z, Tian Y, Liang Y, Li X, Zhou SK, Chen J. Semi-Supervised CT Lesion Segmentation Using Uncertainty-Based Data Pairing and SwapMix. IEEE Trans Med Imaging 2023; 42:1546-1562. [PMID: 37015649] [DOI: 10.1109/tmi.2022.3232572]
Abstract
Semi-supervised learning (SSL) methods have shown powerful performance in dealing with data shortage in the field of medical image segmentation. However, existing SSL methods still suffer from unreliable predictions on unannotated data due to the lack of manual annotations. In this paper, we propose an unreliability-diluted consistency training (UDiCT) mechanism to dilute the unreliability in SSL by assembling reliable annotated data into unreliable unannotated data. Specifically, we first propose an uncertainty-based data pairing module that pairs annotated data with unannotated data based on a complementary uncertainty pairing rule, avoiding the pairing of two hard samples. Secondly, we develop SwapMix, a mixed sample data augmentation method, to integrate annotated data into unannotated data for training our model in a low-unreliability manner. Finally, UDiCT is trained by minimizing a supervised loss and an unreliability-diluted consistency loss, which makes our model robust to diverse backgrounds. Extensive experiments on three chest CT datasets show the effectiveness of our method for semi-supervised CT lesion segmentation.
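SwapMix itself is specific to this paper, but the underlying mixed-sample idea, diluting an unreliable unannotated scan with a region of reliable annotated data, can be sketched in a CutMix-like form. The function below is a hedged stand-in of our own, not the authors' implementation:

```python
import numpy as np

def patch_mix(unannotated, annotated, box):
    """Copy a rectangular patch from an annotated scan into an
    unannotated scan; return the mixed image and the mixing ratio
    (fraction of pixels taken from the annotated scan)."""
    y0, y1, x0, x1 = box
    mixed = unannotated.copy()
    mixed[y0:y1, x0:x1] = annotated[y0:y1, x0:x1]
    lam = (y1 - y0) * (x1 - x0) / mixed.size
    return mixed, lam
```

The mixing ratio `lam` can then weight the supervised and consistency loss terms, so regions backed by manual annotation contribute proportionally more reliable signal.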
11. Annavarapu CSR, Parisapogu SAB, Keetha NV, Donta PK, Rajita G. A Bi-FPN-Based Encoder-Decoder Model for Lung Nodule Image Segmentation. Diagnostics (Basel) 2023; 13:1406. [PMID: 37189507] [DOI: 10.3390/diagnostics13081406]
Abstract
Early detection and analysis of lung cancer involve precise and efficient lung nodule segmentation in computed tomography (CT) images. However, the ambiguous shapes, visual features, and surroundings of nodules as observed in CT images pose a challenging and critical problem for robust segmentation. This article proposes a resource-efficient model architecture: an end-to-end deep learning approach for lung nodule segmentation. It incorporates a Bi-FPN (bidirectional feature pyramid network) between an encoder and a decoder architecture. Furthermore, it uses the Mish activation function and class weights on the masks with the aim of enhancing segmentation efficiency. The proposed model was extensively trained and evaluated on the publicly available LUNA-16 dataset consisting of 1186 lung nodules. To increase the probability of the correct class for each voxel in the mask, a weighted binary cross-entropy loss was utilized during training. Moreover, to further evaluate robustness, the proposed model was evaluated on the QIN Lung CT dataset. The results of the evaluation show that the proposed architecture outperforms existing deep learning models such as U-Net, with Dice Similarity Coefficients of 82.82% and 81.66% on the two datasets.
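A weighted binary cross-entropy loss of the kind mentioned above up-weights the rare nodule class so that foreground voxels contribute more to training than the vastly more numerous background voxels. A minimal sketch (the `pos_weight` formulation is a common convention, assumed here rather than taken from the paper):

```python
import numpy as np

def weighted_bce(y_true, y_pred, pos_weight, eps=1e-7):
    """Weighted binary cross-entropy: positive (nodule) voxels are
    scaled by pos_weight; predictions are clipped for numerical safety."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    loss = -(pos_weight * y_true * np.log(y_pred)
             + (1 - y_true) * np.log(1 - y_pred))
    return loss.mean()
```

With `pos_weight` set to, say, the background-to-foreground voxel ratio, a missed nodule voxel costs far more than a missed background voxel, which counteracts the extreme class imbalance of lung CT masks.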
Affiliation(s)
- Nikhil Varma Keetha
- Indian Institute of Technology (Indian School of Mines), Dhanbad 826004, India
12. Yuan H, Wu Y, Dai M. Multi-Modal Feature Fusion-Based Multi-Branch Classification Network for Pulmonary Nodule Malignancy Suspiciousness Diagnosis. J Digit Imaging 2023; 36:617-626. [PMID: 36478311] [PMCID: PMC10039149] [DOI: 10.1007/s10278-022-00747-z]
Abstract
Detecting and identifying malignant nodules on chest computed tomography (CT) plays an important role in the early diagnosis and timely treatment of lung cancer, which can greatly reduce the number of deaths worldwide. Existing methods for pulmonary nodule diagnosis ignore clinical structured data (laboratory examinations, radiological findings), which is important for accurately judging a patient's condition. Hence, a multi-modal fusion multi-branch classification network is constructed to detect and classify pulmonary nodules in this work: (1) Radiological data of pulmonary nodules are used to construct structured features of length 9. (2) A multi-branch fusion-based effective attention mechanism network is designed for 3D CT patch unstructured data, which uses a 3D ECA-ResNet to dynamically adjust the extracted features. In addition, feature maps with different receptive fields from multiple layers are fully fused to obtain representative multi-scale unstructured features. (3) Multi-modal feature fusion of structured and unstructured data is performed to distinguish benign and malignant nodules. Numerous experimental results show that this network can effectively classify benign and malignant pulmonary nodules for clinical diagnosis, achieving the highest accuracy (94.89%), sensitivity (94.91%), and F1-score (94.65%) and the lowest false positive rate (5.55%).
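The multi-modal fusion of the length-9 structured clinical vector with the unstructured deep features can be sketched as simple late fusion by concatenation; the deep-feature width below is an arbitrary assumption, and the function is illustrative rather than the authors' network code:

```python
import numpy as np

def fuse_features(structured, deep):
    """Late fusion by concatenation: join a per-sample length-9
    structured clinical vector with a deep-feature vector."""
    structured = np.atleast_2d(structured)
    deep = np.atleast_2d(deep)
    assert structured.shape[1] == 9, "paper uses structured features of length 9"
    return np.concatenate([structured, deep], axis=1)
```

The fused vector then feeds a final classification head, letting the classifier weigh clinical evidence alongside image-derived evidence for each nodule.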
Affiliation(s)
- Haiying Yuan
- Beijing University of Technology, Beijing, China
- Yanrui Wu
- Beijing University of Technology, Beijing, China
- Mengfan Dai
- Beijing University of Technology, Beijing, China
13. Cai J, Guo L, Zhu L, Xia L, Qian L, Lure YMF, Yin X. Impact of localized fine tuning in the performance of segmentation and classification of lung nodules from computed tomography scans using deep learning. Front Oncol 2023; 13:1140635. [PMID: 37056345] [PMCID: PMC10088514] [DOI: 10.3389/fonc.2023.1140635]
Abstract
Background. Algorithm malfunction may occur when there is a performance mismatch between the dataset with which an algorithm was developed and the dataset on which it is deployed. Methods. A baseline segmentation algorithm and a baseline classification algorithm were developed using the public Lung Image Database Consortium dataset to detect benign and malignant nodules, and two additional external datasets (HB and XZ), including 542 and 486 cases respectively, were used for independent validation of the two algorithms. To explore the impact of localized fine-tuning on the individual segmentation and classification processes, the baseline algorithms were fine-tuned with CT scans of the HB and XZ datasets, respectively, and the performance of the fine-tuned algorithms was compared with the baselines. Results. Both baseline algorithms experienced a performance drop when directly deployed on the external HB and XZ datasets. Compared with the baseline validation results in nodule segmentation, the fine-tuned segmentation algorithm obtained better Dice coefficient, Intersection over Union, and Average Surface Distance in the HB dataset (0.593 vs. 0.444; 0.450 vs. 0.348; 0.283 vs. 0.304) and the XZ dataset (0.601 vs. 0.486; 0.482 vs. 0.378; 0.225 vs. 0.358). Similarly, compared with the baseline validation results in benign and malignant nodule classification, the fine-tuned classification algorithm had improved area under the receiver operating characteristic curve, accuracy, and F1 score in the HB dataset (0.851 vs. 0.812; 0.813 vs. 0.769; 0.852 vs. 0.822) and the XZ dataset (0.724 vs. 0.668; 0.696 vs. 0.617; 0.737 vs. 0.668). Conclusions. The externally validated performance of the locally fine-tuned algorithms exceeded that of the baseline algorithms in both the segmentation and classification processes, showing that localized fine-tuning may be an effective way to help a baseline algorithm generalize to site-specific use.
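Intersection over Union, one of the segmentation metrics compared above, is computed directly from binary masks as the overlap divided by the combined area. A minimal sketch:

```python
import numpy as np

def iou_score(pred, target, eps=1e-8):
    """Intersection over Union (Jaccard index) for binary masks:
    |A ∩ B| / |A ∪ B|, in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / (union + eps)
```

IoU is always less than or equal to the Dice coefficient on the same masks (Dice = 2·IoU / (1 + IoU)), which is why the IoU columns above (0.450, 0.482) sit below the corresponding Dice columns (0.593, 0.601).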
Collapse
Affiliation(s)
- Jingwei Cai
- Radiology Department, Affiliated Hospital of Hebei University, Baoding, Hebei, China
- Clinical Medical College, Hebei University, Baoding, Hebei, China
- Lin Guo
- Shenzhen Zhiying Medical Imaging, Shenzhen, Guangdong, China
- Litong Zhu
- Department of Medicine, Queen Mary Hospital, University of Hong Kong, Hong Kong SAR, China
- Li Xia
- Shenzhen Zhiying Medical Imaging, Shenzhen, Guangdong, China
- Lingjun Qian
- Shenzhen Zhiying Medical Imaging, Shenzhen, Guangdong, China
- Xiaoping Yin
- Radiology Department, Affiliated Hospital of Hebei University, Baoding, Hebei, China
- *Correspondence: Xiaoping Yin,
14
Appadurai JP, G S, Prabhu Kavin B, C K, Lai WC. Multi-Process Remora Enhanced Hyperparameters of Convolutional Neural Network for Lung Cancer Prediction. Biomedicines 2023; 11:biomedicines11030679. [PMID: 36979657 PMCID: PMC10045623 DOI: 10.3390/biomedicines11030679] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Revised: 01/30/2023] [Accepted: 02/08/2023] [Indexed: 03/30/2023] Open
Abstract
In recent years, lung cancer prediction has become an essential topic for reducing the human death rate. Several papers reviewed in the literature section suffer reduced accuracy at the prediction stage. Hence, in this paper, we develop a Multi-Process Remora Optimized Hyperparameters of Convolutional Neural Network (MPROH-CNN) aimed at lung cancer prediction. The proposed technique can be utilized to detect lung cancer in CT images of the human lung. It proceeds in four phases: data collection, pre-processing, feature extraction, and classification. Initially, the databases are collected from open-source systems. The collected CT images contain unwanted noise, which affects classification efficiency, so pre-processing techniques such as filtering and contrast enhancement are applied to remove it. Following that, the essential features are extracted with feature extraction techniques such as histogram, texture, and wavelet features. The extracted features are then passed to the classification stage. The proposed classifier combines the Remora Optimization Algorithm (ROA) with a Convolutional Neural Network (CNN); within the CNN, the ROA performs multi-process optimization covering both structure optimization and hyperparameter optimization. The proposed methodology is implemented in MATLAB and evaluated using performance metrics such as accuracy, precision, recall, specificity, sensitivity, and F-measure. To validate the proposed approach, it is compared with the traditional techniques CNN, CNN-Particle Swarm Optimization (PSO), and CNN-Firefly Algorithm (FA), respectively. From the analysis, the proposed method achieved an accuracy of 0.98 for lung cancer prediction.
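As a hedged stand-in for the metaheuristic hyperparameter search described above (this is plain random search with a mock scoring function, not the actual Remora Optimization Algorithm, and all parameter names and values are invented for illustration), the shape of the loop is the same: sample candidate CNN configurations, score each on validation data, and keep the best.

```python
# Random-search sketch of CNN hyperparameter optimization (stand-in for
# a metaheuristic such as ROA). mock_val_accuracy replaces "train the
# CNN and measure validation accuracy"; it peaks at a known optimum.
import random

SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "num_filters":   [16, 32, 64],
    "kernel_size":   [3, 5, 7],
}

def mock_val_accuracy(cfg):
    # Invented surrogate objective; best at lr=1e-3, 32 filters, kernel 5.
    score = 0.9
    score -= abs(cfg["learning_rate"] - 1e-3) * 10
    score -= abs(cfg["num_filters"] - 32) / 320
    score -= abs(cfg["kernel_size"] - 5) / 50
    return score

random.seed(0)
best_cfg, best_score = None, float("-inf")
for _ in range(100):
    cfg = {k: random.choice(v) for k, v in SPACE.items()}
    s = mock_val_accuracy(cfg)
    if s > best_score:
        best_cfg, best_score = cfg, s

print(best_cfg)
```

Population-based metaheuristics like ROA, PSO, or FA replace the independent random draws with guided updates of a candidate population, but the evaluate-and-keep-best skeleton is unchanged.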
Affiliation(s)
- Jothi Prabha Appadurai
- Computer Science and Engineering Department, Kakatiya Institute of Technology and Science, Warangal 506015, Telangana, India
- Suganeshwari G
- School of Computer Science and Engineering, Vellore Institute of Technology, Chennai 600127, Tamil Nadu, India
- Balasubramanian Prabhu Kavin
- Department of Data Science and Business Systems, College of Engineering and Technology, SRM Institute of Science and Technology, SRM Nagar, Chengalpattu District, Chennai 603203, Tamil Nadu, India
- Kavitha C
- Department of Computer Science and Engineering, Sathyabama Institute of Science and Technology, Chennai 600119, Tamil Nadu, India
- Wen-Cheng Lai
- Bachelor Program in Industrial Projects, National Yunlin University of Science and Technology, Douliu 640301, Taiwan
- Department of Electronic Engineering, National Yunlin University of Science and Technology, Douliu 640301, Taiwan
15
Ghaffar Nia N, Kaplanoglu E, Nasab A. Evaluation of artificial intelligence techniques in disease diagnosis and prediction. DISCOVER ARTIFICIAL INTELLIGENCE 2023. [PMCID: PMC9885935 DOI: 10.1007/s44163-023-00049-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/31/2023]
Abstract
A broad range of medical diagnoses is based on analyzing disease images obtained through high-tech digital devices. The application of artificial intelligence (AI) in the assessment of medical images has led to accurate evaluations being performed automatically, which in turn has reduced the workload of physicians, decreased errors and times in diagnosis, and improved performance in the prediction and detection of various diseases. AI techniques based on medical image processing are an essential area of research that uses advanced computer algorithms for prediction, diagnosis, and treatment planning, leading to a remarkable impact on decision-making procedures. Machine Learning (ML) and Deep Learning (DL), two advanced AI techniques, are the main subfields applied in the healthcare system to diagnose diseases, discover medication, and identify patient risk factors. The advancement of electronic medical records and big data technologies in recent years has accompanied the success of ML and DL algorithms. ML includes neural networks and fuzzy logic algorithms with various applications in automating forecasting and diagnosis processes. DL is an ML technique that, unlike classical neural network algorithms, does not rely on expert feature extraction. DL algorithms with high-performance computation give promising results in medical image analysis, such as fusion, segmentation, registration, and classification. The Support Vector Machine (SVM) as an ML method and the Convolutional Neural Network (CNN) as a DL method are the most widely used techniques for analyzing and diagnosing diseases. This review study aims to cover recent AI techniques in diagnosing and predicting numerous diseases such as cancers and heart, lung, skin, genetic, and neural disorders, which can perform more precisely than specialists and without human error. Also, AI's existing challenges and limitations in the medical area are discussed and highlighted.
Affiliation(s)
- Nafiseh Ghaffar Nia
- College of Engineering and Computer Science, The University of Tennessee at Chattanooga, Chattanooga, TN 37403 USA
- Erkan Kaplanoglu
- College of Engineering and Computer Science, The University of Tennessee at Chattanooga, Chattanooga, TN 37403 USA
- Ahad Nasab
- College of Engineering and Computer Science, The University of Tennessee at Chattanooga, Chattanooga, TN 37403 USA
16
Jin H, Yu C, Gong Z, Zheng R, Zhao Y, Fu Q. Machine learning techniques for pulmonary nodule computer-aided diagnosis using CT images: A systematic review. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104104] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
17
Sethy PK, Geetha Devi A, Padhan B, Behera SK, Sreedhar S, Das K. Lung cancer histopathological image classification using wavelets and AlexNet. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2023; 31:211-221. [PMID: 36463485 DOI: 10.3233/xst-221301] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Among malignant tumors, lung cancer has the highest morbidity and fatality rates worldwide. Screening for lung cancer has been investigated for decades in order to reduce mortality rates of lung cancer patients, and treatment options have improved dramatically in recent years. Pathologists utilize various techniques to determine the stage, type, and subtype of lung cancers, but one of the most common is a visual assessment of histopathology slides. The most common subtypes of lung cancer are adenocarcinoma and squamous cell carcinoma, and distinguishing them, and both from benign tissue, requires visual inspection by a skilled pathologist. The purpose of this article was to develop a hybrid network for the categorization of lung histopathology images by combining AlexNet, wavelets, and support vector machines. In this study, we feed the integrated discrete wavelet transform (DWT) coefficients and AlexNet deep features into linear support vector machines (SVMs) for lung nodule sample classification. The LC25000 lung and colon histopathology image dataset, which contains 5,000 digital histopathology images in each of the three lung categories of benign (normal cells), adenocarcinoma, and squamous carcinoma cells (both cancerous), is used in this study to train and test the SVM classifiers. Using 10-fold cross-validation, the study achieves an accuracy of 99.3% and an area under the curve (AUC) of 0.99 in classifying these digital histopathology images of lung nodule samples.
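The fusion step above can be sketched in a few lines (toy pixel values and stand-in "deep" features, not the paper's pipeline; a real implementation would use a 2-D DWT over the whole image and AlexNet activations): wavelet coefficients are computed from the image and concatenated with CNN features into a single vector for a linear SVM.

```python
# One-level Haar DWT on a toy 1-D signal, fused with mock deep features.
# Illustrative only: real systems use 2-D DWT + AlexNet activations.

def haar_dwt_1d(signal):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

pixel_row = [4.0, 2.0, 6.0, 6.0, 1.0, 3.0, 0.0, 0.0]   # toy image row
deep_features = [0.12, -0.57, 0.33]                    # stand-in CNN features

approx, detail = haar_dwt_1d(pixel_row)
fused = approx + detail + deep_features   # feature vector for a linear SVM
print(fused)
```

The approximation coefficients summarize coarse intensity structure and the detail coefficients capture local edges, which is why they complement texture-like CNN features.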
Affiliation(s)
- A Geetha Devi
- Department of Electronics and Communication Engineering, PVP Siddhartha Institute of Technology, Vijayawada, AP, India
- Bikash Padhan
- Department of Electronics, Sambalpur University, Jyoti Vihar, Burla, India
- Kalyan Das
- Department of Computer Science Engineering and Application, Sambalpur University Institute of Information Technology, Burla, India
18
Liu M, Wu J, Wang N, Zhang X, Bai Y, Guo J, Zhang L, Liu S, Tao K. The value of artificial intelligence in the diagnosis of lung cancer: A systematic review and meta-analysis. PLoS One 2023; 18:e0273445. [PMID: 36952523 PMCID: PMC10035910 DOI: 10.1371/journal.pone.0273445] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Accepted: 02/03/2023] [Indexed: 03/25/2023] Open
Abstract
Lung cancer is a common malignant tumor with high clinical disability and death rates. Currently, lung cancer diagnosis mainly relies on manual analysis of pathology sections, but the low efficiency and subjectivity of manual reading can lead to misdiagnoses and omissions. With the continuous development of science and technology, artificial intelligence (AI) has gradually been applied to imaging diagnosis. Although there are reports on AI-assisted lung cancer diagnosis, problems such as small sample sizes and outdated data remain. Therefore, this study included a large amount of recent data and used meta-analysis to evaluate the value of AI for lung cancer diagnosis. With the help of STATA 16.0, the value of AI-assisted lung cancer diagnosis was assessed by specificity, sensitivity, negative likelihood ratio, positive likelihood ratio, diagnostic odds ratio, and summary receiver operating characteristic curves. Meta-regression and subgroup analysis were used to investigate the value of AI-assisted lung cancer diagnosis. The meta-analysis showed that the AI-aided diagnosis system had a pooled sensitivity of 0.87 [95% CI (0.82, 0.90)] and specificity of 0.87 [95% CI (0.82, 0.91)] (CI, confidence interval), corresponding to a missed-diagnosis rate of 13% and a misdiagnosis rate of 13%; the positive likelihood ratio was 6.5 [95% CI (4.6, 9.3)], the negative likelihood ratio was 0.15 [95% CI (0.11, 0.21)], the diagnostic odds ratio was 43 [95% CI (24, 76)], and the area under the summary receiver operating characteristic (SROC) curve was 0.93 [95% CI (0.91, 0.95)]. Based on these results, the AI-assisted diagnostic system for CT (computed tomography) imaging has considerable diagnostic accuracy for lung cancer, which is of significant value for lung cancer diagnosis and supports extending its application to clinical diagnosis.
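The pooled metrics above are related by standard formulas, which can be checked directly (note that a bivariate meta-analysis pools each quantity separately, so the reported values of 6.5 and 43 differ slightly from what the point estimates of sensitivity and specificity alone imply):

```python
# Relationship between sensitivity/specificity and the derived
# diagnostic metrics used in the meta-analysis.

def likelihood_ratios(sensitivity, specificity):
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    dor = lr_pos / lr_neg                      # diagnostic odds ratio
    return lr_pos, lr_neg, dor

# Reported pooled sensitivity 0.87 and specificity 0.87:
lr_pos, lr_neg, dor = likelihood_ratios(0.87, 0.87)
print(round(lr_pos, 2), round(lr_neg, 2), round(dor, 1))
```

From these point estimates, LR+ ≈ 6.7 and DOR ≈ 45, consistent in magnitude with the pooled estimates of 6.5 and 43.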
Affiliation(s)
- Mingsi Liu
- Department of Computer and Artificial Intelligence, Zhengzhou University, Zhengzhou, Henan, China
- Jinghui Wu
- College of Life Science, Sichuan University, Chengdu, Sichuan, China
- Nian Wang
- School of Basic Medical Sciences, Chengdu Medical College, Chengdu, Sichuan, China
- Xianqin Zhang
- School of Basic Medical Sciences, Chengdu Medical College, Chengdu, Sichuan, China
- Yujiao Bai
- School of Basic Medical Sciences, Chengdu Medical College, Chengdu, Sichuan, China
- Non-Coding RNA and Drug Discovery Key Laboratory of Sichuan Province, Chengdu Medical College, Chengdu, Sichuan, China
- Jinlin Guo
- Chongqing Key Laboratory of Sichuan-Chongqing Co-construction for Diagnosis and Treatment of Infectious Diseases Integrated Traditional Chinese and Western Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China
- Lin Zhang
- Department of Pharmacy, Shaoxing People's Hospital, Shaoxing, Zhejiang, China
- Shulin Liu
- Department of the First Affiliated Hospital of Chengdu Medical College, Sichuan, China
- Ke Tao
- College of Life Science, Sichuan University, Chengdu, Sichuan, China
19
A Comprehensive Survey on the Progress, Process, and Challenges of Lung Cancer Detection and Classification. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:5905230. [PMID: 36569180 PMCID: PMC9788902 DOI: 10.1155/2022/5905230] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 10/17/2022] [Accepted: 11/09/2022] [Indexed: 12/23/2022]
Abstract
Lung cancer is the leading cause of cancer deaths worldwide, and the death rate continues to rise. Early detection improves the chances of recovery. However, because the number of radiologists is limited and they are overworked, growing volumes of image data make accurate evaluation difficult. As a result, many researchers have proposed automated methods that use medical imaging to predict the growth of cancer cells quickly and accurately. Much previous work addressed computer-aided detection (CADe) and computer-aided diagnosis (CADx) in computed tomography (CT), magnetic resonance imaging (MRI), and X-ray, aiming at effective detection and segmentation of pulmonary nodules and at classifying nodules as malignant or benign. Still, no comprehensive review covering all aspects of lung cancer has been done. In this paper, every aspect of lung cancer is discussed in detail, including datasets, image preprocessing, segmentation methods, optimal feature extraction and selection methods, evaluation metrics, and classifiers. Finally, the study examines several lung cancer-related issues together with possible solutions.
20
Wu L, Zhuang J, Chen W, Tang Y, Hou C, Li C, Zhong Z, Luo S. Data augmentation based on multiple oversampling fusion for medical image segmentation. PLoS One 2022; 17:e0274522. [PMID: 36256637 PMCID: PMC9578635 DOI: 10.1371/journal.pone.0274522] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2022] [Accepted: 08/28/2022] [Indexed: 11/18/2022] Open
Abstract
A high-performance medical image segmentation model based on deep learning depends on the availability of large amounts of annotated training data. However, it is not trivial to obtain sufficient annotated medical images. Generally, the small size of most tissue lesions, e.g., pulmonary nodules and liver tumours, worsens the class imbalance problem in medical image segmentation. In this study, we propose a multidimensional data augmentation method combining affine transformation and random oversampling. The training data is first expanded by affine transformation combined with random oversampling to improve the prior data distribution of small objects and the diversity of samples. Secondly, class weight balancing is used to avoid biased networks, since background pixels far outnumber lesion pixels; the class imbalance problem is addressed by using a weighted cross-entropy loss function during training of the CNN model. The LUNA16 and LiTS17 datasets were used to evaluate our method, with four deep neural network models, Mask-RCNN, U-Net, SegNet, and DeepLabv3+, adopted for small tissue lesion segmentation in CT images. Incorporating the data augmentation strategy greatly improved the small-lesion segmentation performance of all four architectures on both datasets. The best pixelwise segmentation performance for both pulmonary nodules and liver tumours was obtained by the Mask-RCNN model, with DSC values of 0.829 and 0.879, respectively, which were similar to those of state-of-the-art methods.
Affiliation(s)
- Liangsheng Wu
- Academy of Interdisciplinary Studies, Guangdong Polytechnic Normal University, Guangzhou, China
- Academy of Contemporary Agriculture Engineering Innovations, Zhongkai University of Agriculture and Engineering, Guangzhou, China
- Institute of Intelligent Manufacturing, Guangdong Academy of Sciences, Guangzhou, China
- Jiajun Zhuang
- Academy of Contemporary Agriculture Engineering Innovations, Zhongkai University of Agriculture and Engineering, Guangzhou, China
- Weizhao Chen
- Academy of Interdisciplinary Studies, Guangdong Polytechnic Normal University, Guangzhou, China
- Yu Tang
- Academy of Interdisciplinary Studies, Guangdong Polytechnic Normal University, Guangzhou, China
- * E-mail:
- Chaojun Hou
- Academy of Contemporary Agriculture Engineering Innovations, Zhongkai University of Agriculture and Engineering, Guangzhou, China
- Chentong Li
- Institute of Intelligent Manufacturing, Guangdong Academy of Sciences, Guangzhou, China
- Zhenyu Zhong
- Institute of Intelligent Manufacturing, Guangdong Academy of Sciences, Guangzhou, China
- Shaoming Luo
- Academy of Interdisciplinary Studies, Guangdong Polytechnic Normal University, Guangzhou, China
21
Saturi S, Banda S. Modelling of deep learning enabled lung disease detection and classification on chest X-ray images. INTERNATIONAL JOURNAL OF HEALTHCARE MANAGEMENT 2022. [DOI: 10.1080/20479700.2022.2102223] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Affiliation(s)
- Swapna Saturi
- Department of CSE, Osmania University, Hyderabad, India
- Sandhya Banda
- CSED, Maturi Venkata Subba Rao (MVSR) Engineering College, Hyderabad, India
22
Jassim MM, Jaber MM. Systematic review for lung cancer detection and lung nodule classification: Taxonomy, challenges, and recommendation future works. JOURNAL OF INTELLIGENT SYSTEMS 2022. [DOI: 10.1515/jisys-2022-0062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Nowadays, lung cancer is one of the most dangerous diseases and requires early diagnosis. Artificial intelligence has played an essential role in the medical field in general, and in analyzing medical images and diagnosing diseases in particular, as it can reduce the human errors that a medical expert can make when analyzing a medical image. In this research study, we conducted a systematic survey of the research published during the last 5 years on lung cancer diagnosis and lung nodule classification across 4 reliable databases (Science Direct, Scopus, Web of Science, and IEEE), and selected 50 research papers using a systematic literature review. The goal of this review is to provide a concise overview of recent advancements in lung cancer diagnosis by machine learning and deep learning algorithms. This article summarizes the present state of knowledge on the subject; addressing the findings offered in recent research publications gives researchers a better grasp of the topic. Challenges and recommendations for future work are analyzed in detail, and the published datasets and their sources are presented to facilitate researchers' access to them and their use in building on previously achieved results.
Collapse
Affiliation(s)
- Mustafa Mohammed Jassim
- Department of Computer Science, Informatics Institute for Postgraduate Studies (IIPS), Iraqi Commission for Computers and Informatics (ICCI), Baghdad, Iraq
- Mustafa Musa Jaber
- Department of Medical Instruments Engineering Techniques, Dijlah University College, Baghdad, 10021, Iraq
- Department of Medical Instruments Engineering Techniques, Al-Farahidi University, Baghdad, 10021, Iraq
23
Priya KV, Peter JD. A federated approach for detecting the chest diseases using DenseNet for multi-label classification. COMPLEX INTELL SYST 2022. [DOI: 10.1007/s40747-021-00474-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Multi-label disease classification algorithms help to predict various chronic diseases at an early stage. Diverse deep neural networks are applied to multi-label classification problems to foresee multiple mutually non-exclusive classes or diseases. We propose a federated approach for detecting chest diseases using DenseNets for better accuracy in predicting various diseases. Chest X-ray images from the Kaggle repository are used as the dataset for the proposed model. This new model is tested with both a sample and the full chest X-ray dataset, and it outperforms existing models on various evaluation metrics. We adopted a transfer learning approach alongside a network pre-trained from scratch to improve performance, integrating DenseNet121 into our framework. DenseNets have several advantages: they help overcome vanishing-gradient issues, boost feature propagation and reuse, and reduce the number of parameters. Furthermore, Grad-CAMs are used as a visualization method to highlight the affected parts of the chest X-ray. Hence, the proposed architecture can help predict various diseases from a single chest X-ray and guide doctors and specialists in making timely decisions.
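The multi-label output head described above differs from ordinary single-label classification in one key way, sketched below (the disease names and logits are invented for illustration, not the paper's DenseNet121 outputs): each finding gets its own independent sigmoid, so several mutually non-exclusive findings can be positive for one X-ray.

```python
# Multi-label prediction: one independent sigmoid per disease,
# thresholded at 0.5. Illustrative labels and logits.
import math

DISEASES = ["Atelectasis", "Cardiomegaly", "Effusion", "Infiltration"]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

logits = [2.1, -1.3, 0.4, -3.0]          # one network logit per disease
probs = [sigmoid(z) for z in logits]
positive = [d for d, p in zip(DISEASES, probs) if p >= 0.5]

print(positive)
```

A softmax over the four logits would instead force exactly one winner, which is wrong when a patient can have, say, both atelectasis and effusion at once.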
24
SegChaNet: A Novel Model for Lung Cancer Segmentation in CT Scans. Appl Bionics Biomech 2022; 2022:1139587. [PMID: 35607427 PMCID: PMC9124150 DOI: 10.1155/2022/1139587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 05/02/2022] [Indexed: 11/17/2022] Open
Abstract
Accurate lung tumor identification is crucial for radiation treatment planning. Due to the low contrast of the lung tumor in computed tomography (CT) images, segmentation of the tumor in CT images is challenging. This paper effectively integrates the U-Net with the channel attention module (CAM) to segment the malignant lung area from the surrounding chest region. The SegChaNet method encodes CT slices of the input lung into feature maps utilizing a series of encoders. We then explicitly developed a multiscale, dense-feature extraction module to extract multiscale features from the collection of encoded feature maps. We obtain the segmentation map of the lungs by employing the decoders and compare SegChaNet with the state-of-the-art. The model has learned dense-feature extraction in lung abnormalities, while iterative downsampling followed by iterative upsampling causes the network to remain invariant to the size of the dense abnormality. Experimental results show that the proposed method is accurate and efficient, directly providing explicit lung regions in complex circumstances without postprocessing.
25
Zhao H, Tsai CC, Zhou M, Liu Y, Chen YL, Huang F, Lin YC, Wang JJ. Deep learning based diagnosis of Parkinson's Disease using diffusion magnetic resonance imaging. Brain Imaging Behav 2022; 16:1749-1760. [PMID: 35285004 DOI: 10.1007/s11682-022-00631-y] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/07/2022] [Indexed: 12/31/2022]
Abstract
The diagnostic performance of a combined architecture for Parkinson's disease using diffusion tensor imaging was evaluated. A convolutional neural network was trained for each of multiple parcellated brain regions, and a greedy algorithm was proposed to combine the models from individual regions into a complex one. A total of 305 Parkinson's disease patients (aged 59.9±9.7 years) and 227 healthy control subjects (aged 61.0±7.4 years) were enrolled from 3 retrospective studies. The participants were divided into a training set with ten-fold cross-validation (N = 432) and an independent blind dataset (N = 100). Diffusion-weighted images were acquired on a 3T scanner. Fractional anisotropy and mean diffusivity were calculated and subsequently parcellated into 90 cerebral regions of interest based on the Automatic Anatomic Labeling template. The convolutional neural network contained three convolutional blocks and a fully connected layer; each convolutional block consisted of a convolutional layer, an activation layer, and a pooling layer. This model was trained for each individual region, and a greedy algorithm was implemented to combine multiple regions into the final prediction. The greedy algorithm achieved an area under the curve of 94.1±3.2% from the combination of fractional anisotropy in 22 regions, and the model performance analysis showed that a combination of 9 regions performed equivalently. The best single-region area under the curve was 74.7±5.4%, from the right postcentral gyrus. The current study proposed a convolutional neural network architecture and a greedy algorithm for combining predictions from multiple regions. With diffusion tensor imaging, the algorithm showed the potential to distinguish patients with Parkinson's disease from normal controls with satisfactory performance.
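The greedy combination step can be sketched as forward selection (the region names, per-region AUC values, and the diminishing-returns combiner below are all mock stand-ins, not the authors' implementation): regions are tried in order of their single-region score, and each is kept only if adding it improves the combined AUC.

```python
# Greedy forward selection of region models by combined AUC.
# combined_auc is a mock stand-in for "ensemble these region CNNs and
# measure AUC"; its gains and region names are invented.

def combined_auc(regions):
    gains = {"postcentral_R": 0.747, "putamen_L": 0.70,
             "thalamus_R": 0.68, "cerebellum_L": 0.45}
    score = 0.5
    for i, r in enumerate(sorted(regions, key=lambda r: -gains[r])):
        score += (gains[r] - 0.5) * (0.7 ** i)   # each extra region helps less
    return min(score, 1.0)

candidates = ["postcentral_R", "putamen_L", "thalamus_R", "cerebellum_L"]
selected = []
best = 0.5
for region in sorted(candidates, key=lambda r: -combined_auc([r])):
    trial = combined_auc(selected + [region])
    if trial > best:           # keep the region only if it helps
        selected.append(region)
        best = trial

print(selected, round(best, 3))
```

In this mock run the uninformative region is rejected because it lowers the combined score, mirroring how the paper's greedy search stops adding regions once they no longer improve AUC.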
Affiliation(s)
- Hengling Zhao
- School of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China
- Chih-Chien Tsai
- Healthy Aging Research Center, Chang Gung University, Taoyuan, Taiwan; Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, Taiwan
- Mingyi Zhou
- School of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China
- Yipeng Liu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China
- Yao-Liang Chen
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan; Department of Diagnostic Radiology, Chang Gung Memorial Hospital at Keelung, Keelung, Taiwan
- Fan Huang
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Yu-Chun Lin
- Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, Taiwan; Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
- Jiun-Jie Wang
- Healthy Aging Research Center, Chang Gung University, Taoyuan, Taiwan; Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, Taiwan; Department of Diagnostic Radiology, Chang Gung Memorial Hospital at Keelung, Keelung, Taiwan; Institute for Radiological Research, Chang Gung University, Taoyuan, Taiwan
26
Jones MA, Faiz R, Qiu Y, Zheng B. Improving mammography lesion classification by optimal fusion of handcrafted and deep transfer learning features. Phys Med Biol 2022; 67:10.1088/1361-6560/ac5297. [PMID: 35130517 PMCID: PMC8935657 DOI: 10.1088/1361-6560/ac5297] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Accepted: 02/07/2022] [Indexed: 12/20/2022]
Abstract
Objective: Handcrafted radiomics features or deep learning model-generated automated features are commonly used to develop computer-aided diagnosis (CAD) schemes for medical images. The objective of this study is to test the hypothesis that handcrafted and automated features contain complementary classification information and that fusing the two types of features can improve CAD performance. Approach: We retrospectively assembled a dataset involving 1535 lesions (740 malignant and 795 benign). Regions of interest (ROI) surrounding suspicious lesions were extracted and two types of features were computed from each ROI. The first includes 40 radiomic features; the second includes automated features computed from a VGG16 network using a transfer learning method. A single-channel ROI image is converted to a three-channel pseudo-ROI image by stacking the original image, a bilateral-filtered image, and a histogram-equalized image. Two VGG16 models, one using pseudo-ROIs and one using 3 stacked original ROIs without pre-processing, are used to extract automated features. Five linear support vector machines (SVM) are built using the optimally selected feature vectors from the handcrafted features, the two sets of VGG16 model-generated automated features, and the fusion of the handcrafted features with each set of automated features, respectively. Main results: Using 10-fold cross-validation, the fusion SVM using pseudo-ROIs yields the highest lesion classification performance, with an area under the ROC curve of AUC = 0.756 ± 0.042, significantly higher than those of the other SVMs trained using handcrafted or automated features only (p < 0.05). Significance: This study demonstrates that both handcrafted and automated features contain useful information for classifying breast lesions, and that fusing the two types of features can further increase CAD performance.
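The pseudo-ROI construction above can be sketched with toy values (1-D intensities, a naive mean filter standing in for the bilateral filter, and a classic histogram equalization, none of it the authors' code): a single-channel ROI becomes a three-channel input by stacking the original with two pre-processed copies.

```python
# Build a 3-channel pseudo-ROI from a single-channel toy ROI.
# smooth() is a crude stand-in for bilateral filtering.

def equalize(pixels, levels=256):
    """Classic histogram equalization over a flat list of 8-bit intensities."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, count = [], 0
    for h in hist:
        count += h
        cdf.append(count / n)
    return [round(cdf[p] * (levels - 1)) for p in pixels]

def smooth(pixels):
    """Naive 3-tap mean filter (stand-in for edge-preserving bilateral filter)."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - 1): i + 2]
        out.append(round(sum(window) / len(window)))
    return out

roi = [52, 55, 61, 59, 79, 61, 76, 61]            # toy grayscale ROI (flattened)
pseudo_rgb = [roi, smooth(roi), equalize(roi)]    # 3-channel network input

print(len(pseudo_rgb), len(pseudo_rgb[0]))
```

Feeding three differently pre-processed views of the same ROI lets a pretrained three-channel network like VGG16 see both the raw intensities and contrast-enhanced structure.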
Affiliation(s)
- Meredith A. Jones
- School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA. Author to whom any correspondence should be addressed.
- Rowzat Faiz
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
27
Mehrotra R, Agrawal R, Ansari MA. Diagnosis of hypercritical chronic pulmonary disorders using dense convolutional network through chest radiography. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:7625-7649. [PMID: 35125924 PMCID: PMC8798313 DOI: 10.1007/s11042-021-11748-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 08/30/2021] [Accepted: 11/22/2021] [Indexed: 06/14/2023]
Abstract
Lung-related ailments are prevalent worldwide and include asthma, chronic obstructive pulmonary disease (COPD), tuberculosis, pneumonia, and fibrosis; COVID-19 has now been added to this list. COVID-19 infection causes respiratory complications along with other symptoms such as cough, high fever, and pneumonia. The WHO has identified lung cancer as one of the most fatal cancer types, and timely detection of such diseases is therefore pivotal for an individual's health. Since elementary convolutional neural networks have not performed well in identifying atypical image types, we propose a novel, fully automated deep learning framework for the recognition and classification of chronic pulmonary disorders (CPD) and COVID-pneumonia using thoracic or chest X-ray (CXR) images. The three-step, completely automated approach first extracts and preprocesses the region of interest from CXR images, then separates infected lung X-rays from normal ones, and finally classifies the infected images into COVID-pneumonia, pneumonia, and other chronic pulmonary disorders (OCPD), while also highlighting the regions of the CXR indicative of severe disorders such as COVID-19 and pneumonia. A detailed investigation of various pivotal parameters based on several experimental outcomes is presented. The framework distinguishes normal lung X-rays from infected ones and further classifies the infected images with an utmost accuracy of 96.8%; several other collective performance measures validate the superiority of the presented model. 
The proposed framework can be effectively utilized in the current pandemic scenario to help radiologists substantiate their diagnoses and start well-timed treatment of these deadly lung diseases.
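The three-stage pipeline summarized above (screen infected vs. Normal CXRs, then sub-classify only the infected ones) can be sketched as a generic cascade; `is_infected` and `infection_type` below are hypothetical stand-ins for the paper's trained CNN stages, not its actual models:

```python
def two_stage_diagnosis(cxr_features, is_infected, infection_type):
    """Cascaded classification as described in the abstract:
    stage 1 screens Normal vs. infected CXRs; stage 2 runs only on
    images flagged as infected and assigns the disorder class."""
    if not is_infected(cxr_features):
        return "Normal"
    # Stage 2 distinguishes COVID-pneumonia, pneumonia, and OCPD.
    return infection_type(cxr_features)

# Toy stand-in classifiers over a single "opacity score" feature.
label = two_stage_diagnosis(
    [0.1],
    is_infected=lambda f: f[0] > 0.5,
    infection_type=lambda f: "COVID-pneumonia" if f[0] > 0.9 else "pneumonia",
)
```

The point of the cascade is that the harder multi-class decision is only attempted on images the cheaper screening stage already flagged as abnormal.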
Affiliation(s)
- Rajat Mehrotra
- Department of Electrical & Electronics Engineering, GL Bajaj Institute of Technology & Management, Gr. Noida, India
- Rajeev Agrawal
- Department of Electronics & Communication Engineering, GL Bajaj Institute of Technology & Management, Gr. Noida, India
- M. A. Ansari
- Department of Electrical Engineering, School of Engineering, Gautam Buddha University, Gr. Noida, India

28
Bizzego A, Gabrieli G, Neoh MJY, Esposito G. Improving the Efficacy of Deep-Learning Models for Heart Beat Detection on Heterogeneous Datasets. Bioengineering (Basel) 2021; 8:193. [PMID: 34940346 PMCID: PMC8698903 DOI: 10.3390/bioengineering8120193] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Revised: 11/05/2021] [Accepted: 11/24/2021] [Indexed: 11/16/2022] Open
Abstract
Deep learning (DL) has greatly contributed to bioelectric signal processing, in particular to the extraction of physiological markers. However, the efficacy and applicability of the results proposed in the literature are often constrained to the population represented by the data used to train the models. In this study, we investigate the issues related to applying a DL model to heterogeneous datasets. In particular, focusing on heart beat detection from electrocardiogram (ECG) signals, we show that the performance of a model trained on data from healthy subjects decreases when applied to patients with cardiac conditions and to signals collected with different devices. We then evaluate the use of transfer learning (TL) to adapt the model to the different datasets, and show that classification performance is improved even for datasets with a small sample size. These results suggest that a greater effort should be made towards the generalizability of DL models applied to bioelectric signals, in particular by retrieving more representative datasets.
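The adaptation recipe evaluated above — keep the pretrained feature extractor, refit only a lightweight classifier head on the small target dataset — can be illustrated with a toy nearest-centroid head. The centroid classifier is an assumed stand-in for illustration; the paper fine-tunes a deep network rather than fitting centroids:

```python
def fit_centroids(features, labels):
    """Refit only the classifier 'head' on target-domain data:
    compute one centroid per class over fixed (pre-extracted) features."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [a + b for a, b in zip(sums.get(y, [0.0] * len(f)), f)]
    return {y: [v / counts[y] for v in sums[y]] for y in sums}

def predict(centroids, feature_vector):
    """Assign the class whose centroid is nearest in squared Euclidean distance."""
    return min(
        centroids,
        key=lambda y: sum((a - b) ** 2 for a, b in zip(centroids[y], feature_vector)),
    )

# Tiny labelled sample from the new domain (e.g., a different ECG device):
# only the head is refit, the upstream features are taken as given.
centroids = fit_centroids(
    [[0.0, 0.0], [1.0, 1.0], [10.0, 10.0], [11.0, 11.0]],
    ["no-beat", "no-beat", "beat", "beat"],
)
```

This mirrors why TL helps on small datasets: the number of parameters refit on the target data is tiny compared with retraining the whole network.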
Affiliation(s)
- Andrea Bizzego
- Department of Psychology and Cognitive Science, University of Trento, 38068 Trento, Italy
- Giulio Gabrieli
- Psychology Program, Nanyang Technological University, Singapore 639818, Singapore
- Michelle Jin Yee Neoh
- Psychology Program, Nanyang Technological University, Singapore 639818, Singapore
- Gianluca Esposito
- Department of Psychology and Cognitive Science, University of Trento, 38068 Trento, Italy
- Psychology Program, Nanyang Technological University, Singapore 639818, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 308232, Singapore

29
Huang G, Wei X, Tang H, Bai F, Lin X, Xue D. A systematic review and meta-analysis of diagnostic performance and physicians' perceptions of artificial intelligence (AI)-assisted CT diagnostic technology for the classification of pulmonary nodules. J Thorac Dis 2021; 13:4797-4811. [PMID: 34527320 PMCID: PMC8411165 DOI: 10.21037/jtd-21-810] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Accepted: 07/09/2021] [Indexed: 12/26/2022]
Abstract
Background Lung cancer was the second most commonly diagnosed cancer and the leading cause of cancer death in 2020. Although artificial intelligence (AI)-assisted diagnostic technologies have shown promise and have been used in clinical practice in recent years, no products related to AI-assisted CT diagnostic technologies for the classification of pulmonary nodules have been approved by the National Medical Products Administration in China. The objective of this article was to systematically review the diagnostic performance of AI-assisted CT diagnostic technology for classifying pulmonary nodules as benign or malignant and to analyze physicians' perceptions of this technology in China. Methods All relevant studies from 6 literature databases were searched and screened according to the inclusion and exclusion criteria. Data were extracted and study quality was assessed by two reviewers; study heterogeneity and publication bias were estimated. A questionnaire survey on the perceptions of physicians was conducted in 9 public tertiary hospitals in China. A meta-analysis, meta-regression and a univariate logistic model were used in the systematic review and to explore the association of physicians' perceptions with their rate of support for the clinical application of the technology. Results Twenty-seven studies with 5,727 pulmonary nodules were included in the meta-analysis. The quality of the included studies was generally acceptable, and the pooled sensitivity and specificity of AI-assisted CT diagnostic technology for classifying pulmonary nodules as benign or malignant were 0.90 and 0.89, respectively; the pooled diagnostic odds ratio (DOR) was 70.33. The majority of the surveyed physicians in China perceived "reduced workload for radiologists" and "improved diagnostic efficiency" as the important benefits of this technology. In addition, diagnostic accuracy (including misdiagnosis) and practical experience were significantly associated with whether physicians supported its clinical application. Conclusions In the context of lung cancer diagnosis, AI-assisted CT diagnostic technology for classifying pulmonary nodules as benign or malignant has good diagnostic performance, but its specificity needs to be improved.
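As a point of reference, the diagnostic odds ratio follows directly from sensitivity and specificity. Plugging the pooled sensitivity (0.90) and specificity (0.89) into the formula gives roughly 72.8 rather than the reported 70.33, because the meta-analysis pools the per-study DORs directly instead of deriving a DOR from the pooled operating point:

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (TP/FN) / (FP/TN) = [sens / (1 - sens)] * [spec / (1 - spec)]."""
    return (sensitivity / (1.0 - sensitivity)) * (specificity / (1.0 - specificity))

# DOR implied by the pooled operating point (sens = 0.90, spec = 0.89).
dor = diagnostic_odds_ratio(0.90, 0.89)  # ≈ 72.8
```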
Affiliation(s)
- Guo Huang
- NHC Key Laboratory of Health Technology Assessment (Fudan University), Department of Hospital Management, School of Public Health, Fudan University, Shanghai, China
- Xuefeng Wei
- Health Commission of Gansu Province, Lanzhou, China
- Huiqin Tang
- Health Commission of Hubei Province, Wuhan, China
- Fei Bai
- National Center for Medical Service Administration, Beijing, China
- Xia Lin
- National Center for Medical Service Administration, Beijing, China
- Di Xue
- NHC Key Laboratory of Health Technology Assessment (Fudan University), Department of Hospital Management, School of Public Health, Fudan University, Shanghai, China

30
HekmatiAthar S, Goins H, Samuel R, Byfield G, Anwar M. Data-Driven Forecasting of Agitation for Persons with Dementia: A Deep Learning-Based Approach. SN Comput Sci 2021; 2:326. [PMID: 34109317 PMCID: PMC8179095 DOI: 10.1007/s42979-021-00708-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2021] [Accepted: 05/15/2021] [Indexed: 10/25/2022]
Abstract
The World Health Organization estimates that approximately 10 million people are newly diagnosed with dementia each year, with a global prevalence of nearly 50 million persons with dementia (PwD). The vast majority of PwD live at home and receive most of their care from informal familial caregivers, whose quality of life (QOL) may be significantly impacted by their caregiving responsibilities and the resultant caregiver burden. A major contributor to caregiver burden is the random occurrence of agitation in PwD and familial caregivers' lack of preparedness to manage these episodes. Caregiver burden may be reduced if impending agitation episodes can be forecast. In this study, we leverage data-driven deep learning models to predict agitation episodes in PwD. We used Long Short-Term Memory (LSTM) networks, a class of deep learning algorithms, to forecast agitation up to 30 min before the actual agitation events. In particular, we handled missing data by imputing the missing values and compensated for class imbalance by down-sampling the majority class. The experiments were based on real-world data from the home environments of Alzheimer's disease (AD) caregiver-PwD dyads, including ambient noise level, illumination, room temperature, atmospheric pressure (Pa), and relative humidity. Our results show the efficacy of data-driven deep learning models in predicting agitation episodes in community-dwelling AD dyads, with an accuracy of 98.6% and recall (sensitivity) of 84.8%.
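The class-imbalance step mentioned above — down-sampling the majority class so agitation and non-agitation windows are equally represented before training — is a generic technique; a minimal sketch:

```python
import random

def downsample_majority(samples, labels, seed=0):
    """Balance a binary dataset by randomly down-sampling the majority
    class to the size of the minority class (labels are 0/1)."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    majority, minority = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    kept = sorted(rng.sample(majority, len(minority)) + minority)
    return [samples[i] for i in kept], [labels[i] for i in kept]

# 3 agitation windows (label 1) vs. 10 calm windows (label 0).
windows = list(range(13))
window_labels = [1, 1, 1] + [0] * 10
bal_x, bal_y = downsample_majority(windows, window_labels)
```

Down-sampling discards data but prevents a rare-event classifier from trivially predicting the majority class everywhere; with events as rare as agitation episodes, that trade-off is usually worthwhile.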
Affiliation(s)
- Hilda Goins
- Department of Computer Science, North Carolina A&T State University, Greensboro, NC USA
- Raymond Samuel
- Department of Biology, North Carolina A&T State University, Greensboro, NC USA
- Grace Byfield
- Department of Genetics, University of North Carolina, Chapel Hill, NC USA
- Mohd Anwar
- Department of Computer Science, North Carolina A&T State University, Greensboro, NC USA

31
Shiri I, Sorouri M, Geramifar P, Nazari M, Abdollahi M, Salimi Y, Khosravi B, Askari D, Aghaghazvini L, Hajianfar G, Kasaeian A, Abdollahi H, Arabi H, Rahmim A, Radmard AR, Zaidi H. Machine learning-based prognostic modeling using clinical data and quantitative radiomic features from chest CT images in COVID-19 patients. Comput Biol Med 2021; 132:104304. [PMID: 33691201 PMCID: PMC7925235 DOI: 10.1016/j.compbiomed.2021.104304] [Citation(s) in RCA: 54] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2021] [Revised: 02/26/2021] [Accepted: 02/27/2021] [Indexed: 12/16/2022]
Abstract
OBJECTIVE To develop prognostic models for survival (alive or deceased status) prediction of COVID-19 patients using clinical data (demographics and history, laboratory tests, visual scoring by radiologists) and lung/lesion radiomic features extracted from chest CT images. METHODS Overall, 152 patients were enrolled in this study protocol and divided into 106 training/validation and 46 test datasets (the latter untouched during training). Radiomic features were extracted separately from the segmented lungs and infectious lesions in chest CT images. Clinical data, including patients' history and demographics, laboratory tests and radiological scores, were also collected. Univariate analysis was first performed (q-values reported after false discovery rate (FDR) correction) to determine the most predictive features among all imaging and clinical data. Prognostic modeling of survival was performed using radiomic features and clinical data, separately or in combination. Maximum relevance minimum redundancy (MRMR) and XGBoost were used for feature selection and classification, respectively. The receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC), sensitivity, specificity, and accuracy were used to assess the prognostic performance of the models on the test datasets. RESULTS Among the clinical data, cancer comorbidity (q-value < 0.01), consciousness level (q-value < 0.05) and radiological score of the involved zone (q-value < 0.02) were found to be highly correlated with outcome. Oxygen saturation (AUC = 0.73, q-value < 0.01) and Blood Urea Nitrogen (AUC = 0.72) were identified as highly predictive clinical features. Among lung radiomic features, SAHGLE (AUC = 0.70) and HGLZE (AUC = 0.67) from GLSZM were identified as the most prognostic features. Among lesion radiomic features, RLNU from GLRLM (AUC = 0.73) and HGLZE from GLSZM (AUC = 0.73) had the highest performance. In multivariate analysis, combining lung, lesion and clinical features provided the most accurate prognostic model (AUC = 0.95 ± 0.029 (95% CI: 0.95-0.96), accuracy = 0.88 ± 0.046 (95% CI: 0.88-0.89), sensitivity = 0.88 ± 0.066 (95% CI: 0.87-0.9) and specificity = 0.89 ± 0.07 (95% CI: 0.87-0.9)). CONCLUSION Combining radiomic features and clinical data can effectively predict outcome in COVID-19 patients. The developed model has significant potential for improved management of COVID-19 patients.
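The univariate screening used above ranks each feature by the AUC it achieves on its own. Such a single-feature AUC needs no model fit, since it equals the normalized Mann-Whitney U statistic:

```python
def auc_univariate(feature_values, labels):
    """AUC of a single feature used as a standalone score: the probability
    that a randomly chosen positive case has a larger feature value than a
    randomly chosen negative case (ties count as 1/2)."""
    pos = [v for v, y in zip(feature_values, labels) if y == 1]
    neg = [v for v, y in zip(feature_values, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A feature that perfectly separates the outcomes scores 1.0; an uninformative feature scores about 0.5, which is why per-feature AUCs near 0.7, as reported above, are considered individually prognostic.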
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Majid Sorouri
- Digestive Diseases Research Center, Digestive Diseases Research Institute, Tehran University of Medical Sciences, Tehran, Iran
- Parham Geramifar
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Mostafa Nazari
- Department of Biomedical Engineering and Medical Physics, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Mohammad Abdollahi
- Digestive Diseases Research Center, Digestive Diseases Research Institute, Tehran University of Medical Sciences, Tehran, Iran
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Bardia Khosravi
- Digestive Diseases Research Center, Digestive Diseases Research Institute, Tehran University of Medical Sciences, Tehran, Iran
- Dariush Askari
- Department of Radiology Technology, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Leila Aghaghazvini
- Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Ghasem Hajianfar
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Amir Kasaeian
- Digestive Diseases Research Center, Digestive Diseases Research Institute, Tehran University of Medical Sciences, Tehran, Iran; Hematology, Oncology and Stem Cell Transplantation Research Center, Research Institute for Oncology, Hematology and Cell Therapy, Tehran University of Medical Sciences, Tehran, Iran; Inflammation Research Center, Tehran University of Medical Sciences, Tehran, Iran
- Hamid Abdollahi
- Department of Radiologic Sciences and Medical Physics, Kerman University of Medical Sciences, Kerman, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia, Vancouver, BC, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Amir Reza Radmard (corresponding author)
- Department of Radiology, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Habib Zaidi (corresponding author)
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark

32
Bizzego A, Gabrieli G, Esposito G. Deep Neural Networks and Transfer Learning on a Multivariate Physiological Signal Dataset. Bioengineering (Basel) 2021; 8:35. [PMID: 33800842 PMCID: PMC8058952 DOI: 10.3390/bioengineering8030035] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 02/25/2021] [Accepted: 03/03/2021] [Indexed: 11/16/2022] Open
Abstract
While Deep Neural Networks (DNNs) and Transfer Learning (TL) have greatly contributed to several medical and clinical disciplines, their application to multivariate physiological datasets is still limited. Current examples mainly focus on one physiological signal and can only be used in applications customised for that specific measure, which limits the possibility of transferring the trained DNN to other domains. In this study, we composed a dataset (n=813) of six different types of physiological signals (Electrocardiogram, Electrodermal activity, Electromyogram, Photoplethysmogram, Respiration and Acceleration). Signals were collected from 232 subjects using four different acquisition devices. We used a DNN to classify the type of physiological signal and to demonstrate how the TL approach allows the efficiency of DNNs to be exploited in other domains. After the DNN was trained to optimally classify the type of signal, the features it automatically extracted were used to classify the type of acquisition device with a Support Vector Machine. The dataset, the code and the trained parameters of the DNN are made publicly available to encourage the adoption of DNNs and TL in applications with multivariate physiological signals.
Affiliation(s)
- Andrea Bizzego
- Department of Psychology and Cognitive Science, University of Trento, 38068 Rovereto (Trento), Italy
- Giulio Gabrieli
- Psychology Program, School of Social Sciences, Nanyang Technological University, Singapore 639798, Singapore
- Gianluca Esposito
- Department of Psychology and Cognitive Science, University of Trento, 38068 Rovereto (Trento), Italy
- Psychology Program, School of Social Sciences, Nanyang Technological University, Singapore 639798, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore

33
Yu Y, Wang N, Huang N, Liu X, Zheng Y, Fu Y, Li X, Wu H, Xu J, Cheng J. Determining the invasiveness of ground-glass nodules using a 3D multi-task network. Eur Radiol 2021; 31:7162-7171. [PMID: 33665717 DOI: 10.1007/s00330-021-07794-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2019] [Revised: 12/17/2020] [Accepted: 02/15/2021] [Indexed: 10/22/2022]
Abstract
OBJECTIVES The aim of this study was to determine the invasiveness of ground-glass nodules (GGNs) using a 3D multi-task deep learning network. METHODS We propose a novel architecture based on 3D multi-task learning to determine the invasiveness of GGNs. In total, 770 patients with 909 GGNs who underwent lung CT scans were enrolled and divided into training (n = 626) and test (n = 144) sets. In the test set, invasiveness was classified by deep learning into three categories: atypical adenomatous hyperplasia (AAH) and adenocarcinoma in situ (AIS), minimally invasive adenocarcinoma (MIA), and invasive pulmonary adenocarcinoma (IA). Furthermore, binary classifications (AAH/AIS/MIA vs. IA) were made by two thoracic radiologists and compared with the deep learning results. RESULTS In the three-category classification task, the sensitivity, specificity, and accuracy were 65.41%, 82.21%, and 64.9%, respectively. In the binary classification task, the sensitivity, specificity, accuracy, and area under the ROC curve (AUC) values were 69.57%, 95.24%, 87.42%, and 0.89, respectively. In the visual binary assessment of GGN invasiveness by the two thoracic radiologists, the sensitivity, specificity, and accuracy of the senior and junior radiologists were 58.93%, 90.51%, and 81.35% and 76.79%, 55.47%, and 61.66%, respectively. CONCLUSIONS The proposed multi-task deep learning model achieved good classification results in determining the invasiveness of GGNs, and may help to select patients with invasive lesions who need surgery and to choose the proper surgical methods. KEY POINTS
- The proposed multi-task model achieved good classification results for the invasiveness of GGNs.
- The proposed network includes a classification branch and a segmentation branch to learn global and regional features, respectively.
- The multi-task model could assist doctors in selecting patients with invasive lesions who need surgery and in choosing appropriate surgical methods.
Affiliation(s)
- Ye Yu
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, China
- Na Wang
- SenseTime Research, Shanghai, 200233, China
- Ning Huang
- SenseTime Research, Shanghai, 200233, China
- Yuanjie Zheng
- School of Information Science and Engineering at Shandong Normal University, Jinan, 250358, China
- Yicheng Fu
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, China
- Xiaoqian Li
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, China
- Huawei Wu
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, China
- Jianrong Xu
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, China
- Jiejun Cheng
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, China

34
Torrents-Barrena J, Monill N, Piella G, Gratacós E, Eixarch E, Ceresa M, González Ballester MA. Assessment of Radiomics and Deep Learning for the Segmentation of Fetal and Maternal Anatomy in Magnetic Resonance Imaging and Ultrasound. Acad Radiol 2021; 28:173-188. [PMID: 31879159 DOI: 10.1016/j.acra.2019.11.006] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2019] [Revised: 11/08/2019] [Accepted: 11/18/2019] [Indexed: 11/18/2022]
Abstract
Recent advances in fetal imaging open the door to enhanced detection of fetal disorders and computer-assisted surgical planning. However, precise segmentation of womb's tissues is challenging due to motion artifacts caused by fetal movements and maternal respiration during acquisition. This work aims to efficiently segment different intrauterine tissues in fetal magnetic resonance imaging (MRI) and 3D ultrasound (US). First, a large set of ninety-four radiomic features are extracted to characterize the mother uterus, placenta, umbilical cord, fetal lungs, and brain. The optimal features for each anatomy are identified using both K-best and Sequential Forward Feature Selection techniques. These features are then fed to a Support Vector Machine with instance balancing to accurately segment the intrauterine anatomies. To the best of our knowledge, this is the first time that "Radiomics" is expanded from classification tasks to segmentation purposes to deal with challenging fetal images. In addition, we evaluate several state-of-the-art deep learning-based segmentation approaches. Validation is extensively performed on a set of 60 axial MRI and 3D US images from pathological and clinical cases. Our results suggest that combining the selected 10 radiomic features per anatomy along with DeepLabV3+ or BiSeNet architectures for MRI, and PSPNet or Tiramisu for 3D US, can lead to the highest fetal / maternal tissue segmentation performance, robustness, informativeness, and heterogeneity. Therefore, this work opens new avenues for advancement of segmentation techniques and, in particular, for improved fetal surgical planning.
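The Sequential Forward Feature Selection step mentioned above is a greedy search: starting from the empty set, it repeatedly adds the feature that most improves a validation score. A minimal sketch, with `score_fn` standing in for whatever per-subset validation score is used (an assumption for illustration; the paper scores subsets for a Support Vector Machine):

```python
def sequential_forward_selection(features, score_fn, k):
    """Greedy SFS: grow the selected set one feature at a time, always
    adding the candidate that maximizes score_fn(selected + candidate)."""
    selected = []
    while len(selected) < k:
        best = max(
            (f for f in features if f not in selected),
            key=lambda f: score_fn(tuple(selected) + (f,)),
        )
        selected.append(best)
    return selected

# Toy relevance scores for three hypothetical radiomic features.
relevance = {"glcm_contrast": 3.0, "glrlm_rlnu": 2.0, "shape_volume": 1.0}
chosen = sequential_forward_selection(
    list(relevance), lambda subset: sum(relevance[f] for f in subset), 2
)
```

Unlike K-best filtering, SFS evaluates features in the context of those already chosen, so it can avoid picking several mutually redundant features.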
Affiliation(s)
- Jordina Torrents-Barrena
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Núria Monill
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Gemma Piella
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Eduard Gratacós
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), University of Barcelona, Barcelona, Spain; Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain and Centre for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
- Elisenda Eixarch
- BCNatal | Fetal Medicine Research Center (Hospital Clínic and Hospital Sant Joan de Déu), University of Barcelona, Barcelona, Spain; Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain and Centre for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
- Mario Ceresa
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Miguel A González Ballester
- BCN MedTech, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain; ICREA, Barcelona, Spain

35
Masud M, Sikder N, Nahid AA, Bairagi AK, AlZain MA. A Machine Learning Approach to Diagnosing Lung and Colon Cancer Using a Deep Learning-Based Classification Framework. Sensors 2021; 21:748. [PMID: 33499364 PMCID: PMC7865416 DOI: 10.3390/s21030748] [Citation(s) in RCA: 69] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 01/10/2021] [Accepted: 01/18/2021] [Indexed: 12/19/2022]
Abstract
The field of medicine and healthcare has attained revolutionary advancements in the last forty years. Within this period, the actual causes of numerous diseases were unveiled, novel diagnostic methods were designed, and new medicines were developed. Even after all these achievements, diseases like cancer continue to haunt us, since we are still vulnerable to them. Cancer is the second leading cause of death globally; about one in every six deaths is caused by it. Among the many types of cancer, the lung and colon variants are the most common and deadliest, together accounting for more than 25% of all cancer cases. However, identifying the disease at an early stage significantly improves the chances of survival. Cancer diagnosis can be automated by using the potential of Artificial Intelligence (AI), which allows us to assess more cases in less time and at lower cost. With the help of modern Deep Learning (DL) and Digital Image Processing (DIP) techniques, this paper presents a classification framework to differentiate among five types of lung and colon tissues (two benign and three malignant) by analyzing their histopathological images. The acquired results show that the proposed framework can identify cancer tissues with up to 96.33% accuracy. Implementation of this model will help medical professionals to develop an automatic and reliable system capable of identifying various types of lung and colon cancers.
Affiliation(s)
- Mehedi Masud (corresponding author)
- Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Niloy Sikder
- Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Abdullah-Al Nahid
- Electronics and Communication Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Anupam Kumar Bairagi
- Computer Science and Engineering Discipline, Khulna University, Khulna 9208, Bangladesh
- Mohammed A. AlZain
- Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia

36
Peng G, Dong H, Liang T, Li L, Liu J. Diagnosis of cervical precancerous lesions based on multimodal feature changes. Comput Biol Med 2021; 130:104209. [PMID: 33440316 DOI: 10.1016/j.compbiomed.2021.104209] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Revised: 12/11/2020] [Accepted: 12/31/2020] [Indexed: 12/24/2022]
Abstract
To realize the automatic diagnosis of cervical intraepithelial neoplasia (CIN) from pre- and post-acetic acid test colposcopy images, this paper proposes a diagnosis method for cervical precancerous lesions based on multimodal feature changes. First, the pre- and post-acetic acid test colposcopy images were registered based on cross-correlation and a projection transformation, and the cervical region was then extracted with the k-means clustering algorithm. Finally, a deep learning network was used to extract features from, and classify, the registered pre- and post-acetic acid test cervical images. The proposed method achieves a classification accuracy of 86.3%, a sensitivity of 84.1%, and a specificity of 89.8% on 60 test cases. Experimental results show that this method makes better use of the multimodal features of colposcopy images and places lower demands on medical staff during data acquisition. It has clinical significance for cervical precancerous lesion screening systems.
Affiliation(s)
- Gengyou Peng
- College of Information Engineering, Nanchang Hangkong University, Nanchang, China
- Hua Dong
- College of Information Engineering, Nanchang Hangkong University, Nanchang, China
- Tong Liang
- College of Information Engineering, Nanchang Hangkong University, Nanchang, China
- Ling Li
- Department of Gynecologic Oncology, Jiangxi Maternal and Child Health Hospital, Nanchang, Jiangxi, China
- Jun Liu
- College of Information Engineering, Nanchang Hangkong University, Nanchang, China

37
Zhou Z, Sodha V, Pang J, Gotway MB, Liang J. Models Genesis. Med Image Anal 2021; 67:101840. [PMID: 33188996 PMCID: PMC7726094 DOI: 10.1016/j.media.2020.101840] [Citation(s) in RCA: 76] [Impact Index Per Article: 25.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Revised: 08/12/2020] [Accepted: 09/14/2020] [Indexed: 12/27/2022]
Abstract
Transfer learning from natural image to medical image has been established as one of the most practical paradigms in deep learning for medical image analysis. To fit this paradigm, however, 3D imaging tasks in the most prominent imaging modalities (e.g., CT and MRI) have to be reformulated and solved in 2D, losing rich 3D anatomical information, thereby inevitably compromising its performance. To overcome this limitation, we have built a set of models, called Generic Autodidactic Models, nicknamed Models Genesis, because they are created ex nihilo (with no manual labeling), self-taught (learnt by self-supervision), and generic (served as source models for generating application-specific target models). Our extensive experiments demonstrate that our Models Genesis significantly outperform learning from scratch and existing pre-trained 3D models in all five target 3D applications covering both segmentation and classification. More importantly, learning a model from scratch simply in 3D may not necessarily yield performance better than transfer learning from ImageNet in 2D, but our Models Genesis consistently top any 2D/2.5D approaches including fine-tuning the models pre-trained from ImageNet as well as fine-tuning the 2D versions of our Models Genesis, confirming the importance of 3D anatomical information and significance of Models Genesis for 3D medical imaging. This performance is attributed to our unified self-supervised learning framework, built on a simple yet powerful observation: the sophisticated and recurrent anatomy in medical images can serve as strong yet free supervision signals for deep models to learn common anatomical representation automatically via self-supervision. As open science, all codes and pre-trained Models Genesis are available at https://github.com/MrGiovanni/ModelsGenesis.
Affiliation(s)
- Zongwei Zhou
- Department of Biomedical Informatics, Arizona State University, Scottsdale, AZ 85259, USA
- Vatsal Sodha
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85281, USA
- Jiaxuan Pang
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85281, USA
- Jianming Liang
- Department of Biomedical Informatics, Arizona State University, Scottsdale, AZ 85259, USA
38
Yang Y. Advances in the Classification of Benign and Malignant Pulmonary Nodules Based on Machine Learning. Biophysics (Nagoya-shi) 2021. [DOI: 10.12677/biphy.2021.92006]
39
Shamshirband S, Fathi M, Dehzangi A, Chronopoulos AT, Alinejad-Rokny H. A review on deep learning approaches in healthcare systems: Taxonomies, challenges, and open issues. J Biomed Inform 2020;113:103627. [PMID: 33259944] [DOI: 10.1016/j.jbi.2020.103627]
Abstract
In the last few years, the application of Machine Learning approaches such as Deep Neural Network (DNN) models has become more attractive in the healthcare system, given the rising complexity of healthcare data. Machine Learning (ML) algorithms provide efficient and effective data analysis models to uncover hidden patterns and other meaningful information from the considerable amount of health data that conventional analytics cannot discover in a reasonable time. In particular, Deep Learning (DL) techniques have been shown to be promising methods for pattern recognition in healthcare systems. Motivated by this consideration, the contribution of this paper is to investigate the deep learning approaches applied to healthcare systems by reviewing the cutting-edge network architectures, applications, and industrial trends. The goal is first to provide extensive insight into the application of deep learning models in healthcare solutions to bridge deep learning techniques and human healthcare interpretability, and then to present the existing open challenges and future directions.
Affiliation(s)
- Shahab Shamshirband
- Department of Computer Science, Norwegian University of Science and Technology, Trondheim, Norway; Future Technology Research Center, College of Future, National Yunlin University of Science and Technology, 123 University Road, Section 3, Douliou, Yunlin 64002, Taiwan, ROC
- Mahdis Fathi
- Faculty of Computer and Information Technology Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
- Abdollah Dehzangi
- Department of Computer Science, Rutgers University, Camden, NJ 08102, USA; Center for Computational and Integrative Biology, Rutgers University, Camden, NJ 08102, USA
- Anthony Theodore Chronopoulos
- Department of Computer Science, University of Texas at San Antonio, San Antonio, TX 78249, USA; (Visiting Faculty) Department of Computer Science, University of Patras, 26500 Rio, Greece
- Hamid Alinejad-Rokny
- Systems Biology and Health Data Analytics Lab, The Graduate School of Biomedical Engineering, UNSW Sydney, 2052 Sydney, Australia; School of Computer Science and Engineering, The University of New South Wales (UNSW Sydney), 2052 Sydney, Australia; Health Data Analytics Program Leader, AI-enabled Processes (AIP) Research Centre, Macquarie University, Sydney 2109, Australia
40
Nazari M, Shiri I, Zaidi H. Radiomics-based machine learning model to predict risk of death within 5-years in clear cell renal cell carcinoma patients. Comput Biol Med 2020;129:104135. [PMID: 33254045] [DOI: 10.1016/j.compbiomed.2020.104135]
Abstract
PURPOSE The aim of this study was to develop radiomics-based machine learning models, built from extracted radiomic features and clinical information, to predict the risk of death within 5 years for prognosis of clear cell renal cell carcinoma (ccRCC) patients. METHODS According to image quality and clinical data availability, we eventually selected 70 ccRCC patients who underwent CT scans. Manual volume-of-interest (VOI) segmentation of each image was performed by an experienced radiologist using the 3D Slicer software package. Prior to feature extraction, image pre-processing was applied to the CT images, including wavelet and Laplacian of Gaussian filtering and resampling of the intensity values to 32, 64, and 128 bin levels. Overall, 2544 3D radiomics features were extracted from each VOI for each patient. The Minimum Redundancy Maximum Relevance (MRMR) algorithm was used as the feature selector. Four classification algorithms were used: Generalized Linear Model (GLM), Support Vector Machine (SVM), K-nearest Neighbor (KNN), and XGBoost. We used the bootstrap resampling method to create validation sets. Area under the receiver operating characteristic (ROC) curve (AUROC), accuracy, sensitivity, and specificity were used to assess the performance of the classification models. RESULTS The best single performance among 8 different models was achieved by the XGBoost model using a combination of radiomic features and clinical information (AUROC, accuracy, sensitivity, and specificity with 95% confidence intervals of 0.95-0.98, 0.93-0.98, 0.93-0.96, and ~1.0, respectively). CONCLUSIONS We developed a robust radiomics-based classifier capable of accurately predicting the 5-year overall survival of ccRCC patients. This signature may help identify high-risk patients who require additional treatment and follow-up regimens.
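The abstract names Minimum Redundancy Maximum Relevance (MRMR) as the feature selector. A minimal greedy sketch of the idea, using absolute Pearson correlation as a stand-in for the usual mutual-information criterion (an assumption for illustration; `mrmr_select` is a hypothetical name, not the paper's code):

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy minimum-Redundancy-Maximum-Relevance feature ranking.

    Relevance  = |corr(feature, label)|
    Redundancy = mean |corr(feature, already-selected features)|
    At each step, pick the feature maximizing relevance - redundancy.
    """
    n_feat = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(rel))]  # start with the most relevant feature
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = rel[j] - red
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

The redundancy term is what distinguishes mRMR from simple univariate ranking: a near-duplicate of an already-selected feature is penalized even if it is highly relevant on its own.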
Affiliation(s)
- Mostafa Nazari
- Department of Biomedical Engineering and Medical Physics, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
41
Zhang L, Zhang J, Li Z, Song Y. A multiple-channel and atrous convolution network for ultrasound image segmentation. Med Phys 2020;47:6270-6285. [PMID: 33007105] [DOI: 10.1002/mp.14512]
Abstract
PURPOSE Ultrasound image segmentation is a challenging task due to a low signal-to-noise ratio and poor image quality. Although several approaches based on the convolutional neural network (CNN) have been applied to ultrasound image segmentation, they have weak generalization ability. We propose an end-to-end, multiple-channel and atrous CNN designed to extract a greater amount of semantic information for segmentation of ultrasound images. METHOD A multiple-channel and atrous convolution network is developed, referred to as MA-Net. Similar to U-Net, MA-Net is based on an encoder-decoder architecture and includes five modules: the encoder, atrous convolution, pyramid pooling, decoder, and residual skip pathway modules. In the encoder module, we aim to capture more information with multiple-channel convolution and use large kernel convolution instead of small filters in each convolution operation. In the last layer, atrous convolution and pyramid pooling are used to extract multi-scale features. The architecture of the decoder is similar to that of the encoder module, except that up-sampling is used instead of down-sampling. Furthermore, the residual skip pathway module connects the subnetworks of the encoder and decoder to optimize learning from the deeper layer and improve the accuracy of segmentation. During the learning process, we adopt multi-task learning to enhance segmentation performance. Five types of datasets are used in our experiments. Because the original training data are limited, we apply data augmentation (e.g., horizontal and vertical flipping, random rotations, and random scaling) to our training data. We use the Dice score, precision, recall, Hausdorff distance (HD), average symmetric surface distance (ASD), and root mean square symmetric surface distance (RMSD) as the metrics for segmentation evaluation. Meanwhile, the Friedman test was performed as a nonparametric statistical analysis to evaluate the algorithms.
RESULTS For the datasets of brachial plexus (BP), fetal head, and lymph node segmentation, MA-Net achieved average Dice scores of 0.776, 0.973, and 0.858, respectively; with average precisions of 0.787, 0.968, and 0.854, respectively; average recalls of 0.788, 0.978, and 0.885, respectively; average HDs (mm) of 13.591, 10.924, and 19.245, respectively; average ASDs (mm) of 4.822, 4.152, and 4.312, respectively; and average RMSDs (mm) of 4.979, 4.161, and 4.930, respectively. Compared with U-Net, U-Net++, M-Net, and Dilated U-Net, the average performance of MA-Net increased by approximately 5.68%, 2.85%, 6.59%, 36.03%, 23.64%, and 31.71% for Dice, precision, recall, HD, ASD, and RMSD, respectively. Moreover, we verified the generalization of MA-Net segmentation to lower-grade brain glioma MRI and lung CT images. In addition, MA-Net achieved the highest mean rank in the Friedman test. CONCLUSION The proposed MA-Net accurately segments ultrasound images with high generalization, and therefore, it offers a useful tool for diagnostic application in ultrasound images.
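The atrous (dilated) convolutions that MA-Net relies on widen the receptive field without adding parameters: a kernel of size k with dilation d covers k + (k-1)(d-1) samples. A 1-D NumPy sketch of the operation, for illustration only (the paper applies it in 2-D):

```python
import numpy as np

def atrous_conv1d(signal, kernel, dilation):
    """'Same'-padded 1-D dilated (atrous) cross-correlation.

    The kernel taps are spaced `dilation` samples apart, so the
    receptive field grows with dilation while the parameter count
    (len(kernel)) stays fixed.
    """
    k = len(kernel)
    span = (k - 1) * dilation            # receptive-field span minus one
    padded = np.pad(signal, (span // 2, span - span // 2))
    out = np.zeros_like(signal, dtype=float)
    for i in range(len(signal)):
        for j in range(k):
            out[i] += kernel[j] * padded[i + j * dilation]
    return out
```

Applying a 3-tap kernel with dilation 2 to a unit impulse spreads the taps two samples apart, which makes the enlarged receptive field directly visible in the output.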
Affiliation(s)
- Lun Zhang
- School of Information Science and Engineering, Yunnan University, Kunming, Yunnan, 650091, China; Yunnan Vocational Institute of Energy Technology, Qujing, Yunnan, 655001, China
- Junhua Zhang
- School of Information Science and Engineering, Yunnan University, Kunming, Yunnan, 650091, China
- Zonggui Li
- School of Information Science and Engineering, Yunnan University, Kunming, Yunnan, 650091, China
- Yingchao Song
- School of Information Science and Engineering, Yunnan University, Kunming, Yunnan, 650091, China
42
ROI-based feature learning for efficient true positive prediction using convolutional neural network for lung cancer diagnosis. Neural Comput Appl 2020. [DOI: 10.1007/s00521-020-04787-w]
43
Bertsimas D, Wiberg H. Machine Learning in Oncology: Methods, Applications, and Challenges. JCO Clin Cancer Inform 2020;4:885-894. [PMID: 33058693] [PMCID: PMC7608565] [DOI: 10.1200/cci.20.00072]
Affiliation(s)
- Dimitris Bertsimas
- Sloan School of Management, Massachusetts Institute of Technology, Cambridge, MA
- Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA
- Holly Wiberg
- Operations Research Center, Massachusetts Institute of Technology, Cambridge, MA
44
Liu H, Cao H, Song E, Ma G, Xu X, Jin R, Liu C, Hung CC. Multi-model Ensemble Learning Architecture Based on 3D CNN for Lung Nodule Malignancy Suspiciousness Classification. J Digit Imaging 2020;33:1242-1256. [PMID: 32607905] [PMCID: PMC7649841] [DOI: 10.1007/s10278-020-00372-8]
Abstract
Classification of benign and malignant lung nodules in chest CT images is a key step in the diagnosis of early-stage lung cancer, as well as an effective way to improve patients' survival rates. However, due to the diversity of lung nodules and their visual similarity to surrounding tissues, it is difficult to construct a robust classification model with conventional deep learning-based diagnostic methods. To address this problem, we propose a multi-model ensemble learning architecture based on 3D convolutional neural networks (MMEL-3DCNN). This approach incorporates three key ideas: (1) The multi-model network architecture is well adapted to the heterogeneity of lung nodules. (2) The input, a concatenation of the intensity image corresponding to the nodule mask, the original image, and the corresponding enhanced image, helps the trained model extract advanced features with more discriminative capacity. (3) The model corresponding to each nodule size is selected dynamically for prediction, which effectively improves the generalization ability of the model. In addition, ensemble learning is applied in this paper to further improve the robustness of the nodule classification model. The proposed method has been experimentally verified on the public LIDC-IDRI dataset. The experimental results show that the proposed MMEL-3DCNN architecture can obtain satisfactory classification results.
Affiliation(s)
- Hong Liu
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
- Haichao Cao
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
- Enmin Song
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
- Guangzhi Ma
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
- Xiangyang Xu
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
- Renchao Jin
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
- Chuhua Liu
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China
- Chih-Cheng Hung
- Laboratory for Machine Vision and Security Research, Kennesaw State University, Kennesaw, GA, USA
45
Horry MJ, Chakraborty S, Paul M, Ulhaq A, Pradhan B, Saha M, Shukla N. COVID-19 Detection Through Transfer Learning Using Multimodal Imaging Data. IEEE Access 2020;8:149808-149824. [PMID: 34931154] [PMCID: PMC8668160] [DOI: 10.1109/access.2020.3016780]
Abstract
Detecting COVID-19 early may help in devising an appropriate treatment plan and disease containment decisions. In this study, we demonstrate how transfer learning from deep learning models can be used to perform COVID-19 detection using images from the three most commonly used medical imaging modes: X-ray, ultrasound, and CT scan. The aim is to provide over-stressed medical professionals a second pair of eyes through intelligent deep learning image classification models. We identify a suitable Convolutional Neural Network (CNN) model through an initial comparative study of several popular CNN models. We then optimize the selected VGG19 model for the image modalities to show how the models can be used for the highly scarce and challenging COVID-19 datasets. We highlight the challenges (including dataset size and quality) in utilizing current publicly available COVID-19 datasets for developing useful deep learning models, and how they adversely impact the trainability of complex models. We also propose an image pre-processing stage to create a trustworthy image dataset for developing and testing the deep learning models. This approach aims to reduce unwanted noise from the images so that deep learning models can focus on detecting diseases with specific features. Our results indicate that ultrasound images provide superior detection accuracy compared to X-ray and CT scans. The experimental results highlight that with limited data, most of the deeper networks struggle to train well and provide less consistency over the three imaging modes. The selected VGG19 model, extensively tuned with appropriate parameters, performs at considerable levels of COVID-19 detection against pneumonia or normal for all three lung image modes, with precision of up to 86% for X-ray, 100% for ultrasound, and 84% for CT scans.
Affiliation(s)
- Michael J. Horry
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Information, Systems, and Modeling, Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
- IBM Australia Limited, Sydney, NSW 2065, Australia
- Subrata Chakraborty
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Information, Systems, and Modeling, Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
- Manoranjan Paul
- Machine Vision and Digital Health (MaViDH), School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia
- Anwaar Ulhaq
- Machine Vision and Digital Health (MaViDH), School of Computing and Mathematics, Charles Sturt University, Bathurst, NSW 2795, Australia
- Biswajeet Pradhan
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Information, Systems, and Modeling, Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
- Department of Energy and Mineral Resources Engineering, Sejong University, Seoul 05006, South Korea
- Manas Saha
- Manning Rural Referral Hospital, Taree, NSW 2430, Australia
- Nagesh Shukla
- Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Information, Systems, and Modeling, Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
46
Usman M, Lee BD, Byon SS, Kim SH, Lee BI, Shin YG. Volumetric lung nodule segmentation using adaptive ROI with multi-view residual learning. Sci Rep 2020;10:12839. [PMID: 32732963] [PMCID: PMC7393083] [DOI: 10.1038/s41598-020-69817-y]
Abstract
Accurate quantification of pulmonary nodules can greatly assist the early diagnosis of lung cancer, enhancing patient survival possibilities. A number of nodule segmentation techniques have been proposed that either rely on a radiologist-provided 3-D volume of interest (VOI) or use a constant region of interest (ROI) for all slices; however, these techniques can only investigate the presence of nodule voxels within the given VOI. Such approaches cannot investigate nodule presence outside the given VOI and may include redundant (non-nodule) structures in the VOI, which limits segmentation accuracy. In this work, a novel semi-automated approach for 3-D segmentation of lung nodules in computed tomography scans has been proposed. The technique is divided into two stages. In the first stage, a 2-D ROI containing the nodule is provided as input to perform a patch-wise exploration along the axial axis using a novel adaptive ROI algorithm. This strategy enables the dynamic selection of the ROI in the surrounding slices to investigate the presence of nodules using a Deep Residual U-Net architecture. This stage provides the initial estimate of the nodule, which is used to extract the VOI. In the second stage, the extracted VOI is further explored along the coronal and sagittal axes, in patch-wise fashion, with Residual U-Nets. All the estimated masks are then fed into a consensus module to produce a final volumetric segmentation of the nodule. The algorithm is rigorously evaluated on the LIDC-IDRI dataset, the largest publicly available dataset. The proposed approach achieved an average Dice score of 87.5%, which is significantly higher than existing state-of-the-art techniques.
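The abstract fuses per-axis masks with a consensus module and evaluates with the Dice score. A hedged sketch of both, assuming simple majority voting as the fusion rule (the paper's exact rule is not specified here, and the function names are illustrative):

```python
import numpy as np

def consensus(masks):
    """Majority vote across a stack of binary masks (one per view axis).

    An illustrative stand-in for the paper's consensus module: a voxel
    is foreground if at least half of the per-axis masks agree.
    """
    return (np.mean(masks, axis=0) >= 0.5).astype(np.uint8)

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), with 1.0 for two empty masks."""
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0
```

With three axis-wise masks, a voxel kept by two of the three views survives the vote, which suppresses single-view false positives.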
Affiliation(s)
- Muhammad Usman
- Department of Computer Science and Engineering, Seoul National University, 08826, Seoul, South Korea; Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co. Ltd., Seoul, 06524, South Korea
- Byoung-Dai Lee
- School of Computer Science and Engineering, Kyonggi University, Suwon, 16227, South Korea
- Shi-Sub Byon
- Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co. Ltd., Seoul, 06524, South Korea
- Sung-Hyun Kim
- Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co. Ltd., Seoul, 06524, South Korea
- Byung-Il Lee
- Center for Artificial Intelligence in Medicine and Imaging, HealthHub Co. Ltd., Seoul, 06524, South Korea
- Yeong-Gil Shin
- Department of Computer Science and Engineering, Seoul National University, 08826, Seoul, South Korea
47
Hussain AA, Bouachir O, Al-Turjman F, Aloqaily M. AI Techniques for COVID-19. IEEE Access 2020;8:128776-128795. [PMID: 34976554] [PMCID: PMC8545328] [DOI: 10.1109/access.2020.3007939]
Abstract
Artificial Intelligence (AI) aims to extend human capabilities. It is gaining ground in healthcare services, fueled by the growing availability of clinical data and rapid progress in intelligent techniques. Motivated by the need to employ AI in battling the COVID-19 crisis, this survey summarizes the current state of AI applications in clinical administration during the COVID-19 pandemic. Furthermore, we highlight the application of Big Data in understanding this virus. We also overview various intelligence techniques and methods that can be applied to various types of pandemic-related medical information. We classify the existing AI techniques in clinical data analysis, including neural networks, classical SVM, and edge deep learning. An emphasis is also placed on regions that utilize AI-oriented cloud computing in combating viruses similar to COVID-19. This survey study is an attempt to help medical practitioners and medical researchers overcome the difficulties they face in handling COVID-19 big data. The investigated techniques advance medical data analysis with an accuracy of up to 90%. We end with a detailed discussion of how AI implementation can be a major advantage in combating similar viruses.
Affiliation(s)
- Adedoyin Ahmed Hussain
- Department of Computer Engineering, Near East University, 99138 Nicosia, Mersin 10, Turkey
- Research Centre for AI and IoT, Department of Artificial Intelligence Engineering, Near East University, 99138 Nicosia, Mersin 10, Turkey
- Ouns Bouachir
- Department of Computer Engineering, Zayed University, Dubai, United Arab Emirates
- College of Technological Innovation, Zayed University, Dubai, United Arab Emirates
- Fadi Al-Turjman
- Research Centre for AI and IoT, Department of Artificial Intelligence Engineering, Near East University, 99138 Nicosia, Mersin 10, Turkey
- Moayad Aloqaily
- College of Engineering, Al Ain University, Al Ain, United Arab Emirates
48
Bharati S, Podder P, Mondal MRH. Hybrid deep learning for detecting lung diseases from X-ray images. Informatics in Medicine Unlocked 2020;20:100391. [PMID: 32835077] [PMCID: PMC7341954] [DOI: 10.1016/j.imu.2020.100391]
Abstract
Lung disease is common throughout the world. These include chronic obstructive pulmonary disease, pneumonia, asthma, tuberculosis, fibrosis, etc. Timely diagnosis of lung disease is essential. Many image processing and machine learning models have been developed for this purpose. Different forms of existing deep learning techniques including convolutional neural network (CNN), vanilla neural network, visual geometry group based neural network (VGG), and capsule network are applied for lung disease prediction. The basic CNN has poor performance for rotated, tilted, or other abnormal image orientation. Therefore, we propose a new hybrid deep learning framework by combining VGG, data augmentation and spatial transformer network (STN) with CNN. This new hybrid method is termed here as VGG Data STN with CNN (VDSNet). As implementation tools, Jupyter Notebook, Tensorflow, and Keras are used. The new model is applied to NIH chest X-ray image dataset collected from Kaggle repository. Full and sample versions of the dataset are considered. For both full and sample datasets, VDSNet outperforms existing methods in terms of a number of metrics including precision, recall, F0.5 score and validation accuracy. For the case of full dataset, VDSNet exhibits a validation accuracy of 73%, while vanilla gray, vanilla RGB, hybrid CNN and VGG, and modified capsule network have accuracy values of 67.8%, 69%, 69.5% and 63.8%, respectively. When sample dataset rather than full dataset is used, VDSNet requires much lower training time at the expense of a slightly lower validation accuracy. Hence, the proposed VDSNet framework will simplify the detection of lung disease for experts as well as for doctors.
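The data augmentation the abstract mentions (flips and rotations, alongside the spatial transformer network) can be sketched in NumPy as follows; `augment` is an illustrative name and a simplification, not the paper's code, and random scaling is omitted for brevity:

```python
import numpy as np

def augment(image, rng=None):
    """Random horizontal/vertical flip plus a random 90-degree
    rotation -- a minimal version of the augmentations applied
    to the chest X-ray training set."""
    rng = np.random.default_rng(rng)
    if rng.random() < 0.5:
        image = np.fliplr(image)  # horizontal flip
    if rng.random() < 0.5:
        image = np.flipud(image)  # vertical flip
    return np.rot90(image, k=rng.integers(0, 4))
```

Each transformation is a pure permutation of pixels, so labels remain valid while the network sees geometrically varied copies of each scarce training image.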
Affiliation(s)
- Subrato Bharati
- Institute of Information and Communication Technology, Bangladesh University of Engineering and Technology, Dhaka, 1205, Bangladesh
- Prajoy Podder
- Institute of Information and Communication Technology, Bangladesh University of Engineering and Technology, Dhaka, 1205, Bangladesh
- M Rubaiyat Hossain Mondal
- Institute of Information and Communication Technology, Bangladesh University of Engineering and Technology, Dhaka, 1205, Bangladesh
49
Ardakani AA, Kanafi AR, Acharya UR, Khadem N, Mohammadi A. Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks. Comput Biol Med 2020;121:103795. [PMID: 32568676] [PMCID: PMC7190523] [DOI: 10.1016/j.compbiomed.2020.103795]
Abstract
Fast diagnostic methods can control and prevent the spread of pandemic diseases like coronavirus disease 2019 (COVID-19) and assist physicians in better managing patients under high-workload conditions. Although a laboratory test is the current routine diagnostic tool, it is time-consuming, imposes a high cost, and requires a well-equipped laboratory for analysis. Computed tomography (CT) has thus far become a fast method to diagnose patients with COVID-19. However, the performance of radiologists in the diagnosis of COVID-19 was moderate; accordingly, additional investigations are needed to improve this performance. This study suggests a rapid and valid method for COVID-19 diagnosis based on an artificial intelligence technique. 1020 CT slices from 108 patients with laboratory-proven COVID-19 (the COVID-19 group) and 86 patients with other atypical and viral pneumonia diseases (the non-COVID-19 group) were included. Ten well-known convolutional neural networks were used to distinguish the COVID-19 group from the non-COVID-19 group: AlexNet, VGG-16, VGG-19, SqueezeNet, GoogleNet, MobileNet-V2, ResNet-18, ResNet-50, ResNet-101, and Xception. Among all networks, the best performance was achieved by ResNet-101 and Xception. ResNet-101 could distinguish COVID-19 from non-COVID-19 cases with an AUC of 0.994 (sensitivity, 100%; specificity, 99.02%; accuracy, 99.51%). Xception achieved an AUC of 0.994 (sensitivity, 98.04%; specificity, 100%; accuracy, 99.02%). The performance of the radiologist, however, was moderate, with an AUC of 0.873 (sensitivity, 89.21%; specificity, 83.33%; accuracy, 86.27%). ResNet-101 can be considered a high-sensitivity model to characterize and diagnose COVID-19 infection and can be used as an adjuvant tool in radiology departments.
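The AUC values reported above can be computed directly from the rank (Mann-Whitney) definition of AUC: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A small illustrative helper (the function name is hypothetical):

```python
def auc_from_scores(labels, scores):
    """AUC via the Mann-Whitney formulation: the fraction of
    (positive, negative) pairs in which the positive case receives
    the higher score, counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise form makes explicit why AUC is threshold-free: it depends only on the ordering of the scores, not on any particular operating point.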
Affiliation(s)
- Ali Abbasian Ardakani
- Medical Physics Department, School of Medicine, Iran University of Medical Sciences (IUMS), Tehran, Iran
- U Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, Singapore University of Social Sciences, Singapore; School of Medicine, Faculty of Health and Medical Sciences, Taylor's University, 47500, Subang Jaya, Malaysia; Department of Biomedical Informatics and Medical Engineering, Asia University, Taiwan
- Nazanin Khadem
- Department of Radiology, Faculty of Medicine, Urmia University of Medical Science, Urmia, Iran
- Afshin Mohammadi
- Department of Radiology, Faculty of Medicine, Urmia University of Medical Science, Urmia, Iran
50
Yang G, Pang Z, Jamal Deen M, Dong M, Zhang YT, Lovell N, Rahmani AM. Homecare Robotic Systems for Healthcare 4.0: Visions and Enabling Technologies. IEEE J Biomed Health Inform 2020;24:2535-2549. [PMID: 32340971] [DOI: 10.1109/jbhi.2020.2990529]
Abstract
Powered by the technologies that have originated from manufacturing, the fourth revolution of healthcare technologies is happening (Healthcare 4.0). As an example of such revolution, new generation homecare robotic systems (HRS) based on the cyber-physical systems (CPS) with higher speed and more intelligent execution are emerging. In this article, the new visions and features of the CPS-based HRS are proposed. The latest progress in related enabling technologies is reviewed, including artificial intelligence, sensing fundamentals, materials and machines, cloud computing and communication, as well as motion capture and mapping. Finally, the future perspectives of the CPS-based HRS and the technical challenges faced in each technical area are discussed.