1. Wang W, Yin S, Ye F, Chen Y, Zhu L, Yu H. GC-WIR: 3D global coordinate attention wide inverted ResNet network for pulmonary nodules classification. BMC Pulm Med 2024; 24:465. [PMID: 39304884] [DOI: 10.1186/s12890-024-03272-7]
Abstract
PURPOSE Current deep learning methods for classifying benign and malignant lung nodules face challenges including intricate and unstable model architectures, limited adaptability to data, and large parameter counts. To tackle these concerns, this study introduces a novel approach: the 3D Global Coordinate Attention Wide Inverted ResNet Network (GC-WIR). This network aims at precise classification of benign and malignant pulmonary nodules, offering high efficiency, parsimonious parameterization, and robust stability. METHODS Within this framework, a 3D Global Coordinate Attention mechanism (3D GCA) computes features of the input images by converting 3D channel information and multi-dimensional positional cues. By encompassing both global channel details and spatial positional cues, this approach maintains a judicious balance between flexibility and computational efficiency. Furthermore, the GC-WIR architecture incorporates a 3D Wide Inverted Residual Network (3D WIRN), which augments feature computation by expanding input channels. This expansion mitigates information loss during feature extraction, speeds model convergence, and enhances performance, while the inverted residual structure improves the model's stability. RESULTS Empirical validation of GC-WIR on the LUNA16 dataset yields predictions that surpass those of previous models: an accuracy of 94.32% and a specificity of 93.69%, with a modest parameter count of 5.76M. CONCLUSION Experimental results further demonstrate that, even under stringent computational constraints, GC-WIR outperforms alternative deep learning methods, establishing a new performance benchmark.
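The coordinate-attention idea behind 3D GCA — pooling along each spatial axis so the attention gates retain positional information per direction — can be sketched in NumPy. This is a minimal toy of the general mechanism, not the authors' 3D GCA: a real block would insert shared convolutions and normalization between pooling and gating, and the function name is illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def coord_attention_3d(x):
    """Toy 3D coordinate attention: average-pool along each spatial axis,
    turn the pooled profiles into (0, 1) gates, and reweight the input.
    x: feature map of shape (C, D, H, W)."""
    # Directional pooling keeps per-axis positional information,
    # unlike global average pooling which collapses all of it.
    pool_d = x.mean(axis=(2, 3))              # (C, D)
    pool_h = x.mean(axis=(1, 3))              # (C, H)
    pool_w = x.mean(axis=(1, 2))              # (C, W)
    # Broadcastable gates, one per axis (a real block learns these).
    g_d = sigmoid(pool_d)[:, :, None, None]   # (C, D, 1, 1)
    g_h = sigmoid(pool_h)[:, None, :, None]   # (C, 1, H, 1)
    g_w = sigmoid(pool_w)[:, None, None, :]   # (C, 1, 1, W)
    return x * g_d * g_h * g_w
```

The output keeps the input shape; only the per-position scaling changes.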
Affiliation(s)
- Wenju Wang
- University of Shanghai for Science and Technology, Jungong 516 Rd, Shanghai, 200093, China
- Shuya Yin
- University of Shanghai for Science and Technology, Jungong 516 Rd, Shanghai, 200093, China
- Fang Ye
- University of Shanghai for Science and Technology, Jungong 516 Rd, Shanghai, 200093, China
- Yinan Chen
- Department of Radiology, Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Huaihai West Road No. 241, Shanghai, 200030, China
- Lin Zhu
- Department of Radiology, Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Huaihai West Road No. 241, Shanghai, 200030, China
- Hong Yu
- Department of Radiology, Shanghai Chest Hospital, School of Medicine, Shanghai Jiao Tong University, Huaihai West Road No. 241, Shanghai, 200030, China
2. Zahari R, Cox J, Obara B. Uncertainty-aware image classification on 3D CT lung. Comput Biol Med 2024; 172:108324. [PMID: 38508053] [DOI: 10.1016/j.compbiomed.2024.108324]
Abstract
Early detection is crucial for prolonging the survival of lung cancer patients. Existing model architectures used in such systems have shown promising results, but they lack reliability and robustness in their predictions, and the models are typically evaluated on a single dataset, making them overconfident when a new class is present. With uncertainty estimates available, uncertain images can be referred to medical experts for a second opinion. Thus, we propose an uncertainty-aware framework with three phases — data preprocessing with model selection and evaluation; uncertainty quantification (UQ); and uncertainty measurement with data referral — for the classification of benign and malignant nodules using 3D CT images. To quantify uncertainty, we employed three approaches: Monte Carlo Dropout (MCD), Deep Ensemble (DE), and Ensemble Monte Carlo Dropout (EMCD). We evaluated eight deep learning models from the ResNet, DenseNet, and Inception families, all of which achieved average F1 scores above 0.832; the highest average value of 0.845 was obtained with InceptionResNetV2. Furthermore, incorporating UQ significantly improved overall model performance. Upon evaluation of the uncertainty estimates, MCD outperforms the other UQ models except on the URecall metric, where DE and EMCD excel, implying that they are better at flagging incorrect predictions with higher uncertainty, which is vital in the medical field. Finally, we show that a referral threshold can further improve performance, increasing accuracy to 0.959.
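The referral step of such a framework — averaging stochastic forward passes, computing predictive entropy, and flagging high-uncertainty cases for expert review — can be sketched as below. This is a minimal NumPy illustration of the general MCD-style recipe; the function name and threshold value are illustrative, not from the paper.

```python
import numpy as np

def mc_dropout_referral(prob_samples, threshold):
    """prob_samples: (T, N) malignancy probabilities from T stochastic
    forward passes (dropout active at test time). Returns the mean
    prediction, the binary predictive entropy, and a referral mask for
    cases whose uncertainty exceeds the threshold."""
    p = prob_samples.mean(axis=0)                 # (N,) averaged prediction
    eps = 1e-12                                   # guards log(0)
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    refer = entropy > threshold                   # send these to an expert
    return p, entropy, refer
```

Confident, consistent samples yield low entropy (kept automatically); disagreeing samples drive the mean toward 0.5 and the entropy toward ln 2, triggering referral.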
Affiliation(s)
- Rahimi Zahari
- School of Computing, Newcastle University, Newcastle upon Tyne, UK
- Julie Cox
- County Durham and Darlington NHS Foundation Trust, County Durham, UK
- Boguslaw Obara
- School of Computing, Newcastle University, Newcastle upon Tyne, UK; Biosciences Institute, Newcastle University, Newcastle upon Tyne, UK
3. Zhan X, Long H, Gou F, Wu J. A semantic fidelity interpretable-assisted decision model for lung nodule classification. Int J Comput Assist Radiol Surg 2023. [PMID: 38141069] [DOI: 10.1007/s11548-023-03043-5]
Abstract
PURPOSE Early diagnosis of lung nodules is important for treating lung cancer patients. Existing capsule network-based assisted diagnostic models for lung nodule classification have shown promise in terms of interpretability, but they cannot robustly extract features in shallow layers, which limits their performance. Therefore, we propose a semantic fidelity capsule encoding and interpretable (SFCEI)-assisted decision model for lung nodule multi-class classification. METHODS First, we propose a multilevel receptive field feature encoding block to capture multi-scale features of lung nodules of different sizes. Second, we embed these blocks in the residual code-and-decode attention layer to extract fine-grained context features. Integrating the multi-scale and contextual features yields semantic fidelity lung nodule attribute capsule representations, which enhance the performance of the model. RESULTS We conducted comprehensive experiments on the LIDC-IDRI dataset to validate the superiority of the model. Stratified fivefold cross-validation shows that the accuracy (94.17%) of our method exceeds existing advanced approaches in the multi-class classification of malignancy scores for lung nodules. CONCLUSION The experiments confirm that the proposed methodology can effectively capture the multi-scale and contextual features of lung nodules. It enhances the feature-extraction capability of shallow structures in capsule networks, which in turn improves the classification of malignancy scores. The interpretable model can support physicians' confidence in clinical decision-making.
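Capsule models of the kind this work builds on use a "squash" nonlinearity so that a capsule vector's length encodes presence probability while its orientation encodes attribute information. A standard formulation from the general capsule-network literature (not this paper's code) in NumPy:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    """Capsule 'squash': shrinks a vector's length into [0, 1) while
    preserving its direction, so length can act as a probability that
    the encoded attribute is present."""
    sq = np.sum(v ** 2, axis=axis, keepdims=True)   # squared norm
    norm = np.sqrt(sq + eps)                        # safe norm
    return (sq / (1.0 + sq)) * (v / norm)           # scale * unit vector
```

Long vectors are squashed to lengths just below 1, and near-zero vectors stay near zero, which is what lets capsule lengths be read as attribute probabilities.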
Affiliation(s)
- Xiangbing Zhan
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
- Huiyun Long
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
- Jia Wu
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, 550025, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC, 3800, Australia
4. Dong Y, Li X, Yang Y, Wang M, Gao B. A Synthesizing Semantic Characteristics Lung Nodules Classification Method Based on 3D Convolutional Neural Network. Bioengineering (Basel) 2023; 10:1245. [PMID: 38002369] [PMCID: PMC10669569] [DOI: 10.3390/bioengineering10111245]
Abstract
Early detection is crucial for the survival and recovery of lung cancer patients. Computer-aided diagnosis (CAD) systems can support early diagnosis of lung cancer by providing decision support. While deep learning methods are increasingly applied in CAD, these models lack interpretability. In this paper, we propose a convolutional neural network model that combines semantic characteristics (SCCNN) to predict whether a given pulmonary nodule is malignant. The model combines multi-view inputs, multi-task learning, and attention modules to fully simulate the actual diagnostic process of radiologists. 3D (three-dimensional) multi-view samples of lung nodules are extracted by a spatial sampling method. Meanwhile, semantic characteristics commonly used in radiology reports serve as an auxiliary task and help explain the model's predictions. An attention module in the feature fusion stage improves the classification of lung nodules as benign or malignant. Experiments on the LIDC-IDRI (Lung Image Database Consortium and Image Database Resource Initiative) dataset achieve 95.45% accuracy and an area under the ROC (Receiver Operating Characteristic) curve of 97.26%. These results show that the proposed method not only classifies benign and malignant nodules better than standard 3D CNN approaches but can also intuitively explain how the model makes predictions, which can assist clinical diagnosis.
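The attention-weighted fusion of multi-view features that the abstract describes can be illustrated with a minimal sketch: per-view relevance logits are softmax-normalized and used to weight each view's feature vector. The function names and shapes here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_fuse(view_feats, scores):
    """view_feats: (V, F) one feature vector per 3D view of the nodule;
    scores: (V,) learned relevance logits (here supplied directly).
    Returns the attention-weighted fused feature vector of shape (F,)."""
    w = softmax(scores)      # weights sum to 1 across views
    return w @ view_feats
```

With uniform logits the fusion degenerates to plain view averaging; a strongly dominant logit makes the fusion follow that single view.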
Affiliation(s)
- Xiaoqin Li
- Faculty of Environment and Life, Beijing University of Technology, Beijing 100124, China; (Y.D.); (Y.Y.); (M.W.); (B.G.)
5. Yanagawa M, Ito R, Nozaki T, Fujioka T, Yamada A, Fujita S, Kamagata K, Fushimi Y, Tsuboyama T, Matsui Y, Tatsugami F, Kawamura M, Ueda D, Fujima N, Nakaura T, Hirata K, Naganawa S. New trend in artificial intelligence-based assistive technology for thoracic imaging. Radiol Med 2023; 128:1236-1249. [PMID: 37639191] [PMCID: PMC10547663] [DOI: 10.1007/s11547-023-01691-w]
Abstract
Although there is no settled definition of artificial intelligence (AI), the term refers to computer systems with intelligence similar to that of humans. Deep learning appeared in 2006, and more than 10 years have passed since the third AI boom was triggered by improvements in computing power, algorithm development, and the use of big data. In recent years, the application and development of AI technology in the medical field have intensified internationally. There is no doubt that AI will be used in clinical practice to assist diagnostic imaging in the future. In qualitative diagnosis, it is desirable to develop explainable AI that at least presents the basis of the diagnostic process. However, it must be kept in mind that AI is a physician-assistant system, and the final decision should be made by the physician with an understanding of AI's limitations. The aim of this article is to review applications of AI technology in diagnostic imaging from the PubMed database, focusing on thoracic imaging tasks such as lesion detection and qualitative diagnosis, to help radiologists and clinicians become more familiar with AI in the thorax.
Affiliation(s)
- Masahiro Yanagawa
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita-City, Osaka, 565-0871, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, 35 Shinanomachi, Shinjuku-ku, Tokyo, 160-0016, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8519, Japan
- Akira Yamada
- Department of Radiology, Shinshu University School of Medicine, 3-1-1 Asahi, Matsumoto, Nagano, 390-2621, Japan
- Shohei Fujita
- Department of Radiology, Graduate School of Medicine and Faculty of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo, 113-8421, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawaharacho, Sakyo-ku, Kyoto, 606-8507, Japan
- Takahiro Tsuboyama
- Department of Radiology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita-City, Osaka, 565-0871, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, 2-5-1 Shikata-cho, Kita-ku, Okayama, 700-8558, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, 1-2-3 Kasumi, Minami-ku, Hiroshima, 734-8551, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3 Asahi-Machi, Abeno-ku, Osaka, 545-8585, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, N15, W5, Kita-ku, Sapporo, 060-8638, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, 1-1-1 Honjo, Chuo-ku, Kumamoto, 860-8556, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Graduate School of Medicine, Hokkaido University, Kita 15, Nishi 7, Kita-ku, Sapporo, Hokkaido, 060-8648, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya, Aichi, 466-8550, Japan
6. Liang H, Hu M, Ma Y, Yang L, Chen J, Lou L, Chen C, Xiao Y. Performance of Deep-Learning Solutions on Lung Nodule Malignancy Classification: A Systematic Review. Life (Basel) 2023; 13:1911. [PMID: 37763314] [PMCID: PMC10532719] [DOI: 10.3390/life13091911]
Abstract
OBJECTIVE For several years, computer technology has been utilized to diagnose lung nodules. Compared to traditional machine learning methods for image processing, deep-learning methods can improve the accuracy of lung nodule diagnosis by avoiding laborious image pre-processing steps (hand-crafted feature extraction, etc.). Our goal is to investigate how well deep-learning approaches classify lung nodule malignancy. METHOD We evaluated the performance of deep-learning methods on lung nodule malignancy classification via a systematic literature search. We searched the PubMed and ISI Web of Science databases and selected articles that employed deep learning to classify or predict lung nodule malignancy. Figures were plotted and data were extracted using SAS version 9.4 and Microsoft Excel 2010, respectively. RESULTS Sixteen studies met the criteria and were included. The articles classified or predicted pulmonary nodule malignancy using convolutional neural networks (CNN), autoencoders (AE), and deep belief networks (DBN). The AUC of the deep-learning models in these articles was typically greater than 90%, demonstrating that deep learning performs well in the diagnosis and prediction of lung nodule malignancy. CONCLUSION This is a thorough analysis of the most recent advancements in deep-learning technologies for lung nodules. Image processing techniques, traditional machine learning techniques, deep-learning techniques, and other approaches have all been applied to pulmonary nodule diagnosis. Although deep-learning models have demonstrated distinct advantages in the detection of pulmonary nodules, they also carry significant drawbacks that warrant additional research.
Affiliation(s)
- Hailun Liang
- School of Public Administration and Policy, Renmin University of China, Beijing 100872, China; (H.L.)
- Meili Hu
- Department of Gynecology, Baoding Maternal and Child Health Care Hospital, Baoding 071000, China
- Yuxin Ma
- School of Public Administration and Policy, Renmin University of China, Beijing 100872, China; (H.L.)
- Lei Yang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Beijing Office for Cancer Prevention and Control, Peking University Cancer Hospital & Institute, Beijing 100142, China
- Jie Chen
- School of Public Administration and Policy, Renmin University of China, Beijing 100872, China; (H.L.)
- Liwei Lou
- School of Statistics, Renmin University of China, Beijing 100872, China
- Chen Chen
- School of Public Administration and Policy, Renmin University of China, Beijing 100872, China; (H.L.)
- Yuan Xiao
- Blockchain Research Institute, Renmin University of China, Beijing 100872, China
7. Sakshiwala, Singh MP. An ensemble of three-dimensional deep neural network models for multi-attribute scoring and classification of pulmonary nodules. Proc Inst Mech Eng H 2023; 237:946-957. [PMID: 37366554] [DOI: 10.1177/09544119231182037]
Abstract
Lung cancer is the uncontrolled growth of cells that originate in the lung parenchyma or in the cells lining the air passages; these cells divide rapidly to form malignant tumors. This paper proposes a multi-task ensemble of three-dimensional (3D) deep neural network (DNN) models, namely a pre-trained EfficientNetB0, a BiGRU-based SEResNext101, and the proposed LungNet. The ensemble performs binary classification and regression tasks to accurately classify benign and malignant pulmonary nodules. This study also explores attribute importance and proposes a domain knowledge-based regularization technique. The proposed model is evaluated on the public benchmark LIDC-IDRI dataset. A comparative study shows that when coefficients generated by a random forest (RF) are used in the loss function, the proposed ensemble reaches an accuracy of 96.4%, better than state-of-the-art methods. In addition, receiver operating characteristic curves show that the ensemble outperforms its base learners. Thus, the proposed CAD-based model can efficiently detect malignant pulmonary nodules.
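The idea of folding random-forest-derived attribute importances into the objective can be sketched as a coefficient-weighted sum of per-task losses. This is a hedged NumPy toy of the general technique; the paper's exact loss composition and coefficient source may differ.

```python
import numpy as np

def weighted_multitask_loss(losses, coefficients):
    """Combine per-attribute task losses with importance coefficients
    (e.g. feature importances from a random forest). Coefficients are
    normalized to sum to 1 so tasks compete for a fixed budget."""
    c = np.asarray(coefficients, dtype=float)
    c = c / c.sum()                      # normalize importances
    return float(np.dot(c, losses))      # weighted total loss
```

A task the forest deems twice as important contributes twice the gradient pressure, steering the shared backbone toward the attributes that matter most for malignancy.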
Affiliation(s)
- Sakshiwala
- Department of Computer Science and Engineering, NIT Patna, Patna, Bihar, India
8. Yuan H, Wu Y, Dai M. Multi-Modal Feature Fusion-Based Multi-Branch Classification Network for Pulmonary Nodule Malignancy Suspiciousness Diagnosis. J Digit Imaging 2023; 36:617-626. [PMID: 36478311] [PMCID: PMC10039149] [DOI: 10.1007/s10278-022-00747-z]
Abstract
Detecting and identifying malignant nodules on chest computed tomography (CT) plays an important role in the early diagnosis and timely treatment of lung cancer, which can greatly reduce the number of deaths worldwide. Existing methods for pulmonary nodule diagnosis largely ignore structured clinical radiological data (laboratory examinations, radiological findings) when judging a patient's condition. Hence, a multi-modal fusion multi-branch classification network is constructed to detect and classify pulmonary nodules in this work: (1) radiological data of pulmonary nodules are used to construct structured features of length 9; (2) a multi-branch fusion-based attention network is designed for the unstructured 3D CT patch data, using a 3D ECA-ResNet to dynamically adjust the extracted features, and feature maps with different receptive fields from multiple layers are fused to obtain representative multi-scale unstructured features; (3) multi-modal fusion of the structured and unstructured features distinguishes benign from malignant nodules. Numerous experimental results show that this network can effectively classify benign and malignant pulmonary nodules for clinical diagnosis, achieving the highest accuracy (94.89%), sensitivity (94.91%), and F1-score (94.65%) and the lowest false positive rate (5.55%).
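Step (3) above — fusing the length-9 structured vector with the unstructured CT features — is, in its simplest form, a late-fusion concatenation before the classification head. A minimal sketch assuming z-score normalization of the structured branch; the normalization choice, feature sizes, and function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_modalities(structured, image_feats, mean, std):
    """Late fusion: the structured radiological vector (length 9 in the
    paper) is standardized with training-set statistics and concatenated
    with pooled 3D CT features to form one input for the classifier."""
    s = (np.asarray(structured, dtype=float) - mean) / std
    return np.concatenate([s, np.asarray(image_feats, dtype=float)])
```

Standardizing the structured branch keeps its 9 values on a scale comparable to the learned image features, so neither modality dominates the head purely by magnitude.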
Affiliation(s)
- Haiying Yuan
- Beijing University of Technology, Beijing, China
- Yanrui Wu
- Beijing University of Technology, Beijing, China
- Mengfan Dai
- Beijing University of Technology, Beijing, China
9. Bairagi VK, Gumaste PP, Rajput SH, Chethan KS. Automatic brain tumor detection using CNN transfer learning approach. Med Biol Eng Comput 2023. [PMID: 36949356] [DOI: 10.1007/s11517-023-02820-3]
Abstract
Automatic brain tumor detection is challenging because tumors vary in position, mass, and nature, and brain lesions can resemble normal tissues. Tumor detection is vital and urgent, as it relates to the lifespan of the affected person. Medical experts commonly use advanced imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound to determine the presence of abnormal tissues. Manually extracting tumor information from the enormous quantity of data produced by MRI volumetric examination is very time-consuming, and precisely identifying a tumor along with its details by hand is complex. Hence, reliable automatic detection systems are vital. In this paper, a convolutional neural network-based automated brain tumor recognition approach is proposed to analyze MRI images and classify them into tumorous and non-tumorous classes. Various architectures, including AlexNet, VGG-16, GoogLeNet, and RNN, are explored and compared, and the paper focuses on tuning the hyperparameters of two of them, AlexNet and VGG-16. Experimental results on the BRATS 2013, BRATS 2015, and Open-i datasets with 621 images confirm that an accuracy of 98.67% is achieved using the AlexNet CNN for automatic brain tumor detection when testing on 125 images.
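The transfer-learning recipe — keep a pre-trained backbone frozen and fit only a small classification head on its extracted features — can be sketched with a logistic-regression head trained by gradient descent. This is a generic NumPy illustration of the approach, not the paper's AlexNet/VGG-16 pipeline; the toy features stand in for backbone activations.

```python
import numpy as np

def train_head(features, labels, lr=0.5, steps=500):
    """Transfer-learning sketch: the convolutional backbone is frozen,
    so only this logistic-regression head is fit on its extracted
    features. features: (N, F) backbone outputs; labels: (N,) in {0,1}."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(steps):
        z = features @ w + b
        p = 1.0 / (1.0 + np.exp(-z))            # sigmoid predictions
        # Gradient of the mean binary cross-entropy w.r.t. w and b.
        w -= lr * features.T @ (p - labels) / len(labels)
        b -= lr * np.mean(p - labels)
    return w, b
```

Because only `w` and `b` are updated, training is cheap even when the frozen backbone is large, which is the practical appeal of this fine-tuning regime.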
Affiliation(s)
- Vinayak K Bairagi
- Department of Electronics and Telecommunication, AISSMS Institute of Information Technology, Pune, India
- Pratima Purushottam Gumaste
- Department of Electronics and Telecommunication, JSPM's Jayawantrao Sawant College of Engineering, Pune, India
- Seema H Rajput
- Department of Electronics and Telecommunications, Cummins College of Engineering for Women, Savitribai Phule Pune University, Pune, India
- Chethan K S
- RV Institute of Technology and Management, Bangalore, India
10. Qiao J, Fan Y, Zhang M, Fang K, Li D, Wang Z. Ensemble framework based on attributes and deep features for benign-malignant classification of lung nodule. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104217]
11. Tomassini S, Sbrollini A, Covella G, Sernani P, Falcionelli N, Müller H, Morettini M, Burattini L, Dragoni AF. Brain-on-Cloud for automatic diagnosis of Alzheimer's disease from 3D structural magnetic resonance whole-brain scans. Comput Methods Programs Biomed 2022; 227:107191. [PMID: 36335750] [DOI: 10.1016/j.cmpb.2022.107191]
Abstract
BACKGROUND AND OBJECTIVE Alzheimer's disease accounts for approximately 70% of all dementia cases. Cortical and hippocampal atrophy caused by Alzheimer's disease can be appreciated easily on a T1-weighted structural magnetic resonance scan. Since timely therapeutic intervention during the initial stages of the syndrome has a positive impact on both disease progression and the quality of life of affected subjects, diagnosing Alzheimer's disease is crucial. Thus, this study develops a robust yet lightweight 3D framework, Brain-on-Cloud, dedicated to efficient learning of Alzheimer's disease-related features from 3D structural magnetic resonance whole-brain scans. It improves our recent convolutional long short-term memory-based framework by integrating a set of data handling techniques, tuning the model hyper-parameters, and evaluating diagnostic performance on independent test data. METHODS Four serial experiments were conducted on a scalable GPU cloud service; they were compared, and the hyper-parameters of the best experiment were tuned until the best-performing configuration was reached. In parallel, two branches were designed. In the first branch of Brain-on-Cloud, training, validation and testing were performed on OASIS-3. In the second branch, unenhanced data from ADNI-2 were employed as an independent test set, and the diagnostic performance of Brain-on-Cloud was evaluated to prove its robustness and generalization capability. Prediction scores were computed for each subject and stratified according to age, sex and mini mental state examination. RESULTS In its best configuration, Brain-on-Cloud discriminates Alzheimer's disease with an accuracy of 92% and 76%, sensitivity of 94% and 82%, and area under the curve of 96% and 92% on OASIS-3 and independent ADNI-2 test data, respectively.
CONCLUSIONS Brain-on-Cloud is a reliable, lightweight and easily reproducible framework for automatic diagnosis of Alzheimer's disease from 3D structural magnetic resonance whole-brain scans, performing well without segmenting the brain into its portions. Since it preserves brain anatomy, its application and diagnostic ability can be extended to other cognitive disorders. Due to its cloud nature, computational lightness and fast execution, it can also be applied in real-time diagnostic scenarios to provide prompt clinical decision support.
Affiliation(s)
- Selene Tomassini
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche (UnivPM), Ancona, Italy
- Agnese Sbrollini
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche (UnivPM), Ancona, Italy
- Giacomo Covella
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche (UnivPM), Ancona, Italy
- Paolo Sernani
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche (UnivPM), Ancona, Italy
- Nicola Falcionelli
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche (UnivPM), Ancona, Italy
- Henning Müller
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
- Micaela Morettini
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche (UnivPM), Ancona, Italy
- Laura Burattini
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche (UnivPM), Ancona, Italy
- Aldo Franco Dragoni
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche (UnivPM), Ancona, Italy
12. Nithiyaraj E, Selvaraj A. CTSC-Net: an effectual CT slice classification network to categorize organ and non-organ slices from a 3-D CT image. Neural Comput Appl 2022; 34:22141-22156. [PMID: 35990533] [PMCID: PMC9376041] [DOI: 10.1007/s00521-022-07701-8]
Abstract
Computed tomography (CT) is a non-invasive diagnostic imaging modality that reveals more insight into human organs than conventional X-rays. In general, the CT output is a 3-D image formed by combining multiple 2-D images, or slices. Not all slices provide significant information for detecting tumours: a 3-D CT image obtained from a scanner typically contains a large number of unwanted non-organ slices, and radiologists devote considerable time to selecting the organ slices from a 3-D CT image. Since the presence of a tumour is evident only in organ slices, radiologists must be cautious not to skip any of them. This work is evaluated on the LITS, 3DIRCADb and COVID-19 CT datasets, which collectively contain 22,435 organ slices and 53,661 non-organ slices, a heavily imbalanced distribution. There is a need for the automatic elimination of non-organ slices in 3-D CT volumes to assist physicians; hence, this work focuses on the automatic recognition of organ slices from 3-D CT volumes. In this paper, a new deep model, the computed tomography slice classification network (CTSC-Net), is proposed for classifying CT slices as organ or non-organ. The model is trained on 77,980 CT slices, validated on 9,748 slices and tested on 12,571 slices. Nine CNN architectures with different layer settings are trained and tested to arrive at the final optimal model. The performance measures are computed in terms of true positive rate, true negative rate, sensitivity, specificity and accuracy. The 20-layer CTSC-Net achieves a validation accuracy of 95.04% and an overall testing accuracy of 99.96%. The proposed model is compared to eight different pre-trained CNN models, and CTSC-Net surpasses all of them. The activation feature maps of different layers of the CTSC-Net are visualized to verify the discriminative features learned by the network. Hence, the proposed CTSC-Net can be employed as a computer-aided diagnosis tool to help physicians discard unnecessary non-organ slices from a 3-D CT volume and speed up the CT diagnosis process.
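The slice-level evaluation described above reduces to counting a 2×2 confusion matrix. A minimal sketch of the reported measures, assuming label 1 means "organ slice" (the function name and encoding are illustrative, not from the paper):

```python
import numpy as np

def slice_metrics(y_true, y_pred):
    """Binary organ/non-organ slice metrics (1 = organ slice, 0 = non-organ)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / y_true.size
    return sensitivity, specificity, accuracy
```

With such a skewed organ/non-organ ratio, accuracy alone is misleading, which is presumably why the paper reports sensitivity and specificity separately.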
Affiliation(s)
- Emerson Nithiyaraj
- Department of Electronics and Communication Engineering, Centre for Image Processing and Pattern Recognition, Mepco Schlenk Engineering College, Sivakasi, 626005 India
- Arivazhagan Selvaraj
- Department of Electronics and Communication Engineering, Centre for Image Processing and Pattern Recognition, Mepco Schlenk Engineering College, Sivakasi, 626005 India
13
Tomassini S, Falcionelli N, Sernani P, Sbrollini A, Morettini M, Burattini L, Dragoni AF. Cloud-YLung for Non-Small Cell Lung Cancer Histology Classification from 3D Computed Tomography Whole-Lung Scans. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1556-1560. [PMID: 36085720] [DOI: 10.1109/embc48229.2022.9871378]
Abstract
Non-Small Cell Lung Cancer (NSCLC) represents up to 85% of all malignant lung nodules, and adenocarcinoma and squamous cell carcinoma account for 90% of all NSCLC histotypes. The standard diagnostic procedure for NSCLC histotype characterization combines 3D Computed Tomography (CT), especially low-dose CT, with lung biopsy. Since lung biopsy is invasive and challenging (especially for deeply located lung cancers and those close to blood vessels or airways), non-invasive procedures for NSCLC histology classification are needed. Thus, this study proposes Cloud-YLung for NSCLC histology classification directly from 3D CT whole-lung scans. Data were selected from the openly accessible NSCLC-Radiomics dataset and a modular pipeline was designed. Automatic feature extraction and classification were accomplished by a Convolutional Long Short-Term Memory (ConvLSTM)-based neural network trained from scratch on a scalable GPU cloud service to ensure machine-independent reproducibility of the entire framework. Results show that Cloud-YLung discriminates both NSCLC histotypes well, achieving a test accuracy of 75% and an AUC of 84%. Cloud-YLung is not only free of lung nodule segmentation but is also the first approach to use a ConvLSTM-based neural network to automatically extract high-throughput features from 3D CT whole-lung scans and classify them. Clinical relevance: Cloud-YLung is a promising framework for non-invasively classifying NSCLC histotypes. Because it preserves the lung anatomy, its application could be extended to other pulmonary pathologies using 3D CT whole-lung scans.
14
Tomassini S, Falcionelli N, Sernani P, Burattini L, Dragoni AF. Lung nodule diagnosis and cancer histology classification from computed tomography data by convolutional neural networks: A survey. Comput Biol Med 2022; 146:105691. [PMID: 35691714] [DOI: 10.1016/j.compbiomed.2022.105691]
Abstract
Lung cancer is among the deadliest cancers. Beyond lung nodule classification and diagnosis, non-invasive systems that classify lung cancer histological types and subtypes may help clinicians make timely, targeted treatment decisions, with a positive impact on patients' comfort and survival rate. As convolutional neural networks have driven significant improvements in the accuracy of lung cancer diagnosis, this survey intends to: show the contribution of convolutional neural networks not only in identifying malignant lung nodules but also in classifying lung cancer histological types/subtypes directly from computed tomography data; point out the strengths and weaknesses of slice-based and scan-based approaches employing convolutional neural networks; and highlight the challenges and prospective solutions for successfully applying convolutional neural networks to such classification tasks. To this end, we conducted a comprehensive analysis of relevant Scopus-indexed studies on lung nodule diagnosis and cancer histology classification up to January 2022, dividing the investigation into convolutional neural network-based approaches fed with planar or volumetric computed tomography data. Although applying convolutional neural networks to lung nodule diagnosis and cancer histology classification is a valid strategy, several challenges arise, chiefly the lack of publicly accessible annotated data, together with the lack of reproducibility and clinical interpretability. We believe this survey will be helpful for future studies on lung nodule diagnosis and cancer histology classification prior to lung biopsy by means of convolutional neural networks.
Affiliation(s)
- Selene Tomassini
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy.
- Nicola Falcionelli
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy.
- Paolo Sernani
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy.
- Laura Burattini
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy.
- Aldo Franco Dragoni
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy.
15
Silva F, Pereira T, Neves I, Morgado J, Freitas C, Malafaia M, Sousa J, Fonseca J, Negrão E, Flor de Lima B, Correia da Silva M, Madureira AJ, Ramos I, Costa JL, Hespanhol V, Cunha A, Oliveira HP. Towards Machine Learning-Aided Lung Cancer Clinical Routines: Approaches and Open Challenges. J Pers Med 2022; 12:480. [PMID: 35330479] [PMCID: PMC8950137] [DOI: 10.3390/jpm12030480]
Abstract
Advancements in the development of computer-aided decision (CAD) systems for clinical routines provide unquestionable benefits in connecting human medical expertise with machine intelligence to achieve better-quality healthcare. Given the high incidence and mortality associated with lung cancer, the most accurate clinical procedures are needed; thus, using artificial intelligence (AI) tools for decision support is becoming a closer reality. At every stage of the lung cancer clinical pathway, specific obstacles are identified that motivate the application of innovative AI solutions. This work provides a comprehensive review of the most recent research dedicated to the development of CAD tools using computed tomography images for lung cancer-related tasks. We discuss the major challenges and provide critical perspectives on future directions. Although we focus on lung cancer in this review, we also provide a clearer definition of the path used to integrate AI into healthcare, emphasizing fundamental research points that are crucial for overcoming current barriers.
Affiliation(s)
- Francisco Silva
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
- Tania Pereira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Inês Neves
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- ICBAS—Abel Salazar Biomedical Sciences Institute, University of Porto, 4050-313 Porto, Portugal
- Joana Morgado
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Cláudia Freitas
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Mafalda Malafaia
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Joana Sousa
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- João Fonseca
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Eduardo Negrão
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- Beatriz Flor de Lima
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- Miguel Correia da Silva
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- António J. Madureira
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Isabel Ramos
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- José Luis Costa
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135 Porto, Portugal
- IPATIMUP—Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135 Porto, Portugal
- Venceslau Hespanhol
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- António Cunha
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- UTAD—University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal
- Hélder P. Oliveira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
16
Yan J, Xue X, Gao C, Guo Y, Wu L, Zhou C, Chen F, Xu M. Predicting the Ki-67 proliferation index in pulmonary adenocarcinoma patients presenting with subsolid nodules: construction of a nomogram based on CT images. Quant Imaging Med Surg 2022; 12:642-652. [PMID: 34993108] [DOI: 10.21037/qims-20-1385]
Abstract
BACKGROUND The Ki-67 proliferation index (PI) reflects the proliferation of cells, but the conventional methods for acquiring it, such as surgery and biopsy, are generally invasive. This study investigated a potential noninvasive method of predicting the Ki-67 PI in patients with lung adenocarcinoma (LUAD) presenting with subsolid nodules. METHODS This retrospective study enrolled 153 patients who presented with pulmonary adenocarcinoma appearing as subsolid nodules (SSNs) on computed tomography (CT) images between January 2015 and December 2018. The presence of LUAD with SSNs was confirmed by histopathology. Of these participants, 107 patients were from institution 1 and were divided into a training cohort and an internal validation cohort in a 7:3 ratio; the other 46 patients, from institution 2, were enrolled as an external validation cohort. All patients underwent conventional CT scans with thin-slice (≤1.25 mm) reconstruction, and 1,316 quantitative radiomic features were extracted from the CT images for each nodule. Minimum redundancy maximum relevance and the least absolute shrinkage and selection operator (LASSO) were used for feature selection, and the radiomics signature was constructed from the selected features. Clinical features were examined using univariate logistic regression analysis. The nomogram was developed from the radiomics signature and the independent clinical risk factors. The DeLong test and t test were employed for statistical analysis, and the performance of the different models was assessed by the receiver operating characteristic (ROC) curve. RESULTS The diameter of the nodules [odds ratio (OR) =1.17; P=0.003] was identified as an independent predictive parameter. Both the radiomics signature and the nomogram showed good predictive probability for Ki-67 expression. For the radiomics signature, the area under the ROC curve (AUC) for the training, internal validation, and external validation cohorts was 0.86 [95% confidence interval (CI): 0.77 to 0.95], 0.81 (95% CI: 0.64 to 0.98), and 0.77 (95% CI: 0.62 to 0.91), respectively; for the nomogram, the corresponding AUCs were 0.86 (95% CI: 0.77 to 0.95), 0.80 (95% CI: 0.64 to 0.97), and 0.79 (95% CI: 0.65 to 0.94). There were no statistical differences in the AUCs between the radiomics signature and the radiomics nomogram in the training cohort or the validation cohorts (all P>0.05). CONCLUSIONS The nomogram provides a novel strategy for determining the Ki-67 PI in predicting the proliferation of subsolid nodules, which may benefit the management of patients with SSNs.
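The AUC values reported above are equivalent to the Mann-Whitney U statistic: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. A minimal sketch (the function name is illustrative; this is the standard statistic, not the authors' code):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC as the Mann-Whitney statistic; ties between a positive
    and a negative score count as 0.5."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

This pairwise form makes clear why an AUC of 0.86 means the signature ranks a malignant-proliferation case above a low-proliferation case 86% of the time.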
Affiliation(s)
- Jing Yan
- The First Clinical Medical College of Zhejiang Chinese Medical University, Hangzhou, China; Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
- Xing Xue
- Department of Radiology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Chen Gao
- The First Clinical Medical College of Zhejiang Chinese Medical University, Hangzhou, China; Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
- Yifan Guo
- The First Clinical Medical College of Zhejiang Chinese Medical University, Hangzhou, China; Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
- Linyu Wu
- The First Clinical Medical College of Zhejiang Chinese Medical University, Hangzhou, China; Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
- Changyu Zhou
- The First Clinical Medical College of Zhejiang Chinese Medical University, Hangzhou, China; Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
- Feng Chen
- Department of Radiology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Maosheng Xu
- The First Clinical Medical College of Zhejiang Chinese Medical University, Hangzhou, China; Department of Radiology, The First Affiliated Hospital of Zhejiang Chinese Medical University, Hangzhou, China
17
Qian L. Research on complex attribute big data classification based on iterative fuzzy clustering algorithm. Web Intelligence 2021. [DOI: 10.3233/web-210463]
Abstract
To overcome the low classification accuracy of traditional methods, this paper proposes a new classification method for complex attribute big data based on an iterative fuzzy clustering algorithm. First, principal component analysis and kernel local Fisher discriminant analysis are used to reduce the dimensionality of the data. Next, a Bloom filter data structure is introduced to eliminate redundancy in the dimensionality-reduced data. The de-duplicated data are then classified in parallel by the iterative fuzzy clustering algorithm, completing the classification. Finally, simulation results show that the accuracy, the normalized mutual information index, and the Richter's index of the proposed method are close to 1, the classification accuracy is high, and the RDV value is low, indicating that the proposed method has high classification effectiveness and fast convergence.
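The iterative fuzzy clustering step alternates between updating cluster centres and soft memberships. A plain fuzzy c-means sketch illustrates the kind of iteration the paper builds on (this is the textbook algorithm, not the authors' exact parallel variant; names and defaults are illustrative):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means: alternate centre updates and membership updates.
    X is (n_samples, n_features); m > 1 is the fuzzifier."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # random soft memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))       # inverse-distance memberships
        U /= U.sum(axis=1, keepdims=True)
    return centers, U
```

Each sample ends up with a membership vector over clusters rather than a hard label, which is what allows the method to handle overlapping complex-attribute classes.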
Affiliation(s)
- Li Qian
- School of Digital Information Technology, Zhejiang Technical Institute of Economics, Hangzhou 310018, China
18
Classification of Benign and Malignant Lung Nodules Based on Deep Convolutional Network Feature Extraction. J Healthc Eng 2021; 2021:8769652. [PMID: 34745513] [PMCID: PMC8566059] [DOI: 10.1155/2021/8769652]
Abstract
With the rapid development of detection technology, CT imaging has been widely used in the early clinical diagnosis of lung nodules. However, accurately assessing the nature of a nodule remains challenging because radiologist readings are subjective. With the increasing amount of publicly available lung image data, it has become possible to use convolutional neural networks for benign/malignant classification of lung nodules. However, as network depth increases, training methods based on gradient descent often suffer from gradient dispersion. Therefore, we propose a novel deep convolutional network approach to classify the benignity and malignancy of lung nodules. First, we segment and extract the lung nodule images and apply zero-phase component analysis (ZCA) whitening. Then, a multilayer perceptron is introduced into the structure to construct a deep convolutional network. Finally, minibatch stochastic gradient descent with a momentum coefficient is used to fine-tune the deep convolutional network and avoid gradient dispersion. The 750 lung nodules in the lung image database are used for experimental verification, and the classification accuracy of the proposed method reaches 96.0%. The experimental results show that the proposed method provides an objective and efficient aid for classifying benign and malignant lung nodules in medical images.
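The ZCA whitening preprocessing mentioned above decorrelates pixel features to unit variance while keeping the result close to the original image space. A minimal sketch of the standard transform (function name and epsilon are illustrative; the paper's exact preprocessing details are not specified here):

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA whitening of row-vector samples X (n_samples, n_features):
    rotate into the eigenbasis of the covariance, rescale to unit
    variance, then rotate back so the output resembles the input."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    w, V = np.linalg.eigh(cov)                       # eigendecomposition
    W = V @ np.diag(1.0 / np.sqrt(w + eps)) @ V.T    # symmetric ZCA transform
    return Xc @ W
```

Unlike plain PCA whitening, the final rotation back (the trailing `V.T`) preserves spatial structure, which is why ZCA is preferred for image patches.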
19
Wang SH, Du J, Xu H, Yang D, Ye Y, Chen Y, Zhu Y, Ba T, Yuan C, Yang ZH. Automatic discrimination of different sequences and phases of liver MRI using a dense feature fusion neural network: a preliminary study. Abdom Radiol (NY) 2021; 46:4576-4587. [PMID: 34057565] [DOI: 10.1007/s00261-021-03142-4]
Abstract
PURPOSE To develop and validate a dense feature fusion neural network (DFuNN) to automatically recognize different sequences and phases of liver magnetic resonance imaging (MRI). MATERIALS AND METHODS In total, 3,869 sequences and phases from 384 liver MRI examinations, divided into training/validation (n = 2,886 sequences from 287 patients) and test (n = 983 sequences from 97 patients) sets, were used in this retrospective study. Ten unenhanced sequences and enhanced phases were included. Manual sequence recognition, performed in a consensus reading by two radiologists (20 and 10 years of experience), served as the reference standard. Sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve (AUC) were calculated to evaluate the performance of the DFuNN on the unseen test set. Finally, we evaluated the factors affecting model precision. RESULTS A fusion block improved the performance of the DFuNN. With the fusion block, the DFuNN achieved good recognition performance for both complete and incomplete sequences and phases in the test set: for complete sequence and phase inputs, average sensitivity ranged from 88.06% to 100%, average specificity from 99.12% to 99.94%, and median accuracy from 98.02% to 99.95%. The prediction accuracy for patients without cirrhosis was significantly higher than that for patients with cirrhosis (P = 0.0153); no significant difference was found across other factors. CONCLUSION The DFuNN can automatically and accurately identify specific unenhanced MRI sequences and enhanced MRI phases.
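The per-sequence sensitivity and specificity ranges above come from treating the ten-class recognition problem one class versus the rest. A sketch of that bookkeeping from a multi-class confusion matrix (the function name is illustrative, not from the paper):

```python
import numpy as np

def per_class_metrics(conf):
    """One-vs-rest sensitivity and specificity per class from a confusion
    matrix with conf[i, j] = count of true class i predicted as class j."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    tp = np.diag(conf)
    fn = conf.sum(axis=1) - tp          # true class i, predicted elsewhere
    fp = conf.sum(axis=0) - tp          # other classes predicted as i
    tn = total - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)
```

Reporting the range across classes, as the abstract does, exposes the weakest sequence type rather than hiding it in an overall average.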
Affiliation(s)
- Shu-Hui Wang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 Yong An Road, Xicheng District, Beijing, 100050, China
- Department of Radiology, Weihai Municipal Hospital, Weihai, Shandong Province, China
- Jing Du
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 Yong An Road, Xicheng District, Beijing, 100050, China
- Hui Xu
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 Yong An Road, Xicheng District, Beijing, 100050, China
- Dawei Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 Yong An Road, Xicheng District, Beijing, 100050, China
- Yuxiang Ye
- Shanghai SenseTime Intelligent Technology Co. Ltd, Beijing, China
- Yinan Chen
- Shanghai SenseTime Intelligent Technology Co. Ltd, Beijing, China
- Yajing Zhu
- Shanghai SenseTime Intelligent Technology Co. Ltd, Beijing, China
- Te Ba
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 Yong An Road, Xicheng District, Beijing, 100050, China
- Chunwang Yuan
- Center of Interventional Oncology and Liver Diseases, Beijing Youan Hospital, Capital Medical University, Beijing, China
- Zheng-Han Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 Yong An Road, Xicheng District, Beijing, 100050, China
20
Zhao Y, Ma J, Peng Z, Xia H, Wan H. Pulmonary Nodule Detection Based on Three-Dimensional Multiscale Convolutional Neural Network with Channel and Spatial Attention. J Med Imaging Health Inform 2021. [DOI: 10.1166/jmihi.2021.3814]
Abstract
Early screening for pulmonary nodules is currently an important means of reducing lung cancer mortality, and in recent years three-dimensional convolutional neural networks have achieved great success in pulmonary nodule detection. This paper proposes a pulmonary nodule detection method based on a three-dimensional multiscale convolutional neural network with channel and spatial attention. First, a multiscale module is designed to extract image features at different scales. Second, a channel and spatial attention module is designed to mine the correlation information between features from the spatial and channel perspectives. The extracted features are then sent to a pyramid-like fusion mechanism, so that they contain both deep semantic information and shallow positional information, which is conducive to object positioning and bounding-box regression. Experiments on the LUng Nodule Analysis 2016 (LUNA16) dataset show an average free-response receiver operating characteristic (FROC) score of 0.846; compared with other current advanced methods, the method is competitive and effective.
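The average FROC score on LUNA16 is conventionally the mean sensitivity at seven false-positive-per-scan operating points. A sketch of that scoring step, reading sensitivities off a detector's FROC curve by interpolation (the function name is illustrative; this is the standard LUNA16 convention, not the authors' code):

```python
import numpy as np

def froc_score(fps_per_scan, sens, points=(0.125, 0.25, 0.5, 1, 2, 4, 8)):
    """Mean sensitivity at the seven LUNA16 FP/scan operating points,
    given a monotone (fps_per_scan, sensitivity) curve."""
    fps = np.asarray(fps_per_scan, dtype=float)
    sens = np.asarray(sens, dtype=float)
    return float(np.mean(np.interp(points, fps, sens)))
```

A score of 0.846 therefore means the detector finds, on average, 84.6% of nodules across operating points ranging from one false alarm per eight scans to eight per scan.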
Affiliation(s)
- Yudu Zhao
- Key Laboratory of Medical Physics and Image Processing, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
- Jun Ma
- Key Laboratory of Medical Physics and Image Processing, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
- Zhenwei Peng
- Key Laboratory of Medical Physics and Image Processing, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
- Hao Xia
- Key Laboratory of Medical Physics and Image Processing, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
- Honglin Wan
- Key Laboratory of Medical Physics and Image Processing, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
21
Adaptive Aggregated Attention Network for Pulmonary Nodule Classification. Appl Sci (Basel) 2021. [DOI: 10.3390/app11020610]
Abstract
Lung cancer has one of the highest cancer mortality rates in the world and threatens people's health; timely and accurate diagnosis can greatly reduce the number of deaths, so an accurate diagnosis system is extremely important. Existing methods achieve significant performance in lung cancer diagnosis but fall short on fine-grained representations. In this paper, we propose a novel attentive method to differentiate malignant and benign pulmonary nodules. First, a residual attention network (RAN) and a squeeze-and-excitation network (SEN) are utilized to extract spatial and contextual features. Second, a novel multi-scale attention network (MSAN) is proposed to capture multi-scale attention features automatically; the MSAN integrates the advantages of the spatial and contextual attention mechanisms, which are important for capturing the salient features of nodules. Finally, the gradient boosting machine (GBM) algorithm is used to differentiate malignant and benign nodules. In a series of experiments on the Lung Image Database Consortium image collection (LIDC-IDRI), the method achieves an accuracy of 91.9%, a sensitivity of 91.3%, a false positive rate of 8.0%, and an F1-score of 91.0%, outperforming state-of-the-art methods with respect to accuracy, false positive rate, and F1-score.
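The squeeze-and-excitation component mentioned above reweights feature channels by a learned gate: global average pool, a small bottleneck of two dense layers, and a sigmoid. A minimal single-map sketch (the weight matrices `W1`, `W2` stand in for learned parameters and are assumptions, not values from the paper):

```python
import numpy as np

def squeeze_excite(feat, W1, W2):
    """Squeeze-and-excitation on a (C, H, W) feature map:
    squeeze to per-channel statistics, excite through a ReLU bottleneck,
    then gate each channel with a sigmoid weight."""
    z = feat.mean(axis=(1, 2))                 # squeeze: (C,)
    h = np.maximum(W1 @ z, 0.0)                # bottleneck, ReLU
    s = 1.0 / (1.0 + np.exp(-(W2 @ h)))        # sigmoid channel gates
    return feat * s[:, None, None]             # rescale channels
```

The gate lets the network emphasize channels that respond to nodule texture and suppress background-dominated ones, which is the "contextual" half of the attention design.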
22
Raja H, Akram MU, Shaukat A, Khan SA, Alghamdi N, Khawaja SG, Nazir N. Extraction of Retinal Layers Through Convolution Neural Network (CNN) in an OCT Image for Glaucoma Diagnosis. J Digit Imaging 2020; 33:1428-1442. [PMID: 32968881] [DOI: 10.1007/s10278-020-00383-5]
Abstract
Glaucoma is a progressive and deteriorating optic neuropathy that leads to visual field defects. Because the damage caused by glaucoma is irreversible, early and timely diagnosis is of significant importance. The proposed system employs a convolutional neural network (CNN) for automatic segmentation of the retinal layers; the inner limiting membrane (ILM) and retinal pigmented epithelium (RPE) are used to calculate the cup-to-disc ratio (CDR) for glaucoma diagnosis. The system uses structure tensors to extract candidate layer pixels, and a patch around each candidate pixel is extracted and classified by the CNN. The framework is based on the VGG-16 architecture for feature extraction and classification of retinal layer pixels: the output feature map is fed into a softmax layer, which produces a probability map deciding whether the central pixel of each patch belongs to the ILM, the RPE, or the background. Graph search theory refines the extracted layers by interpolating missing points, and the extracted ILM and RPE are finally used to compute the CDR value and diagnose glaucoma. The proposed system is validated on a local dataset of optical coherence tomography images from 196 patients, including normal and glaucoma subjects; the dataset contains manually annotated ILM and RPE layers, manually extracted patches for ILM, RPE, and background pixels, CDR values, and the final glaucoma finding. The system extracts the ILM and RPE with small mean absolute errors of 6.03 and 5.56, respectively, finds the CDR value within an average range of ±0.09 compared with a glaucoma expert, and achieves average sensitivity, specificity, and accuracy of 94.6%, 94.07%, and 94.68%, respectively.
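Once the ILM and RPE boundaries are extracted, the CDR reduces to comparing the cup width with the disc width. The sketch below is a simplified illustration, not the paper's method: it assumes the RPE break (encoded here as NaN columns) marks the disc, and uses a reference plane a fixed `offset` above the RPE to delimit the cup; both conventions are assumptions for the example.

```python
import numpy as np

def cup_to_disc_ratio(ilm_y, rpe_y, offset=10):
    """Toy CDR estimate from per-column layer boundary heights (y grows
    downward). NaNs in rpe_y mark the disc opening; ILM columns rising
    above a reference plane `offset` pixels over the RPE form the cup."""
    ilm_y = np.asarray(ilm_y, dtype=float)
    rpe_y = np.asarray(rpe_y, dtype=float)
    disc_cols = np.where(np.isnan(rpe_y))[0]        # columns inside the disc
    disc_w = disc_cols[-1] - disc_cols[0] + 1
    ref = np.nanmean(rpe_y) - offset                # reference plane
    cup_cols = disc_cols[ilm_y[disc_cols] < ref]    # ILM above the plane
    cup_w = 0 if cup_cols.size == 0 else cup_cols[-1] - cup_cols[0] + 1
    return cup_w / disc_w
```

A larger CDR (cup occupying more of the disc) is the structural sign of glaucomatous damage that the system's ±0.09 agreement with the expert refers to.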
Affiliation(s)
- Hina Raja
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
- M Usman Akram
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
- Arslan Shaukat
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
- Shoab Ahmed Khan
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
- Norah Alghamdi
- Department of Computer Science, Princess Nora Bint Abdurahman University, Riyadh, Saudi Arabia
- Sajid Gul Khawaja
- Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
- Noman Nazir
- Armed Forces Institute of Ophthalmology, Rawalpindi, Pakistan