1. Song W, Tang F, Marshall H, Fong KM, Liu F. A multiscale 3D network for lung nodule detection using flexible nodule modeling. Med Phys 2024. PMID: 38949577. DOI: 10.1002/mp.17283.
Abstract
BACKGROUND: Lung cancer is the most common type of cancer, and detecting it at an early stage can reduce mortality. Pulmonary nodules may represent early cancer and can be identified on computed tomography (CT) scans; malignant risk can be estimated from attributes such as size, shape, location, and density.
PURPOSE: Deep learning algorithms have achieved remarkable advances in this domain compared with traditional machine learning methods. Nevertheless, many existing anchor-based deep learning algorithms are sensitive to the predefined anchor-box configuration and require manual tuning to obtain optimal results, while current anchor-free nodule detection methods typically adopt fixed-size nodule models such as cubes or spheres.
METHODS: To address these challenges, we propose a multiscale 3D anchor-free deep learning network (M3N) for pulmonary nodule detection that leverages adjustable nodule modeling (ANM). Within this framework, ANM represents target objects anisotropically, and a novel point selection strategy (PSS) accelerates the learning of this anisotropic representation. We further incorporate a composite loss function that combines the conventional L2 loss with a cosine similarity loss, enabling M3N to learn the three-dimensional intensity distribution of nodules.
RESULTS: M3N achieves a 90.6% competition performance metric (CPM) over seven predefined false-positive rates per scan on the LUNA16 dataset, which appears to exceed the results reported for other state-of-the-art deep learning networks in their respective publications. Individual test results also show that M3N produces more accurate, adaptive bounding boxes around the contours of target nodules.
CONCLUSIONS: The new nodule detection system reduces reliance on prior knowledge, such as the typical object size in a dataset, which should enhance its robustness and versatility. Unlike traditional nodule modeling techniques, ANM aligns more closely with the morphological characteristics of nodules. Time consumption and detection results indicate promising efficiency and accuracy, which should be validated in clinical settings.
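The composite loss mentioned in this abstract (a conventional L2 term combined with a cosine similarity term over the nodule's 3D intensity distribution) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the weighting factor `alpha` and the flattened-patch representation are assumptions.

```python
import math

def l2_loss(pred, target):
    # Mean squared (L2) distance between predicted and target intensity vectors.
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def cosine_similarity(pred, target):
    dot = sum(p * t for p, t in zip(pred, target))
    norm_p = math.sqrt(sum(p * p for p in pred))
    norm_t = math.sqrt(sum(t * t for t in target))
    return dot / (norm_p * norm_t + 1e-12)

def composite_loss(pred, target, alpha=0.5):
    # The L2 term penalizes per-voxel intensity errors; the (1 - cosine) term
    # penalizes mismatch in the shape of the intensity distribution.
    return alpha * l2_loss(pred, target) + (1 - alpha) * (1 - cosine_similarity(pred, target))

# Hypothetical flattened 3D intensity patches.
loss = composite_loss([0.2, 0.8, 0.5, 0.1], [0.25, 0.75, 0.55, 0.05])
```

For identical patches both terms vanish, so the loss approaches zero; misaligned distributions are penalized even when their magnitudes are close.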
Affiliation(s)
- Wenjia Song: School of Electrical Engineering and Computer Science, The University of Queensland, Brisbane, Australia
- Fangfang Tang: School of Electrical Engineering and Computer Science, The University of Queensland, Brisbane, Australia
- Henry Marshall: UQ Thoracic Research Centre, Faculty of Medicine, The University of Queensland, Brisbane, Australia; Department of Thoracic Medicine, The Prince Charles Hospital, Brisbane, Australia
- Kwun M Fong: UQ Thoracic Research Centre, Faculty of Medicine, The University of Queensland, Brisbane, Australia; Department of Thoracic Medicine, The Prince Charles Hospital, Brisbane, Australia
- Feng Liu: School of Electrical Engineering and Computer Science, The University of Queensland, Brisbane, Australia
2. Song M, Wang J, Yu Z, Wang J, Yang L, Lu Y, Li B, Wang X, Wang X, Huang Q, Li Z, Kanellakis NI, Liu J, Wang J, Wang B, Yang J. PneumoLLM: Harnessing the power of large language model for pneumoconiosis diagnosis. Med Image Anal 2024; 97:103248. PMID: 38941859. DOI: 10.1016/j.media.2024.103248.
Abstract
The conventional pretraining-and-finetuning paradigm, while effective for common diseases with ample data, faces challenges in diagnosing data-scarce occupational diseases such as pneumoconiosis. Recently, large language models (LLMs) have exhibited unprecedented ability to conduct multiple tasks in dialogue, creating opportunities for diagnosis. A common strategy might involve using adapter layers for vision-language alignment and performing diagnosis in a dialogic manner. Yet this approach often requires optimizing extensive learnable parameters in the text branch and the dialogue head, potentially diminishing the LLM's efficacy, especially with limited training data. In our work, we innovate by eliminating the text branch and substituting the dialogue head with a classification head, a more effective way to harness LLMs for diagnosis with fewer learnable parameters. Furthermore, to balance retaining detailed image information with progressing toward an accurate diagnosis, we introduce the contextual multi-token engine, which adaptively generates diagnostic tokens. Additionally, we propose the information emitter module, which unidirectionally emits information from image tokens to diagnosis tokens. Comprehensive experiments validate the superiority of our methods.
Affiliation(s)
- Meiyue Song: Institute of Basic Medical Sciences Chinese Academy of Medical Sciences, School of Basic Medicine Peking Union Medical College, Beijing, 100005, China; State Key Laboratory of Respiratory Health and Multimorbidity, Beijing, 100005, China
- Jiarui Wang: School of Automation, Northwestern Polytechnical University, Shaanxi, Xi'an 710072, China
- Zhihua Yu: Jinneng Holding Coal Industry Group Co. Ltd Occupational Disease Precaution Clinic, Shanxi, 037001, China
- Jiaxin Wang: School of Medicine, Tsinghua University, Beijing, 100084, China
- Le Yang: School of Electronics and Control Engineering, Chang'an University, Shaanxi, Xi'an 710064, China
- Yuting Lu: School of Automation, Northwestern Polytechnical University, Shaanxi, Xi'an 710072, China
- Baicun Li: Center of Respiratory Medicine, China-Japan Friendship Hospital, National Center for Respiratory Medicine, Institute of Respiratory Medicine, Chinese Academy of Medical Sciences, National Clinical Research Center for Respiratory Diseases, Beijing, 100020, China
- Xue Wang: Department of Respiratory, the Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, 150086, China; Internal Medicine, Harbin Medical University, Harbin, Heilongjiang, 150081, China
- Xiaoxu Wang: School of Automation, Northwestern Polytechnical University, Shaanxi, Xi'an 710072, China
- Qinghua Huang: School of Artificial Intelligence, OPtics and ElectroNics (iOPEN), Northwestern Polytechnical University, Xi'an 710072, China
- Zhijun Li: Translational Research Center, Shanghai YangZhi Rehabilitation Hospital (Shanghai Sunshine Rehabilitation Center), Shanghai 201619, China; School of Mechanical Engineering, Tongji University, Shanghai 201804, China
- Nikolaos I Kanellakis: Laboratory of Pleural and Lung Cancer Translational Research, CAMS Oxford Institute, Nuffield Department of Medicine, University of Oxford, Oxford, UK; Oxford Centre for Respiratory Medicine, Churchill Hospital, Oxford University Hospitals NHS Foundation Trust, Oxford, UK; National Institute for Health Research Oxford Biomedical Research Centre, University of Oxford, Oxford, UK
- Jiangfeng Liu: Institute of Basic Medical Sciences Chinese Academy of Medical Sciences, School of Basic Medicine Peking Union Medical College, Beijing, 100005, China; Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100144, China; State Key Laboratory of Common Mechanism Research for Major Diseases, Beijing, 100005, China
- Jing Wang: Institute of Basic Medical Sciences Chinese Academy of Medical Sciences, School of Basic Medicine Peking Union Medical College, Beijing, 100005, China; State Key Laboratory of Respiratory Health and Multimorbidity, Beijing, 100005, China
- Binglu Wang: School of Automation, Northwestern Polytechnical University, Shaanxi, Xi'an 710072, China
- Juntao Yang: Institute of Basic Medical Sciences Chinese Academy of Medical Sciences, School of Basic Medicine Peking Union Medical College, Beijing, 100005, China; Plastic Surgery Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100144, China; State Key Laboratory of Common Mechanism Research for Major Diseases, Beijing, 100005, China
3. Kim J, Li Y, Shin BS. 3D-DGGAN: A Data-Guided Generative Adversarial Network for High Fidelity in Medical Image Generation. IEEE J Biomed Health Inform 2024; 28:2904-2915. PMID: 38416610. DOI: 10.1109/jbhi.2024.3367375.
Abstract
Three-dimensional images are frequently used in medical imaging research for classification, segmentation, and detection. However, the limited availability of 3D images hinders research progress due to network training difficulties. Generative methods have been proposed to create medical images using AI techniques. Nevertheless, 2D approaches have difficulty dealing with 3D anatomical structures, which can result in discontinuities between slices. To mitigate these discontinuities, several 3D generative networks have been proposed. However, the scarcity of available 3D images makes training these networks with limited samples inadequate for producing high-fidelity 3D images. We propose a data-guided generative adversarial network to provide high fidelity in 3D image generation. The generator creates fake images with noise using a reference code obtained by extracting features from real images. The generator also creates decoded images using the reference code without noise; these decoded images are compared with the real images to evaluate the fidelity of the reference code. This generation process can create high-fidelity 3D images from only a small amount of real training data. Additionally, our method employs three types of discriminators: volume (evaluates all the slices), slab (evaluates a set of consecutive slices), and slice (evaluates randomly selected slices). The proposed discriminators enhance fidelity by differentiating between real and fake images based on detailed characteristics. Results from our method are compared with those of existing methods using quantitative metrics such as the Fréchet inception distance and maximum mean discrepancy. The results demonstrate that our method produces more realistic 3D images than existing methods.
4. Gao Z, Guo Y, Wang G, Chen X, Cao X, Zhang C, An S, Xu F. Robust deep learning from incomplete annotation for accurate lung nodule detection. Comput Biol Med 2024; 173:108361. PMID: 38569236. DOI: 10.1016/j.compbiomed.2024.108361.
Abstract
Deep learning plays a significant role in the detection of pulmonary nodules in low-dose computed tomography (LDCT) scans, contributing to the diagnosis and treatment of lung cancer. Nevertheless, its effectiveness often relies on the availability of extensive, meticulously annotated datasets. In this paper, we explore the use of an incompletely annotated dataset for pulmonary nodule detection and introduce the FULFIL (Forecasting Uncompleted Labels For Inexpensive Lung nodule detection) algorithm as an innovative approach. By instructing annotators to label only the nodules they are most confident about, without requiring complete coverage, annotation costs can be substantially reduced. This approach, however, yields an incompletely annotated dataset, which poses challenges for training deep learning models. Within FULFIL, a graph convolutional network (GCN) discovers relationships between annotated and unannotated nodules to self-adaptively complete the annotation, while a teacher-student framework performs self-adaptive learning on the completed dataset. Furthermore, we designed a dual-views loss that leverages different data perspectives, helping the model acquire robust features and enhance generalization. In experiments on the LUng Nodule Analysis (LUNA) dataset, FULFIL achieved a sensitivity of 0.574 at 0.125 false positives per scan (FPs/scan) with only 10% instance-level nodule annotations, outperforming comparative methods by 7.00%. Comparisons between our model and human experts on the test dataset demonstrate that the model achieves performance comparable to that of human experts.
These comprehensive results demonstrate that FULFIL can effectively leverage an incompletely annotated pulmonary nodule dataset to develop a robust deep learning model, making it a promising tool for assisting lung nodule detection.
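For context, the sensitivity-at-FPs/scan figures quoted in these abstracts come from FROC analysis; LUNA16's competition performance metric (CPM) is the average sensitivity at 1/8, 1/4, 1/2, 1, 2, 4, and 8 false positives per scan. A minimal sketch of that computation follows; it ignores per-nodule de-duplication of multiple hits, which a full FROC evaluation requires.

```python
def froc_sensitivity(scored_hits, n_nodules, n_scans, fp_rate):
    """Sensitivity at a given false-positive rate per scan.

    scored_hits: list of (confidence, is_true_positive) over all detections.
    """
    allowed_fps = fp_rate * n_scans
    fps = tps = 0
    # Walk candidates from highest to lowest confidence, stopping once the
    # false-positive budget for this operating point is exhausted.
    for score, is_tp in sorted(scored_hits, reverse=True):
        if is_tp:
            tps += 1
        else:
            fps += 1
            if fps > allowed_fps:
                break
    return tps / n_nodules

def cpm(scored_hits, n_nodules, n_scans):
    # Mean sensitivity at the seven LUNA16 operating points.
    rates = [0.125, 0.25, 0.5, 1, 2, 4, 8]
    return sum(froc_sensitivity(scored_hits, n_nodules, n_scans, r)
               for r in rates) / len(rates)
```

A detector that finds every nodule before its first false positive exceeds 1/8 FPs per scan scores a CPM of 1.0; a strong false positive ranked above a true positive only hurts the strictest operating points.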
Affiliation(s)
- Zebin Gao: School of Information Science and Technology, Fudan University, Shanghai 200438, China
- Yuchen Guo: Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Guoxin Wang: JD Health International Inc, Beijing 100176, China
- Xiangru Chen: Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou 311100, China
- Xuyang Cao: JD Health International Inc, Beijing 100176, China
- Chao Zhang: JD Health International Inc, Beijing 100176, China
- Shan An: JD Health International Inc, Beijing 100176, China
- Feng Xu: School of Software, Tsinghua University, Beijing 100084, China
5. Assis Y, Liao L, Pierre F, Anxionnat R, Kerrien E. Intracranial aneurysm detection: an object detection perspective. Int J Comput Assist Radiol Surg 2024. PMID: 38632166. DOI: 10.1007/s11548-024-03132-z.
Abstract
PURPOSE: Intracranial aneurysm detection from 3D time-of-flight magnetic resonance angiography images is a problem of increasing clinical importance. Recently, a series of methods have shown promising performance by using segmentation neural networks. However, these methods may be less relevant in clinical settings, where diagnostic decisions rely on detecting objects rather than segmenting them.
METHODS: We introduce a 3D single-stage object detection method tailored to small objects such as aneurysms. Our anchor-free method incorporates fast data annotation, adapted data sampling and generation to address the class-imbalance problem, and spherical representations for improved detection.
RESULTS: A comprehensive evaluation compared our method with the state-of-the-art SCPM-Net, nnDetection, and nnUNet baselines on two datasets comprising 402 subjects, using adapted object detection metrics. Our method exhibited comparable or superior performance, with an average precision of 78.96%, a sensitivity of 86.78%, and 0.53 false positives per case.
CONCLUSION: Our method significantly reduces detection complexity compared with existing methods and highlights the advantages of object detection over segmentation-based approaches for aneurysm detection. It also holds potential for application to other small object detection problems.
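A spherical representation like the one mentioned above admits a closed-form overlap measure: the intersection of two spheres is a lens whose volume has a standard formula, so a sphere-to-sphere IoU can stand in for box IoU when matching detections to ground truth. The sketch below uses that standard geometry; the exact representation and matching criterion used by the authors are not specified in the abstract.

```python
import math

def sphere_intersection_volume(r1, r2, d):
    # Standard lens volume for two spheres with radii r1, r2 at center distance d.
    if d >= r1 + r2:
        return 0.0                                   # disjoint spheres
    if d <= abs(r1 - r2):
        r = min(r1, r2)                              # one sphere contains the other
        return 4.0 / 3.0 * math.pi * r ** 3
    return (math.pi * (r1 + r2 - d) ** 2
            * (d ** 2 + 2 * d * (r1 + r2) - 3 * (r1 - r2) ** 2) / (12 * d))

def sphere_iou(c1, r1, c2, r2):
    """Intersection-over-union of two spheres given centers and radii."""
    d = math.dist(c1, c2)
    inter = sphere_intersection_volume(r1, r2, d)
    v1 = 4.0 / 3.0 * math.pi * r1 ** 3
    v2 = 4.0 / 3.0 * math.pi * r2 ** 3
    return inter / (v1 + v2 - inter)
```

For example, two unit spheres whose centers are one radius apart overlap in a lens of volume 5π/12, giving an IoU of 5/27.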
Affiliation(s)
- Youssef Assis: Université de Lorraine, CNRS, Inria, LORIA, 54000, Nancy, France
- Liang Liao: Université de Lorraine, CNRS, Inria, LORIA, 54000, Nancy, France; Department of Diagnostic and Therapeutic Interventional Neuroradiology, Université de Lorraine, CHRU-Nancy, 54000, Nancy, France; Université de Lorraine, Inserm, IADI, 54000, Nancy, France
- Fabien Pierre: Université de Lorraine, CNRS, Inria, LORIA, 54000, Nancy, France
- René Anxionnat: Department of Diagnostic and Therapeutic Interventional Neuroradiology, Université de Lorraine, CHRU-Nancy, 54000, Nancy, France; Université de Lorraine, Inserm, IADI, 54000, Nancy, France
- Erwan Kerrien: Université de Lorraine, CNRS, Inria, LORIA, 54000, Nancy, France
6. Zhang J, Zou W, Hu N, Zhang B, Wang J. S-Net: an S-shaped network for nodule detection in 3D CT images. Phys Med Biol 2024; 69:075013. PMID: 38382097. DOI: 10.1088/1361-6560/ad2b96.
Abstract
Objective: Accurate and automatic detection of pulmonary nodules is critical for early lung cancer diagnosis, and promising progress has been achieved in developing effective deep models for nodule detection. However, most existing methods merely focus on integrating elaborately designed feature-extraction modules into the backbone of the detection network to extract rich nodule features, while ignoring disadvantages of the structure of the detection network itself. This study aims to address these disadvantages and develop a deep learning-based algorithm for pulmonary nodule detection to improve the accuracy of early lung cancer diagnosis.
Approach: An S-shaped network called S-Net is developed with a U-shaped network as its backbone. An information fusion branch propagates lower-level details and positional information, critical for nodule detection, to higher-level feature maps; a head-shared, scale-adaptive detection strategy captures information at different scales to better detect nodules of different shapes and sizes; and a feature-decoupling detection head lets the classification and regression branches focus on the information required for their respective tasks. A hybrid loss function is used to fully exploit the interplay between the classification and regression branches.
Main results: S-Net with ResSENet and three other U-shaped backbones (from the SANet, OSAF-YOLOv3, and MSANet (R+SC+ECA) models) achieves average CPM scores of 0.914, 0.915, 0.917, and 0.923 on the LUNA16 dataset, significantly higher than those of other existing state-of-the-art models.
Significance: The experimental results demonstrate that the proposed method effectively improves nodule detection performance, implying potential applications in clinical practice.
Affiliation(s)
- JingYu Zhang: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
- Wei Zou: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
- Nan Hu: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
- Bin Zhang: Department of Nuclear Medicine, the First Affiliated Hospital of Soochow University, Suzhou 215006, People's Republic of China
- Jiajun Wang: School of Electronic and Information Engineering, Soochow University, Suzhou 215006, People's Republic of China
7. Manikandan J, Jayashree K. Enhancing Lung Nodule Classification: A Novel CViEBi-CBGWO Approach with Integrated Image Preprocessing. J Imaging Inform Med 2024. PMID: 38526706. DOI: 10.1007/s10278-024-01074-1.
Abstract
Cancer detection and accurate classification pose significant challenges for medical professionals, as lung cancer is a lethal illness, and diagnosing malignant lung nodules at an initial stage significantly enhances recovery and survival rates. Therefore, a novel model named convolutional vision Elman bidirectional-based crossover boosted grey wolf optimization (CViEBi-CBGWO) has been proposed to enhance classification accuracy. CT images selected for further preprocessing are obtained from the LUNA16 and LIDC-IDRI datasets. The data undergo preprocessing phases involving normalization, data augmentation, and filtering to improve generalization ability as well as image quality. Local features within the preprocessed images are extracted by a convolutional neural network (CNN), while global features are extracted by a vision transformer (ViT) model consisting of five encoder blocks. The local and global features are combined to generate a feature map, and an Elman bidirectional long short-term memory (EBiLSTM) model categorizes the feature map as benign or malignant. A crossover operation is integrated with the grey wolf optimization (GWO) algorithm, and the combined CBGWO fine-tunes the parameters of the CViEBi model, avoiding the problem of local optima. Experimental validation using various evaluation measures demonstrates a superior classification accuracy of 98.72% compared with existing methods.
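As background to the CBGWO component, a grey wolf optimization step moves each candidate solution toward the three current best solutions (alpha, beta, delta). Below is a plain GWO sketch on a toy objective, without the paper's crossover operator or the CViEBi model; the hyperparameters and clamping-to-bounds are illustrative choices, not details from the paper.

```python
import random

def gwo_minimize(objective, dim, bounds, n_wolves=12, n_iters=200, seed=1):
    """Minimize `objective` over a box with plain grey wolf optimization."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for it in range(n_iters):
        wolves.sort(key=objective)
        # Freeze copies of the three leaders before moving any wolf.
        alpha, beta, delta = (w[:] for w in wolves[:3])
        a = 2 - 2 * it / n_iters          # exploration parameter decays 2 -> 0
        for w in wolves:
            for j in range(dim):
                guided = []
                for leader in (alpha, beta, delta):
                    A = 2 * a * rng.random() - a
                    C = 2 * rng.random()
                    D = abs(C * leader[j] - w[j])   # encircling distance
                    guided.append(leader[j] - A * D)
                # New position: average of the three leader-guided moves.
                w[j] = max(lo, min(hi, sum(guided) / 3))
    return min(wolves, key=objective)

# Toy usage: minimize the sphere function in 3 dimensions.
best = gwo_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
```

As `a` decays, the update shifts from exploration (large random steps around the leaders) to exploitation (the pack collapsing onto the best positions); the crossover operation described in the abstract would be inserted into this position-update step.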
Affiliation(s)
- Manikandan J: Department of Information Technology, St. Joseph's College of Engineering, Chennai, India
- Jayashree K: Department of Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai, India
8. Lin CY, Guo SM, Lien JJJ, Tsai TY, Liu YS, Lai CH, Hsu IL, Chang CC, Tseng YL. Development of a modified 3D region proposal network for lung nodule detection in computed tomography scans: a secondary analysis of lung nodule datasets. Cancer Imaging 2024; 24:40. PMID: 38509635. PMCID: PMC10953193. DOI: 10.1186/s40644-024-00683-x.
Abstract
BACKGROUND: Low-dose computed tomography (LDCT) has been shown to be useful in early lung cancer detection. This study aimed to develop a novel deep learning model for detecting pulmonary nodules on chest LDCT images.
METHODS: In this secondary analysis, three lung nodule datasets, Lung Nodule Analysis 2016 (LUNA16), Lung Nodule Received Operation (LNOP), and Lung Nodule in Health Examination (LNHE), were used to train and test deep learning models. The 3D region proposal network (RPN) was modified through a series of pruning experiments for better predictive performance. Each modified model was evaluated on sensitivity and the competition performance metric (CPM). The modified 3D RPN trained on the three datasets was further evaluated by 10-fold cross-validation, and temporal validation was conducted to assess its reliability for detecting lung nodules.
RESULTS: The pruning experiments indicated that the modified 3D RPN composed of the Cross Stage Partial Network (CSPNet) approach applied to ResNeXt (CSP-ResNeXt), a feature pyramid network (FPN), the nearest-anchor method, and post-processing masking had the optimal predictive performance, with a CPM of 92.2%. The modified 3D RPN trained on the LUNA16 dataset had the highest CPM (90.1%), followed by LNOP (74.1%) and LNHE (70.2%). When trained and tested on the same datasets, sensitivities were 94.6%, 84.8%, and 79.7% for LUNA16, LNOP, and LNHE, respectively. In temporal validation, the modified 3D RPN achieved a CPM of 71.6% and a sensitivity of 85.7% on the LNOP test set, and a CPM of 71.7% and a sensitivity of 83.5% on the LNHE test set.
CONCLUSION: A modified 3D RPN for detecting lung nodules on LDCT scans was designed and validated, which may serve as a computer-aided diagnosis system to facilitate lung nodule detection and lung cancer diagnosis.
Affiliation(s)
- Chia-Ying Lin: Department of Medical Imaging, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, No.1, University Road, 701, Tainan City, Taiwan
- Shu-Mei Guo: Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Jenn-Jier James Lien: Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Tzung-Yi Tsai: Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan, Taiwan
- Yi-Sheng Liu: Department of Medical Imaging, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, No.1, University Road, 701, Tainan City, Taiwan
- Chao-Han Lai: Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
- I-Lin Hsu: Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
- Chao-Chun Chang: Division of Thoracic Surgery, Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
- Yau-Lin Tseng: Division of Thoracic Surgery, Department of Surgery, College of Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan, Taiwan
9. Ma L, Li G, Feng X, Fan Q, Liu L. TiCNet: Transformer in Convolutional Neural Network for Pulmonary Nodule Detection on CT Images. J Imaging Inform Med 2024; 37:196-208. PMID: 38343213. DOI: 10.1007/s10278-023-00904-y.
Abstract
Lung cancer is the leading cause of cancer death. Since lung cancer appears as nodules in its early stage, detecting pulmonary nodules early could enhance treatment efficiency and improve patients' survival rates. Advances in computer-aided analysis have made it possible to automatically detect lung nodules in computed tomography (CT) screening. In this paper, we propose a novel detection network, TiCNet, which embeds a transformer module in a 3D convolutional neural network (CNN) for pulmonary nodule detection on CT images. First, we integrate the transformer and CNN in an end-to-end structure to capture both short- and long-range dependencies, providing rich information on nodule characteristics. Second, we design an attention block and multi-scale skip pathways to improve the detection of small nodules. Last, we develop a two-head detector to guarantee high sensitivity and specificity. Experimental results on the LUNA16 and PN9 datasets showed that TiCNet achieved superior performance compared with existing lung nodule detection methods, and the effectiveness of each module was demonstrated. TiCNet is thus an effective tool for pulmonary nodule detection, and its excellent performance suggests potential usefulness in supporting lung cancer screening.
Affiliation(s)
- Ling Ma: College of Software, Nankai University, Tianjin, China
- Gen Li: College of Software, Nankai University, Tianjin, China
- Xingyu Feng: College of Software, Nankai University, Tianjin, China
- Qiliang Fan: College of Software, Nankai University, Tianjin, China
- Lizhi Liu: Department of Radiology, Sun Yat-Sen University Cancer Center, Guangdong, China
10. Wu R, Liang C, Zhang J, Tan Q, Huang H. Multi-kernel driven 3D convolutional neural network for automated detection of lung nodules in chest CT scans. Biomed Opt Express 2024; 15:1195-1218. PMID: 38404310. PMCID: PMC10890889. DOI: 10.1364/boe.504875.
Abstract
The accurate position detection of lung nodules is crucial in early chest computed tomography (CT)-based lung cancer screening, which helps to improve the survival rate of patients. Deep learning methodologies have shown impressive feature extraction ability in the CT image analysis task, but it is still a challenge to develop a robust nodule detection model due to the salient morphological heterogeneity of nodules and complex surrounding environment. In this study, a multi-kernel driven 3D convolutional neural network (MK-3DCNN) is proposed for computerized nodule detection in CT scans. In the MK-3DCNN, a residual learning-based encoder-decoder architecture is introduced to employ the multi-layer features of the deep model. Considering the various nodule sizes and shapes, a multi-kernel joint learning block is developed to capture 3D multi-scale spatial information of nodule CT images, and this is conducive to improving nodule detection performance. Furthermore, a multi-mode mixed pooling strategy is designed to replace the conventional single-mode pooling manner, and it reasonably integrates the max pooling, average pooling, and center cropping pooling operations to obtain more comprehensive nodule descriptions from complicated CT images. Experimental results on the public dataset LUNA16 illustrate that the proposed MK-3DCNN method achieves more competitive nodule detection performance compared to some state-of-the-art algorithms. The results on our constructed clinical dataset CQUCH-LND indicate that the MK-3DCNN has a good prospect in clinical practice.
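The multi-mode mixed pooling idea above (blending max, average, and center-crop pooling rather than using a single mode) can be illustrated in one dimension. The fixed blend weights and the non-overlapping-window layout here are assumptions for illustration; the abstract does not specify how MK-3DCNN combines the three operations.

```python
def mixed_pool(window, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Blend max, average, and center-crop pooling over one window."""
    w_max, w_avg, w_ctr = weights
    max_v = max(window)                    # max pooling: strongest response
    avg_v = sum(window) / len(window)      # average pooling: smoothed response
    ctr_v = window[len(window) // 2]       # center crop: keep the middle value
    return w_max * max_v + w_avg * avg_v + w_ctr * ctr_v

def mixed_pool_1d(signal, size=2, stride=2, weights=(1 / 3, 1 / 3, 1 / 3)):
    # Slide non-overlapping windows across a (flattened) feature map.
    return [mixed_pool(signal[i:i + size], weights)
            for i in range(0, len(signal) - size + 1, stride)]
```

Setting the weights to (1, 0, 0), (0, 1, 0), or (0, 0, 1) recovers plain max, average, or center-crop pooling, so a learned or hand-tuned blend interpolates between the three behaviors.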
Affiliation(s)
- Ruoyu Wu: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
- Changyu Liang: Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- Jiuquan Zhang: Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- QiJuan Tan: Department of Radiology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing 400030, China
- Hong Huang: Key Laboratory of Optoelectronic Technology and Systems of the Education Ministry of China, Chongqing University, Chongqing 400044, China
11. Jian M, Jin H, Zhang L, Wei B, Yu H. DBPNDNet: dual-branch networks using 3DCNN toward pulmonary nodule detection. Med Biol Eng Comput 2024; 62:563-573. PMID: 37945795. DOI: 10.1007/s11517-023-02957-1.
Abstract
With the advancement of artificial intelligence, CNNs have been successfully introduced into the discipline of medical data analysis. Clinically, automatic pulmonary nodule detection remains an intractable issue, since nodules in the lung parenchyma or on the chest wall are hard to distinguish visually from shadows, background noise, blood vessels, and bones. Thus, when making a medical diagnosis, clinicians first attend to the intensity cues and contour characteristics of pulmonary nodules in order to locate their specific spatial positions. To automate the detection process, we propose an efficient multi-task, dual-branch 3D convolutional neural network architecture, called DBPNDNet, for automatic pulmonary nodule detection and segmentation. In the dual-branch structure, one branch is designed for candidate region extraction in pulmonary nodule detection, while the other branch performs semantic segmentation of the nodule lesion region. In addition, we develop a 3D attention-weighted feature fusion module informed by the doctor's diagnostic perspective, so that the information captured by the segmentation branch further promotes the effect of the detection branch. The framework was implemented and assessed on a commonly used dataset for medical image analysis. On average, our framework achieved a sensitivity of 91.33%, and it reached a sensitivity of 97.14% at 8 FPs per scan. The experimental results indicate that our framework outperforms other mainstream approaches.
Affiliation(s)
- Muwei Jian
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- School of Information Science and Technology, Linyi University, Linyi, China
- Haodong Jin
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- School of Control Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Linsong Zhang
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- Benzheng Wei
- Medical Artificial Intelligence Research Center, Shandong University of Traditional Chinese Medicine, Qingdao, China
- Hui Yu
- School of Control Engineering, University of Shanghai for Science and Technology, Shanghai, China
- School of Creative Technologies, University of Portsmouth, Portsmouth, UK
12
Zhang C, Xu J, Tang R, Yang J, Wang W, Yu X, Shi S. Novel research and future prospects of artificial intelligence in cancer diagnosis and treatment. J Hematol Oncol 2023; 16:114. [PMID: 38012673] [PMCID: PMC10680201] [DOI: 10.1186/s13045-023-01514-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 10/07/2023] [Accepted: 11/20/2023] [Indexed: 11/29/2023]
Abstract
Research into the potential benefits of artificial intelligence for comprehending the intricate biology of cancer has grown as a result of the widespread use of deep learning and machine learning in the healthcare sector and the availability of highly specialized cancer datasets. Here, we review new artificial intelligence approaches and how they are being used in oncology. We describe how artificial intelligence might be used in the detection, prognosis, and administration of cancer treatments and introduce the use of the latest large language models such as ChatGPT in oncology clinics. We highlight artificial intelligence applications for omics data types, and we offer perspectives on how the various data types might be combined to create decision-support tools. We also evaluate the present constraints and challenges to applying artificial intelligence in precision oncology. Finally, we discuss how current challenges may be surmounted to make artificial intelligence useful in clinical settings in the future.
Affiliation(s)
- Chaoyi Zhang
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Jin Xu
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Rong Tang
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Jianhui Yang
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Wei Wang
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Xianjun Yu
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
- Si Shi
- Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center, No. 270 Dong'An Road, Shanghai, 200032, People's Republic of China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Shanghai Pancreatic Cancer Institute, No. 399 Lingling Road, Shanghai, 200032, People's Republic of China
- Pancreatic Cancer Institute, Fudan University, Shanghai, 200032, People's Republic of China
13
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. [PMID: 37509272] [PMCID: PMC10377683] [DOI: 10.3390/cancers15143608] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Received: 06/22/2023] [Revised: 07/10/2023] [Accepted: 07/10/2023] [Indexed: 07/30/2023]
Abstract
(1) Background: Applying deep learning technology to cancer diagnosis based on medical images is one of the research hotspots in artificial intelligence and computer vision. Because cancer diagnosis demands very high accuracy and timeliness, because medical imaging has inherent particularity and complexity, and because deep learning methods are developing rapidly, a comprehensive review of relevant studies is necessary to help readers understand the current research status and ideas. (2) Methods: Five radiological imaging modalities, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural network techniques emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning technology in medical image-based cancer analysis is sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: More public standard databases for cancer are needed. Pre-trained deep neural network models have the potential to be improved, and special attention should be paid to research on multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
Grants
- RM32G0178B8 BBSRC, UK
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
14
Song W, Tang F, Marshall H, Fong KM, Liu F. An Improved Anchor-Free Nodule Detection System Using Feature Pyramid Network. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2023; 2023:1-4. [PMID: 38082619] [DOI: 10.1109/embc40787.2023.10340341] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 12/18/2023]
Abstract
Lung cancer (LC) is the leading cause of cancer death. Detecting LC at the earliest stage facilitates curative treatment options and will improve mortality rates. Computer-aided detection (CAD) systems can help improve LC diagnostic accuracy. In this work, we propose a deep-learning-based lung nodule detection method. The proposed CAD system is a 3D anchor-free nodule detection (AFND) method based on a feature pyramid network (FPN). The deep-learning-based CAD system has several novel properties: (1) It achieves region proposal and nodule classification in a single network, forming a one-step detection pipeline and reducing operation time. (2) An adaptive nodule modelling method was designed to detect nodules of various sizes. (3) The proposed AFND also establishes a novel center-point selection mechanism for better classification. (4) Based on the new nodule model, a composite loss function integrating cosine similarity (CS) loss and Smooth L1 loss was designed to further improve nodule detection accuracy. Experimental results show that the AFND outperforms other similar nodule detection systems on the LUNA16 dataset.
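The composite loss described in point (4), which pairs a cosine similarity term with Smooth L1 regression, can be written as a minimal NumPy sketch. This is an illustration of the general technique, not the authors' implementation; the weighting factor `lam` is a hypothetical hyperparameter.

```python
import numpy as np

def smooth_l1(pred: np.ndarray, target: np.ndarray, beta: float = 1.0) -> float:
    """Smooth L1 (Huber-style) loss: quadratic near zero, linear for large errors."""
    diff = np.abs(pred - target)
    per_elem = np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
    return float(per_elem.mean())

def cosine_loss(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """1 - cosine similarity: penalizes directional mismatch of the two vectors."""
    num = float(np.dot(pred.ravel(), target.ravel()))
    den = float(np.linalg.norm(pred) * np.linalg.norm(target)) + eps
    return 1.0 - num / den

def composite_loss(pred: np.ndarray, target: np.ndarray, lam: float = 0.5) -> float:
    # Weighted sum: Smooth L1 handles magnitude, cosine term handles shape/direction.
    return smooth_l1(pred, target) + lam * cosine_loss(pred, target)
```

The cosine term is what lets the network compare intensity distributions as a whole rather than element-by-element; the Smooth L1 term keeps gradients bounded for outliers.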
15
Arul King J, Helen Sulochana C. An efficient deep neural network to segment lung nodule using optimized HDCCARUNet model. Journal of Intelligent & Fuzzy Systems 2023. [DOI: 10.3233/jifs-222215] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 04/07/2023]
Abstract
Lung cancer is a severe disease that may lead to death if left undiagnosed and untreated. Lung cancer recognition and segmentation is a difficult task in medical image processing. The study of computed tomography (CT) is an important phase for detecting abnormal tissues in the lung. The size of a nodule, as well as its fine details, can vary across images, so radiologists face a difficult task in diagnosing nodules from multiple images. Deep learning approaches outperform traditional learning algorithms when the amount of data is large. One of the most common deep learning architectures is the convolutional neural network. Convolutional neural networks use pre-trained models such as LeNet, AlexNet, GoogleNet, VGG16, VGG19, ResNet50, and others for learning features. This study proposes an optimized HDCCARUNet (Hybrid Dilated Convolutional Channel Attention Res-UNet) architecture, which combines an improved U-Net with a modified channel attention (MCA) block and a hybrid dilated attention convolutional (HDAC) layer to perform medical image segmentation accurately and effectively for various tasks. The attention mechanism aids in focusing on the desired outcome; the ability to dynamically assign input weights to neurons allows it to focus only on the most important information. To gather key details about different object features and infer finer channel-wise attention, the proposed system uses a modified channel attention (MCA) block. The experiment is conducted on the LIDC-IDRI dataset. Noise present in the dataset images is removed by an enhanced DWT filter, and the performance is analysed at various noise levels. The proposed method achieves an accuracy rate of 99.58%. Performance measures such as accuracy, sensitivity, specificity, and ROC curves are evaluated, and the system significantly outperforms other state-of-the-art systems.
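The hybrid dilated convolutional layer mentioned above builds on dilated convolution, which enlarges the receptive field without adding parameters. A minimal 1-D sketch of the mechanism (illustrative only, not the HDCCARUNet implementation):

```python
import numpy as np

def dilated_conv1d(x: np.ndarray, kernel, dilation: int) -> np.ndarray:
    """1-D dilated convolution with 'valid' padding.

    A kernel of length k with dilation d covers a span of (k-1)*d + 1 inputs,
    so stacking layers with dilations 1, 2, 4, ... grows the receptive field
    exponentially while the parameter count stays fixed.
    """
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field of one output element
    out = [sum(kernel[j] * x[i + j * dilation] for j in range(k))
           for i in range(len(x) - span + 1)]
    return np.array(out)
```

Hybrid schemes mix several dilation rates in one block so that both fine detail (small dilation) and wide context (large dilation) are captured.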
Affiliation(s)
- J. Arul King
- Department of ECE, St. Xavier’s Catholic College of Engineering, Tamilnadu, India
- C. Helen Sulochana
- Department of ECE, St. Xavier’s Catholic College of Engineering, Tamilnadu, India
16
Chen Y, Hou X, Yang Y, Ge Q, Zhou Y, Nie S. A Novel Deep Learning Model Based on Multi-Scale and Multi-View for Detection of Pulmonary Nodules. J Digit Imaging 2023; 36:688-699. [PMID: 36544067] [PMCID: PMC10039158] [DOI: 10.1007/s10278-022-00749-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/29/2022] [Revised: 11/03/2022] [Accepted: 12/02/2022] [Indexed: 12/24/2022]
Abstract
Lung cancer manifests as pulmonary nodules in the early stage. Thus, the early and accurate detection of these nodules is crucial for improving the survival rate of patients. We propose a novel two-stage model for lung nodule detection. In the candidate nodule detection stage, a deep learning model based on 3D context information roughly segments the preprocessed image to obtain candidate nodules. In this model, 3D image blocks are input into the constructed network, which learns the contextual information between the various slices in each 3D image block. The parameters of our model are equivalent to those of a 2D convolutional neural network (CNN), yet the model can effectively learn the 3D context information of the nodules. In the false-positive reduction stage, we propose a multi-scale shared convolutional structure model. Our lung detection model has no significant increase in parameters or computation in either stage of multi-scale and multi-view detection. The proposed model was evaluated using 888 computed tomography (CT) scans from the LIDC-IDRI dataset and achieved a competition performance metric (CPM) score of 0.957. The average detection sensitivity was 0.971 at 1.0 FP per scan. Furthermore, an average detection sensitivity of 0.933 at 1.0 FP per scan was achieved on data from Shanghai Pulmonary Hospital. Our model exhibited higher detection sensitivity, a lower false-positive rate, and better generalization than current lung nodule detection methods. The method has fewer parameters and less computational complexity, which provides more possibilities for the clinical application of this method.
Affiliation(s)
- Yang Chen
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Xuewen Hou
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Yifeng Yang
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Qianqian Ge
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Yan Zhou
- Department of Radiology, School of Medicine, Renji Hospital, Shanghai Jiao Tong University, Shanghai, 200127, China
- Shengdong Nie
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
17
Modak S, Abdel-Raheem E, Rueda L. Applications of Deep Learning in Disease Diagnosis of Chest Radiographs: A Survey on Materials and Methods. Biomedical Engineering Advances 2023. [DOI: 10.1016/j.bea.2023.100076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 02/05/2023]
18
Han L, Li F, Yu H, Xia K, Xin Q, Zou X. BiRPN-YOLOvX: A weighted bidirectional recursive feature pyramid algorithm for lung nodule detection. Journal of X-Ray Science and Technology 2023; 31:301-317. [PMID: 36617767] [DOI: 10.3233/xst-221310] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/17/2023]
Abstract
BACKGROUND Lung cancer has the second highest cancer mortality rate in the world today. Although lung cancer screening using CT images is a common way to detect lung cancer early, accurately detecting lung nodules remains a challenging issue in clinical practice. OBJECTIVE This study aims to develop a new weighted bidirectional recursive pyramid algorithm to address the problems of the small size of lung nodules, the large proportion of background region, and complex lung structures in lung nodule detection in CT images. METHODS First, the weighted bidirectional recursive feature pyramid network (BiRPN) is proposed, which can increase the ability of the network model to extract feature information and achieve multi-scale information fusion. Second, a CBAM_CSPDarknet53 structure is developed to incorporate an attention mechanism as a feature extraction module, which can aggregate both spatial and channel information of the feature map. Third, the weighted BiRPN and CBAM_CSPDarknet53 are applied to the YOLOvX model for lung nodule detection experiments, named BiRPN-YOLOvX, where YOLOvX represents different versions of YOLO. To verify the effectiveness of the weighted BiRPN and CBAM_CSPDarknet53 modules, they are fused with different models of YOLOv3, YOLOv4, and YOLOv5, and extensive experiments are carried out using the publicly available lung nodule datasets LUNA16 and LIDC-IDRI. The training set of LUNA16 contains 949 images, and the validation and testing sets each contain 118 images. There are 1987, 248, and 248 images in LIDC-IDRI's training, validation, and testing sets, respectively. RESULTS The sensitivity of lung nodule detection using BiRPN-YOLOv5 reaches 98.7% on LUNA16 and 96.2% on LIDC-IDRI. CONCLUSION This study demonstrates that the proposed new method has the potential to help improve the sensitivity of lung nodule detection in future clinical practice.
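The weighted bidirectional fusion at the heart of such pyramids combines same-resolution feature maps with learnable non-negative weights that are normalized before summing. A generic BiFPN-style sketch follows; it illustrates the technique only and is not the authors' BiRPN code.

```python
import numpy as np

def weighted_fusion(features, weights, eps: float = 1e-4) -> np.ndarray:
    """Fast normalized weighted fusion of same-shaped feature maps.

    `weights` stand in for learned per-input scalars; clamping to >= 0 and
    dividing by their sum keeps the fused output on the same scale as the
    inputs regardless of how many maps are merged.
    """
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # keep weights non-negative
    w = w / (w.sum() + eps)                                # normalize to ~1 total
    return sum(wi * f for wi, f in zip(w, features))
```

In a full pyramid this fusion runs at every level, once on the top-down pass and once on the bottom-up pass, with the weights learned jointly with the backbone.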
Affiliation(s)
- Liying Han
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin, China
- Fugai Li
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin, China
- Hengyong Yu
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, USA
- Kewen Xia
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin, China
- Qiyuan Xin
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin, China
- Xiaoyu Zou
- School of Electronics and Information Engineering, Hebei University of Technology, Tianjin, China
19
Zhao W, Ma J, Zhao L, Hou R, Qiu L, Fu X, Zhao J. PUNDIT: Pulmonary nodule detection with image category transformation. Med Phys 2022; 50:2914-2927. [PMID: 36576169] [DOI: 10.1002/mp.16183] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/23/2022] [Revised: 11/07/2022] [Accepted: 12/03/2022] [Indexed: 12/29/2022]
Abstract
BACKGROUND Convolutional neural networks (CNNs) have achieved great success in pulmonary nodule detection, which plays an important role in lung cancer screening. PURPOSE In this paper, we propose a novel strategy for pulmonary nodule detection that learns from a harder task: transforming nodule images into normal images. We name this strategy pulmonary nodule detection with image category transformation (PUNDIT). METHODS There are two steps in nodule detection: nodule candidate detection and false positive (FP) reduction. In the nodule candidate detection step, a segmentation-based framework is built for detection. We design an image category transformation (ICT) task to translate nodule images into pixel-to-pixel normal images, and we share information between the detection and transformation tasks by multitask learning. For the transformation task, we propose background consistency losses for standard cycle-consistent adversarial networks, which solve the problem of uncontrolled background changes. A three-dimensional network is used in the FP reduction step. RESULTS PUNDIT was evaluated on two datasets: a cancer screening dataset (CSD) with 1186 nodules for cross-validation and a second dataset (CTD) with 3668 nodules for external testing. Results were mainly evaluated by the competition performance metric (CPM), the average sensitivity at seven predefined FP rates. The CPM was improved from 0.906 to 0.931 on CSD, and from 0.835 to 0.848 on CTD. CONCLUSIONS Experimental results showed that PUNDIT can effectively improve the performance of pulmonary nodule detection.
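The competition performance metric (CPM) reported by several entries in this list is the average sensitivity at seven predefined FP-per-scan rates. A minimal sketch, assuming sensitivities are read off a FROC curve by linear interpolation:

```python
import numpy as np

def cpm(fp_per_scan, sensitivity,
        fp_points=(0.125, 0.25, 0.5, 1, 2, 4, 8)) -> float:
    """Average sensitivity at the seven predefined FP/scan operating points.

    `fp_per_scan` must be increasing (it parameterizes the FROC curve);
    sensitivities at the seven points are obtained by linear interpolation.
    """
    sens_at_points = np.interp(fp_points, fp_per_scan, sensitivity)
    return float(sens_at_points.mean())
```

Because the seven operating points span 0.125 to 8 FPs per scan, CPM rewards detectors that stay sensitive even under very strict false-positive budgets.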
Affiliation(s)
- Wangyuan Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jingchen Ma
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Lu Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Runping Hou
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Lu Qiu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaolong Fu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
20
Wang L. Deep Learning Techniques to Diagnose Lung Cancer. Cancers (Basel) 2022; 14:5569. [PMID: 36428662] [PMCID: PMC9688236] [DOI: 10.3390/cancers14225569] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Received: 10/21/2022] [Revised: 11/11/2022] [Accepted: 11/11/2022] [Indexed: 11/15/2022]
Abstract
Medical imaging tools are essential in early-stage lung cancer diagnostics and in monitoring lung cancer during treatment. Various medical imaging modalities, such as chest X-ray, magnetic resonance imaging, positron emission tomography, computed tomography, and molecular imaging techniques, have been extensively studied for lung cancer detection. These techniques have limitations, including the inability to classify cancer images automatically, which makes them unsuitable for patients with other pathologies. It is urgently necessary to develop a sensitive and accurate approach to the early diagnosis of lung cancer. Deep learning is one of the fastest-growing topics in medical imaging, with rapidly emerging applications spanning medical image-based and textural data modalities. With the help of deep learning-based medical imaging tools, clinicians can detect and classify lung nodules more accurately and quickly. This paper presents recent developments in deep learning-based imaging techniques for early lung cancer detection.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
21
Rib Fracture Detection with Dual-Attention Enhanced U-Net. Computational and Mathematical Methods in Medicine 2022; 2022:8945423. [PMID: 36035283] [PMCID: PMC9410867] [DOI: 10.1155/2022/8945423] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 03/28/2022] [Revised: 07/24/2022] [Accepted: 08/02/2022] [Indexed: 11/18/2022]
Abstract
Rib fractures are common injuries caused by chest trauma and may cause serious consequences, so it is essential to diagnose them accurately. Low-dose thoracic computed tomography (CT) is commonly used for rib fracture diagnosis, and convolutional neural network (CNN)-based methods have assisted doctors in rib fracture diagnosis in recent years. However, due to the lack of rib fracture data and the irregular, varied shapes of rib fractures, it is difficult for CNN-based methods to extract rib fracture features. As a result, they cannot achieve satisfying accuracy and sensitivity in detecting rib fractures. Inspired by the attention mechanism, we propose the CFSG U-Net for rib fracture detection. The CFSG U-Net uses the U-Net architecture enhanced by a dual-attention module, comprising a channel-wise fusion attention module (CFAM) and a spatial-wise group attention module (SGAM). CFAM uses the channel attention mechanism to reweight the feature map along the channel dimension and refine the U-Net's skip connections. SGAM uses a grouping technique to generate spatial attention that adjusts feature maps in the spatial dimension, allowing the spatial attention module to capture more fine-grained semantic information. To evaluate the effectiveness of the proposed methods, we established a rib fracture dataset in our research. The experimental results on our dataset show that the maximum sensitivity of our proposed method is 89.58% and the average FROC score is 81.28%, which outperforms existing rib fracture detection methods and attention modules.
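Channel-wise reweighting of the kind CFAM performs generally follows the squeeze-and-excitation pattern: global average pooling per channel, a small bottleneck transform, and a sigmoid gate. An illustrative NumPy sketch, where the weight matrices `w1` and `w2` are hypothetical stand-ins for learned parameters and the layout is not the paper's exact architecture:

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """Reweight a (C, H, W) feature map along the channel dimension.

    Squeeze: global average pool each channel to a scalar.
    Excite: bottleneck MLP (w1 then ReLU then w2) and a sigmoid gate per channel.
    """
    squeeze = feat.mean(axis=(1, 2))                       # (C,) channel descriptors
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,) gates in (0, 1)
    return feat * excite[:, None, None]                    # channel-wise reweighting
```

Applied on a skip connection, these gates let the decoder suppress channels dominated by background (bone shadows, vessels) before fusing encoder features.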
22
Huang YS, Chou PR, Chen HM, Chang YC, Chang RF. One-stage pulmonary nodule detection using 3-D DCNN with feature fusion and attention mechanism in CT image. Computer Methods and Programs in Biomedicine 2022; 220:106786. [PMID: 35398579] [DOI: 10.1016/j.cmpb.2022.106786] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Received: 09/01/2021] [Revised: 03/28/2022] [Accepted: 03/29/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Lung cancer is the most common cause of cancer-related death in the world. Low-dose computed tomography (LDCT) is a widely used modality in lung cancer detection. A nodule is an abnormal tissue that may evolve into lung cancer, so it is crucial to detect nodules at the early stage. However, reviewing LDCT scans for suspicious nodules is a time-consuming task. Recently, computer-aided detection (CADe) systems with convolutional neural network (CNN) architectures have been proven helpful for radiologists. Hence, in this study, a 3-D YOLO-based CADe system, 3-D OSAF-YOLOv3, is proposed for nodule detection in LDCT images. METHODS The proposed CADe system consists of data preprocessing, nodule detection, and a non-maximum suppression (NMS) algorithm. First, the data preprocessing, including background elimination, spacing normalization, and volume of interest (VOI) extraction, is conducted to remove the non-lung region, normalize the image spacing, and divide the LDCT image into numerous VOIs. Then, the VOIs are fed into the 3-D OSAF-YOLOv3 model to detect suspicious nodules. The proposed model is constructed by integrating 3-D YOLOv3 with the one-shot aggregation (OSA) module, the receptive field block (RFB), and the feature fusion scheme (FFS). Finally, the NMS algorithm is performed to eliminate duplicated detections generated by the model. RESULTS In this study, the LUNA16 dataset, composed of 1186 nodules from 888 LDCT scans, and the competition performance metric (CPM) are used to evaluate our CADe system. In the experimental results, the proposed system achieves a sensitivity of 0.962 at 8 false positives per scan and a CPM value of 0.905. Moreover, the ablation study results show that employing the OSA module, RFB, and FFS does improve detection performance. Furthermore, compared to other state-of-the-art (SOTA) models, our detection system achieves higher performance. CONCLUSIONS In this study, a YOLO-based CADe system integrating additional modules and schemes is proposed for nodule detection in LDCT images. The results indicate that the proposed modifications can significantly improve detection performance.
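The NMS step that removes duplicated detections can be sketched for axis-aligned 3-D boxes as follows. This is a generic greedy-NMS illustration, not the authors' code, and the IoU threshold is a hypothetical value.

```python
import numpy as np

def iou_3d(a: np.ndarray, b: np.ndarray) -> float:
    """IoU of two axis-aligned boxes given as (z1, y1, x1, z2, y2, x2)."""
    lo = np.maximum(a[:3], b[:3])                    # intersection lower corner
    hi = np.minimum(a[3:], b[3:])                    # intersection upper corner
    inter = float(np.prod(np.maximum(hi - lo, 0.0))) # zero if boxes are disjoint
    vol = lambda box: float(np.prod(box[3:] - box[:3]))
    return inter / (vol(a) + vol(b) - inter + 1e-9)

def nms_3d(boxes: np.ndarray, scores: np.ndarray, thresh: float = 0.3) -> list:
    """Greedy NMS: keep highest-scoring boxes, suppress heavy overlaps."""
    order = np.argsort(scores)[::-1]  # indices, best score first
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Drop every remaining box that overlaps the kept one too much.
        order = np.array([j for j in rest if iou_3d(boxes[i], boxes[j]) <= thresh])
    return keep
```

The same routine works per class; detection pipelines typically run it once per nodule candidate map after thresholding the objectness scores.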
Collapse
Affiliation(s)
- Yao-Sian Huang
- Department of Computer Science and Information Engineering, National Changhua University of Education, Changhua, Taiwan
- Ping-Ru Chou
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan
- Hsin-Ming Chen
- Department of Medical Imaging, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu, Taiwan
- Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 10617, Taiwan
- Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan; Graduate Institute of Network and Multimedia, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; MOST Joint Research Center for AI Technology and All Vista Healthcare, Taipei, Taiwan
23
An Attention-Preserving Network-Based Method for Assisted Segmentation of Osteosarcoma MRI Images. MATHEMATICS 2022. [DOI: 10.3390/math10101665] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Osteosarcoma is a malignant bone tumor that is extremely dangerous to human health. Manually outlining the lesion area in an image with traditional methods is not only labor-intensive but also complicated. With the development of computer-aided diagnostic techniques, more and more researchers are focusing on automatic segmentation techniques for osteosarcoma analysis. However, existing methods ignore the size of osteosarcomas, making it difficult to identify and segment smaller tumors; this is very detrimental to the early diagnosis of osteosarcoma. Therefore, this paper proposes a Contextual Axial-Preserving Attention Network (CaPaN)-based MRI image-assisted segmentation method for osteosarcoma detection. Building on Res2Net, a parallel decoder is added to aggregate high-level features, effectively combining the local and global features of osteosarcoma. In addition, channel feature pyramid (CFP) and axial attention (A-RA) mechanisms are used. The lightweight CFP extracts feature maps and contextual information at different scales. A-RA uses axial attention to distinguish tumor tissues, which reduces computational cost and thus improves the generalization performance of the model. We conducted experiments on a real dataset provided by the Second Xiangya Affiliated Hospital, and the results showed that the proposed method achieves better segmentation results than alternative models. In particular, the method shows significant advantages for small-target segmentation: its precision is about 2% higher than the average of the other models, and for small objects its DSC value is 0.021 higher than that of the commonly used U-Net method.
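The axial-attention idea underlying A-RA restricts self-attention to one spatial axis at a time, cutting the cost of full 2-D attention from O((HW)^2) to O(HW(H+W)) over two passes. The sketch below is a minimal illustration of that cost-reduction principle, not the CaPaN implementation; the identity query/key/value projections and the feature-map shape are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def axial_attention(x, axis):
    """Self-attention along a single spatial axis of an (H, W, C) feature map.
    Each position attends only to positions sharing its other coordinate,
    so one pass costs O(H*W*L) instead of O((H*W)^2) for full 2-D attention."""
    x = np.moveaxis(x, axis, 0)            # bring the attended axis to front: (L, M, C)
    L, M, C = x.shape
    q = k = v = x                          # identity projections keep the sketch minimal
    # attn[m, l, n]: how much position l on line m attends to position n on the same line
    attn = softmax(np.einsum('lmc,nmc->mln', q, k) / np.sqrt(C), axis=-1)
    out = np.einsum('mln,nmc->lmc', attn, v)
    return np.moveaxis(out, 0, axis)       # restore the original axis order

# A height pass followed by a width pass lets information flow across the whole map.
feat = np.random.rand(8, 8, 4)
out = axial_attention(axial_attention(feat, 0), 1)
```

Because each output element is a convex combination of inputs along the attended axis, the two-pass output stays within the range of the input features.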
24
Fahmy D, Kandil H, Khelifi A, Yaghi M, Ghazal M, Sharafeldeen A, Mahmoud A, El-Baz A. How AI Can Help in the Diagnostic Dilemma of Pulmonary Nodules. Cancers (Basel) 2022; 14:cancers14071840. [PMID: 35406614 PMCID: PMC8997734 DOI: 10.3390/cancers14071840] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 03/29/2022] [Accepted: 03/30/2022] [Indexed: 02/04/2023] Open
Abstract
Simple Summary Pulmonary nodules are considered a sign of bronchogenic carcinoma; detecting them early can slow their progression and save lives. Lung cancer is the second most common type of cancer in both men and women. This manuscript discusses the applications of artificial intelligence (AI) in lung segmentation and in pulmonary nodule segmentation and classification using computed tomography (CT) scans published in the last two decades, in addition to the limitations and future prospects of the field. Abstract Pulmonary nodules are the precursors of bronchogenic carcinoma, and their early detection facilitates early treatment, saving many lives. Unfortunately, pulmonary nodule detection and classification are liable to subjective variation, with a high rate of missed small cancerous lesions, which opens the way for artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and to be more applicable and easier to use. In this review, we briefly discuss the current applications of AI in lung segmentation and in pulmonary nodule detection and classification.
Affiliation(s)
- Dalia Fahmy
- Diagnostic Radiology Department, Mansoura University Hospital, Mansoura 35516, Egypt
- Heba Kandil
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Information Technology Department, Faculty of Computers and Informatics, Mansoura University, Mansoura 35516, Egypt
- Adel Khelifi
- Computer Science and Information Technology Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Maha Yaghi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Correspondence:
25
Silva F, Pereira T, Neves I, Morgado J, Freitas C, Malafaia M, Sousa J, Fonseca J, Negrão E, Flor de Lima B, Correia da Silva M, Madureira AJ, Ramos I, Costa JL, Hespanhol V, Cunha A, Oliveira HP. Towards Machine Learning-Aided Lung Cancer Clinical Routines: Approaches and Open Challenges. J Pers Med 2022; 12:jpm12030480. [PMID: 35330479 PMCID: PMC8950137 DOI: 10.3390/jpm12030480] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 02/28/2022] [Accepted: 03/10/2022] [Indexed: 12/15/2022] Open
Abstract
Advancements in the development of computer-aided decision (CAD) systems for clinical routines provide unquestionable benefits in connecting human medical expertise with machine intelligence to achieve better-quality healthcare. Considering the high incidence and mortality associated with lung cancer, there is a need for the most accurate clinical procedures; thus, using artificial intelligence (AI) tools for decision support is becoming an ever-closer reality. At each stage of the lung cancer clinical pathway, specific obstacles are identified that motivate the application of innovative AI solutions. This work provides a comprehensive review of the most recent research dedicated to the development of CAD tools using computed tomography images for lung cancer-related tasks. We discuss the major challenges and provide critical perspectives on future directions. Although we focus on lung cancer in this review, we also provide a clearer definition of the path used to integrate AI into healthcare, emphasizing fundamental research points that are crucial for overcoming current barriers.
Affiliation(s)
- Francisco Silva
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal
- Correspondence: (F.S.); (T.P.)
- Tania Pereira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Correspondence: (F.S.); (T.P.)
- Inês Neves
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- ICBAS—Abel Salazar Biomedical Sciences Institute, University of Porto, 4050-313 Porto, Portugal
- Joana Morgado
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- Cláudia Freitas
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Mafalda Malafaia
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Joana Sousa
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- João Fonseca
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FEUP—Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
- Eduardo Negrão
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- Beatriz Flor de Lima
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- Miguel Correia da Silva
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- António J. Madureira
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- Isabel Ramos
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- José Luis Costa
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- i3S—Instituto de Investigação e Inovação em Saúde, Universidade do Porto, 4200-135 Porto, Portugal
- IPATIMUP—Institute of Molecular Pathology and Immunology of the University of Porto, 4200-135 Porto, Portugal
- Venceslau Hespanhol
- CHUSJ—Centro Hospitalar e Universitário de São João, 4200-319 Porto, Portugal
- FMUP—Faculty of Medicine, University of Porto, 4200-319 Porto, Portugal
- António Cunha
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- UTAD—University of Trás-os-Montes and Alto Douro, 5001-801 Vila Real, Portugal
- Hélder P. Oliveira
- INESC TEC—Institute for Systems and Computer Engineering, Technology and Science, 4200-465 Porto, Portugal
- FCUP—Faculty of Science, University of Porto, 4169-007 Porto, Portugal