1
Jian M, Jin H, Zhang L, Wei B, Yu H. DBPNDNet: dual-branch networks using 3DCNN toward pulmonary nodule detection. Med Biol Eng Comput 2024; 62:563-573. [PMID: 37945795] [DOI: 10.1007/s11517-023-02957-1] [Received: 11/27/2022] [Accepted: 10/21/2023] [Indexed: 11/12/2023]
Abstract
With the advancement of artificial intelligence, CNNs have been successfully introduced into the discipline of medical data analysis. Clinically, automatic pulmonary nodule detection remains an intractable issue, since nodules located in the lung parenchyma or on the chest wall are difficult to distinguish visually from shadows, background noise, blood vessels, and bones. When making a diagnosis, clinicians therefore first attend to the intensity cues and contour characteristics of pulmonary nodules in order to locate their specific spatial positions. To automate this process, we propose an efficient multi-task, dual-branch 3D convolutional neural network architecture, called DBPNDNet, for automatic pulmonary nodule detection and segmentation. Within the dual-branch structure, one branch is designed for candidate-region extraction in pulmonary nodule detection, while the other is exploited for semantic segmentation of nodule lesion regions. In addition, we develop a 3D attention-weighted feature fusion module informed by the clinician's diagnostic perspective, so that the information captured by the segmentation branch further promotes the performance of the detection branch. The framework was implemented and assessed on a commonly used dataset for medical image analysis. Averaged over false-positive rates per CT scan, our framework achieved a sensitivity of 91.33%, and it reached 97.14% sensitivity at 8 FPs per scan. The experimental results indicate that our framework outperforms other mainstream approaches.
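The paper does not publish its fusion module here; as a minimal sketch of the general idea of attention-weighted feature fusion between two branches (one common form, not the authors' exact design), the segmentation branch's feature volume can be squashed into per-voxel attention weights that re-weight the detection branch's features before a residual add:

```python
import numpy as np

def attention_weighted_fusion(det_feat, seg_feat):
    """Fuse detection and segmentation 3D feature volumes (C, D, H, W).

    The segmentation features are mapped to per-voxel attention weights
    via a sigmoid; those weights modulate the detection features, which
    are then added back residually.
    """
    attn = 1.0 / (1.0 + np.exp(-seg_feat))   # per-voxel weights in (0, 1)
    return det_feat + det_feat * attn        # attended residual fusion

# toy 3D feature volumes: 2 channels, 4x4x4 voxels
rng = np.random.default_rng(0)
det = rng.standard_normal((2, 4, 4, 4))
seg = rng.standard_normal((2, 4, 4, 4))
fused = attention_weighted_fusion(det, seg)
assert fused.shape == det.shape
```

Because the attention weights lie in (0, 1), the fused response never flips the sign of a detection feature; it only amplifies it by a factor between 1 and 2, which keeps the detection branch's evidence intact while letting the segmentation branch emphasize lesion voxels.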
Affiliation(s)
- Muwei Jian
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- School of Information Science and Technology, Linyi University, Linyi, China
- Haodong Jin
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- School of Control Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Linsong Zhang
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- Benzheng Wei
- Medical Artificial Intelligence Research Center, Shandong University of Traditional Chinese Medicine, Qingdao, China
- Hui Yu
- School of Control Engineering, University of Shanghai for Science and Technology, Shanghai, China
- School of Creative Technologies, University of Portsmouth, Portsmouth, UK
2
Hendrix W, Hendrix N, Scholten ET, Mourits M, Trap-de Jong J, Schalekamp S, Korst M, van Leuken M, van Ginneken B, Prokop M, Rutten M, Jacobs C. Deep learning for the detection of benign and malignant pulmonary nodules in non-screening chest CT scans. Commun Med 2023; 3:156. [PMID: 37891360] [PMCID: PMC10611755] [DOI: 10.1038/s43856-023-00388-5] [Received: 05/31/2023] [Accepted: 10/12/2023] [Indexed: 10/29/2023]
Abstract
BACKGROUND Outside a screening program, early-stage lung cancer is generally diagnosed after the detection of incidental nodules in clinically ordered chest CT scans. Despite the advances in artificial intelligence (AI) systems for lung cancer detection, clinical validation of these systems is lacking in a non-screening setting. METHOD We developed a deep learning-based AI system and assessed its performance for the detection of actionable benign nodules (requiring follow-up), small lung cancers, and pulmonary metastases in CT scans acquired in two Dutch hospitals (internal and external validation). A panel of five thoracic radiologists labeled all nodules, and two additional radiologists verified the nodule malignancy status and searched for any missed cancers using data from the national Netherlands Cancer Registry. The detection performance was evaluated by measuring the sensitivity at predefined false positive rates on a free receiver operating characteristic curve and was compared with the panel of radiologists. RESULTS On the external test set (100 scans from 100 patients), the sensitivity of the AI system for detecting benign nodules, primary lung cancers, and metastases is respectively 94.3% (82/87, 95% CI: 88.1-98.8%), 96.9% (31/32, 95% CI: 91.7-100%), and 92.0% (104/113, 95% CI: 88.5-95.5%) at a clinically acceptable operating point of 1 false positive per scan (FP/s). These sensitivities are comparable to or higher than the radiologists, albeit with a slightly higher FP/s (average difference of 0.6). CONCLUSIONS The AI system reliably detects benign and malignant pulmonary nodules in clinically indicated CT scans and can potentially assist radiologists in this setting.
Affiliation(s)
- Ward Hendrix
- Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
- Radiology Department, Jeroen Bosch Hospital, Henri Dunantstraat 1, 5223 GZ, 's-Hertogenbosch, The Netherlands
- Nils Hendrix
- Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
- Radiology Department, Jeroen Bosch Hospital, Henri Dunantstraat 1, 5223 GZ, 's-Hertogenbosch, The Netherlands
- Jheronimus Academy of Data Science, Sint Janssingel 92, 5211 DA, 's-Hertogenbosch, The Netherlands
- Ernst T Scholten
- Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
- Mariëlle Mourits
- Radiology Department, Canisius Wilhelmina Hospital, Weg door Jonkerbos 100, 6532 SZ, Nijmegen, The Netherlands
- Joline Trap-de Jong
- Radiology Department, St. Antonius Hospital, Koekoekslaan 1, 3435 CM, Nieuwegein, The Netherlands
- Steven Schalekamp
- Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
- Mike Korst
- Radiology Department, Jeroen Bosch Hospital, Henri Dunantstraat 1, 5223 GZ, 's-Hertogenbosch, The Netherlands
- Maarten van Leuken
- Radiology Department, Canisius Wilhelmina Hospital, Weg door Jonkerbos 100, 6532 SZ, Nijmegen, The Netherlands
- Bram van Ginneken
- Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
- Mathias Prokop
- Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
- Radiology Department, University Medical Center Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Matthieu Rutten
- Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
- Radiology Department, Jeroen Bosch Hospital, Henri Dunantstraat 1, 5223 GZ, 's-Hertogenbosch, The Netherlands
- Colin Jacobs
- Diagnostic Imaging Analysis Group, Radiology and Nuclear Medicine Department, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
3
Maynord M, Farhangi MM, Fermüller C, Aloimonos Y, Levine G, Petrick N, Sahiner B, Pezeshk A. Semi-supervised training using cooperative labeling of weakly annotated data for nodule detection in chest CT. Med Phys 2023. [PMID: 36630691] [DOI: 10.1002/mp.16219] [Received: 05/12/2022] [Revised: 12/14/2022] [Accepted: 12/23/2022] [Indexed: 01/13/2023]
Abstract
PURPOSE Machine learning algorithms are best trained with large quantities of accurately annotated samples. While natural-scene images can often be labeled relatively cheaply and at large scale, obtaining accurate annotations for medical images is both time-consuming and expensive. In this study, we propose a cooperative labeling method that allows us to make use of weakly annotated medical imaging data for training a machine learning algorithm. As most clinically produced data are weakly annotated - produced for use by humans rather than machines and lacking the information machine learning depends upon - this approach allows us to incorporate a wider range of clinical data and thereby increase the training set size. METHODS Our pseudo-labeling method consists of multiple stages. In the first stage, a previously established network is trained using a limited number of samples with high-quality, expert-produced annotations. This network is used to generate annotations for a separate, larger dataset that contains only weakly annotated scans. In the second stage, by cross-checking the two types of annotations against each other, we obtain higher-fidelity annotations. In the third stage, we extract training data from the weakly annotated scans and combine it with the fully annotated data, producing a larger training dataset. We use this larger dataset to develop a computer-aided detection (CADe) system for nodule detection in chest CT. RESULTS We evaluated the proposed approach by presenting the network with different numbers of expert-annotated scans in training and then testing the CADe system on an independent expert-annotated dataset. We demonstrate that when the availability of expert annotations is severely limited, the inclusion of weakly labeled data leads to a 5% improvement in the competition performance metric (CPM), defined as the average of sensitivities at different false-positive rates.
CONCLUSIONS Our proposed approach can effectively merge a weakly annotated dataset with a small, well-annotated dataset for algorithm training. It can help enlarge limited training data by leveraging the large amount of weakly labeled data typically generated in clinical image interpretation.
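The CPM defined above is a simple average. A minimal sketch, assuming the seven FP/scan operating points commonly used in lung-nodule CADe evaluations (the abstract itself only says "different false-positive rates", so the specific rates are an assumption):

```python
import numpy as np

# Operating points (FP/scan) commonly used when reporting CPM in
# lung-nodule CADe evaluations; assumed here, not stated in the paper.
FP_RATES = [0.125, 0.25, 0.5, 1, 2, 4, 8]

def cpm(sensitivities):
    """Competition performance metric: the mean of the sensitivities
    measured at each predefined FP/scan operating point."""
    assert len(sensitivities) == len(FP_RATES)
    return float(np.mean(sensitivities))

# e.g. sensitivities read off a FROC curve at each FP rate (made-up values)
score = cpm([0.60, 0.68, 0.75, 0.81, 0.86, 0.90, 0.92])
print(round(score, 4))
```

Because each operating point contributes equally, a method that trades a little sensitivity at 8 FP/scan for a larger gain at 0.25 FP/scan improves its CPM, which is why the metric rewards detectors that stay sensitive even under tight false-positive budgets.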
Affiliation(s)
- Michael Maynord
- University of Maryland, Computer Science Department, Iribe Center for Computer Science and Engineering, College Park, Maryland, USA
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
- M Mehdi Farhangi
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
- Cornelia Fermüller
- University of Maryland, Institute for Advanced Computer Studies, Iribe Center for Computer Science and Engineering, College Park, Maryland, USA
- Yiannis Aloimonos
- University of Maryland, Computer Science Department, Iribe Center for Computer Science and Engineering, College Park, Maryland, USA
- Gary Levine
- Division of Radiological Imaging Devices and Electronic Products, CDRH, FDA, Silver Spring, Maryland, USA
- Nicholas Petrick
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
- Berkman Sahiner
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
- Aria Pezeshk
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
5
Breast Cancer Detection on Histopathological Images Using a Composite Dilated Backbone Network. Comput Intell Neurosci 2022; 2022:8517706. [PMID: 35845881] [PMCID: PMC9279061] [DOI: 10.1155/2022/8517706] [Received: 04/09/2022] [Revised: 05/19/2022] [Accepted: 06/15/2022] [Indexed: 11/25/2022]
Abstract
Breast cancer is a lethal illness with a high mortality rate, and diagnostic accuracy is crucial for treatment; machine learning and deep learning may be beneficial to doctors here. The backbone network is critical to the performance of current CNN-based detectors, and integrating dilated convolution, ResNet, and AlexNet increases detection performance. The composite dilated backbone network (CDBN) is a method for assembling multiple identical backbones into a single robust backbone: it feeds the high-level output features of each backbone into the next backbone in a stepwise way and uses the lead backbone's feature maps to identify objects. We show that most contemporary detectors can easily incorporate CDBN, achieving mAP improvements ranging from 1.5 to 3.0 percentage points on the breast cancer histopathological image classification (BreakHis) dataset. Experiments also show that instance segmentation can be improved: on BreakHis, CDBN enhances the baseline Cascade Mask R-CNN detector (mAP = 53.3). The proposed CDBN detector does not need pretraining; it builds high-level features by combining low-level ones, with the linked identical backbones together forming the composite dilated backbone.
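The stepwise feature hand-off between linked backbones can be illustrated with a toy sketch (this is a schematic of the composite-backbone idea under stated simplifications - linear stages instead of convolutional blocks, same feature width throughout - not the authors' implementation):

```python
import numpy as np

def backbone_stage(x, w):
    """One toy backbone stage: linear map followed by ReLU."""
    return np.maximum(x @ w, 0.0)

def composite_backbone(x, backbones):
    """Composite backbone over identical stage-weight lists.

    Each assisting backbone's stage-i output is injected into the input
    of stage i+1 of the next backbone; the last (lead) backbone's
    top-level features are what the detector consumes.
    """
    prev_outs = None
    for stages in backbones:               # earlier backbones assist later ones
        outs, h = [], x
        for i, w in enumerate(stages):
            if prev_outs is not None and i > 0:
                h = h + prev_outs[i - 1]   # stepwise high-level feature hand-off
            h = backbone_stage(h, w)
            outs.append(h)
        prev_outs = outs
    return prev_outs[-1]                   # lead backbone's feature maps

rng = np.random.default_rng(1)
x = rng.standard_normal((2, 8))            # batch of 2, feature width 8
weights = [rng.standard_normal((8, 8)) for _ in range(3)]  # 3 identical stages
feat = composite_backbone(x, [weights, weights])           # two linked backbones
assert feat.shape == (2, 8)
```

The key property the sketch preserves is that the assisting backbone's higher-level features reach the lead backbone mid-stream rather than only at the end, which is what lets the composite behave as one deeper, stronger backbone without pretraining a new one.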
7
Min Y, Hu L, Wei L, Nie S. Computer-aided detection of pulmonary nodules based on convolutional neural networks: a review. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac568e] [Received: 10/30/2021] [Accepted: 02/18/2022] [Indexed: 02/08/2023]
Abstract
Computer-aided detection (CADe) technology has been proven to increase the detection rate of pulmonary nodules, which has important clinical significance for the early diagnosis of lung cancer. In this study, we systematically review the latest techniques for pulmonary nodule CADe based on deep learning models with convolutional neural networks in computed tomography images. First, brief descriptions and popular architectures of convolutional neural networks are introduced. Second, several common public databases and evaluation metrics are briefly described. Third, state-of-the-art approaches with excellent performance are selected. Subsequently, we combine the clinical diagnostic process and the traditional four steps of pulmonary nodule CADe into two stages, namely data preprocessing and image analysis. Further, the major optimizations of deep learning models and algorithms are highlighted according to the progressive evaluation effect of each method, and some clinical evidence is added. Finally, the various methods are summarized and compared, and the innovative or valuable contributions of each are expected to guide future research directions. The analysis shows that deep learning-based methods have significantly transformed the detection of pulmonary nodules and that their design can be inspired by clinical imaging diagnostic procedures. Moreover, focusing on the image-analysis stage yields improved returns; in particular, optimal results can be achieved by optimizing the candidate nodule generation and false-positive reduction steps. End-to-end methods, with greater operating speed and lower computational cost, are superior to other methods in CADe of pulmonary nodules.