1
Cortesi M, Liu D, Powell E, Barlow E, Warton K, Ford CE. Accurate Identification of Cancer Cells in Complex Pre-Clinical Models Using a Deep-Learning Neural Network: A Transfection-Free Approach. Adv Biol (Weinh) 2024:e2400034. PMID: 39133225. DOI: 10.1002/adbi.202400034.
Abstract
3D co-cultures are key tools for in vitro biomedical research, as they recapitulate the in vivo environment more closely while allowing tighter control over the culture's composition and experimental conditions. The limited technologies available for the analysis of these models, however, hamper their widespread application. The separation of the contributions of the different cell types, in particular, is a fundamental challenge. In this work, ORACLE (OvaRiAn Cancer ceLl rEcognition) is presented, a deep neural network trained to distinguish between ovarian cancer and healthy cells based on the shape of their nucleus. The extensive validation conducted includes multiple cell lines and patient-derived cultures to characterize the effect of all the major potential confounding factors. High accuracy and reliability are maintained throughout the analysis (F1 score > 0.9 and area under the ROC curve (ROC-AUC) = 0.99), demonstrating ORACLE's effectiveness at this detection and classification task. ORACLE is freely available (https://github.com/MarilisaCortesi/ORACLE/tree/main) and can be used to recognize both ovarian cancer cell lines and primary patient-derived cells. This feature is unique to ORACLE and thus enables, for the first time, the analysis of in vitro co-cultures comprised solely of patient-derived cells.
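The two metrics reported above follow standard definitions; the sketch below (not ORACLE's own code, just the textbook formulas) computes F1 from hard class predictions and ROC-AUC from continuous scores via the Mann-Whitney rank formulation:

```python
import numpy as np

def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for the positive class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def roc_auc(y_true, scores):
    """ROC-AUC as the probability that a random positive outscores a random
    negative (ties count one half)."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))
```

An F1 above 0.9 together with a ROC-AUC of 0.99 indicates both a well-chosen operating point and good ranking across all thresholds.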
Affiliation(s)
- Marilisa Cortesi
- Gynaecological Cancer Research Group, School of Clinical Medicine, Faculty of Medicine and Health, University of New South Wales, Kensington, NSW, 2033, Australia
- Laboratory of Cellular and Molecular Engineering, Department of Electrical Electronic and Information Engineering "G. Marconi", Alma Mater Studiorum-University of Bologna, Cesena, 47521, Italy
- Dongli Liu
- Gynaecological Cancer Research Group, School of Clinical Medicine, Faculty of Medicine and Health, University of New South Wales, Kensington, NSW, 2033, Australia
- Elyse Powell
- Gynaecological Cancer Research Group, School of Clinical Medicine, Faculty of Medicine and Health, University of New South Wales, Kensington, NSW, 2033, Australia
- Ellen Barlow
- Gynaecological Cancer Research Group, School of Clinical Medicine, Faculty of Medicine and Health, University of New South Wales, Kensington, NSW, 2033, Australia
- Kristina Warton
- Gynaecological Cancer Research Group, School of Clinical Medicine, Faculty of Medicine and Health, University of New South Wales, Kensington, NSW, 2033, Australia
- Caroline E Ford
- Gynaecological Cancer Research Group, School of Clinical Medicine, Faculty of Medicine and Health, University of New South Wales, Kensington, NSW, 2033, Australia
2
Wang L, Zhang C, Zhang Y, Li J. An Automated Diagnosis Method for Lung Cancer Target Detection and Subtype Classification-Based CT Scans. Bioengineering (Basel) 2024; 11:767. PMID: 39199725. PMCID: PMC11351493. DOI: 10.3390/bioengineering11080767.
Abstract
When dealing with small targets in lung cancer detection, the YOLO V8 algorithm may produce false positives and missed detections. To address this issue, this study proposes an enhanced YOLO V8 detection model. The model integrates a large separable kernel attention mechanism into the C2f module to expand the information retrieval range, strengthens the extraction of lung cancer features in the Backbone section, and achieves effective interaction between multi-scale features in the Neck section, thereby enhancing feature representation and robustness. Additionally, depth-wise convolution and Coordinate Attention mechanisms are embedded in the Fast Spatial Pyramid Pooling module to reduce feature loss and improve detection accuracy. This study also introduces a Minimum Point Distance-based IoU loss to strengthen the correlation between predicted and ground-truth bounding boxes, improving adaptability and accuracy in small target detection. Experimental validation demonstrates that the improved network outperforms other mainstream detection networks in average precision and surpasses other classification networks in accuracy. These findings validate the outstanding performance of the enhanced model in the localization and recognition aspects of lung cancer auxiliary diagnosis.
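The abstract does not spell out the Minimum Point Distance-based IoU loss; one published formulation (MPDIoU) subtracts from the plain IoU the squared distances between corresponding top-left and bottom-right corners of the predicted and ground-truth boxes, normalized by the squared image diagonal. A minimal sketch under that assumption, with boxes as (x1, y1, x2, y2):

```python
def mpdiou_loss(pred, gt, img_w, img_h):
    """Loss = 1 - (IoU - d_tl/norm - d_br/norm), where d_tl/d_br are squared
    corner distances and norm = img_w**2 + img_h**2 (the squared diagonal)."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt
    # plain IoU of the two axis-aligned boxes
    ix1, iy1 = max(px1, gx1), max(py1, gy1)
    ix2, iy2 = min(px2, gx2), min(py2, gy2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
    iou = inter / union
    # corner-distance penalties, normalized by image size
    norm = img_w ** 2 + img_h ** 2
    d_tl = (px1 - gx1) ** 2 + (py1 - gy1) ** 2
    d_br = (px2 - gx2) ** 2 + (py2 - gy2) ** 2
    return 1.0 - (iou - d_tl / norm - d_br / norm)
```

Because the corner penalties stay informative even when IoU saturates or vanishes, a loss of this family gives small boxes a smoother gradient signal than plain IoU.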
Affiliation(s)
- Jin Li
- College of Intelligent Systems Science and Engineering, Harbin Engineering University, Harbin 150001, China; (L.W.); (C.Z.); (Y.Z.)
3
Gao Z, Guo Y, Wang G, Chen X, Cao X, Zhang C, An S, Xu F. Robust deep learning from incomplete annotation for accurate lung nodule detection. Comput Biol Med 2024; 173:108361. PMID: 38569236. DOI: 10.1016/j.compbiomed.2024.108361.
Abstract
Deep learning plays a significant role in the detection of pulmonary nodules in low-dose computed tomography (LDCT) scans, contributing to the diagnosis and treatment of lung cancer. Nevertheless, its effectiveness often relies on the availability of extensive, meticulously annotated datasets. In this paper, we explore the utilization of an incompletely annotated dataset for pulmonary nodule detection and introduce the FULFIL (Forecasting Uncompleted Labels For Inexpensive Lung nodule detection) algorithm as an innovative approach. By instructing annotators to label only the nodules they are most confident about, without requiring complete coverage, we can substantially reduce annotation costs. This approach, however, results in an incompletely annotated dataset, which presents challenges when training deep learning models. Within the FULFIL algorithm, we employ a Graph Convolutional Network (GCN) to discover the relationships between annotated and unannotated nodules and self-adaptively complete the annotation. Meanwhile, a teacher-student framework is employed for self-adaptive learning on the completed annotation dataset. Furthermore, we have designed a Dual-Views loss to leverage different data perspectives, aiding the model in acquiring robust features and enhancing generalization. We carried out experiments using the LUng Nodule Analysis (LUNA) dataset, achieving a sensitivity of 0.574 at 0.125 false positives per scan (FPs/scan) with only 10% instance-level annotations for nodules, outperforming comparative methods by 7.00%. Experimental comparisons between our model and human experts on the test dataset demonstrate that our model can achieve a comparable level of performance. The comprehensive experimental results demonstrate that FULFIL can effectively leverage an incompletely annotated pulmonary nodule dataset to develop a robust deep learning model, making it a promising tool for assisting in lung nodule detection.
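FULFIL's implementation is not given here; the sketch below only illustrates the generic teacher-student ingredients the abstract names: a teacher whose weights track the student as an exponential moving average (EMA), and confident teacher predictions promoted to pseudo-labels that complete the missing annotations. The momentum and threshold values are illustrative assumptions:

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.99):
    """Teacher weights follow the student as an exponential moving average."""
    return momentum * teacher_w + (1.0 - momentum) * student_w

def complete_annotations(teacher_probs, labeled_mask, threshold=0.9):
    """Candidates the annotators skipped (labeled_mask == 0) become positive
    pseudo-labels whenever the teacher is confident enough."""
    teacher_probs = np.asarray(teacher_probs, dtype=float)
    labeled_mask = np.asarray(labeled_mask, dtype=bool)
    return (~labeled_mask) & (teacher_probs >= threshold)
```

The EMA smooths the teacher over training, so its pseudo-labels are more stable than those of the raw student at any single step.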
Affiliation(s)
- Zebin Gao
- School of Information Science and Technology, Fudan University, Shanghai 200438, China
- Yuchen Guo
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Guoxin Wang
- JD Health International Inc, Beijing 100176, China
- Xiangru Chen
- Hangzhou Zhuoxi Institute of Brain and Intelligence, Hangzhou 311100, China
- Xuyang Cao
- JD Health International Inc, Beijing 100176, China
- Chao Zhang
- JD Health International Inc, Beijing 100176, China
- Shan An
- JD Health International Inc, Beijing 100176, China
- Feng Xu
- School of Software, Tsinghua University, Beijing 100084, China
4
Fang M, Fu M, Liao B, Lei X, Wu FX. Deep integrated fusion of local and global features for cervical cell classification. Comput Biol Med 2024; 171:108153. PMID: 38364660. DOI: 10.1016/j.compbiomed.2024.108153.
Abstract
Cervical cytology image classification is of great significance to cervical cancer diagnosis and prognosis. Recently, convolutional neural networks (CNNs) and vision transformers have been adopted as two branches whose local and global features are simply added for image classification. However, such simple addition may not integrate these features effectively. In this study, we explore the synergy of local and global features of cytology images for classification tasks. Specifically, we design a Deep Integrated Feature Fusion (DIFF) block to synergize local and global features of cytology images from a CNN branch and a transformer branch. Our proposed method is evaluated on three cervical cell image datasets (SIPaKMeD, CRIC, Herlev) and another large blood cell dataset, BCCD, for several multi-class and binary classification tasks. Experimental results demonstrate the effectiveness of the proposed method in cervical cell classification, which could assist medical specialists in better diagnosing cervical cancer.
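The DIFF block's internals are not described in the abstract; as a generic alternative to plain addition, a learned gate can weight the two branches per feature dimension. The sketch below is an illustrative assumption, not the paper's exact design:

```python
import numpy as np

def gated_fusion(local_feat, global_feat, W, b):
    """Fuse CNN (local) and transformer (global) features with a sigmoid gate
    computed from their concatenation: out = g * local + (1 - g) * global."""
    z = np.concatenate([local_feat, global_feat], axis=-1)
    g = 1.0 / (1.0 + np.exp(-(z @ W + b)))   # per-dimension gate in (0, 1)
    return g * local_feat + (1.0 - g) * global_feat

# toy usage: batch of 4 samples, feature dimension 8, random gate parameters
rng = np.random.default_rng(0)
d = 8
W = rng.normal(size=(2 * d, d)) * 0.1
b = np.zeros(d)
out = gated_fusion(rng.normal(size=(4, d)), rng.normal(size=(4, d)), W, b)
```

Unlike addition, the gate lets the network decide, per input and per dimension, whether the local or the global branch should dominate.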
Affiliation(s)
- Ming Fang
- Division of Biomedical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada
- Minghan Fu
- Department of Mechanical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada
- Bo Liao
- School of Mathematics and Statistics, Hainan Normal University, 99 Longkun South Road, Haikou, 571158, Hainan, China
- Xiujuan Lei
- School of Computer Science, Shaanxi Normal University, 620 West Chang'an Avenue, Xi'an, 710119, Shaanxi, China
- Fang-Xiang Wu
- Division of Biomedical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada; Department of Mechanical Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada; Department of Computer Science, University of Saskatchewan, 57 Campus Drive, Saskatoon, S7N 5A9, SK, Canada
5
Tatar OC, Akay MA, Metin S. DraiNet: AI-driven decision support in pneumothorax and pleural effusion management. Pediatr Surg Int 2023; 40:30. PMID: 38151565. DOI: 10.1007/s00383-023-05609-5.
Abstract
OBJECTIVE This study presents DraiNet, a deep learning model developed to detect pneumothorax and pleural effusion in pediatric patients and to aid in assessing the necessity of tube thoracostomy. The primary goal is to utilize DraiNet as a decision support tool to enhance clinical decision-making in the management of these conditions. METHODS DraiNet was trained on a diverse dataset of pediatric CT scans, carefully annotated by experienced surgeons. The model incorporated advanced object detection techniques and was evaluated using standard metrics, such as mean average precision (mAP). RESULTS DraiNet achieved an mAP score of 0.964, demonstrating high accuracy in detecting and precisely localizing abnormalities associated with pneumothorax and pleural effusion. The model's precision and recall further confirmed its ability to effectively predict positive cases. CONCLUSION The integration of DraiNet as an AI-driven decision support system marks a significant advancement in pediatric healthcare. By combining deep learning algorithms with clinical expertise, DraiNet provides a valuable tool for non-surgical teams and emergency room doctors, aiding them in making informed decisions about surgical interventions, with the potential to enhance patient outcomes and optimize the management of critical conditions, including pneumothorax and pleural effusion.
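The reported mAP is the mean, over classes, of the average precision (AP); a common all-points approximation of AP accumulates precision at each recall step over score-ranked detections. A minimal single-class sketch (no interpolation, for illustration only):

```python
def average_precision(scores, is_tp, n_gt):
    """AP as the area under the precision-recall curve: detections are sorted
    by confidence; each true positive adds (delta recall) * precision.
    is_tp[i] marks detection i as a true positive; n_gt is the number of
    ground-truth objects."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for i in order:
        if is_tp[i]:
            tp += 1
            recall = tp / n_gt
            ap += (recall - prev_recall) * (tp / (tp + fp))
            prev_recall = recall
        else:
            fp += 1
    return ap
```

mAP is then simply the mean of these per-class AP values, so a 0.964 mAP implies near-perfect ranked detections in every class.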
Affiliation(s)
- Ozan Can Tatar
- Department of General Surgery, School of Medicine, Kocaeli University, 41000, Kocaeli, Turkey
- Information Systems Engineering, Faculty of Technology, Kocaeli University, Kocaeli, Turkey
- Mustafa Alper Akay
- Department of Pediatric Surgery, School of Medicine, Kocaeli University, Kocaeli, Turkey
- Semih Metin
- Department of Pediatric Surgery, School of Medicine, Kocaeli University, Kocaeli, Turkey
6
Oh K, Lee SE, Kim EK. 3-D breast nodule detection on automated breast ultrasound using faster region-based convolutional neural networks and U-Net. Sci Rep 2023; 13:22625. PMID: 38114666. PMCID: PMC10730541. DOI: 10.1038/s41598-023-49794-8.
Abstract
Mammography is currently the most commonly used modality for breast cancer screening. However, its sensitivity is relatively low in women with dense breasts, and dense breast tissue shows a relatively high rate of interval cancers and a high risk of developing breast cancer. Ultrasonography is therefore widely adopted as a supplemental screening tool to standard mammography, especially for dense breasts. Lately, automated breast ultrasound imaging has gained attention due to its advantages over hand-held ultrasound imaging. However, reading automated breast ultrasound requires considerable time and effort because of the large volume of data. Hence, developing a computer-aided nodule detection system for automated breast ultrasound is practically valuable and impactful. This study proposes a three-dimensional breast nodule detection system based on a simple two-dimensional deep-learning model exploiting automated breast ultrasound. Additionally, we provide several postprocessing steps to reduce false positives. In our experiments using the in-house automated breast ultrasound datasets, a sensitivity of [Formula: see text] with 8.6 false positives is achieved on unseen test data at best.
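The postprocessing is not detailed in the abstract; one common way to turn per-slice 2D detections into 3D findings while suppressing false positives is to chain overlapping boxes across consecutive slices and discard short chains. A sketch under that assumption (thresholds are illustrative, not the paper's):

```python
def iou_2d(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def link_slices(dets, iou_thr=0.3, min_len=2):
    """dets: {slice_index: [box, ...]}. Greedily chain boxes that overlap in
    consecutive slices; chains shorter than min_len slices are dropped as
    likely false positives."""
    chains = []
    for z in sorted(dets):
        for box in dets[z]:
            for c in chains:
                last_z, last_box = c[-1]
                if z - last_z == 1 and iou_2d(box, last_box) >= iou_thr:
                    c.append((z, box))
                    break
            else:
                chains.append([(z, box)])
    return [c for c in chains if len(c) >= min_len]
```

A real nodule persists across neighboring slices, while speckle-induced detections usually appear on a single slice, which is what the min_len filter exploits.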
Affiliation(s)
- Kangrok Oh
- Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, 03722, Republic of Korea
- Si Eun Lee
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin, Gyeonggi-do, 16995, Republic of Korea
- Eun-Kyung Kim
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin, Gyeonggi-do, 16995, Republic of Korea
7
Putra RH, Astuti ER, Nurrachman AS, Putri DK, Ghazali AB, Pradini TA, Prabaningtyas DT. Convolutional neural networks for automated tooth numbering on panoramic radiographs: A scoping review. Imaging Sci Dent 2023; 53:271-281. PMID: 38174035. PMCID: PMC10761295. DOI: 10.5624/isd.20230058.
Abstract
Purpose The objective of this scoping review was to investigate the applicability and performance of various convolutional neural network (CNN) models in tooth numbering on panoramic radiographs, achieved through classification, detection, and segmentation tasks. Materials and Methods An online search of the PubMed, ScienceDirect, and Scopus databases was performed. Based on the selection process, 12 studies were included in this review. Results Eleven studies utilized a CNN model for detection tasks, five for classification tasks, and three for segmentation tasks in the context of tooth numbering on panoramic radiographs. Most of these studies revealed high performance of various CNN models in automating tooth numbering. However, several studies also highlighted limitations of CNNs, such as false positives and false negatives in identifying decayed teeth, teeth with crown prosthetics, teeth adjacent to edentulous areas, dental implants, root remnants, wisdom teeth, and root canal-treated teeth. These limitations can be overcome by ensuring both the quality and quantity of datasets, as well as by optimizing the CNN architecture. Conclusion CNNs have demonstrated high performance in automated tooth numbering on panoramic radiographs. Future development of CNN-based models for this purpose should also consider different stages of dentition, such as the primary and mixed dentition stages, as well as the presence of various tooth conditions. Ultimately, an optimized CNN architecture can serve as the foundation for an automated tooth numbering system and for further artificial intelligence research on panoramic radiographs for a variety of purposes.
Affiliation(s)
- Ramadhan Hardani Putra
- Department of Dentomaxillofacial Radiology, Faculty of Dental Medicine, Universitas Airlangga, Surabaya, Indonesia
- Eha Renwi Astuti
- Department of Dentomaxillofacial Radiology, Faculty of Dental Medicine, Universitas Airlangga, Surabaya, Indonesia
- Aga Satria Nurrachman
- Department of Dentomaxillofacial Radiology, Faculty of Dental Medicine, Universitas Airlangga, Surabaya, Indonesia
- Dina Karimah Putri
- Department of Dentomaxillofacial Radiology, Faculty of Dental Medicine, Universitas Airlangga, Surabaya, Indonesia
- Division of Dental Informatics and Radiology, Tohoku University Graduate School of Dentistry, Sendai, Japan
- Ahmad Badruddin Ghazali
- Oral Radiology Unit, Department of Oral Maxillofacial Surgery and Oral Diagnosis, Kulliyyah of Dentistry, International Islamic University Malaysia, Malaysia
- Tjio Andrinanti Pradini
- Undergraduate Program, Faculty of Dental Medicine, Universitas Airlangga, Surabaya, Indonesia
8
Zhang J, Li Z, Lin H, Xue M, Wang H, Fang Y, Liu S, Huo T, Zhou H, Yang J, Xie Y, Xie M, Lu L, Liu P, Ye Z. Deep learning assisted diagnosis system: improving the diagnostic accuracy of distal radius fractures. Front Med (Lausanne) 2023; 10:1224489. PMID: 37663656. PMCID: PMC10471443. DOI: 10.3389/fmed.2023.1224489.
Abstract
Objectives To explore an intelligent detection technology based on deep learning algorithms to assist the clinical diagnosis of distal radius fractures (DRFs), and to compare it with human performance to verify the feasibility of this method. Methods A total of 3,240 patients (fracture: n = 1,620; normal: n = 1,620) were included in this study, with a total of 3,276 wrist anteroposterior (AP) X-ray films (1,639 fractured, 1,637 normal) and 3,260 wrist lateral X-ray films (1,623 fractured, 1,637 normal). We divided the patients into training, validation, and test sets in a ratio of 7:1.5:1.5. The deep learning models were developed using the data from the training and validation sets, and their effectiveness was then evaluated using the data from the test set. The diagnostic performance of the deep learning models was evaluated using receiver operating characteristic (ROC) curves and the area under the curve (AUC), accuracy, sensitivity, and specificity, and compared with that of medical professionals. Results The deep learning ensemble model had excellent accuracy (97.03%), sensitivity (95.70%), and specificity (98.37%) in detecting DRFs. The accuracy of the AP view was 97.75%, the sensitivity 97.13%, and the specificity 98.37%; the accuracy of the lateral view was 96.32%, the sensitivity 94.26%, and the specificity 98.37%. At the wrist-joint level, the accuracy was 97.55%, the sensitivity 98.36%, and the specificity 96.73%. In terms of these variables, the performance of the ensemble model was superior to that of both the orthopedic attending physician group and the radiology attending physician group. Conclusion This deep learning ensemble model has excellent performance in detecting DRFs on plain X-ray films. Using this artificial intelligence model as a second expert to assist clinical diagnosis is expected to improve the accuracy of diagnosing DRFs and enhance clinical work efficiency.
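The reported accuracy, sensitivity, and specificity follow directly from confusion-matrix counts, and a wrist-level call can combine the two views. The ensemble rule below (positive if either view is positive) is an illustrative assumption, not necessarily the paper's exact scheme:

```python
def confusion_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall on fractured wrists
    specificity = tn / (tn + fp)   # recall on normal wrists
    return accuracy, sensitivity, specificity

def two_view_positive(ap_positive, lateral_positive):
    """Hypothetical wrist-level ensemble: fractured if either view flags it.
    An OR rule raises sensitivity at some cost in specificity."""
    return ap_positive or lateral_positive
```

Note the pattern in the reported numbers: the wrist-level sensitivity (98.36%) exceeds either single view's, while its specificity (96.73%) drops slightly, which is exactly the trade-off an OR-style combination produces.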
Affiliation(s)
- Jiayao Zhang
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhimin Li
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Heng Lin
- Department of Orthopedics, Nanzhang People’s Hospital, Nanzhang, China
- Mingdi Xue
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Honglin Wang
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ying Fang
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Songxiang Liu
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Tongtong Huo
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Hong Zhou
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jiaming Yang
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yi Xie
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Mao Xie
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Lin Lu
- Department of Orthopedics, Renmin Hospital of Wuhan University, Wuhan, China
- Pengran Liu
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhewei Ye
- Department of Orthopedics, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Intelligent Medical Laboratory, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
9
Behrendt F, Bengs M, Bhattacharya D, Krüger J, Opfer R, Schlaefer A. A systematic approach to deep learning-based nodule detection in chest radiographs. Sci Rep 2023; 13:10120. PMID: 37344565. DOI: 10.1038/s41598-023-37270-2.
Abstract
Lung cancer is a serious disease responsible for millions of deaths every year. Early stages of lung cancer can manifest as pulmonary nodules. To assist radiologists in reducing the number of overlooked nodules and to increase detection accuracy in general, automatic detection algorithms have been proposed. Deep learning methods in particular are promising. However, obtaining clinically relevant results remains challenging. While a variety of approaches have been proposed for general-purpose object detection, these are typically evaluated on benchmark data sets. Achieving competitive performance for specific real-world problems like lung nodule detection typically requires careful analysis of the problem at hand and the selection and tuning of suitable deep learning models. We present a systematic comparison of state-of-the-art object detection algorithms for the task of lung nodule detection. In this regard, we address the critical aspect of class imbalance and demonstrate a data augmentation approach as well as transfer learning to boost performance. We illustrate how this analysis and a combination of multiple architectures result in state-of-the-art performance for lung nodule detection, as demonstrated by the proposed model winning the detection track of the Node21 competition. The code for our approach is available at https://github.com/FinnBehrendt/node21-submit.
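One standard remedy for the class imbalance mentioned above (not necessarily the authors' exact approach) is to draw training samples with weights inversely proportional to class frequency, so nodule-positive and nodule-free images are seen at similar rates:

```python
from collections import Counter

def balanced_sample_weights(labels):
    """Per-sample weights inversely proportional to class frequency; a
    weighted random sampler using them draws each class at an equal rate.
    Weights are scaled so they sum to len(labels)."""
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    return [total / (n_classes * counts[y]) for y in labels]
```

With these weights, each class contributes the same expected mass per epoch regardless of how rare it is in the raw data.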
Affiliation(s)
- Finn Behrendt
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
- Marcel Bengs
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
- Debayan Bhattacharya
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
- Alexander Schlaefer
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, 21073, Hamburg, Germany
10
Aldughayfiq B, Ashfaq F, Jhanjhi NZ, Humayun M. YOLO-Based Deep Learning Model for Pressure Ulcer Detection and Classification. Healthcare (Basel) 2023; 11:1222. PMID: 37174764. PMCID: PMC10178524. DOI: 10.3390/healthcare11091222.
Abstract
Pressure ulcers are significant healthcare concerns affecting millions of people worldwide, particularly those with limited mobility. Early detection and classification of pressure ulcers are crucial in preventing their progression and reducing associated morbidity and mortality. In this work, we present a novel approach that uses YOLOv5, an advanced and robust object detection model, to detect and classify pressure ulcers into four stages and non-pressure ulcers. We also utilize data augmentation techniques to expand our dataset and strengthen the resilience of our model. Our approach shows promising results, achieving an overall mean average precision of 76.9% and class-specific mAP50 values ranging from 66% to 99.5%. Compared to previous studies that primarily utilize CNN-based algorithms, our approach provides a more efficient and accurate solution for the detection and classification of pressure ulcers. The successful implementation of our approach has the potential to improve the early detection and treatment of pressure ulcers, resulting in better patient outcomes and reduced healthcare costs.
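Geometric augmentations such as horizontal flipping must transform the bounding boxes along with the image; a minimal sketch of the box update (illustrative, not the paper's exact pipeline):

```python
def hflip_box(box, img_w):
    """Mirror an (x1, y1, x2, y2) box horizontally inside an image of width
    img_w. The right edge becomes the left edge of the flipped box, so the
    coordinates are swapped as well as mirrored."""
    x1, y1, x2, y2 = box
    return (img_w - x2, y1, img_w - x1, y2)
```

Applying such label-consistent transforms is what lets a small annotated wound dataset be expanded without collecting new images.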
Affiliation(s)
- Bader Aldughayfiq
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Farzeen Ashfaq
- School of Computer Science, SCS, Taylor's University, Subang Jaya 47500, Malaysia
- N Z Jhanjhi
- School of Computer Science, SCS, Taylor's University, Subang Jaya 47500, Malaysia
- Mamoona Humayun
- Department of Information Systems, College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
11
Cao Z, Li R, Yang X, Fang L, Li Z, Li J. Multi-scale detection of pulmonary nodules by integrating attention mechanism. Sci Rep 2023; 13:5517. PMID: 37015969. PMCID: PMC10073202. DOI: 10.1038/s41598-023-32312-1.
Abstract
The detection of pulmonary nodules suffers from low accuracy due to the various shapes and sizes of the nodules. In this paper, a multi-scale detection network for pulmonary nodules based on an attention mechanism is proposed to predict pulmonary nodules accurately. During data processing, a pseudo-color processing strategy is designed to enhance the gray image and introduce more contextual semantic information. In the feature extraction network, this paper designs ResSCBlock, a basic module integrating an attention mechanism, for feature extraction. At the same time, a feature pyramid structure is used for feature fusion in the network, and the problem that small nodules are easily missed is addressed with a multi-scale prediction method. The proposed method is tested on the LUNA16 data set, achieving an 83% mAP value. Compared with other detection networks, the proposed method achieves improved pulmonary nodule detection.
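One common way to realize a pseudo-color strategy (the paper's exact mapping is not given here) is to render the same grayscale CT slice under three different intensity windows and stack them as RGB-like channels; the window bounds below are illustrative assumptions:

```python
import numpy as np

def pseudo_color(gray, windows=((-1000, 400), (-160, 240), (-1400, 200))):
    """Map one grayscale slice to a 3-channel image: each channel is the
    slice rescaled to [0, 1] under a different (low, high) intensity window,
    so tissues with different densities stand out in different channels."""
    channels = []
    for lo, hi in windows:
        channels.append(np.clip((gray.astype(float) - lo) / (hi - lo), 0.0, 1.0))
    return np.stack(channels, axis=-1)
```

Besides adding contrast cues, the 3-channel output also lets a detector reuse backbones pretrained on RGB images.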
Affiliation(s)
- Zhenguan Cao
- School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, 232001, Anhui, China
- Rui Li
- School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, 232001, Anhui, China
- Xun Yang
- School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, 232001, Anhui, China
- Liao Fang
- School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, 232001, Anhui, China
- Zhuoqin Li
- School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, 232001, Anhui, China
- Jinbiao Li
- School of Electrical and Information Engineering, Anhui University of Science and Technology, Huainan, 232001, Anhui, China
12
Han T, Ai D, Li X, Fan J, Song H, Wang Y, Yang J. Coronary artery stenosis detection via proposal-shifted spatial-temporal transformer in X-ray angiography. Comput Biol Med 2023; 153:106546. [PMID: 36641935 DOI: 10.1016/j.compbiomed.2023.106546] [Received: 08/04/2022] [Revised: 01/03/2023] [Accepted: 01/10/2023] [Indexed: 01/13/2023]
Abstract
Accurate detection of coronary artery stenosis in X-ray angiography (XRA) images is crucial for the diagnosis and treatment of coronary artery disease. However, stenosis detection remains challenging due to complicated vascular structures, poor imaging quality, and fickle lesions. Although devoted to accurate stenosis detection, most existing methods exploit the spatio-temporal information of XRA sequences inefficiently, which limits their performance on the task. To overcome this problem, we propose a new stenosis detection framework based on a Transformer module that aggregates proposal-level spatio-temporal features. In the module, a proposal-shifted spatio-temporal tokenization (PSSTT) scheme is devised to gather spatio-temporal region-of-interest (RoI) features and obtain visual tokens within a local window. The Transformer-based feature aggregation (TFA) network then takes the tokens as inputs and enhances the RoI features by learning long-range spatio-temporal context for the final stenosis prediction. The effectiveness of our method was validated by qualitative and quantitative experiments on 233 XRA sequences of the coronary artery. Our method achieves a high F1 score of 90.88%, outperforming 15 other state-of-the-art detection methods, which demonstrates that it can perform accurate stenosis detection from XRA images thanks to its strong ability to aggregate spatio-temporal features.
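The token-aggregation step rests on standard scaled dot-product self-attention over the visual tokens. A minimal single-head NumPy sketch, with random untrained projections standing in for the learned TFA weights, looks like this:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_k=16, seed=0):
    """Scaled dot-product self-attention over a set of token vectors.
    tokens: (n_tokens, d). The random projections stand in for the learned
    Q/K/V weights of the TFA network (illustration only)."""
    rng = np.random.default_rng(seed)
    d = tokens.shape[1]
    wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    wv = rng.standard_normal((d, d_k)) / np.sqrt(d)
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    attn = softmax(q @ k.T / np.sqrt(d_k))  # (n_tokens, n_tokens) weights
    return attn @ v                         # context-enhanced token features

tokens = np.random.default_rng(1).standard_normal((8, 32))  # 8 RoI tokens
out = self_attention(tokens)
print(out.shape)  # (8, 16)
```

Each output row mixes information from every other RoI token, which is how long-range spatio-temporal context reaches a single proposal.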
Affiliation(s)
- Tao Han
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Xinyu Li
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Yining Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Jian Yang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
13
Wang T, Yan D, Liu Z, Xiao L, Liang C, Xin H, Feng M, Zhao Z, Wang Y. Diagnosis of cervical lymph node metastasis with thyroid carcinoma by deep learning application to CT images. Front Oncol 2023; 13:1099104. [PMID: 36776294 PMCID: PMC9909181 DOI: 10.3389/fonc.2023.1099104] [Received: 11/15/2022] [Accepted: 01/10/2023] [Indexed: 01/27/2023]
Abstract
Introduction The incidence of thyroid diseases has increased in recent years, and cervical lymph node metastasis (LNM) is considered an important risk factor for locoregional recurrence. This study aims to develop a deep learning-based computer-aided diagnosis (CAD) method to diagnose cervical LNM with thyroid carcinoma on computed tomography (CT) images. Methods A new deep learning framework guided by the analysis of CT data for automated detection and classification of lymph nodes (LNs) on CT images is proposed. The presented CAD system consists of two stages. First, an improved region-based detection network is designed to learn pyramidal features for detecting small nodes at different feature scales, with region proposals constrained by prior knowledge of the size and shape distributions of real nodes. Then, a residual network with an attention module performs the classification of LNs; the attention module helps classify LNs in the fine-grained domain, improving the performance of the whole classification network. Results A total of 574 axial CT images (containing 676 lymph nodes: 103 benign and 573 malignant) were retrieved from 196 patients who underwent CT for surgical planning. For detection, the data set was randomly subdivided into a training set (70%) and a testing set (30%), and each CT image was expanded to 20 images by rotation, mirroring, brightness changes, and Gaussian noise, yielding an extended data set of 11,480 CT images. The proposed detection method outperformed three other detection architectures (average precision of 80.3%). For classification, RoIs of lymph node metastases labeled by radiologists were used to train the classification network. The 676 lymph nodes were randomly divided into a training set (70%: 73 benign and 401 malignant) and a test set (30%: 30 benign and 172 malignant).
The classification method showed superior performance over other state-of-the-art methods, with an accuracy of 96% and true positive and true negative rates of 98.8% and 80%, respectively. It outperformed radiologists with an area under the curve of 0.894. Discussion The extensive experiments verify the high efficiency of the proposed method, which is considered instrumental in a clinical setting for diagnosing cervical LNM with thyroid carcinoma using preoperative CT images. Future research could incorporate radiologists' experience and domain knowledge into the deep-learning-based CAD method to make it more clinically significant.
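The reported test-set figures are internally consistent, as a quick arithmetic check shows (the counts below are reconstructed from the stated rates, assuming conventional rounding):

```python
# Test set: 202 lymph nodes (172 malignant, 30 benign).
tp = round(0.988 * 172)   # true positives implied by the 98.8% true positive rate
tn = round(0.80 * 30)     # true negatives implied by the 80% true negative rate
accuracy = (tp + tn) / (172 + 30)
print(tp, tn, round(accuracy, 2))  # 170 24 0.96
```

The implied 170 + 24 = 194 correct calls out of 202 reproduce the stated 96% accuracy.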
Affiliation(s)
- Tiantian Wang
- Department of Thyroid Surgery, the Second Affiliated Hospital of Zhejiang University College of Medicine, Hangzhou, China
- Ding Yan
- School of Control Science and Engineering, Shandong University, Jinan, China
- Zhaodi Liu
- School of Medicine, Zhejiang University, Hangzhou, China
- Lianxiang Xiao
- Shandong Provincial Maternal and Child Health Care Hospital, Shandong University, Jinan, China
- Changhu Liang
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Haotian Xin
- Department of Radiology, Shandong Provincial Hospital, Shandong University, Jinan, China
- Mengmeng Feng
- Department of Radiology, Shandong Provincial Hospital, Shandong University, Jinan, China
- Zijian Zhao (corresponding author)
- School of Control Science and Engineering, Shandong University, Jinan, China
- Yong Wang
- Department of Thyroid Surgery, the Second Affiliated Hospital of Zhejiang University College of Medicine, Hangzhou, China
14
Detecting pediatric wrist fractures using deep-learning-based object detection. Pediatr Radiol 2023; 53:1125-1134. [PMID: 36650360 DOI: 10.1007/s00247-023-05588-8] [Received: 10/04/2022] [Revised: 12/09/2022] [Accepted: 12/30/2022] [Indexed: 01/19/2023]
Abstract
BACKGROUND Missed fractures are the leading cause of diagnostic error in the emergency department, and fractures of pediatric bones, particularly subtle wrist fractures, can be misidentified because of their varying characteristics and responses to injury. OBJECTIVE This study evaluated the utility of an object detection deep learning framework for classifying pediatric wrist radiographs as positive or negative for fracture, including subtle buckle fractures of the distal radius, and evaluated the performance of this algorithm as an augmentation to trainee radiograph interpretation. MATERIALS AND METHODS We obtained 395 posteroanterior wrist radiographs from unique pediatric patients (65% positive for fracture, 30% positive for distal radial buckle fracture) and divided them into train (n = 229), tune (n = 41) and test (n = 125) sets. We trained a Faster R-CNN (region-based convolutional neural network) deep learning object detection model. Two pediatric residents and two radiology residents evaluated radiographs initially without artificial intelligence (AI) assistance, and subsequently with access to the bounding box generated by the Faster R-CNN model. RESULTS The Faster R-CNN model demonstrated an area under the curve (AUC) of 0.92 (95% confidence interval [CI] 0.87-0.97), accuracy of 88% (n = 110/125; 95% CI 81-93%), sensitivity of 88% (n = 70/80; 95% CI 78-94%) and specificity of 89% (n = 40/45, 95% CI 76-96%) in identifying any fracture, and identified 90% of buckle fractures (n = 35/39, 95% CI 76-97%). Access to Faster R-CNN model predictions significantly improved average resident accuracy from 80% to 93% in detecting any fracture (P < 0.001) and from 69% to 92% in detecting buckle fractures (P < 0.001). After accessing AI predictions, residents significantly outperformed AI in cases of disagreement (73% resident correct vs. 27% AI, P = 0.002).
CONCLUSION An object-detection-based deep learning approach trained with only a few hundred examples identified radiographs containing pediatric wrist fractures with high accuracy. Access to model predictions significantly improved resident accuracy in diagnosing these fractures.
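Scoring an object-detection model like this one requires matching predicted boxes to ground-truth annotations, conventionally via intersection-over-union (IoU); the abstract does not state the study's exact criterion, so the sketch below is the generic version:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 3))  # 0.143
```

A prediction is typically counted as a true positive when its IoU with a ground-truth box exceeds a chosen threshold (0.5 is a common default).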
15
Yan P, Li S, Zhou Z, Liu Q, Wu J, Ren Q, Chen Q, Chen Z, Chen Z, Chen S, Scholp A, Jiang JJ, Kang J, Ge P. Automated detection of glottic laryngeal carcinoma in laryngoscopic images from a multicentre database using a convolutional neural network. Clin Otolaryngol 2023; 48:436-441. [PMID: 36624555 DOI: 10.1111/coa.14029] [Received: 09/05/2021] [Revised: 11/22/2022] [Accepted: 12/31/2022] [Indexed: 01/11/2023]
Abstract
OBJECTIVE Little is known about the efficacy of using artificial intelligence (AI) to identify laryngeal carcinoma from images of vocal lesions taken in different hospitals with multiple laryngoscope systems. This multicentre study aimed to establish an AI system and provide a reliable auxiliary tool for screening for laryngeal carcinoma. STUDY DESIGN Multicentre case-control study. SETTING Six tertiary care centres. PARTICIPANTS Laryngoscopy images were collected from 2179 patients with vocal fold lesions. OUTCOME MEASURES An automatic detection system for laryngeal carcinoma was established and used to distinguish malignant from benign vocal lesions in 2179 laryngoscopy images acquired from 6 hospitals with 5 types of laryngoscopy systems. Pathological examination was the gold standard for identifying malignant and benign vocal lesions. RESULTS Of 89 cases in the malignant group, the classifier correctly identified laryngeal carcinoma in 66 patients (sensitivity, 74.16%). Of 640 cases in the benign group, the classifier accurately assessed the laryngeal lesion in 503 cases (specificity, 78.59%). Furthermore, the region-based convolutional neural network (R-CNN) classifier achieved an overall accuracy of 78.05%, with a 95.63% negative predictive value and a 32.51% positive predictive value on the testing data set. CONCLUSION This automatic diagnostic system has the potential to assist clinical laryngeal carcinoma diagnosis, which may improve and standardise the diagnostic capacity of laryngologists using different laryngoscopes.
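All five reported figures follow from the underlying confusion-matrix counts (66/89 malignant and 503/640 benign correctly classified), as this small check shows:

```python
def metrics(tp, fn, tn, fp):
    """Standard confusion-matrix metrics, as percentages."""
    return {
        "sensitivity": 100 * tp / (tp + fn),
        "specificity": 100 * tn / (tn + fp),
        "accuracy":    100 * (tp + tn) / (tp + fn + tn + fp),
        "ppv":         100 * tp / (tp + fp),
        "npv":         100 * tn / (tn + fn),
    }

# Counts from the abstract: 66 of 89 malignant, 503 of 640 benign correct.
m = metrics(tp=66, fn=23, tn=503, fp=137)
print({k: round(v, 2) for k, v in m.items()})
# matches the reported 74.16 / 78.59 / 78.05 / 32.51 / 95.63
```

The low PPV alongside a high NPV reflects the class imbalance of the test set (89 malignant vs. 640 benign), which is typical of a screening setting.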
Affiliation(s)
- Peikai Yan
- Department of Otolaryngology & Head Neck Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; School of Medicine, South China University of Technology, Guangzhou, China
- Shaohua Li
- Department of Otorhinolaryngology Head and Neck Surgery, Zhongshan Hospital of Traditional Chinese Medicine, Affiliated to Guangzhou University of Chinese Medicine, Zhongshan, Guangdong, China
- Zhou Zhou
- Department of Otolaryngology, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, China
- Qian Liu
- Department of Otolaryngology, Shenzhen People's Hospital (The Second Clinical Medical College, Jinan University; The First Affiliated Hospital, Southern University of Science and Technology), Shenzhen, China
- Jiahui Wu
- Department of Otolaryngology & Head Neck Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Qingyi Ren
- Department of Otolaryngology & Head Neck Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Qiuhuan Chen
- Department of Otolaryngology, Zhaoqing Gaoyao People's Hospital, Zhaoqing, China
- Zhipeng Chen
- Department of Otolaryngology, The Second People's Hospital of Longgang District, Shenzhen, China
- Ze Chen
- Department of Otolaryngology, Gaozhou People's Hospital, Gaozhou, China
- Shaohua Chen
- Department of Otolaryngology & Head Neck Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Austin Scholp
- Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA; Division of Otolaryngology-Head and Neck Surgery, Department of Surgery, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Jack J Jiang
- Division of Otolaryngology-Head and Neck Surgery, Department of Surgery, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Jing Kang
- Department of Otolaryngology & Head Neck Surgery, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China; School of Medicine, South China University of Technology, Guangzhou, China
- Pingjiang Ge
- School of Medicine, South China University of Technology, Guangzhou, China
16
Huang ML, Wu YS. GCS-YOLOV4-Tiny: A lightweight group convolution network for multi-stage fruit detection. Math Biosci Eng 2023; 20:241-268. [PMID: 36650764 DOI: 10.3934/mbe.2023011] [Indexed: 06/17/2023]
Abstract
Fruits require different cultivation techniques at different growth stages. Traditionally, the maturity stage of fruit is judged visually, which is time-consuming and labor-intensive. Fruits differ in size and color, and leaves or branches sometimes occlude them, limiting automatic detection of growth stages in a real environment. Based on YOLOV4-Tiny, this study proposes a GCS-YOLOV4-Tiny model that (1) adds squeeze-and-excitation (SE) and spatial pyramid pooling (SPP) modules to improve the accuracy of the model and (2) uses group convolution to reduce the size of the model and achieve faster detection. The proposed GCS-YOLOV4-Tiny model was evaluated on three public fruit datasets. Results show that GCS-YOLOV4-Tiny performs favorably on mAP, Recall, F1-score and Average IoU on the MangoYOLO and Rpi-Tomato datasets. In addition, with the smallest model size of 20.70 MB, the mAP, Recall, F1-score, Precision and Average IoU of GCS-YOLOV4-Tiny reach 93.42 ± 0.44, 91.00 ± 1.87, 90.80 ± 2.59, 90.80 ± 2.77 and 76.94 ± 1.35%, respectively, on the F. margarita dataset. These detection results outperform the state-of-the-art YOLOV4-Tiny model with a 17.45% increase in mAP and a 13.80% increase in F1-score. The proposed model detects different growth stages of fruits effectively and efficiently and can be extended to other fruits and crops for object or disease detection.
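The SE module cited here follows the usual squeeze-excite-scale pattern: global average pooling per channel, two small fully connected layers, and a sigmoid gate that reweights each channel. A minimal NumPy sketch with random stand-in weights (the real module learns them) is:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(fmap, w1, w2):
    """Squeeze-and-excitation over a (C, H, W) feature map.
    Squeeze: global average pool per channel. Excitation: two small fully
    connected layers with a reduction ratio. Scale: per-channel reweighting.
    w1: (C, C//r), w2: (C//r, C); random here, learned in practice."""
    squeeze = fmap.mean(axis=(1, 2))                      # (C,)
    excite = sigmoid(np.maximum(squeeze @ w1, 0.0) @ w2)  # (C,) gates in (0, 1)
    return fmap * excite[:, None, None]

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 4, 4))                     # C=8 channels
w1, w2 = rng.standard_normal((8, 2)), rng.standard_normal((2, 8))
out = se_block(fmap, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the gates lie in (0, 1), the block can only attenuate channels, letting the network emphasize informative ones relative to the rest.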
Affiliation(s)
- Mei-Ling Huang
- Department of Industrial Engineering & Management, National Chin-Yi University of Technology, Taichung, Taiwan
- Yi-Shan Wu
- Department of Industrial Engineering & Management, National Chin-Yi University of Technology, Taichung, Taiwan
17
Jin H, Yu C, Gong Z, Zheng R, Zhao Y, Fu Q. Machine learning techniques for pulmonary nodule computer-aided diagnosis using CT images: A systematic review. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104104] [Indexed: 11/17/2022]
18
Mridha MF, Prodeep AR, Hoque ASMM, Islam MR, Lima AA, Kabir MM, Hamid MA, Watanobe Y. A Comprehensive Survey on the Progress, Process, and Challenges of Lung Cancer Detection and Classification. J Healthc Eng 2022; 2022:5905230. [PMID: 36569180 PMCID: PMC9788902 DOI: 10.1155/2022/5905230] [Received: 07/27/2022] [Revised: 10/17/2022] [Accepted: 11/09/2022] [Indexed: 12/23/2022]
Abstract
Lung cancer is the leading cause of cancer deaths worldwide, and the death rate continues to rise. The chances of recovery improve when lung cancer is detected early. However, because the number of radiologists is limited and they are working overtime, the growing volume of image data makes accurate evaluation difficult. As a result, many researchers have developed automated methods to predict the growth of cancer cells from medical images quickly and accurately. Previous work on computer-aided detection (CADe) and computer-aided diagnosis (CADx) in computed tomography (CT), magnetic resonance imaging (MRI), and X-ray has aimed at effective detection and segmentation of pulmonary nodules, as well as classifying nodules as malignant or benign. Still, no comprehensive review covering all aspects of lung cancer has been done. In this paper, every aspect of lung cancer detection is discussed in detail, including datasets, image preprocessing, segmentation methods, optimal feature extraction and selection methods, evaluation metrics, and classifiers. Finally, the study looks into several lung cancer-related issues with possible solutions.
Affiliation(s)
- M. F. Mridha
- Department of Computer Science and Engineering, American International University Bangladesh, Dhaka 1229, Bangladesh
- Akibur Rahman Prodeep
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- A. S. M. Morshedul Hoque
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Aklima Akter Lima
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Muhammad Mohsin Kabir
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Abdul Hamid
- Department of Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Yutaka Watanobe
- Department of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan
19
Wang H, Tang N, Zhang C, Hao Y, Meng X, Li J. Practice toward standardized performance testing of computer-aided detection algorithms for pulmonary nodule. Front Public Health 2022; 10:1071673. [PMID: 36568775 PMCID: PMC9768365 DOI: 10.3389/fpubh.2022.1071673] [Received: 10/16/2022] [Accepted: 11/21/2022] [Indexed: 12/12/2022]
Abstract
This study aimed to put into practice a standardized protocol for testing the performance of computer-aided detection (CAD) algorithms for pulmonary nodules. A test dataset was established according to a standardized procedure covering data collection, curation and annotation, and six types of pulmonary nodules were manually annotated as the reference standard. Three rules for matching algorithm output with the reference standard were applied and compared: (1) "center hit" (whether the center of the algorithm-highlighted region of interest (ROI) hit the ROI of the reference standard); (2) "center distance" (whether the distance between the algorithm-highlighted ROI center and the reference standard center was below a certain threshold); (3) "area overlap" (whether the overlap between the algorithm-highlighted ROI and the reference standard was above a certain threshold). Performance metrics were calculated and the results were compared among ten algorithms under test (AUTs). The test set currently consists of CT sequences from 593 patients. Under the "center hit" rule, the average recall rate, average precision, and average F1 score of the ten AUTs were 54.68%, 38.19%, and 42.39%, respectively. Correspondingly, the results under the "center distance" rule were 55.43%, 38.69%, and 42.96%, and under the "area overlap" rule 40.35%, 27.75%, and 31.13%. Among the six types of pulmonary nodules, the AUTs showed the highest miss rate for pure ground-glass nodules (59.32% on average), followed by pleural nodules (49.80%) and solid nodules (42.21%). The testing results changed with the specific matching method adopted, and the AUTs showed uneven performance on different types of pulmonary nodules. This centralized testing protocol supports comparison between algorithms with similar intended use and helps evaluate algorithm performance.
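The three matching rules can be sketched as simple predicates over algorithm and reference ROIs. Boxes are (x1, y1, x2, y2); the thresholds and the IoU form of "overlap" below are illustrative assumptions, not the study's values:

```python
def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def center_hit(pred, ref):
    """Rule 1: the center of the algorithm ROI falls inside the reference ROI."""
    cx, cy = center(pred)
    return ref[0] <= cx <= ref[2] and ref[1] <= cy <= ref[3]

def center_distance(pred, ref, thresh=5.0):
    """Rule 2: the distance between ROI centers is below a threshold."""
    (px, py), (rx, ry) = center(pred), center(ref)
    return ((px - rx) ** 2 + (py - ry) ** 2) ** 0.5 < thresh

def area_overlap(pred, ref, thresh=0.5):
    """Rule 3: ROI overlap (here intersection over union) above a threshold."""
    iw = max(0.0, min(pred[2], ref[2]) - max(pred[0], ref[0]))
    ih = max(0.0, min(pred[3], ref[3]) - max(pred[1], ref[1]))
    inter = iw * ih
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(pred) + area(ref) - inter
    return inter / union > thresh if union else False

pred, ref = (0, 0, 4, 4), (1, 1, 5, 5)
print(center_hit(pred, ref), center_distance(pred, ref), area_overlap(pred, ref))
# True True False
```

The example pair illustrates why the metrics shift with the rule: both center-based rules accept this match while the stricter overlap rule rejects it, mirroring the lower recall the study observed under "area overlap".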
Affiliation(s)
- Hao Wang
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Na Tang
- School of Bioengineering, Chongqing University, Chongqing, China
- Chao Zhang
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Ye Hao
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Xiangfeng Meng (corresponding author)
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
- Jiage Li
- Division of Active Medical Device and Medical Optics, Institute for Medical Device Control, National Institutes for Food and Drug Control, Beijing, China
20
Tsivgoulis M, Papastergiou T, Megalooikonomou V. An improved SqueezeNet model for the diagnosis of lung cancer in CT scans. Mach Learn Appl 2022. [DOI: 10.1016/j.mlwa.2022.100399] [Indexed: 10/15/2022]
21
Fan L, Yang W, Tu W, Zhou X, Zou Q, Zhang H, Feng Y, Liu S. Thoracic Imaging in China: Yesterday, Today, and Tomorrow. J Thorac Imaging 2022; 37:366-373. [PMID: 35980382 PMCID: PMC9592175 DOI: 10.1097/rti.0000000000000670] [Indexed: 12/24/2022]
Abstract
Thoracic imaging has been revolutionized through advances in technology and research around the world, and China is no exception. Thoracic imaging in China has progressed from anatomic observation to quantitative and functional evaluation, and from traditional approaches to artificial intelligence. This article reviews the past, present, and future of thoracic imaging in China, in an attempt to establish new, accepted strategies moving forward.
Affiliation(s)
- Li Fan
- Second Affiliated Hospital, Naval Medical University
- Wenjie Yang
- Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Wenting Tu
- Second Affiliated Hospital, Naval Medical University
- Xiuxiu Zhou
- Second Affiliated Hospital, Naval Medical University
- Qin Zou
- Second Affiliated Hospital, Naval Medical University
- Hanxiao Zhang
- Second Affiliated Hospital, Naval Medical University
- Yan Feng
- Second Affiliated Hospital, Naval Medical University
- Shiyuan Liu
- Second Affiliated Hospital, Naval Medical University
22
Wu L, Zhuang J, Chen W, Tang Y, Hou C, Li C, Zhong Z, Luo S. Data augmentation based on multiple oversampling fusion for medical image segmentation. PLoS One 2022; 17:e0274522. [PMID: 36256637 PMCID: PMC9578635 DOI: 10.1371/journal.pone.0274522] [Received: 06/25/2022] [Accepted: 08/28/2022] [Indexed: 11/18/2022]
Abstract
A high-performance medical image segmentation model based on deep learning depends on the availability of large amounts of annotated training data, but obtaining sufficient annotated medical images is not trivial. Moreover, the small size of most tissue lesions, e.g., pulmonary nodules and liver tumours, worsens the class imbalance problem in medical image segmentation. In this study, we propose a multidimensional data augmentation method combining affine transformation and random oversampling. The training data is first expanded by affine transformation combined with random oversampling to improve the prior data distribution of small objects and the diversity of samples. Second, class weight balancing is used to avoid biased networks, since background pixels greatly outnumber lesion pixels; the class imbalance problem is addressed with a weighted cross-entropy loss function during training of the CNN model. The LUNA16 and LiTS17 datasets were used to evaluate our work, with four deep neural network models, Mask R-CNN, U-Net, SegNet and DeepLabv3+, adopted for small tissue lesion segmentation in CT images. The small tissue segmentation performance of all four architectures on both datasets was greatly improved by incorporating the data augmentation strategy. The best pixelwise segmentation performance for both pulmonary nodules and liver tumours was obtained by the Mask R-CNN model, with DSC values of 0.829 and 0.879, respectively, similar to those of state-of-the-art methods.
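The class-weighting idea can be sketched as a weighted cross-entropy over pixel probabilities; the inverse-frequency weights below are a common heuristic, not necessarily the weights used in the study:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Pixelwise weighted cross-entropy for binary segmentation.
    probs: predicted foreground probabilities; labels: 0/1 ground truth.
    Rare lesion pixels receive a larger weight than background pixels."""
    probs = np.clip(probs, 1e-7, 1 - 1e-7)
    w = np.where(labels == 1, class_weights[1], class_weights[0])
    ce = -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    return float((w * ce).mean())

labels = np.array([0, 0, 0, 0, 0, 0, 0, 1])     # heavy class imbalance
probs = np.array([0.1, 0.2, 0.1, 0.1, 0.2, 0.1, 0.1, 0.6])
# Weight each class inversely to its frequency (a common heuristic).
weights = {0: 8 / (2 * 7), 1: 8 / (2 * 1)}
loss = weighted_cross_entropy(probs, labels, weights)
print(round(loss, 4))  # ≈ 0.3249
```

Compared with the unweighted loss on the same predictions, the single lesion pixel now dominates the average, so the optimizer cannot minimize the loss by predicting background everywhere.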
Affiliation(s)
- Liangsheng Wu
- Academy of Interdisciplinary Studies, Guangdong Polytechnic Normal University, Guangzhou, China
- Academy of Contemporary Agriculture Engineering Innovations, Zhongkai University of Agriculture and Engineering, Guangzhou, China
- Institute of Intelligent Manufacturing, Guangdong Academy of Sciences, Guangzhou, China
- Jiajun Zhuang
- Academy of Contemporary Agriculture Engineering Innovations, Zhongkai University of Agriculture and Engineering, Guangzhou, China
- Weizhao Chen
- Academy of Interdisciplinary Studies, Guangdong Polytechnic Normal University, Guangzhou, China
- Yu Tang
- Academy of Interdisciplinary Studies, Guangdong Polytechnic Normal University, Guangzhou, China
- Chaojun Hou
- Academy of Contemporary Agriculture Engineering Innovations, Zhongkai University of Agriculture and Engineering, Guangzhou, China
- Chentong Li
- Institute of Intelligent Manufacturing, Guangdong Academy of Sciences, Guangzhou, China
- Zhenyu Zhong
- Institute of Intelligent Manufacturing, Guangdong Academy of Sciences, Guangzhou, China
- Shaoming Luo
- Academy of Interdisciplinary Studies, Guangdong Polytechnic Normal University, Guangzhou, China
23
Lung Cancer Nodules Detection via an Adaptive Boosting Algorithm Based on Self-Normalized Multiview Convolutional Neural Network. J Oncol 2022; 2022:5682451. [PMID: 36199795 PMCID: PMC9529389 DOI: 10.1155/2022/5682451] [Received: 05/20/2022] [Revised: 06/28/2022] [Accepted: 07/19/2022] [Indexed: 11/18/2022]
Abstract
Lung cancer is the deadliest cancer, killing almost 1.8 million people in 2020, and the number of new cases is rising alarmingly. Early lung cancer manifests itself in the form of nodules in the lungs. Computed tomography (CT) is one of the most widely used techniques for early, noninvasive lung cancer diagnosis. However, the intensive workload of radiologists reading large numbers of scans for nodule detection gives rise to issues such as false detections and missed detections. To overcome these issues, we proposed an innovative strategy, the adaptive boosting self-normalized multiview convolutional neural network (AdaBoost-SNMV-CNN), for lung cancer nodule detection in CT scans. In AdaBoost-SNMV-CNN, an MV-CNN functions as the base learner, while the scaled exponential linear unit (SELU) activation function normalizes the layers by considering their neighbors' information, together with a special dropout technique (α-dropout). The proposed method was trained and tested using the widely used Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) and Early Lung Cancer Action Program (ELCAP) datasets. AdaBoost-SNMV-CNN achieved an accuracy of 92%, a sensitivity of 93%, and a specificity of 92% for lung nodule detection on the LIDC-IDRI dataset; on the ELCAP dataset, the accuracy was 99%, sensitivity 100%, and specificity 98%. AdaBoost-SNMV-CNN outperformed the majority of comparison models in accuracy, sensitivity, and specificity. The multiple views confer good generalization and the ability to learn diverse features of lung nodules, the model architecture is simple, and training requires a minimal computational time of around 102 minutes. We believe that AdaBoost-SNMV-CNN has good accuracy for the detection of lung nodules and anticipate its potential application in the noninvasive clinical diagnosis of lung cancer. This model can be of good assistance to radiologists and will be of interest to researchers designing and developing advanced systems for lung nodule detection toward the goal of noninvasive lung cancer diagnosis.
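The self-normalizing components named in the abstract, the SELU activation and α-dropout, have standard closed forms; the sketch below (an illustration with the constants from the original self-normalizing-networks formulation, not the authors' implementation) shows both:

```python
import numpy as np

# Standard SELU constants (Klambauer et al., "Self-Normalizing Neural Networks").
SCALE = 1.0507009873554805
ALPHA = 1.6732632423543772

def selu(x):
    """Scaled exponential linear unit: pushes activations toward zero mean and
    unit variance, which is what lets a self-normalizing CNN work without
    batch normalization."""
    x = np.asarray(x, dtype=float)
    return SCALE * np.where(x > 0, x, ALPHA * np.expm1(x))

def alpha_dropout(x, drop_rate, rng=None):
    """Alpha-dropout: dropped units are set to the SELU saturation value
    -SCALE*ALPHA, then an affine correction restores the original mean and
    variance of the activations."""
    rng = np.random.default_rng(rng)
    q = 1.0 - drop_rate                  # keep probability
    alpha_p = -SCALE * ALPHA             # saturation value for dropped units
    mask = rng.random(np.shape(x)) < q
    a = (q + alpha_p ** 2 * q * (1 - q)) ** -0.5
    b = -a * (1 - q) * alpha_p
    return a * np.where(mask, x, alpha_p) + b
```

With `drop_rate=0` the affine correction reduces to the identity, so the input passes through unchanged.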
|
24
|
Rib Fracture Detection with Dual-Attention Enhanced U-Net. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:8945423. [PMID: 36035283 PMCID: PMC9410867 DOI: 10.1155/2022/8945423] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Revised: 07/24/2022] [Accepted: 08/02/2022] [Indexed: 11/18/2022]
Abstract
Rib fractures are common injuries caused by chest trauma and may have serious consequences, so it is essential to diagnose them accurately. Low-dose thoracic computed tomography (CT) is commonly used for rib fracture diagnosis, and convolutional neural network (CNN)-based methods have assisted doctors in rib fracture diagnosis in recent years. However, due to the scarcity of rib fracture data and the irregular, varied shapes of rib fractures, it is difficult for CNN-based methods to extract rib fracture features; as a result, they cannot achieve satisfactory accuracy and sensitivity in detecting rib fractures. Inspired by the attention mechanism, we proposed the CFSG U-Net for rib fracture detection. The CFSG U-Net uses the U-Net architecture and is enhanced by a dual-attention module comprising a channel-wise fusion attention module (CFAM) and a spatial-wise group attention module (SGAM). CFAM uses the channel attention mechanism to reweight the feature map along the channel dimension and refine the U-Net's skip connections. SGAM uses a grouping technique to generate spatial attention that adjusts feature maps in the spatial dimension, which allows the spatial attention module to capture more fine-grained semantic information. To evaluate the effectiveness of the proposed methods, we established a rib fracture dataset for this research. The experimental results on our dataset show that the maximum sensitivity of our proposed method is 89.58% and the average FROC score is 81.28%, outperforming existing rib fracture detection methods and attention modules.
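The CFAM reweighting step described above follows the general squeeze-and-excitation pattern: pool each channel to a scalar, pass the result through a small bottleneck, and rescale channels by sigmoid weights. A minimal sketch (with hypothetical bottleneck weights `w1`/`w2`, not the paper's code):

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Squeeze-and-excitation-style channel reweighting.
    feature_map: (C, H, W); w1: (C//r, C); w2: (C, C//r) for reduction ratio r."""
    squeeze = feature_map.mean(axis=(1, 2))          # (C,) per-channel descriptor
    hidden = np.maximum(0.0, w1 @ squeeze)           # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid weights in (0, 1)
    return feature_map * weights[:, None, None], weights
```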
|
25
|
Chen Y, Wang L, Luo R, Wang S, Wang H, Gao F, Wang D. A deep learning model based on dynamic contrast-enhanced magnetic resonance imaging enables accurate prediction of benign and malignant breast lessons. Front Oncol 2022; 12:943415. [PMID: 35936673 PMCID: PMC9353744 DOI: 10.3389/fonc.2022.943415] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Accepted: 06/27/2022] [Indexed: 11/28/2022] Open
Abstract
Objectives The study aims to investigate the value of a convolutional neural network (CNN) based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in predicting the malignancy of breast lesions. Methods We developed a CNN model based on DCE-MRI to characterize breast lesions. Between November 2018 and October 2019, 6,165 slices of 364 lesions (234 malignant, 130 benign) in 364 patients were pooled in the training/validation set. Lesions were semi-automatically segmented by two breast radiologists using ITK-SNAP software. The standard of reference was histologic diagnosis. Algorithm performance was evaluated in an independent testing set of 1,560 slices of 127 lesions in 127 patients using weighted sums of the area under the curve (AUC) scores. Results The area under the receiver operating characteristic (ROC) curve was 0.955 for breast cancer prediction, while the accuracy, sensitivity, and specificity were 90.3%, 96.2%, and 79.0%, respectively, in the slice-based method. In the case-based method, the efficiency of the model changed as the threshold on the number of positive slices was adjusted. When a lesion with three or more positive slices was classified as malignant, the sensitivity was above 90%, with a specificity of nearly 60% and an accuracy higher than 80%. Conclusion The CNN model based on DCE-MRI demonstrated high accuracy for predicting malignancy among breast lesions. This method should be validated in a larger, independent cohort.
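The case-based decision rule in the Results section is simple to state in code; this helper (an illustration of the rule, not the authors' pipeline) makes the slice-count threshold explicit:

```python
def case_prediction(slice_predictions, min_positive=3):
    """Case-based rule from the abstract: a lesion is called malignant when at
    least `min_positive` of its slices are classified positive.
    `slice_predictions` is an iterable of 0/1 per-slice labels."""
    return sum(slice_predictions) >= min_positive
```

Raising or lowering `min_positive` trades sensitivity against specificity, which is the adjustment the abstract describes.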
Affiliation(s)
- Yanhong Chen
- Department of Radiology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Lijun Wang
- Department of Radiology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Ran Luo
- Department of Radiology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Shuang Wang
- Department of Medicine, Beijing Medicinovo Technology Co., Ltd., Beijing, China
| | - Heng Wang
- Department of Medicine, Beijing Medicinovo Technology Co., Ltd., Beijing, China
| | - Fei Gao
- Department of Medicine, Beijing Medicinovo Technology Co., Ltd., Beijing, China
| | - Dengbin Wang
- Department of Radiology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- *Correspondence: Dengbin Wang,
| |
|
26
|
Huang YS, Chou PR, Chen HM, Chang YC, Chang RF. One-stage pulmonary nodule detection using 3-D DCNN with feature fusion and attention mechanism in CT image. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 220:106786. [PMID: 35398579 DOI: 10.1016/j.cmpb.2022.106786] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Revised: 03/28/2022] [Accepted: 03/29/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Lung cancer is the most common cause of cancer-related death in the world. Low-dose computed tomography (LDCT) is a widely used modality in lung cancer detection. A nodule is an abnormal tissue mass that may evolve into lung cancer, so it is crucial to detect nodules at an early stage. However, reviewing LDCT scans for suspicious nodules is a time-consuming task. Recently, computer-aided detection (CADe) systems built on convolutional neural network (CNN) architectures have been shown to be helpful to radiologists. Hence, in this study, a 3-D YOLO-based CADe system, 3-D OSAF-YOLOv3, is proposed for nodule detection in LDCT images. METHODS The proposed CADe system consists of data preprocessing, nodule detection, and a non-maximum suppression (NMS) algorithm. First, data preprocessing, including background elimination, spacing normalization, and volume-of-interest (VOI) extraction, is conducted to remove the non-lung region, normalize the image spacing, and divide the LDCT image into numerous VOIs. The VOIs are then fed into the 3-D OSAF-YOLOv3 model to detect suspicious nodules. The proposed model is constructed by integrating 3-D YOLOv3 with the one-shot aggregation (OSA) module, the receptive field block (RFB), and a feature fusion scheme (FFS). Finally, the NMS algorithm is performed to eliminate duplicated detections generated by the model. RESULTS In this study, the LUNA-16 dataset, comprising 1,186 nodules from 888 LDCT scans, and the competition performance metric (CPM) are used to evaluate our CADe system. In the experiments, the proposed system achieves a sensitivity of 0.962 at a false-positive rate of 8 per scan and a CPM value of 0.905. Moreover, the ablation study results show that the OSA module, the RFB, and the FFS each improve detection performance. Furthermore, compared with other state-of-the-art (SOTA) models, our detection system also achieves higher performance. CONCLUSIONS In this study, a YOLO-based CADe system integrating additional modules and a feature fusion scheme is proposed for nodule detection in LDCT images. The results indicate that the proposed modifications significantly improve detection performance.
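The NMS step named in the Methods is standard greedy suppression extended to three dimensions; below is a sketch under the assumption of axis-aligned boxes given as `(z1, y1, x1, z2, y2, x2)` (an illustration of the general algorithm, not the authors' code):

```python
import numpy as np

def iou_3d(a, b):
    """Intersection-over-union of two axis-aligned 3-D boxes."""
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))
    vol = lambda c: np.prod(c[3:] - c[:3])
    return inter / (vol(a) + vol(b) - inter)

def nms_3d(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box, drop any
    remaining box overlapping it above `iou_thresh`, and repeat."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        ious = np.array([iou_3d(boxes[i], boxes[j]) for j in rest])
        order = rest[ious <= iou_thresh]
    return keep
```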
Affiliation(s)
- Yao-Sian Huang
- Department of Computer Science and Information Engineering, National Changhua University of Education, Changhua, Taiwan
| | - Ping-Ru Chou
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan
| | - Hsin-Ming Chen
- Department of Medical Imaging, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu, Taiwan
| | - Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 10617, Taiwan.
| | - Ruey-Feng Chang
- Department of Computer Science and Information Engineering, National Taiwan University, Taipei 10617, Taiwan; Graduate Institute of Network and Multimedia, National Taiwan University, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; MOST Joint Research Center for AI Technology and All Vista Healthcare, Taipei, Taiwan.
| |
|
27
|
Faisal M, Chaudhury S, Sankaran KS, Raghavendra S, Chitra RJ, Eswaran M, Boddu R. Faster R-CNN Algorithm for Detection of Plastic Garbage in the Ocean: A Case for Turtle Preservation. MATHEMATICAL PROBLEMS IN ENGINEERING 2022; 2022:1-11. [DOI: 10.1155/2022/3639222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/19/2023]
Abstract
Turtles are among the most ancient marine animals alive today. However, their population is threatened with extinction, in part because turtles often eat plastic waste in the ocean whose shape, texture, and color resemble those of jellyfish, so the species needs to be protected and preserved. Computer vision technology, deployed on robots, can contribute to reducing plastic bags and bottles in the ocean. The region-based Convolutional Neural Network (R-CNN) is a state-of-the-art detection approach, and the Faster R-CNN algorithm in particular offers good detection accuracy. In this study, the training images were built from two different object classes, namely plastic bottles and plastic bags. The target is for both objects to be recognized even when other objects are nearby or when the image quality is affected by the color of the seawater. The results show that plastic bags and bottles can be recognized correctly in the images. Of the five color tones tested, object detection is valid on the normal color tone, sepia, bandicoot, and grayscale, whereas it is invalid on black-and-white tones. The tabulated test results show that detection performs best on images with normal coloring and worst on bandicoot. The average accuracy across all image types tested is 96.50%. However, the accuracy still needs to be improved before the approach can be permanently deployed on hardware such as diving robots.
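The color-tone variants tested above are linear transforms of the RGB image; the sketch below uses the standard luma and sepia matrices as assumed stand-ins for the paper's exact tone set:

```python
import numpy as np

def to_grayscale(rgb):
    """ITU-R BT.601 luma weighting; rgb is (H, W, 3) in [0, 1],
    returns an (H, W) intensity image."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def to_sepia(rgb):
    """Classic sepia tone matrix, clipped back to the valid [0, 1] range."""
    m = np.array([[0.393, 0.769, 0.189],
                  [0.349, 0.686, 0.168],
                  [0.272, 0.534, 0.131]])
    return np.clip(rgb @ m.T, 0.0, 1.0)
```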
Affiliation(s)
- Muhammad Faisal
- Department of Computer Science, Sekolah Tinggi Manajemen Informatika Dan Komputer Profesional, A.P Petarani No. 27 Road, Makassar 90231, Indonesia
| | - Sushovan Chaudhury
- Department of Computer Science and Engineering, University of Engineering and Management, Kolkata, India
| | | | - S. Raghavendra
- Department of Information and Communication Technology, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
| | - R. Jothi Chitra
- Department of ECE, Velammal Institute of Technology, Chennai, Tamilnadu, India
| | - Malathi Eswaran
- Department of Computer Technology–PG, Kongu Engineering College, Erode, Tamilnadu, India
| | - Rajasekhar Boddu
- Department of Software Engineering, College of Computing and Informatics, Haramaya University, Dire Dawa, Ethiopia
| |
|
28
|
Min Y, Hu L, Wei L, Nie S. Computer-aided detection of pulmonary nodules based on convolutional neural networks: a review. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac568e] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Accepted: 02/18/2022] [Indexed: 02/08/2023]
Abstract
Computer-aided detection (CADe) technology has been proven to increase the detection rate of pulmonary nodules, which has important clinical significance for the early diagnosis of lung cancer. In this study, we systematically review the latest techniques in pulmonary nodule CADe based on deep learning models with convolutional neural networks in computed tomography images. First, brief descriptions and popular architectures of convolutional neural networks are introduced. Second, several common public databases and evaluation metrics are briefly described. Third, state-of-the-art approaches with excellent performance are selected. Subsequently, we combine the clinical diagnostic process and the traditional four steps of pulmonary nodule CADe into two stages, namely data preprocessing and image analysis. Further, the major optimizations of deep learning models and algorithms are highlighted according to the progressive evaluation effect of each method, and some clinical evidence is added. Finally, the various methods are summarized and compared, and the innovative or valuable contributions of each are expected to guide future research directions. The analysis shows that deep learning-based methods have significantly transformed the detection of pulmonary nodules and that the design of these methods can be inspired by clinical imaging diagnostic procedures. Moreover, focusing on the image analysis stage yields the greatest returns; in particular, optimal results can be achieved by optimizing the steps of candidate nodule generation and false positive reduction. End-to-end methods, with greater operating speed and lower computational cost, are superior to other methods in pulmonary nodule CADe.
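The candidate-generation and false-positive-reduction pipelines reviewed here are typically scored with the competition performance metric (CPM); a minimal sketch of its usual definition, the average sensitivity at seven false-positive-per-scan operating points read off the FROC curve:

```python
import numpy as np

def competition_performance_metric(fp_rates, sensitivities):
    """CPM as used in LUNA16-style evaluation: mean sensitivity at 1/8, 1/4,
    1/2, 1, 2, 4 and 8 false positives per scan, obtained by linear
    interpolation of the (fp_rate, sensitivity) FROC points.
    `fp_rates` must be sorted in increasing order."""
    targets = [0.125, 0.25, 0.5, 1, 2, 4, 8]
    return float(np.mean(np.interp(targets, fp_rates, sensitivities)))
```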
|
29
|
Feng Y, Yang X, Qiu D, Zhang H, Wei D, Liu J. PCXRNet: Pneumonia diagnosis from Chest X-Ray Images using Condense attention block and Multiconvolution attention block. IEEE J Biomed Health Inform 2022; 26:1484-1495. [PMID: 35120015 DOI: 10.1109/jbhi.2022.3148317] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Coronavirus disease 2019 (COVID-19) has become a global pandemic. Many recognition approaches based on convolutional neural networks have been proposed for COVID-19 chest X-ray images, but only a few of them make good use of the potential inter- and intra-relationships of feature maps. Considering this limitation, this paper proposes an attention-based convolutional neural network, called PCXRNet, for the diagnosis of pneumonia from chest X-ray images. To utilize the information in the channels of the feature maps, we added a novel condense attention module (CDSE) that comprises two steps: a condensation step and a squeeze-excitation step. Unlike traditional channel attention modules, CDSE first downsamples the feature map channel by channel to condense the information, followed by the squeeze-excitation step, in which the channel weights are calculated. To make the model pay more attention to informative spatial parts of every feature map, we proposed a multi-convolution spatial attention module (MCSA), which reduces the number of parameters and introduces more nonlinearity. The CDSE and MCSA complement each other in series to tackle the problem of redundancy in feature maps and provide useful information from and between feature maps. We used the ChestXRay2017 dataset to explore the internal structure of PCXRNet, and the proposed network was applied to COVID-19 diagnosis. Additional experiments were conducted on a tuberculosis dataset to verify the effectiveness of PCXRNet. The network achieves an accuracy of 94.619%, recall of 94.753%, precision of 95.286%, and F1-score of 94.996% on the COVID-19 dataset.
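The spatial-attention idea behind MCSA, rescaling each location of the feature map by a learned weight, can be illustrated with a simplified stand-in (channel-pooled mean and max maps combined into a sigmoid weight; this is a generic sketch, not the paper's exact module):

```python
import numpy as np

def spatial_attention(feature_map, alpha=1.0, beta=1.0):
    """Simplified spatial attention: pool across channels with mean and max,
    combine the two (H, W) maps, and rescale every spatial location by a
    sigmoid weight. feature_map: (C, H, W)."""
    avg_map = feature_map.mean(axis=0)
    max_map = feature_map.max(axis=0)
    logits = alpha * avg_map + beta * max_map
    weights = 1.0 / (1.0 + np.exp(-logits))        # (H, W) weights in (0, 1)
    return feature_map * weights[None, :, :], weights
```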
|
30
|
Lightweight convolutional neural network with knowledge distillation for cervical cells classification. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103177] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
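The knowledge-distillation objective referenced in this title is most commonly the temperature-softened KL divergence between teacher and student outputs; a sketch of that standard (Hinton-style) loss, not necessarily this paper's exact formulation:

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax with temperature T."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions, scaled by
    T^2 so the gradient magnitude is roughly independent of the temperature."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * np.log(p / q)))
```

The lightweight (student) network is trained on a weighted sum of this loss and the ordinary cross-entropy on hard labels.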
|
31
|
Morelli R, Clissa L, Amici R, Cerri M, Hitrec T, Luppi M, Rinaldi L, Squarcio F, Zoccoli A. Automating cell counting in fluorescent microscopy through deep learning with c-ResUnet. Sci Rep 2021; 11:22920. [PMID: 34824294 PMCID: PMC8617067 DOI: 10.1038/s41598-021-01929-5] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2021] [Accepted: 11/03/2021] [Indexed: 02/06/2023] Open
Abstract
Counting cells in fluorescent microscopy is a tedious, time-consuming task that researchers have to accomplish to assess the effects of different experimental conditions on biological structures of interest. Although such objects are generally easy to identify, the process of manually annotating cells is sometimes subject to fatigue errors and suffers from arbitrariness due to the operator's interpretation of borderline cases. We propose a Deep Learning approach that exploits a fully-convolutional network in a binary segmentation fashion to localize the objects of interest, with counts then retrieved as the number of detected items. Specifically, we introduce a Unet-like architecture, cell ResUnet (c-ResUnet), and compare its performance against three similar architectures. In addition, we evaluate through ablation studies the impact of two design choices: (i) artifacts oversampling and (ii) weight maps that penalize errors on cell boundaries increasingly with overcrowding. In summary, the c-ResUnet outperforms the competitors with respect to both detection and counting metrics (F1 score = 0.81 and MAE = 3.09, respectively). The introduction of weight maps also contributes to enhanced performance, especially in the presence of clumping cells, artifacts, and confounding biological structures. Posterior qualitative assessment by domain experts corroborates these results, suggesting human-level performance inasmuch as even erroneous predictions seem to fall within the limits of operator interpretation. Finally, we release the pre-trained model and the annotated dataset to foster research in this and related fields.
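The overcrowding-sensitive weight maps evaluated in the ablation follow the spirit of the original U-Net weighting w(x) = 1 + w0·exp(−(d1 + d2)²/(2σ²)), where d1 and d2 are the distances to the two nearest cells; a sketch with point cell centers standing in for full segmentation masks:

```python
import numpy as np

def boundary_weight_map(cell_centers, shape, w0=10.0, sigma=5.0):
    """U-Net-style loss weight map: pixels squeezed between two nearby cells
    get the largest weight, so errors on crowded boundaries cost the most.
    `cell_centers` is a list of at least two (y, x) points; `shape` is (H, W)."""
    ys, xs = np.mgrid[:shape[0], :shape[1]]
    # Distance from every pixel to every cell center, sorted per pixel.
    d = np.sort(np.stack(
        [np.hypot(ys - cy, xs - cx) for cy, cx in cell_centers]), axis=0)
    d1, d2 = d[0], d[1]                      # two nearest cells per pixel
    return 1.0 + w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
```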
Affiliation(s)
- Roberto Morelli
- National Institute for Nuclear Physics, Bologna, Italy. .,Department of Physics and Astronomy, University of Bologna, Bologna, Italy.
| | - Luca Clissa
- National Institute for Nuclear Physics, Bologna, Italy.,Department of Physics and Astronomy, University of Bologna, Bologna, Italy
| | - Roberto Amici
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Matteo Cerri
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Timna Hitrec
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Marco Luppi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Lorenzo Rinaldi
- National Institute for Nuclear Physics, Bologna, Italy.,Department of Physics and Astronomy, University of Bologna, Bologna, Italy
| | - Fabio Squarcio
- Department of Biomedical and Neuromotor Sciences, University of Bologna, Bologna, Italy
| | - Antonio Zoccoli
- National Institute for Nuclear Physics, Bologna, Italy.,Department of Physics and Astronomy, University of Bologna, Bologna, Italy
| |
|
32
|
Faruqui N, Yousuf MA, Whaiduzzaman M, Azad AKM, Barros A, Moni MA. LungNet: A hybrid deep-CNN model for lung cancer diagnosis using CT and wearable sensor-based medical IoT data. Comput Biol Med 2021; 139:104961. [PMID: 34741906 DOI: 10.1016/j.compbiomed.2021.104961] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 10/13/2021] [Accepted: 10/17/2021] [Indexed: 12/25/2022]
Abstract
Lung cancer, also known as pulmonary cancer, is one of the deadliest cancers, yet it is curable if detected at an early stage. At present, the ambiguous features of lung cancer nodules make computer-aided automatic diagnosis a challenging task. To alleviate this, we present LungNet, a novel hybrid deep convolutional neural network-based model trained with CT scans and wearable sensor-based medical IoT (MIoT) data. LungNet consists of a unique 22-layer Convolutional Neural Network (CNN) that combines latent features learned from CT scan images and MIoT data to enhance the diagnostic accuracy of the system. Operated from a centralized server, the network has been trained with a balanced dataset of 525,000 images and classifies lung cancer into five classes with high accuracy (96.81%) and a low false positive rate (3.35%), outperforming similar CNN-based classifiers. Moreover, it classifies stage-1 and stage-2 lung cancers into 1A, 1B, 2A, and 2B sub-classes with 91.6% accuracy and a false positive rate of 7.25%. High predictive capability accompanied by sub-stage classification renders LungNet a promising prospect for developing CNN-based automatic lung cancer diagnosis systems.
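The abstract describes combining latent features from the CT branch and the MIoT branch; a hypothetical late-fusion step (concatenate the two feature vectors, then apply one dense layer; an assumption for illustration, not LungNet's published architecture) might look like:

```python
import numpy as np

def fuse_features(ct_features, miot_features, w, b):
    """Hypothetical latent fusion: concatenate the CT-branch and MIoT-branch
    feature vectors and project them through a single ReLU-activated dense
    layer before classification. w: (out_dim, len(ct)+len(miot)); b: (out_dim,)."""
    z = np.concatenate([ct_features, miot_features])
    return np.maximum(0.0, w @ z + b)
```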
Affiliation(s)
- Nuruzzaman Faruqui
- Institute of Information Technology, Jahangirnagar University, Savar, Dhaka, 1342, Bangladesh.
| | - Mohammad Abu Yousuf
- Institute of Information Technology, Jahangirnagar University, Savar, Dhaka, 1342, Bangladesh.
| | - Md Whaiduzzaman
- Institute of Information Technology, Jahangirnagar University, Savar, Dhaka, 1342, Bangladesh; Queensland University of Technology, 2 George St, Brisbane City, QLD, 4000, Australia.
| | - A K M Azad
- Faculty of Science, Engineering & Technology, Swinburne University of Technology Sydney, Australia.
| | - Alistair Barros
- Queensland University of Technology, 2 George St, Brisbane City, QLD, 4000, Australia.
| | - Mohammad Ali Moni
- School of Health and Rehabilitation Sciences, Faculty of Health and Behavioural Sciences, The University of Queensland, St Lucia, QLD, 4072, Australia.
| |
|
33
|
Lin K, Zhao Y, Tian L, Zhao C, Zhang M, Zhou T. Estimation of municipal solid waste amount based on one-dimension convolutional neural network and long short-term memory with attention mechanism model: A case study of Shanghai. THE SCIENCE OF THE TOTAL ENVIRONMENT 2021; 791:148088. [PMID: 34118670 DOI: 10.1016/j.scitotenv.2021.148088] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Revised: 05/23/2021] [Accepted: 05/24/2021] [Indexed: 05/16/2023]
Abstract
The amount of municipal solid waste (MSW) has a direct influence on MSW management, policy making, and the choice of MSW treatment methods. Machine learning has great potential for prediction, but few studies apply deep learning approaches to forecast the quantity of MSW. The aim of this study is therefore to evaluate the feasibility and practicability of employing supervised learning methods, including Attention, a one-dimensional Convolutional Neural Network (1D-CNN), and Long Short-Term Memory (LSTM), to predict the MSW amount in Shanghai. Integrating 1D-CNN and LSTM with an Attention model, a new structure (1D-CNN-LSTM-Attention, 1D-CLA) is designed to forecast the MSW amount. In addition, the influence of socioeconomic factors on the MSW amount and the structure and layer distribution of Attention, 1D-CNN, LSTM, and 1D-CLA are discussed. The results indicate that the correlation coefficients of Attention, one-dimensional CNN, LSTM, and the proposed 1D-CLA model for predicting MSW in Shanghai are 78%, 86.6%, 90%, and 95.3%, respectively, suggesting that the approach is feasible and practicable. Values of 24 neurons, a dropout rate of 0.01, 50 epochs, and a batch size of 25 best fit 1D-CLA for predicting the amount of MSW in Shanghai. Furthermore, the performance of 1D-CLA is better than that of any single model or two-model combination (R2 = 95.3%), and the contributions of the three component models follow the order LSTM > CNN > Attention.
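The 1D-CNN component extracts local temporal patterns from the waste time series; a minimal "valid" sliding-window operation (cross-correlation, which is what CNN libraries implement as convolution) illustrates the building block:

```python
import numpy as np

def conv1d(series, kernel):
    """'Valid' 1-D convolution (cross-correlation): each output step is the dot
    product of the kernel with a sliding window of the series, extracting a
    local temporal pattern."""
    series = np.asarray(series, dtype=float)
    kernel = np.asarray(kernel, dtype=float)
    n = len(series) - len(kernel) + 1
    return np.array([series[i:i + len(kernel)] @ kernel for i in range(n)])
```

For example, a kernel of `[0.5, 0.5]` acts as a two-step moving average over the series.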
Affiliation(s)
- Kunsen Lin
- The State Key Laboratory of Pollution Control and Resource Reuse, School of Environmental Science and Engineering, Tongji University, 1239 Siping Road, Shanghai 200092, China.
| | - Youcai Zhao
- The State Key Laboratory of Pollution Control and Resource Reuse, School of Environmental Science and Engineering, Tongji University, 1239 Siping Road, Shanghai 200092, China; Shanghai Institute of Pollution Control and Ecological Security, 1515 North Zhongshan Rd. (No. 2), Shanghai 200092, China
| | - Lu Tian
- The State Key Laboratory of Pollution Control and Resource Reuse, School of Environmental Science and Engineering, Tongji University, 1239 Siping Road, Shanghai 200092, China
| | - Chunlong Zhao
- The State Key Laboratory of Pollution Control and Resource Reuse, School of Environmental Science and Engineering, Tongji University, 1239 Siping Road, Shanghai 200092, China
| | - Meilan Zhang
- The State Key Laboratory of Pollution Control and Resource Reuse, School of Environmental Science and Engineering, Tongji University, 1239 Siping Road, Shanghai 200092, China; Shanghai Laogang Solid Waste Disposal Co., Ltd, Shanghai 201302, China
| | - Tao Zhou
- The State Key Laboratory of Pollution Control and Resource Reuse, School of Environmental Science and Engineering, Tongji University, 1239 Siping Road, Shanghai 200092, China; Shanghai Institute of Pollution Control and Ecological Security, 1515 North Zhongshan Rd. (No. 2), Shanghai 200092, China.
| |
|
34
|
Detection of Lung Nodules in Micro-CT Imaging Using Deep Learning. ACTA ACUST UNITED AC 2021; 7:358-372. [PMID: 34449750 PMCID: PMC8396172 DOI: 10.3390/tomography7030032] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Revised: 07/23/2021] [Accepted: 08/02/2021] [Indexed: 02/05/2023]
Abstract
We are developing imaging methods for a co-clinical trial investigating synergy between immunotherapy and radiotherapy, performing longitudinal micro-computed tomography (micro-CT) of mice to detect lung metastasis after treatment. This work explores deep learning (DL) as a fast approach for automated lung nodule detection. We used data from control mice both with and without primary lung tumors. To augment the number of training sets, we simulated data by inserting real, augmented tumors into micro-CT scans. We employed a convolutional neural network (CNN) trained with four competing types of training data: (1) simulated only, (2) real only, (3) simulated and real, and (4) pretraining on simulated followed by real data. We evaluated model performance using precision-recall curves as well as receiver operating characteristic (ROC) curves and their area under the curve (AUC). The AUC is almost identical (0.76-0.77) for all four cases. However, the combination of real and synthetic data was shown to improve precision by 8%. Smaller tumors have lower rates of detection than larger ones, with networks trained on real data showing better performance. Our work suggests that DL is a promising approach for fast and relatively accurate detection of lung tumors in mice.
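The simulated-data strategy, inserting tumors into real micro-CT scans, can be sketched in its simplest form as pasting a spherical region of a chosen intensity into a volume (the authors insert real augmented tumors; the sphere here is an illustrative stand-in):

```python
import numpy as np

def insert_sphere(volume, center, radius, intensity):
    """Data-augmentation sketch: write a spherical 'tumor' of the given
    intensity into a copy of a 3-D volume at `center` = (z, y, x)."""
    z, y, x = np.ogrid[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    mask = ((z - center[0]) ** 2 + (y - center[1]) ** 2
            + (x - center[2]) ** 2) <= radius ** 2
    out = volume.copy()              # leave the original scan untouched
    out[mask] = intensity
    return out
```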
|
35
|
Warin K, Limprasert W, Suebnukarn S, Jinaporntham S, Jantana P. Automatic classification and detection of oral cancer in photographic images using deep learning algorithms. J Oral Pathol Med 2021; 50:911-918. [PMID: 34358372 DOI: 10.1111/jop.13227] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2021] [Revised: 06/14/2021] [Accepted: 07/04/2021] [Indexed: 12/15/2022]
Abstract
BACKGROUND Oral cancer is a deadly disease, among the most common malignant tumors worldwide, and has become an increasingly important public health problem in developing and low-to-middle-income countries. This study aims to use convolutional neural network (CNN) deep learning algorithms to develop an automated classification and detection model for oral cancer screening. METHODS The study included 700 clinical oral photographs, collected retrospectively from an oral and maxillofacial center, which were divided into 350 images of oral squamous cell carcinoma and 350 images of normal oral mucosa. The classification and detection models were created using DenseNet121 and Faster R-CNN, respectively. Four hundred and ninety images were randomly selected as training data, and 70 and 140 images were assigned as validation and testing data, respectively. RESULTS The DenseNet121 classification model achieved a precision of 99%, a recall of 100%, an F1 score of 99%, a sensitivity of 98.75%, a specificity of 100%, and an area under the receiver operating characteristic curve of 99%. The Faster R-CNN detection model achieved a precision of 76.67%, a recall of 82.14%, an F1 score of 79.31%, and an area under the precision-recall curve of 0.79. CONCLUSION The DenseNet121 and Faster R-CNN algorithms proved to offer acceptable potential for the classification and detection of cancerous lesions in oral photographic images.
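The metrics reported in the Results all derive from the four confusion-matrix counts; a small helper (a generic definition, not the authors' evaluation script) makes the relationships explicit:

```python
def classification_metrics(tp, fp, tn, fn):
    """Derive standard binary-classification metrics from raw confusion counts.
    Recall equals sensitivity; specificity is the true-negative rate."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                # = sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}
```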
Affiliation(s)
- Kritsasith Warin
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, Thammasat University, Pathum Thani, Thailand
| | - Wasit Limprasert
- College of Interdisciplinary Studies, Thammasat University, Patum Thani, Thailand
| | | | - Suthin Jinaporntham
- Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Khon Kaen University, Khon Kaen, Thailand
| | | |
|