1. Chen ZH, Zha HL, Yao Q, Zhang WB, Zhou GQ, Li CY. Predicting Pathological Characteristics of HER2-Positive Breast Cancer from Ultrasound Images: A Deep Ensemble Approach. Journal of Imaging Informatics in Medicine 2024. [PMID: 39187701] [DOI: 10.1007/s10278-024-01229-0]
Abstract
The objective was to evaluate the feasibility of using ultrasound images to identify critical prognostic biomarkers of HER2-positive breast cancer (HER2+ BC). This study enrolled 512 female patients with pathologically confirmed HER2-positive breast cancer at our institution from January 2016 to December 2021. Five distinct deep convolutional neural networks (DCNNs) and a deep ensemble (DE) approach were trained to classify axillary lymph node metastasis (ALNM), lymphovascular invasion (LVI), and histological grade (HG). Model performance was evaluated using accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), receiver operating characteristic (ROC) curves, areas under the ROC curve (AUCs), and heat maps. The DeLong test was applied to compare AUCs among models. The deep ensemble approach was the most effective model, with an AUC and accuracy of 0.869 (95% CI: 0.802-0.936) and 69.7% for LVI and 0.973 (95% CI: 0.949-0.998) and 73.8% for HG, providing superior classification performance on imbalanced data (p < 0.05 by the DeLong test). For ALNM, the AUC and accuracy were 0.780 (95% CI: 0.688-0.873) and 77.5%, comparable to the single models. The pretreatment ultrasound-based DE model could hold promise as clinical guidance for predicting the pathological characteristics of patients with HER2-positive breast cancer, thereby facilitating timely adjustments to treatment strategies.
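As a rough illustration of the deep ensemble idea described in this abstract, the sketch below averages the class probabilities of several CNN backbones for one binary label (e.g., LVI). The specific backbones, the soft-voting fusion, and the input size are assumptions made for illustration; they are not the paper's five DCNNs or its fusion rule.

```python
# Hypothetical sketch of a soft-voting deep ensemble for one binary label
# (e.g., LVI present vs. absent); backbones and fusion rule are assumptions.
import torch
import torch.nn as nn
from torchvision import models


class DeepEnsemble(nn.Module):
    """Averages softmax probabilities from several CNN members."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.members = nn.ModuleList([
            models.resnet50(num_classes=num_classes),
            models.densenet121(num_classes=num_classes),
            models.efficientnet_b0(num_classes=num_classes),
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Soft voting: average the per-member class probabilities.
        probs = [member(x).softmax(dim=1) for member in self.members]
        return torch.stack(probs, dim=0).mean(dim=0)


# Usage: a batch of 4 ultrasound crops resized to 224x224 and replicated to
# 3 channels to match the ImageNet-style backbones.
ensemble = DeepEnsemble(num_classes=2)
images = torch.randn(4, 3, 224, 224)
with torch.no_grad():
    lvi_probs = ensemble(images)  # shape (4, 2)
```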
Affiliation(s)
- Zhi-Hui Chen
- Department of Ultrasound, Affiliated Hangzhou First People's Hospital, Westlake University School of Medicine, No. 261, Huansha Road, Shangcheng district, Hangzhou, 310006, China
- Hai-Ling Zha
- Department of Ultrasound, The First Affiliated Hospital of Nanjing Medical University, No. 300 Guangzhou Road, Nanjing, 210029, China
- Qing Yao
- Department of Ultrasound, The First Affiliated Hospital of Nanjing Medical University, No. 300 Guangzhou Road, Nanjing, 210029, China
- Wen-Bo Zhang
- Jiangsu Key Laboratory of Biomaterials and Devices, State Key Laboratory of Digital Medical Engineering, School of Biological Science and Medical Engineering, Southeast University, No. 2 Sipailou Road, Nanjing, 210096, China
- Guang-Quan Zhou
- Jiangsu Key Laboratory of Biomaterials and Devices, State Key Laboratory of Digital Medical Engineering, School of Biological Science and Medical Engineering, Southeast University, No. 2 Sipailou Road, Nanjing, 210096, China
- Cui-Ying Li
- Department of Ultrasound, The First Affiliated Hospital of Nanjing Medical University, No. 300 Guangzhou Road, Nanjing, 210029, China
2. Chang YH, Lin MY, Hsieh MT, Ou MC, Huang CR, Sheu BS. Multiple Field-of-View Based Attention Driven Network for Weakly Supervised Common Bile Duct Stone Detection. IEEE Journal of Translational Engineering in Health and Medicine 2023;11:394-404. [PMID: 37465459] [PMCID: PMC10351611] [DOI: 10.1109/jtehm.2023.3286423]
Abstract
OBJECTIVE: Diseases caused by common bile duct (CBD) stones are life-threatening. Because CBD stones are located in the distal part of the CBD and are relatively small, detecting them from CT scans is a challenging problem in the medical domain. METHODS AND PROCEDURES: We propose a deep learning based, weakly supervised method called the multiple field-of-view based attention driven network (MFADNet) to detect CBD stones from CT scans using only image-level labels. Three dominant modules, a multiple field-of-view encoder, an attention driven decoder, and a classification network, collaborate in the network. The encoder learns multi-scale contextual features, while the decoder, together with the classification network, localizes the CBD stones based on spatial-channel attention. To train the whole network in a weakly supervised, end-to-end manner, four losses are proposed: a foreground loss, a background loss, a consistency loss, and a classification loss. RESULTS: Compared with state-of-the-art weakly supervised methods, the proposed method accurately classifies and localizes CBD stones in both quantitative and qualitative evaluations. CONCLUSION: We propose a novel multiple field-of-view based attention driven network for a new medical application, CBD stone detection from CT scans, in which only image-level labels are required, reducing the labeling burden and helping physicians diagnose CBD stones automatically. The source code is available at https://github.com/nchucvml/MFADNet. CLINICAL IMPACT: Our deep learning method can help physicians localize relatively small CBD stones for the effective diagnosis of CBD stone-related diseases.
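The four weak-supervision losses named in the abstract could be combined roughly as in the sketch below. The concrete loss forms, the pooling of the stone map, and the weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical combination of the four weak-supervision losses; the exact
# definitions and weights in MFADNet may differ.
import torch
import torch.nn.functional as F


def weak_supervision_loss(stone_map, class_logits, image_label,
                          weights=(1.0, 1.0, 0.1, 1.0)):
    """stone_map: (B, 1, H, W) predicted stone map with values in [0, 1].
    class_logits: (B, 2) image-level no-stone / stone logits.
    image_label: (B,) long tensor, 1 if the CT slice contains a stone."""
    w_fg, w_bg, w_con, w_cls = weights
    pooled = stone_map.flatten(1).max(dim=1).values  # strongest response per image

    # Foreground loss: positive images should contain a strong response.
    foreground = F.binary_cross_entropy(pooled, image_label.float())

    # Background loss: negative images should produce an (almost) empty map.
    neg = stone_map[image_label == 0]
    background = (neg ** 2).mean() if neg.numel() > 0 else stone_map.sum() * 0.0

    # Consistency loss: the map should agree with the classifier's confidence.
    stone_prob = class_logits.softmax(dim=1)[:, 1]
    consistency = F.mse_loss(pooled, stone_prob.detach())

    # Classification loss on the image-level label.
    classification = F.cross_entropy(class_logits, image_label)

    return (w_fg * foreground + w_bg * background
            + w_con * consistency + w_cls * classification)
```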
Affiliation(s)
- Ya-Han Chang
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung 402202, Taiwan
- Meng-Ying Lin
- Department of Internal Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 701401, Taiwan
- Ming-Tsung Hsieh
- Department of Internal Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 701401, Taiwan
- Ming-Ching Ou
- Department of Medical Image, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 701401, Taiwan
- Chun-Rong Huang
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung 402202, Taiwan
- Cross College Elite Program, and Academy of Innovative Semiconductor and Sustainable Manufacturing, National Cheng Kung University, Tainan 701401, Taiwan
- Bor-Shyang Sheu
- Department of Internal Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 701401, Taiwan
3. Jhang JY, Tsai YC, Hsu TC, Huang CR, Cheng HC, Sheu BS. Gastric Section Correlation Network for Gastric Precancerous Lesion Diagnosis. IEEE Open Journal of Engineering in Medicine and Biology 2023;5:434-442. [PMID: 38899022] [PMCID: PMC11186652] [DOI: 10.1109/ojemb.2023.3277219]
Abstract
Goal: Diagnosing the corpus-predominant gastritis index (CGI), an early precancerous lesion of the stomach, has been shown to be effective in identifying patients at high risk of gastric cancer for preventive healthcare. However, CGI diagnosis requires invasive biopsies and time-consuming pathological analysis. Methods: We propose a novel gastric section correlation network (GSCNet) for CGI diagnosis from endoscopic images of the three dominant gastric sections: the antrum, body, and cardia. The proposed network consists of two dominant modules, a scaling feature fusion module and a section correlation module. The former extracts scaling fusion features that effectively represent the mucosa of each gastric section under varying viewing angles and scale changes. The latter applies medical prior knowledge through three section correlation losses to model the correlations among gastric sections for CGI diagnosis. Results: The proposed method outperforms competing deep learning methods, achieving a high testing accuracy, sensitivity, and specificity of 0.957, 0.938, and 0.962, respectively. Conclusions: The proposed method is the first to identify patients at high gastric cancer risk with CGI from endoscopic images without invasive biopsies and time-consuming pathological analysis.
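Reading the description above, the per-section processing could be sketched as three CNN branches whose features are fused for the CGI decision, with an auxiliary term encouraging the section features to agree. Everything below, the backbone, the fusion, and the agreement loss, is an illustrative assumption rather than the GSCNet modules themselves.

```python
# Hypothetical three-branch sketch: one encoder per gastric section, fused
# for CGI classification; the agreement loss is only one possible stand-in
# for the paper's section correlation losses.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class ThreeSectionClassifier(nn.Module):
    def __init__(self, num_classes: int = 2, feat_dim: int = 256):
        super().__init__()

        def branch():
            return models.resnet18(num_classes=feat_dim)

        self.antrum, self.body, self.cardia = branch(), branch(), branch()
        self.head = nn.Linear(3 * feat_dim, num_classes)

    def forward(self, antrum_img, body_img, cardia_img):
        feats = [self.antrum(antrum_img), self.body(body_img), self.cardia(cardia_img)]
        logits = self.head(torch.cat(feats, dim=1))
        return logits, feats


def section_agreement_loss(feats):
    # Encourage the three section features of the same patient to align.
    a, b, c = feats
    return 3.0 - (F.cosine_similarity(a, b).mean()
                  + F.cosine_similarity(b, c).mean()
                  + F.cosine_similarity(a, c).mean())
```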
Affiliation(s)
- Jyun-Yao Jhang
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung 402, Taiwan
- Yu-Ching Tsai
- Department of Internal Medicine, Tainan Hospital, Ministry of Health and Welfare, Tainan 701, Taiwan
- Department of Internal Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 701, Taiwan
- Tzu-Chun Hsu
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung 402, Taiwan
- Chun-Rong Huang
- Cross College Elite Program, and Academy of Innovative Semiconductor and Sustainable Manufacturing, National Cheng Kung University, Tainan 701, Taiwan
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung 402, Taiwan
- Hsiu-Chi Cheng
- Department of Internal Medicine, Institute of Clinical Medicine and Molecular Medicine, National Cheng Kung University, Tainan 701, Taiwan
- Department of Internal Medicine, Tainan Hospital, Ministry of Health and Welfare, Tainan 701, Taiwan
- Bor-Shyang Sheu
- Institute of Clinical Medicine and Department of Internal Medicine, National Cheng Kung University Hospital, National Cheng Kung University, Tainan 701, Taiwan
4. Yue G, Han W, Jiang B, Zhou T, Cong R, Wang T. Boundary Constraint Network with Cross Layer Feature Integration for Polyp Segmentation. IEEE J Biomed Health Inform 2022;26:4090-4099. [PMID: 35536816] [DOI: 10.1109/jbhi.2022.3173948]
Abstract
Clinically, accurate polyp localization in endoscopy images plays a vital role in follow-up treatment (e.g., surgical planning). Deep convolutional neural networks (CNNs) offer a promising route to automatic polyp segmentation and avoid the limitations of visual inspection, e.g., subjectivity and overwork. However, most existing CNN-based methods provide unsatisfactory segmentation performance. In this paper, we propose a novel boundary constraint network, namely BCNet, for accurate polyp segmentation. The success of BCNet stems from integrating cross-level context information and leveraging edge information. Specifically, to avoid the drawbacks of simple feature addition or concatenation, BCNet applies a cross-layer feature integration strategy (CFIS) to fuse the features of the three highest layers, yielding better performance. CFIS consists of three attention-driven cross-layer feature interaction modules (ACFIMs) and two global feature integration modules (GFIMs). ACFIM adaptively fuses the context information of the three highest layers via a self-attention mechanism instead of direct addition or concatenation. GFIM integrates the fused information across layers under the guidance of global attention. To obtain accurate boundaries, BCNet introduces a bilateral boundary extraction module that exploits polyp and non-polyp information from the shallow layer collaboratively, based on high-level location information and boundary supervision. Through joint supervision of the polyp area and boundary, BCNet produces more accurate polyp masks. Experimental results on three public datasets show that the proposed BCNet outperforms seven state-of-the-art competing methods in both effectiveness and generalization.
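Attention-driven cross-layer fusion of the kind sketched in this abstract could look roughly like the module below, where a deeper feature map gates a shallower one through channel attention before merging. This is a simplified assumption for illustration, not the ACFIM/GFIM modules of BCNet, and it assumes both maps share the same channel count.

```python
# Hypothetical simplification of attention-driven cross-layer fusion; the
# real ACFIM/GFIM modules in BCNet are more elaborate.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossLayerFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Channel attention derived from the deeper (more semantic) features.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.merge = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, shallow: torch.Tensor, deep: torch.Tensor) -> torch.Tensor:
        # Upsample the deep map to the shallow resolution, compute a channel
        # gate from it, re-weight the shallow features, and merge.
        deep_up = F.interpolate(deep, size=shallow.shape[2:],
                                mode="bilinear", align_corners=False)
        gate = self.channel_gate(deep_up)
        return self.merge(shallow * gate + deep_up)


# Usage with two feature maps of 64 channels at different resolutions.
fusion = CrossLayerFusion(channels=64)
shallow = torch.randn(1, 64, 44, 44)
deep = torch.randn(1, 64, 11, 11)
fused = fusion(shallow, deep)  # (1, 64, 44, 44)
```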
5. Real-Time Multi-Label Upper Gastrointestinal Anatomy Recognition from Gastroscope Videos. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12073306]
Abstract
Esophagogastroduodenoscopy (EGD) is a critical step in the diagnosis of upper gastrointestinal disorders. However, due to inexperience or high workload, EGD performance varies widely among endoscopists. Such variation may result in exams that do not completely cover all anatomical locations of the stomach, creating a risk of missed diagnosis of gastric diseases. Numerous guidelines and expert consensus statements have been proposed to assess and optimize the quality of endoscopy, but mature and robust methods that can be applied accurately in real-time clinical video environments are lacking. In this paper, we define the problem of recognizing anatomical locations in videos as a multi-label recognition task, which is more consistent with the model's learning of image-to-label mapping relationships. We propose a deep learning model (GL-Net) that combines a graph convolutional network (GCN) with long short-term memory (LSTM) networks to extract label features and capture temporal dependencies for accurate real-time anatomical location identification in gastroscopy videos. The evaluation dataset is based on complete videos of real clinical examinations: a total of 29,269 images from 49 videos were collected for model training and validation, and another 1,736 clinical videos were retrospectively analyzed to evaluate the application of the proposed model. Our method achieves 97.1% mean average precision (mAP), 95.5% mean per-class accuracy, and 93.7% average overall accuracy on the multi-label classification task, and processes these videos in real time at 29.9 FPS. In addition, based on our approach, we designed a system that monitors routine EGD videos in detail and performs statistical analysis of endoscopists' operating habits, which can be a useful tool for improving the quality of clinical endoscopy.
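The GCN + LSTM coupling described above could be sketched as follows: an LSTM summarizes per-frame CNN features over time, a graph convolution refines label embeddings, and their dot product gives the multi-label logits. The label graph, dimensions, and fusion here are assumptions in the spirit of GL-Net, not its published architecture.

```python
# Hypothetical GCN + LSTM sketch for multi-label anatomy recognition; the
# label adjacency, dimensions, and fusion are illustrative assumptions.
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        # Plain graph convolution: aggregate neighbours, then project.
        return torch.relu(self.proj(adj @ x))


class GLNetSketch(nn.Module):
    def __init__(self, num_labels: int = 10, frame_dim: int = 512, label_dim: int = 256):
        super().__init__()
        self.lstm = nn.LSTM(frame_dim, label_dim, batch_first=True)
        self.label_emb = nn.Parameter(torch.randn(num_labels, label_dim))
        self.gcn = GCNLayer(label_dim, label_dim)
        # Label co-occurrence graph; here simply uniform (fully connected).
        self.register_buffer("adj", torch.full((num_labels, num_labels), 1.0 / num_labels))

    def forward(self, frame_feats):             # (B, T, frame_dim) per-frame CNN features
        temporal, _ = self.lstm(frame_feats)    # (B, T, label_dim)
        clip_feat = temporal[:, -1]             # summary of the clip so far
        label_nodes = self.gcn(self.label_emb, self.adj)  # (num_labels, label_dim)
        return clip_feat @ label_nodes.t()      # multi-label logits (B, num_labels)


# Usage: logits for 10 anatomical locations from a 16-frame clip.
model = GLNetSketch()
logits = model(torch.randn(2, 16, 512))  # (2, 10)
```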
6. Ai Z, Huang X, Fan Y, Feng J, Zeng F, Lu Y. DR-IIXRN: Detection Algorithm of Diabetic Retinopathy Based on Deep Ensemble Learning and Attention Mechanism. Front Neuroinform 2021;15:778552. [PMID: 35002666] [PMCID: PMC8740273] [DOI: 10.3389/fninf.2021.778552]
Abstract
Diabetic retinopathy (DR) is one of the common chronic complications of diabetes and the most common blinding eye disease. If not treated in time, it can lead to visual impairment and even blindness in severe cases. This article therefore proposes a detection algorithm for diabetic retinopathy based on deep ensemble learning and an attention mechanism. First, image samples were preprocessed and enhanced to obtain high-quality image data. Second, to improve the adaptability and accuracy of the detection algorithm, we constructed a holistic detection model, DR-IIXRN, which consists of Inception V3, InceptionResNet V2, Xception, ResNeXt101, and NASNetLarge. For each base classifier, we modified the network using transfer learning, fine-tuning, and attention mechanisms to improve its ability to detect DR. Finally, a weighted voting algorithm was used to determine which grade (normal, mild, moderate, severe, or proliferative DR) each image belonged to. We also tuned the trained network on hospital data, and real hospital test samples confirmed the advantages of the algorithm in detecting diabetic retinopathy. Experiments show that, compared with traditional single-network detection algorithms, the AUC, accuracy, and recall of the proposed method improve to 95%, 92%, and 92%, respectively, demonstrating the adaptability and correctness of the proposed method.
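The weighted-voting step over the five base classifiers could be sketched as below. The weights shown are placeholders for illustration; the paper derives them from each member's validation performance.

```python
# Hypothetical weighted soft-voting over the five base classifiers; the
# weights shown are placeholders, not the tuned values from the paper.
import numpy as np


def weighted_vote(member_probs, weights):
    """member_probs: list of (num_samples, 5) probability matrices, one per
    base model, over the five DR grades (normal, mild, moderate, severe,
    proliferative). Returns the predicted grade index per sample."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    stacked = np.stack(member_probs, axis=0)                  # (num_models, N, 5)
    combined = (weights[:, None, None] * stacked).sum(axis=0)
    return combined.argmax(axis=1)


# Usage with dummy predictions from the five members.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(5), size=8) for _ in range(5)]
grades = weighted_vote(probs, weights=[1.0, 1.2, 0.9, 1.1, 1.0])
```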
Affiliation(s)
- Zhuang Ai
- Department of Research and Development, Sinopharm Genomics Technology Co., Ltd., Jiangsu, China
- Xuan Huang
- Department of Ophthalmology, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Medical Research Center, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Yuan Fan
- Department of Research and Development, Sinopharm Genomics Technology Co., Ltd., Jiangsu, China
- Jing Feng
- Department of Ophthalmology, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Fanxin Zeng
- Department of Clinical Research Center, Dazhou Central Hospital, Sichuan, China
- Yaping Lu
- Department of Research and Development, Sinopharm Genomics Technology Co., Ltd., Jiangsu, China