1. Wang W, Zhou J, Zhao J, Lin X, Zhang Y, Lu S, Zhao W, Wang S, Tang W, Qu X. Interactively Fusing Global and Local Features for Benign and Malignant Classification of Breast Ultrasound Images. Ultrasound Med Biol 2025; 51:525-534. [PMID: 39709289] [DOI: 10.1016/j.ultrasmedbio.2024.11.014]
Abstract
OBJECTIVE Breast ultrasound (BUS) is used to classify benign and malignant breast tumors, and its automatic classification can reduce subjectivity. However, current convolutional neural networks (CNNs) face challenges in capturing global features, while vision transformer (ViT) networks have limitations in effectively extracting local features. Therefore, this study aimed to develop a deep learning method that enables the interaction and updating of intermediate features between CNN and ViT to achieve high-accuracy BUS image classification. METHODS This study introduced the CNN and transformer multi-stage fusion network (CTMF-Net), consisting of two branches: a CNN branch and a transformer branch. The CNN branch employs a visual geometry group (VGG) network as its backbone, while the transformer branch utilizes ViT as its base network. Both branches were divided into four stages. At the end of each stage, a proposed feature interaction module facilitated feature interaction and fusion between the two branches. Additionally, the convolutional block attention module was employed to enhance relevant features after each stage of the CNN branch. Extensive experiments were conducted against various state-of-the-art deep learning classification methods on three public breast ultrasound datasets (SYSU, UDIAT and BUSI). RESULTS For internal validation, CTMF-Net achieved the highest accuracy of 90.14 ± 0.58% on SYSU and 92.04 ± 4.90% on UDIAT, showing superior classification performance over the other state-of-the-art networks (p < 0.05). For external validation on BUSI, CTMF-Net also performed best, achieving the highest area under the curve (AUC) of 0.8704 when trained on SYSU, a 0.0126 improvement over the second-best VGG attention ViT method. Similarly, when trained on UDIAT, CTMF-Net achieved an AUC of 0.8505, surpassing the second-best global context ViT method by 0.0130. CONCLUSION CTMF-Net outperforms all compared methods and can effectively assist doctors in classifying breast tumors more accurately.
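To make the branch-interaction idea concrete, here is a minimal PyTorch sketch of a CNN/ViT feature-interaction step of the kind this abstract describes. All module and variable names are hypothetical illustrations, not the CTMF-Net implementation, and the exchange shown here (pooled-context mixing) is deliberately simpler than the paper's module.

```python
# Minimal sketch: exchange information between a CNN feature map and ViT tokens.
# Names and shapes are illustrative assumptions, not the CTMF-Net code.
import torch
import torch.nn as nn

class FeatureInteraction(nn.Module):
    """Let a CNN stage and a transformer stage update each other's features."""
    def __init__(self, channels: int, embed_dim: int):
        super().__init__()
        self.cnn_to_tok = nn.Linear(channels, embed_dim)  # project CNN features into token space
        self.tok_to_cnn = nn.Linear(embed_dim, channels)  # project tokens back into channel space

    def forward(self, cnn_feat: torch.Tensor, tokens: torch.Tensor):
        b, c, h, w = cnn_feat.shape
        # Flatten the CNN map into a token sequence (B, H*W, C)
        cnn_tokens = cnn_feat.flatten(2).transpose(1, 2)
        # CNN -> transformer: add pooled, projected CNN context to every ViT token
        ctx = self.cnn_to_tok(cnn_tokens).mean(dim=1, keepdim=True)
        tokens = tokens + ctx
        # Transformer -> CNN: add pooled token context back onto the feature map
        tok_ctx = self.tok_to_cnn(tokens.mean(dim=1))     # (B, C)
        cnn_feat = cnn_feat + tok_ctx.view(b, c, 1, 1)
        return cnn_feat, tokens

# Toy shapes: a stage-2 CNN map and a 197-token ViT sequence
fuse = FeatureInteraction(channels=128, embed_dim=768)
f, t = fuse(torch.randn(2, 128, 28, 28), torch.randn(2, 197, 768))
print(f.shape, t.shape)
```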
Affiliation(s)
- Wenhan Wang
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Jiale Zhou
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Jin Zhao
- Breast and Thyroid Surgery, China-Japan Friendship Hospital, Beijing, China
- Xun Lin
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Yan Zhang
- Department of Gynecology and Obstetrics, Peking University Third Hospital, Beijing, China
- Shan Lu
- Department of Gynecology and Obstetrics, Peking University Third Hospital, Beijing, China
- Wanchen Zhao
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Shuai Wang
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Wenzhong Tang
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Xiaolei Qu
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
2. Luo L, Wang X, Lin Y, Ma X, Tan A, Chan R, Vardhanabhuti V, Chu WC, Cheng KT, Chen H. Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions. IEEE Rev Biomed Eng 2025; 18:130-151. [PMID: 38265911] [DOI: 10.1109/rbme.2024.3357877]
Abstract
Breast cancer has reached the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcome of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise in interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement in deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify future challenges to be addressed. This paper provides an extensive review of deep learning-based breast cancer imaging research, covering studies on mammograms, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods and applications on imaging-based screening, diagnosis, treatment response prediction, and prognosis are elaborated and discussed. Drawn from the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.
3. Bala PM, Palani U. Innovative breast cancer detection using a segmentation-guided ensemble classification framework. Biomed Eng Lett 2025; 15:179-191. [PMID: 39781047] [PMCID: PMC11704121] [DOI: 10.1007/s13534-024-00435-7]
Abstract
Breast cancer (BC) remains a significant global health issue, necessitating innovative methodologies to improve early detection and diagnosis. Although intelligent deep learning models exist, their efficacy is often limited because they overlook small masses, leading to false-positive and false-negative outcomes. This research introduces a novel segmentation-guided classification model designed to increase BC detection accuracy. The model unfolds in two phases, which together form a comprehensive BC diagnostic pipeline. In Phase I, an Attention U-Net is used for BC segmentation: the encoder extracts hierarchical features, while the decoder, supported by attention mechanisms, refines the segmentation to focus on suspicious regions. In Phase II, a novel ensemble approach is introduced for BC classification, involving various feature extraction methods, base classifiers, and a meta-classifier. An ensemble of base classifiers, comprising support vector machine, decision tree, k-nearest neighbor, and artificial neural network models, captures diverse patterns within these features, and a Random Forest meta-classifier amalgamates their outputs, leveraging their collective strengths. The proposed integrated model accurately identifies the malignant, benign, and normal breast tumor classes. The precise region-of-interest analysis from the segmentation phase significantly boosted the classification performance of the ensemble meta-classifier. The model achieved an overall classification accuracy of 99.57% and a segmentation F1-score of 95%, illustrating its high discriminative power in detecting malignant, benign, and normal cases within the ultrasound image dataset. This research contributes to reducing breast tumor morbidity and mortality by facilitating early detection and timely intervention, ultimately supporting better patient outcomes. Supplementary Information: The online version contains supplementary material available at 10.1007/s13534-024-00435-7.
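As a concrete illustration of the Phase II design, the following scikit-learn sketch stacks the four base classifiers named in the abstract under a Random Forest meta-classifier. The synthetic data and hyperparameters are placeholders, not the paper's pipeline.

```python
# Minimal stacking sketch: SVM, decision tree, kNN, and ANN base learners,
# with a Random Forest meta-classifier amalgamating their outputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in features for three classes (malignant, benign, normal)
X, y = make_classification(n_samples=300, n_features=32, n_classes=3,
                           n_informative=8, random_state=0)

base = [
    ("svm", SVC(probability=True, random_state=0)),
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("ann", MLPClassifier(max_iter=500, random_state=0)),
]
# The meta-classifier learns from the base models' class probabilities.
stack = StackingClassifier(estimators=base,
                           final_estimator=RandomForestClassifier(random_state=0),
                           stack_method="predict_proba")
stack.fit(X, y)
print(stack.predict(X[:5]))
```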
Affiliation(s)
- P. Manju Bala
- Computer Science and Engineering, IFET College of Engineering, Villupuram, Tamilnadu, India
- U. Palani
- Electronics and Communication Engineering, IFET College of Engineering, Villupuram, Tamilnadu, India
4. Ahmad I, Alqurashi F. Early cancer detection using deep learning and medical imaging: A survey. Crit Rev Oncol Hematol 2024; 204:104528. [PMID: 39413940] [DOI: 10.1016/j.critrevonc.2024.104528]
Abstract
Cancer, characterized by the uncontrolled division of abnormal cells that harm body tissues, necessitates early detection for effective treatment. Medical imaging is crucial for identifying various cancers, yet its manual interpretation by radiologists is often subjective, labour-intensive, and time-consuming. Consequently, there is a critical need for an automated decision-making process to enhance cancer detection and diagnosis. Previous surveys of cancer detection methods have mostly focused on specific cancers and a limited range of techniques. This study presents a comprehensive survey of cancer detection methods, reviewing 99 research articles collected from the Web of Science, IEEE, and Scopus databases and published between 2020 and 2024. The scope of the study encompasses 12 types of cancer: breast, cervical, ovarian, prostate, esophageal, liver, pancreatic, colon, lung, oral, brain, and skin. We discuss the main components of cancer detection pipelines, including medical imaging data, image preprocessing, segmentation, feature extraction, deep learning and transfer learning methods, and evaluation metrics. We then summarize the datasets and techniques together with research challenges and limitations, and conclude with future directions for enhancing cancer detection techniques.
Affiliation(s)
- Istiak Ahmad
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; School of Information and Communication Technology, Griffith University, Queensland 4111, Australia
- Fahad Alqurashi
- Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
5. Gui H, Jiao H, Li L, Jiang X, Su T, Pang Z. Breast Tumor Detection and Diagnosis Using an Improved Faster R-CNN in DCE-MRI. Bioengineering (Basel) 2024; 11:1217. [PMID: 39768035] [PMCID: PMC11673413] [DOI: 10.3390/bioengineering11121217]
Abstract
AI-based breast cancer detection can improve the sensitivity and specificity of detection, especially for small lesions, which has clinical value for early detection and treatment and thus for reducing mortality. Two-stage detection networks perform well; however, they adopt an imprecise ROI during classification, which can easily include tissue surrounding the tumor. Additionally, fuzzy noise is a significant contributor to false positives. We adopted Faster R-CNN as the architecture, introduced ROI Align to minimize quantization errors and a feature pyramid network (FPN) to extract features at different resolutions, and added a bounding-box quadratic-regression feature-map extraction network plus three convolutional layers to reduce interference from the tumor's surroundings and extract more accurate, deeper feature maps. Our approach outperformed Faster R-CNN, Mask R-CNN, and YOLOv9 in breast cancer detection across 485 internal cases, achieving superior mAP, sensitivity, and false-positive rate ((0.752, 0.950, 0.133) vs. (0.711, 0.950, 0.200) vs. (0.718, 0.880, 0.120) vs. (0.658, 0.680, 0.405)), which represents a 38.5% reduction in false positives compared to manual detection. On a public dataset of 220 cases, our model also demonstrated the best performance. It showed improved sensitivity and specificity, effectively assisting doctors in diagnosing cancer.
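For readers unfamiliar with the building blocks, the stock torchvision Faster R-CNN already combines a ResNet50-FPN backbone with RoIAlign-based pooling, the two ingredients this abstract starts from; a minimal sketch follows. This is the unmodified torchvision model, not the authors' improved network.

```python
# Minimal sketch: torchvision's Faster R-CNN with FPN backbone and RoIAlign heads.
import torch
import torchvision

# Two classes: tumor vs. background. weights=None gives an untrained model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None,
                                                             num_classes=2)
model.eval()
with torch.no_grad():
    # One fake 3-channel slice standing in for a channel-replicated MRI image
    preds = model([torch.rand(3, 512, 512)])
print(preds[0]["boxes"].shape, preds[0]["scores"].shape)
```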
Affiliation(s)
- Haitian Gui
- School of Biomedical Engineering, Shenzhen Campus of Sun Yat-sen University, Shenzhen 518107, China
- Han Jiao
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
- Li Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Department of Medical Imaging, Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou 510060, China
- Xinhua Jiang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Department of Medical Imaging, Sun Yat-sen University Cancer Center (SYSUCC), Guangzhou 510060, China
- Tao Su
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
- Zhiyong Pang
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
6. Yan P, Gong W, Li M, Zhang J, Li X, Jiang Y, Luo H, Zhou H. TDF-Net: Trusted Dynamic Feature Fusion Network for breast cancer diagnosis using incomplete multimodal ultrasound. Inf Fusion 2024; 112:102592. [DOI: 10.1016/j.inffus.2024.102592]
7. Ekta, Bhatia V. Auto-BCS: A Hybrid System for Real-Time Breast Cancer Screening from Pathological Images. J Imaging Inform Med 2024; 37:1752-1766. [PMID: 38429562] [PMCID: PMC11300416] [DOI: 10.1007/s10278-024-01056-3]
Abstract
Breast cancer is recognized as a prominent cause of cancer-related mortality among women globally, emphasizing the critical need for early diagnosis, which improves survival rates. Current breast cancer diagnostic procedures depend on manual assessment of pathological images by medical professionals. However, in remote or underserved regions, the scarcity of expert healthcare resources often compromises diagnostic accuracy. Machine learning holds great promise for early detection, yet existing breast cancer screening algorithms frequently carry significant computational demands, rendering them unsuitable for deployment on low-processing-power mobile devices. In this paper, a real-time automated system, "Auto-BCS", is introduced that significantly enhances the efficiency of early breast cancer screening. The system is structured into three distinct phases. In the initial phase, images undergo pre-processing aimed at noise reduction. Subsequently, feature extraction is carried out using a lightweight, optimized deep learning model, followed by an extreme gradient boosting classifier, strategically employed to optimize overall performance and prevent overfitting in the deep learning model. The system's performance is gauged through essential metrics, including accuracy, precision, recall, F1 score, and inference time. Comparative evaluations against state-of-the-art algorithms affirm that Auto-BCS outperforms existing models in both efficiency and processing speed. Because computational efficiency is prioritized, Auto-BCS is particularly adaptable to low-processing-power mobile devices, signifying its potential to advance breast cancer screening technology.
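A minimal sketch of the lightweight-extractor-plus-gradient-boosting pattern the abstract describes is shown below. The backbone choice (MobileNetV3-Small), input shapes, and labels are assumptions for illustration, not the Auto-BCS model.

```python
# Minimal sketch: a frozen lightweight CNN produces feature vectors, and an
# XGBoost classifier does the final benign/malignant call. All choices here
# (MobileNetV3-Small backbone, shapes, fake labels) are illustrative.
import numpy as np
import torch
import torchvision
from xgboost import XGBClassifier

backbone = torchvision.models.mobilenet_v3_small(weights=None)
backbone.classifier = torch.nn.Identity()  # keep only pooled features
backbone.eval()

with torch.no_grad():
    imgs = torch.rand(32, 3, 224, 224)     # stand-in pathology patches
    feats = backbone(imgs).numpy()          # (32, 576) feature vectors

labels = np.random.randint(0, 2, size=32)   # fake benign/malignant labels
clf = XGBClassifier(n_estimators=50, max_depth=3)
clf.fit(feats, labels)
print(clf.predict(feats[:4]))
```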
Affiliation(s)
- Ekta
- Netaji Subhas University of Technology, Delhi, India
8. Tagnamas J, Ramadan H, Yahyaouy A, Tairi H. Multi-task approach based on combined CNN-transformer for efficient segmentation and classification of breast tumors in ultrasound images. Vis Comput Ind Biomed Art 2024; 7:2. [PMID: 38273164] [PMCID: PMC10811315] [DOI: 10.1186/s42492-024-00155-w]
Abstract
Accurate segmentation of breast ultrasound (BUS) images is crucial for early diagnosis and treatment of breast cancer, yet segmenting lesions in BUS images continues to pose significant challenges because convolutional neural networks (CNNs) struggle to capture long-range dependencies and global context information, and methods relying solely on CNNs have not resolved these issues. Recently, ConvNeXts have emerged as a promising CNN architecture, while transformers have demonstrated outstanding performance in diverse computer vision tasks, including medical image analysis. In this paper, we propose a novel breast lesion segmentation network, CS-Net, that combines the strengths of the ConvNeXt and Swin Transformer models to enhance the U-Net architecture. Our network operates on BUS images and performs segmentation end to end. To address the limitations of CNNs, we design a hybrid encoder that incorporates modified ConvNeXt convolutions and Swin Transformer blocks. To better capture spatial and channel attention in feature maps, we incorporate a Coordinate Attention Module. We also design an Encoder-Decoder Feature Fusion Module that fuses low-level encoder features with high-level semantic decoder features during image reconstruction. Experimental results demonstrate the superiority of our network over state-of-the-art image segmentation methods for BUS lesion segmentation.
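For reference, the following is a minimal PyTorch sketch of a coordinate-attention block in the spirit of the module the abstract incorporates (direction-aware pooling along height and width, then per-direction gating); the layer sizes are illustrative and not taken from CS-Net.

```python
# Minimal coordinate-attention sketch: pool per row and per column, share a
# 1x1 conv, then gate the input with direction-specific attention maps.
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, 1)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, 1)
        self.conv_w = nn.Conv2d(mid, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Direction-aware pooling: one descriptor per row and per column
        x_h = x.mean(dim=3, keepdim=True)                       # (B, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (B, C, W, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                        # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (B, C, 1, W)
        return x * a_h * a_w  # broadcast gating over both directions

out = CoordinateAttention(64)(torch.randn(1, 64, 32, 32))
print(out.shape)
```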
Affiliation(s)
- Jaouad Tagnamas
- Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
- Hiba Ramadan
- Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
- Ali Yahyaouy
- Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
- Hamid Tairi
- Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
9. Ajmal M, Khan MA, Akram T, Alqahtani A, Alhaisoni M, Armghan A, Althubiti SA, Alenezi F. BF2SkNet: best deep learning features fusion-assisted framework for multiclass skin lesion classification. Neural Comput Appl 2023; 35:22115-22131. [DOI: 10.1007/s00521-022-08084-6]
10. Zhang Y, Liu YL, Nie K, Zhou J, Chen Z, Chen JH, Wang X, Kim B, Parajuli R, Mehta RS, Wang M, Su MY. Deep Learning-based Automatic Diagnosis of Breast Cancer on MRI Using Mask R-CNN for Detection Followed by ResNet50 for Classification. Acad Radiol 2023; 30 Suppl 2:S161-S171. [PMID: 36631349] [PMCID: PMC10515321] [DOI: 10.1016/j.acra.2022.12.038]
Abstract
RATIONALE AND OBJECTIVES Diagnosis of breast cancer on MRI requires, first, the identification of suspicious lesions and, second, their characterization to give a diagnostic impression. We implemented Mask Region-based Convolutional Neural Network (Mask R-CNN) to detect abnormal lesions, followed by ResNet50 to estimate the malignancy probability. MATERIALS AND METHODS Two datasets were used. The first set had 176 cases: 103 cancer and 73 benign. The second set had 84 cases: 53 cancer and 31 benign. For detection, the pre-contrast image and the subtraction images of the left and right breasts were used as inputs, so that symmetry could be considered. Each detected suspicious area was characterized by ResNet50 using three DCE parametric maps as inputs. The slice-based results were then combined to give a lesion-based diagnosis. RESULTS In the first dataset, 101 of 103 cancers were detected by Mask R-CNN as suspicious, and 99 of 101 were correctly classified by ResNet50 as cancer, for a sensitivity of 99/103 = 96%. 48 of 73 benign lesions and 131 normal areas were identified as suspicious; following classification by ResNet50, only 16 benign lesions and 16 normal areas remained classified as malignant. The second dataset was used for independent testing; the sensitivity was 43/53 = 81%, and of the 121 identified non-cancerous areas, only 6 of 31 benign lesions and 22 normal tissues were classified as malignant. CONCLUSION ResNet50 eliminated approximately 80% of the false positives detected by Mask R-CNN. Combining Mask R-CNN and ResNet50 has the potential to yield a fully automatic computer-aided diagnostic system for breast cancer on MRI.
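The detection-then-classification pipeline described here can be sketched with stock torchvision components, as below. The score threshold, crop handling, and input normalization are assumptions for illustration; the paper's actual inputs are DCE-MRI subtraction images and parametric maps.

```python
# Minimal two-stage sketch: Mask R-CNN proposes suspicious boxes, then a
# ResNet50 classifies each cropped region as benign vs. malignant.
import torch
import torchvision
from torchvision.models.detection import maskrcnn_resnet50_fpn

detector = maskrcnn_resnet50_fpn(weights=None, num_classes=2).eval()
classifier = torchvision.models.resnet50(weights=None, num_classes=2).eval()

image = torch.rand(3, 256, 256)  # stand-in for stacked DCE-MRI maps
with torch.no_grad():
    det = detector([image])[0]
    for box, score in zip(det["boxes"], det["scores"]):
        x0, y0, x1, y1 = box.int().tolist()
        # Skip weak or degenerate proposals
        if score < 0.5 or (x1 - x0) < 2 or (y1 - y0) < 2:
            continue
        crop = image[:, y0:y1, x0:x1].unsqueeze(0)
        crop = torch.nn.functional.interpolate(crop, size=(224, 224))
        prob = classifier(crop).softmax(dim=1)[0, 1]  # malignancy probability
        print(float(prob))
```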
Affiliation(s)
- Yang Zhang
- Department of Radiological Sciences, University of California, Irvine, California; Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, New Jersey
- Yan-Lin Liu
- Department of Radiological Sciences, University of California, Irvine, California
- Ke Nie
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, New Jersey
- Jiejie Zhou
- Department of Radiology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Zhongwei Chen
- Department of Radiology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Jeon-Hor Chen
- Department of Radiological Sciences, University of California, Irvine, California; Department of Radiology, E-Da Hospital and I-Shou University, Kaohsiung, Taiwan
- Xiao Wang
- Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, New Jersey
- Bomi Kim
- Department of Radiological Sciences, University of California, Irvine, California; Department of Breast Radiology, Ilsan Hospital, Goyang, South Korea
- Ritesh Parajuli
- Department of Medicine, University of California, Irvine, United States
- Rita S Mehta
- Department of Medicine, University of California, Irvine, United States
- Meihao Wang
- Department of Radiology, First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Min-Ying Su
- Department of Radiological Sciences, University of California, Irvine, California; Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
11. Wang L, Pan Z, Liu W, Wang J, Ji L, Shi D. A dual-attention based coupling network for diabetes classification with heterogeneous data. J Biomed Inform 2023; 139:104300. [PMID: 36736446] [DOI: 10.1016/j.jbi.2023.104300]
Abstract
Diabetes Mellitus (DM) is a group of metabolic disorders characterized by hyperglycaemia in the absence of treatment. Classification of DM is essential, as it determines the corresponding diagnosis and treatment. In this paper, we propose a new coupling network with hierarchical dual attention that utilizes heterogeneous data, including Flash Glucose Monitoring (FGM) data and biomarkers from electronic medical records. The long short-term memory-based FGM sub-network extracts the time-dependent features of dynamic FGM sequences, while the biomarkers sub-network learns the features of static biomarkers. The convolutional block attention module (CBAM), which disperses feature weights over the spatial and channel dimensions, is built into the FGM sub-network to accommodate the variability of FGM and allows high-level discriminative features to be extracted more accurately. To better adjust the importance weights of the two sub-networks' characteristics, self-attention is introduced to integrate the features of the heterogeneous data. Based on the dataset provided by Peking University People's Hospital, the proposed method is evaluated through factorial experiments on multi-source heterogeneous data, ablation studies of various attention strategies, time-consumption evaluation and quantitative evaluation. The benchmark tests reveal that the proposed network achieves a type 1 versus type 2 diabetes classification accuracy of 95.835%, with comprehensive performance metrics, namely Matthews correlation coefficient, F1-score and G-mean, of 91.333%, 94.939% and 94.937%, respectively. In the factorial experiments, the proposed method reaches a maximum area under the receiver operating characteristic curve of 0.9428, indicating the effectiveness of the coupling between the nominated sub-networks. In the ablation study, the coupling network with the dual-attention strategy also performs better than variants without attention or with only a single attention strategy. In addition, the model reaches 94.286% accuracy on another dataset, reflecting its robustness when transferred to unseen diabetes data. The experimental results show that the proposed method is feasible for classifying diabetes types. The code is available at https://github.com/bitDalei/Diabetes-Classification-with-Heterogeneous-Data.
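The coupling of a sequence branch with a static-feature branch can be sketched as follows in PyTorch: an LSTM encodes the FGM sequence, an MLP encodes the biomarkers, and the two embeddings are concatenated before classification. The attention modules (CBAM and self-attention) are omitted here, and all sizes are illustrative assumptions rather than the paper's configuration.

```python
# Minimal coupling-network sketch: LSTM branch for FGM time series,
# MLP branch for static biomarkers, fused by concatenation.
import torch
import torch.nn as nn

class CouplingNet(nn.Module):
    def __init__(self, n_biomarkers: int, n_classes: int = 2):
        super().__init__()
        self.fgm = nn.LSTM(input_size=1, hidden_size=32, batch_first=True)
        self.bio = nn.Sequential(nn.Linear(n_biomarkers, 32), nn.ReLU())
        self.head = nn.Linear(64, n_classes)

    def forward(self, fgm_seq, biomarkers):
        _, (h, _) = self.fgm(fgm_seq)     # last hidden state, (1, B, 32)
        z = torch.cat([h[-1], self.bio(biomarkers)], dim=1)
        return self.head(z)

net = CouplingNet(n_biomarkers=10)
# 96 glucose readings per sequence and 10 static biomarkers per patient (toy values)
logits = net(torch.randn(4, 96, 1), torch.randn(4, 10))
print(logits.shape)
```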
Affiliation(s)
- Lei Wang
- Institute of Engineering Medicine, Beijing Institute of Technology, Beijing, China
- Zhenglin Pan
- Department of Endocrinology and Metabolism, Peking University People's Hospital, Beijing, China
- Wei Liu
- Department of Endocrinology and Metabolism, Peking University People's Hospital, Beijing, China
- Junzheng Wang
- MIIT Key Laboratory of Servo Motion Systems Drive and Control, School of Automation, Beijing Institute of Technology, Beijing, China
- Linong Ji
- Department of Endocrinology and Metabolism, Peking University People's Hospital, Beijing, China
- Dawei Shi
- Institute of Engineering Medicine, Beijing Institute of Technology, Beijing, China; MIIT Key Laboratory of Servo Motion Systems Drive and Control, School of Automation, Beijing Institute of Technology, Beijing, China
12. Catteau X, Zindy E, Bouri S, Noël JC, Salmon I, Decaestecker C. Comparison Between Manual and Automated Assessment of Ki-67 in Breast Carcinoma: Test of a Simple Method in Daily Practice. Technol Cancer Res Treat 2023; 22:15330338231169603. [PMID: 37559526] [PMCID: PMC10416654] [DOI: 10.1177/15330338231169603]
Abstract
BACKGROUND In the era of "precision medicine," the availability of high-quality tumor biomarker tests is critical, and tumor proliferation as evaluated with the Ki-67 antibody is one of the most important prognostic factors in breast cancer. However, evaluation of the Ki-67 index is known to suffer from interobserver variability. The goal of this study was to develop an easy, automated, and reliable Ki-67 assessment approach for invasive breast carcinoma in routine practice. PATIENTS AND METHODS A total of 151 biopsies of invasive breast carcinoma were analyzed. The Ki-67 index was evaluated by two pathologists with the MIB-1 antibody, both as a global tumor index and in a hotspot. The same two areas were also analyzed by digital image analysis (DIA). RESULTS For Ki-67 index assessment in the global and hotspot tumor areas, concordance between DIA and the pathologists was very good when DIA was restricted to the pathologist's annotations (0.73 and 0.83, respectively). This was definitely not the case when DIA was unconstrained and automatically established its own global or hotspot area in the whole tissue sample (concordance correlation coefficients between 0.28 and 0.58). CONCLUSIONS The DIA technique demonstrated meaningful concordance with the pathologists' indices when the tumor area was first identified by a pathologist. In contrast, basing Ki-67 assessment on automatic tissue detection was not satisfactory and yielded poor concordance. A representative tumoral zone must therefore be manually selected prior to DIA measurement.
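The agreement statistic reported here, the concordance correlation coefficient, is straightforward to compute; a small NumPy sketch with made-up Ki-67 index pairs follows (Lin's formulation, using population variances).

```python
# Minimal sketch of Lin's concordance correlation coefficient (CCC), the
# agreement measure cited in the abstract, on fabricated Ki-67 % values.
import numpy as np

def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    # CCC = 2*cov / (var_x + var_y + (mean_x - mean_y)^2)
    return 2 * cov / (vx + vy + (mx - my) ** 2)

pathologist = np.array([12.0, 30.0, 55.0, 8.0, 40.0])  # fake Ki-67 % readings
dia = np.array([15.0, 28.0, 60.0, 10.0, 35.0])
print(round(lins_ccc(pathologist, dia), 3))
```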
Affiliation(s)
- Xavier Catteau
- Department of Pathology, Erasme's Hospital, Université Libre de Bruxelles, Brussels, Belgium
- Curepath Laboratory, CHU Tivoli and CHIREC Institute, Jumet, Belgium
- Egor Zindy
- Laboratory of Image Synthesis and Analysis (LISA), Université Libre de Bruxelles, Brussels, Belgium
- Digital Pathology Platform of the CMMI (DIAPath), Université Libre de Bruxelles, Gosselies, Belgium
- Sarah Bouri
- Department of Pathology, Erasme's Hospital, Université Libre de Bruxelles, Brussels, Belgium
- Curepath Laboratory, CHU Tivoli and CHIREC Institute, Jumet, Belgium
- Jean-Christophe Noël
- Department of Pathology, Erasme's Hospital, Université Libre de Bruxelles, Brussels, Belgium
- Curepath Laboratory, CHU Tivoli and CHIREC Institute, Jumet, Belgium
- Isabelle Salmon
- Department of Pathology, Erasme's Hospital, Université Libre de Bruxelles, Brussels, Belgium
- Digital Pathology Platform of the CMMI (DIAPath), Université Libre de Bruxelles, Gosselies, Belgium
- Christine Decaestecker
- Laboratory of Image Synthesis and Analysis (LISA), Université Libre de Bruxelles, Brussels, Belgium
- Digital Pathology Platform of the CMMI (DIAPath), Université Libre de Bruxelles, Gosselies, Belgium