1. Xu Z, Zhong S, Gao Y, Huo J, Xu W, Huang W, Huang X, Zhang C, Zhou J, Dan Q, Li L, Jiang Z, Lang T, Xu S, Lu J, Wen G, Zhang Y, Li Y. Optimizing breast lesions diagnosis and decision-making with a deep learning fusion model integrating ultrasound and mammography: a dual-center retrospective study. Breast Cancer Res 2025; 27:80. [PMID: 40369585] [PMCID: PMC12080162] [DOI: 10.1186/s13058-025-02033-6]
Abstract
BACKGROUND: This study aimed to develop a BI-RADS network (DL-UM) that integrates ultrasound (US) and mammography (MG) images and to explore its performance in improving breast lesion diagnosis and management when collaborating with radiologists, particularly in cases with discordant US and MG Breast Imaging Reporting and Data System (BI-RADS) classifications.
METHODS: We retrospectively collected image data from 1283 women with breast lesions who underwent both US and MG within one month at two medical centres and categorised them into concordant and discordant BI-RADS classification subgroups. We developed a DL-UM network by integrating US and MG images, as well as DL networks using US (DL-U) or MG (DL-M) alone. The performance of the DL-UM network for breast lesion diagnosis was evaluated using ROC curves and compared with the DL-U and DL-M networks in the external testing dataset. The diagnostic performance of radiologists with different levels of experience assisted by the DL-UM network was also evaluated.
RESULTS: In the external testing dataset, DL-UM outperformed DL-M in sensitivity (0.962 vs. 0.833, P = 0.016) and DL-U in specificity (0.667 vs. 0.526, P = 0.030). In the discordant BI-RADS classification subgroup, DL-UM achieved an AUC of 0.910. The diagnostic performance of four radiologists improved when collaborating with the DL-UM network: AUCs increased from 0.674-0.772 to 0.889-0.910, specificities rose from 52.1-75.0% to 81.3-87.5%, and unnecessary biopsies were reduced by 16.1-24.6%, with the largest gains for junior radiologists. Meanwhile, DL-UM outputs and heatmaps enhanced radiologists' trust and improved interobserver agreement between US and MG, with the weighted kappa increasing from 0.048 to 0.713 (P < 0.05).
CONCLUSIONS: The DL-UM network, integrating complementary US and MG features, assisted radiologists in improving breast lesion diagnosis and management, potentially reducing unnecessary biopsies.
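The abstract describes a dual-modality fusion network without detailing its architecture. As a rough, hypothetical illustration of the general idea of fusing a US encoder with an MG encoder, a minimal late-fusion classifier in PyTorch might look like the sketch below; the backbone choice, the fusion-by-concatenation rule, and all names are assumptions, not the authors' DL-UM implementation.

```python
# Hypothetical sketch of a dual-modality (US + MG) fusion classifier.
# This is NOT the published DL-UM network; it only illustrates the
# late-fusion idea described in the abstract.
import torch
import torch.nn as nn
from torchvision import models

class FusionNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two independent CNN encoders, one per modality (assumed backbones).
        self.us_encoder = models.resnet18(weights=None)
        self.mg_encoder = models.resnet18(weights=None)
        feat_dim = self.us_encoder.fc.in_features
        self.us_encoder.fc = nn.Identity()  # strip the classification heads
        self.mg_encoder.fc = nn.Identity()
        # Fuse by concatenating the two feature vectors, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, us_img: torch.Tensor, mg_img: torch.Tensor) -> torch.Tensor:
        f_us = self.us_encoder(us_img)   # (B, feat_dim)
        f_mg = self.mg_encoder(mg_img)   # (B, feat_dim)
        fused = torch.cat([f_us, f_mg], dim=1)
        return self.classifier(fused)

# Usage: logits = FusionNet()(us_batch, mg_batch), where both batches
# are preprocessed image tensors of shape (B, 3, H, W).
```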
Affiliation(s)
- Ziting Xu: Department of Ultrasound, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Shengzhou Zhong: School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Yang Gao: Department of Ultrasound, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Jiekun Huo: Department of Imaging, Zengcheng Branch of Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Weimin Xu: Department of Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Weijun Huang: Department of Ultrasound, First People's Hospital of Foshan, Foshan, 510515, Guangdong, China
- Xiaomei Huang: Department of Medical Imaging, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Chifa Zhang: Department of Ultrasound, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Jianqiao Zhou: Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, 200025, China
- Qing Dan: Shenzhen Key Laboratory for Drug Addiction and Medication Safety, Department of Ultrasound, Institute of Ultrasonic Medicine, Peking University Shenzhen Hospital, Shenzhen Peking University-The Hong Kong University of Science and Technology Medical Center, Shenzhen, 518036, China
- Lian Li: Department of Ultrasound, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Zhouyue Jiang: Department of Ultrasound, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Ting Lang: Department of Ultrasound, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Shuying Xu: Department of Ultrasound, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Jiayin Lu: Department of Ultrasound, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Ge Wen: Department of Medical Imaging, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Yu Zhang: School of Biomedical Engineering, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Yingjia Li: Department of Ultrasound, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
2. Hou C, Huang T, Hu K, Ye Z, Guo J, Zhou H. Artificial intelligence-assisted multimodal imaging for the clinical applications of breast cancer: a bibliometric analysis. Discov Oncol 2025; 16:537. [PMID: 40237900] [PMCID: PMC12003249] [DOI: 10.1007/s12672-025-02329-1]
Abstract
BACKGROUND: Breast cancer (BC) remains a leading cause of cancer-related mortality among women globally, with increasing incidence rates posing significant public health challenges. Recent advances in artificial intelligence (AI) have transformed medical imaging, particularly by enhancing diagnostic accuracy and prognostic capabilities for BC. While multimodal imaging combined with AI has shown remarkable potential, a comprehensive analysis is needed to synthesize current research and identify emerging trends and hotspots in AI-assisted multimodal imaging for BC.
METHODS: This study analyzed literature on AI-assisted multimodal imaging in BC published from January 2010 to November 2024 in the Web of Science Core Collection (WoSCC). Bibliometric and visualization tools, including VOSviewer, CiteSpace, and the Bibliometrix R package, were employed to assess countries, institutions, authors, journals, and keywords.
RESULTS: A total of 80 publications were included, revealing a steady increase in annual publications and citations, with a notable surge after 2021. China led in productivity and citations, Germany exhibited the highest average citations per publication, and the United States demonstrated the strongest international collaboration. The most productive institution was Radboud University Nijmegen, and the most productive author was Xi Xiaoming. Articles appeared most frequently in Computerized Medical Imaging and Graphics, with Qian XJ's 2021 study on BC risk prediction under deep learning frameworks being the most influential. Keyword analysis highlighted themes such as "breast cancer", "classification", and "deep learning".
CONCLUSIONS: AI-assisted multimodal imaging has significantly advanced BC diagnosis and management, with promising future developments. This study offers researchers a comprehensive overview of current frameworks and emerging research directions. Future efforts are expected to focus on improving diagnostic precision and refining therapeutic strategies through optimized imaging techniques and AI algorithms, with international collaboration emphasized to drive innovation and clinical translation.
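For readers who want to reproduce the basic descriptive statistics of such a bibliometric study, a minimal Python sketch over a Web of Science tab-delimited export might look like the following. The file name and any cleaning rules are assumptions; the column names follow the standard WoS field tags (PY for publication year, DE for author keywords), but the authors' actual workflow used VOSviewer, CiteSpace, and Bibliometrix rather than this script.

```python
# Hypothetical sketch: annual publication counts and keyword frequencies
# from a Web of Science tab-delimited export. "wos_export.txt" is an
# assumed file name; PY and DE are standard WoS field tags.
from collections import Counter
import csv

def bibliometric_summary(path: str):
    years, keywords = Counter(), Counter()
    with open(path, encoding="utf-8-sig", newline="") as fh:
        for record in csv.DictReader(fh, delimiter="\t"):
            if record.get("PY"):
                years[record["PY"]] += 1
            # WoS separates author keywords with semicolons.
            for kw in (record.get("DE") or "").split(";"):
                kw = kw.strip().lower()
                if kw:
                    keywords[kw] += 1
    return years, keywords

years, keywords = bibliometric_summary("wos_export.txt")
print(sorted(years.items()))        # publications per year
print(keywords.most_common(10))     # top-10 keyword hotspots
```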
Affiliation(s)
- Chenke Hou: Hangzhou TCM Hospital of Zhejiang Chinese Medical University (Hangzhou Hospital of Traditional Chinese Medicine), Hangzhou, 310007, Zhejiang, China
- Ting Huang: Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
- Keke Hu: Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
- Zhifeng Ye: Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
- Junhua Guo: Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
- Heran Zhou: Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
3. Xie W, Liu Z, Zhao L, Wang M, Tian J, Liu J. DIFLF: A domain-invariant features learning framework for single-source domain generalization in mammogram classification. Comput Methods Programs Biomed 2025; 261:108592. [PMID: 39813937] [DOI: 10.1016/j.cmpb.2025.108592]
Abstract
BACKGROUND AND OBJECTIVE: Single-source domain generalization (SSDG) aims to generalize a deep learning (DL) model trained on one source dataset to multiple unseen datasets. This matters for the clinical application of DL-based models to breast cancer screening, where a model is commonly developed at one institution and then deployed at others. One challenge of SSDG is alleviating domain shifts using only a single source dataset.
METHODS: The present study proposes a domain-invariant features learning framework (DIFLF) for single-source domain generalization, comprising a style-augmentation module (SAM) and a content-style disentanglement module (CSDM). SAM applies two different color-jitter transforms, converting each mammogram in the source domain into two synthesized mammograms with new styles; this greatly increases the feature diversity of the source domain and reduces overfitting of the trained model. CSDM contains three feature disentanglement units, which extract domain-invariant content (DIC) features by disentangling them from domain-specific style (DSS) features, reducing the influence of domain shifts arising from different feature distributions. Our code is openly available on GitHub (https://github.com/85675/DIFLF).
RESULTS: DIFLF was trained on a private dataset (PRI1) and tested first on another private dataset (PRI2), whose feature distribution is similar to PRI1's, and then on two public datasets (INbreast and MIAS), whose feature distributions differ greatly from PRI1's. The experimental results show that DIFLF classifies mammograms well on the unseen target datasets: accuracy and AUC were 0.917 and 0.928 on PRI2, 0.882 and 0.893 on INbreast, and 0.767 and 0.710 on MIAS, respectively.
CONCLUSIONS: DIFLF can alleviate the influence of domain shifts using only one source dataset, and it achieves strong mammogram classification performance even on unseen datasets whose feature distributions differ greatly from the training dataset.
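The SAM component described above amounts to applying two distinct photometric transforms to each source image. A minimal sketch, assuming torchvision's ColorJitter as the style transform, is shown below; the parameter values are illustrative assumptions, and the published code at the GitHub link above is the authoritative implementation.

```python
# Hypothetical sketch of the style-augmentation idea (SAM): each source
# mammogram is mapped to two differently "styled" copies via two distinct
# color-jitter transforms. Parameter values are assumptions, not those of
# the published DIFLF code.
import torch
from torchvision import transforms

# Two different jitter configurations produce two new "styles".
style_a = transforms.ColorJitter(brightness=0.4, contrast=0.4)
style_b = transforms.ColorJitter(brightness=0.8, contrast=0.2, saturation=0.2)

def style_augment(mammogram: torch.Tensor):
    """Return two style-augmented views of one mammogram tensor (C, H, W)."""
    return style_a(mammogram), style_b(mammogram)

# Usage: both views share the anatomy (content) of the input but differ
# in photometric style, tripling the style diversity seen in training.
view_a, view_b = style_augment(torch.rand(3, 224, 224))
```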
Affiliation(s)
- Wanfang Xie: School of Engineering Medicine, Beihang University, Beijing 100191, PR China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing 100191, PR China
- Zhenyu Liu: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing 100190, PR China; University of Chinese Academy of Sciences, Beijing 100080, PR China
- Litao Zhao: School of Engineering Medicine, Beihang University, Beijing 100191, PR China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing 100191, PR China
- Meiyun Wang: Department of Medical Imaging, Henan Provincial People's Hospital & People's Hospital of Zhengzhou University, Zhengzhou 450003, PR China
- Jie Tian: School of Engineering Medicine, Beihang University, Beijing 100191, PR China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing 100191, PR China
- Jiangang Liu: School of Engineering Medicine, Beihang University, Beijing 100191, PR China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing 100191, PR China; Beijing Engineering Research Center of Cardiovascular Wisdom Diagnosis and Treatment, Beijing 100029, PR China
4. Wang W, Zhou J, Zhao J, Lin X, Zhang Y, Lu S, Zhao W, Wang S, Tang W, Qu X. Interactively Fusing Global and Local Features for Benign and Malignant Classification of Breast Ultrasound Images. Ultrasound Med Biol 2025; 51:525-534. [PMID: 39709289] [DOI: 10.1016/j.ultrasmedbio.2024.11.014]
Abstract
OBJECTIVE: Breast ultrasound (BUS) is used to classify benign and malignant breast tumors, and automatic classification can reduce subjectivity. However, current convolutional neural networks (CNNs) struggle to capture global features, while vision transformer (ViT) networks are limited in extracting local features. This study therefore aimed to develop a deep learning method that enables intermediate features to interact and update between a CNN and a ViT, achieving high-accuracy BUS image classification.
METHODS: This study introduces the CNN and transformer multi-stage fusion network (CTMF-Net), which consists of a CNN branch and a transformer branch. The CNN branch employs a visual geometry group (VGG) network as its backbone, while the transformer branch uses ViT as its base network. Both branches are divided into four stages; at the end of each stage, a proposed feature interaction module facilitates feature interaction and fusion between the two branches. Additionally, a convolutional block attention module (CBAM) enhances relevant features after each stage of the CNN branch. Extensive experiments compared CTMF-Net against state-of-the-art deep learning classification methods on three public breast ultrasound datasets (SYSU, UDIAT, and BUSI).
RESULTS: In internal validation, CTMF-Net achieved the highest accuracy of 90.14 ± 0.58% on SYSU and 92.04 ± 4.90% on UDIAT, showing superior classification performance over the other state-of-the-art networks (p < 0.05). In external validation on BUSI, CTMF-Net achieved the highest area under the curve (AUC) of 0.8704 when trained on SYSU, a 0.0126 improvement over the second-best method (visual geometry group attention ViT); when trained on UDIAT, it achieved an AUC of 0.8505, surpassing the second-best method (global context ViT) by 0.0130.
CONCLUSION: The proposed CTMF-Net outperforms all compared methods and can effectively assist doctors in classifying breast tumors more accurately.
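As a schematic illustration of the stage-wise, two-branch interaction pattern described above, the hypothetical PyTorch skeleton below exchanges features between two branches after each of four stages. The real CTMF-Net uses VGG and ViT backbones plus CBAM; here the branch internals are simple stand-in layers, so every layer choice and the project-and-add interaction rule are assumptions, not the published architecture.

```python
# Hypothetical skeleton of a two-branch network with a feature-interaction
# step after each stage, loosely following the CTMF-Net description above.
import torch
import torch.nn as nn

class InteractionModule(nn.Module):
    """Exchange information between the two branches' feature maps."""
    def __init__(self, dim: int):
        super().__init__()
        self.to_cnn = nn.Conv2d(dim, dim, kernel_size=1)
        self.to_vit = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, f_cnn, f_vit):
        # Each branch is updated with a projection of the other branch.
        return f_cnn + self.to_cnn(f_vit), f_vit + self.to_vit(f_cnn)

class TwoBranchNet(nn.Module):
    def __init__(self, stages: int = 4, dim: int = 64, num_classes: int = 2):
        super().__init__()
        self.stem_cnn = nn.Conv2d(3, dim, 3, padding=1)
        self.stem_vit = nn.Conv2d(3, dim, 3, padding=1)
        # Stand-ins for the VGG stages and ViT blocks of the real network.
        self.cnn_stages = nn.ModuleList(
            nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU())
            for _ in range(stages))
        self.vit_stages = nn.ModuleList(
            nn.Sequential(nn.Conv2d(dim, dim, 1), nn.GELU())
            for _ in range(stages))
        self.interact = nn.ModuleList(InteractionModule(dim) for _ in range(stages))
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, x):
        f_cnn, f_vit = self.stem_cnn(x), self.stem_vit(x)
        for cnn_s, vit_s, inter in zip(self.cnn_stages, self.vit_stages, self.interact):
            f_cnn, f_vit = inter(cnn_s(f_cnn), vit_s(f_vit))
        # Global-average-pool both branches, concatenate, and classify.
        pooled = torch.cat([f_cnn.mean(dim=(2, 3)), f_vit.mean(dim=(2, 3))], dim=1)
        return self.head(pooled)

logits = TwoBranchNet()(torch.rand(2, 3, 64, 64))  # -> shape (2, 2)
```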
Affiliation(s)
- Wenhan Wang: School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Jiale Zhou: School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Jin Zhao: Breast and Thyroid Surgery, China-Japan Friendship Hospital, Beijing, China
- Xun Lin: School of Computer Science and Engineering, Beihang University, Beijing, China
- Yan Zhang: Department of Gynecology and Obstetrics, Peking University Third Hospital, Beijing, China
- Shan Lu: Department of Gynecology and Obstetrics, Peking University Third Hospital, Beijing, China
- Wanchen Zhao: School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Shuai Wang: School of Computer Science and Engineering, Beihang University, Beijing, China
- Wenzhong Tang: School of Computer Science and Engineering, Beihang University, Beijing, China
- Xiaolei Qu: School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
5. Yan P, Gong W, Li M, Zhang J, Li X, Jiang Y, Luo H, Zhou H. TDF-Net: Trusted Dynamic Feature Fusion Network for breast cancer diagnosis using incomplete multimodal ultrasound. Inf Fusion 2024; 112:102592. [DOI: 10.1016/j.inffus.2024.102592]
6. Rajdeo P, Aronow B, Surya Prasath VB. Deep learning-based multimodal spatial transcriptomics analysis for cancer. Adv Cancer Res 2024; 163:1-38. [PMID: 39271260] [PMCID: PMC11431148] [DOI: 10.1016/bs.acr.2024.08.001]
Abstract
The advent of deep learning (DL) and multimodal spatial transcriptomics (ST) has revolutionized cancer research, offering unprecedented insights into tumor biology. This book chapter explores the integration of DL with ST to advance cancer diagnostics, treatment planning, and precision medicine. DL, a subset of artificial intelligence, employs neural networks to model complex patterns in vast datasets, significantly enhancing diagnostic and treatment applications. In oncology, convolutional neural networks excel in image classification, segmentation, and tumor volume analysis, essential for identifying tumors and optimizing radiotherapy. The chapter also delves into multimodal data analysis, which integrates genomic, proteomic, imaging, and clinical data to offer a holistic understanding of cancer biology. Leveraging diverse data sources, researchers can uncover intricate details of tumor heterogeneity, microenvironment interactions, and treatment responses. Examples include integrating MRI data with genomic profiles for accurate glioma grading and combining proteomic and clinical data to uncover drug resistance mechanisms. DL's integration with multimodal data enables comprehensive and actionable insights for cancer diagnosis and treatment. The synergy between DL models and multimodal data analysis enhances diagnostic accuracy, personalized treatment planning, and prognostic modeling. Notable applications include ST, which maps gene expression patterns within tissue contexts, providing critical insights into tumor heterogeneity and potential therapeutic targets. In summary, the integration of DL and multimodal ST represents a paradigm shift towards more precise and personalized oncology. This chapter elucidates the methodologies and applications of these advanced technologies, highlighting their transformative potential in cancer research and clinical practice.
Affiliation(s)
- Pankaj Rajdeo: Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Bruce Aronow: Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, College of Medicine, University of Cincinnati, Cincinnati, OH, United States
- V B Surya Prasath: Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, College of Medicine, University of Cincinnati, Cincinnati, OH, United States; Department of Biomedical Informatics, College of Medicine, University of Cincinnati, Cincinnati, OH, United States; Department of Computer Science, University of Cincinnati, Cincinnati, OH, United States