1. Yan P, Gong W, Li M, Zhang J, Li X, Jiang Y, Luo H, Zhou H. TDF-Net: Trusted Dynamic Feature Fusion Network for breast cancer diagnosis using incomplete multimodal ultrasound. Inf Fusion 2024; 112:102592. [DOI: 10.1016/j.inffus.2024.102592]

2. Zhang H, Liu J, Liu W, Chen H, Yu Z, Yuan Y, Wang P, Qin J. MHD-Net: Memory-Aware Hetero-Modal Distillation Network for Thymic Epithelial Tumor Typing With Missing Pathology Modality. IEEE J Biomed Health Inform 2024; 28:3003-3014. [PMID: 38470599] [DOI: 10.1109/jbhi.2024.3376462]
Abstract
Fusing multi-modal radiology and pathology data with complementary information can improve the accuracy of tumor typing. However, pathology data are difficult to collect, being costly and sometimes obtainable only after surgery, which limits the application of multi-modal methods in diagnosis. To address this problem, we propose to learn comprehensively from multi-modal radiology-pathology data during training while using only uni-modal radiology data at testing. Concretely, we propose a Memory-aware Hetero-modal Distillation Network (MHD-Net) that distills well-learned multi-modal knowledge from the teacher to the student with the assistance of memory. In the teacher, to tackle the challenge of hetero-modal feature fusion, we propose a novel spatial-differentiated hetero-modal fusion module (SHFM) that models spatial-specific tumor information correlations across modalities. As only radiology data is accessible to the student, we store pathology features in the proposed contrast-boosted typing memory module (CTMM), which performs type-wise memory updating and stage-wise contrastive memory boosting to ensure the effectiveness and generalization of memory items. In the student, to improve cross-modal distillation, we propose a multi-stage memory-aware distillation (MMD) scheme that reads memory-aware pathology features from the CTMM to compensate for missing modality-specific information. Furthermore, we construct a Radiology-Pathology Thymic Epithelial Tumor (RPTET) dataset containing paired CT and WSI images with annotations. Experiments on the RPTET and CPTAC-LUAD datasets demonstrate that MHD-Net significantly improves tumor typing and outperforms existing multi-modal methods in missing-modality situations.
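
As a rough illustration of the train-with-both, test-with-one idea above, the sketch below pairs a type-wise memory bank with a generic feature-plus-logit distillation loss. It is a minimal PyTorch sketch, not the authors' MHD-Net: the TypingMemory class, the distillation_loss helper, and all shapes are hypothetical stand-ins for the CTMM and MMD components.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TypingMemory(nn.Module):
    """Toy type-wise memory bank: one learnable slot per tumor type,
    read by soft attention over a radiology-derived query."""
    def __init__(self, num_types: int, dim: int):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(num_types, dim))

    def read(self, query: torch.Tensor) -> torch.Tensor:
        # query: (B, dim); recall pathology-like features for the
        # radiology-only student by attending over the memory slots.
        attn = F.softmax(query @ self.slots.t(), dim=-1)  # (B, num_types)
        return attn @ self.slots                          # (B, dim)

def distillation_loss(student_feat, teacher_feat,
                      student_logits, teacher_logits, T=2.0):
    # Generic cross-modal distillation: feature matching plus
    # softened-logit KL divergence from teacher to student.
    feat_term = F.mse_loss(student_feat, teacher_feat.detach())
    kd_term = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                       F.softmax(teacher_logits.detach() / T, dim=-1),
                       reduction="batchmean") * (T * T)
    return feat_term + kd_term
```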

3. Yao J, Zhou W, Zhu Y, Zhou J, Chen X, Zhan W. Predictive nomogram using multimodal ultrasonographic features for axillary lymph node metastasis in early-stage invasive breast cancer. Oncol Lett 2024; 27:95. [PMID: 38288042] [PMCID: PMC10823315] [DOI: 10.3892/ol.2024.14228]
Abstract
Axillary lymph node (ALN) status is a key prognostic factor in patients with early-stage invasive breast cancer (IBC). The present study aimed to develop and validate a nomogram based on multimodal ultrasonographic (MMUS) features for early prediction of axillary lymph node metastasis (ALNM). A total of 342 patients with early-stage IBC (240 in the training cohort and 102 in the validation cohort) who underwent preoperative conventional ultrasound (US), strain elastography, shear wave elastography and contrast-enhanced US examination were included between August 2021 and March 2022. Pathological ALN status was used as the reference standard. Clinicopathological factors and MMUS features were analyzed with uni- and multivariate logistic regression to construct a clinicopathological and conventional US model and an MMUS-based nomogram. The MMUS nomogram was validated with respect to discrimination, calibration, reclassification and clinical usefulness. The US features of tumor size, echogenicity, stiff rim sign, perfusion defect, radial vessel and US Breast Imaging Reporting and Data System category 5 were independent risk predictors for ALNM. The MMUS nomogram based on these factors demonstrated improved calibration and favorable performance [area under the receiver operating characteristic curve (AUC), 0.927 and 0.922 in the training and validation cohorts, respectively] compared with the clinicopathological model (AUC, 0.681 and 0.670, respectively), US-depicted ALN status (AUC, 0.710 and 0.716, respectively) and the conventional US model (AUC, 0.867 and 0.894, respectively). The MMUS nomogram also improved the reclassification ability of the conventional US model for ALNM prediction (net reclassification improvement, 0.296 and 0.288 in the training and validation cohorts, respectively; both P<0.001). Taken together, these findings suggest that the MMUS nomogram may be a promising, non-invasive and reliable approach for predicting ALNM.
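
For readers unfamiliar with the statistics behind such nomograms, the generic recipe is: fit a multivariate logistic regression on the candidate predictors, score discrimination with the AUC, and compare models with a net reclassification improvement. The sketch below illustrates this on randomly generated stand-in data; the feature matrix and the net_reclassification_improvement helper are hypothetical, not the study's code or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Stand-in training cohort: 240 patients, 6 US predictors as named in the
# abstract (tumor size, echogenicity, stiff rim sign, perfusion defect,
# radial vessel, BI-RADS category 5). The real data are not public.
rng = np.random.default_rng(0)
X = rng.random((240, 6))
y = rng.integers(0, 2, 240)

model = LogisticRegression().fit(X, y)
p_new = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, p_new))

def net_reclassification_improvement(y, p_old, p_new, cutoff=0.5):
    """NRI of a new risk model over an old one at a single cutoff."""
    up = (p_new >= cutoff) & (p_old < cutoff)
    down = (p_new < cutoff) & (p_old >= cutoff)
    ev, ne = (y == 1), (y == 0)
    return (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())
```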
Affiliation(s)
- Jiejie Yao: Department of Ultrasound, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, P.R. China
- Wei Zhou: Department of Ultrasound, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, P.R. China
- Ying Zhu: Department of Ultrasound, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, P.R. China
- Jianqiao Zhou: Department of Ultrasound, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, P.R. China
- Xiaosong Chen: Comprehensive Breast Health Center, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, P.R. China
- Weiwei Zhan: Department of Ultrasound, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200025, P.R. China

4. Wang X, Cai S, Wang H, Li J, Yang Y. Deep-learning-based renal artery stenosis diagnosis via multimodal fusion. J Appl Clin Med Phys 2024; 25:e14298. [PMID: 38373294] [DOI: 10.1002/acm2.14298]
Abstract
PURPOSE Diagnosing renal artery stenosis (RAS) is challenging. This research aimed to develop a deep learning model for computer-aided diagnosis of RAS, utilizing multimodal fusion of ultrasound scanning images, spectral waveforms, and clinical information. METHODS A total of 1485 patients who underwent renal artery ultrasonography at Peking Union Medical College Hospital were included, and their color Doppler sonography (CDS) images were classified according to anatomical site and left-right orientation. RAS diagnosis was modeled as a process involving feature extraction and multimodal fusion. Three deep learning (DL) models (ResNeSt, ResNet, and XCiT) were trained on a multimodal dataset consisting of CDS images, spectral waveform images, and basic individual information. The predictive performance of the models was compared with that of a senior physician and evaluated on a test dataset (N = 117 patients) with renal artery angiography results. RESULTS The training and validation datasets contained 3292 and 169 samples, respectively. On the test data (N = 676 samples), the predicted accuracies of all three DL models exceeded 80%, with ResNeSt achieving an accuracy of 83.49% ± 0.45%, a precision of 81.89% ± 3.00%, and a recall of 76.97% ± 3.70%. There was no significant difference between the accuracies of ResNeSt and ResNet (82.84% ± 1.52%), while ResNeSt was more accurate than XCiT (80.71% ± 2.23%, p < 0.05). Against the gold standard, renal artery angiography, the accuracy of the ResNeSt model was 78.25% ± 1.62%, inferior to that of the senior physician (90.09%). In addition, a single-modal model using only spectral waveform images performed worse than the multimodal fusion model. CONCLUSION The DL multimodal fusion model shows promising results in assisting RAS diagnosis.
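
The fusion step described in METHODS amounts to combining per-modality feature vectors with clinical covariates before a classification head. Below is a minimal late-fusion sketch in PyTorch; the LateFusionClassifier class, its feature dimensions, and the branch names are hypothetical illustrations of the general recipe, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    """Illustrative late-fusion head: concatenate features from a CDS
    image branch and a spectral-waveform branch with a clinical vector,
    then classify."""
    def __init__(self, img_dim=512, clin_dim=8, n_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * img_dim + clin_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, cds_feat, wave_feat, clinical):
        # Each input: (batch, dim) features from an image backbone
        # (e.g., a ResNet/ResNeSt encoder) or an encoded clinical vector.
        return self.head(torch.cat([cds_feat, wave_feat, clinical], dim=1))
```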
Affiliation(s)
- Xin Wang: Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Sheng Cai: Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Hongyan Wang: Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Jianchu Li: Department of Ultrasound, State Key Laboratory of Complex Severe and Rare Diseases, Peking Union Medical College Hospital, Chinese Academy of Medical Science and Peking Union Medical College, Beijing, China
- Yuqing Yang: State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, China

5. Lv T, Hong X, Liu Y, Miao K, Sun H, Li L, Deng C, Jiang C, Pan X. AI-powered interpretable imaging phenotypes noninvasively characterize tumor microenvironment associated with diverse molecular signatures and survival in breast cancer. Comput Methods Programs Biomed 2024; 243:107857. [PMID: 37865058] [DOI: 10.1016/j.cmpb.2023.107857]
Abstract
BACKGROUND AND OBJECTIVES The tumor microenvironment (TME) is a determining factor in decision-making and personalized treatment for breast cancer, which is highly intra-tumor heterogeneous (ITH). However, the noninvasive imaging phenotypes of the TME are poorly understood, even though its invasive genotypes are largely known in breast cancer. METHODS Here, we develop an artificial intelligence (AI)-driven approach for noninvasively characterizing the TME by integrating the predictive power of deep learning with the explainability of human-interpretable imaging phenotypes (IMPs) derived from 4D dynamic imaging (DCE-MRI) of 342 breast tumors linked to genomic and clinical data, connecting cancer phenotypes to genotypes. An unsupervised dual-attention deep graph clustering model (DGCLM) is developed to divide the bulk tumor into multiple spatially segregated and phenotypically consistent subclusters. IMPs ranging from spatial to kinetic heterogeneity are leveraged to capture the architecture, interaction, and proximity of intratumoral subclusters. RESULTS We demonstrate that our IMPs correlate with well-known markers of the TME and can also predict distinct molecular signatures, including expression of hormone receptors, epidermal growth factor receptor, and immune checkpoint proteins, with accuracy, reliability, and transparency superior to recent state-of-the-art radiomics and 'black-box' deep learning methods. Moreover, the prognostic value of the IMPs is confirmed by survival analysis. CONCLUSIONS Our approach provides an interpretable, quantitative, and comprehensive perspective for characterizing the TME in a noninvasive and clinically relevant manner.
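
At its core, the subclustering step in METHODS is a graph-based clustering of voxel-level kinetic features followed by geometric statistics over the resulting subclusters. The sketch below illustrates that pipeline with scikit-learn's spectral clustering on random stand-in data; it is not the authors' DGCLM, and every array, size, and parameter here is a hypothetical placeholder.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Stand-in kinetic features per tumor voxel (e.g., wash-in slope, peak
# enhancement) extracted from DCE-MRI, plus voxel coordinates.
rng = np.random.default_rng(0)
feats = rng.random((500, 4))
coords = rng.random((500, 3))

# Graph-based clustering into phenotypically consistent subclusters.
labels = SpectralClustering(n_clusters=3, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0).fit_predict(feats)

# One simple "proximity" phenotype: mean pairwise distance between
# subcluster centroids in physical space.
centroids = np.array([coords[labels == k].mean(axis=0) for k in range(3)])
proximity = np.mean([np.linalg.norm(centroids[i] - centroids[j])
                     for i in range(3) for j in range(i + 1, 3)])
print("Proximity phenotype:", proximity)
```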
Affiliation(s)
- Tianxu Lv: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Xiaoyan Hong: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Yuan Liu: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Kai Miao: Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China
- Heng Sun: Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China
- Lihua Li: Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, Hangzhou 310018, China
- Chuxia Deng: Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China; MOE Frontier Science Centre for Precision Oncology, University of Macau, Macau SAR, China
- Chunjuan Jiang: Department of Nuclear Medicine, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China
- Xiang Pan: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China; Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China; MOE Frontier Science Centre for Precision Oncology, University of Macau, Macau SAR, China; Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai 201318, China

6. Misra S, Yoon C, Kim K, Managuli R, Barr RG, Baek J, Kim C. Deep learning-based multimodal fusion network for segmentation and classification of breast cancers using B-mode and elastography ultrasound images. Bioeng Transl Med 2023; 8:e10480. [PMID: 38023698] [PMCID: PMC10658476] [DOI: 10.1002/btm2.10480]
Abstract
Ultrasonography is one of the key medical imaging modalities for evaluating breast lesions. For differentiating benign from malignant lesions, computer-aided diagnosis (CAD) systems have greatly assisted radiologists by automatically segmenting lesions and identifying their features. Here, we present deep learning (DL)-based methods to segment lesions and then classify them as benign or malignant, utilizing both B-mode and strain elastography (SE-mode) images. We propose a weighted multimodal U-Net (W-MM-U-Net) model for segmenting lesions, in which optimal weights are assigned to the different imaging modalities through a weighted skip-connection method that emphasizes their relative importance. We design a multimodal fusion framework (MFF) on cropped B-mode and SE-mode ultrasound (US) lesion images to classify benign and malignant lesions. The MFF consists of an integrated feature network (IFN) and a decision network (DN). Unlike other recent fusion methods, the proposed MFF can simultaneously learn complementary information from convolutional neural networks (CNNs) trained on B-mode and SE-mode US images. The features from the CNNs are ensembled using the multimodal EmbraceNet model, and the DN classifies the images using those features. The experimental results on real-world clinical data (sensitivity of 100 ± 0.00% and specificity of 94.28 ± 7.00%) showed that the proposed method outperforms existing single- and multimodal methods. The proposed method classified seven benign patients as benign in three of five trials and six malignant patients as malignant in five of five trials. The proposed method could potentially enhance the classification accuracy of radiologists for breast cancer detection in US images.
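
The weighted skip connection mentioned above can be pictured as a learnable convex combination of the two modality encoders' feature maps before they enter a shared decoder. Here is a minimal PyTorch sketch of that one idea; the WeightedSkipFusion module is a hypothetical simplification, not the W-MM-U-Net or EmbraceNet code.

```python
import torch
import torch.nn as nn

class WeightedSkipFusion(nn.Module):
    """Learnable scalar weights deciding how much each modality's
    encoder feature contributes at a given skip connection."""
    def __init__(self, num_modalities: int = 2):
        super().__init__()
        self.w = nn.Parameter(torch.ones(num_modalities))

    def forward(self, bmode_feat: torch.Tensor, se_feat: torch.Tensor):
        # Softmax keeps the combination convex, so training can shift
        # emphasis between B-mode and SE-mode at each skip level.
        w = torch.softmax(self.w, dim=0)
        return w[0] * bmode_feat + w[1] * se_feat
```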
Affiliation(s)
- Sampa Misra: Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, South Korea
- Chiho Yoon: Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, South Korea
- Kwang-Ju Kim: Daegu-Gyeongbuk Research Center, Electronics and Telecommunications Research Institute (ETRI), Daegu, South Korea
- Ravi Managuli: Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Richard G. Barr: Department of Radiology, Northeastern Ohio Medical University, Youngstown, Ohio, USA
- Jongduk Baek: School of Integrated Technology, Yonsei University, Seoul, South Korea
- Chulhong Kim: Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, South Korea

7. Cheng K, Wang J, Liu J, Zhang X, Shen Y, Su H. Public health implications of computer-aided diagnosis and treatment technologies in breast cancer care. AIMS Public Health 2023; 10:867-895. [PMID: 38187901] [PMCID: PMC10764974] [DOI: 10.3934/publichealth.2023057]
Abstract
Breast cancer remains a significant public health issue and a leading cause of cancer-related mortality among women globally. Timely diagnosis and efficient treatment are crucial for enhancing patient outcomes, reducing healthcare burdens and advancing community health. This systematic review, following the PRISMA guidelines, aims to comprehensively synthesize recent advancements in computer-aided diagnosis and treatment for breast cancer. The study covers the latest developments in image analysis and processing, machine learning and deep learning algorithms, multimodal fusion techniques, and radiation therapy planning and simulation. The results of the review suggest that machine learning, augmented and virtual reality, and data mining are the three major research hotspots in breast cancer management. This paper also discusses the challenges and opportunities for future research in this field. The conclusion highlights the importance of computer-aided techniques in the management of breast cancer and summarizes the key findings of the review.
Affiliation(s)
- Kai Cheng: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Jiangtao Wang: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Jian Liu: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Xiangsheng Zhang: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Yuanyuan Shen: Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Hang Su: Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy

8. Zhang H, Meng Z, Ru J, Meng Y, Wang K. Application and prospects of AI-based radiomics in ultrasound diagnosis. Vis Comput Ind Biomed Art 2023; 6:20. [PMID: 37828411] [PMCID: PMC10570254] [DOI: 10.1186/s42492-023-00147-2]
Abstract
Artificial intelligence (AI)-based radiomics has attracted considerable research attention in the field of medical imaging, including ultrasound diagnosis. Ultrasound imaging has unique advantages, such as high temporal resolution, low cost, and no radiation exposure, that make it a preferred modality for many clinical scenarios. This review provides a detailed introduction to the relevant imaging modalities, including brightness-mode (B-mode) ultrasound, color Doppler flow imaging, ultrasound elastography, contrast-enhanced ultrasound, and multi-modal fusion analysis. It then gives an overview of the current status and prospects of AI-based radiomics in ultrasound diagnosis, highlighting its application to static ultrasound images, dynamic ultrasound videos, and multi-modal ultrasound fusion analysis.
Affiliation(s)
- Haoyan Zhang: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100190, China
- Zheling Meng: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100190, China
- Jinyu Ru: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100190, China
- Yaqing Meng: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100190, China
- Kun Wang: CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100190, China

9. Wang KN, Zhuang S, Ran QY, Zhou P, Hua J, Zhou GQ, He X. DLGNet: A dual-branch lesion-aware network with the supervised Gaussian Mixture model for colon lesions classification in colonoscopy images. Med Image Anal 2023; 87:102832. [PMID: 37148864] [DOI: 10.1016/j.media.2023.102832]
Abstract
Colorectal cancer is one of the malignant tumors with the highest mortality, owing to the lack of obvious early symptoms; it is usually discovered only at an advanced stage. Thus, automatic and accurate classification of early colon lesions is of great significance for clinically estimating the status of colon lesions and formulating appropriate diagnostic programs. However, classifying full-stage colon lesions is challenging due to the large inter-class similarities and intra-class differences among the images. In this work, we propose a novel dual-branch lesion-aware neural network (DLGNet) that classifies intestinal lesions by exploring the intrinsic relationships between diseases. It is composed of four modules: a lesion location module, a dual-branch classification module, an attention guidance module, and an inter-class Gaussian loss function. Specifically, the dual-branch module integrates the original image with the lesion patch obtained by the lesion localization module to explore and interact with lesion-specific features from global and local perspectives. The attention guidance module then directs the model toward disease-specific features by learning long-range dependencies through spatial and channel attention. Finally, the inter-class Gaussian loss function is proposed, which models each feature extracted by the network as an independent Gaussian distribution and makes the class-wise clusters more compact, thereby improving the discriminative ability of the network. Extensive experiments on 2568 collected colonoscopy images achieve an average accuracy of 91.50%, surpassing state-of-the-art methods. This study is the first to classify colon lesions at each stage, and it achieves promising colon disease classification performance. To motivate the community, we have made our code publicly available via https://github.com/soleilssss/DLGNet.
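
As a rough rendering of the loss just described, the sketch below treats each class as a diagonal Gaussian with learnable mean and variance and penalizes the negative log-likelihood of each feature under its own class, pulling class-wise clusters tighter. This is a hypothetical simplification, not DLGNet's released code; centers and log_vars would be nn.Parameter tensors of shape (num_classes, feat_dim).

```python
import torch

def interclass_gaussian_loss(features, labels, centers, log_vars):
    # features: (B, D); labels: (B,); centers, log_vars: (C, D) learnable.
    mu = centers[labels]      # per-sample class mean
    lv = log_vars[labels]     # per-sample class log-variance
    # Diagonal-Gaussian negative log-likelihood (constants dropped);
    # minimizing it pulls features toward their own class center.
    nll = 0.5 * (((features - mu) ** 2) / lv.exp() + lv).sum(dim=1)
    return nll.mean()
```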
Affiliation(s)
- Kai-Ni Wang: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Shuaishuai Zhuang: The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Qi-Yong Ran: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Ping Zhou: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Jie Hua: The First Affiliated Hospital of Nanjing Medical University, Nanjing, China; Liyang People's Hospital, Liyang Branch Hospital of Jiangsu Province Hospital, Liyang, China
- Guang-Quan Zhou: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; State Key Laboratory of Digital Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Xiaopu He: The First Affiliated Hospital of Nanjing Medical University, Nanjing, China

10. Meng Z, Zhu Y, Pang W, Tian J, Nie F, Wang K. MSMFN: An Ultrasound Based Multi-Step Modality Fusion Network for Identifying the Histologic Subtypes of Metastatic Cervical Lymphadenopathy. IEEE Trans Med Imaging 2023; 42:996-1008. [DOI: 10.1109/tmi.2022.3222541]
Abstract
Identifying the squamous cell carcinoma and adenocarcinoma subtypes of metastatic cervical lymphadenopathy (CLA) is critical for localizing the primary lesion and initiating timely therapy. B-mode ultrasound (BUS), color Doppler flow imaging (CDFI), ultrasound elastography (UE), and dynamic contrast-enhanced ultrasound provide effective tools for identification, but synthesizing information across these modalities is a challenge for clinicians. Rationally fusing these modalities with clinical information via deep learning to personalize the classification of metastatic CLA therefore requires new exploration. In this paper, we propose the Multi-step Modality Fusion Network (MSMFN) for multi-modal ultrasound fusion to identify histological subtypes of metastatic CLA. MSMFN mines the unique features of each modality and fuses them in a hierarchical three-step process. First, under the guidance of high-level BUS semantic feature maps, information in CDFI and UE is extracted by modality interaction, yielding a static imaging feature vector. Then, a self-supervised feature orthogonalization loss is introduced to help learn modality-heterogeneous features while maintaining maximal task-consistent category distinguishability across modalities. Finally, six encoded clinical variables are utilized to avoid prediction bias and further improve prediction ability. Our three-fold cross-validation experiments demonstrate that our method surpasses clinicians and other multi-modal fusion methods with an accuracy of 80.06%, a true-positive rate of 81.81%, and a true-negative rate of 80.00%. Our network provides a multi-modal ultrasound fusion framework that considers prior clinical knowledge and modality-specific characteristics. Our code will be available at: https://github.com/RichardSunnyMeng/MSMFN.
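
The feature orthogonalization idea can be illustrated compactly: drive the cosine similarity between two modalities' embeddings toward zero so each branch retains heterogeneous information. The sketch below is an assumed, simplified rendering of such a loss, not the MSMFN implementation (the released code is at the GitHub link above).

```python
import torch
import torch.nn.functional as F

def orthogonalization_loss(feat_a: torch.Tensor, feat_b: torch.Tensor):
    # feat_a, feat_b: (B, D) embeddings from two ultrasound modalities.
    a = F.normalize(feat_a, dim=1)
    b = F.normalize(feat_b, dim=1)
    # Squared cosine similarity per sample; zero when the two
    # modality embeddings are orthogonal.
    return (a * b).sum(dim=1).pow(2).mean()
```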

11. Ji J, Wan T, Chen D, Wang H, Zheng M, Qin Z. A deep learning method for automatic evaluation of diagnostic information from multi-stained histopathological images. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109820]

12. Evaluation of the Property of Axillary Lymph Nodes and Analysis of Lymph Node Metastasis Factors in Breast Cancer by Ultrasound Elastography. Comput Math Methods Med 2022; 2022:8066289. [PMID: 35693263] [PMCID: PMC9187465] [DOI: 10.1155/2022/8066289]
Abstract
This research aimed to investigate the role of ultrasound elastography (UE) in evaluating the properties of axillary lymph nodes in breast cancer and to explore the factors influencing lymph node metastasis in breast cancer patients. Routine ultrasonography (US) and UE were performed for 160 breast cancer patients: 80 cases with lymph node metastasis and 80 without. The sensitivity, specificity, and accuracy of the two ultrasound examinations were compared, receiver operating characteristic (ROC) curves were drawn, and the factors influencing lymph node metastasis were analyzed. The sensitivity, specificity, and accuracy of UE in diagnosing axillary lymph nodes in breast cancer were 97.22%, 95.45%, and 96.25%, respectively, markedly higher than those of routine US (P < 0.05). Cortical thickness, blood flow grade, blood flow type, and elasticity score had a large impact on axillary lymph node metastasis. Cortical thickness ≥ 3 cm, blood flow of grade 2-3, peripheral/mixed blood flow type, and an elasticity score of 3-4 points were risk factors for lymph node metastasis in breast cancer patients. UE was effective in diagnosing the properties of lymph nodes and could evaluate lymph node metastasis in breast cancer patients, showing good clinical value.
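
The study's headline numbers are standard diagnostic-test metrics. The sketch below shows how sensitivity, specificity, accuracy, and an ROC curve are computed from a confusion matrix; the labels and scores are random stand-ins, as the study's patient-level data are not public.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_curve, auc

# Stand-in labels (1 = metastasis, 0 = no metastasis) and risk scores
# (e.g., derived from elasticity score and cortical thickness).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 160)
scores = rng.random(160)
y_pred = (scores >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / len(y_true)
fpr, tpr, _ = roc_curve(y_true, scores)
print(f"Sens {sensitivity:.2%}  Spec {specificity:.2%}  "
      f"Acc {accuracy:.2%}  AUC {auc(fpr, tpr):.3f}")
```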

13. Huang R, Ying Q, Lin Z, Zheng Z, Tan L, Tang G, Zhang Q, Luo M, Yi X, Liu P, Pan W, Wu J, Luo B, Ni D. Extracting keyframes of Breast Ultrasound Video using Deep Reinforcement Learning. Med Image Anal 2022; 80:102490. [DOI: 10.1016/j.media.2022.102490]