1. Park JH, Lim JH, Kim S, Heo J. A Multi-label Artificial Intelligence Approach for Improving Breast Cancer Detection With Mammographic Image Analysis. In Vivo 2024; 38:2864-2872. [PMID: 39477432; PMCID: PMC11535944; DOI: 10.21873/invivo.13767]
Abstract
BACKGROUND/AIM: Breast cancer remains a major global health concern. This study aimed to develop a deep-learning-based artificial intelligence (AI) model that predicts the malignancy of mammographic lesions and reduces unnecessary biopsies in patients with breast cancer.
PATIENTS AND METHODS: In this retrospective study, we used deep-learning-based AI to predict whether lesions in mammographic images are malignant. The AI model learned the malignancy as well as the margins and shapes of mass lesions through multi-label training, similar to the diagnostic process of a radiologist. We used the Curated Breast Imaging Subset of the Digital Database for Screening Mammography. This dataset includes annotations for mass lesions, and we developed an algorithm to determine the exact location of the lesions for accurate classification. A multi-label classification approach enabled the model to recognize malignancy and lesion attributes.
RESULTS: Our multi-label classification model, trained on both lesion shape and margin, demonstrated superior performance compared with models trained solely on malignancy. Gradient-weighted class activation mapping analysis revealed that, by considering the margin and shape, the model assigned higher importance to border areas and analyzed pixels more uniformly when classifying malignant lesions. This approach improved diagnostic accuracy, particularly in challenging cases, such as American College of Radiology Breast Imaging-Reporting and Data System categories 3 and 4, where breast density exceeded 50%.
CONCLUSION: This study highlights the potential of AI in improving the diagnosis of breast cancer. By integrating advanced techniques and modern neural network designs, we developed an AI model with enhanced accuracy for mammographic image analysis.
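The multi-label training described above (predicting malignancy jointly with mass margin and shape) can be sketched as a shared backbone with one output head per label group. The following PyTorch fragment is a minimal illustration; the backbone choice, label category counts, and loss weights are assumptions rather than the authors' exact configuration.

```python
# Minimal multi-label training sketch (illustrative; not the authors' exact model).
import torch
import torch.nn as nn
from torchvision import models

class MultiLabelMammoNet(nn.Module):
    """Shared CNN backbone with separate heads for malignancy, margin, and shape."""
    def __init__(self, n_margin=5, n_shape=4):
        super().__init__()
        backbone = models.resnet50(weights=None)   # any ImageNet-style CNN works here
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # keep the pooled feature vector
        self.backbone = backbone
        self.malignancy_head = nn.Linear(feat_dim, 1)      # benign vs. malignant
        self.margin_head = nn.Linear(feat_dim, n_margin)   # assumed margin categories
        self.shape_head = nn.Linear(feat_dim, n_shape)     # assumed shape categories

    def forward(self, x):
        f = self.backbone(x)
        return self.malignancy_head(f), self.margin_head(f), self.shape_head(f)

model = MultiLabelMammoNet()
bce = nn.BCEWithLogitsLoss()   # binary malignancy label
ce = nn.CrossEntropyLoss()     # categorical margin/shape labels

def multi_label_loss(outputs, y_malig, y_margin, y_shape, w=(1.0, 0.5, 0.5)):
    out_m, out_mar, out_sh = outputs
    return (w[0] * bce(out_m.squeeze(1), y_malig.float())
            + w[1] * ce(out_mar, y_margin)
            + w[2] * ce(out_sh, y_shape))

# One dummy step to show the training interface.
x = torch.randn(2, 3, 224, 224)
loss = multi_label_loss(model(x), torch.tensor([0, 1]), torch.tensor([1, 3]), torch.tensor([0, 2]))
loss.backward()
```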
Affiliation(s)
- Jun Hyeong Park: Department of Radiation Oncology, Ajou University School of Medicine, Suwon, Republic of Korea; Ajou Healthcare AI Research Center, Suwon, Republic of Korea; Department of Biomedical Sciences, Graduate School of Ajou University, Suwon, Republic of Korea
- June Hyuck Lim: Department of Radiation Oncology, Ajou University School of Medicine, Suwon, Republic of Korea; Ajou Healthcare AI Research Center, Suwon, Republic of Korea
- Seonhwa Kim: Department of Radiation Oncology, Ajou University School of Medicine, Suwon, Republic of Korea; Ajou Healthcare AI Research Center, Suwon, Republic of Korea
- Jaesung Heo: Department of Radiation Oncology, Ajou University School of Medicine, Suwon, Republic of Korea; Ajou Healthcare AI Research Center, Suwon, Republic of Korea
2. De Marco P, Ricciardi V, Montesano M, Cassano E, Origgi D. Transfer learning classification of suspicious lesions on breast ultrasound: is there room to avoid biopsies of benign lesions? Eur Radiol Exp 2024; 8:121. [PMID: 39466515; PMCID: PMC11519280; DOI: 10.1186/s41747-024-00480-y]
Abstract
BACKGROUND: Breast cancer (BC) is the most common malignancy in women and the second leading cause of cancer death. In recent years, there has been strong development of artificial intelligence (AI) applications in medical imaging for several tasks. Our aim was to evaluate the potential of transfer learning with convolutional neural networks (CNNs) in discriminating suspicious breast lesions on ultrasound images.
METHODS: Transfer learning performances of five different CNNs (Inception V3, Xception, DenseNet121, VGG16, and ResNet50) were evaluated on a public and on an institutional dataset (526 and 392 images, respectively), customizing the top layers for the specific task. Institutional images were contoured by an expert radiologist and processed to feed the CNNs for training and testing. Post-imaging biopsies were used as the reference standard for classification. The area under the receiver operating characteristic curve (AUROC) was used to assess diagnostic performance.
RESULTS: Networks performed very well on the public dataset (AUROC 0.938-0.996). Direct generalization to the institutional dataset resulted in lower performance (maximum AUROC 0.676); however, when tested on BI-RADS 3 and BI-RADS 5 lesions only, results improved (maximum AUROC 0.792). Good results were achieved on the institutional dataset (AUROC 0.759-0.818) and, when selecting a threshold of 2% for classification, a sensitivity of 0.983 was obtained for three of the five CNNs, with the potential to spare biopsy in 15.3%-18.6% of patients.
CONCLUSION: Transfer learning with CNNs may achieve high sensitivity and might be used as a support tool in managing suspicious breast lesions on ultrasound images.
RELEVANCE STATEMENT: Transfer learning is a powerful technique to exploit the performance of well-trained CNNs for image classification. In a clinical scenario, it might be useful for the management of suspicious breast lesions on breast ultrasound, potentially sparing biopsy in a non-negligible number of patients.
KEY POINTS: Properly trained CNNs with transfer learning are highly effective in differentiating benign and malignant lesions on breast ultrasound. Setting clinical thresholds increased sensitivity. CNNs might be useful as support tools in managing suspicious lesions on breast ultrasound.
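A minimal transfer-learning sketch in the spirit of this study is shown below: a pretrained ImageNet backbone is frozen, the top layers are replaced for the binary benign/malignant task, and a low decision threshold (the paper reports using 2%) favours sensitivity. The specific backbone (ResNet50), head layout, and freezing policy are illustrative assumptions.

```python
# Transfer-learning sketch for benign/malignant breast-ultrasound classification
# (illustrative; the paper evaluated several CNNs with customized top layers).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():          # freeze the pretrained convolutional base
    p.requires_grad = False
model.fc = nn.Sequential(             # custom top layers for the new binary task
    nn.Linear(model.fc.in_features, 256),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(256, 1),
)

# ... train model.fc on the contoured ultrasound images ...

# At inference, a low malignancy threshold (2% in the paper) trades specificity
# for very high sensitivity, so only confidently benign lesions would skip biopsy.
THRESHOLD = 0.02
with torch.no_grad():
    prob_malignant = torch.sigmoid(model(torch.randn(1, 3, 224, 224))).item()
suggest_biopsy = prob_malignant >= THRESHOLD
```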
Affiliation(s)
- Paolo De Marco: Medical Physics Unit, IEO European Institute of Oncology IRCCS, Milan, Italy
- Valerio Ricciardi: Medical Physics Unit, IEO European Institute of Oncology IRCCS, Milan, Italy; Medical Physics School, University of Milan, Milan, Italy
- Marta Montesano: Breast Imaging Division, Radiology Department, IEO European Institute of Oncology IRCCS, Milan, Italy
- Enrico Cassano: Breast Imaging Division, Radiology Department, IEO European Institute of Oncology IRCCS, Milan, Italy
- Daniela Origgi: Medical Physics Unit, IEO European Institute of Oncology IRCCS, Milan, Italy
3. Chowa SS, Azam S, Montaha S, Bhuiyan MRI, Jonkman M. Improving the Automated Diagnosis of Breast Cancer with Mesh Reconstruction of Ultrasound Images Incorporating 3D Mesh Features and a Graph Attention Network. J Imaging Inform Med 2024; 37:1067-1085. [PMID: 38361007; PMCID: PMC11573965; DOI: 10.1007/s10278-024-00983-5]
Abstract
This study proposes a novel approach for classifying breast tumors in ultrasound images as benign or malignant by converting the region of interest (ROI) of a 2D ultrasound image into a 3D representation using the Point-E system, allowing for in-depth analysis of the underlying characteristics. Instead of relying solely on 2D imaging features, this method extracts 3D mesh features that describe tumor patterns more precisely. Ten informative and medically relevant mesh features are extracted and assessed with two feature selection techniques. Additionally, a feature pattern analysis is conducted to determine the significance of each feature. A feature table with dimensions of 445 × 12 is generated and a graph is constructed, considering the rows as nodes and the relationships among the nodes as edges. The Spearman correlation coefficient method is employed to identify edges between strongly connected nodes (with a correlation score greater than or equal to 0.7), resulting in a graph containing 56,054 edges and 445 nodes. A graph attention network (GAT) is proposed for the classification task and the model is optimized with an ablation study, resulting in a highest accuracy of 99.34%. The performance of the proposed model is compared with ten machine learning (ML) models and a one-dimensional convolutional neural network, whose test accuracies range from 73% to 91%. Our novel 3D mesh-based approach, coupled with the GAT, yields promising performance for breast tumor classification, outperforming traditional models, and has the potential to reduce the time and effort of radiologists by providing a reliable diagnostic system.
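The graph-construction step (feature-table rows as nodes, edges between rows whose Spearman correlation is at least 0.7) can be reproduced in a few lines. The sketch below uses random stand-in features because the actual mesh-feature table is not included here; the resulting node features and edge list would then be passed to a graph attention network.

```python
# Sketch of the graph-construction step: rows of the feature table become nodes,
# and an edge links two nodes whose Spearman correlation is >= 0.7
# (illustrative re-creation; the real mesh-feature values are not available here).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
features = rng.random((445, 12))          # stand-in for the 445 x 12 mesh-feature table

n = features.shape[0]
edges = []
for i in range(n):
    for j in range(i + 1, n):
        rho, _ = spearmanr(features[i], features[j])
        if rho >= 0.7:                    # keep only strongly connected node pairs
            edges.append((i, j))
            edges.append((j, i))          # undirected edge stored as two directed edges

edge_index = np.array(edges, dtype=np.int64).T    # shape (2, num_edges)
print(edge_index.shape)
# The node features and edge_index can then be fed to a graph attention network,
# e.g. a GATConv layer from PyTorch Geometric, for benign/malignant classification.
```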
Affiliation(s)
- Sadia Sultana Chowa: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sami Azam: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sidratul Montaha: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Md Rahad Islam Bhuiyan: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Mirjam Jonkman: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
4. Carriero A, Groenhoff L, Vologina E, Basile P, Albera M. Deep Learning in Breast Cancer Imaging: State of the Art and Recent Advancements in Early 2024. Diagnostics (Basel) 2024; 14:848. [PMID: 38667493; PMCID: PMC11048882; DOI: 10.3390/diagnostics14080848]
Abstract
The rapid advancement of artificial intelligence (AI) has significantly impacted various aspects of healthcare, particularly in the medical imaging field. This review focuses on recent developments in the application of deep learning (DL) techniques to breast cancer imaging. DL models, a subset of AI algorithms inspired by human brain architecture, have demonstrated remarkable success in analyzing complex medical images, enhancing diagnostic precision, and streamlining workflows. DL models have been applied to breast cancer diagnosis via mammography, ultrasonography, and magnetic resonance imaging. Furthermore, DL-based radiomic approaches may play a role in breast cancer risk assessment, prognosis prediction, and therapeutic response monitoring. Nevertheless, several challenges have limited the widespread adoption of AI techniques in clinical practice, emphasizing the importance of rigorous validation, interpretability, and technical considerations when implementing DL solutions. By examining fundamental concepts in DL techniques applied to medical imaging and synthesizing the latest advancements and trends, this narrative review aims to provide valuable and up-to-date insights for radiologists seeking to harness the power of AI in breast cancer care.
Affiliation(s)
- Léon Groenhoff: Radiology Department, Maggiore della Carità Hospital, 28100 Novara, Italy; (A.C.); (E.V.); (P.B.); (M.A.)
5. Fasihi-Shirehjini O, Babapour-Mofrad F. Effectiveness of ConvNeXt variants in diabetic feet diagnosis using plantar thermal images. Quantitative InfraRed Thermography Journal 2024:1-18. [DOI: 10.1080/17686733.2024.2310794]
Affiliation(s)
- Oktay Fasihi-Shirehjini: Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Farshid Babapour-Mofrad: Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
6. Yao Y, Yang J, Sun H, Kong H, Wang S, Xu K, Dai W, Jiang S, Bai Q, Xing S, Yuan J, Liu X, Lu F, Chen Z, Qu J, Su J. DeepGraFT: A novel semantic segmentation auxiliary ROI-based deep learning framework for effective fundus tessellation classification. Comput Biol Med 2024; 169:107881. [PMID: 38159401; DOI: 10.1016/j.compbiomed.2023.107881]
Abstract
Fundus tessellation (FT) is a prevalent clinical feature associated with myopia and has implications for the development of myopic maculopathy, which causes irreversible visual impairment. Accurate classification of FT in color fundus photographs can help predict disease progression and prognosis. However, the lack of precise detection and classification tools has created an unmet medical need, underscoring the importance of exploring the clinical utility of FT. To address this gap, we introduce an automatic FT grading system (called DeepGraFT) using classification-and-segmentation co-decision models based on deep learning. ConvNeXt, using transfer learning from pretrained ImageNet weights, was employed for the classification algorithm, aligned with a region of interest based on the ETDRS grading system to boost performance. A segmentation model was developed to detect where FT exists, complementing the classification for improved grading accuracy. The training set of DeepGraFT came from our in-house cohort (MAGIC), and the validation sets consisted of the remainder of the in-house cohort and an independent public cohort (UK Biobank). DeepGraFT demonstrated high performance in the training stage and achieved impressive accuracy in the validation phase (in-house cohort: 86.85%; public cohort: 81.50%). Furthermore, our findings demonstrated that DeepGraFT surpasses machine-learning-based classification models in FT classification, achieving a 5.57% increase in accuracy. Ablation analysis revealed that the introduced modules significantly enhanced classification effectiveness and elevated accuracy from 79.85% to 86.85%. Further analysis using the results provided by DeepGraFT revealed a significant negative association between FT and spherical equivalent (SE) in the UK Biobank cohort. In conclusion, DeepGraFT highlights the potential benefits of deep learning models in automating the grading of FT and has potential utility as a clinical decision-support tool for predicting the progression of pathological myopia.
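A minimal ConvNeXt transfer-learning sketch for grading an ROI crop is given below; the number of FT grades, input size, and optimiser settings are assumptions, not DeepGraFT's actual configuration.

```python
# ConvNeXt transfer-learning sketch for fundus tessellation grading
# (illustrative; grade count, input size, and training details are assumptions).
import torch
import torch.nn as nn
from torchvision import models

NUM_GRADES = 4  # assumed number of FT grades; the paper's grading scheme may differ

model = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.IMAGENET1K_V1)
in_features = model.classifier[2].in_features           # final Linear layer of ConvNeXt
model.classifier[2] = nn.Linear(in_features, NUM_GRADES)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One dummy optimisation step on an ROI crop (the paper aligns the ROI with the
# ETDRS grading grid before classification).
roi_batch = torch.randn(2, 3, 224, 224)
labels = torch.tensor([0, 2])
loss = criterion(model(roi_batch), labels)
loss.backward()
optimizer.step()
```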
Affiliation(s)
- Yinghao Yao: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Jiaying Yang: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Haojun Sun: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Hengte Kong: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Sheng Wang: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Ke Xu: National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Wei Dai: National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Siyi Jiang: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- QingShi Bai: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Shilai Xing: Institute of PSI Genomics, Wenzhou Global Eye & Vision Innovation Center, Wenzhou, 325024, China
- Jian Yuan: National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China
- Xinting Liu: National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Fan Lu: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Zhenhui Chen: National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Jia Qu: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
- Jianzhong Su: Oujiang Laboratory (Zhejiang Lab for Regenerative Medicine, Vision and Brain Health), Eye Hospital, Wenzhou Medical University, Wenzhou, 325011, Zhejiang, China; National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, Zhejiang, China; National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China
7. Azam S, Montaha S, Raiaan MAK, Rafid AKMRH, Mukta SH, Jonkman M. An Automated Decision Support System to Analyze Malignancy Patterns of Breast Masses Employing Medically Relevant Features of Ultrasound Images. J Imaging Inform Med 2024; 37:45-59. [PMID: 38343240; DOI: 10.1007/s10278-023-00925-7]
Abstract
An automated computer-aided approach might aid radiologists in diagnosing breast cancer at an early stage. This study proposes a novel decision support system to classify breast tumors as benign or malignant based on clinically important features extracted from ultrasound images. Nine handcrafted features, which align with the clinical markers used by radiologists, are extracted from the region of interest (ROI) of the ultrasound images. To validate that these selected clinical markers have a significant impact on predicting the benign and malignant classes, ten machine learning (ML) models are evaluated, resulting in test accuracies in the range of 96% to 99%. In addition, four feature selection techniques are explored, where two features are eliminated according to the feature ranking score of each feature selection method. The Random Forest classifier is trained with the resulting four feature sets. Results indicate that even when eliminating only two features, the performance of the model is reduced for each feature selection technique. These experiments validate the efficiency and effectiveness of the clinically important features. To develop the decision support system, a probability density function (PDF) graph is generated for each feature in order to find a threshold range that distinguishes benign and malignant tumors. Based on the threshold ranges of particular features, the decision support system is designed such that if at least eight out of nine features lie within the threshold range, the image is denoted as correctly predicted. With this algorithm, a test accuracy of 99.38% and an F1 score of 99.05% are achieved, which means that our decision support system outperforms all the previously trained ML models. Moreover, after calculating individual class-based test accuracies, a test accuracy of 99.31% is attained for the benign class, where only three benign instances are misclassified out of 437, and a test accuracy of 99.52% is attained for the malignant class, where only one malignant instance is misclassified out of 210. This system is robust, time-effective, and reliable, as the radiologists' criteria are followed, and it may aid specialists in making a diagnosis.
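The threshold-voting rule (accept a class when at least eight of the nine features fall inside that class's PDF-derived range) can be expressed compactly; in the sketch below the feature names and ranges are placeholders, not the fitted values from the paper.

```python
# Sketch of the threshold-voting decision rule: each feature has a value range
# (derived from its class-conditional probability density function), and a lesion is
# assigned to a class when at least 8 of the 9 features fall inside that class's range.
# Feature names and ranges below are placeholders, not the paper's fitted values.
import numpy as np

MALIGNANT_RANGES = {          # (low, high) per feature for the "malignant" class
    "irregularity":       (0.55, 1.00),
    "aspect_ratio":       (0.80, 2.50),
    "margin_sharpness":   (0.00, 0.40),
    "echo_heterogeneity": (0.50, 1.00),
    "posterior_shadow":   (0.30, 1.00),
    "solidity":           (0.00, 0.75),
    "circularity":        (0.00, 0.60),
    "mean_intensity":     (0.00, 0.45),
    "boundary_roughness": (0.50, 1.00),
}

def classify(features: dict, ranges: dict, min_votes: int = 8) -> str:
    # Count how many features fall inside the class's threshold range.
    votes = sum(low <= features[name] <= high for name, (low, high) in ranges.items())
    return "malignant" if votes >= min_votes else "benign"

lesion = dict(zip(MALIGNANT_RANGES, np.random.default_rng(1).random(9)))
print(classify(lesion, MALIGNANT_RANGES))
```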
Affiliation(s)
- Sami Azam: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Sidratul Montaha: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
- Mirjam Jonkman: Faculty of Science and Technology, Charles Darwin University, Casuarina, NT, 0909, Australia
8. Zhang L, Wang R, Gao J, Tang Y, Xu X, Kan Y, Cao X, Wen Z, Liu Z, Cui S, Li Y. A novel MRI-based deep learning networks combined with attention mechanism for predicting CDKN2A/B homozygous deletion status in IDH-mutant astrocytoma. Eur Radiol 2024; 34:391-399. [PMID: 37553486; DOI: 10.1007/s00330-023-09944-y]
Abstract
OBJECTIVES: To develop a high-accuracy MRI-based deep learning method for predicting cyclin-dependent kinase inhibitor 2A/B (CDKN2A/B) homozygous deletion status in isocitrate dehydrogenase (IDH)-mutant astrocytoma.
METHODS: Multiparametric brain MRI data and corresponding genomic information of 234 subjects (111 positive and 123 negative for CDKN2A/B homozygous deletion) were obtained from The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA), respectively. Two independent multi-sequence networks (ResFN-Net and FN-Net) were built on ResNet and ConvNeXt backbones combined with an attention mechanism to classify CDKN2A/B homozygous deletion status using MR images, including contrast-enhanced T1-weighted imaging (CE-T1WI) and T2-weighted imaging (T2WI). The performance of the networks was summarized by three-way cross-validation; ROC analysis was also performed.
RESULTS: The average cross-validation accuracy (ACC) of ResFN-Net was 0.813 and the average cross-validation area under the curve (AUC) was 0.8804. The average cross-validation ACC and AUC of FN-Net were 0.9236 and 0.9704, respectively. Comparing all sequence combinations of the two networks (ResFN-Net and FN-Net), the combination of CE-T1WI and T2WI performed best, with ACC and AUC of 0.8244, 0.8975 and 0.8971, 0.9574, respectively.
CONCLUSIONS: The FN-Net deep learning network based on ConvNeXt achieved promising performance for predicting the CDKN2A/B homozygous deletion status of IDH-mutant astrocytoma.
CLINICAL RELEVANCE STATEMENT: A novel deep learning network (FN-Net) based on preoperative MRI was developed to predict CDKN2A/B homozygous deletion status. This network has the potential to be a practical tool for the noninvasive characterization of CDKN2A/B in glioma to support personalized classification and treatment planning.
KEY POINTS: CDKN2A/B homozygous deletion status is an important marker for glioma grading and prognosis. An MRI-based deep learning approach was developed to predict CDKN2A/B homozygous deletion status. The predictive performance of the ConvNeXt-based network was better than that of the ResNet-based network.
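A simplified two-sequence fusion sketch is shown below: each MR sequence (CE-T1WI and T2WI) passes through its own backbone and the pooled features are concatenated before classification. FN-Net additionally uses an attention mechanism, which is omitted here; the backbone choice and fusion by concatenation are assumptions.

```python
# Two-sequence fusion sketch (CE-T1WI + T2WI) for binary CDKN2A/B status prediction.
# Simplified stand-in: FN-Net combines ConvNeXt backbones with an attention mechanism,
# whereas here the two sequence features are simply concatenated.
import torch
import torch.nn as nn
from torchvision import models

class TwoSequenceNet(nn.Module):
    def __init__(self):
        super().__init__()
        def backbone():
            m = models.convnext_tiny(weights=None)
            feat_dim = m.classifier[2].in_features
            m.classifier[2] = nn.Identity()       # expose the pooled feature vector
            return m, feat_dim
        self.t1_branch, d1 = backbone()
        self.t2_branch, d2 = backbone()
        self.head = nn.Linear(d1 + d2, 2)         # homozygous deletion: positive / negative

    def forward(self, ce_t1, t2):
        f = torch.cat([self.t1_branch(ce_t1), self.t2_branch(t2)], dim=1)
        return self.head(f)

model = TwoSequenceNet()
logits = model(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
print(logits.shape)   # torch.Size([1, 2])
```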
Affiliation(s)
- Liqiang Zhang: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Rui Wang: School of Computer Science and Engineering, Chongqing Normal University, Chongqing, 401331, China
- Jueni Gao: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Yi Tang: Molecular Medicine Diagnostic and Testing Center, Chongqing Medical University, Chongqing, 400016, China
- Xinyi Xu: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
- Yubo Kan: School of Medical and Life Sciences, Chengdu University of Traditional Chinese Medicine, Chengdu, 610032, China
- Xu Cao: School of Medical and Life Sciences, Chengdu University of Traditional Chinese Medicine, Chengdu, 610032, China
- Zhipeng Wen: Department of Radiology, School of Medicine, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, University of Electronic Science and Technology of China, Chengdu, 610042, China
- Zhi Liu: Department of Radiology, Chongqing Hospital of Traditional Chinese Medicine, Chongqing, 400021, China
- Shaoguo Cui: School of Computer Science and Engineering, Chongqing Normal University, Chongqing, 401331, China
- Yongmei Li: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, China
9. Sarker MMK, Singh VK, Alsharid M, Hernandez-Cruz N, Papageorghiou AT, Noble JA. COMFormer: Classification of Maternal-Fetal and Brain Anatomy Using a Residual Cross-Covariance Attention Guided Transformer in Ultrasound. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:1417-1427. [PMID: 37665699; DOI: 10.1109/tuffc.2023.3311879]
Abstract
Monitoring the healthy development of a fetus requires accurate and timely identification of different maternal-fetal structures as they grow. To facilitate this objective in an automated fashion, we propose a deep-learning-based image classification architecture called the COMFormer to classify maternal-fetal and brain anatomical structures present in 2-D fetal ultrasound (US) images. The proposed architecture classifies the two subcategories separately: maternal-fetal (abdomen, brain, femur, thorax, mother's cervix (MC), and others) and brain anatomical structures [trans-thalamic (TT), trans-cerebellum (TC), trans-ventricular (TV), and non-brain (NB)]. Our proposed architecture relies on a transformer-based approach that leverages spatial and global features using a newly designed residual cross-variance attention block. This block introduces an advanced cross-covariance attention (XCA) mechanism to capture a long-range representation from the input using spatial (e.g., shape, texture, intensity) and global features. To build COMFormer, we used a large publicly available dataset (BCNatal) consisting of 12 400 images from 1792 subjects. Experimental results prove that COMFormer outperforms the recent CNN and transformer-based models by achieving 95.64% and 96.33% classification accuracy on maternal-fetal and brain anatomy, respectively.
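The cross-covariance attention (XCA) idea used in COMFormer computes attention between feature channels rather than between spatial tokens, so the attention map is d x d instead of N x N. The single-head sketch below illustrates that mechanism; head count, dimensions, and the surrounding residual block are simplified assumptions.

```python
# Simplified single-head cross-covariance attention (XCA) sketch.
# Attention is formed between feature channels (a d x d map) instead of between
# tokens, keeping the cost linear in the number of tokens. Dimensions are
# illustrative; COMFormer embeds this idea in a residual attention block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossCovarianceAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.temperature = nn.Parameter(torch.ones(1))
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (batch, tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = F.normalize(q, dim=1)                # L2-normalise along the token axis
        k = F.normalize(k, dim=1)
        attn = (q.transpose(1, 2) @ k) * self.temperature   # (batch, dim, dim)
        attn = attn.softmax(dim=-1)
        out = (attn @ v.transpose(1, 2)).transpose(1, 2)     # back to (batch, tokens, dim)
        return self.proj(out)

x = torch.randn(2, 196, 64)                      # e.g. 14 x 14 patch tokens, 64 channels
print(CrossCovarianceAttention(64)(x).shape)     # torch.Size([2, 196, 64])
```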
10. Jo Y, Lee D, Baek D, Choi BK, Aryal N, Jung J, Shin YS, Hong B. Optimal view detection for ultrasound-guided supraclavicular block using deep learning approaches. Sci Rep 2023; 13:17209. [PMID: 37821574; PMCID: PMC10567700; DOI: 10.1038/s41598-023-44170-y]
Abstract
Successful ultrasound-guided supraclavicular block (SCB) requires an understanding of sonoanatomy and identification of the optimal view. Segmentation using a convolutional neural network (CNN) is limited in clearly determining the optimal view. The present study describes the development of a computer-aided diagnosis (CADx) system using a CNN that can determine the optimal view for complete SCB in real time, with the aim of helping non-experts identify this view. Ultrasound videos were retrospectively collected from 881 patients to develop the CADx system (600 in the training and validation set and 281 in the test set). The CADx system included classification and segmentation approaches, with a residual neural network (ResNet) and U-Net, respectively, applied as backbone networks. In the classification approach, an ablation study was performed to determine the optimal architecture and improve the performance of the model. In the segmentation approach, a cascade structure, in which U-Net is connected to ResNet, was implemented. The performance of the two approaches was evaluated based on a confusion matrix. Using the classification approach, ResNet34 and gated recurrent units with augmentation showed the highest performance, with an average accuracy of 0.901, precision of 0.613, recall of 0.757, F1-score of 0.677, and AUROC of 0.936. Using the segmentation approach, U-Net combined with ResNet34 and augmentation showed poorer performance than the classification approach. The CADx system described in this study showed high performance in determining the optimal view for SCB. This system could be expanded to many anatomical regions and may have the potential to aid clinicians in real-time settings. Trial registration: The protocol was registered with the Clinical Trial Registry of Korea (KCT0005822, https://cris.nih.go.kr).
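The best-performing classification configuration (ResNet34 frame features followed by gated recurrent units) can be sketched as a per-frame CNN feature extractor feeding a GRU that emits one prediction per clip; hidden size, clip length, and the two-class output are assumptions.

```python
# CNN + GRU sketch for classifying ultrasound clips as optimal / non-optimal views
# (illustrative; the study's best model paired ResNet34 features with gated recurrent units).
import torch
import torch.nn as nn
from torchvision import models

class SCBViewClassifier(nn.Module):
    def __init__(self, hidden=256, num_classes=2):
        super().__init__()
        cnn = models.resnet34(weights=None)
        self.feat_dim = cnn.fc.in_features
        cnn.fc = nn.Identity()                     # frame-level feature extractor
        self.cnn = cnn
        self.gru = nn.GRU(self.feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                      # clips: (batch, frames, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, self.feat_dim)
        _, h_n = self.gru(feats)                   # h_n: (1, batch, hidden)
        return self.head(h_n[-1])                  # one prediction per clip

model = SCBViewClassifier()
print(model(torch.randn(2, 8, 3, 224, 224)).shape)   # torch.Size([2, 2])
```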
Affiliation(s)
- Yumin Jo: Department of Anaesthesiology and Pain Medicine, College of Medicine, Chungnam National University and Hospital, 282 Munhwar-ro, Jung-gu, Daejeon, 35015, Republic of Korea
- Dongheon Lee: Department of Biomedical Engineering, College of Medicine, Chungnam National University and Hospital, Daejeon, Republic of Korea; Biomedical Research Institute, Chungnam National University Hospital, Daejeon, Republic of Korea
- Donghyeon Baek: Chungnam National University College of Medicine, Daejeon, Republic of Korea
- Jinsik Jung: Department of Anaesthesiology and Pain Medicine, College of Medicine, Chungnam National University and Hospital, 282 Munhwar-ro, Jung-gu, Daejeon, 35015, Republic of Korea
- Yong Sup Shin: Department of Anaesthesiology and Pain Medicine, College of Medicine, Chungnam National University and Hospital, 282 Munhwar-ro, Jung-gu, Daejeon, 35015, Republic of Korea
- Boohwi Hong: Department of Anaesthesiology and Pain Medicine, College of Medicine, Chungnam National University and Hospital, 282 Munhwar-ro, Jung-gu, Daejeon, 35015, Republic of Korea; Biomedical Research Institute, Chungnam National University Hospital, Daejeon, Republic of Korea
11. Shareef B, Xian M, Vakanski A, Wang H. Breast Ultrasound Tumor Classification Using a Hybrid Multitask CNN-Transformer Network. Med Image Comput Comput Assist Interv 2023; 14223:344-353. [PMID: 38601088; PMCID: PMC11006090; DOI: 10.1007/978-3-031-43901-8_33]
Abstract
Capturing global contextual information plays a critical role in breast ultrasound (BUS) image classification. Although convolutional neural networks (CNNs) have demonstrated reliable performance in tumor classification, they have inherent limitations for modeling global and long-range dependencies due to the localized nature of convolution operations. Vision Transformers have an improved capability of capturing global contextual information but may distort the local image patterns due to the tokenization operations. In this study, we proposed a hybrid multitask deep neural network called Hybrid-MT-ESTAN, designed to perform BUS tumor classification and segmentation using a hybrid architecture composed of CNNs and Swin Transformer components. The proposed approach was compared to nine BUS classification methods and evaluated using seven quantitative metrics on a dataset of 3,320 BUS images. The results indicate that Hybrid-MT-ESTAN achieved the highest accuracy, sensitivity, and F1 score of 82.7%, 86.4%, and 86.0%, respectively.
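A generic multitask wiring (one shared encoder, a classification head, and a segmentation head) is sketched below. Hybrid-MT-ESTAN combines CNN and Swin Transformer components; the tiny CNN encoder here only illustrates how the two task heads share features and is not the authors' architecture.

```python
# Generic multitask sketch: one shared encoder with a tumor-classification head and a
# segmentation head (simplified; not the Hybrid-MT-ESTAN architecture).
import torch
import torch.nn as nn

class MultiTaskBUSNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))
        self.seg_head = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 1),                          # tumor mask logits
        )

    def forward(self, x):
        f = self.encoder(x)                               # shared features for both tasks
        return self.cls_head(f), self.seg_head(f)

model = MultiTaskBUSNet()
cls_logits, seg_logits = model(torch.randn(2, 1, 128, 128))
print(cls_logits.shape, seg_logits.shape)   # torch.Size([2, 2]) torch.Size([2, 1, 128, 128])
```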
Affiliation(s)
- Bryar Shareef: Department of Computer Science, University of Idaho, Idaho Falls, Idaho 83402, USA
- Min Xian: Department of Computer Science, University of Idaho, Idaho Falls, Idaho 83402, USA
- Aleksandar Vakanski: Department of Computer Science, University of Idaho, Idaho Falls, Idaho 83402, USA
- Haotian Wang: Department of Computer Science, University of Idaho, Idaho Falls, Idaho 83402, USA
12. Zhang H, Zhong X, Li G, Liu W, Liu J, Ji D, Li X, Wu J. BCU-Net: Bridging ConvNeXt and U-Net for medical image segmentation. Comput Biol Med 2023; 159:106960. [PMID: 37099973; DOI: 10.1016/j.compbiomed.2023.106960]
Abstract
Medical image segmentation enables doctors to observe lesion regions better and make accurate diagnostic decisions. Single-branch models such as U-Net have achieved great progress in this field. However, the complementary local and global pathological semantics of heterogeneous neural networks have not yet been fully explored. The class-imbalance problem remains a serious issue. To alleviate these two problems, we propose a novel model called BCU-Net, which leverages the advantages of ConvNeXt in global interaction and U-Net in local processing. We propose a new multilabel recall loss (MRL) module to relieve the class imbalance problem and facilitate deep-level fusion of local and global pathological semantics between the two heterogeneous branches. Extensive experiments were conducted on six medical image datasets including retinal vessel and polyp images. The qualitative and quantitative results demonstrate the superiority and generalizability of BCU-Net. In particular, BCU-Net can handle diverse medical images with diverse resolutions. It has a flexible structure owing to its plug-and-play characteristics, which promotes its practicality.
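A recall-oriented loss can be written softly from predicted probabilities, so that classes with few pixels contribute as much as dominant ones. The sketch below is only one plausible interpretation of such an objective; BCU-Net's multilabel recall loss (MRL) has its own formulation in the paper.

```python
# Soft recall loss sketch for class-imbalanced segmentation. This is an interpretation
# of a recall-oriented objective, not the exact MRL module defined in BCU-Net.
import torch
import torch.nn.functional as F

def soft_recall_loss(logits, target, eps=1e-6):
    """logits: (batch, classes, H, W); target: (batch, H, W) integer labels."""
    num_classes = logits.shape[1]
    probs = logits.softmax(dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    tp = (probs * onehot).sum(dim=(0, 2, 3))           # soft true positives per class
    fn = ((1 - probs) * onehot).sum(dim=(0, 2, 3))     # soft false negatives per class
    recall = tp / (tp + fn + eps)
    return 1.0 - recall.mean()   # rare classes weigh as much as frequent ones

logits = torch.randn(2, 3, 64, 64, requires_grad=True)
target = torch.randint(0, 3, (2, 64, 64))
loss = soft_recall_loss(logits, target)
loss.backward()
```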
Affiliation(s)
- Hongbin Zhang: School of Software, East China Jiaotong University, China
- Xiang Zhong: School of Software, East China Jiaotong University, China
- Guangli Li: School of Information Engineering, East China Jiaotong University, China
- Wei Liu: School of Software, East China Jiaotong University, China
- Jiawei Liu: School of Software, East China Jiaotong University, China
- Donghong Ji: School of Cyber Science and Engineering, Wuhan University, China
- Xiong Li: School of Software, East China Jiaotong University, China
- Jianguo Wu: The Second Affiliated Hospital of Nanchang University, China
13. Zheng D, He X, Jing J. Overview of Artificial Intelligence in Breast Cancer Medical Imaging. J Clin Med 2023; 12:419. [PMID: 36675348; PMCID: PMC9864608; DOI: 10.3390/jcm12020419]
Abstract
The heavy global burden and mortality of breast cancer emphasize the importance of early diagnosis and treatment. Imaging is one of the main tools used in clinical practice for screening, diagnosis, and treatment efficacy evaluation, and can visualize changes in tumor size and texture before and after treatment. The overwhelming number of images, which leads to a heavy workload for radiologists and a sluggish reporting period, suggests the need for computer-aided detection techniques and platforms. In addition, complex and changeable image features, heterogeneous image quality, and inconsistent interpretation by different radiologists and medical institutions constitute the primary difficulties in breast cancer screening and imaging diagnosis. The advancement of imaging-based artificial intelligence (AI)-assisted tumor diagnosis is an ideal strategy for improving the efficiency and accuracy of imaging diagnosis. By learning from image data and constructing algorithmic models, AI is able to recognize, segment, and diagnose tumor lesions automatically, showing promising application prospects. Furthermore, the rapid advancement of "omics" promotes a deeper and more comprehensive understanding of the nature of cancer. The fascinating relationship between tumor images and molecular characteristics has attracted attention to radiomics and radiogenomics, which allow analysis and detection at the molecular level without the need for invasive procedures. In this review, we summarize current developments in AI-assisted imaging diagnosis and discuss advances in AI-based precise diagnosis of breast cancer from a clinical point of view. Although AI-assisted breast cancer screening and detection is an emerging field that draws much attention, the clinical application of AI in tumor lesion recognition, segmentation, and diagnosis is still limited to research settings or small patient cohorts. Randomized clinical trials based on large, high-quality cohorts are lacking. This review aims to describe the progress of imaging-based AI applications in breast cancer screening and diagnosis for clinicians.
Affiliation(s)
- Jing Jing: Laboratory of Integrative Medicine, Clinical Research Center for Breast, State Key Laboratory of Biotherapy, West China Hospital, Sichuan University and Collaborative Innovation Center, Chengdu 610041, China
14. Singh VK, Yousef Kalafi E, Cheah E, Wang S, Wang J, Ozturk A, Li Q, Eldar YC, Samir AE, Kumar V. HaTU-Net: Harmonic Attention Network for Automated Ovarian Ultrasound Quantification in Assisted Pregnancy. Diagnostics (Basel) 2022; 12:3213. [PMID: 36553220; PMCID: PMC9777827; DOI: 10.3390/diagnostics12123213]
Abstract
Antral follicle count (AFC) is a non-invasive biomarker used to assess ovarian reserve through transvaginal ultrasound (TVUS) imaging. Antral follicle diameters are usually in the range of 2-10 mm. The primary aim of ovarian reserve monitoring is to measure the size of ovarian follicles and the number of antral follicles. Manual follicle measurement is limited by operator time, expertise, and the subjectivity of delineating the two axes of each follicle. This necessitates an automated framework capable of quantifying follicle size and count in a clinical setting. This paper proposes a novel harmonic attention-based U-Net network, HaTU-Net, to precisely segment the ovary and follicles in ultrasound images. We replace the standard convolution operation with a harmonic block that convolves the features with a window-based discrete cosine transform (DCT). Additionally, we propose a harmonic attention mechanism that helps to promote the extraction of rich features. The suggested technique allows for capturing the most relevant features, such as boundaries, shape, and textural patterns, in the presence of various noise sources (i.e., shadows, poor contrast between tissues, and speckle noise). We evaluated the proposed model on our in-house private dataset of 197 patients undergoing TVUS examinations. The experimental results on an independent test set confirm that HaTU-Net achieved a Dice coefficient of 90% for ovaries and 81% for antral follicles, an improvement of 2% and 10%, respectively, compared with a standard U-Net. Furthermore, we accurately measured follicle size, yielding recall and precision rates of 91.01% and 76.49%, respectively.
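The Dice coefficient used to evaluate the ovary and follicle masks is straightforward to compute from binary masks; the sketch below uses the standard definition, with the 0.5 binarisation threshold as an assumption.

```python
# Dice coefficient sketch for evaluating ovary / follicle segmentation masks
# (standard definition; thresholding at 0.5 is an assumption for binarising predictions).
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """pred, truth: binary masks of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

rng = np.random.default_rng(0)
prob_map = rng.random((256, 256))          # stand-in for a network's probability output
pred_mask = prob_map > 0.5
truth_mask = rng.random((256, 256)) > 0.5
print(f"Dice: {dice_coefficient(pred_mask, truth_mask):.3f}")
```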
Affiliation(s)
- Vivek Kumar Singh: Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Elham Yousef Kalafi: Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Eugene Cheah: Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Shuhang Wang: Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Jingchao Wang: Department of Ultrasound, The Third Hospital of Hebei Medical University, Shijiazhuang 050051, China
- Arinc Ozturk: Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Qian Li: Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Yonina C. Eldar: Faculty of Mathematics and Computer Science, Weizmann Institute of Science, Rehovot 7610001, Israel
- Anthony E. Samir: Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA
- Viksit Kumar (corresponding author): Center for Ultrasound Research & Translation at the Massachusetts General Hospital, Department of Radiology, Harvard Medical School, Boston, MA 02114, USA