1
Vijaya P, Chander S, Fernandes R, Rodrigues AP, Raja M. Flamingo Search Sailfish Optimizer Based SqueezeNet for Detection of Breast Cancer Using MRI Images. Cancer Invest 2024; 42:745-768. [PMID: 39301618 DOI: 10.1080/07357907.2024.2403088] [Received: 06/14/2024] [Accepted: 09/08/2024] [Indexed: 09/22/2024]
Abstract
Breast cancer, a disease of increased risk in women, is identified with breast magnetic resonance imaging (breast MRI), which also helps in evaluating treatment therapies. Breast MRI is a time-consuming process that involves the assessment of current imaging. This research focuses on detecting breast cancer at earlier stages. Among the various cancers affecting women, breast cancer accounts for almost 30% of estimated cancer cases. In this research, breast cancer detection follows several steps: pre-processing, segmentation, augmentation, feature extraction, and cancer detection. A median filter is used for pre-processing, and segmentation is then performed by Psi-Net. Augmentation operations such as shearing, translation, and cropping are applied after segmentation. The segmented image then undergoes feature extraction, where shape features, the Completed Local Binary Pattern (CLBP), the Pyramid Histogram of Oriented Gradients (PHOG), and statistical features are extracted. Finally, breast cancer is detected using a deep learning model, SqueezeNet. The newly devised Flamingo Search SailFish Optimizer (FSSFO), a combination of the Flamingo Search Algorithm (FSA) and the SailFish Optimizer (SFO), is used to train both Psi-Net and SqueezeNet.
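The pre-processing step above uses a median filter. As a quick illustration (not the paper's code), a minimal NumPy version of a 3 × 3 median filter might look like this:

```python
import numpy as np

def median_filter(img, k=3):
    """Apply a k x k median filter with edge padding, as used in the
    pre-processing step described above. Pure-NumPy illustration."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            # Median over the k x k neighborhood centred on (i, j).
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single bright impulse ("salt noise") is suppressed by the filter.
img = np.zeros((5, 5))
img[2, 2] = 255.0
print(float(median_filter(img)[2, 2]))  # 0.0
```

Median filtering is a common choice here because it removes impulse noise while preserving edges better than a mean filter.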
Affiliation(s)
- P Vijaya
- Department of Mathematics & Computer Science, Modern College of Business and Sciences, Muscat, Oman
- Satish Chander
- Department of Computer Science and Engineering, Birla Institute of Technology, Ranchi, India
- Roshan Fernandes
- Department of Cyber Security, NMAM Institute of Technology, NITTE (Deemed to be University), Nitte, India
- Anisha P Rodrigues
- Department of Computer Science and Engineering, NMAM Institute of Technology, NITTE (Deemed to be University), Nitte, India
- Maheswari Raja
- School of Computer Science and Information Technology, Symbiosis Skills and Professional University, Pune, India
2
Lu G, Tian R, Yang W, Liu R, Liu D, Xiang Z, Zhang G. Deep learning radiomics based on multimodal imaging for distinguishing benign and malignant breast tumours. Front Med (Lausanne) 2024; 11:1402967. [PMID: 39036101 PMCID: PMC11257849 DOI: 10.3389/fmed.2024.1402967] [Received: 03/18/2024] [Accepted: 06/14/2024] [Indexed: 07/23/2024]
Abstract
Objectives: This study aimed to develop a deep learning radiomics model using multimodal imaging to differentiate benign and malignant breast tumours. Methods: Multimodal imaging data, including ultrasonography (US), mammography (MG), and magnetic resonance imaging (MRI), from 322 patients (112 with benign and 210 with malignant breast tumours) with histopathologically confirmed breast tumours were retrospectively collected between December 2018 and May 2023. Based on multimodal imaging, the experiment was divided into three parts: traditional radiomics, deep learning radiomics, and feature fusion. We tested the performance of seven classifiers, namely, SVM, KNN, random forest, extra trees, XGBoost, LightGBM, and logistic regression (LR), on different feature models. Through feature fusion using ensemble and stacking strategies, we obtained the optimal classification model for benign and malignant breast tumours. Results: For traditional radiomics, the ensemble fusion strategy achieved the highest accuracy, AUC, and specificity, with values of 0.892, 0.942 [0.886-0.996], and 0.956 [0.873-1.000], respectively. The early fusion strategy with US, MG, and MRI achieved the highest sensitivity of 0.952 [0.887-1.000]. For deep learning radiomics, the stacking fusion strategy achieved the highest accuracy, AUC, and sensitivity, with values of 0.937, 0.947 [0.887-1.000], and 1.000 [0.999-1.000], respectively. The early fusion strategies of US+MRI and US+MG achieved the highest specificity of 0.954 [0.867-1.000]. For feature fusion, the ensemble and stacking approaches of the late fusion strategy achieved the highest accuracy of 0.968. In addition, stacking achieved the highest AUC and specificity, 0.997 [0.990-1.000] and 1.000 [0.999-1.000], respectively. The traditional radiomics and deep features of US+MG+MRI achieved the highest sensitivity of 1.000 [0.999-1.000] under the early fusion strategy.
Conclusion: This study demonstrated the potential of integrating deep learning and radiomics features with multimodal images. As a single modality, MRI based on radiomics features achieved greater accuracy than US or MG. The US and MG models achieved higher accuracy with transfer learning than the single-mode or radiomics models. The traditional radiomics and deep features of US+MG+MRI achieved the highest sensitivity under the early fusion strategy, showed higher diagnostic performance, and provided more valuable information for differentiating benign and malignant breast tumours.
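The ensemble (probability-averaging) and stacking fusion strategies described above can be sketched on toy data. The per-modality probabilities and the least-squares meta-learner below are illustrative stand-ins, not values or code from the study:

```python
import numpy as np

# Hypothetical per-modality malignancy probabilities (US, MG, MRI)
# for 8 lesions, plus ground-truth labels; in practice these would
# come from per-modality radiomics classifiers.
probs = np.array([
    [0.9, 0.8, 0.95], [0.7, 0.9, 0.85], [0.6, 0.4, 0.7], [0.8, 0.75, 0.9],
    [0.2, 0.3, 0.1], [0.4, 0.35, 0.2], [0.1, 0.2, 0.15], [0.3, 0.45, 0.25],
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])

# Ensemble (late) fusion: average the modality probabilities.
ensemble = probs.mean(axis=1)

# Stacking: fit meta-weights on the base predictions (least squares
# stands in here for the meta-learner used in practice).
A = np.c_[probs, np.ones(len(y))]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
stacked = A @ w

acc_ens = float(((ensemble > 0.5) == y).mean())
acc_stack = float(((stacked > 0.5) == y).mean())
print(acc_ens)  # 1.0
```

The design point is that ensemble fusion needs no extra training, while stacking learns how much to trust each modality from held-out base predictions.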
Affiliation(s)
- Guoxiu Lu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning, China
- Ronghui Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Wei Yang
- Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital and Institute, Shenyang, Liaoning, China
- Ruibo Liu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, Liaoning, China
- Dongmei Liu
- Department of Ultrasound, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
- Zijie Xiang
- Biomedical Engineering, Shenyang University of Technology, Shenyang, Liaoning, China
- Guoxu Zhang
- Department of Nuclear Medicine, General Hospital of Northern Theater Command, Shenyang, Liaoning, China
3
Li Y, El Habib Daho M, Conze PH, Zeghlache R, Le Boité H, Tadayoni R, Cochener B, Lamard M, Quellec G. A review of deep learning-based information fusion techniques for multimodal medical image classification. Comput Biol Med 2024; 177:108635. [PMID: 38796881 DOI: 10.1016/j.compbiomed.2024.108635] [Received: 10/05/2023] [Revised: 03/18/2024] [Accepted: 05/18/2024] [Indexed: 05/29/2024]
Abstract
Multimodal medical imaging plays a pivotal role in clinical diagnosis and research, as it combines information from various imaging modalities to provide a more comprehensive understanding of the underlying pathology. Recently, deep learning-based multimodal fusion techniques have emerged as powerful tools for improving medical image classification. This review offers a thorough analysis of developments in deep learning-based multimodal fusion for medical classification tasks. We explore the complementary relationships among prevalent clinical modalities and outline three main fusion schemes for multimodal classification networks: input fusion, intermediate fusion (encompassing single-level fusion, hierarchical fusion, and attention-based fusion), and output fusion. By evaluating the performance of these fusion techniques, we provide insight into the suitability of different network architectures for various multimodal fusion scenarios and application domains. Furthermore, we delve into challenges related to network architecture selection, handling incomplete multimodal data, and the potential limitations of multimodal fusion. Finally, we spotlight the promising future of Transformer-based multimodal fusion techniques and give recommendations for future research in this rapidly evolving field.
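The three fusion schemes outlined in the review can be sketched with array shapes alone. The "encoder" and the modality arrays below are hypothetical placeholders, not networks from any reviewed paper:

```python
import numpy as np

# Toy "feature extractor" standing in for a per-modality encoder branch.
def encode(x):
    return x.reshape(x.shape[0], -1).mean(axis=1, keepdims=True)

mri = np.ones((2, 8, 8))   # batch of 2 single-channel MRI slices
pet = np.zeros((2, 8, 8))  # matching PET slices (illustrative)

# Input fusion: stack modalities as channels before any network.
input_fused = np.stack([mri, pet], axis=1)                      # (2, 2, 8, 8)

# Intermediate fusion: encode each modality, then concatenate features.
mid_fused = np.concatenate([encode(mri), encode(pet)], axis=1)  # (2, 2)

# Output fusion: combine per-modality predictions, e.g. by averaging.
out_fused = (encode(mri) + encode(pet)) / 2                     # (2, 1)

print(input_fused.shape, mid_fused.shape, out_fused.shape)
```

The scheme determines where modality information meets: at the pixel level, in feature space, or only at the decision stage.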
Affiliation(s)
- Yihao Li
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Mostafa El Habib Daho
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Rachid Zeghlache
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
- Hugo Le Boité
- Sorbonne University, Paris, France; Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France
- Ramin Tadayoni
- Ophthalmology Department, Lariboisière Hospital, AP-HP, Paris, France; Paris Cité University, Paris, France
- Béatrice Cochener
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France; Ophthalmology Department, CHRU Brest, Brest, France
- Mathieu Lamard
- LaTIM UMR 1101, Inserm, Brest, France; University of Western Brittany, Brest, France
4
Li Y, Yu R, Chang H, Yan W, Wang D, Li F, Cui Y, Wang Y, Wang X, Yan Q, Liu X, Jia W, Zeng Q. Identifying Pathological Subtypes of Brain Metastasis from Lung Cancer Using MRI-Based Deep Learning Approach: A Multicenter Study. J Imaging Inform Med 2024; 37:976-987. [PMID: 38347392 PMCID: PMC11169103 DOI: 10.1007/s10278-024-00988-0] [Received: 10/04/2023] [Revised: 12/07/2023] [Accepted: 12/12/2023] [Indexed: 06/13/2024]
Abstract
The aim of this study was to investigate the feasibility of deep learning (DL) based on multiparametric MRI for differentiating the pathological subtypes of brain metastasis (BM) in lung cancer patients. This retrospective analysis collected 246 patients (456 BMs) from five medical centers from July 2016 to June 2022. The BMs were from small-cell lung cancer (SCLC, n = 230) and non-small-cell lung cancer (NSCLC, n = 226; 119 adenocarcinoma and 107 squamous cell carcinoma). Patients from four medical centers were assigned to the training and internal validation sets at a ratio of 4:1, and another medical center was selected as the external test set. An attention-guided residual fusion network (ARFN) model for T1WI, T2WI, T2-FLAIR, DWI, and contrast-enhanced T1WI, based on the ResNet-18 backbone, was developed. The area under the receiver operating characteristic curve (AUC) was used to assess classification performance. Compared with models based on the five single sequences and other combinations, the multiparametric MRI model based on all five sequences had higher specificity in distinguishing BMs from different types of lung cancer. In the internal validation and external test sets, the AUCs of the model for classifying SCLC and NSCLC brain metastases were 0.796 and 0.751, respectively; for differentiating adenocarcinoma from squamous cell carcinoma BMs, the AUCs of the five-sequence prediction models were 0.771 and 0.738, respectively. DL together with multiparametric MRI is feasible for identifying the pathological type of BM from lung cancer.
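The AUC used above as the performance metric can be computed directly from scores and labels via the Mann-Whitney view of the ROC curve. A minimal sketch (not the study's evaluation pipeline), with invented scores:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve as the probability that a random
    positive outscores a random negative (ties count half)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.35, 0.6, 0.4, 0.2])
labels = np.array([1, 1, 1, 0, 0, 0])
print(auc(scores, labels))  # ≈ 0.778 (7 of 9 positive-negative pairs ranked correctly)
```

This rank-based formulation makes explicit why AUC is threshold-free, which is why multicenter studies like the one above report it rather than accuracy at a fixed cutoff.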
Affiliation(s)
- Yuting Li
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, No. 16766 Jingshi Road, Qianfoshan Hospital, Shandong, Jinan, China
- The First Clinical Medical College, Shandong University of Traditional Chinese Medicine, Jinan, China
- Ruize Yu
- Infervision Medical Technology Co., Ltd., Beijing, China
- Huan Chang
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, No. 16766 Jingshi Road, Qianfoshan Hospital, Shandong, Jinan, China
- Wanying Yan
- Infervision Medical Technology Co., Ltd., Beijing, China
- Dawei Wang
- Infervision Medical Technology Co., Ltd., Beijing, China
- Fuyan Li
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Yi Cui
- Department of Radiology, Qilu Hospital of Shandong University, Jinan, China
- Yong Wang
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Xiao Wang
- Department of Radiology, Jining No. 1 People's Hospital, Jining, China
- Qingqing Yan
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, No. 16766 Jingshi Road, Qianfoshan Hospital, Shandong, Jinan, China
- Xinhui Liu
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, No. 16766 Jingshi Road, Qianfoshan Hospital, Shandong, Jinan, China
- Wenjing Jia
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, No. 16766 Jingshi Road, Qianfoshan Hospital, Shandong, Jinan, China
- Qingshi Zeng
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University & Shandong Provincial Qianfoshan Hospital, No. 16766 Jingshi Road, Qianfoshan Hospital, Shandong, Jinan, China
5
Guo Y, Zhang H, Yuan L, Chen W, Zhao H, Yu QQ, Shi W. Machine learning and new insights for breast cancer diagnosis. J Int Med Res 2024; 52:3000605241237867. [PMID: 38663911 PMCID: PMC11047257 DOI: 10.1177/03000605241237867] [Received: 08/21/2023] [Accepted: 02/21/2024] [Indexed: 04/28/2024]
Abstract
Breast cancer (BC) is the most prominent form of cancer among females worldwide. Current methods of BC detection include X-ray mammography, ultrasound, computed tomography, magnetic resonance imaging, positron emission tomography, and breast thermographic techniques. More recently, machine learning (ML) tools have been increasingly employed in diagnostic medicine owing to their high efficiency in detection and intervention. Imaging features and mathematical analyses can then be used to generate ML models that stratify, differentiate, and detect benign and malignant breast lesions. Given its marked advantages, radiomics is a frequently used tool in recent research and clinics. Artificial neural networks and deep learning (DL) are novel forms of ML that evaluate data using computer simulation of the human brain. DL directly processes unstructured information, such as images, sounds, and language, and performs precise clinical image stratification, medical record analyses, and tumour diagnosis. This review summarizes prior investigations on the application of medical images for the detection and intervention of BC using radiomics, ML, and DL. The aim is to provide guidance to scientists regarding the use of artificial intelligence and ML in research and the clinic.
Affiliation(s)
- Ya Guo
- Department of Oncology, Jining No.1 People’s Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Heng Zhang
- Department of Laboratory Medicine, Shandong Daizhuang Hospital, Jining, Shandong Province, China
- Leilei Yuan
- Department of Oncology, Jining No.1 People's Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Weidong Chen
- Department of Oncology, Jining No.1 People's Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Haibo Zhao
- Department of Oncology, Jining No.1 People's Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Qing-Qing Yu
- Phase I Clinical Research Centre, Jining No.1 People's Hospital, Shandong First Medical University, Jining, Shandong Province, China
- Wenjie Shi
- Molecular and Experimental Surgery, University Clinic for General-, Visceral-, Vascular- and Trans-Plantation Surgery, Medical Faculty University Hospital Magdeburg, Otto-von Guericke University, Magdeburg, Germany
6
Qi X, Wang W, Pan S, Liu G, Xia L, Duan S, He Y. Predictive value of triple negative breast cancer based on DCE-MRI multi-phase full-volume ROI clinical radiomics model. Acta Radiol 2024; 65:173-184. [PMID: 38017694 DOI: 10.1177/02841851231215145] [Indexed: 11/30/2023]
Abstract
BACKGROUND No studies have compared the value of radiomics features from distinct phases of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) for predicting triple-negative breast cancer (TNBC). PURPOSE To identify the optimal phase of DCE-MRI for diagnosing TNBC and, in combination with clinical factors, to develop a clinical-radiomics model that predicts TNBC well. MATERIAL AND METHODS This retrospective study included 158 patients with pathology-confirmed breast cancer, including 38 cases of TNBC. The patients were randomly divided into training and validation sets (7:3). Eight radiomics models were built based on the eight DCE-MRI phases, and their performances were evaluated using receiver operating characteristic (ROC) curves and DeLong's test. The Radscore derived from the best radiomics model was integrated with independent clinical risk factors to construct a clinical-radiomics predictive model, whose performance was evaluated using ROC, calibration, and decision curve analyses. RESULTS WHO classification, margin, and T2-weighted (T2W) imaging signal were significantly correlated with TNBC and were independent risk factors for TNBC (P<0.05). The clinical model yielded areas under the curve (AUCs) of 0.867 and 0.843 in the training and validation sets, respectively. The radiomics model based on DCE phase 7 achieved the highest efficacy, with AUCs of 0.818 and 0.777. The AUCs of the clinical-radiomics model were 0.936 and 0.886 in the training and validation sets, respectively. The decision curve showed the clinical utility of the clinical-radiomics model. CONCLUSION The radiomics features of DCE-MRI have the potential to predict TNBC and can improve the performance of clinical risk factors for preoperative personalized prediction of TNBC.
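Combining a Radscore with clinical risk factors, as in the clinical-radiomics model above, is commonly done with a logistic model. The sketch below fits such a model by gradient descent on invented data; none of the values, and neither the feature choice nor the hyperparameters, come from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical inputs: a radiomics score plus two binary clinical
# factors (e.g. margin status, T2WI signal) for 8 patients.
X = np.array([
    [1.2, 1, 1], [0.9, 1, 0], [0.8, 0, 1], [1.1, 1, 1],
    [-0.7, 0, 0], [-1.0, 0, 1], [-0.9, 0, 0], [-1.2, 1, 0],
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = TNBC in this toy setup

# Fit the combined clinical-radiomics logistic model by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

pred = sigmoid(X @ w + b)
print(float(((pred > 0.5) == y).mean()))  # training accuracy on the toy data
```

The fitted weights play the role of the nomogram coefficients: each clinical factor shifts the log-odds contributed by the Radscore.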
Affiliation(s)
- Xuan Qi
- Department of Radiology, Ma'anshan People's Hospital, Maanshan, PR China
- Wuling Wang
- Department of Radiology, Ma'anshan People's Hospital, Maanshan, PR China
- Shuya Pan
- Department of Radiology, Ma'anshan People's Hospital, Maanshan, PR China
- Guangzhu Liu
- Ma'anshan Clinical College, Anhui Medical University, Hefei, PR China
- Liang Xia
- Department of Radiology, Sir Run Run Hospital affiliated to Nanjing Medical University, Nanjing, PR China
- Shaofeng Duan
- Precision Health Institution, GE Healthcare China, Shanghai, China
- Yongsheng He
- Department of Radiology, Ma'anshan People's Hospital, Maanshan, PR China
7
Cong C, Li X, Zhang C, Zhang J, Sun K, Liu L, Ambale-Venkatesh B, Chen X, Wang Y. MRI-Based Breast Cancer Classification and Localization by Multiparametric Feature Extraction and Combination Using Deep Learning. J Magn Reson Imaging 2024; 59:148-161. [PMID: 37013422 DOI: 10.1002/jmri.28713] [Received: 01/23/2023] [Revised: 03/16/2023] [Accepted: 03/16/2023] [Indexed: 04/05/2023]
Abstract
BACKGROUND Deep learning (DL) has been reported to be feasible in breast MRI. However, the effectiveness of DL methods with multiparametric MRI (mpMRI) combinations for breast cancer detection has not been well investigated. PURPOSE To implement a DL method for breast cancer classification and detection using feature extraction and combination from multiple sequences. STUDY TYPE Retrospective. POPULATION A total of 569 local cases as the internal cohort (50.2 ± 11.2 years; 100% female), divided among training (218), validation (73), and testing (278); 125 cases from a public dataset as the external cohort (53.6 ± 11.5 years; 100% female). FIELD STRENGTH/SEQUENCE T1-weighted imaging and dynamic contrast-enhanced MRI (DCE-MRI) with gradient echo sequences, T2-weighted imaging (T2WI) with spin-echo sequences, and diffusion-weighted imaging with a single-shot echo-planar sequence, at 1.5 T. ASSESSMENT A cascaded convolutional neural network and long short-term memory network was implemented for lesion classification, with histopathology as the ground truth for the malignant and benign categories and contralateral breasts as the healthy category in the internal/external cohorts. BI-RADS categories were assessed by three independent radiologists for comparison, and class activation maps were employed for lesion localization in the internal cohort. The classification and localization performances were assessed with DCE-MRI and non-DCE sequences, respectively. STATISTICAL TESTS Sensitivity, specificity, area under the curve (AUC), DeLong test, and Cohen's kappa for lesion classification; sensitivity and mean squared error for localization. A P-value <0.05 was considered statistically significant. RESULTS With the optimized mpMRI combinations, lesion classification achieved an AUC of 0.98/0.91 and sensitivity of 0.96/0.83 in the internal/external cohorts, respectively. Without DCE-MRI, the DL-based method was superior to the radiologists' readings (AUC 0.96 vs. 0.90). Lesion localization achieved sensitivities of 0.97/0.93 with DCE-MRI/T2WI alone, respectively. DATA CONCLUSION The DL method achieved high accuracy for lesion detection in the internal/external cohorts. The classification performance with a contrast-agent-free combination is comparable to DCE-MRI alone and to the radiologists' reading in AUC and sensitivity. EVIDENCE LEVEL 3. TECHNICAL EFFICACY Stage 2.
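The class activation maps used for localization above weight the last convolutional feature maps by the target class's classifier weights and sum over channels. A minimal NumPy sketch with illustrative shapes and random values (not the study's network):

```python
import numpy as np

rng = np.random.default_rng(42)
feature_maps = rng.random((64, 7, 7))   # (channels, H, W) from a last conv layer
class_weights = rng.random(64)          # output-unit weights for the target class

# CAM: channel-weighted sum of the feature maps, then min-max normalization.
cam = np.tensordot(class_weights, feature_maps, axes=1)  # (7, 7)
cam = (cam - cam.min()) / (cam.max() - cam.min())        # rescale to [0, 1]

# The peak of the map indicates the region most responsible for the
# class score; upsampled to image size, it localizes the lesion.
print(cam.shape)
```

This is why a classification network trained only with image-level labels can still yield a coarse lesion localization.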
Affiliation(s)
- Chao Cong
- Department of Radiology, Daping Hospital, Army Medical University, Chongqing, China
- School of Electrical and Electronic Engineering, Chongqing University of Technology, Chongqing, China
- Department of Nuclear Medicine, Daping Hospital, Army Medical University, Chongqing, China
- Xiaoguang Li
- Department of Radiology, Daping Hospital, Army Medical University, Chongqing, China
- Chunlai Zhang
- Department of Radiology, Daping Hospital, Army Medical University, Chongqing, China
- Jing Zhang
- Department of Radiology, Daping Hospital, Army Medical University, Chongqing, China
- Kaixiang Sun
- School of Electrical and Electronic Engineering, Chongqing University of Technology, Chongqing, China
- Lianluyi Liu
- School of Electrical and Electronic Engineering, Chongqing University of Technology, Chongqing, China
- Xiao Chen
- Department of Nuclear Medicine, Daping Hospital, Army Medical University, Chongqing, China
- Yi Wang
- Department of Nuclear Medicine, Daping Hospital, Army Medical University, Chongqing, China
8
Ansari MY, Qaraqe M, Righetti R, Serpedin E, Qaraqe K. Unveiling the future of breast cancer assessment: a critical review on generative adversarial networks in elastography ultrasound. Front Oncol 2023; 13:1282536. [PMID: 38125949 PMCID: PMC10731303 DOI: 10.3389/fonc.2023.1282536] [Received: 08/24/2023] [Accepted: 10/27/2023] [Indexed: 12/23/2023]
Abstract
Elastography ultrasound provides elasticity information about tissues, which is crucial for understanding density and texture and allows for the diagnosis of medical conditions such as fibrosis and cancer. In the current medical imaging scenario, elastograms for B-mode ultrasound are restricted to well-equipped hospitals, making the modality unavailable for pocket ultrasound. To highlight recent progress in elastogram synthesis, this article performs a critical review of generative adversarial network (GAN) methodology for elastogram generation from B-mode ultrasound images. Along with a brief overview of cutting-edge medical image synthesis, the article highlights the contribution of the GAN framework in light of its impact and thoroughly analyzes the results to validate whether the existing challenges have been effectively addressed. Specifically, this article highlights that GANs can successfully generate accurate elastograms for deep-seated breast tumors (without artifacts) and improve diagnostic effectiveness for pocket ultrasound. Furthermore, the results of the GAN framework are analyzed by considering quantitative metrics, visual evaluations, and cancer diagnostic accuracy. Finally, essential unaddressed challenges that lie at the intersection of elastography and GANs are presented, and a few future directions are shared for elastogram synthesis research.
Affiliation(s)
- Mohammed Yusuf Ansari
- Electrical and Computer Engineering, Texas A&M University, College Station, TX, United States
- Electrical and Computer Engineering, Texas A&M University at Qatar, Doha, Qatar
- Marwa Qaraqe
- Electrical and Computer Engineering, Texas A&M University at Qatar, Doha, Qatar
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Raffaella Righetti
- Electrical and Computer Engineering, Texas A&M University, College Station, TX, United States
- Erchin Serpedin
- Electrical and Computer Engineering, Texas A&M University, College Station, TX, United States
- Khalid Qaraqe
- Electrical and Computer Engineering, Texas A&M University at Qatar, Doha, Qatar
9
Ghorbian M, Ghorbian S. Usefulness of machine learning and deep learning approaches in screening and early detection of breast cancer. Heliyon 2023; 9:e22427. [PMID: 38076050 PMCID: PMC10709063 DOI: 10.1016/j.heliyon.2023.e22427] [Received: 07/11/2023] [Revised: 11/07/2023] [Accepted: 11/13/2023] [Indexed: 10/16/2024]
Abstract
Breast cancer (BC) is one of the most common types of cancer in women, and its prevalence is on the rise. Diagnosing this disease in its early stages can be highly challenging, yet early and rapid diagnosis increases the likelihood of a patient's recovery and survival. This study presents a systematic and detailed analysis of the various ML approaches and mechanisms employed during the BC diagnosis process. Further, it provides a comprehensive and accurate overview of the techniques, approaches, challenges, solutions, and important concepts related to this process, in order to give healthcare professionals and technologists a deeper understanding of new screening and diagnostic tools and approaches, and to identify new challenges and popular approaches in this field. The study therefore proposes a comprehensive taxonomy of ML techniques applied to BC diagnosis, focusing on data obtained from clinical diagnostic methods. The taxonomy has two major components: the first covers clinical diagnostic methods such as MRI, mammography, and hybrid methods; the second covers machine learning approaches, such as neural networks (NN), deep learning (DL), and hybrid models, applied to the datasets from the first part. The taxonomy is then analyzed based on the implementation of ML approaches in clinical diagnostic methods. The findings demonstrate that NN- and DL-based approaches are the most accurate and widely used models for BC diagnosis compared with other diagnostic techniques, and that accuracy (ACC), sensitivity (SEN), and specificity (SPE) are the most commonly used performance evaluation criteria. Additionally, this study discusses the advantages and disadvantages of using machine learning techniques, the objectives of each study (considered separately for ML technology and BC detection), and the evaluation criteria used. Lastly, it provides an overview of open and unresolved issues related to using ML for BC diagnosis, along with proposals to resolve each issue, to assist researchers and healthcare professionals.
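The three evaluation criteria identified above (ACC, SEN, SPE) follow directly from the binary confusion matrix. A small illustrative helper on invented labels:

```python
import numpy as np

def acc_sen_spe(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from binary predictions.
    Illustrative helper, not code from the reviewed studies."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / (tp + tn + fp + fn)
    sen = tp / (tp + fn)   # true-positive rate (recall)
    spe = tn / (tn + fp)   # true-negative rate
    return float(acc), float(sen), float(spe)

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0])
print(acc_sen_spe(y_true, y_pred))  # (0.75, 0.75, 0.75)
```

Reporting all three, as the reviewed studies do, matters in screening: sensitivity bounds missed cancers while specificity bounds unnecessary work-ups, and accuracy alone can hide either.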
Affiliation(s)
- Mohsen Ghorbian
- Department of Computer Engineering, Qom Branch, Islamic Azad University, Qom, Iran
- Saeid Ghorbian
- Department of Molecular Genetics, Ahar Branch, Islamic Azad University, Ahar, Iran
10
Fan M, Huang G, Lou J, Gao X, Zeng T, Li L. Cross-Parametric Generative Adversarial Network-Based Magnetic Resonance Image Feature Synthesis for Breast Lesion Classification. IEEE J Biomed Health Inform 2023; 27:5495-5505. [PMID: 37656652 DOI: 10.1109/jbhi.2023.3311021] [Indexed: 09/03/2023]
Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) contains information on tumor morphology and physiology for breast cancer diagnosis and treatment. However, this technology requires contrast agent injection and more acquisition time than other parametric images, such as T2-weighted imaging (T2WI). Current image synthesis methods attempt to map image data from one domain to another, whereas it is challenging or even infeasible to map images with one sequence into images with multiple sequences. Here, we propose a new approach of cross-parametric generative adversarial network (GAN)-based feature synthesis (CPGANFS) to generate discriminative DCE-MRI features from T2WI, with applications in breast cancer diagnosis. The proposed approach decodes the T2W images into latent cross-parameter features to reconstruct the DCE-MRI and T2WI features by balancing the information shared between the two. A Wasserstein GAN with a gradient penalty is employed to differentiate the T2WI-generated features from ground-truth features extracted from DCE-MRI. The synthesized DCE-MRI feature-based model achieved significantly (p = 0.036) higher prediction performance (AUC = 0.866) in breast cancer diagnosis than that based on T2WI (AUC = 0.815). Visualization of the model shows that our CPGANFS method enhances predictive power by elevating attention to the lesion and the surrounding parenchyma, driven by the inter-parametric information learned from T2WI and DCE-MRI. Our proposed CPGANFS provides a framework for cross-parametric MR image feature generation from a single-sequence image, guided by an information-rich, time-series image with kinetic information. Extensive experimental results demonstrate its effectiveness, with high interpretability and improved performance in breast cancer diagnosis.
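The abstract does not give its exact loss, but the Wasserstein GAN with gradient penalty it employs has a standard critic objective, which for context is conventionally written as:

```latex
% Standard WGAN-GP critic loss (not reproduced from the paper itself):
% \tilde{x} \sim \mathbb{P}_g are generated samples, x \sim \mathbb{P}_r are
% real samples, and \hat{x} is sampled uniformly along lines between
% real-generated pairs.
L = \mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}\big[D(\tilde{x})\big]
  - \mathbb{E}_{x \sim \mathbb{P}_r}\big[D(x)\big]
  + \lambda\, \mathbb{E}_{\hat{x}}\Big[\big(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\big)^2\Big]
```

The gradient-penalty term softly enforces the 1-Lipschitz constraint on the critic D, which is what lets the critic score how far the T2WI-generated features are from the ground-truth DCE-MRI features.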
11
Saleh GA, Batouty NM, Gamal A, Elnakib A, Hamdy O, Sharafeldeen A, Mahmoud A, Ghazal M, Yousaf J, Alhalabi M, AbouEleneen A, Tolba AE, Elmougy S, Contractor S, El-Baz A. Impact of Imaging Biomarkers and AI on Breast Cancer Management: A Brief Review. Cancers (Basel) 2023; 15:5216. [PMID: 37958390 PMCID: PMC10650187 DOI: 10.3390/cancers15215216]
Abstract
Breast cancer stands out as the most frequently identified malignancy, ranking as the fifth leading cause of global cancer-related deaths. The American College of Radiology (ACR) introduced the Breast Imaging Reporting and Data System (BI-RADS) as a standard terminology facilitating communication between radiologists and clinicians; however, an update is now imperative to encompass the latest imaging modalities developed subsequent to the 5th edition of BI-RADS. Within this review article, we provide a concise history of BI-RADS, delve into advanced mammography techniques, ultrasonography (US), magnetic resonance imaging (MRI), PET/CT images, and microwave breast imaging, and subsequently furnish comprehensive, updated insights into Molecular Breast Imaging (MBI), diagnostic imaging biomarkers, and the assessment of treatment responses. This endeavor aims to enhance radiologists' proficiency in catering to the personalized needs of breast cancer patients. Lastly, we explore the augmented benefits of artificial intelligence (AI), machine learning (ML), and deep learning (DL) applications in segmenting, detecting, and diagnosing breast cancer, as well as the early prediction of the response of tumors to neoadjuvant chemotherapy (NAC). By assimilating state-of-the-art computer algorithms capable of deciphering intricate imaging data and aiding radiologists in rendering precise and effective diagnoses, AI has profoundly revolutionized the landscape of breast cancer radiology. Its vast potential holds the promise of bolstering radiologists' capabilities and ameliorating patient outcomes in the realm of breast cancer management.
Affiliation(s)
- Gehad A. Saleh
- Diagnostic and Interventional Radiology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Nihal M. Batouty
- Diagnostic and Interventional Radiology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Abdelrahman Gamal
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elnakib
- Electrical and Computer Engineering Department, School of Engineering, Penn State Erie, The Behrend College, Erie, PA 16563, USA
- Omar Hamdy
- Surgical Oncology Department, Oncology Centre, Mansoura University, Mansoura 35516, Egypt
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Jawad Yousaf
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Marah Alhalabi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Amal AbouEleneen
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elsaid Tolba
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- The Higher Institute of Engineering and Automotive Technology and Energy, New Heliopolis, Cairo 11829, Egypt
- Samir Elmougy
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Sohail Contractor
- Department of Radiology, University of Louisville, Louisville, KY 40202, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
12
Hagiwara A, Fujita S, Kurokawa R, Andica C, Kamagata K, Aoki S. Multiparametric MRI: From Simultaneous Rapid Acquisition Methods and Analysis Techniques Using Scoring, Machine Learning, Radiomics, and Deep Learning to the Generation of Novel Metrics. Invest Radiol 2023; 58:548-560. [PMID: 36822661 PMCID: PMC10332659 DOI: 10.1097/rli.0000000000000962]
Abstract
With the recent advancements in rapid imaging methods, higher numbers of contrasts and quantitative parameters can be acquired in less and less time. Some acquisition models simultaneously obtain multiparametric images and quantitative maps to reduce scan times and avoid potential issues associated with the registration of different images. Multiparametric magnetic resonance imaging (MRI) has the potential to provide complementary information on a target lesion and thus overcome the limitations of individual techniques. In this review, we introduce methods to acquire multiparametric MRI data in a clinically feasible scan time with a particular focus on simultaneous acquisition techniques, and we discuss how multiparametric MRI data can be analyzed as a whole rather than each parameter separately. Such data analysis approaches include clinical scoring systems, machine learning, radiomics, and deep learning. Other techniques combine multiple images to create new quantitative maps associated with meaningful aspects of human biology. They include the magnetic resonance g-ratio, the ratio of the inner to the outer diameter of a nerve fiber, and the aerobic glycolytic index, which captures the metabolic status of tumor tissues.
Affiliation(s)
- Akifumi Hagiwara
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Shohei Fujita
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Ryo Kurokawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Division of Neuroradiology, Department of Radiology, University of Michigan, Ann Arbor, Michigan
- Christina Andica
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Shigeki Aoki
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
13
Adam R, Dell'Aquila K, Hodges L, Maldjian T, Duong TQ. Deep learning applications to breast cancer detection by magnetic resonance imaging: a literature review. Breast Cancer Res 2023; 25:87. [PMID: 37488621 PMCID: PMC10367400 DOI: 10.1186/s13058-023-01687-4]
Abstract
Deep learning analysis of radiological images has the potential to improve the diagnostic accuracy of breast cancer, ultimately leading to better patient outcomes. This paper systematically reviewed the current literature on deep learning detection of breast cancer based on magnetic resonance imaging (MRI). The literature search covered 2015 to Dec 31, 2022, using PubMed. Other databases included Semantic Scholar, ACM Digital Library, Google Search, Google Scholar, and preprint repositories (such as Research Square). Articles that did not use deep learning (such as texture analysis) were excluded. PRISMA guidelines for reporting were used. We analyzed the different deep learning algorithms, methods of analysis, experimental designs, MRI image types, types of ground truths, sample sizes, numbers of benign and malignant lesions, and performance reported in the literature. We discussed lessons learned, challenges to broad deployment in clinical practice, and suggested future research directions.
Affiliation(s)
- Richard Adam
- Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Kevin Dell'Aquila
- Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Laura Hodges
- Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Takouhie Maldjian
- Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Tim Q Duong
- Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
14
Shimron E, Perlman O. AI in MRI: Computational Frameworks for a Faster, Optimized, and Automated Imaging Workflow. Bioengineering (Basel) 2023; 10:492. [PMID: 37106679 PMCID: PMC10135995 DOI: 10.3390/bioengineering10040492]
Abstract
Over the last decade, artificial intelligence (AI) has made an enormous impact on a wide range of fields, including science, engineering, informatics, finance, and transportation [...].
Affiliation(s)
- Efrat Shimron
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA
- Or Perlman
- Department of Biomedical Engineering, Tel Aviv University, Tel Aviv 6997801, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel
15
Zhao X, Bai JW, Guo Q, Ren K, Zhang GJ. Clinical applications of deep learning in breast MRI. Biochim Biophys Acta Rev Cancer 2023; 1878:188864. [PMID: 36822377 DOI: 10.1016/j.bbcan.2023.188864]
Abstract
Deep learning (DL) is one of the most powerful data-driven machine-learning techniques in artificial intelligence (AI). It can automatically learn from raw data without manual feature selection. DL models have led to remarkable advances in data extraction and analysis for medical imaging. Magnetic resonance imaging (MRI) has proven useful in delineating the characteristics and extent of breast lesions and tumors. This review summarizes the current state-of-the-art applications of DL models in breast MRI. Many recent DL models were examined in this field, along with several advanced learning approaches and methods for data normalization and breast and lesion segmentation. For clinical applications, DL-based breast MRI models were proven useful in five aspects: diagnosis of breast cancer, classification of molecular types, classification of histopathological types, prediction of neoadjuvant chemotherapy response, and prediction of lymph node metastasis. For subsequent studies, further improvement in data acquisition and preprocessing is necessary, additional DL techniques in breast MRI should be investigated, and wider clinical applications need to be explored.
Affiliation(s)
- Xue Zhao
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Jing-Wen Bai
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Oncology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
- Qiu Guo
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Ke Ren
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Guo-Jun Zhang
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
16
Chaudhury S, Sau K. A BERT encoding with Recurrent Neural Network and Long-Short Term Memory for breast cancer image classification. Decision Analytics Journal 2023; 6:100177. [DOI: 10.1016/j.dajour.2023.100177]
17
Patsanis A, Sunoqrot MRS, Bathen TF, Elschot M. CROPro: a tool for automated cropping of prostate magnetic resonance images. J Med Imaging (Bellingham) 2023; 10:024004. [PMID: 36895761 PMCID: PMC9990132 DOI: 10.1117/1.jmi.10.2.024004]
Abstract
Purpose To bypass manual data preprocessing and optimize deep learning performance, we developed and evaluated CROPro, a tool to standardize automated cropping of prostate magnetic resonance (MR) images. Approach CROPro enables automatic cropping of MR images regardless of patient health status, image size, prostate volume, or pixel spacing. CROPro can crop foreground pixels from a region of interest (e.g., prostate) with different image sizes, pixel spacings, and sampling strategies. Performance was evaluated in the context of clinically significant prostate cancer (csPCa) classification. Transfer learning was used to train five convolutional neural network (CNN) and five vision transformer (ViT) models using different combinations of cropped image sizes (64 × 64, 128 × 128, and 256 × 256 pixels²), pixel spacings (0.2 × 0.2, 0.3 × 0.3, 0.4 × 0.4, and 0.5 × 0.5 mm²), and sampling strategies (center, random, and stride cropping) over the prostate. T2-weighted MR images (N = 1475) from the publicly available PI-CAI challenge were used to train (N = 1033), validate (N = 221), and test (N = 221) all models. Results Among CNNs, SqueezeNet with stride cropping (image size: 128 × 128, pixel spacing: 0.2 × 0.2 mm²) achieved the best classification performance (0.678 ± 0.006). Among ViTs, ViT-H/14 with random cropping (image size: 64 × 64, pixel spacing: 0.5 × 0.5 mm²) achieved the best performance (0.756 ± 0.009). Model performance depended on the cropped area, with the optimal size generally larger for center cropping (∼40 cm²) than for random/stride cropping (∼10 cm²). Conclusion We found that the csPCa classification performance of CNNs and ViTs depends on the cropping settings. We demonstrated that CROPro is well suited to optimize these settings in a standardized manner, which could improve the overall performance of deep learning models.
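The three sampling strategies the abstract compares (center, random, and stride cropping) are straightforward to picture in code. The following numpy sketch is illustrative only and is not taken from the CROPro source; the function names are my own.

```python
import numpy as np

def center_crop(img, size):
    """Single square crop centered on the image."""
    h, w = img.shape
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def random_crop(img, size, rng):
    """Single square crop at a uniformly random valid offset."""
    h, w = img.shape
    top = int(rng.integers(0, h - size + 1))
    left = int(rng.integers(0, w - size + 1))
    return img[top:top + size, left:left + size]

def stride_crops(img, size, stride):
    """All square crops on a regular grid with the given stride."""
    h, w = img.shape
    return [img[t:t + size, l:l + size]
            for t in range(0, h - size + 1, stride)
            for l in range(0, w - size + 1, stride)]

# Toy 8 x 8 "image": stride cropping with size 4 and stride 4 yields a 2 x 2 grid
img = np.arange(64, dtype=float).reshape(8, 8)
patches = stride_crops(img, size=4, stride=4)  # 4 non-overlapping patches
```

Center cropping yields one sample per image, random cropping a different sample each epoch, and stride cropping a deterministic set of overlapping or tiled patches, which is why the optimal crop area can differ between strategies.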
Affiliation(s)
- Alexandros Patsanis
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Trondheim, Norway
- Mohammed R. S. Sunoqrot
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Trondheim, Norway
- St. Olavs Hospital, Trondheim University Hospital, Department of Radiology and Nuclear Medicine, Trondheim, Norway
- Tone F. Bathen
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Trondheim, Norway
- St. Olavs Hospital, Trondheim University Hospital, Department of Radiology and Nuclear Medicine, Trondheim, Norway
- Mattijs Elschot
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Trondheim, Norway
- St. Olavs Hospital, Trondheim University Hospital, Department of Radiology and Nuclear Medicine, Trondheim, Norway
18
Prediction of pathologic complete response to neoadjuvant systemic therapy in triple negative breast cancer using deep learning on multiparametric MRI. Sci Rep 2023; 13:1171. [PMID: 36670144 PMCID: PMC9859781 DOI: 10.1038/s41598-023-27518-2]
Abstract
Triple-negative breast cancer (TNBC) is an aggressive subtype of breast cancer. Neoadjuvant systemic therapy (NAST) followed by surgery is currently the standard of care for TNBC, with 50-60% of patients achieving pathologic complete response (pCR). We investigated the ability of deep learning (DL) on dynamic contrast-enhanced (DCE) MRI and diffusion-weighted imaging acquired early during NAST to predict TNBC patients' pCR status in the breast. During the development phase using the images of 130 TNBC patients, the DL model achieved areas under the receiver operating characteristic curve (AUCs) of 0.97 ± 0.04 and 0.82 ± 0.10 for training and validation, respectively. The model achieved an AUC of 0.86 ± 0.03 when evaluated in an independent testing group of 32 patients. In an additional prospective blinded testing group of 48 patients, the model achieved an AUC of 0.83 ± 0.02. These results demonstrate that DL based on multiparametric MRI can potentially differentiate TNBC patients with pCR from those with non-pCR in the breast early during NAST.
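The AUCs quoted throughout these abstracts have a concrete probabilistic meaning: the chance that a randomly chosen positive case (e.g. pCR) receives a higher model score than a randomly chosen negative case. A minimal numpy sketch of that Mann-Whitney formulation (my own illustration, not the paper's code):

```python
import numpy as np

def auc_mann_whitney(y_true, scores):
    """AUC via pairwise comparisons: P(score_pos > score_neg), ties count 1/2."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    diff = pos[:, None] - neg[None, :]  # every positive/negative pair
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size

# Toy example: 2 positive and 2 negative cases; 3 of the 4 pairs are ranked correctly
auc = auc_mann_whitney([1, 1, 0, 0], [0.9, 0.3, 0.7, 0.1])  # 0.75
```

The ± values reported with each AUC are typically obtained by resampling (e.g. bootstrapping) this statistic over the test cases.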
19
Zhang X, Liu M, Ren W, Sun J, Wang K, Xi X, Zhang G. Predicting of axillary lymph node metastasis in invasive breast cancer using multiparametric MRI dataset based on CNN model. Front Oncol 2022; 12:1069733. [PMID: 36561533 PMCID: PMC9763602 DOI: 10.3389/fonc.2022.1069733]
Abstract
Purpose To develop a multiparametric MRI model for predicting axillary lymph node metastasis in invasive breast cancer. Methods Clinical data and T2WI, DWI, and DCE-MRI images of 252 patients with invasive breast cancer were retrospectively analyzed and divided into the axillary lymph node metastasis (ALNM) group and the non-ALNM group, using biopsy results as the reference standard. The regions of interest (ROI) in the T2WI, DWI, and DCE-MRI images were segmented using MATLAB software, resized to a uniform 224 × 224 pixels, and normalized as input to the T2WI, DWI, and DCE-MRI models, all of which were based on ResNet50 networks. Using the weighted voting method from ensemble learning, the T2WI, DWI, and DCE-MRI models then served as base models to construct a multiparametric MRI model. The entire dataset was randomly divided into a training set (202 cases: 78 ALNM, 124 non-ALNM) and a testing set (50 cases: 20 ALNM, 30 non-ALNM). Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the models were calculated. The receiver operating characteristic (ROC) curve and area under the curve (AUC) were used to evaluate the diagnostic performance of each model for axillary lymph node metastasis, and the DeLong test was performed, with P < 0.05 considered statistically significant. Results For the assessment of axillary lymph node status in invasive breast cancer on the test set, the multiparametric MRI model yielded an AUC of 0.913 (95% CI, 0.799-0.974); the T2WI-based model yielded an AUC of 0.908 (95% CI, 0.792-0.971); the DWI-based model achieved an AUC of 0.702 (95% CI, 0.556-0.823); and the AUC of the DCE-MRI-based model was 0.572 (95% CI, 0.424-0.711). The improvement in diagnostic performance of the multiparametric MRI model over the DWI- and DCE-MRI-based models was significant (P < 0.01 for both). However, the increase was not significant compared with the T2WI-based model (P = 0.917). Conclusion Multiparametric MRI image analysis based on an ensemble CNN model with deep learning has practical value for preoperative prediction of axillary lymph node metastasis in invasive breast cancer.
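The weighted-voting fusion described in the Methods amounts to a weighted average of the per-sequence malignancy probabilities. Here is a minimal numpy sketch under my own naming; the weights shown are hypothetical (the abstract does not report the exact weights used).

```python
import numpy as np

def weighted_vote(probs, weights):
    """Fuse per-model probabilities (shape: n_models x n_cases) with
    normalized weights, so the fused output stays in [0, 1]."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w @ np.asarray(probs, dtype=float)

# Hypothetical per-case malignancy probabilities from the three base models
p_t2w = [0.90, 0.20]
p_dwi = [0.70, 0.40]
p_dce = [0.60, 0.50]
# One plausible choice: weight each base model by its validation AUC (illustrative values)
fused = weighted_vote([p_t2w, p_dwi, p_dce], weights=[0.908, 0.702, 0.572])
```

Weighting by a held-out performance metric lets the stronger T2WI model dominate while the weaker DWI and DCE models still contribute, which matches the reported pattern where the ensemble edged out each single-sequence model.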
Affiliation(s)
- Xiaodong Zhang
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University, Jinan, China; Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Menghan Liu
- Department of Health Management, The First Affiliated Hospital of Shandong First Medical University, Jinan, China
- Wanqing Ren
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University, Jinan, China; Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Jingxiang Sun
- Department of Radiology, The First Affiliated Hospital of Shandong First Medical University, Jinan, China; Postgraduate Department, Shandong First Medical University (Shandong Academy of Medical Sciences), Jinan, China
- Kesong Wang
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
- Xiaoming Xi
- School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
- Guang Zhang
- Department of Health Management, The First Affiliated Hospital of Shandong First Medical University, Jinan, China
20
Exploring EPR Parameters of 187Re Complexes for Designing New MRI Probes: From the Gas Phase to Solution and a Model Protein Environment. J CHEM-NY 2022. [DOI: 10.1155/2022/7056284]
Abstract
Breast cancer is one of the major types of cancer around the world, and early diagnosis is essential for successful treatment. New contrast agents (CAs), with reduced toxicity, are needed to improve diagnosis. One of the most promising Magnetic Resonance Imaging (MRI) CAs is based on rhenium conjugated with a benzothiazole derivative (ReABT). DFT was used to evaluate the best methodology for calculating the hyperfine coupling constant (Aiso) of ReABT. A thermodynamic analysis was then performed to confirm the stability of the complex. Furthermore, a docking study of ReABT at the enzyme PI3K active site and Aiso calculations of ReABT in the enzyme environment were carried out. The best methodology for the Aiso calculation of ReABT used the M06L functional, the SARC-ZORA-TZVP (for Re) and TZVP (for all other atoms) basis sets, a relativistic Hamiltonian, and the CPCM solvation model with water as the solvent, which confirms that relativistic effects are important for calculating the Aiso values. In addition, the thermodynamic analysis indicates that ReABT presents higher stability and lower toxicity than Gd-based CAs. The docking studies point out that ReABT interacts with alanine, aspartate, and lysine residues of the PI3K active site. In the enzyme environment, Aiso values decrease significantly. These findings indicate that ReABT could be a good candidate for a new contrast agent.
21
Applying Deep Learning for Breast Cancer Detection in Radiology. Curr Oncol 2022; 29:8767-8793. [PMID: 36421343 PMCID: PMC9689782 DOI: 10.3390/curroncol29110690]
Abstract
Recent advances in deep learning have enhanced medical imaging research. Breast cancer is the most prevalent cancer among women, and many applications have been developed to improve its early detection. The purpose of this review is to examine how various deep learning methods can be applied to breast cancer screening workflows. We summarize deep learning methods, data availability, and different screening methods for breast cancer, including mammography, thermography, ultrasound, and magnetic resonance imaging. We then explore deep learning in diagnostic breast imaging and survey the literature. In conclusion, we discuss some of the limitations and opportunities of integrating artificial intelligence into breast cancer clinical practice.
22
Deep Learning Models for Automated Assessment of Breast Density Using Multiple Mammographic Image Types. Cancers (Basel) 2022; 14:5003. [PMID: 36291787 PMCID: PMC9599904 DOI: 10.3390/cancers14205003]
Abstract
Simple Summary The DL model predictions in automated breast density assessment were independent of the imaging technologies, agreed moderately or substantially with the clinical reader density values, and showed improved performance compared to inclusion of commercial software values. Abstract Recently, convolutional neural network (CNN) models have been proposed to automate the assessment of breast density, breast cancer detection, or risk stratification using a single image modality. However, analysis of breast density across multiple mammographic types using clinical data has not been reported in the literature. In this study, we investigate pre-trained EfficientNetB0 deep learning (DL) models for automated assessment of breast density using multiple mammographic types, with and without clinical information, to improve the reliability and versatility of reporting. 120,000 for-processing and for-presentation full-field digital mammograms (FFDM), digital breast tomosynthesis (DBT), and synthesized 2D images from 5032 women were retrospectively analyzed. Each participant underwent up to 3 screening examinations and completed a questionnaire at each screening encounter. Pre-trained EfficientNetB0 DL models with or without clinical history were optimized. The DL models were evaluated using BI-RADS (fatty, scattered fibroglandular densities, heterogeneously dense, or extremely dense) versus binary (non-dense or dense) density classification. Pre-trained EfficientNetB0 model performances were compared against inter-observer and commercial software (Volpara) variabilities. Results show that the average Fleiss' kappa score between observers ranged from 0.31-0.50 and 0.55-0.69 for the BI-RADS and binary classifications, respectively, showing higher uncertainty among experts. Volpara-observer agreement was 0.33 and 0.54 for the BI-RADS and binary classifications, respectively, showing fair to moderate agreement. However, our proposed pre-trained EfficientNetB0 DL model-observer agreement was 0.61-0.66 and 0.70-0.75 for the BI-RADS and binary classifications, respectively, showing moderate to substantial agreement. Overall, the best breast density estimation was achieved using for-presentation FFDM and DBT images without added clinical information. The pre-trained EfficientNetB0 model can automatically assess breast density from any image modality type, with the best results obtained from for-presentation FFDM and DBT, which are the most common images archived in clinical practice.
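The Fleiss' kappa scores this abstract uses to quantify inter-observer agreement can be computed directly from a subjects-by-categories matrix of rating counts. A minimal numpy sketch (my own implementation, not the study's code):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a (n_subjects, n_categories) matrix of rating counts,
    assuming the same number of raters for every subject."""
    counts = np.asarray(counts, dtype=float)
    n = counts[0].sum()                           # raters per subject
    p_j = counts.sum(axis=0) / counts.sum()       # overall category proportions
    P_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-subject agreement
    P_bar, P_e = P_i.mean(), (p_j ** 2).sum()     # observed vs chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Three raters, two categories (e.g. non-dense / dense), perfect agreement
kappa = fleiss_kappa([[3, 0], [0, 3], [3, 0]])    # kappa = 1.0
```

Values near 0.4-0.6 read as "moderate" and 0.6-0.8 as "substantial" on the commonly cited Landis-Koch scale, which is the vocabulary the abstract uses.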
23
Baughan N, Douglas L, Giger ML. Past, Present, and Future of Machine Learning and Artificial Intelligence for Breast Cancer Screening. Journal of Breast Imaging 2022; 4:451-459. [PMID: 38416954 DOI: 10.1093/jbi/wbac052]
Abstract
Breast cancer screening has evolved substantially over the past few decades because of advancements in new image acquisition systems and novel artificial intelligence (AI) algorithms. This review provides a brief overview of the history, current state, and future of AI in breast cancer screening and diagnosis, along with the challenges involved in the development of AI systems. Although AI has been developed for interpretation tasks associated with breast cancer screening for decades, its potential to combat the subjective nature and improve the efficiency of human image interpretation continues to expand. The rapid growth of computational power and deep learning has greatly accelerated AI research, with promising performance in detection and classification tasks across imaging modalities. Most AI systems, based on human-engineered or deep learning methods, serve as concurrent or secondary readers, that is, as aids to radiologists for a specific, well-defined task. In the future, AI may be able to perform multiple integrated tasks, making decisions at or beyond the level of human ability. Artificial intelligence may also serve as a partial primary reader to streamline ancillary tasks, triaging cases or ruling out obvious normal cases. However, before AI is used as an independent, autonomous reader, various challenges need to be addressed, including explainability and interpretability, in addition to repeatability and generalizability, to ensure that AI will provide a significant clinical benefit to breast cancer screening across all populations.
Affiliation(s)
- Natalie Baughan
- University of Chicago, Department of Radiology Committee on Medical Physics, Chicago, IL, USA
- Lindsay Douglas
- University of Chicago, Department of Radiology Committee on Medical Physics, Chicago, IL, USA
- Maryellen L Giger
- University of Chicago, Department of Radiology Committee on Medical Physics, Chicago, IL, USA
|
24
|
Zhu J, Geng J, Shan W, Zhang B, Shen H, Dong X, Liu M, Li X, Cheng L. Development and validation of a deep learning model for breast lesion segmentation and characterization in multiparametric MRI. Front Oncol 2022; 12:946580. [PMID: 36033449 PMCID: PMC9402900 DOI: 10.3389/fonc.2022.946580] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Accepted: 07/12/2022] [Indexed: 11/13/2022] Open
Abstract
Importance The utilization of artificial intelligence for the differentiation of benign and malignant breast lesions in multiparametric MRI (mpMRI) assists radiologists to improve diagnostic performance. Objectives To develop an automated deep learning model for breast lesion segmentation and characterization and to evaluate the characterization performance of AI models and radiologists. Materials and methods For lesion segmentation, 2,823 patients were used for the training, validation, and testing of the VNet-based segmentation models, and the average Dice similarity coefficient (DSC) between the manual segmentation by radiologists and the mask generated by VNet was calculated. For lesion characterization, 3,303 female patients with 3,607 pathologically confirmed lesions (2,213 malignant and 1,394 benign lesions) were used for the three ResNet-based characterization models (two single-input and one multi-input models). Histopathology was used as the diagnostic criterion standard to assess the characterization performance of the AI models and the BI-RADS categorized by the radiologists, in terms of sensitivity, specificity, accuracy, and the area under the receiver operating characteristic curve (AUC). An additional 123 patients with 136 lesions (81 malignant and 55 benign lesions) from another institution were available for external testing. Results Of the 5,811 patients included in the study, the mean age was 46.14 (range 11–89) years. In the segmentation task, a DSC of 0.860 was obtained between the VNet-generated mask and manual segmentation by radiologists. In the characterization task, the AUCs of the multi-input and the other two single-input models were 0.927, 0.821, and 0.795, respectively. Compared to the single-input DWI or DCE model, the multi-input DCE and DWI model obtained a significant increase in sensitivity, specificity, and accuracy (0.831 vs. 0.772/0.776, 0.874 vs. 0.630/0.709, 0.846 vs. 0.721/0.752). 
Furthermore, the specificity of the multi-input model was higher than that of the radiologists, whether using BI-RADS category 3 or 4 as a cutoff point (0.874 vs. 0.404/0.841), and the accuracy was intermediate between the two assessment methods (0.846 vs. 0.773/0.882). For the external testing, the performance of the three models remained robust with AUCs of 0.812, 0.831, and 0.885, respectively. Conclusions Combining DCE with DWI was superior to applying a single sequence for breast lesion characterization. The deep learning computer-aided diagnosis (CADx) model we developed significantly improved specificity and achieved comparable accuracy to the radiologists with promise for clinical application to provide preliminary diagnoses.
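The Dice similarity coefficient (DSC) reported above compares the VNet-generated mask with the radiologists' manual segmentation. A minimal sketch of that overlap measure, using flat binary lists as stand-ins for mask voxels:

```python
# Dice similarity coefficient between a predicted and a manual binary
# segmentation mask: DSC = 2*|A ∩ B| / (|A| + |B|).

def dice_coefficient(pred, truth):
    """DSC for two equal-length binary masks (flattened voxel lists)."""
    assert len(pred) == len(truth)
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

pred  = [1, 1, 1, 0, 0, 0, 1, 0]
truth = [1, 1, 0, 0, 0, 1, 1, 0]
print(dice_coefficient(pred, truth))  # 3 overlapping voxels, 4 + 4 total -> 0.75
```

A DSC of 0.860, as reported, means the automatic and manual masks overlap well relative to their combined size.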
Affiliation(s)
- Jingjin Zhu
- School of Medicine, Nankai University, Tianjin, China
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
- Jiahui Geng
- Department of Neurology, Beijing Tiantan Hospital, Beijing, China
- Wei Shan
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Boya Zhang
- School of Medicine, Nankai University, Tianjin, China
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
- Huaqing Shen
- Department of Neurology, Beijing Tiantan Hospital, Beijing, China
- Xiaohan Dong
- Department of Radiology, Chinese People’s Liberation Army General Hospital, Beijing, China
- Mei Liu
- Department of Pathology, Chinese People’s Liberation Army General Hospital, Beijing, China
- Xiru Li
- Department of General Surgery, Chinese People’s Liberation Army General Hospital, Beijing, China
- *Correspondence: Liuquan Cheng; Xiru Li
- Liuquan Cheng
- Department of Radiology, Chinese People’s Liberation Army General Hospital, Beijing, China
- *Correspondence: Liuquan Cheng; Xiru Li
|
25
|
Multimodal Prediction of Five-Year Breast Cancer Recurrence in Women Who Receive Neoadjuvant Chemotherapy. Cancers (Basel) 2022; 14:cancers14163848. [PMID: 36010844 PMCID: PMC9405765 DOI: 10.3390/cancers14163848] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2022] [Revised: 07/29/2022] [Accepted: 08/04/2022] [Indexed: 11/17/2022] Open
Abstract
In current clinical practice, it is difficult to predict whether a patient receiving neoadjuvant chemotherapy (NAC) for breast cancer is likely to encounter recurrence after treatment and have the cancer recur locally in the breast or in other areas of the body. We explore the use of clinical history, immunohistochemical markers, and multiparametric magnetic resonance imaging (DCE, ADC, Dixon) to predict the risk of post-treatment recurrence within five years. We performed a retrospective study on a cohort of 1738 patients from Institut Curie and analyzed the data using classical machine learning, image processing, and deep learning. Our results demonstrate the ability to predict recurrence prior to NAC treatment initiation using each modality alone, and the possible improvement achieved by combining the modalities. When evaluated on holdout data, the multimodal model achieved an AUC of 0.75 (CI: 0.70, 0.80) and 0.57 specificity at 0.90 sensitivity. We then stratified the data based on known prognostic biomarkers. We found that our models can provide accurate recurrence predictions (AUC > 0.89) for specific groups of women under 50 years old with poor prognoses. A version of our method won second place in the BMMR2 Challenge, missing first place by a very small margin, and stood out among the other challenge entries.
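The "0.57 specificity at 0.90 sensitivity" figure above corresponds to picking a score threshold that meets a sensitivity target and reporting the specificity achieved there. A sketch of that operating-point calculation, with made-up scores rather than the study's model outputs:

```python
# Sweep thresholds over model scores; among thresholds meeting the
# sensitivity target, report the best achievable specificity.

def specificity_at_sensitivity(scores, labels, target_sens):
    """scores: model outputs; labels: 1 = recurrence, 0 = no recurrence."""
    best_spec = 0.0
    pos = sum(labels)
    neg = len(labels) - pos
    for thr in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < thr and y == 0)
        if tp / pos >= target_sens:            # threshold keeps enough true positives
            best_spec = max(best_spec, tn / neg)
    return best_spec

scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,    1,   0,    0,   1,   0]
print(specificity_at_sensitivity(scores, labels, 0.75))  # -> 0.75
```

Raising the sensitivity target forces a lower threshold and usually costs specificity, which is the trade-off the abstract's operating point summarizes.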
|
26
|
Altabella L, Benetti G, Camera L, Cardano G, Montemezzi S, Cavedon C. Machine learning for multi-parametric breast MRI: radiomics-based approaches for lesion classification. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7d8f] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 06/30/2022] [Indexed: 11/11/2022]
Abstract
In the artificial intelligence era, machine learning (ML) techniques have gained more and more importance in the advanced analysis of medical images in several fields of modern medicine. Radiomics extracts a huge number of medical imaging features revealing key components of tumor phenotype that can be linked to genomic pathways. The multi-dimensional nature of radiomics requires highly accurate and reliable machine-learning methods to create predictive models for classification or therapy response assessment.
Multi-parametric breast magnetic resonance imaging (MRI) is routinely used for dense breast imaging as well as for screening in high-risk patients and has shown its potential to improve clinical diagnosis of breast cancer. For this reason, the application of ML techniques to breast MRI, in particular to multi-parametric imaging, is rapidly expanding and enhancing both diagnostic and prognostic power. In this review we will focus on the recent literature related to the use of ML in multi-parametric breast MRI for tumor classification and differentiation of molecular subtypes. Indeed, at present, different models and approaches have been employed for this task, requiring a detailed description of the advantages and drawbacks of each technique and a general overview of their performances.
|
27
|
The Usefulness of Gradient-Weighted CAM in Assisting Medical Diagnoses. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12157748] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
In modern medicine, medical imaging technologies such as computed tomography (CT), X-ray, ultrasound, magnetic resonance imaging (MRI), nuclear medicine, etc., have been proven to provide useful diagnostic information by displaying areas of a lesion or tumor not visible to the human eye, and may also help provide additional latent information by using modern data analysis methods. These methods, including Artificial Intelligence (AI) technologies, are based on deep learning architectures, and have shown remarkable results in recent studies. However, the lack of explanatory ability of connection-based, instead of algorithm-based, deep learning technologies is one of the main reasons for the delay in the acceptance of these technologies in the mainstream medical field. One recent method that may offer explanatory ability for CNN-based deep learning networks is gradient-weighted class activation mapping (Grad-CAM), which produces heat-maps that may offer explanations of the classification results. Many studies in the literature already compare the objective metrics of Grad-CAM-generated heat-maps against other methods. However, the subjective evaluation of AI-based classification/prediction results using medical images by qualified personnel could potentially contribute more to the acceptance of AI than objective metrics. The purpose of this paper is to investigate whether and how Grad-CAM heat-maps can help physicians and radiologists in making diagnoses by presenting the results from AI-based classifications, as well as their associated Grad-CAM-generated heat-maps, to a qualified radiologist. The results of this study show that the radiologist considers Grad-CAM-generated heat-maps to be generally helpful toward diagnosis.
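For readers unfamiliar with the mechanics, Grad-CAM weights each final-convolution feature map by the spatial average of the class-score gradients flowing into it, sums the weighted maps, and applies a ReLU. A toy sketch of that computation (the 2x2 activations and gradients are invented stand-ins for a real CNN's tensors):

```python
# Minimal Grad-CAM sketch: channel weight = global average of that channel's
# gradients; heat-map = ReLU of the weighted sum of the activation maps.

def grad_cam(activations, gradients):
    """activations, gradients: lists of K channel maps (H x W nested lists)."""
    heatmap = [[0.0] * len(activations[0][0]) for _ in activations[0]]
    for act_k, grad_k in zip(activations, gradients):
        cells = [g for row in grad_k for g in row]
        w_k = sum(cells) / len(cells)          # global-average-pooled gradient
        for i, row in enumerate(act_k):
            for j, a in enumerate(row):
                heatmap[i][j] += w_k * a
    # ReLU: keep only locations with positive influence on the class score
    return [[max(0.0, v) for v in row] for row in heatmap]

acts  = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [1.0, 0.0]]]
grads = [[[0.4, 0.4], [0.4, 0.4]], [[-0.2, -0.2], [-0.2, -0.2]]]
print(grad_cam(acts, grads))  # channel 0 (w>0) highlights its hotspots; channel 1 (w<0) is suppressed
```

In practice the heat-map is upsampled to the input image size and overlaid on the scan, which is what the radiologist in this study was shown.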
|
28
|
Wang W, Jiang R, Cui N, Li Q, Yuan F, Xiao Z. Semi-supervised vision transformer with adaptive token sampling for breast cancer classification. Front Pharmacol 2022; 13:929755. [PMID: 35935827 PMCID: PMC9353650 DOI: 10.3389/fphar.2022.929755] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Accepted: 06/29/2022] [Indexed: 12/24/2022] Open
Abstract
Various imaging techniques combined with machine learning (ML) models have been used to build computer-aided diagnosis (CAD) systems for breast cancer (BC) detection and classification. The rise of deep learning models in recent years, represented by convolutional neural network (CNN) models, has pushed the accuracy of ML-based CAD systems to a new level that is comparable to human experts. Existing studies have explored the usage of a wide spectrum of CNN models for BC detection, and supervised learning has been the mainstream. In this study, we propose a semi-supervised learning framework based on the Vision Transformer (ViT). The ViT is a model that has been validated to outperform CNN models on numerous classification benchmarks but its application in BC detection has been rare. The proposed method offers a custom semi-supervised learning procedure that unifies both supervised and consistency training to enhance the robustness of the model. In addition, the method uses an adaptive token sampling technique that can strategically sample the most significant tokens from the input image, leading to an effective performance gain. We validate our method on two datasets with ultrasound and histopathology images. Results demonstrate that our method can consistently outperform the CNN baselines for both learning tasks. The code repository of the project is available at https://github.com/FeiYee/Breast-area-TWO.
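The adaptive token sampling idea above amounts to keeping only the most significant patch tokens before further transformer layers. A simplified sketch (the scoring is a stand-in; the paper derives significance from attention, and its exact sampling procedure differs):

```python
# Keep the k highest-scoring patch tokens and drop the rest, shrinking the
# sequence a Vision Transformer must process downstream.

def sample_tokens(tokens, scores, k):
    """Return the k tokens with the highest significance scores, in original order."""
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:k])  # restore the spatial order of the kept patches
    return [tokens[i] for i in keep]

tokens = ["patch0", "patch1", "patch2", "patch3", "patch4"]
scores = [0.05, 0.40, 0.10, 0.30, 0.15]
print(sample_tokens(tokens, scores, 3))  # -> ['patch1', 'patch3', 'patch4']
```

The payoff is that background patches (common in ultrasound and histopathology fields of view) are discarded early, so compute concentrates on diagnostically relevant regions.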
Affiliation(s)
- Wei Wang
- Department of Breast Surgery, Hubei Provincial Clinical Research Center for Breast Cancer, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Ran Jiang
- Department of Thyroid and Breast Surgery, Maternal and Child Health Hospital of Hubei Province, Wuhan, Hubei, China
- Ning Cui
- Department of Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Qian Li
- Department of Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Feng Yuan
- Department of Breast Surgery, Hubei Provincial Clinical Research Center for Breast Cancer, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Zhifeng Xiao
- School of Engineering, Penn State Erie, The Behrend College, Erie, PA, United States
|
29
|
Yin HL, Jiang Y, Xu Z, Jia HH, Lin GW. Combined diagnosis of multiparametric MRI-based deep learning models facilitates differentiating triple-negative breast cancer from fibroadenoma magnetic resonance BI-RADS 4 lesions. J Cancer Res Clin Oncol 2022; 149:2575-2584. [PMID: 35771263 DOI: 10.1007/s00432-022-04142-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2022] [Accepted: 06/13/2022] [Indexed: 02/05/2023]
Abstract
PURPOSE To investigate the value of the combined diagnosis of multiparametric MRI-based deep learning models to differentiate triple-negative breast cancer (TNBC) from fibroadenoma magnetic resonance Breast Imaging-Reporting and Data System category 4 (BI-RADS 4) lesions and to evaluate whether the combined diagnosis of these models could improve the diagnostic performance of radiologists. METHODS A total of 319 female patients with 319 pathologically confirmed BI-RADS 4 lesions were randomly divided into training, validation, and testing sets in this retrospective study. The three models were established based on contrast-enhanced T1-weighted imaging, diffusion-weighted imaging, and T2-weighted imaging using the training and validation sets. The artificial intelligence (AI) combination score was calculated according to the results of three models. The diagnostic performances of four radiologists with and without AI assistance were compared with the AI combination score on the testing set. The area under the curve (AUC), sensitivity, specificity, accuracy, and weighted kappa value were calculated to assess the performance. RESULTS The AI combination score yielded an excellent performance (AUC = 0.944) on the testing set. With AI assistance, the AUC for the diagnosis of junior radiologist 1 (JR1) increased from 0.833 to 0.885, and that for JR2 increased from 0.823 to 0.876. The AUCs of senior radiologist 1 (SR1) and SR2 slightly increased from 0.901 and 0.950 to 0.925 and 0.975 after AI assistance, respectively. CONCLUSION Combined diagnosis of multiparametric MRI-based deep learning models to differentiate TNBC from fibroadenoma magnetic resonance BI-RADS 4 lesions can achieve comparable performance to that of SRs and improve the diagnostic performance of JRs.
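The abstract does not specify how the AI combination score merges the three per-sequence models; a plain average is one simple possibility, sketched below together with an AUC computed as the Mann-Whitney rank statistic (all probabilities invented for illustration):

```python
# Hypothetical combination rule (mean of the three sequence-specific model
# probabilities) plus AUC as the probability that a random malignant case
# outscores a random benign one.

def combination_score(p_t1c, p_dwi, p_t2):
    """Assumed rule: average the contrast-T1, DWI, and T2 model outputs."""
    return (p_t1c + p_dwi + p_t2) / 3.0

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic; ties count half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 1, 0, 0]  # 1 = TNBC, 0 = fibroadenoma (illustrative)
probs = [(0.9, 0.8, 0.7), (0.6, 0.9, 0.6), (0.4, 0.3, 0.5),
         (0.4, 0.5, 0.45), (0.2, 0.4, 0.3), (0.5, 0.6, 0.4)]
combined = [combination_score(*p) for p in probs]
print(round(auc(combined, labels), 3))  # 8/9 of positive-negative pairs ranked correctly -> 0.889
```

An AUC of 0.944, as reported for the combination score, means roughly 94% of malignant-benign pairs are ranked in the correct order.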
Affiliation(s)
- Hao-Lin Yin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Jing'an District, 221# Yan'anxi Road, Shanghai, 200040, China
- Yu Jiang
- Department of Radiology, West China Hospital of Sichuan University, 37# Guo Xue Xiang, Chengdu, Sichuan, China
- Zihan Xu
- Lung Cancer Center, Cancer Center and State Key Laboratory of Biotherapy, West China Hospital of Sichuan University, 37# Guo Xue Xiang, Chengdu, Sichuan, China
- Hui-Hui Jia
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Jing'an District, 221# Yan'anxi Road, Shanghai, 200040, China
- Guang-Wu Lin
- Department of Radiology, Huadong Hospital Affiliated to Fudan University, Jing'an District, 221# Yan'anxi Road, Shanghai, 200040, China
|
30
|
Xiang Y, Dong X, Zeng C, Liu J, Liu H, Hu X, Feng J, Du S, Wang J, Han Y, Luo Q, Chen S, Li Y. Clinical Variables, Deep Learning and Radiomics Features Help Predict the Prognosis of Adult Anti-N-methyl-D-aspartate Receptor Encephalitis Early: A Two-Center Study in Southwest China. Front Immunol 2022; 13:913703. [PMID: 35720336 PMCID: PMC9199424 DOI: 10.3389/fimmu.2022.913703] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2022] [Accepted: 04/26/2022] [Indexed: 11/17/2022] Open
Abstract
Objective To develop a fusion model combining clinical variables, deep learning (DL), and radiomics features to predict the functional outcomes early in patients with adult anti-N-methyl-D-aspartate receptor (NMDAR) encephalitis in Southwest China. Methods From January 2012, a two-center study of anti-NMDAR encephalitis was initiated to collect clinical and MRI data from acute patients in Southwest China. Two experienced neurologists independently assessed the patients’ prognosis at 24 months based on the modified Rankin Scale (mRS) (good outcome defined as mRS 0–2; bad outcome defined as mRS 3–6). Risk factors influencing the prognosis of patients with acute anti-NMDAR encephalitis were investigated using clinical data. Five DL and radiomics models trained with four single or combined MRI sequences (T1-weighted imaging, T2-weighted imaging, fluid-attenuated inversion recovery imaging and diffusion weighted imaging) and a clinical model were developed to predict the prognosis of anti-NMDAR encephalitis. A fusion model combining a clinical model and two machine learning-based models was built. The performances of the fusion model, clinical model, DL-based models and radiomics-based models were compared using the area under the receiver operating characteristic curve (AUC) and accuracy and then assessed by paired t-tests (P < 0.05 was considered significant). Results The fusion model achieved the greatest predictive performance in the internal test dataset, with an AUC of 0.963 [95% CI: (0.874-0.999)], and exhibited equally good performance in the external validation dataset, with an AUC of 0.927 [95% CI: (0.688-0.975)]. The radiomics_combined model (AUC: 0.889; accuracy: 0.857) provided significantly better predictive performance than the DL_combined (AUC: 0.845; accuracy: 0.857) and clinical models (AUC: 0.840; accuracy: 0.905), whereas the clinical model showed significantly higher accuracy.
Compared with all single-sequence models, the DL_combined model and the radiomics_combined model had significantly greater AUCs and accuracies. Conclusions The fusion model combining clinical variables and machine learning-based models may have early predictive value for poor outcomes associated with anti-NMDAR encephalitis.
Affiliation(s)
- Yayun Xiang
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Xiaoxuan Dong
- College of Computer and Information Science, Chongqing, China
- Chun Zeng
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Junhang Liu
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Hanjing Liu
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Xiaofei Hu
- Department of Neurology, Southwest Hospital, Third Military Medical University, Chongqing, China
- Jinzhou Feng
- Department of Neurology, First Affiliated Hospital of Chongqing Medical University, Chongqing, China
- Silin Du
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Jingjie Wang
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Yongliang Han
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Qi Luo
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
- Shanxiong Chen
- College of Computer and Information Science, Chongqing, China
- Yongmei Li
- Department of Radiology, The First Affiliated Hospital, Chongqing Medical University, Chongqing, China
|
31
|
Li H, Whitney HM, Ji Y, Edwards A, Papaioannou J, Liu P, Giger ML. Impact of continuous learning on diagnostic breast MRI AI: evaluation on an independent clinical dataset. J Med Imaging (Bellingham) 2022; 9:034502. [DOI: 10.1117/1.jmi.9.3.034502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Accepted: 05/12/2022] [Indexed: 11/14/2022] Open
Affiliation(s)
- Hui Li
- University of Chicago, Department of Radiology, Chicago, Illinois
- Yu Ji
- Tianjin Medical University, Tianjin Medical University Cancer Institute and Hospital, National Clini
- John Papaioannou
- University of Chicago, Department of Radiology, Chicago, Illinois
- Peifang Liu
- Tianjin Medical University, Tianjin Medical University Cancer Institute and Hospital, National Clini
|
32
|
Fan M, Yuan C, Huang G, Xu M, Wang S, Gao X, Li L. A framework for deep multitask learning with multiparametric magnetic resonance imaging for the joint prediction of histological characteristics in breast cancer. IEEE J Biomed Health Inform 2022; 26:3884-3895. [PMID: 35635826 DOI: 10.1109/jbhi.2022.3179014] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
The clinical management and decision-making process related to breast cancer are based on multiple histological indicators. This study aims to jointly predict the Ki-67 expression level, luminal A subtype and histological grade molecular biomarkers using a new deep multitask learning method with multiparametric magnetic resonance imaging. A multitask learning network structure was proposed by introducing a common-task layer and task-specific layers to learn the high-level features that are common to all tasks and related to a specific task, respectively. A network pretrained with knowledge from the ImageNet dataset was used and fine-tuned with MRI data. Information from multiparametric MR images was fused using strategies at the feature and decision levels. The area under the receiver operating characteristic curve (AUC) was used to measure model performance. For single-task learning using a single image series, the deep learning model generated AUCs of 0.752, 0.722, and 0.596 for the Ki-67, luminal A and histological grade prediction tasks, respectively. The performance was improved by freezing the first 5 convolutional layers, using 20% shared layers and fusing multiparametric series at the feature level, which achieved AUCs of 0.819, 0.799 and 0.747 for the Ki-67, luminal A and histological grade prediction tasks, respectively. Our study showed advantages in jointly predicting correlated clinical biomarkers using a deep multitask learning framework with an appropriate number of fine-tuned convolutional layers by taking full advantage of common and complementary imaging features. Multiparametric image series-based multitask learning could be a promising approach for the multiple clinical indicator-based management of breast cancer.
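The common-task/task-specific structure described above can be caricatured as one shared layer feeding a small head per biomarker. A deliberately tiny, scalar-weight sketch (all weights invented; the actual model is a fine-tuned ImageNet CNN, not two scalar layers):

```python
# Toy multitask forward pass: a shared ("common-task") layer produces one
# representation, and each task-specific head turns it into a probability.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def multitask_forward(x, shared_w, head_ws):
    """head_ws: {task_name: (weight, bias)} for each prediction task."""
    shared = max(0.0, shared_w * x)            # common-task layer (ReLU)
    return {task: sigmoid(w * shared + b)      # one head per biomarker
            for task, (w, b) in head_ws.items()}

heads = {"ki67": (1.5, -1.0), "luminalA": (0.8, 0.0), "grade": (-0.5, 0.5)}
out = multitask_forward(2.0, shared_w=0.5, head_ws=heads)
print({k: round(v, 3) for k, v in out.items()})
```

The point of the structure is that gradients from all three losses update the shared layer, so correlated biomarkers regularize one another, while each head stays free to specialize.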
|
33
|
Galati F, Rizzo V, Trimboli RM, Kripa E, Maroncelli R, Pediconi F. MRI as a biomarker for breast cancer diagnosis and prognosis. BJR Open 2022; 4:20220002. [PMID: 36105423 PMCID: PMC9459861 DOI: 10.1259/bjro.20220002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2022] [Revised: 05/01/2022] [Accepted: 05/04/2022] [Indexed: 11/05/2022] Open
Abstract
Breast cancer (BC) is the most frequently diagnosed female invasive cancer in Western countries and the leading cause of cancer-related death worldwide. Nowadays, tumor heterogeneity is a well-known characteristic of BC, since it includes several nosological entities characterized by different morphologic features, clinical course and response to treatment. Thus, with the spread of molecular biology technologies and the growing knowledge of the biological processes underlying the development of BC, the importance of imaging biomarkers as non-invasive information about tissue hallmarks has progressively grown. To date, breast magnetic resonance imaging (MRI) is considered indispensable in breast imaging practice, with widely recognized indications such as BC screening in females at increased risk, locoregional staging and neoadjuvant therapy (NAT) monitoring. Moreover, breast MRI is increasingly used to assess not only the morphologic features of the pathological process but also to characterize individual phenotypes for targeted therapies, building on developments in genomics and molecular biology features. The aim of this review is to explore the role of breast multiparametric MRI in providing imaging biomarkers, leading to an improved differentiation of benign and malignant breast lesions and to a customized management of BC patients in monitoring and predicting response to treatment. Finally, we discuss how breast MRI biomarkers offer one of the most fertile grounds for artificial intelligence (AI) applications. In the era of personalized medicine, with the development of omics-technologies, machine learning and big data, the role of imaging biomarkers is embracing new opportunities for BC diagnosis and treatment.
Affiliation(s)
- Francesca Galati
- Department of Radiological, Oncological and Pathological Sciences, “Sapienza” - University of Rome, Viale Regina Elena, Rome, Italy
- Veronica Rizzo
- Department of Radiological, Oncological and Pathological Sciences, “Sapienza” - University of Rome, Viale Regina Elena, Rome, Italy
- Endi Kripa
- Department of Radiological, Oncological and Pathological Sciences, “Sapienza” - University of Rome, Viale Regina Elena, Rome, Italy
- Roberto Maroncelli
- Department of Radiological, Oncological and Pathological Sciences, “Sapienza” - University of Rome, Viale Regina Elena, Rome, Italy
- Federica Pediconi
- Department of Radiological, Oncological and Pathological Sciences, “Sapienza” - University of Rome, Viale Regina Elena, Rome, Italy
|
34
|
Bhowmik A, Eskreis-Winkler S. Deep learning in breast imaging. BJR Open 2022; 4:20210060. [PMID: 36105427 PMCID: PMC9459862 DOI: 10.1259/bjro.20210060] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Revised: 04/04/2022] [Accepted: 04/21/2022] [Indexed: 11/22/2022] Open
Abstract
Millions of breast imaging exams are performed each year in an effort to reduce the morbidity and mortality of breast cancer. Breast imaging exams are performed for cancer screening, diagnostic work-up of suspicious findings, evaluating extent of disease in recently diagnosed breast cancer patients, and determining treatment response. Yet, the interpretation of breast imaging can be subjective, tedious, time-consuming, and prone to human error. Retrospective and small reader studies suggest that deep learning (DL) has great potential to perform medical imaging tasks at or above human-level performance, and may be used to automate aspects of the breast cancer screening process, improve cancer detection rates, decrease unnecessary callbacks and biopsies, optimize patient risk assessment, and open up new possibilities for disease prognostication. Prospective trials are urgently needed to validate these proposed tools, paving the way for real-world clinical use. New regulatory frameworks must also be developed to address the unique ethical, medicolegal, and quality control issues that DL algorithms present. In this article, we review the basics of DL, describe recent DL breast imaging applications including cancer detection and risk prediction, and discuss the challenges and future directions of artificial intelligence-based systems in the field of breast cancer.
Affiliation(s)
- Arka Bhowmik
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Sarah Eskreis-Winkler
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
|
35
|
Assessing radiomics feature stability with simulated CT acquisitions. Sci Rep 2022; 12:4732. [PMID: 35304508 PMCID: PMC8933485 DOI: 10.1038/s41598-022-08301-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2021] [Accepted: 03/03/2022] [Indexed: 11/29/2022] Open
Abstract
The usefulness of quantitative medical imaging features in clinical studies was once disputed. Nowadays, advancements in analysis techniques, for instance through machine learning, have made quantitative features progressively useful in diagnosis and research. Tissue characterisation is improved via “radiomics” features, whose extraction can be automated. Despite these advances, the stability of quantitative features remains an important open problem. Because features can be highly sensitive to variations in acquisition details, it is not trivial to quantify stability and efficiently select stable features. In this work, we develop and validate a Computed Tomography (CT) simulator environment based on the publicly available ASTRA toolbox (www.astra-toolbox.com). We show that the variability, stability and discriminative power of the radiomics features extracted from the virtual phantom images generated by the simulator are similar to those observed in a tandem phantom study. Additionally, we show that the variability is matched between a multi-center phantom study and simulated results. Consequently, we demonstrate that the simulator can be utilised to assess radiomics features’ stability and discriminative power.
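One common way to screen for stability, consistent with the setup above, is to extract each feature from repeated (here, simulated) acquisitions of the same phantom and keep features with a low coefficient of variation. A sketch with invented feature values and an assumed 10% cutoff:

```python
# Flag radiomics features whose coefficient of variation (CV = stdev/mean)
# across repeated acquisitions of the same phantom stays under a cutoff.
import statistics

def stable_features(measurements, cv_cutoff=0.10):
    """measurements: {feature_name: [value per repeated acquisition]}."""
    kept = []
    for name, values in measurements.items():
        cv = statistics.stdev(values) / statistics.mean(values)
        if cv < cv_cutoff:
            kept.append(name)
    return kept

runs = {
    "glcm_contrast":   [10.1, 10.3, 9.9, 10.0],   # barely moves between runs
    "firstorder_mean": [55.0, 54.5, 55.2, 54.8],
    "glrlm_runlength": [3.0, 5.5, 2.1, 4.4],      # acquisition-sensitive
}
print(stable_features(runs))  # -> ['glcm_contrast', 'firstorder_mean']
```

CV is only one possible stability criterion; intraclass correlation across scanners is another widely used choice, and the paper itself assesses discriminative power in addition to variability.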
|
36
|
Tomographic Ultrasound Imaging in the Diagnosis of Breast Tumors under the Guidance of Deep Learning Algorithms. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:9227440. [PMID: 35265119 PMCID: PMC8901319 DOI: 10.1155/2022/9227440] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/19/2021] [Revised: 01/23/2022] [Accepted: 02/01/2022] [Indexed: 11/18/2022]
Abstract
This study aimed to assess the feasibility of distinguishing benign from malignant breast tumors using tomographic ultrasound imaging (TUI) with a deep learning algorithm. The deep learning algorithm was used to segment the images, and 120 patients with breast tumors were included in this study, all of whom underwent routine ultrasound examinations. Subsequently, TUI was used to assist in guiding the positioning, and the light scattering tomography system was used to further measure the lesions. A deep learning model was established to process the imaging results, and the pathological test results were taken as the gold standard for evaluating the efficiency of the different imaging methods in diagnosing breast tumors. The results showed that, among the 120 patients with breast tumors, 56 had benign lesions and 64 had malignant lesions. The average total amount of hemoglobin (HBT) of malignant lesions was significantly higher than that of benign lesions (P < 0.05). The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of TUI in the diagnosis of breast cancer were 90.4%, 75.6%, 81.4%, 84.7%, and 80.6%, respectively. The corresponding values for ultrasound were 81.7%, 64.9%, 70.5%, 75.9%, and 80.6%. In addition, for suspected malignant breast lesions, the combined application of ultrasound and tomography increased the diagnostic specificity to 82.1% and the accuracy to 83.8%. Based on these results, it was concluded that TUI combined with ultrasound has a significant effect on the benign-malignant diagnosis of breast cancer and can markedly improve the specificity and accuracy of diagnosis. It also shows that deep learning technology plays a good auxiliary role in disease examination and is worth promoting in clinical application.
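The reported sensitivity, specificity, accuracy, PPV, and NPV all derive from the same 2x2 confusion table. A minimal sketch, using hypothetical counts rather than the study's raw data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic performance measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),          # true positive rate
        "specificity": tn / (tn + fp),          # true negative rate
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "ppv":         tp / (tp + fp),          # positive predictive value
        "npv":         tn / (tn + fn),          # negative predictive value
    }

# Worked example with hypothetical counts (not the study's data):
m = diagnostic_metrics(tp=58, fp=11, tn=45, fn=6)
# sensitivity 0.906, specificity 0.804, accuracy 0.858, ppv 0.841, npv 0.882
print({k: round(v, 3) for k, v in m.items()})
```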
Collapse
|
37
|
Rabbi F, Dabbagh SR, Angin P, Yetisen AK, Tasoglu S. Deep Learning-Enabled Technologies for Bioimage Analysis. MICROMACHINES 2022; 13:mi13020260. [PMID: 35208385 PMCID: PMC8880650 DOI: 10.3390/mi13020260] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Revised: 01/31/2022] [Accepted: 02/03/2022] [Indexed: 02/05/2023]
Abstract
Deep learning (DL) is a subfield of machine learning (ML) that has recently demonstrated its ability to significantly improve quantification and classification workflows in biomedical and clinical applications. Cellular morphology quantification is among the earliest end applications to benefit profoundly from DL. Here, we first briefly explain fundamental concepts in DL and then review some of the emerging DL-enabled applications of cell morphology quantification in the fields of embryology, point-of-care ovulation testing, prediction of fetal heart pregnancy, cancer diagnostics via classification of cancer histology images, autosomal polycystic kidney disease, and chronic kidney diseases.
Collapse
Affiliation(s)
- Fazle Rabbi
- Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey; (F.R.); (S.R.D.)
| | - Sajjad Rahmani Dabbagh
- Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey; (F.R.); (S.R.D.)
- Koç University Arçelik Research Center for Creative Industries (KUAR), Koç University, Sariyer, Istanbul 34450, Turkey
- Koc University Is Bank Artificial Intelligence Lab (KUIS AILab), Koç University, Sariyer, Istanbul 34450, Turkey
| | - Pelin Angin
- Department of Computer Engineering, Middle East Technical University, Ankara 06800, Turkey;
| | - Ali Kemal Yetisen
- Department of Chemical Engineering, Imperial College London, London SW7 2AZ, UK;
| | - Savas Tasoglu
- Department of Mechanical Engineering, Koç University, Sariyer, Istanbul 34450, Turkey; (F.R.); (S.R.D.)
- Koç University Arçelik Research Center for Creative Industries (KUAR), Koç University, Sariyer, Istanbul 34450, Turkey
- Koc University Is Bank Artificial Intelligence Lab (KUIS AILab), Koç University, Sariyer, Istanbul 34450, Turkey
- Institute of Biomedical Engineering, Boğaziçi University, Çengelköy, Istanbul 34684, Turkey
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, 70569 Stuttgart, Germany
- Correspondence:
| |
Collapse
|
38
|
Dewangan KK, Dewangan DK, Sahu SP, Janghel R. Breast cancer diagnosis in an early stage using novel deep learning with hybrid optimization technique. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:13935-13960. [PMID: 35233181 PMCID: PMC8874754 DOI: 10.1007/s11042-022-12385-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 01/17/2022] [Accepted: 01/21/2022] [Indexed: 05/17/2023]
Abstract
Breast cancer is one of the primary causes of death among females worldwide, so the recognition and categorization of initial-phase breast cancer are necessary to help patients receive suitable treatment. However, mammography provides very low sensitivity and efficiency in detecting breast cancer, whereas Magnetic Resonance Imaging (MRI) offers higher sensitivity than mammography for predicting breast cancer. In this research, a novel Back Propagation Boosting Recurrent Wienmed model (BPBRW) with a Hybrid Krill Herd African Buffalo Optimization (HKH-ABO) mechanism is developed for detecting breast cancer at an earlier stage using breast MRI images. Initially, the breast MRI images are used to train the system, and an innovative Wienmed filter is established for preprocessing the noisy MRI image content. Moreover, the proposed BPBRW with the HKH-ABO mechanism categorizes breast tumors as benign or malignant. Additionally, this model is simulated using Python, and its performance is evaluated against prevailing works. The comparative results show that the proposed model produces an improved accuracy of 99.6% with a 0.12% lower error rate.
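The Wienmed filter is this paper's own construction and is not detailed in the abstract; as a generic stand-in, a plain median filter shows the kind of impulse-noise suppression such MRI preprocessing aims at:

```python
import numpy as np

def median_filter2d(img, k=3):
    """Plain k x k median filter: a generic denoising stand-in,
    not the paper's Wienmed filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single salt-noise pixel is removed while the flat background survives.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0                     # impulse noise
den = median_filter2d(img)
print(den[2, 2])                      # 10.0: the outlier is suppressed
```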
Collapse
Affiliation(s)
- Kranti Kumar Dewangan
- Department of Information Technology, National Institute of Technology, Raipur, Chhatisgarh 492010 India
| | - Deepak Kumar Dewangan
- Department of Information Technology, National Institute of Technology, Raipur, Chhatisgarh 492010 India
| | - Satya Prakash Sahu
- Department of Information Technology, National Institute of Technology, Raipur, Chhatisgarh 492010 India
| | - Rekhram Janghel
- Department of Information Technology, National Institute of Technology, Raipur, Chhatisgarh 492010 India
| |
Collapse
|
39
|
Detection and Classification of Knee Injuries from MR Images Using the MRNet Dataset with Progressively Operating Deep Learning Methods. MACHINE LEARNING AND KNOWLEDGE EXTRACTION 2021. [DOI: 10.3390/make3040050] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
This study aimed to build progressively operating deep learning models that could detect meniscus injuries, anterior cruciate ligament (ACL) tears and knee abnormalities in magnetic resonance imaging (MRI). The Stanford Machine Learning Group MRNet dataset was employed in the study, which included MRI image indexes in the coronal, sagittal, and axial axes, each having 1130 training and 120 validation items. The study is divided into three sections. In the first section, suitable images are selected to determine the disease in the image index based on the disorder under examination; this section also identifies images that have been misclassified or are noisy and/or damaged to the degree that they cannot be utilised for diagnosis. The study employed the 50-layer residual network (ResNet50) model in this section. The second part of the study involves locating the region to be focused on based on the disorder that is targeted for diagnosis in the image under examination. A novel model was built in this section by integrating convolutional neural networks (CNN) with denoising autoencoder models. The third section is dedicated to diagnosing the disease; here, a novel ResNet50 model is trained to identify disease diagnoses or abnormalities, independently of the ResNet50 model used in the first section. The models are referred to as progressively operating deep learning methods because the images each model selects as output after training are supplied as input to the following model.
Collapse
|
40
|
Mridha MF, Hamid MA, Monowar MM, Keya AJ, Ohi AQ, Islam MR, Kim JM. A Comprehensive Survey on Deep-Learning-Based Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:6116. [PMID: 34885225 PMCID: PMC8656730 DOI: 10.3390/cancers13236116] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 11/25/2021] [Accepted: 12/01/2021] [Indexed: 12/11/2022] Open
Abstract
Breast cancer is now the most frequently diagnosed cancer in women, and its incidence is gradually increasing. Optimistically, there is a good chance of recovery from breast cancer if it is identified and treated at an early stage. Therefore, several researchers have established deep-learning-based automated methods, valued for their efficiency and accuracy, to predict the growth of cancer cells using medical imaging modalities. To date, only a few review studies on breast cancer diagnosis are available that summarize existing work, and these studies have not addressed emerging architectures and modalities in breast cancer diagnosis. This review focuses on the evolving architectures of deep learning for breast cancer detection. In what follows, this survey presents existing deep-learning-based architectures, analyzes the strengths and limitations of the existing studies, examines the datasets used, and reviews image pre-processing techniques. Furthermore, a concrete review of diverse imaging modalities, performance metrics and results, challenges, and research directions for future researchers is presented.
Collapse
Affiliation(s)
- Muhammad Firoz Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Md. Abdul Hamid
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
| | - Muhammad Mostafa Monowar
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
| | - Ashfia Jannat Keya
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Abu Quwsar Ohi
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
| | - Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh;
| | - Jong-Myon Kim
- Department of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan 680-749, Korea
| |
Collapse
|
41
|
Zhou J, Liu YL, Zhang Y, Chen JH, Combs FJ, Parajuli R, Mehta RS, Liu H, Chen Z, Zhao Y, Pan Z, Wang M, Yu R, Su MY. BI-RADS Reading of Non-Mass Lesions on DCE-MRI and Differential Diagnosis Performed by Radiomics and Deep Learning. Front Oncol 2021; 11:728224. [PMID: 34790569 PMCID: PMC8591227 DOI: 10.3389/fonc.2021.728224] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Accepted: 10/11/2021] [Indexed: 11/24/2022] Open
Abstract
Background A wide variety of benign and malignant processes can manifest as non-mass enhancement (NME) in breast MRI. Compared to mass lesions, there are no distinct features that can be used for differential diagnosis. The purpose was to use the BI-RADS descriptors and models developed using radiomics and deep learning to distinguish benign from malignant NME lesions. Materials and Methods A total of 150 patients with 104 malignant and 46 benign NME were analyzed. Three radiologists performed readings for morphological distribution and internal enhancement using the 5th edition BI-RADS lexicon. For each case, the 3D tumor mask was generated using Fuzzy-C-Means segmentation. Three DCE parametric maps related to wash-in, maximum, and wash-out were generated, and PyRadiomics was applied to extract features. The radiomics model was built using five machine learning algorithms. ResNet50 was implemented using the three parametric maps as input. Approximately 70% of the earlier cases were used for training, and 30% of the later cases were held out for testing. Results The diagnostic BI-RADS in the original MRI report showed that 104/104 malignant and 36/46 benign lesions had a BI-RADS score of 4A–5. For category reading, the kappa coefficient was 0.83 for morphological distribution (excellent) and 0.52 for internal enhancement (moderate). Segmental and regional distributions were the most prominent for the malignant group, and focal distribution for the benign group. Eight radiomics features were selected by support vector machine (SVM). Among the five machine learning algorithms, SVM yielded the highest accuracy: 80.4% in the training and 77.5% in the testing datasets. ResNet50 had a better diagnostic performance: 91.5% in the training and 83.3% in the testing datasets. Conclusion Diagnosis of NME was challenging, and the BI-RADS scores and descriptors showed substantial overlap. Radiomics and deep learning may provide a useful CAD tool to aid in diagnosis.
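The three DCE parametric maps mentioned (wash-in, maximum, wash-out) can be derived from a dynamic time series; the definitions below are illustrative assumptions, not necessarily the paper's exact formulas:

```python
import numpy as np

def dce_parametric_maps(series):
    """Derive three simple parametric maps from a DCE time series.

    series: array of shape (T, H, W) — signal intensity over T time points.
    Definitions here are illustrative, not the paper's exact formulas.
    """
    baseline = series[0]
    peak = series.max(axis=0)
    last = series[-1]
    wash_in = peak - baseline          # enhancement from baseline to peak
    maximum = peak                     # maximum enhancement map
    wash_out = peak - last             # signal loss after the peak
    return wash_in, maximum, wash_out

# Toy voxel curve: rises to a peak then partially washes out.
t = np.array([[[100.0]], [[180.0]], [[160.0]]])   # shape (3, 1, 1)
wi, mx, wo = dce_parametric_maps(t)
print(wi[0, 0], mx[0, 0], wo[0, 0])               # 80.0 180.0 20.0
```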
Collapse
Affiliation(s)
- Jiejie Zhou
- Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China.,Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
| | - Yan-Lin Liu
- Department of Radiological Sciences, University of California, Irvine, Irvine, CA, United States
| | - Yang Zhang
- Department of Radiological Sciences, University of California, Irvine, Irvine, CA, United States.,Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Robert Wood Johnson Medical School, New Brunswick, NJ, United States
| | - Jeon-Hor Chen
- Department of Radiological Sciences, University of California, Irvine, Irvine, CA, United States.,Department of Radiology, E-DA Hospital and I-Shou University, Kaohsiung, Taiwan
| | - Freddie J Combs
- Department of Radiological Sciences, University of California, Irvine, Irvine, CA, United States
| | - Ritesh Parajuli
- Department of Medicine, University of California, Irvine, Irvine, CA, United States
| | - Rita S Mehta
- Department of Medicine, University of California, Irvine, Irvine, CA, United States
| | - Huiru Liu
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
| | - Zhongwei Chen
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
| | - Youfan Zhao
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
| | - Zhifang Pan
- Zhejiang Engineering Research Center of Intelligent Medicine, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
| | - Meihao Wang
- Department of Radiology, The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
| | - Risheng Yu
- Department of Radiology, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, China
| | - Min-Ying Su
- Department of Radiological Sciences, University of California, Irvine, Irvine, CA, United States.,Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
| |
Collapse
|
42
|
Deep Learning in Cancer Diagnosis and Prognosis Prediction: A Minireview on Challenges, Recent Trends, and Future Directions. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:9025470. [PMID: 34754327 PMCID: PMC8572604 DOI: 10.1155/2021/9025470] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 09/30/2021] [Accepted: 10/05/2021] [Indexed: 12/30/2022]
Abstract
Deep learning (DL) is a branch of machine learning and artificial intelligence that has been applied to many areas in different domains such as health care and drug design. Cancer prognosis estimates the ultimate fate of a cancer patient and provides survival estimates. An accurate and timely diagnostic and prognostic decision will greatly benefit cancer patients. DL has emerged as a technology of choice due to the availability of high computational resources. The main components of a standard computer-aided diagnosis (CAD) system are preprocessing, feature recognition, extraction and selection, categorization, and performance assessment. The reduction of costs associated with sequencing systems offers a myriad of opportunities for building precise models for cancer diagnosis and prognosis prediction. In this survey, we provide a summary of current works in which DL has helped to determine the best models for the cancer diagnosis and prognosis prediction tasks. DL is a generic model requiring minimal data manipulation and achieves better results while working with enormous volumes of data. Our aims are to scrutinize the influence of DL systems using histopathology images, present a summary of state-of-the-art DL methods, and give directions for future researchers to refine the existing methods.
Collapse
|
43
|
Akgönüllü S, Bakhshpour M, Pişkin AK, Denizli A. Microfluidic Systems for Cancer Diagnosis and Applications. MICROMACHINES 2021; 12:mi12111349. [PMID: 34832761 PMCID: PMC8619454 DOI: 10.3390/mi12111349] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Revised: 10/27/2021] [Accepted: 10/29/2021] [Indexed: 12/13/2022]
Abstract
Microfluidic devices have led to novel biological advances through the improvement of microsystems that can mimic biological environments and take measurements. Microsystems easily handle sub-microliter volumes, typically guided by laminar fluid flows. Microfluidic systems can be produced without expert engineering, operate away from a centralized laboratory, and implement basic and point-of-care analyses, which has attracted attention to their widespread dissemination and adaptation to specific biological issues. The general use of microfluidic tools in clinical settings can be seen in pregnancy tests and diabetic control, but recently microfluidic platforms have become a key novel technology for cancer diagnostics. Cancer is a heterogeneous group of diseases that needs a multimodal paradigm to diagnose, manage, and treat, and advanced technologies can enable this, providing better diagnosis and treatment for cancer patients. Microfluidic tools have evolved as promising tools in the field of cancer, for example for detection of a single cancer cell, liquid biopsy, drug screening, modeling angiogenesis, and metastasis detection. This review summarizes the need for microfluidic tools in diagnosing cancer from low-abundance markers in blood and serum, and the progress made in the last few years toward integrated microfluidic platforms for this application.
Collapse
Affiliation(s)
- Semra Akgönüllü
- Department of Chemistry, Faculty of Science, Hacettepe University, Ankara 06800, Turkey; (S.A.); (M.B.)
| | - Monireh Bakhshpour
- Department of Chemistry, Faculty of Science, Hacettepe University, Ankara 06800, Turkey; (S.A.); (M.B.)
| | - Ayşe Kevser Pişkin
- Department of Medical Biology, Faculty of Medicine, Lokman Hekim University, Ankara 06230, Turkey;
| | - Adil Denizli
- Department of Chemistry, Faculty of Science, Hacettepe University, Ankara 06800, Turkey; (S.A.); (M.B.)
- Correspondence:
| |
Collapse
|
44
|
Pathologic Complete Response Prediction after Neoadjuvant Chemoradiation Therapy for Rectal Cancer Using Radiomics and Deep Embedding Network of MRI. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11209494] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
Assessment of magnetic resonance imaging (MRI) after neoadjuvant chemoradiation therapy (nCRT) is essential in rectal cancer staging and treatment planning. However, when predicting the pathologic complete response (pCR) after nCRT for rectal cancer, existing works either rely on simple quantitative evaluation based on radiomics features or partially analyze multi-parametric MRI. We propose an effective pCR prediction method based on novel multi-parametric MRI embedding. We first seek to extract volumetric features of tumors that can be found only by analyzing multiple MRI sequences jointly. Specifically, we encapsulate multiple MRI sequences into multi-sequence fusion images (MSFI) and generate MSFI embedding. We merge radiomics features, which capture important characteristics of tumors, with MSFI embedding to generate multi-parametric MRI embedding and then use it to predict pCR using a random forest classifier. Our extensive experiments demonstrate that using all given MRI sequences is the most effective regardless of the dimension reduction method. The proposed method outperformed any variants with different combinations of feature vectors and dimension reduction methods or different classification models. Comparative experiments demonstrate that it outperformed four competing baselines in terms of the AUC and F1-score. We use MRI sequences from 912 patients with rectal cancer, a much larger sample than in any existing work.
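The fusion step described above — stacking MRI sequences into a multi-channel volume and concatenating an embedding with radiomics features — can be sketched as follows; the mean-pooling "embedding" is a crude stand-in for the paper's learned MSFI embedding, and all names are illustrative:

```python
import numpy as np

def multiparametric_embedding(sequences, radiomics):
    """Fuse multiple MRI sequences into one multi-channel volume and
    concatenate a (mock) embedding with radiomics features.

    sequences: list of (H, W) arrays — one per MRI sequence.
    radiomics: 1-D feature vector.
    Per-channel mean pooling stands in for the learned MSFI embedding.
    """
    msfi = np.stack(sequences, axis=-1)            # (H, W, n_sequences)
    embedding = msfi.mean(axis=(0, 1))             # crude global pooling
    return np.concatenate([embedding, radiomics])

t2 = np.ones((4, 4))
dwi = np.full((4, 4), 2.0)
feat = multiparametric_embedding([t2, dwi], np.array([0.5, 0.7]))
print(feat)   # per-channel means followed by the radiomics features
```

In the paper, this combined vector would then feed a random forest classifier for pCR prediction.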
Collapse
|
45
|
Pourasad Y, Zarouri E, Salemizadeh Parizi M, Salih Mohammed A. Presentation of Novel Architecture for Diagnosis and Identifying Breast Cancer Location Based on Ultrasound Images Using Machine Learning. Diagnostics (Basel) 2021; 11:1870. [PMID: 34679568 PMCID: PMC8534593 DOI: 10.3390/diagnostics11101870] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2021] [Revised: 10/02/2021] [Accepted: 10/03/2021] [Indexed: 12/14/2022] Open
Abstract
Breast cancer is one of the main causes of death among women worldwide. Early detection of this disease helps reduce the number of premature deaths. This research aims to design a method for identifying and diagnosing breast tumors based on ultrasound images. For this purpose, six techniques are applied to detect and segment the ultrasound images. Features of the images are extracted using the fractal method. Moreover, k-nearest neighbor, support vector machine, decision tree, and Naïve Bayes classification techniques are used to classify the images. Then, a convolutional neural network (CNN) architecture is designed to classify breast cancer directly from the ultrasound images. The presented model achieves a training-set accuracy of 99.8%. On the test set, this diagnostic validation is associated with 88.5% sensitivity. Based on the findings of this study, it can be concluded that the proposed high-potential CNN algorithm can be used to diagnose breast cancer from ultrasound images. The second presented CNN model can identify the original location of the tumor. The results show 92% of the images in the high-performance region with an AUC above 0.6. The proposed model can identify the tumor's location and volume by morphological operations as a post-processing algorithm. These findings can also be used to monitor patients and prevent the growth of the infected area.
Collapse
Affiliation(s)
- Yaghoub Pourasad
- Department of Electrical Engineering, Urmia University of Technology (UUT), Urmia 57166-93188, Iran
| | - Esmaeil Zarouri
- School of Electrical Engineering, Electronic Engineering, Iran University of Science and Technology—IUST, Tehran 16846-13114, Iran;
| | | | - Amin Salih Mohammed
- Department of Computer Engineering, College of Engineering and Computer Science, Lebanese French University, Erbil 44001, Iraq;
- Department of Software and Informatics Engineering, Salahaddin University, Erbil 44002, Iraq
| |
Collapse
|
46
|
Automatic Breast Tumor Diagnosis in MRI Based on a Hybrid CNN and Feature-Based Method Using Improved Deer Hunting Optimization Algorithm. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:5396327. [PMID: 34326868 PMCID: PMC8302380 DOI: 10.1155/2021/5396327] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Revised: 06/12/2021] [Accepted: 07/06/2021] [Indexed: 11/18/2022]
Abstract
Breast cancer presents as an abnormal mass in the breast tissue. It begins with an abnormal change in cell structure, and the disease may grow uncontrollably and affect neighboring tissues. Early diagnosis of this cancer (abnormal cell changes) can help treat it definitively. Also, prevention of this cancer can help decrease the high cost of medical care for breast cancer patients. In recent years, computer-aided techniques have become an important active field for automatic cancer detection. In this study, an automatic breast tumor diagnosis system is introduced. An improved Deer Hunting Optimization Algorithm (DHOA) is used as the optimization algorithm. The presented method utilizes a hybrid feature-based technique and a new optimized convolutional neural network (CNN). Simulations are applied to the DCE-MRI dataset based on several performance indexes. The novel contribution of this paper is to apply a preprocessing stage to simplify the classification; in addition, a new metaheuristic algorithm is used, and feature extraction by Haralick texture and local binary pattern (LBP) features is recommended. Based on the obtained results, the accuracy of this method is 98.89%, which represents its high potential and efficiency.
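Of the texture descriptors named above, the local binary pattern (LBP) is straightforward to sketch: each pixel is encoded by which of its eight neighbors meet or exceed it. This is a simplified variant for illustration; production code would typically use a library such as scikit-image.

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 local binary pattern: each interior pixel becomes an
    8-bit code marking which neighbors are >= the center."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # offsets in clockwise order starting at the top-left neighbor
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((neigh >= center).astype(np.uint8) << bit)
    return out

img = np.array([[9, 9, 9],
                [0, 5, 0],
                [0, 0, 0]], dtype=float)
print(lbp_image(img))   # [[7]]: only the three top neighbors set bits 0-2
```

A histogram of these codes over the image (or region) is the texture feature vector fed to the classifier.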
Collapse
|
47
|
Adlung L, Cohen Y, Mor U, Elinav E. Machine learning in clinical decision making. MED 2021; 2:642-665. [DOI: 10.1016/j.medj.2021.04.006] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Revised: 03/22/2021] [Accepted: 04/06/2021] [Indexed: 12/24/2022]
|
48
|
Hu Q, Whitney HM, Li H, Ji Y, Liu P, Giger ML. Improved Classification of Benign and Malignant Breast Lesions Using Deep Feature Maximum Intensity Projection MRI in Breast Cancer Diagnosis Using Dynamic Contrast-enhanced MRI. Radiol Artif Intell 2021; 3:e200159. [PMID: 34235439 PMCID: PMC8231792 DOI: 10.1148/ryai.2021200159] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Revised: 02/04/2021] [Accepted: 02/09/2021] [Indexed: 04/16/2023]
Abstract
PURPOSE To develop a deep transfer learning method that incorporates four-dimensional (4D) information in dynamic contrast-enhanced (DCE) MRI to classify benign and malignant breast lesions. MATERIALS AND METHODS The retrospective dataset is composed of 1990 distinct lesions (1494 malignant and 496 benign) from 1979 women (mean age, 47 years ± 10). Lesions were split into a training and validation set of 1455 lesions (acquired in 2015-2016) and an independent test set of 535 lesions (acquired in 2017). Features were extracted from a convolutional neural network (CNN), and lesions were classified as benign or malignant using support vector machines. Volumetric information was collapsed into two dimensions by taking the maximum intensity projection (MIP) at the image level or feature level within the CNN architecture. Performances were evaluated using the area under the receiver operating characteristic curve (AUC) as the figure of merit and were compared using the DeLong test. RESULTS The image MIP and feature MIP methods yielded AUCs of 0.91 (95% CI: 0.87, 0.94) and 0.93 (95% CI: 0.91, 0.96), respectively, for the independent test set. The feature MIP method achieved higher performance than the image MIP method (∆AUC 95% CI: 0.003, 0.051; P = .03). CONCLUSION Incorporating 4D information in DCE MRI by MIP of features in deep transfer learning demonstrated superior classification performance compared with using MIP images as input in the task of distinguishing between benign and malignant breast lesions. Keywords: Breast, Computer Aided Diagnosis (CAD), Convolutional Neural Network (CNN), MR-Dynamic Contrast Enhanced, Supervised learning, Support vector machines (SVM), Transfer learning, Volume Analysis © RSNA, 2021.
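The two MIP strategies compared — collapsing the slice axis at the image level versus at the feature level — both reduce to a maximum over one axis. A toy numpy sketch, where the feature extractor is a trivial stand-in for the CNN:

```python
import numpy as np

def image_mip(volume):
    """Image-level MIP: collapse the slice axis before feature extraction."""
    return volume.max(axis=0)                      # (H, W)

def feature_mip(volume, feature_fn):
    """Feature-level MIP: extract features per slice, then take the
    element-wise maximum across slices (feature_fn stands in for
    a CNN feature extractor)."""
    per_slice = np.stack([feature_fn(s) for s in volume])
    return per_slice.max(axis=0)

vol = np.array([[[1, 2], [3, 4]],
                [[5, 0], [0, 6]]], dtype=float)    # two 2x2 slices
print(image_mip(vol))                              # slice-wise maximum per pixel
print(feature_mip(vol, lambda s: s.mean(keepdims=True)))
```

The paper's finding is that taking the maximum after feature extraction (feature MIP) preserves more of the 4D information than projecting the images first.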
Collapse
|
49
|
da Silva LG, da Silva Monteiro WRS, de Aguiar Moreira TM, Rabelo MAE, de Assis EACP, de Souza GT. Fractal dimension analysis as an easy computational approach to improve breast cancer histopathological diagnosis. Appl Microsc 2021; 51:6. [PMID: 33929635 PMCID: PMC8087740 DOI: 10.1186/s42649-021-00055-w] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Accepted: 04/20/2021] [Indexed: 12/31/2022] Open
Abstract
Histopathology is a well-established standard diagnosis employed for the majority of malignancies, including breast cancer. Nevertheless, despite training and standardization, it is considered operator-dependent and errors are still a concern. Fractal dimension analysis is a computational image processing technique that allows assessing the degree of complexity in patterns. We aimed here at providing a robust and easily attainable method for introducing computer-assisted techniques to histopathology laboratories. Slides from two databases were used: A) Breast Cancer Histopathological; and B) Grand Challenge on Breast Cancer Histology. Set A contained 2480 images from 24 patients with benign alterations, and 5429 images from 58 patients with breast cancer. Set B comprised 100 images of each type: normal tissue, benign alterations, in situ carcinoma, and invasive carcinoma. All images were analyzed with the FracLac algorithm in the ImageJ computational environment to yield the box-count fractal dimension (Db) results. Images in set A at 40x magnification were statistically different (p = 0.0003), whereas images at 400x did not present differences in their means. In set B, the mean Db values presented promising statistical differences when comparing normal and/or benign images to in situ and/or invasive carcinoma (all p < 0.0001). Interestingly, there was no difference when comparing normal tissue to benign alterations. These data corroborate previous work in which fractal analysis allowed differentiating malignancies. Computer-aided diagnosis algorithms may benefit from using Db data; specific Db cut-off values may yield ~ 99% specificity in diagnosing breast cancer. Furthermore, because it allows assessing tissue complexity, this tool may be used to understand the progression of histological alterations in cancer.
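The box-count fractal dimension (Db) that FracLac computes can be sketched directly: count the boxes of decreasing size that contain foreground pixels, then fit the slope of log(count) versus log(1/size). This is a minimal illustration for a square, power-of-two binary mask, not the FracLac implementation:

```python
import numpy as np

def box_count_dimension(mask):
    """Estimate the box-counting fractal dimension of a binary mask:
    slope of log(box count) vs log(1/box size)."""
    n = mask.shape[0]                      # assume a square, power-of-two mask
    sizes, counts = [], []
    size = n
    while size >= 1:
        # count boxes of side `size` containing at least one foreground pixel
        view = mask.reshape(n // size, size, n // size, size)
        occupied = view.any(axis=(1, 3)).sum()
        sizes.append(size)
        counts.append(occupied)
        size //= 2
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

# A completely filled square should come out close to dimension 2.
mask = np.ones((64, 64), dtype=bool)
print(round(box_count_dimension(mask), 2))   # 2.0
```

Complex tumor boundaries yield intermediate Db values between 1 and 2, which is what makes the measure discriminative.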
Affiliation(s)
- Lucas Glaucio da Silva
- Faculty of Medical and Health Sciences of Juiz de Fora, Alameda Salvaterra, Juiz de Fora, Minas Gerais, 200 - 36033-003, Brazil
- Tiago Medeiros de Aguiar Moreira
- Department of Biology - Genetics - Federal University of Juiz de Fora, Rua José Lourenço Kelmer, s/n, Juiz de Fora, Minas Gerais, 36036-900, Brazil
- Maria Aparecida Esteves Rabelo
- Faculty of Medical and Health Sciences of Juiz de Fora, Alameda Salvaterra, Juiz de Fora, Minas Gerais, 200 - 36033-003, Brazil
- Emílio Augusto Campos Pereira de Assis
- Faculty of Medical and Health Sciences of Juiz de Fora, Alameda Salvaterra, Juiz de Fora, Minas Gerais, 200 - 36033-003, Brazil
- Animal Reproduction Laboratory - Brazilian Agricultural Research Corporation - Dairy Cattle, Laboratory of Animal Reproduction, Av. Eugênio do Nascimento, Juiz de Fora, Minas Gerais, 610 - 36038-330, Brazil
- Gustavo Torres de Souza
- Department of Biology - Genetics - Federal University of Juiz de Fora, Rua José Lourenço Kelmer, s/n, Juiz de Fora, Minas Gerais, 36036-900, Brazil.
- Center for Investigation and Diagnosis of Pathological Anatomy, Avenida Itamar Franco, Juiz de Fora, Minas Gerais, 4001 - 36033-318, Brazil.
|
50
|
Ayana G, Dese K, Choe SW. Transfer Learning in Breast Cancer Diagnoses via Ultrasound Imaging. Cancers (Basel) 2021; 13:738. [PMID: 33578891 PMCID: PMC7916666 DOI: 10.3390/cancers13040738] [Citation(s) in RCA: 39] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Revised: 02/05/2021] [Accepted: 02/08/2021] [Indexed: 11/26/2022] Open
Abstract
Transfer learning is a machine learning approach that reuses a learning method developed for one task as the starting point for a model on a related target task. The goal of transfer learning is to improve the performance of target learners by transferring the knowledge contained in other (but related) source domains. As a result, the need for large amounts of target-domain data is reduced when constructing target learners. Because of this property, transfer learning techniques are frequently used in ultrasound breast cancer image analysis. In this review, we focus on transfer learning methods applied to ultrasound breast image classification and detection from the perspective of transfer learning approaches, pre-processing, pre-training models, and convolutional neural network (CNN) models. Finally, different works are compared, and challenges, as well as outlooks, are discussed.
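The frozen-backbone pattern described above (reuse source-domain features, train only a small target head) can be sketched without a deep-learning framework. In this minimal sketch the "backbone" is a fixed random projection standing in for a pre-trained CNN, and the data, sizes, and hyperparameters are illustrative assumptions, not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pre-trained backbone: a fixed (frozen) projection whose
# weights are never updated on the target task. In real transfer learning
# this would be, e.g., an ImageNet-trained CNN with its layers frozen.
W_frozen = rng.normal(size=(64, 16)) / 8.0

def extract_features(x):
    # Frozen forward pass; only the head trained below adapts to the target task.
    return np.tanh(x @ W_frozen)

def train_head(X, y, lr=0.5, epochs=300):
    """Fit a logistic-regression head on frozen features (the transferred setup)."""
    F = extract_features(X)
    w, b = np.zeros(F.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid predictions
        grad = p - y                            # gradient of the log-loss
        w -= lr * F.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy target task: two Gaussian clusters standing in for benign vs. malignant scans.
X = np.vstack([rng.normal(-1.0, 1.0, size=(100, 64)),
               rng.normal(1.0, 1.0, size=(100, 64))])
y = np.r_[np.zeros(100), np.ones(100)]
w, b = train_head(X, y)
scores = extract_features(X) @ w + b
accuracy = ((scores > 0) == (y == 1)).mean()
```

In the surveyed papers the same structure appears with CNN backbones: freeze the early convolutional layers trained on a large source dataset, replace the classifier head, and train only the new layers on the comparatively small ultrasound set.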
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Kokeb Dese
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
- Se-woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
|