1
Kim HJ, Kim HH, Kim KH, Lee JS, Choi WJ, Chae EY, Shin HJ, Cha JH, Shim WH. Use of a commercial artificial intelligence-based mammography analysis software for improving breast ultrasound interpretations. Eur Radiol 2024; 34:6320-6331. [PMID: 38570382] [DOI: 10.1007/s00330-024-10718-3]
Abstract
OBJECTIVES To evaluate the use of a commercial artificial intelligence (AI)-based mammography analysis software for improving the interpretation of breast ultrasound (US)-detected lesions. METHODS A retrospective analysis was performed on 1109 breasts that underwent both mammography and US-guided breast biopsy. The AI software processed mammograms and provided an AI score ranging from 0 to 100 for each breast, indicating the likelihood of malignancy. The performance of the AI score in differentiating mammograms with benign outcomes from those revealing cancers following US-guided breast biopsy was evaluated. In addition, prediction models for benign outcomes were constructed based on clinical and imaging characteristics with and without AI scores, using logistic regression analysis. RESULTS The AI software had an area under the receiver operating characteristic curve (AUROC) of 0.79 (95% CI, 0.79-0.82) in differentiating between benign and cancer cases. The prediction models that excluded AI scores (non-AI model), used AI scores alone (AI-only model), and combined clinical and imaging characteristics with AI scores (integrated model) had AUROCs of 0.79 (95% CI, 0.75-0.83), 0.78 (95% CI, 0.74-0.82), and 0.85 (95% CI, 0.81-0.88) in the development cohort, and 0.75 (95% CI, 0.68-0.81), 0.82 (95% CI, 0.76-0.88), and 0.84 (95% CI, 0.79-0.90) in the validation cohort, respectively. The integrated model outperformed the non-AI model in the development and validation cohorts (p < 0.001 for both). CONCLUSION The commercial AI-based mammography analysis software could be a valuable adjunct to clinical decision-making for managing US-detected breast lesions. CLINICAL RELEVANCE STATEMENT The commercial AI-based mammography analysis software could potentially reduce unnecessary biopsies and improve patient outcomes. KEY POINTS • Breast US has high rates of false-positive interpretations. • A commercial AI-based mammography analysis software could distinguish mammograms with benign outcomes from those revealing cancers after US-guided breast biopsy. • A commercial AI-based mammography analysis software may improve interpretation of breast US-detected lesions.
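The integrated model above feeds the AI score into a logistic regression alongside clinical and imaging variables. A minimal sketch of that idea with scikit-learn, on synthetic stand-in data (the feature set and score distribution are assumptions, not the study's):

```python
# Logistic regression with and without an AI score as a predictor,
# compared by AUROC. Synthetic stand-in data; features are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
clinical = rng.normal(size=(n, 3))            # stand-ins for clinical/imaging variables
y = rng.binomial(1, 1 / (1 + np.exp(-clinical @ [1.0, 0.5, 0.8])))
ai_score = np.clip(y * 40 + rng.normal(50, 20, n), 0, 100)  # 0-100 malignancy score

X_non_ai = clinical
X_integrated = np.column_stack([clinical, ai_score / 100.0])

for name, X in [("non-AI", X_non_ai), ("integrated", X_integrated)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name} model AUROC: {auc:.3f}")
```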
Affiliation(s)
- Hee Jeong Kim: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Hak Hee Kim: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Ki Hwan Kim: Lunit Inc., 15F, 27, Teheran-Ro 2-Gil, Gangnam-Gu, Seoul, 06241, South Korea
- Ji Sung Lee: Department of Clinical Epidemiology and Biostatistics, Asan Medical Center, University of Ulsan College, Ulsan, South Korea
- Woo Jung Choi: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Eun Young Chae: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Hee Jung Shin: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Joo Hee Cha: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Woo Hyun Shim: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
2
Arslan M, Asim M, Sattar H, Khan A, Thoppil Ali F, Zehra M, Talluri K. Role of Radiology in the Diagnosis and Treatment of Breast Cancer in Women: A Comprehensive Review. Cureus 2024; 16:e70097. [PMID: 39449897] [PMCID: PMC11500669] [DOI: 10.7759/cureus.70097]
Abstract
Breast cancer remains a leading cause of morbidity and mortality among women worldwide. Early detection and precise diagnosis are critical for effective treatment and improved patient outcomes. This review explores the evolving role of radiology in the diagnosis and treatment of breast cancer, highlighting advancements in imaging technologies and the integration of artificial intelligence (AI). Traditional imaging modalities such as mammography, ultrasound, and magnetic resonance imaging have been the cornerstone of breast cancer diagnostics, with each modality offering unique advantages. The advent of radiomics, which involves extracting quantitative data from medical images, has further augmented the diagnostic capabilities of these modalities. AI, particularly deep learning algorithms, has shown potential in improving diagnostic accuracy and reducing observer variability across imaging modalities. AI-driven tools are increasingly being integrated into clinical workflows to assist in image interpretation, lesion classification, and treatment planning. Additionally, radiology plays a crucial role in guiding treatment decisions, particularly in the context of image-guided radiotherapy and monitoring response to neoadjuvant chemotherapy. The review also discusses the emerging field of theranostics, where diagnostic imaging is combined with therapeutic interventions to provide personalized cancer care. Despite these advancements, challenges such as the need for large annotated datasets and the integration of AI into clinical practice remain. The review concludes that while the role of radiology in breast cancer management is rapidly evolving, further research is required to fully realize the potential of these technologies in improving patient outcomes.
Affiliation(s)
- Muhammad Asim: Emergency Medicine, Royal Free Hospital, London, GBR
- Hina Sattar: Medicine, Dow University of Health Sciences, Karachi, PAK
- Anita Khan: Medicine, Khyber Girls Medical College, Peshawar, PAK
- Muneeza Zehra: Internal Medicine, Karachi Medical and Dental College, Karachi, PAK
- Keerthi Talluri: General Medicine, GSL (Ganni Subba Lakshmi garu) Medical College, Rajahmundry, IND
3
Lin Z, Chen L, Wang Y, Zhang T, Huang P. Improving ultrasound diagnostic precision for breast cancer and adenosis with modality-specific enhancement (MSE)-Breast Net. Cancer Lett 2024; 596:216977. [PMID: 38795759] [DOI: 10.1016/j.canlet.2024.216977]
Abstract
Adenosis is a benign breast condition whose lesions can mimic breast carcinoma and is evaluated for malignancy with the Breast Imaging-Reporting and Data System (BI-RADS). We constructed and validated modality-specific enhancement (MSE)-Breast Net based on multimodal ultrasound images and compared its performance with the BI-RADS in differentiating adenosis from breast cancer. A total of 179 patients with breast carcinoma and 229 patients with adenosis were included in this retrospective, two-institution study, and then divided into a training cohort (institution I, n = 292) and a validation cohort (institution II, n = 116). In the training cohort, the final model had a significantly greater AUC (0.82; P < 0.05) than the B-mode-based model (0.69, 95% CI [0.49-0.90]). In the validation cohort, the AUC of the final model was 0.81, greater than that of the BI-RADS (0.75, P < 0.05). The multimodal model outperformed the individual and bimodal models, reaching a significantly greater AUC of 0.87 (95% CI = 0.69-1.0) (P < 0.05). MSE-Breast Net, based on multimodal ultrasound images, exhibited better diagnostic performance than the BI-RADS in differentiating adenosis from breast cancer and may contribute to clinical diagnosis and treatment.
Affiliation(s)
- Zimei Lin: Department of Ultrasound in Medicine, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
- Libin Chen: Department of Ultrasound in Medicine, The First Affiliated Hospital of Ningbo University, Ningbo, 315201, China
- Yunzhong Wang: Department of Ultrasound in Medicine, The First Affiliated Hospital of Ningbo University, Ningbo, 315201, China
- Tao Zhang: Department of Ultrasound in Medicine, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
- Pintong Huang: Department of Ultrasound in Medicine, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China; Research Center of Ultrasound in Medicine and Biomedical Engineering, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China; Research Center for Life Science and Human Health, Binjiang Institute of Zhejiang University, Hangzhou, 310053, China
4
Guldogan N, Taskin F, Icten GE, Yilmaz E, Turk EB, Erdemli S, Parlakkilic UT, Turkoglu O, Aribal E. Artificial Intelligence in BI-RADS Categorization of Breast Lesions on Ultrasound: Can We Omit Excessive Follow-ups and Biopsies? Acad Radiol 2024; 31:2194-2202. [PMID: 38087719] [DOI: 10.1016/j.acra.2023.11.031]
Abstract
RATIONALE AND OBJECTIVES Artificial intelligence (AI) systems have been increasingly applied to breast ultrasonography. They are expected to decrease the workload of radiologists and to improve diagnostic accuracy. The aim of this study is to evaluate the performance of an AI system for BI-RADS category assessment in breast masses detected on breast ultrasound. MATERIALS AND METHODS A total of 715 masses detected in 530 patients were analyzed. Three breast imaging centers of the same institution and nine breast radiologists participated in this study. Ultrasound was performed by one radiologist, who obtained two orthogonal views of each detected lesion. These images were retrospectively reviewed by a second radiologist blinded to the patient's clinical data. A commercial AI system then evaluated the images. The level of agreement between the AI system and the two radiologists, and their diagnostic performance, were calculated according to dichotomized BI-RADS category assessment. RESULTS This study included 715 breast masses. Of these, 134 (18.75%) were malignant and 581 (81.25%) were benign. In discriminating benign and probably benign from suspicious lesions, the agreement between the AI system and both radiologists was statistically moderate. The sensitivity and specificity of radiologist 1, radiologist 2, and AI were 98.51% and 80.72%, 97.76% and 75.56%, and 98.51% and 65.40%, respectively. For radiologist 1, the positive predictive value (PPV) was 54.10%, the negative predictive value (NPV) was 99.58%, and the accuracy was 84.06%. Radiologist 2 achieved a PPV of 47.99%, an NPV of 99.32%, and an accuracy of 79.72%. The AI system exhibited a PPV of 39.64%, an NPV of 99.48%, and an accuracy of 71.61%. Notably, none of the lesions categorized as BI-RADS 2 by AI were malignant, while 2 of the lesions classified as BI-RADS 3 by AI were subsequently confirmed as malignant. By considering AI-assigned BI-RADS 2 as safe, 11% (18 of 163) of benign lesion biopsies and 46.2% (110 of 238) of follow-ups could potentially be avoided. CONCLUSION AI proves effective in predicting malignancy. Integrating it into the clinical workflow has the potential to reduce unnecessary biopsies and short-term follow-ups, which, in turn, can contribute to sustainability in healthcare practices.
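The reported sensitivity, specificity, PPV, NPV, and accuracy all follow from a 2 x 2 table once BI-RADS categories are dichotomized (benign/probably benign vs. suspicious). A small sketch with illustrative counts (not the study's data):

```python
# Dichotomized BI-RADS evaluation: benign categories -> negative,
# suspicious categories -> positive. Illustrative counts only.
def binary_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical reader calls over 134 malignant and 581 benign lesions
# (the prevalence from the abstract; the cell counts are made up).
print(binary_metrics(tp=132, fp=201, fn=2, tn=380))
```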
Affiliation(s)
- Nilgun Guldogan: Breast Clinic, Acibadem Altunizade Hospital, 34662, Istanbul, Turkey
- Fusun Taskin: Department of Radiology, Acibadem M.A.A. University School of Medicine, Atakent University Hospital, 34755, Istanbul, Turkey; Acibadem M.A.A. University Senology Research Institute, 34457, Sarıyer, Istanbul, Turkey
- Gul Esen Icten: Acibadem M.A.A. University Senology Research Institute, 34457, Sarıyer, Istanbul, Turkey; Department of Radiology, Acibadem M.A.A. University School of Medicine, Acıbadem Maslak Hospital, Büyükdere St. 40, 34457, Maslak, Istanbul, Turkey
- Ebru Yilmaz: Breast Clinic, Acibadem Altunizade Hospital, 34662, Istanbul, Turkey
- Ebru Banu Turk: Breast Clinic, Acibadem Altunizade Hospital, 34662, Istanbul, Turkey
- Servet Erdemli: Department of Radiology, Acibadem M.A.A. University School of Medicine, Atakent University Hospital, 34755, Istanbul, Turkey
- Ulku Tuba Parlakkilic: Acibadem M.A.A. University Senology Research Institute, 34457, Sarıyer, Istanbul, Turkey
- Ozlem Turkoglu: Department of Radiology, Taksim Training and Research Hospital, Istanbul, Turkey
- Erkin Aribal: Breast Clinic, Acibadem Altunizade Hospital, 34662, Istanbul, Turkey; Department of Radiology, Acibadem M.A.A. University School of Medicine, Istanbul, Turkey
5
AlZoubi A, Eskandari A, Yu H, Du H. Explainable DCNN Decision Framework for Breast Lesion Classification from Ultrasound Images Based on Cancer Characteristics. Bioengineering (Basel) 2024; 11:453. [PMID: 38790320] [PMCID: PMC11117892] [DOI: 10.3390/bioengineering11050453]
Abstract
In recent years, deep convolutional neural networks (DCNNs) have shown promising performance in medical image analysis, including breast lesion classification in 2D ultrasound (US) images. Despite the outstanding performance of DCNN solutions, explaining their decisions remains an open problem, and explainability has become essential for healthcare systems to accept and trust the models. This paper presents a novel framework for explaining DCNN classification decisions on lesions in ultrasound images, using saliency maps to link the DCNN decisions to known cancer characteristics in the medical domain. The proposed framework consists of three main phases. First, DCNN models for classification in ultrasound images are built. Next, selected visualization methods are applied to obtain saliency maps on the input images of the DCNN models. In the final phase, the visualization outputs are mapped to domain-known cancer characteristics. The paper then demonstrates the use of the framework for breast lesion classification from ultrasound images. We first follow the transfer learning approach and build two DCNN models. We then analyze the visualization outputs of the trained DCNN models using the EGrad-CAM and Ablation-CAM methods. Through the visualization outputs, we map the DCNN decisions on benign and malignant lesions to characteristics such as echogenicity, calcification, shape, and margin. A retrospective dataset of 1298 US images collected from different hospitals is used to evaluate the effectiveness of the framework. The test results show that these characteristics contribute differently to decisions on benign and malignant lesions. Our study provides a foundation for other researchers to explain DCNN classification decisions for other cancer types.
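The framework links classifier decisions to cancer characteristics through saliency maps. The paper uses EGrad-CAM and Ablation-CAM; the sketch below shows plain Grad-CAM with PyTorch hooks as a simpler stand-in, on an untrained ResNet-50 placeholder rather than a trained BUS classifier:

```python
# Minimal Grad-CAM-style saliency sketch (stand-in for the paper's
# EGrad-CAM / Ablation-CAM pipeline).
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None).eval()    # placeholder for a trained classifier
feats, grads = {}, {}

def fwd_hook(_, __, output):
    feats["a"] = output

def bwd_hook(_, __, grad_output):
    grads["a"] = grad_output[0]

model.layer4.register_forward_hook(fwd_hook)          # last conv stage
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)           # stand-in ultrasound image tensor
logits = model(x)
logits[0, logits.argmax()].backward()     # gradient of the predicted class

weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # GAP of gradients
cam = F.relu((weights * feats["a"]).sum(dim=1))       # weighted feature sum
cam = F.interpolate(cam[None], size=x.shape[2:], mode="bilinear")[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # saliency map aligned with the input image
```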
Affiliation(s)
- Alaa AlZoubi: School of Computing, University of Derby, Derby DE3 16B, UK
- Ali Eskandari: School of Computing, University of Derby, Derby DE3 16B, UK
- Harry Yu: School of Computing, University of Derby, Derby DE3 16B, UK
- Hongbo Du: School of Computing, The University of Buckingham, Buckingham MK18 1EG, UK
6
Chen J, Huang Z, Jiang Y, Wu H, Tian H, Cui C, Shi S, Tang S, Xu J, Xu D, Dong F. Diagnostic Performance of Deep Learning in Video-Based Ultrasonography for Breast Cancer: A Retrospective Multicentre Study. Ultrasound Med Biol 2024; 50:722-728. [PMID: 38369431] [DOI: 10.1016/j.ultrasmedbio.2024.01.012]
Abstract
OBJECTIVE Although ultrasound is a common tool for breast cancer screening, its accuracy is often operator-dependent. In this study, we propose a new automated deep-learning framework that uses video-based ultrasound data for breast cancer screening. METHODS Our framework incorporates DenseNet121, MobileNet, and Xception as backbones for both video- and image-based models. We used data from 3907 patients to train and evaluate the models, which were tested using video- and image-based methods, as well as reader studies with human experts. RESULTS This study evaluated 3907 female patients aged 22 to 86 years. The MobileNet video model achieved an AUROC of 0.961 in prospective data testing, surpassing the DenseNet121 video model. In real-world data testing, it demonstrated an accuracy of 92.59%, outperforming both the DenseNet121 and Xception video models and exceeding the 76.00% to 85.60% accuracy range of human experts. Additionally, the MobileNet video model exceeded the performance of image models and other video models across all evaluation metrics, including accuracy, sensitivity, specificity, F1 score, and AUC. Its performance, particularly suited to resource-limited clinical settings, demonstrates its potential for clinical application in breast cancer screening. CONCLUSIONS The video models outperformed the image-based models. We have developed a video-based artificial intelligence framework that may aid breast cancer diagnosis and alleviate the shortage of experienced experts.
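A video model of the kind described can be sketched as per-frame MobileNet features with temporal average pooling; the architecture below is an assumption for illustration, not the paper's exact configuration:

```python
# Sketch of a video-based classifier: frame-level MobileNetV2 features
# averaged over time. Details are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class VideoMobileNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = mobilenet_v2(weights=None)
        self.features = backbone.features          # frame-level CNN
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(1280, num_classes)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, time, channels, height, width)
        b, t, c, h, w = video.shape
        x = self.features(video.reshape(b * t, c, h, w))
        x = self.pool(x).flatten(1).reshape(b, t, -1)
        return self.head(x.mean(dim=1))            # average over frames

clip = torch.randn(2, 16, 3, 224, 224)             # 16-frame stand-in clips
print(VideoMobileNet()(clip).shape)                # -> torch.Size([2, 2])
```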
Affiliation(s)
- Jing Chen: Ultrasound Department, The Second Clinical Medical College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, Guangdong, China
- Yitao Jiang: Research and Development Department, Illuminate, LLC, Shenzhen, Guangdong, China
- Huaiyu Wu: Ultrasound Department, The Second Clinical Medical College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, Guangdong, China
- Hongtian Tian: Ultrasound Department, The Second Clinical Medical College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, Guangdong, China
- Chen Cui: Research and Development Department, Illuminate, LLC, Shenzhen, Guangdong, China
- Siyuan Shi: Research and Development Department, Illuminate, LLC, Shenzhen, Guangdong, China
- Jinfeng Xu: Ultrasound Department, The Second Clinical Medical College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, Guangdong, China
- Dong Xu: Institute of Basic Medicine and Cancer (IBMC), The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Chinese Academy of Sciences, Hangzhou, Zhejiang, China
- Fajin Dong: Ultrasound Department, The Second Clinical Medical College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, Guangdong, China; Jinan University, Guangzhou, Guangdong, China
7
Zhou H, Hua Z, Gao J, Lin F, Chen Y, Zhang S, Zheng T, Wang Z, Shao H, Li W, Liu F, Li Q, Chen J, Wang X, Zhao F, Qu N, Xie H, Ma H, Zhang H, Mao N. Multitask Deep Learning-Based Whole-Process System for Automatic Diagnosis of Breast Lesions and Axillary Lymph Node Metastasis Discrimination from Dynamic Contrast-Enhanced-MRI: A Multicenter Study. J Magn Reson Imaging 2024; 59:1710-1722. [PMID: 37497811] [DOI: 10.1002/jmri.28913]
Abstract
BACKGROUND Accurate diagnosis of breast lesions and discrimination of axillary lymph node (ALN) metastases largely depend on radiologist experience. PURPOSE To develop a deep learning-based whole-process system (DLWPS) for segmentation and diagnosis of breast lesions and discrimination of ALN metastasis. STUDY TYPE Retrospective. POPULATION 1760 patients with breast lesions, divided into training and validation sets (1110 patients), an internal test set (476 patients), and an external test set (174 patients). FIELD STRENGTH/SEQUENCE 3.0T/dynamic contrast-enhanced (DCE)-MRI sequence. ASSESSMENT DLWPS was developed using segmentation and classification models. The DLWPS-based segmentation model was built on the U-Net framework, combining an attention module with an edge feature extraction module. The average of the output scores of three networks was used as the result of the DLWPS-based classification model. Radiologists' diagnoses with and without DLWPS assistance were also explored. To reveal the underlying biological basis of DLWPS, genetic analysis was performed based on RNA-sequencing data. STATISTICAL TESTS Dice similarity coefficient (DI), area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and kappa value. RESULTS The segmentation model reached a DI of 0.828 and 0.813 in the internal and external test sets, respectively. For breast lesion diagnosis, the DLWPS achieved AUCs of 0.973 in the internal test set and 0.936 in the external test set. For ALN metastasis discrimination, the DLWPS achieved AUCs of 0.927 in the internal test set and 0.917 in the external test set. With DLWPS assistance, radiologist agreement improved from 0.547 to 0.794 for breast lesion diagnosis and from 0.848 to 0.892 for ALN metastasis discrimination. Additionally, 10 breast cancers with ALN metastasis were associated with pathways of the aerobic electron transport chain and cytoplasmic translation. DATA CONCLUSION The performance of DLWPS indicates that it can support radiologists in diagnosing breast lesions and discriminating ALN metastasis from nonmetastasis. LEVEL OF EVIDENCE 4 TECHNICAL EFFICACY STAGE 3.
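The segmentation results are reported as Dice similarity coefficients; a minimal implementation on binary masks (random masks used purely for the demo):

```python
# Dice similarity coefficient between a predicted and a reference mask.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return (2.0 * inter + eps) / (pred.sum() + ref.sum() + eps)

rng = np.random.default_rng(0)
pred = rng.random((256, 256)) > 0.5     # stand-in predicted mask
ref = rng.random((256, 256)) > 0.5      # stand-in reference mask
print(f"DI = {dice(pred, ref):.3f}")
```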
Affiliation(s)
- Heng Zhou: School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai, Shandong, China
- Zhen Hua: School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai, Shandong, China
- Jing Gao: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Fan Lin: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Yuqian Chen: School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai, Shandong, China
- Shijie Zhang: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Tiantian Zheng: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Zhongyi Wang: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Huafei Shao: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Wenjuan Li: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Fengjie Liu: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Qin Li: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China
- Jingjing Chen: Department of Radiology, Qingdao University Affiliated Hospital, Qingdao, Shandong, China
- Ximing Wang: Department of Radiology, Shandong Provincial Hospital, Jinan, Shandong, China
- Feng Zhao: School of Computer Science and Technology, Shandong Technology and Business University, Yantai, Shandong, China
- Nina Qu: Department of Ultrasound, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Haizhu Xie: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Heng Ma: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Haicheng Zhang: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China; Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
- Ning Mao: Department of Radiology, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China; Big Data and Artificial Intelligence Laboratory, Yantai Yuhuangding Hospital, Affiliated Hospital of Qingdao University, Yantai, Shandong, China
8
Ragab M, Khadidos AO, Alshareef AM, Khadidos AO, Altwijri M, Alhebaishi N. Optimal deep transfer learning driven computer-aided breast cancer classification using ultrasound images. Expert Systems 2024; 41. [DOI: 10.1111/exsy.13515]
Abstract
Breast cancer (BC) is regarded as the second leading type of cancer among women globally. Ultrasound images are typically used for the identification and classification of abnormalities in the breast. To enhance diagnostic performance, computer-aided diagnosis (CAD) models are effective for identifying and classifying BC. Generally, the CAD technique comprises distinct procedures such as feature extraction, preprocessing, segmentation, and classification. Recent developments in deep learning (DL) algorithms in the form of CAD systems help to minimize cost and enhance the ability of radiologists to interpret medical images. Therefore, this study develops an optimal deep transfer learning driven computer-aided BC classification (ODTLD-CABCC) technique on ultrasound images. The presented ODTLD-CABCC algorithm undergoes preprocessing at two levels: median filtering-based noise removal and graph cut segmentation. Furthermore, the residual network (ResNet101) model is used as a feature extractor. Finally, the sailfish optimizer (SFO) with a labelled weighted extreme learning machine (LWELM) algorithm is used for the classification process. The SFO technique is employed to choose the optimal parameters of the LWELM algorithm. A comprehensive set of simulations is conducted on benchmark data, and the experimental outcomes are examined from numerous aspects. The comparative examination demonstrates the supremacy of the ODTLD-CABCC technique over other approaches.
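Two stages named above, median-filter denoising and ResNet101 feature extraction, can be sketched as follows; the graph cut segmentation and the SFO-tuned LWELM classifier are not reproduced, and the input is a random stand-in image:

```python
# Sketch: median-filter noise removal, then ResNet101 deep features.
import numpy as np
import torch
from scipy.ndimage import median_filter
from torchvision.models import resnet101

image = np.random.rand(224, 224).astype(np.float32)   # stand-in ultrasound image
denoised = median_filter(image, size=3)               # median-filter noise removal

backbone = resnet101(weights=None).eval()
backbone.fc = torch.nn.Identity()                     # drop classifier: 2048-d features
x = torch.from_numpy(denoised)[None, None].repeat(1, 3, 1, 1)  # grey -> 3-channel
with torch.no_grad():
    features = backbone(x)
print(features.shape)                                 # -> torch.Size([1, 2048])
```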
Affiliation(s)
- Mahmoud Ragab: Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia; King Abdulaziz University - University of Oxford Centre for Artificial Intelligence in Precision Medicines, King Abdulaziz University, Jeddah, Saudi Arabia
- Alaa O. Khadidos: Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia; Center of Research Excellence in Artificial Intelligence and Data Science, King Abdulaziz University, Jeddah, Saudi Arabia
- Abdulrhman M. Alshareef: Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Adil O. Khadidos: Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Mohammed Altwijri: Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Nawaf Alhebaishi: Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
9
Tian R, Lu G, Tang S, Sang L, Ma H, Qian W, Yang W. Benign and malignant classification of breast tumor ultrasound images using conventional radiomics and transfer learning features: A multicenter retrospective study. Med Eng Phys 2024; 125:104117. [PMID: 38508797] [DOI: 10.1016/j.medengphy.2024.104117]
Abstract
This study aims to establish an effective benign and malignant classification model for breast tumor ultrasound images by using conventional radiomics and transfer learning features. We collaborated with a local hospital and collected a base dataset (Dataset A) consisting of 1050 cases of single lesion 2D ultrasound images from patients, with a total of 593 benign and 357 malignant tumor cases. The experimental approach comprises three main parts: conventional radiomics, transfer learning, and feature fusion. Furthermore, we assessed the model's generalizability by utilizing multicenter data obtained from Datasets B and C. The results from conventional radiomics indicated that the SVM classifier achieved the highest balanced accuracy of 0.791, while XGBoost obtained the highest AUC of 0.854. For transfer learning, we extracted deep features from ResNet50, Inception-v3, DenseNet121, MNASNet, and MobileNet. Among these models, MNASNet, with 640-dimensional deep features, yielded the optimal performance, with a balanced accuracy of 0.866, AUC of 0.937, sensitivity of 0.819, and specificity of 0.913. In the feature fusion phase, we trained SVM, ExtraTrees, XGBoost, and LightGBM with early fusion features and evaluated them with weighted voting. This approach achieved the highest balanced accuracy of 0.964 and AUC of 0.981. Combining conventional radiomics and transfer learning features demonstrated clear advantages over using individual features for breast tumor ultrasound image classification. This automated diagnostic model can ease patient burden and provide additional diagnostic support to radiologists. The performance of this model encourages future prospective research in this domain.
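The fusion phase concatenates radiomics and deep features (early fusion) and combines classifiers by weighted voting. A compact sketch with scikit-learn on synthetic features; SVM and logistic regression stand in for the paper's four classifiers (SVM, ExtraTrees, XGBoost, LightGBM):

```python
# Early (feature-level) fusion followed by weighted soft voting.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 600
radiomics = rng.normal(size=(n, 50))        # conventional radiomics features
deep = rng.normal(size=(n, 640))            # e.g. 640-d deep features (stand-in)
y = (radiomics[:, 0] + deep[:, 0] + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.hstack([radiomics, deep])            # early fusion: concatenation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

vote = VotingClassifier(
    [("svm", SVC(probability=True)), ("lr", LogisticRegression(max_iter=1000))],
    voting="soft", weights=[2, 1],          # weighted soft voting
).fit(X_tr, y_tr)
print(f"AUC: {roc_auc_score(y_te, vote.predict_proba(X_te)[:, 1]):.3f}")
```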
Affiliation(s)
- Ronghui Tian: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Guoxiu Lu: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Department of Nuclear Medicine, General Hospital of Northern Theatre Command, Shenyang, China
- Shiting Tang: Department of Orthopedics, Joint Surgery and Sports Medicine, The First Hospital of China Medical University, Shenyang, China
- Liang Sang: Department of Ultrasound, The First Hospital of China Medical University, Shenyang, China
- He Ma: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Wei Qian: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Wei Yang: Department of Radiology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China
10
Tagnamas J, Ramadan H, Yahyaouy A, Tairi H. Multi-task approach based on combined CNN-transformer for efficient segmentation and classification of breast tumors in ultrasound images. Vis Comput Ind Biomed Art 2024; 7:2. [PMID: 38273164] [PMCID: PMC10811315] [DOI: 10.1186/s42492-024-00155-w]
Abstract
Accurate segmentation of breast ultrasound (BUS) images is crucial for early diagnosis and treatment of breast cancer. The task of segmenting lesions in BUS images continues to pose significant challenges due to the limitations of convolutional neural networks (CNNs) in capturing long-range dependencies and global context information, and existing methods relying solely on CNNs have struggled to address these issues. Recently, ConvNeXts have emerged as a promising CNN architecture, while transformers have demonstrated outstanding performance in diverse computer vision tasks, including the analysis of medical images. In this paper, we propose a novel breast lesion segmentation network, CS-Net, that combines the strengths of ConvNeXt and Swin Transformer models to enhance the performance of the U-Net architecture. Our network operates on BUS images and adopts an end-to-end approach to perform segmentation. To address the limitations of CNNs, we design a hybrid encoder that incorporates modified ConvNeXt convolutions and Swin Transformer. Furthermore, to better capture spatial and channel attention in feature maps, we incorporate a Coordinate Attention Module. In addition, we design an Encoder-Decoder Features Fusion Module that fuses low-level features from the encoder with high-level semantic features from the decoder during image reconstruction. Experimental results demonstrate the superiority of our network over state-of-the-art image segmentation methods for BUS lesion segmentation.
Affiliation(s)
- Jaouad Tagnamas: Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
- Hiba Ramadan: Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
- Ali Yahyaouy: Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
- Hamid Tairi: Department of Informatics, Faculty of Sciences Dhar El Mahraz, University of Sidi Mohamed Ben Abdellah, 30000, Fez, Morocco
11
Huang Z, Yang K, Tian H, Wu H, Tang S, Cui C, Shi S, Jiang Y, Chen J, Xu J, Dong F. A validation of an entropy-based artificial intelligence for ultrasound data in breast tumors. BMC Med Inform Decis Mak 2024; 24:1. [PMID: 38166852] [PMCID: PMC10759705] [DOI: 10.1186/s12911-023-02404-z]
Abstract
BACKGROUND The application of artificial intelligence (AI) in the ultrasound (US) diagnosis of breast cancer (BCa) is increasingly prevalent. However, the impact of US-probe frequencies on the diagnostic efficacy of AI models has not been clearly established. OBJECTIVES To explore the impact of US videos of variable frequencies on the diagnostic efficacy of AI in breast US screening. METHODS This study utilized US probes of different frequencies (L14: frequency range 3.0-14.0 MHz, central frequency 9 MHz; L9: frequency range 2.5-9.0 MHz, central frequency 6.5 MHz; L13: frequency range 3.6-13.5 MHz, central frequency 8 MHz; L7: frequency range 3-7 MHz, central frequency 4.0 MHz; linear arrays) to collect breast videos and applied an entropy-based deep learning approach for evaluation. We analyzed the average two-dimensional image entropy (2-DIE) of these videos and the performance of AI models in processing videos from these different frequencies to assess how probe frequency affects AI diagnostic performance. RESULTS In testing set 1, L9 was higher than L14 in average 2-DIE; in testing set 2, L13 was higher than L7. The diagnostic efficacy of the US data used in AI model analysis varied across frequencies (AUC: L9 > L14, 0.849 vs. 0.784; L13 > L7, 0.920 vs. 0.887). CONCLUSION This study indicates that US data acquired with probes of varying frequencies exhibit different average 2-DIE values, and datasets with higher average 2-DIE demonstrate better outcomes in AI-driven BCa diagnosis. Unlike other studies, our research emphasizes the importance of US-probe frequency selection for AI model diagnostic performance, rather than focusing solely on the AI algorithms themselves. These insights offer a new perspective for early BCa screening and diagnosis and are significant for future choices of US equipment and optimization of AI algorithms.
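A simplified reading of the 2-DIE measure is the Shannon entropy of each frame's grey-level histogram, averaged over a clip; the paper's exact definition may differ, so treat this as an assumption:

```python
# Average per-frame grey-level entropy over a video (simplified 2-DIE stand-in).
import numpy as np

def image_entropy(frame: np.ndarray, bins: int = 256) -> float:
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                              # drop empty bins (log of 0)
    return float(-(p * np.log2(p)).sum())     # Shannon entropy in bits

rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(30, 480, 640))   # stand-in US video frames
avg_entropy = np.mean([image_entropy(f) for f in video])
print(f"average entropy: {avg_entropy:.3f} bits")
```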
Affiliation(s)
- Zhibin Huang: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Keen Yang: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Hongtian Tian: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Huaiyu Wu: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Shuzhen Tang: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Chen Cui: Research and Development Department, Illuminate, LLC, 518000, Shenzhen, Guangdong, China
- Siyuan Shi: Research and Development Department, Illuminate, LLC, 518000, Shenzhen, Guangdong, China
- Yitao Jiang: Research and Development Department, Illuminate, LLC, 518000, Shenzhen, Guangdong, China
- Jing Chen: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Jinfeng Xu: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China; Shenzhen People's Hospital, 518020, Shenzhen, China
- Fajin Dong: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China; Shenzhen People's Hospital, 518020, Shenzhen, China
12
Zhang L, Xu R, Zhao J. Learning technology for detection and grading of cancer tissue using tumour ultrasound images. J Xray Sci Technol 2024; 32:157-171. [PMID: 37424493] [DOI: 10.3233/xst-230085]
Abstract
BACKGROUND Early diagnosis of breast cancer is crucial to perform effective therapy. Many medical imaging modalities including MRI, CT, and ultrasound are used to diagnose cancer. OBJECTIVE This study aims to investigate the feasibility of applying transfer learning techniques to train convolutional neural networks (CNNs) to automatically diagnose breast cancer from ultrasound images. METHODS Transfer learning techniques were used to train CNNs to recognise breast cancer in ultrasound images; the models were trained and tested on an ultrasound image dataset, and each model's training and validation accuracies were assessed. RESULTS MobileNet had the greatest accuracy during training and DenseNet121 during validation. Transfer learning algorithms can detect breast cancer in ultrasound images. CONCLUSIONS Based on the results, transfer learning models may be useful for automated breast cancer diagnosis in ultrasound images. However, only a trained medical professional should diagnose cancer, and computational approaches should only be used to help make quick decisions.
Affiliation(s)
- Liyan Zhang: Department of Ultrasound, Sunshine Union Hospital, Weifang, China
- Ruiyan Xu: College of Health, Binzhou Polytechnical College, Binzhou, China
- Jingde Zhao: Department of Imaging, Qingdao Hospital of Traditional Chinese Medicine (Qingdao HaiCi Hospital), Qingdao, China
13
Montaha S, Azam S, Bhuiyan MRI, Chowa SS, Mukta MSH, Jonkman M. Malignancy pattern analysis of breast ultrasound images using clinical features and a graph convolutional network. Digit Health 2024; 10:20552076241251660. [PMID: 38817843] [PMCID: PMC11138200] [DOI: 10.1177/20552076241251660]
Abstract
Objective Early diagnosis of breast cancer can lead to effective treatment, possibly increase long-term survival rates, and improve quality of life. The objective of this study is to present an automated analysis and classification system for breast cancer using clinical markers such as tumor shape, orientation, margin, and surrounding tissue. The novelty and uniqueness of the study lie in the approach of considering medical features based on the diagnosis of radiologists. Methods Using clinical markers, a graph is generated where each feature is represented by a node, and the connection between them is represented by an edge which is derived through Pearson's correlation method. A graph convolutional network (GCN) model is proposed to classify breast tumors into benign and malignant, using the graph data. Several statistical tests are performed to assess the importance of the proposed features. The performance of the proposed GCN model is improved by experimenting with different layer configurations and hyper-parameter settings. Results Results show that the proposed model has a 98.73% test accuracy. The performance of the model is compared with a graph attention network, a one-dimensional convolutional neural network, and five transfer learning models, ten machine learning models, and three ensemble learning models. The performance of the model was further assessed with three supplementary breast cancer ultrasound image datasets, where the accuracies are 91.03%, 94.37%, and 89.62% for Dataset A, Dataset B, and Dataset C (combining Dataset A and Dataset B) respectively. Overfitting issues are assessed through k-fold cross-validation. Conclusion Several variants are utilized to present a more rigorous and fair evaluation of our work, especially the importance of extracting clinically relevant features. Moreover, a GCN model using graph data can be a promising solution for an automated feature-based breast image classification system.
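The graph construction described, nodes for clinical markers and edges from Pearson correlation, plus one GCN propagation step, can be sketched in NumPy (node count, embedding size, and threshold are illustrative):

```python
# Pearson-correlation graph over feature nodes and one GCN layer.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 32))                 # 6 clinical-marker nodes, 32-d embeddings

corr = np.corrcoef(X)                        # Pearson correlation between nodes
A = (np.abs(corr) > 0.1).astype(float)       # edge where |r| exceeds a threshold
np.fill_diagonal(A, 1.0)                     # add self-loops

D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt          # symmetric normalization

W = rng.normal(size=(32, 16))                # layer weights
H = np.maximum(A_hat @ X @ W, 0)             # one GCN layer: ReLU(A_hat X W)
print(H.shape)                               # -> (6, 16)
```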
Affiliation(s)
- Sidratul Montaha: Department of Computer Science, University of Calgary, Calgary, Canada
- Sami Azam: Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
- Sadia Sultana Chowa: Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
- Mirjam Jonkman: Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
14
Zhou G, Mosadegh B. Distilling Knowledge From an Ensemble of Vision Transformers for Improved Classification of Breast Ultrasound. Acad Radiol 2024; 31:104-120. [PMID: 37666747] [DOI: 10.1016/j.acra.2023.08.006]
Abstract
RATIONALE AND OBJECTIVES To develop a deep learning model for the automated classification of breast ultrasound images as benign or malignant. More specifically, the application of vision transformers, ensemble learning, and knowledge distillation is explored for breast ultrasound classification. MATERIALS AND METHODS Single-view, B-mode ultrasound images were curated from the publicly available Breast Ultrasound Image (BUSI) dataset, which has categorical ground truth labels (benign vs malignant) assigned by radiologists and malignant cases confirmed by biopsy. The performance of vision transformers (ViTs) is compared to convolutional neural networks (CNNs), followed by a comparison between supervised, self-supervised, and randomly initialized ViTs. Subsequently, an ensemble of 10 independently trained ViTs, whose output is the unweighted average of the individual models' outputs, is compared to the performance of each ViT alone. Finally, we train a single ViT to emulate the ensembled ViTs using knowledge distillation. RESULTS On this dataset, trained using five-fold cross-validation, ViTs outperform CNNs, while self-supervised ViTs outperform supervised and randomly initialized ViTs. The ensemble model achieves an area under the receiver operating characteristic curve (AuROC) and area under the precision-recall curve (AuPRC) of 0.977 and 0.965 on the test set, outperforming the average AuROC and AuPRC of the independently trained ViTs (0.958 ± 0.05 and 0.931 ± 0.016). The distilled ViT achieves an AuROC and AuPRC of 0.972 and 0.960. CONCLUSION Transfer learning and ensemble learning each offer increased performance independently and can be sequentially combined to collectively improve the final model. Furthermore, a single vision transformer can be trained to match the performance of an ensemble of vision transformers using knowledge distillation.
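Unweighted ensemble averaging and distillation of the ensemble into a single student can be sketched as below; linear layers stand in for the ViTs, and the temperature-scaled KL loss is a standard distillation choice rather than the paper's verbatim recipe:

```python
# Ensemble averaging of teacher outputs + knowledge-distillation loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

teachers = [nn.Linear(128, 2) for _ in range(10)]   # stand-ins for 10 trained ViTs
student = nn.Linear(128, 2)                          # stand-in for the distilled ViT

x = torch.randn(8, 128)                              # stand-in image embeddings
with torch.no_grad():
    ensemble_logits = torch.stack([t(x) for t in teachers]).mean(dim=0)

T = 2.0                                              # distillation temperature
kd_loss = F.kl_div(
    F.log_softmax(student(x) / T, dim=1),            # student log-probabilities
    F.softmax(ensemble_logits / T, dim=1),           # ensemble soft targets
    reduction="batchmean",
) * T * T
kd_loss.backward()                                   # gradients flow to the student
print(float(kd_loss))
```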
Affiliation(s)
- Bobak Mosadegh: Dalio Institute of Cardiovascular Imaging, Department of Radiology, Weill Cornell Medicine, New York, New York
15
Tasnim J, Hasan MK. CAM-QUS guided self-tuning modular CNNs with multi-loss functions for fully automated breast lesion classification in ultrasound images. Phys Med Biol 2023; 69:015018. [PMID: 38056017] [DOI: 10.1088/1361-6560/ad1319]
Abstract
Objective. Breast cancer is the major cause of cancer death among women worldwide. Deep learning-based computer-aided diagnosis (CAD) systems for classifying lesions in breast ultrasound images can help materialise the early detection of breast cancer and enhance survival chances. Approach. This paper presents a completely automated BUS diagnosis system with modular convolutional neural networks tuned with novel loss functions. The proposed network comprises a dynamic channel input enhancement network, an attention-guided InceptionV3-based feature extraction network, a classification network, and a parallel feature transformation network that maps deep features into a quantitative ultrasound (QUS) feature space. These networks function together to improve classification accuracy by increasing the separation of benign and malignant class-specific features and enriching them simultaneously. Unlike traditional approaches based on categorical cross-entropy (CCE) loss alone, our method uses two additional novel losses: a class activation mapping (CAM)-based loss and a QUS feature-based loss, which enable the overall network to learn to extract clinically valued lesion shape- and texture-related properties, focusing primarily on the lesion area, for explainable AI (XAI). Main results. Experiments on four public, one private, and a combined breast ultrasound dataset are used to validate our strategy. The suggested technique obtains an accuracy of 97.28%, sensitivity of 93.87%, and F1-score of 95.42% on dataset 1 (BUSI), and an accuracy of 91.50%, sensitivity of 89.38%, and F1-score of 89.31% on the combined dataset, consisting of 1494 images collected from hospitals in five demographic locations using four ultrasound systems of different manufacturers. These results outperform techniques reported in the literature by a considerable margin. Significance. The proposed CAD system provides diagnosis from the auto-focused lesion area of B-mode BUS images, avoiding any explicit requirement for segmentation or region-of-interest extraction, and thus can be a handy tool for making accurate and reliable diagnoses even in unspecialized healthcare centers.
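The multi-loss idea, categorical cross-entropy plus CAM- and QUS-based terms, amounts to a weighted sum of objectives; the auxiliary terms below are mean-squared-error placeholders, not the paper's formulations:

```python
# Combined training objective sketch: CCE + auxiliary CAM and QUS terms.
import torch
import torch.nn.functional as F

def total_loss(logits, labels, cam, lesion_mask, qus_pred, qus_target,
               w_cam: float = 0.5, w_qus: float = 0.5) -> torch.Tensor:
    cce = F.cross_entropy(logits, labels)
    cam_loss = F.mse_loss(cam, lesion_mask)        # steer the CAM toward the lesion area
    qus_loss = F.mse_loss(qus_pred, qus_target)    # match the QUS feature space
    return cce + w_cam * cam_loss + w_qus * qus_loss

loss = total_loss(torch.randn(4, 2), torch.randint(0, 2, (4,)),
                  torch.rand(4, 7, 7), torch.rand(4, 7, 7),
                  torch.randn(4, 10), torch.randn(4, 10))
print(float(loss))
```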
Affiliation(s)
- Jarin Tasnim: Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka 1205, Bangladesh
- Md Kamrul Hasan: Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka 1205, Bangladesh
16
Qiu S, Zhuang S, Li B, Wang J, Zhuang Z. Prospective assessment of breast lesions AI classification model based on ultrasound dynamic videos and ACR BI-RADS characteristics. Front Oncol 2023; 13:1274557. [PMID: 38023255] [PMCID: PMC10656688] [DOI: 10.3389/fonc.2023.1274557]
Abstract
Introduction AI-assisted ultrasound diagnosis is considered a fast and accurate new method that can reduce the subjective and experience-dependent nature of handheld ultrasound. To better meet clinical diagnostic needs, we first proposed an AI classification model for breast lesions based on ultrasound dynamic videos and ACR BI-RADS characteristics (hereafter, Auto BI-RADS). In this study, we prospectively verify its performance. Methods Model development was based on retrospective data including 480 ultrasound dynamic videos, equivalent to 18122 static images, of pathologically proven breast lesions from 420 patients. A total of 292 ultrasound dynamic videos of breast lesions from internal and external hospitals were prospectively tested with Auto BI-RADS. Its performance was compared with both experienced and junior radiologists using the DeLong method, the Kappa test, and the McNemar test. Results Auto BI-RADS achieved an accuracy, sensitivity, and specificity of 0.87, 0.93, and 0.81, respectively. The consistency of the BI-RADS category between Auto BI-RADS and the experienced group (Kappa: 0.82) was higher than that with the junior group (Kappa: 0.60). The consistency rates between Auto BI-RADS and the experienced group were higher than those with the junior group for shape (93% vs. 80%; P = .01), orientation (90% vs. 84%; P = .02), margin (84% vs. 71%; P = .01), echo pattern (69% vs. 56%; P = .001), and posterior features (76% vs. 71%; P = .0046), while the consistency for calcification did not differ significantly. Discussion In this study, we prospectively verified a novel AI tool based on ultrasound dynamic videos and ACR BI-RADS characteristics. The prospective assessment suggested that the AI tool not only meets clinical needs but also reaches the diagnostic efficiency of experienced radiologists.
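The agreement figures above are Cohen's kappa values; with scikit-learn they are a one-liner (the labels below are illustrative, not study data):

```python
# Agreement between the AI tool's BI-RADS categories and a reader's,
# measured with Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

ai_birads = [3, 4, 4, 5, 2, 3, 4, 5, 3, 2]
reader_birads = [3, 4, 5, 5, 2, 3, 4, 4, 3, 3]
print(f"kappa = {cohen_kappa_score(ai_birads, reader_birads):.2f}")
```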
Affiliation(s)
- Shunmin Qiu: Department of Ultrasound, First Affiliated Hospital of Shantou University Medical College, Shantou, Guangdong, China
- Shuxin Zhuang: School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, Guangdong, China
- Bin Li: Product Development Department, Shantou Institute of Ultrasonic Instruments, Shantou, Guangdong, China
- Jinhong Wang: Department of Ultrasound, Shantou Chaonan Minsheng Hospital, Shantou, Guangdong, China
- Zhemin Zhuang: Engineering College, Shantou University, Shantou, Guangdong, China
17
Morita D, Kawarazaki A, Koimizu J, Tsujiko S, Soufi M, Otake Y, Sato Y, Numajiri T. Automatic orbital segmentation using deep learning-based 2D U-net and accuracy evaluation: A retrospective study. J Craniomaxillofac Surg 2023; 51:609-613. [PMID: 37813770] [DOI: 10.1016/j.jcms.2023.09.003]
Abstract
The purpose of this study was to verify whether the accuracy of automatic segmentation (AS) of computed tomography (CT) images of fractured orbits using deep learning (DL) is sufficient for clinical application. In orbital fracture surgery, many methods have been reported that create a 3D anatomical model for use as a reference. However, because the orbital bones are thin and complex, creating a segmentation model for 3D printing is complicated and time-consuming. Here, DL training was performed using U-Net as the DL model, and the AS output was validated with Dice coefficients and average symmetric surface distance (ASSD). In addition, the AS output was 3D printed and evaluated for accuracy by four surgeons, each with over 15 years of clinical experience. One hundred twenty-five CT images were prepared, and manual orbital segmentation was performed in all cases. Ten orbital fracture cases were randomly selected as validation data, and the remaining 115 were set as training data. AS was successful in all cases, with good accuracy: Dice, 0.860 ± 0.033 (mean ± SD); ASSD, 0.713 ± 0.212 mm. In evaluating AS accuracy, the expert surgeons generally considered that the output could be used for surgical support without further modification. The orbital AS algorithm developed using DL in this study is highly accurate and can create 3D models rapidly at low cost, potentially enabling safer and more accurate surgeries.
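The ASSD metric reported above can be computed from the masks' surfaces and Euclidean distance transforms; a 2D sketch follows (the study used CT volumes, but the same code works on 3D arrays):

```python
# Average symmetric surface distance (ASSD) between two binary masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def surface(mask: np.ndarray) -> np.ndarray:
    return mask & ~binary_erosion(mask)              # boundary voxels

def assd(a: np.ndarray, b: np.ndarray) -> float:
    sa, sb = surface(a), surface(b)
    d_a = distance_transform_edt(~sb)[sa]            # distances from A's surface to B's
    d_b = distance_transform_edt(~sa)[sb]            # and vice versa
    return float(np.concatenate([d_a, d_b]).mean())

a = np.zeros((64, 64), bool); a[16:48, 16:48] = True   # stand-in masks
b = np.zeros((64, 64), bool); b[18:50, 14:46] = True
print(f"ASSD = {assd(a, b):.3f} px")
```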
Affiliation(s)
- Daiki Morita: Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Ayako Kawarazaki: Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Jungen Koimizu: Department of Plastic and Reconstructive Surgery, Omihachiman Community Medical Center, Shiga, Japan
- Shoko Tsujiko: Department of Plastic and Reconstructive Surgery, Saiseikai Shigaken Hospital, Shiga, Japan
- Mazen Soufi: Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshito Otake: Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Yoshinobu Sato: Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Toshiaki Numajiri: Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
18
Guo Y, Jiang R, Gu X, Cheng HD, Garg H. A Novel Fuzzy Relative-Position-Coding Transformer for Breast Cancer Diagnosis Using Ultrasonography. Healthcare (Basel) 2023; 11:2530. [PMID: 37761727] [PMCID: PMC10531413] [DOI: 10.3390/healthcare11182530]
Abstract
Breast cancer is a leading cause of death in women worldwide, and early detection is crucial for successful treatment. Computer-aided diagnosis (CAD) systems have been developed to assist doctors in identifying breast cancer on ultrasound images. In this paper, we propose a novel fuzzy relative-position-coding (FRPC) Transformer to classify breast ultrasound (BUS) images for breast cancer diagnosis. The proposed FRPC Transformer combines the self-attention mechanism of Transformer networks with fuzzy relative-position coding to capture global and local features of BUS images. The performance of the proposed method is evaluated on a benchmark dataset and compared with existing Transformer approaches using various metrics. The proposed method achieves higher accuracy, sensitivity, specificity, and F1 score (all 90.52%) and a higher area under the receiver operating characteristic (ROC) curve (0.91) than the original Transformer model (89.54%, 89.54%, 89.54%, and 0.89, respectively). Overall, the proposed FRPC Transformer is a promising approach for breast cancer diagnosis, with potential applications in clinical practice and a possible contribution to the early detection of breast cancer.
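The paper's exact fuzzy relative-position-coding formulation is not given in this abstract, so the sketch below only illustrates the general idea: standard self-attention whose logits receive a bias derived from a fuzzy membership function over relative token distances. The Gaussian membership, the single head, and all names here are assumptions for illustration, not the authors' design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FuzzyRelPosAttention(nn.Module):
    """Single-head self-attention with a fuzzy relative-position bias.

    Hypothetical sketch: each relative distance d between tokens is
    mapped to a fuzzy membership exp(-(d/sigma)^2), scaled by a learned
    weight, and added to the attention logits.
    """
    def __init__(self, dim, max_len=196, sigma=8.0):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.scale = dim ** -0.5
        # Precompute fuzzy memberships for all relative distances.
        rel = torch.arange(max_len)[None, :] - torch.arange(max_len)[:, None]
        self.register_buffer("membership", torch.exp(-(rel.float() / sigma) ** 2))
        self.bias_weight = nn.Parameter(torch.zeros(1))

    def forward(self, x):            # x: (batch, seq_len, dim)
        n = x.shape[1]
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn + self.bias_weight * self.membership[:n, :n]
        return F.softmax(attn, dim=-1) @ v
```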
Affiliation(s)
- Yanhui Guo: Department of Computer Science, University of Illinois, Springfield, IL 62703, USA
- Ruquan Jiang: Department of Pediatrics, Xinxiang Medical University, Xinxiang 453003, China
- Xin Gu: School of Information Science and Technology, North China University of Technology, Beijing 100144, China
- Heng-Da Cheng: Department of Computer Science, Utah State University, Logan, UT 84322, USA
- Harish Garg: School of Mathematics, Thapar Institute of Engineering and Technology, Deemed University, Patiala 147004, Punjab, India
19
Xiang H, Wang X, Xu M, Zhang Y, Zeng S, Li C, Liu L, Deng T, Tang G, Yan C, Ou J, Lin Q, He J, Sun P, Li A, Chen H, Heng PA, Lin X. Deep Learning-assisted Diagnosis of Breast Lesions on US Images: A Multivendor, Multicenter Study. Radiol Artif Intell 2023; 5:e220185. [PMID: 37795135 PMCID: PMC10546363 DOI: 10.1148/ryai.220185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Revised: 05/04/2023] [Accepted: 06/14/2023] [Indexed: 10/06/2023]
Abstract
Purpose To evaluate the diagnostic performance of a deep learning (DL) model for breast US across four hospitals and assess its value to readers with different levels of experience. Materials and Methods In this retrospective study, a dual attention-based convolutional neural network was built and validated to discriminate malignant tumors from benign tumors by using B-mode and color Doppler US images (n = 45,909, March 2011-August 2018), acquired with 42 types of US machines, of 9895 pathologic analysis-confirmed breast lesions in 8797 patients (27 men and 8770 women; mean age, 47 years ± 12 [SD]). With and without assistance from the DL model, three novice readers with less than 5 years of US experience and two experienced readers with 8 and 18 years of US experience, respectively, interpreted 1024 randomly selected lesions. Differences in the areas under the receiver operating characteristic curves (AUCs) were tested using the DeLong test. Results The DL model using both B-mode and color Doppler US images demonstrated expert-level performance at the lesion level, with an AUC of 0.94 (95% CI: 0.92, 0.95) for the internal set. In external datasets, the AUCs were 0.92 (95% CI: 0.90, 0.94) for hospital 1, 0.91 (95% CI: 0.89, 0.94) for hospital 2, and 0.96 (95% CI: 0.94, 0.98) for hospital 3. DL assistance led to improved AUCs (P < .001) for one experienced and three novice radiologists and improved interobserver agreement. The average false-positive rate was reduced by 7.6% (P = .08). Conclusion The DL model may help radiologists, especially novice readers, improve accuracy and interobserver agreement of breast tumor diagnosis using US. Keywords: Ultrasound, Breast, Diagnosis, Breast Cancer, Deep Learning, Ultrasonography. Supplemental material is available for this article. © RSNA, 2023.
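The study compares AUCs with the DeLong test. As a simpler stand-in that answers approximately the same question, the sketch below estimates a confidence interval for a paired AUC difference with a bootstrap; `y`, `p_model`, and `p_reader` are hypothetical NumPy arrays holding labels, model scores, and a reader's scores on the same lesions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_diff(y, p_model, p_reader, n_boot=2000, seed=0):
    """Paired-bootstrap mean and 95% CI for AUC(model) - AUC(reader).

    Note: the paper uses the DeLong test; this bootstrap is a simpler
    approximate substitute. `y` holds binary labels (0/1) as an array.
    """
    rng = np.random.default_rng(seed)
    diffs, n = [], len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if len(np.unique(y[idx])) < 2:   # a resample needs both classes
            continue
        diffs.append(roc_auc_score(y[idx], p_model[idx]) -
                     roc_auc_score(y[idx], p_reader[idx]))
    diffs = np.array(diffs)
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return diffs.mean(), (lo, hi)
```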
Affiliation(s)
- From the Departments of Ultrasound (H.X., C.L., L.L., T.D., C.Y., J.O., Q.L., A.L., X.L.) and Pathology (J.H., P.S.), Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China; Zhejiang Laboratory, Hangzhou, China (X.W.); Department of Radiation Oncology, Stanford University School of Medicine, Stanford, Palo Alto, Calif (X.W.); Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China (X.W., P.A.H.); Department of Ultrasound Medicine, The First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China (M.X.); Key Laboratory of Precision Diagnosis and Treatment for Hepatobiliary and Pancreatic Tumor of Zhejiang Province, Hangzhou, China (M.X.); Department of Ultrasound Medicine, The Third People's Hospital of Zhengzhou, Cancer Hospital of Henan University, Zhengzhou, China (Y.Z.); Department of Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China (S.Z.); Department of Ultrasound and Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China (G.T.); and Department of Computer Science and Engineering and Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China (H.C.)
20
Gong X, Yuan S, Xiang Y, Fan L, Zhou H. Domain knowledge-guided adversarial adaptive fusion of hybrid breast ultrasound data. Comput Biol Med 2023; 164:107256. [PMID: 37473565 DOI: 10.1016/j.compbiomed.2023.107256] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Revised: 06/20/2023] [Accepted: 07/07/2023] [Indexed: 07/22/2023]
Abstract
Contrast-enhanced ultrasound (CEUS), which provides more detailed microvascular information about the tumor, is routinely acquired by radiologists in clinical diagnosis along with B-mode ultrasound (B-mode US). However, automatically analyzing breast CEUS is challenging because CEUS video differs from natural video (e.g., sports or action videos) in that it contains no positional displacements. Additionally, most existing methods rarely use the Time Intensity Curve (TIC) information of CEUS or non-imaging clinical (NIC) data. To address these issues, we propose a novel breast cancer diagnosis framework that learns the complementarity and correlation across hybrid modal data, including CEUS, B-mode US, and NIC data, through an adversarial adaptive fusion method. Furthermore, to fully exploit the CEUS information, the proposed method, inspired by the clinical workflow of radiologists, first extracts the TIC parameters of CEUS. Then, we select a clip from the CEUS video using a frame screening strategy and extract spatio-temporal features from these clips through a critical frame attention network. To our knowledge, this is the first AI system to use TIC parameters, NIC data, and ultrasound imaging together in diagnosis. We validated our method on a dataset collected from 554 patients. The experimental results demonstrate the excellent performance of the proposed method: it achieves an accuracy of 87.73%, nearly 5 percentage points higher than uni-modal approaches.
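The abstract does not list which TIC parameters are extracted, so the sketch below computes a generic parameter set (baseline, peak, time to peak, wash-in slope, and area under the curve) from a region-of-interest intensity trace; the parameter choice and the `fps` argument are assumptions for illustration.

```python
import numpy as np

def tic_parameters(intensity, fps=10.0):
    """Extract simple time-intensity-curve (TIC) parameters from a CEUS
    ROI intensity trace (a 1D NumPy array, one value per frame).

    Hypothetical parameter set; the paper does not specify its TIC
    parameters, and frame rate `fps` is an assumed default.
    """
    t = np.arange(len(intensity)) / fps
    i_peak = int(intensity.argmax())
    peak = intensity[i_peak]
    baseline = intensity[:max(1, i_peak // 4)].mean()  # pre-wash-in level
    time_to_peak = t[i_peak]
    wash_in_slope = (peak - baseline) / max(time_to_peak, 1e-6)
    auc = np.trapz(intensity, t)                       # area under the TIC
    return {"baseline": baseline, "peak": peak,
            "time_to_peak": time_to_peak,
            "wash_in_slope": wash_in_slope, "auc": auc}
```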
Affiliation(s)
- Xun Gong: School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 610031, Sichuan, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Chengdu, 610031, Sichuan, China
- Shuai Yuan: Tangshan Research Institute, Southwest Jiaotong University, Tangshan, 063002, Hebei, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Chengdu, 610031, Sichuan, China
- Yang Xiang: Tangshan Research Institute, Southwest Jiaotong University, Tangshan, 063002, Hebei, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Chengdu, 610031, Sichuan, China
- Lin Fan: School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, 610031, Sichuan, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Chengdu, 610031, Sichuan, China
- Hong Zhou: Third People's Hospital of Chengdu, Affiliated Hospital of Southwest Jiaotong University, Chengdu, 610031, Sichuan, China
21
Tian R, Yu M, Liao L, Zhang C, Zhao J, Sang L, Qian W, Wang Z, Huang L, Ma H. An effective convolutional neural network for classification of benign and malignant breast and thyroid tumors from ultrasound images. Phys Eng Sci Med 2023; 46:995-1013. [PMID: 37195403 DOI: 10.1007/s13246-023-01262-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2022] [Accepted: 04/16/2023] [Indexed: 05/18/2023]
Abstract
Breast and thyroid cancers are the two most common cancers among women worldwide. The early clinical diagnosis of breast and thyroid cancers often utilizes ultrasonography. Most ultrasound images of breast and thyroid cancer lack specificity, however, which reduces the accuracy of ultrasound-based clinical diagnosis. This study attempts to develop an effective convolutional neural network (E-CNN) for the classification of benign and malignant breast and thyroid tumors from ultrasound images. Two-dimensional (2D) ultrasound images of 1052 breast tumors were collected, and 8245 2D tumor images were obtained from 76 thyroid cases. We performed tenfold cross-validation on the breast and thyroid data, with mean classification accuracies of 0.932 and 0.902, respectively. In addition, the proposed E-CNN was applied to classify and evaluate 9297 mixed images (breast and thyroid images). The mean classification accuracy was 0.875, and the mean area under the curve (AUC) was 0.955. Based on data of the same modality, we transferred the breast model to classify the typical tumor images of the 76 thyroid cases. The fine-tuned model achieved a mean classification accuracy of 0.945 and a mean AUC of 0.958. Meanwhile, the transferred thyroid model achieved a mean classification accuracy of 0.932 and a mean AUC of 0.959 on the 1052 breast tumor images. The experimental results demonstrate the ability of the E-CNN to learn the features of, and classify, breast and thyroid tumors. Moreover, transferring models between tasks within the same modality is a promising way to classify benign and malignant tumors from ultrasound images.
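The tenfold cross-validation protocol reported here is easy to reproduce. A minimal scikit-learn sketch follows; `model_fn` is a hypothetical factory returning any classifier with fit/predict methods, since the E-CNN itself is not reproduced here.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(model_fn, images, labels, n_splits=10, seed=0):
    """Stratified ten-fold cross-validation over NumPy arrays.

    `model_fn` is a caller-supplied factory (an assumption of this
    sketch) returning an object with fit(X, y) and predict(X).
    """
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    accs = []
    for train_idx, test_idx in skf.split(images, labels):
        model = model_fn()                      # fresh model per fold
        model.fit(images[train_idx], labels[train_idx])
        preds = model.predict(images[test_idx])
        accs.append((preds == labels[test_idx]).mean())
    return np.mean(accs), np.std(accs)
```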
Affiliation(s)
- Ronghui Tian: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, Liaoning, China
- Miao Yu: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, Liaoning, China
- Lingmin Liao: Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, Jiangxi, China; Jiangxi Key Laboratory of Clinical and Translational Cancer Research, Nanchang, 330006, China
- Chunquan Zhang: Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, Jiangxi, China
- Jiali Zhao: Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, Jiangxi, China; Department of Oncology, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, Jiangxi, China; Jiangxi Key Laboratory of Clinical and Translational Cancer Research, Nanchang, 330006, China
- Liang Sang: Department of Ultrasound, The First Hospital of China Medical University, Shenyang, 110001, Liaoning, China
- Wei Qian: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, Liaoning, China
- Zhiguo Wang: Department of Nuclear Medicine, General Hospital of Northern Theatre Command, Shenyang, 110016, Liaoning, China
- Long Huang: Department of Oncology, The Second Affiliated Hospital of Nanchang University, Nanchang, 330006, Jiangxi, China; Jiangxi Key Laboratory of Clinical and Translational Cancer Research, Nanchang, 330006, China
- He Ma: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, Liaoning, China; National University of Singapore (Suzhou) Research Institute, Suzhou, 215123, China
22
Deb SD, Jha RK. Breast UltraSound Image classification using fuzzy-rank-based ensemble network. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/28/2023]
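This entry is indexed without an abstract, but fuzzy-rank-based ensembling generally fuses the class-probability outputs of several base networks through nonlinear "fuzzy" rank functions. The sketch below is one hypothetical variant of such a fusion rule; the two membership functions and the argmin decision are our assumptions, not necessarily the paper's.

```python
import numpy as np

def fuzzy_rank_fusion(prob_list):
    """Fuse per-model class probabilities with a fuzzy-rank scheme.

    Each array in `prob_list` has shape (n_samples, n_classes). A
    confidence p near 1 yields a penalty near 0 under both membership
    functions, so confident classes accumulate the lowest fused penalty.
    """
    penalties = np.zeros_like(prob_list[0])
    for p in prob_list:
        r1 = 1.0 - np.exp(-((p - 1.0) ** 2) / 2.0)   # fuzzy penalty 1
        r2 = np.tanh(((p - 1.0) ** 2) / 2.0)          # fuzzy penalty 2
        penalties += r1 * r2
    return penalties.argmin(axis=1)                   # fused prediction
```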
23
Wang SH, Chen G, Zhong X, Lin T, Shen Y, Fan X, Cao L. Global development of artificial intelligence in cancer field: a bibliometric analysis range from 1983 to 2022. Front Oncol 2023; 13:1215729. [PMID: 37519796 PMCID: PMC10382324 DOI: 10.3389/fonc.2023.1215729] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Accepted: 06/26/2023] [Indexed: 08/01/2023] Open
Abstract
Background Artificial intelligence (AI) is now widely applied in the cancer field. The aim of this study was to explore the hotspots and trends of AI in cancer research. Methods Four topic words ("tumor," "cancer," "carcinoma," and "artificial intelligence") were searched in the Web of Science database from January 1983 to December 2022. All data, including country, continent, and Journal Impact Factor, were then documented and processed using bibliometric software. Results A total of 6,920 papers were collected and analyzed. We present the annual publications and citations, most productive countries/regions, most influential scholars, the collaborations of journals and institutions, and the research focus and hotspots in AI-based cancer research. Conclusion This study systematically summarizes the current state of AI in cancer research so as to lay a foundation for future work.
Affiliation(s)
- Sui-Han Wang: Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Guoqiao Chen: Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Xin Zhong: Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Tianyu Lin: Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Yan Shen: Department of General Surgery, The First People's Hospital of Yu Hang District, Hangzhou, China
- Xiaoxiao Fan: Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
- Liping Cao: Department of General Surgery, Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, China
24
Luo Y, Lu Z, Liu L, Huang Q. Deep fusion of human-machine knowledge with attention mechanism for breast cancer diagnosis. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104784] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/13/2023]
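No abstract is indexed for this entry either. As a generic illustration of attention-based fusion of deep image features with hand-crafted ("human knowledge") features, the sketch below gates two projected feature vectors with a learned sigmoid attention; the architecture and all names are assumptions, not a reproduction of the paper's model.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Gated fusion of deep features and hand-crafted features.

    Hypothetical sketch: both inputs are projected to a shared width,
    and a learned sigmoid gate mixes them per feature dimension.
    """
    def __init__(self, deep_dim, hand_dim, fused_dim=256):
        super().__init__()
        self.proj_deep = nn.Linear(deep_dim, fused_dim)
        self.proj_hand = nn.Linear(hand_dim, fused_dim)
        self.gate = nn.Sequential(nn.Linear(2 * fused_dim, fused_dim),
                                  nn.Sigmoid())

    def forward(self, deep_feat, hand_feat):
        d = self.proj_deep(deep_feat)
        h = self.proj_hand(hand_feat)
        g = self.gate(torch.cat([d, h], dim=-1))   # per-feature attention
        return g * d + (1.0 - g) * h               # gated convex fusion
```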
25
Morita D, Mazen S, Tsujiko S, Otake Y, Sato Y, Numajiri T. Deep-learning-based automatic facial bone segmentation using a two-dimensional U-Net. Int J Oral Maxillofac Surg 2023; 52:787-792. [PMID: 36328865 DOI: 10.1016/j.ijom.2022.10.015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2022] [Revised: 08/16/2022] [Accepted: 10/24/2022] [Indexed: 06/04/2023]
Abstract
The use of deep learning (DL) in medical imaging is becoming increasingly widespread. Although DL has been used previously for the segmentation of facial bones in computed tomography (CT) images, there are few reports of segmentation involving multiple areas. In this study, a U-Net was used to investigate the automatic segmentation of facial bones into eight areas, with the aim of facilitating virtual surgical planning (VSP) and computer-aided design and manufacturing (CAD/CAM) in maxillofacial surgery. CT data from 50 patients were prepared and used for training, and five-fold cross-validation was performed. The output results generated by the DL model were validated by Dice coefficient and average symmetric surface distance (ASSD). The automatic segmentation was successful in all cases, with a mean ± standard deviation Dice coefficient of 0.897 ± 0.077 and ASSD of 1.168 ± 1.962 mm. The accuracy was very high for the mandible (Dice coefficient 0.984, ASSD 0.324 mm) and zygomatic bones (Dice coefficient 0.931, ASSD 0.487 mm), and these could be introduced for VSP and CAD/CAM without any modification. The results for other areas, particularly the teeth, were slightly inferior, with possible reasons being the effects of defects, bonded maxillary and mandibular teeth, and metal artefacts. A limitation of this study is that the data were from a single institution. Hence further research is required to improve the accuracy for some facial areas and to validate the results in larger and more diverse populations.
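For readers unfamiliar with the architecture, a heavily reduced 2D U-Net is sketched below: a two-level encoder-decoder with one skip connection and a per-pixel classification head. The channel widths, the single skip level, and the nine output classes (eight facial-bone areas plus background is our assumption) are illustrative only; the authors' network is deeper.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3x3 conv + BatchNorm + ReLU layers, the basic U-Net unit."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """A minimal 2D U-Net sketch, not the authors' model."""
    def __init__(self, n_classes=9, base=32):
        super().__init__()
        self.enc1 = conv_block(1, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):        # x: (B, 1, H, W), H and W divisible by 2
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d1)     # per-pixel class logits
```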
Affiliation(s)
- D Morita: Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- S Mazen: Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- S Tsujiko: Department of Plastic and Reconstructive Surgery, Saiseikai Shigaken Hospital, Shiga, Japan
- Y Otake: Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- Y Sato: Division of Information Science, Nara Institute of Science and Technology, Nara, Japan
- T Numajiri: Department of Plastic and Reconstructive Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
26
Afrin H, Larson NB, Fatemi M, Alizad A. Deep Learning in Different Ultrasound Methods for Breast Cancer, from Diagnosis to Prognosis: Current Trends, Challenges, and an Analysis. Cancers (Basel) 2023; 15:3139. [PMID: 37370748 PMCID: PMC10296633 DOI: 10.3390/cancers15123139] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 06/02/2023] [Accepted: 06/08/2023] [Indexed: 06/29/2023] Open
Abstract
Breast cancer is the second-leading cause of mortality among women around the world. Ultrasound (US) is one of the noninvasive imaging modalities used to diagnose breast lesions and monitor the prognosis of cancer patients. It has the highest sensitivity for diagnosing breast masses, but it shows increased false negativity due to its high operator dependency. Underserved areas do not have sufficient US expertise to diagnose breast lesions, resulting in delayed management of breast lesions. Deep learning neural networks may have the potential to facilitate early decision-making by physicians by rapidly yet accurately diagnosing breast lesions and monitoring their prognosis. This article reviews recent research trends on neural networks for breast mass ultrasound, including and beyond diagnosis. We discuss recently conducted original research, analyzing which modes of ultrasound and which models have been used for which purposes, and where they show the best performance. Our analysis reveals that lesion classification showed the highest performance compared with the other purposes. We also found that fewer studies have been performed for prognosis than for diagnosis. Finally, we discuss the limitations and future directions of ongoing research on neural networks for breast ultrasound.
Affiliation(s)
- Humayra Afrin: Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Nicholas B. Larson: Department of Quantitative Health Sciences, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Mostafa Fatemi: Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Azra Alizad: Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA; Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
27
Ng CKC. Diagnostic Performance of Artificial Intelligence-Based Computer-Aided Detection and Diagnosis in Pediatric Radiology: A Systematic Review. Children (Basel) 2023; 10:children10030525. [PMID: 36980083 PMCID: PMC10047006 DOI: 10.3390/children10030525] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 02/13/2023] [Accepted: 03/07/2023] [Indexed: 03/30/2023]
Abstract
Artificial intelligence (AI)-based computer-aided detection and diagnosis (CAD) is an important research area in radiology. However, only two narrative reviews, covering general uses of AI in pediatric radiology and AI-based CAD in pediatric chest imaging, have been published to date. The purpose of this systematic review is to investigate AI-based CAD applications in pediatric radiology, their diagnostic performance, and the methods used to evaluate that performance. A literature search of electronic databases was conducted on 11 January 2023. Twenty-three articles that met the selection criteria were included. This review shows that AI-based CAD can be applied to pediatric brain, respiratory, musculoskeletal, urologic, and cardiac imaging, and especially to pneumonia detection. Most of the studies (93.3%, 14/15; 77.8%, 14/18; 73.3%, 11/15; 80.0%, 8/10; 66.6%, 2/3; 84.2%, 16/19; 80.0%, 8/10) reported model performances of at least 0.83 (area under the receiver operating characteristic curve), 0.84 (sensitivity), 0.80 (specificity), 0.89 (positive predictive value), 0.63 (negative predictive value), 0.87 (accuracy), and 0.82 (F1 score), respectively. However, a range of methodological weaknesses (especially a lack of external model validation) was found in the included studies. In the future, more methodologically robust AI-based CAD studies in pediatric radiology should be conducted to convince clinical centers to adopt CAD and to realize its benefits in a wider context.
Affiliation(s)
- Curtise K C Ng: Curtin Medical School, Curtin University, GPO Box U1987, Perth, WA 6845, Australia; Curtin Health Innovation Research Institute (CHIRI), Faculty of Health Sciences, Curtin University, GPO Box U1987, Perth, WA 6845, Australia
28
Artificial Intelligence in Breast Ultrasound: From Diagnosis to Prognosis-A Rapid Review. Diagnostics (Basel) 2022; 13:diagnostics13010058. [PMID: 36611350 PMCID: PMC9818181 DOI: 10.3390/diagnostics13010058] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2022] [Revised: 12/19/2022] [Accepted: 12/20/2022] [Indexed: 12/28/2022] Open
Abstract
BACKGROUND Ultrasound (US) is a fundamental diagnostic tool in breast imaging. However, US remains an operator-dependent examination. Research into, and application of, artificial intelligence (AI) in breast US is increasing. The aim of this rapid review was to assess the current development of US-based artificial intelligence in the field of breast cancer (BC). METHODS Two investigators with experience in medical research performed literature searching and data extraction on PubMed. The studies included in this rapid review evaluated the role of artificial intelligence concerning BC diagnosis, prognosis, molecular subtypes of BC, axillary lymph node status, and the response to neoadjuvant chemotherapy. The mean values of sensitivity, specificity, and AUC were calculated for the main study categories with a meta-analytical approach. RESULTS A total of 58 main studies, all published after 2017, were included. Only 9/58 studies were prospective (15.5%); 13/58 studies (22.4%) used a machine learning (ML) approach. The vast majority (77.6%) used deep learning (DL) systems. Most studies were conducted for the diagnosis or classification of BC (55.1%). All the included studies showed that AI performs well in BC diagnosis, prognosis, and treatment strategy. CONCLUSIONS US-based AI has great potential and research value in the field of BC diagnosis, treatment, and prognosis. More prospective and multicenter studies are needed to assess the potential impact of AI in breast ultrasound.
29
Applying Deep Learning for Breast Cancer Detection in Radiology. Curr Oncol 2022; 29:8767-8793. [PMID: 36421343 PMCID: PMC9689782 DOI: 10.3390/curroncol29110690] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 11/12/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022] Open
Abstract
Recent advances in deep learning have enhanced medical imaging research. Breast cancer is the most prevalent cancer among women, and many applications have been developed to improve its early detection. The purpose of this review is to examine how various deep learning methods can be applied to breast cancer screening workflows. We summarize deep learning methods, data availability, and the different screening methods for breast cancer, including mammography, thermography, ultrasound, and magnetic resonance imaging. We then explore deep learning in diagnostic breast imaging and survey the relevant literature. In conclusion, we discuss some of the limitations of, and opportunities for, integrating artificial intelligence into breast cancer clinical practice.
30
Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022; 14:5334. [PMID: 36358753 PMCID: PMC9655692 DOI: 10.3390/cancers14215334] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 10/23/2022] [Accepted: 10/25/2022] [Indexed: 12/02/2022] Open
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Thus, early detection is a crucial step in controlling and curing breast cancer and can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed at an early stage of cancer, all of whom survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze images produced by these methods manually, which leads to an increased risk of wrong decisions in cancer detection. Thus, new automatic methods are required to analyze all kinds of breast screening images and assist radiologists in interpreting them. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing patients' chances of survival. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report available datasets for these imaging modalities, which are important for developing AI-based algorithms and training deep learning models. In conclusion, this review aims to provide a comprehensive resource for researchers working in breast cancer imaging analysis.
Affiliation(s)
- Mohammad Madani: Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA; Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Mohammad Mahdi Behzadi: Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA; Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi: Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
31
Gu Y, Xu W, Lin B, An X, Tian J, Ran H, Ren W, Chang C, Yuan J, Kang C, Deng Y, Wang H, Luo B, Guo S, Zhou Q, Xue E, Zhan W, Zhou Q, Li J, Zhou P, Chen M, Gu Y, Chen W, Zhang Y, Li J, Cong L, Zhu L, Wang H, Jiang Y. Deep learning based on ultrasound images assists breast lesion diagnosis in China: a multicenter diagnostic study. Insights Imaging 2022; 13:124. [PMID: 35900608 PMCID: PMC9334487 DOI: 10.1186/s13244-022-01259-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2022] [Accepted: 06/25/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Studies on deep learning (DL)-based models in breast ultrasound (US) remain at an early stage owing to a lack of large datasets for training and independent test sets for verification. We aimed to develop a DL model for differentiating benign from malignant breast lesions on US using a large multicenter dataset and to explore the model's ability to assist radiologists. METHODS A total of 14,043 US images from 5012 women were prospectively collected from 32 hospitals. To develop the DL model, the patients from 30 hospitals were randomly divided into a training cohort (n = 4149) and an internal test cohort (n = 466). The remaining 2 hospitals (n = 397) were used as the external test cohorts (ETC). We compared the model with the prospective Breast Imaging Reporting and Data System assessment and five radiologists. We also explored the model's ability to assist the radiologists using two different methods. RESULTS The model demonstrated excellent diagnostic performance with the ETC, with a high area under the receiver operating characteristic curve (AUC, 0.913), sensitivity (88.84%), specificity (83.77%), and accuracy (86.40%). In the comparison set, the AUC was similar to that of the expert (p = 0.5629) and one experienced radiologist (p = 0.2112) and significantly higher than that of three inexperienced radiologists (p < 0.01). After model assistance, the accuracies and specificities of the radiologists were substantially improved without loss in sensitivities. CONCLUSIONS The DL model yielded satisfactory predictions in distinguishing benign from malignant breast lesions and showed potential value in improving the diagnosis of breast lesions by radiologists.
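The headline metrics in this abstract (AUC, sensitivity, specificity, accuracy) can be reproduced from a model's scores with a few lines of scikit-learn; the 0.5 operating threshold in the sketch below is an assumption, as the abstract does not state the threshold used.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def summarize(y_true, y_prob, threshold=0.5):
    """AUC, sensitivity, specificity, and accuracy for a binary
    benign(0)/malignant(1) classifier from predicted probabilities.

    The 0.5 threshold is an assumed default, not the study's value.
    """
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {"auc": roc_auc_score(y_true, y_prob),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / (tp + tn + fp + fn)}
```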
Affiliation(s)
- Yang Gu: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Wen Xu: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Bin Lin: Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
- Xing An: Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
- Jiawei Tian: Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
- Haitao Ran: Department of Ultrasound, The Second Affiliated Hospital of Chongqing Medical University and Chongqing Key Laboratory of Ultrasound Molecular Imaging, Chongqing, China
- Weidong Ren: Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
- Cai Chang: Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, Shanghai, China
- Jianjun Yuan: Department of Ultrasonography, Henan Provincial People's Hospital, Zhengzhou, China
- Chunsong Kang: Department of Ultrasound, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Taiyuan, China
- Youbin Deng: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College of Huazhong University of Science and Technology, Wuhan, China
- Hui Wang: Department of Ultrasound, China-Japan Union Hospital of Jilin University, Changchun, China
- Baoming Luo: Department of Ultrasound, The Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
- Shenglan Guo: Department of Ultrasonography, First Affiliated Hospital of Guangxi Medical University, Nanning, China
- Qi Zhou: Department of Medical Ultrasound, The Second Affiliated Hospital, School of Medicine, Xi'an Jiaotong University, Xi'an, China
- Ensheng Xue: Department of Ultrasound, Union Hospital of Fujian Medical University, Fujian Institute of Ultrasound Medicine, Fuzhou, China
- Weiwei Zhan: Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University, School of Medicine, Shanghai, China
- Qing Zhou: Department of Ultrasonography, Renmin Hospital of Wuhan University, Wuhan, China
- Jie Li: Department of Ultrasound, Qilu Hospital, Shandong University, Jinan, 250012, China
- Ping Zhou: Department of Ultrasound, The Third Xiangya Hospital of Central South University, Changsha, China
- Man Chen: Department of Ultrasound Medicine, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ying Gu: Department of Ultrasonography, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Wu Chen: Department of Ultrasound, The First Hospital of Shanxi Medical University, Taiyuan, China
- Yuhong Zhang: Department of Ultrasound, The Second Hospital of Dalian Medical University, Dalian, China
- Jianchu Li: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Longfei Cong: Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
- Lei Zhu: Department of Medical Imaging Advanced Research, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Shenzhen, China
- Hongyan Wang: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
- Yuxin Jiang: Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
Collapse
|
32
|
Hayashida T, Odani E, Kikuchi M, Nagayama A, Seki T, Takahashi M, Futatsugi N, Matsumoto A, Murata T, Watanuki R, Yokoe T, Nakashoji A, Maeda H, Onishi T, Asaga S, Hojo T, Jinno H, Sotome K, Matsui A, Suto A, Imoto S, Kitagawa Y. Establishment of a deep-learning system to diagnose BI-RADS4a or higher using breast ultrasound for clinical application. Cancer Sci 2022; 113:3528-3534. [PMID: 35880248 PMCID: PMC9530860 DOI: 10.1111/cas.15511] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 07/16/2022] [Accepted: 07/19/2022] [Indexed: 11/27/2022] Open
Abstract
Although the categorization of ultrasound using the Breast Imaging Reporting and Data System (BI-RADS) has become widespread worldwide, the problem of inter-observer variability remains. To maintain uniformity in diagnostic accuracy, we developed a system in which artificial intelligence (AI) distinguishes whether a static breast ultrasound image represents BI-RADS 3 or lower or BI-RADS 4a or higher, thereby determining the medical management for a patient whose breast ultrasound shows abnormalities. To establish and validate the AI system, a training dataset consisting of 4028 images containing 5014 lesions and a test dataset consisting of 3166 images containing 3656 lesions were collected and annotated. We selected a setting that maximized the area under the curve (AUC) and minimized the difference between sensitivity and specificity by adjusting the internal parameters of the AI system, achieving an AUC, sensitivity, and specificity of 0.95, 91.2%, and 90.7%, respectively. Furthermore, based on 30 images extracted from the test data, the diagnostic accuracy of 20 clinicians and the AI system was compared, and the AI system was found to be significantly superior to the clinicians (McNemar test, p < 0.001). Although deep-learning methods to categorize benign and malignant tumors using breast ultrasound have been extensively reported, our work represents the first attempt to establish an AI system that successfully classifies BI-RADS 3 or lower versus BI-RADS 4a or higher, providing important implications for clinical actions. These results suggest that the AI diagnostic system is sufficiently mature to proceed to the next stage of clinical application.
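The operating-point selection this abstract describes (maximizing AUC while minimizing the gap between sensitivity and specificity) can be illustrated with a short Python sketch; the labels and scores below are hypothetical, and the authors' internal parameter tuning is not reproduced.

# Illustrative sketch: choosing the ROC operating point where sensitivity
# and specificity are closest, as described in the abstract.
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 0])                # hypothetical labels
y_score = np.array([.2, .9, .6, .4, .8, .1, .7, .3])       # hypothetical scores

fpr, tpr, thresholds = roc_curve(y_true, y_score)
balanced = np.argmin(np.abs(tpr - (1 - fpr)))              # sensitivity ~= specificity
print("threshold:", thresholds[balanced], "sens:", tpr[balanced], "spec:", 1 - fpr[balanced])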
Collapse
Affiliation(s)
- Tetsu Hayashida
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
| | - Erina Odani
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
| | - Masayuki Kikuchi
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
| | - Aiko Nagayama
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
| | - Tomoko Seki
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
| | - Maiko Takahashi
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
| | | | - Akiko Matsumoto
- Department of Surgery, Teikyo University School of Medicine, Tokyo, Japan
| | - Takeshi Murata
- Department of Breast Surgery, National Cancer Center Hospital, Tokyo, Japan
| | - Rurina Watanuki
- Department of Breast Surgery, National Cancer Center Hospital East, Chiba, Japan
| | - Takamichi Yokoe
- Department of Breast Surgery, National Cancer Center Hospital East, Chiba, Japan
| | - Ayako Nakashoji
- Department of Breast Surgery, National Hospital Organization Tokyo Medical Center, Tokyo, Japan
| | - Hinako Maeda
- Department of Breast and Thyroid Surgery, Kitasato University Kitasato Institute Hospital, Tokyo, Japan
| | - Tatsuya Onishi
- Department of Breast Surgery, National Cancer Center Hospital East, Chiba, Japan
| | - Sota Asaga
- Department of Breast Surgery, Kyorin University School of Medicine, Tokyo, Japan
| | - Takashi Hojo
- Department of Breast Oncology, Saitama Medical University International Medical Center, Saitama, Japan
| | - Hiromitsu Jinno
- Department of Surgery, Teikyo University School of Medicine, Tokyo, Japan
| | - Keiichi Sotome
- Department of Breast and Thyroid Surgery, Kitasato University Kitasato Institute Hospital, Tokyo, Japan
| | - Akira Matsui
- Department of Breast Surgery, National Hospital Organization Tokyo Medical Center, Tokyo, Japan
| | - Akihiko Suto
- Department of Breast Surgery, National Cancer Center Hospital, Tokyo, Japan
| | - Shigeru Imoto
- Department of Breast Surgery, Kyorin University School of Medicine, Tokyo, Japan
| | - Yuko Kitagawa
- Department of Surgery, Keio University School of Medicine, Tokyo, Japan
| |
Collapse
|
33
|
Hejduk P, Marcon M, Unkelbach J, Ciritsis A, Rossi C, Borkowski K, Boss A. Fully automatic classification of automated breast ultrasound (ABUS) imaging according to BI-RADS using a deep convolutional neural network. Eur Radiol 2022; 32:4868-4878. [PMID: 35147776 PMCID: PMC9213284 DOI: 10.1007/s00330-022-08558-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Revised: 12/14/2021] [Accepted: 12/26/2021] [Indexed: 12/15/2022]
Abstract
PURPOSE The aim of this study was to develop and test a post-processing technique for detection and classification of lesions according to the BI-RADS atlas in automated breast ultrasound (ABUS) based on deep convolutional neural networks (dCNNs). METHODS AND MATERIALS In this retrospective study, 645 ABUS datasets from 113 patients were included; 55 patients had lesions classified as high malignancy probability. Lesions were categorized into BI-RADS 2 (no suspicion of malignancy), BI-RADS 3 (probability of malignancy < 3%), and BI-RADS 4/5 (probability of malignancy > 3%). A deep convolutional neural network was trained after data augmentation with images of lesions and normal breast tissue, and a sliding-window approach for lesion detection was implemented. The algorithm was applied to a test dataset containing 128 images, and performance was compared with the readings of 2 experienced radiologists. RESULTS Calculations performed on single images showed an accuracy of 79.7% and an AUC of 0.91 [95% CI: 0.85-0.96] for categorization according to BI-RADS. Moderate agreement between the dCNN and the ground truth was achieved (κ: 0.57 [95% CI: 0.50-0.64]), which is comparable with human readers. Analysis of the whole dataset improved the categorization accuracy to 90.9% with an AUC of 0.91 [95% CI: 0.77-1.00], while achieving almost perfect agreement with the ground truth (κ: 0.82 [95% CI: 0.69-0.95]), performing on par with human readers. Furthermore, the object localization technique allowed slice-wise detection of lesion position. CONCLUSIONS Our results show that a dCNN can be trained to detect and distinguish lesions in ABUS according to the BI-RADS classification with similar accuracy as experienced radiologists. KEY POINTS • A deep convolutional neural network (dCNN) was trained for classification of ABUS lesions according to the BI-RADS atlas. • A sliding-window approach allows accurate automatic detection and classification of lesions in ABUS examinations.
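A minimal sketch of the sliding-window detection idea named in this abstract, under stated assumptions: `model` stands in for any trained window classifier returning a lesion probability, and the window size and stride are illustrative.

# Score every window of a 2D slice with a window classifier; returns
# (row, col, probability) tuples for downstream thresholding.
import numpy as np

def sliding_window_scores(image, model, win=64, stride=32):
    hits = []
    h, w = image.shape
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            patch = image[r:r + win, c:c + win]
            hits.append((r, c, float(model(patch))))
    return hits

# Usage with a dummy stand-in "model":
scores = sliding_window_scores(np.random.rand(256, 256), lambda p: p.mean())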
Collapse
Affiliation(s)
- Patryk Hejduk
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland.
| | - Magda Marcon
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland
| | - Jan Unkelbach
- Department of Radiation Oncology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland
| | - Alexander Ciritsis
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland
| | - Cristina Rossi
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland
| | - Karol Borkowski
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland
| | - Andreas Boss
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Rämistr. 100, 8091, Zurich, Switzerland
| |
Collapse
|
34
|
Abstract
Machine learning (ML) methods are pervading an increasing number of fields of application because of their capacity to effectively solve a wide variety of challenging problems. The employment of ML techniques in ultrasound imaging applications started several years ago but the scientific interest in this issue has increased exponentially in the last few years. The present work reviews the most recent (2019 onwards) implementations of machine learning techniques for two of the most popular ultrasound imaging fields, medical diagnostics and non-destructive evaluation. The former, which covers the major part of the review, was analyzed by classifying studies according to the human organ investigated and the methodology (e.g., detection, segmentation, and/or classification) adopted, while for the latter, some solutions to the detection/classification of material defects or particular patterns are reported. Finally, the main merits of machine learning that emerged from the study analysis are summarized and discussed.
Collapse
|
35
|
Li Y, Gu H, Wang H, Qin P, Wang J. BUSnet: A Deep Learning Model of Breast Tumor Lesion Detection for Ultrasound Images. Front Oncol 2022; 12:848271. [PMID: 35402269 PMCID: PMC8989926 DOI: 10.3389/fonc.2022.848271] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2022] [Accepted: 02/23/2022] [Indexed: 12/01/2022] Open
Abstract
Ultrasound (US) imaging is a main modality for breast disease screening. Automatically detecting lesions in US images is essential for developing artificial-intelligence-based diagnostic support technologies. However, intrinsic characteristics of ultrasound imaging, such as speckle noise and acoustic shadow, degrade detection accuracy. In this study, we developed a deep learning model called BUSnet to detect breast tumor lesions in US images with high accuracy. We first developed a two-stage method comprising unsupervised region proposal and bounding-box regression algorithms. We then proposed a post-processing method to further enhance detection accuracy. The proposed method was applied to a benchmark dataset, which includes 487 benign samples and 210 malignant samples. The results demonstrated the effectiveness and accuracy of the proposed method.
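The abstract does not specify the post-processing step, so the sketch below shows a generic non-maximum-suppression pass, a common way to prune overlapping candidate boxes after region proposal and bounding-box regression; it is an illustration, not the authors' method.

# Generic non-maximum suppression over (x1, y1, x2, y2) boxes.
import numpy as np

def iou(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def nms(boxes, scores, thr=0.5):
    order = np.argsort(scores)[::-1]           # highest-scoring boxes first
    keep = []
    while len(order):
        i = order[0]
        keep.append(i)
        order = np.array([j for j in order[1:] if iou(boxes[i], boxes[j]) < thr])
    return keep

print(nms([(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)], np.array([.9, .8, .7])))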
Collapse
Affiliation(s)
- Yujie Li
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
| | - Hong Gu
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
| | - Hongyu Wang
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
| | - Pan Qin
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
| | - Jia Wang
- Department of Surgery, The Second Hospital of Dalian Medical University, Dalian, China
| |
Collapse
|
36
|
Ahmed S, Le D, Son T, Adejumo T, Ma G, Yao X. ADC-Net: An Open-Source Deep Learning Network for Automated Dispersion Compensation in Optical Coherence Tomography. Front Med (Lausanne) 2022; 9:864879. [PMID: 35463032 PMCID: PMC9024062 DOI: 10.3389/fmed.2022.864879] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 03/14/2022] [Indexed: 11/23/2022] Open
Abstract
Chromatic dispersion is a common problem that degrades system resolution in optical coherence tomography (OCT). This study developed a deep learning network for automated dispersion compensation (ADC-Net) in OCT. The ADC-Net is based on a modified UNet architecture that employs an encoder-decoder pipeline. The input comprises partially compensated OCT B-scans, each optimized for individual retinal layers; the corresponding output is a fully compensated OCT B-scan with all retinal layers optimized. Two numeric parameters, i.e., the peak signal-to-noise ratio (PSNR) and the structural similarity index metric computed at multiple scales (MS-SSIM), were used for objective assessment of the ADC-Net performance, and optimal values of 29.95 ± 2.52 dB and 0.97 ± 0.014, respectively, were obtained. A comparative analysis of training models with single, three, five, seven, and nine input channels was implemented; the model with five input channels was observed to be optimal for ADC-Net training, achieving robust dispersion compensation in OCT.
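As a brief illustration of the first assessment metric named above, a plain NumPy PSNR implementation follows; MS-SSIM would typically come from a dedicated package (scikit-image provides single-scale structural_similarity), and the arrays below are synthetic.

# PSNR = 20*log10(data_range) - 10*log10(MSE), computed on synthetic images.
import numpy as np

def psnr(reference, test, data_range=1.0):
    mse = np.mean((reference - test) ** 2)
    return 20 * np.log10(data_range) - 10 * np.log10(mse)

a = np.random.rand(64, 64)
b = np.clip(a + np.random.normal(0, 0.01, a.shape), 0, 1)  # lightly degraded copy
print(psnr(a, b))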
Collapse
Affiliation(s)
- Shaiban Ahmed
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, United States
| | - David Le
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, United States
| | - Taeyoon Son
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, United States
| | - Tobiloba Adejumo
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, United States
| | - Guangying Ma
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, United States
| | - Xincheng Yao
- Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL, United States
- Department of Ophthalmology and Visual Science, University of Illinois Chicago, Chicago, IL, United States
| |
Collapse
|
37
|
Lee SE, Han K, Youk JH, Lee JE, Hwang JY, Rho M, Yoon J, Kim EK, Yoon JH. Differing benefits of artificial intelligence-based computer-aided diagnosis (AI-CAD) for breast US according to workflow and experience level. Ultrasonography 2022; 41:718-727. [PMID: 35850498 PMCID: PMC9532201 DOI: 10.14366/usg.22014] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 03/30/2022] [Indexed: 11/10/2022] Open
Abstract
Purpose This study evaluated how artificial intelligence-based computer-assisted diagnosis (AI-CAD) for breast ultrasonography (US) influences diagnostic performance and agreement between radiologists with varying experience levels in different workflows. Methods Images of 492 breast lesions (200 malignant and 292 benign masses) in 472 women taken from April 2017 to June 2018 were included. Six radiologists (three inexperienced [<1 year of experience] and three experienced [10-15 years of experience]) individually reviewed US images with and without the aid of AI-CAD, first sequentially and then simultaneously. Diagnostic performance and interobserver agreement were calculated and compared between radiologists and AI-CAD. Results After implementing AI-CAD, the specificity, positive predictive value (PPV), and accuracy significantly improved, regardless of experience and workflow (all P<0.001, respectively). The overall area under the receiver operating characteristic curve significantly increased in simultaneous reading, but only for inexperienced radiologists. The agreement for Breast Imaging Reporting and Database System (BI-RADS) descriptors generally increased when AI-CAD was used (κ=0.29-0.63 to 0.35-0.73). Inexperienced radiologists tended to concede to AI-CAD results more easily than experienced radiologists, especially in simultaneous reading (P<0.001). The conversion rates for final assessment changes from BI-RADS 2 or 3 to BI-RADS higher than 4a or vice versa were also significantly higher in simultaneous reading than sequential reading (overall, 15.8% and 6.2%, respectively; P<0.001) for both inexperienced and experienced radiologists. Conclusion Using AI-CAD to interpret breast US improved the specificity, PPV, and accuracy of radiologists regardless of experience level. AI-CAD may work better in simultaneous reading to improve diagnostic performance and agreement between radiologists, especially for inexperienced radiologists.
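Interobserver agreement of the kind reported here is usually summarized with Cohen's kappa; the short sketch below, with hypothetical BI-RADS ratings, shows the computation using scikit-learn.

# Cohen's kappa between two readers rating the same cases (hypothetical data).
from sklearn.metrics import cohen_kappa_score

reader_a = [2, 3, 4, 4, 2, 5, 3, 4]   # BI-RADS categories, reader A
reader_b = [2, 3, 4, 3, 2, 5, 4, 4]   # same cases, reader B
print(cohen_kappa_score(reader_a, reader_b))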
Collapse
Affiliation(s)
- Si Eun Lee
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Department of Radiology, Research Institute of Radiological Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Korea
| | - Kyunghwa Han
- Department of Radiology, Research Institute of Radiological Science, Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Korea
| | - Ji Hyun Youk
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
| | - Jee Eun Lee
- Department of Radiology, Ewha Womans University College of Medicine, Seoul, Korea
| | - Ji-Young Hwang
- Department of Radiology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea
| | - Miribi Rho
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
| | - Jiyoung Yoon
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
| | - Eun-Kyung Kim
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Department of Radiology, Research Institute of Radiological Science, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Korea
| | - Jung Hyun Yoon
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Correspondence to: Jung Hyun Yoon, MD, PhD, Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Yonsei University College of Medicine, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea Tel. +82-2-2228-7400 Fax. +82-2-2227-8337 E-mail:
| |
Collapse
|
38
|
Balkenende L, Teuwen J, Mann RM. Application of Deep Learning in Breast Cancer Imaging. Semin Nucl Med 2022; 52:584-596. [PMID: 35339259 DOI: 10.1053/j.semnuclmed.2022.02.003] [Citation(s) in RCA: 36] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 02/15/2022] [Accepted: 02/16/2022] [Indexed: 11/11/2022]
Abstract
This review gives an overview of the current state of deep learning research in breast cancer imaging. Breast imaging plays a major role in detecting breast cancer at an earlier stage, as well as in monitoring and evaluating breast cancer during treatment. The most commonly used modalities for breast imaging are digital mammography, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging. Nuclear medicine imaging techniques are used for the detection and classification of axillary lymph nodes and for distant staging in breast cancer imaging. All of these techniques are currently digitized, enabling the implementation of deep learning (DL), a subset of artificial intelligence, in breast imaging. DL is nowadays embedded in a plethora of different tasks, such as lesion classification and segmentation, image reconstruction and generation, cancer risk prediction, and prediction and assessment of therapy response. Studies show similar or even better performance of DL algorithms compared with radiologists, although it is clear that large trials are needed, especially for ultrasound and magnetic resonance imaging, to determine the exact added value of DL in breast cancer imaging. Studies on DL in nuclear medicine techniques are only sparsely available, and further research is mandatory. Legal and ethical issues need to be considered before the role of DL can expand to its full potential in clinical breast care practice.
Collapse
Affiliation(s)
- Luuk Balkenende
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Radiation Oncology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands
| | - Ritse M Mann
- Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands.
| |
Collapse
|
39
|
Ragab M, Albukhari A, Alyami J, Mansour RF. Ensemble Deep-Learning-Enabled Clinical Decision Support System for Breast Cancer Diagnosis and Classification on Ultrasound Images. BIOLOGY 2022; 11:439. [PMID: 35336813 PMCID: PMC8945718 DOI: 10.3390/biology11030439] [Citation(s) in RCA: 39] [Impact Index Per Article: 19.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 02/25/2022] [Accepted: 03/11/2022] [Indexed: 01/02/2023]
Abstract
Clinical Decision Support Systems (CDSS) provide an efficient way to diagnose the presence of diseases such as breast cancer using ultrasound images (USIs). Globally, breast cancer is one of the major causes of mortality among women. Computer-Aided Diagnosis (CAD) models are widely employed in the detection and classification of tumors in USIs. CAD systems are designed to provide recommendations that help radiologists diagnose breast tumors and, furthermore, assess disease prognosis. The accuracy of the classification process is determined by the quality of the images and the radiologist's experience. Deep Learning (DL) models have been found effective in the classification of breast cancer. In the current study, an Ensemble Deep-Learning-Enabled Clinical Decision Support System for Breast Cancer Diagnosis and Classification (EDLCDS-BCDC) technique was developed using USIs. The proposed EDLCDS-BCDC technique was intended to identify the existence of breast cancer using USIs. In this technique, USIs initially undergo pre-processing through two stages, namely Wiener filtering and contrast enhancement. Furthermore, the Chaotic Krill Herd Algorithm (CKHA) is applied with Kapur's entropy (KE) for the image segmentation process. In addition, an ensemble of three deep learning models, VGG-16, VGG-19, and SqueezeNet, is used for feature extraction. Finally, Cat Swarm Optimization (CSO) with the Multilayer Perceptron (MLP) model is utilized to classify the images based on whether breast cancer exists or not. A wide range of simulations was carried out on benchmark databases, and the extensive results highlight the better outcomes of the proposed EDLCDS-BCDC technique over recent methods.
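Of the pipeline stages listed, Kapur's entropy is the most self-contained to illustrate. The sketch below implements a plain single-threshold version of the criterion, with an exhaustive search standing in for the paper's Chaotic Krill Herd optimization; the image is synthetic.

# Kapur's criterion: pick the threshold maximizing the summed entropies of
# the two resulting intensity classes.
import numpy as np

def kapur_threshold(image, bins=256):
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        p0 = p[:t][p[:t] > 0] / w0
        p1 = p[t:][p[t:] > 0] / w1
        h = -(p0 * np.log(p0)).sum() - (p1 * np.log(p1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

print(kapur_threshold(np.random.randint(0, 256, (128, 128))))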
Collapse
Affiliation(s)
- Mahmoud Ragab
- Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Centre for Artificial Intelligence in Precision Medicines, King Abdulaziz University, Jeddah 21589, Saudi Arabia;
- Mathematics Department, Faculty of Science, Al-Azhar University, Cairo 11884, Egypt
| | - Ashwag Albukhari
- Centre for Artificial Intelligence in Precision Medicines, King Abdulaziz University, Jeddah 21589, Saudi Arabia;
- Biochemistry Department, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Jaber Alyami
- Diagnostic Radiology Department, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia;
- Imaging Unit, King Fahd Medical Research Center, King Abdulaziz University, Jeddah 21589, Saudi Arabia
| | - Romany F. Mansour
- Department of Mathematics, Faculty of Science, New Valley University, El-Kharga 72511, Egypt;
| |
Collapse
|
40
|
Deep learning in image-based breast and cervical cancer detection: a systematic review and meta-analysis. NPJ Digit Med 2022; 5:19. [PMID: 35169217 PMCID: PMC8847584 DOI: 10.1038/s41746-022-00559-z] [Citation(s) in RCA: 34] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Accepted: 12/22/2021] [Indexed: 12/15/2022] Open
Abstract
Accurate early detection of breast and cervical cancer is vital for treatment success. Here, we conduct a meta-analysis to assess the diagnostic performance of deep learning (DL) algorithms for early breast and cervical cancer identification. Four subgroups are also investigated: cancer type (breast or cervical), validation type (internal or external), imaging modalities (mammography, ultrasound, cytology, or colposcopy), and DL algorithms versus clinicians. Thirty-five studies are deemed eligible for systematic review, 20 of which are meta-analyzed, with a pooled sensitivity of 88% (95% CI 85–90%), specificity of 84% (79–87%), and AUC of 0.92 (0.90–0.94). Acceptable diagnostic performance with analogous DL algorithms was highlighted across all subgroups. Therefore, DL algorithms could be useful for detecting breast and cervical cancer using medical imaging, having equivalent performance to human clinicians. However, this tentative assertion is based on studies with relatively poor designs and reporting, which likely caused bias and overestimated algorithm performance. Evidence-based, standardized guidelines around study methods and reporting are required to improve the quality of DL research.
Collapse
|
41
|
Misra S, Jeon S, Managuli R, Lee S, Kim G, Yoon C, Lee S, Barr RG, Kim C. Bi-Modal Transfer Learning for Classifying Breast Cancers via Combined B-Mode and Ultrasound Strain Imaging. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:222-232. [PMID: 34633928 DOI: 10.1109/tuffc.2021.3119251] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Although accurate detection of breast cancer still poses significant challenges, deep learning (DL) can support more accurate image interpretation. In this study, we develop a highly robust DL model based on combined B-mode ultrasound (B-mode) and strain elastography ultrasound (SE) images for classifying benign and malignant breast tumors. This study retrospectively included 85 patients, including 42 with benign lesions and 43 with malignancies, all confirmed by biopsy. Two deep neural network models, AlexNet and ResNet, were separately trained on a combined 205 B-mode and 205 SE images (80% for training and 20% for validation) from 67 patients with benign and malignant lesions. These two models were then configured to work as an ensemble, using both image-wise and layer-wise combination, and tested on a dataset of 56 images from the remaining 18 patients. The ensemble model captures the diverse features present in the B-mode and SE images and also combines semantic features from the AlexNet and ResNet models to classify benign versus malignant tumors. The experimental results demonstrate that the accuracy of the proposed ensemble model is 90%, which is better than that of the individual models and of models trained using B-mode or SE images alone. Moreover, some patients who were misclassified by the traditional methods were correctly classified by the proposed ensemble method. The proposed ensemble DL model will enable radiologists to achieve superior detection efficiency owing to enhanced classification accuracy for breast cancers in ultrasound (US) images.
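A hedged sketch of an image-wise ensemble in the spirit of this abstract follows: AlexNet and a ResNet (ResNet-18 is used purely for illustration) with binary heads, averaging softmax probabilities. The paper's layer-wise fusion is not reproduced here.

# Image-wise ensemble: average class probabilities from two networks.
import torch
import torch.nn as nn
from torchvision import models

alexnet = models.alexnet(weights=None)
alexnet.classifier[6] = nn.Linear(4096, 2)             # binary benign/malignant head
resnet = models.resnet18(weights=None)
resnet.fc = nn.Linear(resnet.fc.in_features, 2)

x = torch.randn(1, 3, 224, 224)                        # one B-mode/SE image tensor
with torch.no_grad():
    p = (torch.softmax(alexnet(x), 1) + torch.softmax(resnet(x), 1)) / 2
print(p)                                               # averaged class probabilities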
Collapse
|
42
|
Chowdary J, Yogarajah P, Chaurasia P, Guruviah V. A Multi-Task Learning Framework for Automated Segmentation and Classification of Breast Tumors From Ultrasound Images. ULTRASONIC IMAGING 2022; 44:3-12. [PMID: 35128997 PMCID: PMC8902030 DOI: 10.1177/01617346221075769] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
Breast cancer is one of the most fatal diseases, leading to the deaths of many women across the world, but early diagnosis of breast cancer can help to reduce the mortality rate. An efficient multi-task learning approach is therefore proposed in this work for the automatic segmentation and classification of breast tumors from ultrasound images. The proposed learning approach consists of encoder, decoder, and bridge blocks for segmentation and a dense branch for the classification of tumors. For efficient classification, multi-scale features from different levels of the network are used. Experimental results show that the proposed approach enhances segmentation accuracy and recall by 1.08% and 4.13%, and classification accuracy and recall by 1.16% and 2.34%, respectively, compared with the methods available in the literature.
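The shared encoder with a segmentation decoder and a dense classification branch can be sketched in a few lines of PyTorch; the toy layer sizes below are assumptions, not the authors' architecture.

# Toy multi-task network: one shared encoder, two task heads.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(               # segmentation head
            nn.Upsample(scale_factor=2),
            nn.Conv2d(32, 1, 3, padding=1))
        self.classifier = nn.Sequential(            # tumor class head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

mask, logits = MultiTaskNet()(torch.randn(1, 1, 128, 128))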
Collapse
Affiliation(s)
| | - Pratheepan Yogarajah
- University of Ulster, Londonderry, UK
- Pratheepan Yogarajah, University of Ulster, Northland Road, Magee Campus, Londonderry, Northern Ireland BT48 7JL, UK.
| | | | | |
Collapse
|
43
|
Kim J, Kim HJ, Kim C, Lee JH, Kim KW, Park YM, Kim HW, Ki SY, Kim YM, Kim WH. Weakly-supervised deep learning for ultrasound diagnosis of breast cancer. Sci Rep 2021; 11:24382. [PMID: 34934144 PMCID: PMC8692405 DOI: 10.1038/s41598-021-03806-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2021] [Accepted: 11/30/2021] [Indexed: 11/21/2022] Open
Abstract
Conventional deep learning (DL) algorithms require full supervision with annotation of the region of interest (ROI), which is laborious and often biased. We aimed to develop a weakly-supervised DL algorithm that diagnoses breast cancer at ultrasound without image annotation. Weakly-supervised DL algorithms were implemented with three networks (VGG16, ResNet34, and GoogLeNet) and trained using 1000 unannotated US images (500 benign and 500 malignant masses). Two sets of 200 images (100 benign and 100 malignant masses) were used as internal and external validation sets. For comparison with fully-supervised algorithms, ROI annotation was performed manually and automatically. Diagnostic performance was calculated as the area under the receiver operating characteristic curve (AUC). Using the class activation map, we determined how accurately the weakly-supervised DL algorithms localized the breast masses. For the internal validation set, the weakly-supervised DL algorithms achieved excellent diagnostic performance, with AUC values of 0.92–0.96, which were not statistically different (all Ps > 0.05) from those of fully-supervised DL algorithms with either manual or automated ROI annotation (AUC, 0.92–0.96). For the external validation set, the weakly-supervised DL algorithms achieved AUC values of 0.86–0.90, which were not statistically different from (Ps > 0.05), or were higher than (P = 0.04, VGG16 with automated ROI annotation), those of fully-supervised DL algorithms (AUC, 0.84–0.92). In the internal and external validation sets, the weakly-supervised algorithms could localize 100% of malignant masses, except for ResNet34 (98%). The weakly-supervised DL algorithms developed in the present study were feasible for US diagnosis of breast cancer with well-performing localization and differential diagnosis.
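The class-activation-map localization this abstract relies on is the classic construction: feature maps before global average pooling are weighted by the classifier weights of the predicted class. A toy stand-in network makes the computation concrete; the architecture and data are illustrative.

# Class activation map (CAM) from GAP features and classifier weights.
import torch
import torch.nn as nn

features = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
head = nn.Linear(8, 2)                           # classifier after global average pooling

x = torch.randn(1, 1, 64, 64)
with torch.no_grad():
    fmap = features(x)                           # (1, 8, 64, 64) feature maps
    logits = head(fmap.mean(dim=(2, 3)))         # GAP then linear
    cls = logits.argmax(1).item()
    cam = (head.weight[cls][:, None, None] * fmap[0]).sum(0)   # (64, 64) heatmap
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-9)   # normalize to [0, 1]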
Collapse
Affiliation(s)
- Jaeil Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Republic of Korea
| | - Hye Jung Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Republic of Korea
| | - Chanho Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Republic of Korea
| | - Jin Hwa Lee
- Department of Radiology, Dong-A University College of Medicine, Busan, Republic of Korea
| | - Keum Won Kim
- Departments of Radiology, School of Medicine, Konyang University, Konyang Univeristy Hospital, Daejeon, Republic of Korea
| | - Young Mi Park
- Department of Radiology, School of Medicine, Inje University, Busan Paik Hospital, Busan, Republic of Korea
| | - Hye Won Kim
- Department of Radiology, Wonkwang University Hospital, Wonkwang University School of Medicine, Iksan, Republic of Korea
| | - So Yeon Ki
- Department of Radiology, School of Medicine, Chonnam National University, Chonnam National University Hwasun Hospital, Hwasun, Republic of Korea
| | - You Me Kim
- Department of Radiology, School of Medicine, Dankook University, Dankook University Hospital, Cheonan, Republic of Korea
| | - Won Hwa Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Republic of Korea.
| |
Collapse
|
44
|
Saba T, Abunadi I, Sadad T, Khan AR, Bahaj SA. Optimizing the transfer-learning with pretrained deep convolutional neural networks for first stage breast tumor diagnosis using breast ultrasound visual images. Microsc Res Tech 2021; 85:1444-1453. [PMID: 34908213 DOI: 10.1002/jemt.24008] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 09/09/2021] [Accepted: 10/26/2021] [Indexed: 11/10/2022]
Abstract
Females account for approximately 50% of the total population worldwide, and many of them develop breast cancer. Computer-aided diagnosis frameworks could reduce the number of needless biopsies and the workload of radiologists. This research aims to detect benign and malignant tumors automatically using breast ultrasound (BUS) images. Accordingly, two pretrained deep convolutional neural network (CNN) models, AlexNet and DenseNet201, were employed for transfer learning using BUS images. A total of 697 BUS images containing benign and malignant tumors were preprocessed, and classification was performed using the transfer-learning-based CNN models. A classification accuracy of 92.8% on the benign/malignant task was achieved using the DenseNet201 model. The results were compared with the state of the art on a benchmark dataset, and the proposed model was found to outperform existing approaches in the accuracy of first-stage breast tumor diagnosis. Finally, the proposed model could help radiologists diagnose benign and malignant tumors swiftly by screening suspected patients.
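A hedged sketch of the DenseNet201 transfer-learning setup described: the pretrained feature extractor is frozen and the classifier is replaced with a two-class head. Fine-tuning details (which layers are frozen, the optimizer) are assumptions, not taken from the paper.

# Transfer learning: freeze pretrained DenseNet201 features, retrain the head.
import torch.nn as nn
from torchvision import models

model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False                               # freeze pretrained features
model.classifier = nn.Linear(model.classifier.in_features, 2)  # benign/malignant head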
Collapse
Affiliation(s)
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
| | - Ibrahim Abunadi
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
| | - Tariq Sadad
- Department of Computer Science and Software Engineering, International Islamic University, Islamabad, 44000, Pakistan
| | - Amjad Rehman Khan
- Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
| | - Saeed Ali Bahaj
- MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj, 11942, Saudi Arabia
| |
Collapse
|
45
|
Shen Y, Shamout FE, Oliver JR, Witowski J, Kannan K, Park J, Wu N, Huddleston C, Wolfson S, Millet A, Ehrenpreis R, Awal D, Tyma C, Samreen N, Gao Y, Chhor C, Gandhi S, Lee C, Kumari-Subaiya S, Leonard C, Mohammed R, Moczulski C, Altabet J, Babb J, Lewin A, Reig B, Moy L, Heacock L, Geras KJ. Artificial intelligence system reduces false-positive findings in the interpretation of breast ultrasound exams. Nat Commun 2021; 12:5645. [PMID: 34561440 PMCID: PMC8463596 DOI: 10.1038/s41467-021-26023-2] [Citation(s) in RCA: 86] [Impact Index Per Article: 28.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Accepted: 09/14/2021] [Indexed: 02/08/2023] Open
Abstract
Though consistently shown to detect mammographically occult cancers, breast ultrasound has been noted to have high false-positive rates. In this work, we present an AI system that achieves radiologist-level accuracy in identifying breast cancer in ultrasound images. Developed on 288,767 exams, consisting of 5,442,907 B-mode and Color Doppler images, the AI achieves an area under the receiver operating characteristic curve (AUROC) of 0.976 on a test set consisting of 44,755 exams. In a retrospective reader study, the AI achieves a higher AUROC than the average of ten board-certified breast radiologists (AUROC: 0.962 AI, 0.924 ± 0.02 radiologists). With the help of the AI, radiologists decrease their false positive rates by 37.3% and reduce requested biopsies by 27.8%, while maintaining the same level of sensitivity. This highlights the potential of AI in improving the accuracy, consistency, and efficiency of breast ultrasound diagnosis.
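The headline claim, fewer false positives at unchanged sensitivity, corresponds to comparing false-positive rates at a matched operating point; the sketch below (all numbers hypothetical) picks the AI threshold that first reaches a reference sensitivity.

# Compare false-positive rates at a matched sensitivity on the ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

y = np.array([0, 1, 0, 1, 1, 0, 0, 1, 0, 1])
ai_scores = np.array([.2, .8, .3, .9, .7, .4, .1, .85, .35, .6])

fpr, tpr, thr = roc_curve(y, ai_scores)
target_sens = 0.9                                   # reference reader sensitivity
i = np.argmax(tpr >= target_sens)                   # first point reaching it
print("AI FPR at matched sensitivity:", fpr[i], "threshold:", thr[i])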
Collapse
Affiliation(s)
- Yiqiu Shen
- Center for Data Science, New York University, New York, NY, USA
| | - Farah E. Shamout
- Engineering Division, NYU Abu Dhabi, Abu Dhabi, UAE
| | - Jamie R. Oliver
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Jan Witowski
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Kawshik Kannan
- Department of Computer Science, Courant Institute, New York University, New York, NY, USA
| | - Jungkyu Park
- Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, USA
| | - Nan Wu
- Center for Data Science, New York University, New York, NY, USA
| | - Connor Huddleston
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Stacey Wolfson
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Alexandra Millet
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Robin Ehrenpreis
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Divya Awal
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Cathy Tyma
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Naziya Samreen
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Yiming Gao
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Chloe Chhor
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Stacey Gandhi
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Cindy Lee
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Sheila Kumari-Subaiya
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Cindy Leonard
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Reyhan Mohammed
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Christopher Moczulski
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Jaime Altabet
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - James Babb
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Alana Lewin
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Beatriu Reig
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Linda Moy
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA; Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, USA
| | - Laura Heacock
- Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA
| | - Krzysztof J. Geras
- Center for Data Science, New York University, New York, NY, USA; Department of Radiology, NYU Grossman School of Medicine, New York, NY, USA; Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, USA
Collapse
|
46
|
Breast mass classification with transfer learning based on scaling of deep representations. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102828] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
47
|
Le D, Son T, Yao X. Machine learning in optical coherence tomography angiography. Exp Biol Med (Maywood) 2021; 246:2170-2183. [PMID: 34279136 DOI: 10.1177/15353702211026581] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022] Open
Abstract
Optical coherence tomography angiography (OCTA) offers a noninvasive label-free solution for imaging retinal vasculatures at the capillary level resolution. In principle, improved resolution implies a better chance to reveal subtle microvascular distortions associated with eye diseases that are asymptomatic in early stages. However, massive screening requires experienced clinicians to manually examine retinal images, which may result in human error and hinder objective screening. Recently, quantitative OCTA features have been developed to standardize and document retinal vascular changes. The feasibility of using quantitative OCTA features for machine learning classification of different retinopathies has been demonstrated. Deep learning-based applications have also been explored for automatic OCTA image analysis and disease classification. In this article, we summarize recent developments of quantitative OCTA features, machine learning image analysis, and classification.
Collapse
Affiliation(s)
- David Le
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
| | - Taeyoon Son
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA
| | - Xincheng Yao
- Department of Bioengineering, University of Illinois at Chicago, Chicago, IL 60607, USA; Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, IL 60612, USA
Collapse
|
48
|
Komatsu M, Sakai A, Dozen A, Shozu K, Yasutomi S, Machino H, Asada K, Kaneko S, Hamamoto R. Towards Clinical Application of Artificial Intelligence in Ultrasound Imaging. Biomedicines 2021; 9:720. [PMID: 34201827 PMCID: PMC8301304 DOI: 10.3390/biomedicines9070720] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 06/13/2021] [Accepted: 06/18/2021] [Indexed: 12/12/2022] Open
Abstract
Artificial intelligence (AI) is being increasingly adopted in medical research and applications. Medical AI devices have continuously been approved by the Food and Drug Administration in the United States and the responsible institutions of other countries. Ultrasound (US) imaging is commonly used in an extensive range of medical fields. However, AI-based US imaging analysis and its clinical implementation have not progressed steadily compared to other medical imaging modalities. The characteristic issues of US imaging owing to its manual operation and acoustic shadows cause difficulties in image quality control. In this review, we would like to introduce the global trends of medical AI research in US imaging from both clinical and basic perspectives. We also discuss US image preprocessing, ingenious algorithms that are suitable for US imaging analysis, AI explainability for obtaining informed consent, the approval process of medical AI devices, and future perspectives towards the clinical application of AI-based US diagnostic support technologies.
Collapse
Affiliation(s)
- Masaaki Komatsu
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
| | - Akira Sakai
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan; (A.S.); (S.Y.)
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
| | - Ai Dozen
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
| | - Kanto Shozu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
| | - Suguru Yasutomi
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan; (A.S.); (S.Y.)
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
| | - Hidenori Machino
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
| | - Ken Asada
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
| | - Syuzo Kaneko
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
| | - Ryuji Hamamoto
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
| |
Collapse
|
49
|
Qian X, Pei J, Zheng H, Xie X, Yan L, Zhang H, Han C, Gao X, Zhang H, Zheng W, Sun Q, Lu L, Shung KK. Prospective assessment of breast cancer risk from multimodal multiview ultrasound images via clinically applicable deep learning. Nat Biomed Eng 2021; 5:522-532. [PMID: 33875840 DOI: 10.1038/s41551-021-00711-2] [Citation(s) in RCA: 86] [Impact Index Per Article: 28.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2019] [Accepted: 03/08/2021] [Indexed: 02/02/2023]
Abstract
The clinical application of breast ultrasound for the assessment of cancer risk and of deep learning for the classification of breast-ultrasound images has been hindered by inter-grader variability and high false positive rates and by deep-learning models that do not follow Breast Imaging Reporting and Data System (BI-RADS) standards, lack explainability features and have not been tested prospectively. Here, we show that an explainable deep-learning system trained on 10,815 multimodal breast-ultrasound images of 721 biopsy-confirmed lesions from 634 patients across two hospitals and prospectively tested on 912 additional images of 152 lesions from 141 patients predicts BI-RADS scores for breast cancer as accurately as experienced radiologists, with areas under the receiver operating curve of 0.922 (95% confidence interval (CI) = 0.868-0.959) for bimodal images and 0.955 (95% CI = 0.909-0.982) for multimodal images. Multimodal multiview breast-ultrasound images augmented with heatmaps for malignancy risk predicted via deep learning may facilitate the adoption of ultrasound imaging in screening mammography workflows.
Collapse
Affiliation(s)
- Xuejun Qian
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA; Keck School of Medicine, University of Southern California, Los Angeles, CA, USA
| | - Jing Pei
- Department of Breast Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China; Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Hui Zheng
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Xinxin Xie
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Lin Yan
- School of Computer Science and Technology, Xidian University, Xi'an, China
| | - Hao Zhang
- Department of Neurosurgery, University Hospital Heidelberg, Heidelberg, Germany
| | - Chunguang Han
- Department of Breast Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China; Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Xiang Gao
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
| | - Hanqi Zhang
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Weiwei Zheng
- Department of Ultrasound, Xuancheng People's Hospital, Xuancheng, China
| | - Qiang Sun
- Department of Breast Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China; Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
| | - Lu Lu
- Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
| | - K Kirk Shung
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
| |
Collapse
|
50
|
A new CNN architecture for efficient classification of ultrasound breast tumor images with activation map clustering based prediction validation. Med Biol Eng Comput 2021; 59:957-968. [PMID: 33821451 DOI: 10.1007/s11517-021-02357-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Accepted: 03/24/2021] [Indexed: 12/17/2022]
Abstract
Effective ultrasound (US) analysis for preliminary breast tumor diagnosis is constrained by the presence of complex echogenic patterns. Implementing pretrained convolutional neural network (CNN) models, which mostly focus on natural images, and using transfer learning seldom give good results in the medical domain. In this work, a CNN architecture, StepNet, with step-wise incremental convolution layers for each downsampled block, was developed for the classification of breast tumors as benign or malignant. To increase noise robustness, and as an improvement over existing methodologies, neutrosophic preprocessing was performed, and the enhanced images were appended to the original images during training and data augmentation. The activation maps of the final layers are clustered using fuzzy c-means clustering, which serves as a validation method for the predictions of StepNet. Neutrosophic preprocessing alone increased the validation accuracy from 0.84 to 0.93, while neutrosophic preprocessing together with augmentation increased the accuracy to 0.98. StepNet requires comparatively less training and validation time than other state-of-the-art architectures and methods and shows an increase in prediction accuracy even for challenging isoechoic and hypoechoic tumors.
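Fuzzy c-means clustering of flattened final-layer activation maps, as used here for prediction validation, can be written compactly in NumPy; the implementation below is a generic FCM (fuzzification exponent m = 2), with synthetic activations and an illustrative cluster count.

# Generic fuzzy c-means: alternating center and membership updates.
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                 # random fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = d ** (-2.0 / (m - 1.0))                   # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

acts = np.random.rand(100, 16)                        # 100 flattened activation maps
centers, memberships = fuzzy_cmeans(acts)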
Collapse
|