1
Yu ZH, Hong YT, Chou CP. Enhancing Breast Cancer Diagnosis: A Nomogram Model Integrating AI Ultrasound and Clinical Factors. Ultrasound Med Biol 2024:S0301-5629(24)00217-5. [PMID: 38897841] [DOI: 10.1016/j.ultrasmedbio.2024.05.012]
Abstract
PURPOSE To develop a novel nomogram incorporating artificial intelligence (AI) and clinical features for enhanced ultrasound prediction of benign and malignant breast masses. MATERIALS AND METHODS This study analyzed 340 breast masses identified through ultrasound in 308 patients. The masses were divided into training (n = 260) and validation (n = 80) groups. The AI-based analysis employed the Samsung Ultrasound AI system (S-detect). Univariate and multivariate analyses were conducted to construct nomograms using logistic regression. The AI-Nomogram was based solely on AI results, while the ClinAI-Nomogram incorporated additional clinical factors. Both nomograms underwent internal validation with 1000 bootstrap resamples and external validation using the independent validation group. Performance was evaluated by analyzing the area under the receiver operating characteristic (ROC) curve (AUC) and calibration curves. RESULTS The ClinAI-Nomogram, which incorporates patient age, AI-based mass size, and AI-based diagnosis, outperformed the AI-Nomogram in differentiating benign from malignant breast masses. The ClinAI-Nomogram surpassed the AI-Nomogram in predicting malignancy with significantly higher AUC scores in both the training (0.873, 95% CI: 0.830-0.917 vs. 0.792, 95% CI: 0.748-0.836; p = 0.016) and validation phases (0.847, 95% CI: 0.763-0.932 vs. 0.770, 95% CI: 0.709-0.833; p < 0.001). Calibration curves further revealed excellent agreement between the ClinAI-Nomogram's predicted probabilities and the observed risks of malignancy. CONCLUSION The ClinAI-Nomogram, combining AI results with clinical data, significantly enhanced the differentiation of benign and malignant breast masses in clinical AI-facilitated ultrasound examinations.
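As an illustration of the modelling workflow this abstract describes (logistic regression over clinical and AI-derived predictors, bootstrap internal validation, AUC reporting), the following is a minimal sketch using synthetic data. The predictor names mirror the abstract, but the data, the simplified bootstrap, and all other details are illustrative assumptions rather than the authors' implementation.

```python
# Sketch: logistic-regression "nomogram" with bootstrap validation and AUC.
# Feature names are illustrative; the study used age, AI-measured mass size,
# and the AI (S-detect) diagnosis as predictors of malignancy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 260                                    # size of the training group in the paper
X = np.column_stack([
    rng.normal(55, 12, n),                 # age (years)
    rng.lognormal(1.0, 0.5, n),            # AI-based mass size (cm)
    rng.integers(0, 2, n),                 # AI-based diagnosis (0 = benign, 1 = malignant)
])
y = rng.integers(0, 2, n)                  # pathology label (synthetic here)

model = LogisticRegression().fit(X, y)
print("apparent AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))

# Simplified internal validation with 1000 bootstrap resamples of the training set.
boot_aucs = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    m = LogisticRegression().fit(X[idx], y[idx])
    boot_aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
print("bootstrap AUC: %.3f (95%% CI %.3f-%.3f)" % (
    np.mean(boot_aucs), np.percentile(boot_aucs, 2.5), np.percentile(boot_aucs, 97.5)))
```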
Affiliation(s)
- Zi-Han Yu: Department of Radiology, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan; Department of Radiology, Jiannren Hospital, Kaohsiung, Taiwan
- Yu-Ting Hong: Department of Radiology, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan
- Chen-Pin Chou: Department of Radiology, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan; Department of Medical Laboratory Science and Biotechnology, Fooyin University, Kaohsiung, Taiwan; Department of Pharmacy, College of Pharmacy, Tajen University, Pingtung, Taiwan
2
Yan L, Liang Z, Zhang H, Zhang G, Zheng W, Han C, Yu D, Zhang H, Xie X, Liu C, Zhang W, Zheng H, Pei J, Shen D, Qian X. A domain knowledge-based interpretable deep learning system for improving clinical breast ultrasound diagnosis. Commun Med 2024; 4:90. [PMID: 38760506] [PMCID: PMC11101659] [DOI: 10.1038/s43856-024-00518-7]
Abstract
BACKGROUND Though deep learning has consistently demonstrated advantages in the automatic interpretation of breast ultrasound images, its black-box nature hinders potential interactions with radiologists, posing obstacles for clinical deployment. METHODS We proposed a domain knowledge-based interpretable deep learning system for improving breast cancer risk prediction via paired multimodal ultrasound images. The deep learning system was developed on 4320 multimodal breast ultrasound images of 1440 biopsy-confirmed lesions from 1348 prospectively enrolled patients across two hospitals between August 2019 and December 2022. The lesions were allocated to training (70%), validation (10%), and test (20%) cohorts based on case recruitment date. RESULTS Here, we show that the interpretable deep learning system can predict breast cancer risk as accurately as experienced radiologists, with an area under the receiver operating characteristic curve of 0.902 (95% confidence interval = 0.882-0.921), sensitivity of 75.2%, and specificity of 91.8% on the test cohort. With the aid of the deep learning system, particularly its inherent explainable features, junior radiologists tend to achieve better clinical outcomes, while senior radiologists experience increased confidence levels. Multimodal ultrasound images augmented with domain knowledge-based reasoning cues enable effective human-machine collaboration at a high level of prediction performance. CONCLUSIONS Such a clinically applicable deep learning system may be incorporated into future breast cancer screening and support assisted or second-read workflows.
Affiliation(s)
- Lin Yan: School of Mathematics, Xi'an University of Finance and Economics, Xi'an, China
- Zhiying Liang: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Hao Zhang: Department of Neurosurgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Gaosong Zhang: Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Weiwei Zheng: Department of Ultrasound, Xuancheng People's Hospital, Xuancheng, China
- Chunguang Han: Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Dongsheng Yu: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Hanqi Zhang: Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xinxin Xie: Department of Ultrasound, Peking University Third Hospital, Beijing, China
- Chang Liu: Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China; Department of Breast Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Wenxin Zhang: Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Hui Zheng: Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Jing Pei: Department of General Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China; Department of Breast Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
- Xuejun Qian: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
3
Chen J, Huang Z, Jiang Y, Wu H, Tian H, Cui C, Shi S, Tang S, Xu J, Xu D, Dong F. Diagnostic Performance of Deep Learning in Video-Based Ultrasonography for Breast Cancer: A Retrospective Multicentre Study. Ultrasound Med Biol 2024; 50:722-728. [PMID: 38369431] [DOI: 10.1016/j.ultrasmedbio.2024.01.012]
Abstract
OBJECTIVE Although ultrasound is a common tool for breast cancer screening, its accuracy is often operator-dependent. In this study, we proposed a new automated deep-learning framework that exploits video-based ultrasound data for breast cancer screening. METHODS Our framework incorporates DenseNet121, MobileNet, and Xception as backbones for both video- and image-based models. We used data from 3907 patients to train and evaluate the models, which were tested using video- and image-based methods, as well as reader studies with human experts. RESULTS This study evaluated 3907 female patients aged 22 to 86 years. The MobileNet video model achieved an AUROC of 0.961 in prospective data testing, surpassing the DenseNet121 video model. In real-world data testing, it demonstrated an accuracy of 92.59%, outperforming both the DenseNet121 and Xception video models and exceeding the 76.00%-85.60% accuracy range of human experts. Additionally, the MobileNet video model exceeded the performance of image models and other video models across all evaluation metrics, including accuracy, sensitivity, specificity, F1 score, and AUC. Its exceptional performance, particularly suitable for resource-limited clinical settings, demonstrates its potential for clinical application in breast cancer screening. CONCLUSIONS The video models reached a higher level of performance than the image-based models. We have developed a video-based artificial intelligence framework that may aid breast cancer diagnosis and alleviate the shortage of experienced experts.
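One common way to build a video-based classifier from an image backbone such as MobileNet, as the abstract's video models suggest, is to encode each frame and pool the frame features over time. The sketch below illustrates only that general idea; the paper's actual architecture, pre-processing, and training details are not specified here, so everything in the code is an assumption.

```python
# Sketch: turning an image backbone (MobileNetV2) into a video classifier by
# encoding each frame and average-pooling the frame features over time.
# This is one plausible design, not necessarily the architecture used in the paper.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class VideoMobileNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.backbone = mobilenet_v2(weights=None).features   # frame-level encoder
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(1280, num_classes)               # MobileNetV2 feature width

    def forward(self, video):
        # video: (batch, frames, channels, height, width)
        b, t, c, h, w = video.shape
        frames = video.reshape(b * t, c, h, w)
        feats = self.pool(self.backbone(frames)).flatten(1)    # (b*t, 1280)
        feats = feats.reshape(b, t, -1).mean(dim=1)             # temporal average pooling
        return self.head(feats)

clip = torch.randn(2, 16, 3, 224, 224)                          # 2 clips of 16 frames
logits = VideoMobileNet()(clip)
print(logits.shape)                                              # torch.Size([2, 2])
```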
Affiliation(s)
- Jing Chen: Ultrasound Department, The Second Clinical Medical College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, Guangdong, China
- Yitao Jiang: Research and development department, Illuminate, LLC, Shenzhen, Guangdong, China
- Huaiyu Wu: Ultrasound Department, The Second Clinical Medical College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, Guangdong, China
- Hongtian Tian: Ultrasound Department, The Second Clinical Medical College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, Guangdong, China
- Chen Cui: Research and development department, Illuminate, LLC, Shenzhen, Guangdong, China
- Siyuan Shi: Research and development department, Illuminate, LLC, Shenzhen, Guangdong, China
- Jinfeng Xu: Ultrasound Department, The Second Clinical Medical College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, Guangdong, China
- Dong Xu: Institute of Basic Medicine and Cancer (IBMC), The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Chinese Academy of Sciences, Hangzhou, Zhejiang, China
- Fajin Dong: Ultrasound Department, The Second Clinical Medical College, Jinan University, The First Affiliated Hospital, Southern University of Science and Technology, Shenzhen People's Hospital, Shenzhen, Guangdong, China; Jinan University, Guangzhou, Guangdong, China
4
He Q, Yang Q, Su H, Wang Y. Multi-task learning for segmentation and classification of breast tumors from ultrasound images. Comput Biol Med 2024; 173:108319. [PMID: 38513394] [DOI: 10.1016/j.compbiomed.2024.108319]
Abstract
Segmentation and classification of breast tumors are critical components of breast ultrasound (BUS) computer-aided diagnosis (CAD), which significantly improves the diagnostic accuracy of breast cancer. However, the characteristics of tumor regions in BUS images, such as non-uniform intensity distributions, ambiguous or missing boundaries, and varying tumor shapes and sizes, pose significant challenges to automated segmentation and classification solutions. Many previous studies have proposed multi-task learning methods to jointly tackle tumor segmentation and classification by sharing the features extracted by the encoder. Unfortunately, this often introduces redundant or misleading information, which hinders effective feature exploitation and adversely affects performance. To address this issue, we present ACSNet, a novel multi-task learning network designed to optimize tumor segmentation and classification in BUS images. The segmentation network incorporates a novel gate unit to allow optimal transfer of valuable contextual information from the encoder to the decoder. In addition, we develop the Deformable Spatial Attention Module (DSAModule) to improve segmentation accuracy by overcoming the limitations of conventional convolution in dealing with morphological variations of tumors. In the classification branch, multi-scale feature extraction and channel attention mechanisms are integrated to discriminate between benign and malignant breast tumors. Experiments on two publicly available BUS datasets demonstrate that ACSNet not only outperforms mainstream multi-task learning methods for both breast tumor segmentation and classification tasks, but also achieves state-of-the-art results for BUS tumor segmentation. Code and models are available at https://github.com/qqhe-frank/BUS-segmentation-and-classification.git.
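A minimal sketch of the shared-encoder multi-task setup the abstract describes, with one encoder feeding both a segmentation decoder and a classification head, is shown below. The paper's gate unit, DSAModule, and attention mechanisms are not reproduced; the layer sizes and names are illustrative assumptions.

```python
# Sketch: a minimal multi-task network sharing one encoder between a
# segmentation decoder and a classification head, as in the shared-feature
# setup the abstract describes. The paper's gate unit and DSAModule are omitted.
import torch
import torch.nn as nn

class TinyMultiTaskNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_decoder = nn.Sequential(                        # per-pixel tumor mask
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )
        self.cls_head = nn.Sequential(                            # benign vs. malignant
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x):
        shared = self.encoder(x)
        return self.seg_decoder(shared), self.cls_head(shared)

x = torch.randn(4, 1, 128, 128)                                   # grayscale BUS images
mask_logits, class_logits = TinyMultiTaskNet()(x)
print(mask_logits.shape, class_logits.shape)                       # (4, 1, 128, 128) (4, 2)
```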
Affiliation(s)
- Qiqi He: School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China; School of Life Science and Technology, Xidian University, Xi'an, China
- Qiuju Yang: School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
- Hang Su: School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
- Yixuan Wang: School of Physics and Information Technology, Shaanxi Normal University, Xi'an, China
5
Jakkaladiki SP, Maly F. Integrating hybrid transfer learning with attention-enhanced deep learning models to improve breast cancer diagnosis. PeerJ Comput Sci 2024; 10:e1850. [PMID: 38435578] [PMCID: PMC10909230] [DOI: 10.7717/peerj-cs.1850]
Abstract
Cancer, with its high fatality rate, instills fear in countless individuals worldwide. However, effective diagnosis and treatment can often lead to a successful cure. Computer-assisted diagnostics, especially in the context of deep learning, have become prominent methods for primary screening of various diseases, including cancer. Deep learning, an artificial intelligence technique that enables computers to reason like humans, has recently gained significant attention. This study focuses on training a deep neural network to predict breast cancer. With the advancements in medical imaging technologies such as X-ray, magnetic resonance imaging (MRI), and computed tomography (CT) scans, deep learning has become essential in analyzing and managing extensive image datasets. The objective of this research is to propose a deep-learning model for the identification and categorization of breast tumors. The system's performance was evaluated using the breast cancer identification (BreakHis) classification datasets from the Kaggle repository and the Wisconsin Breast Cancer Dataset (WBC) from the UCI repository. The study's findings demonstrated an impressive accuracy rate of 100%, surpassing other state-of-the-art approaches. The suggested model was thoroughly evaluated using F1-score, recall, precision, and accuracy metrics on the WBC dataset. Training, validation, and testing were conducted using pre-processed datasets, leading to remarkable results of 99.8% recall rate, 99.06% F1-score, and 100% accuracy rate on the BreakHis dataset. Similarly, on the WBC dataset, the model achieved a 99% accuracy rate, a 98.7% recall rate, and a 99.03% F1-score. These outcomes highlight the potential of deep learning models in accurately diagnosing breast cancer. Based on our research, it is evident that the proposed system outperforms existing approaches in this field.
Affiliation(s)
- Sudha Prathyusha Jakkaladiki: Faculty of Informatics and Management, University of Hradec Králové, Hradec Kralove, Czech Republic
- Filip Maly: Faculty of Informatics and Management, University of Hradec Králové, Hradec Kralove, Czech Republic
6
Chen L, Zeng B, Shen J, Xu J, Cai Z, Su S, Chen J, Cai X, Ying T, Hu B, Wu M, Chen X, Zheng Y. Bone age assessment based on three-dimensional ultrasound and artificial intelligence compared with paediatrician-read radiographic bone age: protocol for a prospective, diagnostic accuracy study. BMJ Open 2024; 14:e079969. [PMID: 38401893] [PMCID: PMC10895244] [DOI: 10.1136/bmjopen-2023-079969]
Abstract
INTRODUCTION Radiographic bone age (BA) assessment is widely used to evaluate children's growth disorders and predict their future height. However, children are more sensitive and vulnerable to X-ray radiation exposure than adults. The purpose of this study is to develop a new, safer, radiation-free BA assessment method for children using three-dimensional ultrasound (3D-US) and artificial intelligence (AI), and to test the diagnostic accuracy and reliability of this method. METHODS AND ANALYSIS This is a prospective, observational study. All participants will be recruited through the Paediatric Growth and Development Clinic. All participants will receive left-hand 3D-US and X-ray examinations at Shanghai Sixth People's Hospital on the same day, and all images will be recorded. These image-related data will be collected and randomly divided into a training set (80%) and a test set (20%). The training set will be used to establish a cascade network of 3D-US skeletal image segmentation and BA prediction models to achieve end-to-end prediction from image to BA. The test set will be used to evaluate the accuracy of the 3D-US AI BA model. We have developed a new ultrasonic scanning device that can perform automatic 3D-US scanning of the hands. AI algorithms, such as convolutional neural networks, will be used to identify and segment the skeletal structures in the hand 3D-US images. We will achieve automatic segmentation of hand skeletal 3D-US images, establish a BA prediction model for 3D-US, and test the accuracy of the prediction model. ETHICS AND DISSEMINATION The Ethics Committee of Shanghai Sixth People's Hospital approved this study (approval number 2022-019). Written informed consent will be obtained from the parent or guardian of each participant. Final results will be published in peer-reviewed journals and presented at national and international conferences. TRIAL REGISTRATION NUMBER ChiCTR2200057236.
Affiliation(s)
- Li Chen: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Bolun Zeng: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jian Shen: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jiangchang Xu: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zehang Cai: Shantou Institute of Ultrasonic Instruments Co., Ltd, Shantou, China
- Shudian Su: Shantou Institute of Ultrasonic Instruments Co., Ltd, Shantou, China
- Jie Chen: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaojun Cai: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Tao Ying: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Bing Hu: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Min Wu: Department of Pediatrics, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaojun Chen: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Yuanyi Zheng: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
7
Huang Z, Yang K, Tian H, Wu H, Tang S, Cui C, Shi S, Jiang Y, Chen J, Xu J, Dong F. A validation of an entropy-based artificial intelligence for ultrasound data in breast tumors. BMC Med Inform Decis Mak 2024; 24:1. [PMID: 38166852] [PMCID: PMC10759705] [DOI: 10.1186/s12911-023-02404-z]
Abstract
BACKGROUND The application of artificial intelligence (AI) in the ultrasound (US) diagnosis of breast cancer (BCa) is increasingly prevalent. However, the impact of US-probe frequencies on the diagnostic efficacy of AI models has not been clearly established. OBJECTIVES To explore the impact of US videos acquired at different probe frequencies on the diagnostic efficacy of AI in breast US screening. METHODS This study used linear-array US probes of different frequencies (L14: frequency range 3.0-14.0 MHz, central frequency 9 MHz; L9: frequency range 2.5-9.0 MHz, central frequency 6.5 MHz; L13: frequency range 3.6-13.5 MHz, central frequency 8 MHz; L7: frequency range 3-7 MHz, central frequency 4.0 MHz) to collect breast US videos and applied an entropy-based deep learning approach for evaluation. We analyzed the average two-dimensional image entropy (2-DIE) of these videos and the performance of AI models in processing videos from these different frequencies to assess how probe frequency affects AI diagnostic performance. RESULTS In testing set 1, L9 had a higher average 2-DIE than L14; in testing set 2, L13 had a higher average 2-DIE than L7. The diagnostic efficacy of the US data used in the AI model analysis varied across frequencies (AUC: L9 > L14, 0.849 vs. 0.784; L13 > L7, 0.920 vs. 0.887). CONCLUSION This study indicates that US data acquired with probes of different frequencies exhibit different average 2-DIE values, and that datasets with higher average 2-DIE yield better diagnostic outcomes in AI-driven BCa diagnosis. Unlike other studies, our research emphasizes the importance of US-probe frequency selection for AI model diagnostic performance, rather than focusing solely on the AI algorithms themselves. These insights offer a new perspective on early BCa screening and diagnosis and are significant for future choices of US equipment and optimization of AI algorithms.
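The abstract's central quantity is the average two-dimensional image entropy (2-DIE). One common definition of 2-D image entropy uses the joint histogram of each pixel's gray level and its local neighbourhood mean; whether this matches the paper's exact formulation is an assumption. A hedged sketch:

```python
# Sketch: one common definition of two-dimensional image entropy, computed from
# the joint histogram of each pixel's gray level and its local neighbourhood mean.
# Whether this matches the paper's exact 2-DIE definition is an assumption.
import numpy as np
from scipy.ndimage import uniform_filter

def two_dimensional_entropy(img, bins=64):
    img = img.astype(float)
    neigh = uniform_filter(img, size=3)                 # 3x3 neighbourhood mean
    hist, _, _ = np.histogram2d(img.ravel(), neigh.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Average 2-D entropy over the frames of an ultrasound video (synthetic here).
video = np.random.randint(0, 256, size=(30, 256, 256))
print(np.mean([two_dimensional_entropy(f) for f in video]))
```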
Affiliation(s)
- Zhibin Huang: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Keen Yang: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Hongtian Tian: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Huaiyu Wu: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Shuzhen Tang: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Chen Cui: Research and development department, Illuminate, LLC, 518000, Shenzhen, Guangdong, China
- Siyuan Shi: Research and development department, Illuminate, LLC, 518000, Shenzhen, Guangdong, China
- Yitao Jiang: Research and development department, Illuminate, LLC, 518000, Shenzhen, Guangdong, China
- Jing Chen: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China
- Jinfeng Xu: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China; Shenzhen People's Hospital, 518020, Shenzhen, China
- Fajin Dong: The Second Clinical Medical College, Jinan University, 518020, Shenzhen, China; Shenzhen People's Hospital, 518020, Shenzhen, China
8
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509] [DOI: 10.1016/j.neunet.2023.11.006]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues. Hence, detecting cancer at an early stage is highly essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely, the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNNs) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, future trends, and challenges in this domain are discussed. The utmost goal of this paper is to provide comprehensive and insightful information to researchers who have a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma: School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak: Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray: Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer: Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak: School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India
9
Lv T, Hong X, Liu Y, Miao K, Sun H, Li L, Deng C, Jiang C, Pan X. AI-powered interpretable imaging phenotypes noninvasively characterize tumor microenvironment associated with diverse molecular signatures and survival in breast cancer. Comput Methods Programs Biomed 2024; 243:107857. [PMID: 37865058] [DOI: 10.1016/j.cmpb.2023.107857]
Abstract
BACKGROUND AND OBJECTIVES The tumor microenvironment (TME) is a determining factor in decision-making and personalized treatment for breast cancer, which is highly intra-tumor heterogeneous (ITH). However, the noninvasive imaging phenotypes of the TME are poorly understood, even though invasive genotypes are largely known in breast cancer. METHODS Here, we develop an artificial intelligence (AI)-driven approach for noninvasively characterizing the TME by integrating the predictive power of deep learning with the explainability of human-interpretable imaging phenotypes (IMPs) derived from 4D dynamic imaging (DCE-MRI) of 342 breast tumors linked to genomic and clinical data, connecting cancer phenotypes to genotypes. An unsupervised dual-attention deep graph clustering model (DGCLM) is developed to divide the bulk tumor into multiple spatially segregated and phenotypically consistent subclusters. IMPs ranging from spatial heterogeneity to kinetic heterogeneity are leveraged to capture the architecture, interaction, and proximity between intratumoral subclusters. RESULTS We demonstrate that our IMPs correlate with well-known markers of the TME and can also predict distinct molecular signatures, including expression of hormone receptors, epithelial growth factor receptor, and immune checkpoint proteins, with accuracy, reliability, and transparency superior to recent state-of-the-art radiomics and 'black-box' deep learning methods. Moreover, prognostic value is confirmed by survival analysis accounting for the IMPs. CONCLUSIONS Our approach provides an interpretable, quantitative, and comprehensive perspective for characterizing the TME in a noninvasive and clinically relevant manner.
Affiliation(s)
- Tianxu Lv: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Xiaoyan Hong: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Yuan Liu: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China
- Kai Miao: Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China
- Heng Sun: Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China
- Lihua Li: Institute of Biomedical Engineering and Instrumentation, Hangzhou Dianzi University, Hangzhou 310018, China
- Chuxia Deng: Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China; MOE Frontier Science Centre for Precision Oncology, University of Macau, Macau SAR, China
- Chunjuan Jiang: Department of Nuclear Medicine, The Second Xiangya Hospital of Central South University, Changsha, Hunan, China
- Xiang Pan: School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China; Cancer Center, Faculty of Health Sciences, University of Macau, Macau SAR, China; MOE Frontier Science Centre for Precision Oncology, University of Macau, Macau SAR, China; Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai 201318, China
10
Wang Q, Jia X, Luo T, Yu J, Xia S. Deep learning algorithm using bispectrum analysis energy feature maps based on ultrasound radiofrequency signals to detect breast cancer. Front Oncol 2023; 13:1272427. [PMID: 38179175] [PMCID: PMC10766103] [DOI: 10.3389/fonc.2023.1272427]
Abstract
Background Ultrasonography is an important imaging method for clinical breast cancer screening. As the original echo signals of ultrasonography, ultrasound radiofrequency (RF) signals provide abundant macroscopic and microscopic tissue information and have considerable value for breast cancer detection. Methods In this study, we proposed a deep learning method based on bispectrum analysis feature maps to process RF signals and realize breast cancer detection. Bispectrum analysis energy feature maps with frequency subdivision were first proposed and applied to breast cancer detection in this study. Our deep learning network was based on a weight-sharing framework that takes multiple feature maps as input. A feature-map attention module was designed to adaptively learn which feature maps and features are conducive to classification. We also designed a similarity constraint factor that learns the similarity and difference between feature maps using cosine distance. Results The experimental results showed that the areas under the receiver operating characteristic curves of our proposed method in the validation set and two independent test sets for benign and malignant breast tumor classification were 0.913, 0.900, and 0.885, respectively. The model combining four ultrasound bispectrum analysis energy feature maps outperformed both the model using ultrasound grayscale images and the model using a single bispectrum analysis energy feature map in breast cancer detection. Conclusion The combination of deep learning technology and the proposed ultrasound bispectrum analysis energy feature maps effectively realized breast cancer detection and provides an efficient means of extracting and utilizing ultrasound RF signals.
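For readers unfamiliar with the bispectrum, the sketch below estimates it for a single RF scan line with the direct FFT method, B(f1, f2) = E[X(f1) X(f2) X*(f1+f2)], averaged over windowed segments. How the paper subdivides frequencies and assembles such estimates into energy feature maps is not reproduced; the segment length, FFT size, and windowing here are illustrative assumptions.

```python
# Sketch: direct (FFT-based) bispectrum estimate of a 1-D RF line,
#   B(f1, f2) = E[ X(f1) * X(f2) * conj(X(f1 + f2)) ],
# averaged over signal segments. How the paper turns such estimates into
# "energy feature maps" with frequency subdivision is an assumption here.
import numpy as np

def bispectrum(signal, nfft=64, seg_len=64):
    segs = [signal[i:i + seg_len] for i in range(0, len(signal) - seg_len + 1, seg_len)]
    B = np.zeros((nfft, nfft), dtype=complex)
    for s in segs:
        X = np.fft.fft(s * np.hanning(seg_len), nfft)
        idx = (np.arange(nfft)[:, None] + np.arange(nfft)[None, :]) % nfft
        B += X[:, None] * X[None, :] * np.conj(X[idx])
    return np.abs(B) / max(len(segs), 1)                 # bispectral energy

rf_line = np.random.randn(4096)                           # one synthetic RF scan line
energy = bispectrum(rf_line)
print(energy.shape, energy.mean())                        # (64, 64) feature map per line
```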
Affiliation(s)
- Qingmin Wang: School of Information Science and Engineering, Fudan University, Shanghai, China
- Xiaohong Jia: Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Ting Luo: Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jinhua Yu: School of Information Science and Engineering, Fudan University, Shanghai, China
- Shujun Xia: Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
11
Li JW, Sheng DL, Chen JG, You C, Liu S, Xu HX, Chang C. Artificial intelligence in breast imaging: potentials and challenges. Phys Med Biol 2023; 68:23TR01. [PMID: 37722385] [DOI: 10.1088/1361-6560/acfade]
Abstract
Breast cancer, which is the most common type of malignant tumor among humans, is a leading cause of death in females. Standard treatment strategies, including neoadjuvant chemotherapy, surgery, postoperative chemotherapy, targeted therapy, endocrine therapy, and radiotherapy, are tailored for individual patients. Such personalized therapies have tremendously reduced the threat of breast cancer in females. Furthermore, early imaging screening plays an important role in reducing the treatment cycle and improving breast cancer prognosis. The recent innovative revolution in artificial intelligence (AI) has aided radiologists in the early and accurate diagnosis of breast cancer. In this review, we introduce the necessity of incorporating AI into breast imaging and the applications of AI in mammography, ultrasonography, magnetic resonance imaging, and positron emission tomography/computed tomography based on published articles since 1994. Moreover, the challenges of AI in breast imaging are discussed.
Affiliation(s)
- Jia-Wei Li: Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Dan-Li Sheng: Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
- Jian-Gang Chen: Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication & Electronic Engineering, East China Normal University, People's Republic of China
- Chao You: Department of Radiology, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Shuai Liu: Department of Nuclear Medicine, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, People's Republic of China
- Hui-Xiong Xu: Department of Ultrasound, Zhongshan Hospital, Institute of Ultrasound in Medicine and Engineering, Fudan University, Shanghai, 200032, People's Republic of China
- Cai Chang: Department of Medical Ultrasound, Fudan University Shanghai Cancer Center; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, People's Republic of China
12
You C, Shen Y, Sun S, Zhou J, Li J, Su G, Michalopoulou E, Peng W, Gu Y, Guo W, Cao H. Artificial intelligence in breast imaging: Current situation and clinical challenges. Exploration (Beijing) 2023; 3:20230007. [PMID: 37933287] [PMCID: PMC10582610] [DOI: 10.1002/exp.20230007]
Abstract
Breast cancer ranks among the most prevalent malignant tumours and is the primary contributor to cancer-related deaths in women. Breast imaging is essential for screening, diagnosis, and therapeutic surveillance. With the increasing demand for precision medicine, the heterogeneous nature of breast cancer makes it necessary to deeply mine and rationally utilize the tremendous amount of breast imaging information. With the rapid advancement of computer science, artificial intelligence (AI) has been noted to have great advantages in processing and mining of image information. Therefore, a growing number of scholars have started to focus on and research the utility of AI in breast imaging. Here, an overview of breast imaging databases and recent advances in AI research are provided, the challenges and problems in this field are discussed, and then constructive advice is further provided for ongoing scientific developments from the perspective of the National Natural Science Foundation of China.
Affiliation(s)
- Chao You: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Yiyuan Shen: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shiyun Sun: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiayin Zhou: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiawei Li: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Guanhua Su: Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Department of Breast Surgery, Key Laboratory of Breast Cancer in Shanghai, Fudan University Shanghai Cancer Center, Shanghai, China
- Weijun Peng: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Yajia Gu: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Weisheng Guo: Department of Minimally Invasive Interventional Radiology, Key Laboratory of Molecular Target and Clinical Pharmacology, School of Pharmaceutical Sciences and The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- Heqi Cao: Department of Health Sciences, National Natural Science Foundation of China, Beijing, China
13
Yadav N, Dass R, Virmani J. Objective assessment of segmentation models for thyroid ultrasound images. J Ultrasound 2023; 26:673-685. [PMID: 36195781] [PMCID: PMC10469139] [DOI: 10.1007/s40477-022-00726-8]
Abstract
Ultrasound features related to thyroid lesion structure, shape, volume, and margins are considered to determine cancer risk. Automatic segmentation of the thyroid lesion would allow the sonographic features to be estimated. On the basis of clinical ultrasonography B-mode scans, multi-output CNN-based semantic segmentation is used to separate the cystic and solid components of thyroid nodules. Semantic segmentation is an automatic technique that labels the ultrasound (US) pixels with an appropriate class or pixel category, i.e., lesion or background. In the present study, encoder-decoder-based semantic segmentation models, i.e., SegNet (with a VGG16 backbone), UNet, and Hybrid-UNet, were implemented for segmentation of thyroid US images. For this work, 820 thyroid US images were collected from the DDTI and ultrasoundcases.info (USC) datasets. These segmentation models were trained using a transfer learning approach with original and despeckled thyroid US images. The performance of the segmentation models was evaluated by analyzing the overlap region between the true lesion contour marked by the radiologist and the lesion retrieved by the segmentation model. The mean intersection over union (mIoU), mean Dice coefficient (mDC), TPR, TNR, FPR, and FNR metrics were used to measure performance. Based on exhaustive experiments and the performance evaluation parameters, it is observed that the proposed Hybrid-UNet segmentation model segments thyroid nodules and cystic components effectively.
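The overlap metrics named in the abstract (intersection over union and the Dice coefficient) can be computed directly from binary masks; a small self-contained sketch with a toy predicted mask and ground-truth mask follows.

```python
# Sketch: the overlap metrics named in the abstract (intersection over union and
# Dice coefficient) for a predicted binary mask versus the radiologist's contour.
import numpy as np

def iou_and_dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = inter / union if union else 1.0
    dice = 2 * inter / (pred.sum() + truth.sum()) if (pred.sum() + truth.sum()) else 1.0
    return float(iou), float(dice)

pred = np.zeros((128, 128)); pred[30:80, 30:80] = 1        # model output (toy example)
truth = np.zeros((128, 128)); truth[40:90, 40:90] = 1      # radiologist ground truth
print(iou_and_dice(pred, truth))
```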
Affiliation(s)
- Niranjan Yadav: Department of Electronics and Communication Engineering, Deenbandhu Chhotu Ram University of Science and Technology Murthal, Sonepat, 131039, India
- Rajeshwar Dass: Department of Electronics and Communication Engineering, Deenbandhu Chhotu Ram University of Science and Technology Murthal, Sonepat, 131039, India
- Jitendra Virmani: Central Scientific Instruments Organization, Council of Scientific and Industrial Research, Chandigarh, 160030, India
14
Deb SD, Jha RK. Breast UltraSound Image classification using fuzzy-rank-based ensemble network. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104871]
15
Jalloul R, Chethan HK, Alkhatib R. A Review of Machine Learning Techniques for the Classification and Detection of Breast Cancer from Medical Images. Diagnostics (Basel) 2023; 13:2460. [PMID: 37510204] [PMCID: PMC10378151] [DOI: 10.3390/diagnostics13142460]
Abstract
Cancer is an incurable disease based on unregulated cell division. Breast cancer is the most prevalent cancer in women worldwide, and early detection can lower death rates. Medical images provide the most important information for locating and diagnosing breast cancer. This paper reviews the history of the discipline and examines how deep learning and machine learning are applied to detect breast cancer. The classification of breast cancer using several medical imaging modalities is covered, and classification systems for tumors, non-tumors, and dense masses across these modalities are explained in detail. The differences between various medical image types are first examined using a variety of study datasets. Numerous machine learning and deep learning methods for diagnosing and classifying breast cancer are then described. Finally, this review addresses the challenges of classification and detection and the best results of different approaches.
Affiliation(s)
- Reem Jalloul: Maharaja Research Foundation, University of Mysore, Mysuru 570005, India
- H K Chethan: Department of Computer Science and Engineering, Maharaja Research Foundation, Maharaja Institute of Technology, Mysuru 570004, India
- Ramez Alkhatib: Biomaterial Bank Nord, Research Center Borstel Leibniz Lung Center, Parkallee 35, 23845 Borstel, Germany
16
Matin Malakouti S. Heart disease classification based on ECG using machine learning models. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104796]
17
Afrin H, Larson NB, Fatemi M, Alizad A. Deep Learning in Different Ultrasound Methods for Breast Cancer, from Diagnosis to Prognosis: Current Trends, Challenges, and an Analysis. Cancers (Basel) 2023; 15:3139. [PMID: 37370748] [PMCID: PMC10296633] [DOI: 10.3390/cancers15123139]
Abstract
Breast cancer is the second-leading cause of mortality among women around the world. Ultrasound (US) is one of the noninvasive imaging modalities used to diagnose breast lesions and monitor the prognosis of cancer patients. It has the highest sensitivity for diagnosing breast masses, but it shows increased false negativity due to its high operator dependency. Underserved areas do not have sufficient US expertise to diagnose breast lesions, resulting in delayed management of breast lesions. Deep learning neural networks may have the potential to facilitate early decision-making by physicians by rapidly yet accurately diagnosing and monitoring their prognosis. This article reviews the recent research trends on neural networks for breast mass ultrasound, including and beyond diagnosis. We discussed original research recently conducted to analyze which modes of ultrasound and which models have been used for which purposes, and where they show the best performance. Our analysis reveals that lesion classification showed the highest performance compared to those used for other purposes. We also found that fewer studies were performed for prognosis than diagnosis. We also discussed the limitations and future directions of ongoing research on neural networks for breast ultrasound.
Affiliation(s)
- Humayra Afrin: Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Nicholas B. Larson: Department of Quantitative Health Sciences, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Mostafa Fatemi: Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
- Azra Alizad: Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA; Department of Radiology, Mayo Clinic College of Medicine and Science, Rochester, MN 55905, USA
18
Zhang XY, Wei Q, Wu GG, Tang Q, Pan XF, Chen GQ, Zhang D, Dietrich CF, Cui XW. Artificial intelligence - based ultrasound elastography for disease evaluation - a narrative review. Front Oncol 2023; 13:1197447. [PMID: 37333814] [PMCID: PMC10272784] [DOI: 10.3389/fonc.2023.1197447]
Abstract
Ultrasound elastography (USE) provides complementary information on tissue stiffness and elasticity to conventional ultrasound imaging. It is noninvasive and free of radiation, and has become a valuable tool for improving diagnostic performance alongside conventional ultrasound imaging. However, diagnostic accuracy is reduced by high operator dependence and intra- and inter-observer variability in radiologists' visual assessments. Artificial intelligence (AI) has great potential to perform automatic medical image analysis tasks and to provide a more objective, accurate, and intelligent diagnosis. More recently, enhanced diagnostic performance of AI applied to USE has been demonstrated for various disease evaluations. This review provides an overview of the basic concepts of USE and AI techniques for clinical radiologists and then introduces the applications of AI in USE imaging for the following anatomical sites: liver, breast, thyroid, and other organs, covering lesion detection and segmentation, machine learning (ML)-assisted classification, and prognosis prediction. In addition, the existing challenges and future trends of AI in USE are discussed.
Affiliation(s)
- Xian-Ya Zhang: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qi Wei: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ge-Ge Wu: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qi Tang: Department of Ultrasonography, The First Hospital of Changsha, Changsha, China
- Xiao-Fang Pan: Health Medical Department, Dalian Municipal Central Hospital, Dalian, China
- Gong-Quan Chen: Department of Medical Ultrasound, Minda Hospital of Hubei Minzu University, Enshi, China
- Di Zhang: Department of Medical Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Xin-Wu Cui: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
19
Chen H, Ma M, Liu G, Wang Y, Jin Z, Liu C. Breast Tumor Classification in Ultrasound Images by Fusion of Deep Convolutional Neural Network and Shallow LBP Feature. J Digit Imaging 2023; 36:932-946. [PMID: 36720840] [PMCID: PMC10287618] [DOI: 10.1007/s10278-022-00711-x]
Abstract
Breast cancer is one of the most dangerous and common cancers in women, which makes it a major research topic in medical science. To assist physicians in pre-screening for breast cancer and reduce unnecessary biopsies, breast ultrasound and computer-aided diagnosis (CAD) have been used to distinguish between benign and malignant tumors. In this study, we proposed a CAD system for tumor diagnosis using a multi-channel fusion method and a feature extraction structure based on multi-feature fusion on breast ultrasound (BUS) images. In the pre-processing stage, the multi-channel fusion method performed color conversion of the BUS image so that it contains richer information. In the feature extraction stage, the pre-trained ResNet50 network was selected as the base network, three levels of features were combined based on adaptive spatial feature fusion (ASFF), and finally the shallow local binary pattern (LBP) texture features were fused. A support vector machine (SVM) was used for comparative analysis. A retrospective analysis was carried out, and 1615 breast tumor images (572 benign and 1043 malignant) confirmed by pathological examination were collected. After data processing and augmentation, for an independent test set consisting of 874 breast ultrasound images (457 benign and 417 malignant), the accuracy, precision, recall, specificity, F1 score, and AUC of our method were 96.91%, 98.75%, 94.72%, 98.91%, 0.97, and 0.991, respectively. The results show that the integration of shallow LBP texture features and multi-level deep features can more effectively improve the overall performance of breast tumor diagnosis and has strong clinical application value. Compared with previous methods, our proposed method is expected to enable automatic diagnosis of breast tumors and provide an auxiliary tool for radiologists to accurately diagnose breast diseases.
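A hedged sketch of the fusion idea in this abstract, concatenating a shallow LBP texture histogram with deep ResNet-50 features and feeding the result to an SVM, is given below. The paper's multi-channel color conversion, adaptive spatial feature fusion (ASFF), and training protocol are omitted, and the LBP settings and feature dimensions are illustrative assumptions.

```python
# Sketch: fusing shallow LBP texture features with deep ResNet-50 features and
# classifying with an SVM. This illustrates the fusion idea only; the paper's
# adaptive spatial feature fusion (ASFF) and multi-channel pre-processing are omitted.
import numpy as np
import torch
from torchvision.models import resnet50
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

cnn = resnet50(weights=None)
cnn.fc = torch.nn.Identity()                      # expose the 2048-d penultimate features
cnn.eval()

def fused_features(gray):
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    rgb = np.repeat(gray[None], 3, axis=0).astype(np.float32) / 255.0
    with torch.no_grad():
        deep = cnn(torch.from_numpy(rgb)[None]).numpy().ravel()   # (2048,)
    return np.concatenate([deep, lbp_hist])

images = [np.random.randint(0, 256, (224, 224)).astype(np.uint8) for _ in range(8)]
labels = [0, 1, 0, 1, 0, 1, 0, 1]                  # benign / malignant (toy labels)
X = np.stack([fused_features(im) for im in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:2]))
```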
Affiliation(s)
- Hua Chen: School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Minglun Ma: School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Gang Liu: School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Ying Wang: The Second Hospital of Hebei Medical University, Shijiazhuang, 050000, China
- Zhihao Jin: School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
- Chong Liu: School of Electrical Engineering, Yanshan University, Qinhuangdao, 066004, China
20
Wang L, Zhang L, Shu X, Yi Z. Intra-class consistency and inter-class discrimination feature learning for automatic skin lesion classification. Med Image Anal 2023; 85:102746. [PMID: 36638748] [DOI: 10.1016/j.media.2023.102746]
Abstract
Automated skin lesion classification has been proved to be capable of improving the diagnostic performance for dermoscopic images. Although many successes have been achieved, accurate classification remains challenging due to the significant intra-class variation and inter-class similarity. In this article, a deep learning method is proposed to increase the intra-class consistency as well as the inter-class discrimination of learned features in the automatic skin lesion classification. To enhance the inter-class discriminative feature learning, a CAM-based (class activation mapping) global-lesion localization module is proposed by optimizing the distance of CAMs for the same dermoscopic image generated by different skin lesion tasks. Then, a global features guided intra-class similarity learning module is proposed to generate the class center according to the deep features of all samples in one class and the history feature of one sample during the learning process. In this way, the performance can be improved with the collaboration of CAM-based inter-class feature discriminating and global features guided intra-class feature concentrating. To evaluate the effectiveness of the proposed method, extensive experiments are conducted on the ISIC-2017 and ISIC-2018 datasets. Experimental results with different backbones have demonstrated that the proposed method has good generalizability and can adaptively focus on more discriminative regions of the skin lesion.
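The building block behind the abstract's CAM-based module is the standard class activation map: a weighted sum of the last convolutional feature maps using the classifier weights of a target class. The sketch below shows only that construction with a stand-in backbone; the paper's CAM-distance and class-center losses are not reproduced.

```python
# Sketch: the standard class-activation-map (CAM) construction, a weighted sum
# of the last convolutional feature maps using the classifier weights of the
# target class. The paper builds its losses on such maps; those losses are not
# reproduced here, and the tiny backbone below is a stand-in.
import torch
import torch.nn as nn

features = nn.Sequential(                      # stand-in backbone ending in conv features
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
)
classifier = nn.Linear(128, 7)                 # e.g. 7 skin-lesion classes

def class_activation_map(x, target_class):
    fmap = features(x)                         # (1, 128, H, W)
    weights = classifier.weight[target_class]  # (128,)
    cam = (weights[:, None, None] * fmap[0]).sum(dim=0)
    cam = torch.relu(cam)
    return cam / (cam.max() + 1e-8)            # normalised to [0, 1]

image = torch.randn(1, 3, 224, 224)
cam = class_activation_map(image, target_class=3)
print(cam.shape)                               # torch.Size([224, 224])
```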
Collapse
Affiliation(s)
- Lituan Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
| | - Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China.
| | - Xin Shu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
| | - Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu 610065, PR China
| |
Collapse
|
21
|
Manikandan P, Durga U, Ponnuraja C. An integrative machine learning framework for classifying SEER breast cancer. Sci Rep 2023; 13:5362. [PMID: 37005484 PMCID: PMC10067827 DOI: 10.1038/s41598-023-32029-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2023] [Accepted: 03/21/2023] [Indexed: 04/04/2023] Open
Abstract
Breast cancer is the commonest type of cancer in women worldwide and a leading cause of cancer mortality among females. The aim of this research is to classify the alive and death status of breast cancer patients using the Surveillance, Epidemiology, and End Results (SEER) dataset. Owing to their capacity to handle enormous datasets systematically, machine learning and deep learning have been widely employed in biomedical research to address diverse classification problems. Pre-processing the data enables its visualization and analysis for use in making important decisions. This research presents a feasible machine learning-based approach for categorizing the SEER breast cancer dataset. Moreover, a two-step feature selection method based on Variance Threshold and Principal Component Analysis was employed to select features from the SEER breast cancer dataset. After feature selection, classification of the breast cancer dataset was carried out using supervised and ensemble learning techniques such as AdaBoost, XGBoost, Gradient Boosting, Naive Bayes, and Decision Tree. The performance of the various machine learning algorithms was examined using both the train-test split and k-fold cross-validation approaches. The Decision Tree achieved an accuracy of 98% for both the train-test split and cross-validation. In this study, it is observed that the Decision Tree algorithm outperforms the other supervised and ensemble learning approaches on the SEER breast cancer dataset.
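The two-step feature selection plus Decision Tree workflow described above maps naturally onto a scikit-learn pipeline; the sketch below is a minimal, generic version in which X and y are placeholders for the encoded SEER features and vital-status labels, and the specific thresholds are assumptions rather than the authors' settings.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import VarianceThreshold
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# X: numerically encoded SEER features, y: vital-status labels (placeholders).
pipeline = Pipeline([
    ("variance", VarianceThreshold(threshold=0.0)),   # drop constant features (threshold assumed)
    ("pca", PCA(n_components=0.95)),                  # keep components explaining 95% of variance
    ("tree", DecisionTreeClassifier(random_state=0)),
])
# scores = cross_val_score(pipeline, X, y, cv=10)      # k-fold cross-validation accuracy
```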
Collapse
Affiliation(s)
- P Manikandan
- Department of Data Science, Loyola College, Chennai, 600 034, India.
| | - U Durga
- Department of Data Science, Loyola College, Chennai, 600 034, India
| | - C Ponnuraja
- ICMR-National Institute for Research in Tuberculosis, Chennai, 600 031, India.
| |
Collapse
|
22
|
Gu Y, Xu W, Liu T, An X, Tian J, Ran H, Ren W, Chang C, Yuan J, Kang C, Deng Y, Wang H, Luo B, Guo S, Zhou Q, Xue E, Zhan W, Zhou Q, Li J, Zhou P, Chen M, Gu Y, Chen W, Zhang Y, Li J, Cong L, Zhu L, Wang H, Jiang Y. Ultrasound-based deep learning in the establishment of a breast lesion risk stratification system: a multicenter study. Eur Radiol 2023; 33:2954-2964. [PMID: 36418619 DOI: 10.1007/s00330-022-09263-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2022] [Revised: 09/03/2022] [Accepted: 10/22/2022] [Indexed: 11/25/2022]
Abstract
OBJECTIVES To establish a breast lesion risk stratification system using ultrasound images to predict breast malignancy and assess Breast Imaging Reporting and Data System (BI-RADS) categories simultaneously. METHODS This multicenter study prospectively collected a dataset of ultrasound images for 5012 patients at thirty-two hospitals from December 2018 to December 2020. A deep learning (DL) model was developed to conduct binary categorization (benign and malignant) and BI-RADS categories (2, 3, 4a, 4b, 4c, and 5) simultaneously. The training set of 4212 patients and the internal test set of 416 patients were from thirty hospitals. The remaining two hospitals with 384 patients were used as an external test set. Three experienced radiologists performed a reader study on 324 patients randomly selected from the test sets. We compared the performance of the DL model with that of three radiologists and the consensus of the three radiologists. RESULTS In the external test set, the DL model achieved areas under the receiver operating characteristic curve (AUCs) of 0.980 and 0.945 for the binary categorization and six-way categorizations, respectively. In the reader study set, the DL BI-RADS categories achieved a similar AUC (0.901 vs. 0.933, p = 0.0632), sensitivity (90.98% vs. 95.90%, p = 0.1094), and accuracy (83.33% vs. 79.01%, p = 0.0541), but higher specificity (78.71% vs. 68.81%, p = 0.0012) than those of the consensus of the three radiologists. CONCLUSIONS The DL model performed well in distinguishing benign from malignant breast lesions and yielded outcomes similar to experienced radiologists. This indicates the potential applicability of the DL model in clinical diagnosis. KEY POINTS • The DL model can achieve binary categorization for benign and malignant breast lesions and six-way BI-RADS categorizations for categories 2, 3, 4a, 4b, 4c, and 5, simultaneously. • The DL model showed acceptable agreement with radiologists for the classification of breast lesions. • The DL model performed well in distinguishing benign from malignant breast lesions and had promise in helping reduce unnecessary biopsies of BI-RADS 4a lesions.
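A model that outputs a benign/malignant decision and a six-way BI-RADS category simultaneously can be organized as a shared backbone with two classification heads. The PyTorch sketch below is only an assumed arrangement, with a ResNet18 backbone standing in for the study's unspecified architecture.

```python
import torch.nn as nn
from torchvision import models

class DualHeadBreastNet(nn.Module):
    # Shared backbone with two output heads: benign/malignant and six BI-RADS categories.
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)     # backbone choice is an assumption
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.binary_head = nn.Linear(feat_dim, 2)    # benign vs. malignant
        self.birads_head = nn.Linear(feat_dim, 6)    # BI-RADS 2, 3, 4a, 4b, 4c, 5

    def forward(self, x):
        feats = self.backbone(x)
        return self.binary_head(feats), self.birads_head(feats)
```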
Collapse
Affiliation(s)
- Yang Gu
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
| | - Wen Xu
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
| | - Ting Liu
- Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
| | - Xing An
- Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
| | - Jiawei Tian
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Haitao Ran
- Department of Ultrasound, The Second Affiliated Hospital of Chongqing Medical University & Chongqing Key Laboratory of Ultrasound Molecular Imaging, Chongqing, China
| | - Weidong Ren
- Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
| | - Cai Chang
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, Fudan University, Shanghai, China
| | - Jianjun Yuan
- Department of Ultrasonography, Henan Provincial People's Hospital, Zhengzhou, China
| | - Chunsong Kang
- Department of Ultrasound, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Taiyuan, China
| | - Youbin Deng
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College of Huazhong University of Science and Technology, Wuhan, China
| | - Hui Wang
- Department of Ultrasound, China-Japan Union Hospital of Jilin University, Changchun, China
| | - Baoming Luo
- Department of Ultrasound, The Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
| | - Shenglan Guo
- Department of Ultrasonography, First Affiliated Hospital of Guangxi Medical University, Nanning, China
| | - Qi Zhou
- Department of Medical Ultrasound, The Second Affiliated Hospital, School of Medicine, Xi'an Jiaotong University, Xi'an, China
| | - Ensheng Xue
- Department of Ultrasound, Union Hospital of Fujian Medical University, Fujian Institute of Ultrasound Medicine, Fuzhou, China
| | - Weiwei Zhan
- Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University, School of Medicine, Shanghai, China
| | - Qing Zhou
- Department of Ultrasonography, Renmin Hospital of Wuhan University, Wuhan, China
| | - Jie Li
- Department of Ultrasound, Qilu Hospital, Shandong University, Jinan, China
| | - Ping Zhou
- Department of Ultrasound, The Third Xiangya Hospital of Central South University, Changsha, China
| | - Man Chen
- Department of Ultrasound Medicine, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Ying Gu
- Department of Ultrasonography, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
| | - Wu Chen
- Department of Ultrasound, The First Hospital of Shanxi Medical University, Taiyuan, China
| | - Yuhong Zhang
- Department of Ultrasound, The Second Hospital of Dalian Medical University, Dalian, China
| | - Jianchu Li
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
| | - Longfei Cong
- Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
| | - Lei Zhu
- Department of Medical Imaging Advanced Research, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Shenzhen, China
| | - Hongyan Wang
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China.
| | - Yuxin Jiang
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China.
| |
Collapse
|
23
|
Zhong S, Tu C, Dong X, Feng Q, Chen W, Zhang Y. MsGoF: Breast lesion classification on ultrasound images by multi-scale gradational-order fusion framework. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 230:107346. [PMID: 36716637 DOI: 10.1016/j.cmpb.2023.107346] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2022] [Revised: 12/05/2022] [Accepted: 01/08/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Predicting the malignant potential of breast lesions based on breast ultrasound (BUS) images is a crucial component of computer-aided diagnosis systems for breast cancer. However, since breast lesions in BUS images generally have various shapes with relatively low contrast and present complex textures, it remains challenging to accurately identify the malignant potential of breast lesions. METHODS In this paper, we propose a multi-scale gradational-order fusion framework to take full advantage of multi-scale representations incorporated with gradational-order characteristics of BUS images for breast lesion classification. Specifically, we first construct a spatial context aggregation module to generate multi-scale context representations from the original BUS images. Subsequently, multi-scale representations are efficiently fused in a feature fusion block that is armed with special fusion strategies to comprehensively capture the morphological characteristics of breast lesions. To better characterize complex textures and enhance non-linear modeling capability, we further propose an isotropous gradational-order feature module in the feature fusion block to learn and combine multi-order representations. Finally, these multi-scale gradational-order representations are utilized to predict the malignant potential of breast lesions. RESULTS The proposed model was evaluated on three open datasets using 5-fold cross-validation. The experimental results (Accuracy: 85.32%, Sensitivity: 85.24%, Specificity: 88.57%, AUC: 90.63% on dataset A; Accuracy: 76.48%, Sensitivity: 72.45%, Specificity: 80.42%, AUC: 78.98% on dataset B) demonstrate that the proposed method achieves promising performance compared with other deep learning-based methods on the BUS classification task. CONCLUSIONS The proposed method has demonstrated promising potential to predict the malignant potential of breast lesions from ultrasound images in an end-to-end manner.
Collapse
Affiliation(s)
- Shengzhou Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.
| | - Chao Tu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.
| | - Xiuyu Dong
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.
| | - Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.
| | - Wufan Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.
| | - Yu Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China.
| |
Collapse
|
24
|
Mao Y, Liu H, Wang Y, Brenner ED. A deep learning approach to track Arabidopsis seedlings' circumnutation from time-lapse videos. PLANT METHODS 2023; 19:18. [PMID: 36849890 PMCID: PMC9969667 DOI: 10.1186/s13007-023-00984-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/12/2022] [Accepted: 01/17/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND Circumnutation (Darwin et al., Sci Rep 10(1):1-13, 2000) is the side-to-side movement common among growing plant appendages, but the purpose of circumnutation is not always clear. Accurately tracking and quantifying circumnutation can help researchers better study its underlying purpose. RESULTS In this paper, a deep learning-based model is proposed to track the circumnutating flowering apices of the plant Arabidopsis thaliana in time-lapse videos. By utilizing U-Net to segment the apex and combining it with a model update mechanism and pre- and post-processing steps, the proposed model significantly improves tracking time and accuracy over other baseline tracking methods. Additionally, we evaluate the computational complexity of the proposed model and further develop a method to accelerate its inference speed. The fast algorithm can track the apices in real time on a computer without a dedicated GPU. CONCLUSION We demonstrate that the accuracy of tracking the flowering apices of Arabidopsis thaliana can be improved with our proposed deep learning-based model in terms of both the tracking success rate and the tracking error. We also show that the improvement in tracking accuracy is statistically significant. The time-lapse video dataset of Arabidopsis is also provided, which can be used for future studies of Arabidopsis in various tasks.
Collapse
Affiliation(s)
- Yixiang Mao
- Department of Electrical and Computer Engineering, New York University, Brooklyn, NY, USA.
| | - Hejian Liu
- Department of Electrical and Computer Engineering, New York University, Brooklyn, NY, USA
| | - Yao Wang
- Department of Electrical and Computer Engineering, New York University, Brooklyn, NY, USA
| | | |
Collapse
|
25
|
Boubacar Goga A. Artificial Intelligence at the Service of Medical Imaging in the Detection of Breast Tumors. ARTIF INTELL 2023. [DOI: 10.5772/intechopen.108739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Abstract
Artificial intelligence is currently capable of imitating clinical reasoning in order to make a diagnosis, in particular that of breast cancer. This is possible thanks to the exponential increase in medical images. Indeed, artificial intelligence systems are used to assist doctors, not to replace them. Breast cancer is a cancerous tumor that can invade and destroy nearby tissue; therefore, early and reliable detection of this disease is a great asset for the medical field. Medical imaging techniques are commonly used to diagnose this disease. Given the drawbacks of these techniques and the diagnostic errors of doctors related to fatigue or inexperience, this work shows how artificial intelligence methods, in particular artificial neural networks (ANN), deep learning (DL), support vector machines (SVM), expert systems, and fuzzy logic, can be applied to breast imaging with the aim of improving the detection of this global scourge. Finally, the proposed system is composed of two essential steps: a tumor detection phase and a diagnostic phase, the latter deciding whether the tumor is benign or malignant.
Collapse
|
26
|
Jingfang DMD, Jianyun WMD, Xiangzhu WMD. Predicting Malignancy in Sonographic Features of Thyroid Nodules Using Convolutional Neural Networks ResNet50 Model. ADVANCED ULTRASOUND IN DIAGNOSIS AND THERAPY 2023. [DOI: 10.37015/audt.2023.220023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/03/2023] Open
|
27
|
A Multi-Stage Approach to Breast Cancer Classification Using Histopathology Images. Diagnostics (Basel) 2022; 13:diagnostics13010126. [PMID: 36611418 PMCID: PMC9818545 DOI: 10.3390/diagnostics13010126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 12/16/2022] [Accepted: 12/25/2022] [Indexed: 01/03/2023] Open
Abstract
Breast cancer is one of the deadliest diseases worldwide among women. Early diagnosis and proper treatment can save many lives. Breast image analysis is a popular method for detecting breast cancer. Computer-aided diagnosis of breast images helps radiologists do the task more efficiently and appropriately. Histopathological image analysis is an important diagnostic method for breast cancer, which is basically microscopic imaging of breast tissue. In this work, we developed a deep learning-based method to classify breast cancer using histopathological images. We propose a patch-classification model to classify the image patches, where we divide the images into patches and pre-process these patches with stain normalization, regularization, and augmentation methods. We use machine-learning-based classifiers and ensembling methods to classify the image patches into four categories: normal, benign, in situ, and invasive. Next, we use the patch information from this model to classify the images into two classes (cancerous and non-cancerous) and four other classes (normal, benign, in situ, and invasive). We introduce a model to utilize the 2-class classification probabilities and classify the images into a 4-class classification. The proposed method yields promising results and achieves a classification accuracy of 97.50% for 4-class image classification and 98.6% for 2-class image classification on the ICIAR BACH dataset.
Collapse
|
28
|
Artificial Intelligence in Breast Ultrasound: From Diagnosis to Prognosis-A Rapid Review. Diagnostics (Basel) 2022; 13:diagnostics13010058. [PMID: 36611350 PMCID: PMC9818181 DOI: 10.3390/diagnostics13010058] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2022] [Revised: 12/19/2022] [Accepted: 12/20/2022] [Indexed: 12/28/2022] Open
Abstract
BACKGROUND Ultrasound (US) is a fundamental diagnostic tool in breast imaging. However, US remains an operator-dependent examination. Research into and the application of artificial intelligence (AI) in breast US are increasing. The aim of this rapid review was to assess the current development of US-based artificial intelligence in the field of breast cancer. METHODS Two investigators with experience in medical research performed literature searching and data extraction on PubMed. The studies included in this rapid review evaluated the role of artificial intelligence concerning BC diagnosis, prognosis, molecular subtypes of breast cancer, axillary lymph node status, and the response to neoadjuvant chemotherapy. The mean values of sensitivity, specificity, and AUC were calculated for the main study categories with a meta-analytical approach. RESULTS A total of 58 main studies, all published after 2017, were included. Only 9/58 studies were prospective (15.5%); 13/58 studies (22.4%) used an ML approach. The vast majority (77.6%) used DL systems. Most studies were conducted for the diagnosis or classification of BC (55.1%). At present, all the included studies showed that AI has excellent performance in breast cancer diagnosis, prognosis, and treatment strategy. CONCLUSIONS US-based AI has great potential and research value in the field of breast cancer diagnosis, treatment, and prognosis. More prospective and multicenter studies are needed to assess the potential impact of AI in breast ultrasound.
Collapse
|
29
|
Kabir SM, Bhuiyan MIH. Correlated-Weighted Statistically Modeled Contourlet and Curvelet Coefficient Image-Based Breast Tumor Classification Using Deep Learning. Diagnostics (Basel) 2022; 13:diagnostics13010069. [PMID: 36611361 PMCID: PMC9818942 DOI: 10.3390/diagnostics13010069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Revised: 12/14/2022] [Accepted: 12/22/2022] [Indexed: 12/28/2022] Open
Abstract
Deep learning-based automatic classification of breast tumors using parametric imaging techniques derived from ultrasound (US) B-mode images remains an active research area. The Rician inverse Gaussian (RiIG) distribution is currently emerging as an appropriate choice for statistical modeling. This study presents a new approach based on correlated-weighted contourlet-transformed RiIG (CWCtr-RiIG) and curvelet-transformed RiIG (CWCrv-RiIG) images for breast tumor classification from B-mode ultrasound images using a deep convolutional neural network (CNN) architecture. A comparative study with other statistical models, such as the Nakagami and normal inverse Gaussian (NIG) distributions, is also presented. The weighting employed here weights the contourlet and curvelet sub-band coefficient images by their correlation with the corresponding RiIG statistically modeled images. On three freely accessible datasets (Mendeley, UDIAT, and BUSI), it is demonstrated that the proposed approach can provide more than 98 percent accuracy, sensitivity, specificity, NPV, and PPV using the CWCtr-RiIG images. On the same datasets, the suggested method offers superior classification performance to several other existing strategies.
Collapse
Affiliation(s)
- Shahriar M. Kabir
- Department of Electrical and Electronic Engineering, Green University of Bangladesh, Dhaka 1207, Bangladesh
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1000, Bangladesh
- Correspondence: ; Tel.: +88-017-6461-0728
| | - Mohammed I. H. Bhuiyan
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology, Dhaka 1000, Bangladesh
| |
Collapse
|
30
|
Baek J, O’Connell AM, Parker KJ. Improving breast cancer diagnosis by incorporating raw ultrasound parameters into machine learning. MACHINE LEARNING: SCIENCE AND TECHNOLOGY 2022; 3:045013. [PMID: 36698865 PMCID: PMC9855672 DOI: 10.1088/2632-2153/ac9bcc] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 10/15/2022] [Accepted: 10/19/2022] [Indexed: 01/28/2023] Open
Abstract
The improved diagnostic accuracy of ultrasound breast examinations remains an important goal. In this study, we propose a biophysical feature-based machine learning method for breast cancer detection to improve the performance beyond a benchmark deep learning algorithm and to furthermore provide a color overlay visual map of the probability of malignancy within a lesion. This overall framework is termed disease-specific imaging. Previously, 150 breast lesions were segmented and classified utilizing a modified fully convolutional network and a modified GoogLeNet, respectively. In this study multiparametric analysis was performed within the contoured lesions. Features were extracted from ultrasound radiofrequency, envelope, and log-compressed data based on biophysical and morphological models. The support vector machine with a Gaussian kernel constructed a nonlinear hyperplane, and we calculated the distance between the hyperplane and each feature's data point in multiparametric space. The distance can quantitatively assess a lesion and suggest the probability of malignancy that is color-coded and overlaid onto B-mode images. Training and evaluation were performed on in vivo patient data. The overall accuracy for the most common types and sizes of breast lesions in our study exceeded 98.0% for classification and 0.98 for an area under the receiver operating characteristic curve, which is more precise than the performance of radiologists and a deep learning system. Further, the correlation between the probability and Breast Imaging Reporting and Data System enables a quantitative guideline to predict breast cancer. Therefore, we anticipate that the proposed framework can help radiologists achieve more accurate and convenient breast cancer classification and detection.
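The core idea above, using the signed distance from a Gaussian-kernel SVM hyperplane as a quantitative malignancy score, can be sketched as follows; the feature matrix and the logistic squashing of the distance are assumptions for illustration, not the authors' calibration.

```python
import numpy as np
from sklearn.svm import SVC

# X_train: multiparametric biophysical features per lesion region, y_train: benign/malignant labels.
clf = SVC(kernel="rbf")                 # Gaussian-kernel SVM builds a nonlinear hyperplane
# clf.fit(X_train, y_train)

def malignancy_score(clf, X):
    # Signed distance to the hyperplane, squashed into a [0, 1] score that can be
    # colour-coded and overlaid onto the B-mode image.
    dist = clf.decision_function(X)
    return 1.0 / (1.0 + np.exp(-dist))
```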
Collapse
Affiliation(s)
- Jihye Baek
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States of America
| | - Avice M O’Connell
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, United States of America
| | - Kevin J Parker
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States of America. Author to whom any correspondence should be addressed
| |
Collapse
|
31
|
Nie S, Wei Y, Zhao F, Dong Y, Chen Y, Li Q, Du W, Li X, Yang X, Li Z. A dual deep neural network for auto-delineation in cervical cancer radiotherapy with clinical validation. Radiat Oncol 2022; 17:182. [DOI: 10.1186/s13014-022-02157-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Accepted: 11/08/2022] [Indexed: 11/16/2022] Open
Abstract
Abstract
Background
Artificial intelligence (AI) algorithms are capable of automatically detecting contouring boundaries in medical images. However, the impact of these algorithms on the clinical practice of cervical cancer radiotherapy is unclear. We aimed to develop an AI-assisted system for automatic contouring of the clinical target volume (CTV) and organs-at-risk (OARs) in cervical cancer radiotherapy and to conduct clinically based observations.
Methods
We first retrospectively collected data of 203 patients with cervical cancer from West China Hospital. The proposed method named as SegNet was developed and trained with different data groups. Quantitative metrics and clinical-based grading were used to evaluate differences between several groups of automatic contours. Then, 20 additional cases were conducted to compare the workload and quality of AI-assisted contours with manual delineation from scratch.
Results
For automatic CTVs, the dice similarity coefficient (DSC) value of the SegNet trained with the combined multi-group data reached 0.85 ± 0.02, which was statistically better than the DSC values of the SegNet trained independently on each group: SegNet(A) (0.82 ± 0.04), SegNet(B) (0.82 ± 0.03), and SegNet(C) (0.81 ± 0.04). Moreover, the DSC values of SegNet and UNet were, respectively, 0.85 and 0.82 for the CTV (P < 0.001), 0.93 and 0.92 for the bladder (P = 0.44), 0.84 and 0.81 for the rectum (P = 0.02), 0.89 and 0.84 for the bowel bag (P < 0.001), 0.93 and 0.92 for the right femoral head (P = 0.17), and 0.92 and 0.91 for the left femoral head (P = 0.25) (a worked DSC computation is sketched after this abstract). The clinically based grading also showed that the SegNet trained with multi-group data obtained better performance (352/360) than SegNet(A) (334/360), SegNet(B) (333/360), and SegNet(C) (320/360). The manual revision time for automatic CTVs (OARs not yet included) was 9.54 ± 2.42 min, compared with 30.95 ± 15.24 min for fully manual delineation.
Conclusion
The proposed SegNet can improve the performance of automatic delineation for cervical cancer radiotherapy by incorporating multi-group data. The AI-assisted system is clinically applicable and can shorten manual delineation time without compromising quality.
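The Dice similarity coefficient reported above compares an automatic contour with a reference contour as twice their overlap divided by the total size of both. A minimal NumPy sketch on binary masks (a generic illustration, not the study's evaluation code):

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    # DSC = 2 * |A intersect B| / (|A| + |B|) on binary masks (1 = inside the contour).
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom else 1.0
```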
Collapse
|
32
|
A. Mohamed E, Gaber T, Karam O, Rashed EA. A Novel CNN pooling layer for breast cancer segmentation and classification from thermograms. PLoS One 2022; 17:e0276523. [PMID: 36269756 PMCID: PMC9586394 DOI: 10.1371/journal.pone.0276523] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2022] [Accepted: 10/10/2022] [Indexed: 11/06/2022] Open
Abstract
Breast cancer is the second most frequent cancer worldwide, following lung cancer; it is the fifth leading cause of cancer death overall and a major cause of cancer death among women. In recent years, convolutional neural networks (CNNs) have been successfully applied to the diagnosis of breast cancer using different imaging modalities. Pooling is a main data processing step in CNNs that decreases the dimensionality of feature maps without losing major patterns. However, the effect of the pooling layer has not been studied thoroughly in the literature. In this paper, we propose a novel design for the pooling layer called the vector pooling block (VPB) for CNNs. The proposed VPB consists of two data pathways, which focus on extracting features along the horizontal and vertical orientations. The VPB enables CNNs to collect both global and local features by including long and narrow pooling kernels, unlike the traditional pooling layer, which gathers features from a fixed square kernel. Based on the novel VPB, we propose a new pooling module called AVG-MAX VPB. It can collect informative features by using two types of pooling techniques, maximum and average pooling. The VPB and the AVG-MAX VPB are plugged into backbone CNN networks, such as U-Net, AlexNet, ResNet18 and GoogLeNet, to show their advantages in the segmentation and classification tasks associated with breast cancer diagnosis from thermograms. The proposed pooling layer was evaluated using a benchmark thermogram database (DMR-IR) and its results were compared with those of U-Net, which served as the baseline. The U-Net results were as follows: global accuracy = 96.6%, mean accuracy = 96.5%, mean IoU = 92.07%, and mean BF score = 78.34%. The VPB-based results were as follows: global accuracy = 98.3%, mean accuracy = 97.9%, mean IoU = 95.87%, and mean BF score = 88.68%, while the AVG-MAX VPB-based results were as follows: global accuracy = 99.2%, mean accuracy = 98.97%, mean IoU = 98.03%, and mean BF score = 94.29%. Other network architectures also demonstrate superior improvement when the VPB and AVG-MAX VPB are used.
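A rough PyTorch sketch of the idea of long, narrow pooling pathways combined with max and average pooling follows; it is an assumed simplification for illustration, not the authors' exact VPB or AVG-MAX VPB design (the kernel length and the fusion rule are placeholders).

```python
import torch.nn as nn

class VectorPoolingBlock(nn.Module):
    # Two directional pathways with long, narrow pooling kernels; the AVG-MAX variant
    # mixes maximum and average pooling. Kernel length and the fusion rule are assumptions.
    def __init__(self, kernel_len=7):
        super().__init__()
        pad = kernel_len // 2
        self.h_max = nn.MaxPool2d((1, kernel_len), stride=1, padding=(0, pad))
        self.v_max = nn.MaxPool2d((kernel_len, 1), stride=1, padding=(pad, 0))
        self.h_avg = nn.AvgPool2d((1, kernel_len), stride=1, padding=(0, pad))
        self.v_avg = nn.AvgPool2d((kernel_len, 1), stride=1, padding=(pad, 0))

    def forward(self, x):
        # Elementwise average of the four directional pooling responses (spatial size preserved).
        return (self.h_max(x) + self.v_max(x) + self.h_avg(x) + self.v_avg(x)) / 4.0
```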
Collapse
Affiliation(s)
- Esraa A. Mohamed
- Faculty of Science, Department of Mathematics, Suez Canal University, Ismailia, Egypt
| | - Tarek Gaber
- Faculty of Computers and Informatics, Suez Canal University, Ismailia, Egypt
- School of Science, Engineering and Environment University of Salford, Manchester, United Kingdom
- * E-mail:
| | - Omar Karam
- Faculty of Informatics and Computer Science, British University in Egypt (BUE), Cairo, Egypt
| | - Essam A. Rashed
- Faculty of Science, Department of Mathematics, Suez Canal University, Ismailia, Egypt
- Graduate School of Information Science, University of Hyogo, Kobe, Japan
| |
Collapse
|
33
|
Syed AH, Khan T. Evolution of research trends in artificial intelligence for breast cancer diagnosis and prognosis over the past two decades: A bibliometric analysis. Front Oncol 2022; 12:854927. [PMID: 36267967 PMCID: PMC9578338 DOI: 10.3389/fonc.2022.854927] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 08/30/2022] [Indexed: 01/27/2023] Open
Abstract
Objective In recent years, among the available tools, the concurrent application of Artificial Intelligence (AI) has improved the diagnostic performance of breast cancer screening. In this context, the present study provides a comprehensive overview of the evolution of AI research for breast cancer diagnosis and prognosis using bibliometric analysis. Methodology Relevant peer-reviewed research articles published from 2000 to 2021 were downloaded from the Scopus and Web of Science (WOS) databases and then quantitatively analyzed and visualized using Bibliometrix (R package). Finally, open challenge areas were identified for future research work. Results The present study revealed that the number of studies published on AI for breast cancer detection and survival prediction increased from 12 to 546 between 2000 and 2021. The United States of America (USA), China, and India are the most productive countries in terms of publications in this field. Furthermore, the USA leads in total citations; however, Hungary and Holland take the lead positions in average citations per year. Wang J is the most productive author, and Zhan J is the most relevant author in this field. Stanford University in the USA is the most relevant affiliation by number of published articles. The top 10 most relevant sources are Q1 journals, with PLOS ONE and Computers in Biology and Medicine being the leading journals in this field. The most trending topics related to our study, transfer learning and deep learning, were identified. Conclusion The present findings provide insight and research directions for policymakers and academic researchers for future collaboration and research on AI for breast cancer patients.
Collapse
Affiliation(s)
- Asif Hassan Syed
- Department of Computer Science, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia. *Correspondence: Asif Hassan Syed,
| | - Tabrej Khan
- Department of Information Systems, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia
| |
Collapse
|
34
|
Bydon M, Durrani S, Mualem W. Commentary: Validation of Machine Learning-Based Automated Surgical Instrument Annotation Using Publicly Available Intraoperative Video. Oper Neurosurg (Hagerstown) 2022; 23:e158-e159. [PMID: 35972093 DOI: 10.1227/ons.0000000000000285] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Accepted: 04/03/2022] [Indexed: 02/04/2023] Open
Affiliation(s)
- Mohamad Bydon
- Mayo Clinic Neuro-Informatics Laboratory, Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota, USA.,Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota, USA
| | - Sulaman Durrani
- Mayo Clinic Neuro-Informatics Laboratory, Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota, USA.,Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota, USA
| | - William Mualem
- Mayo Clinic Neuro-Informatics Laboratory, Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota, USA.,Department of Neurologic Surgery, Mayo Clinic, Rochester, Minnesota, USA
| |
Collapse
|
35
|
A hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme for breast cancer segmentation based on DCE-MRI. Med Image Anal 2022; 82:102572. [PMID: 36055051 DOI: 10.1016/j.media.2022.102572] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Revised: 07/08/2022] [Accepted: 08/11/2022] [Indexed: 11/24/2022]
Abstract
Automatically and accurately annotating tumor in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), which provides a noninvasive in vivo method to evaluate tumor vasculature architectures based on contrast accumulation and washout, is a crucial step in computer-aided breast cancer diagnosis and treatment. However, it remains challenging due to the varying sizes, shapes, appearances and densities of tumors caused by the high heterogeneity of breast cancer, and the high dimensionality and ill-posed artifacts of DCE-MRI. In this paper, we propose a hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme that integrates pharmacokinetics prior and feature refinement to generate sufficiently adequate features in DCE-MRI for breast cancer segmentation. The pharmacokinetics prior expressed by time intensity curve (TIC) is incorporated into the scheme through objective function called dynamic contrast-enhanced prior (DCP) loss. It contains contrast agent kinetic heterogeneity prior knowledge, which is important to optimize our model parameters. Besides, we design a spatial fusion module (SFM) embedded in the scheme to exploit intra-slices spatial structural correlations, and deploy a spatial-kinetic fusion module (SKFM) to effectively leverage the complementary information extracted from spatial-kinetic space. Furthermore, considering that low spatial resolution often leads to poor image quality in DCE-MRI, we integrate a reconstruction autoencoder into the scheme to refine feature maps in an unsupervised manner. We conduct extensive experiments to validate the proposed method and show that our approach can outperform recent state-of-the-art segmentation methods on breast cancer DCE-MRI dataset. Moreover, to explore the generalization for other segmentation tasks on dynamic imaging, we also extend the proposed method to brain segmentation in DSC-MRI sequence. Our source code will be released on https://github.com/AI-medical-diagnosis-team-of-JNU/DCEDuDoFNet.
Collapse
|
36
|
Gu Y, Xu W, Lin B, An X, Tian J, Ran H, Ren W, Chang C, Yuan J, Kang C, Deng Y, Wang H, Luo B, Guo S, Zhou Q, Xue E, Zhan W, Zhou Q, Li J, Zhou P, Chen M, Gu Y, Chen W, Zhang Y, Li J, Cong L, Zhu L, Wang H, Jiang Y. Deep learning based on ultrasound images assists breast lesion diagnosis in China: a multicenter diagnostic study. Insights Imaging 2022; 13:124. [PMID: 35900608 PMCID: PMC9334487 DOI: 10.1186/s13244-022-01259-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2022] [Accepted: 06/25/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Studies on deep learning (DL)-based models in breast ultrasound (US) remain at the early stage due to a lack of large datasets for training and independent test sets for verification. We aimed to develop a DL model for differentiating benign from malignant breast lesions on US using a large multicenter dataset and explore the model's ability to assist the radiologists. METHODS A total of 14,043 US images from 5012 women were prospectively collected from 32 hospitals. To develop the DL model, the patients from 30 hospitals were randomly divided into a training cohort (n = 4149) and an internal test cohort (n = 466). The remaining 2 hospitals (n = 397) were used as the external test cohorts (ETC). We compared the model with the prospective Breast Imaging Reporting and Data System assessment and five radiologists. We also explored the model's ability to assist the radiologists using two different methods. RESULTS The model demonstrated excellent diagnostic performance with the ETC, with a high area under the receiver operating characteristic curve (AUC, 0.913), sensitivity (88.84%), specificity (83.77%), and accuracy (86.40%). In the comparison set, the AUC was similar to that of the expert (p = 0.5629) and one experienced radiologist (p = 0.2112) and significantly higher than that of three inexperienced radiologists (p < 0.01). After model assistance, the accuracies and specificities of the radiologists were substantially improved without loss in sensitivities. CONCLUSIONS The DL model yielded satisfactory predictions in distinguishing benign from malignant breast lesions. The model showed the potential value in improving the diagnosis of breast lesions by radiologists.
Collapse
Affiliation(s)
- Yang Gu
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
| | - Wen Xu
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
| | - Bin Lin
- Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
| | - Xing An
- Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
| | - Jiawei Tian
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Haitao Ran
- Department of Ultrasound, The Second Affiliated Hospital of Chongqing Medical University and Chongqing Key Laboratory of Ultrasound Molecular Imaging, Chongqing, China
| | - Weidong Ren
- Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
| | - Cai Chang
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, Shanghai, China
| | - Jianjun Yuan
- Department of Ultrasonography, Henan Provincial People's Hospital, Zhengzhou, China
| | - Chunsong Kang
- Department of Ultrasound, Shanxi Bethune Hospital, Shanxi Academy of Medical Sciences, Taiyuan, China
| | - Youbin Deng
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College of Huazhong University of Science and Technology, Wuhan, China
| | - Hui Wang
- Department of Ultrasound, China-Japan Union Hospital of Jilin University, Changchun, China
| | - Baoming Luo
- Department of Ultrasound, The Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, China
| | - Shenglan Guo
- Department of Ultrasonography, First Affiliated Hospital of Guangxi Medical University, Nanning, China
| | - Qi Zhou
- Department of Medical Ultrasound, The Second Affiliated Hospital, School of Medicine, Xi'an Jiaotong University, Xi'an, China
| | - Ensheng Xue
- Department of Ultrasound, Union Hospital of Fujian Medical University, Fujian Institute of Ultrasound Medicine, Fuzhou, China
| | - Weiwei Zhan
- Department of Ultrasound, Ruijin Hospital, Shanghai Jiaotong University, School of Medicine, Shanghai, China
| | - Qing Zhou
- Department of Ultrasonography, Renmin Hospital of Wuhan University, Wuhan, China
| | - Jie Li
- Department of Ultrasound, Qilu Hospital, Shandong University, Jinan, 250012, China
| | - Ping Zhou
- Department of Ultrasound, The Third Xiangya Hospital of Central South University, Changsha, China
| | - Man Chen
- Department of Ultrasound Medicine, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Ying Gu
- Department of Ultrasonography, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
| | - Wu Chen
- Department of Ultrasound, The First Hospital of Shanxi Medical University, Taiyuan, China
| | - Yuhong Zhang
- Department of Ultrasound, The Second Hospital of Dalian Medical University, Dalian, China
| | - Jianchu Li
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China
| | - Longfei Cong
- Department of Medical Imaging Advanced Research, Beijing Research Institute, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Beijing, China
| | - Lei Zhu
- Department of Medical Imaging Advanced Research, Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Shenzhen, China
| | - Hongyan Wang
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China.
| | - Yuxin Jiang
- Department of Ultrasound, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, No.1 Shuai Fu Yuan, Dong Cheng District, Beijing, 100730, China.
| |
Collapse
|
37
|
Gong X, Zhao X, Fan L, Li T, Guo Y, Luo J. BUS-net: a bimodal ultrasound network for breast cancer diagnosis. INT J MACH LEARN CYB 2022. [DOI: 10.1007/s13042-022-01596-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
38
|
Wu H, Ye X, Jiang Y, Tian H, Yang K, Cui C, Shi S, Liu Y, Huang S, Chen J, Xu J, Dong F. A Comparative Study of Multiple Deep Learning Models Based on Multi-Input Resolution for Breast Ultrasound Images. Front Oncol 2022; 12:869421. [PMID: 35875151 PMCID: PMC9302001 DOI: 10.3389/fonc.2022.869421] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Accepted: 05/23/2022] [Indexed: 01/08/2023] Open
Abstract
Purpose The purpose of this study was to explore the performance of different combinations of deep learning (DL) models (Xception, DenseNet121, MobileNet, ResNet50 and EfficientNetB0) and input image resolutions (REZs) (224 × 224, 320 × 320 and 448 × 448 pixels) for breast cancer diagnosis. Methods This multicenter study retrospectively analyzed gray-scale breast ultrasound images collected from two Chinese hospitals. The data were divided into training, validation, internal testing, and external testing sets. Three hundred images were randomly selected for the physician-AI comparison. The Wilcoxon test was used to compare the diagnostic error of physicians and models at the 0.05 and 0.10 significance levels. Specificity, sensitivity, accuracy, and the area under the curve (AUC) were used as the primary evaluation metrics. Results A total of 13,684 images of 3447 female patients were finally included. In the external test, the 224 and 320 REZs achieved the best performance for MobileNet and EfficientNetB0, respectively (AUC: 0.893 and 0.907), while the 448 REZ achieved the best performance for Xception, DenseNet121 and ResNet50 (AUC: 0.900, 0.883 and 0.871, respectively). In the physician-AI test set, the 320 REZ for EfficientNetB0 (AUC: 0.896, P < 0.1) was better than senior physicians. In addition, the 224 REZ for MobileNet (AUC: 0.878, P < 0.1) and the 448 REZ for Xception (AUC: 0.895, P < 0.1) were better than junior physicians, whereas the 448 REZ for DenseNet121 (AUC: 0.880, P < 0.05) and ResNet50 (AUC: 0.838, P < 0.05) were only better than entry-level physicians. Conclusion Based on gray-scale breast ultrasound images, we obtained the best DL model-resolution combinations, which outperformed the physicians.
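As an assumed illustration of how such a model/resolution sweep could be set up (the Keras constructors and the two-class softmax head stand in for the study's unspecified training code), a minimal sketch:

```python
import tensorflow as tf

BACKBONES = {
    "Xception": tf.keras.applications.Xception,
    "DenseNet121": tf.keras.applications.DenseNet121,
    "MobileNet": tf.keras.applications.MobileNet,
    "ResNet50": tf.keras.applications.ResNet50,
    "EfficientNetB0": tf.keras.applications.EfficientNetB0,
}
RESOLUTIONS = [224, 320, 448]   # input resolutions under comparison

def build_model(backbone_name, rez, num_classes=2):
    # ImageNet transfer learning with the classification head replaced by a 2-class softmax.
    base = BACKBONES[backbone_name](include_top=False, weights="imagenet",
                                    input_shape=(rez, rez, 3), pooling="avg")
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(base.output)
    return tf.keras.Model(base.input, outputs, name=f"{backbone_name}_{rez}")

# models = [build_model(name, rez) for name in BACKBONES for rez in RESOLUTIONS]
```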
Collapse
Affiliation(s)
- Huaiyu Wu
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
| | - Xiuqin Ye
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
| | - Yitao Jiang
- Research and Development Department, Microport Prophecy, Shanghai, China
- Research and Development Department, Illuminate Limited Liability Company, Shenzhen, China
| | - Hongtian Tian
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
| | - Keen Yang
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
| | - Chen Cui
- Research and Development Department, Microport Prophecy, Shanghai, China
- Research and Development Department, Illuminate Limited Liability Company, Shenzhen, China
| | - Siyuan Shi
- Research and Development Department, Microport Prophecy, Shanghai, China
- Research and Development Department, Illuminate Limited Liability Company, Shenzhen, China
| | - Yan Liu
- The Key Laboratory of Cardiovascular Remodeling and Function Research, Chinese Ministry of Education and Chinese Ministry of Health, and The State and Shandong Province Joint Key Laboratory of Translational Cardiovascular Medicine, Cheeloo College of Medicine, Shandong University, Qilu Hospital of Shandong University, Jinan, China
| | - Sijing Huang
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
| | - Jing Chen
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
| | - Jinfeng Xu
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
- *Correspondence: Jinfeng Xu, ; Fajin Dong,
| | - Fajin Dong
- Department of Ultrasound, First Clinical College of Jinan University, Second Clinical College of Jinan University, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen People’s Hospital, Shenzhen, China
- *Correspondence: Jinfeng Xu, ; Fajin Dong,
| |
Collapse
|
39
|
Wang Y, Zhang L, Shu X, Feng Y, Yi Z, Lv Q. Feature-Sensitive Deep Convolutional Neural Network for Multi-Instance Breast Cancer Detection. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2022; 19:2241-2251. [PMID: 33600319 DOI: 10.1109/tcbb.2021.3060183] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
To obtain a well-performing computer-aided detection model for breast cancer, an effective and efficient algorithm and a well-labeled dataset on which to train it are usually needed. In this paper, a multi-instance mammography clinic dataset was first constructed. Each case in the dataset includes a different number of instances captured from different views; it is labeled according to the pathological report, and all instances of one case share one label. Nevertheless, the instances captured from different views may contribute at different levels to the category of the target case. Motivated by this observation, a feature-sensitive deep convolutional neural network trained in an end-to-end manner is proposed to detect breast cancer. The proposed method first uses a pretrained model with some custom layers to extract image features. It then adopts a feature fusion module that learns to compute the weight of each feature vector, so that the different instances of each case have different influence on the classifier. Lastly, a classifier module is used to classify the fused features. Experimental results on both our constructed clinic dataset and two public datasets demonstrate the effectiveness of the proposed method.
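The learned per-instance weighting described above is closely related to attention-based multiple-instance pooling; the PyTorch sketch below shows one plausible form of such a fusion module (the dimensions and the two-class head are assumptions, not the paper's exact design).

```python
import torch
import torch.nn as nn

class FeatureWeightedFusion(nn.Module):
    # Learns one weight per instance (view) so different views of a case contribute
    # differently to the case-level decision.
    def __init__(self, feat_dim, hidden_dim=128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.Tanh(), nn.Linear(hidden_dim, 1)
        )
        self.classifier = nn.Linear(feat_dim, 2)

    def forward(self, instance_feats):
        # instance_feats: (num_instances, feat_dim) extracted per view by a pretrained CNN.
        weights = torch.softmax(self.scorer(instance_feats), dim=0)   # (num_instances, 1)
        case_feat = (weights * instance_feats).sum(dim=0)             # weighted case-level feature
        return self.classifier(case_feat), weights
```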
Collapse
|
40
|
van der Velden BH, Kuijf HJ, Gilhuijs KG, Viergever MA. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med Image Anal 2022; 79:102470. [DOI: 10.1016/j.media.2022.102470] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2021] [Revised: 03/15/2022] [Accepted: 05/02/2022] [Indexed: 12/11/2022]
|
41
|
Image Moment-Based Features for Mass Detection in Breast US Images via Machine Learning and Neural Network Classification Models. INVENTIONS 2022. [DOI: 10.3390/inventions7020042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Differentiating between malignant and benign masses using machine learning in the recognition of breast ultrasound (BUS) images is a technique with good accuracy and precision, which helps doctors make a correct diagnosis. The method proposed in this paper integrates Hu’s moments in the analysis of the breast tumor. The extracted features feed a k-nearest neighbor (k-NN) classifier and a radial basis function neural network (RBFNN) to classify breast tumors into benign and malignant. The raw images and the tumor masks provided as ground-truth images belong to the public digital BUS images database. Certain metrics such as accuracy, sensitivity, precision, and F1-score were used to evaluate the segmentation results and to select Hu’s moments showing the best capacity to discriminate between malignant and benign breast tissues in BUS images. Regarding the selection of Hu’s moments, the k-NN classifier reached 85% accuracy for moment M1 and 80% for moment M5 whilst RBFNN reached an accuracy of 76% for M1. The proposed method might be used to assist the clinical diagnosis of breast cancer identification by providing a good combination between segmentation and Hu’s moments.
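Extracting Hu's seven invariant moments from a binary tumor mask and feeding selected moments to a k-NN classifier, as described above, can be sketched with OpenCV and scikit-learn; the mask list, labels, and the choice of k are placeholders rather than the study's settings.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def hu_moment_features(mask):
    # mask: single-channel binary tumor mask (uint8). Hu's seven invariant moments are
    # log-scaled because they span many orders of magnitude.
    hu = cv2.HuMoments(cv2.moments(mask, binaryImage=True)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# Hypothetical data: masks is a list of binary masks, labels an array of benign/malignant labels.
# X = np.stack([hu_moment_features(m) for m in masks])
# knn = KNeighborsClassifier(n_neighbors=5).fit(X[:, [0]], labels)   # e.g., moment M1 only
```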
Collapse
|
42
|
An optimized deep learning architecture for breast cancer diagnosis based on improved marine predators algorithm. Neural Comput Appl 2022; 34:18015-18033. [PMID: 35698722 PMCID: PMC9175533 DOI: 10.1007/s00521-022-07445-5] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Accepted: 05/14/2022] [Indexed: 11/12/2022]
Abstract
Breast cancer is the second leading cause of death in women; therefore, effective early detection can reduce its mortality rate. Detecting and classifying breast cancer in the early phases of development may allow for optimal therapy. Convolutional neural networks (CNNs) have improved tumor detection and classification efficiency in medical imaging compared with traditional approaches. This paper proposes a novel classification model for breast cancer diagnosis based on a hybridized CNN and an improved optimization algorithm, together with transfer learning, to help radiologists detect abnormalities efficiently. The marine predators algorithm (MPA) is the chosen optimization algorithm, improved with an opposition-based learning strategy to address the weaknesses of the original MPA. The improved marine predators algorithm (IMPA) is used to find the best values for the hyperparameters of the CNN architecture. The proposed method uses a pretrained CNN model called ResNet50 (residual network), hybridized with the IMPA algorithm to produce an architecture called IMPA-ResNet50. The evaluation is performed on two mammographic datasets: the Mammographic Image Analysis Society (MIAS) dataset and the Curated Breast Imaging Subset of DDSM (CBIS-DDSM). The proposed model was compared with state-of-the-art approaches and outperformed them, achieving 98.32% accuracy, 98.56% sensitivity, and 98.68% specificity on the CBIS-DDSM dataset and 98.88% accuracy, 97.61% sensitivity, and 98.40% specificity on the MIAS dataset. To evaluate the performance of IMPA in finding optimal values for the ResNet50 hyperparameters, it was compared with four other optimization algorithms: the gravitational search algorithm (GSA), Harris hawks optimization (HHO), the whale optimization algorithm (WOA), and the original MPA. These counterpart algorithms were also hybridized with the ResNet50 architecture, producing models named GSA-ResNet50, HHO-ResNet50, WOA-ResNet50, and MPA-ResNet50, respectively. The results indicate that the proposed IMPA-ResNet50 achieves better performance than its counterparts.
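The abstract identifies opposition-based learning (OBL) as the ingredient that strengthens MPA, so a minimal sketch of that single step (not the full IMPA search or the ResNet50 training loop) is given below. The hyperparameter bounds and the dummy fitness function are placeholders; in the paper, fitness would be the validation performance of the CNN trained with a given hyperparameter set.

```python
import numpy as np

rng = np.random.default_rng(0)
lb = np.array([1e-5, 8, 0.0])        # illustrative bounds: learning rate, batch size, dropout
ub = np.array([1e-2, 64, 0.5])

def fitness(x):
    """Placeholder objective; the real fitness would train and evaluate ResNet50 with x."""
    return np.sum((x - (lb + ub) / 2) ** 2)

# Opposition-based initialization: evaluate each random candidate and its "opposite"
# point lb + ub - x, and keep whichever scores better before the main search begins.
pop = lb + rng.random((10, 3)) * (ub - lb)
opp = lb + ub - pop
better = np.apply_along_axis(fitness, 1, pop) <= np.apply_along_axis(fitness, 1, opp)
pop = np.where(better[:, None], pop, opp)
```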
|
43
|
Inan MSK, Alam FI, Hasan R. Deep integrated pipeline of segmentation guided classification of breast cancer from ultrasound images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103553] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
|
44
|
A gated convolutional neural network for classification of breast lesions in ultrasound images. Soft comput 2022. [DOI: 10.1007/s00500-022-07024-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
45
|
Byra M, Jarosik P, Dobruch-Sobczak K, Klimonda Z, Piotrzkowska-Wroblewska H, Litniewski J, Nowicki A. Joint segmentation and classification of breast masses based on ultrasound radio-frequency data and convolutional neural networks. ULTRASONICS 2022; 121:106682. [PMID: 35065458 DOI: 10.1016/j.ultras.2021.106682] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Revised: 12/08/2021] [Accepted: 12/30/2021] [Indexed: 06/14/2023]
Abstract
In this paper, we propose a novel deep learning method for joint classification and segmentation of breast masses based on radio-frequency (RF) ultrasound (US) data. In comparison to commonly used classification and segmentation techniques that utilize B-mode US images, we train the network with RF data (data before envelope detection and dynamic compression), which are considered to carry more information about the tissue's physical properties than standard B-mode US images. Our multi-task network, based on the Y-Net architecture, can effectively process large matrices of RF data by mixing 1D and 2D convolutional filters. We use data collected from 273 breast masses to compare the performance of networks trained with RF data and with US images. The multi-task model developed on the RF data achieved good classification performance, with an area under the receiver operating characteristic curve (AUC) of 0.90. The network based on the US images achieved an AUC of 0.87. In the case of segmentation, we obtained mean Dice scores of 0.64 and 0.60 for the approaches utilizing US images and RF data, respectively. Moreover, the interpretability of the networks was studied using the class activation mapping technique and filter weight visualizations.
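A toy PyTorch sketch of the general idea, a shared front end with tall, narrow ("1D-like") filters along the axial axis of the RF frame, followed by 2D convolutions and separate classification and segmentation heads, is shown below. The kernel sizes, strides, and input shape are illustrative assumptions; the actual Y-Net configuration from the paper is not reproduced here.

```python
import torch
import torch.nn as nn

class RFYNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Tall, narrow kernels act like 1D filters along the axial (fast-time) RF axis.
        self.rf_front = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(15, 1), stride=(4, 1), padding=(7, 0)), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=(15, 1), stride=(4, 1), padding=(7, 0)), nn.ReLU(),
        )
        # Ordinary 2D filters on the axially downsampled representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Two task heads share the encoder: benign/malignant logit and a coarse mask.
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))
        self.seg_head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, kernel_size=1),
        )

    def forward(self, rf):
        z = self.encoder(self.rf_front(rf))
        return self.cls_head(z), self.seg_head(z)

rf = torch.randn(2, 1, 2048, 128)       # toy RF frame: axial samples x scan lines
cls_logit, mask_logits = RFYNetSketch()(rf)
```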
Affiliation(s)
- Michal Byra: Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Piotr Jarosik: Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Katarzyna Dobruch-Sobczak: Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland; Maria Sklodowska-Curie Memorial Cancer Centre and Institute of Oncology, Warsaw, Poland
- Ziemowit Klimonda: Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Jerzy Litniewski: Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
- Andrzej Nowicki: Institute of Fundamental Technological Research, Polish Academy of Sciences, Warsaw, Poland
|
46
|
Liu H, Cui G, Luo Y, Guo Y, Zhao L, Wang Y, Subasi A, Dogan S, Tuncer T. Artificial Intelligence-Based Breast Cancer Diagnosis Using Ultrasound Images and Grid-Based Deep Feature Generator. Int J Gen Med 2022; 15:2271-2282. [PMID: 35256855 PMCID: PMC8898057 DOI: 10.2147/ijgm.s347491] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2021] [Accepted: 01/11/2022] [Indexed: 01/30/2023] Open
Abstract
Purpose Breast cancer is a prominent cancer type with high mortality. Early detection of breast cancer could serve to improve clinical outcomes. Ultrasonography is a digital imaging technique used to differentiate benign and malignant tumors. Several artificial intelligence techniques have been suggested in the literature for breast cancer detection using breast ultrasonography (BUS). In particular, deep learning methods have recently been applied to biomedical images to achieve high classification performance. Patients and Methods This work presents a new deep feature generation technique for breast cancer detection using BUS images. Sixteen widely known pre-trained CNN models are used in this framework as feature generators. In the feature generation phase, the input image is divided into rows and columns, and the deep feature generators (pre-trained models) are applied to each row and each column; the method is therefore called a grid-based deep feature generator. The proposed grid-based deep feature generator calculates the error value of each deep feature generator and then selects the best three feature vectors to form the final feature vector. In the feature selection phase, iterative neighborhood component analysis (INCA) selects 980 features as the optimal feature set. Finally, these features are classified using a deep neural network (DNN). Results The developed grid-based deep feature generation model reached 97.18% classification accuracy on the ultrasound images across three classes, namely malignant, benign, and normal. Conclusion The findings show that the proposed grid-based deep feature generator and INCA-based feature selection model successfully classify breast ultrasound images.
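A simplified sketch of the grid-based feature generation step is given below: the image is cut into row and column strips, each strip is passed through a pretrained backbone, and the resulting vectors are concatenated. The single ResNet18 backbone, the four-strip grid, and the resizing strategy are assumptions made for brevity; the paper uses sixteen pretrained models, keeps the best three, and then applies INCA feature selection and a DNN classifier.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# One pretrained backbone used as a fixed deep feature generator (ImageNet weights).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # drop the ImageNet classifier, keep 512-d features
backbone.eval()

def grid_deep_features(img, n_strips=4):
    """img: (3, H, W) tensor; returns concatenated row-strip and column-strip features."""
    _, h, w = img.shape
    feats = []
    with torch.no_grad():
        for i in range(n_strips):        # horizontal strips ("rows")
            strip = img[:, i * h // n_strips:(i + 1) * h // n_strips, :]
            feats.append(backbone(F.interpolate(strip[None], size=(224, 224),
                                                mode="bilinear", align_corners=False)))
        for j in range(n_strips):        # vertical strips ("columns")
            strip = img[:, :, j * w // n_strips:(j + 1) * w // n_strips]
            feats.append(backbone(F.interpolate(strip[None], size=(224, 224),
                                                mode="bilinear", align_corners=False)))
    return torch.cat(feats, dim=1).squeeze(0)   # one long deep feature vector per image

vec = grid_deep_features(torch.rand(3, 400, 600))
```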
Affiliation(s)
- Haixia Liu: Department of Ultrasound, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Guozhong Cui: Department of Surgical Oncology, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yi Luo: Medical Statistics Room, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Yajie Guo: Department of Ultrasound, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, People's Republic of China
- Lianli Zhao: Department of Internal Medicine Teaching and Research Group, Cangzhou Central Hospital, Cangzhou, Hebei Province, 061000, China
- Yueheng Wang: Department of Ultrasound, The Second Hospital of Hebei Medical University, Shijiazhuang, Hebei Province, 050000, People's Republic of China
- Abdulhamit Subasi: Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, 20520, Finland; Department of Computer Science, College of Engineering, Effat University, Jeddah, 21478, Saudi Arabia
- Sengul Dogan: Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, 23119, Turkey
- Turker Tuncer: Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, 23119, Turkey
|
47
|
Pi Y, Li Q, Qi X, Deng D, Yi Z. Automated assessment of BI-RADS categories for ultrasound images using multi-scale neural networks with an order-constrained loss function. APPL INTELL 2022. [DOI: 10.1007/s10489-021-03140-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
48
|
Deep learning in image-based breast and cervical cancer detection: a systematic review and meta-analysis. NPJ Digit Med 2022; 5:19. [PMID: 35169217 PMCID: PMC8847584 DOI: 10.1038/s41746-022-00559-z] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Accepted: 12/22/2021] [Indexed: 12/15/2022] Open
Abstract
Accurate early detection of breast and cervical cancer is vital for treatment success. Here, we conduct a meta-analysis to assess the diagnostic performance of deep learning (DL) algorithms for early breast and cervical cancer identification. Four subgroups are also investigated: cancer type (breast or cervical), validation type (internal or external), imaging modalities (mammography, ultrasound, cytology, or colposcopy), and DL algorithms versus clinicians. Thirty-five studies are deemed eligible for systematic review, 20 of which are meta-analyzed, with a pooled sensitivity of 88% (95% CI 85–90%), specificity of 84% (79–87%), and AUC of 0.92 (0.90–0.94). Acceptable diagnostic performance with analogous DL algorithms was highlighted across all subgroups. Therefore, DL algorithms could be useful for detecting breast and cervical cancer using medical imaging, having equivalent performance to human clinicians. However, this tentative assertion is based on studies with relatively poor designs and reporting, which likely caused bias and overestimated algorithm performance. Evidence-based, standardized guidelines around study methods and reporting are required to improve the quality of DL research.
|
49
|
RiIG Modeled WCP Image-Based CNN Architecture and Feature-Based Approach in Breast Tumor Classification from B-Mode Ultrasound. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app112412138] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
This study presents two new approaches based on Weighted Contourlet Parametric (WCP) images for the classification of breast tumors from B-mode ultrasound images. The Rician Inverse Gaussian (RiIG) distribution is used to model the statistics of ultrasound images in the Contourlet transform domain, and the WCP images are obtained by weighting the RiIG-modeled Contourlet sub-band coefficient images. In the feature-based approach, various geometrical, statistical, and texture features are shown to have low ANOVA p-values, indicating a good capacity for class discrimination. Using three publicly available datasets (Mendeley, UDIAT, and BUSI), it is shown that the classical feature-based approach can yield more than 97% accuracy across the datasets for breast tumor classification using WCP images, while the custom-made convolutional neural network (CNN) can deliver more than 98% accuracy, sensitivity, specificity, NPV, and PPV on the same WCP images. Both methods provide classification performance superior to that of several existing techniques on the same datasets.
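The feature-screening step mentioned above (retaining features with low ANOVA p-values) can be illustrated in a few lines; the synthetic feature matrix and the 0.05 threshold below are placeholders rather than values from the study.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))          # toy feature matrix (e.g. WCP-derived features)
y = rng.integers(0, 2, size=200)        # 0 = benign, 1 = malignant (synthetic labels)

# One-way ANOVA p-value per feature; keep the features that best separate the classes.
pvals = np.array([f_oneway(X[y == 0, j], X[y == 1, j]).pvalue for j in range(X.shape[1])])
selected = np.where(pvals < 0.05)[0]
print(selected)
```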
|
50
|
Saba T, Abunadi I, Sadad T, Khan AR, Bahaj SA. Optimizing the transfer-learning with pretrained deep convolutional neural networks for first stage breast tumor diagnosis using breast ultrasound visual images. Microsc Res Tech 2021; 85:1444-1453. [PMID: 34908213 DOI: 10.1002/jemt.24008] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2020] [Revised: 09/09/2021] [Accepted: 10/26/2021] [Indexed: 11/10/2022]
Abstract
Females account for approximately 50% of the total population worldwide, and many of them develop breast cancer. Computer-aided diagnosis frameworks could reduce the number of needless biopsies and the workload of radiologists. This research aims to detect benign and malignant tumors automatically using breast ultrasound (BUS) images. Accordingly, two pretrained deep convolutional neural network (CNN) models, AlexNet and DenseNet201, were employed for transfer learning on BUS images. A total of 697 BUS images containing benign and malignant tumors were preprocessed and classified using the transfer-learning-based CNN models. The benign-versus-malignant classification achieved 92.8% accuracy with the DenseNet201 model. The results were compared with the state of the art on a benchmark dataset, and the proposed model outperformed it in accuracy for first-stage breast tumor diagnosis. Finally, the proposed model could help radiologists diagnose benign and malignant tumors swiftly by screening suspected patients.
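A minimal transfer-learning setup in the spirit of this abstract, a DenseNet201 backbone with a new two-class benign/malignant head, might look like the torchvision sketch below; the frozen feature extractor and the head size are illustrative choices rather than the authors' exact configuration.

```python
import torch.nn as nn
from torchvision import models

# DenseNet201 pretrained on ImageNet, repurposed for benign vs. malignant BUS classification.
model = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
for p in model.features.parameters():
    p.requires_grad = False                                      # freeze the pretrained features
model.classifier = nn.Linear(model.classifier.in_features, 2)    # new two-class head
```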
Affiliation(s)
- Tanzila Saba: Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Ibrahim Abunadi: Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Tariq Sadad: Department of Computer Science and Software Engineering, International Islamic University, Islamabad, 44000, Pakistan
- Amjad Rehman Khan: Artificial Intelligence & Data Analytics Lab, CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Saeed Ali Bahaj: MIS Department, College of Business Administration, Prince Sattam bin Abdulaziz University, Alkharj, 11942, Saudi Arabia
|