1
Li G, Su Win NS, Fan M, Li J, Lin L. Enhance registration precision of transmission breast images utilizing improved Levenberg-Marquardt optimization algorithm with normalized cross-correlation. Comput Biol Med 2025;186:109654. [PMID: 39798506] [DOI: 10.1016/j.compbiomed.2025.109654]
Abstract
Transmission imaging may become a viable approach for breast cancer screening, offering non-invasive, cost-effective, and radiation-free early detection. Frame accumulation can successfully address the low SNR, low grayscale, and poor quality of transmission images. However, frame accumulation accuracy can be diminished by inherent human body instability during image acquisition and by the light absorption characteristics of breast tissue, resulting in distorted and misplaced image sequences. Therefore, an improved Levenberg-Marquardt optimization algorithm with normalized cross-correlation is used as an innovative approach to rectify image sequences before frame accumulation. Two separate sets of data, showing breast images with and without markers, were collected using a halogen bulb and a mobile phone camera to validate the suggested method. The approach includes coarse registration utilizing normalized cross-correlation for initial value estimation, followed by fine registration using the Levenberg-Marquardt algorithm. The results demonstrate a notable improvement in both registration accuracy and frame accumulation quality. Specifically, registration speed increased markedly, being 8.7 times faster, an improvement especially prominent in images that included markers; these images displayed normalized cross-correlation values reaching up to 0.99. The research emphasizes the future potential of the suggested method in overcoming the image quality challenges associated with breast transmission imaging, providing a significant milestone toward more accurate and efficient early breast cancer screening. Moreover, transmission imaging systems for the breast have been developed to verify the safety and effectiveness of the implemented technology.
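The coarse-to-fine registration strategy described above can be illustrated with a minimal sketch (not the authors' implementation): an exhaustive normalized cross-correlation search supplies an initial translation estimate, which a Levenberg-Marquardt solver then refines. The pure-translation motion model, function names, and search window are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import least_squares

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def coarse_translation(fixed, moving, search=10):
    """Brute-force NCC search over integer shifts (coarse registration)."""
    best_shift, best_score = np.zeros(2), -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            score = ncc(fixed, nd_shift(moving, (dy, dx), order=1))
            if score > best_score:
                best_shift, best_score = np.array([dy, dx], float), score
    return best_shift

def fine_translation(fixed, moving, x0):
    """Levenberg-Marquardt refinement of the translation (fine registration)."""
    def residuals(p):
        return (nd_shift(moving, (p[0], p[1]), order=1) - fixed).ravel()
    return least_squares(residuals, x0, method="lm").x
```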
Affiliation(s)
- Gang Li
- Medical School of Tianjin University, Tianjin, 300072, China
- Nan Su Su Win
- Medical School of Tianjin University, Tianjin, 300072, China
- Meiling Fan
- State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin, 300072, China
- Jiatong Li
- Medical School of Tianjin University, Tianjin, 300072, China
- Ling Lin
- State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin, 300072, China
2
Rehman ZU, Ahmad Fauzi MF, Wan Ahmad WSHM, Abas FS, Cheah PL, Chiew SF, Looi LM. Deep-Learning-Based Approach in Cancer-Region Assessment from HER2-SISH Breast Histopathology Whole Slide Images. Cancers (Basel) 2024;16:3794. [PMID: 39594748] [PMCID: PMC11593209] [DOI: 10.3390/cancers16223794]
Abstract
Fluorescence in situ hybridization (FISH) is widely regarded as the gold standard for evaluating human epidermal growth factor receptor 2 (HER2) status in breast cancer; however, it poses challenges such as the need for specialized training and issues related to signal degradation from dye quenching. Silver-enhanced in situ hybridization (SISH) serves as an automated alternative, employing permanent staining suitable for bright-field microscopy. Determining HER2 status involves distinguishing between "Amplified" and "Non-Amplified" regions by assessing HER2 and centromere 17 (CEN17) signals in SISH-stained slides. This study is the first to leverage deep learning for classifying Normal, Amplified, and Non-Amplified regions within HER2-SISH whole slide images (WSIs), which are notably more complex to analyze compared to hematoxylin and eosin (H&E)-stained slides. Our proposed approach consists of a two-stage process: first, we evaluate deep-learning models on annotated image regions, and then we apply the most effective model to WSIs for regional identification and localization. Subsequently, pseudo-color maps representing each class are overlaid, and the WSIs are reconstructed with these mapped regions. Using a private dataset of HER2-SISH breast cancer slides digitized at 40× magnification, we achieved a patch-level classification accuracy of 99.9% and a generalization accuracy of 78.8% by applying transfer learning with a Vision Transformer (ViT) model. The robustness of the model was further evaluated through k-fold cross-validation, yielding an average performance accuracy of 98%, with metrics reported alongside 95% confidence intervals to ensure statistical reliability. This method shows significant promise for clinical applications, particularly in assessing HER2 expression status in HER2-SISH histopathology images. It provides an automated solution that can aid pathologists in efficiently identifying HER2-amplified regions, thus enhancing diagnostic outcomes for breast cancer treatment.
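As a rough illustration of the transfer-learning step described above (a sketch only, not the authors' code; the torchvision ViT-B/16 backbone, the three-class head, the frozen-backbone recipe, and the hyperparameters are assumptions), a pretrained Vision Transformer can be adapted to a three-class patch classifier as follows.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Load an ImageNet-pretrained ViT and replace its classification head
# with a 3-way head (Normal / Amplified / Non-Amplified patches).
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 3)

# Freeze the backbone; fine-tune only the new head (one common TL recipe).
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("heads.")

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

def train_step(patches, labels):
    """One fine-tuning step on a batch of 224x224 RGB patches."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(patches), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```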
Affiliation(s)
- Zaka Ur Rehman
- Faculty of Engineering, Multimedia University, Cyberjaya 63100, Malaysia; (Z.U.R.); (W.S.H.M.W.A.)
- Wan Siti Halimatul Munirah Wan Ahmad
- Faculty of Engineering, Multimedia University, Cyberjaya 63100, Malaysia
- Institute for Research, Development and Innovation, IMU University, Bukit Jalil, Kuala Lumpur 57000, Malaysia
- Fazly Salleh Abas
- Faculty of Engineering and Technology, Multimedia University, Bukit Beruang, Melaka 75450, Malaysia
- Phaik-Leng Cheah
- Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
- Seow-Fan Chiew
- Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
- Lai-Meng Looi
- Department of Pathology, University Malaya-Medical Center, Kuala Lumpur 50603, Malaysia
3
Sunba A, AlShammari M, Almuhanna A, Alkhnbashi OS. An Integrated Multimodal-Based CAD System for Breast Cancer Diagnosis. Cancers (Basel) 2024;16:3740. [PMID: 39594696] [PMCID: PMC11591763] [DOI: 10.3390/cancers16223740]
Abstract
Breast cancer has recently been one of the main causes of death among women and has been the focus of many specialists and researchers in the health field. Because of its seriousness and speed of spread, methods for resisting breast cancer, early detection, diagnosis, and treatment have been major points of research discussion. Many computer-aided diagnosis (CAD) systems have been proposed to reduce the load on physicians and increase the accuracy of breast tumor diagnosis. To the best of our knowledge, combining patient information, including medical history, breast density, age, and other factors, with mammogram features from both breasts in craniocaudal (CC) and mediolateral oblique (MLO) views has not been previously investigated for breast tumor classification. In this paper, we investigated the effectiveness of using those inputs by comparing two combination approaches. The soft voting approach, produced from statistical-information-based models (decision tree, random forest, K-nearest neighbor, Gaussian naive Bayes, gradient boosting, and MLP) and an image-based model (CNN), achieved 90% accuracy, while concatenating statistical and image-based features in a deep learning model achieved 93% accuracy. These promising results would enhance CAD systems. The study also finds that using both sides of the mammograms outperformed using only the affected side, and that integrating mammogram features with statistical information enhanced the accuracy of tumor classification. Our findings are based on a novel dataset that incorporates both patient information and four-view mammogram images and covers multiple classes: normal, benign, and malignant.
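A minimal sketch of the soft-voting combination described above (illustrative only; the paper's full set of models, features, and any weighting are not reproduced): class probabilities from tabular classifiers and an image-based CNN are averaged before the final decision.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

def soft_vote(prob_list):
    """Average class-probability matrices from several models and pick the argmax."""
    return np.mean(prob_list, axis=0).argmax(axis=1)

# Tabular models trained on patient information (history, density, age, ...).
tabular_models = [
    RandomForestClassifier(n_estimators=200, random_state=0),
    GradientBoostingClassifier(random_state=0),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
]

def predict_ensemble(X_tab_train, y_train, X_tab_test, cnn_test_probs):
    """cnn_test_probs: (n_samples, n_classes) probabilities from an image CNN,
    assumed to be produced elsewhere on the four-view mammograms."""
    probs = []
    for m in tabular_models:
        m.fit(X_tab_train, y_train)
        probs.append(m.predict_proba(X_tab_test))
    probs.append(cnn_test_probs)
    return soft_vote(probs)
```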
Affiliation(s)
- Amal Sunba
- Information and Computer Science Department, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia; (A.S.); (M.A.)
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Maha AlShammari
- Information and Computer Science Department, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia
- Computational Unit, Department of Environmental Health, Institute for Research and Medical Consultations, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Afnan Almuhanna
- Department of Radiology, College of Medicine, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Omer S. Alkhnbashi
- Center for Applied and Translational Genomics (CATG), Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai Healthcare City, Dubai P.O. Box 50505, United Arab Emirates
- College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai Healthcare City, Dubai P.O. Box 50505, United Arab Emirates
4
Feng S, Wang Z, Jin Y, Xu S. TabDEG: Classifying differentially expressed genes from RNA-seq data based on feature extraction and deep learning framework. PLoS One 2024;19:e0305857. [PMID: 39037985] [PMCID: PMC11262683] [DOI: 10.1371/journal.pone.0305857]
Abstract
Traditional differentially expressed gene (DEG) identification models have limitations on small-sample-size datasets because they require distribution assumptions to be met; otherwise, sample variation results in high false positive/negative rates. In contrast, tabular data models based on deep learning (DL) frameworks do not need to consider the data distribution type or sample variation. However, applying DL to RNA-Seq data remains a challenge because of the lack of proper labeling and the small sample size relative to the number of genes. Data augmentation (DA) extracts data features using different methods and procedures, which can significantly increase complementary pseudo-values from limited data without significant additional cost. Based on this, we combine DA with a DL-framework-based tabular data model and propose TabDEG, a model that predicts DEGs and their up-/down-regulation directions from gene expression data obtained from The Cancer Genome Atlas database. Compared with five counterpart methods, TabDEG has high sensitivity and low misclassification rates. Experiments show that TabDEG is robust and effective in enhancing data features to facilitate the classification of high-dimensional small-sample-size datasets, and validate that TabDEG-predicted DEGs map to important Gene Ontology terms and pathways associated with cancer.
Affiliation(s)
- Sifan Feng
- School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou, Guangdong, China
- Zhenyou Wang
- School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou, Guangdong, China
- Yinghua Jin
- School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou, Guangdong, China
- Shengbin Xu
- School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou, Guangdong, China
5
Wang W, Li Y, Lu K, Zhang J, Chen P, Yan K, Wang B. Medical Tumor Image Classification Based on Few-Shot Learning. IEEE/ACM Trans Comput Biol Bioinform 2024;21:715-724. [PMID: 37294647] [DOI: 10.1109/tcbb.2023.3282226]
Abstract
As a high-mortality disease, cancer seriously affects people's lives and well-being. Relying on pathologists to assess disease progression from pathological images is inaccurate and burdensome. Computer-aided diagnosis (CAD) systems can effectively assist diagnosis and support more credible decisions. However, the large numbers of labeled medical images that help improve the accuracy of machine learning algorithms, especially deep learning in CAD, are difficult to collect. Therefore, in this work, an improved few-shot learning method is proposed for medical image recognition. In addition, to make full use of the limited feature information in one or more samples, a feature fusion strategy is incorporated into our model. On the BreakHis and skin lesion datasets, experimental results show that our model achieved classification accuracies of 91.22% and 71.20%, respectively, when only 10 labeled samples are given, which is superior to other state-of-the-art methods.
6
Pang T, Wong JHD, Ng WL, Chan CS, Wang C, Zhou X, Yu Y. Radioport: a radiomics-reporting network for interpretable deep learning in BI-RADS classification of mammographic calcification. Phys Med Biol 2024;69:065006. [PMID: 38373345] [DOI: 10.1088/1361-6560/ad2a95]
Abstract
Objective. Generally, due to a lack of explainability, radiomics based on deep learning has been perceived as a black-box solution for radiologists. Automatic generation of diagnostic reports is a semantic approach to enhance the explanation of deep learning radiomics (DLR). Approach. In this paper, we propose a novel model called radiomics-reporting network (Radioport), which incorporates text attention. This model aims to improve the interpretability of DLR in mammographic calcification diagnosis. Firstly, it employs convolutional neural networks to extract visual features as radiomics for multi-category classification based on the Breast Imaging Reporting and Data System. Then, it builds a mapping between these visual features and textual features to generate diagnostic reports, incorporating an attention module for improved clarity. Main results. To demonstrate the effectiveness of our proposed model, we conducted experiments on a breast calcification dataset comprising mammograms and diagnostic reports. The results demonstrate that our model can (i) semantically enhance the interpretability of DLR and (ii) improve the readability of generated medical reports. Significance. Our interpretable textual model can explicitly simulate the mammographic calcification diagnosis process.
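To make the visual-to-text attention idea concrete, here is a minimal scaled-dot-product cross-attention sketch (an illustration only; the actual Radioport architecture, dimensions, and report decoder are not specified here): each report token attends over CNN patch features when the next word is generated, and the attention map indicates where each word "looks".

```python
import torch
import torch.nn.functional as F

def cross_attention(text_queries, visual_features, d_k=64):
    """text_queries: (T, d_k) decoder states; visual_features: (N, d_k) CNN patch embeddings.
    Returns the attended visual context for each text position plus the attention map."""
    scores = text_queries @ visual_features.T / d_k ** 0.5   # (T, N)
    attn = F.softmax(scores, dim=-1)                          # attention over image patches
    context = attn @ visual_features                          # (T, d_k)
    return context, attn

# Toy usage: 5 report tokens attending over 49 visual patches.
q = torch.randn(5, 64)
v = torch.randn(49, 64)
ctx, attn_map = cross_attention(q, v)
print(ctx.shape, attn_map.shape)  # torch.Size([5, 64]) torch.Size([5, 49])
```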
Affiliation(s)
- Ting Pang
- College of Medical Engineering, Xinxiang Medical University, Xinxiang, 453000, People's Republic of China
- Center of Image and Signal Processing, Faculty of Computer Science and Information Technology, Universiti Malaya, Kuala Lumpur, 50603, Malaysia
- Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, 453000, People's Republic of China
- Jeannie Hsiu Ding Wong
- Department of Biomedical Imaging, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, 50603, Malaysia
- Wei Lin Ng
- Department of Biomedical Imaging, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, 50603, Malaysia
- Chee Seng Chan
- Center of Image and Signal Processing, Faculty of Computer Science and Information Technology, Universiti Malaya, Kuala Lumpur, 50603, Malaysia
- Chang Wang
- College of Medical Engineering, Xinxiang Medical University, Xinxiang, 453000, People's Republic of China
- Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, 453000, People's Republic of China
- Xuezhi Zhou
- College of Medical Engineering, Xinxiang Medical University, Xinxiang, 453000, People's Republic of China
- Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, 453000, People's Republic of China
- Yi Yu
- College of Medical Engineering, Xinxiang Medical University, Xinxiang, 453000, People's Republic of China
- Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, 453000, People's Republic of China
7
Das S, Dey MK, Devireddy R, Gartia MR. Biomarkers in Cancer Detection, Diagnosis, and Prognosis. Sensors (Basel) 2023;24:37. [PMID: 38202898] [PMCID: PMC10780704] [DOI: 10.3390/s24010037]
Abstract
Biomarkers are vital in healthcare as they provide valuable insights into disease diagnosis, prognosis, treatment response, and personalized medicine. They serve as objective indicators, enabling early detection and intervention, leading to improved patient outcomes and reduced costs. Biomarkers also guide treatment decisions by predicting disease outcomes and facilitating individualized treatment plans. They play a role in monitoring disease progression, adjusting treatments, and detecting early signs of recurrence. Furthermore, biomarkers enhance drug development and clinical trials by identifying suitable patients and accelerating the approval process. In this review paper, we described a variety of biomarkers applicable for cancer detection and diagnosis, such as imaging-based diagnosis (CT, SPECT, MRI, and PET), blood-based biomarkers (proteins, genes, mRNA, and peptides), cell imaging-based diagnosis (needle biopsy and CTC), tissue imaging-based diagnosis (IHC), and genetic-based biomarkers (RNAseq, scRNAseq, and spatial transcriptomics).
Affiliation(s)
- Manas Ranjan Gartia
- Department of Mechanical and Industrial Engineering, Louisiana State University, Baton Rouge, LA 70803, USA
8
Saikia S, Si T, Deb D, Bora K, Mallik S, Maulik U, Zhao Z. Lesion detection in women breast's dynamic contrast-enhanced magnetic resonance imaging using deep learning. Sci Rep 2023;13:22555. [PMID: 38110462] [PMCID: PMC10728155] [DOI: 10.1038/s41598-023-48553-z]
Abstract
Breast cancer is one of the most common cancers in women and the second leading cause of cancer death in women after lung cancer. Recent technological advances in breast cancer treatment offer hope to millions of women in the world. Segmentation of the breast's Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is one of the necessary tasks in the diagnosis and detection of breast cancer. Currently, a popular deep learning model, U-Net, is extensively used in biomedical image segmentation. This article aims to advance the state of the art and conduct a more in-depth analysis with a focus on the use of various U-Net models for lesion detection in women's breast DCE-MRI. In this article, we perform an empirical study of the effectiveness and efficiency of U-Net and its derived deep learning models, including ResUNet, Dense UNet, DUNet, Attention U-Net, UNet++, MultiResUNet, RAUNet, Inception U-Net, and U-Net GAN, for lesion detection in breast DCE-MRI. All the models are applied to the benchmarked 100 sagittal T2-weighted fat-suppressed DCE-MRI slices of 20 patients, and their performance is compared. A comparative study has also been conducted with V-Net, W-Net, and DeepLabV3+. The non-parametric Wilcoxon signed-rank test is used to analyze the significance of the quantitative results. Furthermore, Multi-Criteria Decision Analysis (MCDA) is used to evaluate overall performance focused on accuracy, precision, sensitivity, F1-score, specificity, geometric mean, DSC, and false-positive rate. The RAUNet segmentation model achieved a high accuracy of 99.76%, sensitivity of 85.04%, precision of 90.21%, and Dice Similarity Coefficient (DSC) of 85.04%, whereas ResUNet achieved 99.62% accuracy, 62.26% sensitivity, 99.56% precision, and 72.86% DSC. ResUNet is found to be the most effective model based on MCDA. On the other hand, U-Net GAN takes the least computational time to perform the segmentation task. Both quantitative and qualitative results demonstrate that the ResUNet model performs better than the other models in segmenting the images and detecting lesions, though the computational time needed to achieve these objectives varies.
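As a quick reference for the headline segmentation metric above, here is a minimal Dice similarity coefficient (DSC) computation for binary lesion masks (a generic sketch, not tied to any particular model in the comparison).

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Toy example: two 4x4 masks overlapping in 2 of 3 predicted pixels.
pred = np.zeros((4, 4), dtype=int); pred[0, :3] = 1
true = np.zeros((4, 4), dtype=int); true[0, 1:4] = 1
print(round(dice_coefficient(pred, true), 3))  # ~0.667
```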
Affiliation(s)
- Sudarshan Saikia
- Information Technology Department, Oil India Limited, Duliajan, Assam, 786602, India
- Tapas Si
- AI Innovation Lab, Department of Computer Science & Engineering, University of Engineering & Management, Jaipur, GURUKUL, Jaipur, Rajasthan, 303807, India
- Darpan Deb
- Department of Computer Application, Christ University, Bengaluru, 560029, India
- Kangkana Bora
- Department of Computer Science and Information Technology, Cotton University, Guwahati, Assam, 781001, India
- Saurav Mallik
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, 02115, USA
- Ujjwal Maulik
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA
9
Eftekharian M, Nodehi A, Enayatifar R. ML-DSTnet: A Novel Hybrid Model for Breast Cancer Diagnosis Improvement Based on Image Processing Using Machine Learning and Dempster-Shafer Theory. Comput Intell Neurosci 2023;2023:7510419. [PMID: 37954096] [PMCID: PMC10635746] [DOI: 10.1155/2023/7510419]
Abstract
Medical intelligence detection systems have changed with the help of artificial intelligence, but they have also faced challenges. Breast cancer diagnosis and classification are part of this medical intelligence system. Early detection can lead to an increase in treatment options. On the other hand, uncertainty has always accompanied the decision-maker: the system's parameters cannot be accurately estimated, and wrong decisions may be made. To solve this problem, we propose a method in this article that reduces the ignorance of the problem with the help of Dempster-Shafer theory so that we can make a better decision. This research, conducted on the MIAS dataset and based on image processing, machine learning, and Dempster-Shafer mathematical theory, tries to improve the diagnosis and classification of benign and malignant masses. We first determine mass-type diagnoses with an MLP using texture features and with a CNN. We then combine the results of the two classifications using Dempster-Shafer theory, improving accuracy. The obtained results show that the proposed approach performs better than others according to evaluation criteria such as an accuracy of 99.10%, a sensitivity of 98.4%, and a specificity of 100%.
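A minimal sketch of Dempster's rule of combination for two classifiers over the frame {benign, malignant} (purely illustrative; the mass assignments and any discounting used by the paper are not reproduced here):

```python
import numpy as np

def dempster_combine(m1, m2):
    """Combine two mass functions over focal sets, given as dicts
    {frozenset: mass}. Returns the normalized combined mass function."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

B, M = frozenset({"benign"}), frozenset({"malignant"})
theta = B | M  # ignorance: mass assigned to the whole frame

# Hypothetical evidence from an MLP (texture features) and a CNN.
m_mlp = {B: 0.6, M: 0.3, theta: 0.1}
m_cnn = {B: 0.7, M: 0.2, theta: 0.1}
print(dempster_combine(m_mlp, m_cnn))  # combined belief concentrates on "benign"
```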
Affiliation(s)
- Mohsen Eftekharian
- Department of Computer Engineering, Gorgan Branch, Islamic Azad University, Gorgan, Iran
- Ali Nodehi
- Department of Computer Engineering, Gorgan Branch, Islamic Azad University, Gorgan, Iran
- Rasul Enayatifar
- Department of Computer Engineering, Firoozkooh Branch, Islamic Azad University, Firoozkooh, Iran
10
Cheng K, Wang J, Liu J, Zhang X, Shen Y, Su H. Public health implications of computer-aided diagnosis and treatment technologies in breast cancer care. AIMS Public Health 2023;10:867-895. [PMID: 38187901] [PMCID: PMC10764974] [DOI: 10.3934/publichealth.2023057]
Abstract
Breast cancer remains a significant public health issue, being a leading cause of cancer-related mortality among women globally. Timely diagnosis and efficient treatment are crucial for enhancing patient outcomes, reducing healthcare burdens and advancing community health. This systematic review, following the PRISMA guidelines, aims to comprehensively synthesize the recent advancements in computer-aided diagnosis and treatment for breast cancer. The study covers the latest developments in image analysis and processing, machine learning and deep learning algorithms, multimodal fusion techniques and radiation therapy planning and simulation. The results of the review suggest that machine learning, augmented and virtual reality and data mining are the three major research hotspots in breast cancer management. Moreover, this paper discusses the challenges and opportunities for future research in this field. The conclusion highlights the importance of computer-aided techniques in the management of breast cancer and summarizes the key findings of the review.
Affiliation(s)
- Kai Cheng
- Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Jiangtao Wang
- Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Jian Liu
- Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Xiangsheng Zhang
- Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Yuanyuan Shen
- Yantai Affiliated Hospital of Binzhou Medical University, Yantai, 264100, China
- Hang Su
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
11
Khanduri I, Maru DM, Parra ER. Exploratory study of macrophage polarization and spatial distribution in colorectal cancer liver metastasis: a pilot study. Front Immunol 2023;14:1223864. [PMID: 37637998] [PMCID: PMC10449458] [DOI: 10.3389/fimmu.2023.1223864]
Abstract
Background: The liver is the most typical site of metastatic disease for patients with colorectal cancer (CRC), and up to half of patients with CRC will develop colorectal liver metastasis (CLM). Studying the tumor microenvironment, particularly macrophages and their spatial distribution, can give us critical insight into treatment. Methods: Ten CLMs (five treatment-naïve and five post-neoadjuvant chemotherapy) were stained with multiplex immunofluorescence panels against cytokeratins, CD68, Arg1, CD206, CD86, CD163, PD-L1, and MRP8-14. Densities of cell phenotypes and their spatial distribution in the tumor center and the normal liver-tumor interface were correlated with clinicopathological variables. Results: M2 macrophages were the predominant subtype in both the tumor center and the periphery, with a relatively higher density at the periphery. Larger tumors, more than 3.9 cm, were associated with higher densities of total CD68+ macrophages and of the CD68+CD163+CD206neg and CD68+CD206+CD163neg M2 macrophage subtypes. Total macrophages in the tumor periphery demonstrated significantly greater proximity to malignant cells than did those in the tumor center (p=0.0371). In univariate analysis, the presence of higher-than-median CD68+MRP8-14+CD86neg M1 macrophages in the tumor center was associated with poorer overall survival (median 2.34 years) than in cases with lower-than-median M1 macrophages at the tumor center (median 6.41 years). Conclusion: The dominant polarization of the M2 macrophage subtype could drive new therapeutic approaches in CLM patients.
Affiliation(s)
- Isha Khanduri
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Dipen M. Maru
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Edwin R. Parra
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
12
Si T, Patra DK, Mallik S, Bandyopadhyay A, Sarkar A, Qin H. Identification of breast lesion through integrated study of gorilla troops optimization and rotation-based learning from MRI images. Sci Rep 2023;13:11577. [PMID: 37463919] [PMCID: PMC10354050] [DOI: 10.1038/s41598-023-36300-3]
Abstract
Breast cancer has emerged as the most life-threatening disease among women around the world. Early detection and treatment of breast cancer are thought to reduce the need for surgery and boost the survival rate. Magnetic Resonance Imaging (MRI) segmentation techniques for breast cancer diagnosis are investigated in this article. Kapur's entropy-based multilevel thresholding is used in this study to determine optimal values for breast DCE-MRI lesion segmentation using Gorilla Troops Optimization (GTO). An improved GTO, called GTORBL, is developed by incorporating Rotational opposition-based learning (RBL) into GTO and is applied to the same problem. The proposed approaches are tested on 100 T2-weighted sagittal (T2 WS) DCE-MRI slices from 20 patients. They are compared with the Tunicate Swarm Algorithm (TSA), Particle Swarm Optimization (PSO), the Arithmetic Optimization Algorithm (AOA), the Slime Mould Algorithm (SMA), Multi-Verse Optimization (MVO), Hidden Markov Random Field (HMRF), Improved Markov Random Field (IMRF), and Conventional Markov Random Field (CMRF) approaches. The Dice Similarity Coefficient (DSC), sensitivity, and accuracy achieved by the proposed GTO-based approach are [Formula: see text], [Formula: see text], and [Formula: see text], respectively. The proposed GTORBL-based segmentation method achieves an accuracy of [Formula: see text], a sensitivity of [Formula: see text], and a DSC of [Formula: see text]. A one-way ANOVA test followed by Tukey's HSD and the Wilcoxon signed-rank test are used to examine the results. Furthermore, Multi-Criteria Decision Making is used to evaluate overall performance focused on sensitivity, accuracy, false-positive rate, precision, specificity, F1-score, geometric mean, and DSC. According to both quantitative and qualitative findings, the proposed strategies outperform the other compared methodologies.
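For context, the Kapur's entropy objective named above can be sketched as below (a generic objective-function illustration; the GTO/GTORBL optimizer that searches for the maximizing thresholds, and any preprocessing, are not shown). A metaheuristic would evaluate this function over candidate threshold vectors and keep the best one.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Sum of Shannon entropies of the gray-level classes defined by `thresholds`.
    hist: normalized 256-bin histogram; thresholds: sorted ints in (0, 255)."""
    edges = [0] + sorted(thresholds) + [256]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        p = hist[lo:hi]
        w = p.sum()
        if w <= 0:
            continue
        q = p[p > 0] / w
        total += -(q * np.log(q)).sum()
    return total

# Toy usage on a random "image" histogram with two thresholds.
img = np.random.randint(0, 256, size=(128, 128))
hist = np.bincount(img.ravel(), minlength=256).astype(float)
hist /= hist.sum()
print(kapur_entropy(hist, [85, 170]))
```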
Affiliation(s)
- Tapas Si
- Department of Computer Science & Engineering, University of Engineering & Management, Jaipur, GURUKUL, Sikar Road (NH-11), Udaipuria Mod, Jaipur, Rajasthan, 303807, India
- Dipak Kumar Patra
- Department of Computer Science, Hijli College, Kharagpur, West Bengal, 721306, India
- Saurav Mallik
- Department of Environmental Health, Harvard T H Chan School of Public Health, Boston, MA, USA
- Anjan Bandyopadhyay
- School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT), Bhubaneswar, Odisha, India
- Achyuth Sarkar
- Department of Computer Science & Engineering, National Institute of Technology Arunachal Pradesh, Arunachal Pradesh, 791113, India
- Hong Qin
- Department of Computer Science and Engineering, University of Tennessee at Chattanooga, Chattanooga, TN, USA
13
Rafiq A, Chursin A, Awad Alrefaei W, Rashed Alsenani T, Aldehim G, Abdel Samee N, Menzli LJ. Detection and Classification of Histopathological Breast Images Using a Fusion of CNN Frameworks. Diagnostics (Basel) 2023;13:1700. [PMID: 37238186] [DOI: 10.3390/diagnostics13101700]
Abstract
Breast cancer is responsible for the deaths of thousands of women each year. The diagnosis of breast cancer (BC) frequently makes use of several imaging techniques. On the other hand, incorrect identification might occasionally result in unnecessary therapy and diagnostic procedures. Therefore, accurate identification of breast cancer can save a significant number of patients from undergoing unnecessary surgery and biopsy procedures. As a result of recent developments in the field, the performance of deep learning systems used for medical image processing has shown significant benefits. Deep learning (DL) models have found widespread use for extracting important features from histopathologic BC images; this has helped to improve classification performance and has assisted in automating the process. In recent times, both convolutional neural networks (CNNs) and hybrid deep-learning-based models have demonstrated impressive performance. In this research, three different types of CNN models are proposed: a straightforward CNN model (1-CNN), a fusion CNN model (2-CNN), and a three-CNN model (3-CNN). The findings of the experiment demonstrate that the techniques based on the 3-CNN algorithm performed the best in terms of accuracy (90.10%), recall (89.90%), precision (89.80%), and F1-score (89.90%). In conclusion, the developed CNN-based approaches are contrasted with more modern machine learning and deep learning models. The application of CNN-based methods has resulted in a significant increase in the accuracy of BC classification.
Affiliation(s)
- Ahsan Rafiq
- School of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Alexander Chursin
- Higher School of Industrial Policy and Entrepreneurship, RUDN University, 6 Miklukho-Maklaya St, Moscow 117198, Russia
- Wejdan Awad Alrefaei
- Department of Programming and Computer Sciences, Applied College in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 16245, Saudi Arabia
- Tahani Rashed Alsenani
- Department of Biology, College of Sciences in Yanbu, Taibah University, Yanbu 46522, Saudi Arabia
- Ghadah Aldehim
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Leila Jamel Menzli
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
14
Zhu Z, Wang SH, Zhang YD. A Survey of Convolutional Neural Network in Breast Cancer. Comput Model Eng Sci 2023;136:2127-2172. [PMID: 37152661] [PMCID: PMC7614504] [DOI: 10.32604/cmes.2023.025484]
Abstract
Problems: For people all over the world, cancer is one of the most feared diseases. Cancer is one of the major obstacles to improving life expectancy in countries around the world and one of the biggest causes of death before the age of 70 in 112 countries. Among all kinds of cancers, breast cancer is the most common cancer for women; the data show that female breast cancer has become one of the most common cancers. Aims: A large number of clinical trials have proved that if breast cancer is diagnosed at an early stage, patients have more treatment options and better treatment effects and survival. Based on this situation, there are many diagnostic methods for breast cancer, such as computer-aided diagnosis (CAD). Methods: We complete a comprehensive review of the diagnosis of breast cancer based on the convolutional neural network (CNN) after reviewing a large number of recent papers. Firstly, we introduce several different imaging modalities. The structure of CNN is given in the second part. After that, we introduce some public breast cancer data sets. Then, we divide the diagnosis of breast cancer into three different tasks: (1) classification, (2) detection, and (3) segmentation. Conclusion: Although diagnosis with CNN has achieved great success, there are still some limitations. (i) There are too few good data sets; a good public breast cancer dataset needs to address many aspects, such as professional medical knowledge, privacy issues, financial issues, and dataset size. (ii) When the data set is very large, CNN-based models need a great deal of computation and time to complete the diagnosis. (iii) It is easy to cause overfitting when using small data sets.
Affiliation(s)
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
15
Hassan AM, Yahya A, Aboshosha A. A framework for classifying breast cancer based on deep features integration and selection. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08341-2]
Abstract
Deep convolutional neural networks (DCNNs) are one of the most advanced techniques for classifying images in a range of applications. One of the most prevalent cancers that cause death in women is breast cancer. For survival rates to increase, early detection and treatment of breast cancer are essential. Deep learning (DL) can help radiologists diagnose and classify breast cancer lesions. This paper proposes a computer-aided system based on DL techniques for automatically classifying breast cancer tumors in histopathological images. Nine DCNN architectures are used in this work. Four schemes are performed in the proposed framework to find the best approach. The first scheme consists of pre-trained DCNNs based on the transfer learning concept. The second scheme performs feature extraction with the DCNN architectures and uses a support vector machine (SVM) classifier for evaluation. The third performs feature integration to show how the integrated deep features may enhance the SVM classifiers' accuracy. Finally, in the fourth scheme, the Chi-square (χ2) feature selection method is applied to reduce the large feature size produced in the feature integration step. The results of the proposed system present a promising performance for breast cancer classification, with an accuracy of 99.24%. The system's performance shows that the proposed tool is suitable to assist radiologists in diagnosing breast cancer tumors.
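A minimal sketch of the χ² feature-selection step described above (illustrative only; the deep-feature extractors, the value of k, and the SVM settings are assumptions): deep features from several backbones are concatenated, reduced with SelectKBest(chi2), and fed to an SVM.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def build_classifier(k=500):
    """Chi-square selection requires non-negative inputs, hence the MinMax scaling."""
    return make_pipeline(
        MinMaxScaler(),
        SelectKBest(chi2, k=k),
        SVC(kernel="rbf", C=1.0),
    )

# Hypothetical integrated deep features from two backbones for 200 images.
feats_a = np.random.rand(200, 1024)    # e.g., pooled features from backbone A
feats_b = np.random.rand(200, 2048)    # e.g., pooled features from backbone B
X = np.hstack([feats_a, feats_b])      # feature integration
y = np.random.randint(0, 2, size=200)  # benign / malignant labels (toy)

clf = build_classifier(k=500)
clf.fit(X[:150], y[:150])
print("toy accuracy:", clf.score(X[150:], y[150:]))
```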
16
Kuo CFJ, Chen HY, Barman J, Ko KH, Hsu HH. Complete, Fully Automatic Detection and Classification of Benign and Malignant Breast Tumors Based on CT Images Using Artificial Intelligent and Image Processing. J Clin Med 2023;12:1582. [PMID: 36836118] [PMCID: PMC9960342] [DOI: 10.3390/jcm12041582]
Abstract
Breast cancer is the most common type of cancer in women, and early detection is important to significantly reduce its mortality rate. This study introduces a detection and diagnosis system that automatically detects and classifies breast tumors in CT scan images. First, the contours of the chest wall are extracted from chest computed tomography images, and two-dimensional image characteristics and three-dimensional image features, together with the active contours without edges and geodesic active contours methods, are used to detect, locate, and circle the tumor. Then, the computer-assisted diagnostic system extracts features, quantifying and classifying benign and malignant breast tumors using a greedy algorithm and a support vector machine. The study used 174 breast tumors for experimentation and training and performed 10-fold cross-validation to evaluate the performance of the system. The accuracy, sensitivity, specificity, and positive and negative predictive values of the system were 99.43%, 98.82%, 100%, 100%, and 98.89%, respectively. This system supports the rapid extraction and classification of breast tumors as either benign or malignant, helping physicians to improve clinical diagnosis.
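The evaluation step above can be illustrated with a short sketch of 10-fold cross-validation of an SVM on extracted tumor features (generic scikit-learn usage; the actual morphological/texture features and the greedy feature-selection step are not reproduced).

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical feature matrix: 174 tumors x 20 quantified features,
# with binary benign/malignant labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(174, 20))
y = rng.integers(0, 2, size=174)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
print(f"mean 10-fold accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```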
Affiliation(s)
- Chung-Feng Jeffrey Kuo
- Department of Materials Science and Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
- Hsuan-Yu Chen
- Department of Materials Science and Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
- Jagadish Barman
- Department of Materials Science and Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
- Kai-Hsiung Ko
- Department of Radiology, Tri-Service General Hospital, National Defense Medical Center, Taipei 114, Taiwan
- Hsian-He Hsu
- Department of Radiology, Tri-Service General Hospital, National Defense Medical Center, Taipei 114, Taiwan
17
Karger E, Kureljusic M. Artificial Intelligence for Cancer Detection-A Bibliometric Analysis and Avenues for Future Research. Curr Oncol 2023;30:1626-1647. [PMID: 36826086] [PMCID: PMC9954989] [DOI: 10.3390/curroncol30020125]
Abstract
After cardiovascular diseases, cancer is responsible for the most deaths worldwide. Detecting cancer early significantly improves the chances of healing. One group of technologies increasingly applied to cancer detection is artificial intelligence. Artificial intelligence has great potential to support clinicians and medical practitioners, as it allows for the early detection of carcinomas. During recent years, research on artificial intelligence for cancer detection has grown considerably. In this article, we conducted a bibliometric study of the existing research dealing with the application of artificial intelligence in cancer detection. We analyzed 6450 articles on the topic that were published between 1986 and 2022. By doing so, we were able to give an overview of this research field, including its key topics, relevant outlets, institutions, and articles. Based on our findings, we developed a future research agenda that can help to advance research on artificial intelligence for cancer detection. In summary, our study is intended to serve as a platform and foundation for researchers interested in the potential of artificial intelligence for detecting cancer.
Affiliation(s)
- Erik Karger
- Information Systems and Strategic IT Management, University of Duisburg-Essen, 45141 Essen, Germany
- Marko Kureljusic
- International Accounting, University of Duisburg-Essen, 45141 Essen, Germany
18
El-Helkan B, Emam M, Mohanad M, Fathy S, Zekri AR, Ahmed OS. Long non-coding RNAs as novel prognostic biomarkers for breast cancer in Egyptian women. Sci Rep 2022;12:19498. [PMID: 36376369] [PMCID: PMC9663553] [DOI: 10.1038/s41598-022-23938-8]
Abstract
Breast cancer (BC), the most common type of malignant tumor, is a leading cause of death and has the highest incidence rate among women. The lack of early diagnostic tools is one of the clinical obstacles to BC treatment. The current study was designed to evaluate a panel of long non-coding RNAs (lncRNAs)-BC040587, HOTAIR, MALAT1, CCAT1, CCAT2, PVT1, UCA1, SPRY4-IT1, PANDAR, and AK058003-and two mRNAs (SNCG, BDNF) as novel prognostic biomarkers for BC. This study was ethically approved by the Institutional Review Board of the National Cancer Institute, Cairo University. Our study included 75 women recently diagnosed with BC and 25 healthy women as normal controls. Patients were divided into three groups: 24 with benign breast diseases, 28 with metastatic breast cancer (MBC, stage IV), and 23 with non-metastatic breast cancer (NMBC, stage III). LncRNA and mRNA expression levels were measured in patient plasma using quantitative real-time PCR. We found that 10 lncRNAs (BC040587, HOTAIR, PVT1, CCAT2, PANDAR, CCAT1, UCA1, SPRY4-IT1, AK058003, and MALAT1) and both mRNAs demonstrated at least a 2-fold change in expression with a more than 95% probability of significance. BC040587 and SNCG were significantly up-regulated in MBC and NMBC patients (3.2- and 4-fold, respectively) compared with normal controls. The expression of UCA1 was repressed by 1.78-fold in MBC and NMBC patients compared with those with benign diseases. SPRY4-IT1 was down-regulated by 1.45-fold in MBC patients compared with NMBC and benign disease patients. Up-regulation of lncRNAs plays an important role in BC development, and SNCG and BC040587 may be potential prognostic markers for BC. The organization number is IORG0003381 (IRB No: IRB00004025).
Affiliation(s)
- Basma El-Helkan
- Department of Biochemistry, Faculty of Science, Ain Shams University, Cairo, Egypt
- Manal Emam
- Department of Biochemistry, Faculty of Science, Ain Shams University, Cairo, Egypt
- Marwa Mohanad
- College of Pharmaceutical Sciences and Drug Manufacturing, Misr University for Science and Technology, 6th of October, Giza, Egypt
- Shadia Fathy
- Department of Biochemistry, Faculty of Science, Ain Shams University, Cairo, Egypt
- Abdel Rahman Zekri
- Virology and Immunology Unit, Cancer Biology Department, National Cancer Institute, Cairo University, Cairo, Egypt
- Ola S. Ahmed
- Virology and Immunology Unit, Cancer Biology Department, National Cancer Institute, Cairo University, Cairo, Egypt
19
Shawi RE, Kilanava K, Sakr S. An interpretable semi-supervised framework for patch-based classification of breast cancer. Sci Rep 2022;12:16734. [PMID: 36202832] [PMCID: PMC9537500] [DOI: 10.1038/s41598-022-20268-7]
Abstract
Developing effective invasive ductal carcinoma (IDC) detection methods remains a challenging problem for breast cancer diagnosis. Recently, there has been notable success in utilizing deep neural networks in various application domains; however, it is well known that deep neural networks require a large amount of labelled training data to achieve high accuracy. Producing such amounts of manually labelled data is time-consuming and expensive, especially when domain expertise is required. To this end, we present a novel semi-supervised learning framework for IDC detection that uses small amounts of labelled training examples and takes advantage of cheap, readily available unlabeled data. To gain trust in the framework's predictions, we explain the predictions globally. Our proposed framework consists of five main stages: data augmentation, feature selection, dividing co-training data labelling, deep neural network modelling, and interpretation of the neural network predictions. The data cohort used in this study contains digitized BCa histopathology slides from 162 women with IDC at the Hospital of the University of Pennsylvania and the Cancer Institute of New Jersey. To evaluate the effectiveness of the deep neural network model used by the proposed approach, we compare it to different state-of-the-art network architectures: AlexNet and a shallow VGG network trained only on the labelled data. The results show that the deep neural network used in our proposed approach outperforms these techniques, achieving a balanced accuracy of 0.73 and an F-measure of 0.843. In addition, we compare the performance of the proposed semi-supervised approach to the state-of-the-art semi-supervised DCGAN and self-learning techniques. The experimental evaluation shows that our framework outperforms both semi-supervised techniques and detects IDC with an accuracy of 85.75%, a balanced accuracy of 0.865, and an F-measure of 0.773 using only 10% labelled instances from the training dataset, while the rest of the training dataset is treated as unlabeled.
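A minimal self-training-style sketch of how unlabeled patches can be folded into training (a simplification for illustration; the paper's actual co-training labelling scheme, feature selection, and network are not reproduced): a model trained on the small labelled set pseudo-labels high-confidence unlabeled patches, which are then added to the training pool.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=3):
    """Iteratively pseudo-label confident unlabeled samples and retrain."""
    model = LogisticRegression(max_iter=1000)
    X_pool, y_pool = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        model.fit(X_pool, y_pool)
        if len(X_unlab) == 0:
            break
        probs = model.predict_proba(X_unlab)
        confident = probs.max(axis=1) >= threshold
        if not confident.any():
            break
        # Grow the labelled pool with pseudo-labelled patches.
        X_pool = np.vstack([X_pool, X_unlab[confident]])
        y_pool = np.concatenate([y_pool, probs[confident].argmax(axis=1)])
        X_unlab = X_unlab[~confident]
    return model
```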
Affiliation(s)
- Radwa El Shawi
- Institute of Computer Science, Tartu University, Tartu, Estonia.
- Khatia Kilanava
- Institute of Computer Science, Tartu University, Tartu, Estonia
- Sherif Sakr
- Institute of Computer Science, Tartu University, Tartu, Estonia
20
Syed AH, Khan T. Evolution of research trends in artificial intelligence for breast cancer diagnosis and prognosis over the past two decades: A bibliometric analysis. Front Oncol 2022;12:854927. [PMID: 36267967] [PMCID: PMC9578338] [DOI: 10.3389/fonc.2022.854927]
Abstract
Objective: In recent years, among the available tools, the concurrent application of Artificial Intelligence (AI) has improved the diagnostic performance of breast cancer screening. In this context, the present study intends to provide a comprehensive overview of the evolution of AI for breast cancer diagnosis and prognosis research using bibliometric analysis. Methodology: Relevant peer-reviewed research articles published from 2000 to 2021 were downloaded from the Scopus and Web of Science (WOS) databases and then quantitatively analyzed and visualized using Bibliometrix (R package). Finally, open challenge areas were identified for future research work. Results: The present study revealed that the number of studies published on AI for breast cancer detection and survival prediction increased from 12 to 546 between 2000 and 2021. The United States of America (USA), the Republic of China, and India are the most productive countries in terms of publications in this field. Furthermore, the USA leads in total citations, while Hungary and Holland take the lead positions in average citations per year. Wang J is the most productive author, and Zhan J is the most relevant author in this field. Stanford University in the USA is the most relevant affiliation by the number of published articles. The top 10 most relevant sources are Q1 journals, with PLOS ONE and Computers in Biology and Medicine being the leading journals in this field. The most trending topics related to our study, transfer learning and deep learning, were identified. Conclusion: The present findings provide insight and research directions for policymakers and academic researchers for future collaboration and research in AI for breast cancer patients.
Affiliation(s)
- Asif Hassan Syed
- Department of Computer Science, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia
- Tabrej Khan
- Department of Information Systems, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia
21
din NMU, Dar RA, Rasool M, Assad A. Breast cancer detection using deep learning: Datasets, methods, and challenges ahead. Comput Biol Med 2022;149:106073. [DOI: 10.1016/j.compbiomed.2022.106073]
22
Albashish D. Ensemble of adapted convolutional neural networks (CNN) methods for classifying colon histopathological images. PeerJ Comput Sci 2022;8:e1031. [PMID: 35875641] [PMCID: PMC9299234] [DOI: 10.7717/peerj-cs.1031]
Abstract
Deep convolutional neural networks (CNNs) show potential for computer-aided diagnosis systems (CADs) by learning features directly from images rather than using traditional feature extraction methods. Nevertheless, due to the limited sample sizes and heterogeneity of tumor presentation in medical images, CNN models suffer from training issues, including training from scratch, which leads to overfitting. Alternatively, transfer learning (TL) from a pre-trained neural network is used to derive tumor knowledge from medical image datasets using CNNs that were designed for non-medical tasks, alleviating the need for large datasets. This study proposes two ensemble learning techniques: E-CNN (product rule) and E-CNN (majority voting). These techniques are based on the adaptation of pretrained CNN models to classify colon cancer histopathology images into various classes. In these ensembles, the individual learners are initially constructed by adapting pretrained DenseNet121, MobileNetV2, InceptionV3, and VGG16 models. The adaptation of these models is based on a block-wise fine-tuning policy, in which a set of dense and dropout layers is joined to these pretrained models to explore the variation in the histology images. Then, the models' decisions are fused via the product rule and majority voting aggregation methods. The proposed model was validated against the standard pretrained models and the most recent works on two publicly available benchmark colon histopathological image datasets: Stoean (357 images) and Kather colorectal histology (5,000 images). The results were 97.20% and 91.28% accurate, respectively. The achieved results outperformed the state-of-the-art studies and confirmed that the proposed E-CNNs could be extended for use in various medical image applications.
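The two decision-fusion rules named above can be sketched as follows (a generic illustration; the adapted backbones and block-wise fine-tuning policy are not shown): the product rule multiplies per-class probabilities across models, while majority voting counts each model's hard prediction.

```python
import numpy as np

def product_rule(prob_list):
    """prob_list: list of (n_samples, n_classes) probability arrays, one per CNN."""
    fused = np.prod(np.stack(prob_list), axis=0)
    return fused.argmax(axis=1)

def majority_vote(prob_list):
    """Each model casts one vote (its argmax class); ties go to the lowest class index."""
    votes = np.stack([p.argmax(axis=1) for p in prob_list])       # (n_models, n_samples)
    n_classes = prob_list[0].shape[1]
    counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)
    return counts.argmax(axis=0)

# Toy fusion of three hypothetical CNNs on 4 samples and 3 tissue classes.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(3), size=4) for _ in range(3)]
print(product_rule(probs), majority_vote(probs))
```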
Affiliation(s)
- Dheeb Albashish
- Computer Science Department/ Prince Abdullah bin Ghazi Faculty of Information and Communication Technology, Al-Balqa Applied University, Alsalt, Jordan
| |
Collapse
|
23
|
Beyond the colors: enhanced deep learning on invasive ductal carcinoma. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07478-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
24
|
Rojas F, Hernandez S, Lazcano R, Laberiano-Fernandez C, Parra ER. Multiplex Immunofluorescence and the Digital Image Analysis Workflow for Evaluation of the Tumor Immune Environment in Translational Research. Front Oncol 2022; 12:889886. [PMID: 35832550 PMCID: PMC9271766 DOI: 10.3389/fonc.2022.889886] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2022] [Accepted: 05/27/2022] [Indexed: 11/13/2022] Open
Abstract
A robust understanding of the tumor immune environment has important implications for cancer diagnosis, prognosis, research, and immunotherapy. Traditionally, immunohistochemistry (IHC) has been regarded as the standard method for detecting proteins in situ, but this technique allows for the evaluation of only one cell marker per tissue sample at a time. However, multiplexed imaging technologies enable the multiparametric analysis of a tissue section at the same time. Also, through the curation of specific antibody panels, these technologies enable researchers to study the cell subpopulations within a single immunological cell group. Thus, multiplexed imaging gives investigators the opportunity to better understand tumor cells, immune cells, and the interactions between them. In the multiplexed imaging technology workflow, once the protocol for a tumor immune micro environment study has been defined, histological slides are digitized to produce high-resolution images in which regions of interest are selected for the interrogation of simultaneously expressed immunomarkers (including those co-expressed by the same cell) by using an image analysis software and algorithm. Most currently available image analysis software packages use similar machine learning approaches in which tissue segmentation first defines the different components that make up the regions of interest and cell segmentation, then defines the different parameters, such as the nucleus and cytoplasm, that the software must utilize to segment single cells. Image analysis tools have driven dramatic evolution in the field of digital pathology over the past several decades and provided the data necessary for translational research and the discovery of new therapeutic targets. The next step in the growth of digital pathology is optimization and standardization of the different tasks in cancer research, including image analysis algorithm creation, to increase the amount of data generated and their accuracy in a short time as described herein. The aim of this review is to describe this process, including an image analysis algorithm creation for multiplex immunofluorescence analysis, as an essential part of the optimization and standardization of the different processes in cancer research, to increase the amount of data generated and their accuracy in a short time.
Collapse
|
25
|
Ahmad S, Ullah T, Ahmad I, AL-Sharabi A, Ullah K, Khan RA, Rasheed S, Ullah I, Uddin MN, Ali MS. A Novel Hybrid Deep Learning Model for Metastatic Cancer Detection. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:8141530. [PMID: 35785076 PMCID: PMC9249449 DOI: 10.1155/2022/8141530] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Revised: 04/28/2022] [Accepted: 06/01/2022] [Indexed: 12/18/2022]
Abstract
Cancer is a heterogeneous disease with various subtypes that abruptly destroys the body's normal cells. As a result, it is essential to detect and prognosticate the distinct types of cancer, since early detection can guide treatment, and to stratify cancer patients into high- and low-risk groups. Efficient cancer detection is frequently a time-consuming and exhausting task with a high possibility of pathologist error; previous studies employed data mining and machine learning (ML) techniques to identify cancer, but these strategies rely on handcrafted feature extraction techniques that can result in incorrect classification. On the contrary, deep learning (DL) is robust in feature extraction and has recently been widely used for classification and detection purposes. This research implemented a novel hybrid AlexNet-gated recurrent unit (AlexNet-GRU) model for lymph node (LN) breast cancer detection and classification. We used the well-known Kaggle (PCam) dataset to classify LN cancer samples. The study compares three models: convolutional neural network GRU (CNN-GRU), CNN long short-term memory (CNN-LSTM), and the proposed AlexNet-GRU. The experimental results indicated that the proposed model achieved accuracy, precision, sensitivity, and specificity of 99.50%, 98.10%, 98.90%, and 97.50%, respectively, can reduce the pathologist errors of incorrect classification that occur during diagnosis, and performs significantly better than the CNN-GRU and CNN-LSTM models. The proposed model is compared with other recent ML/DL algorithms to analyze its efficiency, which reveals that the proposed AlexNet-GRU model is computationally efficient. Also, the proposed model demonstrates its superiority over state-of-the-art methods for LN breast cancer detection and classification.
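As a rough illustration of the hybrid idea, a convolutional backbone whose spatial feature map is read as a sequence by a GRU, here is a minimal PyTorch sketch. The layer sizes, class count, and use of torchvision's AlexNet (recent API, `weights=None`) are assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class CNNGRU(nn.Module):
    """Sketch: AlexNet-style convolutional features fed to a GRU, then a classifier."""
    def __init__(self, num_classes: int = 2, hidden: int = 128):
        super().__init__()
        self.features = models.alexnet(weights=None).features   # (B, 256, 6, 6) for 224x224 input
        self.gru = nn.GRU(input_size=256, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        f = self.features(x)                    # convolutional feature map
        seq = f.flatten(2).permute(0, 2, 1)     # treat the 6x6 spatial grid as a 36-step sequence
        _, h_n = self.gru(seq)                  # final hidden state summarises the sequence
        return self.head(h_n.squeeze(0))

logits = CNNGRU()(torch.randn(2, 3, 224, 224))  # -> shape (2, 2)
```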
Collapse
Affiliation(s)
- Shahab Ahmad
- School of Management Science and Engineering, Chongqing University of Post and Telecommunication, Chongqing 400065, China
| | - Tahir Ullah
- Department of Electronics and Information Engineering, Xian Jiaotong University, Xian, China
| | - Ijaz Ahmad
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
| | | | - Kalim Ullah
- Department of Zoology, Kohat University of Science and Technology, Kohat 26000, Pakistan
| | - Rehan Ali Khan
- Department of Electrical Engineering, University of Science and Technology, Bannu 28100, Pakistan
| | - Saim Rasheed
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University Jeddah, Saudi Arabia
| | - Inam Ullah
- College of Internet of Things (IoT) Engineering, Hohai University (HHU), Changzhou Campus, Nanjing 213022, China
| | - Md. Nasir Uddin
- Communication Research Laboratory, Department of Information and Communication Technology, Islamic University, Kushtia 7003, Bangladesh
| | - Md. Sadek Ali
- Communication Research Laboratory, Department of Information and Communication Technology, Islamic University, Kushtia 7003, Bangladesh
| |
Collapse
|
26
|
Mahmoud HAH, AlArfaj AA, Hafez AM. A Fast Hybrid Classification Algorithm with Feature Reduction for Medical Images. Appl Bionics Biomech 2022; 2022:1367366. [PMID: 35360292 PMCID: PMC8964210 DOI: 10.1155/2022/1367366] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Accepted: 03/05/2022] [Indexed: 11/18/2022] Open
Abstract
In this paper, we introduce a fast hybrid fuzzy classification algorithm with feature reduction for medical images. We incorporate the quantum-based grasshopper computing algorithm (QGH) with feature extraction using the fuzzy clustering technique (C-means). QGH integrates quantum computing into machine learning and intelligence applications. The objective of our technique is to integrate the QGH method into image-processing-based cervical cancer detection. Many features, such as the color, geometry, and texture of the cells imaged in the Pap smear lab test, are crucial in cancer diagnosis. Our proposed technique extracts the best features from more than 2,600 public Pap smear images and further applies a feature reduction technique to shrink the feature space. Performance evaluation of our approach assesses the influence of the extracted features on classification precision through two experimental setups. The first setup uses all the extracted features, which leads to classification without feature bias. The second setup is a fusion technique that utilizes QGH with the fuzzy C-means algorithm to choose the best features. In both setups, accuracy is assessed with respect to the selected best features and the different categories of cancer. In the last setup, the fusion technique is combined with statistical techniques to establish qualitative agreement with the feature selection across the experimental setups.
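The fuzzy C-means component reduces to two alternating updates (memberships, then centers). A compact NumPy sketch is given below, with random toy vectors standing in for the extracted Pap smear features; the cluster count and fuzzifier m are assumed values.

```python
import numpy as np

def fuzzy_cmeans(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy C-means: returns cluster centers and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))       # soft memberships, rows sum to 1
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]          # weighted cluster means
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)              # standard membership update
    return centers, U

X = np.random.default_rng(1).normal(size=(200, 5))            # toy feature vectors
centers, U = fuzzy_cmeans(X)
hard_labels = U.argmax(axis=1)                                # defuzzified assignment
```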
Collapse
Affiliation(s)
- Hanan Ahmed Hosni Mahmoud
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
| | - Abeer Abdulaziz AlArfaj
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
| | - Alaaeldin M. Hafez
- Department of Information Systems, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
| |
Collapse
|
27
|
Sethy PK, Behera SK. Automatic classification with concatenation of deep and handcrafted features of histological images for breast carcinoma diagnosis. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:9631-9643. [DOI: 10.1007/s11042-021-11756-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 09/27/2021] [Accepted: 11/22/2021] [Indexed: 08/02/2023]
|
28
|
Analyzing histopathological images by using machine learning techniques. APPLIED NANOSCIENCE 2022. [DOI: 10.1007/s13204-021-02217-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
29
|
Singh H, Sharma V, Singh D. Comparative analysis of proficiencies of various textures and geometric features in breast mass classification using k-nearest neighbor. Vis Comput Ind Biomed Art 2022; 5:3. [PMID: 35018506 PMCID: PMC8752652 DOI: 10.1186/s42492-021-00100-1] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Accepted: 12/23/2021] [Indexed: 11/10/2022] Open
Abstract
This paper introduces a comparative analysis of the proficiencies of various textures and geometric features in the diagnosis of breast masses on mammograms. An improved machine learning-based framework was developed for this study. The proposed system was tested using 106 full field digital mammography images from the INbreast dataset, containing a total of 115 breast mass lesions. The proficiencies of individual and various combinations of computed textures and geometric features were investigated by evaluating their contributions towards attaining higher classification accuracies. Four state-of-the-art filter-based feature selection algorithms (Relief-F, Pearson correlation coefficient, neighborhood component analysis, and term variance) were employed to select the top 20 most discriminative features. The Relief-F algorithm outperformed other feature selection algorithms in terms of classification results by reporting 85.2% accuracy, 82.0% sensitivity, and 88.0% specificity. A set of nine most discriminative features were then selected, out of the earlier mentioned 20 features obtained using Relief-F, as a result of further simulations. The classification performances of six state-of-the-art machine learning classifiers, namely k-nearest neighbor (k-NN), support vector machine, decision tree, Naive Bayes, random forest, and ensemble tree, were investigated, and the obtained results revealed that the best classification results (accuracy = 90.4%, sensitivity = 92.0%, specificity = 88.0%) were obtained for the k-NN classifier with the number of neighbors having k = 5 and squared inverse distance weight. The key findings include the identification of the nine most discriminative features, that is, FD26 (Fourier Descriptor), Euler number, solidity, mean, FD14, FD13, periodicity, skewness, and contrast out of a pool of 125 texture and geometric features. The proposed results revealed that the selected nine features can be used for the classification of breast masses in mammograms.
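The best-performing configuration reported here, k-NN with k = 5 and squared-inverse-distance weighting, is easy to reproduce with scikit-learn. The feature matrix below is synthetic and only illustrates the classifier setup, not the study's nine selected features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the nine selected texture/geometric features of 115 lesions.
rng = np.random.default_rng(0)
X = rng.normal(size=(115, 9))
y = rng.integers(0, 2, size=115)                 # benign (0) vs malignant (1)

def squared_inverse(distances):
    """Weight each neighbour by 1 / d^2 (epsilon avoids division by zero)."""
    return 1.0 / (distances ** 2 + 1e-12)

knn = make_pipeline(
    StandardScaler(),
    KNeighborsClassifier(n_neighbors=5, weights=squared_inverse),
)
print(cross_val_score(knn, X, y, cv=5).mean())
```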
Collapse
Affiliation(s)
- Harmandeep Singh
- Department of Computer Science and Engineering, IKG Punjab Technical University, Jalandhar, Punjab, 144603, India.
| | - Vipul Sharma
- Department of Computer Science and Engineering, IKG Punjab Technical University, Jalandhar, Punjab, 144603, India
| | - Damanpreet Singh
- Department of Computer Science and Engineering, Sant Longowal Institute of Engineering and Technology, Sangrur, Punjab, 148106, India
| |
Collapse
|
30
|
Ha SM, Kim HH, Kang E, Seo BK, Choi N, Kim TH, Ku YJ, Ye JC. Radiation Dose Reduction in Digital Mammography by Deep-Learning Algorithm Image Reconstruction: A Preliminary Study. JOURNAL OF THE KOREAN SOCIETY OF RADIOLOGY 2022; 83:344-359. [PMID: 36237936 PMCID: PMC9514435 DOI: 10.3348/jksr.2020.0152] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Revised: 10/28/2020] [Accepted: 07/23/2021] [Indexed: 11/15/2022]
Abstract
Purpose To develop a denoising convolutional neural network-based image processing technique and investigate its efficacy in diagnosing breast cancer using low-dose mammography imaging. Materials and Methods A total of 6 breast radiologists were included in this prospective study. All radiologists independently evaluated low-dose images for lesion detection and rated them for diagnostic quality using a qualitative scale. After application of the denoising network, the same radiologists evaluated lesion detectability and image quality. For clinical application, a consensus on lesion type and localization on preoperative mammographic examinations of breast cancer patients was reached after discussion. Thereafter, coded low-dose, reconstructed full-dose, and full-dose images were presented and assessed in a random order. Results Lesions on 40% reconstructed full-dose images were better perceived when compared with low-dose images of mastectomy specimens as a reference. In clinical application, as compared to 40% reconstructed images, higher values were given on full-dose images for resolution (p < 0.001); diagnostic quality for calcifications (p < 0.001); and for masses, asymmetry, or architectural distortion (p = 0.037). The 40% reconstructed images showed comparable values to 100% full-dose images for overall quality (p = 0.547), lesion visibility (p = 0.120), and contrast (p = 0.083), without significant differences. Conclusion Effective denoising and image reconstruction processing techniques can enable breast cancer diagnosis with substantial radiation dose reduction.
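The paper does not spell out the network, but denoising CNNs for dose reduction are commonly built as residual models that predict the noise and subtract it from the low-dose input. The following PyTorch sketch is a generic DnCNN-style example under assumed layer counts and channel widths, not the authors' reconstruction algorithm.

```python
import torch
import torch.nn as nn

class SmallDenoiser(nn.Module):
    """Residual denoiser: the network predicts noise, which is subtracted from the input."""
    def __init__(self, channels: int = 1, width: int = 32, depth: int = 6):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, low_dose):
        return low_dose - self.body(low_dose)     # reconstructed (denoised) image

model = SmallDenoiser()
restored = model(torch.randn(1, 1, 256, 256))     # toy low-dose mammogram patch
# Training would minimise the difference to the matching full-dose image:
loss = nn.functional.mse_loss(restored, torch.randn(1, 1, 256, 256))
```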
Collapse
Affiliation(s)
- Su Min Ha
- Department of Radiology, Research Institute of Radiology, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, Korea
- Department of Radiology, Research Institute of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
| | - Hak Hee Kim
- Department of Radiology, Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
| | - Eunhee Kang
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea
| | - Bo Kyoung Seo
- Department of Radiology, Korea University Ansan Hospital, Korea University College of Medicine, Ansan, Korea
| | - Nami Choi
- Department of Radiology, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul, Korea
| | - Tae Hee Kim
- Department of Radiology, Ajou University Hospital, Ajou University School of Medicine, Suwon, Korea
| | - You Jin Ku
- Department of Radiology, Catholic Kwangdong University International St. Mary’s Hospital, Catholic Kwandong University, Incheon, Korea
| | - Jong Chul Ye
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea
| |
Collapse
|
31
|
Agraz JL, Grenko CM, Chen AA, Viaene AN, Nasrallah MD, Pati S, Kurc T, Saltz J, Feldman MD, Akbari H, Sharma P, Shinohara RT, Bakas S. Robust Image Population Based Stain Color Normalization: How Many Reference Slides Are Enough? IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2022; 3:218-226. [PMID: 36860498 PMCID: PMC9970045 DOI: 10.1109/ojemb.2023.3234443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 11/08/2022] [Accepted: 01/01/2023] [Indexed: 01/06/2023] Open
Abstract
Histopathologic evaluation of Hematoxylin & Eosin (H&E) stained slides is essential for disease diagnosis, revealing tissue morphology, structure, and cellular composition. Variations in staining protocols and equipment result in images with color nonconformity. Although pathologists compensate for color variations, these disparities introduce inaccuracies in computational whole slide image (WSI) analysis, accentuating data domain shift and degrading generalization. Current state-of-the-art normalization methods employ a single WSI as reference, but selecting a single WSI representative of a complete WSI-cohort is infeasible, inadvertently introducing normalization bias. We seek the optimal number of slides to construct a more representative reference based on composite/aggregate of multiple H&E density histograms and stain-vectors, obtained from a randomly selected WSI population (WSI-Cohort-Subset). We utilized 1,864 IvyGAP WSIs as a WSI-cohort, and built 200 WSI-Cohort-Subsets varying in size (from 1 to 200 WSI-pairs) using randomly selected WSIs. The WSI-pairs' mean Wasserstein Distances and WSI-Cohort-Subsets' standard deviations were calculated. The Pareto Principle defined the optimal WSI-Cohort-Subset size. The WSI-cohort underwent structure-preserving color normalization using the optimal WSI-Cohort-Subset histogram and stain-vector aggregates. Numerous normalization permutations support WSI-Cohort-Subset aggregates as representative of a WSI-cohort through WSI-cohort CIELAB color space swift convergence, as a result of the law of large numbers and shown as a power law distribution. We show normalization at the optimal (Pareto Principle) WSI-Cohort-Subset size and corresponding CIELAB convergence: a) Quantitatively, using 500 WSI-cohorts; b) Quantitatively, using 8,100 WSI-regions; c) Qualitatively, using 30 cellular tumor normalization permutations. Aggregate-based stain normalization may contribute in increasing computational pathology robustness, reproducibility, and integrity.
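The quantity driving the choice of reference-set size, the mean Wasserstein distance over WSI pairs within a randomly drawn subset, can be computed with SciPy. The histograms below are random placeholders for real H&E stain-density histograms; bin count and subset size are assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
bins = np.linspace(0, 1, 65)
centers = 0.5 * (bins[:-1] + bins[1:])

def density_histogram(values):
    """Normalised stain-density histogram over fixed bins (toy stand-in for one WSI)."""
    hist, _ = np.histogram(values, bins=bins, density=True)
    return hist

# Simulate a cohort of 50 WSIs, each summarised by a density histogram.
cohort = [density_histogram(rng.beta(2, 5, size=10_000)) for _ in range(50)]

def mean_pairwise_distance(subset):
    """Mean Wasserstein distance over all histogram pairs in the subset."""
    d = [wasserstein_distance(centers, centers, u_weights=a, v_weights=b)
         for i, a in enumerate(subset) for b in subset[i + 1:]]
    return float(np.mean(d))

subset = [cohort[i] for i in rng.choice(len(cohort), size=10, replace=False)]
print(mean_pairwise_distance(subset))
```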
Collapse
Affiliation(s)
- Jose L Agraz
- Center for Biomedical Image Computing and Analytics (CBICA), Philadelphia, PA 19139, USA.,Department of Pathology and Laboratory Medicine, Perelman School of Medicine, Philadelphia, PA 19139, USA.,Department of Radiology at Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19139, USA
| | - Caleb M Grenko
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, and the Center for Interdisciplinary Studies, Davidson College, NC 28035, USA
| | - Andrew A Chen
- Penn Statistical Imaging and Visualization Endeavor (PennSIVE), University of Pennsylvania, Philadelphia, PA 19139, USA
| | - Angela N Viaene
- Department of Pathology and Laboratory Medicine, Children's Hospital of Philadelphia, University of Pennsylvania, Philadelphia, PA 19139, USA
| | - MacLean D Nasrallah
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19139, USA
| | - Sarthak Pati
- CBICA and Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19139, USA.,Department of Radiology at Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19139, USA
| | - Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY 11794-0751, USA
| | - Joel Saltz
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY 11794-0751, USA
| | - Michael D Feldman
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19139, USA
| | - Hamed Akbari
- CBICA and the Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19139, USA
| | | | - Russell T Shinohara
- CBICA and the Penn Statistical Imaging and Visualization Endeavor (PennSIVE), University of Pennsylvania, Philadelphia, PA 19139, USA
| | - Spyridon Bakas
- CBICA and the Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19139, USA.,Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19139, USA
| |
Collapse
|
32
|
Laxmisagar HS, Hanumantharaju MC. Detection of Breast Cancer with Lightweight Deep Neural Networks for Histology Image Classification. Crit Rev Biomed Eng 2022; 50:1-19. [PMID: 36374820 DOI: 10.1615/critrevbiomedeng.2022043417] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Many researchers have developed computer-assisted diagnostic (CAD) methods to diagnose breast cancer using histopathology microscopic images. These techniques help to improve the accuracy of biopsy diagnosis with hematoxylin and eosin-stained images. On the other hand, most CAD systems usually rely on inefficient and time-consuming manual feature extraction methods. Using a deep learning (DL) model with convolutional layers, we present a method to extract the most useful pictorial information for breast cancer classification. Breast biopsy images stained with hematoxylin and eosin can be categorized into four groups namely benign lesions, normal tissue, carcinoma in situ, and invasive carcinoma. To correctly classify different types of breast cancer, it is important to classify histopathological images accurately. The MobileNet architecture model is used to obtain high accuracy with less resource utilization. The proposed model is fast, inexpensive, and safe due to which it is suitable for the detection of breast cancer at an early stage. This lightweight deep neural network can be accelerated using field-programmable gate arrays for the detection of breast cancer. DL has been implemented to successfully classify breast cancer. The model uses categorical cross-entropy to learn to give the correct class a high probability and other classes a low probability. It is used in the classification stage of the convolutional neural network (CNN) after the clustering stage, thereby improving the performance of the proposed system. To measure training and validation accuracy, the model was trained on Google Colab for 280 epochs with a powerful GPU with 2496 CUDA cores, 12 GB GDDR5 VRAM, and 12.6 GB RAM. Our results demonstrate that deep CNN with a chi-square test has improved the accuracy of histopathological image classification of breast cancer by greater than 11% compared with other state-of-the-art methods.
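Replacing the classification head of a MobileNet backbone and training with categorical cross-entropy is a standard transfer-learning recipe. The PyTorch sketch below uses torchvision's MobileNetV2 (recent API assumed) and an assumed class count; the authors' exact MobileNet variant, hyperparameters, and FPGA-oriented details are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g. benign lesion, normal, in situ, invasive (assumed)

model = models.mobilenet_v2(weights=None)              # pretrained weights can be loaded instead
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()                      # categorical cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a toy batch of histology patches.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```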
Collapse
Affiliation(s)
- H S Laxmisagar
- Department of Electronics and Communication Engineering, BMS Institute of Technology Management, Bengaluru 560064, India
| | - M C Hanumantharaju
- Department of Electronics and Communication Engineering, BMS Institute of Technology Management, Bengaluru 560064, India
| |
Collapse
|
33
|
Mammography Image-Based Diagnosis of Breast Cancer Using Machine Learning: A Pilot Study. SENSORS 2021; 22:s22010203. [PMID: 35009746 PMCID: PMC8749541 DOI: 10.3390/s22010203] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Revised: 12/22/2021] [Accepted: 12/24/2021] [Indexed: 02/08/2023]
Abstract
A tumor is an abnormal tissue classified as either benign or malignant. A breast tumor is one of the most common tumors in women. Radiologists use mammograms to identify a breast tumor and classify it, which is a time-consuming process and prone to error due to the complexity of the tumor. In this study, we applied machine learning-based techniques to assist the radiologist in reading mammogram images and classifying the tumor in a very reasonable time interval. We extracted several features from the region of interest in the mammogram, which the radiologist manually annotated. These features are incorporated into a classification engine to train and build the proposed classification models. We used a dataset that the model had not previously seen to evaluate the accuracy of the proposed system, following standard model evaluation schemes. Accordingly, this study found that various factors could affect performance, which we avoided after experimenting with all possible configurations. This study finally recommends using the optimized Support Vector Machine or Naïve Bayes classifier, which produced 100% accuracy after integrating the feature selection and hyper-parameter optimization schemes.
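The reported combination of feature selection and hyper-parameter optimisation around an SVM maps naturally onto a scikit-learn pipeline with grid search. The data below are synthetic placeholders for the radiologist-annotated ROI features, and the parameter grid is an assumption.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))        # toy ROI feature vectors
y = rng.integers(0, 2, size=120)      # benign vs malignant

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif)),   # feature selection step
    ("svm", SVC()),
])
param_grid = {
    "select__k": [5, 10, 20],
    "svm__C": [0.1, 1, 10],
    "svm__kernel": ["rbf", "linear"],
}
search = GridSearchCV(pipe, param_grid, cv=5).fit(X, y)   # hyper-parameter optimisation
print(search.best_params_, search.best_score_)
```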
Collapse
|
34
|
Montaha S, Azam S, Rafid AKMRH, Ghosh P, Hasan MZ, Jonkman M, De Boer F. BreastNet18: A High Accuracy Fine-Tuned VGG16 Model Evaluated Using Ablation Study for Diagnosing Breast Cancer from Enhanced Mammography Images. BIOLOGY 2021; 10:biology10121347. [PMID: 34943262 PMCID: PMC8698892 DOI: 10.3390/biology10121347] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2021] [Revised: 12/12/2021] [Accepted: 12/14/2021] [Indexed: 12/14/2022]
Abstract
Simple Summary Breast cancer diagnosis at an early stage using mammography is important, as it assists clinical specialists in treatment planning to increase survival rates. The aim of this study is to construct an effective method to classify breast images into four classes with a low error rate. Initially, unwanted regions of mammograms are removed, the quality is enhanced, and the cancerous lesions are highlighted with different artifacts removal, noise reduction, and enhancement techniques. The number of mammograms is increased using seven augmentation techniques to deal with over-fitting and under-fitting problems. Afterwards, six fine-tuned convolution neural networks (CNNs), originally developed for other purposes, are evaluated, and VGG16 yielded the highest performance. We propose a BreastNet18 model based on the fine-tuned VGG16, changing different hyper parameters and layer structures after experimentation with our dataset. Performing an ablation study on the proposed model and selecting suitable parameter values for preprocessing algorithms increases the accuracy of our model to 98.02%, outperforming some existing state-of-the-art approaches. To analyze the performance, several performance metrics are generated and evaluated for every model and for BreastNet18. Results suggest that accuracy improvement can be obtained through image pre-processing techniques, augmentation, and ablation study. To investigate possible overfitting issues, a k-fold cross validation is carried out. To assert the robustness of the network, the model is tested on a dataset containing noisy mammograms. This may help medical specialists in efficient and accurate diagnosis and early treatment planning. Abstract Background: Identification and treatment of breast cancer at an early stage can reduce mortality. Currently, mammography is the most widely used effective imaging technique in breast cancer detection. However, an erroneous mammogram based interpretation may result in false diagnosis rate, as distinguishing cancerous masses from adjacent tissue is often complex and error-prone. Methods: Six pre-trained and fine-tuned deep CNN architectures: VGG16, VGG19, MobileNetV2, ResNet50, DenseNet201, and InceptionV3 are evaluated to determine which model yields the best performance. We propose a BreastNet18 model using VGG16 as foundational base, since VGG16 performs with the highest accuracy. An ablation study is performed on BreastNet18, to evaluate its robustness and achieve the highest possible accuracy. Various image processing techniques with suitable parameter values are employed to remove artefacts and increase the image quality. A total dataset of 1442 preprocessed mammograms was augmented using seven augmentation techniques, resulting in a dataset of 11,536 images. To investigate possible overfitting issues, a k-fold cross validation is carried out. The model was then tested on noisy mammograms to evaluate its robustness. Results were compared with previous studies. Results: Proposed BreastNet18 model performed best with a training accuracy of 96.72%, a validating accuracy of 97.91%, and a test accuracy of 98.02%. In contrast to this, VGGNet19 yielded test accuracy of 96.24%, MobileNetV2 77.84%, ResNet50 79.98%, DenseNet201 86.92%, and InceptionV3 76.87%. Conclusions: Our proposed approach based on image processing, transfer learning, fine-tuning, and ablation study has demonstrated a high correct breast cancer classification while dealing with a limited number of complex medical images.
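Fine-tuning a pretrained VGG16 by freezing earlier convolutional blocks and replacing the classifier, the starting point that BreastNet18 builds on, looks roughly like the PyTorch sketch below. The frozen depth, class count, and optimiser settings are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4                                     # assumed number of mammogram classes

vgg = models.vgg16(weights=None)                    # load ImageNet weights in practice
for param in vgg.features[:17].parameters():        # freeze the earlier convolutional blocks
    param.requires_grad = False

vgg.classifier[6] = nn.Linear(4096, NUM_CLASSES)    # replace the final fully connected layer

optimizer = torch.optim.SGD(
    (p for p in vgg.parameters() if p.requires_grad), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

out = vgg(torch.randn(2, 3, 224, 224))              # -> (2, NUM_CLASSES)
```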
Collapse
Affiliation(s)
- Sidratul Montaha
- Department of Computer Science and Engineering, Daffodil International University, Dhaka 1207, Bangladesh; (S.M.); (A.K.M.R.H.R.); (M.Z.H.)
| | - Sami Azam
- College of Engineering, IT and Environment, Charles Darwin University, Darwin, NT 0909, Australia; (M.J.); (F.D.B.)
- Correspondence:
| | | | - Pronab Ghosh
- Department of Computer Science (CS), Lakehead University, 955 Oliver Rd, Thunder Bay, ON P7B 5E1, Canada;
| | - Md. Zahid Hasan
- Department of Computer Science and Engineering, Daffodil International University, Dhaka 1207, Bangladesh; (S.M.); (A.K.M.R.H.R.); (M.Z.H.)
| | - Mirjam Jonkman
- College of Engineering, IT and Environment, Charles Darwin University, Darwin, NT 0909, Australia; (M.J.); (F.D.B.)
| | - Friso De Boer
- College of Engineering, IT and Environment, Charles Darwin University, Darwin, NT 0909, Australia; (M.J.); (F.D.B.)
| |
Collapse
|
35
|
Breast Cancer Calcifications: Identification Using a Novel Segmentation Approach. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2021; 2021:9905808. [PMID: 34659451 PMCID: PMC8514925 DOI: 10.1155/2021/9905808] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/27/2021] [Revised: 09/12/2021] [Accepted: 09/21/2021] [Indexed: 12/23/2022]
Abstract
Breast cancer is one of the most prevalent cancers amongst women; one in eight women is affected by it, and it is a life-threatening illness. The root cause of breast cancer is still under research. There are, however, certain risk factors, such as age, genetics, obesity, birth control pills, and cigarettes. Breast cancer is a malignant tumor that begins in the breast cells and eventually spreads to the surrounding tissue. If detected early, the illness may be treatable, and the likelihood of successful treatment diminishes as the disease advances. Numerous imaging techniques are used to identify breast cancer. This research examines different breast cancer detection strategies that use imaging techniques, data mining techniques, and various characteristics, together with a brief comparative analysis of existing breast cancer detection systems. Breast cancer mortality will be significantly reduced if it is identified and treated early. There are technological difficulties linked to scans and inconsistency among patients. In this study, we introduce a breast cancer diagnosis approach involving several methods to collect and analyze the data. In the preprocessing stage, the input image is filtered using a window or cropped. Segmentation is performed using the k-means algorithm. In the final phase, the study identifies the calcifications found in breast cancer. The suggested approach is implemented in MATLAB and produces reliable performance.
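The filter/crop-then-k-means pipeline described above can be sketched with scikit-learn by clustering pixel intensities of a smoothed image and keeping the brightest cluster as the calcification candidate. The synthetic image and cluster count below are purely illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter
from sklearn.cluster import KMeans

# Toy grayscale mammogram patch with a few bright "calcification-like" spots.
rng = np.random.default_rng(0)
image = rng.normal(0.3, 0.05, size=(128, 128))
image[40:44, 60:64] = 0.95
image[90:93, 30:33] = 0.9

smoothed = median_filter(image, size=3)              # window-based filtering step
pixels = smoothed.reshape(-1, 1)                     # one intensity feature per pixel

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pixels)
segmented = labels.reshape(image.shape)

# Take the brightest cluster as the candidate calcification region.
bright_cluster = np.argmax([smoothed.reshape(-1)[labels == k].mean() for k in range(3)])
calcification_mask = segmented == bright_cluster
```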
Collapse
|
36
|
Laishram R, Rabidas R. WDO optimized detection for mammographic masses and its diagnosis: A unified CAD system. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107620] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
|
37
|
Liu Y, Han L, Wang H, Yin B. Classification of papillary thyroid carcinoma histological images based on deep learning. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-210100] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Papillary thyroid carcinoma (PTC) is a common carcinoma of the thyroid. Many benign thyroid nodules have a papillary structure that can easily be confused with PTC morphologically. Pathologists therefore have to spend considerable time on the differential diagnosis of PTC, which relies on personal diagnostic experience, is subjective, and makes consistency among observers difficult to achieve. To address this issue, we applied deep learning to the differential diagnosis of PTC and proposed a histological image classification method for PTC based on an Inception Residual convolutional neural network (IRCNN) and a support vector machine (SVM). First, in order to expand the dataset and solve the problem of histological image color inconsistency, a pre-processing module was constructed that included color transfer and mirror transforms. Then, to alleviate overfitting of the deep learning model, we optimized the convolutional neural network by combining the Inception Network and the Residual Network to extract image features. Finally, the SVM was trained on image features extracted by the IRCNN to perform the classification task. Experimental results show the effectiveness of the proposed method in the classification of PTC histological images.
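The final stage, an SVM trained on CNN-extracted features, can be sketched generically as below. The backbone used here (a torchvision ResNet-18 with its head removed) is a placeholder for the authors' Inception-Residual network, and the toy batches stand in for PTC and benign patches.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Placeholder backbone: any CNN with its classification head removed works the same way.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Identity()                      # expose the 512-d pooled feature vector
backbone.eval()

def extract_features(images: torch.Tensor):
    with torch.no_grad():
        return backbone(images).numpy()

# Toy histology batches standing in for PTC / benign image patches.
X_train = extract_features(torch.randn(32, 3, 224, 224))
y_train = [0] * 16 + [1] * 16
X_test = extract_features(torch.randn(8, 3, 224, 224))

svm = SVC(kernel="rbf").fit(X_train, y_train)
pred = svm.predict(X_test)
```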
Collapse
Affiliation(s)
- Yaning Liu
- College of Information Science and Engineering, Ocean University of China, Qingdao, China
| | - Lin Han
- School of Information and Control Engineering, Qingdao University of Technology, Qingdao, China
| | - Hexiang Wang
- Department of Pathology, Qingdao Hospital of Traditional Chinese Medicine, Qingdao, China
| | - Bo Yin
- College of Information Science and Engineering, Ocean University of China, Qingdao, China
| |
Collapse
|
38
|
Chen Z, Chen Z, Liu J, Zheng Q, Zhu Y, Zuo Y, Wang Z, Guan X, Wang Y, Li Y. Weakly Supervised Histopathology Image Segmentation With Sparse Point Annotations. IEEE J Biomed Health Inform 2021; 25:1673-1685. [PMID: 32931437 DOI: 10.1109/jbhi.2020.3024262] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Digital histopathology image segmentation can facilitate computer-assisted cancer diagnostics. Given the difficulty of obtaining manual annotations, weak supervision is more suitable for the task than full supervision is. However, most weakly supervised models are not ideal for handling severe intra-class heterogeneity and inter-class homogeneity in histopathology images. Therefore, we propose a novel end-to-end weakly supervised learning framework named WESUP. With only sparse point annotations, it performs accurate segmentation and exhibits good generalizability. The training phase comprises two major parts, hierarchical feature representation and deep dynamic label propagation. The former uses superpixels to capture local details and global context from the convolutional feature maps obtained via transfer learning. The latter recognizes the manifold structure of the hierarchical features and identifies potential targets with the sparse annotations. Moreover, these two parts are trained jointly to improve the performance of the whole framework. To further boost test performance, pixel-wise inference is adopted for finer prediction. As demonstrated by experimental results, WESUP is able to largely resolve the confusion between histological foreground and background. It outperforms several state-of-the-art weakly supervised methods on a variety of histopathology datasets with minimal annotation efforts. Trained by very sparse point annotations, WESUP can even beat an advanced fully supervised segmentation network.
Collapse
|
39
|
Detection and Segmentation of Breast Masses Based on Multi-Layer Feature Fusion. Methods 2021; 202:54-61. [PMID: 33930573 DOI: 10.1016/j.ymeth.2021.04.022] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Revised: 04/05/2021] [Accepted: 04/25/2021] [Indexed: 11/21/2022] Open
Abstract
In breast mass detection, masses of many different sizes appear in the image. When an existing target detection model is applied directly to breast masses, misdetections and missed detections readily occur. Therefore, to improve the detection accuracy of breast masses, this paper proposes D-Mask R-CNN, a target detection model based on Mask R-CNN that is suited to breast mass detection. Firstly, the internal structure of the FPN is improved by changing the lateral connections in the original FPN to dense connections. Secondly, the anchor sizes of the RPN are modified to improve the localization accuracy of breast masses. Finally, Soft-NMS replaces the NMS in the original model to reduce the chance that correct predictions are eliminated during the NMS process. The CBIS-DDSM dataset was used for all experiments. The results showed that the mAP of the improved model for detecting breast masses reached 0.66 on the test set, which was 0.05 higher than that of the original Mask R-CNN.
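Soft-NMS, which replaces standard NMS in this model, decays the scores of overlapping boxes instead of discarding them. A compact NumPy sketch of the Gaussian variant follows; box coordinates, scores, and sigma are toy values.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: keep the best box, decay the scores of overlapping boxes."""
    boxes, scores = boxes.copy(), scores.copy()
    keep = []
    while len(boxes):
        i = scores.argmax()
        keep.append((boxes[i], scores[i]))
        overlaps = iou(boxes[i], boxes)
        scores = scores * np.exp(-(overlaps ** 2) / sigma)   # decay instead of hard removal
        mask = scores > score_thresh
        mask[i] = False                                      # drop the box just kept
        boxes, scores = boxes[mask], scores[mask]
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]], float)
kept = soft_nms(boxes, np.array([0.9, 0.8, 0.7]))
```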
Collapse
|
40
|
Feng Y, Hafiane A, Laurent H. A deep learning based multiscale approach to segment the areas of interest in whole slide images. Comput Med Imaging Graph 2021; 90:101923. [PMID: 33894669 DOI: 10.1016/j.compmedimag.2021.101923] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Revised: 01/23/2021] [Accepted: 04/01/2021] [Indexed: 11/28/2022]
Abstract
This paper addresses the problem of liver cancer segmentation in Whole Slide Images (WSIs). We propose a multi-scale image processing method based on an automatic end-to-end deep neural network algorithm for the segmentation of cancerous areas. A seven-level gaussian pyramid representation of the histopathological image was built to provide the texture information at different scales. In this work, several neural architectures were compared using the original image level for the training procedure. The proposed method is based on U-Net applied to seven levels of various resolutions (pyramidal subsampling). The predictions in different levels are combined through a voting mechanism. The final segmentation result is generated at the original image level. Partial color normalization and the weighted overlapping method were applied in preprocessing and prediction separately. The results show the effectiveness of the proposed multi-scale approach which achieved better scores than state-of-the-art methods.
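The multi-scale strategy, predicting a mask at each pyramid level and combining levels by voting at the original resolution, can be sketched as below. The `predict_mask` function is a hypothetical stand-in for the trained U-Net, and the pyramid construction is a simplified Gaussian pyramid.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_gaussian_pyramid(image, levels=7):
    """Successive smoothing and 2x downsampling (a simplified Gaussian pyramid)."""
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(gaussian_filter(pyramid[-1], sigma=1)[::2, ::2])
    return pyramid

def predict_mask(level_image):
    """Hypothetical stand-in for the per-level U-Net prediction (here: a threshold)."""
    return (level_image > level_image.mean()).astype(float)

image = np.random.default_rng(0).random((256, 256))
pyramid = build_gaussian_pyramid(image)

# Upsample every level's prediction back to the original size, then vote per pixel.
votes = np.stack([
    zoom(predict_mask(lvl), np.array(image.shape) / np.array(lvl.shape), order=0)
    for lvl in pyramid
])
final_mask = votes.mean(axis=0) >= 0.5    # majority vote across the seven levels
```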
Collapse
Affiliation(s)
- Yanbo Feng
- INSA CVL, University of Orléans, PRISME, EA 4229, 18022 Bourges, France.
| | - Adel Hafiane
- INSA CVL, University of Orléans, PRISME, EA 4229, 18022 Bourges, France
| | - Hélène Laurent
- INSA CVL, University of Orléans, PRISME, EA 4229, 18022 Bourges, France
| |
Collapse
|
41
|
Aly GH, Marey M, El-Sayed SA, Tolba MF. YOLO Based Breast Masses Detection and Classification in Full-Field Digital Mammograms. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 200:105823. [PMID: 33190942 DOI: 10.1016/j.cmpb.2020.105823] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2020] [Accepted: 10/27/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE With the development of deep learning since 2012, the use of Convolutional Neural Networks (CNNs) in bioinformatics, especially medical imaging, has achieved tremendous success. Nonetheless, breast mass detection and classification in mammograms, along with their pathology classification, remain a critical challenge. To date, screening mammograms are evaluated by human readers, a process that is monotonous, tiring, lengthy, costly, and significantly prone to errors. METHODS We propose an end-to-end computer-aided diagnosis system based on You Only Look Once (YOLO). The proposed system first converts the mammograms from their DICOM format to images without losing data. Then, it detects masses in full-field digital mammograms and distinguishes between malignant and benign lesions without any human intervention. YOLO has three different architectures, and in this paper all three versions are used for mass detection and classification in the mammograms to compare their performance. The use of anchors in YOLO-V3 on the original data and its augmented version is shown to improve the detection accuracy, especially when k-means clustering is applied to generate anchors corresponding to the dataset used. Finally, ResNet and Inception are used as feature extractors to compare their classification performance against YOLO. RESULTS Mammograms with different resolutions are used; based on YOLO-V3, the best results detect 89.4% of the masses in the INbreast mammograms, with average precisions of 94.2% and 84.6% for classifying the masses as benign and malignant, respectively. Replacing YOLO's classification network with ResNet and InceptionV3 yields overall accuracies of 91.0% and 95.5%, respectively. CONCLUSION The experimental results demonstrate the impact of YOLO on breast mass detection and classification. In particular, using the anchor-box concept in YOLO-V3, with anchors generated by applying k-means clustering to the dataset, most of the challenging mass cases can be detected and classified correctly. Also, by augmenting the dataset using different approaches and comparing with other recent YOLO-based studies, it is found that augmenting only the training set is the fairest and most accurate approach for realistic scenarios.
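The anchor-generation step, clustering ground-truth box dimensions with k-means so that YOLO-V3's anchors match the dataset, can be illustrated as follows. The (width, height) pairs are random placeholders for the INbreast annotations, and plain Euclidean k-means is used here as a simplification of the IoU-based distance often used for YOLO anchors.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy (width, height) pairs of annotated masses, normalised to the image size.
rng = np.random.default_rng(0)
wh = np.column_stack([rng.uniform(0.05, 0.4, 300), rng.uniform(0.05, 0.4, 300)])

n_anchors = 9                                         # YOLO-V3 uses 9 anchors over 3 scales
kmeans = KMeans(n_clusters=n_anchors, n_init=10, random_state=0).fit(wh)

# Sort anchors by area (small to large) before assigning them to detection scales.
anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_.prod(axis=1))]
print(np.round(anchors, 3))
```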
Collapse
Affiliation(s)
- Ghada Hamed Aly
- Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt.
| | - Mohammed Marey
- Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt
| | - Safaa Amin El-Sayed
- Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt
| | - Mohamed Fahmy Tolba
- Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt
| |
Collapse
|
42
|
Avanzo M, Wei L, Stancanello J, Vallières M, Rao A, Morin O, Mattonen SA, El Naqa I. Machine and deep learning methods for radiomics. Med Phys 2021; 47:e185-e202. [PMID: 32418336 DOI: 10.1002/mp.13678] [Citation(s) in RCA: 262] [Impact Index Per Article: 65.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2019] [Revised: 05/22/2019] [Accepted: 06/13/2019] [Indexed: 12/12/2022] Open
Abstract
Radiomics is an emerging area in quantitative image analysis that aims to relate large-scale extracted imaging information to clinical and biological endpoints. The development of quantitative imaging methods along with machine learning has enabled the opportunity to move data science research towards translation for more personalized cancer treatments. Accumulating evidence has indeed demonstrated that noninvasive advanced imaging analytics, that is, radiomics, can reveal key components of tumor phenotype for multiple three-dimensional lesions at multiple time points over and beyond the course of treatment. These developments in the use of CT, PET, US, and MR imaging could augment patient stratification and prognostication buttressing emerging targeted therapeutic approaches. In recent years, deep learning architectures have demonstrated their tremendous potential for image segmentation, reconstruction, recognition, and classification. Many powerful open-source and commercial platforms are currently available to embark in new research areas of radiomics. Quantitative imaging research, however, is complex and key statistical principles should be followed to realize its full potential. The field of radiomics, in particular, requires a renewed focus on optimal study design/reporting practices and standardization of image acquisition, feature calculation, and rigorous statistical analysis for the field to move forward. In this article, the role of machine and deep learning as a major computational vehicle for advanced model building of radiomics-based signatures or classifiers, and diverse clinical applications, working principles, research opportunities, and available computational platforms for radiomics will be reviewed with examples drawn primarily from oncology. We also address issues related to common applications in medical physics, such as standardization, feature extraction, model building, and validation.
Collapse
Affiliation(s)
- Michele Avanzo
- Department of Medical Physics, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, Aviano, PN, 33081, Italy
| | - Lise Wei
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, 48103, USA
| | | | - Martin Vallières
- Medical Physics Unit, McGill University, Montreal, QC, Canada.,Department of Radiation Oncology, University of California, San Francisco, San Francisco, CA, 94143, USA
| | - Arvind Rao
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, 48103, USA.,Department of Computational Medicine & Bioinformatics, University of Michigan, Ann Arbor, MI, 48103, USA
| | - Olivier Morin
- Department of Radiation Oncology, University of California, San Francisco, San Francisco, CA, 94143, USA
| | - Sarah A Mattonen
- Department of Radiology, Stanford University, Stanford, CA, 94305, USA
| | - Issam El Naqa
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, 48103, USA
| |
Collapse
|
43
|
A framework for breast cancer classification using Multi-DCNNs. Comput Biol Med 2021; 131:104245. [PMID: 33556893 DOI: 10.1016/j.compbiomed.2021.104245] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 01/22/2021] [Accepted: 01/23/2021] [Indexed: 11/20/2022]
Abstract
BACKGROUND Deep learning (DL) is the fastest-growing field of machine learning (ML). Deep convolutional neural networks (DCNN) are currently the main tool used for image analysis and classification purposes. There are several DCNN architectures among them AlexNet, GoogleNet, and residual networks (ResNet). METHOD This paper presents a new computer-aided diagnosis (CAD) system based on feature extraction and classification using DL techniques to help radiologists to classify breast cancer lesions in mammograms. This is performed by four different experiments to determine the optimum approach. The first one consists of end-to-end pre-trained fine-tuned DCNN networks. In the second one, the deep features of the DCNNs are extracted and fed to a support vector machine (SVM) classifier with different kernel functions. The third experiment performs deep features fusion to demonstrate that combining deep features will enhance the accuracy of the SVM classifiers. Finally, in the fourth experiment, principal component analysis (PCA) is introduced to reduce the large feature vector produced in feature fusion and to decrease the computational cost. The experiments are performed on two datasets (1) the curated breast imaging subset of the digital database for screening mammography (CBIS-DDSM) and (2) the mammographic image analysis society digital mammogram database (MIAS). RESULTS The accuracy achieved using deep features fusion for both datasets proved to be the highest compared to the state-of-the-art CAD systems. Conversely, when applying the PCA on the feature fusion sets, the accuracy did not improve; however, the computational cost decreased as the execution time decreased.
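The third and fourth experiments, concatenating deep features from several backbones, optionally compressing them with PCA, and classifying with an SVM, reduce to a few scikit-learn calls once per-network feature matrices exist. The matrices below are random placeholders with assumed dimensionalities.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
feat_alexnet = rng.normal(size=(n, 4096))     # stand-ins for deep features from three DCNNs
feat_googlenet = rng.normal(size=(n, 1024))
feat_resnet = rng.normal(size=(n, 2048))
y = rng.integers(0, 2, size=n)

fused = np.hstack([feat_alexnet, feat_googlenet, feat_resnet])   # deep feature fusion

svm_on_fused = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
svm_on_pca = make_pipeline(StandardScaler(), PCA(n_components=100), SVC(kernel="rbf"))

print(cross_val_score(svm_on_fused, fused, y, cv=5).mean())   # fusion only
print(cross_val_score(svm_on_pca, fused, y, cv=5).mean())     # fusion + PCA reduction
```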
Collapse
|
44
|
Li Y, Zhao G, Zhang Q, Lin Y, Wang M. SAP-cGAN: Adversarial learning for breast mass segmentation in digital mammogram based on superpixel average pooling. Med Phys 2021; 48:1157-1167. [PMID: 33340125 DOI: 10.1002/mp.14671] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Revised: 12/11/2020] [Accepted: 12/11/2020] [Indexed: 01/23/2023] Open
Abstract
PURPOSE Breast mass segmentation is a prerequisite step in the use of computer-aided tools designed for breast cancer diagnosis and treatment planning. However, mass segmentation remains challenging due to the low contrast, irregular shapes, and fuzzy boundaries of masses. In this work, we propose a mammography mass segmentation model for improving segmentation performance. METHODS We propose a mammography mass segmentation model called SAP-cGAN, which is based on an improved conditional generative adversarial network (cGAN). We introduce a superpixel average pooling layer into the cGAN decoder, which utilizes superpixels as a pooling layout to improve boundary segmentation. In addition, we adopt a multiscale input strategy to enable the network to learn scale-invariant features with increased robustness. The performance of the model is evaluated with two public datasets: CBIS-DDSM and INbreast. Moreover, ablation analysis is conducted to evaluate further the individual contribution of each block to the performance of the network. RESULTS Dice and Jaccard scores of 93.37% and 87.57%, respectively, are obtained for the CBIS-DDSM dataset. The Dice and Jaccard scores for the INbreast dataset are 91.54% and 84.40%, respectively. These results indicate that our proposed model outperforms current state-of-the-art breast mass segmentation methods. The superpixel average pooling layer and multiscale input strategy has improved the Dice and Jaccard scores of the original cGAN by 7.8% and 12.79%, respectively. CONCLUSIONS Adversarial learning with the addition of a superpixel average pooling layer and multiscale input strategy can encourage the Generator network to generate masks with increased realism and improve breast mass segmentation performance through the minimax game between the Generator network and Discriminator network.
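The superpixel average pooling idea, averaging per-pixel scores within each superpixel so that predicted boundaries follow superpixel edges, can be sketched with scikit-image (a recent API is assumed for the `slic` call). The image and score map below are synthetic.

```python
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(0)
image = rng.random((128, 128))                  # toy grayscale mammogram patch
score_map = rng.random((128, 128))              # toy per-pixel mass probability from a decoder

segments = slic(image, n_segments=200, compactness=0.1,
                channel_axis=None, start_label=0)

# Superpixel average pooling: replace every pixel's score with its superpixel mean.
pooled = np.zeros_like(score_map)
for label in np.unique(segments):
    mask = segments == label
    pooled[mask] = score_map[mask].mean()

mass_mask = pooled > 0.5                        # boundaries now follow superpixel edges
```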
Collapse
Affiliation(s)
- Yamei Li
- School of Information Engineering, Zhengzhou University, Zhengzhou, 450001, China.,Collaborative Innovation Center for Internet Healthcare, Zhengzhou University, Zhengzhou, 450052, China
| | - Guohua Zhao
- School of Information Engineering, Zhengzhou University, Zhengzhou, 450001, China.,Collaborative Innovation Center for Internet Healthcare, Zhengzhou University, Zhengzhou, 450052, China
| | - Qian Zhang
- School of Computer Science, Zhongyuan University of Technology, Zhengzhou, 450007, China
| | - Yusong Lin
- Collaborative Innovation Center for Internet Healthcare, Zhengzhou University, Zhengzhou, 450052, China.,School of Software, Zhengzhou University, Zhengzhou, 450002, China.,Hanwei IoT Institute, Zhengzhou University, Zhengzhou, 450002, China
| | - Meiyun Wang
- Collaborative Innovation Center for Internet Healthcare, Zhengzhou University, Zhengzhou, 450052, China.,Department of Radiology, People's Hospital of Zhengzhou University, Zhengzhou, 450003, China
| |
Collapse
|
45
|
Salvi M, Acharya UR, Molinari F, Meiburger KM. The impact of pre- and post-image processing techniques on deep learning frameworks: A comprehensive review for digital pathology image analysis. Comput Biol Med 2021; 128:104129. [DOI: 10.1016/j.compbiomed.2020.104129] [Citation(s) in RCA: 102] [Impact Index Per Article: 25.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2020] [Accepted: 11/13/2020] [Indexed: 12/12/2022]
|
46
|
Abdelhafiz D, Bi J, Ammar R, Yang C, Nabavi S. Convolutional neural network for automated mass segmentation in mammography. BMC Bioinformatics 2020; 21:192. [PMID: 33297952 PMCID: PMC7724817 DOI: 10.1186/s12859-020-3521-y] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Accepted: 04/29/2020] [Indexed: 01/02/2023] Open
Abstract
BACKGROUND Automatic segmentation and localization of lesions in mammogram (MG) images are challenging even with employing advanced methods such as deep learning (DL) methods. We developed a new model based on the architecture of the semantic segmentation U-Net model to precisely segment mass lesions in MG images. The proposed end-to-end convolutional neural network (CNN) based model extracts contextual information by combining low-level and high-level features. We trained the proposed model using huge publicly available databases, (CBIS-DDSM, BCDR-01, and INbreast), and a private database from the University of Connecticut Health Center (UCHC). RESULTS We compared the performance of the proposed model with those of the state-of-the-art DL models including the fully convolutional network (FCN), SegNet, Dilated-Net, original U-Net, and Faster R-CNN models and the conventional region growing (RG) method. The proposed Vanilla U-Net model outperforms the Faster R-CNN model significantly in terms of the runtime and the Intersection over Union metric (IOU). Training with digitized film-based and fully digitized MG images, the proposed Vanilla U-Net model achieves a mean test accuracy of 92.6%. The proposed model achieves a mean Dice coefficient index (DI) of 0.951 and a mean IOU of 0.909 that show how close the output segments are to the corresponding lesions in the ground truth maps. Data augmentation has been very effective in our experiments resulting in an increase in the mean DI and the mean IOU from 0.922 to 0.951 and 0.856 to 0.909, respectively. CONCLUSIONS The proposed Vanilla U-Net based model can be used for precise segmentation of masses in MG images. This is because the segmentation process incorporates more multi-scale spatial context, and captures more local and global context to predict a precise pixel-wise segmentation map of an input full MG image. These detected maps can help radiologists in differentiating benign and malignant lesions depend on the lesion shapes. We show that using transfer learning, introducing augmentation, and modifying the architecture of the original model results in better performance in terms of the mean accuracy, the mean DI, and the mean IOU in detecting mass lesion compared to the other DL and the conventional models.
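The two overlap metrics reported, the Dice coefficient index and IoU, are defined directly on binary masks; a short NumPy sketch with toy masks:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray):
    """Dice = 2|A∩B| / (|A|+|B|);  IoU = |A∩B| / |A∪B| for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum() + 1e-12)
    iou = inter / (np.logical_or(pred, truth).sum() + 1e-12)
    return dice, iou

# Toy ground-truth and predicted mass masks.
truth = np.zeros((64, 64), bool); truth[20:40, 20:40] = True
pred = np.zeros((64, 64), bool);  pred[22:42, 22:42] = True
print(dice_and_iou(pred, truth))
```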
Collapse
Affiliation(s)
- Dina Abdelhafiz
- Department of Computer Science and Engineering, University of Connecticut, Storrs, 06269 CT USA
- The Informatics Research Institute (IRI), City of Scientific Research and Technological Applications (SRTA-City), Alexandria, Egypt
| | - Jinbo Bi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, 06269 CT USA
| | - Reda Ammar
- Department of Computer Science and Engineering, University of Connecticut, Storrs, 06269 CT USA
| | - Clifford Yang
- Departments of Diagnostic Imaging, University of Connecticut Health Center, Farmington, 06030 CT USA
| | - Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, 06269 CT USA
| |
Collapse
|
47
|
A Semisupervised Learning Scheme with Self-Paced Learning for Classifying Breast Cancer Histopathological Images. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2020; 2020:8826568. [PMID: 33376479 PMCID: PMC7738795 DOI: 10.1155/2020/8826568] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Revised: 11/02/2020] [Accepted: 11/05/2020] [Indexed: 12/18/2022]
Abstract
The unavailability of large amounts of well-labeled data poses a significant challenge in many medical imaging tasks. Even when sufficient data are accessible, accurately labeling them is an arduous and time-consuming process that requires expert skills. The issue of unbalanced data further compounds these problems and presents a considerable challenge for many machine learning algorithms. In light of this, the ability to develop algorithms that can exploit large amounts of unlabeled data together with a small amount of labeled data, while demonstrating robustness to data imbalance, offers promising prospects for building highly efficient classifiers. This work proposes a semisupervised learning method that integrates self-training and self-paced learning to generate and select pseudolabeled samples for classifying breast cancer histopathological images. A novel pseudolabel generation and selection algorithm is introduced in the learning scheme to generate and select highly confident pseudolabeled samples from both well-represented and less-represented classes. Such a learning approach improves performance by jointly learning a model and optimizing the generation of pseudolabels on unlabeled target data to augment the training data, then retraining the model with the generated labels. A class-balancing framework that normalizes the class-wise confidence scores is also proposed to prevent the model from ignoring samples from less-represented classes (hard-to-learn samples), thereby effectively handling the issue of data imbalance. Extensive experimental evaluation of the proposed method on the BreakHis dataset demonstrates its effectiveness.
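For intuition, the following is a minimal sketch of class-normalized pseudo-label selection in the spirit described above; the function name, normalization scheme, and selection ratio are assumptions chosen for illustration and may differ from the paper's exact algorithm.

```python
import numpy as np

def select_pseudolabels(probs: np.ndarray, ratio: float = 0.2):
    """Select high-confidence pseudo-labels from softmax outputs `probs`
    (shape: n_samples x n_classes). Confidences are normalized within each
    predicted class so that under-represented (hard-to-learn) classes still
    contribute samples instead of being drowned out by easy, majority classes."""
    labels = probs.argmax(axis=1)          # tentative pseudo-labels
    conf = probs.max(axis=1)               # raw confidence per sample
    selected = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        # Normalize confidences class-wise, then keep the top fraction per class.
        norm_conf = conf[idx] / (conf[idx].max() + 1e-12)
        n_keep = max(1, int(ratio * len(idx)))
        keep = idx[np.argsort(-norm_conf)[:n_keep]]
        selected.extend(keep.tolist())
    return np.array(sorted(selected)), labels

# Illustrative usage with random softmax-like scores for 3 classes.
rng = np.random.default_rng(1)
scores = rng.random((100, 3))
probs = scores / scores.sum(axis=1, keepdims=True)
chosen_idx, pseudo = select_pseudolabels(probs)
print(len(chosen_idx), pseudo[chosen_idx][:10])
```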
Collapse
|
48
|
Ahmad HM, Khan MJ, Yousaf A, Ghuffar S, Khurshid K. Deep Learning: A Breakthrough in Medical Imaging. Curr Med Imaging 2020; 16:946-956. [PMID: 33081657 DOI: 10.2174/1573405615666191219100824] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Revised: 11/25/2019] [Accepted: 12/06/2019] [Indexed: 02/08/2023]
Abstract
Deep learning has attracted great attention in the medical imaging community as a promising solution for automated, fast, and accurate medical image analysis, which is mandatory for quality healthcare. Convolutional neural networks and their variants have become the most preferred and widely used deep learning models in medical image analysis. In this paper, concise overviews of the modern deep learning models applied in medical image analysis are provided, and the key tasks performed by deep learning models, i.e. classification, segmentation, retrieval, detection, and registration, are reviewed in detail. Some recent studies have shown that deep learning models can outperform medical experts in certain tasks. With the significant breakthroughs made by deep learning methods, it is expected that patients will soon be able to safely and conveniently interact with AI-based medical systems, and such intelligent systems will actually improve patient healthcare. There are various complexities and challenges involved in deep learning-based medical image analysis, such as limited datasets, but researchers are actively working in this area to mitigate these challenges and further improve healthcare with AI.
Collapse
Affiliation(s)
- Hafiz Mughees Ahmad
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan
| | - Muhammad Jaleed Khan
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan
| | - Adeel Yousaf
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan
- Department of Avionics Engineering, Institute of Space Technology, Islamabad, Pakistan
| | - Sajid Ghuffar
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan
- Department of Space Science, Institute of Space Technology, Islamabad, Pakistan
| | - Khurram Khurshid
- Artificial Intelligence and Computer Vision (iVision) Lab, Department of Electrical Engineering, Institute of Space Technology, Islamabad, Pakistan
| |
Collapse
|
49
|
Das P, Das A. Shift invariant extrema based feature analysis scheme to discriminate the spiculation nature of mammograms. ISA TRANSACTIONS 2020; 103:156-165. [PMID: 32216985 DOI: 10.1016/j.isatra.2020.03.018] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/18/2019] [Revised: 03/11/2020] [Accepted: 03/12/2020] [Indexed: 06/10/2023]
Abstract
Since the uncontrolled growth of malignant masses introduces uneven shape irregularities and spiculations in the boundary, shape-representing, shift-invariant features are essential to resolve the discrimination problem. However, the ambiguous nature of the shape, size, margin, and orientation of masses produces imprecise feature values. In view of this, a new extrema-based feature characterization scheme is proposed for capturing the radiating nature of mass morphology. Computation of the extrema patterns requires only a few algorithmic steps. Besides this, the present study employs an automated enhancement procedure to improve classification accuracy. Experimental results show that extrema characterization reduces feature redundancy and produces high efficiency in reasonably low time.
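As a hypothetical illustration only (not the authors' published algorithm), one common way to capture a radiating, spiculated boundary with extrema is to count the local maxima and minima of the centroid-to-boundary distance profile; the function name and tolerance below are assumptions.

```python
import numpy as np

def radial_extrema_count(contour: np.ndarray) -> int:
    """Count local maxima and minima of the centroid-to-boundary radial profile.
    A spiculated (radiating) boundary produces many more extrema than a smooth,
    round mass. `contour` is an (N, 2) array of ordered boundary points."""
    centroid = contour.mean(axis=0)
    radii = np.linalg.norm(contour - centroid, axis=1)
    prev, nxt = np.roll(radii, 1), np.roll(radii, -1)
    tol = 1e-6 * radii.mean()  # ignore floating-point jitter on smooth contours
    maxima = (radii > prev + tol) & (radii > nxt + tol)
    minima = (radii < prev - tol) & (radii < nxt - tol)
    return int(maxima.sum() + minima.sum())

# Illustrative usage: a smooth circle versus a star-like, spiculated contour.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
r = 1 + 0.3 * np.sin(12 * theta)
star = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
print(radial_extrema_count(circle), radial_extrema_count(star))  # few vs. many extrema
```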
Collapse
Affiliation(s)
- Poulomi Das
- OmDayal Group of Institutions, Maulana Abul Kalam Azad University of Technology, India.
| | - Arpita Das
- Department of Radio Physics and Electronics, University of Calcutta, Rajabazar Science College Campus, India.
| |
Collapse
|
50
|
Gnanasekaran VS, Joypaul S, Sundaram PM. A Survey on Machine Learning Algorithms for the Diagnosis of Breast Masses with Mammograms. Curr Med Imaging 2020; 16:639-652. [DOI: 10.2174/1573405615666190903141554] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2019] [Revised: 07/08/2019] [Accepted: 07/17/2019] [Indexed: 01/22/2023]
Abstract
Breast cancer has been the leading cancer among women for the past 60 years. There are no effective mechanisms for completely preventing breast cancer; rather, it can be detected at earlier stages so that unnecessary biopsies can be reduced. Although several imaging modalities are available for capturing abnormalities in breasts, mammography is the most commonly used technique because of its low cost. Computer-Aided Detection (CAD) systems play a key role in analyzing mammogram images to diagnose abnormalities and assist radiologists in diagnosis. This paper provides an outline of the state-of-the-art machine learning algorithms developed in recent years for the detection of breast cancer. We begin the review with a concise introduction to the fundamental concepts related to mammograms and CAD systems, and then focus on the techniques used in the diagnosis of breast cancer with mammograms.
Collapse
Affiliation(s)
| | - Sutha Joypaul
- AAA College of Engineering and Technology, Sivakasi 626123, Virudhunagar District, Tamil Nadu, India
| | | |
Collapse
|