1. Jones MA, Zhang K, Faiz R, Islam W, Jo J, Zheng B, Qiu Y. Utilizing Pseudo Color Image to Improve the Performance of Deep Transfer Learning-Based Computer-Aided Diagnosis Schemes in Breast Mass Classification. J Imaging Inform Med 2024. PMID: 39455542. DOI: 10.1007/s10278-024-01237-0.
Abstract
The purpose of this study is to investigate the impact of using morphological information in classifying suspicious breast lesions. The widespread use of deep transfer learning can significantly improve the performance of mammogram-based CADx schemes. However, digital mammograms are grayscale images, while deep learning models are typically optimized on natural images containing three channels. The grayscale mammograms must therefore be converted into three-channel images before being fed to deep transfer models. This study develops a novel pseudo color image generation method that uses mass contour information to enhance classification performance. A total of 830 breast cancer cases were retrospectively collected, comprising 310 benign and 520 malignant cases. For each case, four regions of interest (ROIs) were extracted from the grayscale images captured for the CC and MLO views of the two breasts. Seven pseudo color image sets were then generated as input to the deep learning models, created from combinations of the original grayscale image, a histogram-equalized image, a bilaterally filtered image, and a segmented mass. The output features from four identical pre-trained deep learning models were concatenated and then processed by a support vector machine-based classifier to generate the final benign/malignant labels. The performance of each image set was evaluated and compared. The results demonstrate that the pseudo color sets containing the manually segmented mass performed significantly better than all other pseudo color sets, achieving an AUC (area under the ROC curve) of up to 0.889 ± 0.012 and an overall accuracy of up to 0.816 ± 0.020. The performance improvement was, however, dependent on the accuracy of the mass segmentation.
The results of this study support our hypothesis that adding accurately segmented mass contours provides complementary information, thereby enhancing the performance of the deep transfer model in classifying suspicious breast lesions.
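The channel-stacking idea this abstract describes can be sketched in a few lines. The following is a minimal illustration of ours, not the authors' code: the helper names (`hist_equalize`, `to_pseudo_color`) are assumptions, a pure-Python histogram equalization stands in for the paper's preprocessing, a binary mask channel stands in for the segmented mass, and the bilateral filter is omitted for brevity.

```python
# Sketch: pack a grayscale ROI, its histogram-equalized version, and a
# segmented-mass mask into the three channels an ImageNet-pretrained
# model expects. Images are flat lists of 8-bit pixels for simplicity.

def hist_equalize(img, levels=256):
    """Classic histogram equalization for a flat list of 8-bit pixels."""
    n = len(img)
    hist = [0] * levels
    for p in img:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)      # first non-empty bin
    scale = (levels - 1) / max(n - cdf_min, 1)
    return [round((cdf[p] - cdf_min) * scale) for p in img]

def to_pseudo_color(gray, mass_mask):
    """Per-pixel (R, G, B) = (original, equalized, mask * 255)."""
    eq = hist_equalize(gray)
    return [(g, e, 255 if m else 0) for g, e, m in zip(gray, eq, mass_mask)]

roi = [10, 10, 10, 200, 200, 250]    # toy 6-pixel "ROI"
mask = [0, 0, 0, 1, 1, 1]            # toy segmented mass
pseudo = to_pseudo_color(roi, mask)
```

Feeding the mask through its own channel is what lets the downstream network see the contour information the study found so valuable.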
Affiliation(s)
- Meredith A Jones
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Ke Zhang
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Rowzat Faiz
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Warid Islam
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Javier Jo
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yuchen Qiu
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
2. Oza U, Gohel B, Kumar P, Oza P. Presegmenter Cascaded Framework for Mammogram Mass Segmentation. Int J Biomed Imaging 2024; 2024:9422083. PMID: 39155940; PMCID: PMC11329304. DOI: 10.1155/2024/9422083.
Abstract
Accurate segmentation of breast masses in mammogram images is essential for early cancer diagnosis and treatment planning. Several deep learning (DL) models have been proposed for whole-mammogram segmentation and mass patch/crop segmentation. However, current DL models for breast mammogram mass segmentation face several limitations, including false positives (FPs), false negatives (FNs), and challenges with the end-to-end approach. This paper presents a novel two-stage end-to-end cascaded breast mass segmentation framework that incorporates a saliency map of potential mass regions to guide the DL models. The first-stage segmentation model of the cascade generates a saliency map to establish a coarse region of interest (ROI), effectively narrowing the focus to probable mass regions. The proposed presegmenter attention (PSA) blocks are introduced in the second-stage segmentation model to enable dynamic adaptation to the most informative regions within the mammogram images based on the generated saliency map. A comparative analysis of the Attention U-Net model with and without the cascade framework is provided in terms of Dice scores, precision, recall, FP rates (FPRs), and FN outcomes. Experimental results consistently demonstrate enhanced breast mass segmentation by the proposed cascade framework across all three datasets: INbreast, CSAW-S, and DMID. The cascade framework improves the Dice score by about 6% for the INbreast dataset, 3% for CSAW-S, and 2% for DMID. Similarly, FN outcomes were reduced by 10% for INbreast, 19% for CSAW-S, and 4% for DMID. Moreover, the framework's performance is validated with varying state-of-the-art segmentation models such as DeepLabV3+ and Swin Transformer U-Net.
The presegmenter cascade framework has the potential to improve segmentation performance and mitigate FNs when integrated with any medical image segmentation framework, irrespective of the choice of model.
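One way to read the saliency-guided second stage is as a soft gating of feature maps by the first-stage output. The sketch below is our interpretation, not the paper's PSA implementation: `presegmenter_gate` is an assumed name, and it simply scales each feature by a sigmoid of the corresponding saliency value.

```python
import math

def sigmoid(x):
    """Squash a saliency logit into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def presegmenter_gate(features, saliency):
    """Scale second-stage features by first-stage saliency, so probable
    mass regions (high saliency) dominate and background is damped."""
    return [f * sigmoid(s) for f, s in zip(features, saliency)]
```

With zero saliency a feature is halved; strongly positive saliency passes it through almost unchanged, which is the "dynamic adaptation to the most informative regions" described above, in miniature.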
Affiliation(s)
- Urvi Oza
- Computer Science, Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, Gujarat, India
- Bakul Gohel
- Computer Science, Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, Gujarat, India
- Pankaj Kumar
- Computer Science & Engineering, Nirma University, Ahmedabad, Gujarat, India
- Parita Oza
- Computer Science & Engineering, Nirma University, Ahmedabad, Gujarat, India
3. Sureshkumar V, Prasad RSN, Balasubramaniam S, Jagannathan D, Daniel J, Dhanasekaran S. Breast Cancer Detection and Analytics Using Hybrid CNN and Extreme Learning Machine. J Pers Med 2024; 14:792. PMID: 39201984; PMCID: PMC11355507. DOI: 10.3390/jpm14080792.
Abstract
Early detection of breast cancer is essential for increasing survival rates, as it is one of the primary causes of death for women globally. Mammograms are extensively used by physicians for diagnosis, but selecting appropriate algorithms for image enhancement, segmentation, feature extraction, and classification remains a significant research challenge. This paper presents a computer-aided diagnosis (CAD)-based hybrid model combining convolutional neural networks (CNNs) with a pruned ensembled extreme learning machine (HCPELM) to enhance breast cancer detection, segmentation, feature extraction, and classification. After artifacts and pectoral muscles are removed, the model employs the rectified linear unit (ReLU) activation function, and hybridizing the HCPELM with the CNN improves feature extraction. The hybrid elements are convolutional and fully connected layers: convolutional layers extract spatial features such as edges and textures, with more complex features emerging in deeper layers, while the fully connected layers combine these features non-linearly to perform the final classification. The ELM performs the classification and recognition tasks, aiming for state-of-the-art performance. The hybrid classifier is used for transfer learning by freezing certain layers and modifying the architecture to reduce the number of parameters, easing cancer detection. The HCPELM classifier was trained on the MIAS database and evaluated against benchmark methods. It achieved a breast image recognition accuracy of 86%, outperforming benchmark deep learning models. HCPELM thus demonstrates superior performance in early detection and diagnosis, aiding healthcare practitioners in breast cancer diagnosis.
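The extreme learning machine at the core of this pipeline trains very differently from backpropagation: the hidden-layer weights are random and frozen, and only the output weights get a closed-form least-squares fit. The toy below is our own single-hidden-unit reduction of that idea, with assumed names (`elm_fit`, `elm_predict`); it is not the HCPELM implementation.

```python
import random

def relu(x):
    """The ReLU activation the paper uses in its hidden units."""
    return x if x > 0.0 else 0.0

def elm_fit(X, y, seed=1):
    """ELM in miniature: random frozen hidden weights (w, b), then a
    scalar output weight beta fit by least squares in closed form.
    Assumes at least one hidden activation is non-zero."""
    rng = random.Random(seed)
    w = [rng.uniform(-1.0, 1.0) for _ in X[0]]
    b = rng.uniform(-1.0, 1.0)
    h = [relu(sum(wi * xi for wi, xi in zip(w, x)) + b) for x in X]
    beta = sum(hi * yi for hi, yi in zip(h, y)) / sum(hi * hi for hi in h)
    return w, b, beta

def elm_predict(model, x):
    w, b, beta = model
    return beta * relu(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

Because only `beta` is solved for, "training" is a single linear-algebra step rather than an iterative descent, which is what makes ELM classifiers fast to fit.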
Affiliation(s)
- Vidhushavarshini Sureshkumar
- Department of Computer Science and Engineering, SRM Institute of Science and Technology, Vadapalani, Chennai 600026, India
- Dhayanithi Jagannathan
- Department of Computer Science and Engineering, Sona College of Technology, Salem 636005, India
- Jayanthi Daniel
- Department of Electronics and Communication Engineering, Rajalakshmi Engineering College, Chennai 602105, India
4. Hou W, Zou L, Wang D. Tumor Segmentation in Intraoperative Fluorescence Images Based on Transfer Learning and Convolutional Neural Networks. Surg Innov 2024. PMID: 38619039. DOI: 10.1177/15533506241246576.
Abstract
OBJECTIVE To propose a transfer learning-based method for tumor segmentation in intraoperative fluorescence images, which will assist surgeons in efficiently and accurately identifying the boundaries of tumors of interest. METHODS We employed transfer learning and deep convolutional neural networks (DCNNs) for tumor segmentation. Specifically, we first pre-trained four networks on the ImageNet dataset to extract low-level features. Subsequently, we fine-tuned these networks on two fluorescence image datasets (ABFM and DTHP) separately to enhance segmentation performance on fluorescence images. Finally, we tested the trained models on the DTHL dataset. The performance of this approach was compared and evaluated against DCNNs trained end-to-end and the traditional level-set method. RESULTS The transfer learning-based UNet++ model achieved segmentation accuracies of 82.17% on the ABFM dataset, 95.61% on the DTHP dataset, and 85.49% on the DTHL test set. On the DTHP dataset, the pre-trained DeepLab v3+ network performed exceptionally well, with a segmentation accuracy of 96.48%; indeed, all models achieved segmentation accuracies of over 90% on that dataset. CONCLUSION To the best of our knowledge, this study is the first to explore tumor segmentation on intraoperative fluorescence images. The results show that, compared to traditional methods, deep learning has significant advantages in improving segmentation performance, and transfer learning enables deep learning models to perform better on small-sample fluorescence image data than end-to-end training. This provides strong support for surgeons to obtain more reliable and accurate segmentation results during surgery.
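The fine-tuning recipe in METHODS (pre-train, freeze the early layers, adapt the rest on the small target dataset) can be reduced to its simplest possible form: a frozen feature extractor with only a small head trained on the new task. Everything below is an illustrative stand-in of ours (the lambda "backbone", the `fit_head` name, the 1-D data), not the paper's networks.

```python
def fit_head(features, labels, lr=0.01, epochs=2000):
    """Train only a scalar head (w, b) by per-sample gradient descent;
    the backbone that produced `features` stays frozen throughout."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for f, y in zip(features, labels):
            err = w * f + b - y
            w -= lr * err * f
            b -= lr * err
    return w, b

frozen_backbone = lambda x: 2.0 * x + 1.0   # stands in for pretrained layers
xs = [0.0, 1.0, 2.0, 3.0]
feats = [frozen_backbone(x) for x in xs]    # frozen features: 1, 3, 5, 7
ys = [1.0, 2.0, 3.0, 4.0]                   # new task: y = x + 1
w, b = fit_head(feats, ys)                  # head learns y = 0.5 * f + 0.5
```

Only two parameters are updated, which is exactly why transfer learning copes with small-sample fluorescence data better than training everything end-to-end.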
Affiliation(s)
- Weijia Hou
- College of Science, Nanjing Forestry University, Nanjing, China
- Liwen Zou
- Department of Mathematics, Nanjing University, Nanjing, China
- Dong Wang
- Group A: Large-Scale Scientific Computing and Media Imaging, Nanjing Center for Applied Mathematics, Nanjing, China
5. Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024; 14:1281922. PMID: 38410114; PMCID: PMC10894909. DOI: 10.3389/fonc.2024.1281922.
Abstract
X-ray mammography is currently considered the gold standard for breast cancer screening; however, it has limitations in terms of sensitivity and specificity. With the rapid advancement of deep learning techniques, it is possible to customize mammography for each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper reviews recent achievements of deep learning-based mammography for breast cancer detection and classification, and highlights the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, it is essential to address the challenges associated with implementing this technology in clinical settings. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability in order to successfully integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that these findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate diagnosis with high sensitivity and specificity for breast cancer.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
6. Aliniya P, Nicolescu M, Nicolescu M, Bebis G. Improved Loss Function for Mass Segmentation in Mammography Images Using Density and Mass Size. J Imaging 2024; 10:20. PMID: 38249005; PMCID: PMC10816853. DOI: 10.3390/jimaging10010020.
Abstract
Mass segmentation is one of the fundamental tasks in breast cancer identification due to the comprehensive information it provides, including the location, size, and border of masses. Despite significant improvements in performance on this task, certain properties of the data, such as pixel class imbalance and the diverse appearance and sizes of masses, remain challenging. Recently, there has been a surge of articles proposing to address pixel class imbalance through the formulation of the loss function. While these methods demonstrate enhanced performance, they mostly fail to address the problem comprehensively. In this paper, we propose a new perspective on the calculation of the loss that enables the binary segmentation loss to incorporate sample-level information and region-level losses in a hybrid loss setting. We propose two variations of the loss that include mass size and density in the loss calculation, and we also introduce a single-loss variant that uses mass size and density to enhance the focal loss. We tested the proposed method on the benchmark CBIS-DDSM and INbreast datasets. Our approach outperformed the baseline and state-of-the-art methods on both datasets.
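The idea of folding sample-level mass size into a pixel-wise loss can be sketched concretely. This is a loose reading of ours, not the paper's formulation: the weighting rule (rarer mass, bigger weight) and the names `focal_loss` and `size_weighted_loss` are assumptions for illustration.

```python
import math

def focal_loss(p, y, gamma=2.0):
    """Standard binary focal loss for one pixel with predicted
    probability p and label y in {0, 1}."""
    pt = p if y == 1 else 1.0 - p
    return -((1.0 - pt) ** gamma) * math.log(max(pt, 1e-12))

def size_weighted_loss(probs, labels, gamma=2.0):
    """Hybrid idea in miniature: the per-pixel focal loss is scaled by a
    sample-level weight derived from mass size (fraction of positive
    pixels), so small masses are not drowned out by background."""
    pos = sum(labels)
    weight = 1.0 / max(pos / len(labels), 1e-3)  # smaller mass -> bigger weight
    per_pixel = [focal_loss(p, y, gamma) for p, y in zip(probs, labels)]
    return weight * sum(per_pixel) / len(per_pixel)
```

The key point is that the weight is computed per sample (region level), while the focal term acts per pixel, mirroring the hybrid setting the abstract describes.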
Affiliation(s)
- Parvaneh Aliniya
- Computer Science and Engineering Department, College of Engineering, University of Nevada, Reno, NV 89557, USA
- Mircea Nicolescu
- Computer Science and Engineering Department, College of Engineering, University of Nevada, Reno, NV 89557, USA
7. Gao Y, Lin J, Zhou Y, Lin R. The application of traditional machine learning and deep learning techniques in mammography: a review. Front Oncol 2023; 13:1213045. PMID: 37637035; PMCID: PMC10453798. DOI: 10.3389/fonc.2023.1213045.
Abstract
Breast cancer, the most prevalent malignant tumor among women, poses a significant threat to patients' physical and mental well-being. Recent advances in early screening technology have facilitated the early detection of an increasing number of breast cancers, resulting in a substantial improvement in patients' overall survival rates. The primary techniques used for early breast cancer diagnosis include mammography, breast ultrasound, breast MRI, and pathological examination. However, the clinical interpretation and analysis of the images produced by these technologies often involve significant labor costs and rely heavily on the expertise of clinicians, leading to inherent deviations. Consequently, artificial intelligence (AI) has emerged as a valuable technology in breast cancer diagnosis. Artificial intelligence includes machine learning (ML) and deep learning (DL). By simulating human behavior to learn from and process data, ML and DL aid in lesion localization, reduce misdiagnosis rates, and improve accuracy. This narrative review provides a comprehensive overview of the current research status of mammography using traditional ML and DL algorithms. It particularly highlights the latest advancements in DL methods for mammogram image analysis and offers insights into future development directions.
Affiliation(s)
- Ying’e Gao
- School of Nursing, Fujian Medical University, Fuzhou, China
- Jingjing Lin
- School of Nursing, Fujian Medical University, Fuzhou, China
- Yuzhuo Zhou
- Department of Surgery, Hannover Medical School, Hannover, Germany
- Rongjin Lin
- School of Nursing, Fujian Medical University, Fuzhou, China
- Department of Nursing, the First Affiliated Hospital of Fujian Medical University, Fuzhou, China
8. Si T, Patra DK, Mallik S, Bandyopadhyay A, Sarkar A, Qin H. Identification of breast lesion through integrated study of gorilla troops optimization and rotation-based learning from MRI images. Sci Rep 2023; 13:11577. PMID: 37463919; PMCID: PMC10354050. DOI: 10.1038/s41598-023-36300-3.
Abstract
Breast cancer has emerged as the most life-threatening disease among women around the world. Early detection and treatment of breast cancer are thought to reduce the need for surgery and boost the survival rate. Magnetic Resonance Imaging (MRI) segmentation techniques for breast cancer diagnosis are investigated in this article. Kapur's entropy-based multilevel thresholding is used to determine optimal values for breast DCE-MRI lesion segmentation using Gorilla Troops Optimization (GTO). An improved GTO, called GTORBL, is developed by incorporating rotational opposition-based learning (RBL) and applied to the same problem. The proposed approaches are tested on 100 T2-weighted sagittal (T2 WS) DCE-MRI slices from 20 patients, and are compared with the Tunicate Swarm Algorithm (TSA), Particle Swarm Optimization (PSO), the Arithmetic Optimization Algorithm (AOA), the Slime Mould Algorithm (SMA), Multi-verse Optimization (MVO), Hidden Markov Random Field (HMRF), Improved Markov Random Field (IMRF), and Conventional Markov Random Field (CMRF). The Dice Similarity Coefficient (DSC), sensitivity, and accuracy of the proposed GTO-based approach are [Formula: see text], [Formula: see text], and [Formula: see text], respectively. The proposed GTORBL-based segmentation method achieves an accuracy of [Formula: see text], a sensitivity of [Formula: see text], and a DSC of [Formula: see text]. A one-way ANOVA test followed by Tukey's HSD and the Wilcoxon signed-rank test are used to examine the results. Furthermore, multi-criteria decision making is used to evaluate overall performance based on sensitivity, accuracy, false-positive rate, precision, specificity, [Formula: see text]-score, geometric mean, and DSC. According to both quantitative and qualitative findings, the proposed strategies outperform the other compared methodologies.
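Kapur's entropy criterion, which the GTO metaheuristic optimizes here, is simple to state: pick the threshold(s) that maximize the summed Shannon entropies of the resulting intensity classes. The sketch below is ours, reduced to a single threshold, with exhaustive search standing in for GTO; the function names are assumptions.

```python
import math

def kapur_entropy(hist, t):
    """Kapur's criterion for one threshold t on an intensity histogram:
    the sum of Shannon entropies of the below-t and at-or-above-t classes."""
    total = sum(hist)

    def class_entropy(counts):
        w = sum(counts) / total          # class probability mass
        if w == 0:
            return 0.0
        h = 0.0
        for c in counts:
            if c:
                p = c / total / w        # within-class probability
                h -= p * math.log(p)
        return h

    return class_entropy(hist[:t]) + class_entropy(hist[t:])

def best_threshold(hist):
    """Exhaustive search stands in for the GTO/GTORBL optimizer, which
    becomes necessary once multiple thresholds make the search space large."""
    return max(range(1, len(hist)), key=lambda t: kapur_entropy(hist, t))
```

For a bimodal histogram the maximizing threshold falls in the valley between the two modes, which is exactly the lesion/background split the segmentation relies on.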
Affiliation(s)
- Tapas Si
- Department of Computer Science & Engineering, University of Engineering & Management, Jaipur, GURUKUL, Sikar Road (NH-11), Udaipuria Mod, Jaipur, Rajasthan, 303807, India
- Dipak Kumar Patra
- Department of Computer Science, Hijli College, Kharagpur, West Bengal, 721306, India
- Saurav Mallik
- Department of Environmental Health, Harvard T H Chan School of Public Health, Boston, MA, USA
- Anjan Bandyopadhyay
- School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT), Bhubaneswar, Odisha, India
- Achyuth Sarkar
- Department of Computer Science & Engineering, National Institute of Technology Arunachal Pradesh, Arunachal Pradesh, 791113, India
- Hong Qin
- Department of Computer Science and Engineering, University of Tennessee at Chattanooga, Chattanooga, TN, USA
9. Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. PMID: 37509272; PMCID: PMC10377683. DOI: 10.3390/cancers15143608.
Abstract
(1) Background: The application of deep learning technology to cancer diagnosis based on medical images is one of the research hotspots in artificial intelligence and computer vision. Given the rapid development of deep learning methods, the demand for very high accuracy and timeliness in cancer diagnosis, and the inherent particularity and complexity of medical imaging, a comprehensive review of relevant studies is necessary to help readers better understand the current research status and ideas. (2) Methods: Five radiological imaging modalities, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), together with histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced neural network approaches emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Overfitting prevention methods are summarized, including batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning technology in medical image-based cancer analysis is sorted out. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: There is a need for more public standard databases for cancer. Pre-trained models based on deep neural networks have the potential to be improved, and special attention should be paid to research on multimodal data fusion and the supervised paradigm.
Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
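Of the overfitting-prevention methods this review summarizes, dropout is the easiest to show end to end. Below is our own minimal sketch of inverted dropout (the standard formulation; the function name is an assumption): at train time each unit is zeroed with probability p and the survivors are rescaled so the expected activation is unchanged, while at inference the layer is a no-op.

```python
import random

def dropout(activations, p=0.5, training=True, seed=None):
    """Inverted dropout: zero each unit with probability p during
    training and divide survivors by (1 - p); identity at inference."""
    if not training or p == 0.0:
        return list(activations)
    rng = random.Random(seed)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

Randomly silencing units prevents co-adaptation between them, which is why dropout sits alongside batch normalization and data augmentation in the review's list of regularizers.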
Grants
- RM32G0178B8 BBSRC
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
10. Watanabe H, Hayashi S, Kondo Y, Matsuyama E, Hayashi N, Ogura T, Shimosegawa M. Quality control system for mammographic breast positioning using deep learning. Sci Rep 2023; 13:7066. PMID: 37127674; PMCID: PMC10151341. DOI: 10.1038/s41598-023-34380-9.
Abstract
This study proposes a deep convolutional neural network (DCNN) classification method for the quality control and validation of breast positioning criteria in mammography. A total of 1631 mediolateral oblique mammographic views were collected from an open database. We designed two main steps for mammographic verification: automated detection of the positioning part, and classification into three scales that determine positioning quality using DCNNs. After acquiring labeled mammograms with three scales visually evaluated based on guidelines, the first step automatically detected the region of interest of the subject part by image processing. The next step classified mammographic positioning accuracy into the three scales using four representative DCNNs. The experimental results showed that the best positioning classification accuracy was 0.7836, achieved by VGG16 for the inframammary fold, while Xception achieved an accuracy of 0.7278 for the nipple profile. Furthermore, using the softmax function, the breast positioning criteria could be evaluated quantitatively by presenting the predicted value, i.e., the probability of the positioning-accuracy decision. The proposed method can therefore be applied without individual qualitative evaluation and has the potential to improve the quality control and validation of breast positioning criteria in mammography.
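The quantitative score mentioned above comes straight from the softmax layer: the network's raw outputs (logits) for the three positioning-quality scales are normalized into a probability-like distribution. A minimal, numerically stable sketch (ours, not the study's code):

```python
import math

def softmax(logits):
    """Turn per-scale logits into probabilities that sum to 1.
    Subtracting the max first avoids overflow in exp()."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]
```

The largest probability gives the predicted quality scale, and its magnitude serves as the confidence value used for quantitative evaluation.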
Affiliation(s)
- Haruyuki Watanabe
- School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
- Saeko Hayashi
- Department of Radiology, National Hospital Organization Shibukawa Medical Center, Shibukawa, Japan
- Yohan Kondo
- Graduate School of Health Sciences, Niigata University, Niigata, Japan
- Eri Matsuyama
- Faculty of Informatics, The University of Fukuchiyama, Fukuchiyama, Japan
- Norio Hayashi
- School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
- Toshihiro Ogura
- School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
- Masayuki Shimosegawa
- School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
11. Goceri E. Medical image data augmentation: techniques, comparisons and interpretations. Artif Intell Rev 2023; 56:1-45. PMID: 37362888; PMCID: PMC10027281. DOI: 10.1007/s10462-023-10453-z.
Abstract
Designing deep learning-based methods for medical images has always been an attractive area of research to assist clinicians in rapid examination and accurate diagnosis. Such methods need large datasets covering all relevant variations during their training stages. On the other hand, medical images are always scarce for several reasons: too few patients with some diseases, patients unwilling to allow their images to be used, a lack of medical equipment, and the inability to obtain images that meet the desired criteria. This issue leads to bias in datasets, overfitting, and inaccurate results. Data augmentation is a common solution, and various augmentation techniques have been applied to different types of images in the literature. However, it is not clear which data augmentation technique provides more efficient results for which image type, since different diseases are handled, different network architectures are used, and these architectures are trained and tested with different dataset sizes in the literature. Therefore, this work examines the augmentation techniques used to improve the performance of deep learning-based diagnosis of diseases in different organs (brain, lung, breast, and eye) from different imaging modalities (MR, CT, mammography, and fundoscopy). The most commonly used augmentation methods have also been implemented, and their effectiveness in classification with a deep network is discussed based on quantitative performance evaluations. Experiments indicated that augmentation techniques should be chosen carefully according to image type.
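The geometric transforms this survey compares are simple to implement from scratch. Here is our own tiny augmentation set over images represented as lists of rows (function names are ours); real pipelines would add intensity transforms, elastic deformations, and the other methods the review evaluates.

```python
def hflip(img):
    """Mirror an image left-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate a 2-D image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """One original plus three geometric variants: flip, 90°, 180°."""
    return [img, hflip(img), rot90(img), rot90(rot90(img))]
```

Each variant is a label-preserving view of the same anatomy, which is how augmentation multiplies a scarce dataset without collecting new patients.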
Affiliation(s)
- Evgin Goceri
- Department of Biomedical Engineering, Engineering Faculty, Akdeniz University, Antalya, Turkey
12. Kwak D, Choi J, Lee S. Rethinking Breast Cancer Diagnosis through Deep Learning Based Image Recognition. Sensors (Basel) 2023; 23:2307. PMID: 36850906; PMCID: PMC9958611. DOI: 10.3390/s23042307.
Abstract
This paper explores techniques for diagnosing breast cancer using deep learning-based medical image recognition. X-ray (mammography) images, ultrasound images, and histopathology images are used to improve diagnostic accuracy by classifying breast cancer and inferring the affected location. To this end, image recognition strategies for maximal diagnostic accuracy on each type of medical image data are investigated in terms of image classification (VGGNet19, ResNet50, DenseNet121, EfficientNet v2), image segmentation (UNet, ResUNet++, DeepLab v3), the related loss functions (binary cross-entropy, Dice loss, Tversky loss), and data augmentation. The evaluations show that, when using filter-based data augmentation, ResNet50 achieved the best image classification performance, while UNet achieved the best segmentation performance on both X-ray and ultrasound images. Applying the proposed strategies improves accuracy by 33.3% for segmentation of X-ray images, 29.9% for segmentation of ultrasound images, and 22.8% for classification of histopathology images.
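Of the loss functions compared above, Tversky loss generalizes Dice loss by letting false positives and false negatives be weighted separately. Our minimal sketch over flat (possibly soft) binary masks, not the paper's code:

```python
def tversky_loss(pred, target, alpha=0.5, beta=0.5, eps=1e-6):
    """Tversky loss over flat masks; alpha weights false positives and
    beta weights false negatives. alpha = beta = 0.5 recovers Dice loss."""
    tp = sum(p * t for p, t in zip(pred, target))
    fp = sum(p * (1 - t) for p, t in zip(pred, target))
    fn = sum((1 - p) * t for p, t in zip(pred, target))
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```

Raising `beta` above `alpha` penalizes missed lesion pixels more than spurious ones, a common choice when false negatives are the costlier error, as in cancer screening.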
Affiliation(s)
- Deawon Kwak
- Electronic Engineering Department, Dong Seoul University, Seongnam 13120, Republic of Korea
- Jiwoo Choi
- Choi’s Breast Clinic, 197, Gwongwang-ro, Paldal-gu, Suwon-si 16489, Republic of Korea
- Sungjin Lee
- Electronic Engineering Department, Dong Seoul University, Seongnam 13120, Republic of Korea
|
13
|
Loizidou K, Elia R, Pitris C. Computer-aided breast cancer detection and classification in mammography: A comprehensive review. Comput Biol Med 2023; 153:106554. [PMID: 36646021 DOI: 10.1016/j.compbiomed.2023.106554] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 12/13/2022] [Accepted: 01/11/2023] [Indexed: 01/15/2023]
Abstract
Cancer is the second leading cause of mortality worldwide and has been identified as a perilous disease. Breast cancer accounts for ∼20% of all new cancer cases worldwide, making it a major cause of morbidity and mortality. Mammography is an effective screening tool for the early detection and management of breast cancer. However, the identification and interpretation of breast lesions is challenging even for expert radiologists. For that reason, several Computer-Aided Diagnosis (CAD) systems are being developed to assist radiologists in accurately detecting and/or classifying breast cancer. This review examines the recent literature on the automatic detection and/or classification of breast cancer in mammograms, using both conventional feature-based machine learning and deep learning algorithms. It begins with a comparison of algorithms developed specifically for the detection and/or classification of two types of breast abnormalities, micro-calcifications and masses, followed by the use of sequential mammograms for improving algorithm performance. The available Food and Drug Administration (FDA) approved CAD systems related to triage and diagnosis of breast cancer in mammograms are subsequently presented. Finally, the open access mammography datasets are described and potential opportunities for future work in this field are highlighted. This comprehensive review can serve both as a thorough introduction to the field and as a guide to future applications.
Affiliation(s)
- Kosmia Loizidou
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus.
- Rafaella Elia
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus.
- Costas Pitris
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus.
|
14
|
Das HS, Das A, Neog A, Mallik S, Bora K, Zhao Z. Breast cancer detection: Shallow convolutional neural network against deep convolutional neural networks based approach. Front Genet 2023; 13:1097207. [PMID: 36685963 PMCID: PMC9846574 DOI: 10.3389/fgene.2022.1097207] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Accepted: 12/15/2022] [Indexed: 01/06/2023] Open
Abstract
Introduction: Breast cancer (BC) is the most common cancer affecting women globally and, of all the cancers that afflict women, has the second-highest mortality rate. Breast tumors are of two types: benign (less harmful and unlikely to become cancerous) and malignant (dangerous, with aberrant cells that can lead to cancer). Methods: To find breast abnormalities such as masses and micro-calcifications, competent and trained radiologists typically examine mammographic images. This study focuses on computer-aided diagnosis to help radiologists make more precise diagnoses of breast cancer, comparing the performance of a proposed shallow convolutional neural network architecture, with different specifications, against pre-trained deep convolutional neural network architectures trained on mammography images. Mammogram images are first pre-processed for automatic identification of BC. The resulting data are then fed to three shallow convolutional neural networks with representational differences. In the second approach, transfer learning via fine-tuning is used to feed the same collection of images into the pre-trained convolutional neural networks VGG19, ResNet50, MobileNet-v2, Inception-v3, Xception, and Inception-ResNet-v2. Results: In experiments with two datasets, the accuracies were 80.4% and 89.2% on CBIS-DDSM and 87.8% and 95.1% on INbreast. Discussion: The experimental findings indicate that the deep network-based approach with precise fine-tuning outperforms all other state-of-the-art techniques on both datasets.
Affiliation(s)
- Himanish Shekhar Das
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Akalpita Das
- Department of Computer Science and Engineering, GIMT Guwahati, Guwahati, India
- Anupal Neog
- Department of AI and Machine Learning COE, IQVIA, Bengaluru, Karnataka, India
- Saurav Mallik
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, United States
- Department of Pharmacology and Toxicology, University of Arizona, Tucson, AZ, United States
- Kangkana Bora
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
- Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Pathology and Laboratory Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
|
15
|
|
16
|
Oza P, Sharma P, Patel S, Adedoyin F, Bruno A. Image Augmentation Techniques for Mammogram Analysis. J Imaging 2022; 8:141. [PMID: 35621905 PMCID: PMC9147240 DOI: 10.3390/jimaging8050141] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2022] [Revised: 04/19/2022] [Accepted: 04/22/2022] [Indexed: 01/30/2023] Open
Abstract
Research in the medical imaging field using deep learning approaches has become progressively more prevalent. Scientific findings reveal that the performance of supervised deep learning methods depends heavily on training set size, and training sets must be manually annotated by expert radiologists, which is a tiring and time-consuming task. As a result, most freely accessible biomedical image datasets are small. Furthermore, building large medical image datasets is challenging due to privacy and legal issues. Consequently, many supervised deep learning models are prone to overfitting and cannot produce generalized output. One of the most popular ways to mitigate this issue is data augmentation, which increases training set size through various transformations and has been shown to improve model performance on new data. This article surveys the different data augmentation techniques employed on mammogram images, aiming to provide insights into both basic and deep learning-based augmentation techniques.
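Beyond geometric transforms, intensity-based augmentations such as random gamma adjustment and additive Gaussian noise are commonly reported for mammograms; a minimal sketch, assuming pixel values normalized to [0, 1] (the parameter ranges and noise level are illustrative, not taken from the article):

```python
import numpy as np

def intensity_augment(image, rng):
    """Random gamma adjustment plus additive Gaussian noise for an
    image with values in [0, 1]; parameter ranges are illustrative."""
    gamma = rng.uniform(0.8, 1.2)                        # mild contrast change
    noisy = image ** gamma + rng.normal(0.0, 0.01, image.shape)
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(0)
roi = rng.uniform(0.2, 0.8, size=(8, 8))  # toy normalized ROI
aug = intensity_augment(roi, rng)
```

Intensity perturbations simulate scanner and exposure variability, complementing the flips and rotations that simulate positioning variability.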
Affiliation(s)
- Parita Oza
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
- Paawan Sharma
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
- Samir Patel
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
- Festus Adedoyin
- Department of Computing and Informatics, Bournemouth University, Poole BH12 5BB, UK
- Alessandro Bruno
- Department of Computing and Informatics, Bournemouth University, Poole BH12 5BB, UK
|
17
|
Geng Q, Yan H. Image Segmentation under the Optimization Algorithm of Krill Swarm and Machine Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:8771650. [PMID: 35371201 PMCID: PMC8970905 DOI: 10.1155/2022/8771650] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 02/21/2022] [Accepted: 03/05/2022] [Indexed: 12/02/2022]
Abstract
This study aims to improve the efficiency and accuracy of image segmentation by comparing traditional threshold-based methods with machine learning model-based methods. The krill herd optimization algorithm is combined with the traditional maximum between-class variance criterion to form a new image segmentation algorithm. A pet dataset is used to train the algorithm model and build an image semantic segmentation system. The results show that the traditional Otsu algorithm requires about 256 iterations for single-threshold segmentation; for double-threshold segmentation, the number of iterations increases exponentially and the execution time is about 2 s. The improved krill herd algorithm requires about 6.95 iterations for single-threshold segmentation, and its double-threshold segmentation executes in about 0.24 s, with the iteration count increasing by only a factor of 0.19. The average classification accuracies of the Unet and SegNet network models are 86.3% and 91.9%, respectively, while the DC-Unet network model reaches 93.1%. This shows that the proposed fusion algorithm has high optimization efficiency and stronger practicability in multithreshold image segmentation, and that the DC-Unet network model improves segmentation of image detail. The research provides a new idea for finding an efficient and accurate image segmentation method.
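The maximum between-class variance criterion referred to above is Otsu's method; an exhaustive-search NumPy sketch follows (the krill herd variant replaces this brute-force scan with a metaheuristic search, which is not reproduced here, and the bimodal toy image is illustrative):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Exhaustive search for the threshold that maximizes the
    between-class variance (Otsu's criterion)."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    total_mean = np.sum(p * centers)
    best_t, best_var = centers[0], -1.0
    w0 = 0.0
    sum0 = 0.0
    for i in range(bins - 1):          # candidate threshold after bin i
        w0 += p[i]
        sum0 += p[i] * centers[i]
        w1 = 1.0 - w0
        if w0 <= 0.0 or w1 <= 0.0:
            continue
        mu0 = sum0 / w0                # mean of the "background" class
        mu1 = (total_mean - sum0) / w1 # mean of the "foreground" class
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

pixels = np.concatenate([np.full(100, 20.0), np.full(100, 200.0)])  # bimodal toy image
t = otsu_threshold(pixels)
```

The cost of this scan grows combinatorially with the number of thresholds, which is exactly why the abstract reports metaheuristics paying off in the double-threshold case.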
Affiliation(s)
- Qiang Geng
- School of Big Data & Software Engineering, Chongqing College of Mobile Communication, Chongqing 401520, China
- Chongqing Key Laboratory of Public Big Data Security Technology, Chongqing 401420, China
- Huifeng Yan
- School of Software Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
|
18
|
Segmentation of Breast Masses in Mammogram Image Using Multilevel Multiobjective Electromagnetism-Like Optimization Algorithm. BIOMED RESEARCH INTERNATIONAL 2022; 2022:8576768. [PMID: 35083334 PMCID: PMC8786533 DOI: 10.1155/2022/8576768] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Revised: 11/26/2021] [Accepted: 12/17/2021] [Indexed: 11/18/2022]
Abstract
In recent times, the breast mass has been the most important diagnostic sign for early detection of breast cancer, and precise segmentation of masses is essential to reduce the mortality rate. This research proposes a new multiobjective optimization technique for segmenting breast masses from mammographic images. The proposed model includes three phases: image collection, image denoising, and segmentation. Initially, the mammographic images are collected from two benchmark datasets, the Digital Database for Screening Mammography (DDSM) and the Mammographic Image Analysis Society (MIAS) database. Next, image normalization and Contrast-Limited Adaptive Histogram Equalization (CLAHE) are employed to enhance the visual quality and contrast of the mammographic images. After image denoising, an electromagnetism-like (EML) optimization technique is used to segment the noncancerous and cancerous portions of the mammogram image. The proposed EML technique offers advantages such as enhanced robustness in preserving image details and adaptivity to local context. Lastly, template matching is carried out after segmentation to detect the cancer regions, and the effectiveness of the proposed model is analysed in terms of the Jaccard coefficient, Dice coefficient, specificity, sensitivity, and accuracy. On average, the proposed model achieved 92.3% sensitivity, 99.21% specificity, and 98.68% accuracy on the DDSM dataset, and 92.11% sensitivity, 99.45% specificity, and 98.93% accuracy on the MIAS dataset.
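CLAHE builds on ordinary histogram equalization by clipping the histogram and operating tile-by-tile; the core contrast-spreading step can be sketched globally in NumPy (an illustrative sketch for 8-bit images, not the study's implementation, which assumes a non-constant input):

```python
import numpy as np

def hist_equalize(image, levels=256):
    """Global histogram equalization for an 8-bit grayscale image.
    CLAHE additionally clips the histogram and works per tile."""
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[np.nonzero(hist)[0][0]]   # cdf value at first occupied level
    scale = (levels - 1) / (cdf[-1] - cdf_min)
    lut = np.clip(np.round((cdf - cdf_min) * scale), 0, levels - 1).astype(np.uint8)
    return lut[image]

# toy low-contrast image: values squeezed into [100, 109]
low_contrast = np.repeat(np.arange(100, 110), 10).astype(np.uint8).reshape(10, 10)
equalized = hist_equalize(low_contrast)
```

Mapping each gray level through the normalized cumulative histogram stretches a narrow intensity band across the full dynamic range, which is the contrast gain CLAHE then controls locally.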
|
19
|
Breast Cancer Segmentation Methods: Current Status and Future Potentials. BIOMED RESEARCH INTERNATIONAL 2021; 2021:9962109. [PMID: 34337066 PMCID: PMC8321730 DOI: 10.1155/2021/9962109] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 05/14/2021] [Accepted: 06/11/2021] [Indexed: 12/24/2022]
Abstract
Early breast cancer detection is one of the most important issues to address worldwide, as it can help increase the survival rate of patients. Mammograms have been used to detect breast cancer in the early stages, and detection at an early stage can also drastically reduce treatment costs. The detection of tumours in the breast depends on segmentation techniques. Segmentation plays a significant role in image analysis and supports detection, feature extraction, classification, and treatment; it helps physicians quantify the volume of tissue in the breast for treatment planning. In this work, we grouped segmentation methods into three categories: classical segmentation, which includes region-, threshold-, and edge-based segmentation; machine learning segmentation, both supervised and unsupervised; and deep learning segmentation. Our findings revealed that region-based segmentation, most often region growing, is the most frequently used classical method, that the median filter is a robust tool for noise removal, and that the MIAS database is the most common benchmark for classical segmentation methods. In machine learning segmentation, unsupervised methods are used more frequently. Among deep learning models, U-Net is the most popular for mammogram segmentation, because it does not require many annotated images and because high-performance GPU computing makes it easy to train networks with more layers; the reviewed papers also showed that a deep learning model can be trained without any preprocessing or postprocessing. Additionally, we identified the most widely used mammogram databases, of which 3 are public and 28 are private.
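As an illustration of the region-growing technique identified above as the most frequent classical method, a minimal 4-connected NumPy sketch (the toy image, seed, and tolerance are illustrative):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed`, absorbing 4-connected neighbours
    whose intensity lies within `tol` of the seed value."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(image[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

img = np.zeros((6, 6))
img[1:4, 1:4] = 100.0       # bright 3x3 "mass" on a dark background
mask = region_grow(img, (2, 2), tol=5.0)
```

The choice of seed and tolerance is the classical method's weakness, and it is this sensitivity that learned segmenters such as U-Net avoid.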
|
20
|
Adaptive channel and multiscale spatial context network for breast mass segmentation in full-field mammograms. APPL INTELL 2021. [DOI: 10.1007/s10489-021-02297-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
21
|
Kang J, Chen T, Luo H, Luo Y, Du G, Jiming-Yang M. Machine learning predictive model for severe COVID-19. INFECTION GENETICS AND EVOLUTION 2021; 90:104737. [PMID: 33515712 PMCID: PMC7840410 DOI: 10.1016/j.meegid.2021.104737] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Revised: 12/29/2020] [Accepted: 01/24/2021] [Indexed: 01/08/2023]
Abstract
To develop a modified predictive model for severe COVID-19 in people infected with SARS-CoV-2, we built the model on clinical data from the Tumor Center of Union Hospital affiliated with Tongji Medical College, China. A total of 151 cases from Jan. 26 to Mar. 20, 2020, were included. We then followed five steps to build and evaluate the model: data preprocessing, data splitting, feature selection, model building with prevention of overfitting, and evaluation, combined with artificial neural network algorithms. In feature selection, ALB showed a strong negative correlation (r = 0.771, P < 0.001) with the severity of COVID-19, whereas GLB (r = 0.661, P < 0.001) and BUN (r = 0.714, P < 0.001) showed strong positive correlations. TensorFlow was subsequently applied to develop a neural network model, which achieved good prediction performance, with an area under the curve of 0.953 (0.889-0.982). GLB and BUN may be two risk factors for severe COVID-19. Our findings could be of great benefit in the future treatment of patients with COVID-19 and will help improve the quality of care in the long term. This model is of great significance for rationalizing early clinical interventions and improving the cure rate.
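The feature-selection step reported above ranks clinical variables by their Pearson correlation with disease severity; a minimal NumPy sketch (the severity grades and BUN values below are hypothetical, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between a feature and the label."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

severity = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 2.0])  # hypothetical severity grades
bun = np.array([3.0, 3.5, 5.0, 5.5, 7.0, 7.5])       # hypothetical BUN values
r = pearson_r(bun, severity)
```

Features whose |r| clears a chosen cutoff are retained as model inputs, which is how thresholds like r = 0.714 for BUN translate into feature selection.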
Affiliation(s)
- Jianhong Kang
- Department of Thoracic Surgery, First Affiliated Hospital, Sun-Yat-sen University, Guangzhou, China.
- Ting Chen
- Chengdu Medical College, Chengdu, China.
- Honghe Luo
- Department of Thoracic Surgery, First Affiliated Hospital, Sun-Yat-sen University, Guangzhou, China.
- Yifeng Luo
- Department of Respiratory and Critical Care Medicine, First Affiliated Hospital, Sun-Yat-sen University, Guangzhou, China.
- Guipeng Du
- Department of Respiratory and Critical Care Medicine, The Second Affiliated Hospital of Chengdu Medical College (China National Nuclear Corporation 416 Hospital), Chengdu, China
- Mia Jiming-Yang
- Medicine Campus Oberfranken, University of Bayreuth, Bavaria, Germany
|
22
|
Debelee TG, Kebede SR, Schwenker F, Shewarega ZM. Deep Learning in Selected Cancers' Image Analysis-A Survey. J Imaging 2020; 6:121. [PMID: 34460565 PMCID: PMC8321208 DOI: 10.3390/jimaging6110121] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 10/19/2020] [Accepted: 10/26/2020] [Indexed: 02/08/2023] Open
Abstract
Deep learning algorithms have become the first choice for medical image analysis, face recognition, and emotion recognition. In this survey, several deep-learning-based approaches applied to breast cancer, cervical cancer, brain tumor, and colon and lung cancers are studied and reviewed. Deep learning has been applied to almost all of the imaging modalities used for cervical and breast cancers, and to MRIs for brain tumors. The review indicates that deep learning methods have achieved state-of-the-art performance in tumor detection, segmentation, feature extraction, and classification. As presented in this paper, deep learning approaches were used in three different modes: training from scratch, transfer learning by freezing some layers of the network, and modifying the architecture to reduce the number of parameters. Moreover, the application of deep learning to imaging devices for detecting various cancers has mainly been studied by researchers affiliated with academic and medical institutes in economically developed countries, while such studies have received little attention in Africa despite the dramatic rise in cancer risk on the continent.
Affiliation(s)
- Taye Girma Debelee
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, 120611 Addis Ababa, Ethiopia
- Samuel Rahimeto Kebede
- Artificial Intelligence Center, 40782 Addis Ababa, Ethiopia
- Department of Electrical and Computer Engineering, Debreberhan University, 445 Debre Berhan, Ethiopia
- Friedhelm Schwenker
- Institute of Neural Information Processing, University of Ulm, 89081 Ulm, Germany
|