1. Ensemble Learning-Based Hybrid Segmentation of Mammographic Images for Breast Cancer Risk Prediction Using Fuzzy C-Means and CNN Model. Journal of Healthcare Engineering 2023; 2023:1491955. [PMID: 36760835] [PMCID: PMC9904922] [DOI: 10.1155/2023/1491955]
Abstract
Females are often unaware of their health condition until a tumour develops, particularly in the case of breast cancer. Risk factors for breast cancer include genetics, heredity, and a sedentary lifestyle. Breast cancer is a prime contributor to mortality among females and is on the rise in both rural and urban India; women aged 45 or above are especially vulnerable. Images convey information more effectively than text, and with advances in technology, several computerized techniques have emerged to extract hidden information from images. Processed images find application in several sectors, medical science among them. Breast cancer develops from breast masses in the breast region, and timely detection increases the rate of effective treatment and the survival of women suffering from the disease. This work elaborates a hybrid segmentation method using CLAHE and morphological operations on mammogram images, with classification performed by deep learning. Images from the MIAS database were used to obtain readings for parameters such as threshold, accuracy, sensitivity, specificity, and biopsy rate, individually or in combination.
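The morphological-operations step mentioned in this abstract can be illustrated in a few lines. The sketch below is a generic binary opening (erosion then dilation), not the authors' implementation; mask contents and the square structuring element are illustrative assumptions:

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=0)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=0)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def opening(mask, k=3):
    """Opening = erosion then dilation; removes specks smaller than the element."""
    return dilate(erode(mask, k), k)

# A 1-pixel speck disappears, a solid 3x3 blob survives.
mask = np.zeros((9, 9), dtype=np.uint8)
mask[1, 1] = 1            # isolated noise pixel
mask[4:7, 4:7] = 1        # genuine 3x3 region
cleaned = opening(mask, k=3)
```

In a mammogram pipeline such an opening would typically follow thresholding, suppressing isolated noise pixels while preserving candidate mass regions.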
2. Mahmood T, Li J, Pei Y, Akhtar F, Rehman MU, Wasti SH. Breast lesions classifications of mammographic images using a deep convolutional neural network-based approach. PLoS One 2022; 17:e0263126. [PMID: 35085352] [PMCID: PMC8794221] [DOI: 10.1371/journal.pone.0263126]
Abstract
Breast cancer is one of the worst illnesses, with a higher fatality rate among women globally. Breast cancer detection needs accurate mammography interpretation and analysis, which is challenging for radiologists owing to the intricate anatomy of the breast and low image quality. Advances in deep learning-based models have significantly improved the detection, localization, risk assessment, and categorization of breast lesions. This study proposes a novel deep learning-based convolutional neural network (ConvNet) that significantly reduces human error in diagnosing breast malignancy tissues. Our methodology is most effective in eliciting task-specific features, as feature learning is coupled with the classification task to achieve higher performance in automatically classifying the suspicious regions in mammograms as benign or malignant. To evaluate the model's validity, 322 raw mammogram images from the Mammographic Image Analysis Society (MIAS) dataset and 580 from a private dataset were obtained to extract in-depth features, intensity information, and the likelihood of malignancy. Both datasets are substantially improved through preprocessing, synthetic data augmentation, and transfer learning techniques. The experimental findings indicate that the proposed approach achieved a remarkable training accuracy of 0.98, test accuracy of 0.97, sensitivity of 0.99, and an AUC of 0.99 in classifying breast masses on mammograms. The developed model achieved promising performance that helps clinicians in the speedy computation of mammography, breast mass diagnosis, treatment planning, and follow-up of disease progression. Moreover, it offers consistent feature extraction and precise lesion classification compared with retrospective approaches.
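The synthetic data augmentation named in this abstract is commonly realized with simple geometric transforms. The sketch below (flips and right-angle rotations) is a generic, hedged example, not the paper's exact pipeline:

```python
import numpy as np

def augment(img):
    """Return simple geometric variants of a mammogram patch:
    the original, horizontal/vertical flips, and 90-degree rotations."""
    return [
        img,
        np.fliplr(img),      # horizontal flip
        np.flipud(img),      # vertical flip
        np.rot90(img, 1),    # 90 degrees
        np.rot90(img, 2),    # 180 degrees
        np.rot90(img, 3),    # 270 degrees
    ]

# Toy 4x4 "patch" standing in for a mammogram ROI.
patch = np.arange(16, dtype=np.float32).reshape(4, 4)
variants = augment(patch)
```

Each variant keeps the lesion's texture statistics while multiplying the effective training set size sixfold.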
Affiliation(s)
- Tariq Mahmood: Faculty of Information Technology, Beijing University of Technology, Beijing, China; Division of Science and Technology, Department of Information Sciences, University of Education, Lahore, Pakistan
- Jianqiang Li: Faculty of Information Technology, Beijing University of Technology, Beijing, China; Beijing Engineering Research Center for IoT Software and Systems, Beijing, China
- Yan Pei: Computer Science Division, University of Aizu, Aizuwakamatsu, Fukushima, Japan
- Faheem Akhtar: Department of Computer Science, Sukkur IBA University, Sukkur, Pakistan
- Mujeeb Ur Rehman: Radiology Department, Continental Medical College and Hayat Memorial Teaching Hospital, Lahore, Pakistan
- Shahbaz Hassan Wasti: Division of Science and Technology, Department of Information Sciences, University of Education, Lahore, Pakistan
3. Chowdary J, Yogarajah P, Chaurasia P, Guruviah V. A Multi-Task Learning Framework for Automated Segmentation and Classification of Breast Tumors From Ultrasound Images. Ultrasonic Imaging 2022; 44:3-12. [PMID: 35128997] [PMCID: PMC8902030] [DOI: 10.1177/01617346221075769]
Abstract
Breast cancer is one of the most fatal diseases, leading to the death of many women across the world, but early diagnosis can help reduce the mortality rate. An efficient multi-task learning approach is therefore proposed in this work for the automatic segmentation and classification of breast tumors from ultrasound images. The proposed approach consists of encoder, decoder, and bridge blocks for segmentation and a dense branch for the classification of tumors. For efficient classification, multi-scale features from different levels of the network are used. Experimental results show that the proposed approach enhances segmentation accuracy and recall by 1.08% and 4.13%, and classification accuracy and recall by 1.16% and 2.34%, respectively, compared with methods available in the literature.
Affiliation(s)
- Pratheepan Yogarajah: University of Ulster, Magee Campus, Northland Road, Londonderry, Northern Ireland BT48 7JL, UK
4. Zhuang Z, Yang Z, Raj ANJ, Wei C, Jin P, Zhuang S. Breast ultrasound tumor image classification using image decomposition and fusion based on adaptive multi-model spatial feature fusion. Computer Methods and Programs in Biomedicine 2021; 208:106221. [PMID: 34144251] [DOI: 10.1016/j.cmpb.2021.106221]
Abstract
BACKGROUND AND OBJECTIVE: Breast cancer is a fatal threat to the health of women. Ultrasonography is a common method for the detection of breast cancer, and computer-aided diagnosis of breast ultrasound images can help doctors in diagnosing benign and malignant lesions. In this paper, by combining image decomposition and fusion techniques with adaptive spatial feature fusion technology, a reliable classification method for breast ultrasound images of tumors is proposed.
METHODS: First, fuzzy enhancement and bilateral filtering algorithms are used to process the original breast ultrasound image. Then, various decomposition images representing the clinical characteristics of breast tumors are obtained using the original and mask images. Considering the diversity of the benign and malignant characteristic information represented by each decomposition image, the decomposition images are fused through the RGB channels, generating three types of fusion images. Next, from a series of candidate deep learning models, transfer learning is used to select the best model as the base model for extracting deep learning features. Finally, while training the classification network, adaptive spatial feature fusion technology is used to train the weight network and complete deep learning feature fusion and classification.
RESULTS: In this study, 1328 breast ultrasound images were collected for training and testing. The experimental results show that the accuracy, precision, specificity, sensitivity/recall, F1 score, and area under the curve of the proposed method were 0.9548, 0.9811, 0.9833, 0.9392, 0.9571, and 0.9883, respectively.
CONCLUSION: Our research can automate breast cancer detection and has strong clinical utility. Compared with previous methods, the proposed method is expected to be more effective in assisting doctors in diagnosing breast ultrasound images.
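The RGB-channel fusion step described above (packing three single-channel decomposition images into one three-channel input) can be shown minimally. The decomposition maps here are random placeholders, not the paper's actual enhanced or filtered images:

```python
import numpy as np

def fuse_rgb(decomp_a, decomp_b, decomp_c):
    """Stack three single-channel decomposition images into one RGB image
    so a standard 3-channel CNN backbone can consume them jointly."""
    assert decomp_a.shape == decomp_b.shape == decomp_c.shape
    return np.stack([decomp_a, decomp_b, decomp_c], axis=-1)

h, w = 64, 64
# Hypothetical decomposition maps (e.g. enhanced, filtered, and masked versions).
a = np.random.rand(h, w).astype(np.float32)
b = np.random.rand(h, w).astype(np.float32)
c = np.random.rand(h, w).astype(np.float32)
fused = fuse_rgb(a, b, c)
```

The appeal of this design is that pretrained three-channel backbones can be reused unchanged on what are really three complementary grayscale views.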
Affiliation(s)
- Zhemin Zhuang, Zengbiao Yang, Alex Noel Joseph Raj, Chuliang Wei, Pengcheng Jin, Shuxin Zhuang: Department of Electronic Engineering, Shantou University, Shantou, Guangdong, China
5. Niu J, Li H, Zhang C, Li D. Multi-scale attention-based convolutional neural network for classification of breast masses in mammograms. Med Phys 2021; 48:3878-3892. [PMID: 33982807] [DOI: 10.1002/mp.14942]
Abstract
PURPOSE: Breast cancer has the highest incidence among cancers in women, and early detection can effectively improve patient survival. Mammography is an important method for screening breast cancer, but physicians' diagnosis of mammograms depends largely on clinical experience. Studies have shown that computer-aided diagnosis techniques can help doctors diagnose breast cancer.
METHODS: In this paper, a convolutional neural network approach is used to classify benign and malignant breast masses in mammograms. First, we use multi-scale residual networks and densely connected networks as backbone networks to extract the features of global and local image patches. Second, we improve both feature extraction networks with the convolutional block attention module (CBAM) to enhance the networks' feature expression ability. Finally, we fuse the features of the multi-scale image patches to classify breast masses as benign or malignant.
RESULTS: On the Digital Database for Screening Mammography (DDSM), the accuracy, sensitivity, and AUC of our method (mean ± standard deviation) are 0.9626 ± 0.0110, 0.9719 ± 0.0126, and 0.9576 ± 0.0064, respectively. Compared with the commonly used ResNet (AUC = 0.8823 ± 0.0112) and DenseNet (AUC = 0.9141 ± 0.0085), the performance of our method is improved. We also trained and validated the proposed method on the INbreast database, obtaining accuracy, sensitivity, and AUC of 0.9554 ± 0.0296, 0.9605 ± 0.0228, and 0.9468 ± 0.0085, respectively.
CONCLUSIONS: Compared with previous work, our proposed method uses multi-scale image features, achieves better performance in breast mass patch classification tasks, and can effectively assist physicians in breast cancer diagnosis.
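The channel-attention half of CBAM mentioned above can be sketched in NumPy: average- and max-pooled channel descriptors pass through a shared two-layer MLP, and the sigmoid of their sum rescales each channel. This is a simplified stand-in with random weights and no spatial-attention branch, not the trained module from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """CBAM-style channel attention on a (C, H, W) feature map.
    w1: (C, C/r) and w2: (C/r, C) form the shared MLP with reduction r;
    the ReLU-hidden MLP is applied to both pooled descriptors and the
    sigmoid of the sum rescales each channel."""
    avg = feat.mean(axis=(1, 2))          # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))            # (C,) max-pooled descriptor
    att = sigmoid(np.maximum(avg @ w1, 0) @ w2 +
                  np.maximum(mx @ w1, 0) @ w2)
    return feat * att[:, None, None]

rng = np.random.default_rng(0)
C, r = 8, 2
feat = rng.standard_normal((C, 16, 16)).astype(np.float32)
w1 = (rng.standard_normal((C, C // r)) * 0.1).astype(np.float32)
w2 = (rng.standard_normal((C // r, C)) * 0.1).astype(np.float32)
refined = channel_attention(feat, w1, w2)
```

Because the attention weights lie in (0, 1), the module can only attenuate channels, letting the network emphasize informative feature maps relative to the rest.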
Affiliation(s)
- Jing Niu, Hua Li, Chen Zhang: College of Information and Computer, Taiyuan University of Technology, Taiyuan 030600, China
- Dengao Li: College of Data Science, Taiyuan University of Technology, Taiyuan 030600, China; Shanxi Engineering Technology Research Center for Spatial Information Network, Taiyuan 030600, China
6. Implementing Multilabeling, ADASYN, and ReliefF Techniques for Classification of Breast Cancer Diagnostic through Machine Learning: Efficient Computer-Aided Diagnostic System. Journal of Healthcare Engineering 2021; 2021:5577636. [PMID: 33859807] [PMCID: PMC8009715] [DOI: 10.1155/2021/5577636]
Abstract
Multilabel recognition of morphological images and detection of cancerous areas are difficult when images are redundant and of low resolution, and cancerous tissues can be extremely small. For automatic classification, the characteristics of cancer patches in the X-ray image are therefore of critical importance. Because the textures vary only slightly, using just one or a few features yields inaccurate classification outcomes. The present study applies five feature-extraction algorithms (GLCM, LBGLCM, LBP, GLRLM, and SFTA) to 8 image groups and combines the extracted feature spaces. The dataset used for classification is imbalanced, so a further focus is to mitigate this problem by generating additional samples with the ADASYN algorithm, minimizing the error rate and increasing accuracy. The ReliefF algorithm then discards weakly contributing features, easing the computational burden. Finally, a feedforward neural network classifies the data. The proposed method achieved 99.5% micro and 99.5% macro scores, a 0.5% misclassification rate, a 99.5% recall rate, 99.4% specificity, 99.5% precision, and 99.5% accuracy, demonstrating its robustness. The INbreast database was used to assess the feasibility of the system.
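A simplified version of the ADASYN idea used above (minority samples surrounded by more majority-class neighbours receive proportionally more synthetic points, interpolated toward minority neighbours) might look like the following. Real implementations, such as the one in imbalanced-learn, differ in details; the data here are synthetic:

```python
import numpy as np

def adasyn_sketch(X_min, X_maj, n_new, k=5, rng=None):
    """Simplified ADASYN: weight each minority sample by the fraction of
    majority points among its k nearest neighbours, then generate that
    share of n_new synthetic points by interpolating toward a random
    minority-class neighbour."""
    rng = np.random.default_rng(rng)
    X_all = np.vstack([X_min, X_maj])
    is_maj = np.array([False] * len(X_min) + [True] * len(X_maj))
    # Ratio of majority points among each minority sample's k neighbours.
    r = np.empty(len(X_min))
    for i, x in enumerate(X_min):
        d = np.linalg.norm(X_all - x, axis=1)
        nn = np.argsort(d)[1:k + 1]          # skip the point itself
        r[i] = is_maj[nn].mean()
    weights = r / r.sum() if r.sum() > 0 else np.full(len(X_min), 1 / len(X_min))
    counts = np.round(weights * n_new).astype(int)
    synthetic = []
    for i, g in enumerate(counts):
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]          # minority-class neighbours
        for _ in range(g):
            j = rng.choice(nn)
            lam = rng.random()
            synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic) if synthetic else np.empty((0, X_min.shape[1]))

rng = np.random.default_rng(1)
X_min = rng.normal(0, 1, (20, 2))     # minority class
X_maj = rng.normal(2, 1, (100, 2))    # majority class
X_syn = adasyn_sketch(X_min, X_maj, n_new=40, k=5, rng=2)
```

Unlike plain SMOTE, the density weighting concentrates new samples near the class boundary, where the classifier needs them most.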
7. Bruno A, Ardizzone E, Vitabile S, Midiri M. A Novel Solution Based on Scale Invariant Feature Transform Descriptors and Deep Learning for the Detection of Suspicious Regions in Mammogram Images. Journal of Medical Signals & Sensors 2020; 10:158-173. [PMID: 33062608] [PMCID: PMC7528986] [DOI: 10.4103/jmss.jmss_31_19]
Abstract
BACKGROUND: Deep learning methods have become popular for their high performance in the classification and detection of events in computer vision tasks. The transfer learning paradigm is widely adopted to apply pretrained convolutional neural networks (CNNs) to medical domains, overcoming the scarcity of public datasets. Investigations into transfer learning's knowledge inference abilities in the context of mammogram screening, and possible combinations with unsupervised techniques, are in progress.
METHODS: We propose a novel technique for the detection of suspicious regions in mammograms that combines two approaches: one based on scale invariant feature transform (SIFT) keypoints, and one based on transfer learning with pretrained CNNs such as PyramidNet and AlexNet fine-tuned on digital mammograms generated by different mammography devices. Preprocessing, feature extraction, and selection steps characterize the SIFT-based method, while the deep learning network validates the candidate suspicious regions detected by the SIFT method.
RESULTS: Experiments conducted on both the mini-MIAS dataset and our new public dataset, Suspicious Region Detection on Mammogram from PP (SuReMaPP), of 384 digital mammograms show high performance compared with several state-of-the-art methods. Our solution reaches 98% sensitivity and 90% specificity on SuReMaPP, and 94% sensitivity and 91% specificity on mini-MIAS.
CONCLUSIONS: The experimental sessions conducted so far prompt us to further investigate the power of transfer learning over different CNNs and possible combinations with unsupervised techniques. Transfer learning accuracy may decrease when the training and testing images come from mammography devices with different properties.
Affiliation(s)
- Alessandro Bruno: Faculty of Media and Communication, NCCA (National Centre for Computer Animation), Bournemouth University, Poole, Dorset, United Kingdom
- Salvatore Vitabile, Massimo Midiri: Department of Biomedicine, Neuroscience and Advanced Diagnostic, Palermo University, Palermo, Italy
8. Automatic Identification of Breast Ultrasound Image Based on Supervised Block-Based Region Segmentation Algorithm and Features Combination Migration Deep Learning Model. IEEE J Biomed Health Inform 2020; 24:984-993. [DOI: 10.1109/jbhi.2019.2960821]
9. Classification of Mammogram Images Using Multiscale all Convolutional Neural Network (MA-CNN). J Med Syst 2019; 44:30. [PMID: 31838610] [DOI: 10.1007/s10916-019-1494-z]
Abstract
Breast cancer is one of the leading causes of cancer death among women worldwide. Early diagnosis improves the chance of survival by aiding proper clinical treatment, and digital mammography examination helps diagnose breast cancer at an early stage. In this paper, a Multiscale All Convolutional Neural Network (MA-CNN) is developed to assist the radiologist in diagnosing breast cancer effectively. MA-CNN is a convolutional neural network-based approach that classifies mammogram images accurately. Convolutional neural networks excel at extracting task-specific features, since feature learning is coupled with the classification task to attain improved performance. The proposed approach automatically categorizes mammographic images from the mini-MIAS dataset into normal, malignant, and benign classes. The model improves classification accuracy by fusing a wider context of information using multiscale filters without compromising computation speed. Experimental results show that MA-CNN is a powerful tool for diagnosing breast cancer, classifying mammogram images with an overall sensitivity of 96% and an AUC of 0.99.
10. Hmida M, Hamrouni K, Solaiman B, Boussetta S. Mammographic mass segmentation using fuzzy contours. Computer Methods and Programs in Biomedicine 2018; 164:131-142. [PMID: 30195421] [DOI: 10.1016/j.cmpb.2018.07.005]
Abstract
BACKGROUND AND OBJECTIVE: Accurate mass segmentation in mammographic images is a critical requirement for computer-aided diagnosis systems, since it allows accurate feature extraction and thus improves classification precision.
METHODS: In this paper, a novel automatic breast mass segmentation approach is presented. The approach consists of three main stages: contour initialization applied to a given region of interest; construction of fuzzy contours and estimation of fuzzy membership maps of the different classes in the considered image; and integration of these maps in the Chan-Vese model to obtain a fuzzy-energy-based model used for the final delineation of the mass.
RESULTS: The proposed approach is evaluated using mass regions of interest extracted from the mini-MIAS database. The experimental results show that the proposed method achieves an average true positive rate of 91.12% with a precision of 88.08%.
CONCLUSIONS: The achieved results show high accuracy in breast mass segmentation when compared to manually annotated ground truth and to other methods from the literature.
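The piecewise-constant core of the Chan-Vese model referenced above can be illustrated without the length term or the fuzzy-membership maps: alternate between estimating the inside/outside means and reassigning each pixel to the nearer mean. This toy version on synthetic data is a sketch of the energy's data term only, not the authors' fuzzy-energy model:

```python
import numpy as np

def chan_vese_means(img, mask, n_iter=20):
    """Two-phase piecewise-constant Chan-Vese update without the length
    term: alternately compute inside/outside means (c1, c2) and reassign
    each pixel to the nearer mean, which decreases the data energy
    sum (I - c1)^2 inside + sum (I - c2)^2 outside."""
    for _ in range(n_iter):
        c1 = img[mask].mean() if mask.any() else 0.0
        c2 = img[~mask].mean() if (~mask).any() else 0.0
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return mask, c1, c2

# Synthetic "mass" brighter than the background, plus mild noise.
rng = np.random.default_rng(3)
img = rng.normal(0.2, 0.05, (32, 32))
img[10:22, 10:22] += 0.6
init = np.zeros((32, 32), dtype=bool)
init[12:18, 12:18] = True               # rough initial contour inside the mass
seg, c1, c2 = chan_vese_means(img, init)
```

Starting from a small seed inside the bright region, the alternating updates grow the mask until it covers the whole synthetic mass.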
Affiliation(s)
- Marwa Hmida: Université de Tunis El Manar, Ecole Nationale d'Ingénieurs de Tunis, LR-Signal Image et Technologies de l'Information, Tunis 1002, Tunisie; IMT Atlantique, ITI Laboratory, Brest 29238, France
- Kamel Hamrouni: Université de Tunis El Manar, Ecole Nationale d'Ingénieurs de Tunis, LR-Signal Image et Technologies de l'Information, Tunis 1002, Tunisie
11. Comparison of Transferred Deep Neural Networks in Ultrasonic Breast Masses Discrimination. BioMed Research International 2018; 2018:4605191. [PMID: 30035122] [PMCID: PMC6033250] [DOI: 10.1155/2018/4605191]
Abstract
This research aims to address the problem of discriminating benign cysts from malignant masses in breast ultrasound (BUS) images based on Convolutional Neural Networks (CNNs). The biopsy-proven benchmarking dataset was built from 1422 patient cases containing a total of 2058 breast ultrasound masses, comprising 1370 benign and 688 malignant lesions. Three transferred models, InceptionV3, ResNet50, and Xception, a CNN model with three convolutional layers (CNN3), and traditional machine learning-based model with hand-crafted features were developed for differentiating benign and malignant tumors from BUS data. Cross-validation results have demonstrated that the transfer learning method outperformed the traditional machine learning model and the CNN3 model, where the transferred InceptionV3 achieved the best performance with an accuracy of 85.13% and an AUC of 0.91. Moreover, classification models based on deep features extracted from the transferred models were also built, where the model with combined features extracted from all three transferred models achieved the best performance with an accuracy of 89.44% and an AUC of 0.93 on an independent test set.
12. Liu L, Li K, Qin W, Wen T, Li L, Wu J, Gu J. Automated breast tumor detection and segmentation with a novel computational framework of whole ultrasound images. Med Biol Eng Comput 2018; 56:183-199. [PMID: 29292471] [DOI: 10.1007/s11517-017-1770-3]
Abstract
Due to the low contrast and ambiguous boundaries of tumors in breast ultrasound (BUS) images, automatically segmenting breast tumors from ultrasound remains a challenging task. In this paper, we propose a novel computational framework that detects and segments breast lesions fully automatically in whole ultrasound images. The framework includes several key components: pre-processing, contour initialization, and tumor segmentation. In the pre-processing step, we applied a non-local low-rank (NLLR) filter to reduce speckle noise. In the contour initialization step, we cascaded a two-step Otsu-based adaptive thresholding (OBAT) algorithm with morphologic operations to locate the tumor regions and initialize the tumor contours. Finally, given the initial tumor contours, an improved Chan-Vese model based on the ratio of exponentially weighted averages (CV-ROEWA) was applied. The pipeline was tested on a set of 61 BUS images with diagnosed tumors. The experimental results on clinical ultrasound images demonstrate the high accuracy and robustness of the proposed framework, indicating its potential application in clinical practice.
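The Otsu-based thresholding used here for contour initialization rests on classic between-class variance maximization over the grey-level histogram. A single-stage NumPy version (not the authors' two-step OBAT cascade) on a synthetic bimodal image is:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: choose the threshold that maximises the
    between-class variance of the grey-level histogram."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                    # class-0 probability mass
    mu = np.cumsum(p * centers)          # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    between[~np.isfinite(between)] = 0   # guard the empty-class endpoints
    return centers[np.argmax(between)]

# Bimodal test image: dark background, bright lesion-like region.
rng = np.random.default_rng(4)
img = rng.normal(0.2, 0.03, (64, 64))
img[20:40, 20:40] = rng.normal(0.8, 0.03, (20, 20))
t = otsu_threshold(img)
binary = img > t
```

In a pipeline like the one above, the resulting binary map would then be cleaned with morphological operations before serving as the initial contour.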
Affiliation(s)
- Lei Liu, Tiexiang Wen, Ling Li, Jia Gu: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Kai Li: Department of Medical Ultrasonics, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou 510630, People's Republic of China
- Wenjian Qin: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China; University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
- Jia Wu: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
13. Isikli Esener I, Ergin S, Yuksel T. A New Feature Ensemble with a Multistage Classification Scheme for Breast Cancer Diagnosis. Journal of Healthcare Engineering 2017; 2017:3895164. [PMID: 29065592] [PMCID: PMC5494793] [DOI: 10.1155/2017/3895164]
Abstract
A new and effective feature ensemble with a multistage classification scheme is proposed for a computer-aided diagnosis (CAD) system for breast cancer. A publicly available mammogram image dataset collected during the Image Retrieval in Medical Applications (IRMA) project is used to verify the suggested feature ensemble and multistage classification. Feature extraction is performed on mammogram region of interest (ROI) images, which are preprocessed by histogram equalization followed by non-local means filtering. The proposed feature ensemble is formed by concatenating local configuration pattern-based, statistical, and frequency domain features. The classification of these features is implemented in three cases: a one-stage study, a two-stage study, and a three-stage study. Eight well-known classifiers are used in all cases of this multistage classification scheme. Additionally, the outputs of the three best-performing classifiers are combined via majority voting to improve recognition accuracy in the two- and three-stage studies. Maximum classification accuracies of 85.47%, 88.79%, and 93.52% are attained by the one-, two-, and three-stage studies, respectively, showing that the proposed multistage scheme is more effective than single-stage classification for breast cancer diagnosis.
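The majority-voting combination of the top three classifiers described above is straightforward to sketch; the labels and classifier outputs below are hypothetical (0 = benign, 1 = malignant):

```python
import numpy as np

def majority_vote(predictions):
    """Combine label predictions from several classifiers: each sample
    gets the label chosen by most of them (ties go to the smallest label)."""
    preds = np.asarray(predictions)      # shape (n_classifiers, n_samples)
    n_labels = preds.max() + 1
    # Per-sample vote counts, shape (n_labels, n_samples).
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_labels)
    return votes.argmax(axis=0)

# Three hypothetical top-performing classifiers on five mammogram ROIs.
p1 = [0, 1, 1, 0, 1]
p2 = [0, 1, 0, 0, 1]
p3 = [1, 1, 1, 0, 0]
fused = majority_vote([p1, p2, p3])
```

With an odd number of voters and binary labels there are no ties, so the fused prediction is always well defined.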
Affiliation(s)
- Idil Isikli Esener, Tolga Yuksel: Department of Electrical Electronics Engineering, Bilecik Seyh Edebali University, 11210 Bilecik, Turkey
- Semih Ergin: Department of Electrical Electronics Engineering, Eskisehir Osmangazi University, 26480 Eskisehir, Turkey
14. Anitha J, Dinesh Peter J, Immanuel Alex Pandian S. A dual stage adaptive thresholding (DuSAT) for automatic mass detection in mammograms. Computer Methods and Programs in Biomedicine 2017; 138:93-104. [PMID: 27886719] [DOI: 10.1016/j.cmpb.2016.10.026]
Abstract
BACKGROUND AND OBJECTIVE: Early detection and diagnosis of breast cancer through mammography screening reduces breast cancer mortality by around 20%. However, differentiating abnormalities is often complex due to their ill-defined margins and subtle appearance.
METHOD: This paper investigates a new computer-aided approach to detect abnormalities in digital mammograms using Dual Stage Adaptive Thresholding (DuSAT). The suspicious mass region is identified using a global histogram thresholding method combined with local window thresholding. Global thresholding is based on Histogram Peak Analysis (HPA) of the entire image, with the threshold obtained by maximizing the proposed threshold selection criterion. Local thresholding is then carried out for each pixel in a defined neighborhood window, which provides precise segmentation results.
RESULTS: The algorithm is verified with 300 images from the DDSM database and 170 images from the mini-MIAS database. Experimental results show that the proposed algorithm achieves an average sensitivity of 92.5% with 1.06 FP/image on DDSM and 93.5% with 0.62 FP/image on mini-MIAS.
CONCLUSION: The proposed approach provides better results than other state-of-the-art mass detection methods, helping radiologists diagnose breast cancer at an early stage.
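The local stage of such dual-stage thresholding can be approximated by comparing each pixel against the mean of its surrounding window. The summed-area-table sketch below is a simplified stand-in for the paper's method; the window size and offset are illustrative assumptions:

```python
import numpy as np

def local_threshold(img, win=15, offset=0.0):
    """Local window thresholding: a pixel is foreground when it exceeds
    the mean of its win x win neighbourhood by 'offset'. A summed-area
    table gives every window mean in O(1) per pixel."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    # Integral image with a leading zero row/column for clean differences.
    sat = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    h, w = img.shape
    s = (sat[win:win + h, win:win + w] - sat[:h, win:win + w]
         - sat[win:win + h, :w] + sat[:h, :w])
    local_mean = s / (win * win)
    return img > local_mean + offset

# Smooth background with one bright mass-like square.
rng = np.random.default_rng(5)
img = rng.normal(0.3, 0.02, (60, 60))
img[25:35, 25:35] += 0.4
fg = local_threshold(img, win=15, offset=0.1)
```

Because the threshold adapts to each neighbourhood, the mass stands out even when the global intensity distribution would drown it.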
Affiliation(s)
- J Anitha: Department of CSE, Karunya University, Coimbatore, India
15. Desbordes P, Petitjean C, Ruan S. Segmentation of lymphoma tumor in PET images using cellular automata: A preliminary study. Ing Rech Biomed 2016. [DOI: 10.1016/j.irbm.2015.11.001]