1
Bhalla D, Rangarajan K, Chandra T, Banerjee S, Arora C. Reproducibility and Explainability of Deep Learning in Mammography: A Systematic Review of Literature. Indian J Radiol Imaging 2024; 34:469-487. PMID: 38912238; PMCID: PMC11188703; DOI: 10.1055/s-0043-1775737.
Abstract
Background Although abundant literature is currently available on the use of deep learning for breast cancer detection in mammography, the quality of such literature is widely variable. Purpose To evaluate published literature on breast cancer detection in mammography for reproducibility and to ascertain best practices for model design. Methods The PubMed and Scopus databases were searched to identify records that described the use of deep learning to detect lesions or classify images into cancer or noncancer. A modification of the Quality Assessment of Diagnostic Accuracy Studies tool (mQUADAS-2) was developed for this review and applied to the included studies. Results of reported studies (area under the receiver operating characteristic [ROC] curve [AUC], sensitivity, specificity) were recorded. Results A total of 12,123 records were screened, of which 107 fit the inclusion criteria. Training and test datasets, the key idea behind each model architecture, and results were recorded for these studies. Based on the mQUADAS-2 assessment, 103 studies had a high risk of bias due to nonrepresentative patient selection. Four studies were of adequate quality, of which three trained their own model and one used a commercial network; ensemble models were used in two of these. Common strategies for model training included patch classifiers, image classification networks (ResNet in 67%), and object detection networks (RetinaNet in 67%). The highest reported AUC was 0.927 ± 0.008 on a screening dataset, rising to 0.945 (0.919-0.968) on an enriched subset. Higher AUC (0.955) and specificity (98.5%) were reached with combined radiologist and artificial intelligence (AI) readings than with either alone. None of the studies provided explainability beyond localization accuracy, and none studied the interaction between AI and radiologists in a real-world setting.
Conclusion While deep learning holds much promise for mammography interpretation, evaluation in reproducible clinical settings and explainable networks remain urgently needed.
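The review above compares studies chiefly by AUC. As a generic illustration (not code from any cited paper; the labels and scores below are invented), the AUC of an ROC curve equals the probability that a randomly chosen positive case is scored above a randomly chosen negative one, which can be computed directly from reader or model scores via the Mann-Whitney U statistic:

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case receives a higher score than a
    randomly chosen negative case (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores: 1 = cancer, 0 = no cancer (hypothetical data).
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]
print(round(roc_auc(labels, scores), 3))  # 11/12 ≈ 0.917
```

The rank-based form is equivalent to the area under the empirical ROC curve, which is why AUC is insensitive to any monotone rescaling of the scores.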
Affiliation(s)
- Deeksha Bhalla
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Krithika Rangarajan
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Tany Chandra
- Department of Radiodiagnosis, All India Institute of Medical Sciences, New Delhi, India
- Subhashis Banerjee
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
- Chetan Arora
- Department of Computer Science and Engineering, Indian Institute of Technology, New Delhi, India
2
Vellan CJ, Islam T, De Silva S, Mohd Taib NA, Prasanna G, Jayapalan JJ. Exploring novel protein-based biomarkers for advancing breast cancer diagnosis: A review. Clin Biochem 2024; 129:110776. PMID: 38823558; DOI: 10.1016/j.clinbiochem.2024.110776.
Abstract
This review provides a contemporary examination of the evolving landscape of breast cancer (BC) diagnosis, focusing on the pivotal role of novel protein-based biomarkers. The overview begins by elucidating the multifaceted nature of BC, exploring its prevalence, subtypes, and clinical complexities. A critical emphasis is placed on the transformative impact of proteomics, dissecting the proteome to unravel the molecular intricacies of BC. Navigating through various sources of samples crucial for biomarker investigations, the review underscores the significance of robust sample processing methods and their validation in ensuring reliable outcomes. The central theme of the review revolves around the identification and evaluation of novel protein-based biomarkers. Cutting-edge discoveries are summarised, shedding light on emerging biomarkers poised for clinical application. Nevertheless, the review candidly addresses the challenges inherent in biomarker discovery, including issues of standardisation, reproducibility, and the complex heterogeneity of BC. The future direction section envisions innovative strategies and technologies to overcome existing challenges. In conclusion, the review summarises the current state of BC biomarker research, offering insights into the intricacies of proteomic investigations. As precision medicine gains momentum, the integration of novel protein-based biomarkers emerges as a promising avenue for enhancing the accuracy and efficacy of BC diagnosis. This review serves as a compass for researchers and clinicians navigating the evolving landscape of BC biomarker discovery, guiding them toward transformative advancements in diagnostic precision and personalised patient care.
Affiliation(s)
- Christina Jane Vellan
- Department of Molecular Medicine, Faculty of Medicine, Universiti Malaya, 50603, Kuala Lumpur, Malaysia
- Tania Islam
- Department of Surgery, Faculty of Medicine, Universiti Malaya, 50603, Kuala Lumpur, Malaysia
- Sumadee De Silva
- Institute of Biochemistry, Molecular Biology and Biotechnology, University of Colombo, Colombo 03, Sri Lanka
- Nur Aishah Mohd Taib
- Department of Surgery, Faculty of Medicine, Universiti Malaya, 50603, Kuala Lumpur, Malaysia
- Galhena Prasanna
- Institute of Biochemistry, Molecular Biology and Biotechnology, University of Colombo, Colombo 03, Sri Lanka
- Jaime Jacqueline Jayapalan
- Department of Molecular Medicine, Faculty of Medicine, Universiti Malaya, 50603, Kuala Lumpur, Malaysia; Universiti Malaya Centre for Proteomics Research (UMCPR), Universiti Malaya, 50603, Kuala Lumpur, Malaysia
3
Ma Y, Peng Y. Mammogram mass segmentation and classification based on cross-view VAE and spatial hidden factor disentanglement. Phys Eng Sci Med 2024; 47:223-238. PMID: 38150059; DOI: 10.1007/s13246-023-01359-9.
Abstract
Breast masses are the most important clinical findings of breast carcinomas. Mass segmentation and classification in mammograms remain crucial yet challenging topics in computer-aided diagnosis systems, as masses are irregular in shape, size, and texture. In this paper, we propose a new framework for mammogram mass classification and segmentation. Specifically, to utilize the complementary information within the mammographic cross-views, craniocaudal and mediolateral oblique, a cross-view-based variational autoencoder (CV-VAE) combined with a spatial hidden factor disentanglement module is presented, in which the two views can be reconstructed from each other through two explicitly disentangled hidden factors: class-related (specified) and background-common (unspecified). The specified factor is then classified as benign or malignant by a newly introduced feature pyramid network-based mass classifier, and is also used to predict the mass mask label via a U-Net-like decoder. By integrating the two complementary modules, more discriminative morphological and semantic features can be learned to solve the mass classification and segmentation problems simultaneously. The proposed method is evaluated on the two most widely used public mammography datasets, CBIS-DDSM and INbreast, achieving Dice similarity coefficients (DSC) of 92.46% and 93.70% for segmentation and areas under the receiver operating characteristic curve (AUC) of 93.20% and 95.01% for classification, respectively. Compared with other state-of-the-art approaches, it gives competitive results.
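Segmentation quality above is reported as the Dice similarity coefficient (DSC). As a minimal illustration (the masks below are invented, not taken from CBIS-DDSM or INbreast), DSC between a predicted and a ground-truth binary mask is:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two flat binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0]   # predicted mass pixels (toy example)
truth = [0, 0, 1, 1, 1, 0]   # ground-truth annotation
print(dice(pred, truth))     # 2*2/(3+3) ≈ 0.667
```

In practice the masks are 2D arrays flattened per image, and the reported DSC is averaged over the test set.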
Affiliation(s)
- Yingran Ma
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, CO, China
- Yanjun Peng
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, CO, China.
- Shandong Province Key Laboratory of Wisdom Mining Information Technology, Shandong University of Science and Technology, Qingdao, 266590, CO, China.
4
Aguerchi K, Jabrane Y, Habba M, El Hassani AH. A CNN Hyperparameters Optimization Based on Particle Swarm Optimization for Mammography Breast Cancer Classification. J Imaging 2024; 10:30. PMID: 38392079; PMCID: PMC10889268; DOI: 10.3390/jimaging10020030.
Abstract
Breast cancer is considered one of the most common cancers among women worldwide, with a high mortality rate. Medical imaging is still one of the most reliable tools to detect breast cancer. Unfortunately, manual image interpretation takes much time. This paper proposes a new deep learning method based on Convolutional Neural Networks (CNNs). Convolutional Neural Networks are widely used for image classification; however, determining accurate hyperparameters and architectures remains a challenging task. In this work, a highly accurate CNN model to detect breast cancer by mammography was developed. The proposed method is based on the Particle Swarm Optimization (PSO) algorithm, used to search for suitable hyperparameters and an architecture for the CNN model. The CNN model using PSO achieved success rates of 98.23% and 97.98% on the DDSM and MIAS datasets, respectively. The experimental results showed that the proposed CNN model gave the best accuracy values in comparison with other studies in the field. As a result, CNN models for mammography classification can now be created automatically. The proposed method can be considered a powerful technique for breast cancer prediction.
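A minimal, generic PSO over a two-dimensional hyperparameter space can sketch the idea (this is not the authors' implementation; `val_error` is a stand-in for the expensive "train a CNN on DDSM/MIAS and return validation error" step, with an arbitrary optimum chosen so the example runs instantly):

```python
import random

random.seed(0)

def val_error(lr_exp, n_filters):
    """Toy surrogate for CNN validation error, minimised near
    lr_exp = -3 (i.e. learning rate 1e-3) and n_filters = 32."""
    return (lr_exp + 3.0) ** 2 + ((n_filters - 32.0) / 16.0) ** 2

def pso(fitness, bounds, n_particles=12, iters=40, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO with velocity inertia w and
    cognitive/social coefficients c1, c2; positions clamped to bounds."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            f = fitness(*pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, err = pso(val_error, bounds=[(-5.0, -1.0), (8.0, 128.0)])
print(best, err)
```

In a real search each fitness call is a full (or truncated) training run, which is why the swarm size and iteration count are kept small.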
Affiliation(s)
- Younes Jabrane
- MSC Laboratory, Cadi Ayyad University, Marrakech 40000, Morocco
- Maryam Habba
- National School of Applied Sciences of Safi, Cadi Ayyad University, Safi 46000, Morocco
- Amir Hajjam El Hassani
- Nanomedicine Imagery & Therapeutics Laboratory, EA4662-Bourgogne-Franche-Comté University, 90010 Belfort, France
5
Jiang X, Hu Z, Wang S, Zhang Y. Deep Learning for Medical Image-Based Cancer Diagnosis. Cancers (Basel) 2023; 15:3608. PMID: 37509272; PMCID: PMC10377683; DOI: 10.3390/cancers15143608.
Abstract
(1) Background: Applying deep learning technology to cancer diagnosis based on medical images is one of the research hotspots in the fields of artificial intelligence and computer vision. Cancer diagnosis requires very high accuracy and timeliness, and medical imaging has an inherent particularity and complexity, so a comprehensive review of relevant studies is necessary to help readers better understand the current research status and ideas. (2) Methods: Five types of radiological images, namely X-ray, ultrasound (US), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), as well as histopathological images, are reviewed in this paper. The basic architecture of deep learning and classical pretrained models are comprehensively reviewed. In particular, advanced techniques emerging in recent years, including transfer learning, ensemble learning (EL), graph neural networks, and vision transformers (ViT), are introduced. Four overfitting-prevention methods are summarized: batch normalization, dropout, weight initialization, and data augmentation. The application of deep learning technology to medical image-based cancer analysis is then surveyed. (3) Results: Deep learning has achieved great success in medical image-based cancer diagnosis, showing good results in image classification, image reconstruction, image detection, image segmentation, image registration, and image synthesis. However, the lack of high-quality labeled datasets limits the role of deep learning, and challenges remain in rare cancer diagnosis, multi-modal image fusion, model explainability, and generalization. (4) Conclusions: More public standard databases for cancer are needed. Pretrained models based on deep neural networks can still be improved, and special attention should be paid to research on multimodal data fusion and the supervised paradigm. Technologies such as ViT, ensemble learning, and few-shot learning will bring surprises to cancer diagnosis based on medical images.
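Among the overfitting-prevention methods the review lists, dropout is the easiest to sketch in isolation (an illustrative, framework-free sketch, not code from the review): during training each activation is zeroed with probability p and survivors are scaled by 1/(1-p) ("inverted" dropout), so the expected activation is unchanged and the layer becomes the identity at inference time.

```python
import random

def dropout(activations, p=0.5, training=True, rng=random.random):
    """Inverted dropout: zero each unit with probability p during training
    and scale survivors by 1/(1-p); at inference it is the identity."""
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng() < keep else 0.0 for a in activations]

random.seed(1)
h = [0.5, 1.0, -0.3, 2.0]
print(dropout(h, p=0.5))                  # some units zeroed, others doubled
print(dropout(h, p=0.5, training=False))  # unchanged at inference
```

Deep learning frameworks implement exactly this behind layer classes such as `torch.nn.Dropout` or `tf.keras.layers.Dropout`.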
Grants
- MC_PC_17171 MRC, UK
- RP202G0230 Royal Society, UK
- AA/18/3/34220 BHF, UK
- RM60G0680 Hope Foundation for Cancer Research, UK
- P202PF11 GCRF, UK
- RP202G0289 Sino-UK Industrial Fund, UK
- P202ED10, P202RE969 LIAS, UK
- P202RE237 Data Science Enhancement Fund, UK
- 24NN201 Fight for Sight, UK
- OP202006 Sino-UK Education Fund, UK
- RM32G0178B8 BBSRC, UK
- 2023SJZD125 Major project of philosophy and social science research in colleges and universities in Jiangsu Province, China
Affiliation(s)
- Xiaoyan Jiang
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Zuojin Hu
- School of Mathematics and Information Science, Nanjing Normal University of Special Education, Nanjing 210038, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
6
Rafiq A, Chursin A, Awad Alrefaei W, Rashed Alsenani T, Aldehim G, Abdel Samee N, Menzli LJ. Detection and Classification of Histopathological Breast Images Using a Fusion of CNN Frameworks. Diagnostics (Basel) 2023; 13:1700. PMID: 37238186; DOI: 10.3390/diagnostics13101700.
Abstract
Breast cancer is responsible for the deaths of thousands of women each year. The diagnosis of breast cancer (BC) frequently makes use of several imaging techniques. However, incorrect identification can result in unnecessary therapy and diagnostic procedures, so accurate identification of breast cancer can spare a significant number of patients unnecessary surgery and biopsy. Owing to recent developments in the field, deep learning systems for medical image processing have shown significant benefits. Deep learning (DL) models are widely used to extract important features from histopathologic BC images, which has improved classification performance and helped automate the process. In recent times, both convolutional neural networks (CNNs) and hybrid deep learning-based approaches have demonstrated impressive performance. In this research, three types of CNN model are proposed: a straightforward CNN model (1-CNN), a two-CNN fusion model (2-CNN), and a three-CNN fusion model (3-CNN). The experimental findings demonstrate that the 3-CNN approach performed best in terms of accuracy (90.10%), recall (89.90%), precision (89.80%), and F1-score (89.90%). In conclusion, the developed CNN-based approaches are contrasted with more modern machine learning and deep learning models. The application of CNN-based methods has resulted in a significant increase in the accuracy of BC classification.
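The four figures of merit quoted above all come from the binary confusion matrix; as a self-contained illustration (the toy labels below are invented, not the paper's data):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1-score for binary labels (1 = malignant)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
print(classification_metrics(y_true, y_pred))  # (0.75, 0.75, 0.75, 0.75)
```

F1 is the harmonic mean of precision and recall, which is why the paper reports it alongside the other three rather than accuracy alone.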
Affiliation(s)
- Ahsan Rafiq
- School of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Alexander Chursin
- Higher School of Industrial Policy and Entrepreneurship, RUDN University, 6 Miklukho-Maklaya St, Moscow 117198, Russia
- Wejdan Awad Alrefaei
- Department of Programming and Computer Sciences, Applied College in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 16245, Saudi Arabia
- Tahani Rashed Alsenani
- Department of Biology, College of Sciences in Yanbu, Taibah University, Yanbu 46522, Saudi Arabia
- Ghadah Aldehim
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Leila Jamel Menzli
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
7
Wu Z, Li X, Zuo J. RAD-UNet: Research on an improved lung nodule semantic segmentation algorithm based on deep learning. Front Oncol 2023; 13:1084096. PMID: 37035155; PMCID: PMC10076852; DOI: 10.3389/fonc.2023.1084096.
Abstract
Objective Because target pixels make up a small proportion of computed tomography (CT) images and closely resemble their surroundings, convolutional neural network-based semantic segmentation models are difficult to develop, and extracting feature information often leads to under- or oversegmentation of lesions in CT images. In this paper, an improved convolutional neural network segmentation model known as RAD-UNet, based on the U-Net encoder-decoder architecture, is proposed and applied to lung nodule segmentation in CT images. Method The proposed RAD-UNet segmentation model includes several improved components: the U-Net encoder is replaced by a ResNet residual network module; an atrous spatial pyramid pooling module is added after the encoder; and the U-Net decoder is improved by introducing a cross-fusion feature module with channel and spatial attention. Results The segmentation model was applied to the LIDC dataset and a CT dataset collected by the Affiliated Hospital of Anhui Medical University. The experimental results show that, compared with the existing SegNet [14] and U-Net [15] methods, the proposed model demonstrates better lung lesion segmentation performance. On the above two datasets, the mIoU reached 87.76% and 88.13%, and the F1-score reached 93.56% and 93.72%, respectively. Conclusion The improved RAD-UNet method achieves more accurate pixel-level segmentation in CT images of lung tumours and identifies lung nodules better than the SegNet [14] and U-Net [15] models. The problems of under- and oversegmentation that occur during segmentation are addressed, effectively improving image segmentation performance.
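The mIoU figure reported above is the mean intersection-over-union across classes; a minimal, generic version of that metric (the toy label arrays below are invented) is:

```python
def mean_iou(y_true, y_pred, n_classes=2):
    """Mean intersection-over-union across classes for flat label arrays;
    classes absent from both prediction and ground truth are skipped."""
    ious = []
    for c in range(n_classes):
        inter = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        union = sum(1 for t, p in zip(y_true, y_pred) if t == c or p == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

truth = [0, 0, 1, 1, 1, 0]   # 1 = nodule pixel, 0 = background
pred  = [0, 1, 1, 1, 0, 0]
print(mean_iou(truth, pred))  # (0.5 + 0.5) / 2 = 0.5
```

Because background usually dominates CT slices, averaging IoU over classes keeps the score from being inflated by easy background pixels.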
Affiliation(s)
- Zezhi Wu
- Department of Computer Science, Anhui Medical University, Hefei, Anhui, China
- Xiaoshu Li
- Department of Radiology, First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
- Jianhui Zuo
- Department of General Thoracic Surgery, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
8
Reenadevi R, Sathiyabhama B, Sankar S, Pandey D. Breast cancer detection in digital mammography using a novel hybrid approach of Salp Swarm and Cuckoo Search algorithm with deep belief network classifier. The Imaging Science Journal 2023. DOI: 10.1080/13682199.2022.2161149.
Affiliation(s)
- R. Reenadevi
- Department of Computer Science and Engineering, Sona College of Technology, Salem, India
- B. Sathiyabhama
- Department of Computer Science and Engineering, Sona College of Technology, Salem, India
- S. Sankar
- Department of Computer Science and Engineering, Sona College of Technology, Salem, India
- Digvijay Pandey
- Department of Technical Education, IET, Dr A.P.J Abdul Kalam Technical University, Lucknow, India
9
Ranjbarzadeh R, Dorosti S, Jafarzadeh Ghoushchi S, Caputo A, Tirkolaee EB, Ali SS, Arshadi Z, Bendechache M. Breast tumor localization and segmentation using machine learning techniques: Overview of datasets, findings, and methods. Comput Biol Med 2023; 152:106443. PMID: 36563539; DOI: 10.1016/j.compbiomed.2022.106443.
Abstract
The Global Cancer Statistics 2020 report identified breast cancer (BC) as the most commonly diagnosed cancer type; early detection therefore reduces the risk of death from it. Breast imaging techniques are among the most frequently used means of locating cancerous cells or suspicious lesions. Computer-aided diagnosis (CAD) refers to computer systems that assist experts in detecting abnormalities in medical images. In recent decades, CAD has applied deep learning (DL) and machine learning approaches to perform complex medical tasks in computer vision and to improve decision-making for doctors and radiologists. The most popular and widely used image processing technique in CAD systems is segmentation, which extracts the region of interest (ROI) through various techniques. This research provides a detailed description of the main categories of segmentation procedures, classified into three classes: supervised, unsupervised, and DL. The main aim of this work is to provide an overview of each of these techniques and discuss their pros and cons, helping researchers better understand them and choose the appropriate method for a given use case.
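Of the three segmentation categories the review distinguishes, the unsupervised class includes classical intensity thresholding; Otsu's method, shown here as a generic self-contained sketch (the toy "image" is invented, not from the review), picks the grey level that maximises between-class variance and splits pixels into background and lesion candidates:

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the grey level t that maximises the
    between-class variance w_bg * w_fg * (mean_bg - mean_fg)^2."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_bg, sum_bg = 0, -1.0, 0, 0.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "image": dark background around 30, bright region around 200.
pixels = [28, 30, 32, 31, 29, 198, 200, 202, 199, 201]
t = otsu_threshold(pixels)
mask = [1 if v > t else 0 for v in pixels]
print(t, mask)
```

Supervised and DL approaches replace this hand-crafted criterion with learned decision functions, which is precisely the trade-off the review surveys.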
Affiliation(s)
- Ramin Ranjbarzadeh
- School of Computing, Faculty of Engineering and Computing, Dublin City University, Ireland.
- Shadi Dorosti
- Department of Industrial Engineering, Urmia University of Technology, Urmia, Iran.
- Annalina Caputo
- School of Computing, Faculty of Engineering and Computing, Dublin City University, Ireland.
- Sadia Samar Ali
- Department of Industrial Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia.
- Zahra Arshadi
- Faculty of Electronics, Telecommunications and Physics Engineering, Polytechnic University, Turin, Italy.
- Malika Bendechache
- Lero & ADAPT Research Centres, School of Computer Science, University of Galway, Ireland.
10
Al-Hejri AM, Al-Tam RM, Fazea M, Sable AH, Lee S, Al-antari MA. ETECADx: Ensemble Self-Attention Transformer Encoder for Breast Cancer Diagnosis Using Full-Field Digital X-ray Breast Images. Diagnostics (Basel) 2022; 13:89. PMID: 36611382; PMCID: PMC9818801; DOI: 10.3390/diagnostics13010089.
Abstract
Early detection of breast cancer is essential to reduce the mortality rate among women. In this paper, a new AI-based computer-aided diagnosis (CAD) framework called ETECADx is proposed, fusing the benefits of ensemble transfer learning of convolutional neural networks with the self-attention mechanism of a vision transformer (ViT) encoder. Accurate and precise high-level deep features are generated via the backbone ensemble network, while the transformer encoder predicts breast cancer probabilities under two approaches: Approach A (binary classification) and Approach B (multi-classification). To build the proposed CAD system, the benchmark public multi-class INbreast dataset is used, while private real breast cancer images, collected and annotated by expert radiologists, validate the prediction performance of the proposed ETECADx framework. Promising evaluation results are achieved on the INbreast mammograms, with overall accuracies of 98.58% and 97.87% for the binary and multi-class approaches, respectively. Compared with the individual backbone networks, the proposed ensemble learning model improves breast cancer prediction performance by 6.6% for the binary and 4.6% for the multi-class approach. The hybrid ETECADx shows further improvement when the ViT-based ensemble backbone network is used: 8.1% and 6.2% for binary and multi-class diagnosis, respectively. On the real breast images used for validation, the proposed CAD system provides encouraging prediction accuracies of 97.16% for the binary and 89.40% for the multi-class approach. ETECADx predicts the lesions in a single mammogram in an average of 0.048 s. Such performance could help practical CAD frameworks provide a supporting second opinion when distinguishing various breast cancer malignancies.
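A common, generic way to fuse the outputs of several backbone networks (shown here as an illustrative sketch, not the authors' exact fusion, and with invented softmax outputs) is soft voting, i.e. averaging the class-probability vectors and taking the argmax of the mean:

```python
def soft_vote(prob_sets, weights=None):
    """Average class-probability vectors from several backbone networks;
    the ensemble prediction is the argmax of the (optionally weighted) mean."""
    n = len(prob_sets)
    weights = weights or [1.0 / n] * n
    n_classes = len(prob_sets[0])
    mean = [sum(w * probs[c] for w, probs in zip(weights, prob_sets))
            for c in range(n_classes)]
    return mean, max(range(n_classes), key=lambda c: mean[c])

# Hypothetical softmax outputs of three backbones for one mammogram,
# over classes (normal, benign, malignant).
backbones = [
    [0.10, 0.30, 0.60],
    [0.20, 0.45, 0.35],
    [0.05, 0.25, 0.70],
]
mean, label = soft_vote(backbones)
print(mean, label)  # predicted class 2 (malignant)
```

The averaged vector (rather than the raw concatenated features) can also be fed to a downstream classifier, which is closer in spirit to feeding ensemble features into a transformer encoder.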
Affiliation(s)
- Aymen M. Al-Hejri
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Faculty of Administrative and Computer Sciences, University of Albaydha, Albaydha, Yemen
- Riyadh M. Al-Tam
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Faculty of Administrative and Computer Sciences, University of Albaydha, Albaydha, Yemen
- Muneer Fazea
- Department of Radiology, Al-Ma’amon Diagnostic Center, Sana’a, Yemen
- Department of Radiology, School of Medicine, Ibb University of Medical Sciences, Ibb, Yemen
- Archana Harsing Sable
- School of Computational Sciences, Swami Ramanand Teerth Marathwada University, Nanded 431606, Maharashtra, India
- Correspondence: (A.H.S.); (M.A.A.-a.)
- Soojeong Lee
- Department of Computer Engineering, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Mugahed A. Al-antari
- Department of Artificial Intelligence, College of Software and Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Correspondence: (A.H.S.); (M.A.A.-a.)
11
Awotunde JB, Imoize AL, Ayoade OB, Abiodun MK, Do DT, Silva A, Sur SN. An Enhanced Hyper-Parameter Optimization of a Convolutional Neural Network Model for Leukemia Cancer Diagnosis in a Smart Healthcare System. Sensors (Basel) 2022; 22:9689. PMID: 36560057; PMCID: PMC9785310; DOI: 10.3390/s22249689.
Abstract
Healthcare systems have recently witnessed timely diagnoses with a high level of accuracy. Internet of Medical Things (IoMT)-enabled deep learning (DL) models have been used to support medical diagnostics in real time, resolving the issue of late-stage diagnosis of various diseases and increasing performance accuracy. The current approach to leukemia diagnosis relies on traditional procedures and, in most cases, fails at the initial stage; consequently, many cancer patients have died prematurely due to the late discovery of cancerous cells in blood tissue. This study therefore proposes an IoMT-enabled convolutional neural network (CNN) model to detect malignant and benign cancer cells in a patient's blood tissue. In particular, hyper-parameter optimization through radial basis function and dynamic coordinate search (HORD) was used to search for optimal values of the CNN hyper-parameters. The HORD algorithm significantly increased the effectiveness of finding the best solution by searching the multidimensional hyper-parameter space, yielding hyper-parameter values suited to precise leukemia features and improving model performance. Leukemia datasets were used to evaluate the performance of the proposed model using standard performance indicators, and it achieved significant classification accuracy compared with other state-of-the-art models.
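HORD itself builds a radial-basis-function surrogate of the validation objective and refines it by dynamic coordinate search, which is too involved to reproduce here. As a point of comparison, the plain random-search baseline that surrogate methods are designed to beat can be sketched in a few lines (everything below is illustrative: `toy_val_accuracy` is a made-up stand-in for "train the CNN on the leukemia images and return validation accuracy", and the search space is hypothetical):

```python
import random

random.seed(42)

def toy_val_accuracy(lr_exp, dropout, n_filters):
    """Toy objective peaked near lr_exp=-3, dropout=0.4, n_filters=64;
    a real run would train and validate the CNN at this configuration."""
    return 1.0 - ((lr_exp + 3) ** 2 / 8
                  + (dropout - 0.4) ** 2
                  + ((n_filters - 64) / 64) ** 2 / 4)

space = {
    "lr_exp":    lambda: random.uniform(-5, -1),
    "dropout":   lambda: random.uniform(0.0, 0.8),
    "n_filters": lambda: random.choice([16, 32, 64, 128]),
}

best_cfg, best_acc = None, float("-inf")
for _ in range(200):
    cfg = {k: sample() for k, sample in space.items()}
    acc = toy_val_accuracy(**cfg)
    if acc > best_acc:
        best_cfg, best_acc = cfg, acc
print(best_cfg, round(best_acc, 3))
```

A surrogate-based optimizer like HORD aims to reach a comparable optimum with far fewer objective evaluations, which matters when each evaluation is a full CNN training run.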
Affiliation(s)
- Joseph Bamidele Awotunde
- Department of Computer Science, Faculty of Information and Communication Sciences, University of Ilorin, Ilorin 240003, Nigeria
- Agbotiname Lucky Imoize
- Department of Electrical and Electronics Engineering, Faculty of Engineering, University of Lagos, Akoka, Lagos 100213, Nigeria
- Department of Electrical Engineering and Information Technology, Institute of Digital Communication, Ruhr University, 44801 Bochum, Germany
- Oluwafisayo Babatope Ayoade
- Department of Computing and Information Science, School of Pure & Applied Sciences, College of Science, Bamidele Olumilua University of Education, Science & Technology, Ikere-Ekiti 361264, Nigeria
- Dinh-Thuan Do
- Department of Computer Science and Information Engineering, College of Information and Electrical Engineering, Asia University, Taichung 41354, Taiwan
- Adão Silva
- Instituto de Telecomunicações (IT) and Departamento de Eletrónica, Telecomunicações e Informática (DETI), University of Aveiro, 3810-193 Aveiro, Portugal
- Samarendra Nath Sur
- Department of Electronics and Communication Engineering, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Majitar, Rangpo 737136, Sikkim, India
12
Shaban WM. Insight into breast cancer detection: new hybrid feature selection method. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-08062-y.
Abstract
Breast cancer, a leading cause of cancer death among women, is one of the most common forms of the disease affecting females worldwide. Discovering breast cancer at an early stage is extremely important because it allows an appropriate treatment protocol to be selected, halting the development of cancer cells. This paper presents a new detection strategy for identifying patients with the disease earlier. The proposed strategy consists of two parts: a data preprocessing phase and a patient detection phase (PDP). The purpose of this study is to introduce a feature selection methodology, the new hybrid feature selection method (NHFSM), for determining the most significant features for identifying breast cancer patients. NHFSM comprises two modules: a quick selection module that uses information gain, and a feature selection module that uses a hybrid of the bat algorithm and particle swarm optimization. NHFSM thus combines the advantages of the bat algorithm and particle swarm optimization on top of a filter method, avoiding drawbacks such as getting stuck in local optima and unbalanced exploitation. The preprocessed data are then used during PDP to enable quick and accurate detection of patients. In experiments, the proposed NHFSM improves patient classification over state-of-the-art feature selection approaches, reaching roughly 0.97 accuracy, 0.76 precision, 0.75 sensitivity/recall, and 0.716 F-measure, with the lowest error rate (0.03).
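The quick-selection module's information-gain filter has a compact definition; a minimal sketch over discrete features (the toy feature columns are hypothetical, not from the paper):

```python
from math import log2
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a label list."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """H(Y) - H(Y | X) for one discrete feature column."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Toy data: feature A predicts the label perfectly, feature B is noise.
y = [1, 1, 0, 0]
A = [1, 1, 0, 0]
B = [1, 0, 1, 0]
```

Ranking features by this score and keeping the top ones is the cheap filter step; the bat/PSO hybrid then searches subsets of the survivors.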
13
Samee NA, Ahmad T, Mahmoud NF, Atteia G, Abdallah HA, Rizwan A. Clinical Decision Support Framework for Segmentation and Classification of Brain Tumor MRIs Using a U-Net and DCNN Cascaded Learning Algorithm. Healthcare (Basel) 2022; 10:healthcare10122340. [PMID: 36553864 PMCID: PMC9777942 DOI: 10.3390/healthcare10122340] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 11/11/2022] [Accepted: 11/15/2022] [Indexed: 11/23/2022] Open
Abstract
Brain tumors (BTs) are an uncommon but fatal kind of cancer. Therefore, the development of computer-aided diagnosis (CAD) systems for classifying brain tumors in magnetic resonance imaging (MRI) has been the subject of many research papers so far. However, research in this sector is still in its early stage. The ultimate goal of this research is to develop a lightweight effective implementation of the U-Net deep network for use in performing exact real-time segmentation. Moreover, a simplified deep convolutional neural network (DCNN) architecture for the BT classification is presented for automatic feature extraction and classification of the segmented regions of interest (ROIs). Five convolutional layers, rectified linear unit, normalization, and max-pooling layers make up the DCNN's proposed simplified architecture. The introduced method was verified on multimodal brain tumor segmentation (BRATS 2015) datasets. Our experimental results on BRATS 2015 acquired Dice similarity coefficient (DSC) scores, sensitivity, and classification accuracy of 88.8%, 89.4%, and 88.6% for high-grade gliomas. When it comes to segmenting BRATS 2015 BT images, the performance of our proposed CAD framework is on par with existing state-of-the-art methods. However, the accuracy achieved in this study for the classification of BT images has improved upon the accuracy reported in prior studies. Image classification accuracy for BRATS 2015 BT has been improved from 88% to 88.6%.
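The Dice similarity coefficient (DSC) reported above has a simple definition; a minimal sketch over flattened binary masks (toy values, not BRATS data):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (flat lists):
    twice the overlap divided by the total foreground in both masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

# Toy 1 x 6 segmentation masks:
pred  = [1, 1, 1, 0, 0, 0]
truth = [0, 1, 1, 1, 0, 0]
```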
Affiliation(s)
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Tahir Ahmad
- Department of Computer Science, COMSATS University Islamabad, Attock Campus, Attock 43600, Pakistan
- Noha F. Mahmoud
- Rehabilitation Sciences Department, Health and Rehabilitation Sciences College, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ghada Atteia
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Hanaa A. Abdallah
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Atif Rizwan
- Department of Computer Engineering, Jeju National University, Jejusi 63243, Republic of Korea

14
A Hybrid Workflow of Residual Convolutional Transformer Encoder for Breast Cancer Classification Using Digital X-ray Mammograms. Biomedicines 2022; 10:biomedicines10112971. [PMID: 36428538 PMCID: PMC9687367 DOI: 10.3390/biomedicines10112971] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2022] [Revised: 11/03/2022] [Accepted: 11/13/2022] [Indexed: 11/19/2022] Open
Abstract
Breast cancer, which attacks the glandular epithelium of the breast, is the second most common kind of cancer in women after lung cancer, and it affects a significant number of people worldwide. Based on the advantages of the Residual Convolutional Network and the Transformer Encoder with Multilayer Perceptron (MLP), this study proposes a novel hybrid deep learning Computer-Aided Diagnosis (CAD) system for breast lesions. While the backbone residual deep learning network is employed to create the deep features, the transformer classifies breast cancer via its self-attention mechanism. The proposed CAD system recognizes breast cancer in two scenarios: Scenario A (binary classification) and Scenario B (multi-classification). Data collection and preprocessing, patch image creation and splitting, and artificial-intelligence-based breast lesion identification are components of the execution framework applied consistently across both scenarios. The effectiveness of the proposed AI model is compared against three separate deep learning models: a custom CNN, VGG16, and ResNet50. Two datasets, CBIS-DDSM and DDSM, are utilized to construct and test the proposed CAD system. Five-fold cross validation of the test data is used to evaluate the performance results. The suggested hybrid CAD system achieves encouraging evaluation results, with overall accuracies of 100% and 95.80% for the binary and multiclass prediction challenges, respectively. The experimental results reveal that the proposed hybrid AI model reliably distinguishes benign from malignant breast tissue, which is important for radiologists when recommending further investigation of abnormal mammograms and choosing the optimal treatment plan.
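The five-fold cross validation used for evaluation can be sketched as a plain index-splitting routine (a minimal stand-in for a library utility such as scikit-learn's `KFold`):

```python
def k_fold(indices, k=5):
    """Yield (train, test) index lists for k-fold cross validation."""
    folds = [indices[i::k] for i in range(k)]      # round-robin assignment
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

# Ten toy sample indices split into five folds of two:
splits = list(k_fold(list(range(10)), k=5))
```

Each sample appears in exactly one test fold, so the reported accuracy averages over models that never saw their test patches during training.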
15
An evaluation of lightweight deep learning techniques in medical imaging for high precision COVID-19 diagnostics. HEALTHCARE ANALYTICS 2022. [PMID: 37520618 PMCID: PMC9396460 DOI: 10.1016/j.health.2022.100096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
Timely and rapid diagnoses are core to informing on optimum interventions that curb the spread of COVID-19. The use of medical images such as chest X-rays and CTs has been advocated to supplement the Reverse-Transcription Polymerase Chain Reaction (RT-PCR) test, which in turn has stimulated the application of deep learning techniques in the development of automated systems for the detection of infections. Decision support systems relax the challenges inherent to the physical examination of images, which is both time consuming and requires interpretation by highly trained clinicians. A review of relevant studies reported to date shows that most deep learning approaches are not amenable to implementation on resource-constrained devices. Given that the rate of infections is increasing, rapid, trusted diagnoses are a central tool in managing the spread, mandating low-cost, mobile point-of-care detection systems, especially for middle- and low-income nations. The paper presents the development and performance evaluation of a lightweight deep learning technique for the detection of COVID-19 using the MobileNetV2 model. Results demonstrate that the performance of the lightweight deep learning model is competitive with heavyweight models while delivering a significant increase in deployment efficiency, notably lowering the cost and memory requirements of computing resources.
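MobileNetV2's lightness comes largely from replacing standard convolutions with depthwise separable ones; the parameter-count comparison below (bias terms ignored, channel sizes chosen arbitrarily for illustration) shows the reduction:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise one."""
    return c_in * k * k + c_in * c_out

std = conv_params(32, 64, 3)       # standard 3x3, 32 -> 64 channels
sep = separable_params(32, 64, 3)  # depthwise + pointwise equivalent
```

The roughly order-of-magnitude drop in weights per layer is what makes deployment on resource-constrained point-of-care devices feasible.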
16
Bhuyan HK, Ravi V, Bramha B, Kamila NK. Disease analysis using machine learning approaches in healthcare system. HEALTH AND TECHNOLOGY 2022. [DOI: 10.1007/s12553-022-00687-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
17
Basurto-Hurtado JA, Cruz-Albarran IA, Toledano-Ayala M, Ibarra-Manzano MA, Morales-Hernandez LA, Perez-Ramirez CA. Diagnostic Strategies for Breast Cancer Detection: From Image Generation to Classification Strategies Using Artificial Intelligence Algorithms. Cancers (Basel) 2022; 14:3442. [PMID: 35884503 PMCID: PMC9322973 DOI: 10.3390/cancers14143442] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2022] [Revised: 07/02/2022] [Accepted: 07/12/2022] [Indexed: 02/04/2023] Open
Abstract
Breast cancer is one of the main causes of death for women worldwide, as 16% of the diagnosed malignant lesions worldwide are its consequence. In this sense, it is of paramount importance to diagnose these lesions at the earliest stage possible in order to have the highest chances of survival. While there are several works that present selected topics in this area, none of them present a complete panorama, that is, from image generation to its interpretation. This work presents a comprehensive state-of-the-art review of the image generation and processing techniques used to detect breast cancer, where potential candidates for image generation and processing are presented and discussed. Novel methodologies should consider the adroit integration of artificial intelligence concepts and categorical data to generate modern alternatives that can achieve the accuracy, precision, and reliability expected to mitigate misclassifications.
Affiliation(s)
- Jesus A. Basurto-Hurtado
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
- Irving A. Cruz-Albarran
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico
- Manuel Toledano-Ayala
- División de Investigación y Posgrado de la Facultad de Ingeniería (DIPFI), Universidad Autónoma de Querétaro, Cerro de las Campanas S/N Las Campanas, Santiago de Querétaro 76010, Mexico
- Mario Alberto Ibarra-Manzano
- Laboratorio de Procesamiento Digital de Señales, Departamento de Ingeniería Electrónica, Division de Ingenierias Campus Irapuato-Salamanca (DICIS), Universidad de Guanajuato, Carretera Salamanca-Valle de Santiago KM. 3.5 + 1.8 Km., Salamanca 36885, Mexico
- Luis A. Morales-Hernandez
- C.A. Mecatrónica, Facultad de Ingeniería, Campus San Juan del Río, Universidad Autónoma de Querétaro, Rio Moctezuma 249, San Cayetano, San Juan del Rio 76807, Mexico
- Carlos A. Perez-Ramirez
- Laboratorio de Dispositivos Médicos, Facultad de Ingeniería, Universidad Autónoma de Querétaro, Carretera a Chichimequillas S/N, Ejido Bolaños, Santiago de Querétaro 76140, Mexico

18
Samee NA, Alhussan AA, Ghoneim VF, Atteia G, Alkanhel R, Al-antari MA, Kadah YM. A Hybrid Deep Transfer Learning of CNN-Based LR-PCA for Breast Lesion Diagnosis via Medical Breast Mammograms. SENSORS 2022; 22:s22134938. [PMID: 35808433 PMCID: PMC9269713 DOI: 10.3390/s22134938] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/06/2022] [Revised: 06/22/2022] [Accepted: 06/27/2022] [Indexed: 12/16/2022]
Abstract
One of the most promising research areas in the healthcare industry and the scientific community focuses on AI-based applications for real medical challenges, such as building computer-aided diagnosis (CAD) systems for breast cancer. Transfer learning is one of the recently emerging AI-based techniques that allow rapid learning progress and improve medical imaging diagnosis performance. Although deep learning classification for breast cancer has been widely covered, certain obstacles remain in investigating the independence among the extracted high-level deep features. This work tackles two challenges that still exist when designing effective CAD systems for breast lesion classification from mammograms. The first challenge is to enrich the input information of the deep learning models by generating pseudo-colored images instead of using only the original grayscale images. To achieve this goal, two different image preprocessing techniques are applied in parallel: contrast-limited adaptive histogram equalization (CLAHE) and pixel-wise intensity adjustment. The original image is preserved in the first channel, while the other two channels receive the processed images, respectively. The generated three-channel pseudo-colored images are fed directly into the input layer of the backbone CNNs to generate more powerful high-level deep features. The second challenge is to overcome the multicollinearity problem that occurs among the highly correlated deep features generated by deep learning models. A new hybrid processing technique based on Logistic Regression (LR) and Principal Components Analysis (PCA), called LR-PCA, is presented. This process helps select the significant principal components (PCs) for use in the classification. The proposed CAD system has been examined using two different public benchmark datasets, INbreast and mini-MIAS, achieving the highest performance accuracies of 98.60% and 98.80%, respectively. Such a CAD system appears to be useful and reliable for breast cancer diagnosis.
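The three-channel pseudo-coloring step can be sketched with simple stand-ins: a linear min-max stretch plays the role of one processed channel (real CLAHE is tile-based and clip-limited, e.g. OpenCV's `createCLAHE`), and an arbitrary brightness scaling plays the role of the pixel-wise intensity adjustment. The pixel values are toy data, not mammogram intensities:

```python
def stretch(img, lo=0.0, hi=255.0):
    """Linear min-max intensity stretch (placeholder for the CLAHE channel)."""
    mn, mx = min(img), max(img)
    return [lo + (p - mn) * (hi - lo) / (mx - mn) for p in img]

# One flattened grayscale patch (toy values):
grayscale = [10.0, 40.0, 80.0, 200.0]
ch_stretch = stretch(grayscale)                        # processed channel 2
ch_adjust = [min(255.0, p * 1.2) for p in grayscale]   # processed channel 3
# Channel 1 keeps the original image, as in the paper:
pseudo_rgb = list(zip(grayscale, ch_stretch, ch_adjust))
```

Stacking the original and two contrast-enhanced views lets a pretrained RGB backbone see complementary renderings of the same tissue.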
Affiliation(s)
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Amel A. Alhussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Ghada Atteia
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Reem Alkanhel
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Mugahed A. Al-antari
- Department of Artificial Intelligence, College of Software & Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Korea
- Yasser M. Kadah
- Electrical and Computer Engineering Department, King Abdulaziz University, Jeddah 22254, Saudi Arabia
- Biomedical Engineering Department, Cairo University, Giza 12613, Egypt

19
Lee S, Kim H, Lee H, Cho S. Deep-learning-based projection-domain breast thickness estimation for shape-prior iterative image reconstruction in digital breast tomosynthesis. Med Phys 2022; 49:3670-3682. [PMID: 35297075 DOI: 10.1002/mp.15612] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 03/10/2022] [Accepted: 03/11/2022] [Indexed: 11/11/2022] Open
Abstract
BACKGROUND Digital breast tomosynthesis (DBT) is a technique that can overcome the shortcomings of conventional X-ray mammography and can be effective for the early screening of breast cancer. Compression of the breast is essential during DBT imaging. However, since the periphery of the breast cannot be compressed to a constant value, nonuniformity of thickness and in-plane shape variation occur. These cause inconvenience in diagnosis, scatter correction, and breast density estimation. PURPOSE In this study, we propose a deep-learning-based methodology for projection-domain breast thickness estimation and demonstrate a shape-prior iterative DBT image reconstruction. METHODS We prepared the Euclidean distance map, the thickness map, and the thickness-corrected image of the simulated breast projections for thickness and shape estimation. Each pixel of the Euclidean distance map denotes the distance to the closest skin-line. The thickness map is defined as a conceptual projection of ideal breast support that differentiates the inner and outer regions of the breast phantom; it thus represents the X-ray path lengths through a homogeneous breast phantom. We generated the thickness-corrected image by dividing the projection image by the thickness map in a pixel-wise manner. We developed a convolutional neural network for thickness estimation and correction, which takes a projection image and a Euclidean distance image together as a dual input. The estimated breast thickness map is then used to construct the breast shape mask by use of the discrete algebraic reconstruction technique (DART). RESULTS The proposed network effectively corrected the breast thickness in various simulation settings. A low normalized root-mean-squared error (NRMSE; 1.976%) and high structural similarity (SSIM; 99.997%) indicated good agreement between the network-generated thickness-corrected image and the ground-truth image. Compared to existing methods and a simple single-input network, the proposed method outperformed in breast thickness estimation, and accordingly in breast shape recovery, for various numerical phantoms without provoking any significant artifact. We demonstrated that voxel-value uniformity improves when a shape prior is included in the iterative DBT reconstruction. CONCLUSIONS We presented a novel deep-learning-based breast thickness correction and shape reconstruction method. Estimating the true thickness map and the shape of the breast under compression can benefit various fields such as improvement of diagnostic breast images, scatter correction, material decomposition, and breast density estimation.
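The thickness-correction step described in METHODS, dividing the projection by the thickness map pixel-wise, can be sketched as follows; the tiny arrays are toy values, and the epsilon guard for zero-thickness (outside-breast) pixels is an assumption of this sketch, not from the paper:

```python
def thickness_corrected(projection, thickness, eps=1e-6):
    """Divide the projection by the thickness map pixel-wise; eps guards
    zero-thickness pixels outside the breast."""
    return [[p / max(t, eps) for p, t in zip(prow, trow)]
            for prow, trow in zip(projection, thickness)]

# Toy 2 x 3 projection of a homogeneous phantom (attenuation 2.0 per unit
# path length) and the matching thickness (path-length) map:
projection = [[0.0, 2.0, 6.0],
              [0.0, 4.0, 6.0]]
thickness  = [[0.0, 1.0, 3.0],
              [0.0, 2.0, 3.0]]
corrected = thickness_corrected(projection, thickness)
```

For a homogeneous phantom the corrected image is flat inside the breast (2.0 everywhere here), which is the property the paper's network is trained to recover when the true thickness is unknown.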
Affiliation(s)
- Seoyoung Lee
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Hyeongseok Kim
- KAIST Institute for Artificial Intelligence, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Hoyeon Lee
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, 02114, USA
- Seungryong Cho
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- KAIST Institute for Artificial Intelligence, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- KAIST Institutes for IT Convergence and Health Science and Technology, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea

20
Breast Histopathological Image Classification Method Based on Autoencoder and Siamese Framework. INFORMATION 2022. [DOI: 10.3390/info13030107] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The automated classification of breast cancer histopathological images is one of the important tasks in computer-aided diagnosis (CAD) systems. Because breast cancer histopathological images exhibit small inter-class and large intra-class variances, extracting features for breast cancer classification is difficult. To address this problem, an improved autoencoder (AE) network using a Siamese framework was designed to learn effective features from histopathological images for CAD breast cancer classification tasks. First, the input image is processed at multiple scales using a Gaussian pyramid to obtain multi-scale features. Second, in the feature extraction stage, a Siamese framework is used to constrain the pre-trained AE so that the extracted features have smaller intra-class variance and larger inter-class variance. Experimental results show that the proposed method's classification accuracy was as high as 97.8% on the BreakHis dataset. Compared with algorithms commonly used in breast cancer histopathological classification, this method delivers superior accuracy at faster speed.
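The Siamese constraint — smaller intra-class and larger inter-class distances — is commonly enforced with a contrastive loss; a minimal sketch with hypothetical 2-D embeddings (the paper's exact loss formulation may differ):

```python
def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def contrastive_loss(d, same, margin=1.0):
    """Pull same-class pairs together (d^2); push different-class pairs
    until they are at least `margin` apart."""
    return d ** 2 if same else max(0.0, margin - d) ** 2

# Hypothetical 2-D embeddings of histopathology patches:
benign_a, benign_b, malignant = [0.1, 0.2], [0.15, 0.25], [0.9, 0.8]
pull = contrastive_loss(euclidean(benign_a, benign_b), same=True)
push = contrastive_loss(euclidean(benign_a, malignant), same=False)
```

Minimizing the sum of such pairwise terms is what shrinks intra-class and grows inter-class variance in the learned feature space.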
21
Deep convolutional neural networks for computer-aided breast cancer diagnostic: a survey. Neural Comput Appl 2022. [DOI: 10.1007/s00521-021-06804-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
22
Mridha MF, Hamid MA, Monowar MM, Keya AJ, Ohi AQ, Islam MR, Kim JM. A Comprehensive Survey on Deep-Learning-Based Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:6116. [PMID: 34885225 PMCID: PMC8656730 DOI: 10.3390/cancers13236116] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 11/25/2021] [Accepted: 12/01/2021] [Indexed: 12/11/2022] Open
Abstract
Breast cancer is now the most frequently diagnosed cancer in women, and its incidence is gradually increasing. Optimistically, there is a good chance of recovery from breast cancer if it is identified and treated at an early stage. Therefore, several researchers have established deep-learning-based automated methods, valued for their efficiency and accuracy, for predicting the growth of cancer cells using medical imaging modalities. To date, only a few review studies on breast cancer diagnosis are available, and they summarize a subset of existing work without addressing emerging architectures and modalities. This review focuses on the evolving architectures of deep learning for breast cancer detection. In what follows, this survey presents existing deep-learning-based architectures, analyzes the strengths and limitations of existing studies, examines the datasets used, and reviews image pre-processing techniques. Furthermore, a concrete review of diverse imaging modalities, performance metrics and results, challenges, and research directions for future researchers is presented.
Affiliation(s)
- Muhammad Firoz Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Abdul Hamid
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Muhammad Mostafa Monowar
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Ashfia Jannat Keya
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Abu Quwsar Ohi
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Jong-Myon Kim
- Department of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan 680-749, Korea

23
A Hybrid Deep Learning Approach for COVID-19 Diagnosis via CT and X-ray Medical Images. IOCA 2021 2021. [DOI: 10.3390/ioca2021-10909] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
24
Hameed BMZ, Prerepa G, Patil V, Shekhar P, Zahid Raza S, Karimi H, Paul R, Naik N, Modi S, Vigneswaran G, Prasad Rai B, Chłosta P, Somani BK. Engineering and clinical use of artificial intelligence (AI) with machine learning and data science advancements: radiology leading the way for future. Ther Adv Urol 2021; 13:17562872211044880. [PMID: 34567272 PMCID: PMC8458681 DOI: 10.1177/17562872211044880] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Accepted: 08/21/2021] [Indexed: 12/29/2022] Open
Abstract
Over the years, many clinical and engineering methods have been adapted for testing and screening for the presence of diseases. The most commonly used methods for diagnosis and analysis are computed tomography (CT) and X-ray imaging. Manual interpretation of these images is the current gold standard but can be subject to human error, is tedious, and is time-consuming. To improve efficiency and productivity, incorporating machine learning (ML) and deep learning (DL) algorithms could expedite the process. This article aims to review the role of artificial intelligence (AI) and its contribution to data science as well as various learning algorithms in radiology. We will analyze and explore the potential applications in image interpretation and radiological advances for AI. Furthermore, we will discuss the usage, methodology implemented, future of these concepts in radiology, and their limitations and challenges.
Affiliation(s)
- B M Zeeshan Hameed
- Department of Urology, Father Muller Medical College, Mangalore, Karnataka, India
- Gayathri Prerepa
- Department of Electronics and Communication, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Vathsala Patil
- Department of Oral Medicine and Radiology, Manipal College of Dental Sciences, Manipal, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Pranav Shekhar
- Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Syed Zahid Raza
- Department of Urology, Dr. B.R. Ambedkar Medical College, Bengaluru, Karnataka, India
- Hadis Karimi
- Manipal College of Pharmaceutical Sciences, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Rahul Paul
- Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Nithesh Naik
- International Training and Research in Uro-oncology and Endourology (iTRUE) Group, Manipal, India
- Sachin Modi
- Department of Interventional Radiology, University Hospital Southampton NHS Foundation Trust, Southampton, UK
- Ganesh Vigneswaran
- Department of Interventional Radiology, University Hospital Southampton NHS Foundation Trust, Southampton, UK
- Bhavan Prasad Rai
- International Training and Research in Uro-oncology and Endourology (iTRUE) Group, Manipal, India
- Piotr Chłosta
- Department of Urology, Jagiellonian University in Kraków, Kraków, Poland
- Bhaskar K Somani
- International Training and Research in Uro-oncology and Endourology (iTRUE) Group, Manipal, India

25
Cao H, Pu S, Tan W, Tong J. Breast mass detection in digital mammography based on anchor-free architecture. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 205:106033. [PMID: 33845408 DOI: 10.1016/j.cmpb.2021.106033] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2019] [Accepted: 02/27/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVE Accurate detection of breast masses in mammography images is critical to diagnosing early breast cancer, which can greatly improve patients' survival rate. However, it remains a big challenge due to the heterogeneity of breast masses and the complexity of their surrounding environment; developing a robust breast mass detection framework for practical clinical applications is therefore a topic researchers need to continue to explore. METHODS To address these problems, we propose a one-stage object detection architecture, called Breast Mass Detection Network (BMassDNet), based on an anchor-free design and a feature pyramid, which adapts well to detecting breast masses of different sizes. We introduce a truncation normalization method and combine it with adaptive histogram equalization to enhance the contrast between the breast mass and the surrounding environment. Meanwhile, to solve the overfitting problem caused by small data size, we propose a natural-deformation data augmentation method and amend the training-data dynamic updating method based on data complexity to make effective use of the limited data. Finally, we use transfer learning to assist the training process and further improve the robustness of the model. RESULTS On the INbreast dataset, each image has an average of 0.495 false positives at a recall rate of 0.930; on the DDSM dataset, at 0.599 false positives per image, the recall rate reaches 0.943. CONCLUSIONS The experimental results on the INbreast and DDSM datasets show that the proposed BMassDNet obtains competitive detection performance relative to the current top-ranked methods.
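Truncation normalization as described, clipping to a percentile window before rescaling, can be sketched as below; the 5th/95th percentile cut-offs and the toy pixel values are assumptions for illustration, not the paper's exact settings:

```python
def truncation_normalize(img, lo_pct=0.05, hi_pct=0.95):
    """Clip intensities to a percentile window, then rescale to [0, 1],
    so a few extreme pixels no longer compress the useful range."""
    s = sorted(img)
    lo = s[int(lo_pct * (len(s) - 1))]
    hi = s[int(hi_pct * (len(s) - 1))]
    return [(min(max(p, lo), hi) - lo) / (hi - lo) for p in img]

# Toy pixel row with one bright outlier:
pixels = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 200]
norm = truncation_normalize(pixels)
```

After truncation, adaptive histogram equalization operates on a range dominated by breast tissue rather than by saturated pixels, which is how the combination boosts mass/background contrast.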
Collapse
Affiliation(s)
- Haichao Cao
- Hikvision Digital Technology Company Limited, Hangzhou 310051, China
- Shiliang Pu
- Hikvision Digital Technology Company Limited, Hangzhou 310051, China
- Wenming Tan
- Hikvision Digital Technology Company Limited, Hangzhou 310051, China
- Junyan Tong
- Hikvision Digital Technology Company Limited, Hangzhou 310051, China
26
Bagheri F, Tarokh MJ, Ziaratban M. Skin lesion segmentation from dermoscopic images by using Mask R-CNN, Retina-Deeplab, and graph-based methods. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102533] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
27
Hirra I, Ahmad M, Hussain A, Ashraf MU, Saeed IA, Qadri SF, Alghamdi AM, Alfakeeh AS. Breast Cancer Classification From Histopathological Images Using Patch-Based Deep Learning Modeling. IEEE ACCESS 2021; 9:24273-24287. [DOI: 10.1109/access.2021.3056516] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/29/2023]
28
Al-Antari MA, Han SM, Kim TS. Evaluation of deep learning detection and classification towards computer-aided diagnosis of breast lesions in digital X-ray mammograms. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 196:105584. [PMID: 32554139 DOI: 10.1016/j.cmpb.2020.105584] [Citation(s) in RCA: 50] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/01/2020] [Accepted: 05/28/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE Deep learning detection and classification from medical imagery are key components of computer-aided diagnosis (CAD) systems, efficiently supporting physicians toward an accurate diagnosis of breast lesions. METHODS In this study, an integrated CAD system combining deep learning detection and classification is proposed, aiming to improve the diagnostic performance for breast lesions. First, a deep learning YOLO detector is adopted and evaluated for breast lesion detection from entire mammograms. Then, three deep learning classifiers, namely a regular feedforward CNN, ResNet-50, and InceptionResNet-V2, are modified and evaluated for breast lesion classification. The proposed deep learning system is evaluated over 5-fold cross-validation tests using two different and widely used databases of digital X-ray mammograms: DDSM and INbreast. RESULTS The evaluation results for breast lesion detection show that the YOLO detector achieves overall detection accuracies of 99.17% and 97.27% and F1-scores of 99.28% and 98.02% for the DDSM and INbreast datasets, respectively. Moreover, the YOLO detector processes 71 frames per second (FPS) at test time on both datasets. Using the detected breast lesions, the CNN, ResNet-50, and InceptionResNet-V2 classification models achieve promising average overall accuracies of 94.50%, 95.83%, and 97.50%, respectively, for the DDSM dataset and 88.74%, 92.55%, and 95.32%, respectively, for the INbreast dataset. CONCLUSION The capability of the YOLO detector boosted the classification models to a promising breast lesion diagnostic performance. Such results should help in developing a feasible CAD system for practical breast cancer diagnosis.
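The detect-then-classify pipeline this abstract describes feeds each YOLO-detected region to a separate classifier. The cropping glue between the two stages might look like the following sketch; the box format, padding value, and function name are illustrative assumptions, not the authors' code:

```python
import numpy as np

def crop_detections(image, boxes, pad=8):
    # Crop each detected box (x0, y0, x1, y1) from the image, with a small
    # context padding, so the patch can be passed to a downstream classifier.
    h, w = image.shape[:2]
    crops = []
    for x0, y0, x1, y1 in boxes:
        x0, y0 = max(x0 - pad, 0), max(y0 - pad, 0)
        x1, y1 = min(x1 + pad, w), min(y1 + pad, h)
        crops.append(image[y0:y1, x0:x1])
    return crops

mammogram = np.zeros((256, 256))                    # stand-in full mammogram
boxes = [(40, 50, 90, 110), (150, 160, 200, 220)]   # hypothetical detector output
patches = crop_detections(mammogram, boxes)
```

In practice each patch would then be resized to the classifier's fixed input size (e.g. 224x224 for ResNet-50) before inference.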
Affiliation(s)
- Mugahed A Al-Antari
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, 1732, Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea; Department of Biomedical Engineering, Sana'a Community College, Sana'a, Republic of Yemen
- Seung-Moo Han
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, 1732, Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea
- Tae-Seong Kim
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, 1732, Deogyeong-daero, Giheung-gu, Yongin-si, Gyeonggi-do 17104, Republic of Korea
29
Koyuncu H, Barstuğan M, Öziç MÜ. A comprehensive study of brain tumour discrimination using phase combinations, feature rankings, and hybridised classifiers. Med Biol Eng Comput 2020; 58:2971-2987. [PMID: 33006703 DOI: 10.1007/s11517-020-02273-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2019] [Accepted: 09/17/2020] [Indexed: 10/23/2022]
Abstract
The binary categorisation of brain tumours is challenging owing to their complexity, which arises from the diversity of shape, size, and intensity features among tumours of the same type. Accordingly, framework designs should be optimised for two stages: feature analysis and classification. Owing to the difficulty of the problem, few studies consider the binary classification of three-dimensional (3D) brain tumours. In this paper, the discrimination of high-grade glioma (HGG) and low-grade glioma (LGG) is accomplished by designing various frameworks based on 3D magnetic resonance imaging (3D MRI) data, integrating diverse phase combinations, feature-ranking approaches, and hybrid classifiers. Feature analysis uses first-order statistics (FOS), examining different phase combinations in addition to the single phases (T1c, FLAIR, T1, and T2) and considering five feature-ranking approaches (Bhattacharyya, entropy, ROC, t-test, and Wilcoxon) to determine the most appropriate input to the classifier. Hybrid classifiers based on neural networks (NN) are chosen for their robustness and strong record in medical pattern classification. In this study, state-of-the-art optimisation methods are used to form the hybrid classifiers: dynamic weight particle swarm optimisation (DW-PSO), chaotic dynamic weight particle swarm optimisation (CDW-PSO), and Gauss-map-based chaotic particle swarm optimisation (GM-CPSO). The resulting frameworks, DW-PSO-NN, CDW-PSO-NN, and GM-CPSO-NN, are evaluated on the BraTS 2017 challenge dataset comprising 210 HGG and 75 LGG samples, using 2-fold cross-validation and seven metrics (accuracy, AUC, sensitivity, specificity, g-mean, precision, f-measure).
In the experiments, the most effective framework uses FOS, data from three phase combinations, the Wilcoxon feature-ranking approach, and the GM-CPSO-NN method. It achieved remarkable scores of 90.18% (accuracy), 85.62% (AUC), 95.24% (sensitivity), 76% (specificity), 85.08% (g-mean), 91.74% (precision), and 93.46% (f-measure) for HGG/LGG discrimination on 3D brain MRI data.
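The feature pipeline the abstract describes, first-order statistics per MRI phase followed by a ranking step to pick classifier inputs, can be sketched as follows. The paper evaluates five ranking criteria (Bhattacharyya, entropy, ROC, t-test, Wilcoxon); this sketch stands in a single t-test-style separation score, and all names, feature choices, and data are illustrative:

```python
import numpy as np

def first_order_stats(volume, n_bins=32):
    # First-order statistics (FOS) of an intensity volume:
    # mean, variance, skewness, and histogram entropy.
    v = volume.ravel().astype(float)
    hist = np.histogram(v, bins=n_bins)[0].astype(float)
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    mean, std = v.mean(), v.std()
    skew = np.mean(((v - mean) / (std + 1e-8)) ** 3)
    return np.array([mean, std ** 2, skew, entropy])

def rank_features(X_pos, X_neg):
    # Rank feature columns by a two-sample t-style separation score
    # (larger score = better class separation).
    diff = np.abs(X_pos.mean(0) - X_neg.mean(0))
    pooled = np.sqrt(X_pos.var(0) / len(X_pos) + X_neg.var(0) / len(X_neg)) + 1e-8
    scores = diff / pooled
    return np.argsort(scores)[::-1], scores

rng = np.random.default_rng(1)
# Synthetic stand-ins for HGG and LGG 3D MRI phase volumes
hgg = np.stack([first_order_stats(rng.normal(1.0, 0.3, (8, 8, 8))) for _ in range(20)])
lgg = np.stack([first_order_stats(rng.normal(0.7, 0.3, (8, 8, 8))) for _ in range(20)])
order, scores = rank_features(hgg, lgg)
```

The top-ranked columns of `order` would then form the input vector to the hybrid PSO-trained neural network classifier.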
Affiliation(s)
- Hasan Koyuncu
- Faculty of Engineering and Natural Sciences, Electrical & Electronics Engineering Department, Konya Technical University, 42250, Konya, Turkey
- Mücahid Barstuğan
- Faculty of Engineering and Natural Sciences, Electrical & Electronics Engineering Department, Konya Technical University, 42250, Konya, Turkey
- Muhammet Üsame Öziç
- Faculty of Engineering and Architecture, Biomedical Engineering Department, Necmettin Erbakan University, Konya, Turkey
30
Classification of Dermoscopy Skin Lesion Color-Images Using Fractal-Deep Learning Features. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10175954] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
The detection of skin diseases is becoming a priority worldwide owing to the increasing incidence of skin cancer. Computer-aided diagnosis is a helpful tool for dermatologists in detecting these kinds of illnesses. This work proposes a computer-aided diagnosis based on 1D fractal signatures of texture-based features combined with deep-learning features obtained via transfer learning from DenseNet-201. The proposal works with three 1D fractal signatures built per color image. The energy, variance, and entropy of the fractal signatures are combined with 100 features extracted from DenseNet-201 to construct the feature vector. Because the classes in skin lesion image datasets are commonly imbalanced, we use an ensemble of classifiers: K-nearest neighbors and two types of support vector machines. The computer-aided diagnosis output is determined by a linear plurality vote. In this work, we obtained an average accuracy of 97.35%, an average precision of 91.61%, an average sensitivity of 66.45%, and an average specificity of 97.85% on the eight-class classification task of the International Skin Imaging Collaboration (ISIC) archive-2019.
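The ensemble step described above, combining a K-nearest-neighbors classifier and two SVM variants by a linear plurality vote, reduces to taking the most common predicted label per sample. A minimal sketch, where the per-classifier outputs are invented for illustration:

```python
from collections import Counter

def plurality_vote(predictions):
    # Combine per-classifier labels for one sample: the most common label wins.
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical per-sample outputs from KNN and two SVM variants
knn_pred  = ["melanoma", "nevus", "nevus"]
svm1_pred = ["melanoma", "melanoma", "nevus"]
svm2_pred = ["nevus", "melanoma", "nevus"]

final = [plurality_vote(trio) for trio in zip(knn_pred, svm1_pred, svm2_pred)]
# final → ['melanoma', 'melanoma', 'nevus']
```

With three voters and two classes there is always a strict majority; with more classes a tie-breaking rule (e.g. highest summed confidence) would be needed.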
31
Al-Masni MA, Kim DH, Kim TS. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 190:105351. [PMID: 32028084 DOI: 10.1016/j.cmpb.2020.105351] [Citation(s) in RCA: 79] [Impact Index Per Article: 19.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 01/03/2020] [Accepted: 01/19/2020] [Indexed: 05/06/2023]
Abstract
BACKGROUND AND OBJECTIVE Computer automated diagnosis of various skin lesions through medical dermoscopy images remains a challenging task. METHODS In this work, we propose an integrated diagnostic framework that combines a skin lesion boundary segmentation stage and a multiple skin lesions classification stage. Firstly, we segment the skin lesion boundaries from the entire dermoscopy images using deep learning full resolution convolutional network (FrCN). Then, a convolutional neural network classifier (i.e., Inception-v3, ResNet-50, Inception-ResNet-v2, and DenseNet-201) is applied on the segmented skin lesions for classification. The former stage is a critical prerequisite step for skin lesion diagnosis since it extracts prominent features of various types of skin lesions. A promising classifier is selected by testing well-established classification convolutional neural networks. The proposed integrated deep learning model has been evaluated using three independent datasets (i.e., International Skin Imaging Collaboration (ISIC) 2016, 2017, and 2018, which contain two, three, and seven types of skin lesions, respectively) with proper balancing, segmentation, and augmentation. RESULTS In the integrated diagnostic system, segmented lesions improve the classification performance of Inception-ResNet-v2 by 2.72% and 4.71% in terms of the F1-score for benign and malignant cases of the ISIC 2016 test dataset, respectively. The classifiers of Inception-v3, ResNet-50, Inception-ResNet-v2, and DenseNet-201 exhibit their capability with overall weighted prediction accuracies of 77.04%, 79.95%, 81.79%, and 81.27% for two classes of ISIC 2016, 81.29%, 81.57%, 81.34%, and 73.44% for three classes of ISIC 2017, and 88.05%, 89.28%, 87.74%, and 88.70% for seven classes of ISIC 2018, respectively, demonstrating the superior performance of ResNet-50. 
CONCLUSIONS The proposed integrated diagnostic networks could support dermatologists in further improving skin cancer diagnosis.
Affiliation(s)
- Mohammed A Al-Masni
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, Republic of Korea
- Tae-Seong Kim
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea
32
Automated mammogram breast cancer detection using the optimized combination of convolutional and recurrent neural network. EVOLUTIONARY INTELLIGENCE 2020. [DOI: 10.1007/s12065-020-00403-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
33
Al-Antari MA, Al-Masni MA, Kim TS. Deep Learning Computer-Aided Diagnosis for Breast Lesion in Digital Mammogram. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2020; 1213:59-72. [PMID: 32030663 DOI: 10.1007/978-3-030-33128-3_4] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
For computer-aided diagnosis (CAD), detection, segmentation, and classification from medical imagery are three key components for efficiently assisting physicians toward an accurate diagnosis. In this chapter, a completely integrated deep learning-based CAD system is presented to diagnose breast lesions from digital X-ray mammograms, covering detection, segmentation, and classification. To automatically detect breast lesions from mammograms, a regional deep learning approach called You-Only-Look-Once (YOLO) is used. To segment breast lesions, a full resolution convolutional network (FrCN), a novel deep segmentation model, is implemented and used. Finally, three deep learning models, a regular feedforward CNN, ResNet-50, and InceptionResNet-V2, are separately adopted to classify the detected and segmented breast lesions as either benign or malignant. To evaluate the integrated CAD system for detection, segmentation, and classification, the publicly available and annotated INbreast database is used over fivefold cross-validation tests. The YOLO-based detection achieved an accuracy of 97.27%, a Matthews correlation coefficient (MCC) of 93.93%, and an F1-score of 98.02%. The breast lesion segmentation via FrCN achieved an overall accuracy of 92.97%, an MCC of 85.93%, a Dice score (F1-score) of 92.69%, and a Jaccard similarity coefficient of 86.37%. The detected and segmented breast lesions were classified via CNN, ResNet-50, and InceptionResNet-V2, achieving average overall accuracies of 88.74%, 92.56%, and 95.32%, respectively. The evaluation results across all stages of detection, segmentation, and classification show that the integrated CAD system outperforms the latest conventional deep learning methodologies.
We conclude that our CAD system could be used to assist radiologists over all stages of detection, segmentation, and classification for diagnosis of breast lesions.
Affiliation(s)
- Mugahed A Al-Antari
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea; Department of Biomedical Engineering, Sana'a Community College, Sana'a, Republic of Yemen
- Mohammed A Al-Masni
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea
- Tae-Seong Kim
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea
34
LLTO: Towards efficient lesion localization based on template occlusion strategy in intelligent diagnosis. Pattern Recognit Lett 2018. [DOI: 10.1016/j.patrec.2018.10.029] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
35
Segmentation of lung fields from chest radiographs-a radiomic feature-based approach. Biomed Eng Lett 2018; 9:109-117. [PMID: 30956884 DOI: 10.1007/s13534-018-0086-z] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2018] [Revised: 07/31/2018] [Accepted: 09/16/2018] [Indexed: 10/28/2022] Open
Abstract
Precisely segmented lung fields restrict the region of interest within which radiological patterns are searched, and segmentation is thus an indispensable prerequisite step in any chest radiographic CADx system. Recently, a number of deep learning-based approaches have been proposed to implement this step. However, deep learning has its own limitations and cannot always be used in resource-constrained settings: medical systems generally have limited RAM, computational power, and storage, and no GPUs, so they are not always suited to running deep learning-based models. Shallow learning-based models with appropriately selected features give comparable performance with modest resources. The present paper therefore proposes a shallow learning-based method that uses 40 radiomic features to segment lung fields from chest radiographs. A distance regularized level set evolution (DRLSE) method, along with other post-processing steps, refines its output. The proposed method is trained and tested on the publicly available JSRT dataset. The testing results indicate that its performance is comparable to state-of-the-art deep learning-based lung field segmentation (LFS) methods and better than other LFS methods.
36
Al-antari MA, Al-masni MA, Choi MT, Han SM, Kim TS. A fully integrated computer-aided diagnosis system for digital X-ray mammograms via deep learning detection, segmentation, and classification. Int J Med Inform 2018; 117:44-54. [DOI: 10.1016/j.ijmedinf.2018.06.003] [Citation(s) in RCA: 184] [Impact Index Per Article: 30.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2018] [Revised: 05/22/2018] [Accepted: 06/06/2018] [Indexed: 11/28/2022]
37
Hussain D, Al-Antari MA, Al-Masni MA, Han SM, Kim TS. Femur segmentation in DXA imaging using a machine learning decision tree. JOURNAL OF X-RAY SCIENCE AND TECHNOLOGY 2018; 26:727-746. [PMID: 30056442 DOI: 10.3233/xst-180399] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
BACKGROUND Accurate measurement of bone mineral density (BMD) in dual-energy X-ray absorptiometry (DXA) is essential for proper diagnosis of osteoporosis. Calculating BMD requires precise bone segmentation and subtraction of soft tissue absorption. Femur segmentation remains a challenge, as many existing methods fail to correctly distinguish femur from soft tissue. Reasons for this failure include low contrast and noise in DXA images, bone shape variability, and inconsistent X-ray beam penetration and attenuation, which cause shadowing effects and person-to-person variation. OBJECTIVE To present a new method, the Pixel Label Decision Tree (PLDT), and test whether it achieves more accurate femur segmentation in DXA imaging. METHODS PLDT mainly involves feature extraction and selection. Unlike photographic images, X-ray images capture features both on the surface and inside an object. To reveal hidden patterns in DXA images, PLDT generates seven new feature maps from the existing high-energy (HE) and low-energy (LE) X-ray features and determines the best feature set for the model. The performance of PLDT in femur segmentation is compared with that of three widely used medical image segmentation algorithms: Global Threshold (GT), Region Growing Threshold (RGT), and artificial neural networks (ANN). RESULTS PLDT achieved higher femur segmentation accuracy in DXA imaging (91.4%) than GT (68.4%), RGT (76%), or ANN (84.4%). CONCLUSIONS The study demonstrates that PLDT outperformed the other conventional techniques in segmenting DXA images. Improved segmentation should support accurate computation of BMD, which in turn improves the clinical diagnosis of osteoporosis.
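The feature-generation idea in this abstract, deriving new per-pixel maps from the high-energy (HE) and low-energy (LE) scans before pixel-wise classification, might look like the sketch below. The abstract does not specify which seven maps PLDT actually uses, so these particular combinations are illustrative guesses:

```python
import numpy as np

def dxa_feature_maps(he, le, eps=1e-6):
    # Derive seven per-pixel feature maps from high-energy (HE) and
    # low-energy (LE) DXA scans; each map keeps the image shape so the
    # stacked features can feed a pixel-label classifier.
    return {
        "he": he,
        "le": le,
        "diff": he - le,
        "ratio": he / (le + eps),
        "log_ratio": np.log((he + eps) / (le + eps)),
        "sum": he + le,
        "norm_diff": (he - le) / (he + le + eps),
    }

rng = np.random.default_rng(2)
he = rng.uniform(0.1, 1.0, (32, 32))   # stand-in HE scan
le = rng.uniform(0.1, 1.0, (32, 32))   # stand-in LE scan
feats = dxa_feature_maps(he, le)
```

Each pixel then becomes a 7-dimensional sample, which is the kind of input a per-pixel decision tree classifier would label as femur or soft tissue.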
Affiliation(s)
- Dildar Hussain
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea
- Mugahed A Al-Antari
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea
- Mohammed A Al-Masni
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea
- Seung-Moo Han
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea
- Tae-Seong Kim
- Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea