1. Hachache R, Yahyaouy A, Riffi J, Tairi H, Abibou S, Adoui ME, Benjelloun M. Advancing personalized oncology: a systematic review on the integration of artificial intelligence in monitoring neoadjuvant treatment for breast cancer patients. BMC Cancer 2024; 24:1300. [PMID: 39434042] [PMCID: PMC11495077] [DOI: 10.1186/s12885-024-13049-0]
Abstract
PURPOSE Despite suffering from the same disease, each patient exhibits a distinct microbiological profile and variable reactivity to prescribed treatments. Most doctors typically use a standardized treatment approach for all patients suffering from a specific disease. Consequently, the challenge lies in the effectiveness of this standardized treatment and in adapting it to each individual patient. Personalized medicine is an emerging field in which doctors use diagnostic tests to identify the most effective medical treatments for each patient. Prognosis, disease monitoring, and treatment planning rely on manual, error-prone methods. Artificial intelligence (AI) uses predictive techniques capable of automating prognostic and monitoring processes, thus reducing the error rate associated with conventional methods. METHODS This paper conducts an analysis of current literature, encompassing the period from January 2015 to 2023, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). RESULTS In assessing 25 pertinent studies concerning prediction of neoadjuvant treatment (NAT) response in breast cancer (BC) patients, the studies explored various imaging modalities (magnetic resonance imaging, ultrasound, etc.), evaluating results based on accuracy, sensitivity, and area under the curve. Additionally, the technologies employed, such as machine learning (ML), deep learning (DL), statistics, and hybrid models, were scrutinized. The presentation of datasets used for predicting pathological complete response (pCR) was also considered. CONCLUSION This paper seeks to unveil crucial insights into the application of AI techniques in personalized oncology, particularly in the monitoring and prediction of responses to NAT for BC patients. Finally, the authors suggest avenues for future research into AI-based monitoring systems.
Affiliation(s)
- Rachida Hachache
- Department of Computer Sciences, LISAC Laboratory, Sidi Mohammed Ben Abdellah University, Fez, Morocco.
- Ali Yahyaouy
- Department of Computer Sciences, LISAC Laboratory, Sidi Mohammed Ben Abdellah University, Fez, Morocco
- USPN, La Maison Des Sciences Numériques, Paris, France
- Jamal Riffi
- Department of Computer Sciences, LISAC Laboratory, Sidi Mohammed Ben Abdellah University, Fez, Morocco
- Hamid Tairi
- Department of Computer Sciences, LISAC Laboratory, Sidi Mohammed Ben Abdellah University, Fez, Morocco
- Soukayna Abibou
- Department of Computer Sciences, LISAC Laboratory, Sidi Mohammed Ben Abdellah University, Fez, Morocco
- Mohammed El Adoui
- Computer Science Unit, Faculty of Engineering, University of Mons, Place du Parc, 20, Mons, 7000, Belgium
- Mohammed Benjelloun
- Computer Science Unit, Faculty of Engineering, University of Mons, Place du Parc, 20, Mons, 7000, Belgium
2. Yu ZH, Hong YT, Chou CP. Enhancing Breast Cancer Diagnosis: A Nomogram Model Integrating AI Ultrasound and Clinical Factors. Ultrasound Med Biol 2024; 50:1372-1380. [PMID: 38897841] [DOI: 10.1016/j.ultrasmedbio.2024.05.012]
Abstract
PURPOSE To develop a novel nomogram incorporating artificial intelligence (AI) and clinical features for enhanced ultrasound prediction of benign and malignant breast masses. MATERIALS AND METHODS This study analyzed 340 breast masses identified through ultrasound in 308 patients. The masses were divided into training (n = 260) and validation (n = 80) groups. The AI-based analysis employed the Samsung Ultrasound AI system (S-detect). Univariate and multivariate analyses were conducted to construct nomograms using logistic regression. The AI-Nomogram was based solely on AI results, while the ClinAI-Nomogram incorporated additional clinical factors. Both nomograms underwent internal validation with 1000 bootstrap resamples and external validation using the independent validation group. Performance was evaluated by analyzing the area under the receiver operating characteristic (ROC) curve (AUC) and calibration curves. RESULTS The ClinAI-Nomogram, which incorporates patient age, AI-based mass size, and AI-based diagnosis, outperformed the AI-Nomogram in differentiating benign from malignant breast masses, with significantly higher AUC scores in both the training (0.873, 95% CI: 0.830-0.917 vs. 0.792, 95% CI: 0.748-0.836; p = 0.016) and validation phases (0.847, 95% CI: 0.763-0.932 vs. 0.770, 95% CI: 0.709-0.833; p < 0.001). Calibration curves further revealed excellent agreement between the ClinAI-Nomogram's predicted probabilities and actual observed risks of malignancy. CONCLUSION The ClinAI-Nomogram, combining AI results with clinical data, significantly enhanced the differentiation of benign and malignant breast masses in clinical AI-facilitated ultrasound examinations.
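The comparison at the heart of this study, two risk scores ranked by area under the ROC curve, can be sketched in a few lines. The cohort, the age normalization, and the 50/50 weighting below are illustrative assumptions, not the paper's fitted logistic model:

```python
def roc_auc(scores, labels):
    """Rank-based AUC: the probability that a randomly chosen positive
    case scores higher than a randomly chosen negative case (ties = 1/2).
    Equivalent to the normalized Mann-Whitney U statistic."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical cohort: (age, AI malignancy score, malignant label).
cohort = [(29, 0.1, 0), (34, 0.2, 0), (41, 0.7, 0), (45, 0.4, 0),
          (52, 0.6, 1), (58, 0.3, 1), (63, 0.8, 1), (70, 0.9, 1)]
labels = [y for _, _, y in cohort]
ai_only = [s for _, s, _ in cohort]                        # AI-Nomogram analogue
clinai = [0.5 * (a / 80) + 0.5 * s for a, s, _ in cohort]  # AI + clinical factor
print(roc_auc(ai_only, labels), roc_auc(clinai, labels))
```

On this toy cohort the combined score separates the classes better than the AI output alone, which is the pattern the ClinAI-Nomogram exhibited on the real data.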
Affiliation(s)
- Zi-Han Yu
- Department of Radiology, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan; Department of Radiology, Jiannren Hospital, Kaohsiung, Taiwan
- Yu-Ting Hong
- Department of Radiology, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan
- Chen-Pin Chou
- Department of Radiology, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan; Department of Medical Laboratory Science and Biotechnology, Fooyin University, Kaohsiung, Taiwan; Department of Pharmacy, College of Pharmacy, Tajen University, Pingtung, Taiwan
3. Jiménez-Gaona Y, Álvarez MJR, Castillo-Malla D, García-Jaen S, Carrión-Figueroa D, Corral-Domínguez P, Lakshminarayanan V. BraNet: a mobil application for breast image classification based on deep learning algorithms. Med Biol Eng Comput 2024; 62:2737-2756. [PMID: 38693328] [PMCID: PMC11330402] [DOI: 10.1007/s11517-024-03084-1]
Abstract
Mobile health apps are widely used for breast cancer detection using artificial intelligence algorithms, providing radiologists with second opinions and reducing false diagnoses. This study aims to develop an open-source mobile app named "BraNet" for 2D breast imaging segmentation and classification using deep learning algorithms. During the offline phase, an SNGAN model was trained for synthetic image generation, and these images were then used to pre-train the SAM and ResNet18 segmentation and classification models. During the online phase, the BraNet app was developed using the React Native framework, offering a modular deep-learning pipeline for digital mammography (DM) and ultrasound (US) breast imaging classification. The application operates on a client-server architecture and was implemented in Python for iOS and Android devices. Two diagnostic radiologists were then given a reading test of 290 original ROI images to assign the perceived breast tissue type, and their agreement was assessed using the kappa coefficient. The BraNet mobile app exhibited its highest accuracy on benign and malignant US image classification (94.7%/93.6%) compared to DM during training I (80.9%/76.9%) and training II (73.7%/72.3%). This contrasts with the radiologists' accuracy, which was 29% for DM and 70% for US for both readers; they achieved higher accuracy in US ROI classification than in DM images. The kappa values indicate fair agreement (0.3) for DM images and moderate agreement (0.4) for US images for both readers. This suggests that the amount of data is not the only essential factor in training deep learning algorithms; it is also vital to consider the variety of abnormalities, especially in mammography data, where several BI-RADS categories are present (microcalcifications, nodules, masses, asymmetry, and dense breasts) and can affect the accuracy of the app's model.
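Reader agreement via the kappa coefficient, as used for the two radiologists above, can be computed as follows; the reader labels here are invented for illustration:

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance),
    where chance agreement comes from each rater's marginal label frequencies."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n           # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)         # expected by chance
             for c in set(r1) | set(r2))
    return (po - pe) / (1 - pe)

# Two hypothetical readers labelling six ROIs as benign (b) or malignant (m).
reader1 = ["b", "b", "b", "m", "m", "m"]
reader2 = ["b", "b", "m", "m", "m", "b"]
print(round(cohens_kappa(reader1, reader2), 3))  # 0.333: "fair" agreement
```

Values around 0.2-0.4 are conventionally read as fair agreement and 0.4-0.6 as moderate, which is how the abstract interprets its 0.3 (DM) and 0.4 (US) figures.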
Affiliation(s)
- Yuliana Jiménez-Gaona
- Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n CP1101608, Loja, Ecuador.
- Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain.
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON, N2L 3G1, Canada.
- María José Rodríguez Álvarez
- Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain
- Darwin Castillo-Malla
- Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n CP1101608, Loja, Ecuador
- Instituto de Instrumentación para la Imagen Molecular I3M, Universitat Politécnica de Valencia, 46022, Valencia, Spain
- Theoretical and Experimental Epistemology Lab, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
- Santiago García-Jaen
- Departamento de Química y Ciencias Exactas, Universidad Técnica Particular de Loja, San Cayetano Alto s/n CP1101608, Loja, Ecuador
- Patricio Corral-Domínguez
- Corporación Médica Monte Sinaí-CIPAM (Centro Integral de Patología Mamaria) Cuenca-Ecuador, Facultad de Ciencias Médicas, Universidad de Cuenca, Cuenca, 010203, Ecuador
- Vasudevan Lakshminarayanan
- Department of Systems Design Engineering, Physics, and Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, N2L 3G1, Canada
4. Cui Y, Li Y, Miedema JR, Edmiston SN, Farag SW, Marron JS, Thomas NE. Region of Interest Detection in Melanocytic Skin Tumor Whole Slide Images-Nevus and Melanoma. Cancers (Basel) 2024; 16:2616. [PMID: 39123344] [PMCID: PMC11311050] [DOI: 10.3390/cancers16152616]
Abstract
Automated region of interest detection in histopathological image analysis is a challenging and important topic with tremendous potential impact on clinical practice. The deep learning methods used in computational pathology may help us to reduce costs and increase the speed and accuracy of cancer diagnosis. We started with the UNC Melanocytic Tumor Dataset cohort, which contains 160 hematoxylin and eosin whole slide images of primary melanoma (86) and nevi (74). We randomly assigned 80% (134 slides) as a training set and built an in-house deep learning method to allow for slide-level classification of nevi and melanoma. The proposed method performed well on the remaining 20% (26 slides) used as a test set: the accuracy of the slide classification task was 92.3%, and our model also performed well in predicting the regions of interest annotated by the pathologists, showing excellent performance on melanocytic skin tumors. Although we ran the experiments on a skin tumor dataset, our work could also be extended to other medical image detection problems to benefit the clinical evaluation and diagnosis of different tumors.
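Slide-level calls in whole-slide pipelines like this are typically aggregated from patch-level predictions. The abstract does not state the aggregation rule, so the majority-vote heuristic below is only one plausible sketch, not the paper's method:

```python
def classify_slide(patch_probs, threshold=0.5):
    """Call a whole slide 'melanoma' when more than half of its patches
    receive a malignancy probability at or above the threshold;
    otherwise call it 'nevus'."""
    votes = sum(p >= threshold for p in patch_probs)
    return "melanoma" if votes > len(patch_probs) / 2 else "nevus"

print(classify_slide([0.9, 0.8, 0.7, 0.2]))  # most patches look malignant
print(classify_slide([0.1, 0.2, 0.3, 0.9]))  # one isolated suspicious patch
```

A max-pooling rule (any patch above threshold flags the slide) is the usual alternative when lesions occupy a small fraction of the slide.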
Affiliation(s)
- Yi Cui
- Department of Economics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Yao Li
- Department of Statistics & Operations Research, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Jayson R. Miedema
- Department of Pathology and Laboratory Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Sharon N. Edmiston
- Lineberger Comprehensive Cancer Center, UNC School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Sherif W. Farag
- Eshelman School of Pharmacy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- James Stephen Marron
- Department of Statistics & Operations Research, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Lineberger Comprehensive Cancer Center, UNC School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Biostatistics, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Nancy E. Thomas
- Lineberger Comprehensive Cancer Center, UNC School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Dermatology, UNC School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
5. Brima Y, Atemkeng M. Saliency-driven explainable deep learning in medical imaging: bridging visual explainability and statistical quantitative analysis. BioData Min 2024; 17:18. [PMID: 38909228] [PMCID: PMC11193223] [DOI: 10.1186/s13040-024-00370-4]
Abstract
Deep learning shows great promise for medical image analysis but often lacks explainability, hindering its adoption in healthcare. Attribution techniques that explain model reasoning can potentially increase trust in deep learning among clinical stakeholders. In the literature, much of the research on attribution in medical imaging focuses on visual inspection rather than statistical quantitative analysis. In this paper, we propose an image-based saliency framework to enhance the explainability of deep learning models in medical image analysis. We use adaptive path-based gradient integration, gradient-free techniques, and class activation mapping along with its derivatives to attribute predictions made by recent deep convolutional neural network models on brain tumor MRI and COVID-19 chest X-ray datasets. The proposed framework integrates qualitative and statistical quantitative assessments, employing Accuracy Information Curves (AICs) and Softmax Information Curves (SICs) to measure the effectiveness of saliency methods in retaining critical image information and their correlation with model predictions. Visual inspections indicate that methods such as ScoreCAM, XRAI, GradCAM, and GradCAM++ consistently produce focused and clinically interpretable attribution maps. These methods highlighted possible biomarkers, exposed model biases, and offered insights into the links between input features and predictions, demonstrating their ability to elucidate model reasoning on these datasets. Empirical evaluations reveal that ScoreCAM and XRAI are particularly effective in retaining relevant image regions, as reflected in their higher AUC values. However, SICs highlight variability, with instances of random saliency masks outperforming established methods, emphasizing the need to combine visual and empirical metrics for a comprehensive evaluation. The results underscore the importance of selecting appropriate saliency methods for specific medical imaging tasks and suggest that combining qualitative and quantitative approaches can enhance the transparency, trustworthiness, and clinical adoption of deep learning models in healthcare. This study advances model explainability to increase trust in deep learning among healthcare stakeholders by revealing the rationale behind predictions. Future research should refine empirical metrics for stability and reliability, include more diverse imaging modalities, and focus on improving model explainability to support clinical decision-making.
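The simplest gradient-free attribution in the spirit of the methods compared above is occlusion: score each region by how much the model's output drops when it is masked. The two-pixel toy "model" below is an assumption purely for illustration; ScoreCAM, XRAI, and the CAM family are far more sophisticated, but the reading of the resulting map is the same:

```python
def occlusion_map(score_fn, img):
    """Per-pixel saliency: the drop in score_fn's output when that pixel
    is zeroed. Large drops mark pixels the prediction depends on."""
    base = score_fn(img)
    sal = []
    for i, row in enumerate(img):
        sal_row = []
        for j in range(len(row)):
            occluded = [r[:] for r in img]  # copy, then mask one pixel
            occluded[i][j] = 0.0
            sal_row.append(base - score_fn(occluded))
        sal.append(sal_row)
    return sal

# Toy "model": depends strongly on pixel (1,1), weakly on (0,0).
score = lambda im: 1.0 * im[1][1] + 0.1 * im[0][0]
saliency = occlusion_map(score, [[1.0, 1.0], [1.0, 1.0]])
print(saliency)  # only (1,1) and, faintly, (0,0) light up
```

The AIC/SIC evaluations in the paper then ask how much of the model's performance survives when only the highest-saliency regions are kept.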
Affiliation(s)
- Yusuf Brima
- Computer Vision, Institute of Cognitive Science, Osnabrück University, Osnabrueck, D-49090, Lower Saxony, Germany.
- Marcellin Atemkeng
- Department of Mathematics, Rhodes University, Grahamstown, 6140, Eastern Cape, South Africa.
6. Yan T, Chen G, Zhang H, Wang G, Yan Z, Li Y, Xu S, Zhou Q, Shi R, Tian Z, Wang B. Convolutional neural network with parallel convolution scale attention module and ResCBAM for breast histology image classification. Heliyon 2024; 10:e30889. [PMID: 38770292] [PMCID: PMC11103517] [DOI: 10.1016/j.heliyon.2024.e30889]
Abstract
Breast cancer is the most common cause of cancer morbidity and death in women worldwide. Compared with other cancers, early detection of breast cancer is more helpful for improving patient prognosis, and clinical care requires rapid and accurate diagnosis. The development of an automatic breast cancer detection system suitable for patient imaging is therefore of great significance for assisting clinical treatment. Accurate classification of pathological images plays a key role in computer-aided medical diagnosis and prognosis. However, in automatic recognition and classification methods for breast cancer pathological images, scale information, the loss of image information caused by insufficient feature fusion, and the enormous size of the model may lead to inaccurate or inefficient classification. To minimize these effects, we proposed a lightweight PCSAM-ResCBAM model based on a two-stage convolutional neural network. The model includes a Parallel Convolution Scale Attention Module network (PCSAM-Net) and a Residual Convolutional Block Attention Module network (ResCBAM-Net). The first-level convolutional network is built from a 4-layer PCSAM module to predict and classify patches extracted from images. To optimize the network's ability to represent global image features, we proposed a tiled feature fusion method to fuse patch features from the same image, together with a residual convolutional attention module; on this basis, the second-level convolutional network is constructed to classify whole images. We evaluated the performance of our proposed model on the ICIAR2018 and BreakHis datasets. Furthermore, through ablation studies, we found that scale attention and dilated convolution play an important role in improving model performance. Our proposed model outperforms existing state-of-the-art models on the 200× and 400× magnification datasets, with a maximum accuracy of 98.74%.
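The patch-to-image step described above requires fusing patch-level features into one image-level representation. The paper's tiled fusion preserves the spatial arrangement of patches; the element-wise mean below is a deliberately simplified stand-in that only shows the interface, with invented feature vectors:

```python
def fuse_patch_features(patch_feats):
    """Fuse patch-level feature vectors from one image into a single
    image-level vector by element-wise averaging (a simple baseline;
    tiled fusion would instead lay the patch features out spatially)."""
    n = len(patch_feats)
    dim = len(patch_feats[0])
    return [sum(f[k] for f in patch_feats) / n for k in range(dim)]

# Three hypothetical 4-dimensional patch features from the same image.
fused = fuse_patch_features([[1.0, 0.0, 2.0, 4.0],
                             [3.0, 0.0, 2.0, 0.0],
                             [2.0, 0.0, 2.0, 2.0]])
print(fused)  # [2.0, 0.0, 2.0, 2.0]
```

The fused vector is what the second-stage network consumes to produce the image-level prediction.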
Affiliation(s)
- Ting Yan
- Translational Medicine Research Center, Shanxi Medical University, Taiyuan, China
- Guohui Chen
- Translational Medicine Research Center, Shanxi Medical University, Taiyuan, China
- Huimin Zhang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Guolan Wang
- Computer Information Engineering Institute, Shanxi Technology and Business College, Taiyuan, China
- Zhenpeng Yan
- Translational Medicine Research Center, Shanxi Medical University, Taiyuan, China
- Ying Li
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Songrui Xu
- Translational Medicine Research Center, Shanxi Medical University, Taiyuan, China
- Qichao Zhou
- Translational Medicine Research Center, Shanxi Medical University, Taiyuan, China
- Ruyi Shi
- Department of Cell Biology and Genetics, Shanxi Medical University, Taiyuan, Shanxi, 030001, China
- Zhi Tian
- Second Clinical Medical College, Shanxi Medical University, 382 Wuyi Road, Taiyuan, Shanxi, 030001, China
- Department of Orthopedics, The Second Hospital of Shanxi Medical University, Shanxi Key Laboratory of Bone and Soft Tissue Injury Repair, 382 Wuyi Road, Taiyuan, Shanxi, 030001, China
- Bin Wang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
7. Nissar I, Alam S, Masood S, Kashif M. MOB-CBAM: A dual-channel attention-based deep learning generalizable model for breast cancer molecular subtypes prediction using mammograms. Comput Methods Programs Biomed 2024; 248:108121. [PMID: 38531147] [DOI: 10.1016/j.cmpb.2024.108121]
Abstract
BACKGROUND AND OBJECTIVE Deep learning models have emerged as a significant tool in generating efficient solutions for complex problems including cancer detection, as they can analyze large amounts of data with high efficiency and performance. Recent medical studies highlight the significance of molecular subtype detection in breast cancer, aiding the development of personalized treatment plans, as different subtypes of cancer respond better to different therapies. METHODS In this work, we propose MOB-CBAM, a novel lightweight dual-channel attention-based deep learning model that combines the backbone of the MobileNet-V3 architecture with a Convolutional Block Attention Module (CBAM) to make highly accurate and precise predictions about breast cancer. We used the CMMD mammogram dataset to evaluate the proposed model. Nine distinct data subsets were created from the original dataset to perform coarse- and fine-grained predictions, enabling the model to identify masses, calcifications, benign and malignant tumors, and molecular subtypes of cancer, including Luminal A, Luminal B, HER-2 Positive, and Triple Negative. The pipeline incorporates several image pre-processing techniques, including filtering, enhancement, and normalization, to enhance the model's generalization ability. RESULTS In coarse-grained classification (benign versus malignant tumors), the MOB-CBAM model produced exceptional results, with 99% accuracy, precision, recall, and F1-score values of 0.99, and an MCC of 0.98. In fine-grained classification, the MOB-CBAM model proved highly efficient at the mass (benign/malignant) and calcification (benign/malignant) classification tasks, with an impressive accuracy rate of 98%. We also cross-validated the efficiency of the proposed MOB-CBAM architecture on two further datasets, MIAS and CBIS-DDSM. On the MIAS dataset, an accuracy of 97% was achieved for classifying benign, malignant, and normal images, while on the CBIS-DDSM dataset, an accuracy of 98% was achieved for classifying benign versus malignant masses and calcifications. CONCLUSION This study presents the lightweight MOB-CBAM, a novel deep learning framework, to address breast cancer diagnosis and subtype prediction. The model's incorporation of the CBAM enhances the precision of predictions. The extensive evaluation on the CMMD dataset and cross-validation on other datasets affirm the model's efficacy.
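The reported figures (99% accuracy alongside an MCC of 0.98) are mutually consistent, as a quick check with a hypothetical 2x2 confusion matrix shows; the counts below are chosen to reproduce those numbers and are not from the paper:

```python
from math import sqrt

def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, F1, and Matthews correlation
    coefficient (MCC) from a binary confusion matrix."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    mcc = (tp * tn - fp * fn) / sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return acc, prec, rec, f1, mcc

# 200 hypothetical cases: one false positive, one false negative.
acc, prec, rec, f1, mcc = binary_metrics(tp=99, fp=1, fn=1, tn=99)
print(acc, prec, rec, f1, mcc)  # 0.99 accuracy, 0.98 MCC
```

Unlike accuracy, MCC stays informative on the imbalanced class mixes that fine-grained subtype prediction produces, which is presumably why the paper reports both.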
Affiliation(s)
- Iqra Nissar
- Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India.
- Shahzad Alam
- Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India
- Sarfaraz Masood
- Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India
- Mohammad Kashif
- Department of Computer Engineering, Jamia Millia Islamia (A Central University), New Delhi, 110025, India
8. O'Connor S, Vercell A, Wong D, Yorke J, Fallatah FA, Cave L, Anny Chen LY. The application and use of artificial intelligence in cancer nursing: A systematic review. Eur J Oncol Nurs 2024; 68:102510. [PMID: 38310664] [DOI: 10.1016/j.ejon.2024.102510]
Abstract
PURPOSE Artificial intelligence is being applied in oncology to improve patient and service outcomes. Yet, there is a limited understanding of how these advanced computational techniques are employed in cancer nursing to inform clinical practice. This review aimed to identify and synthesise evidence on artificial intelligence in cancer nursing. METHODS CINAHL, MEDLINE, PsycINFO, and PubMed were searched using key terms covering January 2010 to December 2022. Titles, abstracts, and then full texts were screened against eligibility criteria, resulting in twenty studies being included. Critical appraisal was undertaken, and relevant data were extracted and analysed. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. RESULTS Artificial intelligence was used in numerous areas, including breast, colorectal, liver, and ovarian cancer care, among others. Algorithms were trained and tested on primary and secondary datasets to build predictive models of health problems related to cancer. Studies reported that this led to improvements in the accuracy of predicting health outcomes or identifying variables that improved outcome prediction. While nurses led most studies, few deployed an artificial intelligence-based digital tool with cancer nurses in a real-world setting, as studies largely focused on developing and validating predictive models. CONCLUSION Electronic cancer nursing datasets should be established to enable artificial intelligence techniques to be tested and, if effective, implemented in digital prediction and other AI-based tools. Cancer nurses need more education on machine learning and natural language processing so they can lead and contribute to artificial intelligence developments in oncology.
Affiliation(s)
- Siobhan O'Connor
- Florence Nightingale Faculty of Nursing, Midwifery and Palliative Care, King's College London, London, United Kingdom.
- Amy Vercell
- Florence Nightingale Faculty of Nursing, Midwifery and Palliative Care, King's College London, London, United Kingdom; The Christie NHS Foundation Trust, Wilmslow Rd, Manchester, M20 4BX, United Kingdom
- David Wong
- Leeds Institute for Health Informatics, University of Leeds, Leeds, United Kingdom
- Janelle Yorke
- Florence Nightingale Faculty of Nursing, Midwifery and Palliative Care, King's College London, London, United Kingdom; The Christie NHS Foundation Trust, Wilmslow Rd, Manchester, M20 4BX, United Kingdom
- Fatmah Abdulsamad Fallatah
- Department of Nursing Affairs, King Faisal Specialist Hospital and Research Centre, Riyadh, Saudi Arabia
- Louise Cave
- NHS Transformation Directorate, NHS England, England, United Kingdom
- Lu-Yen Anny Chen
- Institute of Clinical Nursing, College of Nursing, National Yang Ming Chiao Tung University, Taipei, Taiwan
9. Hussain D, Al-Masni MA, Aslam M, Sadeghi-Niaraki A, Hussain J, Gu YH, Naqvi RA. Revolutionizing tumor detection and classification in multimodality imaging based on deep learning approaches: Methods, applications and limitations. J Xray Sci Technol 2024; 32:857-911. [PMID: 38701131] [DOI: 10.3233/xst-230429]
Abstract
BACKGROUND The emergence of deep learning (DL) techniques has revolutionized tumor detection and classification in medical imaging, with multimodal medical imaging (MMI) gaining recognition for its precision in diagnosis, treatment, and progression tracking. OBJECTIVE This review comprehensively examines DL methods in transforming tumor detection and classification across MMI modalities, aiming to provide insights into advancements, limitations, and key challenges for further progress. METHODS Systematic literature analysis identifies DL studies for tumor detection and classification, outlining methodologies including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and their variants. Integration of multimodality imaging enhances accuracy and robustness. RESULTS Recent advancements in DL-based MMI evaluation methods are surveyed, focusing on tumor detection and classification tasks. Various DL approaches, including CNNs, YOLO, Siamese Networks, Fusion-Based Models, Attention-Based Models, and Generative Adversarial Networks, are discussed with emphasis on PET-MRI, PET-CT, and SPECT-CT. FUTURE DIRECTIONS The review outlines emerging trends and future directions in DL-based tumor analysis, aiming to guide researchers and clinicians toward more effective diagnosis and prognosis. Continued innovation and collaboration are stressed in this rapidly evolving domain. CONCLUSION Conclusions drawn from literature analysis underscore the efficacy of DL approaches in tumor detection and classification, highlighting their potential to address challenges in MMI analysis and their implications for clinical practice.
Affiliation(s)
- Dildar Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Mohammed A Al-Masni
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Muhammad Aslam
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Abolghasem Sadeghi-Niaraki
- Department of Computer Science & Engineering and Convergence Engineering for Intelligent Drone, XR Research Center, Sejong University, Seoul, Korea
- Jamil Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Yeong Hyeon Gu
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul, Korea
- Rizwan Ali Naqvi
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, Korea
10. Ma'touq J, Alnuman N. Comparative analysis of features and classification techniques in breast cancer detection for Biglycan biomarker images. Cancer Biomark 2024; 40:263-273. [PMID: 39177590] [PMCID: PMC11380270] [DOI: 10.3233/cbm-230544]
Abstract
BACKGROUND Breast cancer (BC) is considered the world's most prevalent cancer. Early diagnosis of BC enables patients to receive better care and treatment, hence lowering patient mortality rates. Breast lesion identification and classification are challenging even for experienced radiologists due to the complexity of breast tissue and variations in lesion presentation. OBJECTIVE This work aims to investigate appropriate features and classification techniques for accurate breast cancer detection in 336 Biglycan biomarker images. METHODS The Biglycan biomarker images were retrieved from the Mendeley Data website (repository name: Biglycan breast cancer dataset). Five features were extracted and compared based on shape characteristics (i.e., Harris points and minimum eigenvalue (MinEigen) points), frequency-domain characteristics (i.e., the two-dimensional Fourier transform and the wavelet transform), and statistical characteristics (i.e., the histogram). Six commonly used classification algorithms were compared: K-nearest neighbours (k-NN), Naïve Bayes (NB), Pseudo-Linear Discriminant Analysis (pl-DA), Support Vector Machine (SVM), Decision Tree (DT), and Random Forest (RF). RESULTS The histogram of greyscale images showed the best performance for the k-NN (97.6%), SVM (95.8%), and RF (95.3%) classifiers. Among the five features, the greyscale histogram achieved the best accuracy across all classifiers, with a maximum of 97.6%, while the wavelet feature provided promising accuracy in most classifiers (up to 94.6%). CONCLUSION Machine learning demonstrates high accuracy in detecting cancer, and such technology can assist doctors in the analysis of routine medical images and biopsy samples to improve early diagnosis and risk stratification.
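The best-performing combination above, a greyscale histogram feature fed to a k-NN classifier, is simple enough to sketch end to end. The tiny images, their labels, and the use of k = 1 below are illustrative assumptions, not the study's data:

```python
def grey_histogram(img, bins=8):
    """Normalized greyscale histogram of a 2D image (intensities 0-255),
    given as nested lists of pixel values."""
    counts = [0] * bins
    pixels = [v for row in img for v in row]
    for v in pixels:
        counts[min(v * bins // 256, bins - 1)] += 1
    return [c / len(pixels) for c in counts]

def knn_classify(query, examples):
    """1-nearest-neighbour vote by squared Euclidean distance between
    feature vectors; examples is a list of (feature, label) pairs."""
    dist2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(examples, key=lambda e: dist2(query, e[0]))[1]

# Hypothetical training set: dark patches 'benign', bright ones 'malignant'.
examples = [(grey_histogram([[10, 20], [30, 40]]), "benign"),
            (grey_histogram([[200, 210], [220, 230]]), "malignant")]
print(knn_classify(grey_histogram([[15, 25], [35, 45]]), examples))
```

Because the histogram discards spatial layout, it is cheap and rotation-invariant, which may partly explain why it transferred so well across the six classifiers compared in the study.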
Affiliation(s)
- Jumana Ma'touq
- Department of Biomedical Engineering, School of Applied Medical Sciences, German Jordanian University, Amman, Jordan
- Nasim Alnuman
- Department of Biomedical Engineering, School of Applied Medical Sciences, German Jordanian University, Amman, Jordan
- Physiotherapy Department, Faculty of Allied Medical Sciences, Isra University, Amman, Jordan

11
Zhang J, Wu J, Zhou XS, Shi F, Shen D. Recent advancements in artificial intelligence for breast cancer: Image augmentation, segmentation, diagnosis, and prognosis approaches. Semin Cancer Biol 2023; 96:11-25. [PMID: 37704183 DOI: 10.1016/j.semcancer.2023.09.001]
Abstract
Breast cancer is a significant global health burden, with increasing morbidity and mortality worldwide. Early screening and accurate diagnosis are crucial for improving prognosis. Radiographic imaging modalities such as digital mammography (DM), digital breast tomosynthesis (DBT), magnetic resonance imaging (MRI), ultrasound (US), and nuclear medicine techniques are commonly used for breast cancer assessment, while histopathology (HP) serves as the gold standard for confirming malignancy. Artificial intelligence (AI) technologies show great potential for quantitative representation of medical images to effectively assist in the segmentation, diagnosis, and prognosis of breast cancer. In this review, we survey recent advancements of AI technologies for breast cancer, including 1) improving image quality by data augmentation, 2) fast detection and segmentation of breast lesions and diagnosis of malignancy, 3) biological characterization of the cancer, such as staging and subtyping, by AI-based classification technologies, and 4) prediction of clinical outcomes, such as metastasis, treatment response, and survival, by integrating multi-omics data. We then summarize large-scale databases available to help train robust, generalizable, and reproducible deep learning models. Furthermore, we discuss the challenges faced by AI in real-world applications, including data curation, model interpretability, and practice regulations. Finally, we expect that clinical implementation of AI will provide important guidance for patient-tailored management.
Affiliation(s)
- Jiadong Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China

12
Balaji K. Image Augmentation based on Variational Autoencoder for Breast Tumor Segmentation. Acad Radiol 2023; 30 Suppl 2:S172-S183. [PMID: 36804294 DOI: 10.1016/j.acra.2022.12.035]
Abstract
RATIONALE AND OBJECTIVES Breast tumor segmentation based on dynamic contrast-enhanced magnetic resonance imaging is a significant step for quantitative radiomics analysis of breast cancer. Manual tumor annotation is time-consuming, requires medical expertise, and is prone to bias, error, and inter-user variability. A number of recent studies have demonstrated the capability of deep learning representations in image segmentation. MATERIALS AND METHODS Here, we describe a 3D Connected-UNets model for tumor segmentation from 3D magnetic resonance images, based on an encoder-decoder architecture. Because of the restricted training dataset size, a variational autoencoder branch is added to reconstruct the input image itself, thereby regularizing the shared decoder and imposing additional constraints on its layers. Starting from the initial Connected-UNets segmentation, a fully connected 3D conditional random field is used to refine the segmentation results by exploiting 2D neighbourhood relations and 3D volume statistics. Moreover, 3D connected-component analysis is used to retain large components and reduce segmentation noise. RESULTS The proposed method has been assessed on two widely available datasets, namely INbreast and the Curated Breast Imaging Subset of the Digital Database for Screening Mammography. The proposed model has also been evaluated using a private dataset. CONCLUSION The experimental results show that the proposed model outperforms the state-of-the-art methods for breast tumor segmentation.
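The connected-component noise-reduction step described above (retaining only large components) can be sketched in 2D with a pure-Python flood fill; this is an illustrative simplification of the paper's 3D analysis, and the function name and 4-connectivity choice are assumptions:

```python
from collections import deque

def filter_small_components(mask, min_size):
    """Keep only connected components (4-connectivity) of at least
    min_size pixels in a 2D binary mask; returns a new mask."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Breadth-first flood fill to collect one component.
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:
                    for y, x in comp:
                        out[y][x] = 1
    return out
```

Extending this to 3D only adds two neighbour offsets along the depth axis.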
Affiliation(s)
- K Balaji
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, 632014, India

13
GadAllah MT, Mohamed AENA, Hefnawy AA, Zidan HE, El-Banby GM, Mohamed Badawy S. Convolutional Neural Networks Based Classification of Segmented Breast Ultrasound Images – A Comparative Preliminary Study. 2023 Intelligent Methods, Systems, and Applications (IMSA), 2023. [DOI: 10.1109/imsa58542.2023.10217585]
Affiliation(s)
- Abd El-Naser A. Mohamed
- Menoufia University, Faculty of Electronic Engineering, Electronics and Electrical Communications Engineering Department, Menoufia, Egypt
- Alaa A. Hefnawy
- Electronics Research Institute (ERI), Computers and Systems Department, Cairo, Egypt
- Hassan E. Zidan
- Electronics Research Institute (ERI), Computers and Systems Department, Cairo, Egypt
- Ghada M. El-Banby
- Menoufia University, Faculty of Electronic Engineering, Industrial Electronics and Control Engineering Department, Menoufia, Egypt
- Samir Mohamed Badawy
- Menoufia University, Faculty of Electronic Engineering, Industrial Electronics and Control Engineering Department, Menoufia, Egypt

14
Ali MD, Saleem A, Elahi H, Khan MA, Khan MI, Yaqoob MM, Farooq Khattak U, Al-Rasheed A. Breast Cancer Classification through Meta-Learning Ensemble Technique Using Convolution Neural Networks. Diagnostics (Basel) 2023; 13:2242. [PMID: 37443636 PMCID: PMC10341268 DOI: 10.3390/diagnostics13132242]
Abstract
This study aims to develop an efficient and accurate breast cancer classification model using meta-learning approaches and multiple convolutional neural networks (CNNs). The Breast Ultrasound Images (BUSI) dataset contains various types of breast lesions. The goal is to classify these lesions as benign or malignant, which is crucial for the early detection and treatment of breast cancer. Traditional machine learning and deep learning approaches often fail to classify these images accurately because of their complex and diverse nature. To address this problem, the proposed model combines several advanced techniques: a meta-learning ensemble technique, transfer learning, and data augmentation. Meta-learning optimizes the model's learning process, allowing it to adapt quickly to new and unseen datasets. Transfer learning leverages pre-trained models such as Inception, ResNet50, and DenseNet121 to enhance the model's feature-extraction ability. Data augmentation artificially generates new training images, increasing the size and diversity of the dataset. Meta-ensemble learning combines the outputs of multiple CNNs, improving the model's classification accuracy. The proposed work first pre-processes the BUSI dataset, then trains and evaluates multiple CNNs using different architectures and pre-trained models. A meta-learning algorithm is then applied to optimize the learning process, and ensemble learning is used to combine the outputs of the multiple CNNs. The evaluation results indicate that the model is highly effective, with high accuracy. Finally, the proposed model's performance is compared with state-of-the-art approaches in terms of accuracy, precision, recall, and F1 score.
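The final ensemble step (combining the outputs of several CNNs) is commonly implemented as soft voting over each model's class probabilities; a minimal sketch follows, where the probability vectors and the two-class label set are illustrative assumptions rather than the paper's exact combiner:

```python
def soft_vote(prob_lists, labels=("benign", "malignant")):
    """Average class probabilities from several models and pick the
    argmax. prob_lists holds one probability vector per model,
    aligned with `labels`; returns (predicted_label, averaged_probs)."""
    n = len(prob_lists)
    avg = [sum(p[i] for p in prob_lists) / n for i in range(len(labels))]
    return labels[max(range(len(labels)), key=avg.__getitem__)], avg
```

A meta-learner would replace the plain average with learned weights, but the data flow (per-model probabilities in, one fused prediction out) is the same.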
Affiliation(s)
- Muhammad Danish Ali
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Adnan Saleem
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Hubaib Elahi
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Muhammad Amir Khan
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA, Shah Alam 40450, Malaysia
- Muhammad Ijaz Khan
- Institute of Computing and Information Technology, Gomal University, Dera Ismail Khan 29220, Pakistan
- Muhammad Mateen Yaqoob
- Department of Computer Science, COMSATS University Islamabad, Abbottabad Campus, Abbottabad 22060, Pakistan
- Umar Farooq Khattak
- School of Information Technology, UNITAR International University, Kelana Jaya, Petaling Jaya 47301, Malaysia
- Amal Al-Rasheed
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11671, Saudi Arabia

15
Yu X, Dong M, Yang D, Wang L, Wang H, Ma L. Deep learning for differentiating benign from malignant tumors on breast-specific gamma image. Technol Health Care 2023; 31:61-67. [PMID: 37038782 DOI: 10.3233/thc-236007]
Abstract
BACKGROUND Breast diseases are a significant health threat for women. With fast-growing breast-specific gamma imaging (BSGI) data, it is becoming increasingly critical for physicians to accurately diagnose benign and malignant breast tumors. OBJECTIVE The purpose of this study is to diagnose benign and malignant breast tumors using a deep learning model with BSGI as input. METHODS A benchmark dataset including 144 patients with benign tumors and 87 patients with malignant tumors was collected and divided into a training dataset and a test dataset at a ratio of 8:2. The convolutional neural network ResNet18 was employed to develop a new deep learning model. The proposed model was compared with neural network and autoencoder models. Accuracy, specificity, sensitivity, and the ROC curve were used to evaluate the performance of the different models. RESULTS The accuracy, specificity, and sensitivity of the proposed model are 99.1%, 98.8%, and 99.3% respectively, the best performance among all methods. Additionally, the Grad-CAM method is used to analyse the interpretability of the diagnostic results of the deep learning model. CONCLUSION This study demonstrates that the proposed deep learning method could help physicians diagnose benign and malignant breast tumors quickly and reliably.
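The three evaluation metrics reported above follow directly from the confusion-matrix counts; a small self-contained sketch (toy labels, illustrative function name) shows the computation:

```python
def diagnostic_metrics(y_true, y_pred, positive="malignant"):
    """Accuracy, sensitivity (recall on positives), and specificity
    (recall on negatives) from paired ground-truth/predicted labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    tn = sum(t != positive and p != positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    return {
        "accuracy": (tp + tn) / len(pairs),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```

Sweeping the classifier's decision threshold and plotting sensitivity against 1 - specificity at each point yields the ROC curve used in the study.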
Affiliation(s)
- Xia Yu
- Weihai Maternal and Children Health Hospital, Weihai, Shandong, China
- Mengchao Dong
- School of Information Science and Engineering, Harbin Institute of Technology, Weihai, Shandong, China
- Dongzhu Yang
- Weihai Municipal Hospital, Weihai, Shandong, China
- Lianfang Wang
- School of Information Science and Engineering, Harbin Institute of Technology, Weihai, Shandong, China
- Hongjie Wang
- Weihai Maternal and Children Health Hospital, Weihai, Shandong, China
- Liyong Ma
- School of Information Science and Engineering, Harbin Institute of Technology, Weihai, Shandong, China

16
Breast cancer detection and diagnosis using hybrid deep learning architecture. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104558]
17
Shewajo FA, Fante KA. Tile-based microscopic image processing for malaria screening using a deep learning approach. BMC Med Imaging 2023; 23:39. [PMID: 36949382 PMCID: PMC10035268 DOI: 10.1186/s12880-023-00993-9]
Abstract
BACKGROUND Manual microscopic examination remains the gold standard for malaria diagnosis, but it is laborious and requires experienced pathologists for accurate diagnosis. The need for computer-aided diagnosis methods is driven by the enormous workload and the difficulties associated with manual microscopy-based examination. While the importance of computer-aided diagnosis is increasing at an enormous pace, fostered by the advancement of deep learning algorithms, challenges remain in detecting small objects, such as malaria parasites, in microscopic images of blood films. State-of-the-art (SOTA) deep learning-based object detection models are inefficient at accurately detecting small objects because such objects are underrepresented in benchmark datasets. The performance of these models is also affected by the loss of detailed spatial information due to in-network feature-map downscaling, since SOTA models cannot directly process high-resolution images given their low-resolution network input layer. METHODS In this study, an efficient and robust tile-based image processing method is proposed to enhance the performance of SOTA malaria parasite detection models. Three variants of YOLOv4-based object detectors were adopted, considering their detection accuracy and speed. These models were trained using tiles generated from 1780 high-resolution P. falciparum-infected thick smear microscopic images. Tiling the high-resolution images improves the performance of the object detection models. The detection accuracy and the generalization capability of these models were evaluated using three datasets acquired from different regions. RESULTS The best-performing model using the proposed tile-based approach significantly outperforms the baseline method (recall: 95.3% vs. 57%; average precision: 87.1% vs. 76%). Furthermore, the proposed method outperformed existing approaches that used different machine learning techniques evaluated on similar datasets. CONCLUSIONS The experimental results show that the proposed method significantly improves P. falciparum detection from thick smear microscopic images while maintaining real-time detection speed. The method also has the potential to assist and reduce the workload of laboratory technicians in malaria-endemic remote areas of developing countries, where there is a critical skill gap and a shortage of experts.
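The core of such a tile-based approach is enumerating tile positions that cover a high-resolution image while keeping each tile fully inside the frame; a minimal sketch follows (the stride/edge-shift policy and the function name are assumptions for illustration, not the paper's exact scheme):

```python
def make_tiles(height, width, tile, overlap=0):
    """Top-left (y, x) coordinates of square tiles covering a
    height x width image with stride tile - overlap; the last row and
    column of tiles are shifted inward so no tile exceeds the image."""
    stride = tile - overlap
    ys = list(range(0, max(height - tile, 0) + 1, stride))
    xs = list(range(0, max(width - tile, 0) + 1, stride))
    if ys[-1] != height - tile:  # cover the bottom edge exactly
        ys.append(height - tile)
    if xs[-1] != width - tile:   # cover the right edge exactly
        xs.append(width - tile)
    return [(y, x) for y in ys for x in xs]
```

Each tile is then passed to the detector at its native resolution, and the per-tile detections are mapped back by adding the tile's (y, x) offset.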
Affiliation(s)
- Kinde Anlay Fante
- Faculty of Electrical and Computer Engineering, Jimma University, 378, Jimma, Ethiopia

18
Zhu Z, Wang SH, Zhang YD. A Survey of Convolutional Neural Network in Breast Cancer. Comput Model Eng Sci 2023; 136:2127-2172. [PMID: 37152661 PMCID: PMC7614504 DOI: 10.32604/cmes.2023.025484]
Abstract
Problem For people all over the world, cancer is one of the most feared diseases. Cancer is one of the major obstacles to improving life expectancy worldwide and one of the biggest causes of death before the age of 70 in 112 countries. Among all cancers, breast cancer is the most common cancer in women, and the data show that female breast cancer has become one of the most common cancers overall. Aims A large number of clinical trials have proved that if breast cancer is diagnosed at an early stage, patients have more treatment options and better treatment outcomes and survival. Accordingly, many diagnostic approaches for breast cancer exist, such as computer-aided diagnosis (CAD). Methods We present a comprehensive review of the diagnosis of breast cancer based on convolutional neural networks (CNNs), after surveying a large body of recent papers. First, we introduce several different imaging modalities. The structure of the CNN is given in the second part. After that, we introduce some public breast cancer datasets. Then, we divide the diagnosis of breast cancer into three different tasks: 1. classification; 2. detection; 3. segmentation. Conclusion Although CNN-based diagnosis has achieved great success, some limitations remain. (i) There are too few good datasets: a good public breast cancer dataset must address many aspects, such as professional medical knowledge, privacy issues, financial issues, and dataset size. (ii) When the dataset is very large, a CNN-based model requires a great deal of computation and time to complete the diagnosis. (iii) Small datasets easily lead to overfitting.
Affiliation(s)
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK

19
Zhong S, Tu C, Dong X, Feng Q, Chen W, Zhang Y. MsGoF: Breast lesion classification on ultrasound images by multi-scale gradational-order fusion framework. Comput Methods Programs Biomed 2023; 230:107346. [PMID: 36716637 DOI: 10.1016/j.cmpb.2023.107346]
Abstract
BACKGROUND AND OBJECTIVE Predicting the malignant potential of breast lesions from breast ultrasound (BUS) images is a crucial component of computer-aided diagnosis systems for breast cancer. However, since breast lesions in BUS images generally have various shapes with relatively low contrast and present complex textures, accurately identifying the malignant potential of breast lesions remains challenging. METHODS In this paper, we propose a multi-scale gradational-order fusion framework that takes full advantage of multi-scale representations combined with the gradational-order characteristics of BUS images for breast lesion classification. Specifically, we first construct a spatial context aggregation module to generate multi-scale context representations from the original BUS images. Subsequently, the multi-scale representations are efficiently fused in a feature fusion block equipped with special fusion strategies to comprehensively capture the morphological characteristics of breast lesions. To better characterize complex textures and enhance non-linear modeling capability, we further propose an isotropous gradational-order feature module within the feature fusion block to learn and combine multi-order representations. Finally, these multi-scale gradational-order representations are used to predict the malignant potential of breast lesions. RESULTS The proposed model was evaluated on three open datasets using 5-fold cross-validation. The experimental results (accuracy: 85.32%, sensitivity: 85.24%, specificity: 88.57%, AUC: 90.63% on dataset A; accuracy: 76.48%, sensitivity: 72.45%, specificity: 80.42%, AUC: 78.98% on dataset B) demonstrate that the proposed method achieves promising performance compared with other deep learning-based methods on the BUS classification task. CONCLUSIONS The proposed method has demonstrated promising potential to predict the malignant potential of breast lesions from ultrasound images in an end-to-end manner.
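The 5-fold cross-validation protocol used in the evaluation above amounts to shuffling the sample indices and partitioning them into five disjoint folds, each serving once as the test set; a minimal sketch (illustrative function name and seed):

```python
import random

def kfold_indices(n, k=5, seed=0):
    """Shuffle sample indices 0..n-1 and split them into k disjoint,
    near-equal folds; fold i serves as the test set in round i."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)  # fixed seed for reproducibility
    return [idx[i::k] for i in range(k)]
```

In each round the model is trained on the union of the other k - 1 folds, and the reported metrics are averaged over the k rounds.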
Affiliation(s)
- Shengzhou Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Chao Tu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Xiuyu Dong
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Wufan Chen
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China
- Yu Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou 510515, China

20
Li Y, Xu J, Wang P, Li P, Yang G, Chen R. Manifold reconstructed semi-supervised domain adaptation for histopathology images classification. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104495]
21
ME-CCNN: Multi-encoded images and a cascade convolutional neural network for breast tumor segmentation and recognition. Artif Intell Rev 2023. [DOI: 10.1007/s10462-023-10426-2]
22
Saeed W, Omlin C. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110273]
23
Ranjbarzadeh R, Dorosti S, Jafarzadeh Ghoushchi S, Caputo A, Tirkolaee EB, Ali SS, Arshadi Z, Bendechache M. Breast tumor localization and segmentation using machine learning techniques: Overview of datasets, findings, and methods. Comput Biol Med 2023; 152:106443. [PMID: 36563539 DOI: 10.1016/j.compbiomed.2022.106443]
Abstract
The Global Cancer Statistics 2020 report identified breast cancer (BC) as the most commonly diagnosed cancer type; early detection of this cancer would therefore reduce the risk of death from it. Breast imaging techniques are among the most frequently used techniques for detecting the position of cancerous cells or suspicious lesions. Computer-aided diagnosis (CAD) is a generation of computer systems that assist experts in detecting abnormalities in medical images. In recent decades, CAD has applied deep learning (DL) and machine learning approaches to perform complex medical tasks in computer vision and to improve decision-making for doctors and radiologists. The most popular and widely used image processing technique in CAD systems is segmentation, which consists of extracting the region of interest (ROI) through various techniques. This research provides a detailed description of the main categories of segmentation procedures, classified into three classes: supervised, unsupervised, and DL-based. The main aim of this work is to provide an overview of each of these techniques and discuss their pros and cons, helping researchers better understand them and choose the appropriate method for a given use case.
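As a concrete illustration of the simplest unsupervised class of segmentation surveyed here, the sketch below thresholds a 2D greyscale image and returns the bounding box of the resulting ROI; the fixed threshold and function name are illustrative assumptions (practical systems typically use adaptive thresholds such as Otsu's method):

```python
def threshold_roi(image, thresh):
    """Binary-threshold a 2D greyscale image (list of rows) and return
    the bounding box (top, left, bottom, right) of all above-threshold
    pixels, or None if nothing exceeds the threshold."""
    coords = [(r, c) for r, row in enumerate(image)
              for c, v in enumerate(row) if v >= thresh]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    # Half-open box: bottom/right are one past the last foreground pixel.
    return min(rows), min(cols), max(rows) + 1, max(cols) + 1
```

Supervised and DL-based methods replace the threshold test with a learned per-pixel decision but produce the same kind of ROI output.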
Affiliation(s)
- Ramin Ranjbarzadeh
- School of Computing, Faculty of Engineering and Computing, Dublin City University, Ireland
- Shadi Dorosti
- Department of Industrial Engineering, Urmia University of Technology, Urmia, Iran
- Annalina Caputo
- School of Computing, Faculty of Engineering and Computing, Dublin City University, Ireland
- Sadia Samar Ali
- Department of Industrial Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
- Zahra Arshadi
- Faculty of Electronics, Telecommunications and Physics Engineering, Polytechnic University, Turin, Italy
- Malika Bendechache
- Lero & ADAPT Research Centres, School of Computer Science, University of Galway, Ireland

24
Artificial Intelligence (AI) in Breast Imaging: A Scientometric Umbrella Review. Diagnostics (Basel) 2022; 12:3111. [PMID: 36553119 PMCID: PMC9777253 DOI: 10.3390/diagnostics12123111]
Abstract
Artificial intelligence (AI), a transformative advancement disrupting a wide spectrum of applications, has continued to gain momentum over the past decades. Within breast imaging, AI, especially machine learning and deep learning refined by extensive cross-data/case referencing, has found great utility encompassing four facets: screening and detection, diagnosis, disease monitoring, and data management as a whole. Over the years, breast cancer has been at the apex of the cumulative cancer risk ranking for women across the six continents, existing in variegated forms and presenting a complicated context for medical decisions. Recognizing the ever-increasing demand for quality healthcare, contemporary AI has been envisioned to make great strides in clinical data management and perception, with the capability to detect findings of indeterminate significance, predict prognosis, and correlate available data into a meaningful clinical endpoint. Here, the authors captured the review works of the past decades focusing on AI in breast imaging and systematized the included works into one usable document, termed an umbrella review. The present study aims to provide a panoramic view of how AI is poised to enhance breast imaging procedures. Evidence-based scientometric analysis was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline, resulting in 71 included review works. This study synthesizes, collates, and correlates the included review works, identifying the patterns, trends, quality, and types of works captured by the structured search strategy. It is intended to serve as a "one-stop center" synthesis and to provide a holistic bird's-eye view, for readers ranging from newcomers to established researchers and relevant stakeholders, on the topic of interest.
25
Iqbal MS, Ahmad W, Alizadehsani R, Hussain S, Rehman R. Breast Cancer Dataset, Classification and Detection Using Deep Learning. Healthcare (Basel) 2022; 10:2395. [PMID: 36553919 PMCID: PMC9778593 DOI: 10.3390/healthcare10122395]
Abstract
Incorporating scientific research into clinical practice via clinical informatics, which includes genomics, proteomics, bioinformatics, and biostatistics, improves patient treatment. Computational pathology is a growing subspecialty with the potential to integrate whole-slide images, multi-omics data, and health informatics. Pathology and laboratory medicine are critical to diagnosing cancer. This work reviews existing computational and digital pathology methods for breast cancer diagnosis, with a special focus on deep learning. The paper starts by reviewing public datasets related to breast cancer diagnosis. Existing deep learning methods for breast cancer diagnosis are then reviewed, and publicly available code repositories are introduced. The paper closes by highlighting challenges and future work for deep learning-based diagnosis.
Affiliation(s)
- Muhammad Shahid Iqbal
- Department of Computer Science and Information Technology, Women University AJK, Bagh 12500, Pakistan
- Waqas Ahmad
- Higher Education Department Govt, AJK, Mirpur 10250, Pakistan
- Roohallah Alizadehsani
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, VIC 3216, Australia
- Sadiq Hussain
- Examination Branch, Dibrugarh University, Dibrugarh 786004, India
- Rizwan Rehman
- Centre for Computer Science and Applications, Dibrugarh University, Dibrugarh 786004, India

26
Hsu SY, Wang CY, Kao YK, Liu KY, Lin MC, Yeh LR, Wang YM, Chen CI, Kao FC. Using Deep Neural Network Approach for Multiple-Class Assessment of Digital Mammography. Healthcare (Basel) 2022; 10:2382. [PMID: 36553906 PMCID: PMC9778490 DOI: 10.3390/healthcare10122382]
Abstract
According to Health Promotion Administration (Ministry of Health and Welfare) statistics in Taiwan, over ten thousand women develop breast cancer every year. Mammography is widely used to detect breast cancer, but it is limited by the operator's technique, the subject's cooperation, and the physician's subjective interpretation, which leads to inconsistent identification. This study therefore explores the use of a deep neural network algorithm for the classification of mammography images. In the experimental design, a retrospective study was used to collect imaging data from actual clinical cases. The mammography images were collected and classified according to the Breast Imaging Reporting and Data System (BI-RADS). In terms of model building, a fully convolutional dense connection network (FC-DCN) is used as the network backbone. All images passed through image preprocessing, data augmentation, and transfer learning to build a mammography image classification model. The results show that the model's accuracy, sensitivity, and specificity were 86.37%, 100%, and 72.73%, respectively. Based on the FC-DCN framework, the method effectively reduces the number of training parameters and yields a reasonable image classification model for mammography.
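The three figures reported above follow directly from a binary confusion matrix. A minimal sketch of that computation (the counts below are hypothetical, chosen only to mirror the shape of the reported result, not the study's data):

```python
def binary_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from binary confusion-matrix counts."""
    total = tp + fn + tn + fp
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts shaped like the reported result: every positive
# case is caught (sensitivity 1.0) at the cost of some false positives.
m = binary_metrics(tp=30, fn=0, tn=16, fp=6)
```

A 100% sensitivity with markedly lower specificity, as reported, always means zero false negatives alongside a non-trivial false-positive count.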
Affiliation(s)
- Shih-Yen Hsu
- Department of Information Engineering, I-Shou University, Kaohsiung City 84001, Taiwan
- Chi-Yuan Wang
- Department of Medical Imaging and Radiological Science, I-Shou University, Kaohsiung City 82445, Taiwan
- Yi-Kai Kao
- Division of Colorectal Surgery, Department of Surgery, E-DA Hospital, Kaohsiung City 82445, Taiwan
- Kuo-Ying Liu
- Department of Radiology, E-DA Cancer Hospital, I-Shou University, Kaohsiung City 82445, Taiwan
- Ming-Chia Lin
- Department of Nuclear Medicine, E-DA Hospital, I-Shou University, Kaohsiung City 82445, Taiwan
- Li-Ren Yeh
- Department of Anesthesiology, E-DA Cancer Hospital, I-Shou University, Kaohsiung City 82445, Taiwan
- Department of Medical Imaging and Radiology, Shu-Zen College of Medicine and Management, Kaohsiung City 82144, Taiwan
- Yi-Ming Wang
- Department of Information Engineering, I-Shou University, Kaohsiung City 84001, Taiwan
- Department of Critical Care Medicine, E-DA Hospital, I-Shou University, Kaohsiung City 82445, Taiwan
- Chih-I Chen
- Division of Colon and Rectal Surgery, Department of Surgery, E-DA Hospital, Kaohsiung City 82445, Taiwan
- Division of General Medicine Surgery, Department of Surgery, E-DA Hospital, Kaohsiung City 82445, Taiwan
- School of Medicine, College of Medicine, I-Shou University, Kaohsiung City 82445, Taiwan
- The School of Chinese Medicine for Post Baccalaureate, I-Shou University, Kaohsiung City 82445, Taiwan
- Correspondence: (C.-I.C.); (F.-C.K.)
- Feng-Chen Kao
- School of Medicine, College of Medicine, I-Shou University, Kaohsiung City 82445, Taiwan
- Department of Orthopedics, E-DA Hospital, Kaohsiung City 82445, Taiwan
- Department of Orthopedics, Dachang Hospital, Kaohsiung City 82445, Taiwan
- Correspondence: (C.-I.C.); (F.-C.K.)
27
Zaman KS, Reaz MBI, Md Ali SH, Bakar AAA, Chowdhury MEH. Custom Hardware Architectures for Deep Learning on Portable Devices: A Review. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:6068-6088. [PMID: 34086580 DOI: 10.1109/tnnls.2021.3082304] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The staggering innovations and emergence of numerous deep learning (DL) applications have forced researchers to reconsider hardware architecture to accommodate fast and efficient application-specific computations. Applications, such as object detection, image recognition, speech translation, as well as music synthesis and image generation, can be performed with high accuracy at the expense of substantial computational resources using DL. Furthermore, the desire to adopt Industry 4.0 and smart technologies within the Internet of Things infrastructure has initiated several studies to enable on-chip DL capabilities for resource-constrained devices. Specialized DL processors reduce dependence on cloud servers, improve privacy, lessen latency, and mitigate bandwidth congestion. As we reach the limits of shrinking transistors, researchers are exploring various application-specific hardware architectures to meet the performance and efficiency requirements for DL tasks. Over the past few years, several software optimizations and hardware innovations have been proposed to efficiently perform these computations. In this article, we review several DL accelerators, as well as technologies with emerging devices, to highlight their architectural features in application-specific integrated circuit (IC) and field-programmable gate array (FPGA) platforms. Finally, the design considerations for DL hardware in portable applications have been discussed, along with some deductions about the future trends and potential research directions to innovate DL accelerator architectures further. By compiling this review, we expect to help aspiring researchers widen their knowledge in custom hardware architectures for DL.
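One software technique that the reviewed accelerators commonly exploit is low-precision integer arithmetic. A minimal sketch of symmetric per-tensor int8 post-training quantization (illustrative only, not tied to any specific accelerator in the review):

```python
def quantize_int8(weights):
    """Symmetric per-tensor post-training quantization: map float
    weights onto int8 codes with a single scale factor, so
    multiply-accumulates can run on cheap integer hardware."""
    scale = max(abs(w) for w in weights) / 127.0
    if scale == 0.0:          # all-zero tensor: any scale works
        scale = 1.0
    q = [round(w / scale) for w in weights]   # int8 codes in [-127, 127]
    deq = [qi * scale for qi in q]            # float reconstruction
    return q, deq, scale

q, deq, scale = quantize_int8([0.5, -1.27, 0.02, 1.0])
```

The reconstruction error is bounded by half the scale per weight, which is why 8-bit inference often loses little accuracy while cutting memory traffic fourfold versus float32.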
28
Ayana G, Choe SW. BUViTNet: Breast Ultrasound Detection via Vision Transformers. Diagnostics (Basel) 2022; 12:diagnostics12112654. [PMID: 36359497 PMCID: PMC9689470 DOI: 10.3390/diagnostics12112654] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2022] [Revised: 10/26/2022] [Accepted: 10/26/2022] [Indexed: 11/06/2022] Open
Abstract
Convolutional neural networks (CNNs) have enhanced ultrasound image-based early breast cancer detection. Vision transformers (ViTs) have recently surpassed CNNs as the most effective method for natural image analysis. ViTs have proven their capability of incorporating more global information than CNNs at lower layers, and their skip connections are more powerful than those of CNNs, which endows ViTs with superior performance. However, the effectiveness of ViTs in breast ultrasound imaging has not yet been investigated. Here, we present BUViTNet, breast ultrasound detection via ViTs, in which ViT-based multistage transfer learning is performed using ImageNet and cancer cell image datasets prior to transfer learning for classifying breast ultrasound images. We utilized two publicly available ultrasound breast image datasets, Mendeley and breast ultrasound images (BUSI), to train and evaluate our algorithm. The proposed method achieved the highest area under the receiver operating characteristic curve (AUC) of 1 ± 0, Matthews correlation coefficient (MCC) of 1 ± 0, and kappa score of 1 ± 0 on the Mendeley dataset. Furthermore, BUViTNet achieved the highest AUC of 0.968 ± 0.02, MCC of 0.961 ± 0.01, and kappa score of 0.959 ± 0.02 on the BUSI dataset. BUViTNet outperformed ViT trained from scratch, ViT-based conventional transfer learning, and CNN-based transfer learning in classifying breast ultrasound images (p < 0.01 in all cases). Our findings indicate that improved transformers are effective in analyzing breast images and can provide an improved diagnosis if used in clinical settings. Future work will consider the use of a wide range of datasets and parameters for optimized performance.
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Se-woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Correspondence: ; Tel.: +82-54-478-7781; Fax: +82-54-462-1049
29
Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022; 14:5334. [PMID: 36358753 PMCID: PMC9655692 DOI: 10.3390/cancers14215334] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 10/23/2022] [Accepted: 10/25/2022] [Indexed: 12/02/2022] Open
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent cure has been discovered. Thus, early detection is a crucial step to control and cure breast cancer, and it can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed at an early stage of cancer, all of whom survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze images produced by these methods manually, which increases the risk of wrong decisions in cancer detection. Thus, new automatic methods for analyzing all kinds of breast screening images are required to assist radiologists in interpreting them. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing patients' chances of survival. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report available datasets on breast cancer imaging modalities, which are important in developing AI-based algorithms and training deep learning models. In conclusion, this review paper aims to provide a comprehensive resource for researchers working in breast cancer imaging analysis.
Affiliation(s)
- Mohammad Madani
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Mohammad Mahdi Behzadi
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi
- Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
30
Syed AH, Khan T. Evolution of research trends in artificial intelligence for breast cancer diagnosis and prognosis over the past two decades: A bibliometric analysis. Front Oncol 2022; 12:854927. [PMID: 36267967 PMCID: PMC9578338 DOI: 10.3389/fonc.2022.854927] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2022] [Accepted: 08/30/2022] [Indexed: 01/27/2023] Open
Abstract
Objective In recent years, among the available tools, the concurrent application of Artificial Intelligence (AI) has improved the diagnostic performance of breast cancer screening. In this context, the present study provides a comprehensive overview of the evolution of AI research for breast cancer diagnosis and prognosis using bibliometric analysis. Methodology Relevant peer-reviewed research articles published from 2000 to 2021 were downloaded from the Scopus and Web of Science (WOS) databases and then quantitatively analyzed and visualized using Bibliometrix (R package). Finally, open challenge areas were identified for future research work. Results The study revealed that the number of published studies on AI for breast cancer detection and survival prediction increased from 12 to 546 between 2000 and 2021. The United States of America (USA), the Republic of China, and India are the most productive in terms of publications in this field. Furthermore, the USA leads in total citations, whereas Hungary and Holland take the lead positions in average citations per year. Wang J is the most productive author, and Zhan J is the most relevant author in this field. Stanford University in the USA is the most relevant affiliation by number of published articles. The top 10 most relevant sources are Q1 journals, with PLOS ONE and Computers in Biology and Medicine the leading journals in this field. The most trending topics related to our study, transfer learning and deep learning, were identified. Conclusion The present findings provide insight and research directions for policymakers and academic researchers for future collaboration and research in AI for breast cancer patients.
Affiliation(s)
- Asif Hassan Syed
- Department of Computer Science, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia
- Tabrej Khan
- Department of Information Systems, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia
31
Rashed BM, Popescu N. Critical Analysis of the Current Medical Image-Based Processing Techniques for Automatic Disease Evaluation: Systematic Literature Review. Sensors (Basel) 2022; 22:7065. [PMID: 36146414 PMCID: PMC9501859 DOI: 10.3390/s22187065] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/20/2022] [Revised: 09/06/2022] [Accepted: 09/14/2022] [Indexed: 06/16/2023]
Abstract
Medical image processing and analysis techniques play a significant role in diagnosing diseases. Thus, during the last decade, several noteworthy improvements in medical diagnostics have been made based on medical image processing techniques. In this article, we reviewed articles published in the most important journals and conferences that used or proposed medical image analysis techniques to diagnose diseases. Starting from four scientific databases, we applied the PRISMA technique to efficiently process and refine articles until we obtained forty research articles published in the last five years (2017-2021) aimed at answering our research questions. The medical image processing and analysis approaches were identified, examined, and discussed, including preprocessing, segmentation, feature extraction, classification, evaluation metrics, and diagnosis techniques. This article also sheds light on machine learning and deep learning approaches. We also focused on the most important medical image processing techniques used in these articles to establish the best methodologies for future approaches, discussing the most efficient ones and proposing in this way a comprehensive reference source of methods of medical image processing and analysis that can be very useful in future medical diagnosis systems.
Affiliation(s)
- Nirvana Popescu
- Computer Science Department, University Politehnica of Bucharest, 060042 Bucharest, Romania
32
Cross-domain decision making based on TrAdaBoost for diagnosis of breast lesions. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10267-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
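The core of TrAdaBoost (the transfer-boosting scheme this paper builds on) is an asymmetric weight update: misclassified target-domain instances are up-weighted AdaBoost-style, while misclassified source-domain instances are down-weighted as presumably untransferable. A minimal sketch of one round's update (a hypothetical illustration, not the paper's code):

```python
import math

def tradaboost_update(w_src, err_src, w_tgt, err_tgt, eps_t, n_rounds):
    """One TrAdaBoost weight update.
    w_src, w_tgt: current instance weights; err_src, err_tgt: 0/1
    misclassification indicators from the weak learner; eps_t:
    weighted error on the target domain (assumed < 0.5)."""
    beta_src = 1.0 / (1.0 + math.sqrt(2.0 * math.log(len(w_src)) / n_rounds))
    beta_tgt = eps_t / (1.0 - eps_t)
    # Source: multiply by beta_src**err, so misclassified weights shrink.
    new_src = [w * beta_src ** e for w, e in zip(w_src, err_src)]
    # Target: multiply by beta_tgt**(-err), so misclassified weights grow.
    new_tgt = [w * beta_tgt ** (-e) for w, e in zip(w_tgt, err_tgt)]
    return new_src, new_tgt

src, tgt = tradaboost_update([1.0, 1.0], [1, 0], [1.0, 1.0], [1, 0],
                             eps_t=0.2, n_rounds=10)
```

After each round the source domain's influence decays exactly where it disagrees with the target task, which is what makes cross-domain decision making with a small target set viable.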
33
Aljuaid H, Alturki N, Alsubaie N, Cavallaro L, Liotta A. Computer-aided diagnosis for breast cancer classification using deep neural networks and transfer learning. Computer Methods and Programs in Biomedicine 2022; 223:106951. [PMID: 35767911 DOI: 10.1016/j.cmpb.2022.106951] [Citation(s) in RCA: 42] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Revised: 05/25/2022] [Accepted: 06/09/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Many developed and developing countries worldwide suffer from fatal cancer-related diseases. In particular, the rate of breast cancer in females increases daily, partly due to lack of awareness and late diagnosis. Proper first-line breast cancer treatment can only be provided by adequately detecting and classifying cancer during the very early stages of its development. The use of medical image analysis techniques and computer-aided diagnosis may help accelerate and automate both cancer detection and classification, while also training and aiding less experienced physicians. For large datasets of medical images, convolutional neural networks play a significant role in detecting and classifying cancer effectively. METHODS This article presents a novel computer-aided diagnosis method for breast cancer classification (both binary and multi-class), using a combination of deep neural networks (ResNet-18, ShuffleNet, and Inception-V3Net) and transfer learning on the publicly available BreakHis dataset. RESULTS AND CONCLUSIONS Our proposed method provides the best average accuracy for binary classification of benign or malignant cancer cases of 99.7%, 97.66%, and 96.94% for ResNet, Inception-V3Net, and ShuffleNet, respectively. Average accuracies for multi-class classification were 97.81%, 96.07%, and 95.79% for ResNet, Inception-V3Net, and ShuffleNet, respectively.
Affiliation(s)
- Hanan Aljuaid
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Najah Alsubaie
- Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), PO Box 84428, Riyadh 11671, Saudi Arabia
- Lucia Cavallaro
- Faculty of Computer Science, Free University of Bozen-Bolzano, Piazza Domenicani, 3, Bolzano 39100, Italy
- Antonio Liotta
- Faculty of Computer Science, Free University of Bozen-Bolzano, Piazza Domenicani, 3, Bolzano 39100, Italy.
34
Hsiao M, Hung M. Construction of an Artificial Intelligence Writing Model for English Based on Fusion Neural Network Model. Computational Intelligence and Neuroscience 2022; 2022:1779131. [PMID: 35637722 PMCID: PMC9148263 DOI: 10.1155/2022/1779131] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 04/14/2022] [Accepted: 04/18/2022] [Indexed: 12/04/2022]
Abstract
This paper presents an in-depth study and analysis of a model of English writing built on neural network algorithms. Based on word vectors, unsupervised disambiguation and clustering of multimedia contexts extracted from massive online videos reach a disambiguation accuracy of over 0.7, and the resulting small-scale multimedia context set can cover up to 90% of vocabulary learning tasks; user experiments show that a multimedia context learning system based on this method improves the effectiveness and experience of ESL vocabulary learning, with learners' long-term word-sense memory improving by 30%. Based on the dependency grammatical relations and semantic metrics of collocations in a large-scale professional corpus, we established a collocation intention description and retrieval method in line with users' linguistic cognition; after half a year on the deployed system, the usage rate of collocation retrieval had doubled, making it a "sticky" ESL writing aid that further helps define style. Dictionaries only provide basic lexical definitions and, even when supported by example sentences, still cannot meet the needs of ESL authors in terms of expressive accuracy and richness. Moreover, current machine translation is built on black-box deep neural networks, so its translation process is neither interpretable nor interactive. Among the three algorithmic models constructed in this paper, the multitask learning model outperforms the conditional random field model and the LSTM-CRF model, because the multitask learning model with auxiliary tasks alleviates data sparsity to a certain extent, allowing the model to be trained more adequately under uneven label distribution; it thus performs better than the other models on the task of grammatical error detection.
Affiliation(s)
- Meijin Hsiao
- School of Art and Design, Fuzhou University of International Studies and Trade, Fuzhou, Fujian 350202, China
- Maosheng Hung
- School of Foreign Languages, Fuzhou University of International Studies and Trade, Fuzhou, Fujian 350202, China
35
Ukwuoma CC, Hossain MA, Jackson JK, Nneji GU, Monday HN, Qin Z. Multi-Classification of Breast Cancer Lesions in Histopathological Images Using DEEP_Pachi: Multiple Self-Attention Head. Diagnostics (Basel) 2022; 12:1152. [PMID: 35626307 PMCID: PMC9139754 DOI: 10.3390/diagnostics12051152] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 04/23/2022] [Accepted: 04/28/2022] [Indexed: 11/16/2022] Open
Abstract
INTRODUCTION AND BACKGROUND Despite fast developments in the medical field, histological diagnosis is still regarded as the benchmark in cancer diagnosis. However, the input-image feature extraction used to determine the severity of cancer at various magnifications is harrowing, since manual procedures are biased, time-consuming, labor-intensive, and error-prone. Current state-of-the-art deep learning approaches for breast histopathology image classification take features from entire images (generic features). Thus, they are likely to overlook essential image features in favor of unnecessary ones, resulting in an incorrect diagnosis of breast histopathology imaging and leading to mortality. METHODS This discrepancy prompted us to develop DEEP_Pachi for classifying breast histopathology images at various magnifications. The suggested DEEP_Pachi collects the global and regional features that are essential for effective breast histopathology image classification. The proposed model backbone is an ensemble of the DenseNet201 and VGG16 architectures. The ensemble model extracts global features (generic image information), whereas DEEP_Pachi extracts spatial information (regions of interest). The proposed model was evaluated on publicly available datasets: the BreakHis and ICIAR 2018 Challenge datasets. RESULTS A detailed evaluation of the proposed model's accuracy, sensitivity, precision, specificity, and F1-score revealed the usefulness of the backbone model and the DEEP_Pachi model for image classification. The suggested technique outperformed state-of-the-art classifiers, achieving an accuracy of 1.0 for the benign class and 0.99 for the malignant class at all magnifications of the BreakHis dataset, and an accuracy of 1.0 on the ICIAR 2018 Challenge dataset.
CONCLUSIONS The findings proved robust and helpful, suggesting the system could assist experts at large medical institutions, enabling early breast cancer diagnosis and a reduction in the death rate.
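The self-attention heads that DEEP_Pachi relies on reduce, at their core, to scaled dot-product attention. A toy pure-Python sketch of a single head with identity projections (a real head would first project the inputs with learned W_q, W_k, W_v matrices, and the model runs many such heads in parallel):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    """Scaled dot-product self-attention with queries = keys =
    values = tokens. Each output row is a convex combination of all
    token vectors, weighted by similarity to the query token."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, tokens))
                    for j in range(d)])
    return out

att = self_attention([[1.0, 0.0], [0.0, 1.0]])
```

Because the attention weights are global over all tokens (here, image patches), each output position mixes in information from every region of the slide, which is what lets such heads capture regions of interest beyond a CNN's local receptive field.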
Affiliation(s)
- Chiagoziem C. Ukwuoma
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; (J.K.J.); (G.U.N.)
- Md Altab Hossain
- School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 610054, China;
- Jehoiada K. Jackson
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; (J.K.J.); (G.U.N.)
- Grace U. Nneji
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; (J.K.J.); (G.U.N.)
- Happy N. Monday
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China;
- Zhiguang Qin
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China; (J.K.J.); (G.U.N.)
36
Li Q, Yuan Y, Song G, Liu Y. Nursing Analysis Based on Medical Imaging Technology before and after Coronary Angiography in Cardiovascular Medicine. Appl Bionics Biomech 2022; 2022:3279068. [PMID: 35465185 PMCID: PMC9033406 DOI: 10.1155/2022/3279068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Revised: 03/19/2022] [Accepted: 03/29/2022] [Indexed: 11/17/2022] Open
Abstract
With the advancement of technology, medical imaging has greatly improved. This article studies nursing before and after coronary angiography in cardiovascular medicine based on medical imaging technology. It proposes a multimodal medical image fusion algorithm based on multiscale decomposition and convolutional sparse representation. The algorithm first decomposes the pre-registered source medical images by NSST, takes the subimages at different scales as training images, and optimizes subdictionaries for each scale; convolutional sparse coding is then applied to the subimages at each scale to obtain their sparse coefficients. For the high-frequency subimage coefficients, a combination of an improved L1 norm and an improved spatial frequency (novel sum-modified spatial frequency, NMSF) is used; for the low-frequency subimages, an improved rule combining the L1 norm and regional energy is used. Finally, the fused image is obtained by applying the inverse NSST to the fused low-frequency and high-frequency subbands. Experimental analysis found that the bifurcation angle has nothing to do with damage to the branch vessels after main-branch stent placement, and that a bifurcation angle greater than 50° is an independent predictor of MACE after stent extrusion for bifurcation lesions. Experimental results show that the proposed method performs well in contrast enhancement, detail extraction, and information retention, improving the quality of the fused image.
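The high-frequency branch of such fusion schemes boils down to a per-coefficient activity comparison between the two source images' subbands. A toy sketch using the classic max-absolute-value rule (a simplification standing in for the paper's improved-L1-norm/spatial-frequency rule):

```python
def fuse_high_freq(subband_a, subband_b):
    """Fuse two high-frequency subbands coefficient-by-coefficient,
    keeping whichever coefficient has the larger magnitude, i.e. the
    stronger local detail at that position."""
    return [[a if abs(a) >= abs(b) else b
             for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(subband_a, subband_b)]

fused = fuse_high_freq([[0.9, -0.1], [0.0, 0.4]],
                       [[-0.2, 0.5], [0.3, -0.3]])
```

In the full pipeline this selection runs on every NSST high-frequency subband; the fused subbands are then inverse-transformed together with the fused low-frequency band to produce the output image.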
Affiliation(s)
- Qin Li
- Department of Cardiovascular Medicine, Lianyungang First People's Hospital, Lianyungang, 222002 Jiangsu, China
- Yangyang Yuan
- Department of Cardiovascular Medicine, Lianyungang First People's Hospital, Lianyungang, 222002 Jiangsu, China
- Guangyu Song
- Department of Cardiovascular Medicine, Lianyungang First People's Hospital, Lianyungang, 222002 Jiangsu, China
- Yonghua Liu
- Department of Cardiovascular Medicine, Lianyungang First People's Hospital, Lianyungang, 222002 Jiangsu, China
37
Baseri Saadi S, Tataei Sarshar N, Sadeghi S, Ranjbarzadeh R, Kooshki Forooshani M, Bendechache M. Investigation of Effectiveness of Shuffled Frog-Leaping Optimizer in Training a Convolution Neural Network. Journal of Healthcare Engineering 2022; 2022:4703682. [PMID: 35368933 PMCID: PMC8967525 DOI: 10.1155/2022/4703682] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/29/2021] [Revised: 02/04/2022] [Accepted: 03/07/2022] [Indexed: 02/08/2023]
Abstract
One of the leading algorithms and architectures in deep learning is the Convolutional Neural Network (CNN). It represents a unique method for image processing, object detection, and classification, and has proven to be an efficient approach in the machine learning and computer vision fields. A CNN is composed of several filters accompanied by nonlinear functions and pooling layers. It enforces limitations on the weights and interconnections of the neural network to create a good structure for processing spatially and temporally distributed data, and it can restrain the number of free parameters of the network through its weight-sharing property. However, training CNNs is challenging. Optimization techniques such as Ant Colony Optimization, Genetic Algorithms, Harmony Search, and Simulated Annealing have recently been employed to optimize CNN weights and biases. This paper employs the well-known nature-inspired Shuffled Frog-Leaping Algorithm (SFLA) for training a classical CNN structure (LeNet-5), which has not been attempted before. The training method is investigated on four different datasets. To verify the study, the results are compared with some of the most famous evolutionary trainers: the Whale Optimization Algorithm (WO), Bacteria Swarm Foraging Optimization (BFSO), and Ant Colony Optimization (ACO). The outcomes demonstrate that the SFLA considerably improves the performance of the original LeNet-5, although it slightly increases training time. The results also demonstrate that the suggested algorithm achieves high accuracy in classification and approximation.
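The SFLA step being used as a trainer here is simple to state: within each memeplex, the worst frog leaps toward the best frog by a random fraction of their difference, clipped to a maximum step. A hypothetical sketch of that update on a flat weight vector (illustrative, not the paper's implementation):

```python
import random

def sfla_leap(worst, best, rng, step_max=2.0):
    """Move the memeplex's worst frog toward its best frog: per
    dimension, step = U(0,1) * (best - worst), clipped to step_max."""
    new_pos = []
    for w, b in zip(worst, best):
        step = rng.random() * (b - w)
        step = max(-step_max, min(step_max, step))
        new_pos.append(w + step)
    return new_pos

rng = random.Random(0)
leapt = sfla_leap([0.0, 1.0, -1.0], [1.0, -1.0, 0.5], rng)
```

If the leap does not improve fitness, SFLA retries toward the global best and, failing that, replaces the frog with a random one; shuffling the memeplexes between iterations spreads the learned positions across the population, which is what makes it usable as a gradient-free trainer for network weights.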
Affiliation(s)
- Soroush Sadeghi
- School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
- Ramin Ranjbarzadeh
- School of Computing, Faculty of Engineering and Computing, Dublin City University, Dublin, Ireland
- Malika Bendechache
- School of Computing, Faculty of Engineering and Computing, Dublin City University, Dublin, Ireland
38
Ayana G, Park J, Choe SW. Patchless Multi-Stage Transfer Learning for Improved Mammographic Breast Mass Classification. Cancers (Basel) 2022; 14:cancers14051280. [PMID: 35267587 PMCID: PMC8909211 DOI: 10.3390/cancers14051280] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Revised: 02/22/2022] [Accepted: 02/24/2022] [Indexed: 02/01/2023] Open
Abstract
Simple Summary In this study, we propose a novel deep-learning method based on multi-stage transfer learning (MSTL) from ImageNet and cancer cell line image pre-trained models to classify mammographic masses as either benign or malignant. The proposed method alleviates the challenge of obtaining large amounts of labeled mammogram training data by utilizing a large number of cancer cell line microscopic images as an intermediate domain of learning between the natural domain (ImageNet) and medical domain (mammography). Moreover, our method does not utilize patch separation (to segment the region of interest before classification), which renders it computationally simple and fast compared to previous studies. The findings of this study are of crucial importance in the early diagnosis of breast cancer in young women with dense breasts because mammography does not provide reliable diagnosis in such cases. Abstract Despite great achievements in classifying mammographic breast-mass images via deep-learning (DL), obtaining large amounts of training data and ensuring generalizations across different datasets with robust and well-optimized algorithms remain a challenge. ImageNet-based transfer learning (TL) and patch classifiers have been utilized to address these challenges. However, researchers have been unable to achieve the desired performance for DL to be used as a standalone tool. In this study, we propose a novel multi-stage TL from ImageNet and cancer cell line image pre-trained models to classify mammographic breast masses as either benign or malignant. We trained our model on three public datasets: Digital Database for Screening Mammography (DDSM), INbreast, and Mammographic Image Analysis Society (MIAS). In addition, a mixed dataset of the images from these three datasets was used to train the model. We obtained an average five-fold cross validation AUC of 1, 0.9994, 0.9993, and 0.9998 for DDSM, INbreast, MIAS, and mixed datasets, respectively. 
Moreover, the observed performance improvement using our method against the patch-based method was statistically significant, with a p-value of 0.0029. Furthermore, our patchless approach performed better than patch- and whole image-based methods, improving test accuracy by 8% (91.41% vs. 99.34%), tested on the INbreast dataset. The proposed method is of significant importance in solving the need for a large training dataset as well as reducing the computational burden in training and implementing the mammography-based deep-learning models for early diagnosis of breast cancer.
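The staged-transfer idea summarized in this abstract (pretrain on a plentiful intermediate domain, then fine-tune the learned weights on a small target domain) can be illustrated in a deliberately toy form. The sketch below uses a one-feature logistic-regression "model" whose stage-1 weights initialize stage-2 training; the datasets, learning rate, and epoch counts are illustrative assumptions, not the authors' setup:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, w=0.0, b=0.0, lr=0.5, epochs=200):
    # Plain stochastic gradient descent on the logistic loss.
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def accuracy(data, w, b):
    return sum((sigmoid(w * x + b) >= 0.5) == (y == 1) for x, y in data) / len(data)

# Stage 1: "source" domain with plentiful labels (decision boundary near x = 0).
source = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]
w1, b1 = train(source)

# Stage 2: tiny "target" domain; training starts from the stage-1 weights
# instead of from scratch, which is the essence of transfer learning.
target = [(-1.5, 0), (1.5, 1)]
w2, b2 = train(target, w=w1, b=b1, epochs=20)
```

The paper's method applies this idea at the scale of deep networks (ImageNet weights, then cancer cell line images, then mammograms); the toy version only shows why reusing weights from a related task can make a small dataset sufficient.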
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea; (G.A.); (J.P.)
- Jinhyung Park
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea; (G.A.); (J.P.)
- Se-woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea; (G.A.); (J.P.)
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Korea
- Correspondence: Tel.: +82-54-478-7781; Fax: +82-54-462-1049
39
Wan Zaki WMD, Abdul Mutalib H, Ramlan LA, Hussain A, Mustapha A. Towards a Connected Mobile Cataract Screening System: A Future Approach. J Imaging 2022; 8:jimaging8020041. [PMID: 35200743 PMCID: PMC8879609 DOI: 10.3390/jimaging8020041] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 01/31/2022] [Accepted: 02/01/2022] [Indexed: 11/26/2022] Open
Abstract
Advances in computing and AI technology have promoted the development of connected health systems, indirectly influencing approaches to cataract treatment. In addition, thanks to the development of methods for cataract detection and grading using different imaging modalities, ophthalmologists can make diagnoses with greater objectivity. This paper aims to review the development and limitations of published methods for cataract detection and grading using different imaging modalities. Over the years, the proposed methods have shown significant improvement and steady progress towards automated cataract detection and grading systems that utilise various imaging modalities, such as optical coherence tomography (OCT), fundus, and slit-lamp images. However, more robust and fully automated cataract detection and grading systems are still needed. In addition, imaging modalities such as fundus, slit-lamp, and OCT images require medical equipment that is expensive and not portable. Therefore, the use of digital images from a smartphone as a future cataract screening tool could be a practical and helpful solution for ophthalmologists, especially in rural areas with limited healthcare facilities.
Affiliation(s)
- Wan Mimi Diyana Wan Zaki
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (UKM), Bangi 43600, Malaysia; (W.M.D.W.Z.); (L.A.R.); (A.H.)
- Haliza Abdul Mutalib
- Optometry and Vision Science Programme, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Jalan Raja Muda Abdul Aziz, Kuala Lumpur 50300, Malaysia
- Correspondence:
- Laily Azyan Ramlan
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (UKM), Bangi 43600, Malaysia; (W.M.D.W.Z.); (L.A.R.); (A.H.)
- Aini Hussain
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia (UKM), Bangi 43600, Malaysia; (W.M.D.W.Z.); (L.A.R.); (A.H.)
- Aouache Mustapha
- Division Telecom, Center for Development of Advanced Technologies (CDTA), Baba Hassen, Algiers 16081, Algeria;
40
Binary imbalanced big data classification based on fuzzy data reduction and classifier fusion. Soft comput 2022. [DOI: 10.1007/s00500-021-06654-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
41
Shah SM, Khan RA, Arif S, Sajid U. Artificial intelligence for breast cancer analysis: Trends & directions. Comput Biol Med 2022; 142:105221. [PMID: 35016100 DOI: 10.1016/j.compbiomed.2022.105221] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2021] [Revised: 01/03/2022] [Accepted: 01/03/2022] [Indexed: 12/18/2022]
Abstract
Breast cancer is one of the leading causes of death among women. Early detection of breast cancer can significantly improve the lives of millions of women across the globe. Given the importance of finding a solution/framework for early detection and diagnosis, many AI researchers have recently focused on automating this task. Other reasons for the surge in research activity in this direction are the advent of robust AI algorithms (deep learning), the availability of hardware that can run/train those robust and complex AI algorithms, and the accessibility of datasets large enough for training AI algorithms. The imaging modalities that researchers have exploited to automate the task of breast cancer detection are mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. This article analyzes these imaging modalities and presents their strengths and limitations. It also lists resources from which their datasets can be accessed for research purposes. This article then summarizes AI- and computer-vision-based state-of-the-art methods proposed in the last decade to detect breast cancer using various imaging modalities. Primarily, we have focused on reviewing frameworks that report results using mammograms, as this is the most widely used breast imaging modality and serves as the first test that medical practitioners usually prescribe for the detection of breast cancer. Another reason for focusing on mammograms is the availability of labelled datasets. Dataset availability is one of the most important aspects of developing AI-based frameworks, as such algorithms are data hungry and the quality of the dataset generally affects their performance. In a nutshell, this research article will act as a primary resource for the research community working in the field of automated breast imaging analysis.
Affiliation(s)
- Shahid Munir Shah
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Rizwan Ahmed Khan
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan.
- Sheeraz Arif
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
- Unaiza Sajid
- Department of Computer Science, Faculty of Information Technology, Salim Habib University, Karachi, Pakistan
42
Laxmisagar HS, Hanumantharaju MC. Detection of Breast Cancer with Lightweight Deep Neural Networks for Histology Image Classification. Crit Rev Biomed Eng 2022; 50:1-19. [PMID: 36374820 DOI: 10.1615/critrevbiomedeng.2022043417] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
Many researchers have developed computer-assisted diagnostic (CAD) methods to diagnose breast cancer using histopathology microscopic images. These techniques help to improve the accuracy of biopsy diagnosis with hematoxylin and eosin-stained images. On the other hand, most CAD systems usually rely on inefficient and time-consuming manual feature extraction methods. Using a deep learning (DL) model with convolutional layers, we present a method to extract the most useful pictorial information for breast cancer classification. Breast biopsy images stained with hematoxylin and eosin can be categorized into four groups, namely benign lesions, normal tissue, carcinoma in situ, and invasive carcinoma. To correctly distinguish these types of breast cancer, it is important to classify histopathological images accurately. The MobileNet architecture is used to obtain high accuracy with low resource utilization. The proposed model is fast, inexpensive, and safe, making it suitable for the detection of breast cancer at an early stage. This lightweight deep neural network can be accelerated using field-programmable gate arrays. The model uses categorical cross-entropy to learn to give the correct class a high probability and the other classes a low probability; this loss is applied in the classification stage of the convolutional neural network (CNN), after the clustering stage, thereby improving the performance of the proposed system. To measure training and validation accuracy, the model was trained on Google Colab for 280 epochs on a GPU with 2496 CUDA cores, 12 GB of GDDR5 VRAM, and 12.6 GB of RAM. Our results demonstrate that a deep CNN with a chi-square test improves the accuracy of histopathological image classification of breast cancer by more than 11% compared with other state-of-the-art methods.
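The categorical cross-entropy loss mentioned in this abstract is a standard quantity, not specific to this paper: it is low when the softmax output assigns high probability to the true class. A minimal sketch, with purely illustrative logits for a four-class problem like the one described (normal / benign / in situ / invasive):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def categorical_cross_entropy(probs, true_idx):
    # Negative log-probability of the true class.
    return -math.log(probs[true_idx])

logits = [0.2, 2.5, 0.1, -1.0]          # illustrative network outputs
probs = softmax(logits)
confident = categorical_cross_entropy(probs, 1)  # true class is the peak
wrong = categorical_cross_entropy(probs, 3)      # true class has low probability
```

Training minimizes this loss, which pushes the correct class's probability up and the others down, exactly as the abstract states.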
Affiliation(s)
- H S Laxmisagar
- Department of Electronics and Communication Engineering, BMS Institute of Technology Management, Bengaluru 560064, India
- M C Hanumantharaju
- Department of Electronics and Communication Engineering, BMS Institute of Technology Management, Bengaluru 560064, India
43
Deep Learning on Histopathology Images for Breast Cancer Classification: A Bibliometric Analysis. Healthcare (Basel) 2021; 10:healthcare10010010. [PMID: 35052174 PMCID: PMC8775465 DOI: 10.3390/healthcare10010010] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 12/07/2021] [Accepted: 12/12/2021] [Indexed: 12/16/2022] Open
Abstract
Medical imaging is gaining significant attention in healthcare, including for breast cancer. Breast cancer is the leading cause of cancer-related death among women worldwide. Currently, histopathology image analysis is the clinical gold standard in cancer diagnosis. However, the manual process of microscopic examination involves laborious work and can be misleading due to human error. Therefore, this study explored the research status and development trends of deep learning for breast cancer image classification using bibliometric analysis. Relevant literature was obtained from the Scopus database between 2014 and 2021. The VOSviewer and Bibliometrix tools were used for analysis through various visualization forms. This study is concerned with the annual publication trends and the co-authorship networks among countries, authors, and scientific journals. The co-occurrence network of the authors' keywords was analyzed for potential future directions of the field. Authors started to contribute publications in 2016, and the research domain has maintained its growth rate since. The United States and China have strong research collaboration ties. Only a few studies use bibliometric analysis in this research area. This study provides a recent review of this fast-growing field to highlight its status and trends using scientific visualization. It is hoped that the findings will assist researchers in identifying and exploring potential emerging areas in the related field.
44
Mridha MF, Hamid MA, Monowar MM, Keya AJ, Ohi AQ, Islam MR, Kim JM. A Comprehensive Survey on Deep-Learning-Based Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:6116. [PMID: 34885225 PMCID: PMC8656730 DOI: 10.3390/cancers13236116] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 11/25/2021] [Accepted: 12/01/2021] [Indexed: 12/11/2022] Open
Abstract
Breast cancer is now the most frequently diagnosed cancer in women, and its incidence is gradually increasing. Optimistically, there is a good chance of recovery from breast cancer if it is identified and treated at an early stage. Therefore, several researchers have established deep-learning-based automated methods for their efficiency and accuracy in predicting the growth of cancer cells from medical imaging modalities. As yet, only a few review studies on breast cancer diagnosis are available that summarize existing work, and these studies were unable to address emerging architectures and modalities in breast cancer diagnosis. This review focuses on the evolving architectures of deep learning for breast cancer detection. In what follows, this survey presents existing deep-learning-based architectures, analyzes the strengths and limitations of the existing studies, examines the datasets used, and reviews image pre-processing techniques. Furthermore, a concrete review of diverse imaging modalities, performance metrics and results, challenges, and research directions for future researchers is presented.
Affiliation(s)
- Muhammad Firoz Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
- Md. Abdul Hamid
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
- Muhammad Mostafa Monowar
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
- Ashfia Jannat Keya
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
- Abu Quwsar Ohi
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh;
- Jong-Myon Kim
- Department of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan 680-749, Korea
45
Connected-UNets: a deep learning architecture for breast mass segmentation. NPJ Breast Cancer 2021; 7:151. [PMID: 34857755 PMCID: PMC8640011 DOI: 10.1038/s41523-021-00358-x] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Accepted: 11/01/2021] [Indexed: 12/19/2022] Open
Abstract
Breast cancer analysis implies that radiologists inspect mammograms to detect suspicious breast lesions and identify mass tumors. Artificial intelligence techniques offer automatic systems for breast mass segmentation to assist radiologists in their diagnosis. With the rapid development of deep learning and its application to medical imaging challenges, UNet and its variants are among the state-of-the-art models for medical image segmentation and have shown promising performance on mammography. In this paper, we propose an architecture, called Connected-UNets, which connects two UNets using additional modified skip connections. We integrate Atrous Spatial Pyramid Pooling (ASPP) in the two standard UNets to emphasize the contextual information within the encoder-decoder network architecture. We also apply the proposed architecture to the Attention UNet (AUNet) and the Residual UNet (ResUNet). We evaluated the proposed architectures on two publicly available datasets, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Experiments were also conducted with synthetic data generated by the cycle-consistent Generative Adversarial Network (CycleGAN) model between two unpaired datasets to augment and enhance the images. Qualitative and quantitative results show that the proposed architecture achieves better automatic mass segmentation, with high Dice scores of 89.52%, 95.28%, and 95.88% and Intersection over Union (IoU) scores of 80.02%, 91.03%, and 92.27% on CBIS-DDSM, INbreast, and the private dataset, respectively.
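The Dice and IoU scores this abstract reports are simple overlap ratios between a predicted mask and a ground-truth mask. A minimal sketch over flat binary masks (the masks below are illustrative, not data from the paper):

```python
def dice_iou(pred, truth):
    # pred, truth: flat binary (0/1) masks of equal length.
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    # Dice = 2|A∩B| / (|A| + |B|); IoU = |A∩B| / |A∪B|.
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0
    return dice, iou

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 0, 1, 1]
d, i = dice_iou(pred, truth)
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers such as this one report both alongside each other: Dice is always at least as large as IoU for the same segmentation.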
46
Zhang Q, Ren X, Wei B. Segmentation of infected region in CT images of COVID-19 patients based on QC-HC U-net. Sci Rep 2021; 11:22854. [PMID: 34819524 PMCID: PMC8613253 DOI: 10.1038/s41598-021-01502-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2021] [Accepted: 10/25/2021] [Indexed: 12/24/2022] Open
Abstract
Since the outbreak of COVID-19 in 2019, the rapid spread of the epidemic has brought huge challenges to medical institutions. If the pathological region in a COVID-19 CT image can be automatically segmented, it will help doctors quickly determine the patient's infection, thereby speeding up the diagnosis process. To automatically segment the infected area, we propose a new network structure, named QC-HC U-Net. First, we combine residual connection and dense connection to form a new connection method and apply it to the encoder and the decoder. Second, we add Hypercolumns in the decoder section. Compared with the benchmark 3D U-Net, the improved network can effectively avoid vanishing gradients while extracting more features. To address the insufficiency of data, resampling and data augmentation methods were used to expand the datasets. We used 63 cases of MSD lung tumor data for training and testing, continuously verifying the training effect of the model, and then selected 20 cases of public COVID-19 data for training and testing. Experimental results showed that in the segmentation of COVID-19, the specificity and sensitivity were 85.3% and 83.6%, respectively, and in the segmentation of MSD lung tumors, the specificity and sensitivity were 81.45% and 80.93%, respectively, without overfitting.
Affiliation(s)
- Qin Zhang
- School of Computer Science and Technology, Qilu University of Technology, Jinan, 250301, China
- Xiaoqiang Ren
- School of Computer Science and Technology, Qilu University of Technology, Jinan, 250301, China.
- Benzheng Wei
- Center for Medical Artificial Intelligence, Shandong University of Traditional Chinese Medicine, Jinan, China.
47
Kumar Y, Gupta S, Singla R, Hu YC. A Systematic Review of Artificial Intelligence Techniques in Cancer Prediction and Diagnosis. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2021; 29:2043-2070. [PMID: 34602811 PMCID: PMC8475374 DOI: 10.1007/s11831-021-09648-w] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/23/2021] [Accepted: 09/11/2021] [Indexed: 05/05/2023]
Abstract
Artificial intelligence has aided in the advancement of healthcare research. The availability of open-source healthcare statistics has prompted researchers to create applications that aid cancer detection and prognosis. Deep learning and machine learning models provide a reliable, rapid, and effective solution to deal with such challenging diseases in these circumstances. PRISMA guidelines were used to select articles published in Web of Science, EBSCO, and EMBASE between 2009 and 2021. In this study, we performed an efficient search and included the research articles that employed AI-based learning approaches for cancer prediction. A total of 185 papers are considered impactful for cancer prediction using conventional machine- and deep-learning-based classification. In addition, the survey deliberates on the work done by different researchers, highlights the limitations of the existing literature, and performs comparisons using various parameters such as prediction rate, accuracy, sensitivity, specificity, Dice score, detection rate, area under the curve, precision, recall, and F1-score. Five investigations were designed, and solutions to them were explored. Although multiple techniques recommended in the literature have achieved great prediction results, cancer mortality has still not been reduced. Thus, more extensive research to deal with the challenges in the area of cancer prediction is required.
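Most of the comparison parameters this review lists (accuracy, sensitivity, specificity, precision, recall, F1-score) derive from the same binary confusion matrix. A minimal sketch of the standard definitions, with illustrative counts rather than values from any surveyed paper:

```python
def metrics(tp, fp, fn, tn):
    # Standard confusion-matrix-derived scores used to compare classifiers.
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)      # recall / true-positive rate
    specificity = tn / (tn + fp)      # true-negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision, "f1": f1}

# Illustrative counts: 80 true positives, 10 false positives,
# 20 false negatives, 90 true negatives.
m = metrics(tp=80, fp=10, fn=20, tn=90)
```

Reporting several of these together, as the surveyed papers do, matters because accuracy alone can look strong on imbalanced data while sensitivity (the clinically critical rate of caught cancers) remains poor.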
Affiliation(s)
- Yogesh Kumar
- Department of Computer Engineering, Indus Institute of Technology & Engineering, Indus University, Rancharda, Via: Shilaj, Ahmedabad, Gujarat 382115 India
- Surbhi Gupta
- School of Computer Science and Engineering, Model Institute of Engineering and Technology, Kot bhalwal, Jammu, J&K 181122 India
- Ruchi Singla
- Department of Research, Innovations, Sponsored Projects and Entrepreneurship, Chandigarh Group of Colleges, Landran, Mohali India
- Yu-Chen Hu
- Department of Computer Science and Information Management, Providence University, Taichung City, Taiwan, ROC
48
Oza P, Sharma P, Patel S, Bruno A. A Bottom-Up Review of Image Analysis Methods for Suspicious Region Detection in Mammograms. J Imaging 2021; 7:190. [PMID: 34564116 PMCID: PMC8466003 DOI: 10.3390/jimaging7090190] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 09/09/2021] [Accepted: 09/14/2021] [Indexed: 11/17/2022] Open
Abstract
Breast cancer is one of the most common causes of death among women all over the world. Early detection of breast cancer plays a critical role in increasing the survival rate. Various imaging modalities, such as mammography, breast MRI, ultrasound and thermography, are used to detect breast cancer. Though there is considerable success with mammography in biomedical imaging, detecting suspicious areas remains a challenge: owing to manual examination and variations in mass shape, size, and other morphological features, mammography accuracy changes with the density of the breast. Furthermore, going through the analysis of many mammograms per day can be a tedious task for radiologists and practitioners. One of the main objectives of biomedical imaging is to provide radiologists and practitioners with tools to help them identify all suspicious regions in a given image. Computer-aided mass detection in mammograms can serve as a second-opinion tool to help radiologists avoid oversight errors. The scientific community has made much progress on this topic, and several approaches have been proposed along the way. Following a bottom-up narrative, this paper surveys different scientific methodologies and techniques to detect suspicious regions in mammograms, spanning from methods based on low-level image features to the most recent novelties in AI-based approaches. Both theoretical and practical grounds are provided across the paper's sections to highlight the pros and cons of the different methodologies. The paper's main scope is to let readers embark on a journey through a fully comprehensive description of techniques, strategies and datasets on the topic.
Affiliation(s)
- Parita Oza
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India; (P.S.); (S.P.)
- Paawan Sharma
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India; (P.S.); (S.P.)
- Samir Patel
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India; (P.S.); (S.P.)
- Alessandro Bruno
- Department of Computing and Informatics, Bournemouth University, Poole, Dorset BH12 5BB, UK
49
Multi-criterion decision making-based multi-channel hierarchical fusion of digital breast tomosynthesis and digital mammography for breast mass discrimination. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107303] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
50
Badawy SM, Mohamed AENA, Hefnawy AA, Zidan HE, GadAllah MT, El-Banby GM. Classification of Breast Ultrasound Images Based on Convolutional Neural Networks - A Comparative Study. 2021 International Telecommunications Conference (ITC-Egypt) 2021. [DOI: 10.1109/itc-egypt52936.2021.9513972] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]