1
Kabir MM, Rahman A, Hasan MN, Mridha MF. Computer vision algorithms in healthcare: Recent advancements and future challenges. Comput Biol Med 2025;185:109531. [PMID: 39675214] [DOI: 10.1016/j.compbiomed.2024.109531]
Abstract
Computer vision has emerged as a promising technology with numerous applications in healthcare. This systematic review provides an overview of advancements and challenges associated with computer vision in healthcare. The review highlights the application areas where computer vision has made significant strides, including medical imaging, surgical assistance, remote patient monitoring, and telehealth. Additionally, it addresses the challenges related to data quality, privacy, model interpretability, and integration with existing healthcare systems. Ethical and legal considerations, such as patient consent and algorithmic bias, are also discussed. The review concludes by identifying future directions and opportunities for research, emphasizing the potential impact of computer vision on healthcare delivery and outcomes. Overall, this systematic review underscores the importance of understanding both the advancements and challenges in computer vision to facilitate its responsible implementation in healthcare.
Affiliation(s)
- Md Mohsin Kabir: School of Innovation, Design and Engineering, Mälardalens University, Västerås, 722 20, Sweden
- Ashifur Rahman: Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Mirpur-2, Dhaka, 1216, Bangladesh
- Md Nahid Hasan: Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, United States
- M F Mridha: Department of Computer Science, American International University-Bangladesh, Dhaka, 1229, Bangladesh
2
Ben Khalifa A, Mili M, Maatouk M, Ben Abdallah A, Abdellali M, Gaied S, Ben Ali A, Lahouel Y, Bedoui MH, Zrig A. Deep Transfer Learning for Classification of Late Gadolinium Enhancement Cardiac MRI Images into Myocardial Infarction, Myocarditis, and Healthy Classes: Comparison with Subjective Visual Evaluation. Diagnostics (Basel) 2025;15:207. [PMID: 39857091] [PMCID: PMC11765457] [DOI: 10.3390/diagnostics15020207]
Abstract
Background/Objectives: To develop a computer-aided diagnosis (CAD) method for the classification of late gadolinium enhancement (LGE) cardiac MRI images into myocardial infarction (MI), myocarditis, and healthy classes using a fine-tuned VGG16 model hybridized with a multi-layer perceptron (MLP) (VGG16-MLP), and to assess our model's performance in comparison to various pre-trained base models and MRI readers. Methods: This study included 361 LGE images for MI, 222 for myocarditis, and 254 for the healthy class. The left ventricle was extracted automatically from the LGE images using a U-net segmentation model. A fine-tuned VGG16 was used for feature extraction. A spatial attention mechanism was implemented as part of the neural network architecture. The MLP architecture was used for classification. Evaluation metrics were calculated on a separate test set. To benchmark the VGG16 model's feature extraction, various pre-trained base models were evaluated: VGG19, DenseNet121, DenseNet201, MobileNet, InceptionV3, and InceptionResNetV2. A Support Vector Machine (SVM) classifier was evaluated and compared to the MLP for the classification task. The performance of the VGG16-MLP model was compared with a subjective visual analysis conducted by two blinded independent readers. Results: The VGG16-MLP model allowed high-performance differentiation between MI, myocarditis, and healthy LGE cardiac MRI images. It outperformed the other tested models with 96% accuracy, 97% precision, 96% sensitivity, and a 96% F1-score. Our model surpassed the accuracy of Reader 1 by 27% and Reader 2 by 17%. Conclusions: Our study demonstrated that the VGG16-MLP model permits accurate classification of MI, myocarditis, and healthy LGE cardiac MRI images and could be considered a reliable computer-aided diagnosis approach, particularly for radiologists with limited experience in cardiovascular imaging.
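The abstract mentions a spatial attention mechanism inserted between the VGG16 feature extractor and the MLP head. The paper's exact module design is not given here, so the following is a minimal plain-Python sketch of the general idea (CBAM-style spatial attention, with the learned convolution of a real module replaced by a simple avg-plus-max pooling gate; that simplification is an assumption for brevity):

```python
import math

def spatial_attention(feature_maps):
    """CBAM-style spatial attention over a C x H x W feature stack.

    feature_maps: list of C channels, each an H x W list of lists.
    Each spatial location is re-weighted by sigmoid(avg + max across
    channels); the learned 7x7 convolution of a real module is omitted."""
    C = len(feature_maps)
    H, W = len(feature_maps[0]), len(feature_maps[0][0])
    attended = [[[0.0] * W for _ in range(H)] for _ in range(C)]
    for i in range(H):
        for j in range(W):
            vals = [feature_maps[c][i][j] for c in range(C)]
            pooled = sum(vals) / C + max(vals)        # avg-pool + max-pool over channels
            weight = 1.0 / (1.0 + math.exp(-pooled))  # sigmoid gate in (0, 1)
            for c in range(C):
                attended[c][i][j] = feature_maps[c][i][j] * weight
    return attended
```

In the actual model the gate would be learned jointly with the VGG16 features rather than fixed as here; the sketch only shows why strongly activated locations are amplified and weak ones suppressed before the MLP classifier.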
Affiliation(s)
- Amani Ben Khalifa: Technology and Medical Imaging Laboratory LR12ES06, Faculty of Medicine of Monastir, University of Monastir, Monastir 5019, Tunisia
- Manel Mili: Technology and Medical Imaging Laboratory LR12ES06, Faculty of Medicine of Monastir, University of Monastir, Monastir 5019, Tunisia; Faculty of Sciences of Monastir, University of Monastir, Monastir 5019, Tunisia
- Mezri Maatouk: LR18-SP08 Department of Radiology, University Hospital of Monastir, Monastir 5019, Tunisia
- Asma Ben Abdallah: Technology and Medical Imaging Laboratory LR12ES06, Faculty of Medicine of Monastir, University of Monastir, Monastir 5019, Tunisia
- Mabrouk Abdellali: LR18-SP08 Department of Radiology, University Hospital of Monastir, Monastir 5019, Tunisia
- Sofiene Gaied: LR18-SP08 Department of Radiology, University Hospital of Monastir, Monastir 5019, Tunisia
- Azza Ben Ali: LR18-SP08 Department of Radiology, University Hospital of Monastir, Monastir 5019, Tunisia
- Yassir Lahouel: LR18-SP08 Department of Radiology, University Hospital of Monastir, Monastir 5019, Tunisia
- Mohamed Hedi Bedoui: Technology and Medical Imaging Laboratory LR12ES06, Faculty of Medicine of Monastir, University of Monastir, Monastir 5019, Tunisia
- Ahmed Zrig: LR18-SP08 Department of Radiology, University Hospital of Monastir, Monastir 5019, Tunisia
3
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024;15:100357. [PMID: 38420608] [PMCID: PMC10900832] [DOI: 10.1016/j.jpi.2023.100357]
Abstract
Computational Pathology (CPath) is an interdisciplinary science that applies computational approaches to the analysis and modeling of medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer, which are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article, we provide a comprehensive review of more than 800 papers to address the challenges faced from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges of such a multidisciplinary science. We overview this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical developments and clinical integration of CPath. For updated information on this survey and access to the original model-cards repository, please refer to GitHub; an updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S. Hosseini: Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh: Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou: Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin: Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji: Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi: Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen: University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras: Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N. Plataniotis: The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
4
Wang G, Jia M, Zhou Q, Xu S, Zhao Y, Wang Q, Tian Z, Shi R, Wang K, Yan T, Chen G, Wang B. Multi-classification of breast cancer pathology images based on a two-stage hybrid network. J Cancer Res Clin Oncol 2024;150:505. [PMID: 39551897] [PMCID: PMC11570553] [DOI: 10.1007/s00432-024-06002-y]
Abstract
BACKGROUND AND OBJECTIVE In current clinical medicine, pathological image diagnosis is the gold standard for cancer diagnosis. After pathologists determine whether breast lesions are malignant or benign, further sub-type classification is often necessary. METHODS For this task, this study designed a multi-classification model for breast cancer pathological images based on a two-stage hybrid network. Due to the limited sample size of breast sub-type data, this study selected the ResNet34 network as the base network and improved it as the first-level convolutional network, using transfer learning to assist network training. To compensate for the convolutional network's lack of long-distance dependencies, the second-level network uses Long Short-Term Memory (LSTM) to capture contextual information in the images for predictive classification. RESULTS For the 8-sub-type breast cancer classification on the BreakHis (40×, 100×, 200×, 400×) dataset, the ensemble model achieved accuracy rates of 93.67%, 97.08%, 98.01%, and 94.73%, respectively. For the 4-sub-type breast cancer classification on the ICIAR2018 (200×) dataset, the ensemble model achieved accuracy, precision, recall, and F1-score rates of 93.75%, 92.5%, 92.5%, and 92.5%, respectively. CONCLUSION The results show that the multi-classification model proposed in this study outperforms other methods in terms of classification performance, and further demonstrate that the proposed RFSAM module is beneficial for improving model performance.
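The two-stage design above, a convolutional first stage followed by an LSTM second stage, hinges on turning a CNN feature map into a sequence the LSTM can scan. Below is a minimal plain-Python sketch of that bridge, paired with a deliberately simplified scalar LSTM step (unit weights, no biases, purely illustrative; the paper's actual ResNet34+LSTM network is far larger):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def feature_map_to_sequence(fmap):
    """Flatten a C x H x W CNN feature map into a sequence of H*W
    C-dimensional vectors, scanned row by row. Each spatial position
    becomes one time step for the recurrent second stage, letting the
    LSTM model context beyond the convolution's receptive field."""
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    return [[fmap[c][i][j] for c in range(C)]
            for i in range(H) for j in range(W)]

def lstm_step(x, h_prev, c_prev):
    """One scalar LSTM step with all weights fixed to 1 and no biases,
    purely to illustrate the gating that carries long-range context."""
    z = x + h_prev
    i, f, o = sigmoid(z), sigmoid(z), sigmoid(z)  # input/forget/output gates
    g = math.tanh(z)                              # candidate cell update
    c = f * c_prev + i * g
    h = o * math.tanh(c)
    return h, c
```

In the real model each gate has its own learned weight matrices and the hidden state is a vector; the sketch only shows the sequence construction and the cell-state recurrence that gives the second stage its memory.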
Affiliation(s)
- Guolan Wang: School of Computer Information Engineering, Shanxi Technology and Business University, Taiyuan, China
- Mengjiu Jia: School of Computer Information Engineering, Shanxi Technology and Business University, Taiyuan, China
- Qichao Zhou: Translational Medicine Research Center, Shanxi Medical University, Taiyuan, Shanxi, People's Republic of China
- Songrui Xu: Translational Medicine Research Center, Shanxi Medical University, Taiyuan, Shanxi, People's Republic of China
- Yadong Zhao: Translational Medicine Research Center, Shanxi Medical University, Taiyuan, Shanxi, People's Republic of China
- Qiaorong Wang: Translational Medicine Research Center, Shanxi Medical University, Taiyuan, Shanxi, People's Republic of China
- Zhi Tian: Second Clinical Medical College, Shanxi Medical University, 382 Wuyi Road, Taiyuan, Shanxi, People's Republic of China
- Ruyi Shi: Department of Cell Biology and Genetics, Shanxi Medical University, Taiyuan, Shanxi, People's Republic of China
- Keke Wang: The First Hospital of Shanxi Medical University, Taiyuan, 030001, China
- Ting Yan: Translational Medicine Research Center, Shanxi Medical University, Taiyuan, Shanxi, People's Republic of China
- Guohui Chen: Translational Medicine Research Center, Shanxi Medical University, Taiyuan, Shanxi, People's Republic of China
- Bin Wang: College of Information and Computer, Taiyuan University of Technology, Taiyuan, Shanxi, People's Republic of China
5
Ma W, Li M, Chu Z, Chen H. Smart Biosensor for Breast Cancer Survival Prediction Based on Multi-View Multi-Way Graph Learning. Sensors (Basel) 2024;24:3289. [PMID: 38894082] [PMCID: PMC11174864] [DOI: 10.3390/s24113289]
Abstract
Biosensors play a crucial role in detecting cancer signals by orchestrating a series of intricate biological and physical transduction processes. Among various cancers, breast cancer stands out due to its genetic underpinnings, which trigger uncontrolled cell proliferation, predominantly impacting women, and resulting in significant mortality rates. The utilization of biosensors in predicting survival time becomes paramount in formulating an optimal treatment strategy. However, conventional biosensors employing traditional machine learning methods encounter challenges in preprocessing features for the learning task. Despite the potential of deep learning techniques to automatically extract useful features, they often struggle to effectively leverage the intricate relationships between features and instances. To address this challenge, our study proposes a novel smart biosensor architecture that integrates a multi-view multi-way graph learning (MVMWGL) approach for predicting breast cancer survival time. This innovative approach enables the assimilation of insights from gene interactions and biosensor similarities. By leveraging real-world data, we conducted comprehensive evaluations, and our experimental results unequivocally demonstrate the superiority of the MVMWGL approach over existing methods.
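The multi-view multi-way graph learning (MVMWGL) model itself is not specified here; as a hedged illustration of the underlying graph-learning primitive it builds on, the sketch below performs one mean-aggregation message-passing step, in which each node (for instance a patient sample or a gene) averages features over its neighborhood:

```python
def message_pass(adj, feats):
    """One mean-aggregation message-passing step on a graph.

    adj: n x n 0/1 adjacency matrix; feats: n x d feature rows.
    Each node's new feature is the mean over itself and its neighbors,
    so information from connected nodes (e.g. interacting genes or
    similar biosensor readings) flows into each representation."""
    n, d = len(adj), len(feats[0])
    out = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j] or j == i]
        out.append([sum(feats[j][k] for j in nbrs) / len(nbrs)
                    for k in range(d)])
    return out
```

Stacking several such steps, with learned weights between them, is the standard graph neural network recipe; the paper's multi-view variant runs this over more than one graph (gene interactions and biosensor similarities) and fuses the results.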
Affiliation(s)
- Wenming Ma: School of Computer and Control Engineering, Yantai University, Yantai 264005, China (M.L., Z.C., and H.C. share this affiliation)
6
Al-Karawi D, Al-Zaidi S, Helael KA, Obeidat N, Mouhsen AM, Ajam T, Alshalabi BA, Salman M, Ahmed MH. A Review of Artificial Intelligence in Breast Imaging. Tomography 2024;10:705-726. [PMID: 38787015] [PMCID: PMC11125819] [DOI: 10.3390/tomography10050055]
Abstract
With the increasing dominance of artificial intelligence (AI) techniques, their prospective applications have extended to various medical fields, including in vitro diagnosis, intelligent rehabilitation, medical imaging, and prognosis. Breast cancer is a common malignancy that critically affects women's physical and mental health. Early breast cancer screening, through mammography, ultrasound, or magnetic resonance imaging (MRI), can substantially improve the prognosis for breast cancer patients. AI applications have shown excellent performance in various image recognition tasks, and their use in breast cancer screening has been explored in numerous studies. This paper introduces relevant AI techniques and their applications in medical imaging of the breast (mammography and ultrasound), specifically in identifying, segmenting, and classifying lesions; assessing breast cancer risk; and improving image quality. Focusing on medical imaging for breast cancer, this paper also reviews the related challenges and prospects for AI.
Affiliation(s)
- Dhurgham Al-Karawi: Medical Analytica Ltd., 26a Castle Park Industrial Park, Flint CH6 5XA, UK
- Shakir Al-Zaidi: Medical Analytica Ltd., 26a Castle Park Industrial Park, Flint CH6 5XA, UK
- Khaled Ahmad Helael: Royal Medical Services, King Hussein Medical Hospital, King Abdullah II Ben Al-Hussein Street, Amman 11855, Jordan
- Naser Obeidat: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Abdulmajeed Mounzer Mouhsen: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Tarek Ajam: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Bashar A. Alshalabi: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Mohamed Salman: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Mohammed H. Ahmed: School of Computing, Coventry University, 3 Gulson Road, Coventry CV1 5FB, UK
7
Baroni GL, Rasotto L, Roitero K, Tulisso A, Di Loreto C, Della Mea V. Optimizing Vision Transformers for Histopathology: Pretraining and Normalization in Breast Cancer Classification. J Imaging 2024;10:108. [PMID: 38786562] [PMCID: PMC11121856] [DOI: 10.3390/jimaging10050108]
Abstract
This paper introduces a self-attention Vision Transformer model specifically developed for classifying breast cancer in histology images. We examine various training strategies and configurations, including pretraining, dimension resizing, data augmentation and color normalization strategies, patch overlap, and patch size configurations, in order to evaluate their impact on the effectiveness of the histology image classification. Additionally, we provide evidence for the increase in effectiveness gathered through geometric and color data augmentation techniques. We primarily utilize the BACH dataset to train and validate our methods and models, but we also test them on two additional datasets, BRACS and AIDPATH, to verify their generalization capabilities. Our model, developed from a transformer pretrained on ImageNet, achieves an accuracy rate of 0.91 on the BACH dataset, 0.74 on the BRACS dataset, and 0.92 on the AIDPATH dataset. Using a model based on the prostate small and prostate medium HistoEncoder models, we achieve accuracy rates of 0.89 and 0.86, respectively. Our results suggest that pretraining on large-scale general datasets like ImageNet is advantageous. We also show the potential benefits of using domain-specific pretraining datasets, such as extensive histopathological image collections as in HistoEncoder, though not yet with clear advantages.
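The first step of any Vision Transformer pipeline like the one above is splitting each histology image into fixed-size patches that become input tokens. Below is a minimal plain-Python sketch of that patchification, under simplifying assumptions (grayscale input, non-overlapping patches; the paper also studies overlapping patches, color normalization, and resized inputs):

```python
def image_to_patches(img, patch):
    """Split an H x W image (list of rows) into flattened, non-overlapping
    patch x patch tiles in row-major order. Each flattened tile is what a
    Vision Transformer then linearly embeds into one input token."""
    H, W = len(img), len(img[0])
    patches = []
    for top in range(0, H - patch + 1, patch):
        for left in range(0, W - patch + 1, patch):
            patches.append([img[top + di][left + dj]
                            for di in range(patch)
                            for dj in range(patch)])
    return patches
```

Patch size and overlap are exactly the configuration knobs the paper sweeps: smaller or overlapping patches yield more tokens and finer spatial detail at a higher compute cost.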
Affiliation(s)
- Giulia Lucrezia Baroni: Department of Mathematics, Computer Science and Physics, University of Udine, 33100 Udine, Italy
- Laura Rasotto: Department of Mathematics, Computer Science and Physics, University of Udine, 33100 Udine, Italy
- Kevin Roitero: Department of Mathematics, Computer Science and Physics, University of Udine, 33100 Udine, Italy
- Angelica Tulisso: Istituto di Anatomia Patologica, Azienda Sanitaria Universitaria Friuli Centrale, 33100 Udine, Italy
- Carla Di Loreto: Istituto di Anatomia Patologica, Azienda Sanitaria Universitaria Friuli Centrale, 33100 Udine, Italy
- Vincenzo Della Mea: Department of Mathematics, Computer Science and Physics, University of Udine, 33100 Udine, Italy
8
Lakshmi Priya CV, Biju VG, Vinod BR, Ramachandran S. Deep learning approaches for breast cancer detection in histopathology images: A review. Cancer Biomark 2024;40:1-25. [PMID: 38517775] [PMCID: PMC11191493] [DOI: 10.3233/cbm-230251]
Abstract
BACKGROUND Breast cancer is one of the leading causes of death in women worldwide. Histopathology analysis of breast tissue is an essential tool for diagnosing and staging breast cancer. In recent years, there has been a significant increase in research exploring the use of deep-learning approaches for breast cancer detection from histopathology images. OBJECTIVE To provide an overview of the current state-of-the-art technologies in automated breast cancer detection in histopathology images using deep learning techniques. METHODS This review focuses on the use of deep learning algorithms for the detection and classification of breast cancer from histopathology images. We provide an overview of publicly available histopathology image datasets for breast cancer detection. We also highlight the strengths and weaknesses of these architectures and their performance on different histopathology image datasets. Finally, we discuss the challenges associated with using deep learning techniques for breast cancer detection, including the need for large and diverse datasets and the interpretability of deep learning models. RESULTS Deep learning techniques have shown great promise in accurately detecting and classifying breast cancer from histopathology images. Although the accuracy levels vary depending on the specific dataset, image pre-processing techniques, and deep learning architecture used, these results highlight the potential of deep learning algorithms in improving the accuracy and efficiency of breast cancer detection from histopathology images. CONCLUSION This review has presented a thorough account of the current state-of-the-art techniques for detecting breast cancer using histopathology images. The integration of machine learning and deep learning algorithms has demonstrated promising results in accurately identifying breast cancer from histopathology images. The insights gathered from this review can act as a valuable reference for researchers in this field who are developing diagnostic strategies using histopathology images. Overall, the objective of this review is to spark interest among scholars in this complex field and acquaint them with cutting-edge technologies in breast cancer detection using histopathology images.
Affiliation(s)
- Lakshmi Priya C V: Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Biju V G: Department of Electronics and Communication Engineering, College of Engineering Munnar, Kerala, India
- Vinod B R: Department of Electronics and Communication Engineering, College of Engineering Trivandrum, Kerala, India
- Sivakumar Ramachandran: Department of Electronics and Communication Engineering, Government Engineering College Wayanad, Kerala, India
9
Zaki M, Elallam O, Jami O, EL Ghoubali D, Jhilal F, Alidrissi N, Ghazal H, Habib N, Abbad F, Benmoussa A, Bakkali F. Advancing Tumor Cell Classification and Segmentation in Ki-67 Images: A Systematic Review of Deep Learning Approaches. Lecture Notes in Networks and Systems 2024:94-112. [DOI: 10.1007/978-3-031-52385-4_9]
10
Cen M, Li X, Guo B, Jonnagaddala J, Zhang H, Xu XS. A Novel and Efficient Digital Pathology Classifier for Predicting Cancer Biomarkers Using Sequencer Architecture. Am J Pathol 2023;193:2122-2132. [PMID: 37775043] [DOI: 10.1016/j.ajpath.2023.09.006]
Abstract
In digital pathology tasks, transformers have achieved state-of-the-art results, surpassing convolutional neural networks (CNNs). However, transformers are usually complex and resource intensive. This study developed a novel and efficient digital pathology classifier called DPSeq to predict cancer biomarkers through fine-tuning a sequencer architecture integrating horizontal and vertical bidirectional long short-term memory networks. Using hematoxylin and eosin-stained histopathologic images of colorectal cancer from two international data sets (The Cancer Genome Atlas and Molecular and Cellular Oncology), the predictive performance of DPSeq was evaluated in a series of experiments. DPSeq demonstrated exceptional performance for predicting key biomarkers in colorectal cancer (microsatellite instability status, hypermutation, CpG island methylator phenotype status, BRAF mutation, TP53 mutation, and chromosomal instability), outperforming most published state-of-the-art classifiers in a within-cohort internal validation and a cross-cohort external validation. In addition, under the same experimental conditions, using the same set of training and testing data sets, DPSeq surpassed four CNN models (ResNet18, ResNet50, MobileNetV2, and EfficientNet) and two transformer models (Vision Transformer and Swin Transformer), achieving the highest area under the receiver operating characteristic curve and area under the precision-recall curve values in predicting microsatellite instability status, BRAF mutation, and CpG island methylator phenotype status. Furthermore, DPSeq required less time for both training and prediction because of its simple architecture. Therefore, DPSeq appears to be the preferred choice over transformer and CNN models for predicting cancer biomarkers.
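DPSeq's sequencer blocks replace self-attention with horizontal and vertical bidirectional LSTMs over the patch grid. The sketch below illustrates only the bidirectional-scan idea on a single row, using running means in place of learned LSTM gates (an intentional simplification, not the paper's implementation); a vertical pass would apply the same scan over the columns of the transposed grid:

```python
def bidirectional_scan(row):
    """Forward and backward running means over one row of patch features.

    Pairing both directions gives every position context from its left
    and right, the role played by the horizontal BiLSTM in a sequencer
    block (which uses learned LSTM gates rather than plain means)."""
    n = len(row)
    fwd, bwd = [0.0] * n, [0.0] * n
    acc = 0.0
    for i in range(n):                # left-to-right pass
        acc += row[i]
        fwd[i] = acc / (i + 1)
    acc = 0.0
    for i in range(n - 1, -1, -1):    # right-to-left pass
        acc += row[i]
        bwd[i] = acc / (n - i)
    return list(zip(fwd, bwd))
```

Because these scans are linear in sequence length, the architecture avoids the quadratic cost of self-attention, which is consistent with the shorter training and prediction times the abstract reports.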
Affiliation(s)
- Min Cen: School of Data Science, University of Science and Technology of China, Hefei, China
- Xingyu Li: Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, China
- Bangwei Guo: School of Data Science, University of Science and Technology of China, Hefei, China
- Jitendra Jonnagaddala: School of Population Health, University of New South Wales, Sydney, New South Wales, Australia
- Hong Zhang: Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, China
- Xu Steven Xu: Clinical Pharmacology and Quantitative Science, Genmab Inc., Princeton, New Jersey
11
Li M, Jiang Y, Zhang Y, Zhu H. Medical image analysis using deep learning algorithms. Front Public Health 2023;11:1273253. [PMID: 38026291] [PMCID: PMC10662291] [DOI: 10.3389/fpubh.2023.1273253]
Abstract
In the field of medical image analysis within deep learning (DL), the importance of employing advanced DL techniques cannot be overstated. DL has achieved impressive results in various areas, making it particularly noteworthy for medical image analysis in healthcare. The integration of DL with medical image analysis enables real-time analysis of vast and intricate datasets, yielding insights that significantly enhance healthcare outcomes and operational efficiency in the industry. This extensive review of existing literature conducts a thorough examination of the most recent deep learning (DL) approaches designed to address the difficulties faced in medical healthcare, particularly focusing on the use of deep learning algorithms in medical image analysis. Falling all the investigated papers into five different categories in terms of their techniques, we have assessed them according to some critical parameters. Through a systematic categorization of state-of-the-art DL techniques, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Long Short-term Memory (LSTM) models, and hybrid models, this study explores their underlying principles, advantages, limitations, methodologies, simulation environments, and datasets. Based on our results, Python was the most frequent programming language used for implementing the proposed methods in the investigated papers. Notably, the majority of the scrutinized papers were published in 2021, underscoring the contemporaneous nature of the research. Moreover, this review accentuates the forefront advancements in DL techniques and their practical applications within the realm of medical image analysis, while simultaneously addressing the challenges that hinder the widespread implementation of DL in image analysis within the medical healthcare domains. 
These insights provide a strong impetus for future studies aimed at advancing image analysis in medical healthcare research. The evaluation metrics employed across the reviewed articles span a broad spectrum, including accuracy, sensitivity, specificity, F-score, robustness, computational complexity, and generalizability.
Affiliation(s)
- Mengfang Li
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Yuanyuan Jiang
- Department of Cardiovascular Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Yanzhou Zhang
- Department of Cardiovascular Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
- Haisheng Zhu
- Department of Cardiovascular Medicine, Wencheng People’s Hospital, Wencheng, China
12
Liang C, Li X, Qin Y, Li M, Ma Y, Wang R, Xu X, Yu J, Lv S, Luo H. Effective automatic detection of anterior cruciate ligament injury using convolutional neural network with two attention mechanism modules. BMC Med Imaging 2023; 23:120. [PMID: 37697236 PMCID: PMC10494428 DOI: 10.1186/s12880-023-01091-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Accepted: 08/30/2023] [Indexed: 09/13/2023] Open
Abstract
BACKGROUND To develop a fully automated CNN-based detection system for ACL injury on magnetic resonance imaging (MRI), and to explore the feasibility of CNNs for detecting ACL injury on MRI images. METHODS The study included 313 patients aged 16-65 years; the raw data comprised 368 MRI series with injured ACLs and 100 with intact ACLs. After augmentation by flipping, rotation, scaling, and other methods, the final dataset contained 630 series: 355 with injured and 275 with intact ACLs. Using the proposed CNN model with two attention mechanism modules, the datasets were trained and tested with fivefold cross-validation. RESULTS The proposed CNN model achieved an accuracy, precision, sensitivity, specificity, and F1 score of 0.8063, 0.7741, 0.9268, 0.6509, and 0.8436, respectively. The average accuracy across the fivefold cross-validation was 0.8064, and the average area under the curve (AUC) for detecting injured ACLs was 0.8886. CONCLUSION We propose an effective, automatic CNN model to detect ACL injury on MRI of human knees. The model can help clinicians diagnose ACL injury, improving diagnostic efficiency and reducing misdiagnosis and missed diagnosis.
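The five metrics reported above follow directly from confusion-matrix counts; a minimal sketch in plain Python (the counts below are invented for illustration, not the paper's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Compute common binary-classification metrics from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    precision   = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # recall / true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, precision, sensitivity, specificity, f1

# Example with made-up counts, not the paper's data:
acc, prec, sens, spec, f1 = binary_metrics(tp=76, fp=22, tn=69, fn=6)
print(round(acc, 3), round(f1, 3))
```

Note that a high sensitivity with a lower specificity, as reported in the abstract, means the model rarely misses an injured ACL but flags some intact ones.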
Affiliation(s)
- Chen Liang
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Yong Qin
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Yingkai Ma
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Ren Wang
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Xiangning Xu
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Jinping Yu
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Songcen Lv
- Department of Minimally Invasive Surgery and Sports Medicine, The 2nd Affiliated Hospital of Harbin Medical University, Harbin, 150001, China
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
13
A Novel Framework of Manifold Learning Cascade-Clustering for the Informative Frame Selection. Diagnostics (Basel) 2023; 13:diagnostics13061151. [PMID: 36980459 PMCID: PMC10047422 DOI: 10.3390/diagnostics13061151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 03/05/2023] [Accepted: 03/10/2023] [Indexed: 03/19/2023] Open
Abstract
Narrow band imaging is an established non-invasive tool for the early detection of laryngeal cancer in surveillance examinations. Many frames produced during the examination are uninformative, being blurred, dominated by specular reflections, or underexposed, and manually inspecting frames for informativeness costs physicians considerable time. Removing uninformative frames is therefore vital to improve detection accuracy and speed up computer-aided diagnosis. This issue is commonly addressed by a classifier trained on task-specific categories of uninformative frames, but the definition of those categories is ambiguous, and tedious labeling still cannot be avoided. Here, we show that a novel unsupervised scheme is comparable to the current benchmarks on the NBI-InfFrames dataset. We extract feature embeddings using a vanilla neural network (VGG16) and apply the dimensionality reduction method UMAP, which separates the embeddings in a lower-dimensional space. Together with the proposed automatic cluster labeling algorithm and a cost function tuned by Bayesian optimization, the method coupled with UMAP achieves state-of-the-art performance, outperforming the baseline by 12 percentage points. The overall median recall of the proposed method, 96%, is currently the highest. Our results demonstrate the effectiveness of the proposed scheme and its robustness in detecting informative frames, and suggest that patterns embedded in the data can support flexible algorithms that do not require manual labeling.
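The unsupervised pipeline above (deep-feature embedding, dimensionality reduction, clustering) can be illustrated with a toy stand-in: a hand-rolled k-means takes the place of the paper's UMAP-plus-Bayesian-optimized clustering, and the 2-D "embeddings" below are invented, not VGG16 outputs:

```python
from statistics import mean

def kmeans(points, k, iters=20):
    """Minimal k-means: groups frame embeddings with no manual labels."""
    # Deterministic seeding: pick k points spread across the input.
    centers = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster empties out
                centers[i] = tuple(mean(dim) for dim in zip(*cl))
    return clusters

# Toy 2-D "embeddings": informative frames near the origin,
# blurred/underexposed frames far away (values are invented).
informative = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (0.1, 0.1)]
uninformative = [(5.0, 5.0), (5.1, 5.2), (4.9, 5.0), (5.2, 4.8)]
clusters = kmeans(informative + uninformative, k=2)
print([len(c) for c in clusters])
```

In the paper's setting, a cluster-labeling step would then decide which cluster holds the informative frames; here the two toy groups separate cleanly.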
14
Zhu Z, Wang SH, Zhang YD. A Survey of Convolutional Neural Network in Breast Cancer. COMPUTER MODELING IN ENGINEERING & SCIENCES : CMES 2023; 136:2127-2172. [PMID: 37152661 PMCID: PMC7614504 DOI: 10.32604/cmes.2023.025484] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/15/2022] [Accepted: 10/28/2022] [Indexed: 05/09/2023]
Abstract
Problems Cancer is one of the most feared diseases worldwide, a major obstacle to improving life expectancy, and one of the biggest causes of death before the age of 70 in 112 countries. Among all cancers, breast cancer is the most common cancer in women, and the data show it has become one of the most common cancers overall. Aims Many clinical trials have shown that diagnosing breast cancer at an early stage gives patients more treatment options and improves treatment effectiveness and survival. Accordingly, many diagnostic methods for breast cancer exist, such as computer-aided diagnosis (CAD). Methods We present a comprehensive review of breast cancer diagnosis based on the convolutional neural network (CNN), drawing on a large body of recent papers. First, we introduce several imaging modalities. The structure of the CNN is given in the second part. We then introduce some public breast cancer datasets and divide the diagnosis of breast cancer into three tasks: 1. classification; 2. detection; 3. segmentation. Conclusion Although CNN-based diagnosis has achieved great success, limitations remain. (i) Good datasets are scarce: a good public breast cancer dataset must address many aspects, such as professional medical knowledge, privacy, funding, and dataset size. (ii) When the dataset is very large, CNN-based models require substantial computation and time to complete the diagnosis. (iii) Small datasets easily cause overfitting.
Affiliation(s)
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
15
Bakrania A, Joshi N, Zhao X, Zheng G, Bhat M. Artificial intelligence in liver cancers: Decoding the impact of machine learning models in clinical diagnosis of primary liver cancers and liver cancer metastases. Pharmacol Res 2023; 189:106706. [PMID: 36813095 DOI: 10.1016/j.phrs.2023.106706] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/14/2023] [Revised: 02/17/2023] [Accepted: 02/19/2023] [Indexed: 02/22/2023]
Abstract
Liver cancers are the fourth leading cause of cancer-related mortality worldwide. In the past decade, breakthroughs in the field of artificial intelligence (AI) have inspired development of algorithms in the cancer setting. A growing body of recent studies have evaluated machine learning (ML) and deep learning (DL) algorithms for pre-screening, diagnosis and management of liver cancer patients through diagnostic image analysis, biomarker discovery and predicting personalized clinical outcomes. Despite the promise of these early AI tools, there is a significant need to explain the 'black box' of AI and work towards deployment to enable ultimate clinical translatability. Certain emerging fields such as RNA nanomedicine for targeted liver cancer therapy may also benefit from application of AI, specifically in nano-formulation research and development given that they are still largely reliant on lengthy trial-and-error experiments. In this paper, we put forward the current landscape of AI in liver cancers along with the challenges of AI in liver cancer diagnosis and management. Finally, we have discussed the future perspectives of AI application in liver cancer and how a multidisciplinary approach using AI in nanomedicine could accelerate the transition of personalized liver cancer medicine from bench side to the clinic.
Affiliation(s)
- Anita Bakrania
- Toronto General Hospital Research Institute, Toronto, ON, Canada; Ajmera Transplant Program, University Health Network, Toronto, ON, Canada; Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada.
- Xun Zhao
- Toronto General Hospital Research Institute, Toronto, ON, Canada; Ajmera Transplant Program, University Health Network, Toronto, ON, Canada
- Gang Zheng
- Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Mamatha Bhat
- Toronto General Hospital Research Institute, Toronto, ON, Canada; Ajmera Transplant Program, University Health Network, Toronto, ON, Canada; Division of Gastroenterology, Department of Medicine, University Health Network and University of Toronto, Toronto, ON, Canada; Department of Medical Sciences, Toronto, ON, Canada.
16
Li J, Shi J, Chen J, Du Z, Huang L. Self-attention random forest for breast cancer image classification. Front Oncol 2023; 13:1043463. [PMID: 36814814 PMCID: PMC9939756 DOI: 10.3389/fonc.2023.1043463] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Accepted: 01/09/2023] [Indexed: 02/08/2023] Open
Abstract
Introduction Early screening and diagnosis of breast cancer can detect hidden disease in time and effectively improve patients' survival rate, so accurate classification of breast cancer images is key to auxiliary diagnosis. Methods In this paper, after extracting multi-scale fusion features of breast cancer images using a pyramid gray-level co-occurrence matrix, we present a Self-Attention Random Forest (SARF) model as a classifier that explains the importance of the fusion features and adaptively refines them, thereby improving classification accuracy. In addition, we use the GridSearchCV technique to optimize the model's hyperparameters, avoiding the limitations of manually selected parameters. Results To demonstrate the effectiveness of our method, we validate it on the BreaKHis breast cancer histopathology dataset. The proposed method achieves an average accuracy of 92.96% and a micro-average AUC of 0.9588 for eight-class classification, and an average accuracy of 97.16% and an AUC of 0.9713 for binary classification on the BreaKHis dataset. Discussion To verify the universality of the proposed model, we also conduct experiments on the MIAS dataset, achieving an average classification accuracy of 98.79%. The experimental results demonstrate that the proposed method outperforms other state-of-the-art methods. Furthermore, we analyze the influence of different types of features on the proposed model, providing a theoretical basis for its further optimization.
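Grid search, as used above via scikit-learn's GridSearchCV, exhaustively scores every combination in a parameter grid; a minimal stdlib sketch of the same idea (the grid and scoring function below are hypothetical stand-ins, not the paper's cross-validated accuracy):

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustively score every hyperparameter combination and keep the best."""
    best_score, best_params = float("-inf"), None
    keys = sorted(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        s = score_fn(params)
        if s > best_score:
            best_score, best_params = s, params
    return best_params, best_score

# Hypothetical grid for a random-forest-style model; the scoring
# function is a toy stand-in for cross-validated accuracy.
grid = {"n_estimators": [50, 100, 200], "max_depth": [4, 8]}

def toy_score(p):
    return p["n_estimators"] / 200 - abs(p["max_depth"] - 8) / 10

best, score = grid_search(grid, toy_score)
print(best)
```

In practice GridSearchCV adds cross-validation inside `score_fn`, so each candidate is scored on held-out folds rather than a fixed formula.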
17
Nasser M, Yusof UK. Deep Learning Based Methods for Breast Cancer Diagnosis: A Systematic Review and Future Direction. Diagnostics (Basel) 2023; 13:diagnostics13010161. [PMID: 36611453 PMCID: PMC9818155 DOI: 10.3390/diagnostics13010161] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Revised: 12/19/2022] [Accepted: 12/19/2022] [Indexed: 01/06/2023] Open
Abstract
Breast cancer is one of the most serious conditions affecting women, and a definitive cure has not yet been discovered. With the advent of artificial intelligence (AI), deep learning techniques have recently been used effectively in breast cancer detection, facilitating early diagnosis and thereby increasing patients' chances of survival. Compared to classical machine learning techniques, deep learning requires less human intervention for similar feature extraction. This study presents a systematic literature review of deep learning-based methods for breast cancer detection that can guide practitioners and researchers in understanding the challenges and new trends in the field. In particular, different deep learning-based methods for breast cancer detection are investigated, focusing on genomics and histopathological imaging data. The study adopts the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), which offers a detailed analysis and synthesis of the published articles. Several studies were searched and gathered, and after eligibility screening and quality evaluation, 98 articles were identified. The results of the review indicate that the Convolutional Neural Network (CNN) is the most accurate and extensively used model for breast cancer detection, and accuracy metrics are the most popular method for performance evaluation. Moreover, the datasets utilized for breast cancer detection and the evaluation metrics are also studied. Finally, the challenges and future research directions in deep learning-based breast cancer detection are investigated to help researchers and practitioners acquire in-depth knowledge of and insight into the area.
18
Sowjanya AM, Mrudula O. Effective treatment of imbalanced datasets in health care using modified SMOTE coupled with stacked deep learning algorithms. APPLIED NANOSCIENCE 2023; 13:1829-1840. [PMID: 35132368 PMCID: PMC8811587 DOI: 10.1007/s13204-021-02063-4] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Accepted: 08/28/2021] [Indexed: 12/03/2022]
Abstract
Health care is one of the prominent applications of predictive analytics, enabling more accurate predictions through proper analysis of cumulative datasets. Often the datasets are quite imbalanced, and sampling techniques like the Synthetic Minority Oversampling Technique (SMOTE) give only moderate accuracy in such cases. To overcome this problem, a two-step approach has been proposed. In the first step, SMOTE is modified to reduce the class imbalance via Distance-based SMOTE (D-SMOTE) and Bi-phasic SMOTE (BP-SMOTE), which are then coupled with selective classifiers for prediction. An increase in accuracy is noted for both BP-SMOTE and D-SMOTE compared to basic SMOTE. In the second step, machine learning, deep learning, and ensemble algorithms were used to develop a stacking ensemble framework, which showed a significant increase in accuracy compared to individual machine learning algorithms like decision trees, naive Bayes, and neural networks, and to ensemble techniques like voting, bagging, and boosting. Two methods combining deep learning with the stacking approach, namely Stacked CNN and Stacked RNN, yielded significantly higher accuracies of 96-97% than the individual algorithms. The Framingham dataset is used for data sampling, the Wisconsin Hospital Breast Cancer study data for Stacked CNN, and a Novel Coronavirus 2019 dataset on forecasting COVID-19 cases for Stacked RNN.
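Vanilla SMOTE, which D-SMOTE and BP-SMOTE modify, synthesizes minority samples by interpolating between a minority point and one of its nearest minority-class neighbors; a minimal sketch with invented toy points (not the paper's data):

```python
import random

def smote(minority, n_new, k=2, seed=42):
    """Basic SMOTE: create synthetic minority samples by interpolating
    between a random minority point and one of its k nearest neighbors."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbors of x (excluding x itself)
        neighbors = sorted((p for p in minority if p != x),
                           key=lambda p: sum((a - b) ** 2
                                             for a, b in zip(x, p)))[:k]
        nb = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Invented 2-D minority-class points:
minority = [(1.0, 2.0), (1.2, 1.9), (0.9, 2.2), (1.1, 2.1)]
new_points = smote(minority, n_new=3)
print(len(new_points))
```

Because each synthetic point lies on a segment between two real minority points, the oversampled class stays inside the original feature region; the D-SMOTE and BP-SMOTE variants adjust how neighbors and interpolation are chosen.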
Affiliation(s)
- A. Mary Sowjanya
- Department of CS &amp; SE, Andhra University College of Engineering (A), Visakhapatnam, Andhra Pradesh, India
- Owk Mrudula
- Department of CS &amp; SE, Andhra University College of Engineering (A), Visakhapatnam, Andhra Pradesh, India
19
Zhao Y, Zhang J, Hu D, Qu H, Tian Y, Cui X. Application of Deep Learning in Histopathology Images of Breast Cancer: A Review. MICROMACHINES 2022; 13:2197. [PMID: 36557496 PMCID: PMC9781697 DOI: 10.3390/mi13122197] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2022] [Revised: 12/04/2022] [Accepted: 12/09/2022] [Indexed: 06/17/2023]
Abstract
With the development of artificial intelligence technology and computer hardware functions, deep learning algorithms have become a powerful auxiliary tool for medical image analysis. This study was an attempt to use statistical methods to analyze studies related to the detection, segmentation, and classification of breast cancer in pathological images. After an analysis of 107 articles on the application of deep learning to pathological images of breast cancer, this study is divided into three directions based on the types of results they report: detection, segmentation, and classification. We introduced and analyzed models that performed well in these three directions and summarized the related work from recent years. Based on the results obtained, the significant ability of deep learning in the application of breast cancer pathological images can be recognized. Furthermore, in the classification and detection of pathological images of breast cancer, the accuracy of deep learning algorithms has surpassed that of pathologists in certain circumstances. Our study provides a comprehensive review of the development of breast cancer pathological imaging-related research and provides reliable recommendations for the structure of deep learning network models in different application scenarios.
Affiliation(s)
- Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
- Jie Zhang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Dayu Hu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Hui Qu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Ye Tian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang 110169, China
- Key Laboratory of Data Analytics and Optimization for Smart Industry, Northeastern University, Shenyang 110169, China
20
Ho N, Kim YC. Estimation of Cardiac Short Axis Slice Levels with a Cascaded Deep Convolutional and Recurrent Neural Network Model. Tomography 2022; 8:2749-2760. [PMID: 36412688 PMCID: PMC9680453 DOI: 10.3390/tomography8060229] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 11/04/2022] [Accepted: 11/07/2022] [Indexed: 11/16/2022] Open
Abstract
Automatic identification of short axis slice levels in cardiac magnetic resonance imaging (MRI) is important in efficient and precise diagnosis of cardiac disease based on the geometry of the left ventricle. We developed a combined model of convolutional neural network (CNN) and recurrent neural network (RNN) that takes a series of short axis slices as input and predicts a series of slice levels as output. Each slice image was labeled as one of the following five classes: out-of-apical, apical, mid, basal, and out-of-basal levels. A variety of multi-class classification models were evaluated. When compared with the CNN-alone models, the cascaded CNN-RNN models resulted in higher mean F1-score and accuracy. In our implementation and testing of four different baseline networks with different combinations of RNN modules, MobileNet as the feature extractor cascaded with a two-layer long short-term memory (LSTM) network produced the highest scores in four of the seven evaluation metrics, i.e., five F1-scores, area under the curve (AUC), and accuracy. Our study indicates that the cascaded CNN-RNN models are superior to the CNN-alone models for the classification of short axis slice levels in cardiac cine MR images.
Affiliation(s)
- Namgyu Ho
- Kim Jaechul Graduate School of Artificial Intelligence, KAIST, Seoul 02455, Republic of Korea
- Department of Computer Science and Engineering, Sogang University, Seoul 04107, Republic of Korea
- Yoon-Chul Kim
- Division of Digital Healthcare, College of Software and Digital Healthcare Convergence, Yonsei University, Wonju 26493, Republic of Korea
21
Classification of breast cancer histology images using MSMV-PFENet. Sci Rep 2022; 12:17447. [PMID: 36261463 PMCID: PMC9581896 DOI: 10.1038/s41598-022-22358-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2021] [Accepted: 10/13/2022] [Indexed: 01/12/2023] Open
Abstract
Deep learning has been used extensively in histopathological image classification, but researchers are still exploring new neural network architectures for more effective and efficient cancer diagnosis. Here, we propose the multi-scale, multi-view progressive feature encoding network (MSMV-PFENet) for effective classification. Based on the density of cell nuclei, we selected regions potentially related to carcinogenesis at multiple scales from each view. The progressive feature encoding network then extracted global and local features from these regions, a bidirectional long short-term memory network analyzed the encoding vectors to obtain a category score, and finally the majority voting method integrated the different views to classify the histopathological images. We tested our method on the breast cancer histology dataset from the ICIAR 2018 grand challenge. The proposed MSMV-PFENet achieved accuracies of 93.0% and 94.8% at the patch and image levels, respectively. This method can potentially benefit clinical cancer diagnosis.
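The majority-voting step that fuses the per-view predictions into one image-level label can be sketched in a few lines (class names are illustrative, not from the paper; ties resolve to the first label encountered):

```python
from collections import Counter

def majority_vote(view_predictions):
    """Fuse per-view class predictions into one image-level label."""
    counts = Counter(view_predictions)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical per-view outputs for one image:
views = ["invasive", "in_situ", "invasive", "invasive"]
print(majority_vote(views))
```

A common refinement is to vote with the per-view category scores (summing softmax outputs) instead of hard labels, which avoids most ties.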
22
Object tracking in infrared images using a deep learning model and a target-attention mechanism. COMPLEX INTELL SYST 2022. [DOI: 10.1007/s40747-022-00872-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Abstract
Small object tracking in infrared images is widely used in fields such as video surveillance, infrared guidance, and unmanned aerial vehicle monitoring. Existing small-target detection strategies suffer when the target is submerged in heavily cluttered infrared (IR) maritime images. To overcome this issue, we apply our model to both the original image and a corresponding encoded image, using the local directional number patterns algorithm to encode the original image and capture more distinctive details. Our model thus learns more informative and unique features from the original and encoded images for visual tracking. In this study, we seek the convolutional filters that yield the best possible tracking results: those inactive in the background yet active in the target region. To this end, we investigate an attention mechanism for the feature-extraction framework comprising a scale-sensitive feature generation component and a discriminative feature generation module based on the gradients of the regression and scoring losses. Comprehensive experiments demonstrate that our pipeline obtains results competitive with recently published work.
23
Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:6702380. [PMID: 36124675 PMCID: PMC9677480 DOI: 10.1093/bib/bbac367] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 07/27/2022] [Accepted: 08/05/2022] [Indexed: 12/14/2022] Open
Abstract
In common medical procedures, the time-consuming and expensive nature of obtaining test results plagues doctors and patients. Digital pathology research allows using computational technologies to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more up-to-date and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review investigates using the most popular image data, hematoxylin-eosin stained tissue slide images, to find a strategic solution for the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology has in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lianhe Zhao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences (corresponding author)
- Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zhao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University &amp; Shandong Academy of Medical Sciences (corresponding author)
24
Alzheimer’s Disease Prediction Using Attention Mechanism with Dual-Phase 18F-Florbetaben Images. Nucl Med Mol Imaging 2022; 57:61-72. [PMID: 36998590 PMCID: PMC10043070 DOI: 10.1007/s13139-022-00767-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Revised: 07/04/2022] [Accepted: 08/02/2022] [Indexed: 10/15/2022] Open
Abstract
Introduction
Amyloid-beta (Aβ) imaging plays an important role in the early diagnosis and biomarker research of Alzheimer’s disease (AD), but a single test may yield Aβ-negative AD or Aβ-positive cognitively normal (CN) results. In this study, we aimed to distinguish AD from CN with dual-phase 18F-Florbetaben (FBB) via a deep learning-based attention method and to evaluate the resulting AD positivity scores against those from late-phase FBB, which is currently adopted for AD diagnosis.
Materials and Methods
A total of 264 patients (74 CN and 190 AD), who underwent FBB imaging test and neuropsychological tests, were retrospectively analyzed. Early- and delay-phase FBB images were spatially normalized with an in-house FBB template. The regional standard uptake value ratios were calculated with the cerebellar region as a reference region and used as independent variables that predict the diagnostic label assigned to the raw image.
Results
AD positivity scores estimated from dual-phase FBB showed better accuracy (ACC) and area under the receiver operating characteristic curve (AUROC) for AD detection (ACC: 0.858, AUROC: 0.831) than those from delay-phase FBB (dFBB) imaging (ACC: 0.821, AUROC: 0.794). The AD positivity score estimated from dual-phase FBB (R: −0.5412) also showed a higher correlation with neuropsychological test scores than that from dFBB alone (R: −0.2975). In the relevance analysis, we observed that the LSTM uses different times and regions of early-phase FBB for each disease group in AD detection.
Conclusions
These results show that the aggregated model, combining dual-phase FBB with a long short-term memory network and an attention mechanism, can provide a more accurate AD positivity score, with a closer association with AD, than prediction from single-phase FBB alone.
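The scoring pipeline summarized above (regional SUVRs referenced to the cerebellum, then attention over the dual-phase dynamics) can be sketched in a few lines. This is a minimal illustration, not the authors' code: `regional_suvr` and `attention_pool` are hypothetical names, and the per-frame score driving the attention weights is a stand-in for the learned attention described in the abstract.

```python
import numpy as np

def regional_suvr(uptake, cerebellum_idx):
    """Standard uptake value ratios: each regional uptake divided by
    the cerebellar (reference-region) uptake."""
    uptake = np.asarray(uptake, dtype=float)
    return uptake / uptake[cerebellum_idx]

def attention_pool(frames):
    """Soft attention over time frames: score each frame, softmax the
    scores, and return the weighted average of the frames (a toy
    analogue of attention over early-phase FBB dynamics)."""
    frames = np.asarray(frames, dtype=float)   # (T, R): time x regions
    scores = frames.mean(axis=1)               # toy per-frame score
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ frames                    # (R,) attended features
```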
25
Ukwuoma CC, Hossain MA, Jackson JK, Nneji GU, Monday HN, Qin Z. Multi-Classification of Breast Cancer Lesions in Histopathological Images Using DEEP_Pachi: Multiple Self-Attention Head. Diagnostics (Basel) 2022; 12:1152. [PMID: 35626307 PMCID: PMC9139754 DOI: 10.3390/diagnostics12051152] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 04/23/2022] [Accepted: 04/28/2022] [Indexed: 11/16/2022] Open
Abstract
INTRODUCTION AND BACKGROUND Despite rapid advances in medicine, histological diagnosis is still regarded as the benchmark in cancer diagnosis. However, extracting the image features used to determine cancer severity at various magnifications is difficult, since manual procedures are biased, time consuming, labor intensive, and error-prone. Current state-of-the-art deep learning approaches for breast histopathology image classification take features from entire images (generic features), and are therefore likely to overlook essential image features in favor of unnecessary ones, resulting in incorrect diagnoses of breast histopathology images. METHODS This discrepancy prompted us to develop DEEP_Pachi for classifying breast histopathology images at various magnifications. DEEP_Pachi collects the global and regional features that are essential for effective breast histopathology image classification. The proposed model's backbone is an ensemble of the DenseNet201 and VGG16 architectures: the ensemble extracts global features (generic image information), whereas DEEP_Pachi extracts spatial information (regions of interest). The proposed model was evaluated on two publicly available datasets: BreakHis and the ICIAR 2018 Challenge dataset. RESULTS A detailed evaluation of the proposed model's accuracy, sensitivity, precision, specificity, and F1-score revealed the usefulness of both the backbone and the DEEP_Pachi model for image classification. The suggested technique outperformed state-of-the-art classifiers, achieving an accuracy of 1.0 for the benign class and 0.99 for the malignant class at all magnifications of the BreakHis dataset, and an accuracy of 1.0 on the ICIAR 2018 Challenge dataset.
CONCLUSIONS The findings were robust and suggest that the proposed system can assist experts at large medical institutions, enabling earlier breast cancer diagnosis and a reduction in mortality.
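The "multiple self-attention head" component named in the title can be illustrated with a minimal NumPy sketch of scaled dot-product multi-head self-attention over a sequence of backbone feature vectors. The random projection matrices stand in for learned parameters, and the function name is hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, num_heads, rng=None):
    """Minimal multi-head self-attention over n feature vectors of
    dimension d (e.g. patch embeddings from a CNN backbone). The random
    Q/K/V projections are stand-ins for learned weights."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = x.shape
    assert d % num_heads == 0
    dh = d // num_heads                                # per-head width
    out = np.empty_like(x)
    for h in range(num_heads):
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) / np.sqrt(d)
                      for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        attn = softmax(q @ k.T / np.sqrt(dh))          # (n, n) attention map
        out[:, h * dh:(h + 1) * dh] = attn @ v         # concatenate heads
    return out
```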
Affiliation(s)
- Chiagoziem C. Ukwuoma
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Md Altab Hossain
- School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 610054, China
- Jehoiada K. Jackson
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Grace U. Nneji
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Happy N. Monday
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
- Zhiguang Qin
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 610054, China
26
Bio-Imaging-Based Machine Learning Algorithm for Breast Cancer Detection. Diagnostics (Basel) 2022; 12:diagnostics12051134. [PMID: 35626290 PMCID: PMC9140096 DOI: 10.3390/diagnostics12051134] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Revised: 04/26/2022] [Accepted: 04/27/2022] [Indexed: 01/22/2023] Open
Abstract
Breast cancer is one of the most widespread diseases in women worldwide and causes the second-largest mortality rate in women, especially in European countries. It occurs when malignant, cancerous lumps start to grow in the breast cells. Accurate and early diagnosis can help increase survival rates against this disease, and a computer-aided detection (CAD) system is necessary for radiologists to differentiate between normal and abnormal cell growth. This research consists of two parts. The first part gives a brief overview of the different image modalities, such as ultrasound, histopathology, and mammography, drawing on a wide range of research databases to access various publications. The second part evaluates different machine learning techniques used to estimate breast cancer recurrence rates. The first step is preprocessing, including eliminating missing values, removing data noise, and applying transformations. The dataset is divided as follows: 60% is used for training and the remaining 40% for testing. We focus on minimizing type I errors (the false-positive rate, FPR) and type II errors (the false-negative rate, FNR) to improve accuracy and sensitivity. Our proposed model uses machine learning techniques such as support vector machine (SVM), logistic regression (LR), and K-nearest neighbor (KNN) to achieve better accuracy in breast cancer classification. Furthermore, we attain the highest accuracy of 97.7% with 0.01 FPR, 0.03 FNR, and an area under the ROC curve (AUC) of 0.99. The results show that our proposed model successfully classifies breast tumors while overcoming previous research limitations. Finally, we summarize the paper with future trends and challenges in classification and segmentation for breast cancer detection.
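The type I and type II error rates this study minimizes are plain confusion-matrix quantities. A minimal sketch (hypothetical function name, with label 1 = malignant):

```python
def fpr_fnr(y_true, y_pred):
    """Type I error rate (FPR = FP / (FP + TN)) and type II error rate
    (FNR = FN / (FN + TP)) for binary labels, 1 = malignant."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return fp / (fp + tn), fn / (fn + tp)
```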
27
Luo X, Zhang J, Li Z, Yang R. Diagnosis of ulcerative colitis from endoscopic images based on deep learning. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103443] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
28
Amin J, Sharif M, Fernandes SL, Wang SH, Saba T, Khan AR. Breast microscopic cancer segmentation and classification using unique 4-qubit-quantum model. Microsc Res Tech 2022; 85:1926-1936. [PMID: 35043505 DOI: 10.1002/jemt.24054] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 10/20/2021] [Accepted: 12/02/2021] [Indexed: 12/19/2022]
Abstract
The visual inspection of histopathological samples is the benchmark for detecting breast cancer, but it is a strenuous and complicated process that demands a great deal of the pathologist's time. Deep learning models have shown excellent outcomes in clinical diagnosis and image processing and have advanced various fields, including drug development, frequency simulation, and optimization techniques. However, the resemblance among histopathologic images of breast cancer, and the presence of both healthy and infected tissue in different areas, make detecting and classifying tumors in whole slide images more difficult. In breast cancer, a correct diagnosis is needed for complete care within a limited amount of time, and effective detection can relieve the pathologist's workload and mitigate diagnostic subjectivity. Therefore, this work investigates an improved semantic segmentation model based on the pre-trained Xception and DeepLabv3+ designs. The model was trained on input images with ground-truth masks, with tuned parameters that significantly improve the segmentation of ultrasound breast images into their respective classes, that is, benign/malignant. The segmentation model delivered an accuracy greater than 99%, demonstrating its effectiveness. The segmented images and histopathological breast images are passed to a 4-qubit quantum circuit with a six-layered architecture to detect breast malignancy. The proposed framework achieved remarkable performance compared with currently published methodologies. HIGHLIGHTS: This research proposes a hybrid semantic model using pre-trained Xception and DeepLabv3+ for classifying breast microscopic cancer into benign and malignant classes, with 95% classification accuracy and 99% accuracy for the detection of breast malignancy.
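For readers who want to make the segmentation-evaluation claim concrete, the Dice coefficient is a standard overlap measure for binary masks like those produced here. The paper itself reports accuracy, so this is a companion sketch rather than the authors' metric:

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-8):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|),
    a common figure of merit for lesion segmentation."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    inter = np.logical_and(pred, true).sum()
    return (2.0 * inter + eps) / (pred.sum() + true.sum() + eps)
```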
Affiliation(s)
- Javaria Amin
- Department of Computer Science, University of Wah, Quaid Avenue, Wah Cantt, 4740, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Steven Lawrence Fernandes
- Department of Computer Science, Design and Journalism, Creighton University, Omaha, Nebraska, 68178, USA
- Shui-Hua Wang
- School of Mathematics and Actuarial Science, University of Leicester, Leicester, UK
- Tanzila Saba
- Artificial Intelligence & Data Lab (AIDA) CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
- Amjad Rehman Khan
- Artificial Intelligence & Data Lab (AIDA) CCIS, Prince Sultan University, Riyadh, 11586, Saudi Arabia
29
Zhong Y, Piao Y, Zhang G. Dilated and soft attention-guided convolutional neural network for breast cancer histology images classification. Microsc Res Tech 2021; 85:1248-1257. [PMID: 34859543 DOI: 10.1002/jemt.23991] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Revised: 10/03/2021] [Accepted: 10/18/2021] [Indexed: 01/22/2023]
Abstract
Breast cancer is one of the most common types of cancer in women, and histopathological imaging is considered the gold standard for its diagnosis. However, the great complexity of histopathological images and the considerable workload make this work extremely time-consuming, and the results may be affected by the subjectivity of the pathologist. Therefore, the development of an accurate, automated method for analysis of histopathological images is critical to this field. In this article, we propose a deep learning method guided by the attention mechanism for fast and effective classification of haematoxylin and eosin-stained breast biopsy images. First, this method takes advantage of DenseNet and uses the feature map's information. Second, we introduce dilated convolution to produce a larger receptive field. Finally, spatial attention and channel attention are used to guide the extraction of the most useful visual features. With the use of fivefold cross-validation, the best model obtained an accuracy of 96.47% on the BACH2018 dataset. We also evaluated our method on other datasets, and the experimental results demonstrated that our model has reliable performance. This study indicates that our histopathological image classifier with a soft attention-guided deep learning model for breast cancer shows significantly better results than the latest methods. It has great potential as an effective tool for automatic evaluation of digital histopathological microscopic images for computer-aided diagnosis.
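The dilated-convolution idea introduced above to produce a larger receptive field can be sketched directly: a k x k kernel with dilation d covers an extent of (k-1)d + 1 pixels without adding parameters. A toy "valid" 2D convolution (hypothetical function name, not the paper's implementation):

```python
import numpy as np

def dilated_conv2d(image, kernel, dilation):
    """'Valid' 2D convolution with a dilated kernel: kernel taps are
    spaced `dilation` pixels apart, enlarging the receptive field of a
    k x k kernel to (k - 1) * dilation + 1 pixels."""
    image = np.asarray(image, dtype=float)
    kernel = np.asarray(kernel, dtype=float)
    kh, kw = kernel.shape
    eh = (kh - 1) * dilation + 1                      # effective extent
    ew = (kw - 1) * dilation + 1
    oh = image.shape[0] - eh + 1
    ow = image.shape[1] - ew + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = (patch * kernel).sum()
    return out
```

With dilation 2, a 3 x 3 kernel sees a 5 x 5 neighborhood, which is the receptive-field enlargement the abstract refers to.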
Affiliation(s)
- Yutong Zhong
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, China
- Yan Piao
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, China
- Guohui Zhang
- Pneumoconiosis Diagnosis and Treatment Center, Occupational Preventive and Treatment Hospital in Jilin Province, Changchun, China
30
Wang Y, Zhang W. A Dense RNN for Sequential Four-Chamber View Left Ventricle Wall Segmentation and Cardiac State Estimation. Front Bioeng Biotechnol 2021; 9:696227. [PMID: 34422778 PMCID: PMC8378502 DOI: 10.3389/fbioe.2021.696227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Accepted: 06/21/2021] [Indexed: 12/04/2022] Open
Abstract
The segmentation of the left ventricle (LV) wall in four-chamber view cardiac sequential images is significant for cardiac disease diagnosis and for studying cardiac mechanisms; however, no successful work on sequential four-chamber view LV wall segmentation has been reported, owing to the complex four-chamber structure and the diversity of wall motion. In this article, we propose a dense recurrent neural network (RNN) algorithm to achieve accurate LV wall segmentation in a four-chamber view MRI time sequence. In sequential LV wall processing, not only the sequential accuracy but also the accuracy of each image matters. Thus, we propose a dense RNN that provides compensation for the first long short-term memory (LSTM) cells. Two RNNs are combined in this work: the first provides information for the first image, and the second generates the segmentation result. In this way, the proposed dense RNN improves the accuracy of the first frame and, moreover, improves the effectiveness of information flow between LSTM cells. Obtaining more competent information from the preceding cell greatly improves frame-wise segmentation accuracy. Based on the segmentation result, an algorithm is proposed to estimate cardiac state. This is the first work that deals with cardiac time-sequential LV segmentation and robustly estimates cardiac state. Rather than segmenting each frame separately, utilizing cardiac sequence information is more stable. The proposed method achieves an Intersection over Union (IoU) of 92.13%, outperforming other classical deep learning algorithms.
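The LSTM cell that both RNNs chain can be written out in a few lines. A single-step sketch with the usual i/f/o/g gates; the weight layout and gate ordering are conventional assumptions, and the matrices stand in for learned parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update. W: (4H, D) input weights, U: (4H, H)
    recurrent weights, b: (4H,) bias; gate order i, f, o, g. This is the
    unit a dense RNN chains, with extra connections feeding the first
    cell in the sequence."""
    z = W @ x + U @ h + b
    H = h.shape[0]
    i, f, o = (sigmoid(z[k * H:(k + 1) * H]) for k in range(3))
    g = np.tanh(z[3 * H:])
    c_new = f * c + i * g          # cell state: forget old, add new
    h_new = o * np.tanh(c_new)     # hidden state passed to the next cell
    return h_new, c_new
```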
Affiliation(s)
- Yu Wang
- Research Center for Physical Education Reform and Development, School of Physical Education, Henan University, Kaifeng, China
- Wanjun Zhang
- Henan Key Laboratory of Big Data Analysis and Processing, School of Computer and Information Engineering, Henan University, Kaifeng, China
31
Khan SI, Shahrior A, Karim R, Hasan M, Rahman A. MultiNet: A deep neural network approach for detecting breast cancer through multi-scale feature fusion. JOURNAL OF KING SAUD UNIVERSITY - COMPUTER AND INFORMATION SCIENCES 2021. [DOI: 10.1016/j.jksuci.2021.08.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
32
He Q, Cheng G, Ju H. BCDnet: Parallel heterogeneous eight-class classification model of breast pathology. PLoS One 2021; 16:e0253764. [PMID: 34252112 PMCID: PMC8274904 DOI: 10.1371/journal.pone.0253764] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Accepted: 06/12/2021] [Indexed: 12/24/2022] Open
Abstract
Breast cancer has the highest incidence among malignant tumors in women and seriously endangers women's health. With the help of computer vision technology, automatically classifying pathological tissue images to assist doctors in rapid and accurate diagnosis has important application value. Breast pathological tissue images have complex and diverse characteristics, and medical datasets of such images are small, which makes automatic classification difficult. In recent years, most research has focused on simple binary classification of benign versus malignant, which cannot meet the actual needs of pathological tissue classification. Therefore, based on deep convolutional neural networks, model ensembling, transfer learning, and feature-fusion technology, this paper designs BCDnet, an eight-class breast pathology diagnosis model. A user inputs a patient's breast pathological tissue image, and the model automatically determines the disease (Adenosis, Fibroadenoma, Tubular Adenoma, Phyllodes Tumor, Ductal Carcinoma, Lobular Carcinoma, Mucinous Carcinoma or Papillary Carcinoma). The model uses the VGG16 and ResNet50 convolutional bases in parallel, obtaining breast tissue image features from different fields of view. The information output by the two convolutional bases' fully connected layers is fused and then classified by a SoftMax function. The experiments use the publicly available BreaKHis dataset, in which the number of samples per class is extremely unevenly distributed; compared with binary classification, each of the eight classes also has fewer samples. Therefore, image segmentation is used to expand the dataset and non-repeated random cropping is used to balance it. On both the balanced and the unbalanced dataset, the BCDnet model, the pre-trained ResNet50 model with fine-tuning, and the pre-trained VGG16 model with fine-tuning were compared in multiple experiments. BCDnet performed outstandingly, with a correct recognition rate for the eight-class task above 98%. The results show that the proposed model and the dataset-improvement methods are reasonable and effective.
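The late-fusion step described here (concatenate the two convolutional bases' outputs, then SoftMax over the eight classes) reduces to a few lines. A minimal sketch; `W` and `b` are stand-ins for the learned classification head:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fused_predict(feat_vgg, feat_resnet, W, b):
    """Late fusion in the style described: concatenate the two
    backbones' fully connected outputs, apply a linear classification
    head, and softmax over the eight lesion classes."""
    fused = np.concatenate([feat_vgg, feat_resnet])
    return softmax(W @ fused + b)
```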
Affiliation(s)
- Qingfang He
- Institute of Computer Technology, Beijing Union University, Beijing, China
- Guang Cheng
- Institute of Computer Technology, Beijing Union University, Beijing, China
- Huimin Ju
- Institute of Computer Technology, Beijing Union University, Beijing, China
33
Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images. Sci Rep 2021; 11:10930. [PMID: 34035406 PMCID: PMC8149837 DOI: 10.1038/s41598-021-90428-8] [Citation(s) in RCA: 107] [Impact Index Per Article: 26.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2021] [Accepted: 05/07/2021] [Indexed: 12/15/2022] Open
Abstract
Brain tumor localization and segmentation from magnetic resonance imaging (MRI) are hard and important tasks for several applications in medical analysis. As each brain imaging modality gives unique and key details about each part of the tumor, many recent approaches use the four modalities T1, T1c, T2, and FLAIR. Although many of them obtained promising segmentation results on the BRATS 2018 dataset, they suffer from complex structures that need more time to train and test. So, in this paper, to obtain a flexible and effective brain tumor segmentation system, we first propose a preprocessing approach that works only on a small part of the image rather than the whole image. This method decreases computing time and overcomes overfitting problems in a cascade deep learning model. In the second step, since we are dealing with a smaller part of the brain image in each slice, a simple and efficient Cascade Convolutional Neural Network (C-ConvNet/C-CNN) is proposed. The C-CNN model mines both local and global features along two different routes. Also, to improve brain tumor segmentation accuracy compared with state-of-the-art models, a novel Distance-Wise Attention (DWA) mechanism is introduced. The DWA mechanism considers the effect of the center location of the tumor and of the brain inside the model. Comprehensive experiments on the BRATS 2018 dataset show that the proposed model obtains competitive results: mean whole-tumor, enhancing-tumor, and tumor-core Dice scores of 0.9203, 0.9113 and 0.8726, respectively. Other quantitative and qualitative assessments are presented and discussed.
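The Distance-Wise Attention idea of weighting locations by their distance from the tumor/brain center can be illustrated with a toy Gaussian weight map. This is an analogy to, not a reproduction of, the paper's DWA module; the function name and parameterization are hypothetical:

```python
import numpy as np

def distance_attention_map(shape, center, sigma):
    """Distance-wise weight map: pixels near the assumed tumor/brain
    center get weights near 1, decaying with squared Euclidean distance.
    A toy analogue of injecting center location into the model."""
    ys, xs = np.indices(shape)
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

Multiplying a feature map by such a mask emphasizes responses near the center, which is the intuition the abstract describes.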
34
Liao Q, Zhang Q, Feng X, Huang H, Xu H, Tian B, Liu J, Yu Q, Guo N, Liu Q, Huang B, Ma D, Ai J, Xu S, Li K. Development of deep learning algorithms for predicting blastocyst formation and quality by time-lapse monitoring. Commun Biol 2021; 4:415. [PMID: 33772211 PMCID: PMC7998018 DOI: 10.1038/s42003-021-01937-1] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2020] [Accepted: 02/24/2021] [Indexed: 12/24/2022] Open
Abstract
Approaches to reliably predict the developmental potential of embryos and select suitable embryos for blastocyst culture are needed. The development of time-lapse monitoring (TLM) and artificial intelligence (AI) may help solve this problem. Here, we report deep learning models that can accurately predict blastocyst formation and usable blastocysts from TLM videos of the embryo's first three days. The DenseNet201 network, focal loss, a long short-term memory (LSTM) network and a gradient boosting classifier were the main components; video preparation algorithms and spatial- and temporal-stream models were developed into the ensemble prediction models STEM and STEM+. STEM exhibited 78.2% accuracy and 0.82 AUC in predicting blastocyst formation, and STEM+ achieved 71.9% accuracy and 0.79 AUC in predicting usable blastocysts. We believe the models are beneficial for blastocyst formation prediction and embryo selection in clinical practice, and our modeling methods will provide valuable information for analyzing medical videos with continuous appearance variation. Liao et al. propose a deep learning model to predict blastocyst formation using TLM videos of the first three days of embryogenesis. The authors develop the ensemble prediction models STEM and STEM+, which exhibit 78.2% and 71.9% accuracy at predicting blastocyst formation and usable blastocysts, respectively.
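The focal loss named among the components down-weights easy examples relative to plain cross-entropy. A minimal binary version, following the standard Lin et al. formulation rather than necessarily the exact variant used in this paper:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss, FL = -alpha_t * (1 - p_t)^gamma * log(p_t).
    The (1 - p_t)^gamma factor shrinks the loss of well-classified
    examples, countering class imbalance."""
    p = np.clip(np.asarray(p, dtype=float), 1e-7, 1 - 1e-7)
    y = np.asarray(y, dtype=float)
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))
```

With gamma = 0 and alpha = 1 it reduces to ordinary cross-entropy, which is a convenient sanity check.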
Affiliation(s)
- Qiuyue Liao
- Department of Gynecology and Obstetrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Qi Zhang
- Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Xue Feng
- Department of Gynecology and Obstetrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Haibo Huang
- Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Haohao Xu
- Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Baoyuan Tian
- Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Jihao Liu
- Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Qihui Yu
- Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Na Guo
- Department of Gynecology and Obstetrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Qun Liu
- Department of Gynecology and Obstetrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Bo Huang
- Department of Gynecology and Obstetrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Ding Ma
- Department of Gynecology and Obstetrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Jihui Ai
- Department of Gynecology and Obstetrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Shugong Xu
- Shanghai Institute for Advanced Communication and Data Science, Shanghai University, Shanghai, China
- Kezhen Li
- Department of Gynecology and Obstetrics, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
35
Artificial Intelligence Techniques for Prostate Cancer Detection through Dual-Channel Tissue Feature Engineering. Cancers (Basel) 2021; 13:cancers13071524. [PMID: 33810251 PMCID: PMC8036750 DOI: 10.3390/cancers13071524] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Revised: 03/09/2021] [Accepted: 03/23/2021] [Indexed: 12/27/2022] Open
Abstract
Simple Summary Artificial intelligence techniques were used for the detection of prostate cancer through tissue feature engineering. A radiomic method was used to extract important features from histopathology tissue images to perform binary classification (i.e., benign vs. malignant). This method can identify histological patterns that are invisible to the human eye, which helps researchers predict and detect prostate cancer. We used different performance metrics to evaluate the classification results. In the future, it is expected that methods like radiomics will contribute consistently to analyzing histopathology tissue images and differentiating cancerous from noncancerous tumors. Abstract The optimal diagnostic and treatment strategies for prostate cancer (PCa) are constantly changing. Given the importance of accurate diagnosis, texture analysis of stained prostate tissues is important for automatic PCa detection. We used artificial intelligence (AI) techniques to classify dual-channel tissue features extracted from Hematoxylin and Eosin (H&E) tissue images. Tissue feature engineering was performed to extract first-order statistic (FOS)-based textural features from each stained channel, and cancer classification between benign and malignant was carried out based on the important features. Recursive feature elimination (RFE) and one-way analysis of variance (ANOVA) were used to identify significant features, yielding the best five of the six extracted features. The AI techniques used in this study for binary classification (benign vs. malignant and low-grade vs. high-grade) were support vector machine (SVM), logistic regression (LR), bagging tree, boosting tree, and a dual-channel bidirectional long short-term memory (DC-BiLSTM) network. A comparative analysis was then carried out between the AI algorithms. Two different datasets were used for PCa classification: the first (private) dataset was used for training and testing the AI models, and the second (public) dataset was used only for testing to evaluate model performance. The automatic AI classification system performed well and showed satisfactory results consistent with the hypothesis of this study.
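First-order statistic (FOS) texture features are plain histogram and moment statistics of a single stain channel. A sketch computing a common six-feature set; the abstract does not spell out which six features were extracted, so these are representative choices, not the study's exact list:

```python
import numpy as np

def first_order_statistics(channel):
    """Representative FOS texture features from one stain channel:
    mean, variance, skewness, kurtosis, plus energy and entropy of the
    intensity histogram."""
    x = np.asarray(channel, dtype=float).ravel()
    mu, var = x.mean(), x.var()
    sd = np.sqrt(var) + 1e-12
    skew = np.mean(((x - mu) / sd) ** 3)
    kurt = np.mean(((x - mu) / sd) ** 4)
    hist, _ = np.histogram(x, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before log
    energy = float((p ** 2).sum())
    entropy = float(-(p * np.log2(p)).sum())
    return {"mean": mu, "variance": var, "skewness": skew,
            "kurtosis": kurt, "energy": energy, "entropy": entropy}
```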
36
Bianconi F, Kather JN, Reyes-Aldasoro CC. Experimental Assessment of Color Deconvolution and Color Normalization for Automated Classification of Histology Images Stained with Hematoxylin and Eosin. Cancers (Basel) 2020; 12:cancers12113337. [PMID: 33187299 PMCID: PMC7697346 DOI: 10.3390/cancers12113337] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2020] [Accepted: 11/04/2020] [Indexed: 02/06/2023] Open
Abstract
Histological evaluation plays a major role in cancer diagnosis and treatment. The appearance of H&E-stained images can vary significantly as a consequence of differences in several factors, such as reagents, staining conditions, preparation procedure and image acquisition system. Such potential sources of noise can all have negative effects on computer-assisted classification. To minimize such artefacts and their potentially negative effects, several color pre-processing methods have been proposed in the literature, for instance color augmentation, color constancy, color deconvolution and color transfer. Still, little work has been done to investigate the efficacy of these methods on a quantitative basis. In this paper, we evaluated the effects of color constancy, deconvolution and transfer on the automated classification of H&E-stained images representing different types of cancers, specifically breast, prostate and colorectal cancer and malignant lymphoma. Our results indicate that in most cases color pre-processing does not improve the classification accuracy, especially when coupled with color-based image descriptors. Some pre-processing methods, however, can be beneficial when used with texture-based methods such as Gabor filters and Local Binary Patterns.
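Color deconvolution, one of the pre-processing methods evaluated here, is the Ruifrok-Johnston unmixing of optical densities with a stain matrix. A minimal sketch in which the caller supplies the stain vectors (the function name is hypothetical):

```python
import numpy as np

def color_deconvolve(rgb, stain_matrix):
    """Ruifrok-Johnston color deconvolution: convert RGB intensities to
    optical density (OD = -log10(I / I0), I0 = 255) and unmix them with
    the pseudo-inverse of the stain OD matrix (rows = stains, e.g.
    hematoxylin and eosin). Returns per-stain concentrations."""
    I = np.clip(np.asarray(rgb, dtype=float), 1.0, 255.0)
    od = -np.log10(I / 255.0)                          # (..., 3)
    M = np.asarray(stain_matrix, dtype=float)
    M = M / np.linalg.norm(M, axis=1, keepdims=True)   # unit stain vectors
    return od @ np.linalg.pinv(M)                      # (..., n_stains)
```

A pixel stained purely with hematoxylin should unmix to concentration 1 on the hematoxylin channel and 0 on eosin, which is a useful sanity check.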
Affiliation(s)
- Francesco Bianconi
- Department of Engineering, Università degli Studi di Perugia, Via Goffredo Duranti 93, 06125 Perugia, Italy
- giCentre, School of Mathematics, Computer Science & Engineering, City, University of London, Northampton Square, London EC1V 0HB, UK
- Correspondence: ; Tel.: +39-075-585-3706
- Jakob N. Kather
- Department of Medical Oncology and Internal Medicine VI, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120 Heidelberg, Germany
- Constantino Carlos Reyes-Aldasoro
- giCentre, School of Mathematics, Computer Science & Engineering, City, University of London, Northampton Square, London EC1V 0HB, UK
37
Bera K, Katz I, Madabhushi A. Reimagining T Staging Through Artificial Intelligence and Machine Learning Image Processing Approaches in Digital Pathology. JCO Clin Cancer Inform 2020; 4:1039-1050. [PMID: 33166198 PMCID: PMC7713520 DOI: 10.1200/cci.20.00110] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/21/2020] [Indexed: 02/06/2023] Open
Abstract
Tumor stage and grade, visually assessed by pathologists from evaluation of pathology images in conjunction with radiographic imaging techniques, have been linked to outcome, progression, and survival for a number of cancers. The gold standard of staging in oncology has been the TNM (tumor-node-metastasis) staging system. Though histopathological grading has shown prognostic significance, it is subjective and limited by interobserver variability even among experienced surgical pathologists. Recently, artificial intelligence (AI) approaches have been applied to pathology images toward diagnostic-, prognostic-, and treatment prediction-related tasks in cancer. AI approaches have the potential to overcome the limitations of conventional TNM staging and tumor grading approaches, providing a direct prognostic prediction of disease outcome independent of tumor stage and grade. Broadly speaking, these AI approaches involve extracting patterns from images that are then compared against previously defined disease signatures. These patterns are typically categorized as either (1) handcrafted, which involve domain-inspired attributes, such as nuclear shape, or (2) deep learning (DL)-based representations, which tend to be more abstract. DL approaches have particularly gained considerable popularity because of the minimal domain knowledge needed for training, mostly only requiring annotated examples corresponding to the categories of interest. In this article, we discuss AI approaches for digital pathology, especially as they relate to disease prognosis, prediction of genomic and molecular alterations in the tumor, and prediction of treatment response in oncology. We also discuss some of the potential challenges with validation, interpretability, and reimbursement that must be addressed before widespread clinical deployment. The article concludes with a brief discussion of potential future opportunities in the field of AI for digital pathology and oncology.
Collapse
Affiliation(s)
- Kaustav Bera
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, OH
- Maimonides Medical Center, Department of Internal Medicine, Brooklyn, NY
| | - Ian Katz
- Southern Sun Pathology, Sydney, Australia, and University of Queensland, Brisbane, Australia
| | - Anant Madabhushi
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, OH
- Louis Stokes Veterans Affairs Medical Center, Cleveland, OH
| |
Collapse
|
38
|
Xie J, Song X, Zhang W, Dong Q, Wang Y, Li F, Wan C. A novel approach with dual-sampling convolutional neural network for ultrasound image classification of breast tumors. Phys Med Biol 2020; 65. [PMID: 33120380 DOI: 10.1088/1361-6560/abc5c7] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Accepted: 10/29/2020] [Indexed: 12/19/2022]
Abstract
Breast cancer is one of the leading causes of female cancer deaths. Early diagnosis with prophylactic treatment may improve patients' prognosis. Ultrasound (US) imaging is a popular method in breast cancer diagnosis; however, its accuracy is bounded by traditional handcrafted-feature methods and operator expertise. A novel method named the Dual-Sampling Convolutional Neural Network (DSCNN) was proposed in this paper for the differential diagnosis of breast tumors based on US images. Combining traditional convolutional and residual networks, the DSCNN prevented vanishing gradients and degradation. The prediction accuracy was increased by the parallel dual-sampling structure, which can effectively extract potential features from US images. Compared with other advanced deep learning methods and traditional handcrafted-feature methods, the DSCNN reached the best performance, with an accuracy of 91.67% and an AUC of 0.939. The robustness of the proposed method was also verified on a public dataset. Moreover, the DSCNN was compared against evaluations from three radiologists using US BI-RADS lexicon categories for overall breast tumor assessment. The results demonstrated that the prediction sensitivity, specificity, and accuracy of the DSCNN were higher than those of a radiologist with 10 years of experience, suggesting that the DSCNN has the potential to help doctors make judgements in the clinic.
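The paper's exact DSCNN architecture is not reproduced in this listing; the following is only an illustrative numpy sketch of the general idea behind a parallel dual-sampling structure, in which two downsampling views of the same feature map (strided subsampling and max-pooling) are computed in parallel and combined so that complementary information is preserved.

```python
import numpy as np

def dual_sample(feat, stride=2):
    """Illustrative parallel dual-sampling: two downsampled views of the
    same 2-D feature map, stacked as channels. Not the authors' exact DSCNN."""
    h, w = feat.shape
    h, w = h - h % stride, w - w % stride        # crop to a multiple of stride
    f = feat[:h, :w]
    strided = f[::stride, ::stride]              # branch 1: strided subsampling
    pooled = f.reshape(h // stride, stride,      # branch 2: max-pooling
                       w // stride, stride).max(axis=(1, 3))
    return np.stack([strided, pooled], axis=0)   # fuse the two branches

fmap = np.arange(16, dtype=float).reshape(4, 4)
out = dual_sample(fmap)
print(out.shape)  # (2, 2, 2): two branch outputs over a 2x2 downsampled grid
```

In a real network each branch would be a learned convolutional path rather than a fixed operator, but the fusion-of-parallel-samplings idea is the same.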
Collapse
Affiliation(s)
- Jiang Xie
- School of Computer Engineering and Science, Shanghai University, Shanghai, CHINA
| | - Xiangshuai Song
- School of Computer Engineering and Science, Shanghai University, Shanghai, 200444, CHINA
| | - Wu Zhang
- Shanghai Institute of Applied Mathematics and Mechanics, Shanghai University, Shanghai, CHINA
| | - Qi Dong
- Department of Ultrasound, Shanghai Jiao Tong University School of Medicine Affiliated Renji Hospital, Shanghai, CHINA
| | - Yan Wang
- Department of Ultrasound, Shanghai Jiao Tong University School of Medicine Affiliated Renji Hospital, Shanghai, CHINA
| | - Fenghua Li
- Department of Ultrasound, Shanghai Jiao Tong University School of Medicine Affiliated Renji Hospital, Shanghai, CHINA
| | - Caifeng Wan
- Department of Ultrasound, Shanghai Jiao Tong University School of Medicine Affiliated Renji Hospital, Shanghai, 200127, CHINA
| |
Collapse
|
39
|
Kumar D, Batra U. An ensemble algorithm for breast cancer histopathology image classification. JOURNAL OF STATISTICS & MANAGEMENT SYSTEMS 2020. [DOI: 10.1080/09720510.2020.1818451] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Affiliation(s)
- Deepika Kumar
- School of Engineering, G. D. Goenka University, Gurugram 122103 Haryana, India
| | - Usha Batra
- School of Engineering, G. D. Goenka University, Gurugram 122103 Haryana, India
| |
Collapse
|
40
|
Hameed Z, Zahia S, Garcia-Zapirain B, Javier Aguirre J, María Vanegas A. Breast Cancer Histopathology Image Classification Using an Ensemble of Deep Learning Models. SENSORS 2020; 20:s20164373. [PMID: 32764398 PMCID: PMC7472736 DOI: 10.3390/s20164373] [Citation(s) in RCA: 57] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/10/2020] [Revised: 08/01/2020] [Accepted: 08/03/2020] [Indexed: 12/13/2022]
Abstract
Breast cancer is one of the major public health issues and is considered a leading cause of cancer-related deaths among women worldwide. Its early diagnosis can effectively increase the chances of survival. To this end, biopsy is usually followed as a gold-standard approach in which tissues are collected for microscopic analysis. However, the histopathological analysis of breast cancer is non-trivial, labor-intensive, and may lead to a high degree of disagreement among pathologists. Therefore, an automatic diagnostic system could assist pathologists in improving the effectiveness of the diagnostic process. This paper presents an ensemble deep learning approach for the definite classification of non-carcinoma and carcinoma breast cancer histopathology images using our collected dataset. We trained four different models based on pre-trained VGG16 and VGG19 architectures. Initially, we performed 5-fold cross-validation on all the individual models, namely the fully-trained VGG16, fine-tuned VGG16, fully-trained VGG19, and fine-tuned VGG19 models. Then, we followed an ensemble strategy by taking the average of predicted probabilities and found that the ensemble of fine-tuned VGG16 and fine-tuned VGG19 achieved competitive classification performance, especially on the carcinoma class. The ensemble of fine-tuned VGG16 and VGG19 models offered a sensitivity of 97.73% for the carcinoma class and an overall accuracy of 95.29%. It also offered an F1 score of 95.29%. These experimental results demonstrate that our proposed deep learning approach is effective for the automatic classification of complex-natured histopathology images of breast cancer, more specifically for carcinoma images.
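The ensemble strategy described above, averaging the predicted class probabilities of the fine-tuned VGG16 and VGG19 models, amounts to soft voting. A minimal numpy sketch of that fusion step, using hypothetical per-model softmax outputs (the values are placeholders, not results from the paper):

```python
import numpy as np

def soft_vote(*prob_matrices):
    """Average class probabilities from several models (rows = images,
    columns = classes, e.g. non-carcinoma / carcinoma), then take argmax."""
    avg = np.mean(prob_matrices, axis=0)
    return avg, avg.argmax(axis=1)

# Hypothetical softmax outputs of two fine-tuned models on three images
vgg16 = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
vgg19 = np.array([[0.7, 0.3], [0.6, 0.4], [0.1, 0.9]])

avg, labels = soft_vote(vgg16, vgg19)
print(labels)  # [0 0 1]: averaging smooths out the models' disagreement
```

Averaging probabilities (rather than hard majority voting) lets a confident model outweigh an uncertain one, which is one reason probability-level ensembles often help on borderline cases.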
Collapse
Affiliation(s)
- Zabit Hameed
- eVida Research Group, University of Deusto, 48007 Bilbao, Spain; (S.Z.); (B.G.-Z.)
- Correspondence:
| | - Sofia Zahia
- eVida Research Group, University of Deusto, 48007 Bilbao, Spain; (S.Z.); (B.G.-Z.)
| | | | - José Javier Aguirre
- Biokeralty Research Institute, 01510 Vitoria, Spain;
- Department of Pathological Anatomy, University Hospital of Araba, 01009 Vitoria, Spain
| | | |
Collapse
|
41
|
Sultan AS, Elgharib MA, Tavares T, Jessri M, Basile JR. The use of artificial intelligence, machine learning and deep learning in oncologic histopathology. J Oral Pathol Med 2020; 49:849-856. [PMID: 32449232 DOI: 10.1111/jop.13042] [Citation(s) in RCA: 63] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2020] [Revised: 04/29/2020] [Accepted: 05/09/2020] [Indexed: 12/13/2022]
Abstract
BACKGROUND Recently, there has been a momentous drive to apply advanced artificial intelligence (AI) technologies to diagnostic medicine. The introduction of AI has provided vast new opportunities to improve health care and has introduced a new wave of heightened precision in oncologic pathology. The impact of AI on oncologic pathology has now become apparent, but its use in oral oncology is still at a nascent stage. DISCUSSION A foundational overview of AI classification systems used in medicine and a review of common terminology used in machine learning and computational pathology are presented. This paper provides a focused review of recent advances in AI and deep learning in oncologic histopathology and oral oncology. In addition, specific emphasis is placed on recent studies that have applied these technologies to oral cancer prognostication. CONCLUSION Machine and deep learning methods designed to enhance the prognostication of oral cancer have been proposed, with much of the work focused on models predicting patient survival and locoregional recurrence in patients with oral squamous cell carcinoma (OSCC). Few studies have explored machine learning methods on OSCC digital histopathology images. It is evident that further research at the whole-slide-image level is needed, and future collaborations with computer scientists may advance the field of oral oncology.
Collapse
Affiliation(s)
- Ahmed S Sultan
- School of Dentistry, University of Maryland, Baltimore, MD, USA
| | | | - Tiffany Tavares
- School of Dentistry, University of Missouri-Kansas City, Kansas City, MO, USA
| | - Maryam Jessri
- Oral Health Centre of Western Australia, Perth, WA, Australia
| | - John R Basile
- School of Dentistry, University of Maryland, Baltimore, MD, USA
- University of Maryland Greenebaum Cancer Center, Baltimore, MD, USA
| |
Collapse
|
42
|
Optimizing the Performance of Breast Cancer Classification by Employing the Same Domain Transfer Learning from Hybrid Deep Convolutional Neural Network Model. ELECTRONICS 2020. [DOI: 10.3390/electronics9030445] [Citation(s) in RCA: 39] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Breast cancer is a significant factor in female mortality. An early cancer diagnosis leads to a reduction in the breast cancer death rate. With the help of a computer-aided diagnosis system, efficiency is increased and the cost of cancer diagnosis is reduced. Traditional breast cancer classification techniques are based on handcrafted features, and their performance relies upon the chosen features; they are also very sensitive to different sizes and complex shapes. However, histopathological breast cancer images are very complex in shape. Currently, deep learning models have become an alternative solution for diagnosis and have overcome the drawbacks of classical classification techniques. Although deep learning has performed well in various tasks of computer vision and pattern recognition, it still has some challenges, one of the main ones being the lack of training data. To address this challenge and optimize performance, we have utilized transfer learning, in which a deep learning model is trained on one task and then fine-tuned for another. We have employed transfer learning in two ways: training our proposed model first on a same-domain dataset and then on the target dataset, and training our model on a different-domain dataset and then on the target dataset. We have empirically shown that same-domain transfer learning optimized the performance. Our hybrid model of parallel convolutional layers and residual links is utilized to classify hematoxylin–eosin-stained breast biopsy images into four classes: invasive carcinoma, in-situ carcinoma, benign tumor, and normal tissue. To reduce the effect of overfitting, we have augmented the images with different image processing techniques. The proposed model achieved state-of-the-art performance, outperforming the latest methods with a patch-wise classification accuracy of 90.5% and an image-wise classification accuracy of 97.4% on the validation set. Moreover, we achieved an image-wise classification accuracy of 96.1% on the test set of the ICIAR-2018 microscopy dataset.
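The two-stage scheme above (pre-train on a same-domain dataset, then fine-tune on the target dataset) can be sketched framework-free with a tiny logistic model that is warm-started from the first stage. Everything here is a synthetic placeholder: the data are random, and the logistic model stands in for the authors' hybrid CNN purely to show the warm-start mechanics.

```python
import numpy as np

def train_logreg(X, y, w=None, lr=0.5, epochs=200):
    """Plain gradient-descent logistic regression; pass `w` to warm-start
    (the transfer-learning step: reuse weights learned in a prior stage)."""
    w = np.zeros(X.shape[1]) if w is None else w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)        # averaged gradient step
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.0, -1.0, 0.5])
# Stage 1: large "same-domain" dataset (synthetic stand-in)
Xs = rng.normal(size=(200, 3)); ys = (Xs @ true_w > 0).astype(float)
w_pre = train_logreg(Xs, ys)
# Stage 2: small target dataset drawn from a similar distribution
Xt = rng.normal(size=(20, 3)); yt = (Xt @ true_w > 0).astype(float)
w_ft = train_logreg(Xt, yt, w=w_pre, epochs=50)  # fine-tune, not from scratch
acc = ((1.0 / (1.0 + np.exp(-Xt @ w_ft)) > 0.5) == yt).mean()
print(acc)
```

The design point carried over from the paper is only this: because the stage-1 and stage-2 distributions are similar, the warm-started weights need far fewer updates than training from random initialization, which is why same-domain pre-training helps when target data are scarce.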
Collapse
|