1
Tafavvoghi M, Bongo LA, Shvetsov N, Busund LTR, Møllersen K. Publicly available datasets of breast histopathology H&E whole-slide images: A scoping review. J Pathol Inform 2024; 15:100363. PMID: 38405160; PMCID: PMC10884505; DOI: 10.1016/j.jpi.2024.100363.
Abstract
Advancements in digital pathology and computing resources have made a significant impact on the field of computational pathology for breast cancer diagnosis and treatment. However, access to high-quality labeled histopathological images of breast cancer is a major challenge that limits the development of accurate and robust deep learning models. In this scoping review, we identified the publicly available datasets of breast H&E-stained whole-slide images (WSIs) that can be used to develop deep learning algorithms. We systematically searched 9 scientific literature databases and 9 research data repositories and found 17 publicly available datasets containing 10 385 H&E WSIs of breast cancer. Moreover, we report image metadata and characteristics for each dataset to assist researchers in selecting proper datasets for specific tasks in breast cancer computational pathology. In addition, we compiled two lists of breast H&E patch datasets and private datasets as supplementary resources for researchers. Notably, only 28% of the included articles utilized multiple datasets, and only 14% used an external validation set, suggesting that the performance of the other developed models may be overestimated. The TCGA-BRCA dataset was used in 52% of the selected studies; it has a considerable selection bias that can impact the robustness and generalizability of the trained algorithms. Metadata reporting for breast WSI datasets is also inconsistent, which can hinder the development of accurate deep learning models and indicates the need for explicit guidelines for documenting breast WSI dataset characteristics and metadata.
Affiliation(s)
- Masoud Tafavvoghi
- Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
- Lars Ailo Bongo
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Nikita Shvetsov
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Kajsa Møllersen
- Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
2
Shimada Y, Ojima T, Takaoka Y, Sugano A, Someya Y, Hirabayashi K, Homma T, Kitamura N, Akemoto Y, Tanabe K, Sato F, Yoshimura N, Tsuchiya T. Prediction of visceral pleural invasion of clinical stage I lung adenocarcinoma using thoracoscopic images and deep learning. Surg Today 2024; 54:540-550. PMID: 37864054; DOI: 10.1007/s00595-023-02756-z.
Abstract
PURPOSE To develop deep learning models that identify visceral pleural invasion (VPI) on thoracoscopic images in patients with clinical stage I lung adenocarcinoma, and to verify whether these models can be applied clinically. METHODS Two deep learning models, one based on a convolutional neural network (CNN) and the other on a vision transformer (ViT), were trained on 463 images (VPI negative: 269 images, VPI positive: 194 images) captured from surgical videos of 81 patients. Model performance was validated on an independent test dataset containing 46 images (VPI negative: 28 images, VPI positive: 18 images) from 46 test patients. RESULTS The areas under the receiver operating characteristic curves of the CNN-based and ViT-based models were 0.77 and 0.84 (p = 0.304), respectively. The accuracy, sensitivity, specificity, and positive and negative predictive values were 73.91%, 83.33%, 67.86%, 62.50%, and 86.36% for the CNN-based model and 78.26%, 77.78%, 78.57%, 70.00%, and 84.62% for the ViT-based model, respectively. The diagnostic ability of these models was comparable to that of board-certified thoracic surgeons and tended to be superior to that of non-board-certified thoracic surgeons. CONCLUSION With further data expansion, these deep learning systems could be utilized in clinical applications.
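As a sanity check, the reported CNN metrics are mutually consistent; a short sketch recovering the implied confusion matrix (the TP/FN/TN/FP counts below are inferred from the percentages, not stated in the abstract):

```python
# Recover the confusion matrix behind the reported CNN metrics
# (46 test images: 28 VPI-negative, 18 VPI-positive; values from the abstract).
def metrics(tp, fn, tn, fp):
    total = tp + fn + tn + fp
    return {
        "accuracy": 100 * (tp + tn) / total,
        "sensitivity": 100 * tp / (tp + fn),
        "specificity": 100 * tn / (tn + fp),
        "ppv": 100 * tp / (tp + fp),
        "npv": 100 * tn / (tn + fn),
    }

# Sensitivity 83.33% of 18 positives -> TP = 15, FN = 3;
# specificity 67.86% of 28 negatives -> TN = 19, FP = 9.
cnn = metrics(tp=15, fn=3, tn=19, fp=9)
print({k: round(v, 2) for k, v in cnn.items()})
# -> accuracy 73.91, sensitivity 83.33, specificity 67.86, ppv 62.5, npv 86.36
```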
Affiliation(s)
- Yoshifumi Shimada
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Toshihiro Ojima
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Yutaka Takaoka
- Data Science Center for Medicine and Hospital Management, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Center for Data Science and Artificial Intelligence Research Promotion, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Aki Sugano
- Data Science Center for Medicine and Hospital Management, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Center for Clinical Research, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Yoshiaki Someya
- Center for Data Science and Artificial Intelligence Research Promotion, Toyama University Hospital, 2630 Sugitani, Toyama, Japan
- Kenichi Hirabayashi
- Department of Diagnostic Pathology, University of Toyama, 2630 Sugitani, Toyama, Japan
- Takahiro Homma
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Naoya Kitamura
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Yushi Akemoto
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Keitaro Tanabe
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Fumitaka Sato
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Naoki Yoshimura
- Department of Cardiovascular Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
- Tomoshi Tsuchiya
- Department of Thoracic Surgery, University of Toyama, 2630 Sugitani, Toyama, Japan
3
Williams DKA, Graifman G, Hussain N, Amiel M, Tran P, Reddy A, Haider A, Kavitesh BK, Li A, Alishahian L, Perera N, Efros C, Babu M, Tharakan M, Etienne M, Babu BA. Digital pathology, deep learning, and cancer: a narrative review. Transl Cancer Res 2024; 13:2544-2560. PMID: 38881914; PMCID: PMC11170525; DOI: 10.21037/tcr-23-964.
Abstract
Background and Objective Cancer is a leading cause of morbidity and mortality worldwide. The emergence of digital pathology and deep learning technologies signifies a transformative era in healthcare. These technologies can enhance cancer detection, streamline operations, and bolster patient care. However, a substantial gap exists between the development of deep learning models in controlled laboratory environments and their translation into clinical practice. This narrative review evaluates the current landscape of deep learning and digital pathology, analyzing the factors influencing model development and implementation in clinical practice. Methods We searched multiple databases, including Web of Science, arXiv, medRxiv, bioRxiv, Embase, PubMed, DBLP, Google Scholar, IEEE Xplore, Semantic Scholar, and Cochrane, targeting articles on whole-slide imaging and deep learning published between 2014 and 2023. Of the 776 articles identified, 36 papers met the inclusion criteria and were selected for analysis. Key Content and Findings Most articles in this review focus on the in-laboratory phase of deep learning model development, a critical stage in the deep learning lifecycle. Challenges arise both during model development and during integration into clinical practice; notably, laboratory performance metrics may not match real-world clinical outcomes. As technology advances and regulations evolve, we expect more clinical trials to bridge this performance gap and validate the effectiveness of deep learning models in clinical care. High clinical accuracy is vital for informed decision-making throughout a patient's cancer care. Conclusions Deep learning technology can enhance cancer detection, clinical workflows, and patient care, but challenges may arise during model development. The deep learning lifecycle involves data preprocessing, model development, and clinical implementation. Achieving health equity requires including diverse patient groups and eliminating bias during implementation. While model development is integral, most articles focus on the pre-deployment phase; future longitudinal studies are crucial for validating models in real-world settings post-deployment. A collaborative approach among computational pathologists, technologists, industry, and healthcare providers is essential for driving adoption in clinical settings.
Affiliation(s)
- Nowair Hussain
- Department of Internal Medicine, Overlook Medical Center, Summit, NJ, USA
- Arjun Reddy
- Applied Mathematics & Statistics, Stony Brook University, Stony Brook, NY, USA
- Ali Haider
- Department of Artificial Intelligence, Yeshiva University, New York, NY, USA
- Bali Kumar Kavitesh
- Centre for Frontier AI Research (CFAR), Agency for Science, Technology, and Research (A*STAR), Singapore, Singapore
- Austin Li
- New York Medical College, Valhalla, NY, USA
- Myoungmee Babu
- Artificial Intelligence and Mathematics, New York City Department of Education, New York, NY, USA
- Mill Etienne
- Department of Neurology, New York Medical College, Valhalla, NY, USA
- Benson A Babu
- New York Medical College, Valhalla, NY, USA
- Department of Hospital Medicine, Wyckoff Medical Center, New York, NY, USA
4
Katayama A, Aoki Y, Watanabe Y, Horiguchi J, Rakha EA, Oyama T. Current status and prospects of artificial intelligence in breast cancer pathology: convolutional neural networks to prospective Vision Transformers. Int J Clin Oncol 2024. PMID: 38619651; DOI: 10.1007/s10147-024-02513-3.
Abstract
Breast cancer is the most prevalent cancer among women, and its diagnosis requires the accurate identification and classification of histological features for effective patient management. Artificial intelligence, particularly through deep learning, represents the next frontier in cancer diagnosis and management. Notably, the use of convolutional neural networks and emerging Vision Transformers (ViT) has been reported to automate pathologists' tasks, including tumor detection and classification, and to improve the efficiency of pathology services. Deep learning applications have also been extended to the prediction of protein expression, molecular subtype, mutation status, therapeutic efficacy, and outcome directly from hematoxylin and eosin-stained slides, bypassing the need for immunohistochemistry or genetic testing. This review explores the current status and prospects of deep learning in breast cancer diagnosis, with a focus on whole-slide image analysis. Artificial intelligence is increasingly applied to many tasks in breast pathology, ranging from disease diagnosis to outcome prediction, and thus serves as a valuable tool for assisting pathologists and supporting breast cancer management.
Affiliation(s)
- Ayaka Katayama
- Diagnostic Pathology, Gunma University Graduate School of Medicine, 3-39-22 Showamachi, Maebashi, Gunma, 371-8511, Japan
- Yuki Aoki
- Center for Mathematics and Data Science, Gunma University, Maebashi, Japan
- Yukako Watanabe
- Clinical Training Center, Gunma University Hospital, Maebashi, Japan
- Jun Horiguchi
- Department of Breast Surgery, International University of Health and Welfare, Narita, Japan
- Emad A Rakha
- Department of Histopathology, School of Medicine, University of Nottingham, University Park, Nottingham, UK
- Department of Pathology, Hamad Medical Corporation, Doha, Qatar
- Tetsunari Oyama
- Diagnostic Pathology, Gunma University Graduate School of Medicine, 3-39-22 Showamachi, Maebashi, Gunma, 371-8511, Japan
5
Lin Z, He Y, Qiu C, Yu Q, Huang H, Zhang Y, Li W, Qiu T, Li X. A multi-omics signature to predict the prognosis of invasive ductal carcinoma of the breast. Comput Biol Med 2022; 151:106291. PMID: 36395590; DOI: 10.1016/j.compbiomed.2022.106291.
Abstract
BACKGROUND Precisely evaluating the prognosis of invasive ductal carcinoma (IDC) of the breast is challenging, as most prognostic signatures use single-omics data based on gene or clinical information. METHODS Whole-slide images (WSIs), transcriptome data, and clinical data of breast IDC were collected from The Cancer Genome Atlas. The cancer-associated fibroblast (CAF) gene sets were downloaded from the Molecular Signatures Database. WSI features were extracted by handcrafted feature engineering. The CAF prognostic genes were determined by Gene Set Enrichment Analysis, the Wilcoxon test, and univariate Cox regression. The IDC patients were divided into training and test sets. Prognostic signatures based on WSIs, IDC-CAFs, bi-omics, and tri-omics were constructed using multivariate Cox regression. Samples were divided into low- and high-risk groups according to the median risk score. Kaplan-Meier survival and receiver operating characteristic curves were used to validate the prediction performance of the four signatures. RESULTS In total, 508 IDC patients with complete data were included. The areas under the curve (AUCs) of the single-omics signatures based on WSI characteristics and CAFs were 0.765 and 0.775, whereas the AUC of the bi-omics signature was 0.823. The tri-omics signature based on WSIs, CAFs, and lymph node status demonstrated the best predictive value, with an AUC of 0.897. CONCLUSION The multi-omics signature based on WSIs, CAFs, and clinical characteristics showed excellent prediction ability in breast IDC patients, and its risk factors can also provide a valuable reference for clinical management.
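The risk-stratification step described above (a multivariate Cox linear predictor followed by a median split) can be sketched as follows; the coefficients and patient feature values here are invented for illustration, not taken from the paper:

```python
import statistics

# Illustrative multivariate Cox linear predictor: risk = sum(beta_i * x_i).
# Coefficients and feature values are made up for this sketch; in the paper
# they come from multivariate Cox regression on WSI, CAF, and clinical data.
betas = {"wsi_score": 0.8, "caf_score": 1.1, "lymph_node_pos": 0.6}

patients = [
    {"wsi_score": 0.2, "caf_score": 0.1, "lymph_node_pos": 0},
    {"wsi_score": 0.9, "caf_score": 0.7, "lymph_node_pos": 1},
    {"wsi_score": 0.4, "caf_score": 0.3, "lymph_node_pos": 0},
    {"wsi_score": 0.8, "caf_score": 0.9, "lymph_node_pos": 1},
]

def risk_score(p):
    return sum(betas[k] * p[k] for k in betas)

scores = [risk_score(p) for p in patients]
cutoff = statistics.median(scores)
# Patients at or below the median risk score form the low-risk group.
groups = ["high" if s > cutoff else "low" for s in scores]
print(groups)  # -> ['low', 'high', 'low', 'high']
```

The low- and high-risk groups would then be compared with Kaplan-Meier curves and a log-rank test, as in the paper.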
Affiliation(s)
- Zhiquan Lin
- Wuyi University, 99 Yinbin Avenue, Jiangmen, Guangdong, China
- Yu He
- National Drug Clinical Trial Institution, Jiangmen Central Hospital, Jiangmen, Guangdong, China
- Chaoran Qiu
- Department of Breast, Jiangmen Central Hospital, Jiangmen, Guangdong, China
- Qihe Yu
- Department of Oncology, Jiangmen Central Hospital, Jiangmen, Guangdong, China
- Hui Huang
- Department of Breast Surgery, Jiangmen Maternity & Child Health Care Hospital, Jiangmen, Guangdong, China
- Yiwen Zhang
- Department of Breast, Jiangmen Central Hospital, Jiangmen, Guangdong, China
- Weiwen Li
- Department of Breast, Jiangmen Central Hospital, Jiangmen, Guangdong, China
- Tian Qiu
- Wuyi University, 99 Yinbin Avenue, Jiangmen, Guangdong, China
- Xiaoping Li
- Department of Breast, Jiangmen Central Hospital, Jiangmen, Guangdong, China
6
Tsuneki M, Kanavati F. Weakly supervised learning for multi-organ adenocarcinoma classification in whole slide images. PLoS One 2022; 17:e0275378. PMID: 36417401; PMCID: PMC9683606; DOI: 10.1371/journal.pone.0275378.
Abstract
Primary screening by automated computational pathology algorithms for the presence or absence of adenocarcinoma in biopsy specimens (e.g., endoscopic biopsy, transbronchial lung biopsy, and needle biopsy) of possible primary organs (e.g., stomach, colon, lung, and breast) and in radical lymph node dissection specimens would be a powerful tool to assist surgical pathologists in the routine histopathological diagnostic workflow. In this paper, we trained multi-organ deep learning models to classify adenocarcinoma in whole-slide images (WSIs) of biopsy and radical lymph node dissection specimens. We evaluated the models on five independent test sets (stomach, colon, lung, breast, lymph nodes) to demonstrate feasibility on multi-organ and lymph node specimens from different medical institutions, achieving receiver operating characteristic areas under the curve (ROC-AUCs) in the range of 0.91-0.98.
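Weakly supervised WSI classification commonly aggregates tile-level predictions into a single slide-level score; a minimal sketch of one common aggregation rule, top-k average pooling (this particular rule and the tile scores are illustrative assumptions, not necessarily what this paper uses):

```python
# Multiple-instance view of a WSI: only a slide-level label is available,
# so per-tile probabilities are pooled into one slide-level probability.
def slide_probability(tile_probs, k=3):
    """Average of the top-k tile probabilities (max pooling when k=1)."""
    top = sorted(tile_probs, reverse=True)[:k]
    return sum(top) / len(top)

tiles = [0.02, 0.10, 0.95, 0.88, 0.05, 0.91]  # hypothetical tile scores
print(round(slide_probability(tiles, k=3), 4))  # -> 0.9133
```

Thresholding the slide probability then yields the adenocarcinoma / benign call used to compute ROC-AUC over the test sets.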
Affiliation(s)
- Masayuki Tsuneki
- Medmain Research, Medmain Inc., Akasaka, Chuo-ku, Fukuoka, Japan
- Fahdi Kanavati
- Medmain Research, Medmain Inc., Akasaka, Chuo-ku, Fukuoka, Japan
7
Accuracy and Utility of Preoperative Ultrasound-Guided Axillary Lymph Node Biopsy for Invasive Breast Cancer: A Systematic Review and Meta-Analysis. Comput Intell Neurosci 2022; 2022:3307627. PMID: 36203726; PMCID: PMC9532070; DOI: 10.1155/2022/3307627.
Abstract
Background With the accelerating pace of life and work, the incidence of invasive breast cancer is rising, making early diagnosis critical. This study screened and analyzed the published literature on ultrasound-guided biopsy of invasive breast cancer to assess the accuracy and practicality of preoperative biopsy. Method Four databases were screened for relevant literature, with no restriction on start date and a search deadline of July 2, 2022. Two researchers independently screened the literature and included studies on preoperative ultrasound-guided biopsy with intraoperative and postoperative pathological diagnosis of invasive breast cancer. The diagnostic data reported in the included literature were extracted and meta-analyzed with RevMan 5.4 software, and the risk-of-bias map, forest plot, and summary receiver operating characteristic (SROC) curves were drawn. Results The 19 included studies involved approximately 18,668 patients with invasive breast cancer. The risk of bias in the included literature was low. The wide distribution of true-positive, false-positive, true-negative, and false-negative values in the forest plot may reflect the large differences in patient numbers across studies. Most studies lie in the upper left of the SROC curve, indicating that the accuracy of ultrasound-guided axillary biopsy is very high. Conclusion For invasive breast cancer, preoperative ultrasound-guided biopsy can accurately predict the staging and grading of breast cancer, providing an important reference for surgery and follow-up treatment.
8
Agbley BLY, Li J, Hossin MA, Nneji GU, Jackson J, Monday HN, James EC. Federated Learning-Based Detection of Invasive Carcinoma of No Special Type with Histopathological Images. Diagnostics (Basel) 2022; 12:1669. PMID: 35885573; PMCID: PMC9323034; DOI: 10.3390/diagnostics12071669.
Abstract
Invasive carcinoma of no special type (IC-NST) is known to be one of the most prevalent kinds of breast cancer, hence the growing research interest in studying automated systems that can detect the presence of breast tumors and appropriately classify them into subtypes. Machine learning (ML) and, more specifically, deep learning (DL) techniques have been used to approach this problem. However, such techniques usually require massive amounts of data to obtain competitive results. This requirement makes their application in specific areas such as health problematic as privacy concerns regarding the release of patients’ data publicly result in a limited number of publicly available datasets for the research community. This paper proposes an approach that leverages federated learning (FL) to securely train mathematical models over multiple clients with local IC-NST images partitioned from the breast histopathology image (BHI) dataset to obtain a global model. First, we used residual neural networks for automatic feature extraction. Then, we proposed a second network consisting of Gabor kernels to extract another set of features from the IC-NST dataset. After that, we performed a late fusion of the two sets of features and passed the output through a custom classifier. Experiments were conducted for the federated learning (FL) and centralized learning (CL) scenarios, and the results were compared. Competitive results were obtained, indicating the positive prospects of adopting FL for IC-NST detection. Additionally, fusing the Gabor features with the residual neural network features resulted in the best performance in terms of accuracy, F1 score, and area under the receiver operation curve (AUC-ROC). The models show good generalization by performing well on another domain dataset, the breast cancer histopathological (BreakHis) image dataset. Our method also outperformed other methods from the literature.
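The global-model step described above is typically implemented with federated averaging (FedAvg); a minimal sketch in plain Python (the toy weight vectors and client dataset sizes are illustrative, not from the paper):

```python
# FedAvg: each client trains locally on its private IC-NST partition, and the
# server averages client weights, weighted by local dataset size, to produce
# the global model. Raw images never leave the clients.
def fedavg(client_weights, client_sizes):
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three hypothetical clients holding partitions of different sizes.
weights = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
sizes = [100, 300, 600]
global_model = fedavg(weights, sizes)
print(global_model)  # -> [0.5, 0.7]
```

In practice this averaging runs over full network parameter tensors for many communication rounds, with clients retraining from the global model each round.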
Affiliation(s)
- Bless Lord Y. Agbley
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Jianping Li
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Md Altab Hossin
- School of Innovation and Entrepreneurship, Chengdu University, Chengdu 610106, China
- Grace Ugochi Nneji
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Jehoiada Jackson
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Happy Nkanta Monday
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Edidiong Christopher James
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
9
BM-Net: CNN-Based MobileNet-V3 and Bilinear Structure for Breast Cancer Detection in Whole Slide Images. Bioengineering (Basel) 2022; 9:261. PMID: 35735504; PMCID: PMC9220285; DOI: 10.3390/bioengineering9060261.
Abstract
Breast cancer is one of the most common types of cancer and a leading cause of cancer-related death. Diagnosis of breast cancer is based on the evaluation of pathology slides. In the era of digital pathology, these slides can be converted into digital whole-slide images (WSIs) for further analysis. However, due to their sheer size, diagnosis from digital WSIs is time-consuming and challenging. In this study, we present a lightweight architecture, bilinear MobileNet-V3 (BM-Net), that combines a bilinear structure with the MobileNet-V3 network to analyze breast cancer WSIs. We used the WSI dataset from the ICIAR2018 Grand Challenge on Breast Cancer Histology Images (BACH) competition, which contains four classes: normal, benign, in situ carcinoma, and invasive carcinoma. We adopted data augmentation techniques to increase diversity and used focal loss to mitigate class imbalance. We achieved high performance, with 0.88 accuracy in patch classification and an average score of 0.71, surpassing state-of-the-art models. BM-Net shows great potential for detecting cancer in WSIs and is a promising clinical tool.
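Focal loss mitigates class imbalance by down-weighting easy, well-classified examples so the loss is dominated by hard ones; a minimal sketch of the binary form (the gamma and alpha values are common defaults, not reported in the abstract):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t)."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A well-classified positive (p=0.9) is down-weighted far more than a
# hard positive (p=0.1), unlike plain cross-entropy.
easy = focal_loss(0.9, 1)
hard = focal_loss(0.1, 1)
print(round(easy, 5), round(hard, 5))  # -> 0.00026 0.46627
```

With gamma = 0 and alpha = 0.5 this reduces to (half of) ordinary cross-entropy, which is why gamma is called the focusing parameter.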
10
A Deep Learning Model for Prostate Adenocarcinoma Classification in Needle Biopsy Whole-Slide Images Using Transfer Learning. Diagnostics (Basel) 2022; 12:768. PMID: 35328321; PMCID: PMC8947489; DOI: 10.3390/diagnostics12030768.
Abstract
The histopathological diagnosis of prostate adenocarcinoma in needle biopsy specimens is of pivotal importance for determining optimum prostate cancer treatment. Because diagnosing large numbers of cases, each containing 12 core biopsy specimens, by pathologists using a microscope is a time-consuming manual process limited by human resources, new techniques are needed that can rapidly and accurately screen large numbers of histopathological prostate needle biopsy specimens. Computational pathology applications that can assist pathologists in detecting and classifying prostate adenocarcinoma from whole-slide images (WSIs) would be of great benefit for routine pathological practice. In this paper, we trained deep learning models capable of classifying needle biopsy WSIs into adenocarcinoma and benign (non-neoplastic) lesions. We evaluated the models on needle biopsy, transurethral resection of the prostate (TUR-P), and The Cancer Genome Atlas (TCGA) public dataset test sets, achieving ROC-AUCs of up to 0.978 on needle biopsy test sets and up to 0.9873 on TCGA test sets for adenocarcinoma.
11
Tsuneki M. Deep learning models in medical image analysis. J Oral Biosci 2022; 64:312-320. PMID: 35306172; DOI: 10.1016/j.job.2022.03.003.
Abstract
BACKGROUND Deep learning is a state-of-the-art technology that has rapidly become the method of choice for medical image analysis. Its fast and robust detection, segmentation, tracking, and classification of pathophysiological anatomical structures can support medical practitioners in the routine clinical workflow. Thus, deep learning-based applications for disease diagnosis will empower physicians and allow fast decision-making in clinical practice. HIGHLIGHT Deep learning can learn robust and varied discriminative features, provided the training set is large and diverse. However, sufficient medical images for training are not always available from medical institutions, which is one of the major limitations of deep learning in medical image analysis. This review article presents some solutions to this issue and discusses the efforts needed to develop robust deep learning-based computer-aided diagnosis applications for better clinical workflows in endoscopy, radiology, pathology, and dentistry. CONCLUSION The introduction of deep learning-based applications will enhance the traditional role of medical practitioners in ensuring accurate diagnosis and treatment in terms of precision, reproducibility, and scalability.
Affiliation(s)
- Masayuki Tsuneki
- Medmain Research, Medmain Inc., Fukuoka, Japan; Division of Anatomy and Cell Biology of the Hard Tissue, Department of Tissue Regeneration and Reconstruction, Niigata University Graduate School of Medical and Dental Sciences, Niigata, Japan