1. Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. [PMID: 38420608] [PMCID: PMC10900832] [DOI: 10.1016/j.jpi.2023.100357]
Abstract
Computational Pathology (CPath) is an interdisciplinary science that applies developments in computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive computer-aided diagnosis (CAD) system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that CPath tools are primarily designed to address. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question about the directions and trends being pursued in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We examine this cycle from the perspectives of data-centric, model-centric, and application-centric problems. Finally, we sketch the remaining challenges and provide directions for future technical developments and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S. Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N. Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada

2. Rahman T, Baras AS, Chellappa R. Evaluation of a task specific self-supervised learning framework in digital pathology relative to transfer learning approaches and existing foundation models. Mod Pathol 2024:100636. [PMID: 39455029] [DOI: 10.1016/j.modpat.2024.100636]
Abstract
An integral stage in typical digital pathology workflows involves deriving specific features from tiles extracted from a tessellated whole-slide image. Notably, various computer vision neural network architectures, particularly ImageNet-pretrained models, have been extensively used in this domain. This study critically analyzes multiple strategies for encoding tiles to understand the extent of transfer learning and identify the most effective approach. The study categorizes neural network performance into three weight initialization methods: random, ImageNet-based, and self-supervised learning. Additionally, we propose a framework based on task-specific self-supervised learning (TS-SSL), which introduces a shallow feature extraction method employing a spatial-channel attention block to glean distinctive features optimized for histopathology intricacies. Across two different downstream classification tasks (patch classification and weakly supervised whole-slide image classification) with diverse datasets, including colorectal cancer histology, PatchCamelyon, PANDA, TCGA, and CIFAR-10, our task-specific self-supervised encoding approach consistently outperforms other CNN-based encoders. The improved performance highlights the potential of task-specific, attention-based self-supervised training in tailoring feature extraction for histopathology, indicating a shift away from pretrained models originating outside the histopathology domain. Our study supports the idea that task-specific self-supervised learning allows domain-specific feature extraction, encouraging a more focused analysis.
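To make the attention idea above concrete, here is a minimal PyTorch sketch of a spatial-channel attention block applied to tile feature maps. The reduction ratio, kernel size, and how such a block would be wired into the TS-SSL encoder are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn


class SpatialChannelAttention(nn.Module):
    """Re-weight a tile feature map first across channels, then across locations."""

    def __init__(self, channels: int, reduction: int = 8):  # reduction ratio is an illustrative choice
        super().__init__()
        # Channel attention: squeeze global context, then gate each channel.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: compress channels to two maps, then gate each location.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)                     # channel re-weighting
        avg_map = x.mean(dim=1, keepdim=True)           # (B, 1, H, W)
        max_map = x.amax(dim=1, keepdim=True)           # (B, 1, H, W)
        gate = self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        return x * gate                                  # spatial re-weighting


if __name__ == "__main__":
    tiles = torch.randn(4, 64, 56, 56)                   # toy batch of tile feature maps
    print(SpatialChannelAttention(64)(tiles).shape)      # torch.Size([4, 64, 56, 56])
```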
Affiliation(s)
- Tawsifur Rahman
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Alexander S Baras
- Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Rama Chellappa
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, Maryland, USA; Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, USA

3. Li Q, Geng S, Luo H, Wang W, Mo YQ, Luo Q, Wang L, Song GB, Sheng JP, Xu B. Signaling pathways involved in colorectal cancer: pathogenesis and targeted therapy. Signal Transduct Target Ther 2024; 9:266. [PMID: 39370455] [PMCID: PMC11456611] [DOI: 10.1038/s41392-024-01953-7]
Abstract
Colorectal cancer (CRC) remains one of the leading causes of cancer-related mortality worldwide. Its complexity is influenced by various signal transduction networks that govern cellular proliferation, survival, differentiation, and apoptosis. The pathogenesis of CRC is a testament to the dysregulation of these signaling cascades, which culminates in the malignant transformation of colonic epithelium. This review aims to dissect the foundational signaling mechanisms implicated in CRC, to elucidate the generalized principles underpinning neoplastic evolution and progression. We discuss the molecular hallmarks of CRC, including the genomic, epigenomic and microbial features of CRC to highlight the role of signal transduction in the orchestration of the tumorigenic process. Concurrently, we review the advent of targeted and immune therapies in CRC, assessing their impact on the current clinical landscape. The development of these therapies has been informed by a deepening understanding of oncogenic signaling, leading to the identification of key nodes within these networks that can be exploited pharmacologically. Furthermore, we explore the potential of integrating AI to enhance the precision of therapeutic targeting and patient stratification, emphasizing their role in personalized medicine. In summary, our review captures the dynamic interplay between aberrant signaling in CRC pathogenesis and the concerted efforts to counteract these changes through targeted therapeutic strategies, ultimately aiming to pave the way for improved prognosis and personalized treatment modalities in colorectal cancer.
Affiliation(s)
- Qing Li
- The Shapingba Hospital, Chongqing University, Chongqing, China
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and School of Medicine, Chongqing University, Chongqing, China
- Key Laboratory of Biorheological Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Shan Geng
- Central Laboratory, The Affiliated Dazu Hospital of Chongqing Medical University, Chongqing, China
- Hao Luo
- Key Laboratory of Biorheological Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Cancer Center, Daping Hospital, Army Medical University, Chongqing, China
- Wei Wang
- Chongqing Municipal Health and Health Committee, Chongqing, China
- Ya-Qi Mo
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and School of Medicine, Chongqing University, Chongqing, China
- Qing Luo
- Key Laboratory of Biorheological Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Lu Wang
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and School of Medicine, Chongqing University, Chongqing, China
- Guan-Bin Song
- Key Laboratory of Biorheological Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China.
- Jian-Peng Sheng
- College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, China.
- Bo Xu
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and School of Medicine, Chongqing University, Chongqing, China.

4. Chandramohan D, Garapati HN, Nangia U, Simhadri PK, Lapsiwala B, Jena NK, Singh P. Diagnostic accuracy of deep learning in detection and prognostication of renal cell carcinoma: a systematic review and meta-analysis. Front Med (Lausanne) 2024; 11:1447057. [PMID: 39301494] [PMCID: PMC11412207] [DOI: 10.3389/fmed.2024.1447057]
Abstract
Introduction: The prevalence of renal cell carcinoma (RCC) is increasing among adults. Histopathologic samples obtained after surgical resection or from biopsies of a renal mass require subtype classification for diagnosis, prognosis, and to determine surveillance. Deep learning in artificial intelligence (AI) and pathomics are rapidly advancing, leading to numerous applications such as histopathological diagnosis. In our meta-analysis, we assessed the pooled diagnostic performance of deep neural network (DNN) frameworks in detecting RCC subtypes and predicting survival. Methods: A systematic search was done in PubMed, Google Scholar, Embase, and Scopus from inception to November 2023. The random effects model was used to calculate the pooled percentages, mean, and 95% confidence interval. Accuracy was defined as the number of cases correctly identified by AI out of the total number of cases, i.e., (True Positive + True Negative)/(True Positive + True Negative + False Positive + False Negative). The heterogeneity between study-specific estimates was assessed by the I² statistic. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were used to conduct and report the analysis. Results: The search retrieved 347 studies; 13 retrospective studies evaluating 5340 patients were included in the final analysis. The pooled performance of the DNN was as follows: accuracy 92.3% (95% CI: 85.8-95.9; I² = 98.3%), sensitivity 97.5% (95% CI: 83.2-99.7; I² = 92%), specificity 89.2% (95% CI: 29.9-99.4; I² = 99.6%), and area under the curve 0.91 (95% CI: 0.85-0.97; I² = 99.6%). Specifically, their accuracy in RCC subtype detection was 93.5% (95% CI: 88.7-96.3; I² = 92%), and the accuracy in survival analysis prediction was 81% (95% CI: 67.8-89.6; I² = 94.4%). Discussion: The DNN showed excellent pooled diagnostic accuracy in classifying RCC into subtypes and grading them for prognostic purposes. Further studies are required to establish generalizability and validate these findings on a larger scale.
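As a concrete illustration of the pooling described above, the following Python sketch computes study-level accuracy as (TP + TN)/(TP + TN + FP + FN), combines the studies with a DerSimonian-Laird random-effects model, and reports the I² statistic. The confusion counts are hypothetical, and a production meta-analysis of proportions would typically pool on a logit scale.

```python
import numpy as np

# Hypothetical per-study confusion counts: rows of (TP, TN, FP, FN).
counts = np.array([[80, 90, 5, 10], [150, 120, 20, 15], [60, 70, 10, 5]], float)
tp, tn, fp, fn = counts.T
n = counts.sum(axis=1)
acc = (tp + tn) / n                       # study-level accuracy
var = acc * (1 - acc) / n                 # crude binomial variance

w = 1 / var                               # fixed-effect weights
acc_fixed = np.sum(w * acc) / np.sum(w)
q = np.sum(w * (acc - acc_fixed) ** 2)    # Cochran's Q
df = len(acc) - 1
i2 = max(0.0, (q - df) / q) * 100         # I^2 heterogeneity (%)

c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)             # between-study variance
w_re = 1 / (var + tau2)                   # random-effects weights
pooled = np.sum(w_re * acc) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
print(f"pooled accuracy {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f}-{pooled + 1.96 * se:.3f}), I^2 {i2:.1f}%")
```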
Affiliation(s)
- Deepak Chandramohan
- Department of Nephrology, The University of Alabama at Birmingham, Birmingham, AL, United States
- Hari Naga Garapati
- Department of Nephrology, Baptist Medical Center South, Montgomery, AL, United States
- Udit Nangia
- Department of Medicine, University Hospital Parma Medical Center, Parma, OH, United States
- Prathap K Simhadri
- Department of Nephrology, The University of Alabama at Birmingham, Birmingham, AL, United States
- Boney Lapsiwala
- Department of Internal Medicine, Medical City Arlington, Arlington, TX, United States
- Nihar K Jena
- Department of Cardiology, Trinity Health Oakland Hospital, Pontiac, MI, United States
- Prabhat Singh
- Department of Nephrology, Christus Spohn Health System, Corpus Christi, TX, United States

5. Miura E, Emoto K, Abe T, Hashiguchi A, Hishida T, Asakura K, Sakamoto M. Establishment of artificial intelligence model for precise histological subtyping of lung adenocarcinoma and its application to quantitative and spatial analysis. Jpn J Clin Oncol 2024; 54:1009-1023. [PMID: 38757929] [DOI: 10.1093/jjco/hyae066]
Abstract
BACKGROUND The histological subtype of lung adenocarcinoma is a major prognostic factor. We developed a new artificial intelligence model to classify lung adenocarcinoma images into seven histological subtypes and applied the model to whole-slide images to investigate the relationship between the distribution of histological subtypes and clinicopathological factors. METHODS Using histological subtype images regarded as typical by pathologists, we trained and validated an artificial intelligence model. The model was then applied to whole-slide images of resected lung adenocarcinoma specimens from 147 cases. RESULTS The model achieved an accuracy of 99.7% on training sets and 90.4% on validation sets consisting of tiles that pathologists considered typical of each histological subtype. When the model was applied to whole-slide images, the predominant subtype according to the artificial intelligence model classification matched that determined by pathologists in 75.5% of cases. The predominant subtype and tumor grade (using the WHO fourth and fifth classifications) determined by the artificial intelligence model resulted in recurrence-free survival curves similar to those determined by pathologists. Furthermore, we stratified the recurrence-free survival curves for patients with different proportions of high-grade components (solid, micropapillary and cribriform) according to the physical distribution of the high-grade component. The results suggested that tumors with centrally located high-grade components had a higher malignant potential (P < 0.001 for 5-20% high-grade component). CONCLUSION The new artificial intelligence model for histological subtyping of lung adenocarcinoma achieved high accuracy, and subtype quantification and subtype distribution analyses could be achieved. The artificial intelligence model therefore has potential for clinical application in both quantification and spatial analysis.
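The slide-level quantities described above (subtype proportions and the predominant subtype) can be derived from per-tile predictions with a few lines of Python; the subtype label set and the toy prediction list below are placeholders rather than the authors' data.

```python
from collections import Counter

# Assumed seven subtype labels; the paper's exact label set may differ.
SUBTYPES = ["lepidic", "acinar", "papillary", "micropapillary", "solid", "cribriform", "mucinous"]


def summarize_slide(tile_predictions):
    """tile_predictions: list of subtype labels, one per tumor tile of a whole-slide image."""
    counts = Counter(tile_predictions)
    total = sum(counts.values())
    proportions = {s: counts.get(s, 0) / total for s in SUBTYPES}
    predominant = max(proportions, key=proportions.get)
    high_grade = sum(proportions[s] for s in ("solid", "micropapillary", "cribriform"))
    return predominant, proportions, high_grade


# Toy per-tile predictions (placeholder).
preds = ["acinar"] * 60 + ["solid"] * 25 + ["lepidic"] * 15
predominant, props, high_grade = summarize_slide(preds)
print(predominant, f"high-grade component: {high_grade:.0%}")
```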
Affiliation(s)
- Eisuke Miura
- Department of Pathology, Keio University School of Medicine, Tokyo, Japan
- Katsura Emoto
- Department of Pathology, Keio University School of Medicine, Tokyo, Japan
- Department of Diagnostic Pathology, National Hospital Organization Saitama Hospital, Saitama, Japan
- Tokiya Abe
- Department of Pathology, Keio University School of Medicine, Tokyo, Japan
- Akinori Hashiguchi
- Department of Pathology, Keio University School of Medicine, Tokyo, Japan
- Tomoyuki Hishida
- Division of Thoracic Surgery, Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Keisuke Asakura
- Division of Thoracic Surgery, Department of Surgery, Keio University School of Medicine, Tokyo, Japan
- Michiie Sakamoto
- Department of Pathology, Keio University School of Medicine, Tokyo, Japan
- School of Medicine, International University of Health and Welfare, Chiba, Japan

6. Mubarak M, Rashid R, Sapna F, Shakeel S. Expanding role and scope of artificial intelligence in the field of gastrointestinal pathology. Artif Intell Gastroenterol 2024; 5:91550. [DOI: 10.35712/aig.v5.i2.91550]
Abstract
Digital pathology (DP) and its subsidiaries including artificial intelligence (AI) are rapidly making inroads into the area of diagnostic anatomic pathology (AP) including gastrointestinal (GI) pathology. It is poised to revolutionize the field of diagnostic AP. Historically, AP has been slow to adopt digital technology, but this is changing rapidly, with many centers worldwide transitioning to DP. Coupled with advanced techniques of AI such as deep learning and machine learning, DP is likely to transform histopathology from a subjective field to an objective, efficient, and transparent discipline. AI is increasingly integrated into GI pathology, offering numerous advancements and improvements in overall diagnostic accuracy, efficiency, and patient care. Specifically, AI in GI pathology enhances diagnostic accuracy, streamlines workflows, provides predictive insights, integrates multimodal data, supports research, and aids in education and training, ultimately improving patient care and outcomes. This review summarized the latest developments in the role and scope of AI in AP with a focus on GI pathology. The main aim was to provide updates and create awareness among the pathology community.
Affiliation(s)
- Muhammed Mubarak
- Department of Histopathology, Sindh Institute of Urology and Transplantation, Karachi 74200, Sindh, Pakistan
- Rahma Rashid
- Department of Histopathology, Sindh Institute of Urology and Transplantation, Karachi 74200, Sindh, Pakistan
- Fnu Sapna
- Department of Pathology, Montefiore Medical Center, The University Hospital for Albert Einstein School of Medicine, Bronx, NY 10461, United States
- Shaheera Shakeel
- Department of Histopathology, Sindh Institute of Urology and Transplantation, Karachi 74200, Sindh, Pakistan

7. Cai C, Zhou Y, Jiao Y, Li L, Xu J. Prognostic Analysis Combining Histopathological Features and Clinical Information to Predict Colorectal Cancer Survival from Whole-Slide Images. Dig Dis Sci 2024; 69:2985-2995. [PMID: 38837111] [DOI: 10.1007/s10620-024-08501-x]
Abstract
BACKGROUND Colorectal cancer (CRC) is a malignant tumor within the digestive tract with both a high incidence rate and mortality. Early detection and intervention could improve patient clinical outcomes and survival. METHODS This study computationally investigates a set of prognostic tissue and cell features from diagnostic tissue slides. With the combination of clinical prognostic variables, the pathological image features could predict the prognosis in CRC patients. Our CRC prognosis prediction pipeline sequentially consisted of three modules: (1) A MultiTissue Net to delineate outlines of different tissue types within the WSI of CRC for further ROI selection by pathologists. (2) Development of three-level quantitative image metrics related to tissue compositions, cell shape, and hidden features from a deep network. (3) Fusion of multi-level features to build a prognostic CRC model for predicting survival for CRC. RESULTS Experimental results suggest that each group of features has a particular relationship with the prognosis of patients in the independent test set. In the fusion features combination experiment, the accuracy rate of predicting patients' prognosis and survival status is 81.52%, and the AUC value is 0.77. CONCLUSION This paper constructs a model that can predict the postoperative survival of patients by using image features and clinical information. Some features were found to be associated with the prognosis and survival of patients.
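The late-fusion step described above can be sketched with scikit-learn by concatenating the feature groups with clinical variables and fitting a survival-status classifier. The synthetic arrays stand in for the real feature extractors and cohort, and the choice of logistic regression is an illustrative assumption rather than the authors' model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
tissue_feats = rng.normal(size=(n, 8))    # tissue-composition metrics (synthetic stand-in)
cell_feats = rng.normal(size=(n, 12))     # cell-shape metrics (synthetic stand-in)
deep_feats = rng.normal(size=(n, 32))     # hidden features from a deep network (synthetic)
clinical = rng.normal(size=(n, 4))        # encoded clinical prognostic variables (synthetic)
y = rng.integers(0, 2, size=n)            # survival status label (synthetic)

# Fuse the multi-level features with clinical information by concatenation.
X = np.concatenate([tissue_feats, cell_feats, deep_feats, clinical], axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]
print("accuracy:", accuracy_score(y_te, model.predict(X_te)), "AUC:", roc_auc_score(y_te, prob))
```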
Affiliation(s)
- Chengfei Cai
- School of Automation, Nanjing University of Information Science and Technology, Nanjing, 210044, China.
- College of Information Engineering, Taizhou University, Taizhou, 225300, China.
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China.
- Yangshu Zhou
- Department of Pathology, Zhujiang Hospital of Southern Medical University, Guangzhou, 510280, China
- Yiping Jiao
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China
- Liang Li
- Department of Pathology, Nanfang Hospital of Southern Medical University, Guangzhou, 510515, China
- Jun Xu
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044, China

8. Li Q, Zhang X, Zhang J, Huang H, Li L, Guo C, Li W, Guo Y. Deep learning-based hyperspectral technique identifies metastatic lymph nodes in oral squamous cell carcinoma-A pilot study. Oral Dis 2024. [PMID: 39005220] [DOI: 10.1111/odi.15067]
Abstract
AIMS To establish a system based on hyperspectral imaging and deep learning for the detection of cancer cells in metastatic lymph nodes. MAIN METHODS Consecutive sections of metastatic lymph nodes from 45 oral squamous cell carcinoma (OSCC) patients were collected. An improved ResUNet algorithm was established for deep learning to analyze the spectral curve differences between cancer cells and lymphocytes, and between tumor tissue and normal tissue. KEY FINDINGS Cancer cells, lymphocytes, and erythrocytes in the metastatic lymph nodes could be distinguished based on hyperspectral images, with an overall accuracy (OA) of 87.30% and an average accuracy (AA) of 85.46%. Cancerous areas could be recognized by hyperspectral imaging and deep learning, with an average intersection over union (IoU) of 0.6253 and an accuracy of 0.7692. SIGNIFICANCE This study indicated that deep learning-based hyperspectral techniques can identify tumor tissue in OSCC metastatic lymph nodes, achieving high accuracy of pathological diagnosis and high work efficiency while reducing the work burden. However, these are preliminary results limited to a small sample.
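For reference, the metrics quoted above (overall accuracy, average accuracy, and per-class IoU) can be computed from a pixel-level confusion matrix as in the following numpy sketch; the class names and counts are illustrative only.

```python
import numpy as np

classes = ["cancer cell", "lymphocyte", "erythrocyte"]
# Toy confusion matrix (illustrative): rows = ground truth, columns = prediction (pixel counts).
cm = np.array([[900, 60, 40],
               [70, 800, 30],
               [50, 40, 700]], dtype=float)

oa = np.trace(cm) / cm.sum()                                 # overall accuracy
per_class_recall = np.diag(cm) / cm.sum(axis=1)
aa = per_class_recall.mean()                                 # average accuracy
iou = np.diag(cm) / (cm.sum(axis=1) + cm.sum(axis=0) - np.diag(cm))

print(f"OA={oa:.4f}  AA={aa:.4f}")
for name, value in zip(classes, iou):
    print(f"IoU[{name}]={value:.4f}")
```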
Affiliation(s)
- Qingxiang Li
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China
- National Center for Stomatology, Beijing, China
- National Clinical Research Center for Oral Diseases, Beijing, China
- National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China
- Xueyu Zhang
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
- Beijing Key Laboratory of Fractional Signals and Systems, Beijing, China
- Jianyun Zhang
- National Center for Stomatology, Beijing, China
- National Clinical Research Center for Oral Diseases, Beijing, China
- National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China
- Department of Oral Pathology, Peking University School and Hospital of Stomatology, Beijing, China
- Hongyuan Huang
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China
- National Center for Stomatology, Beijing, China
- National Clinical Research Center for Oral Diseases, Beijing, China
- National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China
- Liangliang Li
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
- Beijing Key Laboratory of Fractional Signals and Systems, Beijing, China
- Chuanbin Guo
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China
- National Center for Stomatology, Beijing, China
- National Clinical Research Center for Oral Diseases, Beijing, China
- National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China
- Wei Li
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
- Beijing Key Laboratory of Fractional Signals and Systems, Beijing, China
- Yuxing Guo
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China
- National Center for Stomatology, Beijing, China
- National Clinical Research Center for Oral Diseases, Beijing, China
- National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing, China

9. Chang J, Hatfield B. Advancements in computer vision and pathology: Unraveling the potential of artificial intelligence for precision diagnosis and beyond. Adv Cancer Res 2024; 161:431-478. [PMID: 39032956] [DOI: 10.1016/bs.acr.2024.05.006]
Abstract
The integration of computer vision into pathology through slide digitalization represents a transformative leap in the field's evolution. Traditional pathology methods, while reliable, are often time-consuming and susceptible to intra- and interobserver variability. In contrast, computer vision, empowered by artificial intelligence (AI) and machine learning (ML), promises revolutionary changes, offering consistent, reproducible, and objective results with ever-increasing speed and scalability. The applications of advanced algorithms and deep learning architectures like CNNs and U-Nets augment pathologists' diagnostic capabilities, opening new frontiers in automated image analysis. As these technologies mature and integrate into digital pathology workflows, they are poised to provide deeper insights into disease processes, quantify and standardize biomarkers, enhance patient outcomes, and automate routine tasks, reducing pathologists' workload. However, this transformative force calls for cross-disciplinary collaboration between pathologists, computer scientists, and industry innovators to drive research and development. While acknowledging its potential, this chapter addresses the limitations of AI in pathology, encompassing technical, practical, and ethical considerations during development and implementation.
Affiliation(s)
- Justin Chang
- Virginia Commonwealth University Health System, Richmond, VA, United States
- Bryce Hatfield
- Virginia Commonwealth University Health System, Richmond, VA, United States.

10. Juan Ramon A, Parmar C, Carrasco-Zevallos OM, Csiszer C, Yip SSF, Raciti P, Stone NL, Triantos S, Quiroz MM, Crowley P, Batavia AS, Greshock J, Mansi T, Standish KA. Development and deployment of a histopathology-based deep learning algorithm for patient prescreening in a clinical trial. Nat Commun 2024; 15:4690. [PMID: 38824132] [PMCID: PMC11144215] [DOI: 10.1038/s41467-024-49153-9]
Abstract
Accurate identification of genetic alterations in tumors, such as Fibroblast Growth Factor Receptor alterations, is crucial for treatment with targeted therapies; however, molecular testing can delay patient care due to the time and tissue required. Successful development, validation, and deployment of an AI-based, biomarker-detection algorithm could reduce screening cost and accelerate patient recruitment. Here, we develop a deep-learning algorithm using >3000 H&E-stained whole slide images from patients with advanced urothelial cancers, optimized for high sensitivity to avoid ruling out trial-eligible patients. The algorithm is validated on a dataset of 350 patients, achieving an area under the curve of 0.75, specificity of 31.8% at 88.7% sensitivity, and a projected 28.7% reduction in molecular testing. We successfully deploy the system in a non-interventional study comprising 89 global clinical study sites and demonstrate its potential to prioritize/deprioritize molecular testing resources and provide substantial cost savings in the drug development and clinical settings.
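A sensitivity-first operating point like the one reported above can be chosen as in the following numpy sketch: take the highest threshold that still meets the target sensitivity, then read off specificity and the fraction of patients who would be spared molecular testing. The scores and labels are synthetic placeholders, not the study data.

```python
import numpy as np

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=1000)                      # synthetic: 1 = biomarker-positive
scores = np.clip(rng.normal(0.35 + 0.25 * labels, 0.2), 0, 1)  # synthetic algorithm scores


def operating_point(scores, labels, target_sensitivity=0.887):
    """Highest threshold meeting the target sensitivity, with its specificity and rule-out rate."""
    for t in np.unique(scores)[::-1]:                        # scan thresholds high to low
        pred_pos = scores >= t
        sens = pred_pos[labels == 1].mean()
        if sens >= target_sensitivity:
            spec = (~pred_pos)[labels == 0].mean()
            ruled_out = (~pred_pos).mean()                   # patients skipping molecular testing
            return t, sens, spec, ruled_out
    return scores.min(), 1.0, 0.0, 0.0


t, sens, spec, ruled_out = operating_point(scores, labels)
print(f"threshold={t:.3f} sensitivity={sens:.3f} specificity={spec:.3f} "
      f"projected reduction in testing={ruled_out:.1%}")
```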
Affiliation(s)
- Albert Juan Ramon
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, San Diego, CA, USA.
- Chaitanya Parmar
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, San Diego, CA, USA
- Carlos Csiszer
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Titusville, NJ, USA
- Stephen S F Yip
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Cambridge, MA, USA
- Patricia Raciti
- Janssen R&D, LLC, a Johnson & Johnson Company. Oncology, Spring House, PA, USA
- Nicole L Stone
- Janssen R&D, LLC, a Johnson & Johnson Company. Oncology, Spring House, PA, USA
- Spyros Triantos
- Janssen R&D, LLC, a Johnson & Johnson Company. Oncology, Spring House, PA, USA
- Michelle M Quiroz
- Janssen R&D, LLC, a Johnson & Johnson Company. Oncology, Spring House, PA, USA
- Patrick Crowley
- Janssen R&D, LLC, a Johnson & Johnson Company. Global Development, High Wycombe, UK
- Ashita S Batavia
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Titusville, NJ, USA
- Joel Greshock
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Spring House, PA, USA
- Tommaso Mansi
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Titusville, NJ, USA
- Kristopher A Standish
- Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, San Diego, CA, USA

11. Zhou J, Song W, Liu Y, Yuan X. An efficient computational framework for gastrointestinal disorder prediction using attention-based transfer learning. PeerJ Comput Sci 2024; 10:e2059. [PMID: 38855223] [PMCID: PMC11157572] [DOI: 10.7717/peerj-cs.2059]
Abstract
Diagnosing gastrointestinal (GI) disorders, which affect parts of the digestive system such as the stomach and intestines, can be difficult even for experienced gastroenterologists due to the variety of ways these conditions present. Early diagnosis is critical for successful treatment, but the review process is time-consuming and labor-intensive. Computer-aided diagnostic (CAD) methods provide a solution by automating diagnosis, saving time, reducing workload, and lowering the likelihood of missing critical signs. In recent years, machine learning and deep learning approaches have been used to develop many CAD systems to address this issue. However, existing systems need to be improved for better safety and reliability on larger datasets before they can be used in medical diagnostics. In our study, we developed an effective CAD system for classifying eight types of GI images by combining transfer learning with an attention mechanism. Our experimental results show that ConvNeXt is an effective pre-trained network for feature extraction, and ConvNeXt+Attention (our proposed method) is a robust CAD system that outperforms other cutting-edge approaches. Our proposed method had an area under the receiver operating characteristic curve of 0.9997 and an area under the precision-recall curve of 0.9973, indicating excellent performance. The conclusion regarding the effectiveness of the system was also supported by the values of other evaluation metrics.
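The "pretrained backbone plus attention" pattern described above might look like the following PyTorch/torchvision sketch: a frozen ImageNet-pretrained ConvNeXt feature extractor with a small attention-pooling head for eight GI image classes. The head design and hyperparameters are assumptions, not the authors' exact ConvNeXt+Attention architecture.

```python
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights


class AttentionPooledConvNeXt(nn.Module):
    def __init__(self, num_classes: int = 8):
        super().__init__()
        backbone = convnext_tiny(weights=ConvNeXt_Tiny_Weights.IMAGENET1K_V1)
        self.features = backbone.features                 # (B, 768, H/32, W/32)
        for p in self.features.parameters():
            p.requires_grad = False                       # transfer learning: freeze the backbone
        self.attention = nn.Conv2d(768, 1, kernel_size=1)  # head design is an assumption
        self.classifier = nn.Linear(768, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.features(x)                               # backbone feature map
        w = torch.softmax(self.attention(f).flatten(2), dim=-1)  # (B, 1, HW) attention weights
        pooled = (f.flatten(2) * w).sum(dim=-1)            # attention-weighted pooling
        return self.classifier(pooled)


if __name__ == "__main__":
    model = AttentionPooledConvNeXt().eval()
    with torch.no_grad():
        logits = model(torch.randn(2, 3, 224, 224))
    print(logits.shape)                                    # torch.Size([2, 8])
```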
Affiliation(s)
- Jiajie Zhou
- Huai’an First People’s Hospital, Nanjing Medical University, Jiangsu, China
- Wei Song
- Huai’an First People’s Hospital, Nanjing Medical University, Jiangsu, China
- Yeliu Liu
- Huai’an First People’s Hospital, Nanjing Medical University, Jiangsu, China
- Xiaoming Yuan
- Huai’an First People’s Hospital, Nanjing Medical University, Jiangsu, China

12. Bui DC, Song B, Kim K, Kwak JT. DAX-Net: A dual-branch dual-task adaptive cross-weight feature fusion network for robust multi-class cancer classification in pathology images. Comput Methods Programs Biomed 2024; 248:108112. [PMID: 38479146] [DOI: 10.1016/j.cmpb.2024.108112]
Abstract
BACKGROUND AND OBJECTIVE Multi-class cancer classification has been extensively studied in digital and computational pathology due to its importance in clinical decision-making. Numerous computational tools have been proposed for various types of cancer classification, many of them built on convolutional neural networks. Recently, Transformer-style networks have been shown to be effective for cancer classification. Herein, we present a hybrid design that leverages both convolutional neural networks and the Transformer architecture to obtain superior performance in cancer classification. METHODS We propose a dual-branch dual-task adaptive cross-weight feature fusion network, called DAX-Net, which exploits heterogeneous feature representations from the convolutional neural network and Transformer network, adaptively combines them to boost their representation power, and conducts cancer classification as both categorical classification and ordinal classification. For an efficient and effective optimization of the proposed model, we introduce two loss functions that are tailored to the two classification tasks. RESULTS To evaluate the proposed method, we employed colorectal and prostate cancer datasets, each of which contains both in-domain and out-of-domain test sets. For colorectal cancer, the proposed method obtained an accuracy of 88.4%, a quadratic kappa score of 0.945, and an F1 score of 0.831 for the in-domain test set, and 84.4%, 0.910, and 0.768 for the out-of-domain test set. For prostate cancer, it achieved an accuracy of 71.6%, a kappa score of 0.635, and an F1 score of 0.655 for the in-domain test set; 79.2% accuracy, 0.721 kappa score, and 0.686 F1 score for the first out-of-domain test set; and 58.1% accuracy, 0.564 kappa score, and 0.493 F1 score for the second out-of-domain test set. It is worth noting that the proposed method outperformed the other competitors by significant margins, in particular with respect to the out-of-domain test sets. CONCLUSIONS The experimental results demonstrate that the proposed method is not only accurate but also robust to varying conditions of the test sets in comparison to several related methods. These results suggest that the proposed method can facilitate automated cancer classification in various clinical settings.
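To illustrate the dual-task formulation described above, the following PyTorch sketch combines a categorical cross-entropy head with a simple ordinal head based on cumulative binary targets; the loss weighting and head sizes are illustrative, not the task-tailored losses proposed for DAX-Net.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_grades = 4
features = torch.randn(8, 256)                        # toy fused features from the two branches
labels = torch.randint(0, num_grades, (8,))

categorical_head = nn.Linear(256, num_grades)
ordinal_head = nn.Linear(256, num_grades - 1)          # logits for P(grade > k), k = 0..K-2

# Task 1: categorical classification.
ce_loss = F.cross_entropy(categorical_head(features), labels)

# Task 2: ordinal classification via cumulative binary targets: target[:, k] = 1 if label > k.
ks = torch.arange(num_grades - 1).unsqueeze(0)         # (1, K-1)
ordinal_targets = (labels.unsqueeze(1) > ks).float()   # (B, K-1)
ord_loss = F.binary_cross_entropy_with_logits(ordinal_head(features), ordinal_targets)

total_loss = ce_loss + 0.5 * ord_loss                  # weighting is an arbitrary choice here
print(float(ce_loss), float(ord_loss), float(total_loss))
```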
Affiliation(s)
- Doanh C Bui
- School of Electrical Engineering, Korea University, Seoul, 02841, Republic of Korea
- Boram Song
- Department of Pathology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, 03181, Republic of Korea
- Kyungeun Kim
- Department of Pathology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, 03181, Republic of Korea
- Jin Tae Kwak
- School of Electrical Engineering, Korea University, Seoul, 02841, Republic of Korea.

13. Yilmaz F, Brickman A, Najdawi F, Yakirevich E, Egger R, Resnick MB. Advancing Artificial Intelligence Integration Into the Pathology Workflow: Exploring Opportunities in Gastrointestinal Tract Biopsies. J Transl Med 2024; 104:102043. [PMID: 38431118] [DOI: 10.1016/j.labinv.2024.102043]
Abstract
This review aims to present a comprehensive overview of the current landscape of artificial intelligence (AI) applications in the analysis of tubular gastrointestinal biopsies. These publications cover a spectrum of conditions, ranging from inflammatory ailments to malignancies. Moving beyond the conventional diagnosis based on hematoxylin and eosin-stained whole-slide images, the review explores additional implications of AI, including its involvement in interpreting immunohistochemical results, molecular subtyping, and the identification of cellular spatial biomarkers. Furthermore, the review examines how AI can contribute to enhancing the quality and control of diagnostic processes, introducing new workflow options, and addressing the limitations and caveats associated with current AI platforms in this context.
Affiliation(s)
- Fazilet Yilmaz
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
- Arlen Brickman
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
- Fedaa Najdawi
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
- Evgeny Yakirevich
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island
- Murray B Resnick
- The Warren Alpert Medical School of Brown University, Rhode Island Hospital, Providence, Rhode Island.

14. Lotter W, Hassett MJ, Schultz N, Kehl KL, Van Allen EM, Cerami E. Artificial Intelligence in Oncology: Current Landscape, Challenges, and Future Directions. Cancer Discov 2024; 14:711-726. [PMID: 38597966] [PMCID: PMC11131133] [DOI: 10.1158/2159-8290.cd-23-1199]
Abstract
Artificial intelligence (AI) in oncology is advancing beyond algorithm development to integration into clinical practice. This review describes the current state of the field, with a specific focus on clinical integration. AI applications are structured according to cancer type and clinical domain, focusing on the four most common cancers and tasks of detection, diagnosis, and treatment. These applications encompass various data modalities, including imaging, genomics, and medical records. We conclude with a summary of existing challenges, evolving solutions, and potential future directions for the field. SIGNIFICANCE AI is increasingly being applied to all aspects of oncology, where several applications are maturing beyond research and development to direct clinical integration. This review summarizes the current state of the field through the lens of clinical translation along the clinical care continuum. Emerging areas are also highlighted, along with common challenges, evolving solutions, and potential future directions for the field.
Affiliation(s)
- William Lotter
- Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Pathology, Brigham and Women’s Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Michael J. Hassett
- Harvard Medical School, Boston, MA, USA
- Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Nikolaus Schultz
- Marie-Josée and Henry R. Kravis Center for Molecular Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Kenneth L. Kehl
- Harvard Medical School, Boston, MA, USA
- Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Eliezer M. Van Allen
- Harvard Medical School, Boston, MA, USA
- Division of Population Sciences, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, MA, USA
- Cancer Program, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Ethan Cerami
- Department of Data Science, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA

15. Yengec-Tasdemir SB, Aydin Z, Akay E, Dogan S, Yilmaz B. An effective colorectal polyp classification for histopathological images based on supervised contrastive learning. Comput Biol Med 2024; 172:108267. [PMID: 38479197] [DOI: 10.1016/j.compbiomed.2024.108267]
Abstract
Early detection of colon adenomatous polyps is pivotal in reducing colon cancer risk. In this context, accurately distinguishing between adenomatous polyp subtypes, especially tubular and tubulovillous, from hyperplastic variants is crucial. This study introduces a cutting-edge computer-aided diagnosis system optimized for this task. Our system employs advanced Supervised Contrastive learning to ensure precise classification of colon histopathology images. Significantly, we have integrated the Big Transfer model, which has gained prominence for its exemplary adaptability to visual tasks in medical imaging. Our novel approach discerns between in-class and out-of-class images, thereby elevating its discriminatory power for polyp subtypes. We validated our system using two datasets: a specially curated one and the publicly accessible UniToPatho dataset. The results reveal that our model markedly surpasses traditional deep convolutional neural networks, registering classification accuracies of 87.1% and 70.3% for the custom and UniToPatho datasets, respectively. Such results emphasize the transformative potential of our model in polyp classification endeavors.
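For orientation, a supervised contrastive objective of the kind mentioned above can be written compactly in PyTorch as below (a SupCon-style loss over L2-normalized embeddings); the temperature and toy batch are illustrative choices, not the paper's training configuration.

```python
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """SupCon-style loss: pull same-label embeddings together, push others apart."""
    z = F.normalize(embeddings, dim=1)                        # (B, D), L2-normalized
    sim = z @ z.T / temperature                               # pairwise similarities
    batch = z.size(0)
    self_mask = torch.eye(batch, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float("-inf"))           # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    numer = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob)).sum(dim=1)
    loss = -(numer / pos_counts)
    return loss[pos_mask.any(dim=1)].mean()                   # average over anchors with >= 1 positive


# Toy batch: 128-d patch embeddings with hyperplastic / tubular / tubulovillous labels (placeholders).
embeddings = torch.randn(16, 128)
labels = torch.randint(0, 3, (16,))
print(float(supervised_contrastive_loss(embeddings, labels)))  # temperature is an illustrative choice
```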
Affiliation(s)
- Sena Busra Yengec-Tasdemir
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT39DT, United Kingdom.
- Zafer Aydin
- Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey; Department of Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey
- Ebru Akay
- Pathology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
- Serkan Dogan
- Gastroenterology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
- Bulent Yilmaz
- Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey; Department of Electrical Engineering, Gulf University for Science and Technology, Mishref, 40005, Kuwait

16. Oon ML, Syn NL, Tan CL, Tan KB, Ng SB. Bridging bytes and biopsies: A comparative analysis of ChatGPT and histopathologists in pathology diagnosis and collaborative potential. Histopathology 2024; 84:601-613. [PMID: 38032062] [DOI: 10.1111/his.15100]
Abstract
BACKGROUND AND AIMS ChatGPT is a powerful artificial intelligence (AI) chatbot developed by the OpenAI research laboratory which is capable of analysing human input and generating human-like responses. Early research into the potential application of ChatGPT in healthcare has focused mainly on clinical and administrative functions. The diagnostic ability and utility of ChatGPT in histopathology is not well defined. We benchmarked the performance of ChatGPT against pathologists in diagnostic histopathology and evaluated the collaborative potential between pathologists and ChatGPT to deliver more accurate diagnoses. METHODS AND RESULTS In Part 1 of the study, pathologists and ChatGPT were subjected to a series of questions encompassing common diagnostic conundrums in histopathology. For Part 2, pathologists reviewed a series of challenging virtual slides and provided their diagnoses before and after consultation with ChatGPT. We found that ChatGPT performed worse than pathologists in reaching the correct diagnosis. Consultation with ChatGPT provided limited help, and the information generated by ChatGPT depended on the prompts provided by the pathologists and was not always correct. Finally, we surveyed pathologists, who rated the diagnostic accuracy of ChatGPT poorly but found it useful as an advanced search engine. CONCLUSIONS The use of ChatGPT4 as a diagnostic tool in histopathology is limited by its inherent shortcomings. Judicious evaluation of the information and histopathology diagnoses generated by ChatGPT4 is essential, as the tool cannot replace the acuity and judgement of a pathologist. However, future advances in generative AI may expand its role in the field of histopathology.
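As an illustration of how such a benchmark query might be posed programmatically, the sketch below sends a histopathology vignette to a chat-completion endpoint using the openai Python client; the model name, prompt, and vignette are placeholders, and this is not the authors' study protocol.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vignette = (
    "A 62-year-old patient has a colonic polyp. Histology shows crypts lined by "
    "pseudostratified columnar cells with elongated, hyperchromatic nuclei and reduced "
    "mucin, without high-grade features or invasion. What is the most likely diagnosis?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are assisting with a histopathology quiz."},
        {"role": "user", "content": vignette},
    ],
)
print(response.choices[0].message.content)
```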
Affiliation(s)
- Ming Liang Oon
- Department of Pathology, National University Hospital, Singapore, Singapore
- Nicholas L Syn
- Department of Pathology, National University Hospital, Singapore, Singapore
- Char Loo Tan
- Department of Pathology, National University Hospital, Singapore, Singapore
- Kong-Bing Tan
- Department of Pathology, National University Hospital, Singapore, Singapore
- Department of Pathology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Siok-Bian Ng
- Department of Pathology, National University Hospital, Singapore, Singapore
- Department of Pathology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Cancer Science Institute of Singapore, National University of Singapore, Singapore, Singapore

17. Hua H, Zhou Y, Li W, Zhang J, Deng Y, Khoo BL. Microfluidics-based patient-derived disease detection tool for deep learning-assisted precision medicine. Biomicrofluidics 2024; 18:014101. [PMID: 38223546] [PMCID: PMC10787641] [DOI: 10.1063/5.0172146]
Abstract
Cancer spatial and temporal heterogeneity fuels resistance to therapies. To realize the routine assessment of cancer prognosis and treatment, we demonstrate the development of an Intelligent Disease Detection Tool (IDDT), a microfluidic-based tumor model integrated with deep learning-assisted algorithmic analysis. IDDT was clinically validated with liquid blood biopsy samples (n = 71) from patients with various types of cancers (e.g., breast, gastric, and lung cancer) and healthy donors, requiring low sample volume (∼200 μl) and a high-throughput 3D tumor culturing system (∼300 tumor clusters). To support automated algorithmic analysis, intelligent decision-making, and precise segmentation, we designed and developed an integrative deep neural network, which includes Mask Region-Based Convolutional Neural Network (Mask R-CNN), vision transformer, and Segment Anything Model (SAM). Our approach significantly reduces the manual labeling time by up to 90% with a high mean Intersection Over Union (mIoU) of 0.902 and immediate results (<2 s per image) for clinical cohort classification. The IDDT can accurately stratify healthy donors (n = 12) and cancer patients (n = 55) within their respective treatment cycle and cancer stage, resulting in high precision (∼99.3%) and high sensitivity (∼98%). We envision that our patient-centric IDDT provides an intelligent, label-free, and cost-effective approach to help clinicians make precise medical decisions and tailor treatment strategies for each patient.
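One building block named above, Mask R-CNN instance segmentation, can be exercised with torchvision as in the sketch below; the weights here are COCO-pretrained stand-ins (the IDDT model would be fine-tuned on annotated tumor-cluster images), and the image source and thresholds are placeholders.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn, MaskRCNN_ResNet50_FPN_Weights

# COCO-pretrained weights as a stand-in; a tumor-cluster model would be fine-tuned.
model = maskrcnn_resnet50_fpn(weights=MaskRCNN_ResNet50_FPN_Weights.DEFAULT).eval()

image = torch.rand(3, 512, 512)               # stand-in for a microscopy frame scaled to [0, 1]
with torch.no_grad():
    output = model([image])[0]                 # dict with boxes, labels, scores, masks

keep = output["scores"] > 0.5                  # arbitrary confidence threshold
masks = output["masks"][keep, 0] > 0.5         # (N, H, W) boolean instance masks
print(f"{int(keep.sum())} instances; total segmented pixels: {int(masks.sum())}")
```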
Affiliation(s)
- Yunlan Zhou
- Department of Clinical Laboratory, Xinhua Hospital, Shanghai Jiaotong University School of Medicine, Shanghai 200092, China
- Jing Zhang
- Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong 999077, China
- Yanlin Deng
- Department of Biomedical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong 999077, China
- Bee Luan Khoo
- Authors to whom correspondence should be addressed.

18. Hatta S, Ichiuji Y, Mabu S, Kugler M, Hontani H, Okoshi T, Fuse H, Kawada T, Kido S, Imamura Y, Naiki H, Inai K. Improved artificial intelligence discrimination of minor histological populations by supplementing with color-adjusted images. Sci Rep 2023; 13:19068. [PMID: 37925580] [PMCID: PMC10625567] [DOI: 10.1038/s41598-023-46472-7]
Abstract
Despite dedicated research on artificial intelligence (AI) for pathological images, the construction of AI applicable to histopathological tissue subtypes is limited by insufficient dataset collection owing to disease infrequency. Here, we present a solution that adds supplemental tissue array (TA) images, adjusted to the tonality of the main data using a cycle-consistent generative adversarial network (CycleGAN), to the training data for rare tissue types. F1 scores of rare tissue types that constitute < 1.2% of the training data were significantly increased, through improved recall values, after adding color-adjusted TA images constituting < 0.65% of the total training patches. The detector also discriminated clinical images from two distinct hospitals equally well, and this capability was further improved by color-correcting the test data before AI identification (F1 score from 45.2 ± 27.1 to 77.1 ± 10.3, p < 0.01). These methods also classified intraoperative frozen sections, while excessive supplementation paradoxically decreased F1 scores. These results identify strategies for building an AI that preserves the imbalance between training data classes with large differences in actual disease frequencies, which is important for constructing AI for practical histopathological classification.
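The bookkeeping behind this supplementation strategy can be sketched as follows: compute how many color-adjusted tissue-array patches keep a rare class at, rather than above, a chosen share of the training set (over-supplementation is reported to hurt), and track per-class recall and F1 with scikit-learn. The counts, target share, and toy labels are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.metrics import f1_score, recall_score


def n_supplement(n_rare: int, n_total: int, target_share: float = 0.0065) -> int:
    """Patches to add so the rare class reaches (at most) target_share of all patches."""
    # Solve (n_rare + x) / (n_total + x) = target_share for x.
    x = (target_share * n_total - n_rare) / (1.0 - target_share)
    return max(0, int(round(x)))


# Illustrative counts: 120 rare-class patches in a 100,000-patch training set.
print("supplemental patches:", n_supplement(n_rare=120, n_total=100_000))

# Per-class recall and F1 for a toy 3-class prediction, as used to track rare classes.
y_true = np.array([0] * 90 + [1] * 90 + [2] * 20)
y_pred = np.array([0] * 85 + [1] * 5 + [1] * 88 + [2] * 2 + [2] * 12 + [0] * 8)
print("recall per class:", recall_score(y_true, y_pred, average=None))
print("F1 per class:    ", f1_score(y_true, y_pred, average=None))
```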
Affiliation(s)
- Satomi Hatta
- Division of Molecular Pathology, Department of Pathological Sciences, University of Fukui, 23-3 Matsuoka-Shimoaizuki, Eiheiji, Fukui, 910-1193, Japan
- Division of Diagnostic/Surgical Pathology, University of Fukui Hospital, Eiheiji, Japan
- Yoshihito Ichiuji
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi, Japan
- Shingo Mabu
- Graduate School of Sciences and Technology for Innovation, Yamaguchi University, Yamaguchi, Japan
- Mauricio Kugler
- Department of Computer Science, Nagoya Institute of Technology, Nagoya, Japan
- Hidekata Hontani
- Department of Computer Science, Nagoya Institute of Technology, Nagoya, Japan
- Tadakazu Okoshi
- Department of Pathology, Fukui Red Cross Hospital, Fukui, Japan
- Haruki Fuse
- Department of Clinical Inspection, Maizuru Kyosai Hospital, Maizuru, Japan
- Takako Kawada
- Department of Clinical Inspection, Maizuru Kyosai Hospital, Maizuru, Japan
- Shoji Kido
- Department of Artificial Intelligence Diagnostic Radiology, Osaka University Graduate School of Medicine, Suita, Japan
- Yoshiaki Imamura
- Division of Diagnostic/Surgical Pathology, University of Fukui Hospital, Eiheiji, Japan
- Hironobu Naiki
- Division of Molecular Pathology, Department of Pathological Sciences, University of Fukui, 23-3 Matsuoka-Shimoaizuki, Eiheiji, Fukui, 910-1193, Japan
- Kunihiro Inai
- Division of Molecular Pathology, Department of Pathological Sciences, University of Fukui, 23-3 Matsuoka-Shimoaizuki, Eiheiji, Fukui, 910-1193, Japan.

19. Graham S, Minhas F, Bilal M, Ali M, Tsang YW, Eastwood M, Wahab N, Jahanifar M, Hero E, Dodd K, Sahota H, Wu S, Lu W, Azam A, Benes K, Nimir M, Hewitt K, Bhalerao A, Robinson A, Eldaly H, Raza SEA, Gopalakrishnan K, Snead D, Rajpoot N. Screening of normal endoscopic large bowel biopsies with interpretable graph learning: a retrospective study. Gut 2023; 72:1709-1721. [PMID: 37173125] [PMCID: PMC10423541] [DOI: 10.1136/gutjnl-2023-329512]
Abstract
OBJECTIVE To develop an interpretable artificial intelligence algorithm to rule out normal large bowel endoscopic biopsies, saving pathologist resources and helping with early diagnosis. DESIGN A graph neural network was developed incorporating pathologist domain knowledge to classify 6591 whole-slide images (WSIs) of endoscopic large bowel biopsies from 3291 patients (approximately 54% female, 46% male) as normal or abnormal (non-neoplastic and neoplastic) using clinically driven interpretable features. One UK National Health Service (NHS) site was used for model training and internal validation. External validation was conducted on data from two other NHS sites and one Portuguese site. RESULTS Model training and internal validation were performed on 5054 WSIs of 2080 patients, resulting in an area under the receiver operating characteristic curve (AUC-ROC) of 0.98 (SD=0.004) and an AUC-precision-recall (PR) of 0.98 (SD=0.003). The performance of the model, named Interpretable Gland-Graphs using a Neural Aggregator (IGUANA), was consistent in testing over 1537 WSIs of 1211 patients from three independent external datasets, with mean AUC-ROC=0.97 (SD=0.007) and AUC-PR=0.97 (SD=0.005). At a high sensitivity threshold of 99%, the proposed model can reduce the number of normal slides to be reviewed by a pathologist by approximately 55%. IGUANA also provides an explainable output highlighting potential abnormalities in a WSI in the form of a heatmap, as well as numerical values associating the model prediction with various histological features. CONCLUSION The model achieved consistently high accuracy, showing its potential in optimising increasingly scarce pathologist resources. Explainable predictions can guide pathologists in their diagnostic decision-making and help boost their confidence in the algorithm, paving the way for its future clinical adoption.
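The gland-graph idea behind IGUANA can be illustrated with a minimal graph-neural-network sketch: glands become nodes with interpretable features, nearby glands are connected, and node embeddings are pooled into a slide-level abnormality score. The feature dimension, layer sizes, and toy graph below are illustrative assumptions and do not reproduce the published model.

```python
# Minimal sketch (not the authors' IGUANA code): gland-level features aggregated by a
# small graph network into a slide-level normal/abnormal logit.
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv, global_mean_pool


class GlandGraphClassifier(torch.nn.Module):
    def __init__(self, in_dim: int = 25, hidden: int = 64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)  # abnormality logit

    def forward(self, data: Data) -> torch.Tensor:
        h = F.relu(self.conv1(data.x, data.edge_index))
        h = F.relu(self.conv2(h, data.edge_index))
        slide_embedding = global_mean_pool(h, data.batch)  # one vector per WSI
        return self.head(slide_embedding).squeeze(-1)


# Toy example: 5 glands, 25 hand-crafted features each (sizes are placeholders).
graph = Data(
    x=torch.randn(5, 25),
    edge_index=torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]]),
    batch=torch.zeros(5, dtype=torch.long),
)
print(torch.sigmoid(GlandGraphClassifier()(graph)))
```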
Collapse
Affiliation(s)
- Simon Graham
- Department of Computer Science, University of Warwick, Coventry, UK
- Histofy Ltd, Birmingham, UK
| | - Fayyaz Minhas
- Department of Computer Science, University of Warwick, Coventry, UK
| | - Mohsin Bilal
- Department of Computer Science, University of Warwick, Coventry, UK
| | - Mahmoud Ali
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
| | - Yee Wah Tsang
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
| | - Mark Eastwood
- Department of Computer Science, University of Warwick, Coventry, UK
| | - Noorul Wahab
- Department of Computer Science, University of Warwick, Coventry, UK
| | | | - Emily Hero
- Department of Pathology, University Hospitals of Leicester NHS Trust, Leicester, UK
| | - Katherine Dodd
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
| | - Harvir Sahota
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
| | - Shaobin Wu
- Department of Pathology, East Suffolk and North Essex NHS Foundation Trust, Colchester, UK
| | - Wenqi Lu
- Department of Computer Science, University of Warwick, Coventry, UK
| | - Ayesha Azam
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
| | - Ksenija Benes
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Department of Pathology, Royal Wolverhampton Hospitals NHS Trust, Wolverhampton, UK
| | - Mohammed Nimir
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
| | - Katherine Hewitt
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
| | - Abhir Bhalerao
- Department of Computer Science, University of Warwick, Coventry, UK
| | - Andrew Robinson
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
| | - Hesham Eldaly
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
| | | | - Kishore Gopalakrishnan
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
| | - David Snead
- Histofy Ltd, Birmingham, UK
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Division of Biomedical Sciences, University of Warwick Warwick Medical School, Coventry, UK
| | - Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, UK
- Histofy Ltd, Birmingham, UK
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
| |
Collapse
|
20
|
Shen L, Gao C, Hu S, Kang D, Zhang Z, Xia D, Xu Y, Xiang S, Zhu Q, Xu G, Tang F, Yue H, Yu W, Zhang Z. Using Artificial Intelligence to Diagnose Osteoporotic Vertebral Fractures on Plain Radiographs. J Bone Miner Res 2023; 38:1278-1287. [PMID: 37449775 DOI: 10.1002/jbmr.4879] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Revised: 06/18/2023] [Accepted: 07/06/2023] [Indexed: 07/18/2023]
Abstract
Osteoporotic vertebral fracture (OVF) is a risk factor for morbidity and mortality in the elderly population, and accurate diagnosis is important for improving treatment outcomes. OVF diagnosis suffers from high misdiagnosis and underdiagnosis rates, as well as a high workload. Deep learning methods applied to plain radiographs, a simple, fast, and inexpensive examination, might solve this problem. We developed and validated a deep-learning-based vertebral fracture diagnostic system using the area loss ratio, which assisted a multitasking network in performing skeletal position detection and segmentation and in identifying and grading vertebral fractures. As the training set and internal validation set, we used 11,397 plain radiographs from six community centers in Shanghai. For the external validation set, 1276 participants were recruited from the outpatient clinic of the Shanghai Sixth People's Hospital (1276 plain radiographs). Radiologists reviewed all X-ray images and used the Genant semiquantitative tool for fracture diagnosis and grading as the ground truth data. Accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were used to evaluate diagnostic performance. The AI_OVF_SH system demonstrated high accuracy and computational speed in skeletal position detection and segmentation. In the internal validation set, the accuracy, sensitivity, and specificity with the AI_OVF_SH model were 97.41%, 84.08%, and 97.25%, respectively, for all fractures. The sensitivity and specificity for moderate fractures were 88.55% and 99.74%, respectively, and for severe fractures, they were 92.30% and 99.92%. In the external validation set, the accuracy, sensitivity, and specificity for all fractures were 96.85%, 83.35%, and 94.70%, respectively. For moderate fractures, the sensitivity and specificity were 85.61% and 99.85%, respectively, and 93.46% and 99.92% for severe fractures. Therefore, the AI_OVF_SH system is an efficient tool to assist radiologists and clinicians in improving the diagnosis of vertebral fractures. © 2023 The Authors. Journal of Bone and Mineral Research published by Wiley Periodicals LLC on behalf of American Society for Bone and Mineral Research (ASBMR).
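As a small illustration of the evaluation metrics reported above, the sketch below derives sensitivity, specificity, PPV, NPV, and accuracy from a binary confusion matrix with scikit-learn; the labels are synthetic and unrelated to the study's data.

```python
# Illustrative metric computation for a binary fracture/no-fracture task.
import numpy as np
from sklearn.metrics import confusion_matrix


def binary_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }


# Synthetic example: 1 = vertebral fracture present, 0 = absent.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 1])
print(binary_metrics(y_true, y_pred))
```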
Collapse
Affiliation(s)
- Li Shen
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Clinical Research Center, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Chao Gao
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Shundong Hu
- Department of Radiology, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Dan Kang
- Shanghai Jiyinghui Intelligent Technology Co, Shanghai, China
| | - Zhaogang Zhang
- Shanghai Jiyinghui Intelligent Technology Co, Shanghai, China
| | - Dongdong Xia
- Department of Orthopaedics, Ning Bo First Hospital, Zhejiang, China
| | - Yiren Xu
- Department of Radiology, Ning Bo First Hospital, Zhejiang, China
| | - Shoukui Xiang
- Department of Endocrinology and Metabolism, The First People's Hospital of Changzhou, Changzhou, China
| | - Qiong Zhu
- Kangjian Community Health Service Center, Shanghai, China
| | - GeWen Xu
- Kangjian Community Health Service Center, Shanghai, China
| | - Feng Tang
- Jinhui Community Health Service Center, Shanghai, China
| | - Hua Yue
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Wei Yu
- Department of Radiology, Peking Union Medical College Hospital, Beijing, China
| | - Zhenlin Zhang
- Department of Osteoporosis and Bone Disease, Shanghai Clinical Research Center of Bone Disease, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Clinical Research Center, Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
| |
Collapse
|
21
|
Kaczmarzyk JR, Gupta R, Kurc TM, Abousamra S, Saltz JH, Koo PK. ChampKit: A framework for rapid evaluation of deep neural networks for patch-based histopathology classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 239:107631. [PMID: 37271050 PMCID: PMC11093625 DOI: 10.1016/j.cmpb.2023.107631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Revised: 04/23/2023] [Accepted: 05/28/2023] [Indexed: 06/06/2023]
Abstract
BACKGROUND AND OBJECTIVE Histopathology is the gold standard for diagnosis of many cancers. Recent advances in computer vision, specifically deep learning, have facilitated the analysis of histopathology images for many tasks, including the detection of immune cells and microsatellite instability. However, it remains difficult to identify optimal models and training configurations for different histopathology classification tasks due to the abundance of available architectures and the lack of systematic evaluations. Our objective in this work is to present a software tool that addresses this need and enables robust, systematic evaluation of neural network models for patch classification in histology in a lightweight, easy-to-use package for both algorithm developers and biomedical researchers. METHODS Here we present ChampKit (Comprehensive Histopathology Assessment of Model Predictions toolKit): an extensible, fully reproducible evaluation toolkit that is a one-stop shop to train and evaluate deep neural networks for patch classification. ChampKit curates a broad range of public datasets. It enables training and evaluation of models supported by timm directly from the command line, without the need for users to write any code. External models are enabled through a straightforward API and minimal coding. As a result, ChampKit facilitates the evaluation of existing and new models and deep learning architectures on pathology datasets, making it more accessible to the broader scientific community. To demonstrate the utility of ChampKit, we establish baseline performance for a subset of possible models that could be employed with ChampKit, focusing on several popular deep learning models, namely ResNet18, ResNet50, and R26-ViT, a hybrid vision transformer. In addition, we compare each model trained either from random weight initialization or with transfer learning from ImageNet-pretrained models. For ResNet18, we also consider transfer learning from a self-supervised pretrained model. RESULTS The main result of this paper is the ChampKit software. Using ChampKit, we were able to systematically evaluate multiple neural networks across six datasets. We observed mixed results when evaluating the benefits of pretraining versus random initialization, with no clear benefit except in the low-data regime, where transfer learning was found to be beneficial. Surprisingly, we found that transfer learning from self-supervised weights rarely improved performance, which runs counter to other areas of computer vision. CONCLUSIONS Choosing the right model for a given digital pathology dataset is nontrivial. ChampKit provides a valuable tool to fill this gap by enabling the evaluation of hundreds of existing (or user-defined) deep learning models across a variety of pathology tasks. Source code and data for the tool are freely accessible at https://github.com/SBU-BMI/champkit.
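ChampKit builds on timm for its model zoo; the hedged sketch below shows the kind of timm-backed patch-classification loop that such a toolkit automates. This is not ChampKit's own CLI or API, and the directory layout, architecture name, and hyperparameters are placeholders.

```python
# Hedged sketch of a timm-backed patch classifier fine-tuned on a folder of
# histology patches. Paths and hyperparameters are placeholders.
import timm
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
# Assumes an ImageFolder layout: patches/train/<class_name>/<patch>.png
train_ds = datasets.ImageFolder("patches/train", transform=tfm)
loader = DataLoader(train_ds, batch_size=32, shuffle=True, num_workers=4)

# Any timm architecture name works here (e.g. "resnet18", "resnet50").
model = timm.create_model("resnet50", pretrained=True,
                          num_classes=len(train_ds.classes)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                    # one epoch, for brevity
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```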
Collapse
Affiliation(s)
- Jakub R Kaczmarzyk
- Department of Biomedical Informatics, Stony Brook Medicine, 101 Nicolls Rd, Stony Brook, 11794, NY, USA; Simons Center for Quantitative Biology, 1 Bungtown Rd, Cold Spring Harbor, 11724, NY, USA.
| | - Rajarsi Gupta
- Department of Biomedical Informatics, Stony Brook Medicine, 101 Nicolls Rd, Stony Brook, 11794, NY, USA
| | - Tahsin M Kurc
- Department of Biomedical Informatics, Stony Brook Medicine, 101 Nicolls Rd, Stony Brook, 11794, NY, USA
| | - Shahira Abousamra
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
| | - Joel H Saltz
- Department of Biomedical Informatics, Stony Brook Medicine, 101 Nicolls Rd, Stony Brook, 11794, NY, USA.
| | - Peter K Koo
- Simons Center for Quantitative Biology, 1 Bungtown Rd, Cold Spring Harbor, 11724, NY, USA.
| |
Collapse
|
22
|
Sharma A, Kumar R, Garg P. Deep learning-based prediction model for diagnosing gastrointestinal diseases using endoscopy images. Int J Med Inform 2023; 177:105142. [PMID: 37422969 DOI: 10.1016/j.ijmedinf.2023.105142] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2023] [Revised: 07/01/2023] [Accepted: 07/04/2023] [Indexed: 07/11/2023]
Abstract
BACKGROUND Gastrointestinal (GI) infections are quite common today around the world. Colonoscopy or wireless capsule endoscopy (WCE) are noninvasive methods for examining the whole GI tract for abnormalities. Nevertheless, it requires a great deal of time and effort for doctors to visualize a large number of images, and diagnosis is prone to human error. As a result, developing automated artificial intelligence (AI) based GI disease diagnosis methods is a crucial and emerging research area. AI-based prediction models may lead to improvements in the early diagnosis of gastrointestinal disorders, assessing severity, and healthcare systems for the benefit of patients as well as clinicians. The focus of this research is on the early diagnosis of gastrointestinal diseases using a convolutional neural network (CNN) to enhance diagnosis accuracy. METHODS Various CNN models (a baseline model and models using transfer learning (VGG16, InceptionV3, and ResNet50)) were trained on a benchmark image dataset, KVASIR, containing images from inside the GI tract, using n-fold cross-validation. The dataset comprises images of three disease states (polyps, ulcerative colitis, and esophagitis) as well as images of the healthy colon. Data augmentation strategies together with statistical measures were used to improve and evaluate the model's performance. Additionally, a test set comprising 1200 images was used to evaluate the model's accuracy and robustness. RESULTS The CNN model using the weights of the ResNet50 pre-trained model achieved the highest average accuracy of approximately 99.80% on the training set (100% precision and approximately 99% recall) and accuracies of 99.50% and 99.16% on the validation and additional test set, respectively, while diagnosing GI diseases. When compared to other existing systems, the proposed ResNet50 model outperforms them all. CONCLUSION The findings of this study indicate that AI-based prediction models using CNNs, specifically ResNet50, can improve diagnostic accuracy for detecting gastrointestinal polyps, ulcerative colitis, and esophagitis. The prediction model is available at https://github.com/anjus02/GI-disease-classification.git.
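To make the training protocol concrete, the sketch below sets up stratified n-fold cross-validation with an ImageNet-pretrained ResNet50 whose backbone is frozen for transfer learning; the paths, fold count, and hyperparameters are placeholders and the actual study configuration may differ.

```python
# Illustrative sketch (not the paper's code): stratified k-fold transfer learning
# of ResNet50 on a KVASIR-style image folder.
import numpy as np
import torch
from sklearn.model_selection import StratifiedKFold
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
dataset = datasets.ImageFolder("kvasir/train", transform=tfm)
labels = np.array([y for _, y in dataset.samples])

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros((len(labels), 1)), labels)):
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    for p in model.parameters():                 # freeze the pretrained backbone
        p.requires_grad = False
    model.fc = torch.nn.Linear(model.fc.in_features, len(dataset.classes))
    train_loader = DataLoader(Subset(dataset, train_idx), batch_size=32, shuffle=True)
    val_loader = DataLoader(Subset(dataset, val_idx), batch_size=32)
    # ... train model.fc on train_loader, then report fold accuracy on val_loader ...
```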
Collapse
Affiliation(s)
- Anju Sharma
- Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S.A.S. Nagar, Punjab 160062, India
| | - Rajnish Kumar
- Department of Veterinary Medicine and Surgery, College of Veterinary Medicine, University of Missouri, Columbia, MO, USA
| | - Prabha Garg
- Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S.A.S. Nagar, Punjab 160062, India.
| |
Collapse
|
23
|
Krishnan G, Singh S, Pathania M, Gosavi S, Abhishek S, Parchani A, Dhar M. Artificial intelligence in clinical medicine: catalyzing a sustainable global healthcare paradigm. Front Artif Intell 2023; 6:1227091. [PMID: 37705603 PMCID: PMC10497111 DOI: 10.3389/frai.2023.1227091] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2023] [Accepted: 08/09/2023] [Indexed: 09/15/2023] Open
Abstract
As the demand for quality healthcare increases, healthcare systems worldwide are grappling with time constraints and excessive workloads, which can compromise the quality of patient care. Artificial intelligence (AI) has emerged as a powerful tool in clinical medicine, revolutionizing various aspects of patient care and medical research. The integration of AI in clinical medicine has not only improved diagnostic accuracy and treatment outcomes, but also contributed to more efficient healthcare delivery, reduced costs, and facilitated better patient experiences. This review article provides an extensive overview of AI applications in history taking, clinical examination, imaging, therapeutics, prognosis and research. Furthermore, it highlights the critical role AI has played in transforming healthcare in developing nations.
Collapse
Affiliation(s)
- Gokul Krishnan
- Department of Internal Medicine, Kasturba Medical College, Manipal, India
| | - Shiana Singh
- Department of Emergency Medicine, All India Institute of Medical Sciences, Rishikesh, India
| | - Monika Pathania
- Department of Geriatric Medicine, All India Institute of Medical Sciences, Rishikesh, India
| | - Siddharth Gosavi
- Department of Internal Medicine, Kasturba Medical College, Manipal, India
| | - Shuchi Abhishek
- Department of Internal Medicine, Kasturba Medical College, Manipal, India
| | - Ashwin Parchani
- Department of Geriatric Medicine, All India Institute of Medical Sciences, Rishikesh, India
| | - Minakshi Dhar
- Department of Geriatric Medicine, All India Institute of Medical Sciences, Rishikesh, India
| |
Collapse
|
24
|
Aboumerhi K, Güemes A, Liu H, Tenore F, Etienne-Cummings R. Neuromorphic applications in medicine. J Neural Eng 2023; 20:041004. [PMID: 37531951 DOI: 10.1088/1741-2552/aceca3] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Accepted: 08/02/2023] [Indexed: 08/04/2023]
Abstract
In recent years, there has been a growing demand for miniaturization, low power consumption, quick treatments, and non-invasive clinical strategies in the healthcare industry. To meet these demands, healthcare professionals are seeking new technological paradigms that can improve diagnostic accuracy while ensuring patient compliance. Neuromorphic engineering, which uses neural models in hardware and software to replicate brain-like behaviors, can help usher in a new era of medicine by delivering low power, low latency, small footprint, and high bandwidth solutions. This paper provides an overview of recent neuromorphic advancements in medicine, including medical imaging and cancer diagnosis, processing of biosignals for diagnosis, and biomedical interfaces, such as motor, cognitive, and perception prostheses. For each section, we provide examples of how brain-inspired models can successfully compete with conventional artificial intelligence algorithms, demonstrating the potential of neuromorphic engineering to meet demands and improve patient outcomes. Lastly, we discuss current struggles in fitting neuromorphic hardware with non-neuromorphic technologies and propose potential solutions for future bottlenecks in hardware compatibility.
Collapse
Affiliation(s)
- Khaled Aboumerhi
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
| | - Amparo Güemes
- Electrical Engineering Division, Department of Engineering, University of Cambridge, 9 JJ Thomson Ave, Cambridge CB3 0FA, United Kingdom
| | - Hongtao Liu
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
| | - Francesco Tenore
- Research and Exploratory Development Department, The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, United States of America
| | - Ralph Etienne-Cummings
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
| |
Collapse
|
25
|
Sahoo PK, Gupta P, Lai YC, Chiang SF, You JF, Onthoni DD, Chern YJ. Localization of Colorectal Cancer Lesions in Contrast-Computed Tomography Images via a Deep Learning Approach. Bioengineering (Basel) 2023; 10:972. [PMID: 37627857 PMCID: PMC10451186 DOI: 10.3390/bioengineering10080972] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Revised: 07/26/2023] [Accepted: 07/31/2023] [Indexed: 08/27/2023] Open
Abstract
Abdominal computed tomography (CT) is a frequently used imaging modality for evaluating gastrointestinal diseases. The detection of colorectal cancer is often realized using CT before a more invasive colonoscopy. When a CT exam is performed for indications other than colorectal evaluation, the tortuous structure of the long, tubular colon makes it difficult to analyze the colon carefully and thoroughly. In addition, the sensitivity of CT in detecting colorectal cancer is greatly dependent on the size of the tumor. Incidental colon cancers missed on CT are an emerging problem for clinicians and radiologists; consequently, the automatic localization of lesions in CT images of unprepared bowels is needed. Therefore, this study used artificial intelligence (AI) to localize colorectal cancer in CT images. We enrolled 190 colorectal cancer patients to obtain 1558 tumor slices annotated by radiologists and colorectal surgeons. The tumor sites were double-confirmed via colonoscopy or other related examinations, including physical examination or imaging studies, and the final tumor sites were obtained from the operation records if available. The localization models trained were RetinaNet, YOLOv3, and YOLOv8. We achieved an F1 score of 0.97 (±0.002) and a mAP of 0.984 when performing slice-wise testing, and 0.83 (±0.29) sensitivity, 0.97 (±0.01) specificity, and 0.96 (±0.01) accuracy when performing patient-wise testing using our derived YOLOv8 model with hyperparameter tuning.
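A lesion-detection pipeline of this kind can be sketched with the ultralytics YOLOv8 API as below; the dataset YAML, weights file, and hyperparameters are placeholders rather than the study's actual configuration.

```python
# Hedged sketch of YOLOv8 training and slice-wise inference with ultralytics;
# the data file, weights, and thresholds are illustrative assumptions.
from ultralytics import YOLO

# Start from COCO-pretrained weights and fine-tune on annotated CT tumor slices.
model = YOLO("yolov8m.pt")
model.train(data="crc_ct_lesions.yaml", epochs=100, imgsz=640, batch=16)

# Slice-wise inference: each result holds predicted lesion boxes with confidences.
results = model.predict("ct_slice_0001.png", conf=0.25)
for box in results[0].boxes:
    print(box.xyxy, float(box.conf))
```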
Collapse
Affiliation(s)
- Prasan Kumar Sahoo
- Department of Computer Science and Information Engineering, Chang Gung University, Guishan, Taoyuan 33302, Taiwan; (P.K.S.); (P.G.); (D.D.O.)
- Department of Neurology, Chang Gung Memorial Hospital, Linkou, New Taipei City 33305, Taiwan
| | - Pushpanjali Gupta
- Department of Computer Science and Information Engineering, Chang Gung University, Guishan, Taoyuan 33302, Taiwan; (P.K.S.); (P.G.); (D.D.O.)
| | - Ying-Chieh Lai
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital, Linkou, New Taipei City 33305, Taiwan;
- Department of Metabolomics Core Lab, Chang Gung Memorial Hospital, Linkou, New Taipei City 33305, Taiwan
| | - Sum-Fu Chiang
- Division of Colon and Rectal Surgery, Chang Gung Memorial Hospital, Linkou, New Taipei City 33305, Taiwan; (S.-F.C.); (J.-F.Y.)
- College of Medicine, Chang Gung University, Guishan, Taoyuan 33302, Taiwan
| | - Jeng-Fu You
- Division of Colon and Rectal Surgery, Chang Gung Memorial Hospital, Linkou, New Taipei City 33305, Taiwan; (S.-F.C.); (J.-F.Y.)
- College of Medicine, Chang Gung University, Guishan, Taoyuan 33302, Taiwan
| | - Djeane Debora Onthoni
- Department of Computer Science and Information Engineering, Chang Gung University, Guishan, Taoyuan 33302, Taiwan; (P.K.S.); (P.G.); (D.D.O.)
| | - Yih-Jong Chern
- Division of Colon and Rectal Surgery, Chang Gung Memorial Hospital, Linkou, New Taipei City 33305, Taiwan; (S.-F.C.); (J.-F.Y.)
- Graduate Institute of Clinical Medical Sciences, College of Medicine, Chang Gung University, Guishan, Taoyuan 33302, Taiwan
| |
Collapse
|
26
|
DiPalma J, Torresani L, Hassanpour S. HistoPerm: A permutation-based view generation approach for improving histopathologic feature representation learning. J Pathol Inform 2023; 14:100320. [PMID: 37457594 PMCID: PMC10339175 DOI: 10.1016/j.jpi.2023.100320] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 06/23/2023] [Accepted: 06/28/2023] [Indexed: 07/18/2023] Open
Abstract
Deep learning has been effective for histology image analysis in digital pathology. However, many current deep learning approaches require large, strongly or weakly labeled images and regions of interest, which can be time-consuming and resource-intensive to obtain. To address this challenge, we present HistoPerm, a view generation method for representation learning using joint embedding architectures that enhances representation learning for histology images. HistoPerm permutes augmented views of patches extracted from whole-slide histology images to improve classification performance. We evaluated the effectiveness of HistoPerm on 2 histology image datasets for Celiac disease and Renal Cell Carcinoma, using 3 widely used joint embedding architecture-based representation learning methods: BYOL, SimCLR, and VICReg. Our results show that HistoPerm consistently improves patch- and slide-level classification performance in terms of accuracy, F1-score, and AUC. Specifically, for patch-level classification accuracy on the Celiac disease dataset, HistoPerm boosts BYOL and VICReg by 8% and SimCLR by 3%. On the Renal Cell Carcinoma dataset, patch-level classification accuracy is increased by 2% for BYOL and VICReg, and by 1% for SimCLR. In addition, on the Celiac disease dataset, models with HistoPerm outperform the fully supervised baseline model by 6%, 5%, and 2% for BYOL, SimCLR, and VICReg, respectively. For the Renal Cell Carcinoma dataset, HistoPerm lowers the classification accuracy gap for the models by up to 10% relative to the fully supervised baseline. These findings suggest that HistoPerm can be a valuable tool for improving representation learning of histopathology features when access to labeled data is limited, and can lead to whole-slide classification results that are comparable to or superior to fully supervised methods.
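The core mechanic of permuting augmented views within a batch can be sketched as follows; the exact permutation policy used by HistoPerm (which views are exchanged, and among which patches) should be taken from the paper, and this simplified function only shuffles one view branch for a random subset of samples.

```python
# Simplified sketch of batch-level view permutation for joint-embedding pretraining
# (BYOL/SimCLR/VICReg-style); it does not reproduce HistoPerm's exact policy.
import torch


def permute_views(view_a: torch.Tensor, view_b: torch.Tensor, frac: float = 0.3):
    """Shuffle view_b across the batch for a random subset of samples."""
    b = view_a.size(0)
    n_perm = int(frac * b)
    idx = torch.randperm(b)[:n_perm]          # samples whose second view is swapped
    source = idx[torch.randperm(n_perm)]      # where those replacement views come from
    view_b = view_b.clone()
    view_b[idx] = view_b[source]
    return view_a, view_b


# Toy usage with random "augmented patch" tensors of shape (B, C, H, W).
va, vb = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
va, vb = permute_views(va, vb)
```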
Collapse
Affiliation(s)
- Joseph DiPalma
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
| | - Lorenzo Torresani
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
| | - Saeed Hassanpour
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
- Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
| |
Collapse
|
27
|
Stanciu SG, König K, Song YM, Wolf L, Charitidis CA, Bianchini P, Goetz M. Toward next-generation endoscopes integrating biomimetic video systems, nonlinear optical microscopy, and deep learning. BIOPHYSICS REVIEWS 2023; 4:021307. [PMID: 38510341 PMCID: PMC10903409 DOI: 10.1063/5.0133027] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Accepted: 05/26/2023] [Indexed: 03/22/2024]
Abstract
According to the World Health Organization, the proportion of the world's population over 60 years will approximately double by 2050. This progressive increase in the elderly population will lead to a dramatic growth of age-related diseases, resulting in tremendous pressure on the sustainability of healthcare systems globally. In this context, finding more efficient ways to address cancers, a set of diseases whose incidence is correlated with age, is of utmost importance. Prevention of cancers to decrease morbidity relies on the identification of precursor lesions before the onset of the disease, or at least diagnosis at an early stage. In this article, after briefly discussing some of the most prominent endoscopic approaches for gastric cancer diagnostics, we review relevant progress in three emerging technologies that have significant potential to play pivotal roles in next-generation endoscopy systems: biomimetic vision (with special focus on compound eye cameras), non-linear optical microscopies, and Deep Learning. Such systems are urgently needed to enhance the three major steps required for the successful diagnostics of gastrointestinal cancers: detection, characterization, and confirmation of suspicious lesions. In the final part, we discuss challenges that lie en route to translating these technologies to next-generation endoscopes that could enhance gastrointestinal imaging, and depict a possible configuration of a system capable of (i) biomimetic endoscopic vision enabling easier detection of lesions, (ii) label-free in vivo tissue characterization, and (iii) intelligently automated gastrointestinal cancer diagnostic.
Collapse
Affiliation(s)
- Stefan G. Stanciu
- Center for Microscopy-Microanalysis and Information Processing, University Politehnica of Bucharest, Bucharest, Romania
| | | | | | - Lior Wolf
- School of Computer Science, Tel Aviv University, Tel-Aviv, Israel
| | - Costas A. Charitidis
- Research Lab of Advanced, Composite, Nano-Materials and Nanotechnology, School of Chemical Engineering, National Technical University of Athens, Athens, Greece
| | - Paolo Bianchini
- Nanoscopy and NIC@IIT, Italian Institute of Technology, Genoa, Italy
| | - Martin Goetz
- Medizinische Klinik IV-Gastroenterologie/Onkologie, Kliniken Böblingen, Klinikverbund Südwest, Böblingen, Germany
| |
Collapse
|
28
|
Khazaee Fadafen M, Rezaee K. Ensemble-based multi-tissue classification approach of colorectal cancer histology images using a novel hybrid deep learning framework. Sci Rep 2023; 13:8823. [PMID: 37258631 DOI: 10.1038/s41598-023-35431-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Accepted: 05/17/2023] [Indexed: 06/02/2023] Open
Abstract
Colorectal cancer (CRC) is the second leading cause of cancer death in the world, so digital pathology is essential for assessing prognosis. Due to the increasing resolution and quantity of whole slide images (WSIs), as well as the lack of annotated information, previous methodologies cannot be generalized as effective decision-making systems. Since deep learning (DL) methods can handle large-scale applications, they can provide a viable alternative for histopathology image (HI) analysis. DL architectures, however, may not be sufficient to classify CRC tissues based on anatomical histopathology data. A dilated ResNet (dResNet) structure and attention module are used to generate deep feature maps in order to classify multiple tissues in HIs. In addition, neighborhood component analysis (NCA) overcomes the constraint of computational complexity. Data are fed into a deep support vector machine (SVM) based on an ensemble learning algorithm called DeepSVM after the features have been selected. The CRC-5000 and NCT-CRC-HE-100K datasets were analyzed to validate and test the hybrid procedure. We demonstrate that the hybrid model achieves 98.75% and 99.76% accuracy on the CRC datasets. The results showed that WSIs unseen during training could be successfully classified using only pathologists' labels. Furthermore, the hybrid deep learning method outperforms state-of-the-art approaches in terms of computational efficiency and time. Using the proposed mechanism for tissue analysis, it will be possible to correctly predict CRC based on accurate pathology image classification.
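The feature-reduction and classification tail of such a pipeline can be sketched with scikit-learn as below (deep features, then neighborhood components analysis, then an SVM); the dResNet/attention feature extractor and the DeepSVM ensemble are not reproduced, and the feature dimensions and labels are synthetic placeholders.

```python
# Illustrative sketch: synthetic deep features -> NCA dimensionality reduction -> SVM.
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 128)).astype(np.float32)   # placeholder patch embeddings
y = rng.integers(0, 9, size=400)                     # placeholder tissue-class labels

clf = Pipeline([
    ("scale", StandardScaler()),
    ("nca", NeighborhoodComponentsAnalysis(n_components=32, random_state=0)),
    ("svm", SVC(kernel="rbf", C=10.0)),
])
clf.fit(X, y)
print(clf.score(X, y))
```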
Collapse
Affiliation(s)
- Masoud Khazaee Fadafen
- Department of Electrical Engineering, Technical and Vocational University (TVU), Tehran, Iran
| | - Khosro Rezaee
- Department of Biomedical Engineering, Meybod University, Meybod, Iran.
| |
Collapse
|
29
|
Bokhorst JM, Nagtegaal ID, Fraggetta F, Vatrano S, Mesker W, Vieth M, van der Laak J, Ciompi F. Deep learning for multi-class semantic segmentation enables colorectal cancer detection and classification in digital pathology images. Sci Rep 2023; 13:8398. [PMID: 37225743 PMCID: PMC10209185 DOI: 10.1038/s41598-023-35491-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Accepted: 05/18/2023] [Indexed: 05/26/2023] Open
Abstract
In colorectal cancer (CRC), artificial intelligence (AI) can alleviate the laborious task of characterization and reporting on resected biopsies, including polyps, the numbers of which are increasing as a result of CRC population screening programs ongoing in many countries all around the globe. Here, we present an approach to address two major challenges in the automated assessment of CRC histopathology whole-slide images. We present an AI-based method to segment multiple tissue compartments in the H&E-stained whole-slide image, which provides a different, more perceptible picture of tissue morphology and composition. We test and compare a panel of state-of-the-art loss functions available for segmentation models, and provide indications about their use in histopathology image segmentation, based on the analysis of (a) a multi-centric cohort of CRC cases from five medical centers in the Netherlands and Germany, and (b) two publicly available datasets on segmentation in CRC. We used the best performing AI model as the basis for a computer-aided diagnosis system that classifies colon biopsies into four main categories that are relevant pathologically. We report the performance of this system on an independent cohort of more than 1000 patients. The results show that with a good segmentation network as a base, a tool can be developed which can support pathologists in the risk stratification of colorectal cancer patients, among other possible uses. We have made the segmentation model available for research use on https://grand-challenge.org/algorithms/colon-tissue-segmentation/.
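One loss commonly included in such a comparison panel is the multi-class soft Dice loss; a minimal PyTorch sketch follows, with the class count and tensor shapes chosen as placeholders rather than taken from the paper.

```python
# Minimal multi-class soft Dice loss sketch; not the authors' exact implementation.
import torch
import torch.nn.functional as F


def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """logits: (B, C, H, W); target: (B, H, W) with integer class labels."""
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    dims = (0, 2, 3)
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice = (2 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()


# Toy usage: a placeholder 14-class tissue problem on a 4-image batch of 128x128 crops.
logits = torch.randn(4, 14, 128, 128)
labels = torch.randint(0, 14, (4, 128, 128))
print(soft_dice_loss(logits, labels))
```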
Collapse
Affiliation(s)
- John-Melle Bokhorst
- Department of pathology, Radboud University Medical Center, Nijmegen, The Netherlands.
| | - Iris D Nagtegaal
- Department of pathology, Radboud University Medical Center, Nijmegen, The Netherlands
| | - Filippo Fraggetta
- Pathology Unit Gravina Hospital, Gravina Hospital, Caltagirone, Italy
| | - Simona Vatrano
- Pathology Unit Gravina Hospital, Gravina Hospital, Caltagirone, Italy
| | - Wilma Mesker
- Leids Universitair Medisch Centrum, Leiden, The Netherlands
| | - Michael Vieth
- Klinikum Bayreuth, Friedrich-Alexander-University Erlangen-Nuremberg, Bayreuth, Germany
| | - Jeroen van der Laak
- Department of pathology, Radboud University Medical Center, Nijmegen, The Netherlands
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
| | - Francesco Ciompi
- Department of pathology, Radboud University Medical Center, Nijmegen, The Netherlands
| |
Collapse
|
30
|
Hu W, Li X, Li C, Li R, Jiang T, Sun H, Huang X, Grzegorzek M, Li X. A state-of-the-art survey of artificial neural networks for Whole-slide Image analysis: From popular Convolutional Neural Networks to potential visual transformers. Comput Biol Med 2023; 161:107034. [PMID: 37230019 DOI: 10.1016/j.compbiomed.2023.107034] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2022] [Revised: 04/13/2023] [Accepted: 05/10/2023] [Indexed: 05/27/2023]
Abstract
In recent years, with the advancement of computer-aided diagnosis (CAD) technology and whole slide images (WSIs), histopathological WSIs have gradually come to play a crucial role in the diagnosis and analysis of diseases. To increase the objectivity and accuracy of pathologists' work, artificial neural network (ANN) methods are generally needed for the segmentation, classification, and detection of histopathological WSIs. However, existing review papers focus only on equipment hardware, development status, and trends, and do not summarize in detail the artificial neural networks used for whole-slide image analysis. In this paper, WSI analysis methods based on ANNs are reviewed. Firstly, the development status of WSI and ANN methods is introduced. Secondly, we summarize the common ANN methods. Next, we discuss publicly available WSI datasets and evaluation metrics. The ANN architectures for WSI processing are then divided into classical neural networks and deep neural networks (DNNs) and analyzed. Finally, the application prospects of these analytical methods are discussed, with visual transformers identified as an important potential direction.
Collapse
Affiliation(s)
- Weiming Hu
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Xintong Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Chen Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China.
| | - Rui Li
- Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Tao Jiang
- School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
| | - Hongzan Sun
- Shengjing Hospital of China Medical University, Shenyang, China
| | - Xinyu Huang
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany
| | - Marcin Grzegorzek
- Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
| | - Xiaoyan Li
- Cancer Hospital of China Medical University, Shenyang, China.
| |
Collapse
|
31
|
Sharma A, Kumar R, Yadav G, Garg P. Artificial intelligence in intestinal polyp and colorectal cancer prediction. Cancer Lett 2023; 565:216238. [PMID: 37211068 DOI: 10.1016/j.canlet.2023.216238] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Revised: 05/17/2023] [Accepted: 05/17/2023] [Indexed: 05/23/2023]
Abstract
Artificial intelligence (AI) algorithms and their application to disease detection and decision support for healthcare professionals have greatly evolved in the recent decade. AI has been widely applied and explored in gastroenterology for endoscopic analysis to diagnose intestinal cancers, premalignant polyps, gastrointestinal inflammatory lesions, and bleeding. Patients' responses to treatments and prognoses have both been predicted using AI by combining multiple algorithms. In this review, we explored the recent applications of AI algorithms in the identification and characterization of intestinal polyps and colorectal cancer prediction. AI-based prediction models have the potential to help medical practitioners diagnose, establish prognoses, and reach accurate conclusions for the treatment of patients. With the understanding that rigorous validation of AI approaches using randomized controlled studies is required by health authorities before widespread clinical use, the article also discusses the limitations and challenges associated with deploying AI systems to diagnose intestinal malignancies and premalignant lesions.
Collapse
Affiliation(s)
- Anju Sharma
- Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S.A.S Nagar, 160062, Punjab, India
| | - Rajnish Kumar
- Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Uttar Pradesh, 226010, India; Department of Veterinary Medicine and Surgery, College of Veterinary Medicine, University of Missouri, Columbia, MO, USA
| | - Garima Yadav
- Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Uttar Pradesh, 226010, India
| | - Prabha Garg
- Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S.A.S Nagar, 160062, Punjab, India.
| |
Collapse
|
32
|
Yengec-Tasdemir SB, Aydin Z, Akay E, Dogan S, Yilmaz B. Improved classification of colorectal polyps on histopathological images with ensemble learning and stain normalization. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 232:107441. [PMID: 36905748 DOI: 10.1016/j.cmpb.2023.107441] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/14/2023] [Revised: 02/05/2023] [Accepted: 02/21/2023] [Indexed: 06/18/2023]
Abstract
BACKGROUND AND OBJECTIVE Early detection of colon adenomatous polyps is critically important because correct detection significantly reduces the potential of developing colon cancers in the future. The key challenge in the detection of adenomatous polyps is differentiating them from their visually similar counterpart, non-adenomatous tissue. Currently, this depends solely on the experience of the pathologist. To assist pathologists, the objective of this work is to provide a novel non-knowledge-based Clinical Decision Support System (CDSS) for improved detection of adenomatous polyps on colon histopathology images. METHODS The domain shift problem arises when the training and test data come from different distributions with diverse settings and unequal color levels. This problem, which can be tackled by stain normalization techniques, prevents machine learning models from attaining higher classification accuracies. In this work, the proposed method integrates stain normalization techniques with an ensemble of ConvNeXts, competitively accurate, scalable, and robust CNN variants. The improvement is empirically analyzed for five widely employed stain normalization techniques. The classification performance of the proposed method is evaluated on three datasets comprising more than 10k colon histopathology images. RESULTS The comprehensive experiments demonstrate that the proposed method outperforms state-of-the-art deep convolutional neural network based models by attaining 95% classification accuracy on the curated dataset, and 91.1% and 90% on the EBHI and UniToPatho public datasets, respectively. CONCLUSIONS These results show that the proposed method can accurately classify colon adenomatous polyps on histopathology images. It retains remarkable performance scores even for datasets coming from different distributions, indicating that the model has a notable generalization ability.
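As an example of one widely used stain-normalization option, the sketch below implements a LAB-space variant of Reinhard color normalization (per-channel mean/std matching to a reference patch); the file paths are placeholders, and the paper's five evaluated techniques are not reproduced here.

```python
# Hedged sketch of Reinhard-style colour normalization in LAB space; target
# statistics come from a reference patch chosen by the user.
import numpy as np
from skimage import color, io


def reinhard_normalize(src_rgb: np.ndarray, ref_rgb: np.ndarray) -> np.ndarray:
    """Match per-channel LAB mean/std of src to the reference image."""
    src_lab = color.rgb2lab(src_rgb)
    ref_lab = color.rgb2lab(ref_rgb)
    out = np.empty_like(src_lab)
    for c in range(3):
        s_mean, s_std = src_lab[..., c].mean(), src_lab[..., c].std() + 1e-8
        r_mean, r_std = ref_lab[..., c].mean(), ref_lab[..., c].std()
        out[..., c] = (src_lab[..., c] - s_mean) / s_std * r_std + r_mean
    return np.clip(color.lab2rgb(out), 0, 1)


# Usage: normalize a query patch to the tonality of one reference patch (paths are placeholders).
ref = io.imread("reference_patch.png")[..., :3] / 255.0
src = io.imread("query_patch.png")[..., :3] / 255.0
normalized = reinhard_normalize(src, ref)
```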
Collapse
Affiliation(s)
- Sena Busra Yengec-Tasdemir
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT39DT, United Kingdom; Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey.
| | - Zafer Aydin
- Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey; Department of Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey
| | - Ebru Akay
- Pathology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
| | - Serkan Dogan
- Gastroenterology Clinic, Kayseri City Hospital, Kayseri, 38080, Turkey
| | - Bulent Yilmaz
- Department of Electrical Engineering, Gulf University for Science and Technology, Mishref, 40005, Kuwait; Department of Electrical and Computer Engineering, Abdullah Gul University, Kayseri, 38080, Turkey.
| |
Collapse
|
33
|
Kim J, Tomita N, Suriawinata AA, Hassanpour S. Detection of Colorectal Adenocarcinoma and Grading Dysplasia on Histopathologic Slides Using Deep Learning. THE AMERICAN JOURNAL OF PATHOLOGY 2023; 193:332-340. [PMID: 36563748 PMCID: PMC10012966 DOI: 10.1016/j.ajpath.2022.12.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Revised: 10/28/2022] [Accepted: 12/01/2022] [Indexed: 12/24/2022]
Abstract
Colorectal cancer (CRC) is one of the most common types of cancer among men and women. The grading of dysplasia and the detection of adenocarcinoma are important clinical tasks in the diagnosis of CRC and shape the patients' follow-up plans. This study evaluated the feasibility of deep learning models for the classification of colorectal lesions into four classes: benign, low-grade dysplasia, high-grade dysplasia, and adenocarcinoma. To this end, a deep neural network was developed on a training set of 655 whole slide images of digitized colorectal resection slides from a tertiary medical institution; and the network was evaluated on an internal test set of 234 slides, as well as on an external test set of 606 adenocarcinoma slides from The Cancer Genome Atlas database. The model achieved an overall accuracy, sensitivity, and specificity of 95.5%, 91.0%, and 97.1%, respectively, on the internal test set, and an accuracy and sensitivity of 98.5% for adenocarcinoma detection task on the external test set. Results suggest that such deep learning models can potentially assist pathologists in grading colorectal dysplasia, detecting adenocarcinoma, prescreening, and prioritizing the reviewing of suspicious cases to improve the turnaround time for patients with a high risk of CRC. Furthermore, the high sensitivity on the external test set suggests the model's generalizability in detecting colorectal adenocarcinoma on whole slide images across different institutions.
Collapse
Affiliation(s)
- Junhwi Kim
- Department of Computer Science, Dartmouth College, Hanover, New Hampshire
| | - Naofumi Tomita
- Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire
| | - Arief A Suriawinata
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, New Hampshire
| | - Saeed Hassanpour
- Department of Computer Science, Dartmouth College, Hanover, New Hampshire; Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire; Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, New Hampshire.
| |
Collapse
|
34
|
Cai H, Feng X, Yin R, Zhao Y, Guo L, Fan X, Liao J. MIST: multiple instance learning network based on Swin Transformer for whole slide image classification of colorectal adenomas. J Pathol 2023; 259:125-135. [PMID: 36318158 DOI: 10.1002/path.6027] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2022] [Revised: 09/30/2022] [Accepted: 10/28/2022] [Indexed: 12/12/2022]
Abstract
Colorectal adenoma is a recognized precancerous lesion of colorectal cancer (CRC), and at least 80% of colorectal cancers arise from malignant transformation of adenomas. Therefore, it is essential to distinguish benign from malignant adenomas in the early screening of colorectal cancer. Many deep learning computational pathology studies based on whole slide images (WSIs) have been proposed. Most approaches require manual annotation of lesion regions on WSIs, which is time-consuming and labor-intensive. This study proposes a new approach, MIST - a Multiple Instance learning network based on the Swin Transformer - which can accurately classify colorectal adenoma WSIs using only slide-level labels. MIST uses the Swin Transformer as the backbone to extract image features through self-supervised contrastive learning and uses a dual-stream multiple instance learning network to predict the class of slides. We trained and validated MIST on 666 WSIs collected from 480 colorectal adenoma patients in the Department of Pathology, The Affiliated Drum Tower Hospital of Nanjing University Medical School. These slides contained six common types of colorectal adenomas. The accuracy of external validation on 273 newly collected WSIs from Nanjing First Hospital was 0.784, which was superior to existing methods and comparable to the local pathologist's accuracy of 0.806. Finally, we analyzed the interpretability of MIST and observed that the lesion areas of interest to MIST were generally consistent with those of interest to local pathologists. In conclusion, MIST is a low-burden, interpretable, and effective approach that can be used in colorectal cancer screening and may lead to a potential reduction in the mortality of CRC patients by assisting clinicians in the decision-making process. © 2022 The Pathological Society of Great Britain and Ireland.
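A simplified attention-based multiple-instance-learning head illustrates the general slide-level aggregation idea; MIST's dual-stream design and Swin Transformer backbone are not reproduced, and the feature dimension and class count below are assumptions.

```python
# Simplified attention-MIL sketch: pool patch embeddings into one slide-level prediction.
import torch
import torch.nn as nn


class AttentionMIL(nn.Module):
    def __init__(self, feat_dim: int = 768, hidden: int = 128, n_classes: int = 6):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (num_patches, feat_dim) for one whole-slide image
        weights = torch.softmax(self.attn(patch_feats), dim=0)  # (num_patches, 1)
        slide_feat = (weights * patch_feats).sum(dim=0)         # attention pooling
        return self.classifier(slide_feat)


# Toy usage: 500 patch embeddings from a self-supervised encoder (dim 768 assumed).
logits = AttentionMIL()(torch.randn(500, 768))
print(logits.shape)  # torch.Size([6])
```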
Collapse
Affiliation(s)
- Hongbin Cai
- School of Science, China Pharmaceutical University, Nanjing, PR China
| | - Xiaobing Feng
- College of Electrical and Information Engineering, Hunan University, Changsha, PR China
| | - Ruomeng Yin
- School of Science, China Pharmaceutical University, Nanjing, PR China
| | - Youcai Zhao
- Department of Pathology, Nanjing First Hospital, Nanjing, PR China
| | - Lingchuan Guo
- Department of Pathology, The First Affiliated Hospital of Soochow University, Soochow, PR China
| | - Xiangshan Fan
- Department of Pathology, The Affiliated Drum Tower Hospital of Nanjing University Medical School, Nanjing, PR China
| | - Jun Liao
- School of Science, China Pharmaceutical University, Nanjing, PR China
| |
Collapse
|
35
|
Alharbi F, Vakanski A. Machine Learning Methods for Cancer Classification Using Gene Expression Data: A Review. Bioengineering (Basel) 2023; 10:bioengineering10020173. [PMID: 36829667 PMCID: PMC9952758 DOI: 10.3390/bioengineering10020173] [Citation(s) in RCA: 23] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 01/24/2023] [Accepted: 01/26/2023] [Indexed: 01/31/2023] Open
Abstract
Cancer is a term that denotes a group of diseases caused by the abnormal growth of cells that can spread in different parts of the body. According to the World Health Organization (WHO), cancer is the second major cause of death after cardiovascular diseases. Gene expression can play a fundamental role in the early detection of cancer, as it is indicative of the biochemical processes in tissue and cells, as well as the genetic characteristics of an organism. Deoxyribonucleic acid (DNA) microarrays and ribonucleic acid (RNA)-sequencing methods for gene expression data allow quantifying the expression levels of genes and produce valuable data for computational analysis. This study reviews recent progress in gene expression analysis for cancer classification using machine learning methods. Both conventional and deep learning-based approaches are reviewed, with an emphasis on the application of deep learning models due to their comparative advantages for identifying gene patterns that are distinctive for various types of cancers. Relevant works that employ the most commonly used deep neural network architectures are covered, including multi-layer perceptrons, as well as convolutional, recurrent, graph, and transformer networks. This survey also presents an overview of the data collection methods for gene expression analysis and lists important datasets that are commonly used for supervised machine learning for this task. Furthermore, we review pertinent techniques for feature engineering and data preprocessing that are typically used to handle the high dimensionality of gene expression data, caused by a large number of genes present in data samples. The paper concludes with a discussion of future research directions for machine learning-based gene expression analysis for cancer classification.
Collapse
|
36
|
Zhang H, He Y, Wu X, Huang P, Qin W, Wang F, Ye J, Huang X, Liao Y, Chen H, Guo L, Shi X, Luo L. PathNarratives: Data annotation for pathological human-AI collaborative diagnosis. Front Med (Lausanne) 2023; 9:1070072. [PMID: 36777158 PMCID: PMC9908590 DOI: 10.3389/fmed.2022.1070072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Accepted: 12/22/2022] [Indexed: 01/27/2023] Open
Abstract
Pathology is the gold standard of clinical diagnosis. Artificial intelligence (AI) in pathology is becoming a new trend, but it is still not widely used because it lacks the explanations pathologists need to understand its rationale. Clinic-compliant explanations, in addition to the diagnostic decision on pathological images, are essential for training AI models that provide diagnostic suggestions to assist pathologists in practice. In this study, we propose a new annotation form, PathNarratives, that includes a hierarchical decision-to-reason data structure, a narrative annotation process, and a multimodal interactive annotation tool. Following PathNarratives, we recruited 8 pathologist annotators to build a colorectal pathological dataset, CR-PathNarratives, containing 174 whole-slide images (WSIs). We further experimented on the dataset with classification and captioning tasks to explore clinical scenarios of human-AI collaborative pathological diagnosis. The classification experiments show that fine-grained prediction enhances the overall classification accuracy from 79.56% to 85.26%. In the human-AI collaboration experiments, the trust and confidence scores from the 8 pathologists rose from 3.88 to 4.63 when more details were provided. The results show that the classification and captioning tasks achieve better results with reason labels, providing explainable clues that help doctors understand and make the final decision, and thus can support a better experience of human-AI collaboration in pathological diagnosis. In the future, we plan to optimize the tools for the annotation process and expand the dataset with more WSIs, covering more pathological domains.
Collapse
Affiliation(s)
- Heyu Zhang
- College of Engineering, Peking University, Beijing, China
| | - Yan He
- Department of Pathology, Longgang Central Hospital of Shenzhen, Shenzhen, China
| | - Xiaomin Wu
- College of Engineering, Peking University, Beijing, China
| | - Peixiang Huang
- College of Engineering, Peking University, Beijing, China
| | - Wenkang Qin
- College of Engineering, Peking University, Beijing, China
| | - Fan Wang
- College of Engineering, Peking University, Beijing, China
| | - Juxiang Ye
- Department of Pathology, School of Basic Medical Science, Peking University Health Science Center, Peking University Third Hospital, Beijing, China
| | - Xirui Huang
- Department of Pathology, Longgang Central Hospital of Shenzhen, Shenzhen, China
| | - Yanfang Liao
- Department of Pathology, Longgang Central Hospital of Shenzhen, Shenzhen, China
| | - Hang Chen
- College of Engineering, Peking University, Beijing, China
| | - Limei Guo
- Department of Pathology, School of Basic Medical Science, Peking University Health Science Center, Peking University Third Hospital, Beijing, China
| | - Xueying Shi
- Department of Pathology, School of Basic Medical Science, Peking University Health Science Center, Peking University Third Hospital, Beijing, China
| | - Lin Luo
- College of Engineering, Peking University, Beijing, China
| |
|
37
|
Mansur A, Saleem Z, Elhakim T, Daye D. Role of artificial intelligence in risk prediction, prognostication, and therapy response assessment in colorectal cancer: current state and future directions. Front Oncol 2023; 13:1065402. [PMID: 36761957 PMCID: PMC9905815 DOI: 10.3389/fonc.2023.1065402] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Accepted: 01/09/2023] [Indexed: 01/26/2023] Open
Abstract
Artificial Intelligence (AI) is a branch of computer science that utilizes optimization, probabilistic, and statistical approaches to analyze and make predictions based on vast amounts of data. In recent years, AI has revolutionized the field of oncology and spearheaded novel approaches in the management of various cancers, including colorectal cancer (CRC). Notably, the applications of AI to diagnose, prognosticate, and predict response to therapy in CRC are gaining traction and proving to be promising. There have also been several advancements in AI technologies to help predict metastases in CRC and in computer-aided detection (CAD) systems to reduce miss rates for colorectal neoplasia. This article provides a comprehensive review of the role of AI in predicting risk, prognosis, and response to therapies among patients with CRC.
Affiliation(s)
- Arian Mansur
- Harvard Medical School, Boston, MA, United States
| | | | - Tarig Elhakim
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
| | - Dania Daye
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
| |
|
38
|
Tsuneki M, Abe M, Ichihara S, Kanavati F. Inference of core needle biopsy whole slide images requiring definitive therapy for prostate cancer. BMC Cancer 2023; 23:11. [PMID: 36600203 DOI: 10.1186/s12885-022-10488-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2022] [Accepted: 12/26/2022] [Indexed: 01/06/2023] Open
Abstract
BACKGROUND Prostate cancer is often a slowly progressive, indolent disease. Unnecessary treatment resulting from overdiagnosis is a significant concern, particularly for low-grade disease. Active surveillance has been considered as a risk-management strategy to avoid the potential side effects of unnecessary radical treatment. In 2016, the American Society of Clinical Oncology (ASCO) endorsed the Cancer Care Ontario (CCO) Clinical Practice Guideline on active surveillance for the management of localized prostate cancer. METHODS Based on this guideline, we developed a deep learning model to classify prostate adenocarcinoma on core needle biopsy whole-slide images (WSIs) into indolent (suitable for active surveillance) and aggressive (requiring definitive therapy). In this study, we trained deep learning models using a combination of transfer, weakly supervised, and fully supervised learning approaches on a dataset of core needle biopsy WSIs (n=1300). In addition, we performed an inter-rater reliability evaluation on the WSI classification. RESULTS We evaluated the models on a test set (n=645), achieving ROC-AUCs of 0.846 for indolent and 0.980 for aggressive. The inter-rater reliability evaluation showed s-scores in the range of 0.10 to 0.95, with the lowest on WSIs that the model classified as both indolent and aggressive, and the highest on benign WSIs. CONCLUSION The results demonstrate the model's promising potential for deployment in a practical prostate adenocarcinoma histopathological diagnostic workflow system.
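A minimal sketch of the slide-level step such weakly supervised pipelines rely on is shown below: per-patch probabilities (random stand-ins here for a trained patch classifier's outputs) are max-pooled into one score per WSI, and the slide-level ROC-AUC is computed. It is illustrative only and is not the authors' code.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n_slides = 100
    slide_labels = rng.integers(0, 2, size=n_slides)            # 1 = aggressive, 0 = indolent/benign

    slide_scores = []
    for label in slide_labels:
        n_patches = int(rng.integers(50, 500))                  # tissue patches per biopsy WSI
        # Under weak (slide-level) supervision, positive slides tend to contain higher-scoring patches.
        patch_probs = rng.beta(2 + 3 * label, 5, size=n_patches)
        slide_scores.append(patch_probs.max())                  # max-pooling over patches

    print("slide-level ROC-AUC:", roc_auc_score(slide_labels, np.array(slide_scores)))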
Affiliation(s)
- Masayuki Tsuneki
- Medmain Research, Medmain Inc., 2-4-5-104, Akasaka, Chuo-ku, Fukuoka, 810-0042, Japan.
| | - Makoto Abe
- Department of Pathology, Tochigi Cancer Center, 4-9-13 Yohnan, Utsunomiya, 320-0834, Japan
| | - Shin Ichihara
- Department of Surgical Pathology, Sapporo Kosei General Hospital, 8-5 Kita-3-jo Higashi, Chuo-ku, Sapporo, 060-0033, Japan
| | - Fahdi Kanavati
- Medmain Research, Medmain Inc., 2-4-5-104, Akasaka, Chuo-ku, Fukuoka, 810-0042, Japan
| |
|
39
|
Artificial intelligence in cancer research and precision medicine: Applications, limitations and priorities to drive transformation in the delivery of equitable and unbiased care. Cancer Treat Rev 2023; 112:102498. [PMID: 36527795 DOI: 10.1016/j.ctrv.2022.102498] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2022] [Revised: 12/03/2022] [Accepted: 12/06/2022] [Indexed: 12/14/2022]
Abstract
Artificial intelligence (AI) has experienced explosive growth in oncology and related specialties in recent years. Improved expertise in data capture and increased capacity for data aggregation and analytic power, along with decreasing costs of genome sequencing and related biologic "omics", set the foundation and the need for novel tools that can meaningfully process these data from multiple sources and of varying types. These advances provide value across biomedical discovery, diagnosis, prognosis, treatment, and prevention in a multimodal fashion. However, while big data and AI tools have already revolutionized many fields, medicine has partially lagged behind due to its complexity and multi-dimensionality, leading to technical challenges in developing and validating solutions that generalize to diverse populations. Indeed, as algorithms move toward implementation in daily clinical practice, their inherent biases and miseducation are increasingly relevant concerns; critically, AI can mirror the unconscious biases of the humans who generated these algorithms. Therefore, to avoid worsening existing health disparities, it is critical to employ a thoughtful, transparent, and inclusive approach that involves addressing bias in algorithm design and implementation along the cancer care continuum. In this review, a broad landscape of major applications of AI in cancer care is provided, with a focus on cancer research and precision medicine. Major challenges posed by the implementation of AI in the clinical setting are discussed, and potentially feasible solutions for mitigating bias are provided in light of promoting cancer health equity.
|
40
|
Srivastava R. Applications of artificial intelligence multiomics in precision oncology. J Cancer Res Clin Oncol 2023; 149:503-510. [PMID: 35796775 DOI: 10.1007/s00432-022-04161-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2022] [Accepted: 06/17/2022] [Indexed: 02/06/2023]
Abstract
Cancer is the second leading cause of death worldwide and is a disease that depends on oncogenic mutations and non-mutated genes for survival. Recent advancements in next-generation sequencing (NGS) have transformed the health care sector with big data and machine learning (ML) approaches. NGS data make it possible to detect abnormalities and mutations in oncogenes. These multi-omics analyses are used for risk prediction, early diagnosis, accurate prognosis, and identification of biomarkers in cancer patients. The availability of these cancer data and their analysis may provide insights into the biology of the disease, which can be used for the personalized treatment of cancer patients. Bioinformatics tools are delivering on this promise by managing, integrating, and analyzing these complex datasets. The clinical outcomes of cancer patients are improved by the use of various innovative methods, particularly for diagnosis and therapeutics. ML-based artificial intelligence (AI) applications are solving these issues to a great extent. AI techniques are used to update patients on a personalized basis about their treatment procedures, progress, recovery, therapies used, and dietary and lifestyle changes, along with survival summaries of previously recovered cancer patients. In this way, patients become more aware of their disease and the entire clinical treatment procedure. Though the technology has its advantages and disadvantages, we hope the day is not far off when AI techniques will provide cancer patients with personalized treatment tailored to their needs much more quickly.
Affiliation(s)
- Ruby Srivastava
- CSIR-Centre for Cellular and Molecular Biology, Hyderabad, India.
| |
|
41
|
Yavuz A, Alpsoy A, Gedik EO, Celik MY, Bassorgun CI, Unal B, Elpek GO. Artificial intelligence applications in predicting the behavior of gastrointestinal cancers in pathology. Artif Intell Gastroenterol 2022; 3:142-162. [DOI: 10.35712/aig.v3.i5.142] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/16/2022] [Revised: 11/25/2022] [Accepted: 12/14/2022] [Indexed: 12/28/2022] Open
Abstract
Recent research has provided a wealth of data supporting the application of artificial intelligence (AI)-based applications in routine pathology practice. Indeed, it is clear that these methods can significantly support an accurate and rapid diagnosis by eliminating errors, increasing reliability, and improving workflow. In addition, the effectiveness of AI in the pathological evaluation of prognostic parameters associated with behavior, course, and treatment in many types of tumors has also been noted. Regarding gastrointestinal system (GIS) cancers, the contribution of AI methods to pathological diagnosis has been investigated in many studies. On the other hand, studies focusing on AI applications in evaluating parameters to determine tumor behavior are relatively few. For this purpose, the potential of AI models has been studied over a broad spectrum, from tumor subtyping to the identification of new digital biomarkers. The capacity of AI to infer genetic alterations of cancer tissues from digital slides has been demonstrated. Although current data suggest the merit of AI-based approaches in assessing tumor behavior in GIS cancers, a wide range of challenges still need to be solved, from laboratory infrastructure to improving the robustness of algorithms, before incorporating AI applications into real-life GIS pathology practice. This review aims to present data from AI applications in evaluating pathological parameters related to the behavior of GIS cancer with an overview of the opportunities and challenges encountered in implementing AI in pathology.
Affiliation(s)
- Aysen Yavuz
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
| | - Anil Alpsoy
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
| | - Elif Ocak Gedik
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
| | | | | | - Betul Unal
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
| | - Gulsum Ozlem Elpek
- Department of Pathology, Akdeniz University Medical School, Antalya 07070, Turkey
| |
|
42
|
Tsuneki M, Kanavati F. Weakly Supervised Learning for Poorly Differentiated Adenocarcinoma Classification in Gastric Endoscopic Submucosal Dissection Whole Slide Images. Technol Cancer Res Treat 2022; 21:15330338221142674. [PMID: 36476107 PMCID: PMC9742706 DOI: 10.1177/15330338221142674] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
Objective: Endoscopic submucosal dissection (ESD) is the preferred technique for treating early gastric cancers, including poorly differentiated adenocarcinoma without ulcerative findings. The histopathological classification of poorly differentiated adenocarcinoma, including signet ring cell carcinoma, is of pivotal importance for determining the optimum further cancer treatment(s) and clinical outcomes. Because conventional diagnosis by pathologists using microscopes is time-consuming and limited in terms of human resources, it is very important to develop computer-aided techniques that can rapidly and accurately inspect large numbers of histopathological specimen whole-slide images (WSIs). Computational pathology applications that can assist pathologists in detecting and classifying gastric poorly differentiated adenocarcinoma from ESD WSIs would be of great benefit for the routine histopathological diagnostic workflow. Methods: In this study, we trained a deep learning model to classify poorly differentiated adenocarcinoma in ESD WSIs using transfer and weakly supervised learning approaches. Results: We evaluated the model on ESD, endoscopic biopsy, and surgical specimen WSI test sets, achieving an ROC-AUC of up to 0.975 on the gastric ESD test sets for poorly differentiated adenocarcinoma. Conclusion: The deep learning model developed in this study demonstrates highly promising potential for deployment as a computer-aided diagnosis system in a routine gastric ESD histopathological diagnostic workflow.
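The transfer-learning ingredient mentioned here typically amounts to starting from an ImageNet-pretrained CNN, freezing the backbone, and retraining only a new classification head on tissue patches. The sketch below is a generic, hypothetical version of that step (random tensors stand in for H&E patches); it is not the authors' pipeline.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from an ImageNet-pretrained backbone (downloads weights on first use).
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    for p in model.parameters():
        p.requires_grad = False                       # freeze the pretrained backbone
    model.fc = nn.Linear(model.fc.in_features, 2)     # new head: carcinoma vs. non-neoplastic

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One toy training step on random tensors standing in for 224x224 tissue patches.
    x = torch.randn(8, 3, 224, 224)
    y = torch.randint(0, 2, (8,))
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    print("toy batch loss:", float(loss))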
Affiliation(s)
- Masayuki Tsuneki
- Medmain Research, Medmain Inc., Fukuoka, Japan
| | | |
|
43
|
Alzoubi I, Bao G, Zheng Y, Wang X, Graeber MB. Artificial intelligence techniques for neuropathological diagnostics and research. Neuropathology 2022. [PMID: 36443935 DOI: 10.1111/neup.12880] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Revised: 10/17/2022] [Accepted: 10/23/2022] [Indexed: 12/03/2022]
Abstract
Artificial intelligence (AI) research began in theoretical neurophysiology, and the resulting classical paper on the McCulloch-Pitts mathematical neuron was written in a psychiatry department almost 80 years ago. However, the application of AI in digital neuropathology is still in its infancy. Rapid progress is now being made, which prompted this article. Human brain diseases represent distinct system states that fall outside the normal spectrum. Many differ not only in functional but also in structural terms, and the morphology of abnormal nervous tissue forms the traditional basis of neuropathological disease classifications. However, only a few countries have the medical specialty of neuropathology, and, given the sheer number of newly developed histological tools that can be applied to the study of brain diseases, a tremendous shortage of qualified hands and eyes at the microscope is obvious. Similarly, in neuroanatomy, human observers no longer have the capacity to process the vast amounts of connectomics data. Therefore, it is reasonable to assume that advances in AI technology and, especially, whole-slide image (WSI) analysis will greatly aid neuropathological practice. In this paper, we discuss machine learning (ML) techniques that are important for understanding WSI analysis, such as traditional ML and deep learning, introduce a recently developed neuropathological AI termed PathoFusion, and present thoughts on some of the challenges that must be overcome before the full potential of AI in digital neuropathology can be realized.
Affiliation(s)
- Islam Alzoubi
- School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
| | - Guoqing Bao
- School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
| | - Yuqi Zheng
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
| | - Xiuying Wang
- School of Computer Science, The University of Sydney, Sydney, New South Wales, Australia
| | - Manuel B. Graeber
- Ken Parker Brain Tumour Research Laboratories, Brain and Mind Centre, Faculty of Medicine and Health, University of Sydney, Camperdown, New South Wales, Australia
| |
|
44
|
Tsuneki M, Kanavati F. Weakly supervised learning for multi-organ adenocarcinoma classification in whole slide images. PLoS One 2022; 17:e0275378. [PMID: 36417401 PMCID: PMC9683606 DOI: 10.1371/journal.pone.0275378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2022] [Accepted: 09/15/2022] [Indexed: 11/25/2022] Open
Abstract
Primary screening by automated computational pathology algorithms for the presence or absence of adenocarcinoma in biopsy specimens (e.g., endoscopic biopsy, transbronchial lung biopsy, and needle biopsy) of possible primary organs (e.g., stomach, colon, lung, and breast) and in radical lymph node dissection specimens is very useful and should be a powerful tool to assist surgical pathologists in the routine histopathological diagnostic workflow. In this paper, we trained multi-organ deep learning models to classify adenocarcinoma in whole-slide images (WSIs) of biopsy and radical lymph node dissection specimens. We evaluated the models on five independent test sets (stomach, colon, lung, breast, and lymph nodes) to demonstrate their feasibility on multi-organ and lymph node specimens from different medical institutions, achieving receiver operating characteristic areas under the curve (ROC-AUCs) in the range of 0.91-0.98.
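A brief sketch of the per-organ evaluation reported here: a single model's slide-level scores are assessed separately on each organ's test set with ROC-AUC. The labels and scores below are synthetic placeholders, not the study's data.

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    test_sets = ["stomach", "colon", "lung", "breast", "lymph_node"]

    for organ in test_sets:
        labels = rng.integers(0, 2, size=200)                 # 1 = adenocarcinoma present in the WSI
        # Stand-in model scores: positive slides drawn from a distribution shifted upward.
        scores = rng.normal(loc=1.5 * labels, scale=1.0)
        print(f"{organ:>10s} ROC-AUC: {roc_auc_score(labels, scores):.3f}")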
Affiliation(s)
- Masayuki Tsuneki
- Medmain Research, Medmain Inc., Akasaka, Chuo-ku, Fukuoka, Japan
| | - Fahdi Kanavati
- Medmain Research, Medmain Inc., Akasaka, Chuo-ku, Fukuoka, Japan
| |
|
45
|
Ahmed AA, Abouzid M, Kaczmarek E. Deep Learning Approaches in Histopathology. Cancers (Basel) 2022; 14:5264. [PMID: 36358683 PMCID: PMC9654172 DOI: 10.3390/cancers14215264] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2022] [Revised: 10/10/2022] [Accepted: 10/24/2022] [Indexed: 10/06/2023] Open
Abstract
The revolution in artificial intelligence and its impact on our daily lives have led to tremendous interest in the field and its related subtypes: machine learning and deep learning. Scientists and developers have designed machine learning- and deep learning-based algorithms to perform various tasks related to tumor pathologies, such as tumor detection, classification, grading with variant stages, diagnostic forecasting, recognition of pathological attributes, pathogenesis, and genomic mutations. Pathologists are interested in artificial intelligence to improve diagnostic precision and impartiality and to minimize the workload and time consumed, both of which affect the accuracy of the decisions taken. Regrettably, certain obstacles connected to artificial intelligence deployment still need to be overcome, such as the applicability and validation of algorithms and computational technologies, the ability to train pathologists and doctors to use these tools, and their willingness to accept the results. This review paper provides a survey of how machine learning and deep learning methods could be implemented in health care providers' routine tasks, as well as the obstacles and opportunities for artificial intelligence application in tumor morphology.
Affiliation(s)
- Alhassan Ali Ahmed
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
| | - Mohamed Abouzid
- Doctoral School, Poznan University of Medical Sciences, 60-812 Poznan, Poland
- Department of Physical Pharmacy and Pharmacokinetics, Faculty of Pharmacy, Poznan University of Medical Sciences, Rokietnicka 3 St., 60-806 Poznan, Poland
| | - Elżbieta Kaczmarek
- Department of Bioinformatics and Computational Biology, Poznan University of Medical Sciences, 60-812 Poznan, Poland
| |
|
46
|
Yang S, Shen T, Fang Y, Wang X, Zhang J, Yang W, Huang J, Han X. DeepNoise: Signal and Noise Disentanglement Based on Classifying Fluorescent Microscopy Images via Deep Learning. GENOMICS, PROTEOMICS & BIOINFORMATICS 2022; 20:989-1001. [PMID: 36608842 PMCID: PMC10025761 DOI: 10.1016/j.gpb.2022.12.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/16/2021] [Revised: 11/26/2022] [Accepted: 12/11/2022] [Indexed: 01/05/2023]
Abstract
High-content image-based assays are commonly leveraged to identify the phenotypic impact of genetic perturbations in biology. However, a persistent issue remains unsolved during experiments: interfering technical noise caused by systematic errors (e.g., temperature, reagent concentration, and well location) is always mixed with the real biological signals, which can lead to misinterpretation of the conclusions drawn. Here, we report a mean teacher-based deep learning model (DeepNoise) that can disentangle biological signals from experimental noise. Specifically, we aimed to classify the phenotypic impact of 1108 different genetic perturbations screened from 125,510 fluorescent microscopy images, which are totally unrecognizable to the human eye. We validated our model by participating in the Recursion Cellular Image Classification Challenge, in which DeepNoise achieved an extremely high classification score (accuracy: 99.596%), ranking 2nd among 866 participating groups. This promising result indicates the successful separation of biological and technical factors, which may help decrease the cost of treatment development and expedite the drug discovery process. The source code of DeepNoise is available at https://github.com/Scu-sen/Recursion-Cellular-Image-Classification-Challenge.
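For readers unfamiliar with the mean-teacher idea behind this model, the sketch below shows its two generic ingredients: a teacher network maintained as an exponential moving average (EMA) of the student's weights, and a consistency term that pulls the student's predictions on a perturbed view toward the teacher's. The tiny network and random tensors are placeholders; this is not the DeepNoise source code (which is linked above).

    import copy
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 1108))
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad = False                    # the teacher is never trained by gradient descent

    def ema_update(teacher, student, decay=0.99):
        """Move each teacher parameter a small step toward the corresponding student parameter."""
        with torch.no_grad():
            for t, s in zip(teacher.parameters(), student.parameters()):
                t.mul_(decay).add_(s, alpha=1 - decay)

    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    x = torch.randn(16, 3, 32, 32)                 # stand-in for fluorescent microscopy crops
    y = torch.randint(0, 1108, (16,))              # 1108 genetic-perturbation classes
    noisy_view = x + 0.1 * torch.randn_like(x)     # a second, perturbed view of the same batch

    logits_student = student(noisy_view)
    with torch.no_grad():
        logits_teacher = teacher(x)
    loss = F.cross_entropy(logits_student, y) + \
           F.mse_loss(logits_student.softmax(-1), logits_teacher.softmax(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(teacher, student)
    print("supervised + consistency loss:", float(loss))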
Affiliation(s)
- Sen Yang
- Tencent AI Lab, Shenzhen 518057, China
| | - Tao Shen
- Tencent AI Lab, Shenzhen 518057, China
| | - Yuqi Fang
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong Special Administrative Region 999077, China
| | - Xiyue Wang
- College of Computer Science, Sichuan University, Chengdu 610065, China
| | - Jun Zhang
- Tencent AI Lab, Shenzhen 518057, China.
| | - Wei Yang
- Tencent AI Lab, Shenzhen 518057, China
| | | | - Xiao Han
- Tencent AI Lab, Shenzhen 518057, China.
| |
|
47
|
High-Resolution Histopathological Image Classification Model Based on Fused Heterogeneous Networks with Self-Supervised Feature Representation. BIOMED RESEARCH INTERNATIONAL 2022; 2022:8007713. [PMID: 36046446 PMCID: PMC9420597 DOI: 10.1155/2022/8007713] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Accepted: 08/02/2022] [Indexed: 11/18/2022]
Abstract
Applying machine learning technology to automatic image analysis and auxiliary diagnosis of whole slide images (WSIs) may help to improve the efficiency, objectivity, and consistency of pathological diagnosis. However, because of their extremely high resolution, directly processing WSIs with deep neural networks remains a great challenge. In this paper, we propose a novel model for the task of WSI classification. The model is composed of two parts. The first part is a self-supervised encoding network with a UNet-like architecture. Each patch from a WSI is encoded as a compressed latent representation. These features are placed according to their corresponding patch's original location in the WSI, forming a feature cube. The second part is a classification network built by fusing four well-known network blocks with heterogeneous architectures, which takes the feature cube as input. Our model effectively represents the features of each patch while preserving its location information. The fused network integrates the heterogeneous features generated by the different blocks, which yields robust classification results. The model is evaluated on two public datasets and compared with baseline models. The evaluation results show the effectiveness of the proposed model.
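The sketch below illustrates the feature-cube construction this abstract describes: each patch's latent vector (random here, standing in for the self-supervised encoder's output) is placed at the patch's grid position, yielding a channels-by-height-by-width tensor that a downstream classifier consumes. The grid size, latent dimension, and classifier are hypothetical, not the paper's architecture.

    import torch
    import torch.nn as nn

    latent_dim, grid_h, grid_w = 64, 20, 30             # hypothetical WSI tiled into a 20 x 30 patch grid
    cube = torch.zeros(latent_dim, grid_h, grid_w)

    for row in range(grid_h):
        for col in range(grid_w):
            patch_embedding = torch.randn(latent_dim)   # encoder(patch) in the real pipeline
            cube[:, row, col] = patch_embedding         # keep the patch's spatial location in the WSI

    classifier = nn.Sequential(                         # tiny stand-in for the fused classification network
        nn.Conv2d(latent_dim, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
    )
    logits = classifier(cube.unsqueeze(0))              # add a batch dimension
    print("slide-level logits:", logits.detach().tolist())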
|
48
|
Quero G, Mascagni P, Kolbinger FR, Fiorillo C, De Sio D, Longo F, Schena CA, Laterza V, Rosa F, Menghi R, Papa V, Tondolo V, Cina C, Distler M, Weitz J, Speidel S, Padoy N, Alfieri S. Artificial Intelligence in Colorectal Cancer Surgery: Present and Future Perspectives. Cancers (Basel) 2022; 14:3803. [PMID: 35954466 PMCID: PMC9367568 DOI: 10.3390/cancers14153803] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 07/29/2022] [Accepted: 08/03/2022] [Indexed: 02/05/2023] Open
Abstract
Artificial intelligence (AI) and computer vision (CV) are beginning to impact medicine. While evidence on the clinical value of AI-based solutions for the screening and staging of colorectal cancer (CRC) is mounting, CV and AI applications to enhance the surgical treatment of CRC are still in their early stage. This manuscript introduces key AI concepts to a surgical audience, illustrates fundamental steps to develop CV for surgical applications, and provides a comprehensive overview on the state-of-the-art of AI applications for the treatment of CRC. Notably, studies show that AI can be trained to automatically recognize surgical phases and actions with high accuracy even in complex colorectal procedures such as transanal total mesorectal excision (TaTME). In addition, AI models were trained to interpret fluorescent signals and recognize correct dissection planes during total mesorectal excision (TME), suggesting CV as a potentially valuable tool for intraoperative decision-making and guidance. Finally, AI could have a role in surgical training, providing automatic surgical skills assessment in the operating room. While promising, these proofs of concept require further development, validation in multi-institutional data, and clinical studies to confirm AI as a valuable tool to enhance CRC treatment.
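As a rough illustration of the surgical phase recognition mentioned here, the sketch below passes per-frame CNN features (random stand-ins) through a small recurrent head that emits a phase label per video frame. The phase names and dimensions are hypothetical and are not taken from the cited studies.

    import torch
    import torch.nn as nn

    phases = ["mobilization", "mesorectal_dissection", "anastomosis"]   # illustrative phase set
    feat_dim, seq_len = 512, 120                                        # 120 frames of backbone features

    temporal_head = nn.GRU(input_size=feat_dim, hidden_size=128, batch_first=True)
    frame_classifier = nn.Linear(128, len(phases))

    frame_features = torch.randn(1, seq_len, feat_dim)    # features from a frozen CNN video backbone
    hidden_states, _ = temporal_head(frame_features)
    frame_logits = frame_classifier(hidden_states)        # shape (1, seq_len, num_phases)
    predicted = frame_logits.argmax(-1).squeeze(0)
    print("predicted phase ids for the first 10 frames:", predicted[:10].tolist())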
Affiliation(s)
- Giuseppe Quero
- Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
| | - Pietro Mascagni
- Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
- Institute of Image-Guided Surgery, IHU-Strasbourg, 67000 Strasbourg, France
| | - Fiona R. Kolbinger
- Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
| | - Claudio Fiorillo
- Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
| | - Davide De Sio
- Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
| | - Fabio Longo
- Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
| | - Carlo Alberto Schena
- Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
| | - Vito Laterza
- Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
| | - Fausto Rosa
- Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
| | - Roberta Menghi
- Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
| | - Valerio Papa
- Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
| | - Vincenzo Tondolo
- Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
| | - Caterina Cina
- Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
| | - Marius Distler
- Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
| | - Juergen Weitz
- Department for Visceral, Thoracic and Vascular Surgery, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, 01307 Dresden, Germany
| | - Stefanie Speidel
- National Center for Tumor Diseases (NCT), Partner Site Dresden, 01307 Dresden, Germany
| | - Nicolas Padoy
- Institute of Image-Guided Surgery, IHU-Strasbourg, 67000 Strasbourg, France
- ICube, Centre National de la Recherche Scientifique (CNRS), University of Strasbourg, 67000 Strasbourg, France
| | - Sergio Alfieri
- Digestive Surgery Unit, Fondazione Policlinico Universitario A. Gemelli IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Faculty of Medicine, Università Cattolica del Sacro Cuore di Roma, Largo Francesco Vito 1, 00168 Rome, Italy
| |
|
49
|
Wong ANN, He Z, Leung KL, To CCK, Wong CY, Wong SCC, Yoo JS, Chan CKR, Chan AZ, Lacambra MD, Yeung MHY. Current Developments of Artificial Intelligence in Digital Pathology and Its Future Clinical Applications in Gastrointestinal Cancers. Cancers (Basel) 2022; 14:3780. [PMID: 35954443 PMCID: PMC9367360 DOI: 10.3390/cancers14153780] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 07/27/2022] [Accepted: 08/01/2022] [Indexed: 02/05/2023] Open
Abstract
The implementation of digital pathology (DP) will revolutionize current practice by providing pathologists with additional tools and algorithms to improve workflow. Furthermore, DP will open up opportunities for the development of AI-based tools for more precise and reproducible diagnosis through computational pathology. One of the key features of AI is its capability to generate perceptions and recognize patterns beyond the human senses. Thus, the incorporation of AI into DP can reveal additional morphological features and information. At the current rate of AI development and DP adoption, interest in computational pathology is expected to rise in tandem. There have already been promising developments related to AI-based solutions in prostate cancer detection; however, in the gastrointestinal (GI) tract, more sophisticated algorithms are required to facilitate the histological assessment of GI specimens for early and accurate diagnosis. In this review, we aim to provide an overview of current histological practices in anatomical pathology (AP) laboratories with respect to the challenges faced in image preprocessing, present the existing AI-based algorithms, discuss their limitations, and present clinical insight with respect to the application of AI in the early detection and diagnosis of GI cancer.
Affiliation(s)
- Alex Ngai Nick Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
| | - Zebang He
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
| | - Ka Long Leung
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
| | - Curtis Chun Kit To
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, Hong Kong SAR, China; (C.C.K.T.); (C.K.R.C.); (M.D.L.)
| | - Chun Yin Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
| | - Sze Chuen Cesar Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
| | - Jung Sun Yoo
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
| | - Cheong Kin Ronald Chan
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, Hong Kong SAR, China; (C.C.K.T.); (C.K.R.C.); (M.D.L.)
| | - Angela Zaneta Chan
- Department of Anatomical and Cellular Pathology, Prince of Wales Hospital, Shatin, Hong Kong SAR, China;
| | - Maribel D. Lacambra
- Department of Anatomical and Cellular Pathology, The Chinese University of Hong Kong, Prince of Wales Hospital, Shatin, Hong Kong SAR, China; (C.C.K.T.); (C.K.R.C.); (M.D.L.)
| | - Martin Ho Yin Yeung
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China; (A.N.N.W.); (Z.H.); (K.L.L.); (C.Y.W.); (S.C.C.W.); (J.S.Y.)
| |
|
50
|
Jellyfish Search-Optimized Deep Learning for Compressive Strength Prediction in Images of Ready-Mixed Concrete. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:9541115. [PMID: 35958762 PMCID: PMC9359848 DOI: 10.1155/2022/9541115] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/09/2022] [Accepted: 06/07/2022] [Indexed: 11/17/2022]
Abstract
Most building structures built today are made from concrete, owing to its various favorable properties. Compressive strength is one of the mechanical properties of concrete that is directly related to the safety of structures. Therefore, predicting the compressive strength can facilitate the early planning of material quality management. A series of deep learning (DL) models suited to computer vision tasks, namely convolutional neural networks (CNNs), are used to predict the compressive strength of ready-mixed concrete. To demonstrate the efficacy of computer vision-based prediction, its effectiveness using imaged numerical data was compared with that of the deep neural network (DNN) technique, which uses conventional numerical data. Various DL prediction models were compared on the relevant concrete datasets and the best ones were identified. The best DL models were then optimized by fine-tuning their hyperparameters using a newly developed bio-inspired metaheuristic algorithm, called the jellyfish search optimizer, to enhance accuracy and reliability. Analytical experiments indicate that the computer vision-based CNNs outperform the numerical data-based DNNs in all evaluation metrics except training time. Thus, bio-inspired optimization of computer vision-based convolutional neural networks is potentially a promising approach for predicting the compressive strength of ready-mixed concrete.
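The hyperparameter-tuning framing used here can be illustrated with a much-simplified population search over a toy validation-loss surface, shown below. It does not reproduce the published jellyfish search update rules; it only shows the pattern of treating hyperparameters such as the learning rate and filter count as variables to be optimized against validation error.

    import numpy as np

    rng = np.random.default_rng(0)
    bounds = np.array([[-5.0, -1.0],        # log10(learning rate)
                       [8.0, 128.0]])       # number of convolutional filters

    def validation_loss(params):
        """Toy stand-in for 'train the CNN with these hyperparameters and return its validation error'."""
        log_lr, n_filters = params
        return (log_lr + 3.0) ** 2 + 0.001 * (n_filters - 64.0) ** 2

    population = rng.uniform(bounds[:, 0], bounds[:, 1], size=(20, 2))   # 20 candidate settings
    for _ in range(100):
        best = population[np.argmin([validation_loss(p) for p in population])]
        # Drift each candidate toward the current best, add random exploration, and clip to bounds.
        population = population + rng.uniform(0, 1, population.shape) * (best - population)
        population = population + 0.1 * rng.normal(size=population.shape)
        population = np.clip(population, bounds[:, 0], bounds[:, 1])

    best = population[np.argmin([validation_loss(p) for p in population])]
    print("best [log10(lr), n_filters]:", best, "validation loss:", validation_loss(best))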
|