1
Hashimoto N, Hanada H, Miyoshi H, Nagaishi M, Sato K, Hontani H, Ohshima K, Takeuchi I. Multimodal Gated Mixture of Experts Using Whole Slide Image and Flow Cytometry for Multiple Instance Learning Classification of Lymphoma. J Pathol Inform 2024; 15:100359. PMID: 38322152; PMCID: PMC10844119; DOI: 10.1016/j.jpi.2023.100359.
Abstract
In this study, we present a deep-learning-based multimodal classification method for lymphoma diagnosis in digital pathology, which uses a whole slide image (WSI) as the primary image data and flow cytometry (FCM) data as auxiliary information. In the pathological diagnosis of malignant lymphoma, FCM serves as valuable auxiliary information, offering useful insights for predicting the major class (superclass) of subtypes. By incorporating both images and FCM data into the classification process, we can develop a method that mimics the diagnostic process of pathologists, enhancing explainability. To incorporate the hierarchical structure between superclasses and their subclasses, the proposed method uses a network structure that effectively combines the mixture of experts (MoE) and multiple instance learning (MIL) techniques, where MIL is widely recognized for its effectiveness in handling WSIs in digital pathology. The MoE network in the proposed method consists of a gating network for superclass classification and multiple expert networks for (sub)class classification, each specialized for one superclass. To evaluate the effectiveness of our method, we conducted experiments on a six-class classification task using 600 lymphoma cases. The proposed method achieved a classification accuracy of 72.3%, surpassing the 69.5% obtained by a straightforward combination of FCM and images, as well as the 70.2% achieved using images alone. Moreover, the combination of multiple weights in the MoE and MIL allows for the visualization of specific cellular and tumor regions, resulting in a highly explainable model that cannot be attained with conventional methods. We anticipate that, by targeting a larger number of classes and increasing the number of expert networks, the proposed method could be effectively applied to real-world lymphoma diagnosis.
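The gated MoE-over-MIL design described in this abstract can be sketched in a few lines. The following is a hypothetical numpy illustration, not the authors' implementation: the feature dimensions, the two superclasses with three subtypes each, and the random weights are all assumptions made purely for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# --- toy inputs (shapes are illustrative, not the paper's) ---
patches = rng.normal(size=(200, 64))   # instance features from one WSI
fcm = rng.normal(size=(8,))            # flow-cytometry feature vector

# attention-based MIL pooling: weight patches, then average
w_att = rng.normal(size=(64,))
att = softmax(patches @ w_att)         # (200,) attention over instances
bag = att @ patches                    # (64,) slide-level embedding

# gating network on FCM data picks among 2 superclasses
W_gate = rng.normal(size=(8, 2))
gate = softmax(fcm @ W_gate)           # (2,) superclass probabilities

# one expert per superclass, each scoring 3 subtypes from the bag embedding
experts = [rng.normal(size=(64, 3)) for _ in range(2)]
sub_probs = [softmax(bag @ W) for W in experts]

# final 6-class distribution: expert outputs scaled by their gate weight
probs = np.concatenate([g * p for g, p in zip(gate, sub_probs)])
```

Because the gate weights sum to one and each expert emits a normalized distribution over its subtypes, the concatenated output is itself a valid six-class distribution, which is what makes the hierarchical decomposition interpretable.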
Affiliation(s)
- Noriaki Hashimoto
- RIKEN Center for Advanced Intelligence Project, Furo-cho, Chikusa-ku, Nagoya, 4648603, Japan
- Hiroyuki Hanada
- RIKEN Center for Advanced Intelligence Project, Furo-cho, Chikusa-ku, Nagoya, 4648603, Japan
- Hiroaki Miyoshi
- Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume, 8300011, Japan
- Miharu Nagaishi
- Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume, 8300011, Japan
- Kensaku Sato
- Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume, 8300011, Japan
- Hidekata Hontani
- Department of Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, 4668555, Japan
- Koichi Ohshima
- Department of Pathology, Kurume University School of Medicine, 67 Asahi-machi, Kurume, 8300011, Japan
- Ichiro Takeuchi
- RIKEN Center for Advanced Intelligence Project, Furo-cho, Chikusa-ku, Nagoya, 4648603, Japan
- Department of Mechanical Systems Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 4648603, Japan
2
Tafavvoghi M, Bongo LA, Shvetsov N, Busund LTR, Møllersen K. Publicly available datasets of breast histopathology H&E whole-slide images: A scoping review. J Pathol Inform 2024; 15:100363. PMID: 38405160; PMCID: PMC10884505; DOI: 10.1016/j.jpi.2024.100363.
Abstract
Advancements in digital pathology and computing resources have had a significant impact on computational pathology for breast cancer diagnosis and treatment. However, access to high-quality labeled histopathological images of breast cancer remains a major challenge that limits the development of accurate and robust deep learning models. In this scoping review, we identified the publicly available datasets of breast H&E-stained whole-slide images (WSIs) that can be used to develop deep learning algorithms. We systematically searched 9 scientific literature databases and 9 research data repositories and found 17 publicly available datasets containing 10,385 H&E WSIs of breast cancer. Moreover, we report image metadata and characteristics for each dataset to assist researchers in selecting appropriate datasets for specific tasks in breast cancer computational pathology. In addition, we compiled 2 lists of breast H&E patch datasets and private datasets as supplementary resources for researchers. Notably, only 28% of the included articles utilized multiple datasets, and only 14% used an external validation set, suggesting that the performance of the other developed models may be overestimated. TCGA-BRCA was used in 52% of the selected studies; this dataset has a considerable selection bias that can impact the robustness and generalizability of the trained algorithms. There is also a lack of consistent metadata reporting for breast WSI datasets, which can hinder the development of accurate deep learning models and indicates the need for explicit guidelines on documenting breast WSI dataset characteristics and metadata.
Affiliation(s)
- Masoud Tafavvoghi
- Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
- Lars Ailo Bongo
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Nikita Shvetsov
- Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Kajsa Møllersen
- Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
3
Budginaite E, Magee DR, Kloft M, Woodruff HC, Grabsch HI. Computational methods for metastasis detection in lymph nodes and characterization of the metastasis-free lymph node microarchitecture: A systematic-narrative hybrid review. J Pathol Inform 2024; 15:100367. PMID: 38455864; PMCID: PMC10918266; DOI: 10.1016/j.jpi.2024.100367.
Abstract
Background: Histological examination of tumor-draining lymph nodes (LNs) plays a vital role in cancer staging and prognostication. However, once an LN is classified as metastasis-free, no further investigation is performed, so potentially clinically relevant information detectable in tumor-free LNs is currently not captured. Objective: To systematically study and critically assess methods for the analysis of digitized histological LN images described in published research. Methods: A systematic search was conducted in several public databases up to December 2023 using relevant search terms. Studies using brightfield light microscopy images of hematoxylin and eosin or immunohistochemically stained LN tissue sections that aimed to detect and/or segment LNs, their compartments, or metastatic tumor using artificial intelligence (AI) were included. Dataset, AI methodology, cancer type, and study objective were compared between articles. Results: A total of 7201 articles were collected, and 73 remained for detailed analysis after screening. Of these, 86% aimed at LN metastasis identification, 8% at LN compartment segmentation, and the remainder at LN contouring. Furthermore, 78% of articles used patch-classification models and 22% used pixel-segmentation models. Five of the six studies (83%) of metastasis-free LNs were performed on datasets that are not publicly available, making quantitative comparison between articles impossible. Conclusions: Multi-scale models mimicking multiple microscopy zooms show promise for computational LN analysis. Large-scale datasets are needed to establish the clinical relevance of analyzing metastasis-free LNs in detail. Further research is needed to identify clinically interpretable metrics for LN compartment characterization.
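The multi-scale idea highlighted in the conclusions, models that mimic multiple microscope zooms, can be illustrated with a minimal patch extractor. This is a schematic numpy sketch under assumed parameters (patch size, zoom factor, stride-based downsampling); real WSI pipelines would instead read precomputed pyramid levels from the slide file.

```python
import numpy as np

def multiscale_patches(slide, cy, cx, size=32, context_factor=4):
    """Extract a detail patch plus a wider, downsampled context patch
    centred on the same point -- mimicking two microscope zoom levels."""
    h = size // 2
    detail = slide[cy - h:cy + h, cx - h:cx + h]
    ch = (size * context_factor) // 2
    context = slide[cy - ch:cy + ch, cx - ch:cx + ch]
    # naive stride-based downsampling back to the detail resolution
    context = context[::context_factor, ::context_factor]
    return detail, context

slide = np.zeros((512, 512), dtype=np.uint8)  # stand-in for a WSI region
detail, context = multiscale_patches(slide, 256, 256)
```

Both returned patches have the same pixel dimensions, so a two-branch network can consume them jointly while the context branch sees a 4x wider field of view.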
Affiliation(s)
- Elzbieta Budginaite
- Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Department of Precision Medicine, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Maximilian Kloft
- Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Department of Internal Medicine, Justus-Liebig-University, Giessen, Germany
- Henry C. Woodruff
- Department of Precision Medicine, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Heike I. Grabsch
- Department of Pathology, GROW - Research Institute for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, The Netherlands
- Pathology and Data Analytics, Leeds Institute of Medical Research at St James’s, University of Leeds, Leeds, UK
4
Lv H, Li W, Lu Z, Gao X, Zhang Q, Bao Y, Fu Y, Xiao J. SPMLD: A skin pathological image dataset for non-melanoma with detailed lesion area annotation. Comput Biol Med 2024; 179:108793. PMID: 38955126; DOI: 10.1016/j.compbiomed.2024.108793.
Abstract
Skin tumors are the most common tumors in humans, and the clinical characteristics of three common non-melanoma tumors (IDN, SK, and BCC) are similar, resulting in a high misdiagnosis rate. Accurate differential diagnosis of these tumors must be based on pathological images. However, a shortage of experienced dermatological pathologists leads to bias in the diagnostic accuracy for these skin tumors in China. In this paper, we establish a skin pathological image dataset, SPMLD, for these three non-melanoma tumors to enable automatic and accurate identification. We also propose a lesion-area-based enhanced classification network with a KLS module and an attention module. Specifically, we first collect thousands of H&E-stained tissue sections from patients with clinically and pathologically confirmed IDN, SK, and BCC from a single-center hospital. We then scan them to construct a pathological image dataset of these three skin tumors. Furthermore, we annotate the complete lesion area of each whole pathology image to better capture the pathologist's diagnostic process. In addition, we apply the proposed network for lesion classification on the SPMLD dataset. Finally, we conduct a series of experiments to demonstrate that this annotation and our network can effectively improve the classification results of various networks. The source dataset and code are available at https://github.com/efss24/SPMLD.git.
Affiliation(s)
- Haozhen Lv
- Department of Dermatology, Beijing Hospital, National Center of Gerontology, Beijing, China; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 101408, China
- Wentao Li
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 101408, China
- Zhengda Lu
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 101408, China
- Xiaoman Gao
- Department of Dermatology, Beijing Hospital, National Center of Gerontology, Beijing, China; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Qiuli Zhang
- Department of Dermatology, Beijing Hospital, National Center of Gerontology, Beijing, China; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Yingqiu Bao
- Department of Dermatology, Beijing Hospital, National Center of Gerontology, Beijing, China; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Yu Fu
- Department of Dermatology, Beijing Hospital, National Center of Gerontology, Beijing, China; Institute of Geriatric Medicine, Chinese Academy of Medical Sciences, Beijing, China
- Jun Xiao
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 101408, China
5
Li J, Dong P, Wang X, Zhang J, Zhao M, Shen H, Cai L, He J, Han M, Miao J, Liu H, Yang W, Han X, Liu Y. Artificial intelligence enhances whole-slide interpretation of PD-L1 CPS in triple-negative breast cancer: A multi-institutional ring study. Histopathology 2024; 85:451-467. PMID: 38747491; DOI: 10.1111/his.15205.
Abstract
BACKGROUND AND AIMS: Evaluation of the programmed cell death ligand-1 (PD-L1) combined positive score (CPS) is vital to predict the efficacy of immunotherapy in triple-negative breast cancer (TNBC), but pathologists show substantial variability in the consistency and accuracy of its interpretation. It is therefore important to establish an objective, effective, and highly repeatable scoring method. METHODS: We propose a deep-learning-based model that incorporates cell analysis and tissue-region analysis at the patch level, followed by whole-slide-level fusion of the patch results. Three rounds of ring studies (RSs) were conducted. Twenty-one pathologists of different experience levels from four institutions evaluated PD-L1 CPS in TNBC specimens as continuous scores, by visual assessment and by our artificial intelligence (AI)-assisted method. RESULTS: In visual assessment, the PD-L1 (Dako 22C3) CPS interpretations by pathologists of different levels differed significantly and showed weak consistency. Using AI-assisted interpretation, there were no significant differences between pathologists (P = 0.43), and the intraclass correlation coefficient (ICC) increased from 0.618 [95% confidence interval (CI) = 0.524-0.719] to 0.931 (95% CI = 0.902-0.955). The accuracy of the interpretation results further improved to 0.919 (95% CI = 0.886-0.947). Acceptance of the AI results was highest among junior pathologists, and 80% of the AI results were accepted overall. CONCLUSION: With the AI-assisted diagnostic method, pathologists of all levels achieved excellent consistency and repeatability in interpreting PD-L1 (Dako 22C3) CPS. Our approach was shown to strengthen consistency and repeatability in clinical practice.
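The intraclass correlation coefficient reported in this abstract is typically computed with the standard Shrout-Fleiss ICC(2,1) formula (two-way random effects, absolute agreement, single rater). The following is a generic textbook implementation for illustration, not the study's analysis code:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an (n_subjects, k_raters) array of scores."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # raters
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# two perfectly consistent raters yield an ICC of 1
print(icc2_1([[10, 10], [20, 20], [60, 60]]))  # -> 1.0
```

Because ICC(2,1) uses absolute agreement, a constant offset between raters lowers the coefficient even when their rankings agree, which matters for continuous CPS scores.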
Affiliation(s)
- Jinze Li
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Pei Dong
- AI Lab, Tencent, Shenzhen, Guangdong, China
- Xinran Wang
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Jun Zhang
- AI Lab, Tencent, Shenzhen, Guangdong, China
- Meng Zhao
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Lijing Cai
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Jiankun He
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Mengxue Han
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Jiaxian Miao
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Hongbo Liu
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Wei Yang
- AI Lab, Tencent, Shenzhen, Guangdong, China
- Xiao Han
- AI Lab, Tencent, Shenzhen, Guangdong, China
- Yueping Liu
- Department of Pathology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
6
Li C, Zhang G, Zhao B, Xie D, Du H, Duan X, Hu Y, Zhang L. Advances of surgical robotics: image-guided classification and application. Natl Sci Rev 2024; 11:nwae186. PMID: 39144738; PMCID: PMC11321255; DOI: 10.1093/nsr/nwae186.
Abstract
The application of surgical robotics in minimally invasive surgery has developed rapidly and has attracted increasing research attention in recent years. A consensus has been reached that surgical procedures will become less traumatic, with more intelligence and higher autonomy, which poses a serious challenge to the environmental-sensing capabilities of robotic systems. One of the main sources of environmental information for robots is images, which are the basis of robot vision. In this review article, we divide clinical images into direct and indirect categories based on the object of information acquisition, and into continuous, intermittently continuous, and discontinuous categories according to the target-tracking frequency. The characteristics and applications of existing surgical robots in each category are introduced along these two dimensions. Our purpose in conducting this review was to analyze, summarize, and discuss the current evidence on the general rules governing the application of imaging technologies for medical purposes. Our analysis provides insight and guidance conducive to the development of more advanced surgical robotic systems.
Affiliation(s)
- Changsheng Li
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Gongzi Zhang
- Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Baoliang Zhao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Dongsheng Xie
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Hailong Du
- Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Xingguang Duan
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Ying Hu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Lihai Zhang
- Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
7
Messika J, Belousova N, Parquin F, Roux A. Antibody-Mediated Rejection in Lung Transplantation: Diagnosis and Therapeutic Armamentarium in a 21st Century Perspective. Transpl Int 2024; 37:12973. PMID: 39170865; PMCID: PMC11336419; DOI: 10.3389/ti.2024.12973.
Abstract
Humoral immunity is a major waypoint towards chronic allograft dysfunction in lung transplantation (LT) recipients. Though allo-immunization and antibody-mediated rejection (AMR) are well-known entities, some diagnostic gaps need to be addressed. Morphological analysis could be enhanced by digital pathology and artificial intelligence-based companion tools. Graft transcriptomics can help to identify graft failure phenotypes or endotypes. Donor-derived cell-free DNA is being evaluated for graft-loss risk stratification and tailored surveillance. Preventative therapies should be tailored according to risk. The donor pool can be enlarged for candidates with HLA sensitization, with strategies combining plasma exchange, intravenous immunoglobulin and immune cell depletion, or with emerging or innovative therapies such as imlifidase or immunoadsorption. In cases of insufficient pre-transplant desensitization, the effects of antibodies on the allograft can be prevented by targeting the complement cascade, although evidence for this strategy in LT is limited. In LT recipients with a humoral response, strategies are combined, including depletion of immune cells (plasmapheresis or immunoadsorption), inhibition of immune pathways, or modulation of the inflammatory cascade, which can be achieved with photopheresis. Altogether, these innovative techniques offer promising perspectives for LT recipients and shape the 21st century's armamentarium against AMR.
Affiliation(s)
- Jonathan Messika
- Thoracic Intensive Care Unit, Foch Hospital, Suresnes, France
- Physiopathology and Epidemiology of Respiratory Diseases, UMR1152 INSERM and Université de Paris, Paris, France
- Paris Transplant Group, Paris, France
- Natalia Belousova
- Paris Transplant Group, Paris, France
- Pneumology, Adult Cystic Fibrosis Center and Lung Transplantation Department, Foch Hospital, Suresnes, France
- François Parquin
- Thoracic Intensive Care Unit, Foch Hospital, Suresnes, France
- Paris Transplant Group, Paris, France
- Antoine Roux
- Paris Transplant Group, Paris, France
- Pneumology, Adult Cystic Fibrosis Center and Lung Transplantation Department, Foch Hospital, Suresnes, France
- Université Paris-Saclay, INRAE, UVSQ, VIM, Jouy-en-Josas, France
8
Chen S, Wang X, Zhang J, Jiang L, Gao F, Xiang J, Yang S, Yang W, Zheng J, Han X. Deep learning-based diagnosis and survival prediction of patients with renal cell carcinoma from primary whole slide images. Pathology 2024:S0031-3025(24)00185-5. PMID: 39168777; DOI: 10.1016/j.pathol.2024.05.012.
Abstract
There is an urgent clinical demand for novel diagnostic and prognostic biomarkers for renal cell carcinoma (RCC). We proposed deep-learning-based artificial intelligence strategies. The study included 1752 whole slide images from multiple centres. Based on pixel-level RCC segmentation, the diagnostic model achieved an area under the receiver operating characteristic curve (AUC) of 0.977 (95% CI 0.969-0.984) in the external validation cohort. In addition, our diagnostic model exhibited excellent performance in the differential diagnosis of RCC from renal oncocytoma, achieving an AUC of 0.951 (0.922-0.972). For the recognition of high-grade tumours, the graderisk model achieved AUCs of 0.840 (0.805-0.871) in the Cancer Genome Atlas (TCGA) cohort, 0.857 (0.813-0.894) in the Shanghai General Hospital (General) cohort, and 0.894 (0.842-0.933) in the Clinical Proteomic Tumor Analysis Consortium (CPTAC) cohort. The OSrisk model for predicting 5-year survival status achieved an AUC of 0.784 (0.746-0.819) in the TCGA cohort, which was further verified in the independent General cohort and the CPTAC cohort, with AUCs of 0.774 (0.723-0.820) and 0.702 (0.632-0.765), respectively. Moreover, the competing-risk nomogram (CRN) showed potential as a prognostic indicator, with a hazard ratio (HR) of 5.664 (3.893-8.239, p<0.0001), outperforming traditional clinical prognostic indicators. Kaplan-Meier survival analysis further illustrated that our CRN could significantly distinguish patients with high survival risk. Deep-learning-based artificial intelligence could be a useful tool for clinicians to diagnose RCC and predict the prognosis of RCC patients, thus supporting individualised treatment.
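AUCs with confidence intervals, as reported throughout this abstract, are commonly obtained with a percentile bootstrap. The sketch below is a generic illustration under assumed settings (pairwise AUC formula, 1000 resamples), not the authors' code:

```python
import numpy as np

def auc(y, s):
    """Exact AUC via pairwise comparison (equivalent to the Mann-Whitney U)."""
    pos, neg = s[y == 1], s[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

def auc_ci(y, s, n_boot=1000, alpha=0.95, seed=0):
    """Point AUC plus a percentile-bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    stats = []
    while len(stats) < n_boot:
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) == 2:      # resample must keep both classes
            stats.append(auc(y[idx], s[idx]))
    lo, hi = np.percentile(stats, [(1 - alpha) / 2 * 100, (1 + alpha) / 2 * 100])
    return auc(y, s), lo, hi

# toy, perfectly separable scores
y = np.array([0] * 20 + [1] * 20)
s = np.concatenate([np.linspace(0, 1, 20), np.linspace(2, 3, 20)])
point, lo, hi = auc_ci(y, s)
```

The resample-rejection step guards against degenerate bootstrap draws containing only one class, for which the AUC is undefined.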
Affiliation(s)
- Siteng Chen
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiyue Wang
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA, USA
- Liren Jiang
- Department of Pathology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Feng Gao
- Department of Pathology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Junhua Zheng
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiao Han
- Tencent AI Lab, Shenzhen, China
9
Rieger T, Kugler L, Manzey D, Roesler E. The (Im)perfect Automation Schema: Who Is Trusted More, Automated or Human Decision Support? Hum Factors 2024; 66:1995-2007. PMID: 37632728; DOI: 10.1177/00187208231197347.
Abstract
OBJECTIVE This study's purpose was to better understand the dynamics of trust attitude and behavior in human-agent interaction. BACKGROUND Whereas past research provided evidence for a perfect automation schema, more recent research has provided contradictory evidence. METHOD To disentangle these conflicting findings, we conducted an online experiment using a simulated medical X-ray task. We manipulated the framing of support agents (i.e., artificial intelligence (AI) versus expert versus novice) between-subjects and failure experience (i.e., perfect support, imperfect support, back-to-perfect support) within-subjects. Trust attitude and behavior as well as perceived reliability served as dependent variables. RESULTS Trust attitude and perceived reliability were higher for the human expert than for the AI, and higher for the AI than for the human novice. Moreover, the results showed the typical pattern of trust formation, dissolution, and restoration for trust attitude and behavior as well as perceived reliability. Forgiveness after failure experience did not differ between agents. CONCLUSION The results strongly imply the existence of an imperfect automation schema. This illustrates the need to consider agent expertise in human-agent interaction. APPLICATION When replacing human experts with AI as support agents, the challenge of lower trust attitude towards the novel agent might arise.
10
Jan C, He M, Vingrys A, Zhu Z, Stafford RS. Diagnosing glaucoma in primary eye care and the role of Artificial Intelligence applications for reducing the prevalence of undetected glaucoma in Australia. Eye (Lond) 2024; 38:2003-2013. PMID: 38514852; PMCID: PMC11269618; DOI: 10.1038/s41433-024-03026-z.
Abstract
Glaucoma is the commonest cause of irreversible blindness worldwide, with over 70% of people affected remaining undiagnosed. Early detection is crucial for halting progressive visual impairment in glaucoma patients, as there is no cure available. This narrative review aims to: identify reasons for the significant under-diagnosis of glaucoma globally, particularly in Australia; elucidate the role of primary healthcare in glaucoma diagnosis, using Australian healthcare as an example; and discuss how recent advances in artificial intelligence (AI) can be implemented to improve diagnostic outcomes. Glaucoma is a prevalent disease in ageing populations, and visual outcomes can be improved with appropriate treatment, making its detection essential in general medical practice. In countries such as Australia, New Zealand, Canada, the USA, and the UK, optometrists serve as the gatekeepers for primary eye care, and glaucoma detection often falls on their shoulders. However, there is significant variation in the capacity for glaucoma diagnosis among eye professionals. Automated AI analysis of optic nerve photos can help optometrists identify high-risk changes and mitigate the challenges of image interpretation rapidly and consistently. Despite its potential, there are significant barriers and challenges to address before AI can be deployed in primary healthcare settings, including external validation, high-quality real-world implementation, protection of privacy and cybersecurity, and medico-legal implications. Overall, the incorporation of AI technology in primary healthcare has the potential to reduce the global prevalence of undiagnosed glaucoma by improving diagnostic accuracy and efficiency.
Affiliation(s)
- Catherine Jan
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, Faculty of Medicine, Dentistry & Health Sciences, University of Melbourne, Melbourne, VIC, Australia
- Lost Child's Vision Project, Sydney, NSW, Australia
- Mingguang He
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, Faculty of Medicine, Dentistry & Health Sciences, University of Melbourne, Melbourne, VIC, Australia
- Centre for Eye and Vision Research, The Hong Kong Polytechnic University, Kowloon, TU428, Hong Kong SAR
- Algis Vingrys
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, Faculty of Medicine, Dentistry & Health Sciences, University of Melbourne, Melbourne, VIC, Australia
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
- Zhuoting Zhu
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, VIC, Australia
- Ophthalmology, Department of Surgery, Faculty of Medicine, Dentistry & Health Sciences, University of Melbourne, Melbourne, VIC, Australia
- Randall S Stafford
- Stanford Prevention Research Center, Stanford University School of Medicine, Stanford, CA, USA
11
Faroog Z, Dirar QSE, Zaidi ARZ, Khan MS, Mahamud G, Ambia SR, Al-Hazzaa S. Knowledge and attitude of medical students towards artificial intelligence in ophthalmology in Riyadh, Saudi Arabia: a cross-sectional study. Ann Med Surg (Lond) 2024; 86:4377-4383. PMID: 39118699; PMCID: PMC11305754; DOI: 10.1097/ms9.0000000000002238.
Abstract
Background The use of artificial intelligence (AI) in ophthalmology represents a transformative leap in healthcare. AI-powered technologies, such as machine learning and computer vision, enhance the accuracy and efficiency of ophthalmic diagnosis and treatment. Objective This study aimed to determine medical students' awareness of and attitudes towards the use of artificial intelligence in ophthalmology. Methods This cross-sectional, questionnaire-based study was conducted between November 2022 and January 2023 using online questionnaires. Data were collected by convenience sampling among medical students at the university. IBM SPSS version 23 was used to analyze the data. Results Most participants (N=309, 89.6%) had heard of the use of AI in medicine, and N=294 (85.2%) had heard of the use of AI in ophthalmology. Of the respondents, 98.6% (n=340) believed AI would be a helpful tool in ophthalmology. Along this line of questioning, a significant majority of respondents selected screening (332, 96.2%), diagnosis (332, 96.2%), and prevention (293, 84.9%) as uses of AI in ophthalmology. However, the majority (76.5%) of students had little understanding of the development of AI in ophthalmology. In addition, a significant relationship between sex, academic year, cumulative GPA (cGPA), and awareness of AI in ophthalmology (P<0.001) was found in this study. Conclusions Overall, medical students in Saudi Arabia appear to hold favorable views of AI and positive perceptions of its use in ophthalmology. However, the findings of this study also highlight their limited understanding of, and low confidence in, the use of AI in ophthalmology. As a result, early exposure to AI-related material in medical curricula, through comprehensive AI education and practical experience, is crucial to prepare future ophthalmologists.
Affiliation(s)
- Abdul Rehman Zia Zaidi
- Department of Family & Community Medicine, College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Golam Mahamud
- College of Medicine, Alfaisal University, Riyadh, Saudi Arabia
- Selwa Al-Hazzaa
- King Abdulaziz City for Science & Technology (KACST), Riyadh, Saudi Arabia
12
Zhang X, Liu C, Zhu H, Wang T, Du Z, Ding W. A universal multiple instance learning framework for whole slide image analysis. Comput Biol Med 2024; 178:108714. [PMID: 38889627 DOI: 10.1016/j.compbiomed.2024.108714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2023] [Revised: 06/04/2024] [Accepted: 06/04/2024] [Indexed: 06/20/2024]
Abstract
BACKGROUND The emergence of the digital whole slide image (WSI) has driven the development of computational pathology. However, obtaining patch-level annotations is challenging and time-consuming due to the high resolution of WSIs, which limits the applicability of fully supervised methods. We aim to address the challenges related to patch-level annotations. METHODS We propose a universal framework for weakly supervised WSI analysis based on Multiple Instance Learning (MIL). To achieve effective aggregation of instance features, we design a feature aggregation module along multiple dimensions by considering the feature distribution, instance correlations, and instance-level evaluation. First, we implement an instance-level standardization layer and a deep projection unit to improve the separation of instances in the feature space. Then, a self-attention mechanism is employed to explore dependencies between instances. Additionally, an instance-level pseudo-label evaluation method is introduced to enhance the available information during the weak supervision process. Finally, a bag-level classifier is used to obtain preliminary WSI classification results. To achieve even more accurate WSI label predictions, we designed a key instance selection module that strengthens the learning of local features for instances. Combining the results from both modules improves WSI prediction accuracy. RESULTS Experiments conducted on Camelyon16, TCGA-NSCLC, SICAPv2, PANDA, and classical MIL benchmark datasets demonstrate that our proposed method achieves competitive performance compared with recent methods, with a maximum improvement of 14.6% in classification accuracy. CONCLUSION Our method improves the classification accuracy of whole slide images in a weakly supervised way and more accurately detects lesion areas.
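The attention-based aggregation of instance features at the heart of such MIL frameworks can be sketched in a few lines of NumPy. This is a generic illustration of attention pooling, not the authors' exact module; the projection arrays `w_v` and `w_a` are hypothetical stand-ins for learned parameters.

```python
import numpy as np

def attention_mil_pool(instances, w_v, w_a):
    """Score each instance, softmax-normalize the scores into attention
    weights, and take the weighted sum as the bag-level embedding."""
    h = np.tanh(instances @ w_v)          # (n, d_att) projected instances
    scores = (h @ w_a).ravel()            # (n,) unnormalized attention scores
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                   # attention weights, sum to 1
    bag = alpha @ instances               # (d,) bag-level feature vector
    return bag, alpha

# Toy bag: 5 patch embeddings of dimension 8
rng = np.random.default_rng(0)
bag, alpha = attention_mil_pool(rng.normal(size=(5, 8)),
                                rng.normal(size=(8, 4)),
                                rng.normal(size=(4, 1)))
```

The attention weights `alpha` double as an interpretability signal: high-weight patches are the ones driving the slide-level prediction.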
Affiliation(s)
- Xueqin Zhang
- College of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China; Shanghai Key Laboratory of Computer Software Evaluating and Testing, Shanghai 201112, China
- Chang Liu
- College of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China.
- Huitong Zhu
- College of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China
- Tianqi Wang
- College of Information Science and Engineering, East China University of Science and Technology, Shanghai, 200237, China
- Zunguo Du
- Department of Pathology, Huashan Hospital Affiliated to Fudan University, Shanghai, 200040, China
- Weihong Ding
- Department of Urology, Huashan Hospital Affiliated to Fudan University, Shanghai, 200040, China.
13
Javed S, Mahmood A, Qaiser T, Werghi N, Rajpoot N. Unsupervised mutual transformer learning for multi-gigapixel Whole Slide Image classification. Med Image Anal 2024; 96:103203. [PMID: 38810517 DOI: 10.1016/j.media.2024.103203] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2023] [Revised: 03/30/2024] [Accepted: 05/13/2024] [Indexed: 05/31/2024]
Abstract
The classification of gigapixel Whole Slide Images (WSIs) is an important task in the emerging area of computational pathology. There has been a surge of interest in deep learning models for WSI classification with clinical applications such as cancer detection or prediction of cellular mutations. Most supervised methods require expensive and labor-intensive manual annotations by expert pathologists. Weakly supervised Multiple Instance Learning (MIL) methods have recently demonstrated excellent performance; however, they still require large-scale slide-level labeled training datasets that require a careful inspection of each slide by an expert pathologist. In this work, we propose a fully unsupervised WSI classification algorithm based on mutual transformer learning. The instances (i.e., patches) from gigapixel WSIs are transformed into a latent space and then inverse-transformed to the original space. Using the transformation loss, pseudo labels are generated and cleaned using a transformer label cleaner. The proposed transformer-based pseudo-label generator and cleaner modules mutually train each other iteratively in an unsupervised manner. A discriminative learning mechanism is introduced to improve normal versus cancerous instance labeling. In addition to the unsupervised learning, we demonstrate the effectiveness of the proposed framework for weakly supervised learning and cancer subtype classification as downstream analysis. Extensive experiments on four publicly available datasets show better performance of the proposed algorithm compared to the existing state-of-the-art methods.
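The idea of deriving pseudo labels from a transformation loss can be illustrated with a minimal sketch: encode each instance, decode it back, and flag the instances that reconstruct worst. This is a schematic NumPy analogue using a trivial linear encode/decode pair, not the paper's transformer pipeline; the quantile threshold is a hypothetical choice.

```python
import numpy as np

def pseudo_labels_from_reconstruction(x, encode, decode, quantile=0.9):
    """Pseudo-label instances by transformation loss: instances that the
    encode/decode round trip reconstructs poorly get the positive label."""
    err = ((x - decode(encode(x))) ** 2).mean(axis=1)  # per-instance loss
    return (err > np.quantile(err, quantile)).astype(int), err

# Toy example: a rank-2 linear "autoencoder" on 4-D instances
rng = np.random.default_rng(1)
proj = rng.normal(size=(4, 2))
x = rng.normal(size=(20, 4))
labels, err = pseudo_labels_from_reconstruction(
    x, encode=lambda v: v @ proj, decode=lambda z: z @ proj.T)
```

In the paper's fully unsupervised setting, such noisy pseudo labels are then refined iteratively by a separate label-cleaner module rather than used directly.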
Affiliation(s)
- Sajid Javed
- Department of Computer Science, Khalifa University of Science and Technology, Abu Dhabi, P.O. Box 127788, United Arab Emirates.
- Arif Mahmood
- Department of Computer Science, Information Technology University, Lahore, Pakistan.
- Talha Qaiser
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK
- Naoufel Werghi
- Department of Computer Science, Khalifa University of Science and Technology, Abu Dhabi, P.O. Box 127788, United Arab Emirates
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; Department of Pathology, University Hospitals Coventry and Warwickshire, Walsgrave, Coventry, CV2 2DX, UK; The Alan Turing Institute, London, NW1 2DB, UK
14
Wang K, Zheng F, Cheng L, Dai HN, Dou Q, Qin J. Breast Cancer Classification From Digital Pathology Images via Connectivity-Aware Graph Transformer. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:2854-2865. [PMID: 38526888 DOI: 10.1109/tmi.2024.3381239] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/27/2024]
Abstract
Automated classification of breast cancer subtypes from digital pathology images is an extremely challenging task due to the complicated spatial patterns of cells in the tissue micro-environment. While newly proposed graph transformers can capture more long-range dependencies to enhance accuracy, they largely ignore the topological connectivity between graph nodes, which is nevertheless critical for extracting more representative features for this difficult task. In this paper, we propose a novel connectivity-aware graph transformer (CGT) for phenotyping the topological connectivity of the tissue graph constructed from digital pathology images for breast cancer classification. Our CGT seamlessly integrates connectivity embeddings into the node features at every graph transformer layer by using local connectivity aggregation, in order to yield more comprehensive graph representations that distinguish different breast cancer subtypes. In light of realistic intercellular communication, we then encode the spatial distance between two arbitrary nodes as a connectivity bias in the self-attention calculation, thereby allowing the CGT to distinctively harness the connectivity embedding based on the distance between two nodes. We extensively evaluate the proposed CGT on a large cohort of breast carcinoma digital pathology images stained with Haematoxylin & Eosin. Experimental results demonstrate the effectiveness of our CGT, which outperforms state-of-the-art methods by a large margin. Code is released at https://github.com/wang-kang-6/CGT.
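The distance-as-attention-bias idea can be sketched directly: subtract a scaled pairwise spatial distance from the content-similarity logits before the softmax, so nearby nodes attend to each other more strongly. A single-head NumPy sketch under assumed shapes, not the CGT implementation; the fixed `scale` stands in for a learned bias term.

```python
import numpy as np

def distance_biased_attention(x, coords, scale=1.0):
    """Self-attention whose pre-softmax logits are penalized by pairwise
    spatial distance (a simple stand-in for a learned connectivity bias)."""
    d = x.shape[1]
    logits = (x @ x.T) / np.sqrt(d)                 # content similarity
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))        # pairwise node distances
    logits = logits - scale * dist                  # connectivity bias
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn = e / e.sum(axis=1, keepdims=True)         # rows sum to 1
    return attn @ x, attn

# 6 tissue-graph nodes with 8-D features and 2-D spatial coordinates
rng = np.random.default_rng(2)
out, attn = distance_biased_attention(rng.normal(size=(6, 8)),
                                      rng.uniform(size=(6, 2)))
```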
15
Wang F, Song Y, Xu H, Liu J, Tang F, Yang D, Yang D, Liang W, Ren L, Wang J, Luo X, Zhou Y, Zeng X, Dan H, Chen Q. Prediction of the short-term efficacy and recurrence of photodynamic therapy in the treatment of oral leukoplakia based on deep learning. Photodiagnosis Photodyn Ther 2024; 48:104236. [PMID: 38851310 DOI: 10.1016/j.pdpdt.2024.104236] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2024] [Revised: 05/28/2024] [Accepted: 06/05/2024] [Indexed: 06/10/2024]
Abstract
BACKGROUND The treatment of oral leukoplakia (OLK) with aminolaevulinic acid photodynamic therapy (ALA-PDT) is widespread. Nonetheless, its efficacy varies. Therefore, this study constructed models for predicting the short-term efficacy and recurrence of OLK after ALA-PDT. METHODS The short-term efficacy and recurrence of ALA-PDT were calculated by statistical analysis, and the relevant influencing factors were analyzed with logistic regression and Cox regression models. Finally, prediction models for the total response (TR) rate, complete response (CR) rate, and recurrence in OLK patients after ALA-PDT treatment were established. Features were extracted from pathology sections using a deep learning autoencoder and combined with clinical variables to improve the predictive performance of the models. RESULTS The logistic regression analysis showed that non-homogeneous OLK (OR: 4.911, P = 0.023) and lesions with moderate to severe epithelial dysplasia (OR: 4.288, P = 0.042) had better short-term efficacy. The areas under the receiver operating characteristic curve (AUC) of the CR, TR, and recurrence prediction models after ALA-PDT treatment of OLK patients were 0.872, 0.718, and 0.564, respectively. Feature extraction revealed an association between inflammatory cell infiltration in the lamina propria and recurrence after PDT. Combining clinical variables and deep learning improved the performance of the recurrence model by more than 30%. CONCLUSIONS ALA-PDT has excellent short-term efficacy in the management of OLK, but the recurrence rate was high. The prediction model based on clinicopathological characteristics has excellent predictive value for short-term efficacy but limited value for recurrence. The use of deep learning and pathology images greatly improves the predictive value of the models.
Affiliation(s)
- Fei Wang
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China
- Yansong Song
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China
- Hao Xu
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China
- Jiaxin Liu
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China
- Fan Tang
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China; Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Cancer Center of Zhejiang University, Hangzhou 310000, PR China
- Dan Yang
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China
- Dan Yang
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China
- Wenhui Liang
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China
- Ling Ren
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China
- Jiongke Wang
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China
- Xiaobo Luo
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China
- Yu Zhou
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China
- Xin Zeng
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China.
- Hongxia Dan
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China.
- Qianming Chen
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases & Research Unit of Oral Carcinogenesis and Management, Chinese Academy of Medical Sciences, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan 610041, PR China; Zhejiang Provincial Clinical Research Center for Oral Diseases, Key Laboratory of Oral Biomedical Research of Zhejiang Province, Stomatology Hospital, School of Stomatology, Zhejiang University School of Medicine, Cancer Center of Zhejiang University, Hangzhou 310000, PR China.
16
Ma M, Zeng X, Qu L, Sheng X, Ren H, Chen W, Li B, You Q, Xiao L, Wang Y, Dai M, Zhang B, Lu C, Sheng W, Huang D. Advancing Automatic Gastritis Diagnosis: An Interpretable Multilabel Deep Learning Framework for the Simultaneous Assessment of Multiple Indicators. THE AMERICAN JOURNAL OF PATHOLOGY 2024; 194:1538-1549. [PMID: 38762117 DOI: 10.1016/j.ajpath.2024.04.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/12/2023] [Revised: 03/17/2024] [Accepted: 04/26/2024] [Indexed: 05/20/2024]
Abstract
The evaluation of morphologic features, such as inflammation, gastric atrophy, and intestinal metaplasia, is crucial for diagnosing gastritis. However, artificial intelligence analysis for nontumor diseases like gastritis is limited. Previous deep learning models have omitted important morphologic indicators and cannot simultaneously diagnose gastritis indicators or provide interpretable labels. To address this, an attention-based multi-instance multilabel learning network (AMMNet) was developed to simultaneously achieve the multilabel diagnosis of activity, atrophy, and intestinal metaplasia with only slide-level weak labels. To evaluate AMMNet's real-world performance, a diagnostic test was designed to observe improvements in junior pathologists' diagnostic accuracy and efficiency with and without AMMNet assistance. In this study of 1096 patients from seven independent medical centers, AMMNet performed well in assessing activity [area under the curve (AUC), 0.93], atrophy (AUC, 0.97), and intestinal metaplasia (AUC, 0.93). The false-negative rates of these indicators were only 0.04, 0.08, and 0.18, respectively, and junior pathologists had lower false-negative rates with model assistance (0.15 versus 0.10). Furthermore, AMMNet reduced the time required per whole slide image from 5.46 to 2.85 minutes, enhancing diagnostic efficiency. In block-level clustering analysis, AMMNet effectively visualized task-related patches within whole slide images, improving interpretability. These findings highlight AMMNet's effectiveness in accurately evaluating gastritis morphologic indicators on multicenter data sets. Using multi-instance multilabel learning strategies to support routine diagnostic pathology deserves further evaluation.
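A multilabel MIL head of this kind can be sketched as one attention-pooling branch per indicator over a shared bag of patch features, each ending in an independent sigmoid. This is an illustrative NumPy sketch, not AMMNet itself; all weight arrays are hypothetical stand-ins for learned parameters.

```python
import numpy as np

def multilabel_mil_predict(instances, attn_vecs, clf_weights, clf_biases):
    """For each label: attention-pool the shared instances with that label's
    attention vector, then apply that label's sigmoid classifier."""
    probs = []
    for wa, wc, b in zip(attn_vecs, clf_weights, clf_biases):
        scores = instances @ wa               # per-instance relevance
        e = np.exp(scores - scores.max())
        alpha = e / e.sum()                   # label-specific attention
        bag = alpha @ instances               # label-specific bag embedding
        probs.append(1.0 / (1.0 + np.exp(-(bag @ wc + b))))
    return np.array(probs)                    # one probability per indicator

# Three indicators (e.g. activity, atrophy, metaplasia) over 10 patches
rng = np.random.default_rng(3)
p = multilabel_mil_predict(rng.normal(size=(10, 8)),
                           rng.normal(size=(3, 8)),
                           rng.normal(size=(3, 8)),
                           rng.normal(size=3))
```

Because each indicator keeps its own attention weights, the per-label weights can be visualized separately, matching the slide-level interpretability described above.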
Affiliation(s)
- Mengke Ma
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China
- Xixi Zeng
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China
- Linhao Qu
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China
- Xia Sheng
- Department of Pathology, Minhang Hospital, Fudan University, Shanghai, China
- Hongzheng Ren
- Department of Pathology, Gongli Hospital, Naval Medical University, Shanghai, China
- Weixiang Chen
- Department of Pathology, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Bin Li
- Department of Pathology, Shanghai Xu-Hui Central Hospital, Shanghai, China
- Qinghua You
- Department of Pathology, Shanghai Pudong Hospital, Fudan University Pudong Medical Center, Shanghai, China
- Li Xiao
- Department of Pathology, Huadong Hospital, Shanghai, China
- Yi Wang
- Information Center, Fudan University Shanghai Cancer Center, Shanghai, China
- Mei Dai
- Information Center, Fudan University Shanghai Cancer Center, Shanghai, China
- Boqiang Zhang
- Shanghai Foremost Medical Technology Co. Ltd., Shanghai, China
- Changqing Lu
- Shanghai Foremost Medical Technology Co. Ltd., Shanghai, China
- Weiqi Sheng
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China.
- Dan Huang
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Fudan University Shanghai Medical College, Shanghai, China; Institute of Pathology, Fudan University, Shanghai, China.
17
Miyahira AK, Kamran SC, Jamaspishvili T, Marshall CH, Maxwell KN, Parolia A, Zorko NA, Pienta KJ, Soule HR. Disrupting prostate cancer research: Challenge accepted; report from the 2023 Coffey-Holden Prostate Cancer Academy Meeting. Prostate 2024; 84:993-1015. [PMID: 38682886 DOI: 10.1002/pros.24721] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/12/2024] [Accepted: 04/16/2024] [Indexed: 05/01/2024]
Abstract
INTRODUCTION The 2023 Coffey-Holden Prostate Cancer Academy (CHPCA) Meeting, themed "Disrupting Prostate Cancer Research: Challenge Accepted," was convened at the University of California, Los Angeles, Luskin Conference Center, in Los Angeles, CA, from June 22 to 25, 2023. METHODS 2023 marked the 10th Annual CHPCA Meeting, a discussion-oriented scientific think-tank conference convened annually by the Prostate Cancer Foundation, which centers on innovative and emerging research topics deemed pivotal for advancing critical unmet needs in prostate cancer research and clinical care. The 2023 CHPCA Meeting was attended by 81 academic investigators and included 40 talks across 8 sessions. RESULTS The central topic areas covered at the meeting included: targeting transcription factor neo-enhancesomes in cancer, AR as a pro-differentiation and oncogenic transcription factor, why few are cured with androgen deprivation therapy and how to change dogma to cure metastatic prostate cancer without castration, reducing prostate cancer morbidity and mortality with genetics, opportunities for radiation to enhance therapeutic benefit in oligometastatic prostate cancer, novel immunotherapeutic approaches, and the new era of artificial intelligence-driven precision medicine. DISCUSSION This article provides an overview of the scientific presentations delivered at the 2023 CHPCA Meeting, so that this knowledge can help advance prostate cancer research worldwide.
Affiliation(s)
- Andrea K Miyahira
- Science Department, Prostate Cancer Foundation, Santa Monica, California, USA
- Sophia C Kamran
- Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Tamara Jamaspishvili
- Department of Pathology and Laboratory Medicine, SUNY Upstate Medical University, Syracuse, New York, USA
- Catherine H Marshall
- Department of Oncology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Kara N Maxwell
- Department of Medicine-Hematology/Oncology and Department of Genetics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Medicine Service, Corporal Michael J. Crescenz VA Medical Center, Philadelphia, Pennsylvania, USA
- Abhijit Parolia
- Department of Pathology, Rogel Cancer Center, University of Michigan, Ann Arbor, Michigan, USA
- Nicholas A Zorko
- Division of Hematology, Oncology and Transplantation, Department of Medicine, University of Minnesota, Minneapolis, Minnesota, USA
- University of Minnesota Masonic Cancer Center, University of Minnesota, Minneapolis, Minnesota, USA
- Kenneth J Pienta
- The James Buchanan Brady Urological Institute, The Johns Hopkins School of Medicine, Baltimore, Maryland, USA
- Howard R Soule
- Science Department, Prostate Cancer Foundation, Santa Monica, California, USA
18
Zhou H, Zhao Q, Huang W, Liang Z, Cui C, Ma H, Luo C, Li S, Ruan G, Chen H, Zhu Y, Zhang G, Liu S, Liu L, Li H, Yang H, Xie H. A novel fully automatic segmentation and counting system for metastatic lymph nodes on multimodal magnetic resonance imaging: Evaluation and prognostic implications in nasopharyngeal carcinoma. Radiother Oncol 2024; 197:110367. [PMID: 38834152 DOI: 10.1016/j.radonc.2024.110367] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2023] [Revised: 05/28/2024] [Accepted: 06/01/2024] [Indexed: 06/06/2024]
Abstract
BACKGROUND The number of metastatic lymph nodes (MLNs) is crucial for the survival of nasopharyngeal carcinoma (NPC) patients, but manual counting is laborious. This study aims to explore the feasibility and prognostic value of automatic MLN segmentation and counting. METHODS We retrospectively enrolled 980 newly diagnosed patients in the primary cohort and 224 patients from two external cohorts. We utilized the nnUNet model for automatic MLN segmentation on multimodal magnetic resonance imaging. MLN counting methods, including manual delineation-assisted counting (MDAC) and a fully automatic lymph node counting system (AMLNC), were compared with manual evaluation (the gold standard). RESULTS In the internal validation group, the MLN segmentation results showed acceptable agreement with manual delineation, with a mean Dice coefficient of 0.771. The consistency among the three counting methods was as follows: 0.778 (gold standard vs. AMLNC), 0.638 (gold standard vs. MDAC), and 0.739 (AMLNC vs. MDAC). MLN numbers were categorized into a three-category variable (1-4, 5-9, >9) and a two-category variable (<4, ≥4) based on the gold standard and AMLNC. These categorical variables demonstrated acceptable discriminating ability for 5-year overall survival (OS), progression-free survival, and distant metastasis-free survival. Compared with the base prediction model, the model incorporating the two-category AMLNC counting numbers showed an improved C-index for 5-year OS prediction (0.658 vs. 0.675, P = 0.045). All results were successfully validated in the external cohort. CONCLUSIONS The AMLNC system offers a time- and labor-saving approach for fully automatic MLN segmentation and counting in NPC. MLN counting using AMLNC demonstrated non-inferior performance in survival discrimination compared to manual detection.
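The Dice coefficient used above to score segmentation agreement is straightforward to compute from two binary masks; a minimal sketch (the epsilon guard for empty masks is an added convention, not part of the paper):

```python
import numpy as np

def dice_coefficient(pred, gt, eps=1e-8):
    """Dice overlap between two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

# Two toy 2x3 masks: overlap of 2 pixels, sizes 3 and 3 -> Dice = 2*2/6
mask_a = np.array([[1, 1, 0], [0, 1, 0]])
mask_b = np.array([[1, 0, 0], [0, 1, 1]])
```

A mean Dice of 0.771, as reported here, means the automatic contours overlap manual delineations on roughly three quarters of the combined mask area on average.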
Affiliation(s)
- Haoyang Zhou
- School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, PR China.
- Qin Zhao
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China.
- Wenjie Huang
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China.
- Zhiying Liang
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China.
- Chunyan Cui
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China.
- Huali Ma
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China.
- Chao Luo
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China.
- Shuqi Li
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China.
- Guangying Ruan
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China.
- Hongbo Chen
- School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, PR China.
| | - Yuliang Zhu
- Department of Nasopharyngeal Head and Neck Tumor Radiotherapy, Zhongshan City People's Hospital, ZhongShan, PR China.
| | - Guoyi Zhang
- Department of Radiation Oncology, Foshan Academy of Medical Sciences, Sun Yat-Sen University Foshan Hospital and The First People's Hospital of Foshan, Foshan, PR China.
| | - Shanshan Liu
- School of Life & Environmental Science, Guangxi Colleges and Universities Key Laboratory of Biomedical Sensors and Intelligent Instruments, Guilin University of Electronic Technology, Guilin, Guangxi, PR China.
| | - Lizhi Liu
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China.
| | - Haojiang Li
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China.
| | - Hui Yang
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China.
| | - Hui Xie
- Department of Radiology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou 510060, PR China.
| |
19
Hua S, Yan F, Shen T, Ma L, Zhang X. PathoDuet: Foundation models for pathological slide analysis of H&E and IHC stains. Med Image Anal 2024; 97:103289. [PMID: 39106763 DOI: 10.1016/j.media.2024.103289] [Received: 12/13/2023] [Revised: 07/19/2024] [Accepted: 07/24/2024] [Indexed: 08/09/2024]
Abstract
Large amounts of digitized histopathological data display a promising future for developing pathological foundation models via self-supervised learning methods. Foundation models pretrained with these methods serve as a good basis for downstream tasks. However, the gap between natural and histopathological images hinders the direct application of existing methods. In this work, we present PathoDuet, a series of pretrained models on histopathological images, and a new self-supervised learning framework in histopathology. The framework is featured by a newly-introduced pretext token and later task raisers to explicitly utilize certain relations between images, like multiple magnifications and multiple stains. Based on this, two pretext tasks, cross-scale positioning and cross-stain transferring, are designed to pretrain the model on Hematoxylin and Eosin (H&E) images and transfer the model to immunohistochemistry (IHC) images, respectively. To validate the efficacy of our models, we evaluate the performance over a wide variety of downstream tasks, including patch-level colorectal cancer subtyping and whole slide image (WSI)-level classification in H&E field, together with expression level prediction of IHC marker, tumor identification and slide-level qualitative analysis in IHC field. The experimental results show the superiority of our models over most tasks and the efficacy of proposed pretext tasks. The codes and models are available at https://github.com/openmedlab/PathoDuet.
Affiliation(s)
- Shengyi Hua
- Qing Yuan Research Institute, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Fang Yan
- Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China
| | - Tianle Shen
- Qing Yuan Research Institute, Shanghai Jiao Tong University, Shanghai 200240, China
| | - Lei Ma
- National Biomedical Imaging Center, College of Future Technology, Peking University, Beijing 100871, China
| | - Xiaofan Zhang
- Qing Yuan Research Institute, Shanghai Jiao Tong University, Shanghai 200240, China; Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China.
| |
20
Jackson P, Ponath Sukumaran G, Babu C, Tony MC, Jack DS, Reshma VR, Davis D, Kurian N, John A. Artificial intelligence in medical education - perception among medical students. BMC Med Educ 2024; 24:804. [PMID: 39068482 PMCID: PMC11283685 DOI: 10.1186/s12909-024-05760-0] [Received: 01/04/2024] [Accepted: 07/09/2024] [Indexed: 07/30/2024]
Abstract
BACKGROUND As Artificial Intelligence (AI) becomes pervasive in healthcare, including applications like robotic surgery and image analysis, the World Medical Association emphasises integrating AI education into medical curricula. This study evaluates medical students' perceptions of 'AI in medicine', their preferences for AI training in education, and their grasp of AI's ethical implications in healthcare. MATERIALS & METHODS A cross-sectional study was conducted among 325 medical students in Kerala using a pre-validated, semi-structured questionnaire. The survey collected demographic data, any past educational experience about AI, participants' self-evaluation of their knowledge and evaluated self-perceived understanding of applications of AI in medicine. Participants responded to twelve Likert-scale questions targeting perceptions and ethical aspects and their opinions on suggested topics on AI to be included in their curriculum. RESULTS & DISCUSSION AI was viewed as an assistive technology for reducing medical errors by 57.2% of students, and 54.2% believed AI could enhance medical decision accuracy. About 49% agreed that AI could potentially improve accessibility to healthcare. Concerns about AI replacing physicians were reported by 37.6%, and 69.2% feared a reduction in the humanistic aspect of medicine. Students were worried about challenges to trust (52.9%), patient-physician relationships (54.5%) and breach of professional confidentiality (53.5%). Only 3.7% felt totally competent in informing patients about features and risks associated with AI applications. Strong demand for structured AI training was expressed, particularly on reducing medical errors (76.9%) and ethical issues (79.4%). CONCLUSION This study highlights medical students' demand for structured AI training in undergraduate curricula, emphasising its importance in addressing evolving healthcare needs and ethical considerations. Despite widespread ethical concerns, the majority perceive AI as an assistive technology in healthcare. These findings provide valuable insights for curriculum development and defining learning outcomes in AI education for medical students.
Affiliation(s)
| | | | - Chikku Babu
- Pushpagiri Medical College, Tiruvalla, Kerala, India
| | | | | | - V R Reshma
- Pushpagiri Medical College, Tiruvalla, Kerala, India
| | - Dency Davis
- Pushpagiri Medical College, Tiruvalla, Kerala, India
| | - Nisha Kurian
- Pushpagiri Medical College, Tiruvalla, Kerala, India
| | - Anjum John
- Pushpagiri Medical College, Tiruvalla, Kerala, India.
| |
21
Vorontsov E, Bozkurt A, Casson A, Shaikovski G, Zelechowski M, Severson K, Zimmermann E, Hall J, Tenenholtz N, Fusi N, Yang E, Mathieu P, van Eck A, Lee D, Viret J, Robert E, Wang YK, Kunz JD, Lee MCH, Bernhard JH, Godrich RA, Oakley G, Millar E, Hanna M, Wen H, Retamero JA, Moye WA, Yousfi R, Kanan C, Klimstra DS, Rothrock B, Liu S, Fuchs TJ. A foundation model for clinical-grade computational pathology and rare cancers detection. Nat Med 2024:10.1038/s41591-024-03141-0. [PMID: 39039250 DOI: 10.1038/s41591-024-03141-0] [Received: 02/06/2024] [Accepted: 06/19/2024] [Indexed: 07/24/2024]
Abstract
The analysis of histopathology images with artificial intelligence aims to enable clinical decision support systems and precision medicine. The success of such applications depends on the ability to model the diverse patterns observed in pathology images. To this end, we present Virchow, the largest foundation model for computational pathology to date. In addition to the evaluation of biomarker prediction and cell identification, we demonstrate that a large foundation model enables pan-cancer detection, achieving 0.95 specimen-level area under the (receiver operating characteristic) curve across nine common and seven rare cancers. Furthermore, we show that with less training data, the pan-cancer detector built on Virchow can achieve similar performance to tissue-specific clinical-grade models in production and outperform them on some rare variants of cancer. Virchow's performance gains highlight the value of a foundation model and open possibilities for many high-impact applications with limited amounts of labeled training data.
Affiliation(s)
| | | | | | | | | | | | | | | | | | | | - Ellen Yang
- Memorial Sloan Kettering Cancer Center, New York, NY, US
| | | | | | | | | | | | | | | | | | | | | | | | - Ewan Millar
- NSW Health Pathology, St George Hospital, Sydney, New South Wales, Australia
| | - Matthew Hanna
- Memorial Sloan Kettering Cancer Center, New York, NY, US
| | - Hannah Wen
- Memorial Sloan Kettering Cancer Center, New York, NY, US
| | | | | | | | | | | | | | | | | |
22
Ueda A, Nakai H, Miyagawa C, Otani T, Yoshida M, Murakami R, Komiyama S, Tanigawa T, Yokoi T, Takano H, Baba T, Miura K, Shimada M, Kigawa J, Enomoto T, Hamanishi J, Okamoto A, Okuno Y, Mandai M, Matsumura N. Artificial Intelligence-Based Histopathological Subtyping of High-Grade Serous Ovarian Cancer. Am J Pathol 2024:S0002-9440(24)00243-8. [PMID: 39032605 DOI: 10.1016/j.ajpath.2024.06.010] [Received: 01/04/2024] [Revised: 05/30/2024] [Accepted: 06/20/2024] [Indexed: 07/23/2024]
Abstract
Four subtypes of ovarian high-grade serous carcinoma (HGSC) have previously been identified, each with different prognoses and drug sensitivities. However, the accuracy of classification depended on the assessor's experience. This study aimed to develop a universal algorithm for HGSC-subtype classification using deep learning techniques. An artificial intelligence (AI)-based classification algorithm, which replicates the consensus diagnosis of pathologists, was formulated to analyze the morphological patterns and tumor-infiltrating lymphocyte counts for each tile extracted from whole slide images of ovarian HGSC available in The Cancer Genome Atlas (TCGA) data set. The accuracy of the algorithm was determined using the validation set from the Japanese Gynecologic Oncology Group 3022A1 (JGOG3022A1) and Kindai and Kyoto University (Kindai/Kyoto) cohorts. The algorithm classified the four HGSC-subtypes with mean accuracies of 0.933, 0.910, and 0.862 for the TCGA, JGOG3022A1, and Kindai/Kyoto cohorts, respectively. To compare mesenchymal transition (MT) with non-MT groups, overall survival analysis was performed in the TCGA data set. The AI-based prediction of HGSC-subtype classification in TCGA cases showed that the MT group had a worse prognosis than the non-MT group (P = 0.017). Furthermore, Cox proportional hazard regression analysis identified AI-based MT subtype classification prediction as a contributing factor along with residual disease after surgery, stage, and age. In conclusion, a robust AI-based HGSC-subtype classification algorithm was established using virtual slides of ovarian HGSC.
Affiliation(s)
- Akihiko Ueda
- Departments of Gynecology and Obstetrics, Graduate School of Medicine, Kyoto University, Kyoto, Japan; Department of Pathology, Kindai University Nara Hospital, Nara, Japan
| | - Hidekatsu Nakai
- Department of Obstetrics and Gynecology, Kindai University Faculty of Medicine, Osaka, Japan
| | - Chiho Miyagawa
- Department of Obstetrics and Gynecology, Kindai University Faculty of Medicine, Osaka, Japan
| | - Tomoyuki Otani
- Department of Obstetrics and Gynecology, Kindai University Faculty of Medicine, Osaka, Japan
| | - Manabu Yoshida
- Department of Pathology, Matsue City Hospital, Matsue City, Japan
| | - Ryusuke Murakami
- Departments of Gynecology and Obstetrics, Graduate School of Medicine, Kyoto University, Kyoto, Japan
| | - Shinichi Komiyama
- Department of Obstetrics and Gynecology, Toho University Faculty of Medicine, Tokyo, Japan
| | - Terumi Tanigawa
- Department of Gynecologic Oncology, Cancer Institute Hospital, Tokyo, Japan
| | - Takeshi Yokoi
- Department of Obstetrics and Gynecology, Kaizuka City Hospital, Osaka, Japan
| | - Hirokuni Takano
- Department of Obstetrics and Gynecology, The Jikei University Kashiwa Hospital, Kashiwa, Japan
| | - Tsukasa Baba
- Department of Obstetrics and Gynecology, Iwate Medical University School of Medicine, Morioka, Japan
| | - Kiyonori Miura
- Department of Gynecology and Obstetrics, Nagasaki University Graduate School of Biomedical Sciences
| | - Muneaki Shimada
- Department of Obstetrics and Gynecology, Tohoku University Graduate School of Medicine, Sendai, Japan
| | - Junzo Kigawa
- Department of Gynecology and Obstetrics, Matsue City Hospital, Matsue City, Japan
| | - Takayuki Enomoto
- Department of Obstetrics and Gynecology, Niigata University Graduate School of Medical and Dental Sciences, Niigata, Japan
| | - Junzo Hamanishi
- Departments of Gynecology and Obstetrics, Graduate School of Medicine, Kyoto University, Kyoto, Japan
| | - Aikou Okamoto
- Department of Obstetrics and Gynecology, The Jikei University School of Medicine, Tokyo, Japan
| | - Yasushi Okuno
- Departments of Biomedical Data Intelligence, Graduate School of Medicine, Kyoto University, Kyoto, Japan; Medical Sciences Innovation Hub Program, RIKEN Cluster for Science, Technology and Innovation Hub, Yokohama, Japan
| | - Masaki Mandai
- Departments of Gynecology and Obstetrics, Graduate School of Medicine, Kyoto University, Kyoto, Japan
| | - Noriomi Matsumura
- Department of Obstetrics and Gynecology, Kindai University Faculty of Medicine, Osaka, Japan.
| |
23
Kumar K, Yeo AU, McIntosh L, Kron T, Wheeler G, Franich RD. Deep Learning Auto-Segmentation Network for Pediatric Computed Tomography Data Sets: Can We Extrapolate From Adults? Int J Radiat Oncol Biol Phys 2024; 119:1297-1306. [PMID: 38246249 DOI: 10.1016/j.ijrobp.2024.01.201] [Received: 08/27/2023] [Revised: 12/10/2023] [Accepted: 01/07/2024] [Indexed: 01/23/2024]
Abstract
PURPOSE Artificial intelligence (AI)-based auto-segmentation models hold promise for enhanced efficiency and consistency in organ contouring for adaptive radiation therapy and radiation therapy planning. However, their performance on pediatric computed tomography (CT) data and cross-scanner compatibility remain unclear. This study aimed to evaluate the performance of AI-based auto-segmentation models trained on adult CT data when applied to pediatric data sets and explore the improvement in performance gained by including pediatric training data. It also examined their ability to accurately segment CT data acquired from different scanners. METHODS AND MATERIALS Using the nnU-Net framework, segmentation models were trained on data sets of adult, pediatric, and combined CT scans for 7 pelvic/thoracic organs. Each model was trained on 290 to 300 cases per category and organ. Training data sets included a combination of clinical data and several open repositories. The study incorporated a database of 459 pediatric (0-16 years) CT scans and 950 adults (>18 years), ensuring all scans had human expert ground-truth contours of the selected organs. Performance was evaluated based on Dice similarity coefficients (DSC) of the model-generated contours. RESULTS AI models trained exclusively on adult data underperformed on pediatric data, especially for the 0 to 2 age group: mean DSC was below 0.5 for the bladder and spleen. The addition of pediatric training data demonstrated significant improvement for all age groups, achieving a mean DSC of above 0.85 for all organs in every age group. Larger organs like the liver and kidneys maintained consistent performance for all models across age groups. No significant difference emerged in the cross-scanner performance evaluation, suggesting robust cross-scanner generalization. CONCLUSIONS For optimal segmentation across age groups, it is important to include pediatric data in the training of segmentation models. The successful cross-scanner generalization also supports the real-world clinical applicability of these AI models. This study emphasizes the significance of data set diversity in training robust AI systems for medical image interpretation tasks.
Affiliation(s)
- Kartik Kumar
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia
| | - Adam U Yeo
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
| | - Lachlan McIntosh
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia
| | - Tomas Kron
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
| | - Greg Wheeler
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
| | - Rick D Franich
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia.
| |
24
Osorio P, Jimenez-Perez G, Montalt-Tordera J, Hooge J, Duran-Ballester G, Singh S, Radbruch M, Bach U, Schroeder S, Siudak K, Vienenkoetter J, Lawrenz B, Mohammadi S. Latent Diffusion Models with Image-Derived Annotations for Enhanced AI-Assisted Cancer Diagnosis in Histopathology. Diagnostics (Basel) 2024; 14:1442. [PMID: 39001331 PMCID: PMC11241396 DOI: 10.3390/diagnostics14131442] [Received: 05/03/2024] [Revised: 06/19/2024] [Accepted: 06/26/2024] [Indexed: 07/16/2024] Open
Abstract
Artificial Intelligence (AI)-based image analysis has immense potential to support diagnostic histopathology, including cancer diagnostics. However, developing supervised AI methods requires large-scale annotated datasets. A potentially powerful solution is to augment training data with synthetic data. Latent diffusion models, which can generate high-quality, diverse synthetic images, are promising. However, the most common implementations rely on detailed textual descriptions, which are not generally available in this domain. This work proposes a method that constructs structured textual prompts from automatically extracted image features. We experiment with the PCam dataset, composed of tissue patches only loosely annotated as healthy or cancerous. We show that including image-derived features in the prompt, as opposed to only healthy and cancerous labels, improves the Fréchet Inception Distance (FID) by 88.6. We also show that pathologists find it challenging to detect synthetic images, with a median sensitivity/specificity of 0.55/0.55. Finally, we show that synthetic data effectively train AI models.
Affiliation(s)
- Pedro Osorio
- Decision Science & Advanced Analytics, Bayer AG, 13353 Berlin, Germany
| | | | | | - Jens Hooge
- Decision Science & Advanced Analytics, Bayer AG, 13353 Berlin, Germany
| | | | - Shivam Singh
- Decision Science & Advanced Analytics, Bayer AG, 13353 Berlin, Germany
| | - Moritz Radbruch
- Pathology and Clinical Pathology, Bayer AG, 13353 Berlin, Germany
| | - Ute Bach
- Pathology and Clinical Pathology, Bayer AG, 13353 Berlin, Germany
| | | | - Krystyna Siudak
- Pathology and Clinical Pathology, Bayer AG, 13353 Berlin, Germany
| | | | - Bettina Lawrenz
- Pathology and Clinical Pathology, Bayer AG, 13353 Berlin, Germany
| | - Sadegh Mohammadi
- Decision Science & Advanced Analytics, Bayer AG, 13353 Berlin, Germany
| |
25
Ye Y, Xia L, Yang S, Luo Y, Tang Z, Li Y, Han L, Xie H, Ren Y, Na N. Deep learning-enabled classification of kidney allograft rejection on whole slide histopathologic images. Front Immunol 2024; 15:1438247. [PMID: 39034991 PMCID: PMC11257957 DOI: 10.3389/fimmu.2024.1438247] [Received: 05/25/2024] [Accepted: 06/21/2024] [Indexed: 07/23/2024] Open
Abstract
Background Diagnosis of kidney transplant rejection currently relies on manual histopathological assessment, which is subjective and susceptible to inter-observer variability, leading to limited reproducibility. We aim to develop a deep learning system for automated assessment of whole-slide images (WSIs) from kidney allograft biopsies to enable detection and subtyping of rejection and to predict the prognosis of rejection. Method We collected H&E-stained WSIs of kidney allograft biopsies at 400x magnification from January 2015 to September 2023 at two hospitals. These biopsy specimens were classified as T cell-mediated rejection, antibody-mediated rejection, and other lesions based on the consensus reached by two experienced transplant pathologists. To achieve feature extraction, feature aggregation, and global classification, we employed multi-instance learning and common convolutional neural networks (CNNs). The performance of the developed models was evaluated using various metrics, including confusion matrix, receiver operating characteristic curves, the area under the curve (AUC), classification map, heat map, and pathologist-machine confrontations. Results In total, 906 WSIs from 302 kidney allograft biopsies were included for analysis. The model based on multi-instance learning enables detection and subtyping of rejection, named renal rejection artificial intelligence model (RRAIM), with the overall 3-category AUC of 0.798 in the independent test set, which is superior to that of three transplant pathologists under nearly routine assessment conditions. Moreover, the prognosis models accurately predicted graft loss within 1 year following rejection and treatment response for rejection, achieving AUC of 0.936 and 0.756, respectively. Conclusion We first developed deep-learning models utilizing multi-instance learning for the detection and subtyping of rejection and prediction of rejection prognosis in kidney allograft biopsies. These models performed well and may be useful in assisting the pathological diagnosis.
Affiliation(s)
- Yongrong Ye
- Department of Kidney Transplantation, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
| | - Liubing Xia
- Department of Kidney Transplantation, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
| | - Shicong Yang
- Department of Pathology, The First Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
| | - You Luo
- Department of Kidney Transplantation, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
| | - Zuofu Tang
- Department of Kidney Transplantation, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
| | - Yuanqing Li
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, China
- Research Center for Brain-Computer Interface, Pazhou Lab, Guangzhou, China
| | - Lanqing Han
- Center for Artificial Intelligence in Medicine, Research Institute of Tsinghua, Pearl River Delta, Guangzhou, China
| | - Hanbin Xie
- Department of Anesthesiology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
| | - Yong Ren
- Scientific Research Project Department, Guangdong Artificial Intelligence and Digital Economy Laboratory (Guangzhou), Pazhou Lab, Guangzhou, China
- Shensi lab, Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China (UESTC), Shenzhen, China
- The Seventh Affiliated Hospital of Sun Yat-Sen University, Shenzhen, China
| | - Ning Na
- Department of Kidney Transplantation, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
| |
26
Szymaszek P, Tyszka-Czochara M, Ortyl J. Application of Photoactive Compounds in Cancer Theranostics: Review on Recent Trends from Photoactive Chemistry to Artificial Intelligence. Molecules 2024; 29:3164. [PMID: 38999115 PMCID: PMC11243723 DOI: 10.3390/molecules29133164] [Received: 05/23/2024] [Revised: 06/14/2024] [Accepted: 06/25/2024] [Indexed: 07/14/2024] Open
Abstract
According to the World Health Organization (WHO) and the International Agency for Research on Cancer (IARC), the number of cancer cases and deaths worldwide is predicted to nearly double by 2030, reaching 21.7 million cases and 13 million fatalities. The increase in cancer mortality is due to limitations in the diagnosis and treatment options that are currently available. The close relationship between diagnostics and medicine has made it possible for cancer patients to receive precise diagnoses and individualized care. This article discusses newly developed compounds with potential for photodynamic therapy and diagnostic applications, as well as those already in use. In addition, it discusses the use of artificial intelligence in the analysis of diagnostic images obtained using, among other things, theranostic agents.
Affiliation(s)
- Patryk Szymaszek
- Department of Biotechnology and Physical Chemistry, Faculty of Chemical Engineering and Technology, Cracow University of Technology, Warszawska 24, 31-155 Kraków, Poland
| | | | - Joanna Ortyl
- Department of Biotechnology and Physical Chemistry, Faculty of Chemical Engineering and Technology, Cracow University of Technology, Warszawska 24, 31-155 Kraków, Poland
- Photo HiTech Ltd., Bobrzyńskiego 14, 30-348 Kraków, Poland
- Photo4Chem Ltd., Juliusza Lea 114/416A-B, 31-133 Cracow, Poland
| |
27
Oku T, Furuya S, Lee A, Altenmüller E. Video-based diagnosis support system for pianists with Musician's dystonia. Front Neurol 2024; 15:1409962. [PMID: 39015318 PMCID: PMC11250081 DOI: 10.3389/fneur.2024.1409962] [Received: 03/31/2024] [Accepted: 06/18/2024] [Indexed: 07/18/2024] Open
Abstract
Background Musician's dystonia is a task-specific movement disorder that deteriorates fine motor control of skilled movements in musical performance. Although this disorder threatens professional careers, its diagnosis is challenging for clinicians who have no specialized knowledge of musical performance. Objectives To support diagnostic evaluation, the present study proposes a novel approach using a machine learning-based algorithm to identify the symptomatic movements of Musician's dystonia. Methods We propose an algorithm that identifies the dystonic movements using the anomaly detection method with an autoencoder trained with the hand kinematics of healthy pianists. A unique feature of the algorithm is that it requires only the video image of the hand, which can be derived by a commercially available camera. We also measured the hand biomechanical functions to assess the contribution of peripheral factors and improve the identification of dystonic symptoms. Results The proposed algorithm successfully identified Musician's dystonia with an accuracy and specificity of 90% based only on video footage of the hands. In addition, we identified the degradation of biomechanical functions involved in controlling multiple fingers, which is not specific to musical performance. By contrast, there were no dystonia-specific malfunctions of hand biomechanics, including the strength and agility of individual digits. Conclusion These findings demonstrate the effectiveness of the present technique in aiding in the accurate diagnosis of Musician's dystonia.
Affiliation(s)
- Takanori Oku
- College of Engineering and Design, Shibaura Institute of Technology, Tokyo, Japan
- Sony Computer Science Laboratories, Inc., Tokyo, Japan
- NeuroPiano Institute, Kyoto, Japan
| | - Shinichi Furuya
- Sony Computer Science Laboratories, Inc., Tokyo, Japan
- NeuroPiano Institute, Kyoto, Japan
- Institute of Music Physiology and Musicians’ Medicine, University of Music, Drama and Media, Hanover, Germany
| | - André Lee
- Institute of Music Physiology and Musicians’ Medicine, University of Music, Drama and Media, Hanover, Germany
- Department of Neurology, Klinikum rechts der Isar, Technical University of Munich, München, Germany
| | - Eckart Altenmüller
- Institute of Music Physiology and Musicians’ Medicine, University of Music, Drama and Media, Hanover, Germany
| |
28
White BS, Woo XY, Koc S, Sheridan T, Neuhauser SB, Wang S, Evrard YA, Chen L, Foroughi pour A, Landua JD, Mashl RJ, Davies SR, Fang B, Rosa MG, Evans KW, Bailey MH, Chen Y, Xiao M, Rubinstein JC, Sanderson BJ, Lloyd MW, Domanskyi S, Dobrolecki LE, Fujita M, Fujimoto J, Xiao G, Fields RC, Mudd JL, Xu X, Hollingshead MG, Jiwani S, Acevedo S, Davis-Dusenbery BN, Robinson PN, Moscow JA, Doroshow JH, Mitsiades N, Kaochar S, Pan CX, Carvajal-Carmona LG, Welm AL, Welm BE, Govindan R, Li S, Davies MA, Roth JA, Meric-Bernstam F, Xie Y, Herlyn M, Ding L, Lewis MT, Bult CJ, Dean DA, Chuang JH. A Pan-Cancer Patient-Derived Xenograft Histology Image Repository with Genomic and Pathologic Annotations Enables Deep Learning Analysis. Cancer Res 2024; 84:2060-2072. [PMID: 39082680 PMCID: PMC11217732 DOI: 10.1158/0008-5472.can-23-1349] [Received: 05/04/2023] [Revised: 10/13/2023] [Accepted: 03/27/2024] [Indexed: 08/04/2024]
Abstract
Patient-derived xenografts (PDX) model human intra- and intertumoral heterogeneity in the context of the intact tissue of immunocompromised mice. Histologic imaging via hematoxylin and eosin (H&E) staining is routinely performed on PDX samples, which could be harnessed for computational analysis. Prior studies of large clinical H&E image repositories have shown that deep learning analysis can identify intercellular and morphologic signals correlated with disease phenotype and therapeutic response. In this study, we developed an extensive, pan-cancer repository of >1,000 PDX and paired parental tumor H&E images. These images, curated from the PDX Development and Trial Centers Research Network Consortium, had a range of associated genomic and transcriptomic data, clinical metadata, pathologic assessments of cell composition, and, in several cases, detailed pathologic annotations of neoplastic, stromal, and necrotic regions. The amenability of these images to deep learning was highlighted through three applications: (i) development of a classifier for neoplastic, stromal, and necrotic regions; (ii) development of a predictor of xenograft-transplant lymphoproliferative disorder; and (iii) application of a published predictor of microsatellite instability. Together, this PDX Development and Trial Centers Research Network image repository provides a valuable resource for controlled digital pathology analysis, both for the evaluation of technical issues and for the development of computational image-based methods that make clinical predictions based on PDX treatment studies. Significance: A pan-cancer repository of >1,000 patient-derived xenograft hematoxylin and eosin-stained images will facilitate cancer biology investigations through histopathologic analysis and contributes important model system data that expand existing human histology repositories.
Affiliation(s)
- Brian S. White: The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Xing Yi Woo: The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut; Bioinformatics Institute (BII), Agency for Science, Technology and Research (A*STAR), Singapore, Singapore.
- Soner Koc: Velsera, Charlestown, Massachusetts.
- Todd Sheridan: The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Shidan Wang: University of Texas Southwestern Medical Center, Dallas, Texas.
- Yvonne A. Evrard: Leidos Biomedical Research Inc., Frederick National Laboratory for Cancer Research, Frederick, Maryland.
- Li Chen: Leidos Biomedical Research Inc., Frederick National Laboratory for Cancer Research, Frederick, Maryland.
- Ali Foroughi pour: The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- R. Jay Mashl: Washington University School of Medicine, St. Louis, Missouri.
- Bingliang Fang: University of Texas MD Anderson Cancer Center, Houston, Texas.
- Kurt W. Evans: University of Texas MD Anderson Cancer Center, Houston, Texas.
- Matthew H. Bailey: Simmons Center for Cancer Research, Brigham Young University, Provo, Utah.
- Yeqing Chen: The Wistar Institute, Philadelphia, Pennsylvania.
- Min Xiao: The Wistar Institute, Philadelphia, Pennsylvania.
- Sergii Domanskyi: The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Maihi Fujita: Huntsman Cancer Institute, University of Utah, Salt Lake City, Utah.
- Junya Fujimoto: University of Texas MD Anderson Cancer Center, Houston, Texas.
- Guanghua Xiao: University of Texas Southwestern Medical Center, Dallas, Texas.
- Ryan C. Fields: Washington University School of Medicine, St. Louis, Missouri.
- Xiaowei Xu: The Wistar Institute, Philadelphia, Pennsylvania.
- Shahanawaz Jiwani: Leidos Biomedical Research Inc., Frederick National Laboratory for Cancer Research, Frederick, Maryland.
- Peter N. Robinson: The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
- Alana L. Welm: Huntsman Cancer Institute, University of Utah, Salt Lake City, Utah.
- Bryan E. Welm: Huntsman Cancer Institute, University of Utah, Salt Lake City, Utah.
- Shunqiang Li: Washington University School of Medicine, St. Louis, Missouri.
- Jack A. Roth: University of Texas MD Anderson Cancer Center, Houston, Texas.
- Yang Xie: University of Texas Southwestern Medical Center, Dallas, Texas.
- Li Ding: Washington University School of Medicine, St. Louis, Missouri.
- Jeffrey H. Chuang: The Jackson Laboratory for Genomic Medicine, Farmington, Connecticut.
29
Schmidt A, Morales-Álvarez P, Cooper LA, Newberg LA, Enquobahrie A, Molina R, Katsaggelos AK. Focused active learning for histopathological image classification. Med Image Anal 2024; 95:103162. PMID: 38593644; DOI: 10.1016/j.media.2024.103162.
Abstract
Active Learning (AL) has the potential to solve a major problem of digital pathology: the efficient acquisition of labeled data for machine learning algorithms. However, existing AL methods often struggle in realistic settings with artifacts, ambiguities, and class imbalances, as commonly seen in the medical field. The lack of precise uncertainty estimations leads to the acquisition of images with low informative value. To address these challenges, we propose Focused Active Learning (FocAL), which combines a Bayesian neural network with out-of-distribution (OoD) detection to estimate different uncertainties for the acquisition function. Specifically, the weighted epistemic uncertainty accounts for the class imbalance, the aleatoric uncertainty for ambiguous images, and an OoD score for artifacts. We perform extensive experiments to validate our method on MNIST and the real-world PANDA dataset for the classification of prostate cancer. The results confirm that other AL methods are 'distracted' by ambiguities and artifacts, which harms their performance. FocAL effectively focuses on the most informative images, avoiding ambiguities and artifacts during acquisition. In both experiments, FocAL outperforms existing AL approaches, reaching a Cohen's kappa of 0.764 with only 0.69% of the labeled PANDA data.
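A rough illustration of the kind of acquisition function this abstract describes (weighted epistemic uncertainty, penalized by aleatoric uncertainty and an OoD score) is sketched below. The function name, the BALD-style uncertainty decomposition over MC-dropout samples, and the linear combination with weights `lam_a` and `lam_o` are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def focal_acquisition(probs, class_weights, ood_scores, lam_a=1.0, lam_o=1.0):
    """Score unlabeled images for annotation; higher = more informative.

    probs: (T, N, C) softmax outputs from T stochastic (MC-dropout) passes.
    class_weights: (C,) weights, e.g. inverse class frequency, to counteract
        class imbalance.
    ood_scores: (N,) out-of-distribution scores (high = likely artifact).
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)                          # (N, C) mean prediction
    total = -(mean_p * np.log(mean_p + eps)).sum(-1)     # predictive entropy
    aleatoric = -(probs * np.log(probs + eps)).sum(-1).mean(axis=0)
    epistemic = total - aleatoric                        # BALD-style mutual information
    # weight epistemic uncertainty by the predicted class's weight
    weighted_epi = epistemic * class_weights[mean_p.argmax(-1)]
    return weighted_epi - lam_a * aleatoric - lam_o * ood_scores
```

Images on which the ensemble disagrees while each pass is individually confident score highest, whereas ambiguous images (high aleatoric uncertainty) and artifacts (high OoD score) are pushed down the ranking.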
Affiliation(s)
- Arne Schmidt: Department of Computer Science and Artificial Intelligence, University of Granada, Granada, 18010, Spain.
- Pablo Morales-Álvarez: Department of Statistics and Operations Research, University of Granada, Granada, 18010, Spain.
- Lee Ad Cooper: Department of Pathology, Northwestern University, Chicago, IL, 60611, USA.
- Rafael Molina: Department of Computer Science and Artificial Intelligence, University of Granada, Granada, 18010, Spain.
- Aggelos K Katsaggelos: Department of Electrical and Computer Engineering, Northwestern University, Evanston, IL, 60208, USA.
30
Hwang EJ. [Clinical Application of Artificial Intelligence-Based Detection Assistance Devices for Chest X-Ray Interpretation: Current Status and Practical Considerations]. J Korean Soc Radiol 2024; 85:693-704. PMID: 39130790; PMCID: PMC11310435; DOI: 10.3348/jksr.2024.0052.
Abstract
Artificial intelligence (AI) technology is actively being applied to the interpretation of medical imaging, such as chest X-rays. AI-based software medical devices, which automatically detect various types of abnormal findings in chest X-ray images to assist physicians in their interpretation, are being actively commercialized and clinically implemented in Korea. Several important issues need to be considered before AI-based detection assistance tools are applied in clinical practice: the evaluation of performance and efficacy prior to implementation; the determination of the target application, range, and method of delivering results; and post-implementation monitoring and legal liability issues. Appropriate decision-making regarding these devices, based on the situation in each institution, is necessary. To ensure the safe and efficient implementation and operation of AI-based detection assistance tools, radiologists must be engaged not only in medical image interpretation but also as expert assessors of these software devices.
31
Laury AR, Zheng S, Aho N, Fallegger R, Hänninen S, Saez-Rodriguez J, Tanevski J, Youssef O, Tang J, Carpén OM. Opening the Black Box: Spatial Transcriptomics and the Relevance of Artificial Intelligence-Detected Prognostic Regions in High-Grade Serous Carcinoma. Mod Pathol 2024; 37:100508. PMID: 38704029; DOI: 10.1016/j.modpat.2024.100508.
Abstract
Image-based deep learning models are used to extract new information from standard hematoxylin and eosin pathology slides; however, biological interpretation of the features detected by artificial intelligence (AI) remains a challenge. High-grade serous carcinoma of the ovary (HGSC) is characterized by aggressive behavior and chemotherapy resistance, but also exhibits striking variability in outcome. Our understanding of this disease is limited, partly due to considerable tumor heterogeneity. We previously trained an AI model to identify HGSC tumor regions that are highly associated with outcome status but are indistinguishable by conventional morphologic methods. Here, we applied spatially resolved transcriptomics to further profile the AI-identified tumor regions in 16 patients (8 per outcome group) and identify molecular features related to disease outcome in patients who underwent primary debulking surgery and platinum-based chemotherapy. We examined formalin-fixed paraffin-embedded tissue from (1) regions identified by the AI model as highly associated with short or extended chemotherapy response, and (2) background tumor regions (not identified by the AI model as highly associated with outcome status) from the same tumors. We show that the transcriptomic profiles of AI-identified regions are more distinct than background regions from the same tumors, are superior in predicting outcome, and differ in several pathways including those associated with chemoresistance in HGSC. Further, we find that poor outcome and good outcome regions are enriched by different tumor subpopulations, suggesting distinctive interaction patterns. In summary, our work presents proof of concept that AI-guided spatial transcriptomic analysis improves recognition of biologic features relevant to patient outcomes.
Affiliation(s)
- Anna Ray Laury: Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Department of Pathology, University of Helsinki and HUS Diagnostic Center, Helsinki University Hospital, Helsinki, Finland.
- Shuyu Zheng: Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland.
- Niina Aho: Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland.
- Robin Fallegger: Institute for Computational Biomedicine, Faculty of Medicine, Heidelberg University and Heidelberg University Hospital, Heidelberg, Germany.
- Satu Hänninen: Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Department of Pathology, University of Helsinki and HUS Diagnostic Center, Helsinki University Hospital, Helsinki, Finland.
- Julio Saez-Rodriguez: Institute for Computational Biomedicine, Faculty of Medicine, Heidelberg University and Heidelberg University Hospital, Heidelberg, Germany.
- Jovan Tanevski: Institute for Computational Biomedicine, Faculty of Medicine, Heidelberg University and Heidelberg University Hospital, Heidelberg, Germany; Department of Knowledge Technologies, Jožef Stefan Institute, Ljubljana, Slovenia.
- Omar Youssef: Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Clinical and Chemical Pathology Department, National Cancer Institute, Cairo University, Cairo, Egypt.
- Jing Tang: Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Department of Biochemistry and Developmental Biology, Faculty of Medicine, University of Helsinki, Helsinki, Finland.
- Olli Mikael Carpén: Research Program in Systems Oncology, Research Programs Unit, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Department of Pathology, University of Helsinki and HUS Diagnostic Center, Helsinki University Hospital, Helsinki, Finland; iCAN Digital Precision Cancer Medicine Flagship, University of Helsinki, Helsinki, Finland.
32
Zhou X, Lu Y, Wu Y, Yu Y, Liu Y, Wang C, Zhao Z, Wang C, Gao Z, Li Z, Zhao Y, Cao W. Construction and validation of a deep learning prognostic model based on digital pathology images of stage III colorectal cancer. Eur J Surg Oncol 2024; 50:108369. PMID: 38703632; DOI: 10.1016/j.ejso.2024.108369.
Abstract
BACKGROUND TNM staging is the main reference standard for prognostic prediction in colorectal cancer (CRC), but prognostic heterogeneity among patients of the same stage remains large. This study aimed to classify the tumor microenvironment of patients with stage III CRC and to quantify the classified tumor tissues with deep learning, exploring the prognostic value of the resulting tumor risk signature (TRS). METHODS A tissue classification model was developed to identify nine tissue types (adipose, background, debris, lymphocytes, mucus, smooth muscle, normal mucosa, stroma, and tumor) in whole-slide images (WSIs) of stage III CRC patients. This model was used to extract tumor tissues from WSIs of 265 stage III CRC patients from The Cancer Genome Atlas and 70 stage III CRC patients from the Sixth Affiliated Hospital of Sun Yat-sen University. We used three different deep learning models for tumor feature extraction and applied a Cox model to establish the TRS. Survival analysis was conducted to explore the prognostic performance of the TRS. RESULTS The tissue classification model achieved 94.4% accuracy in identifying the nine tissue types. The TRS showed a Harrell's concordance index of 0.736, 0.716, and 0.711 in the internal training, internal validation, and external validation sets, respectively. Survival analysis showed that the TRS had significant predictive ability for prognosis (hazard ratio: 3.632, p = 0.03). CONCLUSION The TRS is an independent and significant prognostic factor for progression-free survival (PFS) in stage III CRC patients and contributes to risk stratification of patients with different clinical stages.
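Harrell's concordance index, used above to evaluate the TRS, measures how often the model's risk ranking agrees with observed survival: among comparable patient pairs, the patient with the higher risk score should experience the event first. A minimal pure-Python sketch (ignoring the tie-in-time refinements of full implementations) might look like:

```python
import itertools

def harrell_c_index(times, events, risks):
    """Fraction of comparable pairs where the higher-risk patient fails earlier.

    times: follow-up times; events: 1 = event observed, 0 = censored;
    risks: model risk scores (higher = worse predicted prognosis)."""
    concordant = 0.0
    comparable = 0
    for i, j in itertools.combinations(range(len(times)), 2):
        if times[j] < times[i]:
            i, j = j, i  # order the pair so i has the shorter follow-up
        if not events[i] or times[i] == times[j]:
            continue  # not comparable: earlier time censored, or tied times
        comparable += 1
        if risks[i] > risks[j]:
            concordant += 1.0
        elif risks[i] == risks[j]:
            concordant += 0.5  # tied risk scores count half
    return concordant / comparable
```

A value of 1.0 means perfect risk ranking, 0.5 is chance level, so the reported 0.736/0.716/0.711 indicate a moderately discriminative signature.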
Affiliation(s)
- Xuezhi Zhou: College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China.
- Yizhan Lu: College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China.
- Yue Wu: Department of Radiology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Guangdong Research Institute of Gastroenterology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China.
- Yi Yu: College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China.
- Yong Liu: College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China.
- Chang Wang: College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China.
- Zongya Zhao: College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China.
- Chong Wang: College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China.
- Zhixian Gao: College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China.
- Zhenxin Li: College of Medical Engineering, Xinxiang Medical University, Xinxiang, Henan, China; Engineering Technology Research Center of Neurosense and Control of Henan Province, Xinxiang, China; Henan International Joint Laboratory of Neural Information Analysis and Drug Intelligent Design, Xinxiang, China.
- Yandong Zhao: Department of Pathology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Guangdong Research Institute of Gastroenterology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China.
- Wuteng Cao: Department of Radiology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Guangdong Research Institute of Gastroenterology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China.
33
Retamero JA, Gulturk E, Bozkurt A, Liu S, Gorgan M, Moral L, Horton M, Parke A, Malfroid K, Sue J, Rothrock B, Oakley G, DeMuth G, Millar E, Fuchs TJ, Klimstra DS. Artificial Intelligence Helps Pathologists Increase Diagnostic Accuracy and Efficiency in the Detection of Breast Cancer Lymph Node Metastases. Am J Surg Pathol 2024; 48:846-854. PMID: 38809272; PMCID: PMC11191045; DOI: 10.1097/pas.0000000000002248.
Abstract
The detection of lymph node metastases is essential for breast cancer staging, although it is a tedious and time-consuming task for which the sensitivity of pathologists is suboptimal. Artificial intelligence (AI) can help pathologists detect lymph node metastases, which could help alleviate workload issues. We studied how pathologists' performance varied when aided by AI. An AI algorithm was trained using more than 32,000 breast sentinel lymph node whole slide images (WSIs) matched with their corresponding pathology reports from more than 8000 patients. The algorithm highlighted areas suspicious of harboring metastasis. Three pathologists were asked to review a dataset comprising 167 breast sentinel lymph node WSIs, of which 69 harbored cancer metastases of different sizes, enriched for challenging cases; 98 slides were benign. The pathologists read the dataset twice, both digitally, with and without AI assistance, randomized for slide and reading orders to reduce bias and separated by a 3-week washout period. Their slide-level diagnoses were recorded, and they were timed during their reads. The average reading time per slide was 129 seconds during the unassisted phase versus 58 seconds during the AI-assisted phase, an overall efficiency gain of 55% (P < 0.001). These efficiency gains applied to both benign and malignant WSIs. Two of the three reading pathologists experienced significant sensitivity improvements, from 74.5% to 93.5% (P ≤ 0.006). This study highlights that AI can help pathologists shorten their reading times by more than half and also improve their metastasis detection rate.
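As a quick arithmetic check, the 55% efficiency gain reported above follows directly from the two mean reading times:

```python
unassisted = 129  # mean seconds per slide without AI assistance
assisted = 58     # mean seconds per slide with AI assistance
gain = (unassisted - assisted) / unassisted
print(f"efficiency gain: {gain:.0%}")  # prints "efficiency gain: 55%"
```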
Affiliation(s)
- Sandy Liu: New England Pathology Associates, Springfield, MA.
- Maria Gorgan: New England Pathology Associates, Springfield, MA.
- Luis Moral: New England Pathology Associates, Springfield, MA.
- Jill Sue: Paige.AI, 11 Times Square, New York, NY.
- Ewan Millar: Paige.AI, 11 Times Square, New York, NY; Department of Anatomical Pathology, NSW Health Pathology, St George Hospital, Sydney, NSW, Australia.
- Thomas J. Fuchs: Paige.AI, 11 Times Square, New York, NY; Windreich Department of Artificial Intelligence and Human Health, Icahn School of Medicine at Mount Sinai, New York, NY; Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY.
34
Koh SY, Lee JH, Park H, Goo JM. Value of CT quantification in progressive fibrosing interstitial lung disease: a deep learning approach. Eur Radiol 2024; 34:4195-4205. PMID: 38085286; DOI: 10.1007/s00330-023-10483-9.
Abstract
OBJECTIVES To evaluate the relationship of changes in the deep learning-based CT quantification of interstitial lung disease (ILD) with changes in forced vital capacity (FVC) and visual assessments of ILD progression, and to investigate their prognostic implications. METHODS This study included ILD patients with CT scans at intervals of over 2 years between January 2015 and June 2021. Deep learning-based texture analysis software was used to segment ILD findings on CT images (fibrosis: reticular opacity + honeycombing cysts; total ILD extent: ground-glass opacity + fibrosis). Patients were grouped according to the absolute decline of predicted FVC (< 5%, 5-10%, and ≥ 10%) and ILD progression assessed by thoracic radiologists, and their quantification results were compared among these groups. The associations between quantification results and survival were evaluated using multivariable Cox regression analysis. RESULTS In total, 468 patients (239 men; 64 ± 9.5 years) were included. Fibrosis and total ILD extents more increased in patients with larger FVC decline (p < .001 in both). Patients with ILD progression had higher fibrosis and total ILD extent increases than those without ILD progression (p < .001 in both). Increases in fibrosis and total ILD extent were significant prognostic factors when adjusted for absolute FVC declines of ≥ 5% (hazard ratio [HR] 1.844, p = .01 for fibrosis; HR 2.484, p < .001 for total ILD extent) and ≥ 10% (HR 2.918, p < .001 for fibrosis; HR 3.125, p < .001 for total ILD extent). CONCLUSION Changes in ILD CT quantification correlated with changes in FVC and visual assessment of ILD progression, and they were independent prognostic factors in ILD patients. CLINICAL RELEVANCE STATEMENT Quantifying the CT features of interstitial lung disease using deep learning techniques could play a key role in defining and predicting the prognosis of progressive fibrosing interstitial lung disease. 
Key Points:
- Radiologic findings on high-resolution CT are important in diagnosing progressive fibrosing interstitial lung disease.
- Deep learning-based quantification results for fibrosis and total interstitial lung disease extents correlated with the decline in forced vital capacity and visual assessments of interstitial lung disease progression, and emerged as independent prognostic factors.
- Deep learning-based interstitial lung disease CT quantification can play a key role in diagnosing and prognosticating progressive fibrosing interstitial lung disease.
Affiliation(s)
- Seok Young Koh: Department of Radiology, Seoul National University Hospital, 101, Daehak-ro, Jongno-gu, Seoul, 03080, South Korea.
- Jong Hyuk Lee: Department of Radiology, Seoul National University Hospital, 101, Daehak-ro, Jongno-gu, Seoul, 03080, South Korea.
- Hyungin Park: Department of Radiology, Seoul National University Hospital, 101, Daehak-ro, Jongno-gu, Seoul, 03080, South Korea.
- Jin Mo Goo: Department of Radiology, Seoul National University Hospital, 101, Daehak-ro, Jongno-gu, Seoul, 03080, South Korea; Department of Radiology, Seoul National University College of Medicine, 101, Daehak-ro, Jongno-gu, Seoul, 03080, South Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, 101, Daehak-ro, Jongno-gu, Seoul, 03080, South Korea; Cancer Research Institute, Seoul National University, 101, Daehak-ro, Jongno-gu, Seoul, 03080, South Korea.
35
Aden D, Zaheer S, Khan S. Possible benefits, challenges, pitfalls, and future perspective of using ChatGPT in pathology. Rev Esp Patol 2024; 57:198-210. PMID: 38971620; DOI: 10.1016/j.patol.2024.04.003.
Abstract
The much-hyped artificial intelligence (AI) model ChatGPT, developed by OpenAI, can offer great benefits to physicians, especially pathologists, by saving time that can be redirected to more significant work. Generative AI is a special class of AI model that uses patterns and structures learned from existing data to create new data. Utilizing ChatGPT in pathology offers a multitude of benefits, encompassing the summarization of patient records, promising prospects in digital pathology, and valuable contributions to education and research in the field. However, certain roadblocks remain, such as integrating ChatGPT with image analysis, which could revolutionize pathology by increasing diagnostic accuracy and precision. The challenges of using ChatGPT include biases from its training data, the need for ample input data, risks related to bias and transparency, and potential adverse outcomes arising from inaccurate content generation. Generating meaningful insights from textual information will also require efficient processing of different types of image data, such as medical images and pathology slides. Due consideration should be given to ethical and legal issues, including bias.
Affiliation(s)
- Durre Aden: Department of Pathology, Hamdard Institute of Medical Sciences and Research, Jamia Hamdard, New Delhi, India.
- Sufian Zaheer: Department of Pathology, Vardhman Mahavir Medical College and Safdarjung Hospital, New Delhi, India.
- Sabina Khan: Department of Pathology, Hamdard Institute of Medical Sciences and Research, Jamia Hamdard, New Delhi, India.
36
Villanueva-Miranda I, Rong R, Quan P, Wen Z, Zhan X, Yang DM, Chi Z, Xie Y, Xiao G. Enhancing Medical Imaging Segmentation with GB-SAM: A Novel Approach to Tissue Segmentation Using Granular Box Prompts. Cancers (Basel) 2024; 16:2391. PMID: 39001452; PMCID: PMC11240495; DOI: 10.3390/cancers16132391.
Abstract
Recent advances in foundation models have revolutionized model development in digital pathology, reducing dependence on extensive manual annotations required by traditional methods. The ability of foundation models to generalize well with few-shot learning addresses critical barriers in adapting models to diverse medical imaging tasks. This work presents the Granular Box Prompt Segment Anything Model (GB-SAM), an improved version of the Segment Anything Model (SAM) fine-tuned using granular box prompts with limited training data. The GB-SAM aims to reduce the dependency on expert pathologist annotators by enhancing the efficiency of the automated annotation process. Granular box prompts are small box regions derived from ground truth masks, conceived to replace the conventional approach of using a single large box covering the entire H&E-stained image patch. This method allows a localized and detailed analysis of gland morphology, enhancing the segmentation accuracy of individual glands and reducing the ambiguity that larger boxes might introduce in morphologically complex regions. We compared the performance of our GB-SAM model against U-Net trained on different sizes of the CRAG dataset. We evaluated the models across histopathological datasets, including CRAG, GlaS, and Camelyon16. GB-SAM consistently outperformed U-Net, with reduced training data, showing less segmentation performance degradation. Specifically, on the CRAG dataset, GB-SAM achieved a Dice coefficient of 0.885 compared to U-Net's 0.857 when trained on 25% of the data. Additionally, GB-SAM demonstrated segmentation stability on the CRAG testing dataset and superior generalization across unseen datasets, including challenging lymph node segmentation in Camelyon16, which achieved a Dice coefficient of 0.740 versus U-Net's 0.491. Furthermore, compared to SAM-Path and Med-SAM, GB-SAM showed competitive performance. GB-SAM achieved a Dice score of 0.900 on the CRAG dataset, while SAM-Path achieved 0.884. On the GlaS dataset, Med-SAM reported a Dice score of 0.956, whereas GB-SAM achieved 0.885 with significantly less training data. These results highlight GB-SAM's advanced segmentation capabilities and reduced dependency on large datasets, indicating its potential for practical deployment in digital pathology, particularly in settings with limited annotated datasets.
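The granular box prompts described above can be pictured as per-gland bounding boxes derived from the connected components of a ground-truth mask, in place of one large box over the whole patch; the Dice coefficient is the overlap metric quoted throughout. The sketch below is an illustrative reconstruction under those assumptions (the helper names and the 4-connectivity flood fill are our own, and SAM itself is omitted):

```python
import numpy as np
from collections import deque

def granular_boxes(mask):
    """Bounding box (r0, c0, r1, c1) of each 4-connected foreground
    component in a binary mask: one small prompt per gland instead of
    a single box covering the whole patch."""
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    for r, c in zip(*np.nonzero(mask)):
        if seen[r, c]:
            continue
        queue, pixels = deque([(r, c)]), []
        seen[r, c] = True
        while queue:  # flood fill one connected component
            y, x = queue.popleft()
            pixels.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        ys, xs = zip(*pixels)
        boxes.append((min(ys), min(xs), max(ys), max(xs)))
    return boxes

def dice(pred, gt):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())
```

Each returned box would then be fed to the segmentation model as a prompt, and Dice between prediction and ground truth gives scores on the scale of the 0.885/0.900 figures reported in the abstract.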
Affiliation(s)
- Ismael Villanueva-Miranda: Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA.
- Ruichen Rong: Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA.
- Peiran Quan: Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA.
- Zhuoyu Wen: Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA.
- Xiaowei Zhan: Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA.
- Donghan M Yang: Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA.
- Zhikai Chi: Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA.
- Yang Xie: Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA.
- Guanghua Xiao: Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA; Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA.
Collapse
|
37
|
van Dooijeweert C, Flach RN, Ter Hoeve ND, Vreuls CPH, Goldschmeding R, Freund JE, Pham P, Nguyen TQ, van der Wall E, Frederix GWJ, Stathonikos N, van Diest PJ. Clinical implementation of artificial-intelligence-assisted detection of breast cancer metastases in sentinel lymph nodes: the CONFIDENT-B single-center, non-randomized clinical trial. NATURE CANCER 2024:10.1038/s43018-024-00788-z. [PMID: 38937624 DOI: 10.1038/s43018-024-00788-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/15/2024] [Accepted: 05/29/2024] [Indexed: 06/29/2024]
Abstract
Pathologists' assessment of sentinel lymph nodes (SNs) for breast cancer (BC) metastases is a treatment-guiding yet labor-intensive and costly task, owing to the immunohistochemistry (IHC) performed in morphologically negative cases. This non-randomized, single-center clinical trial (International Standard Randomized Controlled Trial Number: 14323711) assessed the efficacy of an artificial intelligence (AI)-assisted workflow for detecting BC metastases in SNs while maintaining diagnostic safety standards. From September 2022 to May 2023, 190 SN specimens were consecutively enrolled and allocated biweekly to the intervention arm (n = 100) or control arm (n = 90). In both arms, digital whole-slide images of hematoxylin-eosin sections of SN specimens were assessed by an expert pathologist, who was assisted by the 'Metastasis Detection' app (Visiopharm) in the intervention arm. Our primary endpoint showed a significantly reduced adjusted relative risk of IHC use (0.680, 95% confidence interval: 0.347-0.878) for AI-assisted pathologists, with subsequent cost savings of ~3,000 €. Secondary endpoints showed significant time reductions and up to 30% improved sensitivity for AI-assisted pathologists. This trial demonstrates the safety and potential for cost and time savings of AI assistance.
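The trial's primary endpoint is an adjusted relative risk with a confidence interval. The adjustment model is not described in the abstract, but the underlying crude relative risk and its standard (Katz log-method) confidence interval can be sketched as follows; the arm counts below are hypothetical, not the trial's data:

```python
import math

def relative_risk_ci(a: int, n1: int, c: int, n0: int, z: float = 1.96):
    """Crude relative risk with a Katz log-method 95% CI.
    a/n1 = events/total in the intervention arm, c/n0 = events/total in the control arm."""
    p1, p0 = a / n1, c / n0
    rr = p1 / p0
    # standard error of log(RR)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical counts: IHC used in 40/100 intervention vs 60/90 control specimens
rr, lo, hi = relative_risk_ci(40, 100, 60, 90)
print(f"RR = {rr:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

An RR below 1 with a CI excluding 1, as reported here (0.680, 0.347-0.878), indicates that AI assistance reduced the risk of needing IHC.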
Affiliation(s)
- C van Dooijeweert
  - Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands
- R N Flach
  - Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands
- N D Ter Hoeve
  - Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands
- C P H Vreuls
  - Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands
- R Goldschmeding
  - Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands
- J E Freund
  - Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands
- P Pham
  - Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands
- T Q Nguyen
  - Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands
- E van der Wall
  - Department of Medical Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- G W J Frederix
  - Department of Epidemiology and Health Economics, Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, The Netherlands
- N Stathonikos
  - Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands
- P J van Diest
  - Department of Pathology, University Medical Center Utrecht, Utrecht, The Netherlands

38
Chang J, Hatfield B. Advancements in computer vision and pathology: Unraveling the potential of artificial intelligence for precision diagnosis and beyond. Adv Cancer Res 2024; 161:431-478. [PMID: 39032956 DOI: 10.1016/bs.acr.2024.05.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/23/2024]
Abstract
The integration of computer vision into pathology through slide digitalization represents a transformative leap in the field's evolution. Traditional pathology methods, while reliable, are often time-consuming and susceptible to intra- and interobserver variability. In contrast, computer vision, empowered by artificial intelligence (AI) and machine learning (ML), promises revolutionary changes, offering consistent, reproducible, and objective results with ever-increasing speed and scalability. The applications of advanced algorithms and deep learning architectures like CNNs and U-Nets augment pathologists' diagnostic capabilities, opening new frontiers in automated image analysis. As these technologies mature and integrate into digital pathology workflows, they are poised to provide deeper insights into disease processes, quantify and standardize biomarkers, enhance patient outcomes, and automate routine tasks, reducing pathologists' workload. However, this transformative force calls for cross-disciplinary collaboration between pathologists, computer scientists, and industry innovators to drive research and development. While acknowledging its potential, this chapter addresses the limitations of AI in pathology, encompassing technical, practical, and ethical considerations during development and implementation.
Affiliation(s)
- Justin Chang
  - Virginia Commonwealth University Health System, Richmond, VA, United States
- Bryce Hatfield
  - Virginia Commonwealth University Health System, Richmond, VA, United States

39
Darbandsari A, Farahani H, Asadi M, Wiens M, Cochrane D, Khajegili Mirabadi A, Jamieson A, Farnell D, Ahmadvand P, Douglas M, Leung S, Abolmaesumi P, Jones SJM, Talhouk A, Kommoss S, Gilks CB, Huntsman DG, Singh N, McAlpine JN, Bashashati A. AI-based histopathology image analysis reveals a distinct subset of endometrial cancers. Nat Commun 2024; 15:4973. [PMID: 38926357 PMCID: PMC11208496 DOI: 10.1038/s41467-024-49017-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Accepted: 05/21/2024] [Indexed: 06/28/2024] Open
Abstract
Endometrial cancer (EC) has four molecular subtypes with strong prognostic value and therapeutic implications. The most common subtype (NSMP; No Specific Molecular Profile) is assigned after exclusion of the defining features of the other three molecular subtypes and includes patients with heterogeneous clinical outcomes. In this study, we employ artificial intelligence (AI)-powered histopathology image analysis to differentiate between p53abn and NSMP EC subtypes and consequently identify a sub-group of NSMP EC patients that has markedly inferior progression-free and disease-specific survival (termed 'p53abn-like NSMP'), in a discovery cohort of 368 patients and two independent validation cohorts of 290 and 614 patients from other centers. Shallow whole genome sequencing reveals a higher burden of copy number abnormalities in the 'p53abn-like NSMP' group compared to NSMP, suggesting that this group is biologically distinct compared to other NSMP ECs. Our work demonstrates the power of AI to detect prognostically different and otherwise unrecognizable subsets of EC where conventional and standard molecular or pathologic criteria fall short, refining image-based tumor classification. This study's findings are applicable exclusively to females.
Affiliation(s)
- Amirali Darbandsari
  - Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Hossein Farahani
  - School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
- Maryam Asadi
  - School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Matthew Wiens
  - School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Dawn Cochrane
  - Department of Molecular Oncology, British Columbia Cancer Research Institute, Vancouver, BC, Canada
- Amy Jamieson
  - Department of Obstetrics and Gynaecology, University of British Columbia, Vancouver, BC, Canada
- David Farnell
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
  - Vancouver General Hospital, Vancouver, BC, Canada
- Pouya Ahmadvand
  - School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Maxwell Douglas
  - Department of Molecular Oncology, British Columbia Cancer Research Institute, Vancouver, BC, Canada
- Samuel Leung
  - Department of Molecular Oncology, British Columbia Cancer Research Institute, Vancouver, BC, Canada
- Purang Abolmaesumi
  - Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Steven J M Jones
  - Michael Smith Genome Sciences Center, British Columbia Cancer Research Center, Vancouver, BC, Canada
- Aline Talhouk
  - Department of Obstetrics and Gynaecology, University of British Columbia, Vancouver, BC, Canada
- Stefan Kommoss
  - Department of Women's Health, Tübingen University Hospital, Tübingen, Germany
- C Blake Gilks
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
  - Vancouver General Hospital, Vancouver, BC, Canada
- David G Huntsman
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
  - Department of Molecular Oncology, British Columbia Cancer Research Institute, Vancouver, BC, Canada
- Naveena Singh
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada
  - Vancouver General Hospital, Vancouver, BC, Canada
- Jessica N McAlpine
  - Department of Obstetrics and Gynaecology, University of British Columbia, Vancouver, BC, Canada
- Ali Bashashati
  - School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
  - Department of Pathology and Laboratory Medicine, University of British Columbia, Vancouver, BC, Canada

40
Zhao R, Xi Z, Liu H, Jian X, Zhang J, Zhang Z, Li S. MIST: Multi-instance selective transformer for histopathological subtype prediction. Med Image Anal 2024; 97:103251. [PMID: 38954942 DOI: 10.1016/j.media.2024.103251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Revised: 01/24/2024] [Accepted: 06/21/2024] [Indexed: 07/04/2024]
Abstract
Accurate histopathological subtype prediction is clinically significant for cancer diagnosis and tumor microenvironment analysis. However, achieving accurate histopathological subtype prediction is a challenging task due to (1) instance-level discrimination of histopathological images, (2) low inter-class and large intra-class variances among histopathological images in their shape and chromatin texture, and (3) heterogeneous feature distribution over different images. In this paper, we formulate subtype prediction as fine-grained representation learning and propose a novel multi-instance selective transformer (MIST) framework, effectively achieving accurate histopathological subtype prediction. The proposed MIST designs an effective selective self-attention mechanism with multi-instance learning (MIL) and vision transformer (ViT) to adaptively identify informative instances for fine-grained representation. Innovatively, the MIST entrusts each instance with different contributions to the bag representation based on its interactions with instances and bags. Specifically, a SiT module with selective multi-head self-attention (S-MSA) is well-designed to identify the representative instances by modeling the instance-to-instance interactions. In contrast, a MIFD module with the information bottleneck is proposed to learn the discriminative fine-grained representation for histopathological images by modeling instance-to-bag interactions with the selected instances. Substantial experiments on five clinical benchmarks demonstrate that the MIST achieves accurate histopathological subtype prediction and obtains state-of-the-art performance with an accuracy of 0.936. The MIST shows great potential to handle fine-grained medical image analysis, such as histopathological subtype prediction in clinical applications.
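MIST's selective self-attention is specific to the paper, but the core MIL idea it builds on — weighting instance (patch) embeddings into a single bag representation — can be sketched with standard attention pooling. The projection matrices below are random stand-ins for learned parameters, not the paper's weights:

```python
import numpy as np

def attention_mil_pool(instances: np.ndarray, V: np.ndarray, w: np.ndarray):
    """Pool n instance embeddings (n, d) into one bag vector (d,) using
    attention scores that are softmax-normalized over the instances."""
    scores = np.tanh(instances @ V) @ w          # (n,) one score per instance
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                         # attention weights sum to 1
    return alpha @ instances, alpha              # weighted bag vector, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))                     # 8 patch embeddings of dim 16
V = rng.normal(size=(16, 4)); w = rng.normal(size=4)
bag, alpha = attention_mil_pool(X, V, w)
print(bag.shape, round(alpha.sum(), 6))          # (16,) 1.0
```

The attention weights `alpha` make each instance's contribution to the bag explicit, which is the same property MIST exploits when selecting representative instances.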
Affiliation(s)
- Rongchang Zhao
  - School of Computer Science and Engineering, Central South University, Changsha, China
- Zijun Xi
  - School of Computer Science and Engineering, Central South University, Changsha, China
- Huanchi Liu
  - School of Computer Science and Engineering, Central South University, Changsha, China
- Xiangkun Jian
  - School of Computer Science and Engineering, Central South University, Changsha, China
- Jian Zhang
  - School of Computer Science and Engineering, Central South University, Changsha, China
- Zijian Zhang
  - National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Shuo Li
  - School of Computer Science and Engineering, Central South University, Changsha, China
  - Department of Computer and Data Science and Department of Biomedical Engineering, Case Western Reserve University, Cleveland, USA

41
Li S, Wei X, Wang L, Zhang G, Jiang L, Zhou X, Huang Q. Dual-source dual-energy CT and deep learning for equivocal lymph nodes on CT images for thyroid cancer. Eur Radiol 2024:10.1007/s00330-024-10854-w. [PMID: 38904758 DOI: 10.1007/s00330-024-10854-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Revised: 04/08/2024] [Accepted: 04/23/2024] [Indexed: 06/22/2024]
Abstract
OBJECTIVES This study investigated the diagnostic performance of dual-energy computed tomography (CT) and deep learning for the preoperative classification of equivocal lymph nodes (LNs) on CT images in thyroid cancer patients. METHODS In this prospective study, from October 2020 to March 2021, 375 patients with thyroid disease underwent thin-section dual-energy thyroid CT at a small field of view (FOV) and thyroid surgery. The data of 183 patients with 281 LNs were analyzed. The targeted LNs were negative or equivocal on small FOV CT images. Six deep-learning models were used to classify the LNs on conventional CT images. The performance of all models was compared with pathology reports. RESULTS Of the 281 LNs, 65.5% had a short diameter of less than 4 mm. Multiple quantitative dual-energy CT parameters significantly differed between benign and malignant LNs. Multivariable logistic regression analyses showed that the best combination of parameters had an area under the curve (AUC) of 0.857, with excellent consistency and discrimination, and its diagnostic accuracy and sensitivity were 74.4% and 84.2%, respectively (p < 0.001). The Visual Geometry Group 16 (VGG16)-based model achieved the best accuracy (86%) and sensitivity (88%) in differentiating between benign and malignant LNs, with an AUC of 0.89. CONCLUSIONS The VGG16 model based on small FOV CT images showed better diagnostic accuracy and sensitivity than the spectral parameter model. Our study presents a noninvasive and convenient imaging biomarker to predict malignant LNs without suspicious CT features in thyroid cancer patients. CLINICAL RELEVANCE STATEMENT Our study presents a deep-learning-based model to predict malignant lymph nodes in thyroid cancer without suspicious features on conventional CT images, which shows better diagnostic accuracy and sensitivity than the regression model based on spectral parameters.
KEY POINTS Many cervical lymph nodes (LNs) do not express suspicious features on conventional computed tomography (CT). Dual-energy CT parameters can distinguish between benign and malignant LNs. The VGG16 model shows superior diagnostic accuracy and sensitivity for malignant LNs.
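The AUC values compared above (0.857 for the regression model vs 0.89 for VGG16) can be computed directly from model scores via the rank (Mann-Whitney) formulation; a minimal sketch with made-up scores:

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC = P(a random positive scores above a random negative); ties count 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Made-up malignant (positive) vs benign (negative) node scores
print(round(auc_from_scores([0.9, 0.8, 0.6], [0.7, 0.4, 0.3]), 3))  # 8/9 ≈ 0.889
```

This O(n·m) form is fine for illustration; production code would use an O(n log n) rank-based routine such as `sklearn.metrics.roc_auc_score`.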
Affiliation(s)
- Sheng Li
  - Department of Radiology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, China
  - Guangdong Esophageal Cancer Institute, Guangzhou, 510060, China
- Xiaoting Wei
  - Department of Radiology, The Eighth Affiliated Hospital, Sun Yat-sen University, Shenzhen, 518036, China
- Li Wang
  - School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an, 710072, China
- Guizhi Zhang
  - Department of Radiology, The Eighth Affiliated Hospital, Sun Yat-sen University, Shenzhen, 518036, China
- Linling Jiang
  - Department of Radiology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, China
- Xuhui Zhou
  - Department of Radiology, The Eighth Affiliated Hospital, Sun Yat-sen University, Shenzhen, 518036, China
- Qinghua Huang
  - School of Artificial Intelligence, Optics and Electronics (iOPEN), Northwestern Polytechnical University, Xi'an, 710072, China

42
Machiraju G, Derry A, Desai A, Guha N, Karimi AH, Zou J, Altman RB, Ré C, Mallick P. Prospector Heads: Generalized Feature Attribution for Large Models & Data. ARXIV 2024:arXiv:2402.11729v2. [PMID: 38947933 PMCID: PMC11213143] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 07/02/2024]
Abstract
Feature attribution, the ability to localize regions of the input data that are relevant for classification, is an important capability for ML models in scientific and biomedical domains. Current methods for feature attribution, which rely on "explaining" the predictions of end-to-end classifiers, suffer from imprecise feature localization and are inadequate for use with small sample sizes and high-dimensional datasets due to computational challenges. We introduce prospector heads, an efficient and interpretable alternative to explanation-based attribution methods that can be applied to any encoder and any data modality. Prospector heads generalize across modalities through experiments on sequences (text), images (pathology), and graphs (protein structures), outperforming baseline attribution methods by up to 26.3 points in mean localization AUPRC. We also demonstrate how prospector heads enable improved interpretation and discovery of class-specific patterns in input data. Through their high performance, flexibility, and generalizability, prospectors provide a framework for improving trust and transparency for ML models in complex domains.
Affiliation(s)
- Neel Guha
  - Department of Computer Science, Stanford University
- James Zou
  - Department of Biomedical Data Science, Stanford University
- Russ B. Altman
  - Department of Biomedical Data Science, Stanford University

43
Sinitca AM, Lyanova AI, Kaplun DI, Hassan H, Krasichkov AS, Sanarova KE, Shilenko LA, Sidorova EE, Akhmetova AA, Vaulina DD, Karpov AA. Microscopy Image Dataset for Deep Learning-Based Quantitative Assessment of Pulmonary Vascular Changes. Sci Data 2024; 11:635. [PMID: 38879569 PMCID: PMC11180164 DOI: 10.1038/s41597-024-03473-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2024] [Accepted: 06/04/2024] [Indexed: 06/19/2024] Open
Abstract
Pulmonary hypertension (PH) is a syndrome complex that accompanies a number of diseases of different etiologies; it is associated with structural and functional changes of the pulmonary circulation vessels and manifests as increased pressure in the pulmonary artery. The structural changes in the pulmonary circulation vessels are the main limiting factor determining the prognosis of patients with PH. Thickening and irreversible deposition of collagen in the walls of the pulmonary artery branches lead to rapid disease progression and decreasing therapy effectiveness. In this regard, histological examination of the pulmonary circulation vessels is critical both in preclinical studies and in clinical practice. However, measuring quantitative parameters such as the average outer vessel diameter, the vessel wall area, and the hypertrophy index requires significant time investment and specialist training to analyze micrographs. This work presents a dataset of pulmonary circulation vessels for pathology assessment using deep-learning-based semantic segmentation techniques. It comprises 609 original microphotographs of vessels, numerical data from experts' measurements, and microphotographs with outlines of these measurements for each of the vessels. Furthermore, we provide an example of a deep learning pipeline that uses the U-Net semantic segmentation model to extract vascular regions. The presented database will be useful for the development of new software solutions for the analysis of histological micrographs.
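The abstract does not give the experts' exact formulas, but once a segmentation model yields binary masks of the outer vessel region and the lumen, parameters like those named above can be derived directly. A sketch under common definitions (the equivalent-circle diameter and the wall-fraction hypertrophy index are assumptions, not the dataset's documented formulas):

```python
import numpy as np

def vessel_metrics(outer: np.ndarray, lumen: np.ndarray, um_per_px: float = 1.0):
    """Wall area, equivalent-circle outer diameter, and a hypertrophy index
    (wall area / total vessel area) from binary masks of the outer vessel and lumen."""
    outer_area = outer.sum() * um_per_px ** 2
    wall_area = outer_area - lumen.sum() * um_per_px ** 2
    diameter = 2.0 * np.sqrt(outer_area / np.pi)   # diameter of an equal-area circle
    return wall_area, diameter, wall_area / outer_area

# Toy masks: a 10x10 vessel region with a 6x6 lumen
outer = np.ones((10, 10), dtype=int)
lumen = np.zeros((10, 10), dtype=int); lumen[2:8, 2:8] = 1
wall, d, hyp = vessel_metrics(outer, lumen)
print(wall, round(hyp, 2))  # 64.0 0.64
```

In a real pipeline the masks would come from the U-Net's per-vessel predictions and `um_per_px` from the microscope calibration.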
Affiliation(s)
- Aleksandr M Sinitca
  - Centre for Digital Telecommunication Technologies, St. Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia
- Asya I Lyanova
  - Centre for Digital Telecommunication Technologies, St. Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia
- Dmitrii I Kaplun
  - Artificial Intelligence Research Institute, China University of Mining and Technology, Xuzhou, 221116, China
  - Department of Automation and Control Processes, St. Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia
- Hassan Hassan
  - Department of Automation and Control Processes, St. Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia
- Alexander S Krasichkov
  - Radio Engineering Systems Department, St. Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia
  - Department of Computer Science and Engineering, St. Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia
- Kseniia E Sanarova
  - Radio Engineering Systems Department, St. Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia
- Leonid A Shilenko
  - Institute of Experimental Medicine, Almazov National Medical Research Centre, St. Petersburg, 197341, Russia
- Elizaveta E Sidorova
  - Institute of Experimental Medicine, Almazov National Medical Research Centre, St. Petersburg, 197341, Russia
- Anna A Akhmetova
  - Institute of Experimental Medicine, Almazov National Medical Research Centre, St. Petersburg, 197341, Russia
- Dariya D Vaulina
  - Institute of Experimental Medicine, Almazov National Medical Research Centre, St. Petersburg, 197341, Russia
- Andrei A Karpov
  - Department of Computer Science and Engineering, St. Petersburg Electrotechnical University "LETI", St. Petersburg, 197022, Russia
  - Institute of Experimental Medicine, Almazov National Medical Research Centre, St. Petersburg, 197341, Russia

44
Boulogne LH, Lorenz J, Kienzle D, Schön R, Ludwig K, Lienhart R, Jégou S, Li G, Chen C, Wang Q, Shi D, Maniparambil M, Müller D, Mertes S, Schröter N, Hellmann F, Elia M, Dirks I, Bossa MN, Berenguer AD, Mukherjee T, Vandemeulebroucke J, Sahli H, Deligiannis N, Gonidakis P, Huynh ND, Razzak I, Bouadjenek R, Verdicchio M, Borrelli P, Aiello M, Meakin JA, Lemm A, Russ C, Ionasec R, Paragios N, van Ginneken B, Revel-Dubois MP. The STOIC2021 COVID-19 AI challenge: Applying reusable training methodologies to private data. Med Image Anal 2024; 97:103230. [PMID: 38875741 DOI: 10.1016/j.media.2024.103230] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2023] [Revised: 01/11/2024] [Accepted: 06/03/2024] [Indexed: 06/16/2024]
Abstract
Challenges drive the state-of-the-art of automated medical image analysis. The quantity of public training data that they provide can limit the performance of their solutions. Public access to the training methodology for these solutions remains absent. This study implements the Type Three (T3) challenge format, which allows for training solutions on private data and guarantees reusable training methodologies. With T3, challenge organizers train a codebase provided by the participants on sequestered training data. T3 was implemented in the STOIC2021 challenge, with the goal of predicting from a computed tomography (CT) scan whether subjects had a severe COVID-19 infection, defined as intubation or death within one month. STOIC2021 consisted of a Qualification phase, where participants developed challenge solutions using 2000 publicly available CT scans, and a Final phase, where participants submitted their training methodologies with which solutions were trained on CT scans of 9724 subjects. The organizers successfully trained six of the eight Final phase submissions. The submitted codebases for training and running inference were released publicly. The winning solution obtained an area under the receiver operating characteristic curve for discerning between severe and non-severe COVID-19 of 0.815. The Final phase solutions of all finalists improved upon their Qualification phase solutions.
Affiliation(s)
- Luuk H Boulogne
  - Radboud university medical center, P.O. Box 9101, 6500HB Nijmegen, The Netherlands
- Julian Lorenz
  - University of Augsburg, Universitätsstraße 2, 86159 Augsburg, Germany
- Daniel Kienzle
  - University of Augsburg, Universitätsstraße 2, 86159 Augsburg, Germany
- Robin Schön
  - University of Augsburg, Universitätsstraße 2, 86159 Augsburg, Germany
- Katja Ludwig
  - University of Augsburg, Universitätsstraße 2, 86159 Augsburg, Germany
- Rainer Lienhart
  - University of Augsburg, Universitätsstraße 2, 86159 Augsburg, Germany
- Guang Li
  - Keya medical technology co. ltd, Floor 20, Building A, 1 Ronghua South Road, Yizhuang Economic Development Zone, Daxing District, Beijing, PR China
- Cong Chen
  - Keya medical technology co. ltd, Floor 20, Building A, 1 Ronghua South Road, Yizhuang Economic Development Zone, Daxing District, Beijing, PR China
- Qi Wang
  - Keya medical technology co. ltd, Floor 20, Building A, 1 Ronghua South Road, Yizhuang Economic Development Zone, Daxing District, Beijing, PR China
- Derik Shi
  - Keya medical technology co. ltd, Floor 20, Building A, 1 Ronghua South Road, Yizhuang Economic Development Zone, Daxing District, Beijing, PR China
- Mayug Maniparambil
  - ML-Labs, Dublin City University, N210, Marconi building, Dublin City University, Glasnevin, Dublin 9, Ireland
- Dominik Müller
  - University of Augsburg, Universitätsstraße 2, 86159 Augsburg, Germany
  - Faculty of Applied Computer Science, University of Augsburg, Germany
- Silvan Mertes
  - Faculty of Applied Computer Science, University of Augsburg, Germany
- Niklas Schröter
  - Faculty of Applied Computer Science, University of Augsburg, Germany
- Fabio Hellmann
  - Faculty of Applied Computer Science, University of Augsburg, Germany
- Miriam Elia
  - Faculty of Applied Computer Science, University of Augsburg, Germany
- Ine Dirks
  - Vrije Universiteit Brussel, Department of Electronics and Informatics, Pleinlaan 2, 1050 Brussels, Belgium
  - imec, Kapeldreef 75, 3001 Leuven, Belgium
- Matías Nicolás Bossa
  - Vrije Universiteit Brussel, Department of Electronics and Informatics, Pleinlaan 2, 1050 Brussels, Belgium
  - imec, Kapeldreef 75, 3001 Leuven, Belgium
- Abel Díaz Berenguer
  - Vrije Universiteit Brussel, Department of Electronics and Informatics, Pleinlaan 2, 1050 Brussels, Belgium
  - imec, Kapeldreef 75, 3001 Leuven, Belgium
- Tanmoy Mukherjee
  - Vrije Universiteit Brussel, Department of Electronics and Informatics, Pleinlaan 2, 1050 Brussels, Belgium
  - imec, Kapeldreef 75, 3001 Leuven, Belgium
- Jef Vandemeulebroucke
  - Vrije Universiteit Brussel, Department of Electronics and Informatics, Pleinlaan 2, 1050 Brussels, Belgium
  - imec, Kapeldreef 75, 3001 Leuven, Belgium
- Hichem Sahli
  - Vrije Universiteit Brussel, Department of Electronics and Informatics, Pleinlaan 2, 1050 Brussels, Belgium
  - imec, Kapeldreef 75, 3001 Leuven, Belgium
- Nikos Deligiannis
  - Vrije Universiteit Brussel, Department of Electronics and Informatics, Pleinlaan 2, 1050 Brussels, Belgium
  - imec, Kapeldreef 75, 3001 Leuven, Belgium
- Panagiotis Gonidakis
  - Vrije Universiteit Brussel, Department of Electronics and Informatics, Pleinlaan 2, 1050 Brussels, Belgium
  - imec, Kapeldreef 75, 3001 Leuven, Belgium
- Imran Razzak
  - University of New South Wales, Sydney, Australia
- James A Meakin
  - Radboud university medical center, P.O. Box 9101, 6500HB Nijmegen, The Netherlands
- Alexander Lemm
  - Amazon Web Services, Marcel-Breuer-Str. 12, 80807 München, Germany
- Christoph Russ
  - Amazon Web Services, Marcel-Breuer-Str. 12, 80807 München, Germany
- Razvan Ionasec
  - Amazon Web Services, Marcel-Breuer-Str. 12, 80807 München, Germany
- Nikos Paragios
  - Keya medical technology co. ltd, Floor 20, Building A, 1 Ronghua South Road, Yizhuang Economic Development Zone, Daxing District, Beijing, PR China
  - TheraPanacea, 75004, Paris, France
- Bram van Ginneken
  - Radboud university medical center, P.O. Box 9101, 6500HB Nijmegen, The Netherlands
- Marie-Pierre Revel-Dubois
  - Department of Radiology, Université de Paris, APHP, Hôpital Cochin, 27 rue du Fg Saint Jacques, 75014 Paris, France

45
Botros M, de Boer OJ, Cardenas B, Bekkers EJ, Jansen M, van der Wel MJ, Sánchez CI, Meijer SL. Deep Learning for Histopathological Assessment of Esophageal Adenocarcinoma Precursor Lesions. Mod Pathol 2024; 37:100531. [PMID: 38830407 DOI: 10.1016/j.modpat.2024.100531] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2023] [Revised: 05/06/2024] [Accepted: 05/28/2024] [Indexed: 06/05/2024]
Abstract
Histopathological assessment of esophageal biopsies is a key part of the management of patients with Barrett esophagus (BE) but is prone to observer variability, and reliable diagnostic methods are needed. Artificial intelligence (AI) is emerging as a powerful tool for aided diagnosis but often relies on abstract test and validation sets, while real-world behavior remains unknown. In this study, we developed a 2-stage AI system for histopathological assessment of BE-related dysplasia using deep learning to enhance the efficiency and accuracy of the pathology workflow. The AI system was developed and trained on 290 whole-slide images (WSIs) that were annotated at glandular and tissue levels. The system was designed to identify individual glands, grade dysplasia, and assign a WSI-level diagnosis. The proposed method was evaluated by comparing the performance of our AI system with that of a large international and heterogeneous group of 55 gastrointestinal pathologists assessing 55 digitized biopsies spanning the complete spectrum of BE-related dysplasia. The AI system correctly graded 76.4% of the WSIs, surpassing the performance of 53 of the 55 participating pathologists. Furthermore, receiver-operating characteristic analysis showed that the system predicted the absence (nondysplastic BE) versus the presence of any dysplasia with an area under the curve of 0.94 and a sensitivity of 0.92 at a specificity of 0.94. These findings demonstrate that this AI system has the potential to assist pathologists in the assessment of BE-related dysplasia. The system's outputs could provide a reliable and consistent secondary diagnosis in challenging cases or be used for triaging low-risk nondysplastic biopsies, thereby reducing the workload of pathologists and increasing throughput.
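The reported operating point (sensitivity 0.92 at specificity 0.94) corresponds to one threshold on the system's slide-level dysplasia score; how such a point is read off can be sketched as follows, with made-up scores and labels:

```python
def sens_spec(scores, labels, threshold):
    """Sensitivity and specificity when predicting positive for score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Made-up slide scores: label 1 = any dysplasia, 0 = nondysplastic BE
scores = [0.95, 0.80, 0.55, 0.40, 0.20, 0.10]
labels = [1, 1, 1, 0, 0, 0]
print(sens_spec(scores, labels, threshold=0.5))  # (1.0, 1.0)
```

Sweeping the threshold over all scores traces out the ROC curve whose area is the reported 0.94.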
Affiliation(s)
- Michel Botros - Department of Pathology, Amsterdam University Medical Centers, Amsterdam, The Netherlands; Department of Biomedical Engineering and Physics, Amsterdam University Medical Centers, Amsterdam, The Netherlands; Quantitative Healthcare Analysis Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands; Amsterdam Machine Learning Lab, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Onno J de Boer - Department of Pathology, Amsterdam University Medical Centers, Amsterdam, The Netherlands
- Bryan Cardenas - Department of Pathology, Amsterdam University Medical Centers, Amsterdam, The Netherlands; Amsterdam Machine Learning Lab, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Erik J Bekkers - Amsterdam Machine Learning Lab, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Marnix Jansen - Research Department of Pathology, Cancer Institute, University College London, London, United Kingdom
- Myrtle J van der Wel - Department of Pathology, Amsterdam University Medical Centers, Amsterdam, The Netherlands
- Clara I Sánchez - Department of Biomedical Engineering and Physics, Amsterdam University Medical Centers, Amsterdam, The Netherlands; Quantitative Healthcare Analysis Group, Informatics Institute, University of Amsterdam, Amsterdam, The Netherlands
- Sybren L Meijer - Department of Pathology, Amsterdam University Medical Centers, Amsterdam, The Netherlands
46
Juan Ramon A, Parmar C, Carrasco-Zevallos OM, Csiszer C, Yip SSF, Raciti P, Stone NL, Triantos S, Quiroz MM, Crowley P, Batavia AS, Greshock J, Mansi T, Standish KA. Development and deployment of a histopathology-based deep learning algorithm for patient prescreening in a clinical trial. Nat Commun 2024; 15:4690. [PMID: 38824132 PMCID: PMC11144215 DOI: 10.1038/s41467-024-49153-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Accepted: 05/24/2024] [Indexed: 06/03/2024] Open
Abstract
Accurate identification of genetic alterations in tumors, such as those in the Fibroblast Growth Factor Receptor genes, is crucial for treatment with targeted therapies; however, molecular testing can delay patient care because of the time and tissue required. Successful development, validation, and deployment of an AI-based biomarker-detection algorithm could reduce screening costs and accelerate patient recruitment. Here, we develop a deep-learning algorithm using >3000 H&E-stained whole-slide images from patients with advanced urothelial cancers, optimized for high sensitivity to avoid ruling out trial-eligible patients. The algorithm is validated on a dataset of 350 patients, achieving an area under the curve of 0.75, a specificity of 31.8% at 88.7% sensitivity, and a projected 28.7% reduction in molecular testing. We successfully deploy the system in a non-interventional study comprising 89 global clinical study sites and demonstrate its potential to prioritize or deprioritize molecular-testing resources and provide substantial cost savings in drug-development and clinical settings.
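The "high sensitivity first" design described above amounts to fixing an operating point on the ROC curve before reading off specificity: choose the threshold that still retains the target fraction of biomarker-positive patients, then see what fraction of negatives falls below it and can be spared a molecular test. A minimal sketch of that threshold selection, using hypothetical scores and labels rather than the study's model or data:

```python
import math

def threshold_at_sensitivity(scores, labels, target_sens):
    """Highest score threshold that still classifies >= target_sens of positives."""
    pos = sorted((s for s, y in zip(scores, labels) if y == 1), reverse=True)
    k = math.ceil(target_sens * len(pos))  # positives that must score >= threshold
    return pos[k - 1]

def specificity_at(scores, labels, threshold):
    """Fraction of negatives below the threshold (i.e. spared further testing)."""
    neg = [s for s, y in zip(scores, labels) if y == 0]
    return sum(1 for s in neg if s < threshold) / len(neg)

# Hypothetical algorithm scores: label 1 = biomarker-altered, 0 = wild type.
scores = [0.9, 0.8, 0.7, 0.65, 0.6, 0.5, 0.4, 0.35, 0.3, 0.2]
labels = [1,   1,   0,   1,    0,   1,   0,   0,    0,   0]

t = threshold_at_sensitivity(scores, labels, 0.75)   # keep 3 of 4 positives
spared = specificity_at(scores, labels, t)           # negatives deprioritized
```

In a prescreening deployment, specificity at the chosen high-sensitivity threshold approximates the achievable reduction in molecular testing among biomarker-negative patients.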
Affiliation(s)
- Albert Juan Ramon - Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, San Diego, CA, USA
- Chaitanya Parmar - Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, San Diego, CA, USA
- Carlos Csiszer - Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Titusville, NJ, USA
- Stephen S F Yip - Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Cambridge, MA, USA
- Patricia Raciti - Janssen R&D, LLC, a Johnson & Johnson Company. Oncology, Spring House, PA, USA
- Nicole L Stone - Janssen R&D, LLC, a Johnson & Johnson Company. Oncology, Spring House, PA, USA
- Spyros Triantos - Janssen R&D, LLC, a Johnson & Johnson Company. Oncology, Spring House, PA, USA
- Michelle M Quiroz - Janssen R&D, LLC, a Johnson & Johnson Company. Oncology, Spring House, PA, USA
- Patrick Crowley - Janssen R&D, LLC, a Johnson & Johnson Company. Global Development, High Wycombe, UK
- Ashita S Batavia - Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Titusville, NJ, USA
- Joel Greshock - Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Spring House, PA, USA
- Tommaso Mansi - Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, Titusville, NJ, USA
- Kristopher A Standish - Janssen R&D, LLC, a Johnson & Johnson Company. Data Science and Digital Health, San Diego, CA, USA
47
Chen Z, Wong IHM, Dai W, Lo CTK, Wong TTW. Lung Cancer Diagnosis on Virtual Histologically Stained Tissue Using Weakly Supervised Learning. Mod Pathol 2024; 37:100487. [PMID: 38588884 DOI: 10.1016/j.modpat.2024.100487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2023] [Revised: 03/05/2024] [Accepted: 03/30/2024] [Indexed: 04/10/2024]
Abstract
Lung adenocarcinoma (LUAD) is the most common primary lung cancer and accounts for 40% of all lung cancer cases. The current gold standard for lung cancer analysis is based on the pathologist's interpretation of hematoxylin and eosin (H&E)-stained tissue slices viewed under a brightfield microscope or a digital slide scanner. Computational pathology using deep learning has been proposed to detect lung cancer on histology images. However, the histological staining workflow to acquire the H&E-stained images and the subsequent cancer diagnosis procedures are labor-intensive and time-consuming, with tedious sample-preparation steps and repetitive manual interpretation, respectively. In this work, we propose a weakly supervised learning method for LUAD classification on label-free tissue slices with virtual histological staining. The autofluorescence images of label-free tissue, which carry histopathological information, can be converted into virtual H&E-stained images by a weakly supervised deep generative model. For the downstream LUAD classification task, we trained an attention-based multiple-instance learning model with different settings on the open-source LUAD H&E-stained whole-slide image (WSI) dataset from The Cancer Genome Atlas (TCGA). The model was validated on 150 H&E-stained WSIs collected from patients in Queen Mary Hospital and Prince of Wales Hospital with an average area under the curve (AUC) of 0.961. The model also achieved an average AUC of 0.973 on 58 virtual H&E-stained WSIs, comparable to the results on 58 standard H&E-stained WSIs with an average AUC of 0.977. The attention heatmaps of virtual H&E-stained WSIs and ground-truth H&E-stained WSIs can indicate tumor regions of LUAD tissue slices. In conclusion, the proposed diagnostic workflow on virtual H&E-stained WSIs of label-free tissue is a rapid, cost-effective, and interpretable approach to assist clinicians in postoperative pathological examinations. The method could serve as a blueprint for other label-free imaging modalities and disease contexts.
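Attention-based MIL models of the kind referenced above are commonly built on attention pooling over patch embeddings (the paper's exact architecture is not specified here, so this is a generic sketch): each patch gets a learned attention weight, and the slide embedding is the weighted sum, which is also what the attention heatmaps visualize. A small pure-Python illustration with toy inputs:

```python
import math

def attention_mil_pool(patch_embeddings, V, w):
    """Plain attention-based MIL pooling: a_k = softmax_k(w . tanh(V h_k)).

    The slide embedding is the attention-weighted sum of patch embeddings;
    high-attention patches correspond to the heatmap's highlighted regions.
    """
    def matvec(M, x):
        return [sum(m_ij * x_j for m_ij, x_j in zip(row, x)) for row in M]

    # Unnormalized attention logit per patch, then a softmax over patches.
    logits = [
        sum(w_i * math.tanh(u_i) for w_i, u_i in zip(w, matvec(V, h)))
        for h in patch_embeddings
    ]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    attn = [e / sum(exps) for e in exps]

    dim = len(patch_embeddings[0])
    slide = [sum(a * h[i] for a, h in zip(attn, patch_embeddings)) for i in range(dim)]
    return attn, slide

# Toy example: 3 patches with 2-D embeddings; V and w are made-up parameters.
patch_embeddings = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[0.5, -0.5], [0.3, 0.8]]
w = [1.0, -1.0]
attn, slide_embedding = attention_mil_pool(patch_embeddings, V, w)
```

In practice V and w are trained jointly with the slide-level classifier, and the attention weights are rendered over the WSI to produce the tumor-region heatmaps described in the abstract.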
Affiliation(s)
- Zhenghui Chen - Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Ivy H M Wong - Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Weixing Dai - Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Claudia T K Lo - Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
- Terence T W Wong - Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China
48
Huang K, Liao J, He J, Lai S, Peng Y, Deng Q, Wang H, Liu Y, Peng L, Bai Z, Yu N, Li Y, Jiang Z, Su J, Li J, Tang Y, Chen M, Lu L, Chen X, Yao J, Zhao S. A real-time augmented reality system integrated with artificial intelligence for skin tumor surgery: experimental study and case series. Int J Surg 2024; 110:3294-3306. [PMID: 38549223 PMCID: PMC11175769 DOI: 10.1097/js9.0000000000001371] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2023] [Accepted: 03/11/2024] [Indexed: 06/15/2024]
Abstract
BACKGROUND Skin tumors affect many people worldwide, and surgery is the first treatment choice. Achieving precise preoperative planning and navigation of intraoperative sampling remains a problem and relies heavily on the experience of surgeons, especially for Mohs surgery for malignant tumors. MATERIALS AND METHODS To achieve precise preoperative planning and navigation of intraoperative sampling, we developed a real-time augmented reality (AR) surgical system integrated with artificial intelligence (AI) that provides three functions: AI-assisted tumor boundary segmentation, surgical margin design, and navigation in intraoperative tissue sampling. Non-randomized controlled trials were conducted on manikins, tumor-simulated rabbits, and human volunteers in the Hunan Engineering Research Center of Skin Health and Disease Laboratory to evaluate the surgical system. RESULTS The results showed that the segmentation accuracy for benign and malignant tumors was 0.9556 and 0.9548, respectively, and the average AR navigation mapping error was 0.644 mm. The proposed surgical system was applied in 106 skin tumor surgeries, including intraoperative navigation of sampling in 16 Mohs surgery cases. Surgeons who have used the system rated it highly. CONCLUSIONS The surgical system highlighted the potential to achieve accurate treatment of skin tumors and to fill the gap in global research on skin tumor surgery systems.
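The reported 0.644 mm average AR navigation mapping error is a registration-style metric. One common way such an error is quantified (the study's exact landmark protocol is an assumption here) is the mean Euclidean distance between where the AR overlay projects each landmark and its measured ground-truth position:

```python
import math

def mean_mapping_error(projected, ground_truth):
    """Mean Euclidean distance between AR-projected landmarks and their
    ground-truth positions. Both inputs are sequences of coordinate tuples
    in the same physical unit (e.g. millimeters)."""
    pairs = list(zip(projected, ground_truth))
    dists = [math.dist(p, g) for p, g in pairs]
    return sum(dists) / len(dists)

# Hypothetical 2-D landmark pairs (mm): one perfect hit, one 5 mm miss.
err = mean_mapping_error([(0.0, 0.0), (3.0, 4.0)], [(0.0, 0.0), (0.0, 0.0)])
```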
Affiliation(s)
- Kai Huang - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan; Tencent AI Lab, Shenzhen, People’s Republic of China
- Jun Liao - Tencent AI Lab, Shenzhen, People’s Republic of China
- Jishuai He - Tencent AI Lab, Shenzhen, People’s Republic of China
- Sicen Lai - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Yihao Peng - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Qian Deng - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Han Wang - Tencent AI Lab, Shenzhen, People’s Republic of China
- Yuancheng Liu - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Lanyuan Peng - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Ziqi Bai - Tencent AI Lab, Shenzhen, People’s Republic of China
- Nianzhou Yu - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Yixin Li - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Zixi Jiang - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Juan Su - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Jinmao Li - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Yan Tang - Department of Dermatology; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Mingliang Chen - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Lixia Lu - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Xiang Chen - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
- Jianhua Yao - Tencent AI Lab, Shenzhen, People’s Republic of China
- Shuang Zhao - Department of Dermatology; Hunan Key Laboratory of Skin Cancer and Psoriasis; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital; Hunan Engineering Research Center of Skin Health and Disease, Central South University; National Engineering Research Center of Personalized Diagnostic and Therapeutic Technology, Hunan
49
Bryant AK, Zamora‐Resendiz R, Dai X, Morrow D, Lin Y, Jungles KM, Rae JM, Tate A, Pearson AN, Jiang R, Fritsche L, Lawrence TS, Zou W, Schipper M, Ramnath N, Yoo S, Crivelli S, Green MD. Artificial intelligence to unlock real-world evidence in clinical oncology: A primer on recent advances. Cancer Med 2024; 13:e7253. [PMID: 38899720 PMCID: PMC11187737 DOI: 10.1002/cam4.7253] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2023] [Revised: 02/05/2024] [Accepted: 04/28/2024] [Indexed: 06/21/2024] Open
Abstract
PURPOSE Real-world evidence is crucial for understanding the diffusion of new oncologic therapies, monitoring cancer outcomes, and detecting unexpected toxicities. In practice, real-world evidence is challenging to collect rapidly and comprehensively, often requiring expensive and time-consuming manual case-finding and annotation of clinical text. In this review, we summarize recent developments in the use of artificial intelligence to collect and analyze real-world evidence in oncology. METHODS We performed a narrative review of the major current trends and recent literature on artificial intelligence applications in oncology. RESULTS Artificial intelligence (AI) approaches are increasingly used to efficiently phenotype patients and tumors at large scale. These tools may also provide novel biological insights and improve risk prediction through multimodal integration of radiographic, pathological, and genomic datasets. Custom language-processing pipelines and large language models hold great promise for clinical prediction and phenotyping. CONCLUSIONS Despite rapid advances, continued progress in computation, generalizability, interpretability, and reliability, as well as prospective validation, is needed to integrate AI approaches into routine clinical care and real-time monitoring of novel therapies.
Affiliation(s)
- Alex K. Bryant - Department of Radiation Oncology, University of Michigan School of Medicine, Ann Arbor, Michigan, USA; Department of Radiation Oncology, Veterans Affairs Ann Arbor Healthcare System, Ann Arbor, Michigan, USA
- Rafael Zamora‐Resendiz - Applied Mathematics and Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California, USA
- Xin Dai - Computational Science Initiative, Brookhaven National Laboratory, Upton, New York, USA
- Destinee Morrow - Applied Mathematics and Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California, USA
- Yuewei Lin - Computational Science Initiative, Brookhaven National Laboratory, Upton, New York, USA
- Kassidy M. Jungles - Department of Pharmacology, University of Michigan School of Medicine, Ann Arbor, Michigan, USA
- James M. Rae - Department of Pharmacology, University of Michigan School of Medicine, Ann Arbor, Michigan, USA; Department of Internal Medicine, University of Michigan School of Medicine, Ann Arbor, Michigan, USA
- Akshay Tate - Department of Radiation Oncology, University of Michigan School of Medicine, Ann Arbor, Michigan, USA
- Ashley N. Pearson - Department of Radiation Oncology, University of Michigan School of Medicine, Ann Arbor, Michigan, USA
- Ralph Jiang - Department of Radiation Oncology, University of Michigan School of Medicine, Ann Arbor, Michigan, USA; Department of Statistics, University of Michigan, Ann Arbor, Michigan, USA
- Lars Fritsche - Department of Statistics, University of Michigan, Ann Arbor, Michigan, USA
- Theodore S. Lawrence - Department of Radiation Oncology, University of Michigan School of Medicine, Ann Arbor, Michigan, USA
- Weiping Zou - Department of Statistics, University of Michigan, Ann Arbor, Michigan, USA; Center of Excellence for Cancer Immunology and Immunotherapy, University of Michigan Rogel Cancer Center, Ann Arbor, Michigan, USA; Department of Pathology, University of Michigan, Ann Arbor, Michigan, USA; Graduate Program in Immunology, University of Michigan, Ann Arbor, Michigan, USA
- Matthew Schipper - Department of Radiation Oncology, University of Michigan School of Medicine, Ann Arbor, Michigan, USA; Department of Pharmacology, University of Michigan School of Medicine, Ann Arbor, Michigan, USA
- Nithya Ramnath - Division of Hematology Oncology, Department of Medicine, University of Michigan School of Medicine, Ann Arbor, Michigan, USA; Division of Hematology Oncology, Department of Medicine, Veterans Affairs Ann Arbor Healthcare System, Ann Arbor, Michigan, USA
- Shinjae Yoo - Computational Science Initiative, Brookhaven National Laboratory, Upton, New York, USA
- Silvia Crivelli - Applied Mathematics and Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California, USA
- Michael D. Green - Department of Radiation Oncology, University of Michigan School of Medicine, Ann Arbor, Michigan, USA; Department of Radiation Oncology, Veterans Affairs Ann Arbor Healthcare System, Ann Arbor, Michigan, USA; Graduate Program in Immunology, University of Michigan, Ann Arbor, Michigan, USA; Graduate Program in Cancer Biology, University of Michigan, Ann Arbor, Michigan, USA; Department of Microbiology and Immunology, University of Michigan School of Medicine, Ann Arbor, Michigan, USA
50
Khene ZE, Kammerer-Jacquet SF, Bigot P, Rabilloud N, Albiges L, Margulis V, De Crevoisier R, Acosta O, Rioux-Leclercq N, Lotan Y, Rouprêt M, Bensalah K. Clinical Application of Digital and Computational Pathology in Renal Cell Carcinoma: A Systematic Review. Eur Urol Oncol 2024; 7:401-411. [PMID: 37925349 DOI: 10.1016/j.euo.2023.10.018] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Revised: 09/26/2023] [Accepted: 10/24/2023] [Indexed: 11/06/2023]
Abstract
CONTEXT Computational pathology is a new interdisciplinary field that combines traditional pathology with modern technologies such as digital imaging and machine learning to better understand the diagnosis, prognosis, and natural history of many diseases. OBJECTIVE To provide an overview of digital and computational pathology and its current and potential applications in renal cell carcinoma (RCC). EVIDENCE ACQUISITION A systematic review of the English-language literature was conducted using the PubMed, Web of Science, and Scopus databases in December 2022 according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines (PROSPERO ID: CRD42023389282). Risk of bias was assessed according to the Prediction Model Study Risk of Bias Assessment Tool. EVIDENCE SYNTHESIS In total, 20 articles were included in the review. All the studies used a retrospective design, and all digital pathology techniques were implemented retrospectively. The studies were classified according to their primary objective: detection, tumor characterization, and patient outcome. Regarding the transition to clinical practice, several studies showed promising potential; however, none presented a comprehensive assessment of clinical utility and implementation. Notably, there was substantial heterogeneity in both the strategies used for model building and the performance metrics reported. CONCLUSIONS This review highlights the vast potential of digital and computational pathology for the detection, classification, and assessment of oncological outcomes in RCC. Preliminary work in this field has yielded promising results, but these models have not yet reached a stage where they can be integrated into routine clinical practice. PATIENT SUMMARY Computational pathology combines traditional pathology with technologies such as digital imaging and artificial intelligence to improve the diagnosis of disease and to identify prognostic factors and new biomarkers. The number of studies exploring its potential in kidney cancer is increasing rapidly; however, despite the surge in research activity, computational pathology is not yet ready for widespread routine use.
Affiliation(s)
- Zine-Eddine Khene - Department of Urology, University of Rennes, Rennes, France; Laboratoire Traitement du Signal et de l'Image, Inserm U1099, Université de Rennes 1, Rennes, France; Department of Urology, UT Southwestern Medical Center, Dallas, TX, USA
- Solène-Florence Kammerer-Jacquet - Laboratoire Traitement du Signal et de l'Image, Inserm U1099, Université de Rennes 1, Rennes, France; Department of Pathology, University of Rennes, Rennes, France
- Pierre Bigot - Department of Urology, University of Angers, Rennes, France
- Noémie Rabilloud - Laboratoire Traitement du Signal et de l'Image, Inserm U1099, Université de Rennes 1, Rennes, France
- Laurence Albiges - Department of Medical Oncology, Gustave Roussy, Villejuif, France
- Vitaly Margulis - Department of Urology, UT Southwestern Medical Center, Dallas, TX, USA
- Oscar Acosta - Laboratoire Traitement du Signal et de l'Image, Inserm U1099, Université de Rennes 1, Rennes, France
- Yair Lotan - Department of Urology, UT Southwestern Medical Center, Dallas, TX, USA
- Morgan Rouprêt - Department of Urology, La Pitie Salpétrière Hospital, Paris, France
- Karim Bensalah - Department of Urology, University of Rennes, Rennes, France