1
Aravazhi PS, Gunasekaran P, Benjamin NZY, Thai A, Chandrasekar KK, Kolanu ND, Prajjwal P, Tekuru Y, Brito LV, Inban P. The integration of artificial intelligence into clinical medicine: Trends, challenges, and future directions. Dis Mon 2025; 71:101882. [PMID: 40140300] [DOI: 10.1016/j.disamonth.2025.101882]
Abstract
BACKGROUND AND OBJECTIVES AI has emerged as a transformative force in clinical medicine, changing the diagnosis, treatment, and management of patients. Tools built on machine learning (ML), deep learning (DL), and natural language processing (NLP) algorithms can analyze large, complex medical datasets with unprecedented accuracy and speed, thereby improving diagnostic precision, treatment personalization, and patient care outcomes. For example, convolutional neural networks (CNNs) have dramatically improved the accuracy of medical imaging diagnoses, and NLP algorithms have greatly helped extract insights from unstructured data, including electronic health records (EHRs). However, numerous challenges still hinder the integration of AI into clinical workflows, including data privacy, algorithmic bias, ethical dilemmas, and the limited interpretability of "black-box" AI models. These barriers have thus far prevented the widespread application of AI in health care, so its trends, obstacles, and future implications need to be explored systematically. The purpose of this paper is, therefore, to assess current trends in AI applications in clinical medicine, identify the obstacles hindering adoption, and outline possible future directions. It synthesizes evidence from peer-reviewed articles to provide a comprehensive understanding of the role AI plays in advancing clinical practice, improving patient outcomes, and enhancing decision-making. METHODS A systematic review was conducted according to the PRISMA guidelines to explore the integration of artificial intelligence in clinical medicine, including trends, challenges, and future directions. PubMed, Cochrane Library, Web of Science, and Scopus databases were searched for peer-reviewed articles from 2014 to 2024 with keywords such as "Artificial Intelligence in Medicine," "AI in Clinical Practice," "Machine Learning in Healthcare," and "Ethical Implications of AI in Medicine." Studies focusing on AI applications in diagnostics, treatment planning, and patient care that reported measurable clinical outcomes were included. Non-clinical AI applications and articles published before 2014 were excluded. Selected studies were screened for relevance, and their quality was critically appraised to synthesize data reliably and rigorously. RESULTS This systematic review includes the findings of 8 studies that highlight the transformational role of AI in clinical medicine. AI tools such as CNNs achieved higher diagnostic accuracy than traditional methods, particularly in radiology and pathology. Predictive models efficiently supported risk stratification, early disease detection, and personalized medicine. Despite these improvements, significant hurdles, including data privacy, algorithmic bias, and resistance from clinicians regarding the "black-box" nature of AI, have yet to be surmounted. Explainable AI (XAI) has emerged as an attractive solution that promises to enhance interpretability and trust. Overall, AI appears promising for enhancing diagnostics, treatment personalization, and clinical workflows by addressing systemic inefficiencies. CONCLUSION AI has the potential to transform diagnostics, treatment strategies, and efficiency in clinical medicine. Overcoming obstacles such as data privacy concerns, algorithmic bias, and limited interpretability may pave the way for broader adoption, improve patient outcomes, and transform clinical workflows toward more sustainable healthcare delivery.
Affiliation(s)
- Andy Thai
- Internal Medicine, Alameda Health System, Highland Hospital, Oakland, USA
- Yogesh Tekuru
- RVM Institute of Medical Sciences and Research Center, Laxmakkapally, India
- Pugazhendi Inban
- Internal Medicine, St. Mary's General Hospital and Saint Clare's Health, NY, USA.
2
Højlund SA, Mandrup JB, Nielsen PS, Georgsen JB, Steiniche T. Automated annotation of virtual dual stains to generate convolutional neural network for detecting cancer metastases in H&E-stained lymph nodes. Pathol Res Pract 2025; 270:155977. [PMID: 40300522] [DOI: 10.1016/j.prp.2025.155977]
Abstract
CONTEXT Staging cancer patients is crucial and requires analyzing all removed lymph nodes microscopically for metastasis. For this pivotal task, convolutional neural networks (CNNs) can reduce workload and improve diagnostic accuracy. OBJECTIVE This study aimed to develop a CNN for detecting lymph node metastases (LNM) in colorectal cancer (CRC) and head and neck cancer (HNC) patients while also demonstrating how routine pathology departments can build tailored AI models without the need for large datasets or extensive manual annotations. DESIGN From 40 CRC and 40 HNC patients with LNM, we scanned 40 hematoxylin and eosin-stained (H&E) slides with and 40 slides without LNM. The same slides were re-stained with immunohistochemistry for pan-cytokeratin and re-scanned. The two corresponding whole slide images were aligned digitally before the metastatic areas were annotated by a color threshold-based algorithm on the immunohistochemistry slide. These annotations were digitally transferred onto the H&E whole slide images, which served as the CNN training cohort. The two developed CNNs were tested on 388 lymph nodes from 20 CRC and 138 lymph nodes from 20 HNC patients. RESULTS The areas under the ROC curve were 0.9968 [95% CI, 0.9925-0.9996] for CRC and 0.9485 [95% CI, 0.8938-0.9888] for HNC patients, reflecting high sensitivity and specificity for both CNNs. The results are comparable to studies based on huge datasets or exhaustively manually annotated whole slide images. CONCLUSIONS Our study showed that it is possible to develop a high-performing CNN without requiring huge datasets or time-consuming manual annotations.
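A minimal sketch (not the authors' code) of how a per-node area under the ROC curve with a bootstrap 95% confidence interval, like the values reported above, can be computed; the labels and CNN scores below are synthetic placeholders.

```python
# Illustrative only: per-lymph-node AUC with a non-parametric bootstrap 95% CI.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical per-node ground truth (1 = metastasis) and CNN tumor probabilities.
y_true = rng.integers(0, 2, size=388)
y_score = np.clip(y_true * 0.8 + rng.normal(0.1, 0.2, size=388), 0, 1)

auc = roc_auc_score(y_true, y_score)

boot_aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) < 2:  # AUC needs both classes in the resample
        continue
    boot_aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
ci_low, ci_high = np.percentile(boot_aucs, [2.5, 97.5])

print(f"AUC = {auc:.4f} [95% CI, {ci_low:.4f}-{ci_high:.4f}]")
```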
Affiliation(s)
- Patricia Switten Nielsen
- Department of Pathology, Aarhus University Hospital, Aarhus N 8200, Denmark; Department of Clinical Medicine, Aarhus University, Aarhus N 8200, Denmark.
- Torben Steiniche
- Department of Pathology, Aarhus University Hospital, Aarhus N 8200, Denmark; Department of Clinical Medicine, Aarhus University, Aarhus N 8200, Denmark.
3
Lu K, Lin S, Xue K, Huang D, Ji Y. Optimized multiple instance learning for brain tumor classification using weakly supervised contrastive learning. Comput Biol Med 2025; 191:110075. [PMID: 40220594] [DOI: 10.1016/j.compbiomed.2025.110075]
Abstract
Brain tumors have a great impact on patients' quality of life, and accurate histopathological classification of brain tumors is crucial for prognosis. Multi-instance learning (MIL) has become the mainstream method for analyzing whole-slide images (WSIs). However, current MIL-based methods face several issues, including significant redundancy in the input and feature space, insufficient modeling of spatial relations between patches, and inadequate representation capability of the feature extractor. To address these limitations, we propose a new multiple instance learning framework with weakly supervised contrastive learning for brain tumor classification. Our framework consists of two parts: a cross-detection MIL aggregator (CDMIL) for brain tumor classification and a contrastive learning model based on pseudo-labels (PSCL) for optimizing the feature encoder. The CDMIL consists of three modules: an internal patch anchoring module (IPAM), a local structural learning module (LSLM), and a cross-detection module (CDM). Specifically, IPAM utilizes probability distributions to generate representations of anchor samples, while LSLM extracts representations of local structural information between anchor samples. These two representations are effectively fused in CDM. Additionally, we propose a bag-level contrastive loss to contrast different subtypes in the feature space. PSCL uses the samples and pseudo-labels anchored by IPAM to optimize the performance of the feature encoder, which in turn extracts better feature representations for training CDMIL. We performed benchmark tests on a self-collected dataset and a publicly available dataset. The experiments show that our method outperforms several existing state-of-the-art methods.
Affiliation(s)
- Kaoyan Lu
- Key Laboratory of Atomic and Subatomic Structure and Quantum Control (Ministry of Education), Guangdong Basic Research Center of Excellence for Structure and Fundamental Interactions of Matter, School of Physics, South China Normal University, 378 Waihuan West Road, Panyu District, Guangzhou, 510006, Guangdong Province, China; Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, Guangdong-Hong Kong Joint Laboratory of Quantum Matter, South China Normal University, 378 Waihuan West Road, Panyu District, 510006, Guangdong Province, Guangzhou, China
- Shiyu Lin
- Key Laboratory of Atomic and Subatomic Structure and Quantum Control (Ministry of Education), Guangdong Basic Research Center of Excellence for Structure and Fundamental Interactions of Matter, School of Physics, South China Normal University, 378 Waihuan West Road, Panyu District, Guangzhou, 510006, Guangdong Province, China; Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, Guangdong-Hong Kong Joint Laboratory of Quantum Matter, South China Normal University, 378 Waihuan West Road, Panyu District, 510006, Guangdong Province, Guangzhou, China
- Kaiwen Xue
- School of Cyberspace Security, Beijing University of Posts and Telecommunications, 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Duoxi Huang
- The Third Affiliated Hospital of Southern Medical University, 183 Zhongshan Avenue West, Tianhe District, Guangzhou, 510630, Guangdong Province, China.
- Yanghong Ji
- Key Laboratory of Atomic and Subatomic Structure and Quantum Control (Ministry of Education), Guangdong Basic Research Center of Excellence for Structure and Fundamental Interactions of Matter, School of Physics, South China Normal University, 378 Waihuan West Road, Panyu District, Guangzhou, 510006, Guangdong Province, China; Guangdong Provincial Key Laboratory of Quantum Engineering and Quantum Materials, Guangdong-Hong Kong Joint Laboratory of Quantum Matter, South China Normal University, 378 Waihuan West Road, Panyu District, 510006, Guangdong Province, Guangzhou, China.
4
Duan M, Qu L, Yang Z, Wang M, Zhang C, Song Z. An efficient dual-branch framework via implicit self-texture enhancement for arbitrary-scale histopathology image super-resolution. Sci Rep 2025; 15:18883. [PMID: 40442141] [PMCID: PMC12122856] [DOI: 10.1038/s41598-025-02503-z]
Abstract
High-quality whole-slide scanning is expensive, complex, and time-consuming, thus limiting the acquisition and utilization of high-resolution histopathology images in daily clinical work. Deep learning-based single-image super-resolution (SISR) techniques provide an effective way to solve this problem. However, existing SISR models applied to histopathology images can only operate at fixed integer scaling factors, which limits their applicability. Although methods based on implicit neural representation (INR) have shown promising results in arbitrary-scale super-resolution (SR) of natural images, applying them directly to histopathology images is inadequate because histopathology images have unique fine-grained textures that differ from natural images. Thus, we propose an Implicit Self-Texture Enhancement-based dual-branch framework (ISTE) for arbitrary-scale SR of histopathology images to address this challenge. The proposed ISTE contains a feature aggregation branch and a texture learning branch. We employ the feature aggregation branch to enhance the learning of local details for SR images while utilizing the texture learning branch to enhance the learning of high-frequency texture details. Then, we design a two-stage texture enhancement strategy to fuse the features from the two branches to obtain the SR images. Experiments on publicly available datasets, including the TMA, HistoSR, and TCGA lung cancer datasets, demonstrate that ISTE outperforms existing fixed-scale and arbitrary-scale SR algorithms across various scaling factors. Additionally, extensive experiments show that the histopathology images reconstructed by ISTE are applicable to downstream pathology image analysis tasks.
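For readers unfamiliar with implicit neural representation, the sketch below shows the general idea behind INR-style arbitrary-scale upsampling: a small CNN encodes the low-resolution input into a feature map, and an MLP predicts RGB values at arbitrary continuous coordinates. This is a generic illustration under assumed layer sizes, not the ISTE architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitSR(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
        )
        # MLP maps (sampled feature, continuous xy coordinate) -> RGB.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, lr_img, coords):
        # lr_img: (B, 3, h, w); coords: (B, N, 2) query positions in [-1, 1].
        feat = self.encoder(lr_img)                           # (B, C, h, w)
        grid = coords.unsqueeze(1)                            # (B, 1, N, 2)
        sampled = F.grid_sample(feat, grid, mode='bilinear',
                                align_corners=False)          # (B, C, 1, N)
        sampled = sampled.squeeze(2).permute(0, 2, 1)         # (B, N, C)
        return self.mlp(torch.cat([sampled, coords], dim=-1))  # (B, N, 3)

def make_coords(height, width):
    # Continuous query grid at any output resolution (x, y order for grid_sample).
    ys = torch.linspace(-1, 1, height)
    xs = torch.linspace(-1, 1, width)
    gy, gx = torch.meshgrid(ys, xs, indexing='ij')
    return torch.stack([gx, gy], dim=-1).reshape(1, -1, 2)

model = ImplicitSR()
lr_patch = torch.rand(1, 3, 32, 32)
coords = make_coords(56, 56)             # any non-integer scale, here 1.75x
sr_pixels = model(lr_patch, coords)      # (1, 56*56, 3), reshape to an image as needed
print(sr_pixels.shape)
```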
Affiliation(s)
- Minghong Duan
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Linhao Qu
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Zhiwei Yang
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Academy for Engineering and Technology, Fudan University, Shanghai, 200433, China
- Manning Wang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China
- Chenxi Zhang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China.
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China.
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China.
- Shanghai Key Laboratory of Medical Image Computing and Computer Assisted Intervention, Shanghai, 200032, China.
5
Kenig N, Monton Echeverria J, Muntaner Vives A. Evaluating Surgical Results in Breast Cancer with Artificial Intelligence. Aesthetic Plast Surg 2025. [PMID: 40425883] [DOI: 10.1007/s00266-025-04915-8]
Abstract
INTRODUCTION Artificial intelligence (AI) is rapidly transforming healthcare, with increasing applications in surgical evaluation. In breast cancer surgery, achieving aesthetic symmetry is essential for patient satisfaction and emotional well-being. While human evaluation remains fundamental, AI-driven symmetry assessment promises an objective alternative. This study evaluates the performance of publicly available AI models in breast symmetry assessment and compares them with Pyolo8, a custom AI model developed by the authors. Additionally, the study explores the potential emotional impact and ethical considerations of AI-generated assessments in postoperative breast cancer patients. METHODS Sixty-eight patients who underwent breast reconstruction were evaluated with publicly available AI models and with the authors' custom model, Pyolo8. All results were evaluated by human observers. RESULTS The ChatGPT-4o and Pyolo8 models showed a statistically significant, moderate to strong positive correlation with human observers for postoperative assessment. Direct interaction between AI models and patients was censored due to concerns of misinterpretation. CONCLUSIONS Both ChatGPT and Pyolo8 showed moderate to strong correlation with human observers, but ChatGPT demonstrated superior communication skills. However, AI systems may lack the subtlety and empathy required for direct patient interactions, as vulnerable postoperative patients receiving an AI-generated symmetry assessment without appropriate clinical context may experience emotional distress or misinterpret the results. Human oversight and empathetic communication remain essential to ensure quality care as AI is increasingly integrated into medicine. LEVEL OF EVIDENCE IV This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
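A hedged illustration of the kind of correlation analysis summarized above (AI symmetry scores versus human observer ratings); the scores are hypothetical, and the choice of Pearson and Spearman correlation is an assumption rather than taken from the paper.

```python
# Toy comparison of AI-generated and human symmetry ratings for the same cases.
from scipy.stats import pearsonr, spearmanr

human_scores = [3, 4, 2, 5, 4, 3, 5, 2, 4, 3]   # e.g. 1-5 symmetry ratings
ai_scores    = [3, 5, 2, 4, 4, 3, 5, 3, 4, 2]   # model outputs for the same cases

r, p_r = pearsonr(human_scores, ai_scores)
rho, p_rho = spearmanr(human_scores, ai_scores)
print(f"Pearson r = {r:.2f} (p = {p_r:.3f}); Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```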
Affiliation(s)
- Nitzan Kenig
- Department of Plastic Surgery, Quironsalud Palmaplanas, Cami dels Reis 308, 03010, Palma, Balearic Islands, Spain.
- University of Castilla-La Mancha, Albacete, Spain.
- Javier Monton Echeverria
- University of Castilla-La Mancha, Albacete, Spain
- Department of Plastic Surgery, Albacete University Hospital, Albacete, Spain
6
Angeloni M, Rizzi D, Schoen S, Caputo A, Merolla F, Hartmann A, Ferrazzi F, Fraggetta F. Closing the gap in the clinical adoption of computational pathology: a standardized, open-source framework to integrate deep-learning models into the laboratory information system. Genome Med 2025; 17:60. [PMID: 40420213] [DOI: 10.1186/s13073-025-01484-y]
Abstract
BACKGROUND Digital pathology (DP) has revolutionized cancer diagnostics and enabled the development of deep-learning (DL) models aimed at supporting pathologists in their daily work and improving patient care. However, the clinical adoption of such models remains challenging. Here, we describe a proof-of-concept framework that, leveraging the Health Level 7 (HL7) standard and open-source DP resources, allows a seamless integration of both publicly available and custom-developed DL models into the clinical workflow. METHODS Development and testing of the framework were carried out in a fully digitized Italian pathology department. A Python-based server-client architecture was implemented to interconnect, through HL7 messaging, the anatomic pathology laboratory information system (AP-LIS) with an external artificial intelligence-based decision support system (AI-DSS) containing 16 pre-trained DL models. Open-source toolboxes for DL model deployment were used to run DL model inference, and QuPath was used to provide an intuitive visualization of model predictions as colored heatmaps. RESULTS A default deployment mode runs continuously in the background as each new slide is digitized, choosing the correct DL model(s) on the basis of the tissue type and staining. In addition, pathologists can initiate the analysis on demand by selecting a specific DL model from the virtual slide tray. In both cases, the AP-LIS transmits an HL7 message to the AI-DSS, which processes the message, runs DL model inference, and creates the appropriate visualization style for the employed classification model. The AI-DSS transmits the model inference results to the AP-LIS, where pathologists can visualize the output in QuPath and/or directly as the slide description in the virtual slide tray. CONCLUSIONS Taken together, the developed integration framework, built on the HL7 standard and freely available DP resources, offers a standardized, portable, and open-source solution that lays the groundwork for the future widespread adoption of DL models in pathology diagnostics.
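A conceptual sketch of the HL7-based hand-off described above: the AP-LIS announces a newly digitized slide to the AI decision-support service as a pipe-delimited HL7-style message, which the service parses to pick a model by tissue and stain. The segment and field choices are illustrative assumptions, not the authors' message profile.

```python
# Conceptual only: HL7-v2-style message assembly and parsing with plain string handling.
def build_oru_message(case_id: str, slide_id: str, stain: str, tissue: str) -> str:
    """Assemble a minimal HL7-like ORU message announcing a digitized slide."""
    segments = [
        "MSH|^~\\&|AP-LIS|PATHOLOGY|AI-DSS|LAB|20250101120000||ORU^R01|MSG0001|P|2.5",
        f"OBR|1|{case_id}||{slide_id}^WholeSlideImage",
        f"OBX|1|ST|STAIN||{stain}",
        f"OBX|2|ST|TISSUE||{tissue}",
    ]
    return "\r".join(segments)

def parse_segments(message: str) -> dict:
    """Split an HL7-like message into {segment_name: [field lists, ...]}."""
    parsed = {}
    for segment in message.split("\r"):
        fields = segment.split("|")
        parsed.setdefault(fields[0], []).append(fields[1:])
    return parsed

msg = build_oru_message("CASE-0042", "SLIDE-0042-A1", "H&E", "colon")
inbound = parse_segments(msg)
# The AI-DSS side could now select a DL model based on tissue type and staining:
stain = inbound["OBX"][0][4]
tissue = inbound["OBX"][1][4]
print(f"Run model for tissue={tissue}, stain={stain}")
```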
Affiliation(s)
- Miriam Angeloni
- Institute of Pathology, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Bavarian Cancer Research Center (BZKF), Erlangen, Germany
- Simon Schoen
- Institute of Pathology, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Alessandro Caputo
- Department of Pathology, University Hospital of Salerno, Salerno, Italy
- Department of Medicine and Surgery, University of Salerno, Salerno, Italy
- Francesco Merolla
- Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy
- Arndt Hartmann
- Institute of Pathology, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany
- Bavarian Cancer Research Center (BZKF), Erlangen, Germany
- Fulvia Ferrazzi
- Institute of Pathology, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Erlangen, Germany.
- Comprehensive Cancer Center Erlangen-EMN (CCC ER-EMN), Erlangen, Germany.
- Bavarian Cancer Research Center (BZKF), Erlangen, Germany.
- Department of Nephropathology, Institute of Pathology, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Krankenhausstr. 8-10, Erlangen, 91054, Germany.
- Filippo Fraggetta
- Unit of Pathology, Gravina Hospital, Via Portosalvo 1, Caltagirone, 95041, Italy.
7
Nambiar R, Bhat R, Achar H V B. Advancements in Hematologic Malignancy Detection: A Comprehensive Survey of Methodologies and Emerging Trends. ScientificWorldJournal 2025; 2025:1671766. [PMID: 40421320] [PMCID: PMC12103971] [DOI: 10.1155/tswj/1671766]
Abstract
The investigation and diagnosis of hematologic malignancy using blood cell image analysis are major and emerging subjects at the intersection of artificial intelligence and medical research. This survey systematically examines the state of the art in blood cancer detection through image-based analysis, aiming to identify the most effective computational strategies and highlight emerging trends. The review pursues three principal objectives: to categorize and compare traditional machine learning (ML), deep learning (DL), and hybrid learning approaches; to evaluate performance metrics such as accuracy, precision, recall, and area under the ROC curve; and to identify methodological gaps and propose directions for future research. Methodologically, we organize the literature by categorizing the malignancy types (leukemia, lymphoma, and multiple myeloma) and detailing the preprocessing steps, feature extraction techniques, network architectures, and ensemble strategies employed. For ML methods, we discuss classical classifiers including support vector machines and random forests; for DL, we analyze convolutional neural networks (e.g., AlexNet, VGG, and ResNet) and transformer-based models; and for hybrid systems, we examine combinations of CNNs with attention mechanisms or traditional classifiers. Our synthesis reveals that DL models consistently outperform ML baselines, achieving classification accuracies above 95% in benchmark datasets, with hybrid models pushing peak accuracy to 99.7%. However, challenges remain in data scarcity, class imbalance, and generalizability to clinical settings. We conclude by recommending the integration of multimodal data, semisupervised learning, and rigorous external validation to advance toward deployable diagnostic tools. This survey also provides a comprehensive roadmap for researchers and clinicians striving to harness AI for reliable hematologic cancer detection.
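As a concrete illustration of the classical ML baselines the survey compares (support vector machines and random forests) and the metrics it tracks, the following sketch evaluates both classifiers on a synthetic stand-in dataset; real studies would use features extracted from blood cell images.

```python
# Illustrative baseline comparison on synthetic data, not a reproduction of any surveyed study.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("SVM", SVC(probability=True, random_state=0)),
                  ("Random forest", RandomForestClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    proba = clf.predict_proba(X_te)[:, 1]
    pred = clf.predict(X_te)
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f} "
          f"rec={recall_score(y_te, pred):.3f} "
          f"auc={roc_auc_score(y_te, proba):.3f}")
```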
Affiliation(s)
- Rajashree Nambiar
- Department of Robotics and AI Engineering, NMAM Institute of Technology, NITTE (Deemed to be University), Nitte, India
- Ranjith Bhat
- Department of Robotics and AI Engineering, NMAM Institute of Technology, NITTE (Deemed to be University), Nitte, India
- Balachandra Achar H V
- Department of Electronics and Communication Engineering, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal, India
8
Cai L, Huang S, Zhang Y, Lu J, Zhang Y. AttriMIL: Revisiting attention-based multiple instance learning for whole-slide pathological image classification from a perspective of instance attributes. Med Image Anal 2025; 103:103631. [PMID: 40381256] [DOI: 10.1016/j.media.2025.103631]
Abstract
Multiple instance learning (MIL) is a powerful approach for whole-slide pathological image (WSI) analysis, particularly suited for processing gigapixel-resolution images with slide-level labels. Recent attention-based MIL architectures have significantly advanced weakly supervised WSI classification, facilitating both clinical diagnosis and localization of disease-positive regions. However, these methods often face challenges in differentiating between instances, leading to tissue misidentification and a potential degradation in classification performance. To address these limitations, we propose AttriMIL, an attribute-aware multiple instance learning framework. By dissecting the computational flow of attention-based MIL models, we introduce a multi-branch attribute scoring mechanism that quantifies the pathological attributes of individual instances. Leveraging these quantified attributes, we further establish region-wise and slide-wise attribute constraints to dynamically model instance correlations both within and across slides during training. These constraints encourage the network to capture intrinsic spatial patterns and semantic similarities between image patches, thereby enhancing its ability to distinguish subtle tissue variations and sensitivity to challenging instances. To fully exploit the two constraints, we further develop a pathology adaptive learning technique to optimize pre-trained feature extractors, enabling the model to efficiently gather task-specific features. Extensive experiments on five public datasets demonstrate that AttriMIL consistently outperforms state-of-the-art methods across various dimensions, including bag classification accuracy, generalization ability, and disease-positive region localization. The implementation code is available at https://github.com/MedCAI/AttriMIL.
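The following is a minimal gated-attention MIL pooling head of the kind this work revisits: patch features from one whole-slide image are weighted by learned attention and pooled into a bag representation for slide-level classification. It is a generic sketch with assumed dimensions, not the AttriMIL architecture.

```python
import torch
import torch.nn as nn

class GatedAttentionMIL(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=2):
        super().__init__()
        self.attn_v = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Tanh())
        self.attn_u = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Sigmoid())
        self.attn_w = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, instance_feats):
        # instance_feats: (num_patches, feat_dim) for one whole-slide image (bag).
        scores = self.attn_w(self.attn_v(instance_feats) * self.attn_u(instance_feats))
        weights = torch.softmax(scores, dim=0)             # (num_patches, 1)
        bag_feat = (weights * instance_feats).sum(dim=0)   # attention-weighted pooling
        return self.classifier(bag_feat), weights

bag = torch.randn(1000, 512)            # e.g. 1000 patch embeddings from one WSI
logits, attn = GatedAttentionMIL()(bag)
print(logits.shape, attn.shape)          # slide-level logits and per-patch attention
```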
Affiliation(s)
- Linghan Cai
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China.
- Shenjin Huang
- Faculty of Computing, Harbin Institute of Technology, Harbin, 150001, China
- Ye Zhang
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Jinpeng Lu
- School of Science, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China
- Yongbing Zhang
- School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen), Shenzhen, 518055, China.
9
Badve S, Kumar GL, Lang T, Peigin E, Pratt J, Anders R, Chatterjee D, Gonzalez RS, Graham RP, Krasinskas AM, Liu X, Quaas A, Saxena R, Setia N, Tang L, Wang HL, Rüschoff J, Schildhaus HU, Daifalla K, Päpper M, Frey P, Faber F, Karasarides M. Augmented reality microscopy to bridge trust between AI and pathologists. NPJ Precis Oncol 2025; 9:139. [PMID: 40355526] [PMCID: PMC12069518] [DOI: 10.1038/s41698-025-00899-5]
Abstract
Diagnostic certainty is the cornerstone of modern medicine and critical for maximal treatment benefit. When evaluating biomarker expression by immunohistochemistry (IHC), however, pathologists are hindered by complex scoring methodologies, unique positivity cut-offs, and subjective staining interpretation. Artificial intelligence (AI) can potentially eliminate diagnostic uncertainty, especially when AI "trustworthiness" is proven by expert pathologists in the context of real-world clinical practice. Building on an IHC foundation model, we employed pathologist-in-the-loop fine-tuning to produce a programmed cell death ligand 1 (PD-L1) CPS AI Model. We devised a multi-head augmented reality microscope (ARM) system overlaid with the PD-L1 CPS AI Model to assess interobserver variability and gauge the pathologists' trust in AI model outputs. Using difficult-to-interpret regions on gastroesophageal biopsies, we show that AI assistance improved case agreement between any 2 pathologists by 14% (from 77% to 91% of cases) and among 11 pathologists by 26% (from 43% to 69%). At a clinical cutoff of PD-L1 CPS ≥ 5, the number of cases diagnosed as positive by all 11 pathologists increased by 31%. Our findings underscore the benefits of fully engaging pathologists as active participants in the development and deployment of IHC AI models and frame the roadmap for trustworthy AI as a bridge to increased adoption in routine pathology practice.
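A back-of-the-envelope sketch of the agreement statistics described above: the fraction of cases on which any two readers assign the same PD-L1 CPS category, and the fraction on which all readers agree. The score matrix is hypothetical.

```python
# Toy interobserver agreement computation; real data would have 11 readers and more cases.
from itertools import combinations

import numpy as np

# rows = pathologists, columns = cases; values = CPS category (e.g. 0 = <5, 1 = >=5)
scores = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 0, 1, 0, 0, 1],
    [1, 0, 1, 1, 0, 1],
])

pairwise = [np.mean(scores[i] == scores[j])
            for i, j in combinations(range(len(scores)), 2)]
all_agree = np.mean((scores == scores[0]).all(axis=0))

print(f"mean pairwise agreement = {np.mean(pairwise):.2f}")
print(f"all-reader agreement    = {all_agree:.2f}")
```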
Affiliation(s)
- Sunil Badve
- Emory University School of Medicine, Atlanta, GA, USA.
- Robert Anders
- Johns Hopkins University Baltimore, Baltimore, MD, USA
- Xiuli Liu
- Washington University School of Medicine, St Louis, MO, USA
- Romil Saxena
- Emory University School of Medicine, Atlanta, GA, USA
- Laura Tang
- Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Hanlin L Wang
- UCLA David Geffen School of Medicine, Los Angeles, CA, USA
- Josef Rüschoff
- Discovery Life Sciences Biomarker Services GmbH, Kassel, Germany
10
Ren M, Huang M, Zhang Y, Zhang Z, Ren M. Enhanced hierarchical attention mechanism for mixed MIL in automatic Gleason grading and scoring. Sci Rep 2025; 15:15980. [PMID: 40341520] [PMCID: PMC12062252] [DOI: 10.1038/s41598-025-00048-9]
Abstract
Segmenting histological images and analyzing relevant regions are crucial for supporting pathologists in diagnosing various diseases. In prostate cancer diagnosis, Gleason grading and scoring rely on the recognition of different patterns in tissue samples. However, annotating large histological datasets is laborious and expensive, and annotations are often limited to slide-level labels or sparse instance-level labels. To address this, we propose an enhanced hierarchical attention mechanism within a mixed multiple instance learning (MIL) model that effectively integrates slide-level and instance-level labels. Our hierarchical attention mechanism dynamically suppresses noisy instance-level labels while adaptively amplifying discriminative features, achieving a synergistic integration of global slide-level context and local superpixel patterns. This design significantly improves label utilization efficiency, leading to state-of-the-art performance in Gleason grading. Experimental results on the SICAPv2 and TMA datasets demonstrate the superior performance of our model, achieving AUC scores of 0.9597 and 0.8889, respectively. Our work not only advances the state of the art in Gleason grading but also highlights the potential of hierarchical attention mechanisms in mixed MIL models for medical image analysis.
Affiliation(s)
- Meili Ren
- Hainan Provincial Key Laboratory of Big Data and Smart Service, Hainan University, Haikou, 570228, China.
- Center of Network and Information Education Technology, Shanxi University of Finance and Economics, Taiyuan, 030006, China.
- Mengxing Huang
- Hainan Provincial Key Laboratory of Big Data and Smart Service, Hainan University, Haikou, 570228, China
- Yu Zhang
- Hainan Provincial Key Laboratory of Big Data and Smart Service, Hainan University, Haikou, 570228, China
- Zhijun Zhang
- Center of Network and Information Education Technology, Shanxi University of Finance and Economics, Taiyuan, 030006, China
- Meiyan Ren
- School of Medical, Shanxi Datong University, Datong, 037009, China
11
Chadha S, Mukherjee S, Sanyal S. Advancements and implications of artificial intelligence for early detection, diagnosis and tailored treatment of cancer. Semin Oncol 2025; 52:152349. [PMID: 40345002] [DOI: 10.1016/j.seminoncol.2025.152349]
Abstract
The complexity and heterogeneity of cancer make early detection and effective treatment crucial to enhancing patient survival and quality of life. The intrinsic creative ability of artificial intelligence (AI) offers improvements in patient screening, diagnosis, and individualized care. Advanced technologies, like computer vision, machine learning, deep learning, and natural language processing, can analyze large datasets and identify patterns that permit early cancer detection, diagnosis, and management and the incorporation of conclusive treatment plans, ensuring improved quality of life for patients by personalizing care and minimizing unnecessary interventions. Genomics, transcriptomics, and proteomics data can be combined with AI algorithms to provide an extensive overview of cancer biology, assisting in its detailed understanding and helping to identify new drug targets and develop effective therapies. This can also help to identify personalized molecular signatures, facilitating tailored interventions that address the unique aspects of each patient. AI-driven transcriptomics, proteomics, and genomics represent a revolutionary strategy to improve patient outcomes by offering precise diagnosis and tailored therapy. The inclusion of AI in oncology may boost efficiency, reduce errors, and save costs, but it cannot take the role of medical professionals. Clinicians and doctors have the final say in all matters, while AI might serve as their faithful assistant.
Affiliation(s)
- Sonia Chadha
- Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Lucknow, Uttar Pradesh, India.
- Sayali Mukherjee
- Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Lucknow, Uttar Pradesh, India
- Somali Sanyal
- Amity Institute of Biotechnology, Amity University Uttar Pradesh, Lucknow Campus, Lucknow, Uttar Pradesh, India
12
Park D, Lee YM, Eo T, An HJ, Kang H, Park E, Cha YJ, Park H, Kwon D, Kwon SY, Jung HR, Shin SJ, Park H, Lee Y, Park S, Kim JM, Choi SE, Cho NH, Hwang D. Multimodal AI model for preoperative prediction of axillary lymph node metastasis in breast cancer using whole slide images. NPJ Precis Oncol 2025; 9:131. [PMID: 40328953] [PMCID: PMC12056209] [DOI: 10.1038/s41698-025-00914-9]
Abstract
In breast cancer management, predicting axillary lymph node (ALN) metastasis using whole-slide images (WSIs) of primary tumor biopsies is a challenging and underexplored task for pathologists. We developed METACANS, a multimodal artificial intelligence (AI) model that integrates WSIs with clinicopathological features to predict ALN metastasis. METACANS was trained on 1991 cases and externally validated across five cohorts with a total of 2166 cases. Across all validation cohorts, METACANS achieved an area under the curve (AUC) of 0.733 (95% CI, 0.711-0.755), with an overall negative predictive value of 0.846, sensitivity of 0.820, specificity of 0.504, and balanced accuracy of 0.662. Without additional annotations, METACANS identified pathological imaging patterns linked to metastatic status, such as micropapillary growth, infiltrative patterns, and necrosis. While its predictive performance may not yet support immediate clinical application, METACANS addresses the task of predicting ALN metastasis using WSIs and clinicopathological features and demonstrates the feasibility of multimodal AI approaches for preoperative axillary staging in breast cancer.
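A short sketch of how the reported evaluation metrics (sensitivity, specificity, negative predictive value, balanced accuracy) follow from a binary confusion matrix; the labels and predictions below are synthetic placeholders.

```python
# Metric definitions illustrated on toy binary ALN-metastasis predictions.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0, 0, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
npv = tn / (tn + fn)
balanced_accuracy = (sensitivity + specificity) / 2

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
      f"NPV={npv:.3f} balanced accuracy={balanced_accuracy:.3f}")
```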
Grants
- HI21C0977 Ministry of Health and Welfare (Ministry of Health, Welfare and Family Affairs)
- 2021R1C1C2008773 National Research Foundation of Korea (NRF)
Affiliation(s)
- Doohyun Park
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Yong-Moon Lee
- Department of Pathology, Dankook University College of Medicine, Cheonan, Republic of Korea
- Taejoon Eo
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Probe Medical, Inc., Seoul, Republic of Korea
- Hee Jung An
- Department of Pathology, CHA University, CHA Bundang Medical Center, Seongnam-si, Kyeonggi-do, Republic of Korea
- Haeyoun Kang
- Department of Pathology, CHA University, CHA Bundang Medical Center, Seongnam-si, Kyeonggi-do, Republic of Korea
- Eunhyang Park
- Department of Pathology, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Yoon Jin Cha
- Department of Pathology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Institute of Breast Cancer Precision Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Heejung Park
- Department of Pathology, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Dohee Kwon
- Department of Pathology, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Sun Young Kwon
- Department of Pathology, Keimyung University School of Medicine, Dongsan Hospital, Daegu, Republic of Korea
- Hye-Ra Jung
- Department of Pathology, Keimyung University School of Medicine, Dongsan Hospital, Daegu, Republic of Korea
- Su-Jin Shin
- Department of Pathology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Hyunjin Park
- Department of Pathology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Yangkyu Lee
- Department of Pathology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Institute of Breast Cancer Precision Medicine, Yonsei University College of Medicine, Seoul, Republic of Korea
- Sanghui Park
- Department of Pathology, Ewha Womans University College of Medicine, Seoul, Republic of Korea
- Ji Min Kim
- Department of Pathology, Ewha Womans University College of Medicine, Seoul, Republic of Korea
- Sung-Eun Choi
- Department of Pathology, CHA Bundang Medical Center, CHA University School of Medicine, Seongnam, Republic of Korea
- Nam Hoon Cho
- Department of Pathology, Yonsei University College of Medicine, Seoul, Republic of Korea.
- Dosik Hwang
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea.
- Artificial Intelligence and Robotics Institute, Korea Institute of Science and Technology, 5, Hwarang-ro 14-gil, Seongbuk-gu, Seoul, Republic of Korea.
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Republic of Korea.
- Department of Radiology and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Republic of Korea.
13
Egevad L, Camilloni A, Delahunt B, Samaratunga H, Eklund M, Kartasalo K. The Role of Artificial Intelligence in the Evaluation of Prostate Pathology. Pathol Int 2025; 75:213-220. [PMID: 40226937] [PMCID: PMC12101047] [DOI: 10.1111/pin.70015]
Abstract
Artificial intelligence (AI) is an emerging tool in diagnostic pathology, including prostate pathology. This review summarizes the possibilities offered by AI and also discusses the challenges and risks. AI has the potential to assist in the diagnosis and grading of prostate cancer. Diagnostic safety can be enhanced by avoiding the accidental underdiagnosis of small lesions. Another possible benefit is a greater degree of standardization of grading. AI for clinical use needs to be trained on large, high-quality data sets that have been assessed by experienced pathologists. A problem with the use of AI in prostate pathology is the plethora of benign mimics of prostate cancer and morphological variants of cancer that are too unusual to allow sufficient training of AI. AI systems need to be able to account for variations in local routines for cutting, staining, and scanning of slides. We also need to be aware of the risk that users will rely too much on the output of an AI system, leading to diagnostic errors and loss of clinical competence. The reporting pathologist must ultimately be responsible for accepting or rejecting the diagnosis proposed by AI.
Affiliation(s)
- Lars Egevad
- Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Andrea Camilloni
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Brett Delahunt
- Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Malaghan Institute of Medical Research, Wellington, New Zealand
- Hemamali Samaratunga
- Aquesta Pathology and University of Queensland School of Medicine, Brisbane, Queensland, Australia
- Martin Eklund
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Kimmo Kartasalo
- SciLifeLab, Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
14
Cai Y, Zhang W, Chen H, Cheng KT. MedIAnomaly: A comparative study of anomaly detection in medical images. Med Image Anal 2025; 102:103500. [PMID: 40009901] [DOI: 10.1016/j.media.2025.103500]
Abstract
Anomaly detection (AD) aims at detecting abnormal samples that deviate from the expected normal patterns. Generally, it can be trained merely on normal data, without a requirement for abnormal samples, and thereby plays an important role in rare disease recognition and health screening in the medical domain. Despite the emergence of numerous methods for medical AD, the lack of a fair and comprehensive evaluation causes ambiguous conclusions and hinders the development of this field. To address this problem, this paper builds a benchmark with unified comparison. Seven medical datasets with five image modalities, including chest X-rays, brain MRIs, retinal fundus images, dermatoscopic images, and histopathology images, are curated for extensive evaluation. Thirty typical AD methods, including reconstruction and self-supervised learning-based methods, are involved in comparison of image-level anomaly classification and pixel-level anomaly segmentation. Furthermore, for the first time, we systematically investigate the effect of key components in existing methods, revealing unresolved challenges and potential future directions. The datasets and code are available at https://github.com/caiyu6666/MedIAnomaly.
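A minimal sketch of the reconstruction-based anomaly detection paradigm this benchmark evaluates: an autoencoder is trained only on normal images, and the per-image reconstruction error serves as the anomaly score at test time. The architecture and image size are illustrative assumptions, not any specific benchmarked method.

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid()  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
normal_batch = torch.rand(8, 1, 64, 64)   # stand-in for normal training images

for _ in range(5):                         # a few toy training steps on normal data only
    recon = model(normal_batch)
    loss = ((recon - normal_batch) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

test_images = torch.rand(4, 1, 64, 64)
with torch.no_grad():
    errors = ((model(test_images) - test_images) ** 2).mean(dim=(1, 2, 3))
print("anomaly scores:", errors)           # higher reconstruction error -> more anomalous
```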
Affiliation(s)
- Yu Cai
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Weiwen Zhang
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Hao Chen
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China; Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Hong Kong, China; Division of Life Science, The Hong Kong University of Science and Technology, Hong Kong, China.
- Kwang-Ting Cheng
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China; Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
15
Alhussein M, Liu MX. Deep Learning in Echocardiography for Enhanced Detection of Left Ventricular Function and Wall Motion Abnormalities. Ultrasound Med Biol 2025:S0301-5629(25)00094-8. [PMID: 40316488] [DOI: 10.1016/j.ultrasmedbio.2025.03.015]
Abstract
Cardiovascular diseases (CVDs) remain a leading cause of mortality worldwide, underscoring the need for advancements in diagnostic methodologies to improve early detection and treatment outcomes. This systematic review examines the integration of advanced deep learning (DL) techniques in echocardiography for detecting cardiovascular abnormalities, adhering to PRISMA 2020 guidelines. Through a comprehensive search across databases like IEEE Xplore, PubMed, and Web of Science, 29 studies were identified and analyzed, focusing on deep convolutional neural networks (DCNNs) and their role in enhancing the diagnostic precision of echocardiographic assessments. The findings highlight DL's capability to improve the accuracy and reproducibility of detecting and classifying echocardiographic data, particularly in measuring left ventricular function and identifying wall motion abnormalities. Despite these advancements, challenges such as data diversity, image quality, and the computational demands of DL models hinder their broader clinical adoption. In conclusion, DL offers significant potential to enhance the diagnostic capabilities of echocardiography. However, successful clinical implementation requires addressing issues related to data quality, computational demands, and system integration.
Affiliation(s)
- Manal Alhussein
- Department of Health Administration and Policy, Health Services Research / Discovery, Knowledge, and Health Informatics, College of Public Health, George Mason University, Fairfax, Virginia, United States.
- Michelle Xiang Liu
- Information Technology and Cybersecurity, School of Technology and Innovation, College of Business, Innovation, Leadership, and Technology (BILT), Marymount University, Arlington, Virginia, United States
16
Chen Y, Shao X, Shi K, Rominger A, Caobelli F. AI in Breast Cancer Imaging: An Update and Future Trends. Semin Nucl Med 2025; 55:358-370. [PMID: 40011118] [DOI: 10.1053/j.semnuclmed.2025.01.008]
Abstract
Breast cancer is one of the most common types of cancer affecting women worldwide. Artificial intelligence (AI) is transforming breast cancer imaging by enhancing diagnostic capabilities across multiple imaging modalities, including mammography, digital breast tomosynthesis, ultrasound, magnetic resonance imaging, and nuclear medicine techniques. AI is being applied to diverse tasks such as breast lesion detection and classification, risk stratification, molecular subtyping, gene mutation status prediction, and treatment response assessment, with emerging research demonstrating performance levels comparable to or potentially exceeding those of radiologists. Large foundation models are showing remarkable potential across breast cancer imaging tasks. Self-supervised learning provides insight into the correlations inherent in the data, and federated learning offers an alternative way to maintain data privacy. While promising results have been obtained so far, data standardization at the source, large-scale annotated multimodal datasets, and extensive prospective clinical trials are still needed to fully explore and validate deep learning's clinical utility and to address the legal and ethical considerations, which will ultimately determine its widespread adoption in breast cancer care. We hereby provide a review of the most up-to-date knowledge on AI in breast cancer imaging.
Affiliation(s)
- Yizhou Chen
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Xiaoliang Shao
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, China
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Federico Caobelli
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.
17
Baker HP, Aggarwal S, Kalidoss S, Hess M, Haydon R, Strelzow JA. Diagnostic accuracy of ChatGPT-4 in orthopedic oncology: a comparative study with residents. Knee 2025; 55:153-160. [PMID: 40311171] [DOI: 10.1016/j.knee.2025.04.004]
Abstract
BACKGROUND Artificial intelligence (AI) is increasingly being explored for its potential role in medical diagnostics. ChatGPT-4, a large language model (LLM) with image analysis capabilities, may assist in histopathological interpretation, but its accuracy in musculoskeletal oncology remains untested. This study evaluates ChatGPT-4's diagnostic accuracy in identifying musculoskeletal tumors from histology slides compared to orthopedic surgery residents. METHODS A comparative study was conducted using 24 histology slides randomly selected from an orthopedic oncology registry. Five teams of orthopedic surgery residents (PGY-1 to PGY-5) participated in a diagnostic competition, providing their best diagnosis for each slide. ChatGPT-4 was tested separately using identical histology images and clinical vignettes, with two independent attempts. Statistical analyses, including one-way ANOVA and independent t-tests were performed to compare diagnostic accuracy. RESULTS Orthopedic residents significantly outperformed ChatGPT-4 in diagnosing musculoskeletal tumors. The mean diagnostic accuracy among resident teams was 55%, while ChatGPT-4 achieved 25% on its first attempt and 33% on its second attempt. One-way ANOVA revealed a significant difference in accuracy across groups (F = 8.51, p = 0.033). Independent t-tests confirmed that residents performed significantly better than ChatGPT-4 (t = 5.80, p = 0.0004 for first attempt; t = 4.25, p = 0.0028 for second attempt). Both residents and ChatGPT-4 struggled with specific cases, particularly soft tissue sarcomas. CONCLUSIONS ChatGPT-4 demonstrated limited accuracy in interpreting histopathological slides compared to orthopedic residents. While AI holds promise for medical diagnostics, its current capabilities in musculoskeletal oncology remain insufficient for independent clinical use. These findings should be viewed as exploratory rather than confirmatory, and further research with larger, more diverse datasets is needed to assess AI's role in histopathology. Future studies should investigate AI-assisted workflows, refine prompt engineering, and explore AI models specifically trained for histopathological diagnosis.
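An illustrative re-creation of the statistical comparison described above (one-way ANOVA across groups followed by independent t-tests), using made-up per-case correctness data rather than the study's actual scores.

```python
# Hypothetical per-slide correctness (1 = correct diagnosis) over 24 cases per group.
import numpy as np
from scipy.stats import f_oneway, ttest_ind

rng = np.random.default_rng(0)
residents = rng.binomial(1, 0.55, size=24)   # pooled resident accuracy ~55%
gpt_first = rng.binomial(1, 0.25, size=24)   # ChatGPT-4 first attempt ~25%
gpt_second = rng.binomial(1, 0.33, size=24)  # ChatGPT-4 second attempt ~33%

f_stat, p_anova = f_oneway(residents, gpt_first, gpt_second)
t1, p1 = ttest_ind(residents, gpt_first)
t2, p2 = ttest_ind(residents, gpt_second)

print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.3f}")
print(f"residents vs first attempt:  t={t1:.2f}, p={p1:.4f}")
print(f"residents vs second attempt: t={t2:.2f}, p={p2:.4f}")
```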
Affiliation(s)
- Hayden P Baker
- The University of Chicago Department of Orthopaedic Surgery, Chicago, IL 60637, United States.
- Sarthak Aggarwal
- The University of Chicago Department of Orthopaedic Surgery, Chicago, IL 60637, United States
- Senthooran Kalidoss
- The University of Chicago Department of Orthopaedic Surgery, Chicago, IL 60637, United States
- Matthew Hess
- The University of Chicago Department of Orthopaedic Surgery, Chicago, IL 60637, United States
- Rex Haydon
- The University of Chicago Department of Orthopaedic Surgery, Chicago, IL 60637, United States
- Jason A Strelzow
- The University of Chicago Department of Orthopaedic Surgery, Chicago, IL 60637, United States
18
Marra A, Morganti S, Pareja F, Campanella G, Bibeau F, Fuchs T, Loda M, Parwani A, Scarpa A, Reis-Filho JS, Curigliano G, Marchiò C, Kather JN. Artificial intelligence entering the pathology arena in oncology: current applications and future perspectives. Ann Oncol 2025:S0923-7534(25)00112-7. [PMID: 40307127] [DOI: 10.1016/j.annonc.2025.03.006]
Abstract
BACKGROUND Artificial intelligence (AI) is rapidly transforming the fields of pathology and oncology, offering novel opportunities for advancing diagnosis, prognosis, and treatment of cancer. METHODS Through a systematic review-based approach, the representatives from the European Society for Medical Oncology (ESMO) Precision Oncology Working Group (POWG) and international experts identified studies in pathology and oncology that applied AI-based algorithms for tumour diagnosis, molecular biomarker detection, and cancer prognosis assessment. These findings were synthesised to provide a comprehensive overview of current AI applications and future directions in cancer pathology. RESULTS The integration of AI tools in digital pathology is markedly improving the accuracy and efficiency of image analysis, allowing for automated tumour detection and classification, identification of prognostic molecular biomarkers, and prediction of treatment response and patient outcomes. Several barriers for the adoption of AI in clinical workflows, such as data availability, explainability, and regulatory considerations, still persist. There are currently no prognostic or predictive AI-based biomarkers supported by level IA or IB evidence. The ongoing advancements in AI algorithms, particularly foundation models, generalist models and transformer-based deep learning, offer immense promise for the future of cancer research and care. AI is also facilitating the integration of multi-omics data, leading to more precise patient stratification and personalised treatment strategies. CONCLUSIONS The application of AI in pathology is poised to not only enhance the accuracy and efficiency of cancer diagnosis and prognosis but also facilitate the development of personalised treatment strategies. Although barriers to implementation remain, ongoing research and development in this field coupled with addressing ethical and regulatory considerations will likely lead to a future where AI plays an integral role in cancer management and precision medicine. The continued evolution and adoption of AI in pathology and oncology are anticipated to reshape the landscape of cancer care, heralding a new era of precision medicine and improved patient outcomes.
Affiliation(s)
- A Marra
- Division of Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, Milan, Italy
| | - S Morganti
- Department of Medical Oncology, Dana-Farber Cancer Institute, Boston, USA; Department of Medicine, Harvard Medical School, Boston, USA; Gerstner Center for Cancer Diagnostics, Broad Institute of MIT and Harvard, Boston, USA
| | - F Pareja
- Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, USA
| | - G Campanella
- Hasso Plattner Institute for Digital Health, Mount Sinai Medical School, New York, USA; Department of AI and Human Health, Icahn School of Medicine at Mount Sinai, New York, USA
| | - F Bibeau
- Department of Pathology, University Hospital of Besançon, Besancon, France
| | - T Fuchs
- Hasso Plattner Institute for Digital Health, Mount Sinai Medical School, New York, USA; Department of AI and Human Health, Icahn School of Medicine at Mount Sinai, New York, USA
| | - M Loda
- Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York, USA; Nuffield Department of Surgical Sciences, University of Oxford, Oxford, UK; Department of Oncologic Pathology, Dana-Farber Cancer Institute and Harvard Medical School, Boston, USA
| | - A Parwani
- Department of Pathology, Wexner Medical Center, Ohio State University, Columbus, USA
| | - A Scarpa
- Department of Diagnostics and Public Health, Section of Pathology, University and Hospital Trust of Verona, Verona, Italy; ARC-Net Research Center, University of Verona, Verona, Italy
| | - J S Reis-Filho
- Department of Pathology and Laboratory Medicine, Memorial Sloan Kettering Cancer Center, New York, USA
| | - G Curigliano
- Division of Early Drug Development for Innovative Therapies, European Institute of Oncology IRCCS, Milan, Italy; Department of Oncology and Hemato-Oncology, University of Milan, Milan, Italy
| | - C Marchiò
- Candiolo Cancer Institute, FPO IRCCS, Candiolo, Italy; Department of Medical Sciences, University of Turin, Turin, Italy
| | - J N Kather
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany; Department of Medicine I, University Hospital and Faculty of Medicine Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany; Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany.
|
19
|
Dang C, Qi Z, Xu T, Gu M, Chen J, Wu J, Lin Y, Qi X. Deep Learning-Powered Whole Slide Image Analysis in Cancer Pathology. J Transl Med 2025; 105:104186. [PMID: 40306572 DOI: 10.1016/j.labinv.2025.104186] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2024] [Revised: 03/05/2025] [Accepted: 04/22/2025] [Indexed: 05/02/2025] Open
Abstract
Pathology is the cornerstone of modern cancer care. With the advancement of precision oncology, the demand for histopathologic diagnosis and stratification of patients is increasing as personalized cancer therapy relies on accurate biomarker assessment. Recently, rapid development of whole slide imaging technology has enabled digitalization of traditional histologic slides at high resolution, holding promise to improve both the precision and efficiency of histopathologic evaluation. In particular, deep learning approaches, such as Convolutional Neural Network, Graph Convolutional Network, and Transformer, have shown great promise in enhancing the sensitivity and accuracy of whole slide image (WSI) analysis in cancer pathology because of their ability to handle high-dimensional and complex image data. The integration of deep learning models with WSIs enables us to explore and mine morphologic features beyond the visual perception of pathologists, which can help automate clinical diagnosis, assess histopathologic grade, predict clinical outcomes, and even discover novel morphologic biomarkers. In this review, we present a comprehensive framework for incorporating deep learning with WSIs, highlighting how deep learning-driven WSI analysis advances clinical tasks in cancer care. Furthermore, we critically discuss the opportunities and challenges of translating deep learning-based digital pathology into clinical practice, which should be considered to support personalized treatment of cancer patients.
Affiliation(s)
- Chengrun Dang
- School of Chemistry and Life Sciences, Suzhou University of Science and Technology, Suzhou, China
| | - Zhuang Qi
- School of Software, Shandong University, Jinan, China
| | - Tao Xu
- School of Chemistry and Life Sciences, Suzhou University of Science and Technology, Suzhou, China
| | - Mingkai Gu
- School of Chemistry and Life Sciences, Suzhou University of Science and Technology, Suzhou, China
| | - Jiajia Chen
- School of Chemistry and Life Sciences, Suzhou University of Science and Technology, Suzhou, China
| | - Jie Wu
- Department of Oncology, The First Affiliated Hospital of Soochow University, Suzhou, China.
| | - Yuxin Lin
- Department of Urology, The First Affiliated Hospital of Soochow University, Suzhou, China.
| | - Xin Qi
- School of Chemistry and Life Sciences, Suzhou University of Science and Technology, Suzhou, China.
|
20
|
Xu C, Sun Y, Zhang Y, Liu T, Wang X, Hu D, Huang S, Li J, Zhang F, Li G. Stain Normalization of Histopathological Images Based on Deep Learning: A Review. Diagnostics (Basel) 2025; 15:1032. [PMID: 40310413 PMCID: PMC12077256 DOI: 10.3390/diagnostics15081032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2025] [Revised: 04/06/2025] [Accepted: 04/10/2025] [Indexed: 05/02/2025] Open
Abstract
Histopathological images stained with hematoxylin and eosin (H&E) are crucial for cancer diagnosis and prognosis. However, color variations caused by differences in tissue preparation and scanning devices can lead to data distribution discrepancies, adversely affecting the performance of downstream algorithms in tasks like classification, segmentation, and detection. To address these issues, stain normalization methods have been developed to standardize color distributions across images from various sources. Recent advancements in deep learning-based stain normalization methods have shown significant promise due to their minimal preprocessing requirements, independence from reference templates, and robustness. This review examines 115 publications to explore the latest developments in this field. We first outline the evaluation metrics and publicly available datasets used for assessing stain normalization methods. Next, we systematically review deep learning-based approaches, including supervised, unsupervised, and self-supervised methods, categorizing them by core technologies and analyzing their contributions and limitations. Finally, we discuss current challenges and future directions, aiming to provide researchers with a comprehensive understanding of the field, promote further development, and accelerate the progress of intelligent cancer diagnosis.
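For orientation, a classical (non-deep-learning) baseline against which the reviewed methods are usually compared is Reinhard-style color transfer, which matches per-channel statistics in LAB space to a template slide. The sketch below is illustrative only, assumes scikit-image is available, and expects RGB tiles as floats in [0, 1].

```python
import numpy as np
from skimage import color

def reinhard_normalize(source_rgb, template_rgb):
    """Match per-channel LAB mean/std of a source tile to a template tile
    (classical Reinhard-style stain normalization; illustrative baseline only)."""
    src = color.rgb2lab(source_rgb)
    tgt = color.rgb2lab(template_rgb)
    out = np.empty_like(src)
    for c in range(3):
        mu_s, sd_s = src[..., c].mean(), src[..., c].std() + 1e-8
        mu_t, sd_t = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - mu_s) / sd_s * sd_t + mu_t
    return np.clip(color.lab2rgb(out), 0.0, 1.0)
```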
Affiliation(s)
- Chuanyun Xu
- School of Computer & Information Science, Chongqing Normal University, Chongqing 401331, China; (C.X.); (Y.S.)
| | - Yisha Sun
- School of Computer & Information Science, Chongqing Normal University, Chongqing 401331, China; (C.X.); (Y.S.)
| | - Yang Zhang
- School of Computer & Information Science, Chongqing Normal University, Chongqing 401331, China; (C.X.); (Y.S.)
| | - Tianqi Liu
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
| | - Xiao Wang
- School of Computer & Information Science, Chongqing Normal University, Chongqing 401331, China; (C.X.); (Y.S.)
| | - Die Hu
- School of Computer & Information Science, Chongqing Normal University, Chongqing 401331, China; (C.X.); (Y.S.)
| | - Shuaiye Huang
- School of Computer & Information Science, Chongqing Normal University, Chongqing 401331, China; (C.X.); (Y.S.)
| | - Junjie Li
- School of Computer & Information Science, Chongqing Normal University, Chongqing 401331, China; (C.X.); (Y.S.)
| | - Fanghong Zhang
- National Center for Applied Mathematics, Chongqing Normal University, Chongqing 401331, China
| | - Gang Li
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
|
21
|
Gonzalez AD, Wadop YN, Danner B, Clarke KM, Dopler MB, Ghaseminejad-Bandpey A, Babu S, Parker-Garza J, Corbett C, Alhneif M, Keating M, Bieniek KF, Maestre GE, Seshadri S, Etemadmoghadam S, Fongang B, Flanagan ME. Digital pathology in tau research: A comparison of QuPath and HALO. J Neuropathol Exp Neurol 2025:nlaf026. [PMID: 40238207 DOI: 10.1093/jnen/nlaf026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/18/2025] Open
Abstract
The application of digital pathology tools has expanded in recent years, but non-neoplastic human brain tissue presents unique challenges due to its complexity. This study evaluated HALO and QuPath tau quantification performance in the hippocampus and mid-frontal gyrus across various tauopathies. Percent positivity emerged as the most reliable measure, showing strong correlations with Braak stages and CERAD scores, outperforming object and optical densities. QuPath demonstrated superior correlations with Braak stages, while HALO excelled in aligning with CERAD scoring. However, HALO's optical density was less consistent. Paired t-tests revealed significant differences in object and optical densities between platforms, though percent positivity was consistent across both. QuPath's threshold-based object density showed similar agreement with manual counts compared to HALO's AI-dependent approach (all ρ > 0.70). Reanalysis of QuPath further improved its agreement with manual measurements and correlations with Braak and CERAD scores (all ρ > 0.70). HALO offers a user-friendly interface and excels in certain metrics but is hindered by frequent software malfunctions and more limited flexibility. In contrast, QuPath's customizable workflows and superior performance in Braak staging make it more suitable for advanced and larger-scale analyses. Overall, our study highlights the strengths and limitations of these platforms, helping guide their application in neuropathology.
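As a rough illustration of the metric that performed best above, percent positivity is simply the stained area divided by the analyzed area, and its association with an ordinal stage can be checked with Spearman's ρ. The numbers below are hypothetical, not study data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-case platform outputs (pixel counts) and Braak stages
tau_positive_px = np.array([1.2e5, 4.0e5, 9.5e5, 2.1e6, 3.3e6, 4.8e6])
analyzed_px = np.array([5.0e6, 5.2e6, 5.1e6, 5.3e6, 5.0e6, 5.2e6])
braak_stage = np.array([1, 2, 3, 4, 5, 6])

percent_positivity = 100.0 * tau_positive_px / analyzed_px
rho, p_value = spearmanr(percent_positivity, braak_stage)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```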
Affiliation(s)
- Angelique D Gonzalez
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Yannick N Wadop
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Benjamin Danner
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Kyra M Clarke
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Matthew B Dopler
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Ali Ghaseminejad-Bandpey
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Sahana Babu
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Julie Parker-Garza
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Cole Corbett
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Mohammad Alhneif
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Mallory Keating
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Kevin F Bieniek
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
- Department of Pathology and Laboratory Medicine, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Gladys E Maestre
- Institute of Neurosciences, School of Medicine, University of Texas Rio Grande Valley, Harlingen, TX, United States
- Rio Grande Valley Alzheimer's Disease Resource Center for Minority Aging Research (RGV AD-RCMAR), University of Texas Rio Grande Valley, Brownsville, TX, United States
- Department of Human Genetics, School of Medicine, University of Texas Rio Grande Valley, Brownsville, TX, United States
| | - Sudha Seshadri
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Shahroo Etemadmoghadam
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Bernard Fongang
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
| | - Margaret E Flanagan
- Glenn Biggs Institute for Alzheimer's and Neurodegenerative Diseases, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
- Department of Pathology and Laboratory Medicine, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
|
22
|
Ito H, Wada T, Ichinose G, Tanimoto J, Yoshimura J, Yamamoto T, Morita S. Barriers to the widespread adoption of diagnostic artificial intelligence for preventing antimicrobial resistance. Sci Rep 2025; 15:13113. [PMID: 40240443 PMCID: PMC12003763 DOI: 10.1038/s41598-025-95110-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2024] [Accepted: 03/19/2025] [Indexed: 04/18/2025] Open
Abstract
Currently, antimicrobial resistance (AMR) poses a major public health challenge. The emergence of AMR, which significantly threatens public health, is primarily due to the overuse of antimicrobial agents. This study explored the possibility that the ethical dilemmas inherent in the context of AMR may hinder the adoption of diagnostic artificial intelligence (AI). We conducted a web survey across eight countries/areas to assess public preference between two hypothetical AI types: one prioritizing individual health and the other considering the global AMR threat. Our results revealed a societal preference for the utilization of both AI types, reflecting a conflict between recognizing the significance of AMR and the desire for individualized treatment. Interestingly, the survey indicated significant gender and age differences in AI preferences, and the majority of respondents opposed the idea of AI standardization in treatment. These findings highlight the challenges of incorporating AI into public health and the necessity of considering public sentiment in addressing global health issues such as AMR.
Affiliation(s)
- Hiromu Ito
- Department of International Health and Medical Anthropology, Institute of Tropical Medicine, Nagasaki University, Nagasaki, Japan.
| | - Takayuki Wada
- Graduate School of Human Life and Ecology, Osaka Metropolitan University, Osaka, Japan
- Osaka International Research Center for Infectious Diseases, Osaka Metropolitan University, Osaka, Japan
| | - Genki Ichinose
- Department of Mathematical and Systems Engineering, Shizuoka University, Shizuoka, Japan
| | - Jun Tanimoto
- Department of Energy and Environmental Engineering, Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, Fukuoka, Japan
- Department of Advanced Environmental Science and Engineering, Faculty of Engineering Sciences, Kyushu University, Fukuoka, Japan
| | - Jin Yoshimura
- Department of International Health and Medical Anthropology, Institute of Tropical Medicine, Nagasaki University, Nagasaki, Japan
- Marine Biosystems Research Center, Chiba University, Chiba, Japan
- Department of Biological Science, Tokyo Metropolitan University, Tokyo, Japan
| | - Taro Yamamoto
- Department of International Health and Medical Anthropology, Institute of Tropical Medicine, Nagasaki University, Nagasaki, Japan
| | - Satoru Morita
- Department of Mathematical and Systems Engineering, Shizuoka University, Shizuoka, Japan
|
23
|
Ahn S, Hong Y, Park S, Cho Y, Hwang I, Na JM, Lee H, Min BH, Lee JH, Kim JJ, Kim KM. Development and application of deep learning-based diagnostics for pathologic diagnosis of gastric endoscopic submucosal dissection specimens. Gastric Cancer 2025:10.1007/s10120-025-01612-y. [PMID: 40232558 DOI: 10.1007/s10120-025-01612-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/23/2025] [Accepted: 03/26/2025] [Indexed: 04/16/2025]
Abstract
BACKGROUND Accurate diagnosis of ESD specimens is crucial for managing early gastric cancer. Identifying tumor areas in serially sectioned ESD specimens requires experience and is time-consuming. This study aimed to develop and evaluate a deep learning model for diagnosing ESD specimens. METHODS Whole-slide images of 366 ESD specimens of adenocarcinoma were analyzed, with 2257 annotated regions of interest (tumor and muscularis mucosa) and 83,839 patch images. The development set was divided into training and internal validation sets. Tissue segmentation performance was evaluated using the internal validation set. A detection algorithm for tumor and submucosal invasion at the whole-slide image level was developed, and its performance was evaluated using a test set. RESULTS The model achieved Dice coefficients of 0.85 and 0.79 for segmentation of tumor and muscularis mucosa, respectively. In the test set, the diagnostic performance of tumor detection, measured by the AUROC, was 0.995, with a specificity of 1.000 and a sensitivity of 0.947. For detecting submucosal invasion, the model achieved an AUROC of 0.981, with a specificity of 0.956 and a sensitivity of 0.907. Pathologists' performance in diagnosing ESD specimens was evaluated with and without assistance from the deep learning model, and the model significantly reduced the mean diagnosis time (747 s without assistance vs. 478 s with assistance, P < 0.001). CONCLUSION The deep learning model demonstrated satisfactory performance in tissue segmentation and high accuracy in detecting tumors and submucosal invasion. This model can potentially serve as a screening tool in the histopathological diagnosis of ESD specimens.
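For reference, the Dice coefficient reported above for segmentation quality is twice the mask overlap divided by the summed mask sizes; a minimal sketch (not the authors' implementation) follows.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-8):
    """Dice coefficient between two binary masks, as used to score the
    tumor and muscularis mucosa segmentations described above."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return 2.0 * intersection / (pred.sum() + true.sum() + eps)

# Toy example: 1 overlapping pixel, masks of size 2 and 1 -> Dice = 2/3
print(dice_coefficient(np.array([[1, 1], [0, 0]]), np.array([[1, 0], [0, 0]])))
```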
Affiliation(s)
- Soomin Ahn
- Department of Pathology and Translational Genomics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
| | - Yiyu Hong
- Department of R&D Center, Arontier Co., Ltd, Seoul, South Korea
| | - Sujin Park
- Department of Pathology and Translational Genomics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
| | - Yunjoo Cho
- Department of Pathology and Translational Genomics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
| | - Inwoo Hwang
- Department of Pathology and Translational Genomics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
| | - Ji Min Na
- Department of Pathology and Translational Genomics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
| | - Hyuk Lee
- Department of Internal Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
| | - Byung-Hoon Min
- Department of Internal Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
| | - Jun Haeng Lee
- Department of Internal Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
| | - Jae J Kim
- Department of Internal Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea
| | - Kyoung-Mee Kim
- Department of Pathology and Translational Genomics, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, South Korea.
|
24
|
Huhulea EN, Huang L, Eng S, Sumawi B, Huang A, Aifuwa E, Hirani R, Tiwari RK, Etienne M. Artificial Intelligence Advancements in Oncology: A Review of Current Trends and Future Directions. Biomedicines 2025; 13:951. [PMID: 40299653 PMCID: PMC12025054 DOI: 10.3390/biomedicines13040951] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2025] [Revised: 04/03/2025] [Accepted: 04/10/2025] [Indexed: 05/01/2025] Open
Abstract
Cancer remains one of the leading causes of mortality worldwide, driving the need for innovative approaches in research and treatment. Artificial intelligence (AI) has emerged as a powerful tool in oncology, with the potential to revolutionize cancer diagnosis, treatment, and management. This paper reviews recent advancements in AI applications within cancer research, focusing on early detection through computer-aided diagnosis, personalized treatment strategies, and drug discovery. We survey AI-enhanced diagnostic applications and explore AI techniques such as deep learning, as well as the integration of AI with nanomedicine and immunotherapy for cancer care. Comparative analyses of AI-based models versus traditional diagnostic methods are presented, highlighting AI's superior potential. Additionally, we discuss the importance of integrating social determinants of health to optimize cancer care. Despite these advancements, challenges such as data quality, algorithmic biases, and clinical validation remain, limiting widespread adoption. The review concludes with a discussion of the future directions of AI in oncology, emphasizing its potential to reshape cancer care by enhancing diagnosis, personalizing treatments and targeted therapies, and ultimately improving patient outcomes.
Affiliation(s)
- Ellen N. Huhulea
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
| | - Lillian Huang
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
| | - Shirley Eng
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
| | - Bushra Sumawi
- Barshop Institute, The University of Texas Health Science Center, San Antonio, TX 78229, USA
| | - Audrey Huang
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
| | - Esewi Aifuwa
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
| | - Rahim Hirani
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Graduate School of Biomedical Sciences, New York Medical College, Valhalla, NY 10595, USA
| | - Raj K. Tiwari
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Graduate School of Biomedical Sciences, New York Medical College, Valhalla, NY 10595, USA
| | - Mill Etienne
- School of Medicine, New York Medical College, Valhalla, NY 10595, USA (R.H.)
- Department of Neurology, New York Medical College, Valhalla, NY 10595, USA
|
25
|
Hatamoto D, Yamakawa M, Shiina T. Improving ultrasound image classification accuracy of liver tumors using deep learning model with hepatitis virus infection information. J Med Ultrason (2001) 2025:10.1007/s10396-025-01528-1. [PMID: 40205118 DOI: 10.1007/s10396-025-01528-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2024] [Accepted: 02/04/2025] [Indexed: 04/11/2025]
Abstract
PURPOSE In recent years, computer-aided diagnosis (CAD) using deep learning methods for medical images has been studied. Although studies have been conducted to classify ultrasound images of tumors of the liver into four categories (liver cysts (Cyst), liver hemangiomas (Hemangioma), hepatocellular carcinoma (HCC), and metastatic liver cancer (Meta)), no studies with additional information for deep learning have been reported. Therefore, we attempted to improve the classification accuracy of ultrasound images of hepatic tumors by adding hepatitis virus infection information to deep learning. METHODS Four combinations of hepatitis virus infection information were assigned to each image, plus or minus HBs antigen and plus or minus HCV antibody, and the classification accuracy was compared before and after the information was input and weighted to fully connected layers. RESULTS With the addition of hepatitis virus infection information, accuracy changed from 0.574 to 0.643. The F1-Score for Cyst, Hemangioma, HCC, and Meta changed from 0.87 to 0.88, 0.55 to 0.57, 0.46 to 0.59, and 0.54 to 0.62, respectively, remaining the same for Hemangioma but increasing for the rest. CONCLUSION Learning hepatitis virus infection information showed the highest increase in the F1-Score for HCC, resulting in improved classification accuracy of ultrasound images of hepatic tumors.
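One generic way to feed binary infection flags into a CNN's fully connected layers, as described above, is to concatenate them with the image feature vector before the classifier head. This PyTorch sketch is a hypothetical illustration of that pattern, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ImagePlusVirusNet(nn.Module):
    """Hypothetical fusion of image features with two binary flags
    (HBs antigen +/-, HCV antibody +/-) before a 4-class classifier."""
    def __init__(self, backbone, feat_dim=512, n_classes=4):
        super().__init__()
        self.backbone = backbone                    # any CNN mapping images -> (B, feat_dim)
        self.head = nn.Sequential(
            nn.Linear(feat_dim + 2, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, images, virus_flags):        # virus_flags: (B, 2) tensor of 0/1
        feats = self.backbone(images)
        fused = torch.cat([feats, virus_flags.float()], dim=1)
        return self.head(fused)
```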
Affiliation(s)
- Daisuke Hatamoto
- Graduate School of Medicine, Kyoto University, 53 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8397, Japan.
| | - Makoto Yamakawa
- Graduate School of Medicine, Kyoto University, 53 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8397, Japan
- SIT Research Laboratories, Shibaura Institute of Technology, Tokyo, Japan
| | - Tsuyoshi Shiina
- Graduate School of Medicine, Kyoto University, 53 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8397, Japan
- SIT Research Laboratories, Shibaura Institute of Technology, Tokyo, Japan
|
26
|
Bangash SH, Ibrahim M, Ali A, Wei CY, Hussain A, Riaz M, Rehman MFU, Ahmed F, Al-Salahi R, Tang WW. A new natural Cyperol A together with five known compounds from Cyperus rotundus L.: isolation, structure elucidation, DFT analysis, insecticidal and enzyme-inhibition activities and in silico study. RSC Adv 2025; 15:11491-11502. [PMID: 40225773 PMCID: PMC11987592 DOI: 10.1039/d5ra00505a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2025] [Accepted: 03/21/2025] [Indexed: 04/15/2025] Open
Abstract
One new natural benzaldehyde derivative (1), together with five known compounds, was isolated from the methanolic extract of the whole plant of Cyperus rotundus L., which is a globally distributed noxious weed. The structure of compound (1) (named Cyperol A) was determined using various NMR methods, including 1H, 13C, COSY, HMBC, HSQC and NOESY, and mass spectrometric techniques, including EIMS. The newly isolated compound (1) was subjected to optimization using computer-assisted calculation via DFT methods for natural bond orbital (NBO) and frontier molecular orbital (FMO) analyses and compared with carbofuran, which is used to control the pest brown planthopper. The in vitro insecticidal efficacy of compounds 1-6 was evaluated against Nilaparvata lugens. Compound 1 demonstrated exceptional lethal and notable enzyme inhibitory effects. Furthermore, compound 1 was investigated in silico for its anti-pesticidal activities targeting the BPH (Nilaparvata lugens (Stål)) key enzymes, such as glutathione S-transferase (GST) and acetylcholinesterase (AChE). Compound 1 showed good docking scores of -9.75 kcal mol-1 against GST, forming hydrogen bonds with its active site, and -10.56 kcal mol-1 with AChE owing to its high potential for hydrogen bonding.
Affiliation(s)
- Saqib Hussain Bangash
- Guangxi Key Laboratory of Agro-Environment and Agric-Product Safety, National Demonstration Center for Experimental Plant Science Education, College of Agriculture, Guangxi University Nanning Guangxi People's Republic of China
- Department of Applied Chemistry, Government College University Faisalabad Pakistan
| | - Muhammad Ibrahim
- Department of Applied Chemistry, Government College University Faisalabad Pakistan
| | - Akbar Ali
- Department of Chemistry, Government College University Faisalabad Pakistan
| | - Chen-Yang Wei
- Guangxi Key Laboratory of Agro-Environment and Agric-Product Safety, National Demonstration Center for Experimental Plant Science Education, College of Agriculture, Guangxi University Nanning Guangxi People's Republic of China
| | - Amjad Hussain
- Institute of Chemistry, University of Okara Okara-56300 Punjab Pakistan
| | - Moazama Riaz
- Department of Applied Chemistry, Government College University Faisalabad Pakistan
| | | | - Faiz Ahmed
- Department of Chemistry, Government College University Faisalabad Pakistan
| | - Rashad Al-Salahi
- Department of Pharmaceutical Chemistry, College of Pharmacy, King Saud University Riyadh 11451 Saudi Arabia
| | - Wen-Wei Tang
- Guangxi Key Laboratory of Agro-Environment and Agric-Product Safety, National Demonstration Center for Experimental Plant Science Education, College of Agriculture, Guangxi University Nanning Guangxi People's Republic of China
|
27
|
Nguyen T, Panwar V, Jamale V, Perny A, Dusek C, Cai Q, Kapur P, Danuser G, Rajaram S. Autonomous learning of pathologists' cancer grading rules. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2025:2025.03.18.643999. [PMID: 40166226 PMCID: PMC11956981 DOI: 10.1101/2025.03.18.643999] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/02/2025]
Abstract
Deep learning (DL) algorithms have demonstrated remarkable proficiency in histopathology classification tasks, presenting an opportunity to discover disease-related features escaping visual inspection. However, the "black box" nature of DL obfuscates the basis of the classification. Here, we develop an algorithm for interpretable Deep Learning (IDL) that sheds light on the links between tissue morphology and cancer biology. We make use of a generative model trained to represent images via a combination of a semantic latent space and a noise vector to capture low level image details. We traversed the latent space so as to induce prototypical image changes associated with the disease state, which we identified via a second DL model. Applied to a dataset of clear cell renal cell carcinoma (ccRCC) tissue images the AI system pinpoints nuclear size and nucleolus density in tumor cells (but not other cell types) as the decisive features of tumor progression from grade 1 to grade 4 - mirroring the rules that have been used for decades in the clinic and are taught in textbooks. Moreover, the AI system posits a decrease in vasculature with increasing grade. While the association has been illustrated by some previous reports, the correlation is not part of currently implemented grading systems. These results indicate the potential of IDL to autonomously formalize the connection between the histopathological presentation of a disease and the underlying tissue architectural drivers.
Affiliation(s)
- Thuong Nguyen
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Vandana Panwar
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Vipul Jamale
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Averi Perny
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Cecilia Dusek
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Qi Cai
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Payal Kapur
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Kidney Cancer Program, Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Urology, University of Texas Southwestern Medical Center at Dallas, Dallas, TX, USA
| | - Gaudenz Danuser
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Satwik Rajaram
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Kidney Cancer Program, Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, TX, USA
|
28
|
Lewis C, Groarke J, Graham-Wisener L, James J. Public Awareness of and Attitudes Toward the Use of AI in Pathology Research and Practice: Mixed Methods Study. J Med Internet Res 2025; 27:e59591. [PMID: 40173441 PMCID: PMC12004022 DOI: 10.2196/59591] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2024] [Revised: 11/07/2024] [Accepted: 02/04/2025] [Indexed: 04/04/2025] Open
Abstract
BACKGROUND The last decade has witnessed major advances in the development of artificial intelligence (AI) technologies for use in health care. One of the most promising areas of research that has potential clinical utility is the use of AI in pathology to aid cancer diagnosis and management. While the value of using AI to improve the efficiency and accuracy of diagnosis cannot be underestimated, there are challenges in the development and implementation of such technologies. Notably, questions remain about public support for the use of AI to assist in pathological diagnosis and for the use of health care data, including data obtained from tissue samples, to train algorithms. OBJECTIVE This study aimed to investigate public awareness of and attitudes toward AI in pathology research and practice. METHODS A nationally representative, cross-sectional, web-based mixed methods survey (N=1518) was conducted to assess the UK public's awareness of and views on the use of AI in pathology research and practice. Respondents were recruited via Prolific, an online research platform. To be eligible for the study, participants had to be aged >18 years, be UK residents, and have the capacity to express their own opinion. Respondents answered 30 closed-ended questions and 2 open-ended questions. Sociodemographic information and previous experience with cancer were collected. Descriptive and inferential statistics were used to analyze quantitative data; qualitative data were analyzed thematically. RESULTS Awareness was low, with only 23.19% (352/1518) of the respondents somewhat or moderately aware of AI being developed for use in pathology. Most did not support a diagnosis of cancer (908/1518, 59.82%) or a diagnosis based on biomarkers (694/1518, 45.72%) being made using AI only. However, most (1478/1518, 97.36%) supported diagnoses made by pathologists with AI assistance. The adjusted odds ratio (aOR) for supporting AI in cancer diagnosis and management was higher for men (aOR 1.34, 95% CI 1.02-1.75). Greater awareness (aOR 1.25, 95% CI 1.10-1.42), greater trust in data security and privacy protocols (aOR 1.04, 95% CI 1.01-1.07), and more positive beliefs (aOR 1.27, 95% CI 1.20-1.36) also increased support, whereas identifying more risks reduced the likelihood of support (aOR 0.80, 95% CI 0.73-0.89). In total, 3 main themes emerged from the qualitative data: bringing the public along, the human in the loop, and more hard evidence needed, indicating conditional support for AI in pathology with human decision-making oversight, robust measures for data handling and protection, and evidence for AI benefit and effectiveness. CONCLUSIONS Awareness of AI's potential use in pathology was low, but attitudes were positive, with high but conditional support. Challenges remain, particularly among women, regarding AI use in cancer diagnosis and management. Apprehension persists about the access to and use of health care data by private organizations.
Affiliation(s)
- Claire Lewis
- School of Medicine Dentistry and Biomedical Sciences, Queen's University Belfast, Belfast, United Kingdom
| | - Jenny Groarke
- School of Psychology, University of Galway, Galway, Ireland
| | | | - Jacqueline James
- School of Medicine Dentistry and Biomedical Sciences, Queen's University Belfast, Belfast, United Kingdom
|
29
|
Rahnfeld J, Naouar M, Kalweit G, Boedecker J, Dubruc E, Kalweit M. A comparative study of explainability methods for whole slide classification of lymph node metastases using vision transformers. PLOS DIGITAL HEALTH 2025; 4:e0000792. [PMID: 40233316 PMCID: PMC11999707 DOI: 10.1371/journal.pdig.0000792] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/17/2024] [Accepted: 02/17/2025] [Indexed: 04/17/2025]
Abstract
Recent advancements in deep learning have shown promise in enhancing the performance of medical image analysis. In pathology, automated whole slide imaging has transformed clinical workflows by streamlining routine tasks and diagnostic and prognostic support. However, the lack of transparency of deep learning models, often described as black boxes, poses a significant barrier to their clinical adoption. This study evaluates various explainability methods for Vision Transformers, assessing their effectiveness in explaining the rationale behind their classification predictions on histopathological images. Using a Vision Transformer trained on the publicly available CAMELYON16 dataset comprising 399 whole slide images of lymph node metastases of patients with breast cancer, we conducted a comparative analysis of a diverse range of state-of-the-art techniques for generating explanations through heatmaps, including Attention Rollout, Integrated Gradients, RISE, and ViT-Shapley. Our findings reveal that Attention Rollout and Integrated Gradients are prone to artifacts, while RISE and particularly ViT-Shapley generate more reliable and interpretable heatmaps. ViT-Shapley also demonstrated faster runtime and superior performance in insertion and deletion metrics. These results suggest that integrating ViT-Shapley-based heatmaps into pathology reports could enhance trust and scalability in clinical workflows, facilitating the adoption of explainable artificial intelligence in pathology.
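Of the methods compared above, Attention Rollout is the simplest to state: average each layer's attention over heads, add the residual (identity), renormalize, and multiply the layers together. A minimal sketch, independent of the study's code, assuming per-layer attention maps are already extracted:

```python
import torch

def attention_rollout(attentions):
    """Attention Rollout over a list of per-layer attention maps,
    each of shape (num_heads, num_tokens, num_tokens)."""
    num_tokens = attentions[0].size(-1)
    rollout = torch.eye(num_tokens)
    for attn in attentions:
        a = attn.mean(dim=0)                          # average attention heads
        a = a + torch.eye(num_tokens)                 # account for residual connections
        a = a / a.sum(dim=-1, keepdim=True)           # renormalize rows
        rollout = a @ rollout                         # accumulate across layers
    return rollout[0, 1:]                             # relevance of patch tokens to [CLS]
```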
Affiliation(s)
- Jens Rahnfeld
- Collaborative Research Institute Intelligent Oncology (CRIION), Freiburg, Germany
- University of Freiburg, Freiburg, Germany
| | - Mehdi Naouar
- Collaborative Research Institute Intelligent Oncology (CRIION), Freiburg, Germany
- University of Freiburg, Freiburg, Germany
| | - Gabriel Kalweit
- Collaborative Research Institute Intelligent Oncology (CRIION), Freiburg, Germany
- University of Freiburg, Freiburg, Germany
| | - Joschka Boedecker
- Collaborative Research Institute Intelligent Oncology (CRIION), Freiburg, Germany
- University of Freiburg, Freiburg, Germany
- BrainLinks-BrainTools, Freiburg, Germany
| | - Estelle Dubruc
- Department of Pathology, University Hospital and University of Lausanne, Lausanne, Switzerland
| | - Maria Kalweit
- Collaborative Research Institute Intelligent Oncology (CRIION), Freiburg, Germany
- University of Freiburg, Freiburg, Germany
|
30
|
Dafni MF, Shih M, Manoel AZ, Yousif MYE, Spathi S, Harshal C, Bhatt G, Chodnekar SY, Chune NS, Rasool W, Umar TP, Moustakas DC, Achkar R, Kumar H, Naz S, Acuña-Chavez LM, Evgenikos K, Gulraiz S, Ali ESM, Elaagib A, Uggh IHP. Empowering cancer prevention with AI: unlocking new frontiers in prediction, diagnosis, and intervention. Cancer Causes Control 2025; 36:353-367. [PMID: 39672997 DOI: 10.1007/s10552-024-01942-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2024] [Accepted: 11/18/2024] [Indexed: 12/15/2024]
Abstract
Artificial intelligence is rapidly changing our world at an exponential rate and its transformative power has extensively reached important sectors like healthcare. In the fight against cancer, AI proved to be a novel and powerful tool, offering new hope for prevention and early detection. In this review, we will comprehensively explore the medical applications of AI, including early cancer detection through pathological and imaging analysis, risk stratification, patient triage, and the development of personalized prevention approaches. However, despite the successful impact AI has contributed to, we will also discuss the myriad of challenges that we have faced so far toward optimal AI implementation. There are problems when it comes to the best way in which we can use AI systemically. Having the correct data that can be understood easily must remain one of the most significant concerns in all its uses including sharing information. Another challenge that exists is how to interpret AI models because they are too complicated for people to follow through examples used in their developments which may affect trust, especially among medical professionals. Other considerations like data privacy, algorithm bias, and equitable access to AI tools have also arisen. Finally, we will evaluate possible future directions for this promising field that highlight AI's capacity to transform preventative cancer care.
Affiliation(s)
- Marianna-Foteini Dafni
- School of Medicine, Laboratory of Forensic Medicine and Toxicology, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Mohamed Shih
- School of Medicine, Newgiza University, Giza, Egypt.
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece.
| | - Agnes Zanotto Manoel
- Faculty of Medicine, Federal University of Rio Grande, Rio Grande do Sul, Brazil
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Mohamed Yousif Elamin Yousif
- Faculty of Medicine, University of Khartoum, Khartoum, Sudan
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Stavroula Spathi
- Faculty of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Chorya Harshal
- Faculty of Medicine, Medical College Baroda, Vadodara, India
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Gaurang Bhatt
- All India Institute of Medical Sciences, Rishikesh, India
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Swarali Yatin Chodnekar
- Faculty of Medicine, Teaching University Geomedi LLC, Tbilisi, Georgia
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Nicholas Stam Chune
- Faculty of Medicine, University of Nairobi, Nairobi, Kenya
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Warda Rasool
- Faculty of Medicine, King Edward Medical University, Lahore, Pakistan
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Tungki Pratama Umar
- Division of Surgery and Interventional Science, Faculty of Medical Sciences, University College London, London, UK
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Dimitrios C Moustakas
- Faculty of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Robert Achkar
- Faculty of Medicine, Poznan University of Medical Sciences, Poznan, Poland
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Harendra Kumar
- Dow University of Health Sciences, Karachi, Pakistan
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Suhaila Naz
- Tbilisi State Medical University, Tbilisi, Georgia
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Luis M Acuña-Chavez
- Facultad de Medicina de la Universidad Nacional de Trujillo, Trujillo, Peru
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Konstantinos Evgenikos
- Faculty of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Shaina Gulraiz
- Royal Bournemouth Hospital (University Hospitals Dorset), Bournemouth, UK
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Eslam Salih Musa Ali
- University of Dongola Faculty of Medicine and Health Science, Dongola, Sudan
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Amna Elaagib
- Faculty of Medicine AlMughtaribeen University, Khartoum, Sudan
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
| | - Innocent H Peter Uggh
- Kilimanjaro Clinical Research Institute, Kilimanjaro, Tanzania
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
|
31
|
Alhusain FA. Harnessing artificial intelligence for infection control and prevention in hospitals: A comprehensive review of current applications, challenges, and future directions. Saudi Med J 2025; 46:329-334. [PMID: 40254319 PMCID: PMC12010500 DOI: 10.15537/smj.2025.46.4.20240878] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/22/2025] Open
Abstract
Hospital-acquired infections (HAIs) significantly burden global healthcare systems, exacerbated by antibiotic-resistant bacteria. Traditional infection control measures often lack consistency due to variable human compliance. This comprehensive review aims to explore the role of artificial intelligence (AI) in enhancing infection control and prevention in hospitals. A systematic literature search was conducted using databases such as PubMed, Scopus, and Web of Science up to October 2024, focusing on studies applying AI to infection control. The review synthesizes current applications of AI, including predictive analytics for early detection, automated surveillance systems, personalized medicine approaches, decision support systems, and patient engagement tools. Findings demonstrate that AI effectively predicts HAIs, optimizes antimicrobial use, and improves compliance with infection prevention protocols. However, challenges such as data quality issues, interoperability, ethical concerns, regulatory hurdles, and the need for substantial investment impede widespread adoption. Addressing these challenges is crucial to leverage AI's potential to enhance patient safety and improve overall healthcare quality.
Affiliation(s)
- Fahad A. Alhusain
- From the Department of Scientific Research Center, Prince Sultan Military Medical City, Riyadh, Kingdom of Saudi Arabia.
|
32
|
Jin H, Shen J, Cui L, Shi X, Li K, Zhu X. Dynamic graph based weakly supervised deep hashing for whole slide image classification and retrieval. Med Image Anal 2025; 101:103468. [PMID: 39879715 DOI: 10.1016/j.media.2025.103468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2024] [Revised: 01/01/2025] [Accepted: 01/09/2025] [Indexed: 01/31/2025]
Abstract
Recently, a multi-scale representation attention based deep multiple instance learning method has been proposed to directly extract patch-level image features from gigapixel whole slide images (WSIs), achieving promising performance on multiple popular WSI datasets. However, it still has two major limitations: (i) it does not consider the relations among patches, which may restrict model performance; and (ii) it cannot handle retrieval tasks, which are very important in clinical diagnosis. To overcome these limitations, in this paper we propose a novel end-to-end MIL-based deep hashing framework, composed of a multi-scale representation attention based deep network as the backbone, patch-based dynamic graphs, and hashing encoding layers, to simultaneously handle classification and retrieval tasks. Specifically, the multi-scale representation attention based deep network directly extracts patch-level features from WSIs while mining significant information at the cell, patch, and bag levels. Additionally, we design a novel patch-based dynamic graph construction method to learn the relations among patches within each bag. Moreover, the hashing encoding layers encode patch- and WSI-level features into binary codes for patch- and WSI-level image retrieval. Extensive experiments on multiple popular datasets demonstrate that the proposed framework outperforms recent state-of-the-art ones on both classification and retrieval tasks. All source codes are available at https://github.com/hcjin0816/DG_WSDH.
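The hashing encoding idea mentioned above is commonly implemented by relaxing binary codes to tanh outputs during training and taking their sign at retrieval time. The sketch below is a generic illustration under that assumption, not the paper's exact layer.

```python
import torch
import torch.nn as nn

class HashLayer(nn.Module):
    """Generic hashing encoding layer: tanh-relaxed codes for training,
    sign-binarized codes for Hamming-distance retrieval."""
    def __init__(self, in_dim=512, code_bits=64):
        super().__init__()
        self.fc = nn.Linear(in_dim, code_bits)

    def forward(self, features):
        return torch.tanh(self.fc(features))       # continuous codes in (-1, 1)

    @torch.no_grad()
    def binary_codes(self, features):
        return torch.sign(self.forward(features))  # {-1, +1} codes (zeros are rare in practice)
```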
Affiliation(s)
- Haochen Jin
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
| | - Junyi Shen
- Division of Liver Surgery, Department of General Surgery, West China Hospital, Sichuan University, Chengdu, 610044, China
| | - Lei Cui
- Department of Computer Science and Technology, Northwest University of China, 710075, China
| | - Xiaoshuang Shi
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China.
| | - Kang Li
- West China Medical Center, Sichuan University, Chengdu, 610041, China.
| | - Xiaofeng Zhu
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
|
33
|
Cheng C, Li B, Li J, Wang Y, Xiao H, Lian X, Chen L, Wang J, Wang H, Qin S, Yu L, Wu T, Peng S, Tan W, Ye Q, Chen W, Jiang X. Multi-stain deep learning prediction model of treatment response in lupus nephritis based on renal histopathology. Kidney Int 2025; 107:714-727. [PMID: 39733792 DOI: 10.1016/j.kint.2024.12.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2024] [Revised: 10/03/2024] [Accepted: 12/16/2024] [Indexed: 12/31/2024]
Abstract
The response of the kidney after induction treatment is one of the determinants of prognosis in lupus nephritis, but effective predictive tools are lacking. Here, we sought to apply deep learning approaches on kidney biopsies for treatment response prediction in lupus nephritis. Patients who received cyclophosphamide or mycophenolate mofetil as induction treatment were included, and the primary outcome was 12-month treatment response, complete response defined as 24-h urinary protein under 0.5 g with normal estimated glomerular filtration rate or within 10% of normal range. The model development cohort included 245 patients (880 digital slides), and the external test cohort had 71 patients (258 digital slides). Deep learning models were trained independently on hematoxylin and eosin-, periodic acid-Schiff-, periodic Schiff-methenamine silver- and Masson's trichrome-stained slides at multiple magnifications and integrated to predict the primary outcome of complete response to therapy at 12 months. Single-stain models showed area under the curves of 0.813, 0.841, 0.823, and 0.862, respectively. Further, integration of the four models into a multi-stain model achieved area under the curves of 0.901 and 0.840 on internal validation and external testing, respectively, which outperformed conventional clinicopathologic parameters including estimated glomerular filtration rate, chronicity index and reduction in proteinuria at three months. Decisive features uncovered by visualization for model prediction included tertiary lymphoid structures, glomerulosclerosis, interstitial fibrosis and tubular atrophy. Our study demonstrated the feasibility of utilizing deep learning on kidney pathology to predict treatment response for lupus patients. Further validation is required before the model could be implemented for risk stratification and to aid in making therapeutic decisions in clinical practice.
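The abstract does not specify how the four single-stain models were integrated; the simplest form of such late fusion is to average their predicted probabilities, sketched below with hypothetical values.

```python
import numpy as np

# Hypothetical per-patient complete-response probabilities from four single-stain models
p_he, p_pas, p_pasm, p_mt = 0.72, 0.64, 0.70, 0.58   # H&E, PAS, PASM, Masson's trichrome

# Minimal late fusion: average the single-stain probabilities (weights could also be learned)
p_multistain = float(np.mean([p_he, p_pas, p_pasm, p_mt]))
print(f"Fused probability of 12-month complete response: {p_multistain:.2f}")
```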
Affiliation(s)
- Cheng Cheng
- Department of Pediatrics, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Bin Li
- Clinical Trials Unit, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Jie Li
- Department of Pediatrics, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Yiqin Wang
- Department of Nephrology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China; National Health Commission Key Laboratory of Clinical Nephrology (Sun Yat-sen University) and Guangdong Provincial Key Laboratory of Nephrology, Guangzhou, Guangdong, China
| | - Han Xiao
- Department of Medical Ultrasonics, Institute of Diagnostic and Interventional Ultrasound, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Xingji Lian
- Department of Nephrology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China; National Health Commission Key Laboratory of Clinical Nephrology (Sun Yat-sen University) and Guangdong Provincial Key Laboratory of Nephrology, Guangzhou, Guangdong, China
| | - Lizhi Chen
- Department of Pediatrics, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Junxian Wang
- Department of Nephrology, Zhongshan City People's Hospital, Zhongshan, Guangdong, China
| | - Haiyan Wang
- Department of Pediatrics, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Shuguang Qin
- Department of Nephrology, Guangzhou First People's Hospital, Guangzhou, Guangdong, China
| | - Li Yu
- Department of Pediatrics, Guangzhou First People's Hospital, Guangzhou, Guangdong, China
| | - Tingbo Wu
- Department of Pediatrics, Zhongshan City People's Hospital, Zhongshan, Guangdong, China
| | - Sui Peng
- Clinical Trials Unit, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Institute of Precision Medicine, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China; Department of Gastroenterology and Hepatology, the First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China
| | - Weiping Tan
- Department of Pediatrics, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China.
| | - Qing Ye
- Department of Nephrology, Zhongshan City People's Hospital, Zhongshan, Guangdong, China.
| | - Wei Chen
- Department of Nephrology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, Guangdong, China; National Health Commission Key Laboratory of Clinical Nephrology (Sun Yat-sen University) and Guangdong Provincial Key Laboratory of Nephrology, Guangzhou, Guangdong, China.
| | - Xiaoyun Jiang
- Department of Pediatrics, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, Guangdong, China.
34
Shang A, Yu P, Li L, He G, Xu J. Tumor‑stroma ratio as a clinical prognostic factor in colorectal carcinoma: A meta‑analysis of 7,934 patients. Oncol Lett 2025; 29:190. [PMID: 40041409 PMCID: PMC11877013 DOI: 10.3892/ol.2025.14936] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2024] [Accepted: 01/29/2025] [Indexed: 03/06/2025] Open
Abstract
The tumor-stroma ratio (TSR) has been regarded as an important factor associated with tumor metastasis, based on the 'seed and soil' theory, which may have guiding significance for the selection of chemotherapy regimens. Therefore, a high TSR may be a new risk factor for tumor recurrence in patients with stage II colorectal cancer (CRC). The present study aimed to evaluate the prognostic value of TSR in CRC, especially for the computer-calculated TSR. A comprehensive literature retrieval was performed using the PubMed, Web of Science, Embase and Cochrane Library databases to identify relevant studies published up to December 13, 2023. Pooled hazard ratios (HRs) with 95% confidence intervals (CIs) were calculated to estimate the prognostic value of the TSR in CRC. A total of 21 studies published between 2007 and 2023 were included in the present meta-analysis. The combined analysis demonstrated that a high TSR was significantly associated with worse overall survival (OS; HR=1.84; 95% CI, 1.44-2.34; P<0.001), disease-free survival (DFS; HR=1.85; 95% CI, 1.27-2.68; P<0.001), cancer-specific survival (CSS; HR=1.97; 95% CI, 1.46-2.65; P<0.001) and recurrence-free survival (RFS; HR=1.55; 95% CI, 1.25-1.92; P<0.001) in patients with CRC. Moreover, an elevated computer-calculated TSR was also associated with poor OS (HR=1.89; 95% CI, 1.48-2.40; P<0.001) and DFS (HR=1.85; 95% CI, 1.27-2.68; P<0.001). However, a high TSR was not associated with poor OS in patients with stage I CRC (HR=1.01; 95% CI, 0.48-2.14; P=0.97). In conclusion, the results of the present meta-analysis indicate that a high TSR is associated with poor OS, DFS, CSS and RFS in patients with CRC, especially for those with stage II-III. In addition, TSR calculated by computer using whole-slide images may also be an effective prognostic marker for OS and DFS in patients with CRC.
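For readers unfamiliar with how pooled hazard ratios such as those above are obtained, the following sketch (assuming Python with NumPy and SciPy) shows a generic fixed-effect inverse-variance pooling of log hazard ratios reconstructed from study-level HRs and 95% CIs; the meta-analysis itself may have used a random-effects model, and the study values below are invented.

```python
# Minimal sketch of inverse-variance pooling of hazard ratios, the generic
# approach behind summary estimates such as a pooled OS HR with a 95% CI.
import numpy as np
from scipy import stats

def pool_hazard_ratios(hrs, ci_lows, ci_highs):
    log_hr = np.log(hrs)
    # back out standard errors from the 95% CIs: SE = (ln(upper) - ln(lower)) / (2 * 1.96)
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.959964)
    w = 1.0 / se**2                         # fixed-effect inverse-variance weights
    pooled = np.sum(w * log_hr) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    ci = np.exp(pooled + np.array([-1.959964, 1.959964]) * pooled_se)
    p = 2 * stats.norm.sf(abs(pooled / pooled_se))
    return np.exp(pooled), ci, p

hr, ci, p = pool_hazard_ratios(
    hrs=[1.6, 2.1, 1.8], ci_lows=[1.1, 1.4, 1.2], ci_highs=[2.3, 3.1, 2.7])
print(f"pooled HR={hr:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, P={p:.3g}")
```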
Affiliation(s)
- An Shang
- Department of General Surgery, The Fourth Hospital of Guangxi Medical University, Liuzhou, Guangxi 545007, P.R. China
| | - Pengcheng Yu
- Department of General Surgery, The Fourth Hospital of Guangxi Medical University, Liuzhou, Guangxi 545007, P.R. China
| | - Liping Li
- Department of Pneumology, The Fourth Hospital of Guangxi Medical University, Liuzhou, Guangxi 545007, P.R. China
| | - Ge He
- Department of General Surgery, The Fourth Hospital of Guangxi Medical University, Liuzhou, Guangxi 545007, P.R. China
| | - Junyi Xu
- Department of General Surgery, The Fourth Hospital of Guangxi Medical University, Liuzhou, Guangxi 545007, P.R. China
35
Ye Q, Law T, Klippel D, Albarracin C, Chen H, Contreras A, Ding Q, Huo L, Khazai L, Middleton L, Resetkova E, Sahin A, Sun H, Sweeney K, Symmans WF, Wu Y, Yoon E, Krishnamurthy S. Prospective and Retrospective Analysis of Whole-Slide Images of Sentinel and Targeted Lymph Node Frozen Sections in Breast Cancer. Mod Pathol 2025; 38:100708. [PMID: 39788205 DOI: 10.1016/j.modpat.2025.100708] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2024] [Revised: 12/02/2024] [Accepted: 12/19/2024] [Indexed: 01/12/2025]
Abstract
Different digital modalities are currently available for frozen section (FS) evaluation in surgical pathology practice. However, there are limited studies that demonstrate the potential of whole-slide imaging (WSI) as a robust digital pathology option for FS diagnosis. In the current study, we compared the diagnostic accuracy achieved with WSI to that achieved with light microscopy (LM) for evaluating FSs of axillary sentinel lymph nodes (SLNs) and clipped lymph nodes (LNs) from patients with breast cancer using 2 modalities. Initially, a retrospective analysis evaluated hematoxylin and eosin (H&E)-stained FSs of 109 SLNs using WSI followed by LM after a washout period ranging from 2 to 6 weeks. Subsequently, a prospective analysis assessed FSs of 132 SLNs using LM by the first pathologist, and then H&E-stained FSs were scanned and interpreted remotely in real time by a different pathologist. In the retrospective analysis, the diagnostic accuracy utilizing WSI ranged from 96% to 99% and exhibited similarity to those achieved with LM, ranging from 94% to 99%. Similarly, the prospective analysis also demonstrated comparable diagnostic accuracy between WSI (96.2%) and LM (97%). Pathologists in the retrospective study required an additional 0.8 to 5.4 minutes to render diagnoses using WSI compared with LM (P < .0001). In the prospective study conducted 2 years later, pathologists only took slightly longer to provide WSI FS diagnoses (3.95 minutes) compared with LM (3.51 minutes) (P > .05). In conclusion, our study indicated that WSI-based evaluation showed comparable diagnostic accuracy to LM for assessing LN FSs. Furthermore, the prospective study demonstrated the feasibility of real-time acquisition of high-quality WSIs for remote FS diagnosis of SLNs. These findings substantiate the promising potential of using WSIs of SLNs and clipped LNs in real-time FS evaluation of patients with breast cancer as a standard-of-care in surgical pathology practice.
Affiliation(s)
- Qiqi Ye
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Timothy Law
- Department of Pathology, Affiliated Pathologists Medical Group, Saddleback Medical Center, Laguna Hills, California
| | - Dianna Klippel
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Constance Albarracin
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Hui Chen
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Alejandro Contreras
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Qingqing Ding
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Lei Huo
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Laila Khazai
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Lavinia Middleton
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Erika Resetkova
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Aysegul Sahin
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Hongxia Sun
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Keith Sweeney
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - William Fraser Symmans
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Yun Wu
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
| | - Esther Yoon
- Department of Pathology, Cleveland Clinic Foundation, Weston, Florida
| | - Savitri Krishnamurthy
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas.
36
Ma Y, Yuan M, Shen A, Luo X, An B, Chen X, Wang M. SeLa-MIL: Developing an instance-level classifier via weakly-supervised self-training for whole slide image classification. Comput Methods Programs Biomed 2025; 261:108614. [PMID: 39913995 DOI: 10.1016/j.cmpb.2025.108614] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/25/2024] [Revised: 01/18/2025] [Accepted: 01/19/2025] [Indexed: 02/21/2025]
Abstract
BACKGROUND AND OBJECTIVE Pathology image classification is crucial in clinical cancer diagnosis and computer-aided diagnosis. Whole Slide Image (WSI) classification is often framed as a multiple instance learning (MIL) problem due to the high cost of detailed patch-level annotations. Existing MIL methods primarily focus on bag-level classification, often overlooking critical instance-level information, which results in suboptimal outcomes. This paper proposes a novel semi-supervised learning approach, SeLa-MIL, which leverages both labeled and unlabeled instances to improve instance and bag classification, particularly in hard positive instances near the decision boundary. METHODS SeLa-MIL reformulates the traditional MIL problem as a novel semi-supervised instance classification task to effectively utilize both labeled and unlabeled instances. To address the challenge where all labeled instances are negative, we introduce a weakly supervised self-training framework by solving a constrained optimization problem. This method employs global and local constraints on pseudo-labels derived from positive WSI information, enhancing the learning of hard positive instances and ensuring the quality of pseudo-labels. The approach can be integrated into end-to-end training pipelines to maximize the use of available instance-level information. RESULTS Comprehensive experiments on synthetic datasets, MIL benchmarks, and popular WSI datasets demonstrate that SeLa-MIL consistently outperforms existing methods in both instance and bag-level classification, with substantial improvements in recognizing hard positive instances. Visualization further highlights the method's effectiveness in pathology regions relevant to cancer diagnosis. CONCLUSION SeLa-MIL effectively addresses key challenges in MIL-based WSI classification by reformulating it as a semi-supervised problem, leveraging both weakly supervised learning and pseudo-labeling techniques. This approach improves classification accuracy and generalization across diverse datasets, making it valuable for pathology image analysis.
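A minimal sketch of the kind of constraint-guided pseudo-labeling described, assuming Python with NumPy: instances from negative slides are forced negative, while only the most confident instances in positive slides receive positive pseudo-labels. The top-k rule and all values are illustrative assumptions, not the constrained optimization solved by SeLa-MIL.

```python
# Minimal sketch of constraint-guided pseudo-labeling for MIL.
import numpy as np

def pseudo_label(instance_probs, bag_labels, top_k=8):
    """instance_probs: list of arrays (one per WSI); bag_labels: 0/1 per WSI."""
    pseudo = []
    for probs, y in zip(instance_probs, bag_labels):
        labels = np.full(len(probs), -1)          # -1 = leave unlabeled
        if y == 0:
            labels[:] = 0                         # global constraint: negative bag -> all instances negative
        else:
            top = np.argsort(probs)[-top_k:]      # local constraint: most confident instances
            labels[top] = 1                       # ... become positive pseudo-labels
        pseudo.append(labels)
    return pseudo

bags = [np.random.rand(50), np.random.rand(50)]   # toy instance probabilities for two slides
print([p[:5] for p in pseudo_label(bags, bag_labels=[0, 1])])
```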
Affiliation(s)
- Yingfan Ma
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Fudan University, Shanghai, 200032, China
| | - Mingzhi Yuan
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Fudan University, Shanghai, 200032, China
| | - Ao Shen
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Fudan University, Shanghai, 200032, China
| | - Xiaoyuan Luo
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Fudan University, Shanghai, 200032, China
| | - Bohan An
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Fudan University, Shanghai, 200032, China
| | - Xinrong Chen
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Fudan University, Shanghai, 200032, China; Academy for Engineering and Technology, Fudan University, Shanghai, 200032, China
| | - Manning Wang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, 200032, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Fudan University, Shanghai, 200032, China.
37
Karasayar AHD, Kulaç İ, Kapucuoğlu N. Advances in Breast Cancer Care: The Role of Artificial Intelligence and Digital Pathology in Precision Medicine. Eur J Breast Health 2025; 21:93-100. [PMID: 40028897 PMCID: PMC11934827 DOI: 10.4274/ejbh.galenos.2025.2024-12-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2024] [Accepted: 02/17/2025] [Indexed: 03/05/2025]
Abstract
Artificial intelligence (AI) and digital pathology are transforming breast cancer management by addressing the limitations inherent in traditional histopathological methods. The application of machine learning algorithms has enhanced the ability of AI systems to classify breast cancer subtypes, grade tumors, and quantify key biomarkers, thereby improving diagnostic accuracy and prognostic precision. Furthermore, AI-powered image analysis has demonstrated superiority in detecting lymph node metastases, contributing to more precise staging, treatment planning, and reduced evaluation time. The ability of AI to predict molecular markers, including human epidermal growth factor receptor 2 status, BRCA mutations and homologous recombination deficiency, offers substantial potential for the development of personalized treatment strategies. A collaborative approach between pathologists and AI systems is essential to fully harness the potential of this technology. Although AI provides automation and objective analysis, human expertise remains indispensable for the interpretation of results and clinical decision-making. This partnership is anticipated to transform breast cancer care by enhancing patient outcomes and optimizing treatment approaches.
Affiliation(s)
- Ayşe Hümeyra Dur Karasayar
- Graduate School of Health Sciences, Koç University Faculty of Medicine, İstanbul, Turkey
- Department of Pathology, Başakşehir Çam and Sakura Hospital, İstanbul, Turkey
| | - İbrahim Kulaç
- Graduate School of Health Sciences, Koç University Faculty of Medicine, İstanbul, Turkey
- Koç University & İş Bank Artificial Intelligence Center, Koç University, İstanbul, Turkey
- Research Center for Translational Medicine, Koç University, İstanbul, Turkey
- Department of Pathology, Koç University Faculty of Medicine, İstanbul, Turkey
| | - Nilgün Kapucuoğlu
- Department of Pathology, Koç University Faculty of Medicine, İstanbul, Turkey
38
Yang Z, Wei T, Liang Y, Yuan X, Gao R, Xia Y, Zhou J, Zhang Y, Yu Z. A foundation model for generalizable cancer diagnosis and survival prediction from histopathological images. Nat Commun 2025; 16:2366. [PMID: 40064883 PMCID: PMC11894166 DOI: 10.1038/s41467-025-57587-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2024] [Accepted: 02/21/2025] [Indexed: 03/14/2025] Open
Abstract
Computational pathology, utilizing whole slide images (WSIs) for pathological diagnosis, has advanced the development of intelligent healthcare. However, the scarcity of annotated data and histological differences hinder the general application of existing methods. Extensive histopathological data and the robustness of self-supervised models in small-scale data demonstrate promising prospects for developing foundation pathology models. Here we show BEPH (BEiT-based model Pre-training on Histopathological image), a foundation model that leverages self-supervised learning to learn meaningful representations from 11 million unlabeled histopathological images. These representations are then efficiently adapted to various tasks, including patch-level cancer diagnosis, WSI-level cancer classification, and survival prediction for multiple cancer subtypes. By leveraging the masked image modeling (MIM) pre-training approach, BEPH offers an efficient solution to enhance model performance, reduce the reliance on expert annotations, and facilitate the broader application of artificial intelligence in clinical settings. The pre-trained model is available at https://github.com/Zhcyoung/BEPH .
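As a toy illustration of masked image modeling in general, the sketch below (assuming Python with PyTorch) masks a fraction of patch embeddings and reconstructs the masked patches with an MSE loss; BEPH itself follows a BEiT-style recipe, so this is a simplified stand-in rather than the published pre-training objective.

```python
# Minimal masked-image-modeling sketch: mask patch embeddings, reconstruct pixels.
import torch
import torch.nn as nn

class TinyMIM(nn.Module):
    def __init__(self, patch_dim=768, embed_dim=256):
        super().__init__()
        self.embed = nn.Linear(patch_dim, embed_dim)
        self.mask_token = nn.Parameter(torch.zeros(embed_dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(embed_dim, nhead=4, batch_first=True), num_layers=2)
        self.decoder = nn.Linear(embed_dim, patch_dim)

    def forward(self, patches, mask):
        # patches: (B, N, patch_dim); mask: (B, N) boolean, True = masked position
        x = self.embed(patches)
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        recon = self.decoder(self.encoder(x))
        return ((recon - patches) ** 2)[mask].mean()   # loss only on masked patches

model = TinyMIM()
patches = torch.randn(2, 196, 768)        # e.g., a 14x14 grid of flattened 16x16x3 patches
mask = torch.rand(2, 196) < 0.4           # mask roughly 40% of patches
print(model(patches, mask).item())
```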
Affiliation(s)
- Zhaochang Yang
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
| | - Ting Wei
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
| | - Ying Liang
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
| | - Xin Yuan
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- SJTU-Yale Joint Center for Biostatistics and Data Science Organization, Shanghai Jiao Tong University, Shanghai, China
- Center for Biomedical Data Science, Translational Science Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- National Center for Translational Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - RuiTian Gao
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
| | - Yujia Xia
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
| | - Jie Zhou
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Yue Zhang
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China.
- SJTU-Yale Joint Center for Biostatistics and Data Science Organization, Shanghai Jiao Tong University, Shanghai, China.
| | - Zhangsheng Yu
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China.
- SJTU-Yale Joint Center for Biostatistics and Data Science Organization, Shanghai Jiao Tong University, Shanghai, China.
- Center for Biomedical Data Science, Translational Science Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China.
- School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China.
- Clinical Research Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China.
39
Tarakçı EA, Çeliker M, Birinci M, Yemiş T, Gül O, Oğuz EF, Solak M, Kaba E, Çeliker FB, Özergin Coşkun Z, Alkan A, Erdivanlı ÖÇ. Novel Preprocessing-Based Sequence for Comparative MR Cervical Lymph Node Segmentation. J Clin Med 2025; 14:1802. [PMID: 40142614 PMCID: PMC11943128 DOI: 10.3390/jcm14061802] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2025] [Revised: 02/20/2025] [Accepted: 02/24/2025] [Indexed: 03/28/2025] Open
Abstract
Background and Objective: This study aims to utilize deep learning methods for the automatic segmentation of cervical lymph nodes in magnetic resonance images (MRIs), enhancing the speed and accuracy of diagnosing pathological masses in the neck and improving patient treatment processes. Materials and Methods: This study included 1346 MRI slices from 64 patients undergoing cervical lymph node dissection, biopsy, and preoperative contrast-enhanced neck MRI. A preprocessing model was used to crop and highlight lymph nodes, along with a method for automatic re-cropping. Two datasets were created from the cropped images (one with augmentation and one without), divided into 90% training and 10% validation sets. After preprocessing, the cropped images were automatically segmented with a DeepLabv3+ model using a ResNet-50 encoder. Results: According to the results of the validation set, the mean IoU values for the DWI, T2, T1, T1+C, and ADC sequences in the dataset without augmentation created for cervical lymph node segmentation were 0.89, 0.88, 0.81, 0.85, and 0.80, respectively. In the augmented dataset, the average IoU values for all sequences were 0.91, 0.89, 0.85, 0.88, and 0.84. The DWI sequence showed the highest performance in the datasets with and without augmentation. Conclusions: Our preprocessing-based deep learning architectures successfully segmented cervical lymph nodes with high accuracy. This study is the first to explore automatic segmentation of the cervical lymph nodes using comprehensive neck MRI sequences. The proposed model can streamline the detection process, reducing the need for radiology expertise. Additionally, it offers a promising alternative to manual segmentation in radiotherapy, potentially enhancing treatment effectiveness.
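A minimal sketch of the segmentation setup and metric described, assuming Python with PyTorch and torchvision: torchvision ships DeepLabv3 (not the "+" variant) with a ResNet-50 backbone, which stands in here, and the IoU helper mirrors the per-sequence metric reported; the input shape and class count are assumptions.

```python
# Minimal sketch: DeepLabv3 with a ResNet-50 backbone plus an IoU metric.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(num_classes=2)           # background vs. lymph node
model.eval()

x = torch.randn(1, 3, 256, 256)                     # one preprocessed MRI slice as 3-channel input
with torch.no_grad():
    logits = model(x)["out"]                        # (1, 2, 256, 256)
pred = logits.argmax(dim=1)                         # per-pixel class map

def iou(pred_mask, gt_mask, eps=1e-7):
    inter = torch.logical_and(pred_mask, gt_mask).sum().float()
    union = torch.logical_or(pred_mask, gt_mask).sum().float()
    return ((inter + eps) / (union + eps)).item()

gt = torch.zeros_like(pred, dtype=torch.bool)       # toy ground-truth mask
gt[:, 100:180, 100:180] = True
print("IoU:", iou(pred.bool(), gt))
```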
Affiliation(s)
- Elif Ayten Tarakçı
- Department of Otorhinolaryngology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey; (E.A.T.); (M.B.); (T.Y.); (Z.Ö.C.); (Ö.Ç.E.)
| | - Metin Çeliker
- Department of Otorhinolaryngology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey; (E.A.T.); (M.B.); (T.Y.); (Z.Ö.C.); (Ö.Ç.E.)
| | - Mehmet Birinci
- Department of Otorhinolaryngology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey; (E.A.T.); (M.B.); (T.Y.); (Z.Ö.C.); (Ö.Ç.E.)
| | - Tuğba Yemiş
- Department of Otorhinolaryngology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey; (E.A.T.); (M.B.); (T.Y.); (Z.Ö.C.); (Ö.Ç.E.)
| | - Oğuz Gül
- Department of Otorhinolaryngology, Akçaabat Haçkalı Baba State Hospital, Trabzon 61310, Turkey;
| | - Enes Faruk Oğuz
- Department of Biomedical Device Technology, Hassa Vocational School, Hatay Mustafa Kemal University, Hatay 31000, Turkey;
| | - Merve Solak
- Department of Radiology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey; (M.S.); (E.K.); (F.B.Ç.)
| | - Esat Kaba
- Department of Radiology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey; (M.S.); (E.K.); (F.B.Ç.)
| | - Fatma Beyazal Çeliker
- Department of Radiology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey; (M.S.); (E.K.); (F.B.Ç.)
| | - Zerrin Özergin Coşkun
- Department of Otorhinolaryngology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey; (E.A.T.); (M.B.); (T.Y.); (Z.Ö.C.); (Ö.Ç.E.)
| | - Ahmet Alkan
- Department of Electrical and Electronics Engineering, Kahramanmaraş Sütçü İmam University, Kahramanmaraş 46000, Turkey;
| | - Özlem Çelebi Erdivanlı
- Department of Otorhinolaryngology, Medicine Faculty, Recep Tayyip Erdoğan University, Rize 53000, Turkey; (E.A.T.); (M.B.); (T.Y.); (Z.Ö.C.); (Ö.Ç.E.)
40
Cho HS, Hwang EJ, Yi J, Choi B, Park CM. Artificial intelligence system for identification of overlooked lung metastasis in abdominopelvic computed tomography scans of patients with malignancy. Diagn Interv Radiol 2025; 31:102-110. [PMID: 39248126 PMCID: PMC11880870 DOI: 10.4274/dir.2024.242835] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2024] [Accepted: 08/01/2024] [Indexed: 09/10/2024]
Abstract
PURPOSE This study aimed to evaluate whether an artificial intelligence (AI) system can identify basal lung metastatic nodules examined using abdominopelvic computed tomography (CT) that were initially overlooked by radiologists. METHODS We retrospectively included abdominopelvic CT images with the following inclusion criteria: a) CT images from patients with solid organ malignancies between March 1 and March 31, 2019, in a single institution; and b) abdominal CT images interpreted as negative for basal lung metastases. Reference standards for diagnosis of lung metastases were confirmed by reviewing medical records and subsequent CT images. An AI system that could automatically detect lung nodules on CT images was applied retrospectively. A radiologist reviewed the AI detection results to classify them as lesions with the possibility of metastasis or clearly benign. The performance of the initial AI results and the radiologist's review of the AI results were evaluated using patient-level and lesion-level sensitivities, false-positive rates, and the number of false-positive lesions per patient. RESULTS A total of 878 patients (580 men; mean age, 63 years) were included, with overlooked basal lung metastases confirmed in 13 patients (1.5%). The AI exhibited an area under the receiver operating characteristic curve value of 0.911 for the identification of overlooked basal lung metastases. Patient- and lesion-level sensitivities of the AI system ranged from 69.2% to 92.3% and 46.2% to 92.3%, respectively. After a radiologist reviewed the AI results, the sensitivity remained unchanged. The false-positive rate and number of false-positive lesions per patient ranged from 5.8% to 27.6% and 0.1% to 0.5%, respectively. Radiologist reviews significantly reduced the false-positive rate (2.4%-12.6%; all P values < 0.001) and the number of false-positive lesions detected per patient (0.03-0.20, respectively). CONCLUSION The AI system could accurately identify basal lung metastases detected in abdominopelvic CT images that were overlooked by radiologists, suggesting its potential as a tool for radiologist interpretation. CLINICAL SIGNIFICANCE The AI system can identify missed basal lung lesions in abdominopelvic CT scans in patients with malignancy, providing feedback to radiologists, which can reduce the risk of missing basal lung metastasis.
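To make the reported patient-level metrics concrete, the short sketch below (assuming Python with NumPy) computes sensitivity among patients with overlooked metastases, the patient-level false-positive rate, and false-positive lesions per patient from toy predictions; none of the values correspond to the study data.

```python
# Minimal sketch of patient-level sensitivity, false-positive rate, and
# false-positive lesions per patient for an AI nodule detector.
import numpy as np

def patient_level_metrics(has_metastasis, ai_flagged, n_false_lesions):
    has_metastasis = np.asarray(has_metastasis, dtype=bool)
    ai_flagged = np.asarray(ai_flagged, dtype=bool)
    sensitivity = ai_flagged[has_metastasis].mean()            # flagged among true positives
    false_positive_rate = ai_flagged[~has_metastasis].mean()   # flagged among negatives
    fp_lesions_per_patient = np.asarray(n_false_lesions).sum() / len(has_metastasis)
    return sensitivity, false_positive_rate, fp_lesions_per_patient

sens, fpr, fp_per_pt = patient_level_metrics(
    has_metastasis=[1, 1, 0, 0, 0, 0],
    ai_flagged=[1, 0, 1, 0, 0, 0],
    n_false_lesions=[0, 0, 2, 0, 1, 0])
print(f"sensitivity={sens:.2f}, FP rate={fpr:.2f}, FP lesions/patient={fp_per_pt:.2f}")
```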
Affiliation(s)
- Hye Soo Cho
- Seoul National University Hospital Seoul National University College of Medicine, Department of Radiology, Seoul, Republic of Korea
| | - Eui Jin Hwang
- Seoul National University Hospital Seoul National University College of Medicine, Department of Radiology, Seoul, Republic of Korea
- Seoul National University College of Medicine Department of Radiology, Seoul, Republic of Korea
| | - Jaeyoun Yi
- Coreline Soft Inc. Seoul, Republic of Korea
| | | | - Chang Min Park
- Seoul National University Hospital Seoul National University College of Medicine, Department of Radiology, Seoul, Republic of Korea
- Seoul National University College of Medicine Department of Radiology, Seoul, Republic of Korea
41
Abbott LP, Saikia A, Anthonappa RP. Artificial intelligence platforms in dental caries detection: A systematic review and meta-analysis. J Evid Based Dent Pract 2025; 25:102077. [PMID: 39947783 DOI: 10.1016/j.jebdp.2024.102077] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2024] [Revised: 11/25/2024] [Accepted: 11/29/2024] [Indexed: 05/09/2025]
Abstract
OBJECTIVES To assess Artificial Intelligence (AI) platforms, machine learning methodologies and associated accuracies used in detecting dental caries from clinical images and dental radiographs. METHODS A systematic search of 8 distinct electronic databases: Scopus, Web of Science, MEDLINE, Educational Resources Information Centre, Institute of Electrical and Electronics Engineers Explore, Science Direct, Directory of Open Access Journals and JSTOR, was conducted from January 2000 to March 2024. AI platforms, machine learning methodologies and associated accuracies of studies using AI for dental caries detection were extracted along with essential study characteristics. The quality of included studies was assessed using QUADAS-2 and the CLAIM checklist. Meta-analysis was performed to obtain a quantitative estimate of AI accuracy. RESULTS Of the 2538 studies identified, 45 met the inclusion criteria and underwent qualitative synthesis. Of the 45 included studies, 33 used dental radiographs, and 12 used clinical images as datasets. A total of 21 different AI platforms were reported. The accuracy ranged from 41.5% to 98.6% across reported AI platforms. A quantitative meta-analysis across 7 studies reported a mean sensitivity of 76% [95% CI (65% - 85%)] and specificity of 91% [(95% CI (86% - 95%)]. The area under the curve (AUC) was 92% [95% CI (89% - 94%)], with high heterogeneity across included studies. CONCLUSION Significant variability exists in AI performance for detecting dental caries across different AI platforms. Meta-analysis demonstrates that AI has superior sensitivity and equal specificity of detecting dental caries from clinical images as compared to bitewing radiography. Although AI is promising for dental caries detection, further refinement is necessary to achieve consistent and reliable performance across varying imaging modalities.
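As a simple illustration of how study-level sensitivities and specificities can be pooled, the sketch below (assuming Python with NumPy) performs inverse-variance pooling on the logit scale; the review may have used a different (for example bivariate) model, and the per-study counts are invented.

```python
# Minimal sketch of pooling per-study proportions (sensitivity or specificity)
# on the logit scale with inverse-variance weights.
import numpy as np

def pool_proportion(events, totals):
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    logit = np.log(p / (1 - p))
    var = 1 / events + 1 / (totals - events)     # variance of a logit-transformed proportion
    w = 1 / var
    pooled_logit = (w * logit).sum() / w.sum()
    return 1 / (1 + np.exp(-pooled_logit))       # back-transform to a proportion

sens = pool_proportion(events=[40, 55, 30], totals=[50, 70, 45])     # TP / (TP + FN) per study
spec = pool_proportion(events=[90, 120, 80], totals=[100, 130, 85])  # TN / (TN + FP) per study
print(f"pooled sensitivity {sens:.1%}, pooled specificity {spec:.1%}")
```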
Affiliation(s)
- Lyndon P Abbott
- Paediatric Dentistry, UWA Dental School, The University of Western Australia, Perth, Australia.
| | - Ankita Saikia
- Paediatric Dentistry, UWA Dental School, The University of Western Australia, Perth, Australia
| | - Robert P Anthonappa
- Professor Paediatric Dentistry, UWA Dental School, The University of Western Australia, Perth, Australia
42
Hutchinson JC, Picarsic J, McGenity C, Treanor D, Williams B, Sebire NJ. Whole Slide Imaging, Artificial Intelligence, and Machine Learning in Pediatric and Perinatal Pathology: Current Status and Future Directions. Pediatr Dev Pathol 2025; 28:91-98. [PMID: 39552500 DOI: 10.1177/10935266241299073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/19/2024]
Abstract
The integration of artificial intelligence (AI) into healthcare is becoming increasingly mainstream. Leveraging digital technologies, such as AI and deep learning, impacts researchers, clinicians, and industry due to promising performance and clinical potential. Digital pathology is now a proven technology, enabling generation of high-resolution digital images from glass slides (whole slide images; WSI). WSIs facilitate AI-based image analysis to aid pathologists in diagnostic tasks, improve workflow efficiency, and address workforce shortages. Example applications include tumor segmentation, disease classification, detection, quantitation and grading, rare object identification, and outcome prediction. While advancements have occurred, integration of WSI-AI into clinical laboratories faces challenges, including concerns regarding evidence quality, regulatory adaptations, clinical evaluation, and safety considerations. In pediatric and developmental histopathology, adoption of AI could improve diagnostic efficiency, automate routine tasks, and address specific diagnostic challenges unique to the specialty, such as standardizing placental pathology and developmental autopsy findings, as well as mitigating staffing shortages in the subspeciality. Additionally, AI-based tools have potential to mitigate medicolegal implications by enhancing reproducibility and objectivity in diagnostic evaluations. An overview of recent developments and challenges in applying AI to pediatric and developmental pathology, focusing on machine learning methods applied to WSIs of pediatric pathology specimens, is presented.
Affiliation(s)
| | - Jennifer Picarsic
- Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center, Pittsburgh, PA, USA
43
Chang CP, Hsu CY, Wang HS, Feng PC, Liang WY. Detection of metastatic breast carcinoma in sentinel lymph node frozen sections using an artificial intelligence-assisted system. Pathol Res Pract 2025; 267:155836. [PMID: 39946987 DOI: 10.1016/j.prp.2025.155836] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/09/2024] [Revised: 07/15/2024] [Accepted: 02/09/2025] [Indexed: 03/01/2025]
Abstract
We developed an automatic method based on a convolutional neural network (CNN) that identifies metastatic lesions in whole slide images (WSI) of intraoperative frozen sections from sentinel lymph nodes in breast cancer. A total of 954 sentinel lymph node frozen sections, encompassing all types of breast cancer, were collected and examined at our institution between January 1, 2021, and September 27, 2022. Seventy-two cases from a total of 954 cases, including 50 macrometastases, 16 micrometastases, and 6 negatives, were selected and annotated for training a model, which was a self-developed platform (EasyPath) built using R 4.1.3 accompanied by Python 3.7 as the reticulate package. Another 105 metastasis-positive and 80 metastasis-negative cases from the remaining 882 cases were collected to validate and test the algorithm. Our algorithm successfully identified 103 cases (98 %) of metastases, including 85 cases of macrometastases and 18 cases of micrometastasis, with the inference time averaging 87.3 seconds per case. The algorithm correctly identified all of the macrometastases and 90 % of the micrometastases. The sensitivity for detecting micrometastases significantly outperformed that of the pathologists (p = 0.014, McNemar's test). Furthermore, we provide a workflow that deploys our algorithm into the daily practice of assessing intraoperative frozen sections. Our algorithm provides a robust backup for detecting metastases, particularly for high sensitivity for micrometastases, which will minimize errors in the pathological assessment of intraoperative frozen section of sentinel lymph nodes.
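The McNemar comparison cited for the micrometastasis sensitivity difference can be reproduced in outline as follows (assuming Python with statsmodels); the paired counts in the 2x2 table are hypothetical stand-ins, not the study's data.

```python
# Minimal sketch of a McNemar test on paired detections (algorithm vs. pathologist
# on the same metastasis-positive cases).
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired outcomes on positive cases (hypothetical counts):
#                  pathologist detected   pathologist missed
# algorithm detected        85                    18
# algorithm missed           2                     0
table = [[85, 18],
         [2,  0]]
result = mcnemar(table, exact=True)   # exact binomial test on the discordant pairs
print(f"McNemar p-value: {result.pvalue:.4f}")
```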
Affiliation(s)
- Chia-Ping Chang
- Department of Pathology and Laboratory Medicine, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
| | - Chih-Yi Hsu
- Department of Pathology and Laboratory Medicine, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; National Yang Ming Chiao Tung University School of Medicine, Taipei City 112, Taiwan, ROC
| | - Hsiang Sheng Wang
- Department of Pathology, Chang Gung Memorial Hospital at Linkou, Taoyuan 33305, Taiwan, ROC
| | - Peng-Chuna Feng
- Department of Pathology and Laboratory Medicine, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
| | - Wen-Yih Liang
- Department of Pathology and Laboratory Medicine, Taipei Veterans General Hospital, Taipei, Taiwan, ROC; National Yang Ming Chiao Tung University School of Medicine, Taipei City 112, Taiwan, ROC.
44
Duong D, Solomon BD. Artificial intelligence in clinical genetics. Eur J Hum Genet 2025; 33:281-288. [PMID: 39806188 PMCID: PMC11894121 DOI: 10.1038/s41431-024-01782-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2024] [Accepted: 12/19/2024] [Indexed: 01/16/2025] Open
Abstract
Artificial intelligence (AI) has been growing more powerful and accessible, and will increasingly impact many areas, including virtually all aspects of medicine and biomedical research. This review focuses on previous, current, and especially emerging applications of AI in clinical genetics. Topics covered include a brief explanation of different general categories of AI, including machine learning, deep learning, and generative AI. After introductory explanations and examples, the review discusses AI in clinical genetics in three main categories: clinical diagnostics; management and therapeutics; clinical support. The review concludes with short, medium, and long-term predictions about the ways that AI may affect the field of clinical genetics. Overall, while the precise speed at which AI will continue to change clinical genetics is unclear, as are the overall ramifications for patients, families, clinicians, researchers, and others, it is likely that AI will result in dramatic evolution in clinical genetics. It will be important for all those involved in clinical genetics to prepare accordingly in order to minimize the risks and maximize benefits related to the use of AI in the field.
Affiliation(s)
- Dat Duong
- Medical Genetics Branch, National Human Genome Research Institute, National Institutes of Health, Bethesda, MD, USA
| | - Benjamin D Solomon
- Medical Genetics Branch, National Human Genome Research Institute, National Institutes of Health, Bethesda, MD, USA.
45
Wang X, Song J, Qiu Q, Su Y, Wang L, Cao X. A Stacked Multimodality Model Based on Functional MRI Features and Deep Learning Radiomics for Predicting the Early Response to Radiotherapy in Nasopharyngeal Carcinoma. Acad Radiol 2025; 32:1631-1644. [PMID: 39496536 DOI: 10.1016/j.acra.2024.10.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2024] [Revised: 10/09/2024] [Accepted: 10/12/2024] [Indexed: 11/06/2024]
Abstract
BACKGROUND This study aimed to construct and assess a comprehensive model that integrates MRI-derived deep learning radiomics, functional imaging (fMRI), and clinical indicators to predict early efficacy of radiotherapy in nasopharyngeal carcinoma (NPC). METHODS This retrospective study recruited NPC patients with radiotherapy from two Chinese hospitals between October 2018 and July 2022, divided into a training set (hospital I, 194 cases), an internal validation set (hospital I, 82 cases), and an external validation set (hospital II, 40 cases). We extracted 3404 radiomic features and 2048 deep learning features from multi-sequence MRI includes T1WI, CE-T1WI, T2WI and T2WI/FS. Additionally, both the Apparent diffusion coefficient (ADC), its maximum (ADCmax) and Tumor blood flow (TBF), its maximum (TBFmax) were obtained by Diffusion-weighted imaging (DWI) and Arterial spin labeling (ASL) respectively. We used four classifiers (LR, XGBoost, SVM and KNN) and stacked algorithm as model construction methods. The area under the receiver operating characteristic curve (AUC) and decision curve analysis was used to assess models. RESULTS The manual radiomics model based on XGBoost and the deep learning model based on KNN (the AUCs in the training set: 0.909, 0.823, respectively) showed better predictive efficacy than other machine learning algorithms. The stacked model that integrated MRI-based deep learning radiomics, fMRI, and hematological indicators, has the strongest efficacy prediction ability of AUC in the training set [0.984 (95%CI: 0.972-0.996)], the internal validation set [0.936 (95%CI: 0.885-0.987)], and the external validation set [0.959 (95%CI: 0.901-1.000)]. CONCLUSION Our research has developed a clinical-radiomics integrated model based on MRI which can predict early radiotherapy response in NPC and provide guidance for personalized treatment.
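A minimal sketch of a stacked ensemble in the spirit described, assuming Python with scikit-learn: logistic regression, SVM and KNN base learners are combined by a logistic-regression meta-learner, with gradient boosting standing in for XGBoost so the example needs only one library; the feature matrix and labels are random placeholders rather than radiomic, deep learning or fMRI features.

```python
# Minimal sketch of a stacked classifier over placeholder features.
import numpy as np
from sklearn.ensemble import StackingClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.random.rand(120, 30)           # placeholder feature matrix (e.g., selected radiomic features)
y = np.random.randint(0, 2, 120)      # placeholder early-response labels

stack = StackingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("gb", GradientBoostingClassifier()),
        ("svm", SVC(probability=True)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",     # meta-learner sees base-model probabilities
)
print("CV AUC:", cross_val_score(stack, X, y, cv=5, scoring="roc_auc").mean())
```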
Affiliation(s)
- Xiaowen Wang
- Shandong University Cancer Center, Jinan, Shandong, China (X.W.); Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China (X.W., X.C.)
| | - Jian Song
- Medical Imageology, Shandong Medical College, Jinan, China (J.S.)
| | - Qingtao Qiu
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China (Q.Q., Y.S., L.W.)
| | - Ya Su
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China (Q.Q., Y.S., L.W.)
| | - Lizhen Wang
- Department of Radiation Oncology Physics and Technology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China (Q.Q., Y.S., L.W.)
| | - Xiujuan Cao
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China (X.W., X.C.).
46
Chang TG, Park S, Schäffer AA, Jiang P, Ruppin E. Hallmarks of artificial intelligence contributions to precision oncology. Nat Cancer 2025; 6:417-431. [PMID: 40055572 PMCID: PMC11957836 DOI: 10.1038/s43018-025-00917-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/04/2024] [Accepted: 01/21/2025] [Indexed: 03/29/2025]
Abstract
The integration of artificial intelligence (AI) into oncology promises to revolutionize cancer care. In this Review, we discuss ten AI hallmarks in precision oncology, organized into three groups: (1) cancer prevention and diagnosis, encompassing cancer screening, detection and profiling; (2) optimizing current treatments, including patient outcome prediction, treatment planning and monitoring, clinical trial design and matching, and developing response biomarkers; and (3) advancing new treatments by identifying treatment combinations, discovering cancer vulnerabilities and designing drugs. We also survey AI applications in interventional clinical trials and address key challenges to broader clinical adoption of AI: data quality and quantity, model accuracy, clinical relevance and patient benefit, proposing actionable solutions for each.
Affiliation(s)
- Tian-Gen Chang
- Cancer Data Science Laboratory, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA.
| | - Seongyong Park
- Cancer Data Science Laboratory, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
| | - Alejandro A Schäffer
- Cancer Data Science Laboratory, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
| | - Peng Jiang
- Cancer Data Science Laboratory, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
| | - Eytan Ruppin
- Cancer Data Science Laboratory, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA.
47
Flach RN, van Dooijeweert C, Nguyen TQ, Lynch M, Jonges TN, Meijer RP, Suelmann BBM, Willemse PPM, Stathonikos N, van Diest PJ. Prospective Clinical Implementation of Paige Prostate Detect Artificial Intelligence Assistance in the Detection of Prostate Cancer in Prostate Biopsies: CONFIDENT P Trial Implementation of Artificial Intelligence Assistance in Prostate Cancer Detection. JCO Clin Cancer Inform 2025; 9:e2400193. [PMID: 40036728 DOI: 10.1200/cci-24-00193] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2024] [Revised: 10/25/2024] [Accepted: 01/22/2025] [Indexed: 03/06/2025] Open
Abstract
PURPOSE Pathologists diagnose prostate cancer (PCa) on hematoxylin and eosin (HE)-stained sections of prostate needle biopsies (PBx). Some laboratories use costly immunohistochemistry (IHC) for all cases to optimize workflow, often exceeding reimbursement for the full specimen. Despite the rise in digital pathology and artificial intelligence (AI) algorithms, clinical implementation studies are scarce. This prospective clinical trial evaluated whether an AI-assisted workflow for detecting PCa in PBx reduces IHC use while maintaining diagnostic safety standards. METHODS Patients suspected of PCa were allocated biweekly to either a control or intervention arm. In the control arm, pathologists assessed whole-slide images (WSI) of PBx using HE and IHC stainings. In the intervention arm, pathologists used the Paige Prostate Detect AI algorithm on HE slides, requesting IHC only as needed. IHC was requested for all morphologically negative slides in the AI arm. The main outcome was the relative risk (RR) of IHC use per detected PCa case at both patient and WSI levels. RESULTS Overall, 143 of 237 (60.3%) slides of 64 of 82 patients contained PCa (78.0%). AI assistance significantly reduced the risk of IHC use per detected PCa case at both the patient level (RR, 0.55; 95% CI, 0.39 to 0.72) and slide level (RR, 0.41; 95% CI, 0.29 to 0.52). Cost reductions on IHC were €1,700 for the trial, at €50 per IHC stain. AI-assisted pathologists reported higher confidence in their diagnoses (80% v 56% confident or high confidence). The median assessment time per HE slide showed no significant difference between the AI-assisted and control arms (139 seconds v 112 seconds; P = .2). CONCLUSION This study demonstrates that AI assistance for PCa detection in PBx significantly reduces IHC costs while maintaining diagnostic safety standards, supporting the business case for AI implementation in PCa detection.
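For readers who want to see how a relative risk of IHC use with a 95% CI is computed, the sketch below (assuming Python with NumPy) uses the standard log-scale Wald interval; the counts are invented and the trial's exact analysis may differ.

```python
# Minimal sketch of a relative-risk estimate with a Wald 95% CI on the log scale.
import numpy as np

def relative_risk(events_ai, n_ai, events_ctrl, n_ctrl, z=1.959964):
    p1, p0 = events_ai / n_ai, events_ctrl / n_ctrl
    rr = p1 / p0
    # standard error of ln(RR) for two independent proportions
    se_log = np.sqrt(1/events_ai - 1/n_ai + 1/events_ctrl - 1/n_ctrl)
    lo, hi = np.exp(np.log(rr) + np.array([-z, z]) * se_log)
    return rr, lo, hi

# hypothetical counts: patients needing IHC per detected PCa case in each arm
rr, lo, hi = relative_risk(events_ai=18, n_ai=32, events_ctrl=32, n_ctrl=32)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```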
Affiliation(s)
- Rachel N Flach
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Department of Oncological Urology, University Medical Center Utrecht, Utrecht, the Netherlands
| | | | - Tri Q Nguyen
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Mitchell Lynch
- Department of Pathology, Gelre Hospital, Apeldoorn, the Netherlands
| | - Trudy N Jonges
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Richard P Meijer
- Department of Oncological Urology, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Britt B M Suelmann
- Department of Medical Oncology, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Peter-Paul M Willemse
- Department of Oncological Urology, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Nikolas Stathonikos
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
| | - Paul J van Diest
- Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
48
Ariful Islam M, Mridha MF, Safran M, Alfarhood S, Mohsin Kabir M. Revolutionizing Brain Tumor Detection Using Explainable AI in MRI Images. NMR Biomed 2025; 38:e70001. [PMID: 39948696 DOI: 10.1002/nbm.70001] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/22/2024] [Revised: 01/04/2025] [Accepted: 01/21/2025] [Indexed: 05/09/2025]
Abstract
Due to the complex structure of the brain, variations in tumor shapes and sizes, and the resemblance between tumor and healthy tissues, the reliable and efficient identification of brain tumors through magnetic resonance imaging (MRI) presents a persistent challenge. Given that manual identification of tumors is often time-consuming and prone to errors, there is a clear need for advanced automated procedures to enhance detection accuracy and efficiency. Our study addresses the difficulty by creating an improved convolutional neural network (CNN) framework derived from DenseNet121 to augment the accuracy of brain tumor detection. The proposed model was comprehensively evaluated against 12 baseline CNN models and 5 state-of-the-art architectures, namely Vision Transformer (ViT), ConvNeXt, MobileNetV3, FastViT, and InternImage. The proposed model achieved exceptional accuracy rates of 98.4% and 99.3% on two separate datasets, outperforming all 17 models evaluated. Our improved model was integrated using Explainable AI (XAI) techniques, particularly Grad-CAM++, facilitating accurate diagnosis and localization of complex tumor instances, including small metastatic lesions and nonenhancing low-grade gliomas. The XAI framework distinctly highlights essential areas signifying tumor presence, hence enhancing the model's accuracy and interpretability. The results highlight the potential of our method as a reliable diagnostic instrument for healthcare practitioners' ability to comprehend and confirm artificial intelligence (AI)-driven predictions but also bring transparency to the model's decision-making process, ultimately improving patient outcomes. This advancement signifies a significant progression in the use of AI in neuro-oncology, enhancing diagnostic interpretability and precision.
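As a rough sketch of the kind of DenseNet121-derived classifier described (assuming Python with PyTorch and torchvision), the snippet below swaps the backbone's classifier head for a two-class output; the dropout rate, input size and training details are assumptions, not the paper's architecture, and Grad-CAM++ visualization would be layered on top of such a model.

```python
# Minimal sketch: DenseNet121 backbone with a replaced classifier head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.densenet121()                       # ImageNet weights can be loaded if desired
in_features = backbone.classifier.in_features         # 1024 for DenseNet121
backbone.classifier = nn.Sequential(                  # new head for tumor vs. no-tumor
    nn.Dropout(0.3),
    nn.Linear(in_features, 2),
)

x = torch.randn(4, 3, 224, 224)                       # a batch of MRI slices as 3-channel input
logits = backbone(x)
print(logits.shape)                                   # torch.Size([4, 2])
```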
Affiliation(s)
- Md Ariful Islam
- Department of Computer Science, American International University-Bangladesh, Dhaka, Bangladesh
| | - M F Mridha
- Department of Computer Science, American International University-Bangladesh, Dhaka, Bangladesh
| | - Mejdl Safran
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
| | - Sultan Alfarhood
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
| | - Md Mohsin Kabir
- School of Innovation, Design and Engineering, Mälardalens University, Västerås, Sweden
49
Du Z, Zhang P, Huang X, Hu Z, Yang G, Xi M, Liu D. Deeply supervised two stage generative adversarial network for stain normalization. Sci Rep 2025; 15:7068. [PMID: 40016308 PMCID: PMC11868385 DOI: 10.1038/s41598-025-91587-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2024] [Accepted: 02/21/2025] [Indexed: 03/01/2025] Open
Abstract
The color variations present in histopathological images pose a significant challenge to computational pathology and, consequently, negatively affect the performance of certain pathological image analysis methods, especially those based on deep learning techniques. To date, several methods have been proposed to mitigate this issue. However, these methods either produce images with low texture retention, perform poorly when trained with small datasets, or have low generalization capabilities. In this paper, we propose a Deep Supervised Two-stage Generative Adversarial Network known as DSTGAN for stain-normalization. Specifically, we introduce deep supervision to generative adversarial networks in an innovative way to enhance the learning capacity of the model, benefiting from different model regularization methods. To make fuller use of source domain images for training the model, we drew upon semi-supervised concepts to design a novel two-stage staining strategy. Additionally, we construct a generator that can capture long-distance semantic relationships, enabling the model to retain more abundant texture information in the generated images. In the evaluation of the quality of generated images, we have achieved state-of-the-art performance on TUPAC-2016, MITOS-ATYPIA-14, ICIAR-BACH-2018 and MICCAI-16-GlaS datasets, improving the precision of classification and segmentation by 5.2% and 4.2%, respectively. Not only has our model significantly improved the quality of the stained images compared to existing stain normalization methods, but it also has a positive impact on the execution of downstream classification and segmentation tasks. Our method has further reduced the effect that staining differences have on computational pathology, thereby improving the accuracy of histopathological image analysis to some extent.
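For context, a classical non-deep baseline for the same stain-variation problem is Reinhard normalization, which matches LAB channel statistics between a source tile and a reference tile; the sketch below (assuming Python with NumPy and scikit-image) shows that baseline only and is not the DSTGAN method.

```python
# Minimal sketch of Reinhard colour normalization in LAB space (classical baseline).
import numpy as np
from skimage import color

def reinhard_normalize(src_rgb, ref_rgb):
    """Match the LAB channel means/stds of a source tile to a reference tile."""
    src = color.rgb2lab(src_rgb.astype(np.float64) / 255.0)
    ref = color.rgb2lab(ref_rgb.astype(np.float64) / 255.0)
    out = np.empty_like(src)
    for c in range(3):
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-8
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mean) / s_std * r_std + r_mean
    rgb = color.lab2rgb(out)                          # back to RGB in [0, 1]
    return (np.clip(rgb, 0, 1) * 255).astype(np.uint8)

src = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)   # toy source tile
ref = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)   # toy reference tile
print(reinhard_normalize(src, ref).shape)
```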
Affiliation(s)
- Zhe Du
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, China
- Henan Engineering Research Center of Digital Pathology and Artificial Intelligence Diagnosis, Luoyang, China
| | - Pujing Zhang
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, China
- Henan Engineering Research Center of Digital Pathology and Artificial Intelligence Diagnosis, Luoyang, China
| | - Xiaodong Huang
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, China
- Henan Engineering Research Center of Digital Pathology and Artificial Intelligence Diagnosis, Luoyang, China
| | - Zhigang Hu
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, China
| | - Gege Yang
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, China
- Henan Engineering Research Center of Digital Pathology and Artificial Intelligence Diagnosis, Luoyang, China
| | - Mengyang Xi
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, China
| | - Dechun Liu
- Henan Engineering Research Center of Digital Pathology and Artificial Intelligence Diagnosis, Luoyang, China.
- The First Affiliated Hospital of Henan University of Science and Technology, Luoyang, China.
50
Xu T, Bassiouny D, Srinidhi C, Lam MSW, Goubran M, Nofech-Mozes S, Martel AL. Artificial Intelligence-Assisted Detection of Breast Cancer Lymph Node Metastases in the Post-Neoadjuvant Treatment Setting. J Transl Med 2025; 105:104121. [PMID: 40020876 DOI: 10.1016/j.labinv.2025.104121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2024] [Revised: 02/14/2025] [Accepted: 02/19/2025] [Indexed: 03/03/2025] Open
Abstract
Lymph node assessment for metastasis is a common, time-consuming, and potentially error-prone pathologist task. Past studies have proposed deep learning algorithms designed to automate this task. However, none have explicitly evaluated the generalizability of these algorithms to lymph node in patients with breast cancer who have received neoadjuvant systemic therapy (NAT). In this study, we created a large 1027-slide data set exclusively containing patients with breast cancer who have received NAT with detailed pathologist labels. We developed an interpretable deep learning pipeline to carry out the following 2 tasks: first, to classify slides as positive or negative for metastases, and second, to create a detailed, patch-level heatmap for probability of metastasis. We evaluated this pipeline with and without post-NAT treatment effect in training data, and investigated its performance relative to both slide- and patch-level tasks. We found that the presence of post-NAT treatment effect training data is relevant for both tasks, with particular benefits in pipeline specificity. With the post-NAT testing cohort, we found that our final pipeline obtained 0.986 area under the receiver operating characteristic curve for slide-level classification, and 70.9% specificity when calibrating for 100% sensitivity. We additionally performed an interpretability study on the outputs of our pipeline and found that the patch-level heatmap was successful in efficiently guiding pathologists toward detecting and correcting erroneous predictions that were made with an uncalibrated network.
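A minimal sketch of what "calibrating for 100% sensitivity" can look like in practice, assuming Python with NumPy: the threshold is set at the lowest score among positive slides so that every positive is flagged, and specificity is then read off; the synthetic scores below are illustrative only.

```python
# Minimal sketch: choose the slide-level threshold that keeps sensitivity at 100%
# and report the specificity obtained at that operating point.
import numpy as np

def specificity_at_full_sensitivity(scores, labels):
    scores, labels = np.asarray(scores), np.asarray(labels).astype(bool)
    threshold = scores[labels].min()        # lowest-scoring positive slide sets the bar
    preds = scores >= threshold             # every positive slide is now flagged
    specificity = (~preds[~labels]).mean()  # fraction of negatives correctly left unflagged
    return threshold, specificity

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)                                    # synthetic slide labels
scores = np.clip(0.55 * labels + rng.normal(0.3, 0.2, 500), 0, 1)   # synthetic model scores
thr, spec = specificity_at_full_sensitivity(scores, labels)
print(f"threshold={thr:.3f}, specificity at 100% sensitivity={spec:.1%}")
```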
Affiliation(s)
- Tony Xu
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario.
| | - Dina Bassiouny
- Department of Laboratory Medicine and Pathology, University of Toronto, Toronto, Ontario; Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario
| | - Chetan Srinidhi
- Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario
| | - Michael S W Lam
- Department of Biomedical Engineering, University of Waterloo, Waterloo, Ontario
| | - Maged Goubran
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario; Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario; Hurvitz Brain Sciences Centre, Sunnybrook Health Sciences Centre, Toronto, Ontario; Harquail Centre for Neuromodulation, Sunnybrook Health Sciences Centre, Toronto, Ontario
| | - Sharon Nofech-Mozes
- Department of Laboratory Medicine and Pathology, University of Toronto, Toronto, Ontario; Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario
| | - Anne L Martel
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario; Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario