1
Yi J, Liu X, Cheng S, Chen L, Zeng S. Multi-scale window transformer for cervical cytopathology image recognition. Comput Struct Biotechnol J 2024; 24:314-321. [PMID: 38681132] [PMCID: PMC11046249] [DOI: 10.1016/j.csbj.2024.04.028]
Abstract
Cervical cancer is a major global health issue, particularly in developing countries where access to healthcare is limited. Early detection of pre-cancerous lesions is crucial for successful treatment and reducing mortality rates. However, traditional screening and diagnostic processes require cytopathologists to manually interpret huge numbers of cells, which is time-consuming, costly, and heavily dependent on the examiner's experience. In this paper, we propose a Multi-scale Window Transformer (MWT) for cervical cytopathology image recognition. We design multi-scale window multi-head self-attention (MW-MSA) to simultaneously integrate cell features at different scales: small-window self-attention extracts local cell detail features, while large-window self-attention integrates features from the smaller-scale window attention to achieve window-to-window information interaction. Our design enables long-range feature integration while avoiding both the whole-image self-attention (SA) of ViT and the twice-applied local window SA of Swin Transformer. We also find that convolutional feed-forward networks (CFFN) are more efficient than the original MLP-based FFN for representing cytopathology images. The overall model adopts a pyramid architecture. We establish two multi-center cervical cell classification datasets: a two-category set of 192,123 images and a four-category set of 174,138 images. Extensive experiments demonstrate that MWT outperforms state-of-the-art general classification networks and specialized cytopathology image classifiers on both internal and external test sets, and the results on these large-scale datasets confirm the effectiveness and generalization of the proposed model. Our work provides a reliable cytopathology image recognition method and supports computer-aided screening for cervical cancer. Our code is available at https://github.com/nmyz669/MWT, and our web service tool can be accessed at https://huggingface.co/spaces/nmyz/MWTdemo.
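A minimal PyTorch sketch of the multi-scale window idea follows: self-attention is run inside small and large non-overlapping windows, and the two scales are fused. The window sizes, head count, and concatenation-based fusion are illustrative assumptions rather than the authors' implementation (see the linked repository for the real code).

```python
# Sketch of multi-scale window self-attention in the spirit of MW-MSA.
import torch
import torch.nn as nn


def window_partition(x: torch.Tensor, ws: int) -> torch.Tensor:
    """Split a (B, H, W, C) feature map into non-overlapping ws x ws windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    # -> (num_windows * B, ws * ws, C): each window is one attention "sequence"
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)


def window_reverse(windows: torch.Tensor, ws: int, H: int, W: int) -> torch.Tensor:
    """Inverse of window_partition, back to (B, H, W, C)."""
    B = windows.shape[0] // ((H // ws) * (W // ws))
    x = windows.view(B, H // ws, W // ws, ws, ws, -1)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, -1)


class MultiScaleWindowAttention(nn.Module):
    """Run self-attention in small and large windows, then fuse the results."""

    def __init__(self, dim: int, heads: int = 4, small_ws: int = 4, large_ws: int = 8):
        super().__init__()
        self.small_ws, self.large_ws = small_ws, large_ws
        self.attn_small = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_large = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)  # fuse the two scales

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, H, W, C)
        B, H, W, C = x.shape
        outs = []
        for ws, attn in ((self.small_ws, self.attn_small),
                         (self.large_ws, self.attn_large)):
            w = window_partition(x, ws)
            w, _ = attn(w, w, w)          # local SA within each window
            outs.append(window_reverse(w, ws, H, W))
        return self.proj(torch.cat(outs, dim=-1))


if __name__ == "__main__":
    feats = torch.randn(2, 16, 16, 64)
    print(MultiScaleWindowAttention(64)(feats).shape)  # torch.Size([2, 16, 16, 64])
```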
Affiliation(s)
- Jiaxiang Yi
  - Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
- Xiuli Liu
  - Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
- Shenghua Cheng
  - School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
- Li Chen
  - Department of Clinical Laboratory, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Shaoqun Zeng
  - Britton Chance Center and MoE Key Laboratory for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
2
Bera A, Bhattacharjee D, Krejcar O. PND-Net: plant nutrition deficiency and disease classification using graph convolutional network. Sci Rep 2024; 14:15537. [PMID: 38969738] [DOI: 10.1038/s41598-024-66543-7]
Abstract
Crop yields could be enhanced if plant nutrition deficiencies and diseases were identified and detected at early stages; continuous health monitoring of plants is therefore crucial for managing plant stress. Deep learning methods have proven superior in the automated detection of plant diseases and nutrition deficiencies from visual symptoms in leaves. This article proposes a new deep learning method for plant nutrition deficiency and disease classification using a graph convolutional network (GCN) built on top of a base convolutional neural network (CNN). A global feature descriptor can fail to capture the vital region of a diseased leaf, causing inaccurate classification; regional feature learning is therefore crucial for holistic feature aggregation. In this work, region-based feature summarization at multiple scales is performed using spatial pyramid pooling for discriminative feature representation, and a GCN is developed on top of it to enable learning of finer details for classifying plant diseases and nutrient deficiencies. The proposed method, called Plant Nutrition Deficiency and Disease Network (PND-Net), has been evaluated on two public nutrition-deficiency datasets and two disease-classification datasets using four backbone CNNs. The best classification performances of PND-Net are: (a) 90.00% on Banana and 90.54% on Coffee nutrition deficiency; and (b) 96.18% on Potato diseases and 84.30% on PlantDoc, all with the Xception backbone. In additional generalization experiments, the method achieved state-of-the-art performance on two further public datasets: breast cancer histopathology image classification (BreakHis 40×: 95.50% and BreakHis 100×: 96.79% accuracy) and single cells in Pap smear images for cervical cancer classification (SIPaKMeD: 99.18% accuracy). The method was also evaluated with five-fold cross-validation and achieved improved performance on these datasets. PND-Net thus effectively boosts automated health analysis of various plants in real, intricate field environments, and shows promise for human cancer classification as well.
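The core design, a GCN head over multi-scale region features produced by spatial pyramid pooling, can be sketched as below; the pyramid levels, the fully connected adjacency, and the single-layer GCN are illustrative assumptions, not the published configuration.

```python
# Sketch of the PND-Net idea: pyramid-pooled region features become graph nodes,
# and a small GCN refines them before classification.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):  # x: (B, N, D), adj: (N, N) row-normalized
        return F.relu(self.lin(adj @ x))  # message passing, then linear + ReLU


class PyramidGCNHead(nn.Module):
    def __init__(self, feat_dim=256, n_classes=4, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels
        n_nodes = sum(l * l for l in levels)          # 1 + 4 + 16 = 21 region nodes
        adj = torch.ones(n_nodes, n_nodes) / n_nodes  # fully connected, normalized
        self.register_buffer("adj", adj)
        self.gcn = SimpleGCNLayer(feat_dim, feat_dim)
        self.cls = nn.Linear(feat_dim, n_classes)

    def forward(self, fmap):  # fmap: (B, C, H, W) from any CNN backbone
        nodes = [F.adaptive_avg_pool2d(fmap, l).flatten(2).transpose(1, 2)
                 for l in self.levels]          # each: (B, l*l, C)
        x = torch.cat(nodes, dim=1)             # (B, 21, C) region nodes
        x = self.gcn(x, self.adj)               # context exchange between regions
        return self.cls(x.mean(dim=1))          # pool nodes -> class logits


if __name__ == "__main__":
    head = PyramidGCNHead()
    print(head(torch.randn(2, 256, 14, 14)).shape)  # torch.Size([2, 4])
```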
Affiliation(s)
- Asish Bera
  - Department of Computer Science and Information Systems, BITS Pilani, Pilani Campus, Pilani, Rajasthan, 333031, India.
- Debotosh Bhattacharjee
  - Department of Computer Science and Engineering, Jadavpur University, Kolkata, West Bengal, 700032, India
  - Faculty of Informatics and Management, University of Hradec Kralove, Hradec Kralove, Czech Republic
- Ondrej Krejcar
  - Faculty of Informatics and Management, University of Hradec Kralove, Hradec Kralove, Czech Republic
  - Skoda Auto University, Na Karmeli 1457, 293 01, Mlada Boleslav, Czech Republic
  - Malaysia Japan International Institute of Technology (MJIIT), Universiti Teknologi Malaysia, Kuala Lumpur, Malaysia
3
Takeda K, Sakai T, Mitate E. Background removal for debiasing computer-aided cytological diagnosis. Int J Comput Assist Radiol Surg 2024; online ahead of print. [PMID: 38918281] [DOI: 10.1007/s11548-024-03169-0]
Abstract
To address the background-bias problem in computer-aided cytology caused by microscope-slide deterioration, this article proposes a deep learning approach for cell segmentation and background removal that requires no cell annotation. A U-Net-based model was trained to separate cells from the background in an unsupervised manner by leveraging the redundancy of the background and the sparsity of cells in liquid-based cytology (LBC) images. The experimental results demonstrate that the U-Net-based model, trained on a small set of cytology images, can exclude background features and accurately segment cells. This capability helps debias the detection and classification of the cells of interest in oral LBC, where slide deterioration can significantly affect deep learning-based cell classification. Our proposed method effectively removes background features without the cost of cell annotation, thereby enabling accurate cytological diagnosis through deep learning on microscopic slide images.
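The two priors the abstract names (background redundancy across images, cell sparsity) suggest an annotation-free training signal of the following shape. The batch-median background estimate and the L1 sparsity term are assumptions made for illustration; in the paper's setting a full U-Net would replace the tiny mask network below.

```python
# Hedged sketch of unsupervised cell/background separation from two priors:
# the background is shared (redundant) across LBC patches, and cells are sparse.
import torch
import torch.nn as nn


class TinyMaskNet(nn.Module):  # stand-in for the U-Net mask predictor
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # soft cell mask in [0, 1]
        )

    def forward(self, x):
        return self.net(x)


def unsupervised_loss(model, imgs, sparsity_weight=0.05):
    """imgs: (B, 3, H, W). No cell annotations are used anywhere."""
    mask = model(imgs)                                     # (B, 1, H, W) cell prob.
    background = imgs.median(dim=0, keepdim=True).values   # redundancy prior
    # Pixels NOT claimed as cells must be explained by the shared background...
    recon = ((1 - mask) * (imgs - background)).pow(2).mean()
    # ...while the predicted cell mask itself should stay sparse.
    return recon + sparsity_weight * mask.abs().mean()


if __name__ == "__main__":
    model = TinyMaskNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    batch = torch.rand(8, 3, 64, 64)  # a batch of LBC image patches
    loss = unsupervised_loss(model, batch)
    loss.backward()
    opt.step()
    print(float(loss))
```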
Affiliation(s)
- Keita Takeda
  - School of Information and Data Sciences, Nagasaki University, 1-14 Bunkyo, Nagasaki, 8528521, Japan.
- Tomoya Sakai
  - School of Information and Data Sciences, Nagasaki University, 1-14 Bunkyo, Nagasaki, 8528521, Japan
  - Graduate School of Integrated Science and Technology, Nagasaki University, 1-14 Bunkyo, Nagasaki, 8528521, Japan
- Eiji Mitate
  - Department of Oral and Maxillofacial Surgery, Kanazawa Medical University, 1-1 Daigaku, Uchinada, Kahoku, Ishikawa, 9200293, Japan
4
Raufeisen J, Xie K, Hörst F, Braunschweig T, Li J, Kleesiek J, Röhrig R, Egger J, Leibe B, Hölzle F, Hermans A, Puladi B. Cyto R-CNN and CytoNuke Dataset: Towards reliable whole-cell segmentation in bright-field histological images. Comput Methods Programs Biomed 2024; 252:108215. [PMID: 38781811] [DOI: 10.1016/j.cmpb.2024.108215]
Abstract
BACKGROUND AND OBJECTIVE Cell segmentation in bright-field histological slides is a crucial topic in medical image analysis. Access to accurate segmentation allows researchers to examine the relationship between cellular morphology and clinical observations. Unfortunately, most segmentation methods known today are limited to nuclei and cannot segment the cytoplasm. METHODS We present a new network architecture, Cyto R-CNN, that can accurately segment whole cells (both nucleus and cytoplasm) in bright-field images. We also present a new dataset, CytoNuke, consisting of several thousand manual annotations of head and neck squamous cell carcinoma cells. Using this dataset, we compared the performance of Cyto R-CNN to other popular cell segmentation algorithms, including QuPath's built-in algorithm, StarDist, Cellpose, and a multi-scale Attention DeepLabv3+. To evaluate segmentation performance, we calculated AP50 and AP75 and measured 17 morphological and staining-related features for all detected cells, comparing these measurements to the gold standard of manual segmentation using the Kolmogorov-Smirnov test. RESULTS Cyto R-CNN achieved an AP50 of 58.65% and an AP75 of 11.56% in whole-cell segmentation, outperforming all other methods (QuPath 19.46%/0.91%; StarDist 45.33%/2.32%; Cellpose 31.85%/5.61%; DeepLabv3+ 3.97%/1.01%). Cell features derived from Cyto R-CNN showed the best agreement with the gold standard (D̄ = 0.15), outperforming QuPath (D̄ = 0.22), StarDist (D̄ = 0.25), Cellpose (D̄ = 0.23), and DeepLabv3+ (D̄ = 0.33). CONCLUSION Our newly proposed Cyto R-CNN architecture outperforms current algorithms in whole-cell segmentation while providing more reliable cell measurements than any other model. This could improve digital pathology workflows, potentially leading to improved diagnoses. Moreover, our published dataset can be used to develop further models in the future.
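The feature-agreement evaluation (two-sample Kolmogorov-Smirnov tests over per-cell measurements, averaged into D̄) can be reproduced with SciPy as sketched below; the feature names and values are stand-ins, not the CytoNuke data.

```python
# Compare per-cell feature distributions from a model against the manual gold
# standard with two-sample KS tests, and average the D statistics into D-bar.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
features = ["cell_area", "nucleus_area", "mean_stain_intensity"]  # stand-ins
gold = {f: rng.normal(100, 15, size=500) for f in features}       # manual cells
model = {f: rng.normal(103, 17, size=480) for f in features}      # predicted cells

d_stats = []
for f in features:
    d, p = ks_2samp(gold[f], model[f])  # D: max CDF distance between distributions
    d_stats.append(d)
    print(f"{f}: D={d:.3f}, p={p:.3g}")

print(f"mean D over features: {np.mean(d_stats):.3f}")  # lower = better agreement
```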
Affiliation(s)
- Johannes Raufeisen
  - Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany; Center for Integrated Oncology Aachen Bonn Cologne Düsseldorf (CIO ABCD), Pauwelsstraße 30, 52074 Aachen, Germany
- Kunpeng Xie
  - Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany; Center for Integrated Oncology Aachen Bonn Cologne Düsseldorf (CIO ABCD), Pauwelsstraße 30, 52074 Aachen, Germany
- Fabian Hörst
  - Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Hufelandstr. 55, 45147 Essen, Germany
- Till Braunschweig
  - Center for Integrated Oncology Aachen Bonn Cologne Düsseldorf (CIO ABCD), Pauwelsstraße 30, 52074 Aachen, Germany; Institute of Pathology, University Hospital RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany; Institute of Pathology, LMU Munich, Thalkirchner Str. 36, 80337 Munich, Germany
- Jianning Li
  - Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Hufelandstr. 55, 45147 Essen, Germany
- Jens Kleesiek
  - Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Hufelandstr. 55, 45147 Essen, Germany; Department of Physics, TU Dortmund University, August-Schmidt-Str. 4, 44227 Dortmund, Germany
- Rainer Röhrig
  - Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany; Center for Integrated Oncology Aachen Bonn Cologne Düsseldorf (CIO ABCD), Pauwelsstraße 30, 52074 Aachen, Germany
- Jan Egger
  - Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Girardetstraße 2, 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Hufelandstr. 55, 45147 Essen, Germany; Center for Virtual and Extended Reality in Medicine (ZvRM), University Hospital Essen, University Medicine Essen, Hufelandstraße 55, 45147 Essen, Germany
- Bastian Leibe
  - Visual Computing Institute (Computer Vision), RWTH Aachen University, Mies-van-der-Rohe Str. 15, 52074 Aachen, Germany
- Frank Hölzle
  - Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany; Center for Integrated Oncology Aachen Bonn Cologne Düsseldorf (CIO ABCD), Pauwelsstraße 30, 52074 Aachen, Germany
- Alexander Hermans
  - Visual Computing Institute (Computer Vision), RWTH Aachen University, Mies-van-der-Rohe Str. 15, 52074 Aachen, Germany
- Behrus Puladi
  - Department of Oral and Maxillofacial Surgery, University Hospital RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany; Institute of Medical Informatics, University Hospital RWTH Aachen, Pauwelsstr. 30, 52074 Aachen, Germany; Center for Integrated Oncology Aachen Bonn Cologne Düsseldorf (CIO ABCD), Pauwelsstraße 30, 52074 Aachen, Germany.
5
van Diest PJ, Flach RN, van Dooijeweert C, Makineli S, Breimer GE, Stathonikos N, Pham P, Nguyen TQ, Veta M. Pros and cons of artificial intelligence implementation in diagnostic pathology. Histopathology 2024; 84:924-934. [PMID: 38433288] [DOI: 10.1111/his.15153]
Abstract
The rapid introduction of digital pathology has greatly facilitated the development of artificial intelligence (AI) models in pathology that show great promise for assisting morphological diagnostics and quantitation of therapeutic targets. We are now at a tipping point where companies have started to bring algorithms to the market, and the question arises whether the pathology community is ready to implement AI in its routine workflow. At the same time, concerns remain about the use of AI in pathology. This article reviews the pros and cons of introducing AI in diagnostic pathology.
Affiliation(s)
- Paul J van Diest
  - Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Rachel N Flach
  - Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
  - Department of Oncological Urology, University Medical Center Utrecht, Utrecht, the Netherlands
- Seher Makineli
  - Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
  - Department of Surgical Oncology, University Medical Center Utrecht, Utrecht, the Netherlands
- Gerben E Breimer
  - Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Nikolas Stathonikos
  - Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Paul Pham
  - Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Tri Q Nguyen
  - Department of Pathology, University Medical Center Utrecht, Utrecht, the Netherlands
- Mitko Veta
  - Department of Oncological Urology, University Medical Center Utrecht, Utrecht, the Netherlands
  - Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
6
Russo C, Bria A, Marrocco C. GravityNet for end-to-end small lesion detection. Artif Intell Med 2024; 150:102842. [PMID: 38553147] [DOI: 10.1016/j.artmed.2024.102842]
Abstract
This paper introduces a novel one-stage, end-to-end detector specifically designed to detect small lesions in medical images. Precise localization of small lesions presents challenges due to their appearance and the diverse contextual backgrounds in which they are found. To address this, our approach introduces a new type of pixel-based anchor that dynamically moves towards the targeted lesion for detection. We refer to this new architecture as GravityNet, and to the novel anchors as gravity points, since they appear to be "attracted" by the lesions. We conducted experiments on two well-established medical problems involving small lesions: microcalcification detection in digital mammograms and microaneurysm detection in digital fundus images. Our method demonstrates promising results in effectively detecting small lesions in these medical imaging tasks.
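A conceptual sketch of the gravity-point idea follows: a dense grid of pixel-based anchors plus a loss that pulls each anchor's predicted position toward its nearest lesion. The grid stride, matching radius, and loss form are assumptions for illustration, not the published GravityNet heads.

```python
# Gravity points as a dense anchor grid with an attraction loss toward lesions.
import torch


def gravity_point_grid(h, w, stride=8):
    """One gravity point per stride x stride cell, placed at the cell center."""
    ys = torch.arange(stride // 2, h, stride, dtype=torch.float32)
    xs = torch.arange(stride // 2, w, stride, dtype=torch.float32)
    yy, xx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([xx.flatten(), yy.flatten()], dim=1)  # (N, 2) as (x, y)


def attraction_loss(points, offsets, lesion_centers, radius=16.0):
    """Pull each gravity point within `radius` of a lesion onto that lesion."""
    moved = points + offsets                     # (N, 2) anchor positions after moving
    d = torch.cdist(moved, lesion_centers)       # (N, L) point-to-lesion distances
    nearest, idx = d.min(dim=1)
    responsible = nearest < radius               # only points near a lesion are pulled
    if not responsible.any():
        return moved.sum() * 0.0                 # zero loss, graph kept intact
    target = lesion_centers[idx[responsible]]
    return ((moved[responsible] - target) ** 2).mean()


if __name__ == "__main__":
    pts = gravity_point_grid(256, 256)                    # (1024, 2)
    offsets = torch.zeros_like(pts, requires_grad=True)   # normally a CNN head output
    lesions = torch.tensor([[40.0, 60.0], [200.0, 128.0]])  # hypothetical lesion centers
    loss = attraction_loss(pts, offsets, lesions)
    loss.backward()
    print(loss.item(), offsets.grad.abs().sum().item())
```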
Affiliation(s)
- Ciro Russo
  - Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy.
- Alessandro Bria
  - Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy.
- Claudio Marrocco
  - Department of Electrical and Information Engineering, University of Cassino and L.M., Via G. Di Biasio 43, 03043 Cassino (FR), Italy.
7
He L, Li M, Wang X, Wu X, Yue G, Wang T, Zhou Y, Lei B, Zhou G. Morphology-based deep learning enables accurate detection of senescence in mesenchymal stem cell cultures. BMC Biol 2024; 22:1. [PMID: 38167069] [PMCID: PMC10762950] [DOI: 10.1186/s12915-023-01780-2]
Abstract
BACKGROUND Cell senescence is a sign of aging and plays a significant role in the pathogenesis of age-related disorders. For cell therapy, senescence may compromise the quality and efficacy of cells and pose potential safety risks. Mesenchymal stem cells (MSCs) are currently undergoing extensive research for cell therapy, necessitating effective methods to evaluate senescence. Senescent MSCs exhibit a distinctive morphology that can be used for detection; however, morphological assessment during MSC production is often subjective and uncertain. New tools are required for the reliable, large-scale evaluation of senescence in single cells in live imaging of MSCs. RESULTS We have developed a morphology-based Cascade region-based convolutional neural network (Cascade R-CNN) system for detecting senescent MSCs, which can automatically locate single cells of different sizes and shapes in multicellular images and assess their senescence state. Additionally, we tested the applicability of the Cascade R-CNN system to MSC senescence and examined the correlation of morphological changes with other senescence indicators. CONCLUSIONS This is, to our knowledge, the first application of deep learning to detect senescent MSCs, and it shows promising performance in both chronic and acute MSC senescence. The system offers a labor-saving and cost-effective option for screening MSC culture conditions and anti-aging drugs, as well as a powerful tool for non-invasive, real-time morphological image analysis integrated into cell production.
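Cascade R-CNN itself is available in standard detection toolboxes; a hedged sketch of inference with MMDetection 2.x follows, where the checkpoint, class setup, and image are placeholders rather than the authors' released artifacts.

```python
# Hedged sketch: running a Cascade R-CNN detector on MSC culture images with
# MMDetection 2.x. Paths are placeholders; the authors' trained weights and
# exact configuration are not reproduced here.
from mmdet.apis import init_detector, inference_detector

CONFIG = "configs/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py"  # stock MMDet config
CHECKPOINT = "work_dirs/msc_senescence/latest.pth"               # hypothetical weights

model = init_detector(CONFIG, CHECKPOINT, device="cuda:0")       # or device="cpu"
result = inference_detector(model, "msc_culture_patch.png")      # hypothetical image

# In MMDetection 2.x, `result` is a per-class list of (x1, y1, x2, y2, score) arrays,
# e.g. class 0 = normal MSC, class 1 = senescent MSC in a two-class setup.
for class_id, dets in enumerate(result):
    for x1, y1, x2, y2, score in dets:
        if score > 0.5:
            print(f"class {class_id}: box=({x1:.0f},{y1:.0f},{x2:.0f},{y2:.0f}) "
                  f"score={score:.2f}")
```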
Affiliation(s)
- Liangge He
  - Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, 1066 Xueyuan Avenue, Shenzhen, 518060, China
  - Department of Medical Cell Biology and Genetics, Shenzhen Key Laboratory of Anti-Aging and Regenerative Medicine, Shenzhen Engineering Laboratory of Regenerative Technologies for Orthopedic Diseases, Shenzhen University Medical School, Shenzhen, 518060, China
- Mingzhu Li
  - Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, 1066 Xueyuan Avenue, Shenzhen, 518060, China
- Xinglie Wang
  - Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, 1066 Xueyuan Avenue, Shenzhen, 518060, China
- Xiaoyan Wu
  - Department of Dermatology, Shenzhen Institute of Translational Medicine, Shenzhen Second People's Hospital, The First Affiliated Hospital of Shenzhen University, Shenzhen, 518035, China
- Guanghui Yue
  - Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, 1066 Xueyuan Avenue, Shenzhen, 518060, China
- Tianfu Wang
  - Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, 1066 Xueyuan Avenue, Shenzhen, 518060, China
- Yan Zhou
  - Department of Medical Cell Biology and Genetics, Shenzhen Key Laboratory of Anti-Aging and Regenerative Medicine, Shenzhen Engineering Laboratory of Regenerative Technologies for Orthopedic Diseases, Shenzhen University Medical School, Shenzhen, 518060, China
  - Lungene Biotech Ltd., Shenzhen, 18000, China
- Baiying Lei
  - Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, 1066 Xueyuan Avenue, Shenzhen, 518060, China.
- Guangqian Zhou
  - Department of Medical Cell Biology and Genetics, Shenzhen Key Laboratory of Anti-Aging and Regenerative Medicine, Shenzhen Engineering Laboratory of Regenerative Technologies for Orthopedic Diseases, Shenzhen University Medical School, Shenzhen, 518060, China.
8
Xing F, Yang X, Cornish TC, Ghosh D. Learning with limited target data to detect cells in cross-modality images. Med Image Anal 2023; 90:102969. [PMID: 37802010] [DOI: 10.1016/j.media.2023.102969]
Abstract
Deep neural networks have achieved excellent cell and nucleus quantification performance in microscopy images, but they often suffer from performance degradation when applied to cross-modality imaging data. Unsupervised domain adaptation (UDA) based on generative adversarial networks (GANs) has recently improved the performance of cross-modality medical image quantification. However, current GAN-based UDA methods typically require abundant target data for model training, which is often very expensive or even impossible to obtain in real applications. In this paper, we study a more realistic yet challenging UDA situation in which (unlabeled) target training data is limited, a setting previous work seldom explores for cell identification. We first enhance a dual GAN with task-specific modeling, which provides additional supervision signals to assist generator learning, and we explore both single-directional and bidirectional task-augmented GANs for domain adaptation. We then further improve the GAN by introducing a differentiable, stochastic data augmentation module to explicitly reduce discriminator overfitting, examining source-, target-, and dual-domain data augmentation for GAN enhancement, as well as joint task and data augmentation in a unified GAN-based UDA framework. We evaluate the framework for cell detection on multiple public and in-house microscopy image datasets acquired with different imaging modalities, staining protocols, and/or tissue preparations. The experiments demonstrate that our method significantly boosts performance compared with the reference baseline and is superior to, or on par with, fully supervised models trained with real target annotations. In addition, our method outperforms recent state-of-the-art UDA approaches by a large margin on different datasets.
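The differentiable, stochastic augmentation module for reducing discriminator overfitting can be illustrated in the style of DiffAugment: the same random, differentiable transforms are applied to real and generated images right before the discriminator, so generator gradients still flow through them. The specific transforms and strengths below are assumptions, not the paper's module.

```python
# Differentiable stochastic augmentation applied to BOTH discriminator branches.
import torch


def diff_augment(x: torch.Tensor) -> torch.Tensor:
    """x: (B, C, H, W) in [-1, 1]. All ops are differentiable w.r.t. x."""
    b = x.size(0)
    # random brightness shift per image
    x = x + (torch.rand(b, 1, 1, 1, device=x.device) - 0.5) * 0.4
    # random contrast scaling around the per-image mean
    mean = x.mean(dim=[1, 2, 3], keepdim=True)
    x = (x - mean) * (torch.rand(b, 1, 1, 1, device=x.device) * 0.5 + 0.75) + mean
    # random translation via rolling (pure indexing, so still differentiable)
    shift = int(torch.randint(-4, 5, (1,)))
    return torch.roll(x, shifts=(shift, shift), dims=(2, 3))


def d_loss(discriminator, real, fake):
    """Hinge discriminator loss with augmentation on both real and fake images."""
    d_real = discriminator(diff_augment(real))
    d_fake = discriminator(diff_augment(fake.detach()))
    return torch.relu(1.0 - d_real).mean() + torch.relu(1.0 + d_fake).mean()
```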
Affiliation(s)
- Fuyong Xing
  - Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA.
- Xinyi Yang
  - Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Toby C Cornish
  - Department of Pathology, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
- Debashis Ghosh
  - Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, 13001 E 17th Pl, Aurora, CO 80045, USA
9
Waqas A, Bui MM, Glassy EF, El Naqa I, Borkowski P, Borkowski AA, Rasool G. Revolutionizing Digital Pathology With the Power of Generative Artificial Intelligence and Foundation Models. Lab Invest 2023; 103:100255. [PMID: 37757969] [DOI: 10.1016/j.labinv.2023.100255]
Abstract
Digital pathology has transformed the traditional practice of analyzing tissue under a microscope into a computer vision workflow. Whole-slide imaging allows pathologists to view and analyze microscopic images on a computer monitor, enabling computational pathology. By leveraging artificial intelligence (AI) and machine learning (ML), computational pathology has emerged as a promising field in recent years. Recently, task-specific AI/ML (e.g., convolutional neural networks) has risen to the forefront, achieving above-human performance in many image-processing and computer vision tasks. The performance of task-specific AI/ML models depends on the availability of many annotated training datasets, which presents a rate-limiting factor for AI/ML development in pathology. Task-specific AI/ML models cannot benefit from multimodal data and lack generalization; for example, they often struggle to generalize to new datasets or unseen variations in image acquisition, staining techniques, or tissue types. The 2020s are witnessing the rise of foundation models and generative AI. A foundation model is a large AI model trained on sizable data that is later adapted (or fine-tuned) to perform different tasks using a modest amount of task-specific annotated data. These models provide in-context learning, can self-correct mistakes, and promptly adjust to user feedback. In this review, we provide a brief overview of recent advances in computational pathology enabled by task-specific AI, their challenges and limitations, and then introduce various foundation models. We propose creating a pathology-specific generative AI based on multimodal foundation models and present its potentially transformative role in digital pathology. We describe different use cases, delineating how it could serve as an expert companion to pathologists and help them efficiently and objectively perform routine laboratory tasks, including quantitative image analysis, pathology report generation, diagnosis, and prognosis. We also outline the potential role that foundation models and generative AI can play in standardizing pathology laboratory workflow, education, and training.
Affiliation(s)
- Asim Waqas
  - Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida; Department of Electrical Engineering, University of South Florida, Tampa, Florida.
- Marilyn M Bui
  - Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida; Department of Pathology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida; University of South Florida, Morsani College of Medicine, Tampa, Florida
- Eric F Glassy
  - Affiliated Pathologists Medical Group, Inc., Rancho Dominguez, California
- Issam El Naqa
  - Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida
- Piotr Borkowski
  - Quest Diagnostics/Ameripath, Tampa, Florida; Center of Excellence for Digital and AI-Empowered Pathology, Quest Diagnostics, Tampa, Florida
- Andrew A Borkowski
  - University of South Florida, Morsani College of Medicine, Tampa, Florida; James A. Haley Veterans' Hospital, Tampa, Florida; National Artificial Intelligence Institute, Washington, District of Columbia
- Ghulam Rasool
  - Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida; Department of Electrical Engineering, University of South Florida, Tampa, Florida; University of South Florida, Morsani College of Medicine, Tampa, Florida; Department of Neuro-Oncology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida
10
Meng Y, Yang Y, Hu M, Zhang Z, Zhou X. Artificial intelligence-based radiomics in bone tumors: Technical advances and clinical application. Semin Cancer Biol 2023; 95:75-87. [PMID: 37499847] [DOI: 10.1016/j.semcancer.2023.07.003]
Abstract
Radiomics is the extraction of predefined mathematical features from medical images for predicting variables of clinical interest. Recent research has demonstrated that radiomics can be processed by artificial intelligence algorithms to reveal complex patterns and trends for diagnosis and for prediction of prognosis and treatment response in various types of cancer; artificial intelligence tools can thus utilize radiological images to address next-generation issues in clinical decision making. Bone tumors can be classified as primary or secondary (metastatic); osteosarcoma, Ewing sarcoma, and chondrosarcoma are the dominant primary bone tumors. The development of bone tumor model systems, related research, and the assessment of novel treatment methods are ongoing efforts to improve clinical outcomes, notably for patients with metastases. Artificial intelligence and radiomics have been applied across almost the full spectrum of clinical care for bone tumors: radiomics models have achieved excellent performance in diagnosis and grading, enable prediction of overall survival, metastasis, and recurrence, and show promise in assisting therapeutic planning and evaluation, especially for neoadjuvant chemotherapy. This review provides an overview of the evolution of, and opportunities for, artificial intelligence in imaging, with a focus on hand-crafted features and deep learning-based radiomics approaches. We summarize the current applications of artificial intelligence-based radiomics in both primary and metastatic bone tumors, and discuss the limitations and future opportunities in this field. In the era of personalized medicine, an in-depth understanding of emerging artificial intelligence-based radiomics approaches will bring innovative solutions to bone tumor care and enable clinical application.
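The hand-crafted branch of this workflow (predefined features extracted from an image plus a tumor mask) is what the PyRadiomics library implements; a brief sketch with placeholder file paths follows.

```python
# Sketch of hand-crafted radiomics feature extraction with PyRadiomics:
# shape, intensity, and texture features from an image and its tumor mask.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()                    # start from a clean slate
extractor.enableFeatureClassByName("firstorder")  # intensity statistics
extractor.enableFeatureClassByName("glcm")        # texture (co-occurrence)
extractor.enableFeatureClassByName("shape")       # 3-D shape descriptors

# Hypothetical NIfTI volumes: the scan and its bone-tumor segmentation mask.
features = extractor.execute("tumor_mri.nii.gz", "tumor_mask.nii.gz")

numeric = {k: v for k, v in features.items() if k.startswith("original_")}
print(f"{len(numeric)} features, e.g. "
      f"original_shape_Sphericity={numeric.get('original_shape_Sphericity')}")
# These feature vectors would then feed any downstream ML model.
```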
Affiliation(s)
- Yichen Meng
  - Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China
- Yue Yang
  - Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China
- Miao Hu
  - Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China
- Zheng Zhang
  - Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China.
- Xuhui Zhou
  - Department of Orthopedics, Second Affiliated Hospital of Naval Medical University, Shanghai 200003, PR China.
11
Levy JJ, Chan N, Marotti JD, Kerr DA, Gutmann EJ, Glass RE, Dodge CP, Suriawinata AA, Christensen BC, Liu X, Vaickus LJ. Large-scale validation study of an improved semiautonomous urine cytology assessment tool: AutoParis-X. Cancer Cytopathol 2023; 131:637-654. [PMID: 37377320] [DOI: 10.1002/cncy.22732]
Abstract
BACKGROUND Adopting a computational approach to the assessment of urine cytology specimens has the potential to improve the efficiency, accuracy, and reliability of bladder cancer screening, which has heretofore relied on semisubjective manual assessment methods. As rigorous, quantitative criteria and guidelines have been introduced to improve screening practices (e.g., The Paris System for Reporting Urinary Cytology), algorithms that emulate semiautonomous diagnostic decision-making have lagged behind, in part because of the complex and nuanced nature of urine cytology reporting. METHODS In this study, the authors report on the development and large-scale validation of a deep-learning tool, AutoParis-X, which can facilitate rapid, semiautonomous examination of urine cytology specimens. RESULTS The results of this large-scale, retrospective validation study indicate that AutoParis-X can accurately determine urothelial cell atypia and aggregate a wide variety of cell-related and cluster-related information across a slide to yield an atypia burden score, which correlates closely with overall specimen atypia and is predictive of Paris system diagnostic categories. Importantly, this approach accounts for challenges associated with the assessment of overlapping cell-cluster borders, which improves the ability to predict specimen atypia and to accurately estimate the nuclear-to-cytoplasm ratio for cells in these clusters. CONCLUSIONS The authors developed a publicly available, open-source, interactive web application that features a simple, easy-to-use display for examining urine cytology whole-slide images and determining the level of atypia in specific cells, flagging the most abnormal cells for pathologist review. The accuracy of AutoParis-X (and other semiautomated digital pathology systems) indicates that these technologies are approaching clinical readiness and necessitates full evaluation in head-to-head clinical trials.
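The slide-level aggregation can be illustrated as follows: per-cell atypia probabilities are pooled into a burden score and thresholded into Paris System categories. The weighting toward the most atypical cells and the category cutoffs are invented for illustration; the published score is considerably more involved.

```python
# Toy illustration: pooling per-cell atypia scores into a slide-level burden
# score and mapping it to a Paris System category.
import numpy as np


def atypia_burden(cell_scores: np.ndarray, top_fraction: float = 0.05) -> float:
    """cell_scores: per-cell atypia probabilities in [0, 1] from a classifier."""
    k = max(1, int(len(cell_scores) * top_fraction))
    top = np.sort(cell_scores)[-k:]          # the most atypical cells drive risk
    return 0.5 * cell_scores.mean() + 0.5 * top.mean()


def paris_category(burden: float) -> str:
    # Cutoffs are arbitrary placeholders, not calibrated thresholds.
    if burden < 0.2:
        return "NHGUC (negative)"
    if burden < 0.5:
        return "AUC (atypical)"
    if burden < 0.7:
        return "SHGUC (suspicious)"
    return "HGUC (high-grade)"


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    scores = rng.beta(1.5, 8, size=20_000)   # a mostly benign cell population
    b = atypia_burden(scores)
    print(f"burden={b:.3f} -> {paris_category(b)}")
```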
Affiliation(s)
- Joshua J Levy
  - Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
  - Department of Dermatology, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
  - Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
  - Program in Quantitative Biomedical Sciences, Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Natt Chan
  - Program in Quantitative Biomedical Sciences, Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Jonathan D Marotti
  - Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
  - Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Darcy A Kerr
  - Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
  - Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Edward J Gutmann
  - Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
  - Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Ryan E Glass
  - Department of Pathology, University of Pittsburgh Medical Center East, Pittsburgh, Pennsylvania, USA
- Arief A Suriawinata
  - Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
  - Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Brock C Christensen
  - Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
  - Department of Molecular and Systems Biology, Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
  - Department of Community and Family Medicine, Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Xiaoying Liu
  - Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
  - Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
- Louis J Vaickus
  - Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire, USA
  - Dartmouth College Geisel School of Medicine, Hanover, New Hampshire, USA
12
Giarnieri E, Scardapane S. Towards Artificial Intelligence Applications in Next Generation Cytopathology. Biomedicines 2023; 11:2225. [PMID: 37626721] [PMCID: PMC10452064] [DOI: 10.3390/biomedicines11082225]
Abstract
Over the last 20 years, techniques in computational pathology and machine learning have multiplied, improving our ability to analyze and interpret imaging. Neural networks in particular have been used for more than thirty years, starting with early-generation models for computer-assisted smear tests. Today, advanced machine learning working on large image datasets performs classification, detection, and segmentation with remarkable accuracy and generalization in several domains. Deep learning algorithms, as a branch of machine learning, are thus attracting attention in digital pathology and cytopathology, providing feasible solutions for accurate and efficient cytological diagnoses, ranging from efficient cell counts to automatic classification of anomalous cells and queries over large clinical databases. The integration of machine learning with related next-generation technologies powered by AI, such as augmented/virtual reality, the metaverse, and computational linguistic models, is a focus of interest in healthcare digitalization, supporting education, diagnosis, and therapy. In this work we consider how all these innovations can help cytopathology go beyond the microscope and undergo a hyper-digitalized transformation. We also discuss specific challenges to their application in the field, notably the requirement for large-scale cytopathology datasets, the necessity of new protocols for sharing information, and the need for further technological training for pathologists.
Affiliation(s)
- Enrico Giarnieri
  - Cytopathology Unit, Department of Clinical and Molecular Medicine, Sant’Andrea Hospital, Sapienza University of Rome, Piazzale Aldo Moro 5, 00189 Rome, Italy
- Simone Scardapane
  - Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, Via Eudossiana 18, 00196 Rome, Italy
13
Liang Y, Feng S, Liu Q, Kuang H, Liu J, Liao L, Du Y, Wang J. Exploring Contextual Relationships for Cervical Abnormal Cell Detection. IEEE J Biomed Health Inform 2023; 27:4086-4097. [PMID: 37192032] [DOI: 10.1109/jbhi.2023.3276919]
Abstract
Cervical abnormal cell detection is a challenging task because the morphological differences between abnormal and normal cells are usually subtle. To determine whether a cervical cell is normal or abnormal, cytopathologists routinely use surrounding cells as references for identifying its abnormality. To mimic this behavior, we propose exploring contextual relationships to boost the performance of cervical abnormal cell detection. Specifically, both cell-to-cell and cell-to-global-image contextual relationships are exploited to enhance the features of each region of interest (RoI) proposal. Accordingly, two modules, dubbed the RoI-relationship attention module (RRAM) and the global RoI attention module (GRAM), are developed, and their combination strategies are investigated. We establish a strong baseline using Double-Head Faster R-CNN with a feature pyramid network (FPN) and integrate RRAM and GRAM into it to validate the effectiveness of the proposed modules. Experiments on a large cervical cell detection dataset reveal that introducing either RRAM or GRAM achieves better average precision (AP) than the baseline, and cascading RRAM and GRAM outperforms state-of-the-art (SOTA) methods. Furthermore, we show that the proposed feature-enhancing scheme can facilitate image- and smear-level classification.
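A compact sketch of the two modules' roles: RRAM as self-attention among RoI features (cell-to-cell context) and GRAM as cross-attention from RoIs to a flattened global feature map (cell-to-image context). Dimensions and the use of nn.MultiheadAttention are simplifying assumptions rather than the paper's exact modules.

```python
# RoI feature enhancement with cell-to-cell (RRAM-like) and cell-to-image
# (GRAM-like) attention, in the style described by the abstract.
import torch
import torch.nn as nn


class RoIContextEnhancer(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.rram = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gram = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, roi_feats: torch.Tensor, global_feats: torch.Tensor):
        # roi_feats: (B, N_rois, D) pooled proposal features
        # global_feats: (B, H*W, D) flattened whole-image feature map
        ctx, _ = self.rram(roi_feats, roi_feats, roi_feats)   # RoI <-> RoI
        roi_feats = roi_feats + ctx                           # residual connection
        glob, _ = self.gram(roi_feats, global_feats, global_feats)  # RoI <- image
        return roi_feats + glob   # enhanced features for the detection heads


if __name__ == "__main__":
    enh = RoIContextEnhancer()
    rois = torch.randn(2, 100, 256)       # 100 proposals per image
    img = torch.randn(2, 50 * 50, 256)    # one flattened FPN level
    print(enh(rois, img).shape)           # torch.Size([2, 100, 256])
```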