1
Xing X, Murdoch S, Tang C, Papanastasiou G, Cross-Zamirski J, Guo Y, Xiao X, Schönlieb CB, Wang Y, Yang G. Can generative AI replace immunofluorescent staining processes? A comparison study of synthetically generated cellpainting images from brightfield. Comput Biol Med 2024; 182:109102. [PMID: 39255659 DOI: 10.1016/j.compbiomed.2024.109102] [Received: 06/11/2024] [Revised: 07/13/2024] [Accepted: 09/01/2024] [Indexed: 09/12/2024]
Abstract
Cell imaging assays utilising fluorescence stains are essential for observing sub-cellular organelles and their responses to perturbations. The immunofluorescent staining process is routine in labs; however, recent innovations in generative AI are challenging the need for wet-lab immunofluorescence (IF) staining, especially where the availability and cost of specific fluorescence dyes are a problem. Furthermore, the staining process takes time, introduces inter- and intra-technician variability, and hinders downstream image and data analysis as well as the reusability of image data for other projects. Recent studies in the literature have shown that synthetic IF images can be generated from brightfield (BF) images using generative AI algorithms. In this study, we therefore benchmark and compare five models from three types of IF generation backbones (CNN, GAN, and diffusion models) using a publicly available dataset. This paper not only serves as a comparative study to determine the best-performing model but also proposes a comprehensive analysis pipeline for evaluating the efficacy of generators in IF image synthesis. We highlight the potential of deep learning-based generators for IF image synthesis, while also discussing potential issues and future research directions. Although generative AI shows promise in simplifying cell phenotyping using only BF images, further research and validation are needed to address the key challenges of model generalisability, batch effects, feature relevance and computational costs.
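The abstract does not specify which image-quality metrics the proposed analysis pipeline reports, but a minimal sketch of two common full-reference measures such a comparison could use (PSNR and pixel-wise Pearson correlation, here over flattened pixel lists; an illustrative simplification, not the paper's pipeline) is:

```python
import math

def psnr(ref, gen, max_val=255.0):
    """Peak signal-to-noise ratio between a reference IF image and a
    generated one (as flat lists of pixel intensities); higher is closer."""
    mse = sum((r - g) ** 2 for r, g in zip(ref, gen)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

def pearson(ref, gen):
    """Pixel-wise Pearson correlation, a simple intensity-agreement score."""
    n = len(ref)
    mr, mg = sum(ref) / n, sum(gen) / n
    cov = sum((r - mr) * (g - mg) for r, g in zip(ref, gen))
    sr = math.sqrt(sum((r - mr) ** 2 for r in ref))
    sg = math.sqrt(sum((g - mg) ** 2 for g in gen))
    return cov / (sr * sg)
```

A generated channel that perfectly matches its ground-truth stain gives infinite PSNR and correlation 1.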
Affiliation(s)
- Xiaodan Xing
- Bioengineering Department and Imperial-X, Imperial College London, London, United Kingdom
- Siofra Murdoch
- Bioengineering Department and Imperial-X, Imperial College London, London, United Kingdom
- Chunling Tang
- Centre for Craniofacial & Regenerative Biology, King's College London, London, United Kingdom
- Giorgos Papanastasiou
- Archimedes Unit, Athena Research Centre, Athens, Greece; School of Computer Science and Electronic Engineering, The University of Essex, Essex, United Kingdom
- Jan Cross-Zamirski
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, United Kingdom
- Yunzhe Guo
- Centre for Craniofacial & Regenerative Biology, King's College London, London, United Kingdom
- Xianglu Xiao
- Bioengineering Department and Imperial-X, Imperial College London, London, United Kingdom
- Carola-Bibiane Schönlieb
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, United Kingdom
- Yinhai Wang
- Data Sciences and Quantitative Biology, Discovery Sciences, AstraZeneca R&D, Cambridge, United Kingdom
- Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, London, United Kingdom; National Heart and Lung Institute, Imperial College London, London, United Kingdom; School of Biomedical Engineering & Imaging Sciences, King's College London, London, United Kingdom; Cardiovascular Research Centre, Royal Brompton Hospital, London, United Kingdom.
2
Fu M, Fang M, Khan RA, Liao B, Hu Z, Wu FX. SG-Fusion: A swin-transformer and graph convolution-based multi-modal deep neural network for glioma prognosis. Artif Intell Med 2024; 157:102972. [PMID: 39232270 DOI: 10.1016/j.artmed.2024.102972] [Received: 10/12/2023] [Revised: 07/22/2024] [Accepted: 08/29/2024] [Indexed: 09/06/2024]
Abstract
The integration of morphological attributes extracted from histopathological images with genomic data holds significant importance for advancing tumor diagnosis, prognosis, and grading. Histopathological images are acquired through microscopic examination of tissue slices and provide valuable insights into cellular structures and pathological features, while genomic data describe tumor gene expression and functionality. The fusion of these two distinct data types is crucial for gaining a more comprehensive understanding of tumor characteristics and progression. Many previous studies relied on single-modal approaches for tumor diagnosis, which cannot fully harness the information from multiple data sources. To address this limitation, researchers have turned to multi-modal methods that leverage histopathological images and genomic data concurrently; these methods better capture the multifaceted nature of tumors and enhance diagnostic accuracy. Nonetheless, existing multi-modal methods have, to some extent, oversimplified the feature extraction for both modalities and the fusion process. In this study, we present a dual-branch neural network, SG-Fusion. For the histopathological modality, we use the Swin-Transformer structure to capture both local and global features and incorporate contrastive learning to encourage the model to discern commonalities and differences in the representation space. For the genomic modality, we develop a graph convolutional network based on gene functional and expression-level similarities. Additionally, our model integrates a cross-attention module to enhance information interaction between the modalities and employs divergence-based regularization to improve generalization. Validation on glioma datasets from The Cancer Genome Atlas demonstrates that SG-Fusion outperforms both single-modal methods and existing multi-modal approaches in both survival analysis and tumor grading.
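The cross-attention module described above is, at its core, scaled dot-product attention whose queries come from one branch and whose keys and values come from the other modality. A toy single-head version in plain Python (an illustrative sketch, not the SG-Fusion implementation):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention: each query vector
    (e.g. an image-branch token) attends over key/value vectors produced
    by the other branch (e.g. genomic tokens)."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```

With a single key/value pair, the attention weight is 1 and each query simply copies the value vector, which makes the mechanism easy to sanity-check.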
Affiliation(s)
- Minghan Fu
- Department of Mechanical Engineering, University of Saskatchewan, Saskatoon, S7N 5A9, SK, Canada
- Ming Fang
- Division of Biomedical Engineering, University of Saskatchewan, Saskatoon, S7N 5A9, SK, Canada
- Rayyan Azam Khan
- Division of Biomedical Engineering, University of Saskatchewan, Saskatoon, S7N 5A9, SK, Canada
- Bo Liao
- School of Mathematics and Statistics, Hainan Normal University, Haikou, 571158, Hainan, China
- Zhanli Hu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, Guangdong, China
- Fang-Xiang Wu
- Department of Mechanical Engineering, University of Saskatchewan, Saskatoon, S7N 5A9, SK, Canada; Division of Biomedical Engineering, University of Saskatchewan, Saskatoon, S7N 5A9, SK, Canada; Department of Computer Science, University of Saskatchewan, Saskatoon, S7N 5A9, SK, Canada.
3
Windsor R, Jamaludin A, Kadir T, Zisserman A. Automated detection, labelling and radiological grading of clinical spinal MRIs. Sci Rep 2024; 14:14993. [PMID: 38951574 PMCID: PMC11217300 DOI: 10.1038/s41598-024-64580-w] [Received: 11/19/2023] [Accepted: 06/11/2024] [Indexed: 07/03/2024]
Abstract
Spinal magnetic resonance (MR) scans are a vital tool for diagnosing the cause of back pain for many diseases and conditions. However, interpreting clinically useful information from these scans can be challenging, time-consuming and hard to reproduce across different radiologists. In this paper, we alleviate these problems by introducing a multi-stage automated pipeline for analysing spinal MR scans. This pipeline first detects and labels vertebral bodies across several commonly used sequences (e.g. T1w, T2w and STIR) and fields of view (e.g. lumbar, cervical, whole spine). Using these detections it then performs automated diagnosis for several spinal disorders, including intervertebral disc degenerative changes in T1w and T2w lumbar scans, and spinal metastases, cord compression and vertebral fractures. To achieve this, we propose a new method of vertebrae detection and labelling, using vector fields to group together detected vertebral landmarks and a language-modelling inspired beam search to determine the corresponding levels of the detections. We also employ a new transformer-based architecture to perform radiological grading which incorporates context from multiple vertebrae and sequences, as a real radiologist would. The performance of each stage of the pipeline is tested in isolation on several clinical datasets, each consisting of 66 to 421 scans. The outputs are compared to manual annotations of expert radiologists, demonstrating accurate vertebrae detection across a range of scan parameters. Similarly, the model's grading predictions for various types of disc degeneration and detection of spinal metastases closely match those of an expert radiologist. To aid future research, our code and trained models are made publicly available.
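The paper's language-modelling-inspired beam search decodes vertebral levels for an ordered sequence of detections. A much-simplified sketch of the idea (assuming, for illustration only, that neighbouring detections always differ by exactly one level; the real decoder is in the authors' released code):

```python
def beam_search_levels(scores, beam_width=3):
    """Assign a level label to each detected vertebra in order, keeping
    the `beam_width` highest-scoring partial sequences. scores[i][l] is
    the model's score for detection i being level l."""
    n_levels = len(scores[0])
    # Each beam is (total_score, last_label, labels_so_far).
    beams = sorted(((scores[0][l], l, [l]) for l in range(n_levels)),
                   reverse=True)[:beam_width]
    for row in scores[1:]:
        candidates = []
        for total, last, labels in beams:
            nxt = last + 1  # consecutive-level constraint
            if nxt < n_levels:
                candidates.append((total + row[nxt], nxt, labels + [nxt]))
        beams = sorted(candidates, reverse=True)[:beam_width]
    return beams[0][2]
```

For two detections whose scores peak at levels 1 and 2, the search returns `[1, 2]` rather than greedily committing to an inconsistent pair.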
Affiliation(s)
- Rhydian Windsor
- Visual Geometry Group, Department of Engineering Science, University of Oxford, Oxford, UK
- Amir Jamaludin
- Visual Geometry Group, Department of Engineering Science, University of Oxford, Oxford, UK
- Timor Kadir
- Visual Geometry Group, Department of Engineering Science, University of Oxford, Oxford, UK
- Andrew Zisserman
- Visual Geometry Group, Department of Engineering Science, University of Oxford, Oxford, UK.
4
Salehi MA, Mohammadi S, Harandi H, Zakavi SS, Jahanshahi A, Shahrabi Farahani M, Wu JS. Diagnostic Performance of Artificial Intelligence in Detection of Primary Malignant Bone Tumors: a Meta-Analysis. J Imaging Inform Med 2024; 37:766-777. [PMID: 38343243 PMCID: PMC11031503 DOI: 10.1007/s10278-023-00945-3] [Received: 08/01/2023] [Revised: 10/04/2023] [Accepted: 10/12/2023] [Indexed: 04/20/2024]
Abstract
We aim to conduct a meta-analysis of studies that evaluated the diagnostic performance of artificial intelligence (AI) algorithms in detecting primary bone tumors, distinguishing them from other bone lesions, and comparing them with clinician assessment. A systematic search was conducted using a combination of keywords related to bone tumors and AI. After extracting contingency tables from all included studies, we performed a meta-analysis using a random-effects model to determine the pooled sensitivity and specificity, accompanied by their respective 95% confidence intervals (CI). Quality assessment was performed using a modified version of the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) statement and the Prediction Model Study Risk of Bias Assessment Tool (PROBAST). On internal validation test sets, the pooled sensitivities for AI algorithms and clinicians in detecting bone neoplasms were 84% (95% CI: 79-88) and 76% (95% CI: 64-85), and the pooled specificities were 86% (95% CI: 81-90) and 64% (95% CI: 55-72), respectively. At external validation, the pooled sensitivity and specificity for AI algorithms were 84% (95% CI: 75-90) and 91% (95% CI: 83-96), respectively; for clinicians, they were 85% (95% CI: 73-92) and 94% (95% CI: 89-97). The sensitivity and specificity for clinicians with AI assistance were 95% (95% CI: 86-98) and 57% (95% CI: 48-66). Caution is needed when interpreting these findings due to potential limitations, and further research is needed to bridge this gap in scientific understanding and promote effective implementation in medical practice.
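For illustration, sensitivity and specificity come directly from each study's 2x2 contingency table, and per-study proportions can be pooled by inverse-variance weighting. This sketch uses a fixed-effect simplification; the paper's random-effects model additionally estimates between-study variance:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 contingency table."""
    return tp / (tp + fn), tn / (tn + fp)

def pooled_proportion(events, totals):
    """Inverse-variance (fixed-effect) pooling of per-study proportions.
    Assumes 0 < events < totals for each study so the binomial variance
    is non-zero."""
    weights, estimates = [], []
    for e, n in zip(events, totals):
        p = e / n
        var = p * (1 - p) / n  # binomial variance of the proportion
        weights.append(1.0 / var)
        estimates.append(p)
    return sum(w * p for w, p in zip(weights, estimates)) / sum(weights)
```

A study with 84 true positives out of 100 diseased cases and 86 true negatives out of 100 healthy cases yields sensitivity 0.84 and specificity 0.86, matching how the pooled figures above are built up study by study.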
Affiliation(s)
- Mohammad Amin Salehi
- School of Medicine, Tehran University of Medical Sciences, Pour Sina St, Keshavarz Blvd, Tehran, 1417613151, Iran
- Soheil Mohammadi
- School of Medicine, Tehran University of Medical Sciences, Pour Sina St, Keshavarz Blvd, Tehran, 1417613151, Iran
- Hamid Harandi
- School of Medicine, Tehran University of Medical Sciences, Pour Sina St, Keshavarz Blvd, Tehran, 1417613151, Iran
- Seyed Sina Zakavi
- School of Medicine, Tabriz University of Medical Sciences, Tabriz, Iran
- Ali Jahanshahi
- School of Medicine, Guilan University of Medical Sciences, Rasht, Iran
- Jim S Wu
- Department of Radiology, Beth Israel Deaconess Medical Center, Harvard Medical School, 330 Brookline Avenue, Boston, MA, 02215, USA.
5
De A, Mishra N, Chang HT. An approach to the dermatological classification of histopathological skin images using a hybridized CNN-DenseNet model. PeerJ Comput Sci 2024; 10:e1884. [PMID: 38435616 PMCID: PMC10909212 DOI: 10.7717/peerj-cs.1884] [Received: 09/26/2023] [Accepted: 01/29/2024] [Indexed: 03/05/2024]
Abstract
This research addresses the challenge of automating skin disease diagnosis using dermatoscopic images. The primary issue lies in accurately classifying pigmented skin lesions, which traditionally rely on manual assessment by dermatologists and are prone to subjectivity and time consumption. By integrating a hybrid CNN-DenseNet model, this study aimed to overcome the complexities of differentiating various skin diseases and to automate the diagnostic process effectively. Our methodology involved rigorous data preprocessing, exploratory data analysis, normalization, and label encoding. Techniques such as model hybridization and batch normalization were employed to optimize the model architecture and data fitting. Initial iterations of our convolutional neural network (CNN) model achieved an accuracy of 76.22% on the test data and 75.69% on the validation data. Recognizing the need for improvement, we hybridized the model with the DenseNet architecture, implemented a ResNet architecture for feature extraction, and further trained the model on the HAM10000 and PAD-UFES-20 datasets. Overall, our efforts resulted in a hybrid model with an accuracy of 95.7% on the HAM10000 dataset and 91.07% on the PAD-UFES-20 dataset. Compared with recently published works, our model stands out because it can effectively diagnose skin diseases such as melanocytic nevi, melanoma, benign keratosis-like lesions, basal cell carcinoma, actinic keratoses, vascular lesions, and dermatofibroma, rivaling the diagnostic accuracy of clinical specialists while also offering customization potential for more nuanced clinical uses.
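The label-encoding step mentioned above simply maps lesion-class names to integer ids before training; a minimal illustration (the class names here are hypothetical shorthand, not the datasets' exact labels), together with the accuracy metric the results report:

```python
def encode_labels(labels):
    """Map class-name strings to integer ids (sorted for determinism)."""
    classes = sorted(set(labels))
    to_id = {c: i for i, c in enumerate(classes)}
    return [to_id[l] for l in labels], to_id

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```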
Affiliation(s)
- Anubhav De
- School of Computing Science & Engineering, VIT Bhopal University, Madhya Pradesh, India
- Nilamadhab Mishra
- School of Computing Science & Engineering, VIT Bhopal University, Madhya Pradesh, India
- Hsien-Tsung Chang
- Department of Computer Science and Information Engineering, Chang Gung University, Taoyuan, Taiwan
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Artificial Intelligence Research Center, Chang Gung University, Taoyuan, Taiwan
- Bachelor Program in Artificial Intelligence, Chang Gung University, Taoyuan, Taiwan.
6
Zhang Y, Hu M, Zhao W, Liu X, Peng Q, Meng B, Yang S, Feng X, Zhang L. A Bibliometric Analysis of Artificial Intelligence Applications in Spine Care. J Neurol Surg A Cent Eur Neurosurg 2024; 85:62-73. [PMID: 36640757 DOI: 10.1055/a-2013-3149] [Indexed: 01/15/2023]
Abstract
BACKGROUND With the rapid development of science and technology, artificial intelligence (AI) has been widely used in the diagnosis and prognosis of various spine diseases and has shown broad prospects for the accurate diagnosis and treatment of spine disorders. METHODS On May 7, 2022, the Web of Science (WOS) Core Collection database was searched for documents on the application of AI in the field of spine care. HistCite and VOSviewer were used for citation analysis and visualization mapping. RESULTS A total of 693 documents were included in the final analysis. The most prolific authors were Karhade A.V. and Schwab J.H., and the most prolific institution was Northwestern University in Illinois, USA. The United States was the most productive country and, in the network visualization map, formed the largest network of international cooperation. The leading journal was Spine. The most frequently used keyword was "spinal", and the keyword "machine learning" had the strongest total link strength (TLS) and the largest number of occurrences. The latest trends suggest that AI for the diagnosis of spine diseases may receive widespread attention in the future. CONCLUSIONS AI has a wide range of applications in the field of spine care, and an increasing number of scholars are committed to research on its use in this field. Bibliometric analysis of AI and spine research provides an overall perspective, and the appreciation and study of these influential publications are useful for future research.
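VOSviewer's total link strength (TLS) for a keyword is, roughly, the summed strength of its co-occurrence links with all other keywords. A simplified computation over a toy document set (counting each co-occurring keyword pair once per document; VOSviewer's actual counting options differ in detail):

```python
from collections import Counter
from itertools import combinations

def total_link_strength(documents):
    """TLS per keyword: for each document (a list of keywords), every
    distinct keyword pair forms a link of strength 1; a keyword's TLS
    is the sum of the strengths of all links it participates in."""
    links = Counter()
    for kws in documents:
        for a, b in combinations(sorted(set(kws)), 2):
            links[(a, b)] += 1
    tls = Counter()
    for (a, b), w in links.items():
        tls[a] += w
        tls[b] += w
    return dict(tls)
```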
Affiliation(s)
- Yu Zhang
- Department of Orthopedics, Clinical Medical College of Yangzhou University, Yangzhou, China
- Man Hu
- Graduate School of Dalian Medical University, Dalian, China
- Wenjie Zhao
- Graduate School of Dalian Medical University, Dalian, China
- Xin Liu
- Department of Orthopedics, Clinical Medical College of Yangzhou University, Yangzhou, China
- Qing Peng
- Department of Orthopedics, Clinical Medical College of Yangzhou University, Yangzhou, China
- Bo Meng
- Graduate School of Dalian Medical University, Dalian, China
- Sheng Yang
- Graduate School of Dalian Medical University, Dalian, China
- Xinmin Feng
- Department of Orthopedics, Clinical Medical College of Yangzhou University, Yangzhou, China
- Liang Zhang
- Department of Orthopedics, Clinical Medical College of Yangzhou University, Yangzhou, China.
7
Zhao S, Li X, He J, Chen B, Li S. Sequence based local-global information fusion framework for vertebrae detection under pathological and FOV variation challenges. Comput Med Imaging Graph 2023; 108:102244. [PMID: 37429121 DOI: 10.1016/j.compmedimag.2023.102244] [Received: 02/21/2023] [Revised: 05/03/2023] [Accepted: 05/11/2023] [Indexed: 07/12/2023]
Abstract
Automated vertebrae detection (identification and localization) aims to identify vertebrae and locate their centroids in medical images, a critical step for spinal computer-aided systems. However, due to unpredictable fields of view and various pathological cases, image content is diverse and vertebral morphology can be abnormal in many ways, which challenges such systems. In this paper, we propose an effective sequence-based framework, robust to a variety of difficult cases, for accurate vertebrae identification and localization. Our method consists of three sub-modules: (1) a Local Feature Extraction (LFE) module uses a shape-compatible, category-balanced sampler to collect patches for training a convolutional neural network, which extracts representative local features and generates score maps; (2) a Discriminative Sequential Image Description (DSID) module applies a node-screening strategy to construct reliable vertebral feature sequences from the feature maps and score maps, which effectively prevents false positives and false negatives in lightweight dense prediction schemes and fuses local features into a hierarchical, discriminative description of the given image; (3) a Spinal Pattern Exploitation (SPE) module uses an end-balanced relative-position learning scheme to fuse hierarchical local-global information, comprehensively exploiting spinal patterns to overcome the FOV and pathological variation challenges in vertebrae detection. Extensive experiments on a challenging dataset of 450 spinal MRIs show that the identification rate of FSDF reaches 0.974 ± 0.025 and the localization error is only 4.742 ± 2.928 pixels, demonstrating the effectiveness of our method under pathological and field-of-view variations and its superiority over other state-of-the-art methods.
Affiliation(s)
- Shen Zhao
- School of Intelligent Engineering, Sun Yat-sen University, Shenzhen 518107, China
- Xiangsheng Li
- School of Intelligent Engineering, Sun Yat-sen University, Shenzhen 518107, China; Department of Automation, University of Science and Technology of China, Hefei 230027, China
- Jiayi He
- School of Intelligent Engineering, Sun Yat-sen University, Shenzhen 518107, China
- Bin Chen
- Orthopedics Department, The First Affiliated Hospital of Zhejiang University, Hangzhou 310003, China
- Shuo Li
- Department of Biomedical Engineering, Case Western Reserve University, OH, USA.