1
Sharkey MJ, Checkley EW, Swift AJ. Applications of artificial intelligence in computed tomography imaging for phenotyping pulmonary hypertension. Curr Opin Pulm Med 2024; 30:464-472. [PMID: 38989815 PMCID: PMC11309337 DOI: 10.1097/mcp.0000000000001103]
Abstract
PURPOSE OF REVIEW Pulmonary hypertension is a heterogeneous condition with significant morbidity and mortality. Computed tomography (CT) plays a central role in determining the phenotype of pulmonary hypertension, informing treatment strategies. Many artificial intelligence tools have been developed in this modality for the assessment of pulmonary hypertension. This article reviews the latest CT artificial intelligence applications in pulmonary hypertension and related diseases. RECENT FINDINGS Multistructure segmentation tools have been developed in both pulmonary hypertension and non-pulmonary hypertension cohorts using state-of-the-art U-Net architectures. These segmentations correspond well with those of trained radiologists, giving clinically valuable metrics in significantly less time. Artificial intelligence lung parenchymal assessment accurately identifies and quantifies lung disease patterns by integrating multiple radiomic techniques such as texture analysis and classification. This gives valuable information on disease burden and prognosis. There are many accurate artificial intelligence tools to detect acute pulmonary embolism. Detection of chronic pulmonary embolism proves more challenging, with further research required. SUMMARY Numerous artificial intelligence tools are being developed to identify and quantify many clinically relevant parameters in both pulmonary hypertension and related disease cohorts. These potentially provide accurate and efficient clinical information, impacting clinical decision-making.
Affiliation(s)
- Michael J. Sharkey
- Department of Clinical Medicine, University of Sheffield
- 3D Imaging Lab, Sheffield Teaching Hospitals NHS Foundation Trust
- Andrew J. Swift
- Department of Clinical Medicine, University of Sheffield
- Insigneo Institute for in Silico Medicine, University of Sheffield
- National Institute for Health and Care Research, Sheffield Biomedical Research Centre, Sheffield, UK
2
Podobnik G, Ibragimov B, Tappeiner E, Lee C, Kim JS, Mesbah Z, Modzelewski R, Ma Y, Yang F, Rudecki M, Wodziński M, Peterlin P, Strojan P, Vrtovec T. HaN-Seg: The head and neck organ-at-risk CT and MR segmentation challenge. Radiother Oncol 2024; 198:110410. [PMID: 38917883 DOI: 10.1016/j.radonc.2024.110410]
Abstract
BACKGROUND AND PURPOSE To promote the development of auto-segmentation methods for head and neck (HaN) radiation treatment (RT) planning that exploit the information of computed tomography (CT) and magnetic resonance (MR) imaging modalities, we organized HaN-Seg: The Head and Neck Organ-at-Risk CT and MR Segmentation Challenge. MATERIALS AND METHODS The challenge task was to automatically segment 30 organs-at-risk (OARs) of the HaN region in 14 withheld test cases, given 42 publicly available training cases. Each case consisted of one contrast-enhanced CT and one T1-weighted MR image of the HaN region of the same patient, with up to 30 corresponding reference OAR delineation masks. Performance was evaluated in terms of the Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (HD95), and statistical ranking was applied for each metric by pairwise comparison of the submitted methods using the Wilcoxon signed-rank test. RESULTS While 23 teams registered for the challenge, only seven submitted their methods for the final phase. The top-performing team achieved a DSC of 76.9% and an HD95 of 3.5 mm. All participating teams utilized architectures based on U-Net, with the winning team leveraging rigid MR-to-CT registration combined with network entry-level concatenation of both modalities. CONCLUSION This challenge simulated a real-world clinical scenario by providing non-registered MR and CT images with varying fields-of-view and voxel sizes. Remarkably, the top-performing teams achieved segmentation performance surpassing the inter-observer agreement on the same dataset. These results set a benchmark for future research on this publicly available dataset and on paired multi-modal image segmentation in general.
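The two challenge metrics above can be computed directly from binary masks. The following is a minimal sketch, not the challenge's official evaluation code; for brevity, the HD95 here is simplified to use all foreground voxels rather than only surface voxels.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # Two empty masks are in perfect agreement by convention.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between binary masks.

    Simplified: uses all foreground voxels rather than only surface voxels.
    `spacing` converts voxel distances to physical units (e.g. mm).
    """
    a, b = a.astype(bool), b.astype(bool)
    # Distance of every voxel to the nearest foreground voxel of the other mask.
    dist_to_b = distance_transform_edt(~b, sampling=spacing)
    dist_to_a = distance_transform_edt(~a, sampling=spacing)
    return np.percentile(np.concatenate([dist_to_b[a], dist_to_a[b]]), 95)
```

A mask identical to itself yields a DSC of 1.0 and an HD95 of 0.0, which is a quick sanity check when wiring up an evaluation pipeline.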
Affiliation(s)
- Gašper Podobnik
- University of Ljubljana, Faculty of Electrical Engineering, Tržaška cesta 25, Ljubljana 1000, Slovenia
- Bulat Ibragimov
- University of Ljubljana, Faculty of Electrical Engineering, Tržaška cesta 25, Ljubljana 1000, Slovenia; University of Copenhagen, Department of Computer Science, Universitetsparken 1, Copenhagen 2100, Denmark
- Elias Tappeiner
- UMIT Tirol - Private University for Health Sciences and Health Technology, Eduard-Wallnöfer-Zentrum 1, Hall in Tirol 6060, Austria
- Chanwoong Lee
- Yonsei University, College of Medicine, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea; Yonsei Cancer Center, Department of Radiation Oncology, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea
- Jin Sung Kim
- Yonsei University, College of Medicine, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea; Yonsei Cancer Center, Department of Radiation Oncology, 50-1 Yonsei-ro, Seodaemun-gu, Seoul 03722, South Korea; Oncosoft Inc, 37 Myeongmul-gil, Seodaemun-gu, Seoul 03722, South Korea
- Zacharia Mesbah
- Henri Becquerel Cancer Center, 1 Rue d'Amiens, Rouen 76000, France; Siemens Healthineers, 6 Rue du Général Audran, CS20146, Courbevoie 92412, France
- Romain Modzelewski
- Henri Becquerel Cancer Center, 1 Rue d'Amiens, Rouen 76000, France; Litis UR 4108, 684 Av. de l'Université, Saint-Étienne-du-Rouvray 76800, France
- Yihao Ma
- Guizhou Medical University, School of Biology & Engineering, Ankang Avenue, Gui'an New Area, Guiyang, Guizhou Province 561113, China
- Fan Yang
- Guizhou Medical University, School of Biology & Engineering, Ankang Avenue, Gui'an New Area, Guiyang, Guizhou Province 561113, China
- Mikołaj Rudecki
- AGH University of Kraków, Department of Measurement and Electronics, Mickiewicza 30, Kraków 30-059, Poland
- Marek Wodziński
- AGH University of Kraków, Department of Measurement and Electronics, Mickiewicza 30, Kraków 30-059, Poland; University of Applied Sciences Western Switzerland, Information Systems Institute, Rue de la Plaine 2, Sierre 3960, Switzerland
- Primož Peterlin
- Institute of Oncology, Ljubljana, Zaloška cesta 2, Ljubljana 1000, Slovenia
- Primož Strojan
- Institute of Oncology, Ljubljana, Zaloška cesta 2, Ljubljana 1000, Slovenia
- Tomaž Vrtovec
- University of Ljubljana, Faculty of Electrical Engineering, Tržaška cesta 25, Ljubljana 1000, Slovenia
3
Crawley R, Amirrajab S, Lustermans D, Holtackers RJ, Plein S, Veta M, Breeuwer M, Chiribiri A, Scannell CM. Automated cardiovascular MR myocardial scar quantification with unsupervised domain adaptation. Eur Radiol Exp 2024; 8:93. [PMID: 39143405 PMCID: PMC11324636 DOI: 10.1186/s41747-024-00497-3]
Abstract
Quantification of myocardial scar from late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) images can be facilitated by automated artificial intelligence (AI)-based analysis. However, AI models are susceptible to domain shift, in which model performance degrades when applied to data with different characteristics than the original training data. In this study, CycleGAN models were trained to translate local hospital data to the appearance of a public LGE CMR dataset. After domain adaptation, an AI scar quantification pipeline including myocardium segmentation, scar segmentation, and computation of scar burden, previously developed on the public dataset, was evaluated on an external test set of 44 patients clinically assessed for ischemic scar. The mean ± standard deviation Dice similarity coefficients between the manual and AI-predicted segmentations were similar to those previously reported: 0.76 ± 0.05 for myocardium, 0.75 ± 0.32 for scar across all patients, and 0.41 ± 0.12 for scar in scans with pathological findings. Bland-Altman analysis showed a mean bias in scar burden percentage of -0.62%, with limits of agreement from -8.4% to 7.17%. These results show the feasibility of deploying AI models, trained with public data, for LGE CMR quantification on local clinical data using unsupervised CycleGAN-based domain adaptation. RELEVANCE STATEMENT: Our study demonstrates that AI models trained on public databases can be applied to patient data acquired at a specific institution with different acquisition settings, without additional manual labor to obtain further training labels.
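The Bland-Altman statistics quoted above (a mean bias with 95% limits of agreement) follow directly from the paired differences between manual and AI measurements. A minimal sketch with illustrative numbers, not the study's data:

```python
import numpy as np

def bland_altman(manual, predicted):
    """Bland-Altman agreement statistics for paired measurements.

    Returns the mean bias and the 95% limits of agreement,
    i.e. bias ± 1.96 × SD of the paired differences.
    """
    manual = np.asarray(manual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    diffs = predicted - manual
    bias = diffs.mean()
    sd = diffs.std(ddof=1)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired scar-burden percentages (hypothetical values).
bias, lo, hi = bland_altman([10, 20, 30, 40], [11, 19, 31, 41])
```

For the four hypothetical pairs above the differences are (1, -1, 1, 1), giving a bias of 0.5 and limits of agreement of -1.46 to 2.46.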
Affiliation(s)
- Richard Crawley
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Sina Amirrajab
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Didier Lustermans
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Robert J Holtackers
- Cardiovascular Research Institute Maastricht (CARIM), Maastricht University, Maastricht, the Netherlands
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Center, Maastricht, the Netherlands
- Sven Plein
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Leeds Institute of Cardiovascular and Metabolic Medicine, University of Leeds, Leeds, UK
- Mitko Veta
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Marcel Breeuwer
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
- Amedeo Chiribiri
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Cian M Scannell
- School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
- Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, the Netherlands
4
Holzschuh JC, Mix M, Freitag MT, Hölscher T, Braune A, Kotzerke J, Vrachimis A, Doolan P, Ilhan H, Marinescu IM, Spohn SKB, Fechter T, Kuhn D, Gratzke C, Grosu R, Grosu AL, Zamboglou C. The impact of multicentric datasets for the automated tumor delineation in primary prostate cancer using convolutional neural networks on 18F-PSMA-1007 PET. Radiat Oncol 2024; 19:106. [PMID: 39113123 PMCID: PMC11304577 DOI: 10.1186/s13014-024-02491-w]
Abstract
PURPOSE Convolutional neural networks (CNNs) have emerged as transformative tools in radiation oncology, significantly advancing the precision of contouring practices. However, the adaptability of these algorithms across diverse scanners, institutions, and imaging protocols remains a considerable obstacle. This study investigates the effect of incorporating institution-specific datasets into the training regimen of CNNs to assess their generalization ability in real-world clinical environments. Taking a data-centric perspective, the influence of multi-center versus single-center training on algorithm performance is analyzed. METHODS nnU-Net is trained on a dataset comprising 161 18F-PSMA-1007 PET images collected from four institutions (Freiburg: n = 96, Munich: n = 19, Cyprus: n = 32, Dresden: n = 14). The dataset is partitioned such that data from each center are systematically excluded from training and used solely for testing, to assess the model's generalizability to data from unfamiliar sources. Performance is compared through five-fold cross-validation, providing a detailed comparison between models trained on single-center datasets and those trained on aggregated multi-center datasets. Dice similarity coefficient (DSC), Hausdorff distance, and volumetric analysis are the primary evaluation metrics. RESULTS The mixed training approach yielded a median DSC of 0.76 (IQR: 0.64-0.84) in five-fold cross-validation, showing no significant difference (p = 0.18) compared to models trained with data exclusion from each center, which achieved a median DSC of 0.74 (IQR: 0.56-0.86). Significant performance improvements from multi-center training were observed for the Dresden cohort (multi-center median DSC 0.71, IQR: 0.58-0.80 vs. single-center 0.68, IQR: 0.50-0.80, p < 0.001) and the Cyprus cohort (multi-center 0.74, IQR: 0.62-0.83 vs. single-center 0.72, IQR: 0.54-0.82, p < 0.01). While Munich and Freiburg also showed improvements with multi-center training, the differences were not statistically significant (Munich: multi-center DSC 0.74, IQR: 0.60-0.80 vs. single-center 0.72, IQR: 0.59-0.82, p > 0.05; Freiburg: multi-center 0.78, IQR: 0.53-0.87 vs. single-center 0.71, IQR: 0.53-0.83, p = 0.23). CONCLUSION CNNs trained for auto-contouring of the intraprostatic gross tumor volume (GTV) in 18F-PSMA-1007 PET on a diverse multi-center dataset mostly generalize well to unseen data from other centers. Training on a multi-center dataset can improve performance compared to training exclusively on a single-center dataset for intraprostatic 18F-PSMA-1007 PET GTV segmentation. The segmentation performance of the same CNN can vary depending on the datasets used for training and testing.
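The per-cohort p-values above come from comparing paired per-case scores between the two training strategies. A minimal sketch of such a paired, non-parametric comparison with SciPy's Wilcoxon signed-rank test, using synthetic Dice scores (not the study's data):

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative synthetic per-case Dice scores: the same 30 test cases
# scored under single-center and multi-center training (hypothetical).
rng = np.random.default_rng(0)
single_center = np.clip(rng.normal(0.72, 0.08, size=30), 0.0, 1.0)
multi_center = np.clip(single_center + rng.normal(0.02, 0.03, size=30), 0.0, 1.0)

# Paired, non-parametric test: no normality assumption on the score differences.
stat, p_value = wilcoxon(multi_center, single_center)
print(f"Wilcoxon statistic={stat:.1f}, p={p_value:.4f}")
```

The pairing matters: each test case is its own control, so case-to-case difficulty variation does not inflate the variance of the comparison.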
Affiliation(s)
- Julius C Holzschuh
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany.
- Division of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany.
- Michael Mix
- Department of Nuclear Medicine, Faculty of Medicine, Medical Center - University of Freiburg, Freiburg, Germany
- Martin T Freitag
- Department of Nuclear Medicine, Faculty of Medicine, Medical Center - University of Freiburg, Freiburg, Germany
- Tobias Hölscher
- Department of Radiotherapy and Radiation Oncology, Faculty of Medicine, University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Anja Braune
- Department of Nuclear Medicine, Faculty of Medicine, University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Jörg Kotzerke
- Department of Nuclear Medicine, Faculty of Medicine, University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Alexis Vrachimis
- Department of Nuclear Medicine, German Oncology Center, European University Cyprus, Limassol, Cyprus
- Paul Doolan
- Department of Medical Physics, German Oncology Center, European University Cyprus, Limassol, Cyprus
- Harun Ilhan
- Department of Nuclear Medicine, University Hospital - Ludwig-Maximilians-Universität, Munich, Germany
- Ioana M Marinescu
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Simon K B Spohn
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Tobias Fechter
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Division of Medical Physics, Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Dejan Kuhn
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Division of Medical Physics, Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Christian Gratzke
- Department of Urology, Medical Center - University of Freiburg, Freiburg, Germany
- Radu Grosu
- Cyber-Physical Systems Division, Institute of Computer Engineering and Faculty of Informatics, Technical University of Vienna, Vienna, Austria
- Department of Computer Science, State University of New York at Stony Brook, Stony Brook, NY, USA
- Anca-Ligia Grosu
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- C Zamboglou
- Department of Radiation Oncology, Faculty of Medicine, Medical Center - University of Freiburg, University of Freiburg, German Cancer Consortium (DKTK), Partner Site DKTK, Freiburg, Germany
- Department of Radiation Oncology, German Oncology Center, European University Cyprus, Limassol, Cyprus
5
Cimini BA. Creating and troubleshooting microscopy analysis workflows: Common challenges and common solutions. J Microsc 2024; 295:93-101. [PMID: 38532662 PMCID: PMC11245365 DOI: 10.1111/jmi.13288]
Abstract
As microscopy diversifies and becomes ever more complex, the quantification of microscopy images has emerged as a major roadblock for many researchers. All researchers must face certain challenges in turning microscopy images into answers, independent of their scientific question and the images they have generated. Challenges may arise at many stages of the analysis process, including handling of image files, image pre-processing, object finding, measurement, and statistical analysis. While the exact solution required for each obstacle will be problem-specific, by keeping analysis in mind, optimizing data quality, understanding tools and tradeoffs, breaking workflows and datasets into chunks, talking to experts, and thoroughly documenting what has been done, analysts at any experience level can learn to overcome these challenges and create better and easier image analyses.
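The workflow stages named above (pre-processing, object finding, measurement) can be illustrated end to end in a few lines. A toy sketch on a synthetic image with SciPy; the threshold value and filter settings are arbitrary choices for this example, not recommendations:

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic image: noisy background plus two bright square "objects".
rng = np.random.default_rng(1)
image = rng.normal(0.1, 0.02, size=(64, 64))
image[10:20, 10:20] += 1.0
image[40:55, 40:55] += 1.0

smoothed = ndi.gaussian_filter(image, sigma=1)   # pre-processing: denoise
mask = smoothed > 0.5                            # object finding: fixed threshold
labels, n_objects = ndi.label(mask)              # instance labeling
# measurement: per-object area in pixels
sizes = ndi.sum(mask, labels, index=range(1, n_objects + 1))
print(n_objects, sizes)
```

Each stage here is a separate, inspectable intermediate, which is exactly the "break workflows into chunks" advice in practice: when a result looks wrong, the faulty stage can be isolated by examining `smoothed`, `mask`, or `labels` directly.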
Affiliation(s)
- Beth A Cimini
- Broad Institute of MIT and Harvard, Cambridge, MA, USA
6
Sweeney PW, Hacker L, Lefebvre TL, Brown EL, Gröhl J, Bohndiek SE. Unsupervised segmentation of 3D microvascular photoacoustic images using deep generative learning. Adv Sci (Weinh) 2024; 11:e2402195. [PMID: 38923324 DOI: 10.1002/advs.202402195]
Abstract
Mesoscopic photoacoustic imaging (PAI) enables label-free visualization of vascular networks in tissues with high contrast and resolution. Segmenting these networks from 3D PAI data and interpreting their physiological and pathological significance is crucial yet challenging due to the time-consuming and error-prone nature of current methods. Deep learning offers a potential solution; however, supervised analysis frameworks typically require human-annotated ground-truth labels. To address this, an unsupervised image-to-image translation deep learning model is introduced: the Vessel Segmentation Generative Adversarial Network (VAN-GAN). VAN-GAN integrates synthetic blood vessel networks that closely resemble real-life anatomy into its training process and learns to replicate the underlying physics of the PAI system in order to segment vasculature from 3D photoacoustic images. Applied to a diverse range of in silico, in vitro, and in vivo data, including patient-derived breast cancer xenograft models and 3D clinical angiograms, VAN-GAN demonstrates its capability to facilitate accurate and unbiased segmentation of 3D vascular networks. By leveraging synthetic data, VAN-GAN reduces the reliance on manual labeling, thus lowering the barrier to entry for high-quality blood vessel segmentation (F1 score: VAN-GAN vs. U-Net = 0.84 vs. 0.87) and enhancing preclinical and clinical research into vascular structure and function.
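The cycle-consistency idea that lets CycleGAN-style models such as VAN-GAN train without paired ground truth can be stated in a few lines. A toy sketch in which the two "generators" are stand-in invertible intensity shifts (real generators are neural networks):

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """L1 cycle-consistency loss ||G_BA(G_AB(x)) - x||_1: the penalty that
    constrains unpaired image-to-image translation to preserve content."""
    return float(np.mean(np.abs(g_ba(g_ab(x)) - x)))

# Toy stand-ins for the two generators: a shift and its inverse.
g_ab = lambda x: x + 0.3
g_ba = lambda x: x - 0.3
x = np.linspace(0.0, 1.0, 5)
perfect = cycle_consistency_loss(x, g_ab, g_ba)                  # cycle recovers x
imperfect = cycle_consistency_loss(x, g_ab, lambda v: v - 0.2)   # cycle drifts
```

When the two mappings invert each other, the loss is (numerically) zero; any drift in the round trip is penalized, which is what discourages the generators from hallucinating or discarding structure.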
Affiliation(s)
- Paul W Sweeney
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Lina Hacker
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Thierry L Lefebvre
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Emma L Brown
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Janek Gröhl
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
- Sarah E Bohndiek
- Cancer Research UK Cambridge Institute, University of Cambridge, Robinson Way, Cambridge, CB2 0RE, UK
- Department of Physics, University of Cambridge, JJ Thomson Avenue, Cambridge, CB3 0HE, UK
7
Yagis E, Aslani S, Jain Y, Zhou Y, Rahmani S, Brunet J, Bellier A, Werlein C, Ackermann M, Jonigk D, Tafforeau P, Lee PD, Walsh C. Deep learning for 3D vascular segmentation in phase contrast tomography. Res Sq [Preprint] 2024:rs.3.rs-4613439. [PMID: 39070623 PMCID: PMC11276017 DOI: 10.21203/rs.3.rs-4613439/v1]
Abstract
Automated blood vessel segmentation is critical for biomedical image analysis, as vessel morphology changes are associated with numerous pathologies. Still, precise segmentation is difficult due to the complexity of vascular structures, anatomical variations across patients, the scarcity of annotated public datasets, and image quality. Our goal is to provide a foundation on the topic and identify a robust baseline model for vascular segmentation using a new imaging modality, Hierarchical Phase-Contrast Tomography (HiP-CT). We begin with an extensive review of current machine learning approaches for vascular segmentation across various organs. Our work introduces a meticulously curated training dataset, verified by double annotators, consisting of vascular data from three kidneys imaged using HiP-CT as part of the Human Organ Atlas Project. HiP-CT, pioneered at the European Synchrotron Radiation Facility in 2020, revolutionizes 3D organ imaging by offering resolution of around 20 μm/voxel and enabling highly detailed localized zooms up to 1 μm/voxel without physical sectioning. We leverage the nnU-Net framework to evaluate model performance on this high-resolution dataset, using both known and novel samples, and implement metrics tailored for vascular structures. Our comprehensive review and empirical analysis on HiP-CT data set a new standard for evaluating machine learning models in high-resolution organ imaging. Our three experiments yielded Dice similarity coefficients (DSC) of 0.9523, 0.9410, and 0.8585, respectively. Nevertheless, DSC primarily assesses voxel-to-voxel concordance, overlooking several crucial characteristics of the vessels, and should not be the sole metric for judging the performance of vascular segmentation. Our results show that while segmentations yielded reasonably high scores, such as centerline Dice values ranging from 0.82 to 0.88, certain errors persisted. Specifically, large vessels that collapsed due to the lack of hydrostatic pressure (HiP-CT is an ex vivo technique) were segmented poorly. Moreover, decreased connectivity in finer vessels and higher segmentation errors at vessel boundaries were observed. Such errors, particularly in significant vessels, obstruct the understanding of the structures by interrupting vascular tree connectivity. Through our review and outputs, we aim to set a benchmark for subsequent model evaluations across various modalities, especially with the HiP-CT imaging database.
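The centerline Dice mentioned above rewards topological agreement rather than raw voxel overlap, which is why it is preferred for tree-like structures such as vessels. A minimal sketch, with skeletons assumed precomputed (e.g. with skimage.morphology.skeletonize):

```python
import numpy as np

def cl_dice(pred, gt, pred_skel, gt_skel):
    """Centerline Dice (clDice): harmonic mean of topology precision
    (fraction of the predicted skeleton inside the ground truth) and
    topology sensitivity (fraction of the ground-truth skeleton inside
    the prediction)."""
    tprec = np.logical_and(pred_skel, gt).sum() / max(pred_skel.sum(), 1)
    tsens = np.logical_and(gt_skel, pred).sum() / max(gt_skel.sum(), 1)
    return 2 * tprec * tsens / (tprec + tsens) if (tprec + tsens) else 0.0

# Toy one-voxel-wide "vessel" (a thin line is its own skeleton): the
# prediction misses the last two voxels, so sensitivity drops to 0.8
# while precision stays perfect.
gt = np.zeros((5, 10), dtype=bool); gt[2, :] = True
pred = np.zeros((5, 10), dtype=bool); pred[2, :8] = True
score = cl_dice(pred, gt, pred_skel=pred, gt_skel=gt)
```

A break in a vessel's skeleton is punished even when the missing voxels are few, which matches the paper's concern that interrupted vascular tree connectivity matters more than raw voxel counts.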
Affiliation(s)
- Ekin Yagis
- Department of Mechanical Engineering, University College London, London, UK
- Shahab Aslani
- Department of Mechanical Engineering, University College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Yashvardhan Jain
- Department of Intelligent Systems Engineering, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, USA
- Yang Zhou
- Department of Mechanical Engineering, University College London, London, UK
- Shahrokh Rahmani
- Department of Mechanical Engineering, University College London, London, UK
- Joseph Brunet
- Department of Mechanical Engineering, University College London, London, UK
- European Synchrotron Radiation Facility, Grenoble, France
- Christopher Werlein
- Institute of Pathology, Hannover Medical School, Carl-Neuberg-Straße 1, 30625, Hannover, Germany
- Danny Jonigk
- Member of the German Center for Lung Research (DZL), Biomedical Research in Endstage and Obstructive Lung Disease Hannover (BREATH), Hannover, Germany
- Paul Tafforeau
- European Synchrotron Radiation Facility, Grenoble, France
- Peter D. Lee
- Department of Mechanical Engineering, University College London, London, UK
- Claire Walsh
- Department of Mechanical Engineering, University College London, London, UK
8
De La Hoz EC, Verstockt J, Verspeek S, Clarys W, Thiessen FEF, Tondu T, Tjalma WAA, Steenackers G, Vanlanduit S. Automated thermographic detection of blood vessels for DIEP flap reconstructive surgery. Int J Comput Assist Radiol Surg 2024. [PMID: 39014178 DOI: 10.1007/s11548-024-03199-8]
Abstract
PURPOSE Inadequate perfusion is the most common cause of partial flap loss in tissue transfer for post-mastectomy breast reconstruction. The current state of the art uses computed tomography angiography (CTA) to locate the best perforators. Unfortunately, these techniques are expensive and time-consuming, and are not performed during surgery. Dynamic infrared thermography (DIRT) can offer a solution to these disadvantages. METHODS The presented research couples thermographic examination during DIEP flap breast reconstruction with an automatic segmentation approach using a convolutional neural network. Traditional segmentation techniques and annotations by surgeons are used to create automatic labels for training. RESULTS The network used for image annotation labels in real time on minimal hardware, and the labels created can be used to locate and quantify perforator candidates for selection, with a Dice score of 0.8 after 2 min and 0.9 after 4 min. CONCLUSIONS These results allow for a computational system that can be used during surgery to improve surgical success. The ability to track and measure perforators and their perfused area allows for less subjective results and helps the surgeon select the most suitable perforator for DIEP flap breast reconstruction.
Affiliation(s)
- Edgar Cardenas De La Hoz
- InViLab Research Group, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, 2020, Wilrijk, Antwerp, Belgium.
- Jan Verstockt
- InViLab Research Group, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, 2020, Wilrijk, Antwerp, Belgium
- Simon Verspeek
- InViLab Research Group, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, 2020, Wilrijk, Antwerp, Belgium
- Warre Clarys
- InViLab Research Group, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, 2020, Wilrijk, Antwerp, Belgium
- Filip E F Thiessen
- Department of Plastic, Reconstructive and Aesthetic Surgery, Multidisciplinary Breast Clinic, Antwerp University Hospital, Wilrijkstraat 10, 2650, Antwerp, Antwerp, Belgium
- Department of Plastic, Reconstructive and Aesthetic Surgery, Ziekenhuis Netwerk Antwerpen, Lindendreef 1, 2020, Antwerp, Antwerp, Belgium
- Thierry Tondu
- Department of Plastic, Reconstructive and Aesthetic Surgery, Multidisciplinary Breast Clinic, Antwerp University Hospital, Wilrijkstraat 10, 2650, Antwerp, Antwerp, Belgium
- Department of Plastic, Reconstructive and Aesthetic Surgery, Ziekenhuis Netwerk Antwerpen, Lindendreef 1, 2020, Antwerp, Antwerp, Belgium
- Wiebren A A Tjalma
- Gynaecological Oncology Unit, Department of Obstetrics and Gynaecology, Multidisciplinary Breast Clinic, Antwerp University Hospital, Wilrijkstraat 10, 2650, Antwerp, Antwerp, Belgium
- Gunther Steenackers
- InViLab Research Group, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, 2020, Wilrijk, Antwerp, Belgium
- Steve Vanlanduit
- InViLab Research Group, Faculty of Applied Engineering, University of Antwerp, Groenenborgerlaan 171, 2020, Wilrijk, Antwerp, Belgium
9
Brosig J, Krüger N, Khasyanova I, Wamala I, Ivantsits M, Sündermann S, Kempfert J, Heldmann S, Hennemuth A. Learning three-dimensional aortic root assessment based on sparse annotations. J Med Imaging (Bellingham) 2024; 11:044504. [PMID: 39087084 PMCID: PMC11287057 DOI: 10.1117/1.jmi.11.4.044504]
Abstract
Purpose Analyzing the anatomy of the aorta and left ventricular outflow tract (LVOT) is crucial for risk assessment and planning of transcatheter aortic valve implantation (TAVI). A comprehensive analysis of the aortic root and LVOT requires the extraction of the patient-individual anatomy via segmentation. Deep learning has shown good performance on various segmentation tasks. If this is formulated as a supervised problem, large amounts of annotated data are required for training. Therefore, minimizing the annotation complexity is desirable. Approach We propose two-dimensional (2D) cross-sectional annotation and point cloud-based surface reconstruction to train a fully automatic 3D segmentation network for the aortic root and the LVOT. Our sparse annotation scheme enables easy and fast training data generation for tubular structures such as the aortic root. From the segmentation results, we derive clinically relevant parameters for TAVI planning. Results The proposed 2D cross-sectional annotation results in high inter-observer agreement [Dice similarity coefficient (DSC): 0.94]. The segmentation model achieves a DSC of 0.90 and an average surface distance of 0.96 mm. Our approach achieves an aortic annulus maximum diameter difference between prediction and annotation of 0.45 mm (inter-observer variance: 0.25 mm). Conclusions The presented approach facilitates reproducible annotations. The annotations allow for training accurate segmentation models of the aortic root and LVOT. The segmentation results facilitate reproducible and quantifiable measurements for TAVI planning.
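One of the TAVI parameters quoted above, the aortic annulus maximum diameter, reduces to a maximum pairwise distance over contour points once the annulus contour has been extracted from the segmentation. A minimal brute-force sketch (not the paper's implementation):

```python
import numpy as np

def max_diameter(points):
    """Maximum pairwise Euclidean distance among contour points (e.g. in mm).

    Brute force O(n^2): fine for typical contour sizes of a few hundred points.
    """
    pts = np.asarray(points, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]  # all pairwise difference vectors
    return float(np.linalg.norm(diffs, axis=-1).max())

# Toy square "contour" with side 2: the diagonal, 2*sqrt(2), is the maximum.
square = [(0, 0), (0, 2), (2, 0), (2, 2)]
d = max_diameter(square)
```

Measuring the diameter on the extracted contour, rather than by eyeballing a reformatted slice, is what makes the comparison against inter-observer variance (0.45 mm vs. 0.25 mm above) reproducible.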
Affiliation(s)
- Johanna Brosig
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Institute of Computer-Assisted Cardiovascular Medicine, Deutsches Herzzentrum der Charité, Berlin, Germany
- Charité-Universitätsmedizin Berlin, Berlin, Germany

- Nina Krüger
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Institute of Computer-Assisted Cardiovascular Medicine, Deutsches Herzzentrum der Charité, Berlin, Germany
- Charité-Universitätsmedizin Berlin, Berlin, Germany

- Inna Khasyanova
- Institute of Computer-Assisted Cardiovascular Medicine, Deutsches Herzzentrum der Charité, Berlin, Germany
- Charité-Universitätsmedizin Berlin, Berlin, Germany
- Deutsches Herzzentrum der Charité, Department of Cardiothoracic and Vascular Surgery, Berlin, Germany

- Isaac Wamala
- Institute of Computer-Assisted Cardiovascular Medicine, Deutsches Herzzentrum der Charité, Berlin, Germany
- Charité-Universitätsmedizin Berlin, Berlin, Germany
- Deutsches Herzzentrum der Charité, Department of Cardiothoracic and Vascular Surgery, Berlin, Germany

- Matthias Ivantsits
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Institute of Computer-Assisted Cardiovascular Medicine, Deutsches Herzzentrum der Charité, Berlin, Germany
- Charité-Universitätsmedizin Berlin, Berlin, Germany

- Simon Sündermann
- Charité-Universitätsmedizin Berlin, Berlin, Germany
- Deutsches Herzzentrum der Charité, Department of Cardiothoracic and Vascular Surgery, Berlin, Germany
- DZHK (German Center for Cardiovascular Research), Partner Site Berlin, Berlin, Germany

- Jörg Kempfert
- Charité-Universitätsmedizin Berlin, Berlin, Germany
- Deutsches Herzzentrum der Charité, Department of Cardiothoracic and Vascular Surgery, Berlin, Germany
- DZHK (German Center for Cardiovascular Research), Partner Site Berlin, Berlin, Germany

- Stefan Heldmann
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany

- Anja Hennemuth
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Institute of Computer-Assisted Cardiovascular Medicine, Deutsches Herzzentrum der Charité, Berlin, Germany
- Charité-Universitätsmedizin Berlin, Berlin, Germany
- University Medical Center Hamburg-Eppendorf, Department of Diagnostic and Interventional Radiology and Nuclear Medicine, Hamburg, Germany
10
Küstner T, Qin C, Sun C, Ning L, Scannell CM. The intelligent imaging revolution: artificial intelligence in MRI and MRS acquisition and reconstruction. MAGMA 2024; 37:329-333. [PMID: 38900344] [DOI: 10.1007/s10334-024-01179-2]
Affiliation(s)
- Thomas Küstner
- Medical Image and Data Analysis (MIDAS.Lab), Diagnostic and Interventional Radiology, University Hospital of Tuebingen, 72076, Tuebingen, Germany

- Chen Qin
- Department of Electrical and Electronic Engineering, I-X Imperial College London, London, UK

- Changyu Sun
- Department of Chemical and Biomedical Engineering, Department of Radiology, University of Missouri-Columbia, 65201, Columbia, USA

- Lipeng Ning
- Brigham and Women's Hospital, 02215, Boston, USA

- Cian M Scannell
- Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
11
Tejani AS, Klontzas ME, Gatti AA, Mongan JT, Moy L, Park SH, Kahn CE. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): 2024 Update. Radiol Artif Intell 2024; 6:e240300. [PMID: 38809149] [PMCID: PMC11304031] [DOI: 10.1148/ryai.240300]
Abstract
To address the rapid evolution of artificial intelligence in medical imaging, the authors present the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) 2024 Update.
Affiliation(s)
- John T. Mongan, Linda Moy, Seong Ho Park, Charles E. Kahn
- From the Department of Radiology, University of Texas Southwestern Medical Center, Dallas, Tex (A.S.T.); Department of Radiology, University of Crete School of Medicine, Heraklion, Crete, Greece (M.E.K.); Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece (M.E.K.); Department of Radiology, Stanford University, Stanford, Calif (A.A.G.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (J.T.M.); Department of Radiology, New York University Grossman School of Medicine, New York, NY (L.M.); Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea (S.H.P.); and Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104-6243 (C.E.K.)
12
Schmidt A, Mohareri O, DiMaio SP, Salcudean SE. Surgical Tattoos in Infrared: A Dataset for Quantifying Tissue Tracking and Mapping. IEEE Trans Med Imaging 2024; 43:2634-2645. [PMID: 38437151] [DOI: 10.1109/tmi.2024.3372828]
Abstract
Quantifying the performance of methods for tracking and mapping tissue in endoscopic environments is essential for enabling image guidance and automation of medical interventions and surgery. Datasets developed so far either use rigid environments, visible markers, or require annotators to label salient points in videos after collection. These are, respectively, not general, visible to algorithms, or costly and error-prone. We introduce a novel labeling methodology along with a dataset that uses it, Surgical Tattoos in Infrared (STIR). STIR has labels that are persistent but invisible to visible-spectrum algorithms. This is achieved by labeling tissue points with an IR-fluorescent dye, indocyanine green (ICG), and then collecting visible-light video clips. STIR comprises hundreds of stereo video clips of both in vivo and ex vivo scenes with start and end points labeled in the IR spectrum. With over 3,000 labeled points, STIR will help to quantify and enable better analysis of tracking and mapping methods. After introducing STIR, we analyze multiple frame-based tracking methods on STIR using both 3D and 2D endpoint error and accuracy metrics. STIR is available at https://dx.doi.org/10.21227/w8g4-g548.
13
Erozan A, Lösel PD, Heuveline V, Weinhardt V. Automated 3D cytoplasm segmentation in soft X-ray tomography. iScience 2024; 27:109856. [PMID: 38784019] [PMCID: PMC11112332] [DOI: 10.1016/j.isci.2024.109856]
Abstract
Cells' structure is key to understanding cellular function, diagnostics, and therapy development. Soft X-ray tomography (SXT) is a unique tool to image cellular structure without fixation or labeling at high spatial resolution and throughput. Fast acquisition times increase demand for accelerated image analysis, like segmentation. Currently, segmenting cellular structures is done manually and is a major bottleneck in the SXT data analysis. This paper introduces ACSeg, an automated 3D cytoplasm segmentation model. ACSeg is generated using semi-automated labels and 3D U-Net and is trained on 43 SXT tomograms of immune T cells, rapidly converging to high-accuracy segmentation, therefore reducing time and labor. Furthermore, adding only 6 SXT tomograms of other cell types diversifies the model, showing potential for optimal experimental design. ACSeg successfully segmented unseen tomograms and is published on Biomedisa, enabling high-throughput analysis of cell volume and structure of cytoplasm in diverse cell types.
Affiliation(s)
- Ayse Erozan
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
- Engineering Mathematics and Computing Lab, Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany
- Data Mining and Uncertainty Quantification, Heidelberg Institute for Theoretical Studies, Heidelberg, Germany

- Philipp D. Lösel
- Engineering Mathematics and Computing Lab, Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany
- Data Mining and Uncertainty Quantification, Heidelberg Institute for Theoretical Studies, Heidelberg, Germany
- Department of Materials Physics, Research School of Physics, The Australian National University, Acton ACT, Australia

- Vincent Heuveline
- Engineering Mathematics and Computing Lab, Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany
- Data Mining and Uncertainty Quantification, Heidelberg Institute for Theoretical Studies, Heidelberg, Germany

- Venera Weinhardt
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
- Molecular Biophysics and Integrated Bioimaging Division, Lawrence Berkeley National Laboratory, Berkeley, CA, USA
14
Qasim AB, Motta A, Studier-Fischer A, Sellner J, Ayala L, Hübner M, Bressan M, Özdemir B, Kowalewski KF, Nickel F, Seidlitz S, Maier-Hein L. Test-time augmentation with synthetic data addresses distribution shifts in spectral imaging. Int J Comput Assist Radiol Surg 2024; 19:1021-1031. [PMID: 38483702] [PMCID: PMC11178652] [DOI: 10.1007/s11548-024-03085-3]
Abstract
PURPOSE Surgical scene segmentation is crucial for providing context-aware surgical assistance. Recent studies highlight the significant advantages of hyperspectral imaging (HSI) over traditional RGB data in enhancing segmentation performance. Nevertheless, current HSI datasets remain limited and do not capture the full range of tissue variations encountered clinically. METHODS Based on 615 hyperspectral images from 16 pigs, featuring porcine organs in different perfusion states, we explore distribution shifts in spectral imaging caused by perfusion alterations. We further introduce a novel strategy to mitigate such distribution shifts, utilizing synthetic data for test-time augmentation. RESULTS The effect of perfusion changes on state-of-the-art (SOA) segmentation networks depended on the organ and the specific perfusion alteration induced. In the case of the kidney, we observed a performance decline of up to 93% when applying an SOA network under ischemic conditions. Our method improved on the SOA by up to 4.6 times. CONCLUSION Given its potential wide-ranging relevance to diverse pathologies, our approach may serve as a pivotal tool to enhance neural network generalization within the realm of spectral imaging.
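Test-time augmentation averages a model's predictions over several transformed copies of the input, each mapped back to the original frame before averaging. The study's augmentations are synthetic spectral data; the sketch below substitutes simple geometric flips and a hypothetical `predict` stand-in to illustrate only the averaging pattern, not the authors' method:

```python
import numpy as np

def predict(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained segmentation network:
    returns per-pixel foreground probabilities."""
    return 1.0 / (1.0 + np.exp(-(image - image.mean())))

def tta_predict(image, augmentations, inverses):
    """Average predictions over augmented inputs, inverting each
    augmentation on the prediction before averaging."""
    preds = [inv(predict(aug(image)))
             for aug, inv in zip(augmentations, inverses)]
    return np.mean(preds, axis=0)

# Flips are self-inverse, so the same function maps predictions back
augs = [lambda x: x, np.fliplr, np.flipud]
invs = [lambda x: x, np.fliplr, np.flipud]

image = np.random.default_rng(0).normal(size=(8, 8))
prob = tta_predict(image, augs, invs)
print(prob.shape)  # (8, 8)
```

With a real network the three predictions differ, and averaging tends to smooth out augmentation-sensitive errors; this toy stand-in happens to be flip-equivariant, so here the average equals a single prediction.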
Affiliation(s)
- Ahmad Bin Qasim
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Helmholtz Information and Data Science School for Health, Karlsruhe/Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany

- Alessandro Motta
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany

- Alexander Studier-Fischer
- Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany

- Jan Sellner
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Helmholtz Information and Data Science School for Health, Karlsruhe/Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, A Partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany

- Leonardo Ayala
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany

- Marco Hübner
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany

- Marc Bressan
- Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany

- Berkin Özdemir
- Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany

- Karl Friedrich Kowalewski
- Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Department of Urology, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany

- Felix Nickel
- Department of General, Visceral, and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Medical Faculty, Heidelberg University, Heidelberg, Germany

- Silvia Seidlitz
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Helmholtz Information and Data Science School for Health, Karlsruhe/Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, A Partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany

- Lena Maier-Hein
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Helmholtz Information and Data Science School for Health, Karlsruhe/Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, A Partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- Medical Faculty, Heidelberg University, Heidelberg, Germany
15
Ma J, Xie R, Ayyadhury S, Ge C, Gupta A, Gupta R, Gu S, Zhang Y, Lee G, Kim J, Lou W, Li H, Upschulte E, Dickscheid T, de Almeida JG, Wang Y, Han L, Yang X, Labagnara M, Gligorovski V, Scheder M, Rahi SJ, Kempster C, Pollitt A, Espinosa L, Mignot T, Middeke JM, Eckardt JN, Li W, Li Z, Cai X, Bai B, Greenwald NF, Van Valen D, Weisbart E, Cimini BA, Cheung T, Brück O, Bader GD, Wang B. The multimodality cell segmentation challenge: toward universal solutions. Nat Methods 2024; 21:1103-1113. [PMID: 38532015] [PMCID: PMC11210294] [DOI: 10.1038/s41592-024-02233-6]
Abstract
Cell segmentation is a critical step for quantitative single-cell analysis in microscopy images. Existing cell segmentation methods are often tailored to specific modalities or require manual interventions to specify hyper-parameters in different experimental settings. Here, we present a multimodality cell segmentation benchmark, comprising more than 1,500 labeled images derived from more than 50 diverse biological experiments. The top participants developed a Transformer-based deep-learning algorithm that not only exceeds existing methods but can also be applied to diverse microscopy images across imaging platforms and tissue types without manual parameter adjustments. This benchmark and the improved algorithm offer promising avenues for more accurate and versatile cell analysis in microscopy imaging.
Affiliation(s)
- Jun Ma
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada

- Ronald Xie
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada

- Shamini Ayyadhury
- Donnelly Centre, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada

- Cheng Ge
- School of Medicine and Pharmacy, Ocean University of China, Qingdao, China

- Anubha Gupta
- Department of Electronics and Communications Engineering, Indraprastha Institute of Information Technology Delhi (IIITD), New Delhi, India

- Ritu Gupta
- Laboratory Oncology Unit, Dr. BRAIRCH, All India Institute of Medical Sciences, New Delhi, India

- Song Gu
- Department of Image Reconstruction, Nanjing Anke Medical Technology Co., Nanjing, China

- Yao Zhang
- Shanghai Artificial Intelligence Laboratory, Shanghai, China

- Gihun Lee
- Graduate School of AI, KAIST, Seoul, South Korea

- Joonkee Kim
- Graduate School of AI, KAIST, Seoul, South Korea

- Wei Lou
- Shenzhen Research Institute of Big Data, Shenzhen, China
- Chinese University of Hong Kong (Shenzhen), Shenzhen, China

- Haofeng Li
- Shenzhen Research Institute of Big Data, Shenzhen, China

- Eric Upschulte
- Institute of Neuroscience and Medicine (INM-1) and Helmholtz AI, Research Center Jülich, Jülich, Germany

- Timo Dickscheid
- Institute of Neuroscience and Medicine (INM-1) and Helmholtz AI, Research Center Jülich, Jülich, Germany
- Faculty of Mathematics and Natural Sciences - Institute of Computer Science, Heinrich Heine University Düsseldorf, Düsseldorf, Germany

- José Guilherme de Almeida
- European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, UK
- Champalimaud Foundation - Centre for the Unknown, Lisbon, Portugal

- Yixin Wang
- Department of Bioengineering, Stanford University, Palo Alto, CA, USA

- Lin Han
- Tandon School of Engineering, New York University, New York, NY, USA

- Xin Yang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China

- Marco Labagnara
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland

- Vojislav Gligorovski
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland

- Maxime Scheder
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland

- Sahand Jamal Rahi
- Laboratory of the Physics of Biological Systems, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland

- Carly Kempster
- School of Biological Sciences, University of Reading, Reading, UK

- Alice Pollitt
- School of Biological Sciences, University of Reading, Reading, UK

- Leon Espinosa
- Laboratoire de Chimie Bactérienne, CNRS-Université Aix-Marseille UMR, Institut de Microbiologie de la Méditerranée, Marseille, France

- Tâm Mignot
- Laboratoire de Chimie Bactérienne, CNRS-Université Aix-Marseille UMR, Institut de Microbiologie de la Méditerranée, Marseille, France

- Jan Moritz Middeke
- Department of Internal Medicine I, University Hospital Dresden, Technical University Dresden, Dresden, Germany
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany

- Jan-Niklas Eckardt
- Department of Internal Medicine I, University Hospital Dresden, Technical University Dresden, Dresden, Germany
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany

- Wangkai Li
- Department of Automation, University of Science and Technology of China, Hefei, China

- Zhaoyang Li
- Institute of Advanced Technology, University of Science and Technology of China, Hefei, China

- Xiaochen Cai
- Department of Computer Science and Technology, Nanjing University, Nanjing, China

- Bizhe Bai
- School of EECS, The University of Queensland, Brisbane, Queensland, Australia

- David Van Valen
- Division of Computing and Mathematical Science, Caltech, Pasadena, CA, USA
- Howard Hughes Medical Institute, Chevy Chase, MD, USA

- Erin Weisbart
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA

- Beth A Cimini
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA

- Trevor Cheung
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada

- Oscar Brück
- Hematoscope Laboratory, Comprehensive Cancer Center & Center of Diagnostics, Helsinki University Hospital, Helsinki, Finland
- Department of Oncology, University of Helsinki, Helsinki, Finland

- Gary D Bader
- Department of Molecular Genetics, University of Toronto, Toronto, Ontario, Canada
- Donnelly Centre, University of Toronto, Toronto, Ontario, Canada
- Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, Ontario, Canada
- CIFAR Multiscale Human Program, CIFAR, Toronto, Ontario, Canada

- Bo Wang
- Peter Munk Cardiac Centre, University Health Network, Toronto, Ontario, Canada
- Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- UHN AI Hub, University Health Network, Toronto, Ontario, Canada
16
Kubiak KB, Więckowska B, Jodłowska-Siewert E, Guzik P. Visualising and quantifying the usefulness of new predictors stratified by outcome class: The U-smile method. PLoS One 2024; 19:e0303276. [PMID: 38768166] [PMCID: PMC11104627] [DOI: 10.1371/journal.pone.0303276]
Abstract
Binary classification methods encompass various algorithms to categorize data points into two distinct classes. Binary prediction, in contrast, estimates the likelihood of a binary event occurring. We introduce a novel graphical and quantitative approach, the U-smile method, for assessing prediction improvement stratified by binary outcome class. The U-smile method utilizes a smile-like plot and novel coefficients to measure the relative and absolute change in prediction compared with the reference method. The likelihood-ratio test was used to assess the significance of the change in prediction. Logistic regression models using the Heart Disease dataset and generated random variables were employed to validate the U-smile method. The receiver operating characteristic (ROC) curve was used to compare the results of the U-smile method. The likelihood-ratio test demonstrated that the proposed coefficients consistently generated smile-shaped U-smile plots for the most informative predictors. The U-smile plot proved more effective than the ROC curve in comparing the effects of adding new predictors to the reference method. It effectively highlighted differences in model performance for both non-events and events. Visual analysis of the U-smile plots provided an immediate impression of the usefulness of different predictors at a glance. The U-smile method can guide the selection of the most valuable predictors. It can also be helpful in applications beyond prediction.
Affiliation(s)
- Katarzyna B. Kubiak
- Department of Computer Science and Statistics, Poznan University of Medical Sciences, Poznan, Poland

- Barbara Więckowska
- Department of Computer Science and Statistics, Poznan University of Medical Sciences, Poznan, Poland

- Przemysław Guzik
- Department of Cardiology - Intensive Therapy and Internal Medicine, Poznan University of Medical Sciences, Poznan, Poland
- University Centre for Sports and Medical Studies, Poznan University of Medical Sciences, Poznan, Poland
17
Ni FD, Xu ZN, Liu MQ, Zhang MJ, Li S, Bai HL, Ding P, Fu KY. Towards clinically applicable automated mandibular canal segmentation on CBCT. J Dent 2024; 144:104931. [PMID: 38458378] [DOI: 10.1016/j.jdent.2024.104931]
Abstract
OBJECTIVES To develop a deep learning-based system for precise, robust, and fully automated segmentation of the mandibular canal on cone beam computed tomography (CBCT) images. METHODS The system was developed on 536 CBCT scans (training set: 376, validation set: 80, testing set: 80) from one center and validated on an external dataset of 89 CBCT scans from 3 centers. Each scan was annotated using a multi-stage annotation method and refined by oral and maxillofacial radiologists. We proposed a three-step strategy for the mandibular canal segmentation: extraction of the region of interest based on 2D U-Net, global segmentation of the mandibular canal, and segmentation refinement based on 3D U-Net. RESULTS The system consistently achieved accurate mandibular canal segmentation in the internal set (Dice similarity coefficient [DSC], 0.952; intersection over union [IoU], 0.912; average symmetric surface distance [ASSD], 0.046 mm; 95% Hausdorff distance [HD95], 0.325 mm) and the external set (DSC, 0.960; IoU, 0.924; ASSD, 0.040 mm; HD95, 0.288 mm). CONCLUSIONS These results demonstrated the potential clinical application of this AI system in facilitating clinical workflows related to mandibular canal localization. CLINICAL SIGNIFICANCE Accurate delineation of the mandibular canal on CBCT images is critical for implant placement, mandibular third molar extraction, and orthognathic surgery. This AI system enables accurate segmentation across different models, which could contribute to more efficient and precise dental automation systems.
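HD95, quoted above alongside DSC, IoU, and ASSD, is the 95th-percentile symmetric Hausdorff distance between two surfaces; it discounts the worst 5% of boundary disagreements. A sketch assuming the surfaces are given as explicit point sets (an illustration of the metric, not the authors' pipeline):

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(surface_a: np.ndarray, surface_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between
    two (n, d) arrays of surface points."""
    d = cdist(surface_a, surface_b)  # pairwise Euclidean distances
    a_to_b = d.min(axis=1)           # each A point to its nearest B point
    b_to_a = d.min(axis=0)           # each B point to its nearest A point
    return float(np.percentile(np.concatenate([a_to_b, b_to_a]), 95))

# Toy surfaces: two sampled line segments 0.1 mm apart
a = np.stack([np.linspace(0, 1, 11), np.zeros(11)], axis=1)
b = np.stack([np.linspace(0, 1, 11), np.full(11, 0.1)], axis=1)
print(hd95(a, b))  # ≈ 0.1
```

Unlike the plain Hausdorff distance (the maximum boundary disagreement), the 95th-percentile variant is robust to a few outlier voxels, which is why it is commonly reported for segmentation tasks such as this one.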
Affiliation(s)
- Fang-Duan Ni
- Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China

- Mu-Qing Liu
- Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China

- Min-Juan Zhang
- Second Dental Center, Peking University Hospital of Stomatology, Beijing 100101, China

- Shu Li
- Department of Stomatology, Beijing Hospital, Beijing 100005, China

- Kai-Yuan Fu
- Department of Oral & Maxillofacial Radiology, Peking University School & Hospital of Stomatology, Beijing 100081, China; National Center for Stomatology & National Clinical Research Center for Oral Diseases, Beijing 100081, China; National Engineering Research Center of Oral Biomaterials and Digital Medical Devices, Beijing 100081, China; Beijing Key Laboratory of Digital Stomatology, Beijing 100081, China
18
Kwong JCC, Wu J, Malik S, Khondker A, Gupta N, Bodnariuc N, Narayana K, Malik M, van der Kwast TH, Johnson AEW, Zlotta AR, Kulkarni GS. Predicting non-muscle invasive bladder cancer outcomes using artificial intelligence: a systematic review using APPRAISE-AI. NPJ Digit Med 2024; 7:98. [PMID: 38637674] [PMCID: PMC11026453] [DOI: 10.1038/s41746-024-01088-7]
Abstract
Accurate prediction of recurrence and progression in non-muscle invasive bladder cancer (NMIBC) is essential to inform management and eligibility for clinical trials. Despite substantial interest in developing artificial intelligence (AI) applications in NMIBC, their clinical readiness remains unclear. This systematic review aimed to critically appraise AI studies predicting NMIBC outcomes and to identify common methodological and reporting pitfalls. MEDLINE, EMBASE, Web of Science, and Scopus were searched from inception to February 5th, 2024 for AI studies predicting NMIBC recurrence or progression. APPRAISE-AI was used to assess the methodological and reporting quality of these studies. The performance of AI and non-AI approaches included in these studies was compared. A total of 15 studies (five on recurrence, four on progression, and six on both) were included. All studies were retrospective, with a median follow-up of 71 months (IQR 32-93) and median cohort size of 125 (IQR 93-309). Most studies were of low quality, with only one classified as high quality. While AI models generally outperformed non-AI approaches with respect to accuracy, c-index, sensitivity, and specificity, this margin of benefit varied with study quality (the median absolute performance difference was 10 for low-, 22 for moderate-, and 4 for high-quality studies). Common pitfalls included dataset limitations, heterogeneous outcome definitions, methodological flaws, suboptimal model evaluation, and reproducibility issues. Recommendations to address these challenges are proposed. These findings emphasise the need for collaborative efforts between the urological and AI communities, paired with rigorous methodologies, to develop higher quality models, enabling AI to reach its potential in enhancing NMIBC care.
Affiliation(s)
- Jethro C C Kwong
- Division of Urology, Department of Surgery, University of Toronto, Toronto, ON, Canada
- Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, ON, Canada
- Jeremy Wu
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Shamir Malik
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Adree Khondker
- Division of Urology, Department of Surgery, University of Toronto, Toronto, ON, Canada
- Naveen Gupta
- Georgetown University School of Medicine, Georgetown University, Washington, DC, USA
- Harvard T.H. Chan School of Public Health, Harvard University, Boston, MA, USA
- Nicole Bodnariuc
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Mikail Malik
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Theodorus H van der Kwast
- Laboratory Medicine Program, University Health Network, Princess Margaret Cancer Centre, University of Toronto, Toronto, ON, Canada
- Alistair E W Johnson
- Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, ON, Canada
- Division of Biostatistics, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada
- Alexandre R Zlotta
- Division of Urology, Department of Surgery, University of Toronto, Toronto, ON, Canada
- Division of Urology, Department of Surgery, Mount Sinai Hospital, Sinai Health System, Toronto, ON, Canada
- Division of Urology, Department of Surgery, Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada
- Girish S Kulkarni
- Division of Urology, Department of Surgery, University of Toronto, Toronto, ON, Canada.
- Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, ON, Canada.
- Division of Urology, Department of Surgery, Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada.
|
19
|
Where imaging and metrics meet. Nat Methods 2024; 21:151. [PMID: 38347133 DOI: 10.1038/s41592-024-02187-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/15/2024]
|
20
|
Reinke A, Tizabi MD, Baumgartner M, Eisenmann M, Heckmann-Nötzel D, Kavur AE, Rädsch T, Sudre CH, Acion L, Antonelli M, Arbel T, Bakas S, Benis A, Buettner F, Cardoso MJ, Cheplygina V, Chen J, Christodoulou E, Cimini BA, Farahani K, Ferrer L, Galdran A, van Ginneken B, Glocker B, Godau P, Hashimoto DA, Hoffman MM, Huisman M, Isensee F, Jannin P, Kahn CE, Kainmueller D, Kainz B, Karargyris A, Kleesiek J, Kofler F, Kooi T, Kopp-Schneider A, Kozubek M, Kreshuk A, Kurc T, Landman BA, Litjens G, Madani A, Maier-Hein K, Martel AL, Meijering E, Menze B, Moons KGM, Müller H, Nichyporuk B, Nickel F, Petersen J, Rafelski SM, Rajpoot N, Reyes M, Riegler MA, Rieke N, Saez-Rodriguez J, Sánchez CI, Shetty S, Summers RM, Taha AA, Tiulpin A, Tsaftaris SA, Van Calster B, Varoquaux G, Yaniv ZR, Jäger PF, Maier-Hein L. Understanding metric-related pitfalls in image analysis validation. Nat Methods 2024; 21:182-194. [PMID: 38347140 PMCID: PMC11181963 DOI: 10.1038/s41592-023-02150-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Accepted: 12/12/2023] [Indexed: 02/15/2024]
Abstract
Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence research and its translation into practice. However, increasing evidence shows that, particularly in image analysis, metrics are often chosen inadequately. Although taking into account the individual strengths, weaknesses and limitations of validation metrics is a critical prerequisite to making educated choices, the relevant knowledge is currently scattered and poorly accessible to individual researchers. Based on a multistage Delphi process conducted by a multidisciplinary expert consortium as well as extensive community feedback, the present work provides a reliable and comprehensive common point of access to information on pitfalls related to validation metrics in image analysis. Although focused on biomedical image analysis, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy. The work serves to enhance global comprehension of a key topic in image analysis validation.
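The kind of pitfall this work catalogues can be illustrated with a minimal, hypothetical sketch (not taken from the paper): on a class-imbalanced segmentation task, overall pixel accuracy can look excellent even when the model misses the structure of interest entirely, which is exactly why an inadequately chosen metric can misrepresent performance.

```python
# Illustrative sketch of a classic metric pitfall in image analysis
# validation: accuracy on an imbalanced task rewards a model that
# predicts "background" everywhere. Values and labels are invented.

def accuracy(pred, truth):
    """Fraction of pixels labelled correctly."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def sensitivity(pred, truth):
    """Fraction of true-positive pixels actually detected (recall)."""
    positives = [(p, t) for p, t in zip(pred, truth) if t == 1]
    return sum(p == 1 for p, _ in positives) / len(positives)

# A 1000-pixel "image" in which only 10 pixels belong to the lesion.
truth = [1] * 10 + [0] * 990
pred_all_background = [0] * 1000  # model never predicts the lesion

print(accuracy(pred_all_background, truth))     # 0.99 -- looks excellent
print(sensitivity(pred_all_background, truth))  # 0.0  -- lesion entirely missed
```

Reporting a target-sensitive metric alongside accuracy exposes the failure that accuracy alone conceals.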
Affiliation(s)
- Annika Reinke
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany.
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany.
- Minu D Tizabi
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany.
- Michael Baumgartner
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Matthias Eisenmann
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Doreen Heckmann-Nötzel
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- A Emre Kavur
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
- Tim Rädsch
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany
- Carole H Sudre
- MRC Unit for Lifelong Health and Ageing at UCL and Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Laura Acion
- Instituto de Cálculo, CONICET - Universidad de Buenos Aires, Buenos Aires, Argentina
- Michela Antonelli
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Centre for Medical Image Computing, University College London, London, UK
- Tal Arbel
- Centre for Intelligent Machines and MILA (Quebec Artificial Intelligence Institute), McGill University, Montréal, Quebec, Canada
- Spyridon Bakas
- Division of Computational Pathology, Dept of Pathology & Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN, USA
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Arriel Benis
- Department of Digital Medical Technologies, Holon Institute of Technology, Holon, Israel
- European Federation for Medical Informatics, Le Mont-sur-Lausanne, Switzerland
- Florian Buettner
- German Cancer Consortium (DKTK), partner site Frankfurt/Mainz, a partnership between DKFZ and UCT Frankfurt-Marburg, Frankfurt am Main, Germany
- German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Goethe University Frankfurt, Department of Medicine, Frankfurt am Main, Germany
- Goethe University Frankfurt, Department of Informatics, Frankfurt am Main, Germany
- Frankfurt Cancer Institute, Frankfurt am Main, Germany
- M Jorge Cardoso
- School of Biomedical Engineering and Imaging Science, King's College London, London, UK
- Veronika Cheplygina
- Department of Computer Science, IT University of Copenhagen, Copenhagen, Denmark
- Jianxu Chen
- Leibniz-Institut für Analytische Wissenschaften - ISAS - e.V., Dortmund, Germany
- Evangelia Christodoulou
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Beth A Cimini
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Keyvan Farahani
- Center for Biomedical Informatics and Information Technology, National Cancer Institute, Bethesda, MD, USA
- Luciana Ferrer
- Instituto de Investigación en Ciencias de la Computación (ICC), CONICET-UBA, Ciudad Autónoma de Buenos Aires, Buenos Aires, Argentina
- Adrian Galdran
- Universitat Pompeu Fabra, Barcelona, Spain
- University of Adelaide, Adelaide, South Australia, Australia
- Bram van Ginneken
- Fraunhofer MEVIS, Bremen, Germany
- Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, the Netherlands
- Ben Glocker
- Department of Computing, Imperial College London, South Kensington Campus, London, UK
- Patrick Godau
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany
- Daniel A Hashimoto
- Department of Surgery, Perelman School of Medicine, Philadelphia, PA, USA
- General Robotics Automation Sensing and Perception Laboratory, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Michael M Hoffman
- Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada
- Merel Huisman
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Fabian Isensee
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- German Cancer Research Center (DKFZ) Heidelberg, HI Applied Computer Vision Lab, Heidelberg, Germany
- Pierre Jannin
- Laboratoire Traitement du Signal et de l'Image - UMR_S 1099, Université de Rennes 1, Rennes, France
- INSERM, Paris, France
- Charles E Kahn
- Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, PA, USA
- Dagmar Kainmueller
- Max-Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC), Biomedical Image Analysis and HI Helmholtz Imaging, Berlin, Germany
- University of Potsdam, Digital Engineering Faculty, Potsdam, Germany
- Bernhard Kainz
- Department of Computing, Faculty of Engineering, Imperial College London, London, UK
- Department AIBE, Friedrich-Alexander-Universität (FAU), Erlangen-Nürnberg, Germany
- Jens Kleesiek
- Translational Image-guided Oncology (TIO), Institute for AI in Medicine (IKIM), University Medicine Essen, Essen, Germany
- Annette Kopp-Schneider
- German Cancer Research Center (DKFZ) Heidelberg, Division of Biostatistics, Heidelberg, Germany
- Michal Kozubek
- Centre for Biomedical Image Analysis and Faculty of Informatics, Masaryk University, Brno, Czech Republic
- Anna Kreshuk
- Cell Biology and Biophysics Unit, European Molecular Biology Laboratory (EMBL), Heidelberg, Germany
- Tahsin Kurc
- Department of Biomedical Informatics, Stony Brook University, Health Science Center, Stony Brook, NY, USA
- Geert Litjens
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Amin Madani
- Department of Surgery, University Health Network, Philadelphia, PA, USA
- Klaus Maier-Hein
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Anne L Martel
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, Canada
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, UNSW Sydney, Kensington, New South Wales, Australia
- Bjoern Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Karel G M Moons
- Julius Center for Health Sciences and Primary Care, UMC Utrecht, Utrecht University, Utrecht, the Netherlands
- Henning Müller
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
- Medical Faculty, University of Geneva, Geneva, Switzerland
- Brennan Nichyporuk
- MILA (Quebec Artificial Intelligence Institute), Montréal, Quebec, Canada
- Felix Nickel
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Jens Petersen
- German Cancer Research Center (DKFZ) Heidelberg, Division of Medical Image Computing, Heidelberg, Germany
- Nasir Rajpoot
- Tissue Image Analytics Laboratory, Department of Computer Science, University of Warwick, Coventry, UK
- Mauricio Reyes
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Department of Radiation Oncology, University Hospital Bern, University of Bern, Bern, Switzerland
- Michael A Riegler
- Simula Metropolitan Center for Digital Engineering, Oslo, Norway
- UiT The Arctic University of Norway, Tromsø, Norway
- Julio Saez-Rodriguez
- Institute for Computational Biomedicine, Heidelberg University, Heidelberg, Germany
- Faculty of Medicine, Heidelberg University Hospital, Heidelberg, Germany
- Clara I Sánchez
- Informatics Institute, Faculty of Science, University of Amsterdam, Amsterdam, the Netherlands
- Ronald M Summers
- National Institutes of Health Clinical Center, Bethesda, MD, USA
- Abdel A Taha
- Institute of Information Systems Engineering, TU Wien, Vienna, Austria
- Aleksei Tiulpin
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland
- Neurocenter Oulu, Oulu University Hospital, Oulu, Finland
- Ben Van Calster
- Department of Development and Regeneration and EPI-centre, KU Leuven, Leuven, Belgium
- Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, the Netherlands
- Gaël Varoquaux
- Parietal project team, INRIA Saclay-Île de France, Palaiseau, France
- Ziv R Yaniv
- National Institute of Allergy and Infectious Diseases, Bethesda, MD, USA
- Paul F Jäger
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, Interactive Machine Learning Group, Heidelberg, Germany.
- Lena Maier-Hein
- German Cancer Research Center (DKFZ) Heidelberg, Division of Intelligent Medical Systems, Heidelberg, Germany.
- German Cancer Research Center (DKFZ) Heidelberg, HI Helmholtz Imaging, Heidelberg, Germany.
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany.
- National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and University Medical Center Heidelberg, Heidelberg, Germany.
- Faculty of Medicine, Heidelberg University Hospital, Heidelberg, Germany.
|