1. Seoni S, Shahini A, Meiburger KM, Marzola F, Rotunno G, Acharya UR, Molinari F, Salvi M. All you need is data preparation: A systematic review of image harmonization techniques in multi-center/device studies for medical support systems. Comput Methods Programs Biomed 2024; 250:108200. PMID: 38677080. DOI: 10.1016/j.cmpb.2024.108200.
Abstract
BACKGROUND AND OBJECTIVES: Artificial intelligence (AI) models trained on multi-centric, multi-device studies can provide more robust insights and research findings than single-center studies. However, variability in acquisition protocols and equipment can introduce inconsistencies that hamper the effective pooling of multi-source datasets. This systematic review evaluates strategies for image harmonization, which standardizes image appearance to enable reliable AI analysis of multi-source medical imaging. METHODS: A literature search following PRISMA guidelines was conducted to identify papers published between 2013 and 2023 that analyzed multi-centric and multi-device medical imaging studies using image harmonization approaches. RESULTS: Common image harmonization techniques included grayscale normalization (improving classification accuracy by up to 24.42%), resampling (increasing the percentage of robust radiomics features from 59.5% to 89.25%), and color normalization (improving AUC by up to 0.25 in external test sets). Initially, mathematical and statistical methods dominated, but machine and deep learning adoption has risen recently. Color imaging modalities such as digital pathology and dermatology have remained prominent application areas, though harmonization efforts have expanded to diverse fields including radiology, nuclear medicine, and ultrasound imaging. In all modalities covered by this review, image harmonization improved AI performance, with increases of up to 24.42% in classification accuracy and 47% in segmentation Dice scores. CONCLUSIONS: Continued progress in image harmonization represents a promising strategy for advancing healthcare by enabling large-scale, reliable AI analysis of integrated multi-source datasets. Standardizing imaging data across clinical settings can help realize personalized, evidence-based care supported by data-driven technologies while mitigating biases associated with specific populations or acquisition protocols.
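The grayscale normalization highlighted in the results above can be illustrated with a minimal, generic Python sketch of two common intensity-standardization schemes (z-score and min-max rescaling). This is an illustration of the class of techniques the review surveys, not code from any of the reviewed studies:

```python
import statistics

def zscore_normalize(pixels):
    """Map intensities to zero mean and unit variance, so that images
    acquired on different scanners land on a comparable grayscale scale."""
    mu = statistics.fmean(pixels)
    sigma = statistics.pstdev(pixels)
    if sigma == 0:
        return [0.0 for _ in pixels]  # constant image: nothing to scale
    return [(p - mu) / sigma for p in pixels]

def minmax_normalize(pixels, lo=0.0, hi=1.0):
    """Linearly rescale intensities into the range [lo, hi]."""
    pmin, pmax = min(pixels), max(pixels)
    if pmax == pmin:
        return [lo for _ in pixels]
    scale = (hi - lo) / (pmax - pmin)
    return [lo + (p - pmin) * scale for p in pixels]
```

In practice such normalizations are applied per image (or within an organ mask) before multi-center data are pooled or radiomic features extracted.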
Affiliation(s)
- Silvia Seoni: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Alen Shahini: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Kristen M Meiburger: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Francesco Marzola: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Giulia Rotunno: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia; Centre for Health Research, University of Southern Queensland, Australia
- Filippo Molinari: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Massimo Salvi: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
2. Cai L, Lambregts DMJ, Beets GL, Maas M, Pooch EHP, Guérendel C, Beets-Tan RGH, Benson S. An automated deep learning pipeline for EMVI classification and response prediction of rectal cancer using baseline MRI: a multi-centre study. NPJ Precis Oncol 2024; 8:17. PMID: 38253770. PMCID: PMC10803303. DOI: 10.1038/s41698-024-00516-x.
Abstract
The classification of extramural vascular invasion (EMVI) status using baseline magnetic resonance imaging in rectal cancer has gained significant attention, as EMVI is an important prognostic marker. Accurate prediction at primary staging MRI of which patients will achieve a complete response (CR) also assists clinicians in determining subsequent treatment plans. Most prior studies used radiomics-based methods, which require manually annotated segmentations and handcrafted features and tend to generalise poorly. We retrospectively collected data from 509 patients across 9 centres and propose a fully automated pipeline for EMVI status classification and CR prediction from diffusion-weighted and T2-weighted imaging. We applied nnUNet, a self-configuring deep learning model, for tumour segmentation and used the learned multi-level image features to train classification models, named MLNet. This ensures a more comprehensive representation of tumour features, in terms of both fine-grained detail and global context. On external validation, MLNet yielded AUCs similar to those on internal validation and outperformed 3D ResNet10, a ten-layer deep neural network designed for analysing spatiotemporal data, on both the CR and EMVI tasks. For CR prediction, MLNet also outperformed the current state-of-the-art model, which uses imaging and clinical features, on the same external cohort. Our study demonstrates that incorporating multi-level image representations learned by a deep-learning-based tumour segmentation model on primary MRI improves EMVI classification and CR prediction, with good generalisation to external data. We observed variations in the contributions of individual feature maps to the different classification tasks. This pipeline has the potential to be applied in clinical settings, particularly for EMVI classification.
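The pattern of pooling multi-level features from a segmentation backbone into one classifier input can be sketched in plain Python. This toy example (all function names hypothetical) only illustrates the general idea of combining shallow and deep feature maps, not the actual MLNet or nnUNet code:

```python
def global_average_pool(feature_map):
    """Collapse one 2-D feature map to a single scalar: its mean activation."""
    values = [v for row in feature_map for v in row]
    return sum(values) / len(values)

def multilevel_features(feature_maps_per_level):
    """Concatenate pooled activations from every encoder level.

    Shallow levels contribute fine-grained detail, deep levels global
    context; pooling each channel and concatenating yields one feature
    vector a downstream classifier can consume.
    """
    vector = []
    for level in feature_maps_per_level:   # e.g. shallow -> deep levels
        for fmap in level:                 # channels at this level
            vector.append(global_average_pool(fmap))
    return vector
```

A real pipeline would pool tensors from a trained network rather than nested lists, but the interface (one vector per tumour, fed to a classifier) is the same.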
Affiliation(s)
- Lishan Cai: Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands; GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, P. Debyelaan 25, 6202 AZ, Maastricht, The Netherlands
- Doenja M J Lambregts: Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands; GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, P. Debyelaan 25, 6202 AZ, Maastricht, The Netherlands
- Geerard L Beets: GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, P. Debyelaan 25, 6202 AZ, Maastricht, The Netherlands; Department of Surgery, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Monique Maas: Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands; GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, P. Debyelaan 25, 6202 AZ, Maastricht, The Netherlands
- Eduardo H P Pooch: Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands; GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, P. Debyelaan 25, 6202 AZ, Maastricht, The Netherlands
- Corentin Guérendel: Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands; GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, P. Debyelaan 25, 6202 AZ, Maastricht, The Netherlands
- Regina G H Beets-Tan: Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands; GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre, P. Debyelaan 25, 6202 AZ, Maastricht, The Netherlands
- Sean Benson: Department of Radiology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
3. Malik M, Chong B, Fernandez J, Shim V, Kasabov NK, Wang A. Stroke Lesion Segmentation and Deep Learning: A Comprehensive Review. Bioengineering (Basel) 2024; 11:86. PMID: 38247963. PMCID: PMC10813717. DOI: 10.3390/bioengineering11010086.
Abstract
Stroke is a medical condition that affects around 15 million people annually. It can cause motor, speech, cognitive, and emotional impairments, and patients and their families can face severe financial and emotional challenges. Stroke lesion segmentation identifies the stroke lesion visually while providing useful anatomical information. Though various computer-aided software packages are available for manual segmentation, state-of-the-art deep learning makes the task much easier. This review explores deep-learning-based lesion segmentation models and the impact of different pre-processing techniques on their performance. It aims to provide a comprehensive overview of state-of-the-art models, to guide future research, and to contribute to the development of more robust and effective stroke lesion segmentation models.
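Segmentation models of the kind this review compares are usually scored with the Dice similarity coefficient; a minimal generic implementation, included here only as a reference for the metric (not code from the review), is:

```python
def dice_score(pred_mask, true_mask):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists).

    Dice = 2|P ∩ T| / (|P| + |T|); 1.0 is perfect overlap, 0.0 is none.
    """
    intersection = sum(p * t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    if total == 0:
        return 1.0  # both masks empty: conventionally treated as agreement
    return 2.0 * intersection / total
```

Real pipelines compute this on flattened 3-D volumes, but the formula is unchanged.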
Affiliation(s)
- Mishaim Malik: Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand
- Benjamin Chong: Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; Faculty of Medical and Health Sciences, The University of Auckland, Auckland 1010, New Zealand; Centre for Brain Research, The University of Auckland, Auckland 1010, New Zealand
- Justin Fernandez: Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; Centre for Brain Research, The University of Auckland, Auckland 1010, New Zealand; Mātai Medical Research Institute, Gisborne 4010, New Zealand
- Vickie Shim: Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; Mātai Medical Research Institute, Gisborne 4010, New Zealand
- Nikola Kirilov Kasabov: Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; Knowledge Engineering and Discovery Research Innovation, School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand; Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria; Knowledge Engineering Consulting Ltd., Auckland 1071, New Zealand
- Alan Wang: Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; Faculty of Medical and Health Sciences, The University of Auckland, Auckland 1010, New Zealand; Centre for Brain Research, The University of Auckland, Auckland 1010, New Zealand; Mātai Medical Research Institute, Gisborne 4010, New Zealand; Medical Imaging Research Centre, The University of Auckland, Auckland 1010, New Zealand; Centre for Co-Created Ageing Research, The University of Auckland, Auckland 1010, New Zealand
4. PISDGAN: Perceive image structure and details for laryngeal image enhancement. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104307.
5. Interstitial lung abnormalities (ILA) on routine chest CT: Comparison of radiologists' visual evaluation and automated quantification. Eur J Radiol 2022; 157:110564. DOI: 10.1016/j.ejrad.2022.110564.
6. Kim E, Cho HH, Kwon J, Oh YT, Ko ES, Park H. Tumor-Attentive Segmentation-Guided GAN for Synthesizing Breast Contrast-Enhanced MRI Without Contrast Agents. IEEE J Transl Eng Health Med 2022; 11:32-43. PMID: 36478773. PMCID: PMC9721354. DOI: 10.1109/jtehm.2022.3221918.
Abstract
OBJECTIVE: Breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a sensitive imaging technique critical for breast cancer diagnosis. However, the administration of contrast agents poses a potential risk, which can be avoided if contrast-enhanced MRI can be obtained without contrast agents. We therefore aimed to generate T1-weighted contrast-enhanced MRI (ceT1) images from pre-contrast T1-weighted MRI (preT1) images of the breast. METHODS: We propose a generative adversarial network that synthesizes ceT1 from preT1 breast images, adopting a local discriminator and a segmentation task network to focus specifically on the tumor region in addition to the whole breast. The segmentation network performs the related task of segmenting the tumor region, which enhances important tumor-related information. In addition, edge maps are included to provide explicit shape and structural information. Our approach was evaluated and compared with other methods in local (n = 306) and external validation (n = 140) cohorts. Four evaluation metrics, normalized root mean squared error (NRMSE), Pearson cross-correlation coefficient (CC), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), were measured for the whole breast and for the tumor region. An ablation study was performed to evaluate the incremental benefits of the various components of our approach. RESULTS: Our approach performed best, with an average NRMSE of 25.65, PSNR of 54.80 dB, SSIM of 0.91, and CC of 0.88 on the local test set. CONCLUSION: The performance gains were replicated in the external validation cohort. SIGNIFICANCE: We hope that our method will help patients avoid potentially harmful contrast agents. Clinical and Translational Impact Statement: Contrast agents are necessary to obtain DCE-MRI, which is essential in breast cancer diagnosis, but their administration may cause side effects such as nephrogenic systemic fibrosis and the risk of toxic residue deposits. Our approach can generate DCE-MRI without contrast agents using a generative deep neural network, and could thus help patients avoid potentially harmful contrast agents, improving the diagnosis and treatment workflow for breast cancer.
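Two of the image-similarity metrics reported above, PSNR and NRMSE, can be computed with short stdlib-only Python functions. The sketch below is a generic reference implementation (here NRMSE is normalized by the reference intensity range, one of several conventions), not the authors' evaluation code:

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length intensity sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(reference, generated, max_value=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(reference, generated)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / err)

def nrmse(reference, generated):
    """Root-mean-squared error normalized by the reference intensity range."""
    spread = max(reference) - min(reference)
    return math.sqrt(mse(reference, generated)) / spread
```

SSIM, the fourth metric, needs local windowed statistics and is usually taken from an image-processing library rather than reimplemented.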
Affiliation(s)
- Eunjin Kim: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Hwan-Ho Cho: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, South Korea; Department of Medical Artificial Intelligence, Konyang University, Daejeon 35365, South Korea
- Junmo Kwon: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Young-Tack Oh: Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, South Korea
- Eun Sook Ko: Department of Radiology, Samsung Medical Center, School of Medicine, Sungkyunkwan University, Seoul 06351, South Korea
- Hyunjin Park: School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon 16419, South Korea; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon 16419, South Korea
7. Scalco E, Rizzo G, Mastropietro A. The stability of oncologic MRI radiomic features and the potential role of deep learning: a review. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac60b9.
Abstract
The use of MRI radiomic models for the diagnosis, prognosis, and treatment response prediction of tumors has been increasingly reported in the literature. However, their widespread adoption in clinics is hampered by issues related to feature stability. In the MRI radiomic workflow, the main factors affecting radiomic feature computation lie in the image acquisition and reconstruction phase, in the image pre-processing steps, and in the segmentation of the region of interest from which radiomic indices are extracted. Deep neural networks (DNNs), having shown their potential in the medical image processing and analysis field, are an attractive strategy to partially overcome the issues related to radiomic stability and mitigate their impact. In fact, DNN approaches can prospectively be integrated into the MRI radiomic workflow to improve image quality, obtain accurate and reproducible segmentations, and generate standardized images. In this review, DNN methods that can be included in the image processing steps of the radiomic workflow are described and discussed, in light of a detailed analysis of the literature on MRI radiomic reliability.
8. Balkenende L, Teuwen J, Mann RM. Application of Deep Learning in Breast Cancer Imaging. Semin Nucl Med 2022; 52:584-596. PMID: 35339259. DOI: 10.1053/j.semnuclmed.2022.02.003.
Abstract
This review gives an overview of the current state of deep learning research in breast cancer imaging. Breast imaging plays a major role in detecting breast cancer at an early stage, as well as in monitoring and evaluating breast cancer during treatment. The most commonly used modalities for breast imaging are digital mammography, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging. Nuclear medicine imaging techniques are used for the detection and classification of axillary lymph nodes and for distant staging. All of these techniques are now digitized, enabling the implementation of deep learning (DL), a subset of artificial intelligence, in breast imaging. DL is already embedded in a plethora of tasks, such as lesion classification and segmentation, image reconstruction and generation, cancer risk prediction, and the prediction and assessment of therapy response. Studies show similar or even better performance of DL algorithms compared with radiologists, although it is clear that large trials are needed, especially for ultrasound and magnetic resonance imaging, to determine the exact added value of DL in breast cancer imaging. Studies on DL in nuclear medicine techniques are only sparsely available, and further research is needed. Legal and ethical issues must also be considered before DL can expand to its full potential in clinical breast care practice.
Affiliation(s)
- Luuk Balkenende: Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Jonas Teuwen: Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Radiation Oncology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands
- Ritse M Mann: Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
|
9
|
Ibrahim A, Widaatalla Y, Refaee T, Primakov S, Miclea RL, Öcal O, Fabritius MP, Ingrisch M, Ricke J, Hustinx R, Mottaghy FM, Woodruff HC, Seidensticker M, Lambin P. Reproducibility of CT-Based Hepatocellular Carcinoma Radiomic Features across Different Contrast Imaging Phases: A Proof of Concept on SORAMIC Trial Data. Cancers (Basel) 2021; 13:cancers13184638. [PMID: 34572870 PMCID: PMC8468150 DOI: 10.3390/cancers13184638] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Revised: 09/11/2021] [Accepted: 09/13/2021] [Indexed: 12/17/2022] Open
Abstract
Simple Summary: Radiomics has been reported to have potential for correlating with clinical outcomes. However, handcrafted radiomic features (HRFs), the quantitative features extracted from medical images, are limited by their sensitivity to variations in scanning parameters. Furthermore, radiomics analyses require large, high-quality datasets to achieve desirable performance. In this study, we investigated the reproducibility of HRFs between scans acquired with the same scanning parameters except for the imaging phase (arterial and portal venous phases), to assess the possibility of merging scans from different phases, or of replacing missing scans from one phase with scans from another, to increase data entries. Additionally, we assessed the potential of ComBat harmonization to remove batch effects attributed to this variation. Our results show that the majority of HRFs were not reproducible between the arterial and portal venous phases, either before or after ComBat harmonization. We provide a guide for analyzing scans of different imaging phases.
Handcrafted radiomic features (HRFs) are quantitative imaging features extracted from regions of interest on medical images which can be correlated with clinical outcomes and biologic characteristics. While HRFs have been used to train predictive and prognostic models, their reproducibility has been reported to be affected by variations in scan acquisition and reconstruction parameters, even within the same imaging vendor. In this work, we evaluated the reproducibility of HRFs across the arterial and portal venous phases of contrast-enhanced computed tomography images depicting hepatocellular carcinomas, as well as the potential of ComBat harmonization to correct for this difference. ComBat harmonization is a method based on Bayesian estimates that was developed for gene expression arrays and has been investigated as a potential method for harmonizing HRFs. Our results show that the majority of HRFs are not reproducible between the arterial and portal venous imaging phases, yet a number of HRFs could be used interchangeably between those phases. Furthermore, ComBat harmonization increased the number of reproducible HRFs across both phases by 1%. Our results guide the pooling of arterial and venous phases from different patients in an effort to increase cohort size, as well as joint analysis of the phases.
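ComBat's core idea of aligning per-batch location and scale to pooled statistics can be sketched as follows. Note that this omits ComBat's empirical-Bayes shrinkage of batch parameters and its preservation of biological covariates, so it is only a didactic simplification, not the harmonization actually used in the study:

```python
import statistics

def combat_like_harmonize(batches):
    """Simplified location/scale (ComBat-style) harmonization of one feature.

    Each batch (e.g. one imaging phase or scanner) is shifted and scaled so
    its mean and standard deviation match those of the pooled data; full
    ComBat additionally shrinks the per-batch estimates with empirical Bayes.
    """
    pooled = [v for batch in batches for v in batch]
    grand_mean = statistics.fmean(pooled)
    grand_sd = statistics.pstdev(pooled)
    harmonized = []
    for batch in batches:
        b_mean = statistics.fmean(batch)
        b_sd = statistics.pstdev(batch) or 1.0  # guard constant batches
        harmonized.append(
            [grand_mean + grand_sd * (v - b_mean) / b_sd for v in batch]
        )
    return harmonized
```

After harmonization every batch shares the pooled mean and spread, which is exactly the "batch effect removal" the abstract evaluates (feature by feature) between the arterial and portal venous phases.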
Affiliation(s)
- Abdalla Ibrahim (corresponding author): The D-Lab, Department of Precision Medicine, GROW School for Oncology, Maastricht University, 6200 MD Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, 6200 MD Maastricht, The Netherlands; Division of Nuclear Medicine and Oncological Imaging, Department of Medical Physics, University Hospital of Liege and GIGA CRC-In Vivo Imaging, University of Liege, 4000 Liege, Belgium; Department of Nuclear Medicine and Comprehensive Diagnostic Center Aachen (CDCA), University Hospital RWTH Aachen University, 52074 Aachen, Germany
- Yousif Widaatalla: The D-Lab, Department of Precision Medicine, GROW School for Oncology, Maastricht University, 6200 MD Maastricht, The Netherlands
- Turkey Refaee (corresponding author): The D-Lab, Department of Precision Medicine, GROW School for Oncology, Maastricht University, 6200 MD Maastricht, The Netherlands; Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, Jazan University, Jazan 45142, Saudi Arabia
- Sergey Primakov: The D-Lab, Department of Precision Medicine, GROW School for Oncology, Maastricht University, 6200 MD Maastricht, The Netherlands; Department of Nuclear Medicine and Comprehensive Diagnostic Center Aachen (CDCA), University Hospital RWTH Aachen University, 52074 Aachen, Germany
- Razvan L. Miclea: Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, 6200 MD Maastricht, The Netherlands
- Osman Öcal: Department of Radiology, University Hospital, LMU Munich, 80336 Munich, Germany
- Matthias P. Fabritius: Department of Radiology, University Hospital, LMU Munich, 80336 Munich, Germany
- Michael Ingrisch: Department of Radiology, University Hospital, LMU Munich, 80336 Munich, Germany
- Jens Ricke: Department of Radiology, University Hospital, LMU Munich, 80336 Munich, Germany
- Roland Hustinx: Division of Nuclear Medicine and Oncological Imaging, Department of Medical Physics, University Hospital of Liege and GIGA CRC-In Vivo Imaging, University of Liege, 4000 Liege, Belgium
- Felix M. Mottaghy: Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, 6200 MD Maastricht, The Netherlands; Department of Nuclear Medicine and Comprehensive Diagnostic Center Aachen (CDCA), University Hospital RWTH Aachen University, 52074 Aachen, Germany
- Henry C. Woodruff: The D-Lab, Department of Precision Medicine, GROW School for Oncology, Maastricht University, 6200 MD Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, 6200 MD Maastricht, The Netherlands
- Max Seidensticker: Department of Radiology, University Hospital, LMU Munich, 80336 Munich, Germany
- Philippe Lambin: The D-Lab, Department of Precision Medicine, GROW School for Oncology, Maastricht University, 6200 MD Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, 6200 MD Maastricht, The Netherlands