1. Xiao Y, Li Y, Zhao H. Spatiotemporal metabolomic approaches to the cancer-immunity panorama: a methodological perspective. Mol Cancer 2024; 23:202. [PMID: 39294747; PMCID: PMC11409752; DOI: 10.1186/s12943-024-02113-9]
Abstract
Metabolic reprogramming drives the development of an immunosuppressive tumor microenvironment (TME) through various pathways, contributing to cancer progression and reducing the effectiveness of anticancer immunotherapy. However, our understanding of the metabolic landscape within the tumor-immune context has been limited by conventional metabolic measurements, which have not provided comprehensive insights into the spatiotemporal heterogeneity of metabolism within the TME. The emergence of single-cell, spatial, and in vivo metabolomic technologies has now enabled detailed and unbiased analysis, revealing unprecedented spatiotemporal heterogeneity that is particularly valuable in the field of cancer immunology. This review summarizes the methodologies of metabolomics and metabolic regulomics that can be applied to the study of cancer-immunity across single-cell, spatial, and in vivo dimensions, and systematically assesses their benefits and limitations.
Affiliation(s)
- Yang Xiao
- Chongqing University Cancer Hospital, School of Medicine, Chongqing University, Chongqing, 400044, China
- Yongsheng Li
- Chongqing University Cancer Hospital, School of Medicine, Chongqing University, Chongqing, 400044, China.
- Department of Medical Oncology, Chongqing University Cancer Hospital, Chongqing, 400030, China.
- Huakan Zhao
- Chongqing University Cancer Hospital, School of Medicine, Chongqing University, Chongqing, 400044, China.
- Department of Medical Oncology, Chongqing University Cancer Hospital, Chongqing, 400030, China.
2. Hurkmans C, Bibault JE, Brock KK, van Elmpt W, Feng M, David Fuller C, Jereczek-Fossa BA, Korreman S, Landry G, Madesta F, Mayo C, McWilliam A, Moura F, Muren LP, El Naqa I, Seuntjens J, Valentini V, Velec M. A joint ESTRO and AAPM guideline for development, clinical validation and reporting of artificial intelligence models in radiation therapy. Radiother Oncol 2024; 197:110345. [PMID: 38838989; DOI: 10.1016/j.radonc.2024.110345]
Abstract
BACKGROUND AND PURPOSE Artificial intelligence (AI) models in radiation therapy are being developed at an increasing pace. Despite this, the radiation therapy community has not widely adopted these models in clinical practice. A cohesive guideline on how to develop, report and clinically validate AI algorithms might help bridge this gap. METHODS AND MATERIALS A Delphi process with all co-authors was followed to determine which topics should be addressed in this comprehensive guideline. Separate sections of the guideline, including Statements, were written by subgroups of the authors and discussed with the whole group at several meetings. Statements were formulated and scored as highly recommended or recommended. RESULTS The following topics were found most relevant: decision making, image analysis, volume segmentation, treatment planning, patient-specific quality assurance of treatment delivery, adaptive treatment, outcome prediction, training, validation and testing of AI model parameters, model availability for others to verify, model quality assurance/updates and upgrades, and ethics. Key references were given together with an outlook on current hurdles and possibilities to overcome these. In total, 19 Statements were formulated. CONCLUSION A cohesive guideline has been written which addresses the main topics regarding AI in radiation therapy. It will help to guide development, as well as transparent and consistent reporting and validation of new AI tools, and facilitate adoption.
Affiliation(s)
- Coen Hurkmans
- Department of Radiation Oncology, Catharina Hospital, Eindhoven, the Netherlands; Department of Electrical Engineering, Technical University Eindhoven, Eindhoven, the Netherlands.
- Kristy K Brock
- Departments of Imaging Physics and Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Wouter van Elmpt
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, the Netherlands
- Mary Feng
- University of California San Francisco, San Francisco, CA, USA
- Clifton David Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer, Houston, TX
- Barbara A Jereczek-Fossa
- Dept. of Oncology and Hemato-oncology, University of Milan, Milan, Italy; Dept. of Radiation Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
- Stine Korreman
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Guillaume Landry
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany; German Cancer Consortium (DKTK), Partner Site Munich, a Partnership between DKFZ and LMU University Hospital Munich, Germany; Bavarian Cancer Research Center (BZKF), Partner Site Munich, Munich, Germany
- Frederic Madesta
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Center for Biomedical Artificial Intelligence (bAIome), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Chuck Mayo
- Institute for Healthcare Policy and Innovation, University of Michigan, USA
- Alan McWilliam
- Division of Cancer Sciences, The University of Manchester, Manchester, UK
- Filipe Moura
- CrossI&D Lisbon Research Center, Portuguese Red Cross Higher Health School Lisbon, Portugal
- Ludvig P Muren
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL 33612, USA
- Jan Seuntjens
- Princess Margaret Cancer Centre, Radiation Medicine Program, University Health Network & Departments of Radiation Oncology and Medical Biophysics, University of Toronto, Toronto, Canada
- Vincenzo Valentini
- Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Fondazione Policlinico Universitario "Agostino Gemelli" IRCCS, Rome, Italy; Università Cattolica del Sacro Cuore, Rome, Italy
- Michael Velec
- Radiation Medicine Program, Princess Margaret Cancer Centre and Department of Radiation Oncology, University of Toronto, Toronto, Canada
3. Jung E, Kong E, Yu D, Yang H, Chicontwe P, Park SH, Jeon I. Generation of synthetic PET/MR fusion images from MR images using a combination of generative adversarial networks and conditional denoising diffusion probabilistic models based on simultaneous 18F-FDG PET/MR image data of pyogenic spondylodiscitis. Spine J 2024; 24:1467-1477. [PMID: 38615932; DOI: 10.1016/j.spinee.2024.04.007]
Abstract
BACKGROUND CONTEXT Cross-modality image generation from magnetic resonance (MR) to positron emission tomography (PET) using generative models can be expected to have complementary effects by addressing the limitations and maximizing the advantages inherent in each modality. PURPOSE This study aims to generate synthetic PET/MR fusion images from MR images using a combination of generative adversarial networks (GANs) and conditional denoising diffusion probabilistic models (cDDPMs) based on simultaneous 18F-fluorodeoxyglucose (18F-FDG) PET/MR image data. STUDY DESIGN Retrospective study with prospectively collected clinical and radiological data. PATIENT SAMPLE This study included 94 patients (60 men and 34 women) with thoraco-lumbar pyogenic spondylodiscitis (PSD) treated from February 2017 to January 2020 in a single tertiary institution. OUTCOME MEASURES Quantitative and qualitative image similarity were analyzed between the real and synthetic PET/T2-weighted fat-saturation MR (T2FS) fusion images on the test data set. METHODS We used paired spinal sagittal T2FS and PET/T2FS fusion images from simultaneous 18F-FDG PET/MR imaging examinations in patients with PSD, which were employed to generate synthetic PET/T2FS fusion images from T2FS images using a combination of Pix2Pix (U-Net generator + least-squares GAN discriminator) and cDDPM algorithms. In the analyses of image similarity between the real and synthetic PET/T2FS fusion images, we adopted the mean peak signal-to-noise ratio (PSNR), mean structural similarity (SSIM), mean absolute error (MAE), and mean squared error (MSE) for quantitative analysis, while the discrimination accuracy of three spine surgeons was used for qualitative analysis. RESULTS A total of 2,082 pairs of T2FS and PET/T2FS fusion images were obtained from 172 examinations of 94 patients and randomly assigned to training, validation, and test data sets in an 8:1:1 ratio (1,664, 209, and 209 pairs). The quantitative analysis yielded a PSNR of 30.634 ± 3.437, an SSIM of 0.910 ± 0.067, an MAE of 0.017 ± 0.008, and an MSE of 0.001 ± 0.001. The values of PSNR, MAE, and MSE significantly decreased as FDG uptake increased in the real PET/T2FS fusion images, with no significant correlation for SSIM. In the qualitative analysis, the overall discrimination accuracy between real and synthetic PET/T2FS fusion images was 47.4%. CONCLUSIONS The combination of Pix2Pix and cDDPMs demonstrated the potential for cross-modal image generation from MR to PET images, with reliable quantitative and qualitative image similarity.
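To make the quantitative similarity analysis above concrete, the sketch below computes PSNR, SSIM, MAE, and MSE for one real/synthetic image pair with NumPy and scikit-image. It assumes slices normalized to [0, 1] and random arrays standing in for a test pair; it illustrates the metrics only and is not the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def similarity_metrics(real: np.ndarray, synthetic: np.ndarray) -> dict:
    """Compare one real and one synthetic PET/T2FS fusion slice.

    Both arrays are assumed to be 2-D and scaled to [0, 1]; illustrative only.
    """
    mae = float(np.mean(np.abs(real - synthetic)))
    mse = float(np.mean((real - synthetic) ** 2))
    psnr = peak_signal_noise_ratio(real, synthetic, data_range=1.0)
    ssim = structural_similarity(real, synthetic, data_range=1.0)
    return {"PSNR": psnr, "SSIM": ssim, "MAE": mae, "MSE": mse}

# Random data standing in for a real/synthetic test pair
rng = np.random.default_rng(0)
real = rng.random((256, 256))
synthetic = np.clip(real + 0.05 * rng.standard_normal((256, 256)), 0, 1)
print(similarity_metrics(real, synthetic))
```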
Affiliation(s)
- Euijin Jung
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea
- Eunjung Kong
- Department of Nuclear Medicine, Yeungnam University Hospital, Yeungnam University College of Medicine, Daegu, South Korea
- Dongwoo Yu
- Department of Neurosurgery, Yeungnam University Hospital, Yeungnam University College of Medicine, Daegu, South Korea
- Heesung Yang
- School of Computer Science and Engineering, Kyungpook National University, Daegu, South Korea
- Philip Chicontwe
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea
- Sang Hyun Park
- Department of Robotics and Mechatronics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, South Korea
- Ikchan Jeon
- Department of Neurosurgery, Yeungnam University Hospital, Yeungnam University College of Medicine, Daegu, South Korea.
4. Yayan J, Rasche K, Franke KJ, Windisch W, Berger M. FDG-PET-CT as an early detection method for tuberculosis: a systematic review and meta-analysis. BMC Public Health 2024; 24:2022. [PMID: 39075378; PMCID: PMC11285570; DOI: 10.1186/s12889-024-19495-6]
Abstract
Tuberculosis (TB) causes major public health problems worldwide. Fighting TB requires sustained efforts in health prevention, diagnosis and treatment. Previous literature has shown that conventional diagnostic methods like X-ray and sputum microscopy often miss early or extrapulmonary TB due to their limited sensitivity. Blood tests, while useful, lack the anatomical detail needed for precise localization of TB lesions. A possible step forward in the fight against TB could be the use of Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) and Computed Tomography (CT). This meta-analysis discusses the current literature, including the methods, results and implications of using FDG-PET-CT in the early diagnosis of TB. Analysis of the studies showed that the sensitivity of FDG-PET-CT as a potential method for early detection of TB was 82.6%.
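The pooled sensitivity reported above is the kind of summary a meta-analysis derives from per-study counts. The sketch below illustrates one common approach, inverse-variance pooling of logit-transformed sensitivities, on made-up counts; the numbers and the fixed-effect choice are assumptions for illustration, not the review's actual data or model.

```python
import math

# Hypothetical per-study (true positives, false negatives) counts -- NOT the
# data from the review -- purely to illustrate inverse-variance pooling.
studies = [(45, 8), (30, 7), (88, 20), (19, 4)]

weights, logits = [], []
for tp, fn in studies:
    sens = tp / (tp + fn)
    logit = math.log(sens / (1 - sens))
    var = 1 / tp + 1 / fn            # approximate variance of the logit
    logits.append(logit)
    weights.append(1 / var)

pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
pooled_sens = 1 / (1 + math.exp(-pooled_logit))
print(f"Pooled sensitivity: {pooled_sens:.1%}")
```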
Affiliation(s)
- Josef Yayan
- Department of Internal Medicine, Division of Pulmonary, Allergy and Sleep Medicine, Witten/Herdecke University, HELIOS Clinic Wuppertal, Heusnerstr. 40, 42283, Wuppertal, Germany.
- Kurt Rasche
- Department of Internal Medicine, Division of Pulmonary, Allergy and Sleep Medicine, Witten/Herdecke University, HELIOS Clinic Wuppertal, Heusnerstr. 40, 42283, Wuppertal, Germany
- Karl-Josef Franke
- University of Witten/Herdecke Chair of Internal Medicine I Department of Pulmonary Medicine, Clinical Center Siegen, Siegen, Germany
- Wolfram Windisch
- Department of Pneumology, Cologne Merheim Hospital, Witten/Herdecke University, Cologne, Germany
- Melanie Berger
- Department of Pneumology, Cologne Merheim Hospital, Witten/Herdecke University, Cologne, Germany
5. Zavaleta-Monestel E, Alpizar-Rojas M, García-Montero J, León-Obando A, Arguedas-Chacón S, Quesada-Villaseñor R. Alternatives for the Detection and Diagnosis of Osteoarticular Infections: An Exploratory Review. Cureus 2024; 16:e63743. [PMID: 39099945; PMCID: PMC11296701; DOI: 10.7759/cureus.63743]
Abstract
The precise diagnosis of osteomyelitis, a bone infection, remains a significant challenge for healthcare professionals. This difficulty stems from the highly variable nature of its clinical presentation and disease course. Patients can exhibit a wide range of symptoms, making it easy to misdiagnose the condition. In turn, inaccurate diagnoses lead to inappropriate treatment regimens, potentially hindering a patient's recovery and causing unnecessary complications. Nuclear medicine offers a ray of hope in this fight against diagnostic ambiguity. It provides valuable tools, such as radiopharmaceutical imaging, that can significantly improve the accuracy of osteomyelitis diagnosis. However, limitations exist. This article explores the need for alternative diagnostic approaches within the specific context of Costa Rica. This exploration is particularly relevant due to the current regional shortage of gallium-67 (⁶⁷Ga), a radiopharmaceutical commonly used in osteomyelitis diagnosis. The article delves into the nature, function, and limitations of various nuclear medicine techniques, encompassing both independent radiopharmaceuticals like ⁶⁷Ga and those conjugated with specific targeting molecules to pinpoint areas of infection within the body. Given the scarcity of ⁶⁷Ga in Costa Rica, it becomes crucial to explore and implement viable alternative diagnostic techniques within the healthcare system. This article emphasizes the need for further investigation into these alternatives, with the goal of improving diagnostic accuracy and ensuring optimal patient care. By implementing these alternatives, healthcare professionals in Costa Rica can effectively combat the challenges posed by osteomyelitis and pave the way for better patient outcomes.
Affiliation(s)
- Esteban Zavaleta-Monestel
- Pharmacy, Hospital Clínica Bíblica, San José, CRI
- Pharmacy, Universidad de Iberoámerica, San José, CRI
6. Ebrahimi S, Lundström E, Batasin SJ, Hedlund E, Stålberg K, Ehman EC, Sheth VR, Iranpour N, Loubrie S, Schlein A, Rakow-Penner R. Application of PET/MRI in Gynecologic Malignancies. Cancers (Basel) 2024; 16:1478. [PMID: 38672560; PMCID: PMC11048306; DOI: 10.3390/cancers16081478]
Abstract
The diagnosis, treatment, and management of gynecologic malignancies benefit from both positron emission tomography/computed tomography (PET/CT) and MRI. PET/CT provides important information on the local extent of disease as well as diffuse metastatic involvement. MRI offers soft tissue delineation and loco-regional disease involvement. The combination of these two technologies is key in diagnosis, treatment planning, and evaluating treatment response in gynecological malignancies. This review aims to assess the performance of PET/MRI in gynecologic cancer patients and outlines the technical challenges and clinical advantages of PET/MR systems when specifically applied to gynecologic malignancies.
Affiliation(s)
- Sheida Ebrahimi
- Department of Radiology, University of California San Diego, La Jolla, CA 92093, USA
- Elin Lundström
- Department of Radiology, University of California San Diego, La Jolla, CA 92093, USA
- Department of Surgical Sciences, Radiology, Uppsala University, 751 85 Uppsala, Sweden
- Center for Medical Imaging, Uppsala University Hospital, 751 85 Uppsala, Sweden
- Summer J. Batasin
- Department of Radiology, University of California San Diego, La Jolla, CA 92093, USA
- Elisabeth Hedlund
- Department of Surgical Sciences, Radiology, Uppsala University, 751 85 Uppsala, Sweden
- Karin Stålberg
- Department of Women’s and Children’s Health, Uppsala University, 751 85 Uppsala, Sweden
- Eric C. Ehman
- Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Vipul R. Sheth
- Department of Radiology, Stanford University, Palo Alto, CA 94305, USA
- Negaur Iranpour
- Department of Radiology, Stanford University, Palo Alto, CA 94305, USA
- Stephane Loubrie
- Department of Radiology, University of California San Diego, La Jolla, CA 92093, USA
- Alexandra Schlein
- Department of Radiology, University of California San Diego, La Jolla, CA 92093, USA
- Rebecca Rakow-Penner
- Department of Radiology, University of California San Diego, La Jolla, CA 92093, USA
- Department of Bioengineering, University of California San Diego, La Jolla, CA 92093, USA
7. Fuchs T, Kaiser L, Müller D, Papp L, Fischer R, Tran-Gia J. Enhancing Interoperability and Harmonisation of Nuclear Medicine Image Data and Associated Clinical Data. Nuklearmedizin 2023; 62:389-398. [PMID: 37907246; PMCID: PMC10689089; DOI: 10.1055/a-2187-5701]
Abstract
Nuclear imaging techniques such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) in combination with computed tomography (CT) are established imaging modalities in clinical practice, particularly in oncology. Because of the multitude of manufacturers, different measurement protocols, local demographic or clinical workflow variations, and the variety of available reconstruction and analysis software, very heterogeneous datasets are generated. This review article examines the current state of interoperability and harmonisation of image data and related clinical data in the field of nuclear medicine. Various approaches and standards to improve data compatibility and integration are discussed, including structured clinical history, standardisation of image acquisition and reconstruction, and standardised preparation of image data for evaluation. Approaches to improve data acquisition, storage and analysis are presented, as are approaches to prepare datasets so that they become usable for projects applying artificial intelligence (AI), such as machine learning and deep learning. The review concludes with an outlook on future developments and trends related to AI in nuclear medicine, including a brief survey of commercial solutions.
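As a small illustration of the kind of metadata harmonisation discussed above, the sketch below pulls a vendor-neutral subset of DICOM header tags from nuclear medicine images using pydicom. The tag selection, field names, and folder layout are assumptions made for illustration, not a standard endorsed by the article.

```python
from pathlib import Path
import pydicom  # assumption: pydicom is available in the environment

def extract_harmonised_record(dicom_path: Path) -> dict:
    """Read a minimal, vendor-neutral set of tags from one PET/SPECT file.

    Illustrative sketch of metadata harmonisation only; the selected tags
    and record keys are assumptions, not a published profile.
    """
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    return {
        "modality": ds.get("Modality"),
        "manufacturer": ds.get("Manufacturer"),
        "series_description": ds.get("SeriesDescription"),
        "patient_weight_kg": ds.get("PatientWeight"),
        "acquisition_date": ds.get("AcquisitionDate"),
    }

# Hypothetical usage over a local folder of exams:
# records = [extract_harmonised_record(p) for p in Path("exams").rglob("*.dcm")]
```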
Affiliation(s)
- Timo Fuchs
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Lena Kaiser
- Department of Nuclear Medicine, LMU University Hospital, LMU, Munich, Germany
- Dominik Müller
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Augsburg, Germany
- Medical Data Integration Center, University Hospital Augsburg, Augsburg, Germany
- Laszlo Papp
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Wien, Austria
- Regina Fischer
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Johannes Tran-Gia
- Department of Nuclear Medicine, University Hospital Würzburg, Wurzburg, Germany
8. Bouchelouche K, Sathekge MM. Letter from the Editors. Semin Nucl Med 2023; 53:555-557. [PMID: 37451935; DOI: 10.1053/j.semnuclmed.2023.06.008]
9. Schwenck J, Sonanini D, Cotton JM, Rammensee HG, la Fougère C, Zender L, Pichler BJ. Advances in PET imaging of cancer. Nat Rev Cancer 2023. [PMID: 37258875; DOI: 10.1038/s41568-023-00576-4]
Abstract
Molecular imaging has experienced enormous advancements in the areas of imaging technology, imaging probe and contrast development, and data quality, as well as machine learning-based data analysis. Positron emission tomography (PET) and its combination with computed tomography (CT) or magnetic resonance imaging (MRI) as a multimodality PET-CT or PET-MRI system offer a wealth of molecular, functional and morphological data with a single patient scan. Despite the recent technical advances and the availability of dozens of disease-specific contrast and imaging probes, only a few parameters, such as tumour size or the mean tracer uptake, are used for the evaluation of images in clinical practice. Multiparametric in vivo imaging data not only are highly quantitative but also can provide invaluable information about pathophysiology, receptor expression, metabolism, or morphological and functional features of tumours, such as pH, oxygenation or tissue density, as well as pharmacodynamic properties of drugs, to measure drug response with a contrast agent. It can further quantitatively map and spatially resolve the intertumoural and intratumoural heterogeneity, providing insights into tumour vulnerabilities for target-specific therapeutic interventions. Failure to exploit and integrate the full potential of such powerful imaging data may lead to a lost opportunity in which patients do not receive the best possible care. With the desire to implement personalized medicine in the cancer clinic, the full comprehensive diagnostic power of multiplexed imaging should be utilized.
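As an example of the simple uptake parameters the review contrasts with richer multiparametric read-outs, the sketch below computes the standard body-weight-normalised SUV from an activity-concentration image. It assumes the image is already decay-corrected to injection time and uses hypothetical numbers.

```python
import numpy as np

def suv_bw(activity_bq_per_ml: np.ndarray,
           injected_dose_bq: float,
           body_weight_kg: float) -> np.ndarray:
    """Body-weight-normalised SUV (textbook definition, illustrative only)."""
    # Approximate 1 kg of tissue as 1000 ml so the result is a unitless ratio.
    return activity_bq_per_ml / (injected_dose_bq / (body_weight_kg * 1000.0))

voxels = np.array([5000.0, 12000.0, 30000.0])   # Bq/ml, hypothetical values
print(suv_bw(voxels, injected_dose_bq=350e6, body_weight_kg=75.0))
```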
Affiliation(s)
- Johannes Schwenck
- Werner Siemens Imaging Center, Department of Preclinical Imaging and Radiopharmacy, Eberhard Karls University of Tübingen, Tübingen, Germany
- Nuclear Medicine and Clinical Molecular Imaging, Department of Radiology, Eberhard Karls University of Tübingen, Tübingen, Germany
- Cluster of Excellence iFIT (EXC 2180) 'Image-Guided and Functionally Instructed Tumour Therapies', Eberhard Karls University, Tübingen, Germany
- Dominik Sonanini
- Werner Siemens Imaging Center, Department of Preclinical Imaging and Radiopharmacy, Eberhard Karls University of Tübingen, Tübingen, Germany
- Medical Oncology and Pulmonology, Department of Internal Medicine, Eberhard Karls University of Tübingen, Tübingen, Germany
- Jonathan M Cotton
- Werner Siemens Imaging Center, Department of Preclinical Imaging and Radiopharmacy, Eberhard Karls University of Tübingen, Tübingen, Germany
- Cluster of Excellence iFIT (EXC 2180) 'Image-Guided and Functionally Instructed Tumour Therapies', Eberhard Karls University, Tübingen, Germany
- Hans-Georg Rammensee
- Cluster of Excellence iFIT (EXC 2180) 'Image-Guided and Functionally Instructed Tumour Therapies', Eberhard Karls University, Tübingen, Germany
- Department of Immunology, IFIZ Institute for Cell Biology, Eberhard Karls University of Tübingen, Tübingen, Germany
- German Cancer Research Center, German Cancer Consortium DKTK, Partner Site Tübingen, Tübingen, Germany
- Christian la Fougère
- Nuclear Medicine and Clinical Molecular Imaging, Department of Radiology, Eberhard Karls University of Tübingen, Tübingen, Germany
- Cluster of Excellence iFIT (EXC 2180) 'Image-Guided and Functionally Instructed Tumour Therapies', Eberhard Karls University, Tübingen, Germany
- German Cancer Research Center, German Cancer Consortium DKTK, Partner Site Tübingen, Tübingen, Germany
- Lars Zender
- Cluster of Excellence iFIT (EXC 2180) 'Image-Guided and Functionally Instructed Tumour Therapies', Eberhard Karls University, Tübingen, Germany
- Medical Oncology and Pulmonology, Department of Internal Medicine, Eberhard Karls University of Tübingen, Tübingen, Germany
- German Cancer Research Center, German Cancer Consortium DKTK, Partner Site Tübingen, Tübingen, Germany
- Bernd J Pichler
- Werner Siemens Imaging Center, Department of Preclinical Imaging and Radiopharmacy, Eberhard Karls University of Tübingen, Tübingen, Germany.
- Cluster of Excellence iFIT (EXC 2180) 'Image-Guided and Functionally Instructed Tumour Therapies', Eberhard Karls University, Tübingen, Germany.
- German Cancer Research Center, German Cancer Consortium DKTK, Partner Site Tübingen, Tübingen, Germany.
10. Rajagopal A, Natsuaki Y, Wangerin K, Hamdi M, An H, Sunderland JJ, Laforest R, Kinahan PE, Larson PEZ, Hope TA. Synthetic PET via Domain Translation of 3-D MRI. IEEE Trans Radiat Plasma Med Sci 2023; 7:333-343. [PMID: 37396797; PMCID: PMC10311993; DOI: 10.1109/trpms.2022.3223275]
Abstract
Historically, patient datasets have been used to develop and validate various reconstruction algorithms for PET/MRI and PET/CT. To enable such algorithm development, without the need for acquiring hundreds of patient exams, in this article we demonstrate a deep learning technique to generate synthetic but realistic whole-body PET sinograms from abundantly available whole-body MRI. Specifically, we use a dataset of 56 18F-FDG-PET/MRI exams to train a 3-D residual UNet to predict physiologic PET uptake from whole-body T1-weighted MRI. In training, we implemented a balanced loss function to generate realistic uptake across a large dynamic range and computed losses along tomographic lines of response to mimic the PET acquisition. The predicted PET images are forward projected to produce synthetic PET (sPET) time-of-flight (ToF) sinograms that can be used with vendor-provided PET reconstruction algorithms, including using CT-based attenuation correction (CTAC) and MR-based attenuation correction (MRAC). The resulting synthetic data recapitulates physiologic 18F-FDG uptake, e.g., high uptake localized to the brain and bladder, as well as uptake in liver, kidneys, heart, and muscle. To simulate abnormalities with high uptake, we also insert synthetic lesions. We demonstrate that this sPET data can be used interchangeably with real PET data for the PET quantification task of comparing CTAC and MRAC methods, achieving ≤ 7.6% error in mean-SUV compared to using real data. These results together show that the proposed sPET data pipeline can be reasonably used for development, evaluation, and validation of PET/MRI reconstruction methods.
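The sketch below illustrates the quantification task used above to validate the synthetic data: comparing mean SUV between two reconstructions of the same exam (e.g., CTAC vs. MRAC) within a region of interest. The random volumes and the 3% bias are placeholders, not the authors' pipeline or data.

```python
import numpy as np

def mean_suv_percent_error(recon_a: np.ndarray, recon_b: np.ndarray,
                           roi_mask: np.ndarray) -> float:
    """Percent difference in mean SUV between two co-registered SUV volumes
    of the same exam within one ROI (illustrative sketch only)."""
    mean_a = recon_a[roi_mask].mean()
    mean_b = recon_b[roi_mask].mean()
    return 100.0 * abs(mean_a - mean_b) / mean_a

rng = np.random.default_rng(1)
vol_ctac = rng.gamma(2.0, 1.0, size=(32, 32, 32))   # stand-in for a CTAC recon
vol_mrac = vol_ctac * 1.03                           # hypothetical 3% bias
roi = vol_ctac > 2.0
print(f"{mean_suv_percent_error(vol_ctac, vol_mrac, roi):.1f}% mean-SUV error")
```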
Affiliation(s)
- Abhejit Rajagopal
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
- Yutaka Natsuaki
- Department of Radiation Oncology, University of New Mexico, Albuquerque, NM 87131 USA
- Mahdjoub Hamdi
- Department of Radiology, Washington University in St. Louis, St. Louis, MO 63130 USA
- Hongyu An
- Department of Radiology, Washington University in St. Louis, St. Louis, MO 63130 USA
- John J Sunderland
- Department of Radiology, The University of Iowa, Iowa City, IA 52242 USA
- Richard Laforest
- Department of Radiology, Washington University in St. Louis, St. Louis, MO 63130 USA
- Paul E Kinahan
- Department of Radiology, University of Washington, Seattle, WA 98195 USA
- Peder E Z Larson
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
- Thomas A Hope
- Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158 USA
11. Duan H, Iagaru A. Neuroendocrine Tumor Diagnosis: PET/MR Imaging. PET Clin 2023; 18:259-266. [PMID: 36707370; DOI: 10.1016/j.cpet.2022.11.008]
Abstract
Imaging plays a critical role in the diagnosis and management of neuroendocrine tumors (NETs). The initial workup of the primary tumor, including its characterization and local and distant staging, defines subsequent treatment decisions. Functional imaging using hybrid systems, such as PET combined with computed tomography, has become the gold standard. As NETs mainly arise from the gastrointestinal system and metastasize primarily to the liver, simultaneous PET and MR imaging, with its high soft-tissue contrast, might be a valuable clinical one-stop-shop whole-body imaging tool. This review presents the current status and challenges of PET/MR imaging for the diagnosis of NETs.
Affiliation(s)
- Heying Duan
- Department of Radiology, Division of Nuclear Medicine and Molecular Imaging, Stanford University, 300 Pasteur Drive, H2200, Stanford, CA 94305, USA
- Andrei Iagaru
- Department of Radiology, Division of Nuclear Medicine and Molecular Imaging, Stanford University, 300 Pasteur Drive, H2200, Stanford, CA 94305, USA.
12. Chamberlin JH, Smith C, Schoepf UJ, Nance S, Elojeimy S, O'Doherty J, Baruah D, Burt JR, Varga-Szemes A, Kabakus IM. A deep convolutional neural network ensemble for composite identification of pulmonary nodules and incidental findings on routine PET/CT. Clin Radiol 2023; 78:e368-e376. [PMID: 36863883; DOI: 10.1016/j.crad.2023.01.014]
Abstract
AIM To evaluate primary and secondary pathologies of interest using an artificial intelligence (AI) platform, AI-Rad Companion, on low-dose computed tomography (CT) series from integrated positron-emission tomography (PET)/CT to detect CT findings that might be overlooked. MATERIALS AND METHODS One hundred and eighty-nine sequential patients who had undergone PET/CT were included. Images were evaluated using an ensemble of convolutional neural networks (AI-Rad Companion, Siemens Healthineers, Erlangen, Germany). The primary outcome was detection of pulmonary nodules, for which the accuracy, identity, and intra-rater reliability were calculated. For secondary outcomes (binary detection of coronary artery calcium, aortic ectasia, and vertebral height loss), accuracy and diagnostic performance were calculated. RESULTS The overall per-nodule accuracy for detection of lung nodules was 0.847. The overall sensitivity and specificity for detection of lung nodules were 0.915 and 0.781. The overall per-patient accuracy for AI detection of coronary artery calcium, aortic ectasia, and vertebral height loss was 0.979, 0.966, and 0.840, respectively. The sensitivity and specificity for coronary artery calcium were 0.989 and 0.969. The sensitivity and specificity for aortic ectasia were 0.806 and 1. CONCLUSION The neural network ensemble accurately assessed the number of pulmonary nodules and the presence of coronary artery calcium and aortic ectasia on the low-dose CT series of PET/CT. The neural network was highly specific, but not sensitive, for the diagnosis of vertebral height loss. The use of the AI ensemble can help radiologists and nuclear medicine physicians catch CT findings that might otherwise be overlooked.
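To make the reported per-patient metrics concrete, the sketch below computes accuracy, sensitivity, and specificity for a binary finding (e.g., coronary artery calcium present/absent) by plain counting. The example labels are invented, and the code is independent of the AI-Rad Companion output format.

```python
import numpy as np

def binary_detection_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Accuracy, sensitivity, and specificity from 0/1 ground truth and
    0/1 AI predictions (illustrative counting sketch only)."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
    }

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # hypothetical reference reads
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 0])   # hypothetical AI outputs
print(binary_detection_metrics(y_true, y_pred))
```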
Affiliation(s)
- J H Chamberlin
- Division of Thoracic Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- C Smith
- Division of Thoracic Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- U J Schoepf
- Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- S Nance
- Division of Thoracic Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- S Elojeimy
- Division of Nuclear Medicine, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- J O'Doherty
- Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA; Siemens Medical Solutions, Malvern, PA, USA
- D Baruah
- Division of Thoracic Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- J R Burt
- Division of Thoracic Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- A Varga-Szemes
- Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA
- I M Kabakus
- Division of Thoracic Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA; Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA; Division of Nuclear Medicine, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, SC, USA.
13. Interactive Display of Images in Digital Exhibition Halls under Artificial Intelligence and Mixed Reality Technology. Comput Intell Neurosci 2022; 2022:3688797. [PMID: 36275980; PMCID: PMC9581601; DOI: 10.1155/2022/3688797]
Abstract
The attractiveness of traditional exhibition halls to young people is gradually decreasing, and combining modern digital technology to improve how exhibits are displayed can effectively enhance cultural publicity. This article introduces image interaction and mixed reality (MR) technology to improve the presentation of Shaanxi history and culture in the Shaanxi exhibition hall. The advantages of MR technology for digital exhibition halls are expounded theoretically, and a plan for display areas related to Shaanxi history and culture is designed using artificial intelligence combined with MR technology. In addition, respondents' evaluations of the new exhibition hall were obtained through a questionnaire survey. The results show that 97% of respondents like the history and culture of Shaanxi, but only 13% say they know it well or very well. In addition, 60% of tourists say they are satisfied with the Shaanxi cultural experience, and only 27% are very satisfied. Furthermore, 96% of tourists are willing to experience Shaanxi's history and culture through digital exhibition halls, and 93% are willing to participate in cultural experience activities based on MR technology. The survey results indicate that tourists are satisfied with the new exhibition hall, want distinctive forms of cultural experience to be added, are willing to accept digital exhibition halls incorporating MR technology, and are happy to take part in interactive image displays. This suggests that image-interactive displays based on MR technology are well received in exhibition hall design and are highly feasible. The article provides a reference for the digital upgrading of exhibition halls and the development of the cultural tourism industry.
14. Bouchelouche K, Sathekge MM. Letter from the Editors. Semin Nucl Med 2022; 52:505-507. [PMID: 35906038; DOI: 10.1053/j.semnuclmed.2022.07.004]
15. Bouchelouche K, Sathekge MM. Letter from the Editors. Semin Nucl Med 2022; 52:263-265. [DOI: 10.1053/j.semnuclmed.2022.03.001]
16. Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur J Nucl Med Mol Imaging 2022; 49:3717-3739. [PMID: 35451611; DOI: 10.1007/s00259-022-05805-w]
Abstract
PURPOSE This paper reviews recent applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging. Recent advances in Deep Learning (DL) and GANs have catalysed research into their applications in medical imaging modalities. As a result, several unique GAN topologies have emerged and been assessed in experimental settings over the last two years. METHODS The present work extensively describes GAN architectures and their applications in PET imaging. Relevant publications were identified via approved publication indexing websites and repositories; Web of Science, Scopus, and Google Scholar were the major sources of information. RESULTS The search identified one hundred articles that address PET imaging applications such as attenuation correction, denoising, scatter correction, removal of artefacts, image fusion, high-dose image estimation, super-resolution, segmentation, and cross-modality synthesis. These applications are presented and accompanied by the corresponding research works. CONCLUSION GANs are rapidly being adopted for PET imaging tasks. However, specific limitations must be addressed for them to reach their full potential and gain the medical community's trust in everyday clinical practice.
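For readers unfamiliar with the adversarial setup reviewed here, the sketch below shows one discriminator update and one generator update on toy vectors in PyTorch. Real PET applications use convolutional 2-D/3-D networks and task-specific losses, so treat this only as a minimal outline of GAN training, not any of the reviewed architectures.

```python
import torch
from torch import nn

# Toy fully connected generator/discriminator standing in for image networks
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 256))
D = nn.Sequential(nn.Linear(256, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(16, 256)   # stands in for real image patches
z = torch.randn(16, 64)       # latent noise

# Discriminator step: push real towards 1 and generated samples towards 0
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to fool the discriminator (non-saturating loss)
loss_g = bce(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(float(loss_d), float(loss_g))
```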
17. Lai YC, Wu KC, Tseng NC, Chen YJ, Chang CJ, Yen KY, Kao CH. Differentiation Between Malignant and Benign Pulmonary Nodules by Using Automated Three-Dimensional High-Resolution Representation Learning With Fluorodeoxyglucose Positron Emission Tomography-Computed Tomography. Front Med (Lausanne) 2022; 9:773041. [PMID: 35372415; PMCID: PMC8971840; DOI: 10.3389/fmed.2022.773041]
Abstract
Background The investigation of incidental pulmonary nodules has rapidly become one of the main indications for 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET), currently combined with computed tomography (PET-CT). There is also a growing trend to use artificial intelligence for the optimization and interpretation of PET-CT images. Therefore, we proposed a novel deep learning model that aids in the automatic differentiation between malignant and benign pulmonary nodules on FDG PET-CT. Methods In total, 112 participants with pulmonary nodules who underwent FDG PET-CT before surgery were enrolled retrospectively. We designed a novel deep learning three-dimensional (3D) high-resolution representation learning (HRRL) model for the automated classification of pulmonary nodules based on FDG PET-CT images without manual annotation by experts. For the images to be localized more precisely, we defined the territories of the lungs through a novel artificial intelligence-driven image-processing algorithm, instead of the conventional segmentation method, without the aid of an expert; this algorithm is based on deep HRRL, which is used to perform high-resolution classification. In addition, the 2D model was converted to a 3D model. Results All pulmonary lesions were confirmed through pathological studies (79 malignant and 33 benign). We evaluated the model's diagnostic performance in the differentiation of malignant and benign nodules. The area under the receiver operating characteristic curve (AUC) of the deep learning model was used to indicate classification performance in an evaluation using fivefold cross-validation. The nodule-based prediction performance of the model had an AUC, sensitivity, specificity, and accuracy of 78.1, 89.9, 54.5, and 79.4%, respectively. Conclusion Our results suggest that a deep learning algorithm using HRRL without manual annotation from experts might aid in the classification of pulmonary nodules discovered on clinical FDG PET-CT images.
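The evaluation protocol above (AUC under fivefold cross-validation) can be sketched with scikit-learn as follows, using synthetic tabular features and a logistic-regression classifier as a stand-in for the 3D HRRL network; it mirrors the evaluation scheme only, not the model itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Synthetic "cohort" of 112 cases with a benign/malignant imbalance,
# standing in for features learned from PET-CT volumes.
X, y = make_classification(n_samples=112, n_features=20, weights=[0.3, 0.7],
                           random_state=0)

aucs = []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"AUC: {np.mean(aucs):.3f} ± {np.std(aucs):.3f}")
```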
Affiliation(s)
- Yung-Chi Lai
- Department of Nuclear Medicine, PET Center, China Medical University Hospital, Taichung, Taiwan
- Kuo-Chen Wu
- Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan
- Center of Augmented Intelligence in Healthcare, China Medical University Hospital, Taichung, Taiwan
- Neng-Chuan Tseng
- Division of Nuclear Medicine, Tungs’ Taichung MetroHarbor Hospital, Taichung, Taiwan
- Yi-Jin Chen
- Center of Augmented Intelligence in Healthcare, China Medical University Hospital, Taichung, Taiwan
- Chao-Jen Chang
- Center of Augmented Intelligence in Healthcare, China Medical University Hospital, Taichung, Taiwan
- Kuo-Yang Yen
- Department of Nuclear Medicine, PET Center, China Medical University Hospital, Taichung, Taiwan
- Department of Biomedical Imaging and Radiological Science, School of Medicine, College of Medicine, China Medical University, Taichung, Taiwan
- Chia-Hung Kao
- Department of Nuclear Medicine, PET Center, China Medical University Hospital, Taichung, Taiwan
- Center of Augmented Intelligence in Healthcare, China Medical University Hospital, Taichung, Taiwan
- Graduate Institute of Biomedical Sciences, College of Medicine, China Medical University, Taichung, Taiwan
- Department of Bioinformatics and Medical Engineering, Asia University, Taichung, Taiwan
- Correspondence: Chia-Hung Kao
18. Huo W, Zheng G, Yan J, Le S, Han L. Interacting with medical artificial intelligence: Integrating self-responsibility attribution, human–computer trust, and personality. Comput Human Behav 2022. [DOI: 10.1016/j.chb.2022.107253]
19. Matsubara K, Ibaraki M, Nemoto M, Watabe H, Kimura Y. A review on AI in PET imaging. Ann Nucl Med 2022; 36:133-143. [PMID: 35029818; DOI: 10.1007/s12149-021-01710-8]
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. Specifically, deep learning techniques such as convolutional neural network (CNN) and generative adversarial network (GAN) have been extensively used for medical image generation. Image generation with deep learning has been investigated in studies using positron emission tomography (PET). This article reviews studies that applied deep learning techniques for image generation on PET. We categorized the studies for PET image generation with deep learning into three themes as follows: (1) recovering full PET data from noisy data by denoising with deep learning, (2) PET image reconstruction and attenuation correction with deep learning and (3) PET image translation and synthesis with deep learning. We introduce recent studies based on these three categories. Finally, we mention the limitations of applying deep learning techniques to PET image generation and future prospects for PET image generation.
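As a minimal illustration of theme (1), the sketch below trains a tiny residual CNN in PyTorch to map a noisy PET-like slice back to a clean one. The synthetic tensors, network size, and loss are placeholders chosen for brevity; published denoising models operate on 3-D volumes and are far larger.

```python
import torch
from torch import nn

# Small residual denoiser: the network predicts the noise, which is then
# subtracted from the noisy input (illustrative sketch only).
denoiser = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

clean = torch.rand(8, 1, 64, 64)                 # stands in for full-count PET
noisy = clean + 0.2 * torch.randn_like(clean)    # stands in for low-count PET

for step in range(5):
    denoised = noisy - denoiser(noisy)           # residual noise prediction
    loss = nn.functional.mse_loss(denoised, clean)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```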
Affiliation(s)
- Keisuke Matsubara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Mitsutaka Nemoto
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
- Hiroshi Watabe
- Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Miyagi, Japan
- Yuichi Kimura
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan.
20. Yousefirizi F, Decazes P, Amyar A, Ruan S, Saboury B, Rahmim A. AI-Based Detection, Classification and Prediction/Prognosis in Medical Imaging: Towards Radiophenomics. PET Clin 2021; 17:183-212. [PMID: 34809866; DOI: 10.1016/j.cpet.2021.09.010]
Abstract
Artificial intelligence (AI) techniques have significant potential to enable effective, robust, and automated image phenotyping, including the identification of subtle patterns. AI-based detection searches the image space to find regions of interest based on patterns and features. There is a spectrum of tumor histologies from benign to malignant that can be identified by AI-based classification approaches using image features. The extraction of minable information from images gives rise to the field of "radiomics" and can be explored via explicit (handcrafted/engineered) and deep radiomics frameworks. Radiomics analysis has the potential to be used as a noninvasive technique for the accurate characterization of tumors to improve diagnosis and treatment monitoring. This work reviews AI-based techniques, with a special focus on oncological PET and PET/CT imaging, for different detection, classification, and prediction/prognosis tasks. We also discuss the efforts needed to enable the translation of AI techniques to routine clinical workflows, as well as potential improvements and complementary techniques such as the use of natural language processing on electronic health records and neuro-symbolic AI techniques.
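The sketch below illustrates the "explicit (handcrafted)" end of the radiomics spectrum with a few first-order features computed from a segmented PET lesion using NumPy. The feature names follow common usage, but the simplified definitions and the random lesion are assumptions for illustration, not a standard-compliant extractor.

```python
import numpy as np

def first_order_features(suv_volume: np.ndarray, mask: np.ndarray) -> dict:
    """A handful of first-order radiomic features from a masked PET lesion
    (simplified, illustrative definitions only)."""
    vals = suv_volume[mask]
    hist, _ = np.histogram(vals, bins=64)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "SUVmax": float(vals.max()),
        "SUVmean": float(vals.mean()),
        "volume_voxels": int(mask.sum()),
        "skewness": float(((vals - vals.mean()) ** 3).mean() / vals.std() ** 3),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

rng = np.random.default_rng(2)
vol = rng.gamma(2.0, 1.5, size=(48, 48, 48))   # stand-in SUV volume
lesion = vol > 6.0                             # stand-in segmentation mask
print(first_order_features(vol, lesion))
```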
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada.
- Pierre Decazes
- Department of Nuclear Medicine, Henri Becquerel Centre, Rue d'Amiens - CS 11516 - 76038 Rouen Cedex 1, France; QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Amine Amyar
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France; General Electric Healthcare, Buc, France
- Su Ruan
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada
21.
Abstract
A decade of PET/MRI clinical imaging has passed, and many of the pitfalls are similar to those in earlier studies. However, techniques to overcome them have emerged and continue to develop. Although clinically significant lung nodules are demonstrable, smaller nodules may be detected using ultrashort/zero echo-time (TE) lung MRI. Fast-reconstruction ultrashort-TE sequences have also been used to achieve high-resolution lung MRI even with free breathing. The introduction and improvement of time-of-flight scanners and increases in the axial length of the PET detector arrays have more than doubled the sensitivity of the PET part of the system. MRI for attenuation correction has presented many potential pitfalls, including misclassification of tissue classes based on the MRI information used for attenuation correction. Although the use of short echo times has helped to address these pitfalls, one of the most exciting developments has been the use of deep learning algorithms and computational neural networks to rapidly provide soft tissue, fat, bone and air information for the attenuation correction as a supplement to the attenuation correction information from fat-water imaging. Challenges with motion correction, particularly respiratory and cardiac motion, remain but are being addressed with respiratory monitors and by using the PET data themselves. To address truncation artefacts, the system manufacturers have developed methods to extend the MR field of view for the purpose of the attenuation and scatter corrections. General pitfalls, such as stitching of body sections for individual studies, optimum delivery of images for viewing and reporting, and the resource implications of the sheer volume of data generated, remain. Methods to overcome these pitfalls serve as a strong foundation for the future of PET/MRI. Advances in the underlying technology, with significant evolution in hardware and software, and the exciting developments in the use of deep learning algorithms and computational neural networks will drive the next decade of PET/MRI imaging.
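As a minimal illustration of tissue-class-based MR attenuation correction, and of why tissue misclassification matters, the sketch below maps a four-class tissue map (air, lung, soft tissue, bone) to a 511 keV linear-attenuation (mu) map. The coefficients are approximate literature values used only for illustration; vendor pipelines and deep-learning approaches typically produce continuous mu-maps rather than discrete classes.

```python
import numpy as np

# Approximate 511 keV linear attenuation coefficients (cm^-1); illustrative
# values only, not taken from any specific vendor implementation.
MU_511_KEV = {0: 0.0,      # air
              1: 0.018,    # lung (approximate)
              2: 0.096,    # soft tissue (approximate)
              3: 0.13}     # bone (approximate)

def tissue_classes_to_mumap(class_map: np.ndarray) -> np.ndarray:
    """Convert an MR-derived tissue-class volume into a mu-map for PET AC."""
    mumap = np.zeros(class_map.shape, dtype=np.float32)
    for label, mu in MU_511_KEV.items():
        mumap[class_map == label] = mu
    return mumap

classes = np.random.default_rng(3).integers(0, 4, size=(16, 16, 16))
print(tissue_classes_to_mumap(classes).mean())
```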
Affiliation(s)
- Asim Afaq
- University of Iowa Carver College of Medicine, Iowa City; Institute of Nuclear Medicine, UCL/ UCLH London, UK
- Simon Wan
- Institute of Nuclear Medicine, UCL/ UCLH London, UK
- Thomas A Hope
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA
- Patrick Veit Haibach
- Toronto Joint Dept. Medical Imaging, University Health Network, Sinai Health System, Women's College University of Toronto, Canada
22. Slart RHJA, Williams MC, Juarez-Orozco LE, Rischpler C, Dweck MR, Glaudemans AWJM, Gimelli A, Georgoulias P, Gheysens O, Gaemperli O, Habib G, Hustinx R, Cosyns B, Verberne HJ, Hyafil F, Erba PA, Lubberink M, Slomka P, Išgum I, Visvikis D, Kolossváry M, Saraste A. Position paper of the EACVI and EANM on artificial intelligence applications in multimodality cardiovascular imaging using SPECT/CT, PET/CT, and cardiac CT. Eur J Nucl Med Mol Imaging 2021; 48:1399-1413. [PMID: 33864509; PMCID: PMC8113178; DOI: 10.1007/s00259-021-05341-z]
Abstract
In daily clinical practice, clinicians integrate available data to ascertain the diagnostic and prognostic probability of a disease or clinical outcome for their patients. For patients with suspected or known cardiovascular disease, several anatomical and functional imaging techniques are commonly performed to aid this endeavor, including coronary computed tomography angiography (CCTA) and nuclear cardiology imaging. Continuous improvement in positron emission tomography (PET), single-photon emission computed tomography (SPECT), and CT hardware and software has resulted in improved diagnostic performance and wide implementation of these imaging techniques in daily clinical practice. However, the human ability to interpret, quantify, and integrate these data sets is limited. The identification of novel markers and the application of machine learning (ML) algorithms, including deep learning (DL), to cardiovascular imaging techniques will further improve diagnosis and prognostication for patients with cardiovascular diseases. The goal of this position paper of the European Association of Nuclear Medicine (EANM) and the European Association of Cardiovascular Imaging (EACVI) is to provide an overview of the general concepts behind modern machine learning-based artificial intelligence, highlight currently preferred methods, practices, and computational models, and propose new strategies to support the clinical application of ML in the field of cardiovascular imaging using nuclear cardiology (hybrid) and CT techniques.
Affiliation(s)
- Riemer H J A Slart
- Medical Imaging Centre, Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Hanzeplein 1, PO 9700 RB, Groningen, The Netherlands.
- Faculty of Science and Technology Biomedical, Photonic Imaging, University of Twente, Enschede, The Netherlands.
- Michelle C Williams
- British Heart Foundation Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Edinburgh Imaging facility QMRI, Edinburgh, UK
- Luis Eduardo Juarez-Orozco
- Department of Cardiology, Division Heart & Lungs, University Medical Center Utrecht, Utrecht University, Utrecht, the Netherlands
- University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Christoph Rischpler
- Department of Nuclear Medicine, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Marc R Dweck
- British Heart Foundation Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, UK
- Edinburgh Imaging facility QMRI, Edinburgh, UK
- Andor W J M Glaudemans
- Medical Imaging Centre, Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Hanzeplein 1, PO 9700 RB, Groningen, The Netherlands
- Panagiotis Georgoulias
- Department of Nuclear Medicine, Faculty of Medicine, University of Thessaly, University Hospital of Larissa, Larissa, Greece
- Olivier Gheysens
- Department of Nuclear Medicine, Cliniques Universitaires Saint-Luc and Institute of Clinical and Experimental Research (IREC), Université catholique de Louvain (UCLouvain), Brussels, Belgium
- Gilbert Habib
- APHM, Cardiology Department, La Timone Hospital, Marseille, France
- IRD, APHM, MEPHI, IHU-Méditerranée Infection, Aix Marseille Université, Marseille, France
- Roland Hustinx
- Division of Nuclear Medicine and Oncological Imaging, Department of Medical Physics, ULiège, Liège, Belgium
- Bernard Cosyns
- Department of Cardiology, Centrum voor Hart en Vaatziekten, Universitair Ziekenhuis Brussel, 101 Laarbeeklaan, 1090, Brussels, Belgium
- Hein J Verberne
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location AMC, University of Amsterdam, Amsterdam, The Netherlands
- Fabien Hyafil
- Department of Nuclear Medicine, DMU IMAGINA, Georges-Pompidou European Hospital, Assistance Publique - Hôpitaux de Paris, F-75015, Paris, France
- University of Paris, PARCC, INSERM, F-75006, Paris, France
- Paola A Erba
- Medical Imaging Centre, Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Hanzeplein 1, PO 9700 RB, Groningen, The Netherlands
- Department of Nuclear Medicine (P.A.E.), University of Pisa, Pisa, Italy
- Department of Translational Research and New Technology in Medicine (P.A.E.), University of Pisa, Pisa, Italy
- Mark Lubberink
- Department of Surgical Sciences/Radiology, Uppsala University, Uppsala, Sweden
- Medical Physics, Uppsala University Hospital, Uppsala, Sweden
- Piotr Slomka
- Department of Imaging, Medicine, and Biomedical Sciences, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Ivana Išgum
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location AMC, University of Amsterdam, Amsterdam, The Netherlands
- Department of Biomedical Engineering and Physics, Amsterdam UMC - location AMC, University of Amsterdam, 1105, Amsterdam, AZ, Netherlands
- Márton Kolossváry
- MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68 Városmajor Street, Budapest, Hungary
- Antti Saraste
- Turku PET Centre, Turku University Hospital, University of Turku, Turku, Finland
- Heart Center, Turku University Hospital, Turku, Finland
23. Bouchelouche K, Sathekge MM. Letter from the Editors. Semin Nucl Med 2021; 51:99-101. [PMID: 33509375; DOI: 10.1053/j.semnuclmed.2020.11.001]