1. Chauvie S, Mazzoni LN, O’Doherty J. A Review on the Use of Imaging Biomarkers in Oncology Clinical Trials: Quality Assurance Strategies for Technical Validation. Tomography 2023;9:1876-1902. PMID: 37888741. PMCID: PMC10610870. DOI: 10.3390/tomography9050149.
Abstract
Imaging biomarkers (IBs) proposed in the medical literature exploit images quantitatively, going beyond visual assessment by an imaging physician. These IBs can be used in the diagnosis, prognosis, and response assessment of several pathologies and are very often used in patient management pathways. Accordingly, IBs to be used in clinical practice and clinical trials must be precise, accurate, and reproducible. Due to limitations in imaging technology, an error is associated with their value when considering the entire imaging chain, from data acquisition to data reconstruction and subsequent analysis. From this point of view, the use of IBs in clinical trials requires a broadening of the concept of quality assurance, and this can be a challenge for the responsible medical physics experts (MPEs). Within this manuscript, we describe the concept of an IB, examine some examples of IBs currently employed in clinical practice and clinical trials, and analyze the procedures that should be carried out to achieve better accuracy and reproducibility in their use. We anticipate that this narrative review, written by the members of the imaging sub-group of the EFOMP working group on "the role of the MPEs in clinical trials", can serve as valid reference material for MPEs approaching the subject.
Affiliations
- Stephane Chauvie
- Medical Physics Division, Santa Croce e Carle Hospital, 12100 Cuneo, Italy;
- Jim O’Doherty
- Siemens Medical Solutions, Malvern, PA 19355, USA;
- Department of Radiology & Radiological Sciences, Medical University of South Carolina, Charleston, SC 20455, USA
- Radiography & Diagnostic Imaging, University College Dublin, D04 C7X2 Dublin, Ireland
2. Reader AJ, Pan B. AI for PET image reconstruction. Br J Radiol 2023;96:20230292. PMID: 37486607. PMCID: PMC10546435. DOI: 10.1259/bjr.20230292.
Abstract
Image reconstruction for positron emission tomography (PET) has been developed over many decades, with advances coming from improved modelling of the data statistics and improved modelling of the imaging physics. However, high noise and limited spatial resolution have remained issues in PET imaging, and state-of-the-art PET reconstruction has started to exploit other medical imaging modalities (such as MRI) to assist in noise reduction and enhancement of PET's spatial resolution. Nonetheless, there is an ongoing drive towards not only improving image quality, but also reducing the injected radiation dose and reducing scanning times. While the arrival of new PET scanners (such as total-body PET) is helping, there is always a need to improve reconstructed image quality due to the time- and count-limited imaging conditions. Artificial intelligence (AI) methods are now at the frontier of research for PET image reconstruction. While AI can learn the imaging physics as well as the noise in the data (when given sufficient examples), one of the most common uses of AI arises from exploiting databases of high-quality reference examples, to provide advanced noise compensation and resolution recovery. There are three main AI reconstruction approaches: (i) direct data-driven AI methods, which rely on supervised learning from reference data; (ii) iterative (unrolled) methods, which combine our physics and statistical models with AI learning from data; and (iii) methods which exploit AI with our known models, but crucially can offer benefits even in the absence of any example training data whatsoever. This article reviews these methods, considering opportunities and challenges of AI for PET reconstruction.
Affiliations
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
- Bolin Pan
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
3. Farag A, Huang J, Kohan A, Mirshahvalad SA, Basso Dias A, Fenchel M, Metser U, Veit-Haibach P. Evaluation of MR anatomically-guided PET reconstruction using a convolutional neural network in PSMA patients. Phys Med Biol 2023;68:185014. PMID: 37625418. DOI: 10.1088/1361-6560/acf439.
Abstract
Background. Recently, approaches have utilized the superior anatomical information provided by magnetic resonance imaging (MRI) to guide the reconstruction of positron emission tomography (PET). One such approach is the Bowsher prior, which has recently been accelerated with a convolutional neural network (CNN) to produce MR-guided PET reconstructions in the image domain in routine clinical imaging. Two differently trained Bowsher-CNN methods (B-CNN0 and B-CNN) have been trained and tested on brain PET/MR images with non-PSMA tracers, but have not yet been evaluated in other anatomical regions.
Methods. A NEMA phantom with five of its six spheres filled with the same calibrated concentration of 18F-DCFPyL-PSMA, and thirty-two patients (mean age 64 ± 7 years) with biopsy-confirmed prostate cancer (PCa), were used in this study. Reconstruction with each of the two available Bowsher-CNN methods was performed on the conventional MR-based attenuation correction (MRAC) and T1-MR images in the image domain. Detectable volume of the spheres and tumors, relative contrast recovery (CR), and background variation (BV) were measured for the MRAC and Bowsher-CNN images, and qualitative assessment was conducted by two experienced readers ranking image sharpness and quality.
Results. For the phantom study, the B-CNN produced 12.7% better CR than conventional reconstruction. Detectability of the small sphere volumes (<1.8 ml) improved from MRAC to B-CNN by nearly 13%, while measured activity was 8% higher than the ground truth. The signal-to-noise ratio, CR, and BV were significantly improved (p < 0.05) in B-CNN images of the tumor. The qualitative analysis determined that tumor sharpness was excellent in 76% of the PET images reconstructed with the B-CNN method, compared to conventional reconstruction.
Conclusions. Applying the MR-guided B-CNN in clinical prostate PET/MR imaging improves some quantitative as well as qualitative imaging measures. The measured improvements in the phantom also translate clearly into clinical application.
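For context, the contrast recovery and background variation metrics quoted above are typically computed with NEMA-style definitions. A minimal sketch (illustrative only; the exact ROI placement and definitions used in the paper may differ):

```python
import numpy as np

def contrast_recovery(mean_hot, mean_bkg, true_ratio):
    """Percent contrast recovery for a hot sphere (NEMA-style):
    the fraction of the true hot-to-background contrast that survives
    reconstruction, expressed in percent."""
    return 100.0 * (mean_hot / mean_bkg - 1.0) / (true_ratio - 1.0)

def background_variation(bkg_roi_means):
    """Percent background variation (NEMA-style): standard deviation of
    the background ROI means relative to their average."""
    m = np.asarray(bkg_roi_means, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()
```

For example, a sphere measured at 2.5 against a background of 1.0, with a true activity ratio of 4:1, has a contrast recovery of 50%.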
Affiliations
- Adam Farag, Jin Huang, Andres Kohan, Seyed Ali Mirshahvalad, Adriano Basso Dias, Ur Metser, Patrick Veit-Haibach
- Joint Department of Medical Imaging, University Health Network, Mount Sinai Hospital and Women's College Hospital, University of Toronto, 610 University Ave, Toronto, ON, M5G 2M9, Canada
4. Brown R, Kolbitsch C, Delplancke C, Papoutsellis E, Mayer J, Ovtchinnikov E, Pasca E, Neji R, da Costa-Luis C, Gillman AG, Ehrhardt MJ, McClelland JR, Eiben B, Thielemans K. Motion estimation and correction for simultaneous PET/MR using SIRF and CIL. Philos Trans A Math Phys Eng Sci 2021;379:20200208. PMID: 34218674. DOI: 10.1098/rsta.2020.0208.
Abstract
SIRF is a powerful PET/MR image reconstruction research tool for processing data and developing new algorithms. In this work, new developments to SIRF are presented, with a focus on motion estimation and correction. SIRF's recent inclusion of the adjoint of the resampling operator allows gradient propagation through resampling, enabling the motion-compensated image reconstruction (MCIR) technique. Another enhancement enables registration and resampling of complex images, as needed for MRI. Furthermore, SIRF's integration with the optimization library CIL enables the use of novel algorithms. Finally, SPM is now supported for registration, in addition to NiftyReg. Results of MR and PET MCIR reconstructions, using FISTA and PDHG respectively, are presented. These demonstrate the advantages of incorporating motion correction and of variational and structural priors. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 2'.
Affiliations
- Richard Brown
- Institute of Nuclear Medicine, University College London, London, UK
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Christoph Kolbitsch
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Physikalisch-Technische Bundesanstalt, Braunschweig and Berlin, Germany
- Evangelos Papoutsellis
- Scientific Computing Department, STFC, UKRI, Rutherford Appleton Laboratory, Harwell Campus, Didcot, UK
- Henry Royce Institute, Department of Materials, The University of Manchester, Manchester, UK
- Johannes Mayer
- Physikalisch-Technische Bundesanstalt, Braunschweig and Berlin, Germany
- Evgueni Ovtchinnikov
- Scientific Computing Department, STFC, UKRI, Rutherford Appleton Laboratory, Harwell Campus, Didcot, UK
- Edoardo Pasca
- Scientific Computing Department, STFC, UKRI, Rutherford Appleton Laboratory, Harwell Campus, Didcot, UK
- Radhouene Neji
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- MR Research Collaborations, Siemens Healthcare, Frimley, UK
- Casper da Costa-Luis
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Ashley G Gillman
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Townsville, Australia
- Matthias J Ehrhardt
- Department of Mathematical Sciences, University of Bath, Bath, UK
- Institute for Mathematical Innovation, University of Bath, Bath, UK
- Jamie R McClelland
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Bjoern Eiben
- Centre for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Kris Thielemans
- Institute of Nuclear Medicine, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
5. Arridge SR, Ehrhardt MJ, Thielemans K. (An overview of) Synergistic reconstruction for multimodality/multichannel imaging methods. Philos Trans A Math Phys Eng Sci 2021;379:20200205. PMID: 33966461. DOI: 10.1098/rsta.2020.0205.
Abstract
Imaging is omnipresent in modern society, with imaging devices based on a zoo of physical principles probing a specimen across different wavelengths, energies and time. Recent years have seen a change in the imaging landscape, with more and more imaging devices combining that which previously was used separately. Motivated by these hardware developments, an ever-increasing set of mathematical ideas is appearing regarding how data from different imaging modalities or channels can be synergistically combined in the image reconstruction process, exploiting structural and/or functional correlations between the multiple images. Here we review these developments, give pointers to important challenges, and provide an outlook on how the field may develop in the forthcoming years. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
Affiliations
- Simon R Arridge
- Department of Computer Science, University College London, London, UK
- Matthias J Ehrhardt
- Department of Mathematical Sciences, University of Bath, Bath, UK
- Institute for Mathematical Innovation, University of Bath, Bath, UK
- Kris Thielemans
- Institute of Nuclear Medicine, University College London, London, UK
6. Schramm G, Rigie D, Vahle T, Rezaei A, Van Laere K, Shepherd T, Nuyts J, Boada F. Approximating anatomically-guided PET reconstruction in image space using a convolutional neural network. Neuroimage 2021;224:117399. PMID: 32971267. PMCID: PMC7812485. DOI: 10.1016/j.neuroimage.2020.117399.
Abstract
In the last two decades, it has been shown that anatomically-guided PET reconstruction can lead to improved bias-noise characteristics in brain PET imaging. However, despite promising results in simulations and first studies, anatomically-guided PET reconstructions are not yet available for routine clinical use, for several reasons. In light of this, we investigate whether the improvements of anatomically-guided PET reconstruction methods can be achieved entirely in the image domain with a convolutional neural network (CNN). An entirely image-based CNN post-reconstruction approach has the advantage that no access to PET raw data is needed; moreover, the prediction times of trained CNNs are extremely fast on state-of-the-art GPUs, which will substantially facilitate the evaluation, fine-tuning, and application of anatomically-guided PET reconstruction in real-world clinical settings. In this work, we demonstrate that anatomically-guided PET reconstruction using the asymmetric Bowsher prior can be well approximated by a purely shift-invariant convolutional neural network in image space, allowing the generation of anatomically-guided PET images in almost real time. We show that applying dedicated data augmentation techniques in the training phase, in which 16 [18F]FDG and 10 [18F]PE2I data sets were used, leads to a CNN that is robust to the PET tracer used, the noise level of the input PET images, and the input MRI contrast. A detailed analysis of our CNN in 36 [18F]FDG, 18 [18F]PE2I, and 7 [18F]FET test data sets demonstrates that the image quality of our trained CNN is very close to that of the target reconstructions in terms of regional mean recovery and regional structural similarity.
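The Bowsher prior approximated above smooths each PET voxel only over the neighbours that look most similar in the anatomical (MR) image, so edges present in the anatomy are preserved. A minimal 2-D sketch of that neighbour-selection step (illustrative only; the function name, neighbourhood and parameters are invented, and the paper's CNN approximates the full iterative reconstruction, not this step alone):

```python
import numpy as np

def bowsher_weights(anat, n_keep=3):
    """Binary Bowsher weights on a 2-D image: for each pixel, keep the
    n_keep 8-neighbours whose anatomical intensity is closest to the
    centre pixel; all other neighbour weights are set to zero."""
    H, W = anat.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    w = np.zeros((H, W, len(offsets)))
    for i in range(H):
        for j in range(W):
            diffs = []
            for k, (di, dj) in enumerate(offsets):
                ni, nj = i + di, j + dj
                if 0 <= ni < H and 0 <= nj < W:
                    diffs.append((abs(anat[ni, nj] - anat[i, j]), k))
            diffs.sort()  # most anatomically similar neighbours first
            for _, k in diffs[:n_keep]:
                w[i, j, k] = 1.0
    return w
```

These weights would then gate a smoothing penalty inside an iterative (e.g. MAP-EM) reconstruction, so that smoothing never crosses anatomical boundaries.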
Affiliations
- Georg Schramm
- Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, Leuven, Belgium
- David Rigie
- Center for Advanced Imaging Innovation and Research (CAI2R) and Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, NY, USA
- Ahmadreza Rezaei
- Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, Leuven, Belgium
- Koen Van Laere
- Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, Leuven, Belgium
- Timothy Shepherd
- Department of Neuroradiology, NYU Langone Health, Department of Radiology, New York University School of Medicine, New York, NY, USA
- Johan Nuyts
- Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, Leuven, Belgium
- Fernando Boada
- Center for Advanced Imaging Innovation and Research (CAI2R) and Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, NY, USA
7. Arabi H, Zaidi H. Truncation compensation and metallic dental implant artefact reduction in PET/MRI attenuation correction using deep learning-based object completion. Phys Med Biol 2020;65:195002. DOI: 10.1088/1361-6560/abb02c.
8. Mehranian A, McGinnity CJ, Neji R, Prieto C, Hammers A, De Vita E, Reader AJ. Motion-corrected and high-resolution anatomically assisted (MOCHA) reconstruction of arterial spin labeling MRI. Magn Reson Med 2020;84:1306-1320. PMID: 32125015. PMCID: PMC8614125. DOI: 10.1002/mrm.28205.
Abstract
Purpose: A model-based reconstruction framework is proposed for motion-corrected and high-resolution anatomically assisted (MOCHA) reconstruction of arterial spin labeling (ASL) data. In this framework, all low-resolution ASL control-label pairs are used to reconstruct a single high-resolution cerebral blood flow (CBF) map, corrected for rigid motion, point-spread-function blurring and partial volume effect.
Methods: Six volunteers were recruited for CBF imaging using pseudo-continuous ASL labeling, two-shot 3D gradient and spin-echo sequences, and high-resolution T1-weighted MRI. For 2 volunteers, high-resolution scans with double and triple resolution in the partition direction were additionally collected. Simulations were designed for evaluations against a high-resolution ground-truth CBF map, including a simulated hyperperfused lesion and hyperperfusion/hypoperfusion abnormalities. The MOCHA technique was compared with standard reconstruction and a 3D linear regression partial-volume effect correction method, and was further evaluated for acquisitions with reduced control-label pairs and k-space undersampling.
Results: The MOCHA reconstructions of low-resolution ASL data showed enhanced image quality, particularly in the partition direction. In simulations, both MOCHA and 3D linear regression provided more accurate CBF maps than the standard reconstruction; however, MOCHA resulted in the lowest errors and well delineated the abnormalities. The MOCHA reconstruction of standard-resolution in vivo data showed good agreement with higher-resolution scans requiring 4-times and 9-times longer acquisitions. The MOCHA reconstruction was found to be robust for 4-times-accelerated ASL acquisitions, achieved by reduced control-label pairs or k-space undersampling.
Conclusion: The MOCHA reconstruction reduces partial-volume effect by direct reconstruction of CBF maps in the high-resolution space of the corresponding anatomical image, incorporating motion correction and point-spread-function modeling. Following further evaluation, MOCHA should promote the clinical application of ASL.
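The core of MOCHA is a forward model that maps one high-resolution map through motion, point-spread-function blurring and low-resolution sampling to each acquired volume, followed by a model-based inversion. A deliberately simplified 1-D sketch of that idea (all sizes, operators and parameters here are invented for illustration; the actual method uses the full ASL signal model and an iterative reconstruction):

```python
import numpy as np

rng = np.random.default_rng(1)
n_hi, factor = 64, 4
n_lo = n_hi // factor

def shift_matrix(n, s):
    """Circular integer shift: a toy stand-in for rigid motion."""
    return np.roll(np.eye(n), s, axis=1)

def blur_matrix(n, sigma=0.8):
    """Gaussian point-spread function as a circulant matrix."""
    i = np.arange(n)
    d = np.abs(i[:, None] - i[None, :])
    d = np.minimum(d, n - d)
    B = np.exp(-d ** 2 / (2 * sigma ** 2))
    return B / B.sum(axis=1, keepdims=True)

# Low-resolution sampling: average blocks of `factor` voxels
D = np.kron(np.eye(n_lo), np.ones((1, factor)) / factor)
B = blur_matrix(n_hi)

# Four acquisitions, each with a different (known) sub-voxel shift
A = np.vstack([D @ B @ shift_matrix(n_hi, s) for s in (0, 1, 2, 3)])

x_true = np.exp(-(np.arange(n_hi) - 32.0) ** 2 / 60.0)  # smooth "CBF" profile
y = A @ x_true + rng.normal(0.0, 1e-3, A.shape[0])       # noisy measurements

# Model-based inversion: truncated least squares stands in for the
# regularised iterative reconstruction
x_hat, *_ = np.linalg.lstsq(A, y, rcond=1e-2)
```

Because the shifted acquisitions sample complementary information, the combined system recovers detail that no single low-resolution volume contains.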
Affiliations
- Abolfazl Mehranian
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Colm J. McGinnity
- School of Biomedical Engineering and Imaging Sciences, King’s College London, and King’s College London & Guy’s and St. Thomas’ PET Centre, St. Thomas’ Hospital, London, United Kingdom
- Radhouene Neji
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- MR Research Collaborations, Siemens Healthcare, Frimley, United Kingdom
- Claudia Prieto
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Alexander Hammers
- School of Biomedical Engineering and Imaging Sciences, King’s College London, and King’s College London & Guy’s and St. Thomas’ PET Centre, St. Thomas’ Hospital, London, United Kingdom
- Enrico De Vita
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Andrew J. Reader
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
9. Mehranian A, Belzunce MA, McGinnity CJ, Bustin A, Prieto C, Hammers A, Reader AJ. Multi-modal synergistic PET and MR reconstruction using mutually weighted quadratic priors. Magn Reson Med 2019;81:2120-2134. PMID: 30325053. PMCID: PMC6563465. DOI: 10.1002/mrm.27521.
Abstract
Purpose: To propose a framework for synergistic reconstruction of PET-MR and multi-contrast MR data to improve the image quality obtained from noisy PET data and from undersampled MR data.
Theory and Methods: Weighted quadratic priors were devised to preserve common boundaries between PET-MR images while reducing noise, PET Gibbs ringing, and MR undersampling artifacts. These priors are iteratively reweighted using normalized multi-modal Gaussian similarity kernels. Synergistic PET-MR reconstructions were built on the PET maximum a posteriori expectation maximization algorithm and the MR regularized sensitivity encoding method. The proposed approach was compared to conventional methods, total variation, and prior-image weighted quadratic regularization methods. Comparisons were performed on a simulated [18F]fluorodeoxyglucose-PET and T1/T2-weighted MR brain phantom, 2 in vivo T1/T2-weighted MR brain datasets, and an in vivo [18F]fluorodeoxyglucose-PET and fluid-attenuated inversion recovery/T1-weighted MR brain dataset.
Results: Simulations showed that synergistic reconstructions achieve the lowest quantification errors for all image modalities compared to conventional, total variation, and weighted quadratic methods. Whereas total variation regularization preserved modality-unique features, it failed to recover PET details and was not able to reduce MR artifacts compared to our proposed method. For in vivo MR data, our method maintained similar image quality for 3× and 14× accelerated data. Reconstruction of the PET-MR dataset also demonstrated improved performance of our method compared to the conventional independent methods in terms of reduced Gibbs and undersampling artifacts.
Conclusion: The proposed methodology offers a robust multi-modal synergistic image reconstruction framework that can be readily built on existing established algorithms.
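The weighted quadratic prior at the heart of this approach penalizes squared differences between neighbouring voxels, with weights computed from Gaussian similarity kernels on the companion modality so that shared edges are preserved. A 1-D denoising toy sketch (the names, parameter values and plain gradient-descent solver are illustrative, not the paper's implementation):

```python
import numpy as np

def similarity_weights(guide, sigma=1.0):
    """Gaussian similarity between each voxel and its two (circular)
    1-D neighbours, computed from the companion `guide` image."""
    w_prev = np.exp(-(guide - np.roll(guide, 1)) ** 2 / (2 * sigma ** 2))
    w_next = np.exp(-(guide - np.roll(guide, -1)) ** 2 / (2 * sigma ** 2))
    return w_prev, w_next

def prior_gradient(x, w_prev, w_next):
    """Gradient of the weighted quadratic penalty
    sum_j [w_prev_j (x_j - x_{j-1})^2 + w_next_j (x_j - x_{j+1})^2] / 2."""
    return w_prev * (x - np.roll(x, 1)) + w_next * (x - np.roll(x, -1))

rng = np.random.default_rng(0)
truth = np.repeat([1.0, 3.0], 16)   # piecewise-constant "PET" signal
guide = np.repeat([0.0, 5.0], 16)   # companion "MR" image sharing the edge
noisy = truth + rng.normal(0.0, 0.3, truth.size)

w_prev, w_next = similarity_weights(guide, sigma=1.0)
x, beta, lr = noisy.copy(), 2.0, 0.1
for _ in range(500):
    # data-fidelity pull towards the noisy input + edge-aware smoothing
    x -= lr * ((x - noisy) + beta * prior_gradient(x, w_prev, w_next))
```

Because the guide image's edge yields near-zero weights across the boundary, noise is suppressed within each region while the edge contrast survives.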
Affiliations
- Abolfazl Mehranian
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Martin A. Belzunce
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Colm J. McGinnity
- King's College London & Guy's and St Thomas' PET Centre, St Thomas' Hospital, London, United Kingdom
- Aurelien Bustin
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Claudia Prieto
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Alexander Hammers
- King's College London & Guy's and St Thomas' PET Centre, St Thomas' Hospital, London, United Kingdom
- Andrew J. Reader
- Department of Biomedical Engineering, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
10. Deidda D, Karakatsanis NA, Robson PM, Calcagno C, Senders ML, Mulder WJM, Fayad ZA, Aykroyd RG, Tsoumpas C. Hybrid PET/MR Kernelised Expectation Maximisation Reconstruction for Improved Image-Derived Estimation of the Input Function from the Aorta of Rabbits. Contrast Media Mol Imaging 2019;2019:3438093. PMID: 30800014. PMCID: PMC6360049. DOI: 10.1155/2019/3438093.
Abstract
Positron emission tomography (PET) provides simple noninvasive imaging biomarkers for multiple human diseases, which can be used to produce quantitative information from single static images or to monitor dynamic processes. Such kinetic studies often require the tracer input function (IF) to be measured but, in contrast to direct blood sampling, the image-derived input function (IDIF) provides a noninvasive alternative technique to estimate the IF. Accurate estimation can, in general, be challenging due to the partial volume effect (PVE), which is particularly important in preclinical work on small animals. The recently proposed hybrid kernelised ordered subsets expectation maximisation (HKEM) method has been shown to improve accuracy and contrast across a range of different datasets and count levels, and can be used on PET/MR or PET/CT data. In this work, we apply the method with the purpose of providing accurate estimates of the aorta IDIF for rabbit PET studies. In addition, we propose a method for the extraction of the aorta region of interest (ROI) using the MR and the HKEM image, to minimise the PVE within the rabbit aortic region; this method can be directly transferred to the clinical setting. A realistic simulation study was performed with ten independent noise realisations, and two real rabbit datasets, acquired with the Siemens Biograph mMR PET/MR scanner, were also considered. For reference and comparison, the data were reconstructed using OSEM, OSEM with Gaussian postfilter and KEM, as well as HKEM. The results across the simulated datasets and different time frames show reduced PVE and accurate IDIF values for the proposed method, with 5% average bias (0.8% minimum and 16% maximum bias). Consistent results were obtained with the real datasets. The results of this study demonstrate that HKEM can be used to accurately estimate the IDIF in preclinical PET/MR studies, such as rabbit mMR data, as well as in clinical human studies. The proposed algorithm is made available as part of an open software library, and it can be used equally successfully on human or animal data acquired from a variety of PET/MR or PET/CT scanners.
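The kernelised EM family (of which HKEM is a hybrid variant) parameterises the PET image as x = Kα, where the kernel matrix K encodes similarity between voxels of a co-registered anatomical image, and runs the usual MLEM update on the coefficients α. A 1-D toy sketch (the forward model, sizes and parameters are invented; HKEM additionally includes PET-derived kernel terms and ordered subsets):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32

# Companion "MR" image: two flat regions with a sharp edge
mr = np.where(np.arange(n) < n // 2, 1.0, 4.0)

# Kernel matrix K: Gaussian similarity between MR values of nearby voxels
sigma, radius = 0.5, 2
K = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - radius), min(n, i + radius + 1)):
        K[i, j] = np.exp(-(mr[i] - mr[j]) ** 2 / (2 * sigma ** 2))
K /= K.sum(axis=1, keepdims=True)  # row-normalise

# Toy "projector" A: Gaussian blur standing in for the PET system matrix
idx = np.arange(n)
A = np.exp(-(idx[:, None] - idx[None, :]) ** 2 / 4.0)
A /= A.sum(axis=1, keepdims=True)

truth = np.where(np.arange(n) < n // 2, 2.0, 8.0)
counts = 50.0
y = rng.poisson(A @ truth * counts) / counts  # Poisson-noisy data

# KEM: x = K @ alpha, with MLEM iterations applied to alpha
alpha = np.ones(n)
sens = K.T @ (A.T @ np.ones(n))
for _ in range(100):
    ratio = y / np.clip(A @ (K @ alpha), 1e-12, None)
    alpha *= (K.T @ (A.T @ ratio)) / sens
x_kem = K @ alpha
```

Constraining x to the span of anatomically informed kernels suppresses noise while the MR-defined edge remains sharp.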
Affiliations
- Daniel Deidda
- Biomedical Imaging Science Department, University of Leeds, Leeds, UK
- Department of Statistics, University of Leeds, Leeds, UK
- Nicolas A. Karakatsanis
- Translational and Molecular Imaging Institute (TMII), Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Division of Radiopharmaceutical Sciences, Department of Radiology, Weill Cornell Medical College, Cornell University, New York, NY, USA
- Philip M. Robson, Claudia Calcagno, Max L. Senders, Willem J. M. Mulder, Zahi A. Fayad
- Translational and Molecular Imaging Institute (TMII), Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Charalampos Tsoumpas
- Biomedical Imaging Science Department, University of Leeds, Leeds, UK
- Translational and Molecular Imaging Institute (TMII), Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA