151. Duffy IR, Boyle AJ, Vasdev N. Improving PET Imaging Acquisition and Analysis With Machine Learning: A Narrative Review With Focus on Alzheimer's Disease and Oncology. Mol Imaging 2020;18:1536012119869070. [PMID: 31429375] [PMCID: PMC6702769] [DOI: 10.1177/1536012119869070]
Abstract
Machine learning (ML) algorithms have found increasing utility in medical imaging, and numerous applications in the analysis of digital biomarkers within positron emission tomography (PET) imaging have emerged. Interest in the use of artificial intelligence in PET imaging for the study of neurodegenerative diseases and oncology stems from the potential of such techniques to streamline decision support for physicians, enabling early and accurate diagnosis and personalized treatment regimens. In this review, the use of ML to improve PET image acquisition and reconstruction is presented, along with an overview of its applications in the analysis of PET images for the study of Alzheimer's disease and oncology.
Affiliation(s)
- Ian R Duffy: Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Amanda J Boyle: Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Neil Vasdev: Azrieli Centre for Neuro-Radiochemistry, Research Imaging Centre, Centre for Addiction and Mental Health, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
152. Vandenberghe S, Moskal P, Karp JS. State of the art in total body PET. EJNMMI Phys 2020;7:35. [PMID: 32451783] [PMCID: PMC7248164] [DOI: 10.1186/s40658-020-00290-2]
Abstract
The idea of a very sensitive positron emission tomography (PET) system covering a large portion of the patient's body dates back to the early 1990s. In the period 2000-2010, only a few prototypes with a long axial field of view (FOV) were built, and none resulted in systems used for clinical research. One reason was the limitations of the available detector technology, which did not yet offer sufficient energy resolution, timing resolution, or count-rate capability to fully exploit the benefits of a long axial FOV design. PET was also not yet as widespread as it is today: the growth in oncology, which has become the major application of PET, came only after the introduction of PET-CT in the early 2000s. The detector technology used in most clinical PET systems today combines good energy and timing resolution with higher count-rate capability and has been used for more than a decade to build time-of-flight (TOF) PET systems with fully 3D acquisition. Based on this technology, one can construct total body PET systems, and the remaining challenges (data handling, fast image reconstruction, detector cooling) are mostly matters of engineering. The direct benefits of long axial FOV systems derive mostly from their higher sensitivity. For single-organ imaging, the gain is close to the point-source sensitivity, which increases linearly with axial length until it is limited by solid angle and attenuation in the body; compared with a fully 3D PET system with a 20-cm axial FOV, the single-organ gain is limited to a factor of 3-4. For long objects such as whole-body scans, however, sensitivity increases quadratically with scanner length, and factors of 10-40x higher sensitivity are predicted for long axial FOV scanners. Whole-body PET has seen major growth (mostly in oncology) during the last two decades and is now the main type of study in a PET centre. As the technology is available and the total body concept matches existing applications, the old concept of a total body PET scanner is seeing a clear revival. Several research groups are working on this concept, and after its potential was shown in extensive simulations, construction of these systems started about 2 years ago. In a first phase, two PET systems with a long axial FOV suitable for large-animal imaging were constructed to explore the potential in more experimental settings. Recently, the first completed total body PET systems for human use, a 70-cm-long system called the PennPET Explorer and a 2-m-long system called uExplorer, have become reality, and the first clinical studies have been reported. These results illustrate the large potential of this concept for low-dose imaging, faster scanning, whole-body dynamic imaging, and follow-up of tracers over longer periods. This range of possible technical improvements has the potential to change current clinical routine and to expand the number of clinical applications of molecular imaging. The J-PET prototype is a long axial FOV system built from axially arranged plastic scintillator strips. This paper gives an overview of recent technical developments in PET scanners with a long axial FOV covering at least the majority of the body (so-called total body PET systems). After explaining the benefits and challenges of total body PET, the different system designs proposed for large-animal and clinical imaging are described in detail.
The axial length is one of the major factors determining the total cost of a system, but there are also options in detector technology, design, and processing for reducing the cost of these systems. The limitations and advantages of different designs for research and clinical use are discussed, taking into account potential applications and the increased cost of these systems.
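The sensitivity scaling quoted above (a point-source gain that grows with axial length until solid angle limits it, and a roughly quadratic gain for long objects) can be reproduced with a toy solid-angle model. The sketch below is illustrative only and is not from the paper: it assumes a cylindrical detector with a 40-cm ring radius and ignores attenuation and detector efficiency.

```python
# Toy geometric model: PET sensitivity vs. axial field-of-view (FOV) length.
# Assumptions (not from the paper): cylindrical detector of radius R, no
# attenuation, perfect detector efficiency.
import numpy as np

R = 0.40  # detector ring radius in metres (assumed value)

def point_sensitivity(L, z=0.0):
    """Fraction of emissions accepted for a point source at axial offset z."""
    half = L / 2.0
    theta = 0.5 * (np.arctan((half - z) / R) + np.arctan((half + z) / R))
    return np.sin(theta)  # acceptance saturates as L grows (solid-angle limit)

def line_sensitivity(L, n=2001):
    """Relative sensitivity for the in-FOV part of a uniform line source."""
    z = np.linspace(-L / 2, L / 2, n)
    return point_sensitivity(L, z).mean() * L  # mean acceptance x in-FOV length

for L in (0.2, 0.7, 1.0, 2.0):  # 20-cm FOV vs. PennPET Explorer/uExplorer scale
    pt = point_sensitivity(L) / point_sensitivity(0.2)
    ln = line_sensitivity(L) / line_sensitivity(0.2)
    print(f"L = {L:3.1f} m: point-source gain ~{pt:4.1f}x, long-object gain ~{ln:5.1f}x")
```

Under these assumptions the model lands in the ranges the abstract quotes: a point-source gain of roughly 3-4x, and a long-object gain of a few tens of times for a 2-m scanner relative to a 20-cm FOV.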
Affiliation(s)
- Stefaan Vandenberghe: Department of Electronics and Information Systems, MEDISIP, Ghent University-IBiTech, De Pintelaan 185 block B, Ghent, B-9000, Belgium
- Pawel Moskal: Institute of Physics, Jagiellonian University, Krakow, Poland
- Joel S. Karp: Department of Radiology, University of Pennsylvania, Philadelphia, USA
153. Clinical Use of Integrated Positron Emission Tomography-Magnetic Resonance Imaging for Dementia Patients. Top Magn Reson Imaging 2020;28:299-310. [PMID: 31794502] [DOI: 10.1097/rmr.0000000000000225]
Abstract
Combining magnetic resonance imaging (MRI) with 2-deoxy-2-[18F]fluoro-D-glucose positron emission tomography (FDG-PET) data improves the imaging accuracy for detection of Alzheimer disease and related dementias. Integrated FDG-PET-MRI is a recent technical innovation that allows both imaging modalities to be obtained simultaneously from individual patients with cognitive impairment. This report describes the practical benefits and challenges of using integrated FDG-PET-MRI to support the clinical diagnosis of various dementias. Over the past 7 years, we have performed integrated FDG-PET-MRI on >1500 patients with possible cognitive impairment or dementia. The FDG-PET and MRI protocols are the same as current conventions but are obtained simultaneously over 25 minutes. An additional Dixon MRI sequence with a superimposed bone atlas is used to calculate PET attenuation correction. A single radiologist interprets all imaging data and generates one report. The most common positive finding is concordant temporoparietal volume loss and FDG hypometabolism, which suggests increased risk for underlying Alzheimer disease. Lobar-specific atrophy and FDG hypometabolism patterns that may be subtle, asymmetric, and focal are also more easily recognized using combined FDG-PET and MRI, thereby improving detection of other neurodegenerative conditions such as primary progressive aphasias and frontotemporal degeneration. Integrated PET-MRI has many practical benefits for individual patients, referrers, and interpreting radiologists. The integrated PET-MRI system requires several modifications to standard imaging center workflows and requires training individual radiologists to interpret both modalities in conjunction. Reading MRI and FDG-PET together increases the diagnostic yield of imaging for individual patients; however, both modalities have limitations in specificity.
154. Sanaat A, Arabi H, Mainta I, Garibotto V, Zaidi H. Projection Space Implementation of Deep Learning-Guided Low-Dose Brain PET Imaging Improves Performance over Implementation in Image Space. J Nucl Med 2020;61:1388-1396. [PMID: 31924718] [DOI: 10.2967/jnumed.119.239327]
Abstract
Our purpose was to assess the performance of full-dose (FD) PET image synthesis in both image and sinogram space from low-dose (LD) PET images and sinograms, without sacrificing diagnostic quality, using deep learning techniques. Methods: Clinical brain PET/CT studies of 140 patients were retrospectively used for LD-to-FD PET conversion. Five percent of the events were randomly selected from the FD list-mode PET data to simulate a realistic LD acquisition. A modified 3-dimensional U-Net model was implemented to predict FD sinograms in the projection space (PSS) and FD images in image space (PIS) from their corresponding LD sinograms and images, respectively. The quality of the predicted PET images was assessed by 2 nuclear medicine specialists using a 5-point grading scheme. Quantitative analysis was also performed using established metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), regionwise SUV bias, and first-, second-, and high-order texture radiomic features in 83 brain regions for the test and evaluation datasets. Results: All PSS images were scored 4 or higher (good to excellent) by the nuclear medicine specialists. PSNR and SSIM values of 31.70 ± 0.75 and 0.96 ± 0.03, respectively, were obtained for PIS, and 37.30 ± 0.71 and 0.97 ± 0.02, respectively, for PSS. The average SUV bias calculated over all brain regions was 0.24% ± 0.96% for PSS and 1.05% ± 1.44% for PIS. The Bland-Altman plots showed the lowest SUV bias (0.02) and variance (95% confidence interval, -0.92 to +0.84) for PSS, compared with the reference FD images. The relative error of the homogeneity radiomic feature belonging to the gray-level co-occurrence matrix category was -1.07 ± 1.77 for PIS and 0.28 ± 1.4 for PSS. Conclusion: The qualitative assessment and quantitative analysis demonstrated that FD PET synthesis in projection space led to superior performance, resulting in higher image quality and lower SUV bias and variance than FD PET synthesis in image space.
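A minimal sketch of the setup described above: one 3D U-Net-style network maps low-dose (LD) to full-dose (FD) data, and the same architecture serves image space (PIS) or projection space (PSS) depending only on what the tensors hold. Depth, channel counts, and the L1 loss are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: 3D U-Net mapping LD PET volumes (images or sinograms) to FD ones.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, 3, padding=1), nn.BatchNorm3d(c_out), nn.ReLU(inplace=True),
        nn.Conv3d(c_out, c_out, 3, padding=1), nn.BatchNorm3d(c_out), nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc1, self.enc2 = block(1, ch), block(ch, 2 * ch)
        self.pool = nn.MaxPool3d(2)
        self.bott = block(2 * ch, 4 * ch)
        self.up2, self.dec2 = nn.ConvTranspose3d(4 * ch, 2 * ch, 2, stride=2), block(4 * ch, 2 * ch)
        self.up1, self.dec1 = nn.ConvTranspose3d(2 * ch, ch, 2, stride=2), block(2 * ch, ch)
        self.out = nn.Conv3d(ch, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)

# Identical training step for both spaces; only the data differ:
#   image space (PIS):      x = LD image volume,    y = FD image volume
#   projection space (PSS): x = LD sinogram volume, y = FD sinogram volume
net = UNet3D()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
x, y = torch.rand(1, 1, 32, 32, 32), torch.rand(1, 1, 32, 32, 32)  # stand-ins
loss = nn.functional.l1_loss(net(x), y)
loss.backward()
opt.step()
```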
Affiliation(s)
- Amirhossein Sanaat: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ismini Mainta: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Valentina Garibotto: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
155. Song TA, Chowdhury SR, Yang F, Dutta J. Super-Resolution PET Imaging Using Convolutional Neural Networks. IEEE Transactions on Computational Imaging 2020;6:518-528. [PMID: 32055649] [PMCID: PMC7017584] [DOI: 10.1109/tci.2020.2964229]
Abstract
Positron emission tomography (PET) suffers from severe resolution limitations that reduce its quantitative accuracy. In this paper, we present a super-resolution (SR) imaging technique for PET based on convolutional neural networks (CNNs). To facilitate the resolution recovery process, we incorporate high-resolution (HR) anatomical information based on magnetic resonance (MR) imaging. We introduce the spatial location information of the input image patches as additional CNN inputs to accommodate the spatially variant nature of the blur kernels in PET. We compared the performance of shallow (3-layer) and very deep (20-layer) CNNs with various combinations of the following inputs: low-resolution (LR) PET, radial locations, axial locations, and HR MR. To validate the CNN architectures, we performed both realistic simulation studies using the BrainWeb digital phantom and clinical studies using neuroimaging datasets. For both the simulation and clinical studies, the LR PET images were based on the Siemens HR+ scanner. Two different scenarios were examined in simulation: one where the target HR image is the ground-truth phantom image and another where the target HR image is based on the Siemens HRRT scanner, a high-resolution dedicated brain PET scanner. The latter scenario was also examined using clinical neuroimaging datasets. A number of factors affected the relative performance of the different CNN designs examined, including network depth, target image quality, and the resemblance between the target and anatomical images. In general, however, all deep CNNs outperformed classical penalized deconvolution and partial volume correction techniques by large margins, both qualitatively (e.g., edge and contrast recovery) and quantitatively (as indicated by three metrics: peak signal-to-noise ratio, structural similarity index, and contrast-to-noise ratio).
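The coordinate-input idea above can be sketched by concatenating the low-resolution PET patch with radial and axial location maps and a high-resolution MR patch as input channels, letting one CNN learn a spatially variant deblurring. The 3-layer depth mirrors the paper's shallow variant; the kernel sizes and channel counts are assumptions.

```python
# Sketch: shallow SR CNN with PET + spatial-coordinate + MR input channels.
import torch
import torch.nn as nn

class ShallowSRNet(nn.Module):
    def __init__(self, feat=64):
        super().__init__()
        # input channels: [LR PET, radial location, axial location, HR MR]
        self.net = nn.Sequential(
            nn.Conv2d(4, feat, 9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 32, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 5, padding=2),  # HR PET estimate
        )

    def forward(self, lr_pet, radial, axial, mr):
        return self.net(torch.cat([lr_pet, radial, axial, mr], dim=1))

# Coordinate channels for one patch: radial distance map + constant axial index.
h = w = 64
ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
radial = (torch.sqrt((xs - w / 2) ** 2 + (ys - h / 2) ** 2) / (w / 2)).float()[None, None]
axial = torch.full((1, 1, h, w), 0.3)  # e.g. patch located 30% along the axial FOV
lr_pet, mr = torch.rand(1, 1, h, w), torch.rand(1, 1, h, w)  # stand-in patches
hr_pet = ShallowSRNet()(lr_pet, radial, axial, mr)
```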
Affiliation(s)
- Tzu-An Song: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA 01854, USA; co-affiliated with Massachusetts General Hospital, Boston, MA 02114
- Samadrita Roy Chowdhury: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA 01854, USA; co-affiliated with Massachusetts General Hospital, Boston, MA 02114
- Fan Yang: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA 01854, USA; co-affiliated with Massachusetts General Hospital, Boston, MA 02114
- Joyita Dutta: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA 01854, USA; co-affiliated with Massachusetts General Hospital, Boston, MA 02114
156. Gong K, Berg E, Cherry SR, Qi J. Machine Learning in PET: From Photon Detection to Quantitative Image Reconstruction. Proceedings of the IEEE 2020;108:51-68. [PMID: 38045770] [PMCID: PMC10691821] [DOI: 10.1109/jproc.2019.2936809]
Abstract
Machine learning has found unique applications in nuclear medicine, from photon detection to quantitative image reconstruction. While there have been impressive strides in detector development for time-of-flight positron emission tomography, most detectors still rely on simple signal processing methods to extract time and position information from the detector signals. With the availability of fast waveform digitizers, machine learning techniques have now been applied to estimate the position and arrival time of high-energy photons. In quantitative image reconstruction, machine learning has been used to estimate various correction factors, including scattered events and attenuation images, as well as to reduce statistical noise in reconstructed images. Here, machine learning either provides a faster alternative to an existing time-consuming computation, as in the case of scatter estimation, or creates a data-driven approach to map an implicitly defined function, as in the case of estimating the attenuation map for PET/MR scans. In this article, we review these applications of machine learning in nuclear medicine.
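One photon-detection use case surveyed here, estimating arrival time from a digitized detector waveform, can be sketched as a small 1D CNN regressor trained on synthetic pulses. The pulse shape, sample count, and architecture below are hypothetical illustrations, not models from the review.

```python
# Sketch: regress photon arrival time (in samples) from a digitized waveform.
import torch
import torch.nn as nn

class TimingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv1d(16, 32, 7, padding=3), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool1d(8),
        )
        self.head = nn.Linear(32 * 8, 1)

    def forward(self, w):
        return self.head(self.features(w).flatten(1))

# Synthetic batch: fast-rise/slow-decay pulses with random onsets plus noise.
n = 128
t0 = torch.rand(64, 1) * 40 + 20                      # true onset times
t = torch.arange(n).float()[None, :]
pulse = torch.relu(1 - torch.exp(-(t - t0) / 5)) * torch.exp(-(t - t0).clamp(min=0) / 30)
wave = (pulse + 0.02 * torch.randn(64, n)).unsqueeze(1)

net = TimingNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss = nn.functional.mse_loss(net(wave), t0)          # supervise with true onset
loss.backward()
opt.step()
```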
Affiliation(s)
- Kuang Gong: Department of Biomedical Engineering, University of California, Davis, CA, USA; now with Massachusetts General Hospital, Boston, MA, USA
- Eric Berg: Department of Biomedical Engineering, University of California, Davis, CA, USA
- Simon R. Cherry: Department of Biomedical Engineering and Department of Radiology, University of California, Davis, CA, USA
- Jinyi Qi: Department of Biomedical Engineering, University of California, Davis, CA, USA
157. Belciug S. The beginnings. Artif Intell Cancer 2020. [DOI: 10.1016/b978-0-12-820201-2.00002-7]
158. Cui J, Gong K, Guo N, Wu C, Meng X, Kim K, Zheng K, Wu Z, Fu L, Xu B, Zhu Z, Tian J, Liu H, Li Q. PET image denoising using unsupervised deep learning. Eur J Nucl Med Mol Imaging 2019;46:2780-2789. [PMID: 31468181] [PMCID: PMC7814987] [DOI: 10.1007/s00259-019-04468-4]
Abstract
PURPOSE The image quality of positron emission tomography (PET) is limited by various physical degradation factors. Our study aims to perform PET image denoising by utilizing prior information from the same patient. The proposed method is based on unsupervised deep learning, in which no training pairs are needed. METHODS In this method, a prior high-quality image from the patient was employed as the network input and the noisy PET image itself was treated as the training label. Constrained by the network structure and the prior image input, the network was trained to learn the intrinsic structure information from the noisy image and output a restored PET image. To validate the performance of the proposed method, a computer simulation study based on the BrainWeb phantom was first performed. A 68Ga-PRGD2 PET/CT dataset containing 10 patients and an 18F-FDG PET/MR dataset containing 30 patients were then used for clinical evaluation. The Gaussian filter, non-local mean (NLM) filter using the CT/MR image as a prior, BM4D, and Deep Decoder methods were included as reference methods. The contrast-to-noise ratio (CNR) improvements were used to rank the methods based on the Wilcoxon signed-rank test. RESULTS For the simulation study, contrast recovery coefficient (CRC) vs. standard deviation (STD) curves showed that the proposed method achieved the best bias-variance tradeoff. For the clinical PET/CT dataset, the proposed method achieved the highest CNR improvement ratio (53.35% ± 21.78%), compared with the Gaussian (12.64% ± 6.15%, P = 0.002), CT-guided NLM (24.35% ± 16.30%, P = 0.002), BM4D (38.31% ± 20.26%, P = 0.002), and Deep Decoder (41.67% ± 22.28%, P = 0.002) methods. For the clinical PET/MR dataset, the CNR improvement ratio of the proposed method reached 46.80% ± 25.23%, higher than the Gaussian (18.16% ± 10.02%, P < 0.0001), MR-guided NLM (25.36% ± 19.48%, P < 0.0001), BM4D (37.02% ± 21.38%, P < 0.0001), and Deep Decoder (30.03% ± 20.64%, P < 0.0001) methods. Restored images for all datasets demonstrate that the proposed method can effectively smooth out noise while recovering image details. CONCLUSION The proposed unsupervised deep learning framework provides excellent image restoration, outperforming the Gaussian, NLM, BM4D, and Deep Decoder methods.
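A minimal sketch of the training scheme described above: the patient's own high-quality anatomical image is the network input, the noisy PET is the training label, and no clean targets or training pairs are used; early stopping is what keeps the network from eventually fitting the noise (the deep-image-prior effect). The small CNN and iteration count are assumptions for illustration.

```python
# Sketch: unsupervised PET denoising conditioned on a same-patient prior image.
import torch
import torch.nn as nn

denoiser = nn.Sequential(               # stand-in for the paper's network
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

prior_mr = torch.rand(1, 1, 128, 128)   # same-patient CT/MR prior (network input)
noisy_pet = torch.rand(1, 1, 128, 128)  # the noisy PET itself (training label)

for step in range(600):                 # stop early, before noise is memorized
    opt.zero_grad()
    loss = nn.functional.mse_loss(denoiser(prior_mr), noisy_pet)
    loss.backward()
    opt.step()

restored_pet = denoiser(prior_mr).detach()  # denoised output
```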
Affiliation(s)
- Jianan Cui: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA 02114, USA; State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, 38 Zheda Road, No. 3 Teaching Building, 405, Hangzhou 310027, China
- Kuang Gong: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA 02114, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital/Harvard Medical School, 55 Fruit St, White 427, Boston, MA 02114, USA
- Ning Guo: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA 02114, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital/Harvard Medical School, 55 Fruit St, White 427, Boston, MA 02114, USA
- Chenxi Wu: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA 02114, USA
- Xiaxia Meng: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA 02114, USA; Department of Nuclear Medicine, First Hospital of Shanxi Medical University, Taiyuan, China
- Kyungsang Kim: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA 02114, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital/Harvard Medical School, 55 Fruit St, White 427, Boston, MA 02114, USA
- Kun Zheng: Department of Nuclear Medicine, Peking Union Medical College Hospital, Beijing, China
- Zhifang Wu: Department of Nuclear Medicine, First Hospital of Shanxi Medical University, Taiyuan, China
- Liping Fu: Department of Nuclear Medicine, The Chinese PLA General Hospital, Beijing, China
- Baixuan Xu: Department of Nuclear Medicine, The Chinese PLA General Hospital, Beijing, China
- Zhaohui Zhu: Department of Nuclear Medicine, Peking Union Medical College Hospital, Beijing, China
- Jiahe Tian: Department of Nuclear Medicine, The Chinese PLA General Hospital, Beijing, China
- Huafeng Liu: State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, 38 Zheda Road, No. 3 Teaching Building, 405, Hangzhou 310027, China
- Quanzheng Li: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA 02114, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital/Harvard Medical School, 55 Fruit St, White 427, Boston, MA 02114, USA
159. Zaharchuk G. Next generation research applications for hybrid PET/MR and PET/CT imaging using deep learning. Eur J Nucl Med Mol Imaging 2019;46:2700-2707. [PMID: 31254036] [PMCID: PMC6881542] [DOI: 10.1007/s00259-019-04374-9]
Abstract
INTRODUCTION There have recently been significant advances in machine learning and artificial intelligence (AI) centered on imaging-based applications such as computer vision. In particular, the tremendous power of deep learning algorithms, primarily based on convolutional neural network strategies, is becoming increasingly apparent and has already had a direct impact on the fields of radiology and nuclear medicine. While most early applications of computer vision to radiological imaging have focused on classification of images into disease categories, these methods can also be used to improve image quality. Hybrid imaging approaches, such as PET/MRI and PET/CT, are ideal for applying these methods. METHODS This review gives an overview of the application of AI to improve image quality for PET imaging directly and of how the additional use of anatomic information from CT and MRI can lead to further benefits. For PET, these performance gains can be used to shorten imaging scan times, with improvements in patient comfort and motion artifacts, or to push towards lower radiotracer doses. They also open possibilities for dual-tracer studies, more frequent follow-up examinations, and new imaging indications. How to assess quality and the potential effects of bias in training and testing sets are also discussed. CONCLUSION Harnessing the power of these new technologies to extract maximal information from hybrid PET imaging will open up new vistas for both research and clinical applications, with associated benefits in patient care.
Affiliation(s)
- Greg Zaharchuk: Department of Radiology, Stanford University, Stanford, CA, USA
160. Satoh Y, Sekine T, Omiya Y, Onishi H, Motosugi U. Reduction of the fluorine-18-labeled fluorodeoxyglucose dose for clinically dedicated breast positron emission tomography. EJNMMI Phys 2019;6:21. [PMID: 31784863] [PMCID: PMC6884607] [DOI: 10.1186/s40658-019-0256-9]
Abstract
PURPOSE To determine the clinically acceptable level of reduction in the injected fluorine-18 (18F)-labeled fluorodeoxyglucose (18F-FDG) dose in dedicated breast positron emission tomography (dbPET). METHODS A breast phantom with four spheres exhibiting various diameters (5, 7.5, 10, and 16 mm), a background 18F-FDG radioactivity of 2.28 kBq/mL, and a sphere-to-background radioactivity ratio of 8:1 was used. True dose-reduced dbPET images were obtained by data acquisition for 20 min in list mode at multiple time points over 7 h of radioactive decay. Simulated dose-reduced images were generated by reconstruction with a portion of the list mode acquisition data. True and simulated dose-reduced images were visually and quantitatively compared. On the basis of the phantom study, dbPET images for 32 breasts of 28 women with abnormal uptake were generated after simulated reduction of the injected 18F-FDG doses; these images were compared with those acquired using current clinical doses. RESULTS There were no qualitative differences between true and simulated dose-reduced phantom images. The phantom study revealed that the minimal required dose was 12.5% for the detection of 5-mm spheres and 25% for precise semi-quantification of FDG in the spheres. The 7-min reconstruction with a 100% dose was defined as the reference for the clinical study. The image quality and lesion conspicuity were clinically acceptable for the 25% dose images. Lesion detectability on the 12.5% dose images was maintained despite image quality degradation. CONCLUSIONS In summary, 25% of the standard 18F-FDG dose for dbPET can provide a clinically acceptable image quality, while 12.5% of the standard dose results in acceptable quality in terms of lesion detection when lesions are located at a sufficient distance from the edge of the dbPET detector.
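The simulated dose reduction used here, reconstructing from only a portion of the list-mode acquisition, amounts to randomly thinning the recorded coincidence events before reconstruction. The sketch below assumes a toy event-array layout; real list-mode formats are vendor specific.

```python
# Sketch: emulate a reduced injected dose by random list-mode event thinning.
import numpy as np

rng = np.random.default_rng(0)
# toy list-mode data: one row per coincidence (detector-pair id, timestamp)
events = rng.integers(0, 10_000, size=(2_000_000, 2))

def simulate_reduced_dose(events, fraction):
    """Keep each recorded event independently with probability `fraction`."""
    keep = rng.random(len(events)) < fraction
    return events[keep]

for f in (0.25, 0.125):  # the 25% and 12.5% dose levels examined in the paper
    subset = simulate_reduced_dose(events, f)
    print(f"{f:>6.1%} dose: {len(subset):,} of {len(events):,} events retained")
```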
Affiliation(s)
- Yoko Satoh: Yamanashi PET Imaging Clinic, Shimokato 3046-2, Chuo City, Yamanashi Prefecture 409-3821, Japan; Department of Radiology, University of Yamanashi, Chuo City, Yamanashi Prefecture, Japan
- Tetsuro Sekine: Department of Radiology, Nippon Medical School, Bunkyo-ku, Tokyo, Japan
- Yoshie Omiya: Department of Radiology, University of Yamanashi, Chuo City, Yamanashi Prefecture, Japan
- Hiroshi Onishi: Department of Radiology, University of Yamanashi, Chuo City, Yamanashi Prefecture, Japan
- Utaroh Motosugi: Department of Radiology, University of Yamanashi, Chuo City, Yamanashi Prefecture, Japan
161. Hope TA, Fayad ZA, Fowler KJ, Holley D, Iagaru A, McMillan AB, Veit-Haibach P, Witte RJ, Zaharchuk G, Catana C. Summary of the First ISMRM-SNMMI Workshop on PET/MRI: Applications and Limitations. J Nucl Med 2019;60:1340-1346. [PMID: 31123099] [PMCID: PMC6785790] [DOI: 10.2967/jnumed.119.227231]
Abstract
Since the introduction of simultaneous PET/MRI in 2011, there have been significant advancements. In this review, we highlight several technical advancements that have been made primarily in attenuation and motion correction and discuss the status of multiple clinical applications using PET/MRI. This review is based on the experience at the first PET/MRI conference cosponsored by the International Society for Magnetic Resonance in Medicine and the Society of Nuclear Medicine and Molecular Imaging.
Affiliation(s)
- Thomas A Hope: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California; Department of Radiology, San Francisco VA Medical Center, San Francisco, California; UCSF Helen Diller Family Comprehensive Cancer Center, University of California San Francisco, San Francisco, California
- Zahi A Fayad: Translational and Molecular Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, New York
- Kathryn J Fowler: Department of Radiology, University of California San Diego, San Diego, California
- Dawn Holley: Department of Radiology, Stanford University Medical Center, Stanford, California
- Andrei Iagaru: Department of Radiology, Stanford University Medical Center, Stanford, California
- Alan B McMillan: Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin
- Patrick Veit-Haibach: Joint Department of Medical Imaging, Toronto General Hospital, University Health Network, University of Toronto, Toronto, Canada
- Robert J Witte: Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Greg Zaharchuk: Department of Radiology, Stanford University Medical Center, Stanford, California
- Ciprian Catana: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, Massachusetts
162. The Role of Generative Adversarial Networks in Radiation Reduction and Artifact Correction in Medical Imaging. J Am Coll Radiol 2019;16:1273-1278. [DOI: 10.1016/j.jacr.2019.05.040]
163. Larson DB, Boland GW. Imaging Quality Control in the Era of Artificial Intelligence. J Am Coll Radiol 2019;16:1259-1266. [DOI: 10.1016/j.jacr.2019.05.048]
164. Zhu G, Jiang B, Tong L, Xie Y, Zaharchuk G, Wintermark M. Applications of Deep Learning to Neuro-Imaging Techniques. Front Neurol 2019;10:869. [PMID: 31474928] [PMCID: PMC6702308] [DOI: 10.3389/fneur.2019.00869]
Abstract
Many deep learning applications pertaining to radiology have been proposed and studied for classification, risk assessment, segmentation, diagnosis, prognosis, and even prediction of therapy responses. There are many other innovative applications of AI in the technical aspects of medical imaging, particularly applied to the acquisition of images: removing image artifacts, normalizing/harmonizing images, improving image quality, lowering radiation and contrast dose, and shortening the duration of imaging studies. This article addresses this topic and presents an overview of deep learning applied to neuroimaging techniques.
Affiliation(s)
- Max Wintermark: Neuroradiology Section, Department of Radiology, Stanford Healthcare, Stanford, CA, United States
165. Ouyang J, Chen KT, Gong E, Pauly J, Zaharchuk G. Ultra-low-dose PET reconstruction using generative adversarial network with feature matching and task-specific perceptual loss. Med Phys 2019;46:3555-3564. [PMID: 31131901] [PMCID: PMC6692211] [DOI: 10.1002/mp.13626]
Abstract
PURPOSE Our goal was to use a generative adversarial network (GAN) with feature matching and task-specific perceptual loss to synthesize standard-dose amyloid positron emission tomography (PET) images of high quality, with accurate pathological features, from ultra-low-dose PET images alone. METHODS Forty PET datasets from 39 participants were acquired with a simultaneous PET/MRI scanner following injection of 330 ± 30 MBq of the amyloid radiotracer 18F-florbetaben. The raw list-mode PET data were reconstructed as the standard-dose ground truth and were randomly undersampled by a factor of 100 to reconstruct 1% low-dose PET scans. A 2D encoder-decoder network was implemented as the generator to synthesize a standard-dose image, and a discriminator was used to evaluate it. The two networks contested with each other to achieve high visual quality PET from the ultra-low-dose PET. Multi-slice inputs were used to reduce noise by providing the network with 2.5D information. Feature matching was applied to reduce hallucinated structures. A task-specific perceptual loss was designed to maintain the correct pathological features. Image quality was evaluated by peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and root mean square error (RMSE) metrics, with and without each of these modules. Two expert radiologists were asked to score image quality on a 5-point scale and to identify amyloid status (positive or negative). RESULTS With only low-dose PET as input, the proposed method significantly outperformed the method of Chen et al. (Radiology. 2018;290:649-656), which had shown the best performance in this task with the same input (PET-only model), by 1.87 dB in PSNR, 2.04% in SSIM, and 24.75% in RMSE. It also achieved results comparable to those of Chen et al.'s method using additional magnetic resonance imaging (MRI) inputs (PET-MR model). Experts' readings showed that the proposed method achieved better overall image quality and maintained pathological features indicating amyloid status better than both the PET-only and PET-MR models proposed by Chen et al. CONCLUSION Standard-dose amyloid PET images can be synthesized from ultra-low-dose images using a GAN. Applying adversarial learning, feature matching, and task-specific perceptual loss is essential to ensure image quality and the preservation of pathological features.
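The feature-matching component credited above with reducing hallucinated structures can be sketched as an extra generator penalty on the discriminator's intermediate feature maps for real versus synthesized images. Network sizes and the loss weight are assumptions, and the task-specific perceptual loss is omitted for brevity.

```python
# Sketch: GAN generator loss with a feature-matching term.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2)),
            nn.Conv2d(64, 1, 4),
        ])

    def forward(self, x):
        feats = []
        for layer in self.layers:
            x = layer(x)
            feats.append(x)
        return feats[-1], feats[:-1]  # logits, intermediate feature maps

G = nn.Sequential(                    # stand-in encoder-decoder generator
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 1, 3, padding=1),
)
D = Discriminator()

low_dose, full_dose = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
fake = G(low_dose)
logits_fake, feats_fake = D(fake)
_, feats_real = D(full_dose)

adv = nn.functional.binary_cross_entropy_with_logits(
    logits_fake, torch.ones_like(logits_fake))        # fool the discriminator
fm = sum(nn.functional.l1_loss(f, r.detach())
         for f, r in zip(feats_fake, feats_real))     # feature matching
(adv + 10.0 * fm).backward()                          # weight is an assumption
```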
Affiliation(s)
- Jiahong Ouyang: Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Kevin T. Chen: Department of Radiology, Stanford University, Stanford, CA 94305, USA
- John Pauly: Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Greg Zaharchuk: Department of Radiology, Stanford University, Stanford, CA 94305, USA; Subtle Medical, Menlo Park, CA 94025, USA
166. Langlotz CP, Allen B, Erickson BJ, Kalpathy-Cramer J, Bigelow K, Cook TS, Flanders AE, Lungren MP, Mendelson DS, Rudie JD, Wang G, Kandarpa K. A Roadmap for Foundational Research on Artificial Intelligence in Medical Imaging: From the 2018 NIH/RSNA/ACR/The Academy Workshop. Radiology 2019;291:781-791. [PMID: 30990384] [PMCID: PMC6542624] [DOI: 10.1148/radiol.2019190613]
Abstract
Imaging research laboratories are rapidly creating machine learning systems that achieve expert human performance using open-source methods and tools. These artificial intelligence systems are being developed to improve medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection, computer-aided classification, and radiogenomics. In August 2018, a meeting was held in Bethesda, Maryland, at the National Institutes of Health to discuss the current state of the art and knowledge gaps and to develop a roadmap for future research initiatives. Key research priorities include: (1) new image reconstruction methods that efficiently produce images suitable for human interpretation from source data; (2) automated image labeling and annotation methods, including information extraction from the imaging report, electronic phenotyping, and prospective structured image reporting; (3) new machine learning methods for clinical imaging data, such as tailored, pretrained model architectures and federated machine learning methods; (4) machine learning methods that can explain the advice they provide to human users (so-called explainable artificial intelligence); and (5) validated methods for image de-identification and data sharing to facilitate wide availability of clinical imaging data sets. This research roadmap is intended to identify and prioritize these needs for academic research laboratories, funding agencies, professional societies, and industry.
Affiliation(s)
- Curtis P. Langlotz: Department of Radiology, Stanford University, Stanford, CA 94305
- Bibb Allen: Department of Radiology, Grandview Medical Center, Birmingham, Ala
- Bradley J. Erickson: Department of Radiology, Mayo Clinic, Rochester, Minn
- Jayashree Kalpathy-Cramer: Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Mass
- Keith Bigelow: GE Healthcare, Chicago, Ill
- Tessa S. Cook: Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, Pa
- Adam E. Flanders: Department of Radiology, Thomas Jefferson University Hospital, Philadelphia, Pa
- Matthew P. Lungren: Department of Radiology, Stanford University, Stanford, CA 94305
- David S. Mendelson: Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY
- Jeffrey D. Rudie: Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, Pa
- Ge Wang: Biomedical Imaging Center, Rensselaer Polytechnic Institute, Troy, NY
- Krishna Kandarpa: National Institute of Biomedical Imaging and Bioengineering, National Institutes of Health, Washington, DC
167. A convolutional neural network-based system to prevent patient misidentification in FDG-PET examinations. Sci Rep 2019;9:7192. [PMID: 31076620] [PMCID: PMC6510755] [DOI: 10.1038/s41598-019-43656-y]
Abstract
Patient misidentification in imaging examinations has become a serious problem in clinical settings. Such misidentification could be prevented if patient characteristics such as sex, age, and body weight could be predicted from an image of the patient, with an alert issued when a mismatch between a predicted and an actual patient characteristic is detected. Here, we tested a simple convolutional neural network (CNN)-based system that predicts patient sex from FDG PET-CT images. This retrospective study included 6,462 consecutive patients who underwent whole-body FDG PET-CT at our institute. The CNN system was used to classify these patients by sex. The images were randomly split: 70% were used to train and validate the system, and the remaining 30% were used for testing. The training process was repeated five times to calculate the system's accuracy. When the test images were given to the trained CNN model, the sex of 99% of the patients was correctly categorized. We then performed an image-masking simulation to investigate which body parts are significant for patient classification. The simulation indicated the pelvic region as the most important feature for classification. Finally, we showed that the system was also able to predict age and body weight. Our findings demonstrate that a CNN-based system would be effective for predicting patient sex, with or without age and body weight prediction, and could thereby prevent patient misidentification in clinical settings.
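The safeguard proposed above can be sketched as a sex classifier plus a mismatch check against the sex recorded in the examination metadata. The tiny untrained network and the confidence threshold are hypothetical placeholders, not the paper's configuration.

```python
# Sketch: alert when CNN-predicted patient sex contradicts the exam record.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),                  # logits for [female, male]
)

def check_identity(image, recorded_sex, threshold=0.9):
    """Alert when a confident prediction disagrees with the recorded sex."""
    probs = torch.softmax(classifier(image), dim=1)[0]
    predicted = "female" if probs[0] > probs[1] else "male"
    confidence = probs.max().item()
    if confidence >= threshold and predicted != recorded_sex:
        print(f"ALERT: model predicts {predicted} ({confidence:.0%}) but the "
              f"record says {recorded_sex}; verify patient identity.")

check_identity(torch.rand(1, 1, 128, 128), recorded_sex="male")
```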
168. Catana C. Development of Dedicated Brain PET Imaging Devices: Recent Advances and Future Perspectives. J Nucl Med 2019;60:1044-1052. [PMID: 31028166] [DOI: 10.2967/jnumed.118.217901]
Abstract
Whole-body PET scanners are not optimized for imaging small structures in the human brain. Several PET devices specifically designed for this task have been proposed either for stand-alone operation or as MR-compatible inserts. The main distinctive features of some of the most recent concepts and their performance characteristics, with a focus on spatial resolution and sensitivity, are reviewed. The trade-offs between the various performance characteristics, desired capabilities, and cost that need to be considered when designing a dedicated brain scanner are presented. Finally, the aspirational goals for future-generation scanners, some of the factors that have contributed to the current status, and how recent advances may affect future developments in dedicated brain PET instrumentation are briefly discussed.
Affiliation(s)
- Ciprian Catana: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts
169.
Affiliation(s)
- Ciprian Catana: Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, 149 13th St, Room 2.301, Charlestown, MA 02129