1
Guo X, Shi L, Chen X, Liu Q, Zhou B, Xie H, Liu YH, Palyo R, Miller EJ, Sinusas AJ, Staib L, Spottiswoode B, Liu C, Dvornek NC. TAI-GAN: A Temporally and Anatomically Informed Generative Adversarial Network for early-to-late frame conversion in dynamic cardiac PET inter-frame motion correction. Med Image Anal 2024; 96:103190. [PMID: 38820677; PMCID: PMC11180595; DOI: 10.1016/j.media.2024.103190] [Received: 09/05/2023; Revised: 04/12/2024; Accepted: 05/01/2024]
Abstract
Inter-frame motion in dynamic cardiac positron emission tomography (PET) using rubidium-82 (82Rb) myocardial perfusion imaging impacts myocardial blood flow (MBF) quantification and the diagnostic accuracy of coronary artery disease. However, the high cross-frame distribution variation due to rapid tracer kinetics poses a considerable challenge for inter-frame motion correction, especially for early frames, where intensity-based image registration techniques often fail. To address this issue, we propose the Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN), which uses an all-to-one mapping to convert early frames into frames with a tracer distribution similar to that of the last reference frame. TAI-GAN incorporates a feature-wise linear modulation layer that encodes channel-wise parameters generated from temporal information, together with rough cardiac segmentation masks with local shifts that serve as anatomical information. The proposed method was evaluated on a clinical 82Rb PET dataset, and the results show that TAI-GAN can produce converted early frames of high image quality, comparable to the real reference frames. After TAI-GAN conversion, motion estimation accuracy and subsequent MBF quantification with both conventional and deep learning-based motion correction methods improved compared to using the original frames. The code is available at https://github.com/gxq1998/TAI-GAN.
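The feature-wise linear modulation (FiLM) idea at the core of TAI-GAN can be sketched in a few lines: a conditioning vector (here standing in for the frame's temporal encoding) is projected to per-channel scale and shift parameters that modulate a feature map. This is an illustrative NumPy sketch, not the authors' implementation; the layer sizes, the `temporal_code` input, and the projection matrices are all hypothetical.

```python
import numpy as np

def film_modulate(features, temporal_code, W_gamma, b_gamma, W_beta, b_beta):
    """Feature-wise linear modulation (FiLM).

    features:      (C, H, W) feature map to be modulated
    temporal_code: (D,) conditioning vector (e.g., a frame-time encoding)
    W_*, b_*:      projections mapping the code to C per-channel parameters
    Returns gamma * features + beta, applied channel-wise.
    """
    gamma = W_gamma @ temporal_code + b_gamma  # (C,) per-channel scales
    beta = W_beta @ temporal_code + b_beta     # (C,) per-channel shifts
    return gamma[:, None, None] * features + beta[:, None, None]

# Toy example: 2 channels, 2x2 spatial grid, 3-dim conditioning code.
rng = np.random.default_rng(0)
feats = np.ones((2, 2, 2))
code = np.array([1.0, 0.0, 0.0])
W_g, b_g = rng.standard_normal((2, 3)), np.zeros(2)
W_b, b_b = rng.standard_normal((2, 3)), np.zeros(2)
out = film_modulate(feats, code, W_g, b_g, W_b, b_b)
print(out.shape)  # (2, 2, 2)
```

In the paper this conditioning is learned jointly with the generator; the sketch only shows the modulation arithmetic itself.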
Affiliation(s)
- Xueqi Guo
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Qiong Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Yi-Hwa Liu
- Department of Internal Medicine, Yale University, New Haven, CT, USA
- Edward J Miller
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Internal Medicine, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Albert J Sinusas
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Internal Medicine, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Lawrence Staib
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Nicha C Dvornek
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
2
Sanaat A, Boccalini C, Mathoux G, Perani D, Frisoni GB, Haller S, Montandon ML, Rodriguez C, Giannakopoulos P, Garibotto V, Zaidi H. A deep learning model for generating [18F]FDG PET images from early-phase [18F]florbetapir and [18F]flutemetamol PET images. Eur J Nucl Med Mol Imaging 2024. [PMID: 38861183; DOI: 10.1007/s00259-024-06755-1] [Received: 02/01/2024; Accepted: 05/05/2024]
Abstract
INTRODUCTION Amyloid-β (Aβ) plaque deposition is a key hallmark of Alzheimer's disease (AD), detectable via amyloid PET imaging. Fluorine-18 fluorodeoxyglucose ([18F]FDG) PET tracks cerebral glucose metabolism, which correlates with synaptic dysfunction and disease progression and is complementary for AD diagnosis. Dual-phase amyloid PET acquisition makes it possible to use early-phase amyloid PET as a biomarker of neurodegeneration, as it has been shown to correlate well with [18F]FDG PET. The aim of this study was to evaluate the added value of synthesizing the latter from the former through deep learning (DL), with the goal of reducing the number of PET scans, radiation dose, and patient discomfort. METHODS A total of 166 subjects, including cognitively unimpaired individuals (N = 72) and subjects with mild cognitive impairment (N = 73) or dementia (N = 21), were included in this study. All underwent T1-weighted MRI, dual-phase amyloid PET using either fluorine-18 florbetapir ([18F]FBP) or fluorine-18 flutemetamol ([18F]FMM), and an [18F]FDG PET scan. Two transformer-based DL models (SwinUNETR) were trained separately to synthesize [18F]FDG images from early-phase [18F]FBP and [18F]FMM images (eFBP/eFMM). A clinical similarity score (1: no similarity, to 3: similar) was used to compare the imaging information obtained from synthesized [18F]FDG, as well as from eFBP/eFMM, to actual [18F]FDG. Quantitative evaluations included region-wise correlation and single-subject voxel-wise analyses against a reference [18F]FDG PET healthy-control database. Dice coefficients were calculated to quantify the whole-brain spatial overlap between hypometabolic ([18F]FDG PET) and hypoperfused (eFBP/eFMM) binary maps at the single-subject level, as well as between [18F]FDG PET and synthetic [18F]FDG PET hypometabolic binary maps.
RESULTS The clinical evaluation showed that, in comparison to eFBP/eFMM (mean clinical similarity score (CSS) = 1.53), the synthetic [18F]FDG images were similar to the actual [18F]FDG images (mean CSS = 2.7) in terms of preserving clinically relevant uptake patterns. The single-subject voxel-wise analyses showed that, at the group level, Dice scores improved by around 13% and 5% when using the DL approach for eFBP and eFMM, respectively. The correlation analysis indicated a relatively strong correlation between eFBP/eFMM and [18F]FDG (eFBP: slope = 0.77, R2 = 0.61, P < 0.0001; eFMM: slope = 0.77, R2 = 0.61, P < 0.0001). This correlation improved for synthetic [18F]FDG generated from eFBP (slope = 1.00, R2 = 0.68, P < 0.0001) and from eFMM (slope = 0.93, R2 = 0.72, P < 0.0001). CONCLUSION We proposed a DL model for generating [18F]FDG images from eFBP/eFMM PET images. This method may serve as an alternative to multi-radiotracer scanning in research and clinical settings, allowing the currently validated [18F]FDG PET normal reference databases to be adopted for data analysis.
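The Dice coefficient used in the evaluation above is straightforward to compute for binary maps. A minimal sketch follows; the toy maps and the thresholding step that would produce real hypometabolic masks are invented for illustration and not taken from the paper.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary maps: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both maps empty: treat as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy binary "hypometabolic" maps on a 4x4 voxel grid.
real = np.zeros((4, 4), dtype=bool)
real[1:3, 1:3] = True            # 4 voxels flagged
synthetic = np.zeros((4, 4), dtype=bool)
synthetic[1:3, 1:4] = True       # 6 voxels flagged, 4 shared with `real`
print(dice_coefficient(real, synthetic))  # 2*4 / (4+6) = 0.8
```

A ~13% group-level Dice improvement, as reported, would correspond to e.g. moving from 0.8 toward 0.9 on this scale.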
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Cecilia Boccalini
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Laboratory of Neuroimaging and Innovative Molecular Tracers (NIMTlab), Geneva University Neurocenter and Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Gregory Mathoux
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Daniela Perani
- Vita-Salute San Raffaele University, Nuclear Medicine Unit, San Raffaele Hospital, Milan, Italy
- Sven Haller
- CIMC - Centre d'Imagerie Médicale de Cornavin, Geneva, Switzerland
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Marie-Louise Montandon
- Department of Rehabilitation and Geriatrics, Geneva University Hospitals and University of Geneva, Geneva, Switzerland
- Cristelle Rodriguez
- Division of Institutional Measures, Medical Direction, Geneva University Hospitals, Geneva, Switzerland
- Panteleimon Giannakopoulos
- Division of Institutional Measures, Medical Direction, Geneva University Hospitals, Geneva, Switzerland
- Department of Psychiatry, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Valentina Garibotto
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Laboratory of Neuroimaging and Innovative Molecular Tracers (NIMTlab), Geneva University Neurocenter and Faculty of Medicine, University of Geneva, Geneva, Switzerland
- CIBM Center for Biomedical Imaging, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
- University Research and Innovation Center, Óbuda University, Budapest, Hungary
3
Miederer I, Shi K, Wendler T. Machine learning methods for tracer kinetic modelling. Nuklearmedizin 2023; 62:370-378. [PMID: 37820696; DOI: 10.1055/a-2179-5818]
Abstract
Tracer kinetic modelling based on dynamic PET is an important part of nuclear medicine for quantitative functional imaging. Yet its implementation in clinical routine has been constrained by its complexity and computational cost. Machine learning offers an opportunity to improve modelling processes, in terms of arterial input function prediction, prediction of kinetic modelling parameters, and model selection, in both clinical and preclinical studies, while reducing processing time. Moreover, it can help improve kinetic modelling data used in downstream tasks such as tumor detection. In this review, we introduce the basics of tracer kinetic modelling and survey original works and conference papers applying machine learning methods in this field.
Affiliation(s)
- Isabelle Miederer
- Department of Nuclear Medicine, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Chair for Computer-Aided Medical Procedures and Augmented Reality, Technical University of Munich, Garching near Munich, Germany
- Thomas Wendler
- Chair for Computer-Aided Medical Procedures and Augmented Reality, Technical University of Munich, Garching near Munich, Germany
- Department of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Augsburg, Augsburg, Germany
4
Ayubcha C, Singh SB, Patel KH, Rahmim A, Hasan J, Liu L, Werner T, Alavi A. Machine learning in the positron emission tomography imaging of Alzheimer's disease. Nucl Med Commun 2023; 44:751-766. [PMID: 37395538; DOI: 10.1097/mnm.0000000000001723]
Abstract
The use of machine learning techniques in medicine has increased exponentially over recent decades, driven by innovations in computer processing, algorithm development, and access to big data. Applications of machine learning to neuroimaging in particular have unveiled hidden interactions, structures, and mechanisms related to numerous neurological disorders. One application of interest is the imaging of Alzheimer's disease, the most common cause of progressive dementia. Diagnosing Alzheimer's disease, mild cognitive impairment, and preclinical Alzheimer's disease has been difficult. Molecular imaging, particularly via PET, holds tremendous value for imaging Alzheimer's disease. To date, many novel algorithms leveraging machine learning have been developed with great success in the context of Alzheimer's disease. This review article provides an overview of the diverse applications of machine learning to PET imaging of Alzheimer's disease.
Affiliation(s)
- Cyrus Ayubcha
- Harvard Medical School
- Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Shashi B Singh
- Department of Radiology, Stanford University School of Medicine, Stanford, California
- Krishna H Patel
- Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia, Vancouver, British Columbia, Canada
- Jareed Hasan
- Department of Radiology, Stanford University School of Medicine, Stanford, California
- Litian Liu
- Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Thomas Werner
- Department of Radiology, Stanford University School of Medicine, Stanford, California
- Abass Alavi
- Department of Radiology, Stanford University School of Medicine, Stanford, California
5
Miao T, Zhou B, Liu J, Guo X, Liu Q, Xie H, Chen X, Chen MK, Wu J, Carson RE, Liu C. Generation of whole-body FDG parametric Ki images from static PET images using deep learning. IEEE Trans Radiat Plasma Med Sci 2023; 7:465-472. [PMID: 37997577; PMCID: PMC10665031; DOI: 10.1109/trpms.2023.3243576]
Abstract
FDG parametric Ki images offer a clear advantage over static SUV images, owing to their higher contrast and better accuracy in estimating the tracer uptake rate. In this study, we explored the feasibility of generating synthetic Ki images from static SUV ratio (SUVR) images using three U-Net configurations with different sets of input and output image patches: single input and single output (SISO), multiple inputs and single output (MISO), and single input and multiple outputs (SIMO). SUVR images were generated by averaging three 5-min dynamic SUV frames starting at 60 min post-injection and normalizing by the mean SUV in the blood pool. The corresponding ground-truth Ki images were derived using Patlak graphical analysis with input functions measured from arterial blood samples. Although the synthetic Ki values were not quantitatively accurate compared with the ground truth, linear regression analysis of joint histograms over body-region voxels showed higher mean R2 values between U-Net predictions and ground truth (0.596, 0.580, and 0.576 for SISO, MISO, and SIMO) than between SUVR and ground-truth Ki (0.571). In terms of similarity metrics, the synthetic Ki images were closer to the ground-truth Ki images (mean SSIM = 0.729, 0.704, and 0.704 for SISO, MISO, and SIMO) than the input SUVR images (mean SSIM = 0.691). It is therefore feasible to use deep learning networks to estimate surrogate parametric Ki maps from static SUVR images.
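Patlak graphical analysis, used above to derive the ground-truth Ki maps, fits a line to transformed time-activity data: plotting C_tissue(t)/C_p(t) against the "stretched time" ∫C_p dτ / C_p(t) yields Ki as the slope in the late, linear portion of the curve. Below is a minimal per-voxel sketch; the input function, frame timing, and the t_start cutoff are invented for illustration, not taken from the paper.

```python
import numpy as np

def patlak_ki(tissue, plasma, times, t_start=10.0):
    """Estimate Patlak Ki from sampled time-activity curves.

    tissue, plasma: activity concentrations sampled at `times`
    t_start:        only times after this (the linear regime) are fit
    Returns (Ki, intercept) of the least-squares Patlak line.
    """
    # Cumulative trapezoidal integral of the plasma input function.
    integ = np.concatenate([[0.0], np.cumsum(
        0.5 * (plasma[1:] + plasma[:-1]) * np.diff(times))])
    mask = times > t_start
    x = integ[mask] / plasma[mask]   # stretched time
    y = tissue[mask] / plasma[mask]  # normalized tissue activity
    ki, intercept = np.polyfit(x, y, 1)
    return ki, intercept

# Synthetic curves obeying the Patlak relation with known Ki = 0.05.
times = np.linspace(0.0, 60.0, 121)              # minutes
plasma = np.exp(-0.1 * times) + 0.2              # toy input function
integ = np.concatenate([[0.0], np.cumsum(
    0.5 * (plasma[1:] + plasma[:-1]) * np.diff(times))])
tissue = 0.05 * integ + 0.3 * plasma             # Ki = 0.05, V = 0.3
ki, intercept = patlak_ki(tissue, plasma, times)
print(round(ki, 3))  # ~0.05
```

Applying this fit voxel-by-voxel over a dynamic series is what produces the parametric Ki image the U-Nets learn to approximate from static SUVR input.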
Affiliation(s)
- Tianshun Miao
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Juan Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Xueqi Guo
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Qiong Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Ming-Kai Chen
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Jing Wu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Department of Physics, Beijing Normal University, Beijing 100875, China
- Richard E. Carson
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
6
Li M, Jiang Y, Li X, Yin S, Luo H. Ensemble of convolutional neural networks and multilayer perceptron for the diagnosis of mild cognitive impairment and Alzheimer's disease. Med Phys 2023; 50:209-225. [PMID: 36121183; DOI: 10.1002/mp.15985] [Received: 03/02/2022; Revised: 08/09/2022; Accepted: 09/07/2022]
Abstract
BACKGROUND Structural magnetic resonance imaging (sMRI) can provide morphological information about the brain in a single scanning session. It has been widely used in the diagnosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI). PURPOSE To capture the anatomical changes in the brain caused by AD/MCI, deep learning-based MRI analysis methods have been proposed in recent years. However, the performance of most existing methods is limited because they construct only a single type of deep network and ignore other clinical information. METHODS To address these shortcomings, we propose an ensemble framework that incorporates three dedicated convolutional neural networks (CNNs) and a multilayer perceptron (MLP), where the three CNNs use entropy-based multi-instance learning pooling layers for more reliable feature selection. The dedicated base classifiers exploit heterogeneous data and endow the framework with greater diversity and robustness. In particular, to account for interactions among the base classifiers, a novel multi-head self-attention voting scheme is designed. Moreover, because MCI can convert to AD, the framework is designed to diagnose AD and predict MCI conversion simultaneously, with the aid of transfer learning. RESULTS For performance evaluation and comparison, extensive experiments were conducted on the public Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The proposed ensemble framework delivers superior performance on most evaluation metrics, achieving state-of-the-art diagnostic accuracy (98.61% for AD diagnosis and 84.49% for MCI conversion prediction).
CONCLUSIONS These promising results demonstrate that the proposed ensemble framework can accurately diagnose AD patients and predict the conversion of MCI patients, with potential for clinical practice in diagnosing AD and MCI.
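The ensemble idea above, combining base classifiers whose votes are weighted before aggregation, can be illustrated with a plain soft-voting sketch. The attention-based voting scheme itself is the paper's contribution and is not reproduced here; the classifier count, class probabilities, and weights below are all hypothetical.

```python
import numpy as np

def soft_vote(probabilities, weights):
    """Weighted soft voting over base-classifier outputs.

    probabilities: (n_classifiers, n_classes) per-classifier class probs
    weights:       (n_classifiers,) nonnegative classifier weights
    Returns the index of the winning class.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                # normalize to sum 1
    combined = weights @ np.asarray(probabilities)   # (n_classes,)
    return int(np.argmax(combined))

# Four base classifiers (e.g., three CNNs plus an MLP), two classes (AD vs. CN).
probs = np.array([
    [0.60, 0.40],
    [0.30, 0.70],
    [0.55, 0.45],
    [0.20, 0.80],
])
print(soft_vote(probs, weights=[1.0, 1.0, 1.0, 1.0]))  # class 1 wins
```

In the paper the weights are not fixed but produced by a multi-head self-attention module conditioned on the classifiers' outputs, so the aggregation adapts per subject.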
Affiliation(s)
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, Heilongjiang, China
- Shen Yin
- Department of Mechanical and Industrial Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, Heilongjiang, China
7
Hirata K, Sugimori H, Fujima N, Toyonaga T, Kudo K. Artificial intelligence for nuclear medicine in oncology. Ann Nucl Med 2022; 36:123-132. [PMID: 35028877; DOI: 10.1007/s12149-021-01693-6] [Received: 10/30/2021; Accepted: 11/07/2021]
Abstract
As in all other medical fields, artificial intelligence (AI) is increasingly being used in nuclear medicine for oncology. Many articles discuss AI from the viewpoint of nuclear medicine, but few focus on nuclear medicine from the viewpoint of AI. Nuclear medicine images are characterized by low spatial resolution and high quantitativeness, and AI has been used in the field since before the emergence of deep learning. AI applications can be divided into three categories by purpose: (1) assisted interpretation, i.e., computer-aided detection (CADe) or computer-aided diagnosis (CADx); (2) additional insight, i.e., information beyond the radiologist's eye, such as predicting genes and prognosis from images, related to the field of radiomics/radiogenomics; and (3) augmented images, i.e., image-generation tasks. For AI to reach practical use, harmonization between facilities and the explainability of black-box models still need to be resolved.
Affiliation(s)
- Kenji Hirata
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Kita 15, Nishi 7, Kita-Ku, Sapporo, Hokkaido 060-8638, Japan
- Department of Nuclear Medicine, Hokkaido University Hospital, Sapporo, Japan
- Division of Medical AI Education and Research, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Noriyuki Fujima
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Kita 15, Nishi 7, Kita-Ku, Sapporo, Hokkaido 060-8638, Japan
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
- Takuya Toyonaga
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Kohsuke Kudo
- Department of Diagnostic Imaging, Hokkaido University Graduate School of Medicine, Kita 15, Nishi 7, Kita-Ku, Sapporo, Hokkaido 060-8638, Japan
- Division of Medical AI Education and Research, Hokkaido University Graduate School of Medicine, Sapporo, Japan
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Japan
- Global Center for Biomedical Science and Engineering, Hokkaido University Faculty of Medicine, Sapporo, Japan
8
Matsubara K, Ibaraki M, Nemoto M, Watabe H, Kimura Y. A review on AI in PET imaging. Ann Nucl Med 2022; 36:133-143. [PMID: 35029818; DOI: 10.1007/s12149-021-01710-8] [Received: 11/28/2021; Accepted: 12/09/2021]
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. In particular, deep learning techniques such as convolutional neural networks (CNNs) and generative adversarial networks (GANs) have been used extensively for medical image generation, and image generation with deep learning has been investigated in positron emission tomography (PET). This article reviews studies that applied deep learning to PET image generation, grouped into three themes: (1) recovering full PET data from noisy data by denoising, (2) PET image reconstruction and attenuation correction, and (3) PET image translation and synthesis. We introduce recent studies in each category and conclude with the limitations of applying deep learning to PET image generation and future prospects.
Affiliation(s)
- Keisuke Matsubara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Mitsutaka Nemoto
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
- Hiroshi Watabe
- Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Miyagi, Japan
- Yuichi Kimura
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan