1
Miller RJH, Slomka PJ. Artificial Intelligence in Nuclear Cardiology: An Update and Future Trends. Semin Nucl Med 2024; 54:648-657. [PMID: 38521708] [DOI: 10.1053/j.semnuclmed.2024.02.005]
Abstract
Myocardial perfusion imaging (MPI), using either single photon emission computed tomography (SPECT) or positron emission tomography (PET), is one of the most commonly ordered cardiac imaging tests, with prominent clinical roles for disease diagnosis and risk prediction. Artificial intelligence (AI) could potentially play a role in many steps along the typical MPI workflow, from image acquisition through to clinical reporting and risk estimation. AI can be utilized to improve image quality, reducing radiation exposure and image acquisition times. Once images are acquired, AI can help optimize motion correction and image registration during image reconstruction or provide direct image attenuation correction. Utilizing these image sets, AI can segment a number of anatomic features from associated computed tomographic imaging or even generate synthetic attenuation imaging. Lastly, AI may play an important role in disease diagnosis or risk prediction by combining the large number of potentially important clinical, stress, and imaging-related variables. This review will focus on the most recent developments in the field, providing clinicians and researchers with a timely update. Additionally, it will discuss future trends, including applications of AI at multiple points of the typical MPI workflow to maximize clinical utility, and methods to maximize the information that can be obtained from hybrid imaging.
Affiliation(s)
- Robert J H Miller
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, Los Angeles, CA; Department of Cardiac Sciences, University of Calgary, Calgary, AB, Canada
- Piotr J Slomka
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, Los Angeles, CA.
2
Miller RJH, Slomka PJ. Current status and future directions in artificial intelligence for nuclear cardiology. Expert Rev Cardiovasc Ther 2024; 22:367-378. [PMID: 39001698] [DOI: 10.1080/14779072.2024.2380764]
Abstract
INTRODUCTION: Myocardial perfusion imaging (MPI) is one of the most commonly ordered cardiac imaging tests. Accurate motion correction, image registration, and reconstruction are critical for high-quality imaging, but these steps can be technically challenging and have traditionally relied on expert manual processing. With accurate processing, a rich variety of clinical, stress, functional, and anatomic data can be integrated to guide patient management.
AREAS COVERED: PubMed and Google Scholar were reviewed for articles related to artificial intelligence in nuclear cardiology published between 2020 and 2024. We outline the prominent roles for artificial intelligence (AI) solutions in motion correction, image registration, and reconstruction. We review the role of AI in extracting anatomic data for hybrid MPI, which is otherwise often neglected. Lastly, we discuss AI methods for integrating this wealth of data to improve disease diagnosis and risk stratification.
EXPERT OPINION: There is growing evidence that AI will transform the performance of MPI by automating and improving aspects of image acquisition and reconstruction. Physicians and researchers will need to understand the potential strengths of AI in order to benefit from the full clinical utility of MPI.
Affiliation(s)
- Robert J H Miller
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Department of Cardiac Sciences, University of Calgary, Calgary, Canada
- Piotr J Slomka
- Departments of Medicine (Division of Artificial Intelligence in Medicine), Biomedical Sciences, and Imaging, Cedars-Sinai Medical Center, Los Angeles, CA, USA
3
Wang J, Bermudez D, Chen W, Durgavarjhula D, Randell C, Uyanik M, McMillan A. Motion-correction strategies for enhancing whole-body PET imaging. Front Nucl Med 2024; 4:1257880. [PMID: 39118964] [PMCID: PMC11308502] [DOI: 10.3389/fnume.2024.1257880]
Abstract
Positron Emission Tomography (PET) is a powerful medical imaging technique widely used for detection and monitoring of disease. However, PET imaging can be adversely affected by patient motion, leading to degraded image quality and diagnostic capability. Hence, motion gating schemes have been developed to monitor various motion sources including head motion, respiratory motion, and cardiac motion. The approaches for these techniques have commonly come in the form of hardware-driven gating and data-driven gating, where the distinguishing aspect is the use of external hardware to make motion measurements vs. deriving these measures from the data itself. The implementation of these techniques helps correct for motion artifacts and improves tracer uptake measurements. With the great impact that these methods have on the diagnostic and quantitative quality of PET images, much research has been performed in this area, and this paper outlines the various approaches that have been developed as applied to whole-body PET imaging.
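The hardware-driven vs. data-driven distinction above can be made concrete with a small sketch: a data-driven method derives the respiratory surrogate from the acquired counts themselves rather than from an external device. The axial center-of-mass surrogate, the simulated breathing trace, and all names below are illustrative assumptions, not taken from this review.

```python
import numpy as np

def respiratory_signal(frames):
    """Axial center-of-mass of counts per time frame, used as a motion surrogate.

    frames: array of shape (T, Z) with counts summed over each axial slice.
    """
    z = np.arange(frames.shape[1])
    totals = frames.sum(axis=1)
    return (frames * z).sum(axis=1) / totals

def amplitude_gates(signal, n_gates):
    """Assign each frame to an equal-width amplitude bin (gate)."""
    edges = np.linspace(signal.min(), signal.max(), n_gates + 1)
    # digitize against the inner edges yields gate indices 0..n_gates-1
    return np.digitize(signal, edges[1:-1])

# Simulate a sinusoidal breathing pattern moving a hot slice up and down.
t = np.linspace(0, 10, 200)
center = 32 + 5 * np.sin(2 * np.pi * 0.25 * t)   # ~15 breaths per minute
frames = np.exp(-0.5 * ((np.arange(64) - center[:, None]) / 2.0) ** 2)

sig = respiratory_signal(frames)
gates = amplitude_gates(sig, n_gates=4)
print(sorted(set(gates.tolist())))  # -> [0, 1, 2, 3]
```

Frames within the same gate can then be reconstructed together, which is the basis of the amplitude-based gating schemes surveyed in the article.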
Affiliation(s)
- James Wang
- Department of Radiology, University of Wisconsin Madison, Madison, WI, United States
- Department of Medical Physics, University of Wisconsin Madison, Madison, WI, United States
- Dalton Bermudez
- Department of Medical Physics, University of Wisconsin Madison, Madison, WI, United States
- Weijie Chen
- Department of Radiology, University of Wisconsin Madison, Madison, WI, United States
- Department of Electrical and Computer Engineering, University of Wisconsin Madison, Madison, WI, United States
- Divya Durgavarjhula
- Department of Radiology, University of Wisconsin Madison, Madison, WI, United States
- Department of Computer Science, University of Wisconsin Madison, Madison, WI, United States
- Caitlin Randell
- Department of Radiology, University of Wisconsin Madison, Madison, WI, United States
- Department of Biomedical Engineering, University of Wisconsin Madison, Madison, WI, United States
- Meltem Uyanik
- Department of Radiology, University of Wisconsin Madison, Madison, WI, United States
- Department of Medical Physics, University of Wisconsin Madison, Madison, WI, United States
- Alan McMillan
- Department of Radiology, University of Wisconsin Madison, Madison, WI, United States
- Department of Medical Physics, University of Wisconsin Madison, Madison, WI, United States
- Department of Electrical and Computer Engineering, University of Wisconsin Madison, Madison, WI, United States
- Department of Biomedical Engineering, University of Wisconsin Madison, Madison, WI, United States
- Data Science Institute, University of Wisconsin Madison, Madison, WI, United States
4
Brosch-Lenz JF, Delker A, Schmidt F, Tran-Gia J. On the Use of Artificial Intelligence for Dosimetry of Radiopharmaceutical Therapies. Nuklearmedizin 2023; 62:379-388. [PMID: 37827503] [DOI: 10.1055/a-2179-6872]
Abstract
Routine clinical dosimetry alongside radiopharmaceutical therapies is key for future treatment personalization. However, dosimetry is considered complex and time-consuming, with various challenges among the required steps of the dosimetry workflow. The general workflow for image-based dosimetry consists of quantitative imaging, segmentation of organs and tumors, fitting of the time-activity curves, and conversion to absorbed dose. This work reviews the potential and advantages of using artificial intelligence to improve the speed and accuracy of every single step of the dosimetry workflow.
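The last two workflow steps named above, time-activity-curve fitting and conversion to absorbed dose, can be sketched numerically. This is a hedged illustration of a MIRD-style calculation (time-integrated activity multiplied by an S value); all numbers, including the S value, are invented for illustration and are not from the paper.

```python
import numpy as np

def fit_monoexponential(t_h, activity_mbq):
    """Log-linear least-squares fit of A(t) = A0 * exp(-lambda_eff * t).

    Returns (A0 in MBq, lambda_eff in 1/h)."""
    slope, intercept = np.polyfit(t_h, np.log(activity_mbq), 1)
    return np.exp(intercept), -slope

def absorbed_dose_gy(a0_mbq, lam_per_h, s_value_gy_per_mbq_h):
    """Time-integrated activity A0/lambda times an organ-level S value."""
    tia_mbq_h = a0_mbq / lam_per_h
    return tia_mbq_h * s_value_gy_per_mbq_h

# Synthetic measurements at 4, 24, 48, and 96 h for A0 = 100 MBq, T_eff = 36 h.
lam_true = np.log(2) / 36.0
t = np.array([4.0, 24.0, 48.0, 96.0])
a = 100.0 * np.exp(-lam_true * t)

a0, lam = fit_monoexponential(t, a)
dose = absorbed_dose_gy(a0, lam, s_value_gy_per_mbq_h=2.0e-4)
print(round(a0, 1), round(dose, 2))  # -> 100.0 1.04
```

In practice each of these steps (segmentation, curve fitting, dose conversion) is a candidate for the AI acceleration the review discusses.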
Affiliation(s)
- Astrid Delker
- Department of Nuclear Medicine, LMU University Hospital, Munich, Germany
- Fabian Schmidt
- Department of Nuclear Medicine and Clinical Molecular Imaging, University Hospital Tuebingen, Tuebingen, Germany
- Department of Preclinical Imaging and Radiopharmacy, Werner Siemens Imaging Center, Tuebingen, Germany
- Johannes Tran-Gia
- Department of Nuclear Medicine, University Hospital Wuerzburg, Wuerzburg, Germany
5
Hellwig D, Hellwig NC, Boehner S, Fuchs T, Fischer R, Schmidt D. Artificial Intelligence and Deep Learning for Advancing PET Image Reconstruction: State-of-the-Art and Future Directions. Nuklearmedizin 2023; 62:334-342. [PMID: 37995706] [PMCID: PMC10689088] [DOI: 10.1055/a-2198-0358]
Abstract
Positron emission tomography (PET) is vital for diagnosing diseases and monitoring treatments. Conventional image reconstruction (IR) techniques like filtered backprojection and iterative algorithms are powerful but face limitations. PET IR can be seen as an image-to-image translation. Artificial intelligence (AI) and deep learning (DL) using multilayer neural networks enable a new approach to this computer vision task. This review aims to provide mutual understanding for nuclear medicine professionals and AI researchers. We outline fundamentals of PET imaging as well as state-of-the-art in AI-based PET IR with its typical algorithms and DL architectures. Advances improve resolution and contrast recovery, reduce noise, and remove artifacts via inferred attenuation and scatter correction, sinogram inpainting, denoising, and super-resolution refinement. Kernel priors support list-mode reconstruction, motion correction, and parametric imaging. Hybrid approaches combine AI with conventional IR. Challenges of AI-assisted PET IR include availability of training data, cross-scanner compatibility, and the risk of hallucinated lesions. The need for rigorous evaluations, including quantitative phantom validation and visual comparison of diagnostic accuracy against conventional IR, is highlighted along with regulatory issues. The first approved AI-based applications are clinically available, and their impact is foreseeable. Emerging trends, such as the integration of multimodal imaging and the use of data from previous imaging visits, highlight future potentials. Continued collaborative research promises significant improvements in image quality, quantitative accuracy, and diagnostic performance, ultimately leading to the integration of AI-based IR into routine PET imaging protocols.
Affiliation(s)
- Dirk Hellwig
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Nils Constantin Hellwig
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Steven Boehner
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Timo Fuchs
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Regina Fischer
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Daniel Schmidt
- Department of Nuclear Medicine, University Hospital Regensburg, Regensburg, Germany
6
Jimenez-Mesa C, Arco JE, Martinez-Murcia FJ, Suckling J, Ramirez J, Gorriz JM. Applications of machine learning and deep learning in SPECT and PET imaging: General overview, challenges and future prospects. Pharmacol Res 2023; 197:106984. [PMID: 37940064] [DOI: 10.1016/j.phrs.2023.106984]
Abstract
The integration of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging techniques with machine learning (ML) algorithms, including deep learning (DL) models, is a promising approach. This integration enhances the precision and efficiency of current diagnostic and treatment strategies while offering invaluable insights into disease mechanisms. In this comprehensive review, we delve into the transformative impact of ML and DL in this domain. Firstly, a brief analysis is provided of how these algorithms have evolved and which are the most widely applied in this domain. Their different potential applications in nuclear imaging are then discussed, such as optimization of image acquisition or reconstruction, biomarker identification, multimodal fusion, and the development of diagnostic, prognostic, and disease progression evaluation systems. This is because these algorithms are able to analyse complex patterns and relationships within imaging data, as well as extract quantitative and objective measures. Furthermore, we discuss the challenges in implementation, such as data standardization and limited sample sizes, and explore the clinical opportunities and future horizons, including data augmentation and explainable AI. Together, these factors are propelling the continuous advancement of more robust, transparent, and reliable systems.
Affiliation(s)
- Carmen Jimenez-Mesa
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan E Arco
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Communications Engineering, University of Malaga, 29010, Spain
- John Suckling
- Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
- Javier Ramirez
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan Manuel Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK.
7
Yu X, He L, Wang Y, Dong Y, Song Y, Yuan Z, Yan Z, Wang W. A deep learning approach for automatic tumor delineation in stereotactic radiotherapy for non-small cell lung cancer using diagnostic PET-CT and planning CT. Front Oncol 2023; 13:1235461. [PMID: 37601687] [PMCID: PMC10437048] [DOI: 10.3389/fonc.2023.1235461]
Abstract
Introduction: Accurate delineation of tumor targets is crucial for stereotactic body radiation therapy (SBRT) for non-small cell lung cancer (NSCLC). This study aims to develop a deep learning-based segmentation approach to accurately and efficiently delineate NSCLC targets using diagnostic PET-CT and SBRT planning CT (pCT).
Methods: The diagnostic PET was registered to the pCT using the transform matrix obtained from registering the diagnostic CT to the pCT. We proposed a 3D-UNet-based segmentation method to segment NSCLC tumor targets on dual-modality PET-pCT images. This network contained squeeze-and-excitation and residual blocks in each convolutional block to perform dynamic channel-wise feature recalibration. Furthermore, up-sampling paths were added to supplement low-resolution features to the model and to compute the overall loss function. The Dice similarity coefficient (DSC), precision, recall, and average symmetric surface distance were used to assess the performance of the proposed approach on 86 pairs of diagnostic PET and pCT images. The proposed model using dual-modality images was compared with both the conventional 3D-UNet architecture and single-modality image input.
Results: The average DSC of the proposed model with both PET and pCT images was 0.844, compared with 0.795 and 0.827 when using 3D-UNet and nnU-Net, respectively. It also outperformed using either pCT or PET alone with the same network, which yielded DSCs of 0.823 and 0.732, respectively.
Discussion: The proposed segmentation approach is therefore able to outperform the current 3D-UNet network with diagnostic PET and pCT images; the integration of the two image modalities helps improve segmentation accuracy.
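The overlap metrics quoted above can be illustrated on toy binary masks. The following is a generic sketch of the Dice similarity coefficient, precision, and recall for boolean arrays; it is not the authors' evaluation code.

```python
import numpy as np

def dice(pred, truth):
    """DSC = 2|P intersect T| / (|P| + |T|) for boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def precision_recall(pred, truth):
    """Precision = TP/|P|, recall = TP/|T|."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    return tp / pred.sum(), tp / truth.sum()

# Two shifted 4x4 squares on an 8x8 grid: 16 voxels each, 9 voxels overlap.
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 3:7] = True
print(round(dice(pred, truth), 4))  # -> 0.5625
```

A DSC of 0.844, as reported for the dual-modality model, would correspond to a substantially larger overlap fraction than this toy example.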
Affiliation(s)
- Xuyao Yu
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Tianjin Medical University, Tianjin, China
- Lian He
- Perception Vision Medical Technologies Co Ltd, Guangzhou, China
- Yuwen Wang
- Department of Radiotherapy, Tianjin Cancer Hospital Airport Hospital, Tianjin, China
- Yang Dong
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Yongchun Song
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Zhiyong Yuan
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Ziye Yan
- Perception Vision Medical Technologies Co Ltd, Guangzhou, China
- Wei Wang
- Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin’s Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
8
Pierre K, Haneberg AG, Kwak S, Peters KR, Hochhegger B, Sananmuang T, Tunlayadechanont P, Tighe PJ, Mancuso A, Forghani R. Applications of Artificial Intelligence in the Radiology Roundtrip: Process Streamlining, Workflow Optimization, and Beyond. Semin Roentgenol 2023; 58:158-169. [PMID: 37087136] [DOI: 10.1053/j.ro.2023.02.003]
Abstract
There are many impactful applications of artificial intelligence (AI) in the electronic radiology roundtrip and the patient's journey through the healthcare system that go beyond diagnostic applications. These tools have the potential to improve quality and safety, optimize workflow, increase efficiency, and improve patient satisfaction. In this article, we review the role of AI in process improvement and workflow enhancement, covering applications from the time of order entry and scan acquisition, applications supporting the image interpretation task, and applications supporting tasks after image interpretation, such as result communication. These non-diagnostic workflow and process optimization tasks are an important part of the arsenal of potential AI tools that can streamline day-to-day clinical practice and patient care.
Affiliation(s)
- Kevin Pierre
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Adam G Haneberg
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Division of Medical Physics, Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Sean Kwak
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL
- Keith R Peters
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Bruno Hochhegger
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Thiparom Sananmuang
- Department of Diagnostic and Therapeutic Radiology and Research, Faculty of Medicine Ramathibodi Hospital, Ratchathewi, Bangkok, Thailand
- Padcha Tunlayadechanont
- Department of Diagnostic and Therapeutic Radiology and Research, Faculty of Medicine Ramathibodi Hospital, Ratchathewi, Bangkok, Thailand
- Patrick J Tighe
- Departments of Anesthesiology & Orthopaedic Surgery, University of Florida College of Medicine, Gainesville, FL
- Anthony Mancuso
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL
- Reza Forghani
- Radiomics and Augmented Intelligence Laboratory (RAIL), Department of Radiology and the Norman Fixel Institute for Neurological Diseases, University of Florida College of Medicine, Gainesville, FL; Department of Radiology, University of Florida College of Medicine, Gainesville, FL; Division of Medical Physics, Department of Radiology, University of Florida College of Medicine, Gainesville, FL.
9
Dai J, Wang H, Xu Y, Chen X, Tian R. Clinical application of AI-based PET images in oncological patients. Semin Cancer Biol 2023; 91:124-142. [PMID: 36906112] [DOI: 10.1016/j.semcancer.2023.03.005]
Abstract
Owing to its ability to reveal the functional status and molecular expression of tumor cells, positron emission tomography (PET) imaging has been performed in numerous types of malignant disease for diagnosis and monitoring. However, insufficient image quality, the lack of a convincing evaluation tool, and intra- and interobserver variation in human reading are well-known limitations of nuclear medicine imaging and restrict its clinical application. Artificial intelligence (AI) has gained increasing interest in the field of medical imaging due to its powerful information collection and interpretation ability. The combination of AI and PET imaging can potentially provide great assistance to physicians managing patients. Radiomics, an important branch of AI applied in medical imaging, can extract hundreds of abstract mathematical features of images for further analysis. In this review, an overview of the applications of AI in PET imaging is provided, focusing on image enhancement, tumor detection, response and prognosis prediction, and correlation analyses with pathology or specific gene mutations in several types of tumors. Our aim is to describe recent clinical applications of AI-based PET imaging in malignant diseases and to outline possible future developments.
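As an illustration of the kind of quantitative measures a radiomics pipeline extracts, the sketch below computes three common first-order features from the voxels inside a region of interest. The feature set, the synthetic image, and the function name are assumptions for illustration only, not the review's methodology.

```python
import numpy as np

def first_order_features(image, mask, bins=16):
    """Three first-order radiomics-style features from voxels inside a mask."""
    vox = image[mask.astype(bool)]
    hist, _ = np.histogram(vox, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(vox.mean()),
        "skewness": float(((vox - vox.mean()) ** 3).mean() / vox.std() ** 3),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram (Shannon) entropy
    }

# Synthetic 3D image with a cubic ROI of normally distributed intensities.
rng = np.random.default_rng(0)
img = rng.normal(5.0, 1.0, size=(16, 16, 16))
roi = np.zeros_like(img, dtype=bool); roi[4:12, 4:12, 4:12] = True
feats = first_order_features(img, roi)
print(sorted(feats))  # -> ['entropy', 'mean', 'skewness']
```

Full radiomics toolkits add shape and texture (e.g. co-occurrence) features on top of such first-order statistics, yielding the "hundreds of features" mentioned above.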
Affiliation(s)
- Jiaona Dai
- Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Hui Wang
- Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Yuchao Xu
- School of Nuclear Science and Technology, University of South China, Hengyang City 421001, China
- Xiyang Chen
- Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Chengdu 610041, China.
- Rong Tian
- Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China.
10
Miller RJ. Artificial Intelligence in Nuclear Cardiology. Cardiol Clin 2023; 41:151-161. [PMID: 37003673] [DOI: 10.1016/j.ccl.2023.01.004]
Abstract
Artificial intelligence (AI) encompasses a variety of computer algorithms that have a wide range of potential clinical applications in nuclear cardiology. This article will introduce core terminology and concepts for AI including classifications of AI as well as training and testing regimens. We will then highlight the potential role for AI to improve image registration and image quality. Next, we will discuss methods for AI-driven image attenuation correction. Finally, we will review advancements in machine learning and deep-learning applications for disease diagnosis and risk stratification, including efforts to improve clinical translation of this valuable technology with explainable AI models.
11
Martinez-Movilla A, Mix M, Torres-Espallardo I, Teijeiro E, Bello P, Baltas D, Martí-Bonmatí L, Carles M. Comparison of protocols with respiratory-gated (4D) motion compensation in PET/CT: open-source package for quantification of phantom image quality. EJNMMI Phys 2022; 9:80. [PMID: 36394640] [PMCID: PMC9672236] [DOI: 10.1186/s40658-022-00509-4]
Abstract
BACKGROUND: Patients' breathing affects the quality of chest images acquired in positron emission tomography/computed tomography (PET/CT) studies. Movement correction is required to optimize PET quantification in clinical settings. We present a reproducible methodology to compare the impact of different movement-compensation protocols on PET image quality. Static phantom images were set as reference values, and recovery coefficients (RCs) were calculated from motion-compensated images of the phantoms in respiratory movement. Image quality was evaluated in terms of: (1) volume accuracy (VA) with the NEMA phantom; (2) concentration accuracy (CA) with six refillable inserts within the electron density CIRS phantom; and (3) spatial resolution (R) with the Jaszczak phantom. Three different respiratory patterns were applied to the phantoms. We developed an open-source package to automatically analyze VA, CA, and R, and compared 10 different movement-compensation protocols available in the Philips Gemini TF-64 PET/CT (4-, 6-, 8- and 10-time bins; 20%-, 30%-, and 40%-window width in inhale and exhale).
RESULTS: The homemade package provided RC values for VA, CA, and R of 102 PET images in less than 5 min. The comparison of the 10 different protocols demonstrated the feasibility of the proposed method for quantifying the variations observed qualitatively. Overall, prospective protocols showed better motion compensation than retrospective ones. The best performance was obtained for the protocol Exhale 30% (0.3 s after the maximum exhale position and a window width of 30%), with RC[Formula: see text], RC[Formula: see text] and RC[Formula: see text]. Among retrospective protocols, the 8 Phase protocol showed the best performance.
CONCLUSION: We provide an open-source package able to automatically evaluate the impact of motion-compensation methods on PET image quality. A setup based on commonly available experimental phantoms is recommended. Its application to the comparison of 10 time-based approaches showed that the Exhale 30% protocol had the best performance. The proposed framework is not specific to the phantoms and protocols presented in this study.
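The comparison logic described above (recovery coefficients relative to a static reference, with protocols ranked by closeness to full recovery) can be sketched as follows. All numbers and protocol values are invented for illustration; they are not the paper's measurements.

```python
def recovery_coefficients(gated, static):
    """RC = motion-compensated value / static reference value; 1.0 = full recovery."""
    return {k: gated[k] / static[k] for k in static}

def rc_distance(rcs):
    """Aggregate deviation of all RCs from the ideal value 1."""
    return sum(abs(v - 1.0) for v in rcs.values())

# Static-phantom reference and two hypothetical gating protocols.
static = {"VA": 20.0, "CA": 10.0, "R": 8.0}
protocols = {
    "Exhale 30%": {"VA": 19.0, "CA": 9.6, "R": 8.4},
    "8 Phase":    {"VA": 17.5, "CA": 9.0, "R": 9.2},
}

ranked = sorted(
    protocols,
    key=lambda p: rc_distance(recovery_coefficients(protocols[p], static)),
)
print(ranked[0])  # -> Exhale 30%
```

The study's package applies the same idea per metric (VA, CA, R) across 102 images, which is what makes an automated quantitative comparison of the 10 protocols practical.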
Affiliation(s)
- Andrea Martinez-Movilla
- Biomedical Imaging Research Group (GIBI230-PREBI) and Imaging La Fe node at Distributed Network for Biomedical Imaging (ReDIB), Unique Scientific and Technical Infrastructures (ICTS), La Fe Health Research Institute, Valencia, Spain
- Michael Mix
- Department of Nuclear Medicine, University Medical Center Freiburg, Faculty of Medicine, 79106, Freiburg, Germany
- Irene Torres-Espallardo
- Department of Nuclear Medicine, Medical Imaging Clinical Area, La Fe University and Polytechnic Hospital, 46026, Valencia, Spain
- Elena Teijeiro
- Department of Nuclear Medicine, Medical Imaging Clinical Area, La Fe University and Polytechnic Hospital, 46026, Valencia, Spain
- Pilar Bello
- Department of Nuclear Medicine, Medical Imaging Clinical Area, La Fe University and Polytechnic Hospital, 46026, Valencia, Spain
- Dimos Baltas
- Department of Radiation Oncology, Division of Medical Physics, University Medical Center Freiburg, Faculty of Medicine, 79106, Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
| | - Luis Martí-Bonmatí
- Biomedical Imaging Research Group (GIBI230-PREBI) and Imaging La Fe node at Distributed Network for Biomedical Imaging (ReDIB), Unique Scientific and Technical Infrastructures (ICTS), La Fe Health Research Institute, Valencia, Spain
| | - Montserrat Carles
- Biomedical Imaging Research Group (GIBI230-PREBI) and Imaging La Fe node at Distributed Network for Biomedical Imaging (ReDIB), Unique Scientific and Technical Infrastructures (ICTS), La Fe Health Research Institute, Valencia, Spain.
12
Fourcade C, Ferrer L, Moreau N, Santini G, Brennan A, Rousseau C, Lacombe M, Fleury V, Colombié M, Jézéquel P, Rubeaux M, Mateus D. Deformable image registration with deep network priors: a study on longitudinal PET images. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac7e17]
Abstract
Objective. This paper proposes a novel approach for the longitudinal registration of PET images acquired for the monitoring of patients with metastatic breast cancer. Unlike in other image analysis tasks, the use of deep learning (DL) has not significantly improved the performance of image registration. With this work, we propose a new registration approach to bridge the performance gap between conventional and DL-based methods: medical image registration method regularized by architecture (MIRRBA). Approach. MIRRBA is a subject-specific deformable registration method that relies on a deep pyramidal architecture to parametrize the deformation field. Diverging from the usual deep-learning paradigms, MIRRBA does not require a learning database, only the pair of images to be registered, which is used to optimize the network's parameters. We applied MIRRBA to a private dataset of 110 whole-body PET images of patients with metastatic breast cancer. We used different architecture configurations to produce the deformation field and studied the results obtained. We also compared our method to several standard registration approaches: two conventional iterative registration methods (ANTs and Elastix) and two supervised DL-based models (LapIRN and Voxelmorph). Registration accuracy was evaluated using the Dice score, the target registration error, the average Hausdorff distance and the detection rate, while the realism of the obtained registration was evaluated using the determinant of the Jacobian. The ability of the different methods to shrink disappearing lesions was also quantified with the disappearing rate. Main results. MIRRBA significantly improved all metrics compared to the DL-based approaches: the organ and lesion Dice scores of Voxelmorph improved by 6% and 52% respectively, while those of LapIRN increased by 5% and 65%. Compared with the conventional approaches, MIRRBA presented comparable results, showing the feasibility of our method. Significance. In this paper, we also demonstrate the regularizing power of deep architectures and present new elements for understanding the role of the architecture in DL methods used for registration.
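Two of the metrics used in this evaluation are easy to make concrete. A hedged sketch (not the authors' code) of the Dice score and of the Jacobian determinant used to flag unrealistic deformations, for a 2D displacement field:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect overlap)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jacobian_determinant_2d(disp: np.ndarray) -> np.ndarray:
    """Determinant of the Jacobian of the mapping x + u(x) for a 2D
    displacement field disp[..., 2]; values <= 0 indicate folding."""
    du0 = np.gradient(disp[..., 0])  # gradients along axis 0 and axis 1
    du1 = np.gradient(disp[..., 1])
    j11 = 1.0 + du0[0]; j12 = du0[1]     # identity plus displacement
    j21 = du1[0];       j22 = 1.0 + du1[1]
    return j11 * j22 - j12 * j21

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True   # toy segmentation masks
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
zero_disp = np.zeros((8, 8, 2))                  # identity deformation
```

For the identity deformation the Jacobian determinant is 1 everywhere, the expected value for a volume-preserving, fold-free field.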
13
Guo X, Zhou B, Pigg D, Spottiswoode B, Casey ME, Liu C, Dvornek NC. Unsupervised inter-frame motion correction for whole-body dynamic PET using convolutional long short-term memory in a convolutional neural network. Med Image Anal 2022; 80:102524. [PMID: 35797734] [PMCID: PMC10923189] [DOI: 10.1016/j.media.2022.102524]
Abstract
Subject motion in whole-body dynamic PET introduces inter-frame mismatch and seriously impacts parametric imaging. Traditional non-rigid registration methods are generally computationally intense and time-consuming. Deep learning approaches promise high accuracy at fast speed, but have not yet been investigated with consideration for tracer distribution changes or at the whole-body scope. In this work, we developed an unsupervised, automatic, deep learning-based framework to correct inter-frame body motion. The motion estimation network is a convolutional neural network with a combined convolutional long short-term memory layer, fully utilizing dynamic temporal features and spatial information. Our dataset contains 27 subjects, each undergoing a 90-min FDG whole-body dynamic PET scan. In motion simulation studies and a 9-fold cross-validation on the human subject dataset, compared with both traditional and deep learning baselines, the proposed network achieved the lowest motion prediction error, obtained superior qualitative and quantitative spatial alignment between parametric Ki and Vb images, and significantly reduced the parametric fitting error. We also showed the potential of the proposed motion correction method to impact downstream analysis of the estimated parametric images, improving the ability to distinguish malignant from benign hypermetabolic regions of interest. Once trained, the motion estimation inference of our proposed network was around 460 times faster than the conventional registration baseline, showing its potential for easy application in clinical settings.
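The network above estimates dense non-rigid motion between frames; as a much simpler illustration of the underlying idea of inter-frame motion estimation, a rigid 1D shift between two frames can be recovered from the peak of their cross-correlation. This is a toy example only, not the proposed method:

```python
import numpy as np

def estimate_shift_1d(reference: np.ndarray, moving: np.ndarray) -> int:
    """Integer shift s such that np.roll(moving, s) best aligns with
    reference, found as the argmax of the circular cross-correlation
    computed via the FFT."""
    corr = np.fft.ifft(np.fft.fft(reference) * np.conj(np.fft.fft(moving))).real
    lag = int(np.argmax(corr))
    n = len(reference)
    return lag if lag <= n // 2 else lag - n  # map to signed shift

frame0 = np.zeros(64); frame0[20:30] = 1.0   # "reference" activity profile
frame1 = np.roll(frame0, 5)                  # later frame, shifted by +5
shift = estimate_shift_1d(frame0, frame1)    # recovered inter-frame motion
aligned = np.roll(frame1, shift)             # motion-compensated frame
```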
Affiliation(s)
- Xueqi Guo
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- David Pigg
- Siemens Medical Solutions USA, Inc., Knoxville, TN, 37932, USA
- Michael E Casey
- Siemens Medical Solutions USA, Inc., Knoxville, TN, 37932, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Nicha C Dvornek
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
14
Zou Q, Torres LA, Fain SB, Higano NS, Bates AJ, Jacob M. Dynamic imaging using motion-compensated smoothness regularization on manifolds (MoCo-SToRM). Phys Med Biol 2022; 67. [PMID: 35714617] [PMCID: PMC9677930] [DOI: 10.1088/1361-6560/ac79fc]
Abstract
Objective. We introduce an unsupervised motion-compensated reconstruction scheme for high-resolution free-breathing pulmonary magnetic resonance imaging. Approach. We model the image frames in the time series as deformed versions of a 3D template image volume. We assume the deformation maps to be points on a smooth manifold in high-dimensional space. Specifically, we model the deformation map at each time instant as the output of a CNN-based generator, with weights shared across all time-frames, driven by a low-dimensional latent vector. The time series of latent vectors accounts for the dynamics in the dataset, including respiratory motion and bulk motion. The template image volume, the parameters of the generator, and the latent vectors are learned directly from the k-t space data in an unsupervised fashion. Main results. Our experimental results show improved reconstructions compared to state-of-the-art methods, especially in the context of bulk motion during the scans. Significance. The proposed unsupervised motion-compensated scheme jointly estimates the latent vectors that capture the motion dynamics, the corresponding deformation maps, and the reconstructed motion-compensated images from the raw k-t space data of each subject. Unlike current motion-resolved strategies, the proposed scheme is more robust to bulk motion events during the scan.
Affiliation(s)
- Qing Zou
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
- Luis A. Torres
- Department of Medical Physics, University of Wisconsin, Madison, WI, USA
- Sean B. Fain
- Department of Radiology, The University of Iowa, Iowa City, IA, USA
- Nara S. Higano
- Center for Pulmonary Imaging Research, Division of Pulmonary Medicine and Department of Radiology, Cincinnati Children's Hospital, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati, Cincinnati, OH, USA
- Alister J. Bates
- Center for Pulmonary Imaging Research, Division of Pulmonary Medicine and Department of Radiology, Cincinnati Children's Hospital, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati, Cincinnati, OH, USA
- Mathews Jacob
- Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
15
Total-body pediatric PET is ready for prime time. Eur J Nucl Med Mol Imaging 2022; 49:3624-3626. [PMID: 35723695] [DOI: 10.1007/s00259-022-05873-y]
16
Grootjans W, Rietbergen DDD, van Velden FHP. Added Value of Respiratory Gating in Positron Emission Tomography for the Clinical Management of Lung Cancer Patients. Semin Nucl Med 2022; 52:745-758. [DOI: 10.1053/j.semnuclmed.2022.04.006]
17
Li T, Zhang M, Qi W, Asma E, Qi J. Deep Learning Based Joint PET Image Reconstruction and Motion Estimation. IEEE Trans Med Imaging 2022; 41:1230-1241. [PMID: 34928789] [PMCID: PMC9064915] [DOI: 10.1109/tmi.2021.3136553]
Abstract
Respiratory motion is one of the main sources of motion artifacts in positron emission tomography (PET) imaging. The emission image and patient motion can be estimated simultaneously from respiratory gated data through a joint estimation framework. However, conventional motion estimation methods based on registration of a pair of images are sensitive to noise. The goal of this study is to develop a robust joint estimation method that incorporates a deep learning (DL)-based image registration approach for motion estimation. We propose a joint estimation framework by incorporating a learned image registration network into a regularized PET image reconstruction. The joint estimation was formulated as a constrained optimization problem with moving gated images related to a fixed image via the deep neural network. The constrained optimization problem is solved by the alternating direction method of multipliers (ADMM) algorithm. The effectiveness of the algorithm was demonstrated using simulated and real data. We compared the proposed DL-ADMM joint estimation algorithm with a monotonic iterative joint estimation. Motion compensated reconstructions using pre-calculated deformation fields by DL-based (DL-MC recon) and iterative (iterative-MC recon) image registration were also included for comparison. Our simulation study shows that the proposed DL-ADMM joint estimation method reduces bias compared to the ungated image without increasing noise and outperforms the competing methods. In the real data study, our proposed method also generated higher lesion contrast and sharper liver boundaries compared to the ungated image and had lower noise than the reference gated image.
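The ADMM machinery referenced here alternates between sub-problems that are each easy to solve, plus a dual update. A generic toy instance (an l1-regularized denoising problem, not the paper's reconstruction objective) shows the characteristic three-step iteration:

```python
import numpy as np

def soft_threshold(v: np.ndarray, t: float) -> np.ndarray:
    """Proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(a: np.ndarray, lam=1.0, rho=1.0, iters=200) -> np.ndarray:
    """Solve min_x 0.5*||x - a||^2 + lam*||x||_1 via the splitting x = z,
    alternating the x-, z-, and dual (u) updates of ADMM."""
    x = np.zeros_like(a); z = np.zeros_like(a); u = np.zeros_like(a)
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # quadratic subproblem
        z = soft_threshold(x + u, lam / rho)    # proximal step for l1 term
        u = u + x - z                           # dual ascent on x = z
    return z

a = np.array([3.0, -0.5, 1.2])
x_hat = admm_l1(a)   # this toy problem has closed form soft_threshold(a, 1.0)
```

In the paper's setting the two subproblems are instead a regularized PET reconstruction and a network-constrained registration, but the alternation has the same shape.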
18
Yuan N, Rao S, Chen Q, Sensoy L, Qi J, Rong Y. Head and neck synthetic CT generated from ultra-low-dose cone-beam CT following Image Gently Protocol using deep neural network. Med Phys 2022; 49:3263-3277. [PMID: 35229904] [DOI: 10.1002/mp.15585]
Abstract
PURPOSE Image guidance is used to improve the accuracy of radiation therapy delivery but results in increased dose to patients. This is of particular concern in children, who need to be treated per Pediatric Image Gently Protocols due to the long-term risks of radiation exposure. The purpose of this study is to design a deep neural network (DNN) architecture and loss function that improve soft-tissue contrast and preserve small anatomical features in ultra-low-dose cone-beam CT (CBCT) for head and neck cancer (HNC) imaging. METHODS A 2-D compound U-Net architecture (modified U-Net++) with different depths was proposed to enhance the network's capability of capturing small-volume structures. A mask-weighted loss function (Mask-Loss) was applied to enhance soft-tissue contrast. Fifty-five paired CBCT and CT images of HNC patients were retrospectively collected for network training and testing. The output enhanced CBCT images from the present study were evaluated with quantitative metrics including mean absolute error (MAE), signal-to-noise ratio (SNR), and structural similarity (SSIM), and compared with those from previously proposed network architectures (U-Net and wide U-Net) using MAE loss functions. A visual assessment of ten selected structures in the enhanced CBCT images of each patient was performed to evaluate image quality improvement, blindly scored by an experienced radiation oncologist specializing in HN cancer. RESULTS All the enhanced CBCT images showed reduced artifactual distortion and image noise. U-Net++ outperformed the U-Net and wide U-Net in terms of MAE, contrast near structure boundaries, and small structures. The proposed Mask-Loss improved image contrast and accuracy in the soft-tissue regions. The enhanced CBCT images predicted by U-Net++ with Mask-Loss demonstrated improvement over the U-Net in terms of average MAE (52.41 vs. 42.85 HU), SNR (14.14 vs. 15.07 dB), and SSIM (0.84 vs. 0.87), respectively (p < 0.01 in all paired t-tests). The visual assessment showed that the proposed U-Net++ and Mask-Loss significantly improved on the original CBCTs (p < 0.01), compared to the U-Net and MAE loss. CONCLUSIONS The proposed network architecture and loss function effectively improved image quality in terms of soft-tissue contrast, organ boundaries, and small-structure preservation for ultra-low-dose CBCT following the Image Gently Protocol. This method has the potential to provide sufficient anatomical representation in the enhanced CBCT images for accurate treatment delivery and potentially fast online-adaptive re-planning for HNC patients.
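The quantitative metrics reported above are standard and simple to reproduce. A sketch of MAE and one common SNR definition, on synthetic values (SSIM needs windowed local statistics and is omitted here):

```python
import numpy as np

def mae(pred: np.ndarray, ref: np.ndarray) -> float:
    """Mean absolute error; in HU when the inputs are CT-like images."""
    return float(np.mean(np.abs(pred - ref)))

def snr_db(pred: np.ndarray, ref: np.ndarray) -> float:
    """One common SNR definition: reference power over error power, in dB.
    (Published papers vary in the exact definition used.)"""
    err = np.mean((pred - ref) ** 2)
    return float(10.0 * np.log10(np.mean(ref ** 2) / err))

ref = np.full((4, 4), 100.0)   # synthetic "planning CT" patch
pred = ref + 1.0               # synthetic "enhanced CBCT" patch, off by 1 HU
```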
Affiliation(s)
- Nimu Yuan
- Department of Biomedical Engineering, University of California, Davis, CA, 95616, United States
- Shyam Rao
- Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, 95817, United States
- Quan Chen
- Department of Radiation Oncology, University of Kentucky, Lexington, KY, 40536, United States
- Levent Sensoy
- Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, 95817, United States
- Jinyi Qi
- Department of Biomedical Engineering, University of California, Davis, CA, 95616, United States
- Yi Rong
- Department of Radiation Oncology, University of California Davis Medical Center, Sacramento, CA, 95817, United States; Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ, 85054, United States
19
Sun T, Wu Y, Bai Y, Wang Z, Shen C, Wang W, Li C, Hu Z, Liang D, Liu X, Zheng H, Yang Y, Wang M. An iterative image-based inter-frame motion compensation method for dynamic brain PET imaging. Phys Med Biol 2022; 67. [PMID: 35021156] [DOI: 10.1088/1361-6560/ac4a8f]
Abstract
As a non-invasive imaging tool, positron emission tomography (PET) plays an important role in brain science and disease research. Dynamic acquisition is one mode of brain PET imaging. Its wide application in clinical research has often been hindered by practical challenges, such as involuntary patient movement, which can degrade both image quality and the accuracy of quantification. This is even more pronounced in scans of patients with neurodegeneration or mental disorders. Conventional motion compensation methods, based either on images or on raw measured data, have been shown to reduce the effect of motion on image quality. For a dynamic PET scan, motion compensation can be challenging because tracer kinetics and relatively high noise are present in the dynamic frames. In this work, we propose an image-based inter-frame motion compensation approach specifically designed for dynamic brain PET imaging. Our method has an iterative implementation that requires only reconstructed images, from which the inter-frame subject movement can be estimated and compensated. The method utilizes tracer-specific kinetic modelling and can deal with both simple and complex movement patterns. The synthesized phantom study showed that the proposed method can compensate for the simulated motion in scans with 18F-FDG, 18F-Fallypride and 18F-AV45. Fifteen dynamic 18F-FDG patient scans with motion artifacts were also processed. The quality of the recovered images was superior to that of the non-corrected images and of images corrected with other image-based methods. The proposed method enables retrospective image quality control for dynamic brain PET imaging, hence facilitating the application of dynamic PET in clinical practice and research.
Affiliation(s)
- Tao Sun
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Yaping Wu
- Henan Provincial People's Hospital and the People's Hospital of Zhengzhou, University of Zhengzhou, People's Republic of China
- Yan Bai
- Henan Provincial People's Hospital and the People's Hospital of Zhengzhou, University of Zhengzhou, People's Republic of China
- Zhenguo Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Chushu Shen
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Wei Wang
- United Imaging Healthcare, Shanghai, People's Republic of China
- Chenwei Li
- United Imaging Healthcare, Shanghai, People's Republic of China
- Zhanli Hu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Xin Liu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Yongfeng Yang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Meiyun Wang
- Henan Provincial People's Hospital and the People's Hospital of Zhengzhou, University of Zhengzhou, People's Republic of China
20
Lamare F, Bousse A, Thielemans K, Liu C, Merlin T, Fayad H, Visvikis D. PET respiratory motion correction: quo vadis? Phys Med Biol 2021; 67. [PMID: 34915465] [DOI: 10.1088/1361-6560/ac43fc]
Abstract
Positron emission tomography (PET) respiratory motion correction has been a subject of great interest for the last twenty years, prompted mainly by the development of multimodality imaging devices such as PET/computed tomography (CT) and PET/magnetic resonance imaging (MRI). PET respiratory motion correction involves a number of steps, including acquisition synchronization, motion estimation and, finally, motion correction. The synchronization step can use various external device systems or data-driven approaches, which have been gaining ground over the last few years. Patient-specific or generic motion models using the respiratory-synchronized datasets can subsequently be derived and used for correction, either in the image space or within the image reconstruction process. Similar overall approaches can be considered, and have been proposed, for both PET/CT and PET/MRI devices. Variations in the case of PET/MRI include the use of MRI-specific sequences for the registration of respiratory motion information. This review provides comprehensive coverage of these areas of development in the field of PET respiratory motion correction for different multimodality imaging devices, in terms of synchronization, estimation and subsequent motion correction. Finally, a section on perspectives, including the potential clinical usage of these approaches, is included.
Affiliation(s)
- Frederic Lamare
- Nuclear Medicine Department, University Hospital Centre Bordeaux Hospital Group South, Bordeaux, Nouvelle-Aquitaine, 33604, France
- Alexandre Bousse
- LaTIM, INSERM UMR1101, Université de Bretagne Occidentale, Brest, Bretagne, 29285, France
- Kris Thielemans
- University College London Institute of Nuclear Medicine, UCL Hospital, Tower 5, 235 Euston Road, London, NW1 2BU, United Kingdom
- Chi Liu
- Department of Diagnostic Radiology, Yale University School of Medicine Department of Radiology and Biomedical Imaging, PO Box 208048, 801 Howard Avenue, New Haven, Connecticut, 06520-8042, United States
- Thibaut Merlin
- LaTIM, INSERM UMR1101, Universite de Bretagne Occidentale, Brest, Bretagne, 29285, France
- Hadi Fayad
- Weill Cornell Medicine - Qatar, Doha, Qatar
- Dimitris Visvikis
- LaTIM, UMR1101, Universite de Bretagne Occidentale, INSERM, Brest, Bretagne, 29285, France
21
Mohammadi I, Castro IF, Rahmim A, Veloso JFCA. Motion in nuclear cardiology imaging: types, artifacts, detection and correction techniques. Phys Med Biol 2021; 67. [PMID: 34826826] [DOI: 10.1088/1361-6560/ac3dc7]
Abstract
In this paper, we review the field of motion detection and correction in nuclear cardiology with single photon emission computed tomography (SPECT) and positron emission tomography (PET) imaging systems. We start with a brief overview of nuclear cardiology applications and a description of SPECT and PET imaging systems, then explain the different types of motion and their related artefacts. Moreover, we classify and describe various techniques for motion detection and correction, discussing their potential advantages with reference to metrics and tasks, particularly improvements in image quality and diagnostic performance. In addition, we emphasize limitations encountered with different motion detection and correction methods that may challenge routine clinical application and diagnostic performance.
Affiliation(s)
- Iraj Mohammadi
- Department of Physics, University of Aveiro, Aveiro, Portugal
- I Filipe Castro
- i3n Physics Department, Universidade de Aveiro, Aveiro, Portugal
- Arman Rahmim
- Radiology and Physics, The University of British Columbia, Vancouver, British Columbia, Canada
22
Miller RJH, Singh A, Dey D, Slomka P. Artificial Intelligence and Cardiac PET/Computed Tomography Imaging. PET Clin 2021; 17:85-94. [PMID: 34809873] [DOI: 10.1016/j.cpet.2021.06.011]
Abstract
Artificial intelligence is an important technology with rapidly expanding applications in cardiac PET. We review the common terminology, including methods for training and testing, which is fundamental to understanding artificial intelligence. Next, we highlight applications that improve image acquisition, reconstruction, and segmentation. Computed tomographic imaging is commonly acquired in conjunction with PET, and various artificial intelligence methods have been applied to it, including methods to automatically extract anatomic information or generate synthetic attenuation images. Last, we describe methods to automate disease diagnosis or risk stratification. This summary highlights the current and future clinical applications of artificial intelligence in cardiovascular PET imaging.
Affiliation(s)
- Robert J H Miller
- Department of Cardiac Sciences, University of Calgary, GAA08 HRIC, 3230 Hospital Drive NW, Calgary AB, T2N 4Z6, Canada
- Ananya Singh
- Departments of Imaging and Medicine, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Suite Metro 203, Los Angeles, CA 90048, USA
- Damini Dey
- Departments of Imaging and Medicine, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Suite Metro 203, Los Angeles, CA 90048, USA
- Piotr Slomka
- Departments of Imaging and Medicine, Cedars-Sinai Medical Center, 8700 Beverly Blvd, Suite Metro 203, Los Angeles, CA 90048, USA
23
Zhou B, Tsai YJ, Chen X, Duncan JS, Liu C. MDPET: A Unified Motion Correction and Denoising Adversarial Network for Low-Dose Gated PET. IEEE Trans Med Imaging 2021; 40:3154-3164. [PMID: 33909561] [PMCID: PMC8588635] [DOI: 10.1109/tmi.2021.3076191]
Abstract
In positron emission tomography (PET), gating is commonly utilized to reduce respiratory motion blurring and to facilitate motion correction methods. In applications where low-dose gated PET is useful, reducing the injected dose increases noise levels in the gated images, which can corrupt motion estimation and subsequent corrections, leading to inferior image quality. To address these issues, we propose MDPET, a unified motion correction and denoising adversarial network for generating motion-compensated low-noise images from low-dose gated PET data. Specifically, we propose a Temporal Siamese Pyramid Network (TSP-Net) with basic units made up of (1) a Siamese Pyramid Network (SP-Net) and (2) a recurrent layer for motion estimation among the gates. The denoising network is unified with our motion estimation network to simultaneously correct the motion and predict a motion-compensated denoised PET reconstruction. Experimental results on human data demonstrated that MDPET can generate accurate motion estimates directly from low-dose gated images and produce high-quality motion-compensated low-noise reconstructions. Comparative studies with previous methods also show that MDPET achieves superior motion estimation and denoising performance. Our code is available at https://github.com/bbbbbbzhou/MDPET.
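Gating itself is conceptually simple: each event (or short frame) is assigned to a bin according to its respiratory phase, and each bin is reconstructed separately. A toy phase-gating sketch assuming a known, perfectly regular breathing period; real gating derives the phase from an external device or a data-driven signal:

```python
import numpy as np

def assign_gates(event_times: np.ndarray, period: float, n_gates: int) -> np.ndarray:
    """Phase-gate list-mode event timestamps: phase in [0, 1) computed
    from the respiratory period, binned into n_gates equal-width gates."""
    phase = (event_times % period) / period
    return np.minimum((phase * n_gates).astype(int), n_gates - 1)

period = 4.0                                    # s, assumed regular breathing
events = np.array([0.1, 1.0, 2.1, 3.9, 4.2, 6.0])
gates = assign_gates(events, period, n_gates=4)  # gate index per event
```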
24
Wang Y, Li E, Cherry SR, Wang G. Total-Body PET Kinetic Modeling and Potential Opportunities Using Deep Learning. PET Clin 2021; 16:613-625. [PMID: 34353745] [PMCID: PMC8453049] [DOI: 10.1016/j.cpet.2021.06.009]
Abstract
The uEXPLORER total-body PET/CT system provides a very high level of detection sensitivity and simultaneous coverage of the entire body for dynamic imaging and quantification of tracer kinetics. This article describes the fundamentals and potential benefits of total-body kinetic modeling and parametric imaging, focusing on the noninvasive derivation of the blood input function, multiparametric imaging, and high-temporal-resolution kinetic modeling. Along with its attractive properties, total-body kinetic modeling also brings significant challenges, such as the large scale of total-body dynamic PET data, the need for organ- and tissue-appropriate input functions and kinetic models, and total-body motion correction. These challenges, and the opportunities for addressing them with deep learning, are discussed.
Affiliation(s)
- Yiran Wang
- Department of Biomedical Engineering, University of California, 451 E. Health Sciences Drive, Davis, CA 95616, USA; Department of Radiology, University of California Davis Medical Center, Ambulatory Care Center, Building Suite 3100, 4860 Y Street, Sacramento, CA 95817, USA
- Elizabeth Li
- Department of Biomedical Engineering, University of California, 451 E. Health Sciences Drive, Davis, CA 95616, USA
- Simon R Cherry
- Department of Biomedical Engineering, University of California, 451 E. Health Sciences Drive, Davis, CA 95616, USA; Department of Radiology, University of California Davis Medical Center, Ambulatory Care Center, Building Suite 3100, 4860 Y Street, Sacramento, CA 95817, USA
- Guobao Wang
- Department of Radiology, University of California Davis Medical Center, Ambulatory Care Center, Building Suite 3100, 4860 Y Street, Sacramento, CA 95817, USA
25
Yang J, Yang J, Zhao F, Zhang W. An unsupervised multi-scale framework with attention-based network (MANet) for lung 4D-CT registration. Phys Med Biol 2021; 66. [PMID: 34126608] [DOI: 10.1088/1361-6560/ac0afc]
Abstract
Deformable image registration (DIR) of 4D-CT is very important in many radiotherapy applications, including tumor target definition, image fusion, dose accumulation and response evaluation. Performing accurate and fast DIR of lung 4D-CT images is a challenging task due to large and complicated deformations. In this study, we propose an unsupervised multi-scale DIR framework with an attention-based network (MANet). Three cascaded models for aligning CT images at different resolution levels were trained by minimizing loss functions defined as the combination of the dissimilarity between the fixed image and the deformed image and a DVF regularization term. In addition, attention gates were incorporated into the three models to distinguish moving structures from non-moving or minimally moving structures during registration. The three models were first trained sequentially and separately, minimizing the loss function at each scale to initialize the MANet, and then trained jointly to minimize a total loss function that incorporated an additional dissimilarity between the fixed image and the deformed image. Besides, an adversarial network was integrated into MANet to enforce DVF regularization by penalizing unrealistic deformed images. The proposed MANet was evaluated on the public dir-lab dataset, and its target registration errors (TREs) were compared with conventional iterative optimization-based methods and three recently published deep learning-based methods. The initial results showed that MANet, with an average TRE of 1.53 ± 1.02 mm, outperformed the other registration methods, and its execution time was about 1 s for DVF estimation with no manual parameter tuning required, demonstrating that the proposed method can perform superior registration for 4D-CT.
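The TRE reported above is the mean distance between corresponding anatomical landmarks after registration; evaluations on datasets like dir-lab use hundreds of expert-annotated landmarks per case. A minimal sketch with hypothetical landmark coordinates:

```python
import numpy as np

def target_registration_error(warped, fixed, spacing=(1.0, 1.0, 1.0)) -> float:
    """Mean Euclidean distance (in mm) between corresponding landmark
    pairs after registration; voxel indices are scaled by voxel spacing."""
    d = (np.asarray(warped) - np.asarray(fixed)) * np.asarray(spacing)
    return float(np.mean(np.linalg.norm(d, axis=1)))

# Hypothetical landmark voxel coordinates (z, y, x) for one case.
fixed = np.array([[10, 10, 10], [20, 22, 18]], float)
warped = np.array([[10, 11, 10], [20, 22, 20]], float)
tre = target_registration_error(warped, fixed, spacing=(1.0, 1.0, 2.5))
```

Anisotropic voxel spacing matters here: the same 2-voxel residual costs 5 mm along a 2.5 mm axis but only 2 mm along a 1 mm axis.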
Affiliation(s)
- Juan Yang
- School of Physics and Electronics, Shandong Normal University, Jinan 250358, People's Republic of China
- Jinhui Yang
- School of Physics and Electronics, Shandong Normal University, Jinan 250358, People's Republic of China
- Fen Zhao
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Jinan 250117, People's Republic of China
- Wenjun Zhang
- Department of Human Resources, Shandong Provincial Third Hospital, Jinan 250031, People's Republic of China
26
Kyme AZ, Fulton RR. Motion estimation and correction in SPECT, PET and CT. Phys Med Biol 2021; 66. [PMID: 34102630 DOI: 10.1088/1361-6560/ac093b] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Received: 11/13/2020] [Accepted: 06/08/2021] [Indexed: 11/11/2022]
Abstract
Patient motion impacts single photon emission computed tomography (SPECT), positron emission tomography (PET) and X-ray computed tomography (CT) by giving rise to projection data inconsistencies that can manifest as reconstruction artifacts, thereby degrading image quality and compromising accurate image interpretation and quantification. Methods to estimate and correct for patient motion in SPECT, PET and CT have attracted considerable research effort over several decades. The aims of this effort have been two-fold: to estimate relevant motion fields characterizing the various forms of voluntary and involuntary motion; and to apply these motion fields within a modified reconstruction framework to obtain motion-corrected images. The aims of this review are to outline the motion problem in medical imaging and to critically review published methods for estimating and correcting for the relevant motion fields in clinical and preclinical SPECT, PET and CT. Despite many similarities in how motion is handled between these modalities, utility and applications vary based on differences in temporal and spatial resolution. Technical feasibility has been demonstrated in each modality for both rigid and non-rigid motion, but clinical feasibility remains an important target. There is considerable scope for further developments in motion estimation and correction, and particularly in data-driven methods that will aid clinical utility. State-of-the-art machine learning methods may have a unique role to play in this context.
Affiliation(s)
- Andre Z Kyme
- School of Biomedical Engineering, The University of Sydney, Sydney, New South Wales, Australia
- Roger R Fulton
- Sydney School of Health Sciences, The University of Sydney, Sydney, New South Wales, Australia
27
Zaidi H, El Naqa I. Quantitative Molecular Positron Emission Tomography Imaging Using Advanced Deep Learning Techniques. Annu Rev Biomed Eng 2021; 23:249-276. [PMID: 33797938 DOI: 10.1146/annurev-bioeng-082420-020343] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Indexed: 11/09/2022]
Abstract
The widespread availability of high-performance computing and the popularity of artificial intelligence (AI) with machine learning and deep learning (ML/DL) algorithms at the helm have stimulated the development of many applications involving the use of AI-based techniques in molecular imaging research. Applications reported in the literature encompass various areas, including innovative design concepts in positron emission tomography (PET) instrumentation, quantitative image reconstruction and analysis techniques, computer-aided detection and diagnosis, as well as modeling and prediction of outcomes. This review reflects the tremendous interest in quantitative molecular imaging using ML/DL techniques during the past decade, ranging from the basic principles of ML/DL techniques to the various steps required for obtaining quantitatively accurate PET data, including algorithms used to denoise or correct for physical degrading factors as well as to quantify tracer uptake and metabolic tumor volume for treatment monitoring or radiation therapy treatment planning and response prediction. This review also addresses future opportunities and current challenges facing the adoption of ML/DL approaches and their role in multimodality imaging.
Affiliation(s)
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland
- Geneva Neuroscience Centre, University of Geneva, 1205 Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, 9700 RB Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, DK-5000 Odense, Denmark
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, Florida 33612, USA
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109, USA
- Department of Oncology, McGill University, Montreal, Quebec H3A 1G5, Canada
28
Meikle SR, Sossi V, Roncali E, Cherry SR, Banati R, Mankoff D, Jones T, James M, Sutcliffe J, Ouyang J, Petibon Y, Ma C, El Fakhri G, Surti S, Karp JS, Badawi RD, Yamaya T, Akamatsu G, Schramm G, Rezaei A, Nuyts J, Fulton R, Kyme A, Lois C, Sari H, Price J, Boellaard R, Jeraj R, Bailey DL, Eslick E, Willowson KP, Dutta J. Quantitative PET in the 2020s: a roadmap. Phys Med Biol 2021; 66:06RM01. [PMID: 33339012 PMCID: PMC9358699 DOI: 10.1088/1361-6560/abd4f7] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Indexed: 02/08/2023]
Abstract
Positron emission tomography (PET) plays an increasingly important role in research and clinical applications, catalysed by remarkable technical advances and a growing appreciation of the need for reliable, sensitive biomarkers of human function in health and disease. Over the last 30 years, a large amount of the physics and engineering effort in PET has been motivated by the dominant clinical application during that period, oncology. This has led to important developments such as PET/CT, whole-body PET, 3D PET, accelerated statistical image reconstruction, and time-of-flight PET. Despite impressive improvements in image quality as a result of these advances, the emphasis on static, semi-quantitative 'hot spot' imaging for oncologic applications has meant that the capability of PET to quantify biologically relevant parameters based on tracer kinetics has not been fully exploited. More recent advances, such as PET/MR and total-body PET, have opened up the ability to address a vast range of new research questions, from which a future expansion of applications and radiotracers appears highly likely. Many of these new applications and tracers will, at least initially, require quantitative analyses that more fully exploit the exquisite sensitivity of PET and the tracer principle on which it is based. It is also expected that they will require more sophisticated quantitative analysis methods than those that are currently available. At the same time, artificial intelligence is revolutionizing data analysis and impacting the relationship between the statistical quality of the acquired data and the information we can extract from the data. In this roadmap, leaders of the key sub-disciplines of the field identify the challenges and opportunities to be addressed over the next ten years that will enable PET to realise its full quantitative potential, initially in research laboratories and, ultimately, in clinical practice.
Affiliation(s)
- Steven R Meikle
- Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Australia
- Brain and Mind Centre, The University of Sydney, Australia
- Vesna Sossi
- Department of Physics and Astronomy, University of British Columbia, Canada
- Emilie Roncali
- Department of Biomedical Engineering, University of California, Davis, United States of America
- Simon R Cherry
- Department of Biomedical Engineering, University of California, Davis, United States of America
- Department of Radiology, University of California, Davis, United States of America
- Richard Banati
- Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Australia
- Brain and Mind Centre, The University of Sydney, Australia
- Australian Nuclear Science and Technology Organisation, Sydney, Australia
- David Mankoff
- Department of Radiology, University of Pennsylvania, United States of America
- Terry Jones
- Department of Radiology, University of California, Davis, United States of America
- Michelle James
- Department of Radiology, Molecular Imaging Program at Stanford (MIPS), CA, United States of America
- Department of Neurology and Neurological Sciences, Stanford University, CA, United States of America
- Julie Sutcliffe
- Department of Biomedical Engineering, University of California, Davis, United States of America
- Department of Internal Medicine, University of California, Davis, CA, United States of America
- Jinsong Ouyang
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Yoann Petibon
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Chao Ma
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Georges El Fakhri
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Suleman Surti
- Department of Radiology, University of Pennsylvania, United States of America
- Joel S Karp
- Department of Radiology, University of Pennsylvania, United States of America
- Ramsey D Badawi
- Department of Biomedical Engineering, University of California, Davis, United States of America
- Department of Radiology, University of California, Davis, United States of America
- Taiga Yamaya
- National Institute of Radiological Sciences (NIRS), National Institutes for Quantum and Radiological Science and Technology (QST), Chiba, Japan
- Go Akamatsu
- National Institute of Radiological Sciences (NIRS), National Institutes for Quantum and Radiological Science and Technology (QST), Chiba, Japan
- Georg Schramm
- Department of Imaging and Pathology, Nuclear Medicine & Molecular Imaging, KU Leuven, Belgium
- Ahmadreza Rezaei
- Department of Imaging and Pathology, Nuclear Medicine & Molecular Imaging, KU Leuven, Belgium
- Johan Nuyts
- Department of Imaging and Pathology, Nuclear Medicine & Molecular Imaging, KU Leuven, Belgium
- Roger Fulton
- Brain and Mind Centre, The University of Sydney, Australia
- Department of Medical Physics, Westmead Hospital, Sydney, Australia
- André Kyme
- Brain and Mind Centre, The University of Sydney, Australia
- School of Biomedical Engineering, Faculty of Engineering and IT, The University of Sydney, Australia
- Cristina Lois
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, United States of America
- Hasan Sari
- Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Athinoula A. Martinos Center, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Julie Price
- Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Athinoula A. Martinos Center, Massachusetts General Hospital & Harvard Medical School, Boston, MA, United States of America
- Ronald Boellaard
- Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam University Medical Center, location VUMC, Netherlands
- Robert Jeraj
- Departments of Medical Physics, Human Oncology and Radiology, University of Wisconsin, United States of America
- Faculty of Mathematics and Physics, University of Ljubljana, Slovenia
- Dale L Bailey
- Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Australia
- Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia
- Faculty of Science, The University of Sydney, Australia
- Enid Eslick
- Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia
- Kathy P Willowson
- Department of Nuclear Medicine, Royal North Shore Hospital, Sydney, Australia
- Faculty of Science, The University of Sydney, Australia
- Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, United States of America