1. Shiyam Sundar LK, Gutschmayer S, Maenle M, Beyer T. Extracting value from total-body PET/CT image data - the emerging role of artificial intelligence. Cancer Imaging 2024; 24:51. [PMID: 38605408 PMCID: PMC11010281 DOI: 10.1186/s40644-024-00684-w]
Abstract
The evolution of Positron Emission Tomography (PET), culminating in the Total-Body PET (TB-PET) system, represents a paradigm shift in medical imaging. This paper explores the transformative role of Artificial Intelligence (AI) in enhancing clinical and research applications of TB-PET imaging. Clinically, TB-PET's superior sensitivity facilitates rapid imaging, low-dose imaging protocols, improved diagnostic capabilities and higher patient comfort. In research, TB-PET shows promise in studying systemic interactions and enhancing our understanding of human physiology and pathophysiology. In parallel, AI's integration into PET imaging workflows, spanning from image acquisition to data analysis, marks a significant development in nuclear medicine. This review delves into the current and potential roles of AI in augmenting TB-PET/CT's functionality and utility. We explore how AI can streamline current PET imaging processes and pioneer new applications, thereby maximising the technology's capabilities. The discussion also addresses necessary steps and considerations for effectively integrating AI into TB-PET/CT research and clinical practice. The paper highlights AI's role in enhancing TB-PET's efficiency and addresses the challenges posed by TB-PET's increased complexity. In conclusion, this exploration emphasises the need for a collaborative approach in the field of medical imaging. We advocate for shared resources and open-source initiatives as crucial steps towards harnessing the full potential of the AI/TB-PET synergy. This collaborative effort is essential for revolutionising medical imaging, ultimately leading to significant advancements in patient care and medical research.
Affiliation(s)
- Sebastian Gutschmayer
- Quantitative Imaging and Medical Physics (QIMP) Team, Medical University of Vienna, Vienna, Austria
- Marcel Maenle
- Quantitative Imaging and Medical Physics (QIMP) Team, Medical University of Vienna, Vienna, Austria
- Thomas Beyer
- Quantitative Imaging and Medical Physics (QIMP) Team, Medical University of Vienna, Vienna, Austria
2. Yadav A, Kumar A. Artificial intelligence in rectal cancer: What is the future? Artif Intell Cancer 2023; 4:11-22. [DOI: 10.35713/aic.v4.i2.11]
Abstract
Colorectal cancer (CRC) is the third most prevalent cancer in both men and women, and it is the second leading cause of cancer-related deaths globally. Around 60%-70% of CRC patients are diagnosed at advanced stages, with nearly 20% having liver metastases. It is noteworthy that 5-year survival rates decline significantly, from 80%-90% for localized disease to a mere 10%-15% for patients with metastasis at the time of diagnosis. Early diagnosis, an appropriate therapeutic strategy, accurate assessment of treatment response, and prognostication are essential for better outcomes. The last couple of decades have seen significant technological developments aimed at improving the outcome of rectal cancer, including artificial intelligence (AI). AI is a broad term used to describe the study of machines that mimic human intelligence, such as perceiving the environment, drawing logical conclusions from observations, and performing complex tasks. At present, AI has demonstrated a promising role in early diagnosis, prognosis, and treatment outcomes for patients with rectal cancer, a limited role in surgical decision making, and a bright future.
Affiliation(s)
- Alka Yadav
- Department of Surgical Gastroenterology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow 226014, UP, India
- Ashok Kumar
- Department of Surgical Gastroenterology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow 226014, UP, India
3. Feuerecker B, Heimer MM, Geyer T, Fabritius MP, Gu S, Schachtner B, Beyer L, Ricke J, Gatidis S, Ingrisch M, Cyran CC. Artificial Intelligence in Oncological Hybrid Imaging. Nuklearmedizin 2023; 62:296-305. [PMID: 37802057 DOI: 10.1055/a-2157-6810]
Abstract
BACKGROUND Artificial intelligence (AI) applications have become increasingly relevant across a broad spectrum of settings in medical imaging. Due to the large amount of imaging data that is generated in oncological hybrid imaging, AI applications are desirable for lesion detection and characterization in primary staging, therapy monitoring, and recurrence detection. Given the rapid developments in machine learning (ML) and deep learning (DL) methods, AI will have a significant impact on the imaging workflow and will eventually improve clinical decision making and outcomes. METHODS AND RESULTS The first part of this narrative review discusses current research, with an introduction to artificial intelligence in oncological hybrid imaging and key concepts in data science. The second part reviews relevant examples with a focus on applications in oncology, as well as a discussion of challenges and current limitations. CONCLUSION AI applications have the potential to leverage the diagnostic data stream with high efficiency and depth to facilitate automated lesion detection, characterization, and therapy monitoring, ultimately improving quality and efficiency throughout the medical imaging workflow. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based therapy guidance in oncology. However, significant challenges remain regarding application development, benchmarking, and clinical implementation. KEY POINTS · Hybrid imaging generates a large amount of multimodality medical imaging data with high complexity and depth. · Advanced tools are required to enable fast and cost-efficient processing along the whole radiology value chain. · AI applications promise to facilitate the assessment of oncological disease in hybrid imaging with high quality and efficiency for lesion detection, characterization, and response assessment. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based oncological therapy guidance. · Selected applications in three oncological entities (lung, prostate, and neuroendocrine tumors) demonstrate how AI algorithms may impact imaging-based tasks in hybrid imaging and potentially guide clinical decision making.
Affiliation(s)
- Benedikt Feuerecker
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- German Cancer Research Center (DKFZ), Partner site Munich, DKTK German Cancer Consortium, Munich, Germany
- Maurice M Heimer
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Thomas Geyer
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Sijing Gu
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Leonie Beyer
- Department of Nuclear Medicine, University Hospital, LMU Munich, Munich, Germany
- Jens Ricke
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Sergios Gatidis
- Department of Radiology, University Hospital Tübingen, Tübingen, Germany
- MPI, Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Michael Ingrisch
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Clemens C Cyran
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
4. Pribanić I, Simić SD, Tanković N, Debeljuh DD, Jurković S. Reduction of SPECT acquisition time using deep learning: A phantom study. Phys Med 2023; 111:102615. [PMID: 37302268 DOI: 10.1016/j.ejmp.2023.102615]
Abstract
Single photon emission computed tomography (SPECT) procedures are characterized by the long acquisition times needed to acquire diagnostically acceptable image data. The goal of this investigation was to assess the feasibility of using a deep convolutional neural network (DCNN) to reduce the acquisition time. The DCNN was implemented in PyTorch and trained using image data from standard SPECT quality phantoms. The under-sampled image dataset was provided to the neural network as input, while the missing projections were provided as targets, so that the trained network produces the missing projections as output. A baseline method, calculating the missing projections as arithmetic means of the adjacent ones, was introduced for comparison. The synthesized projections and reconstructed images were compared to the original and baseline data across several parameters using the PyTorch and PyTorch Image Quality code libraries. Comparisons of projection and reconstructed image data show the DCNN clearly outperforming the baseline method. However, subsequent analysis revealed that the synthesized image data are more comparable to the under-sampled than to the fully-sampled image data. These results imply that the neural network can replicate coarser objects better. However, densely sampled clinical image datasets, coarse reconstruction matrices, and patient data featuring coarse structures, combined with a lack of baseline data generation methods, will hamper the ability to analyse neural network outputs correctly. This study calls for the use of phantom image data and the introduction of a baseline method in the evaluation of neural network outputs.
Affiliation(s)
- Ivan Pribanić
- Medical Physics and Radiation Protection Department, University Hospital Rijeka, Croatia; Department of Medical Physics and Biophysics, Faculty of Medicine, University of Rijeka, Croatia
- Nikola Tanković
- Faculty of Informatics, Juraj Dobrila University of Pula, Croatia
- Dea Dundara Debeljuh
- Medical Physics and Radiation Protection Department, University Hospital Rijeka, Croatia; Department of Medical Physics and Biophysics, Faculty of Medicine, University of Rijeka, Croatia; Radiology Department, General Hospital Pula, Croatia
- Slaven Jurković
- Medical Physics and Radiation Protection Department, University Hospital Rijeka, Croatia; Department of Medical Physics and Biophysics, Faculty of Medicine, University of Rijeka, Croatia
5. Wang R, Zhu J, Xia J, Yao J, Shi J, Li C. Photoacoustic imaging with limited sampling: a review of machine learning approaches. Biomed Opt Express 2023; 14:1777-1799. [PMID: 37078052 PMCID: PMC10110324 DOI: 10.1364/boe.483081]
Abstract
Photoacoustic imaging combines high optical absorption contrast and deep acoustic penetration, and can reveal structural, molecular, and functional information about biological tissue non-invasively. Due to practical restrictions, photoacoustic imaging systems often face various challenges, such as complex system configuration, long imaging time, and/or less-than-ideal image quality, which collectively hinder their clinical application. Machine learning has been applied to improve photoacoustic imaging and mitigate the otherwise strict requirements in system setup and data acquisition. In contrast to the previous reviews of learned methods in photoacoustic computed tomography (PACT), this review focuses on the application of machine learning approaches to address the limited spatial sampling problems in photoacoustic imaging, specifically the limited view and undersampling issues. We summarize the relevant PACT works based on their training data, workflow, and model architecture. Notably, we also introduce the recent limited sampling works on the other major implementation of photoacoustic imaging, i.e., photoacoustic microscopy (PAM). With machine learning-based processing, photoacoustic imaging can achieve improved image quality with modest spatial sampling, presenting great potential for low-cost and user-friendly clinical applications.
Affiliation(s)
- Ruofan Wang
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jing Zhu
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Jun Xia
- Department of Biomedical Engineering, University at Buffalo, The State University of New York, Buffalo, NY 14260, USA
- Junjie Yao
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA
- Junhui Shi
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
- Chiye Li
- Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou, 311100, China
6. Wen G, Shim V, Holdsworth SJ, Fernandez J, Qiao M, Kasabov N, Wang A. Machine Learning for Brain MRI Data Harmonisation: A Systematic Review. Bioengineering (Basel) 2023; 10:397. [PMID: 37106584 PMCID: PMC10135601 DOI: 10.3390/bioengineering10040397]
Abstract
BACKGROUND Magnetic Resonance Imaging (MRI) data collected from multiple centres can be heterogeneous due to factors such as the scanner used and the site location. To reduce this heterogeneity, the data need to be harmonised. In recent years, machine learning (ML) has been used to solve different types of problems related to MRI data, showing great promise. OBJECTIVE This study explores how well various ML algorithms perform in harmonising MRI data, both implicitly and explicitly, by summarising the findings in relevant peer-reviewed articles. Furthermore, it provides guidelines for the use of current methods and identifies potential future research directions. METHOD This review covers articles indexed in the PubMed, Web of Science, and IEEE databases through June 2022. Data from studies were analysed based on the criteria of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Quality assessment questions were derived to assess the quality of the included publications. RESULTS A total of 41 articles published between 2015 and 2022 were identified and analysed. MRI data were found to be harmonised either in an implicit (n = 21) or an explicit (n = 20) way. Three MRI modalities were identified: structural MRI (n = 28), diffusion MRI (n = 7) and functional MRI (n = 6). CONCLUSION Various ML techniques have been employed to harmonise different types of MRI data. There is currently a lack of consistent evaluation methods and metrics across studies, an issue that should be addressed in future work. Harmonisation of MRI data using ML shows promise in improving performance on downstream ML tasks, while caution should be exercised when using ML-harmonised data for direct interpretation.
Affiliation(s)
- Grace Wen
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Centre for Brain Research, University of Auckland, Auckland 1142, New Zealand
- Samantha Jane Holdsworth
- Centre for Brain Research, University of Auckland, Auckland 1142, New Zealand
- Mātai Medical Research Institute, Tairāwhiti-Gisborne 4010, New Zealand
- Department of Anatomy & Medical Imaging, Faculty of Medical and Health Sciences, University of Auckland, Auckland 1142, New Zealand
- Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Miao Qiao
- Department of Computer Science, University of Auckland, Auckland 1142, New Zealand
- Nikola Kasabov
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland 1010, New Zealand
- Intelligent Systems Research Centre, Ulster University, Londonderry BT52 1SA, UK
- Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
- Alan Wang
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Centre for Brain Research, University of Auckland, Auckland 1142, New Zealand
- Department of Anatomy & Medical Imaging, Faculty of Medical and Health Sciences, University of Auckland, Auckland 1142, New Zealand
7. Feuerecker B, Heimer MM, Geyer T, Fabritius MP, Gu S, Schachtner B, Beyer L, Ricke J, Gatidis S, Ingrisch M, Cyran CC. Artificial Intelligence in Oncological Hybrid Imaging. Rofo 2023; 195:105-114. [PMID: 36170852 DOI: 10.1055/a-1909-7013]
Abstract
BACKGROUND Artificial intelligence (AI) applications have become increasingly relevant across a broad spectrum of settings in medical imaging. Due to the large amount of imaging data that is generated in oncological hybrid imaging, AI applications are desirable for lesion detection and characterization in primary staging, therapy monitoring, and recurrence detection. Given the rapid developments in machine learning (ML) and deep learning (DL) methods, AI will have a significant impact on the imaging workflow and will eventually improve clinical decision making and outcomes. METHODS AND RESULTS The first part of this narrative review discusses current research, with an introduction to artificial intelligence in oncological hybrid imaging and key concepts in data science. The second part reviews relevant examples with a focus on applications in oncology, as well as a discussion of challenges and current limitations. CONCLUSION AI applications have the potential to leverage the diagnostic data stream with high efficiency and depth to facilitate automated lesion detection, characterization, and therapy monitoring, ultimately improving quality and efficiency throughout the medical imaging workflow. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based therapy guidance in oncology. However, significant challenges remain regarding application development, benchmarking, and clinical implementation. KEY POINTS · Hybrid imaging generates a large amount of multimodality medical imaging data with high complexity and depth. · Advanced tools are required to enable fast and cost-efficient processing along the whole radiology value chain. · AI applications promise to facilitate the assessment of oncological disease in hybrid imaging with high quality and efficiency for lesion detection, characterization, and response assessment. The goal is to generate reproducible, structured, quantitative diagnostic data for evidence-based oncological therapy guidance. · Selected applications in three oncological entities (lung, prostate, and neuroendocrine tumors) demonstrate how AI algorithms may impact imaging-based tasks in hybrid imaging and potentially guide clinical decision making. CITATION FORMAT · Feuerecker B, Heimer M, Geyer T et al. Artificial Intelligence in Oncological Hybrid Imaging. Fortschr Röntgenstr 2023; 195: 105-114.
Affiliation(s)
- Benedikt Feuerecker
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- German Cancer Research Center (DKFZ), Partner site Munich, DKTK German Cancer Consortium, Munich, Germany
- Maurice M Heimer
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Thomas Geyer
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Sijing Gu
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Leonie Beyer
- Department of Nuclear Medicine, University Hospital, LMU Munich, Munich, Germany
- Jens Ricke
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Sergios Gatidis
- Department of Radiology, University Hospital Tübingen, Tübingen, Germany
- MPI, Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Michael Ingrisch
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
- Clemens C Cyran
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
8. Morland D, Triumbari EKA, Boldrini L, Gatta R, Pizzuto D, Annunziata S. Radiomics in Oncological PET Imaging: A Systematic Review - Part 1, Supradiaphragmatic Cancers. Diagnostics (Basel) 2022; 12:1329. [PMID: 35741138 PMCID: PMC9221970 DOI: 10.3390/diagnostics12061329]
Abstract
Radiomics is an upcoming field in nuclear oncology, both promising and technically challenging. To summarize the work already undertaken on supradiaphragmatic neoplasia and assess its quality, we performed a literature search in the PubMed database up to 18 February 2022. Inclusion criteria were: studies based on human data; at least one specified tumor type; supradiaphragmatic malignancy; performing radiomics on PET imaging. Exclusion criteria were: studies based only on phantom or animal data; technical articles without a clinically oriented question; fewer than 30 patients in the training cohort. A review database containing PMID, year of publication, cancer type, and quality criteria (number of patients, retrospective or prospective nature, independent validation cohort) was constructed. A total of 220 studies met the inclusion criteria. Among them, 119 (54.1%) included more than 100 patients, 21 (9.5%) were based on prospectively acquired data, and 91 (41.4%) used an independent validation set. Most studies focused on prognostic and treatment response objectives. Because the textural parameters and methods employed differ greatly from one article to another, it is complicated to aggregate and compare articles. New contributions and radiomics guidelines have tended to improve the quality of reported studies over the years.
Affiliation(s)
- David Morland
- Nuclear Medicine Unit, TracerGLab, Department of Radiology, Radiotherapy and Hematology, Fondazione Policlinico Universitario A. Gemelli, IRCCS, 00168 Rome, Italy
- Service de Médecine Nucléaire, Institut Godinot, 51100 Reims, France
- Laboratoire de Biophysique, UFR de Médecine, Université de Reims Champagne-Ardenne, 51100 Reims, France
- CReSTIC (Centre de Recherche en Sciences et Technologies de l’Information et de la Communication), EA 3804, Université de Reims Champagne-Ardenne, 51100 Reims, France
- Elizabeth Katherine Anna Triumbari
- Nuclear Medicine Unit, TracerGLab, Department of Radiology, Radiotherapy and Hematology, Fondazione Policlinico Universitario A. Gemelli, IRCCS, 00168 Rome, Italy
- Luca Boldrini
- Radiotherapy Unit, Radiomics, Department of Radiology, Radiotherapy and Hematology, Fondazione Policlinico Universitario A. Gemelli, IRCCS, 00168 Rome, Italy
- Roberto Gatta
- Radiotherapy Unit, Radiomics, Department of Radiology, Radiotherapy and Hematology, Fondazione Policlinico Universitario A. Gemelli, IRCCS, 00168 Rome, Italy
- Department of Clinical and Experimental Sciences, University of Brescia, 25121 Brescia, Italy
- Department of Oncology, Lausanne University Hospital, 1011 Lausanne, Switzerland
- Daniele Pizzuto
- Nuclear Medicine Unit, TracerGLab, Department of Radiology, Radiotherapy and Hematology, Fondazione Policlinico Universitario A. Gemelli, IRCCS, 00168 Rome, Italy
- Salvatore Annunziata
- Nuclear Medicine Unit, TracerGLab, Department of Radiology, Radiotherapy and Hematology, Fondazione Policlinico Universitario A. Gemelli, IRCCS, 00168 Rome, Italy
9. Dal Toso L, Chalampalakis Z, Buvat I, Comtat C, Cook G, Goh V, Schnabel JA, Marsden PK. Improved 3D tumour definition and quantification of uptake in simulated lung tumours using deep learning. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac65d6]
Abstract
Objective. In clinical positron emission tomography (PET) imaging, quantification of radiotracer uptake in tumours is often performed using semi-quantitative measurements such as the standardised uptake value (SUV). For small objects, the accuracy of SUV estimates is limited by the noise properties of PET images and the partial volume effect. There is a need for methods that provide more accurate and reproducible quantification of radiotracer uptake. Approach. In this work, we present a deep learning approach with the aim of improving quantification of lung tumour radiotracer uptake and tumour shape definition. A set of simulated tumours, assigned with ‘ground truth’ radiotracer distributions, are used to generate realistic PET raw data which are then reconstructed into PET images. In this work, the ground truth images are generated by placing simulated tumours characterised by different sizes and activity distributions in the left lung of an anthropomorphic phantom. These images are then used as input to an analytical simulator to simulate realistic raw PET data. The PET images reconstructed from the simulated raw data and the corresponding ground truth images are used to train a 3D convolutional neural network. Results. When tested on an unseen set of reconstructed PET phantom images, the network yields improved estimates of the corresponding ground truth. The same network is then applied to reconstructed PET data generated with different point spread functions. Overall, the network is able to recover better defined tumour shapes and improved estimates of tumour maximum and median activities. Significance. Our results suggest that the proposed approach, trained on data simulated with one scanner geometry, has the potential to restore PET data acquired with different scanners.
10. Ghosh NK, Kumar A. Colorectal cancer: Artificial intelligence and its role in surgical decision making. Artif Intell Gastroenterol 2022; 3:36-45. [DOI: 10.35712/aig.v3.i2.36]
Abstract
Despite several advances in the oncological management of colorectal cancer (CRC), there remains a lacuna in the treatment strategy, which differs from center to center, depends on the philosophy of the treating clinician, and is therefore not without bias. Personalized treatment is essential in CRC to achieve better long-term outcomes and to reduce morbidity. Surgery has an important role to play in treatment. Surgical treatment of CRC is decided based on clinical parameters and investigations and is hence prone to judgmental errors. Artificial intelligence has been reported to be useful and accurate in surveillance, diagnosis, treatment, and follow-up in several malignancies. However, it is still evolving and yet to be established in surgical decision making in CRC, where it may be useful not only preoperatively but also intraoperatively. Artificial intelligence can help rectify human surgical decisions when clinical data and radiological and laboratory parameters are fed into the computer, and may thus guide correct surgical treatment.
Affiliation(s)
- Nalini Kanta Ghosh
- Department of Surgical Gastroenterology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow 226014, UP, India
- Ashok Kumar
- Department of Surgical Gastroenterology, Sanjay Gandhi Post Graduate Institute of Medical Sciences, Lucknow 226014, UP, India
11. Matsubara K, Ibaraki M, Nemoto M, Watabe H, Kimura Y. A review on AI in PET imaging. Ann Nucl Med 2022; 36:133-143. [PMID: 35029818 DOI: 10.1007/s12149-021-01710-8]
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. Specifically, deep learning techniques such as the convolutional neural network (CNN) and the generative adversarial network (GAN) have been extensively used for medical image generation, and image generation with deep learning has been investigated in studies using positron emission tomography (PET). This article reviews studies that applied deep learning techniques for image generation in PET. We categorized the studies into three themes as follows: (1) recovering full PET data from noisy data by denoising with deep learning; (2) PET image reconstruction and attenuation correction with deep learning; and (3) PET image translation and synthesis with deep learning. We introduce recent studies based on these three categories. Finally, we mention the limitations of applying deep learning techniques to PET image generation and future prospects for PET image generation.
Affiliation(s)
- Keisuke Matsubara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Mitsutaka Nemoto
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
- Hiroshi Watabe
- Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Miyagi, Japan
- Yuichi Kimura
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
12
Yousefirizi F, Decazes P, Amyar A, Ruan S, Saboury B, Rahmim A. AI-Based Detection, Classification and Prediction/Prognosis in Medical Imaging: Towards Radiophenomics. PET Clin 2021; 17:183-212. [PMID: 34809866 DOI: 10.1016/j.cpet.2021.09.010] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
Artificial intelligence (AI) techniques have significant potential to enable effective, robust, and automated image phenotyping, including the identification of subtle patterns. AI-based detection searches the image space to find regions of interest based on patterns and features. There is a spectrum of tumor histologies, from benign to malignant, that can be identified by AI-based classification approaches using image features. The extraction of minable information from images gives rise to the field of "radiomics," which can be explored via explicit (handcrafted/engineered) and deep radiomics frameworks. Radiomics analysis has the potential to serve as a noninvasive technique for the accurate characterization of tumors, improving diagnosis and treatment monitoring. This work reviews AI-based techniques, with a special focus on oncological PET and PET/CT imaging, for detection, classification, and prediction/prognosis tasks. We also discuss the efforts needed to enable the translation of AI techniques into routine clinical workflows, and potential improvements and complementary techniques such as the use of natural language processing on electronic health records and neuro-symbolic AI techniques.
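The "explicit (handcrafted) radiomics" framework described above has a simple shape: compute interpretable features from a tumor region of interest, then fit a classifier on those features. The following is a hypothetical, self-contained sketch on synthetic ROIs (pure NumPy; the feature names and the nearest-centroid rule are illustrative stand-ins, not the standardized feature sets or classifiers used in the reviewed work):

```python
import numpy as np

# Hypothetical handcrafted-radiomics sketch: extract a few first-order
# intensity features from synthetic 3D ROIs, then separate two classes with
# a nearest-centroid rule in normalised feature space. Real pipelines use
# standardized feature definitions and properly validated classifiers.

rng = np.random.default_rng(1)

def first_order_features(roi):
    """A few first-order features of the voxel intensities in an ROI."""
    vals = roi.ravel()
    hist, _ = np.histogram(vals, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]
    return np.array([
        vals.mean(),              # mean uptake
        vals.max(),               # SUVmax-like peak intensity
        vals.std(),               # heterogeneity proxy
        -(p * np.log2(p)).sum(),  # intensity entropy
    ])

def make_roi(high_uptake):
    """Synthetic 8x8x8 'tumor' ROI: higher, more variable uptake for class 1."""
    base = 2.5 if high_uptake else 1.0
    spread = 1.0 if high_uptake else 0.3
    return base + spread * rng.random((8, 8, 8))

# Build a tiny labelled dataset and fit per-class centroids in feature space.
X = np.array([first_order_features(make_roi(h)) for h in [0] * 20 + [1] * 20])
y = np.array([0] * 20 + [1] * 20)
mu, sd = X.mean(0), X.std(0)
Xn = (X - mu) / sd                    # z-score normalise each feature
c0, c1 = Xn[y == 0].mean(0), Xn[y == 1].mean(0)

def predict(roi):
    f = (first_order_features(roi) - mu) / sd
    return int(np.linalg.norm(f - c1) < np.linalg.norm(f - c0))

acc = np.mean([predict(make_roi(h)) == h for h in [0, 1] * 25])
print(f"held-out accuracy: {acc:.2f}")
```

Deep radiomics replaces the `first_order_features` step with features learned by a network directly from the voxels; the classification stage, and the need for held-out validation, stay the same.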
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada.
- Pierre Decazes
- Department of Nuclear Medicine, Henri Becquerel Centre, Rue d'Amiens - CS 11516 - 76038 Rouen Cedex 1, France; QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Amine Amyar
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France; General Electric Healthcare, Buc, France
- Su Ruan
- QuantIF-LITIS, Faculty of Medicine and Pharmacy, Research Building - 1st floor, 22 boulevard Gambetta, 76183 Rouen Cedex, France
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, USA; Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA, USA
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada; Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada
13
Tanabe S, Perkins EJ, Ono R, Sasaki H. Artificial intelligence in gastrointestinal diseases. Artif Intell Gastroenterol 2021; 2:69-76. [DOI: 10.35712/aig.v2.i3.69] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 04/09/2021] [Accepted: 06/04/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence (AI) applications are growing in medicine. It is important to understand the current state of AI applications before utilizing them in disease research and treatment. In this review, AI applications in the diagnosis and treatment of gastrointestinal diseases are studied and summarized. In most cases, AI studies had large amounts of data, including images, from which to learn to distinguish disease characteristics according to a human’s perspective. The detailed pros and cons of AI approaches should be investigated in advance to ensure the safe application of AI in medicine. Evidence suggests that the collaborative use of AI in both the diagnosis and treatment of diseases will increase the precision and effectiveness of medicine. Recent progress in genome technology, such as genome editing, provides a specific example in which AI has revealed the diagnostic and therapeutic possibilities of RNA detection and targeting.
Affiliation(s)
- Shihori Tanabe
- Division of Risk Assessment, Center for Biological Safety and Research, National Institute of Health Sciences, Kawasaki 210-9501, Japan
- Edward J Perkins
- Environmental Laboratory, US Army Engineer Research and Development Center, Vicksburg, MS 39180, United States
- Ryuichi Ono
- Division of Cellular and Molecular Toxicology, Center for Biological Safety and Research, National Institute of Health Sciences, Kawasaki 210-9501, Japan
- Hiroki Sasaki
- Department of Clinical Genomics, Fundamental Innovative Oncology Core, National Cancer Center Research Institute, Tokyo 104-0045, Japan