1
Amini M, Salimi Y, Hajianfar G, Mainta I, Hervier E, Sanaat A, Rahmim A, Shiri I, Zaidi H. Fully Automated Region-Specific Human-Perceptive-Equivalent Image Quality Assessment: Application to 18F-FDG PET Scans. Clin Nucl Med 2024; 49:1079-1090. [PMID: 39466652] [DOI: 10.1097/rlu.0000000000005526]
Abstract
INTRODUCTION We propose a fully automated framework to conduct region-wise image quality assessment (IQA) on whole-body 18F-FDG PET scans. This framework (1) can be valuable in daily clinical image acquisition procedures to instantly recognize low-quality scans for potential rescanning and/or image reconstruction, and (2) can make a significant impact on dataset collection for the development of artificial intelligence-driven 18F-FDG PET analysis models by rejecting low-quality images and those presenting with artifacts, toward building clean datasets. PATIENTS AND METHODS Two experienced nuclear medicine physicians separately evaluated the quality of 174 18F-FDG PET images from 87 patients, for each body region, based on a 5-point Likert scale. The body regions included the following: (1) the head and neck, including the brain, (2) the chest, (3) the chest-abdomen interval (diaphragmatic region), (4) the abdomen, and (5) the pelvis. Intrareader and interreader reproducibility of the quality scores was calculated using 39 randomly selected scans from the dataset. Using a binarized classification, images were dichotomized into low quality versus high quality for physician quality scores ≤3 versus >3, respectively. Taking the 18F-FDG PET/CT scans as input, our proposed fully automated framework applies 2 deep learning (DL) models on the CT images to perform region identification and whole-body contour extraction (excluding the extremities), then classifies the PET regions as low or high quality. For classification, 2 mainstream artificial intelligence-driven approaches, machine learning (ML) from radiomic features and DL, were investigated. All models were trained and evaluated on the scores attributed by each physician, and the average of the scores was reported. DL and radiomics-ML models were evaluated on the same test dataset.
The performance evaluation was carried out on the same test dataset for the radiomics-ML and DL models using the area under the curve, accuracy, sensitivity, and specificity, and compared using the DeLong test, with P values <0.05 regarded as statistically significant. RESULTS In the head and neck, chest, chest-abdomen interval, abdomen, and pelvis regions, the best models achieved area under the curve, accuracy, sensitivity, and specificity of [0.97, 0.95, 0.96, and 0.95], [0.85, 0.82, 0.87, and 0.76], [0.83, 0.76, 0.68, and 0.80], [0.73, 0.72, 0.64, and 0.77], and [0.72, 0.68, 0.70, and 0.67], respectively. In all regions, models achieved the highest performance when developed on the quality scores with higher intrareader reproducibility. Comparison of the DL and radiomics-ML models did not show any statistically significant differences, though the DL models showed overall improved trends. CONCLUSIONS We developed a fully automated, human-perceptive-equivalent model to conduct region-wise IQA over 18F-FDG PET images. Our analysis emphasizes the necessity of developing separate models for body regions and of performing data annotation based on multiple experts' consensus in IQA studies.
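The evaluation setup described above (binarizing 5-point Likert quality scores at ≤3 versus >3, then scoring a classifier with AUC, sensitivity, and specificity) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the physician scores and model probabilities are invented.

```python
# Sketch of the paper's binarized IQA evaluation (hypothetical data throughout).

def binarize_likert(scores, cutoff=3):
    """Map 5-point Likert quality scores to 1 (high quality, >cutoff) or 0 (low, <=cutoff)."""
    return [1 if s > cutoff else 0 for s in scores]

def auc(labels, probs):
    """Mann-Whitney AUC: probability a random positive case outranks a random negative."""
    pos = [p for l, p in zip(labels, probs) if l == 1]
    neg = [p for l, p in zip(labels, probs) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, preds):
    """Sensitivity (recall on positives) and specificity (recall on negatives)."""
    tp = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 1)
    tn = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 0)
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    fp = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: physician Likert scores and model probabilities (both invented)
likert = [5, 4, 2, 3, 1, 4]
labels = binarize_likert(likert)
probs = [0.9, 0.8, 0.3, 0.4, 0.1, 0.7]
preds = [1 if p >= 0.5 else 0 for p in probs]
```

The DeLong test the paper uses for comparing correlated AUCs involves covariance estimates beyond this sketch; the AUC above is the statistic it compares.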
Affiliation(s)
- Mehdi Amini
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ghasem Hajianfar
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ismini Mainta
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Elsa Hervier
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Amirhossein Sanaat
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Isaac Shiri
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
2
Dorri Giv M, Arabi H, Naseri S, Alipour Firouzabad L, Aghaei A, Askari E, Raeisi N, Saber Tanha A, Bakhshi Golestani Z, Dabbagh Kakhki AH, Dabbagh Kakhki VR. Evaluation of the prostate cancer and its metastases in the [68Ga]Ga-PSMA PET/CT images: deep learning method vs. conventional PET/CT processing. Nucl Med Commun 2024; 45:974-983. [PMID: 39224922] [DOI: 10.1097/mnm.0000000000001891]
Abstract
PURPOSE This study demonstrates the feasibility and benefits of using a deep learning-based approach for attenuation correction in [68Ga]Ga-PSMA PET scans. METHODS A dataset of 700 prostate cancer patients (mean age: 67.6 ± 5.9 years, range: 45-85 years) who underwent [68Ga]Ga-PSMA PET/computed tomography was collected. A deep learning model was trained to perform attenuation correction on these images. Quantitative accuracy was assessed using clinical data from 92 patients, comparing the deep learning-based attenuation correction (DLAC) to computed tomography-based PET attenuation correction (PET-CTAC) using mean error, mean absolute error, and root mean square error based on standard uptake value. Clinical evaluation was conducted by three specialists who performed a blinded assessment of lesion detectability and overall image quality in a subset of 50 subjects, comparing DLAC and PET-CTAC images. RESULTS The DLAC model yielded mean error, mean absolute error, and root mean square error values of -0.007 ± 0.032, 0.08 ± 0.033, and 0.252 ± 125 standard uptake value, respectively. Regarding lesion detection and image quality, DLAC showed superior performance in 16 of the 50 cases, while in 56% of the cases, the images generated by DLAC and PET-CTAC were found to have closely comparable quality and lesion detectability. CONCLUSION This study highlights significant improvements in image quality and lesion detection capabilities through the integration of DLAC in [68Ga]Ga-PSMA PET imaging. This innovative approach not only addresses challenges such as bladder radioactivity but also represents a promising method to minimize patient radiation exposure by integrating low-dose computed tomography and DLAC, ultimately improving diagnostic accuracy and patient outcomes.
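The three voxel-wise error metrics the study reports for DLAC versus CT-based attenuation correction reduce to simple aggregates over paired SUV values. A minimal sketch, with invented toy SUV numbers and no claim to match the authors' implementation:

```python
# Mean error, mean absolute error, and RMSE between predicted and reference SUV values.
import math

def error_metrics(suv_pred, suv_ref):
    """Return (mean error, mean absolute error, RMSE) for two equally long SUV lists."""
    diffs = [p - r for p, r in zip(suv_pred, suv_ref)]
    me = sum(diffs) / len(diffs)
    mae = sum(abs(d) for d in diffs) / len(diffs)
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return me, mae, rmse

# Toy SUV values (hypothetical)
me, mae, rmse = error_metrics([2.1, 3.0, 1.4], [2.0, 3.2, 1.5])
```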
Affiliation(s)
- Masoumeh Dorri Giv
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology and Medical Informatics, Geneva University Hospital, Geneva, Switzerland
- Shahrokh Naseri
- Department of Medical Physics, Faculty of Medicine, Mashhad University of Medical Science, Mashhad, Iran
- Leila Alipour Firouzabad
- Department of Radiation Technology, Radiation Biology Research Center, Iran University of Medical Sciences, Tehran, Iran
- Atena Aghaei
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
- Emran Askari
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
- Nasrin Raeisi
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
- Amin Saber Tanha
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
- Zahra Bakhshi Golestani
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
- Vahid Reza Dabbagh Kakhki
- Nuclear Medicine Research Center, Department of Nuclear Medicine, Ghaem Hospital, Mashhad University of Medical Science, Mashhad, Iran
3
Sanaat A, Hu Y, Boccalini C, Salimi Y, Mansouri Z, Teixeira EPA, Mathoux G, Garibotto V, Zaidi H. Tracer-Separator: A Deep Learning Model for Brain PET Dual-Tracer (18F-FDG and Amyloid) Separation. Clin Nucl Med 2024:00003072-990000000-01360. [PMID: 39468375] [DOI: 10.1097/rlu.0000000000005511]
Abstract
INTRODUCTION Multiplexed PET imaging could revolutionize clinical decision-making by simultaneously capturing data from multiple radiotracers in a single scan, enhancing diagnostic accuracy and patient comfort. Through transformer-based deep learning, this study underscores the potential of advanced imaging techniques to streamline diagnosis and improve patient outcomes. PATIENTS AND METHODS The research cohort consisted of 120 patients, spanning from cognitively unimpaired individuals to those with mild cognitive impairment, dementia, and other mental disorders. Patients underwent various imaging assessments, including 3D T1-weighted MRI, amyloid PET scans using either 18F-florbetapir (FBP) or 18F-flutemetamol (FMM), and 18F-FDG PET. Summed images of FMM/FBP and FDG were used as a proxy for simultaneous scanning of 2 different tracers. A SwinUNETR model, a convolution-free transformer architecture, was trained for image translation using a mean squared error loss function and 5-fold cross-validation. Visual evaluation involved assessing image similarity and amyloid status, comparing synthesized images with actual ones. Statistical analysis was conducted to determine the significance of differences. RESULTS Visual inspection of synthesized images revealed remarkable similarity to reference images across various clinical statuses. The mean centiloid bias for dementia, mild cognitive impairment, and healthy control subjects was 15.70 ± 29.78, 0.35 ± 33.68, and 6.52 ± 25.19, respectively, for the FBP tracer, and -6.85 ± 25.02, 4.23 ± 23.78, and 5.71 ± 21.72, respectively, for FMM. Clinical evaluation by 2 readers further confirmed the model's efficiency, with 97 FBP/FMM and 63 FDG synthesized images (from 120 subjects) found similar to ground truth diagnoses (rank 3), whereas 3 FBP/FMM and 15 FDG synthesized images were considered nonsimilar (rank 1).
Promising sensitivity, specificity, and accuracy were achieved in amyloid status assessment based on synthesized images, with an average sensitivity of 95 ± 2.5, specificity of 72.5 ± 12.5, and accuracy of 87.5 ± 2.5. Error distribution analyses provided valuable insights into error levels across brain regions, with most falling between -0.1 and +0.2 SUV ratio. Correlation analyses demonstrated strong associations between actual and synthesized images, particularly for FMM images (FBP: Y = 0.72X + 20.95, R2 = 0.54; FMM: Y = 0.65X + 22.77, R2 = 0.77). CONCLUSIONS This study demonstrated the potential of a novel convolution-free transformer architecture, SwinUNETR, for synthesizing realistic FDG and FBP/FMM images from summation scans mimicking simultaneous dual-tracer imaging.
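The agreement statistics quoted above (e.g., "FBP: Y = 0.72X + 20.95, R2 = 0.54") come from an ordinary least-squares fit of synthesized against actual values with a coefficient of determination. A self-contained sketch with invented centiloid values, not the authors' analysis code:

```python
# Ordinary least-squares line fit Y = aX + b with R^2, as used to compare actual (X)
# and synthesized (Y) quantitative values.

def ols_fit(x, y):
    """Return slope a, intercept b, and R^2 for y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    return a, b, r2

# Hypothetical actual vs. synthesized centiloid values
a, b, r2 = ols_fit([0.0, 10.0, 20.0, 30.0], [1.0, 11.0, 19.0, 31.0])
```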
Affiliation(s)
- Amirhossein Sanaat
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yiyi Hu
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Zahra Mansouri
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Gregory Mathoux
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
4
Hasanabadi S, Aghamiri SMR, Abin AA, Abdollahi H, Arabi H, Zaidi H. Enhancing Lymphoma Diagnosis, Treatment, and Follow-Up Using 18F-FDG PET/CT Imaging: Contribution of Artificial Intelligence and Radiomics Analysis. Cancers (Basel) 2024; 16:3511. [PMID: 39456604] [PMCID: PMC11505665] [DOI: 10.3390/cancers16203511]
Abstract
Lymphoma, encompassing a wide spectrum of immune system malignancies, presents significant complexities in early detection, management, and prognosis assessment, since it can mimic post-infectious/inflammatory diseases. The heterogeneous nature of lymphoma makes it challenging to definitively pinpoint valuable biomarkers for predicting tumor biology and selecting the most effective treatment strategies. Although molecular imaging modalities, such as positron emission tomography/computed tomography (PET/CT), specifically 18F-FDG PET/CT, hold significant importance in the diagnosis of lymphoma, prognostication, and assessment of treatment response, they still face significant challenges. Over the past few years, radiomics and artificial intelligence (AI) have surfaced as valuable tools for detecting subtle features within medical images that may not be easily discerned by visual assessment. The rapid expansion of AI and its application in medicine and radiomics is opening up new opportunities in the nuclear medicine field. Radiomics and AI capabilities seem to hold promise across various clinical scenarios related to lymphoma. Nevertheless, the need for more extensive prospective trials is evident to substantiate their reliability and standardize their applications. This review aims to provide a comprehensive perspective on the current literature regarding the application of AI and radiomics to 18F-FDG PET/CT in the management of lymphoma patients.
Affiliation(s)
- Setareh Hasanabadi
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran 1983969411, Iran; (S.H.); (S.M.R.A.)
- Seyed Mahmud Reza Aghamiri
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran 1983969411, Iran; (S.H.); (S.M.R.A.)
- Ahmad Ali Abin
- Faculty of Computer Science and Engineering, Shahid Beheshti University, Tehran 1983969411, Iran
- Hamid Abdollahi
- Department of Radiology, University of British Columbia, Vancouver, BC V5Z 1M9, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen, The Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, 500 Odense, Denmark
- University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary
5
Cheng L, Gao H, Wang Z, Guo L, Wang X, Jin G. Prospective study of dual-phase 99mTc-MIBI SPECT/CT nomogram for differentiating non-small cell lung cancer from benign pulmonary lesions. Eur J Radiol 2024; 179:111657. [PMID: 39163806] [DOI: 10.1016/j.ejrad.2024.111657]
Abstract
OBJECTIVES To establish and validate a technetium 99m sestamibi (99mTc-MIBI) single-photon emission computed tomography/computed tomography (SPECT/CT) nomogram for predicting non-small cell lung cancer (NSCLC), to compare the diagnostic performance of the early and delayed SPECT/CT nomograms, and to compare the diagnostic performance of SPECT/CT radiomics models with single-modality SPECT and CT radiomics models. METHODS This prospective study included 119 lesions (NSCLC: n = 92, benign pulmonary lesions: n = 27) from 103 patients (mean age: 59.68 ± 8.94 years). Patients underwent dual-phase 99mTc-MIBI SPECT/CT imaging and were divided into training (n = 83) and validation (n = 36) cohorts. Logistic regression, support vector machine, random forest, and light-gradient boosting machine classifiers were trained to determine the optimal machine learning model. Nomograms for diagnosing NSCLC were then established by combining the radiomics score and clinical factors. RESULTS CYFRA21-1 was selected for constructing the clinical model. In early imaging, the areas under the curve (AUCs) of the clinical model, radiomics model, and nomogram were 0.571, 0.830, and 0.875, respectively. The nomogram performed better than the clinical model and similarly to the radiomics model (P=0.020, P=0.216), and there was no statistically significant difference in predictive performance between the radiomics model and the clinical model (P=0.103). In delayed imaging, the corresponding AUCs were 0.643, 0.888, and 0.893. The predictive performance of the nomogram was superior to the clinical model and comparable to the radiomics model (P=0.042, P=0.480), and the radiomics model also demonstrated superior diagnostic performance compared to the clinical model (P=0.049). Compared to early SPECT/CT results, the AUC values of the nomogram and radiomics models in the delayed phase were higher, although no statistical differences were found (P=0.831, P=0.568).
In delayed imaging, the AUCs of the CT and SPECT radiomics models were 0.696 and 0.768, respectively; the SPECT/CT radiomics model differed significantly from both CT and SPECT alone (P=0.042, P=0.038). CONCLUSION Dual-phase 99mTc-MIBI SPECT/CT nomograms and radiomics models can effectively predict NSCLC, providing an economical and non-invasive imaging method for its diagnosis; moreover, these findings provide a basis for early diagnosis and treatment strategies in NSCLC patients. Delayed-phase SPECT/CT imaging may offer greater practical value than early-phase imaging for diagnosing NSCLC. However, this novel approach necessitates further validation in larger, multicenter cohorts. CLINICAL RELEVANCE A radiomics nomogram based on SPECT/CT for discriminating NSCLC from benign lung lesions helps to aid early diagnosis and guide treatment. KEY POINTS Nomograms based on dual-phase SPECT/CT were constructed to discriminate between non-small cell lung cancer and benign lesions. The SPECT/CT radiomics model has better predictive performance than the single-modality SPECT and CT radiomics models.
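A nomogram of the kind described combines a radiomics score with a clinical variable (here CYFRA21-1, as in the study) through a logistic model. The sketch below is purely illustrative: the coefficients and inputs are invented, and this is not the authors' fitted model.

```python
# Hypothetical logistic nomogram: malignancy probability from a radiomics score
# and a CYFRA21-1 level. All weights are invented for illustration.
import math

def nomogram_probability(rad_score, cyfra211, w_rad=2.0, w_cyfra=0.5, bias=-3.0):
    """Predicted probability of NSCLC from a logistic combination of predictors."""
    logit = bias + w_rad * rad_score + w_cyfra * cyfra211
    return 1.0 / (1.0 + math.exp(-logit))

p_high = nomogram_probability(rad_score=2.0, cyfra211=4.0)  # suspicious inputs
p_low = nomogram_probability(rad_score=0.2, cyfra211=1.0)   # benign-looking inputs
```

A printed nomogram simply renders each weighted term as a point scale, so the logistic sum above is the underlying arithmetic.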
Affiliation(s)
- Liping Cheng
- Department of Nuclear Medicine, The Second Affiliated Hospital of Harbin Medical University, Harbin 150000, China
- Han Gao
- Department of Radiology, Taikang Xianlin Gulou Hospital, Nanjing 210000, China
- Zhensheng Wang
- Department of Hepatobiliary Surgery, The Second Affiliated Hospital of Harbin Medical University, Harbin 150000, China
- Lin Guo
- Department of Nuclear Medicine, The Second Affiliated Hospital of Harbin Medical University, Harbin 150000, China
- Xuehan Wang
- Department of Nuclear Medicine, The Second Affiliated Hospital of Harbin Medical University, Harbin 150000, China
- Gang Jin
- Department of Nuclear Medicine, The Second Affiliated Hospital of Harbin Medical University, Harbin 150000, China
6
Sharma V, Awate SP. Adversarial EM for variational deep learning: Application to semi-supervised image quality enhancement in low-dose PET and low-dose CT. Med Image Anal 2024; 97:103291. [PMID: 39121545] [DOI: 10.1016/j.media.2024.103291]
Abstract
In positron emission tomography (PET) and X-ray computed tomography (CT), reducing the radiation dose can cause significant degradation in image quality. For image quality enhancement in low-dose PET and CT, we propose a novel adversarial and variational deep neural network (DNN) framework relying on expectation-maximization (EM) based learning, termed adversarial EM (AdvEM). AdvEM uses an encoder-decoder architecture with a multiscale latent space, and generalized-Gaussian models enabling datum-specific robust statistical modeling in latent space and image space. Model robustness is further enhanced by including adversarial learning in the training protocol. Unlike typical variational-DNN learning, AdvEM samples the latent space from the posterior distribution using a Metropolis-Hastings scheme. Unlike existing schemes for PET or CT image enhancement, which train using pairs of low-dose images with their corresponding normal-dose versions, we propose a semi-supervised AdvEM (ssAdvEM) framework that enables learning using a small number of normal-dose images. AdvEM and ssAdvEM enable per-pixel uncertainty estimates for their outputs. Empirical analyses on real-world PET and CT data involving many baselines, out-of-distribution data, and ablation studies show the benefits of the proposed framework.
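The Metropolis-Hastings posterior sampling mentioned in the abstract can be illustrated in one dimension. This is a generic textbook sketch, not the AdvEM implementation: it targets an unnormalized log-density with a symmetric Gaussian random-walk proposal, the same acceptance rule AdvEM would apply in its latent space.

```python
# Minimal 1D Metropolis-Hastings sampler with a Gaussian random-walk proposal.
import math
import random

def metropolis_hastings(log_target, n_samples, step=1.0, x0=0.0, seed=0):
    """Draw n_samples from the (unnormalized) density exp(log_target)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)          # symmetric proposal
        log_accept = log_target(proposal) - log_target(x)
        if math.log(rng.random()) < log_accept:       # accept w.p. min(1, ratio)
            x = proposal
        samples.append(x)
    return samples

# Target: a standard normal "posterior" over a scalar latent variable
log_std_normal = lambda z: -0.5 * z * z
samples = metropolis_hastings(log_std_normal, n_samples=20000)
mean = sum(samples) / len(samples)
```

Because the proposal is symmetric, the Hastings correction cancels and only the target-density ratio enters the acceptance probability.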
Affiliation(s)
- Vatsala Sharma
- Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
- Suyash P Awate
- Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
7
Michail C, Liaparinos P, Kalyvas N, Kandarakis I, Fountos G, Valais I. Radiation Detectors and Sensors in Medical Imaging. Sensors (Basel) 2024; 24:6251. [PMID: 39409289] [PMCID: PMC11478476] [DOI: 10.3390/s24196251]
Abstract
The design and construction of medical imaging instrumentation are based on radiation sources and radiation detectors/sensors. This review focuses on the detectors and sensors of medical imaging systems. These systems are subdivided into various categories depending on their structure, the type of radiation they capture, how the radiation is measured, how the images are formed, and the medical goals they serve. With respect to medical goals, detectors fall into two major areas: (i) anatomical imaging, which mainly concerns the techniques of diagnostic radiology, and (ii) functional-molecular imaging, which mainly concerns nuclear medicine. An important parameter in the evaluation of detectors is the combination of the quality of the diagnostic result they offer and the radiation dose burden on the patient. The latter has to be minimized; thus, the input signal (radiation photon flux) must be kept at low levels. For this reason, the detective quantum efficiency (DQE), expressing signal-to-noise ratio transfer through an imaging system, is of primary importance. In diagnostic radiology, image quality is better than in nuclear medicine; however, in most cases, the dose is higher. On the other hand, nuclear medicine focuses on the detection of functional findings rather than on the accurate spatial determination of anatomical data. Detectors are integrated into projection or tomographic imaging systems and are based on the use of scintillators with optical sensors, photoconductors, or semiconductors. Analysis and modeling of such systems can be performed employing theoretical models developed in the framework of linear cascaded systems analysis (LCSA), as well as within signal detection theory (SDT) and information theory.
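The DQE mentioned above is defined as the squared ratio of output to input signal-to-noise ratio, DQE = SNR_out² / SNR_in²; for Poisson-limited input, SNR_in² equals the incident photon count. A minimal numeric sketch with invented example numbers:

```python
# Detective quantum efficiency from input and output signal-to-noise ratios.
import math

def dqe(snr_in, snr_out):
    """DQE = (SNR_out / SNR_in)^2; 1.0 would be an ideal, noise-free detector."""
    return (snr_out / snr_in) ** 2

# Example: 10,000 incident photons give a Poisson-limited SNR_in = sqrt(10000) = 100;
# suppose the detector's measured output SNR is 80 (hypothetical value).
snr_in = math.sqrt(10_000)
value = dqe(snr_in=snr_in, snr_out=80.0)
```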
Affiliation(s)
- Ioannis Kandarakis
- Radiation Physics, Materials Technology and Biomedical Imaging Laboratory, Department of Biomedical Engineering, University of West Attica, Ag. Spyridonos, 12210 Athens, Greece; (C.M.); (P.L.); (N.K.); (G.F.); (I.V.)
8
Apostolopoulos ID, Papandrianos NI, Apostolopoulos DJ, Papageorgiou E. Between Two Worlds: Investigating the Intersection of Human Expertise and Machine Learning in the Case of Coronary Artery Disease Diagnosis. Bioengineering (Basel) 2024; 11:957. [PMID: 39451333] [PMCID: PMC11504143] [DOI: 10.3390/bioengineering11100957]
Abstract
Coronary artery disease (CAD) presents a significant global health burden, with early and accurate diagnostics crucial for effective management and treatment strategies. This study evaluates the efficacy of human evaluators compared to a Random Forest (RF) machine learning model in predicting CAD risk. It investigates the impact of incorporating human clinical judgments into the RF model's predictive capabilities. We recruited 606 patients from the Department of Nuclear Medicine at the University Hospital of Patras, Greece, from 16 February 2018 to 28 February 2022. Clinical data inputs included age, sex, comprehensive cardiovascular history (including prior myocardial infarction and revascularisation), CAD predisposing factors (such as hypertension, dyslipidemia, smoking, diabetes, and peripheral arteriopathy), baseline ECG abnormalities, and symptomatic descriptions ranging from asymptomatic states to angina-like symptoms and dyspnea on exertion. The diagnostic accuracies of human evaluators and the RF model (when trained with datasets inclusive of human judges' assessments) were comparable at 79% and 80.17%, respectively. However, the performance of the RF model notably declined to 73.76% when human clinical judgments were excluded from its training dataset. These results highlight a potential synergistic relationship between human expertise and advanced algorithmic predictions, suggesting a hybrid approach as a promising direction for enhancing CAD diagnostics.
Affiliation(s)
- Ioannis D. Apostolopoulos
- Department of Energy Systems, University of Thessaly, Gaiopolis Campus, 41500 Larisa, Greece; (I.D.A.); (N.I.P.)
- Nikolaos I. Papandrianos
- Department of Energy Systems, University of Thessaly, Gaiopolis Campus, 41500 Larisa, Greece; (I.D.A.); (N.I.P.)
- Elpiniki Papageorgiou
- Department of Energy Systems, University of Thessaly, Gaiopolis Campus, 41500 Larisa, Greece; (I.D.A.); (N.I.P.)
9
Chin M, Jafaritadi M, Franco AB, Nasir Ullah M, Chinn G, Innes D, Levin CS. Self-normalization for a 1 mm3 resolution clinical PET system using deep learning. Phys Med Biol 2024; 69:175004. [PMID: 39084640] [DOI: 10.1088/1361-6560/ad69fb]
Abstract
Objective. This work proposes, for the first time, an image-based end-to-end self-normalization framework for positron emission tomography (PET) using conditional generative adversarial networks (cGANs). Approach. We evaluated different approaches by exploring each of the following three methodologies. First, we used images that were either unnormalized or corrected for geometric factors, which encompass all time-invariant factors, as input data types. Second, we set the input tensor shape as either a single axial slice (2D) or three contiguous axial slices (2.5D). Third, we chose either Pix2Pix or polarized self-attention (PSA) Pix2Pix, which we developed for this work, as the deep learning network. The targets for all approaches were the axial slices of images normalized using the direct normalization method. We performed Monte Carlo simulations of ten voxelized phantoms with the SimSET simulation tool and produced 26,000 pairs of axial image slices for training and testing. Main results. The results showed that 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the best performance among all the methods we tested. All approaches improved the general image quality figures of merit, peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM), by ∼15% to ∼55%, and 2.5D PSA Pix2Pix showed the highest PSNR (28.074) and SSIM (0.921). Lesion detectability, measured with region of interest (ROI) PSNR, SSIM, normalized contrast recovery coefficient, and contrast-to-noise ratio, was generally improved for all approaches, and 2.5D PSA Pix2Pix trained with geometric-factors-corrected input images achieved the highest ROI PSNR (28.920) and SSIM (0.973). Significance. This study demonstrates the potential of an image-based end-to-end self-normalization framework using cGANs for improving PET image quality and lesion detectability without the need for separate normalization scans.
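The PSNR figure of merit quoted above compares a reconstructed image against its reference through the mean squared error. A minimal sketch with invented pixel values, not the paper's evaluation code (SSIM involves local statistics and is omitted here):

```python
# Peak signal-to-noise ratio in dB between two equally shaped pixel lists.
import math

def psnr(reference, test, data_range=1.0):
    """PSNR = 10*log10(data_range^2 / MSE); higher is better, identical images give inf."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(data_range ** 2 / mse)

# Toy pixel values in [0, 1] (hypothetical)
value = psnr([0.2, 0.5, 0.8, 1.0], [0.21, 0.48, 0.80, 0.97])
```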
Affiliation(s)
- Myungheon Chin
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Department of Radiology, Stanford University, Stanford, CA, United States of America
- Mojtaba Jafaritadi
- Department of Radiology, Stanford University, Stanford, CA, United States of America
- Andrew B Franco
- Department of Mechanical Engineering, Stanford University, Stanford, CA, United States of America
- Muhammad Nasir Ullah
- Department of Radiology, Stanford University, Stanford, CA, United States of America
- Garry Chinn
- Department of Radiology, Stanford University, Stanford, CA, United States of America
- Derek Innes
- Department of Radiology, Stanford University, Stanford, CA, United States of America
- Craig S Levin
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Department of Radiology, Stanford University, Stanford, CA, United States of America
- Department of Physics, Stanford University, Stanford, CA, United States of America
- Department of Bioengineering, Stanford University, Stanford, CA, United States of America
10
Weyts K, Lequesne J, Johnson A, Curcio H, Parzy A, Coquan E, Lasnon C. The impact of introducing deep learning based [18F]FDG PET denoising on EORTC and PERCIST therapeutic response assessments in digital PET/CT. EJNMMI Res 2024; 14:72. [PMID: 39126532] [DOI: 10.1186/s13550-024-01128-z]
Abstract
BACKGROUND [18F]FDG PET denoising by SubtlePET™ using deep learning artificial intelligence (AI) was previously found to induce slight modifications in lesion and reference organ quantification and in lesion detection. As a next step, we aimed to evaluate its clinical impact on [18F]FDG PET solid tumour treatment response assessments, comparing "standard PET" to "AI denoised half-duration PET" ("AI PET") during follow-up. RESULTS 110 patients referred for baseline and follow-up standard digital [18F]FDG PET/CT were prospectively included. "Standard" EORTC and, if applicable, PERCIST response classifications by 2 readers between baseline standard PET1 and follow-up standard PET2, serving as the gold standard, were compared to "mixed" classifications between standard PET1 and AI PET2 (group 1; n = 64), or between AI PET1 and standard PET2 (group 2; n = 46). Separate classifications were established using either standardized uptake values from ultra-high definition PET with or without AI denoising (simplified to "UHD") or EANM Research Ltd v2 (EARL2)-compliant values (by Gaussian filtering in standard PET and using the same filter in AI PET). Overall, pooling both study groups, at least one mixed vs. standard classification (EORTC or PERCIST, UHD or EARL2) was discordant in 11/110 (10%) patients, with 369/397 (93%) concordant classifications and an unweighted Cohen's kappa of 0.86 (95% CI: 0.78-0.94). These modified mixed vs. standard classifications could have impacted management in 2% of patients. CONCLUSIONS Although comparing similar PET images is preferable for therapy response assessment, the comparison between a standard [18F]FDG PET and an AI denoised half-duration PET is feasible and seems clinically satisfactory.
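The response classifications compared above ultimately reduce to the percent change in lesion uptake between baseline and follow-up. The sketch below uses the commonly cited PERCIST convention (≥30% SULpeak decrease for partial metabolic response, ≥30% increase for progression); the full criteria include additional conditions (absolute SUL change, new lesions) not modeled here, and this is not the study's code.

```python
# Simplified PERCIST-style classification from baseline and follow-up SULpeak values.

def percent_change(baseline, follow_up):
    """Percent change of follow-up uptake relative to baseline."""
    return 100.0 * (follow_up - baseline) / baseline

def percist_class(sul_baseline, sul_follow_up):
    """Classify response from the SULpeak percent change (simplified thresholds)."""
    change = percent_change(sul_baseline, sul_follow_up)
    if change <= -30.0:
        return "PMR"   # partial metabolic response
    if change >= 30.0:
        return "PMD"   # progressive metabolic disease
    return "SMD"       # stable metabolic disease

# Hypothetical baseline/follow-up SULpeak pairs
responses = [percist_class(8.0, 4.0), percist_class(5.0, 5.5), percist_class(4.0, 6.0)]
```

A discordance of the kind the paper counts would occur whenever the standard and AI-denoised follow-up SULpeak values fall on opposite sides of one of these thresholds.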
Affiliation(s)
- Kathleen Weyts: Nuclear Medicine Department, François Baclesse Comprehensive Cancer Centre, UNICANCER, Caen, 3 Avenue du General Harris, BP 45026, Caen Cedex 5, 14076, France
- Justine Lequesne: Biostatistics Department, François Baclesse Comprehensive Cancer Centre, UNICANCER, Caen, France
- Alison Johnson: Medical Oncology Department, François Baclesse Comprehensive Cancer Centre, UNICANCER, Caen, France
- Hubert Curcio: Medical Oncology Department, François Baclesse Comprehensive Cancer Centre, UNICANCER, Caen, France
- Aurélie Parzy: Medical Oncology Department, François Baclesse Comprehensive Cancer Centre, UNICANCER, Caen, France
- Elodie Coquan: Medical Oncology Department, François Baclesse Comprehensive Cancer Centre, UNICANCER, Caen, France
- Charline Lasnon: Nuclear Medicine Department, François Baclesse Comprehensive Cancer Centre, UNICANCER, Caen, 3 Avenue du General Harris, BP 45026, Caen Cedex 5, 14076, France; UNICAEN, INSERM 1086 ANTICIPE, Normandy University, Caen, France

11
Kawakubo M, Nagao M, Yamamoto A, Kaimoto Y, Nakao R, Kawasaki H, Iwaguchi T, Inoue A, Kaneko K, Sakai A, Sakai S. Gated SPECT-Derived Myocardial Strain Estimated From Deep-Learning Image Translation Validated From N-13 Ammonia PET. Acad Radiol 2024:S1076-6332(24)00433-1. [PMID: 39095261 DOI: 10.1016/j.acra.2024.06.047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2024] [Revised: 06/24/2024] [Accepted: 06/27/2024] [Indexed: 08/04/2024]
Abstract
RATIONALE AND OBJECTIVES This study investigated the use of deep learning-generated virtual positron emission tomography (PET)-like gated single-photon emission tomography (SPECT-VP) for assessing myocardial strain, overcoming limitations of conventional SPECT. MATERIALS AND METHODS SPECT-to-PET translation models for short-axis, horizontal, and vertical long-axis planes were trained using image pairs from the same patients in stress (720 image pairs from 18 patients) and resting states (920 image pairs from 23 patients). Patients without ejection-fraction changes between SPECT and PET were selected for training. We independently analyzed circumferential strains from short-axis-gated SPECT, PET, and model-generated SPECT-VP images using a feature-tracking algorithm. Longitudinal strains were similarly measured from horizontal and vertical long-axis images. Intraclass correlation coefficients (ICCs; two-way random, single measures) were calculated between PET-derived strains and those from SPECT or SPECT-VP. ICCs (95% confidence intervals) were defined as excellent (≥0.75), good (0.60-0.74), moderate (0.40-0.59), or poor (≤0.39). RESULTS Moderate ICCs were observed for SPECT-derived stressed circumferential strains (0.56 [0.41-0.69]). Excellent ICCs were observed for SPECT-VP-derived stressed circumferential strains (0.78 [0.68-0.85]). Excellent ICCs of stressed longitudinal strains from horizontal and vertical long axes, derived from SPECT and SPECT-VP, were observed (0.83 [0.73-0.90], 0.91 [0.85-0.94]). CONCLUSION Deep-learning SPECT-to-PET transformation improves circumferential strain measurement accuracy using standard-gated SPECT. Furthermore, the possibility of applying longitudinal strain measurements via both PET and SPECT-VP was demonstrated. This study provides preliminary evidence that SPECT-VP obtained from standard-gated SPECT with postprocessing potentially adds clinical value through PET-equivalent myocardial strain analysis without increasing the patient burden.
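The agreement metric used here, the two-way random, single-measures ICC, can be sketched in plain Python from its mean-squares definition (toy strain values for illustration only; the study's exact variant and confidence intervals would come from standard statistical software):

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    ratings: one row per subject; columns are the k raters/methods."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(r[j] for r in ratings) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between methods
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# hypothetical circumferential strains (%): column 0 = PET, column 1 = SPECT-VP
strains = [[-18.2, -17.9], [-12.5, -13.4], [-21.0, -20.1], [-9.8, -10.5], [-15.3, -15.0]]
print(round(icc_2_1(strains), 2))
```

Perfectly matching methods yield an ICC of 1.0; disagreement between methods inflates the error and method mean squares and pulls the coefficient down.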
Affiliation(s)
- Masateru Kawakubo: Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Fukuoka, Japan
- Michinobu Nagao: Department of Diagnostic Imaging & Nuclear Medicine, Tokyo Women's Medical University, Tokyo, Japan
- Atsushi Yamamoto: Department of Diagnostic Imaging & Nuclear Medicine, Tokyo Women's Medical University, Tokyo, Japan
- Yoko Kaimoto: Department of Radiology, Tokyo Women's Medical University, Tokyo, Japan
- Risako Nakao: Department of Cardiology, Tokyo Women's Medical University, Tokyo, Japan
- Hiroshi Kawasaki: Department of Advanced Information Technology, Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan
- Takafumi Iwaguchi: Department of Advanced Information Technology, Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan
- Akihiro Inoue: Department of Diagnostic Imaging & Nuclear Medicine, Tokyo Women's Medical University, Tokyo, Japan
- Koichiro Kaneko: Department of Diagnostic Imaging & Nuclear Medicine, Tokyo Women's Medical University, Tokyo, Japan
- Akiko Sakai: Department of Cardiology, Tokyo Women's Medical University, Tokyo, Japan
- Shuji Sakai: Department of Diagnostic Imaging & Nuclear Medicine, Tokyo Women's Medical University, Tokyo, Japan

12
Dagnew TM, Tseng CEJ, Yoo CH, Makary MM, Goodheart AE, Striar R, Meyer TN, Rattray AK, Kang L, Wolf KA, Fiedler SA, Tocci D, Shapiro H, Provost S, Sultana E, Liu Y, Ding W, Chen P, Kubicki M, Shen S, Catana C, Zürcher NR, Wey HY, Hooker JM, Weiss RD, Wang C. Toward AI-driven neuroepigenetic imaging biomarker for alcohol use disorder: A proof-of-concept study. iScience 2024; 27:110159. [PMID: 39021792 PMCID: PMC11253155 DOI: 10.1016/j.isci.2024.110159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2023] [Revised: 04/13/2024] [Accepted: 05/29/2024] [Indexed: 07/20/2024] Open
Abstract
Alcohol use disorder (AUD) is a disorder of clinical and public health significance requiring novel and improved therapeutic solutions. Both environmental and genetic factors play a significant role in its pathophysiology. However, the underlying epigenetic molecular mechanisms that link the gene-environment interaction in AUD remain largely unknown. In this proof-of-concept study, we showed, for the first time, the neuroepigenetic biomarker capability of non-invasive imaging of class I histone deacetylase (HDAC) epigenetic enzymes in the in vivo brain for distinguishing AUD patients from healthy controls using a machine learning approach in the context of precision diagnosis. Eleven AUD patients and 16 age- and sex-matched healthy controls completed a simultaneous positron emission tomography-magnetic resonance (PET/MR) scan with the HDAC-binding radiotracer [11C]Martinostat. Our results showed lower HDAC expression in the anterior cingulate region in AUD. Furthermore, by applying genetic algorithm feature selection, we identified five brain regions whose combined [11C]Martinostat standardized uptake value ratio (SUVR) features could reliably classify AUD vs. controls. We validated their classification reliability using a support vector machine classifier. These findings inform the potential of in vivo HDAC imaging biomarkers coupled with machine learning tools in the objective diagnosis and molecular translation of AUD, which could complement the current Diagnostic and Statistical Manual of Mental Disorders (DSM)-based intervention to propel precision medicine forward.
Affiliation(s)
- Tewodros Mulugeta Dagnew: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Chieh-En J. Tseng: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Chi-Hyeon Yoo: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Meena M. Makary: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Systems and Biomedical Engineering Department, Cairo University, Giza, Egypt
- Anna E. Goodheart: Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- Robin Striar: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Tyler N. Meyer: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Anna K. Rattray: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Leyi Kang: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Kendall A. Wolf: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Stephanie A. Fiedler: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Darcy Tocci: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Hannah Shapiro: Division of Alcohol, Drugs, and Addiction, McLean Hospital, Belmont, MA, USA
- Scott Provost: Division of Alcohol, Drugs, and Addiction, McLean Hospital, Belmont, MA, USA
- Eleanor Sultana: Division of Alcohol, Drugs, and Addiction, McLean Hospital, Belmont, MA, USA
- Yan Liu: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Wei Ding: Department of Computer Science, University of Massachusetts Boston, Boston, MA, USA
- Ping Chen: Department of Engineering, University of Massachusetts Boston, Boston, MA, USA
- Marek Kubicki: Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Psychiatry Neuroimaging Laboratory, Departments of Psychiatry and Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Shiqian Shen: Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Ciprian Catana: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Nicole R. Zürcher: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Hsiao-Ying Wey: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jacob M. Hooker: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Roger D. Weiss: Department of Psychiatry, Harvard Medical School, Boston, MA, USA; Division of Alcohol, Drugs, and Addiction, McLean Hospital, Belmont, MA, USA
- Changning Wang: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA

13
Bouchareb Y, AlSaadi A, Zabah J, Jain A, Al-Jabri A, Phiri P, Shi JQ, Delanerolle G, Sirasanagandla SR. Technological Advances in SPECT and SPECT/CT Imaging. Diagnostics (Basel) 2024; 14:1431. [PMID: 39001321 PMCID: PMC11241697 DOI: 10.3390/diagnostics14131431] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2024] [Revised: 06/11/2024] [Accepted: 06/15/2024] [Indexed: 07/16/2024] Open
Abstract
Single-photon emission computed tomography/computed tomography (SPECT/CT) is a mature imaging technology with a dynamic role in the diagnosis and monitoring of a wide array of diseases. This paper reviews the technological advances, clinical impact, and future directions of SPECT and SPECT/CT imaging. The focus of this review is on signal amplifier devices, detector materials, camera head and collimator designs, image reconstruction techniques, and quantitative methods. Bulky photomultiplier tubes (PMTs) are being replaced by position-sensitive PMTs (PSPMTs), avalanche photodiodes (APDs), and silicon photomultipliers (SiPMs) to achieve higher detection efficiency and improved energy resolution and spatial resolution. Most recently, new SPECT cameras have been designed for cardiac imaging. The new design involves using specialised collimators in conjunction with conventional sodium iodide detectors (NaI(Tl)) or an L-shaped camera head, which utilises semiconductor detector materials such as CdZnTe (CZT: cadmium-zinc-telluride). The clinical benefits of the new design include shorter scanning times, improved image quality, enhanced patient comfort, reduced claustrophobic effects, and decreased overall size, particularly in specialised clinical centres. These noticeable improvements are also attributed to the implementation of resolution-recovery iterative reconstructions. Immense efforts have been made to establish SPECT and SPECT/CT imaging as quantitative tools by incorporating camera-specific modelling. Moreover, this review includes clinical examples in oncology, neurology, cardiology, musculoskeletal, and infection imaging, demonstrating the impact of these advancements on clinical practice in radiology and molecular imaging departments.
Affiliation(s)
- Yassine Bouchareb: Department of Radiology & Molecular Imaging, College of Medicine and Health Sciences, Sultan Qaboos University, Muscat 123, Oman
- Afrah AlSaadi: Department of Radiology & Molecular Imaging, College of Medicine and Health Sciences, Sultan Qaboos University, Muscat 123, Oman
- Jawa Zabah: Department of Radiology & Molecular Imaging, Sultan Qaboos University Hospital, Muscat 123, Oman
- Anjali Jain: Sultan Qaboos Comprehensive Cancer Care and Research Centre, Department of Radiology, Muscat 123, Oman
- Aziza Al-Jabri: Department of Radiology & Molecular Imaging, Sultan Qaboos University Hospital, Muscat 123, Oman
- Peter Phiri: Southern Health NHS Foundation Trust, Southampton SO40 2RZ, UK; Psychology Department, Faculty of Environmental and Life Sciences, University of Southampton, Southampton SO17 1BJ, UK
- Jian Qing Shi: Southern Health NHS Foundation Trust, Southampton SO40 2RZ, UK; Southern University of Science and Technology, Shenzhen 518055, China
- Gayathri Delanerolle: Southern Health NHS Foundation Trust, Southampton SO40 2RZ, UK; University of Birmingham, Birmingham, UK
- Srinivasa Rao Sirasanagandla: Department of Human & Clinical Anatomy, College of Medicine and Health Sciences, Sultan Qaboos University, Muscat 123, Oman

14
Kim JH, Jung HS, Lee SE, Hou JU, Kwon YS. Improving difficult direct laryngoscopy prediction using deep learning and minimal image analysis: a single-center prospective study. Sci Rep 2024; 14:14209. [PMID: 38902319 PMCID: PMC11190276 DOI: 10.1038/s41598-024-65060-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2023] [Accepted: 06/17/2024] [Indexed: 06/22/2024] Open
Abstract
Accurate prediction of difficult direct laryngoscopy (DDL) is essential to ensure optimal airway management and patient safety. The present study proposed an AI model that would accurately predict DDL using a small number of bedside pictures of the patient's face and neck taken simply with a smartphone. In this prospective single-center study, adult patients scheduled for endotracheal intubation under general anesthesia were included. Patient pictures were obtained in frontal, lateral, frontal-neck extension, and open mouth views. DDL prediction was performed using a deep learning model based on the EfficientNet-B5 architecture, incorporating picture view information through multitask learning. We collected 18,163 pictures from 3053 patients. After under-sampling to achieve a 1:1 image ratio of DDL to non-DDL, the model was trained and validated with a dataset of 6616 pictures from 1283 patients. The deep learning model achieved a receiver operating characteristic area under the curve of 0.81-0.88 and an F1-score of 0.72-0.81 for DDL prediction. Including picture view information improved the model's performance. Gradient-weighted class activation mapping revealed that neck and chin characteristics in frontal and lateral views are important factors in DDL prediction. The deep learning model we developed effectively predicts DDL and requires only a small set of patient pictures taken with a smartphone. The method is practical and easy to implement.
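The 1:1 class balancing step described above can be sketched as random under-sampling of the majority class (illustrative only; hypothetical toy records are used, and study details such as patient-level grouping of pictures are not modelled here):

```python
import random

def undersample_to_balance(items, label_of, seed=0):
    """Randomly drop majority-class items until the two classes are 1:1."""
    rng = random.Random(seed)
    pos = [x for x in items if label_of(x)]
    neg = [x for x in items if not label_of(x)]
    minority = min(len(pos), len(neg))
    balanced = rng.sample(pos, minority) + rng.sample(neg, minority)
    rng.shuffle(balanced)
    return balanced

# toy records: (picture_id, is_DDL); 20 DDL vs. 80 non-DDL
records = [(i, i % 5 == 0) for i in range(100)]
balanced = undersample_to_balance(records, lambda r: r[1])
print(len(balanced))  # → 40
```

Fixing the random seed makes the sampled subset reproducible across runs.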
Affiliation(s)
- Jong-Ho Kim: Division of Big Data and Artificial Intelligence, Institute of New Frontier Research, Chuncheon Sacred Heart Hospital, Hallym University College of Medicine, Chuncheon, 24253, Republic of Korea
- Hee-Sun Jung: Division of Software, Hallym University, 1, Hallymdaehak-gil, Chuncheon-si, Gangwon-do, 24252, Republic of Korea
- So-Eun Lee: Department of Intelligence Computing, Hanyang University, Seoul, Republic of Korea
- Jong-Uk Hou: Division of Software, Hallym University, 1, Hallymdaehak-gil, Chuncheon-si, Gangwon-do, 24252, Republic of Korea
- Young-Suk Kwon: Division of Big Data and Artificial Intelligence, Institute of New Frontier Research, Chuncheon Sacred Heart Hospital, Hallym University College of Medicine, Chuncheon, 24253, Republic of Korea; Department of Anesthesiology and Pain Medicine, Chuncheon Sacred Heart Hospital, 77 Sakju-ro, Chuncheon-si, Gangwon-do, 24253, Republic of Korea

15
Arabi H, Zaidi H. Contrastive Learning vs. Self-Learning vs. Deformable Data Augmentation in Semantic Segmentation of Medical Images. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024:10.1007/s10278-024-01159-x. [PMID: 38858260 DOI: 10.1007/s10278-024-01159-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/24/2024] [Revised: 05/23/2024] [Accepted: 05/24/2024] [Indexed: 06/12/2024]
Abstract
To develop a robust segmentation model, encoding the underlying features/structures of the input data is essential to discriminate the target structure from the background. To enrich the extracted feature maps, contrastive learning and self-learning techniques are employed, particularly when the size of the training dataset is limited. In this work, we set out to investigate the impact of contrastive learning and self-learning on the performance of deep learning-based semantic segmentation. To this end, three different datasets were employed for brain tumor and hippocampus delineation from MR images (BraTS and Decathlon datasets, respectively) and kidney segmentation from CT images (Decathlon dataset). Since data augmentation techniques are also aimed at enhancing the performance of deep learning methods, a deformable data augmentation technique was proposed and compared with the contrastive learning and self-learning frameworks. The segmentation accuracy for the three datasets was assessed with and without applying data augmentation, contrastive learning, and self-learning to individually investigate the impact of these techniques. The self-learning and deformable data augmentation techniques exhibited comparable performance, with Dice indices of 0.913 ± 0.030 and 0.920 ± 0.022 for kidney segmentation, 0.890 ± 0.035 and 0.898 ± 0.027 for hippocampus segmentation, and 0.891 ± 0.045 and 0.897 ± 0.040 for lesion segmentation, respectively. These two approaches significantly outperformed contrastive learning and the original model, which achieved Dice indices of 0.871 ± 0.039 and 0.868 ± 0.042 for kidney segmentation, 0.872 ± 0.045 and 0.865 ± 0.048 for hippocampus segmentation, and 0.870 ± 0.049 and 0.860 ± 0.058 for lesion segmentation, respectively. The combination of self-learning with deformable data augmentation led to a robust segmentation model with no outliers in the outcomes. This work demonstrated the beneficial impact of self-learning and deformable data augmentation on organ and lesion segmentation, where no additional training datasets are needed.
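The Dice index used throughout these comparisons reduces to a short function over binary masks (a generic sketch with toy masks, not the authors' evaluation code):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat sequences)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))  # overlap voxels
    size = sum(mask_a) + sum(mask_b)                       # total foreground
    return 2 * inter / size if size else 1.0               # empty masks agree

pred = [0, 1, 1, 1, 0, 0, 1, 0]  # predicted segmentation
true = [0, 1, 1, 0, 0, 1, 1, 0]  # reference segmentation
print(dice(pred, true))  # → 0.75
```

A Dice index of 1.0 means perfect overlap; the ~0.9 values reported above indicate close but imperfect agreement with the reference delineations.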
Affiliation(s)
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, DK-500, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary

16
Correia PMM, Cruzeiro B, Dias J, Encarnação PMCC, Ribeiro FM, Rodrigues CA, Silva ALM. Precise positioning of gamma ray interactions in multiplexed pixelated scintillators using artificial neural networks. Biomed Phys Eng Express 2024; 10:045038. [PMID: 38779912 DOI: 10.1088/2057-1976/ad4f73] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2024] [Accepted: 05/22/2024] [Indexed: 05/25/2024]
Abstract
Introduction. The positioning of γ-ray interactions in positron emission tomography (PET) detectors is commonly made through the evaluation of Anger logic flood histograms. Machine learning techniques, leveraging features extracted from the signal waveform, have demonstrated successful applications in addressing various challenges in PET instrumentation. Aim. This paper evaluates the use of artificial neural networks (NN) for γ-ray interaction positioning in pixelated scintillators coupled to a multiplexed array of silicon photomultipliers (SiPM). Methods. An array of 16 cerium-doped lutetium-based (LYSO) crystal pixels (cross-section 2 × 2 mm²) coupled to 16 SiPMs (S13360-1350) was used for the experimental setup. Data from each of the 16 LYSO pixels were recorded, for a total of 160,000 events. The detectors were irradiated by 511 keV annihilation γ-rays from a sodium-22 (22Na) source. Another LYSO crystal was used for electronic collimation. Features extracted from the signal waveform were used to train the model. Two models were tested: (i) a single multiple-class neural network (mcNN) with 16 possible outputs followed by a softmax, and (ii) 16 binary classification neural networks (bNN), each one specialized in identifying events that occurred in one position. Results. Both NN models showed a mean positioning accuracy above 85% on the evaluation dataset, although the mcNN is faster to train. Discussion. The method's accuracy is affected by the introduction of events that interacted in neighbouring crystals and were misclassified during dataset acquisition. Electronic collimation reduces this effect; however, results could be improved using a more complex acquisition setup, such as a light-sharing configuration. Conclusions. The comparison showed that mcNN and bNN can surpass Anger logic, demonstrating the feasibility of using these models in positioning procedures of future multiplexed detector systems in a linear configuration.
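The two decision rules being compared can be sketched as follows: the multiple-class network applies one 16-way softmax over its outputs, while the binary networks apply one sigmoid per crystal and the most confident crystal wins (toy logits for illustration; the real models operate on waveform features):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def mcnn_position(logits16):
    """Multiple-class head: one 16-way softmax, pick the most probable crystal."""
    probs = softmax(logits16)
    return max(range(len(probs)), key=probs.__getitem__)

def bnn_position(scores16):
    """16 binary heads: one sigmoid per crystal, pick the highest score."""
    sig = [1 / (1 + math.exp(-s)) for s in scores16]
    return max(range(len(sig)), key=sig.__getitem__)

logits = [0.1] * 16
logits[6] = 3.2  # waveform features point to crystal 6
print(mcnn_position(logits), bnn_position(logits))  # → 6 6
```

Both rules reduce to an argmax here; they differ in training (joint 16-way objective vs. 16 independent one-vs-rest objectives), which is what the paper compares.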
Affiliation(s)
- P M M Correia: Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
- B Cruzeiro: Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
- J Dias: Faculdade de Economia, CeBER, Universidade de Coimbra, Av. Dias da Silva, 165, 3004-512 Coimbra, Portugal; INESC-Coimbra, Universidade de Coimbra, Rua Sílvio Lima, Pólo II, 3030-290 Coimbra, Portugal
- P M C C Encarnação: Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
- F M Ribeiro: Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
- C A Rodrigues: Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal
- A L M Silva: Institute for Nanostructures, Nanomodelling and Nanofabrication (i3N), University of Aveiro, Campus Universitário de Santiago, 3810-193, Aveiro, Portugal

17
Ma KC, Mena E, Lindenberg L, Lay NS, Eclarinal P, Citrin DE, Pinto PA, Wood BJ, Dahut WL, Gulley JL, Madan RA, Choyke PL, Turkbey IB, Harmon SA. Deep learning-based whole-body PSMA PET/CT attenuation correction utilizing Pix-2-Pix GAN. Oncotarget 2024; 15:288-300. [PMID: 38712741 DOI: 10.18632/oncotarget.28583] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/08/2024] Open
Abstract
PURPOSE The number of sequential PET/CT studies oncology patients can undergo during their treatment follow-up course is limited by radiation dosage. We propose an artificial intelligence (AI) tool to produce attenuation-corrected PET (AC-PET) images from non-attenuation-corrected PET (NAC-PET) images to reduce the need for low-dose CT scans. METHODS A deep learning algorithm based on a 2D Pix-2-Pix generative adversarial network (GAN) architecture was developed from paired AC-PET and NAC-PET images. 18F-DCFPyL PSMA PET/CT studies from 302 prostate cancer patients were split into training, validation, and testing cohorts (n = 183, 60, and 59, respectively). Models were trained with two normalization strategies: Standard Uptake Value (SUV)-based and SUV-Nyul-based. Scan-level performance was evaluated by normalized mean square error (NMSE), mean absolute error (MAE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Lesion-level analysis was performed on regions of interest prospectively delineated by nuclear medicine physicians. SUV metrics were evaluated using the intraclass correlation coefficient (ICC), repeatability coefficient (RC), and linear mixed-effects modeling. RESULTS Median NMSE, MAE, SSIM, and PSNR were 13.26%, 3.59%, 0.891, and 26.82, respectively, in the independent test cohort. ICCs for SUVmax and SUVmean were 0.88 and 0.89, indicating a high correlation between original and AI-generated quantitative imaging markers. Lesion location, density (Hounsfield units), and lesion uptake were all shown to impact the relative error in generated SUV metrics (all p < 0.05). CONCLUSION The Pix-2-Pix GAN model for generating AC-PET demonstrates SUV metrics that highly correlate with the original images. AI-generated PET images show clinical potential for reducing the need for CT scans for attenuation correction while preserving quantitative markers and image quality.
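Three of the scan-level metrics reported above (NMSE, MAE, PSNR) have simple closed forms; a generic sketch over flattened voxel lists follows (SSIM is omitted, since it needs windowed statistics; the toy values and the `data_range` parameter are assumptions for illustration):

```python
import math

def image_metrics(ref, gen, data_range=None):
    """Scan-level NMSE, MAE, and PSNR between reference and generated images."""
    n = len(ref)
    sq_err = sum((r - g) ** 2 for r, g in zip(ref, gen))
    mse = sq_err / n
    nmse = sq_err / sum(r ** 2 for r in ref)          # normalized by signal energy
    mae = sum(abs(r - g) for r, g in zip(ref, gen)) / n
    rng = data_range if data_range is not None else max(ref) - min(ref)
    psnr = float("inf") if mse == 0 else 10 * math.log10(rng ** 2 / mse)
    return nmse, mae, psnr

ref = [4.0, 2.0, 0.0, 6.0]   # toy AC-PET voxels
gen = [3.0, 2.0, 1.0, 6.0]   # toy AI-generated voxels
nmse, mae, psnr = image_metrics(ref, gen)
print(round(nmse, 3), mae, round(psnr, 2))
```

Identical images give zero NMSE/MAE and an unbounded PSNR, which is why PSNR is usually reported alongside a bounded similarity measure such as SSIM.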
Affiliation(s)
- Kevin C Ma: Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA; Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Esther Mena: Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Liza Lindenberg: Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Nathan S Lay: Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA; Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Phillip Eclarinal: Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Deborah E Citrin: Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Peter A Pinto: Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Bradford J Wood: Center for Interventional Oncology, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- William L Dahut: Genitourinary Malignancies Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- James L Gulley: Center for Immuno-Oncology, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Ravi A Madan: Genitourinary Malignancies Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Peter L Choyke: Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA; Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Ismail Baris Turkbey: Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA; Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Stephanie A Harmon: Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA; Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA

18
Mansouri Z, Salimi Y, Akhavanallaf A, Shiri I, Teixeira EPA, Hou X, Beauregard JM, Rahmim A, Zaidi H. Deep transformer-based personalized dosimetry from SPECT/CT images: a hybrid approach for [177Lu]Lu-DOTATATE radiopharmaceutical therapy. Eur J Nucl Med Mol Imaging 2024; 51:1516-1529. [PMID: 38267686 PMCID: PMC11043201 DOI: 10.1007/s00259-024-06618-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2023] [Accepted: 01/15/2024] [Indexed: 01/26/2024]
Abstract
PURPOSE Accurate dosimetry is critical for ensuring the safety and efficacy of radiopharmaceutical therapies. In current clinical dosimetry practice, MIRD formalisms are widely employed. However, with the rapid advancement of deep learning (DL) algorithms, there has been an increasing interest in leveraging the calculation speed and automation capabilities for different tasks. We aimed to develop a hybrid transformer-based deep learning (DL) model that incorporates a multiple voxel S-value (MSV) approach for voxel-level dosimetry in [177Lu]Lu-DOTATATE therapy. The goal was to enhance the performance of the model to achieve accuracy levels closely aligned with Monte Carlo (MC) simulations, considered as the standard of reference. We extended our analysis to include MIRD formalisms (SSV and MSV), thereby conducting a comprehensive dosimetry study. METHODS We used a dataset consisting of 22 patients undergoing up to 4 cycles of [177Lu]Lu-DOTATATE therapy. MC simulations were used to generate reference absorbed dose maps. In addition, MIRD formalism approaches, namely, single S-value (SSV) and MSV techniques, were performed. A UNEt TRansformer (UNETR) DL architecture was trained using five-fold cross-validation to generate MC-based dose maps. Co-registered CT images were fed into the network as input, whereas the difference between MC and MSV (MC-MSV) was set as output. DL results are then integrated to MSV to revive the MC dose maps. Finally, the dose maps generated by MSV, SSV, and DL were quantitatively compared to the MC reference at both voxel level and organ level (organs at risk and lesions). RESULTS The DL approach showed slightly better performance (voxel relative absolute error (RAE) = 5.28 ± 1.32) compared to MSV (voxel RAE = 5.54 ± 1.4) and outperformed SSV (voxel RAE = 7.8 ± 3.02). Gamma analysis pass rates were 99.0 ± 1.2%, 98.8 ± 1.3%, and 98.7 ± 1.52% for DL, MSV, and SSV approaches, respectively. 
The computational time for MC was the highest (~2 days for a single-bed SPECT study), whereas the DL-based approach was the most time-efficient (3 s for a single-bed SPECT). Organ-wise analysis showed absolute percent errors of 1.44 ± 3.05%, 1.18 ± 2.65%, and 1.15 ± 2.5% in lesion-absorbed doses for the SSV, MSV, and DL approaches, respectively. CONCLUSION A hybrid transformer-based deep learning model was developed for fast and accurate dose map generation, outperforming the MIRD approaches, specifically in heterogeneous regions. The model achieved accuracy close to the MC gold standard and has potential for clinical implementation on large-scale datasets.
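The hybrid scheme above reduces to one addition at inference time: the network predicts the residual (MC − MSV), and that residual is added back to the fast MSV dose map. A minimal numpy sketch with toy numbers; the function names and values below are illustrative, not taken from the paper:

```python
import numpy as np

def hybrid_dose_map(msv_dose, predicted_residual):
    # Add the network-predicted (MC - MSV) residual back to the MSV map
    # to approximate the Monte Carlo dose map.
    return msv_dose + predicted_residual

def voxel_rae(estimate, reference, eps=1e-9):
    # Voxel-wise relative absolute error (%) against the MC reference.
    mask = reference > eps
    return 100.0 * np.mean(np.abs(estimate[mask] - reference[mask]) / reference[mask])

# Toy volumes: an MC reference and an MSV map with a uniform 5% underestimation.
mc = np.full((4, 4, 4), 2.0)    # "true" absorbed dose (arbitrary units)
msv = 0.95 * mc                 # fast MIRD MSV estimate
residual = mc - msv             # what an ideal network would predict from CT
dl = hybrid_dose_map(msv, residual)

rae_msv = voxel_rae(msv, mc)    # 5.0 %
rae_dl = voxel_rae(dl, mc)      # ~0 % for a perfect residual prediction
```

In practice the residual is of course imperfect, so the DL map sits between the MSV and MC accuracies, as the reported RAE figures show.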
Affiliation(s)
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Azadeh Akhavanallaf
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Eliluane Pirazzo Andrade Teixeira
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Xinchi Hou
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Jean-Mathieu Beauregard
- Cancer Research Centre and Department of Radiology and Nuclear Medicine, Université Laval, Quebec City, QC, Canada
- Arman Rahmim
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland.
- Department of Nuclear Medicine, University Medical Center Groningen, University of Groningen, 9700 RB, Groningen, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, DK-500, Odense, Denmark.
- University Research and Innovation Center, Óbuda University, Budapest, Hungary.
19
Curkic Kapidzic S, Gustafsson J, Larsson E, Jessen L, Sjögreen Gleisner K. Kidney dosimetry in [177Lu]Lu-DOTA-TATE therapy based on multiple small VOIs. Phys Med 2024; 120:103335. [PMID: 38555793 DOI: 10.1016/j.ejmp.2024.103335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/24/2023] [Revised: 01/24/2024] [Accepted: 03/21/2024] [Indexed: 04/02/2024] Open
Abstract
PURPOSE The aim was to investigate the use of multiple small VOIs for kidney dosimetry in [177Lu]Lu-DOTA-TATE therapy. METHOD The study was based on patient and simulated SPECT images in anthropomorphic geometries. Images were reconstructed using two reconstruction programs (the local LundaDose and the commercial Hermia) using OS-EM with and without resolution recovery (RR). Five small VOIs were placed to determine the average activity concentration (AC) in each kidney. The study consisted of three steps: (i) determination of the number of iterations required for AC convergence, based on simulated images; (ii) determination of recovery coefficients (RCs) for 2-mL VOIs using a separate set of simulated images; and (iii) assessment of operator variability in AC estimates for simulated and patient images. Five operators placed the VOIs, guided by (a) SPECT/CT with RR, (b) SPECT/CT without RR, and (c) CT only. For simulated images, time-integrated ACs (TIACs) were evaluated. For patient images, estimated ACs were compared with the results of a previous method based on whole-kidney VOIs. RESULTS Eight iterations with ten subsets were sufficient for both programs and reconstruction settings. Mean RCs (mean ± SD) were 1.03 ± 0.02 (LundaDose) and 1.10 ± 0.03 (Hermia) with RR, and 0.91 ± 0.03 (LundaDose) and 0.94 ± 0.03 (Hermia) without RR. The most stable and accurate AC estimates were obtained by placing the five 2-mL VOIs under SPECT/CT guidance with RR, applying them to images reconstructed without RR, and including an explicit RC for recovery correction. CONCLUSION The small-VOI method based on five 2-mL VOIs was found to be efficient and sufficiently accurate for kidney dosimetry in [177Lu]Lu-DOTA-TATE therapy.
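The recommended small-VOI recipe amounts to averaging the five 2-mL VOI means and dividing by an explicit recovery coefficient (RC). A sketch with hypothetical VOI means; only the RC of 0.91 (LundaDose without RR) comes from the abstract:

```python
import numpy as np

def kidney_ac(voi_means, recovery_coefficient):
    # Average the mean activity concentration over the small VOIs, then
    # divide by the RC to correct for resolution-induced signal loss.
    return float(np.mean(voi_means)) / recovery_coefficient

# Five hypothetical 2-mL VOI means (kBq/mL) from a reconstruction without RR.
voi_means = [91.0, 90.0, 92.0, 89.5, 92.5]
ac = kidney_ac(voi_means, recovery_coefficient=0.91)
```

With these toy numbers the five means average to 91.0 kBq/mL, and the RC of 0.91 restores an estimated true concentration of 100 kBq/mL.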
Affiliation(s)
- Selma Curkic Kapidzic
- Medical Radiation Physics, Lund, Lund University, Lund, Sweden; Radiation Physics, Department of Hematology, Oncology and Radiation Physics, Skåne University Hospital, Sweden.
- Erik Larsson
- Radiation Physics, Department of Hematology, Oncology and Radiation Physics, Skåne University Hospital, Sweden
- Lovisa Jessen
- Medical Radiation Physics, Lund, Lund University, Lund, Sweden
20
Azimi MS, Kamali-Asl A, Ay MR, Zeraatkar N, Hosseini MS, Sanaat A, Dadgar H, Arabi H. Deep learning-based partial volume correction in standard and low-dose positron emission tomography-computed tomography imaging. Quant Imaging Med Surg 2024; 14:2146-2164. [PMID: 38545051 PMCID: PMC10963814 DOI: 10.21037/qims-23-871] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Accepted: 11/20/2023] [Indexed: 08/05/2024]
Abstract
BACKGROUND Positron emission tomography (PET) imaging encounters the obstacle of partial volume effects arising from its limited intrinsic resolution, giving rise to (I) considerable bias, particularly for structures comparable in size to the point spread function (PSF) of the system; and (II) blurred image edges and blending of textures along borders. We set out to build a deep learning-based framework for predicting partial volume corrected full-dose (FD + PVC) images from either standard or low-dose (LD) PET images, without requiring any anatomical data, in order to provide a joint solution for partial volume correction and denoising of LD PET images. METHODS We trained a modified encoder-decoder U-Net network with standard-of-care or LD PET images as the input and FD + PVC images generated by six different PVC methods as the target. These six PVC approaches included geometric transfer matrix (GTM), multi-target correction (MTC), region-based voxel-wise correction (RBV), iterative Yang (IY), reblurred Van-Cittert (RVC), and Richardson-Lucy (RL). The proposed models were evaluated using standard criteria, such as peak signal-to-noise ratio (PSNR), root mean squared error (RMSE), structural similarity index (SSIM), relative bias, and absolute relative bias. RESULTS Different levels of error were observed across the PVC methods; errors were relatively smaller for the GTM (SSIM of 0.63 for LD and 0.29 for FD), IY (SSIM of 0.63 for LD and 0.67 for FD), RBV (SSIM of 0.57 for LD and 0.65 for FD), and RVC (SSIM of 0.89 for LD and 0.94 for FD) approaches. However, large quantitative errors were observed for the MTC (RMSE of 2.71 for LD and 2.45 for FD) and RL (RMSE of 5 for LD and 3.27 for FD) approaches. CONCLUSIONS We found that the proposed framework could effectively perform joint denoising and partial volume correction for PET images with either LD or FD input PET data.
When no magnetic resonance imaging (MRI) images are available, the developed deep learning models could be used for partial volume correction on LD or standard PET-computed tomography (PET-CT) scans as an image quality enhancement technique.
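Of the six target-generating PVC methods named above, the geometric transfer matrix (GTM) is the most compact to illustrate: observed regional means are modeled as a spill-over matrix applied to the true means, and corrected means follow by inverting that linear system. A minimal sketch; the 2-region matrix and numbers are toy values, not from the paper:

```python
import numpy as np

def gtm_correct(observed_means, W):
    # GTM models observed regional means as W @ true_means, where W[i, j]
    # is the fraction of region j's signal detected in region i; solving
    # the linear system recovers partial-volume-corrected regional means.
    return np.linalg.solve(W, observed_means)

# Toy 2-region example with 10% mutual spill-over between regions.
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
true_means = np.array([10.0, 2.0])
observed = W @ true_means             # blurred regional means: [9.2, 2.8]
recovered = gtm_correct(observed, W)  # back to [10.0, 2.0]
```

The U-Net in the paper learns to map PET images directly to such PVC targets, bypassing the explicit matrix construction at inference time.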
Affiliation(s)
- Mohammad-Saber Azimi
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Research Center for Molecular and Cellular Imaging (RCMCI), Advanced Medical Technologies and Equipment Institute (AMTEI), Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Alireza Kamali-Asl
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Mohammad-Reza Ay
- Research Center for Molecular and Cellular Imaging (RCMCI), Advanced Medical Technologies and Equipment Institute (AMTEI), Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Amirhossein Sanaat
- Division of Nuclear Medicine & Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habibollah Dadgar
- Cancer Research Center, Razavi Hospital, Imam Reza International University, Mashhad, Iran
- Hossein Arabi
- Division of Nuclear Medicine & Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
21
Azimi M, Kamali-Asl A, Ay MR, Zeraatkar N, Hosseini MS, Sanaat A, Arabi H. Attention-based deep neural network for partial volume correction in brain 18F-FDG PET imaging. Phys Med 2024; 119:103315. [PMID: 38377837 DOI: 10.1016/j.ejmp.2024.103315] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Revised: 10/04/2023] [Accepted: 02/05/2024] [Indexed: 02/22/2024] Open
Abstract
PURPOSE This work set out to propose an attention-based deep neural network to predict partial volume corrected images from PET data without utilizing anatomical information. METHODS An attention-based convolutional neural network (ATB-Net) was developed to predict PVE-corrected images in brain PET imaging by concentrating on anatomical regions of the brain. The performance of the deep neural network in performing PVC without anatomical images was evaluated for two PVC methods, the iterative Yang (IY) and reblurred Van-Cittert (RVC) approaches, which were applied to PET images to generate the reference images. The U-Net network for partial volume correction was trained twice, once without the attention module and once with the attention module concentrating on the anatomical brain regions. RESULTS Regarding the peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and root mean square error (RMSE) metrics, the proposed ATB-Net outperformed the standard U-Net model (without the attention module). For the RVC technique, the ATB-Net performed only marginally better than the U-Net; however, for the IY method, which is a region-wise method, the attention-based approach yielded a substantial improvement. Using the ATB-Net model, the mean absolute relative SUV difference and mean absolute relative bias improved by 38.02% and 91.60% for the RVC method, and by 77.47% and 79.68% for the IY method, respectively. CONCLUSIONS Our results suggest that, without using anatomical data, the attention-based DL model can perform PVC on PET images and could thus be employed for PVC in PET imaging.
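For context, the reblurred Van-Cittert (RVC) method used here to generate reference images is a classic iterative deconvolution. A minimal 1D sketch, assuming a known shift-invariant PSF; the kernel, iteration count, and toy signal are illustrative assumptions, not values from the paper:

```python
import numpy as np

def blur(x, psf):
    # Shift-invariant blur modeling the scanner PSF (1D for illustration).
    return np.convolve(x, psf, mode="same")

def reblurred_van_cittert(observed, psf, alpha=1.0, n_iter=20):
    # RVC iteration: the residual (observed - blur(estimate)) is itself
    # reblurred before being added back, which damps noise amplification
    # relative to the plain Van-Cittert update.
    estimate = observed.copy()
    for _ in range(n_iter):
        residual = observed - blur(estimate, psf)
        estimate = estimate + alpha * blur(residual, psf)
    return estimate

# Toy example: a single hot voxel blurred by a 3-tap PSF.
psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(15)
truth[7] = 1.0
observed = blur(truth, psf)
restored = reblurred_van_cittert(observed, psf)

sq_err_before = np.sum((observed - truth) ** 2)  # 0.375
sq_err_after = np.sum((restored - truth) ** 2)   # smaller: contrast partly recovered
```

Frequencies the PSF suppresses entirely cannot be recovered, which is why the restored peak approaches, but does not reach, the true value.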
Affiliation(s)
- MohammadSaber Azimi
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Alireza Kamali-Asl
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran.
- Mohammad-Reza Ay
- Research Center for Molecular and Cellular Imaging (RCMCI), Advanced Medical Technologies and Equipment Institute (AMTEI), Tehran University of Medical Sciences (TUMS), Tehran, Iran; Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran, Iran
- Amirhossein Sanaat
- Division of Nuclear Medicine & Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine & Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland.
22
Usanase N, Uzun B, Ozsahin DU, Ozsahin I. A look at radiation detectors and their applications in medical imaging. Jpn J Radiol 2024; 42:145-157. [PMID: 37733205 DOI: 10.1007/s11604-023-01486-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Accepted: 08/28/2023] [Indexed: 09/22/2023]
Abstract
The effectiveness and precision of disease diagnosis and treatment have increased thanks to developments in clinical imaging over the past few decades. Imaging modalities continue to advance steadily, and effective outcomes are emerging as a result of shorter scanning times and higher-resolution images. The choice of one clinical device over another is influenced by technical disparities among the equipment, such as the detection medium, scan time, patient comfort, cost-effectiveness, accessibility, sensitivity and specificity, and spatial resolution. Lately, computational algorithms, artificial intelligence (AI) in particular, have been incorporated into diagnostic and treatment techniques, including imaging systems. AI is a discipline comprising multiple computational and mathematical models. Its applications have aided the manipulation of sophisticated data in imaging processes and increased the accuracy and precision of imaging tests during diagnosis. Computed tomography (CT), positron emission tomography (PET), and single-photon emission computed tomography (SPECT), along with their corresponding radiation detectors, are reviewed in this study. This review provides an in-depth explanation of the above-mentioned imaging modalities as well as the radiation detectors that are their essential components. From the early development of these medical instruments until now, various modifications and improvements have been made, and more remain to be established for better performance; this calls for capturing the available information and recording the gaps to be filled for future advances.
Affiliation(s)
- Natacha Usanase
- Operational Research Centre in Healthcare, Near East University, Mersin 10, Nicosia, Turkey.
- Berna Uzun
- Operational Research Centre in Healthcare, Near East University, Mersin 10, Nicosia, Turkey
- Department of Statistics, Carlos III Madrid University, Getafe, Madrid, Spain
- Dilber Uzun Ozsahin
- Operational Research Centre in Healthcare, Near East University, Mersin 10, Nicosia, Turkey
- Medical Diagnostic Imaging Department, College of Health Sciences, University of Sharjah, Sharjah, United Arab Emirates
- Ilker Ozsahin
- Operational Research Centre in Healthcare, Near East University, Mersin 10, Nicosia, Turkey
- Brain Health Imaging Institute, Department of Radiology, Weill Cornell Medicine, New York, NY, 10065, USA
23
Karimipourfard M, Sina S, Mahani H, Alavi M, Yazdi M. Impact of deep learning-based multiorgan segmentation methods on patient-specific internal dosimetry in PET/CT imaging: A comparative study. J Appl Clin Med Phys 2024; 25:e14254. [PMID: 38214349 PMCID: PMC10860559 DOI: 10.1002/acm2.14254] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2023] [Revised: 10/29/2023] [Accepted: 11/30/2023] [Indexed: 01/13/2024] Open
Abstract
PURPOSE Accurate and fast multiorgan segmentation is essential for image-based internal dosimetry in nuclear medicine. While conventional manual PET image segmentation is widely used, it is both time-consuming and subject to human error. This study exploited 2D and 3D deep learning (DL) models; key organs in the trunk of the body were segmented and used as references for the networks. METHODS The pre-trained p2p-U-Net-GAN and HighRes3D architectures were fine-tuned with PET-only images as inputs. Additionally, the HighRes3D model was alternatively trained with PET/CT images. Evaluation metrics such as sensitivity (SEN), specificity (SPC), intersection over union (IoU), and Dice scores were used to assess the performance of the networks. The impact of the DL-assisted PET image segmentation methods was further assessed using Monte Carlo (MC)-derived S-values for internal dosimetry. A comparison with manual low-dose CT-aided segmentation of the PET images was also conducted. RESULTS Although both 2D and 3D models performed well, HighRes3D offered superior performance, with Dice scores higher than 0.90. Key evaluation metrics such as SEN, SPC, and IoU varied within the 0.89-0.93, 0.98-0.99, and 0.87-0.89 intervals, respectively, indicating the encouraging performance of the models. The percentage differences in the calculated S-values between the manual and DL segmentation methods varied between 0.1% and 6%, with the maximum attributed to the stomach. CONCLUSION The findings show that while incorporating the anatomical information provided by CT data offers superior performance in terms of Dice score, the performance of HighRes3D remains comparable without the extra CT channel. Both proposed DL-based methods provide automated and fast segmentation of whole-body PET/CT images with promising evaluation metrics. Between them, HighRes3D stands out with better performance and can therefore be the method of choice for 18F-FDG-PET image segmentation.
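The two evaluation quantities reported here are standard and compact to compute: the Dice score between a predicted and a reference organ mask, and the percent difference between S-values derived from each segmentation. A sketch with toy masks and hypothetical S-values (only the metric definitions are standard; the numbers are illustrative):

```python
import numpy as np

def dice_score(pred, ref):
    # Dice similarity coefficient between two binary masks.
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def svalue_percent_diff(dl_svalue, manual_svalue):
    # Absolute percent difference between S-values computed on DL-based
    # and manual segmentations of the same organ.
    return 100.0 * abs(dl_svalue - manual_svalue) / manual_svalue

# Toy masks: a 4x4 square reference and a prediction shifted by one voxel.
ref = np.zeros((8, 8), dtype=bool)
ref[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[2:6, 3:7] = True
dsc = dice_score(pred, ref)  # 2*12 / (16+16) = 0.75
```

A hypothetical 6% S-value discrepancy (e.g., `svalue_percent_diff(1.06, 1.0)`) corresponds to the worst organ-level case the paper reports for the stomach.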
Affiliation(s)
- Sedigheh Sina
- Department of Ray-Medical Engineering, Shiraz University, Shiraz, Iran
- Radiation Research Center, Shiraz University, Shiraz, Iran
- Hojjat Mahani
- Radiation Applications Research School, Nuclear Science and Technology Research Institute, Tehran, Iran
- Mehrosadat Alavi
- Department of Nuclear Medicine, Shiraz University of Medical Sciences, Shiraz, Iran
- Mehran Yazdi
- School of Electrical and Computer Engineering, Shiraz University, Shiraz, Iran
24
Jian M, Jin H, Zhang L, Wei B, Yu H. DBPNDNet: dual-branch networks using 3DCNN toward pulmonary nodule detection. Med Biol Eng Comput 2024; 62:563-573. [PMID: 37945795 DOI: 10.1007/s11517-023-02957-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2022] [Accepted: 10/21/2023] [Indexed: 11/12/2023]
Abstract
With the advancement of artificial intelligence, CNNs have been successfully introduced into the discipline of medical data analysis. Clinically, automatic pulmonary nodule detection remains an intractable issue, since nodules in the lung parenchyma or on the chest wall are difficult to distinguish visually from shadows, background noise, blood vessels, and bones. Thus, when making a diagnosis, clinicians first attend to the intensity cues and contour characteristics of pulmonary nodules in order to locate their specific spatial positions. To automate the detection process, we propose an efficient multi-task, dual-branch 3D convolutional neural network architecture, called DBPNDNet, for automatic pulmonary nodule detection and segmentation. Within the dual-branch structure, one branch is designed for candidate region extraction for pulmonary nodule detection, while the other branch performs semantic segmentation of the nodule lesion region. In addition, we develop a 3D attention-weighted feature fusion module informed by the clinician's diagnostic perspective, so that the information captured by the segmentation branch and the detection branch can mutually reinforce each other. The framework was implemented and assessed on a commonly used dataset for medical image analysis. On average, our framework achieved a sensitivity of 91.33% and reached a sensitivity of 97.14% with 8 false positives (FPs) per CT scan. The results indicate that our framework outperforms other mainstream approaches.
Affiliation(s)
- Muwei Jian
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China.
- School of Information Science and Technology, Linyi University, Linyi, China.
- Haodong Jin
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- School of Control Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Linsong Zhang
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
- Benzheng Wei
- Medical Artificial Intelligence Research Center, Shandong University of Traditional Chinese Medicine, Qingdao, China
- Hui Yu
- School of Control Engineering, University of Shanghai for Science and Technology, Shanghai, China
- School of Creative Technologies, University of Portsmouth, Portsmouth, UK
25
Kobayashi T, Shigeki Y, Yamakawa Y, Tsutsumida Y, Mizuta T, Hanaoka K, Watanabe S, Morimoto-Ishikawa D, Yamada T, Kaida H, Ishii K. Generating PET Attenuation Maps via Sim2Real Deep Learning-Based Tissue Composition Estimation Combined with MLACF. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024; 37:167-179. [PMID: 38343219 DOI: 10.1007/s10278-023-00902-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Revised: 07/20/2023] [Accepted: 08/10/2023] [Indexed: 03/02/2024]
Abstract
Deep learning (DL) has recently attracted attention for data processing in positron emission tomography (PET). Attenuation correction (AC) without computed tomography (CT) data is one topic of interest. Here, we present, to our knowledge, the first attempt to generate an attenuation map of the human head via Sim2Real DL-based tissue composition estimation, with model training using only a simulated PET dataset. The DL model accepts a two-dimensional non-attenuation-corrected PET image as input and outputs a four-channel tissue-composition map of soft tissue, bone, cavity, and background. An attenuation map is then generated as a linear combination of the tissue composition maps and, finally, used as input for scatter+random estimation and as an initial estimate for attenuation map reconstruction by the maximum likelihood attenuation correction factor (MLACF) algorithm; that is, the DL estimate is refined by the MLACF. Preliminary results using clinical brain PET data showed that the proposed DL model tended to estimate anatomical details inaccurately, especially in the neck-side slices. However, it succeeded in estimating overall anatomical structures, and the PET quantitative accuracy with DL-based AC was comparable to that with CT-based AC. Thus, the proposed DL-based approach combined with the MLACF is a promising CT-less AC approach.
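The linear-combination step that turns the four tissue-composition channels into an attenuation map can be sketched directly. The channel ordering follows the abstract, while the 511-keV attenuation coefficients are approximate textbook values, not figures from the paper:

```python
import numpy as np

def attenuation_map(tissue_maps, mu_values):
    # Linear combination: each voxel's attenuation coefficient is the sum of
    # its tissue fractions weighted by that tissue's 511-keV coefficient.
    return np.tensordot(mu_values, tissue_maps, axes=1)

# Channels: soft tissue, bone, cavity, background. Mu values (cm^-1) at
# 511 keV are approximate textbook figures (soft tissue ~0.096, bone ~0.151).
mu = np.array([0.096, 0.151, 0.0, 0.0])
maps = np.zeros((4, 2, 2))      # (channel, y, x) composition maps
maps[0, 0, :] = 1.0             # top row: pure soft tissue
maps[1, 1, 0] = 1.0             # one bone voxel
maps[3, 1, 1] = 1.0             # one background voxel
mu_map = attenuation_map(maps, mu)
```

In the paper's pipeline this map is only an initial estimate; the MLACF reconstruction then refines it against the measured emission data.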
Affiliation(s)
- Tetsuya Kobayashi
- Technology Research Laboratory, Shimadzu Corporation, 3-9-4, Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0237, Japan.
- Yui Shigeki
- Technology Research Laboratory, Shimadzu Corporation, 3-9-4, Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0237, Japan
- Yoshiyuki Yamakawa
- Medical Systems Division, Shimadzu Corporation, 1, Nishinokyo Kuwabara-cho, Nakagyo-ku, Kyoto, 604-8511, Japan
- Yumi Tsutsumida
- Medical Systems Division, Shimadzu Corporation, 1, Nishinokyo Kuwabara-cho, Nakagyo-ku, Kyoto, 604-8511, Japan
- Tetsuro Mizuta
- Medical Systems Division, Shimadzu Corporation, 1, Nishinokyo Kuwabara-cho, Nakagyo-ku, Kyoto, 604-8511, Japan
- Kohei Hanaoka
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Shota Watanabe
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Daisuke Morimoto-Ishikawa
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Takahiro Yamada
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Hayato Kaida
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Department of Radiology, Faculty of Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Kazunari Ishii
- Division of Positron Emission Tomography, Institute of Advanced Clinical Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
- Department of Radiology, Faculty of Medicine, Kindai University, 377-2, Onohigashi, Osakasayama, Osaka, 589-8511, Japan
26
Apostolopoulos ID, Papandrianos NI, Papathanasiou ND, Papageorgiou EI. Fuzzy Cognitive Map Applications in Medicine over the Last Two Decades: A Review Study. Bioengineering (Basel) 2024; 11:139. [PMID: 38391626 PMCID: PMC10886348 DOI: 10.3390/bioengineering11020139] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2023] [Revised: 01/18/2024] [Accepted: 01/27/2024] [Indexed: 02/24/2024] Open
Abstract
Fuzzy Cognitive Maps (FCMs) have become an invaluable tool for healthcare providers because they can capture intricate associations among variables and generate precise predictions. FCMs have demonstrated their utility in diverse medical applications, from disease diagnosis to treatment planning and prognosis prediction. Their ability to model complex relationships between symptoms, biomarkers, risk factors, and treatments has enabled healthcare providers to make informed decisions, leading to better patient outcomes. This review article provides a thorough synopsis of using FCMs within the medical domain. A systematic examination of pertinent literature spanning the last two decades forms the basis of this overview, specifically delineating the diverse applications of FCMs in medical realms, including decision-making, diagnosis, prognosis, treatment optimisation, risk assessment, and pharmacovigilance. The limitations inherent in FCMs are also scrutinised, and avenues for potential future research and application are explored.
Affiliation(s)
- Nikolaos I Papandrianos
- Department of Energy Systems, University of Thessaly, Gaiopolis Campus, 41500 Larisa, Greece
- Elpiniki I Papageorgiou
- Department of Energy Systems, University of Thessaly, Gaiopolis Campus, 41500 Larisa, Greece
27
Gawel J, Rogulski Z. The Challenge of Single-Photon Emission Computed Tomography Image Segmentation in the Internal Dosimetry of 177Lu Molecular Therapies. J Imaging 2024; 10:27. [PMID: 38276319 PMCID: PMC10817423 DOI: 10.3390/jimaging10010027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2023] [Revised: 01/03/2024] [Accepted: 01/05/2024] [Indexed: 01/27/2024] Open
Abstract
The aim of this article is to review the single photon emission computed tomography (SPECT) segmentation methods used in patient-specific dosimetry of 177Lu molecular therapy. Notably, 177Lu-labelled radiopharmaceuticals are currently used in molecular therapy of metastatic neuroendocrine tumours (ligands for somatostatin receptors) and metastatic prostate adenocarcinomas (PSMA ligands). The proper segmentation of the organs at risk and tumours in targeted radionuclide therapy is an important part of the optimisation process of internal patient dosimetry in this kind of therapy. Because this is the first step in dosimetry assessments, on which further dose calculations are based, it is important to know the level of uncertainty associated with this part of the analysis. However, the robust quantification of SPECT images, which would ensure accurate dosimetry assessments, is very hard to achieve due to the intrinsic features of this device. In this article, papers on this topic were collected and reviewed to weigh up the advantages and disadvantages of the segmentation methods used in clinical practice. Degrading factors of SPECT images were also studied to assess their impact on the quantification of 177Lu therapy images. Our review of the recent literature gives an insight into this important topic. However, based on the PubMed and IEEE databases, only a few papers investigating segmentation methods in 177Lu molecular therapy were found. Although segmentation is an important step in internal dose calculations, this subject has been relatively lightly investigated for SPECT systems. This is mostly due to the intrinsic features of SPECT. What is more, even when studies are conducted, they usually utilise the diagnostic radionuclide 99mTc and not a therapeutic one like 177Lu, which could be of concern regarding SPECT camera performance and its overall outcome on dosimetry.
Affiliation(s)
- Joanna Gawel
- Faculty of Chemistry, University of Warsaw, 02-093 Warsaw, Poland
28
Yazdani E, Geramifar P, Karamzade-Ziarati N, Sadeghi M, Amini P, Rahmim A. Radiomics and Artificial Intelligence in Radiotheranostics: A Review of Applications for Radioligands Targeting Somatostatin Receptors and Prostate-Specific Membrane Antigens. Diagnostics (Basel) 2024; 14:181. [PMID: 38248059 PMCID: PMC10814892 DOI: 10.3390/diagnostics14020181] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2023] [Revised: 01/11/2024] [Accepted: 01/12/2024] [Indexed: 01/23/2024] Open
Abstract
Radiotheranostics refers to the pairing of radioactive imaging biomarkers with radioactive therapeutic compounds that deliver ionizing radiation. Given the introduction of very promising radiopharmaceuticals, the radiotheranostics approach is creating a novel paradigm in personalized, targeted radionuclide therapies (TRTs), also known as radiopharmaceutical therapies (RPTs). Radiotherapeutic pairs targeting somatostatin receptors (SSTR) and prostate-specific membrane antigens (PSMA) are increasingly being used to diagnose and treat patients with metastatic neuroendocrine tumors (NETs) and prostate cancer. In parallel, radiomics and artificial intelligence (AI), as important areas in quantitative image analysis, are paving the way for significantly enhanced workflows in diagnostic and theranostic fields, from data and image processing to clinical decision support, improving patient selection, personalized treatment strategies, response prediction, and prognostication. Furthermore, AI has the potential for tremendous effectiveness in patient dosimetry, which involves complex and time-consuming tasks in the RPT workflow. The present work provides a comprehensive overview of radiomics and AI applications in radiotheranostics, focusing on pairs of SSTR- or PSMA-targeting radioligands and describing the fundamental concepts and specific imaging/treatment features. Our review includes ligands radiolabeled with 68Ga, 18F, 177Lu, 64Cu, 90Y, and 225Ac. Specifically, contributions of radiomics and AI towards improved image acquisition, reconstruction, treatment response assessment, segmentation, restaging, lesion classification, and dose prediction and estimation, as well as ongoing developments and future directions, are discussed.
Affiliation(s)
- Elmira Yazdani
- Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Finetech in Medicine Research Center, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Parham Geramifar
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences, Tehran 14117-13135, Iran
- Najme Karamzade-Ziarati
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences, Tehran 14117-13135, Iran
- Mahdi Sadeghi
- Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Finetech in Medicine Research Center, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Payam Amini
- Department of Biostatistics, School of Public Health, Iran University of Medical Sciences, Tehran 14496-14535, Iran
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC V5Z 1L3, Canada
- Departments of Radiology and Physics, University of British Columbia, Vancouver, BC V5Z 1L3, Canada
29
Sanaat A, Amini M, Arabi H, Zaidi H. The quest for multifunctional and dedicated PET instrumentation with irregular geometries. Ann Nucl Med 2024; 38:31-70. [PMID: 37952197 PMCID: PMC10766666 DOI: 10.1007/s12149-023-01881-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Accepted: 10/09/2023] [Indexed: 11/14/2023]
Abstract
We focus on reviewing state-of-the-art developments of dedicated PET scanners with irregular geometries and the potential of different aspects of multifunctional PET imaging. First, we discuss advances in non-conventional PET detector geometries. Then, we present innovative designs of organ-specific dedicated PET scanners for breast, brain, prostate, and cardiac imaging. We also review challenges and possible artifacts introduced by image reconstruction algorithms for PET scanners with irregular geometries, such as non-cylindrical and partial angular coverage geometries, and how they can be addressed. Finally, we address open issues concerning the cost/benefit analysis of dedicated PET scanners, how far theoretical conceptual designs are from the market/clinic, and strategies to reduce fabrication cost without compromising performance.
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Mehdi Amini
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB, Groningen, The Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark.
- University Research and Innovation Center, Óbuda University, Budapest, Hungary.
30
Fuchs T, Kaiser L, Müller D, Papp L, Fischer R, Tran-Gia J. Enhancing Interoperability and Harmonisation of Nuclear Medicine Image Data and Associated Clinical Data. Nuklearmedizin 2023; 62:389-398. [PMID: 37907246 PMCID: PMC10689089 DOI: 10.1055/a-2187-5701] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2023] [Accepted: 09/21/2023] [Indexed: 11/02/2023]
Abstract
Nuclear imaging techniques such as positron emission tomography (PET) and single photon emission computed tomography (SPECT), in combination with computed tomography (CT), are established imaging modalities in clinical practice, particularly for oncological problems. Due to a multitude of manufacturers, differing measurement protocols, local demographic or clinical workflow variations, and the variety of available reconstruction and analysis software, very heterogeneous datasets are generated. This review article examines the current state of interoperability and harmonisation of image data and related clinical data in the field of nuclear medicine. Various approaches and standards to improve data compatibility and integration are discussed. These include, for example, structured clinical history, standardisation of image acquisition and reconstruction, and standardised preparation of image data for evaluation. Approaches to improve data acquisition, storage, and analysis are presented, as are approaches to preparing datasets so that they become usable for projects applying artificial intelligence (AI) (machine learning, deep learning, etc.). This review article concludes with an outlook on future developments and trends related to AI in nuclear medicine, including a brief survey of commercial solutions.
Affiliation(s)
- Timo Fuchs
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Lena Kaiser
- Department of Nuclear Medicine, LMU University Hospital, LMU, Munich, Germany
- Dominik Müller
- IT-Infrastructure for Translational Medical Research, University of Augsburg, Augsburg, Germany
- Medical Data Integration Center, University Hospital Augsburg, Augsburg, Germany
- Laszlo Papp
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Wien, Austria
- Regina Fischer
- Medical Data Integration Center (MEDIZUKR), University Hospital Regensburg, Regensburg, Germany
- Partner Site Regensburg, Bavarian Center for Cancer Research (BZKF), Regensburg, Germany
- Johannes Tran-Gia
- Department of Nuclear Medicine, University Hospital Würzburg, Würzburg, Germany
31
Jimenez-Mesa C, Arco JE, Martinez-Murcia FJ, Suckling J, Ramirez J, Gorriz JM. Applications of machine learning and deep learning in SPECT and PET imaging: General overview, challenges and future prospects. Pharmacol Res 2023; 197:106984. [PMID: 37940064 DOI: 10.1016/j.phrs.2023.106984] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 10/04/2023] [Accepted: 11/04/2023] [Indexed: 11/10/2023]
Abstract
The integration of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) imaging techniques with machine learning (ML) algorithms, including deep learning (DL) models, is a promising approach. This integration enhances the precision and efficiency of current diagnostic and treatment strategies while offering invaluable insights into disease mechanisms. In this comprehensive review, we delve into the transformative impact of ML and DL in this domain. First, we briefly analyze how these algorithms have evolved and which are most widely applied in this domain. We then discuss their different potential applications in nuclear imaging, such as optimization of image acquisition or reconstruction, biomarker identification, multimodal fusion, and the development of diagnostic, prognostic, and disease progression evaluation systems. These applications are possible because such algorithms can analyse complex patterns and relationships within imaging data and extract quantitative and objective measures. Furthermore, we discuss the challenges in implementation, such as data standardization and limited sample sizes, and explore the clinical opportunities and future horizons, including data augmentation and explainable AI. Together, these factors are propelling the continuous advancement of more robust, transparent, and reliable systems.
Affiliation(s)
- Carmen Jimenez-Mesa
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan E Arco
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Communications Engineering, University of Malaga, 29010, Spain
- John Suckling
- Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK
- Javier Ramirez
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain
- Juan Manuel Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, 18010, Spain; Department of Psychiatry, University of Cambridge, Cambridge CB21TN, UK.
32
Liang R, Li F, Chen X, Tan F, Lan T, Yang J, Liao J, Yang Y, Liu N. Multimodal Imaging-Guided Strategy for Developing 177Lu-Labeled Metal-Organic Framework Nanomedicine with Potential in Cancer Therapy. ACS APPLIED MATERIALS & INTERFACES 2023; 15:45713-45724. [PMID: 37738473 DOI: 10.1021/acsami.3c11098] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/24/2023]
Abstract
Nano-metal-organic frameworks (nano-MOFs) labeled with radionuclides have shown great potential in the anticancer field. In this work, we proposed combining fluorescence imaging (FI) with nuclear imaging to systematically evaluate the tumor inhibition of new nanomedicines, from living cancer cells to the whole body, guiding the design and application of a high-performance anticancer radiopharmaceutical for glioma. An Fe-based nano-MOF vector, MIL-101(Fe)/PEG-FA, was decorated with fluorescent sulfo-cyanine7 (Cy7) to investigate the binding affinity of the targeting nanocarriers toward glioma cells in vitro, as well as possible administration modes for in vivo cancer therapy. Then, lutetium-177 (177Lu)-labeled MIL-101(Fe)/PEG-FA was prepared for highly sensitive imaging and targeted radiotherapy of glioma in vivo. It has been demonstrated that the obtained 177Lu-labeled MIL-101(Fe)/PEG-FA can work as a complementary probe to correct the cancer-binding affinity of the prepared nanocarrier given by fluorescence imaging, providing more precise biodistribution information. Besides, 177Lu-labeled MIL-101(Fe)/PEG-FA has an excellent antitumor effect, leading to cell proliferation inhibition, upregulation of intracellular reactive oxygen species, tumor growth suppression, and upregulation of immune response-related proteins and cytokines. This work reveals that optical imaging and nuclear imaging can work complementarily as multimodal imaging in the design and evaluation of anticancer nanomedicines, offering a MIL-101(Fe)/PEG-FA-based pharmaceutical with potential in tumor endoradiotherapy.
Affiliation(s)
- Ranxi Liang
- Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, P. R. China
- Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu 610041, P. R. China
- Feize Li
- Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, P. R. China
- Xijian Chen
- Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, P. R. China
- Fuyuan Tan
- Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, P. R. China
- Tu Lan
- Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, P. R. China
- Jijun Yang
- Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, P. R. China
- Jiali Liao
- Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, P. R. China
- Yuanyou Yang
- Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, P. R. China
- Ning Liu
- Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu 610064, P. R. China
33
Gil J, Choi H, Paeng JC, Cheon GJ, Kang KW. Deep Learning-Based Feature Extraction from Whole-Body PET/CT Employing Maximum Intensity Projection Images: Preliminary Results of Lung Cancer Data. Nucl Med Mol Imaging 2023; 57:216-222. [PMID: 37720886 PMCID: PMC10504178 DOI: 10.1007/s13139-023-00802-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Revised: 03/20/2023] [Accepted: 04/03/2023] [Indexed: 09/19/2023] Open
Abstract
Purpose Deep learning (DL) has been widely used in various medical imaging analyses. Because volume data are difficult to process, it is challenging to train a DL model as an end-to-end approach using a PET volume as input for various purposes, including diagnostic classification. We suggest an approach employing two maximum intensity projection (MIP) images generated from a whole-body FDG PET volume, so that pre-trained models based on 2-D images can be employed. Methods As a retrospective, proof-of-concept study, 562 [18F]FDG PET/CT images and clinicopathological factors of lung cancer patients were collected. MIP images of anterior and lateral views were used as inputs, and image features were extracted by a pre-trained convolutional neural network (CNN) model, ResNet-50. The relationship between the images was depicted on a parametric 2-D map using t-distributed stochastic neighbor embedding (t-SNE), annotated with clinicopathological factors. Results A DL-based feature map extracted from the two MIP images was embedded by t-SNE. According to the visualization of the t-SNE map, PET images were clustered by clinicopathological features. Representative differences between clusters of PET patterns according to patient posture were visually identified. The map showed clustering according to various clinicopathological factors, including sex as well as tumor staging. Conclusion A 2-D image-based pre-trained model could extract image patterns from whole-body FDG PET volumes by using anterior and lateral MIP views, bypassing the direct use of 3-D PET volumes, which requires large datasets and resources. We suggest that this approach could be implemented as a backbone model for various whole-body PET image analyses.
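As an illustrative sketch (not code from the paper), the two MIP views described above can be computed with NumPy, assuming the PET volume is stored as a (z, y, x) array; the anterior view projects along the anterior-posterior axis and the lateral view along the left-right axis:

```python
import numpy as np

def mip_views(volume: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Anterior and lateral maximum intensity projections of a (z, y, x) volume."""
    anterior = volume.max(axis=1)  # collapse the anterior-posterior axis -> (z, x)
    lateral = volume.max(axis=2)   # collapse the left-right axis -> (z, y)
    return anterior, lateral

# Toy volume with a single hot voxel standing in for an FDG-avid lesion
vol = np.zeros((4, 5, 6))
vol[2, 3, 1] = 7.0
anterior, lateral = mip_views(vol)
print(anterior.shape, lateral.shape)  # (4, 6) (4, 5)
```

The hot voxel survives in both projections, which is why a pair of MIPs preserves much of the lesion signal a 2-D pre-trained CNN needs.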
Affiliation(s)
- Joonhyung Gil
- Department of Nuclear Medicine, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080 Republic of Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080 Republic of Korea
- Hongyoon Choi
- Department of Nuclear Medicine, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080 Republic of Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080 Republic of Korea
- Jin Chul Paeng
- Department of Nuclear Medicine, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080 Republic of Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080 Republic of Korea
- Gi Jeong Cheon
- Department of Nuclear Medicine, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080 Republic of Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080 Republic of Korea
- Keon Wook Kang
- Department of Nuclear Medicine, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080 Republic of Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 101 Daehak-ro, Jongno-gu, Seoul, 03080 Republic of Korea
34
Aberathne I, Kulasiri D, Samarasinghe S. Detection of Alzheimer's disease onset using MRI and PET neuroimaging: longitudinal data analysis and machine learning. Neural Regen Res 2023; 18:2134-2140. [PMID: 37056120 PMCID: PMC10328296 DOI: 10.4103/1673-5374.367840] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Revised: 12/08/2022] [Accepted: 01/12/2023] [Indexed: 02/17/2023] Open
Abstract
Scientists are dedicated to studying the detection of Alzheimer's disease onset in order to find a cure or, at the very least, medication that can slow the progression of the disease. This article explores the effectiveness of longitudinal data analysis, artificial intelligence, and machine learning approaches based on magnetic resonance imaging and positron emission tomography neuroimaging modalities for progression estimation and the detection of Alzheimer's disease onset. The significance of feature extraction from highly complex neuroimaging data, the identification of vulnerable brain regions, and the determination of threshold values for plaques, tangles, and neurodegeneration of these regions are evaluated extensively. Developing automated methods to improve these research areas would enable specialists to determine the progression of the disease and find the link between biomarkers and more accurate detection of Alzheimer's disease onset.
Affiliation(s)
- Iroshan Aberathne
- Centre for Advanced Computational Solutions (C-fACS), Lincoln University, Christchurch, New Zealand
- Don Kulasiri
- Centre for Advanced Computational Solutions (C-fACS), Lincoln University, Christchurch, New Zealand
- Sandhya Samarasinghe
- Centre for Advanced Computational Solutions (C-fACS), Lincoln University, Christchurch, New Zealand
35
Yang C, Ko K, Lin P. Reducing scan time in 177Lu planar scintigraphy using convolutional neural network: A Monte Carlo simulation study. J Appl Clin Med Phys 2023; 24:e14056. [PMID: 37261890 PMCID: PMC10562044 DOI: 10.1002/acm2.14056] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2023] [Revised: 05/15/2022] [Accepted: 05/16/2023] [Indexed: 06/02/2023] Open
Abstract
PURPOSE The aim of this study was to reduce scan time in 177Lu planar scintigraphy through the use of a convolutional neural network (CNN) to facilitate personalized dosimetry for 177Lu-based peptide receptor radionuclide therapy. METHODS The CNN model used in this work was based on DenseNet, and the training and testing datasets were generated from Monte Carlo simulation. The CNN input images (IMGinput) consisted of 177Lu planar scintigrams containing 10-90% of the total photon counts, while the corresponding full-count images (IMG100%) were used as the CNN label images. A two-sample t-test was conducted to compare the difference in pixel intensities within regions of interest between IMG100% and the CNN output images (IMGoutput). RESULTS No difference was found in IMGoutput for rods with diameters ranging from 13 to 33 mm in the Derenzo phantom with a target-to-background ratio of 20:1, while statistically significant differences were found in IMGoutput for the 10-mm diameter rods when IMGinput containing 10-60% of the total photon counts were denoised. Statistically significant differences were found in IMGoutput for both the right and left kidneys in the NCAT phantom when IMGinput containing 10% of the total photon counts were denoised. No statistically significant differences were found in IMGoutput for any other source organs in the NCAT phantom. CONCLUSION Our results showed that the proposed method can reduce scan time by up to 70% for objects larger than 13 mm, making it a useful tool for personalized dosimetry in 177Lu-based peptide receptor radionuclide therapy in clinical practice.
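A common way to emulate the reduced-count inputs this abstract describes (a fixed fraction of the total photon counts) from a full-count planar image is binomial thinning, since each detected photon is kept independently with the target probability. This sketch is my illustration of that idea, not the authors' simulation pipeline:

```python
import numpy as np

def thin_counts(full_count: np.ndarray, fraction: float,
                rng: np.random.Generator) -> np.ndarray:
    """Emulate a shorter acquisition: keep each photon with probability `fraction`."""
    return rng.binomial(full_count.astype(np.int64), fraction)

rng = np.random.default_rng(0)
img100 = rng.poisson(50.0, size=(64, 64))  # stand-in full-count planar image
img30 = thin_counts(img100, 0.30, rng)     # ~30% of the total photon counts
```

Thinning preserves Poisson statistics, so the resulting image has the noise level of a genuinely shorter scan rather than a simply rescaled one.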
Affiliation(s)
- Ching-Ching Yang
- Department of Medical Imaging and Radiological Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
- Department of Medical Research, Kaohsiung Medical University Chung-Ho Memorial Hospital, Kaohsiung, Taiwan
- Kuan-Yin Ko
- Department of Nuclear Medicine, National Taiwan University Cancer Center, Taipei, Taiwan
- Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei, Taiwan
- Pei-Yao Lin
- Department of Nuclear Medicine, National Taiwan University Cancer Center, Taipei, Taiwan
36
Kim H, Li Z, Son J, Fessler JA, Dewaraja YK, Chun SY. Physics-Guided Deep Scatter Estimation by Weak Supervision for Quantitative SPECT. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:2961-2973. [PMID: 37104110 PMCID: PMC10593395 DOI: 10.1109/tmi.2023.3270868] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Accurate scatter estimation is important in quantitative SPECT for improving image contrast and accuracy. With a large number of photon histories, Monte-Carlo (MC) simulation can yield accurate scatter estimation, but is computationally expensive. Recent deep learning-based approaches can yield accurate scatter estimates quickly, yet full MC simulation is still required to generate scatter estimates as ground truth labels for all training data. Here we propose a physics-guided weakly supervised training framework for fast and accurate scatter estimation in quantitative SPECT by using a 100× shorter MC simulation as weak labels and enhancing them with deep neural networks. Our weakly supervised approach also allows quick fine-tuning of the trained network to any new test data for further improved performance with an additional short MC simulation (weak label) for patient-specific scatter modelling. Our method was trained with 18 XCAT phantoms with diverse anatomies / activities and then was evaluated on 6 XCAT phantoms, 4 realistic virtual patient phantoms, 1 torso phantom and 3 clinical scans from 2 patients for 177Lu SPECT with single / dual photopeaks (113, 208 keV). Our proposed weakly supervised method yielded comparable performance to the supervised counterpart in phantom experiments, but with significantly reduced computation in labeling. Our proposed method with patient-specific fine-tuning achieved more accurate scatter estimates than the supervised method in clinical scans. Our method with physics-guided weak supervision enables accurate deep scatter estimation in quantitative SPECT, while requiring much lower computation in labeling, enabling patient-specific fine-tuning capability in testing.
Affiliation(s)
- Hanvit Kim
- Digital Biomedical Research Division, Electronics and Telecommunications Research Institute, Daejeon, South Korea
- Department of Electrical Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, South Korea
- Zongyu Li
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA
- Jiye Son
- Interdisciplinary Program for Bioengineering, Seoul National University (SNU), Seoul, South Korea. This work was done when she was with the School of Electrical and Computer Engineering (ECE), UNIST
- Jeffrey A. Fessler
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA
- Yuni K. Dewaraja
- Department of Radiology, University of Michigan, Ann Arbor, MI, USA
- Se Young Chun
- Department of ECE, INMC & IPAI, SNU, Seoul, South Korea
37
Hajianfar G, Kalayinia S, Hosseinzadeh M, Samanian S, Maleki M, Sossi V, Rahmim A, Salmanpour MR. Prediction of Parkinson's disease pathogenic variants using hybrid Machine learning systems and radiomic features. Phys Med 2023; 113:102647. [PMID: 37579523 DOI: 10.1016/j.ejmp.2023.102647] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Revised: 05/08/2023] [Accepted: 07/29/2023] [Indexed: 08/16/2023] Open
Abstract
PURPOSE In Parkinson's disease (PD), 5-10% of cases are of genetic origin, with mutations identified in several genes such as leucine-rich repeat kinase 2 (LRRK2) and glucocerebrosidase (GBA). We aim to predict these two gene mutations using hybrid machine learning systems (HMLS) via imaging and non-imaging data, with the long-term goal of predicting conversion to active disease. METHODS We studied 264 and 129 patients with known LRRK2 and GBA mutation status, respectively, from the PPMI database. Each dataset includes 513 features, such as clinical features (CFs), conventional imaging features (CIFs), and radiomic features (RFs) extracted from DAT-SPECT images. Features, normalized by Z-score, were univariately analyzed for statistical significance by the t-test and chi-square test, adjusted by Benjamini-Hochberg correction. Multiple HMLSs, comprising 11 feature extraction (FEA) or 10 feature selection algorithms (FSA) linked with 21 classifiers, were utilized. We also employed ensemble voting (EV) to classify the genes. RESULTS For prediction of LRRK2 mutation status, a number of HMLSs resulted in accuracies of 0.98 ± 0.02 and 1.00 in 5-fold cross-validation (80% of the total data points) and external testing (the remaining 20%), respectively. For predicting GBA mutation status, multiple HMLSs resulted in high accuracies of 0.90 ± 0.08 and 0.96 in 5-fold cross-validation and external testing, respectively. We additionally showed that SPECT-based RFs added value to the prediction of GBA mutation status. CONCLUSION We demonstrated that combining medical information with SPECT-based imaging features, and optimal utilization of HMLS, can produce excellent prediction of mutation status in PD patients.
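The univariate screening step mentioned above applies the Benjamini-Hochberg correction to control the false discovery rate across many feature-wise tests. A minimal NumPy sketch of that step-up procedure (an illustration, not the authors' implementation) looks like this:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level `alpha`."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    # Step-up rule: reject the k smallest p-values, where k is the largest
    # index i with p_(i) <= alpha * i / m
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

mask = benjamini_hochberg([0.01, 0.02, 0.03, 0.50], alpha=0.05)
print(mask)  # [ True  True  True False]
```

Note the step-up property: a p-value above its own threshold can still be rejected if a larger p-value passes, which makes BH less conservative than Bonferroni.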
Affiliation(s)
- Ghasem Hajianfar
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran; Technological Virtual Collaboration (TECVICO Corp.), Vancouver BC, Canada
- Samira Kalayinia
- Cardiogenetic Research Center, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Mahdi Hosseinzadeh
- Technological Virtual Collaboration (TECVICO Corp.), Vancouver BC, Canada; Department of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
- Sara Samanian
- Firoozgar Hospital Medical Genetics Laboratory, Iran University of Medical Sciences, Tehran, Iran
- Majid Maleki
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Vesna Sossi
- Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, Canada
- Arman Rahmim
- Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Mohammad R Salmanpour
- Technological Virtual Collaboration (TECVICO Corp.), Vancouver BC, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada.
38
Boehringer AS, Sanaat A, Arabi H, Zaidi H. An active learning approach to train a deep learning algorithm for tumor segmentation from brain MR images. Insights Imaging 2023; 14:141. [PMID: 37620554 PMCID: PMC10449747 DOI: 10.1186/s13244-023-01487-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Accepted: 07/22/2023] [Indexed: 08/26/2023] Open
Abstract
PURPOSE This study focuses on assessing the performance of active learning techniques to train a brain MRI glioma segmentation model. METHODS The publicly available training dataset provided for the 2021 RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge was used in this study, consisting of 1251 multi-institutional, multi-parametric MR images. Post-contrast T1, T2, and T2 FLAIR images as well as ground truth manual segmentation were used as input for the model. The data were split into a training set of 1151 cases and testing set of 100 cases, with the testing set remaining constant throughout. Deep convolutional neural network segmentation models were trained using the NiftyNet platform. To test the viability of active learning in training a segmentation model, an initial reference model was trained using all 1151 training cases followed by two additional models using only 575 cases and 100 cases. The resulting predicted segmentations of these two additional models on the remaining training cases were then addended to the training dataset for additional training. RESULTS It was demonstrated that an active learning approach for manual segmentation can lead to comparable model performance for segmentation of brain gliomas (0.906 reference Dice score vs 0.868 active learning Dice score) while only requiring manual annotation for 28.6% of the data. CONCLUSION The active learning approach when applied to model training can drastically reduce the time and labor spent on preparation of ground truth training data. CRITICAL RELEVANCE STATEMENT Active learning concepts were applied to a deep learning-assisted segmentation of brain gliomas from MR images to assess their viability in reducing the required amount of manually annotated ground truth data in model training. KEY POINTS • This study focuses on assessing the performance of active learning techniques to train a brain MRI glioma segmentation model. 
• The active learning approach for manual segmentation can lead to comparable model performance for segmentation of brain gliomas. • Active learning when applied to model training can drastically reduce the time and labor spent on preparation of ground truth training data.
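The train-on-a-subset, predict-on-the-rest, append-predictions loop described in this abstract can be caricatured with a toy pool-based example. This sketch is my illustration only: it substitutes a nearest-centroid classifier on 2-D points for the NiftyNet segmentation network, to show the loop's shape rather than the paper's method:

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-centroid 'model': one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = np.array(sorted(centroids))
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return classes[dists.argmin(axis=0)]

rng = np.random.default_rng(1)
# Two well-separated clusters stand in for "easy" unlabeled cases
X_pool = np.concatenate([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
y_true = np.array([0] * 50 + [1] * 50)

# Step 1: manually annotate only a small seed set (here 4 of 100 cases)
seed = np.array([0, 1, 50, 51])
model = fit_centroids(X_pool[seed], y_true[seed])

# Step 2: pseudo-label the remaining pool and retrain on everything
pseudo = predict(model, X_pool)
model = fit_centroids(X_pool, pseudo)
accuracy = (predict(model, X_pool) == y_true).mean()
```

The point mirrors the paper's finding: when the initial model's predictions are good enough to serve as training labels, only a fraction of the data needs manual annotation.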
Affiliation(s)
- Andrew S Boehringer
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland.
- Geneva University Neurocenter, University of Geneva, CH-1211, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark.
39
Sanaei B, Faghihi R, Arabi H. Employing Multiple Low-Dose PET Images (at Different Dose Levels) as Prior Knowledge to Predict Standard-Dose PET Images. J Digit Imaging 2023; 36:1588-1596. [PMID: 36988836 PMCID: PMC10406788 DOI: 10.1007/s10278-023-00815-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Revised: 03/13/2023] [Accepted: 03/15/2023] [Indexed: 03/30/2023] Open
Abstract
The existing deep learning-based denoising methods that predict standard-dose PET images (S-PET) from low-dose versions (L-PET) rely solely on a single dose level of PET images as the input of the deep learning network. In this work, we exploited prior knowledge in the form of multiple low-dose levels of PET images to estimate the S-PET images. To this end, a high-resolution ResNet architecture was utilized to predict S-PET images from 6% and 4% L-PET images. For the 6% L-PET imaging, two models were developed: the first was trained using a single input of 6% L-PET, and the second using three inputs of 6%, 4%, and 2% L-PET to predict S-PET images. Similarly, for 4% L-PET imaging, a model was trained using a single input of 4% low-dose data, and a three-channel model was developed receiving 4%, 3%, and 2% L-PET images. The performance of the four models was evaluated using the structural similarity index (SSI), peak signal-to-noise ratio (PSNR), and root mean square error (RMSE) within the entire head region and malignant lesions. The 4% multi-input model led to improved SSI and PSNR and a significant decrease in RMSE, by 22.22% and 25.42% within the entire head region and malignant lesions, respectively. Furthermore, the 4% multi-input network remarkably decreased the lesions' SUVmean bias and SUVmax bias by 64.58% and 37.12%, respectively, compared to the single-input network. In addition, the 6% multi-input network decreased the RMSE within the entire head region, the RMSE within the lesions, the lesions' SUVmean bias, and the SUVmax bias by 37.5%, 39.58%, 86.99%, and 45.60%, respectively. This study demonstrated the significant benefits of using prior knowledge in the form of multiple L-PET images to predict S-PET images.
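The multi-input idea above, feeding several co-registered low-dose levels as channels of one network input rather than a single level, amounts to stacking volumes along a channel axis before the network. A minimal NumPy sketch, assuming same-shape volumes; the shapes and names are illustrative, not the paper's code.

```python
import numpy as np

def make_multi_input(*low_dose_volumes):
    """Stack co-registered L-PET volumes along a new leading channel axis."""
    return np.stack(low_dose_volumes, axis=0)  # -> (channels, z, y, x)

# e.g. the three-channel 4% model receives 4%, 3%, and 2% L-PET volumes
pet_4, pet_3, pet_2 = (np.zeros((64, 128, 128)) for _ in range(3))
x = make_multi_input(pet_4, pet_3, pet_2)
```

The single-input baselines correspond to passing one volume, giving a `(1, z, y, x)` input to the same architecture.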
Affiliation(s)
- Behnoush Sanaei
- Nuclear Engineering Department, Shiraz University, Shiraz, Iran
- Reza Faghihi
- Nuclear Engineering Department, Shiraz University, Shiraz, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
40
Pinto S, Caribé P, Sebastião Matushita C, Bromfman Pianta D, Narciso L, da Silva AMM. Aiming for [18F]FDG-PET acquisition time reduction in clinical practice for neurological patients. Phys Med 2023; 112:102604. [PMID: 37429182 DOI: 10.1016/j.ejmp.2023.102604] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/31/2022] [Revised: 03/02/2023] [Accepted: 05/04/2023] [Indexed: 07/12/2023] Open
Abstract
PURPOSE Positron emission tomography (PET) imaging with [18F]FDG provides valuable information regarding the underlying pathological processes in neurodegenerative disorders. PET imaging for these populations should be as short as possible to limit head movements and improve comfort. This study aimed to validate an optimized [18F]FDG-PET image reconstruction protocol designed to reduce acquisition time while maintaining adequate quantification accuracy and image quality. METHODS A time-reduced reconstruction protocol (5 min) was evaluated on retrospective [18F]FDG-PET data from healthy individuals and Alzheimer's disease (AD) patients. Standard (8 min) and time-reduced protocols were compared by means of image quality and quantification accuracy metrics, as well as standardized uptake value ratio (SUVR) and Z-scores (the pons was used as reference). Images were randomly and blindly presented to experienced physicians and scored in terms of image quality. RESULTS No differences between protocols were identified during the visual assessment. Small differences (p < 0.01) in the pons SUVR were observed between the standard and time-reduced protocols for healthy individuals (-0.002 ± 0.011) and AD patients (-0.007 ± 0.013). Likewise, incorporating the PSF correction in the reconstruction algorithm resulted in small differences (p < 0.01) in SUVR between protocols (healthy individuals: -0.003 ± 0.011; AD patients: -0.007 ± 0.014). CONCLUSION Quality metrics were similar between time-reduced and standard protocols. In the visual assessment, the physicians did not consider the use of PSF adequate, as it degraded image quality. Shortening the acquisition time is possible by optimizing the image reconstruction parameters while maintaining adequate quantification accuracy and image quality.
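The SUVR comparison above reduces to normalizing regional uptake by a reference region (here the pons) and differencing the two protocols. A minimal sketch with made-up uptake values, for illustration only:

```python
def suvr(region_uptake, reference_uptake):
    """Standardized uptake value ratio: regional uptake over the reference (pons)."""
    return region_uptake / reference_uptake

# made-up uptake values, chosen only to mirror a small negative difference
standard = suvr(1.230, 1.00)   # 8-min protocol
reduced = suvr(1.228, 1.00)    # 5-min protocol
delta = reduced - standard     # small negative protocol difference
```

Differences of this magnitude (on the order of 0.002-0.007 SUVR in the study) are why the time-reduced protocol was judged quantitatively equivalent.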
Affiliation(s)
- Samara Pinto
- Medical Image Computing Laboratory (MEDICOM), PUCRS, Porto Alegre, RS, Brazil
- Paulo Caribé
- Medical Image Computing Laboratory (MEDICOM), PUCRS, Porto Alegre, RS, Brazil; Medical Imaging and Signal Processing (MEDISIP), Ghent University, Ghent, Belgium
- Lucas Narciso
- Medical Image Computing Laboratory (MEDICOM), PUCRS, Porto Alegre, RS, Brazil; Lawson Health Research Institute, London, Ontario, Canada
- Ana Maria Marques da Silva
- Medical Image Computing Laboratory (MEDICOM), PUCRS, Porto Alegre, RS, Brazil; School of Medicine, University of Sao Paulo, Sao Paulo, SP, Brazil
41
Pashazadeh A, Hoeschen C. [Opportunities for artificial intelligence in radiation protection: Improving safety of diagnostic imaging]. RADIOLOGIE (HEIDELBERG, GERMANY) 2023; 63:530-538. [PMID: 37347256 PMCID: PMC10299955 DOI: 10.1007/s00117-023-01167-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 05/16/2023] [Indexed: 06/23/2023]
Abstract
CLINICAL/METHODOLOGICAL ISSUE Imaging of structures of internal organs often requires ionizing radiation, which is a health risk. Reducing the radiation dose can increase the image noise, which means that images provide less information. STANDARD RADIOLOGICAL METHODS This problem is observed in commonly used medical imaging modalities such as computed tomography (CT), positron emission tomography (PET), single photon emission computed tomography (SPECT), angiography, fluoroscopy, and any modality that uses ionizing radiation for imaging. METHODOLOGICAL INNOVATIONS Artificial intelligence (AI) can improve the quality of low-dose images and help minimize radiation exposure. Potential applications are explored, and frameworks and procedures are critically evaluated. PERFORMANCE The performance of AI models varies. High-performance models could be used in clinical settings in the near future. Several challenges (e.g., quantitative accuracy, insufficient training data) must be addressed for optimal performance and widespread adoption of this technology in the field of medical imaging. PRACTICAL RECOMMENDATIONS To fully realize the potential of AI and deep learning (DL) in medical imaging, research and development must be intensified. In particular, quality control of AI models must be ensured, and training and testing data must be uncorrelated and quality assured. With sufficient scientific validation and rigorous quality management, AI could contribute to the safe use of low-dose techniques in medical imaging.
Affiliation(s)
- Ali Pashazadeh
- Institut für Medizintechnik (IMT), Otto-von-Guericke-Universität Magdeburg, Otto-Hahn-Str. 2, 39016 Magdeburg, Germany
- Christoph Hoeschen
- Institut für Medizintechnik (IMT), Otto-von-Guericke-Universität Magdeburg, Otto-Hahn-Str. 2, 39016 Magdeburg, Germany
42
Prieto Canalejo MA, Palau San Pedro A, Geronazzo R, Minsky DM, Juárez-Orozco LE, Namías M. Synthetic Attenuation Correction Maps for SPECT Imaging Using Deep Learning: A Study on Myocardial Perfusion Imaging. Diagnostics (Basel) 2023; 13:2214. [PMID: 37443608 DOI: 10.3390/diagnostics13132214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 06/24/2023] [Accepted: 06/27/2023] [Indexed: 07/15/2023] Open
Abstract
(1) Background: The CT-based attenuation correction of SPECT images is essential for obtaining accurate quantitative images in cardiovascular imaging. However, there are still many SPECT cameras without associated CT scanners throughout the world, especially in developing countries. Performing additional CT scans implies troublesome planning logistics and larger radiation doses for patients, making it a suboptimal solution. Deep learning (DL) offers a revolutionary way to generate complementary images for individual patients at a large scale. Hence, we aimed to generate linear attenuation coefficient maps from SPECT emission images reconstructed without attenuation correction using deep learning. (2) Methods: A total of 384 SPECT myocardial perfusion studies that used 99mTc-sestamibi were included. A DL model based on a 2D U-Net architecture was trained using information from 312 patients. The quality of the generated synthetic attenuation correction maps (ACMs) and reconstructed emission values was evaluated using three metrics and compared to standard-of-care data using Bland-Altman plots. Finally, a quantitative evaluation of myocardial uptake was performed, followed by a semi-quantitative evaluation of myocardial perfusion. (3) Results: In a test set of 66 patients, the ACM quality metrics were MSSIM = 0.97 ± 0.001 and NMAE = 3.08 ± 1.26 (%), and the reconstructed emission quality metrics were MSSIM = 0.99 ± 0.003 and NMAE = 0.23 ± 0.13 (%). The 95% limits of agreement (LoAs) at the voxel level for reconstructed SPECT images were [-9.04; 9.00]%, and at the segment level, [-11; 10]%. The 95% LoAs for the Summed Stress Score values between the reconstructed images were [-2.8, 3.0]. When global perfusion scores were assessed, only 2 out of 66 patients showed changes in perfusion categories. (4) Conclusion: Deep learning can generate accurate attenuation correction maps from non-attenuation-corrected cardiac SPECT images.
These high-quality attenuation maps are suitable for attenuation correction in myocardial perfusion SPECT imaging and could obviate the need for additional imaging in standalone SPECT scanners.
Affiliation(s)
- Ricardo Geronazzo
- Fundación Centro Diagnóstico Nuclear (FCDN), Buenos Aires C1417CVE, Argentina
- Daniel Mauricio Minsky
- Centro Atómico Constituyentes, Comisión Nacional de Energía Atómica, San Martín B1650LWP, Argentina
- Mauro Namías
- Fundación Centro Diagnóstico Nuclear (FCDN), Buenos Aires C1417CVE, Argentina
43
Mínguez Gabiña P, Monserrat Fuertes T, Jauregui I, Del Amo C, Rodeño Ortiz de Zarate E, Gustafsson J. Activity recovery for differently shaped objects in quantitative SPECT. Phys Med Biol 2023; 68:125012. [PMID: 37236207 DOI: 10.1088/1361-6560/acd982] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2023] [Accepted: 05/26/2023] [Indexed: 05/28/2023]
Abstract
Objective. The aim was to theoretically and experimentally investigate activity recovery in SPECT images for objects of different shapes. Furthermore, the accuracy of volume estimation by thresholding was studied for those shapes. Approach. Nine sphere, nine oblate spheroid, and nine prolate spheroid phantom inserts were used, of which the six smaller spheres were part of the NEMA IEC body phantom and the rest of the inserts were 3D-printed. The inserts were filled with 99mTc and 177Lu. When filled with 99mTc, SPECT images were acquired on a Siemens Symbia Intevo Bold gamma camera, and when filled with 177Lu, on a General Electric NM/CT 870 DR gamma camera. The signal rate per activity (SRPA) was determined for all inserts and represented as a function of the volume-to-surface ratio and of the volume-equivalent radius, using VOIs defined according to the sphere dimensions and VOIs defined using thresholding. Experimental values were compared with theoretical curves obtained analytically (spheres) or numerically (spheroids), starting from the convolution of a source distribution with a point-spread function. Validation of the activity estimation strategy was performed using four 3D-printed ellipsoids. Lastly, the threshold values necessary to determine the volume of each insert were obtained. Main results. SRPA values for the oblate spheroids deviated from those of the other inserts when SRPA was represented as a function of the volume-equivalent radius. However, SRPA values for all inserts followed a similar behaviour when represented as a function of the volume-to-surface ratio. Results for the ellipsoids were in agreement with those findings. For all three types of inserts, the volume could be accurately estimated using a threshold method for volumes larger than 25 ml. Significance. Determination of SRPA independently of lesion or organ shape should decrease uncertainties in estimated activities and thereby, in the long term, benefit patient care.
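The threshold-based volume estimation above can be sketched as: count the voxels at or above a fraction of the image maximum and multiply by the voxel volume. The threshold fraction and synthetic array below are illustrative, not the study's calibrated per-insert values.

```python
import numpy as np

def threshold_volume_ml(image, threshold_fraction, voxel_ml):
    """Estimate object volume from voxels at/above a fraction of the image maximum."""
    mask = image >= threshold_fraction * image.max()
    return float(mask.sum()) * voxel_ml

img = np.zeros((10, 10, 10))
img[2:6, 2:6, 2:6] = 100.0                 # a synthetic 64-voxel "insert"
vol = threshold_volume_ml(img, 0.5, 0.1)   # 64 voxels x 0.1 ml/voxel
```

In practice the threshold fraction must be calibrated per object size and imaging system, which is exactly the set of threshold values the study derived for each insert.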
Affiliation(s)
- Pablo Mínguez Gabiña
- Department of Medical Physics and Radiation Protection, Gurutzeta-Cruces University Hospital/Biocruces Bizkaia Health Research Institute, Plaza Cruces s/n, E-48903 Barakaldo, Spain
- Faculty of Engineering, Department of Applied Physics, UPV/EHU, Bilbao, Spain
- Teresa Monserrat Fuertes
- Department of Medical Physics and Radiation Protection, Central University Hospital of Asturias, Oviedo, Spain
- Faculty of Medicine and Nursing, Department of Surgery, Radiology and Physical Medicine, UPV/EHU, Bilbao, Spain
- Inés Jauregui
- 3D Printing and Bioprinting Laboratory, Biocruces Bizkaia Health Research Institute, Plaza Cruces s/n, E-48903 Barakaldo, Spain
- Cristina Del Amo
- 3D Printing and Bioprinting Laboratory, Biocruces Bizkaia Health Research Institute, Plaza Cruces s/n, E-48903 Barakaldo, Spain
- Emilia Rodeño Ortiz de Zarate
- Department of Nuclear Medicine, Gurutzeta-Cruces University Hospital/Biocruces Bizkaia Health Research Institute, Plaza Cruces s/n, E-48903 Barakaldo, Spain
44
Ying W. Phenomic Studies on Diseases: Potential and Challenges. PHENOMICS (CHAM, SWITZERLAND) 2023; 3:285-299. [PMID: 36714223 PMCID: PMC9867904 DOI: 10.1007/s43657-022-00089-4] [Citation(s) in RCA: 21] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/28/2021] [Revised: 11/21/2022] [Accepted: 11/24/2022] [Indexed: 01/23/2023]
Abstract
The rapid development of such research field as multi-omics and artificial intelligence (AI) has made it possible to acquire and analyze the multi-dimensional big data of human phenomes. Increasing evidence has indicated that phenomics can provide a revolutionary strategy and approach for discovering new risk factors, diagnostic biomarkers and precision therapies of diseases, which holds profound advantages over conventional approaches for realizing precision medicine: first, the big data of patients' phenomes can provide remarkably richer information than that of the genomes; second, phenomic studies on diseases may expose the correlations among cross-scale and multi-dimensional phenomic parameters as well as the mechanisms underlying the correlations; and third, phenomics-based studies are big data-driven studies, which can significantly enhance the possibility and efficiency for generating novel discoveries. However, phenomic studies on human diseases are still in early developmental stage, which are facing multiple major challenges and tasks: first, there is significant deficiency in analytical and modeling approaches for analyzing the multi-dimensional data of human phenomes; second, it is crucial to establish universal standards for acquirement and management of phenomic data of patients; third, new methods and devices for acquirement of phenomic data of patients under clinical settings should be developed; fourth, it is of significance to establish the regulatory and ethical guidelines for phenomic studies on diseases; and fifth, it is important to develop effective international cooperation. It is expected that phenomic studies on diseases would profoundly and comprehensively enhance our capacity in prevention, diagnosis and treatment of diseases.
Affiliation(s)
- Weihai Ying
- Med-X Research Institute and School of Biomedical Engineering, Shanghai Jiao Tong University, 1954 Huashan Road, Shanghai 200030, China
- Collaborative Innovation Center for Genetics and Development, Shanghai 200043, China
45
Sex-based differences in nuclear medicine imaging and therapy. Eur J Nucl Med Mol Imaging 2023; 50:971-974. [PMID: 36633615 DOI: 10.1007/s00259-023-06113-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Accepted: 01/08/2023] [Indexed: 01/13/2023]
46
Saboury B, Bradshaw T, Boellaard R, Buvat I, Dutta J, Hatt M, Jha AK, Li Q, Liu C, McMeekin H, Morris MA, Scott PJH, Siegel E, Sunderland JJ, Pandit-Taskar N, Wahl RL, Zuehlsdorff S, Rahmim A. Artificial Intelligence in Nuclear Medicine: Opportunities, Challenges, and Responsibilities Toward a Trustworthy Ecosystem. J Nucl Med 2023; 64:188-196. [PMID: 36522184 PMCID: PMC9902852 DOI: 10.2967/jnumed.121.263703] [Citation(s) in RCA: 15] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2022] [Revised: 12/06/2022] [Accepted: 12/06/2022] [Indexed: 12/23/2022] Open
Abstract
Trustworthiness is a core tenet of medicine. The patient-physician relationship is evolving from a dyad to a broader ecosystem of health care. With the emergence of artificial intelligence (AI) in medicine, the elements of trust must be revisited. We envision a road map for the establishment of trustworthy AI ecosystems in nuclear medicine. In this report, AI is contextualized in the history of technologic revolutions. Opportunities for AI applications in nuclear medicine related to diagnosis, therapy, and workflow efficiency, as well as emerging challenges and critical responsibilities, are discussed. Establishing and maintaining leadership in AI require a concerted effort to promote the rational and safe deployment of this innovative technology by engaging patients, nuclear medicine physicians, scientists, technologists, and referring providers, among other stakeholders, while protecting our patients and society. This strategic plan was prepared by the AI task force of the Society of Nuclear Medicine and Molecular Imaging.
Affiliation(s)
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Tyler Bradshaw
- Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin
- Ronald Boellaard
- Department of Radiology and Nuclear Medicine, Cancer Centre Amsterdam, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Irène Buvat
- Institut Curie, Université PSL, INSERM, Université Paris-Saclay, Orsay, France
- Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, Massachusetts
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University of Brest, Brest, France
- Abhinav K Jha
- Department of Biomedical Engineering and Mallinckrodt Institute of Radiology, Washington University, St. Louis, Missouri
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Helena McMeekin
- Department of Clinical Physics, Barts Health NHS Trust, London, United Kingdom
- Michael A Morris
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Peter J H Scott
- Department of Radiology, University of Michigan Medical School, Ann Arbor, Michigan
- Eliot Siegel
- Department of Radiology and Nuclear Medicine, University of Maryland Medical Center, Baltimore, Maryland
- John J Sunderland
- Departments of Radiology and Physics, University of Iowa, Iowa City, Iowa
- Neeta Pandit-Taskar
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York
- Richard L Wahl
- Mallinckrodt Institute of Radiology, Washington University, St. Louis, Missouri
- Sven Zuehlsdorff
- Siemens Medical Solutions USA, Inc., Hoffman Estates, Illinois
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia, Vancouver, British Columbia, Canada
47
Sedlakova Z, Nachtigalova I, Rusina R, Matej R, Buncova M, Kukal J. Alzheimer's disease identification from 3D SPECT brain scans by variational analysis. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104385] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
48
Salimi Y, Shiri I, Akavanallaf A, Mansouri Z, Arabi H, Zaidi H. Fully automated accurate patient positioning in computed tomography using anterior-posterior localizer images and a deep neural network: a dual-center study. Eur Radiol 2023; 33:3243-3252. [PMID: 36703015 PMCID: PMC9879741 DOI: 10.1007/s00330-023-09424-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 11/29/2022] [Accepted: 01/02/2023] [Indexed: 01/28/2023]
Abstract
OBJECTIVES This study aimed to improve patient positioning accuracy by relying on a CT localizer and a deep neural network to optimize image quality and radiation dose. METHODS We included 5754 chest CT axial and anterior-posterior (AP) images from two different centers, C1 and C2. After pre-processing, images were split into training (80%) and test (20%) datasets. A deep neural network was trained to generate 3D axial images from the AP localizer. The geometric centerlines of patient bodies were indicated by creating a bounding box on the predicted images. The distance between the body centerline estimated by the deep learning model and the ground truth (BCAP) was compared with patient mis-centering during manual positioning (BCMP). We also evaluated the performance of our model in terms of the distance between the lung centerline estimated by the deep learning model and the ground truth (LCAP). RESULTS The error in terms of BCAP was -0.75 ± 7.73 mm and 2.06 ± 10.61 mm for C1 and C2, respectively. This error was significantly lower than BCMP, which reached 9.35 ± 14.94 mm and 13.98 ± 14.5 mm for C1 and C2, respectively. The absolute BCAP was 5.7 ± 5.26 mm and 8.26 ± 6.96 mm for C1 and C2, respectively. The LCAP metric was 1.56 ± 10.8 mm and -0.27 ± 16.29 mm for C1 and C2, respectively. The error in terms of BCAP and LCAP was higher for larger patients (p value < 0.01). CONCLUSION The accuracy of the proposed method was comparable to available alternative methods, carrying the advantage of being free from errors related to objects blocking the camera visibility. KEY POINTS • Patient mis-centering in the anterior-posterior (AP) direction is a common problem in clinical practice which can degrade image quality and increase patient radiation dose. • We proposed a deep neural network for automatic patient positioning using only the CT image localizer, achieving a performance comparable to alternative techniques, such as the external 3D visual camera.
• The advantage of the proposed method is that it is free from errors related to objects blocking the camera visibility and that it could be implemented on imaging consoles as a patient positioning support tool.
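A BCAP-style error as described above is, in essence, the signed offset between a centerline derived from a predicted bounding box and the ground-truth centerline along one axis. A hypothetical sketch; the function name and inputs are assumptions for illustration, not the authors' code.

```python
def centerline_offset_mm(bbox_min, bbox_max, true_center, pixel_mm):
    """Signed offset (mm) between the bounding-box center and the true centerline."""
    predicted_center = (bbox_min + bbox_max) / 2.0
    return (predicted_center - true_center) * pixel_mm

# e.g. a predicted box spanning rows 100..300 vs a true centerline at row 210,
# with 1 mm pixels: the predicted center (row 200) is 10 mm off
err = centerline_offset_mm(100, 300, 210, 1.0)
```

Averaging such signed offsets over a test set gives a bias like the reported -0.75 mm, while averaging their absolute values gives the reported absolute BCAP.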
Affiliation(s)
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Azadeh Akavanallaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Zahra Mansouri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
49
The Use of Artificial Intelligence in the Diagnosis and Classification of Thyroid Nodules: An Update. Cancers (Basel) 2023; 15:cancers15030708. [PMID: 36765671 PMCID: PMC9913834 DOI: 10.3390/cancers15030708] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2022] [Revised: 01/20/2023] [Accepted: 01/20/2023] [Indexed: 01/27/2023] Open
Abstract
The incidence of thyroid nodules diagnosed is increasing every year, leading to a greater risk of unnecessary procedures being performed or wrong diagnoses being made. In this paper, we present the latest knowledge on the use of artificial intelligence in diagnosing and classifying thyroid nodules. We particularly focus on the usefulness of artificial intelligence in ultrasonography for the diagnosis and characterization of pathology, as these are the two most developed fields. In our search for the latest innovations, we reviewed only publications of specific types published from 2018 to 2022. We analyzed 930 papers in total, from which we selected 33 that were the most relevant to the topic of our work. In conclusion, there is great scope for the use of artificial intelligence in future thyroid nodule classification and diagnosis. In addition to the most typical uses of artificial intelligence in cancer differentiation, we identified several other novel applications of artificial intelligence during our review.
50
Jiang Y, Fang S, Feng J, Ruan Q, Zhang J. Synthesis and Bioevaluation of Novel Technetium-99m-Labeled Complexes with Norfloxacin HYNIC Derivatives for Bacterial Infection Imaging. Mol Pharm 2023; 20:630-640. [PMID: 36398935 DOI: 10.1021/acs.molpharmaceut.2c00830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
To seek a novel 99mTc-labeled quinolone derivative for bacterial infection SPECT imaging that lowers nontarget organ uptake, a novel norfloxacin 6-hydrazinonicotinamide (HYNIC) derivative (HYNICNF) was designed and synthesized. It was radiolabeled with different coligands, such as tricine, trisodium triphenylphosphine-3,3',3″-trisulfonate (TPPTS), sodium triphenylphosphine-3-monosulfonate (TPPMS), and ethylenediamine-N,N'-diacetic acid (EDDA), to obtain three 99mTc-labeled norfloxacin HYNIC complexes, namely, [99mTc]Tc-tricine-TPPTS-HYNICNF, [99mTc]Tc-tricine-TPPMS-HYNICNF, and [99mTc]Tc-EDDA-HYNICNF. These complexes were purified (RCP > 95%) and evaluated in vitro and in vivo for targeting bacteria. All three complexes are hydrophilic, maintain good stability, and specifically bind Staphylococcus aureus in vitro. The biodistribution in mice with bacterial infection demonstrated that [99mTc]Tc-EDDA-HYNICNF showed higher abscess uptake and lower nontarget organ uptake and was able to distinguish bacterial infection from sterile inflammation. A single photon emission computed tomography (SPECT) imaging study in mice with bacterial infection showed visible accumulation at the infection site, suggesting that [99mTc]Tc-EDDA-HYNICNF is a potential radiotracer for bacterial infection imaging.
Affiliation(s)
- Yuhao Jiang
- Key Laboratory of Radiopharmaceuticals of Ministry of Education, NMPA Key Laboratory for Research and Evaluation of Radiopharmaceuticals (National Medical Product Administration), College of Chemistry, Beijing Normal University, Beijing 100875, China
- Si'an Fang
- Key Laboratory of Radiopharmaceuticals of Ministry of Education, NMPA Key Laboratory for Research and Evaluation of Radiopharmaceuticals (National Medical Product Administration), College of Chemistry, Beijing Normal University, Beijing 100875, China
- Junhong Feng
- Key Laboratory of Radiopharmaceuticals of Ministry of Education, NMPA Key Laboratory for Research and Evaluation of Radiopharmaceuticals (National Medical Product Administration), College of Chemistry, Beijing Normal University, Beijing 100875, China
- Qing Ruan
- Key Laboratory of Radiopharmaceuticals of Ministry of Education, NMPA Key Laboratory for Research and Evaluation of Radiopharmaceuticals (National Medical Product Administration), College of Chemistry, Beijing Normal University, Beijing 100875, China
- Junbo Zhang
- Key Laboratory of Radiopharmaceuticals of Ministry of Education, NMPA Key Laboratory for Research and Evaluation of Radiopharmaceuticals (National Medical Product Administration), College of Chemistry, Beijing Normal University, Beijing 100875, China