1
Tariq B, Sikander O, Francis N, Alkhatib M, Naseer F, Werghi N, Memisoglu E, Maalej N, Raja A. Assessment of material identification and quantification in the presence of metals using spectral photon counting CT. PLoS One 2024; 19:e0308658. PMID: 39269959; PMCID: PMC11398698; DOI: 10.1371/journal.pone.0308658. Received 03/15/2024; accepted 07/24/2024.
Abstract
Spectral Photon Counting Computed Tomography (SPCCT), a ground-breaking development in CT technology, has immense potential to address the persistent problem of metal artefacts in CT images. This study aims to evaluate the potential of Mars photon-counting CT technology in reducing metal artefacts, focusing on identifying and quantifying clinically significant materials in the presence of metal objects. A multi-material phantom was used, containing inserts of varying concentrations of hydroxyapatite (a mineral present in teeth, bones, and calcified plaque), iodine (used as a contrast agent), CT water (to mimic soft tissue), and adipose (as a fat substitute). Three sets of scans were acquired: with aluminium, with stainless steel, and without a metal insert as a reference dataset. Data acquisition was performed using a Mars SPCCT scanner (Microlab 5×120) operated at 118 kVp and 80 μA. The images were subsequently reconstructed into five energy bins: 7-40, 40-50, 50-60, 60-79, and 79-118 keV. Evaluation metrics including signal-to-noise ratio (SNR), linearity of attenuation profiles, root mean square error (RMSE), and area under the curve (AUC) were employed to assess the energy and material-density images with and without metal inserts. Results show decreased metal artefacts and an improved signal-to-noise ratio (by up to 25%) at higher energy bins compared to the reference data. The attenuation profiles also demonstrated high linearity (R² > 0.95) and low RMSE across all material concentrations, even in the presence of aluminium and steel. Material identification accuracy for iodine and hydroxyapatite (with and without metal inserts) remained consistent, with minimal impact on AUC values. For demonstration purposes, a biological sample was also scanned with a stainless steel volar implant and a cortical bone screw, and the images were objectively assessed to indicate the potential effectiveness of SPCCT in replicating real-world clinical scenarios.
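As context for the evaluation metrics named in this abstract, a minimal numpy sketch of ROI-based SNR and of RMSE against nominal insert concentrations; all values here are illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def snr(roi):
    """Signal-to-noise ratio of a uniform region of interest (ROI)."""
    return float(roi.mean() / roi.std())

def rmse(measured, expected):
    """Root mean square error between measured and nominal values."""
    measured, expected = np.asarray(measured), np.asarray(expected)
    return float(np.sqrt(np.mean((measured - expected) ** 2)))

# Synthetic ROI standing in for a uniform insert in one energy-bin image:
# nominal attenuation 0.25 cm^-1 plus Gaussian noise.
roi = 0.25 + rng.normal(0.0, 0.01, size=(32, 32))
print(f"SNR ~ {snr(roi):.1f}")

# Nominal vs. measured attenuation over a dilution series of inserts.
print(rmse([0.11, 0.19, 0.31, 0.42], [0.1, 0.2, 0.3, 0.4]))  # ≈ 0.0132
```

In a study like this one, the same two functions would simply be applied to ROI pixel values drawn from each energy-bin image, with and without the metal insert.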
Affiliation(s)
- Briya Tariq: Department of Physics, Khalifa University, Abu Dhabi, United Arab Emirates
- Osama Sikander: Department of Biomedical Engineering & Sciences, National University of Sciences and Technology, Islamabad, Pakistan
- Nadine Francis: Department of Physics, Khalifa University, Abu Dhabi, United Arab Emirates
- Manar Alkhatib: Department of Physics, Khalifa University, Abu Dhabi, United Arab Emirates
- Farhat Naseer: Department of Robotics and Intelligent Machine Engineering, National University of Sciences and Technology, Islamabad, Pakistan
- Naoufel Werghi: Department of Electrical & Computer Engineering, Khalifa University, Abu Dhabi, United Arab Emirates
- Esat Memisoglu: Imaging Institute, Cleveland Clinic Abu Dhabi, Abu Dhabi, United Arab Emirates
- Nabil Maalej: Department of Physics, Khalifa University, Abu Dhabi, United Arab Emirates
- Aamir Raja: Department of Physics, Khalifa University, Abu Dhabi, United Arab Emirates
2
Li J, Jia S, Li D, Chow L, Zhang Q, Yang Y, Bai X, Qu Q, Gao Y, Li Z, Li Z, Shi R, Zhang B, Huang Y, Pan X, Hu Y, Gao Z, Zhou J, Park W, Huang X, Chu H, Chen Z, Li H, Wu P, Zhao G, Yao K, Hadzipasic M, Bernstock JD, Shankar GM, Nan K, Yu X, Traverso G. Wearable bio-adhesive metal detector array (BioMDA) for spinal implants. Nat Commun 2024; 15:7800. PMID: 39242511; PMCID: PMC11379874; DOI: 10.1038/s41467-024-51987-2. Received 03/30/2024; accepted 08/22/2024.
Abstract
Dynamic tracking of spinal instrumentation could facilitate real-time evaluation of hardware integrity and, in doing so, alert patients and clinicians to potential failures. Critically, no method yet exists to continually monitor the integrity of spinal hardware and, by proxy, the process of spinal arthrodesis; as such, hardware failures are often not appreciated until clinical symptoms manifest. Accordingly, we report herein on the development and engineering of a bio-adhesive metal detector array (BioMDA), a potential wearable solution for real-time, non-invasive positional analysis of osseous implants within the spine. The electromagnetic coupling mechanism and intimate interfacial adhesion enable precise sensing of the metallic implants' position without the use of radiation. The customized decoupling models developed facilitate precise determination of the horizontal and vertical positions of the implants with sub-millimetre accuracy (e.g., <0.5 mm). These data support the potential use of BioMDA in real-time/dynamic postoperative monitoring of spinal implants.
Affiliation(s)
- Jian Li: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China
- Shengxin Jia: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China
- Dengfeng Li: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China
- Lung Chow: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Qiang Zhang: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Yiyuan Yang: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA
- Xiao Bai: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Qingao Qu: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Yuyu Gao: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Zhiyuan Li: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Zongze Li: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Rui Shi: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Binbin Zhang: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China
- Ya Huang: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China
- Xinyu Pan: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Yue Hu: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Zhan Gao: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Jingkun Zhou: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China
- WooYoung Park: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Xingcan Huang: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Hongwei Chu: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Zhenlin Chen: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China
- Hu Li: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Pengcheng Wu: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Guangyao Zhao: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Kuanming Yao: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Muhamed Hadzipasic: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Joshua D Bernstock: Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; David H. Koch Institute for Integrative Cancer Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Ganesh M Shankar: Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Kewang Nan: College of Pharmaceutical Sciences, Zhejiang University, Hangzhou, China; Department of Gastroenterology Surgery, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, Zhejiang, 310000, China
- Xinge Yu: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; Hong Kong Centre for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong, China; City University of Hong Kong Shenzhen Research Institute, Shenzhen, 518057, China
- Giovanni Traverso: Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA, USA; Division of Gastroenterology, Hepatology and Endoscopy, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Broad Institute of MIT and Harvard, Cambridge, MA, USA
3
Kogan F, Yoon D, Teeter MG, Chaudhari AJ, Hales L, Barbieri M, Gold GE, Vainberg Y, Goyal A, Watkins L. Multimodal positron emission tomography (PET) imaging in non-oncologic musculoskeletal radiology. Skeletal Radiol 2024; 53:1833-1846. PMID: 38492029; DOI: 10.1007/s00256-024-04640-4. Received 01/03/2024; revised 02/27/2024; accepted 02/28/2024.
Abstract
Musculoskeletal (MSK) disorders have a substantial impact on patients' pain and quality of life. Conventional morphological imaging of tissue structure is limited in its ability to detect pain generators and early MSK disease, and to rapidly assess treatment efficacy. Positron emission tomography (PET), which offers unique capabilities to evaluate molecular and metabolic processes, can provide novel information about early pathophysiologic changes that occur before structural or even microstructural changes can be detected. This sensitivity makes it not only a powerful tool for the detection and characterization of disease but also one able to rapidly assess the efficacy of therapies. These benefits have drawn increasing attention to PET imaging of MSK disorders in recent years. In this narrative review, we discuss several applications of multimodal PET imaging in non-oncologic MSK diseases, including arthritis, osteoporosis, and sources of pain and inflammation. We also describe technical considerations and recent advances in technology and radiotracers, as well as areas of emerging interest for future applications of multimodal PET imaging of MSK conditions. Overall, we present evidence that incorporating PET through multimodal imaging offers an exciting addition to the field of MSK radiology and will likely prove valuable in the transition to an era of precision medicine.
Affiliation(s)
- Feliks Kogan: Department of Radiology, Stanford University, Stanford, CA, USA
- Daehyun Yoon: Department of Radiology, University of California-San Francisco, San Francisco, CA, USA
- Matthew G Teeter: Department of Medical Biophysics, Western University, London, ON, Canada
- Laurel Hales: Department of Radiology, Stanford University, Stanford, CA, USA
- Marco Barbieri: Department of Radiology, Stanford University, Stanford, CA, USA
- Garry E Gold: Department of Radiology, Stanford University, Stanford, CA, USA
- Yael Vainberg: Department of Radiology, Stanford University, Stanford, CA, USA
- Ananya Goyal: Department of Radiology, Stanford University, Stanford, CA, USA
- Lauren Watkins: Department of Radiology, Stanford University, Stanford, CA, USA
4
Ong W, Lee A, Tan WC, Fong KTD, Lai DD, Tan YL, Low XZ, Ge S, Makmur A, Ong SJ, Ting YH, Tan JH, Kumar N, Hallinan JTPD. Oncologic applications of artificial intelligence and deep learning methods in CT spine imaging: a systematic review. Cancers (Basel) 2024; 16:2988. PMID: 39272846; PMCID: PMC11394591; DOI: 10.3390/cancers16172988. Received 07/10/2024; revised 08/14/2024; accepted 08/26/2024.
Abstract
In spinal oncology, integrating deep learning with computed tomography (CT) imaging has shown promise in enhancing diagnostic accuracy, treatment planning, and patient outcomes. This systematic review synthesizes evidence on artificial intelligence (AI) applications in CT imaging for spinal tumors. A PRISMA-guided search identified 33 studies: 12 (36.4%) focused on detecting spinal malignancies, 11 (33.3%) on classification, 6 (18.2%) on prognostication, 3 (9.1%) on treatment planning, and 1 (3.0%) on both detection and classification. Of the classification studies, 7 (21.2%) used machine learning to distinguish between benign and malignant lesions, 3 (9.1%) evaluated tumor stage or grade, and 2 (6.1%) employed radiomics for biomarker classification. Prognostic studies included three (9.1%) that predicted complications such as pathological fractures and three (9.1%) that predicted treatment outcomes. AI's potential for improving workflow efficiency, aiding decision-making, and reducing complications is discussed, along with its limitations in generalizability, interpretability, and clinical integration. Future directions for AI in spinal oncology are also explored. In conclusion, while AI technologies in CT imaging are promising, further research is necessary to validate their clinical effectiveness and optimize their integration into routine practice.
Affiliation(s)
- Wilson Ong: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Aric Lee: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Wei Chuan Tan: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Kuan Ting Dominic Fong: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Daoyong David Lai: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Yi Liang Tan: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Xi Zhen Low: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Shuliang Ge: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Andrew Makmur: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Shao Jin Ong: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Yong Han Ting: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Jiong Hao Tan: National University Spine Institute, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- Naresh Kumar: National University Spine Institute, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- James Thomas Patrick Decourcy Hallinan: Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore; Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
5
Yang J, Afaq A, Sibley R, McMilan A, Pirasteh A. Deep learning applications for quantitative and qualitative PET in PET/MR: technical and clinical unmet needs. MAGMA 2024. PMID: 39167304; DOI: 10.1007/s10334-024-01199-y. Received 03/12/2024; revised 08/06/2024; accepted 08/08/2024.
Abstract
We aim to provide an overview of technical and clinical unmet needs in deep learning (DL) applications for quantitative and qualitative PET in PET/MR, with a focus on attenuation correction, image enhancement, motion correction, kinetic modeling, and simulated data generation. (1) DL-based attenuation correction (DLAC) remains an area of limited exploration for pediatric whole-body PET/MR and lung-specific DLAC, owing to data shortages and technical limitations. (2) DL-based image enhancement approximating MR-guided regularized reconstruction with a high-resolution MR prior has shown promise in enhancing PET image quality. However, its clinical value has not been thoroughly evaluated across various radiotracers, and applications outside the head may pose challenges due to motion artifacts. (3) Robust training for DL-based motion correction requires pairs of motion-corrupted and motion-corrected PET/MR data, but such pairs are rare. (4) DL-based approaches can address the limitations of dynamic PET, such as long scan durations that may cause patient discomfort and motion, providing new research opportunities. (5) Monte Carlo simulations using anthropomorphic digital phantoms can provide extensive datasets to address the shortage of clinical data. This summary of technical and clinical challenges and potential solutions may offer the research community opportunities to advance the clinical translation of DL solutions.
Affiliation(s)
- Jaewon Yang: Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Asim Afaq: Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Robert Sibley: Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Alan McMilan: Departments of Radiology and Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI, USA
- Ali Pirasteh: Departments of Radiology and Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI, USA
6
Xie K, Gao L, Zhang Y, Zhang H, Sun J, Lin T, Sui J, Ni X. Metal implant segmentation in CT images based on diffusion model. BMC Med Imaging 2024; 24:204. PMID: 39107679; PMCID: PMC11301972; DOI: 10.1186/s12880-024-01379-1. Received 06/17/2024; accepted 07/25/2024.
Abstract
BACKGROUND Computed tomography (CT) is widely used in clinics, and its images are affected by metal implants. Metal segmentation is crucial for metal artifact correction, and the common threshold method often fails to segment metals accurately. PURPOSE This study aims to segment metal implants in CT images using a diffusion model and to further validate it with clinical artifact images and phantom images of known size. METHODS A retrospective study was conducted on 100 patients who received radiation therapy without metal artifacts, and simulated artifact data were generated using publicly available mask data. The study utilized 11,280 slices for training and verification, and 2,820 slices for testing. Metal mask segmentation was performed using DiffSeg, a diffusion model incorporating conditional dynamic coding and a global frequency parser (GFParser). Conditional dynamic coding fuses the current segmentation mask and prior images at multiple scales, while GFParser helps eliminate high-frequency noise in the mask. Clinical artifact images and phantom images were also used for model validation. RESULTS Compared with the ground truth, the accuracy of DiffSeg for metal segmentation of simulated data was 97.89% and the DSC was 95.45%. The masks obtained by threshold segmentation covered the ground truth, with DSCs of 82.92% and 84.19% for thresholds of 2500 HU and 3000 HU, respectively. Evaluation metrics and visualization results show that DiffSeg performs better than other classical deep learning networks, especially for clinical artifact data and phantom data. CONCLUSION DiffSeg efficiently and robustly segments metal masks in artifact data using conditional dynamic coding and GFParser. Future work will involve embedding the metal segmentation model in metal artifact reduction to improve the reduction effect.
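For readers unfamiliar with the two methods compared in this abstract, a small self-contained sketch of a Hounsfield-unit threshold baseline and the Dice similarity coefficient (DSC); the image, mask, and threshold below are toy values, not the study's data:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy CT slice in Hounsfield units: soft tissue at 40 HU plus a metal
# implant at 8000 HU; the "ground truth" mask marks the implant pixels.
img = np.full((64, 64), 40.0)
gt = np.zeros((64, 64), dtype=bool)
gt[20:30, 20:30] = True
img[gt] = 8000.0

# Threshold baseline (e.g. 2500 HU), as used for comparison in the study.
pred = img > 2500.0
print(dice(pred, gt))  # 1.0 on this artifact-free toy example
```

On real artifact-corrupted slices the thresholded mask over- or under-shoots the implant, which is why the reported threshold DSCs (82-84%) fall well below DiffSeg's 95.45%.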
Affiliation(s)
- Kai Xie: Radiotherapy Department, The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213000, China
- Liugang Gao: Radiotherapy Department, The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213000, China
- Yutao Zhang: Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou, 213000, China
- Heng Zhang: Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou, 213000, China
- Jiawei Sun: Radiotherapy Department, The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213000, China
- Tao Lin: Radiotherapy Department, The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213000, China
- Jianfeng Sui: Radiotherapy Department, The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213000, China
- Xinye Ni: Radiotherapy Department, The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213000, China; Center for Medical Physics, Nanjing Medical University, Changzhou, 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou, 213000, China
7
Xiang B, Lu J, Yu J. Evaluating tooth segmentation accuracy and time efficiency in CBCT images using artificial intelligence: a systematic review and meta-analysis. J Dent 2024; 146:105064. PMID: 38768854; DOI: 10.1016/j.jdent.2024.105064. Received 11/10/2023; revised 04/22/2024; accepted 05/09/2024.
Abstract
OBJECTIVES This systematic review and meta-analysis aimed to assess the current performance of artificial intelligence (AI)-based methods for tooth segmentation in three-dimensional cone-beam computed tomography (CBCT) images, with a focus on their accuracy and efficiency compared to those of manual segmentation techniques. DATA The data analyzed in this review consisted of a wide range of research studies utilizing AI algorithms for tooth segmentation in CBCT images. A meta-analysis was performed, focusing on evaluation of the segmentation results using the Dice similarity coefficient (DSC). SOURCES PubMed, Embase, Scopus, Web of Science, and IEEE Xplore were comprehensively searched to identify relevant studies. The initial search yielded 5642 entries, and subsequent screening and selection led to the inclusion of 35 studies in the systematic review. Among the various segmentation methods employed, convolutional neural networks, particularly the U-Net model, were the most commonly utilized. The pooled effect of the DSC score for tooth segmentation was 0.95 (95% CI 0.94 to 0.96). Furthermore, seven papers provided insights into the time required for segmentation, which ranged from 1.5 s to 3.4 min when utilizing AI techniques. CONCLUSIONS AI models demonstrated favorable accuracy in automatically segmenting teeth from CBCT images while reducing the time required for the process. Nevertheless, correction methods for metal artifacts and tooth structure segmentation using different imaging modalities should be addressed in future studies. CLINICAL SIGNIFICANCE AI algorithms have great potential for precise tooth measurements, orthodontic treatment planning, dental implant placement, and other dental procedures that require accurate tooth delineation. These advances have contributed to improved clinical outcomes and patient care in dental practice.
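The pooled DSC reported above is a meta-analytic summary. A generic inverse-variance (fixed-effect) pooling with a normal-approximation 95% CI can be sketched as follows; the per-study values are hypothetical, and the review's actual pooling model may differ (e.g. random-effects):

```python
import numpy as np

def pooled_fixed_effect(means, ses):
    """Inverse-variance (fixed-effect) pooled estimate with 95% CI.

    means: per-study effect estimates (e.g. DSC)
    ses:   per-study standard errors
    """
    means, ses = np.asarray(means, float), np.asarray(ses, float)
    w = 1.0 / np.square(ses)                 # weights = 1 / variance
    pooled = float(np.sum(w * means) / np.sum(w))
    se = float(np.sqrt(1.0 / np.sum(w)))     # SE of the pooled estimate
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical per-study DSC means and standard errors.
est, lo, hi = pooled_fixed_effect([0.94, 0.96, 0.95], [0.01, 0.02, 0.015])
print(f"pooled DSC = {est:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

The pooled estimate lands between the study means, weighted toward the most precise studies.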
Affiliation(s)
- Bilu Xiang: School of Dentistry, Shenzhen University Medical School, Shenzhen University, Shenzhen 518000, China
- Jiayi Lu: Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518000, China
- Jiayi Yu: Department of Stomatology, Shenzhen University General Hospital, Shenzhen University, Shenzhen 518000, China
8
Ichikawa K, Kawashima H, Takata T. An image-based metal artifact reduction technique utilizing forward projection in computed tomography. Radiol Phys Technol 2024; 17:402-411. PMID: 38546970; PMCID: PMC11128408; DOI: 10.1007/s12194-024-00790-1. Received 01/06/2024; revised 01/26/2024; accepted 02/13/2024.
Abstract
The projection data generated via forward projection of a computed tomography (CT) image (FP-data) have useful potential in cases where only image data are available. However, it is unclear whether FP-data generated from an image severely corrupted by metal artifacts can be used for metal artifact reduction (MAR). The aim of this study was to investigate the feasibility of a MAR technique using FP-data by comparing its performance with that of a conventional robust MAR using projection data normalization (NMARconv). The NMARconv was modified to make use of FP-data (FPNMAR). A graphics processing unit was used to reduce the time required to generate FP-data and for subsequent processing. The performances of FPNMAR and NMARconv were quantitatively compared using a normalized artifact index (AIn) for two cases each of hip prosthesis and dental fillings. Several clinical CT images with metal artifacts were processed by FPNMAR. The AIn values of FPNMAR and NMARconv were not significantly different from each other, indicating almost the same performance for the two techniques. For all the clinical cases tested, FPNMAR significantly reduced the metal artifacts, and the images of soft tissues and bones obscured by the artifacts were notably recovered. The computation time per image was ~56 ms. FPNMAR, which can be applied to CT images without access to the projection data, exhibited almost the same performance as NMARconv while requiring significantly less processing time. This capability testifies to the potential of FPNMAR for wider use in clinical settings.
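As a note on the evaluation metric, one common definition of an artifact index, the excess ROI noise attributable to artifacts, is sketched below; the paper's normalized variant (AIn) is not spelled out in the abstract, so this formula and the HU values are assumptions for illustration only:

```python
import numpy as np

def artifact_index(sd_artifact, sd_reference):
    """Excess standard deviation attributable to metal artifacts,
    from matched ROIs in the artifact-affected and reference images:
    sqrt(SD_art^2 - SD_ref^2), clipped at zero."""
    excess = sd_artifact ** 2 - sd_reference ** 2
    return float(np.sqrt(max(excess, 0.0)))

# Illustrative ROI standard deviations in HU.
print(artifact_index(60.0, 25.0))  # sqrt(60^2 - 25^2) ≈ 54.54
```

A normalized variant would additionally divide by the index of the uncorrected image, so that 0 means full correction and 1 means no improvement.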
Affiliation(s)
- Katsuhiro Ichikawa: Faculty of Health Sciences, Institute of Medical, Pharmaceutical and Health Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, 920-0942, Japan
- Hiroki Kawashima: Faculty of Health Sciences, Institute of Medical, Pharmaceutical and Health Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, 920-0942, Japan
- Tadanori Takata: Department of Diagnostic Radiology, Kanazawa University Hospital, 13-1 Takara-machi, Kanazawa, 920-8641, Japan
9
Dehghani F, Karimian A, Arabi H. Joint brain tumor segmentation from multi-magnetic resonance sequences through a deep convolutional neural network. Journal of Medical Signals & Sensors 2024; 14:9. PMID: 38993203; PMCID: PMC11111160; DOI: 10.4103/jmss.jmss_13_23. Received 04/21/2023; revised 07/07/2023; accepted 07/31/2023.
Abstract
Background Brain tumor segmentation contributes substantially to diagnosis and treatment planning. Manual brain tumor delineation is a time-consuming and tedious task that varies with the radiologist's skill. Automated brain tumor segmentation is therefore of high importance and is not subject to inter- or intra-observer variability. The objective of this study is to automate the delineation of brain tumors from fluid-attenuated inversion recovery (FLAIR), T1-weighted (T1W), T2-weighted (T2W), and T1W contrast-enhanced (T1ce) magnetic resonance (MR) sequences through a deep learning approach, with a focus on determining which MR sequence alone, or which combination thereof, would lead to the highest accuracy. Methods The BraTS-2020 challenge dataset, containing 370 subjects with four MR sequences and manually delineated tumor masks, was used to train a residual neural network. This network was trained and assessed separately for each of the MR sequences (single-channel input) and for combinations thereof (dual- or multi-channel input). Results The quantitative assessment of the single-channel models reveals that the FLAIR sequence yields higher segmentation accuracy than its counterparts, with a Dice index of 0.77 ± 0.10. Among the dual-channel models, the model with FLAIR and T2W inputs exhibits the highest performance, with a Dice index of 0.80 ± 0.10. Joint tumor segmentation on all four MR sequences yields the highest overall segmentation accuracy, with a Dice index of 0.82 ± 0.09. Conclusion The FLAIR MR sequence is the best choice for tumor segmentation on a single MR sequence, while joint segmentation on all four MR sequences yields higher tumor delineation accuracy.
Affiliation(s)
- Farzaneh Dehghani
- Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
- Alireza Karimian
- Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
- Hossein Arabi
- Department of Medical Imaging, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
10
Selles M, Wellenberg RHH, Slotman DJ, Nijholt IM, van Osch JAC, van Dijke KF, Maas M, Boomsma MF. Image quality and metal artifact reduction in total hip arthroplasty CT: deep learning-based algorithm versus virtual monoenergetic imaging and orthopedic metal artifact reduction. Eur Radiol Exp 2024; 8:31. [PMID: 38480603 PMCID: PMC10937891 DOI: 10.1186/s41747-024-00427-3]
Abstract
BACKGROUND To compare image quality, metal artifacts, and diagnostic confidence of conventional computed tomography (CT) images of unilateral total hip arthroplasty patients (THA) with deep learning-based metal artifact reduction (DL-MAR) to conventional CT and 130-keV monoenergetic images with and without orthopedic metal artifact reduction (O-MAR). METHODS Conventional CT and 130-keV monoenergetic images with and without O-MAR and DL-MAR images of 28 unilateral THA patients were reconstructed. Image quality, metal artifacts, and diagnostic confidence in bone, pelvic organs, and soft tissue adjacent to the prosthesis were jointly scored by two experienced musculoskeletal radiologists. Contrast-to-noise ratios (CNR) between bladder and fat and muscle and fat were measured. Wilcoxon signed-rank tests with Holm-Bonferroni correction were used. RESULTS Significantly higher image quality, higher diagnostic confidence, and less severe metal artifacts were observed on DL-MAR and images with O-MAR compared to images without O-MAR (p < 0.001 for all comparisons). Higher image quality, higher diagnostic confidence for bone and soft tissue adjacent to the prosthesis, and less severe metal artifacts were observed on DL-MAR when compared to conventional images and 130-keV monoenergetic images with O-MAR (p ≤ 0.014). CNRs were higher for DL-MAR and images with O-MAR compared to images without O-MAR (p < 0.001). Higher CNRs were observed on DL-MAR images compared to conventional images and 130-keV monoenergetic images with O-MAR (p ≤ 0.010). CONCLUSIONS DL-MAR showed higher image quality, diagnostic confidence, and superior metal artifact reduction compared to conventional CT images and 130-keV monoenergetic images with and without O-MAR in unilateral THA patients. 
RELEVANCE STATEMENT DL-MAR resulted in improved image quality, stronger reduction of metal artifacts, and improved diagnostic confidence compared to conventional and virtual monoenergetic images with and without metal artifact reduction, bringing DL-based metal artifact reduction closer to clinical application. KEY POINTS • Metal artifacts introduced by total hip arthroplasty hamper radiologic assessment on CT. • A deep-learning algorithm (DL-MAR) was compared to dual-layer CT images with O-MAR. • DL-MAR showed the best image quality and diagnostic confidence. • The highest contrast-to-noise ratios were observed on the DL-MAR images.
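The contrast-to-noise ratios above (bladder vs. fat, muscle vs. fat) follow the usual ROI-based definition; a sketch assuming pooled ROI standard deviation as the noise term, since the abstract does not specify it:

```python
import numpy as np

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two regions of interest.

    Noise is taken as the pooled sample standard deviation of the two
    ROIs -- an assumption for illustration; studies differ on this term.
    """
    a = np.asarray(roi_a, dtype=float)
    b = np.asarray(roi_b, dtype=float)
    noise = np.sqrt((a.std(ddof=1) ** 2 + b.std(ddof=1) ** 2) / 2.0)
    return abs(a.mean() - b.mean()) / noise
```

A higher CNR on the DL-MAR images means the mean-HU separation between tissues is larger relative to the local noise, which is what makes low-contrast structures near the prosthesis easier to read.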
Affiliation(s)
- Mark Selles
- Department of Radiology, Isala, 8025 AB, Zwolle, the Netherlands.
- Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands.
- Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands.
- Ruud H H Wellenberg
- Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands
- Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands
- Derk J Slotman
- Department of Radiology, Isala, 8025 AB, Zwolle, the Netherlands
- Ingrid M Nijholt
- Department of Radiology, Isala, 8025 AB, Zwolle, the Netherlands
- Kees F van Dijke
- Department of Radiology & Nuclear Medicine, Noordwest Ziekenhuisgroep, 1815 JD, Alkmaar, the Netherlands
- Mario Maas
- Department of Radiology & Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ, Amsterdam, the Netherlands
- Amsterdam Movement Sciences, 1081 BT, Amsterdam, the Netherlands
11
Aootaphao S, Puttawibul P, Thajchayapong P, Thongvigitmanee SS. Artifact suppression for breast specimen imaging in micro CBCT using deep learning. BMC Med Imaging 2024; 24:34. [PMID: 38321390 PMCID: PMC10845762 DOI: 10.1186/s12880-024-01216-5]
Abstract
BACKGROUND Cone-beam computed tomography (CBCT) has been introduced for breast-specimen imaging to identify a free resection margin of abnormal tissues in breast conservation. Typical micro CT consumes long acquisition and computation times. One simple way to reduce the acquisition scan time is to decrease the number of projections, but this generates streak artifacts on breast specimen images. Furthermore, the presence of a metallic-needle marker on a breast specimen causes metal artifacts that are prominently visible in the images. In this work, we propose a deep learning-based approach for suppressing both streak and metal artifacts in CBCT. METHODS Sinogram datasets acquired from CBCT with a small number of projections and containing metal objects were used. The sinogram was first modified by removing metal objects and upsampling in the angular direction. Then, the modified sinogram was initialized by linear interpolation and synthesized by a modified neural network model based on a U-Net structure. To obtain the reconstructed images, the synthesized sinogram was reconstructed using the traditional filtered backprojection (FBP) approach. The remaining residual artifacts in the images were further handled by another neural network model, ResU-Net. The corresponding denoised image was combined with the extracted metal objects in the same data positions to produce the final results. RESULTS The image quality of the reconstructed images from the proposed method was improved over that of conventional FBP, iterative reconstruction (IR), sinogram linear interpolation, ResU-Net denoising alone, and U-Net sinogram synthesis alone. The proposed method yielded a 3.6 times higher contrast-to-noise ratio, 1.3 times higher peak signal-to-noise ratio, and 1.4 times higher structural similarity index (SSIM) than the traditional technique.
Soft tissues around the marker showed good improvement, and the most severe artifacts were significantly reduced by the proposed method. CONCLUSIONS Our proposed method performs well in reducing streak and metal artifacts in CBCT reconstructed images, thus improving the overall breast specimen images. This would be beneficial for clinical use.
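The peak signal-to-noise ratio reported above compares a reconstruction against a reference image; a generic sketch of the standard definition (the study's exact data-range convention is not stated, so it is taken from the reference image here):

```python
import numpy as np

def psnr(img, ref, data_range=None):
    """Peak signal-to-noise ratio (dB) of an image against a reference.

    data_range defaults to the reference image's intensity span -- an
    assumption; fixed ranges (e.g. 255 for 8-bit data) are also common.
    """
    img = np.asarray(img, dtype=float)
    ref = np.asarray(ref, dtype=float)
    mse = np.mean((img - ref) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    if data_range is None:
        data_range = ref.max() - ref.min()
    return 10.0 * np.log10(data_range ** 2 / mse)
```

A 1.3-fold PSNR gain, as reported, corresponds to a substantially lower mean squared error against the reference reconstruction, since PSNR is logarithmic in MSE.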
Affiliation(s)
- Sorapong Aootaphao
- Faculty of Medicine, Prince of Songkla University, Songkhla, Thailand.
- Medical Imaging System Research Team, Assistive Technology and Medical Devices Research Group, National Electronics and Computer Technology Center, National Science and Technology Development Agency, Pathum Thani, Thailand.
- Saowapak S Thongvigitmanee
- Medical Imaging System Research Team, Assistive Technology and Medical Devices Research Group, National Electronics and Computer Technology Center, National Science and Technology Development Agency, Pathum Thani, Thailand
12
Selles M, van Osch JAC, Maas M, Boomsma MF, Wellenberg RHH. Advances in metal artifact reduction in CT images: A review of traditional and novel metal artifact reduction techniques. Eur J Radiol 2024; 170:111276. [PMID: 38142571 DOI: 10.1016/j.ejrad.2023.111276]
Abstract
Metal artifacts degrade CT image quality, hampering clinical assessment. Numerous metal artifact reduction methods are available to improve the image quality of CT images with metal implants. In this review, an overview of traditional methods is provided, including the modification of acquisition and reconstruction parameters, projection-based metal artifact reduction (MAR) techniques, dual-energy CT (DECT), and combinations of these techniques. Furthermore, the additional value and challenges of novel metal artifact reduction techniques introduced over the past years, such as photon-counting CT (PCCT) and deep learning-based metal artifact reduction, are discussed.
Affiliation(s)
- Mark Selles
- Department of Radiology, Isala, 8025 AB Zwolle, the Netherlands; Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands.
- Mario Maas
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
- Ruud H H Wellenberg
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
13
Shiri I, Salimi Y, Maghsudi M, Jenabi E, Harsini S, Razeghi B, Mostafaei S, Hajianfar G, Sanaat A, Jafari E, Samimi R, Khateri M, Sheikhzadeh P, Geramifar P, Dadgar H, Bitrafan Rajabi A, Assadi M, Bénard F, Vafaei Sadr A, Voloshynovskiy S, Mainta I, Uribe C, Rahmim A, Zaidi H. Differential privacy preserved federated transfer learning for multi-institutional 68Ga-PET image artefact detection and disentanglement. Eur J Nucl Med Mol Imaging 2023; 51:40-53. [PMID: 37682303 PMCID: PMC10684636 DOI: 10.1007/s00259-023-06418-7]
Abstract
PURPOSE Image artefacts continue to pose challenges in clinical molecular imaging, resulting in misdiagnoses, additional radiation doses to patients and financial costs. Mismatch and halo artefacts occur frequently in gallium-68 (68Ga)-labelled compounds whole-body PET/CT imaging. Correcting for these artefacts is not straightforward and requires algorithmic developments, given that conventional techniques have failed to address them adequately. In the current study, we employed differential privacy-preserving federated transfer learning (FTL) to manage clinical data sharing and tackle privacy issues for building centre-specific models that detect and correct artefacts present in PET images. METHODS Altogether, 1413 patients with 68Ga prostate-specific membrane antigen (PSMA)/DOTA-TATE (TOC) PET/CT scans from 3 countries, including 8 different centres, were enrolled in this study. CT-based attenuation and scatter correction (CT-ASC) was used in all centres for quantitative PET reconstruction. Prior to model training, an experienced nuclear medicine physician reviewed all images to ensure the use of high-quality, artefact-free PET images (421 patients' images). A deep neural network (modified U2Net) was trained on 80% of the artefact-free PET images under centre-based (CeBa), centralized (CeZe) and the proposed differential privacy FTL frameworks. Quantitative analysis was performed in 20% of the clean data (with no artefacts) in each centre. A panel of two nuclear medicine physicians conducted qualitative assessment of image quality, diagnostic confidence and image artefacts in 128 patients with artefacts (256 images for CT-ASC and FTL-ASC). RESULTS The three approaches investigated in this study for 68Ga-PET imaging (CeBa, CeZe and FTL) resulted in a mean absolute error (MAE) of 0.42 ± 0.21 (CI 95%: 0.38 to 0.47), 0.32 ± 0.23 (CI 95%: 0.27 to 0.37) and 0.28 ± 0.15 (CI 95%: 0.25 to 0.31), respectively.
Statistical analysis using the Wilcoxon test revealed significant differences between the three approaches, with FTL outperforming CeBa and CeZe (p-value < 0.05) in the clean test set. The qualitative assessment demonstrated that FTL-ASC significantly improved image quality and diagnostic confidence and decreased image artefacts, compared to CT-ASC in 68Ga-PET imaging. In addition, mismatch and halo artefacts were successfully detected and disentangled in the chest, abdomen and pelvic regions in 68Ga-PET imaging. CONCLUSION The proposed approach benefits from using large datasets from multiple centres while preserving patient privacy. Qualitative assessment by nuclear medicine physicians showed that the proposed model correctly addressed two main challenging artefacts in 68Ga-PET imaging. This technique could be integrated in the clinic for 68Ga-PET imaging artefact detection and disentanglement using multicentric heterogeneous datasets.
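The mean absolute errors with 95% confidence intervals reported for CeBa, CeZe and FTL can be computed from per-case errors; a sketch using a normal-approximation interval (an assumption for illustration only, as the study does not state how its intervals were derived):

```python
import numpy as np

def mae_with_ci(pred, truth, z=1.96):
    """Mean absolute error with a normal-approximation 95% CI.

    The CI construction here is an illustrative assumption; bootstrap
    or other interval methods are equally plausible choices.
    """
    err = np.abs(np.asarray(pred, dtype=float)
                 - np.asarray(truth, dtype=float)).ravel()
    mae = float(err.mean())
    half = z * err.std(ddof=1) / np.sqrt(err.size)
    return mae, (mae - half, mae + half)
```

Under this reading, the non-overlapping intervals for CeBa (0.38 to 0.47) and FTL (0.25 to 0.31) are consistent with the significant Wilcoxon-test differences the study reports.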
Affiliation(s)
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Department of Cardiology, Inselspital, University of Bern, Bern, Switzerland
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Mehdi Maghsudi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Elnaz Jenabi
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Sara Harsini
- BC Cancer Research Institute, Vancouver, BC, Canada
- Behrooz Razeghi
- Department of Computer Science, University of Geneva, Geneva, Switzerland
- Shayan Mostafaei
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
- Department of Medical Epidemiology and Biostatistics, Karolinska Institute, Stockholm, Sweden
- Ghasem Hajianfar
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Esmail Jafari
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Rezvan Samimi
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Maziar Khateri
- Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Peyman Sheikhzadeh
- Department of Nuclear Medicine, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
- Parham Geramifar
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Habibollah Dadgar
- Cancer Research Center, Razavi Hospital, Imam Reza International University, Mashhad, Iran
- Ahmad Bitrafan Rajabi
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Echocardiography Research Center, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Majid Assadi
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- François Bénard
- BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Alireza Vafaei Sadr
- Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany
- Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA, 17033, USA
- Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Carlos Uribe
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Molecular Imaging and Therapy, BC Cancer, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Arman Rahmim
- Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Geneva University Neuro Center, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
14
Guo R, Zou Y, Zhang S, An J, Zhang G, Du X, Gong H, Xiong S, Long Y, Ma J. Preclinical validation of a novel deep learning-based metal artifact correction algorithm for orthopedic CT imaging. J Appl Clin Med Phys 2023; 24:e14166. [PMID: 37787513 PMCID: PMC10647951 DOI: 10.1002/acm2.14166]
Abstract
PURPOSE To validate a novel deep learning-based metal artifact correction (MAC) algorithm for CT, namely AI-MAC, in a preclinical setting, in comparison with conventional MAC and the virtual monochromatic imaging (VMI) technique. MATERIALS AND METHODS An experimental phantom was designed by consecutively inserting two sets of pedicle screws (size Φ 6.5 × 30-mm and Φ 7.5 × 40-mm) into a vertebral specimen to simulate the clinical scenario of metal implantation. The resulting MAC, VMI, and AI-MAC images were compared with respect to the metal-free reference image by subjective scoring, as well as by CT attenuation, image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and correction accuracy via adaptive segmentation of the paraspinal muscle and vertebral body. RESULTS The AI-MAC and VMI images showed significantly higher subjective scores than the MAC image (all p < 0.05). The SNRs and CNRs on the AI-MAC image were comparable to the reference (all p > 0.05), whereas those on the VMI were significantly lower (all p < 0.05). The paraspinal muscle segmented on the AI-MAC image was 4.6% and 5.1% more complete than on the VMI and MAC images for the Φ 6.5 × 30-mm screws, and 5.0% and 5.1% for the Φ 7.5 × 40-mm screws, respectively. The vertebral body segmented on the VMI was closest to the reference, with only 3.2% and 7.4% overestimation for the Φ 6.5 × 30-mm and Φ 7.5 × 40-mm screws, respectively. CONCLUSIONS Using the metal-free reference as the ground truth for comparison, AI-MAC outperforms VMI in characterizing soft tissue, while VMI is useful in skeletal depiction.
Affiliation(s)
- Rui Guo
- Department of Radiology, Xinjiang Production & Construction Corps Hospital, Urumqi, China
- Shuai Zhang
- Department of Radiology, Xinjiang Production & Construction Corps Hospital, Urumqi, China
- Jiajia An
- Department of Radiology, Xinjiang Production & Construction Corps Hospital, Urumqi, China
- Xiangdong Du
- Department of Radiology, Xinjiang Production & Construction Corps Hospital, Urumqi, China
- Huan Gong
- Department of Radiology, Xinjiang Production & Construction Corps Hospital, Urumqi, China
- Sining Xiong
- Department of Radiology, Xinjiang Production & Construction Corps Hospital, Urumqi, China
- Yangfei Long
- Department of Radiology, Xinjiang Production & Construction Corps Hospital, Urumqi, China
- Jing Ma
- Department of Radiology, Xinjiang Production & Construction Corps Hospital, Urumqi, China
15
Liang D, Zhang S, Zhao Z, Wang G, Sun J, Zhao J, Li W, Xu LX. Two-stage generative adversarial networks for metal artifact reduction and visualization in ablation therapy of liver tumors. Int J Comput Assist Radiol Surg 2023; 18:1991-2000. [PMID: 37391537 DOI: 10.1007/s11548-023-02986-z]
Abstract
PURPOSE The strong metal artifacts produced by the electrode needle cause poor image quality, thus preventing physicians from observing the surgical situation during the puncture process. To address this issue, we propose a metal artifact reduction and visualization framework for CT-guided ablation therapy of liver tumors. METHODS Our framework contains a metal artifact reduction model and an ablation therapy visualization model. A two-stage generative adversarial network is proposed to reduce the metal artifacts of intraoperative CT images and avoid image blurring. To visualize the puncture process, the axis and tip of the needle are localized, and then the needle is rebuilt in 3D space intraoperatively. RESULTS Experiments show that our proposed metal artifact reduction method achieves higher SSIM (0.891) and PSNR (26.920) values than the state-of-the-art methods. The accuracy of ablation needle reconstruction averages 2.76 mm for needle tip localization and 1.64° for needle axis localization. CONCLUSION We propose a novel metal artifact reduction and ablation therapy visualization framework for CT-guided ablation therapy of liver cancer. The experiment results indicate that our approach can reduce metal artifacts and improve image quality. Furthermore, our proposed method demonstrates the potential for displaying the relative position of the tumor and the needle intraoperatively.
Affiliation(s)
- Duan Liang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Shunan Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Ziqi Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Guangzhi Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jianqi Sun
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Wentao Li
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200240, China
- Lisa X Xu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
16
Rau A, Straehle J, Stein T, Diallo T, Rau S, Faby S, Nikolaou K, Schoenberg SO, Overhoff D, Beck J, Urbach H, Klingler JH, Bamberg F, Weiss J. Photon-Counting Computed Tomography (PC-CT) of the spine: impact on diagnostic confidence and radiation dose. Eur Radiol 2023; 33:5578-5586. [PMID: 36890304 PMCID: PMC10326119 DOI: 10.1007/s00330-023-09511-5]
Abstract
OBJECTIVES Computed tomography (CT) is employed to evaluate surgical outcome after spinal interventions. Here, we investigate the impact of multispectral photon-counting computed tomography (PC-CT) on image quality, diagnostic confidence, and radiation dose compared to an energy-integrating CT (EID-CT). METHODS In this prospective study, 32 patients underwent PC-CT of the spine. Data was reconstructed in two ways: (1) standard bone kernel with 65-keV (PC-CTstd) and (2) 130-keV monoenergetic images (PC-CT130 keV). Prior EID-CT was available for 17 patients; for the remaining 15, an age-, sex-, and body mass index-matched EID-CT cohort was identified. Image quality (5-point Likert scales on overall, sharpness, artifacts, noise, diagnostic confidence) of PC-CTstd and EID-CT was assessed by four radiologists independently. If metallic implants were present (n = 10), PC-CTstd and PC-CT130 keV images were again assessed on 5-point Likert scales by the same radiologists. Hounsfield units (HU) were measured within the metallic artifact and compared between PC-CTstd and PC-CT130 keV. Finally, the radiation dose (CTDIvol) was evaluated. RESULTS Sharpness was rated significantly higher (p = 0.009) and noise significantly lower (p < 0.001) in PC-CTstd vs. EID-CT. In the subset of patients with metallic implants, reading scores for PC-CT130 keV revealed superior ratings vs. PC-CTstd for image quality, artifacts, noise, and diagnostic confidence (all p < 0.001), accompanied by a significant increase of HU values within the artifact (p < 0.001). Radiation dose was significantly lower for PC-CT vs. EID-CT (mean CTDIvol: 8.83 vs. 15.7 mGy; p < 0.001). CONCLUSIONS PC-CT of the spine with high-kiloelectronvolt reconstructions provides sharper images, higher diagnostic confidence, and lower radiation dose in patients with metallic implants.
KEY POINTS • Compared to energy-integrating CT, photon-counting CT of the spine had significantly higher sharpness and lower image noise while radiation dose was reduced by 45%. • In patients with metallic implants, virtual monochromatic photon-counting images at 130 keV were superior to standard reconstruction at 65 keV in terms of image quality, artifacts, noise, and diagnostic confidence.
Affiliation(s)
- Alexander Rau
- Department of Diagnostic and Interventional Radiology, University Hospital, Hugstetter Straße 55, 79106, Freiburg, Germany.
- Department of Neuroradiology, University Hospital, Breisacher Straße 64, 79106, Freiburg, Germany.
- Jakob Straehle
- Department of Neurosurgery, University Hospital, Breisacher Straße 64, 79106, Freiburg, Germany
- Thomas Stein
- Department of Diagnostic and Interventional Radiology, University Hospital, Hugstetter Straße 55, 79106, Freiburg, Germany
- Thierno Diallo
- Department of Diagnostic and Interventional Radiology, University Hospital, Hugstetter Straße 55, 79106, Freiburg, Germany
- Stephan Rau
- Department of Diagnostic and Interventional Radiology, University Hospital, Hugstetter Straße 55, 79106, Freiburg, Germany
- Department of Neuroradiology, University Hospital, Breisacher Straße 64, 79106, Freiburg, Germany
- Konstantin Nikolaou
- Department of Diagnostic and Interventional Radiology, University Tuebingen, Hoppe-Seyler Straße 3, 72076, Tuebingen, Germany
- Stefan O Schoenberg
- Department of Radiology and Nuclear Medicine, University Medical Center Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Daniel Overhoff
- Department of Radiology and Nuclear Medicine, University Medical Center Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167, Mannheim, Germany
- Jürgen Beck
- Department of Neurosurgery, University Hospital, Breisacher Straße 64, 79106, Freiburg, Germany
- Horst Urbach
- Department of Neuroradiology, University Hospital, Breisacher Straße 64, 79106, Freiburg, Germany
- Jan-Helge Klingler
- Department of Neurosurgery, University Hospital, Breisacher Straße 64, 79106, Freiburg, Germany
- Fabian Bamberg
- Department of Diagnostic and Interventional Radiology, University Hospital, Hugstetter Straße 55, 79106, Freiburg, Germany
- Jakob Weiss
- Department of Diagnostic and Interventional Radiology, University Hospital, Hugstetter Straße 55, 79106, Freiburg, Germany
17
Selles M, Slotman DJ, van Osch JAC, Nijholt IM, Wellenberg RHH, Maas M, Boomsma MF. Is AI the way forward for reducing metal artifacts in CT? Development of a generic deep learning-based method and initial evaluation in patients with sacroiliac joint implants. Eur J Radiol 2023; 163:110844. [PMID: 37119708 DOI: 10.1016/j.ejrad.2023.110844]
Abstract
PURPOSE To develop a deep learning-based metal artifact reduction technique (dl-MAR) and quantitatively compare metal artifacts on dl-MAR-corrected CT-images, orthopedic metal artifact reduction (O-MAR)-corrected CT-images and uncorrected CT-images after sacroiliac (SI) joint fusion. METHODS dl-MAR was trained on CT-images with simulated metal artifacts. Pre-surgery CT-images and uncorrected, O-MAR-corrected and dl-MAR-corrected post-surgery CT-images of twenty-five patients undergoing SI joint fusion were retrospectively obtained. Image registration was applied to align pre-surgery with post-surgery CT-images within each patient, allowing placement of regions of interest (ROIs) on the same anatomical locations. Six ROIs were placed on the metal implant and the contralateral side in bone lateral of the SI joint, the gluteus medius muscle and the iliacus muscle. Metal artifacts were quantified as the difference in Hounsfield units (HU) between pre- and post-surgery CT-values within the ROIs on the uncorrected, O-MAR-corrected and dl-MAR-corrected images. Noise was quantified as standard deviation in HU within the ROIs. Metal artifacts and noise in the post-surgery CT-images were compared using linear multilevel regression models. RESULTS Metal artifacts were significantly reduced by O-MAR and dl-MAR in bone (p < 0.001), contralateral bone (O-MAR: p = 0.009; dl-MAR: p < 0.001), gluteus medius (p < 0.001), contralateral gluteus medius (p < 0.001), iliacus (p < 0.001) and contralateral iliacus (O-MAR: p = 0.024; dl-MAR: p < 0.001) compared to uncorrected images. Images corrected with dl-MAR resulted in stronger artifact reduction than images corrected with O-MAR in contralateral bone (p < 0.001), gluteus medius (p = 0.006), contralateral gluteus medius (p < 0.001), iliacus (p = 0.017), and contralateral iliacus (p < 0.001). 
Noise was reduced by O-MAR in bone (p = 0.009) and gluteus medius (p < 0.001) while noise was reduced by dl-MAR in all ROIs (p < 0.001) in comparison to uncorrected images. CONCLUSION dl-MAR showed superior metal artifact reduction compared to O-MAR in CT-images with SI joint fusion implants.
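The artifact and noise quantification described above (HU difference between registered pre- and post-surgery ROIs, standard deviation within the ROI) can be sketched as follows; the function name and interface are illustrative, not the study's code:

```python
import numpy as np

def roi_artifact_metrics(pre_roi, post_roi):
    """Artifact severity and noise for one registered ROI pair.

    Artifact: shift in mean HU from the pre-surgery to the post-surgery
    scan at the same anatomical location. Noise: sample standard
    deviation of HU within the post-surgery ROI.
    """
    pre = np.asarray(pre_roi, dtype=float)
    post = np.asarray(post_roi, dtype=float)
    artifact_hu = post.mean() - pre.mean()
    noise_hu = post.std(ddof=1)
    return artifact_hu, noise_hu
```

With this formulation, a correction algorithm that drives the post-surgery ROI mean back toward its pre-surgery value reduces the artifact metric toward zero, which is how dl-MAR and O-MAR are being compared in the study.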
Affiliation(s)
- Mark Selles
- Department of Radiology, Isala, 8025 AB Zwolle, the Netherlands; Department of Radiology & Nuclear medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands.
- Derk J Slotman
- Department of Radiology, Isala, 8025 AB Zwolle, the Netherlands
- Ruud H H Wellenberg
- Department of Radiology & Nuclear medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
- Mario Maas
- Department of Radiology & Nuclear medicine, Amsterdam University Medical Centre, 1105 AZ Amsterdam, the Netherlands; Amsterdam Movement Sciences, 1081 BT Amsterdam, the Netherlands
18
Potočnik J, Foley S, Thomas E. Current and potential applications of artificial intelligence in medical imaging practice: A narrative review. J Med Imaging Radiat Sci 2023; 54:376-385. [PMID: 37062603] [DOI: 10.1016/j.jmir.2023.03.033]
Abstract
BACKGROUND AND PURPOSE Artificial intelligence (AI) is present in many areas of our lives. Much of the digital data generated in health care can be used for building automated systems to bring improvements to existing workflows and create a more personalised healthcare experience for patients. This review outlines select current and potential AI applications in medical imaging practice and provides a view of how diagnostic imaging suites will operate in the future. Challenges associated with potential applications will be discussed and healthcare staff considerations necessary to benefit from AI-enabled solutions will be outlined. METHODS Several electronic databases, including PubMed, ScienceDirect, Google Scholar, and University College Dublin Library Database, were used to identify relevant articles with a Boolean search strategy. Textbooks, government sources and vendor websites were also considered. RESULTS/DISCUSSION Many AI-enabled solutions in radiographic practice are available with more automation on the horizon. Traditional workflow will become faster, more effective, and more user friendly. AI can handle administrative or technical types of work, meaning it is applicable across all aspects of medical imaging practice. CONCLUSION AI offers significant potential to automate most of the manual tasks, ensure service consistency, and improve patient care. Radiographers, radiation therapists, and clinicians should ensure they have adequate understanding of the technology to enable ethical oversight of its implementation.
Affiliation(s)
- Jaka Potočnik
- University College Dublin School of Medicine, Radiography & Diagnostic Imaging, Room A223, Belfield, Dublin 4, Ireland
- Shane Foley
- University College Dublin School of Medicine, Radiography & Diagnostic Imaging, Room A223, Belfield, Dublin 4, Ireland
- Edel Thomas
- University College Dublin School of Medicine, Radiography & Diagnostic Imaging, Room A223, Belfield, Dublin 4, Ireland
19
Zhu M, Zhu Q, Song Y, Guo Y, Zeng D, Bian Z, Wang Y, Ma J. Physics-informed sinogram completion for metal artifact reduction in CT imaging. Phys Med Biol 2023; 68. [PMID: 36808913] [DOI: 10.1088/1361-6560/acbddf]
Abstract
Objective. Metal artifacts in computed tomography (CT) imaging are unavoidably adverse to clinical diagnosis and treatment outcomes. Most metal artifact reduction (MAR) methods easily result in over-smoothing and loss of structural detail near the metal implants, especially for implants with irregular, elongated shapes. To address this problem, we present the physics-informed sinogram completion (PISC) method for MAR in CT imaging, to reduce metal artifacts and recover more structural texture. Approach. Specifically, the original uncorrected sinogram is first completed by a normalized linear interpolation algorithm to reduce metal artifacts. Simultaneously, the uncorrected sinogram is also corrected based on a beam-hardening correction physical model, to recover latent structure information in the metal trajectory region by leveraging the attenuation characteristics of different materials. Both corrected sinograms are fused with pixel-wise adaptive weights, which are manually designed according to the shape and material information of the metal implants. To further reduce artifacts and improve CT image quality, a post-processing frequency-split algorithm is adopted to yield the final corrected CT image after reconstructing the fused sinogram. Main results. We qualitatively and quantitatively evaluated the presented PISC method on two simulated datasets and three real datasets. All results demonstrate that the presented PISC method can effectively correct metal implants of various shapes and materials, in terms of artifact suppression and structure preservation. Significance. We propose a sinogram-domain MAR method that compensates for the over-smoothing problem of most MAR methods by taking advantage of physical prior knowledge, and that has the potential to improve the performance of deep learning-based MAR approaches.
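The sinogram-completion step this abstract builds on can be sketched as a per-view linear interpolation across the metal trace. A simplified illustration only: the function name is hypothetical, and PISC's normalization, beam-hardening physical model, and adaptive fusion are not reproduced here.

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_trace):
    """Complete the metal-corrupted region of a sinogram: for each
    projection view, detector bins flagged in the metal trace are
    replaced by values linearly interpolated from the nearest
    unaffected bins."""
    out = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for view in range(sinogram.shape[0]):
        bad = metal_trace[view]
        if bad.any() and not bad.all():
            # np.interp fills the flagged bins from the surrounding clean bins
            out[view, bad] = np.interp(bins[bad], bins[~bad], out[view, ~bad])
    return out
```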
Affiliation(s)
- Manman Zhu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Qisen Zhu
- Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Yuyan Song
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Yi Guo
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Dong Zeng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Zhaoying Bian
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Yongbo Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Jianhua Ma
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
20
Enhancement of 18F-Fluorodeoxyglucose PET Image Quality by Deep-Learning-Based Image Reconstruction Using Advanced Intelligent Clear-IQ Engine in Semiconductor-Based PET/CT Scanners. Diagnostics (Basel) 2022; 12:2500. [PMID: 36292189] [PMCID: PMC9599974] [DOI: 10.3390/diagnostics12102500]
Abstract
Deep learning (DL) image quality improvement has been studied for application to 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT). It is unclear, however, whether DL can increase the quality of images obtained with semiconductor-based PET/CT scanners. This study aimed to compare the quality of semiconductor-based PET/CT scanner images obtained by DL-based technology with that of conventional OSEM images with a Gaussian postfilter. For the DL-based data processing we used the Advanced Intelligent Clear-IQ Engine (AiCE, Canon Medical Systems, Tochigi, Japan); for the OSEM images, a Gaussian postfilter of 3 mm FWHM was used. Thirty patients who underwent semiconductor-based PET/CT scanner imaging between May 6, 2021, and May 19, 2021, were enrolled. We compared AiCE images and OSEM images and scored them for delineation, image noise, and overall image quality. We also measured standardized uptake values (SUVs) in tumors and healthy tissues and compared them between AiCE and OSEM. AiCE images scored significantly higher than OSEM images for delineation, image noise, and overall image quality. The Fleiss kappa value for interobserver agreement was 0.57. Among the 21 SUV measurements in healthy organs, 11 (52.4%) differed significantly between AiCE and OSEM images. More pathological lesions were detected in AiCE images than in OSEM images, with AiCE images showing higher SUVs for the pathological lesions. AiCE can improve the quality of images acquired with semiconductor-based PET/CT scanners, including the noise level, contrast, and tumor detection capability.
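The SUV comparisons above rest on the standard body-weight normalization, which can be sketched as follows. This is the generic textbook definition, not a detail reported by the study; the function and parameter names are illustrative.

```python
def suv_body_weight(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight-normalized standardized uptake value:
    SUV = tissue activity concentration / (injected dose / body weight).
    Assumes decay-corrected activity and a tissue density of 1 g/ml."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)
```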
21
Atlas of non-pathological solitary or asymmetrical skeletal muscle uptake in [18F]FDG-PET. Jpn J Radiol 2022; 40:755-767. [PMID: 35344131] [PMCID: PMC9345840] [DOI: 10.1007/s11604-022-01264-3]
Abstract
Positron emission tomography (PET) using 2-deoxy-2-[18F]fluoro-d-glucose ([18F]FDG) is widely used in oncology and other fields. In [18F]FDG PET images, increased muscle uptake is observed owing to exercise load or muscle tension, in addition to malignant tumors and inflammation. Moreover, we occasionally observe non-pathological solitary or unilateral skeletal muscle uptake for which the exact reason is difficult to explain. In most cases, we can interpret such uptake as having no pathological significance. However, it is important to recognize these muscle uptake patterns to avoid misdiagnosing them as pathological. The teaching point of this pictorial essay is therefore to comprehend the patterns of solitary or asymmetrical skeletal muscle uptake seen in routine [18F]FDG-PET scans. As an educational goal, readers will be able to name the muscles in which intense physiological [18F]FDG uptake can be observed, differentiate physiological muscle uptake from lesions, and discuss uncertain muscle uptake with other physicians or specialists.
22
End-to-End Deep Learning CT Image Reconstruction for Metal Artifact Reduction. Appl Sci (Basel) 2021. [DOI: 10.3390/app12010404]
Abstract
Metal artifacts are common in CT-guided interventions due to the presence of metallic instruments. These artifacts often obscure clinically relevant structures, which can complicate the intervention. In this work, we present a deep learning CT reconstruction network called iCTU-Net for the reduction of metal artifacts. The network emulates the filtering and back projection steps of classical filtered back projection (FBP), and a U-Net is used as post-processing to refine the back-projected image. The reconstruction is trained end-to-end, i.e., the inputs of the iCTU-Net are sinograms and the outputs are reconstructed images. The network does not require a predefined back projection operator or the exact X-ray beam geometry. Supervised training is performed on simulated interventional data of the abdomen. For projection data exhibiting severe artifacts, the iCTU-Net achieved reconstructions with SSIM = 0.970±0.009 and PSNR = 40.7±1.6. The best reference method, an image-based post-processing network, only achieved SSIM = 0.944±0.024 and PSNR = 39.8±1.9. Since the whole reconstruction process is learned, the network was able to fully utilize the raw data, which benefited the removal of metal artifacts. The proposed method was the only studied method that could eliminate the metal streak artifacts.
23
Paravastu SS, Hasani N, Farhadi F, Collins MT, Edenbrandt L, Summers RM, Saboury B. Applications of Artificial Intelligence in 18F-Sodium Fluoride Positron Emission Tomography/Computed Tomography: Current State and Future Directions. PET Clin 2021; 17:115-135. [PMID: 34809861] [DOI: 10.1016/j.cpet.2021.09.012]
Abstract
This review discusses the current state of artificial intelligence (AI) in 18F-NaF PET/CT imaging and the potential applications to come in diagnosis, prognostication, and improvement of care for patients with bone diseases. Emphasis is placed on the role of AI algorithms in CT bone segmentation, given their prevalence in medical imaging and their utility in extracting spatial information in combined PET/CT studies.
Affiliation(s)
- Sriram S Paravastu
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; Skeletal Disorders and Mineral Homeostasis Section, National Institute of Dental and Craniofacial Research, National Institutes of Health (NIH), 30 Convent Dr., Building 30, Room 228 MSC 4320, Bethesda, MD 20892, USA
- Navid Hasani
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; University of Queensland Faculty of Medicine, Ochsner Clinical School, New Orleans, LA 70121, USA
- Faraz Farhadi
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA
- Michael T Collins
- Skeletal Disorders and Mineral Homeostasis Section, National Institute of Dental and Craniofacial Research, National Institutes of Health (NIH), 30 Convent Dr., Building 30, Room 228 MSC 4320, Bethesda, MD 20892, USA
- Lars Edenbrandt
- Department of Clinical Physiology, Sahlgrenska University Hospital, Göteborg, Sweden
- Ronald M Summers
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), 9000 Rockville Pike, Building 10, Room 1C455, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland-Baltimore County, Baltimore, MD, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
24
Bahrami A, Karimian A, Arabi H. Comparison of different deep learning architectures for synthetic CT generation from MR images. Phys Med 2021; 90:99-107. [PMID: 34597891] [DOI: 10.1016/j.ejmp.2021.09.006]
Abstract
PURPOSE Among the available methods for synthetic CT (sCT) generation from MR images for MR-guided radiation planning, deep learning algorithms outperform their conventional counterparts. In this study, we investigated the performance of some of the most popular deep learning architectures, including eCNN, U-Net, GAN, V-Net, and ResNet, for the task of sCT generation. As a baseline, an atlas-based method was implemented, against which the results of the deep learning-based models were compared. METHODS A dataset consisting of 20 co-registered MR-CT pairs of the male pelvis was used to assess the performance of the different sCT generation methods. The mean error (ME), mean absolute error (MAE), Pearson correlation coefficient (PCC), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) metrics were computed between the estimated sCT and the ground-truth (reference) CT images. RESULTS Visual inspection revealed that the sCTs produced by eCNN, V-Net, and ResNet, unlike those of the other methods, were less noisy and closely resembled the ground-truth CT image. In the whole pelvis region, eCNN yielded the lowest MAE (26.03 ± 8.85 HU) and ME (0.82 ± 7.06 HU), and the highest PCC metrics were yielded by eCNN (0.93 ± 0.05) and ResNet (0.91 ± 0.02). The ResNet model had the highest PSNR of 29.38 ± 1.75 among all models. In terms of the Dice similarity coefficient, the eCNN method revealed superior performance in major tissue identification (air, bone, and soft tissue). CONCLUSIONS Overall, the eCNN and ResNet deep learning methods revealed acceptable performance with clinically tolerable quantification errors.
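The voxel-wise evaluation metrics reported above (ME, MAE, PCC, PSNR) can be computed as in the generic sketch below. This follows the standard definitions, assuming co-registered arrays; it does not reproduce the exact masking or ROI conventions of the cited study, and the function name is hypothetical.

```python
import numpy as np

def sct_metrics(sct, ref_ct, data_range=None):
    """Compare a synthetic CT with a reference CT: mean error (ME),
    mean absolute error (MAE), Pearson correlation coefficient (PCC),
    and peak signal-to-noise ratio (PSNR, in dB)."""
    s = sct.ravel().astype(float)
    r = ref_ct.ravel().astype(float)
    me = float(np.mean(s - r))
    mae = float(np.mean(np.abs(s - r)))
    pcc = float(np.corrcoef(s, r)[0, 1])
    rng = (r.max() - r.min()) if data_range is None else data_range
    mse = float(np.mean((s - r) ** 2))
    psnr = float("inf") if mse == 0 else float(10 * np.log10(rng ** 2 / mse))
    return {"ME": me, "MAE": mae, "PCC": pcc, "PSNR": psnr}
```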
Affiliation(s)
- Abbas Bahrami
- Faculty of Physics, University of Isfahan, Isfahan, Iran
- Alireza Karimian
- Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
25
Arabi H, Zaidi H. MRI-guided attenuation correction in torso PET/MRI: Assessment of segmentation-, atlas-, and deep learning-based approaches in the presence of outliers. Magn Reson Med 2021; 87:686-701. [PMID: 34480771] [PMCID: PMC9292636] [DOI: 10.1002/mrm.29003]
Abstract
Purpose We compare the performance of three commonly used MRI-guided attenuation correction approaches in torso PET/MRI, namely segmentation-, atlas-, and deep learning-based algorithms. Methods Twenty-five co-registered torso 18F-FDG PET/CT and PET/MR image pairs were included. PET attenuation maps were generated from in-phase Dixon MRI using a three-tissue-class segmentation-based approach (soft tissue, lung, and background air), a voxel-wise weighting atlas-based approach, and a residual convolutional neural network. The bias in standardized uptake value (SUV) was calculated for each approach, considering CT-based attenuation-corrected PET images as reference. In addition to the overall performance assessment of these approaches, the primary focus of this work was on recognizing the origins of potential outliers, notably body truncation, metal artifacts, abnormal anatomy, and small malignant lesions in the lungs. Results The deep learning approach outperformed both the atlas- and segmentation-based methods, resulting in less than 4% SUV bias across 25 patients, compared to up to 20% SUV bias in bony structures for the segmentation-based method and 9% bias in the lung for the atlas-based method. However, in cases of severe truncation or metal artifacts in the input MRI, the deep learning approach was outperformed by the atlas-based method, exhibiting suboptimal performance in the affected regions. Conversely, for abnormal anatomies, such as a patient presenting with one lung or a small malignant lesion in the lung, the deep learning algorithm exhibited promising performance compared to the other methods. Conclusion The deep learning-based method provides a promising outcome for synthetic CT generation from MRI. However, metal artifacts and body truncation should be specifically addressed.
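The headline numbers above are relative SUV biases against the CT-based reference, which can be sketched as below. A hypothetical illustration of the comparison metric only; the study's exact masking and regional analysis are not reproduced, and the function name is illustrative.

```python
import numpy as np

def suv_bias_percent(pet_test, pet_ref):
    """Mean relative SUV bias (%) of an MRI-guided attenuation-corrected
    PET image against the CT-based attenuation-corrected reference,
    computed over voxels with nonzero reference uptake."""
    mask = pet_ref > 0
    rel = (pet_test[mask] - pet_ref[mask]) / pet_ref[mask]
    return float(np.mean(rel) * 100.0)
```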
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
26
Feng T, Yao S, Xi C, Zhao Y, Wang R, Wu S, Li C, Xu B. Deep learning-based image reconstruction for TOF PET with DIRECT data partitioning format. Phys Med Biol 2021; 66. [PMID: 34256356] [DOI: 10.1088/1361-6560/ac13fe]
Abstract
Conventional positron emission tomography (PET) image reconstruction is achieved by statistical iterative methods. Deep learning provides another opportunity for speeding up the image reconstruction process. However, conventional deep learning-based image reconstruction requires a fully connected network for learning the Radon transform; the use of fully connected networks greatly complicates the network and increases hardware cost. In this study, we proposed a novel deep learning-based image reconstruction method utilizing the DIRECT data partitioning method. A U-Net structure with only convolutional layers was used in our approach. Patch-based model training and testing were used to achieve 3D reconstructions within current hardware limitations. Time-of-flight (TOF) histo-images were first generated from the listmode data to replace conventional sinograms, with different projection angles used as different channels in the input. A total of 15 patient datasets were used in this study. For each patient, the dynamic whole-body scanning protocol was used to expand the training dataset, for a total of 372 separate scans. The leave-one-patient-out validation method was used. Two separate studies were carried out. In the first study, the measured TOF histo-images were directly used for model training and testing, to study the performance of the method in real-world applications. In the second study, TOF histo-images were simulated from already reconstructed images to exclude scatter, randoms, and attenuation-activity mismatch effects; this study was used to evaluate the optimal performance when all other corrections are ideal. Volumes of interest were placed in the liver and lesion regions to study image noise and lesion quantitation. The reconstructed images using the proposed deep learning method showed image quality similar to that of the conventional expectation-maximization approach. A minimal difference was observed when the simulated TOF histo-images were used for model training and testing, suggesting the deep learning model can indeed learn the reconstruction process. Some quantitative differences were observed when the measured TOF histo-images were used. The two studies suggested that the major difference is caused by inaccurate corrections performed by the network itself, indicating that physics-based corrections are still required for better quantitative performance. In conclusion, we have proposed a novel deep learning-based image reconstruction method for TOF PET. With the help of the DIRECT data partitioning method, no fully connected layers were used and 3D image reconstruction can be achieved directly within the limits of current hardware.
Affiliation(s)
- Tao Feng
- UIH America, Inc., Houston, TX, United States of America
- Shulin Yao
- PLA General Hospital, Beijing, People's Republic of China
- Chen Xi
- Shanghai United Imaging Healthcare, Shanghai, People's Republic of China
- Yizhang Zhao
- Shanghai United Imaging Healthcare, Shanghai, People's Republic of China
- Ruimin Wang
- PLA General Hospital, Beijing, People's Republic of China
- Shina Wu
- PLA General Hospital, Beijing, People's Republic of China
- Can Li
- PLA General Hospital, Beijing, People's Republic of China
- Baixuan Xu
- PLA General Hospital, Beijing, People's Republic of China
27
Abstract
PET/CT has become the preferred imaging modality over PET-only scanners in clinical practice. However, along with the significant improvements in diagnostic accuracy and patient throughput, pitfalls of PET/CT have been reported as well. This review provides a general overview of the potential influence of limitations in PET/CT instrumentation, and of artifacts associated with modality integration, on the image appearance and quantitative accuracy of PET. Approaches proposed in the literature to address these limitations or minimize the artifacts are discussed, as are their current challenges for clinical application. Although the CT component can play an important role in assisting clinical diagnosis, we concentrate on imaging scenarios where CT is used to provide auxiliary information for attenuation compensation and scatter correction in PET.
Affiliation(s)
- Yu-Jung Tsai
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT; Department of Biomedical Engineering, Yale University, New Haven, CT
28
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008]