1
Abadi E, Barufaldi B, Lago M, Badal A, Mello-Thoms C, Bottenus N, Wangerin KA, Goldburgh M, Tarbox L, Beaucage-Gauvreau E, Frangi AF, Maidment A, Kinahan PE, Bosmans H, Samei E. Toward widespread use of virtual trials in medical imaging innovation and regulatory science. Med Phys 2024. PMID: 39369717. DOI: 10.1002/mp.17442.
Abstract
The rapid advancement of medical imaging presents a challenge in keeping up to date with the objective evaluations and optimizations necessary for safe and effective use in clinical settings. These evaluations are traditionally done using clinical imaging trials, which, while effective, pose several limitations, including high costs, ethical considerations for repetitive experiments, time constraints, and lack of ground truth. To tackle these issues, virtual trials (also known as in silico trials) have emerged as a promising alternative, using computational models of human subjects and imaging devices, together with observer models and analyses, to carry out experiments. To facilitate the widespread use of virtual trials within the medical imaging research community, a major need is to establish a common consensus framework that all can use. Based on the ongoing efforts of an AAPM Task Group (TG387), this article provides a comprehensive overview of the requirements for establishing virtual imaging trial frameworks, paving the way toward their widespread use within the medical imaging research community. These requirements include credibility, reproducibility, and accessibility. Credibility assessment involves verification, validation, uncertainty quantification, and sensitivity analysis, ensuring the accuracy and realism of computational models. A proper credibility assessment requires a clear context of use and a clear statement of the questions the study is intended to answer objectively. For reproducibility and accessibility, this article highlights the need for detailed documentation, user-friendly software packages, and standard input/output formats. Challenges in data and software sharing, including proprietary data and inconsistent file formats, are discussed. Recommended solutions to enhance accessibility include containerized environments and data-sharing hubs, along with adherence to standards such as CDISC (Clinical Data Interchange Standards Consortium). By addressing challenges associated with credibility, reproducibility, and accessibility, virtual imaging trials can be positioned as a powerful and inclusive resource, advancing medical imaging innovation and regulatory science.
Affiliation(s)
- Ehsan Abadi: Center for Virtual Imaging Trials, Carl E. Ravin Advanced Imaging Laboratories, Departments of Radiology and Electrical & Computer Engineering, Medical Physics Graduate Program, Duke University, Durham, North Carolina, USA
- Bruno Barufaldi: Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Miguel Lago: Division of Imaging, Diagnostics and Software Reliability, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Andreu Badal: Division of Imaging, Diagnostics and Software Reliability, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Nick Bottenus: Department of Mechanical Engineering, University of Colorado Boulder, Boulder, Colorado, USA
- Kristen A Wangerin: Research and Development, Pharmaceutical Diagnostics, GE HealthCare, Marlborough, Massachusetts, USA
- Lawrence Tarbox: Department of Biomedical Informatics, College of Medicine, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Erica Beaucage-Gauvreau: Institute of Physics-based Modeling for in silico Health (iSi Health), KU Leuven, Leuven, Belgium
- Alejandro F Frangi: Christabel Pankhurst Institute, Division of Informatics, Imaging and Data Sciences, Department of Computer Science, University of Manchester, Manchester, UK; Alan Turing Institute, British Library, London, UK
- Andrew Maidment: Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Paul E Kinahan: Departments of Radiology, Bioengineering, and Physics, University of Washington, Seattle, Washington, USA
- Hilde Bosmans: Departments of Radiology and Medical Radiation Physics, KU Leuven, Leuven, Belgium
- Ehsan Samei: Center for Virtual Imaging Trials, Carl E. Ravin Advanced Imaging Laboratories, Departments of Radiology and Electrical & Computer Engineering, Medical Physics Graduate Program, Duke University, Durham, North Carolina, USA
2
Li H, Tsai YH, Liu H, Ruan D. Metric learning guided sinogram denoising for cone beam CT enhancement. Med Phys 2024. PMID: 39353140. DOI: 10.1002/mp.17435.
Abstract
BACKGROUND: Cone beam computed tomography (CBCT) is a widely available modality, but its clinical utility has been limited by low detail conspicuity and quantitative accuracy. Convenient post-reconstruction denoising is subject to back-projected patterned residuals, while joint denoising and reconstruction is typically computationally expensive and complex. PURPOSE: In this study, we develop and evaluate a novel metric-learning guided wavelet transform reconstruction (MEGATRON) approach to enhance image-domain quality with projection-domain processing. METHODS: Projection-domain processing has the benefit of being simple, efficient, and compatible with various reconstruction toolkits and vendor platforms. However, such methods typically show inferior performance in the final reconstructed image, because the denoising goals in the projection and image domains do not necessarily align. Motivated by these observations, this work aims to translate the demand for quality enhancement from the quantitative image domain to the more easily operable projection domain. Specifically, the proposed paradigm consists of a metric learning module and a denoising network module. Via metric learning, enhancement objectives on wavelet-encoded sinogram-domain data are defined to reflect post-reconstruction image discrepancy. The denoising network maps a measured cone-beam projection to its enhanced version, driven by the learnt objective. In doing so, the denoiser operates in a convenient sinogram-to-sinogram fashion while treating improvement of the reconstructed image as the final goal. Implementation-wise, metric learning was formalized as optimizing the weighted fitting of wavelet subbands, and a res-Unet (a Unet structure with residual blocks) was used for denoising. To obtain a quantitative reference, cone-beam projections were simulated using the X-ray-based Cancer Imaging Simulation Toolkit (XCIST). Both learning modules used a dataset of 123 human thoraxes from the Open-Source Imaging Consortium (OSIC) Pulmonary Fibrosis Progression challenge. Reconstructed CBCT thoracic images were compared against ground truth FB, and performance was assessed with root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity index (SSIM). RESULTS: MEGATRON achieved RMSE (in HU), PSNR, and SSIM of 30.97 ± 4.25, 37.45 ± 1.78, and 93.23 ± 1.62, respectively. These values are on par with reported results from sophisticated physics-driven CBCT enhancement, demonstrating the promise and utility of the proposed MEGATRON method. CONCLUSION: We have demonstrated that incorporating the proposed metric learning into sinogram denoising introduces awareness of the reconstruction goal and improves final quantitative performance. The proposed approach is compatible with a wide range of denoiser network structures and reconstruction modules, to suit customized needs or further improve performance.
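As an illustration of the kind of objective the abstract describes, the sketch below computes a weighted discrepancy over wavelet subbands of two sinograms. It is not the authors' implementation; the subband weights, wavelet choice, and decomposition level are hypothetical placeholders standing in for the learned metric.

```python
# Sketch of a weighted wavelet-subband discrepancy between two sinograms.
# Not the authors' code: weights, wavelet, and level are placeholders.
import numpy as np
import pywt

def wavelet_subband_metric(sino_pred, sino_ref, weights, wavelet="db4", level=3):
    """Weighted sum of mean-squared subband differences (len(weights) == level + 1)."""
    coeffs_pred = pywt.wavedec2(sino_pred, wavelet, level=level)
    coeffs_ref = pywt.wavedec2(sino_ref, wavelet, level=level)
    # First entry is the approximation band; the rest are (H, V, D) detail tuples.
    total = weights[0] * np.mean((coeffs_pred[0] - coeffs_ref[0]) ** 2)
    for lvl, (bands_pred, bands_ref) in enumerate(zip(coeffs_pred[1:], coeffs_ref[1:]), start=1):
        for band_pred, band_ref in zip(bands_pred, bands_ref):
            total += weights[lvl] * np.mean((band_pred - band_ref) ** 2)
    return total

# Example with equal subband weights on random sinograms.
rng = np.random.default_rng(0)
sino = rng.random((720, 512))
noisy = sino + 0.01 * rng.standard_normal((720, 512))
print(wavelet_subband_metric(noisy, sino, weights=np.ones(4)))
```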
Affiliation(s)
- Haoran Li: Department of Bioengineering, University of California Los Angeles, Los Angeles, California, USA
- Yun-Han Tsai: Department of Bioengineering, University of California Los Angeles, Los Angeles, California, USA
- Hengjie Liu: Graduate Program of Physics and Biology in Medicine, University of California, Los Angeles, Los Angeles, California, USA
- Dan Ruan: Department of Bioengineering, University of California Los Angeles, Los Angeles, California, USA; Graduate Program of Physics and Biology in Medicine, University of California, Los Angeles, Los Angeles, California, USA; Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California, USA
3
Li M, Wu M, Pack J, Wu P, Yan P, De Man B, Wang A, Nieman K, Wang G. Coronary atherosclerotic plaque characterization with silicon-based photon-counting computed tomography (CT): A simulation-based feasibility study. Med Phys 2024. PMID: 39321385. DOI: 10.1002/mp.17422.
Abstract
BACKGROUND: Recent photon-counting computed tomography (PCCT) development brings great opportunities for plaque characterization, with much-improved spatial resolution and spectral imaging capability. While existing coronary plaque PCCT imaging results are based on CZT- or CdTe-based detectors, deep-silicon photon-counting detectors offer unique performance characteristics and promise distinct imaging capabilities. PURPOSE: This study aims to numerically investigate the feasibility of characterizing plaques with a deep-silicon PCCT scanner and to demonstrate its potential performance advantages over traditional CT scanners using energy-integrating detectors (EID). METHODS: We conducted a systematic simulation study of a deep-silicon PCCT scanner using a newly developed digital plaque phantom with clinically relevant geometrical and chemical properties. Through qualitative and quantitative evaluations, this study investigates the effects of spatial resolution, noise, and motion artifacts on plaque imaging. RESULTS: Noise-free simulations indicated that PCCT imaging could delineate the boundary of necrotic cores with a much finer resolution than EID-CT imaging, achieving a structural similarity index metric (SSIM) score of 0.970 and reducing the root mean squared error (RMSE) by two-thirds. Errors in measured necrotic core area were reduced from 91.5% to 24%, and errors in fibrous cap thickness were reduced from 349.8% to 33.3%. In the presence of noise, the optimal reconstruction was achieved using 0.25 mm voxels and a soft reconstruction kernel, yielding the highest contrast-to-noise ratio (CNR) of 3.48 for necrotic core detection and the best image quality metrics among all choices. However, the ultrahigh resolution of PCCT increased sensitivity to motion artifacts, which could be mitigated by keeping the residual motion amplitude below 0.4 mm. CONCLUSIONS: The findings suggest that a deep-silicon PCCT scanner can offer sufficient spatial resolution and tissue contrast for effective plaque characterization, potentially improving diagnostic accuracy in cardiovascular imaging, provided image noise and motion blur can be mitigated using advanced algorithms. This simulation study involves several simplifications, which may result in some idealized outcomes that do not directly translate to clinical practice. Further validation studies with physical scans are necessary and will be considered in future work.
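The abstract reports a contrast-to-noise ratio (CNR) for necrotic-core detection; a minimal sketch of how such a figure of merit is typically computed from region-of-interest (ROI) statistics follows. The ROI masks are hypothetical inputs, and this is a generic definition rather than the paper's exact evaluation code.

```python
# Generic contrast-to-noise ratio (CNR) from region-of-interest statistics.
# Not the paper's evaluation code; image and boolean ROI masks are hypothetical.
import numpy as np

def cnr(image, roi_target, roi_background):
    """CNR = |mean(target) - mean(background)| / std(background)."""
    target = image[roi_target]
    background = image[roi_background]
    return abs(target.mean() - background.mean()) / background.std()
```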
Affiliation(s)
- Mengzhou Li: Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Research, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, New York, USA
- Mingye Wu: GE HealthCare Technology & Innovation Center, Niskayuna, New York, USA
- Jed Pack: GE HealthCare Technology & Innovation Center, Niskayuna, New York, USA
- Pengwei Wu: GE HealthCare Technology & Innovation Center, Niskayuna, New York, USA
- Pingkun Yan: Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Research, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, New York, USA
- Bruno De Man: GE HealthCare Technology & Innovation Center, Niskayuna, New York, USA
- Adam Wang: Department of Radiology, Stanford University, Stanford, California, USA
- Koen Nieman: Department of Radiology, Stanford University, Stanford, California, USA; Department of Medicine (Cardiovascular Medicine), Stanford University, Stanford, California, USA
- Ge Wang: Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Research, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, New York, USA
4
Song Y, Yao T, Peng S, Zhu M, Meng M, Ma J, Zeng D, Huang J, Bian Z, Wang Y. b-MAR: bidirectional artifact representations learning framework for metal artifact reduction in dental CBCT. Phys Med Biol 2024;69:145010. PMID: 38588680. DOI: 10.1088/1361-6560/ad3c0a.
Abstract
Objective: Metal artifacts in computed tomography (CT) images significantly hinder diagnosis and treatment. Specifically, dental cone-beam computed tomography (dental CBCT) images are seriously contaminated by metal artifacts due to the widespread use of low tube voltages and the presence of various high-attenuation materials in dental structures. Existing supervised metal artifact reduction (MAR) methods mainly learn the mapping from artifact-affected images to clean images, while ignoring the modeling of the metal artifact generation process. Therefore, we propose a bidirectional artifact representation learning framework (b-MAR) to adaptively encode metal artifacts caused by various dental implants and to model the generation and elimination of metal artifacts, thereby improving MAR performance. Approach: Specifically, we introduce an efficient artifact encoder to extract multi-scale representations of metal artifacts from artifact-affected images. These extracted metal artifact representations are then bidirectionally embedded into both the metal artifact generator and the metal artifact eliminator, which simultaneously improves the performance of artifact removal and artifact generation. The artifact eliminator learns artifact removal in a supervised manner, while the artifact generator learns artifact generation in an adversarial manner. To further improve the performance of the bidirectional task networks, we propose an artifact consistency loss to align the images generated by the eliminator and the generator with and without embedded artifact representations. Main results: To validate the effectiveness of our algorithm, experiments were conducted on simulated and clinical datasets containing various dental metal morphologies. Quantitative metrics calculated on the simulation tests show that b-MAR improves PSNR by >1.4131 dB, reduces RMSE by >0.3473 HU, and improves the structural similarity index measurement by >0.0025 over current state-of-the-art MAR methods. All results indicate that the proposed b-MAR method can remove artifacts caused by various metal morphologies and effectively restore the structural integrity of dental tissues. Significance: The proposed b-MAR method strengthens the joint learning of the artifact removal and artifact generation processes by bidirectionally embedding artifact representations, thereby improving the model's artifact removal performance. Compared with other methods, b-MAR can robustly and effectively correct metal artifacts in dental CBCT images caused by different dental metals.
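The abstract does not give the exact form of the artifact consistency loss; the sketch below shows one plausible cycle-style coupling between an eliminator and a generator that share artifact representations, purely as an illustration. The module interfaces, the use of an L1 distance, and the cycle pairing are assumptions, not the paper's formulation.

```python
# One plausible (cycle-style) sketch of an artifact consistency term coupling
# an artifact eliminator and an artifact generator that share artifact
# representations z. Assumption for illustration, not the paper's exact loss.
import torch.nn.functional as F

def artifact_consistency_loss(eliminator, generator, x_artifact, x_clean, z):
    # Remove artifacts, then re-generate them with the same representation:
    # the result should stay consistent with the original corrupted image.
    cycle_artifact = generator(eliminator(x_artifact, z), z)
    # Generate artifacts on a clean image, then remove them: the result
    # should stay consistent with the original clean image.
    cycle_clean = eliminator(generator(x_clean, z), z)
    return F.l1_loss(cycle_artifact, x_artifact) + F.l1_loss(cycle_clean, x_clean)
```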
Affiliation(s)
- Yuyan Song: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Tianyi Yao: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Shengwang Peng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Manman Zhu: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Mingqiang Meng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Jianhua Ma: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Dong Zeng: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Jing Huang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China
- Zhaoying Bian: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
- Yongbo Wang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, People's Republic of China; Pazhou Lab (Huangpu), Guangzhou 510700, People's Republic of China
5
Wiedeman C, Lorraine P, Wang G, Do R, Simpson A, Peoples J, De Man B. Simulated deep CT characterization of liver metastases with high-resolution filtered back projection reconstruction. Vis Comput Ind Biomed Art 2024;7:13. PMID: 38861067. PMCID: PMC11166620. DOI: 10.1186/s42492-024-00161-y.
Abstract
Early diagnosis and accurate prognosis of colorectal cancer are critical for determining optimal treatment plans and maximizing patient outcomes, especially as the disease progresses into liver metastases. Computed tomography (CT) is a frontline tool for this task; however, the preservation of predictive radiomic features is highly dependent on the scanning protocol and reconstruction algorithm. We hypothesized that image reconstruction with a high-frequency kernel could result in better characterization of liver metastasis features via deep neural networks. This kernel produces images that appear noisier but preserve more sinogram information. A simulation pipeline was developed to study the effects of imaging parameters on the ability to characterize the features of liver metastases. The pipeline uses a fractal approach to generate a diverse population of shapes representing virtual metastases and superimposes them on a realistic CT liver region to perform a virtual CT scan using CatSim. Datasets of 10,000 liver metastases were generated, scanned, and reconstructed using either standard or high-frequency kernels. These data were used to train and validate deep neural networks to recover crafted metastasis characteristics, such as internal heterogeneity, edge sharpness, and edge fractal dimension. In the absence of noise, models scored, on average, 12.2% (α = 0.012) and 7.5% (α = 0.049) lower squared error for characterizing edge sharpness and fractal dimension, respectively, when using high-frequency reconstructions compared with standard reconstructions. However, the differences in performance were not statistically significant when a typical level of CT noise was simulated in the clinical scan. Our results suggest that high-frequency reconstruction kernels can better preserve information for downstream artificial intelligence-based radiomic characterization, provided that noise is limited. Future work should investigate these information-preserving kernels in datasets with clinical labels.
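One of the crafted characteristics is the edge fractal dimension; the sketch below shows a standard 2-D box-counting estimate of fractal dimension for a binary lesion-edge mask. It is a generic illustration, not the study's pipeline, and the box sizes and input mask are hypothetical.

```python
# Generic 2-D box-counting estimate of the fractal dimension of a binary
# lesion-edge mask. Illustrative only; box sizes are hypothetical choices.
import numpy as np

def box_count_dimension(edge_mask, box_sizes=(2, 4, 8, 16, 32)):
    counts = []
    h, w = edge_mask.shape
    for s in box_sizes:
        trimmed = edge_mask[: h - h % s, : w - w % s]
        # Count s-by-s boxes that contain at least one edge pixel.
        blocks = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(max(blocks.sum(), 1))  # guard against log(0)
    # Slope of log(count) versus log(1/box size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope
```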
Affiliation(s)
- Christopher Wiedeman: Department of Electrical and Computer Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Ge Wang: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Richard Do: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Amber Simpson: Biomedical Computing and Informatics, Queen's University, Kingston, ON, K7L 3N6, Canada
- Jacob Peoples: Biomedical Computing and Informatics, Queen's University, Kingston, ON, K7L 3N6, Canada
- Bruno De Man: GE Research - Healthcare, Niskayuna, NY, 12309, USA
6
Takeya A, Watanabe K, Haga A. Fine structural human phantom in dentistry and instance tooth segmentation. Sci Rep 2024;14:12630. PMID: 38824210. PMCID: PMC11144222. DOI: 10.1038/s41598-024-63319-x.
Abstract
In this study, we present the development of a fine structural human phantom designed specifically for applications in dentistry. The research focused on assessing the viability of applying medical computer vision techniques to the task of segmenting individual teeth within a phantom. Using a virtual cone-beam computed tomography (CBCT) system, we generated over 170,000 training datasets. These datasets were produced by varying the elemental densities and tooth sizes within the human phantom, as well as the X-ray spectrum, noise intensity, and projection cutoff intensity in the virtual CBCT system. A deep-learning (DL) based tooth segmentation model was trained using the generated datasets. The results demonstrate agreement with manual contouring when the model is applied to clinical CBCT data. Specifically, the Dice similarity coefficient exceeded 0.87, indicating robust performance of the developed segmentation model even when virtual imaging was used for training. These results show the practical utility of virtual imaging techniques in dentistry and highlight the potential of medical computer vision for enhancing precision and efficiency in dental imaging processes.
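For reference, the Dice similarity coefficient reported above is typically computed as follows; this is a generic definition, not the authors' evaluation code, and the mask inputs are hypothetical.

```python
# Generic Dice similarity coefficient between a predicted tooth mask and a
# manual contour; mask inputs are hypothetical.
import numpy as np

def dice(pred_mask, gt_mask, eps=1e-8):
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)
```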
Affiliation(s)
- Atsushi Takeya: Graduate School of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto-cho, Tokushima, 770-8503, Japan
- Keiichiro Watanabe: Graduate School of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto-cho, Tokushima, 770-8503, Japan
- Akihiro Haga: Graduate School of Biomedical Sciences, Tokushima University, 3-18-15 Kuramoto-cho, Tokushima, 770-8503, Japan
7
Zhang J, Wu M, FitzGerald P, Araujo S, De Man B. Development and tuning of models for accurate simulation of CT spatial resolution using CatSim. Phys Med Biol 2024;69. PMID: 38252976. PMCID: PMC10922964. DOI: 10.1088/1361-6560/ad2122.
Abstract
Objective: We sought to systematically evaluate CatSim's ability to accurately simulate the spatial resolution produced by a typical 64-detector-row clinical CT scanner in the projection and image domains, over the range of clinically used x-ray techniques. Approach: Using a 64-detector-row clinical scanner, we scanned two phantoms designed to evaluate spatial resolution in the projection and image domains. These empirical scans were performed over the standard clinically used range of x-ray techniques (kV and mA). We extracted projection data from the scanner, and we reconstructed images. For the CatSim simulations, we developed digital phantoms to represent the phantoms used in the empirical scans. We developed a new, realistic model for the x-ray source focal spot, and we empirically tuned a published model for the x-ray detector temporal response. We applied these phantoms and models to simulate scans equivalent to the empirical scans, and we reconstructed the simulated projections using the same methods used for the empirical scans. For the empirical and simulated scans, we qualitatively and quantitatively compared the projection-domain and image-domain point-spread functions (PSFs) as well as the image-domain modulation transfer functions (MTFs). We reported four quantitative metrics and the percent error between the empirical and simulated results. Main results: Qualitatively, the PSFs matched well in both the projection and image domains. Quantitatively, all four metrics generally agreed well, with most of the average errors substantially less than 5% for all x-ray techniques. Although the errors tended to increase with decreasing kV, we found that the CatSim simulations agreed with the empirical scans within the limits required for the anticipated applications of CatSim. Significance: The new focal spot model and the new detector temporal response model are significant contributions to CatSim because they enabled the desired level of agreement between empirical and simulated results. With these new models and this validation, CatSim users can be confident that the spatial resolution represented by simulations faithfully represents results that would be obtained by a real scanner, within reasonable, known limits. Furthermore, users of CatSim can vary parameters, including but not limited to system geometry, focal spot size/shape, and detector parameters, beyond the values available in physical scanners, and be confident in the results. Therefore, CatSim can be used to explore new hardware designs as well as new scanning and reconstruction methods, thus accelerating improvement of CT scan capabilities.
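As a reminder of how the image-domain resolution metrics relate, the sketch below computes a 1-D MTF from a sampled line-spread function by Fourier transform; it is a generic textbook calculation, not CatSim code, and the LSF samples and pixel pitch are hypothetical inputs.

```python
# Textbook calculation of a 1-D MTF from a sampled line-spread function (LSF).
# Not CatSim code; the LSF samples and pixel pitch are hypothetical inputs.
import numpy as np

def mtf_from_lsf(lsf, pixel_mm):
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()                          # normalize area to 1
    mtf = np.abs(np.fft.rfft(lsf))                 # magnitude of the Fourier transform
    mtf = mtf / mtf[0]                             # enforce MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)  # spatial frequency in cycles/mm
    return freqs, mtf
```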
Affiliation(s)
- Jiayong Zhang: GE HealthCare Technology & Innovation Center, Niskayuna, NY
- Mingye Wu: GE HealthCare Technology & Innovation Center, Niskayuna, NY
- Stephen Araujo: GE HealthCare Technology & Innovation Center, Niskayuna, NY
- Bruno De Man: GE HealthCare Technology & Innovation Center, Niskayuna, NY
8
Tanveer MS, Wiedeman C, Li M, Shi Y, De Man B, Maltz JS, Wang G. Deep-silicon photon-counting x-ray projection denoising through reinforcement learning. J Xray Sci Technol 2024;32:173-205. PMID: 38217633. DOI: 10.3233/xst-230278.
Abstract
BACKGROUND: In recent years, deep reinforcement learning (RL) has been applied to various medical tasks and has produced encouraging results. OBJECTIVE: In this paper, we demonstrate the feasibility of deep RL for denoising simulated deep-silicon photon-counting CT (PCCT) data in both full and interior scan modes. PCCT offers higher spatial and spectral resolution than conventional CT, requiring advanced denoising methods to suppress the accompanying increase in noise. METHODS: In this work, we apply a dueling double deep Q network (DDDQN) to denoise PCCT data for maximum contrast-to-noise ratio (CNR), together with a multi-agent approach to handle data non-stationarity. RESULTS: Using our method, we obtained significant image quality improvement for single-channel scans and consistent improvement for all three channels of multichannel scans. For the single-channel interior scans, the PSNR (dB) and SSIM increased from 33.4078 and 0.9165 to 37.4167 and 0.9790, respectively. For the multichannel interior scans, the channel-wise PSNR (dB) increased from 31.2348, 30.7114, and 30.4667 to 31.6182, 30.9783, and 30.8427, respectively. Similarly, the SSIM improved from 0.9415, 0.9445, and 0.9336 to 0.9504, 0.9493, and 0.0326, respectively. CONCLUSIONS: Our results show that the RL approach improves image quality effectively, efficiently, and consistently across multiple spectral channels and has great potential in clinical applications.
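To illustrate the dueling architecture named in the methods, the sketch below shows a minimal dueling Q-value head of the kind used in a DDDQN; it is a generic example, not the paper's network, and the layer widths and input feature size are hypothetical.

```python
# Minimal dueling Q-value head of the kind used in a dueling double deep
# Q network (DDDQN). Generic example; layer widths are hypothetical.
import torch.nn as nn

class DuelingHead(nn.Module):
    def __init__(self, in_features, n_actions):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(in_features, 128), nn.ReLU(), nn.Linear(128, 1))
        self.advantage = nn.Sequential(nn.Linear(in_features, 128), nn.ReLU(), nn.Linear(128, n_actions))

    def forward(self, features):
        v = self.value(features)                    # state value V(s)
        a = self.advantage(features)                # action advantages A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)  # Q(s, a) with the usual mean baseline
```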
Affiliation(s)
- Md Sayed Tanveer: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Christopher Wiedeman: Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Mengzhou Li: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Yongyi Shi: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Bruno De Man: GE HealthCare, One Research Circle, Niskayuna, NY, USA
- Ge Wang: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
9
Lyu T, Wu Z, Ma G, Jiang C, Zhong X, Xi Y, Chen Y, Zhu W. PDS-MAR: a fine-grained projection-domain segmentation-based metal artifact reduction method for intraoperative CBCT images with guidewires. Phys Med Biol 2023;68:215007. PMID: 37802062. DOI: 10.1088/1361-6560/ad00fc.
Abstract
Objective: Since the invention of modern computed tomography (CT) systems, metal artifacts have been a persistent problem. Due to increased scattering, amplified noise, and limited-angle projection data collection, it is more difficult to suppress metal artifacts in cone-beam CT, limiting its use in human- and robot-assisted spine surgeries where metallic guidewires and screws are commonly used. Approach: To solve this problem, we present a fine-grained projection-domain segmentation-based metal artifact reduction (MAR) method termed PDS-MAR, in which metal traces are augmented and segmented in the projection domain before being inpainted using triangular interpolation. In addition, a metal reconstruction phase is proposed to restore metal areas in the image domain. Main results: The proposed method is tested on both digital phantom data and real scanned cone-beam computed tomography (CBCT) data. It achieves much-improved quantitative results in both metal segmentation and artifact reduction in our phantom study. The results on real scanned data also show the superiority of this method. Significance: The concept of projection-domain metal segmentation would advance MAR techniques in CBCT and has the potential to push forward the use of intraoperative CBCT in human- and robot-assisted minimally invasive spine surgeries.
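For context on the inpainting step described above, the sketch below shows the generic projection-domain operation shared by many MAR methods: detector readings inside the segmented metal trace are replaced by values interpolated from metal-free readings. PDS-MAR uses triangular interpolation over the 2-D trace; the simple row-wise linear interpolation here is only a stand-in, not the paper's algorithm.

```python
# Generic projection-domain metal-trace inpainting: readings inside the
# segmented metal trace are replaced by values interpolated from metal-free
# readings. Row-wise linear interpolation is a stand-in for the paper's
# triangular interpolation over the full 2-D trace.
import numpy as np

def inpaint_metal_trace(sinogram, metal_trace):
    """sinogram, metal_trace: arrays of shape (n_views, n_detector_columns)."""
    corrected = sinogram.copy()
    cols = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        trace = metal_trace[v].astype(bool)
        if trace.any() and not trace.all():
            corrected[v, trace] = np.interp(cols[trace], cols[~trace], sinogram[v, ~trace])
    return corrected
```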
Affiliation(s)
- Tianling Lyu: Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou, People's Republic of China
- Zhan Wu: Laboratory of Imaging Science and Technology, Southeast University, Nanjing, People's Republic of China
- Gege Ma: Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou, People's Republic of China
- Chen Jiang: Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou, People's Republic of China
- Xinyun Zhong: Laboratory of Imaging Science and Technology, Southeast University, Nanjing, People's Republic of China
- Yan Xi: First-Imaging Tech., Shanghai, People's Republic of China
- Yang Chen: Laboratory of Imaging Science and Technology, Southeast University, Nanjing, People's Republic of China; Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Southeast University, Nanjing, People's Republic of China
- Wentao Zhu: Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou, People's Republic of China
10
Shimomura T, Fujiwara D, Inoue Y, Takeya A, Ohta T, Nozawa Y, Imae T, Nawa K, Nakagawa K, Haga A. Virtual cone-beam computed tomography simulator with human phantom library and its application to the elemental material decomposition. Phys Med 2023;113:102648. PMID: 37672845. DOI: 10.1016/j.ejmp.2023.102648.
Abstract
PURPOSE: The purpose of this study is to develop a virtual CBCT simulator with a head and neck (HN) human phantom library and to demonstrate the feasibility of elemental material decomposition (EMD) for quantitative CBCT imaging using this virtual simulator. METHODS: A library of 36 HN human phantoms was developed by extending the ICRP 110 adult phantoms based on human age, height, and weight statistics. To create the CBCT database for the library, a virtual CBCT simulator was used that models the direct and scattered X-rays on a flat-panel detector using ray-tracing and deep-learning (DL) models. Gaussian-distributed noise, whose level was evaluated using a real CBCT system, was also included at the flat-panel detector. The usefulness of the virtual CBCT system was demonstrated by applying the developed DL-based EMD model to cases involving a virtual phantom and a real patient. RESULTS: The virtual simulator could generate a variety of virtual CBCT images based on the human phantom library, and EMD prediction could be performed successfully using the CBCT database prepared with the proposed virtual system, even for a real patient. CBCT image degradation owing to scattered X-rays and statistical noise affected the prediction accuracy, although these effects were minimal. Furthermore, the elemental distribution could also be predicted from the real CBCT image. CONCLUSIONS: This study demonstrates the potential of using computer vision for medical data preparation and analysis, which could have important implications for improving patient outcomes, especially in adaptive radiation therapy.
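As a small illustration of the detector noise model mentioned above, the sketch below adds Gaussian-distributed noise to an ideal flat-panel projection; the noise level would come from measurements on a real CBCT system, and the function and parameter names here are hypothetical.

```python
# Adding Gaussian-distributed noise to an ideal flat-panel projection, as in
# the simulator's detector model. The noise level sigma would be calibrated
# on a real CBCT system; names and values here are hypothetical.
import numpy as np

def add_detector_noise(projection, sigma, seed=None):
    rng = np.random.default_rng(seed)
    return projection + rng.normal(0.0, sigma, size=projection.shape)
```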
Affiliation(s)
- Taisei Shimomura: Graduate School of Biomedical Sciences, Tokushima University, Tokushima 770-8503, Japan; Department of Radiology, The University of Tokyo Hospital, Bunkyo, Tokyo 113-8655, Japan
- Daiyu Fujiwara: Graduate School of Biomedical Sciences, Tokushima University, Tokushima 770-8503, Japan
- Yuki Inoue: Graduate School of Biomedical Sciences, Tokushima University, Tokushima 770-8503, Japan
- Atsushi Takeya: Graduate School of Biomedical Sciences, Tokushima University, Tokushima 770-8503, Japan
- Takeshi Ohta: Department of Radiology, The University of Tokyo Hospital, Bunkyo, Tokyo 113-8655, Japan
- Yuki Nozawa: Department of Radiology, The University of Tokyo Hospital, Bunkyo, Tokyo 113-8655, Japan
- Toshikazu Imae: Department of Radiology, The University of Tokyo Hospital, Bunkyo, Tokyo 113-8655, Japan
- Kanabu Nawa: Department of Radiology, The University of Tokyo Hospital, Bunkyo, Tokyo 113-8655, Japan
- Keiichi Nakagawa: Department of Radiology, The University of Tokyo Hospital, Bunkyo, Tokyo 113-8655, Japan
- Akihiro Haga: Graduate School of Biomedical Sciences, Tokushima University, Tokushima 770-8503, Japan
11
Lyu T, Zhao W, Gao W, Zhu J, Xi Y, Chen Y, Zhu W. A Dual-Energy Metal Artifact Reduction Method for DECT Image Reconstruction. Annu Int Conf IEEE Eng Med Biol Soc 2023;2023:1-6. PMID: 38083063. DOI: 10.1109/embc40787.2023.10340221.
Abstract
Metal implants are one of the culprits of image quality degradation in CT imaging, introducing so-called metal artifacts. With the help of the virtual monochromatic imaging technique, dual-energy CT has been proven effective for metal artifact reduction. However, virtual monochromatic images with suppressed metal artifacts show reduced CNR compared with polychromatic images. To remove metal artifacts from polychromatic images, we propose a dual-energy NMAR (deNMAR) algorithm that adds material decomposition to the widely used NMAR framework. The dual-energy sinograms are first decomposed into water and bone sinograms, and metal regions are replaced with water in the reconstructed material maps. Prior sinograms are constructed by polyenergetically forward-projecting the material maps with the corresponding spectra, and they are used to guide metal-trace interpolation in the same way as in the NMAR algorithm. We performed experiments on authentic human body phantoms, and the results show that the proposed deNMAR algorithm achieves better performance in tissue restoration than other competing methods. Tissue boundaries become clear around metal implants, and the CNR on 80 kV images rises to 2.58, from approximately 1.70 with other dual-energy-based algorithms.
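To make the "guide metal-trace interpolation in the same way as in the NMAR algorithm" step concrete, the sketch below shows the standard NMAR-style normalize-interpolate-denormalize operation on a sinogram, using a prior sinogram such as the forward-projected water/bone maps described above. It is a generic illustration of NMAR normalization, not the deNMAR implementation, and all function and variable names are hypothetical.

```python
# Standard NMAR-style normalization around metal-trace interpolation, using a
# prior sinogram (e.g., forward-projected water/bone maps). Generic
# illustration, not the deNMAR implementation; all names are hypothetical.
import numpy as np

def nmar_inpaint(sinogram, prior_sino, metal_trace, eps=1e-6):
    norm = sinogram / (prior_sino + eps)          # flatten anatomy with the prior
    cols = np.arange(sinogram.shape[1])
    inpainted = norm.copy()
    for v in range(sinogram.shape[0]):
        trace = metal_trace[v].astype(bool)
        if trace.any() and not trace.all():
            inpainted[v, trace] = np.interp(cols[trace], cols[~trace], norm[v, ~trace])
    return inpainted * (prior_sino + eps)         # denormalize
```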