1
McKenzie EM, Tong N, Ruan D, Cao M, Chin RK, Sheng K. Using neural networks to extend cropped medical images for deformable registration among images with differing scan extents. Med Phys 2021; 48:4459-4471. [PMID: 34101198] [DOI: 10.1002/mp.15039]
Abstract
PURPOSE Missing or discrepant imaging volume is a common challenge in deformable image registration (DIR). To minimize the adverse impact, we train a neural network to synthesize cropped portions of head and neck CTs and then test its use in DIR. METHODS Using a training dataset of 409 head and neck CTs, we trained a generative adversarial network to take in a cropped 3D image and output an image with synthesized anatomy in the cropped region. The network used a 3D U-Net generator along with Visual Geometry Group (VGG) deep feature losses. To test our technique, for each of the 53 test volumes, we used Elastix to deformably register combinations of a randomly cropped, full, and synthetically full volume to a single cropped, full, and synthetically full target volume. We additionally tested our method's robustness to crop extent by progressively increasing the amount of cropping, synthesizing the missing anatomy using our network, and then performing the same registration combinations. Registration performance was measured using the 95% Hausdorff distance across 16 contours. RESULTS We successfully trained a network to synthesize missing anatomy in superiorly and inferiorly cropped images. The network can estimate large regions in an incomplete image, far from the cropping boundary. Registration using our estimated full images was not significantly different from registration using the original full images. The average contour matching error for full-image registration was 9.9 mm, whereas our method achieved 11.6, 12.1, and 13.6 mm for synthesized-to-full, full-to-synthesized, and synthesized-to-synthesized registrations, respectively. In comparison, registration using the cropped images had errors of 31.7 mm and higher. Plotting the registered contour error as a function of initial preregistered error shows that our method is robust to registration difficulty.
Synthesized-to-full registration was statistically independent of cropping extent up to 18.7 cm of superior cropping. Synthesized-to-synthesized registration was nearly independent, with a -0.04 mm change in average contour error for every additional millimeter of cropping. CONCLUSIONS Differing or inadequate scan extent is a major cause of DIR inaccuracies. We address this challenge by training a neural network to complete cropped 3D images. We show that with image completion, this source of DIR inaccuracy is eliminated, and the method is robust to varying crop extent.
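The study's contour matching error is the 95% Hausdorff distance. As a rough illustration of the metric (not the authors' implementation — surface sampling and percentile conventions vary), a symmetric 95th-percentile Hausdorff distance over two point clouds can be sketched as:

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two contours,
    each given as an (N, d) array of surface points."""
    d = cdist(points_a, points_b)      # all pairwise Euclidean distances
    a_to_b = d.min(axis=1)             # each point of A to its nearest in B
    b_to_a = d.min(axis=0)             # each point of B to its nearest in A
    return float(max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95)))

# toy check: a unit circle against a copy shifted 2 units in x
theta = np.linspace(0.0, 2.0 * np.pi, 100)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
shifted = circle + np.array([2.0, 0.0])
```

Unlike the maximum (100th-percentile) Hausdorff distance, the 95% variant discards the worst 5% of point mismatches, making it less sensitive to isolated contouring outliers.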
Affiliation(s)
- Elizabeth M McKenzie
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Nuo Tong
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Dan Ruan
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Minsong Cao
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Robert K Chin
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Ke Sheng
- Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
2
Abstract
This paper presents a review of deep learning (DL)-based medical image registration methods. We summarized the latest developments and applications of DL-based registration in the medical field. These methods were classified into seven categories according to their approaches, functions, and popularity. A detailed review of each category was presented, highlighting important contributions and identifying specific challenges, followed by a short assessment of its achievements and future potential. We provided a comprehensive comparison among DL-based methods for lung and brain registration using benchmark datasets. Lastly, we analyzed the statistics of all the cited works from various aspects, revealing the popularity and future trends of DL-based medical image registration.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
3
Fu Y, Lei Y, Wang T, Higgins K, Bradley JD, Curran WJ, Liu T, Yang X. LungRegNet: An unsupervised deformable image registration method for 4D-CT lung. Med Phys 2020; 47:1763-1774. [PMID: 32017141] [PMCID: PMC7165051] [DOI: 10.1002/mp.14065]
Abstract
PURPOSE To develop an accurate and fast deformable image registration (DIR) method for four-dimensional computed tomography (4D-CT) lung images. Deep learning-based methods have the potential to quickly predict the deformation vector field (DVF) in a few forward predictions. We have developed an unsupervised deep learning method for 4D-CT lung DIR with excellent performance in terms of registration accuracy, robustness, and computational speed. METHODS A fast and accurate 4D-CT lung DIR method, LungRegNet, was proposed using deep learning. LungRegNet consists of two subnetworks, CoarseNet and FineNet. As the names suggest, CoarseNet predicts large lung motion on a coarse-scale image while FineNet predicts local lung motion on a fine-scale image. Both CoarseNet and FineNet include a generator and a discriminator. The generator was trained to directly predict the DVF used to deform the moving image. The discriminator was trained to distinguish the deformed images from the original images. CoarseNet was first trained to deform the moving images; the deformed images were then used to train FineNet. To increase the registration accuracy of LungRegNet, we generated vessel-enhanced images from pulmonary vasculature probability maps prior to the network prediction. RESULTS We performed fivefold cross-validation on ten 4D-CT datasets from our department. To compare with other methods, we also tested our method on 10 separate DIRLAB datasets, which provide 300 manual landmark pairs per case for target registration error (TRE) calculation. Our results suggest that LungRegNet achieves better registration accuracy in terms of TRE than other deep learning-based methods reported in the literature on the DIRLAB datasets. Compared to conventional DIR methods, LungRegNet generated comparable registration accuracy, with TRE smaller than 2 mm.
The integration of both the discriminator and pulmonary vessel enhancements into the network was crucial to obtain high registration accuracy for 4D-CT lung DIR. The mean and standard deviation of TRE were 1.00 ± 0.53 mm and 1.59 ± 1.58 mm on our datasets and DIRLAB datasets respectively. CONCLUSIONS An unsupervised deep learning-based method has been developed to rapidly and accurately register 4D-CT lung images. LungRegNet has outperformed its deep-learning-based peers and achieved excellent registration accuracy in terms of TRE.
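Given the DIRLAB landmark pairs, TRE reduces to warping each moving landmark with the predicted DVF and measuring the residual distance to its fixed counterpart. A minimal sketch (the function name, voxel-displacement convention, and spacing handling are illustrative assumptions, not LungRegNet's actual code):

```python
import numpy as np

def target_registration_error(moving_pts, fixed_pts, dvf_at_pts, spacing=(1.0, 1.0, 1.0)):
    """Mean and SD of TRE in mm for (N, 3) landmark arrays in voxel
    coordinates; dvf_at_pts is the predicted displacement (in voxels)
    sampled at each moving landmark."""
    spacing = np.asarray(spacing, dtype=float)
    # warp the moving landmarks, then convert the residual to mm
    residual = np.linalg.norm(((moving_pts + dvf_at_pts) - fixed_pts) * spacing, axis=1)
    return float(residual.mean()), float(residual.std())
```

A perfect DVF maps every moving landmark exactly onto its fixed pair, giving a mean TRE of zero; the reported 1.00 ± 0.53 mm and 1.59 ± 1.58 mm are such mean ± SD values over the landmark sets.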
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
4
Qiu J, Harold Li H, Zhang T, Ma F, Yang D. Automatic x-ray image contrast enhancement based on parameter auto-optimization. J Appl Clin Med Phys 2017; 18:218-223. [PMID: 28875594] [PMCID: PMC5689921] [DOI: 10.1002/acm2.12172]
Abstract
Purpose Insufficient image contrast in radiation therapy daily setup x-ray images can negatively affect accurate patient treatment setup. We developed a method to perform automatic and user-independent contrast enhancement on 2D kilovoltage (kV) and megavoltage (MV) x-ray images. The goal was to provide tissue contrast optimized for each treatment site in order to support accurate patient daily treatment setup and the subsequent offline review. Methods The proposed method processes the 2D x-ray images with an optimized image-processing filter chain, which consists of a noise reduction filter and a high-pass filter followed by a contrast-limited adaptive histogram equalization (CLAHE) filter. The most important innovation is to optimize the image-processing parameters automatically to determine the required image contrast settings per disease site and imaging modality. Three major parameters controlling the chain, i.e., the Gaussian smoothing weighting factor for the high-pass filter, and the block size and clip-limiting parameter for the CLAHE filter, were determined automatically using an interior-point constrained optimization algorithm. Results Fifty-two kV and MV x-ray images were included in this study. The results were manually evaluated and ranked with scores from 1 (worst, unacceptable) to 5 (significantly better than adequate and visually praiseworthy) by physicians and physicists. The average scores for the images processed by the proposed method, CLAHE alone, and the best window-level adjustment were 3.92, 2.83, and 2.27, respectively. The percentages of processed images that received a score of 5 were 48%, 29%, and 18%, respectively. Conclusion The proposed method outperforms the standard image contrast adjustment procedures currently used in commercial clinical systems.
When the proposed method is implemented in the clinical systems as an automatic image processing filter, it could be useful for allowing quicker and potentially more accurate treatment setup and facilitating the subsequent offline review and verification.
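The described filter chain (noise reduction → high-pass sharpening → CLAHE) and its three tunable parameters can be sketched with SciPy and scikit-image; the default parameter values and the exact chain below are illustrative, not the paper's optimized per-site settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter
from skimage import exposure

def enhance_xray(img, hp_weight=0.7, block_size=32, clip_limit=0.01):
    """Noise reduction -> Gaussian high-pass sharpening -> CLAHE.
    hp_weight, block_size, and clip_limit mirror the three parameters the
    paper optimizes per treatment site and imaging modality (values here
    are illustrative defaults)."""
    img = img.astype(np.float64)
    img = (img - img.min()) / (np.ptp(img) + 1e-9)      # scale to [0, 1]
    denoised = median_filter(img, size=3)               # noise reduction
    lowpass = gaussian_filter(denoised, sigma=3.0)
    # unsharp-style high-pass: boost detail above the Gaussian low-pass
    sharpened = np.clip(denoised + hp_weight * (denoised - lowpass), 0.0, 1.0)
    return exposure.equalize_adapthist(sharpened,
                                       kernel_size=block_size,
                                       clip_limit=clip_limit)
```

The auto-optimization step would then wrap a call like this in a constrained optimizer over `(hp_weight, block_size, clip_limit)`, scoring the output with a contrast objective per site and modality.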
Affiliation(s)
- Jianfeng Qiu
- Department of Radiology, Taishan Medical University, Taian, China; Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO, USA
- H Harold Li
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO, USA
- Tiezhi Zhang
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO, USA
- Fangfang Ma
- Department of Radiology, Taishan Medical University, Taian, China
- Deshan Yang
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO, USA
5
Gazi PM, Aminololama-Shakeri S, Yang K, Boone JM. Temporal subtraction contrast-enhanced dedicated breast CT. Phys Med Biol 2016; 61:6322-46. [PMID: 27494376] [DOI: 10.1088/0031-9155/61/17/6322]
Abstract
The development of a framework of deformable image registration and segmentation for the purpose of temporal subtraction contrast-enhanced breast CT is described. An iterative histogram-based two-means clustering method was used for the segmentation. Dedicated breast CT images were segmented into background (air), adipose, fibroglandular and skin components. Fibroglandular tissue was classified as either normal or contrast-enhanced then divided into tiers for the purpose of categorizing degrees of contrast enhancement. A variant of the Demons deformable registration algorithm, intensity difference adaptive Demons (IDAD), was developed to correct for the large deformation forces that stemmed from contrast enhancement. In this application, the accuracy of the proposed method was evaluated in both mathematically-simulated and physically-acquired phantom images. Clinical usage and accuracy of the temporal subtraction framework was demonstrated using contrast-enhanced breast CT datasets from five patients. Registration performance was quantified using normalized cross correlation (NCC), symmetric uncertainty coefficient, normalized mutual information (NMI), mean square error (MSE) and target registration error (TRE). The proposed method outperformed conventional affine and other Demons variations in contrast enhanced breast CT image registration. In simulation studies, IDAD exhibited improvement in MSE (0-16%), NCC (0-6%), NMI (0-13%) and TRE (0-34%) compared to the conventional Demons approaches, depending on the size and intensity of the enhancing lesion. As lesion size and contrast enhancement levels increased, so did the improvement. The drop in the correlation between the pre- and post-contrast images for the largest enhancement levels in phantom studies is less than 1.2% (150 Hounsfield units). Registration error, measured by TRE, shows only submillimeter mismatches between the concordant anatomical target points in all patient studies. 
The algorithm was implemented using a parallel processing architecture resulting in rapid execution time for the iterative segmentation and intensity-adaptive registration techniques. Characterization of contrast-enhanced lesions is improved using temporal subtraction contrast-enhanced dedicated breast CT. Adaptation of Demons registration forces as a function of contrast-enhancement levels provided a means to accurately align breast tissue in pre- and post-contrast image acquisitions, improving subtraction results. Spatial subtraction of the aligned images yields useful diagnostic information with respect to enhanced lesion morphology and uptake.
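Of the evaluation metrics reported above, NCC and MSE are simple to state precisely. A sketch of the global versions (the study may compute them over breast masks or locally):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation: 1.0 for images identical up to a
    positive affine intensity change, -1.0 for inverted contrast."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12))

def mse(a, b):
    """Mean squared intensity error between two aligned images."""
    return float(((a - b) ** 2).mean())
```

The reported "drop in correlation of less than 1.2%" corresponds to NCC between pre- and post-contrast images staying near 1 after registration, even at the highest simulated enhancement level.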
Affiliation(s)
- Peymon M Gazi
- Department of Biomedical Engineering, University of California, Davis, One Shields Avenue, Davis, CA 95616, USA; Department of Radiology, University of California, Davis Medical Center, 4860 Y street, Suite 3100 Ellison Building, Sacramento, CA 95817, USA
6
Zhen X, Yan H, Zhou L, Jia X, Jiang SB. Deformable image registration of CT and truncated cone-beam CT for adaptive radiation therapy. Phys Med Biol 2013; 58:7979-93. [PMID: 24169817] [DOI: 10.1088/0031-9155/58/22/7979]
Abstract
Truncation of a cone-beam computed tomography (CBCT) image, mainly caused by the limited field of view (FOV) of CBCT imaging, poses challenges to the problem of deformable image registration (DIR) between computed tomography (CT) and CBCT images in adaptive radiation therapy (ART). The missing information outside the CBCT FOV usually causes incorrect deformations when a conventional DIR algorithm is utilized, which may introduce significant errors in subsequent operations such as dose calculation. In this paper, based on the observation that the missing information in the CBCT image domain does exist in the projection image domain, we propose to solve this problem by developing a hybrid deformation/reconstruction algorithm. As opposed to deforming the CT image to match the truncated CBCT image, the CT image is deformed such that its projections match all the corresponding projection images for the CBCT image. An iterative forward-backward projection algorithm is developed. Six head-and-neck cancer patient cases are used to evaluate our algorithm, five with simulated truncation and one with real truncation. It is found that our method can accurately register the CT image to the truncated CBCT image and is robust against image truncation when the portion of the truncated image is less than 40% of the total image.
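The key observation is that anatomy truncated out of the reconstructed CBCT FOV is still present in the projection data, so the registration cost can be evaluated in the projection domain. A toy parallel-beam stand-in (simple axis sums instead of the real cone-beam forward projector and iterative forward-backward scheme the paper uses):

```python
import numpy as np

def projection_mismatch(deformed_ct, measured_projections, axis=0):
    """Mean squared error between sum-projections of the deformed CT and
    the measured projections; minimizing this over the deformation would
    drive the registration in the projection domain, where no information
    is lost to FOV truncation."""
    simulated = deformed_ct.sum(axis=axis)   # toy forward projection
    return float(((simulated - measured_projections) ** 2).mean())
```

A conventional DIR cost compares the deformed CT against the truncated CBCT image directly, so voxels outside the FOV contribute spurious gradients; matching in the projection domain sidesteps that.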
Affiliation(s)
- Xin Zhen
- Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA 92037-0843, USA; Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
7
Zhen X, Gu X, Yan H, Zhou L, Jia X, Jiang SB. CT to cone-beam CT deformable registration with simultaneous intensity correction. Phys Med Biol 2012; 57:6807-26. [PMID: 23032638] [DOI: 10.1088/0031-9155/57/21/6807]
Abstract
Computed tomography (CT) to cone-beam CT (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called deformation with intensity simultaneously corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration of the demons registration. Specifically, the intensity correction of a voxel in the CBCT is achieved by matching the first and second moments of the voxel intensities inside a patch around the voxel with those in the CT image. It is expected that such a strategy can remove artifacts in the CBCT image as well as ensure intensity consistency between the two modalities. DISC is implemented on computer graphics processing units in the compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and six clinical head-and-neck cancer patient datasets. It is found that DISC is robust against CBCT artifacts and intensity inconsistency and significantly improves registration accuracy compared with the original demons.
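The per-voxel correction, matching the first and second moments of a patch around each CBCT voxel to the CT, can be sketched with box filters (boundary handling and the coupling to the demons update are simplified here):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def patch_moment_match(cbct, ct, patch=7, eps=1e-6):
    """Shift and scale each CBCT voxel so its local patch mean and standard
    deviation (first and second moments) match the CT patch at the same
    location, in the spirit of DISC's per-iteration correction."""
    mu_c = uniform_filter(cbct, patch)
    mu_t = uniform_filter(ct, patch)
    # local variance via E[x^2] - E[x]^2, clipped against rounding error
    var_c = uniform_filter(cbct * cbct, patch) - mu_c * mu_c
    var_t = uniform_filter(ct * ct, patch) - mu_t * mu_t
    sd_c = np.sqrt(np.clip(var_c, 0.0, None)) + eps
    sd_t = np.sqrt(np.clip(var_t, 0.0, None))
    return (cbct - mu_c) / sd_c * sd_t + mu_t
```

Because the correction is local, it can absorb spatially varying discrepancies (such as shading from scatter) that a single global intensity mapping cannot.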
Affiliation(s)
- Xin Zhen
- Center for Advanced Radiotherapy Technologies, Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, CA 92037-0843, USA
8
Nithiananthan S, Schafer S, Uneri A, Mirota DJ, Stayman JW, Zbijewski W, Brock KK, Daly MJ, Chan H, Irish JC, Siewerdsen JH. Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach. Med Phys 2011; 38:1785-98. [PMID: 21626913] [DOI: 10.1118/1.3555037]
Abstract
PURPOSE A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). METHODS A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. RESULTS The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. 
Within the six-case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 ± 2.8) mm compared to (3.5 ± 3.0) mm with rigid registration. CONCLUSIONS A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
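The per-iteration, tissue-specific intensity estimate can be caricatured by binning the currently overlapping voxels into crude tissue classes on the CT and shifting each class's CBCT intensities toward the matching CT mean. This is a much blunter instrument than the paper's estimate, but it shows the shape of the idea:

```python
import numpy as np

def tissue_intensity_correction(cbct, ct, n_classes=3):
    """Piecewise-constant CBCT intensity correction estimated from the
    currently overlapping voxels: classify voxels by CT intensity
    quantiles, then shift each class's CBCT values so the class means
    agree with the CT (illustrative stand-in, not the paper's method)."""
    corrected = cbct.astype(np.float64).copy()
    inner_edges = np.quantile(ct, np.linspace(0.0, 1.0, n_classes + 1)[1:-1])
    labels = np.digitize(ct, inner_edges)      # tissue-class index per voxel
    for k in range(n_classes):
        mask = labels == k
        if mask.any():
            corrected[mask] += ct[mask].mean() - cbct[mask].mean()
    return corrected
```

In the paper this estimate is recomputed at every Demons iteration from the voxels that currently overlap, so the mapping improves as the alignment improves; a single pre-registration histogram match, by contrast, is frozen at the initial (mis)alignment.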
Affiliation(s)
- Sajendra Nithiananthan
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205, USA
9
Chandler A, Wei W, Herron DH, Anderson EF, Johnson VE, Ng CS. Semiautomated motion correction of tumors in lung CT-perfusion studies. Acad Radiol 2011; 18:286-93. [PMID: 21295733] [DOI: 10.1016/j.acra.2010.10.008]
Abstract
RATIONALE AND OBJECTIVES To compare the relative performance of one-dimensional (1D) manual, rigid-translational, and nonrigid registration techniques for correcting misalignment of lung tumor anatomy in computed tomography perfusion (CTp) datasets. MATERIALS AND METHODS Twenty-five datasets in patients with lung tumors who had undergone a CTp protocol were evaluated. Each dataset consisted of one reference CT image from an initial cine slab and six subsequent breathhold helical volumes (16-row multi-detector CT), acquired during intravenous contrast administration. Each helical volume was registered to the reference image using two semiautomated intensity-based registration methods (rigid-translational and nonrigid) and 1D manual registration (the only registration method available in the relevant application software). The performance of each technique in aligning tumor regions was assessed quantitatively (percent overlap and distance between centers of mass, DCOM) and by a visual validation study (using a 5-point scale). The registration methods were statistically compared using linear mixed and ordinal probit regression models. RESULTS Quantitatively, the improvement in tumor alignment with the nonrigid method over rigid-translation was borderline significant, and rigid-translation was in turn significantly better than the 1D manual method: average (± SD) percent overlap, 91.8 ± 2.3%, 87.7 ± 5.5%, and 77.6 ± 5.9%, respectively; and average (± SD) DCOM, 0.41 ± 0.16 mm, 1.08 ± 1.13 mm, and 2.99 ± 2.93 mm, respectively (all P < .0001). Visual validation confirmed these findings. CONCLUSION Semiautomated registration methods achieved superior alignment of lung tumors compared to the 1D manual method. This will hopefully translate into more reliable CTp analyses.
Affiliation(s)
- Adam Chandler
- Department of Imaging Physics, MD Anderson Cancer Center, 1400 Pressler Street, Houston, TX 77030, USA
10
Yang D, Brame S, El Naqa I, Aditya A, Wu Y, Goddu SM, Mutic S, Deasy JO, Low DA. Technical note: DIRART--A software suite for deformable image registration and adaptive radiotherapy research. Med Phys 2011; 38:67-77. [PMID: 21361176] [PMCID: PMC3017581] [DOI: 10.1118/1.3521468]
Abstract
PURPOSE Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). METHODS DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse-consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. RESULTS DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers many options for DIR result visualization, evaluation, and validation. CONCLUSIONS By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is a convenient and flexible platform that may facilitate ART and DIR research.
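The inverse-consistency property DIRART's newer algorithms target — forward and backward displacement fields that invert each other — can be checked numerically: push each voxel through the forward field, sample the backward field at the landing point, and measure the round-trip residual. A 2-D sketch (DIRART itself is MATLAB; this is an illustrative reimplementation of the check, not its code):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_consistency_error(fwd, bwd):
    """Per-voxel round-trip residual for displacement fields of shape
    (2, H, W) in voxel units; ~0 everywhere means the forward and backward
    fields are inverse consistent."""
    h, w = fwd.shape[1:]
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    y2, x2 = ys + fwd[0], xs + fwd[1]            # position after forward warp
    # backward displacement sampled (bilinearly) at the warped position
    by = map_coordinates(bwd[0], [y2, x2], order=1, mode='nearest')
    bx = map_coordinates(bwd[1], [y2, x2], order=1, mode='nearest')
    return np.hypot(fwd[0] + by, fwd[1] + bx)    # distance back to the start
```

Independently estimated forward and backward fields generally fail this check; inverse-consistency algorithms penalize the residual during optimization so the two fields agree.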
Affiliation(s)
- Deshan Yang
- Department of Radiation Oncology, School of Medicine, Washington University, Saint Louis, Missouri 63110, USA.