1.
Mueller S, Guyer G, Volken W, Frei D, Torelli N, Aebersold DM, Manser P, Fix MK. Efficiency enhancements of a Monte Carlo beamlet based treatment planning process: implementation and parameter study. Phys Med Biol 2023; 68. PMID: 36655485. DOI: 10.1088/1361-6560/acb480.
Abstract
Objective. The computational effort to perform beamlet calculation, plan optimization and final dose calculation in a treatment planning process (TPP) generating intensity modulated treatment plans is enormous, especially if Monte Carlo (MC) simulations are used for dose calculation. The goal of this work is to improve the computational efficiency of a fully MC based TPP for static and dynamic photon, electron and mixed photon-electron treatment techniques by implementing multiple methods and studying the influence of their parameters.

Approach. A framework is implemented that calculates MC beamlets efficiently in parallel on each available CPU core. The user can specify the desired statistical uncertainty of the beamlets, a fractional sparse dose threshold below which beamlet dose is discarded when saving in a sparse format, and minimal distances to the PTV surface beyond which 2 × 2 × 2 = 8 (medium) or even 4 × 4 × 4 = 64 (large) voxels are merged. The compromise between final plan quality and the computational efficiency of beamlet calculation and optimization is studied for several parameter values to find a reasonable trade-off. For this purpose, four clinical cases and one academic case with different treatment techniques are considered.

Main results. Setting the statistical uncertainty to 5% (photon beamlets) and 15% (electron beamlets), the fractional sparse dose threshold relative to the maximal beamlet dose to 0.1%, and the minimal distances to the PTV for medium and large voxels to 1 cm and 2 cm, respectively, does not lead to substantial degradation in final plan quality compared to using 2.5% (photon beamlets) and 5% (electron beamlets) statistical uncertainty with neither the sparse format nor voxel merging. Only OAR sparing is slightly degraded. Furthermore, computation times are reduced by about 58% (photon beamlets), 88% (electron beamlets) and 96% (optimization).

Significance. Several methods are implemented that improve the computational efficiency of beamlet calculation and plan optimization of a fully MC based TPP without substantial degradation in final plan quality.
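The sparse-storage and voxel-merging ideas in this abstract can be illustrated with a short NumPy sketch. The 0.1% fractional threshold and the 2 × 2 × 2 merging factor come from the abstract; the array shapes, function names and toy dose grid are illustrative, not the authors' implementation.

```python
import numpy as np

def sparsify_beamlet(dose, rel_threshold=1e-3):
    """Keep only voxels above a fractional threshold of the beamlet's
    maximum dose, stored as (flat index, value) pairs."""
    cutoff = rel_threshold * dose.max()
    idx = np.flatnonzero(dose > cutoff)
    return idx, dose.ravel()[idx]

def merge_voxels(dose, factor=2):
    """Merge factor**3 neighbouring voxels by averaging (2x2x2 -> 'medium',
    4x4x4 -> 'large'); assumes each dimension is divisible by `factor`."""
    nx, ny, nz = dose.shape
    return dose.reshape(nx // factor, factor,
                        ny // factor, factor,
                        nz // factor, factor).mean(axis=(1, 3, 5))

# Toy beamlet dose grid: mostly zeros, a few high-dose voxels.
rng = np.random.default_rng(0)
dose = np.zeros((8, 8, 8))
dose[2:4, 2:4, 2:4] = rng.uniform(0.5, 1.0, (2, 2, 2))

idx, vals = sparsify_beamlet(dose)
merged = merge_voxels(dose, factor=2)
print(len(idx), dose.size, merged.shape)
```

Only the 8 hit voxels of the 512-voxel grid are stored, and merging with averaging preserves the total dose up to the merge factor.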
Affiliation(s)
- S Mueller, G Guyer, W Volken, D Frei, N Torelli, D M Aebersold, P Manser, M K Fix: Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Switzerland
2.
Lee H, Shin J, Verburg JM, Bobić M, Winey B, Schuemann J, Paganetti H. MOQUI: an open-source GPU-based Monte Carlo code for proton dose calculation with efficient data structure. Phys Med Biol 2022; 67. PMID: 35926482. PMCID: PMC9513828. DOI: 10.1088/1361-6560/ac8716.
Abstract
Objective. Monte Carlo (MC) codes are increasingly used for accurate radiotherapy dose calculation. In proton therapy, the accuracy of the dose calculation algorithm is expected to have a more significant impact than in photon therapy due to the depth-dose characteristics of proton beams. However, MC simulations come at a considerable computational cost to achieve statistically sufficient accuracy. There have been efforts to improve computational efficiency while maintaining sufficient accuracy; among these, parallelizing particle transport on graphics processing units (GPUs) has achieved significant improvements. In contrast to the central processing unit, a GPU has limited, non-expandable memory capacity, so it is challenging to score quantities with large dimensions that require extensive memory. The objective of this study is to develop an open-source GPU-based MC package capable of scoring those quantities.

Approach. We employed a hash table, a key-value pair data structure, to efficiently utilize the limited memory of the GPU and score the quantities requiring a large amount of memory. With the hash table, only voxels interacting with particles occupy memory, and the data can be searched efficiently to determine their addresses. The hash table was integrated into a novel GPU-based MC code, moqui.

Main results. The developed code was validated against TOPAS, an MC code widely used in proton therapy, with homogeneous and heterogeneous phantoms, and the dose calculation results of clinical treatment plans were also compared. The developed code agreed with TOPAS within 2%, except for the fall-off regions, and the gamma pass rates of the results were >99% for all cases with a 2 mm/2% criterion.

Significance. Using moqui, the dose-influence matrix and dose rate for a 3-field H&N case can be scored on a GPU with 10 GB of memory, which would require more than 100 GB with the conventionally used array data structure.
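The hash-table scoring idea can be sketched with a plain Python dict standing in for the GPU hash table: the key is the voxel index, the value is accumulated dose, and untouched voxels never allocate memory. The class and method names are hypothetical; a real GPU implementation would additionally need atomic updates and a collision strategy, which are omitted here.

```python
from collections import defaultdict

class HashTableScorer:
    """Score dose only in voxels a particle actually visits. A dense
    array over the whole grid is never allocated."""
    def __init__(self):
        self.dose = defaultdict(float)  # voxel index -> accumulated dose

    def score(self, voxel_index, energy_deposit):
        self.dose[voxel_index] += energy_deposit

    def memory_entries(self):
        return len(self.dose)

scorer = HashTableScorer()
# Simulate a few particle steps: most of a 100x100x100 grid is untouched.
steps = [(123456, 0.8), (123457, 0.5), (123456, 0.2), (999999, 0.1)]
for voxel, edep in steps:
    scorer.score(voxel, edep)

print(scorer.memory_entries())  # entries actually stored, not grid size
```

Memory scales with the number of voxels hit rather than with the full scoring grid, which is the point of the key-value design described above.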
Affiliation(s)
- Hoyeon Lee, Joost M Verburg, Brian Winey, Jan Schuemann, Harald Paganetti: Dept. of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Jungwook Shin: Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institutes of Health, Rockville, MD 20850, United States of America
- Mislav Bobić: Dept. of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America; Department of Physics, ETH, Zürich 8092, Switzerland
3.
Li Y, Ding S, Wang B, Liu H, Huang X, Song T. Extension and validation of a GPU-Monte Carlo dose engine gDPM for 1.5 T MR-LINAC online independent dose verification. Med Phys 2021; 48:6174-6183. PMID: 34387872. DOI: 10.1002/mp.15165.
Abstract
PURPOSE: To extend and validate the accuracy and efficiency of a graphics processing unit (GPU) Monte Carlo dose engine for online independent dose verification on the Elekta Unity 1.5 T Magnetic Resonance Linear Accelerator (MR-LINAC).

METHODS: Electron/positron propagation physics in a uniform magnetic field was implemented in a previously developed GPU Monte Carlo dose engine, gDPM. The dose calculation accuracy in the magnetic field was first evaluated in a heterogeneous phantom against EGSnrc. The dose engine was then commissioned for a Unity machine with a virtual two-photon-source model and compared with the Monaco treatment planning system. Fifteen patient plans from five tumor sites were included to quantify online dose verification accuracy and efficiency.

RESULTS: The extended gDPM accurately calculated the dose in a 1.5 T external magnetic field and matched EGSnrc well. The relative dose difference along the central beam axis was less than 0.5% for the homogeneous region of the water-lung phantom. The maximum difference was found at the build-up regions and heterogeneous interfaces, reaching 1.9% and 2.4% for 2 and 6 MeV mono-energetic photon beams, respectively. The root mean square errors in the depth-dose fall-off region were less than 0.2% for all field sizes, showing a good match between gDPM and Monaco GPUMCD. For in-field profiles, the dose differences were within 1% in the cross-plane and in-plane directions at all calculated depths except dmax. In the penumbra regions, the distance-to-agreement between the two dose profiles was less than 0.1 cm. For patient plan verification, the maximum relative average dose difference was 1.3%, and the gamma passing rates with 3%/2 mm criteria for dose regions above 20% were between 93% and 98%. gDPM can complete the dose calculation in less than 40 s with 5 × 10^8 photons on a single NVIDIA GTX-1080Ti GPU, achieving a statistical uncertainty of 0.5%-1.1% for all evaluated cases.

CONCLUSIONS: A GPU Monte Carlo package, gDPM, was extended and validated for Elekta Unity online plan verification. Its calculation accuracy and efficiency make it suitable for online independent dose verification for MR-LINAC.
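The gamma analysis used for plan verification above can be illustrated with a minimal 1D global-gamma sketch. The 20% low-dose threshold and the percent/mm style of criteria mirror those quoted in the abstract; the profiles, grid and function name are toy examples, not the paper's verification pipeline.

```python
import numpy as np

def gamma_pass_rate(ref, evl, coords, dose_crit, dist_crit, low_cutoff=0.2):
    """1D global gamma analysis: for each reference point above the
    low-dose cutoff, search all evaluated points for the minimum
    generalized distance. dose_crit is a fraction of the reference
    maximum (global normalization), dist_crit is in mm."""
    dd = dose_crit * ref.max()
    passed, total = 0, 0
    for i, d_ref in enumerate(ref):
        if d_ref < low_cutoff * ref.max():
            continue  # skip low-dose region, as with the 20% threshold above
        total += 1
        gamma2 = np.min(((coords - coords[i]) / dist_crit) ** 2
                        + ((evl - d_ref) / dd) ** 2)
        if gamma2 <= 1.0:
            passed += 1
    return passed / total

x = np.arange(0.0, 100.0, 1.0)               # 1 mm grid
ref = np.exp(-((x - 50.0) / 15.0) ** 2)      # toy reference dose profile
shifted = np.exp(-((x - 51.0) / 15.0) ** 2)  # same profile, 1 mm shift

rate = gamma_pass_rate(ref, shifted, x, dose_crit=0.03, dist_crit=2.0)
print(rate)
```

A 1 mm spatial shift sits comfortably inside a 2 mm distance-to-agreement criterion, so every evaluated point passes here; larger shifts or dose errors drive the rate down.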
Affiliation(s)
- Yongbao Li, Shouliang Ding, Bin Wang, Hongdong Liu, Xiaoyan Huang: Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, China
- Ting Song: School of Biomedical Engineering, Southern Medical University, Guangzhou, China
4.
Neph R, Ouyang C, Neylon J, Yang Y, Sheng K. Parallel beamlet dose calculation via beamlet contexts in a distributed multi-GPU framework. Med Phys 2019; 46:3719-3733. PMID: 31183871. DOI: 10.1002/mp.13651.
Abstract
PURPOSE: Dose calculation is one of the most computationally intensive yet essential tasks in the treatment planning process. With the recent interest in automatic beam orientation and arc trajectory optimization techniques, there is a great need for more efficient model-based dose calculation algorithms that can accommodate hundreds to thousands of beam candidates at once. Foundational work has shown the translation of dose calculation algorithms to graphics processing units (GPUs), lending remarkable gains in processing efficiency. But these methods parallelize the dose of only a single beamlet, serializing the calculation of multiple beamlets and under-utilizing the potential of modern GPUs. In this paper, the authors propose a framework enabling parallel computation of many beamlet doses using a novel beamlet context transformation, and further embed this approach in a scalable network of multi-GPU computational nodes.

METHODS: The proposed context-based transformation separates beamlet-local density and TERMA into distinct beamlet contexts that independently provide sufficient data for beamlet dose calculation. Beamlet contexts are arranged in a composite context array with dosimetric isolation, and the context array is subjected to a GPU collapsed-cone convolution superposition (CCCS) procedure, producing the set of beamlet-specific dose distributions in a single pass. Dose from each context is converted to a sparse representation for efficient storage and retrieval during treatment plan optimization. The context radius is a new parameter permitting flexibility between the speed and fidelity of the dose calculation process. A distributed manager-worker architecture is constructed around the context-based GPU dose calculation approach, supporting an arbitrary number of worker nodes and resident GPUs. Phantom experiments were executed to verify the accuracy of the context-based approach against Monte Carlo and a reference CPU-CCCS implementation for single beamlets and for broad beams composed by addition of beamlets. Dose for representative 4π beam sets was calculated in lung and prostate cases to compare its efficiency with that of an existing beamlet-sequential GPU-CCCS implementation. Code profiling was also performed to evaluate the scalability of the framework across many networked GPUs.

RESULTS: The dosimetric accuracy of the context-based method displays <1.35% and 2.35% average error from the existing serialized CPU-CCCS algorithm and Monte Carlo simulation for beamlet-specific PDDs in water and slab phantoms, respectively. The context-based method demonstrates substantial speedup of up to two orders of magnitude over the beamlet-sequential GPU-CCCS method in the tested configurations. The framework demonstrates near-linear scaling in the number of distributed compute nodes and GPUs employed, indicating that it is flexible enough to meet the performance requirements of most users by simply increasing the hardware utilization.

CONCLUSIONS: The context-based approach demonstrates a new expectation of performance for beamlet-based dose calculation methods. It has been successful in accelerating the dose calculation process for very large-scale treatment planning problems, such as automatic 4π IMRT beam orientation and VMAT arc trajectory selection with hundreds of thousands of beamlets, in clinically feasible timeframes. The flexibility of this framework makes it a strong candidate for use in a variety of other very large-scale treatment planning tasks and clinical workflows.
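The "sparse representation for efficient storage and retrieval during treatment plan optimization" mentioned above amounts to holding the per-beamlet dose columns in a sparse dose-influence matrix and reusing cheap matrix-vector products at every optimizer iteration. A SciPy sketch under assumed toy dimensions (voxel/beamlet counts and sparsity pattern are illustrative):

```python
import numpy as np
from scipy import sparse

# Toy dose-influence matrix A: rows are voxels, columns are beamlets.
# Each beamlet deposits dose in only a few voxels, so A is stored sparse.
n_voxels, n_beamlets = 1000, 50
rng = np.random.default_rng(1)
rows, cols, vals = [], [], []
for b in range(n_beamlets):
    hit = rng.choice(n_voxels, size=20, replace=False)  # 2% of voxels hit
    rows.extend(hit)
    cols.extend([b] * 20)
    vals.extend(rng.uniform(0.1, 1.0, 20))
A = sparse.csr_matrix((vals, (rows, cols)), shape=(n_voxels, n_beamlets))

w = np.ones(n_beamlets)   # beamlet weights from the optimizer
dose = A @ w              # total dose, recomputed cheaply every iteration
print(A.nnz, dose.shape)
```

Storing only nonzeros keeps memory proportional to the dose actually deposited, which is what makes plan optimization over hundreds of thousands of beamlets tractable.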
Affiliation(s)
- Ryan Neph, Cheng Ouyang, John Neylon, Youming Yang, Ke Sheng: Department of Radiation Oncology, University of California Los Angeles, 200 Medical Plaza, #B265, Los Angeles, California, 90095, USA
5.
Aland T, Walsh A, Jones M, Piccini A, Devlin A. Accuracy and efficiency of graphics processing unit (GPU) based Acuros XB dose calculation within the Varian Eclipse treatment planning system. Med Dosim 2018; 44:219-225. PMID: 30153966. DOI: 10.1016/j.meddos.2018.07.002.
Abstract
To evaluate, in terms of dosimetric accuracy and calculation efficiency, the implementation of a graphics processing unit (GPU)-based Acuros XB dose calculation engine within version 15.5 of the Varian Eclipse treatment planning system. Initial phantom-based calculations and a range of 101 clinical cases were analyzed on a dedicated test system. Dosimetric differences, based on dose-volume histogram parameters and plan comparison, were compared between central processing unit (CPU)- and GPU-based calculation. Calculation times were also compared between CPU and GPU, as well as between PLAN and FIELD modes. No dosimetric differences were found between CPU and GPU. CPU-based calculations ranged from 25 to 533 seconds per plan, reducing to 13 to 70 seconds for GPU; on average, GPU-based calculation was 4.4 times faster than CPU. FIELD mode was up to 1.3 times more efficient than PLAN mode. For the clinical cases and version of Eclipse used, no dosimetric differences were found between CPU and GPU. Based on this, the GPU architecture has been safely implemented and is ready for clinical use.
Affiliation(s)
- Trent Aland: ICON Group, South Brisbane, Queensland, Australia; School of Chemistry, Physics, and Mechanical Engineering, Queensland University of Technology, Brisbane, Queensland, Australia
- Mark Jones: ICON Group, South Brisbane, Queensland, Australia
- Aimee Devlin: ICON Group, South Brisbane, Queensland, Australia
6.
Hagan A, Sawant A, Folkerts M, Modiri A. Multi-GPU configuration of 4D intensity modulated radiation therapy inverse planning using global optimization. Phys Med Biol 2018; 63:025028. PMID: 29176059. DOI: 10.1088/1361-6560/aa9c96.
Abstract
We report on the design, implementation and characterization of a multi-graphics processing unit (GPU) computational platform for higher-order optimization in radiotherapy treatment planning. In collaboration with a commercial vendor (Varian Medical Systems, Palo Alto, CA), a research prototype GPU-enabled Eclipse (V13.6) workstation was configured. The hardware consisted of dual 8-core Xeon processors, 256 GB RAM and four NVIDIA Tesla K80 general purpose GPUs. We demonstrate the utility of this platform for large radiotherapy optimization problems through the development and characterization of a parallelized particle swarm optimization (PSO) four-dimensional (4D) intensity modulated radiation therapy (IMRT) technique. The PSO engine was coupled to the Eclipse treatment planning system via a vendor-provided scripting interface. Two specific challenges were addressed in this implementation: (i) data management and (ii) non-uniform memory access (NUMA). For the former, we alternated between the parameters over which the computation process was parallelized. For the latter, we reduced the amount of data that had to be transferred over the NUMA bridge. The datasets examined in this study were approximately 300 GB in size, including 4D computed tomography images, anatomical structure contours and dose deposition matrices. For evaluation, we created a 4D-IMRT treatment plan for one lung cancer patient and analyzed computation speed while varying several parameters (number of respiratory phases, GPUs, PSO particles, and data matrix sizes). The optimized 4D-IMRT plan enhanced sparing of organs at risk by an average reduction of [Formula: see text] in maximum dose, compared to the clinically optimized IMRT plan in which the internal target volume was used. We validated our computation time analyses in two additional cases. The computation speed in our implementation did not increase monotonically with the number of GPUs; the optimal number of GPUs (five, in our study) is directly related to the hardware specifications. The optimization process took 35 min using 50 PSO particles, 25 iterations and 5 GPUs.
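For readers unfamiliar with PSO, here is a minimal global-best swarm illustrating the 50-particle, 25-iteration configuration quoted above. The inertia and acceleration constants are common textbook defaults, and the quadratic objective is a stand-in for the 4D-IMRT cost; none of it reproduces the authors' Eclipse-coupled engine.

```python
import numpy as np

def pso_minimize(f, n_dims, n_particles=50, n_iters=25, seed=0,
                 inertia=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer with a global-best topology."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, n_dims))   # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iters):
        r1 = rng.random((n_particles, n_dims))
        r2 = rng.random((n_particles, n_dims))
        # pull each particle toward its own best and the swarm's best
        v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy objective: a simple quadratic bowl with its minimum at the origin.
best_x, best_val = pso_minimize(lambda p: float(np.sum(p ** 2)), n_dims=3)
print(best_val)
```

PSO is gradient-free, which is why it suits objectives (like dose-based costs) where gradients are awkward or expensive; its cost is the many objective evaluations, hence the GPU parallelization in the paper.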
Affiliation(s)
- Aaron Hagan: University of Maryland, School of Medicine, Baltimore, MD, United States of America
7.
Li Y, Tian Z, Song T, Wu Z, Liu Y, Jiang S, Jia X. A new approach to integrate GPU-based Monte Carlo simulation into inverse treatment plan optimization for proton therapy. Phys Med Biol 2017; 62:289-305. PMID: 27991456. DOI: 10.1088/1361-6560/62/1/289.
Abstract
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
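The adaptive particle sampling idea, favoring high-intensity spots when distributing MC histories, can be sketched as a simple proportional allocation. The minimum-per-spot floor is an illustrative safeguard so every spot's dose estimate stays usable, not a detail taken from the paper.

```python
import numpy as np

def allocate_histories(spot_weights, total_histories, min_per_spot=100):
    """Non-uniform history allocation: high-intensity spots get more MC
    particles, but every spot keeps a minimum number of histories."""
    w = np.asarray(spot_weights, dtype=float)
    w = w / w.sum()                                   # normalize intensities
    n = np.maximum(min_per_spot,
                   np.round(w * total_histories)).astype(int)
    return n

weights = [10.0, 5.0, 1.0, 0.1]   # toy spot intensities from an optimizer
n = allocate_histories(weights, total_histories=100_000)
print(n)
```

Because spot intensities change between optimization iterations, such an allocation would be recomputed each time, concentrating the expensive MC effort where it affects the objective most.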
Affiliation(s)
- Yongbao Li: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390-8542, USA; Department of Engineering Physics, Key Laboratory of Particle & Radiation Imaging (Tsinghua University), Ministry of Education, Tsinghua University, Beijing 10084, People's Republic of China
8.
Yang YM, Svatos M, Zankowski C, Bednarz B. Concurrent Monte Carlo transport and fluence optimization with fluence adjusting scalable transport Monte Carlo. Med Phys 2016; 43:3034-3048. PMID: 27277051. DOI: 10.1118/1.4950711.
Abstract
PURPOSE: The future of radiation therapy will require advanced inverse planning solutions to support single-arc, multiple-arc, and "4π" delivery modes, which present unique challenges in finding an optimal treatment plan over a vast search space while still preserving dosimetric accuracy. The successful clinical implementation of such methods would benefit from Monte Carlo (MC) based dose calculation methods, which can offer improvements in dosimetric accuracy when compared to deterministic methods. The standard method for MC based treatment planning optimization leverages the accuracy of the MC dose calculation and the efficiency of well-developed optimization methods by precalculating the fluence-to-dose relationship within a patient with MC methods and subsequently optimizing the fluence weights. However, the sequential nature of this implementation is computationally time consuming and memory intensive. Methods to reduce the overhead of the MC precalculation have been explored in the past, demonstrating promising reductions of computational time overhead, but with limited impact on the memory overhead due to the sequential nature of the dose calculation and fluence optimization. The authors propose an entirely new form of "concurrent" Monte Carlo treatment plan optimization: a platform which optimizes the fluence during the dose calculation, reduces computation time wasted on beamlets that weakly contribute to the final dose distribution, and requires only a low memory footprint to function. In this initial investigation, the authors explore the key theoretical and practical considerations of optimizing fluence in such a manner.

METHODS: The authors present a novel derivation and implementation of a gradient descent algorithm that allows for optimization during MC particle transport, based on highly stochastic information generated through particle transport of very few histories. A gradient rescaling and renormalization algorithm, together with the concept of momentum from stochastic gradient descent, was used to address obstacles unique to performing gradient descent fluence optimization during MC particle transport. The authors applied their method to two simple geometrical phantoms and one clinical patient geometry to examine the capability of this platform to generate conformal plans, and to assess its computational scaling and efficiency.

RESULTS: The authors obtain a reduction of at least 50% in total histories transported compared to a theoretical unweighted beamlet calculation with subsequent fluence optimization, and observe a roughly fixed optimization time overhead of ∼10% of the total computation time in all cases. Finally, the authors demonstrate a negligible memory overhead increase of ∼7-8 MB to optimize a clinical patient geometry surrounded by 36 beams using their platform.

CONCLUSIONS: This study demonstrates a fluence optimization approach which could significantly improve the development of next generation radiation therapy solutions while incurring minimal additional computational overhead.
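The momentum idea described here, stabilizing gradient descent when each gradient is estimated from very few MC histories and is therefore extremely noisy, can be sketched on a toy quadratic objective with artificially noisy gradients. The rescaling and renormalization steps of the actual method are omitted, and every number below (target, noise level, learning rate) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
target = np.array([2.0, 0.5, 1.0])   # stand-in for a desired dose pattern
fluence = np.zeros(3)                # beamlet fluence weights being optimized
velocity = np.zeros(3)
lr, momentum = 0.05, 0.9

for step in range(300):
    noise = rng.normal(0.0, 1.0, 3)          # few-history stochastic gradient
    grad = 2.0 * (fluence - target) + noise  # true gradient + MC noise
    velocity = momentum * velocity - lr * grad
    fluence = np.maximum(fluence + velocity, 0.0)  # fluence must stay >= 0

print(np.round(fluence, 2))
```

The momentum term averages the gradient over many noisy steps, so the iterate drifts toward the target even though any single gradient estimate is dominated by noise.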
Affiliation(s)
- Y M Yang, B Bednarz: Department of Medical Physics, Wisconsin Institutes for Medical Research, University of Wisconsin, Madison, Wisconsin 53703
- M Svatos, C Zankowski: Varian Medical Systems, 3120 Hansen Way, Palo Alto, California 94304
9.
Després P, Beaulieu L, El Naqa I, Seuntjens J. Special section: Selected papers from the Fifth International Workshop on Monte Carlo Techniques in Medical Physics. Phys Med Biol 2015; 60:4947-4950. DOI: 10.1088/0031-9155/60/13/4947.