26. Kalinowski J, Enger SA. RapidBrachyTG43: A Geant4-based TG-43 parameter and dose calculation module for brachytherapy dosimetry. Med Phys 2024;51:3746-3757. PMID: 38252746. DOI: 10.1002/mp.16948.
Abstract
BACKGROUND The AAPM TG-43U1 formalism remains the clinical standard for dosimetry of low- and high-energy γ-emitting brachytherapy sources. TG-43U1 and related reports provide consensus datasets of TG-43 parameters derived from published measurements and Monte Carlo simulations. These data are used to perform standardized, fast dose calculations for brachytherapy treatment planning. PURPOSE Monte Carlo-derived TG-43 dosimetry parameters are commonly used to characterize novel brachytherapy sources. RapidBrachyTG43 is a module of RapidBrachyMCTPS, a Monte Carlo-based treatment planning system, designed to automate this process, requiring minimal user input to prepare Geant4-based Monte Carlo simulations for a source. RapidBrachyTG43 can also perform a TG-43 dose to water-in-water calculation for a plan, substantially faster than the same calculation performed with RapidBrachyMCTPS's Monte Carlo dose calculation engine. METHODS The TG-43 parameters S_K/A, Λ, g_L(r), and F(r,θ) were calculated for three commercial source models, one each of 125I, 192Ir, and 60Co, and were benchmarked against published data. TG-43 dose to water was calculated for a clinical breast brachytherapy plan and compared to a Monte Carlo dose calculation with all patient tissues, air, and catheters set to water. RESULTS TG-43 parameters for the three simulated sources agreed with benchmark datasets within the tolerances specified by the High Energy Brachytherapy Dosimetry working group. A gamma index comparison between the TG-43 and Monte Carlo dose-to-water calculations with dose difference and distance-to-agreement criteria of 1%/1 mm yielded a 98.9% pass rate, with all relevant dose-volume histogram metrics for the plan agreeing within 1%. The TG-43-based calculation accelerated the dose-to-water computation by a factor of 165.
CONCLUSIONS Determination of TG-43 parameter data for novel brachytherapy sources may now be facilitated by RapidBrachyMCTPS. These parameter datasets, as well as existing consensus or published datasets, may also be used to determine the TG-43 dose for a plan in RapidBrachyMCTPS.
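For orientation, the dose rate that such TG-43 parameter sets feed into factorizes as D(r,θ) = S_K · Λ · [G_L(r,θ)/G_L(r0,θ0)] · g_L(r) · F(r,θ). A minimal sketch of the 2D line-source formalism follows; the Λ value, the source length, and the unit-valued g_L/F callables used below are illustrative placeholders, not data from the paper:

```python
import math

def geometry_factor_line(r, theta, L):
    """TG-43U1 line-source geometry function G_L(r, theta).

    r: radial distance (cm), theta: polar angle (rad),
    L: active source length (cm).
    """
    x, z = r * math.sin(theta), r * math.cos(theta)
    if abs(x) < 1e-12:
        # Point on the source axis (r > L/2): G_L reduces to 1/(r^2 - L^2/4).
        return 1.0 / (r * r - L * L / 4.0)
    # beta is the angle subtended by the two active source ends at the point.
    beta = math.atan2(x, z - L / 2.0) - math.atan2(x, z + L / 2.0)
    return beta / (L * x)  # G_L = beta / (L * r * sin(theta))

def tg43_dose_rate(S_K, Lam, r, theta, L, g_L, F):
    """Dose rate via the 2D TG-43U1 formalism, with the standard
    reference point r0 = 1 cm, theta0 = 90 degrees."""
    r0, theta0 = 1.0, math.pi / 2.0
    G_ratio = geometry_factor_line(r, theta, L) / geometry_factor_line(r0, theta0, L)
    return S_K * Lam * G_ratio * g_L(r) * F(r, theta)
```

Passing tabulated g_L(r) and F(r,θ) as interpolating callables reproduces the standard point-dose calculation at arbitrary (r, θ); in the small-L limit the geometry factor approaches the point-source 1/r².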
27. Lopushenko I, Sieryi O, Bykov A, Meglinski I. Exploring the evolution of circular polarized light backscattered from turbid tissue-like disperse medium utilizing generalized Monte Carlo modeling approach with a combined use of Jones and Stokes-Mueller formalisms. J Biomed Opt 2024;29:052913. PMID: 38089555. PMCID: PMC10715447. DOI: 10.1117/1.jbo.29.5.052913.
Abstract
Significance. Phase retardation of circularly polarized light (CPL) backscattered by biological tissue is used extensively for quantitative evaluation of cervical intraepithelial neoplasia, detection of senile Alzheimer's plaques, and characterization of biotissues with optical anisotropy. Stokes polarimetry and Mueller matrix approaches show high potential for definitive non-invasive cancer diagnosis and tissue characterization. A thorough understanding of CPL interaction with tissues is essential for advancing medical diagnostics, optical imaging, therapeutic applications, and the development of optical instruments and devices. Aim. We investigate the propagation of CPL within turbid tissue-like scattering medium utilizing a combination of Jones and Stokes-Mueller formalisms in a Monte Carlo (MC) modeling approach, and explore the fundamentals of the CPL memory effect and the formation of depolarization. Approach. The generalized MC computational approach developed for polarization tracking within turbid tissue-like scattering medium is based on the iterative solution of the Bethe-Salpeter equation. The approach handles the helicity response of CPL scattered in turbid medium and provides explicit expressions for assessing its polarization state. Results. The evolution of CPL backscattered by tissue-like medium is assessed quantitatively for different source-detector configurations. Depolarization is presented in terms of the coherence matrix and the Stokes-Mueller formalism. The results reveal the origins of the helicity flip of CPL, depending on the source-detector configuration and the properties of the medium, and are in good agreement with experiment. Conclusions. By integrating the Jones and Stokes-Mueller formalisms, the combined MC approach allows a more complete representation of polarization effects in complex optical systems.
The developed model is suited to imitating the propagation of light beams of different shapes and profiles, including Gaussian, Bessel, Hermite-Gaussian, and Laguerre-Gaussian beams, within tissue-like medium. Diverse experimental configurations, the coherence properties of light, and peculiarities of polarization can also be taken into account.
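The helicity flip discussed above can be illustrated directly with Stokes-Mueller algebra. A toy sketch using generic textbook Mueller matrices (an ideal helicity-flipping backreflection and an isotropic partial depolarizer), not the paper's simulated medium:

```python
import numpy as np

# Stokes vector [I, Q, U, V]; V = +I corresponds to fully right-circular light.
S_in = np.array([1.0, 0.0, 0.0, 1.0])

# Illustrative textbook Mueller matrices (not the paper's simulated medium):
M_mirror = np.diag([1.0, 1.0, -1.0, -1.0])  # single backreflection: helicity flips
M_depol = np.diag([1.0, 0.2, 0.2, 0.2])     # partial depolarizer: 20% polarization kept

def docp(S):
    """Degree of circular polarization, DOCP = V / I."""
    return S[3] / S[0]

S_mirror = M_mirror @ S_in               # [1, 0, 0, -1]: full flip, still polarized
S_diffuse = M_depol @ (M_mirror @ S_in)  # [1, 0, 0, -0.2]: flip plus depolarization
```

The sign of DOCP tracks the helicity flip, while its magnitude tracks depolarization; a full MC model accumulates such matrix products along every scattering path.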
28. Adelhardt P, Koziol JA, Langheld A, Schmidt KP. Monte Carlo Based Techniques for Quantum Magnets with Long-Range Interactions. Entropy (Basel) 2024;26:401. PMID: 38785650. PMCID: PMC11120707. DOI: 10.3390/e26050401.
Abstract
Long-range interactions are relevant for a large variety of quantum systems in quantum optics and condensed matter physics. In particular, the control of quantum-optical platforms promises deep insights into quantum-critical properties induced by the long-range nature of interactions. From a theoretical perspective, long-range interactions are notoriously complicated to treat. Here, we give an overview of recent advancements in investigating quantum magnets with long-range interactions, focusing on two techniques based on Monte Carlo integration. The first is the method of perturbative continuous unitary transformations, in which classical Monte Carlo integration is applied within the embedding scheme of white graphs. This linked-cluster expansion allows high-order series expansions of energies and observables to be extracted in the thermodynamic limit. The second is stochastic series expansion quantum Monte Carlo, which enables calculations on large finite systems. Finite-size scaling can then be used to determine the physical properties of the infinite system. In recent years, both techniques have been applied successfully to one- and two-dimensional quantum magnets involving long-range Ising, XY, and Heisenberg interactions on various bipartite and non-bipartite lattices. Here, we summarise the obtained quantum-critical properties, including critical exponents, for all these systems in a coherent way. Further, we review how long-range interactions are used to study quantum phase transitions above the upper critical dimension, and the scaling techniques used to extract these quantum-critical properties from the numerical calculations.
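Both techniques rest on plain Monte Carlo estimation. As a self-contained illustration (a deliberately low-dimensional stand-in for the paper's high-dimensional white-graph embedding sums), here is an MC estimate of a long-range coupling sum Σ_r r^(−α) on a chain:

```python
import math
import random

def mc_coupling_sum(alpha, n_sites, n_samples, seed=7):
    """Plain Monte Carlo estimate of the long-range coupling sum
    S(alpha) = sum_{r=1}^{n_sites} r**(-alpha) on a 1D chain.

    Sites r are drawn uniformly, so each sample carries the weight
    f(r) / p(r) = n_sites * r**(-alpha).
    """
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        r = rng.randint(1, n_sites)
        acc += n_sites * r ** (-alpha)
    return acc / n_samples

exact = sum(r ** -2.0 for r in range(1, 1001))  # approaches pi^2/6 for long chains
estimate = mc_coupling_sum(2.0, 1000, 500_000)
```

Uniform sampling is deliberately naive here; in practice, importance sampling of the power-law tail is what keeps the variance manageable for small decay exponents α.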
29. Praveen Kumar C, Aggarwal LM, Bhasi S, Sharma N. A Monte Carlo simulation-based decision support system for radiation oncologists in the treatment of glioblastoma multiforme. Radiat Environ Biophys 2024;63:215-262. PMID: 38664268. DOI: 10.1007/s00411-024-01065-4.
Abstract
In the present research, we have developed a model-based crisp-logic-function statistical classifier decision support system, supplemented with treatment planning systems, for radiation oncologists in the treatment of glioblastoma multiforme (GBM). The system is based on Monte Carlo radiation transport simulation and recreates visualizations of treatment environments on mathematical anthropomorphic brain (MAB) phantoms. Energy deposition within tumour tissue and normal tissues is graded by quality audit factors, which ensure planned dose delivery to the tumour site while minimising damage to healthy tissues. The proposed methodology predicts tumour growth response to radiation therapy from a patient-specific quality audit perspective. The study was validated by recreating thirty-eight patient-specific MAB phantoms of treatment environments, taking into consideration the density variation and composition of brain tissues. Dose computation with water phantoms or tissue-equivalent head phantoms is neither cost-effective nor patient-specific, and is often less accurate. These drawbacks can be overcome by using the open-source Electron Gamma Shower (EGSnrc) software and clinical case reports for MAB phantom synthesis, which results in accurate dosimetry with due consideration of time factors. Considerable dose deviations occur at the tumour site for environments with intraventricular glioblastoma, haematoma, abscess, trapped air, or cranial flaps, leading to quality factors with a lower logic value of 0. A logic value of 1 indicates higher dose deposition within healthy tissues, including the leptomeninges, for the majority of environments, which results in radiation-induced laceration.
30. Atmaca Ö, Liu J, Ly TJ, Bajraktari F, Pott PP. Spatial sensitivity distribution assessment and Monte Carlo simulations for needle-based bioimpedance imaging during venipuncture using the finite element method. Int J Numer Method Biomed Eng 2024:e3831. PMID: 38690649. DOI: 10.1002/cnm.3831.
Abstract
Despite being among the most common medical procedures, needle insertions suffer from a high error rate. Impedance measurements using electrode-equipped needles offer promise for improved tissue targeting and reduced errors. Impedance visualization usually requires an extensive pre-measured impedance dataset for tissue differentiation, as well as knowledge of the electric fields contributing to the resulting impedances. This work presents finite element simulation approaches to both problems. The first approach generates a multitude of impedances with Monte Carlo simulations for both homogeneous and inhomogeneous tissue, circumventing the need to rely on previously measured data; these datasets could be used for tissue discrimination. The second approach simulates the spatial sensitivity distribution of an electrode layout. Two singularity analysis methods were employed to determine the bulk of the sensitivity within a finite volume, which in turn enables consistent 3D visualization. The modeled electrode layout consists of 12 electrodes placed radially around a hypodermic needle. Electrical excitation was simulated using two neighboring electrodes each for current carriage and voltage pickup, resulting in 12 distinct bipolar excitation states. Both the impedance simulations and the respective singularity analysis methods were compared with each other. The results show that the statistical spread of impedances depends strongly on the tissue type and its inhomogeneities. The bounded bulks of sensitivity from both methods are of similar extent and symmetry. Future models should incorporate more detailed tissue properties, such as anisotropy or material changes due to tissue deformation, to obtain more accurate predictions.
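The spatial sensitivity such simulations map is conventionally the Geselowitz lead-field product; a minimal sketch under that assumption (array names and the two-voxel example are illustrative, not taken from the paper):

```python
import numpy as np

def sensitivity_field(J_cc, J_pu, I=1.0):
    """Geselowitz lead-field sensitivity s = (J_cc . J_pu) / I^2 per voxel.

    J_cc: current density of the current-carrying electrode pair,
    J_pu: lead field of the voltage pickup pair; both shaped (n_voxels, 3).
    Negative values mark voxels contributing negatively to the measured impedance.
    """
    return np.einsum("ij,ij->i", J_cc, J_pu) / I**2

def impedance_contribution(rho, s, voxel_volume):
    """Z = sum_v rho_v * s_v * dV: resistivity-weighted sensitivity integral."""
    return float(np.sum(rho * s) * voxel_volume)

# Two-voxel toy example: parallel fields add, antiparallel fields subtract.
J_cc = np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
J_pu = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
s = sensitivity_field(J_cc, J_pu)                         # [1.0, -1.0]
Z = impedance_contribution(np.array([2.0, 1.0]), s, 1.0)  # 2*1 + 1*(-1) = 1.0
```

Visualizing the bounded bulk of |s| over a voxel grid is then a thresholding step on this field.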
31. Quetin S, Bahoric B, Maleki F, Enger SA. Deep learning for high-resolution dose prediction in high dose rate brachytherapy for breast cancer treatment. Phys Med Biol 2024;69:105011. PMID: 38604185. DOI: 10.1088/1361-6560/ad3dbd.
Abstract
Objective. Monte Carlo (MC) simulations are the benchmark for accurate radiotherapy dose calculations, notably in patient-specific high dose rate brachytherapy (HDR BT), in cases where considering tissue heterogeneities is critical. However, the lengthy computational time limits the practical application of MC simulations. Prior research used deep learning (DL) for dose prediction as an alternative to MC simulations. While dose predictions as accurate as MC were attained, graphics processing unit limitations constrained these predictions to large voxels of 3 mm × 3 mm × 3 mm. This study aimed to enable dose predictions as accurate as MC simulations in 1 mm × 1 mm × 1 mm voxels within a clinically acceptable timeframe. Approach. Computed tomography scans of 98 breast cancer patients treated with Iridium-192-based HDR BT were used: 70 for training, 14 for validation, and 14 for testing. A new cropping strategy based on the distance to the seed was devised to reduce the volume size, enabling efficient training of 3D DL models using 1 mm × 1 mm × 1 mm dose grids. Additionally, novel DL architectures with layer-level fusion were proposed to predict MC simulated dose to medium-in-medium (Dm,m). These architectures fuse information from TG-43 dose to water-in-water (Dw,w) with patient tissue composition at the layer level. Different inputs describing patient body composition were investigated. Main results. The proposed approach demonstrated state-of-the-art performance, on par with the MC Dm,m maps, but 300 times faster.
The mean absolute percent error for dosimetric indices between the MC and DL-predicted complete treatment plans was 0.17% ± 0.15% for the planning target volume V100, 0.30% ± 0.32% for the skin D2cc, 0.82% ± 0.79% for the lung D2cc, 0.34% ± 0.29% for the chest wall D2cc, and 1.08% ± 0.98% for the heart D2cc. Significance. Unlike the time-consuming MC simulations, the proposed strategy efficiently converts TG-43 Dw,w maps into precise Dm,m maps at high resolution, enabling clinical integration.
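The dosimetric indices compared above can be computed directly from voxelized dose arrays; a hedged illustration of V100, D2cc, and the mean absolute percent error (the voxel-counting approximation of volume is an assumption, not the paper's exact evaluation):

```python
import numpy as np

def v100(dose, prescription):
    """V100 (%): fraction of structure voxels receiving at least the prescription."""
    return float(np.mean(dose >= prescription) * 100.0)

def d2cc(dose, voxel_volume_cc):
    """D2cc: minimum dose within the hottest 2 cm^3 of the structure."""
    n = max(1, int(round(2.0 / voxel_volume_cc)))  # number of voxels in 2 cc
    return float(np.sort(dose.ravel())[-n:].min())

def mape(reference, predicted):
    """Mean absolute percent error between two sets of dosimetric indices."""
    ref, pred = np.asarray(reference, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(pred - ref) / ref) * 100.0)
```

Comparing MC and DL plans then reduces to evaluating each index on both dose grids and feeding the pairs to `mape`.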
32. Rogers DWO. Minimum phantom size for megavoltage photon beam reference dosimetry. Med Phys 2024. PMID: 38669481. DOI: 10.1002/mp.17099.
Abstract
BACKGROUND Water phantoms are required to perform reference dosimetry and beam quality measurements, but there are no published studies on the size requirements for such phantoms. PURPOSE To investigate, using Monte Carlo techniques, the size requirements for water phantoms used in reference dosimetry and/or to measure the beam quality specifiers %dd(10)x and TPR20,10. METHODS The EGSnrc application DOSXYZnrc is used to calculate D(10), the dose per incident fluence at 10 cm depth in a water phantom irradiated by incident 10 × 10 cm² beams of 60Co or 6 MV photons. The water phantom dimensions are varied from 30 × 30 × 40 cm³ to 15 × 15 × 22 cm³ and occasionally smaller. The %dd(10)x and TPR20,10 values are also calculated, with care taken to distinguish TPR20,10 results obtained using Method A (changing the depth of water in the phantom) from those using Method B (moving the entire phantom). Typical statistical uncertainties are 0.03%. RESULTS Phantom dimensions have only minor effects for phantoms larger than 20 × 20 × 25 cm³. A table of corrections to the dose at 10 cm depth in 10 × 10 cm² beams of 60Co or 6 MV photons is provided; the corrections range from none to 0.75% for a 60Co beam incident on a 20 × 20 × 15 cm³ phantom. There can be distinct differences between the TPR20,10 values measured using Method A and Method B, especially for smaller phantoms.
It is explicitly demonstrated that, within ±0.15%, TPR20,10 values for a 30 × 30 × 30 cm³ phantom measured using Method A or B are independent of source-detector distance between 40 and 200 cm. CONCLUSIONS The phantom sizes recommended in the TG-51 and IAEA TRS-398 reference dosimetry protocols are adequate for accurate reference dosimetry and in some cases are even conservative. Correction factors, provided here, are necessary for accurate measurement of the dose at 10 cm depth in smaller phantoms. Very accurate beam quality specifiers are not required for reference dosimetry itself, but for specifying beam stability and characteristics it is important to report phantom sizes as well as the method used for TPR20,10 measurements.
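The two beam quality specifiers are simple ratios of depth-dose data; a minimal sketch (the depth-dose samples are invented for illustration and are not the paper's simulated values):

```python
import numpy as np

def percent_dd(depths_cm, doses, d):
    """Percentage depth dose at depth d, normalized to the curve maximum."""
    return 100.0 * float(np.interp(d, depths_cm, doses)) / float(np.max(doses))

def tpr_20_10(dose_20, dose_10):
    """TPR20,10: ratio of doses at 20 cm and 10 cm depth at fixed
    source-detector distance (a Method B-style measurement)."""
    return dose_20 / dose_10

# Hypothetical megavoltage-like depth-dose samples (not the paper's data):
depths = np.array([0.0, 1.5, 10.0, 20.0])
doses = np.array([50.0, 100.0, 67.0, 38.0])
pdd10 = percent_dd(depths, doses, 10.0)  # 67.0
```

For %dd(10)x specifically, the electron-contamination-free photon component is required, which is why the subscript x appears in the protocols.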
33. Ramos-Mendez JA, Ortiz R, Schuemann J, Paganetti H, Faddegon BA. TOPAS simulation of photoneutrons in radiotherapy: accuracy and speed with variance reduction. Phys Med Biol 2024. PMID: 38657630. DOI: 10.1088/1361-6560/ad4303.
Abstract
We provide optimal particle split numbers for speeding up TOPAS Monte Carlo simulations of linear accelerator (linac) treatment heads while maintaining accuracy. In addition, we provide a new TOPAS physics module for simulating photoneutron production and transport.
A TOPAS simulation of a Siemens Oncor linac was used to determine the optimal number of splits for directional bremsstrahlung splitting as a function of field size for 6 MV and 18 MV x-ray beams. The linac simulation was validated against published lateral dose profiles and percentage depth-dose (PDD) curves for the largest square field (40 cm side). In separate simulations, neutron particle splitting and the custom TOPAS physics module, called "TsPhotoNeutron", were used to generate and transport photoneutrons. Accuracy was verified by comparing simulations with published measurements of (1) neutron yields as a function of beam energy for thick targets of Al, Cu, Ta, W, Pb, and concrete; and (2) the photoneutron energy spectrum 40 cm laterally from the isocenter of the linac for an 18 MV beam with closed jaws and MLC.
The optimal number of splits for directional bremsstrahlung splitting enhanced computational efficiency by two orders of magnitude. The efficiency gain decreased with increasing beam energy and field size. Calculated lateral profiles in the central region agreed with measured data within 1 mm/2%, and PDD curves within 1 mm/1%. For the TOPAS physics module, at a split number of 146, the efficiency of computing photoneutron yields was enhanced by a factor of 27.6, while accuracy was improved over existing Geant4 physics modules.
This work provides simulation parameters and a new TOPAS physics module to improve the efficiency and accuracy of TOPAS simulations involving photonuclear processes in high-Z materials found in linac components, patient devices, and treatment rooms, as well as to explore new therapeutic modalities such as very-high-energy electron therapy.
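The variance-reduction bookkeeping behind these gains is compact: Monte Carlo efficiency is conventionally ε = 1/(s²T), and splitting preserves the expectation by dividing statistical weight. A hedged sketch with invented numbers (only the split count 146 echoes the abstract):

```python
def efficiency(variance, cpu_time):
    """Monte Carlo efficiency eps = 1 / (s^2 * T): a variance reduction only
    pays off if it outweighs the extra time spent per history."""
    return 1.0 / (variance * cpu_time)

def split(weight, n_split):
    """Particle splitting: one particle of statistical weight w becomes
    n_split copies of weight w / n_split, keeping the estimator unbiased."""
    return [weight / n_split] * n_split

# Hypothetical numbers: splitting quarters the variance but doubles the runtime,
# for a net efficiency gain of 2x.
gain = efficiency(0.25, 2.0) / efficiency(1.0, 1.0)
daughters = split(1.0, 146)
```

Sweeping the split number and measuring (s², T) at each value is how an optimal setting like the reported 146 is found.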
34. Kuhlmann ML, Pojtinger S. Implementation of a new EGSnrc particle source class for computed tomography: validation and uncertainty quantification. Phys Med Biol 2024;69:095021. PMID: 38537305. DOI: 10.1088/1361-6560/ad3886.
Abstract
Objective. Personalized dose monitoring and risk management are of increasing significance with the growing number of computed tomography (CT) examinations. These require high-quality Monte Carlo (MC) simulations, which are of the utmost importance for new developments in personalized CT dosimetry. This work extends the MC framework EGSnrc with a new particle source that enables CT-scanner-specific dose and image calculations for any CT scanner. The novel method can be used with all modern EGSnrc user codes, particularly for the simulation of effective dose based on DICOM images and for the calculation of CT images. Approach. The new particle source takes user-derived input data, which can be generated with a previously developed method for the experimental characterization of any CT scanner (doi.org/10.1016/j.ejmp.2015.09.006). The source was benchmarked by air kerma measurements with an ionization chamber at a clinical CT scanner: the simulated angular distribution and attenuation characteristics were compared to measurements to verify the source output free in air. In a second validation step, simulations of air kerma in a homogeneous cylindrical phantom and an anthropomorphic thorax phantom were performed and validated against experimentally determined results. A detailed uncertainty evaluation of the simulated air kerma values was developed. Main results. We successfully implemented a new particle source class for the simulation of realistic CT scans, adaptable to any CT scanner. For the attenuation characteristics, the maximal deviation between measurement and simulation was 6.86%; the mean deviation over all tube voltages was 2.36% (σ = 1.6%). For the phantom measurements and simulations, all values agreed within 5.0%. The uncertainty evaluation yielded an uncertainty of 5.5% (k = 1).
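A k = 1 combined standard uncertainty of this kind is conventionally obtained by adding independent components in quadrature; a minimal sketch (the component values are hypothetical, not the paper's budget):

```python
import math

def combined_standard_uncertainty(components):
    """GUM-style combination of independent standard (k = 1) uncertainty
    components, in percent: u_c = sqrt(sum_i u_i^2)."""
    return math.sqrt(sum(u * u for u in components))

# Hypothetical component budget (illustrative values only):
u_total = combined_standard_uncertainty([4.0, 3.0, 2.0])  # sqrt(29) ~ 5.4%
```

Expanded uncertainties at other coverage levels follow by scaling, e.g. k = 2 doubles u_total.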
35. Struelens L, Huet C, Broggio D, Dabin J, Desorgher L, Giussani A, Li WB, Nosske D, Lee YK, Cunha L, Carapinha MJR, Medvedec M, Covens P. Joint EURADOS-EANM initiative for an advanced computational framework for the assessment of external dose rates from nuclear medicine patients. EJNMMI Phys 2024;11:38. PMID: 38647987. PMCID: PMC11035505. DOI: 10.1186/s40658-024-00638-y.
Abstract
BACKGROUND To ensure adequate radiation protection of critical groups such as staff, caregivers, and the general public coming into proximity of nuclear medicine (NM) patients, it is necessary to consider the impact of the radiation emitted by the patients during their stay at the hospital and after leaving it. Current risk assessments are based on ambient dose rate measurements at a single position a specified distance from the patient, carried out at several time points after administration of the radiopharmaceutical to estimate whole-body retention. This study addresses the limitations of that approach by developing and validating a more advanced computational dosimetry approach that combines Monte Carlo (MC) simulations with flexible, realistic computational phantoms and time-activity curves from reference biokinetic models. RESULTS Measurements of the ambient dose equivalent rate Ḣ*(10) at 1 m from the NM patient were successfully compared against MC simulations with five different codes using the ICRP adult reference computational voxel phantoms, for typical clinical procedures with 99mTc-HDP/MDP, 18F-FDG, and Na131I. All measurement data fall within the 95% confidence intervals determined for the average simulated results. Moreover, the different MC codes (MCNP-X, PHITS, GATE, GEANT4, TRIPOLI-4®) were compared for a more realistic scenario in which the effective dose rate Ė of an exposed individual was determined at positions facing and beside the patient model at 30 cm, 50 cm, and 100 cm. The variation between codes was lower than 8% for all radiopharmaceuticals at 1 m, and ranged from 5 to 16% for the face-to-face and side-by-side configurations at 30 cm and 50 cm.
A sensitivity study on the influence of patient model morphology demonstrated that the relative standard deviation of Ḣ*(10) at 1 m across the included patient models remained under 16% for time points up to 120 min post administration. CONCLUSIONS The validated computational approach will be used to evaluate effective dose rates per unit administered activity for a variety of close-contact configurations and a range of radiopharmaceuticals as part of risk assessment studies. Together with appropriate dose constraints, this will facilitate the setting of release criteria and patient restrictions.
36. Hill MA, Staut N, Thompson JM, Verhaegen F. Dosimetric validation of SmART-RAD Monte Carlo modelling for x-ray cabinet radiobiology irradiators. Phys Med Biol 2024;69:095014. PMID: 38518380. PMCID: PMC11031639. DOI: 10.1088/1361-6560/ad3720.
Abstract
Objective. Accuracy and reproducibility in the measurement of radiation dose and associated reporting are critically important for the validity of basic and preclinical radiobiological studies performed with kilovolt x-ray radiation cabinets. This is essential to enable results of radiobiological studies to be repeated, as well as to enable valid comparisons between laboratories. In addition, the commonly used single-point dose value hides the 3D dose heterogeneity across the irradiated sample. This is particularly true for preclinical rodent models, and is generally difficult to measure directly. Radiation transport simulations integrated in an easy-to-use application could help researchers improve the quality of dosimetry and reporting. Approach. This paper describes the use and dosimetric validation of a newly developed Monte Carlo (MC) tool, SmART-RAD, to simulate the x-ray field in a range of standard commercial x-ray cabinet irradiators used for preclinical irradiations. Comparisons are made between simulated and experimentally determined dose distributions for a range of configurations to assess the potential use of this tool in determining dose distributions through samples, based on more readily available air-kerma calibration point measurements. Main results. Simulations gave very good dosimetric agreement with measured depth-dose distributions in phantoms containing both water- and bone-equivalent materials. Good spatial and dosimetric agreement between simulated and measured dose distributions was obtained when using beam-shaping shielding. Significance. The MC simulations provided by SmART-RAD offer a useful tool to go from a limited number of dosimetry measurements to detailed 3D dose distributions through a non-homogeneous irradiated sample. This is particularly important when trying to determine the dose distribution in more complex geometries. The use of such a tool can improve reproducibility and dosimetry reporting in preclinical radiobiological research.
37. Verfaillie G, Rutten J, D'Asseler Y, Bacher K. Accuracy of patient-specific CT organ doses from Monte Carlo simulations: influence of CT-based voxel models. Phys Eng Sci Med 2024. PMID: 38634980. DOI: 10.1007/s13246-024-01422-z.
Abstract
Monte Carlo simulations using patient CT images as input are the gold standard for patient-specific dosimetry. However, in standard clinical practice, the patient's CT images are limited to the reconstructed CT scan range. In this study, organ dose calculations were performed with ImpactMC for chest and cardiac CT using whole-body and anatomy-specific voxel models, to estimate the accuracy of CT organ doses based on the latter model. When the 3D patient model is limited to the CT scan range, CT organ doses from Monte Carlo simulations are most accurate for organs entirely within the field of view; for these organs, only the dose contribution from scatter in the rest of the body is missing. For organs lying partially outside the field of view, organ doses are overestimated because the non-irradiated tissue mass is not accounted for. This overestimation depends strongly on the fraction of the organ volume located outside the field of view. To obtain a more accurate dose estimate for these organs, the ICRP reference organ masses and densities could provide a solution: except for the breast, good dose agreement was found for most organs. Voxel models generated from clinical CT examinations do not include the overscan in the z-direction; the availability of whole-body voxel models allowed this influence to be studied as well. As expected, overscan induces slightly higher organ doses.
38. Cowburn J, Serrancolí G, Colyer S, Cazzola D. Optimal fibre length and maximum isometric force are the most influential parameters when modelling muscular adaptations to unloading using Hill-type muscle models. Front Physiol 2024;15:1347089. PMID: 38694205. PMCID: PMC11061504. DOI: 10.3389/fphys.2024.1347089.
Abstract
Introduction: Spaceflight is associated with severe muscular adaptations with substantial inter-individual variability. A Hill-type muscle model is a common way to replicate muscle physiology in musculoskeletal simulations, but little is known about how its underlying parameters should be adjusted to model adaptations to unloading. The aim of this study was to determine how Hill-type muscle model parameters should be adjusted to model disuse muscular adaptations. Methods: Isokinetic dynamometer data were taken from a bed rest campaign and used to perform tracking simulations at two knee-extension angular velocities (30°·s⁻¹ and 180°·s⁻¹). The activation and contraction dynamics were solved using an optimal control approach and direct collocation. A Monte Carlo sampling technique was used to perturb muscle model parameters within physiological boundaries, creating a range of theoretically feasible parameter sets to model muscle adaptations. Results: Optimal fibre length could not be shortened by more than 67% and 61% for the knee flexors and non-knee muscles, respectively. Discussion: The Hill-type muscle model successfully replicated muscular adaptations due to unloading and recreated salient features of muscle behaviour associated with spaceflight, such as altered force-length behaviour. Future researchers should carefully adjust the optimal fibre lengths of their muscle models when modelling adaptations to unloading, particularly for muscles that primarily operate on the ascending and descending limbs of the force-length relationship.
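The Monte Carlo perturbation step can be sketched as uniform sampling of scale factors within bounds. The bounds below are illustrative assumptions (only the ~67% shortening cap echoes the abstract), not the study's calibrated ranges:

```python
import random

# Assumed physiological bounds for unloading, as multiplicative scale factors
# applied to baseline parameter values (illustrative, not the study's data):
BOUNDS = {
    "max_isometric_force": (0.5, 1.0),    # strength can only decrease with disuse
    "optimal_fibre_length": (0.33, 1.0),  # shortening by more than ~67% infeasible
}

def sample_parameter_set(baseline, rng):
    """Draw one perturbed Hill-model parameter set within the bounds."""
    return {k: baseline[k] * rng.uniform(*BOUNDS[k]) for k in baseline}

rng = random.Random(42)
baseline = {"max_isometric_force": 1000.0, "optimal_fibre_length": 0.10}
samples = [sample_parameter_set(baseline, rng) for _ in range(1000)]
```

Each sampled set would then be fed to the tracking simulation, and sets that fail to reproduce the dynamometer torques are rejected, carving out the feasible region reported in the results.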
|
39
|
Al-Hamadani MNA, Fadhel MA, Alzubaidi L, Balazs H. Reinforcement Learning Algorithms and Applications in Healthcare and Robotics: A Comprehensive and Systematic Review. SENSORS (BASEL, SWITZERLAND) 2024; 24:2461. [PMID: 38676080 PMCID: PMC11053800 DOI: 10.3390/s24082461] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/01/2024] [Revised: 04/04/2024] [Accepted: 04/08/2024] [Indexed: 04/28/2024]
Abstract
Reinforcement learning (RL) has emerged as a dynamic and transformative paradigm in artificial intelligence, offering the promise of intelligent decision-making in complex and dynamic environments. This unique feature enables RL to address sequential decision-making problems with simultaneous sampling, evaluation, and feedback. As a result, RL techniques have become suitable candidates for developing powerful solutions in various domains. In this study, we present a comprehensive and systematic review of RL algorithms and applications. This review commences with an exploration of the foundations of RL and proceeds to examine each algorithm in detail, concluding with a comparative analysis of RL algorithms based on several criteria. This review then extends to two key applications of RL: robotics and healthcare. In robotics manipulation, RL enhances precision and adaptability in tasks such as object grasping and autonomous learning. In healthcare, this review turns its focus to the realm of cell growth problems, clarifying how RL has provided a data-driven approach for optimizing the growth of cell cultures and the development of therapeutic solutions. This review offers a comprehensive overview, shedding light on the evolving landscape of RL and its potential in two diverse yet interconnected fields.
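The sequential decision-making loop this review surveys can be illustrated with minimal tabular Q-learning on a toy one-dimensional corridor. The environment and hyper-parameters are illustrative, not taken from the review:

```python
import random

# Tabular Q-learning on a 1-D corridor: start at state 0, reward at state 5.
N = 6                                  # states 0..5; state 5 is terminal
ACTIONS = (1, -1)                      # move right or left
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = random.Random(0)

for _ in range(500):                   # episodes
    s = 0
    while s != N - 1:
        if rng.random() < eps:         # epsilon-greedy exploration
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0
        target = r + gamma * max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])   # temporal-difference update
        s = s2

# Greedy policy after training: move right in every non-terminal state.
greedy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)]
```

The same sample-evaluate-update loop, scaled up with function approximation, underlies the robotics and healthcare applications the review discusses.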
|
40
|
Lass M, Kenter T, Plessl C, Brehm M. Characterizing Microheterogeneity in Liquid Mixtures via Local Density Fluctuations. ENTROPY (BASEL, SWITZERLAND) 2024; 26:322. [PMID: 38667876 PMCID: PMC11049288 DOI: 10.3390/e26040322] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/11/2024] [Revised: 04/04/2024] [Accepted: 04/05/2024] [Indexed: 04/28/2024]
Abstract
We present a novel approach to characterize and quantify microheterogeneity and microphase separation in computer simulations of complex liquid mixtures. Our post-processing method is based on local density fluctuations of the different constituents in sampling spheres of varying size. It can be easily applied to both molecular dynamics (MD) and Monte Carlo (MC) simulations, including periodic boundary conditions. Multidimensional correlation of the density distributions yields a clear picture of the domain formation due to the subtle balance of different interactions. We apply our approach to the example of force field molecular dynamics simulations of imidazolium-based ionic liquids with different side chain lengths at different temperatures, namely 1-ethyl-3-methylimidazolium chloride, 1-hexyl-3-methylimidazolium chloride, and 1-decyl-3-methylimidazolium chloride, which are known to form distinct liquid domains. We put the results into the context of existing microheterogeneity analyses and demonstrate the advantages and sensitivity of our novel method. Furthermore, we show how to estimate the configuration entropy from our analysis, and we investigate voids in the system. The analysis has been implemented into our program package TRAVIS and is thus available as free software.
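The core of the local-density-fluctuation idea can be sketched as follows: drop random sampling spheres into a periodic box, count particles of one constituent inside each sphere, and examine the spread of the resulting local densities. Box size, particle count, and radius are illustrative; TRAVIS implements the full multidimensional correlation analysis:

```python
import math, random

L = 10.0                                    # cubic box edge length (illustrative)
rng = random.Random(42)
particles = [tuple(rng.uniform(0, L) for _ in range(3)) for _ in range(500)]

def local_density(center, radius):
    """Number density inside a sampling sphere under periodic boundaries."""
    n = 0
    for p in particles:
        d2 = 0.0
        for c, x in zip(center, p):
            dx = x - c
            dx -= L * round(dx / L)         # minimum-image convention
            d2 += dx * dx
        if d2 <= radius * radius:
            n += 1
    return n / (4.0 / 3.0 * math.pi * radius ** 3)

radius = 2.0
centers = [tuple(rng.uniform(0, L) for _ in range(3)) for _ in range(200)]
rhos = [local_density(c, radius) for c in centers]
mean = sum(rhos) / len(rhos)
var = sum((r - mean) ** 2 for r in rhos) / len(rhos)   # fluctuation measure
```

For a microheterogeneous mixture, repeating this per constituent and per sphere radius yields the density distributions whose correlation reveals domain formation; for the uniform random box above, the variance simply reflects counting statistics.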
|
41
|
Berumen F, Ouellet S, Enger S, Beaulieu L. Aleatoric and epistemic uncertainty extraction of patient-specific deep learning-based dose predictions in LDR prostate brachytherapy. Phys Med Biol 2024; 69:085026. [PMID: 38484398 DOI: 10.1088/1361-6560/ad3418] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2023] [Accepted: 03/14/2024] [Indexed: 04/10/2024]
Abstract
Objective. In brachytherapy, deep learning (DL) algorithms have shown the capability of predicting 3D dose volumes. The reliability and accuracy of such methodologies remain under scrutiny for prospective clinical applications. This study aims to establish fast DL-based predictive dose algorithms for low-dose rate (LDR) prostate brachytherapy and to evaluate their uncertainty and stability. Approach. Data from 200 prostate patients, treated with ¹²⁵I sources, was collected. The Monte Carlo (MC) ground truth dose volumes were calculated with TOPAS considering the interseed effects and an organ-based material assignment. Two 3D convolutional neural networks, UNet and ResUNet TSE, were trained using the patient geometry and the seed positions as the input data. The dataset was randomly split into training (150), validation (25) and test (25) sets. The aleatoric (associated with the input data) and epistemic (associated with the model) uncertainties of the DL models were assessed. Main results. For the full test set, with respect to the MC reference, the predicted prostate D90 metric had mean differences of -0.64% and 0.08% for the UNet and ResUNet TSE models, respectively. In voxel-by-voxel comparisons, the average global dose difference ratio in the [-1%, 1%] range included 91.0% and 93.0% of voxels for the UNet and the ResUNet TSE, respectively. One forward pass or prediction took 4 ms for a 3D dose volume of 2.56 M voxels (128 × 160 × 128). The ResUNet TSE model closely encoded the well-known physics of the problem as seen in a set of uncertainty maps. The ResUNet TSE rectum D2cc had the largest uncertainty metric of 0.0042. Significance. The proposed DL models serve as rapid dose predictors that consider the patient anatomy and interseed attenuation effects. The derived uncertainty is interpretable, highlighting areas where DL models may struggle to provide accurate estimations. The uncertainty analysis offers a comprehensive evaluation tool for dose predictor model assessment.
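Epistemic uncertainty of the kind extracted here is often estimated by running many stochastic forward passes and reading off the spread of the predictions. The toy two-layer network below, with dropout kept active at inference (Monte Carlo dropout), is an illustrative stand-in for the UNet/ResUNet dose predictors, not the authors' models:

```python
import random

rng = random.Random(0)
# Tiny fixed network: 8 inputs -> 16 hidden (ReLU) -> 1 output.
W1 = [[rng.gauss(0, 1) for _ in range(16)] for _ in range(8)]
W2 = [rng.gauss(0, 1) for _ in range(16)]

def forward(x, p_drop=0.5):
    h = [max(sum(x[i] * W1[i][j] for i in range(8)), 0.0) for j in range(16)]
    # Dropout stays ON at test time; inverted scaling preserves the mean.
    h = [v / (1 - p_drop) if rng.random() >= p_drop else 0.0 for v in h]
    return sum(v * w for v, w in zip(h, W2))

x = [rng.gauss(0, 1) for _ in range(8)]       # one "voxel" feature vector
T = 200
preds = [forward(x) for _ in range(T)]        # T stochastic forward passes
mean_pred = sum(preds) / T                    # predictive mean
var_pred = sum((p - mean_pred) ** 2 for p in preds) / T
epistemic = var_pred ** 0.5                   # spread = epistemic uncertainty
```

Applied per voxel of a dose volume, the same recipe produces the interpretable uncertainty maps the abstract describes.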
|
42
|
Insley BA, Bartkoski DA, Balter PA, Prajapati S, Tailor R, Jaffray D, Salehpour MR. Numerical optimization of longitudinal collimator geometry for novel x-ray field. Phys Med Biol 2024. [PMID: 38588671 DOI: 10.1088/1361-6560/ad3c0d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/10/2024]
Abstract
OBJECTIVE A novel X-ray field produced by an ultrathin conical target is described in the literature. However, the optimal design for an associated collimator remains ambiguous. Current optimization methods using Monte Carlo calculations restrict the efficiency and robustness of the design process. A more generic optimization method that reduces parameter constraints while minimizing computational load is necessary. A numerical method for optimizing the longitudinal collimator hole geometry for a cylindrically-symmetrical X-ray tube is demonstrated and compared to Monte Carlo calculations.
Approach: The X-ray phase space was modelled as a four-dimensional histogram differential in photon initial position, final position, and photon energy. The collimator was modelled as a stack of thin washers with varying inner radii. Simulated annealing was employed to optimize this set of inner radii according to various objective functions calculated on the photon flux at a specified plane.
Main results: The analytical transport model used for optimization was validated against Monte Carlo calculations using Geant4 via its wrapper, TOPAS. Optimized collimators and the resulting photon flux profiles are presented for three focal spot sizes and five positions of the source. Optimizations were performed with multiple objective functions based on various weightings of precision, intensity, and field flatness metrics. Finally, a select set of these optimized collimators, plus a parallel-hole collimator for comparison, were modelled in TOPAS. The evolution of the radiation field profiles is presented for various positions of the source for each collimator.
Significance: This novel optimization strategy proved consistent and robust across the range of X-ray tube settings regardless of the optimization starting point. Common collimator geometries were re-derived using this algorithm while simultaneously optimizing geometry-specific parameters. The advantages of this strategy over iterative Monte Carlo-based techniques, including computational efficiency, radiation source-specificity, and solution flexibility, make it a desirable optimization method for complex irradiation geometries.
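The simulated annealing step described in the approach — perturbing the set of washer inner radii and accepting or rejecting by the Metropolis rule under a cooling schedule — can be sketched as below. The real objective evaluates photon flux through the phase-space model; here a stand-in objective rewarding a smooth conical taper keeps the loop runnable, and all constants are illustrative:

```python
import math, random

N_WASHERS = 20
R_MIN, R_MAX = 0.5, 5.0            # allowed inner radii (mm, illustrative)

def objective(radii):
    """Lower is better: distance from an ideal linear (conical) taper."""
    target = [R_MIN + (R_MAX - R_MIN) * i / (N_WASHERS - 1)
              for i in range(N_WASHERS)]
    return sum((r - t) ** 2 for r, t in zip(radii, target))

rng = random.Random(1)
radii = [rng.uniform(R_MIN, R_MAX) for _ in range(N_WASHERS)]
cost = objective(radii)
T = 1.0
for step in range(20000):
    i = rng.randrange(N_WASHERS)
    trial = radii[:]
    trial[i] = min(R_MAX, max(R_MIN, trial[i] + rng.gauss(0.0, 0.2)))
    c = objective(trial)
    # Metropolis acceptance: always take improvements, sometimes take worse.
    if c < cost or rng.random() < math.exp((cost - c) / T):
        radii, cost = trial, c
    T *= 0.9995                     # geometric cooling schedule
```

Swapping in a flux-based objective over the four-dimensional phase-space histogram recovers the paper's scheme without changing the loop.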
|
43
|
Zeng X, Zhang Z, Li D, Huang X, Wang Z, Wang Y, Zhou W, Wang P, Zhu M, Wei Q, Gong H, Wei L. Evaluation of monolithic crystal detector with dual-ended readout utilizing multiplexing method. Phys Med Biol 2024; 69:085003. [PMID: 38484392 DOI: 10.1088/1361-6560/ad3417] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Accepted: 03/14/2024] [Indexed: 04/04/2024]
Abstract
Objective. Monolithic crystal detectors are increasingly being applied in positron emission tomography (PET) devices owing to their excellent depth-of-interaction (DOI) resolution capabilities and high detection efficiency. In this study, we constructed and evaluated a dual-ended readout monolithic crystal detector based on a multiplexing method. Approach. We employed two 12 × 12 silicon photomultiplier (SiPM) arrays for readout, and the signals from the 12 × 12 array were merged into 12 X and 12 Y channels using channel multiplexing. In 2D reconstruction, three methods based on the centre of gravity (COG) were compared, and the concept of thresholds was introduced. Furthermore, a light convolutional neural network (CNN) was employed for testing. To enhance depth localization resolution, we proposed a method utilizing the mutual information from both ends of the SiPMs. The source width and collimation effect were simulated using GEANT4, and the intrinsic spatial resolution was separated from the measured values. Main results. At an operational voltage of 29 V for the SiPM, an energy resolution of approximately 12.5% was achieved. By subtracting a 0.8% threshold from the total energy in every channel, a 2D spatial resolution of approximately 0.90 mm full width at half maximum (FWHM) can be obtained. Furthermore, a higher level of resolution, approximately 0.80 mm FWHM, was achieved using a CNN, with some alleviation of edge effects. With the proposed DOI method, a 1.36 mm FWHM average DOI resolution can be achieved. Additionally, it was found that polishing and black coating on the crystal surface yielded smaller edge effects compared to a rough surface with a black coating. Significance. The introduction of a threshold in the COG method and a dual-ended readout scheme can lead to excellent spatial resolution for monolithic crystal detectors, which can help to develop PET systems with both high sensitivity and high spatial resolution.
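The thresholded centre-of-gravity step — subtract a fixed fraction of the total energy from every channel, clip negatives, then take the COG — can be sketched in a few lines. The channel values below are illustrative, not measured data:

```python
def cog_with_threshold(channels, frac=0.008):
    """COG of multiplexed channel energies after threshold subtraction.

    frac=0.008 mirrors the 0.8%-of-total-energy threshold in the abstract.
    """
    total = sum(channels)
    thresholded = [max(e - frac * total, 0.0) for e in channels]
    norm = sum(thresholded)
    return sum(i * e for i, e in enumerate(thresholded)) / norm

# A light spot centred between channels 5 and 6 on a broad background.
signal = [0.2] * 12
signal[5] += 3.0
signal[6] += 3.0
pos = cog_with_threshold(signal)   # channel-index position estimate
```

Subtracting the threshold suppresses the broad light-spread tails that otherwise bias a plain COG toward the detector centre.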
|
44
|
Blum I, Wong JS, Godino Padre K, Stolzenberg J, Fuchs H, Baumann KS, Poppe B, Looe HK. Fano cavity test and investigation of the response of the Roos chamber irradiated by proton beams in perpendicular magnetic fields up to 1 T. Phys Med Biol 2024; 69:085021. [PMID: 38452383 DOI: 10.1088/1361-6560/ad311a] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2023] [Accepted: 03/07/2024] [Indexed: 03/09/2024]
Abstract
Objective. The aim of this work is to investigate the response of the Roos chamber (type 34001) irradiated by clinical proton beams in magnetic fields. Approach. At first, a Fano test was implemented in the Monte Carlo software package GATE version 9.2 (based on Geant4 version 11.0.2) using a cylindrical slab geometry in a magnetic field up to 1 T. In accordance with an experimental setup (Fuchs et al 2021), the magnetic field correction factors kQB⃗ of the Roos chamber were determined at different energies up to 252 MeV and magnetic field strengths up to 1 T, by separately simulating the ratios of chamber signals MQ/MQB⃗, without and with magnetic field, and the dose-conversion factors Dw,QB⃗/Dw,Q in a small cylinder of water, with and without magnetic field. Additionally, detailed simulations were carried out to understand the observed magnetic field dependence. Main results. The Fano test was passed with deviations smaller than 0.25% between 0 and 1 T. The ratios of the chamber signals show both energy and magnetic field dependence. The maximum deviation of the dose-conversion factors from unity of 0.22% was observed at the lowest investigated proton energy of 97.4 MeV and B⃗ = 1 T. The resulting kQB⃗ factors increase initially with the applied magnetic field and decrease again after reaching a maximum at around 0.5 T, except for the lowest 97.4 MeV beam, which shows no observable magnetic field dependence. The deviation of the factors from unity is also larger for higher proton energies, with maxima of 1.0035(5), 1.0054(7) and 1.0069(7) for initial energies of E0 = 152, 223.4 and 252 MeV, respectively. Significance. Detailed Monte Carlo studies showed that the observed effect can be mainly attributed to the differences in the transport of electrons produced both outside and inside of the air cavity in the presence of a magnetic field.
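Reading the approach, the correction factor combines the two separately simulated ratios, which suggests the worked form k_QB⃗ = (MQ/MQB⃗) · (Dw,QB⃗/Dw,Q). A minimal sketch, with numeric inputs that are illustrative placeholders rather than the paper's values:

```python
def k_QB(M_Q, M_QB, Dw_QB, Dw_Q):
    """Signal ratio (no field / field) times dose ratio (field / no field)."""
    return (M_Q / M_QB) * (Dw_QB / Dw_Q)

# Illustrative inputs: a slight signal decrease and dose increase in field.
k = k_QB(M_Q=1.000, M_QB=0.9968, Dw_QB=1.0022, Dw_Q=1.000)
```

A value slightly above unity, as sketched here, matches the sign of the reported maxima (1.0035-1.0069) at the higher beam energies.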
|
45
|
Zahari R, Cox J, Obara B. Uncertainty-aware image classification on 3D CT lung. Comput Biol Med 2024; 172:108324. [PMID: 38508053 DOI: 10.1016/j.compbiomed.2024.108324] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Revised: 03/06/2024] [Accepted: 03/14/2024] [Indexed: 03/22/2024]
Abstract
Early detection of lung cancer is crucial to prolong patient survival. Existing model architectures used in such systems have shown promising results. However, they lack reliability and robustness in their predictions, and the models are typically evaluated on a single dataset, making them overconfident when a new class is present. Where uncertainty exists, uncertain images can be referred to medical experts for a second opinion. Thus, we propose an uncertainty-aware framework that includes three phases: data preprocessing and model selection and evaluation, uncertainty quantification (UQ), and uncertainty measurement and data referral for the classification of benign and malignant nodules using 3D CT images. To quantify the uncertainty, we employed three approaches: Monte Carlo Dropout (MCD), Deep Ensemble (DE), and Ensemble Monte Carlo Dropout (EMCD). We evaluated eight different deep learning models consisting of ResNet, DenseNet, and the Inception network family, all of which achieved average F1 scores above 0.832; the highest average value of 0.845 was obtained using InceptionResNetV2. Furthermore, incorporating the UQ demonstrated significant improvement in the overall model performance. Upon evaluation of the uncertainty estimate, MCD outperforms the other UQ models except for the URecall metric, where DE and EMCD excel, implying that they are better at identifying incorrect predictions with higher uncertainty levels, which is vital in the medical field. Finally, we show that using a threshold for data referral can greatly improve the performance further, increasing the accuracy up to 0.959.
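The referral step at the end works like this: predictions whose uncertainty exceeds a threshold are handed to an expert, and accuracy is recomputed on the retained cases. A minimal sketch with toy records (predicted label, true label, uncertainty) chosen so that, as the abstract notes, incorrect predictions carry higher uncertainty:

```python
def accuracy_after_referral(records, threshold):
    """records: (predicted, true, uncertainty) triples; refer cases above threshold."""
    kept = [(p, t) for p, t, u in records if u <= threshold]
    if not kept:
        return None, 0
    acc = sum(p == t for p, t in kept) / len(kept)
    return acc, len(kept)

# Toy data: the two wrong predictions have the highest uncertainty.
records = [(1, 1, 0.05), (0, 0, 0.10), (1, 0, 0.80),
           (1, 1, 0.20), (0, 1, 0.90), (0, 0, 0.15)]
acc_all, n_all = accuracy_after_referral(records, threshold=1.0)   # keep all
acc_ref, n_ref = accuracy_after_referral(records, threshold=0.5)   # refer 2
```

Tightening the threshold trades coverage for accuracy, which is the mechanism behind the reported jump to 0.959.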
|
46
|
Day JA, Tanguay J. Monte-Carlo study of contrast-enhanced spectral mammography with cadmium telluride photon-counting x-ray detectors. Med Phys 2024; 51:2479-2498. [PMID: 37967277 DOI: 10.1002/mp.16837] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2023] [Revised: 09/09/2023] [Accepted: 10/30/2023] [Indexed: 11/17/2023] Open
Abstract
BACKGROUND Contrast-enhanced spectral mammography (CESM) with photon-counting x-ray detectors (PCDs) can be used to improve the classification of breast cancers as benign or malignant. Commercially-available PCD-based mammography systems use silicon-based PCDs. Cadmium-telluride (CdTe) PCDs may provide a practical advantage over silicon-based PCDs because they can be implemented as large-area detectors that are more easily adaptable to existing mammography systems. PURPOSE The purpose of this work is to optimize CESM implemented with CdTe PCDs and to investigate the influence of the number of energy bins, electronic noise level, pixel size, and anode material on image quality. METHODS We developed a Monte Carlo model of the energy-bin-dependent modulation transfer functions (MTFs) and noise power spectra, including spatioenergetic noise correlations. We validated model predictions using a CdTe PCD with analog charge summing for charge-sharing suppression. Using the ideal-observer detectability, we optimized CESM for the task of detecting a 7-mm-diameter iodine nodule embedded in a breast with 50% glandularity. We optimized the tube voltage, beam filtration, and the location of energy thresholds for 50 and 100 μm pixels, tungsten and molybdenum anodes, and two electronic noise levels. One of the electronic noise levels was that of the experimental system; the other was half that of the experimental system. Optimization was performed for CdTe PCDs with two or three energy bins. We also estimated the impact of anatomic noise due to background parenchymal enhancement and computed the minimum detectable iodine area density in the presence of quantum and anatomic noise. RESULTS Model predictions of the MTFs and noise power spectra agreed well with experiment. For optimized systems, adding a third energy bin increased quantum noise levels and reduced detectability by ∼55% compared to two-bin approaches that simply suppress contrast between fibroglandular and adipose tissue. Decreasing the electronic noise standard deviation from 3.4 to 1.7 keV increased iodine detectability by ∼5% and ∼30% for two-bin imaging and three-bin imaging, respectively. After optimizing for tube voltage, beam filtration, and the location of energy thresholds, there was a ∼3% difference in iodine detectability between molybdenum and tungsten anodes for two-bin imaging, but for three-bin imaging, molybdenum anodes provided up to 14% increase in detectability relative to tungsten anodes. Anatomic noise decreased iodine detectability by 15% to 40%, with greater impact for lower electronic noise settings and larger pixel sizes. CONCLUSIONS For CESM implemented with CdTe PCDs, (1) quantitatively-accurate three-material decompositions using three energy bins are associated with substantial increases in quantum noise relative to two-energy-bin approaches that simply suppress contrast between fibroglandular and adipose tissues; (2) tungsten and molybdenum anodes can provide nearly equal iodine detectability for two-bin imaging, but molybdenum provides a modest detectability advantage for three-bin imaging provided that all other technique parameters are optimized; (3) reducing pixel sizes from 100 to 50 μm can reduce detectability by up to 20% due to charge sharing; (4) anatomic noise due to background parenchymal enhancement is estimated to have a substantial impact on lesion visibility, reducing detectability by approximately 30%.
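One common reading of "two-bin approaches that simply suppress contrast between fibroglandular and adipose tissue" is a weighted log subtraction S = ln(I_high) − w·ln(I_low), with w chosen so both tissues give the same S and only iodine contrast survives. This is a sketch of that idea only; the attenuation coefficients (per cm, low/high bin) and thickness are illustrative placeholders, not measured values or the authors' method:

```python
# Illustrative (mu_low, mu_high) pairs per material, in 1/cm.
mu = {"gland": (0.80, 0.50), "adipose": (0.60, 0.40), "iodine": (2.00, 3.00)}
t = 5.0                                     # path length in cm (illustrative)

# Cancel tissue contrast: hi_g - w*lo_g = hi_a - w*lo_a  =>  solve for w.
w = (mu["gland"][1] - mu["adipose"][1]) / (mu["gland"][0] - mu["adipose"][0])

def signal(material, w):
    """Weighted log subtraction ln(I_hi) - w*ln(I_lo), up to constants."""
    lo, hi = mu[material]
    return -(hi * t) + w * (lo * t)
```

With this w, gland and adipose produce identical subtracted signals while iodine stands far apart, which is the contrast-suppression behaviour the two-bin result relies on.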
|
47
|
Luo P, Chen L, Liu Y, Weng S. Forecast of the number of nursing beds per 1000 older people from 2023 to 2025: Empirical quantitative research. Nurs Open 2024; 11:e2159. [PMID: 38628098 PMCID: PMC11021919 DOI: 10.1002/nop2.2159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2023] [Revised: 02/29/2024] [Accepted: 03/26/2024] [Indexed: 04/19/2024] Open
Abstract
AIM This research aims to offer a reference point for relevant departments to enhance the allocation of ageing resources and formulate policies accordingly. DESIGN This study is designed as empirical quantitative research. METHODS Data from the National Bureau of Statistics and the Ministry of Civil Affairs regarding older adults (aged ≥60) from 2000 to 2022 and nursing beds from 1978 to 2022 were analysed. An autoregressive integrated moving average (ARIMA) model and Monte Carlo simulation were used to predict the growth of nursing beds per 1000 older people in China for the years 2023-2025. RESULTS It is projected that from 2023 to 2025, China will experience a further increase in its ageing population, with an average annual growth rate of 3.1%. By 2025, the number of older people in China is expected to surpass 300 million. Additionally, there will be a rise in the number of nursing beds, with an average annual growth rate of 1.9%, leading to a total of 8.79 million nursing beds by 2025. However, due to the rapid growth of the older population, there will be a slight decline in the number of nursing beds per 1000 older people in China, with an average annual growth rate of -1.00%.
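The Monte Carlo half of such a forecast can be sketched by propagating both series forward with their average annual growth rates plus random noise and taking the ratio. The 3.1% and 1.9% growth rates come from the abstract; the 2022 baselines and the noise level are illustrative assumptions, not the study's inputs:

```python
import random

rng = random.Random(7)
pop0, beds0 = 280.0, 8.30          # millions in 2022 (illustrative baselines)
g_pop, g_beds = 0.031, 0.019       # average annual growth rates (abstract)
sigma = 0.005                      # assumed year-to-year noise on the rates

def simulate(years=3):
    pop, beds = pop0, beds0
    for _ in range(years):
        pop *= 1.0 + g_pop + rng.gauss(0.0, sigma)
        beds *= 1.0 + g_beds + rng.gauss(0.0, sigma)
    return 1000.0 * beds / pop     # nursing beds per 1000 older people

draws = [simulate() for _ in range(5000)]
mean_2025 = sum(draws) / len(draws)
```

Because the population grows faster than the bed count, the simulated per-1000 ratio drifts downward on average, matching the reported slight decline.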
|
48
|
Zhang C, Chen J, Chu Z, Zhang P, Xu J. History and future of water footprint in the Yangtze River Delta of China. ENVIRONMENTAL SCIENCE AND POLLUTION RESEARCH INTERNATIONAL 2024; 31:25508-25523. [PMID: 38472581 DOI: 10.1007/s11356-024-32757-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/30/2023] [Accepted: 02/29/2024] [Indexed: 03/14/2024]
Abstract
Quantifying the drivers of water footprint evolution in the Yangtze River Delta is vital for optimizing China's total water consumption. The article aims to decompose and predict the water footprint of the Yangtze River Delta and provide policy recommendations for optimizing water use in the region. The paper applies the LMDI method to decompose the water footprint of the Yangtze River Delta and its provinces into five major drivers: water footprint structure, water use intensity, R&D scale, R&D efficiency, and population size. Furthermore, it combines scenario analysis and Monte Carlo simulation to predict the potential evolution trends of the water footprint under basic, general, and enhanced water conservation scenarios. The results show that (1) the expansion of R&D scale is the main factor promoting the growth of the water footprint; the improvement of R&D efficiency and the reduction of water intensity are the main factors inhibiting its increase; and water footprint structure and population size have less influence. (2) The evolution trend of the water footprint of each province differs across the three scenarios. Compared to the basic scenario, the water footprint decreases more in Shanghai, Zhejiang, and Anhui under the general and enhanced water conservation scenarios. The increase in water footprint in Jiangsu under the enhanced scenario is smaller than under the general water conservation scenario.
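The additive LMDI-I decomposition used here splits the change in a quantity V = x₁·x₂·…·xₙ into one contribution per driver via the logarithmic mean L(a, b) = (a − b)/(ln a − ln b); the contributions sum exactly to ΔV. A generic three-factor sketch with illustrative values (not the paper's drivers):

```python
import math

def logmean(a, b):
    """Logarithmic mean; equals a when a == b."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi(factors0, factors1):
    """Additive contribution of each factor to V1 - V0 for V = prod(factors)."""
    v0, v1 = math.prod(factors0), math.prod(factors1)
    w = logmean(v1, v0)
    return [w * math.log(f1 / f0) for f0, f1 in zip(factors0, factors1)]

f0, f1 = [2.0, 3.0, 5.0], [2.2, 2.7, 6.0]   # illustrative driver values
effects = lmdi(f0, f1)
total = sum(effects)
delta = math.prod(f1) - math.prod(f0)        # total change to be explained
```

The exact-additivity property (total == delta, no residual term) is the reason LMDI is preferred for attributing water-footprint change to individual drivers.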
|
49
|
De Saint-Hubert M, Caprioli M, de Freitas Nascimento L, Delombaerde L, Himschoot K, Vandenbroucke D, Leblans P, Crijns W. New optically stimulated luminescence dosimetry film optimized for energy dependence guided by Monte Carlo simulations. Phys Med Biol 2024; 69:075005. [PMID: 38394683 DOI: 10.1088/1361-6560/ad2ca2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2023] [Accepted: 02/23/2024] [Indexed: 02/25/2024]
Abstract
Optically stimulated luminescence (OSL) film dosimeters, based on BaFBr:Eu²⁺ phosphor material, have major dosimetric advantages such as dose linearity, high spatial resolution, film re-usability, and immediate film readout. However, they exhibit an energy-dependent over-response at low photon energies because they are not made of tissue-equivalent materials. In this work, the OSL energy-dependent response was optimized by lowering the phosphor grain size and seeking an optimal choice of phosphor concentration and film thickness to achieve sufficient signal sensitivity. This optimization process combines measurement-based assessments of energy response in narrow x-ray beams with various energy response calculation methods applied to different film metrics. Theoretical approaches and MC dose simulations were used for homogeneous phosphor distributions and for isolated phosphor grains of different dimensions, where the dose in the phosphor grain was calculated. In total, 8 OSL films were manufactured with different BaFBr:Eu²⁺ median particle diameters (D50) of 3.2 μm, 1.5 μm and 230 nm, different phosphor concentrations (1.6%, 5.3% and 21.3%), and different thicknesses (from 5.2 to 49 μm). Films were irradiated in narrow x-ray spectra (N-60, N-80, N-150 and N-300) and the signal intensity relative to the nominal dose-to-water value was normalized to Co-60. Finally, we experimentally tested the response of several films in a Varian TrueBeam STx linear accelerator at 6 MV using the following settings: 10 × 10 cm² field, 0° gantry angle, 90 cm SSD, 10 cm depth. The x-ray irradiation experiment showed a reduced energy response for the smallest grain size, with an inverse correlation between response and grain size. The N-60 irradiation showed a 43% reduction in the energy over-response when going from 3 μm to 230 nm grain size for the 5% phosphor concentration. Energy response calculations using a homogeneous dispersion of the phosphor underestimated the experimental response and could not reproduce the experimentally observed correlation between grain size and energy response. Isolated-grain modelling combined with MC dose simulations yielded good agreement with experimental data and enabled steering the production of optimized OSL films. The clinical 6 MV beam test confirmed a reduction in energy dependence, visible in small-grain films as a decrease in out-of-field over-response.
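The energy-response metric underlying these comparisons is the signal per unit dose-to-water in a narrow-spectrum beam, normalized to the same quantity for Co-60. A minimal sketch; the numeric readings are illustrative placeholders chosen only to mirror the reported ~43% reduction in N-60 over-response:

```python
def relative_energy_response(signal, dose, signal_co60, dose_co60):
    """Signal-per-dose in the test beam relative to signal-per-dose at Co-60."""
    return (signal / dose) / (signal_co60 / dose_co60)

r_large = relative_energy_response(4.60, 1.0, 1.0, 1.0)   # e.g. 3 um grains
r_small = relative_energy_response(3.05, 1.0, 1.0, 1.0)   # e.g. 230 nm grains
# Reduction of the over-response, i.e. of the part exceeding unity.
reduction = 1.0 - (r_small - 1.0) / (r_large - 1.0)
```

Expressing the improvement as a fraction of the over-response (response − 1), rather than of the raw response, is what makes the grain-size comparison meaningful.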
|
50
|
Broderick K, Burnley RA, Gellman AJ, Kitchin JR. Surface Segregation Studies in Ternary Noble Metal Alloys: Comparing DFT and Machine Learning with Experimental Data. Chemphyschem 2024:e202400073. [PMID: 38517936 DOI: 10.1002/cphc.202400073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2024] [Revised: 03/21/2024] [Accepted: 03/22/2024] [Indexed: 03/24/2024]
Abstract
Surface segregation, whereby the surface composition of an alloy differs systematically from the bulk, has historically been hard to study because it requires experimental and modeling methods that span alloy composition space. In this work, we study surface segregation in catalytically relevant noble and platinum-group metal alloys with a focus on three ternary systems: AgAuCu, AuCuPd, and CuPdPt. We develop a data set of 2478 fcc slabs with those compositions, including all three low-index crystallographic orientations, relaxed with Density Functional Theory using the PBEsol functional with D3 dispersion corrections. We fine-tune a machine learning model on this data and use the model in a series of 1800 Monte Carlo simulations spanning ternary composition space for each surface orientation and ternary chemical system. The results of these simulations are validated against prior experimental surface segregation data collected using composition spread alloy films for AgAuCu and AuCuPd. Our findings reveal that simulations conducted using the (110) orientation most closely match experimentally observed surface segregation trends, and that while predicted trends qualitatively match observation, biases in the PBEsol functional limit numeric accuracy. This study advances the understanding of surface segregation, demonstrates the utility of computational studies, and highlights the need for further improvements in simulation accuracy.
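A segregation Monte Carlo step of the kind run here proposes swapping a surface atom with a bulk atom, scores the swap with an energy model, and accepts by the Metropolis rule. In the sketch below, a toy per-species surface-preference energy stands in for the fine-tuned ML potential, and the compositions, site counts, and energies are all illustrative:

```python
import math, random

rng = random.Random(3)
kT = 0.05                                   # eV, illustrative temperature
surface = ["Cu"] * 8 + ["Pd"] * 8           # toy surface layer (16 sites)
bulk = ["Pd"] * 8 + ["Pt"] * 8              # toy bulk reservoir (16 sites)

# Toy surface-segregation energies (eV): negative favours the surface.
SEG_ENERGY = {"Cu": -0.10, "Pd": 0.00, "Pt": 0.15}

for _ in range(5000):
    i = rng.randrange(len(surface))
    j = rng.randrange(len(bulk))
    # Energy change from swapping the surface atom with the bulk atom.
    dE = SEG_ENERGY[bulk[j]] - SEG_ENERGY[surface[i]]
    if dE <= 0 or rng.random() < math.exp(-dE / kT):   # Metropolis rule
        surface[i], bulk[j] = bulk[j], surface[i]

cu_frac = surface.count("Cu") / len(surface)   # surface Cu enrichment
```

Replacing the lookup-table energy with the ML model's slab energies, and the two flat lists with a real (100)/(110)/(111) lattice, recovers the workflow the abstract describes.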
|