26. Dong Z, Chen X, Ritter J, Bai L, Huang J. American Society of Anesthesiologists physical status classification significantly affects the performances of machine learning models in intraoperative hypotension inference. J Clin Anesth 2024; 92:111309. PMID: 37922642; PMCID: PMC10873053; DOI: 10.1016/j.jclinane.2023.111309.
Abstract
STUDY OBJECTIVE: To explore how American Society of Anesthesiologists (ASA) physical status classification affects different machine learning models in hypotension prediction, and whether the prediction uncertainty can be quantified. DESIGN: Observational study. SETTING: UofL Health hospital. PATIENTS: This study involved 562 hysterectomy surgeries performed on patients (≥18 years) between June 2020 and July 2021. INTERVENTIONS: None. MEASUREMENTS: Preoperative and intraoperative data were collected. Three parametric machine learning models, namely the Bayesian generalized linear model (BGLM), the Bayesian neural network (BNN), and a newly proposed BNN with multivariate mixed responses (BNNMR), and one nonparametric model, Gaussian process (GP) regression, were explored to predict patients' diastolic and systolic blood pressures (continuous responses) and hypotensive events (binary response) for the next five minutes. Data were separated by ASA physical status classes 1-4 before being read in by the four models. Statistical analysis and model construction were performed in Python. Sensitivity, specificity, and confidence/credible intervals were used to evaluate the prediction performance of each model for each ASA class. MAIN RESULTS: Distinct models are required for different ASA physical status classes to accurately predict intraoperative blood pressures and hypotensive events. Overall, high sensitivity (above 0.85) and low uncertainty were achieved by all models for ASA class 4 patients. In contrast, models trained without controlling for ASA class yielded lower sensitivity (below 0.5) and larger uncertainty. In particular, for predicting the binary hypotensive event, BNNMR yielded the highest sensitivity (1.0) for ASA class 1; BNN had the highest sensitivity for classes 2 and 3 (0.429 and 0.415, respectively); and BNNMR and GP tied for the highest sensitivity (0.857) for class 4.
In contrast, when training data were not divided by ASA physical status class, sensitivity was just 0.031, 0.429, 0.165 and 0.305 for the BNNMR, BNN, BGLM and GP models, respectively. For predicting systolic blood pressure, GP regression yielded the lowest root mean squared errors (RMSE) of 2.072, 7.539, 9.214 and 0.295 for ASA classes 1, 2, 3 and 4, respectively, but an RMSE of 126.894 when the model was trained without controlling for ASA class. The RMSEs for the other models were far higher; for example, the BGLM's RMSEs were 2.175, 13.861, 17.560 and 22.426 for classes 1, 2, 3 and 4, respectively. For predicting diastolic blood pressure, GP regression yielded the lowest RMSEs of 2.152, 6.573, 5.371 and 0.831 for ASA classes 1, 2, 3 and 4, respectively, and an RMSE of 8.084 when trained without controlling for ASA class; the RMSEs for the other models were again far higher. Finally, in terms of the width of the 95% confidence interval of the mean prediction for systolic and diastolic blood pressures, GP regression gave narrower intervals, with a much smaller margin of error, across all four ASA classes. CONCLUSIONS: Different ASA physical status classes present different data distributions and thus call for distinct machine learning models to improve prediction accuracy and reduce predictive uncertainty. Uncertainty quantification enabled by Bayesian inference provides clinicians an additional metric for evaluating the performance of machine learning models in medical decision making.
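As a rough illustration of the per-class GP modelling described above, the sketch below fits a Gaussian process regressor to synthetic blood pressure data and derives a 95% interval from the predictive standard deviation. The features, coefficients, and noise level are invented for the example, not taken from the study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic stand-in for one ASA class: three features (imagined as, e.g.,
# recent blood pressure readings) -> systolic pressure 5 minutes ahead.
X = rng.normal(size=(80, 3))
y = 120.0 + 8.0 * X[:, 0] - 5.0 * X[:, 1] + rng.normal(scale=2.0, size=80)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

X_new = rng.normal(size=(5, 3))
mean, std = gp.predict(X_new, return_std=True)       # predictive mean and std
lower, upper = mean - 1.96 * std, mean + 1.96 * std  # 95% interval
width = upper - lower                                # uncertainty reported alongside the prediction
```

Training one such model per ASA class, as the study suggests, would simply repeat this fit on each class's subset.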
27. Kumar A, Singh UK, Pradhan B. Enhancing subsurface contamination assessment via ensemble prediction of ground electrical property: A Colorado AMD-impacted wetland case study. J Environ Manage 2024; 351:119943. PMID: 38169263; DOI: 10.1016/j.jenvman.2023.119943.
Abstract
Acid mine drainage (AMD) is recognized as a major environmental challenge in the Western United States, particularly in Colorado, where it leads to severe subsurface contamination. Given Colorado's arid climate and dependence on groundwater, accurate assessment of AMD-induced contamination is crucial. While machine learning (ML)-based inversion algorithms have previously been used to reconstruct ground electrical properties (GEP), such as relative dielectric permittivity (RDP), from ground penetrating radar (GPR) data for contamination assessment, their inherently non-linear nature can introduce significant uncertainty and non-uniqueness into the reconstructed models, a challenge that traditional ML methods are not explicitly designed to address. In this study, a probabilistic hybrid technique is introduced that combines a DeepLabv3+ architecture-based deep convolutional neural network (DCNN) with an ensemble prediction-based Monte Carlo (MC) dropout method. Different MC dropout rates (1%, 5%, and 10%) were first evaluated on 1D and 2D synthetic GPR data for accurate and reliable RDP model prediction. The optimal rate was chosen based on minimal prediction uncertainty and the closest alignment of the mean or median model with the true RDP model. Notably, with the optimal MC dropout rate, prediction accuracy above 95% was achieved for both the 1D and 2D cases. Motivated by these results, the hybrid technique was applied to field GPR data collected over an AMD-impacted wetland near Silverton, Colorado. The field results underscored the technique's ability to predict an accurate subsurface RDP distribution for estimating the spatial extent of AMD-induced contamination. Notably, the technique not only provides a precise assessment of subsurface contamination but also ensures consistent interpretations of subsurface conditions by different environmental practitioners examining the same GPR data.
In conclusion, the hybrid technique presents a promising avenue for future environmental studies in regions affected by AMD or other contaminants that alter the natural distribution of GEP.
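The ensemble-prediction MC dropout idea, keeping dropout active at inference and treating repeated stochastic forward passes as samples, can be sketched with a toy numpy network. The architecture and weights below are stand-ins, not the paper's DeepLabv3+ DCNN.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "trained" weights for a 2-layer regressor (stand-ins for the DCNN).
W1, b1 = rng.normal(size=(4, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def forward(x, drop_rate, rng):
    h = np.maximum(x @ W1 + b1, 0.0)          # ReLU hidden layer
    if drop_rate > 0.0:                       # dropout stays ON at inference
        mask = rng.random(h.shape) > drop_rate
        h = h * mask / (1.0 - drop_rate)      # inverted-dropout scaling
    return h @ W2 + b2

x = rng.normal(size=(1, 4))
# T stochastic forward passes form the ensemble prediction.
samples = np.stack([forward(x, 0.05, rng) for _ in range(200)])
mc_mean = samples.mean(axis=0)                # point prediction
mc_std = samples.std(axis=0)                  # predictive uncertainty
```

Evaluating several dropout rates, as the study does, amounts to repeating the loop with `drop_rate` set to 0.01, 0.05, and 0.10 and comparing `mc_std` and the mean/median against a known target.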
28. Abdullah AA, Hassan MM, Mustafa YT. Leveraging Bayesian deep learning and ensemble methods for uncertainty quantification in image classification: A ranking-based approach. Heliyon 2024; 10:e24188. PMID: 38293520; PMCID: PMC10825337; DOI: 10.1016/j.heliyon.2024.e24188.
Abstract
Bayesian deep learning (BDL) has emerged as a powerful technique for quantifying uncertainty in classification tasks, surpassing the effectiveness of traditional models by aligning with the probabilistic nature of real-world data. This alignment allows for informed decision-making by not only identifying the most likely outcome but also quantifying the surrounding uncertainty. Such capabilities hold great significance in fields like medical diagnosis and autonomous driving, where the consequences of misclassification are substantial. To further improve uncertainty quantification, the research community has introduced Bayesian model ensembles, which combine multiple Bayesian models to enhance predictive accuracy and uncertainty quantification. These ensembles have exhibited superior performance compared to individual Bayesian models and even non-Bayesian counterparts. In this study, we propose a novel approach that leverages the power of Bayesian ensembles for enhanced uncertainty quantification. The proposed method exploits the disparity between the predicted positive and negative classes and employs it as a ranking metric for model selection. For each instance or sample, the ensemble's output for each class is determined by selecting the top 'k' models based on this ranking. Experimental results on different medical image classification tasks demonstrate that the proposed method consistently outperforms, or achieves comparable performance to, a conventional Bayesian ensemble. This investigation highlights the practical application of Bayesian ensemble techniques in refining predictive performance and enhancing uncertainty evaluation in image classification tasks.
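A minimal sketch of the ranking-based selection described above, assuming synthetic per-model class probabilities rather than the outputs of trained Bayesian models:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in class probabilities from M Bayesian models for N binary samples.
M, N, k = 7, 10, 3
probs = rng.dirichlet([2.0, 2.0], size=(M, N))        # shape (M, N, 2)

# Ranking metric: disparity between predicted positive and negative class.
disparity = np.abs(probs[..., 1] - probs[..., 0])     # shape (M, N)

# Per sample, average only the k most decisive models.
top_k = np.argsort(-disparity, axis=0)[:k]            # model indices, (k, N)
ensemble = np.stack([probs[top_k[:, j], j].mean(axis=0) for j in range(N)])
pred = ensemble.argmax(axis=1)                        # final class per sample
```

Note that the top-k subset is chosen per sample, so different models can contribute to different instances, which is what distinguishes this from a fixed-weight ensemble.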
29. Han Q, Qian X, Xu H, Wu K, Meng L, Qiu Z, Weng T, Zhou B, Gao X. DM-CNN: Dynamic Multi-scale Convolutional Neural Network with uncertainty quantification for medical image classification. Comput Biol Med 2024; 168:107758. PMID: 38042102; DOI: 10.1016/j.compbiomed.2023.107758.
Abstract
Convolutional neural networks (CNNs) have advanced diagnostic technology for medical images. However, CNN performance is limited by insufficient feature information and inaccurate attention weights. Previous works have improved the accuracy and speed of CNNs but ignored the uncertainty of the predictions; that is, the uncertainty of CNNs has not received enough attention. Extracting effective features and quantifying the uncertainty of medical deep learning models therefore remain great challenges. To solve these problems, this paper proposes a novel convolutional neural network model named DM-CNN, which mainly contains four proposed sub-modules: a dynamic multi-scale feature fusion module (DMFF), hierarchical dynamic uncertainty-quantifying attention (HDUQ-Attention), a multi-scale fusion pooling method (MF Pooling), and a multi-objective loss (MO loss). DMFF selects different convolution kernels according to the feature maps at different levels, extracts feature information at different scales, and strengthens the representation ability of each layer's feature information for fusion. HDUQ-Attention includes a tuning block that adjusts the attention weights according to the information of each layer, and a Monte Carlo (MC) dropout structure for quantifying uncertainty. MF Pooling is a pooling method designed for multi-scale models that can speed up computation and prevent overfitting while retaining the most important information. Because the number of parameters in the backbone of DM-CNN differs from that of the other modules, MO loss is proposed, which offers fast optimization and a good classification effect. DM-CNN was evaluated on publicly available datasets from four areas of medicine (dermatology, histopathology, respirology, ophthalmology), achieving state-of-the-art classification performance on all datasets.
DM-CNN not only maintains excellent performance but also addresses the quantification of uncertainty, a very important task for the medical field. The code is available at: https://github.com/QIANXIN22/DM-CNN.
30. Dimitriou NM, Demirag E, Strati K, Mitsis GD. A calibration and uncertainty quantification analysis of classical, fractional and multiscale logistic models of tumour growth. Comput Methods Programs Biomed 2024; 243:107920. PMID: 37976612; DOI: 10.1016/j.cmpb.2023.107920.
Abstract
BACKGROUND AND OBJECTIVE: The validation of mathematical models of tumour growth is frequently hampered by the lack of sufficient experimental data, resulting in qualitative rather than quantitative studies. Recent approaches to this problem have attempted to extract information about tumour growth by integrating multiscale experimental measurements, such as longitudinal cell counts and gene expression data. In the present study, we investigated the performance of several mathematical models of tumour growth, including classical logistic, fractional and novel multiscale models, in quantifying in-vitro tumour growth in the presence and absence of therapy. We further examined the effect of genes associated with changes in chemosensitivity on cell death rates. METHODS: The multiscale expansion of the logistic growth models was performed by coupling gene expression profiles to the cell death rates. State-of-the-art Bayesian inference, likelihood maximisation and uncertainty quantification techniques allowed a thorough evaluation of model performance. RESULTS: The results suggest that the classical single-cell population model (SCPM) was the best fit for the untreated and low-dose treatment conditions, while the multiscale model with a cell death rate symmetric with the expression profile of OCT4 (Sym-SCPM) yielded the best fit for the high-dose treatment data. Further identifiability analysis showed that the multiscale model was both structurally and practically identifiable under the condition of known OCT4 expression profiles. CONCLUSIONS: Overall, the present study demonstrates that model performance can be improved by incorporating multiscale measurements of tumour growth when high-dose treatment is involved.
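The classical logistic model at the core of this comparison can be calibrated to longitudinal cell counts by least squares; the sketch below uses `scipy.optimize.curve_fit` on synthetic data. The Bayesian inference and multiscale gene expression coupling of the study are not reproduced here, and all numbers are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, N0):
    # Classical logistic growth: N(t) = K / (1 + (K/N0 - 1) * exp(-r t)).
    return K / (1.0 + (K / N0 - 1.0) * np.exp(-r * t))

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 25)
# Noisy longitudinal cell counts from a "true" logistic trajectory.
counts = logistic(t, 1000.0, 0.9, 20.0) + rng.normal(scale=15.0, size=t.size)

popt, pcov = curve_fit(logistic, t, counts, p0=[800.0, 0.5, 30.0])
K_hat, r_hat, N0_hat = popt
K_se = np.sqrt(pcov[0, 0])   # rough 1-sigma uncertainty on carrying capacity
```

In a Bayesian treatment like the paper's, the covariance-based standard error would be replaced by a full posterior over (K, r, N0).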
31. Barahona J, Sahli Costabal F, Hurtado DE. Machine learning modeling of lung mechanics: Assessing the variability and propagation of uncertainty in respiratory-system compliance and airway resistance. Comput Methods Programs Biomed 2024; 243:107888. PMID: 37948910; DOI: 10.1016/j.cmpb.2023.107888.
Abstract
BACKGROUND AND OBJECTIVE: Traditional assessment of patient response in mechanical ventilation relies on respiratory-system compliance and airway resistance. Clinical evidence has shown high variability in these parameters, highlighting the difficulty of predicting them before the start of ventilation therapy. This motivates the creation of computational models that can connect structural and tissue features with lung mechanics. In this work, we leverage machine learning (ML) techniques to construct predictive lung function models informed by non-linear finite element simulations, and use them to investigate the propagation of uncertainty in the lung mechanical response. METHODS: We revisit a continuum poromechanical formulation of the lungs suitable for determining patient response. Based on this framework, we create high-fidelity finite element models of human lungs from medical images. We also develop a low-fidelity model based on an idealized sphere geometry. We then use these models to train and validate three ML architectures: single-fidelity and multi-fidelity Gaussian process (GP) regression, and artificial neural networks. We use the best predictive ML model to further study the sensitivity of the lung response to variations in tissue structural parameters and boundary conditions via sensitivity analysis and forward uncertainty quantification. Codes are available for download at https://github.com/comp-medicine-uc/ML-lung-mechanics-UQ. RESULTS: The low-fidelity model delivers a lung response very close to that predicted by high-fidelity simulations, at a fraction of the computational time. Among the trained ML models, the multi-fidelity GP model consistently delivers better accuracy than the single-fidelity GP and neural network models in estimating respiratory-system compliance and resistance (R² ≈ 0.99). In terms of computational efficiency, our ML model delivers a massive speed-up of roughly 970,000× with respect to high-fidelity simulations.
Regarding lung function, we observed an almost matched, non-linear relationship of specific structural parameters and chest wall stiffness with compliance, and a strong modulation of airway resistance by tissue permeability. CONCLUSIONS: Our findings unveil the relevance of specific lung tissue parameters and boundary conditions in the respiratory-system response. Furthermore, we highlight the advantages of adopting a multi-fidelity ML approach that combines data from different fidelity levels to yield accurate and efficient estimates of clinical mechanical markers. We envision that the methods presented here can open the way to the development of predictive ML models of the lung response that can inform clinical decisions.
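A common simplification of multi-fidelity GP regression, correcting a cheap low-fidelity surrogate with a second GP trained on the few high-fidelity residuals, can be sketched as follows. The two toy functions stand in for the sphere model and the finite element simulations and are not from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

def f_low(x):    # cheap idealized model (cf. the sphere geometry)
    return np.sin(2.0 * x).ravel()

def f_high(x):   # expensive model (cf. the finite element simulations)
    return (np.sin(2.0 * x) + 0.3 * x).ravel()

X_lo = rng.uniform(0.0, 3.0, size=(40, 1))       # many cheap runs
X_hi = np.linspace(0.0, 3.0, 7).reshape(-1, 1)   # few expensive runs

gp_lo = GaussianProcessRegressor(kernel=RBF()).fit(X_lo, f_low(X_lo))
# Learn the low-to-high discrepancy from the scarce high-fidelity data.
resid = f_high(X_hi) - gp_lo.predict(X_hi)
gp_delta = GaussianProcessRegressor(kernel=RBF()).fit(X_hi, resid)

X_test = np.linspace(0.0, 3.0, 50).reshape(-1, 1)
y_mf = gp_lo.predict(X_test) + gp_delta.predict(X_test)  # multi-fidelity prediction
```

The payoff is the one highlighted in the abstract: most of the response shape is learned from cheap runs, so only a handful of expensive simulations are needed.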
32. Peluso A, Danciu I, Yoon HJ, Yusof JM, Bhattacharya T, Spannaus A, Schaefferkoetter N, Durbin EB, Wu XC, Stroup A, Doherty J, Schwartz S, Wiggins C, Coyle L, Penberthy L, Tourassi GD, Gao S. Deep learning uncertainty quantification for clinical text classification. J Biomed Inform 2024; 149:104576. PMID: 38101690; DOI: 10.1016/j.jbi.2023.104576.
Abstract
INTRODUCTION: Machine learning algorithms are expected to work side-by-side with humans in decision-making pipelines. Thus, the ability of classifiers to make reliable decisions is of paramount importance. Deep neural networks (DNNs) represent the state-of-the-art models for real-world classification. Although the strength of activation in DNNs is often correlated with the network's confidence, in-depth analyses are needed to establish whether they are well calibrated. METHOD: In this paper, we demonstrate the use of DNN-based classification tools to benefit cancer registries by automating the extraction of information on disease at diagnosis and at surgery from electronic text pathology reports from the US National Cancer Institute (NCI) Surveillance, Epidemiology, and End Results (SEER) population-based cancer registries. In particular, we introduce multiple methods for selective classification that achieve a target level of accuracy on multiple classification tasks while minimizing the rejection amount, that is, the number of electronic pathology reports for which the model's predictions are unreliable. We evaluate the proposed methods by comparing our approach with the current in-house deep learning-based abstaining classifier. RESULTS: Overall, all the proposed selective classification methods effectively achieve the targeted level of accuracy or higher in a trade-off analysis aimed at minimizing the rejection rate. On in-distribution validation and holdout test data, all the proposed methods achieve the required target level of accuracy on all tasks with a lower rejection rate than the deep abstaining classifier (DAC). Interpreting the results for the out-of-distribution test data is more complex; nevertheless, in this case as well, the rejection rate of the best among the proposed methods achieving 97% accuracy or higher is lower than the rejection rate of the DAC.
CONCLUSIONS: We show that although both approaches can flag the samples that should be manually reviewed and labeled by human annotators, the newly proposed methods retain a larger fraction and do so without retraining, thus offering a reduced computational cost compared with the in-house deep learning-based abstaining classifier.
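Selective classification via a confidence threshold chosen to meet a target accuracy can be sketched on synthetic, roughly calibrated scores. The 0.9 target and the toy score distribution are illustrative only, not the paper's methods or its 97% operating point.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic, roughly calibrated classifier: confidence of the predicted
# class, and whether that prediction was actually correct.
n = 2000
conf = rng.beta(5, 2, size=n)
correct = rng.random(n) < conf

def threshold_for_target(conf, correct, target_acc):
    # Scan candidate thresholds in increasing order; accept only
    # predictions at or above the cutoff, reject the rest for review.
    for tau in np.sort(conf):
        kept = conf >= tau
        if correct[kept].mean() >= target_acc:
            return tau, 1.0 - kept.mean()   # threshold, rejection rate
    return 1.0, 1.0                         # target unreachable

tau, rejection_rate = threshold_for_target(conf, correct, target_acc=0.9)
selective_acc = correct[conf >= tau].mean()
```

The trade-off the abstract describes is exactly this curve: pushing `target_acc` up drives `rejection_rate` up, and competing methods are compared by who rejects less at the same target.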
33. Shuttleworth JG, Lei CL, Whittaker DG, Windley MJ, Hill AP, Preston SP, Mirams GR. Empirical Quantification of Predictive Uncertainty Due to Model Discrepancy by Training with an Ensemble of Experimental Designs: An Application to Ion Channel Kinetics. Bull Math Biol 2023; 86:2. PMID: 37999811; PMCID: PMC10673765; DOI: 10.1007/s11538-023-01224-6.
Abstract
When using mathematical models to make quantitative predictions for clinical or industrial use, it is important that predictions come with a reliable estimate of their accuracy (uncertainty quantification). Because models of complex biological systems are always large simplifications, model discrepancy arises: models fail to perfectly recapitulate the true data-generating process. This presents a particular challenge for making accurate predictions, and especially for accurately quantifying the uncertainty in these predictions. Experimentalists and modellers must choose which experimental procedures (protocols) are used to produce the data used to train models. We propose to characterise uncertainty owing to model discrepancy with an ensemble of parameter sets, each of which results from training to data from a different protocol. The variability in predictions from this ensemble provides an empirical estimate of predictive uncertainty owing to model discrepancy, even for unseen protocols. We use the example of electrophysiology experiments that investigate the properties of hERG potassium channels, where 'information-rich' protocols allow mathematical models to be trained using numerous short experiments performed on the same cell. In this setting, we simulate data with one model and fit it with a different (discrepant) one. For any individual experimental protocol, parameter estimates vary little under repeated samples from the assumed additive independent Gaussian noise model. Yet parameter sets arising from the same model applied to different experiments conflict, highlighting model discrepancy. Our methods will help select more suitable ion channel models for future studies, and will be widely applicable to a range of biological modelling problems.
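The protocol-ensemble idea can be illustrated with a deliberately discrepant toy model: fitting the same (wrong) model to data generated under three different 'protocols' yields conflicting parameter estimates whose spread empirically signals discrepancy. All functions and numbers below are invented, not the hERG models of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def true_process(t):
    # The (unknown) data-generating process.
    return 1.0 * t + 0.4 * t ** 2

# Three protocols probing different input ranges.
protocols = [np.linspace(0, 1, 50), np.linspace(1, 2, 50), np.linspace(2, 3, 50)]

# Deliberately discrepant model: y = a * t, fit separately per protocol.
estimates = []
for t in protocols:
    y = true_process(t) + rng.normal(scale=0.01, size=t.size)
    estimates.append((t @ y) / (t @ t))          # least-squares slope
estimates = np.array(estimates)

spread = estimates.max() - estimates.min()       # discrepancy-driven disagreement
```

Note the key signature from the abstract: the noise is tiny, so each per-protocol fit is very precise, yet the estimates still disagree far beyond their individual error bars.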
34. Yin T, Panapitiya G, Coda ED, Saldanha EG. Evaluating uncertainty-based active learning for accelerating the generalization of molecular property prediction. J Cheminform 2023; 15:105. PMID: 37941055; PMCID: PMC10633997; DOI: 10.1186/s13321-023-00753-5.
Abstract
Deep learning models have proven to be a powerful tool for the prediction of molecular properties for applications including drug design and the development of energy storage materials. However, in order to learn accurate and robust structure-property mappings, these models require large amounts of data, which can be a challenge to collect given the time- and resource-intensive nature of experimental material characterization efforts. Additionally, such models fail to generalize to new types of molecular structures that were not included in the training data. Accelerating material development through uncertainty-guided experimental design promises to significantly reduce the data requirements and enable faster generalization to new types of materials. To evaluate the potential of such approaches for electrolyte design applications, we perform a comprehensive evaluation of existing uncertainty quantification methods on the prediction of two relevant molecular properties: aqueous solubility and redox potential. We develop novel evaluation methods to probe the utility of the uncertainty estimates for both in-domain and out-of-domain data sets. Finally, we leverage selected uncertainty estimation methods for active learning to evaluate their capacity to support experimental design.
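A bare-bones uncertainty-guided active learning loop, querying the pool point with the largest GP predictive standard deviation, might look like the following. The 1D toy 'property' stands in for the molecular featurizations and models evaluated in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)

def property_fn(x):
    # Stand-in "measurement", e.g. a property vs. one molecular descriptor.
    return np.sin(3 * x).ravel()

pool = np.linspace(0, 3, 200).reshape(-1, 1)        # unlabeled candidates
labeled = list(rng.choice(200, size=5, replace=False))

for _ in range(10):                                  # 10 acquisition rounds
    X = pool[labeled]
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(1e-4))
    gp.fit(X, property_fn(X))
    _, std = gp.predict(pool, return_std=True)
    std[labeled] = -np.inf                           # don't re-query labels
    labeled.append(int(np.argmax(std)))              # most uncertain point

rmse = np.sqrt(np.mean((gp.predict(pool) - property_fn(pool)) ** 2))
```

In the paper's setting, each "label" is an experiment (or expensive calculation), so the acquisition rule directly encodes the experimental-design decision.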
35. MacDonald DE, Cancelliere NM, Pereira VM, Steinman DA. Sensitivity of hostile hemodynamics to aneurysm geometry via unsupervised shape interpolation. Comput Methods Programs Biomed 2023; 241:107762. PMID: 37598472; DOI: 10.1016/j.cmpb.2023.107762.
Abstract
BACKGROUND AND OBJECTIVE: Vessel geometry and hemodynamics are intrinsically linked: geometry determines hemodynamics, and hemodynamics influence vascular remodeling. Both have been used for testing clinical outcomes, but geometry/morphology generally has less uncertainty than hemodynamics derived from medical image-based computational fluid dynamics (CFD). To provide clinical utility, CFD-based hemodynamic parameters must be robust to modeling errors and/or uncertainties, but must also provide useful information not more easily extracted from shape alone. The objective of this study was to methodically assess the response of hemodynamic parameters to gradual changes in shape created using an unsupervised 3D shape interpolation method. METHODS: We trained the neural network NeuroMorph on three patient-derived intracranial aneurysm surfaces (labelled A, B, C), and then generated three distinct morph sequences (A→B, B→C, C→A), each containing 10 interpolated surfaces. From high-fidelity CFD simulations of these, we calculated a variety of common reduced hemodynamic parameters, including many previously associated with aneurysm rupture, and analyzed their responses to changes in shape, and their correlations. RESULTS: The interpolated surfaces demonstrate complex, gradual changes in branch angles, vessel diameters, and aneurysm morphology. CFD simulation showed gradual changes in aneurysm jetting characteristics and wall shear stress (WSS) patterns, but a range of responses from the reduced hemodynamic parameters. Spatially and temporally averaged parameters, including time-averaged WSS, time-averaged velocity, and low-shear area (LSA), showed low variation across all morph sequences, while parameters of flow complexity, such as the oscillatory shear, spectral broadening, and spectral bandedness indices, showed high variation between slightly altered neighboring surfaces.
Correlation analysis revealed a great deal of mutual information with easier-to-measure shape-based parameters. CONCLUSIONS: In the absence of large clinical datasets, unsupervised shape interpolation provides an ideal laboratory for exploring the delicate balance between robustness and sensitivity of nominal hemodynamic predictors of aneurysm rupture. Parameters like time-averaged WSS and LSA that are highly "robust" may, as a result, be effectively redundant to morphological predictors, whereas more sensitive parameters may be too uncertain for practical clinical use. Understanding these sensitivities may help identify parameters that are capable of providing added value to rupture risk assessment.
36. Junttila V, Minunno F, Peltoniemi M, Forsius M, Akujärvi A, Ojanen P, Mäkelä A. Quantification of forest carbon flux and stock uncertainties under climate change and their use in regionally explicit decision making: Case study in Finland. Ambio 2023; 52:1716-1733. PMID: 37572230; PMCID: PMC10562356; DOI: 10.1007/s13280-023-01906-4.
Abstract
Uncertainties are essential, yet often neglected, information for evaluating the reliability of forest carbon balance projections used in national and regional policy planning. We analysed uncertainties in the forest net biome exchange (NBE) and carbon stocks under multiple management and climate scenarios with a process-based ecosystem model. Sampled forest initial state values, model parameters, harvest levels and global climate models (GCMs) served as inputs to Monte Carlo simulations, which covered the forests of the 18 regions of mainland Finland over the period 2015-2050. Under individual scenarios, the results revealed time- and region-dependent variability in the magnitude of uncertainty and in the mean values of the NBE projections. The main sources of uncertainty varied with time, by region and by the amount of harvested wood. Combined uncertainties in the representative concentration pathway scenarios, GCMs, forest initial values and model parameters were the main sources of uncertainty at the beginning, while the harvest scenarios dominated by the end of the simulation period, combined with GCMs and climate scenarios especially in the north. Our regionally explicit uncertainty analysis proved a useful approach for revealing the variability in regional potentials to reach a policy-related future target level of NBE, which is important information when planning realistic and regionally fair national policy actions.
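The Monte Carlo propagation scheme, sampling uncertain inputs and pushing them through the model, can be sketched with a drastically simplified stand-in for the ecosystem model. The input distributions and the NBE expression below are illustrative only, not the study's model or data.

```python
import numpy as np

rng = np.random.default_rng(8)

def nbe_model(growth, harvest, climate):
    # Drastic simplification: climate-modulated growth minus harvest losses.
    return growth * climate - harvest

n = 5000
# Sampled uncertain inputs (units imagined as Mg C / ha / yr).
growth = rng.normal(3.0, 0.5, size=n)     # model parameter uncertainty
harvest = rng.uniform(1.0, 3.0, size=n)   # harvest scenario uncertainty
climate = rng.normal(1.0, 0.1, size=n)    # GCM-driven climate uncertainty

nbe = nbe_model(growth, harvest, climate)
lo, hi = np.percentile(nbe, [2.5, 97.5])  # 95% uncertainty band
p_sink = (nbe > 0.0).mean()               # chance the region stays a carbon sink
```

Repeating this per region and per scenario, and attributing variance to each sampled input in turn, gives the kind of regionally explicit source-of-uncertainty breakdown the study reports.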
37. Paasche H, Wang Y, Chand Baranwal V, Brönner M. Computation of a probabilistic uranium concentration map of Norway: A digital expert elicitation approach employing random forests and artificial neural networks. Heliyon 2023; 9:e21791. PMID: 38027730; PMCID: PMC10660982; DOI: 10.1016/j.heliyon.2023.e21791.
Abstract
We compute the first probabilistic uranium concentration map of Norway. Such a map can support mineral exploration, geochemical mapping, and the assessment of health risks to the human population. We employ multiple non-linear regression to fill the information gaps in sparse airborne and ground-borne uranium data sets. We mimic an expert elicitation by employing random forests and multi-layer perceptrons as digital agents, equally qualified to find regression models. In addition to the regression, we use supervised classification to produce conservative and alarmistic classified maps outlining regions with different potential for the local occurrence of uranium concentration extremes. Embedding the introduced digital expert elicitation in a Monte Carlo approach, we compute an ensemble of plausible uranium concentration maps of Norway, discretely quantifying the uncertainty resulting from the choice of the regression algorithm and its parametrization. We introduce digitated glyphs to visually integrate all computed maps and their associated uncertainties in a loss-free manner, fully communicating our probabilistic results to map users. A strong correlation between mapped geology and uranium concentration is found, which could be used to optimize future sparse uranium concentration sampling and lower extrapolation in future map updates.
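The digital expert elicitation can be approximated by training several differently configured random forest and MLP regressors and reading their disagreement as uncertainty. The covariates and target below are synthetic stand-ins for the Norwegian radiometric data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)

# Stand-in covariates (imagined as co-located survey channels) -> uranium concentration.
X = rng.normal(size=(300, 4))
y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=300)

# Two kinds of "digital experts", each consulted with varied settings.
agents = [RandomForestRegressor(n_estimators=t, random_state=s)
          for t, s in [(50, 0), (100, 1), (200, 2)]]
agents += [MLPRegressor(hidden_layer_sizes=h, max_iter=2000, random_state=s)
           for h, s in [((32,), 0), ((64, 32), 1)]]

X_new = rng.normal(size=(20, 4))
preds = np.stack([a.fit(X, y).predict(X_new) for a in agents])  # (5, 20)
consensus = preds.mean(axis=0)        # elicited "map" value
disagreement = preds.std(axis=0)      # algorithm/parametrization uncertainty
```

Wrapping this in a Monte Carlo loop over agent configurations, as the study does, turns the per-location disagreement into an ensemble of plausible maps.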
|
38
|
Dell'Oca A, Guadagnini A, Riva M. Probabilistic assessment of failure of infiltration structures under model and parametric uncertainty. Journal of Environmental Management 2023; 344:118466. [PMID: 37421819] [DOI: 10.1016/j.jenvman.2023.118466]
Abstract
We focus on the quantification of the probability of failure (PF) of an infiltration structure, of the kind that is typically employed for the implementation of low impact development strategies in urban settings. Our approach embeds various sources of uncertainty. These include (a) the mathematical models rendering key hydrological traits of the system and the ensuing model parametrization as well as (b) design variables related to the drainage structure. As such, we leverage a rigorous multi-model Global Sensitivity Analysis framework. We consider a collection of commonly used alternative models to represent our knowledge about the conceptualization of the system functioning. Each model is characterized by a set of uncertain parameters. As an original aspect, the sensitivity metrics we consider are related to a single- and a multi-model context. The former provides information about the relative importance that model parameters, conditional on the choice of a given model, can have on PF. The latter yields the importance that the selection of a given model has on PF and enables one to consider all of the alternative models analyzed at the same time. We demonstrate our approach through an exemplary application focused on the preliminary design phase of infiltration structures serving a region in the northern part of Italy. Results stemming from a multi-model context suggest that the contribution arising from the adoption of a given model is key to the quantification of the degree of importance associated with each uncertain parameter.
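The probability-of-failure computation under both model-choice and parameter uncertainty can be illustrated with a toy Monte Carlo. The two "infiltration models" and the failure threshold below are invented for illustration, not taken from the paper; what carries over is the structure (uncertain model selection on top of uncertain parameters, with PF evaluated both overall and conditional on each model):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical design criterion: the structure "fails" when simulated
# ponding depth exceeds 1.0 (arbitrary units).
threshold = 1.0

# Two alternative conceptual models of infiltration, each driven by the same
# uncertain parameter; the model choice itself is uncertain (equal weight).
def model_a(k):   # e.g. a linear-reservoir rendering
    return 1.5 / (1.0 + k)

def model_b(k):   # e.g. a power-law rendering
    return 1.2 * k ** -0.5

model_choice = rng.integers(0, 2, n)
k = rng.uniform(0.5, 2.0, n)
depth = np.where(model_choice == 0, model_a(k), model_b(k))

pf_total = np.mean(depth > threshold)          # multi-model PF
pf_a = np.mean(depth[model_choice == 0] > threshold)   # single-model PF
pf_b = np.mean(depth[model_choice == 1] > threshold)
```

The gap between `pf_a` and `pf_b` is exactly the kind of model-choice contribution the multi-model sensitivity metrics quantify.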
|
39
|
Boland J, Telesca D, Sugar C, Jeste S, Dickinson A, DiStefano C, Şentürk D. Central Posterior Envelopes for Bayesian Functional Principal Component Analysis. Journal of Data Science 2023; 21:715-734. [PMID: 38883309] [PMCID: PMC11178334] [DOI: 10.6339/23-jds1085]
Abstract
Bayesian methods provide direct inference in functional data analysis applications without reliance on bootstrap techniques. A major tool in functional data applications is the functional principal component analysis which decomposes the data around a common mean function and identifies leading directions of variation. Bayesian functional principal components analysis (BFPCA) provides uncertainty quantification on the estimated functional model components via the posterior samples obtained. We propose central posterior envelopes (CPEs) for BFPCA based on functional depth as a descriptive visualization tool to summarize variation in the posterior samples of the estimated functional model components, contributing to uncertainty quantification in BFPCA. The proposed BFPCA relies on a latent factor model and targets model parameters within a mixed effects modeling framework using modified multiplicative gamma process shrinkage priors on the variance components. Functional depth provides a center-outward order to a sample of functions. We utilize modified band depth and modified volume depth for ordering of a sample of functions and surfaces, respectively, to arrive at CPEs of the mean and eigenfunctions within the BFPCA framework. The proposed CPEs are showcased in extensive simulations. Finally, the proposed CPEs are applied to the analysis of a sample of power spectral densities (PSD) from resting state electroencephalography (EEG) where they lead to novel insights on diagnostic group differences among children diagnosed with autism spectrum disorder and their typically developing peers across age.
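A minimal sketch of the envelope construction, assuming modified band depth with bands formed by pairs of curves (j = 2) and synthetic "posterior draws" of a mean function; the central posterior envelope is then the pointwise range of the deepest half of the draws:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Synthetic posterior draws of a mean function: 30 curves on a 50-point grid.
t = np.linspace(0.0, 1.0, 50)
curves = (np.sin(2 * np.pi * t)
          + 0.3 * rng.normal(size=(30, 1))          # curve-level shift
          + 0.1 * rng.normal(size=(30, t.size)))    # pointwise wiggle

def modified_band_depth(f):
    """MBD: average fraction of the grid where a curve lies inside the band
    spanned by each pair of sample curves (j = 2)."""
    n = f.shape[0]
    depth = np.zeros(n)
    pairs = list(combinations(range(n), 2))
    for a, b in pairs:
        lo = np.minimum(f[a], f[b])
        hi = np.maximum(f[a], f[b])
        depth += ((f >= lo) & (f <= hi)).mean(axis=1)
    return depth / len(pairs)

depth = modified_band_depth(curves)
order = np.argsort(depth)[::-1]                     # center-outward ordering
central = curves[order[: curves.shape[0] // 2]]     # deepest 50% of draws
cpe_lower, cpe_upper = central.min(axis=0), central.max(axis=0)
```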
|
40
|
Wang P, Yan Z, Du Z, Fu Y, Liu Z, Qu S, Zhuang Z. A Bayesian method with nonlinear noise model to calibrate constitutive parameters of soft tissue. J Mech Behav Biomed Mater 2023; 146:106070. [PMID: 37567066] [DOI: 10.1016/j.jmbbm.2023.106070]
Abstract
The measured mechanical responses of soft tissue exhibit large variability and errors, especially for the softest brain tissue, while calibrating its constitutive parameters in a deterministic way remains a common practice. Here we implement a Bayesian method considering the nonlinear noise model to calibrate constitutive parameters of brain tissue. A probability model is first developed based on the measured experimental data, likelihood function, and prior function, from which the posterior distributions of model parameters are formulated. The likelihood function considers the nonlinear behaviors of the constitutive response and noise distribution of the experimentally measured data. Meanwhile, the prior predictive distribution is computed to check the probability model preliminarily. Secondly, the Markov Chain Monte Carlo (MCMC) method is used to compute the posterior distributions of model parameters, enabling assessment of parameter uncertainty, correlation, and model calibration error. Finally, the posterior predictive distributions of the overall response, constitutive response, and noise response are computed to validate the probabilistic model, all of which are consistent with the corresponding data. Furthermore, the effect of the prior distribution, experimental data, and noise model on posterior distribution is studied. Our study provides a general approach to calibrating constitutive parameters of soft tissue despite errors and large variability in experimental data.
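The calibration loop the abstract describes (a likelihood with a response-dependent noise model, posterior sampling by MCMC) can be sketched as follows. The one-parameter constitutive "model" and the noise law below are illustrative stand-ins, not the paper's actual forms:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stress-strain data with signal-dependent ("nonlinear") noise:
# noise std grows with the magnitude of the constitutive response.
strain = np.linspace(0.01, 0.3, 25)
mu_true, sigma0 = 2.0, 0.05
model = lambda mu: mu * (strain + strain ** 2)      # toy nonlinear response
stress = model(mu_true) + sigma0 * (1 + model(mu_true)) * rng.normal(size=strain.size)

def log_post(mu):
    if mu <= 0:
        return -np.inf                              # flat prior on mu > 0
    resid = stress - model(mu)
    sd = sigma0 * (1 + model(mu))                   # nonlinear noise model
    return -0.5 * np.sum((resid / sd) ** 2 + 2 * np.log(sd))

# Random-walk Metropolis sampling of the posterior.
samples, mu_cur, lp_cur = [], 1.0, log_post(1.0)
for _ in range(5000):
    prop = mu_cur + 0.1 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:
        mu_cur, lp_cur = prop, lp_prop
    samples.append(mu_cur)
samples = np.array(samples[1000:])                  # drop burn-in
```

The retained samples give the posterior mean, credible intervals, and (with more parameters) the correlations the paper examines.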
|
41
|
Yan J, Cai S, Cai X, Zhu G, Zhou W, Guo R, Yan H, Wang Y. Uncertainty quantification of microcirculatory characteristic parameters for recognition of cardiovascular diseases. Computer Methods and Programs in Biomedicine 2023; 240:107674. [PMID: 37343374] [DOI: 10.1016/j.cmpb.2023.107674]
Abstract
BACKGROUND Cardiovascular disease is one of the leading causes of death worldwide. However, according to studies, 90% of cardiovascular diseases can be prevented. Cardiovascular function parameters are an important basis for the diagnosis of cardiovascular diseases. The pulse wave also contains a wealth of physiological and pathological information, which can reflect the trend of cardiac function parameters at an early stage, so the measurement and analysis of the pulse wave signal are increasingly important. Wearable pulse signal acquisition devices have gradually become a new trend. In the mobile health scenario, convenient use is the prerequisite for long-term and rapid health monitoring. Data containing diverse pulse wave signals are the basis for obtaining more comprehensive and accurate human physiopathological information. Accurate data analysis and processing is the key to realizing the important goal of cardiovascular health monitoring. OBJECTIVE Based on the concept of mobile health care, wearable devices are developed to obtain physiological signals. A zero-dimensional model and an optimization algorithm are combined to complete the uncertainty quantification of the microcirculation parameters. Then, a feature set containing the cardiovascular parameters can be constructed, and a machine learning algorithm can be used to build a model that accurately identifies cardiovascular disease. METHODS This paper uses laboratory-developed equipment to acquire the wrist pulse wave and fingertip volume pulse wave. A total of 323 samples were collected from healthy populations, hypertensive patients, and patients with coronary heart disease (CHD). The pulse blood flow model in fingertip microcirculation is established, and the uncertainty quantification of model parameters is completed based on the slime mold algorithm (SMA). After comparing and analyzing the performance of four algorithms on pulse wave classification, the identification model of cardiovascular diseases is established based on the microcirculatory characteristic parameter set and the random forest (RF) algorithm. RESULTS RF showed good classification performance among the four classification algorithms. The identification accuracy of the model built on the microcirculatory characteristic parameter set and the RF algorithm exceeded 88% for all classes: 95.51% for coronary heart disease samples, 92.11% for healthy samples, and 88.55% for hypertensive samples. The model based on the RF algorithm thus has a good ability to distinguish the characteristic parameters in different cardiovascular health states. CONCLUSIONS The wearable device designed in this paper can facilitate the daily health monitoring of cardiovascular disease. By combining a physical model and a machine learning model, the uncertainty quantification of microcirculation parameters and the identification of cardiovascular disease were completed. The recognition model based on machine learning provides a new approach to research on cardiovascular health monitoring through pulse waves.
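As an illustration of the classification step, here is a toy bagged-stump ensemble standing in for the random forest. The synthetic "microcirculatory feature" vectors and class structure are invented, so only the mechanics (bootstrap resampling, random feature choice, majority vote) carry over:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic feature vectors for two classes (stand-ins for e.g. healthy vs.
# hypertensive microcirculation parameter sets).
n_per = 100
X = np.vstack([rng.normal(0.0, 1.0, (n_per, 4)),
               rng.normal(1.5, 1.0, (n_per, 4))])
y = np.repeat([0, 1], n_per)

# Bagged ensemble of decision stumps: a minimal stand-in for a random forest.
stumps = []
for _ in range(101):
    idx = rng.integers(0, y.size, y.size)           # bootstrap sample
    feat = rng.integers(0, X.shape[1])              # random feature
    thr = np.median(X[idx, feat])
    right = y[idx][X[idx, feat] > thr]              # labels right of the split
    right_class = int(right.mean() > 0.5) if right.size else 1
    stumps.append((feat, thr, right_class))

def predict(Xq):
    votes = np.zeros(Xq.shape[0])
    for feat, thr, right_class in stumps:
        votes += np.where(Xq[:, feat] > thr, right_class, 1 - right_class)
    return (votes / len(stumps) > 0.5).astype(int)  # majority vote

accuracy = (predict(X) == y).mean()
```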
|
42
|
Mariotti A, Celi S, Antonuccio MN, Salvetti MV. Impact of the Spatial Velocity Inlet Distribution on the Hemodynamics of the Thoracic Aorta. Cardiovasc Eng Technol 2023; 14:713-725. [PMID: 37726567] [DOI: 10.1007/s13239-023-00682-2]
Abstract
The impact of the distribution in space of the inlet velocity in the numerical simulations of the hemodynamics in the thoracic aorta is systematically investigated. A real healthy aorta geometry, for which in-vivo measurements are available, is considered. The distribution is modeled through a truncated cone shape, which is a suitable approximation of the real one downstream of a trileaflet aortic valve during the systolic part of the cardiac cycle. The ratio between the upper and the lower base of the truncated cone and the position of the center of the upper base are selected as uncertain parameters. A stochastic approach is chosen, based on the generalized Polynomial Chaos expansion, to obtain accurate response surfaces of the quantities of interest in the parameter space. The selected parameters influence the velocity distribution in the ascending aorta. Consequently, effects on the wall shear stress are observed, confirming the need to use patient-specific inlet conditions if interested in the hemodynamics of this region. The surface base ratio is globally the most important parameter. Conversely, the impact on the velocity and wall shear stress in the aortic arch and descending aorta is almost negligible.
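The non-intrusive gPC workflow the abstract relies on (sample the simulator at points in the parameter space, fit an orthogonal-polynomial response surface, read statistics off the coefficients) can be sketched for a single uniform parameter. The "simulator" below is an invented smooth function standing in for the CFD model, and the Legendre basis is the gPC choice matching a uniform input:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(5)

# Stand-in model output: e.g. a wall-shear-stress summary as an unknown smooth
# function of one uncertain inlet parameter xi ~ Uniform(-1, 1).
def simulator(xi):
    return np.exp(0.5 * xi) + 0.3 * xi ** 2

# Non-intrusive gPC: evaluate the simulator at sample points and fit
# Legendre-polynomial coefficients by least squares.
xi_train = rng.uniform(-1.0, 1.0, 50)
y_train = simulator(xi_train)
degree = 6
V = legendre.legvander(xi_train, degree)    # Vandermonde in Legendre basis
coef, *_ = np.linalg.lstsq(V, y_train, rcond=None)

# The response surface evaluates cheaply anywhere in the parameter space.
xi_test = np.linspace(-1.0, 1.0, 100)
surrogate = legendre.legval(xi_test, coef)
max_err = np.max(np.abs(surrogate - simulator(xi_test)))

# For a uniform input, the gPC mean is the coefficient of P0.
gpc_mean = coef[0]
```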
|
43
|
Zhou T, Zhu S. Uncertainty quantification and attention-aware fusion guided multi-modal MR brain tumor segmentation. Comput Biol Med 2023; 163:107142. [PMID: 37331100] [DOI: 10.1016/j.compbiomed.2023.107142]
Abstract
Brain tumor is one of the most aggressive cancers in the world, and accurate brain tumor segmentation plays a critical role in clinical diagnosis and treatment planning. Although deep learning models have presented remarkable success in medical segmentation, they can only obtain the segmentation map without capturing the segmentation uncertainty. To achieve accurate and safe clinical results, it is necessary to produce extra uncertainty maps to assist the subsequent segmentation revision. To this end, we propose to exploit uncertainty quantification in the deep learning model and apply it to multi-modal brain tumor segmentation. In addition, we develop an effective attention-aware multi-modal fusion method to learn the complementary feature information from the multiple MR modalities. First, a multi-encoder-based 3D U-Net is proposed to obtain the initial segmentation results. Then, an estimated Bayesian model is presented to measure the uncertainty of the initial segmentation results. Finally, the obtained uncertainty maps are integrated into a deep learning-based segmentation network, serving as additional constraint information to further refine the segmentation results. The proposed network is evaluated on the publicly available BraTS 2018 and BraTS 2019 datasets. The experimental results demonstrate that the proposed method outperforms the previous state-of-the-art methods on the Dice score, Hausdorff distance, and Sensitivity metrics. Furthermore, the proposed components could be easily applied to other network architectures and other computer vision fields.
|
44
|
Buddenkotte T, Escudero Sanchez L, Crispin-Ortuzar M, Woitek R, McCague C, Brenton JD, Öktem O, Sala E, Rundo L. Calibrating ensembles for scalable uncertainty quantification in deep learning-based medical image segmentation. Comput Biol Med 2023; 163:107096. [PMID: 37302375] [DOI: 10.1016/j.compbiomed.2023.107096]
Abstract
Uncertainty quantification in automated image analysis is highly desired in many applications. Typically, machine learning models in classification or segmentation are only developed to provide binary answers; however, quantifying the uncertainty of the models can play a critical role, for example in active learning or human-machine interaction. Uncertainty quantification is especially difficult when using deep learning-based models, which are the state-of-the-art in many imaging applications. The current uncertainty quantification approaches do not scale well in high-dimensional real-world problems. Scalable solutions often rely on classical techniques, such as dropout during inference or training ensembles of identical models with different random seeds, to obtain a posterior distribution. In this paper, we present the following contributions. First, we show that the classical approaches fail to approximate the classification probability. Second, we propose a scalable and intuitive framework for uncertainty quantification in medical image segmentation that yields measurements which approximate the classification probability. Third, we suggest the usage of k-fold cross-validation to overcome the need for held-out calibration data. Lastly, we motivate the adoption of our method in active learning, creating pseudo-labels to learn from unlabeled images and human-machine collaboration.
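The k-fold idea for calibrating without a separate held-out set can be sketched with temperature scaling of simulated overconfident logits. The factor-of-3 overconfidence, the fold scheme, and the grid search below are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated per-pixel "logits" from a model that is overconfident by a factor
# of 3, plus ground-truth binary labels drawn from the true probabilities.
n = 20_000
true_logit = rng.normal(0.0, 1.5, n)
labels = (rng.uniform(size=n) < 1 / (1 + np.exp(-true_logit))).astype(float)
logits = 3.0 * true_logit                   # overconfident raw outputs

def nll(logit, y):
    p = np.clip(1 / (1 + np.exp(-logit)), 1e-7, 1 - 1e-7)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# k-fold scheme: fit a temperature on each held-out fold, then average,
# standing in for cross-validation replacing a dedicated calibration set.
k = 5
fold = rng.integers(0, k, n)
grid = np.linspace(0.1, 5.0, 50)
temps = []
for f in range(k):
    held_out = fold == f
    t_best = min(grid, key=lambda t: nll(logits[held_out] / t, labels[held_out]))
    temps.append(t_best)
temperature = float(np.mean(temps))

nll_raw = nll(logits, labels)               # before calibration
nll_cal = nll(logits / temperature, labels) # after calibration
```

Dividing the logits by the recovered temperature (close to the injected 3.0) pulls the predicted probabilities back toward the true classification probability.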
|
45
|
DeJaco RF, Roberts MJ, Romsos EL, Vallone PM, Kearsley AJ. Reducing Bias and Quantifying Uncertainty in Fluorescence Produced by PCR. Bull Math Biol 2023; 85:83. [PMID: 37574503] [PMCID: PMC10423706] [DOI: 10.1007/s11538-023-01182-z]
Abstract
We present a new approach for relating nucleic-acid content to fluorescence in a real-time Polymerase Chain Reaction (PCR) assay. By coupling a two-type branching process for PCR with a fluorescence analog of Beer's Law, the approach reduces bias and quantifies uncertainty in fluorescence. As the two-type branching process distinguishes between complementary strands of DNA, it allows for a stoichiometric description of reactions between fluorescent probes and DNA and can capture the initial conditions encountered in assays targeting RNA. Analysis of the expected copy-number identifies additional dynamics that occur at short times (or, equivalently, low cycle numbers), while investigation of the variance reveals the contributions from liquid volume transfer, imperfect amplification, and strand-specific amplification (i.e., if one strand is synthesized more efficiently than its complement). Linking the branching process to fluorescence by the Beer's Law analog allows for an a priori description of background fluorescence. It also enables uncertainty quantification (UQ) in fluorescence which, in turn, leads to analytical relationships between amplification efficiency (probability) and limit of detection. This work sets the stage for UQ-PCR, where both the input copy-number and its uncertainty are quantified from fluorescence kinetics.
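The two-type branching process can be simulated directly: each cycle, every strand of one type produces a copy of the complementary type with some probability. The efficiency, cycle count, and the linear background-plus-signal fluorescence analog below are illustrative values, not the paper's calibrated ones:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two-type branching process for PCR: per cycle, each sense strand yields a
# new antisense copy with probability p, and vice versa, so the pair
# (sense, antisense) evolves jointly.
p = 0.9                      # amplification efficiency (probability)
cycles = 20
n_runs = 500
initial = np.array([10, 0])  # RNA-like start: one strand type only

totals = np.empty(n_runs)
for r in range(n_runs):
    sense, anti = initial
    for _ in range(cycles):
        new_anti = rng.binomial(sense, p)   # copies synthesized off sense
        new_sense = rng.binomial(anti, p)   # copies synthesized off antisense
        sense, anti = sense + new_sense, anti + new_anti
    totals[r] = sense + anti

# Beer's-law-style analog: fluorescence proportional to total product,
# sitting on top of a background term.
background = 50.0
fluorescence = background + 1e-4 * totals

# Expected large-cycle behavior: total copies grow like (1 + p)^cycles.
growth = totals.mean() / initial.sum()
```

The per-run scatter in `totals` is the stochastic amplification variability whose contributions (imperfect and strand-specific amplification) the paper dissects analytically.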
|
46
|
Rudolph MG, Collenteur RA, Kavousi A, Giese M, Wöhling T, Birk S, Hartmann A, Reimann T. A data-driven approach for modelling Karst spring discharge using transfer function noise models. Environmental Earth Sciences 2023; 82:339. [PMID: 37366470] [PMCID: PMC10290613] [DOI: 10.1007/s12665-023-11012-z]
Abstract
Karst aquifers are important sources of fresh water on a global scale. The hydrological modelling of karst spring discharge, however, still poses a challenge. In this study we apply a transfer function noise (TFN) model in combination with a bucket-type recharge model to simulate karst spring discharge. The application of the noise model for the residual series has the advantage that it is more consistent with assumptions for optimization such as homoscedasticity and independence. In an earlier hydrological modelling study, named Karst Modeling Challenge (KMC; Jeannin et al., J Hydrol 600:126508, 2021), several modelling approaches were compared for the Milandre Karst System in Switzerland. This serves as a benchmark, and we apply the TFN model to KMC data, subsequently comparing the results to other models. The most promising of several data-model combinations is identified in a three-step least-squares calibration. To quantify uncertainty, the Bayesian approach of Markov-chain Monte Carlo (MCMC) sampling is subsequently used with uniform priors for the previously identified best data-model combination. The MCMC maximum likelihood solution is used to simulate spring discharge for a previously unseen testing period, indicating a superior performance compared to all other models in the KMC. It is found that the model gives a physically feasible representation of the system, which is supported by field measurements. While the TFN model simulated rising limbs and flood recession especially well, medium and baseflow conditions were not represented as accurately. The TFN approach poses a well-performing data-driven alternative to other approaches that should be considered in future studies.
|
47
|
Yang L, Hu YJ, Wang H, Li C, Tang BJ, Wang B, Cui H. Uncertainty quantification of CO2 emissions from China's civil aviation industry to 2050. Journal of Environmental Management 2023; 336:117624. [PMID: 36868152] [DOI: 10.1016/j.jenvman.2023.117624]
Abstract
To mitigate the aviation industry's carbon emissions, the following steps are vital: accurately quantifying the carbon emission path by considering uncertainty factors, including transportation demand in the post-COVID-19 pandemic period; identifying gaps between this path and emission reduction targets; and providing mitigation measures. Mitigation measures available to China's civil aviation industry include the gradual realization of large-scale production of sustainable aviation fuels and a transition to 100% sustainable and low-carbon sources of energy. This study identified the key driving factors of carbon emissions by using the Delphi method and set scenarios that consider uncertainty, such as aviation development and emission reduction policies. A backpropagation neural network and Monte Carlo simulation were used to quantify the carbon emission path. The study results show that China's civil aviation industry can effectively help the country achieve its carbon peak and carbon neutrality goals. However, to achieve the net-zero carbon emissions goal of global aviation, China needs to reduce its emissions by approximately 82%-91% based on the optimal emission scenario. Thus, under the international net-zero target, China's civil aviation industry will face significant pressure to reduce its emissions. The use of sustainable aviation fuels is the best way to reduce aviation emissions by 2050. Moreover, in addition to the application of sustainable aviation fuel, it will be necessary to develop a new generation of aircraft by introducing new materials and upgrading technology, implement additional carbon absorption measures, and make use of carbon trading markets to facilitate the industry's contribution to mitigating climate change.
|
48
|
Fan YJ, Allen JE, McLoughlin KS, Shi D, Bennion BJ, Zhang X, Lightstone FC. Evaluating point-prediction uncertainties in neural networks for protein-ligand binding prediction. Artificial Intelligence Chemistry 2023; 1:100004. [PMID: 37583465] [PMCID: PMC10426331] [DOI: 10.1016/j.aichem.2023.100004]
Abstract
Neural Network (NN) models provide potential to speed up the drug discovery process and reduce its failure rates. The success of NN models requires uncertainty quantification (UQ) as drug discovery explores chemical space beyond the training data distribution. Standard NN models do not provide uncertainty information. Some methods require changing the NN architecture or training procedure, limiting the selection of NN models. Moreover, predictive uncertainty can come from different sources. It is important to have the ability to separately model different types of predictive uncertainty, as the model can take assorted actions depending on the source of uncertainty. In this paper, we examine UQ methods that estimate different sources of predictive uncertainty for NN models aiming at protein-ligand binding prediction. We use our prior knowledge on chemical compounds to design the experiments. By utilizing a visualization method we create non-overlapping and chemically diverse partitions from a collection of chemical compounds. These partitions are used as training and test set splits to explore NN model uncertainty. We demonstrate how the uncertainties estimated by the selected methods describe different sources of uncertainty under different partitions and featurization schemes, and how these estimates relate to prediction error.
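One common way to separate the two sources of predictive uncertainty, sketched here with a bootstrapped polynomial ensemble standing in for NN models: disagreement between members captures epistemic uncertainty (it blows up off the training distribution, as when leaving known chemical space), while the averaged residual-based noise estimate captures aleatoric uncertainty. The data and query points are synthetic:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy "binding affinity" data: noisy observations on [-2, 2]; nothing is
# sampled beyond x = 2, mimicking unseen chemical space.
x_train = rng.uniform(-2.0, 2.0, 80)
y_train = np.sin(x_train) + 0.2 * rng.normal(size=x_train.size)

x_query = np.array([0.0, 4.0])      # in-distribution vs. extrapolation

# Ensemble of bootstrapped degree-5 polynomial models: each member gives a
# mean prediction and a residual-based noise-variance estimate.
means, noise_vars = [], []
for _ in range(30):
    idx = rng.integers(0, x_train.size, x_train.size)
    coeffs = np.polyfit(x_train[idx], y_train[idx], 5)
    means.append(np.polyval(coeffs, x_query))
    resid = y_train[idx] - np.polyval(coeffs, x_train[idx])
    noise_vars.append(np.full(x_query.size, resid.var()))
means = np.array(means)
noise_vars = np.array(noise_vars)

# Decomposition: epistemic = member disagreement, aleatoric = mean noise level.
epistemic = means.var(axis=0)
aleatoric = noise_vars.mean(axis=0)
```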
|
49
|
Fanuel IM, Mirau S, Kajunguri D, Moyo F. Conservation of forest biomass and forest-dependent wildlife population: Uncertainty quantification of the model parameters. Heliyon 2023; 9:e16948. [PMID: 37332951] [PMCID: PMC10272482] [DOI: 10.1016/j.heliyon.2023.e16948]
Abstract
The ecosystem is confronted with numerous challenges as a consequence of the escalating human population and its corresponding activities. Among these challenges lies the degradation of forest biomass, which directly contributes to a reduction in forested areas and poses a significant threat to the survival of wildlife species through the intensification of intraspecific competition. In this paper, a non-linear mathematical model to study the conservation of forest and wildlife species that are reliant on the forest ecosystem within the framework of human population dynamics and its related activities is developed and analysed. The study assessed the impacts of economic measures in the form of incentives on reducing population pressure on forest resources, as well as the potential benefits of technological efforts to accelerate the rate of reforestation. Qualitative and quantitative analyses reveal that economic and technological factors have the potential to contribute to resource conservation efforts. However, these efforts are effective only up to a limit; beyond it, the system will be destabilised. Sensitivity analysis identified the parameters pertaining to human population, human activities, economic measures, and technological efforts as the most influential factors in the model.
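The sensitivity-index computation used in such analyses can be sketched on a one-equation stand-in for the model: a logistic forest-biomass equation with a harvesting pressure term (the paper's actual system couples forest, wildlife, and human populations and is larger). The normalized forward sensitivity index of the equilibrium with respect to each parameter is approximated by central finite differences:

```python
# Minimal stand-in: dB/dt = r*B*(1 - B/K) - h*B, with harvesting pressure h.
# Equilibrium: B* = K*(1 - h/r) for h < r.
def equilibrium(r, K, h):
    return K * (1 - h / r)

base = {"r": 0.8, "K": 100.0, "h": 0.3}

# Normalized forward sensitivity index: (p / B*) * dB*/dp,
# approximated by a central finite difference in parameter p.
def sensitivity(param):
    eps = 1e-6 * base[param]
    hi = dict(base); hi[param] += eps
    lo = dict(base); lo[param] -= eps
    dB = (equilibrium(**hi) - equilibrium(**lo)) / (2 * eps)
    return base[param] * dB / equilibrium(**base)

indices = {p: sensitivity(p) for p in base}
```

For this stand-in the indices are known in closed form (1 for K, +h/(r-h) for r, -h/(r-h) for h), so the finite-difference result can be checked directly; the same recipe ranks the influential parameters in the full model.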
|
50
|
Christiansen AV, Frederiksen RR, Vilhelmsen TN, Christensen S, Maurya PK, Hansen B, Kim H, Høyer AS, Aamand J, Jakobsen R, Børgesen CD, Jacobsen BH, Auken E. N-Map: High-resolution groundwater N-retention mapping and modelling by integration of geophysical, geological, geochemical, and hydrological data. Journal of Environmental Management 2023; 343:118126. [PMID: 37267756] [DOI: 10.1016/j.jenvman.2023.118126]
Abstract
A key aspect of protecting aquatic ecosystems from agricultural nitrogen (N) is to locate (i) farmlands where nitrate leaches from the bottom of the root zone and (ii) denitrifying zones in the aquifers where nitrate is removed before entering the surface water (N-retention). N-retention affects the choice of field mitigation measures to reduce delivered N to surface water. Farmland parcels associated with high N-retention give the lowest impact of the targeted field measures, and vice versa. In Denmark, a targeted N-regulation approach is currently implemented on a small catchment scale (approx. 15 km²). Although this regulatory scale is much more detailed than what has been used previously, it is still so large that most individual fields will be either over- or under-regulated due to the large spatial variation in N-retention. The potential cost reduction for farmers is up to 20-30% from detailed retention mapping at the field scale compared to the current small catchment scale. In this study, we present a mapping framework (N-Map) for differentiating farmland according to N-retention, which can be used for improving the effectiveness of targeted N-regulation. The framework currently only includes N-retention in the groundwater. The framework benefits from the incorporation of innovative geophysics in hydrogeological and geochemical mapping and modelling. To capture and describe relevant uncertainties, a large number of equally probable realizations are created through Multiple Point Statistical (MPS) methods. This allows relevant descriptions of uncertainties of parts of the model structure and includes other relevant uncertainty measures that affect the obtained N-retention. The output is data-driven, high-resolution groundwater N-retention maps, to be used by individual farmers to manage their cropping systems within the given regulatory boundary conditions. The detailed mapping allows farmers to use this information in farm planning in order to optimize the use of field measures to reduce delivered agricultural N to the surface water and thereby lower the costs of the field measures. From farmer interviews, however, it is clear that not all farms will have an economic gain from the detailed mapping, as the mapping costs will exceed the potential economic gains. The cost of N-Map is estimated here at 5-7 €/ha/year plus implementation costs at the farm. At the society level, the N-retention maps allow authorities to point out opportunities for a more targeted implementation of field measures to efficiently reduce the delivered N-load to surface waters.
|