1. How do medical schools influence their students' career choices? A realist evaluation. Med Educ Online 2024;29:2320459. PMID: 38404035; PMCID: PMC10898266; DOI: 10.1080/10872981.2024.2320459.
Abstract
INTRODUCTION: The career choices of medical graduates vary widely between medical schools in the UK and elsewhere and are generally not well matched with societal needs. Research has found that experiences in medical school, including the formal, informal and hidden curricula, are important influences. We conducted a realist evaluation of how and why these various social conditions in medical school influence career thinking.
METHODS: We interviewed junior doctors at the point of applying for speciality training, sampling purposively for a range of career choices. Participants were asked to describe points during their medical training when they had considered career options and how their thinking had been influenced by their context. Interview transcripts were coded for context-mechanism-outcome (CMO) configurations to test initial theories of how career decisions are made.
RESULTS: A total of 26 junior doctors from 12 UK medical schools participated. We found 14 recurring CMO configurations in the data that explained influences on career choice occurring during medical school.
DISCUSSION: Our initial theories about career decision-making were refined as follows. Career choice involves a process of testing potential careers for fit. This process is asymmetric: multiple experiences are usually needed before a career is judged to fit ('easing in'), but a single negative experience can be enough to rule a choice out. Developing a preference for a speciality aligns with Person-Environment-Fit decision theories; ruling out a potential career, however, can be a less thought-through process than rationality-based decision theories would suggest. Testing for fit is facilitated by longer and more authentic undergraduate placements, allocation and successful completion of tasks, being treated as part of the team, and enthusiastic role models. Informal career guidance is more influential than formal guidance. We suggest some implications for medical school programmes.

2. An assessment of the state of conservation planning in Europe. Philos Trans R Soc Lond B Biol Sci 2024;379:20230015. PMID: 38583468; PMCID: PMC10999267; DOI: 10.1098/rstb.2023.0015.
Abstract
Expanding and managing current habitat and species protection measures is at the heart of the European biodiversity strategy. A structured approach that can be used to gain insights into such issues is systematic conservation planning, which uses techniques from decision theory to identify the places and actions that contribute most effectively to policy objectives given a set of constraints. Yet culturally and historically determined European landscapes make the implementation of any conservation plan challenging, requiring an analysis of synergies and trade-offs before implementation. In this work, we review the scientific literature for evidence of previous conservation planning approaches, highlighting recent advances and success stories. We find that the conceptual characteristics of European conservation planning studies have likely limited their contribution to better-informed decisions. We outline pathways towards improving the uptake of decision theory and multi-criteria conservation planning at various scales, particularly highlighting the need for (a) open data and intuitive tools, (b) the integration of biodiversity-focused conservation planning with multiple objectives, (c) accounting for dynamic ecological processes and functions, and (d) better facilitation of entry points and co-design practices for conservation planning scenarios with stakeholders. By adopting and improving these practices, European conservation planning may become more actionable and adaptable towards implementable policy outcomes. This article is part of the theme issue 'Ecological novelty and planetary stewardship: biodiversity dynamics in a transforming biosphere'.

3. A Test Statistic Estimation-Based Approach for Establishing Self-Interpretable CNN-Based Binary Classifiers. IEEE Trans Med Imaging 2024;43:1753-1765. PMID: 38163307; PMCID: PMC11065575; DOI: 10.1109/tmi.2023.3348699.
Abstract
Interpretability is highly desired for deep neural network-based classifiers, especially when addressing high-stakes decisions in medical imaging. Commonly used post-hoc interpretability methods can produce plausible but different interpretations of a given model, leading to ambiguity about which one to choose. To address this problem, a novel decision-theory-inspired approach is investigated to establish a self-interpretable model, given a pre-trained deep binary black-box medical image classifier. The approach pairs a self-interpretable encoder-decoder model with a single-layer fully connected network with unity weights, trained to estimate the test statistic of the trained black-box classifier while maintaining similar accuracy. The decoder output, referred to as an equivalency map, is a transformed version of the to-be-classified image that, when processed by the fixed fully connected layer, produces the same test statistic value as the original classifier. The equivalency map visualizes the transformed image features that directly contribute to the test statistic value and, moreover, permits quantification of their relative contributions. Unlike traditional post-hoc interpretability methods, the proposed method is self-interpretable and quantitative. Detailed quantitative and qualitative analyses were performed on three different medical image binary classification tasks.
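
A minimal numerical sketch of the equivalency-map idea, under strong simplifying assumptions: the black-box classifier is reduced to a hidden linear test statistic and the encoder-decoder to a single elementwise linear map, so none of the paper's CNN architecture is reproduced. The fixed unity-weight layer is then just a sum over the map's elements, and per-feature contributions to the test statistic can be read off directly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 8
X = rng.normal(size=(n, d))
w_hidden = rng.normal(size=d)      # black-box weights, unknown to the method
t_bb = X @ w_hidden                # black-box test statistic per "image"

# "Decoder" stand-in: an elementwise map m(x) = a * x. The unity-weight
# fully connected layer is a plain sum, so its output is sum(a * x) = a @ x.
# Training then reduces to least squares so the summed map matches t_bb.
a, *_ = np.linalg.lstsq(X, t_bb, rcond=None)

x_new = rng.normal(size=d)
eq_map = a * x_new                 # "equivalency map": per-feature contributions
print("black-box statistic:", x_new @ w_hidden)
print("sum of equivalency map:", eq_map.sum())  # agrees by construction
```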

4. Single-Cell Measurements and Modeling and Computation of Decision-Making Errors in a Molecular Signaling System with Two Output Molecules. Biology (Basel) 2023;12:1461. PMID: 38132287; PMCID: PMC10740708; DOI: 10.3390/biology12121461.
Abstract
A cell constantly receives signals and adopts different fates accordingly. Given the uncertainty introduced by signal-transduction noise, a cell may misperceive these signals: it may behave as if a signal were present when there is none, or miss a signal that actually exists. In this paper, we consider a signaling system with two outputs, and introduce and develop methods to model and compute key cell decision-making parameters based on the two outputs in response to the input signal. In the considered system, tumor necrosis factor (TNF) regulates two transcription factors, nuclear factor κB (NFκB) and activating transcription factor 2 (ATF-2). These two system outputs are involved in important physiological functions, such as cell death and survival and viral replication, and in pathological conditions such as autoimmune diseases and different types of cancer. Using the introduced methods, we compute the decision thresholds based on the single-cell measured concentration levels of NFκB and ATF-2. We also define and compute the decision error probabilities, i.e., the false alarm and miss probabilities, based on the concentration levels of the two outputs. By considering the joint response of the two outputs of the signaling system, one can learn more about complex cellular decision-making processes, the corresponding decision error rates, and their possible involvement in the development of some pathological conditions.
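
A hedged sketch of the kind of computation the abstract describes, with invented Gaussian output distributions standing in for measured NFκB/ATF-2 levels (the paper's single-cell data and fitted models are not reproduced here). The decision rule is a likelihood-ratio test on the joint output, and the false-alarm and miss probabilities are estimated by Monte Carlo.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical joint (NFkB, ATF-2) response distributions; H0 = no TNF input,
# H1 = TNF present. Means and covariances are invented, not fitted to data.
f0 = multivariate_normal(mean=[1.0, 1.0], cov=[[0.30, 0.05], [0.05, 0.30]])
f1 = multivariate_normal(mean=[2.0, 1.8], cov=[[0.40, 0.10], [0.10, 0.40]])

y0 = f0.rvs(size=100_000, random_state=1)   # cells with no input signal
y1 = f1.rvs(size=100_000, random_state=2)   # cells with the input signal

# Decision rule: declare "signal present" when the likelihood ratio exceeds 1
# (the minimum-error threshold for equal priors).
lr = lambda y: f1.pdf(y) / f0.pdf(y)
p_false_alarm = np.mean(lr(y0) > 1.0)       # H0 true but H1 declared
p_miss = np.mean(lr(y1) <= 1.0)             # H1 true but H0 declared
print(f"P(false alarm) ~ {p_false_alarm:.3f}, P(miss) ~ {p_miss:.3f}")
```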

5. Net benefit of diagnostic tests for multistate diseases: an indicator variables approach. J Biopharm Stat 2023;33:611-638. PMID: 36710380; DOI: 10.1080/10543406.2023.2169928.
Abstract
A limitation of the common measures of diagnostic test performance, such as sensitivity and specificity, is that they do not consider the relative importance of false negative and false positive test results, which are likely to have different clinical consequences. The use of classification or prediction measures alone to compare diagnostic tests or biomarkers can therefore be inconclusive for clinicians. Comparing tests on net benefit can be more conclusive because the clinical consequences of misdiagnoses are considered. The literature has suggested evaluating binary diagnostic tests based on net benefit but has not considered diagnostic tests that classify more than two disease states, e.g., stroke subtype (large-artery atherosclerosis, cardioembolism, small-vessel occlusion, stroke of other determined etiology, stroke of undetermined etiology), skin lesion subtype, breast cancer subtypes (benign, mass, calcification, architectural distortion, etc.), METAVIR liver fibrosis stage (F0-F4), histopathological classification of cervical intraepithelial neoplasia (CIN), prostate Gleason grade, and brain injury (intracranial hemorrhage, mass effect, midline shift, cranial fracture). Other diseases have more than two stages, such as Alzheimer's disease (dementia due to Alzheimer's disease, mild cognitive impairment (MCI) due to Alzheimer's disease, and preclinical, presymptomatic Alzheimer's disease). In diseases with more than two states, the benefits and risks may vary between states. This paper extends the net-benefit approach for evaluating binary diagnostic tests to multistate clinical conditions, to rule in or rule out a clinical condition based on the adverse consequences of workup delay (due to a false negative test result) and unnecessary workup (due to a false positive test result). We demonstrate our approach with numerical examples and real data.
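
For the binary case that this paper generalizes, net benefit at a risk threshold p_t weighs true positives against false positives at the exchange rate implied by the threshold odds. A minimal sketch with invented counts (the multistate extension itself is not reproduced here):

```python
def net_benefit(tp, fp, n, p_t):
    """Net benefit at threshold p_t: true positives per patient minus false
    positives per patient, weighted by the threshold odds p_t/(1 - p_t)."""
    return tp / n - (fp / n) * (p_t / (1 - p_t))

# Hypothetical cohort of 1000 patients, 100 with disease, threshold 10%.
nb_test = net_benefit(tp=80, fp=150, n=1000, p_t=0.10)
nb_treat_all = net_benefit(tp=100, fp=900, n=1000, p_t=0.10)
print(f"test: {nb_test:.3f}, treat all: {nb_treat_all:.3f}, treat none: 0.000")
```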

6. Adaptive metrics for an evolving pandemic: A dynamic approach to area-level COVID-19 risk designations. Proc Natl Acad Sci U S A 2023;120:e2302528120. PMID: 37527346; PMCID: PMC10410764; DOI: 10.1073/pnas.2302528120.
Abstract
Throughout the COVID-19 pandemic, policymakers have proposed risk metrics, such as the CDC Community Levels, to guide local and state decision-making. However, risk metrics have not reliably predicted key outcomes and have often lacked transparency in terms of prioritization of false-positive versus false-negative signals. They have also struggled to maintain relevance over time due to slow and infrequent updates addressing new variants and shifts in vaccine- and infection-induced immunity. We make two contributions to address these weaknesses. We first present a framework to evaluate predictive accuracy based on policy targets related to severe disease and mortality, allowing for explicit preferences toward false-negative versus false-positive signals. This approach allows policymakers to optimize metrics for specific preferences and interventions. Second, we propose a method to update risk thresholds in real time. We show that this adaptive approach to designating areas as "high risk" improves performance over static metrics in predicting 3-wk-ahead mortality and intensive care usage at both state and county levels. We also demonstrate that with our approach, using only new hospital admissions to predict 3-wk-ahead mortality and intensive care usage has performed consistently as well as metrics that also include cases and inpatient bed usage. Our results highlight that a key challenge for COVID-19 risk prediction is the changing relationship between indicators and outcomes of policy interest. Adaptive metrics therefore have a unique advantage in a rapidly evolving pandemic context.
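
A stylized sketch of the adaptive-threshold idea on synthetic data: over a trailing window, choose the admissions cutoff that minimizes a weighted sum of false negatives and false positives against the observed 3-week-ahead outcome, then apply it going forward. The variable names, weights, and data are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def adaptive_threshold(admissions, bad_outcome, w_fn=2.0, w_fp=1.0):
    """Pick the cutoff minimizing w_fn*FN + w_fp*FP on a trailing window,
    where bad_outcome[i] marks the observed 3-week-ahead policy target."""
    candidates = np.quantile(admissions, np.linspace(0.05, 0.95, 19))
    def loss(c):
        flagged = admissions >= c
        fn = np.sum(~flagged & bad_outcome)
        fp = np.sum(flagged & ~bad_outcome)
        return w_fn * fn + w_fp * fp
    return min(candidates, key=loss)

# Toy trailing window: weekly admissions per 100k, and whether mortality
# three weeks later exceeded a policy target (synthetic relationship).
rng = np.random.default_rng(2)
adm = rng.gamma(shape=4.0, scale=3.0, size=52)
outcome = adm + rng.normal(0.0, 3.0, size=52) > 15.0
cut = adaptive_threshold(adm, outcome)
print(f"designate 'high risk' when admissions >= {cut:.1f}")
```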

7. Perspective on 'Harm' in Personalized Medicine. Am J Epidemiol 2023:kwad162. PMID: 37470494; DOI: 10.1093/aje/kwad162.
Abstract
Avoiding harm is an uncontroversial aim of personalized medicine and other epidemiologic initiatives. However, the precise mathematical translation of "harm" is disputable. Here we use a formal causal language to study common, but distinct, definitions of "harm". We clarify that commitment to a definition of harm has important practical and philosophical implications for decision making. We relate our practical and philosophical considerations to ideas from medical ethics and legal practice.

8. Value-of-Information Analysis for External Validation of Risk Prediction Models. Med Decis Making 2023:272989X231178317. PMID: 37345680; DOI: 10.1177/0272989x231178317.
Abstract
BACKGROUND: A previously developed risk prediction model needs to be validated before being used in a new population. The finite size of the validation sample entails that there is uncertainty around model performance. We apply value-of-information (VoI) methodology to quantify the consequence of uncertainty in terms of net benefit (NB).
METHODS: We define the expected value of perfect information (EVPI) for model validation as the expected loss in NB due to not confidently knowing which of the alternative decisions confers the highest NB. We propose bootstrap-based and asymptotic methods for EVPI computations and conduct simulation studies to compare their performance. In a case study, we use the non-US subsets of a clinical trial as the development sample for predicting mortality after myocardial infarction and calculate the validation EVPI for the US subsample.
RESULTS: The computation methods generated similar EVPI values in simulation studies. EVPI generally declined with larger samples. In the case study, at the prespecified threshold of 0.02, the best decision with current information would be to use the model, with an incremental NB of 0.0020 over treating all. At this threshold, the EVPI was 0.0005 (relative EVPI = 25%). When scaled to the annual number of heart attacks in the US, the expected NB loss due to uncertainty was equal to 400 true positives or 19,600 false positives, indicating the value of further model validation.
CONCLUSION: VoI methods can be applied to the NB calculated during external validation of clinical prediction models. While uncertainty does not directly affect the clinical implications of NB findings, validation EVPI provides an objective perspective on the need for further validation and can be reported alongside NB in external validation studies.
HIGHLIGHTS:
- External validation is a critical step when transporting a risk prediction model to a new setting, but the finite size of the validation sample creates uncertainty about the performance of the model.
- In decision theory, such uncertainty is associated with loss of net benefit because it can prevent one from identifying whether the use of the model is beneficial over alternative strategies.
- We define the expected value of perfect information for external validation as the expected loss in net benefit from not confidently knowing if the use of the model is net beneficial.
- The adoption of a model for a new population should be based on its expected net benefit; independently, value-of-information methods can be used to decide whether further validation studies are warranted.
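
A hedged sketch of the bootstrap route to a validation EVPI, on simulated data (synthetic predicted risks and outcomes, not the paper's trial data): for each bootstrap draw of the validation sample, compute the net benefit of "use the model", "treat all", and "treat none"; EVPI is the mean per-draw best net benefit minus the net benefit of the strategy that is best on average.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p_t = 2000, 0.02
pi = rng.beta(1, 30, size=n)        # model-predicted risks (synthetic)
y = rng.binomial(1, pi)             # observed validation outcomes

def strategy_nb(pi, y, p_t):
    odds = p_t / (1 - p_t)
    use = pi >= p_t
    nb_model = np.mean(use & (y == 1)) - odds * np.mean(use & (y == 0))
    nb_all = np.mean(y == 1) - odds * np.mean(y == 0)
    return np.array([nb_model, nb_all, 0.0])   # model, treat all, treat none

draws = np.array([strategy_nb(pi[idx], y[idx], p_t)
                  for idx in rng.integers(0, n, size=(4000, n))])
evpi = draws.max(axis=1).mean() - draws.mean(axis=0).max()
print(f"validation EVPI at threshold {p_t}: {evpi:.5f}")
```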

9. Signal detection models as contextual bandits. R Soc Open Sci 2023;10:230157. PMID: 37351497; PMCID: PMC10282591; DOI: 10.1098/rsos.230157.
Abstract
Signal detection theory (SDT) has been widely applied to identify the optimal discriminative decisions of receivers under uncertainty. However, the approach assumes that decision-makers immediately adopt the appropriate acceptance threshold, even though the optimal response must often be learned. Here we recast the classical normal-normal (and power-law) signal detection model as a contextual multi-armed bandit (CMAB). Rather than starting with complete information, decision-makers must infer how the magnitude of a continuous cue is related to the probability that a signaller is desirable, while simultaneously seeking to exploit the information they acquire. We explain how various CMAB heuristics resolve the trade-off between better estimating the underlying relationship and exploiting it. Next, we determined how naive human volunteers resolve signal detection problems with a continuous cue. As anticipated, a model of choice (accept/reject) that assumed volunteers immediately adopted the SDT-predicted acceptance threshold did not predict their behaviour well. The Softmax rule for solving CMABs, with choices based on a logistic function of the expected payoffs, best explained our volunteers' decisions, although a simple midpoint algorithm also predicted decisions well under some conditions. CMABs offer principled parametric solutions to many classical SDT problems when decision-makers start with incomplete information.
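
A small simulation of the Softmax idea the abstract describes, under assumed payoffs: the probability that a signaller is desirable rises logistically with the cue; the agent tracks estimated accept-payoffs per cue bin and chooses accept/reject with probability given by a softmax over estimated payoffs. The binning, payoffs, and temperature are illustrative choices, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(4)
V_ACCEPT_GOOD, V_ACCEPT_BAD, V_REJECT = 1.0, -1.0, 0.0
beta = 3.0                                   # softmax inverse temperature
n_bins, trials = 10, 5000
est = np.zeros(n_bins)                       # estimated accept payoff per bin
counts = np.zeros(n_bins)

for _ in range(trials):
    cue = rng.uniform(0, 1)
    b = min(int(cue * n_bins), n_bins - 1)
    p_desirable = 1 / (1 + np.exp(-8 * (cue - 0.5)))  # logistic cue-quality link
    # Softmax choice between accept (estimated payoff) and reject (known 0).
    p_accept = 1 / (1 + np.exp(-beta * (est[b] - V_REJECT)))
    if rng.uniform() < p_accept:
        payoff = V_ACCEPT_GOOD if rng.uniform() < p_desirable else V_ACCEPT_BAD
        counts[b] += 1
        est[b] += (payoff - est[b]) / counts[b]        # incremental mean update

print(np.round(est, 2))   # accept-value estimates rise with the cue
```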

10.
Abstract
Policymakers must make management decisions despite incomplete knowledge and conflicting model projections. Little guidance exists for the rapid, representative, and unbiased collection of policy-relevant scientific input from independent modeling teams. Integrating approaches from decision analysis, expert judgment, and model aggregation, we convened multiple modeling teams to evaluate COVID-19 reopening strategies for a mid-sized United States county early in the pandemic. Projections from seventeen distinct models were inconsistent in magnitude but highly consistent in ranking interventions. The 6-mo-ahead aggregate projections were well in line with observed outbreaks in mid-sized US counties. The aggregate results showed that up to half the population could be infected with full workplace reopening, while workplace restrictions reduced median cumulative infections by 82%. Rankings of interventions were consistent across public health objectives, but there was a strong trade-off between public health outcomes and duration of workplace closures, and no win-win intermediate reopening strategies were identified. Between-model variation was high; the aggregate results thus provide valuable risk quantification for decision making. This approach can be applied to the evaluation of management interventions in any setting where models are used to inform decision making. This case study demonstrated the utility of our approach and was one of several multimodel efforts that laid the groundwork for the COVID-19 Scenario Modeling Hub, which has provided multiple rounds of real-time scenario projections for situational awareness and decision making to the Centers for Disease Control and Prevention since December 2020.

11. On the Accuracy of Replication Failure Rates. Multivariate Behav Res 2023;58:598-615. PMID: 37339430; DOI: 10.1080/00273171.2022.2066500.
Abstract
A prominent approach to studying the replication crisis has been to conduct replications of several different scientific findings as part of the same research effort. The reported proportions of findings that these programs determined had failed to replicate have become important statistics in the replication crisis. However, these "failure rates" are based on decisions about whether individual studies replicated, decisions which are themselves subject to statistical uncertainty. In this article, we examine how that uncertainty impacts the accuracy of reported failure rates and find that they can be substantially biased and highly variable. Indeed, very high or very low failure rates could arise from chance alone.
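
A quick simulation of the abstract's point, under assumed numbers: even when replications have 80% power, the "failure rate" of a 100-study program varies by chance alone, and mixing in some null findings compounds the variability. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_findings, power, alpha, p_true = 100, 0.80, 0.05, 0.7

def one_program():
    is_true = rng.uniform(size=n_findings) < p_true
    # A replication "succeeds" (rejects the null) with prob = power for true
    # effects and prob = alpha for null effects.
    p_success = np.where(is_true, power, alpha)
    replicated = rng.uniform(size=n_findings) < p_success
    return 1 - replicated.mean()          # reported failure rate

rates = np.array([one_program() for _ in range(10_000)])
print(f"mean failure rate {rates.mean():.2f}, "
      f"95% range {np.quantile(rates, [0.025, 0.975])}")
```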

12. Decomposing clinical practice guidelines panels' deliberation into decision theoretical constructs. J Eval Clin Pract 2023;29:459-471. PMID: 36694469; DOI: 10.1111/jep.13809.
Abstract
RATIONALE, AIMS AND OBJECTIVES: The development of clinical practice guidelines (CPGs) suffers from the lack of an explicit and transparent framework for synthesising the key elements necessary to formulate practice recommendations. We matched the deliberations of the American Society of Hematology (ASH) CPG panel for the management of pulmonary embolism (PE) with the corresponding decision-theoretical constructs to assess the agreement of the panel's recommendations with explicit decision modelling.
METHODS: Five constructs were identified, of which three were used to reformulate the panel's recommendations: (1) the standard expected utility threshold (EUT) decision model; (2) an acceptable regret threshold (ARg) model to determine the frequency of tolerable false negative (FN) or false positive (FP) recommendations; and (3) fast-and-frugal (FFT) decision trees to formulate the entire strategy for the management of PE. We compared four management strategies: withhold testing versus d-dimer → computed tomography pulmonary angiography (CTPA) ('ASH-Low') versus CTPA → d-dimer ('ASH-High') versus treat without testing.
RESULTS: Different models generated different recommendations. For example, according to EUT, testing should be withheld only for a prior probability of PE below 0.13%, a clinically untenable threshold up to 15 times (2/0.13) below the ASH guidelines' threshold for ruling out PE (at a post-test probability of PE ≤ 2%). The three models agreed only that the 'ASH-Low' strategy should be used for pretest probabilities of PE between 0.13% and 13.27%, and that the 'ASH-High' strategy should be employed in a narrow range of prior probabilities between 90.85% and 93.07%. For all other prior probabilities of PE, choosing one model did not ensure coherence with the others.
CONCLUSIONS: CPG panels rely on various decision-theoretical strategies to develop their recommendations. Decomposing a panel's deliberation into these constructs can show whether the deliberation retains the necessary coherence. CPG recommendations often do not agree with EUT decision analysis, which is widely used in medical decision-making modelling.
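
The EUT construct invoked here is the classic Pauker-Kassirer threshold model. A sketch with hypothetical benefit/harm and test-accuracy inputs (not the ASH panel's actual values): B is the net benefit of treating the diseased, H the net harm of treating the non-diseased, and the testing and test-treatment thresholds follow from the test's sensitivity and specificity.

```python
def thresholds(sens, spec, B, H):
    """Pauker-Kassirer thresholds for a withhold/test/treat decision.
    Below t_test: withhold testing; between the two: test; above t_treat: treat."""
    t_test = (1 - spec) * H / ((1 - spec) * H + sens * B)
    t_treat = spec * H / (spec * H + (1 - sens) * B)
    return t_test, t_treat

# Hypothetical inputs: a fairly accurate test, treatment benefit 4x its harm.
t_test, t_treat = thresholds(sens=0.95, spec=0.90, B=4.0, H=1.0)
print(f"test if prior P(PE) in [{t_test:.3f}, {t_treat:.3f}]")
```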

13. Competing constraints shape the nonequilibrium limits of cellular decision-making. Proc Natl Acad Sci U S A 2023;120:e2211203120. PMID: 36862689; PMCID: PMC10013869; DOI: 10.1073/pnas.2211203120.
Abstract
Gene regulation is central to cellular function. Yet, despite decades of work, we lack quantitative models that can predict how transcriptional control emerges from molecular interactions at the gene locus. Thermodynamic models of transcription, which assume that gene circuits operate at equilibrium, have previously been employed with considerable success in the context of bacterial systems. However, the presence of ATP-dependent processes within the eukaryotic transcriptional cycle suggests that equilibrium models may be insufficient to capture how eukaryotic gene circuits sense and respond to input transcription factor concentrations. Here, we employ simple kinetic models of transcription to investigate how energy dissipation within the transcriptional cycle impacts the rate at which genes transmit information and drive cellular decisions. We find that biologically plausible levels of energy input can lead to significant gains in how rapidly gene loci transmit information but discover that the regulatory mechanisms underlying these gains change depending on the level of interference from noncognate activator binding. When interference is low, information is maximized by harnessing energy to push the sensitivity of the transcriptional response to input transcription factors beyond its equilibrium limits. Conversely, when interference is high, conditions favor genes that harness energy to increase transcriptional specificity by proofreading activator identity. Our analysis further reveals that equilibrium gene regulatory mechanisms break down as transcriptional interference increases, suggesting that energy dissipation may be indispensable in systems where noncognate factor interference is sufficiently large.

14. Adaptive metrics for an evolving pandemic: A dynamic approach to area-level COVID-19 risk designations. medRxiv [Preprint] 2023:2023.02.15.23285969. PMID: 36824769; PMCID: PMC9949193; DOI: 10.1101/2023.02.15.23285969.
Abstract
Throughout the COVID-19 pandemic, policymakers have proposed risk metrics, such as the CDC Community Levels, to guide local and state decision-making. However, risk metrics have not reliably predicted key outcomes and often lack transparency in terms of prioritization of false positive versus false negative signals. They have also struggled to maintain relevance over time due to slow and infrequent updates addressing new variants and shifts in vaccine- and infection-induced immunity. We make two contributions to address these weaknesses of risk metrics. We first present a framework to evaluate predictive accuracy based on policy targets related to severe disease and mortality, allowing for explicit preferences toward false negative versus false positive signals. This approach allows policymakers to optimize metrics for specific preferences and interventions. Second, we propose a novel method to update risk thresholds in real time. We show that this adaptive approach to designating areas as "high risk" improves performance over static metrics in predicting 3-week-ahead mortality and intensive care usage at both state and county levels. We also demonstrate that with our approach, using only new hospital admissions to predict 3-week-ahead mortality and intensive care usage has performed consistently as well as metrics that also include cases and inpatient bed usage. Our results highlight that a key challenge for COVID-19 risk prediction is the changing relationship between indicators and outcomes of policy interest. Adaptive metrics therefore have a unique advantage in a rapidly evolving pandemic context.
Significance Statement: In the rapidly evolving COVID-19 pandemic, public health risk metrics often become less relevant over time. Risk metrics are designed to predict future severe disease and mortality based on currently available surveillance data, such as cases and hospitalizations. However, the relationship between cases, hospitalizations, and mortality has varied considerably over the course of the pandemic, in the context of new variants and shifts in vaccine- and infection-induced immunity. We propose an adaptive approach that regularly updates metrics based on the relationship between surveillance inputs and future outcomes of policy interest. Our method captures changing pandemic dynamics, requires only hospitalization input data, and outperforms static risk metrics in predicting high-risk states and counties.

15. Trade-offs and the precautionary principle: A lexicographic utility approach. Risk Anal 2023;43:260-268. PMID: 35086159; DOI: 10.1111/risa.13892.
Abstract
The precautionary principle is often argued to be irrational because it cannot adequately explain how resources should be distributed across multiple possible catastrophes or between catastrophic and noncatastrophic risks. We address this problem of trade-offs by extending a recently proposed formal interpretation of the precautionary principle (PP) within a lexical utility framework and using it to prove results about which distribution of resources maximizes lexical utility when several catastrophic risks exist, given different assumptions. We also explain how our lexical utility interpretation of PP can recommend balanced distributions of resources between disaster prevention and other concerns.
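
A minimal illustration of lexicographic (lexical) utility, with invented numbers: outcomes are scored as tuples whose first coordinate (catastrophe avoidance) strictly dominates the second (ordinary utility). Python's tuple ordering is already lexicographic, so choosing an allocation reduces to a max over tuples.

```python
# Each allocation of a fixed budget across two catastrophic risks and
# ordinary consumption is scored as (P(no catastrophe), ordinary utility).
# Lexical utility: compare on the first coordinate, break ties on the second.
def lexical_score(x1, x2, budget=10.0):
    x_other = budget - x1 - x2
    p_avoid_1 = 1 - 0.5 * (0.5 ** x1)      # invented diminishing-return curves
    p_avoid_2 = 1 - 0.3 * (0.6 ** x2)
    return (p_avoid_1 * p_avoid_2, x_other)  # tuples compare lexicographically

grid = [i * 0.5 for i in range(21)]
best = max(((a, b) for a in grid for b in grid if a + b <= 10.0),
           key=lambda ab: lexical_score(*ab))
print("optimal spend on risks 1 and 2:", best)
```

Because the first coordinate here is continuous, exact ties are rare and ordinary utility almost never breaks one; handling that degeneracy is the kind of trade-off question a lexicographic reading of the precautionary principle has to address.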

16. A Decision-Theoretic Public Health Framework for Heated Tobacco and Nicotine Vaping Products. Int J Environ Res Public Health 2022;19:13431. PMID: 36294011; PMCID: PMC9602493; DOI: 10.3390/ijerph192013431.
Abstract
Markets for nicotine vaping products (NVPs) and heated tobacco products (HTPs) have grown as these products became positioned as harm-reduction alternatives to combusted tobacco products. Herein, we present a public health decision-theoretic framework incorporating different patterns of HTP, NVP, and cigarette use to examine their impacts on population health. Our framework demonstrates that, for individuals who would have otherwise smoked, HTP use may provide public health benefits by enabling cessation or by discouraging smoking initiation and relapse. However, the benefits are reduced if more harmful HTP use replaces less harmful NVP use. HTP use may also negatively impact public health by encouraging smoking by otherwise non-smokers or by encouraging initiation or relapse into smoking. These patterns are directly influenced by industry behavior as well as public policy towards HTPs, NVPs, and cigarettes. While substantial research has been devoted to NVPs, much less is known about HTPs. Better information is needed to more precisely define the health risks of HTPs compared to cigarettes and NVPs, the relative appeal of HTPs to consumers, and the likelihood of later transitioning to smoking or quitting all products. While our analysis provides a framework for gaining that information, it also illustrates the complexities in distinguishing key factors.

17.
Abstract
HIGHLIGHTS:
- Fuzzy-trace theory (FTT) supports practical approaches to improving health and medicine.
- FTT differs in important respects from other theories of decision making, which has implications for how to help patients, providers, and health communicators.
- Gist mental representations emphasize categorical distinctions, reflect understanding in context, and help cue values relevant to health and patient care.
- Understanding the science behind theory is crucial for evidence-based medicine.

18. Principles of risk decision-making. J Toxicol Environ Health B Crit Rev 2022;25:250-278. PMID: 35980104; DOI: 10.1080/10937404.2022.2107591.
Abstract
Risk management decisions in public health require consideration of a number of complex, often conflicting factors. The aim of this review was to propose a set of 10 fundamental principles to guide risk decision-making. Although each of these principles is sound in its own right, different principles may lead the decision-maker in different directions. For example, where the precautionary principle advocates preemptive risk management action under conditions of scientific uncertainty and potentially catastrophic consequences, the principle of risk-based decision-making encourages decision-makers to focus on established and modifiable risks, where a return on the investment in risk management is all but guaranteed in the near term. To evaluate the applicability of the 10 principles in practice, we consider 10 diverse risk issues of broad concern and explore which principles are most appropriate in different contexts. The 10 principles presented here afford substantive insight into the process of risk management decision-making, although decision-makers will ultimately need to exercise judgment in reaching appropriate risk decisions, accounting for all of the scientific and extra-scientific factors relevant to the risk decision at hand.

19. Exploring the Possible: A Unifying Cognitive and Evolutionary Approach to Art. Front Psychol 2022;12:787789. PMID: 35401289; PMCID: PMC8992540; DOI: 10.3389/fpsyg.2021.787789.
Abstract
The subjective delight associated with the creative arts poses a well-known challenge to an integrated causal analysis of human psychology. Here we examine the distal causes of art in terms of an irreducibly risky search in a vast phase space of cognition and behavior. To explore means to engage in an activity that may result in a zero or negative payoff. Moreover, you may be unable to assess the risks with any certainty; the costs might spiral out of control. At the same time, the known alternatives may simply not be viable; natural selection has no problems acting on the failure to locate new habitable subspaces. This represents the hard problem of evolution: there is no recurring procedure that will reliably deliver the benefits of a successful exploration. We propose to locate the emergence of play and art in the tension between the irreducible risks of exploration and its potential benefits, and examine the complex suite of adaptations that has emerged to solve, however imperfectly, the hard problem of evolution. This includes adaptations for lowering the cost of exploration and strategies for open-ended yet loosely targeted searches. We argue that the ability to become aware of possible actions, to evaluate their respective merits, and to explore and develop new strategies of perception, thinking, and action have had a major impact on human survival and reproduction and have been subject to persistent natural selection. The arts, we suggest, represent a distinct cognitive mode of pushing the boundaries of what is familiar and known into new areas of perceptual, emotional, and agentive exploration and discovery, characterized by a proximal motivation of intrinsic enjoyment.

20. An alternative metric for evaluating the potential patient benefit of response-adaptive randomization procedures. Biometrics 2022. PMID: 35394063; DOI: 10.1111/biom.13673.
Abstract
When planning a two-arm group sequential clinical trial with a binary primary outcome that has severe implications for quality of life (e.g., mortality), investigators may strive to find the design that maximizes in-trial patient benefit. In such cases, Bayesian response-adaptive randomization (BRAR) is often considered because it can alter the allocation ratio throughout the trial in favor of the treatment that is currently performing better. Although previous studies have recommended fixed randomization over BRAR based on patient benefit metrics calculated from the realized trial sample size, these comparisons have been limited by failures to hold type I and II error rates constant across designs or to consider the impacts on all individuals directly affected by the design choice. In this paper, we propose a metric for comparing designs with the same type I and II error rates that reflects expected outcomes among individuals who would participate in the trial if enrollment were open when they became eligible. We demonstrate how to use the proposed metric to guide the choice of design in the context of two recent trials in persons suffering out-of-hospital cardiac arrest. Using computer simulation, we show that various implementations of group sequential BRAR offer modest improvements with respect to the proposed metric relative to conventional group sequential monitoring alone.
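
A generic sketch of Bayesian response-adaptive randomization via Thompson sampling with Beta posteriors, tracking outcomes among enrolled patients; this illustrates the class of procedures the paper evaluates, not its proposed metric or the group sequential machinery. Success probabilities and sample size are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
p_true = [0.30, 0.45]                 # hypothetical survival probs per arm
a = np.ones(2)                        # Beta(1,1) priors on each arm
b = np.ones(2)

successes = 0
for patient in range(400):
    # Thompson sampling: allocate to the arm whose posterior draw is larger.
    arm = int(np.argmax(rng.beta(a, b)))
    y = rng.uniform() < p_true[arm]
    successes += y
    a[arm] += y                       # update the chosen arm's posterior
    b[arm] += 1 - y

print(f"in-trial success rate: {successes / 400:.3f} "
      f"(fixed 1:1 would target {np.mean(p_true):.3f})")
```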

21. Value of Information Analysis in Models to Inform Health Policy. Annu Rev Stat Appl 2022;9:95-118. PMID: 35415193; PMCID: PMC7612603; DOI: 10.1146/annurev-statistics-040120-010730.
Abstract
Value of information (VoI) is a decision-theoretic approach to estimating the expected benefits from collecting further information of different kinds, in scientific problems based on combining one or more sources of data. VoI methods can assess the sensitivity of models to different sources of uncertainty and help to set priorities for further data collection. They have been widely applied in healthcare policy making, but the ideas are general to a range of evidence synthesis and decision problems. This article gives a broad overview of VoI methods, explaining the principles behind them, the range of problems that can be tackled with them, and how they can be implemented, and discusses the ongoing challenges in the area.
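
The central quantities such overviews cover can be stated compactly. With decisions d, uncertain parameters θ, prospective data x, and net utility U(d, θ), the standard definitions are (a sketch in the usual notation, not quoted from the article):

```latex
\mathrm{EVPI} = \mathbb{E}_{\theta}\!\left[\max_{d} U(d,\theta)\right]
              - \max_{d}\,\mathbb{E}_{\theta}\!\left[U(d,\theta)\right]

\mathrm{EVPPI}(\phi) = \mathbb{E}_{\phi}\!\left[\max_{d}\,
        \mathbb{E}_{\theta\mid\phi}\, U(d,\theta)\right]
              - \max_{d}\,\mathbb{E}_{\theta}\!\left[U(d,\theta)\right],
        \qquad \phi \subset \theta

\mathrm{EVSI} = \mathbb{E}_{x}\!\left[\max_{d}\,
        \mathbb{E}_{\theta\mid x}\, U(d,\theta)\right]
              - \max_{d}\,\mathbb{E}_{\theta}\!\left[U(d,\theta)\right]
```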

22. Uncertainty and the Value of Information in Risk Prediction Modeling. Med Decis Making 2022;42:661-671. PMID: 35209762; PMCID: PMC9194963; DOI: 10.1177/0272989x221078789.
Abstract
Background: Because of the finite size of the development sample, predicted probabilities from a risk prediction model are inevitably uncertain. We apply value-of-information methodology to evaluate the decision-theoretic implications of prediction uncertainty.
Methods: Adopting a Bayesian perspective, we extend the definition of the expected value of perfect information (EVPI) from decision analysis to net benefit calculations in risk prediction. In the context of model development, EVPI is the expected gain in net benefit by using the correct predictions as opposed to predictions from a proposed model. We suggest bootstrap methods for sampling from the posterior distribution of predictions for EVPI calculation using Monte Carlo simulations. We used subsets of data of various sizes from a clinical trial for predicting mortality after myocardial infarction to show how EVPI changes with sample size.
Results: With a sample size of 1000 and at the prespecified threshold of 2% on predicted risks, the gains in net benefit using the proposed and the correct models were 0.0006 and 0.0011, respectively, resulting in an EVPI of 0.0005 and a relative EVPI of 87%. EVPI was zero only at unrealistically high thresholds (>85%). As expected, EVPI declined with larger samples. We summarize an algorithm for incorporating EVPI calculations into the commonly used bootstrap method for optimism correction.
Conclusion: The development EVPI can be used to decide whether a model can advance to validation, whether it should be abandoned, or whether a larger development sample is needed. Value-of-information methods can be applied to explore decision-theoretic consequences of uncertainty in risk prediction and can complement inferential methods in predictive analytics. R code for implementing this method is provided.

23. The Evolution of Cognitive Biases in Human Learning. J Theor Biol 2022;541:111031. PMID: 35143847; DOI: 10.1016/j.jtbi.2022.111031.
Abstract
Cognitive biases like underinference, the hard-easy effect, and recurrently non-monotonic confidence are evolutionarily puzzling when viewed as persistent flaws in how people learn from environmental feedback. To explain these empirically robust cognitive biases from an evolutionary perspective, we propose a model of ancestral human learning based on the cultural-evolutionary-theoretic hypothesis that the primary selection pressure acting on ancestral human cognition pertained not to learning individually from environmental feedback, but to socially learning task-specific knowledge. In our model-which is inspired by classical Bayesian models-an ancestral human learner (the student) attempts to learn task-specific knowledge from a role model, with the option of switching between different tasks and role models. Suppose that the student's method of learning from their role model is a priori uncertain-in that it can either be successful imitation learning or de facto innovation learning-and the ecological fitness costs of meaningfully retaining environmental feedback are high. Then, the student's fitness-maximizing strategy does not retain their environmental feedback and-depending on the choice of model parameters-can be characterized by all of the aforementioned cognitive biases. Specifically, in order for the evolutionarily optimal estimate of confidence in this learning environment to be recurrently non-monotonic, it is necessary (as long as the environment's marginal payoff function satisfies a plausible quantitative condition) that a positive proportion of ancestral humans' attempted imitation learning was unknowingly implemented as de facto innovation learning. Moreover, an ecologically rational strategy of selective social learning can plausibly cause the evolutionarily optimal estimate of confidence to be recurrently non-monotonic in the empirically documented way: general increase with an intermediate period of decrease.

24. Physician Authority, Family Choice, and the Best Interest of the Child. Camb Q Healthc Ethics 2022;31:34-39. PMID: 35049450; DOI: 10.1017/s0963180121000463.
Abstract
Two of the most poignant decisions in pediatrics concern disagreements between physicians and families over imperiled newborns. When can the family demand more life-sustaining treatment (LST) than physicians want to provide? When can it properly ask for less? The author looks at these questions from the point of view of decision theory, and first argues that insofar as the family acts in the child's best interest, its choices cannot be constrained, and that the maximax and minimax strategies are equally in the child's best interest. He then proposes a guideline according to which the family can demand LST if it is physiologically possible to preserve a life the child can be expected to welcome, and refuse such treatment if it causes suffering that is "more than can be borne" even if an uncompromised life is expected to emerge.

25. The Optimal Sample Size for Usability Testing, From the Manufacturer's Perspective: A Value-of-Information Approach. Value Health 2022;25:116-124. PMID: 35031090; DOI: 10.1016/j.jval.2021.07.010.
Abstract
OBJECTIVES: For medical devices, a usability assessment is mandatory for market access; the objective is to detect potentially harmful use errors that stem from the device's design. The manufacturer assesses the final version of the device and determines the risk-benefit ratio for remaining errors. Nevertheless, the decision rule currently used to determine the sample size for this testing has statistical limitations and lacks a clear decision-making perspective.
METHODS: As an alternative, we developed a value-of-information analysis from the medical device manufacturer's perspective. The consequences of use errors not detected during usability testing, and the errors' probability of occurrence, were embedded in a loss function. The value of further testing was assessed as a reduction in the expected loss for the manufacturer. The optimal sample size was determined using the expected net benefit of sampling (ENBS), the difference between the value provided by new participants and the cost of their inclusion.
RESULTS: The value-of-information approach was applied to a real usability test of a needle-free adrenaline autoinjector. The initial estimate (performed on the first n = 20 participants) gave an optimal sample size of 100 participants and an ENBS of €255,453. This estimate was updated iteratively as new participants were included. After the inclusion of 90 participants, the ENBS was null for any sample size; the cost of adding more participants outweighed the expected value of information, and the study could therefore be stopped.
CONCLUSIONS: On the basis of these results, our method appears to be highly suitable for sample size estimation in the usability testing of medical devices before market access.
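
A stylized ENBS calculation under a standard usability-testing assumption: each use error i is detected by any given participant with probability p_i, so it survives n participants with probability (1 - p_i)^n. Expected loss falls with n, inclusion costs rise linearly, and the optimal n maximizes the difference. All numbers are invented, not the autoinjector study's.

```python
import numpy as np

# Hypothetical use errors: (detection prob per participant, loss if undetected, EUR).
errors = [(0.31, 200_000), (0.10, 500_000), (0.05, 900_000)]
cost_per_participant = 2_000.0

def expected_loss(n):
    return sum(loss * (1 - p) ** n for p, loss in errors)

def enbs(n):
    # Value of testing n participants = reduction in expected loss relative to
    # no testing, minus the cost of including them.
    return (expected_loss(0) - expected_loss(n)) - cost_per_participant * n

ns = np.arange(0, 201)
best = ns[np.argmax([enbs(n) for n in ns])]
print(f"optimal sample size ~ {best}, ENBS = {enbs(best):,.0f} EUR")
```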

26. Why ex post peer review encourages high-risk research while ex ante review discourages it. Proc Natl Acad Sci U S A 2021;118:2111615118. PMID: 34921115; DOI: 10.1073/pnas.2111615118.
Abstract
Peer review is an integral component of contemporary science. While peer review focuses attention on promising and interesting science, it also encourages scientists to pursue some questions at the expense of others. Here, we use ideas from forecasting assessment to examine how two modes of peer review-ex ante review of proposals for future work and ex post review of completed science-motivate scientists to favor some questions instead of others. Our main result is that ex ante and ex post peer review push investigators toward distinct sets of scientific questions. This tension arises because ex post review allows investigators to leverage their own scientific beliefs to generate results that others will find surprising, whereas ex ante review does not. Moreover, ex ante review will favor different research questions depending on whether reviewers rank proposals in anticipation of changes to their own personal beliefs or to the beliefs of their peers. The tension between ex ante and ex post review puts investigators in a bind because most researchers need to find projects that will survive both. By unpacking the tension between these two modes of review, we can understand how they shape the landscape of science and how changes to peer review might shift scientific activity in unforeseen directions.

27. Using social values in the prioritization of research: Quantitative examples and generalizations. Ecol Evol 2021;11:18000-18010. PMID: 35003652; PMCID: PMC8717289; DOI: 10.1002/ece3.8394.
Abstract
Identifying critical uncertainties about ecological systems can help prioritize research efforts intended to inform management decisions. However, focusing exclusively on the ecological system neglects the objectives of natural resource managers and the associated social values tied to the risks and rewards of actions. I demonstrate how to prioritize research efforts for a harvested population by applying the expected value of perfect information (EVPI) to harvest decisions made with a density-independent matrix population model. Research priorities identified by EVPI diverge from priorities identified by matrix elasticity analyses that ignore social utility. Using a density-dependent harvest model, the value of information about the intrinsic productivity of a population is shown to be sensitive to the socially determined penalty for implementing a harvest rate that deviates from the goal because of imperfect estimation.
Synthesis and applications: The effect of including social values in harvest decision-making depends on the assumed population model, the uncertainty in population vital rates, and the particular form of the utility function used to represent the risks and rewards of harvest. EVPI analyses that include the perceived utility of different outcomes can be used by managers seeking to optimize monitoring and research spending. Collaboration between applied ecologists and social scientists who quantitatively measure people's values is needed in many structured decision-making processes.
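
A worked discrete EVPI example in the harvest spirit of this abstract, with invented numbers: two equally likely states of intrinsic productivity, three candidate harvest rates, and a utility that penalizes overharvest more than underharvest (the social-value ingredient). EVPI is the expected utility achievable with the state known minus the best expected utility without it.

```python
import numpy as np

# Rows: actions (harvest rate low/med/high); columns: states (productivity low/high).
# Utilities encode an asymmetric social penalty for overharvesting.
U = np.array([[ 4.0,  5.0],    # low harvest: safe but forgoes yield
              [ 3.0,  8.0],    # medium harvest
              [-6.0, 10.0]])   # high harvest: disastrous if productivity is low
p = np.array([0.5, 0.5])       # prior over states

best_without_info = (U @ p).max()            # pick one action under uncertainty
best_with_info = (U.max(axis=0) * p).sum()   # pick per-state after perfect info
evpi = best_with_info - best_without_info
print(f"EVPI = {evpi:.2f} utility units")    # upper bound on research value
```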

28. Probability-of-decision interval 3+3 (POD-i3+3) design for phase I dose finding trials with late-onset toxicity. Stat Methods Med Res 2021;31:534-548. PMID: 34806915; DOI: 10.1177/09622802211052746.
Abstract
Late-onset toxicities often occur in phase I trials investigating novel immunotherapies and molecularly targeted therapies. In trials with cohort-based designs (such as the modified toxicity probability interval, Bayesian optimal interval, and i3+3 designs), patients are often turned away because the current cohort is still being followed without definite dose-limiting toxicities, which prolongs trial duration and wastes patient resources. In this paper, we incorporate a probability-of-decision framework into the i3+3 design and allow real-time dosing inference when the next patient becomes available. Both the follow-up time for pending patients and the time to dose-limiting toxicity for observed patients are used in calculating the posterior probability of each possible dosing decision. An intensive simulation study is conducted to evaluate the operating characteristics of the newly proposed probability-of-decision i3+3 design under various dosing scenarios and patient accrual settings. Results show that the probability-of-decision i3+3 design achieves comparable safety and reliability but a much shorter trial duration than the complete designs.

29. A guide to value of information methods for prioritising research in health impact modelling. Epidemiol Methods 2021;10:20210012. PMID: 35127249; PMCID: PMC7612319; DOI: 10.1515/em-2021-0012.
Abstract
Health impact simulation models are used to predict how a proposed policy or scenario will affect population health outcomes. These models represent the typically complex systems that describe how the scenarios affect exposures to risk factors for disease or injury (e.g. air pollution or physical inactivity), and how these risk factors are related to measures of population health (e.g. expected survival). These models are informed by multiple sources of data and are subject to multiple sources of uncertainty. We want to describe which sources of uncertainty contribute most to uncertainty about the estimate or decision arising from the model. Furthermore, we want to decide where further research should be focused to obtain further data to reduce this uncertainty, and what form that research might take. This article presents a tutorial in the use of Value of Information methods for uncertainty analysis and research prioritisation in health impact simulation models. These methods are based on Bayesian decision-theoretic principles and quantify the expected benefits from further information of different kinds. The expected value of partial perfect information about a parameter measures the sensitivity of a decision or estimate to uncertainty about that parameter. The expected value of sample information represents the expected benefit from a specific proposed study to get better information about the parameter. The methods are applicable both to situations where the model is used to make a decision between alternative policies, and to situations where the model is simply used to estimate a quantity (such as expected gains in survival under a scenario). This paper explains how to calculate and interpret the expected value of information in the context of a simple model describing the health impacts of air pollution from motorised transport. We provide a general-purpose R package and full code to reproduce the example analyses.
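
A compact sketch of one standard way to compute the expected value of partial perfect information (EVPPI) when the model is only available through Monte Carlo samples: the regression method of Strong and Oakley, here with a simple polynomial regression of the decision-specific payoff on the parameter of interest. The toy model (two policies whose payoff depends on two uncertain parameters) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 20_000
theta1 = rng.normal(1.0, 0.4, N)    # e.g. dose-response slope (uncertain)
theta2 = rng.normal(0.5, 0.3, N)    # e.g. baseline exposure (uncertain)

# Payoff (e.g. expected survival gain) of two policies per parameter draw.
nb = np.column_stack([2.0 * theta1 - 1.5 * theta2,   # policy A
                      np.full(N, 0.8)])              # policy B (status quo)

overall_best = nb.mean(axis=0).max()

# EVPPI(theta1) = E_theta1[ max_d E[NB_d | theta1] ] - overall_best.
# Estimate the inner conditional expectations by regressing each NB_d on theta1.
cond = np.column_stack([
    np.polyval(np.polyfit(theta1, nb[:, d], deg=3), theta1) for d in range(2)
])
evppi_theta1 = cond.max(axis=1).mean() - overall_best
print(f"EVPPI(theta1) ~ {evppi_theta1:.4f}")
```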

30. Robust Decision Analysis under Severe Uncertainty and Ambiguous Tradeoffs: An Invasive Species Case Study. Risk Anal 2021;41:2140-2153. PMID: 33951209; DOI: 10.1111/risa.13722.
Abstract
Bayesian decision analysis is a useful method for risk management decisions but is limited in its ability to handle severe uncertainty in knowledge and value ambiguity in management objectives. We study the use of robust Bayesian decision analysis for problems where one or both of these issues arise. The robust Bayesian approach models severe uncertainty through bounds on probability distributions, and value ambiguity through bounds on utility functions. To incorporate data, standard Bayesian updating is applied to the entire set of distributions. To elicit our expert's utility, representing the value of different management objectives, we use a modified version of the swing weighting procedure that can cope with severe value ambiguity. We demonstrate these methods on an environmental management problem: eradicating alien invasive marmorkrebs recently discovered in Sweden, which required a rapid response despite substantial knowledge gaps about whether the species was still present (severe uncertainty) and despite difficult tradeoffs among competing interests (value ambiguity). We find that the decision alternatives of draining the system and removing individuals in combination with dredging and sieving, with or without a degradable biocide, or of increasing pH, are consistently bad across the entire range of probability and utility bounds. This case study shows how robust Bayesian decision analysis provides a transparent methodology for integrating information in risk management problems where little data are available and/or the tradeoffs are ambiguous.
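
A minimal sketch of the robust screening step described here, with invented numbers: expected utility is evaluated at the corner combinations of interval-valued probabilities and utilities, and an action is flagged "consistently bad" if its best-case expected utility falls below another action's worst case. A full robust Bayesian analysis works with entire sets of distributions; corners suffice for this linear toy.

```python
# Two states: the invasive species is still present or already absent.
p_lo, p_hi = 0.2, 0.8                     # interval prior on "present"
# Interval utilities per action: [(u_present lo, hi), (u_absent lo, hi)].
actions = {
    "drain + remove + dredge/sieve": [(0.6, 0.9), (0.3, 0.5)],
    "increase pH":                   [(0.05, 0.2), (0.1, 0.3)],
    "do nothing":                    [(0.0, 0.1), (0.7, 0.9)],
}

def eu_bounds(util):
    """Expected-utility bounds; corners suffice because EU is linear here."""
    eus = [p * u_p + (1 - p) * u_a
           for p in (p_lo, p_hi)
           for u_p in util[0]
           for u_a in util[1]]
    return min(eus), max(eus)

bounds = {a: eu_bounds(u) for a, u in actions.items()}
for a, (lo, hi) in bounds.items():
    bad = any(lo2 > hi for b, (lo2, _) in bounds.items() if b != a)
    flag = "  <- consistently bad" if bad else ""
    print(f"{a}: EU in [{lo:.2f}, {hi:.2f}]{flag}")
```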
|
31
|
A framework for probabilistic weather forecast post-processing across models and lead times using machine learning. PHILOSOPHICAL TRANSACTIONS. SERIES A, MATHEMATICAL, PHYSICAL, AND ENGINEERING SCIENCES 2021; 379:20200099. [PMID: 33583271 PMCID: PMC7898129 DOI: 10.1098/rsta.2020.0099] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Forecasting the weather is an increasingly data-intensive exercise. Numerical weather prediction (NWP) models are becoming more complex, with higher resolutions, and there are increasing numbers of different models in operation. While the forecasting skill of NWP models continues to improve, the number and complexity of these models poses a new challenge for the operational meteorologist: how should the information from all available models, each with its own biases and limitations, be combined in order to provide stakeholders with well-calibrated probabilistic forecasts to use in decision making? In this paper, we use a road surface temperature example to demonstrate a three-stage framework that uses machine learning to bridge the gap between sets of separate forecasts from NWP models and the 'ideal' forecast for decision support: probabilities of future weather outcomes. First, we use quantile regression forests to learn the error profile of each numerical model, and use these to apply empirically derived probability distributions to forecasts. Second, we combine these probabilistic forecasts using quantile averaging. Third, we interpolate between the aggregate quantiles in order to generate a full predictive distribution, which we demonstrate has properties suitable for decision support. Our results suggest that this approach provides an effective and operationally viable framework for the cohesive post-processing of weather forecasts across multiple models and lead times to produce a well-calibrated probabilistic output. This article is part of the theme issue 'Machine learning for weather and climate modelling'.
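A sketch of stages two and three (quantile averaging, then interpolation to a full predictive CDF) in Python; the per-model quantiles below are placeholders standing in for stage-one output from the quantile regression forests.

```python
import numpy as np

tau = np.array([0.05, 0.25, 0.50, 0.75, 0.95])       # quantile levels
q_model_a = np.array([-1.2, 0.1, 0.8, 1.6, 3.0])     # road temp quantiles, model A
q_model_b = np.array([-0.8, 0.3, 1.1, 1.9, 3.4])     # road temp quantiles, model B

# Stage 2: combine probabilistic forecasts by averaging quantiles
# (sometimes called Vincentization), rather than averaging the CDFs.
q_avg = (q_model_a + q_model_b) / 2.0

# Stage 3: interpolate between the aggregate quantiles to get a full CDF,
# from which any decision-relevant probability can be read off.
def prob_below(x):
    """P(temperature <= x) by linear interpolation of (quantile, level) pairs."""
    return float(np.interp(x, q_avg, tau, left=0.0, right=1.0))

print(f"P(road surface temperature <= 0 C) ~ {prob_below(0.0):.2f}")
```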
|
32
|
Gazing at Social Interactions Between Foraging and Decision Theory. Front Neurorobot 2021; 15:639999. [PMID: 33859558 PMCID: PMC8042312 DOI: 10.3389/fnbot.2021.639999] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Accepted: 03/09/2021] [Indexed: 11/30/2022] Open
Abstract
Finding the underlying principles of social attention in humans seems to be essential for the design of the interaction between natural and artificial agents. Here, we focus on the computational modeling of gaze dynamics as exhibited by humans when perceiving socially relevant multimodal information. The audio-visual landscape of social interactions is distilled into a number of multimodal patches that convey different social value, and we work under the general frame of foraging as a tradeoff between local patch exploitation and landscape exploration. We show that the spatio-temporal dynamics of gaze shifts can be parsimoniously described by Langevin-type stochastic differential equations triggering a decision equation over time. In particular, value-based patch choice and handling is reduced to a simple multi-alternative perceptual decision making that relies on a race-to-threshold between independent continuous-time perceptual evidence integrators, each integrator being associated with a patch.
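The race-to-threshold mechanism can be made concrete with a short simulation: one independent drift-diffusion (Langevin-type) evidence integrator per patch, with the first to hit threshold determining the next gaze target. The drift values (one per patch, standing for social value), noise level, and threshold are illustrative, not fitted parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def race_to_threshold(drifts, threshold=1.0, noise=0.35, dt=1e-3, t_max=5.0):
    """Euler-Maruyama simulation of independent accumulators racing to a bound."""
    drifts = np.asarray(drifts, dtype=float)
    x = np.zeros(drifts.size)                 # accumulated evidence per patch
    for step in range(int(t_max / dt)):
        x += drifts * dt + noise * np.sqrt(dt) * rng.standard_normal(drifts.size)
        crossed = np.flatnonzero(x >= threshold)
        if crossed.size:                      # first integrator(s) to cross win
            return int(crossed[np.argmax(x[crossed])]), (step + 1) * dt
    return None, t_max                        # no decision within t_max

choice, rt = race_to_threshold(drifts=[0.9, 0.6, 0.3])  # e.g. face, hands, background
print(f"gaze shifts to patch {choice} after {rt:.3f} s")
```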
|
33
|
Logistic retainment interval dose exploration design for Phase I clinical trials of cytotoxic agents. Pharm Stat 2021; 20:850-863. [PMID: 33738967 DOI: 10.1002/pst.2114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Revised: 12/29/2020] [Accepted: 02/22/2021] [Indexed: 11/10/2022]
Abstract
Phase I studies of a cytotoxic agent often aim to identify the dose that provides an investigator specified target dose-limiting toxicity (DLT) probability. In practice, an initial cohort receives a dose with a putative low DLT probability, and subsequent dosing follows by consecutively deciding whether to retain the current dose, escalate to the adjacent higher dose, or de-escalate to the adjacent lower dose. This article proposes a Phase I design derived using a Bayesian decision-theoretic approach to this sequential decision-making process. The design consecutively chooses the action that minimizes posterior expected loss where the loss reflects the distance on the log-odds scale between the target and the DLT probability of the dose that would be given to the next cohort under the corresponding action. A logistic model is assumed for the log odds of a DLT at the current dose with a weakly informative t-distribution prior centered at the target. The key design parameters are the pre-specified odds ratios for the DLT probabilities at the adjacent higher and lower doses. Dosing rules may be pre-tabulated, as these only depend on the outcomes at the current dose, which greatly facilitates implementation. The recommended default version of the proposed design improves dose selection relative to many established designs across a variety of scenarios.
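A sketch of the decision rule under our own simplifying choices: a grid posterior for the log-odds of a DLT at the current dose, an unnormalised t prior centred at the target, and a pre-specified odds ratio linking adjacent doses. The numbers are illustrative, not the paper's recommended defaults.

```python
import numpy as np

target = 0.25                               # target DLT probability
lo_target = np.log(target / (1 - target))   # target on the log-odds scale
log_or = np.log(2.0)                        # assumed odds ratio between adjacent doses

def choose_action(n_dlt, n_pat, df=5, scale=2.0):
    grid = np.linspace(lo_target - 8, lo_target + 8, 2001)  # log-odds of DLT, current dose
    prior = (1 + ((grid - lo_target) / scale) ** 2 / df) ** (-(df + 1) / 2)  # t prior
    p = 1 / (1 + np.exp(-grid))
    post = prior * p**n_dlt * (1 - p)**(n_pat - n_dlt)      # prior x binomial likelihood
    post /= post.sum()
    # Loss of an action = distance on the log-odds scale between the target and
    # the DLT probability of the dose the next cohort would receive under it.
    shift = {"de-escalate": -log_or, "retain": 0.0, "escalate": +log_or}
    exp_loss = {a: float(np.sum(post * np.abs(lo_target - (grid + s))))
                for a, s in shift.items()}
    return min(exp_loss, key=exp_loss.get), exp_loss

action, losses = choose_action(n_dlt=1, n_pat=6)
print(action, {a: round(l, 3) for a, l in losses.items()})
```

Because the expected losses depend only on the outcomes at the current dose, a table of (n_dlt, n_pat) to action can be computed once in advance, which is the pre-tabulation property the abstract highlights.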
|
34
|
Information Theory for Agents in Artificial Intelligence, Psychology, and Economics. ENTROPY 2021; 23:e23030310. [PMID: 33800724 PMCID: PMC8001993 DOI: 10.3390/e23030310] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Revised: 03/02/2021] [Accepted: 03/04/2021] [Indexed: 12/05/2022]
Abstract
This review looks at some of the central relationships between artificial intelligence, psychology, and economics through the lens of information theory, specifically focusing on formal models of decision theory. In doing so we look at a particular approach that each field has adopted and how information theory has informed the development of the ideas of each field. A key theme is expected utility theory, its connection to information theory, the Bayesian approach to decision-making, and forms of (bounded) rationality. What emerges from this review is a broadly unified formal perspective derived from three very different starting points that reflect the unique principles of each field. Each of the three approaches reviewed can, in principle at least, be implemented in a computational model in such a way that, with sufficient computational power, they could be compared with human abilities in complex tasks. However, a central critique that can be applied to all three approaches was first put forward by Savage in The Foundations of Statistics and recently brought to the fore by the economist Binmore: Bayesian approaches to decision-making work in what Savage called 'small worlds' but cannot work in 'large worlds'. This point, in various different guises, is central to some of the current debates about the power of artificial intelligence and its relationship to human-like learning and decision-making. Recent work on artificial intelligence has gone some way to bridging this gap, but significant questions remain to be answered in all three fields in order to make progress in producing realistic models of human decision-making in the real world in which we live.
|
35
|
Abstract
Multi-arm multi-stage clinical trials, in which more than two drugs are investigated simultaneously, provide gains over separate single- or two-arm trials. In this paper we propose a generic Bayesian adaptive decision-theoretic design for multi-arm multi-stage clinical trials with K (K ≥ 2) arms. The basic idea is that after each stage, a decision about continuing the trial and accruing patients for an additional stage is made on the basis of the expected reduction in loss. For this purpose, we define a loss function that incorporates the patient accrual costs as well as the costs associated with an incorrect decision at the end of the trial. An attractive feature of our loss function is that its estimation is computationally undemanding, even when K > 2. We evaluate the frequentist operating characteristics for settings with a binary outcome and multiple experimental arms, considering situations both with and without a control arm. In a simulation study, we show that our design increases the probability of making a correct decision at the end of the trial compared with nonadaptive designs and adaptive two-stage designs.
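A minimal sketch of the stage-wise continue/stop comparison for binary outcomes with Beta(1,1) priors. The loss structure (accrual cost per patient plus a cost for selecting an arm that is not truly best), the cost constants, and the interim data are our own illustration, not the paper's loss function.

```python
import numpy as np

rng = np.random.default_rng(1)

C_PATIENT, C_ERROR, COHORT = 1.0, 500.0, 20   # illustrative costs and cohort size

def p_correct_selection(succ, n, draws=4000):
    """Posterior probability that the arm with the best posterior mean truly
    has the highest response rate, estimated by posterior sampling."""
    theta = rng.beta(1 + succ, 1 + n - succ, size=(draws, len(succ)))
    pick = np.argmax((1 + succ) / (2 + n))
    return np.mean(np.argmax(theta, axis=1) == pick)

def expected_loss_stop(succ, n):
    return C_ERROR * (1 - p_correct_selection(succ, n))

def expected_loss_continue(succ, n, sims=200):
    """Predictive simulation of one more stage on every arm."""
    losses = []
    for _ in range(sims):
        theta = rng.beta(1 + succ, 1 + n - succ)   # one posterior draw per arm
        new = rng.binomial(COHORT, theta)          # predictive stage outcomes
        losses.append(expected_loss_stop(succ + new, n + COHORT))
    return C_PATIENT * COHORT * len(succ) + np.mean(losses)

succ, n = np.array([9, 12, 15]), np.array([30, 30, 30])   # interim data
stop, cont = expected_loss_stop(succ, n), expected_loss_continue(succ, n)
print(f"stop: {stop:.1f}  continue: {cont:.1f} -> {'continue' if cont < stop else 'stop'}")
```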
|
36
|
A Unified Theory of Human Judgements and Decision-Making under Uncertainty. ENTROPY 2020; 22:e22070738. [PMID: 33286509 PMCID: PMC7517281 DOI: 10.3390/e22070738] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 06/06/2020] [Revised: 06/29/2020] [Accepted: 07/01/2020] [Indexed: 11/17/2022]
Abstract
Growing empirical evidence reveals that traditional set-theoretic structures cannot in general be applied to cognitive phenomena. This has raised several problems, as illustrated, for example, by probability judgement errors and decision-making (DM) errors. We propose here a unified theoretical perspective which applies the mathematical formalism of quantum theory in Hilbert space to cognitive domains. In this perspective, judgements and decisions are described as intrinsically non-deterministic processes which involve a contextual interaction between a conceptual entity and the cognitive context surrounding it. When a given phenomenon is considered, the quantum-theoretic framework identifies entities, states, contexts, properties and outcome statistics, and applies the mathematical formalism of quantum theory to model the considered phenomenon. We explain how the quantum-theoretic framework works in a variety of judgement and decision situations where systematic and significant deviations from classicality occur.
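A toy numeric example of the formalism: two non-commuting projectors (two "questions") acting on a unit state vector produce order effects in sequential judgement probabilities, which no single classical probability space can reproduce. The state and angles are arbitrary choices for illustration.

```python
import numpy as np

def projector(angle):
    """Rank-one projector onto a direction in a real two-dimensional Hilbert space."""
    v = np.array([np.cos(angle), np.sin(angle)])
    return np.outer(v, v)

psi = np.array([1.0, 0.0])             # cognitive state (unit vector)
A, B = projector(0.3), projector(1.0)  # two incompatible judgements: AB != BA

def p_sequential(P, Q, state):
    """P('yes' to P, then 'yes' to Q) = ||Q P psi||^2 (Lueders' rule)."""
    return float(np.linalg.norm(Q @ P @ state) ** 2)

print("P(A then B) =", round(p_sequential(A, B, psi), 4))
print("P(B then A) =", round(p_sequential(B, A, psi), 4))  # differs: order effect
```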
|
37
|
Optimality of testing procedures for survival data in the nonproportional hazards setting. Biometrics 2020; 77:587-598. [PMID: 32535892 DOI: 10.1111/biom.13315] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2019] [Revised: 05/25/2020] [Accepted: 05/27/2020] [Indexed: 02/06/2023]
Abstract
Most statistical tests for treatment effects used in randomized clinical trials with survival outcomes are based on the proportional hazards assumption, which often fails in practice. Data from early exploratory studies may provide evidence of nonproportional hazards, which can guide the choice of alternative tests in the design of practice-changing confirmatory trials. We developed a test to detect treatment effects in a late-stage trial, which accounts for the deviations from proportional hazards suggested by early-stage data. Conditional on early-stage data, among all tests that control the frequentist Type I error rate at a fixed α level, our testing procedure maximizes the Bayesian predictive probability that the study will demonstrate the efficacy of the experimental treatment. Hence, the proposed test provides a useful benchmark for other tests commonly used in the presence of nonproportional hazards, for example, weighted log-rank tests. We illustrate this approach in simulations based on data from a published cancer immunotherapy phase III trial.
|
38
|
Modeling and measurement of signaling outcomes affecting decision making in noisy intracellular networks using machine learning methods. Integr Biol (Camb) 2020; 12:122-138. [PMID: 32424393 DOI: 10.1093/intbio/zyaa009] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2019] [Revised: 04/03/2020] [Accepted: 04/06/2020] [Indexed: 12/30/2022]
Abstract
Characterization of decision-making in cells in response to received signals is of importance for understanding how cell fate is determined. The problem becomes multi-faceted and complex when we consider cellular heterogeneity and dynamics of biochemical processes. In this paper, we present a unified set of decision-theoretic, machine learning and statistical signal processing methods and metrics to model the precision of signaling decisions, in the presence of uncertainty, using single cell data. First, we introduce erroneous decisions that may result from signaling processes and identify false alarms and miss events associated with such decisions. Then, we present an optimal decision strategy which minimizes the total decision error probability. Additionally, we demonstrate how graphing receiver operating characteristic curves conveniently reveals the trade-off between false alarm and miss probabilities associated with different cell responses. Furthermore, we extend the introduced framework to incorporate the dynamics of biochemical processes and reactions in a cell, using multi-time point measurements and multi-dimensional outcome analysis and decision-making algorithms. The introduced multivariate signaling outcome modeling framework can be used to analyze several molecular species measured at the same or different time instants. We also show how the developed binary outcome analysis and decision-making approach can be extended to more than two possible outcomes. As an example and to show how the introduced methods can be used in practice, we apply them to single cell data of PTEN, an important intracellular regulatory molecule in a p53 system, in wild-type and abnormal cells. The unified signaling outcome modeling framework presented here can be applied to various organisms ranging from viruses, bacteria, yeast and lower metazoans to more complex organisms such as mammalian cells. Ultimately, this signaling outcome modeling approach can be utilized to better understand the transition from physiological to pathological conditions such as inflammation, various cancers and autoimmune diseases.
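A sketch of the binary case: the likelihood-ratio rule that minimises the total error probability for two class-conditional response distributions, and the ROC trade-off obtained by sweeping the decision threshold. The Gaussian model and all parameters are illustrative, not fitted to the PTEN data.

```python
import numpy as np

rng = np.random.default_rng(2)

mu0, mu1, sigma = 0.0, 1.5, 1.0        # class-conditional means / shared sd

def llr(x):
    """Log-likelihood ratio for equal-variance Gaussian hypotheses."""
    return (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2

x0 = rng.normal(mu0, sigma, 100_000)   # responses under H0 (e.g. wild-type cells)
x1 = rng.normal(mu1, sigma, 100_000)   # responses under H1 (e.g. abnormal cells)

# Optimal rule for equal priors: decide H1 when llr(x) > 0, which minimises
# the total (false alarm + miss) error probability.
p_fa, p_miss = np.mean(llr(x0) > 0), np.mean(llr(x1) <= 0)
print(f"false alarm {p_fa:.3f}, miss {p_miss:.3f}, total error {(p_fa + p_miss) / 2:.3f}")

# ROC curve: sweep the threshold to trade false alarms against detections.
for thr in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"thr={thr:+.1f}  P_FA={np.mean(llr(x0) > thr):.3f}  P_D={np.mean(llr(x1) > thr):.3f}")
```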
|
39
|
A Bayesian decision-theoretic approach to incorporate preclinical information into phase I oncology trials. Biom J 2020; 62:1408-1427. [PMID: 32285511 DOI: 10.1002/bimj.201900161] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2019] [Revised: 12/05/2019] [Accepted: 01/31/2020] [Indexed: 11/10/2022]
Abstract
Leveraging preclinical animal data for a phase I oncology trial is appealing yet challenging. In this paper, we use animal data to improve decision-making in a model-based dose-escalation procedure. We make a proposal for how to measure and address a prior-data conflict in a sequential study with a small sample size. Animal data are incorporated via a robust two-component mixture prior for the parameters of the human dose-toxicity relationship. The weights placed on each component of the prior are chosen empirically and updated dynamically as the trial progresses and more data accrue. After completion of each cohort, we use a Bayesian decision-theoretic approach to evaluate the predictive utility of the animal data for the observed human toxicity outcomes, reflecting the degree of agreement between dose-toxicity relationships in animals and humans. The proposed methodology is illustrated through several data examples and an extensive simulation study.
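A sketch of the weighting mechanism for a single dose: a two-component Beta mixture (an informative component hypothetically derived from animal data, plus a vague component) whose weight is updated by each component's marginal likelihood as human data accrue, so the animal prior is discounted automatically under prior-data conflict. All parameters and data are invented.

```python
from math import exp, lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def marginal_lik(y, n, a, b):
    """Beta-binomial marginal likelihood of y DLTs in n patients (the binomial
    coefficient is omitted: it cancels between the two components)."""
    return exp(log_beta(a + y, b + n - y) - log_beta(a, b))

w_animal = 0.8                            # initial weight on the animal component
animal, vague = (2.0, 8.0), (1.0, 1.0)    # hypothetical Beta parameters

y, n = 4, 6                               # human data in conflict with the animal prior
m_a, m_v = marginal_lik(y, n, *animal), marginal_lik(y, n, *vague)
w_post = w_animal * m_a / (w_animal * m_a + (1 - w_animal) * m_v)
p_mean = (w_post * (animal[0] + y) / (animal[0] + animal[1] + n)
          + (1 - w_post) * (vague[0] + y) / (vague[0] + vague[1] + n))
print(f"weight on animal prior after data: {w_post:.2f}; posterior mean DLT prob: {p_mean:.2f}")
```

With these numbers the weight falls from 0.8 to about 0.5, illustrating how conflicting human toxicity outcomes reduce the influence of the animal-derived component.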
|
40
|
Anxiety, avoidance, and sequential evaluation. COMPUTATIONAL PSYCHIATRY (CAMBRIDGE, MASS.) 2020; 4:10.1162/cpsy_a_00026. [PMID: 34036174 PMCID: PMC8143038 DOI: 10.1162/cpsy_a_00026] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/04/2019] [Accepted: 12/29/2019] [Indexed: 01/13/2023]
Abstract
Anxiety disorders are characterized by a range of aberrations in the processing of and response to threat, but there is little clarity about what core pathogenesis might underlie these symptoms. Here we propose that a particular set of unrealistically pessimistic assumptions can distort an agent's behavior and underlie a host of seemingly disparate anxiety symptoms. We formalize this hypothesis in a decision-theoretic analysis of maladaptive avoidance and a reinforcement learning model, which shows how a localized bias in beliefs can formally explain a range of phenomena related to anxiety. The core observation, implicit in standard decision-theoretic accounts of sequential evaluation, is that the potential for avoidance should be protective: if danger can be avoided later, it poses less threat now. We show how a violation of this assumption (via a pessimistic, false belief that later avoidance will be unsuccessful) leads to a characteristic, excessive propagation of fear and avoidance to situations far antecedent of threat. This single deviation can explain a range of features of anxious behavior, including exaggerated threat appraisals, fear generalization, and persistent avoidance. Simulations of the model reproduce laboratory demonstrations of abnormal decision making in anxiety, including in situations of approach-avoid conflict and planning to avoid losses. The model also ties together a number of other seemingly disjoint phenomena in anxious disorders. For instance, learning under the pessimistic bias captures a hypothesis about the role of anxiety in the later development of depression. The bias itself offers a new formalization of classic insights from the psychiatric literature about the central role of maladaptive beliefs about control and self-efficacy in anxiety. This perspective also extends previous computational accounts of beliefs about control in mood disorders, which neglected the sequential aspects of choice.
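The protective-avoidance logic can be seen in a toy backwards-induction example (our illustration, not the paper's exact model): a five-state approach corridor with a threat at the end, where the believed success probability of a last-moment avoidance attempt determines how far back fear propagates.

```python
import numpy as np

THREAT_COST, FORAGE_REWARD = -10.0, 1.0

def values(p_avoid_believed):
    """State values for states 0..4; the threat sits at state 4. At state 3 a
    last-moment avoidance attempt succeeds with the *believed* probability;
    earlier states choose between withdrawing (value 0) and foraging onward."""
    V = np.zeros(5)
    V[4] = THREAT_COST
    V[3] = (1 - p_avoid_believed) * V[4]           # avoidance must be attempted here
    for s in (2, 1, 0):                            # backwards induction
        V[s] = max(0.0, FORAGE_REWARD + V[s + 1])  # withdraw vs forage onward
    return V

print("realistic belief   (p=0.95):", values(0.95))  # approaching pays all the way
print("pessimistic belief (p=0.30):", values(0.30))  # value collapses by state 2,
                                                     # so avoidance starts far from the threat
```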
|
41
|
Automated Tool for Health Utility Assessments: The Gambler II. MDM Policy Pract 2020; 5:2381468320914307. [PMID: 32215320 PMCID: PMC7081474 DOI: 10.1177/2381468320914307] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2019] [Accepted: 02/06/2020] [Indexed: 12/20/2022] Open
Abstract
Background. The Gambler II is a web-based utility assessment tool supporting visual analogue scale (VAS), standard gamble (SG), and time trade-off (TTO) utility assessments. It contains novel features, including an easy-to-use project development authoring tool and the use of multimedia clips for health state descriptions. Objectives. To evaluate the usability and understandability of the patient-facing side of The Gambler, investigate the feasibility of using it, and evaluate its impact on patient knowledge regarding the relevant health states. Materials and Methods. We used The Gambler to assess utilities in a convenience sample of 55 users for common long-term complications of type 2 diabetes mellitus: diabetic neuropathy, diabetic retinopathy, and diabetic foot infection requiring transmetatarsal amputation. Using VAS, SG, and TTO, we collected metadata such as the time spent on each assessment and on the entire assessment process. We evaluated usability and understandability with an adaptation of the System Usability Scale survey, and we evaluated the impact on knowledge through knowledge assessments about these complications before and after use of The Gambler. Results. Overall satisfaction with The Gambler was high, at 4.02 on a 5-point scale. Usability rated highly at 84.93 on a normalized scale between 0 and 100. Knowledge scores increased significantly following use of The Gambler, from a pretest mean of 68% to a posttest mean of 76% (P < 0.01). The average time using the software was approximately 7½ minutes. Conclusions. The Gambler is an easy-to-use and understandable computer-based tool for utility assessment. It is feasible to use within clinical encounters to support shared decision making, and it has unique features that make it a powerful tool for investigators interested in research on health utilities.
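For reference, the standard mappings from a respondent's indifference point to a 0-1 utility that the three supported elicitation methods rely on; these are the textbook formulas, not code from The Gambler II.

```python
def utility_vas(rating, worst=0.0, best=100.0):
    """Visual analogue scale: rescale the mark on the line to [0, 1]."""
    return (rating - worst) / (best - worst)

def utility_sg(p_indifferent):
    """Standard gamble: the utility equals the probability p of full health at
    which the respondent is indifferent between the sure health state and a
    gamble giving full health with probability p and death otherwise."""
    return p_indifferent

def utility_tto(years_in_full_health, years_total):
    """Time trade-off: utility = x/T when x years in full health is judged
    equivalent to T years in the health state."""
    return years_in_full_health / years_total

print(utility_vas(62), utility_sg(0.85), utility_tto(8, 10))
```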
|
42
|
A forgotten γ error. J Eval Clin Pract 2019; 25:751-753. [PMID: 31063238 DOI: 10.1111/jep.13155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/18/2019] [Accepted: 04/05/2019] [Indexed: 11/29/2022]
|
43
|
Linking Decision Theory and Quantitative Microbial Risk Assessment: Tradeoffs Between Compliance and Efficacy for Waterborne Disease Interventions. RISK ANALYSIS : AN OFFICIAL PUBLICATION OF THE SOCIETY FOR RISK ANALYSIS 2019; 39:2214-2226. [PMID: 31529800 PMCID: PMC8634750 DOI: 10.1111/risa.13381] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/04/2018] [Revised: 03/27/2019] [Accepted: 04/05/2019] [Indexed: 05/30/2023]
Abstract
Achieving health gains from the U.N. Sustainable Development Goals of universal coverage for water and sanitation will require interventions that can be widely adopted and maintained. Effectiveness (how an intervention performs based on actual use), as opposed to efficacy, will therefore be central to evaluations of new and existing interventions. Incomplete compliance (when people do not always use the intervention and are therefore exposed to contamination) is thought to be responsible for the lower-than-expected risk reductions observed from water, sanitation, and hygiene interventions based on their efficacy at removing pathogens. We explicitly incorporated decision theory into a quantitative microbial risk assessment (QMRA) model. Specifically, we assume that the usability of household water treatment (HWT) devices (filters and chlorine) decreases as they become more efficacious, due to issues such as taste or flow rates. Simulations were run to examine the tradeoff between device efficacy and usability. For most situations, HWT interventions that trade lower efficacy (i.e., removing fewer pathogens) for higher compliance (i.e., better usability) contribute substantial reductions in diarrheal disease risk compared to devices meeting current World Health Organization efficacy guidelines. Recommendations that take into account both the behavioral and microbiological properties of treatment devices are likely to be more effective at reducing the burden of diarrheal disease than current standards that consider only efficacy.
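A sketch of the efficacy-compliance tradeoff using the exponential dose-response form that is standard in QMRA, with an assumed usability penalty per log10 of removal. All parameter values (infectivity, doses, the compliance curve) are invented for illustration, not the study's calibrated inputs.

```python
import numpy as np

r, dose_raw = 0.001, 20.0             # hypothetical infectivity; daily untreated dose

def daily_risk(dose):
    """Exponential dose-response model: P(infection) = 1 - exp(-r * dose)."""
    return 1 - np.exp(-r * dose)

def annual_risk(log10_removal, compliance):
    """Mix treated and untreated exposure days according to compliance."""
    treated = dose_raw * 10 ** (-log10_removal)
    p_day = compliance * daily_risk(treated) + (1 - compliance) * daily_risk(dose_raw)
    return 1 - (1 - p_day) ** 365

for lr in (1, 2, 3, 4, 6):
    c = max(0.2, 1.0 - 0.12 * lr)     # assumed usability penalty per log10 of removal
    print(f"{lr} log10 removal, compliance {c:.2f}: annual infection risk {annual_risk(lr, c):.3f}")
```

Under these assumptions the least efficacious but most usable device yields the lowest annual risk, which is the qualitative pattern the abstract describes.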
|
44
|
Abstract
Current and future prospects for successfully rebuilding global fisheries remain debated due to uncertain stock status, variable management success, and disruptive environmental change. While scientists routinely account for some of this uncertainty in population models, the mechanisms by which this translates into decision-making and policy are problematic and can lead to unintentional overexploitation. Here, we explicitly track the role of measurement uncertainty and environmental variation in the decision-making process for setting catch quotas. Analyzing 109 well-sampled stocks from all oceans, we show that current practices may attain 55% recovery on average, while richer decision methods borrowed from robotics yield 85% recovery of global stocks by midcentury, higher economic returns, and greater robustness to environmental surprises. These results challenge the consensus that global fisheries can be rebuilt by existing approaches alone, while also underscoring that rebuilding stocks may still be achieved by improved decision-making tools that optimally manage this uncertainty.
|
45
|
Concurrent assessment of epidemiological and operational uncertainties for optimal outbreak control: Ebola as a case study. Proc Biol Sci 2019; 286:20190774. [PMID: 31213182 PMCID: PMC6599986 DOI: 10.1098/rspb.2019.0774] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
Abstract
Determining how best to manage an infectious disease outbreak may be hindered by both epidemiological uncertainty (i.e. about epidemiological processes) and operational uncertainty (i.e. about the effectiveness of candidate interventions). However, these two uncertainties are rarely addressed concurrently in epidemic studies. We present an approach to simultaneously address both sources of uncertainty, to elucidate which source most impedes decision-making. In the case of the 2014 West African Ebola outbreak, epidemiological uncertainty is represented by a large ensemble of published models. Operational uncertainty about three classes of interventions is assessed for a wide range of potential intervention effectiveness. We ranked each intervention by caseload reduction in each model, initially assuming an unlimited budget as a counterfactual. We then assessed the influence of three candidate cost functions relating intervention effectiveness and cost for different budget levels. The improvement in management outcomes to be gained by resolving uncertainty is generally high in this study; appropriate information gain could reduce expected caseload by more than 50%. The ranking of interventions is jointly determined by the underlying epidemiological process, the effectiveness of the interventions and the size of the budget. An epidemiologically effective intervention might not be optimal if its costs outweigh its epidemiological benefit. Under higher-budget conditions, resolution of epidemiological uncertainty is most valuable. When budgets are tight, however, operational and epidemiological uncertainty are equally important. Overall, our study demonstrates that significant reductions in caseload could result from a careful examination of both epidemiological and operational uncertainties within the same modelling structure. This approach can be applied to decision-making for the management of other diseases for which multiple models and multiple interventions are available.
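The core calculation (ranking interventions across an ensemble of epidemiological models and valuing the resolution of model uncertainty) reduces to a small matrix exercise; the caseload numbers below are invented, not the study's ensemble output.

```python
import numpy as np

# Rows = candidate epidemiological models, columns = interventions;
# entries are predicted caseloads (hypothetical).
caseload = np.array([
    [1200,  900, 1500],
    [ 800, 1100,  700],
    [1400,  950, 1300],
    [1000, 1050,  600],
])

best_now = caseload.mean(axis=0).argmin()        # choose before knowing the true model
expected_now = caseload.mean(axis=0).min()
expected_if_resolved = caseload.min(axis=1).mean()  # choose after the true model is known
print(f"choose intervention {best_now}; expected caseload {expected_now:.0f}")
print(f"expected value of resolving model uncertainty: "
      f"{expected_now - expected_if_resolved:.0f} cases averted")
```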
|
46
|
Managing uncertainty in movement knowledge for environmental decisions. Conserv Lett 2019; 12:e12620. [PMID: 31423150 PMCID: PMC6686712 DOI: 10.1111/conl.12620] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2018] [Revised: 08/17/2018] [Accepted: 11/22/2018] [Indexed: 11/27/2022] Open
Abstract
Species' movements affect their response to environmental change but movement knowledge is often highly uncertain. We now have well-established methods to integrate movement knowledge into conservation practice but still lack a framework to deal with uncertainty in movement knowledge for environmental decisions. We provide a framework that distinguishes two dimensions of species' movement that are heavily influenced by uncertainty: knowledge about movement and relevance of movement to environmental decisions. Management decisions can be informed by their position in this knowledge-relevance space. We then outline a framework to support decisions around (1) increasing understanding of the relevance of movement knowledge, (2) increasing robustness of decisions to uncertainties and (3) improving knowledge on species' movement. Our decision-support framework provides guidance for managing movement-related uncertainty in systematic conservation planning, agri-environment schemes, habitat restoration and international biodiversity policy. It caters to different resource levels (time and funding) so that species' movement knowledge can be more effectively integrated into environmental decisions.
|
47
|
Abstract
Ecological management and decision-making typically focus on uncertainty about the future, but surprisingly little is known about how to account for uncertainty of the present: that is, the realities of having only partial or imperfect measurements. Our primary paradigms for handling decisions under uncertainty (the precautionary principle and optimal control) have so far given contradictory results. This paradox is best illustrated in the example of fisheries management, where many ideas that guide thinking about ecological decision-making were first developed. We find that simplistic optimal control approaches have repeatedly concluded that a manager should increase catch quotas when faced with greater uncertainty about the fish biomass. Current best practices take a more precautionary approach, decreasing catch quotas by a fixed amount to account for uncertainty. Using comparisons to both simulated and historical catch data, we find that neither approach is sufficient to avoid stock collapses under moderate observational uncertainty. Using partially observed Markov decision process (POMDP) methods, we demonstrate how this paradox arises from flaws in the standard theory, which contributes to overexploitation of fisheries and increased probability of economic and ecological collapse. In contrast, we find that POMDP-based management avoids such overexploitation while also generating higher economic value. These results have significant implications for how we handle uncertainty in both fisheries and ecological management more generally.
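The ingredient that POMDP methods add, and that both the precautionary rule and naive optimal control lack, is an explicit belief state over the unobserved biomass. A minimal particle-filter sketch of that belief update, with a belief-aware quota rule; the dynamics, noise levels, and quota rule are our illustration, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Prior belief over the true (unobserved) stock biomass.
particles = rng.lognormal(mean=np.log(100.0), sigma=0.5, size=10_000)

def belief_update(particles, observation, obs_sigma=0.3):
    """Reweight particles by a lognormal observation likelihood, then resample."""
    w = np.exp(-0.5 * ((np.log(observation) - np.log(particles)) / obs_sigma) ** 2)
    w /= w.sum()
    return rng.choice(particles, size=particles.size, p=w)

belief = belief_update(particles, observation=80.0)   # noisy survey estimate

# A belief-aware quota: harvest a fraction of a low quantile of the belief,
# hedging against observation error instead of acting on a point estimate.
quota = 0.2 * np.quantile(belief, 0.25)
print(f"posterior mean biomass {belief.mean():.1f}; precautionary quota {quota:.1f}")
```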
|
48
|
Decision Theory-Based COI-SNP Tagging Approach for 126 Scombriformes Species Tagging. Front Genet 2019; 10:259. [PMID: 31001317 PMCID: PMC6456664 DOI: 10.3389/fgene.2019.00259] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2018] [Accepted: 03/08/2019] [Indexed: 12/02/2022] Open
Abstract
The mitochondrial gene cytochrome c oxidase I (COI) is commonly used for DNA barcoding in animals. However, most COI barcode nucleotides are conserved, and sequences longer than about 650 base pairs increase the computational burden for species identification. To solve this problem, we propose a decision theory-based COI SNP tagging (DCST) approach that discriminates species using single nucleotide polymorphisms (SNPs), the variable nucleotides among the sequences of a group of species. Using the example of 126 teleost mackerel fish species (order: Scombriformes), we identified 281 SNPs by aligning and trimming their COI sequences. After constructing decision rules, 49 SNPs across the 126 fish species were selected using the scoring system of the DCST approach. These COI-SNP barcodes were finally transformed into one-dimensional barcode images. Our proposed DCST approach reduces computational complexity and identifies the fewest, most effective SNPs needed to resolve or discriminate species for species tagging.
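A toy version of the tagging idea: greedily choose aligned positions until every species has a unique SNP profile. The five-species alignment and the greedy scoring are our illustration of the selection principle, not the DCST scoring system itself.

```python
import itertools

seqs = {                              # species -> aligned SNP string (hypothetical)
    "S. scombrus": "ACGTA", "S. japonicus": "ACCTA", "T. thynnus": "GCGAA",
    "T. albacares": "GCGTA", "K. pelamis": "GTGTA",
}

def unresolved_pairs(chosen):
    """Species pairs whose profiles over the chosen positions still collide."""
    prof = {sp: tuple(s[i] for i in chosen) for sp, s in seqs.items()}
    return [(a, b) for a, b in itertools.combinations(seqs, 2) if prof[a] == prof[b]]

chosen = []
positions = range(len(next(iter(seqs.values()))))
while unresolved_pairs(chosen):
    # Pick the position that leaves the fewest unresolved species pairs.
    best = min((p for p in positions if p not in chosen),
               key=lambda p: len(unresolved_pairs(chosen + [p])))
    chosen.append(best)

print("tag SNP positions:", chosen)   # a small discriminating set
```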
|
49
|
Implicit decision framing as an unrecognized source of confusion in endangered species classification. CONSERVATION BIOLOGY : THE JOURNAL OF THE SOCIETY FOR CONSERVATION BIOLOGY 2018; 32:1246-1254. [PMID: 29987850 DOI: 10.1111/cobi.13185] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/23/2017] [Revised: 04/20/2018] [Accepted: 05/04/2018] [Indexed: 06/08/2023]
Abstract
Legal classification of species requires scientific and values-based components, and how those components interact depends on how people frame the decision. Is classification a negotiation of trade-offs, a decision on how to allocate conservation efforts, or simply a comparison of the biological status of a species to a legal standard? The answers to problem-framing questions such as these influence decision making in species classifications. In our experience, however, decision makers, staff biologists, and stakeholders often have differing perspectives of the decision problem and assume different framings. In addition to differences between individuals, in some cases it appears individuals themselves are unclear about the decision process, which contributes to regulatory paralysis, litigation, and a loss of trust by agency staff and the public. We present 5 framings: putting species in the right bin, doing right by the species over time, saving the most species on a limited budget, weighing extinction risk against other objectives, and strategic classification to advance conservation. These framings are inspired by elements observed in current classification practices. Putting species in the right bin entails comparing a scientific status assessment with policy thresholds and accounting for potential misclassification costs. Doing right by the species adds a time dimension to the classification decision, and saving the most species on a limited budget classifies a suite of species simultaneously. Weighing extinction risk against other objectives would weigh ecological or socioeconomic concerns in classification decisions, and strategic classification to advance conservation would make negotiation a component of classification. We view these framings as a means to generate thought, discussion, and movement toward selection and application of explicit classification framings. Being explicit about the decision framing could lead decision makers toward more efficient and defensible decisions, reduce internal confusion and external conflict, and support better collaboration between scientists and policy makers.
|
50
|
Using the Value of Information to improve conservation decision making. Biol Rev Camb Philos Soc 2018; 94:629-647. [PMID: 30280477 DOI: 10.1111/brv.12471] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2017] [Revised: 08/31/2018] [Accepted: 09/04/2018] [Indexed: 12/11/2022]
Abstract
Conservation decisions are challenging, not only because they often involve difficult conflicts among outcomes that people value, but because our understanding of the natural world and our effects on it is fraught with uncertainty. Value of Information (VoI) methods provide an approach for understanding and managing uncertainty from the standpoint of the decision maker. These methods are commonly used in other fields (e.g. economics, public health) and are increasingly used in biodiversity conservation. This decision-analytical approach can identify the best management alternative to select where the effectiveness of interventions is uncertain, and can help to decide when to act and when to delay action until after further research. We review the use of VoI in the environmental domain, reflect on the need for greater uptake of VoI, particularly for strategic conservation planning, and suggest promising areas for new research. We also suggest common reporting standards as a means of increasing the leverage of this powerful tool. The environmental science, ecology and biodiversity categories of the Web of Knowledge were searched using the terms 'Value of Information,' 'Expected Value of Perfect Information,' and the abbreviation 'EVPI.' Google Scholar was searched with the same terms, and additionally the terms decision and biology, biodiversity conservation, fish, or ecology. We identified 1225 papers from these searches. Included studies were limited to those that showed an application of VoI in biodiversity conservation rather than simply describing the method. All examples of use of VoI were summarised regarding the application of VoI, the management objectives, the uncertainties, the models used, how the objectives were measured, and the type of VoI. While the use of VoI appears to be on the increase in biodiversity conservation, the reporting of results is highly variable, which can make it difficult to understand the decision context and which uncertainties were considered. Moreover, it was unclear if, and how, the papers informed management and policy interventions, which is why we suggest a range of reporting standards that would aid the use of VoI. The use of VoI in conservation settings is at an early stage. There are opportunities for broader applications, not only for species-focussed management problems, but also for setting local or global research priorities for biodiversity conservation, making funding decisions, or designing or improving protected area networks and management. The long-term benefits of applying VoI methods to biodiversity conservation include a more structured and decision-focused allocation of resources to research.
|