501
Paddock SM. Statistical benchmarks for health care provider performance assessment: a comparison of standard approaches to a hierarchical Bayesian histogram-based method. Health Serv Res 2014; 49:1056-73. [PMID: 24461071] [DOI: 10.1111/1475-6773.12149]
Abstract
OBJECTIVE: To examine how widely used statistical benchmarks of health care provider performance compare with histogram-based statistical benchmarks obtained via hierarchical Bayesian modeling. DATA SOURCES: Publicly available data from 3,240 hospitals during April 2009-March 2010 on two process-of-care measures reported on the Medicare Hospital Compare website. STUDY DESIGN: Secondary data analyses of two process-of-care measures, comparing statistical benchmark estimates and threshold exceedance determinations under various combinations of hospital performance measure estimates and benchmarking approaches. PRINCIPAL FINDINGS: Statistical benchmarking approaches for determining top 10 percent performance varied with respect to which hospitals exceeded the performance benchmark; such differences were not found at the 50 percent threshold. Benchmarks derived from the histogram of provider performance under hierarchical Bayesian modeling provide a compromise between benchmarks based on direct (raw) estimates, which are overdispersed relative to the true distribution of provider performance and prone to high variance for small providers, and benchmarks based on posterior mean provider performance, for which over-shrinkage and under-dispersion relative to the true provider performance distribution are concerns. CONCLUSIONS: Given the rewards and penalties associated with characterizing top performance, the ability of statistical benchmarks to summarize key features of the provider performance distribution should be examined.
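The raw-versus-posterior-mean tension the abstract describes can be seen in a minimal normal-normal shrinkage simulation (illustrative values only, not the paper's data or model):

```python
import numpy as np

rng = np.random.default_rng(0)
n_prov = 3000
tau = 1.0                           # spread of true provider performance
true = rng.normal(0.0, tau, n_prov)
s = rng.uniform(0.5, 2.0, n_prov)   # sampling SD, larger for small providers
raw = true + rng.normal(0.0, s)     # direct (raw) estimates

# Posterior means under a normal-normal model: shrink raw toward the grand mean.
shrink = tau**2 / (tau**2 + s**2)
post_mean = shrink * raw

# Raw estimates are overdispersed, posterior means underdispersed,
# relative to the spread of the true performance distribution.
print(np.std(raw) > np.std(true) > np.std(post_mean))  # True
```

Direct estimates inflate the spread of true performance while posterior means compress it; a histogram-based benchmark reads percentiles off the estimated performance distribution itself, sitting between the two extremes.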
502
Heather N. Interpreting null findings from trials of alcohol brief interventions. Front Psychiatry 2014; 5:85. [PMID: 25076917] [PMCID: PMC4100216] [DOI: 10.3389/fpsyt.2014.00085]
Abstract
The effectiveness of alcohol brief intervention (ABI) has been established by a succession of meta-analyses but, because the effects of ABI are small, null findings from randomized controlled trials are often reported and can sometimes lead to skepticism regarding the benefits of ABI in routine practice. This article first explains why null findings are likely to occur under null hypothesis significance testing (NHST) due to the phenomenon known as "the dance of the p-values." A number of misconceptions about null findings are then described, using as an example the way in which the results of the primary care arm of a recent cluster-randomized trial of ABI in England (the SIPS project) have been misunderstood. These misinterpretations include the fallacy of "proving the null hypothesis": the assumption that the lack of a significant difference between the means of sample groups can be taken as evidence of no difference between their population means. The possible effects of this and related misunderstandings of the SIPS findings are examined. The mistaken inference that reductions in alcohol consumption seen in control groups from baseline to follow-up are evidence of real effects of control group procedures is then discussed, and other possible reasons for such reductions, including regression to the mean, research participation effects, historical trends, and assessment reactivity, are described. From the standpoint of scientific progress, the chief problem with null findings under the conventional NHST approach is that it is not possible to distinguish "evidence of absence" from "absence of evidence." By contrast, under a Bayesian approach such a distinction is possible, and it is explained how this approach could classify ABIs in particular settings or among particular populations as either truly ineffective or of unknown effectiveness, thus accelerating progress in the field of ABI research.
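The "dance of the p-values" is easy to reproduce: with a small true effect and modest power, replications of the same trial swing between significant and null results. A minimal sketch with a hypothetical effect size and sample size:

```python
import math
import random

random.seed(1)

def two_sample_p(n, effect):
    # Two-sided z-test for a difference in means, two arms of size n (SD = 1).
    xs = [random.gauss(effect, 1.0) for _ in range(n)]
    ys = [random.gauss(0.0, 1.0) for _ in range(n)]
    diff = sum(xs) / n - sum(ys) / n
    se = math.sqrt(2.0 / n)
    z = diff / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# A real but small effect (d = 0.2) with n = 100 per arm: power is well below 1,
# so replications "dance" between significant and null findings.
ps = [two_sample_p(100, 0.2) for _ in range(1000)]
null_findings = sum(p >= 0.05 for p in ps) / len(ps)
print(round(null_findings, 2))  # most replications are non-significant
```

Each replication tests the same true (non-zero) effect, yet a majority return p >= 0.05, which is exactly why a single null trial is weak evidence against a small effect.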
503
Jackson GS, Hillegonds DJ, Muzikar P, Goehring B. Ultra-trace analysis of 41Ca in urine by accelerator mass spectrometry: an inter-laboratory comparison. Nuclear Instruments & Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 2013; 313. [PMID: 24179312] [PMCID: PMC3810309] [DOI: 10.1016/j.nimb.2013.08.004]
Abstract
A 41Ca interlaboratory comparison between Lawrence Livermore National Laboratory (LLNL) and the Purdue Rare Isotope Laboratory (PRIME Lab) has been completed. Analysis of the isotope ratios assayed by accelerator mass spectrometry (AMS) shows no statistically significant difference between the two facilities' results. Further, Bayesian analysis shows that the uncertainties reported by both facilities are accurate, with the possibility of a slight under-estimation by one laboratory. Finally, the chemistry procedures used by the two facilities to produce CaF2 for the cesium sputter ion source are robust and do not yield any significant differences in the final result.
504
Zhang J, Braun TM. A Phase I Bayesian adaptive design to simultaneously optimize dose and schedule assignments both between and within patients. J Am Stat Assoc 2013; 108. [PMID: 24222927] [DOI: 10.1080/01621459.2013.806927]
Abstract
In traditional schedule or dose-schedule finding designs, patients are assumed to receive their assigned dose-schedule combination throughout the trial even if the combination is found to have an undesirable toxicity profile, which contradicts actual clinical practice. Since no systematic approach exists to optimize intra-patient dose-schedule assignment, we propose a Phase I clinical trial design that extends existing approaches, which optimize dose and schedule solely between patients, by incorporating adaptive variations to dose-schedule assignments within patients as the study proceeds. Our design is based on a Bayesian non-mixture cure rate model that incorporates the multiple administrations each patient receives, with the per-administration dose included as a covariate. Simulations demonstrate that our design identifies safe dose and schedule combinations as well as the traditional method that does not allow for intra-patient dose-schedule reassignments, but with a larger number of patients assigned to safe combinations. Supplementary materials for this article are available online.
505
Dietze MC, Lebauer DS, Kooper R. On improving the communication between models and data. Plant, Cell & Environment 2013; 36:1575-1585. [PMID: 23181765] [DOI: 10.1111/pce.12043]
Abstract
The potential for model-data synthesis is growing in importance as we enter an era of 'big data', greater connectivity and faster computation. Realizing this potential requires that the research community broaden its perspective about how and why they interact with models. Models can be viewed as scaffolds that allow data at different scales to inform each other through our understanding of underlying processes. Perceptions of relevance, accessibility and informatics are presented as the primary barriers to broader adoption of models by the community, while an inability to fully utilize the breadth of expertise and data from the community is a primary barrier to model improvement. Overall, we promote a community-based paradigm to model-data synthesis and highlight some of the tools and techniques that facilitate this approach. Scientific workflows address critical informatics issues in transparency, repeatability and automation, while intuitive, flexible web-based interfaces make running and visualizing models more accessible. Bayesian statistics provides powerful tools for assimilating a diversity of data types and for the analysis of uncertainty. Uncertainty analyses enable new measurements to target those processes most limiting our predictive ability. Moving forward, tools for information management and data assimilation need to be improved and made more accessible.
506
Abstract
Detecting overlapping communities is essential to analyzing and exploring natural networks such as social networks, biological networks, and citation networks. However, most existing approaches do not scale to the size of networks that we regularly observe in the real world. In this paper, we develop a scalable approach to community detection that discovers overlapping communities in massive real-world networks. Our approach is based on a Bayesian model of networks that allows nodes to participate in multiple communities, and a corresponding algorithm that naturally interleaves subsampling from the network and updating an estimate of its communities. We demonstrate how we can discover the hidden community structure of several real-world networks, including 3.7 million US patents, 575,000 physics articles from the arXiv preprint server, and 875,000 connected Web pages from the Internet. Furthermore, we demonstrate on large simulated networks that our algorithm accurately discovers the true community structure. This paper opens the door to using sophisticated statistical models to analyze massive networks.
507
Increased transmissibility explains the third wave of infection by the 2009 H1N1 pandemic virus in England. Proc Natl Acad Sci U S A 2013; 110:13422-7. [PMID: 23882078] [DOI: 10.1073/pnas.1303117110]
Abstract
In the 2009 H1N1 pandemic, the United Kingdom experienced two waves of infection, the first in the late spring and the second in the autumn. Given the low level of susceptibility to the pandemic virus expected to be remaining in the population after the second wave, it was a surprise that a substantial third epidemic occurred in the UK population between November 2010 and February 2011, despite no evidence for any significant antigenic evolution of the pandemic virus. Here, we use a mathematical model of influenza transmission embedded within a Bayesian synthesis inferential framework to jointly analyze syndromic, virological, and serological surveillance data collected in England in 2009-2011 and thereby assess epidemiological mechanisms which might have generated the third wave. We find that substantially increased transmissibility of the H1N1pdm09 virus is required to reproduce the third wave, suggesting that the virus evolved and increased fitness in the human host by the end of 2010, or that the very cold weather experienced in the United Kingdom at that time enhanced transmission rates. We also find some evidence that the preexisting heterologous immunity which reduced attack rates in adults during 2009 had substantially decayed by the winter of 2010, thus increasing the susceptibility of the adult population to infection. Finally, our analysis suggests that a pandemic vaccination campaign targeting adults and school-age children could have mitigated or prevented the third wave even at moderate levels of coverage.
508
Abstract
Two main statistical methodologies are applicable to the design and analysis of clinical trials: frequentist and Bayesian. Most traditional clinical trial designs are based on frequentist statistics. In frequentist statistics, prior information is utilized formally only in the design of a clinical trial, not in the analysis of the data. Bayesian statistics, on the other hand, provide a formal mathematical method for combining prior information with current information at the design stage, during the conduct of the trial, and at the analysis stage. It is easier to implement adaptive trial designs using Bayesian methods than frequentist methods. The Bayesian approach can also be applied for post-marketing surveillance purposes and in meta-analysis. The basic tenets of good trial design are the same for both Bayesian and frequentist trials. It has been recommended that the type of analysis to be used (Bayesian or frequentist) should be chosen beforehand. Switching to an analysis method that produces a more favorable outcome after observing the data is not recommended.
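The formal combination of prior and current information that distinguishes the Bayesian approach can be illustrated with a conjugate beta-binomial update (hypothetical response rates and pseudo-observation counts):

```python
# Conjugate beta-binomial update: prior information, encoded as (a, b)
# pseudo-observations, is combined with trial data at the analysis stage,
# something a frequentist analysis does not do formally.
a_prior, b_prior = 8, 2          # prior: roughly an 80% response rate from earlier studies
successes, failures = 12, 8      # current trial: 12 responders out of 20

a_post = a_prior + successes     # posterior is Beta(a_post, b_post)
b_post = b_prior + failures
post_mean = a_post / (a_post + b_post)

prior_mean = a_prior / (a_prior + b_prior)
data_mean = successes / (successes + failures)
print(prior_mean, data_mean, round(post_mean, 3))
# posterior mean lies between the prior mean (0.8) and the observed rate (0.6)
```

The same machinery extends to interim looks during the trial: each update's posterior becomes the prior for the next batch of data, which is why adaptive designs are natural in the Bayesian framework.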
509
Chambert T, Rotella JJ, Higgs MD, Garrott RA. Individual heterogeneity in reproductive rates and cost of reproduction in a long-lived vertebrate. Ecol Evol 2013; 3:2047-60. [PMID: 23919151] [PMCID: PMC3728946] [DOI: 10.1002/ece3.615]
Abstract
Individual variation in reproductive success is a key feature of evolution, but also has important implications for predicting population responses to variable environments. Although such individual variation in reproductive outcomes has been reported in numerous studies, most analyses to date have not considered whether these realized differences were due to latent individual heterogeneity in reproduction or merely random chance causing different outcomes among like individuals. Furthermore, latent heterogeneity in fitness components might be expressed differently in contrasted environmental conditions, an issue that has only rarely been investigated. Here, we assessed (i) the potential existence of latent individual heterogeneity and (ii) the nature of its expression (fixed vs. variable) in a population of female Weddell seals (Leptonychotes weddellii), using a hierarchical modeling approach on a 30-year mark–recapture data set consisting of 954 individual encounter histories. We found strong support for the existence of latent individual heterogeneity in the population, with “robust” individuals expected to produce twice as many pups as “frail” individuals. Moreover, the expression of individual heterogeneity appeared consistent, with only mild evidence that it might be amplified when environmental conditions are severe. Finally, the explicit modeling of individual heterogeneity allowed us to detect a substantial cost of reproduction that was not evidenced when the heterogeneity was ignored.
510
Lee KB, Lee JM, Park TS, Oh PJ, Lee SH, Han JB. New method to determine the decision threshold for low-level radioactivity measurements. Appl Radiat Isot 2013; 81:7-9. [PMID: 23578909] [DOI: 10.1016/j.apradiso.2013.03.060]
Abstract
We discuss a new method to determine the value of a decision threshold that can be used to decide whether to choose a one-sided confidence interval or a two-sided confidence interval. The method is based on the Feldman-Cousins unified approach providing a unique confidence region for estimated parameters. We apply this method to a net count rate measurand in low-level radioactivity measurements which is physically restricted to nonnegative values. We tabulate the values of the decision threshold and detection limit of the measurand for some typical coverage probabilities. The decision threshold in this method does indeed enable a decision on whether or not the physical effect quantified by the measurand is present.
511
Lipsky AM, Lewis RJ. Response-adaptive decision-theoretic trial design: operating characteristics and ethics. Stat Med 2013; 32:3752-65. [PMID: 23558674] [DOI: 10.1002/sim.5807]
Abstract
Adaptive randomization is used in clinical trials to increase statistical efficiency. In addition, some clinicians and researchers believe that using adaptive randomization necessarily leads to more ethical treatment of subjects in a trial. We develop Bayesian, decision-theoretic clinical trial designs with response-adaptive randomization and a primary goal of estimating the treatment effect, and then contrast these designs with designs that also include in their loss function a cost for poor subject outcome. When the loss function did not incorporate a cost for poor subject outcome, the gains in efficiency from response-adaptive randomization were accompanied by ethically concerning subject allocations. Conversely, including a cost for poor subject outcome produced a more acceptable balance between the competing needs in the trial. A subsequent, parallel set of trials designed to explicitly control type I and type II error rates showed that much of the improvement achieved through modification of the loss function was essentially negated. Therefore, gains in efficiency from the use of a decision-theoretic, response-adaptive design using adaptive randomization may only be assumed to apply to those goals that are explicitly included in the loss function. Trial goals, including ethical ones, that do not appear in the loss function are ignored and may even be compromised; it is thus inappropriate to assume that all adaptive trials are necessarily more ethical. Controlling type I and type II error rates largely negates the benefit of including competing needs in favor of the goal of parameter estimation.
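The trade-off can be caricatured with a stylized loss for a two-arm trial: an estimation-variance term alone favors balanced allocation, while adding a cost per poor outcome shifts allocation toward the better arm. All numbers here are illustrative assumptions; the paper's designs are fully Bayesian and sequential, not this one-shot optimization.

```python
import numpy as np

# Stylized loss for allocating fraction f of n subjects to arm 1:
# variance of the treatment-effect estimate plus a cost per poor outcome.
def loss(f, n=200, p1=0.7, p2=0.5, outcome_cost=0.0):
    var = 1.0 / (n * f) + 1.0 / (n * (1 - f))        # estimation loss
    poor = n * (f * (1 - p1) + (1 - f) * (1 - p2))   # expected poor outcomes
    return var + outcome_cost * poor

fs = np.linspace(0.05, 0.95, 181)
f_est_only = fs[np.argmin([loss(f) for f in fs])]
f_with_cost = fs[np.argmin([loss(f, outcome_cost=0.001) for f in fs])]
print(f_est_only, f_with_cost)  # the cost term shifts allocation toward the better arm
```

With only the variance term, the optimum is balanced allocation; once poor outcomes carry a cost, the optimum moves toward the arm with the higher response rate, mirroring the paper's point that only goals written into the loss function are served.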
512
Thompson CK, Schwabe F, Schoof A, Mendoza E, Gampe J, Rochefort C, Scharff C. Young and intense: FoxP2 immunoreactivity in Area X varies with age, song stereotypy, and singing in male zebra finches. Front Neural Circuits 2013; 7:24. [PMID: 23450800] [PMCID: PMC3584353] [DOI: 10.3389/fncir.2013.00024]
Abstract
FOXP2 is a transcription factor functionally relevant for learned vocalizations in humans and songbirds. In songbirds, FoxP2 mRNA expression in the medium spiny neurons of the basal ganglia song nucleus Area X is developmentally regulated and varies with singing conditions in different social contexts. How individual neurons in Area X change FoxP2 expression across development and in social contexts is not known, however. Here we address this critical gap in our understanding of FoxP2 as a link between neuronal networks and behavior. We used a statistically unbiased analysis of FoxP2-immunoreactivity (FoxP2-IR) on a neuron-by-neuron basis and found a bimodal distribution of FoxP2-IR neurons in Area X: weakly-stained and intensely-stained. The density of intensely-stained FoxP2-IR neurons was 10 times higher in juveniles than in adults, exponentially decreased with age, and was negatively correlated with adult song stability. Three-week old neurons labeled with BrdU were more than five times as likely to be intensely-stained than weakly-stained. The density of FoxP2-IR putative migratory neurons with fusiform-shaped nuclei substantially decreased as birds aged. The density of intensely-stained FoxP2-IR neurons was not affected by singing whereas the density of weakly-stained FoxP2-IR neurons was. Together, these data indicate that young Area X medium spiny neurons express FoxP2 at high levels and decrease expression as they become integrated into existing neural circuits. Once integrated, levels of FoxP2 expression correlate with singing behavior. Together, these findings raise the possibility that FoxP2 levels may orchestrate song learning and song stereotypy in adults by a common mechanism.
513
Nabholz B, Uwimana N, Lartillot N. Reconstructing the phylogenetic history of long-term effective population size and life-history traits using patterns of amino acid replacement in mitochondrial genomes of mammals and birds. Genome Biol Evol 2013; 5:1273-90. [PMID: 23711670] [PMCID: PMC3730341] [DOI: 10.1093/gbe/evt083]
Abstract
The nearly neutral theory, which proposes that most mutations are deleterious or close to neutral, predicts that the ratio of nonsynonymous over synonymous substitution rates (dN/dS), and potentially also the ratio of radical over conservative amino acid replacement rates (Kr/Kc), are negatively correlated with effective population size. Previous empirical tests, using life-history traits (LHT) such as body-size or generation-time as proxies for population size, have been consistent with these predictions. This suggests that large-scale phylogenetic reconstructions of dN/dS or Kr/Kc might reveal interesting macroevolutionary patterns in the variation in effective population size among lineages. In this work, we further develop an integrative probabilistic framework for phylogenetic covariance analysis introduced previously, so as to estimate the correlation patterns between dN/dS, Kr/Kc, and three LHT, in mitochondrial genomes of birds and mammals. Kr/Kc displays stronger and more stable correlations with LHT than does dN/dS, which we interpret as a greater robustness of Kr/Kc, compared with dN/dS, the latter being confounded by the high saturation of the synonymous substitution rate in mitochondrial genomes. The correlation of Kr/Kc with LHT was robust when controlling for the potentially confounding effects of nucleotide compositional variation between taxa. The positive correlation of the mitochondrial Kr/Kc with LHT is compatible with previous reports, and with a nearly neutral interpretation, although alternative explanations are also possible. The Kr/Kc model was finally used for reconstructing life-history evolution in birds and mammals. This analysis suggests a fairly large-bodied ancestor in both groups. In birds, life-history evolution seems to have occurred mainly through size reduction in Neoavian birds, whereas in placental mammals, body mass evolution shows disparate trends across subclades. Altogether, our work represents a further step toward a more comprehensive phylogenetic reconstruction of the evolution of life-history and of the population-genetics environment.
514
Pueyo S. Comparison of two views of maximum entropy in biodiversity: Frank (2011) and Pueyo et al. (2007). Ecol Evol 2012; 2:988-93. [PMID: 22837843] [PMCID: PMC3399164] [DOI: 10.1002/ece3.231]
Abstract
An increasing number of authors agree that the maximum entropy principle (MaxEnt) is essential for the understanding of macroecological patterns. However, there are subtle but crucial differences among the approaches of several of these authors. This poses a major obstacle for anyone interested in applying the methodology of MaxEnt in this context. In a recent publication, Frank (2011) gives some arguments for why his own approach would represent an improvement over the earlier paper by Pueyo et al. (2007) and over the views of Edwin T. Jaynes, who first formulated MaxEnt in the context of statistical physics. Here I show that his criticisms are flawed and that there are fundamental reasons to prefer the original approach.
515
Ochs MF, Fertig EJ. Matrix factorization for transcriptional regulatory network inference. IEEE Symposium on Computational Intelligence in Bioinformatics and Computational Biology Proceedings 2012; 2012:387-396. [PMID: 25364782] [DOI: 10.1109/cibcb.2012.6217256]
Abstract
Inference of Transcriptional Regulatory Networks (TRNs) provides insight into the mechanisms driving biological systems, especially mammalian development and disease. Many techniques have been developed for TRN estimation from indirect biochemical measurements. Although successful when initially tested in model organisms, these regulatory models often fail when applied to data from multicellular organisms where multiple regulation and gene reuse increase dramatically. Non-negative matrix factorization techniques were initially introduced to find non-orthogonal patterns in data, making them ideal techniques for inference in cases of multiple regulation. We review these techniques and their application to TRN analysis.
516
Shen J, Preskorn S, Dragalin V, Slomkowski M, Padmanabhan SK, Fardipour P, Sharma A, Krams M. How adaptive trial designs can increase efficiency in psychiatric drug development: a case study. Innovations in Clinical Neuroscience 2011; 8:26-34. [PMID: 21860843] [PMCID: PMC3159542]
Abstract
This paper uses a recently completed study to illustrate how adaptive trial designs can increase efficiency of psychiatric drug development. The design employed allowed a continuous reassessment of the estimated dose-response such that patients were randomized in a double-blind fashion to one of seven doses of the investigational drug, placebo, or active comparator. The study design also permitted early detection of futility allowing for early study termination. By using the adaptive trial design approach, only 202 patients were needed to make the determination of futility. In contrast, a conventional design would have required enrollment of 450 patients and considerably more time and expense to reach the same conclusion. Adaptive trial designs are important at this time when many pharmaceutical companies are abandoning the development of psychiatric medications because of the inefficiency of conventional approaches.
517
Wei K, Körding K. Uncertainty of feedback and state estimation determines the speed of motor adaptation. Front Comput Neurosci 2010; 4:11. [PMID: 20485466] [PMCID: PMC2871692] [DOI: 10.3389/fncom.2010.00011]
Abstract
Humans can adapt their motor behaviors to deal with ongoing changes. To achieve this, the nervous system needs to estimate central variables for our movement based on past knowledge and new feedback, both of which are uncertain. In the Bayesian framework, rates of adaptation characterize how noisy feedback is in comparison to the uncertainty of the state estimate. The predictions of Bayesian models are intuitive: the nervous system should adapt slower when sensory feedback is more noisy and faster when its state estimate is more uncertain. Here we want to quantitatively understand how uncertainty in these two factors affects motor adaptation. In a hand reaching experiment we measured trial-by-trial adaptation to a randomly changing visual perturbation to characterize the way the nervous system handles uncertainty in state estimation and feedback. We found both qualitative predictions of Bayesian models confirmed. Our study provides evidence that the nervous system represents and uses uncertainty in state estimate and feedback during motor adaptation.
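The Bayesian prediction the abstract tests has a simple closed form: the trial-by-trial adaptation rate behaves like a Kalman gain, the ratio of state-estimate uncertainty to total uncertainty. A sketch with arbitrary illustrative variances:

```python
# Adaptation rate as a Kalman gain:
# K = state uncertainty / (state uncertainty + feedback uncertainty).
def adaptation_rate(state_var, feedback_var):
    return state_var / (state_var + feedback_var)

# Noisier feedback -> slower adaptation; a more uncertain state estimate -> faster.
print(adaptation_rate(1.0, 1.0))   # 0.5
print(adaptation_rate(1.0, 4.0))   # 0.2  (noisy feedback, adapt slower)
print(adaptation_rate(4.0, 1.0))   # 0.8  (uncertain state, adapt faster)
```

The two qualitative predictions the study confirms fall directly out of this ratio: increasing the feedback variance lowers K, and increasing the state-estimate variance raises it.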
518
Miró-Quesada G, Del Castillo E, Peterson JJ. A Bayesian approach for multiple response surface optimization in the presence of noise variables. J Appl Stat 2007; 31:251-270. [PMID: 39372312] [PMCID: PMC11451937] [DOI: 10.1080/0266476042000184019]
Abstract
An approach for the multiple response robust parameter design problem based on a methodology by Peterson (2000) is presented. The approach is Bayesian and consists of maximizing the posterior predictive probability that the process satisfies a set of constraints on the responses. In order to find a solution robust to variation in the noise variables, the predictive density is integrated not only with respect to the response variables but also with respect to the assumed distribution of the noise variables. The maximization problem involves repeated Monte Carlo integrations, and two different methods to solve it are evaluated. Matlab code was written that rapidly finds an optimal (robust) solution when one exists. Two examples taken from the literature are used to illustrate the proposed method.
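The core computation, the probability that the responses meet their constraints after integrating out the noise variables, can be sketched with a toy single-response model. The coefficients, spec limits, and noise distribution below are hypothetical, and the paper's method additionally integrates over the posterior of the model parameters, which are held fixed here for brevity:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fitted model: y = 5 + 2*x - 1.5*z + error, where x is a
# controllable factor and z an uncontrollable noise variable.
def prob_in_spec(x, lower=6.0, upper=10.0, n=100_000):
    z = rng.normal(0.0, 1.0, n)      # assumed noise-variable distribution
    eps = rng.normal(0.0, 0.5, n)    # residual error
    y = 5.0 + 2.0 * x - 1.5 * z + eps
    # Monte Carlo estimate of P(lower <= y <= upper), noise integrated out
    return np.mean((y >= lower) & (y <= upper))

# Scan the controllable factor for the most robust setting.
grid = np.linspace(0.0, 3.0, 31)
best_x = max(grid, key=prob_in_spec)
print(best_x, prob_in_spec(best_x))
```

Because the noise variable is averaged over inside the probability, the chosen setting is robust to noise variation rather than optimal only at the noise variable's nominal value, which is the point of the approach.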
519
Ulvila JW, Gaffney JE. Evaluation of intrusion detection systems. Journal of Research of the National Institute of Standards and Technology 2003; 108:453-473. [PMID: 27413623] [PMCID: PMC4844520] [DOI: 10.6028/jres.108.040]
Abstract
This paper presents a comprehensive method for evaluating intrusion detection systems (IDSs). It integrates and extends ROC (receiver operating characteristic) and cost analysis methods to provide an expected cost metric. Results are given for determining the optimal operation of an IDS based on this expected cost metric. Results are given for the operation of a single IDS and for a combination of two IDSs. The method is illustrated for: 1) determining the best operating point for a single and double IDS based on the costs of mistakes and the hostility of the operating environment as represented in the prior probability of intrusion and 2) evaluating single and double IDSs on the basis of expected cost. A method is also described for representing a compound IDS as an equivalent single IDS. Results are presented from the point of view of a system administrator, but they apply equally to designers of IDSs.
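The expected cost metric combines the ROC operating point with the prior probability of intrusion and the costs of the two kinds of mistakes. A minimal sketch with hypothetical ROC points and costs (not the paper's figures):

```python
# Expected cost of an IDS operating point:
# cost = p * C_miss * (1 - TPR) + (1 - p) * C_false_alarm * FPR
def expected_cost(tpr, fpr, p_intrusion, c_miss, c_false_alarm):
    return p_intrusion * c_miss * (1 - tpr) + (1 - p_intrusion) * c_false_alarm * fpr

# Candidate operating points along an ROC curve, as (TPR, FPR) pairs.
roc = [(0.50, 0.01), (0.80, 0.05), (0.95, 0.20), (0.99, 0.40)]

# In a hostile environment (high prior probability of intrusion) with costly
# misses, the optimal operating point tolerates many false alarms.
best = min(roc, key=lambda pt: expected_cost(*pt, p_intrusion=0.1,
                                             c_miss=100.0, c_false_alarm=1.0))
print(best)  # (0.99, 0.4)
```

Re-running the minimization with a lower prior of intrusion or cheaper misses moves the optimum down the curve, which is the sensitivity the paper exploits when comparing single and combined IDSs.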