1
Glynn D, Nikolaidis G, Jankovic D, Welton NJ. Constructing Relative Effect Priors for Research Prioritization and Trial Design: A Meta-epidemiological Analysis. Med Decis Making 2023; 43:553-563. [PMID: 37057388; PMCID: PMC10336712; DOI: 10.1177/0272989x231165985]
Abstract
BACKGROUND: Bayesian methods have potential for efficient design of randomized clinical trials (RCTs) by incorporating existing evidence. Furthermore, value-of-information (VOI) methods estimate the value of reducing decision uncertainty, aiding transparent research prioritization. These methods require a prior distribution describing current uncertainty in key parameters, such as the relative treatment effect (RTE). However, at the time of designing and commissioning research, there may be no data on which to base the prior. The aim of this article is to present methods to construct priors for RTEs based on a collection of previous RCTs.
METHODS: We developed 2 Bayesian hierarchical models that captured variability in RTE between studies within disease area, accounting for study characteristics. We illustrate the methods using a data set of 743 published RCTs across 9 disease areas to obtain predictive distributions for RTEs for a range of disease areas. We illustrate how the priors from such an analysis can be used in a VOI analysis for an RCT in bladder cancer and compare the results with those using an uninformative prior.
RESULTS: For most disease areas, the predicted RTE favored new interventions over comparators. The predicted effects and uncertainty differed across the 9 disease areas. VOI analysis showed that the expected value of research is much lower with our empirically derived prior than with an uninformative prior.
CONCLUSIONS: This study demonstrates a novel approach to generating informative priors that can be used to aid research prioritization and trial design. The methods can also be used to combine RCT evidence with expert opinion. Further work is needed to create a rich database of RCT evidence that can be used to form off-the-shelf priors.
HIGHLIGHTS: Bayesian methods have potential to aid the efficient design of randomized clinical trials (RCTs) by incorporating existing evidence. Value-of-information (VOI) methods can be used to aid research prioritization by calculating the value of reducing current decision uncertainty. These methods require a distribution describing current uncertainty in key parameters, that is, "prior distributions." This article demonstrates a methodology to estimate prior distributions for relative treatment effects (odds and hazard ratios) estimated from a collection of previous RCTs. These results may be combined with expert elicitation to facilitate 1) value-of-information methods to prioritize research or 2) Bayesian methods for research design.
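The abstract's central move, pooling past trial effects into a predictive distribution for a new trial's relative treatment effect, can be sketched with a simple random-effects meta-analysis. The DerSimonian-Laird fit below is a frequentist stand-in for the paper's fully Bayesian hierarchical models, and the log-odds-ratio inputs are invented:

```python
import numpy as np

def predictive_prior(y, v):
    """Random-effects (DerSimonian-Laird) fit to past-trial log-odds
    ratios y with within-study variances v. Returns the mean and sd of
    the predictive distribution for a NEW trial's true effect, i.e. a
    candidate prior; a simplified stand-in for the paper's Bayesian
    hierarchical models."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)              # fixed-effect mean
    Q = np.sum(w * (y - mu_fe) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)        # between-trial variance
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)           # pooled mean
    var_mu = 1.0 / np.sum(w_re)
    return mu, np.sqrt(tau2 + var_mu)              # predictive sd for a new trial

# Invented log-ORs from five earlier trials in one disease area
mu, sd = predictive_prior([-0.3, -0.1, 0.2, -0.4, 0.0],
                          [0.04, 0.09, 0.06, 0.05, 0.08])
```

Such a predictive distribution can then serve as the prior on the relative treatment effect in a VOI calculation; the paper finds that replacing an uninformative prior with an empirically derived one substantially lowers the apparent value of further research.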
Affiliation(s)
- David Glynn
- Centre for Health Economics, University of York, UK
2
Paul TC, Johnson KA, Hagen GM. Super-resolution imaging of neuronal structure with structured illumination microscopy. bioRxiv 2023:2023.05.26.542523. [PMID: 37292949; PMCID: PMC10245995; DOI: 10.1101/2023.05.26.542523]
Abstract
Super-resolution structured illumination microscopy (SR-SIM) is an optical fluorescence microscopy method suitable for imaging a wide variety of cells and tissues in biological and biomedical research. Typically, SIM methods use high-spatial-frequency illumination patterns generated by laser interference. This approach provides high resolution but is limited to thin samples such as cultured cells. Using a different strategy for processing the raw data, together with coarser illumination patterns, we imaged through a 150 µm thick coronal section of a mouse brain expressing GFP in a subset of neurons. The resolution reached 144 nm, a 1.7-fold improvement over conventional widefield imaging.
Affiliation(s)
- Tristan C. Paul
- UCCS BioFrontiers Center, University of Colorado Colorado Springs, 1420 Austin Bluffs Parkway, Colorado Springs, Colorado, 80918
- Karl A. Johnson
- UCCS BioFrontiers Center, University of Colorado Colorado Springs, 1420 Austin Bluffs Parkway, Colorado Springs, Colorado, 80918
- Guy M. Hagen
- UCCS BioFrontiers Center, University of Colorado Colorado Springs, 1420 Austin Bluffs Parkway, Colorado Springs, Colorado, 80918
3
Gianola D. Opinionated Views on Genome-Assisted Inference and Prediction During a Pandemic. Front Plant Sci 2021; 12:717284. [PMID: 34421971; PMCID: PMC8377666; DOI: 10.3389/fpls.2021.717284]
4
Gubbins S. Using the basic reproduction number to assess the risk of transmission of lumpy skin disease virus by biting insects. Transbound Emerg Dis 2019; 66:1873-1883. [PMID: 31038286; PMCID: PMC6767157; DOI: 10.1111/tbed.13216]
Abstract
In recent years, lumpy skin disease virus (LSDV) has emerged as a major threat to cattle outside Africa, where it is endemic. Although evidence suggests that LSDV is transmitted by the bites of blood-sucking arthropods, few studies have assessed the risk of transmission posed by particular vector species. Here this risk is assessed by calculating the basic reproduction number (R0) for transmission of LSDV by five species of biting insect: the stable fly, Stomoxys calcitrans; the biting midge, Culicoides nubeculosus; and three mosquito species, Aedes aegypti, Anopheles stephensi, and Culex quinquefasciatus. Parameters relating to mechanical transmission of LSDV were estimated using new analyses of previously published data from transmission experiments, while vector life history parameters were derived from the published literature. Uncertainty and sensitivity analyses were used to compute R0 for each species and to identify the parameters that most influence its magnitude. Results suggest that S. calcitrans is likely to be the most efficient at transmitting LSDV, with Ae. aegypti also an efficient vector. By contrast, C. nubeculosus, An. stephensi, and Cx. quinquefasciatus are likely to be inefficient vectors of LSDV. However, there is considerable uncertainty associated with the estimates of R0, reflecting uncertainty in most of the constituent parameters. Sensitivity analysis suggests that future experimental work should focus on estimating the probability of transmission from insect to bovine and on the virus inactivation rate in insects.
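The uncertainty analysis the abstract describes amounts to propagating parameter distributions through an R0 expression by Monte Carlo. The sketch below uses a generic Ross-Macdonald-style formula for vector-borne transmission; the formula choice and every parameter range are invented for illustration and are not the paper's model or estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical parameter distributions (illustration only; the paper's
# values come from transmission experiments and the entomological literature)
m  = rng.uniform(1, 10, n)        # vectors per bovine
a  = rng.uniform(0.1, 0.5, n)     # bites per vector per day
pv = rng.beta(2, 8, n)            # prob. insect acquires virus per bite
ph = rng.beta(2, 8, n)            # prob. insect infects bovine per bite
mu = rng.uniform(0.1, 0.3, n)     # vector mortality rate (per day)
r  = rng.uniform(1/30, 1/15, n)   # bovine recovery/removal rate (per day)

# Generic Ross-Macdonald-style R0; each draw propagates one joint
# parameter sample, so the spread of R0 reflects parameter uncertainty
R0 = m * a**2 * pv * ph / (mu * r)
lo, med, hi = np.percentile(R0, [2.5, 50, 97.5])
```

Rank correlations between each sampled parameter and R0 would then give the kind of sensitivity analysis reported in the paper.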
5
Tozetto AB, Carvalho HM, Rosa RS, Mendes FG, Silva WR, Nascimento JV, Milistetd M. Coach Turnover in Top Professional Brazilian Football Championship: A Multilevel Survival Analysis. Front Psychol 2019; 10:1246. [PMID: 31244714; PMCID: PMC6562306; DOI: 10.3389/fpsyg.2019.01246]
Abstract
In this study, we examined the probability of coaches' survival in the top Brazilian professional football championship, considering variation across the competitive seasons between 2012 and 2017 within a multilevel framework. We also considered whether previous coaching experience in the top Brazilian championship would change the probability of coaches' survival across the season. The data comprised 4,560 games from the top professional Brazilian football league (Campeonato Brasileiro Série A) between the 2012 and 2017 seasons. At the start of each season, the coach of each team was followed, and the time at which the event occurred, i.e., the coach being sacked, was recorded. A total of 120 coaches were followed between the 2012 and 2017 seasons, i.e., 20 coaches at the beginning of each season. Coaches were classified as novice (no previous experience as head coach in the top Brazilian championship) or experienced (at least some previous experience as head coach in the top Brazilian championship). Data were extracted from the official website of the Brazilian Football Confederation. On average, considering un-pooled observations, the median tenure of a coach was about 16.5 rounds. Considering variation between the 2012 and 2017 seasons, only about 26.3% (95% CI: 18.2-36.1) of the coaches ended a season without being sacked. By mid-season, at round 19, the probability of coaches' survival was 0.42 (95% CI: 0.32-0.53). Variation between seasons in survival estimates per round was substantial (between-season standard deviation = 0.48, 95% credible interval: 0.25-0.95; corresponding to an inverse logit = 0.62, 95% CI: 0.56-0.72). There was no substantial variation between novice and experienced coaches' survival probability. The present results expose the vulnerability of the coaching context in Brazilian football, potentially highlighting an excessive emphasis on short-term results in club management decisions.
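The paper fits a multilevel Bayesian survival model, but the basic quantity it reports, the probability that a coach is still in post at a given round, is what a Kaplan-Meier estimator computes from un-pooled data. A minimal sketch with an invented five-coach mini-season:

```python
import numpy as np

def kaplan_meier(times, sacked):
    """Kaplan-Meier survival curve: `times` = round of sacking or of
    season end, `sacked` = 1 if the coach was dismissed (event), 0 if he
    finished the season in post (censored)."""
    times, sacked = np.asarray(times), np.asarray(sacked)
    S, curve = 1.0, []
    for t in np.unique(times[sacked == 1]):      # event rounds, in order
        at_risk = np.sum(times >= t)             # coaches still in post
        events = np.sum((times == t) & (sacked == 1))
        S *= 1 - events / at_risk                # product-limit update
        curve.append((int(t), S))
    return curve

# Hypothetical mini-season: 5 coaches, three sacked at rounds 10, 16, 19,
# two surviving to round 38 (censored)
curve = kaplan_meier([10, 16, 19, 38, 38], [1, 1, 1, 0, 0])
```

In the real analysis, season-level random effects let this per-round survival probability vary between the six seasons.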
Affiliation(s)
- Alexandre B Tozetto
- Department of Physical Education, College of Health and Sport Science, Santa Catarina State University, Florianópolis, Brazil; Department of Physical Education, School of Sports, Federal University of Santa Catarina, Florianópolis, Brazil
- Humberto M Carvalho
- Department of Physical Education, School of Sports, Federal University of Santa Catarina, Florianópolis, Brazil
- Rodolfo S Rosa
- Department of Physical Education, School of Sports, Federal University of Santa Catarina, Florianópolis, Brazil
- Felipe G Mendes
- Department of Physical Education, School of Sports, Federal University of Santa Catarina, Florianópolis, Brazil
- Walan R Silva
- Department of Physical Education, College of Health and Sport Science, Santa Catarina State University, Florianópolis, Brazil
- Juarez V Nascimento
- Department of Physical Education, School of Sports, Federal University of Santa Catarina, Florianópolis, Brazil
- Michel Milistetd
- Department of Physical Education, School of Sports, Federal University of Santa Catarina, Florianópolis, Brazil
6
Dias TCM, Diniz MA, Pereira CAB, Polpo A. Overview of the 37th MaxEnt. Entropy (Basel) 2018; 20:E694. [PMID: 33265783; DOI: 10.3390/e20090694]
Abstract
The 37th edition of MaxEnt was held in Brazil, hosting several distinguished researchers and students. The workshop offered four tutorials, nine invited talks, twenty-four oral presentations, and twenty-seven poster presentations. All submissions received their first choice between oral and poster presentation. The event celebrated Julio Stern's 60th anniversary and awarded two prizes to young researchers. As is customary, the workshop had one free afternoon, in which participants visited the city's surroundings and experienced Brazilian food and traditions.
7
Duan LL, Wang X, Clancy JP, Szczesniak RD. Joint hierarchical Gaussian process model with application to personalized prediction in medical monitoring. Stat (Int Stat Inst) 2018; 7. [PMID: 29593867; DOI: 10.1002/sta4.178]
Abstract
A two-level Gaussian process (GP) joint model is proposed to improve personalized prediction of medical monitoring data. The proposed model is applied to jointly analyze multiple longitudinal biomedical outcomes, including continuous measurements and binary outcomes, to achieve better prediction of disease progression. At the population level of the hierarchy, two independent GPs capture the nonlinear trends in the continuous biomedical marker and the binary outcome, respectively; at the individual level, a third GP, shared by the longitudinal measurement model and the longitudinal binary model, induces the correlation between these two model components and strengthens information borrowing across individuals. The proposed model is particularly advantageous for personalized prediction. It is applied to the motivating clinical data on cystic fibrosis disease progression, for which lung function measurements and the onset of acute respiratory events are monitored jointly throughout each patient's clinical course. The results from both the simulation studies and the cystic fibrosis data application suggest that the inclusion of the shared individual-level GPs under the joint model framework leads to important improvements in personalized disease progression prediction.
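A toy prior simulation conveys the shape of the hierarchy: a population-level GP trend plus an individual-level GP deviation per patient. This sketch omits what makes the paper's model a joint model, namely that the individual GP is shared between the continuous and binary outcome components; kernels and hyperparameters here are arbitrary:

```python
import numpy as np

def rbf(x, ls=1.0, var=1.0):
    """Squared-exponential covariance matrix on inputs x."""
    d = x[:, None] - x[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 50)            # monitoring times
jitter = 1e-8 * np.eye(len(t))        # numerical stabilizer

# Population-level GP: a smooth trend shared by all patients
f_pop = rng.multivariate_normal(np.zeros(len(t)), rbf(t, ls=3.0) + jitter)

# Individual-level GPs: shorter-lengthscale deviations around the trend
# (in the paper this individual GP is also what links the continuous and
# binary outcome submodels)
patients = [f_pop + rng.multivariate_normal(
                np.zeros(len(t)), rbf(t, ls=1.0, var=0.3) + jitter)
            for _ in range(3)]
```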
Affiliation(s)
- Leo L Duan
- Department of Statistical Science, Duke University, P.O. Box 90251, Durham, NC 27708, USA
- Xia Wang
- Department of Mathematical Sciences, University of Cincinnati, Cincinnati, OH 45221, USA
- John P Clancy
- Division of Pulmonary Medicine, Cincinnati Children's Hospital Medical Center, Cincinnati, OH 45229, USA
- Rhonda D Szczesniak
- Division of Biostatistics & Epidemiology, Cincinnati Children's Hospital Medical Center, 3333 Burnet Ave (MLC 5041), Cincinnati, OH 45229, USA
8
Hubert P, Padovese L, Stern JM. A Sequential Algorithm for Signal Segmentation. Entropy (Basel) 2018; 20:E55. [PMID: 33265142; DOI: 10.3390/e20010055]
Abstract
The problem of event detection in general noisy signals arises in many applications. Usually, either a functional form of the event is available, or a previously annotated sample with instances of the event can be used to train a classification algorithm. There are situations, however, where neither functional forms nor annotated samples are available; then, it is necessary to apply other strategies to separate and characterize events. In this work, we analyze 15-min samples of an acoustic signal and are interested in separating sections, or segments, of the signal that are likely to contain significant events. To that end, we apply a sequential algorithm whose only assumption is that an event alters the energy of the signal. The algorithm is entirely based on Bayesian methods.
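The paper's segmentation is sequential and fully Bayesian, but its single working assumption, that an event alters the energy of the signal, can be illustrated with a crude windowed-energy detector on synthetic data (window length and threshold are arbitrary):

```python
import numpy as np

def energy_segments(x, win, thresh):
    """Flag windows whose energy exceeds `thresh` times the median window
    energy. A crude stand-in for the paper's sequential Bayesian test,
    sharing only the assumption that events change signal energy."""
    n = len(x) // win
    e = np.array([np.sum(x[i * win:(i + 1) * win] ** 2) for i in range(n)])
    return np.where(e > thresh * np.median(e))[0]   # indices of event windows

# Synthetic signal: background noise with one loud burst in the middle
rng = np.random.default_rng(2)
x = rng.normal(0, 1, 3000)
x[1000:1200] += rng.normal(0, 5, 200)   # the "event"
hits = energy_segments(x, win=100, thresh=4.0)
```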
9
Abstract
Many economically important traits in plant breeding have low heritability or are difficult to measure. For these traits, genomic selection has attractive features and may boost genetic gains. Our goal was to evaluate alternative scenarios for implementing genomic selection for yield components in soybean (Glycine max (L.) Merr.). We used a nested association panel with cross-validation to evaluate the impacts of training population size, genotyping density, and prediction model on the accuracy of genomic prediction. Our results indicate that training population size was the factor most relevant to improvement in genome-wide prediction, with the greatest improvement observed in training sets of up to 2,000 individuals. We discuss assumptions that influence the choice of the prediction model. Although alternative models had minor impacts on prediction accuracy, the most robust prediction model was the combination of reproducing kernel Hilbert space regression and BayesB. Higher genotyping density marginally improved accuracy. Our study finds that breeding programs seeking efficient genomic selection in soybeans would best allocate resources by investing in a representative training set.
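The cross-validation machinery the abstract describes can be sketched on simulated marker data. Ridge regression (RR-BLUP-style) stands in for the paper's RKHS and BayesB models, and all dimensions and variances below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 400, 1000                               # lines x markers (toy scale)
X = rng.choice([0.0, 1.0, 2.0], size=(n, p))   # SNP allele dosages
beta = rng.normal(0, 0.05, p)                  # true marker effects
y = X @ beta + rng.normal(0, 1.0, n)           # phenotype = genetics + noise

def ridge_fit(X, y, lam):
    """Ridge (RR-BLUP-style) marker-effect estimates."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Hold-out evaluation: train on 300 lines, predict the remaining 100
b_hat = ridge_fit(X[:300], y[:300], lam=100.0)
acc = np.corrcoef(X[300:] @ b_hat, y[300:])[0, 1]   # predictive accuracy
```

Repeating the hold-out with training sets of increasing size reproduces the kind of accuracy-versus-training-population-size curve the study evaluates.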
Affiliation(s)
- Alencar Xavier
- Department of Agronomy, Purdue University, West Lafayette, Indiana 47907
- William M Muir
- Department of Animal Science, Purdue University, West Lafayette, Indiana 47907
- Katy Martin Rainey
- Department of Agronomy, Purdue University, West Lafayette, Indiana 47907
10
Jenkins MW, Linderman GC, Bezerra HG, Fujino Y, Costa MA, Wilson DL, Rollins AM. 3-D Stent Detection in Intravascular OCT Using a Bayesian Network and Graph Search. IEEE Trans Med Imaging 2015; 34:1549-1561. [PMID: 25751863; PMCID: PMC4547908; DOI: 10.1109/tmi.2015.2405341]
Abstract
Worldwide, many hundreds of thousands of stents are implanted each year to revascularize occlusions in coronary arteries. Intravascular optical coherence tomography is an important emerging imaging technique with the resolution and contrast necessary to quantitatively analyze stent deployment and tissue coverage following stent implantation. Automation is needed: currently, it takes up to 16 h to manually analyze the hundreds of images and thousands of stent struts from a single pullback. For automated strut detection, we used image formation physics and machine learning via a Bayesian network, and 3-D knowledge of stent structure via graph search. Graph search was done on en face projections using minimum spanning tree algorithms. Depths of all struts in a pullback were simultaneously determined using graph cut. To assess the method, we employed the largest validation data set used so far, involving more than 8,000 clinical images from 103 pullbacks from 72 patients. Automated strut detection achieved a recall of 0.91±0.04 and a precision of 0.84±0.08. Performance was robust in images of varying quality. This method can improve the workflow for analysis of stent clinical trial data and can potentially be used in the clinic to facilitate real-time stent analysis and visualization, aiding stent implantation.
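Of the pipeline's stages, the en face graph step is the easiest to sketch: a minimum spanning tree links candidate strut detections into a connected, wire-like structure. Below is Kruskal's algorithm with union-find on an invented four-node graph; it is an illustration of the technique, not the authors' implementation:

```python
def kruskal_mst(n, edges):
    """Minimum spanning tree via Kruskal + union-find. `edges` are
    (weight, node_a, node_b) tuples; in the paper's setting, nodes would
    be candidate struts in the en face projection and weights a distance
    or dissimilarity between them (all values here are invented)."""
    parent = list(range(n))

    def find(a):                       # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    tree = []
    for w, a, b in sorted(edges):      # lightest edges first
        ra, rb = find(a), find(b)
        if ra != rb:                   # keep edge only if it joins components
            parent[ra] = rb
            tree.append((a, b, w))
    return tree

mst = kruskal_mst(4, [(1, 0, 1), (4, 0, 2), (2, 1, 2), (5, 2, 3), (3, 1, 3)])
```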
11
Abstract
A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model (d'Avella and Tresch, 2002). However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion that allows selection of the model type, the number of primitives, and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground-truth data, showing that it performs at least as well as traditional model selection criteria [the Bayesian information criterion, BIC (Schwarz, 1978), and the Akaike information criterion, AIC (Akaike, 1974)]. We then analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.
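The baseline criteria the authors benchmark against are simple to compute. The polynomial-degree example below is invented, and the paper's own criterion (a Laplace approximation to the model evidence) is not shown:

```python
import numpy as np

def aic_bic(loglik, k, n):
    """Classical criteria: penalize the maximized log-likelihood by the
    number of parameters k (AIC) and additionally by sample size n (BIC);
    lower is better."""
    return 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

# Toy model selection: polynomial degree for noisy linear data
rng = np.random.default_rng(4)
x = np.linspace(0, 1, 100)
y = 2 * x + rng.normal(0, 0.1, 100)

scores = {}
for deg in (1, 2, 5):
    resid = y - np.polyval(np.polyfit(x, y, deg), x)
    sigma2 = np.mean(resid ** 2)
    ll = -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)  # Gaussian log-lik
    scores[deg] = aic_bic(ll, deg + 2, len(y))[1]          # BIC
best = min(scores, key=scores.get)
```

Both criteria trade maximized fit against model size; the paper's evidence-based criterion additionally integrates over parameter uncertainty rather than plugging in the maximum.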
Affiliation(s)
- Dominik M Endres
- Section Computational Sensomotorics, Department of Cognitive Neurology, CIN, HIH, BCCN, University Clinic Tübingen, Tübingen, Germany
- Enrico Chiovetto
- Section Computational Sensomotorics, Department of Cognitive Neurology, CIN, HIH, BCCN, University Clinic Tübingen, Tübingen, Germany
- Martin A Giese
- Section Computational Sensomotorics, Department of Cognitive Neurology, CIN, HIH, BCCN, University Clinic Tübingen, Tübingen, Germany
12
de Los Campos G, Hickey JM, Pong-Wong R, Daetwyler HD, Calus MPL. Whole-genome regression and prediction methods applied to plant and animal breeding. Genetics 2013; 193:327-345. [PMID: 22745228; PMCID: PMC3567727; DOI: 10.1534/genetics.112.143313]
Abstract
Genomic-enabled prediction is becoming increasingly important in animal and plant breeding and is also receiving attention in human genetics. Deriving accurate predictions of complex traits requires implementing whole-genome regression (WGR) models, in which phenotypes are regressed on thousands of markers concurrently. Methods exist that allow these large-p, small-n regressions to be implemented, and genome-enabled selection (GS) is being implemented in several plant and animal breeding programs. The list of available methods is long, and the relationships between them have not been fully addressed. In this article we provide an overview of available methods for implementing parametric WGR models, discuss selected topics that emerge in applications, and present a general discussion of lessons learned from simulation and empirical data analysis in the last decade.
Affiliation(s)
- Gustavo de Los Campos
- Department of Biostatistics, School of Public Health, University of Alabama at Birmingham, Birmingham, AL 35294, USA.
13
Abstract
We introduce a tomographic reconstruction method implemented using a shape-based regularization technique. Spatial models of known features in the structure being reconstructed are integrated into the reconstruction process as regularizers. Our regularization scheme is driven locally through shape information obtained from segmentation and compared with a known spatial model. We demonstrated our method on tomography data from digital phantoms, simulated data, and experimental electron tomography (ET) data of virus complexes. Our reconstructions showed reduced blurring, and an improvement in the resolution of the reconstructed volume was also measured. The method also produced improved demarcation of spike boundaries in viral membranes when compared with popular techniques such as weighted back projection and the algebraic reconstruction technique. Improved ET reconstructions will provide better structure elucidation and improved feature visualization, which can aid in solving key biological questions. Our method can also be generalized to other tomographic modalities.
Affiliation(s)
- Ajay Gopinath
- Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78712 USA.
14
Abstract
Tandem mass spectrometry has emerged as a powerful tool for the characterization of complex protein samples, an increasingly important problem in biology. The effort to efficiently and accurately perform inference on data from tandem mass spectrometry experiments has resulted in several statistical methods. We use a common framework to describe the predominant methods and discuss them in detail. These methods are classified into the following categories: set cover methods, iterative methods, and Bayesian methods. For each method, we analyze and evaluate the outcome and methodology of published comparisons to other methods; we use this comparison to comment on the strengths and weaknesses, as well as the overall utility, of each method. We discuss the similarities between these methods and suggest directions for the field that would help unify their shared assumptions in a more rigorous manner and help enable efficient and reliable protein inference.
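Of the three families, the set-cover formulation is the simplest to sketch: choose a small set of proteins whose peptides jointly explain every observed peptide. The greedy heuristic below, with an invented protein-peptide database, illustrates the idea; real methods additionally weigh identification scores and peptide degeneracy:

```python
def greedy_cover(peptides_by_protein, observed):
    """Greedy approximation to set-cover protein inference: repeatedly
    keep the protein that explains the most still-unexplained observed
    peptides. All protein/peptide names are invented."""
    remaining, chosen = set(observed), []
    while remaining:
        best = max(peptides_by_protein,
                   key=lambda prot: len(peptides_by_protein[prot] & remaining))
        gain = peptides_by_protein[best] & remaining
        if not gain:
            break                      # some peptides match no protein
        chosen.append(best)
        remaining -= gain
    return chosen

db = {"protA": {"pep1", "pep2", "pep3"},
      "protB": {"pep3", "pep4"},
      "protC": {"pep4"}}
proteins = greedy_cover(db, {"pep1", "pep2", "pep3", "pep4"})
```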
15
Abstract
Functional data are increasingly encountered in scientific studies, and their high dimensionality and complexity lead to many analytical challenges. Various methods for functional data analysis have been developed, including functional response regression methods that involve regression of a functional response on univariate/multivariate predictors with nonparametrically represented functional coefficients. In existing methods, however, the functional regression can be sensitive to outlying curves and outlying regions of curves, so is not robust. In this paper, we introduce a new Bayesian method, robust functional mixed models (R-FMM), for performing robust functional regression within the general functional mixed model framework, which includes multiple continuous or categorical predictors and random effect functions accommodating potential between-function correlation induced by the experimental design. The underlying model involves a hierarchical scale mixture model for the fixed effect, random effect, and residual error functions. These modeling assumptions across curves result in robust nonparametric estimators of the fixed and random effect functions which down-weight outlying curves and regions of curves, and produce statistics that can be used to flag global and local outliers. These assumptions also lead to distributions across wavelet coefficients that have outstanding sparsity and adaptive shrinkage properties, with great flexibility for the data to determine the sparsity and the heaviness of the tails. Together with the down-weighting of outliers, these within-curve properties lead to fixed and random effect function estimates that appear in our simulations to be remarkably adaptive in their ability to remove spurious features yet retain true features of the functions. We have developed general code to implement this fully Bayesian method that is automatic, requiring the user to only provide the functional data and design matrices. It is efficient enough to handle large data sets, and yields posterior samples of all model parameters that can be used to perform desired Bayesian estimation and inference. Although we present details for a specific implementation of the R-FMM using specific distributional choices in the hierarchical model, 1D functions, and wavelet transforms, the method can be applied more generally using other heavy-tailed distributions, higher dimensional functions (e.g. images), and other invertible transformations as alternatives to wavelets.
Affiliation(s)
- Hongxiao Zhu
- Statistical and Applied Mathematical Sciences Institute, RTP, NC
16
Abstract
We review adaptive designs for clinical trials, giving special attention to the control of the Type I error in late-phase confirmatory trials, in which the trial planner wishes to adjust the final sample size of the study in response to an unblinded analysis of interim estimates of treatment effects. We point out that there is considerable inefficiency in using the adaptive designs that employ conditional power calculations to re-estimate the sample size and that maintain the Type I error by using certain weighted test statistics. Although these adaptive designs have little advantage over familiar group-sequential designs, our review also describes recent developments in adaptive designs that are both flexible and efficient. We also discuss the use of Bayesian designs when the context of use demands control over operating characteristics (Type I and II errors) and correction of the bias of estimated treatment effects.
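The conditional-power calculation at the heart of sample-size re-estimation can be sketched by simulation. The pre-fixed inverse-normal weights below are what preserve the Type I error after adaptation, and they are also the source of the inefficiency the review criticizes, since the weights no longer match the realized sample sizes. All numbers are invented:

```python
import numpy as np

Z_CRIT = 1.959964   # one-sided 0.025 normal critical value

def conditional_power(z1, n1, n2, delta, sd, sims=100_000, seed=5):
    """Monte Carlo conditional power for a one-sided two-sample z-test:
    given interim z-statistic z1 from n1 subjects per arm, simulate the
    second-stage z from n2 further subjects per arm under assumed true
    mean difference `delta`, pool the stages with fixed inverse-normal
    weights, and estimate P(final test rejects). A stylized sketch, not
    any specific design from the review."""
    rng = np.random.default_rng(seed)
    se2 = sd * np.sqrt(2.0 / n2)                 # SE of stage-2 mean difference
    z2 = rng.normal(delta / se2, 1.0, sims)      # stage-2 z-statistic draws
    w1 = np.sqrt(n1 / (n1 + n2))                 # weights fixed in advance,
    w2 = np.sqrt(n2 / (n1 + n2))                 # not at the realized sizes
    return float(np.mean(w1 * z1 + w2 * z2 > Z_CRIT))

cp = conditional_power(z1=1.2, n1=100, n2=200, delta=0.3, sd=1.0)
```

In a re-estimation rule, n2 would be increased until this conditional power reaches a target such as 0.9.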
Affiliation(s)
- Tze Leung Lai
- Department of Statistics, Stanford University, Stanford, California 94305
- Department of Health Research and Policy, Stanford University, Stanford, California 94305
- Philip William Lavori
- Department of Statistics, Stanford University, Stanford, California 94305
- Department of Health Research and Policy, Stanford University, Stanford, California 94305
- Mei-Chiung Shih
- Department of Health Research and Policy, Stanford University, Stanford, California 94305
- Cooperative Studies Program, U.S. Department of Veterans Affairs, Mountain View, California 94043
17
Abstract
Hierarchical models are widely used to characterize the performance of individual healthcare providers. However, little attention has been devoted to system-wide performance evaluations, the goals of which include identifying extreme (e.g., top 10%) provider performance and developing statistical benchmarks to define high-quality care. Obtaining optimal estimates of these quantities requires estimating the empirical distribution function (EDF) of the provider-specific parameters that generate the dataset under consideration. However, the difficulty of obtaining uncertainty bounds for a squared-error-loss-minimizing EDF estimate has hindered its use in system-wide performance evaluations. We therefore develop and study a percentile-based EDF estimate for univariate provider-specific parameters. We compute order statistics of samples drawn from the posterior distribution of provider-specific parameters to obtain relevant uncertainty assessments of an EDF estimate and its features, such as thresholds and percentiles. We apply our method to data from the Medicare End Stage Renal Disease (ESRD) Program, a health insurance program for people with irreversible kidney failure. We highlight the risk of misclassifying providers as exceptionally good or poor performers when uncertainty in statistical benchmark estimates is ignored. Given the high stakes of performance evaluations, statistical benchmarks should be accompanied by precision estimates.
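The mechanics of the percentile-based EDF estimate are straightforward once posterior draws are in hand: compute the EDF feature of interest within each MCMC draw, then summarize across draws. Here the draws are simulated from a normal rather than taken from a fitted hierarchical model:

```python
import numpy as np

rng = np.random.default_rng(6)
n_providers, n_draws = 50, 2000

# Hypothetical posterior draws of provider-specific effects (rows = MCMC
# draws, columns = providers); in practice these come from the fitted
# hierarchical model, not from a normal as here
theta = (rng.normal(0, 1, (n_draws, n_providers))
         + rng.normal(0, 0.5, n_providers))     # fixed provider-level shifts

# Per-draw EDF feature: the 90th percentile of provider effects, i.e. a
# candidate benchmark threshold for "top 10%" performance
q90 = np.sort(theta, axis=1)[:, int(0.9 * n_providers)]

# Point estimate and uncertainty interval for the benchmark
est = np.median(q90)
lo, hi = np.percentile(q90, [2.5, 97.5])
```

The width of (lo, hi) is exactly the precision estimate the abstract argues should accompany any published benchmark.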
Affiliation(s)
- Thomas A. Louis
- Department of Biostatistics, Johns Hopkins University Bloomberg School of Public Health, Baltimore, MD 21205-2179, U.S.A.
18
Zhang S, Midthune D, Guenther PM, Krebs-Smith SM, Kipnis V, Dodd KW, Buckman DW, Tooze JA, Freedman L, Carroll RJ. A new multivariate measurement error model with zero-inflated dietary data, and its application to dietary assessment. Ann Appl Stat 2011; 5:1456-1487. [PMID: 21804910; PMCID: PMC3145332; DOI: 10.1214/10-aoas446]
Abstract
In the United States, the preferred method of obtaining dietary intake data is the 24-hour dietary recall, yet the measure of most interest is usual or long-term average daily intake, which is impossible to measure directly. Thus, usual dietary intake is assessed with considerable measurement error. Also, diet comprises numerous foods, nutrients, and other components, each of which has distinctive attributes. Sometimes, it is useful to examine intake of these components separately, but increasingly nutritionists are interested in exploring them collectively to capture overall dietary patterns. Consumption of these components varies widely: some are consumed by almost everyone every day, while others are consumed episodically, so that 24-hour recall data are zero-inflated. In addition, the components are often correlated with each other. Finally, it is often preferable to analyze the amount of a dietary component relative to the amount of energy (calories) in a diet, because dietary recommendations often vary with energy level. The quest to understand overall dietary patterns of usual intake has to this point reached a standstill: no statistical methods or models have been available to model such complex multivariate data with their measurement error and zero inflation. This paper proposes the first such model and the first workable solution for fitting it. After describing the model, we use survey-weighted MCMC computations to fit it, with uncertainty estimation coming from balanced repeated replication. The methodology is illustrated through an application to estimating the population distribution of the Healthy Eating Index-2005 (HEI-2005), a multi-component dietary quality index involving ratios of interrelated dietary components to energy, among children aged 2-8 in the United States. We pose a number of interesting questions about the HEI-2005 and provide answers that were not previously within the realm of possibility, and we indicate ways that our approach can be used to answer other questions of importance to nutritional science and public health.
Affiliation(s)
- Saijuan Zhang
- Department of Statistics, Texas A&M University, 3143 TAMU, College Station, Texas 77843-3143, U.S.A.
- Douglas Midthune
- Biometry Research Group, Division of Cancer Prevention, National Cancer Institute, 6130 Executive Boulevard, EPN-3131, Bethesda, Maryland 20892-7354, U.S.A.
- Patricia M. Guenther
- Center for Nutrition Policy and Promotion, U.S. Department of Agriculture, 3101 Park Center Drive, Ste. 1034, Alexandria, Virginia 22302, U.S.A.
- Susan M. Krebs-Smith
- Applied Research Program, Division of Cancer Control and Population Sciences, National Cancer Institute, 6130 Executive Boulevard, EPN-4005, Bethesda, Maryland 20892, U.S.A.
- Victor Kipnis
- Biometry Research Group, Division of Cancer Prevention, National Cancer Institute, 6130 Executive Boulevard, EPN-3131, Bethesda, Maryland 20892-7354, U.S.A.
- Kevin W. Dodd
- Biometry Research Group, Division of Cancer Prevention, National Cancer Institute, 6130 Executive Boulevard, EPN-3131, Bethesda, Maryland 20892-7354, U.S.A.
- Dennis W. Buckman
- Information Management Services, Inc., 12501 Prosperity Drive, Silver Spring, Maryland 20904, U.S.A.
- Janet A. Tooze
- Department of Biostatistical Sciences, Wake Forest University School of Medicine, Medical Center Boulevard, Winston-Salem, North Carolina 27157, U.S.A.
- Laurence Freedman
- Gertner Institute for Epidemiology and Health Policy Research, Sheba Medical Center, Tel Hashomer 52161, Israel
- Raymond J. Carroll
- Department of Statistics, Texas A&M University, 3143 TAMU, College Station, Texas 77843-3143, U.S.A.
19
Zhang J, Zhou X, Wang H, Suffredini A, Zhang L, Huang Y, Wong S. Bayesian Peptide Peak Detection for High Resolution TOF Mass Spectrometry. IEEE Trans Signal Process 2010; 58:5883-5894. [PMID: 21544266 PMCID: PMC3085289 DOI: 10.1109/tsp.2010.2065226] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
In this paper, we address the issue of peptide ion peak detection for high-resolution time-of-flight (TOF) mass spectrometry (MS) data. A novel Bayesian peptide ion peak detection method is proposed for TOF data with a resolution of 10 000-15 000 full width at half-maximum (FWHM). MS spectra exhibit distinct characteristics at this resolution, which are captured in a novel parametric model. Based on the proposed parametric model, a Bayesian peak detection algorithm based on Markov chain Monte Carlo (MCMC) sampling is developed. The proposed algorithm is tested on both simulated and real datasets. The results show a significant improvement in detection performance over a commonly employed method, and they agree with experts' visual inspection. Moreover, better detection consistency is achieved across MS datasets from patients with the same pathological condition.
Affiliation(s)
- Jianqiu Zhang
- Department of Electrical and Computer Engineering, University of Texas at San Antonio, TX 78249, USA
- Xiaobo Zhou
- Texas Methodist Hospital Research Institute, Houston, TX 77030, USA
- Honghui Wang
- Critical Care Medicine Department, Clinical Center, National Institutes of Health, Bethesda, MD 20892, USA
- Anthony Suffredini
- Critical Care Medicine Department, Clinical Center, National Institutes of Health, Bethesda, MD 20892, USA
- Lin Zhang
- School of Information and Electric Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Yufei Huang
- Department of Electrical and Computer Engineering, University of Texas at San Antonio, TX 78249, USA, and also with the Greehey Children’s Cancer Research Institute, Department of Epidemiology and Biostatistics, University of Texas Health Science Center at San Antonio, San Antonio, TX 78229, USA
- Stephen Wong
- Texas Methodist Hospital Research Institute, Houston, TX 77030, USA
20
Abstract
During the last twenty years there have been considerable methodological developments in the design and analysis of Phase 1, Phase 2 and Phase 1/2 dose-finding studies. Many of these developments are related to the continual reassessment method (CRM), first introduced by O'Quigley, Pepe and Fisher (1990). CRM models have proven themselves to be of practical use and, in this discussion, we investigate the basic approach, some connections to other methods, some generalizations, as well as further applications of the model. We obtain some new results which can provide guidance in practice.
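The one-parameter power-model CRM discussed in this abstract can be sketched in a few lines. The skeleton values, normal-prior scale, and target toxicity rate below are illustrative assumptions of ours, not parameters from O'Quigley, Pepe and Fisher (1990); the posterior is computed on a simple grid rather than by the authors' methods.

```python
import math

def crm_next_dose(skeleton, outcomes, target=0.25, grid=None):
    """Minimal one-parameter CRM sketch with working model p_i = s_i ** exp(a).

    skeleton : prior guesses of the toxicity probability at each dose
    outcomes : list of (dose_index, toxicity 0/1) observations so far
    Returns the index of the dose whose posterior-mean toxicity probability
    is closest to the target rate.
    """
    if grid is None:
        grid = [-3.0 + 6.0 * k / 200 for k in range(201)]  # grid over a

    def prior(a):
        # Normal(0, 1.34^2) density kernel, a common CRM default choice
        return math.exp(-a * a / (2 * 1.34 ** 2))

    # Unnormalized posterior over a on the grid
    post = []
    for a in grid:
        like = prior(a)
        for d, y in outcomes:
            p = skeleton[d] ** math.exp(a)
            like *= p if y else (1.0 - p)
        post.append(like)
    z = sum(post)
    post = [w / z for w in post]

    # Posterior-mean toxicity probability at each dose
    pmean = [sum(w * s ** math.exp(a) for w, a in zip(post, grid))
             for s in skeleton]
    return min(range(len(skeleton)), key=lambda i: abs(pmean[i] - target))
```

After a run of toxicities at the top dose, the recommendation moves down; after a run of non-toxicities, it moves up, which is the qualitative behavior the CRM is designed to have.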
Affiliation(s)
- John O’Quigley
- Inserm, Université Paris VI, Place Jussieu, 75005 Paris, France
- Mark Conaway
- Division of Biostatistics, Department of Public Health Sciences, University of Virginia, Charlottesville, VA 22908, USA
21
Abstract
AIMS We present a statistical model for evaluating the effects of substance use when substance use might be under-reported. The model is a special case of the Bayesian formulation of the 'classical' measurement error model, requiring that the analyst quantify prior beliefs about rates of under-reporting and the true prevalence of substance use in the study population. DESIGN Prospective study. SETTING A diversion program for youths on probation for drug-related crimes. PARTICIPANTS A total of 257 youths at risk for re-incarceration. MEASUREMENTS The effects of true cocaine use on recidivism risks while accounting for possible under-reporting. FINDINGS The proposed model showed a 60% lower mean time to re-incarceration among actual cocaine users. This effect size is about 75% larger than that estimated in the analysis that relies only on self-reported cocaine use. Sensitivity analyses comparing different prior beliefs about the prevalence of cocaine use and rates of under-reporting consistently indicated larger effects than the analysis that assumes everyone tells the truth about their drug use. CONCLUSION The proposed Bayesian model allows one to estimate the effect of actual drug use on study outcome measures.
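The correction described here hinges on a simple Bayes-rule step: given prior beliefs about true prevalence and the under-reporting rate, a denial of use is only partially informative. A minimal sketch of that step (the function name and the no-over-reporting assumption are ours, not the paper's):

```python
def p_true_use_given_report(reported, prevalence, underreport_rate):
    """P(actual user | self-report), assuming a 'yes' report implies use
    (no over-reporting) and that true users deny use with probability
    underreport_rate. prevalence is the prior P(actual user)."""
    if reported:
        return 1.0  # under the no-over-reporting assumption
    # Bayes' rule for a 'no' report
    p_no_given_user = underreport_rate
    p_no_given_nonuser = 1.0
    num = p_no_given_user * prevalence
    den = num + p_no_given_nonuser * (1.0 - prevalence)
    return num / den
```

With a prior prevalence of 0.3 and an under-reporting rate of 0.4, a denial still leaves roughly a 15% posterior probability of actual use, which is why effect estimates based only on self-report are attenuated.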
Affiliation(s)
- Garnett P. McMillan
- Corresponding Author. Behavioral Health Research Center of the Southwest, A Center of the Pacific Institute for Research and Evaluation 612 Encino Pl NE Albuquerque, NM 87102, USA
22
Andrianoelina O, Favreau B, Ramamonjisoa L, Bouvet JM. Small effect of fragmentation on the genetic diversity of Dalbergia monticola, an endangered tree species of the eastern forest of Madagascar, detected by chloroplast and nuclear microsatellites. Ann Bot 2009; 104:1231-42. [PMID: 19773273 PMCID: PMC2766213 DOI: 10.1093/aob/mcp231] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/17/2009] [Revised: 06/29/2009] [Accepted: 07/27/2009] [Indexed: 05/28/2023]
Abstract
BACKGROUND AND AIMS The oriental forest ecosystem in Madagascar has been seriously impacted by fragmentation. The pattern of genetic diversity was analysed in a tree species, Dalbergia monticola, which plays an important economic role in Madagascar and is one of the many endangered tree species in the eastern forest. METHODS Leaves from 546 individuals belonging to 18 small populations affected by different levels of fragmentation were genotyped using eight nuclear (nuc) and three chloroplast (cp) microsatellite markers. KEY RESULTS For nuclear microsatellites, allelic richness (R) and heterozygosity (H(e,nuc)) differed between types of forest: R = 7.36 and R = 9.55, H(e,nuc) = 0.64 and H(e,nuc) = 0.80 in fragmented and non-fragmented forest, respectively, but the differences were not significant. Only the mean number of alleles (N(a,nuc)) and the fixation index F(IS) differed significantly: N(a,nuc) = 9.41 and N(a,nuc) = 13.18, F(IS) = 0.06 and F(IS) = 0.15 in fragmented and non-fragmented forests, respectively. For chloroplast microsatellites, estimated genetic diversity was higher in non-fragmented forest, but the difference was not significant. No recent bottleneck effect was detected for either population. Overall differentiation was low for nuclear microsatellites (F(ST,nuc) = 0.08) and moderate for chloroplast microsatellites (F(ST,cp) = 0.49). A clear relationship was observed between genetic and geographic distance (r = 0.42, P < 0.01 and r = 0.42, P = 0.03 for nuclear and chloroplast microsatellites, respectively), suggesting a pattern of isolation by distance. Analysis of population structure using the neighbor-joining method or Bayesian models separated southern populations from central and northern populations with nuclear microsatellites, and grouped the populations according to regions with chloroplast microsatellites, but did not separate the fragmented populations. CONCLUSIONS Residual diversity and genetic structure of populations of D. monticola in Madagascar suggest a limited impact of fragmentation on molecular genetic parameters.
Affiliation(s)
- O. Andrianoelina
- Silo national des Graines Forestières, Ambatobe BP 5091, Antananarivo, Madagascar
- B. Favreau
- Cirad-Bios Department, Research Unit ‘genetic diversity and breeding of forest tree species’, Campus international de Baillarguet TA-A 39/C, BP 5035, 34398 Montpellier cedex, France
- L. Ramamonjisoa
- Silo national des Graines Forestières, Ambatobe BP 5091, Antananarivo, Madagascar
- J.-M. Bouvet
- Cirad-Bios Department, Research Unit ‘genetic diversity and breeding of forest tree species’, Campus international de Baillarguet TA-A 39/C, BP 5035, 34398 Montpellier cedex, France
23
Abstract
BACKGROUND The genetic architecture of body weight and body composition is complex because these traits are normally influenced by multiple genes and their interactions, even after controlling for the environment. Bayesian methodology provides an efficient way of estimating these interactions. SUBJECTS AND MEASUREMENTS We used Bayesian model selection techniques to simultaneously estimate the main effects, epistasis and gene-sex interactions on age-related body weight (at 3, 6 and 10 weeks, denoted as WT3wk, WT6wk and WT10wk) and body composition (organ weights and fat-related traits) in an F(2) sample obtained from a cross between high-growth (M16i) mice and low-growth (L6) mice. RESULTS We observed epistatic and main-effect quantitative trait loci (QTL) that controlled both body weight and body composition. Epistatic effects were generally more significant for WT6wk than WT10wk. Chromosomes 5 and 13 interacted strongly to control body weight at 3 weeks. A pleiotropic QTL on chromosome 2 was associated with body weight and some body composition phenotypes. Testis weight was regulated by a QTL on chromosome 13 with a significantly large main effect (2log(e)BF approximately 15). CONCLUSION By analyzing epistatic interactions, we detected QTL not found in a previous analysis of this mouse population. Hence, the detection of gene-gene interactions may provide new information about the genetic architecture of complex obesity-related traits and may lead to the detection of additional obesity genes.
Affiliation(s)
- Georgina A. Ankra-Badu
- Department of Biostatistics, Section on Statistical Genetics, University of Alabama at Birmingham, Birmingham, AL 35294, USA
- Daniel Pomp
- Departments of Nutrition, Cell and Molecular Physiology, University of North Carolina, Chapel Hill, NC 27599, USA
- Daniel Shriner
- Department of Biostatistics, Section on Statistical Genetics, University of Alabama at Birmingham, Birmingham, AL 35294, USA
- David B. Allison
- Department of Biostatistics, Section on Statistical Genetics, University of Alabama at Birmingham, Birmingham, AL 35294, USA
- Clinical Nutrition Research Center, University of Alabama at Birmingham, Birmingham, AL 35294, USA
- Nengjun Yi
- Department of Biostatistics, Section on Statistical Genetics, University of Alabama at Birmingham, Birmingham, AL 35294, USA
- Clinical Nutrition Research Center, University of Alabama at Birmingham, Birmingham, AL 35294, USA
24
Liu H, Gao SJ, Huang Y. A Bayesian Approach for Identifying miRNA Targets by Combining Sequence Prediction and Expression Profiling. Proc Int Joint Conf Bioinforma Syst Biol Intell Comput 2009; 2009:185-189. [PMID: 21234333 PMCID: PMC3018734 DOI: 10.1109/ijcbs.2009.120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
MicroRNAs (miRNAs) are single-stranded non-coding RNAs shown to play important regulatory roles in a wide range of biological processes and diseases. The functions and regulatory mechanisms of most miRNAs are still poorly understood, in part because of the difficulty of identifying their regulatory targets. To this end, computational methods have evolved as important tools for genome-wide target screening. Although considerable work in the past few years has produced many target prediction algorithms, most are based solely on sequence, and their accuracy is still poor. In contrast, gene expression profiling from miRNA over-expression experiments can provide additional information about miRNA targets. This paper presents a Bayesian approach that integrates sequence-level prediction results with expression profiling to improve the performance of miRNA target identification. Tests on proteomic and IP pull-down data demonstrated the better performance of the proposed approach.
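One simple way to combine the two evidence sources in the spirit of this abstract is to treat the sequence-based score as a prior probability and summarize the expression data as a likelihood ratio; this is an illustrative sketch of the Bayes step, not the paper's actual model:

```python
def target_posterior(p_seq, lr_expr):
    """Posterior probability that a gene is a miRNA target.

    p_seq   : prior probability from a sequence-based predictor (0 < p_seq < 1)
    lr_expr : likelihood ratio from expression profiling, i.e.
              P(observed expression change | target) / P(change | non-target)
    Combines them on the odds scale via Bayes' rule.
    """
    odds = (p_seq / (1.0 - p_seq)) * lr_expr
    return odds / (1.0 + odds)
```

A weak sequence prediction (prior 0.2) backed by a four-fold expression likelihood ratio ends up at a posterior of 0.5, which is the intuition behind letting expression data rescue or demote sequence-only calls.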
Affiliation(s)
- Hui Liu
- SIEE, China University of Mining and Technology, Xuzhou, China
25
Abstract
Many complex human diseases and traits of biological and/or economic importance are determined by interacting networks of multiple quantitative trait loci (QTL) and environmental factors. Mapping QTL is critical for understanding the genetic basis of complex traits, and for ultimate identification of genes responsible. A variety of sophisticated statistical methods for QTL mapping have been developed. Among these developments, the evolution of Bayesian approaches for multiple QTL mapping over the past decade has been remarkable. Bayesian methods can jointly infer the number of QTL, their genomic positions and their genetic effects. Here, we review recently developed and still developing Bayesian methods and associated computer software for mapping multiple QTL in experimental crosses. We compare and contrast these methods to clearly describe the relationships among different Bayesian methods. We conclude this review by highlighting some areas of future research.
Affiliation(s)
- N Yi
- Section on Statistical Genetics, Department of Biostatistics, University of Alabama at Birmingham, Birmingham, AL 35294-0022, USA.
26
Abstract
Current methods for statistical analysis of neuropsychological test data in schizophrenia are inherently insufficient for revealing valid cognitive impairment profiles. While neuropsychological tests aim to selectively sample discrete cognitive domains, test performance often requires several cognitive operations or "attributes." Conventional statistical approaches assign each neuropsychological score of interest to a single attribute or "domain" (e.g., attention, executive, etc.), and scores are calculated for each. This can yield misleading information about underlying cognitive impairments. We report findings applying a new method for examining neuropsychological test data in schizophrenia, based on finite partially ordered sets (posets) as classification models. A total of 220 schizophrenia outpatients were administered the Positive and Negative Symptom Scale (PANSS) and a neuropsychological test battery. Selected tests were submitted to cognitive attribute analysis a priori by two neuropsychologists. Applying Bayesian classification methods (posets), each patient was classified with respect to proficiency on the underlying attributes, based upon his or her individual test performance pattern. Twelve cognitive "classes" are described in the sample. Resulting classification models provided detailed "diagnoses" into "attribute-based" profiles of cognitive strength/weakness, mimicking expert clinician judgment. Classification was efficient, requiring few measures to achieve accurate classification. Attributes were associated with PANSS factors in the expected manner (only the negative and cognition factors were associated with the attributes), and a double dissociation was observed in which divergent thinking was selectively associated with negative symptoms, possibly reflecting a manifestation of Kraepelin's hypothesis regarding the impact of volitional disturbances on thought. Using posets for extracting more precise cognitive information from neuropsychological data may reveal more valid cognitive endophenotypes, while dramatically reducing the amount of testing required.
Affiliation(s)
- Judith Jaeger
- Center for Neuropsychiatric Rehabilitation Research, Zucker Hillside Hospital, North Shore Long Island Jewish Hospital, 75-59 263rd St., Glen Oaks, NY 11004, USA.
27
Leung GM, Woo PPS, McGhee SM, Cheung ANY, Fan S, Mang O, Thach TQ, Ngan HYS. Age-period-cohort analysis of cervical cancer incidence in Hong Kong from 1972 to 2001 using maximum likelihood and Bayesian methods. J Epidemiol Community Health 2006; 60:712-20. [PMID: 16840762 PMCID: PMC2588081 DOI: 10.1136/jech.2005.042275] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/22/2006] [Indexed: 11/04/2022]
Abstract
OBJECTIVE To examine the secular effects of opportunistic screening for cervical cancer in a rich, developed community where most other such populations have long adopted organised screening. DESIGN, SETTING, AND PARTICIPANTS The analysis was based on 15 140 cases of invasive cervical cancer from 1972 to 2001. The effects of chronological age, time period, and birth cohort were decomposed using both maximum likelihood and Bayesian methods. RESULTS The overall age adjusted incidence decreased from 24.9 per 100,000 in 1972-74 to 9.5 per 100,000 in 1999-2001, in a log-linear fashion, yielding an average annual reduction of 4.0% (p<0.001) over the 30 year period. There were two second-order and thus identifiable changes: (1) around the mid-1920s cohort curve, an age-period interaction masquerading as a cohort change that denotes the first availability of Pap testing during the 1960s, concentrated among women in their 40s; (2) a hook around the calendar years 1982-83, when cervical cytology became a standard screening test for pregnant women. CONCLUSIONS Hong Kong's cervical cancer rates have declined since Pap tests first became available in the 1960s, most probably because of increasing population coverage over time and in successive generations, in a haphazard fashion punctuated by the systematic introduction of routine cytology as part of antenatal care in the 1980s.
Affiliation(s)
- Gabriel M Leung
- Department of Community Medicine and School of Public Health, University of Hong Kong, China.
28
Abstract
This article considers the utility of the bounded cumulative hazard model in cure rate estimation, which is an appealing alternative to the widely used two-component mixture model. This approach has the following distinct advantages: (1) It allows for a natural way to extend the proportional hazards regression model, leading to a wide class of extended hazard regression models. (2) In some settings the model can be interpreted in terms of biologically meaningful parameters. (3) The model structure is particularly suitable for semiparametric and Bayesian methods of statistical inference. Notwithstanding the fact that the model has been around for less than a decade, a large body of theoretical results and applications has been reported to date. This review article is intended to give a big picture of these modeling techniques and associated statistical problems. These issues are discussed in the context of survival data in cancer.
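The bounded cumulative hazard (promotion-time) structure this abstract reviews has a compact closed form: population survival S(t) = exp(-θF(t)), where F is a proper cumulative distribution function, so the cure fraction is exp(-θ). A minimal sketch (the exponential choice of F in the usage below is an illustrative assumption):

```python
import math

def promotion_time_survival(t, theta, F):
    """Population survival under the bounded cumulative hazard
    (promotion-time) cure model: S(t) = exp(-theta * F(t)).

    theta : mean number of latent 'promotion' events (theta > 0)
    F     : a proper CDF, F(0) = 0 and F(t) -> 1 as t grows,
            so S(t) is bounded below by the cure fraction exp(-theta).
    """
    return math.exp(-theta * F(t))
```

Because F(t) tends to 1, S(t) flattens out at exp(-θ) rather than decaying to zero, which is exactly how the model encodes a cured subpopulation without the two-component mixture.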
Affiliation(s)
- J. G. Ibrahim
- Department of Biostatistics, University of North Carolina, McGavran-Greenberg Hall, Chapel Hill, NC 27599
- A. Y. Yakovlev
- Department of Statistics and Computational Biology, University of Rochester, 601 Elmwood Avenue, Box 630, Rochester, NY 14642
29
Abstract
In the case of the mixed linear model, the random effects are usually assumed to be normally distributed in both the Bayesian and classical frameworks. In this paper, the Dirichlet process prior was used to provide nonparametric Bayesian estimates for correlated random effects. This goal was achieved by providing a Gibbs sampler algorithm that allows these correlated random effects to have a nonparametric prior distribution. A sampling-based method is illustrated; it is implemented by transforming the genetic covariance matrix to an identity matrix so that the random effects are uncorrelated, and it extends the theory and results of previous researchers. Also, using Gibbs sampling and data augmentation, a simulation procedure was derived for estimating the precision parameter M associated with the Dirichlet process prior. All needed conditional posterior distributions are given. To illustrate the application, data from the Elsenburg Dormer sheep stud were analysed. A total of 3325 weaning weight records from the progeny of 101 sires were used.
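The precision parameter M mentioned above governs how many distinct clusters a Dirichlet process induces, which is easiest to see through the equivalent Chinese restaurant process: item i joins an existing cluster in proportion to its size, or starts a new one with probability M/(i + M). A small illustrative sketch (not the paper's Gibbs sampler):

```python
import random

def crp_partition(n, M, rng=None):
    """Draw cluster labels for n items from a Chinese restaurant process
    with concentration (precision) parameter M > 0."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    labels, counts = [], []
    for i in range(n):
        # Existing cluster k gets weight counts[k]; a new cluster gets M.
        r = rng.random() * (i + M)
        acc, new_cluster = 0.0, True
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                labels.append(k)
                counts[k] += 1
                new_cluster = False
                break
        if new_cluster:
            labels.append(len(counts))
            counts.append(1)
    return labels
```

Small M concentrates the items in a few clusters (approaching a single shared distribution), while large M gives nearly one cluster per item, recovering behavior close to independent parametric effects.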
Affiliation(s)
- Abraham Johannes van der Merwe
- Department of Mathematical Statistics, Faculty of Science, University of the Free State, PO Box 339, Bloemfontein, 9300 Republic of South Africa.