1
Olschewski S, Spektor MS, Le Mens G. Frequent winners explain apparent skewness preferences in experience-based decisions. Proc Natl Acad Sci U S A 2024; 121:e2317751121. [PMID: 38489382; PMCID: PMC10962955; DOI: 10.1073/pnas.2317751121]
Abstract
Do people's attitudes toward the (a)symmetry of an outcome distribution affect their choices? Financial investors seek return distributions with frequent small returns but few large ones, consistent with leading models of choice in economics and finance that assume right-skewed preferences. In contrast, many experiments in which decision-makers learn about choice options through experience find the opposite choice tendency, in favor of left-skewed options. To reconcile these seemingly contradictory findings, the present work investigates the effect of skewness on choices in experience-based decisions. Across seven studies, we show that apparent preferences for left-skewed outcome distributions are a consequence of those distributions having a higher value in most direct outcome comparisons, a "frequent-winner effect." By manipulating which option is the frequent winner, we show that choice tendencies for frequent winners can be obtained even with identical outcome distributions. Moreover, systematic choice tendencies in favor of right- or left-skewed options can be obtained by manipulating which option is experienced as the frequent winner. We also find evidence for an intrinsic preference for right-skewed outcome distributions. The frequent-winner phenomenon is robust to variations in outcome distributions and experimental paradigms. These findings are confirmed by computational analyses in which a reinforcement-learning model capturing frequent winning and intrinsic skewness preferences provides the best account of the data. Our work reconciles conflicting findings of aggregated behavior in financial markets and experiments and highlights the need for theories of decision-making sensitive to joint outcome distributions of the available options.
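The frequent-winner effect described above is easy to reproduce in a toy simulation. The sketch below is an illustration, not the authors' experimental paradigm: it pairs two hypothetical equal-mean lotteries, one left-skewed and one right-skewed, and counts how often the left-skewed option delivers the higher outcome in direct comparisons.

```python
import random

random.seed(42)

def draw_left_skewed():
    # hypothetical lottery: usually +2, rarely -18 (left-skewed, mean 0)
    return 2.0 if random.random() < 0.9 else -18.0

def draw_right_skewed():
    # hypothetical lottery: usually -2, rarely +18 (right-skewed, mean 0)
    return -2.0 if random.random() < 0.9 else 18.0

# The left-skewed option wins whenever both lotteries land on their frequent
# outcome (+2 vs -2), which happens with probability 0.9 * 0.9 = 0.81.
trials = 10_000
wins = sum(draw_left_skewed() > draw_right_skewed() for _ in range(trials))
rate = wins / trials
print(rate)  # close to 0.81
```

Despite identical expected values, the left-skewed option is the "frequent winner" in roughly 81% of paired draws, which is the choice signal the paper argues drives apparent skewness preferences.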
Affiliation(s)
- Sebastian Olschewski
  - Department of Psychology, University of Basel, 4055 Basel, Switzerland
  - Warwick Business School, University of Warwick, CV4 7EQ Coventry, United Kingdom
- Mikhail S. Spektor
  - Department of Psychology, University of Warwick, CV4 7EQ Coventry, United Kingdom
  - Department of Economics and Business, Universitat Pompeu Fabra, 08005 Barcelona, Spain
- Gaël Le Mens
  - Department of Economics and Business, Universitat Pompeu Fabra, 08005 Barcelona, Spain
  - Barcelona School of Economics (BSE), 08005 Barcelona, Spain
  - Universitat Pompeu Fabra–Barcelona School of Management, 08008 Barcelona, Spain
2
Saffari SE, Soo SA, Mohammadi R, Ng KP, Greene W, Kandiah N. Modelling the Distribution of Cognitive Outcomes for Early-Stage Neurocognitive Disorders: A Model Comparison Approach. Biomedicines 2024; 12:393. [PMID: 38397995; PMCID: PMC10886528; DOI: 10.3390/biomedicines12020393]
Abstract
Background: Cognitive assessments for patients with neurocognitive disorders are mostly measured with the Montreal Cognitive Assessment (MoCA) and the Visual Cognitive Assessment Test (VCAT) as screening tools. These cognitive scores are usually left-skewed, so the results of association analyses might not be robust. This study examines the distribution of the cognitive outcomes and discusses potential solutions. Materials and Methods: In this retrospective cohort study of individuals with subjective cognitive decline or mild cognitive impairment, the inverse-transformed cognitive outcomes are modelled using different statistical distributions. The robustness of the proposed models is checked under different scenarios: intercept-only models, models with covariates, and with and without bootstrapping. Results: The main results were based on the VCAT score and validated via the MoCA score. The findings suggested that the inverse transformation method improved the modelling of the cognitive scores compared to the conventional methods using the original cognitive scores. The associations of the baseline characteristics (age, gender, and years of education) with the cognitive outcomes were reported as estimates and 95% confidence intervals. Bootstrap methods improved the precision of the estimates, and the bootstrapped standard errors were more robust. Cognitive outcomes have conventionally been analysed using linear regression models with the default normal distribution; we compared the results of our suggested models with the normal distribution under various scenarios, and goodness-of-fit measurements were compared between the proposed models and the conventional methods. Conclusions: The findings support the use of the inverse transformation method to model cognitive outcomes, instead of the original cognitive scores, for early-stage neurocognitive disorders where the cognitive outcomes are left-skewed.
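To illustrate why an inverse transformation can help with left-skewed scores, here is a minimal sketch. The simulated score distribution, the reflect-and-invert form y* = 1/(c − y), and the constant c = 31 are illustrative assumptions, not the paper's exact specification.

```python
import random
import statistics as stats

def skewness(xs):
    # Fisher-Pearson moment coefficient of skewness
    m = stats.fmean(xs)
    s = stats.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

random.seed(1)
# simulated cognitive scores clustering near a ceiling of 30, with a long
# left tail (exponential tail with mean 4 is an illustrative choice)
scores = [30 - random.expovariate(1 / 4) for _ in range(2000)]

# one common inverse transformation for left-skewed data: y* = 1/(c - y),
# with c set just above the maximum attainable score (here c = 31)
transformed = [1 / (31 - y) for y in scores]

print(round(skewness(scores), 2), round(skewness(transformed), 2))
```

On this synthetic sample the raw scores are strongly left-skewed, and the transformed values have a markedly smaller absolute skewness, which is the motivation for modelling the inverse-transformed outcomes.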
Affiliation(s)
- Seyed Ehsan Saffari
  - Centre for Quantitative Medicine, Duke-NUS Medical School, National University of Singapore, Singapore 169857, Singapore
  - Department of Neurology, National Neuroscience Institute, Singapore 308433, Singapore
- See Ann Soo
  - Dementia Research Centre (Singapore), Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 308232, Singapore
- Raziyeh Mohammadi
  - Centre for Quantitative Medicine, Duke-NUS Medical School, National University of Singapore, Singapore 169857, Singapore
- Kok Pin Ng
  - Department of Neurology, National Neuroscience Institute, Singapore 308433, Singapore
  - Duke-NUS Medical School, National University of Singapore, Singapore 169857, Singapore
- William Greene
  - Stern School of Business (Emeritus), New York University, New York, NY 10012, USA
- Nagaendran Kandiah
  - Dementia Research Centre (Singapore), Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 308232, Singapore
  - Duke-NUS Medical School, National University of Singapore, Singapore 169857, Singapore
3
Martins I, Freitas Lopes H. Stochastic Volatility Models with Skewness Selection. Entropy (Basel) 2024; 26:142. [PMID: 38392397; PMCID: PMC10887516; DOI: 10.3390/e26020142]
Abstract
This paper expands traditional stochastic volatility models by allowing for time-varying skewness without imposing it. While dynamic asymmetry may capture the likely direction of future asset returns, it comes at the risk of overparameterization. Our proposed approach mitigates this concern by leveraging sparsity-inducing priors to automatically select the skewness parameter as dynamic, static, or zero in a data-driven framework. We consider two empirical applications. First, in a bond yield application, dynamic skewness captures interest rate cycles of monetary easing and tightening and is partially explained by central banks' mandates. Second, in a currency modeling framework, our model indicates no skewness in the carry factor after accounting for stochastic volatility. This supports the idea that carry crashes result from volatility surges rather than from dynamic skewness.
Affiliation(s)
- Igor Martins
  - Insper Institute of Education and Research, Rua Quatá 300, São Paulo 04546-042, Brazil
4
Papenberg M. K-plus anticlustering: An improved k-means criterion for maximizing between-group similarity. Br J Math Stat Psychol 2024; 77:80-102. [PMID: 37431687; DOI: 10.1111/bmsp.12315]
Abstract
Anticlustering refers to the process of partitioning elements into disjoint groups with the goal of obtaining high between-group similarity and high within-group heterogeneity. Anticlustering thereby reverses the logic of its better-known twin, cluster analysis, and is usually approached by maximizing instead of minimizing a clustering objective function. This paper presents k-plus, an extension of the classical k-means objective of maximizing between-group similarity in anticlustering applications. K-plus represents between-group similarity as discrepancy in distribution moments (means, variances, and higher-order moments), whereas the k-means criterion only reflects group differences with regard to means. While constituting a new criterion for anticlustering, k-plus anticlustering can be implemented by optimizing the original k-means criterion after the input data have been augmented with additional variables. A computer simulation and practical examples show that k-plus anticlustering achieves high between-group similarity with regard to multiple objectives. In particular, optimizing between-group similarity with regard to variances usually does not compromise similarity with regard to means; the k-plus extension is therefore generally preferred over classical k-means anticlustering. Examples are given on how k-plus anticlustering can be applied to real norming data using the open-source R package anticlust, which is freely available via CRAN.
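The paper's key implementation trick, augmenting the data so that the plain k-means criterion also balances variances, can be sketched as follows. This is a brute-force toy in one dimension; the real anticlust package uses efficient exchange heuristics rather than exhaustive search.

```python
import itertools
import statistics as stats

def kplus_augment(data):
    # k-plus idea: for each feature, append the squared deviation from the
    # feature's global mean; optimizing the k-means criterion on the
    # augmented data then balances group variances as well as group means
    means = [stats.fmean(col) for col in zip(*data)]
    return [tuple(row) + tuple((x - m) ** 2 for x, m in zip(row, means))
            for row in data]

def within_group_ss(groups):
    # the k-means criterion: sum of squared distances to group centroids
    total = 0.0
    for g in groups:
        cent = [stats.fmean(col) for col in zip(*g)]
        total += sum(sum((x - c) ** 2 for x, c in zip(row, cent))
                     for row in g)
    return total

data = [(float(x),) for x in range(1, 9)]  # toy 1-D data: 1..8
aug = kplus_augment(data)

# Anticlustering MAXIMIZES the k-means criterion; brute force over all
# 4-vs-4 splits of the 8 points (feasible only for tiny examples).
best = max(
    (set(c) for c in itertools.combinations(range(8), 4)),
    key=lambda c: within_group_ss(
        [[aug[i] for i in c], [aug[i] for i in range(8) if i not in c]]
    ),
)
g1 = sorted(data[i][0] for i in best)
g2 = sorted(data[i][0] for i in set(range(8)) - best)
print(g1, g2)  # the selected groups have equal means AND equal variances
```

On this example the selected split ({1, 4, 6, 7} vs {2, 3, 5, 8}) equalizes both group means and group variances; several other splits equalize the means alone, which is all that classical k-means anticlustering would guarantee.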
5
Hafiz R, Irfanoglu MO, Nayak A, Pierpaoli C. "Pscore": A Novel Percentile-Based Metric to Accurately Assess Individual Deviations in Non-Gaussian Distributions of Quantitative MRI Metrics. J Magn Reson Imaging 2024. [PMID: 38291798; DOI: 10.1002/jmri.29248]
Abstract
BACKGROUND Quantitative magnetic resonance imaging (MRI) metrics could be used in personalized medicine to assess individuals against normative distributions. Conventional Zscore analysis is inadequate in the presence of non-Gaussian distributions. Therefore, if quantitative MRI metrics deviate from normality, an alternative is needed. PURPOSE To confirm non-Gaussianity of diffusion MRI (dMRI) metrics on a publicly available dataset, and to propose a novel percentile-based method, "Pscore" to address this issue. STUDY TYPE Retrospective cohort. POPULATION Nine hundred and sixty-one healthy young adults (age: 22-35 years, females: 53%) from the Human Connectome Project. FIELD STRENGTH/SEQUENCE 3-T, spin-echo diffusion echo-planar imaging, T1-weighted: MPRAGE. ASSESSMENT The dMRI data were preprocessed using the TORTOISE pipeline. Forty-eight regions of interest (ROIs) from the JHU atlas were redrawn on a study-specific diffusion tensor (DT) template and average values were computed from various DT and mean apparent propagator (MAP) metrics. For each ROI, percentile ranks across participants were computed to generate "Pscores"-which normalized the difference between the median and a participant's value with the corresponding difference between the median and the 5th/95th percentile values. STATISTICAL TESTS ROI-wise distributions were assessed using log transformations, Zscore, and the "Pscore" methods. The percentages of extreme values above-95th and below-5th percentile boundaries (PEV>95 (%), PEV<5 (%)) were also assessed in the overall white matter. Bootstrapping was performed to test the reliability of Pscores in small samples (N = 100) using 100 iterations. RESULTS The dMRI metric distributions were systematically non-Gaussian, including positively skewed (eg, mean and radial diffusivity) and negatively skewed (eg, fractional and propagator anisotropy) metrics. 
This resulted in unbalanced tails in Zscore distributions (PEV>95 ≠ 5%, PEV<5 ≠ 5%), whereas "Pscore" distributions were symmetric and balanced (PEV>95 = PEV<5 = 5%), even for small bootstrapped samples (average PEV>95 = average PEV<5 = 5 ± 0% [SD]). DATA CONCLUSION The inherent skewness observed for dMRI metrics may preclude the use of conventional Zscore analysis. The proposed "Pscore" method may help estimate individual deviations more accurately in skewed normative data, even from small datasets. LEVEL OF EVIDENCE 1 TECHNICAL EFFICACY: Stage 1.
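A minimal sketch of the percentile-based scoring idea follows. The exact scaling is one plausible reading of the description above (deviation from the median, normalized by the median-to-5th or median-to-95th percentile distance), and the lognormal reference sample is only a stand-in for a skewed normative dMRI metric.

```python
import random
import statistics as stats

def percentile(xs, p):
    # linear-interpolation percentile on a sorted copy of the sample
    xs = sorted(xs)
    k = (len(xs) - 1) * p / 100
    f = int(k)
    c = min(f + 1, len(xs) - 1)
    return xs[f] + (xs[c] - xs[f]) * (k - f)

def pscore(x, ref):
    # one plausible reading of the definition: deviation from the median,
    # scaled by the median-to-95th (above) or median-to-5th (below)
    # distance, so that +1/-1 mark the 95th/5th percentile boundaries
    med = stats.median(ref)
    if x >= med:
        return (x - med) / (percentile(ref, 95) - med)
    return (x - med) / (med - percentile(ref, 5))

random.seed(0)
# right-skewed normative sample standing in for a dMRI metric
ref = [random.lognormvariate(0, 0.5) for _ in range(961)]
p5, p95 = percentile(ref, 5), percentile(ref, 95)
print(pscore(p95, ref), pscore(p5, ref))  # 1.0 -1.0 by construction
```

Because each tail is scaled by its own percentile distance, the two tails are balanced by construction even when the reference distribution is heavily skewed, which is the property a single-scale Zscore lacks.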
Affiliation(s)
- Rakibul Hafiz
  - Laboratory on Quantitative Medical Imaging, National Institute of Biomedical Imaging and Bioengineering, Bethesda, Maryland, USA
- M Okan Irfanoglu
  - Laboratory on Quantitative Medical Imaging, National Institute of Biomedical Imaging and Bioengineering, Bethesda, Maryland, USA
- Amritha Nayak
  - Laboratory on Quantitative Medical Imaging, National Institute of Biomedical Imaging and Bioengineering, Bethesda, Maryland, USA
  - Military Traumatic Brain Injury Initiative (MTBI2, formerly the Center for Neuroscience and Regenerative Medicine [CNRM]), Bethesda, Maryland, USA
  - The Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, Maryland, USA
- Carlo Pierpaoli
  - Laboratory on Quantitative Medical Imaging, National Institute of Biomedical Imaging and Bioengineering, Bethesda, Maryland, USA
6
Shim G, Yang D, Cho W, Kim J, Ryu H, Choi W, Kim J. Elastic Resistance and Shoulder Movement Patterns: An Analysis of Reaching Tasks Based on Proprioception. Bioengineering (Basel) 2023; 11:1. [PMID: 38275569; PMCID: PMC10813056; DOI: 10.3390/bioengineering11010001]
Abstract
This study departs from the conventional research on horizontal-plane reach movements by examining human motor control strategies in vertical-plane elastic-load reach movements conducted without visual feedback. Participants performed shoulder presses against elastic resistance at low, moderate, and high intensities without access to visual information about their hand position, relying exclusively on proprioceptive feedback and synchronizing their movements with a metronome set at a 3 s interval. The results revealed consistent performance symmetry across intensities in reach speed (p = 0.254-0.736), return speed (p = 0.205-0.882), and movement distance (p = 0.480-0.919). This finding underscores the human capacity to uphold bilateral symmetry in movement execution when relying solely on proprioception. Furthermore, this study observed an asymmetric velocity profile: the reach duration remained consistent irrespective of the load (1.15 s), whereas the return duration increased with higher loads (1.39-1.45 s). These findings suggest that, in the absence of visual feedback, the asymmetric velocity profile does not result from the execution of the action but rather represents a deliberate post-reach deceleration, generated by the brain's internal model, aimed at achieving the target position. These findings hold significant implications for interpreting rehabilitation approaches in settings devoid of visual feedback.
Affiliation(s)
- Gyuseok Shim
  - Department of Human Ecology & Technology, BrainKorea21 FOUR, Handong Global University, Pohang 37554, Republic of Korea
- Duwon Yang
  - Department of Human Ecology & Technology, BrainKorea21 FOUR, Handong Global University, Pohang 37554, Republic of Korea
- Woorim Cho
  - Department of Information and Communications Engineering, Tokyo Institute of Technology, Yokohama 226-8503, Japan
- Jihyeon Kim
  - Department of Digital Healthcare, Human Integrated Solution, Goyang 10464, Republic of Korea
- Hyangshin Ryu
  - Department of Digital Healthcare, Human Integrated Solution, Goyang 10464, Republic of Korea
- Woong Choi
  - College of ICT Construction & Welfare Convergence, Kangnam University, Yongin 16979, Republic of Korea
- Jaehyo Kim
  - Department of Human Ecology & Technology, BrainKorea21 FOUR, Handong Global University, Pohang 37554, Republic of Korea
7
Jönemo J, Eklund A. Brain Age Prediction Using 2D Projections Based on Higher-Order Statistical Moments and Eigenslices from 3D Magnetic Resonance Imaging Volumes. J Imaging 2023; 9:271. [PMID: 38132689; PMCID: PMC10743800; DOI: 10.3390/jimaging9120271]
Abstract
Brain age prediction from 3D MRI volumes using deep learning has recently become a popular research topic, as brain age has been shown to be an important biomarker. Training deep networks can be very computationally demanding for large datasets like the U.K. Biobank (currently 29,035 subjects). In our previous work, it was demonstrated that using a few 2D projections (mean and standard deviation along three axes) instead of each full 3D volume leads to much faster training at the cost of a reduction in prediction accuracy. Here, we investigated if another set of 2D projections, based on higher-order statistical central moments and eigenslices, leads to a higher accuracy. Our results show that higher-order moments do not lead to a higher accuracy, but that eigenslices provide a small improvement. We also show that an ensemble of such models provides further improvement.
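The projection idea, collapsing each 3D volume into a handful of 2D images via per-fibre statistics, can be sketched as follows. The toy volume and axis convention are illustrative; the mean and standard deviation correspond to the earlier work, the third central moment to the higher-order variant tested here, and eigenslices are omitted.

```python
import random
import statistics as stats

def project(volume, axis, reducer):
    # collapse a 3D volume (nested lists indexed [x][y][z]) to a 2D image
    # by applying `reducer` along every 1D fibre parallel to `axis`
    nx, ny, nz = len(volume), len(volume[0]), len(volume[0][0])
    if axis == 0:
        return [[reducer([volume[x][y][z] for x in range(nx)])
                 for z in range(nz)] for y in range(ny)]
    if axis == 1:
        return [[reducer([volume[x][y][z] for y in range(ny)])
                 for z in range(nz)] for x in range(nx)]
    return [[reducer([volume[x][y][z] for z in range(nz)])
             for y in range(ny)] for x in range(nx)]

def central_moment(fibre, k):
    m = stats.fmean(fibre)
    return stats.fmean([(v - m) ** k for v in fibre])

random.seed(0)
# tiny 6 x 5 x 4 stand-in for an MRI volume (real volumes are far larger)
vol = [[[random.gauss(0, 1) for _ in range(4)]
        for _ in range(5)] for _ in range(6)]

mean_img = project(vol, 0, stats.fmean)                   # earlier work
std_img = project(vol, 0, stats.pstdev)                   # earlier work
m3_img = project(vol, 0, lambda f: central_moment(f, 3))  # higher-order
print(len(mean_img), len(mean_img[0]))  # 5 4
```

Each projection reduces one spatial axis, so three axes times a few statistics yield a small stack of 2D images that a 2D network can consume far more cheaply than the full 3D volume.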
Affiliation(s)
- Johan Jönemo
  - Division of Medical Informatics, Department of Biomedical Engineering, Linköping University, 581 83 Linköping, Sweden
  - Center for Medical Image Science and Visualization (CMIV), Linköping University, 581 83 Linköping, Sweden
- Anders Eklund
  - Division of Medical Informatics, Department of Biomedical Engineering, Linköping University, 581 83 Linköping, Sweden
  - Center for Medical Image Science and Visualization (CMIV), Linköping University, 581 83 Linköping, Sweden
  - Division of Statistics and Machine Learning, Department of Computer and Information Science, Linköping University, 581 83 Linköping, Sweden
8
Hernandez-Velasco LL, Abanto-Valle CA, Dey DK, Castro LM. A Bayesian approach for mixed effects state-space models under skewness and heavy tails. Biom J 2023; 65:e2100302. [PMID: 37853834; DOI: 10.1002/bimj.202100302]
Abstract
Human immunodeficiency virus (HIV) dynamics have been the focus of epidemiological and biostatistical research during the past decades to understand the progression of acquired immunodeficiency syndrome (AIDS) in the population. Although there are several approaches for modeling HIV dynamics, one of the most popular is based on Gaussian mixed-effects models because of its simplicity from the implementation and interpretation viewpoints. However, in some situations, Gaussian mixed-effects models cannot (a) capture serial correlation existing in longitudinal data, (b) deal with missing observations properly, and (c) accommodate skewness and heavy tails frequently present in patients' profiles. For those cases, mixed-effects state-space models (MESSM) become a powerful tool for modeling correlated observations, including HIV dynamics, because of their flexibility in modeling the unobserved states and the observations in a simple way. Consequently, our proposal considers an MESSM where the observations' error distribution is a skew-t. This new approach is more flexible and can accommodate data sets exhibiting skewness and heavy tails. Under the Bayesian paradigm, an efficient Markov chain Monte Carlo algorithm is implemented. To evaluate the properties of the proposed models, we carried out extensive simulation studies, including missing data in the generated data sets. Finally, we illustrate our approach with an application to the AIDS Clinical Trial Group Study 315 (ACTG-315) clinical trial data set.
Affiliation(s)
- Lina L Hernandez-Velasco
  - Facultad de Ciencias Básicas, Universidad Santiago de Cali, Calle 5 62-00, Santiago de Cali, Colombia
- Carlos A Abanto-Valle
  - Department of Statistics, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Dipak K Dey
  - Department of Statistics, University of Connecticut, Storrs, Connecticut, USA
- Luis M Castro
  - Department of Statistics, Pontificia Universidad Católica de Chile, Casilla 306, Correo 22, Santiago, Chile
  - Center for the Discovery of Structures in Complex Data, Casilla 306, Correo 22, Santiago, Chile
9
Shi Y, Li H, Wang C, Chen J, Jiang H, Shih YCT, Zhang H, Song Y, Feng Y, Liu L. A flexible quasi-likelihood model for microbiome abundance count data. Stat Med 2023; 42:4632-4643. [PMID: 37607718; PMCID: PMC11045296; DOI: 10.1002/sim.9880]
Abstract
In this article, we present a flexible model for microbiome count data. We consider a quasi-likelihood framework, in which we do not make any assumptions on the distribution of the microbiome count except that its variance is an unknown but smooth function of the mean. By comparing our model to the negative binomial generalized linear model (GLM) and Poisson GLM in simulation studies, we show that our flexible quasi-likelihood method yields valid inferential results. Using a real microbiome study, we demonstrate the utility of our method by examining the relationship between adenomas and microbiota. We also provide an R package "fql" for the application of our method.
Affiliation(s)
- Yiming Shi
  - Division of Biostatistics, Washington University in St. Louis, St. Louis, Missouri
- Huilin Li
  - Division of Biostatistics, Department of Population Health, New York University School of Medicine, New York, New York
- Chan Wang
  - Division of Biostatistics, Department of Population Health, New York University School of Medicine, New York, New York
- Jun Chen
  - Division of Computational Biology, Mayo Clinic, Rochester, Minnesota
- Hongmei Jiang
  - Department of Statistics, Northwestern University, Evanston, Illinois
- Ya-Chen T. Shih
  - Department of Health Services Research, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Haixiang Zhang
  - Center for Applied Mathematics, Tianjin University, Tianjin, China
- Yizhe Song
  - Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, Missouri
- Yang Feng
  - Department of Biostatistics, College of Global Public Health, New York University, New York, New York
- Lei Liu
  - Division of Biostatistics, Washington University in St. Louis, St. Louis, Missouri
10
Lee I, Sinha D, Mai Q, Zhang X, Bandyopadhyay D. Bayesian regression analysis of skewed tensor responses. Biometrics 2023; 79:1814-1825. [PMID: 35983634; DOI: 10.1111/biom.13743]
Abstract
Tensor regression analysis is finding vast emerging applications in a variety of clinical settings, including neuroimaging, genomics, and dental medicine. The motivation for this paper is a study of periodontal disease (PD) with an order-3 tensor response: multiple biomarkers measured at prespecified tooth-sites within each tooth, for each participant. A careful investigation reveals considerable skewness in the responses, in addition to response missingness. To mitigate the shortcomings of existing analysis tools, we propose a new Bayesian tensor response regression method that facilitates interpretation of covariate effects on both marginal and joint distributions of highly skewed tensor responses, and accommodates missing-at-random responses under a closure property of our tensor model. Furthermore, we present a prudent evaluation of the overall covariate effects while identifying their possible variations on only a sparse subset of the tensor components. Our method is supported by readily implementable Markov chain Monte Carlo (MCMC) tools. We illustrate substantial advantages of our proposal over existing methods via simulation studies and application to a real data set derived from a clinical study of PD. The R package BSTN, available on GitHub, implements our model.
Affiliation(s)
- Inkoo Lee
  - Department of Statistics, Rice University, Houston, Texas, USA
- Debajyoti Sinha
  - Department of Statistics, Florida State University, Tallahassee, Florida, USA
- Qing Mai
  - Department of Statistics, Florida State University, Tallahassee, Florida, USA
- Xin Zhang
  - Department of Statistics, Florida State University, Tallahassee, Florida, USA
11
Passy SI, Mruzek JL, Budnick WR, Leboucher T, Jamoneau A, Chase JM, Soininen J, Sokol ER, Tison-Rosebery J, Vilmi A, Wang J, Larson CA. On the shape and origins of the freshwater species-area relationship. Ecology 2023; 104:e3917. [PMID: 36336908; DOI: 10.1002/ecy.3917]
Abstract
The species-area relationship (SAR) has a history of more than 150 years in ecology, but how its shape and origins vary across scales and organisms remains incompletely understood. This is the first subcontinental freshwater study to examine both of these properties of the SAR in a spatially explicit way across major organismal groups (diatoms, insects, and fish) that differ in body size and dispersal capacity. First, to describe the SAR shape, we evaluated the fit of three commonly used models: logarithmic, power, and Michaelis-Menten. Second, we proposed a hierarchical framework to explain the variability in the SAR shape, captured by the parameters of the SAR model. According to this framework, scale and species group were the top predictors of the SAR shape, climatic factors (heterogeneity and median conditions) represented the second predictor level, and metacommunity properties (intraspecific spatial aggregation, γ-diversity, and species abundance distribution) the third predictor level. We calculated the SAR as a sample-based rarefaction curve using 60 streams within landscape windows (scales) in the United States, ranging from 160,000 to 6,760,000 km². First, we found that all models provided good fits (R² ≥ 0.93), but the frequency of the best-fitting model was strongly dependent on organism, scale, and metacommunity properties. The Michaelis-Menten model was most common in fish, at the largest scales, and at the highest levels of intraspecific spatial aggregation. The power model was most frequent in diatoms and insects, at smaller scales, and in metacommunities with the lowest evenness. The logarithmic model fit best exclusively at the smallest scales and in species-poor metacommunities, primarily fish. Second, we tested our framework with the parameters of the most broadly used SAR model, the log-log form of the power model, using a structural equation model. This model supported our framework and revealed that the SAR slope was best predicted by scale- and organism-dependent metacommunity properties, particularly spatial aggregation, whereas the intercept responded most strongly to species group and γ-diversity. Future research should investigate, from the perspective of our framework, how shifts in metacommunity properties due to climate change may alter the SAR.
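The three SAR model forms compared above can be sketched with ordinary least squares on linearized forms. The synthetic power-law data and the double-reciprocal shortcut for the Michaelis-Menten form are illustrative simplifications; the study itself fit rarefaction curves, presumably with nonlinear routines.

```python
import math
import statistics as stats

def linfit(x, y):
    # ordinary least squares: returns (intercept, slope)
    mx, my = stats.fmean(x), stats.fmean(y)
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

# noiseless synthetic species-area data from a power law S = c * A^z
area = [2.0 ** k for k in range(8)]  # 1 .. 128
c_true, z_true = 5.0, 0.25
richness = [c_true * a ** z_true for a in area]

# power model, fitted in its log-log form: log S = log c + z * log A
logc, z = linfit([math.log(a) for a in area],
                 [math.log(s) for s in richness])

# logarithmic model: S = a0 + b0 * log A
a0, b0 = linfit([math.log(a) for a in area], richness)

# Michaelis-Menten S = Smax * A / (b + A), via the double-reciprocal
# linearization 1/S = 1/Smax + (b/Smax) * (1/A) -- a shortcut; real
# analyses would use nonlinear least squares instead
icpt, slp = linfit([1 / a for a in area], [1 / s for s in richness])
smax, b = 1 / icpt, slp / icpt

print(round(math.exp(logc), 6), round(z, 6))  # recovers c = 5.0, z = 0.25
```

Because the data were generated from a power law, the log-log fit recovers the generating parameters exactly, while the other two forms only approximate the curve; with real rarefaction data, comparing the three fits (e.g. by R²) is what determines the best-fitting shape.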
Affiliation(s)
- Sophia I Passy
  - Department of Biology, University of Texas at Arlington, Arlington, Texas, USA
- Joseph L Mruzek
  - Forestry and Environmental Conservation Department, Clemson University, Clemson, South Carolina, USA
- William R Budnick
  - Department of Fisheries and Wildlife, Michigan State University, East Lansing, Michigan, USA
- Thibault Leboucher
  - Laboratory for Continental Environments, National Scientific Research Center, University of Lorraine, Metz, France
- Jonathan M Chase
  - Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Leipzig, Germany
- Janne Soininen
  - Department of Geosciences and Geography, University of Helsinki, Helsinki, Finland
- Eric R Sokol
  - National Ecological Observatory Network (NEON), Battelle, Boulder, Colorado, USA
- Annika Vilmi
  - Freshwater Centre, Finnish Environment Institute, Oulu, Finland
- Jianjun Wang
  - State Key Laboratory of Lake Science and Environment, Nanjing Institute of Geography and Limnology, Chinese Academy of Sciences, Nanjing, China
- Chad A Larson
  - Washington State Department of Ecology, Environmental Assessment Program, Lacey, Washington, USA
12
Davis JJJ, Kozma R, Schübeler F. Analysis of Meditation vs. Sensory Engaged Brain States Using Shannon Entropy and Pearson's First Skewness Coefficient Extracted from EEG Data. Sensors (Basel) 2023; 23:1293. [PMID: 36772332; PMCID: PMC9920060; DOI: 10.3390/s23031293]
Abstract
It has been proposed that meditative states show different brain dynamics than other, more engaged states. It is known that when people sit with closed eyes instead of open eyes, they have different brain dynamics, which may be associated with a combination of deprived sensory input and more relaxed inner psychophysiological and cognitive states. Here, we study such states based on a previously established experimental methodology, with the aid of an electroencephalography (EEG) array with 128 electrodes. We derived the Shannon Entropy (H) and Pearson's 1st Skewness Coefficient (PSk) from the power spectrum for the modalities of meditation and video watching, with 20 participants (11 meditators and 9 non-meditators). The discriminating performance of the indices H and PSk was evaluated using Student's t-test. The results demonstrate a statistically significant difference between the mean H and PSk values during meditation and video-watching modes. We show that the H index is useful to discriminate between Meditator and Non-Meditator participants during meditation over both the prefrontal and occipital areas, while the PSk index is useful to discriminate Meditators from Non-Meditators based on the prefrontal areas for both meditation and video modes. Moreover, we observe episodes of anticorrelation between the prefrontal and occipital areas during meditation, while there is no evidence for such anticorrelation periods during video watching. We outline directions of future studies incorporating further statistical indices for the characterization of brain states.
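The two indices can be sketched directly from a power spectrum. The toy 1/f-like spectrum and the histogram-based mode estimator below are illustrative assumptions, not the authors' exact pipeline.

```python
import math
import random
import statistics as stats

def shannon_entropy(power):
    # normalize the power spectrum to a probability distribution,
    # then H = -sum p * log2 p (in bits)
    total = sum(power)
    return -sum((p / total) * math.log2(p / total)
                for p in power if p > 0)

def pearson_first_skew(xs, bins=20):
    # Pearson's 1st skewness coefficient, (mean - mode) / std, with the
    # mode estimated as the midpoint of the fullest histogram bin (one
    # simple estimator among several possible choices)
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins
    counts = [0] * bins
    for x in xs:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    mode = lo + (counts.index(max(counts)) + 0.5) * width
    return (stats.fmean(xs) - mode) / stats.pstdev(xs)

random.seed(3)
# toy 64-bin "power spectrum" with a 1/f-like trend, standing in for EEG
spectrum = [1 / f + 0.05 * random.random() for f in range(1, 65)]

H = shannon_entropy(spectrum)
psk = pearson_first_skew(spectrum)
print(0.0 < H <= math.log2(64))  # True: H is bounded by log2(bin count)
```

Intuitively, H summarizes how evenly power is spread across frequencies, while PSk summarizes the asymmetry of the power distribution; both reduce a whole spectrum to a single comparable number per electrode region.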
Affiliation(s)
- Joshua J. J. Davis
- Department of Physics, Dodd-Walls Centre for Photonics and Quantum Technologies, University of Auckland, Auckland 1142, New Zealand
- Robert Kozma
- Department of Mathematics, University of Memphis, Memphis, TN 38152, USA
- Kozmos Research Laboratories, Boston, MA 02215, USA
- School of Informatics, Obuda University, H-1034 Budapest, Hungary
13
Suzuki A, Murakami H, Mukherjee A. Distribution-free Phase-I scheme for location, scale and skewness shifts with an application in monitoring customers' waiting time. J Appl Stat 2023; 50:827-847. PMID: 36925911; PMCID: PMC10013472; DOI: 10.1080/02664763.2021.1994530.
Abstract
Phase-I analysis of historical data from a statistical process is a strategic problem in statistical process monitoring and control. Before process stability is established, modelling historical data is challenging; consequently, a distribution-free approach is a natural choice in Phase-I monitoring. Existing distribution-free Phase-I control charts are suitable for detecting instability in location and scale parameters only, and are often insensitive in complex processes involving skewness or shape parameters. A new Phase-I control chart is proposed to identify more general shifts, including location, scale and skewness, and it is efficient in such situations. The proposed scheme uses subsamples, and the plotting statistic is based on an omnibus multi-sample linear rank statistic for location, scale and skewness shifts. The new scheme can identify subsamples that are not in control, and it can also indicate the process parameter or parameters in which a deviation has occurred. The encouraging performance of the proposed scheme is established through a large-scale Monte Carlo study covering shifts of various natures in a comprehensive class of situations. An illustration based on monitoring waiting-time data from a customer service centre is given, and some concluding remarks and future research problems are offered.
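A minimal sketch of the distribution-free idea, assuming numpy. This is not the authors' omnibus statistic (which also carries a skewness component and exact null standardization); it combines only a Wilcoxon location score and an Ansari-Bradley-type scale score, calibrated empirically by permutation:

```python
import numpy as np

def rank_scores(x, y):
    """Wilcoxon rank-sum score (location) and Ansari-Bradley-type score
    (scale) for sample x against pooled ranks; ties are ignored here."""
    pooled = np.concatenate([x, y])
    ranks = pooled.argsort().argsort() + 1.0        # ranks 1..N
    n = len(pooled)
    w = ranks[:len(x)].sum()                        # location-sensitive
    ab = np.minimum(ranks, n + 1 - ranks)[:len(x)].sum()  # scale-sensitive
    return w, ab

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Standardize each score against its permutation null, sum the squared
    z-scores (Lepage-style), and return a permutation p-value."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    m = len(x)
    w_obs, ab_obs = rank_scores(x, y)
    null = np.array([rank_scores(*np.split(rng.permutation(pooled), [m]))
                     for _ in range(n_perm)])
    w_null, ab_null = null[:, 0], null[:, 1]
    chi_obs = ((w_obs - w_null.mean()) / w_null.std()) ** 2 \
            + ((ab_obs - ab_null.mean()) / ab_null.std()) ** 2
    chi_null = ((w_null - w_null.mean()) / w_null.std()) ** 2 \
             + ((ab_null - ab_null.mean()) / ab_null.std()) ** 2
    return (chi_null >= chi_obs).mean()

rng = np.random.default_rng(1)
in_control = rng.normal(0, 1, 30)
shifted = rng.normal(1.5, 2.0, 30)                  # location + scale shift
p_shift = permutation_pvalue(in_control, shifted)
p_same = permutation_pvalue(in_control, rng.normal(0, 1, 30))
```

In a Phase-I chart, each subsample's statistic would be plotted against a control limit derived from such a reference distribution.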
Affiliation(s)
- Akira Suzuki
- Department of Applied Mathematics, Graduate School of Science, Tokyo University of Science, Tokyo, Japan
- Hidetoshi Murakami
- Department of Applied Mathematics, Tokyo University of Science, Tokyo, Japan
- Amitava Mukherjee
- Production, Operations and Decision Sciences Area, XLRI-Xavier School of Management, XLRI Jamshedpur, Jharkhand, India
14
Hu J, Wang C, Blaser MJ, Li H. Joint modeling of zero-inflated longitudinal proportions and time-to-event data with application to a gut microbiome study. Biometrics 2022; 78:1686-1698. PMID: 34213763; PMCID: PMC8720317; DOI: 10.1111/biom.13515.
Abstract
Recent studies have suggested that the temporal dynamics of the human microbiome may be associated with human health and disease. An increasing number of longitudinal microbiome studies, which record time to disease onset, aim to identify candidate microbes as biomarkers for prognosis. Owing to the extreme skewness and sparsity of microbiome proportion (relative abundance) data, directly applying traditional statistical methods may result in substantial power loss or spurious inferences. We propose a novel joint modeling framework (JointMM) comprising two sub-models: a longitudinal sub-model, zero-inflated scaled-beta generalized linear mixed-effects regression, which depicts the temporal structure of microbial proportions among subjects; and a survival sub-model, which characterizes the occurrence of an event and its relationship with the longitudinal microbiome proportions. JointMM is specifically designed to handle zero-inflated and highly skewed longitudinal microbial proportion data and to examine whether the temporal pattern of microbial presence and/or the nonzero microbial proportions are associated with differences in the time to an event. The longitudinal sub-model also makes it possible to investigate how (time-varying) covariates relate to the temporal microbial presence/absence patterns and/or the changing trend in nonzero proportions. Comprehensive simulations and real data analyses are used to assess the statistical efficiency and interpretability of JointMM.
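To make the zero-inflated shape concrete, here is a minimal sampler for a zero-inflated beta variable (assuming numpy). It is a simplified stand-in for the longitudinal sub-model: the actual JointMM adds random effects, covariate link functions and the scaled-beta support:

```python
import numpy as np

def sample_zib(n, pi0, mu, phi, rng):
    """Zero-inflated beta draw: exactly zero with probability pi0, otherwise
    Beta parameterized by mean mu and precision phi
    (a = mu*phi, b = (1-mu)*phi)."""
    is_zero = rng.random(n) < pi0                    # structural zeros
    values = rng.beta(mu * phi, (1 - mu) * phi, size=n)
    values[is_zero] = 0.0
    return values

rng = np.random.default_rng(42)
# e.g. a taxon absent in ~30% of samples, averaging 5% relative abundance when present
props = sample_zib(10_000, pi0=0.3, mu=0.05, phi=20.0, rng=rng)
zero_fraction = (props == 0).mean()
nonzero_mean = props[props > 0].mean()
```

The two-part structure is what lets the model ask separately whether *presence* and *abundance when present* relate to the time-to-event outcome.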
Affiliation(s)
- Jiyuan Hu
- Division of Biostatistics, Department of Population Health, New York University Grossman School of Medicine, New York, NY 10016, USA
- Chan Wang
- Division of Biostatistics, Department of Population Health, New York University Grossman School of Medicine, New York, NY 10016, USA
- Martin J. Blaser
- Center for Advanced Biotechnology and Medicine, Rutgers University, Piscataway, NJ 08854, USA
- Huilin Li
- Division of Biostatistics, Department of Population Health, New York University Grossman School of Medicine, New York, NY 10016, USA
15
Revuelta J, Ximénez C, Minaya N. Overfactoring in rating scale data: A comparison between factor analysis and item response theory. Front Psychol 2022; 13:982137. PMID: 36533017; PMCID: PMC9750161; DOI: 10.3389/fpsyg.2022.982137.
Abstract
Educational and psychological measurement is typically based on dichotomous variables or rating scales comprising a few ordered categories. When the mean of the observed responses approaches the upper or lower bound of the scale, the distribution of the data becomes skewed and, if a categorical factor model holds in the population, the Pearson correlations between variables are attenuated. The consequence of this attenuation is that the traditional linear factor model yields an excessive number of factors. This article presents the results of a simulation study investigating this overfactoring problem and some solutions. We compare five widely known approaches: (1) the maximum-likelihood factor analysis (FA) model for normal data; (2) the categorical factor analysis (FAC) model based on polychoric correlations and maximum-likelihood (ML) estimation; (3) the FAC model estimated with a weighted least squares algorithm; (4) the Satorra-Bentler mean-corrected chi-square statistic, which handles the lack of normality; and (5) Samejima's graded response model (GRM) from item response theory (IRT). The likelihood-ratio chi-square, parallel analysis (PA), and categorical parallel analysis (CPA) are used as goodness-of-fit criteria to estimate the number of factors in the simulation study. Our results indicate that maximum-likelihood estimation leads to overfactoring in the presence of skewed variables for both the linear and the categorical factor model. The Satorra-Bentler correction and the GRM constitute the most reliable alternatives for estimating the number of factors.
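For readers unfamiliar with the PA criterion used above, a compact sketch of Horn's parallel analysis on Pearson correlations (assuming numpy; the article's CPA variant instead simulates categorical data, which this sketch does not do):

```python
import numpy as np

def parallel_analysis(data, n_sims=200, percentile=95, seed=0):
    """Retain the leading factors whose observed correlation-matrix
    eigenvalues exceed the chosen percentile of eigenvalues obtained from
    random normal data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    ref = np.empty((n_sims, p))
    for s in range(n_sims):
        noise = rng.standard_normal((n, p))
        ref[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    threshold = np.percentile(ref, percentile, axis=0)
    retained = 0
    for eig, thr in zip(obs, threshold):   # count leading eigenvalues above threshold
        if eig > thr:
            retained += 1
        else:
            break
    return retained

# One common factor driving 8 indicators (loadings 0.8) -> PA should retain 1.
rng = np.random.default_rng(7)
factor = rng.standard_normal((500, 1))
data = 0.8 * factor + 0.6 * rng.standard_normal((500, 8))
n_factors = parallel_analysis(data)
```

The overfactoring the authors describe appears when this procedure (or a chi-square test) is fed attenuated Pearson correlations from skewed categorical data rather than polychoric correlations.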
16
Astivia OLO, Edward K. Theoretical considerations when simulating data from the g-and-h family of distributions. Br J Math Stat Psychol 2022; 75:699-727. PMID: 35635727; DOI: 10.1111/bmsp.12274.
Abstract
The g-and-h family of distributions is a computationally efficient, flexible option to model and simulate non-normal data. In spite of its popularity, there are several theoretical aspects of these distributions that need special consideration when they are used. In this paper some of these aspects are explored. In particular, through mathematical analysis it is shown that a popular multivariate generalization of the g-and-h distribution may result in marginal distributions which are no longer g-and-h distributed, that more than one set of (g,h) parameters can correspond to the same values of population skewness and excess kurtosis, and that multivariate generalizations of g-and-h distributions available in the literature are special cases of Gaussian copula distributions. A small-scale simulation is also used to demonstrate how simulation conclusions can change when different (g,h) parameters are used to simulate data, even if they imply the same population values of skewness and excess kurtosis.
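The transform itself is compact: a standard normal z is mapped through X = ((e^{gz} − 1)/g)·e^{hz²/2}, with g controlling skewness and h tail weight. A minimal sketch (assuming numpy; parameter values are arbitrary illustrations, not taken from the paper):

```python
import numpy as np

def g_and_h(z, g, h):
    """Tukey g-and-h transform of standard normal z.  g skews, h thickens
    the tails; g == 0 reduces to the symmetric h-only distribution."""
    if g == 0.0:
        return z * np.exp(h * z**2 / 2.0)
    return (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)

def sample_skew(x):
    """Fisher-Pearson sample skewness."""
    c = x - x.mean()
    return (c**3).mean() / (c**2).mean() ** 1.5

rng = np.random.default_rng(3)
z = rng.standard_normal(200_000)
skew_pos = sample_skew(g_and_h(z, g=0.5, h=0.1))   # right-skewed
skew_sym = sample_skew(g_and_h(z, g=0.0, h=0.1))   # symmetric, heavy-tailed
```

The paper's warning applies at exactly this level: distinct (g, h) pairs can reproduce the same skewness and excess kurtosis yet differ in higher moments, so matching those two targets does not pin down the distribution.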
Affiliation(s)
- Kroc Edward
- Department of Educational and Counselling Psychology and Special Education, University of British Columbia, Vancouver, British Columbia, Canada
17
Helwig NE. Computing the real solutions of Fleishman's equations for simulating non-normal data. Br J Math Stat Psychol 2022; 75:319-333. PMID: 34779511; DOI: 10.1111/bmsp.12259.
Abstract
Fleishman's power method is frequently used to simulate non-normal data with a desired skewness and kurtosis. Fleishman's method requires solving a system of nonlinear equations to find the third-order polynomial weights that transform a standard normal variable into a non-normal variable with the desired moments. Most users of the power method seem unaware that Fleishman's equations have multiple solutions for typical combinations of skewness and kurtosis. Furthermore, researchers have lacked a simple method for exploring the multiple solutions of Fleishman's equations, so most applications consider only a single solution. In this paper, we propose novel methods for finding all real-valued solutions of Fleishman's equations. Additionally, we characterize the solutions in terms of differences in higher-order moments. Our theoretical analysis of the power method reveals that there typically exist two solutions of Fleishman's equations with noteworthy differences in higher-order moments. Using simulated examples, we demonstrate that these differences can have remarkable effects on the shape of the non-normal distribution, as well as on the sampling distributions of statistics calculated from the data. Some considerations for choosing a solution are discussed, and some recommendations for improved reporting standards are provided.
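The system in question sets Y = a + bZ + cZ² + dZ³ (with a = −c for zero mean) and matches unit variance, skewness γ₁ and excess kurtosis γ₂. A sketch of one Newton solve, assuming numpy (this finds *one* root from a given start; the paper's point is precisely that other real roots exist, and its method for enumerating them all is not reproduced here):

```python
import numpy as np

def fleishman_residual(p, skew, exkurt):
    """Fleishman's moment equations for Y = -c + b*Z + c*Z^2 + d*Z^3."""
    b, c, d = p
    f1 = b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1.0                       # variance
    f2 = 2*c*(b**2 + 24*b*d + 105*d**2 + 2.0) - skew                 # skewness
    f3 = 24*(b*d + c**2*(1 + b**2 + 28*b*d)
             + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - exkurt    # excess kurtosis
    return np.array([f1, f2, f3])

def solve_fleishman(skew, exkurt, start=(0.9, 0.08, 0.03), tol=1e-12):
    """Newton's method with a finite-difference Jacobian; the root found
    depends on the starting values."""
    p = np.array(start, dtype=float)
    for _ in range(100):
        f = fleishman_residual(p, skew, exkurt)
        if np.max(np.abs(f)) < tol:
            break
        eps, J = 1e-7, np.empty((3, 3))
        for j in range(3):
            dp = np.zeros(3); dp[j] = eps
            J[:, j] = (fleishman_residual(p + dp, skew, exkurt) - f) / eps
        p = p - np.linalg.solve(J, f)
    return p

b, c, d = solve_fleishman(skew=0.5, exkurt=1.0)   # illustrative feasible targets
residual = np.max(np.abs(fleishman_residual((b, c, d), 0.5, 1.0)))
```

Restarting the solver from different points is the naive way to stumble on the alternative roots whose higher-order moments, per the paper, can differ substantially.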
Affiliation(s)
- Nathaniel E Helwig
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, USA
- School of Statistics, University of Minnesota, Minneapolis, Minnesota, USA
18
Iijima K, Yokota H, Yamaguchi T, Nakano M, Ouchi T, Maki F, Takasaki M, Shimizu Y, Hori H, Iwamuro H, Sasanuma J, Watanabe K, Uno T. Predictors of thermal increase in magnetic resonance-guided focused ultrasound treatment for essential tremor: histogram analysis of skull density ratio values for 1024 elements. J Neurosurg 2022; 136:1381-1386. PMID: 34653973; DOI: 10.3171/2021.5.jns21669.
Abstract
OBJECTIVE: A thermal increase sufficient to produce thermocoagulation is indispensable for an effective clinical outcome in patients undergoing magnetic resonance-guided focused ultrasound (MRgFUS). The skull density ratio (SDR) is one of the most dominant pretreatment predictors of thermal increase. However, users currently rely only on the average SDR value (SDRmean) as a screening criterion, even though some patients with low SDRmean values can achieve a sufficient thermal increase. The present study examined the numerical distribution of SDR values across the 1024 transducer elements to identify more precise predictors of thermal increase during MRgFUS. METHODS: The authors retrospectively analyzed the correlations between skull parameters and the maximum temperature achieved during unilateral ventral intermediate nucleus thalamotomy with MRgFUS in a cohort of 55 patients. The distribution of SDR values across the 1024 elements was quantified using the skewness, kurtosis, entropy, and uniformity of the SDR histogram. The authors then evaluated the association between these indices and a peak temperature > 55°C using univariate and multivariate logistic regression, compared the predictive ability of the indices by receiver operating characteristic (ROC) curve analysis, and assessed the diagnostic performance of significant factors. RESULTS: SDR skewness (SDRskewness) was a significant predictor of thermal increase in the univariate and multivariate logistic regression analyses (p < 0.001, p = 0.013). Moreover, ROC curve analysis indicated that SDRskewness has better predictive ability than SDRmean, with areas under the curve of 0.847 and 0.784, respectively. CONCLUSIONS: SDRskewness is a more accurate predictor of thermal increase than the conventional SDRmean. The authors suggest setting the SDRskewness cutoff value to 0.68. SDRskewness may allow the inclusion of treatable patients with essential tremor who would have been screened out by the SDRmean exclusion criterion.
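A sketch of the histogram-skewness screening idea, assuming numpy. Only the 0.68 skewness cutoff comes from the abstract; the gamma-distributed `sdr` values are synthetic stand-ins for the 1024 per-element ratios, and the 0.45 SDRmean cutoff is an assumed illustrative screening value, not taken from this article:

```python
import numpy as np

def sdr_screening(sdr_values, mean_cutoff=0.45, skew_cutoff=0.68):
    """Compare an SDRmean criterion with the SDRskewness criterion
    (Fisher-Pearson skewness of the per-element SDR histogram)."""
    sdr = np.asarray(sdr_values, dtype=float)
    mean = sdr.mean()
    c = sdr - mean
    skewness = (c**3).mean() / (c**2).mean() ** 1.5
    return {"SDRmean": mean,
            "SDRskewness": skewness,
            "pass_mean": mean >= mean_cutoff,
            "pass_skew": skewness >= skew_cutoff}

# A right-skewed element distribution can satisfy the skewness criterion
# even when its mean is low -- the kind of patient the mean cutoff excludes.
rng = np.random.default_rng(5)
sdr = np.clip(rng.gamma(shape=2.0, scale=0.15, size=1024), 0, 1)
report = sdr_screening(sdr)
```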
Affiliation(s)
- Ken Iijima
- Department of Diagnostic Radiology, Shin-Yurigaoka General Hospital, Kawasaki, Kanagawa
- Hajime Yokota
- Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, Chiba
- Toshio Yamaguchi
- Research Institute for Diagnostic Radiology, Shin-Yurigaoka General Hospital, Kawasaki, Kanagawa
- Masayuki Nakano
- Department of Neurosurgery, Shin-Yurigaoka General Hospital, Kawasaki, Kanagawa
- Takahiro Ouchi
- Department of Neurology, Shin-Yurigaoka General Hospital, Kawasaki, Kanagawa
- Futaba Maki
- Department of Neurology, Shin-Yurigaoka General Hospital, Kawasaki, Kanagawa
- Masahito Takasaki
- Department of Anesthesiology, Shin-Yurigaoka General Hospital, Kawasaki, Kanagawa
- Yasuhiro Shimizu
- Department of Diagnostic Radiology, Shin-Yurigaoka General Hospital, Kawasaki, Kanagawa
- Hiroki Hori
- Department of Diagnostic Radiology, Shin-Yurigaoka General Hospital, Kawasaki, Kanagawa
- Jinichi Sasanuma
- Department of Neurosurgery, Shin-Yurigaoka General Hospital, Kawasaki, Kanagawa
- Kazuo Watanabe
- Department of Neurosurgery, Shin-Yurigaoka General Hospital, Kawasaki, Kanagawa
- Takashi Uno
- Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, Chiba
19
Verheyen S, White A, Storms G. A Comparison of the Spatial Arrangement Method and the Total-Set Pairwise Rating Method for Obtaining Similarity Data in the Conceptual Domain. Multivariate Behav Res 2022; 57:356-384. PMID: 33327792; DOI: 10.1080/00273171.2020.1857216.
Abstract
We compare two methods for obtaining similarity data in the conceptual domain. In the Spatial Arrangement Method (SpAM), participants organize stimuli on a computer screen so that the distance between stimuli represents their perceived dissimilarity. In the Total-Set Pairwise Rating Method (PRaM), participants rate the (dis)similarity of all pairs of stimuli on a Likert scale. In each of three studies, we had participants indicate the similarity of four sets of conceptual stimuli with either PRaM or SpAM. Studies 1 and 2 confirm two caveats that have been raised for SpAM: (i) while SpAM takes significantly less time to complete than PRaM, it yields less reliable data; and (ii) because of the spatial manner in which similarity is measured in SpAM, the method is biased against feature representations. Despite these differences, averaging SpAM and PRaM dissimilarity data across participants yields comparable aggregate data. Study 3 shows that by having participants judge only half of the pairs in PRaM, its duration can be significantly reduced without affecting the dissimilarity distribution, but at the cost of lower reliability. Having participants arrange multiple subsets of the stimuli does not do away with the spatial bias of SpAM.
20
Yedutenko M, Howlett MHC, Kamermans M. Enhancing the dark side: asymmetric gain of cone photoreceptors underpins their discrimination of visual scenes based on skewness. J Physiol 2021; 600:123-142. PMID: 34783026; PMCID: PMC9300210; DOI: 10.1113/jp282152.
Abstract
Psychophysical data indicate that humans can discriminate visual scenes based on their skewness, i.e. the ratio of dark and bright patches within a visual scene. It has also been shown that, at a phenomenological level, this skew discrimination is described by the so-called blackshot mechanism, which accentuates strong negative contrasts within a scene. Here, we present a set of observations suggesting that the underlying computation might start as early as the cone phototransduction cascade, whose gain is higher for strong negative contrasts than for strong positive contrasts. We recorded from goldfish cone photoreceptors and found that the asymmetry in the phototransduction gain leads to responses with larger amplitudes when using negatively rather than positively skewed light stimuli. This asymmetry in amplitude was present in the cone photocurrent, voltage response and synaptic output. Given that the properties of the phototransduction cascade are universal across vertebrates, it is possible that the mechanism shown here gives rise to a general ability to discriminate between scenes based only on their skewness, which psychophysical studies have shown humans can do. Thus, our data suggest that the nonlinearity of the early photoreceptor is important for perception. Additionally, we found that stimulus skewness leads to a subtle change in photoreceptor kinetics: for negatively skewed stimuli, the impulse response functions of the cone peak later than for positively skewed stimuli, although stimulus skewness does not affect the overall integration time of the cone. KEY POINTS: Humans can discriminate visual scenes based on skewness, i.e. the relative prevalence of bright and dark patches within a scene. Here, we show that negatively skewed time-series stimuli induce larger responses in goldfish cone photoreceptors than comparable positively skewed stimuli. This response asymmetry originates within the phototransduction cascade, where gain is higher for strong negative contrasts (dark patches) than for strong positive contrasts (bright patches). Contrary to the implicit assumption often contained within models of downstream visual neurons, our data show that cone photoreceptors do not simply relay linearly filtered versions of visual stimuli to downstream circuitry; they also emphasize specific stimulus features. Given that the phototransduction cascade properties among vertebrate retinas are mostly universal, our data imply that the skew discrimination by human subjects reported in psychophysical studies might stem from the asymmetric gain function of the phototransduction cascade.
Affiliation(s)
- Matthew Yedutenko
- Retinal Signal Processing Laboratory, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
- Marcus H C Howlett
- Retinal Signal Processing Laboratory, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
- Maarten Kamermans
- Retinal Signal Processing Laboratory, Netherlands Institute for Neuroscience, Amsterdam, The Netherlands
- Department of Biomedical Physics and Biomedical Optics, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, The Netherlands
21
Goliński A, Spencer P. Modeling the Covid-19 epidemic using time series econometrics. Health Econ 2021; 30:2808-2828. PMID: 34428329; PMCID: PMC8646920; DOI: 10.1002/hec.4413.
Abstract
The classic "logistic" model provided a realistic description of the behaviour of Covid-19 in China and many East Asian countries: once these countries passed the peak, the daily case count fell back, mirroring its initial climb in a symmetric way, just as the classic model predicts. In Italy, Spain and most other Western countries, however, the first wave of the epidemic was very different: the daily count fell back gradually from the peak but remained stubbornly high. The reason for the divergence from the classical model remains unclear. We take an empirical stance on this issue and develop a model framework based upon the statistical characteristics of the time series. With the possible exception of China, the workhorse logistic model is decisively rejected against more flexible alternatives.
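The symmetry the authors test against is easy to see numerically: under the logistic model C(t) = K / (1 + e^{−r(t−t₀)}), daily counts rise and fall as mirror images of each other around the peak. A minimal sketch with arbitrary illustrative parameters (assuming numpy):

```python
import numpy as np

# Cumulative cases under the classic logistic model and the implied daily
# increments.  K is the final epidemic size, r the growth rate, t0 the peak.
K, r, t0 = 10_000.0, 0.3, 50
t = np.arange(0, 101)
C = K / (1.0 + np.exp(-r * (t - t0)))
daily = np.diff(C)               # daily[i] covers the interval [t_i, t_{i+1}]

# Mirror-image check: the increment over [t0+j, t0+j+1] equals the
# increment over the reflected interval [t0-j-1, t0-j].
symmetric = all(np.isclose(daily[t0 + j], daily[t0 - 1 - j]) for j in range(20))
```

The Western first-wave data described in the abstract violate exactly this property: the descent from the peak is much slower than the climb, which is why asymmetric, more flexible time-series specifications fit better.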
Affiliation(s)
- Adam Goliński
- Department of Economics and Related Studies, University of York, York, UK
- Peter Spencer
- Department of Economics and Related Studies, University of York, York, UK
22
Cheng X, Stasiewicz MJ. Evaluation of the Impact of Skewness, Clustering, and Probe Sampling Plan on Aflatoxin Detection in Corn. Risk Anal 2021; 41:2065-2080. PMID: 33733507; PMCID: PMC9290973; DOI: 10.1111/risa.13721.
Abstract
Probe sampling plans for aflatoxin in corn attempt to reliably estimate concentrations in bulk corn despite complications such as skewed contamination distributions and hotspots. To evaluate and improve sampling plans, three sampling strategies (simple random sampling, stratified random sampling, and systematic sampling with U.S. GIPSA sampling schemes), three numbers of probes (5, 10, and 100, the last a proxy for autosampling), four clustering levels (1, 10, 100, and 1,000 kernels/cluster source), and six aflatoxin concentrations (5, 10, 20, 40, 80, and 100 ppb) were assessed by Monte Carlo simulation. The aflatoxin distribution was approximated by PERT and gamma distributions fitted to experimental single-kernel data for uncontaminated and naturally contaminated kernels. The model was validated against published data from repeated sampling of 18 grain lots contaminated with 5.8-680 ppb aflatoxin; all empirical acceptance probabilities fell within the range of simulated acceptance probabilities. Sensitivity analysis with partial rank correlation coefficients found acceptance probability to be more sensitive to aflatoxin concentration (-0.87) and clustering level (0.28) than to the number of probes (-0.09) or sampling strategy (0.04). Comparison of operating characteristic curves indicates that all sampling strategies have similar average performance at the 20 ppb threshold (0.8-3.5% absolute marginal change), but systematic sampling shows larger variability at clustering levels above 100. Taking extra probes improves detection (1.8% increase in absolute marginal change) when aflatoxin is spatially clustered at 1,000 kernels/cluster, but not when contaminated grains are homogeneously distributed. Therefore, taking many small samples, as in autosampling, may increase sampling plan reliability. The simulation is provided as an R Shiny web app for stakeholders to use in evaluating grain sampling plans.
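A crude Monte Carlo stand-in for this kind of acceptance-probability calculation (assuming numpy). The distributional choices here — a fixed 1% contamination prevalence and a gamma for contaminated-kernel concentration — are simplifying assumptions for illustration; the article's model uses PERT/gamma fits to single-kernel data plus explicit spatial clustering, which this sketch omits:

```python
import numpy as np

def acceptance_probability(lot_mean_ppb, n_probes, kernels_per_probe=100,
                           threshold=20.0, prevalence=0.01,
                           n_trials=2000, seed=0):
    """Estimate P(sample mean <= threshold) for a lot whose contaminated
    kernels follow a right-skewed gamma, scaled so the lot-wide mean
    equals lot_mean_ppb."""
    rng = np.random.default_rng(seed)
    contaminated_mean = lot_mean_ppb / prevalence   # mean of a contaminated kernel
    n_kernels = n_probes * kernels_per_probe
    accepted = 0
    for _ in range(n_trials):
        contaminated = rng.random(n_kernels) < prevalence
        conc = np.where(contaminated,
                        rng.gamma(shape=0.5, scale=contaminated_mean / 0.5,
                                  size=n_kernels),
                        0.0)
        if conc.mean() <= threshold:
            accepted += 1
    return accepted / n_trials

p_clean = acceptance_probability(lot_mean_ppb=5.0, n_probes=10)   # below 20 ppb
p_hot = acceptance_probability(lot_mean_ppb=100.0, n_probes=10)   # well above
```

Sweeping `n_probes` and a clustering parameter over such a simulation is, in outline, how operating characteristic curves like the article's are produced.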
Affiliation(s)
- Xianbin Cheng
- University of Illinois at Urbana-Champaign, Urbana, IL, USA
23
Estupiñan-López F, Gaona-Tiburcio C, Jáquez-Muñoz J, Zambrano-Robledo P, Maldonado-Bandala E, Cabral-Miramontes J, Nieves-Mendoza D, D Delgado A, Flores-De Los Rios JP, Almeraya-Calderón F. A Comparative Study of Corrosion AA6061 and AlSi10Mg Alloys Produced by Extruded and Additive Manufacturing. Materials (Basel) 2021; 14:5793. PMID: 34640190; DOI: 10.3390/ma14195793.
Abstract
The aim of this work was to evaluate the corrosion behavior of the AA6061 and AlSi10Mg alloys produced by extrusion and by additive manufacturing (selective laser melting, SLM). The alloys were immersed in two electrolytes, H2O and 3.5 wt.% NaCl solution, at room temperature, and their corrosion behavior was studied by the electrochemical noise (EN) technique. The EN signals were filtered by three different methods, and statistical analysis was employed to obtain the noise resistance (Rn), the localization index (LI), kurtosis, skewness, and the power spectral density (PSD). The wavelet energy dispersion plot (EDP) method was employed to determine the type of corrosion, and the Hilbert-Huang transform (HHT) was applied to analyze the Hilbert spectra. The results indicated that the amplitude of the transients in the potential and current time series is greater in the AlSi10Mg alloy produced by additive manufacturing, and that the amplitude of the transients decreases in both alloys (AA6061 and AlSi10Mg) as time increases.
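Two of the statistics named above have simple standard definitions: the noise resistance Rn = σ_V/σ_I and the localization index LI = σ_I/I_rms (bounded in [0, 1], with values near 1 conventionally read as localized attack). A minimal sketch on synthetic records, assuming numpy; trend removal and the wavelet/HHT analyses from the article are out of scope here:

```python
import numpy as np

def en_statistics(potential, current):
    """Noise resistance Rn = sigma_V / sigma_I and localization index
    LI = sigma_I / I_rms for an electrochemical-noise record."""
    sigma_v = potential.std()
    sigma_i = current.std()
    i_rms = np.sqrt(np.mean(current**2))
    return sigma_v / sigma_i, sigma_i / i_rms

# Synthetic stand-in records: mV-scale potential noise and a small
# current with a DC offset (values are illustrative, not measured data).
rng = np.random.default_rng(9)
potential = 1e-3 * rng.standard_normal(4096)        # volts
current = 2e-6 + 5e-7 * rng.standard_normal(4096)   # amperes
Rn, LI = en_statistics(potential, current)
```

Skewness and kurtosis of the same detrended records, plus the PSD slope, are then used to classify the corrosion mechanism, as in the article.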
24
Villegas Rivas D, Valbuena Torres N, Milla Pino M, Vásquez MG, Velásquez Casana Y, Delgado Bazan E, Osorio Carrera C, Shimabuku Ysa R, Chávez Santos R, Calderón LR, Chamiquit CJ. Using an Algorithm to Fit a GAMLSS Model on Dry Matter Data from Brachiaria brizantha. Pak J Biol Sci 2021; 24:468-476. PMID: 34486306; DOI: 10.3923/pjbs.2021.468.476.
Abstract
Background and Objective: Forage production in the tropics is generally asymmetrically distributed, hence the need for more complex models, especially when multiple comparisons are made and deviations from normality are very large. The objective of this research was to fit a generalized additive model for location, scale and shape (GAMLSS) to accumulated dry matter data from Brachiaria brizantha using a model-selection algorithm. Materials and Methods: A Box-Cox power exponential (BCPE) distribution was fitted to the B. brizantha dry matter data using the GAMLSS implementation in R. The accumulated dry matter data were obtained from a study carried out on a farm in the state of Portuguesa, Venezuela; the explanatory covariate was the interval between cuts (21, 28, 35 and 42 days). Results: The dependent variable (dry matter) exhibited both skewness and kurtosis. GAMLSS allowed flexible modeling of both the distribution of dry matter yield from B. brizantha and the dependence of all distribution parameters on the interval between cuttings. For the dry matter yield, which exhibited skewness and leptokurtosis, the BCPE distribution provided the best fit. Conclusion: The interval between cuttings affected the average dry matter yield of B. brizantha, as well as the skewness and kurtosis of its distribution.
25
Wang H, Li Q, Yang S, Liu Y. Fault Recognition of Rolling Bearings Based on Parameter Optimized Multi-Scale Permutation Entropy and Gath-Geva. Entropy (Basel) 2021; 23:1040. PMID: 34441180; PMCID: PMC8394354; DOI: 10.3390/e23081040.
Abstract
To extract fault features of rolling-bearing vibration signals precisely, a fault diagnosis method based on parameter-optimized multi-scale permutation entropy (MPE) and Gath-Geva (GG) clustering is proposed. The method selects the important parameters of the MPE method adaptively, overcoming the disadvantages of fixed MPE parameters and greatly improving the accuracy of fault identification. First, to address parameter determination while comprehensively considering the interaction among the MPE parameters, the time-series length and embedding dimension were each optimized by a particle swarm optimization (PSO) algorithm, with the skewness of the MPE as the fitness function. The fault features of the rolling bearing were then extracted by the parameter-optimized MPE, and the standard clustering centers were obtained with GG clustering. Finally, the samples were clustered using the Euclidean nearness degree to obtain the recognition rate. The validity of the parameter optimization is demonstrated by calculating the partition coefficient and the average fuzzy entropy. Compared with unoptimized MPE, the proposed method achieves a higher fault recognition rate.
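For reference, the MPE feature itself has a compact definition: permutation entropy is the Shannon entropy of the ordinal-pattern distribution at embedding dimension m, and the multi-scale version applies it to coarse-grained (window-averaged) copies of the series. A sketch assuming numpy, with m and the series length fixed rather than PSO-tuned as in the article:

```python
import numpy as np
from math import factorial, log

def permutation_entropy(x, m=3, delay=1):
    """Normalized permutation entropy in [0, 1]: Shannon entropy of the
    ordinal patterns of length m, divided by log(m!)."""
    n = len(x) - (m - 1) * delay
    patterns = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + (m - 1) * delay + 1:delay]))
        patterns[pattern] = patterns.get(pattern, 0) + 1
    probs = np.array(list(patterns.values())) / n
    return float(-np.sum(probs * np.log(probs)) / log(factorial(m)))

def multiscale_pe(x, m=3, delay=1, max_scale=5):
    """Coarse-grain by non-overlapping window means at each scale (the
    usual MPE construction), then compute PE per scale."""
    out = []
    for s in range(1, max_scale + 1):
        trimmed = x[:len(x) // s * s].reshape(-1, s).mean(axis=1)
        out.append(permutation_entropy(trimmed, m=m, delay=delay))
    return out

rng = np.random.default_rng(11)
pe_noise = permutation_entropy(rng.standard_normal(5000))  # near 1: unpredictable
pe_trend = permutation_entropy(np.arange(5000.0))          # 0: single ordinal pattern
mpe = multiscale_pe(rng.standard_normal(5000))             # one PE value per scale
```

The MPE vector over scales is the feature that GG clustering then groups into fault classes.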
Affiliation(s)
- Haiming Wang
- School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing 010044, China
- Qiang Li
- School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing 010044, China
- Shaopu Yang
- State Key Laboratory of Mechanical Behavior and System Safety of Traffic Engineering Structures, Shijiazhuang Tiedao University, Shijiazhuang 050043, China
- Yongqiang Liu
- State Key Laboratory of Mechanical Behavior and System Safety of Traffic Engineering Structures, Shijiazhuang Tiedao University, Shijiazhuang 050043, China
26
Molinari CA, Bresson P, Palacin F, Billat V. Pace Controlled by a Steady-State Physiological Variable Is Associated with Better Performance in a 3000 M Run. Int J Environ Res Public Health 2021; 18:7886. PMID: 34360178; PMCID: PMC8345513; DOI: 10.3390/ijerph18157886.
Abstract
This paper tests the hypothesis that a freely chosen running pace is less effective than a pace controlled by a steady-state physiological variable. Methods: Eight runners performed four maximum-effort 3000 m time trials on a running track. The first time trial (TT1) was freely paced. In the following 3000 m time trials, the pace was controlled so that the average speed (TT2), average V˙O2 (TT3) or average heart rate (HR; TT4) recorded in TT1 was maintained throughout the time trial. Results: Physiologically controlled pace was associated with a faster time (mean ± standard deviation: 740 ± 34 s for TT3 and 748 ± 33 s for TT4, vs. 854 ± 53 s for TT1; p < 0.01), a lower oxygen cost of running (200 ± 5 and 220 ± 3 vs. 310 ± 5 mLO2·kg-1·km-1, respectively; p < 0.02), a lower cardiac cost (0.69 ± 0.08 and 0.69 ± 0.04 vs. 0.86 ± 0.09 beat·m-1, respectively; p < 0.01), and a more positively skewed speed distribution (skewness: 1.7 ± 0.9 and 1.3 ± 0.6 vs. 0.2 ± 0.4, p < 0.05). Conclusion: A physiologically controlled pace (at the average V˙O2 or HR recorded in a freely paced run) was associated with a faster time, a more favorable speed distribution and lower physiological strain than a freely chosen pace. This finding suggests that non-elite runners do not spontaneously choose the best pacing strategy.
Collapse
Affiliation(s)
- Claire A. Molinari
- Unité de Biologie Intégrative des Adaptations à l’Exercice, Université Paris-Saclay, Univ Evry, 91000 Evry-Courcouronnes, France;
- BillaTraining SAS, 32 Rue Paul Vaillant-Couturier, 94140 Alfortville, France
| | - Pierre Bresson
- BillaTraining SAS, 32 Rue Paul Vaillant-Couturier, 94140 Alfortville, France
| | - Florent Palacin
- BillaTraining SAS, 32 Rue Paul Vaillant-Couturier, 94140 Alfortville, France
| | - Véronique Billat
- Unité de Biologie Intégrative des Adaptations à l’Exercice, Université Paris-Saclay, Univ Evry, 91000 Evry-Courcouronnes, France;
| |
Collapse
|
27
|
Iftimie S, Răduţă AM, Dragoman D. Characterization of Monochromatic Aberrated Metalenses in Terms of Intensity-Based Moments. Nanomaterials (Basel) 2021; 11:1805. [PMID: 34361191 DOI: 10.3390/nano11071805] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 07/08/2021] [Accepted: 07/09/2021] [Indexed: 12/12/2022]
Abstract
Consistent with wave-optics simulations of metasurfaces, aberrations of metalenses should also be described in terms of wave optics rather than ray tracing. In this respect, we have shown, through extensive numerical simulations, that intensity-based moments and the parameters defined in terms of them (average position, spatial extent, skewness and kurtosis) adequately capture the changes in beam shape induced by aberrations of a metalens with a hyperbolic phase profile. We have studied axial illumination, in which phase-discretization-induced aberrations exist, as well as non-axial illumination, in which coma can also appear. Our results allow identification of the parameters most sensitive to changes in beam shape for metalenses that impart on an incident electromagnetic field a step-like approximation of an ideal phase profile.
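The intensity-based moments the abstract names can be sketched for a 1-D beam profile as follows; the Gaussian "ideal" spot and the crude coma-like asymmetry are invented test profiles, not the paper's simulated metalens fields.

```python
import numpy as np

def intensity_moments(x, I):
    """Intensity-weighted centroid, RMS width, skewness and kurtosis of a
    1-D beam profile I(x), treating I as an unnormalized density on grid x."""
    w = I / I.sum()                       # normalized weights
    mean = np.sum(x * w)                  # average position
    var = np.sum((x - mean) ** 2 * w)     # squared spatial extent
    sigma = np.sqrt(var)
    skew = np.sum((x - mean) ** 3 * w) / sigma ** 3
    kurt = np.sum((x - mean) ** 4 * w) / sigma ** 4
    return mean, sigma, skew, kurt

x = np.linspace(-5, 5, 2001)
ideal = np.exp(-x ** 2 / 2)                                      # symmetric spot
comatic = np.clip(np.exp(-x ** 2 / 2) * (1 + 0.3 * x), 0, None)  # asymmetric spot
m_ideal = intensity_moments(x, ideal)
m_coma = intensity_moments(x, comatic)
print(m_ideal[2], m_coma[2])   # skewness of each profile
```

The symmetric profile gives near-zero skewness while the asymmetric one shifts the centroid and produces nonzero skewness, which is the sense in which these moments "capture changes in beam shapes".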
Collapse
|
28
|
Delis MD, Savva CS, Theodossiou P. The impact of the coronavirus crisis on the market price of risk. Journal of Financial Stability 2021; 53:100840. [DOI: 10.1016/j.jfs.2020.100840] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
We study an equilibrium risk-return model to explore the effects of the coronavirus crisis and the associated skewness on the market price of risk. We derive the moment and equilibrium equations, specifying the skewness price of risk as an additive component of the effect of variance on expected return. We estimate our model using the flexible skewed generalized error distribution, for which we derive the distribution of returns and the likelihood function. Using S&P 500 Index returns from January 1980 to mid-October 2020, our results show that the coronavirus crisis generated a deeply negative reaction in the skewness and total market price of risk, even more negative than during the subprime and October 1987 crises.
Collapse
|
29
|
Webb ALM. Reversing the Luminance Polarity of Control Faces: Why Are Some Negative Faces Harder to Recognize, but Easier to See? Front Psychol 2021; 11:609045. [PMID: 33551920 PMCID: PMC7858267 DOI: 10.3389/fpsyg.2020.609045] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2020] [Accepted: 12/15/2020] [Indexed: 11/13/2022] Open
Abstract
Control stimuli are key for understanding the extent to which face processing relies on holistic processing, and affective evaluation versus the encoding of low-level image properties. Luminance polarity (LP) reversal combined with face inversion is a popular tool for severely disrupting the recognition of face controls. However, recent findings demonstrate visibility-recognition trade-offs for LP-reversed faces, where these face controls sometimes appear more salient despite being harder to recognize. The present report brings together findings from image analysis, simple stimuli, and behavioral data for facial recognition and visibility, in an attempt to disentangle instances where LP-reversed control faces are associated with a performance bias in terms of their perceived salience. These findings have important implications for studies of subjective face appearance, and highlight that future research must be aware of behavioral artifacts due to the possibility of trade-off effects.
Collapse
Affiliation(s)
- Abigail L M Webb
- Department of Psychology, University of Essex, Colchester, United Kingdom
| |
Collapse
|
30
|
Abstract
We present the development of an approach to thermodynamics based on measurement. First, we recall that viewing classical thermodynamics as a theory of measurement of extensive variables yields a description of thermodynamic states as Legendrian or Lagrangian manifolds representing the averages of measurable quantities and extremal measures. Second, the variance of random vectors induces Riemannian structures on the corresponding manifolds. Computing higher-order central moments, one arrives at the corresponding higher-order structures, namely the cubic and fourth-order forms. The cubic form is responsible for the skewness of the extremal distribution; the condition that it vanish defines so-called symmetric processes. The positivity of the fourth-order structure imposes an additional requirement on the thermodynamic state.
Collapse
Affiliation(s)
- Valentin Lychagin
- V.A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, 65 Profsoyuznaya Str., 117997 Moscow, Russia;
| | - Mikhail Roop
- V.A. Trapeznikov Institute of Control Sciences of Russian Academy of Sciences, 65 Profsoyuznaya Str., 117997 Moscow, Russia;
- Faculty of Physics, Lomonosov Moscow State University, Leninskie Gory, 119991 Moscow, Russia
| |
Collapse
|
31
|
Chen T, Wang R. Inference for variance components in linear mixed-effect models with flexible random effect and error distributions. Stat Methods Med Res 2020; 29:3586-3604. [PMID: 32669048 DOI: 10.1177/0962280220933909] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
In many biomedical investigations, parameters of interest, such as the intraclass correlation coefficient, are functions of higher-order moments reflecting finer distributional characteristics. One popular method to make inference for such parameters is through postulating a parametric random effects model. We relax the standard normality assumptions for both the random effects and errors through the use of the Fleishman distribution, a flexible four-parameter distribution which accounts for the third and fourth cumulants. We propose a Fleishman bootstrap method to construct confidence intervals for correlated data and develop a normality test for the random effect and error distributions. Recognizing that the intraclass correlation coefficient may be heavily influenced by a few extreme observations, we propose a modified, quantile-normalized intraclass correlation coefficient. We evaluate our methods in simulation studies and apply these methods to the Childhood Adenotonsillectomy Trial sleep electroencephalogram data in quantifying wave-frequency correlation among different channels.
Collapse
Affiliation(s)
- Tom Chen
- Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA
| | - Rui Wang
- Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA.,Department of Biostatistics, Harvard T. H. Chan School of Public Health, Boston, MA, USA
| |
Collapse
|
32
|
Bastos FDS, Barreto-Souza W. Birnbaum-Saunders sample selection model. J Appl Stat 2020; 48:1896-1916. [PMID: 35706436 DOI: 10.1080/02664763.2020.1780570] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
The sample selection bias problem occurs when the outcome of interest is only observed according to some selection rule, with a dependence structure between the outcome and the selection rule. In a pioneering work, J. Heckman proposed a sample selection model based on a bivariate normal distribution for dealing with this problem. Due to the non-robustness of the normal distribution, many alternatives have been introduced in the literature by assuming extensions of the normal distribution, such as the Student-t and skew-normal models. One common limitation of existing sample selection models is that they require a transformation of the outcome of interest, which is commonly R+-valued (e.g., income and wage). Consequently, data are analyzed on a non-original scale, which complicates the interpretation of the parameters. In this paper, we propose a sample selection model based on the bivariate Birnbaum-Saunders distribution, which has the same number of parameters as the classical Heckman model; further, our associated outcome equation is R+-valued. We discuss estimation by maximum likelihood and present Monte Carlo simulation studies. An empirical application to the ambulatory expenditures data from the 2001 Medical Expenditure Panel Survey is presented.
Collapse
Affiliation(s)
- Fernando de Souza Bastos
- Instituto de Ciências Exatas e Tecnológicas, Universidade Federal de Viçosa - Campus UFV - Florestal, Florestal, Brazil.,Departamento de Estatística, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
| | - Wagner Barreto-Souza
- Departamento de Estatística, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil.,Statistics Program, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
| |
Collapse
|
33
|
Wirtshafter HS, Wilson MA. Differences in reward biased spatial representations in the lateral septum and hippocampus. eLife 2020; 9:e55252. [PMID: 32452763 PMCID: PMC7274787 DOI: 10.7554/elife.55252] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Accepted: 05/24/2020] [Indexed: 11/13/2022] Open
Abstract
The lateral septum (LS), which is innervated by the hippocampus, is known to represent spatial information. However, the details of place representation in the LS, and whether this place information is combined with reward signaling, remain unknown. We simultaneously recorded from CA1 and the caudodorsal lateral septum in rats during a rewarded navigation task and compared spatial firing in the two areas. While LS place cells are less numerous than hippocampal place cells, the two areas are similar in field size and number of fields per cell, but LS field shapes and center distributions are more skewed toward reward. Spike cross-correlations between the hippocampus and LS are greatest for cells with reward-proximate place fields, suggesting a role for the LS in relaying task-relevant hippocampal spatial information to downstream areas such as the VTA.
Collapse
Affiliation(s)
- Hannah S Wirtshafter
- Department of Biology, Massachusetts Institute of Technology, Cambridge, United States.,Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, United States
| | - Matthew A Wilson
- Department of Biology, Massachusetts Institute of Technology, Cambridge, United States.,Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, United States.,Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
| |
Collapse
|
34
|
Abstract
Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects against this way of using the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. Using computer simulations, we demonstrate that under certain conditions the number of citations an article has received is a more accurate indicator of the value of the article than the impact factor. However, under other conditions, the impact factor is a more accurate indicator. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.
Collapse
Affiliation(s)
- Ludo Waltman
- Centre for Science and Technology Studies, Leiden University, Leiden, The Netherlands
| | - Vincent A. Traag
- Centre for Science and Technology Studies, Leiden University, Leiden, The Netherlands
| |
Collapse
|
35
|
Abstract
Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects against this way of using the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. In fact, our computer simulations demonstrate the possibility that the impact factor is a more accurate indicator of the value of an article than the number of citations the article has received. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.
Collapse
Affiliation(s)
- Ludo Waltman
- Centre for Science and Technology Studies, Leiden University, Leiden, The Netherlands
| | - Vincent A. Traag
- Centre for Science and Technology Studies, Leiden University, Leiden, The Netherlands
| |
Collapse
|
36
|
Smits N, Öğreden O, Garnier-Villarreal M, Terwee CB, Chalmers RP. A study of alternative approaches to non-normal latent trait distributions in item response theory models used for health outcome measurement. Stat Methods Med Res 2020; 29:1030-1048. [PMID: 32156195 PMCID: PMC7221458 DOI: 10.1177/0962280220907625] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
Abstract
It is often unrealistic to assume normally distributed latent traits in the measurement of health outcomes. If normality is violated, the item response theory (IRT) models that are used to calibrate questionnaires may yield biased parameter estimates. Recently, IRT models were developed for dealing with specific deviations from normality, such as zero-inflation (“excess zeros”) and skewness. However, these models have not yet been evaluated under conditions representative of item bank development for health outcomes, characterized by a large number of polytomous items. A simulation study was performed to compare the bias in parameter estimates of the graded response model (GRM) and polytomous extensions of the zero-inflated mixture IRT model (ZIM-GRM) and Davidian Curve IRT (DC-GRM). In the case of zero-inflation, the GRM showed high bias, overestimating discrimination parameters and yielding threshold estimates that were too high and too close to one another, while the ZIM-GRM showed no bias. In the case of skewness, the GRM and DC-GRM showed little bias, with the GRM showing slightly better results. Consequences for the development of health outcome measures are discussed.
Collapse
Affiliation(s)
- Niels Smits
- Research Institute of Child Development and Education, University of Amsterdam, Amsterdam, the Netherlands
| | - Oğuzhan Öğreden
- Department of Epidemiology and Biostatistics, VU University Amsterdam, Amsterdam, the Netherlands
| | | | - Caroline B Terwee
- Department of Epidemiology and Biostatistics, VU University Amsterdam, Amsterdam, the Netherlands
| | - R Philip Chalmers
- Quantitative Methods, Faculty of Psychology, York University, Toronto, Canada
| |
Collapse
|
37
|
Szarejko D, Kamiński R, Łaski P, Jarzembska KN. Seed- skewness algorithm for X-ray diffraction signal detection in time-resolved synchrotron Laue photocrystallography. J Synchrotron Radiat 2020; 27:405-413. [PMID: 32153279 PMCID: PMC7064106 DOI: 10.1107/s1600577520000077] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Accepted: 01/06/2020] [Indexed: 06/10/2023]
Abstract
A one-dimensional seed-skewness algorithm adapted for X-ray diffraction signal detection is presented and discussed. The method, primarily designed for photocrystallographic time-resolved Laue data processing, was shown to work well for the type of data collected at the Advanced Photon Source and the European Synchrotron Radiation Facility. Nevertheless, it is also applicable to standard single-crystal X-ray diffraction data. The reported algorithm enables reasonable separation of signal from background in single one-dimensional data vectors, as well as determination of the small changes in reflection shapes and intensities that result from exposure of the sample to laser light. The procedure is objective, relying only on skewness computation and its subsequent minimization. The new algorithm was shown to yield results comparable to the Kruskal-Wallis test method [Kalinowski, J. A. et al. (2012). J. Synchrotron Rad. 19, 637], with similar processing time. Importantly, in contrast to the Kruskal-Wallis test, the reported seed-skewness approach does not need redundant input data, which allows for faster data collection and wider applications. Furthermore, as far as structure refinement is concerned, the reported algorithm leads to the excited-state geometry closest to the one modelled using the quantum-mechanics/molecular-mechanics approach reported previously [Jarzembska, K. N. et al. (2014). Inorg. Chem. 53, 10594] when the t and s algorithm parameters are set to the recommended values of 0.2 and 3.0, respectively.
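A minimal sketch of the skewness-minimization idea behind this kind of signal/background separation, assuming a 1-D vector in which background counts are symmetrically distributed while signal pixels create a positive tail. The trimming rule and threshold below are simplifications; they do not reproduce the published algorithm's t and s parameters.

```python
import numpy as np

def skewness(v):
    c = v - v.mean()
    return (c ** 3).mean() / (c ** 2).mean() ** 1.5

def seed_skewness_mask(y, s_max=0.2):
    """Crude 1-D signal/background split in the spirit of seed-skewness:
    keep discarding the largest remaining value until the skewness of what
    is left drops below s_max; discarded points are flagged as signal."""
    order = np.argsort(y)              # ascending; largest values last
    kept = len(y)
    while kept > 3 and skewness(y[order[:kept]]) > s_max:
        kept -= 1
    mask = np.zeros(len(y), dtype=bool)
    mask[order[kept:]] = True          # True = signal pixel
    return mask

rng = np.random.default_rng(1)
trace = rng.normal(100.0, 5.0, 500)    # symmetric detector background
trace[240:245] += 200.0                # a reflection-like spike
mask = seed_skewness_mask(trace)
print(int(mask.sum()))
```

Once the background pixels are identified, their mean can be subtracted to integrate the reflection intensity; that step is omitted here.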
Collapse
Affiliation(s)
- Dariusz Szarejko
- Department of Chemistry, University of Warsaw, Żwirki i Wigury 101, 02-089 Warsaw, Poland
| | - Radosław Kamiński
- Department of Chemistry, University of Warsaw, Żwirki i Wigury 101, 02-089 Warsaw, Poland
| | - Piotr Łaski
- Department of Chemistry, University of Warsaw, Żwirki i Wigury 101, 02-089 Warsaw, Poland
| | | |
Collapse
|
38
|
Shahbakhti M, Maugeon M, Beiramvand M, Marozas V. Low Complexity Automatic Stationary Wavelet Transform for Elimination of Eye Blinks from EEG. Brain Sci 2019; 9:E352. [PMID: 31810263 DOI: 10.3390/brainsci9120352] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2019] [Accepted: 11/27/2019] [Indexed: 12/20/2022] Open
Abstract
The electroencephalogram (EEG) signal often suffers from various artifacts and noise of physiological and non-physiological origin. Among these artifacts, eye blinks, due to their large amplitude, are considered to have the greatest influence on EEG analysis. In this paper, a low-complexity approach based on the Stationary Wavelet Transform (SWT) and skewness is proposed to remove eye-blink artifacts from EEG signals. The proposed method is compared against Automatic Wavelet Independent Components Analysis (AWICA) and Enhanced AWICA. The Normalized Root Mean Square Error (NRMSE), Peak Signal-to-Noise Ratio (PSNR), and correlation coefficient (ρ) between filtered and pure EEG signals are used to quantify artifact-removal performance. The proposed approach shows smaller NRMSE, larger PSNR, and larger correlation coefficients than the other methods. Furthermore, its execution speed is considerably faster, which makes it more suitable for real-time processing.
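A toy version of skewness-gated wavelet cleaning, with a one-level undecimated Haar pair standing in for a full SWT. The threshold, the blink waveform, and the decision to zero an entire band are invented simplifications of the paper's pipeline, shown only to convey how skewness can flag an artifact-carrying band.

```python
import numpy as np

def skew(v):
    c = v - v.mean()
    return (c ** 3).mean() / ((c ** 2).mean() ** 1.5 + 1e-12)

def haar_swt(x):
    """One level of an undecimated Haar pair (circular boundary);
    a stand-in for a full stationary wavelet transform."""
    shifted = np.roll(x, 1)
    approx = (x + shifted) / 2.0
    detail = (x - shifted) / 2.0
    return approx, detail

def remove_blinks(eeg, skew_thresh=0.5):
    """Zero the approximation band when its skewness betrays large
    one-sided transients, then reconstruct (approx + detail == x)."""
    approx, detail = haar_swt(eeg)
    if abs(skew(approx)) > skew_thresh:
        approx = np.zeros_like(approx)   # discard artifact-carrying band
    return approx + detail

rng = np.random.default_rng(2)
t = np.arange(1000)
eeg = rng.normal(0, 1, 1000)
eeg[400:430] += 12 * np.exp(-((t[400:430] - 415) ** 2) / 40.0)  # blink-like bump
cleaned = remove_blinks(eeg)
print(np.abs(cleaned[400:430]).max() < np.abs(eeg[400:430]).max())
```

The blink is slow and one-sided, so it concentrates in the low-frequency band and skews that band's amplitude distribution, which is what the skewness gate exploits.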
Collapse
|
39
|
Chatterjee S. Detection of focal electroencephalogram signals using higher-order moments in EMD-TKEO domain. Healthc Technol Lett 2019; 6:64-69. [PMID: 31341630 PMCID: PMC6595538 DOI: 10.1049/htl.2018.5036] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2018] [Revised: 02/27/2019] [Accepted: 03/27/2019] [Indexed: 11/19/2022] Open
Abstract
Detection of an epileptogenic focus based on electroencephalogram (EEG) signal screening is an important pre-surgical step for removing affected regions of the human brain. In this work, a novel technique for detection of focal EEG signals is proposed using a combination of empirical mode decomposition (EMD) and the Teager–Kaiser energy operator (TKEO). EEG signals belonging to the focal (Fo) and non-focal (NFo) groups were first decomposed into a set of intrinsic mode functions (IMFs) using EMD. Next, the TKEO was applied to each IMF, and two higher-order statistical moments, namely skewness and kurtosis, were extracted as features from the TKEO of each IMF. The statistical significance of the selected features was evaluated using Student's t-test, and features from the first three IMFs, which showed very high discriminative capability, were selected as inputs to a support vector machine classifier for discriminating Fo and NFo signals. A classification accuracy of 92.65% was obtained using a radial basis kernel function, which demonstrates the efficacy of the proposed EMD-TKEO-based feature extraction method for computer-aided screening of patients suffering from focal seizures.
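The TKEO and the two moment features are simple to state in code. This sketch applies them to a synthetic amplitude-modulated signal rather than to EMD-derived IMFs, since the EMD step is out of scope here; the signal and its parameters are invented.

```python
import numpy as np

def tkeo(x):
    """Teager–Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def moment_features(x):
    """Skewness and kurtosis of the TKEO of a signal (or of one IMF)."""
    e = tkeo(x)
    c = e - e.mean()
    m2 = (c ** 2).mean()
    return (c ** 3).mean() / m2 ** 1.5, (c ** 4).mean() / m2 ** 2

t = np.arange(4096)
# amplitude-modulated tone: the TKEO tracks its instantaneous energy
am = (1 + 0.5 * np.sin(2 * np.pi * t / 512)) * np.sin(2 * np.pi * t / 16)
print(moment_features(am))
```

A useful sanity check: for a pure sinusoid A·sin(ωn), the TKEO is exactly the constant A²·sin²(ω), so any structure in the TKEO output reflects amplitude or frequency modulation of the input.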
Collapse
Affiliation(s)
- Soumya Chatterjee
- Department of Electrical Engineering, Jadavpur University, Kolkata 700032, India
| |
Collapse
|
40
|
Abstract
Two recent publications in Educational and Psychological Measurement advocated that researchers consider using the a priori procedure. According to this procedure, the researcher specifies, prior to data collection, how close she wishes her sample mean(s) to be to the corresponding population mean(s), and the desired probability of being that close. A priori equations provide the necessary sample size to meet specifications under the normal distribution. Or, if sample size is taken as given, a priori equations provide the precision with which estimates of distribution means can be made. However, there is currently no way to perform these calculations under the more general family of skew-normal distributions. The present research provides the necessary equations. In addition, we show how skewness can increase the precision with which locations of distributions can be estimated. This conclusion, based on the perspective of improving sampling precision, contrasts with a typical argument in favor of performing transformations to normalize skewed data for the sake of performing more efficient significance tests.
Collapse
Affiliation(s)
| | - Tonghui Wang
- New Mexico State University, Las Cruces, NM, USA
| | - Cong Wang
- New Mexico State University, Las Cruces, NM, USA
| |
Collapse
|
41
|
Oh M, Kim YH. Statistical Approach to Spectrogram Analysis for Radio-Frequency Interference Detection and Mitigation in an L-Band Microwave Radiometer. Sensors (Basel) 2019; 19:s19020306. [PMID: 30646536 PMCID: PMC6359279 DOI: 10.3390/s19020306] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Revised: 01/09/2019] [Accepted: 01/09/2019] [Indexed: 06/09/2023]
Abstract
For the elimination of radio-frequency interference (RFI) in a passive microwave radiometer, the threshold level is generally calculated from the mean value and standard deviation. However, a serious problem is the error in the retrieved brightness temperature caused by a raised threshold level in the presence of RFI. In this paper, we propose a method to detect and mitigate RFI contamination using a threshold level derived from statistical criteria based on a spectrogram technique. Mean and skewness spectrograms are created from a brightness-temperature spectrogram by shifting a 2-D window, in order to discriminate the symmetric distribution characteristic of natural thermal emission. From the bins of the mean spectrogram that remain after eliminating the RFI-flagged bins of the skewness spectrogram, for data captured at 0.1-s intervals, two distribution sides are constructed identically from the left side of the distribution by shifting the reference position of the distribution. Simultaneously, kurtosis is computed repeatedly from these bins for each symmetric distribution to determine the retrieved brightness temperature corresponding to the kurtosis value closest to three. The performance is evaluated on experimental data: the maximum error and root-mean-square error (RMSE) of the retrieved brightness temperature are shown to be less than approximately 3 K and 1.7 K, respectively, for a window of 100 × 100 time-frequency bins, across the tested RFI levels and cases.
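The skewness-spectrogram flagging step can be sketched as a sliding 2-D window over a simulated brightness-temperature map. The window size, threshold, and RFI burst below are illustrative, not the paper's 100 × 100 configuration, and the kurtosis-based retrieval stage is omitted.

```python
import numpy as np

def window_skew(w):
    c = w - w.mean()
    return (c ** 3).mean() / ((c ** 2).mean() ** 1.5 + 1e-12)

def flag_rfi(spec, win=16, thresh=1.0):
    """Slide a win x win window over a time-frequency brightness map and
    flag windows whose skewness departs from the near-symmetric
    distribution expected of natural thermal emission."""
    nt, nf = spec.shape
    flags = np.zeros((nt // win, nf // win), dtype=bool)
    for i in range(nt // win):
        for j in range(nf // win):
            w = spec[i * win:(i + 1) * win, j * win:(j + 1) * win]
            flags[i, j] = abs(window_skew(w)) > thresh
    return flags

rng = np.random.default_rng(3)
spec = rng.normal(150.0, 2.0, (64, 64))   # thermal emission: symmetric noise
spec[20:24, 40:44] += 60.0                # narrow pulsed RFI burst
flags = flag_rfi(spec)
print(int(flags.sum()))
```

Only the window containing the burst is flagged; in the paper, the unflagged mean-spectrogram bins then feed the symmetric-distribution reconstruction and kurtosis test.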
Collapse
Affiliation(s)
- Myeonggeun Oh
- School of Mechatronics, Gwangju Institute of Science and Technology, 123 Cheomdangwagi-ro, Buk-gu, Gwangju 61005, Korea.
| | - Yong-Hoon Kim
- School of Mechatronics, Gwangju Institute of Science and Technology, 123 Cheomdangwagi-ro, Buk-gu, Gwangju 61005, Korea.
| |
Collapse
|
42
|
Zhao L, Sheppard LW, Reid PC, Walter JA, Reuman DC. Proximate determinants of Taylor's law slopes. J Anim Ecol 2018; 88:484-494. [PMID: 30474262 DOI: 10.1111/1365-2656.12931] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2018] [Accepted: 10/27/2018] [Indexed: 12/01/2022]
Abstract
Taylor's law (TL), a commonly observed and applied pattern in ecology, describes variances of population densities as related to mean densities via log(variance) = log(a) + b*log(mean). Variations among datasets in the slope, b, have been associated with multiple factors of central importance in ecology, including strength of competitive interactions and demographic rates. But these associations are not transparent, and the relative importance of these and other factors for TL slope variation is poorly studied. TL is thus a ubiquitously used indicator in ecology, the understanding of which is still opaque. The goal of this study was to provide tools to help fill this gap in understanding by providing proximate determinants of TL slopes, statistical quantities that are correlated to TL slopes but are simpler than the slope itself and are more readily linked to ecological factors. Using numeric simulations and 82 multi-decadal population datasets, we here propose, test and apply two proximate statistical determinants of TL slopes which we argue can become key tools for understanding the nature and ecological causes of TL slope variation. We find that measures based on population skewness, coefficient of variation and synchrony are effective proximate determinants. We demonstrate their potential for application by using them to help explain covariation in slopes of spatial and temporal TL (two common types of TL). This study provides tools for understanding TL, and demonstrates their usefulness.
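The TL regression in the abstract is a log-log least-squares fit across populations. The sketch below checks it on synthetic Poisson populations, for which variance equals mean and the slope b should be near 1; the population count and rates are invented.

```python
import numpy as np

def taylors_law_slope(densities):
    """Fit log10(variance) = log10(a) + b * log10(mean) across populations.
    `densities` is a (populations x time) array; returns (log10_a, b)."""
    means = densities.mean(axis=1)
    variances = densities.var(axis=1, ddof=1)
    # np.polyfit returns [slope, intercept] for degree 1
    b, log_a = np.polyfit(np.log10(means), np.log10(variances), 1)
    return log_a, b

rng = np.random.default_rng(4)
lams = rng.uniform(5, 500, 40)                  # 40 populations, varied density
counts = rng.poisson(lams[:, None], (40, 200))  # 200 censuses each
log_a, b = taylors_law_slope(counts)
print(round(b, 2))
```

The paper's proximate determinants (skewness, coefficient of variation, synchrony) are statistics computed from the same population series and correlated with this slope; only the slope fit itself is shown here.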
Collapse
Affiliation(s)
- Lei Zhao
- Beijing Key Laboratory of Biodiversity and Organic Farming, College of Resources and Environmental Sciences, China Agricultural University, Beijing, China; Department of Ecology and Evolutionary Biology and Kansas Biological Survey, University of Kansas, Lawrence, Kansas; Research Center for Engineering Ecology and Nonlinear Science, North China Electric Power University, Beijing, China
- Lawrence W Sheppard
- Department of Ecology and Evolutionary Biology and Kansas Biological Survey, University of Kansas, Lawrence, Kansas
- Philip C Reid
- The Continuous Plankton Recorder Survey, Marine Biological Association, Plymouth, UK; School of Biological and Marine Sciences, University of Plymouth, Plymouth, UK
- Jonathan A Walter
- Department of Ecology and Evolutionary Biology and Kansas Biological Survey, University of Kansas, Lawrence, Kansas; Department of Biology, Virginia Commonwealth University, Richmond, Virginia
- Daniel C Reuman
- Department of Ecology and Evolutionary Biology and Kansas Biological Survey, University of Kansas, Lawrence, Kansas; Laboratory of Populations, Rockefeller University, New York, New York
43
Olvera Astivia OL, Zumbo BD. On the solution multiplicity of the Fleishman method and its impact in simulation studies. Br J Math Stat Psychol 2018; 71:437-458. [PMID: 29323414 DOI: 10.1111/bmsp.12126] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2017] [Revised: 10/31/2017] [Indexed: 06/07/2023]
Abstract
The Fleishman third-order polynomial algorithm is one of the most-often used non-normal data-generating methods in Monte Carlo simulations. At the crux of the Fleishman method is the solution of a non-linear system of equations needed to obtain the constants to transform data from normality to non-normality. A rarely acknowledged fact in the literature is that the solution to this system is not unique, and it is currently unknown what influence the different types of solutions have on the computer-generated data. To address this issue, analytical and empirical investigations were conducted, aimed at documenting the impact that each solution type has on the design of computer simulations. In the first study, it was found that certain types of solutions generate data with different multivariate properties and wider coverage of the theoretical range spanned by population correlations. In the second study, it was found that previously published recommendations from Monte Carlo simulations could change if different types of solutions were used to generate the data. A mathematical description of the multiple solutions to the Fleishman polynomials is provided, as well as recommendations for users of this method.
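The non-linear system at the crux of the method can be written out and solved numerically. The sketch below, using `scipy.optimize.fsolve` and the standard Fleishman equations, illustrates the multiplicity the authors study; the target skewness and excess kurtosis values are arbitrary choices of mine:

```python
import numpy as np
from scipy.optimize import fsolve

def fleishman_system(params, skew, exkurt):
    """Fleishman (1978) equations for Y = a + b*Z + c*Z**2 + d*Z**3,
    Z ~ N(0, 1), a = -c, targeting unit variance and the given
    skewness and excess kurtosis."""
    b, c, d = params
    return [
        b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1,
        2*c*(b**2 + 24*b*d + 105*d**2 + 2) - skew,
        24*(b*d + c**2*(1 + b**2 + 28*b*d)
            + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - exkurt,
    ]

skew, exkurt = 1.0, 2.0
# The system is not uniquely solvable: if (b, c, d) is a root, so is
# (-b, c, -d), because b and d enter every equation only through even
# combinations (b**2, b*d, d**2). Different starting points therefore
# converge to different roots.
sol_pos = fsolve(fleishman_system, x0=(1.0, 0.1, 0.1), args=(skew, exkurt))
sol_neg = fsolve(fleishman_system, x0=(-1.0, 0.1, -0.1), args=(skew, exkurt))
```

Both roots produce data matching the same first four moments, which is exactly why, as the article shows, the choice among them can silently change multivariate properties of simulated data.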
Affiliation(s)
- Bruno D Zumbo
- Department of ECPS, University of British Columbia, Vancouver, British Columbia, Canada
44
Kobayashi H, Song C, Ikei H, Park BJ, Lee J, Kagawa T, Miyazaki Y. Forest Walking Affects Autonomic Nervous Activity: A Population-Based Study. Front Public Health 2018; 6:278. [PMID: 30327762 PMCID: PMC6174240 DOI: 10.3389/fpubh.2018.00278] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2018] [Accepted: 09/10/2018] [Indexed: 11/17/2022] Open
Abstract
The present study aimed to evaluate the effect of walking in forest environments on autonomic nervous activity, with special reference to its distribution characteristics. Heart rate variability (HRV) of 485 male participants walking for ~15 min in a forest and an urban area was analyzed. The experimental sites were 57 forests and 57 urban areas across Japan. Parasympathetic and sympathetic indicators of HRV [lnHF and ln(LF/HF), respectively] were calculated from the ~15-min heart rate recordings. Skewness and kurtosis of the distributions of lnHF and ln(LF/HF) were almost the same between the two environments, although the means and medians of the indicators differed significantly. The percentages of positive responders [showing an increase in lnHF or a decrease in ln(LF/HF) in forest environments] were 65.2% and 67.0%, respectively. The percentage for lnHF was significantly smaller than in our previous results on HRV during the viewing of urban or forest landscapes, whereas the percentage for ln(LF/HF) did not differ significantly. The results suggest that walking in a forest environment has a different effect on autonomic nervous activity than viewing a forest landscape.
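One common way to obtain lnHF and ln(LF/HF) from an RR-interval series is band-integrated Welch power. This is a generic sketch, not the authors' pipeline; it assumes the RR series has already been evenly resampled (real recordings need interpolation first), and uses the conventional LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands:

```python
import numpy as np
from scipy.signal import welch

def hrv_indices(rr_ms, fs=4.0):
    """lnHF and ln(LF/HF) from an RR-interval series (in ms) that is
    assumed to be evenly resampled at fs Hz."""
    f, psd = welch(rr_ms - np.mean(rr_ms), fs=fs, nperseg=min(256, len(rr_ms)))
    lf_band = (f >= 0.04) & (f < 0.15)   # conventional LF band
    hf_band = (f >= 0.15) & (f < 0.40)   # conventional HF band
    lf = np.trapz(psd[lf_band], f[lf_band])
    hf = np.trapz(psd[hf_band], f[hf_band])
    return np.log(hf), np.log(lf / hf)

# Synthetic ~15-min recording with a stronger HF (0.25 Hz) than LF (0.10 Hz)
# oscillation, so ln(LF/HF) should come out negative
t = np.arange(0, 900, 1 / 4.0)
rng = np.random.default_rng(1)
rr = (800 + 30 * np.sin(2 * np.pi * 0.10 * t)
          + 40 * np.sin(2 * np.pi * 0.25 * t)
          + rng.normal(0, 5, t.size))
ln_hf, ln_lf_hf = hrv_indices(rr)
```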
Affiliation(s)
- Hiromitsu Kobayashi
- Department of Nursing, Ishikawa Prefectural Nursing University, Ishikawa, Japan
- Chorong Song
- Center for Environment, Health and Field Sciences, Chiba University, Kashiwa, Japan
- Harumi Ikei
- Center for Environment, Health and Field Sciences, Chiba University, Kashiwa, Japan; Forestry and Forest Products Research Institute, Tsukuba, Japan
- Bum-Jin Park
- Department of Environment and Forest Resources, Chungnam National University, Daejeon, South Korea
- Juyoung Lee
- Department of Landscape Architecture, Hankyong National University, Anseong-si, South Korea
- Takahide Kagawa
- Forestry and Forest Products Research Institute, Tsukuba, Japan
- Yoshifumi Miyazaki
- Center for Environment, Health and Field Sciences, Chiba University, Kashiwa, Japan
45
Chen Z, Li J, Li Z, Peng Y, Gao X. [Automatic detection and classification of atrial fibrillation using RR intervals and multi-eigenvalue]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi 2018; 35:550-556. [PMID: 30124017 DOI: 10.7507/1001-5515.201710050] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Atrial fibrillation (AF) is a common arrhythmia. Detecting atrial fibrillation from the electrocardiogram (ECG) is of great significance for clinical diagnosis. Because of the non-linearity and complexity of ECG signals, manual diagnosis is time-consuming and prone to errors. To overcome these problems, a feature extraction method based on RR intervals is proposed in this paper. The dispersion of the RR intervals is described by the robust coefficient of variation (RCV), the shape of their distribution by the skewness parameter (SKP), and their complexity by the Lempel-Ziv complexity (LZC). The RCV, SKP, and LZC feature vectors are then input into a support vector machine (SVM) classifier to achieve automatic classification and detection of atrial fibrillation. To verify the validity and practicability of the proposed method, it was evaluated on the MIT-BIH Atrial Fibrillation Database, yielding a sensitivity of 95.81%, a specificity of 96.48%, and an accuracy of 96.09%; a specificity of 95.16% was achieved on the MIT-BIH Normal Sinus Rhythm Database. The experimental results show that the proposed method is an effective classification method for atrial fibrillation.
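The three features can be sketched as follows. Note that "robust coefficient of variation" and the exact complexity measure have several definitions, so the MAD/median ratio and the simple LZ78-style parse below are my assumptions, not the authors' formulas:

```python
import numpy as np
from scipy.stats import skew

def robust_cv(rr):
    # One common robust CV: median absolute deviation over the median
    med = np.median(rr)
    return np.median(np.abs(rr - med)) / med

def lz_complexity(bits):
    """Number of distinct phrases in a simple LZ78-style parse of a 0/1 sequence."""
    phrases, current = set(), ""
    for b in bits:
        current += str(b)
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def rr_features(rr):
    bits = (rr > np.median(rr)).astype(int)  # binarize around the median for LZC
    return np.array([robust_cv(rr), skew(rr), lz_complexity(bits)])

rng = np.random.default_rng(2)
sinus_rr = rng.normal(800, 20, 300)   # regular sinus rhythm: narrow RR spread
af_rr = rng.normal(700, 120, 300)     # AF-like rhythm: highly irregular RR
features_sr, features_af = rr_features(sinus_rr), rr_features(af_rr)
```

In the paper these feature vectors feed an SVM classifier (e.g., `sklearn.svm.SVC`); only the feature extraction is sketched here, and the irregular series should show the larger robust CV.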
Affiliation(s)
- Zhibo Chen
- School of Electronic Information, Sichuan University, Chengdu 610041, P.R.China
- Jian Li
- School of Electronic Information, Sichuan University, Chengdu 610041, P.R.China
- Zhi Li
- School of Electronic Information, Sichuan University, Chengdu 610041, P.R.China
- Yuntao Peng
- School of Electronic Information, Sichuan University, Chengdu 610041, P.R.China
- Xingjiao Gao
- School of Electronic Information, Sichuan University, Chengdu 610041, P.R.China
46
Abstract
Health care expenditures and use are challenging to model because these dependent variables typically have distributions that are skewed with a large mass at zero. In this article, we describe estimation and interpretation of the effects of a natural experiment using two classes of nonlinear statistical models: one for health care expenditures and the other for counts of health care use. We extend prior analyses to test the effect of the ACA's young adult expansion on three different outcomes: total health care expenditures, office-based visits, and emergency department visits. Modeling the outcomes with a two-part or hurdle model, instead of a single-equation model, reveals that the ACA policy increased the number of office-based visits but decreased emergency department visits and overall spending.
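A two-part model of the kind described, logistic for any use and log-linear for positive spending, can be sketched on simulated data. The coefficients and the lognormal retransformation below are illustrative assumptions, not estimates from the article:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# Simulated expenditures: a mass at zero plus lognormal spending among users
p_any = 1 / (1 + np.exp(-(0.5 + 0.8 * x)))
any_use = rng.uniform(size=n) < p_any
y = np.where(any_use, np.exp(6.0 + 0.3 * x + rng.normal(0, 1, n)), 0.0)

# Part 1: logistic regression for P(y > 0), fit by maximum likelihood
def neg_loglik(beta):
    eta = X @ beta
    return np.sum(np.log1p(np.exp(eta)) - (y > 0) * eta)
b_logit = minimize(neg_loglik, np.zeros(2), method="BFGS").x

# Part 2: OLS of log(y) on X among users only
pos = y > 0
b_ols, *_ = np.linalg.lstsq(X[pos], np.log(y[pos]), rcond=None)
sigma2 = np.var(np.log(y[pos]) - X[pos] @ b_ols, ddof=2)

# Combined mean under a lognormal retransformation assumption:
# E[y | x] = P(y > 0 | x) * exp(x'b + sigma^2 / 2)
expected_y = (1 / (1 + np.exp(-(X @ b_logit)))) * np.exp(X @ b_ols + sigma2 / 2)
```

The point of the two-part structure, as in the abstract, is that a covariate can move the two parts in opposite directions (more visits of one kind, less spending overall), which a single-equation model cannot reveal.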
Affiliation(s)
- Partha Deb
- Department of Economics, Hunter College, City University of New York, New York, NY 10065, USA; and National Bureau of Economic Research
- Edward C Norton
- Departments of Health Management and Policy and Economics, University of Michigan, Ann Arbor, Michigan 48109, USA; and National Bureau of Economic Research
47
Castro LM, Wang WL, Lachos VH, Inácio de Carvalho V, Bayes CL. Bayesian semiparametric modeling for HIV longitudinal data with censoring and skewness. Stat Methods Med Res 2018; 28:1457-1476. [PMID: 29551086 DOI: 10.1177/0962280218760360] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In biomedical studies, the analysis of longitudinal data based on Gaussian assumptions is common practice. Nevertheless, more often than not, the observed responses are naturally skewed, rendering the use of symmetric mixed effects models inadequate. In addition, it is also common in clinical assays that the patient's responses are subject to some upper and/or lower quantification limit, depending on the diagnostic assays used for their detection. Furthermore, responses may also often present a nonlinear relation with some covariates, such as time. To address the aforementioned three issues, we consider a Bayesian semiparametric longitudinal censored model based on a combination of splines, wavelets, and the skew-normal distribution. Specifically, we focus on the use of splines to approximate the general mean, wavelets for modeling the individual subject trajectories, and on the skew-normal distribution for modeling the random effects. The newly developed method is illustrated through simulated data and real data concerning AIDS/HIV viral loads.
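The skew-normal ingredient for the random effects can be illustrated with `scipy.stats.skewnorm`; the shape value below is arbitrary, and this only demonstrates the distribution, not the full Bayesian spline-wavelet model:

```python
import numpy as np
from scipy.stats import skewnorm

a = 4.0  # shape (slant) parameter; a = 0 recovers the standard normal
b_skew = skewnorm.rvs(a, size=5000, random_state=np.random.default_rng(4))

# Theoretical skewness of the skew-normal via delta = a / sqrt(1 + a^2)
delta = a / np.sqrt(1 + a**2)
mu_z = delta * np.sqrt(2 / np.pi)
gamma1 = ((4 - np.pi) / 2) * mu_z**3 / (1 - mu_z**2)**1.5

# Sample skewness should sit near the theoretical value
samp_gamma1 = np.mean((b_skew - b_skew.mean())**3) / np.std(b_skew)**3
```

Modeling random effects with this family, rather than the normal, is what lets the mixed model absorb the naturally skewed responses described in the abstract.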
Affiliation(s)
- Luis M Castro
- Department of Statistics, Pontificia Universidad Católica de Chile, Chile
- Wan-Lun Wang
- Department of Statistics, Graduate Institute of Statistics and Actuarial Science, Feng Chia University, Taichung, Taiwan
- Victor H Lachos
- Department of Statistics, University of Connecticut, Storrs, CT, USA
- Cristian L Bayes
- Department of Sciences, Pontificia Universidad Católica del Perú, Lima, Perú
48
Bishara AJ, Li J, Nash T. Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis. Br J Math Stat Psychol 2018; 71:167-185. [PMID: 28872186 DOI: 10.1111/bmsp.12113] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/29/2015] [Revised: 06/19/2017] [Indexed: 06/07/2023]
Abstract
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' in the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals for the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code.
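For reference, the default Fisher z' interval that the new methods adjust looks like this; the adjusted variants replace the 1/(n - 3) variance with an estimate that depends on sample skewness and kurtosis (the function name is mine):

```python
import numpy as np
from scipy.stats import norm

def fisher_z_ci(r, n, conf=0.95):
    """Default asymptotic CI for a Pearson correlation via Fisher's z',
    assuming bivariate normality (variance 1/(n - 3))."""
    z = np.arctanh(r)                      # Fisher's z' transform
    se = 1.0 / np.sqrt(n - 3)              # normal-theory standard error
    zcrit = norm.ppf(0.5 + conf / 2)
    return np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)

lo, hi = fisher_z_ci(r=0.5, n=100)
```

Because only `r`, `n`, and (in the adjusted variants) summary skewness/kurtosis enter the formula, the approach works from reported statistics alone, which is the meta-analysis advantage the abstract highlights.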
Affiliation(s)
- Anthony J Bishara
- Department of Psychology, College of Charleston, South Carolina, USA
- Jiexiang Li
- Department of Mathematics, College of Charleston, South Carolina, USA
- Thomas Nash
- Department of Computer Science, College of Charleston, South Carolina, USA
49
Abstract
With necessary condition analysis (NCA), a necessity effect is estimated by calculating the amount of empty space in the upper left corner of a plot of a predictor X against an outcome Y. In the present simulation study, calculated necessity effects were found to have a negative association with the skewness of the predictor and a positive association with the skewness of the outcome. The standard error of the necessity effect was also found to be influenced by the skewness of the predictor and the outcome, as well as by sample size, and a way to calculate a confidence interval for the necessity effect is presented. At least some of the findings obtained with NCA are well within the range of what can be expected from the skewness of the predictor and the outcome alone.
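The empty-space calculation can be sketched with a step-function ceiling (a CE-FDH-style envelope); NCA offers several ceiling techniques, so this is one illustrative choice rather than the study's exact procedure:

```python
import numpy as np

def nca_effect(x, y):
    """Necessity effect size: empty upper-left area divided by the scope
    area, using a step-function ceiling (CE-FDH-style envelope)."""
    order = np.argsort(x)
    xs = np.asarray(x, dtype=float)[order]
    ys = np.asarray(y, dtype=float)[order]
    ceiling = np.maximum.accumulate(ys)      # max y observed up to each x
    scope = (xs[-1] - xs[0]) * (ys.max() - ys.min())
    # Empty area above the ceiling, summed segment by segment
    empty = np.sum(np.diff(xs) * (ys.max() - ceiling[:-1]))
    return empty / scope
```

With this step ceiling, four points on the diagonal give an effect of 6/9, while a point already sitting in the upper-left corner of the scope drives the effect to 0. Skewing X or Y changes how the points fill the scope, which is the mechanism behind the associations reported in the abstract.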
Affiliation(s)
- Kimmo Sorjonen
- Division of Psychology, Department of Clinical Neuroscience, Karolinska Institute, Solna, Sweden
- Jenny Wikström Alex
- Division of Psychology, Department of Clinical Neuroscience, Karolinska Institute, Solna, Sweden
- Bo Melin
- Division of Psychology, Department of Clinical Neuroscience, Karolinska Institute, Solna, Sweden
50
Abstract
To address the complexity of the X-chromosome inactivation (XCI) process, we previously developed a unified approach for testing the association between X-chromosomal single-nucleotide polymorphisms (SNPs) and a disease of interest, accounting for the different biological possibilities of XCI: random, skewed, and escaping XCI. In the original study, we focused on the SNP-disease association test but did not provide knowledge regarding the underlying XCI models. One can use the highest log-likelihood ratio (LLR) to select among XCI models (the max-LLR approach). However, that approach does not formally compare the LLRs corresponding to different XCI models to assess whether the models are distinguishable. Therefore, we propose an LLR comparison procedure (the comp-LLR approach), inspired by the Cox test, to formally compare the LLRs of the different XCI models and select the one most likely to describe the underlying XCI process. We conducted simulation studies to investigate the max-LLR and comp-LLR approaches. The simulation results show that, compared with max-LLR, the comp-LLR approach has a higher probability of identifying the correct underlying XCI model when the underlying process is random XCI, escaping XCI, or XCI skewed toward the deleterious allele. We applied both approaches to a head and neck cancer genetic study to investigate the underlying XCI processes of the X-chromosomal genetic variants.
Affiliation(s)
- Jian Wang
- Department of Biostatistics–Unit 1411, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Rajesh Talluri
- Department of Data Science, The University of Mississippi Medical Center, Jackson, MS, USA
- Sanjay Shete
- Department of Biostatistics–Unit 1411, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; Department of Epidemiology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA