26
Molinari CA, Bresson P, Palacin F, Billat V. Pace Controlled by a Steady-State Physiological Variable Is Associated with Better Performance in a 3000 M Run. Int J Environ Res Public Health 2021; 18:7886. PMID: 34360178; PMCID: PMC8345513; DOI: 10.3390/ijerph18157886.
Abstract
This paper tests the hypothesis that freely chosen running pace is less effective than pace controlled by a steady-state physiological variable. Methods: Eight runners performed four maximum-effort 3000 m time trials on a running track. The first time trial (TT1) was freely paced. In the following 3000 m time trials, the pace was controlled so that the average speed (TT2), average V̇O2 (TT3) or average HR (TT4) recorded in TT1 was maintained throughout the time trial. Results: Physiologically controlled pace was associated with a faster time (mean ± standard deviation: 740 ± 34 s for TT3 and 748 ± 33 s for TT4, vs. 854 ± 53 s for TT1; p < 0.01), a lower oxygen cost of running (200 ± 5 and 220 ± 3 vs. 310 ± 5 mL O2·kg−1·km−1, respectively; p < 0.02), a lower cardiac cost (0.69 ± 0.08 and 0.69 ± 0.04 vs. 0.86 ± 0.09 beats·m−1, respectively; p < 0.01), and a more positively skewed speed distribution (skewness: 1.7 ± 0.9 and 1.3 ± 0.6 vs. 0.2 ± 0.4; p < 0.05). Conclusion: Physiologically controlled pace (at the average V̇O2 or HR recorded in a freely paced run) was associated with a faster time, a more favorable speed distribution and lower levels of physiological strain, relative to freely chosen pace. This finding suggests that non-elite runners do not spontaneously choose the best pacing strategy.
27
Characterization of Monochromatic Aberrated Metalenses in Terms of Intensity-Based Moments. Nanomaterials 2021; 11:1805. PMID: 34361191; PMCID: PMC8308444; DOI: 10.3390/nano11071805.
Abstract
Consistent with wave-optics simulations of metasurfaces, the aberrations of metalenses should also be described in terms of wave optics rather than ray tracing. In this respect, we have shown, through extensive numerical simulations, that intensity-based moments and the parameters defined in terms of them (average position, spatial extent, skewness and kurtosis) adequately capture changes in beam shape induced by the aberrations of a metalens with a hyperbolic phase profile. We have studied axial illumination, in which phase-discretization-induced aberrations exist, as well as non-axial illumination, where coma can also appear. Our results allow the identification of the parameters most likely to induce changes in the beam shape for metalenses that impart on an incident electromagnetic field a step-like approximation of an ideal phase profile.
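As an illustration (not the authors' simulation code), the intensity-based moments mentioned above reduce to intensity-weighted integrals over the beam profile; the sketch below computes them for a hypothetical asymmetric 1-D focal spot, with the profile and coordinates invented for the example.

```python
import numpy as np

def intensity_moments(x, I):
    """Intensity-weighted centroid, extent, skewness and kurtosis of I(x)."""
    dx = x[1] - x[0]
    w = I / (I.sum() * dx)                      # normalize intensity to unit area
    mean = np.sum(w * x) * dx                   # average position (centroid)
    var = np.sum(w * (x - mean) ** 2) * dx      # squared spatial extent
    sigma = np.sqrt(var)
    skewness = np.sum(w * (x - mean) ** 3) * dx / sigma ** 3
    kurt = np.sum(w * (x - mean) ** 4) * dx / sigma ** 4
    return mean, sigma, skewness, kurt

# Hypothetical aberrated focal spot: a Gaussian with a coma-like asymmetry.
x = np.linspace(-10.0, 10.0, 2001)
I = np.exp(-x ** 2) * (1.0 + 0.3 * np.tanh(x))  # asymmetric beam profile
print(intensity_moments(x, I))                  # nonzero skewness appears
```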
28
Abstract
We study an equilibrium risk and return model to explore the effects of the coronavirus crisis and the associated skewness on the market price of risk. We derive the moment and equilibrium equations, specifying the skewness price of risk as an additive component of the effect of variance on mean expected return. We estimate our model using the flexible skewed generalized error distribution, for which we derive the distribution of returns and the likelihood function. Using S&P 500 Index returns from January 1980 to mid-October 2020, our results show that the coronavirus crisis generated a deeply negative reaction in the skewness and total market price of risk, even more negative than during the subprime and October 1987 crises.
29
Webb ALM. Reversing the Luminance Polarity of Control Faces: Why Are Some Negative Faces Harder to Recognize, but Easier to See? Front Psychol 2021; 11:609045. PMID: 33551920; PMCID: PMC7858267; DOI: 10.3389/fpsyg.2020.609045.
Abstract
Control stimuli are key for understanding the extent to which face processing relies on holistic processing and affective evaluation versus the encoding of low-level image properties. Luminance polarity (LP) reversal combined with face inversion is a popular tool for severely disrupting the recognition of face controls. However, recent findings demonstrate visibility-recognition trade-offs for LP-reversed faces, where these face controls sometimes appear more salient despite being harder to recognize. The present report brings together findings from image analysis, simple stimuli, and behavioral data on facial recognition and visibility, in an attempt to disentangle instances where LP-reversed control faces are associated with a performance bias in terms of their perceived salience. These findings have important implications for studies of subjective face appearance and highlight that future research must be aware of behavioral artifacts arising from possible trade-off effects.
30
Lychagin V, Roop M. On Higher Order Structures in Thermodynamics. Entropy (Basel) 2020; 22:E1147. PMID: 33286916; PMCID: PMC7597303; DOI: 10.3390/e22101147.
Abstract
We present the development of an approach to thermodynamics based on measurement. First, we recall that treating classical thermodynamics as a theory of the measurement of extensive variables yields a description of thermodynamic states as Legendrian or Lagrangian manifolds representing the averages of measurable quantities and extremal measures. Second, the variance of random vectors induces Riemannian structures on the corresponding manifolds. Computing higher-order central moments, one arrives at the corresponding higher-order structures, namely the cubic and fourth-order forms. The cubic form is responsible for the skewness of the extremal distribution; the condition that it vanish gives so-called symmetric processes. The positivity of the fourth-order structure imposes an additional requirement on the thermodynamic state.
31
Chen T, Wang R. Inference for variance components in linear mixed-effect models with flexible random effect and error distributions. Stat Methods Med Res 2020; 29:3586-3604. PMID: 32669048; DOI: 10.1177/0962280220933909.
Abstract
In many biomedical investigations, parameters of interest, such as the intraclass correlation coefficient, are functions of higher-order moments reflecting finer distributional characteristics. One popular method to make inference for such parameters is through postulating a parametric random effects model. We relax the standard normality assumptions for both the random effects and errors through the use of the Fleishman distribution, a flexible four-parameter distribution which accounts for the third and fourth cumulants. We propose a Fleishman bootstrap method to construct confidence intervals for correlated data and develop a normality test for the random effect and error distributions. Recognizing that the intraclass correlation coefficient may be heavily influenced by a few extreme observations, we propose a modified, quantile-normalized intraclass correlation coefficient. We evaluate our methods in simulation studies and apply these methods to the Childhood Adenotonsillectomy Trial sleep electroencephalogram data in quantifying wave-frequency correlation among different channels.
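The paper's Fleishman bootstrap is not reproduced here; the sketch below only shows the quantity at stake, a model-based intraclass correlation coefficient estimated from variance components with statsmodels, on fabricated clustered data with deliberately skewed (chi-square) errors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Simulated correlated data: 50 clusters of 8 observations with skewed errors.
clusters = np.repeat(np.arange(50), 8)
b = rng.normal(0, 1.0, 50)[clusters]          # random cluster effects
e = rng.chisquare(3, clusters.size) - 3       # skewed errors, mean 0, variance 6
df = pd.DataFrame({"y": 10 + b + e, "cluster": clusters})

m = sm.MixedLM.from_formula("y ~ 1", groups="cluster", data=df).fit()
var_between = float(m.cov_re.iloc[0, 0])      # random-effect variance
var_within = m.scale                          # residual variance
icc = var_between / (var_between + var_within)
print(f"ICC = {icc:.3f}")
```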
32
Bastos FDS, Barreto-Souza W. Birnbaum-Saunders sample selection model. J Appl Stat 2020; 48:1896-1916. PMID: 35706436; DOI: 10.1080/02664763.2020.1780570.
Abstract
The sample selection bias problem occurs when the outcome of interest is only observed according to some selection rule, where there is a dependence structure between the outcome and the selection rule. In a pioneering work, J. Heckman proposed a sample selection model based on a bivariate normal distribution for dealing with this problem. Due to the non-robustness of the normal distribution, many alternatives have been introduced in the literature by assuming extensions of the normal distribution such as the Student-t and skew-normal models. One common limitation of existing sample selection models is that they require a transformation of the outcome of interest, which is commonly R+-valued, such as income and wage. As a result, the data are analyzed on a transformed scale, which complicates the interpretation of the parameters. In this paper, we propose a sample selection model based on the bivariate Birnbaum-Saunders distribution, which has the same number of parameters as the classical Heckman model. Further, our associated outcome equation is R+-valued. We discuss estimation by maximum likelihood and present some Monte Carlo simulation studies. An empirical application to the ambulatory expenditures data from the 2001 Medical Expenditure Panel Survey is presented.
33
Wirtshafter HS, Wilson MA. Differences in reward biased spatial representations in the lateral septum and hippocampus. eLife 2020; 9:55252. PMID: 32452763; PMCID: PMC7274787; DOI: 10.7554/elife.55252.
Abstract
The lateral septum (LS), which is innervated by the hippocampus, is known to represent spatial information. However, the details of place representation in the LS, and whether this place information is combined with reward signaling, remain unknown. We simultaneously recorded from CA1 and the caudodorsal lateral septum in rats during a rewarded navigation task and compared spatial firing in the two areas. While LS place cells are less numerous than hippocampal place cells, they are similar in field size and number of fields per cell, but with field shape and center distributions that are more skewed toward reward. Spike cross-correlations between the hippocampus and LS are greatest for cells with reward-proximate place fields, suggesting a role for the LS in relaying task-relevant hippocampal spatial information to downstream areas, such as the VTA.
34
Abstract
Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects against this way of using the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. Using computer simulations, we demonstrate that under certain conditions the number of citations an article has received is a more accurate indicator of the value of the article than the impact factor. However, under other conditions, the impact factor is a more accurate indicator. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.
35
Waltman L, Traag VA. Use of the journal impact factor for assessing individual articles need not be statistically wrong. F1000Res 2020; 9:366. PMID: 33796272; PMCID: PMC7974631; DOI: 10.12688/f1000research.23418.1.
Abstract
Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects against this way of using the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. In fact, our computer simulations demonstrate the possibility that the impact factor is a more accurate indicator of the value of an article than the number of citations the article has received. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.
36
Smits N, Öğreden O, Garnier-Villarreal M, Terwee CB, Chalmers RP. A study of alternative approaches to non-normal latent trait distributions in item response theory models used for health outcome measurement. Stat Methods Med Res 2020; 29:1030-1048. PMID: 32156195; PMCID: PMC7221458; DOI: 10.1177/0962280220907625.
Abstract
It is often unrealistic to assume normally distributed latent traits in the measurement of health outcomes. If normality is violated, the item response theory (IRT) models that are used to calibrate questionnaires may yield parameter estimates that are biased. Recently, IRT models were developed for dealing with specific deviations from normality, such as zero-inflation (“excess zeros”) and skewness. However, these models have not yet been evaluated under conditions representative of item bank development for health outcomes, characterized by a large number of polytomous items. A simulation study was performed to compare the bias in parameter estimates of the graded response model (GRM), polytomous extensions of the zero-inflated mixture IRT (ZIM-GRM), and Davidian Curve IRT (DC-GRM). In the case of zero-inflation, the GRM showed high bias, overestimating discrimination parameters and yielding estimates of threshold parameters that were too high and too close to one another, while ZIM-GRM showed no bias. In the case of skewness, the GRM and DC-GRM showed little bias, with the GRM showing slightly better results. Consequences for the development of health outcome measures are discussed.
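For readers unfamiliar with the graded response model that serves as the baseline in this study, here is a minimal sketch of its category probabilities (the item parameters are hypothetical, and this is not the study's simulation code).

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category probabilities of one graded response model (GRM) item.
    theta : latent trait value
    a     : discrimination parameter
    b     : increasing threshold parameters (K-1 boundaries for K categories)
    """
    b = np.asarray(b, dtype=float)
    # Cumulative probabilities P(X >= k), with P(X >= 0) = 1 and P(X >= K) = 0.
    pstar = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    cum = np.concatenate(([1.0], pstar, [0.0]))
    return cum[:-1] - cum[1:]          # P(X = k) for k = 0 .. K-1

# A 5-category polytomous item, as in typical health-outcome item banks.
print(grm_category_probs(theta=0.5, a=1.8, b=[-1.5, -0.5, 0.4, 1.3]))
```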
37
Szarejko D, Kamiński R, Łaski P, Jarzembska KN. Seed-skewness algorithm for X-ray diffraction signal detection in time-resolved synchrotron Laue photocrystallography. J Synchrotron Radiat 2020; 27:405-413. PMID: 32153279; PMCID: PMC7064106; DOI: 10.1107/s1600577520000077.
Abstract
A one-dimensional seed-skewness algorithm adapted for X-ray diffraction signal detection is presented and discussed. The method, designed primarily for photocrystallographic time-resolved Laue data processing, was shown to work well for the type of data collected at the Advanced Photon Source and the European Synchrotron Radiation Facility. Nevertheless, it is also applicable to standard single-crystal X-ray diffraction data. The reported algorithm enables reasonable separation of signal from background in single one-dimensional data vectors and can detect small changes in reflection shapes and intensities resulting from exposure of the sample to laser light. The procedure is objective, relying only on the computation of skewness and its subsequent minimization. The new algorithm was shown to yield results comparable to the Kruskal-Wallis test method [Kalinowski, J. A. et al. (2012). J. Synchrotron Rad. 19, 637], with similar processing time. Importantly, in contrast to the Kruskal-Wallis test, the reported seed-skewness approach does not need redundant input data, which allows for faster data collection and wider application. Furthermore, as far as structure refinement is concerned, the reported algorithm leads to the excited-state geometry closest to the one modelled using the quantum-mechanics/molecular-mechanics approach reported previously [Jarzembska, K. N. et al. (2014). Inorg. Chem. 53, 10594] when the t and s algorithm parameters are set to the recommended values of 0.2 and 3.0, respectively.
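The published algorithm, with its t and s parameters, is not reproduced here; the following toy sketch only illustrates the general skewness-minimization idea for one 1-D data vector (the threshold rule and the roles of t and s are rough analogues, not the paper's definitions).

```python
import numpy as np
from scipy.stats import skew

def skewness_background(v, s=3.0, t=0.2):
    """Toy skewness-minimizing background/signal split for a 1-D vector.

    Peels off the largest values until the skewness of the remaining points
    drops below t, then flags points above background mean + s*sigma as
    signal. Loosely mirrors the published parameters; not the paper's code.
    """
    bg = np.sort(np.asarray(v, dtype=float))
    while bg.size > 3 and skew(bg) > t:
        bg = bg[:-1]                      # drop the current maximum
    thresh = bg.mean() + s * bg.std()
    return np.asarray(v) > thresh         # boolean signal mask

# Toy vector: flat background plus one reflection-like peak.
rng = np.random.default_rng(1)
v = rng.normal(100, 5, 200)
v[95:105] += 80.0 * np.exp(-0.5 * ((np.arange(10) - 4.5) / 2) ** 2)
print(np.where(skewness_background(v))[0])   # indices flagged as signal
```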
38
Low Complexity Automatic Stationary Wavelet Transform for Elimination of Eye Blinks from EEG. Brain Sci 2019; 9:352. PMID: 31810263; PMCID: PMC6955982; DOI: 10.3390/brainsci9120352.
Abstract
The electroencephalogram (EEG) signal often suffers from various artifacts and noise of physiological and non-physiological origin. Among these artifacts, the eye blink, due to its amplitude, is considered to have the greatest influence on EEG analysis. In this paper, a low-complexity approach based on the Stationary Wavelet Transform (SWT) and skewness is proposed to remove eye blink artifacts from EEG signals. The proposed method is compared against Automatic Wavelet Independent Components Analysis (AWICA) and Enhanced AWICA. Normalized Root Mean Square Error (NRMSE), Peak Signal-to-Noise Ratio (PSNR), and the correlation coefficient (ρ) between filtered and pure EEG signals are used to quantify artifact removal performance. The proposed approach shows smaller NRMSE, larger PSNR, and larger correlation coefficient values than the other methods. Furthermore, the proposed method executes considerably faster than the other methods, which makes it more suitable for real-time processing.
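A rough sketch of the general SWT-plus-skewness idea using PyWavelets (the wavelet choice, decomposition level, skewness threshold, and the rule of zeroing skewed bands are illustrative assumptions, not the paper's tested pipeline):

```python
import numpy as np
import pywt                      # PyWavelets
from scipy.stats import skew

def suppress_blinks_swt(eeg, wavelet="sym4", level=4, skew_thresh=0.5):
    """Zero SWT bands whose coefficients are strongly skewed (blinks are
    large, one-sided deflections), then reconstruct. Illustrative only."""
    n = len(eeg) - len(eeg) % (2 ** level)     # SWT needs len divisible by 2^level
    coeffs = [[cA, cD] for cA, cD in pywt.swt(eeg[:n], wavelet, level=level)]
    for c in coeffs:
        if abs(skew(c[1])) > skew_thresh:      # detail band dominated by blink
            c[1] = np.zeros_like(c[1])
    if abs(skew(coeffs[0][0])) > skew_thresh:  # deepest approximation band
        coeffs[0][0] = np.zeros_like(coeffs[0][0])
    return pywt.iswt([tuple(c) for c in coeffs], wavelet)

# Usage on a synthetic 4 s, 256 Hz trace with one blink-like bump at t = 2 s.
t = np.arange(0, 4, 1 / 256)
eeg = np.random.default_rng(2).normal(0, 1, t.size)
eeg += 8 * np.exp(-0.5 * ((t - 2.0) / 0.1) ** 2)
clean = suppress_blinks_swt(eeg)
```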
39
Chatterjee S. Detection of focal electroencephalogram signals using higher-order moments in EMD-TKEO domain. Healthc Technol Lett 2019; 6:64-69. PMID: 31341630; PMCID: PMC6595538; DOI: 10.1049/htl.2018.5036.
Abstract
Detection of an epileptogenic focus based on electroencephalogram (EEG) signal screening is an important pre-surgical step in removing affected regions of the human brain. Accordingly, in this work, a novel technique for the detection of focal EEG signals is proposed using a combination of empirical mode decomposition (EMD) and the Teager–Kaiser energy operator (TKEO). EEG signals belonging to focal (Fo) and non-focal (NFo) groups were first decomposed into a set of intrinsic mode functions (IMFs) using EMD. Next, TKEO was applied to each IMF, and two higher-order statistical moments, skewness and kurtosis, were extracted as features from the TKEO of each IMF. The statistical significance of the selected features was evaluated using Student's t-test, and based on this test, features from the first three IMFs, which showed very high discriminative capability, were selected as inputs to a support vector machine classifier for discriminating Fo and NFo signals. A classification accuracy of 92.65% was obtained using a radial basis kernel function, demonstrating the efficacy of the proposed EMD-TKEO-based feature extraction method for computer-based evaluation of patients suffering from focal seizures.
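A sketch of the feature-extraction stage as described, assuming the third-party EMD-signal package for the decomposition (an assumed dependency; the classifier step would use, e.g., sklearn.svm.SVC with an RBF kernel):

```python
import numpy as np
from scipy.stats import skew, kurtosis
from PyEMD import EMD            # pip install EMD-signal (assumed dependency)

def tkeo(x):
    """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def emd_tkeo_features(signal, n_imfs=3):
    """Skewness and kurtosis of the TKEO of the first few IMFs,
    mirroring the feature set described in the abstract."""
    imfs = EMD().emd(signal)
    feats = []
    for imf in imfs[:n_imfs]:
        e = tkeo(imf)
        feats.extend([skew(e), kurtosis(e)])
    return np.array(feats)       # feature vector for an SVM classifier
```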
40
Trafimow D, Wang T, Wang C. From a Sampling Precision Perspective, Skewness Is a Friend and Not an Enemy! Educ Psychol Meas 2019; 79:129-150. PMID: 30636785; PMCID: PMC6318746; DOI: 10.1177/0013164418764801.
Abstract
Two recent publications in Educational and Psychological Measurement advocated that researchers consider using the a priori procedure. According to this procedure, the researcher specifies, prior to data collection, how close she wishes her sample mean(s) to be to the corresponding population mean(s), and the desired probability of being that close. A priori equations provide the necessary sample size to meet specifications under the normal distribution. Or, if sample size is taken as given, a priori equations provide the precision with which estimates of distribution means can be made. However, there is currently no way to perform these calculations under the more general family of skew-normal distributions. The present research provides the necessary equations. In addition, we show how skewness can increase the precision with which locations of distributions can be estimated. This conclusion, based on the perspective of improving sampling precision, contrasts with a typical argument in favor of performing transformations to normalize skewed data for the sake of performing more efficient significance tests.
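As a reference point, the normal-distribution case of the a priori equation is simple to state (the skew-normal generalization is the paper's own contribution and is not reproduced here). Writing f for the desired closeness in population standard deviations and c for the desired probability:

```latex
P\!\left(|\bar{X}-\mu| \le f\sigma\right) = c
\quad\Longleftrightarrow\quad
f\sqrt{n} = z_{(1+c)/2}
\quad\Longrightarrow\quad
n = \left(\frac{z_{(1+c)/2}}{f}\right)^{2}.
```

For example, with c = 0.95 and f = 0.2, n = (1.96/0.2)^2 ≈ 96.04, so 97 participants suffice.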
41
Oh M, Kim YH. Statistical Approach to Spectrogram Analysis for Radio-Frequency Interference Detection and Mitigation in an L-Band Microwave Radiometer. Sensors (Basel) 2019; 19:306. PMID: 30646536; PMCID: PMC6359279; DOI: 10.3390/s19020306.
Abstract
For the elimination of radio-frequency interference (RFI) in a passive microwave radiometer, the threshold level is generally calculated from the mean value and standard deviation. A serious problem, however, is that the presence of RFI raises the threshold level itself, producing an error in the retrieved brightness temperature. In this paper, we propose a method to detect and mitigate RFI contamination using a threshold level derived from statistical criteria based on a spectrogram technique. Mean and skewness spectrograms are created from a brightness temperature spectrogram by shifting a 2-D window, so as to discriminate the symmetric distribution characteristic of a natural thermal emission signal. From the bins of the mean spectrogram that remain after eliminating the RFI-flagged bins of the skewness spectrogram (for data captured at 0.1-s intervals), symmetric candidate distributions are constructed by mirroring the left side of the distribution about a varying center position. Simultaneously, the kurtosis of each candidate symmetric distribution is computed, and the retrieved brightness temperature is taken as the one whose kurtosis is closest to three. The performance is evaluated using experimental data; the maximum error and root-mean-square error (RMSE) in the retrieved brightness temperature are observed to be less than approximately 3 K and 1.7 K, respectively, for a window of 100 × 100 time-frequency bins, across the RFI levels and cases considered.
42
Zhao L, Sheppard LW, Reid PC, Walter JA, Reuman DC. Proximate determinants of Taylor's law slopes. J Anim Ecol 2018; 88:484-494. PMID: 30474262; DOI: 10.1111/1365-2656.12931.
Abstract
Taylor's law (TL), a commonly observed and applied pattern in ecology, describes variances of population densities as related to mean densities via log(variance) = log(a) + b*log(mean). Variations among datasets in the slope, b, have been associated with multiple factors of central importance in ecology, including strength of competitive interactions and demographic rates. But these associations are not transparent, and the relative importance of these and other factors for TL slope variation is poorly studied. TL is thus a ubiquitously used indicator in ecology, the understanding of which is still opaque. The goal of this study was to provide tools to help fill this gap in understanding by providing proximate determinants of TL slopes, statistical quantities that are correlated to TL slopes but are simpler than the slope itself and are more readily linked to ecological factors. Using numeric simulations and 82 multi-decadal population datasets, we here propose, test and apply two proximate statistical determinants of TL slopes which we argue can become key tools for understanding the nature and ecological causes of TL slope variation. We find that measures based on population skewness, coefficient of variation and synchrony are effective proximate determinants. We demonstrate their potential for application by using them to help explain covariation in slopes of spatial and temporal TL (two common types of TL). This study provides tools for understanding TL, and demonstrates their usefulness.
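Estimating a Taylor's law slope is itself a one-line regression; this sketch (with fabricated lognormal population data, not the study's 82 datasets) shows the computation that the proposed proximate determinants are meant to explain.

```python
import numpy as np

def taylors_law_slope(densities):
    """Fit log10(variance) = log10(a) + b*log10(mean) across populations.

    densities : 2-D array, rows = populations, columns = repeated censuses.
    Returns (b, log10_a) from ordinary least squares, the usual way a
    temporal Taylor's law slope is estimated.
    """
    m = densities.mean(axis=1)
    v = densities.var(axis=1, ddof=1)
    b, log_a = np.polyfit(np.log10(m), np.log10(v), 1)
    return b, log_a

# Toy example: 82 populations with lognormal density fluctuations.
rng = np.random.default_rng(3)
dens = rng.lognormal(mean=rng.normal(1, 1, (82, 1)), sigma=0.4, size=(82, 30))
print(taylors_law_slope(dens))   # slope near 2 for lognormal-type fluctuations
```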
43
Olvera Astivia OL, Zumbo BD. On the solution multiplicity of the Fleishman method and its impact in simulation studies. Br J Math Stat Psychol 2018; 71:437-458. PMID: 29323414; DOI: 10.1111/bmsp.12126.
Abstract
The Fleishman third-order polynomial algorithm is one of the most-often used non-normal data-generating methods in Monte Carlo simulations. At the crux of the Fleishman method is the solution of a non-linear system of equations needed to obtain the constants to transform data from normality to non-normality. A rarely acknowledged fact in the literature is that the solution to this system is not unique, and it is currently unknown what influence the different types of solutions have on the computer-generated data. To address this issue, analytical and empirical investigations were conducted, aimed at documenting the impact that each solution type has on the design of computer simulations. In the first study, it was found that certain types of solutions generate data with different multivariate properties and wider coverage of the theoretical range spanned by population correlations. In the second study, it was found that previously published recommendations from Monte Carlo simulations could change if different types of solutions were used to generate the data. A mathematical description of the multiple solutions to the Fleishman polynomials is provided, as well as recommendations for users of this method.
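The non-linear system in question is the standard Fleishman one; below is a minimal sketch of solving it with scipy. The starting point determines which of the multiple roots fsolve lands on, which is precisely the multiplicity at issue in the paper.

```python
import numpy as np
from scipy.optimize import fsolve

def fleishman_constants(skewness, ekurt, start=(1.0, 0.0, 0.0)):
    """Solve the Fleishman system for Y = a + b*Z + c*Z^2 + d*Z^3 with a = -c,
    matching unit variance and target skewness / excess kurtosis."""
    def eqs(p):
        b, c, d = p
        return (
            b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1,                         # variance
            2*c*(b**2 + 24*b*d + 105*d**2 + 2) - skewness,               # skewness
            24*(b*d + c**2*(1 + b**2 + 28*b*d)
                + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - ekurt,     # ex. kurtosis
        )
    b, c, d = fsolve(eqs, start)
    return -c, b, c, d           # (a, b, c, d)

a, b, c, d = fleishman_constants(skewness=1.0, ekurt=1.5)
z = np.random.default_rng(4).standard_normal(100_000)
y = a + b*z + c*z**2 + d*z**3    # non-normal sample with the target moments
```

Note that if (b, c, d) solves the system, so does (-b, c, -d): restarting fsolve from (-1.0, 0.0, 0.0) can therefore return a different, equally valid root.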
44
Kobayashi H, Song C, Ikei H, Park BJ, Lee J, Kagawa T, Miyazaki Y. Forest Walking Affects Autonomic Nervous Activity: A Population-Based Study. Front Public Health 2018; 6:278. PMID: 30327762; PMCID: PMC6174240; DOI: 10.3389/fpubh.2018.00278.
Abstract
The present study aimed to evaluate the effect of walking in forest environments on autonomic nervous activity, with special reference to its distribution characteristics. Heart rate variability (HRV) of 485 male participants walking for ~15 min in a forest and in an urban area was analyzed. The experimental sites were 57 forests and 57 urban areas across Japan. Parasympathetic and sympathetic indicators of HRV [lnHF and ln(LF/HF), respectively] were calculated from the ~15-min heart rate recordings. Skewness and kurtosis of the distributions of lnHF and ln(LF/HF) were almost the same between the two environments, although the means and medians of the indicators differed significantly. The percentages of positive responders [showing an increase in lnHF or a decrease in ln(LF/HF) in the forest environment] were 65.2% and 67.0%, respectively. The percentage for lnHF was significantly smaller than in our previous results on HRV during the viewing of urban or forest landscapes, whereas the percentage for ln(LF/HF) was not significantly different. The results suggest that walking in a forest environment has a different effect on autonomic nervous activity than viewing a forest landscape.
45
Chen Z, Li J, Li Z, Peng Y, Gao X. [Automatic detection and classification of atrial fibrillation using RR intervals and multi-eigenvalue]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi 2018; 35:550-556. PMID: 30124017; DOI: 10.7507/1001-5515.201710050.
Abstract
Atrial fibrillation (AF) is a common arrhythmia. Detection of atrial fibrillation from the electrocardiogram (ECG) is of great significance for clinical diagnosis. Due to the non-linearity and complexity of ECG signals, manual diagnosis is time-consuming and prone to error. To overcome these problems, a feature extraction method based on RR intervals is proposed in this paper. The dispersion of the RR intervals is described by the robust coefficient of variation (RCV), the shape of their distribution by the skewness parameter (SKP), and their complexity by the Lempel-Ziv complexity (LZC). Finally, the RCV, SKP, and LZC feature vectors are input to a support vector machine (SVM) classifier to achieve automatic classification and detection of atrial fibrillation. To verify the validity and practicability of the proposed method, it was evaluated on the MIT-BIH atrial fibrillation database, yielding a sensitivity of 95.81%, a specificity of 96.48%, and an accuracy of 96.09%; a specificity of 95.16% was achieved on the MIT-BIH normal sinus rhythm database. The experimental results show that the proposed method is an effective classification method for atrial fibrillation.
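A sketch of the three RR-interval features; the exact formula for RCV and the binarization used before the Lempel-Ziv parsing are plausible guesses, not taken from the paper.

```python
import numpy as np
from scipy.stats import skew

def rcv(rr):
    """Robust coefficient of variation: MAD of the RR intervals over their
    median (one common definition; the paper's exact formula may differ)."""
    med = np.median(rr)
    return np.median(np.abs(rr - med)) / med

def lzc(rr):
    """Lempel-Ziv complexity (simple phrase-counting variant) of the RR
    series binarized at its median."""
    s = "".join("1" if x > np.median(rr) else "0" for x in rr)
    words, i, k = set(), 0, 1
    while i + k <= len(s):
        if s[i:i + k] in words:
            k += 1                        # extend until a new phrase appears
        else:
            words.add(s[i:i + k])
            i, k = i + k, 1
    return len(words)

def af_features(rr):
    return np.array([rcv(rr), skew(rr), lzc(rr)])  # input to sklearn.svm.SVC
```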
46
Abstract
Health care expenditures and use are challenging to model because these dependent variables typically have distributions that are skewed with a large mass at zero. In this article, we describe estimation and interpretation of the effects of a natural experiment using two classes of nonlinear statistical models: one for health care expenditures and the other for counts of health care use. We extend prior analyses to test the effect of the ACA's young adult expansion on three different outcomes: total health care expenditures, office-based visits, and emergency department visits. Modeling the outcomes with a two-part or hurdle model, instead of a single-equation model, reveals that the ACA policy increased the number of office-based visits but decreased emergency department visits and overall spending.
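A minimal two-part model sketch in statsmodels on fabricated spending data; the variable names and age range echo the young-adult setting but are invented, and this is not the article's specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: total spending plus covariates, with many exact zeros.
rng = np.random.default_rng(5)
n = 5000
df = pd.DataFrame({"age": rng.uniform(19, 26, n), "policy": rng.integers(0, 2, n)})
any_use = rng.random(n) < 0.6
df["spend"] = np.where(any_use, rng.lognormal(7, 1, n), 0.0)

X = sm.add_constant(df[["age", "policy"]])

# Part 1: probability of any spending (logit on the full sample).
part1 = sm.Logit((df["spend"] > 0).astype(int), X).fit(disp=0)

# Part 2: level of spending among users (gamma GLM with log link).
users = df["spend"] > 0
part2 = sm.GLM(df.loc[users, "spend"], X[users],
               family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Overall expected spending combines the two parts.
expected = part1.predict(X) * part2.predict(X)
```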
47
Castro LM, Wang WL, Lachos VH, Inácio de Carvalho V, Bayes CL. Bayesian semiparametric modeling for HIV longitudinal data with censoring and skewness. Stat Methods Med Res 2018; 28:1457-1476. PMID: 29551086; DOI: 10.1177/0962280218760360.
Abstract
In biomedical studies, the analysis of longitudinal data based on Gaussian assumptions is common practice. Nevertheless, more often than not, the observed responses are naturally skewed, rendering the use of symmetric mixed effects models inadequate. In addition, it is also common in clinical assays that the patient's responses are subject to some upper and/or lower quantification limit, depending on the diagnostic assays used for their detection. Furthermore, responses may also often present a nonlinear relation with some covariates, such as time. To address the aforementioned three issues, we consider a Bayesian semiparametric longitudinal censored model based on a combination of splines, wavelets, and the skew-normal distribution. Specifically, we focus on the use of splines to approximate the general mean, wavelets for modeling the individual subject trajectories, and on the skew-normal distribution for modeling the random effects. The newly developed method is illustrated through simulated data and real data concerning AIDS/HIV viral loads.
48
Bishara AJ, Li J, Nash T. Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis. Br J Math Stat Psychol 2018; 71:167-185. PMID: 28872186; DOI: 10.1111/bmsp.12113.
Abstract
When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' in the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared with no adjustment of the Fisher z' interval or with adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code.
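For contrast with the adjusted intervals studied here, this is the default Fisher z' interval (which assumes bivariate normality) used as the baseline; a short sketch:

```python
import numpy as np
from scipy import stats

def fisher_z_ci(r, n, alpha=0.05):
    """Default (bivariate-normal) Fisher z' confidence interval for the
    Pearson correlation, i.e. the unadjusted baseline."""
    z = np.arctanh(r)                    # Fisher's z transform
    se = 1.0 / np.sqrt(n - 3)            # asymptotic standard error
    zcrit = stats.norm.ppf(1 - alpha / 2)
    return np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)

print(fisher_z_ci(r=0.45, n=60))         # roughly (0.22, 0.63)
```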
49
Sorjonen K, Wikström Alex J, Melin B. Necessity as a Function of Skewness. Front Psychol 2017; 8:2192. PMID: 29312058; PMCID: PMC5742232; DOI: 10.3389/fpsyg.2017.02192.
Abstract
With necessary condition analysis (NCA), a necessity effect is estimated by calculating the amount of empty space in the upper-left corner of a plot of a predictor X against an outcome Y. In the present simulation study, calculated necessity effects were found to have a negative association with the skewness of the predictor and a positive association with the skewness of the outcome. The standard error of the necessity effect was also found to be influenced by the skewness of the predictor and the skewness of the outcome, as well as by sample size, and a way to calculate a confidence interval for the necessity effect is presented. At least some of the findings obtained with NCA are well within the range of what can be expected from the skewness of the predictor and the outcome alone.
50
Wang J, Talluri R, Shete S. Selection of X-chromosome Inactivation Model. Cancer Inform 2017; 16:1176935117747272. PMID: 29308008; PMCID: PMC5751921; DOI: 10.1177/1176935117747272.
Abstract
To address the complexity of the X-chromosome inactivation (XCI) process, we previously developed a unified approach to the association test between X-chromosomal single-nucleotide polymorphisms (SNPs) and a disease of interest, accounting for the different biological possibilities of XCI: random, skewed, and escaping XCI. In the original study, we focused on the SNP-disease association test but did not provide knowledge of the underlying XCI models. One can use the highest likelihood ratio (LLR) to select XCI models (the max-LLR approach). However, that approach does not formally compare the LLRs corresponding to different XCI models to assess whether the models are distinguishable. We therefore propose an LLR comparison procedure (the comp-LLR approach), inspired by the Cox test, to formally compare the LLRs of different XCI models and select the most likely XCI model describing the underlying XCI process. Simulation studies show that, compared with the max-LLR approach, the comp-LLR approach has a higher probability of identifying the correct underlying XCI model when the underlying XCI process is random XCI, escaping XCI, or XCI skewed toward the deleterious allele. We applied both approaches to a head and neck cancer genetic study to investigate the underlying XCI processes for the X-chromosomal genetic variants.