1. Testing homogeneity: the trouble with sparse functional data. J R Stat Soc Series B Stat Methodol 2023; 85:705-731. PMID: 37521166; PMCID: PMC10376451; DOI: 10.1093/jrsssb/qkad021.
Abstract
Testing the homogeneity between two samples of functional data is an important task. While this is feasible for intensively measured functional data, we explain why it is challenging for sparsely measured functional data and show what can be done for such data. In particular, we show that testing the marginal homogeneity based on point-wise distributions is feasible under some mild constraints, and we propose a new two-sample statistic that works well with both intensively and sparsely measured functional data. The proposed test statistic is formulated upon energy distance, and the convergence rate of the test statistic to its population version is derived along with the consistency of the associated permutation test. The aptness of our method is demonstrated on both synthetic and real data sets.
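A minimal pure-Python sketch of the energy-distance permutation test the abstract describes, for univariate point-wise samples; the function names, sample sizes, and seeds are illustrative assumptions, not taken from the paper:

```python
import random

def energy_distance(x, y):
    """V-statistic estimate of the energy distance between 1-D samples:
    2*E|X-Y| - E|X-X'| - E|Y-Y'|."""
    def mean_abs(a, b):
        return sum(abs(u - v) for u in a for v in b) / (len(a) * len(b))
    return 2 * mean_abs(x, y) - mean_abs(x, x) - mean_abs(y, y)

def energy_permutation_test(x, y, n_perm=200, seed=0):
    """Permutation p-value for H0: both samples come from one distribution."""
    rng = random.Random(seed)
    observed = energy_distance(x, y)
    pooled = list(x) + list(y)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if energy_distance(pooled[:len(x)], pooled[len(x):]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

rng = random.Random(1)
p_same = energy_permutation_test([rng.gauss(0, 1) for _ in range(30)],
                                 [rng.gauss(0, 1) for _ in range(30)])
p_shift = energy_permutation_test([rng.gauss(0, 1) for _ in range(30)],
                                  [rng.gauss(2, 1) for _ in range(30)])
```

For equal distributions the p-value is roughly uniform; for a clear mean shift it concentrates near 1/(n_perm + 1).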
2. Ergodic Measure and Potential Control of Anomalous Diffusion. Entropy (Basel, Switzerland) 2023; 25:1012. PMID: 37509959; PMCID: PMC10377995; DOI: 10.3390/e25071012.
Abstract
In statistical mechanics, the ergodic hypothesis (i.e., the long-time average is the same as the ensemble average) accompanying anomalous diffusion has become a continuing topic of research, being closely related to irreversibility and increasing entropy. While measurement time is finite for a given process, the time average of an observable quantity might be a random variable, whose distribution width narrows with time, and one wonders how long it takes for the convergence rate to become a constant. This is also the premise of ergodic establishment, because the ensemble average is always equal to a constant. We focus on the time-dependent fluctuation width for the time average of both the velocity and kinetic energy of a force-free particle described by the generalized Langevin equation, where the stationary velocity autocorrelation function is considered. Subsequently, the shortest time scale can be estimated for a system transferring from a stationary state to an effective ergodic state. Moreover, a logarithmic spatial potential is used to modulate the processes associated with free ballistic diffusion and the control of diffusion, as well as the minimal realization of the whole power-law regime. The results presented suggest that non-ergodicity mimics the sparseness of the medium and reveals the unique role of logarithmic potential in modulating diffusion behavior.
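The shrinking "fluctuation width" of a time average can be illustrated numerically. The sketch below uses a plain Ornstein-Uhlenbeck process as a Markovian stand-in for the paper's generalized Langevin dynamics (an assumption for simplicity), and estimates the standard deviation of the velocity time average across trajectories for two averaging times:

```python
import math
import random

def ou_time_average(n_steps, rng, dt=0.01, gamma=1.0, diff=1.0):
    """Time average of velocity along one Ornstein-Uhlenbeck trajectory
    (Euler-Maruyama discretization of dv = -gamma*v dt + sqrt(2D) dW)."""
    v, total = 0.0, 0.0
    for _ in range(n_steps):
        v += -gamma * v * dt + math.sqrt(2.0 * diff * dt) * rng.gauss(0, 1)
        total += v * dt
    return total / (n_steps * dt)

def fluctuation_width(n_steps, n_traj=300, seed=0):
    """Standard deviation of the time average over an ensemble of trajectories."""
    rng = random.Random(seed)
    avgs = [ou_time_average(n_steps, rng) for _ in range(n_traj)]
    mu = sum(avgs) / n_traj
    return math.sqrt(sum((a - mu) ** 2 for a in avgs) / n_traj)

w_short = fluctuation_width(200)    # averaging time T = 2
w_long = fluctuation_width(3200)    # averaging time T = 32
```

For this stationary process the variance of the time average decays like 2D/(gamma^2 T), so the width at T = 32 should be roughly a quarter of the width at T = 2.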
3. Hyperparameter Optimization of Bayesian Neural Network Using Bayesian Optimization and Intelligent Feature Engineering for Load Forecasting. Sensors 2022; 22:4446. PMID: 35746227; PMCID: PMC9231108; DOI: 10.3390/s22124446.
Abstract
This paper proposes a new hybrid framework for short-term load forecasting (STLF) by combining the Feature Engineering (FE) and Bayesian Optimization (BO) algorithms with a Bayesian Neural Network (BNN). The FE module comprises feature selection and extraction phases. Firstly, by merging the Random Forest (RaF) and Relief-F (ReF) algorithms, we developed a hybrid feature selector based on grey correlation analysis (GCA) to eliminate feature redundancy. Secondly, a radial basis kernel function and principal component analysis (KPCA) are integrated into the feature-extraction module for dimensionality reduction. Thirdly, the Bayesian Optimization (BO) algorithm is used to fine-tune the control parameters of a BNN and provides more accurate results by avoiding trapping in local optima. The proposed FE-BNN-BO framework is designed to ensure stability, convergence, and accuracy. The proposed FE-BNN-BO model is tested on hourly load data obtained from the PJM, USA, electricity market. In addition, the simulation results are also compared with other benchmark models such as Bi-Level, long short-term memory (LSTM), an accurate and fast convergence-based ANN (ANN-AFC), and a mutual-information-based ANN (ANN-MI). The results show that the proposed model significantly improved the accuracy with a fast convergence rate and reduced the mean absolute percent error (MAPE).
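The grey-correlation step can be sketched with the textbook grey relational grade (distinguishing coefficient rho = 0.5); the data and function names below are illustrative assumptions, not the paper's pipeline:

```python
def grey_relational_grades(target, features, rho=0.5):
    """Grey relational grade of each candidate feature series against the
    target load series; a higher grade indicates a stronger association,
    which can be used to rank and prune redundant features."""
    def norm(seq):
        lo, hi = min(seq), max(seq)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in seq]

    ref = norm(target)
    grades = []
    for feat in features:
        deltas = [abs(r - c) for r, c in zip(ref, norm(feat))]
        d_min, d_max = min(deltas), max(deltas)
        if d_max == 0:  # feature matches the target exactly after scaling
            grades.append(1.0)
            continue
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
        grades.append(sum(coeffs) / len(coeffs))
    return grades

load = [100, 120, 150, 170, 200]
grades = grey_relational_grades(load, [[10, 12, 15, 17, 20],   # proportional to load
                                       [5, 19, 3, 18, 7]])     # unrelated series
```

A feature proportional to the target earns the maximal grade of 1.0, while an unrelated series scores lower and would be pruned first.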
4. Promotion time cure rate model with a neural network estimated nonparametric component. Stat Med 2021; 40:3516-3532. PMID: 33928665; DOI: 10.1002/sim.8980.
Abstract
Promotion time cure rate models (PCM) are often used to model survival data with a cure fraction. Medical images, or biomarkers derived from medical images, can be the key predictors in survival models. However, incorporating images in the PCM is challenging using traditional nonparametric methods such as splines. We propose to use a neural network to model the nonparametric or unstructured predictors' effect in the PCM context. An expectation-maximization algorithm with a neural network for the M-step is used for parameter estimation. Asymptotic properties of the proposed estimates are derived. Simulation studies show good performance in terms of both prediction and estimation. We finally apply our methods to analyze brain images from the Open Access Series of Imaging Studies (OASIS) data.
5.
Abstract
In this paper, we investigate mean change-point models based on associated sequences. Under some weak conditions, we obtain a limit distribution of the CUSUM statistic, which can be used to judge whether the mean change amount δ_n satisfies n^(1/2) δ_n = o(1). We also study the consistency of sample covariances and change-point location statistics. Based on normal and lognormal data, simulations of empirical sizes, empirical powers, and convergence are presented to test our results. As an important application, we use CUSUM statistics to perform mean change-point analysis for a financial series.
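The CUSUM scan for a single mean change can be sketched as below: the location estimate is the index maximizing |S_k - (k/n) S_n|. This is a generic illustration (in practice the statistic is also standardized by an estimate of the long-run variance times sqrt(n)):

```python
def cusum_changepoint(x):
    """CUSUM scan for a single mean change: returns the index maximizing
    |S_k - (k/n) S_n| (the estimated change location) and that maximum."""
    n = len(x)
    total = sum(x)
    s = best_val = 0.0
    best_k = 0
    for k in range(1, n):
        s += x[k - 1]
        val = abs(s - k * total / n)
        if val > best_val:
            best_val, best_k = val, k
    return best_k, best_val

data = [0.0] * 50 + [3.0] * 50     # mean jumps from 0 to 3 at index 50
k_hat, stat = cusum_changepoint(data)
```

On this noiseless example the scan recovers the change location exactly; for the noiseless jump above the maximum equals n * delta / 4 = 75.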
6. Predicting the Severity of Parkinson's Disease Dementia by Assessing the Neuropsychiatric Symptoms with an SVM Regression Model. International Journal of Environmental Research and Public Health 2021; 18:2551. PMID: 33806474; PMCID: PMC7967659; DOI: 10.3390/ijerph18052551.
Abstract
In this study, we measured the convergence rate, in terms of the mean-squared error (MSE), of standardized neuropsychological tests used to determine the severity of Parkinson's disease dementia (PDD) based on support vector machine (SVM) regression (SVR), and we present baseline data for developing a model to predict the severity of PDD. We analyzed 328 individuals with PDD who were 60 years or older. To identify the SVR with the best prediction power, we compared the classification performance (convergence rate) of eight SVR models (Eps-SVR and Nu-SVR with four kernel functions: a radial basis function (RBF), linear algorithm, polynomial algorithm, and sigmoid). Among the eight models, the MSE of Nu-SVR-RBF was the lowest (0.078), with the highest convergence rate, whereas the MSE of Eps-SVR-sigmoid was 0.110, with the lowest convergence rate. The results of this study imply that this approach could be useful for measuring the severity of dementia by comprehensively examining axial atypical features, the Korean instrumental activities of daily living (K-IADL), changes in rapid eye movement sleep behavior disorder (RBD), etc., for optimal intervention and care of the elderly living alone or patients with PDD residing in medically vulnerable areas.
7. Accelerated strategy for the MLEM algorithm. Journal of X-Ray Science and Technology 2021; 29:135-149. PMID: 33252106; DOI: 10.3233/xst-200749.
Abstract
BACKGROUND A statistical method called maximum likelihood expectation maximization (MLEM) is quite attractive, especially in PET/SPECT. However, the convergence rate of the iterative scheme of MLEM is quite slow. OBJECTIVE This study aims to develop and test a new method to speed up the convergence rate of the MLEM algorithm. METHODS We introduce a relaxation parameter in the conventional MLEM iterative formula and propose the relaxation strategy on the condition that the spectral radius of the iterative matrix derived from the iterative scheme with the acceleration parameter reaches a minimum value. RESULTS Experiments with the Shepp-Logan phantom and an annual tree image demonstrate that the new computational strategy effectively accelerates computation time while maintaining reasonable image quality. CONCLUSIONS The proposed computational method involving the relaxation strategy has a faster convergence speed than the original method.
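The paper selects its relaxation parameter by minimizing a spectral radius; the sketch below only illustrates the generic idea of relaxed MLEM by raising the standard multiplicative correction to a power (a common, simpler form of over-relaxation). The tiny system matrix, names, and parameters are assumptions for illustration:

```python
def mlem(A, y, n_iter=100, relax=1.0):
    """MLEM for y ~ A x with nonnegative x; `relax` > 1 raises the
    multiplicative correction factor to a power, a simple over-relaxation."""
    m, n = len(A), len(A[0])
    x = [1.0] * n
    col_sum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        x = [x[j] * (sum(A[i][j] * y[i] / Ax[i] for i in range(m)) / col_sum[j]) ** relax
             for j in range(n)]
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # tiny "system matrix"
y = [2.0, 3.0, 5.0]                         # consistent data for x = (2, 3)
x_plain = mlem(A, y, relax=1.0)
x_fast = mlem(A, y, relax=1.5)
```

Both runs recover the consistent solution (2, 3); the relaxed variant reaches it in fewer iterations on this toy problem.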
8. A second-order dynamical approach with variable damping to nonconvex smooth minimization. Applicable Analysis 2018; 99:361-378. PMID: 32256253; PMCID: PMC7077366; DOI: 10.1080/00036811.2018.1495330.
Abstract
We investigate a second-order dynamical system with variable damping in connection with the minimization of a nonconvex differentiable function. The dynamical system is formulated in the spirit of the differential equation which models Nesterov's accelerated convex gradient method. We show that the generated trajectory converges to a critical point, if a regularization of the objective function satisfies the Kurdyka-Łojasiewicz property. We also provide convergence rates for the trajectory, formulated in terms of the Łojasiewicz exponent.
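The Nesterov-type ODE in question is x''(t) + (alpha/t) x'(t) + grad f(x(t)) = 0. A simple semi-implicit Euler discretization (damping treated implicitly for stability near t = 0) gives a feel for the trajectory; the quadratic test function and parameters are illustrative assumptions:

```python
def avd_trajectory(grad, x0, alpha=3.0, dt=0.01, t_end=20.0):
    """Semi-implicit Euler discretization of the damped ODE
    x''(t) + (alpha/t) x'(t) + grad f(x(t)) = 0; the vanishing
    damping alpha/t models Nesterov's accelerated gradient method."""
    x, v, t = x0, 0.0, dt
    while t < t_end:
        v = (v - dt * grad(x)) / (1.0 + dt * alpha / t)  # implicit damping step
        x += dt * v
        t += dt
    return x

# f(x) = (x - 1)^2, so grad f(x) = 2 (x - 1) and the minimizer is x* = 1.
x_final = avd_trajectory(lambda x: 2.0 * (x - 1.0), 5.0)
```

The standard Lyapunov analysis of this ODE gives f(x(t)) - f* = O(1/t^2) for alpha >= 3, so by t = 20 the trajectory should sit close to the minimizer.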
9. Technical Note: Emission expectation maximization look-alike algorithms for x-ray CT and other applications. Med Phys 2018; 45:10.1002/mp.13077. PMID: 29963702; PMCID: PMC6314922; DOI: 10.1002/mp.13077.
Abstract
PURPOSE In emission tomography, the expectation maximization (EM) algorithm is easy to use, with only one parameter to adjust: the number of iterations. On the other hand, the EM algorithms for transmission tomography are not so user-friendly and have many problems. This paper develops a new transmission algorithm similar to the emission EM algorithm. METHODS This paper develops a family of emission-EM-look-alike algorithms by expressing the emission EM algorithm in the additive form and changing the weighting factor. One member of the family can be applied to transmission tomography such as x-ray computed tomography (CT). RESULTS Computer simulations are performed and compared with a similar algorithm from a different group using the transmission CT noise model. Our algorithm has the same convergence rate as theirs, and our algorithm provides a better contrast-to-noise ratio for lesion detection. CONCLUSIONS For any noise variance function, an emission-EM-look-alike algorithm can be derived. This algorithm preserves many properties of the emission EM algorithm, such as multiplicative update, non-negativity, a faster convergence rate for bright objects, and ease of implementation.
10. Investigation of the preconditioner-parameter in the preconditioned Chambolle-Pock algorithm applied to optimization-based image reconstruction. Journal of X-Ray Science and Technology 2018; 26:435-448. PMID: 29562580; DOI: 10.3233/xst-17337.
Abstract
The optimization-based image reconstruction methods have been thoroughly investigated in the field of medical imaging. The Chambolle-Pock (CP) algorithm may be employed to solve these convex optimization image reconstruction programs. The preconditioned CP (PCP) algorithm has been shown to have a much higher convergence rate than the ordinary CP (OCP) algorithm. This algorithm utilizes a preconditioner-parameter to tune the implementation of the algorithm to the specific application; the parameter ranges from 0 to 2 but is often set to 1. In this work, we investigated the impact of the preconditioner-parameter on the convergence rate of the PCP algorithm when it is applied to TV-constrained, data-divergence minimization (TVDM) optimization-based image reconstruction. We performed the investigations in the context of 2D computed tomography (CT) and 3D electron paramagnetic resonance imaging (EPRI). For 2D CT, we used the Shepp-Logan and two FORBILD phantoms. For 3D EPRI, we used a simulated 6-spheres phantom and a physical phantom. Study results showed that the optimal preconditioner-parameter depends on the specific imaging conditions. Simply setting the parameter equal to 1 cannot guarantee a fast convergence rate. Thus, this study suggests that one should adaptively tune the preconditioner-parameter to obtain the optimal convergence rate of the PCP algorithm.
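For context, the diagonal preconditioners commonly used with PCP (in the style of Pock and Chambolle's 2011 construction) are tau_j = 1/sum_i |K_ij|^(2-alpha) and sigma_i = 1/sum_j |K_ij|^alpha, guaranteed convergent for alpha in [0, 2]. A small sketch, with an illustrative 2x2 system matrix:

```python
def pcp_preconditioners(K, alpha=1.0):
    """Diagonal step sizes for the preconditioned Chambolle-Pock algorithm:
    tau_j = 1 / sum_i |K_ij|^(2 - alpha), sigma_i = 1 / sum_j |K_ij|^alpha.
    Zero entries are skipped so Python's 0**0 == 1 does not pollute the sums."""
    m, n = len(K), len(K[0])
    tau = [1.0 / sum(abs(K[i][j]) ** (2.0 - alpha) for i in range(m) if K[i][j] != 0)
           for j in range(n)]
    sigma = [1.0 / sum(abs(K[i][j]) ** alpha for j in range(n) if K[i][j] != 0)
             for i in range(m)]
    return tau, sigma

K = [[1.0, 2.0], [3.0, 4.0]]
tau, sigma = pcp_preconditioners(K, alpha=1.0)   # tau = [1/4, 1/6], sigma = [1/3, 1/7]
```

Varying alpha across [0, 2] trades primal step sizes against dual ones, which is exactly the tuning knob the study investigates.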
11. Convergence rates in the law of large numbers for long-range dependent linear processes. Journal of Inequalities and Applications 2017; 2017:241. PMID: 29046604; PMCID: PMC5624989; DOI: 10.1186/s13660-017-1517-6.
Abstract
Baum and Katz (Trans. Am. Math. Soc. 120:108-123, 1965) obtained convergence rates in the Marcinkiewicz-Zygmund law of large numbers. Their result has already been extended to the short-range dependent linear processes by many authors. In this paper, we extend the result of Baum and Katz to the long-range dependent linear processes. As a corollary, we obtain convergence rates in the Marcinkiewicz-Zygmund law of large numbers for short-range dependent linear processes.
12. Proximal-gradient algorithms for fractional programming. Optimization 2017; 66:1383-1396. PMID: 33116346; PMCID: PMC5632963; DOI: 10.1080/02331934.2017.1294592.
Abstract
In this paper, we propose two proximal-gradient algorithms for fractional programming problems in real Hilbert spaces, where the numerator is a proper, convex and lower semicontinuous function and the denominator is a smooth function, either concave or convex. In the iterative schemes, we perform a proximal step with respect to the nonsmooth numerator and a gradient step with respect to the smooth denominator. The algorithm in case of a concave denominator has the particularity that it generates sequences which approach both the (global) optimal solutions set and the optimal objective value of the underlying fractional programming problem. In case of a convex denominator the numerical scheme approaches the set of critical points of the objective function, provided the latter satisfies the Kurdyka-Łojasiewicz property.
13. A modified subgradient extragradient method for solving monotone variational inequalities. Journal of Inequalities and Applications 2017; 2017:89. PMID: 28515617; PMCID: PMC5408075; DOI: 10.1186/s13660-017-1366-3.
Abstract
In the setting of Hilbert space, a modified subgradient extragradient method is proposed for solving Lipschitz-continuous and monotone variational inequalities defined on a level set of a convex function. Our iterative process is relaxed and self-adaptive: each iteration involves only two metric projections onto half-spaces containing the domain, and the step size can be selected in some adaptive ways. A weak convergence theorem for our algorithm is proved. We also prove that our method has [Formula: see text] convergence rate.
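The key trick in the subgradient extragradient family (Censor, Gibali, and Reich) is that the second projection lands on a half-space containing the feasible set rather than on the set itself. A 2-D sketch with the unit ball as feasible set and the monotone operator F(x) = x - a (illustrative choices, not the paper's setting):

```python
import math

def proj_ball(p, r=1.0):
    """Projection onto the closed ball of radius r (the feasible set C)."""
    nrm = math.hypot(p[0], p[1])
    return p if nrm <= r else (p[0] * r / nrm, p[1] * r / nrm)

def proj_halfspace(p, normal, anchor):
    """Projection onto the half-space {w : <normal, w - anchor> <= 0}."""
    d = normal[0] * (p[0] - anchor[0]) + normal[1] * (p[1] - anchor[1])
    if d <= 0:
        return p
    n2 = normal[0] ** 2 + normal[1] ** 2
    return (p[0] - d * normal[0] / n2, p[1] - d * normal[1] / n2)

def subgradient_extragradient(F, x, tau=0.5, n_iter=100):
    """Each iteration: one projection onto C, then one cheap projection
    onto a half-space containing C (instead of a second projection onto C)."""
    for _ in range(n_iter):
        fx = F(x)
        z = (x[0] - tau * fx[0], x[1] - tau * fx[1])
        y = proj_ball(z)
        fy = F(y)
        w = (x[0] - tau * fy[0], x[1] - tau * fy[1])
        x = proj_halfspace(w, (z[0] - y[0], z[1] - y[1]), y)
    return x

# F(x) = x - (2, 0): the VI solution on the unit ball is P_C((2, 0)) = (1, 0).
sol = subgradient_extragradient(lambda p: (p[0] - 2.0, p[1]), (0.0, 0.0))
```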
14. Cluster-Based Maximum Consensus Time Synchronization for Industrial Wireless Sensor Networks. Sensors 2017; 17:141. PMID: 28098750; PMCID: PMC5298714; DOI: 10.3390/s17010141.
Abstract
Time synchronization is one of the key technologies in Industrial Wireless Sensor Networks (IWSNs), and clustering is widely used in WSNs for data fusion and information collection to reduce redundant data and communication overhead. Considering IWSNs' demand for low energy consumption, fast convergence, and robustness, this paper presents a novel Cluster-based Maximum consensus Time Synchronization (CMTS) method. It consists of two parts: intra-cluster time synchronization and inter-cluster time synchronization. Based on the theory of distributed consensus, the proposed method utilizes the maximum consensus approach to realize the intra-cluster time synchronization, and adjacent clusters exchange the time messages via overlapping nodes to synchronize with each other. A Revised-CMTS is further proposed to counteract the impact of bounded communication delays between two connected nodes, because the traditional stochastic models of the communication delays would distort in a dynamic environment. The simulation results show that our method reduces the communication overhead and improves the convergence rate in comparison to existing works, as well as adapting to the uncertain bounded communication delays.
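The maximum-consensus primitive at the heart of CMTS is simple to sketch: each round, every node adopts the largest logical clock among itself and its neighbors, so the whole network converges to the global maximum within diameter-many rounds. The topology and offsets below are illustrative (this sketch ignores clock skew and delays, which the paper's Revised-CMTS addresses):

```python
def max_consensus_rounds(clocks, edges, n_rounds):
    """Synchronous maximum consensus on an undirected graph: returns the
    logical clocks after n_rounds of neighbor-max updates."""
    n = len(clocks)
    nbrs = {i: set() for i in range(n)}
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)
    clocks = list(clocks)
    for _ in range(n_rounds):
        clocks = [max([clocks[i]] + [clocks[j] for j in nbrs[i]]) for i in range(n)]
    return clocks

# A 5-node line topology 0-1-2-3-4; think of {0,1,2} and {2,3,4} as two
# clusters joined at the overlapping node 2.
synced = max_consensus_rounds([5.0, 1.0, 9.0, 2.0, 3.0],
                              [(0, 1), (1, 2), (2, 3), (3, 4)], n_rounds=4)
```

After at most diameter-many rounds (here 4), every node holds the network-wide maximum clock value.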
15. Time-varying coefficients models for recurrent event data when different varying coefficients admit different degrees of smoothness: application to heart disease modeling. Stat Med 2016; 35:4166-4182. PMID: 27238093; DOI: 10.1002/sim.6995.
Abstract
We consider a class of semiparametric marginal rate models for analyzing recurrent event data. In these models, both time-varying and time-free effects are present, and the estimation of time-varying effects may result in non-smooth regression functions. A typical approach for avoiding this problem and producing smooth functions is based on kernel methods. The traditional kernel-based approach, however, assumes a common degree of smoothness for all time-varying regression functions, which may result in suboptimal estimators if the functions have different levels of smoothness. In this paper, we extend the traditional approach by introducing different bandwidths for different regression functions. First, we establish the asymptotic properties of the suggested estimators. Next, we demonstrate the superiority of our proposed method using two finite-sample simulation studies. Finally, we illustrate our methodology by analyzing a real-world heart disease dataset.
16. An Effective Cuckoo Search Algorithm for Node Localization in Wireless Sensor Network. Sensors 2016; 16:1390. PMID: 27589756; PMCID: PMC5038668; DOI: 10.3390/s16091390.
Abstract
Localization is an essential requirement in the increasing prevalence of wireless sensor network (WSN) applications. Reducing the computational complexity and communication overhead of WSN localization is of paramount importance in order to prolong the lifetime of the energy-limited sensor nodes and improve localization performance. This paper proposes an effective Cuckoo Search (CS) algorithm for node localization. Based on a modification of the step size, this approach enables the population to approach the global optimal solution rapidly, and the fitness of each solution is employed to build a mutation probability for avoiding local convergence. Further, the approach restricts the population to a certain range so that it can prevent the energy consumption caused by insignificant searches. Extensive experiments were conducted to study the effects of parameters like anchor density, node density, and communication range on the proposed algorithm with respect to average localization error and localization success ratio. In addition, a comparative study was conducted to realize the same localization task using the same network deployment. Experimental results prove that the proposed CS algorithm can not only increase the convergence rate but also reduce the average localization error compared with the standard CS algorithm and the Particle Swarm Optimization (PSO) algorithm.
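A basic (standard, not the paper's modified) Cuckoo Search applied to range-based localization can be sketched as follows: Levy flights via Mantegna's algorithm, greedy replacement, and abandonment of a fraction pa of nests. The anchors, step scales, and seeds are illustrative assumptions:

```python
import math
import random

def levy_step(rng, beta=1.5):
    """Heavy-tailed step length via Mantegna's algorithm."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)

def cuckoo_localize(anchors, dists, n_nests=15, n_iter=300, pa=0.25, seed=3):
    """Estimate an unknown node position from ranging distances to anchors
    by minimizing the least-squares ranging residual with Cuckoo Search."""
    rng = random.Random(seed)
    cost = lambda p: sum((math.dist(p, a) - d) ** 2 for a, d in zip(anchors, dists))
    nests = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n_nests)]
    best = min(nests, key=cost)
    for _ in range(n_iter):
        for i, p in enumerate(nests):
            s = levy_step(rng)  # Levy flight biased toward the current best nest
            cand = (p[0] + 0.3 * s * (best[0] - p[0]) + 0.3 * rng.gauss(0, 1),
                    p[1] + 0.3 * s * (best[1] - p[1]) + 0.3 * rng.gauss(0, 1))
            if cost(cand) < cost(p):
                nests[i] = cand
        for i in range(n_nests):        # abandon a fraction pa of nests
            if rng.random() < pa:
                nests[i] = (rng.uniform(0, 10), rng.uniform(0, 10))
        best = min(nests + [best], key=cost)   # elitism: never lose the best
    return best, cost(best)

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, a) for a in anchors]
est, err = cuckoo_localize(anchors, dists)
```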
17. Generalized Alternating Direction Method of Multipliers: New Theoretical Insights and Applications. Mathematical Programming Computation 2015; 7:149-187. PMID: 28428830; PMCID: PMC5394583; DOI: 10.1007/s12532-015-0078-2.
Abstract
Recently, the alternating direction method of multipliers (ADMM) has received intensive attention from a broad spectrum of areas. The generalized ADMM (GADMM) proposed by Eckstein and Bertsekas is an efficient and simple acceleration scheme of ADMM. In this paper, we take a deeper look at the linearized version of GADMM where one of its subproblems is approximated by a linearization strategy. This linearized version is particularly efficient for a number of applications arising from different areas. Theoretically, we show the worst-case 𝒪(1/k) convergence rate measured by the iteration complexity (k represents the iteration counter) in both the ergodic and nonergodic senses for the linearized version of GADMM. Numerically, we demonstrate the efficiency of this linearized version of GADMM by some rather new and core applications in statistical learning. Code packages in Matlab for these applications are also developed.
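The Eckstein-Bertsekas relaxation that defines GADMM can be shown on the smallest possible example: a scalar lasso-type problem whose closed-form solution is a soft threshold. The problem instance and parameter choices below are illustrative assumptions:

```python
def gadmm_lasso_scalar(a, lam, alpha=1.5, rho=1.0, n_iter=100):
    """Generalized (relaxed) ADMM for min_x 0.5*(x - a)^2 + lam*|z|, x = z.
    alpha in (0, 2) is the Eckstein-Bertsekas relaxation factor; alpha = 1
    recovers plain ADMM. The exact solution is soft-threshold(a, lam)."""
    soft = lambda v, t: max(v - t, 0.0) - max(-v - t, 0.0)
    z = u = 0.0
    for _ in range(n_iter):
        x = (a + rho * (z - u)) / (1.0 + rho)        # x-minimization
        x_hat = alpha * x + (1.0 - alpha) * z        # over-relaxation step
        z = soft(x_hat + u, lam / rho)               # z-minimization
        u += x_hat - z                               # dual update
    return z

z_star = gadmm_lasso_scalar(a=3.0, lam=1.0)          # soft(3, 1) = 2
```

Only the `x_hat` line distinguishes GADMM from ADMM; alpha slightly above 1 typically speeds convergence, which is the acceleration the abstract refers to.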
18. A strictly contractive Peaceman-Rachford splitting method for convex programming. SIAM Journal on Optimization 2014; 24:1011-1040. PMID: 25620862; PMCID: PMC4302964; DOI: 10.1137/13090849x.
Abstract
In this paper, we focus on the application of the Peaceman-Rachford splitting method (PRSM) to a convex minimization model with linear constraints and a separable objective function. Compared to the Douglas-Rachford splitting method (DRSM), another splitting method from which the alternating direction method of multipliers originates, PRSM requires more restrictive assumptions to ensure its convergence, while it is always faster whenever it is convergent. We first illustrate that the reason for this difference is that the iterative sequence generated by DRSM is strictly contractive, while that generated by PRSM is only contractive with respect to the solution set of the model. With only the convexity assumption on the objective function of the model under consideration, the convergence of PRSM is not guaranteed. But for this case, we show that the first t iterations of PRSM still enable us to find an approximate solution with an accuracy of O(1/t). A worst-case O(1/t) convergence rate of PRSM in the ergodic sense is thus established under mild assumptions. After that, we suggest attaching an underdetermined relaxation factor to PRSM to guarantee the strict contraction of its iterative sequence and thus propose a strictly contractive PRSM. A worst-case O(1/t) convergence rate of this strictly contractive PRSM in a nonergodic sense is established. We show the numerical efficiency of the strictly contractive PRSM by some applications in statistical learning and image processing.
19.
Abstract
Markov chain Monte Carlo (MCMC) or the Metropolis-Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis-Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals.
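A Bactrian proposal is a symmetric 50/50 mixture of two Gaussians centered at +/- m*scale from the current state, so near-zero moves are rarely proposed; with component standard deviation scale*sqrt(1 - m^2), the overall proposal variance stays scale^2. A minimal sketch targeting N(0, 1) (parameter values and names are illustrative assumptions):

```python
import math
import random

def bactrian_mh(logpi, x0, n_samples, scale=2.0, m=0.95, seed=0):
    """Metropolis-Hastings with a symmetric Bactrian proposal; symmetry
    means the acceptance ratio reduces to pi(x') / pi(x)."""
    rng = random.Random(seed)
    x, lp = x0, logpi(x0)
    sd = math.sqrt(1.0 - m * m)   # keeps the proposal variance = scale**2
    out = []
    for _ in range(n_samples):
        sign = 1.0 if rng.random() < 0.5 else -1.0
        prop = x + scale * (sign * m + rng.gauss(0.0, sd))
        lp_prop = logpi(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            x, lp = prop, lp_prop
        out.append(x)
    return out

draws = bactrian_mh(lambda v: -0.5 * v * v, 0.0, 20000)   # target N(0, 1)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Larger m pushes the two proposal humps apart; the paper's efficiency comparison is about how such kernels reduce the asymptotic variance of resulting estimates relative to a single Gaussian.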