26
Zhang S, Wang D, Liu F. Separate block-based parameter estimation method for Hammerstein systems. R Soc Open Sci 2018; 5:172194. PMID: 30110418; PMCID: PMC6030268; DOI: 10.1098/rsos.172194.
Abstract
Unlike output-input representation-based identification methods for two-block Hammerstein systems, this paper presents a separate block-based parameter estimation method for each block of a two-block Hammerstein CARMA system, without combining the parameters of the two parts. The idea is to treat each block as a subsystem and to estimate the parameters of the nonlinear block and the linear block separately (and interactively), using two least-squares algorithms in each recursive step. The internal variable between the two blocks (the output of the nonlinear block and the input of the linear block) is replaced by different estimates: when estimating the parameters of the nonlinear part, the internal variable is computed from the linear function; when estimating the parameters of the linear part, it is computed from the nonlinear function. The proposed method offers higher computational efficiency than the previous over-parametrization method, in which many redundant parameters must be computed. Simulation results show the effectiveness of the proposed algorithm.
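The alternating idea in this abstract can be sketched in a few lines: each block is fit by ordinary least squares while the unmeasured internal signal is reconstructed from the other block's current estimate. The toy system below (a quadratic static nonlinearity feeding a two-tap FIR linear block, with the first nonlinear coefficient pinned to resolve the scale ambiguity) is an invented stand-in, not the paper's CARMA model.

```python
import numpy as np

rng = np.random.default_rng(0)

# True system: x(t) = u(t) + 0.5*u(t)^2 (static nonlinearity),
# y(t) = 0.8*x(t-1) - 0.3*x(t-2) + noise (linear block)
c_true = np.array([1.0, 0.5])          # nonlinear coefficients (c1 fixed to 1)
b_true = np.array([0.8, -0.3])         # linear FIR coefficients
N = 2000
u = rng.uniform(-1, 1, N)
x = c_true[0] * u + c_true[1] * u**2
y = np.zeros(N)
y[2:] = b_true[0] * x[1:-1] + b_true[1] * x[:-2] + 0.01 * rng.standard_normal(N - 2)

# Alternating least squares: estimate each block separately, replacing the
# unmeasured internal signal with the other block's current estimate.
c = np.array([1.0, 0.0])               # c1 pinned to 1 (scale ambiguity)
b = np.array([0.1, 0.1])
for _ in range(20):
    # (1) linear block: internal signal reconstructed from the nonlinear estimate
    x_hat = c[0] * u + c[1] * u**2
    Phi = np.column_stack([x_hat[1:-1], x_hat[:-2]])
    b = np.linalg.lstsq(Phi, y[2:], rcond=None)[0]
    # (2) nonlinear block: y(t) is linear in (c1, c2) once b is fixed:
    # y(t) ≈ c1*(b1*u(t-1) + b2*u(t-2)) + c2*(b1*u(t-1)^2 + b2*u(t-2)^2)
    g1 = b[0] * u[1:-1] + b[1] * u[:-2]
    g2 = b[0] * u[1:-1]**2 + b[1] * u[:-2]**2
    # with c1 pinned, solve only for c2
    c2 = np.linalg.lstsq(g2[:, None], y[2:] - g1, rcond=None)[0][0]
    c = np.array([1.0, c2])

print(b, c)   # should approach [0.8, -0.3] and [1.0, 0.5]
```

Pinning one coefficient (rather than normalizing after the fact) keeps every sub-problem an ordinary linear least-squares fit.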
27
Shahid A, Choi JH, Rana AUHS, Kim HS. Least Squares Neural Network-Based Wireless E-Nose System Using an SnO₂ Sensor Array. Sensors 2018; 18:1446. PMID: 29734783; PMCID: PMC5982671; DOI: 10.3390/s18051446.
Abstract
Over the last few decades, the development of the electronic nose (E-nose) for the detection and quantification of dangerous and odorless gases, such as methane (CH4) and carbon monoxide (CO), using an array of SnO2 gas sensors has attracted considerable attention. This paper addresses sensor cross-sensitivity by developing a classifier and an estimator using an artificial neural network (ANN) and least squares regression (LSR), respectively. Initially, the ANN was implemented using a feedforward pattern recognition algorithm to learn the collective behavior of the array as the signature of a particular gas. In the second phase, the classified gas was quantified by minimizing the mean square error using LSR. The combined approach produced a 98.7% recognition probability, with estimated gas concentration accuracies of 95.5% and 94.4% for CH4 and CO, respectively. The classifier and estimator parameters were deployed on a remote microcontroller to realize a wireless E-nose system.
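As a rough illustration of the least-squares quantification step only (the ANN classification stage is omitted), suppose each sensor in the array responds approximately linearly to the concentration of the already-classified gas. The sensitivities and baselines below are invented calibration values, not the paper's.

```python
import numpy as np

# Hypothetical 4-sensor array: each sensor's response is modeled as
# r_i = a_i * conc + b_i for the gas already identified by the classifier.
a = np.array([0.9, 1.2, 0.7, 1.0])     # assumed sensitivities (made-up values)
b = np.array([0.1, 0.05, 0.2, 0.0])    # assumed baselines (made-up values)

def estimate_concentration(r):
    """Least-squares estimate of gas concentration from the array response r."""
    # minimize sum_i (r_i - a_i*c - b_i)^2  ->  c = a.(r - b) / a.a
    return a @ (r - b) / (a @ a)

true_c = 3.0
r = a * true_c + b + 0.01 * np.random.default_rng(1).standard_normal(4)
print(estimate_concentration(r))   # close to 3.0
```

A closed-form one-parameter fit like this is cheap enough to run on the remote microcontroller mentioned in the abstract.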
28
Waller N. An Introduction to Kristof's Theorem for Solving Least-Square Optimization Problems Without Calculus. Multivariate Behav Res 2018; 53:190-198. PMID: 29323539; DOI: 10.1080/00273171.2017.1412294.
Abstract
Kristof's Theorem (Kristof, 1970) describes a matrix trace inequality that can be used to solve a wide class of least-squares optimization problems without calculus. Considering its generality, it is surprising that Kristof's Theorem is rarely used in statistics and psychometric applications. The underutilization of this method likely stems, in part, from the mathematical complexity of Kristof's (1964, 1970) writings. In this article, I describe the underlying logic of Kristof's Theorem in simple terms by reviewing four key mathematical ideas that are used in the theorem's proof. I then show how Kristof's Theorem can be used to provide novel derivations of two cognate models from statistics and psychometrics. This tutorial includes a glossary of technical terms and an online supplement with R (R Core Team, 2017) code to perform the calculations described in the text.
29
A Review of Depth and Normal Fusion Algorithms. Sensors 2018; 18:431. PMID: 29389903; PMCID: PMC5855899; DOI: 10.3390/s18020431.
Abstract
Geometric surface information such as depth maps and surface normals can be acquired by various methods, including stereo light fields, shape from shading and photometric stereo techniques. We compare several algorithms that combine depth and surface normal information to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction, and the methods are reviewed and analyzed in a systematic way. Based on our findings, we introduce a new generalized fusion method, formulated as a least squares problem, which outperforms previous methods in the depth error domain by introducing a novel normal weighting that approximates the geodesic distance measure more closely. Furthermore, a novel method based on Total Generalized Variation (TGV) is introduced, which further outperforms previous approaches in terms of the geodesic normal distance error while maintaining comparable quality in the depth error domain.
30
Shaw CB, Hui ES, Helpern JA, Jensen JH. Tensor estimation for double-pulsed diffusional kurtosis imaging. NMR Biomed 2017; 30:e3722. PMID: 28328072; DOI: 10.1002/nbm.3722.
Abstract
Double-pulsed diffusional kurtosis imaging (DP-DKI) represents the double diffusion encoding (DDE) MRI signal in terms of six-dimensional (6D) diffusion and kurtosis tensors. Here a method for estimating these tensors from experimental data is described. A standard numerical algorithm for tensor estimation from conventional (i.e. single diffusion encoding) diffusional kurtosis imaging (DKI) data is generalized to DP-DKI. This algorithm is based on a weighted least squares (WLS) fit of the signal model to the data, combined with constraints designed to minimize unphysical parameter estimates; the numerical algorithm then takes the form of a quadratic programming problem. The principal change required to adapt the conventional DKI fitting algorithm to DP-DKI is replacing the three-dimensional diffusion and kurtosis tensors with the 6D tensors needed for DP-DKI. In this way, the 6D diffusion and kurtosis tensors for DP-DKI can be conveniently estimated from DDE data by using constrained WLS, providing a practical means for condensing DDE measurements into well-defined mathematical constructs that may be useful for interpreting and applying DDE MRI. Brain data from healthy volunteers are used to demonstrate the DP-DKI tensor estimation algorithm. In particular, representative parametric maps of selected tensor-derived rotational invariants are presented.
31
A Novel Real-Time Reference Key Frame Scan Matching Method. Sensors 2017; 17:1060. PMID: 28481285; PMCID: PMC5469665; DOI: 10.3390/s17051060.
Abstract
Unmanned aerial vehicles (UAVs) represent an effective technology for indoor search and rescue operations. Typically, indoor mission environments are unknown, unstructured and/or dynamic. Navigation of UAVs in such environments is addressed by simultaneous localization and mapping, using either local or global scan matching approaches. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF), a hybrid scan matching technique that combines feature-to-feature and point-to-point approaches. The algorithm aims to mitigate error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It falls back on the iterative closest point algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with those of various algorithms in static and dynamic environments. The algorithm exhibits promising navigational and mapping results and very short computational times, indicating its potential for use in real-time systems.
32
Tang LL, Yuan A, Collins J, Che X, Chan L. Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard. Cancer Inform 2017; 16:1176935116686063. PMID: 28469385; PMCID: PMC5392027; DOI: 10.1177/1176935116686063.
Abstract
The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework that uses the empirically estimated sensitivities and specificities as input "data." It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is the transformed empirical ROC curve at different thresholds; it takes on many values for continuous test results but few values for ordinal test results, and this limited number of values makes it impractical for ordinal data. The response variable in the proposed method, however, takes on many more distinct values, so the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with that of an existing method, and the method is then applied to two real cancer diagnostic examples as an illustration.
33
Matched Field Processing Based on Least Squares with a Small Aperture Hydrophone Array. Sensors 2016; 17:71. PMID: 28042828; PMCID: PMC5298644; DOI: 10.3390/s17010071.
Abstract
The receiver hydrophone array is the signal front end and plays an important role in matched field processing; it usually must cover the whole water column from the sea surface to the bottom, and such a large-aperture array is very difficult to realize. To solve this problem, an approach to matched field processing based on least squares with a small-aperture hydrophone array is proposed. It first decomposes the received acoustic fields into a depth-function matrix and the amplitudes of the normal modes. All the mode amplitudes are then estimated using least squares in the minimum-norm sense, and the estimated amplitudes are used to recalculate the received acoustic fields of the small-aperture array, so that the recalculated fields contain more environmental information. Finally, extensive numerical experiments with three small-aperture arrays are carried out in a classical shallow-water environment, and the performance of matched field passive localization is evaluated. The results show that the proposed method makes the recalculated fields contain more acoustic information about the source and improves the performance of matched field passive localization with a small-aperture array, demonstrating that the algorithm is effective.
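The decomposition-and-recalculation step can be sketched with NumPy's pseudo-inverse, which returns exactly the minimum-norm least-squares solution of an underdetermined system. The mode shapes below are random stand-ins for real depth functions, so only the linear-algebra structure matches the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: M normal modes sampled at K < M hydrophone depths.
# p = Phi @ a, with Phi the depth-function matrix (random stand-ins for the
# true mode shapes) and a the unknown mode amplitudes.
M, K = 8, 5
Phi = rng.standard_normal((K, M))
a_true = rng.standard_normal(M)
p = Phi @ a_true                      # field measured at the small array

# Minimum-norm least-squares amplitude estimate: pinv gives the minimum-norm
# solution of the underdetermined system p = Phi @ a.
a_hat = np.linalg.pinv(Phi) @ p

# Recalculate the field on a larger "virtual" array of additional depths
Phi_full = np.vstack([Phi, rng.standard_normal((10, M))])
p_recalc = Phi_full @ a_hat

print(np.allclose(Phi @ a_hat, p))    # the small-array field is reproduced exactly
```

Because the system is underdetermined (K < M), infinitely many amplitude vectors reproduce the measured field; the minimum-norm choice is the standard way to pick one.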
34
Berberidis D, Kekatos V, Giannakis GB. Online Censoring for Large-Scale Regressions with Application to Streaming Big Data. IEEE Trans Signal Process 2016; 64:3854-3867. PMID: 28042229; PMCID: PMC5198787; DOI: 10.1109/tsp.2016.2546225.
Abstract
On par with data-intensive applications, the sheer size of modern linear regression problems creates an ever-growing demand for efficient solvers. Fortunately, a significant percentage of the data accrued can be omitted while maintaining a certain quality of statistical inference with an affordable computational budget. This work introduces means of identifying and omitting less informative observations in an online and data-adaptive fashion. Given streaming data, the related maximum-likelihood estimator is sequentially found using first- and second-order stochastic approximation algorithms. These schemes are well suited when data are inherently censored or when the aim is to save communication overhead in decentralized learning setups. In a different operational scenario, the task of joint censoring and estimation is put forth to solve large-scale linear regressions in a centralized setup. Novel online algorithms are developed enjoying simple closed-form updates and provable (non)asymptotic convergence guarantees. To attain desired censoring patterns and levels of dimensionality reduction, thresholding rules are investigated too. Numerical tests on real and synthetic datasets corroborate the efficacy of the proposed data-adaptive methods compared to data-agnostic random projection-based alternatives.
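A minimal sketch of the censoring idea for streaming least squares: skip the recursive update whenever the innovation is small, so only informative observations are processed. The threshold rule and all constants below are illustrative choices, not the paper's algorithms or guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
theta_true = rng.standard_normal(d)

theta = np.zeros(d)
P = np.eye(d) * 100.0                 # RLS "covariance" matrix
tau = 0.3                             # censoring threshold (a tuning choice)
used = 0
N = 5000
for _ in range(N):
    x = rng.standard_normal(d)
    y = x @ theta_true + 0.1 * rng.standard_normal()
    r = y - x @ theta                 # innovation (prediction residual)
    if abs(r) < tau:
        continue                      # censor: prediction already good enough
    used += 1
    # standard recursive least-squares update on the retained sample
    Px = P @ x
    k = Px / (1.0 + x @ Px)
    theta = theta + k * r
    P = P - np.outer(k, Px)

print(used / N, np.linalg.norm(theta - theta_true))
```

Most of the stream is discarded once the estimate is good, which is the computational saving the abstract describes; in a decentralized setup the same rule saves communication, since censored samples are never transmitted.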
35
Zhu B, Li J, Chu Z, Tang W, Wang B, Li D. A Robust and Multi-Weighted Approach to Estimating Topographically Correlated Tropospheric Delays in Radar Interferograms. Sensors 2016; 16:1078. PMID: 27420066; PMCID: PMC4970124; DOI: 10.3390/s16071078.
Abstract
Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small-amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays that are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success, yet producing robust estimates of tropospheric phase delay plays a critical role in increasing the accuracy of InSAR measurements. Meanwhile, few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimating the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that it mitigated atmospheric noise better than the conventional phase-based method, and the corrected ground surface deformation agreed better with GPS measurements.
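Robust estimation of a phase-elevation ratio within one block can be sketched with iteratively reweighted least squares using Huber weights: pixels whose residuals look like deformation rather than stratified delay are automatically down-weighted. The numbers (ratio, noise, contamination) are invented, and the paper's multi-weighting and band filtering are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic block: phase = K * elevation + noise, with localized deformation
# acting as outliers that would bias an ordinary least-squares slope.
n = 500
elev = rng.uniform(0, 2000, n)                  # metres
K_true = 0.005                                  # rad per metre (made-up ratio)
phase = K_true * elev + 0.2 * rng.standard_normal(n)
phase[:50] += 5.0                               # deforming pixels = outliers

# Iteratively reweighted least squares with Huber weights (robust slope fit)
K = np.sum(elev * phase) / np.sum(elev**2)      # ordinary LS start
for _ in range(20):
    r = phase - K * elev
    s = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust scale (MAD)
    w = np.minimum(1.0, 1.345 * s / np.maximum(np.abs(r), 1e-12))  # Huber weights
    K = np.sum(w * elev * phase) / np.sum(w * elev**2)

print(K)   # close to 0.005 despite the contaminated pixels
```

The Huber constant 1.345 is the usual choice giving 95% efficiency under Gaussian noise; the MAD keeps the scale estimate itself from being inflated by the outliers.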
36
Gong H, Zhang S, Wang J, Gong H, Zeng J. Constructing Structure Ensembles of Intrinsically Disordered Proteins from Chemical Shift Data. J Comput Biol 2016; 23:300-10. PMID: 27159632; PMCID: PMC4876552; DOI: 10.1089/cmb.2015.0184.
Abstract
Modeling the structural ensemble of intrinsically disordered proteins (IDPs), which lack fixed structures, is essential for understanding their cellular functions and revealing their regulation mechanisms in the signaling pathways of related diseases (e.g., cancers and neurodegenerative disorders). Though the ensemble concept is widely believed to be the most accurate way to depict the 3D structures of IDPs, few traditional ensemble-based approaches effectively address the degeneracy problem (multiple solutions being consistent with the experimental data), which is the main challenge in the IDP ensemble construction task. In this article, based on a predefined conformational library, we formalize the structure ensemble construction problem in a least squares framework, which provides the optimal solution when the data constraints outnumber the unknown variables. To deal with the degeneracy problem, we further propose a regularized regression approach based on the elastic net technique, under the assumption that the weights to be estimated for individual structures in the ensemble are sparse. We have validated our methods through a reference ensemble approach as well as by testing on real biological data for three proteins: alpha-synuclein, the translocation domain of Colicin N, and the K18 domain of Tau protein.
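The sparse-weight idea can be sketched as an elastic-net fit solved by proximal gradient descent: the library's predicted chemical shifts form a matrix, and a sparse weight vector is sought so the weighted ensemble reproduces the observed shifts. Everything below (library size, penalties, optimizer) is an illustrative stand-in for the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy library: 30 conformers, 100 predicted chemical shifts each.
n_shifts, n_conf = 100, 30
A = rng.standard_normal((n_shifts, n_conf))
w_true = np.zeros(n_conf)
w_true[[2, 7, 15]] = [0.5, 0.3, 0.2]          # sparse "true" ensemble weights
y = A @ w_true + 0.01 * rng.standard_normal(n_shifts)

# Elastic net via proximal gradient (ISTA):
# minimize 0.5*||A w - y||^2 + lam1*||w||_1 + 0.5*lam2*||w||^2
lam1, lam2 = 0.5, 0.1
L = np.linalg.norm(A, 2) ** 2 + lam2          # Lipschitz constant of the smooth part
w = np.zeros(n_conf)
for _ in range(2000):
    grad = A.T @ (A @ w - y) + lam2 * w
    z = w - grad / L
    w = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)   # soft threshold

print(np.nonzero(w)[0])   # support should concentrate on conformers 2, 7, 15
```

The L1 term resolves the degeneracy by selecting few conformers, while the small L2 term keeps the solution stable when library columns are correlated, which is the usual rationale for elastic net over the plain lasso.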
37
Abstract
We prove that the convex least squares estimator (LSE) attains an n^(-1/2) pointwise rate of convergence in any region where the truth is linear. In addition, the asymptotic distribution can be characterized by a modified invelope process. Analogous results hold when one uses the derivative of the convex LSE to perform derivative estimation. These asymptotic results facilitate a new consistent testing procedure on linearity against a convex alternative. Moreover, we show that the convex LSE adapts to the optimal rate at the boundary points of the region where the truth is linear, up to a log-log factor. These conclusions are valid in the context of both density estimation and regression function estimation.
38
Balabdaoui F, Basu S. Letter to the editor comments on Groparu-Cojocaru and Doray (2013). Commun Stat Simul Comput 2015; 46:3833-3840. PMID: 28584394; PMCID: PMC5455332; DOI: 10.1080/03610918.2015.1024857.
Abstract
Although estimating the five parameters of an unknown Generalized Normal Laplace (GNL) density by minimizing the distance between the empirical and true characteristic functions seems appealing, the approach cannot be advocated in practice. This conclusion is based on extensive numerical simulations in which a fast minimization procedure delivers misleading estimates that are quite far from the truth. These findings can be predicted from the very large values obtained for the true asymptotic variances of the estimators of the five parameters of the true GNL density.
39
Bourhis LJ, Dolomanov OV, Gildea RJ, Howard JAK, Puschmann H. The anatomy of a comprehensive constrained, restrained refinement program for the modern computing environment - Olex2 dissected. Acta Crystallogr A Found Adv 2015; 71:59-75. PMID: 25537389; PMCID: PMC4283469; DOI: 10.1107/s2053273314022207.
Abstract
This paper describes the mathematical basis for olex2.refine, the new refinement engine which is integrated within the Olex2 program. Precise and clear equations are provided for every computation performed by this engine, including structure factors and their derivatives, constraints, restraints and twinning; a general overview is also given of the different components of the engine and their relation to each other. A framework for adding multiple general constraints with dependencies on common physical parameters is described. Several new restraints on atomic displacement parameters are also presented.
40
Giacovazzo C. From direct-space discrepancy functions to crystallographic least squares. Acta Crystallogr A Found Adv 2014; 71:36-45. PMID: 25537387; DOI: 10.1107/s2053273314019056.
Abstract
Crystallographic least squares are a fundamental tool for crystal structure analysis. In this paper their properties are derived from functions estimating the degree of similarity between two electron-density maps. The new approach leads also to modifications of the standard least-squares procedures, potentially able to improve their efficiency. The role of the scaling factor between observed and model amplitudes is analysed: the concept of unlocated model is discussed and its scattering contribution is combined with that arising from the located model. Also, the possible use of an ancillary parameter, to be associated with the classical weight related to the variance of the observed amplitudes, is studied. The crystallographic discrepancy factors, basic tools often combined with least-squares procedures in phasing approaches, are analysed. The mathematical approach here described includes, as a special case, the so-called vector refinement, used when accurate estimates of the target phases are available.
41
Xu Y, Iglewicz B, Chervoneva I. Robust Estimation of the Parameters of g-and-h Distributions, with Applications to Outlier Detection. Comput Stat Data Anal 2014; 75:66-80. PMID: 24665144; DOI: 10.1016/j.csda.2014.01.003.
Abstract
The g-and-h distributional family is generated from a relatively simple transformation of the standard normal and can approximate a broad spectrum of distributions. Consequently, it is easy to use in simulation studies and has been applied in multiple areas, including risk management, stock return analysis and missing data imputation studies. A rapidly convergent quantile-based least squares (QLS) estimation method to fit the parameters of the g-and-h distributional family is proposed and then extended to a robust version. The robust version is then used as a more general outlier detection approach. Several properties of the QLS method are derived and comparisons made with competing methods through simulation. Real data examples of microarray and stock index data are used as illustrations.
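The flavor of quantile-based least squares can be sketched as follows: for fixed (g, h) the location and scale parameters enter the quantile function linearly and are fit by ordinary least squares to the sample quantiles, leaving a two-dimensional search over (g, h). The crude grid search below is a stand-in for the paper's rapidly convergent procedure, and all parameter values are invented.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def gh_basis(z, g, h):
    """g-and-h transform of a standard normal quantile (assumes g != 0)."""
    return (np.expm1(g * z) / g) * np.exp(h * z**2 / 2)

# Simulate g-and-h data: Y = A + B * gh_basis(Z), Z standard normal
A0, B0, g0, h0 = 1.0, 2.0, 0.3, 0.1
y = A0 + B0 * gh_basis(rng.standard_normal(20000), g0, h0)

# Quantile-based least squares: match sample quantiles to model quantiles
p = np.linspace(0.05, 0.95, 19)
yq = np.quantile(y, p)
zq = np.array([NormalDist().inv_cdf(v) for v in p])

def qls_loss(g, h):
    # for fixed (g, h), the location A and scale B are a linear LS fit
    X = np.column_stack([np.ones_like(zq), gh_basis(zq, g, h)])
    coef, *_ = np.linalg.lstsq(X, yq, rcond=None)
    return np.sum((yq - X @ coef) ** 2)

grid = [(g, h) for g in np.arange(0.05, 0.61, 0.01) for h in np.arange(0.0, 0.31, 0.01)]
g_hat, h_hat = min(grid, key=lambda gh: qls_loss(*gh))
print(g_hat, h_hat)   # near (0.3, 0.1)
```

Profiling out the linear parameters this way is what makes quantile matching a least-squares problem in only two nonlinear unknowns.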
42
Wolzt M, Gouya G, Kapiotis S, Becka M, Mueck W, Kubitza D. Open-label, randomized study of the effect of rivaroxaban with or without acetylsalicylic acid on thrombus formation in a perfusion chamber. Thromb Res 2013; 132:240-7. PMID: 23786894; DOI: 10.1016/j.thromres.2013.05.019.
Abstract
INTRODUCTION Rivaroxaban, a direct factor Xa inhibitor, has demonstrated effectiveness for the management of both venous and arterial thrombosis. This study was designed to investigate the antithrombotic effect of rivaroxaban, with or without acetylsalicylic acid (ASA), in an ex vivo perfusion chamber at both low and high shear rates. MATERIALS AND METHODS Healthy subjects (N=51) were enrolled in a randomized, crossover (rivaroxaban 5, 10 or 20 mg with or without ASA), and parallel-group (compared with ASA plus clopidogrel) study. Thrombi formed on pig aorta strips were measured after a 5-minute perfusion at low and high shear rates with blood from the subjects by measuring D-dimer concentration (for fibrin deposition) and P-selectin content (for platelet deposition). RESULTS ASA alone had no impact on thrombus D-dimer levels, whereas rivaroxaban alone at peak concentrations decreased D-dimer levels by 9%, 84% and 65% at low shear rate and 37%, 73% and 74% at high shear rate after doses of 5, 10 and 20 mg, respectively. Steady-state ASA plus rivaroxaban 5 mg caused a greater reduction in D-dimer levels (63%) than monotherapy at low shear rate. Co-administration of ASA with clopidogrel was associated with a 30% decrease in D-dimer levels at low shear rate and a 14% decrease at high shear rate. No conclusive effect on P-selectin content was observed across the treatment groups. CONCLUSIONS Rivaroxaban dose-dependently inhibited ex vivo thrombus formation under low and high shear rates. Co-administration of ASA had an additional effect on the antithrombotic action of low-dose rivaroxaban.
43
Gurbel PA, Bliden KP, Logan DK, Kereiakes DJ, Lasseter KC, White A, Angiolillo DJ, Nolin TD, Maa JF, Bailey WL, Jakubowski JA, Ojeh CK, Jeong YH, Tantry US, Baker BA. The influence of smoking status on the pharmacokinetics and pharmacodynamics of clopidogrel and prasugrel: the PARADOX study. J Am Coll Cardiol 2013; 62:505-12. PMID: 23602770; DOI: 10.1016/j.jacc.2013.03.037.
Abstract
OBJECTIVES The goal of this study was to evaluate the effect of smoking on the pharmacokinetics and pharmacodynamics (PD) of clopidogrel and prasugrel therapy. BACKGROUND Major randomized trial data demonstrated that nonsmokers experience less or no benefit from clopidogrel treatment compared with smokers (i.e., the "smokers' paradox"). METHODS PARADOX was a prospective, randomized, double-blind, double-dummy, placebo-controlled, crossover study of objectively assessed nonsmokers (n = 56) and smokers (n = 54) with stable coronary artery disease receiving aspirin therapy. Patients were randomized to receive clopidogrel (75 mg daily) or prasugrel (10 mg daily) for 10 days and crossed over after a 14-day washout. PD was assessed by using VerifyNow P2Y12 and vasodilator-stimulated phosphoprotein phosphorylation assays. Clopidogrel and prasugrel metabolite levels, cytochrome P450 1A2 activity, CYP2C19 genotype, and safety parameters were determined. RESULTS During clopidogrel therapy, device-reported inhibition of platelet aggregation (IPA) trended lower in nonsmokers than smokers (least squares mean treatment difference ± SE: 7.7 ± 4.1%; p = 0.062). Device-reported IPA was significantly lower in clopidogrel-treated smokers than prasugrel-treated smokers (least squares mean treatment difference: 31.8 ± 3.4%; p < 0.0001). During clopidogrel therapy, calculated IPA was lower and P2Y12 reaction units and vasodilator-stimulated phosphoprotein phosphorylation and platelet reactivity index were higher in nonsmokers than in smokers (p = 0.043, p = 0.005, and p = 0.042, respectively). Greater antiplatelet effects were present after prasugrel treatment regardless of smoking status (p < 0.001 for all comparisons). CONCLUSIONS PARADOX demonstrated lower clopidogrel active metabolite exposure and PD effects of clopidogrel in nonsmokers relative to smokers. Prasugrel was associated with greater active metabolite exposure and PD effects than clopidogrel regardless of smoking status. The poorer antiplatelet response in clopidogrel-treated nonsmokers may provide an explanation for the smokers' paradox. (The Influence of Smoking Status on Prasugrel and Clopidogrel Treated Subjects Taking Aspirin and Having Stable Coronary Artery Disease; NCT01260584).
44
Veraart J, Rajan J, Peeters RR, Leemans A, Sunaert S, Sijbers J. Comprehensive framework for accurate diffusion MRI parameter estimation. Magn Reson Med 2012; 70:972-84. PMID: 23132517; DOI: 10.1002/mrm.24529.
Abstract
During the last decade, many approaches have been proposed for improving the estimation of diffusion measures. These techniques have already shown an increase in accuracy based on theoretical considerations, such as incorporating prior knowledge of the data distribution. The increased accuracy of diffusion metric estimators is typically observed in well-defined simulations, where the assumptions regarding properties of the data distribution are known to be valid. In practice, however, correcting for subject motion and geometric eddy current deformations alters the data distribution tremendously, such that it can no longer be expressed in a closed form. The image processing steps that precede the model fitting will render several assumptions on the data distribution invalid, potentially nullifying the benefit of applying more advanced diffusion estimators. In this work, we present a generic diffusion model fitting framework that considers some statistics of diffusion MRI data. A central role in the framework is played by the conditional least squares estimator. We demonstrate that the accuracy of that particular estimator can generally be preserved, regardless of the applied preprocessing steps, if the noise parameter is known a priori. To fulfill that condition, we also propose an approach for the estimation of spatially varying noise levels.
|
45
|
Weighted least squares techniques for improved received signal strength based localization. SENSORS 2011; 11:8569-92. [PMID: 22164092 PMCID: PMC3231493 DOI: 10.3390/s110908569] [Citation(s) in RCA: 94] [Impact Index Per Article: 7.2] [Received: 08/01/2011] [Revised: 08/30/2011] [Accepted: 08/31/2011] [Indexed: 11/16/2022]
Abstract
The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is perfectly characterized a priori. In practice, this assumption does not hold, and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or simply imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependence on an optimal channel model. In particular, we propose two weighted least squares techniques, based on the standard hyperbolic and circular positioning algorithms, that explicitly consider the accuracies of the different measurements to obtain a better estimate of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with very limited overhead in terms of computational cost, but also achieve greater robustness to inaccuracies in channel modeling.
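The weighted circular positioning step summarized in the abstract can be sketched in a few lines. This is a minimal 2-D illustration, not the paper's implementation: the function name, the particular linearization, and the assumption that per-range weights are supplied by the caller are all our own.

```python
import numpy as np

def wls_circular_position(anchors, dists, weights):
    """Weighted least-squares circular multilateration (illustrative sketch).

    anchors : (n, 2) array of known node positions
    dists   : (n,)   range estimates derived from RSS measurements
    weights : (n,)   confidence in each range (larger = more trusted)
    """
    x0, y0 = anchors[0]
    # Linearize the circle equations by subtracting the first one:
    # 2(x0 - xi)x + 2(y0 - yi)y = di^2 - d0^2 - xi^2 - yi^2 + x0^2 + y0^2
    A = 2.0 * (anchors[0] - anchors[1:])
    b = (dists[1:] ** 2 - dists[0] ** 2
         - np.sum(anchors[1:] ** 2, axis=1) + x0 ** 2 + y0 ** 2)
    # Solve the weighted normal equations via a scaled ordinary lstsq.
    s = np.sqrt(weights[1:])[:, None]
    theta, *_ = np.linalg.lstsq(s * A, s[:, 0] * b, rcond=None)
    return theta
```

With exact ranges the linear system is consistent and the estimate recovers the true position; with noisy ranges, down-weighting the least reliable measurements is what distinguishes the weighted variant from plain circular positioning.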
|
46
|
Wang Q, Dinse GE. Linear regression analysis of survival data with missing censoring indicators. LIFETIME DATA ANALYSIS 2011; 17:256-279. [PMID: 20559722 PMCID: PMC3020262 DOI: 10.1007/s10985-010-9175-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.2] [Received: 11/05/2008] [Accepted: 06/02/2010] [Indexed: 05/29/2023]
Abstract
Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial.
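Of the three estimators, the inverse probability weighting idea is the easiest to sketch: fit by least squares using only the records whose censoring indicator is observed, each weighted by the inverse of its probability of being observed. The sketch below assumes a scalar covariate and a known (or pre-estimated) observation probability; the function and argument names are hypothetical, and this is not the authors' code.

```python
import numpy as np

def ipw_linear_fit(x, y, observed, p_obs):
    """Inverse-probability-weighted least squares (illustrative sketch).

    x        : (n,) covariate values
    y        : (n,) responses (e.g. synthetic survival responses)
    observed : (n,) 1 if the censoring indicator was observed, else 0
    p_obs    : (n,) probability that the indicator is observed
    """
    w = observed / p_obs                       # zero weight for missing indicators
    A = np.column_stack([np.ones_like(x), x])  # intercept + slope design
    s = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(s * A, np.sqrt(w) * y, rcond=None)
    return beta                                # [intercept, slope]
```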
|
47
|
Chandola H. A lower bound on the error in dimensionality reduction resulting from projection onto a restricted subspace. LINEAR ALGEBRA AND ITS APPLICATIONS 2010; 433:2147-2151. [PMID: 21057654 PMCID: PMC2968740 DOI: 10.1016/j.laa.2010.07.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 05/30/2023]
Abstract
We obtain a lower bound for a variant of the common problem of dimensionality reduction. In this version, the dataset is projected onto a k-dimensional subspace in which the first k-1 basis vectors are fixed, leaving a single degree of freedom in the choice of basis vectors.
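The restricted projection can be made concrete: with the first k-1 basis vectors fixed, the error-minimizing choice of the one remaining vector is the leading principal direction of the residual left after removing the fixed components. Below is a minimal sketch under the assumption that the fixed basis B is orthonormal; the function name is ours, not the paper's.

```python
import numpy as np

def best_final_basis_vector(X, B):
    """Pick the single free basis vector minimizing reconstruction error.

    X : (n, d) data matrix (one observation per row)
    B : (d, k-1) fixed orthonormal basis vectors (as columns)
    """
    # Residual after removing the components already captured by B.
    R = X - (X @ B) @ B.T
    # The optimal extra direction is the top right singular vector of the
    # residual, i.e. the leading principal direction of what B missed.
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    return Vt[0]
```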
|
48
|
Robust fitting of [11C]-WAY-100635 PET data. J Cereb Blood Flow Metab 2010; 30:1366-72. [PMID: 20179725 PMCID: PMC2949218 DOI: 10.1038/jcbfm.2010.20] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/08/2022]
Abstract
Fitting of a positron emission tomography (PET) time-activity curve is typically accomplished according to the least squares (LS) criterion, which is optimal for data having Gaussian distributed errors, but not robust in the presence of outliers. Conversely, quantile regression (QR) provides robust estimates not heavily influenced by outliers, sacrificing a little efficiency relative to LS when no outliers are present. Given these considerations, we hypothesized that QR would improve parameter estimate accuracy, as measured by reduced intersubject variance in distribution volume (VT), compared with LS in PET modeling. We compare VT values after applying QR with those using LS on 49 controls studied with [11C]-WAY-100635. QR decreases the standard deviation of the VT estimates (relative improvement range: 0.08% to 3.24%), while keeping the within-group average VT values almost unchanged. QR variance reduction results in fewer subjects required to maintain the same statistical power in group analysis, without additional hardware and/or image registration to correct head motion.
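The robustness contrast between LS and QR is easy to reproduce on a toy line fit. The sketch below fits the 0.5 quantile (median regression, the central special case of QR) by iteratively reweighted least squares; this is a generic illustration, not the kinetic-modeling code used in the study.

```python
import numpy as np

def median_regression(x, y, iters=100, eps=1e-8):
    """L1 (median) line fit via iteratively reweighted least squares."""
    A = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]   # ordinary LS start
    for _ in range(iters):
        # Reweight so large residuals (outliers) contribute little.
        w = 1.0 / np.maximum(np.abs(y - A @ beta), eps)
        s = np.sqrt(w)[:, None]
        beta = np.linalg.lstsq(s * A, np.sqrt(w) * y, rcond=None)[0]
    return beta                                    # [intercept, slope]
```

On data lying on y = 2x + 1 with a single gross outlier, the median fit recovers the line almost exactly, while an ordinary LS fit is pulled toward the outlier.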
|
49
|
Balabdaoui F, Wellner JA. Estimation of a k-monotone density: characterizations, consistency and minimax lower bounds. STAT NEERL 2010; 64:45-70. [PMID: 20436949 PMCID: PMC2860328 DOI: 10.1111/j.1467-9574.2009.00438.x] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.6] [Indexed: 11/25/2022]
Abstract
The classes of monotone or convex (and necessarily monotone) densities on ℝ+ can be viewed as special cases of the classes of k-monotone densities on ℝ+. These classes bridge the gap between the classes of monotone (1-monotone) and convex decreasing (2-monotone) densities, for which asymptotic results are known, and the class of completely monotone (∞-monotone) densities on ℝ+. In this paper we consider non-parametric maximum likelihood and least squares estimators of a k-monotone density g0. We prove existence of the estimators and give characterizations. We also establish consistency properties, and show that the estimators are splines of degree k - 1 with simple knots. We further provide asymptotic minimax risk lower bounds for estimating the derivatives [Formula: see text] at a fixed point x0 under the assumption that [Formula: see text].
|
50
|
Jankowski HK, Wellner JA. Computation of nonparametric convex hazard estimators via profile methods. J Nonparametr Stat 2009; 21:505-518. [PMID: 20300560 PMCID: PMC2838722 DOI: 10.1080/10485250902745359] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Indexed: 10/21/2022]
Abstract
This paper proposes a profile likelihood algorithm to compute the nonparametric maximum likelihood estimator of a convex hazard function. The maximisation is performed in two steps. First, the support reduction algorithm is used to maximise the likelihood over all hazard functions with a given point of minimum (or antimode). Then it is shown that the profile (or partially maximised) likelihood is quasi-concave as a function of the antimode, so that a bisection algorithm can be applied to find the maximum of the profile likelihood, and hence also the global maximum. The new algorithm is illustrated using both artificial and real data, including lifetime data for Canadian males and females.
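The outer step is generic enough to sketch on its own: because the profile likelihood is quasi-concave in the antimode, a simple interval search locates its maximizer. The sketch below uses a ternary search (a close cousin of the bisection described in the paper) on an arbitrary quasi-concave function; the inner support-reduction maximisation is not reproduced here.

```python
def maximize_quasiconcave(f, lo, hi, tol=1e-8):
    """Locate the maximizer of a quasi-concave function f on [lo, hi].

    Each iteration discards the third of the interval that cannot
    contain the maximum, so the bracket shrinks geometrically.
    """
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1   # the maximum lies to the right of m1
        else:
            hi = m2   # the maximum lies to the left of m2
    return 0.5 * (lo + hi)
```

In the paper's setting, f would be the profile likelihood as a function of the antimode, with each evaluation running the inner support-reduction maximisation.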
|