1. Macías R, Vera JF, Heiser WJ. A cluster differences unfolding method for large datasets of preference ratings on an interval scale: Minimizing the mean squared centred residuals. Br J Math Stat Psychol 2024;77:356-374. PMID: 38213088. DOI: 10.1111/bmsp.12332.
Abstract
Clustering and spatial representation methods are often used in combination to analyse preference ratings when a large number of individuals and/or objects is involved. When analysed under an unfolding model, row-conditional linear transformations are usually most appropriate when the goal is to determine clusters of individuals with similar preferences. However, a significant problem with transformations that include both slope and intercept is the occurrence of degenerate solutions. In this paper, we propose a least squares unfolding method that performs clustering of individuals while simultaneously estimating the location of cluster centres and object locations in low-dimensional space. The method is based on minimising the mean squared centred residuals of the preference ratings with respect to the distances between cluster centres and object locations. At the same time, the distances are row-conditionally transformed with optimally estimated slope parameters. The method is computationally efficient for large datasets and does not suffer from the appearance of degenerate solutions. Its performance is analysed in an extensive Monte Carlo experiment, illustrated on a real data set, and compared with the results obtained using a two-step clustering and unfolding procedure.
Affiliation(s)
- Rodrigo Macías
- Centro de Investigación en Matemáticas, Unidad Monterrey, Monterrey, México
2. Huang H, Zeng P, Yang Q. Phase transition and higher order analysis of Lq regularization under dependence. Inf Inference 2024;13:iaae005. PMID: 38384283. PMCID: PMC10878746. DOI: 10.1093/imaiai/iaae005.
Abstract
We study the problem of estimating a [Formula: see text]-sparse signal [Formula: see text] from a set of noisy observations [Formula: see text] under the model [Formula: see text], where [Formula: see text] is the measurement matrix whose rows are drawn from distribution [Formula: see text]. We consider the class of [Formula: see text]-regularized least squares (LQLS) estimators given by the formulation [Formula: see text], where [Formula: see text] [Formula: see text] denotes the [Formula: see text]-norm. In the setting [Formula: see text] with fixed [Formula: see text] and [Formula: see text], we derive the asymptotic risk of [Formula: see text] for an arbitrary covariance matrix [Formula: see text], which generalizes the existing results for standard Gaussian design, i.e. [Formula: see text]. The results are derived using the (non-rigorous) replica method. We perform a higher-order analysis of LQLS in the small-error regime, in which the first dominant term can be used to determine the phase-transition behavior of LQLS. Our results show that the first dominant term does not depend on the covariance structure of [Formula: see text] in the cases [Formula: see text] and [Formula: see text], which indicates that the correlations among predictors only affect the phase-transition curve in the case [Formula: see text], a.k.a. LASSO. To study the influence of the covariance structure of [Formula: see text] on the performance of LQLS in the cases [Formula: see text] and [Formula: see text], we derive explicit formulas for the second dominant term in the expansion of the asymptotic risk in terms of small error. Extensive computational experiments confirm that our analytical predictions are consistent with numerical results.
Affiliation(s)
- Hanwen Huang
- Department of Biostatistics, Data Science and Epidemiology, Medical College of Georgia, Augusta University, Augusta, GA 30912, USA
- Peng Zeng
- Department of Mathematics & Statistics, Auburn University, Auburn, AL 36849, USA
- Qinglong Yang
- School of Statistics and Mathematics, Zhongnan University of Economics and Law, Wuhan 430073, Hubei, P. R. China
3. Ren J, Pan W. Statistical inference with large-scale trait imputation. Stat Med 2024;43:625-641. PMID: 38038193. PMCID: PMC10848238. DOI: 10.1002/sim.9975.
Abstract
Recently, a nonparametric method called LS-imputation has been proposed for large-scale trait imputation based on a GWAS summary dataset and a large set of genotyped individuals. The imputed trait values, along with the genotypes, can be treated as an individual-level dataset for downstream genetic analyses, including those that cannot be done with GWAS summary data. However, since the covariance matrix of the imputed trait values is often too large to calculate, the current method imposes a working assumption that the imputed trait values are independently and identically distributed, which does not hold in truth. Here we propose a "divide and conquer/combine" strategy to estimate and account for the covariance matrix of the imputed trait values via batches, thus relaxing the incorrect working assumption. Applications of the methods to the UK Biobank data for marginal association analysis showed some improvement by the new method in some cases, but overall the original method performed well, which was explained by nearly constant variances of, and mostly weak correlations among, the imputed trait values.
Affiliation(s)
- Jingchen Ren
- School of Statistics, University of Minnesota, Minneapolis, MN 55455
- Division of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN 55455
- Wei Pan
- Division of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN 55455
4. Ali SM, Yadav NN, Wirestam R, Singh M, Heo HY, van Zijl PC, Knutsson L. Deep learning-based Lorentzian fitting of water saturation shift referencing spectra in MRI. Magn Reson Med 2023;90:1610-1624. PMID: 37279008. PMCID: PMC10524193. DOI: 10.1002/mrm.29718.
Abstract
PURPOSE Water saturation shift referencing (WASSR) Z-spectra are commonly used for field referencing in chemical exchange saturation transfer (CEST) MRI. However, their analysis using least-squares (LS) Lorentzian fitting is time-consuming and prone to errors because of the unavoidable noise in vivo. A deep learning-based single Lorentzian Fitting Network (sLoFNet) is proposed to overcome these shortcomings. METHODS A neural network architecture was constructed and its hyperparameters optimized. Training was conducted on simulated and in vivo paired data sets of discrete signal values and their corresponding Lorentzian shape parameters. The sLoFNet performance was compared with LS on several WASSR data sets (both simulated and in vivo 3T brain scans). Prediction errors, robustness against noise, effects of sampling density, and time consumption were compared. RESULTS LS and sLoFNet performed comparably in terms of RMS error and mean absolute error on all in vivo data, with no statistically significant difference. Although the LS method fitted well on samples with low noise, its error increased rapidly when sample noise increased up to 4.5%, whereas the error of sLoFNet increased only marginally. With the reduction of Z-spectral sampling density, prediction errors increased for both methods, but the increase occurred earlier (at 25 vs. 15 frequency points) and was more pronounced for LS. Furthermore, sLoFNet performed, on average, 70 times faster than the LS method. CONCLUSION Comparisons between LS and sLoFNet on simulated and in vivo WASSR MRI Z-spectra, in terms of robustness against noise and decreased sample resolution as well as time consumption, showed significant advantages for sLoFNet.
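As a point of reference for the LS baseline described above, a minimal least-squares fit of a single Lorentzian to a simulated Z-spectrum might look like the following sketch (all parameter values are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(offset, amp, width, center):
    # single Lorentzian dip on a unit baseline, as for direct water saturation
    return 1.0 - amp * width**2 / (width**2 + (offset - center)**2)

rng = np.random.default_rng(0)
offsets = np.linspace(-1.0, 1.0, 33)      # saturation frequency offsets (ppm)
true = (0.8, 0.3, 0.05)                   # amplitude, linewidth, B0 shift
signal = lorentzian(offsets, *true) + rng.normal(0, 0.01, offsets.size)

popt, _ = curve_fit(lorentzian, offsets, signal, p0=(0.5, 0.5, 0.0))
print(popt)   # fitted (amplitude, width, center); center estimates the B0 shift
```

In WASSR field mapping, the fitted center frequency per voxel is the quantity of interest; the network in the paper replaces this per-voxel optimization with a single forward pass.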
Affiliation(s)
- Nirbhay N. Yadav
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, United States
- Ronnie Wirestam
- Department of Medical Radiation Physics, Lund University, Lund, Sweden
- Munendra Singh
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, United States
- Hye-Young Heo
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, United States
- Peter C. van Zijl
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, United States
- Linda Knutsson
- Department of Medical Radiation Physics, Lund University, Lund, Sweden
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, United States
5. Tehrani A, Anderson JSM, Chakraborty D, Rodriguez-Hernandez JI, Thompson DC, Verstraelen T, Ayers PW, Heidar-Zadeh F. An information-theoretic approach to basis-set fitting of electron densities and other non-negative functions. J Comput Chem 2023;44:1998-2015. PMID: 37526138. DOI: 10.1002/jcc.27170.
Abstract
The numerical ill-conditioning associated with approximating an electron density by a convex sum of Gaussian or Slater-type functions is overcome by using the (extended) Kullback-Leibler divergence to measure the deviation between the target and approximate densities. The optimized densities are non-negative and normalized, and they are accurate enough to be used in applications related to molecular similarity, the topology of the electron density, and numerical molecular integration. This robust, efficient, and general approach can be used to fit any non-negative normalized function (e.g., the kinetic energy density and molecular electron density) with a convex sum of non-negative basis functions. We present a fixed-point iteration method for optimizing the Kullback-Leibler divergence and compare it to conventional gradient-based optimization methods. These algorithms are released through the free and open-source BFit package, which also includes an L2-norm-squared optimization routine applicable to any square-integrable scalar function.
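A minimal sketch of the fixed-point idea, assuming a one-dimensional target density on a grid and fixed Gaussian basis functions with only the convex weights optimized (none of these choices come from the paper; BFit operates on molecular densities and also optimizes exponents):

```python
import numpy as np

def gauss(x, mu, s):
    # normalized Gaussian basis function
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

x = np.linspace(-6, 6, 2001)
dx = x[1] - x[0]

# target density: a two-component mixture we pretend not to know
f = 0.3 * gauss(x, -1.5, 0.9) + 0.7 * gauss(x, 1.0, 1.3)

centers = np.linspace(-4, 4, 9)
G = np.stack([gauss(x, m, 1.0) for m in centers])   # (9, n_grid) fixed basis

c = np.full(len(centers), 1.0 / len(centers))       # uniform convex weights
for _ in range(500):
    model = c @ G                                    # current approximation
    # fixed-point (EM-like) update that decreases KL(f || sum_k c_k g_k);
    # the update is self-normalizing when f and g_k are normalized
    c = c * np.sum(G * (f / model), axis=1) * dx
    c /= c.sum()                                     # guard against drift

kl = np.sum(f * np.log(f / (c @ G))) * dx
print(kl)   # small residual divergence after convergence
```

Because the update multiplies the current weights by non-negative factors, non-negativity and normalization are preserved at every step, which is the structural advantage of the KL objective the abstract highlights.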
Affiliation(s)
- Alireza Tehrani
- Department of Chemistry, Queen's University, Kingston, Ontario, Canada
- James S M Anderson
- Instituto de Química, Universidad Nacional Autónoma de México, Ciudad de México, Mexico
- Debajit Chakraborty
- Department of Physics, Wake Forest University, Winston-Salem, North Carolina, USA
- Center for Functional Materials, Wake Forest University, Winston-Salem, North Carolina, USA
- Toon Verstraelen
- Center for Molecular Modeling (CMM), Ghent University, Zwijnaarde, Belgium
- Paul W Ayers
- Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario, Canada
6. Zhao D, Li S, Wang F, Zhao W, Huang S. Estimation of Wideband Multi-Component Phasors Considering Signal Damping. Sensors (Basel) 2023;23:7071. PMID: 37631610. PMCID: PMC10459816. DOI: 10.3390/s23167071.
Abstract
Harmonic and interharmonic content in power system signals is increasing with the development of renewable energy generation and power electronic devices. These multiple signal components can seriously degrade power quality, trip thermal generators, cause oscillations, and threaten system stability, especially interharmonic tones with positive damping factors. The first step in mitigating these adverse effects is to accurately and quickly monitor signal features, including frequency, damping factor, amplitude, and phase. This paper proposes a concise and robust index for identifying the number of modes present in the signal using the singular values of the Hankel matrix, and discusses its scope of application by testing the influence of various factors. Next, simplified matrix pencil theory is employed to estimate each signal component's frequency and damping factor. These estimates are then used in a modified least-squares algorithm to accurately extract the wideband multi-component phasors. Finally, this paper designs a series of scenarios with varying signal frequency, damping factor, amplitude, and phase to test the proposed algorithm thoroughly. The results verify that the proposed method achieves a maximum total vector error of less than 1.5%, which is more accurate than existing phasor estimators in various signal environments. The high accuracy of the proposed method stems from considering both the estimation of the number of frequency components and the effect of signal damping.
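The mode-counting step can be illustrated generically: the singular values of a Hankel matrix built from a multi-component signal separate into a dominant signal group (two singular values per real damped tone) and a noise floor. A sketch with illustrative frequencies, damping factors, and threshold (not the paper's index):

```python
import numpy as np
from scipy.linalg import hankel, svdvals

rng = np.random.default_rng(1)
fs, n = 1000.0, 400
t = np.arange(n) / fs
# one decaying fundamental plus one slowly growing interharmonic, plus noise
x = (np.exp(-2.0 * t) * np.cos(2 * np.pi * 50.0 * t)
     + 0.4 * np.exp(0.5 * t) * np.cos(2 * np.pi * 73.5 * t)
     + rng.normal(0, 0.01, n))

L = n // 2
H = hankel(x[:L], x[L - 1:])           # 200 x 201 Hankel matrix of the samples
s = svdvals(H)
k = int(np.sum(s > 0.02 * s[0]))       # count singular values above the noise floor
print(k)                                # each real tone contributes 2 -> 4 here
```

The matrix pencil step then estimates frequencies and damping factors from the dominant subspace, and those estimates parameterize the least-squares phasor extraction described in the abstract.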
Affiliation(s)
- Dongfang Zhao
- Department of Electrical Engineering, Tsinghua University, Beijing 100084, China
- China Electric Power Planning & Engineering Institute, Beijing 100120, China
- Shisong Li
- Department of Electrical Engineering, Tsinghua University, Beijing 100084, China
- Fuping Wang
- Department of Electrical Engineering, Tsinghua University, Beijing 100084, China
- Wei Zhao
- Department of Electrical Engineering, Tsinghua University, Beijing 100084, China
- Songling Huang
- Department of Electrical Engineering, Tsinghua University, Beijing 100084, China
7. Xu Z, Li M, Han Y, Li X, Shi G. Robust Flow Estimation Algorithm of Multichannel Ultrasonic Flowmeter Based on Random Sampling Least Squares. Sensors (Basel) 2022;22:7660. PMID: 36236755. PMCID: PMC9573497. DOI: 10.3390/s22197660.
Abstract
The multi-path ultrasonic flowmeter is widely used in engineering practice, and the flow algorithm is important for its accuracy. The least-squares estimation method is simple and efficient and has good engineering application value. In practical applications, however, noise is inevitably introduced into the measurement process by the flowmeter itself or by flow-field interference, and the results of classical least squares will deviate from reality because it lacks robustness. In this regard, two flow algorithms for multi-path ultrasonic flowmeters are proposed based on least squares and the random sample consensus (RANSAC) algorithm, which is widely used in image processing. The two algorithms resist gross errors effectively by excluding outlying points from the sample set. To verify their effectiveness, we take the double-bend flow field, a typical disturbed flow field in engineering, as the research object and compare four algorithms. The two proposed flow algorithms show higher accuracy and better robustness in the presence of interference noise.
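A generic random-sample-consensus least-squares fit illustrates how random sampling resists gross errors; the flowmeter's path-velocity model is replaced here by a simple line for brevity, and all thresholds and counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 * x + 1.0 + rng.normal(0, 0.2, n)   # true line: slope 2, intercept 1
out = rng.choice(n, 40, replace=False)
y[out] += rng.uniform(5, 15, 40)            # 20% gross errors

def fit_line(xs, ys):
    # ordinary least-squares line fit
    A = np.column_stack([xs, np.ones_like(xs)])
    return np.linalg.lstsq(A, ys, rcond=None)[0]

best_inliers = None
for _ in range(200):                        # random sampling consensus
    idx = rng.choice(n, 2, replace=False)   # minimal sample for a line
    k, b = fit_line(x[idx], y[idx])
    inliers = np.abs(y - (k * x + b)) < 0.5
    if best_inliers is None or inliers.sum() > best_inliers.sum():
        best_inliers = inliers

k_r, b_r = fit_line(x[best_inliers], y[best_inliers])  # refit on consensus set
k_ls, b_ls = fit_line(x, y)                             # plain LS, for contrast
print(k_r, b_r, k_ls, b_ls)
```

The refit on the consensus set recovers the true line, while the plain least-squares fit is pulled away by the gross errors, which is the failure mode the abstract describes.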
Affiliation(s)
- Zhijia Xu
- Institute of Systems Engineering, China Academy of Engineering Physics, Mianyang 621999, China
- Minghai Li
- Institute of Systems Engineering, China Academy of Engineering Physics, Mianyang 621999, China
- Yuqiang Han
- Institute of Systems Engineering, China Academy of Engineering Physics, Mianyang 621999, China
- Xin Li
- College of Geology Engineering and Geomatics, Chang'an University, Xi'an 710054, China
- Guangmei Shi
- Institute of Systems Engineering, China Academy of Engineering Physics, Mianyang 621999, China
8. Yu Y, Jiang H, Zhang X, Chen Y. Identifying Irregular Potatoes Using Hausdorff Distance and Intersection over Union. Sensors (Basel) 2022;22:5740. PMID: 35957297. PMCID: PMC9370970. DOI: 10.3390/s22155740.
Abstract
Irregular potatoes limit further processing and reduce the added value of potatoes. An ellipse-fitting-based Hausdorff distance and intersection over union (IoU) method for identifying irregular potatoes is proposed to solve this problem. First, the acquired potato image is resized, translated, segmented, and filtered to obtain the potato contour information. Second, a least-squares fitting method fits the extracted contour to an ellipse. Then, the similarity between the irregular potato contour and the fitted ellipse is characterized using the perimeter ratio, area ratio, Hausdorff distance, and IoU. Next, the characterization ability of the four features is analyzed, and an identification standard for irregular potatoes is established. Finally, we discuss the algorithm's shortcomings and demonstrate its advantages by comparison. The experimental results showed that the characterization ability of the perimeter ratio and area ratio was inferior to that of the Hausdorff distance and IoU, and that using the Hausdorff distance and IoU as feature parameters can effectively identify irregular potatoes. Using the Hausdorff distance alone as a feature parameter, the algorithm achieved excellent performance, with precision, recall, and F1 scores reaching 0.9423, 0.98, and 0.9608, respectively. Using IoU alone as a feature parameter, the algorithm achieved a higher overall recognition rate, with precision, recall, and F1 scores of 1, 0.96, and 0.9796, respectively. Compared with existing studies, the proposed algorithm identifies irregular potatoes using only one feature, avoiding the complexity of high-dimensional features and significantly reducing the computing effort. Moreover, the simple threshold segmentation requires no data training and saves algorithm execution time.
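The Hausdorff distance between a contour and its fitted ellipse can be computed directly from the two point sets. A generic sketch, with a synthetic ellipse plus one protruding point standing in for an irregular contour (shapes and sizes are illustrative only):

```python
import numpy as np

def hausdorff(P, Q):
    # symmetric Hausdorff distance between two 2-D point sets
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
ellipse = np.column_stack([3 * np.cos(theta), 2 * np.sin(theta)])  # fitted ellipse

blob = ellipse.copy()
blob[0] += [1.0, 0.0]     # one protruding point -> an "irregular" contour

h_reg = hausdorff(ellipse, ellipse)   # perfect fit: distance 0
h_irr = hausdorff(blob, ellipse)      # protrusion shows up as distance ~1
print(h_reg, h_irr)
```

A threshold on this distance (or on the IoU of the two enclosed regions) then separates regular from irregular contours, which is the single-feature decision rule the abstract describes.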
9.
Abstract
Clinically useful proton computed tomography images will rely on algorithms to find the three-dimensional proton stopping power distribution that optimally fits the measured proton data. We present a least squares iterative method with many features that put proton imaging into a more quantitative framework. These include the definition of a unique solution that optimally fits the protons, the definition of an iteration vector that takes into account proton measurement uncertainties, the definition of an optimal step size for each iteration individually, the ability to simultaneously optimize the step sizes of many iterations, the ability to divide the proton data into arbitrary numbers of blocks for parallel processing and the use of graphics processing units, and the definition of stopping criteria to determine when to stop iterating. We find that it is possible, for any object being imaged, to provide assurance that the image is quantifiably close to an optimal solution, and that the optimization of step sizes reduces the total number of iterations required for convergence. We demonstrate the use of these algorithms on real data.
Affiliation(s)
- Don F. DeJongh
- ProtonVDA LLC, 1700 Park St Ste 208, Naperville, IL 60563, USA
10. Li J, Dogancay K, Hmam H. Closed-Form Pseudolinear Estimators for DRSS-AOA Localization. Sensors (Basel) 2021;21:7159. PMID: 34770465. PMCID: PMC8588383. DOI: 10.3390/s21217159.
Abstract
This paper investigates the hybrid source localization problem using differential received signal strength (DRSS) and angle of arrival (AOA) measurements. The main advantage of hybrid measurements is improved localization accuracy with respect to a single sensor modality. For sufficiently short wavelengths, AOA sensors can be constructed with size, weight, power, and cost (SWAP-C) requirements in mind, making the proposed hybrid DRSS-AOA sensing feasible at low cost. First, the maximum likelihood estimation solution is derived, which is computationally expensive and likely to become unstable at large noise levels. Then a novel closed-form pseudolinear estimation method is developed by incorporating the AOA measurements into a linearized form of the DRSS equations. This method eliminates the nuisance parameter associated with the linearized DRSS equations, hence improving estimation performance. The estimation bias arising from the injection of measurement noise into the pseudolinear data matrix is examined, and the method of instrumental variables (IV) is employed to reduce this bias. As the performance of the resulting weighted instrumental variable (WIV) estimator depends on the correlation between the IV matrix and the data matrix, a selected-hybrid-measurement WIV (SHM-WIV) estimator is proposed to maintain a strong correlation. The superior bias and mean-squared error performance of the new SHM-WIV estimator is illustrated with simulation examples.
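The pseudolinear idea can be illustrated in isolation for the AOA part: each bearing t from a sensor at (xi, yi) gives one linear equation sin(t)*px - cos(t)*py = sin(t)*xi - cos(t)*yi, and stacking the equations yields a closed-form least-squares fix. A sketch with illustrative geometry and noise (the paper's DRSS terms, bias analysis, and IV refinement are omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
sensors = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.], [50., -50.]])
source = np.array([60., 40.])

# noisy bearings from each sensor to the source (0.5 degree std)
theta = np.arctan2(source[1] - sensors[:, 1], source[0] - sensors[:, 0])
theta += rng.normal(0, np.deg2rad(0.5), theta.size)

# pseudolinear system: sin(t)*(px - xi) - cos(t)*(py - yi) = 0
A = np.column_stack([np.sin(theta), -np.cos(theta)])
b = np.sin(theta) * sensors[:, 0] - np.cos(theta) * sensors[:, 1]
est = np.linalg.lstsq(A, b, rcond=None)[0]   # closed-form position estimate
print(est)
```

Because the noisy bearings enter the data matrix A itself, this estimator is biased at high noise, which is exactly the effect the instrumental-variable step in the paper is designed to reduce.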
Affiliation(s)
- Jun Li
- UniSA STEM, University of South Australia, Mawson Lakes Campus, Mawson Lakes, SA 5095, Australia
- Kutluyil Dogancay (corresponding author)
- UniSA STEM, University of South Australia, Mawson Lakes Campus, Mawson Lakes, SA 5095, Australia
- Hatem Hmam
- Defence Science & Technology Group, Cyber and Electronic Warfare Division, Edinburgh, SA 5111, Australia
11. Yao L, Gao Q, Zhang D, Zhang W, Chen Y. An Integrated Compensation Method for the Force Disturbance of a Six-Axis Force Sensor in Complex Manufacturing Scenarios. Sensors (Basel) 2021;21:4706. PMID: 34300443. PMCID: PMC8309603. DOI: 10.3390/s21144706.
Abstract
As one of the key components for active compliance control and human-robot collaboration, a six-axis force sensor is often used by a robot to obtain contact forces. However, a significant problem is the distortion between the actual contact forces and the data conveyed by the six-axis force sensor, caused by its zero drift, system error, and the gravity of the robot end-effector. To eliminate these disturbances, an integrated compensation method is proposed that uses a deep learning network for zero-point prediction and the least squares method for tool load identification. The proposed method can then automatically compensate the six-axis force sensor in complex manufacturing scenarios. The experimental results demonstrate that the proposed method provides effective and robust compensation for force disturbance and achieves high measurement accuracy.
12. Hamiye Beyaztas B, Bandyopadhyay S. Robust estimation for linear panel data models. Stat Med 2020;39:4421-4438. PMID: 32901960. DOI: 10.1002/sim.8732.
Abstract
In different fields of application including, but not limited to, the behavioral, environmental, and medical sciences and econometrics, panel data regression models have become increasingly popular as a general framework for making meaningful statistical inferences. However, when the ordinary least squares (OLS) method is used to estimate the model parameters, the presence of outliers may significantly alter the adequacy of such models by producing biased and inefficient estimates. In this work, we propose a new, weighted-likelihood-based robust estimation procedure for linear panel data models with fixed and random effects. The finite-sample performance of the proposed estimators is illustrated through an extensive simulation study as well as an application to a blood pressure dataset. Our thorough study demonstrates that the proposed estimators perform significantly better than traditional methods in the presence of outliers and produce results competitive with OLS-based estimates when no outliers are present in the dataset.
Affiliation(s)
- Soutir Bandyopadhyay
- Department of Applied Mathematics and Statistics, Colorado School of Mines, Golden, Colorado, USA
13. Gao D, Zeng X, Wang J, Su Y. Application of LSTM Network to Improve Indoor Positioning Accuracy. Sensors (Basel) 2020;20:5824. PMID: 33076259. PMCID: PMC7602445. DOI: 10.3390/s20205824.
Abstract
Various indoor positioning methods have been developed to solve the "last mile on Earth" problem. Ultra-wideband positioning technology stands out among indoor positioning methods because of its unique communication mechanism and has broad application prospects. Under non-line-of-sight (NLOS) conditions, however, the accuracy of this positioning method is greatly degraded. Unlike traditional approaches that detect and reject NLOS signals, all base stations are involved in positioning to improve positioning accuracy, maximizing the use of the positioning equipment. In this paper, a Long Short-Term Memory (LSTM) network is applied to process the raw Channel Impulse Response (CIR) and estimate the ranging error, which is then combined with an improved positioning algorithm to improve positioning accuracy. It has been verified that the accuracy of the predicted ranging error reaches centimeter level. Using this prediction in the positioning algorithm improves the average positioning accuracy by about 62%.
Affiliation(s)
- Dongqi Gao
- Hebei Key Laboratory of Advanced Laser Technology and Equipment, School of Electronics and Information Engineering, Hebei University of Technology, Tianjin 300401, China
- Xiangye Zeng (corresponding author)
- Hebei Key Laboratory of Advanced Laser Technology and Equipment, School of Electronics and Information Engineering, Hebei University of Technology, Tianjin 300401, China
- Tianjin Key Laboratory of Electronic Materials and Devices, School of Electronics and Information Engineering, Hebei University of Technology, Tianjin 300401, China
- Jingyi Wang
- Tianjin Key Laboratory of Electronic Materials and Devices, School of Electronics and Information Engineering, Hebei University of Technology, Tianjin 300401, China
- Yanmang Su
- Tianjin Key Laboratory of Electronic Materials and Devices, School of Electronics and Information Engineering, Hebei University of Technology, Tianjin 300401, China
14. Karaçuha E, Önal NÖ, Ergün E, Tabatadze V, Alkaş H, Karaçuha K, Tontuş HÖ, Nu NVN. Modeling and Prediction of the Covid-19 Cases With Deep Assessment Methodology and Fractional Calculus. IEEE Access 2020;8:164012-164034. PMID: 34812356. PMCID: PMC8545307. DOI: 10.1109/access.2020.3021952.
Abstract
This study focuses on the modeling, prediction, and analysis of confirmed, recovered, and death cases of COVID-19 using fractional calculus, in comparison with other models, for eight countries including China, France, Italy, Spain, Turkey, the UK, and the US. First, the dataset is modeled using our previously proposed approach, the Deep Assessment Methodology; next, one-step prediction of the future is made using two methods: the Deep Assessment Methodology and Long Short-Term Memory. Then, a Gaussian prediction model is proposed to predict the short-term (30-day) future of the pandemic, and its prediction performance is evaluated. The proposed Gaussian model is compared to a time-dependent susceptible-infected-recovered (SIR) model. Lastly, the effect of history on memory vectors is analyzed using wavelet-based denoising and correlation coefficients. Results show that the Deep Assessment Methodology successfully models the dataset, with 0.6671%, 0.6957%, and 0.5756% average errors for confirmed, recovered, and death cases, respectively. We found that the proposed Gaussian approach underestimates the trend of the pandemic; the fastest increase is observed in the US, while the slowest is observed in China and Spain. Analysis of the past showed that, for all countries except Turkey, the current time instant depends mainly on the past two weeks, and that countries like Germany, Italy, and the UK have a shorter average incubation period than the US and France.
Affiliation(s)
- Ertuğrul Karaçuha
- Informatics Institute, Istanbul Technical University, 34467 Istanbul, Turkey
- Nisa Özge Önal
- Informatics Institute, Istanbul Technical University, 34467 Istanbul, Turkey
- Esra Ergün
- Informatics Institute, Istanbul Technical University, 34467 Istanbul, Turkey
- Vasil Tabatadze
- Informatics Institute, Istanbul Technical University, 34467 Istanbul, Turkey
- Hasan Alkaş
- Faculty of Society and Economics, Rhine-Waal University of Applied Sciences, 47533 Kleve, Germany
- Kamil Karaçuha
- Informatics Institute, Istanbul Technical University, 34467 Istanbul, Turkey
- Haci Ömer Tontuş
- Faculty of Science and Letters, Istanbul Technical University, 34000 Istanbul, Turkey
- Nguyen Vinh Ngoc Nu
- Faculty of Society and Economics, Rhine-Waal University of Applied Sciences, 47533 Kleve, Germany
15
Ye C, Xu D, Qin Y, Wang L, Wang R, Li W, Kuai Z, Zhu Y. Accurate intravoxel incoherent motion parameter estimation using Bayesian fitting and reduced number of low b-values. Med Phys 2020; 47:4372-4385. [PMID: 32403175 DOI: 10.1002/mp.14233] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2019] [Revised: 03/02/2020] [Accepted: 04/15/2020] [Indexed: 12/28/2022] Open
Abstract
PURPOSE Intravoxel incoherent motion (IVIM) magnetic resonance imaging is a potential noninvasive technique for the diagnosis of brain tumors. However, perfusion-related parameter mapping remains a persistent problem. The purpose of this paper is to investigate IVIM parameter mapping of brain tumors using Bayesian fitting and a reduced number of low b-values. METHODS A Bayesian shrinkage prior (BSP) fitting method and different low b-value distributions were used to estimate the IVIM parameters (diffusion D, pseudo-diffusion D*, and perfusion fraction F). The results were compared to those obtained by least squares (LSQ) on both simulated and in vivo brain data. Relative error (RE) and reproducibility were used to evaluate the results. The differences in IVIM parameters between tumor and normal regions were assessed and used to judge the performance of Bayesian fitting in this application. RESULTS In tumor regions, the estimated D* tended to decrease when the number of low b-values was insufficient, especially with LSQ. BSP required fewer low b-values than LSQ for correct estimation of the perfusion parameters of brain tumors. The IVIM parameter maps of brain tumors yielded by BSP had smaller variability, lower RE, and higher reproducibility than those obtained by LSQ. Clear differences were observed between tumor and normal regions in D (P < 0.05) and F (P < 0.001), especially F. BSP generated fewer outliers than LSQ and better distinguished tumors from normal regions in F. CONCLUSIONS IVIM parameters clearly allow brain tumors to be differentiated from normal regions. Bayesian fitting yields robust IVIM parameter mapping with fewer outliers and requires fewer low b-values than LSQ for parameter estimation.
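For readers unfamiliar with the biexponential IVIM signal model that both fitting methods target, a minimal least-squares (LSQ) baseline can be sketched with `scipy.optimize.curve_fit`. The b-values and tissue parameters below are illustrative assumptions, not the paper's data, and this is only the LSQ comparator, not the Bayesian shrinkage prior method.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, f, d_star, d):
    """Standard IVIM biexponential model, S(b)/S0."""
    return f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d)

# Synthetic normalized signal with assumed brain-like parameter values
b_values = np.array([0, 10, 20, 40, 80, 150, 300, 500, 800, 1000], dtype=float)
f_true, d_star_true, d_true = 0.10, 0.020, 0.0008   # D, D* in mm^2/s
signal = ivim_signal(b_values, f_true, d_star_true, d_true)

# Ordinary least-squares fit (the LSQ baseline the paper compares against)
p0 = (0.15, 0.01, 0.001)                 # rough initial guess
bounds = ([0, 0, 0], [1, 1, 0.01])       # keep parameters physical
popt, _ = curve_fit(ivim_signal, b_values, signal, p0=p0, bounds=bounds)
f_est, d_star_est, d_est = popt
```

On noiseless data the fit recovers the true parameters; the paper's point is that with noisy data and few low b-values, this LSQ estimate of D* becomes unreliable.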
Affiliation(s)
- Chen Ye
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, School of Computer Science and Technology, Guizhou University, Guiyang, China
- Daoyun Xu
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, School of Computer Science and Technology, Guizhou University, Guiyang, China
- Yongbin Qin
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, School of Computer Science and Technology, Guizhou University, Guiyang, China
- Lihui Wang
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, School of Computer Science and Technology, Guizhou University, Guiyang, China
- Rongpin Wang
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, China
- Wuchao Li
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, China
- Zixiang Kuai
- Harbin Medical University Cancer Hospital, Harbin, China
- Yuemin Zhu
- Univ Lyon, INSA Lyon, CNRS, INSERM, CREATIS UMR 5220, U1206, Lyon, F-69621, France
16
Dunham J, Johnson E, Feron E, German B. Automatic Updates of Transition Potential Matrices in Dempster-Shafer Networks Based on Evidence Inputs. Sensors (Basel) 2020; 20:E3727. [PMID: 32635275 DOI: 10.3390/s20133727] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/05/2020] [Revised: 06/24/2020] [Accepted: 06/29/2020] [Indexed: 11/23/2022]
Abstract
Sensor fusion is a topic central to aerospace engineering and is particularly applicable to unmanned aerial systems (UAS). Evidential reasoning, also known as Dempster-Shafer theory, is used heavily in sensor fusion for detection classification, but high computing requirements typically limit its use on small UAS platforms. Valuation networks, the general name given by Shenoy to evidential-reasoning networks, provide a means to reduce computing requirements through knowledge structure. However, these networks use conditional probabilities or transition potential matrices to describe the relationships between nodes, which typically require expert information to define and update. This paper proposes and tests a novel method to learn these transition potential matrices from evidence injected at nodes. Novel refinements to the method are also introduced, demonstrating improvements in capturing the relationships between the node belief distributions. Finally, novel rules are introduced and tested for evidence weighting at nodes during simultaneous evidence injections, correctly balancing the injected evidence used to learn the transition potential matrices. Together, these methods enable updating a Dempster-Shafer network with significantly less user input, making such networks more useful when the relationships between nodes are not sufficiently known a priori.
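The Dempster-Shafer combination rule that underlies this kind of evidence fusion can be sketched in a few lines. The two mass functions below are hypothetical sensor reports; this shows only the basic rule, not the paper's transition-potential learning method.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass that fell on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    # Normalize by 1 - K to redistribute the conflicting mass
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sensors reporting beliefs over {aircraft, bird} (hypothetical example)
A, B = frozenset({"aircraft"}), frozenset({"bird"})
theta = A | B                            # the full frame of discernment
m1 = {A: 0.7, theta: 0.3}
m2 = {A: 0.6, B: 0.1, theta: 0.3}
fused = dempster_combine(m1, m2)
```

After fusion the mass on "aircraft" grows, since both sources lean that way and the small conflicting mass is renormalized away.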
17
Sanchez JM. Linear calibrations in chromatography: The incorrect use of ordinary least squares for determinations at low levels, and the need to redefine the limit of quantification with this regression model. J Sep Sci 2020; 43:2708-2717. [PMID: 32251542 DOI: 10.1002/jssc.202000094] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Revised: 03/27/2020] [Accepted: 03/30/2020] [Indexed: 11/08/2022]
Abstract
Ordinary least squares is widely applied as the standard regression method for analytical calibrations, and it is usually accepted that this regression method can be used for quantification starting at the limit of quantification. However, it requires the calibration to be homoscedastic, which is uncommon. Different calibrations were evaluated to assess whether ordinary least squares is adequate for quantification at low levels. All calibrations evaluated were linear and heteroscedastic. Although acceptable precision was obtained at limit-of-quantification levels, ordinary least squares fitting resulted in significant and unacceptable bias at low levels. When weighted least squares regression was applied, the bias at low levels disappeared and accurate estimates were obtained. With heteroscedastic calibrations, limit values determined by conventional methods are only appropriate if weighted least squares is used. A "practical limit of quantification" can be determined for ordinary least squares in heteroscedastic calibrations, which should be fixed at a minimum of 20 times the value calculated with conventional methods. Biases obtained above this "practical limit" were acceptable with ordinary least squares, and no significant differences were found between estimates from weighted and ordinary least squares when analyzing real-world samples.
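The weighting trick involved here can be sketched with NumPy: scale each row of the design matrix and response by the square root of its weight, then solve an ordinary least-squares problem. The calibration data below are hypothetical and noise-free, so the two fits coincide exactly; with heteroscedastic noise, the 1/x² weights would downweight the high-concentration points and reduce the low-level bias the paper describes.

```python
import numpy as np

# Hypothetical calibration: concentrations x and detector responses y
x = np.array([0.5, 1, 2, 5, 10, 20, 50, 100], dtype=float)
y = 2.0 * x + 0.5                      # noise-free, so OLS and WLS agree

X = np.column_stack([x, np.ones_like(x)])

# Ordinary least squares: every point weighted equally
slope_ols, intercept_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Weighted least squares with 1/x^2 weights (common when noise scales
# with signal): multiply each row of X and y by sqrt(w_i), then solve
# the ordinary problem on the scaled system.
w = 1.0 / x**2
sw = np.sqrt(w)
slope_wls, intercept_wls = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
```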
Affiliation(s)
- Juan M Sanchez
- Science Faculty, Chemistry Department, University of Girona, Girona, Spain
18
Zou Y, Liu H. An Efficient NLOS Errors Mitigation Algorithm for TOA-Based Localization. Sensors (Basel) 2020; 20:s20051403. [PMID: 32143425 PMCID: PMC7085785 DOI: 10.3390/s20051403] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/24/2019] [Revised: 02/13/2020] [Accepted: 03/02/2020] [Indexed: 06/10/2023]
Abstract
In time-of-arrival (TOA) localization systems, errors caused by non-line-of-sight (NLOS) signal propagation can significantly degrade location accuracy. Existing works on NLOS error mitigation commonly assume that the NLOS error statistics or the TOA measurement noise variances are known; such information is generally unavailable in practice. The goal of this paper is to develop an NLOS error mitigation scheme that does not require it. The core of the proposed algorithm is a constrained least-squares optimization, which is converted into a semidefinite programming (SDP) problem that can be solved easily with the CVX toolbox. The scheme is then extended to cooperative source localization. Extensive simulations validate that its performance is better than that of existing schemes in most scenarios.
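As context for the constrained least-squares core, the classical linearized TOA least-squares baseline (not the paper's SDP relaxation) can be sketched as follows; the anchor layout and source position are hypothetical and the ranges are noise-free.

```python
import numpy as np

# Hypothetical 2-D setup: four anchors and a source at a known ground truth
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 7.0])
ranges = np.linalg.norm(anchors - source, axis=1)   # noise-free TOA ranges

# Linearize by subtracting the first range equation from the others:
#   2 (a_i - a_1)^T x = r_1^2 - r_i^2 + ||a_i||^2 - ||a_1||^2
a1, r1 = anchors[0], ranges[0]
A = 2.0 * (anchors[1:] - a1)
b = (r1**2 - ranges[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(a1**2))
estimate = np.linalg.lstsq(A, b, rcond=None)[0]
```

With noiseless ranges the linear solve recovers the source exactly; NLOS bias breaks this, which motivates the robust constrained formulation the paper develops.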
Affiliation(s)
- Yanbin Zou
- Department of Electronic and Information Engineering, Shantou University, Shantou 515063, China
- Huaping Liu
- School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR 97331, USA
19
Abstract
Commonly used methods for estimating parameters of a spatial dynamic panel data model include the two-stage least squares, quasi-maximum likelihood, and generalized moments. In this paper, we present an approach that uses the eigenvalues and eigenvectors of a spatial weight matrix to directly construct consistent least-squares estimators of parameters of a general spatial dynamic panel data model. The proposed methodology is conceptually simple and efficient and can be easily implemented. We show that the proposed parameter estimators are consistent and asymptotically normally distributed under mild conditions. We demonstrate the superior performance of our approach via extensive simulation studies. We also provide a real data example.
20
Przysowa R, Russhard P. Non-Contact Measurement of Blade Vibration in an Axial Compressor. Sensors (Basel) 2019; 20:E68. [PMID: 31877689 DOI: 10.3390/s20010068] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/24/2019] [Revised: 12/18/2019] [Accepted: 12/19/2019] [Indexed: 11/16/2022]
Abstract
Complex blade responses, such as a rotating stall or simultaneous resonances, are common in modern engines, and observing them can be a challenge even for state-of-the-art tip-timing systems and trained operators. This paper analyses forced vibrations of axial compressor blades measured during bench tests of the SO-3 turbojet. In contrast to earlier studies conducted in Poland with a small number of sensors, a multichannel tip-timing system let us observe simultaneous responses and higher-order modes. To find possible symptoms of a failure, blade responses were studied in a healthy engine configuration and in an unhealthy one with an inlet blocker. The analysis methods used were the all-blade spectrum and the circumferential fitting of blade deflections to the harmonic oscillator model. The Pearson correlation coefficient between the measured and predicted tip deflections is calculated to evaluate the fitting results; it helps to avoid common operator mistakes and misinterpretation of the results. The proposed modal solver can track the vibration frequency and adjust the engine order on the fly, so synchronous and asynchronous vibrations are observed and analysed together with an extended variant of least squares. This approach saves much of the work involved in configuring a conventional tip-timing solver.
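For a single engine order, the circumferential least-squares fit of tip deflections reduces to a linear sine fit across the probe positions. A generic sketch with assumed probe angles and amplitudes (not the paper's data):

```python
import numpy as np

# Hypothetical tip-timing data: deflections sampled at probe angular positions
probe_angles = np.deg2rad(np.array([0.0, 35.0, 80.0, 150.0, 220.0, 300.0]))
EO = 4                                   # assumed engine order (excitation harmonic)
A_true, B_true, C_true = 0.8, -0.3, 0.1  # mm: sine, cosine and steady components
y = (A_true * np.sin(EO * probe_angles)
     + B_true * np.cos(EO * probe_angles) + C_true)

# Circumferential least-squares fit: y = A sin(EO*theta) + B cos(EO*theta) + C
M = np.column_stack([np.sin(EO * probe_angles),
                     np.cos(EO * probe_angles),
                     np.ones_like(probe_angles)])
A_fit, B_fit, C_fit = np.linalg.lstsq(M, y, rcond=None)[0]
amplitude = np.hypot(A_fit, B_fit)       # vibration amplitude at this engine order
```

The solver described in the abstract extends this idea by adjusting EO on the fly and checking the fit quality with the Pearson correlation coefficient.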
21
Sidhu G. Locally Linear Embedding and fMRI Feature Selection in Psychiatric Classification. IEEE J Transl Eng Health Med 2019; 7:2200211. [PMID: 31497410 PMCID: PMC6726465 DOI: 10.1109/jtehm.2019.2936348] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/10/2019] [Revised: 07/12/2019] [Accepted: 08/15/2019] [Indexed: 01/29/2023]
Abstract
BACKGROUND Functional magnetic resonance imaging (fMRI) provides non-invasive measures of neuronal activity using an endogenous Blood Oxygenation-Level Dependent (BOLD) contrast. This article introduces a nonlinear dimensionality reduction (Locally Linear Embedding) to extract informative measures of the underlying neuronal activity from BOLD time-series. The method is validated using the Leave-One-Out-Cross-Validation (LOOCV) accuracy of classifying psychiatric diagnoses using resting-state and task-related fMRI. METHODS Locally Linear Embedding of BOLD time-series (into each voxel's respective tensor) was used to optimise feature selection. This uses Gauß' Principle of Least Constraint to conserve quantities over both space and time. This conservation was assessed using LOOCV to greedily select time points in an incremental fashion on training data that was categorised in terms of psychiatric diagnoses. FINDINGS The embedded fMRI gave highly diagnostic performances (> 80%) on eleven publicly-available datasets containing healthy controls and patients with either Schizophrenia, Attention-Deficit Hyperactivity Disorder (ADHD), or Autism Spectrum Disorder (ASD). Furthermore, unlike the original fMRI data before or after using Principal Component Analysis (PCA) for artefact reduction, the embedded fMRI furnished significantly better than chance classification (defined as the majority class proportion) on ten of eleven datasets. INTERPRETATION Locally Linear Embedding appears to be a useful feature extraction procedure that retains important information about patterns of brain activity distinguishing among psychiatric cohorts.
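The first step of Locally Linear Embedding, reconstructing each point from its neighbors with barycentric (sum-to-one) weights, can be sketched in NumPy. The point and neighbors below are hypothetical, and the regularization constant is an assumption; this illustrates the generic LLE step, not the article's full fMRI pipeline.

```python
import numpy as np

def lle_weights(x, neighbors, reg=1e-3):
    """Barycentric reconstruction weights for one point (step 1 of LLE).

    Minimizes ||x - sum_j w_j n_j||^2 subject to sum_j w_j = 1,
    via the regularized local Gram matrix."""
    Z = neighbors - x                    # shift neighbors to the query point
    G = Z @ Z.T                          # local Gram matrix (k x k)
    G = G + reg * np.trace(G) * np.eye(len(G))   # regularize (G is often singular)
    w = np.linalg.solve(G, np.ones(len(G)))
    return w / w.sum()                   # enforce the sum-to-one constraint

# Hypothetical: a point lying in the affine span of its three neighbors
neighbors = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])
x = 0.2 * neighbors[0] + 0.5 * neighbors[1] + 0.3 * neighbors[2]
w = lle_weights(x, neighbors)
```

Because x sits exactly in its neighbors' affine span, the recovered weights are close to the barycentric coordinates (0.2, 0.5, 0.3); in LLE these weights are then preserved while embedding into a lower-dimensional space.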
Affiliation(s)
- Gagan Sidhu
- Department of Computing Science, 1-337 Athabasca Hall, University of Alberta, Edmonton, AB T6G 2E8, Canada
22
Sha W, Li J, Xiao W, Ling P, Lu C. Quantitative Analysis of Elements in Fertilizer Using Laser-Induced Breakdown Spectroscopy Coupled with Support Vector Regression Model. Sensors (Basel) 2019; 19:s19153277. [PMID: 31349648 PMCID: PMC6696108 DOI: 10.3390/s19153277] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Revised: 07/18/2019] [Accepted: 07/22/2019] [Indexed: 11/16/2022]
Abstract
The rapid detection of the elements nitrogen (N), phosphorus (P), and potassium (K) is beneficial to the control of the compound fertilizer production process, and it is of great significance in the fertilizer industry. The aim of this work was to compare parameter optimization methods for laser-induced breakdown spectroscopy (LIBS) coupled with support vector regression (SVR), and to obtain an accurate and reliable method for the rapid detection of all three elements. A total of 58 fertilizer samples were provided by Anhui Huilong Group. The samples were divided into a calibration set (43 samples) and a prediction set (15 samples) by the Kennard–Stone (KS) method. Four different parameter optimization methods were used to construct the SVR calibration models from element concentrations and characteristic line intensities: the traditional grid search method (GSM), genetic algorithm (GA), particle swarm optimization (PSO), and least squares (LS). The training time, determination coefficient, and root-mean-square error of each method were analyzed. The results indicated that LIBS coupled with least squares–support vector regression (LS-SVR) can be a reliable and accurate method for the quantitative determination of N, P, and K in a complex matrix like compound fertilizer.
Affiliation(s)
- Wen Sha
- Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Electric Engineering and Automation, Anhui University, Hefei 230061, China
- Jiangtao Li
- Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Electric Engineering and Automation, Anhui University, Hefei 230061, China
- Wubing Xiao
- Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Electric Engineering and Automation, Anhui University, Hefei 230061, China
- Pengpeng Ling
- Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, School of Electric Engineering and Automation, Anhui University, Hefei 230061, China
- Cuiping Lu
- Laboratory of Intelligent Decision, Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei 230031, China
23
Abstract
We introduce a structured low rank algorithm for the calibration-free compensation of field inhomogeneity artifacts in echo planar imaging (EPI) MRI data. We acquire the data using two EPI readouts that differ in echo-time. Using time segmentation, we reformulate the field inhomogeneity compensation problem as the recovery of an image time series from highly undersampled Fourier measurements. The temporal profile at each pixel is modeled as a single exponential, which is exploited to fill in the missing entries. We show that the exponential behavior at each pixel, along with the spatial smoothness of the exponential parameters, can be exploited to derive a 3-D annihilation relation in the Fourier domain. This relation translates to a low rank property on a structured multi-fold Toeplitz matrix, whose entries correspond to the measured k-space samples. We introduce a fast two-step algorithm for the completion of the Toeplitz matrix from the available samples. In the first step, we estimate the null space vectors of the Toeplitz matrix using only its fully sampled rows. The null space is then used to estimate the signal subspace, which facilitates the efficient recovery of the time series of images. We finally demonstrate the proposed approach on spherical MR phantom data and human data and show that the artifacts are significantly reduced.
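The low-rank property exploited here can be illustrated in a 1-D analogue (not the paper's 3-D construction): a uniformly sampled exponential satisfies a two-term annihilation relation, so a Hankel matrix built from it has rank equal to the number of exponentials. The pole values below are arbitrary assumptions.

```python
import numpy as np
from scipy.linalg import hankel

# A sampled exponential c * rho^n satisfies a two-term annihilation relation,
# so a Hankel matrix built from it has rank 1; a sum of R exponentials gives
# rank R. (1-D analogue of the paper's structured Toeplitz matrix.)
n = np.arange(64)
rho1 = 0.95 * np.exp(1j * 0.3)           # assumed pole
rho2 = 0.80 * np.exp(1j * 1.1)           # assumed second pole
x1 = rho1 ** n
x2 = rho1 ** n + 0.5 * rho2 ** n

H1 = hankel(x1[:32], x1[31:])            # 32 x 33 Hankel matrix, one exponential
H2 = hankel(x2[:32], x2[31:])            # same construction, two exponentials

s1 = np.linalg.svd(H1, compute_uv=False)
s2 = np.linalg.svd(H2, compute_uv=False)
rank1 = int((s1 > s1[0] * 1e-8).sum())   # numerical rank via singular values
rank2 = int((s2 > s2[0] * 1e-8).sum())
```

This rank deficiency is what lets missing matrix entries (undersampled k-space in the paper) be filled in from the fully sampled rows.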
Affiliation(s)
- Arvind Balachandrasekaran
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52245, USA
- Merry Mani
- Department of Radiology, University of Iowa, Iowa City, IA 52245, USA
- Mathews Jacob
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52245, USA
24
Eldén L, Trendafilov N. Semi-sparse PCA. Psychometrika 2019; 84:164-185. [PMID: 30483924 DOI: 10.1007/s11336-018-9650-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/18/2017] [Indexed: 06/09/2023]
Abstract
It is well known that the classical exploratory factor analysis (EFA) of data with more observations than variables has several types of indeterminacy. We study the factor indeterminacy and show some new aspects of this problem by considering EFA as a specific data matrix decomposition. We adopt a new approach to the EFA estimation and achieve a new characterization of the factor indeterminacy problem. A new alternative model is proposed, which gives determinate factors and can be seen as a semi-sparse principal component analysis (PCA). An alternating algorithm is developed, where in each step a Procrustes problem is solved. It is demonstrated that the new model/algorithm can act as a specific sparse PCA and as a low-rank-plus-sparse matrix decomposition. Numerical examples with several large data sets illustrate the versatility of the new model, and the performance and behaviour of its algorithmic implementation.
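Each step of the alternating algorithm solves a Procrustes problem, which has the well-known closed-form SVD solution. A sketch on synthetic matrices (random data; this is the generic subproblem, not the paper's full algorithm):

```python
import numpy as np

def procrustes(A, B):
    """Orthogonal Procrustes: the Q with Q^T Q = I minimizing ||A - B Q||_F.

    Closed-form solution: Q = U V^T from the SVD of B^T A."""
    U, _, Vt = np.linalg.svd(B.T @ A)
    return U @ Vt

rng = np.random.default_rng(0)
B = rng.standard_normal((20, 4))
Q_true, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # random rotation/reflection
A = B @ Q_true                                          # A is an exact rotation of B
Q = procrustes(A, B)
```

Since A is an exact orthogonal transform of a full-rank B, the SVD recovers Q_true exactly; inside an alternating scheme, B would be the current estimate of the other factor.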
Affiliation(s)
- Lars Eldén
- Department of Mathematics, Linköping University, Linköping, Sweden
25
Bi J, Wang Y, Li Z, Xu S, Zhou J, Sun M, Si M. Fast Radio Map Construction by using Adaptive Path Loss Model Interpolation in Large-Scale Building. Sensors (Basel) 2019; 19:s19030712. [PMID: 30744141 PMCID: PMC6387199 DOI: 10.3390/s19030712] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/19/2019] [Revised: 02/04/2019] [Accepted: 02/07/2019] [Indexed: 11/17/2022]
Abstract
Radio map construction is usually time-consuming and labor-intensive in indoor fingerprinting localization. We propose a fast construction method using adaptive path loss model interpolation. Received signal strength (RSS) fingerprints are collected at sparse reference points by multiple smartphones through crowdsourcing. The path loss model of an access point (AP) can then be built from several reference points in a small area by the least squares method, and the RSS value can be calculated from the constructed model and the corresponding AP's location. In the small area, models can be built for all detectable APs, and the corresponding RSS values estimated at each interpolated point to form interpolated fingerprints, taking RSS loss, RSS noise, and an RSS threshold into account. Combining all interpolated and sparse reference fingerprints yields the radio map of the whole area. Experiments were conducted in corridors with a length of 211 m. To evaluate RSS estimation and positioning accuracy, inverse distance weighted and Kriging interpolation methods were introduced for comparison with the proposed method. Experimental results show that the proposed method achieves the same positioning accuracy as a complete manually surveyed radio map even with a reference-point interval of 9.6 m, reducing construction effort and time by 85%.
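The per-AP least-squares step can be sketched for the standard log-distance path loss model; the distances, reference power, and exponent below are hypothetical, and the model form is the common textbook one rather than necessarily the exact adaptive variant used in the paper.

```python
import numpy as np

# Log-distance path loss model: RSS(d) = P0 - 10 * n * log10(d / d0).
# Fitting P0 and the path loss exponent n is an ordinary least-squares
# problem, linear in log10(d). (All values below are hypothetical.)
d0 = 1.0                                  # reference distance, m
P0_true, n_true = -40.0, 2.7              # dBm at d0, path loss exponent
d = np.array([1.0, 2.0, 4.0, 8.0, 15.0, 30.0])      # reference-point distances, m
rss = P0_true - 10.0 * n_true * np.log10(d / d0)    # noise-free RSS, dBm

# Design matrix: rss = P0 * 1 + n * (-10 * log10(d/d0))
X = np.column_stack([np.ones_like(d), -10.0 * np.log10(d / d0)])
P0_est, n_est = np.linalg.lstsq(X, rss, rcond=None)[0]

# Interpolate RSS at a new point with the fitted model
rss_at_20m = P0_est - 10.0 * n_est * np.log10(20.0 / d0)
```

Once P0 and n are fitted for each detectable AP, RSS fingerprints can be synthesized at arbitrary interpolated points, which is what lets the sparse survey replace a dense manual one.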
Affiliation(s)
- Jingxue Bi
- NASG Key Laboratory of Land Environment and Disaster Monitoring, China University of Mining and Technology, Xuzhou 221116, China
- School of Environmental Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
- Yunjia Wang
- NASG Key Laboratory of Land Environment and Disaster Monitoring, China University of Mining and Technology, Xuzhou 221116, China
- School of Environmental Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
- Zengke Li
- School of Environmental Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
- Shenglei Xu
- School of Environmental Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
- Jiapeng Zhou
- School of Environmental Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
- Meng Sun
- School of Environmental Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
- Minghao Si
- School of Environmental Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China
26
Zhang S, Wang D, Liu F. Separate block-based parameter estimation method for Hammerstein systems. R Soc Open Sci 2018; 5:172194. [PMID: 30110418 PMCID: PMC6030268 DOI: 10.1098/rsos.172194] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/13/2017] [Accepted: 05/22/2018] [Indexed: 06/08/2023]
Abstract
Different from output-input representation-based identification methods for two-block Hammerstein systems, this paper concerns a separate block-based parameter estimation method for each block of a two-block Hammerstein CARMA system, without combining the parameters of the two parts together. The idea is to treat each block as a subsystem and to estimate the parameters of the nonlinear block and the linear block separately (interactively), using two least-squares algorithms in one recursive step. The internal variable between the two blocks (the output of the nonlinear block, and also the input of the linear block) is replaced by different estimates: when estimating the parameters of the nonlinear part, the internal variable is computed by the linear function; when estimating the parameters of the linear part, the internal variable is computed by the nonlinear function. The proposed method offers higher computational efficiency than the previous over-parametrization method, in which many redundant parameters must be computed. Simulation results show the effectiveness of the proposed algorithm.
Affiliation(s)
- Shuo Zhang
- College of Automation and Electrical Engineering, Qingdao University, Qingdao, 266071, People's Republic of China
- Dongqing Wang
- College of Automation and Electrical Engineering, Qingdao University, Qingdao, 266071, People's Republic of China
- Collaborative Innovation Center for Eco-Textiles of Shandong Province, Qingdao, 266071, People's Republic of China
- Feng Liu
- Department of Industrial Engineering, University of Texas at Arlington, TX 76019, USA
27
Shahid A, Choi JH, Rana AUHS, Kim HS. Least Squares Neural Network-Based Wireless E-Nose System Using an SnO₂ Sensor Array. Sensors (Basel) 2018; 18:s18051446. [PMID: 29734783 PMCID: PMC5982671 DOI: 10.3390/s18051446] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/26/2018] [Revised: 05/02/2018] [Accepted: 05/03/2018] [Indexed: 11/17/2022]
Abstract
Over the last few decades, the development of the electronic nose (E-nose) for detection and quantification of dangerous and odorless gases, such as methane (CH4) and carbon monoxide (CO), using an array of SnO2 gas sensors has attracted considerable attention. This paper addresses sensor cross-sensitivity by developing a classifier and an estimator using an artificial neural network (ANN) and least squares regression (LSR), respectively. Initially, the ANN was implemented with a feedforward pattern recognition algorithm to learn the collective behavior of the array as the signature of a particular gas. In the second phase, the classified gas was quantified by minimizing the mean square error using LSR. The combined approach produced a 98.7% recognition probability, with 95.5% and 94.4% estimated gas concentration accuracies for CH4 and CO, respectively. The classifier and estimator parameters were deployed in a remote microcontroller to realize a wireless E-nose system.
Affiliation(s)
- Areej Shahid
- Division of Electronics and Electrical Engineering, Dongguk University-Seoul, Seoul 04620, Korea
- Jong-Hyeok Choi
- Division of Electronics and Electrical Engineering, Dongguk University-Seoul, Seoul 04620, Korea
- Hyun-Seok Kim
- Division of Electronics and Electrical Engineering, Dongguk University-Seoul, Seoul 04620, Korea
28
Waller N. An Introduction to Kristof's Theorem for Solving Least-Square Optimization Problems Without Calculus. Multivariate Behav Res 2018; 53:190-198. [PMID: 29323539 DOI: 10.1080/00273171.2017.1412294] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Kristof's Theorem (Kristof, 1970) describes a matrix trace inequality that can be used to solve a wide class of least-squares optimization problems without calculus. Considering its generality, it is surprising that Kristof's Theorem is rarely used in statistical and psychometric applications. The underutilization of this method likely stems, in part, from the mathematical complexity of Kristof's (1964, 1970) writings. In this article, I describe the underlying logic of Kristof's Theorem in simple terms by reviewing four key mathematical ideas that are used in the theorem's proof. I then show how Kristof's Theorem can be used to provide novel derivations of two cognate models from statistics and psychometrics. This tutorial includes a glossary of technical terms and an online supplement with R (R Core Team, 2017) code to perform the calculations described in the text.
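The two-matrix case of Kristof's trace inequality can be checked numerically: for diagonal matrices with decreasing nonnegative entries, the trace is maximized when the orthogonal factors are identity matrices. (A numerical illustration, not a proof; the matrix entries are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(42)

def random_orthogonal(k, rng):
    Q, R = np.linalg.qr(rng.standard_normal((k, k)))
    return Q * np.sign(np.diag(R))       # sign fix so the distribution is uniform

# Two-matrix case of Kristof's inequality: for diagonal D1, D2 with
# decreasing nonnegative entries and any orthogonal X1, X2,
#   tr(X1 D1 X2 D2) <= tr(D1 D2),   attained at X1 = X2 = I.
D1 = np.diag([5.0, 3.0, 1.0])
D2 = np.diag([4.0, 2.0, 0.5])
bound = np.trace(D1 @ D2)                # 5*4 + 3*2 + 1*0.5 = 26.5

values = [np.trace(random_orthogonal(3, rng) @ D1 @ random_orthogonal(3, rng) @ D2)
          for _ in range(1000)]
```

No sampled orthogonal pair exceeds the bound; the least-squares applications in the article come from reading off the maximizing orthogonal factors in closed form.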
29
Antensteiner D, Štolc S, Pock T. A Review of Depth and Normal Fusion Algorithms. Sensors (Basel) 2018; 18:E431. [PMID: 29389903 DOI: 10.3390/s18020431] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/11/2017] [Revised: 12/21/2017] [Accepted: 01/26/2018] [Indexed: 11/17/2022]
Abstract
Geometric surface information such as depth maps and surface normals can be acquired by various methods such as stereo light fields, shape from shading and photometric stereo techniques. We compare several algorithms which deal with the combination of depth with surface normal information in order to reconstruct a refined depth map. The reasons for performance differences are examined from the perspective of alternative formulations of surface normals for depth reconstruction. We review and analyze methods in a systematic way. Based on our findings, we introduce a new generalized fusion method, which is formulated as a least squares problem and outperforms previous methods in the depth error domain by introducing a novel normal weighting that performs closer to the geodesic distance measure. Furthermore, a novel method is introduced based on Total Generalized Variation (TGV) which further outperforms previous approaches in terms of the geodesic normal distance error and maintains comparable quality in the depth error domain.
Collapse
|
30
|
Shaw CB, Hui ES, Helpern JA, Jensen JH. Tensor estimation for double-pulsed diffusional kurtosis imaging. NMR Biomed 2017; 30:e3722. [PMID: 28328072 DOI: 10.1002/nbm.3722] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/04/2016] [Revised: 02/08/2017] [Accepted: 02/09/2017] [Indexed: 06/06/2023]
Abstract
Double-pulsed diffusional kurtosis imaging (DP-DKI) represents the double diffusion encoding (DDE) MRI signal in terms of six-dimensional (6D) diffusion and kurtosis tensors. Here a method for estimating these tensors from experimental data is described. A standard numerical algorithm for tensor estimation from conventional (i.e. single diffusion encoding) diffusional kurtosis imaging (DKI) data is generalized to DP-DKI. This algorithm is based on a weighted least squares (WLS) fit of the signal model to the data combined with constraints designed to minimize unphysical parameter estimates. The numerical algorithm then takes the form of a quadratic programming problem. The principal change required to adapt the conventional DKI fitting algorithm to DP-DKI is replacing the three-dimensional diffusion and kurtosis tensors with the 6D tensors needed for DP-DKI. In this way, the 6D diffusion and kurtosis tensors for DP-DKI can be conveniently estimated from DDE data by using constrained WLS, providing a practical means for condensing DDE measurements into well-defined mathematical constructs that may be useful for interpreting and applying DDE MRI. Brain data from healthy volunteers are used to demonstrate the DP-DKI tensor estimation algorithm. In particular, representative parametric maps of selected tensor-derived rotational invariants are presented.
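The WLS core of such a fit (without the physicality constraints, which turn it into a quadratic program) has the familiar closed form β = (XᵀWX)⁻¹XᵀWy. A generic sketch with simulated heteroscedastic data, all names and dimensions illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 3
X = rng.standard_normal((n, p))              # design matrix (signal model)
beta_true = np.array([1.5, -0.7, 0.3])
sigma = rng.uniform(0.5, 2.0, size=n)        # heteroscedastic noise levels
y = X @ beta_true + sigma * rng.standard_normal(n)

# Weighted least squares: weight each observation by its inverse variance.
W = np.diag(1.0 / sigma**2)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Adding the nonnegativity-style constraints described in the abstract would replace this closed-form solve with a quadratic programming step over the same objective.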
Collapse
Affiliation(s)
- Calvin B Shaw
- Center for Biomedical Imaging, Medical University of South Carolina, Charleston, South Carolina, USA
- Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, South Carolina, USA
| | - Edward S Hui
- Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Pokfulam, Hong Kong, SAR, China
| | - Joseph A Helpern
- Center for Biomedical Imaging, Medical University of South Carolina, Charleston, South Carolina, USA
- Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, South Carolina, USA
- Department of Neuroscience, Medical University of South Carolina, Charleston, South Carolina, USA
- Department of Neurology, Medical University of South Carolina, Charleston, South Carolina, USA
| | - Jens H Jensen
- Center for Biomedical Imaging, Medical University of South Carolina, Charleston, South Carolina, USA
- Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, South Carolina, USA
| |
Collapse
|
31
|
Mohamed H, Moussa A, Elhabiby M, El-Sheimy N, Sesay A. A Novel Real-Time Reference Key Frame Scan Matching Method. Sensors (Basel) 2017; 17:E1060. [PMID: 28481285 DOI: 10.3390/s17051060] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/18/2017] [Revised: 04/23/2017] [Accepted: 05/03/2017] [Indexed: 11/17/2022]
Abstract
Unmanned aerial vehicles (UAVs) represent an effective technology for indoor search and rescue operations. Most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by simultaneous localization and mapping, using either local or global scan matching. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outliers in the association process. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique that combines feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, inspired by the video streaming broadcast process. It falls back on the iterative closest point algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with those of various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results with very short computation times, indicating its potential for use in real-time systems.
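For fixed correspondences, the point-to-point component of such hybrid scan matching reduces to a closed-form rigid alignment, the step that ICP repeats after re-estimating correspondences each iteration. A minimal noise-free 2D sketch using the Kabsch/Procrustes solution (data illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
P = rng.uniform(-5.0, 5.0, size=(40, 2))     # reference scan points

theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, -2.0])
Q = P @ R_true.T + t_true                    # new scan: rotated and translated

# Closed-form least-squares rigid alignment for known correspondences.
Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
U, _, Vt = np.linalg.svd(Pc.T @ Qc)
d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
R_est = Vt.T @ np.diag([1.0, d]) @ U.T
t_est = Q.mean(axis=0) - R_est @ P.mean(axis=0)
```

With exact correspondences the recovery is exact; in a full ICP loop this solve alternates with nearest-neighbor association, which is where the outlier sensitivity mentioned above comes from.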
Collapse
|
32
|
Tang LL, Yuan A, Collins J, Che X, Chan L. Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard. Cancer Inform 2017; 16:1176935116686063. [PMID: 28469385 PMCID: PMC5392027 DOI: 10.1177/1176935116686063] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2016] [Accepted: 11/24/2016] [Indexed: 12/29/2022] Open
Abstract
The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input “data.” It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is transformed empirical ROC curves at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values so that the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze two real cancer diagnostic examples as illustrations.
Collapse
Affiliation(s)
- Liansheng Larry Tang
- Department of Statistics, George Mason University, Fairfax, VA, USA.,Rehabilitation Medicine Department, NIH Clinical Center, Bethesda, MD, USA
| | - Ao Yuan
- Rehabilitation Medicine Department, NIH Clinical Center, Bethesda, MD, USA.,Department of Biostatistics, Bioinformatics and Biomathematics, Georgetown University, Washington, DC, USA
| | - John Collins
- Department of Statistics, George Mason University, Fairfax, VA, USA.,Rehabilitation Medicine Department, NIH Clinical Center, Bethesda, MD, USA
| | - Xuan Che
- Rehabilitation Medicine Department, NIH Clinical Center, Bethesda, MD, USA
| | - Leighton Chan
- Rehabilitation Medicine Department, NIH Clinical Center, Bethesda, MD, USA
| |
Collapse
|
33
|
Wang Q, Wang Y, Zhu G. Matched Field Processing Based on Least Squares with a Small Aperture Hydrophone Array. Sensors (Basel) 2016; 17:E71. [PMID: 28042828 DOI: 10.3390/s17010071] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/24/2016] [Revised: 12/20/2016] [Accepted: 12/27/2016] [Indexed: 11/17/2022]
Abstract
The receiver hydrophone array is the signal front-end and plays an important role in matched field processing; it usually must cover the whole water column from the sea surface to the bottom, and such a large aperture array is very difficult to realize. To solve this problem, an approach called matched field processing based on least squares with a small aperture hydrophone array is proposed. It first decomposes the received acoustic fields into a depth function matrix and the amplitudes of the normal modes. All the mode amplitudes are then estimated using least squares in the minimum-norm sense, and the estimated amplitudes are used to recalculate the received acoustic fields of the small aperture array, so that the recalculated fields contain more environmental information. Finally, extensive numerical experiments with three small aperture arrays are carried out in a classical shallow-water setting, and the performance of matched field passive localization is evaluated. The results show that the proposed method makes the recalculated fields contain more acoustic information about the source and improves the performance of matched field passive localization with a small aperture array, demonstrating the effectiveness of the proposed algorithm.
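The minimum-norm least-squares step can be sketched with the Moore-Penrose pseudoinverse, which returns exactly that solution when the mode-amplitude system is underdetermined (a small array sampling more modes than it has sensors; dimensions illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n_modes, n_hydrophones = 8, 5                          # more modes than sensors
Phi = rng.standard_normal((n_hydrophones, n_modes))    # mode depth functions
a_true = rng.standard_normal(n_modes)                  # true mode amplitudes
p = Phi @ a_true                                       # pressure at the array

# Underdetermined system: the pseudoinverse gives the minimum-norm
# least-squares estimate of the mode amplitudes.
a_hat = np.linalg.pinv(Phi) @ p
p_recon = Phi @ a_hat                                  # recalculated field
```

The recalculated field reproduces the measurements exactly, and among all amplitude vectors consistent with them, `a_hat` has the smallest Euclidean norm.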
Collapse
|
34
|
Berberidis D, Kekatos V, Giannakis GB. Online Censoring for Large-Scale Regressions with Application to Streaming Big Data. IEEE Trans Signal Process 2016; 64:3854-3867. [PMID: 28042229 PMCID: PMC5198787 DOI: 10.1109/tsp.2016.2546225] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
On par with data-intensive applications, the sheer size of modern linear regression problems creates an ever-growing demand for efficient solvers. Fortunately, a significant percentage of the data accrued can be omitted while maintaining a certain quality of statistical inference with an affordable computational budget. This work introduces means of identifying and omitting less informative observations in an online and data-adaptive fashion. Given streaming data, the related maximum-likelihood estimator is sequentially found using first- and second-order stochastic approximation algorithms. These schemes are well suited when data are inherently censored or when the aim is to save communication overhead in decentralized learning setups. In a different operational scenario, the task of joint censoring and estimation is put forth to solve large-scale linear regressions in a centralized setup. Novel online algorithms are developed enjoying simple closed-form updates and provable (non)asymptotic convergence guarantees. To attain desired censoring patterns and levels of dimensionality reduction, thresholding rules are investigated too. Numerical tests on real and synthetic datasets corroborate the efficacy of the proposed data-adaptive methods compared to data-agnostic random projection-based alternatives.
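A toy sketch of the censoring idea (not the paper's algorithms): a first-order, LMS-style recursion that updates only on observations whose residual exceeds a threshold, discarding the less informative ones. Threshold and step size below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
p = 3
beta_true = np.array([2.0, -1.0, 0.5])
beta = np.zeros(p)
tau, step = 0.5, 0.05                 # censoring threshold, learning rate
n_used = 0

for t in range(5000):
    x = rng.standard_normal(p)
    y = x @ beta_true + 0.1 * rng.standard_normal()
    r = y - x @ beta                  # innovation on the streaming sample
    if abs(r) > tau:                  # keep only informative observations
        beta += step * r * x          # first-order stochastic update
        n_used += 1
```

Most observations end up censored once the estimate is close, yet the recursion still converges near the true coefficients, which is the computational saving the abstract describes.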
Collapse
|
35
|
Zhu B, Li J, Chu Z, Tang W, Wang B, Li D. A Robust and Multi-Weighted Approach to Estimating Topographically Correlated Tropospheric Delays in Radar Interferograms. Sensors (Basel) 2016; 16:s16071078. [PMID: 27420066 PMCID: PMC4970124 DOI: 10.3390/s16071078] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/04/2016] [Revised: 06/16/2016] [Accepted: 07/08/2016] [Indexed: 11/19/2022]
Abstract
Spatial and temporal variations in the vertical stratification of the troposphere introduce significant propagation delays in interferometric synthetic aperture radar (InSAR) observations. Observations of small-amplitude surface deformations and regional subsidence rates are plagued by tropospheric delays, which are strongly correlated with topographic height variations. Phase-based tropospheric correction techniques assuming a linear relationship between interferometric phase and topography have been exploited and developed, with mixed success. Producing robust estimates of tropospheric phase delay nevertheless plays a critical role in increasing the accuracy of InSAR measurements, yet few phase-based correction methods account for the spatially variable tropospheric delay over larger study regions. Here, we present a robust and multi-weighted approach to estimate the correlation between phase and topography that is relatively insensitive to confounding processes such as regional subsidence over larger regions as well as under varying tropospheric conditions. An expanded form of robust least squares is introduced to estimate the spatially variable correlation between phase and topography by splitting the interferograms into multiple blocks. Within each block, the correlation is robustly estimated from the band-filtered phase and topography. Phase-elevation ratios are multiply weighted and extrapolated to each persistent scatterer (PS) pixel. We applied the proposed method to Envisat ASAR images over the Southern California area, USA, and found that our method mitigated the atmospheric noise better than the conventional phase-based method. The corrected ground surface deformation agreed better with GPS measurements.
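The robust estimation within one block can be sketched as iteratively reweighted least squares with Huber weights, fitting the phase-elevation ratio while downweighting pixels whose phase is dominated by deformation rather than troposphere. The data, noise level, and tuning constants below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 300
elev = rng.uniform(0.0, 2000.0, size=n)       # topographic height (m)
k_true, c_true = 2e-3, 1.0                    # phase-elevation ratio, offset
phase = k_true * elev + c_true + 0.2 * rng.standard_normal(n)
phase[:30] += 5.0                             # deforming pixels act as outliers

X = np.column_stack([elev, np.ones(n)])
w = np.ones(n)
for _ in range(20):                           # IRLS with Huber weights
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * phase))
    r = phase - X @ beta
    s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
    c = 1.345 * s                             # standard Huber tuning constant
    w = np.where(np.abs(r) <= c, 1.0, c / np.abs(r))

k_est = beta[0]                               # robust phase-elevation ratio
```

The Huber weights shrink the influence of the deforming pixels, so the estimated ratio stays close to the tropospheric one even with 10% contamination.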
Collapse
Affiliation(s)
- Bangyan Zhu
- School of Geodesy and Geomatics, Wuhan University, 129 Luoyu Road, Wuhan 430079, China.
| | - Jiancheng Li
- School of Geodesy and Geomatics, Wuhan University, 129 Luoyu Road, Wuhan 430079, China.
| | - Zhengwei Chu
- Nanjing Institute of Surveying, Mapping and Geotechnical Investigation, Nanjing 210019, China.
| | - Wei Tang
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China.
| | - Bin Wang
- School of Geodesy and Geomatics, Wuhan University, 129 Luoyu Road, Wuhan 430079, China.
| | - Dawei Li
- School of Geodesy and Geomatics, Wuhan University, 129 Luoyu Road, Wuhan 430079, China.
| |
Collapse
|
36
|
Abstract
Modeling the structural ensemble of intrinsically disordered proteins (IDPs), which lack fixed structures, is essential in understanding their cellular functions and revealing their regulation mechanisms in signaling pathways of related diseases (e.g., cancers and neurodegenerative disorders). Though the ensemble concept is widely believed to be the most accurate way to depict the 3D structures of IDPs, few of the traditional ensemble-based approaches effectively address the degeneracy problem, which occurs when multiple solutions are consistent with the experimental data and is the main challenge in the IDP ensemble construction task. In this article, based on a predefined conformational library, we formalize the structure ensemble construction problem in a least squares framework, which provides the optimal solution when the data constraints outnumber the unknown variables. To deal with the degeneracy problem, we further propose a regularized regression approach based on the elastic net technique, under the assumption that the weights to be estimated for individual structures in the ensemble are sparse. We have validated our methods through a reference ensemble approach as well as by testing the real biological data of three proteins, including alpha-synuclein, the translocation domain of Colicin N, and the K18 domain of the Tau protein.
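The sparse-weight regularization described above can be sketched with scikit-learn's elastic net; this is a generic illustration of the technique, not the authors' pipeline, with the nonnegativity of ensemble weights imposed via `positive=True` and all sizes and penalty settings chosen for the example:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(7)
n_data, n_conf = 60, 200                        # data constraints vs library size
A = rng.standard_normal((n_data, n_conf))       # predicted observable per conformation
w_true = np.zeros(n_conf)
w_true[[3, 17, 42]] = [0.5, 0.3, 0.2]           # sparse ensemble weights
y = A @ w_true + 0.01 * rng.standard_normal(n_data)

# Elastic net with nonnegative coefficients: the L1 part enforces sparsity
# (resolving the degeneracy), the L2 part stabilizes correlated columns.
model = ElasticNet(alpha=1e-3, l1_ratio=0.9, positive=True,
                   fit_intercept=False, max_iter=50_000)
model.fit(A, y)
w_hat = model.coef_
```

Even though the library (200 conformations) far exceeds the number of data constraints (60), the sparsity assumption lets the fit recover the few structures that actually carry weight.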
Collapse
Affiliation(s)
- Huichao Gong
- Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
| | - Sai Zhang
- Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
| | - Jiangdian Wang
- Biostatistics and Research Decision Sciences—Asia Pacific, Merck Research Laboratory, Beijing, China
| | - Haipeng Gong
- School of Life Sciences, Tsinghua University, Beijing, China
- MOE Key Laboratory of Bioinformatics, Tsinghua University, Beijing, China
| | - Jianyang Zeng
- Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
- MOE Key Laboratory of Bioinformatics, Tsinghua University, Beijing, China
| |
Collapse
|
37
|
Abstract
We prove that the convex least squares estimator (LSE) attains a n^(-1/2) pointwise rate of convergence in any region where the truth is linear. In addition, the asymptotic distribution can be characterized by a modified invelope process. Analogous results hold when one uses the derivative of the convex LSE to perform derivative estimation. These asymptotic results facilitate a new consistent testing procedure on the linearity against a convex alternative. Moreover, we show that the convex LSE adapts to the optimal rate at the boundary points of the region where the truth is linear, up to a log-log factor. These conclusions are valid in the context of both density estimation and regression function estimation.
Collapse
Affiliation(s)
- Yining Chen
- London School of Economics and Political Science and University of Washington
| | - Jon A Wellner
- London School of Economics and Political Science and University of Washington
| |
Collapse
|
38
|
Balabdaoui F, Basu S. Letter to the editor comments on Groparu-Cojocaru and Doray (2013). COMMUN STAT-SIMUL C 2015; 46:3833-3840. [PMID: 28584394 PMCID: PMC5455332 DOI: 10.1080/03610918.2015.1024857] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Although estimating the five parameters of an unknown Generalized Normal Laplace (GNL) density by minimizing the distance between the empirical and true characteristic functions seems appealing, the approach cannot be advocated in practice. This conclusion is based on extensive numerical simulations in which a fast minimization procedure delivers deceiving estimators with values that are quite far away from the truth. These findings can be predicted by the very large values obtained for the true asymptotic variances of the estimators of the five parameters of the true GNL density.
Collapse
Affiliation(s)
| | - Saonli Basu
- Division of Biostatistics, School of Public Health, University of Minnesota, Minneapolis, MN 55455, USA
| |
Collapse
|
39
|
Bourhis LJ, Dolomanov OV, Gildea RJ, Howard JAK, Puschmann H. The anatomy of a comprehensive constrained, restrained refinement program for the modern computing environment - Olex2 dissected. Acta Crystallogr A Found Adv 2015; 71:59-75. [PMID: 25537389 PMCID: PMC4283469 DOI: 10.1107/s2053273314022207] [Citation(s) in RCA: 883] [Impact Index Per Article: 98.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2014] [Accepted: 10/08/2014] [Indexed: 11/25/2022] Open
Abstract
This paper describes the mathematical basis for olex2.refine, the new refinement engine which is integrated within the Olex2 program. Precise and clear equations are provided for every computation performed by this engine, including structure factors and their derivatives, constraints, restraints and twinning; a general overview is also given of the different components of the engine and their relation to each other. A framework for adding multiple general constraints with dependencies on common physical parameters is described. Several new restraints on atomic displacement parameters are also presented.
Collapse
Affiliation(s)
- Luc J. Bourhis
- Bruker AXS–SAS, 4 Allée Lorentz, 77447 Marne-la-Vallée cedex 2, France
| | - Oleg V. Dolomanov
- OlexSys Ltd, Department of Chemistry, Durham University, South Road, Durham, DH1 3LE, England
| | - Richard J. Gildea
- Diamond Light Source Ltd, Diamond House, Harwell Oxford, Didcot, Oxfordshire, OX11 0DE, England
| | - Judith A. K. Howard
- Department of Chemistry, Durham University, South Road, Durham, DH1 3LE, England
| | - Horst Puschmann
- OlexSys Ltd, Department of Chemistry, Durham University, South Road, Durham, DH1 3LE, England
| |
Collapse
|
40
|
Giacovazzo C. From direct-space discrepancy functions to crystallographic least squares. Acta Crystallogr A Found Adv 2014; 71:36-45. [PMID: 25537387 DOI: 10.1107/s2053273314019056] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2014] [Accepted: 08/22/2014] [Indexed: 11/10/2022] Open
Abstract
Crystallographic least squares are a fundamental tool for crystal structure analysis. In this paper their properties are derived from functions estimating the degree of similarity between two electron-density maps. The new approach leads also to modifications of the standard least-squares procedures, potentially able to improve their efficiency. The role of the scaling factor between observed and model amplitudes is analysed: the concept of unlocated model is discussed and its scattering contribution is combined with that arising from the located model. Also, the possible use of an ancillary parameter, to be associated with the classical weight related to the variance of the observed amplitudes, is studied. The crystallographic discrepancy factors, basic tools often combined with least-squares procedures in phasing approaches, are analysed. The mathematical approach here described includes, as a special case, the so-called vector refinement, used when accurate estimates of the target phases are available.
Collapse
Affiliation(s)
- Carmelo Giacovazzo
- Istituto di Cristallografia - CNR, Via G. Amendola, 122/O 70126 Bari, Italy
| |
Collapse
|
41
|
Abstract
The g-and-h distributional family is generated from a relatively simple transformation of the standard normal and can approximate a broad spectrum of distributions. Consequently, it is easy to use in simulation studies and has been applied in multiple areas, including risk management, stock return analysis and missing data imputation studies. A rapidly convergent quantile-based least squares (QLS) estimation method to fit the g-and-h distributional family parameters is proposed and then extended to a robust version. The robust version is then used as a more general outlier detection approach. Several properties of the QLS method are derived and comparisons made with competing methods through simulation. Real data examples of microarray and stock index data are used as illustrations.
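The quantile-based least squares idea can be sketched as follows: since a g-and-h variate is a known transform of a standard normal, its model quantiles are that transform applied to normal quantiles, and the parameters can be fitted by least squares against empirical quantiles. A minimal sketch (the transform below is the standard g ≠ 0 branch; probability grid and starting values are illustrative, and this is not the paper's rapidly convergent scheme):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import norm

def gh_quantile(z, a, b, g, h):
    """g-and-h transform of standard-normal quantiles z (g != 0 branch)."""
    return a + b * (np.exp(g * z) - 1.0) / g * np.exp(h * z**2 / 2.0)

rng = np.random.default_rng(8)
theta_true = np.array([0.0, 1.0, 0.5, 0.1])      # a, b, g, h
sample = gh_quantile(rng.standard_normal(20_000), *theta_true)

# Quantile least squares: match model quantiles to empirical quantiles
# at a fixed grid of probabilities.
probs = np.linspace(0.05, 0.95, 19)
z = norm.ppf(probs)
q_emp = np.quantile(sample, probs)

fit = least_squares(lambda th: gh_quantile(z, *th) - q_emp,
                    x0=[0.0, 1.0, 0.1, 0.05],
                    bounds=([-np.inf, 1e-3, 1e-3, 0.0], np.inf)).x
```

Because quantiles, unlike moments, remain well behaved for heavy-tailed g-and-h members, this fit is stable where moment matching can fail.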
Collapse
Affiliation(s)
- Yihuan Xu
- ImClone LLC, A Wholly Owned Subsidiary of Eli Lilly and Company, 440 Route 22, Bridgewater, NJ 08807
| | - Boris Iglewicz
- Department of Statistics, The Fox School of Business, Temple University, 1810 North 13 Street 06-012, Philadelphia, PA 19122
| | - Inna Chervoneva
- Division of Biostatistics, Department of Pharmacology and Experimental Therapeutics, Thomas Jefferson University, 1015 Chestnut St, Suite M100, Philadelphia, PA 19107
| |
Collapse
|
42
|
Wolzt M, Gouya G, Kapiotis S, Becka M, Mueck W, Kubitza D. Open-label, randomized study of the effect of rivaroxaban with or without acetylsalicylic acid on thrombus formation in a perfusion chamber. Thromb Res 2013; 132:240-7. [PMID: 23786894 DOI: 10.1016/j.thromres.2013.05.019] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2013] [Revised: 05/10/2013] [Accepted: 05/21/2013] [Indexed: 01/16/2023]
Abstract
INTRODUCTION Rivaroxaban, a direct factor Xa inhibitor, has demonstrated effectiveness for the management of both venous and arterial thrombosis. This study was designed to investigate the antithrombotic effect of rivaroxaban, with or without acetylsalicylic acid (ASA), in an ex vivo perfusion chamber at both low and high shear rates. MATERIALS AND METHODS Healthy subjects (N=51) were enrolled in a randomized, crossover (rivaroxaban 5, 10 or 20 mg with or without ASA), and parallel-group (compared with ASA plus clopidogrel) study. Thrombi formed on pig aorta strips were measured after a 5-minute perfusion at low and high shear rates with blood from the subjects by measuring D-dimer concentration (for fibrin deposition) and P-selectin content (for platelet deposition). RESULTS ASA alone had no impact on thrombus D-dimer levels, whereas rivaroxaban alone at peak concentrations decreased D-dimer levels by 9%, 84% and 65% at low shear rate and 37%, 73% and 74% at high shear rate after doses of 5, 10 and 20 mg, respectively. Steady-state ASA plus rivaroxaban 5 mg caused a greater reduction in D-dimer levels (63%) than monotherapy at low shear rate. Co-administration of ASA with clopidogrel was associated with a 30% decrease in D-dimer levels at low shear rate and a 14% decrease at high shear rate. No conclusive effect on P-selectin content was observed across the treatment groups. CONCLUSIONS Rivaroxaban dose-dependently inhibited ex vivo thrombus formation under low and high shear rates. Co-administration of ASA had an additional effect on the antithrombotic action of low-dose rivaroxaban.
Collapse
Affiliation(s)
- Michael Wolzt
- Department of Clinical Pharmacology, Medical University of Vienna, Vienna, Austria.
| | | | | | | | | | | |
Collapse
|
43
|
Gurbel PA, Bliden KP, Logan DK, Kereiakes DJ, Lasseter KC, White A, Angiolillo DJ, Nolin TD, Maa JF, Bailey WL, Jakubowski JA, Ojeh CK, Jeong YH, Tantry US, Baker BA. The influence of smoking status on the pharmacokinetics and pharmacodynamics of clopidogrel and prasugrel: the PARADOX study. J Am Coll Cardiol 2013; 62:505-12. [PMID: 23602770 DOI: 10.1016/j.jacc.2013.03.037] [Citation(s) in RCA: 95] [Impact Index Per Article: 8.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/21/2012] [Revised: 02/21/2013] [Accepted: 03/20/2013] [Indexed: 02/08/2023]
Abstract
OBJECTIVES The goal of this study was to evaluate the effect of smoking on the pharmacokinetics and pharmacodynamics (PD) of clopidogrel and prasugrel therapy. BACKGROUND Major randomized trial data demonstrated that nonsmokers experience less or no benefit from clopidogrel treatment compared with smokers (i.e., the "smokers' paradox"). METHODS PARADOX was a prospective, randomized, double-blind, double-dummy, placebo-controlled, crossover study of objectively assessed nonsmokers (n = 56) and smokers (n = 54) with stable coronary artery disease receiving aspirin therapy. Patients were randomized to receive clopidogrel (75 mg daily) or prasugrel (10 mg daily) for 10 days and crossed over after a 14-day washout. PD was assessed by using VerifyNow P2Y12 and vasodilator-stimulated phosphoprotein phosphorylation assays. Clopidogrel and prasugrel metabolite levels, cytochrome P450 1A2 activity, CYP2C19 genotype, and safety parameters were determined. RESULTS During clopidogrel therapy, device-reported inhibition of platelet aggregation (IPA) trended lower in nonsmokers than smokers (least squares mean treatment difference ± SE: 7.7 ± 4.1%; p = 0.062). Device-reported IPA was significantly lower in clopidogrel-treated smokers than prasugrel-treated smokers (least squares mean treatment difference: 31.8 ± 3.4%; p < 0.0001). During clopidogrel therapy, calculated IPA was lower and P2Y12 reaction units and vasodilator-stimulated phosphoprotein phosphorylation and platelet reactivity index were higher in nonsmokers than in smokers (p = 0.043, p = 0.005, and p = 0.042, respectively). Greater antiplatelet effects were present after prasugrel treatment regardless of smoking status (p < 0.001 for all comparisons). CONCLUSIONS PARADOX demonstrated lower clopidogrel active metabolite exposure and PD effects of clopidogrel in nonsmokers relative to smokers. Prasugrel was associated with greater active metabolite exposure and PD effects than clopidogrel regardless of smoking status. 
The poorer antiplatelet response in clopidogrel-treated nonsmokers may provide an explanation for the smokers' paradox. (The Influence of Smoking Status on Prasugrel and Clopidogrel Treated Subjects Taking Aspirin and Having Stable Coronary Artery Disease; NCT01260584).
Collapse
Affiliation(s)
- Paul A Gurbel
- Sinai Center for Thrombosis Research, Baltimore, MD 21215, USA.
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
Collapse
|
44
|
Veraart J, Rajan J, Peeters RR, Leemans A, Sunaert S, Sijbers J. Comprehensive framework for accurate diffusion MRI parameter estimation. Magn Reson Med 2012; 70:972-84. [PMID: 23132517 DOI: 10.1002/mrm.24529] [Citation(s) in RCA: 82] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2012] [Revised: 09/21/2012] [Accepted: 09/24/2012] [Indexed: 11/12/2022]
Abstract
During the last decade, many approaches have been proposed for improving the estimation of diffusion measures. These techniques have already shown an increase in accuracy based on theoretical considerations, such as incorporating prior knowledge of the data distribution. The increased accuracy of diffusion metric estimators is typically observed in well-defined simulations, where the assumptions regarding properties of the data distribution are known to be valid. In practice, however, correcting for subject motion and geometric eddy current deformations alters the data distribution tremendously such that it can no longer be expressed in a closed form. The image processing steps that precede the model fitting will render several assumptions on the data distribution invalid, potentially nullifying the benefit of applying more advanced diffusion estimators. In this work, we present a generic diffusion model fitting framework that considers some statistics of diffusion MRI data. A central role in the framework is played by the conditional least squares estimator. We demonstrate that the accuracy of that particular estimator can generally be preserved, regardless of the applied preprocessing steps, if the noise parameter is known a priori. To fulfill that condition, we also propose an approach for the estimation of spatially varying noise levels.
Collapse
Affiliation(s)
- Jelle Veraart
- IBBT Vision Laboratory, Department of Physics, University of Antwerp, Antwerp, Belgium
| | | | | | | | | | | |
Collapse
|
45
|
Tarrío P, Bernardos AM, Casar JR. Weighted least squares techniques for improved received signal strength based localization. Sensors (Basel) 2011; 11:8569-92. [PMID: 22164092 DOI: 10.3390/s110908569] [Citation(s) in RCA: 94] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/01/2011] [Revised: 08/30/2011] [Accepted: 08/31/2011] [Indexed: 11/16/2022]
Abstract
The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency on having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve greater robustness to inaccuracies in channel modeling.
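The circular (lateration) variant can be sketched as follows: linearize the range equations by subtracting one reference circle, then weight each linearized equation by the accuracy of the ranges involved. Anchor layout, noise levels, and the simple weighting rule are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(9)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0],
                    [10.0, 10.0], [5.0, 12.0]])
p_true = np.array([3.0, 4.0])
d_true = np.linalg.norm(anchors - p_true, axis=1)
sigma = np.array([0.1, 0.1, 0.5, 0.5, 1.0])    # per-link ranging accuracy
d = d_true + sigma * rng.standard_normal(len(sigma))

# Linearize by subtracting the first anchor's circle equation:
# 2*(a_i - a_0) . p = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
A = 2.0 * (anchors[1:] - anchors[0])
b = (d[0]**2 - d[1:]**2
     + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))

# Rough per-equation weights from the variances of the two ranges involved.
W = np.diag(1.0 / (sigma[1:]**2 + sigma[0]**2))
p_wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

Compared with unweighted lateration, the weights keep the poorly ranged links (here the last anchors) from dragging the position estimate, which is the robustness effect the abstract reports.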
Collapse
|
46
|
Wang Q, Dinse GE. Linear regression analysis of survival data with missing censoring indicators. Lifetime Data Anal 2011; 17:256-279. [PMID: 20559722 PMCID: PMC3020262 DOI: 10.1007/s10985-010-9175-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/05/2008] [Accepted: 06/02/2010] [Indexed: 05/29/2023]
Abstract
Linear regression analysis has been studied extensively in a random censorship setting, but typically all of the censoring indicators are assumed to be observed. In this paper, we develop synthetic data methods for estimating regression parameters in a linear model when some censoring indicators are missing. We define estimators based on regression calibration, imputation, and inverse probability weighting techniques, and we prove all three estimators are asymptotically normal. The finite-sample performance of each estimator is evaluated via simulation. We illustrate our methods by assessing the effects of sex and age on the time to non-ambulatory progression for patients in a brain cancer clinical trial.
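As a toy illustration of the inverse probability weighting idea only (not the paper's synthetic-data estimators; both the fully observed responses and the missing-completely-at-random indicator are simplifying assumptions), weighting complete cases by the inverse of the estimated observation probability yields a consistent least squares fit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)  # responses
# censoring indicator observed for a random 70% of subjects (MCAR)
r = rng.random(n) < 0.7
pi_hat = r.mean()                  # estimated observation probability
w = r / pi_hat                     # inverse-probability weights (0 if missing)
X = np.column_stack([np.ones(n), x])
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
```

Under MCAR the weights are constant across complete cases, so this reduces to complete-case OLS; the weighting only matters once the observation probability is modeled as a function of covariates, as in the paper's setting.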
Collapse
Affiliation(s)
- Qihua Wang
- Department of Mathematics and Statistics, Yunnan University, Kunming 650091, China
- Academy of Mathematics and Systems Science, Chinese Academy of Science, Beijing 100190, China
- Gregg E. Dinse
- Biostatistics Branch, National Institute of Environmental Health Sciences, Research Triangle Park, North Carolina 27709, USA
Collapse
|
47
|
Chandola H. A lower bound on the error in dimensionality reduction resulting from projection onto a restricted subspace. Linear Algebra Appl 2010; 433:2147-2151. [PMID: 21057654 PMCID: PMC2968740 DOI: 10.1016/j.laa.2010.07.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
We obtain a lower bound for a variant of the common dimensionality reduction problem. In this version, the dataset is projected onto a k-dimensional subspace with the property that the first k - 1 basis vectors are fixed, leaving a single degree of freedom in the choice of basis vectors.
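The restricted projection itself is easy to compute: with the first k - 1 orthonormal basis vectors fixed, the optimal remaining unit vector (in the squared-reconstruction-error sense) is the leading principal direction of the data after projecting out the fixed span. A minimal sketch; the function name and shapes are illustrative:

```python
import numpy as np

def best_extra_direction(X, V):
    """Complete a subspace whose first k-1 orthonormal basis vectors
    (columns of V, shape d x (k-1)) are fixed: the optimal extra unit
    vector is the leading right singular vector of the data with
    span(V) projected out."""
    R = X - (X @ V) @ V.T           # component of each row outside span(V)
    _, _, vt = np.linalg.svd(R, full_matrices=False)
    return vt[0]                    # unit vector, orthogonal to span(V)
```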
Collapse
|
48
|
Zanderigo F, Ogden RT, Chang C, Choy S, Wong A, Parsey RV. Robust fitting of [11C]-WAY-100635 PET data. J Cereb Blood Flow Metab 2010; 30:1366-72. [PMID: 20179725 DOI: 10.1038/jcbfm.2010.20] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Fitting of a positron emission tomography (PET) time-activity curve is typically accomplished according to the least squares (LS) criterion, which is optimal for data having Gaussian distributed errors, but not robust in the presence of outliers. Conversely, quantile regression (QR) provides robust estimates not heavily influenced by outliers, sacrificing a little efficiency relative to LS when no outliers are present. Given these considerations, we hypothesized that QR would improve parameter estimate accuracy as measured by reduced intersubject variance in distribution volume (V(T)) compared with LS in PET modeling. We compare V(T) values after applying QR with those using LS on 49 controls studied with [(11)C]-WAY-100635. QR decreases the standard deviation of the V(T) estimates (relative improvement range: 0.08% to 3.24%), while keeping the within-group average V(T) values almost unchanged. QR variance reduction results in fewer subjects required to maintain the same statistical power in group analysis without additional hardware and/or image registration to correct head motion.
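The robustness argument is easy to reproduce outside the PET setting. A minimal sketch, using iteratively reweighted least squares as a stand-in for quantile regression at tau = 0.5; the helper `lad_fit` is illustrative and is not the authors' fitting pipeline:

```python
import numpy as np

def lad_fit(X, y, iters=100, eps=1e-8):
    """Median (L1) regression via iteratively reweighted least squares,
    a simple stand-in for quantile regression at tau = 0.5."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # LS warm start
    for _ in range(iters):
        w = 1.0 / np.maximum(np.abs(y - X @ beta), eps)  # downweight large residuals
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta
```

On data where a few gross outliers contaminate an otherwise linear trend, the L1 slope stays near the truth while the LS slope is pulled toward the outliers, mirroring the variance reduction reported above.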
Collapse
|
49
|
Abstract
The classes of monotone or convex (and necessarily monotone) densities on ℝ⁺ can be viewed as special cases of the classes of k-monotone densities on ℝ⁺. These classes bridge the gap between the classes of monotone (1-monotone) and convex decreasing (2-monotone) densities, for which asymptotic results are known, and the class of completely monotone (∞-monotone) densities on ℝ⁺. In this paper we consider non-parametric maximum likelihood and least squares estimators of a k-monotone density g₀. We prove existence of the estimators and give characterizations. We also establish consistency properties, and show that the estimators are splines of degree k - 1 with simple knots. We further provide asymptotic minimax risk lower bounds for estimating the derivatives [Formula: see text], at a fixed point x₀ under the assumption that [Formula: see text].
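The k = 1 member of this family is the classical Grenander estimator: the NPMLE of a decreasing density on ℝ⁺ is the left derivative of the least concave majorant of the empirical CDF. A minimal sketch of that special case only (the general k-monotone spline estimators are substantially more involved); assumes distinct sample values:

```python
import numpy as np

def grenander(x):
    """Grenander estimator: slopes of the least concave majorant of the
    empirical CDF, giving a piecewise-constant decreasing density."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    pts = np.concatenate([[0.0], x])          # knots of the ECDF
    F = np.arange(n + 1) / n                  # ECDF values at the knots
    hull = [0]                                # upper (concave) hull indices
    for i in range(1, n + 1):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # pop b if it lies on or below the chord from a to i
            if (F[i] - F[b]) * (pts[b] - pts[a]) >= (F[b] - F[a]) * (pts[i] - pts[b]):
                hull.pop()
            else:
                break
        hull.append(i)
    slopes = np.array([(F[hull[j + 1]] - F[hull[j]]) / (pts[hull[j + 1]] - pts[hull[j]])
                       for j in range(len(hull) - 1)])
    return pts[hull], slopes                  # hull knots and density values
```

The returned slopes are decreasing by construction, and they integrate to one because consecutive chord rises telescope to F(x₍ₙ₎) - F(0) = 1.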
Collapse
Affiliation(s)
- Fadoua Balabdaoui
- CEREMADE, Université Paris-Dauphine, Place du Maréchal de Lattre de Tassigny, 75775, Paris, CEDEX 16, France
- Jon A. Wellner
- Department of Statistics, University of Washington, Box 354322, Seattle, WA 98195-4322, USA
Collapse
|
50
|
Dumett M, Rosen G, Sabat J, Shaman A, Tempelman L, Wang C, Swift R. Deconvolving an Estimate of Breath Measured Blood Alcohol Concentration from Biosensor Collected Transdermal Ethanol Data. Appl Math Comput 2008; 196:724-743. [PMID: 19255617 PMCID: PMC2597868 DOI: 10.1016/j.amc.2007.07.026] [Citation(s) in RCA: 36] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
Biosensor measurement of transdermal alcohol concentration in perspiration exhibits significant variance from subject to subject and device to device. Short duration data collected in a controlled clinical setting is used to calibrate a forward model for ethanol transport from the blood to the sensor. The calibrated model is then used to invert transdermal signals collected in the field (short or long duration) to obtain an estimate for breath measured blood alcohol concentration. A distributed parameter model for the forward transport of ethanol from the blood through the skin and its processing by the sensor is developed. Model calibration is formulated as a nonlinear least squares fit to data. The fit model is then used as part of a spline based scheme in the form of a regularized, non-negatively constrained linear deconvolution. Fully discrete, steepest descent based schemes for solving the resulting optimization problems are developed. The adjoint method is used to accurately and efficiently compute requisite gradients. Efficacy is demonstrated on subject field data.
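The inversion step reduces to a regularized, non-negatively constrained least squares problem, which projected steepest descent handles directly. A minimal sketch under a generic discrete kernel K; the function name and the Tikhonov-style penalty are illustrative simplifications of the spline-based scheme described above:

```python
import numpy as np

def nn_deconvolve(K, y, lam=1e-3, iters=5000):
    """Projected steepest descent for
    min 0.5*||K u - y||^2 + 0.5*lam*||u||^2  subject to  u >= 0."""
    u = np.zeros(K.shape[1])
    L = np.linalg.norm(K, 2) ** 2 + lam       # Lipschitz constant of the gradient
    step = 1.0 / L
    for _ in range(iters):
        grad = K.T @ (K @ u - y) + lam * u    # gradient of the smooth objective
        u = np.maximum(u - step * grad, 0.0)  # project onto the non-negative orthant
    return u
```

The step size 1/L guarantees monotone descent, and projection onto the non-negative orthant is just a componentwise max, so each iteration costs one multiplication by K and one by its transpose.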
Collapse
Affiliation(s)
- M Dumett
- University of Southern California, Department of Mathematics, Kaprielian Hall, Room 108, 3620 Vermont Avenue, Los Angeles, CA 90089-2532
Collapse
|