1
Hoogland J, Debray TPA, Crowther MJ, Riley RD, IntHout J, Reitsma JB, Zwinderman AH. Regularized parametric survival modeling to improve risk prediction models. Biom J 2024; 66:e2200319. PMID: 37775946. DOI: 10.1002/bimj.202200319.
Abstract
We propose to combine the benefits of flexible parametric survival modeling and regularization to improve risk prediction modeling in the context of time-to-event data. To this end, we introduce ridge, lasso, elastic net, and group lasso penalties for both log hazard and log cumulative hazard models. The log (cumulative) hazard in these models is represented by a flexible function of time that may depend on the covariates (i.e., covariate effects may be time-varying). We show that the optimization problem for the proposed models can be formulated as a convex optimization problem and provide a user-friendly R implementation for model fitting and penalty parameter selection based on cross-validation. Simulation results show the advantage of regularization in terms of increased out-of-sample prediction accuracy and improved calibration and discrimination of predicted survival probabilities, especially when the sample size was small relative to model complexity. An applied example illustrates the proposed methods. In summary, our work provides both a foundation for and an easily accessible implementation of regularized parametric survival modeling and suggests that it improves out-of-sample prediction performance.
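The penalized-likelihood idea can be sketched in miniature with the simplest parametric survival model. The sketch below fits a ridge-penalized exponential hazard, hazard = exp(b0 + b1*x), by gradient descent on synthetic right-censored data; the model, data, and penalty weight are illustrative assumptions, not the paper's spline-based implementation or its R package.

```python
import math
import random

def fit_exponential_ridge(times, events, x, alpha, lr=0.01, iters=4000):
    """Ridge-penalized exponential survival model with hazard exp(b0 + b1*x).
    Minimizes the right-censored negative log-likelihood plus alpha*b1**2
    (the intercept is left unpenalized) by plain gradient descent."""
    b0 = b1 = 0.0
    n = len(times)
    for _ in range(iters):
        g0 = g1 = 0.0
        for t, d, xi in zip(times, events, x):
            r = math.exp(b0 + b1 * xi) * t - d  # d(NLL_i)/d(linear predictor)
            g0 += r
            g1 += r * xi
        g1 += 2.0 * alpha * b1                  # ridge penalty gradient
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

random.seed(0)
n = 300
x = [random.gauss(0.0, 1.0) for _ in range(n)]
times, events = [], []
for xi in x:
    t = random.expovariate(math.exp(-0.5 + 0.8 * xi))  # true event time
    c = random.expovariate(0.3)                        # censoring time
    times.append(min(t, c))
    events.append(1 if t <= c else 0)

b0_mle, b1_mle = fit_exponential_ridge(times, events, x, alpha=0.0)
b0_pen, b1_pen = fit_exponential_ridge(times, events, x, alpha=50.0)
```

With a positive penalty the covariate coefficient shrinks toward zero, which is the out-of-sample-accuracy mechanism the abstract refers to.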
Affiliation(s)
- J Hoogland
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Department of Epidemiology and Data Science, Amsterdam University Medical Centers, Amsterdam, The Netherlands
- T P A Debray
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Cochrane Netherlands, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- M J Crowther
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- R D Riley
- School for Medicine, Keele University, Keele, Staffordshire, UK
- J IntHout
- Radboud Institute for Health Sciences (RIHS), Radboud University Medical Center, Nijmegen, The Netherlands
- J B Reitsma
- Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- Cochrane Netherlands, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
- A H Zwinderman
- Department of Epidemiology and Data Science, Amsterdam University Medical Centers, Amsterdam, The Netherlands
2
Kim K, Niknam BA, Zubizarreta JR. Scalable kernel balancing weights in a nationwide observational study of hospital profit status and heart attack outcomes. Biostatistics 2023:kxad032. PMID: 38123487. DOI: 10.1093/biostatistics/kxad032.
Abstract
Weighting is a general and often-used method for statistical adjustment. Weighting has two objectives: first, to balance covariate distributions, and second, to ensure that the weights have minimal dispersion and thus produce a more stable estimator. A recent, increasingly common approach directly optimizes the weights toward these two objectives. However, this approach has not yet been feasible in large-scale datasets when investigators wish to flexibly balance general basis functions in an extended feature space. To address this practical problem, we describe a scalable and flexible approach to weighting that integrates a basis expansion in a reproducing kernel Hilbert space with state-of-the-art convex optimization techniques. Specifically, we use the rank-restricted Nyström method to efficiently compute a kernel basis for balancing in nearly linear time and space, and then use the specialized first-order alternating direction method of multipliers to rapidly find the optimal weights. In an extensive simulation study, we provide new insights into the performance of weighting estimators in large datasets, showing that the proposed approach substantially outperforms others in terms of accuracy and speed. Finally, we use this weighting approach to conduct a national study of the relationship between hospital profit status and heart attack outcomes in a comprehensive dataset of 1.27 million patients. We find that for-profit hospitals use interventional cardiology to treat heart attacks at similar rates as other hospitals but have higher mortality and readmission rates.
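The "minimal dispersion subject to balance" objective has a closed form in the simplest case. The sketch below computes weights minimizing the sum of squared weights subject to a fixed total and one balanced covariate mean, via the 2x2 KKT system; the kernel expansion, Nyström approximation, and ADMM solver of the paper are omitted, and the data are synthetic.

```python
import random

def balancing_weights(x, target_mean):
    """Minimum-dispersion balancing weights: minimize sum(w_i**2) subject to
    sum(w_i) = 1 and sum(w_i * x_i) = target_mean. The KKT conditions force
    w_i = a + b * x_i, so (a, b) solve a 2x2 linear system."""
    n = len(x)
    sx = sum(x)
    sxx = sum(v * v for v in x)
    # [ n   sx ] [a]   [ 1           ]
    # [ sx  sxx] [b] = [ target_mean ]
    det = n * sxx - sx * sx
    a = (sxx - sx * target_mean) / det
    b = (n * target_mean - sx) / det
    return [a + b * v for v in x]

random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(500)]
w = balancing_weights(x, target_mean=0.25)
```

Any other weights meeting the same two constraints differ from `w` by a direction orthogonal to the constraint span, so they can only have larger dispersion; that is the "more stable estimator" property the abstract describes.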
Affiliation(s)
- Kwangho Kim
- Department of Health Care Policy, Harvard Medical School, 180-A Longwood Avenue, Boston, MA 02115, United States
- Department of Statistics, College of Political Science and Economics, Korea University, Seoul, 02841, Korea
- Bijan A Niknam
- Department of Health Care Policy, Harvard Medical School, 180-A Longwood Avenue, Boston, MA 02115, United States
- José R Zubizarreta
- Department of Health Care Policy, Harvard Medical School, 180-A Longwood Avenue, Boston, MA 02115, United States
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, 677 Huntington Avenue, Boston, MA 02115, United States
- Department of Statistics, Faculty of Arts and Sciences, Harvard University, Science Center 400 Suite, One Oxford Street, Cambridge, MA 02138, United States
3
Yoshida T, Hanada H, Nakagawa K, Taji K, Tsuda K, Takeuchi I. Efficient model selection for predictive pattern mining model by safe pattern pruning. Patterns (N Y) 2023; 4:100890. PMID: 38106611. PMCID: PMC10724371. DOI: 10.1016/j.patter.2023.100890.
Abstract
Predictive pattern mining is an approach used to construct prediction models when the input is represented by structured data, such as sets, graphs, and sequences. The main idea is to build a prediction model by treating the sub-structures present in the structured data, such as subsets, subgraphs, and subsequences (referred to as patterns), as features of the model. The primary challenge in predictive pattern mining lies in the exponential growth of the number of patterns with the complexity of the structured data. In this study, we propose the safe pattern pruning method to address this explosion of pattern numbers. We also discuss how it can be effectively employed throughout the entire model-building process in practical data analysis. To demonstrate the effectiveness of the proposed method, we conduct numerical experiments on regression and classification problems involving sets, graphs, and sequences.
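The pruning idea, one cheap test that discards a pattern together with all of its super-patterns, can be illustrated with the classical anti-monotone support bound (the paper's safe pattern pruning instead derives its test from a dual bound on the regularized loss; this is an analogy, not their criterion):

```python
def mine_with_pruning(transactions, min_support):
    """DFS over itemsets. Support is anti-monotone
    (support(P | {i}) <= support(P)), so one failed test prunes the whole
    subtree of super-patterns without ever enumerating it."""
    items = sorted({i for t in transactions for i in t})
    sets = [frozenset(t) for t in transactions]
    frequent = []
    visited = [0]  # counts candidate patterns actually tested

    def support(pattern):
        return sum(1 for s in sets if pattern <= s)

    def dfs(prefix, start):
        for k in range(start, len(items)):
            cand = prefix | {items[k]}
            visited[0] += 1
            if support(cand) >= min_support:
                frequent.append(cand)
                dfs(cand, k + 1)
            # else: every super-pattern of cand is pruned unseen

    dfs(frozenset(), 0)
    return frequent, visited[0]

transactions = [{"a", "c", "d"}, {"a", "c"}, {"a", "d"}, {"b"}, {"c", "d"}]
freq, nodes_visited = mine_with_pruning(transactions, min_support=2)
```

On this toy input only 9 of the 15 non-empty itemsets are ever tested; the other 6 sit in pruned subtrees, which is the mechanism that keeps the pattern explosion in check.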
Affiliation(s)
- Takumi Yoshida
- Department of Engineering, Nagoya Institute of Technology, Nagoya, Aichi 466-8555, Japan
- Hiroyuki Hanada
- Center for Advanced Intelligence Project, RIKEN, Tokyo 103-0027, Japan
- Kazuya Nakagawa
- Department of Engineering, Nagoya Institute of Technology, Nagoya, Aichi 466-8555, Japan
- Kouichi Taji
- Department of Mechanical Systems Engineering, Nagoya University, Nagoya, Aichi 464-8603, Japan
- Koji Tsuda
- Center for Advanced Intelligence Project, RIKEN, Tokyo 103-0027, Japan
- Department of Bioinformatics and Systems Biology, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
- Ichiro Takeuchi
- Center for Advanced Intelligence Project, RIKEN, Tokyo 103-0027, Japan
- Department of Mechanical Systems Engineering, Nagoya University, Nagoya, Aichi 464-8603, Japan
4
Ganapathy V, Ramachandran R, Ohtsuki T. Resource Allocation for Secure MIMO-SWIPT Systems in the Presence of Multi-Antenna Eavesdropper in Vehicular Networks. Sensors (Basel) 2023; 23:8069. PMID: 37836899. PMCID: PMC10575119. DOI: 10.3390/s23198069.
Abstract
In this paper, we optimize the secrecy capacity of the legitimate user under resource allocation and security constraints in a multi-antenna environment for simultaneous wireless information and power transfer in a dynamic downlink scenario. We study the relationship between secrecy capacity and harvested energy in a power-splitting configuration for a nonlinear energy-harvesting model under co-located conditions. The capacity maximization problem is formulated for the vehicle-to-vehicle communication scenario. The formulated problem is non-convex and NP-hard, so we reformulate it into a convex form using a divide-and-conquer approach. We obtain the optimal transmit power matrix and power-splitting ratio values that guarantee positive secrecy capacity. We analyze different vehicle-to-vehicle communication settings to validate the ability of the proposed algorithm to maintain both reliability and security, and we substantiate the effectiveness of the approach by analyzing the trade-offs between secrecy capacity and harvested energy.
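A one-dimensional sketch of the power-splitting trade-off described above: a fraction rho of the received power feeds the information decoder and 1 - rho the energy harvester, and we search rho for the best secrecy rate subject to a minimum-harvest constraint. All channel and harvester numbers are illustrative assumptions, and the grid search stands in for the paper's divide-and-conquer convex reformulation.

```python
import math

def secrecy_capacity(rho, p, h_legit, h_eave, sigma2, harvest_min, eta):
    """Secrecy rate for a power-splitting receiver: fraction rho of received
    power goes to information decoding, (1 - rho) to energy harvesting.
    Returns -inf when harvested energy misses the requirement."""
    harvested = eta * (1.0 - rho) * p * h_legit
    if harvested < harvest_min:
        return float("-inf")
    snr_b = rho * p * h_legit / sigma2          # legitimate receiver SNR
    snr_e = p * h_eave / sigma2                 # eavesdropper SNR
    return max(0.0, math.log2(1.0 + snr_b) - math.log2(1.0 + snr_e))

# Illustrative constants: transmit power, channel gains, noise, harvest floor.
p, h_legit, h_eave, sigma2, harvest_min, eta = 1.0, 2.0, 0.3, 0.1, 0.2, 0.8
best_rho, best_cs = max(
    ((r / 1000.0,
      secrecy_capacity(r / 1000.0, p, h_legit, h_eave, sigma2, harvest_min, eta))
     for r in range(1001)),
    key=lambda pair: pair[1],
)
```

The optimum sits exactly on the harvesting constraint (rho = 0.875 here): diverting any more power to decoding violates the energy requirement, which is the secrecy-versus-harvest trade-off the abstract analyzes.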
Affiliation(s)
- Vieeralingaam Ganapathy
- Department of Electronics and Communication Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore 641112, India
- Ramanathan Ramachandran
- Department of Electronics and Communication Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Coimbatore 641112, India
- Tomoaki Ohtsuki
- Department of Information and Computer Science, Keio University, Tokyo 108-8345, Japan
5
Lei T, Chintam P, Luo C, Liu L, Jan GE. A Convex Optimization Approach to Multi-Robot Task Allocation and Path Planning. Sensors (Basel) 2023; 23:5103. PMID: 37299829. DOI: 10.3390/s23115103.
Abstract
In real-world applications, multiple robots need to be dynamically deployed as teams to appropriate locations while minimizing the distance cost between robots and goals, which is known to be an NP-hard problem. In this paper, a new framework for team-based multi-robot task allocation and path planning is developed for robot exploration missions through a convex optimization-based distance-optimal model, which minimizes the traveled distance between robots and their goals. The proposed framework fuses task decomposition, team allocation, local sub-task allocation, and path planning. First, the robots are divided and clustered into teams based on their interrelations and dependencies and on task decomposition. Second, the arbitrarily shaped regions enclosing interrelated robots are approximated and relaxed into circles, yielding convex optimization problems that minimize the distances between teams as well as between each robot and its goal. Once the teams are deployed to their locations, the robot positions are further refined by a graph-based Delaunay triangulation method. Third, within each team, a self-organizing map-based neural network (SOMNN) paradigm performs dynamic sub-task allocation and path planning, with robots assigned locally to nearby goals. Simulation and comparison studies demonstrate that the proposed hybrid task allocation and path planning framework is effective and efficient.
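For a team small enough to enumerate, the distance-optimal robot-to-goal assignment at the core of such frameworks can be found by brute force; this toy search is only a stand-in for the paper's convex relaxation and SOMNN stages, with coordinates invented for illustration.

```python
import math
from itertools import permutations

def optimal_assignment(robots, goals):
    """Exhaustive search over robot-to-goal assignments minimizing total
    Euclidean distance; feasible for small teams (n! permutations)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(len(goals))):
        cost = sum(dist(robots[i], goals[j]) for i, j in enumerate(perm))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm, best_cost

robots = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
goals = [(0.1, 0.1), (2.0, 0.0), (0.0, 2.0)]
assignment, total = optimal_assignment(robots, goals)
```

The factorial blow-up of this search is exactly why the paper relaxes the problem into convex subproblems instead of enumerating.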
Affiliation(s)
- Tingjun Lei
- Department of Electrical and Computer Engineering, Mississippi State University, Mississippi State, MS 39762, USA
- Pradeep Chintam
- Department of Electrical and Computer Engineering, Mississippi State University, Mississippi State, MS 39762, USA
- Chaomin Luo
- Department of Electrical and Computer Engineering, Mississippi State University, Mississippi State, MS 39762, USA
- Lantao Liu
- Department of Intelligent Systems Engineering, Indiana University, Bloomington, IN 47408, USA
- Gene Eu Jan
- Department of Electrical Engineering, National Taipei University, New Taipei City 23741, Taiwan
- Tainan National University of the Arts, Tainan City 72045, Taiwan
6
Tavori J, Levy H. On the Convexity of the Effective Reproduction Number. J Comput Biol 2023. PMID: 37130305. DOI: 10.1089/cmb.2022.0371.
Abstract
In this study, we analyze the evolution of the effective reproduction number, R, through a Susceptible-Infective-Recovered spreading process in heterogeneous populations. Characterizing its decay makes it possible to analytically study the effects of countermeasures on the progress of the virus under heterogeneity and to optimize countermeasure policies. A striking result of recent studies is that heterogeneity across individuals (superspreading) may have a drastic effect on the progression of the spreading process, causing a nonlinear decrease of R in the number of infected individuals. We account for heterogeneity and analyze the stochastic progression of the spreading process. We show that the decrease of R is, in fact, convex in the number of infected individuals, and that this convexity stems from heterogeneity. The analysis is based on establishing stochastic monotonic relations between the susceptible populations at varying times of the spread. We demonstrate that the convex behavior of the effective reproduction number affects the performance of countermeasures used to fight the spread of a virus, and we examine numerically the sensitivity of the herd-immunity threshold to the heterogeneity level and to the chosen countermeasure policy. The results are applicable to the control of virus and malware spreading in computer networks as well.
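A stylized numerical check of the claimed convexity. Assumption alert: the closed form below, R_eff = R0 * (S/N)^(1 + cv^2), is a standard device for gamma-distributed susceptibility in the superspreading literature, not this paper's derivation; it merely makes the convex decay visible.

```python
def effective_r(k, n, r0, cv2):
    """Stylized effective reproduction number after k infections when
    susceptibility is gamma-distributed with squared coefficient of
    variation cv2 (the most susceptible tend to be infected first)."""
    return r0 * ((n - k) / n) ** (1.0 + cv2)

n, r0, cv2 = 10_000, 3.0, 1.5
values = [effective_r(k, n, r0, cv2) for k in range(0, n, 100)]
# Nonnegative second differences == discrete convexity in k.
second_diffs = [values[i - 1] - 2 * values[i] + values[i + 1]
                for i in range(1, len(values) - 1)]
```

With cv2 = 0 (a homogeneous population) the same sequence decays linearly and the second differences vanish, matching the abstract's point that the convexity stems from heterogeneity.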
Affiliation(s)
- Jhonatan Tavori
- Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, Israel
- Hanoch Levy
- Blavatnik School of Computer Science, Tel Aviv University, Tel Aviv, Israel
7
Heng Q, Zhou H, Chi EC. Bayesian Trend Filtering via Proximal Markov Chain Monte Carlo. J Comput Graph Stat 2023; 32:938-949. PMID: 37822489. PMCID: PMC10564381. DOI: 10.1080/10618600.2023.2170089.
Abstract
Proximal Markov chain Monte Carlo is a recent construct at the intersection of Bayesian computation and convex optimization that has helped popularize the use of nondifferentiable priors in Bayesian statistics. Existing formulations of proximal MCMC, however, require hyperparameters and regularization parameters to be prespecified. In this work, we extend the paradigm of proximal MCMC by introducing a new class of nondifferentiable priors called epigraph priors. As a proof of concept, we place trend filtering, originally a nonparametric regression problem, in a parametric setting to provide a posterior median fit along with credible intervals as measures of uncertainty. The key idea is to replace the nonsmooth term in the posterior density with its Moreau-Yosida envelope, which enables the application of the gradient-based MCMC sampler Hamiltonian Monte Carlo. The proposed method identifies the appropriate amount of smoothing in a data-driven way, thereby automating regularization parameter selection. Compared with conventional proximal MCMC methods, our method is largely tuning-free, achieving simultaneous calibration of the mean, scale, and regularization parameters in a fully Bayesian framework.
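The Moreau-Yosida envelope step named in the abstract can be shown concretely for the absolute value, whose proximal operator is soft-thresholding; the envelope turns out to be the smooth Huber function, which is what makes gradient-based samplers such as HMC applicable.

```python
def soft_threshold(x, lam):
    """Proximal operator of lam * |.| (soft-thresholding)."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def moreau_envelope_abs(x, lam):
    """Moreau-Yosida envelope of |.|: min_z |z| + (x - z)**2 / (2 * lam),
    evaluated at the prox point. Equals x**2/(2*lam) for |x| <= lam and
    |x| - lam/2 otherwise (the Huber function)."""
    z = soft_threshold(x, lam)
    return abs(z) + (x - z) ** 2 / (2.0 * lam)

lam = 0.5
inside = moreau_envelope_abs(0.3, lam)   # quadratic region: 0.3**2 / 1.0
outside = moreau_envelope_abs(2.0, lam)  # linear region: 2.0 - 0.25
```

The envelope lower-bounds |x| everywhere and is differentiable at 0, which is exactly the smoothing that replaces the nonsmooth term in the posterior density.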
Affiliation(s)
- Qiang Heng
- Department of Statistics, North Carolina State University
- Hua Zhou
- Departments of Biostatistics and Computational Medicine, UCLA
8
Luo J, Shang J. Reliable Optimization of Arbitrary Functions over Quantum Measurements. Entropy (Basel) 2023; 25:358. PMID: 36832724. PMCID: PMC9955991. DOI: 10.3390/e25020358.
Abstract
As the connection between the classical and quantum worlds, quantum measurements play a unique role in the era of quantum information processing. Given an arbitrary function of quantum measurements, obtaining its optimal value is a basic yet important problem in various applications. Typical examples include, but are not limited to, optimizing the likelihood functions in quantum measurement tomography, searching for the Bell parameters in Bell-test experiments, and calculating the capacities of quantum channels. In this work, we propose reliable algorithms for optimizing arbitrary functions over the space of quantum measurements by combining Gilbert's algorithm for convex optimization with certain gradient algorithms. Through extensive applications, we demonstrate the efficacy of our algorithms with both convex and nonconvex functions.
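Gilbert's algorithm is a conditional-gradient (Frank-Wolfe) style method: each iteration calls a linear oracle over the feasible set and steps toward its answer. A sketch on the probability simplex, a deliberately simple stand-in for the set of quantum measurements used in the paper:

```python
def frank_wolfe_simplex(grad, x0, iters=2000):
    """Conditional-gradient loop on the probability simplex. The linear
    minimization oracle just picks the vertex e_j with the most negative
    gradient coordinate; iterates stay feasible by convex combination."""
    x = list(x0)
    for t in range(iters):
        g = grad(x)
        j = min(range(len(x)), key=lambda i: g[i])  # LMO: best vertex
        gamma = 2.0 / (t + 2.0)                     # classic diminishing step
        x = [(1.0 - gamma) * xi for xi in x]
        x[j] += gamma
    return x

target = [0.2, 0.3, 0.5]  # minimize f(x) = sum((x - target)**2), f* = 0
grad = lambda x: [2.0 * (xi - ti) for xi, ti in zip(x, target)]
x_opt = frank_wolfe_simplex(grad, [1.0, 0.0, 0.0])
f_val = sum((a - b) ** 2 for a, b in zip(x_opt, target))
```

Feasibility is maintained for free (no projection step), which is the property that makes this family of methods attractive over structured sets such as measurement operators.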
9
Rosenman ETR, Friedberg R, Baiocchi M. Robust Designs for Prospective Randomized Trials Surveying Sensitive Topics. Am J Epidemiol 2023; 192:812-820. PMID: 36749012. DOI: 10.1093/aje/kwad027.
Abstract
We consider the problem of designing a prospective randomized trial in which the outcome data will be self-reported and will involve sensitive topics. Our interest is in how a researcher can adequately power her study when some respondents misreport the binary outcome of interest. To correct the power calculations, we first obtain expressions for the bias and variance induced by misreporting. We model the problem by assuming each individual in our study is a member of one "reporting class": a True-reporter, False-reporter, Never-reporter, or Always-reporter. We show that the joint distribution of reporting classes and "response classes" (characterizing individuals' responses to the treatment) exactly defines the error terms for our causal estimate. We propose a novel procedure for determining adequate sample sizes under the worst-case power corresponding to a given level of misreporting. Our problem is motivated by prior experience implementing a randomized controlled trial of a sexual violence prevention program among adolescent girls in Kenya.
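The mechanism, misreporting attenuates the observed effect and inflates the required sample size, can be sketched with a simple two-arm proportion test. The false-negative and false-positive rates and the single-attenuation formula below are illustrative assumptions, not the paper's reporting-class worst-case procedure.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def observed(p, fn_rate, fp_rate):
    """Reported prevalence when true positives are missed at fn_rate and
    true negatives are reported positive at fp_rate."""
    return p * (1.0 - fn_rate) + (1.0 - p) * fp_rate

def n_per_arm(p1, p2, z_alpha=1.959963984540054, power=0.80):
    """Smallest per-arm n reaching the target power for a two-sample
    normal-approximation test of proportions (z_alpha: two-sided 5%)."""
    for n in range(2, 1_000_000):
        se = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n)
        if phi(abs(p1 - p2) / se - z_alpha) >= power:
            return n
    raise ValueError("no n found")

p1, p2 = 0.30, 0.20
fn_rate, fp_rate = 0.15, 0.05           # illustrative misreporting rates
n_clean = n_per_arm(p1, p2)
n_misreport = n_per_arm(observed(p1, fn_rate, fp_rate),
                        observed(p2, fn_rate, fp_rate))
```

Here the observed effect shrinks from 0.10 to 0.08 and the required per-arm sample size grows by roughly half, the direction of correction the paper formalizes.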
Affiliation(s)
- Evan T R Rosenman
- Harvard Data Science Initiative, Harvard University, Cambridge, Massachusetts, United States
- Rina Friedberg
- LinkedIn Data Science and Applied Research, Sunnyvale, California, United States
- Mike Baiocchi
- Stanford University School of Medicine, Stanford, California, United States
10
Shirzadi M, Marateb HR, Rojas-Martínez M, Mansourian M, Botter A, Vieira Dos Anjos F, Martins Vieira T, Mañanas MA. A real-time and convex model for the estimation of muscle force from surface electromyographic signals in the upper and lower limbs. Front Physiol 2023; 14:1098225. PMID: 36923291. PMCID: PMC10009160. DOI: 10.3389/fphys.2023.1098225.
Abstract
Surface electromyography (sEMG) is a signal consisting of different motor unit action potential trains recorded from the surface of the muscles. One of its applications is the estimation of muscle force. We propose a new real-time, convex, and interpretable model for sEMG-force estimation. We validated it on the upper limb during isometric voluntary flexions-extensions at 30%, 50%, and 70% of maximum voluntary contraction in five subjects, and on the lower limbs during standing tasks in thirty-three volunteers, none with a history of neuromuscular disorders. The performance of the proposed method was statistically compared with that of the state of the art (13 methods, including linear-in-the-parameter models, artificial neural networks, support vector machines, and non-linear models). The envelope of the sEMG signals was estimated, and the representative envelope of each muscle was used in our analysis. The convex form of an exponential EMG-force model was derived, and each muscle's coefficient was estimated using the least squares method. Goodness-of-fit indices, residual signal analysis (bias and Bland-Altman plot), and running-time analysis are provided. Overall, 30% of the data was used for estimation, 20% for validation, and the remaining 50% for testing. The average R-square (%) of the proposed method was 96.77 ± 1.67 [94.38, 98.06] on the upper-limb test sets and 91.08 ± 6.84 [62.22, 96.62] on the lower-limb dataset (mean ± SD [min, max]). The proposed method's output was not significantly different from the recorded force signal (p-value = 0.610); that was not the case for the other tested models, and the proposed method significantly outperformed them (adjusted p-value < 0.05). The average running times for training and testing on each 250 ms signal were 25.7 ± 4.0 [22.3, 40.8] and 11.0 ± 2.9 [4.7, 17.8] microseconds, respectively, over the entire dataset.
The proposed convex model is thus a promising method for estimating force from the upper and lower limbs, with applications in load sharing, robotics, rehabilitation, and prosthesis control.
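The convexification trick, an exponential EMG-force relation becoming linear in its coefficients after a log transform, can be sketched for a single channel on synthetic data (the actual model combines multiple muscle envelopes, and all constants below are invented for illustration):

```python
import math
import random

def fit_exp_model(envelope, force):
    """Fit force ~ exp(a + b * e) by ordinary least squares on log(force),
    which makes the problem convex (indeed linear) in (a, b)."""
    y = [math.log(f) for f in force]
    n = len(envelope)
    mx = sum(envelope) / n
    my = sum(y) / n
    sxx = sum((e - mx) ** 2 for e in envelope)
    sxy = sum((e - mx) * (yi - my) for e, yi in zip(envelope, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

random.seed(2)
true_a, true_b = 0.5, 2.0
envelope = [random.uniform(0.0, 1.0) for _ in range(200)]
force = [math.exp(true_a + true_b * e) * math.exp(random.gauss(0.0, 0.05))
         for e in envelope]
a_hat, b_hat = fit_exp_model(envelope, force)
```

The closed-form solve is what makes microsecond-scale training plausible: there is no iterative optimization at all once the model is in convex (here linear) form.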
Affiliation(s)
- Mehdi Shirzadi
- Automatic Control Department (ESAII), Biomedical Engineering Research Centre (CREB), Universitat Politècnica de Catalunya-Barcelona Tech (UPC), Barcelona, Spain
- Hamid Reza Marateb
- Biomedical Engineering Department, Engineering Faculty, University of Isfahan, Isfahan, Iran
- Mónica Rojas-Martínez
- Automatic Control Department (ESAII), Biomedical Engineering Research Centre (CREB), Universitat Politècnica de Catalunya-Barcelona Tech (UPC), Barcelona, Spain
- Biomedical Research Networking Center in Bioengineering, Biomaterials, and Nanomedicine (CIBER-BBN), Madrid, Spain
- Marjan Mansourian
- Automatic Control Department (ESAII), Biomedical Engineering Research Centre (CREB), Universitat Politècnica de Catalunya-Barcelona Tech (UPC), Barcelona, Spain
- Alberto Botter
- Laboratory for Engineering of the Neuromuscular System (LISiN), Department of Electronics and Telecommunication, Politecnico di Torino, Turin, Italy
- Fabio Vieira Dos Anjos
- Postgraduate Program of Rehabilitation Sciences, Augusto Motta University (UNISUAM), Rio de Janeiro, Brazil
- Taian Martins Vieira
- Laboratory for Engineering of the Neuromuscular System (LISiN), Department of Electronics and Telecommunication, Politecnico di Torino, Turin, Italy
- Miguel Angel Mañanas
- Automatic Control Department (ESAII), Biomedical Engineering Research Centre (CREB), Universitat Politècnica de Catalunya-Barcelona Tech (UPC), Barcelona, Spain
- Biomedical Research Networking Center in Bioengineering, Biomaterials, and Nanomedicine (CIBER-BBN), Madrid, Spain
11
Chen DH, Jiang EH. Joint Power and Time Allocation in Hybrid NoMA/OMA IoT Networks for Two-Way Communications. Entropy (Basel) 2022; 24:1756. PMID: 36554160. PMCID: PMC9778168. DOI: 10.3390/e24121756.
Abstract
This article investigates two-way communications between an access point (AP) and multiple terminals in low-cost Internet of Things (IoT) networks. The main issues considered are the asymmetric transmission traffic on the uplink (UL) and downlink (DL) and the unbalanced receiver processing capabilities at the AP and the terminals. As a solution, a hybrid non-orthogonal multiple access/orthogonal multiple access (NoMA/OMA) scheme together with a joint power and time allocation method is proposed. For the system design, we formulate an optimization problem that minimizes the system power while satisfying the UL and DL transmission rate constraints. Because of the coupling of the power and time variables in the objective function and the multi-user interference (MUI) in the UL rate constraints, the formulated problem is non-linear and non-convex and thus hard to solve. To obtain a numerically efficient solution, the original problem is first reformulated as a convex one via the successive convex approximation (SCA) method and then solved with an iterative routine. The proposed transmission scheme is shown to be not only physically feasible but also power-efficient.
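The SCA recipe invoked here can be seen on a one-variable toy problem: keep the convex part of the objective, linearize the concave part at the current iterate, and solve the resulting convex surrogate in closed form. The problem below is illustrative, not the paper's power/time allocation.

```python
def f(x):
    """Non-convex objective: convex x**4 plus concave -x**2."""
    return x ** 4 - x ** 2

def sca_minimize(x0, iters=60):
    """Successive convex approximation for f: linearize -x**2 at xk
    (its tangent majorizes it), leaving the convex surrogate
    x**4 - 2*xk*x + const, whose minimizer solves 4*x**3 = 2*xk,
    i.e. x = (xk / 2) ** (1/3)."""
    x = x0
    history = [x]
    for _ in range(iters):
        x = (x / 2.0) ** (1.0 / 3.0)
        history.append(x)
    return x, history

x_star, history = sca_minimize(1.0)  # converges to 1/sqrt(2), a local minimum
```

Because each surrogate majorizes f and touches it at the current iterate, the objective decreases monotonically, the standard convergence guarantee SCA-based schemes rely on.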
12
Muñoz O, Pascual-Iserte A, San Arranz G. Robust Precoding for Multi-User Visible Light Communications with Quantized Channel Information. Sensors (Basel) 2022; 22:9238. PMID: 36501940. PMCID: PMC9740479. DOI: 10.3390/s22239238.
Abstract
In this paper, we address the design of multi-user multiple-input single-output (MU-MISO) precoders for indoor visible light communication (VLC) systems. The goal is to minimize the transmitted optical power per light-emitting diode (LED) under imperfect channel state information (CSI) at the transmitter side. Robust precoders for imperfect CSI available in the literature cover noisy and outdated channel estimation; to the best of our knowledge, however, no work has considered adding robustness against channel quantization. In this paper, we fill this gap by addressing imperfect CSI due to the quantization of VLC channels. We model the quantization errors in the CSI through polyhedral uncertainty regions. For polyhedral uncertainty regions and positive real channels, as is the case for VLC channels, we show that the robust precoder against channel quantization errors that minimizes the transmitted optical power while guaranteeing a target signal-to-noise-plus-interference ratio (SNIR) per user is the solution of a second-order cone programming (SOCP) problem. Finally, we evaluate its performance under different quantization levels through numerical simulations.
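The value of polyhedral uncertainty regions is that a robust constraint only needs to be enforced at the vertices. A scalar caricature of this (the paper's actual design is a multi-user SOCP): with a positive real channel known only to lie in an interval, the worst case sits at the lower vertex, so the infinite family of constraints collapses to one.

```python
def min_power_robust(h_lo, h_hi, snir_target, noise):
    """Minimum transmit power p with p * h >= snir_target * noise for every
    channel h in [h_lo, h_hi]. For a positive real channel the constraint is
    tightest at the vertex h_lo, the scalar analogue of enforcing the SNIR
    constraint only at the vertices of a polyhedral uncertainty region."""
    return snir_target * noise / h_lo

p = min_power_robust(h_lo=0.5, h_hi=1.0, snir_target=10.0, noise=0.01)
```

Any smaller power fails for the worst-case channel, so the vertex constraint is both necessary and sufficient here.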
13
Park G, Lee K. Optimization of the Trajectory, Transmit Power, and Power Splitting Ratio for Maximizing the Available Energy of a UAV-Aided SWIPT System. Sensors (Basel) 2022; 22:9081. PMID: 36501780. PMCID: PMC9741088. DOI: 10.3390/s22239081.
Abstract
In this study, we investigate the maximization of the available energy for an unmanned aerial vehicle (UAV)-aided simultaneous wireless information and power transfer (SWIPT) system, in which the ground terminals (GTs) decode information and collect energy simultaneously from the downlink signal sent by the UAV based on a power splitting (PS) policy. To guarantee that each GT has a fair amount of available energy, our aim is to optimize the trajectory and transmit power of the UAV and the PS ratio of the GTs to maximize the minimum average available energy among all GTs while ensuring the average spectral efficiency requirement. To address the nonconvexity of the formulated optimization problem, we apply a successive convex optimization technique and propose an iterative algorithm to derive the optimal strategies of the UAV and GTs. Through performance evaluations, we show that the proposed scheme outperforms the existing baseline schemes in terms of the max-min available energy by adaptively controlling the optimization variables according to the situation.
14
Pu S, Olshevsky A, Paschalidis IC. A Sharp Estimate on the Transient Time of Distributed Stochastic Gradient Descent. IEEE Trans Automat Contr 2022; 67:5900-5915. PMID: 37284602. PMCID: PMC10241409. DOI: 10.1109/tac.2021.3126253.
Abstract
This paper is concerned with minimizing the average of n cost functions over a network in which agents may communicate and exchange information with each other. We consider the setting where only noisy gradient information is available. To solve the problem, we study the distributed stochastic gradient descent (DSGD) method and perform a non-asymptotic convergence analysis. For strongly convex and smooth objective functions, DSGD asymptotically achieves, in expectation, the optimal network-independent convergence rate of centralized stochastic gradient descent (SGD). Our main contribution is to characterize the transient time needed for DSGD to approach this asymptotic convergence rate. Moreover, we construct a "hard" optimization problem that proves the sharpness of the obtained result, and numerical experiments demonstrate the tightness of the theoretical results.
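A noiseless miniature of DSGD conveys the setup: agents on a ring mix their iterates through doubly stochastic weights and take local gradient steps, converging to a neighborhood of the minimizer of the average cost. Quadratic local costs and all constants below are illustrative, not from the paper.

```python
def dsgd_ring(targets, eta=0.01, iters=4000):
    """Distributed gradient descent on a ring: each agent averages with its
    two neighbors using doubly stochastic weights (1/3 self, 1/3 each
    neighbor), then steps on its local cost f_i(x) = 0.5 * (x - targets[i])**2.
    The minimizer of the average cost is mean(targets)."""
    n = len(targets)
    x = [0.0] * n
    for _ in range(iters):
        mixed = [(x[i] + x[(i - 1) % n] + x[(i + 1) % n]) / 3.0
                 for i in range(n)]
        x = [mixed[i] - eta * (mixed[i] - targets[i]) for i in range(n)]
    return x

targets = [0.0, 1.0, 2.0, 3.0]
x = dsgd_ring(targets)          # every agent ends near the global optimum 1.5
```

With a constant step size the agents agree only up to a small consensus error; the paper's transient-time analysis quantifies how long stochastic DSGD takes before the network topology stops mattering.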
Affiliation(s)
- Shi Pu: School of Data Science, Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen, China
- Alex Olshevsky: Department of Electrical and Computer Engineering and the Division of Systems Engineering, Boston University, Boston, MA
- Ioannis Ch Paschalidis: Department of Electrical and Computer Engineering and the Division of Systems Engineering, Boston University, Boston, MA
15
Rixon Fuchs L, Maki A, Gällström A. Optimization Method for Wide Beam Sonar Transmit Beamforming. Sensors (Basel) 2022; 22:7526. [PMID: 36236625] [PMCID: PMC9570710] [DOI: 10.3390/s22197526]
Abstract
Imaging and mapping sonars such as forward-looking sonars (FLS) and side-scan sonars (SSS) are sensors frequently used onboard autonomous underwater vehicles. To acquire information from around the vehicle, it is desirable for these sonar systems to insonify a large area; thus, the sonar transmit beampattern should have a wide field of view. In this work, we study the problem of optimizing wide transmission beampatterns. We consider the conventional phased-array beampattern design problem, where all array elements transmit an identical waveform and the complex weight vector is adjusted to create the desired beampattern shape. In our experiments, we consider wide transmission beampatterns (≥20°) with uniform output power. In this paper, we introduce a new iterative-convex optimization method for narrowband linear phased arrays and compare it to existing approaches based on convex and convex-concave optimization. In the iterative-convex method, the weight parameters are allowed to be complex, as in disciplined convex-concave programming (DCCP). Comparing the iterative-convex optimization method and DCCP to standard convex optimization, we see that the former methods achieve optimized beampatterns closer to the desired beampatterns. Furthermore, for the same number of iterations, the proposed iterative-convex method achieves optimized beampatterns that are closer to the desired beampattern than those achieved by optimization with DCCP.
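For context, the quantity being shaped is the array factor of a narrowband linear phased array as a function of the complex weight vector; a minimal sketch of its computation (uniform weights for illustration, not the paper's wide-beam design):

```python
import numpy as np

N = 16                                    # array elements
d = 0.5                                   # element spacing in wavelengths
angles = np.linspace(-90.0, 90.0, 721)    # look angles in degrees
u = np.sin(np.deg2rad(angles))

# Steering matrix: element n at position n*d contributes phase 2*pi*d*n*u
n_idx = np.arange(N)
A = np.exp(1j * 2 * np.pi * d * np.outer(n_idx, u))

w = np.ones(N) / N                        # uniform weights -> narrow broadside beam
bp = 20 * np.log10(np.abs(w.conj() @ A) + 1e-12)   # beampattern in dB
peak_angle = angles[np.argmax(bp)]        # broadside peak at 0 degrees
```

A wide-beam design replaces the fixed `w` above with weights found by (iterative-)convex optimization so that |w^H a(θ)| stays flat over the desired sector.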
Affiliation(s)
- Louise Rixon Fuchs: Division of Robotics, Perception and Learning, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden; Saab Dynamics, SE-581 88 Linköping, Sweden
- Atsuto Maki: Division of Robotics, Perception and Learning, KTH Royal Institute of Technology, SE-100 44 Stockholm, Sweden
16
Zhou L, Wu J, Wei Q, Shi W, Jin Y. Multi-Sensor Scheduling Method Based on Joint Risk Assessment with Variable Weight. Entropy (Basel) 2022; 24:1315. [PMID: 36141201] [PMCID: PMC9497982] [DOI: 10.3390/e24091315]
Abstract
In multi-sensor cooperative detection systems, to reduce the target threat risk caused by attack tasks and the target loss risk induced by uncertain environmental factors, this paper proposes a multi-sensor scheduling method based on joint risk assessment with variable weight. First, considering the target state and prior expert experience of sensor scheduling, this paper gives a new formulation of target threat risk. By combining the target threat risk and the target loss risk, it then constructs a joint risk model to capture the diversity of risk assessment. Next, a variable-weighted joint risk assessment model is given based on adaptive weighting of the target loss risk and the target threat risk, and the multi-sensor scheduling problem is formulated as minimizing the multi-step prediction of the variable-weighted joint risk. Finally, this paper relaxes the above non-convex optimization problem into a convex subproblem and designs a multi-sensor scheduling scheme, improving the speed and quality of the scheduling solution. The simulation results show that the proposed method can adaptively schedule sensors and accurately track targets using minimal sensor resources.
Affiliation(s)
- Lin Zhou: School of Artificial Intelligence, Henan University, Zhengzhou 450046, China
- Jiawei Wu: School of Artificial Intelligence, Henan University, Zhengzhou 450046, China
- Qian Wei: School of Artificial Intelligence, Henan University, Zhengzhou 450046, China
- Wentao Shi: School of Marine Science and Technology, Northwestern Polytechnical University, Xi’an 710072, China
- Yong Jin: School of Artificial Intelligence, Henan University, Zhengzhou 450046, China
17
Cai TT, Zhang AR, Zhou Y. Sparse Group Lasso: Optimal Sample Complexity, Convergence Rate, and Statistical Inference. IEEE Trans Inf Theory 2022; 68:5975-6002. [PMID: 36865503] [PMCID: PMC9974176] [DOI: 10.1109/tit.2022.3175455]
Abstract
We study sparse group Lasso for high-dimensional double sparse linear regression, where the parameter of interest is simultaneously element-wise and group-wise sparse. This problem is an important instance of the simultaneously structured model - an actively studied topic in statistics and machine learning. In the noiseless case, matching upper and lower bounds on sample complexity are established for the exact recovery of sparse vectors and for the stable estimation of approximately sparse vectors. In the noisy case, upper and matching minimax lower bounds on the estimation error are obtained. We also consider the debiased sparse group Lasso and investigate its asymptotic properties for the purpose of statistical inference. Finally, numerical studies are provided to support the theoretical results.
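The sparse group Lasso penalty studied above has a closed-form proximal operator (elementwise soft-thresholding followed by groupwise shrinkage), which is the building block of proximal algorithms for this penalty; a minimal sketch of that operator (an illustration, not the authors' estimator):

```python
import numpy as np

def prox_sparse_group_lasso(v, groups, lam1, lam2, step=1.0):
    """Prox of step*(lam1*||x||_1 + lam2*sum_g ||x_g||_2):
    elementwise soft-thresholding, then shrink each group toward zero."""
    # Stage 1: elementwise soft-threshold (prox of the l1 term)
    z = np.sign(v) * np.maximum(np.abs(v) - step * lam1, 0.0)
    # Stage 2: groupwise shrinkage (prox of the group-l2 term)
    out = np.zeros_like(z)
    for g in groups:
        norm = np.linalg.norm(z[g])
        if norm > step * lam2:
            out[g] = (1.0 - step * lam2 / norm) * z[g]
    return out

v = np.array([3.0, -0.5, 0.2, 2.0, -2.5, 0.1])
groups = [np.array([0, 1, 2]), np.array([3, 4, 5])]
x = prox_sparse_group_lasso(v, groups, lam1=0.4, lam2=0.5)
```

Small entries are zeroed individually by the first stage, while a sufficiently large group penalty zeroes out an entire group at once.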
Affiliation(s)
- T Tony Cai: Department of Statistics & Data Science, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104
- Anru R Zhang: Departments of Biostatistics & Bioinformatics, Computer Science, Mathematics, and Statistical Science, Duke University, Durham, NC 27710; Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706
- Yuchen Zhou: Department of Statistics & Data Science, The Wharton School, University of Pennsylvania, Philadelphia, PA 19104; Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706
18
Amaya L, Inga E. Compressed Sensing Technique for the Localization of Harmonic Distortions in Electrical Power Systems. Sensors (Basel) 2022; 22:6434. [PMID: 36080893] [PMCID: PMC9460648] [DOI: 10.3390/s22176434]
Abstract
The present work proposes to locate the harmonic frequencies that distort the fundamental voltage and current waves in electrical systems using the compressed sensing (CS) technique. Compressed sensing changes how data are acquired: a few samples are taken at random and arranged into a measurement matrix, and a linear transformation takes the signal from the time domain to the frequency domain in compressed form; the inverse transformation then reconstructs the signal from these few sensed samples. To demonstrate the benefits of CS for harmonic detection in the electrical network, a commercial power quality analyzer is used as the measurement standard. It measures the current of a nonlinear load and reports the total harmonic current distortion (THD-I) and the number of harmonics detected in the network, acquiring its data according to the Shannon-Nyquist theorem. At the same time, an electronic prototype senses the current signal of the same nonlinear load, taking samples randomly and incoherently, and therefore taking fewer samples than the power quality analyzer. The data acquired by the prototype are transferred to Matlab via USB, where the CS algorithm runs and delivers the THD-I of the current signal and the number of harmonics. Finally, the results of the compressed sensing algorithm are compared against the standard measurement equipment: the error is calculated, and the number of samples, the computation time, and the maximum sampling frequency of the standard equipment and the prototype are analyzed.
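As a self-contained illustration of the recovery step (a generic lasso/ISTA sketch with a DFT dictionary, not the prototype's algorithm), a signal containing a fundamental and one harmonic is reconstructed in the frequency domain from a random subset of its time samples:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 256, 96                       # signal length, number of random samples
t = np.arange(N)
# Fundamental at DFT bin 10 plus a smaller harmonic at bin 30
x = np.sin(2 * np.pi * 10 * t / N) + 0.4 * np.sin(2 * np.pi * 30 * t / N)

keep = np.sort(rng.choice(N, M, replace=False))   # random, incoherent samples
y = x[keep]

Psi = np.sqrt(N) * np.fft.ifft(np.eye(N), axis=0)  # orthonormal inverse-DFT basis
A = Psi[keep, :]                                   # sensing matrix (M x N)

def soft(z, thr):
    """Complex soft-thresholding (prox of thr * ||.||_1)."""
    mag = np.abs(z)
    return np.where(mag > thr, (1 - thr / np.maximum(mag, 1e-12)) * z, 0.0)

s = np.zeros(N, dtype=complex)
for _ in range(300):                 # ISTA iterations for the lasso problem
    s = soft(s + A.conj().T @ (y - A @ s), 0.1)

support = set(np.argsort(np.abs(s))[-4:])          # recovered frequency bins
```

The four dominant recovered bins are the positive- and negative-frequency locations of the fundamental and the harmonic, from which a THD-style figure could be computed.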
Affiliation(s)
- Luis Amaya: Master of Electricity Program, Universidad Politécnica Salesiana, Quito 170525, Ecuador
- Esteban Inga: Master in ICT for Education, Smart Grid Research Group (GIREI), Universidad Politécnica Salesiana, Quito 170525, Ecuador
19
Xie H, Wang Q. Modeling Dual-Drive Gantry Stages with Heavy-Load and Optimal Synchronous Controls with Force-Feed-Forward Decoupling. Entropy (Basel) 2022; 24:1153. [PMID: 36010817] [PMCID: PMC9407488] [DOI: 10.3390/e24081153]
Abstract
The application of precision dual-drive gantry stages in intelligent manufacturing is increasing. However, the loads on the dual-drive motors can be severely inconsistent due to the movement of heavy loads on the horizontal crossbeam, resulting in synchronization errors when the dual-drive motors move in the same direction. This phenomenon affects the machining accuracy of the gantry stage and is a critical problem that should be solved promptly. A novel optimal synchronization control algorithm based on model decoupling is proposed to solve the problem. First, an accurate physical model is established to capture the essential characteristics of the heavy-load dual-drive gantry stage, in which the rigid-flexible coupling dynamics are considered; the model includes the crossbeam's linear motion and its rotational motion with a non-constant moment of inertia. The established model is verified against the actual system. By defining the virtual centroid of the crossbeam, the cross-coupling force between the dual-drive motors is quantified. Then, the virtual-centroid-based Gantry Synchronization Linear Quadratic Regulator (GSLQR) optimal control and force-Feed-Forward (FF) decoupling control algorithm is proposed. Comparative experiments show the effectiveness and superiority of the proposed algorithm.
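The LQR ingredient of such a controller can be sketched generically: for a single axis modeled as a discrete-time double integrator (illustrative numbers, not the paper's identified gantry model), the feedback gain follows from iterating the discrete-time Riccati equation:

```python
import numpy as np

# One axis as a discrete-time double integrator (position, velocity)
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt ** 2], [dt]])
Q = np.diag([100.0, 1.0])             # weight position error heavily
R = np.array([[0.1]])

# Solve the discrete algebraic Riccati equation by value iteration
P = Q.copy()
for _ in range(10000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # current feedback gain
    P_new = Q + A.T @ P @ (A - B @ K)
    if np.max(np.abs(P_new - P)) < 1e-10:
        P = P_new
        break
    P = P_new
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# The LQR closed loop must be asymptotically stable (spectral radius < 1)
rho = np.max(np.abs(np.linalg.eigvals(A - B @ K)))

# Simulate the regulator from an initial position offset
x = np.array([1.0, 0.0])
for _ in range(600):
    x = (A - B @ K) @ x
```

In the paper this kind of quadratic-regulator design is applied jointly to both drives around the virtual centroid, with feed-forward decoupling handling the cross-coupling force.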
20
You Z, Hu G, Zhou H, Zheng G. Joint Estimation Method of DOD and DOA of Bistatic Coprime Array MIMO Radar for Coherent Targets Based on Low-Rank Matrix Reconstruction. Sensors (Basel) 2022; 22:4625. [PMID: 35746406] [PMCID: PMC9229399] [DOI: 10.3390/s22124625]
Abstract
Based on low-rank matrix reconstruction theory, this paper proposes a joint DOD and DOA estimation method for coherent targets with bistatic coprime array MIMO radar. Unlike conventional vectorization, the proposed method processes the coprime array with virtual sensor interpolation, obtaining a uniform linear array from which the covariance matrix is generated. A Toeplitz matrix is then reconstructed, and a matrix recovery model is established according to nuclear norm minimization theory. Finally, a reduced-dimension multiple signal classification algorithm is applied to estimate the angles of the coherent targets, with which automatic pairing of DOD and DOA is realized. With the same number of physical sensors, the proposed method expands the array aperture effectively, so that the degrees of freedom and the angular resolution for coherent signals are improved significantly. The effectiveness of the method is, however, largely limited by the signal-to-noise ratio. The superiority and effectiveness of the method are demonstrated through simulation experiments.
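The decorrelation idea can be illustrated in isolation: for two fully coherent sources the noise-free covariance is rank one, but rebuilding a Toeplitz matrix from its diagonal averages restores a rank-2 signal subspace on which MUSIC-type methods can operate (a generic sketch, not the paper's interpolation-plus-nuclear-norm pipeline):

```python
import numpy as np

N = 10                                   # virtual ULA size
angles = np.deg2rad([-10.0, 20.0])       # two fully coherent sources

def steer(theta):
    """Half-wavelength ULA steering vector."""
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta))

v = steer(angles[0]) + steer(angles[1])  # identical waveforms -> one combined snapshot
R = np.outer(v, v.conj())                # noise-free covariance: rank one

# Toeplitz reconstruction: average each diagonal of R
r = np.array([np.mean(np.diag(R, k)) for k in range(N)])
R_toep = np.empty((N, N), dtype=complex)
for i in range(N):
    for j in range(N):
        R_toep[i, j] = r[j - i] if j >= i else np.conj(r[i - j])

eig_R = np.sort(np.linalg.eigvalsh(R))[::-1]       # one dominant eigenvalue
eig_T = np.sort(np.linalg.eigvalsh(R_toep))[::-1]  # two dominant eigenvalues
```

The second eigenvalue of the Toeplitzified matrix is restored to the same order as the first, so both coherent targets become visible to subspace methods.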
Affiliation(s)
- Guoping Hu: Correspondence: ; Tel.: +86-177-8325-2036
21
Peng YT, Chen YR, Chen Z, Wang JH, Huang SC. Underwater Image Enhancement Based on Histogram-Equalization Approximation Using Physics-Based Dichromatic Modeling. Sensors (Basel) 2022; 22:2168. [PMID: 35336336] [DOI: 10.3390/s22062168]
Abstract
This work proposes an underwater image enhancement method based on histogram-equalization (HE) approximation using physics-based dichromatic modeling (PDM). Images captured underwater usually suffer from low contrast and color distortion due to light scattering and attenuation. The PDM describes the image formation process and can be used to restore naturally degraded images, such as underwater images. However, it does not ensure that the restored images have good contrast. Thus, we propose approximating conventional HE based on the PDM to correct the color distortion of underwater images and enhance their contrast through convex optimization. Experimental results demonstrate that the proposed method performs favorably against state-of-the-art underwater image restoration approaches.
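For reference, the classical histogram equalization that the paper approximates maps gray levels through the normalized cumulative histogram; a minimal grayscale sketch (the paper's contribution is doing this jointly with the dichromatic model, which is not reproduced here):

```python
import numpy as np

def histogram_equalize(img):
    """Classical histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf_min = cdf[np.nonzero(cdf)[0][0]]          # CDF at the darkest occupied level
    # Map each gray level so the output CDF is approximately uniform
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

# Low-contrast test image: values squeezed into [100, 140]
rng = np.random.default_rng(0)
img = rng.integers(100, 141, size=(64, 64), dtype=np.uint8)
out = histogram_equalize(img)
```

The low-contrast input occupying [100, 140] is stretched to the full [0, 255] range, which is the contrast behavior the convex HE-approximation emulates while also respecting the physical model.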
22
Suder PM, Molstad AJ. Scalable algorithms for semiparametric accelerated failure time models in high dimensions. Stat Med 2022; 41:933-949. [PMID: 35014701] [DOI: 10.1002/sim.9264]
Abstract
Semiparametric accelerated failure time (AFT) models are a useful alternative to Cox proportional hazards models, especially when the assumption of constant hazard ratios is untenable. However, rank-based criteria for fitting AFT models are often nondifferentiable, which poses a computational challenge in high-dimensional settings. In this article, we propose a new alternating direction method of multipliers algorithm for fitting semiparametric AFT models by minimizing a penalized rank-based loss function. Our algorithm scales well in both the number of subjects and the number of predictors, and can easily accommodate a wide range of popular penalties. To improve the selection of tuning parameters, we propose a new criterion which avoids some common problems in cross-validation with censored responses. Through extensive simulation studies, we show that our algorithm and software are much faster than existing methods (which can only be applied to special cases), and that estimators which minimize a penalized rank-based criterion often outperform alternative estimators which minimize penalized weighted least squares criteria. Application to nine cancer datasets further demonstrates that rank-based estimators of semiparametric AFT models are competitive with estimators assuming proportional hazards in high-dimensional settings, whereas weighted least squares estimators are often not. A software package implementing the algorithm, along with a set of auxiliary functions, is available for download at github.com/ajmolstad/penAFT.
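The nondifferentiable criterion in question is a Gehan-type rank-based loss; a plain O(n^2) evaluation on synthetic AFT data (an illustrative sketch of the loss only, not the penAFT ADMM solver):

```python
import numpy as np

def gehan_loss(beta, X, log_t, delta):
    """Gehan-type rank-based loss for a semiparametric AFT model:
    average over pairs of delta_i * max(e_j - e_i, 0), e = log T - X @ beta."""
    e = log_t - X @ beta
    diff = e[None, :] - e[:, None]            # diff[i, j] = e_j - e_i
    return np.sum(delta[:, None] * np.maximum(diff, 0.0)) / len(e) ** 2

rng = np.random.default_rng(5)
n, p = 60, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5, 0.0])
log_t = X @ beta_true + rng.normal(scale=0.1, size=n)  # AFT model on the log scale
delta = np.ones(n)                             # no censoring in this toy check

loss_true = gehan_loss(beta_true, X, log_t, delta)
loss_zero = gehan_loss(np.zeros(p), X, log_t, delta)
```

The loss is markedly smaller at the data-generating coefficients than at zero, which is the signal a penalized minimizer of this criterion exploits.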
Affiliation(s)
- Piotr M Suder: Department of Statistics, University of Florida, Gainesville, Florida, USA
- Aaron J Molstad: Department of Statistics, University of Florida, Gainesville, Florida, USA; Genetics Institute, University of Florida, Gainesville, Florida, USA
23
Abstract
We introduce a user-friendly computational framework for implementing robust versions of a wide variety of structured regression methods with the L2 criterion. In addition to introducing an algorithm for performing L2E regression, our framework enables robust regression with the L2 criterion for additional structural constraints, works without requiring complex tuning procedures on the precision parameter, can be used to identify heterogeneous subpopulations, and can incorporate readily available non-robust structured regression solvers. We provide convergence guarantees for the framework and demonstrate its flexibility with some examples. Supplementary materials for this article are available online.
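To make the L2 criterion concrete, here is a toy L2E location fit with known scale: the criterion integral(f_mu^2) - 2 * mean(f_mu(x_i)) is scanned over a grid, and its minimizer ignores a 10% outlier cluster that visibly pulls the sample mean (an illustration of the criterion only, not the article's structured-regression framework):

```python
import numpy as np

rng = np.random.default_rng(4)
# 90% inliers near 0, 10% outliers near 8
data = np.concatenate([rng.normal(0.0, 1.0, 90), rng.normal(8.0, 0.5, 10)])

def l2e_objective(mu, x, sigma=1.0):
    """L2E criterion for a normal location model with known scale:
    integral of f_mu^2 minus twice the mean model density at the data."""
    dens = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return 1.0 / (2 * np.sqrt(np.pi) * sigma) - 2.0 * np.mean(dens)

mus = np.linspace(-2.0, 10.0, 1201)
obj = np.array([l2e_objective(m, data) for m in mus])
mu_l2e = mus[np.argmin(obj)]          # robust location estimate
mu_mean = data.mean()                 # non-robust comparison
```

The L2E estimate stays near the main mode while the sample mean is dragged toward the outliers, illustrating the robustness the framework provides for richer structured-regression settings.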
Affiliation(s)
- Eric C Chi: Rice University, Statistics, Houston, 77005-1892 United States
24
Ten Eikelder SCM, Ajdari A, Bortfeld T, den Hertog D. Conic formulation of fluence map optimization problems. Phys Med Biol 2021; 66. [PMID: 34587600] [DOI: 10.1088/1361-6560/ac2b82]
Abstract
The convexity of objectives and constraints in fluence map optimization (FMO) for radiation therapy has been extensively studied. Next to convexity, there is another important characteristic of optimization functions and problems that has thus far not been considered in the FMO literature: conic representation. Optimization problems that are conically representable using quadratic, exponential and power cones are solvable with advanced primal-dual interior-point algorithms. These algorithms guarantee an optimal solution in polynomial time and perform well in practice. In this paper, we construct conic representations for most FMO objectives and constraints. This paper is the first to show that FMO problems containing multiple biological evaluation criteria can be solved in polynomial time. For fractionation-corrected functions for which no exact conic reformulation is found, we provide an accurate approximation that is conically representable. We present numerical results on the TROTS data set, which demonstrate very stable numerical performance for solving FMO problems in conic form. With ongoing research in the optimization community, improvements in speed can be expected, which makes conic optimization a promising alternative for solving FMO problems.
Affiliation(s)
- S C M Ten Eikelder: Department of Econometrics and Operations Research, Tilburg University, The Netherlands
- A Ajdari: Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, United States of America
- T Bortfeld: Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, United States of America
- D den Hertog: Department of Operations Management, University of Amsterdam, The Netherlands
25
Abstract
The past decade has seen the flourishing of model-based image reconstruction (MBIR) algorithms, which are often applications or adaptations of convex optimization algorithms from the optimization community. We review some state-of-the-art algorithms that have enjoyed wide popularity in medical image reconstruction, emphasize known connections between different algorithms, and discuss practical issues such as computation and memory cost. More recently, deep learning (DL) has forayed into medical imaging, where the latest developments try to exploit the synergy between DL and MBIR to elevate MBIR's performance. We present existing approaches and emerging trends in DL-enhanced MBIR methods, with particular attention to the underlying role of convexity and convex algorithms in network architecture. We also discuss how convexity can be employed to improve the generalizability and representation power of DL networks in general.
Affiliation(s)
- Jingyan Xu: Radiology, Johns Hopkins University, Baltimore, United States
- Frédéric Noo: Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah, United States
26
Zdun L, Brandt C. Fast MPI reconstruction with non-smooth priors by stochastic optimization and data-driven splitting. Phys Med Biol 2021; 66. [PMID: 34298534] [DOI: 10.1088/1361-6560/ac176c]
Abstract
Magnetic particle images are currently most often reconstructed using classical Tikhonov regularization (i.e., an ℓ2 regularization term) combined with the Kaczmarz method. Quality-enhancing choices such as sparsity-promoting ℓ1 regularization or TV regularization lead to problems that cannot be solved by the standard Kaczmarz method. We propose to use the stochastic primal-dual hybrid gradient method to gain more flexibility concerning the choice of data fitting term and regularization, and still obtain an algorithm that is at least as fast as the Kaczmarz method. The proposed algorithm performs comparably to the current state-of-the-art method in terms of run time. The quality of reconstructions can be significantly improved as different regularization terms can be easily integrated. Moreover, in order to achieve a further speed-up of the method, we propose two new step size rules which lead to fast convergence and make the algorithm very easy to handle. We improve the performance of the algorithm further by applying a data-driven splitting scheme, leading to a significant speed-up during the first iterations.
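For contrast with the proposed stochastic primal-dual method, the baseline randomized Kaczmarz iteration referenced above can be sketched in a few lines on a consistent least-squares system (a generic sketch, not the MPI system matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 200, 50
A = rng.normal(size=(m, n))          # well-conditioned random system
x_true = rng.normal(size=n)
b = A @ x_true                       # consistent right-hand side

x = np.zeros(n)
row_norms = np.sum(A ** 2, axis=1)
for _ in range(20000):               # randomized Kaczmarz sweeps
    i = rng.integers(m)
    # Project the current iterate onto the hyperplane of row i
    x += (b[i] - A[i] @ x) / row_norms[i] * A[i]

err = np.linalg.norm(x - x_true)
```

Each step touches a single row, which is why Kaczmarz-type methods are the workhorse for MPI; the paper's point is that a stochastic primal-dual scheme keeps this per-row cost while also handling non-smooth ℓ1/TV priors.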
Affiliation(s)
- Lena Zdun: Universität Hamburg, Department of Mathematics, Bundesstrasse 55, D-20146 Hamburg, Germany
- Christina Brandt: Universität Hamburg, Department of Mathematics, Bundesstrasse 55, D-20146 Hamburg, Germany
27
Jørgensen JS, Ametova E, Burca G, Fardell G, Papoutsellis E, Pasca E, Thielemans K, Turner M, Warr R, Lionheart WRB, Withers PJ. Core Imaging Library - Part I: a versatile Python framework for tomographic imaging. Philos Trans A Math Phys Eng Sci 2021; 379:20200192. [PMID: 34218673] [PMCID: PMC8255949] [DOI: 10.1098/rsta.2020.0192]
Abstract
We present the Core Imaging Library (CIL), an open-source Python framework for tomographic imaging with particular emphasis on reconstruction of challenging datasets. Conventional filtered back-projection reconstruction tends to be insufficient for highly noisy, incomplete, non-standard or multi-channel data arising for example in dynamic, spectral and in situ tomography. CIL provides an extensive modular optimization framework for prototyping reconstruction methods including sparsity and total variation regularization, as well as tools for loading, preprocessing and visualizing tomographic data. The capabilities of CIL are demonstrated on a synchrotron example dataset and three challenging cases spanning golden-ratio neutron tomography, cone-beam X-ray laminography and positron emission tomography. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 2'.
Affiliation(s)
- J. S. Jørgensen: Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby, Denmark; Department of Mathematics, The University of Manchester, Manchester, UK
- E. Ametova: Laboratory for Applications of Synchrotron Radiation, Karlsruhe Institute of Technology, Karlsruhe, Germany; Henry Royce Institute, Department of Materials, The University of Manchester, Manchester, UK
- G. Burca: ISIS Neutron and Muon Source, STFC, UKRI, Rutherford Appleton Laboratory, Didcot, UK; Department of Mathematics, The University of Manchester, Manchester, UK
- G. Fardell: Scientific Computing Department, STFC, UKRI, Rutherford Appleton Laboratory, Didcot, UK
- E. Papoutsellis: Scientific Computing Department, STFC, UKRI, Rutherford Appleton Laboratory, Didcot, UK; Henry Royce Institute, Department of Materials, The University of Manchester, Manchester, UK
- E. Pasca: Scientific Computing Department, STFC, UKRI, Rutherford Appleton Laboratory, Didcot, UK
- K. Thielemans: Institute of Nuclear Medicine and Centre for Medical Image Computing, University College London, London, UK
- M. Turner: Research IT Services, The University of Manchester, Manchester, UK
- R. Warr: Henry Royce Institute, Department of Materials, The University of Manchester, Manchester, UK
- P. J. Withers: Henry Royce Institute, Department of Materials, The University of Manchester, Manchester, UK
28
Jørgensen JS, Ametova E, Burca G, Fardell G, Papoutsellis E, Pasca E, Thielemans K, Turner M, Warr R, Lionheart WRB, Withers PJ. Core Imaging Library - Part I: a versatile Python framework for tomographic imaging. Philos Trans A Math Phys Eng Sci 2021. [PMID: 34218673] [DOI: 10.5281/zenodo.4744394]
29
Lee SJ, Lee JW, Lee W, Jang C. Constrained Multiple Planar Reconstruction for Automatic Camera Calibration of Intelligent Vehicles. Sensors (Basel) 2021; 21:4643. [PMID: 34300383] [DOI: 10.3390/s21144643]
Abstract
In intelligent vehicles, extrinsic camera calibration should preferably be conducted on a regular basis to deal with unpredictable mechanical changes or variations in weight load distribution. Specifically, high-precision extrinsic parameters between the camera coordinate frame and the world coordinate frame are essential to implement high-level functions in intelligent vehicles such as distance estimation and lane departure warning. However, conventional calibration methods, which solve a Perspective-n-Point problem, require laborious work to measure the positions of 3D points in the world coordinate frame. To reduce this inconvenience, this paper proposes an automatic camera calibration method based on 3D reconstruction. The main contribution of this paper is a novel reconstruction method to recover 3D points on planes perpendicular to the ground. The proposed method jointly optimizes the reprojection errors of image features projected from multiple planar surfaces and thereby significantly reduces errors in the camera extrinsic parameters. Experiments were conducted in synthetic simulation and real calibration environments to demonstrate the effectiveness of the proposed method.
30
Wang M, Allen GI. Integrative Generalized Convex Clustering Optimization and Feature Selection for Mixed Multi-View Data. J Mach Learn Res 2021; 22:55. [PMID: 34744522] [PMCID: PMC8570363]
Abstract
In mixed multi-view data, multiple sets of diverse features are measured on the same set of samples. By integrating all available data sources, we seek to discover common group structure among the samples that may be hidden in individualistic cluster analyses of a single data view. While several techniques for such integrative clustering have been explored, we propose and develop a convex formalization that enjoys strong empirical performance and inherits the mathematical properties of increasingly popular convex clustering methods. Specifically, our Integrative Generalized Convex Clustering Optimization (iGecco) method employs different convex distances, losses, or divergences for each of the different data views with a joint convex fusion penalty that leads to common groups. Additionally, integrating mixed multi-view data is often challenging when each data source is high-dimensional. To perform feature selection in such scenarios, we develop an adaptive shifted group-lasso penalty that selects features by shrinking them towards their loss-specific centers. Our so-called iGecco+ approach selects features from each data view that are best for determining the groups, often leading to improved integrative clustering. To solve our problem, we develop a new type of generalized multi-block ADMM algorithm using sub-problem approximations that more efficiently fits our model for big data sets. Through a series of numerical experiments and real data examples on text mining and genomics, we show that iGecco+ achieves superior empirical performance for high-dimensional mixed multi-view data.
Affiliation(s)
- Minjie Wang
- Department of Statistics, Rice University, Houston, TX 77005, USA
- Genevera I Allen
- Departments of Electrical and Computer Engineering, Statistics, and Computer Science, Rice University, Houston, TX 77005, USA; Jan and Dan Duncan Neurological Research Institute, Baylor College of Medicine, Houston, TX 77030, USA
31
Chakrabarty A, Healey E, Shi D, Zavitsanou S, Doyle FJ, Dassau E. Embedded Model Predictive Control for a Wearable Artificial Pancreas. IEEE Trans Control Syst Technol 2020; 28:2600-2607. [PMID: 33762804 PMCID: PMC7983018 DOI: 10.1109/tcst.2019.2939122]
Abstract
While artificial pancreas (AP) systems are expected to improve the quality of life among people with type 1 diabetes mellitus (T1DM), the design of convenient systems that optimize the user experience, especially for those with active lifestyles, such as children and adolescents, remains an open research question. In this work, we introduce an embeddable design and implementation of model predictive control (MPC) for AP systems for people with T1DM that significantly reduces the weight and on-body footprint of the AP system. The embeddable controller is based on a zone MPC that has been evaluated in multiple clinical studies. The proposed embedded zone MPC features a simpler design of the periodic safe zone in the cost function and the use of state-of-the-art alternating minimization algorithms for solving the convex programming problems inherent to MPC with linear models subject to convex constraints. Off-line closed-loop data generated by the FDA-accepted UVA/Padova simulator are used to select an optimization algorithm and corresponding tuning parameters. Through hardware-in-the-loop in silico results on a limited-resource Arduino Zero (Feather M0) platform, we demonstrate the potential of the proposed embedded MPC. In spite of resource limitations, our embedded zone MPC achieves performance comparable to that of the full-version zone MPC implemented on a 64-bit desktop for scenarios with and without meal-disturbance compensation. Metrics for performance comparison included median percent time in the euglycemic range ([70, 180] mg/dL) of 84.3% vs. 83.1% for announced meals, with an equivalence test yielding p = 0.0013, and 66.2% vs. 66.0% for unannounced meals with p = 0.0028.
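The convex sub-problem inside MPC with a linear model and box input constraints can be sketched in one dimension. This toy one-step controller is not the paper's zone MPC, and every constant below is hypothetical; it only shows why, for a 1-D convex quadratic, the constrained minimizer is the unconstrained one clipped to the input bounds:

```python
import numpy as np

def one_step_mpc(x, a, b, target, rho, u_max):
    """One-step MPC for x+ = a*x + b*u: minimize
    (a*x + b*u - target)^2 + rho*u^2 subject to |u| <= u_max.
    The objective is a 1-D convex quadratic, so the box-constrained
    minimizer is the unconstrained optimum clipped to [-u_max, u_max]."""
    u_star = b * (target - a * x) / (b * b + rho)
    return float(np.clip(u_star, -u_max, u_max))

# Hypothetical plant and tuning constants, for illustration only.
a, b, rho, u_max = 0.9, 2.0, 0.1, 5.0
u_free = one_step_mpc(100.0, a, b, 95.0, rho, u_max)   # interior optimum
u_sat = one_step_mpc(100.0, a, b, 110.0, rho, u_max)   # hits the input bound

assert abs(u_free - 2.0 * 5.0 / 4.1) < 1e-12
assert u_sat == 5.0
```

In higher dimensions the same structure becomes a quadratic program, which is what the alternating minimization algorithms in the paper solve on the embedded platform.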
Affiliation(s)
- Ankush Chakrabarty
- Control and Dynamical Systems Group, Mitsubishi Electric Research Laboratories, Cambridge, MA, USA
- Elizabeth Healey
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Dawei Shi
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Stamatina Zavitsanou
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Francis J. Doyle
- Harvard John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138, USA
- Eyal Dassau
- Corresponding author. Phone: +1 (617) 496-0358
32
Bilibashi D, Vitucci EM, Degli-Esposti V, Giorgetti A. An Energy-Efficient Unselfish Spectrum Leasing Scheme for Cognitive Radio Networks. Sensors (Basel) 2020; 20:E6161. [PMID: 33138059 DOI: 10.3390/s20216161]
Abstract
Cooperative Communications in Cognitive Radio (CR) have been introduced as an essential and efficient technique to improve the transmission performance of primary users and offer transmission opportunities for secondary users. In a typical multiuser Cooperative Communication in CR, each primary user can choose one secondary user as a relay node. To encourage the cooperative behavior of the secondary users, primary users lease a fraction of their allocated spectrum to the relay secondary users to transmit their data packets. In this work, we propose a novel unselfish spectrum leasing scheme for CR networks that offers an energy-efficient solution, minimizing the environmental impact of the network. A network management architecture is introduced, and resource allocation is formulated as a constrained sum energy efficiency maximization problem, which is solved using non-linear programming methods and a modified Kuhn-Munkres bipartite matching algorithm. System simulations demonstrate an increase in the energy efficiency of the primary users' network compared with previously proposed algorithms.
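The bipartite matching step (assigning one relay secondary user to each primary user at minimum total cost) can be sketched with SciPy's Hungarian-algorithm solver, the classical Kuhn-Munkres method that the paper modifies. The cost matrix below is hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical energy-cost matrix: cost[i, j] is the energy cost of
# primary user i relaying through secondary user j.
cost = np.array([[4.0, 1.0, 3.0],
                 [2.0, 0.0, 5.0],
                 [3.0, 2.0, 2.0]])

# Kuhn-Munkres (Hungarian) algorithm: one relay per primary user,
# minimizing the total energy cost of the matching.
rows, cols = linear_sum_assignment(cost)
total = cost[rows, cols].sum()

assert list(cols) == [1, 0, 2]   # PU0->SU1, PU1->SU0, PU2->SU2
assert total == 5.0
```

The paper embeds this assignment inside an outer energy-efficiency optimization; the sketch shows only the matching layer.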
33
Abstract
We introduce a powerful and yet seldom used numerical approach in statistics for solving a broad class of optimization problems where the search space is discretized. This optimization tool is widely used in engineering for solving semidefinite programming (SDP) problems and is called SeDuMi (self-dual minimization). We focus on optimal design problems and demonstrate how to formulate A-, A_s-, c-, I-, and L-optimal design problems as SDP problems and show how they can be effectively solved by SeDuMi in MATLAB. We also show the numerical approach is flexible by applying it to further find optimal designs based on the weighted least squares estimator or when there are constraints on the weight distribution of the sought optimal design. For approximate designs, the optimality of the SDP-generated designs can be verified using the Kiefer-Wolfowitz equivalence theorem. SDP also finds optimal designs for nonlinear regression models commonly used in social and biomedical research. Several examples are presented for linear and nonlinear models.
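As a minimal numerical check of the A-optimality criterion discussed above, one can evaluate trace(M(w)^{-1}) directly rather than calling an SDP solver such as SeDuMi. For the simple linear model y = b0 + b1*x on the design space {-1, 0, 1}, the classical A-optimal approximate design puts weight 1/2 on each endpoint:

```python
import numpy as np

def a_criterion(weights, xs):
    """A-optimality criterion trace(M(w)^{-1}) for the linear model
    y = b0 + b1*x, regressor f(x) = (1, x), and design weights w,
    where M(w) = sum_i w_i f(x_i) f(x_i)^T is the information matrix."""
    F = np.column_stack([np.ones_like(xs), xs])
    M = F.T @ (np.asarray(weights)[:, None] * F)
    return np.trace(np.linalg.inv(M))

xs = np.array([-1.0, 0.0, 1.0])
uniform = [1 / 3, 1 / 3, 1 / 3]
endpoints = [0.5, 0.0, 0.5]   # classical A-optimal design on [-1, 1]

# M(endpoints) is the identity, so the criterion equals 2, which
# beats the uniform design's value of 2.5.
assert abs(a_criterion(endpoints, xs) - 2.0) < 1e-12
assert a_criterion(endpoints, xs) < a_criterion(uniform, xs)
```

The SDP formulation in the paper automates exactly this kind of search over the weight simplex, with the equivalence theorem certifying optimality.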
Affiliation(s)
- Weng Kee Wong
- Department of Biostatistics, University of California, Los Angeles, CA 90095-1772, USA
- Yue Yin
- Department of Mathematics and Statistics, University of Victoria, Victoria, BC, Canada V8W 2Y2
- Julie Zhou
- Department of Mathematics and Statistics, University of Victoria, Victoria, BC, Canada V8W 2Y2
34
Srivastav PS, Chen L, Wahla AH. On the Performance of Efficient Channel Estimation Strategies for Hybrid Millimeter Wave MIMO System. Entropy (Basel) 2020; 22:E1121. [PMID: 33286890 PMCID: PMC7597249 DOI: 10.3390/e22101121]
Abstract
Millimeter wave (mmWave) communication relying on multiple-input multiple-output (MIMO) technology is a potential candidate for fulfilling the huge emerging bandwidth requirements. Due to the short wavelength and the complicated hardware architecture of mmWave MIMO systems, conventional estimation strategies based on individually exploiting sparsity or low-rank properties are no longer efficient, and more advanced estimation strategies are required to recover the targeted channel matrix. Therefore, in this paper, we propose a novel channel estimation strategy based on the symmetrical version of the alternating direction method of multipliers (S-ADMM), which exploits the sparsity and low-rank property of the channel jointly in a symmetrical manner. In S-ADMM, the Lagrange multipliers are updated twice at each iteration, which results in symmetrical handling of all the available variables in the optimization problem. To validate the proposed algorithm, numerous computer simulations have been carried out, which show that S-ADMM performs well in terms of convergence compared to other benchmark algorithms and is able to provide globally optimal solutions for the strictly convex mmWave joint channel estimation problem.
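A standard (non-symmetric) ADMM sketch for a sparse convex problem shows the alternating sub-problem structure that S-ADMM refines with a second multiplier update per iteration. This lasso example on synthetic data is an illustration only, not the paper's channel estimator:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    """Standard ADMM for min 0.5*||Ax - b||^2 + lam*||x||_1.
    S-ADMM in the paper updates the multiplier twice per iteration;
    this sketch shows the basic single-update scheme."""
    m, n = A.shape
    x = z = u = np.zeros(n)
    P = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached factor
    for _ in range(iters):
        x = P @ (A.T @ b + rho * (z - u))  # smooth quadratic sub-problem
        z = soft(x + u, lam / rho)         # sparsity sub-problem
        u = u + x - z                      # (single) multiplier update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [3.0, -2.0, 1.5]
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)

obj = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + 0.1 * np.sum(np.abs(x))
assert obj(x_hat) < obj(np.zeros(10))            # far better than the trivial point
assert np.allclose(x_hat[:3], x_true[:3], atol=0.3)
```

Each ADMM pass splits the joint objective into two easy sub-problems, which is the property the symmetrical variant exploits for the combined sparse-plus-low-rank channel model.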
Affiliation(s)
- Prateek Saurabh Srivastav
- Institute of Microelectronics of Chinese Academy of Sciences, Beijing 100029, China
- School of Electronics, Electrical and Communication, University of Chinese Academy of Sciences, Beijing 100049, China
- Lan Chen
- Institute of Microelectronics of Chinese Academy of Sciences, Beijing 100029, China
- Arfan Haider Wahla
- Institute of Microelectronics of Chinese Academy of Sciences, Beijing 100029, China
- School of Electronics, Electrical and Communication, University of Chinese Academy of Sciences, Beijing 100049, China
35
Wu VW, Epelman MA, Pasupathy KS, Sir MY, Deufel CL. A new optimization algorithm for HDR brachytherapy that improves DVH-based planning: Truncated Conditional Value-at-Risk (TCVaR). Biomed Phys Eng Express 2020; 6. [PMID: 35102005 DOI: 10.1088/2057-1976/abb4bc]
Abstract
Purpose: To introduce a new optimization algorithm that improves DVH results and is designed for the type of heterogeneous dose distributions that occur in brachytherapy. Methods: The new optimization algorithm is based on a prior mathematical approach that uses mean doses of the DVH metric tails. The prior mean dose approach is referred to as conditional value-at-risk (CVaR), and unfortunately produces noticeably worse DVH metric results than gradient-based approaches. We have improved upon the CVaR approach, using the so-called Truncated CVaR (TCVaR), by excluding the hottest or coldest voxels in the structure from the calculations of the mean dose of the tail. Our approach applies an iterative sequence of convex approximations to improve the selection of the excluded voxels. Data Envelopment Analysis was used to quantify the sensitivity of TCVaR results to parameter choice and to compare the quality of a library of 256 TCVaR plans created for each of prostate, breast, and cervix treatment sites with commercially-generated plans. Results: In terms of traditional DVH metrics, TCVaR outperformed CVaR, and the improvements increased monotonically as more iterations were used to identify and exclude the hottest/coldest voxels from the optimization problem. TCVaR also outperformed the Eclipse-Brachyvision TPS, with an improvement in PTV D95% (for equivalent organ-at-risk doses) of up to 5% (prostate), 3% (breast), and 1% (cervix). Conclusions: A novel optimization algorithm for HDR treatment planning produced plans with superior DVH metrics compared with a prior convex optimization algorithm as well as Eclipse-Brachyvision. The algorithm is computationally efficient and has potential applications as a primary optimization algorithm or as quality assurance for existing optimization approaches.
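The tail-mean idea behind CVaR and its truncated variant can be sketched on a toy dose vector. The values below are hypothetical, and the paper's TCVaR chooses the excluded voxels via iterative convex approximations rather than a fixed count, so this is only the core computation:

```python
import numpy as np

def cvar_hot(dose, alpha):
    """CVaR-style DVH surrogate: mean dose of the hottest
    alpha-fraction of voxels."""
    k = max(1, int(round(alpha * dose.size)))
    return float(np.sort(dose)[-k:].mean())

def tcvar_hot(dose, alpha, n_excluded):
    """Truncated variant: drop the n_excluded hottest voxels first,
    then take the mean of the hottest alpha-fraction of the rest."""
    kept = np.sort(dose)[:dose.size - n_excluded]
    return cvar_hot(kept, alpha)

# Hypothetical voxel doses with one extreme hot spot (typical of
# the heterogeneous distributions in brachytherapy).
dose = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 100.0])

# The single 100-unit voxel dominates the plain CVaR tail mean...
assert cvar_hot(dose, 0.2) == (100.0 + 9.0) / 2
# ...while excluding it yields a tail mean representing the bulk of the tail.
assert tcvar_hot(dose, 0.2, 1) == 8.5
```

Because both quantities are means over sorted subsets, they remain convex in the dose variables, which is what keeps the planning problem tractable.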
Affiliation(s)
- Victor W Wu
- Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109, United States of America; Department of Radiation Oncology, Mayo Clinic, Rochester, MN 55905, United States of America
- Marina A Epelman
- Department of Industrial and Operations Engineering, University of Michigan, Ann Arbor, MI 48109, United States of America
- Kalyan S Pasupathy
- Department of Health Sciences Research, Mayo Clinic, Rochester, MN 55905, United States of America; Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN 55905, United States of America
- Mustafa Y Sir
- Department of Health Sciences Research, Mayo Clinic, Rochester, MN 55905, United States of America; Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN 55905, United States of America
- Christopher L Deufel
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN 55905, United States of America
36
Han D, Liu T, Qi Y. Optimization of Mixed Energy Supply of IoT Network Based on Matching Game and Convex Optimization. Sensors (Basel) 2020; 20:E5458. [PMID: 32977565 DOI: 10.3390/s20195458]
Abstract
The interaction capability provided by the Internet of Things (IoT) significantly increases communication between human and machine, gradually changing our lives. However, the dense deployment of 5G small base stations (SBSs) and large-scale access of IoT terminal equipment (TE) will cause a dramatic increase in the energy expense costs of a wireless communication system. In this study, we designed a bilateral random model of TE allocation and energy decisions in IoT, and proposed a mixed energy supply algorithm based on a matching game and convex optimization to minimize the energy expense cost of the wireless communication system in IoT. This study divided the problem of minimizing the energy expense cost of the system into two steps. First, the random allocation problem of TEs in IoT was modeled as a matching game problem; this step obtains the TE matching scheme that minimizes the energy consumption of the whole system while guaranteeing the quality of service of the TEs. Second, the energy decision problem of the SBSs was modeled as a convex optimization problem, and the energy purchase scheme with the minimum energy expense cost of the system was obtained by solving it to optimality. According to the simulation results, the proposed mixed energy supply scheme can effectively decrease the energy expense cost of the system.
37
Park J, Kim Y, Kim JH. Integrated Guidance and Control Using Model Predictive Control with Flight Path Angle Prediction against Pull-Up Maneuvering Target. Sensors (Basel) 2020; 20:E3143. [PMID: 32498281 DOI: 10.3390/s20113143]
Abstract
Integrated guidance and control using model predictive control against a maneuvering target is proposed. Equations of motion for terminal homing are developed with consideration of the short-period dynamics as well as the actuator dynamics of a missile. The convex optimization problem is solved subject to inequality constraints consisting of acceleration and look angle limits. A discrete-time extended Kalman filter is used to estimate the position of the target, with the look angle as the measurement. This estimate is used to form the flight-path angle of the target, and polynomial fitting is applied for prediction. Numerical simulation, including a Monte Carlo simulation, is performed to verify the performance of the proposed algorithm.
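The polynomial-fitting prediction step can be sketched with NumPy: fit a low-degree polynomial to past flight-path-angle samples and extrapolate one step ahead. The sampling rate and the exactly quadratic pull-up profile below are hypothetical, chosen so the degree-2 fit recovers the trajectory perfectly:

```python
import numpy as np

# Hypothetical flight-path-angle history (radians) sampled every 0.1 s.
# For illustration the target flies an exact quadratic pull-up profile.
t = np.arange(0.0, 1.0, 0.1)
gamma = 0.05 + 0.2 * t + 0.3 * t ** 2

coeffs = np.polyfit(t, gamma, deg=2)   # fit past flight-path angles
gamma_pred = np.polyval(coeffs, 1.0)   # predict one step ahead
gamma_true = 0.05 + 0.2 * 1.0 + 0.3 * 1.0 ** 2

assert abs(gamma_pred - gamma_true) < 1e-8
```

In practice the fit is applied to noisy EKF-derived angles, so the prediction error is bounded by the filter accuracy rather than exact as in this idealized case.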
38
You G, Lv Y, Jiang Y, Yi C. A Novel Fault Diagnosis Scheme for Rolling Bearing Based on Convex Optimization in Synchroextracting Chirplet Transform. Sensors (Basel) 2020; 20:E2813. [PMID: 32429156 DOI: 10.3390/s20102813]
Abstract
Synchroextracting transform (SET), developed from the synchrosqueezing transform (SST), is a novel time-frequency (TF) analysis method. Its concentrated TF spectrum is obtained by applying a synchroextracting operator to the TF transformation coefficients on the TF plane. For this class of post-processing TF analysis methods, the main research focus is the accurate estimation of instantaneous frequency (IF). However, the performance of TF analysis is greatly affected by strong frequency modulation (FM) of the signal. In particular, measured mechanical vibration signals always contain strong background noise, which decreases the resolution of the TF representation and results in inaccurate ridge extraction. To solve this problem, an improved penalty function based on a convex optimization scheme is first introduced for signal denoising. Based on the superiority of the linear chirplet transform (LCT) in dealing with modulated signals, the synchroextracting chirplet transform (SECT) is employed to sharpen the TF representation after the convex optimization denoising operation. To verify the effectiveness of the proposed method, experiments on numerically simulated signals and measured rolling bearing fault signals are carried out. The results demonstrate that the proposed method leads to a better solution in rolling bearing fault feature extraction.
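The simplest convex-penalty denoiser, the L1 proximal operator (soft thresholding), illustrates the kind of operation such schemes build on. The paper uses an improved, non-standard penalty, so this is only the textbook baseline, with hypothetical coefficient values:

```python
import numpy as np

def soft_threshold(y, lam):
    """Proximal operator of lam * ||x||_1: the closed-form solution of
    min_x 0.5 * (x - y)^2 + lam * |x|, applied element-wise. This is the
    basic convex-penalty denoising step; improved penalties (as in the
    paper) reduce the bias it introduces on large coefficients."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

# Hypothetical noisy transform coefficients: large entries are signal,
# small ones are noise that the threshold removes.
y = np.array([3.0, -0.5, 0.2, -4.0])
x = soft_threshold(y, 1.0)

assert np.allclose(x, [2.0, 0.0, 0.0, -3.0])
```

Shrinking small coefficients to zero while keeping large ones is what raises the TF resolution before the synchroextracting step.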
39
Vo ND, Hong M, Jung JJ. Implicit Stochastic Gradient Descent Method for Cross-Domain Recommendation System. Sensors (Basel) 2020; 20:E2510. [PMID: 32365513 DOI: 10.3390/s20092510]
Abstract
Previous recommendation systems applied the matrix factorization collaborative filtering (MFCF) technique only to single domains. Due to data sparsity, this approach has a limitation in overcoming the cold-start problem. Thus, in this study, we focus on discovering latent features from domains to understand the relationships between domains (called domain coherence). This approach uses potential knowledge of the source domain to improve the quality of the target domain recommendation. In this paper, we consider applying MFCF to multiple domains. Mainly, by adopting the implicit stochastic gradient descent algorithm to optimize the objective function for prediction, multiple matrices from different domains are consolidated inside the cross-domain recommendation system (CDRS). Additionally, we design a conceptual framework for CDRS, which applies to different industrial scenarios for recommenders across domains. Moreover, an experiment is devised to validate the proposed method. Using a real-world dataset gathered from Amazon Food and MovieLens, experimental results show that the proposed method improves computation time and MSE by 15.2% and 19.7%, respectively, over other methods on a utility matrix. Notably, a much lower convergence value of the loss function has been obtained from the experiment. Furthermore, a critical analysis of the obtained results shows that there is a dynamic balance between prediction accuracy and computational complexity.
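A plain explicit-SGD matrix factorization sketch illustrates the per-rating update that the implicit variant stabilizes. The toy rating matrix, rank, and hyperparameters below are hypothetical, and the cross-domain consolidation of the paper is omitted:

```python
import numpy as np

def mf_sgd(R, mask, rank=2, lr=0.01, epochs=200, seed=0):
    """Explicit SGD for matrix factorization R ~ P @ Q.T on the observed
    entries given by mask. The paper's implicit SGD replaces this
    explicit step with an implicit (backward) update for stability."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    P = 0.1 * rng.standard_normal((n, rank))
    Q = 0.1 * rng.standard_normal((m, rank))
    for _ in range(epochs):
        for i, j in zip(*np.nonzero(mask)):
            e = R[i, j] - P[i] @ Q[j]                      # rating residual
            P[i], Q[j] = P[i] + lr * e * Q[j], Q[j] + lr * e * P[i]
    return P, Q

def mse(R, mask, P, Q):
    E = (R - P @ Q.T)[mask]
    return float(np.mean(E ** 2))

# Hypothetical 3x3 utility matrix, fully observed for simplicity.
R = np.array([[5.0, 4.0, 1.0], [4.0, 5.0, 1.0], [1.0, 1.0, 5.0]])
mask = np.ones_like(R, dtype=bool)
P, Q = mf_sgd(R, mask)

assert mse(R, mask, P, Q) < 0.5   # training MSE well below the naive baseline
```

In a cross-domain setting, the same factor updates run over utility matrices from several domains with shared latent factors, which is where the implicit step's robustness to the learning rate pays off.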
40
Posner DC, Lin H, Meigs JB, Kolaczyk ED, Dupuis J. Convex combination sequence kernel association test for rare-variant studies. Genet Epidemiol 2020; 44:352-367. [PMID: 32100372 DOI: 10.1002/gepi.22287]
Abstract
We propose a novel variant set test for rare-variant association studies, which leverages multiple single-nucleotide variant (SNV) annotations. Our approach optimizes a convex combination of different sequence kernel association test (SKAT) statistics, where each statistic is constructed from a different annotation and combination weights are optimized through a multiple kernel learning algorithm. The combination test statistic is evaluated empirically through data splitting. In simulations, we find our method preserves type I error at α = 2.5 × 10^-6 and has greater power than SKAT(-O) when SNV weights are not misspecified and sample sizes are large (N ≥ 5,000). We utilize our method in the Framingham Heart Study (FHS) to identify SNV sets associated with fasting glucose. While we are unable to detect any genome-wide significant associations between fasting glucose and 4-kb windows of rare variants (p < 10^-7) in 6,419 FHS participants, our method identifies suggestive associations between fasting glucose and rare variants near ROCK2 (p = 2.1 × 10^-5) and within CPLX1 (p = 5.3 × 10^-5). These two genes were previously reported to be involved in obesity-mediated insulin resistance and glucose-induced insulin secretion by pancreatic beta-cells, respectively. These findings will need to be replicated in other cohorts and validated by functional genomic studies.
Affiliation(s)
- Daniel C Posner
- Department of Biostatistics, Boston University School of Public Health, Boston, Massachusetts
- Honghuang Lin
- National Heart Lung and Blood Institute's and Boston University's Framingham Heart Study, Framingham, Massachusetts; Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine, Boston, Massachusetts
- James B Meigs
- Division of General Internal Medicine, Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Eric D Kolaczyk
- Department of Mathematics and Statistics, Boston University, Boston, Massachusetts
- Josée Dupuis
- Department of Biostatistics, Boston University School of Public Health, Boston, Massachusetts; National Heart Lung and Blood Institute's and Boston University's Framingham Heart Study, Framingham, Massachusetts
41
Abstract
The problem of variable clustering is that of estimating groups of similar components of a p-dimensional vector X = (X_1, …, X_p) from n independent copies of X. There exists a large number of algorithms that return data-dependent groups of variables, but their interpretation is limited to the algorithm that produced them. An alternative is model-based clustering, in which one begins by defining population-level clusters relative to a model that embeds notions of similarity. Algorithms tailored to such models yield estimated clusters with a clear statistical interpretation. We take this view here and introduce the class of G-block covariance models as a background model for variable clustering. In such models, two variables in a cluster are deemed similar if they have similar associations with all other variables. This can arise, for instance, when groups of variables are noise-corrupted versions of the same latent factor. We quantify the difficulty of clustering data generated from a G-block covariance model in terms of cluster proximity, measured with respect to two related, but different, cluster separation metrics. We derive minimax cluster separation thresholds, which are the metric values below which no algorithm can recover the model-defined clusters exactly, and show that they are different for the two metrics. We therefore develop two algorithms, COD and PECOK, tailored to G-block covariance models, and study their minimax-optimality with respect to each metric. Of independent interest is the fact that the analysis of the PECOK algorithm, which is based on a corrected convex relaxation of the popular K-means algorithm, provides the first statistical analysis of such algorithms for variable clustering. Additionally, we compare our methods with another popular clustering method, spectral clustering. Extensive simulation studies, as well as our data analyses, confirm the applicability of our approach.
Affiliation(s)
- Christophe Giraud
- Laboratoire de Mathématiques d’Orsay, CNRS, Université Paris-Sud, Université Paris-Saclay
- Xi Luo
- Department of Biostatistics and Data Science, School of Public Health, University of Texas Health Science Center at Houston
- Martin Royer
- Laboratoire de Mathématiques d’Orsay, CNRS, Université Paris-Sud, Université Paris-Saclay
42
Liu J, Wang W, Song H. Optimization of Weighting Window Functions for SAR Imaging via QCQP Approach. Sensors (Basel) 2020; 20:s20020419. [PMID: 31940819 PMCID: PMC7013442 DOI: 10.3390/s20020419]
Abstract
Weighting window functions are commonly used in Synthetic Aperture Radar (SAR) imaging to suppress the high Peak SideLobe Ratio (PSLR) at the price of probable Signal-to-Noise Ratio (SNR) loss and mainlobe widening. In this paper, based on the method of designing a mismatched filter, we propose a Quadratically Constrained Quadratic Program (QCQP) approach, a convex problem that can be solved efficiently, to optimize the weighting window function in both amplitude and phase, with the aim of offering better imaging performance, especially in terms of PSLR, SNR loss, and mainlobe width. Using this approach and its modified form, we are able to design window functions that optimize the PSLR or the SNR loss under various flexible and practical constraints. Compared to ordinary real-valued and symmetric window functions, like the Taylor window, the designed window functions are complex-valued and can be asymmetric. Using SAR point-target imaging simulation, we show that the optimized weighting window function can clearly reveal a weak target hidden in the sidelobes of a strong target.
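The PSLR being optimized can be measured numerically from a window's zero-padded spectrum. The sketch below compares a rectangular window with a Hamming window, which is used here only as a readily available stand-in for the Taylor window mentioned above; the window length and padding are hypothetical:

```python
import numpy as np

def pslr_db(window, pad=4096):
    """Peak sidelobe ratio (dB): highest sidelobe relative to the
    mainlobe peak in the window's zero-padded magnitude spectrum."""
    spec = np.abs(np.fft.fft(window, pad))
    spec = spec / spec.max()
    half = spec[:pad // 2]              # spectrum is symmetric
    i = 1                               # walk down the mainlobe...
    while i < half.size - 1 and half[i + 1] < half[i]:
        i += 1                          # ...to its first null
    return 20 * np.log10(half[i:].max())

N = 64
rect = np.ones(N)
hamming = np.hamming(N)   # stand-in for a Taylor window

# Amplitude weighting buys sidelobe suppression (~ -13 dB -> ~ -42 dB),
# at the cost of SNR loss and a wider mainlobe.
assert -14 < pslr_db(rect) < -13
assert pslr_db(hamming) < -35
```

The QCQP design in the paper searches over complex-valued, possibly asymmetric weights, so it can trade PSLR against SNR loss more finely than such fixed real-valued tapers.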
Affiliation(s)
- Jin Liu
- Department of Space Microwave Remote Sensing System, Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China
- Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
- School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100039, China
- Corresponding author
- Wei Wang
- Department of Space Microwave Remote Sensing System, Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China
- Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
- Hongjun Song
- Department of Space Microwave Remote Sensing System, Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China
- Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
43
Kim Y, Carbonetto P, Stephens M, Anitescu M. A Fast Algorithm for Maximum Likelihood Estimation of Mixture Proportions Using Sequential Quadratic Programming. J Comput Graph Stat 2020; 29:261-273. [PMID: 33762803 PMCID: PMC7986967 DOI: 10.1080/10618600.2019.1689985]
Abstract
Maximum likelihood estimation of mixture proportions has a long history, and continues to play an important role in modern statistics, including in the development of nonparametric empirical Bayes methods. The problem has traditionally been solved using the expectation maximization (EM) algorithm, but recent work by Koenker & Mizera shows that modern convex optimization techniques (in particular, interior point methods) are substantially faster and more accurate than EM. Here, we develop a new solution based on sequential quadratic programming (SQP). It is substantially faster than the interior point method, and just as accurate. Our approach combines several ideas: first, it solves a reformulation of the original problem; second, it uses an SQP approach to make the best use of the expensive gradient and Hessian computations; third, the SQP iterations are implemented using an active set method to exploit the sparse nature of the quadratic subproblems; fourth, it uses accurate low-rank approximations for more efficient gradient and Hessian computations. We illustrate the benefits of the SQP approach in experiments on synthetic data sets and a large genetic association data set. In large data sets (n ≈ 10^6 observations, m ≈ 10^3 mixture components), our implementation achieves at least a 100-fold reduction in runtime compared with a state-of-the-art interior point solver. Our methods are implemented in Julia and in an R package available on CRAN (https://CRAN.R-project.org/package=mixsqp).
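The EM baseline that these convex methods accelerate has a one-line multiplicative update. A minimal sketch on a synthetic likelihood matrix shows the two properties EM guarantees, staying on the simplex and never decreasing the log-likelihood; the matrix dimensions and entries are hypothetical:

```python
import numpy as np

def em_mix(L, iters=100):
    """Classic EM for mixture proportions: given an n x m matrix of
    component likelihoods L[i, k] = p(x_i | component k), maximize
    sum_i log(sum_k w_k * L[i, k]) over the probability simplex.
    This is the traditional baseline that interior point and SQP
    methods (e.g., mixsqp) solve much faster at scale."""
    n, m = L.shape
    w = np.full(m, 1.0 / m)
    ll = [np.sum(np.log(L @ w))]
    for _ in range(iters):
        w = w * np.mean(L / (L @ w)[:, None], axis=0)   # EM update
        ll.append(np.sum(np.log(L @ w)))
    return w, np.array(ll)

rng = np.random.default_rng(1)
L = rng.random((50, 4)) + 0.05   # hypothetical positive likelihood matrix
w, ll = em_mix(L)

assert abs(w.sum() - 1.0) < 1e-12        # iterates stay on the simplex
assert np.all(np.diff(ll) >= -1e-10)     # EM never decreases the log-likelihood
```

The slow tail of this monotone convergence on large, sparse likelihood matrices is precisely what motivates the second-order SQP reformulation in the paper.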
Affiliation(s)
- Youngseok Kim
- Department of Statistics, Department of Human Genetics and the Research Computing Center at the University of Chicago, and Mathematics and Computer Science Division at Argonne National Laboratory
- Peter Carbonetto
- Department of Statistics, Department of Human Genetics and the Research Computing Center at the University of Chicago, and Mathematics and Computer Science Division at Argonne National Laboratory
- Matthew Stephens
- Department of Statistics, Department of Human Genetics and the Research Computing Center at the University of Chicago, and Mathematics and Computer Science Division at Argonne National Laboratory
- Mihai Anitescu
- Department of Statistics, Department of Human Genetics and the Research Computing Center at the University of Chicago, and Mathematics and Computer Science Division at Argonne National Laboratory
44
Ivanenko Y, Nedic M, Gustafsson M, Jonsson BLG, Luger A, Nordebo S. Quasi-Herglotz functions and convex optimization. R Soc Open Sci 2020; 7:191541. [PMID: 32218971 PMCID: PMC7029951 DOI: 10.1098/rsos.191541]
Abstract
We introduce the set of quasi-Herglotz functions and demonstrate that it has properties useful in the modelling of non-passive systems. The linear space of quasi-Herglotz functions constitutes a natural extension of the convex cone of Herglotz functions. It consists of differences of Herglotz functions and we show that several of the important properties and modelling perspectives are inherited by the new set of quasi-Herglotz functions. In particular, this applies to their integral representations, the associated integral identities or sum rules (with adequate additional assumptions), their boundary values on the real axis and the associated approximation theory. Numerical examples are included to demonstrate the modelling of a non-passive gain medium formulated as a convex optimization problem, where the generating measure is modelled by using a finite expansion of B-splines and point masses.
Affiliation(s)
- Y. Ivanenko
- Department of Physics and Electrical Engineering, Linnæus University, 351 95 Växjö, Sweden
- M. Nedic
- Department of Mathematics, Stockholm University, 106 91 Stockholm, Sweden
- M. Gustafsson
- Department of Electrical and Information Technology, Lund University, Box 118, 221 00 Lund, Sweden
- B. L. G. Jonsson
- School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 100 44 Stockholm, Sweden
- A. Luger
- Department of Mathematics, Stockholm University, 106 91 Stockholm, Sweden
- S. Nordebo
- Department of Physics and Electrical Engineering, Linnæus University, 351 95 Växjö, Sweden
45
Chung KJ, Kuang H, Federico A, Choi HS, Kasickova L, Al Sultan AS, Horn M, Crowther M, Connolly SJ, Yue P, Curnutte JT, Demchuk AM, Menon BK, Qiu W. Semi-automatic measurement of intracranial hemorrhage growth on non-contrast CT. Int J Stroke 2019; 16:192-199. [PMID: 31847733 DOI: 10.1177/1747493019895704] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Manual segmentations of intracranial hemorrhage on non-contrast CT images are the gold-standard in measuring hematoma growth but are prone to rater variability. AIMS We demonstrate that a convex optimization-based interactive segmentation approach can accurately and reliably measure intracranial hemorrhage growth. METHODS Baseline and 16-h follow-up head non-contrast CT images of 46 subjects presenting with intracranial hemorrhage were selected randomly from the ANNEXA-4 trial imaging database. Three users semi-automatically segmented intracranial hemorrhage to measure hematoma volume for each timepoint using our proposed method. Segmentation accuracy was quantitatively evaluated compared to manual segmentations by using Dice similarity coefficient, Pearson correlation, and Bland-Altman analysis. Intra- and inter-rater reliability of the Dice similarity coefficient and of intracranial hemorrhage volumes and volume change were assessed by the intraclass correlation coefficient and minimum detectable change. RESULTS Among the three users, the mean Dice similarity coefficient, Pearson correlation, and mean difference ranged from 76.79% to 79.76%, 0.970 to 0.980 (p < 0.001), and -1.5 to -0.4 ml, respectively, for all intracranial hemorrhage segmentations. Inter-rater intraclass correlation coefficients between the three users for Dice similarity coefficient and intracranial hemorrhage volume were 0.846 and 0.962, respectively, and the corresponding minimum detectable change was 2.51 ml. Inter-rater intraclass correlation coefficient for intracranial hemorrhage volume change ranged from 0.915 to 0.958 for each user compared to manual measurements, resulting in a minimum detectable change range of 2.14 to 4.26 ml. CONCLUSIONS We spatially and volumetrically validate a novel interactive segmentation method for delineating intracranial hemorrhage on head non-contrast CT images.
Good spatial overlap, excellent volume correlation, and good repeatability suggest its usefulness for measuring intracranial hemorrhage volume and volume change on non-contrast CT images.
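The Dice similarity coefficient used for the spatial validation above can be computed directly from two binary segmentation masks; a minimal sketch (the toy mask arrays are made-up data, not the study's images):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks, as a
    percentage (the unit used in the abstract): 2|A∩B| / (|A|+|B|)."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 100.0  # both masks empty: perfect agreement by convention
    return 100.0 * 2.0 * np.logical_and(a, b).sum() / denom

# Two overlapping toy 1-D "masks" (illustrative only)
m1 = np.array([1, 1, 1, 0, 0])
m2 = np.array([0, 1, 1, 1, 0])
```

With these toy masks the overlap is 2 voxels out of 3 + 3, giving a Dice score of about 66.7%.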
Collapse
Affiliation(s)
- Kevin J Chung
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada; Department of Mechanical and Manufacturing Engineering, University of Calgary, Calgary, Canada
| | - Hulin Kuang
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
| | - Alyssa Federico
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
| | - Hyun Seok Choi
- Department of Radiology, Yonsei University College of Medicine, Seoul, South Korea
| | - Linda Kasickova
- Department of Neurology, University Hospital Ostrava, Ostrava, Czech Republic
| | | | - MacKenzie Horn
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
| | - Mark Crowther
- Department of Medicine, McMaster University, Hamilton, Canada
| | - Stuart J Connolly
- Population Health Research Institute, McMaster University, Hamilton, Canada
| | - Patrick Yue
- Portola Pharmaceuticals Inc, San Francisco, CA, USA
| | | | - Andrew M Demchuk
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
| | - Bijoy K Menon
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
| | - Wu Qiu
- Department of Clinical Neurosciences, University of Calgary, Calgary, Canada
| |
Collapse
|
46
|
Loganathan A, Ahmad NS, Goh P. Self-Adaptive Filtering Approach for Improved Indoor Localization of a Mobile Node with Zigbee-Based RSSI and Odometry. Sensors (Basel) 2019; 19:E4748. [PMID: 31683837 DOI: 10.3390/s19214748] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/08/2019] [Revised: 10/24/2019] [Accepted: 10/30/2019] [Indexed: 11/17/2022]
Abstract
This study presents a new technique to improve the indoor localization of a mobile node by utilizing a Zigbee-based received-signal-strength indicator (RSSI) and odometry. As both methods suffer from their own limitations, this work contributes a novel methodological framework in which the coordinates of the mobile node can be predicted more accurately by improving the path-loss propagation model and optimizing the weighting parameter for each localization technique via a convex search. A self-adaptive filtering approach is also proposed which autonomously optimizes the weighting parameter during the target node's translational and rotational motions, thus resulting in an efficient localization scheme with less computational effort. Several real-time experiments consisting of four different trajectories with different numbers of straight paths and curves were carried out to validate the proposed methods. Both temporal and spatial analyses demonstrate that when odometry data and RSSI values are available, the proposed methods provide significant improvements in localization performance over existing approaches.
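The core idea, a convex combination of the two position estimates with the weight chosen by a scalar convex search, can be sketched as follows (the grid-search criterion, calibration data, and all names are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def fused_estimate(x_rssi, x_odom, w):
    """Convex combination of RSSI-based and odometry-based position
    estimates; w in [0, 1] weights the RSSI estimate."""
    return w * x_rssi + (1.0 - w) * x_odom

def optimize_weight(x_rssi, x_odom, x_true, grid=101):
    """Scalar search over [0, 1] for the weight minimizing mean squared
    localization error on calibration data. The error is convex in w,
    so a 1-D grid (or golden-section) search suffices."""
    ws = np.linspace(0.0, 1.0, grid)
    errs = [np.mean((fused_estimate(x_rssi, x_odom, w) - x_true) ** 2)
            for w in ws]
    return ws[int(np.argmin(errs))]
```

If, say, the odometry estimate is much closer to ground truth than the RSSI estimate, the search drives the weight toward 0, i.e., toward odometry.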
Collapse
|
47
|
Xiong K, Zhao G, Shi G, Wang Y. A Convex Optimization Algorithm for Compressed Sensing in a Complex Domain: The Complex-Valued Split Bregman Method. Sensors (Basel) 2019; 19:s19204540. [PMID: 31635423 PMCID: PMC6832202 DOI: 10.3390/s19204540] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/12/2019] [Revised: 10/11/2019] [Accepted: 10/15/2019] [Indexed: 01/02/2023]
Abstract
The Split Bregman method (SBM), a popular and universal CS reconstruction algorithm for inverse problems with both l1-norm and TV-norm regularization, has been extensively applied in complex domains through the complex-to-real transforming technique, e.g., in MRI imaging and radar. However, SBM still has untapped potential in complex applications for two reasons: the Bregman Iteration (BI) employed in SBM may not make good use of the phase information of complex variables, and the converting technique may consume more time. To address this, this paper presents the complex-valued Split Bregman method (CV-SBM), which theoretically generalizes the original SBM to the complex domain. The complex-valued Bregman distance (CV-BD) is first defined by replacing the corresponding regularization in the inverse problem. Then, we propose the complex-valued Bregman Iteration (CV-BI) to solve this new problem. The well-definedness and convergence of CV-BI are analyzed in detail according to complex-valued calculation rules and optimization theory. These properties prove that CV-BI is able to solve inverse problems as long as the regularization is convex. Nevertheless, CV-BI needs the help of other algorithms for various kinds of regularization. To avoid dependence on extra algorithms and simplify the iteration process, we adopt the variable separation technique and propose CV-SBM for solving convex inverse problems. Simulation results on complex-valued l1-norm problems illustrate the effectiveness of the proposed CV-SBM. CV-SBM exhibits remarkable superiority over SBM with the complex-to-real transforming technique. Specifically, for a large signal scale of n = 512, CV-SBM yields 18.2%, 17.6%, and 26.7% lower mean square error (MSE) and takes 28.8%, 25.6%, and 23.6% less time than the original SBM in the 10 dB, 15 dB, and 20 dB SNR situations, respectively.
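The key ingredient a complex-valued l1 iteration needs is a shrinkage operator that uses the phase of complex variables rather than splitting them into real and imaginary parts; a minimal sketch of that operator (the standard complex soft-thresholding prox, not code from the paper):

```python
import numpy as np

def complex_soft_threshold(z, t):
    """Proximal operator of t*||.||_1 over complex variables: shrinks
    each entry's magnitude by t and preserves its phase. This is the
    shrinkage step a complex-valued split Bregman iteration would call."""
    z = np.asarray(z, dtype=complex)
    mag = np.abs(z)
    # Entries with |z| <= t are set to zero; others keep their phase.
    scale = np.where(mag > t, 1.0 - t / np.maximum(mag, 1e-12), 0.0)
    return scale * z
```

For example, thresholding 3+4j (magnitude 5) at t = 1 scales it by 4/5, yielding 2.4+3.2j with the phase unchanged.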
Collapse
Affiliation(s)
- Kai Xiong
- School of Artificial Intelligence, Xidian University, Xi'an 710071, Shaanxi, China.
| | | | - Guangming Shi
- School of Artificial Intelligence, Xidian University, Xi'an 710071, Shaanxi, China.
| | | |
Collapse
|
48
|
Levivier M, Carrillo RE, Charrier R, Martin A, Thiran JP. A real-time optimal inverse planning for Gamma Knife radiosurgery by convex optimization: description of the system and first dosimetry data. J Neurosurg 2019; 129:111-117. [PMID: 30544294 DOI: 10.3171/2018.7.gks181572] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2018] [Accepted: 07/19/2018] [Indexed: 11/06/2022]
Abstract
OBJECTIVE The authors developed a new, real-time interactive inverse planning approach, based on a fully convex framework, to be used for Gamma Knife radiosurgery. METHODS The convex framework is based on the precomputation of a dictionary composed of the individual dose distributions of all possible shots, considering all their possible locations, sizes, and shapes inside the target volume. The convex problem is solved to determine the plan, that is, which shots will actually be used and with which weights, considering a sparsity constraint on the shots to fulfill the constraints while minimizing the beam-on time. The system is called IntuitivePlan and allows data to be transferred from generated dose plans into the Gamma Knife treatment planning software for further dosimetry evaluation. RESULTS The system has been very efficiently implemented, and an optimal plan is usually obtained in less than 1 to 2 minutes on a desktop computer, depending on the complexity of the problem, or in only a few minutes on a high-end laptop. Dosimetry data from 5 cases, 2 meningiomas and 3 vestibular schwannomas, were generated with IntuitivePlan. Evaluation of the dosimetry characteristics shows very satisfactory and adequate results in terms of conformity, selectivity, gradient, protection of organs at risk, and treatment time. CONCLUSIONS The possibility of using optimal, interactive real-time inverse planning in conjunction with the Leksell Gamma Knife opens new perspectives in radiosurgery, especially considering the potential use of the full capabilities of the latest generations of the Leksell Gamma Knife. This approach gives new users easier and quicker access to good-quality plans with a shorter technical training period, and it opens avenues for new planning strategies for expert users. The use of a convex optimization approach allows an optimal plan to be provided in a very short processing time. This way, innovative graphical user interfaces can be developed, allowing the user to interact directly with the planning system to graphically define the desired dose map and to modify it on-the-fly by moving, in a very user-friendly manner, the isodose surfaces of an initial plan. Further independent quantitative prospective evaluation comparing inverse-planned and forward-planned cases is warranted to validate this novel and promising treatment planning approach.
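The dictionary formulation, a sparse nonnegative weighting of precomputed shot dose distributions, can be sketched with a generic projected iterative-shrinkage solver (an illustrative stand-in under assumed names `D`, `d`, `lam`; the paper does not describe IntuitivePlan's solver at this level):

```python
import numpy as np

def sparse_shot_weights(D, d, lam=0.1, iters=500):
    """Minimize 0.5*||D w - d||^2 + lam*||w||_1 subject to w >= 0,
    where each column of D is a precomputed shot dose distribution and
    d is the desired dose map. Solved by projected ISTA: a gradient
    step followed by the nonnegative soft-threshold, whose l1 term
    promotes few active shots (short beam-on time)."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ w - d)            # gradient of the quadratic term
        w = np.maximum(w - (g + lam) / L, 0.0)
    return w
```

Larger `lam` zeroes out more shot weights, trading dose fidelity against plan sparsity.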
Collapse
Affiliation(s)
- Marc Levivier
- Department of Neurosurgery and Gamma Knife Center, Lausanne University Hospital, Lausanne
| | - Rafael E Carrillo
- Signal Processing Laboratory (LTS5), Ecole Polytechnique Fédérale de Lausanne (EPFL); CSEM SA, Neuchâtel
| | - Rémi Charrier
- Intuitive Therapeutics SA, Saint-Sulpice, Switzerland
| | - André Martin
- Intuitive Therapeutics SA, Saint-Sulpice, Switzerland
| | - Jean-Philippe Thiran
- Signal Processing Laboratory (LTS5), Ecole Polytechnique Fédérale de Lausanne (EPFL)
| |
Collapse
|
49
|
Abstract
Ensemble Kalman methods constitute an increasingly important tool in both state and parameter estimation problems. Their popularity stems from the derivative-free nature of the methodology which may be readily applied when computer code is available for the underlying state-space dynamics (for state estimation) or for the parameter-to-observable map (for parameter estimation). There are many applications in which it is desirable to enforce prior information in the form of equality or inequality constraints on the state or parameter. This paper establishes a general framework for doing so, describing a widely applicable methodology, a theory which justifies the methodology, and a set of numerical experiments exemplifying it.
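The general recipe, an ensemble Kalman analysis step followed by enforcement of the constraints on each member, can be sketched for a simple inequality constraint (a generic stochastic EnKF with clipping as the Euclidean projection onto {x >= 0}; all names are illustrative, and the paper's framework is far more general):

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """One stochastic (perturbed-observation) EnKF analysis step.
    ensemble: (state_dim, N) matrix of members; H: observation operator;
    y: observation; R: observation noise covariance."""
    n, N = ensemble.shape
    A = ensemble - ensemble.mean(axis=1, keepdims=True)
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)
    C = HA @ HA.T / (N - 1) + R                      # innovation covariance
    K = (A @ HA.T / (N - 1)) @ np.linalg.inv(C)      # ensemble Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return ensemble + K @ (Y - HX)

def project_ensemble(ensemble, lo=0.0):
    """Enforce the inequality constraint x >= lo on every member; for
    this box constraint the Euclidean projection is a simple clip."""
    return np.maximum(ensemble, lo)
```

Running the update and then the projection keeps the whole ensemble, and hence its mean, inside the constraint set at every assimilation cycle.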
Collapse
Affiliation(s)
- David J Albers
- Department of Biomedical Informatics, Columbia University, New York, NY 10032
- Department of Pediatrics, Division of Informatics, University of Colorado Medicine, Aurora, CO 80045
| | | | - Matthew E Levine
- Department of Computational and Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125
| | | | - Andrew Stuart
- Department of Computational and Mathematical Sciences, California Institute of Technology, Pasadena, CA 91125
| |
Collapse
|
50
|
Glendening ED, Wright SJ, Weinhold F. Efficient optimization of natural resonance theory weightings and bond orders by gram-based convex programming. J Comput Chem 2019; 40:2028-2035. [PMID: 31077408 DOI: 10.1002/jcc.25855] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2019] [Revised: 04/15/2019] [Accepted: 04/24/2019] [Indexed: 11/07/2022]
Abstract
We describe the formal algorithm and numerical applications of a novel convex quadratic programming (QP) strategy for performing the variational minimization that underlies natural resonance theory (NRT). The QP algorithm vastly improves the numerical efficiency, thoroughness, and accuracy of the variational NRT description, which now allows uniform treatment of all reference structures at the high level of detail previously reserved only for leading "reference" structures, with little or no user guidance. We illustrate the overall QPNRT search strategy, program I/O, and numerical results for a specific application to adenine, and we summarize more extended results for a data set of 338 species from throughout the organic, bioorganic, and inorganic domains. The improved QP-based implementation of NRT is a principal feature of the newly released NBO 7.0 program version. © 2019 Wiley Periodicals, Inc.
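The NRT minimization is a convex QP over resonance weights that are nonnegative and sum to one; a generic projected-gradient sketch of such a simplex-constrained QP (a stand-in illustration with made-up data, not the gram-based NBO 7.0 algorithm):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex
    {w : w >= 0, sum(w) = 1} (sort-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def simplex_qp(Q, c, iters=500):
    """Minimize 0.5*w^T Q w + c^T w over the simplex by projected
    gradient descent with step size 1/||Q||_2."""
    n = len(c)
    w = np.full(n, 1.0 / n)
    lr = 1.0 / np.linalg.norm(Q, 2)
    for _ in range(iters):
        w = project_simplex(w - lr * (Q @ w + c))
    return w
```

For a toy problem with Q the identity and c favoring the second weight, the iterate converges to putting all weight on that component while remaining exactly on the simplex.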
Collapse
Affiliation(s)
- Eric D Glendening
- Department of Chemistry and Physics, Indiana State University, Terre Haute, Indiana, 47809
| | - Stephen J Wright
- Department of Computer Science, University of Wisconsin-Madison, Madison, Wisconsin, 53705
| | - Frank Weinhold
- Theoretical Chemistry Institute and Department of Chemistry, University of Wisconsin-Madison, Madison, Wisconsin, 53705
| |
Collapse
|