1. Multistep estimators of the between-study covariance matrix under the multivariate random-effects model for meta-analysis. Stat Med 2024; 43:756-773. PMID: 38110725. DOI: 10.1002/sim.9985.
Abstract
A wide variety of methods are available to estimate the between-study variance under the univariate random-effects model for meta-analysis. Some, but not all, of these estimators have been extended so that they can be used in the multivariate setting. We begin by extending the univariate generalised method of moments, which immediately provides a wider class of multivariate methods than was previously available. However, our main proposal is to use this new type of estimator to derive multivariate multistep estimators of the between-study covariance matrix. We then use the connection between the univariate multistep and Paule-Mandel estimators to motivate taking the limit as the number of steps tends to infinity. We illustrate our methodology using two contrasting examples and investigate its properties in a simulation study. We conclude that the proposed methodology is a fully viable alternative to existing estimation methods, is well suited to sensitivity analyses that explore the use of alternative estimators, and should be used instead of the existing DerSimonian and Laird-type moments-based estimator in application areas where data are expected to be heterogeneous. However, multistep estimators do not seem to outperform the existing estimators when the data are more homogeneous. Advantages of the new multivariate multistep estimator include its semi-parametric nature and that it is computationally feasible in high dimensions. Our proposed estimation methods are also applicable for multivariate random-effects meta-regression, where study-level covariates are included in the model.
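The univariate Paule-Mandel estimator, whose connection to multistep estimators motivates the limiting case discussed above, solves the moment equation Q(τ²) = k − 1. A minimal sketch of this standard univariate construction (not the authors' multivariate code; the bisection tolerance and toy data are illustrative assumptions):

```python
import numpy as np

def paule_mandel(y, v, tol=1e-10, max_iter=200):
    """Paule-Mandel estimate of the between-study variance tau^2 under the
    univariate random-effects model, via bisection on the monotonically
    decreasing moment equation Q(tau^2) = k - 1."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    k = len(y)

    def Q(tau2):
        w = 1.0 / (v + tau2)             # inverse-variance weights
        mu = np.sum(w * y) / np.sum(w)   # weighted mean effect
        return np.sum(w * (y - mu) ** 2)

    if Q(0.0) <= k - 1:                  # data already homogeneous
        return 0.0
    lo, hi = 0.0, 1.0
    while Q(hi) > k - 1:                 # bracket the root
        hi *= 2.0
    for _ in range(max_iter):            # bisection
        mid = 0.5 * (lo + hi)
        if Q(mid) > k - 1:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# toy example: heterogeneous effect sizes with known within-study variances
y = np.array([0.1, 0.5, 1.2, 0.9])
v = np.array([0.04, 0.05, 0.03, 0.06])
tau2 = paule_mandel(y, v)
```

At the returned value, the weighted residual sum Q equals k − 1, which is exactly the moment condition the multistep estimators approach in the limit.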
2. Extension of Operational Matrix Technique for the Solution of Nonlinear System of Caputo Fractional Differential Equations Subjected to Integral Type Boundary Constrains. Entropy 2021; 23:1154. PMID: 34573779; PMCID: PMC8471013. DOI: 10.3390/e23091154.
Abstract
We extend the operational matrices technique to design a spectral solution of nonlinear fractional differential equations (FDEs). The derivative is considered in the Caputo sense. A coupled system of two FDEs is considered, subject to more generalised integral-type conditions. The basis of our approach is the simplest orthogonal polynomials. Several new matrices are derived that have strong applications in the development of the computational scheme. The scheme presented in this article converts the nonlinear coupled system of FDEs into an equivalent Sylvester-type algebraic equation. The solution of the algebraic structure is constructed by converting the system into a complex Schur form. After conversion, the solution of the resultant triangular system is obtained and transformed back to construct the solution of the algebraic structure. The solution of the matrix equation is then used to construct the solution of the related nonlinear system of FDEs. The convergence of the proposed method is investigated analytically and verified experimentally through a wide variety of test problems.
3. Electrocardiographic Imaging: A Comparison of Iterative Solvers. Front Physiol 2021; 12:620250. PMID: 33613311; PMCID: PMC7886787. DOI: 10.3389/fphys.2021.620250.
Abstract
Cardiac disease is a leading cause of morbidity and mortality in developed countries. Currently, non-invasive techniques that can identify patients at risk and provide accurate diagnosis and ablation guidance therapy are under development. One of these is electrocardiographic imaging (ECGI). In ECGI, the first step is to formulate a forward problem that relates the unknown potential sources on the cardiac surface to the measured body surface potentials. Then, the unknown potential sources on the cardiac surface are reconstructed through the solution of an inverse problem. Unfortunately, ECGI still lacks accuracy due to the underlying inverse problem being ill-posed, and this consequently imposes limitations on the understanding and treatment of many cardiac diseases. Therefore, it is necessary to improve the solution of the inverse problem. In this work, we transfer and adapt four inverse problem methods to the ECGI setting: algebraic reconstruction technique (ART), random ART, ART Split Bregman (ART-SB) and the range restricted generalized minimal residual (RRGMRES) method. We test all these methods with data from the Experimental Data and Geometric Analysis Repository (EDGAR) and compare their solutions with the recorded epicardial potentials provided by EDGAR and with a solution computed by the generalized minimal residual (GMRES) iterative method. Activation maps are also computed and compared. The results show that ART achieved the most stable solutions and, for some datasets, returned the best reconstruction. Differences between the solutions derived from ART and random ART are almost negligible, and the accuracy of their solutions is followed by RRGMRES, ART-SB and finally GMRES (which returned the worst reconstructions). The RRGMRES method provided the best reconstruction for some datasets but appeared to be less stable than ART across datasets. In conclusion, we show that the proposed methods (ART, random ART, and RRGMRES) improve on the GMRES solution, which had previously been suggested as the inverse-problem solution for ECGI.
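ART, the most stable solver in the comparison above, is the classic Kaczmarz row-action scheme: sweep over the equations and project the iterate onto each row's hyperplane, damped by a relaxation factor. A minimal sketch on a random toy system (the matrix here is a stand-in, not a real ECGI transfer matrix):

```python
import numpy as np

def art(A, b, n_sweeps=50, relax=0.5, x0=None):
    """Algebraic reconstruction technique (cyclic Kaczmarz sweeps):
    for each row i, move the iterate toward the hyperplane a_i . x = b_i
    by a fraction `relax` of the full orthogonal projection."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norms = np.sum(A * A, axis=1)
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norms[i] == 0.0:
                continue
            r = b[i] - A[i] @ x                  # row residual
            x += relax * (r / row_norms[i]) * A[i]
    return x

# small well-posed toy system standing in for the ECGI inverse problem
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
x_art = art(A, b, n_sweeps=200, relax=1.0)
```

Random ART differs only in visiting the rows in a random order each sweep, which is why the paper finds the two nearly indistinguishable.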
4. Traumatic brain injury probability of survival assessment in adults using iterative random comparison classification. Healthc Technol Lett 2020; 7:119-124. PMID: 33282321; PMCID: PMC7704143. DOI: 10.1049/htl.2019.0029.
Abstract
Traumatic brain injury (TBI) is the most common cause of death and disability in young adults. A method to determine the probability of survival (Ps) in trauma, called iterative random comparison classification (IRCC), was developed and its performance was evaluated in TBI. IRCC operates by iteratively comparing the test case with randomly chosen subgroups of cases from a database of known outcomes (survivors and non-survivors) and determining the overall percentage match. The performance of IRCC in determining Ps in TBI was compared with two existing methods: Ps14, which uses regression, and predictive statistical diagnosis (PSD), which is based on Bayesian statistics. The TBI database contained 4124 adult cases (mean age 67.9 years, standard deviation 21.6), of which 3553 (86.2%) were survivors and 571 (13.8%) were non-survivors. IRCC determined Ps for the survivors and non-survivors with an accuracy of 79.0% and 71.4%, respectively, while the corresponding values for Ps14 were 97.4% (survivors) and 40.2% (non-survivors) and for PSD were 90.8% (survivors) and 50% (non-survivors). IRCC could be valuable for determining Ps in TBI and, with a suitable database, in other traumas.
5. Application of conditional robust calibration to ordinary differential equations models in computational systems biology: a comparison of two sampling strategies. IET Syst Biol 2020; 14:107-119. PMID: 32406375; PMCID: PMC8687221. DOI: 10.1049/iet-syb.2018.5091.
Abstract
Mathematical modelling is a widely used technique for describing the temporal behaviour of biological systems. One of the most challenging topics in computational systems biology is the calibration of non-linear models, i.e. the estimation of their unknown parameters. The state-of-the-art methods in this field are the frequentist and Bayesian approaches. For both of them, the performance and accuracy of the results greatly depend on the sampling technique employed. Here, the authors test a novel Bayesian procedure for parameter estimation, called conditional robust calibration (CRC), comparing two different sampling techniques: uniform and logarithmic Latin hypercube sampling. CRC is an iterative algorithm based on parameter space sampling and on the estimation of parameter density functions. They apply CRC with both sampling strategies to three ordinary differential equation (ODE) models of increasing complexity. They obtain a more precise and reliable solution through logarithmically spaced samples.
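The logarithmic Latin hypercube sampling favoured above can be sketched as stratifying each parameter axis into equal strata in log10-space, drawing one point per stratum, and shuffling strata independently per dimension. The parameter ranges below are illustrative, not those of the paper's ODE models:

```python
import numpy as np

def log_latin_hypercube(n_samples, lo, hi, rng=None):
    """Latin hypercube sample with logarithmically spaced strata:
    each axis is split into n_samples equal strata in log10-space,
    one uniform draw per stratum, then the strata are permuted
    independently for every dimension."""
    rng = np.random.default_rng(rng)
    lo = np.log10(np.asarray(lo, float))
    hi = np.log10(np.asarray(hi, float))
    dim = lo.size
    # stratified uniforms in [0, 1): stratum i covers [i/n, (i+1)/n)
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, dim))) / n_samples
    for j in range(dim):                 # decouple the axes
        rng.shuffle(u[:, j])
    return 10.0 ** (lo + u * (hi - lo))

# e.g. sampling three ODE rate constants spanning several decades
samples = log_latin_hypercube(100, lo=[1e-4, 1e-2, 1.0],
                              hi=[1e-1, 1e2, 1e3], rng=1)
```

Log-spacing ensures that every decade of a rate constant is sampled equally often, which is what makes the CRC search more reliable for parameters of unknown order of magnitude.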
6. On the cost of iterative computations. Philos Trans A Math Phys Eng Sci 2020; 378:20190050. PMID: 31955684; PMCID: PMC7015299. DOI: 10.1098/rsta.2019.0050.
Abstract
With exascale-level computation on the horizon, the art of predicting the cost of computations has acquired a renewed focus. This task is especially challenging in the case of iterative methods, for which convergence behaviour often cannot be determined with certainty a priori (unless we are satisfied with potentially outrageous overestimates) and which typically suffer from performance bottlenecks at scale due to synchronization cost. Moreover, the amplification of rounding errors can substantially affect the practical performance, in particular for methods with short recurrences. In this article, we focus on what we consider to be key points which are crucial to understanding the cost of iteratively solving linear algebraic systems. This naturally leads us to questions on the place of numerical analysis in relation to mathematics, computer science and sciences, in general. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
7. Applying an iterative method numerically to solve n × n matrix Wiener-Hopf equations with exponential factors. Philos Trans A Math Phys Eng Sci 2020; 378:20190241. PMID: 31760896; PMCID: PMC6894519. DOI: 10.1098/rsta.2019.0241.
Abstract
This paper presents a generalization of a recent iterative approach to solving a class of 2 × 2 matrix Wiener-Hopf equations involving exponential factors. We extend the method to square matrices of arbitrary dimension n, as arise in mixed boundary value problems with n junctions. To demonstrate the method, we consider the classical problem of scattering of a plane wave by a set of collinear plates. The results are compared to other known methods. We describe an effective implementation using a spectral method to compute the required Cauchy transforms. The approach is ideally suited to obtaining far-field directivity patterns, which are of utility in applications. Convergence in iteration is fastest for large wavenumbers, but remains practical at modest wavenumbers for achieving a high degree of accuracy. This article is part of the theme issue 'Modelling of dynamic phenomena and localization in structured media (part 2)'.
8. Identification of a time-varying intracellular signalling model through data clustering and parameter selection: application to NF-κB signalling pathway induced by LPS in the presence of BFA. IET Syst Biol 2019; 13:169-179. PMID: 31318334; PMCID: PMC8687386. DOI: 10.1049/iet-syb.2018.5079.
Abstract
Developing a model for a signalling pathway requires several iterations of experimentation and model refinement to obtain an accurate model. However, the implementation of such an approach to model a signalling pathway induced by a poorly-known stimulus can become labour intensive because only limited information on the pathway is available beforehand to formulate an initial model. Therefore, a large number of iterations is required, since the initial model is likely to be erroneous. In this work, a numerical scheme is proposed to construct a time-varying model for a signalling pathway induced by a poorly-known stimulus when its nominal model is available in the literature. Here, the nominal model refers to one that describes the signalling dynamics under a well-characterised stimulus. First, global sensitivity analysis is implemented on the nominal model to identify the most important parameters, which are assumed to be piecewise constant. Second, measurement data are clustered to determine temporal subdomains where the parameters take different values. Finally, a least-squares problem is solved to estimate the parameter values in each temporal subdomain. The effectiveness of this approach is illustrated by developing a time-varying model for NF-κB signalling dynamics induced by lipopolysaccharide in the presence of brefeldin A.
9. Synthesis of Polysubstituted Iodoarenes Enabled by Iterative Iodine-Directed para and ortho C-H Functionalization. Angew Chem Int Ed Engl 2019; 58:2617-2621. PMID: 30496639. DOI: 10.1002/anie.201809657.
Abstract
Among halogenated aromatics, iodoarenes are unique in their ability to produce the bench-stable halogen(III) form. Earlier, such iodine(III) centers were shown to enable C-H functionalization ortho to iodine via halogen-centered rearrangement. The broader implications of this phenomenon are explored by testing the extent of an unusual iodane-directed para C-H benzylation, as well as by developing an efficient C-H coupling with sulfonyl-substituted allylic silanes. Through the combination of the one-shot nature of the coupling event and the iodine retention, multisubstituted arenes can be prepared by sequentially engaging up to three aromatic C-H sites. This type of iodine-based iterative synthesis will serve as a tool for the formation of value-added aromatic cores.
10. Fast projection/backprojection and incremental methods applied to synchrotron light tomographic reconstruction. J Synchrotron Radiat 2018; 25:248-256. PMID: 29271774. DOI: 10.1107/S1600577517015715.
Abstract
Iterative methods for tomographic image reconstruction have the computational cost of each iteration dominated by the computation of the (back)projection operator, which takes roughly O(N³) floating point operations (flops) for N × N pixel images. Furthermore, classical iterative algorithms may require too many iterations to achieve acceptable images, making these techniques impractical for high-resolution images. Techniques have been developed in the literature to reduce the computational cost of the (back)projection operator to O(N² log N) flops. Also, incremental algorithms have been devised that reduce by an order of magnitude the number of iterations required to achieve acceptable images. The present paper introduces an incremental algorithm with a cost of O(N² log N) flops per iteration and applies it to the reconstruction of very large tomographic images obtained from synchrotron light illuminated data.
11. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery. Healthc Technol Lett 2017; 4:168-173. PMID: 29184659; PMCID: PMC5683202. DOI: 10.1049/htl.2017.0066.
Abstract
Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of surgical tools occluded by the hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring the target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
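The iterative closest point calibration step mentioned above alternates nearest-neighbour matching with a closed-form (Kabsch/SVD) rigid alignment of the matched pairs. A minimal point-to-point sketch on synthetic clouds (brute-force matching; not the authors' implementation):

```python
import numpy as np

def icp(src, dst, n_iters=30):
    """Point-to-point ICP: repeatedly match each source point to its
    nearest destination point, then apply the closed-form (Kabsch/SVD)
    rigid transform that best aligns the matched pairs."""
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(n_iters):
        # brute-force nearest neighbours (fine for small clouds)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # closed-form rigid transform between cur and matched
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step              # compose transforms
    return R, t

# recover a known small rigid motion of a random point cloud
rng = np.random.default_rng(0)
pts = rng.standard_normal((60, 3))
a = 0.1                                   # small rotation about z
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.05, 0.08])
R_est, t_est = icp(pts, pts @ R_true.T + t_true)
```

In the calibration setting, `src` would be the depth-camera surface points and `dst` the CBCT-reconstructed surface; convergence depends on a reasonable initial pose, which the rigid C-arm mounting provides.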
12. Electrocardiograph signal denoising based on sparse decomposition. Healthc Technol Lett 2017; 4:134-137. PMID: 28868150; PMCID: PMC5569915. DOI: 10.1049/htl.2016.0097.
Abstract
Noise in ECG signals will affect the results of post-processing if left untreated. Since ECG signals vary greatly between subjects, a linear denoising method with a specific threshold that works well on one subject can fail on another. Therefore, in this Letter, a sparsity-based method, which represents every segment of the signal as a different linear combination of atoms from a dictionary, is used to denoise ECG signals, with particular attention to the myoelectric interference present in them. Firstly, a denoising model for ECG signals is constructed. The model is then solved by the matching pursuit algorithm. To obtain better results, four kinds of dictionaries are investigated on ECG signals from the MIT-BIH arrhythmia database and compared with a wavelet transform (WT)-based method. The signal-to-noise ratio (SNR) and mean square error (MSE) between the estimated and original signals are used as performance indicators. The results show that the present method achieves a higher SNR and a smaller MSE between the estimated and original signals.
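The matching pursuit solver used above greedily peels one dictionary atom at a time off the residual. A minimal sketch with a random unit-norm dictionary standing in for the four dictionaries investigated in the Letter:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10, tol=1e-6):
    """Greedy matching pursuit: at each step, find the (unit-norm)
    dictionary atom most correlated with the residual, add its
    projection coefficient to the sparse code, and subtract the
    projection from the residual."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_atoms):
        corr = dictionary.T @ residual
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]
        residual -= corr[k] * dictionary[:, k]
        if np.linalg.norm(residual) < tol:
            break
    return coeffs, residual

# toy "ECG segment": a 2-sparse combination of unit-norm random atoms
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)          # atoms must be unit norm
x = 3.0 * D[:, 5] - 2.0 * D[:, 40]
coeffs, res = matching_pursuit(x, D, n_atoms=50)
```

Denoising follows from the same loop: stopping after a few atoms (or at a residual threshold) reconstructs the structured ECG content while leaving broadband noise in the residual.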
13. Computational methods for image reconstruction. NMR Biomed 2017; 30:e3545. PMID: 27226213. DOI: 10.1002/nbm.3545.
Abstract
Reconstructing images from indirect measurements is a central problem in many applications, including the subject of this special issue, quantitative susceptibility mapping (QSM). The process of image reconstruction typically requires solving an inverse problem that is ill-posed and large-scale and thus challenging to solve. Although the research field of inverse problems is thriving and very active with diverse applications, in this part of the special issue we will focus on recent advances in inverse problems that are specific to deconvolution problems, the class of problems to which QSM belongs. We will describe analytic tools that can be used to investigate underlying ill-posedness and apply them to the QSM reconstruction problem and the related extensively studied image deblurring problem. We will discuss state-of-the-art computational tools and methods for image reconstruction, including regularization approaches and regularization parameter selection methods. We finish by outlining some of the current trends and future challenges. Copyright © 2016 John Wiley & Sons, Ltd.
14. An Assessment of Iterative Reconstruction Methods for Sparse Ultrasound Imaging. Sensors 2017; 17:533. PMID: 28282862; PMCID: PMC5375819. DOI: 10.3390/s17030533.
Abstract
Ultrasonic image reconstruction using inverse problems has recently appeared as an alternative to enhance ultrasound imaging over beamforming methods. This approach depends on the accuracy of the acquisition model used to represent transducers, reflectivity, and medium physics. Iterative methods, well known in general sparse signal reconstruction, are also suited for imaging. In this paper, a discrete acquisition model is assessed by solving a linear system of equations by an ℓ1-regularized least-squares minimization, where the solution sparsity may be adjusted as desired. The paper surveys 11 variants of four well-known algorithms for sparse reconstruction, and assesses their optimization parameters with the goal of finding the best approach for iterative ultrasound imaging. The strategy for the model evaluation consists of using two distinct datasets. We first generate data from a synthetic phantom that mimics real targets inside a professional ultrasound phantom device. This dataset is contaminated with Gaussian noise with an estimated SNR, and all methods are assessed by their resulting images and performances. The model and methods are then assessed with real data collected by a research ultrasound platform when scanning the same phantom device, and results are compared with beamforming. A distinct real dataset is finally used to further validate the proposed modeling. Although high computational effort is required by iterative methods, results show that the discrete model may lead to images closer to the ground truth than traditional beamforming. However, the computing capabilities of current platforms need to evolve before the frame rates currently delivered by ultrasound equipment are achievable.
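A common solver family for the ℓ1-regularized least-squares problem at the core of this approach is iterative shrinkage-thresholding (ISTA): a gradient step on the quadratic term followed by soft-thresholding. A minimal sketch on a random acquisition model (not the paper's ultrasound model; the regularization weight is an illustrative choice):

```python
import numpy as np

def ista(A, b, lam, n_iters=500):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 :
    gradient descent on the data term with step 1/L (L = Lipschitz
    constant of the gradient), followed by the l1 proximal operator
    (soft-thresholding), which drives small coefficients to zero."""
    L = np.linalg.norm(A, 2) ** 2            # spectral norm squared
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        g = A.T @ (A @ x - b)                # gradient of 0.5*||Ax - b||^2
        x = x - g / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x

# sparse scatterer vector observed through a random acquisition model
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[[10, 50, 120]] = [1.5, -2.0, 1.0]
b = A @ x_true
x_hat = ista(A, b, lam=0.02, n_iters=2000)
```

Raising `lam` increases the sparsity of the reconstruction, which is the "solution sparsity may be adjusted as desired" knob described in the abstract.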
15. Bias reduction for low-statistics PET: maximum likelihood reconstruction with a modified Poisson distribution. IEEE Trans Med Imaging 2015; 34:126-136. PMID: 25137726; PMCID: PMC4465546. DOI: 10.1109/tmi.2014.2347810.
Abstract
Positron emission tomography data are typically reconstructed with maximum likelihood expectation maximization (MLEM). However, MLEM suffers from positive bias due to the non-negativity constraint. This is particularly problematic for tracer kinetic modeling. Two reconstruction methods with bias reduction properties that do not use strict Poisson optimization are presented and compared to each other, to filtered backprojection (FBP), and to MLEM. The first method is an extension of NEGML, where the Poisson distribution is replaced by a Gaussian distribution for low count data points. The transition point between the Gaussian and the Poisson regime is a parameter of the model. The second method is a simplification of ABML. ABML has a lower and an upper bound for the reconstructed image, whereas AML has the upper bound set to infinity. AML uses a negative lower bound to obtain bias reduction properties. Different choices of the lower bound are studied. The parameter of both algorithms determines the effectiveness of the bias reduction and should be chosen large enough to ensure bias-free images. This means that both algorithms become more similar to least-squares algorithms, which turned out to be necessary to obtain bias-free reconstructions. This comes at the cost of increased variance. Nevertheless, NEGML and AML have lower variance than FBP. Furthermore, randoms handling has a large influence on the bias. Reconstruction with smoothed randoms results in lower bias compared to reconstruction with unsmoothed randoms or randoms-precorrected data. However, NEGML and AML both yield bias-free images for large values of their parameter.
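The baseline MLEM algorithm discussed above applies a multiplicative update derived from the Poisson likelihood; the non-negativity it enforces is precisely the source of the positive bias the paper targets. A minimal sketch on a tiny noiseless system (illustrative only; real PET adds randoms, scatter, and attenuation):

```python
import numpy as np

def mlem(A, y, n_iters=200):
    """MLEM for Poisson data y ~ Poisson(Ax): the multiplicative update
    x <- x * [A^T (y / Ax)] / (A^T 1), which preserves non-negativity
    and increases the Poisson log-likelihood at every iteration."""
    sens = A.sum(axis=0)                       # sensitivity image A^T 1
    x = np.ones(A.shape[1])                    # strictly positive start
    for _ in range(n_iters):
        proj = A @ x                           # forward projection
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x

# tiny noiseless toy system: MLEM should drive A x toward y
rng = np.random.default_rng(0)
A = rng.random((30, 8))
x_true = rng.random(8) + 0.5
y = A @ x_true                                 # noiseless expected counts
x_hat = mlem(A, y, n_iters=5000)
```

Because `x` can never go negative, noisy low-count data push the fit upward on average; NEGML and AML relax the Poisson model or the lower bound to remove that bias.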
16. Derivation of coarse-grained potentials via multistate iterative Boltzmann inversion. J Chem Phys 2014; 140:224104. PMID: 24929371; PMCID: PMC4187284. DOI: 10.1063/1.4880555.
Abstract
In this work, an extension is proposed to the standard iterative Boltzmann inversion (IBI) method used to derive coarse-grained potentials. It is shown that the inclusion of target data from multiple states yields a less state-dependent potential, and is thus better suited to simulate systems over a range of thermodynamic states than the standard IBI method. The inclusion of target data from multiple states forces the algorithm to sample regions of potential phase space that match the radial distribution function at multiple state points, thus producing a derived potential that is more representative of the underlying interactions. It is shown that the algorithm is able to converge to the true potential for a system where the underlying potential is known. It is also shown that potentials derived via the proposed method better predict the behavior of n-alkane chains than those derived via the standard IBI method. Additionally, through the examination of alkane monolayers, it is shown that the relative weight given to each state in the fitting procedure can impact bulk system properties, allowing the potentials to be further tuned in order to match the properties of reference atomistic and/or experimental systems.
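A single multistate IBI correction step has the form V ← V + Σₛ wₛ kTₛ ln(gˢ/gˢ_target), summing the standard IBI update over the included thermodynamic states. A minimal sketch of just this update on synthetic radial distribution functions (no molecular dynamics is run; the weights, temperatures, and RDF shapes are illustrative assumptions):

```python
import numpy as np

def multistate_ibi_update(V, rdfs_current, rdfs_target, kT_list, weights):
    """One multistate iterative Boltzmann inversion step:
    V_{n+1}(r) = V_n(r) + sum_s w_s * kT_s * ln(g_n^s(r) / g_target^s(r)).
    Where the current RDF overshoots the target, the potential is raised
    (more repulsive); where it undershoots, the potential is lowered."""
    V_new = V.copy()
    for g_cur, g_tgt, kT, w in zip(rdfs_current, rdfs_target, kT_list, weights):
        mask = (g_cur > 0) & (g_tgt > 0)   # only correct where both RDFs defined
        V_new[mask] += w * kT * np.log(g_cur[mask] / g_tgt[mask])
    return V_new

# synthetic RDFs at two states on a shared radial grid
r = np.linspace(0.5, 3.0, 100)
g_tgt = [1.0 + 0.5 * np.exp(-(r - 1.0) ** 2),
         1.0 + 0.3 * np.exp(-(r - 1.1) ** 2)]
g_cur = [g * 1.1 for g in g_tgt]           # current model overshoots both targets
V0 = np.zeros_like(r)
V1 = multistate_ibi_update(V0, g_cur, g_tgt, kT_list=[1.0, 1.2], weights=[0.5, 0.5])
```

With the model overshooting both targets by a uniform 10%, every state votes to raise the potential, so V1 is a positive constant shift of (0.5·1.0 + 0.5·1.2)·ln(1.1); in a full IBI loop this updated potential would be fed back into the next simulation.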
17. Quantifying admissible undersampling for sparsity-exploiting iterative image reconstruction in X-ray CT. IEEE Trans Med Imaging 2013. PMID: 23204282; PMCID: PMC3992296. DOI: 10.1109/tmi.2012.2230185.
Abstract
Iterative image reconstruction with sparsity-exploiting methods, such as the total variation (TV) minimization investigated in compressive sensing, promises potentially large reductions in sampling requirements. Quantifying this claim for computed tomography (CT) is nontrivial, because both full sampling in the discrete-to-discrete imaging model and the reduction in sampling admitted by sparsity-exploiting methods are ill-defined. The present article proposes definitions of full sampling by introducing four sufficient-sampling conditions (SSCs). The SSCs are based on the condition number of the system matrix of a linear imaging model and address invertibility and stability. In the example application of breast CT, the SSCs are used as reference points of full sampling for quantifying the undersampling admitted by reconstruction through TV-minimization. In numerical simulations, factors affecting admissible undersampling are studied. Differences between few-view and few-detector-bin reconstruction, as well as a relation between object sparsity and admitted undersampling, are quantified.
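The idea of reading invertibility and stability off the condition number of the system matrix can be illustrated on a toy: a random matrix stands in for the CT system matrix (it is NOT the paper's discrete-to-discrete forward model), and cond(A) is tracked as "views" are added:

```python
import numpy as np

def view_condition_numbers(n_pix=16, bins_per_view=4,
                           view_counts=(8, 16, 32, 64), seed=0):
    """Toy probe of the sufficient-sampling idea: for a random stand-in
    'system matrix' (rows = views * detector bins, columns = pixels),
    compute the condition number at several sampling levels. A falling
    cond(A) indicates an increasingly stable, invertible model."""
    rng = np.random.default_rng(seed)
    conds = []
    for v in view_counts:
        A = rng.standard_normal((v * bins_per_view, n_pix))
        conds.append(np.linalg.cond(A))
    return conds

conds = view_condition_numbers()
```

In the paper the same quantity is evaluated for the actual CT system matrix, and a threshold on it defines the SSC reference points against which TV-admitted undersampling is measured.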
18. Reduced-Rank Approximations to the Far-Field Transform in the Gridded Fast Multipole Method. J Comput Phys 2011; 230:3656-3667. PMID: 21552350; PMCID: PMC3086302. DOI: 10.1016/j.jcp.2011.02.016.
Abstract
The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
19. The Fast Multipole Method and Fourier Convolution for the Solution of Acoustic Scattering on Regular Volumetric Grids. J Comput Phys 2010; 229:8199-8210. PMID: 20835366; PMCID: PMC2936276. DOI: 10.1016/j.jcp.2010.07.025.
Abstract
The fast multipole method (FMM) is applied to the solution of large-scale, three-dimensional acoustic scattering problems involving inhomogeneous objects defined on a regular grid. The grid arrangement is especially well suited to applications in which the scattering geometry is not known a priori and is reconstructed on a regular grid using iterative inverse scattering algorithms or other imaging techniques. The regular structure of unknown scattering elements facilitates a dramatic reduction in the amount of storage and computation required for the FMM, both of which scale linearly with the number of scattering elements. In particular, the use of fast Fourier transforms to compute Green's function convolutions required for neighboring interactions lowers the often-significant cost of finest-level FMM computations and helps mitigate the dependence of FMM cost on finest-level box size. Numerical results demonstrate the efficiency of the composite method as the number of scattering elements in each finest-level box is increased.
20. GPU computing with Kaczmarz's and other iterative algorithms for linear systems. Parallel Comput 2010; 36:215-231. PMID: 20526446; PMCID: PMC2879082. DOI: 10.1016/j.parco.2009.12.003.
Abstract
The graphics processing unit (GPU) is used to solve large linear systems derived from partial differential equations. The differential equations studied are strongly convection-dominated, of various sizes, and common to many fields, including computational fluid dynamics, heat transfer, and structural mechanics. The paper presents comparisons between GPU and CPU implementations of several well-known iterative methods, including Kaczmarz's, Cimmino's, component averaging, conjugate gradient normal residual (CGNR), symmetric successive overrelaxation-preconditioned conjugate gradient, and conjugate-gradient-accelerated component-averaged row projections (CARP-CG). Computations are performed with dense as well as general banded systems. The results demonstrate that our GPU implementation outperforms CPU implementations of these algorithms, as well as previously studied parallel implementations on Linux clusters and shared memory systems. While the CGNR method had begun to fall out of favor for solving such problems, for the problems studied in this paper, the CGNR method implemented on the GPU performed better than the other methods, including a cluster implementation of the CARP-CG method.
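CGNR, the best performer above, is conjugate gradients applied to the normal equations AᵀAx = Aᵀb without ever forming AᵀA. A minimal CPU sketch on a small nonsymmetric toy system (the GPU kernels and banded storage of the paper are out of scope):

```python
import numpy as np

def cgnr(A, b, n_iters=200, tol=1e-10):
    """Conjugate gradient on the normal equations (CGNR): solves
    min_x ||Ax - b||_2 using only products with A and A^T, so A^T A
    is never formed (important for conditioning and memory)."""
    x = np.zeros(A.shape[1])
    r = b - A @ x              # residual in data space
    z = A.T @ r                # residual of the normal equations
    p = z.copy()
    zz = z @ z
    for _ in range(n_iters):
        Ap = A @ p
        alpha = zz / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = A.T @ r
        zz_new = z @ z
        if np.sqrt(zz_new) < tol:
            break
        p = z + (zz_new / zz) * p
        zz = zz_new
    return x

# nonsymmetric toy system standing in for a convection-dominated discretization
rng = np.random.default_rng(0)
A = np.eye(50) + 0.5 * np.triu(rng.standard_normal((50, 50)), 1) / np.sqrt(50)
x_true = rng.standard_normal(50)
b = A @ x_true
x_hat = cgnr(A, b)
```

The two matrix-vector products per iteration (with A and Aᵀ) are exactly the kernels that map well onto GPU hardware, which is why CGNR regained competitiveness in the study.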