151. Trzasko J, Manduca A. Highly undersampled magnetic resonance image reconstruction via homotopic ℓ0-minimization. IEEE Transactions on Medical Imaging 2009; 28:106-21. [PMID: 19116193] [DOI: 10.1109/tmi.2008.927346]
Abstract
In clinical magnetic resonance imaging (MRI), any reduction in scan time offers a number of potential benefits ranging from high-temporal-rate observation of physiological processes to improvements in patient comfort. Following recent developments in compressive sensing (CS) theory, several authors have demonstrated that certain classes of MR images which possess sparse representations in some transform domain can be accurately reconstructed from very highly undersampled k-space data by solving a convex ℓ1-minimization problem. Although ℓ1-based techniques are extremely powerful, they inherently require a degree of over-sampling above the theoretical minimum sampling rate to guarantee that exact reconstruction can be achieved. In this paper, we propose a generalization of the CS paradigm based on homotopic approximation of the ℓ0 quasi-norm and show how MR image reconstruction can be pushed even further below the Nyquist limit and significantly closer to the theoretical bound. Following a brief review of standard CS methods and the developed theoretical extensions, several example MRI reconstructions from highly undersampled k-space data are presented.
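The convex ℓ1 baseline that this paper generalizes can be illustrated in a few lines of NumPy. The sketch below recovers a sparse 1-D signal from random undersampled measurements via iterative soft-thresholding (ISTA); the random Gaussian sensing matrix and all sizes and parameters are illustrative assumptions, not the paper's k-space setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's k-space data): a k-sparse
# signal observed through m < n random Gaussian measurements.
n, m, k = 128, 48, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA for the convex problem  min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    z = x - step * (A.T @ (A @ x - y))                        # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small: the l1 solution is close to x_true despite m << n
```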
Affiliation(s)
- Joshua Trzasko
- Center for Advanced Imaging Research, Mayo Clinic College of Medicine, Rochester, MN 55905 USA.
152. Barmpoutis A, Vemuri BC, Howland D, Forder JR. Extracting tractosemas from a displacement probability field for tractography in DW-MRI. 2008; 11:9-16. [PMID: 18979726] [DOI: 10.1007/978-3-540-85988-8_2]
Abstract
In this paper we present a novel method for estimating a field of asymmetric spherical functions, dubbed tractosemas, given the intra-voxel displacement probability information. The peaks of tractosemas correspond to directions of distinct fibers, which can have either symmetric or asymmetric local fiber structure. This is in contrast to existing methods that estimate fiber orientation distributions, which are naturally symmetric and therefore cannot model asymmetries such as splaying fibers. We propose a method for extracting tractosemas from a given field of displacement probability iso-surfaces via a diffusion process. The diffusion is performed by minimizing a kernel convolution integral, which leads to an update formula expressed in the convenient form of a discrete kernel convolution. The kernel expresses the probability of diffusion between two neighboring spherical functions, and we model it by the product of Gaussian and von Mises distributions. The model is validated via experiments on synthetic and real diffusion-weighted magnetic resonance imaging (DW-MRI) datasets from a rat hippocampus and spinal cord.
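The Gaussian-von Mises product kernel described in the abstract is easy to sketch. The function below is a hypothetical coupling weight between two neighboring spherical-function samples, written for a 2-D toy case; the parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

def diffusion_weight(dx, theta1, theta2, sigma=1.0, kappa=4.0):
    """Hypothetical coupling weight between two neighboring spherical-function
    samples: a spatial Gaussian in the voxel offset dx times a von Mises
    density in the angle between directions theta1 and theta2 (2-D toy)."""
    spatial = np.exp(-np.dot(dx, dx) / (2.0 * sigma ** 2))
    angular = np.exp(kappa * np.cos(theta1 - theta2)) / (2.0 * np.pi * np.i0(kappa))
    return spatial * angular

# Aligned directions in a neighboring voxel couple much more strongly than
# orthogonal ones, so the diffusion follows coherent fiber directions.
w_aligned = diffusion_weight(np.array([1.0, 0.0]), 0.0, 0.0)
w_ortho = diffusion_weight(np.array([1.0, 0.0]), 0.0, np.pi / 2)
print(w_aligned / w_ortho)  # ratio e^kappa ~ 54.6 for kappa = 4
```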
153. Tosun D, Prince JL. A geometry-driven optical flow warping for spatial normalization of cortical surfaces. IEEE Transactions on Medical Imaging 2008; 27:1739-53. [PMID: 19033090] [PMCID: PMC2597639] [DOI: 10.1109/tmi.2008.925080]
Abstract
Spatial normalization is frequently used to map data to a standard coordinate system by removing intersubject morphological differences, thereby allowing for group analysis to be carried out. The work presented in this paper is motivated by the need for an automated cortical surface normalization technique that will automatically identify homologous cortical landmarks and map them to the same coordinates on a standard manifold. The geometry of a cortical surface is analyzed using two shape measures that distinguish the sulcal and gyral regions in a multiscale framework. A multichannel optical flow warping procedure aligns these shape measures between a reference brain and a subject brain, creating the desired normalization. The partial differential equation that carries out the warping is implemented in a Euclidean framework in order to facilitate a multiresolution strategy, thereby permitting large deformations between the two surfaces. The technique is demonstrated by aligning 33 normal cortical surfaces and showing both improved structural alignment in manually labeled sulci and improved functional alignment in positron emission tomography data mapped to the surfaces. A quantitative comparison between our proposed surface-based spatial normalization method and a leading volumetric spatial normalization method is included to show that the surface-based spatial normalization performs better in matching homologous cortical anatomies.
Affiliation(s)
- Duygu Tosun
- Department of Electrical and Computer Engineering, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
154. Gilboa G. Nonlinear scale space with spatially varying stopping time. IEEE Transactions on Pattern Analysis and Machine Intelligence 2008; 30:2175-2187. [PMID: 18988950] [DOI: 10.1109/tpami.2008.23]
Abstract
A general scale space algorithm is presented for denoising signals and images with spatially varying dominant scales. The process is formulated as a partial differential equation with spatially varying time. The proposed adaptivity is semi-local and works in conjunction with the classical gradient-based diffusion coefficient designed to preserve edges. The new algorithm aims at maximizing a local SNR measure of the denoised image and is based on a generalization of a global stopping time criterion presented recently by the author and colleagues. Most notably, the method also works well for partially textured images and outperforms any selection of a global stopping time. Given an estimate of the noise variance, the procedure is automatic and applies well to most natural images.
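The idea of a spatially varying stopping time can be illustrated with classic 1-D Perona-Malik diffusion, freezing each pixel once its local stopping time is reached. This is a toy stand-in with hand-picked stopping times, not the paper's SNR-driven estimate.

```python
import numpy as np

def perona_malik_step(u, k=0.1, dt=0.2):
    """One explicit step of 1-D Perona-Malik diffusion with the classic
    edge-stopping coefficient g = 1 / (1 + (u'/k)^2)."""
    grad = np.diff(u)                    # forward differences
    g = 1.0 / (1.0 + (grad / k) ** 2)    # small across strong edges
    flux = g * grad
    du = np.concatenate(([flux[0]], np.diff(flux), [-flux[-1]]))
    return u + dt * du

rng = np.random.default_rng(1)
u = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)

# Spatially varying stopping time (hand-picked for illustration): smooth
# the left half four times longer than the right half.
stop = np.concatenate([np.full(50, 40), np.full(50, 10)])
for t in range(40):
    u_next = perona_malik_step(u)
    u = np.where(t < stop, u_next, u)    # freeze pixels past their stop time

print(u[:40].std() < 0.05)  # True: noise on the long-smoothed half is strongly reduced
```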
155. Shen X, Dietlein CR, Grossman E, Popovic Z, Meyer FG. Detection and segmentation of concealed objects in terahertz images. IEEE Transactions on Image Processing 2008; 17:2465-2475. [PMID: 19004716] [DOI: 10.1109/tip.2008.2006662]
Abstract
Terahertz imaging makes it possible to acquire images of objects concealed underneath clothing by measuring the radiometric temperatures of different objects on a human subject. The goal of this work is to automatically detect and segment concealed objects in broadband 0.1-1 THz images. Due to the inherent physical properties of passive terahertz imaging and associated hardware, images have poor contrast and low signal-to-noise ratio, and standard segmentation algorithms are unable to segment or detect concealed objects. Our approach relies on two stages. First, we remove the noise from the image using the anisotropic diffusion algorithm. We then detect the boundaries of the concealed objects: we use a mixture of Gaussian densities to model the distribution of temperature inside the image, and evolve curves along the isocontours of the image to identify the concealed objects. We have compared our approach with two state-of-the-art segmentation methods; both fail to identify the concealed objects, while our method detects them accurately. In addition, our approach was more accurate than a state-of-the-art supervised image segmentation algorithm that required the concealed objects to be already identified. Our approach is completely unsupervised and could run in real time on dedicated hardware.
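The temperature-mixture modeling step can be sketched as a plain two-component Gaussian-mixture EM fit on synthetic "radiometric temperatures"; the temperature values and the two-component choice are illustrative assumptions, not calibrated data.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic temperatures: skin pixels near 33, a cooler concealed object near 27.
temps = np.concatenate([rng.normal(33.0, 0.8, 700), rng.normal(27.0, 0.8, 300)])

# Plain EM for a two-component 1-D Gaussian mixture.
mu = np.percentile(temps, [25, 75])           # rough initialization
var = np.array([4.0, 4.0])
mix = np.array([0.5, 0.5])
for _ in range(200):
    # E-step: responsibilities of each component for each pixel
    p = mix * np.exp(-(temps[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, variances
    nk = r.sum(axis=0)
    mu = (r * temps[:, None]).sum(axis=0) / nk
    var = (r * (temps[:, None] - mu) ** 2).sum(axis=0) / nk
    mix = nk / len(temps)

print(sorted(mu))  # close to the two true component temperatures
```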
Affiliation(s)
- Xilin Shen
- Department of Radiology, Yale University, New Haven, CT 06519, USA
156. Satoh S, Usui S. Engineering-approach accelerates computational understanding of V1-V2 neural properties. Cogn Neurodyn 2008; 3:1-8. [PMID: 19003454] [DOI: 10.1007/s11571-008-9065-x]
Abstract
We present two computational models: (i) long-range horizontal connections and the nonlinear effect in V1, and (ii) the filling-in process at the blind spot. Both models are obtained deductively from standard regularization theory to show that physiological evidence of V1 and V2 neural properties is essential for efficient image processing. We stress that the engineering approach should be imported to understand visual systems computationally, even though this approach usually ignores physiological evidence and its target is neither neurons nor the brain.
Affiliation(s)
- Shunji Satoh
- Laboratory for Neuroinformatics, RIKEN Brain Science Institute, Hirosawa 2-1, Wako, Saitama 351-0198, Japan.
157. Balocco S, Basset O, Azencot J, Tortoli P, Cachard C. 3D dynamic model of healthy and pathologic arteries for ultrasound technique evaluation. Med Phys 2008; 35:5440-50. [DOI: 10.1118/1.3006948]
158. Bristow MS, Poulin BW, Simon JE, Hill MD, Kosior JC, Coutts SB, Frayne R, Mitchell JR, Demchuk AM. Identifying lesion growth with MR imaging in acute ischemic stroke. J Magn Reson Imaging 2008; 28:837-46. [DOI: 10.1002/jmri.21507]
159. Papari G, Petkov N. Adaptive pseudo dilation for gestalt edge grouping and contour detection. IEEE Transactions on Image Processing 2008; 17:1950-1962. [PMID: 18784041] [DOI: 10.1109/tip.2008.2002306]
Abstract
We consider the problem of detecting object contours in natural images. In many cases, local luminance changes turn out to be stronger in textured areas than on object contours. Therefore, local edge features, which only look at a small neighborhood of each pixel, cannot be reliable indicators of the presence of a contour, and some global analysis is needed. We introduce a new morphological operator, called adaptive pseudo-dilation (APD), which uses context-dependent structuring elements in order to identify long curvilinear structures in the edge map. We show that grouping edge pixels as the connected components of the output of APD results in a good agreement with the gestalt law of good continuation. The novelty of this operator is that dilation is limited to the Voronoi cell of each edge pixel. An efficient implementation of APD is presented. The grouping algorithm is then embedded in a multithreshold contour detector. At each threshold level, small groups of edges are removed, and contours are completed by means of a generalized reconstruction from markers. The use of different thresholds makes the algorithm much less sensitive to the values of the input parameters. Both qualitative and quantitative comparisons with existing approaches prove the superiority of the proposed contour detector in terms of a larger amount of suppressed texture and more effective detection of low-contrast contours.
Affiliation(s)
- Giuseppe Papari
- Institute of Mathematics and Computing Science, University of Groningen, Groningen, The Netherlands.
160. Vanhamel I, Mihai C, Sahli H, Katartzis A, Pratikakis I. Scale Selection for Compact Scale-Space Representation of Vector-Valued Images. Int J Comput Vis 2008. [DOI: 10.1007/s11263-008-0154-4]
161. Boulanger J, Kervrann C, Bouthemy P. A simulation and estimation framework for intracellular dynamics and trafficking in video-microscopy and fluorescence imagery. Med Image Anal 2008; 13:132-42. [PMID: 18723385] [DOI: 10.1016/j.media.2008.06.017]
Abstract
Image sequence analysis in video-microscopy has gained importance as molecular biology now has a profound impact on the way research is conducted in medicine. However, the image processing techniques currently used for modeling intracellular dynamics are still relatively crude and yield imprecise results. Indeed, complex interactions between a large number of small moving particles in a complex scene cannot be easily modeled, limiting the performance of object detection and tracking algorithms. This motivates our present research effort: to develop a general estimation/simulation framework able to produce image sequences showing small moving spots in interaction, with variable velocities, corresponding to intracellular dynamics and trafficking in biology. It is now well established that spot/object trajectories can play a role in the analysis of living cell dynamics, and simulating realistic image sequences is therefore of major importance. We demonstrate the potential of the proposed simulation/estimation framework in experiments, and show that this approach can also be used to evaluate the performance of object detection/tracking algorithms in video-microscopy and fluorescence imagery.
Affiliation(s)
- Jérôme Boulanger
- IRISA/INRIA Rennes, Campus Universitaire de Beaulieu, F-35042 Rennes, France.
162. Yu J, Wang Y, Shen Y. Noise reduction and edge detection via kernel anisotropic diffusion. Pattern Recognit Lett 2008. [DOI: 10.1016/j.patrec.2008.03.002]
163. Adams M, Tang Fan, Wijesoma W, Chhay Sok. Convergent Smoothing and Segmentation of Noisy Range Data in Multiscale Space. IEEE Trans Robot 2008. [DOI: 10.1109/tro.2008.919294]
164. Lee JA, Geets X, Grégoire V, Bol A. Edge-preserving filtering of images with low photon counts. IEEE Transactions on Pattern Analysis and Machine Intelligence 2008; 30:1014-1027. [PMID: 18421107] [DOI: 10.1109/tpami.2008.16]
Abstract
Edge-preserving filters such as local M-smoothers or bilateral filtering are usually designed for Gaussian noise. This paper investigates how these filters can be adapted in order to efficiently deal with Poissonian noise. In addition, the issue of photometry invariance is addressed by changing the way filter coefficients are normalized. The proposed normalization is additive, instead of being multiplicative, and leads to a strong connection with anisotropic diffusion. Experiments show that ensuring the photometry invariance leads to comparable denoising performances in terms of the root mean square error computed on the signal.
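For intuition, a simple stand-in for Poisson-adapted filtering is a plain bilateral filter applied after a variance-stabilizing (Anscombe) transform. Note this is not the paper's additive-normalization scheme, just a common baseline for Poissonian noise; all parameters below are illustrative.

```python
import numpy as np

def anscombe(x):
    # Variance-stabilizing transform: Poisson counts -> approx. unit variance
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def bilateral_1d(y, sigma_s=2.0, sigma_r=2.0, radius=5):
    """Plain 1-D bilateral filter in the Anscombe domain (a stand-in for the
    paper's Poisson adaptation, not their method)."""
    z = anscombe(y.astype(float))
    out = np.empty_like(z)
    idx = np.arange(-radius, radius + 1)
    ws = np.exp(-idx ** 2 / (2 * sigma_s ** 2))                  # spatial weights
    zp = np.pad(z, radius, mode="edge")
    for i in range(len(z)):
        window = zp[i:i + 2 * radius + 1]
        wr = np.exp(-(window - z[i]) ** 2 / (2 * sigma_r ** 2))  # range weights
        w = ws * wr
        out[i] = (w * window).sum() / w.sum()
    return (out / 2.0) ** 2 - 3.0 / 8.0                          # naive inverse transform

rng = np.random.default_rng(3)
truth = np.concatenate([np.full(40, 5.0), np.full(40, 40.0)])    # low photon counts
noisy = rng.poisson(truth).astype(float)
den = bilateral_1d(noisy)

print(np.mean((den - truth) ** 2) < np.mean((noisy - truth) ** 2))  # True: MSE drops
```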
Affiliation(s)
- John A Lee
- Molecular Imaging and Experimental Radiotherapy Unit (IMRE), Université Catholique de Louvain, Brussels, Belgium.
165. Kokkinos I, Deriche R, Faugeras O, Maragos P. Computational analysis and learning for a biologically motivated model of boundary detection. Neurocomputing 2008. [DOI: 10.1016/j.neucom.2007.11.031]
166. Zhang B, Allebach JP. Adaptive bilateral filter for sharpness enhancement and noise removal. IEEE Transactions on Image Processing 2008; 17:664-678. [PMID: 18390373] [DOI: 10.1109/tip.2008.919949]
Abstract
In this paper, we present the adaptive bilateral filter (ABF) for sharpness enhancement and noise removal. The ABF sharpens an image by increasing the slope of the edges without producing overshoot or undershoot. It is an approach to sharpness enhancement that is fundamentally different from the unsharp mask (USM). This new approach to slope restoration also differs significantly from previous slope restoration algorithms in that the ABF does not involve detection of edges or their orientation, or extraction of edge profiles. In the ABF, the edge slope is enhanced by transforming the histogram via a range filter with adaptive offset and width. The ABF is able to smooth the noise while enhancing edges and textures in the image. The parameters of the ABF are optimized with a training procedure. ABF-restored images are significantly sharper than those restored by the bilateral filter. Compared with a USM-based sharpening method, the optimal unsharp mask (OUM), ABF-restored edges are as sharp as those rendered by the OUM, but without the halo artifacts that appear in the OUM-restored image. In terms of noise removal, the ABF also outperforms the bilateral filter and the OUM. We demonstrate that the ABF works well for both natural images and text images.
167. El-Yamany NA, Papamichalis PE, Christensen MP. Adaptive framework for robust high-resolution image reconstruction in multiplexed computational imaging architectures. Applied Optics 2008; 47:B117-B127. [PMID: 18382547] [DOI: 10.1364/ao.47.00b117]
Abstract
In multiplexed computational imaging schemes, high-resolution images are reconstructed by fusing the information in multiple low-resolution images detected by a two-dimensional array of low-resolution image sensors. The reconstruction procedure assumes a mathematical model for the imaging process that could have generated the low-resolution observations from an unknown high-resolution image. In practical settings, the parameters of the mathematical imaging model are known only approximately and are typically estimated before the reconstruction procedure takes place. Violations to the assumed model, such as inaccurate knowledge of the field of view of the imagers, erroneous estimation of the model parameters, and/or accidental scene or environmental changes can be detrimental to the reconstruction quality, even if they are small in number. We present an adaptive algorithm for robust reconstruction of high-resolution images in multiplexed computational imaging architectures. Using robust M-estimators and incorporating a similarity measure, the proposed scheme adopts an adaptive estimation strategy that effectively deals with violations to the assumed imaging model. Comparisons with nonadaptive reconstruction techniques demonstrate the superior performance of the proposed algorithm in terms of reconstruction quality and robustness.
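The robust M-estimation ingredient can be sketched with Huber-weighted iteratively reweighted least squares for a simple location estimate: observations that violate the assumed model (large residuals) are down-weighted. This is a minimal illustration of the M-estimator idea, not the paper's reconstruction algorithm; all data and parameters are made up.

```python
import numpy as np

def huber_irls_mean(x, delta=1.0, iters=50):
    """Iteratively reweighted least squares with Huber weights: a minimal
    robust M-estimator of location."""
    mu = np.median(x)
    for _ in range(iters):
        r = x - mu
        # Huber weights: 1 inside the delta band, delta/|r| outside
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.maximum(np.abs(r), 1e-12))
        mu = (w * x).sum() / w.sum()
    return mu

rng = np.random.default_rng(7)
inliers = rng.normal(10.0, 0.5, 95)   # observations consistent with the model
outliers = np.full(5, 50.0)           # e.g. an accidental scene change
x = np.concatenate([inliers, outliers])
mu = huber_irls_mean(x)

print(abs(mu - 10.0) < abs(x.mean() - 10.0))  # True: robust estimate beats the mean
```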
Affiliation(s)
- Noha A El-Yamany
- Department of Electrical Engineering, Southern Methodist University, Dallas, TX 75275, USA.
168. Level set method for positron emission tomography. Int J Biomed Imaging 2008; 2007:26950. [PMID: 18354724] [PMCID: PMC2266822] [DOI: 10.1155/2007/26950]
Abstract
In positron emission tomography (PET), a radioactive compound is injected into the body to promote a tissue-dependent emission rate. Expectation maximization (EM) reconstruction algorithms are iterative techniques which estimate the concentration coefficients that provide the best fitted solution, for example, a maximum likelihood estimate. In this paper, we combine the EM algorithm with a level set approach. The level set method is used to capture the coarse scale information and the discontinuities of the concentration coefficients. An intrinsic advantage of the level set formulation is that anatomical information can be efficiently incorporated and used in an easy and natural way. We utilize a multiple level set formulation to represent the geometry of the objects in the scene. The proposed algorithm can be applied to any PET configuration, without major modifications.
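The EM side of such a combination is the classic MLEM update for emission tomography, which can be sketched on a tiny synthetic system; the 6x4 system matrix and emission rates below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
# Tiny illustrative PET setup: 6 detector bins, 4 voxels, known system
# matrix A (probability that an emission in voxel j is detected in bin i).
A = rng.random((6, 4))
A /= A.sum(axis=0)                               # each column sums to 1
x_true = np.array([10.0, 50.0, 5.0, 80.0])       # unknown emission rates
y = rng.poisson(A @ x_true).astype(float)        # measured counts

# Classic MLEM update:  x <- x * A^T(y / Ax) / A^T 1
x = np.ones(4)
sens = A.sum(axis=0)                             # sensitivity image A^T 1
for _ in range(200):
    x *= A.T @ (y / np.maximum(A @ x, 1e-12)) / sens

# With unit column sums, MLEM preserves the total measured counts exactly.
print(abs(x.sum() - y.sum()))  # ~ 0
```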
169. Favaro P, Soatto S, Burger M, Osher SJ. Shape from defocus via diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence 2008; 30:518-531. [PMID: 18195444] [DOI: 10.1109/tpami.2007.1175]
Abstract
Defocus can be modeled as a diffusion process and represented mathematically using the heat equation, where image blur corresponds to the diffusion of heat. This analogy can be extended to non-planar scenes by allowing a space-varying diffusion coefficient. The inverse problem of reconstructing 3-D structure from blurred images corresponds to an "inverse diffusion" that is notoriously ill-posed. We show how to bypass this problem by using the notion of relative blur. Given two images, within each neighborhood, the amount of diffusion necessary to transform the sharper image into the blurrier one depends on the depth of the scene. This can be used to devise a global algorithm to estimate the depth profile of the scene without recovering the deblurred image, using only forward diffusion.
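The relative-blur idea can be reproduced in a toy 1-D experiment: forward-diffuse the sharper observation and read off the diffusion time at which it best matches the blurrier one; that time plays the role of the depth cue. Sizes, blur levels, and the step size below are illustrative assumptions.

```python
import numpy as np

def heat_step(u, dt=0.2):
    # One explicit 1-D heat-equation step (forward diffusion only).
    return u + dt * np.concatenate(([0.0], np.diff(u, 2), [0.0]))

def gaussian_blur(u, sigma):
    x = np.arange(-25, 26)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return np.convolve(u, k / k.sum(), mode="same")

# Two observations of the same step edge at different focus settings.
scene = np.concatenate([np.zeros(60), np.ones(60)])
sharp = gaussian_blur(scene, 1.0)   # closer to focus
blurry = gaussian_blur(scene, 3.0)  # further from focus

# Forward-diffuse the sharper image; the best-matching diffusion time
# encodes the relative blur (variance 3^2 - 1^2 = 8, i.e. time 4 = 20 steps).
u, best_t, best_err = sharp.copy(), 0, np.inf
for t in range(1, 200):
    u = heat_step(u)
    err = np.sum((u[20:-20] - blurry[20:-20]) ** 2)  # compare away from borders
    if err < best_err:
        best_t, best_err = t, err

print(best_t)  # ~ sigma_rel^2 / (2 * dt) = 8 / 0.4 = 20 steps
```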
Affiliation(s)
- Paolo Favaro
- School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK.
170. Liu C, Szeliski R, Bing Kang S, Zitnick CL, Freeman WT. Automatic estimation and removal of noise from a single image. IEEE Transactions on Pattern Analysis and Machine Intelligence 2008; 30:299-314. [PMID: 18084060] [DOI: 10.1109/tpami.2007.1176]
Abstract
Image denoising algorithms often assume an additive white Gaussian noise (AWGN) process that is independent of the actual RGB values. Such approaches are not fully automatic and cannot effectively remove color noise produced by today's CCD digital cameras. In this paper, we propose a unified framework for two tasks: automatic estimation and removal of color noise from a single image using piecewise smooth image models. We introduce the noise level function (NLF), which is a continuous function describing the noise level as a function of image brightness. We then estimate an upper bound of the real noise level function by fitting a lower envelope to the standard deviations of per-segment image variances. For denoising, the chrominance of color noise is significantly removed by projecting pixel values onto a line fit to the RGB values in each segment. Then, a Gaussian conditional random field (GCRF) is constructed to obtain the underlying clean image from the noisy input. Extensive experiments are conducted to test the proposed algorithm, which is shown to outperform state-of-the-art denoising algorithms.
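The per-segment statistic behind the NLF estimate can be illustrated on synthetic data: segments of constant brightness with brightness-dependent noise, whose per-segment standard deviations trace out the noise level function. The NLF form below is a made-up example, not a real camera's.

```python
import numpy as np

rng = np.random.default_rng(6)

def noise_std(b):
    # Hypothetical ground-truth noise level function: std grows with brightness.
    return 0.02 + 0.08 * b

# 200 piecewise-smooth "segments", each with a constant true brightness;
# the per-segment standard deviation then samples the NLF at that brightness.
seg_bright = rng.uniform(0.0, 1.0, 200)
seg_std = np.array([rng.normal(b, noise_std(b), 100).std() for b in seg_bright])

# Sorting by brightness, the envelope of per-segment stds rises with
# brightness, which is the signal a lower-envelope fit would extract.
order = np.argsort(seg_bright)
dark, bright = seg_std[order][:20], seg_std[order][-20:]
print(bright.mean() > dark.mean())  # True: noise level grows with brightness
```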
Affiliation(s)
- Ce Liu
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar Street, Cambridge, MA 02139, USA.
171.
Abstract
The echogenicity, echotexture, shape, and contour of a lesion are effective sonographic features for physicians to identify a tumor as either benign or malignant. Automatic contouring for breast tumors in sonography may assist physicians without relevant experience in making correct diagnoses. This study develops an efficient method for automatically detecting contours of breast tumors in sonography. First, a sophisticated preprocessing filter reduces the noise but preserves the shape and contrast of the breast tumor. An adaptive initial contouring method is then performed to obtain an approximate circular contour of the tumor. Finally, deformation-based level set segmentation automatically extracts the precise contours of breast tumors from ultrasound (US) images. The proposed contouring method was evaluated on US images from 118 patients with breast tumors. The contouring results, obtained with computer simulation, reveal that the proposed method always identifies contours similar to those obtained with manual sketching. The proposed method provides robust and fast automatic contouring for breast US images, and could save much of the time required to sketch a precise contour while offering very high stability.
Affiliation(s)
- Yu-Len Huang
- Department of Computer Science and Information Engineering, Tunghai University, Taichung, Taiwan, Republic of China.
172. Loizou CP, Pattichis CS. Despeckle Filtering Algorithms and Software for Ultrasound Imaging. 2008. [DOI: 10.2200/s00116ed1v01y200805ase001]
173. Nonlinear Systems for Image Processing. 2008. [DOI: 10.1016/s1076-5670(08)00603-4]
174.
175. Retrospective Shading Correction of Confocal Laser Scanning Microscopy Beef Images for Three-Dimensional Visualization. Food Bioprocess Tech 2007. [DOI: 10.1007/s11947-007-0032-z]
176. Local Adaptivity to Variable Smoothness for Exemplar-Based Image Regularization and Representation. Int J Comput Vis 2007. [DOI: 10.1007/s11263-007-0096-2]
177. Jerbi T, Burdin V, Ghorbel F, Jacq JJ. Modified data fidelity speed in anisotropic diffusion. 2007; 2007:804-7. [PMID: 18002078] [DOI: 10.1109/iembs.2007.4352412]
Abstract
In this paper, we use anisotropic diffusion in a level set framework for low-level segmentation of necrotic femoral heads. Our segmentation is based on three speed terms, the first of which includes an adaptive estimation of the contrast level. We use the entropy for evaluating our diffusion on synthetic 3D data. We notice that using the data fidelity term in the last iterations excessively penalizes the diffusion process. To provide better segmentation results, we propose a modification of the data fidelity speed: we build its reference data term from previous iterations' results, hence lessening the influence of the initial noisy data.
Affiliation(s)
- T Jerbi
- Ecole Nationale des Sciences de l'Informatique, La Manouba University, Tunis, Tunisia.
178. Barmpoutis A, Vemuri BC, Shepherd TM, Forder JR. Tensor splines for interpolation and approximation of DT-MRI with applications to segmentation of isolated rat hippocampi. IEEE Transactions on Medical Imaging 2007; 26:1537-1546. [PMID: 18041268] [PMCID: PMC2759271] [DOI: 10.1109/tmi.2007.903195]
Abstract
In this paper, we present novel algorithms for statistically robust interpolation and approximation of diffusion tensors, which are symmetric positive definite (SPD) matrices, and use them in developing a significant extension to an existing probabilistic algorithm for scalar field segmentation, in order to segment diffusion tensor magnetic resonance imaging (DT-MRI) datasets. Using the Riemannian metric on the space of SPD matrices, we present a novel and robust higher order (cubic) continuous tensor product of B-splines algorithm to approximate the SPD diffusion tensor fields. The resulting approximations are appropriately dubbed tensor splines. Next, we segment the diffusion tensor field by jointly estimating the label field (assigned to each voxel), which is modeled by a Gauss-Markov measure field (GMMF), and the parameters of each smooth tensor spline model representing the labeled regions. Results of interpolation, approximation, and segmentation are presented for synthetic data and real diffusion tensor fields from an isolated rat hippocampus, along with validation. We also present comparisons of our algorithms with existing methods and show significantly improved results in the presence of noise as well as outliers.
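Interpolation that keeps tensors SPD can be sketched with the log-Euclidean geodesic, a simpler stand-in for the affine-invariant Riemannian scheme the paper uses; both keep interpolants SPD and interpolate the determinant geometrically, avoiding the swelling of straight linear interpolation. The example tensors are illustrative.

```python
import numpy as np

def spd_log(S):
    w, V = np.linalg.eigh(S)          # SPD: real eigen-decomposition
    return (V * np.log(w)) @ V.T

def spd_exp(L):
    w, V = np.linalg.eigh(L)
    return (V * np.exp(w)) @ V.T

def interp_spd(A, B, t):
    """Log-Euclidean geodesic between SPD tensors: exp((1-t) log A + t log B).
    A stand-in for the paper's affine-invariant Riemannian interpolation."""
    return spd_exp((1 - t) * spd_log(A) + t * spd_log(B))

A = np.diag([2.0, 0.5])                    # tensor elongated along x
B = np.diag([0.5, 2.0])                    # tensor elongated along y
M = interp_spd(A, B, 0.5)

# Midpoint stays SPD, and det(M) = sqrt(det A * det B) = 1 (no swelling).
print(np.linalg.eigvalsh(M))
```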
Affiliation(s)
- Angelos Barmpoutis
- Department of Computer and Information Science and Engineering (CISE), University of Florida, Gainesville, FL 32611 USA
- Baba C. Vemuri
- Department of Computer and Information Science and Engineering (CISE), University of Florida, Gainesville, FL 32611 USA
- Timothy M. Shepherd
- Department of Radiology, University of Florida, Gainesville, FL 32611 USA
- John R. Forder
- Department of Radiology, University of Florida, Gainesville, FL 32611 USA
179. Pereira MC, Kassab F. An electrical stimulator for sensory substitution. Conf Proc IEEE Eng Med Biol Soc 2007; 2006:6016-20. [PMID: 17946735] [DOI: 10.1109/iembs.2006.260380]
Abstract
This work presents an electrical stimulator system for use in sensory substitution (SS), as a mobility aid for visually impaired people. The system passes visual information via cutaneous stimulation and consists of a webcam, a PC, dedicated hardware to generate stimuli, and a 15 x 20 electrode matrix. The same system can also be used in psychophysical and somesthetic research, or even for SS for deaf people, by changing the input signal from a camera to a microphone and adapting its control software. Circuits for pixel addressing, signal generation, and switching are described, as well as the software involved in generating a pulse train, which configures the stimulus patterns.
Affiliation(s)
- Mauro C Pereira
- Dept. of Mechatronics Eng., Univ. Catolica Dom Bosco, Campo Grande, MS, Brazil.
180
Yerly J, Hu Y, Jones SM, Martinuzzi RJ. A two-step procedure for automatic and accurate segmentation of volumetric CLSM biofilm images. J Microbiol Methods 2007; 70:424-33. [PMID: 17618700 DOI: 10.1016/j.mimet.2007.05.022]
Abstract
This paper presents a robust two-step segmentation procedure for the study of biofilm structure. Without user intervention, the procedure segments volumetric biofilm images generated by confocal laser scanning microscopy (CLSM). The automated procedure implements an anisotropic diffusion filter as a preprocessing step and a 3D extension of the Otsu method for thresholding. Applying the anisotropic diffusion filter to even low-contrast CLSM images significantly improves the segmentation obtained with the 3D Otsu method. A comparison of results for several CLSM data sets demonstrated that the accuracy of this procedure, unlike that of the objective threshold selection (OTS) algorithm, is not affected by biofilm coverage levels, filling an important gap in the development of a robust and objective segmentation procedure. The effectiveness of the segmentation procedure is shown for CLSM images containing different bacterial strains. The procedure's ability to handle image saturation relaxes the constraints on user-selected gain and intensity settings of a CLSM. This two-step procedure therefore provides automatic and accurate segmentation of biofilms that is independent of biofilm coverage levels and, in turn, lays a solid foundation for objective analysis of biofilm structural parameters.
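For reference, the Otsu criterion that this entry extends to 3D can be sketched in its classical 1D-histogram form: exhaustively search for the threshold maximizing between-class variance. This is a simplified illustration, not the authors' 3D method; the `bins` parameter and function name are mine.

```python
def otsu_threshold(values, bins=256):
    # exhaustive search for the threshold maximizing between-class variance
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    total_mean = sum((i + 0.5) * h for i, h in enumerate(hist)) / total
    best_t, best_var = 0, -1.0
    w0 = cum0 = 0.0
    for t in range(bins - 1):
        w0 += hist[t]
        cum0 += (t + 0.5) * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = cum0 / w0, (total * total_mean - cum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return lo + (best_t + 1) * width  # threshold in original intensity units
```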
Affiliation(s)
- Jerome Yerly
- Electrical and Computer Engineering, Schulich School of Engineering, University of Calgary, 2500 University Drive NW, Calgary, AB, Canada T2N 1N4
181
Saylor JR, Sivasubramanian NA. Edge detection methods applied to the analysis of spherical raindrop images. Applied Optics 2007; 46:5352-67. [PMID: 17676151 DOI: 10.1364/ao.46.005352]
Abstract
Optical imaging of raindrops provides important information on the statistical distribution of raindrop size and raindrop shape. These distributions are critical for extracting rainfall rates from both dual- and single-polarization radar signals. A large number of raindrop images are required to obtain these statistics, necessitating automatic processing of the imagery. The accuracy of the measured drop size depends critically on the characteristics of the digital image processing algorithm used to identify and size the drop. Additionally, the algorithm partially determines the effective depth of field (dof) of the camera/image-processing system. Because a large number of drop images are required to obtain accurate statistics, a large dof is needed, which tends to increase errors in drop size measurement. This trade-off between accuracy and dof is also affected by the algorithm used to identify the drop outline. In this paper, eight edge detection algorithms are investigated and compared to determine which is best suited for accurately extracting the drop outline and measuring the diameter of an imaged raindrop while maintaining a relatively large dof. The algorithm that overall gave the largest dof along with the most accurate estimate of the drop size was the Hueckel algorithm [J. Assoc. Comput. Mach. 20, 634 (1973)].
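As a point of comparison for the edge detectors surveyed in this entry, a minimal gradient-magnitude (Sobel) detector can be sketched as follows. This is a generic illustration, not one of the eight algorithms the authors evaluate.

```python
def sobel_magnitude(img):
    # img: 2D list of intensities; returns gradient magnitude (zero border)
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # horizontal and vertical Sobel responses
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

A drop outline would then be traced along the ridge of this magnitude map; the choice of ridge-following and thresholding rule is what distinguishes the compared algorithms.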
Affiliation(s)
- J R Saylor
- Department of Mechanical Engineering, Clemson University, Clemson, South Carolina 29634, USA.
182
Pock T, Pock M, Bischof H. Algorithmic differentiation: application to variational problems in computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence 2007; 29:1180-93. [PMID: 17496376 DOI: 10.1109/tpami.2007.1044]
Abstract
Many vision problems can be formulated as the minimization of appropriate energy functionals. These energy functionals are usually minimized using the calculus of variations (the Euler-Lagrange equation). Once the Euler-Lagrange equation has been determined, it needs to be discretized in order to implement it on a digital computer. This is not a trivial task and is, moreover, error-prone. In this paper, we propose a flexible alternative. We discretize the energy functional and, subsequently, apply the mathematical concept of algorithmic differentiation to directly derive algorithms that implement the energy functional's derivatives. This approach has several advantages: First, the computed derivatives are exact with respect to the implementation of the energy functional. Second, it is basically straightforward to compute second-order derivatives and, thus, the Hessian matrix of the energy functional. Third, algorithmic differentiation is a process which can be automated. We demonstrate this novel approach on three representative vision problems (namely, denoising, segmentation, and stereo) and show that state-of-the-art results are obtained with little effort.
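The discretize-then-differentiate idea can be illustrated with forward-mode dual numbers on a tiny 1D denoising energy (a didactic sketch with my own names; the authors' implementation and energy functionals are not reproduced):

```python
class Dual:
    # forward-mode AD: value and derivative propagated together
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def energy(u, f, lam):
    # discretized denoising energy: data term + quadratic smoothness term
    data = sum((ui - fi) * (ui - fi) for ui, fi in zip(u, f))
    smooth = sum((u[i + 1] - u[i]) * (u[i + 1] - u[i]) for i in range(len(u) - 1))
    return data + lam * smooth

def grad(u, f, lam):
    # one forward pass per component: seed dot=1 on u[k]
    g = []
    for k in range(len(u)):
        du = [Dual(v, 1.0 if i == k else 0.0) for i, v in enumerate(u)]
        g.append(energy(du, f, lam).dot)
    return g
```

The gradient is exact with respect to this discrete energy, with no hand-derived Euler-Lagrange equation; reverse-mode AD would compute it in a single pass.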
Affiliation(s)
- Thomas Pock
- Institute for Computer Graphics and Vision, Graz University of Technology, Graz, Austria.
183
Wang Y, Zhang L, Li P. Local variance-controlled forward-and-backward diffusion for image enhancement and noise reduction. IEEE Transactions on Image Processing 2007; 16:1854-64. [PMID: 17605383 DOI: 10.1109/tip.2007.899002]
Abstract
To improve the signal-to-noise ratio (SNR) and contrast-to-noise ratio, this paper introduces a local variance-controlled forward-and-backward (LVCFAB) diffusion algorithm for edge enhancement and noise reduction. An alternative FAB diffusion algorithm is proposed that behaves better than other existing FAB diffusion approaches. Furthermore, two distinct discontinuity measures and the alternative FAB diffusion are incorporated into the LVCFAB diffusion algorithm, where the joint use of the two measures has a complementary effect in preserving edge features in digital images. The LVC mechanism adaptively modifies the degree of diffusion at each image location, depending on both the local gradient and the local inhomogeneity. Qualitative experiments on general digital images and magnetic resonance images show significant improvements when the LVCFAB diffusion algorithm is used instead of existing anisotropic diffusion and previous FAB diffusion algorithms for enhancing edge features and improving image contrast. Quantitative analyses based on peak SNR confirm the superiority of the proposed LVCFAB diffusion algorithm.
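A forward-and-backward diffusivity can be sketched as a positive smoothing lobe at small gradients minus a negative (sharpening) lobe around a backward threshold. The functional form and parameter values below are illustrative only; they are not the paper's LVC-controlled coefficients.

```python
import math

def fab_diffusivity(s, kf=10.0, kb=40.0, w=10.0, alpha=0.5):
    # forward term smooths small gradient magnitudes s; the backward
    # (negative) term enhances gradients near kb. Illustrative parameters.
    forward = math.exp(-((s / kf) ** 2))
    backward = alpha * math.exp(-(((s - kb) / w) ** 2))
    return forward - backward
```

Where the returned value is negative, the diffusion runs backward and sharpens the edge; the paper's contribution is to gate this behavior with local-variance measures so that noise is not amplified.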
Affiliation(s)
- Yi Wang
- State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan, China
184
Zhou J, Zhu H, Shu H, Luo L. A generalized diffusion based inter-iteration nonlinear bilateral filtering scheme for PET image reconstruction. Comput Med Imaging Graph 2007; 31:447-57. [PMID: 17574817 DOI: 10.1016/j.compmedimag.2007.04.003]
Abstract
In this paper, a new inter-iteration filtering scheme based on diffusion maximum a posteriori (MAP) estimation for positron emission tomography (PET) image reconstruction is proposed. This is achieved by gaining insight into the classical MAP iteration (e.g., the 'one-step-late' (OSL) algorithm) and several well-established approximations to the diffusion process. We show that this technique offers additional insight and sufficient flexibility for further investigation of reconstruction algorithms based on nonlinear filters. In particular, we suggest the bilateral filter as an application in which edge-preserving smoothing is readily obtained by combining range and domain filters. The feasibility and efficiency of the proposed method are verified in experiments on both computer-simulated and real clinical data.
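The range-and-domain combination the entry exploits is the standard bilateral filter. A minimal 1D version (a generic sketch, not the authors' inter-iteration PET scheme; parameter names are mine) looks like this:

```python
import math

def bilateral_1d(signal, sigma_d=2.0, sigma_r=0.5, radius=4):
    # weight = domain (spatial) Gaussian * range (intensity) Gaussian,
    # so samples across a strong intensity edge get negligible weight
    out = []
    for i, si in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_d ** 2)
                         - ((signal[j] - si) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```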
Affiliation(s)
- Jian Zhou
- Laboratory of Image Science and Technology, Southeast University, China.
185
Boulanger J, Kervrann C, Bouthemy P. Space-time adaptation for patch-based image sequence restoration. IEEE Transactions on Pattern Analysis and Machine Intelligence 2007; 29:1096-102. [PMID: 17431307 DOI: 10.1109/tpami.2007.1064]
Abstract
We present a novel space-time patch-based method for image sequence restoration. We propose an adaptive statistical estimation framework based on the local analysis of the bias-variance trade-off. At each pixel, the space-time neighborhood is adapted to improve the performance of the proposed patch-based estimator. The proposed method is unsupervised and requires no motion estimation. Nevertheless, it can also be combined with motion estimation to cope with very large displacements due to camera motion. Experiments show that this method is able to drastically improve the quality of highly corrupted image sequences. Quantitative evaluations on standard artificially noise-corrupted image sequences demonstrate that our method outperforms other recent competitive methods. We also report convincing results on real noisy image sequences.
Affiliation(s)
- Jérôme Boulanger
- Institut National de la Recherche Agronomique, UR 341 Mathématiques et informatique appliquées, F-78352 Jouy-en-Josas, France.
186
Duarte-Carvajalino JM, Castillo PE, Velez-Reyes M. Comparative study of semi-implicit schemes for nonlinear diffusion in hyperspectral imagery. IEEE Transactions on Image Processing 2007; 16:1303-14. [PMID: 17491461 DOI: 10.1109/tip.2007.894266]
Abstract
Nonlinear diffusion has been successfully employed over the past two decades to enhance images by reducing undesirable intensity variability within the objects in the image, while enhancing the contrast of the boundaries (edges) in scalar and, more recently, in vector-valued images, such as color, multispectral, and hyperspectral imagery. In this paper, we show that nonlinear diffusion can improve the classification accuracy of hyperspectral imagery by reducing the spatial and spectral variability of the image while preserving the boundaries of the objects. We also show that semi-implicit schemes can significantly speed up the evolution of the nonlinear diffusion equation relative to traditional explicit schemes.
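The speedup argument rests on the fact that a semi-implicit step solves a tridiagonal system and remains stable for arbitrarily large time steps, whereas an explicit step is stable only for small ones. A 1D linear-diffusion sketch (my own simplification; the paper treats nonlinear diffusion on hyperspectral data):

```python
def explicit_step(u, tau):
    # u_i + tau * (u_{i-1} - 2 u_i + u_{i+1}), Neumann ends; unstable for large tau
    n = len(u)
    return [u[i] + tau * (u[max(i-1, 0)] - 2*u[i] + u[min(i+1, n-1)]) for i in range(n)]

def semi_implicit_step(u, tau):
    # solve (I - tau*A) v = u with a tridiagonal (Thomas) solve; stable for any tau
    n = len(u)
    a = [-tau] * n            # sub-diagonal
    b = [1 + 2 * tau] * n     # main diagonal
    c = [-tau] * n            # super-diagonal
    b[0] = b[-1] = 1 + tau    # Neumann boundary rows
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], u[0] / b[0]
    for i in range(1, n):     # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (u[i] - a[i] * dp[i - 1]) / m
    v = [0.0] * n
    v[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        v[i] = dp[i] - cp[i] * v[i + 1]
    return v
```

With tau = 10 the explicit step oscillates and diverges, while the semi-implicit step stays bounded and conserves total intensity; the paper's schemes (e.g. AOS-type splittings) apply the same principle per spatial direction.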
Affiliation(s)
- Julio M Duarte-Carvajalino
- Laboratory of Applied Remote Sensing and Image Processing (LARSIP), University of Puerto Rico, Mayagüez, PR 00681-9048, USA.
187
Tosun D, Prince JL. Cortical surface alignment using geometry driven multispectral optical flow. Inf Process Med Imaging 2005; 19:480-92. [PMID: 17354719 DOI: 10.1007/11505730_40]
Abstract
Spatial normalization is frequently used to map data to a standard coordinate system by removing inter-subject morphological differences, thereby allowing group analysis to be carried out. In this paper, we analyze the geometry of the cortical surface using two shape measures that are key to distinguishing sulcal and gyral regions from each other. Then a multispectral optical flow (OF) warping procedure is described that aims to align the shape-measure maps of an atlas with those of a subject brain. The variational problem of estimating the OF field is solved in a Euclidean framework. After warping one brain according to the estimated OF, we obtain better structural and functional alignment across multiple brains.
Affiliation(s)
- Duygu Tosun
- Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA.
188

189
Chen Y, Raheja A. Wavelet lifting for speckle noise reduction in ultrasound images. Conf Proc IEEE Eng Med Biol Soc 2007; 2005:3129-32. [PMID: 17282907 DOI: 10.1109/iembs.2005.1617138]
Abstract
In this paper, a wavelet-domain method for speckle noise filtering is presented. It uses a non-decimated wavelet transform and a Generalized Cross Validation thresholding technique. The spatial correlation of ultrasound speckle noise is broken by multiresolution analysis. Level-dependent thresholding removes noise in the wavelet domain based on automatic estimation of the noise energy in each subband. The efficacy of this filter is demonstrated on both simulated and real medical ultrasound images; the results are promising and outperform other de-noising approaches. A single adjustable parameter can be used by medical experts to balance preservation of relevant image features against suppression of speckle noise. The lifting scheme, both as a way of constructing new biorthogonal wavelets from existing ones and as a way of performing the wavelet transform, is studied here to improve the performance of wavelet de-noising.
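The lifting scheme mentioned in the last sentence can be illustrated with the Haar predict/update steps plus soft thresholding of the detail coefficients. This is a minimal single-level sketch, not the paper's non-decimated transform or its GCV threshold selection.

```python
def haar_lift(x):
    # one lifting step of the Haar wavelet: predict, then update
    evens, odds = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(evens, odds)]          # predict
    approx = [e + d / 2 for e, d in zip(evens, detail)]    # update (preserves mean)
    return approx, detail

def haar_unlift(approx, detail):
    # invert by running the lifting steps backward
    evens = [a - d / 2 for a, d in zip(approx, detail)]
    odds = [e + d for e, d in zip(evens, detail)]
    out = []
    for e, o in zip(evens, odds):
        out.extend([e, o])
    return out

def soft(v, t):
    # soft threshold: shrink toward zero, kill small coefficients
    return (abs(v) - t) * (1 if v > 0 else -1) if abs(v) > t else 0.0

def denoise(x, t):
    a, d = haar_lift(x)
    return haar_unlift(a, [soft(v, t) for v in d])
```

With threshold 0 the transform reconstructs the input exactly; in practice the transform is iterated over several levels with a level-dependent threshold, as the paper does.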
Affiliation(s)
- Yuan Chen
- Department of Computer Science, California State Polytechnic University, Pomona, 3801 W. Temple Avenue, Pomona, CA 91768, USA
190
Kim HS, Yoo JM, Park MS, Dinh TN, Lee GS. An anisotropic diffusion based on diagonal edges. Proc Int Conf Advanced Communication Technology (ICACT) 2007. [DOI: 10.1109/icact.2007.358377]
191
Takeda H, Farsiu S, Milanfar P. Kernel regression for image processing and reconstruction. IEEE Transactions on Image Processing 2007; 16:349-66. [PMID: 17269630 DOI: 10.1109/tip.2006.888330]
Abstract
In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples.
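The zeroth-order special case of kernel regression is the Nadaraya-Watson estimator, a kernel-weighted average of nearby samples. A minimal 1D sketch follows; the paper's locally adaptive, higher-order variants (and the reduction of the bilateral filter to this framework) are not reproduced here.

```python
import math

def nadaraya_watson(xs, ys, x, h=1.0):
    # zeroth-order kernel regression: Gaussian-kernel-weighted average
    ws = [math.exp(-((x - xi) ** 2) / (2 * h * h)) for xi in xs]
    return sum(w * y for w, y in zip(ws, ys)) / sum(ws)
```

Image denoising corresponds to evaluating such an estimator at every pixel; choosing data-adaptive kernels recovers bilateral-like behavior.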
Affiliation(s)
- Hiroyuki Takeda
- Electrical Engineering Department, University of California, Santa Cruz 95064, USA.
192
Zhang F, Yoo YM, Koh LM, Kim Y. Nonlinear diffusion in Laplacian pyramid domain for ultrasonic speckle reduction. IEEE Transactions on Medical Imaging 2007; 26:200-11. [PMID: 17304734 DOI: 10.1109/tmi.2006.889735]
Abstract
A new speckle reduction method, i.e., Laplacian pyramid-based nonlinear diffusion (LPND), is proposed for medical ultrasound imaging. With this method, speckle is removed by nonlinear diffusion filtering of bandpass ultrasound images in Laplacian pyramid domain. For nonlinear diffusion in each pyramid layer, a gradient threshold is automatically determined by a variation of median absolute deviation (MAD) estimator. The performance of the proposed LPND method has been compared with that of other speckle reduction methods, including the recently proposed speckle reducing anisotropic diffusion (SRAD) and nonlinear coherent diffusion (NCD). In simulation and phantom studies, an average gain of 1.55 dB and 1.34 dB in contrast-to-noise ratio was obtained compared to SRAD and NCD, respectively. The visual comparison of despeckled in vivo ultrasound images from liver and carotid artery shows that the proposed LPND method could effectively preserve edges and detailed structures while thoroughly suppressing speckle. These preliminary results indicate that the proposed speckle reduction method could improve image quality and the visibility of small structures and fine details in medical ultrasound imaging.
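The MAD-based scale estimate used to set the gradient threshold can be sketched as follows: a generic median-absolute-deviation estimator with the standard Gaussian-consistency factor (the paper's pyramid-specific variation is not reproduced).

```python
def mad_sigma(values):
    # robust noise scale: median absolute deviation about the median,
    # scaled by 1.4826 for consistency with a Gaussian standard deviation
    s = sorted(values)
    n = len(s)
    med = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    dev = sorted(abs(v - med) for v in values)
    mad = dev[n // 2] if n % 2 else 0.5 * (dev[n // 2 - 1] + dev[n // 2])
    return 1.4826 * mad
```

Unlike the sample standard deviation, this estimate is barely affected by a small fraction of large gradient values (i.e., true edges), which is why it is a popular automatic choice for diffusion thresholds.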
Affiliation(s)
- Fan Zhang
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore.
193
Trzasko J, Manduca A, Borisch E. Robust kernel methods for sparse MR image reconstruction. Med Image Comput Comput Assist Interv 2007; 10:809-816. [PMID: 18051133 DOI: 10.1007/978-3-540-75757-3_98]
Abstract
A major challenge in contemporary magnetic resonance imaging (MRI) lies in providing the highest resolution exam possible in the shortest acquisition period. Recently, several authors have proposed the use of L1-norm minimization for the reconstruction of sparse MR images from highly undersampled k-space data. Despite promising results demonstrating the ability to accurately reconstruct images sampled at rates significantly below the Nyquist criterion, the extensive computational complexity associated with the existing framework limits its clinical practicality. In this work, we propose an alternative recovery framework based on homotopic approximation of the L0-norm and extend the reconstruction problem to a multiscale formulation. In addition to several interesting theoretical properties, practical implementation of this technique effectively reduces to a simple iterative alternation between bilateral filtering and projection of the measured k-space sample set that can be computed in a matter of seconds on a standard PC.
Affiliation(s)
- Joshua Trzasko
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine, Rochester, MN, USA.
194
Zeng G, Paris S, Quan L, Sillion F. Accurate and scalable surface representation and reconstruction from images. IEEE Transactions on Pattern Analysis and Machine Intelligence 2007; 29:141-58. [PMID: 17108389 DOI: 10.1109/tpami.2007.250605]
Abstract
We introduce a new surface representation method, called patchwork, to extend three-dimensional surface reconstruction capabilities from multiple images. A patchwork is the combination of several patches that are built one by one. This design potentially allows for the reconstruction of an object with arbitrarily large dimensions while preserving a fine level of detail. We formally demonstrate that this strategy leads to a spatial complexity independent of the dimensions of the reconstructed object and to a time complexity that is linear with respect to the object area. The former property ensures that we never run out of storage and the latter means that reconstructing an object can be done in a reasonable amount of time. In addition, we show that the patchwork representation handles open and closed surfaces equivalently, whereas most existing approaches are limited to a specific scenario, an open or closed surface, but not both. The patchwork concept is orthogonal to the method chosen for surface optimization, and most existing optimization techniques can be cast into this framework. To illustrate the possibilities offered by this approach, we propose two applications that demonstrate how our method dramatically extends a recent accurate graph technique based on minimal cuts. We first revisit the popular carving techniques, which results in a well-posed reconstruction problem that still enjoys the tractability of voxel space. We also show how several image-driven criteria can be advantageously combined to achieve a finely detailed geometry by surface propagation. These two examples demonstrate the versatility and flexibility of patchwork reconstruction. They underscore other properties inherited from the patchwork representation: although some min-cut methods have difficulty handling complex shapes (e.g., with complex topologies), through the patchwork representation they can naturally manipulate any geometry while preserving their intrinsic qualities. The above properties of patchwork representation and reconstruction are demonstrated with real image sequences.
Affiliation(s)
- Gang Zeng
- Department of Computer Science, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong, China.
195
Antoniadis A. Wavelet methods in statistics: some recent developments and their applications. Statistics Surveys 2007. [DOI: 10.1214/07-ss014]
196
Wang YL. Computational restoration of fluorescence images: noise reduction, deconvolution, and pattern recognition. Methods Cell Biol 2007; 81:435-45. [PMID: 17519178 DOI: 10.1016/s0091-679x(06)81020-4]
Affiliation(s)
- Yu-Li Wang
- Department of Physiology, University of Massachusetts Medical School, Worcester, Massachusetts 01605, USA
197

198
Grady L. Random walks for image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 2006; 28:1768-83. [PMID: 17063682 DOI: 10.1109/tpami.2006.233]
Abstract
A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs.
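The probabilities in this entry solve a combinatorial Dirichlet problem: each unlabeled node's probability is the weighted average of its neighbors'. A minimal sketch on a weighted path graph, solved by Gauss-Seidel sweeps, is shown below; this is illustrative only (Grady solves the sparse linear system directly on arbitrary graphs and in arbitrary dimension).

```python
def random_walker_1d(weights, seeds):
    # weights[i] connects node i and i+1; seeds: {node: probability of label A}
    # unlabeled nodes satisfy the discrete Laplace equation (harmonic values)
    n = len(weights) + 1
    p = [seeds.get(i, 0.5) for i in range(n)]
    for _ in range(5000):  # Gauss-Seidel sweeps on the Dirichlet problem
        for i in range(n):
            if i in seeds:
                continue
            wsum = psum = 0.0
            if i > 0:
                wsum += weights[i - 1]
                psum += weights[i - 1] * p[i - 1]
            if i < n - 1:
                wsum += weights[i]
                psum += weights[i] * p[i + 1]
            p[i] = psum / wsum
    return p  # p[i] = probability a walker from i reaches the label-A seed first
```

Assigning each node the label with the largest probability yields the segmentation; low edge weights (strong image gradients) act as barriers to the walker.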
Affiliation(s)
- Leo Grady
- Siemens Corporate Research, Department of Imaging and Visualization, Princeton, NJ 08540, USA.
199
Zhu H, Shu H, Zhou J, Toumoulin C, Luo L. Image reconstruction for positron emission tomography using fuzzy nonlinear anisotropic diffusion penalty. Med Biol Eng Comput 2006; 44:983-97. [PMID: 17061117 PMCID: PMC2235198 DOI: 10.1007/s11517-006-0115-4]
Abstract
Iterative algorithms such as maximum-likelihood expectation maximization (ML-EM) have become the standard for reconstruction in emission computed tomography. However, such algorithms are sensitive to noise, so the reconstruction begins to degrade once the number of iterations exceeds a certain value. In this paper, we investigate a new iterative algorithm for penalized-likelihood image reconstruction that uses fuzzy nonlinear anisotropic diffusion (AD) as a penalty function. The proposed algorithm does not suffer from the same problem as the ML-EM algorithm and converges to a low-noise solution even when the iteration number is high. Fuzzy reasoning, instead of a nonnegative monotonically decreasing function, is used to calculate the diffusion coefficients that control the whole diffusion; the diffusion strength is thus governed by fuzzy rules expressed in linguistic form. The proposed method exploits the advantages of fuzzy set theory in dealing with uncertainty and of nonlinear AD in removing noise while preserving edges. Quantitative analysis shows that the proposed algorithm produces better reconstructions than ML-EM, ordered-subsets EM (OS-EM), Gaussian-MAP, MRP, and TV-EM.
Affiliation(s)
- Hongqing Zhu
- Laboratory of Image Science and Technology, Department of Computer Science and Engineering, Southeast University, Nanjing, People's Republic of China.
200
Kervrann C, Boulanger J. Optimal spatial adaptation for patch-based image denoising. IEEE Transactions on Image Processing 2006; 15:2866-78. [PMID: 17022255 DOI: 10.1109/tip.2006.877529]
Abstract
A novel adaptive, patch-based approach is proposed for image denoising and representation. The method is based on a pointwise selection of small image patches of fixed size in a variable neighborhood of each pixel. Our contribution is to associate with each pixel the weighted sum of data points within an adaptive neighborhood, in a manner that balances the accuracy of approximation and the stochastic error at each spatial position. The method is general and can be applied under the assumption that repetitive patterns exist in a local neighborhood of a point. By introducing spatial adaptivity, we extend the work described earlier by Buades et al., which can be considered an extension of bilateral filtering to image patches. Finally, we propose a nearly parameter-free algorithm for image denoising. The method is applied to both artificially corrupted (white Gaussian noise) and real images, and its performance is very close to, and in some cases surpasses, that of previously published denoising methods.
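The patch-based weighting underlying this method, in the spirit of Buades et al.'s NL-means but without the paper's adaptive neighborhood selection, can be sketched in 1D as follows; parameter names are mine.

```python
import math

def nlmeans_1d(signal, patch=1, search=5, h=0.5):
    # each sample becomes a weighted mean of samples whose surrounding
    # patches look similar; weights decay with squared patch distance
    n = len(signal)
    out = []
    for i in range(n):
        if i < patch or i + patch >= n:
            out.append(signal[i])  # borders: no full patch, copy through
            continue
        num = den = 0.0
        for j in range(max(patch, i - search), min(n - patch, i + search + 1)):
            d2 = sum((signal[i + k] - signal[j + k]) ** 2
                     for k in range(-patch, patch + 1))
            w = math.exp(-d2 / (h * h))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

Because the weights compare whole patches rather than single intensities, samples on the far side of an edge receive negligible weight and the edge survives the averaging.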