8902
Raghunath N, Faber TL, Suryanarayanan S, Votaw JR. Motion correction of PET brain images through deconvolution: II. Practical implementation and algorithm optimization. Phys Med Biol 2009; 54:813-29. [PMID: 19131667] [DOI: 10.1088/0031-9155/54/3/022]
Abstract
Image quality is significantly degraded by even small amounts of patient motion in very high-resolution PET scanners. When patient motion is known, deconvolution methods can be used to correct the reconstructed image and reduce motion blur. This paper describes the implementation and optimization of an iterative deconvolution method that uses an ordered-subset approach to make it practical and clinically viable. We performed ten separate FDG PET scans using the Hoffman brain phantom and simultaneously measured its motion using the Polaris Vicra tracking system (Northern Digital Inc., Ontario, Canada). The feasibility and effectiveness of the technique were studied by performing scans with different motion and deconvolution parameters. Deconvolution resulted in visually better images and significant improvement as quantified by the Universal Quality Index (UQI) and contrast measures. Finally, the technique was applied to human studies to demonstrate marked improvement. Thus, the deconvolution technique presented here appears promising as a valid alternative to existing motion correction methods for PET. It has the potential to deblur an image from any modality if the causative motion is known and its effect can be represented in a system matrix.
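The paper's ordered-subset implementation is not reproduced here, but the core idea of deconvolving a known motion blur can be illustrated with a plain Richardson-Lucy iteration. This is a minimal 1-D sketch under our own assumptions (a circular convolution model, an integer-offset motion PSF, and a toy two-spot phantom); the function names are ours, not the authors'.

```python
import numpy as np

def motion_psf(shifts, size):
    """Build a 1-D point-spread function from a list of known (integer)
    motion offsets: each offset contributes equal dwell time."""
    psf = np.zeros(size)
    for s in shifts:
        psf[s % size] += 1.0
    return psf / psf.sum()

def richardson_lucy(blurred, psf, n_iter=50):
    """Iterative deconvolution (Richardson-Lucy form).
    Convolution is circular here, implemented via FFTs."""
    psf_f = np.fft.fft(psf)
    est = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        conv = np.real(np.fft.ifft(np.fft.fft(est) * psf_f))
        ratio = blurred / np.maximum(conv, 1e-12)
        corr = np.real(np.fft.ifft(np.fft.fft(ratio) * np.conj(psf_f)))
        est = est * corr          # multiplicative update toward the ML solution
    return est

# Toy example: a two-spot "phantom" blurred by known motion.
truth = np.zeros(64); truth[20] = 1.0; truth[40] = 0.5
psf = motion_psf([0, 1, 2, 3], 64)
blurred = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(psf)))
recovered = richardson_lucy(blurred, psf, n_iter=200)
```

Because the motion trace is measured, the PSF is known exactly, and the multiplicative updates sharpen the estimate back toward the unblurred phantom while conserving total counts.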
Affiliation(s)
- N Raghunath
- Department of Radiology, Emory University Hospital, 1364 Clifton Road, N.E. Atlanta, GA 30322, USA
8903
Faber TL, Raghunath N, Tudorascu D, Votaw JR. Motion correction of PET brain images through deconvolution: I. Theoretical development and analysis in software simulations. Phys Med Biol 2009; 54:797-811. [DOI: 10.1088/0031-9155/54/3/021]
8904
Yu X, Yang EH, Wang H. Down-sampling design in DCT domain with arbitrary ratio for image/video transcoding. IEEE Trans Image Process 2009; 18:75-89. [PMID: 19095520] [DOI: 10.1109/tip.2008.2007761]
Abstract
This paper proposes a design framework for down-sampling compressed images/video with arbitrary ratio in the discrete cosine transform (DCT) domain. In this framework, we first derive a set of DCT-domain down-sampling methods which can be represented by a linear transform with double-sided matrix multiplication (LTDS) in the DCT domain, and show that the set contains a wide range of methods of various complexity and visual quality. Then, for a preselected spatial-domain down-sampling method, we formulate an optimization problem for finding an LTDS that approximates the given spatial-domain method, trading off visual quality against complexity. By modeling the LTDS as a multiple-layer network, a so-called structural learning with forgetting algorithm is then applied to solve the optimization problem. The proposed framework has been applied to discover optimal LTDSs corresponding to a spatial down-sampling method with Butterworth low-pass filtering and bicubic interpolation. Experimental results show that the resulting LTDS achieves a significant reduction in complexity compared with other methods in the literature, with similar visual quality.
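As background for why such an LTDS exists at all: because the orthonormal DCT is linear, any spatial-domain down-sampler of the form Y = A X Aᵀ has an exact double-sided counterpart acting on DCT coefficients. A small numeric sketch (the block-averaging down-sampler and the names U, V are our illustration, not the paper's learned operators):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C @ C.T == identity."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (i + 0.5) * k / n)
    c[0] /= np.sqrt(2.0)
    return c

def averaging_matrix(n_out, n_in):
    """Simple block-averaging down-sampler (n_in a multiple of n_out)."""
    r = n_in // n_out
    a = np.zeros((n_out, n_in))
    for j in range(n_out):
        a[j, j * r:(j + 1) * r] = 1.0 / r
    return a

N, n = 8, 4
C_N, C_n = dct_matrix(N), dct_matrix(n)
A = averaging_matrix(n, N)

# The spatial operator Y = A X A^T becomes, in the DCT domain,
# the double-sided multiplication Y_dct = U X_dct V with:
U = C_n @ A @ C_N.T
V = C_N @ A.T @ C_n.T

rng = np.random.default_rng(0)
X = rng.standard_normal((N, N))
Y_spatial = A @ X @ A.T                 # down-sample in the pixel domain
Y_dct = U @ (C_N @ X @ C_N.T) @ V       # same operation, entirely in DCT domain
Y_from_dct = C_n.T @ Y_dct @ C_n        # inverse DCT back to pixels
```

The paper's point is that U and V can be chosen (learned) to approximate a high-quality spatial filter at much lower cost than transforming back and forth.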
Affiliation(s)
- Xiang Yu
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada.
8905
Choi H, Castleman KR, Bovik AC. Color compensation of multicolor FISH images. IEEE Trans Med Imaging 2009; 28:129-136. [PMID: 19116195] [DOI: 10.1109/tmi.2008.928177]
Abstract
Multicolor fluorescence in situ hybridization (M-FISH) techniques provide color karyotyping that allows simultaneous analysis of numerical and structural abnormalities of whole human chromosomes. Chromosomes are stained combinatorially in M-FISH. By analyzing the intensity combinations of each pixel, all chromosome pixels in an image are classified. Due to the overlap of excitation and emission spectra and the broad sensitivity of image sensors, the obtained images contain crosstalk between the color channels. The crosstalk complicates both visual and automatic image analysis and may eventually affect the classification accuracy in M-FISH. The removal of crosstalk is possible by finding the color compensation matrix, which quantifies the color spillover between channels. However, there exists no simple method of finding the color compensation matrix from multichannel fluorescence images whose specimens are combinatorially hybridized. In this paper, we present a method of calculating the color compensation matrix for multichannel fluorescence images whose specimens are combinatorially stained.
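The correction step itself is linear unmixing: once the color compensation matrix is known, crosstalk is removed by inverting it. The paper's contribution is estimating that matrix from combinatorially stained specimens; the 3x3 spillover values below are made up purely for illustration.

```python
import numpy as np

# Hypothetical 3-channel crosstalk (spillover) matrix: entry [i, j] is the
# fraction of dye j's signal that leaks into channel i.
C = np.array([[1.00, 0.25, 0.05],
              [0.15, 1.00, 0.20],
              [0.02, 0.10, 1.00]])

rng = np.random.default_rng(1)
true_dyes = rng.uniform(0.0, 1.0, size=(3, 1000))  # per-pixel dye abundances
observed = C @ true_dyes                            # channels with crosstalk

# Color compensation: invert the mixing once the matrix is known.
compensated = np.linalg.solve(C, observed)
```

With noiseless data the unmixing is exact; in practice the quality of the compensation is limited by how well C is estimated, which is the problem the paper addresses.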
8906
Vu H, Echigo T, Sagawa R, Yagi K, Shiba M, Higuchi K, Arakawa T, Yagi Y. Detection of contractions in adaptive transit time of the small bowel from wireless capsule endoscopy videos. Comput Biol Med 2009; 39:16-26. [DOI: 10.1016/j.compbiomed.2008.10.005]
8907
Tristán-Vega A, Aja-Fernández S. Design and construction of a realistic DWI phantom for filtering performance assessment. Med Image Comput Comput Assist Interv 2009; 12:951-8. [PMID: 20426080] [DOI: 10.1007/978-3-642-04268-3_117]
Abstract
A methodology for building a realistic phantom for the assessment of filtering performance in Diffusion Weighted Imaging (DWI) is presented. Starting from a real DWI data set, a regularization process is carried out that takes the diffusion model into account. This process leads to a model that accurately preserves the structural characteristics of actual DWI volumes while being regular enough to be considered a noise-free data set, and therefore to serve as a ground truth. We compare our phantom with the kind of simplified phantoms commonly used in the literature (those based on homogeneous cross sections), concluding that the latter may introduce important biases in the quality measures commonly used for filtering performance assessment, and may even lead to erroneous conclusions when comparing different filtering techniques.
8910
Tristán-Vega A, Aja-Fernández S. Joint LMMSE estimation of DWI data for DTI processing. Med Image Comput Comput Assist Interv 2008; 11:27-34. [PMID: 18982586] [DOI: 10.1007/978-3-540-85990-1_4]
Abstract
We propose a new methodology for Linear Minimum Mean Square Error (LMMSE) filtering of Diffusion Weighted Imaging (DWI). We consider each voxel as an N-dimensional vector that comprises all the DWI volumes, and then compute the LMMSE estimator for the whole DWI data set jointly, taking into account the underlying tensor model. Our experiments, both with phantom and real data, show that this is a more convenient approach than the separate processing of each DWI, translating into better noise removal and preservation of structural information. Moreover, our model has a simple algebraic formulation that keeps the overall computational complexity very close to that of the scalar case, and it does not need multiple samples per DWI.
8911
An C, Nguyen TQ. Resource allocation for error resilient video coding over AWGN using optimization approach. IEEE Trans Image Process 2008; 17:2347-2355. [PMID: 19004707] [DOI: 10.1109/tip.2008.2005825]
Abstract
The number of slices for error resilient video coding is jointly optimized with 802.11a-like medium access control and physical layers, which use automatic repeat request and rate-compatible punctured convolutional codes over an additive white Gaussian noise channel, as well as with the channel time allocation for time division multiple access. For error resilient video coding, the relation between the number of slices and coding efficiency is analyzed and formulated as a mathematical model. This model is applied to the joint optimization problem, which is then solved by a convex optimization method such as primal-dual decomposition. We compare the performance of a video communication system that uses the optimal number of slices with one that codes a picture as a single slice. Numerical examples show that the end-to-end distortion of the utility functions can be significantly reduced with the optimal slicing of a picture, especially at low signal-to-noise ratio.
Affiliation(s)
- Cheolhong An
- Electrical and Computer Engineering Department, University of California, San Diego, La Jolla, CA 92093, USA.
8913
Almeida MS, Almeida LB. Wavelet-based separation of nonlinear show-through and bleed-through image mixtures. Neurocomputing 2008. [DOI: 10.1016/j.neucom.2007.12.048]
8914
Agaian S, Danahy E, Panetta K. Logical system representation of images and removal of impulse noise. IEEE Trans Syst Man Cybern A 2008. [DOI: 10.1109/tsmca.2008.2003475]
8915
Park JS, Lee SW. An example-based face hallucination method for single-frame, low-resolution facial images. IEEE Trans Image Process 2008; 17:1806-1816. [PMID: 18784029] [DOI: 10.1109/tip.2008.2001394]
Abstract
This paper proposes a face hallucination method for the reconstruction of high-resolution facial images from single-frame, low-resolution facial images. The proposed method is derived from example-based hallucination methods and morphable face models. First, we propose a recursive error back-projection method to compensate for residual errors, and a region-based reconstruction method to preserve the characteristics of local facial regions. We then define an extended morphable face model, in which an extended face is composed of the high-resolution face interpolated from a given low-resolution face and its original high-resolution equivalent; the extended face is then separated into an extended shape and an extended texture. We performed various hallucination experiments using the MPI, XM2VTS, and KF databases; compared the reconstruction errors, structural similarity index, and recognition rates; and examined the effects of face detection errors and shape estimation errors. The encouraging results demonstrate that the proposed methods can improve the performance of face recognition systems, and in particular can enhance the resolution of single-frame, low-resolution facial images.
Affiliation(s)
- Jeong-Seon Park
- Department of Multimedia, Chonnam National University, Jeollanam-do, Korea.
8916
Bae SH, Juang BH. Multidimensional incremental parsing for universal source coding. IEEE Trans Image Process 2008; 17:1837-1848. [PMID: 18784032] [DOI: 10.1109/tip.2008.2002308]
Abstract
A multidimensional incremental parsing algorithm (MDIP) for multidimensional discrete sources, as a generalization of the Lempel-Ziv coding algorithm, is investigated. It consists of three essential component schemes: maximum decimation matching, a hierarchical structure for multidimensional source coding, and dictionary augmentation. As a counterpart of the longest-match search in the Lempel-Ziv algorithm, two classes of maximum decimation matching are studied. The underlying behavior of the dictionary augmentation scheme for estimating the source statistics is also examined. For an m-dimensional source, m augmentative patches are appended to the dictionary at each coding epoch, which would require the transmission of a substantial amount of information to the decoder; the hierarchical structure of the source coding algorithm resolves this issue by successively incorporating lower-dimensional coding procedures in the scheme. With regard to universal lossy source coders, we propose two distortion functions: the local average distortion and the local minimax distortion with a set of threshold levels for each source symbol. For performance evaluation, we implemented three image compression algorithms based upon the MDIP: one lossless and two lossy. The lossless image compression algorithm does not perform better than Lempel-Ziv-Welch coding, but experimentally shows efficiency in capturing the source structure. The two lossy image compression algorithms are implemented using the two distortion functions, respectively. The algorithm based on the local average distortion is efficient at minimizing the signal distortion, while the one based on the local minimax distortion produces images with good perceptual fidelity compared with other compression algorithms. Our insights inspire future research on feature extraction of multidimensional discrete sources.
Affiliation(s)
- Soo Hyun Bae
- Center for Signal and Image Processing, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332-0250, USA.
8917
Aja-Fernandez S, Niethammer M, Kubicki M, Shenton ME, Westin CF. Restoration of DWI data using a Rician LMMSE estimator. IEEE Trans Med Imaging 2008; 27:1389-403. [PMID: 18815091] [PMCID: PMC2756835] [DOI: 10.1109/tmi.2008.920609]
Abstract
This paper introduces and analyzes a linear minimum mean square error (LMMSE) estimator using a Rician noise model and its recursive version (RLMMSE) for the restoration of diffusion weighted images. A method to estimate the noise level based on local estimations of mean or variance is used to automatically parametrize the estimator. The restoration performance is evaluated using quality indexes and compared to alternative estimation schemes. The overall scheme is simple, robust, fast, and improves estimations. Filtering diffusion weighted magnetic resonance imaging (DW-MRI) with the proposed methodology leads to more accurate tensor estimations. Real and synthetic datasets are analyzed.
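The closed-form estimator at the center of the paper operates on the squared magnitude: with local sample moments E[M²] and E[M⁴] and a known noise level σ, the signal is recovered as Â² = E[M²] − 2σ² + K·(M² − E[M²]). The sketch below follows that form but is our own simplified rendering (naive boundary handling, a hard clip on the gain K, and a synthetic homogeneous test image rather than real DWI):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def rician_lmmse(m, sigma, win=3):
    """LMMSE estimate of the signal A from Rician magnitude data m.
    Local moments are sample means over a win x win sliding window."""
    pad = win // 2
    m2 = np.pad(m ** 2, pad, mode='reflect')
    m4 = np.pad(m ** 4, pad, mode='reflect')
    e_m2 = sliding_window_view(m2, (win, win)).mean(axis=(-1, -2))
    e_m4 = sliding_window_view(m4, (win, win)).mean(axis=(-1, -2))
    var_m2 = np.maximum(e_m4 - e_m2 ** 2, 1e-12)
    gain = np.clip(1.0 - 4.0 * sigma ** 2 * (e_m2 - sigma ** 2) / var_m2,
                   0.0, 1.0)
    a2 = e_m2 - 2.0 * sigma ** 2 + gain * (m ** 2 - e_m2)
    return np.sqrt(np.maximum(a2, 0.0))

# Synthetic check: Rician magnitude of a constant signal A with noise sigma.
rng = np.random.default_rng(0)
A, sigma = 5.0, 1.0
noisy = np.hypot(A + rng.normal(0, sigma, (64, 64)),
                 rng.normal(0, sigma, (64, 64)))
restored = rician_lmmse(noisy, sigma)
```

The −2σ² term removes the Rician bias of the squared magnitude, and the gain K shrinks toward the local mean in homogeneous regions while passing structure through where the local variance is high.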
Affiliation(s)
- Santiago Aja-Fernandez
- Laboratory for Mathematics in Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA 02115, USA
8918
Rahman SMM, Ahmad MO, Swamy MNS. Bayesian wavelet-based image denoising using the Gauss-Hermite expansion. IEEE Trans Image Process 2008; 17:1755-1771. [PMID: 18784025] [DOI: 10.1109/tip.2008.2002163]
Abstract
The probability density functions (PDFs) of the wavelet coefficients play a key role in many wavelet-based image processing algorithms, such as denoising. Conventional PDFs usually have a limited number of parameters that are calculated from the first few moments only. Consequently, such PDFs cannot be made to fit the empirical PDF of the wavelet coefficients of an image very well, and a shrinkage function based on any of these density functions provides substandard denoising performance. To allow the probabilistic model of the image wavelet coefficients to incorporate an appropriate number of parameters that depend on the higher-order moments, a PDF using a series expansion in terms of the Hermite polynomials, which are orthogonal with respect to the standard Gaussian weight function, is introduced. A modification of the series is introduced so that only a finite number of terms need be used to model the image wavelet coefficients, while ensuring that the resulting PDF is non-negative. It is shown that the proposed PDF matches the empirical one better than some of the standard ones, such as the generalized Gaussian or Bessel K-form PDF. A Bayesian image denoising technique is then proposed, wherein the new PDF is exploited to statistically model the subband as well as the local neighboring image wavelet coefficients. Experimental results on several test images demonstrate that the proposed denoising method, in both the subband-adaptive and locally adaptive conditions, outperforms most methods that use PDFs with a limited number of parameters.
Affiliation(s)
- S M Mahbubur Rahman
- Center for Signal Processing and Communications, Department of Electrical and Computer Engineering, Concordia University, Montréal, QC, Canada.
8919
Wang Z, Simoncelli EP. Maximum differentiation (MAD) competition: a methodology for comparing computational models of perceptual quantities. J Vis 2008; 8:8.1-13. [PMID: 18831621] [PMCID: PMC4143340] [DOI: 10.1167/8.12.8]
Abstract
We propose an efficient methodology for comparing computational models of a perceptually discriminable quantity. Rather than comparing model responses to subjective responses on a set of pre-selected stimuli, the stimuli are computer-synthesized so as to optimally distinguish the models. Specifically, given two computational models that take a stimulus as an input and predict a perceptually discriminable quantity, we first synthesize a pair of stimuli that maximize/minimize the response of one model while holding the other fixed. We then repeat this procedure, but with the roles of the two models reversed. Subjective testing on pairs of such synthesized stimuli provides a strong indication of the relative strengths and weaknesses of the two models. Specifically, the model whose extremal stimulus pairs are easier for subjects to discriminate is the better model. Moreover, careful study of the synthesized stimuli may suggest potential ways to improve a model or to combine aspects of multiple models. We demonstrate the methodology for two example perceptual quantities: contrast and image quality.
Affiliation(s)
- Zhou Wang
- Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada.
8920
Miao J, Huo D, Wilson DL. Quantitative image quality evaluation of MR images using perceptual difference models. Med Phys 2008; 35:2541-53. [PMID: 18649487] [DOI: 10.1118/1.2903207]
Abstract
The authors are using a perceptual difference model (Case-PDM) to quantitatively evaluate the image quality of the thousands of test images that can be created when optimizing fast magnetic resonance (MR) imaging strategies and reconstruction techniques. In this validation study, they compared human evaluation of MR images from multiple organs and from multiple image reconstruction algorithms to Case-PDM and similar models. The authors found that Case-PDM compared very favorably to human observers in double-stimulus continuous-quality scale and functional measurement theory studies over a large range of image quality. The Case-PDM threshold for nonperceptible differences in a 2-alternative forced choice study varied with the type of image under study, but was approximately 1.1 for diffuse image effects, providing a rule of thumb. Ordering the image quality evaluation models, we found overall that Case-PDM ≈ IDM (Sarnoff Corporation) ≈ SSIM [Wang et al., IEEE Trans. Image Process. 13, 600-612 (2004)] > mean squared error ≈ NR [Wang et al. (2004), unpublished] > DCTune (NASA) > IQM (MITRE Corporation). The authors conclude that Case-PDM is very useful in MR image evaluation, but that one should probably restrict studies to similar images and similar processing, normally not a limitation in image reconstruction studies.
Affiliation(s)
- Jun Miao
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106, USA
8921
Channappayya SS, Bovik AC, Heath RW. Rate bounds on SSIM index of quantized images. IEEE Trans Image Process 2008; 17:1624-1639. [PMID: 18701399] [DOI: 10.1109/tip.2008.2001400]
Abstract
In this paper, we derive bounds on the structural similarity (SSIM) index as a function of quantization rate for fixed-rate uniform quantization of image discrete cosine transform (DCT) coefficients under the high-rate assumption. The space domain SSIM index is first expressed in terms of the DCT coefficients of the space domain vectors. The transform domain SSIM index is then used to derive bounds on the average SSIM index as a function of quantization rate for uniform, Gaussian, and Laplacian sources. As an illustrative example, uniform quantization of the DCT coefficients of natural images is considered. We show that the SSIM index between the reference and quantized images falls within the bounds for a large set of natural images. Further, we show using a simple example that the proposed bounds could be very useful for rate allocation problems in practical image and video coding applications.
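The quantity being bounded is the standard SSIM index. As a self-contained reference point, here is a single-window (global-statistics) SSIM together with the kind of fixed-step uniform quantization the paper considers; practical evaluations use local windowed statistics, and the constants k1, k2 are the conventional defaults, so treat this as a sketch rather than the paper's exact setup.

```python
import numpy as np

def ssim(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM: the usual luminance/contrast/structure
    product with stabilizing constants c1, c2."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, (32, 32))

# Fixed-step uniform (midpoint) quantization at two step sizes:
q = 16.0
quantized = q * np.floor(img / q) + q / 2
q_fine = 8.0
quantized_fine = q_fine * np.floor(img / q_fine) + q_fine / 2
```

A smaller quantization step (higher rate) yields a higher SSIM score, which is the monotone rate/quality relationship the paper's bounds characterize analytically.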
Affiliation(s)
- Sumohana S Channappayya
- Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin TX 78712-0240, USA.
8922
Rabbani H, Vafadust M, Abolmaesumi P, Gazor S. Speckle noise reduction of medical ultrasound images in complex wavelet domain using mixture priors. IEEE Trans Biomed Eng 2008; 55:2152-60. [DOI: 10.1109/tbme.2008.923140]
8923
Monaco JP, Bovik AC, Cormack LK. Nonlinearities in stereoscopic phase-differencing. IEEE Trans Image Process 2008; 17:1672-1684. [PMID: 18713673] [DOI: 10.1109/tip.2008.2001405]
Abstract
Exploiting the quasi-linear relationship between local phase and disparity, phase-differencing registration algorithms provide a fast, powerful means for disparity estimation. Unfortunately, these techniques suffer a significant impediment: phase nonlinearities. In regions of phase nonlinearity, the signals under consideration possess properties that invalidate the use of phase for disparity estimation. This paper uses the amenable properties of Gaussian white noise images to analytically quantify these effects. The improved understanding gained from this analysis clarifies current methodologies for detecting regions of phase instability. Most importantly, we introduce a new, more effective means of identifying these regions based on the second derivative of phase.
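For context on the estimators being analyzed: local phase is measured with a complex (Gabor) filter, and the disparity between two views follows, to first order, from the wrapped phase difference divided by the filter's center frequency. A 1-D sketch with a synthetic shift (the filter parameters and test signal are our own choices, and the edge regions, where the paper's instabilities live, are simply excluded):

```python
import numpy as np

def local_phase(signal, omega0=0.5, sigma=6.0):
    """Local phase from convolution with a complex Gabor filter."""
    t = np.arange(-3 * sigma, 3 * sigma + 1)
    gabor = np.exp(-t ** 2 / (2 * sigma ** 2)) * np.exp(1j * omega0 * t)
    return np.angle(np.convolve(signal, gabor, mode='same'))

omega0, true_disp = 0.5, 3
x = np.arange(512)
left = np.sin(0.5 * x) + 0.3 * np.sin(0.23 * x)  # synthetic "stereo" signal
right = np.roll(left, true_disp)                  # right view = shifted left view

# Quasi-linear phase/disparity relation: d ~ (phi_left - phi_right) / omega0,
# with the phase difference wrapped back into (-pi, pi].
dphi = np.angle(np.exp(1j * (local_phase(left, omega0) -
                             local_phase(right, omega0))))
disparity = dphi / omega0
```

Where the filter response magnitude approaches zero, the phase, and hence this estimate, becomes unreliable; detecting exactly those regions is what the paper's second-derivative criterion targets.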
Affiliation(s)
- James Peter Monaco
- Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin TX 78712-1084, USA.
8925
Gargesha M, Jenkins MW, Rollins AM, Wilson DL. Denoising and 4D visualization of OCT images. Opt Express 2008; 16:12313-33. [PMID: 18679509] [PMCID: PMC2748663] [DOI: 10.1364/oe.16.012313]
Abstract
We are using Optical Coherence Tomography (OCT) to image the structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm, with a quick, data-set-specific optimization, for reduction of both shot and speckle noise, and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings with respect to both noise and 3D visualization. Specifically, the algorithm provided better visualization of the myocardial and endocardial surfaces, and of the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated the superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert-segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity) in OCT embryo images. With its semi-automatic, data-set-specific optimization, we believe the algorithm can be applied to OCT images from other applications.
Affiliation(s)
- Madhusudhana Gargesha
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106
- Michael W. Jenkins
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106
- Andrew M. Rollins
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106
- David L. Wilson
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106
8926
Aja-Fernandez S, Alberola-Lopez C, Westin CF. Noise and signal estimation in magnitude MRI and Rician distributed images: a LMMSE approach. IEEE Trans Image Process 2008; 17:1383-1398. [PMID: 18632347] [DOI: 10.1109/tip.2008.925382]
Abstract
A new method for noise filtering in images that follow a Rician model, with particular attention to magnetic resonance imaging, is proposed. To that end, we have derived a novel closed-form solution of the linear minimum mean square error (LMMSE) estimator for this distribution. Additionally, a set of methods that automatically estimate the noise power are developed. These methods use information from the sample distribution of local statistics of the image, such as the local variance, the local mean, and the local mean square value. Accordingly, the dynamic estimation of noise leads to a recursive version of the LMMSE, which shows good performance in both noise cleaning and feature preservation. This paper also includes the derivation of the probability density function of several local sample statistics for the Rayleigh and Rician models, upon which the estimators are built.
8927
Brooks AC, Zhao X, Pappas TN. Structural similarity quality metrics in a coding context: exploring the space of realistic distortions. IEEE Trans Image Process 2008; 17:1261-1273. [PMID: 18632337] [DOI: 10.1109/tip.2008.926161]
Abstract
Perceptual image quality metrics have explicitly accounted for human visual system (HVS) sensitivity to subband noise by estimating just noticeable distortion (JND) thresholds. A recently proposed class of quality metrics, known as structural similarity metrics (SSIM), models perception implicitly by taking into account the fact that the HVS is adapted for extracting structural information from images. We evaluate SSIM metrics and compare their performance to traditional approaches in the context of realistic distortions that arise from compression and error concealment in video compression/transmission applications. In order to better explore this space of distortions, we propose models for simulating typical distortions encountered in such applications. We compare specific SSIM implementations both in the image space and the wavelet domain; these include the complex wavelet SSIM (CWSSIM), a translation-insensitive SSIM implementation. We also propose a perceptually weighted multiscale variant of CWSSIM, which introduces a viewing distance dependence and provides a natural way to unify the structural similarity approach with the traditional JND-based perceptual approaches.
Affiliation(s)
- Alan C Brooks
- Defensive Systems Division, Northrop Grumman Corporation, Rolling Meadows, IL 60008, USA.
8928
Vanhamel I, Mihai C, Sahli H, Katartzis A, Pratikakis I. Scale selection for compact scale-space representation of vector-valued images. Int J Comput Vis 2008. [DOI: 10.1007/s11263-008-0154-4]
8929
|
Roussos A, Maragos P. Reversible Interpolation of Vectorial Images by an Anisotropic Diffusion-Projection PDE. Int J Comput Vis 2008. [DOI: 10.1007/s11263-008-0132-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
8930
|
Concurrent recall of serially learned visual discrimination problems in dwarf goats (Capra hircus). Behav Processes 2008; 79:156-64. [PMID: 18694810 DOI: 10.1016/j.beproc.2008.07.004] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2008] [Revised: 07/07/2008] [Accepted: 07/10/2008] [Indexed: 11/23/2022]
Abstract
Studies of cognitive ability in farm animals are valuable, not only because they provide indicators of the commonality of comparative influence, but understanding farm animal cognition may also aid in management and treatment procedures. Here, eight dwarf goats (Capra hircus) learned a series of 10 visual four-choice discriminations using an automated device that allowed individual ad lib. access to the test setup while staying in a familiar environment and normal social setting. The animals were trained on each problem for 5 days, followed by concurrent testing of the current against the previous problem. Once all 10 problems had been learned, they were tested concurrently over the course of 9 days. In initial training, all goats achieved criterion learning levels on nearly all problems within 2 days and under 200 trials. Concurrently presenting the problems trained in adjacent sessions did not impair performance on either problem relative to single-problem learning. Upon concurrent presentation of all 10 previously learned problems, at least half were well-remembered immediately. Although this test revealed a recency effect (later problems were better remembered), many early-learned problems were also well-retained, and 10-item relearning was quite quick. These results show that dwarf goats can retain multiple-problem information proficiently and can do so over periods of several weeks. From an ecological point of view, the ability to form numerous associations between visual cues offered by specific plants and food quality is an important pre-grazing mechanism that helps goats exploit variation in vegetation and graze selectively.
|
8931
|
Adolphs R, Spezio ML, Parlier M, Piven J. Distinct face-processing strategies in parents of autistic children. Curr Biol 2008; 18:1090-3. [PMID: 18635351 PMCID: PMC2504759 DOI: 10.1016/j.cub.2008.06.073] [Citation(s) in RCA: 82] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2008] [Revised: 06/20/2008] [Accepted: 06/20/2008] [Indexed: 11/19/2022]
Abstract
In his original description of autism, Kanner [1] noted that the parents of autistic children often exhibited unusual social behavior themselves, consistent with what we now know about the high heritability of autism [2]. We investigated this so-called Broad Autism Phenotype in the parents of children with autism, who themselves did not receive a diagnosis of any psychiatric illness. Building on recent quantifications of social cognition in autism [3], we investigated face processing by using the "bubbles" method [4] to measure how viewers make use of information from specific facial features in order to judge emotions. Parents of autistic children who were assessed as socially aloof (N = 15), a key component of the phenotype [5], showed a remarkable reduction in processing the eye region in faces, together with enhanced processing of the mouth, compared to a control group of parents of neurotypical children (N = 20), as well as to nonaloof parents of autistic children (N = 27, whose pattern of face processing was intermediate). The pattern of face processing seen in the Broad Autism Phenotype showed striking similarities to that previously reported to occur in autism [3] and for the first time provides a window into the endophenotype that may result from a subset of the genes that contribute to social cognition.
Affiliation(s)
- Ralph Adolphs
- California Institute of Technology, Pasadena, California 91125, USA
|
8932
|
Sgouros N, Kontaxakis I, Sangriotis M. Effect of different traversal schemes in integral image coding. Appl Opt 2008; 47:D28-D37. [PMID: 18594576 DOI: 10.1364/ao.47.000d28] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
Integral imaging (InIm) is a highly promising technique for the delivery of three-dimensional (3D) image content. During capturing, different views of an object are recorded as an array of elemental images (EIs), which form the integral image. High-resolution InIm requires sensors with increased resolution and produces huge amounts of highly correlated data. An efficient encoding scheme for InIm compression must properly exploit both inter-EI and intra-EI correlations. We present an EI traversal scheme that maximizes the performance of InIm encoders by rearranging EIs to increase the correlation between jointly coded EIs. This technique can be used to boost the performance of both InIm-specific encoders and properly adapted general-purpose encoders used for InIm compression. An objective quality metric is also introduced for evaluating the effects of different traversal schemes on encoder performance.
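As a concrete, deliberately simple example of the kind of EI traversal the paper studies, a serpentine scan keeps consecutively coded elemental images adjacent in the grid, which tends to raise the correlation between jointly coded EIs. This is a stand-in illustration, not one of the paper's actual schemes:

```python
def serpentine_order(rows, cols):
    """Serpentine (boustrophedon) traversal of an EI grid: scan
    left-to-right on even rows, right-to-left on odd rows, so every
    consecutively coded pair of elemental images is a spatial
    neighbour."""
    order = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        order.extend((r, c) for c in cs)
    return order
```

Every consecutive pair in the resulting order differs by exactly one grid step, unlike a plain raster scan, which jumps a whole row width at each line break.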
Affiliation(s)
- Nicholas Sgouros
- Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Panepistimiopolis, Ilissia, Greece.
|
8933
|
Yu J, Wang Y, Shen Y. Noise reduction and edge detection via kernel anisotropic diffusion. Pattern Recognit Lett 2008. [DOI: 10.1016/j.patrec.2008.03.002] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
8934
|
Lyu S, Simoncelli EP. Nonlinear Image Representation Using Divisive Normalization. Proc IEEE Conf Comput Vis Pattern Recognit 2008; 2008:1-8. [PMID: 25346590 PMCID: PMC4207373 DOI: 10.1109/cvpr.2008.4587821] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
In this paper, we describe a nonlinear image representation based on divisive normalization that is designed to match the statistical properties of photographic images, as well as the perceptual sensitivity of biological visual systems. We decompose an image using a multi-scale oriented representation, and use Student's t as a model of the dependencies within local clusters of coefficients. We then show that normalization of each coefficient by the square root of a linear combination of the amplitudes of the coefficients in the cluster reduces statistical dependencies. We further show that the resulting divisive normalization transform is invertible and provide an efficient iterative inversion algorithm. Finally, we probe the statistical and perceptual advantages of this image representation by examining its robustness to added noise, and using it to enhance image contrast.
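The core normalization step the abstract describes, dividing each coefficient by the square root of a linear combination of squared amplitudes in its local cluster, can be sketched in one dimension. The paper applies this to multi-scale oriented subbands and also provides an iterative inverse; the 1-D neighbourhood and weights below are illustrative assumptions:

```python
import math

def divisive_normalize(coeffs, weights, b=1.0):
    """Divisive normalization: each coefficient is divided by the square
    root of b plus a weighted sum of squared amplitudes in its local
    neighbourhood (which here includes the coefficient itself). Toy 1-D
    version; `weights` is a symmetric window of non-negative weights."""
    n = len(coeffs)
    r = len(weights) // 2
    out = []
    for i in range(n):
        acc = b
        for k, w in enumerate(weights):
            j = i + k - r
            if 0 <= j < n:
                acc += w * coeffs[j] ** 2
        out.append(coeffs[i] / math.sqrt(acc))
    return out
```

Because the normalizer grows with local signal energy, large coefficients are compressed more than small ones, which is exactly what reduces the statistical dependencies between neighbouring coefficients.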
Affiliation(s)
- Siwei Lyu
- Howard Hughes Medical Institute, and Center for Neuroscience, New York University
- Eero P Simoncelli
- Howard Hughes Medical Institute, and Center for Neuroscience, New York University
|
8935
|
Channappayya SS, Bovik AC, Caramanis C, Heath RW. Design of linear equalizers optimized for the structural similarity index. IEEE Trans Image Process 2008; 17:857-872. [PMID: 18482882 DOI: 10.1109/tip.2008.921328] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
We propose an algorithm for designing linear equalizers that maximize the structural similarity (SSIM) index between the reference and restored signals. The SSIM index has enjoyed considerable application in the evaluation of image processing algorithms, yet no algorithms have been designed to explicitly optimize for this measure. The design of such an algorithm is nontrivial due to the nonconvex nature of the distortion measure. In this paper, we reformulate the nonconvex problem as a quasi-convex optimization problem that admits a tractable solution. We compute the optimal solution in near closed form, with the complexity of the resulting algorithm comparable to that of the linear minimum mean squared error (MMSE) solution, independent of the number of filter taps. To demonstrate the usefulness of the proposed algorithm, we apply it to restore images that have been blurred and corrupted with additive white Gaussian noise. As a special case, we consider blur-free image denoising. In each case, performance is compared to a locally adaptive linear MSE-optimal filter. We show that images denoised and restored using the SSIM-optimal filter have a higher SSIM index and superior perceptual quality compared to those restored using the MSE-optimal adaptive linear filter. Through these results, we demonstrate that (a) image processing algorithms, in particular denoising and restoration-type algorithms, can yield significant gains over existing (in particular, linear MMSE-based) approaches by optimizing them for perceptual distortion measures, and (b) these gains may be obtained without a significant increase in the computational complexity of the algorithm.
Affiliation(s)
- Sumohana S Channappayya
- Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712-0240, USA.
|
8936
|
Enhancing obstetric and gynecology ultrasound images by adaptation of the speckle reducing anisotropic diffusion filter. Artif Intell Med 2008; 43:223-42. [PMID: 18499411 DOI: 10.1016/j.artmed.2008.04.001] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2007] [Revised: 03/23/2008] [Accepted: 04/01/2008] [Indexed: 11/20/2022]
Abstract
OBJECTIVE So far there is no ideal speckle reduction filtering technique that is capable of enhancing and reducing the level of noise in medical ultrasound (US) images while efficiently responding to medical experts' validation criteria, which quite often include a subjective component. This paper presents an interactive tool called the evolutionary speckle reducing anisotropic diffusion filter (EVOSRAD) that performs adaptive speckle filtering on ultrasound B-mode still images. The medical expert runs the algorithm interactively, retaining permanent control over the output and guiding the filtering process towards enhanced images that satisfy his or her subjective quality criteria. METHODS AND MATERIAL We employ an interactive genetic algorithm (IGA) to adapt on-line the parameters of a speckle reducing anisotropic diffusion (SRAD) filter. For a given input US image, the algorithm evolves the parameters of the SRAD filter according to the subjective criteria of the medical expert who runs the interactive algorithm. The method and its validation are applied to a test bed comprising both real and simulated obstetrics and gynecology (OB/GYN) ultrasound images. RESULTS The potential of the method is analyzed in comparison to other speckle reduction filters: the original SRAD filter, and anisotropic diffusion, offset, and median filters. The results show the good potential of the method on several classes of OB/GYN ultrasound images, as well as on a synthetic image simulating a real fetal US image. Quality criteria for the evaluation and validation of the method include subjective scores given by the medical expert who runs the interactive method, as well as objective global and local quality criteria. CONCLUSIONS The presented method allows medical experts to design their own filters according to their degree of expertise and to particular, often subjective, assessment criteria. A filter is designed for a given class of ultrasound images and for a given medical expert, who will later use the respective filter in clinical practice. The process of designing a filter is simple and employs an interactive visualization and scoring stage that does not require image processing knowledge. Results show that filters tailored using the presented method achieve better quality scores than other, more generic speckle filtering techniques.
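A minimal sketch of the kind of speckle-reducing diffusion the SRAD family performs may help: one explicit time step on a 2-D image, with a simplified diffusion coefficient driven by the squared local coefficient of variation. The real filter uses the Yu-Acton instantaneous coefficient of variation and a time-decaying scale q0; the neighbourhood handling and coefficient form below are simplifying assumptions:

```python
def srad_like_step(img, dt=0.1, q0=0.5):
    """One explicit iteration of SRAD-style diffusion on a list-of-lists
    image. q2 = local gradient energy / intensity^2 approximates the
    squared coefficient of variation: it is small in homogeneous speckle
    (strong smoothing, g near 1) and large at edges (g near 0), so
    edges are preserved. Reflecting boundaries, 4-neighbour stencil."""
    h, w = len(img), len(img[0])

    def px(i, j):  # clamp indices -> reflecting boundary
        return img[min(max(i, 0), h - 1)][min(max(j, 0), w - 1)]

    out = []
    for i in range(h):
        row = []
        for j in range(w):
            c = px(i, j)
            nb = [px(i - 1, j), px(i + 1, j), px(i, j - 1), px(i, j + 1)]
            lap = sum(nb) - 4 * c
            grad2 = sum((v - c) ** 2 for v in nb)
            q2 = grad2 / max(c * c, 1e-12)
            denom = 1.0 + (q2 - q0 * q0) / (q0 * q0 * (1.0 + q0 * q0))
            g = 1.0 / denom if denom > 1e-6 else 0.0
            g = min(max(g, 0.0), 1.0)  # clamp for stability
            row.append(c + dt * g * lap)
        out.append(row)
    return out
```

A uniform image is a fixed point (zero Laplacian, no update), while a noisy patch contracts towards its local mean after each step.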
|
8937
|
Wang C, Ma KL. A statistical approach to volume data quality assessment. IEEE Trans Vis Comput Graph 2008; 14:590-602. [PMID: 18369266 DOI: 10.1109/tvcg.2007.70628] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
Quality assessment plays a crucial role in data analysis. In this paper, we present a reduced-reference approach to volume data quality assessment. Our algorithm extracts important statistical information from the original data in the wavelet domain. Using the extracted information as feature and predefined distance functions, we are able to identify and quantify the quality loss in the reduced or distorted version of data, eliminating the need to access the original data. Our feature representation is naturally organized in the form of multiple scales, which facilitates quality evaluation of data with different resolutions. The feature can be effectively compressed in size. We have experimented with our algorithm on scientific and medical data sets of various sizes and characteristics. Our results show that the size of the feature does not increase in proportion to the size of original data. This ensures the scalability of our algorithm and makes it very applicable for quality assessment of large-scale data sets. Additionally, the feature could be used to repair the reduced or distorted data for quality improvement. Finally, our approach can be treated as a new way to evaluate the uncertainty introduced by different versions of data.
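The reduced-reference idea, i.e. shipping a small wavelet-domain statistical summary instead of the full data and then comparing summaries, can be sketched with a toy 1-D Haar decomposition and per-band mean/standard-deviation features. The paper uses richer statistics over multiple scales of a volumetric transform; the feature choice and distance below are illustrative assumptions:

```python
import math

def haar_levels(signal, levels=2):
    """Split a 1-D signal (even length) into Haar detail subbands plus a
    final approximation band."""
    bands, approx = [], list(signal)
    for _ in range(levels):
        pairs = list(zip(approx[0::2], approx[1::2]))
        bands.append([(a - b) / math.sqrt(2) for a, b in pairs])
        approx = [(a + b) / math.sqrt(2) for a, b in pairs]
    bands.append(approx)
    return bands

def rr_feature(signal, levels=2):
    """Compact reduced-reference feature: mean and standard deviation of
    each subband -- a small stand-in for richer wavelet statistics."""
    feat = []
    for band in haar_levels(signal, levels):
        n = len(band)
        m = sum(band) / n
        sd = math.sqrt(sum((v - m) ** 2 for v in band) / n)
        feat.extend([m, sd])
    return feat

def rr_distance(f1, f2):
    """Quality loss proxy: Euclidean distance between feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
```

Only the feature vector needs to travel with the reduced data, so the original volume never has to be re-read to score a distorted version, which is the point of the reduced-reference setting.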
Affiliation(s)
- Chaoli Wang
- Visualization and Interface Design Innovation Research Group, Department of Computer Science, University of California, Davis, Davis, CA 95616, USA.
|
8938
|
Roy A, Sural S, Mukherjee J, Majumdar AK. State-Based Modeling and Object Extraction From Echocardiogram Video. IEEE Trans Inf Technol Biomed 2008; 12:366-76. [PMID: 18693504 DOI: 10.1109/titb.2007.910352] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Affiliation(s)
- Aditi Roy
- School of Information Technology, Indian Institute of Technology (IIT), Kharagpur 721302, India.
|
8939
|
Huang AM, Nguyen TQ. A multistage motion vector processing method for motion-compensated frame interpolation. IEEE Trans Image Process 2008; 17:694-708. [PMID: 18390375 DOI: 10.1109/tip.2008.919360] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame rate up-conversion. We address the problems of broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and is able to preserve structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement with a constrained vector median filter, to avoid selecting an identically unreliable vector. We also propose using chrominance information in our method. Experimental results show that the proposed scheme yields better visual quality and is also robust, even in video sequences with complex scenes and fast motion.
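The constrained vector median filter in the refinement step builds on the plain vector median, which can be sketched as follows. The constrained version additionally restricts the candidate set using the reliability labels, which is omitted here:

```python
def vector_median(vectors):
    """Vector median filter: return the candidate motion vector whose
    summed Euclidean distance to all the others is smallest. Unlike
    componentwise averaging, this always picks a vector that actually
    occurs in the neighbourhood, so it cannot invent motion that exists
    nowhere in the field."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    return min(vectors, key=lambda v: sum(dist(v, u) for u in vectors))
```

An outlier vector (e.g. one produced by a bad block match) contributes a large distance to every candidate's sum, so it is never selected as the median.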
Affiliation(s)
- Ai-Mei Huang
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92093, USA.
|
8941
|
Channappayya SS, Bovik AC, Caramanis C, Heath RW. SSIM-optimal linear image restoration. Proc IEEE Int Conf Acoust Speech Signal Process (ICASSP) 2008. [DOI: 10.1109/icassp.2008.4517722] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
8942
|
Kim B, Lee KH, Kim KJ, Mantiuk R, Bajpai V, Kim TJ, Kim YH, Yoon CJ, Hahn S. Prediction of perceptible artifacts in JPEG2000 compressed abdomen CT images using a perceptual image quality metric. Acad Radiol 2008; 15:314-25. [PMID: 18280929 DOI: 10.1016/j.acra.2007.10.018] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2007] [Revised: 10/02/2007] [Accepted: 10/02/2007] [Indexed: 10/22/2022]
Abstract
RATIONALE AND OBJECTIVES To test a perceptual quality metric (high-dynamic range visual difference predictor, HDR-VDP) in predicting perceptible artifacts in JPEG2000-compressed thin- and thick-section abdomen computed tomography images. MATERIALS AND METHODS A total of 120 thin (0.67 mm) and corresponding thick (5 mm) sections were compressed to ratios from 4:1 to 15:1. Peak signal-to-noise ratio (PSNR), HDR-VDP results (paired t-tests), and five radiologists' pooled responses for the presence of artifacts (exact tests for paired proportions) were compared between the thin and thick sections. For three subsets of 120 thin- (subset A), 120 thick- (subset B), and 60 thin- and 60 thick-section compressed images (subset C), receiver operating characteristic (ROC) curve analysis was performed to compare PSNR and HDR-VDP in predicting the radiologists' responses. Using the cutoff values where the sum of sensitivity and specificity was maximal in subset C, visually lossless thresholds (VLTs) were estimated for the 240 original images and the estimation accuracy was compared (McNemar test). RESULTS Thin sections showed more artifacts in terms of PSNR, HDR-VDP, and radiologists' responses (p < .0001). HDR-VDP outperformed PSNR for subset C (area under the curve: 0.97 versus 0.93, p = .03), whereas the two did not differ significantly for subset A or B. Using the cutoff values, PSNR and HDR-VDP predicted the VLT accurately for 124 (51.7%) and 183 (76.3%) images, respectively (p < .0001). CONCLUSIONS HDR-VDP can predict perceptible compression artifacts, and therefore can potentially be used to estimate the VLT for such compressions.
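PSNR, the baseline metric that HDR-VDP outperformed on the mixed subset, is straightforward to compute. A minimal sketch for flattened pixel lists (8-bit peak assumed; for 12-bit CT pixel data `max_val` would be 4095):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a
    distorted image, both given as flat lists of equal length. Higher
    values mean less distortion; identical images give infinity."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Because PSNR depends only on mean squared error, it is blind to where in the image the error sits, which is precisely the weakness perceptual predictors such as HDR-VDP address.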
|
8943
|
Varghese G, Wang Z. Video denoising using a spatiotemporal statistical model of wavelet coefficients. Proc IEEE Int Conf Acoust Speech Signal Process (ICASSP) 2008. [DOI: 10.1109/icassp.2008.4517845] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
8944
|
Kandadai S, Hardin J, Creusere CD. Audio quality assessment using the mean structural similarity measure. Proc IEEE Int Conf Acoust Speech Signal Process (ICASSP) 2008. [DOI: 10.1109/icassp.2008.4517586] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
8945
|
Deformable 2D-3D registration of the pelvis with a limited field of view, using shape statistics. Med Image Comput Comput Assist Interv 2008; 10:519-26. [PMID: 18044608 DOI: 10.1007/978-3-540-75759-7_63] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/11/2023]
Abstract
Our paper summarizes experiments measuring the accuracy of deformable 2D-3D registration between sets of simulated x-ray images (DRRs) and a statistical shape model of the pelvis bones, which includes x-ray attenuation information ("density"). In many surgical scenarios, the images contain a truncated view of the pelvis anatomy. Our work specifically addresses this problem by examining different selections of truncated views as target images. Our atlas is derived by applying principal component analysis to a population of up to 110 instance shapes. The experiments measure the registration error with both a large and a truncated field of view. A typical accuracy of about 2 mm is achieved in the 2D-3D registration, compared with about 1.4 mm for an "optimal" 3D-3D registration.
|
8946
|
Munteanu C, Morales F, Ruiz-Alzola J. Speckle Reduction Through Interactive Evolution of a General Order Statistics Filter for Clinical Ultrasound Imaging. IEEE Trans Biomed Eng 2008; 55:365-9. [DOI: 10.1109/tbme.2007.897833] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
8948
|
Loizou CP, Pattichis CS. Despeckle Filtering Algorithms and Software for Ultrasound Imaging. Synthesis Lectures on Algorithms and Software in Engineering 2008. [DOI: 10.2200/s00116ed1v01y200805ase001] [Citation(s) in RCA: 38] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
|
8949
|
Photo and Video Quality Evaluation: Focusing on the Subject. Lect Notes Comput Sci 2008. [DOI: 10.1007/978-3-540-88690-7_29] [Citation(s) in RCA: 182] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
|
8950
|
André T, Antonini M, Barlaud M, Gray RM. Entropy-based distortion measure and bit allocation for wavelet image compression. IEEE Trans Image Process 2007; 16:3058-3064. [PMID: 18092603 DOI: 10.1109/tip.2007.909408] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
|