8951
|
Chandler DM, Hemami SS. VSNR: a wavelet-based visual signal-to-noise ratio for natural images. IEEE Trans Image Process 2007; 16:2284-98. [PMID: 17784602 DOI: 10.1109/tip.2007.901820] [Citation(s) in RCA: 153] [Impact Index Per Article: 8.5]
Abstract
This paper presents an efficient metric for quantifying the visual fidelity of natural images based on near-threshold and suprathreshold properties of human vision. The proposed metric, the visual signal-to-noise ratio (VSNR), operates via a two-stage approach. In the first stage, contrast thresholds for detection of distortions in the presence of natural images are computed via wavelet-based models of visual masking and visual summation in order to determine whether the distortions in the distorted image are visible. If the distortions are below the threshold of detection, the distorted image is deemed to be of perfect visual fidelity (VSNR = infinity) and no further analysis is required. If the distortions are suprathreshold, a second stage is applied which operates based on the low-level visual property of perceived contrast, and the mid-level visual property of global precedence. These two properties are modeled as Euclidean distances in distortion-contrast space of a multiscale wavelet decomposition, and VSNR is computed based on a simple linear sum of these distances. The proposed VSNR metric is generally competitive with current metrics of visual fidelity; it is efficient both in terms of its low computational complexity and in terms of its low memory requirements; and it operates based on physical luminances and visual angle (rather than on digital pixel values and pixel-based dimensions) to accommodate different viewing conditions.
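The closing combination of the two stages can be sketched compactly. The Python fragment below is a loose illustration only: RMS contrast stands in for the paper's wavelet-based masking and summation models, the coarse-error proxy for the global-precedence distance d_gp is an assumption, and `detection_threshold` and `alpha` are placeholder values rather than the calibrated parameters of the published metric.

```python
import numpy as np

def rms_contrast(x, mean_lum):
    # Root-mean-square contrast of a signal relative to a mean luminance.
    return np.sqrt(np.mean(x.astype(float) ** 2)) / mean_lum

def vsnr_sketch(reference, distorted, detection_threshold=0.01, alpha=0.04):
    ref = reference.astype(float)
    err = distorted.astype(float) - ref
    mean_lum = ref.mean()

    # Stage 1: distortions below the detection threshold => perfect fidelity.
    d_pc = rms_contrast(err, mean_lum)              # perceived distortion contrast
    if d_pc < detection_threshold:
        return np.inf

    # Stage 2: combine distortion contrast with a global-precedence distance.
    # d_gp is crudely proxied here by the contrast of a downsampled (coarse)
    # version of the error; the paper derives it from a wavelet decomposition.
    d_gp = rms_contrast(err[::4, ::4], mean_lum)
    c_img = rms_contrast(ref - mean_lum, mean_lum)  # contrast of the image itself
    return 20 * np.log10(c_img / (alpha * d_pc + (1 - alpha) * d_gp / np.sqrt(2)))
```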
Affiliation(s)
- Damon M Chandler
- School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078, USA.
|
8952
|
Liao B, Chen Y. An Image Quality Assessment Algorithm Based on Dual-scale Edge Structure Similarity. Second International Conference on Innovative Computing, Information and Control (ICICIC 2007) 2007. [DOI: 10.1109/icicic.2007.143] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2]
|
8953
|
Lee J, Horiuchi T, Saito R, Kotera H. Digital Color Image Halftone: Hybrid Error Diffusion Using the Mask Perturbation and Quality Verification. J Imaging Sci Technol 2007. [DOI: 10.2352/j.imagingsci.technol.(2007)51:5(391)] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
|
8954
|
Spezio ML, Adolphs R, Hurley RSE, Piven J. Abnormal use of facial information in high-functioning autism. J Autism Dev Disord 2007; 37:929-39. [PMID: 17006775 DOI: 10.1007/s10803-006-0232-9] [Citation(s) in RCA: 213] [Impact Index Per Article: 11.8]
Abstract
Altered visual exploration of faces likely contributes to social cognition deficits seen in autism. To investigate the relationship between face gaze and social cognition in autism, we measured both face gaze and how facial regions were actually used during emotion judgments from faces. Compared to IQ-matched healthy controls, nine high-functioning adults with autism failed to make use of information from the eye region of faces, instead relying primarily on information from the mouth. Face gaze accounted for the increased reliance on the mouth, and partially accounted for the deficit in using information from the eyes. These findings provide a novel quantitative assessment of how people with autism utilize information in faces when making social judgments.
Affiliation(s)
- Michael L Spezio
- Division of Humanities and Social Sciences, 228-77, California Institute of Technology, Caltech, Pasadena, CA 91125, USA.
|
8955
|
Snidaro L, Niu R, Foresti GL, Varshney PK. Quality-Based Fusion of Multiple Video Sensors for Video Surveillance. IEEE Trans Syst Man Cybern B Cybern 2007; 37:1044-51. [PMID: 17702301 DOI: 10.1109/tsmcb.2007.895331] [Citation(s) in RCA: 39] [Impact Index Per Article: 2.2]
Abstract
In this correspondence, we address the problem of fusing data for object tracking for video surveillance. The fusion process is dynamically regulated to take into account the performance of the sensors in detecting and tracking the targets. This is performed through a function that adjusts the measurement error covariance associated with the position information of each target according to the quality of its segmentation. In this manner, localization errors due to incorrect segmentation of the blobs are reduced, thus improving tracking accuracy. Experimental results on video sequences of outdoor environments show the effectiveness of the proposed approach.
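The covariance-adjustment idea reduces to one line inside a standard Kalman update. The sketch below inflates the measurement-error covariance as segmentation quality drops; the `R0 / quality` scaling is an illustrative choice, not the paper's exact adjustment function.

```python
import numpy as np

def kalman_update(x, P, z, H, R0, quality):
    # Inflate the measurement-error covariance when segmentation is poor,
    # so low-quality position measurements are trusted less (assumed scaling).
    R = R0 / max(quality, 1e-3)
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # update with the position measurement z
    P = (np.eye(len(x)) - K @ H) @ P     # covariance update
    return x, P
```

With `quality` near 1 the measurement is weighted normally; as it approaches 0 the gain shrinks and the track effectively coasts on its prediction.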
|
8956
|
Visual enhancement of digital ultrasound images: wavelet versus Gauss–Laplace contrast pyramid. Int J Comput Assist Radiol Surg 2007. [DOI: 10.1007/s11548-007-0122-4] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3]
|
8957
|
Noore A, Singh R, Vatsa M, Houck MM. Enhancing security of fingerprints through contextual biometric watermarking. Forensic Sci Int 2007; 169:188-94. [PMID: 17018250 DOI: 10.1016/j.forsciint.2006.08.019] [Citation(s) in RCA: 53] [Impact Index Per Article: 2.9]
Abstract
This paper presents a novel digital watermarking technique using face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of a fingerprint image using discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and maintain the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images, and the extracted images from the watermarked fingerprint. The degree of similarity is computed using pixel-based metrics and human visual system metrics. The results also show that the proposed watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.
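A minimal sketch of wavelet-domain embedding is given below, assuming quantization-index modulation on the strongest detail coefficients as a stand-in for the paper's texture-region selection, and generic payload bits in place of the face and demographic watermarks (`pywt` provides the transform).

```python
import numpy as np
import pywt

def embed_bits(image, bits, step=8.0):
    # One-level DWT; embed each bit in the parity of a quantized detail coefficient.
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'haar')
    flat = cD.ravel().copy()
    idx = np.argsort(np.abs(flat))[::-1][: len(bits)]  # largest = most textured (assumed proxy)
    for i, b in zip(idx, bits):
        q = int(np.round(flat[i] / step))
        if q % 2 != b:          # force quantizer-index parity to equal the bit
            q += 1
        flat[i] = q * step
    return pywt.idwt2((cA, (cH, cV, flat.reshape(cD.shape))), 'haar')
```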
Affiliation(s)
- Afzel Noore
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506, USA.
|
8958
|
Hossny M, Nahavandi S, Creighton D. A Quadtree Driven Image Fusion Quality Assessment. 2007 5th IEEE International Conference on Industrial Informatics (INDIN) 2007. [DOI: 10.1109/indin.2007.4384794] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.6]
|
8959
|
Salinas HM, Fernández DC. Comparison of PDE-based nonlinear diffusion approaches for image enhancement and denoising in optical coherence tomography. IEEE Trans Med Imaging 2007; 26:761-71. [PMID: 17679327 DOI: 10.1109/tmi.2006.887375] [Citation(s) in RCA: 78] [Impact Index Per Article: 4.3] [Grants]
Abstract
A comparison between two nonlinear diffusion methods for denoising OCT images is performed. Specifically, we compare and contrast the performance of the traditional nonlinear Perona-Malik filter with a complex diffusion filter that has been recently introduced by Gilboa et al. The complex diffusion approach, based on the generalization of the nonlinear scale space to the complex domain by combining the diffusion and the free Schrödinger equation, is evaluated on synthetic images and also on representative OCT images at various noise levels. The performance improvement over the traditional nonlinear Perona-Malik filter is quantified in terms of noise suppression, image structural preservation and visual quality. An average signal-to-noise ratio (SNR) improvement of about 2.5 times and an average contrast-to-noise ratio (CNR) improvement of 49% were obtained, while the mean structure similarity (MSSIM) was practically not degraded after denoising. The nonlinear complex diffusion filtering can be applied with success to many OCT imaging applications. In summary, the numerical values of the image quality metrics, along with the qualitative analysis results, indicated the good feature preservation performance of the complex diffusion process, as desired for better diagnosis in medical image processing.
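For orientation, the baseline in this comparison is easy to state in code. The sketch below is the classical Perona-Malik scheme with the exponential edge-stopping function; the complex-diffusion variant evaluated in the paper replaces the real-valued conductance with a complex-valued one and is not reproduced here.

```python
import numpy as np

def perona_malik(img, n_iter=30, kappa=15.0, lam=0.2):
    # Classical Perona-Malik nonlinear diffusion (explicit 4-neighbor scheme).
    u = img.astype(float).copy()
    for _ in range(n_iter):
        diffs = [np.roll(u, s, axis=a) - u for a in (0, 1) for s in (-1, 1)]
        # Edge-stopping conductance g(d) = exp(-(d/kappa)^2) preserves strong edges.
        u += lam * sum(np.exp(-(d / kappa) ** 2) * d for d in diffs)
    return u
```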
|
8960
|
Singh S, Kumar V, Verma HK. Adaptive threshold-based block classification in medical image compression for teleradiology. Comput Biol Med 2007; 37:811-9. [PMID: 17055471 DOI: 10.1016/j.compbiomed.2006.08.021] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.2]
Abstract
Telemedicine, among other things, involves storage and transmission of medical images, popularly known as teleradiology. Due to constraints on bandwidth and storage capacity, a medical image may need to be compressed before transmission or storage. Among various compression techniques, transform-based techniques that convert an image in the spatial domain into data in the spectral domain are very effective. The discrete cosine transform (DCT) is possibly the most popular transform used in compression of images in standards like the Joint Photographic Experts Group (JPEG) standard. In DCT-based compression the image is split into smaller blocks for computational simplicity. The blocks are classified on the basis of information content to maximize the compression ratio without sacrificing diagnostic information. The present paper presents a technique, along with a computational algorithm, for classification of blocks on the basis of an adaptive threshold value of variance. The adaptive approach makes the classification technique applicable across the board to all medical images. Its efficacy is demonstrated by applying it to CT, X-ray and ultrasound images and by comparing the results against JPEG in terms of various objective quality indices.
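The classification step is simple to sketch. Below, each block is labeled high-information when its variance exceeds a threshold adapted to the image; tying the threshold to a fixed fraction of the mean block variance is an illustrative rule, not the paper's exact computation.

```python
import numpy as np

def classify_blocks(img, block=8, factor=0.5):
    # Variance of every block x block tile, then an image-adaptive threshold.
    h, w = (s - s % block for s in img.shape)
    tiles = img[:h, :w].astype(float).reshape(h // block, block, w // block, block)
    var = tiles.transpose(0, 2, 1, 3).reshape(-1, block * block).var(axis=1)
    threshold = factor * var.mean()           # adapts to the image at hand (assumed rule)
    return (var > threshold).reshape(h // block, w // block)
```

High-variance blocks would then be coded more finely and low-variance blocks more coarsely.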
Affiliation(s)
- Sukhwinder Singh
- Instrumentation and Signal Processing Lab, Department of Electrical Engineering, Indian Institute of Technology Roorkee, Roorkee 247667, Uttaranchal, India
|
8961
|
Abstract
With the development of communication technology, the applications and services of health telematics are growing. In view of the increasingly important role played by digital medical imaging in modern health care, it is necessary for large amounts of image data to be economically stored and/or transmitted. There is a need for the development of image compression systems that combine high compression ratio with preservation of critical information. During the past decade, wavelets have been a significant development in the field of image compression. In this paper, a hybrid scheme using both the discrete wavelet transform (DWT) and the discrete cosine transform (DCT) for medical image compression is presented. DCT is applied to the DWT details, which generally have zero mean and small variance, thereby achieving better compression than obtained from either technique alone. The results of the hybrid scheme are compared with the JPEG and set partitioning in hierarchical trees (SPIHT) coders, and it is found that the performance of the proposed scheme is better.
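The hybrid is straightforward to sketch with PyWavelets and SciPy: DCT-code the detail subbands, which are near zero mean with small variance, keeping only the largest coefficients. Quantization and entropy coding are omitted, and the `db4` wavelet and `keep` fraction are assumptions.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def hybrid_dwt_dct(img, level=2, keep=0.1):
    coeffs = pywt.wavedec2(img.astype(float), 'db4', level=level)
    out = [coeffs[0]]                      # approximation subband kept as-is
    for details in coeffs[1:]:
        coded = []
        for band in details:
            c = dctn(band, norm='ortho')   # DCT of a near-zero-mean detail subband
            cutoff = np.quantile(np.abs(c), 1 - keep)
            coded.append(idctn(np.where(np.abs(c) >= cutoff, c, 0.0), norm='ortho'))
        out.append(tuple(coded))
    return pywt.waverec2(out, 'db4')       # reconstruction from the coded subbands
```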
Affiliation(s)
- S Singh
- Electrical Engineering Department, Indian Institute of Technology, Roorkee, Uttaranchal, 247 667, India
|
8962
|
Aysal TC, Barner KE. Rayleigh-maximum-likelihood filtering for speckle reduction of ultrasound images. IEEE Trans Med Imaging 2007; 26:712-27. [PMID: 17518065 DOI: 10.1109/tmi.2007.895484] [Citation(s) in RCA: 45] [Impact Index Per Article: 2.5]
Abstract
Speckle is a multiplicative noise that degrades ultrasound images. Recent advancements in ultrasound instrumentation and portable ultrasound devices necessitate the need for more robust despeckling techniques, for both routine clinical practice and teleconsultation. Methods previously proposed for speckle reduction suffer from two major limitations: 1) noise attenuation is not sufficient, especially in the smooth and background areas; 2) existing methods do not sufficiently preserve or enhance edges--they only inhibit smoothing near edges. In this paper, we propose a novel technique that is capable of reducing the speckle more effectively than previous methods and jointly enhancing the edge information, rather than just inhibiting smoothing. The proposed method utilizes the Rayleigh distribution to model the speckle and adopts the robust maximum-likelihood estimation approach. The resulting estimator is statistically analyzed through first and second moment derivations. A tuning parameter that naturally evolves in the estimation equation is analyzed, and an adaptive method utilizing the instantaneous coefficient of variation is proposed to adjust this parameter. To further tailor performance, a weighted version of the proposed estimator is introduced to exploit varying statistics of input samples. Finally, the proposed method is evaluated and compared to well-accepted methods through simulations utilizing synthetic and real ultrasound data.
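The core estimator has a short closed form: for i.i.d. Rayleigh samples the ML scale estimate is sigma^2 = mean(x^2)/2, and the Rayleigh mean is sigma*sqrt(pi/2). A sliding-window version gives the basic despeckler sketched below; the adaptive tuning parameter and the weighted variant developed in the paper are omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def rayleigh_ml_filter(img, size=5):
    x = img.astype(float)
    # Windowed ML estimate of the Rayleigh scale: sigma^2 = E[x^2] / 2.
    sigma = np.sqrt(uniform_filter(x * x, size) / 2.0)
    # Report the local Rayleigh mean as the despeckled intensity.
    return sigma * np.sqrt(np.pi / 2.0)
```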
Affiliation(s)
- Tuncer C Aysal
- Department of Electrical and Computer Engineering, University of Delaware, Newark, DE 19716, USA.
|
8963
|
Gaubatz MD, Hemami SS. Ordering for embedded coding of wavelet image data based on arbitrary scalar quantization schemes. IEEE Trans Image Process 2007; 16:982-96. [PMID: 17405431 DOI: 10.1109/tip.2007.891793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Many modern wavelet quantization schemes specify wavelet coefficient step sizes as continuous functions of an input step-size selection criterion; rate control is achieved by selecting an appropriate set of step sizes. In embedded wavelet coders, however, rate control is achieved simply by truncating the coded bit stream at the desired rate. The order in which wavelet data are coded implicitly controls quantization step sizes applied to create the reconstructed image. Since these step sizes are effectively discontinuous, piecewise-constant functions of rate, this paper examines the problem of designing a coding order for such a coder, guided by a quantization scheme where step sizes evolve continuously with rate. In particular, it formulates an optimization problem that minimizes the average relative difference between the piecewise-constant implicit step sizes associated with a layered coding strategy and the smooth step sizes given by a quantization scheme. The solution to this problem implies a coding order. Elegant, near-optimal solutions are presented to optimize step sizes over a variety of regions of rates, either continuous or discrete. This method can be used to create layers of coded data using any scalar quantization scheme combined with any wavelet bit-plane coder. It is illustrated using a variety of state-of-the-art coders and quantization schemes. In addition, the proposed method is verified with objective and subjective testing.
Affiliation(s)
- Matthew D Gaubatz
- Department of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA.
|
8964
|
Leontaris A, Cosman PC, Reibman AR. Quality evaluation of motion-compensated edge artifacts in compressed video. IEEE Trans Image Process 2007; 16:943-56. [PMID: 17405428 DOI: 10.1109/tip.2007.891778] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2]
Abstract
Little attention has been paid to an impairment common in motion-compensated video compression: the addition of high-frequency (HF) energy as motion compensation displaces blocking artifacts off block boundaries. In this paper, we employ an energy-based approach to measure this motion-compensated edge artifact, using both compressed bitstream information and decoded pixels. We evaluate the performance of our proposed metric, along with several blocking and blurring metrics, on compressed video in two ways. First, ordinal scales are evaluated through a series of expectations that a good quality metric should satisfy: the objective evaluation. Then, the best performing metrics are subjectively evaluated. The same subjective data set is finally used to obtain interval scales to gain more insight. Experimental results show that we accurately estimate the percentage of the added HF energy in compressed video.
|
8965
|
Kakadiaris IA, Passalis G, Toderici G, Murtuza MN, Lu Y, Karampatziakis N, Theoharis T. Three-dimensional face recognition in the presence of facial expressions: an annotated deformable model approach. IEEE Trans Pattern Anal Mach Intell 2007; 29:640-9. [PMID: 17299221 DOI: 10.1109/tpami.2007.1017] [Citation(s) in RCA: 78] [Impact Index Per Article: 4.3]
Abstract
In this paper, we present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps. In addition, scalability in both time and space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, Face Recognition Grand Challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality.
|
8966
|
Yue Y, Croitoru MM, Bidani A, Zwischenberger JB, Clark JW. Ultrasound speckle suppression and edge enhancement using multiscale nonlinear wavelet diffusion. Conf Proc IEEE Eng Med Biol Soc 2007; 2005:6429-32. [PMID: 17281740 DOI: 10.1109/iembs.2005.1615970] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
This paper introduces a novel multiscale nonlinear wavelet diffusion (MNWD) method for ultrasound speckle suppression and edge enhancement. It considers wavelet diffusion as an approximation to nonlinear diffusion within the framework of the dyadic wavelet transform. Consequently, this knowledge is exploited in the design of a speckle suppression filter with an edge enhancement feature. MNWD takes advantage of the sparsity and multiresolution properties of the wavelet transform, and the iterative edge enhancement feature of nonlinear diffusion. In our algorithm, speckle is suppressed by employing iterative multiscale diffusion on the wavelet coefficients, while the edges of the image are enhanced by using an iterative signal compensation process. We validate the proposed method using synthetic and real echocardiographic images. Performance improvement over other traditional denoising filters is quantified in terms of noise suppression and structural preservation indices. The application of the proposed method is demonstrated by segmentation of an echocardiographic image using an active contour.
Affiliation(s)
- Yong Yue
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005 USA.
|
8967
|
Yue Y, Croitoru M, Bidani A, Zwischenberger J, Clark JW. Ultrasonic speckle suppression using robust nonlinear wavelet diffusion for LV volume quantification. Conf Proc IEEE Eng Med Biol Soc 2007; 2004:1609-12. [PMID: 17272008 DOI: 10.1109/iembs.2004.1403488] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
This work proposes a novel speckle suppression method, called robust nonlinear wavelet diffusion. It shows that the log-transformed speckle can be approximated by Gaussian noise contaminated with long burst outliers. Consequently, we exploit this knowledge to design a speckle suppression filter within the framework of wavelet analysis. The outliers are removed by the combination of the robust-residual filter and nonlinear diffusion filter, and the Gaussian noise is eliminated by the wavelet soft-shrinkage technique. We validate the proposed method using synthetic and real echocardiographic images. The performance improvement over other traditional denoising filters is quantified in terms of noise suppression and structural preservation indices. Finally, using the denoised image, we improve the performance of the gradient vector flow snake by modifying its external force field, and we quantify the volume of left ventricle via segmentation applied to the echocardiographic image.
Affiliation(s)
- Yong Yue
- Department of Electrical and Computer Engineering, Rice University, Houston, TX, USA
|
8968
|
Zhang F, Yoo YM, Koh LM, Kim Y. Nonlinear diffusion in Laplacian pyramid domain for ultrasonic speckle reduction. IEEE Trans Med Imaging 2007; 26:200-11. [PMID: 17304734 DOI: 10.1109/tmi.2006.889735] [Citation(s) in RCA: 38] [Impact Index Per Article: 2.1]
Abstract
A new speckle reduction method, i.e., Laplacian pyramid-based nonlinear diffusion (LPND), is proposed for medical ultrasound imaging. With this method, speckle is removed by nonlinear diffusion filtering of bandpass ultrasound images in Laplacian pyramid domain. For nonlinear diffusion in each pyramid layer, a gradient threshold is automatically determined by a variation of median absolute deviation (MAD) estimator. The performance of the proposed LPND method has been compared with that of other speckle reduction methods, including the recently proposed speckle reducing anisotropic diffusion (SRAD) and nonlinear coherent diffusion (NCD). In simulation and phantom studies, an average gain of 1.55 dB and 1.34 dB in contrast-to-noise ratio was obtained compared to SRAD and NCD, respectively. The visual comparison of despeckled in vivo ultrasound images from liver and carotid artery shows that the proposed LPND method could effectively preserve edges and detailed structures while thoroughly suppressing speckle. These preliminary results indicate that the proposed speckle reduction method could improve image quality and the visibility of small structures and fine details in medical ultrasound imaging.
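The automatic threshold is a one-liner worth making explicit: a MAD estimate of the spread of gradient magnitudes within a pyramid layer, scaled by 1.4826 for consistency with a Gaussian sigma. That the paper applies MAD to gradient magnitudes in exactly this form is an assumption.

```python
import numpy as np

def mad_gradient_threshold(layer, c=1.4826):
    gy, gx = np.gradient(layer.astype(float))
    g = np.hypot(gx, gy)                            # gradient magnitude in this layer
    return c * np.median(np.abs(g - np.median(g)))  # robust MAD-based spread estimate
```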
Affiliation(s)
- Fan Zhang
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore.
|
8970
|
Piva A, Barni M. Design and Analysis of the First BOWS Contest. EURASIP J Inf Secur 2007. [DOI: 10.1186/1687-417x-2007-098684] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
|
8973
|
Triantaphillidou S, Allen E, Jacobson RE. Image Quality Comparison Between JPEG and JPEG2000. II. Scene Dependency, Scene Analysis, and Classification. J Imaging Sci Technol 2007. [DOI: 10.2352/j.imagingsci.technol.(2007)51:3(259)] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.7]
|
8974
|
Wang C, Garcia A, Shen HW. Interactive level-of-detail selection using image-based quality metric for large volume visualization. IEEE Trans Vis Comput Graph 2007; 13:122-34. [PMID: 17093341 DOI: 10.1109/tvcg.2007.15] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2]
Abstract
For large volume visualization, an image-based quality metric is difficult to incorporate for level-of-detail selection and rendering without sacrificing the interactivity. This is because it is usually time-consuming to update view-dependent information as well as to adjust to transfer function changes. In this paper, we introduce an image-based level-of-detail selection algorithm for interactive visualization of large volumetric data. The design of our quality metric is based on an efficient way to evaluate the contribution of multiresolution data blocks to the final image. To ensure real-time update of the quality metric and interactive level-of-detail decisions, we propose a summary table scheme in response to runtime transfer function changes and a GPU-based solution for visibility estimation. Experimental results on large scientific and medical data sets demonstrate the effectiveness and efficiency of our algorithm.
Affiliation(s)
- Chaoli Wang
- Department of Computer Science and Engineering, The Ohio State University, 395 Dreese Laboratories, Columbus, OH 43210, USA.
|
8975
|
Yamatani K, Saito N. Improvement of DCT-based compression algorithms using Poisson's equation. IEEE Trans Image Process 2006; 15:3672-89. [PMID: 17153942 DOI: 10.1109/tip.2006.882005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1]
Abstract
We propose two new image compression-decompression methods that reproduce images with better visual fidelity, less blocking artifacts, and better PSNR, particularly in low bit rates, than those processed by the JPEG Baseline method at the same bit rates. The additional computational cost is small, i.e., linearly proportional to the number of pixels in an input image. The first method, the "full mode" polyharmonic local cosine transform (PHLCT), modifies the encoder and decoder parts of the JPEG Baseline method. The goal of the full mode PHLCT is to reduce the code size in the encoding part and reduce the blocking artifacts in the decoder part. The second one, the "partial mode" PHLCT (or PPHLCT for short), modifies only the decoder part, and consequently, accepts the JPEG files, yet decompresses them with higher quality with less blocking artifacts. The key idea behind these algorithms is a decomposition of each image block into a polyharmonic component and a residual. The polyharmonic component in this paper is an approximate solution to Poisson's equation with the Neumann boundary condition, which means that it is a smooth predictor of the original image block only using the image gradient information across the block boundary. Thus, the residual--obtained by removing the polyharmonic component from the original image block--has approximately zero gradient across the block boundary, which gives rise to the fast-decaying DCT coefficients, which, in turn, lead to more efficient compression-decompression algorithms for the same bit rates. We show that the polyharmonic component of each block can be estimated solely by the first column and row of the DCT coefficient matrix of that block and those of its adjacent blocks and can predict the original image data better than some of the other AC prediction methods previously proposed. Our numerical experiments objectively and subjectively demonstrate the superiority of PHLCT over the JPEG Baseline method and the improvement of the JPEG-compressed images when decompressed by PPHLCT.
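In simplified form, the per-block decomposition can be written as a Neumann problem for the predictor u on the block Ω, with the constant source K fixed by the compatibility condition, so that the residual v = f - u has approximately zero normal gradient at the block boundary:

```latex
\Delta u = K \quad \text{in } \Omega, \qquad
\frac{\partial u}{\partial n} = \frac{\partial f}{\partial n} \quad \text{on } \partial\Omega, \qquad
K = \frac{1}{|\Omega|} \oint_{\partial\Omega} \frac{\partial f}{\partial n}\, ds, \qquad
v = f - u .
```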
Affiliation(s)
- Katsu Yamatani
- Department of Urban Science, Meijo University, Gifu 509-0261, Japan.
|
8976
|
Kattnig AP, Primot J. Radiometric order preserving method to display wide-dynamic images for imagery photointerpretation. J Opt Soc Am A Opt Image Sci Vis 2006; 23:2396-405. [PMID: 16985525 DOI: 10.1364/josaa.23.002396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
Wide-dynamic numerical images are increasingly frequent in professional environments, military photointerpretation, and x-ray or magnetic resonance medical imagery. However, a dynamic compression process is necessary to exploit such images without incessant image manipulation. A wealth of efficient methods has been developed to tackle this problem on aesthetic grounds. We argue that professional imagery interpretation requires preservation of the original radiometric order. We develop a measure of how efficiently an image uses the 8-bit radiometric channel and find that it correlates well with subjective appraisal. The image then undergoes a radiometric order-preserving process to reach a standard radiometric efficiency. Lost information is then reintroduced by addition of an edge image devoid of artifacts, with an automatic weighting ensuring a natural-looking image.
Affiliation(s)
- Alain Philippe Kattnig
- Office National d'Etudes et de Recherches Aérospatiales, Département d'Optique Théorique et Appliquée, Chatillon, France.
|
8977
|
Aja-Fernández S, Alberola-López C. On the estimation of the coefficient of variation for anisotropic diffusion speckle filtering. IEEE Trans Image Process 2006; 15:2694-701. [PMID: 16948314 DOI: 10.1109/tip.2006.877360] [Citation(s) in RCA: 90] [Impact Index Per Article: 4.7]
Abstract
In this paper, we focus on the problem of speckle removal by means of anisotropic diffusion and, specifically, on the importance of the correct estimation of the statistics involved. First, we derive an anisotropic diffusion filter that does not depend on a linear approximation of the speckle model assumed, which is the case of a previously reported filter, namely, SRAD. Then, we focus on the problem of estimation of the coefficient of variation of both signal and noise and of noise itself. Our experiments indicate that neighborhoods used for parameter estimation do not need to coincide with those used in the diffusion equations. Then, we show that, as long as the estimates are good enough, the filter proposed here and the SRAD perform fairly closely, a fact that emphasizes the importance of the correct estimation of the coefficients of variation.
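The statistic at issue is easy to compute; the sketch below estimates the local coefficient of variation C = std/mean over a sliding window, with the window size left as a free choice (the paper's observation is that it need not match the diffusion neighborhood).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_coefficient_of_variation(img, size=5):
    x = img.astype(float)
    mean = uniform_filter(x, size)
    var = uniform_filter(x * x, size) - mean ** 2        # local variance
    return np.sqrt(np.clip(var, 0.0, None)) / np.maximum(mean, 1e-12)
```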
Affiliation(s)
- Santiago Aja-Fernández
- E.T.S. Ingenieros de Telecomunicación, Universidad de Valladolid, 47011 Valladolid, Spain.
|
8978
|
Bouguila N, Ziou D. A hybrid SEM algorithm for high-dimensional unsupervised learning using a finite generalized Dirichlet mixture. IEEE Trans Image Process 2006; 15:2657-68. [PMID: 16948310 DOI: 10.1109/tip.2006.877379] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.8]
Abstract
This paper applies a robust statistical scheme to the problem of unsupervised learning of high-dimensional data. We develop, analyze, and apply a new finite mixture model based on a generalization of the Dirichlet distribution. The generalized Dirichlet distribution has a more general covariance structure than the Dirichlet distribution and offers high flexibility and ease of use for the approximation of both symmetric and asymmetric distributions. We show that the mathematical properties of this distribution allow high-dimensional modeling without requiring dimensionality reduction and, thus, without a loss of information. This makes the generalized Dirichlet distribution more practical and useful. We propose a hybrid stochastic expectation maximization algorithm (HSEM) to estimate the parameters of the generalized Dirichlet mixture. The algorithm is called stochastic because it contains a step in which the data elements are assigned randomly to components in order to avoid convergence to a saddle point. The adjective "hybrid" is justified by the introduction of a Newton-Raphson step. Moreover, the HSEM algorithm autonomously selects the number of components by the introduction of an agglomerative term. The performance of our method is tested by the classification of several pattern-recognition data sets. The generalized Dirichlet mixture is also applied to the problems of image restoration, image object recognition and texture image database summarization for efficient retrieval. For the texture image summarization problem, results are reported for the Vistex texture image database from the MIT Media Lab.
Affiliation(s)
- Nizar Bouguila
- Concordia Institute for Information Systems Engineering (CIISE), Concordia University, Montréal, QC H3G 1T7, Canada.
|
8979
|
Hirakawa K, Parks TW. Image denoising using total least squares. IEEE Trans Image Process 2006; 15:2730-42. [PMID: 16948317 DOI: 10.1109/tip.2006.877352] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.7]
Abstract
In this paper, we present a method for removing noise from digital images corrupted with additive, multiplicative, and mixed noise. An image patch from an ideal image is modeled as a linear combination of image patches from the noisy image. We propose to fit this model to the real-world image data in the total least square (TLS) sense, because the TLS formulation allows us to take into account the uncertainties in the measured data. We develop a method to reduce the contribution from the irrelevant image patches, which will sharpen the edges and reduce edge artifacts at the same time. Although the proposed algorithm is computationally demanding, the image quality of the output image demonstrates the effectiveness of the TLS algorithms.
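The TLS fit itself reduces to an SVD. The sketch below solves A x ≈ b in the total-least-squares sense, treating both A (patches from the noisy image) and b (the target patch) as uncertain; the down-weighting of irrelevant patches described in the paper is omitted.

```python
import numpy as np

def tls_solve(A, b):
    # The right singular vector of the smallest singular value of [A | b]
    # spans the TLS null direction [x; -1] (assumes its last entry is nonzero).
    Ab = np.column_stack([A, b])
    _, _, vt = np.linalg.svd(Ab)
    v = vt[-1]
    return -v[:-1] / v[-1]
```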
|
8980
|
Wang C, Shen HW. LOD map--A visual interface for navigating multiresolution volume visualization. IEEE Trans Vis Comput Graph 2006; 12:1029-36. [PMID: 17080831 DOI: 10.1109/tvcg.2006.159] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4]
Abstract
In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for us to examine, compare, and validate different LOD selection algorithms. While traditional methods rely on the final rendered images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure for LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. A LOD map is generated through the mapping of key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable update of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually-striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make the LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
Affiliation(s)
- Chaoli Wang
- Department of Computer Science and Engineering, The Ohio State University, 395 Dreese Laboratories, 2015 Neil Avenue, Columbus, OH 43210, USA.
|
8981
|
Abstract
With the recent explosion of interest in microarray technology, massive amounts of microarray images are currently being produced. The storage and transmission of these types of data are becoming increasingly challenging. This article reviews the latest technologies that allow for the compression and storage of microarray images in dedicated database systems.
Affiliation(s)
- Yu Luo
- Department of Computer Science & Engineering, University of California, Riverside, 92521, USA.
|
8982
|
Park HJ, Lee TW. Capturing nonlinear dependencies in natural images using ICA and mixture of Laplacian distribution. Neurocomputing 2006. [DOI: 10.1016/j.neucom.2005.12.026] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4]
|
8983
|
Wang Z, Wu G, Sheikh HR, Simoncelli EP, Yang EH, Bovik AC. Quality-aware images. IEEE Trans Image Process 2006; 15:1680-9. [PMID: 16764291 DOI: 10.1109/tip.2005.864165] [Citation(s) in RCA: 40] [Impact Index Per Article: 2.1]
Abstract
We propose the concept of quality-aware image, in which certain extracted features of the original (high-quality) image are embedded into the image data as invisible hidden messages. When a distorted version of such an image is received, users can decode the hidden messages and use them to provide an objective measure of the quality of the distorted image. To demonstrate the idea, we build a practical quality-aware image encoding, decoding and quality analysis system, which employs: 1) a novel reduced-reference image quality assessment algorithm based on a statistical model of natural images and 2) a previously developed quantization watermarking-based data hiding technique in the wavelet transform domain.
Affiliation(s)
- Zhou Wang
- Center for Neural Science, New York University, NY 10012, USA.
|
8984
|
Fidler A, Likar B, Skaleric U. Lossy JPEG compression: easy to compress, hard to compare. Dentomaxillofac Radiol 2006; 35:67-73. [PMID: 16549431 DOI: 10.1259/dmfr/52842661] [Citation(s) in RCA: 18] [Impact Index Per Article: 0.9]
Abstract
OBJECTIVES To review the literature on lossy compression in dental radiography and to discuss the importance and suitability of the methodology used for evaluation of image compression. METHODS A search of Medline (from 1966 to October 2004) was undertaken with the search expression "(Radiography, dental) and compression". Inclusion criterion was that the reference should be evaluating the effect of lossy image compression on diagnostic accuracy. For all included studies, information in relation to mode of image acquisition, image content, image compression, image display, and method of image evaluation was extracted. RESULTS 12 out of 32 papers were included in the review. The design of these 12 studies was found to vary considerably. Parameters used to express the degree of information loss (DIL) were either or both compression ratio (CR) and compression level (CL). The highest acceptable CR reported in the studies ranged from 3.6% to 15.4%. Furthermore, different CR values were proposed even for the same diagnostic task, for example, for caries diagnosis CR ranged from 6.2% to 11.1%. CONCLUSION Lossy image compression can be used in clinical radiology if it does not conflict with national law. However, the acceptable DIL is difficult to express and standardize. CR is probably not suitable to express DIL, because it is image content dependent. CL is also probably not suitable to express DIL because of the lack of compression software standardization.
Affiliation(s)
- A Fidler
- Department of Restorative Dentistry and Endodontics, Faculty of Medicine, University of Ljubljana, Slovenia.
|
8985
|
Loizou CP, Pattichis CS, Pantziaris M, Tyllis T, Nicolaides A. Quality evaluation of ultrasound imaging in the carotid artery based on normalization and speckle reduction filtering. Med Biol Eng Comput 2006; 44:414-26. [PMID: 16937183 DOI: 10.1007/s11517-006-0045-1] [Citation(s) in RCA: 45] [Impact Index Per Article: 2.4]
Abstract
Image quality is important when evaluating ultrasound images of the carotid for the assessment of the degree of atherosclerotic disease, or when transferring images through a telemedicine channel, and/or in other image processing tasks. The objective of this study was to investigate the usefulness of image quality evaluation based on image quality metrics and visual perception, in ultrasound imaging of the carotid artery after normalization and speckle reduction filtering. Image quality was evaluated based on statistical and texture features, image quality evaluation metrics, and visual perception evaluation made by two experts. These were computed on 80 longitudinal ultrasound images of the carotid bifurcation recorded from two different ultrasound scanners, the HDI ATL-3000 and the HDI ATL-5000 scanner, before (NF) and after (DS) speckle reduction filtering, after normalization (N), and after normalization and speckle reduction filtering (NDS). The results of this study showed that: (1) the normalized speckle reduction, NDS, images were rated visually better on both scanners; (2) the NDS images showed better statistical and texture analysis results on both scanners; (3) better image quality evaluation results were obtained between the original (NF) and normalized (N) images, i.e. NF-N, for both scanners, followed by the NF-DS images for the ATL HDI-5000 scanner and the NF-DS on the HDI ATL-3000 scanner; (4) the ATL HDI-5000 scanner images have considerably higher entropy than the ATL HDI-3000 scanner images and thus more information content. However, based on the visual evaluation by the two experts, both scanners were rated similarly. The above findings are also in agreement with the visual perception evaluation, carried out by the two vascular experts. The results of this study showed that ultrasound image normalization and speckle reduction filtering are important preprocessing steps favoring image quality, and should be further investigated.
Affiliation(s)
- C P Loizou
- Department of Computer Science, Intercollege, 92 Ayias Phylaxeos Str., PO Box 51604, 3507 Limassol, Cyprus.
|
8986
|
Yue Y, Croitoru MM, Bidani A, Zwischenberger JB, Clark JW. Nonlinear multiscale wavelet diffusion for speckle suppression and edge enhancement in ultrasound images. IEEE Trans Med Imaging 2006; 25:297-311. [PMID: 16524086 DOI: 10.1109/tmi.2005.862737] [Citation(s) in RCA: 35] [Impact Index Per Article: 1.8]
Abstract
This paper introduces a novel nonlinear multiscale wavelet diffusion method for ultrasound speckle suppression and edge enhancement. This method is designed to utilize the favorable denoising properties of two frequently used techniques: the sparsity and multiresolution properties of the wavelet, and the iterative edge enhancement feature of nonlinear diffusion. With fully exploited knowledge of speckle image models, the edges of images are detected using normalized wavelet modulus. Relying on this feature, both the envelope-detected speckle image and the log-compressed ultrasonic image can be directly processed by the algorithm without need for additional preprocessing. Speckle is suppressed by employing the iterative multiscale diffusion on the wavelet coefficients. With a tuning diffusion threshold strategy, the proposed method can improve the image quality for both visualization and auto-segmentation applications. We validate our method using synthetic speckle images and real ultrasonic images. Performance improvement over other despeckling filters is quantified in terms of noise suppression and edge preservation indices.
Affiliation(s)
- Yong Yue
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA.
|
8987
|
Sheikh HR, Bovik AC. Image information and visual quality. IEEE Trans Image Process 2006; 15:430-44. [PMID: 16479813 DOI: 10.1109/tip.2005.859378] [Citation(s) in RCA: 586] [Impact Index Per Article: 30.8]
Abstract
Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Image QA algorithms generally interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by signal fidelity measures. In this paper, we approach the image QA problem as an information fidelity problem. Specifically, we propose to quantify the loss of image information to the distortion process and explore the relationship between image information and visual quality. QA systems are invariably involved with judging the visual quality of "natural" images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of such natural signals. Using these models, we previously presented an information fidelity criterion for image QA that related image quality with the amount of information shared between a reference and a distorted image. In this paper, we propose an image information measure that quantifies the information that is present in the reference image and how much of this reference information can be extracted from the distorted image. Combining these two quantities, we propose a visual information fidelity measure for image QA. We validate the performance of our algorithm with an extensive subjective study involving 779 images and show that our method outperforms recent state-of-the-art image QA algorithms by a sizeable margin in our simulations. The code and the data from the subjective study are available at the LIVE website.
Affiliation(s)
- Hamid Rahim Sheikh
- Laboratory for Image and Video Engineering, Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712-1084, USA.
|
8988
|
Shnayderman A, Gusev A, Eskicioglu AM. An SVD-based grayscale image quality measure for local and global assessment. IEEE Trans Image Process 2006; 15:422-9. [PMID: 16479812 DOI: 10.1109/tip.2005.860605] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.2]
Abstract
The important criteria used in subjective evaluation of distorted images include the amount of distortion, the type of distortion, and the distribution of error. An ideal image quality measure should, therefore, be able to mimic the human observer. We present a new grayscale image quality measure that can be used as a graphical or a scalar measure to predict the distortion introduced by a wide range of noise sources. Based on singular value decomposition, it reliably measures the distortion not only within a distortion type at different distortion levels, but also across different distortion types. The measure was applied to five test images (airplane, boat, goldhill, Lena, and peppers) using six types of distortion (JPEG, JPEG 2000, Gaussian blur, Gaussian noise, sharpening, and DC-shifting), each with five distortion levels. Its performance is compared with PSNR and two recent measures.
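The construction admits a compact sketch: the per-block Euclidean distance between singular values of the reference and distorted blocks gives the graphical (local) map, and the mean absolute deviation of those distances from their median gives the scalar. The 8x8 block size and this exact aggregation are assumed from the paper's outline.

```python
import numpy as np

def svd_quality(ref, dist, block=8):
    h, w = (s - s % block for s in ref.shape)
    d = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            s_r = np.linalg.svd(ref[i:i+block, j:j+block].astype(float), compute_uv=False)
            s_d = np.linalg.svd(dist[i:i+block, j:j+block].astype(float), compute_uv=False)
            d.append(np.sqrt(np.sum((s_r - s_d) ** 2)))   # per-block singular-value distance
    d = np.asarray(d)
    graphical = d.reshape(h // block, w // block)          # local distortion map
    scalar = float(np.mean(np.abs(d - np.median(d))))      # global scalar measure
    return graphical, scalar
```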
Affiliation(s)
- Aleksandr Shnayderman
- Department of Computer and Information Science, Brooklyn College, City University of New York, Brooklyn, NY 11210, USA.
|
8990
|
Lan X, Roth S, Huttenlocher D, Black MJ. Efficient Belief Propagation with Learned Higher-Order Markov Random Fields. Computer Vision – ECCV 2006, 2006. [DOI: 10.1007/11744047_21] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.8]
|
8991
|
Walsh AC, Updike PG, Sadda SR. Quantitative Fluorescein Angiography. Retina 2006. [DOI: 10.1016/b978-0-323-02598-0.50058-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
|
8992
|
Niu Y, Shen L. An Adaptive Multi-objective Particle Swarm Optimization for Color Image Fusion. Lecture Notes in Computer Science 2006. [DOI: 10.1007/11903697_60] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.5]
|
8993
|
Malo J, Epifanio I, Navarro R, Simoncelli EP. Nonlinear image representation for efficient perceptual coding. IEEE Trans Image Process 2006; 15:68-80. [PMID: 16435537 DOI: 10.1109/tip.2005.860325] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.8]
Abstract
Image compression systems commonly operate by transforming the input signal into a new representation whose elements are independently quantized. The success of such a system depends on two properties of the representation. First, the coding rate is minimized only if the elements of the representation are statistically independent. Second, the perceived coding distortion is minimized only if the errors in a reconstructed image arising from quantization of the different elements of the representation are perceptually independent. We argue that linear transforms cannot achieve either of these goals and propose, instead, an adaptive nonlinear image representation in which each coefficient of a linear transform is divided by a weighted sum of coefficient amplitudes in a generalized neighborhood. We then show that the divisive operation greatly reduces both the statistical and the perceptual redundancy amongst representation elements. We develop an efficient method of inverting this transformation, and we demonstrate through simulations that the dual reduction in dependency can greatly improve the visual quality of compressed images.
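The divisive operation has a one-line core: each coefficient is divided by a constant plus a weighted sum of neighboring coefficient amplitudes. Gaussian pooling weights and the constants below are illustrative rather than the paper's fitted model, and the inversion method is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def divisive_normalization(coeffs, b=0.1, sigma=2.0, gamma=1.0):
    amp = np.abs(coeffs.astype(float)) ** gamma
    pool = gaussian_filter(amp, sigma)   # weighted neighborhood sum of amplitudes
    return coeffs / (b + pool)           # each coefficient divided by its pooled context
```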
Affiliation(s)
- Jesus Malo
- Departament d'Optica, Universitat de València, 46100 Burjassot, València, Spain.
|
8994
|
Aja-Fernández S, Estépar RSJ, Alberola-López C, Westin CF. Image quality assessment based on local variance. Conf Proc IEEE Eng Med Biol Soc 2006; 2006:4815-4818. [PMID: 17946653 DOI: 10.1109/iembs.2006.259516] [Citation(s) in RCA: 17] [Impact Index Per Article: 0.9]
Abstract
A new and complementary method to assess image quality is presented. It is based on the comparison of the local variance distribution of two images. This new quality index is better suited to assess the non-stationarity of images, therefore it explicitly focuses on the image structure. We show that this new index outperforms other methods for the assessment of image quality in medical images.
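A sketch of the idea, assuming an SSIM-style comparison of the two local-variance maps (their means, spreads, and cross-correlation); the exact form and weighting of the published index may differ.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance_map(x, size=5):
    m = uniform_filter(x.astype(float), size)
    return uniform_filter(x.astype(float) ** 2, size) - m ** 2

def local_variance_index(a, b, size=5, eps=1e-12):
    va, vb = local_variance_map(a, size), local_variance_map(b, size)
    mu_a, mu_b = va.mean(), vb.mean()
    sa, sb = va.std(), vb.std()
    sab = np.mean((va - mu_a) * (vb - mu_b))   # covariance of the two variance maps
    return ((2 * mu_a * mu_b + eps) / (mu_a ** 2 + mu_b ** 2 + eps)
            * (2 * sa * sb + eps) / (sa ** 2 + sb ** 2 + eps)
            * (sab + eps) / (sa * sb + eps))
```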
|
8995
|
Sheikh HR, Bovik AC, de Veciana G. An information fidelity criterion for image quality assessment using natural scene statistics. IEEE Trans Image Process 2005; 14:2117-28. [PMID: 16370464 DOI: 10.1109/tip.2005.859389] [Citation(s) in RCA: 249] [Impact Index Per Article: 12.5]
Abstract
Measurement of visual quality is of fundamental importance to numerous image and video processing applications. The goal of quality assessment (QA) research is to design algorithms that can automatically assess the quality of images or videos in a perceptually consistent manner. Traditionally, image QA algorithms interpret image quality as fidelity or similarity with a "reference" or "perfect" image in some perceptual space. Such "full-reference" QA methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychovisual features of the human visual system (HVS), or by arbitrary signal fidelity criteria. In this paper, we approach the problem of image QA by proposing a novel information fidelity criterion that is based on natural scene statistics. QA systems are invariably involved with judging the visual quality of images and videos that are meant for "human consumption." Researchers have developed sophisticated models to capture the statistics of natural signals, that is, pictures and videos of the visual environment. Using these statistical models in an information-theoretic setting, we derive a novel QA algorithm that provides clear advantages over the traditional approaches. In particular, it is parameterless and outperforms current methods in our testing. We validate the performance of our algorithm with an extensive subjective study involving 779 images. We also show that, although our approach distinctly departs from traditional HVS-based methods, it is functionally similar to them under certain conditions, yet it outperforms them due to improved modeling. The code and the data from the subjective study are available at the LIVE website.
Affiliation(s)
- Hamid Rahim Sheikh
- Laboratory for Image and Video Engineering, Department of Electrical and Computer Engineering, The University of Texas, Austin, TX 78712-1084, USA.
|
8996
|
Bernas T, Robinson JP, Asem EK, Rajwa B. Loss of image quality in photobleaching during microscopic imaging of fluorescent probes bound to chromatin. JOURNAL OF BIOMEDICAL OPTICS 2005; 10:064015. [PMID: 16409080 DOI: 10.1117/1.2136313] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Prolonged excitation of fluorescent probes eventually leads to loss of their capacity to emit light. The resulting decrease in the number of detected photons subsequently reduces the resolving power of a fluorescence microscope. Adverse effects of fluorescence intensity loss on the quality of microscopic images of biological specimens have been recognized, but not determined quantitatively. We propose three human-independent methods of quality determination. These techniques require no reference images and are based on calculation of the actual resolution distance, information entropy, and signal-to-noise ratio (SNR). We apply the three measures to study the effect of photobleaching in cell nuclei stained with propidium iodide (PI) and chromomycin A3 (CA3) and imaged with fluorescence confocal microscopy. We conclude that the relative loss of image quality is smaller than the corresponding decrease in fluorescence intensity. Furthermore, the extent of quality loss is related to the optical properties of the imaging system and the noise characteristics of the detector. We discuss the importance of these findings for optimal registration and compression of biological images.
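Of the three reference-free measures named here, the entropy and SNR terms are straightforward to sketch (Python/NumPy); these are generic estimators, not necessarily the paper's, and the noise-only background region for the SNR is assumed to be supplied by the user.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the gray-level histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def snr_estimate(img, background):
    """SNR: mean signal over the standard deviation of a noise-only
    (background) region from the same acquisition."""
    return float(img.mean() / (background.std() + 1e-12))
```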
Affiliation(s)
- Tytus Bernas
- University of Silesia, Faculty of Biology and Protection of Environment, Department of Plant Anatomy and Cytology, Jagiellonska 28, Katowice, Poland
|
8997
|
Lu Z, Lin W, Yang X, Ong E, Yao S. Modeling visual attention's modulatory aftereffects on visual sensitivity and quality evaluation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2005; 14:1928-42. [PMID: 16279190 DOI: 10.1109/tip.2005.854478] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
With the fast development of applications that involve visual noise shaping (visual compression, error resilience, watermarking, encryption, and display), there is an increasingly significant demand for incorporating perceptual characteristics into these applications for improved performance. In this paper, a very important mechanism of the human brain, visual attention, is introduced for visual sensitivity and visual quality evaluation. Based on this analysis, a new numerical measure of visual attention's modulatory aftereffects, the perceptual quality significance map (PQSM), is proposed. To a certain extent, the PQSM statistically reflects the human brain's processing ability for local visual content. The PQSM is generated by integrating local perceptual stimuli from color contrast, texture contrast, and motion, as well as cognitive features (skin color and face in this study). Experimental results with subjective viewing demonstrate the performance improvement on two PQSM-modulated visual sensitivity models and two PQSM-based visual quality metrics.
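The integration step, fusing per-feature stimulus maps into one significance map that then modulates a sensitivity model, might look like the following (Python/NumPy). The linear weighting and min-max normalization are our assumptions, not the paper's fusion rule.

```python
import numpy as np

def fuse_pqsm(feature_maps, weights=None, eps=1e-12):
    """Combine per-pixel feature maps (e.g., color contrast, texture
    contrast, motion, skin/face likelihood) into one map in [0, 1]."""
    maps = [(m - m.min()) / (m.max() - m.min() + eps) for m in feature_maps]
    if weights is None:
        weights = [1.0 / len(maps)] * len(maps)
    fused = sum(w * m for w, m in zip(weights, maps))
    return (fused - fused.min()) / (fused.max() - fused.min() + eps)

def modulate_sensitivity(sensitivity_map, pqsm, strength=1.0):
    """Scale a per-pixel visual-sensitivity map by the significance map."""
    return sensitivity_map * (1.0 + strength * pqsm)
```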
Affiliation(s)
- Zhongkang Lu
- Institute for Infocomm Research, Agency for Science, Technology, and Research, Singapore.
|
8998
|
Loizou CP, Pattichis CS, Christodoulou CI, Istepanian RSH, Pantziaris M, Nicolaides A. Comparative evaluation of despeckle filtering in ultrasound imaging of the carotid artery. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2005; 52:1653-69. [PMID: 16382618 DOI: 10.1109/tuffc.2005.1561621] [Citation(s) in RCA: 120] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
It is well known that speckle is a multiplicative noise that degrades visual evaluation in ultrasound imaging. Recent advancements in ultrasound instrumentation and portable ultrasound devices call for more robust despeckling techniques for enhanced ultrasound medical imaging, in both routine clinical practice and teleconsultation. The objective of this work was to carry out a comparative evaluation of despeckle filtering based on texture analysis, image quality evaluation metrics, and visual evaluation by medical experts in the assessment of 440 (220 asymptomatic and 220 symptomatic) ultrasound images of the carotid artery bifurcation. In this paper, a total of 10 despeckle filters were evaluated, based on local statistics, median filtering, pixel homogeneity, geometric filtering, homomorphic filtering, anisotropic diffusion, nonlinear coherence diffusion, and wavelet filtering. The results of this study suggest that the first-order statistics filter lsmv gave the best performance, followed by the geometric filter gf4d and the homogeneous mask area filter lsminsc. These filters improved the class separation between the asymptomatic and symptomatic classes based on the statistics of the extracted texture features, gave only a marginal improvement in the classification success rate, and improved the visual assessment carried out by the two experts. More specifically, filters lsmv or gf4d can be used for despeckling asymptomatic images, in which the expert is interested mainly in plaque composition and texture analysis; and filters lsmv, gf4d, or lsminsc can be used for despeckling symptomatic images, in which the expert is interested in identifying the degree of stenosis and the plaque borders. The proper selection of a despeckle filter is very important in the enhancement of ultrasonic imaging of the carotid artery. Further work is needed to evaluate, at a larger scale and in clinical practice, the performance of the proposed despeckle filters in the automated segmentation, texture analysis, and classification of carotid ultrasound imaging.
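The best performer, lsmv, is a first-order local-statistics (Lee-type) filter. A minimal sketch of that filter family in Python/NumPy; the window size and the global noise-variance estimate are our own choices, not the paper's exact parameterization.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lsmv_despeckle(img, size=5, noise_var=None):
    """Lee-type filter: pull pixels toward the local mean in flat
    regions, keep them near the observed value where local variance
    is high (edges and structure)."""
    img = img.astype(float)
    mean = uniform_filter(img, size=size)
    var = np.clip(uniform_filter(img ** 2, size=size) - mean ** 2, 0, None)
    if noise_var is None:
        noise_var = float(np.median(var))   # crude global noise estimate
    k = var / (var + noise_var + 1e-12)     # adaptive gain in [0, 1)
    return mean + k * (img - mean)
```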
Affiliation(s)
- Christos P Loizou
- Department of Computer Science, Intercollege, CY-3507 Limassol, Cyprus.
|
8999
|
Reinsberg SA, Doran SJ, Charles-Edwards EM, Leach MO. A complete distortion correction for MR images: II. Rectification of static-field inhomogeneities by similarity-based profile mapping. Phys Med Biol 2005; 50:2651-61. [PMID: 15901960 DOI: 10.1088/0031-9155/50/11/014] [Citation(s) in RCA: 67] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Radiotherapy treatment planning relies on the use of geometrically correct images. This paper presents a fully automatic tool for correcting MR images for the effects of B0 inhomogeneities. The post-processing method is based on the gradient-reversal technique of Chang and Fitzpatrick (1992 IEEE Trans. Med. Imaging 11 319-29), which combines two otherwise identical images acquired with a forward and a reversed read gradient. This paper demonstrates how maximization of mutual information for registration of the forward and reverse read-gradient images eliminates the need for user interaction during the correction. Image quality is preserved to a degree not reported previously.
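The registration criterion used here, mutual information between the forward- and reverse-gradient images, can be estimated from a joint gray-level histogram. A generic sketch (Python/NumPy), not the authors' implementation:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Estimate I(A; B) in bits from the joint histogram of two
    equally sized images; maximized when the images are aligned."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of A
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of B
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log2(p_ab[nz] / (p_a @ p_b)[nz])))
```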
Affiliation(s)
- Stefan A Reinsberg
- Cancer Research UK Clinical MR Research Group, Royal Marsden NHS Trust/Institute of Cancer Research, Downs Rd, Sutton SM2 5PT, UK.
|
9000
|
Maeder AJ. The image importance approach to human vision based image quality characterization. Pattern Recognit Lett 2005. [DOI: 10.1016/j.patrec.2004.10.018] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|