51
Zhang R, Lu C, Liu J. A high capacity reversible data hiding scheme for encrypted covers based on histogram shifting. Journal of Information Security and Applications 2019. [DOI: 10.1016/j.jisa.2019.05.005]
52
GPU-Based Lossless Compression of Aurora Spectral Data using Online DPCM. Remote Sensing 2019. [DOI: 10.3390/rs11141635]
Abstract
It is well known that aurorae have very high research value, but the volume of aurora spectral data is very large, which poses great challenges to storage and transmission. To alleviate this problem, compression of aurora spectral data is indispensable. This paper presents a parallel Compute Unified Device Architecture (CUDA) implementation of the prediction-based online Differential Pulse Code Modulation (DPCM) method for the lossless compression of aurora spectral data. Two improvements are proposed to boost the compression performance of the online DPCM method: one concerns the computation of the prediction coefficients, and the other the encoding of the residual. In the CUDA implementation, we propose a decomposition method for the matrix multiplication that avoids redundant data accesses and calculations. In addition, the CUDA implementation is optimized with multi-stream and multi-graphics processing unit (GPU) techniques. Finally, the average compression time of an aurora spectral image reaches about 0.06 s, much less than the 15 s acquisition interval of the aurora spectral data, leaving ample time for transmission and other subsequent tasks.
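The prediction/residual structure at the core of DPCM can be sketched in a few lines. This is a minimal 1-D illustration with a fixed mean-of-previous-samples predictor; the paper's method instead computes prediction coefficients online on the GPU, which is not reproduced here, and all names below are ours.

```python
import numpy as np

def dpcm_encode(samples, order=2):
    # Predict each sample as the rounded mean of the previous `order`
    # samples (a simple fixed predictor) and emit the integer residual.
    x = np.asarray(samples, dtype=np.int64)
    res = np.empty_like(x)
    for i in range(len(x)):
        past = x[max(0, i - order):i]
        pred = int(round(past.mean())) if past.size else 0
        res[i] = x[i] - pred
    return res

def dpcm_decode(residuals, order=2):
    # Mirror of the encoder: rebuild each sample from its residual and the
    # prediction formed from already-decoded samples, so decoding is lossless.
    r = np.asarray(residuals, dtype=np.int64)
    x = np.empty_like(r)
    for i in range(len(r)):
        past = x[max(0, i - order):i]
        pred = int(round(past.mean())) if past.size else 0
        x[i] = r[i] + pred
    return x
```

On smooth spectral data the residuals concentrate near zero, which is what makes the subsequent entropy coding effective.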
53
A JND-Based Pixel-Domain Algorithm and Hardware Architecture for Perceptual Image Coding. J Imaging 2019; 5:50. [PMID: 34460488] [PMCID: PMC8320967] [DOI: 10.3390/jimaging5050050]
Abstract
This paper presents a hardware efficient pixel-domain just-noticeable difference (JND) model and its hardware architecture implemented on an FPGA. This JND model architecture is further proposed to be part of a low complexity pixel-domain perceptual image coding architecture, which is based on downsampling and predictive coding. The downsampling is performed adaptively on the input image based on regions-of-interest (ROIs) identified by measuring the downsampling distortions against the visibility thresholds given by the JND model. The coding error at any pixel location can be guaranteed to be within the corresponding JND threshold in order to obtain excellent visual quality. Experimental results show the improved accuracy of the proposed JND model in estimating visual redundancies compared with classic JND models published earlier. Compression experiments demonstrate improved rate-distortion performance and visual quality over JPEG-LS as well as reduced compressed bit rates compared with other standard codecs such as JPEG 2000 at the same peak signal-to-perceptible-noise ratio (PSPNR). FPGA synthesis results targeting a mid-range device show very moderate hardware resource requirements and over 100 Megapixel/s throughput of both the JND model and the perceptual encoder.
54
Abstract
In recent years, the rapid growth in the development of and demand for multimedia products has strained device bandwidth and network storage capacity. Consequently, data compression, which reduces data redundancy, becomes ever more significant for enabling the transfer and storage of data. In this context, this paper addresses the problem of lossy image compression. The proposed method is based on a block singular value decomposition (SVD) power method that overcomes the disadvantages of MATLAB’s SVD function for lossy image compression. The experimental results show that the proposed algorithm has better compression performance than existing compression algorithms that use MATLAB’s SVD function. In addition, the proposed approach is simple to implement and can provide different degrees of error resilience, yielding better image compression in a short execution time.
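The power-method route to the SVD that the abstract alludes to can be sketched as follows: power iteration extracts the dominant singular triplet, and deflation yields a rank-k approximation of an image block. This is a generic illustration under our own naming and iteration counts, not the authors' algorithm.

```python
import numpy as np

def top_singular_triplet(A, iters=200, seed=0):
    # Power iteration on A^T A converges to the dominant right singular
    # vector; the singular value and left vector then follow from A @ v.
    v = np.random.default_rng(seed).standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    s = np.linalg.norm(A @ v)
    return (A @ v) / s, s, v

def rank_k_approx(block, k):
    # Repeated power iteration with deflation: subtract each recovered
    # rank-1 component before extracting the next one.
    A = block.astype(float).copy()
    approx = np.zeros_like(A)
    for _ in range(k):
        u, s, v = top_singular_triplet(A)
        approx += s * np.outer(u, v)
        A -= s * np.outer(u, v)
    return approx
```

Storing only k triplets per block instead of the full block is what produces the compression; k controls the rate/quality trade-off.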
Affiliation(s)
- Khalid El Asnaoui
- Complex Systems Engineering and Human Systems, Mohammed VI Polytechnic University, Lot 660, Hay Moulay Rachid, Ben Guerir 43150, Morocco
56
Hernandez-Cabronero M, Sanchez V, Blanes I, Auli-Llinas F, Marcellin MW, Serra-Sagrista J. Mosaic-Based Color-Transform Optimization for Lossy and Lossy-to-Lossless Compression of Pathology Whole-Slide Images. IEEE Transactions on Medical Imaging 2019; 38:21-32. [PMID: 29994394] [DOI: 10.1109/tmi.2018.2852685]
Abstract
The use of whole-slide images (WSIs) in pathology entails stringent storage and transmission requirements because of their huge dimensions. Therefore, image compression is an essential tool to enable efficient access to these data. In particular, color transforms are needed to exploit the very high degree of inter-component correlation and obtain competitive compression performance. Even though the state-of-the-art color transforms remove some redundancy, they disregard important details of the compression algorithm applied after the transform. Therefore, their coding performance is not optimal. We propose an optimization method called mosaic optimization for designing irreversible and reversible color transforms simultaneously optimized for any given WSI and the subsequent compression algorithm. Mosaic optimization is designed to attain reasonable computational complexity and enable continuous scanner operation. Exhaustive experimental results indicate that, for JPEG 2000 at identical compression ratios, the optimized transforms yield images more similar to the original than the other state-of-the-art transforms. Specifically, irreversible optimized transforms outperform the Karhunen-Loève Transform in terms of PSNR (up to 1.1 dB), the HDR-VDP-2 visual distortion metric (up to 3.8 dB), and the accuracy of computer-aided nuclei detection tasks (F1 score up to 0.04 higher). In addition, reversible optimized transforms achieve PSNR, HDR-VDP-2, and nuclei detection accuracy gains of up to 0.9 dB, 7.1 dB, and 0.025, respectively, when compared with the reversible color transform in lossy-to-lossless compression regimes.
57
Pixel-Value-Ordering based Reversible Information Hiding Scheme with Self-Adaptive Threshold Strategy. Symmetry (Basel) 2018. [DOI: 10.3390/sym10120764]
Abstract
The pixel value ordering (PVO) hiding scheme is a data embedding technique that hides a secret message in the difference between the largest and second-largest pixels of a block. Later, scholars improved the PVO scheme by using a threshold to determine whether a block is smooth or complex; only a smooth block can be used to hide information. Researchers have analyzed all possible thresholds to find the proper one for hiding the secret message, but this is time consuming. Others decompose the smooth block into four smaller blocks to hide more messages and increase image quality; however, the complexity of the block matters more than its size. Hence, this study proposes an improved method. The proposed scheme analyzes the variation of the region to judge the complexity of the block and applies a quantization strategy to the pixels to ensure reversibility. It adopts an adaptive threshold generation mechanism to find the proper threshold for different images. The results show that the image quality of the proposed scheme is higher than that of the other methods. The proposed scheme also lets the user adjust the hiding rate to achieve higher image quality or hiding capacity.
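The basic PVO rule described above (carry a bit in the max/second-max difference, otherwise shift the maximum) can be sketched for a single block. This simplified illustration assumes distinct pixel values and omits the threshold and quantization machinery of the paper; when the block cannot carry a bit, the bit argument is simply not consumed.

```python
def pvo_embed(block, bit):
    # Simplified PVO embedding; assumes all pixel values are distinct.
    # d == 1  -> the block carries one data bit in the maximum pixel;
    # d >= 2  -> the maximum is shifted by 1 so decoding stays unambiguous
    #            (no bit is consumed in that case).
    p = list(block)
    order = sorted(range(len(p)), key=lambda k: p[k])
    i_max, i_2nd = order[-1], order[-2]
    d = p[i_max] - p[i_2nd]
    if d == 1:
        p[i_max] += bit
    else:
        p[i_max] += 1
    return p

def pvo_extract(block):
    # Returns (restored_block, bit); bit is None when the block carried none.
    p = list(block)
    order = sorted(range(len(p)), key=lambda k: p[k])
    i_max, i_2nd = order[-1], order[-2]
    d = p[i_max] - p[i_2nd]
    if d == 1:
        return p, 0
    p[i_max] -= 1
    return (p, 1) if d == 2 else (p, None)
```

Because only the maximum is ever modified, it remains the maximum, and the decoder can always undo the change exactly, which is what makes the scheme reversible.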
58
Abstract
Hyperspectral imaging (HSI) technology has been used for various remote sensing applications due to its excellent capability of monitoring regions-of-interest over a period of time. However, the large data volume of four-dimensional multitemporal hyperspectral imagery demands massive data compression techniques. While conventional 3D hyperspectral data compression methods exploit only spatial and spectral correlations, we propose a simple yet effective predictive lossless compression algorithm that can achieve significant gains on compression efficiency, by also taking into account temporal correlations inherent in the multitemporal data. We present an information theoretic analysis to estimate potential compression performance gain with varying configurations of context vectors. Extensive simulation results demonstrate the effectiveness of the proposed algorithm. We also provide in-depth discussions on how to construct the context vectors in the prediction model for both multitemporal HSI and conventional 3D HSI data.
59
An Efficient Middle Layer Platform for Medical Imaging Archives. Journal of Healthcare Engineering 2018; 2018:3984061. [PMID: 30034674] [PMCID: PMC6033252] [DOI: 10.1155/2018/3984061]
Abstract
Digital medical image usage is common in health services and clinics. These data are vitally important for diagnosis and treatment; therefore, their preservation, protection, and archiving are a challenge. Rapidly growing file sizes, differentiated data formats, and an increasing number of files constitute big data, which traditional systems lack the capability to process and store. This study investigates an efficient middle layer platform based on Hadoop and MongoDB architecture using state-of-the-art technologies from the literature. We developed this system to extend our previously developed medical image compression method into a middle layer platform that performs data compression and archiving operations. The platform is built on the MapReduce programming model on Hadoop and is scalable. MongoDB, a NoSQL database, is used to satisfy the platform's performance requirements. A four-node Hadoop cluster was built to evaluate the platform and execute distributed MapReduce algorithms. Actual patient medical images were used to validate its performance. Processing the test images takes 15,599 seconds on a single node but 8,153 seconds on the developed platform. Moreover, owing to the medical image processing package used in the proposed method, the compression ratio values produced for the non-ROI image are between 92.12% and 97.84%. In conclusion, the proposed platform provides a cloud-based integrated solution to the medical image archiving problem.
61
Gao L, Gao T, Zhao J, Liu Y. Reversible Watermarking in Digital Image Using PVO and RDWT. International Journal of Digital Crime and Forensics 2018. [DOI: 10.4018/ijdcf.2018040103]
Abstract
This article proposes a reversible digital image watermarking scheme using PVO and the Redundant Discrete Wavelet Transform (RDWT). PVO is introduced to enhance the embedding capacity. By embedding the watermark in the RDWT coefficients, the proposed scheme exploits the visual masking property of RDWT to achieve better visual quality. The scheme also performs better on embedding capacity because RDWT provides several sub-bands of coefficients for embedding. Experimental results on natural and medical images suggest that the proposed scheme meets the demands of perceptual quality with better embedding capacity than former schemes.
Affiliation(s)
- Lin Gao
- College of Software, Nankai University, Tianjin, China & School of Computer and Information Engineering, Tianjin ChengJian University, Tianjin, China
- Tiegang Gao
- College of Software, Nankai University, Tianjin, China
- Jie Zhao
- School of Computer and Information Engineering, Tianjin Chengjian University, Tianjin, China
- Yonglei Liu
- School of Electric and Information Engineering, Tianjin University, Tianjin, China
62
Liu F, Hernandez-Cabronero M, Sanchez V, Marcellin MW, Bilgin A. The Current Role of Image Compression Standards in Medical Imaging. Information 2017; 8:131. [PMID: 34671488] [PMCID: PMC8525863] [DOI: 10.3390/info8040131]
Abstract
With increasing utilization of medical imaging in clinical practice and the growing dimensions of data volumes generated by various medical imaging modalities, the distribution, storage, and management of digital medical image data sets requires data compression. Over the past few decades, several image compression standards have been proposed by international standardization organizations. This paper discusses the current status of these image compression standards in medical imaging applications together with some of the legal and regulatory issues surrounding the use of compression in medical settings.
Affiliation(s)
- Feng Liu
- College of Electronic Information and Optical Engineering, Nankai University, Haihe Education Park, 38 Tongyan Road, Jinnan District, Tianjin 300353, P. R. China
- Miguel Hernandez-Cabronero
- Department of Electrical and Computer Engineering, The University of Arizona, 1230 E. Speedway Blvd, Tucson, AZ 85721, USA
- Victor Sanchez
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, United Kingdom
- Michael W. Marcellin
- Department of Electrical and Computer Engineering, The University of Arizona, 1230 E. Speedway Blvd, Tucson, AZ 85721, USA
- Ali Bilgin
- Department of Electrical and Computer Engineering, The University of Arizona, 1230 E. Speedway Blvd, Tucson, AZ 85721, USA
- Department of Biomedical Engineering, The University of Arizona, 1127 E. James E. Rogers Way, Tucson, AZ 85721, USA
- Department of Medical Imaging, The University of Arizona, 1501 N. Campbell Ave., Tucson, AZ 85724, USA
63
Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction. Med Biol Eng Comput 2017; 56:957-966. [PMID: 29105018] [DOI: 10.1007/s11517-017-1741-8]
Abstract
To improve the compression rates for lossless compression of medical images, an efficient algorithm based on irregular segmentation and region-based prediction is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation of medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
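A least-squares predictor of the kind applied per region can be sketched as follows. This is a simplified pixel-wise version with a fixed causal training window (in the spirit of EDP-style LS prediction), not the paper's region-adaptive design; the window size and the four-neighbour set are our assumptions.

```python
import numpy as np

NBRS = [(0, -1), (-1, 0), (-1, -1), (-1, 1)]  # W, N, NW, NE causal neighbours

def ls_predict(img, i, j, T=6):
    # Fit prediction coefficients by least squares over a causal training
    # window above/left of (i, j), then apply them to the pixel's neighbours.
    X, y = [], []
    for di in range(-T, 1):
        for dj in range(-T, T + 1):
            if di == 0 and dj >= 0:
                continue                      # strictly causal samples only
            r, c = i + di, j + dj
            if r < 1 or c < 1 or c >= img.shape[1] - 1:
                continue                      # all neighbours must exist
            X.append([img[r + a, c + b] for a, b in NBRS])
            y.append(img[r, c])
    if len(X) < len(NBRS):
        return float(img[i, j - 1])           # fall back to the left pixel
    w, *_ = np.linalg.lstsq(np.asarray(X, float), np.asarray(y, float),
                            rcond=None)
    ctx = np.asarray([img[i + a, j + b] for a, b in NBRS], float)
    return float(ctx @ w)
```

The residual (actual minus predicted value) is what the entropy coder would then encode; on locally planar regions the predictor is near exact.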
64
Lucas LFR, Rodrigues NMM, da Silva Cruz LA, de Faria SMM. Lossless Compression of Medical Images Using 3-D Predictors. IEEE Transactions on Medical Imaging 2017; 36:2250-2260. [PMID: 28613165] [DOI: 10.1109/tmi.2017.2714640]
Abstract
This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CTs or MRIs. The proposed method, referred to as 3-D-MRP, is based on the principle of minimum rate predictors (MRPs), which is one of the state-of-the-art lossless compression technologies presented in the data compression literature. The main features of the proposed method include the use of 3-D predictors, 3-D block octree partitioning and classification, volume-based optimization, and support for 16-bit-depth images. Experimental results demonstrate the efficiency of the 3-D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8- and 16-bit-depth contents, respectively, when compared with JPEG-LS, JPEG2000, CALIC, and HEVC, as well as other proposals based on the MRP algorithm.
65
Alam MW, Hasan MM, Mohammed SK, Deeba F, Wahid KA. Are Current Advances of Compression Algorithms for Capsule Endoscopy Enough? A Technical Review. IEEE Rev Biomed Eng 2017; 10:26-43. [PMID: 28961125] [DOI: 10.1109/rbme.2017.2757013]
Abstract
Recent technological advances in capsule endoscopy systems have revolutionized healthcare by introducing new techniques and functionalities for diagnosing the gastrointestinal tract. These techniques improve diagnostic accuracy and reduce the risk of hospitalization. Although many benefits of capsule endoscopy are known, limitations remain, including limited battery life, high bandwidth requirements, poor image quality, and low frame rate, which have restricted its wide use. To address these limitations, a low-cost compression algorithm that produces a higher frame rate with better image quality while consuming less bandwidth and transmission power is paramount. While several review papers have described the capability of the capsule endoscope in terms of its functionality and emerging features, an extensive review of compression algorithms, past and future, is still of great interest. Hence, this review explores the characteristics of endoscopic images, analyzes the strengths and weaknesses of useful compression techniques, and makes suggestions for possible future adaptation.
66
Haddad S, Coatrieux G, Cozic M, Bouslimi D. Joint Watermarking and Lossless JPEG-LS Compression for Medical Image Security. Ing Rech Biomed 2017. [DOI: 10.1016/j.irbm.2017.06.007]
67
Wu MT. Artifact Effect Reduction Using Power and Entropy Algorithms. Int J Pattern Recogn 2017. [DOI: 10.1142/s021800141754012x]
Abstract
In this paper, a method is proposed to reduce artifact effects in images. The proposed approach first decomposes an original image into four subband images. The powers of the four sub-images are calculated and compared with a power threshold, together with an entropy measure, to decide whether each sub-image is retained. The pixels of the retained sub-images are then compared with a pixel threshold, together with a different entropy measure, to determine whether each pixel is discarded. Experimental comparisons with other methods, performed both subjectively and objectively, show that the proposed algorithms achieve the desired reconstruction quality.
Affiliation(s)
- Ming-Te Wu
- Department of Information Technology, Kao Yuan University, No. 1821, Zhongshan Rd., Luzhu Dist., Kaohsiung City 82151, Taiwan
68
Comparative Performance Evaluation of Three Image Compression Algorithms. Journal of Applied Science & Process Engineering 2017. [DOI: 10.33736/jaspe.371.2017]
Abstract
The advent of the computer and the internet has brought massive change to the way images are managed. This revolution has altered image processing and management and created huge space requirements for uploading, downloading, transferring, and storing images. To guard against this huge space requirement, images need to be compressed before storage or transmission. Several image compression algorithms or techniques have been developed in the literature. In this study, three of these algorithms were implemented in MATLAB: discrete cosine transform (DCT), discrete wavelet transform (DWT), and set partitioning in hierarchical trees (SPIHT). To ascertain which is most appropriate for image storage and transmission, comparative performance evaluations were conducted on the three algorithms using five performance indices. The results show that all three algorithms are effective in image compression but with different efficiency rates. DWT has the highest compression ratio and distortion level, SPIHT the lowest, with DCT falling in-between. The results also show that the lower the mean square error and the higher the peak signal-to-noise ratio, the lower the distortion level in the compressed image.
69
Starosolski R. Skipping Selected Steps of DWT Computation in Lossless JPEG 2000 for Improved Bitrates. PLoS One 2016; 11:e0168704. [PMID: 28006015] [PMCID: PMC5178996] [DOI: 10.1371/journal.pone.0168704]
Abstract
In order to improve bitrates of lossless JPEG 2000, we propose to modify the discrete wavelet transform (DWT) by skipping selected steps of its computation. We employ a heuristic to construct the skipped steps DWT (SS-DWT) in an image-adaptive way and define fixed SS-DWT variants. For a large and diverse set of images, we find that SS-DWT significantly improves bitrates of non-photographic images. From a practical standpoint, the most interesting results are obtained by applying entropy estimation of coding effects for selecting among the fixed SS-DWT variants. This way we get the compression scheme that, as opposed to the general SS-DWT case, is compliant with the JPEG 2000 part 2 standard. It provides average bitrate improvement of roughly 5% for the entire test-set, whereas the overall compression time becomes only 3% greater than that of the unmodified JPEG 2000. Bitrates of photographic and non-photographic images are improved by roughly 0.5% and 14%, respectively. At a significantly increased cost of exploiting a heuristic, selecting the steps to be skipped based on the actual bitrate instead of an estimated one, and by applying reversible denoising and lifting steps to SS-DWT, we have attained greater bitrate improvements of up to about 17.5% for non-photographic images.
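The notion of skipping a lifting step while keeping the transform reversible can be illustrated with the integer 5/3 lifting scheme. This 1-D sketch with a `skip_update` flag conveys the spirit of SS-DWT only; the paper's heuristic step selection, entropy estimation, and JPEG 2000 integration are not reproduced, and the boundary handling here is our own simplification.

```python
def fwd53(x, skip_update=False):
    # One level of the reversible integer 5/3 lifting DWT on an even-length
    # 1-D signal. `skip_update` omits the update lifting step, in the
    # spirit of SS-DWT's skipped steps; reversibility is preserved.
    n = len(x)
    assert n % 2 == 0
    d = []
    for i in range(n // 2):                    # prediction step -> details
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        d.append(x[2 * i + 1] - (x[2 * i] + right) // 2)
    s = []
    for i in range(n // 2):                    # update step -> smooth band
        if skip_update:
            s.append(x[2 * i])
        else:
            left = d[i - 1] if i > 0 else d[0]
            s.append(x[2 * i] + (left + d[i] + 2) // 4)
    return s, d

def inv53(s, d, skip_update=False):
    # Exact inverse: undo the update step (if it was applied), then the
    # prediction step, using the same integer floor divisions.
    n = 2 * len(s)
    x = [0] * n
    for i in range(len(s)):
        if skip_update:
            x[2 * i] = s[i]
        else:
            left = d[i - 1] if i > 0 else d[0]
            x[2 * i] = s[i] - (left + d[i] + 2) // 4
    for i in range(len(s)):
        right = x[2 * i + 2] if 2 * i + 2 < n else x[2 * i]
        x[2 * i + 1] = d[i] + (x[2 * i] + right) // 2
    return x
```

Because each lifting step is inverted by the same integer expression with the opposite sign, the transform stays lossless whether or not the update step is applied, which is exactly what allows steps to be skipped adaptively.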
Affiliation(s)
- Roman Starosolski
- Institute of Informatics, Faculty of Automatic Control, Electronics and Computer Science, Silesian University of Technology, Gliwice, Poland
70
Jaferzadeh K, Gholami S, Moon I. Lossless and lossy compression of quantitative phase images of red blood cells obtained by digital holographic imaging. Applied Optics 2016; 55:10409-10416. [PMID: 28059271] [DOI: 10.1364/ao.55.010409]
Abstract
In this paper, we evaluate lossless and lossy compression techniques for quantitative phase images of red blood cells (RBCs) obtained by off-axis digital holographic microscopy (DHM). The RBC phase images are numerically reconstructed from their digital holograms and stored in 16-bit unsigned integer format. For lossless compression, predictive coding with JPEG lossless (JPEG-LS), JPEG2000, and JP3D are evaluated, and compression ratio (CR) and complexity (compression time) are compared against each other; JPEG2000 turns out to achieve the best CR. For lossy compression, JPEG2000 and JP3D are examined at different CRs. Because lossy compression discards some data, the degradation level is measured by comparing morphological and biochemical parameters of the RBCs before and after compression: the morphological parameters are volume, surface area, RBC diameter, and sphericity index, and the biochemical parameter is mean corpuscular hemoglobin (MCH). Experimental results show that JPEG2000 outperforms JP3D not only in mean square error (MSE) as CR increases, but also in compression time for lossy compression. In addition, our results with both algorithms demonstrate that even at high CR values the three-dimensional profile of the RBC can be preserved, and the morphological and biochemical parameters remain within the range of reported values.
72
Sanchez V, Auli-Llinas F, Serra-Sagrista J. Piecewise Mapping in HEVC Lossless Intra-Prediction Coding. IEEE Transactions on Image Processing 2016; 25:4004-4017. [PMID: 28113430] [DOI: 10.1109/tip.2016.2571065]
Abstract
The lossless intra-prediction coding modality of the High Efficiency Video Coding standard provides high coding performance while allowing frame-by-frame access to the coded data. This is of interest in many professional applications, such as medical imaging, automotive vision, and digital preservation in libraries and archives. Various improvements to lossless intra-prediction coding have been proposed recently, most of them based on sample-wise prediction using differential pulse code modulation (DPCM). Other recent proposals aim at further reducing the energy of intra-predicted residual blocks. However, the energy reduction achieved is frequently minimal due to the difficulty of correctly predicting the sign and magnitude of residual values. In this paper, we pursue a novel approach to this energy-reduction problem using piecewise mapping (pwm) functions. In particular, we analyze the range of values in residual blocks and accordingly apply a pwm function that maps specific residual values to unique lower values. We encode the appropriate parameters associated with the pwm functions at the encoder, so that the corresponding inverse pwm functions at the decoder can map values back to the same residual values. These residual values are then used to reconstruct the original signal. The mapping is therefore reversible and introduces no losses. We evaluate the pwm functions on 4 × 4 residual blocks computed after DPCM-based prediction for lossless coding of a variety of camera-captured and screen content sequences. Evaluation results show that the pwm functions attain maximum bit-rate reductions of 5.54% and 28.33% for screen content material compared with DPCM-based and block-wise intra-prediction, respectively. Compared with intra-block copy, piecewise mapping attains a maximum bit-rate reduction of 11.48% for camera-captured material.
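The map-residuals-to-lower-values idea can be illustrated with a toy reversible mapping: each value occurring in a block is replaced by its rank in a table that the encoder would signal as side information. The paper's parameterized pwm functions are different; this sketch only shows why such a value mapping is invertible and can shrink magnitudes.

```python
def map_forward(residuals):
    # Map each residual to its rank in the sorted table of values actually
    # present in the block; the table is the side information a codec
    # would have to signal. Reversible by construction.
    table = sorted(set(residuals))
    rank = {v: k for k, v in enumerate(table)}
    return [rank[v] for v in residuals], table

def map_inverse(mapped, table):
    # The decoder looks each rank back up in the signalled table.
    return [table[k] for k in mapped]
```

When a block uses only a few distinct residual values, the mapped indices are much smaller in magnitude than the residuals themselves, lowering the energy handed to the entropy coder.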
73
Capurro I, Lecumberry F, Martin A, Ramirez I, Rovira E, Seroussi G. Efficient Sequential Compression of Multichannel Biomedical Signals. IEEE J Biomed Health Inform 2016; 21:904-916. [PMID: 27337728] [DOI: 10.1109/jbhi.2016.2582683]
Abstract
This paper proposes lossless and near-lossless compression algorithms for multichannel biomedical signals. The algorithms are sequential and efficient, which makes them suitable for low-latency and low-power signal transmission applications. We make use of information theory and signal processing tools (such as universal coding, universal prediction, and fast online implementations of multivariate recursive least squares), combined with simple methods to exploit spatial as well as temporal redundancies typically present in biomedical signals. The algorithms are tested with publicly available electroencephalogram and electrocardiogram databases, surpassing in all cases the current state of the art in near-lossless and lossless compression ratios.
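The fast online recursive-least-squares prediction mentioned above can be sketched for a single channel as follows; the forgetting factor, regularization, and model order are illustrative choices of ours, and the actual algorithms add universal coding and exploit cross-channel redundancy as well.

```python
import numpy as np

def rls_residuals(signal, order=4, lam=0.995, delta=1e3):
    # Sequential linear prediction with recursive least squares; a codec
    # would entropy-code the residuals e instead of the raw samples.
    w = np.zeros(order)                # predictor coefficients
    P = delta * np.eye(order)          # inverse correlation estimate
    buf = np.zeros(order)              # most recent samples, newest first
    res = []
    for x in signal:
        e = x - w @ buf                # prediction error (the residual)
        k = P @ buf / (lam + buf @ P @ buf)
        w = w + k * e                  # coefficient update
        P = (P - np.outer(k, buf @ P)) / lam
        res.append(e)
        buf = np.roll(buf, 1)
        buf[0] = x
    return np.array(res)
```

On quasi-periodic biomedical signals the residual energy drops well below the signal energy once the predictor has adapted, which is what makes sequential entropy coding of the residuals effective.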
74
Wu H, Sun X, Yang J, Zeng W, Wu F. Lossless Compression of JPEG Coded Photo Collections. IEEE Transactions on Image Processing 2016; 25:2684-2696. [PMID: 27071170] [DOI: 10.1109/tip.2016.2551366]
Abstract
The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared with the JPEG coded image collections, our method achieves average bit savings of more than 31%.
75. Anantha Babu S, Eswaran P, Senthil Kumar C. Lossless Compression Algorithm Using Improved RLC for Grayscale Image. Arabian Journal for Science and Engineering 2016. [DOI: 10.1007/s13369-016-2082-x]
76. Weinlich A, Amon P, Hutter A, Kaup A. Probability Distribution Estimation for Autoregressive Pixel-Predictive Image Coding. IEEE Transactions on Image Processing 2016;25:1382-1395. [PMID: 26829790] [DOI: 10.1109/tip.2016.2522339]
Abstract
Pixelwise linear prediction using backward-adaptive least-squares or weighted least-squares estimation of prediction coefficients is currently among the state-of-the-art methods for lossless image compression. While current research focuses on mean intensity prediction of the pixel to be transmitted, best compression requires occurrence probability estimates for all possible intensity values. Apart from common heuristic approaches, we show how prediction error variance estimates can be derived from the (weighted) least-squares training region and how a complete probability distribution can be built based on an autoregressive image model. The analysis of image stationarity properties further allows us to derive a novel formula for weight computation in weighted least squares, proving and generalizing ad hoc equations from the literature. For sparse intensity distributions in non-natural images, a modified image model is presented. Evaluations were done in the newly developed C++ framework volumetric, artificial, and natural image lossless coder (Vanilc), which can compress a wide range of images, including 16-bit medical 3D volumes and multichannel data. A comparison with several of the best available lossless image codecs proves that the method achieves very competitive compression ratios. In the interest of reproducible research, the source code of Vanilc has been made public.
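The backward-adaptive least-squares idea can be sketched as follows: for each pixel, prediction coefficients are re-estimated from a small causal training set by solving the normal equations. This is a minimal two-coefficient sketch with an illustrative four-sample training window, not the paper's estimator:

```python
def ls_predict(img, i, j):
    """Backward-adaptive least-squares prediction of img[i][j] from its
    left and upper neighbours, trained on four nearby causal pixels."""
    train = [(i - 1, j - 1), (i - 1, j), (i - 1, j + 1), (i, j - 1)]
    # accumulate normal equations for w = (a, b) in a*left + b*up ~ x
    s11 = s12 = s22 = t1 = t2 = 0.0
    for r, c in train:
        left, up, x = img[r][c - 1], img[r - 1][c], img[r][c]
        s11 += left * left; s12 += left * up; s22 += up * up
        t1 += left * x; t2 += up * x
    det = s11 * s22 - s12 * s12
    if abs(det) < 1e-12:        # singular system: fall back to an average
        return 0.5 * (img[i][j - 1] + img[i - 1][j])
    a = (t1 * s22 - t2 * s12) / det
    b = (s11 * t2 - s12 * t1) / det
    return a * img[i][j - 1] + b * img[i - 1][j]
```

On locally planar image regions this predictor is exact, so the coded residual is zero; the paper builds a full probability distribution around such predictions rather than coding only the mean.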
77. Hernández-Cabronero M, Blanes I, Pinho AJ, Marcellin MW, Serra-Sagristà J. Analysis-Driven Lossy Compression of DNA Microarray Images. IEEE Transactions on Medical Imaging 2016;35:654-664. [PMID: 26462084] [DOI: 10.1109/tmi.2015.2489262]
Abstract
DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, storage, transmission and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yield only limited compression performance (compression ratios below 2:1), whereas lossy coding methods may introduce unacceptable distortions in the analysis process. This work introduces a novel Relative Quantizer (RQ), which employs non-uniform quantization intervals designed for improved compression while bounding the impact on the DNA microarray analysis. This quantizer constrains the maximum relative error introduced into quantized imagery, devoting higher precision to pixels critical to the analysis process. For suitable parameter choices, the resulting variations in the DNA microarray analysis are less than half of those inherent to the experimental variability. Experimental results reveal that appropriate analysis can still be performed for average compression ratios exceeding 4.5:1.
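The key property, a hard bound on the maximum relative error, can be illustrated with a simple logarithmic quantizer. This is a generic construction for intuition only; the paper's RQ uses its own interval design:

```python
import math

def rq_index(x, e):
    """Quantize x >= 1 into geometric intervals [b^k, b^(k+1)) with
    b = (1+e)^2; zero is kept exact via a reserved index."""
    if x == 0:
        return -1
    b = (1.0 + e) ** 2
    return int(math.floor(math.log(x) / math.log(b)))

def rq_reconstruct(k, e):
    """Geometric mid-point b^(k+1/2); relative error is bounded by e."""
    if k == -1:
        return 0.0
    b = (1.0 + e) ** 2
    return b ** k * (1.0 + e)
```

With b = (1+e)^2 the worst case at the lower interval edge is sqrt(b) - 1 = e, and at the upper edge it is e/(1+e) < e, so the relative error never exceeds e.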
78. Kazeminia S, Karimi N, Soroushmehr SMR, Samavi S, Derksen H, Najarian K. Region of interest extraction for lossless compression of bone X-ray images. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2015:3061-3064. [PMID: 26736938] [DOI: 10.1109/embc.2015.7319038]
Abstract
For several decades, digital X-ray imaging has been one of the most important tools for medical diagnosis. With the advent of telemedicine and the use of big data in this context, efficient storage and online transmission of these images have become essential; limited storage space and limited transmission bandwidth are the main challenges. The most efficient image compression methods are lossy, whereas the information in medical images must be preserved without change, so lossless compression methods are required. In this paper, a novel method is proposed to eliminate non-ROI data from bone X-ray images, since background pixels carry no medically valuable information. The proposed method is based on histogram dispersion: the ROI is separated from the background and compressed with a lossless method to preserve the medical information of the image. The resulting compression ratios show that the proposed algorithm effectively reduces both statistical and spatial redundancies.
79. Preston C, Arnavut Z, Koc B. Lossless compression of medical images using Burrows-Wheeler Transformation with Inversion Coder. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2015:2956-2959. [PMID: 26736912] [DOI: 10.1109/embc.2015.7319012]
Abstract
Medical imaging is a quickly growing field in which highly efficient lossless compression algorithms are needed to reduce the storage space and transmission time of large, high-resolution medical images. Because lossy compression risks discarding vital diagnostic information, lossless compression is imperative. While several authors have investigated lossless compression of medical images, the Burrows-Wheeler Transformation with an Inversion Coder (BWIC) has not been examined. Our investigation shows that BWIC runs in linear time and yields better compression rates than well-known image coders such as JPEG-LS and JPEG-2000.
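For intuition, a naive Burrows-Wheeler transform and its inverse can be written in a few lines. This is a textbook O(n^2) sketch with a '\0' sentinel (so the input must not contain '\0'), not the linear-time implementation evaluated in the paper:

```python
def bwt(s):
    """Burrows-Wheeler transform via sorted rotations; '\0' marks the end."""
    s = s + "\0"
    rots = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rots)

def ibwt(last):
    """Inverse BWT by repeated sort-and-prepend (quadratic, sketch only)."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    row = next(r for r in table if r.endswith("\0"))
    return row[:-1]
```

The BWT groups symbols that share similar contexts, after which a simple adaptive second stage (the inversion coder here) captures most of the redundancy.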
80. Minervini M, Scharr H, Tsaftaris SA. The significance of image compression in plant phenotyping applications. Functional Plant Biology 2015;42:971-988. [PMID: 32480737] [DOI: 10.1071/fp15033]
Abstract
We are currently witnessing increasingly high throughput in image-based plant phenotyping experiments. The majority of imaging data are collected using complex automated procedures and are then post-processed to extract phenotyping-related information. In this article, we show that the image compression used in such procedures may compromise phenotyping results, and this needs to be taken into account. We use three illuminating proof-of-concept experiments that demonstrate that compression (especially in the most common lossy JPEG form) affects measurements of plant traits and that the errors introduced can be high. We also systematically explore how compression affects measurement fidelity, quantified as effects on image quality, as well as errors in extracted plant visual traits. To do so, we evaluate a variety of image-based phenotyping scenarios, including size and colour of shoots, leaf and root growth. To show that even visual impressions can be used to assess compression effects, we use root system images as examples. Overall, we find that compression has a considerable effect on several types of analyses (whether visual or quantitative) and that proper care is necessary to ensure that this choice does not affect biological findings. In order to avoid or at least minimise introduced measurement errors, for each scenario, we derive recommendations and provide guidelines on how to identify suitable compression options in practice. We also find that certain compression choices can offer beneficial returns in terms of reducing the amount of data storage without compromising phenotyping results. This may enable even higher throughput experiments in the future.
Affiliation(s)
- Massimo Minervini, Pattern Recognition and Image Analysis, IMT Institute for Advanced Studies, Piazza S. Francesco 19, 55100 Lucca, Italy
- Hanno Scharr, Institute of Bio- and Geosciences: Plant Sciences, Forschungszentrum Jülich GmbH, Wilhelm-Johnen-Straße, 52428 Jülich, Germany
- Sotirios A Tsaftaris, Pattern Recognition and Image Analysis, IMT Institute for Advanced Studies, Piazza S. Francesco 19, 55100 Lucca, Italy
81. Wang X, Ding J, Pei Q. A novel reversible image data hiding scheme based on pixel value ordering and dynamic pixel block partition. Information Sciences 2015. [DOI: 10.1016/j.ins.2015.03.022]
82. Chen J, Chen TS, Lin C, Chen SY, Lin J. A simple JPEG-LS compressed technique for 2DGE image with ROI emphasis. The Imaging Science Journal 2015. [DOI: 10.1179/1743131x14y.0000000086]
83. Liu J, Zhai G, Yang X, Chen L. Lossless predictive coding for images with Bayesian treatment. IEEE Transactions on Image Processing 2014;23:5519-5530. [PMID: 25361506] [DOI: 10.1109/tip.2014.2365698]
Abstract
Adaptive predictors have long been used for lossless predictive coding of images. Most existing lossless predictive coding techniques focus on the suitability of the prediction model for the training set under the underlying assumption of local consistency, which may not hold well on object boundaries and may cause large prediction errors. In this paper, we propose a novel approach based on the assumption that local consistency and patch redundancy exist simultaneously in natural images. We derive a family of linear models and design a new algorithm to automatically select one suitable model for prediction. From the Bayesian perspective, the model with maximum posterior probability is considered the best. Two types of model evidence are included in our algorithm. One is traditional training evidence, which represents a model's suitability for the current pixel under the assumption of local consistency. The other is target evidence, which is proposed to express the preference for different models from the perspective of patch redundancy. It is shown that the fusion of training evidence and target evidence jointly exploits the benefits of local consistency and patch redundancy. As a result, the proposed predictor is more suitable for natural images with textures and object boundaries. Comprehensive experiments demonstrate that the proposed predictor achieves higher efficiency than state-of-the-art lossless predictors.
84. Khan TH, Wahid KA. Design of a lossless image compression system for video capsule endoscopy and its performance in in-vivo trials. Sensors 2014;14:20779-20799. [PMID: 25375753] [PMCID: PMC4279511] [DOI: 10.3390/s141120779]
Abstract
In this paper, a new low-complexity lossless image compression system for capsule endoscopy (CE) is presented. The compressor consists of a low-cost YEF color space converter and a variable-length predictive coder that combines Golomb-Rice and unary encoding. All components have been heavily optimized for low power and low cost, and are lossless in nature, so the compression system does not incur any loss of image information. Unlike transform-based algorithms, the compressor can be interfaced with commercial image sensors that send pixel data in raster-scan fashion, which eliminates the need for a large buffer memory. The compression algorithm works with both white light imaging (WLI) and narrow band imaging (NBI), with average compression ratios of 78% and 84%, respectively. Finally, a complete capsule endoscopy system is developed on a single low-power 65-nm field-programmable gate array (FPGA) chip. The prototype is built on circular PCBs with a diameter of 16 mm. Several in-vivo and ex-vivo trials using pig intestine have been conducted with the prototype to validate the performance of the proposed lossless compression algorithm. The results show that, compared with all other existing works, the proposed algorithm offers a solution for wireless capsule endoscopy that is lossless yet achieves an acceptable level of compression.
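The Golomb-Rice residual coding mentioned above can be sketched generically: signed prediction residuals are zigzag-mapped to unsigned integers, then coded as a unary quotient plus a k-bit remainder. Parameter choices here are illustrative, not the paper's hardware coder:

```python
def rice_encode(n, k):
    """Golomb-Rice code of signed residual n with parameter k."""
    u = (n << 1) if n >= 0 else ((-n << 1) - 1)   # zigzag map to unsigned
    q, r = u >> k, u & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def rice_decode(bits, k):
    """Return (residual, bits consumed)."""
    i = 0
    q = 0
    while bits[i] == "1":                          # unary quotient
        q += 1
        i += 1
    i += 1                                         # skip the terminating '0'
    r = int(bits[i:i + k], 2) if k else 0
    u = (q << k) | r
    n = (u >> 1) if u % 2 == 0 else -((u + 1) >> 1)
    return n, i + k
```

Small residuals get short codes, so this pairs naturally with a good predictor; hardware implementations favor it because no code tables are required.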
Affiliation(s)
- Tareq H Khan, Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK S7N5A9, Canada
- Khan A Wahid, Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK S7N5A9, Canada
|
85
|
Rad RM, Wong K, Guo JM. A unified data embedding and scrambling method. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2014; 23:1463-1475. [PMID: 24565789 DOI: 10.1109/tip.2014.2302681] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Conventionally, data embedding techniques aim at maintaining high output-image quality so that the difference between the original and the embedded images is imperceptible to the naked eye. Recently, as a new trend, some researchers exploited reversible data embedding techniques to deliberately degrade image quality to a desirable level of distortion. In this paper, a unified data embedding-scrambling technique called UES is proposed to achieve two objectives simultaneously, namely, high payload and adaptive scalable quality degradation. First, a pixel intensity value prediction method called checkerboard-based prediction is proposed to accurately predict 75% of the pixels in the image based on the information obtained from 25% of the image. Then, the locations of the predicted pixels are vacated to embed information while degrading the image quality. Given a desirable quality (quantified in SSIM) for the output image, UES guides the embedding-scrambling algorithm to handle the exact number of pixels, i.e., the perceptual quality of the embedded-scrambled image can be controlled. In addition, the prediction errors are stored at a predetermined precision using the structure side information to perfectly reconstruct or approximate the original image. In particular, given a desirable SSIM value, the precision of the stored prediction errors can be adjusted to control the perceptual quality of the reconstructed image. Experimental results confirmed that UES is able to perfectly reconstruct or approximate the original image with SSIM value > 0.99 after completely degrading its perceptual quality while embedding at 7.001 bpp on average.
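The 25%-to-75% prediction can be sketched as a two-stage checkerboard scheme: keep the pixels at even rows and even columns as references, predict the odd/odd pixels from their four diagonal references, then predict the remaining pixels from their four axis-aligned neighbours. This is a simplified sketch; the paper's checkerboard-based prediction may differ in detail:

```python
def checkerboard_predict(img):
    """Predict 75% of the pixels from the 25% at even rows/columns."""
    H, W = len(img), len(img[0])
    pred = [[img[i][j] if i % 2 == 0 and j % 2 == 0 else None
             for j in range(W)] for i in range(H)]

    def get(i, j):
        return pred[i][j] if 0 <= i < H and 0 <= j < W else None

    def avg(vals):
        vals = [v for v in vals if v is not None]  # drop out-of-bounds
        return sum(vals) / len(vals)

    for i in range(1, H, 2):        # stage 1: odd/odd from the 4 diagonals
        for j in range(1, W, 2):
            pred[i][j] = avg([get(i - 1, j - 1), get(i - 1, j + 1),
                              get(i + 1, j - 1), get(i + 1, j + 1)])
    for i in range(H):              # stage 2: the rest from 4-neighbours
        for j in range(W):
            if pred[i][j] is None:
                pred[i][j] = avg([get(i - 1, j), get(i + 1, j),
                                  get(i, j - 1), get(i, j + 1)])
    return pred
```

Every stage-2 pixel sees only reference or stage-1 values, so the prediction order is well defined and reproducible at the receiver.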
86. Dragoi IC, Coltuc D. Local-prediction-based difference expansion reversible watermarking. IEEE Transactions on Image Processing 2014;23:1779-1790. [PMID: 24808346] [DOI: 10.1109/tip.2014.2307482]
Abstract
This paper investigates the use of local prediction in difference expansion reversible watermarking. For each pixel, a least square predictor is computed on a square block centered on the pixel and the corresponding prediction error is expanded. The same predictor is recovered at detection without any additional information. The proposed local prediction is general and it applies regardless of the predictor order or the prediction context. For the particular cases of least square predictors with the same context as the median edge detector, gradient-adjusted predictor or the simple rhombus neighborhood, the local prediction-based reversible watermarking clearly outperforms the state-of-the-art schemes based on the classical counterparts. Experimental results are provided.
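The difference-expansion step that this scheme builds on (Tian's integer pair transform) embeds one bit in the expanded difference of a pixel pair. A minimal sketch follows, using the pair average as predictor rather than the paper's local least-squares predictor; real embedders must additionally check for pixel-range overflow:

```python
def de_embed(x, y, b):
    """Expand the pair difference h = x - y and append payload bit b."""
    h, l = x - y, (x + y) // 2        # Python // is floor division
    h2 = 2 * h + b                    # expanded difference carries the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the original pair and the embedded bit."""
    h2 = x2 - y2
    b, h = h2 % 2, h2 // 2            # floor semantics recover h exactly
    l = (x2 + y2) // 2                # the integer average is invariant
    return l + (h + 1) // 2, l - h // 2, b
```

Because the integer average survives the embedding unchanged, the transform is exactly invertible, which is what makes the watermarking reversible.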
87. Kujawinska M, Kozacki T, Falldorf C, Meeser T, Hennelly BM, Garbat P, Zaperty W, Niemelä M, Finke G, Kowiel M, Naughton T. Multiwavefront digital holographic television. Optics Express 2014;22:2324-2336. [PMID: 24663525] [DOI: 10.1364/oe.22.002324]
Abstract
This paper presents the full technology chain supporting wide angle digital holographic television, from holographic capture of real world objects/scenes to holographic display with an extended viewing angle. The data are captured with multiple CCD cameras located around an object. The display system is based on multiple tilted spatial light modulators (SLMs) arranged in a circular configuration. The capture-display system is linked by a holographic data processing module, which allows for significant decoupling of the capture and display systems. The presented experimental results, based on the reconstruction of real-world, time-varying scenes, illustrate imaging dynamics, viewing angle and quality.
88. Dai W, Xiong H, Wang J, Zheng YF. Large Discriminative Structured Set Prediction Modeling With Max-Margin Markov Network for Lossless Image Coding. IEEE Transactions on Image Processing 2014;23:541-554. [PMID: 26270907] [DOI: 10.1109/tip.2013.2293429]
Abstract
The inherent statistical correlation for context-based prediction and the structural interdependencies for local coherence are not fully exploited in existing lossless image coding schemes. This paper proposes a novel prediction model in which the optimal correlated prediction for a set of pixels is obtained in the sense of least code length. It not only exploits spatial statistical correlations for optimal prediction directly based on 2D contexts, but also formulates data-driven structural interdependencies to make the prediction error coherent with the underlying probability distribution for coding. Under joint constraints for local coherence, max-margin Markov networks are incorporated to combine support vector machines structurally and make a max-margin estimation for a correlated region. Specifically, the model aims to produce multiple predictions in the blocks, with parameters learned such that the distinction between the actual pixel and all possible estimations is maximized. It is proved that, as the sample size grows, the prediction error is asymptotically upper-bounded by the training error under a decomposable loss function. Incorporated into the lossless image coding framework, the proposed model outperforms most reported prediction schemes.
89. Kim S, Cho NI. Hierarchical prediction and context adaptive coding for lossless color image compression. IEEE Transactions on Image Processing 2014;23:445-449. [PMID: 24490231] [DOI: 10.1109/tip.2013.2293428]
Abstract
This paper presents a new lossless color image compression algorithm, based on the hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, it is first decorrelated by a reversible color transform and then Y component is encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for the pixel prediction, whereas the conventional raster scan prediction methods use upper and left pixels. An appropriate context model for the prediction error is also defined and the arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
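The reversible color transform used for decorrelation can be illustrated with the JPEG 2000 reversible component transform, one standard integer-exact choice (shown for intuition; the paper specifies its own transform):

```python
def rct_forward(r, g, b):
    """JPEG 2000 reversible color transform (integer, exactly invertible)."""
    y = (r + 2 * g + b) // 4    # luma approximation
    u = r - g                   # chroma differences
    v = b - g
    return y, u, v

def rct_inverse(y, u, v):
    """Exact inverse: floor((t + 4g)/4) = floor(t/4) + g makes g recoverable."""
    g = y - (u + v) // 4
    return u + g, g, v + g
```

Integer floor arithmetic cancels exactly in the inverse, so no information is lost even though the forward transform divides by 4.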
90. Tsai YY, Chan CS, Liu CL, Su BR. A reversible steganographic algorithm for BTC-compressed images based on difference expansion and median edge detector. The Imaging Science Journal 2013. [DOI: 10.1179/1743131x12y.0000000032]
91. Martchenko A, Deng G. Bayesian predictor combination for lossless image compression. IEEE Transactions on Image Processing 2013;22:5263-5270. [PMID: 24108716] [DOI: 10.1109/tip.2013.2284067]
Abstract
Adaptive predictor combination (APC) is a framework for combining multiple predictors for lossless image compression and is often at the core of state-of-the-art algorithms. In this paper, a Bayesian parameter estimation scheme is proposed for APC. Extensive experiments using natural, medical, and remote sensing images of 8–16 bit/pixel have confirmed that the predictive performance is consistently better than that of APC for any combination of fixed predictors and with only a marginal increase in computational complexity. The predictive performance improves with every additional fixed predictor, a property that is not found in other predictor combination schemes studied in this paper. Analysis and simulation show that the performance of the proposed algorithm is not sensitive to the choice of hyper-parameters of the prior distributions. Furthermore, the proposed prediction scheme provides a theoretical justification for the error correction stage that is often included as part of a prediction process.
92. Karimi N, Samavi S, Shirani S. Lossless compression of RNAi fluorescence images using regional fluctuations of pixels. IEEE Journal of Biomedical and Health Informatics 2013;17:259-268. [PMID: 24235106] [DOI: 10.1109/jbhi.2012.2235453]
Abstract
RNA interference (RNAi) is considered one of the most powerful genomic tools; it enables drug discovery and the study of complex cellular processes through high-content screens. This field of study, the subject of the 2006 Nobel Prize in medicine, has drastically changed conventional methods of gene analysis. RNAi experiments produce large numbers of images. Although several capable special-purpose methods have recently been proposed for processing RNAi images, there is no compression scheme customized for them; hence, highly proficient tools are required to compress these images. In this paper, we propose a new efficient lossless compression scheme for RNAi images, including a new predictor specifically designed for them. It is shown that pixels can be classified into three categories based on their intensity distributions. Using a classification of pixels based on the intensity fluctuations among a pixel's neighbors, a context-based method is designed. Comparisons of the proposed method with existing state-of-the-art lossless compression standards and well-known general-purpose methods demonstrate its efficiency.
93. Tabus I, Schiopu I, Astola J. Context coding of depth map images under the piecewise-constant image model representation. IEEE Transactions on Image Processing 2013;22:4195-4210. [PMID: 23807443] [DOI: 10.1109/tip.2013.2271117]
Abstract
This paper introduces an efficient method for lossless compression of depth map images, using the representation of a depth image in terms of three entities: 1) the crack-edges; 2) the constant depth regions enclosed by them; and 3) the depth value over each region. The starting representation is identical with that used in a very efficient coder for palette images, the piecewise-constant image model coding, but the techniques used for coding the elements of the representation are more advanced and especially suitable for the type of redundancy present in depth images. Initially, the vertical and horizontal crack-edges separating the constant depth regions are transmitted by 2D context coding using optimally pruned context trees. Both the encoder and decoder can reconstruct the regions of constant depth from the transmitted crack-edge image. The depth value in a given region is encoded using the depth values of the neighboring regions already encoded, exploiting the natural smoothness of the depth variation, and the mutual exclusiveness of the values in neighboring regions. The encoding method is suitable for lossless compression of depth images, obtaining compression of about 10-65 times, and additionally can be used as the entropy coding stage for lossy depth compression.
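The crack-edge representation can be sketched directly: a vertical crack-edge separates two horizontally adjacent pixels with different depths, and a horizontal one separates two vertically adjacent pixels (an illustrative helper only, not the paper's context-tree coder):

```python
def crack_edges(depth):
    """Return binary maps of vertical and horizontal crack-edges."""
    H, W = len(depth), len(depth[0])
    vert = [[1 if j > 0 and depth[i][j] != depth[i][j - 1] else 0
             for j in range(W)] for i in range(H)]
    horz = [[1 if i > 0 and depth[i][j] != depth[i - 1][j] else 0
             for j in range(W)] for i in range(H)]
    return vert, horz
```

Both encoder and decoder can rebuild the constant-depth regions from these maps, so only one depth value per region remains to be coded.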
94. Li X, Li B, Yang B, Zeng T. General framework to histogram-shifting-based reversible data hiding. IEEE Transactions on Image Processing 2013;22:2181-2191. [PMID: 23399962] [DOI: 10.1109/tip.2013.2246179]
Abstract
Histogram shifting (HS) is a useful technique of reversible data hiding (RDH). With HS-based RDH, high capacity and low distortion can be achieved efficiently. In this paper, we revisit the HS technique and present a general framework to construct HS-based RDH. Within the proposed framework, one can obtain an RDH algorithm by simply designing the so-called shifting and embedding functions. Moreover, by taking specific shifting and embedding functions, we show that several RDH algorithms reported in the literature are special cases of this general construction. In addition, two novel and efficient RDH algorithms are also introduced to further demonstrate the universality and applicability of our framework. It is expected that more efficient RDH algorithms can be devised according to the proposed framework by carefully designing the shifting and embedding functions.
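The classic peak/zero-bin histogram-shifting embedder, one special case that such a framework generalizes, can be sketched as follows (simplified and illustrative: a zero bin above the peak is assumed to exist, and the payload is padded to the peak-bin count so extraction is unambiguous):

```python
def hs_embed(pixels, bits):
    """Embed bits into peak-bin pixels; shift bins between peak and zero."""
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    p = max(range(256), key=lambda v: hist[v])                 # peak bin
    z = next(v for v in range(p + 1, 256) if hist[v] == 0)     # zero bin
    bits = bits + [0] * (hist[p] - len(bits))                  # pad payload
    out, k = [], 0
    for v in pixels:
        if v == p:               # each peak pixel carries one payload bit
            out.append(p + bits[k]); k += 1
        elif p < v < z:          # shift intermediate bins up by one
            out.append(v + 1)
        else:
            out.append(v)
    return out, p, z

def hs_extract(marked, p, z):
    """Recover the payload bits and restore the original pixels."""
    bits, out = [], []
    for v in marked:
        if v in (p, p + 1):
            bits.append(v - p); out.append(p)
        elif p + 1 < v <= z:
            out.append(v - 1)
        else:
            out.append(v)
    return out, bits
```

In the framework's terms, the shifting function moves the bins in (p, z) up by one, and the embedding function maps a peak pixel to p or p+1 according to the payload bit.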
Affiliation(s)
- Xiaolong Li, Institute of Computer Science and Technology, Peking University, Beijing 100871, China
95. Li X, Li J, Li B, Yang B. High-fidelity reversible data hiding scheme based on pixel-value-ordering and prediction-error expansion. Signal Processing 2013;93:198-205. [DOI: 10.1016/j.sigpro.2012.07.025]
96. Puthooran E, Anand RS, Mukherjee S. Lossless Compression of Medical Images Using a Dual Level DPCM with Context Adaptive Switching Neural Network Predictor. International Journal of Computational Intelligence Systems 2013. [DOI: 10.1080/18756891.2013.816059]
97. Prades-Nebot J, Morbee M, Delp EJ. Generalized PCM coding of images. IEEE Transactions on Image Processing 2012;21:3801-3806. [PMID: 22562763] [DOI: 10.1109/tip.2012.2197015]
Abstract
Pulse-code modulation (PCM) with embedded quantization allows the rate of the PCM bitstream to be reduced by simply removing a fixed number of least significant bits from each codeword. Although this source coding technique is extremely simple, its coding efficiency is poor. In this paper, we present a generalized PCM (GPCM) algorithm for images that simply removes bits from each codeword. In contrast to PCM, however, the number and the specific bits that a GPCM encoder removes from each codeword depend on its position in the bitstream and the statistics of the image. Since GPCM allows the encoding to be performed with different degrees of computational complexity, it can adapt to the computational resources that are available in each application. Experimental results show that GPCM outperforms PCM with a gain that depends on the rate, the computational complexity of the encoding, and the degree of inter-pixel correlation of the image.
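The embedded-quantization property of PCM that GPCM generalizes is just LSB truncation with mid-point reconstruction; a minimal sketch (GPCM's position- and statistics-dependent bit removal is not reproduced here):

```python
def pcm_drop_lsbs(x, k):
    """Reduce the rate by removing the k least significant bits."""
    return x >> k

def pcm_reconstruct(code, k):
    """Mid-point reconstruction of the truncated codeword."""
    return (code << k) | (1 << (k - 1)) if k > 0 else code
```

Dropping k bits bounds the absolute reconstruction error by 2^(k-1), which is the rate/distortion trade-off GPCM then improves on by choosing which bits to remove adaptively.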
98. Taquet J, Labit C. Hierarchical oriented predictions for resolution scalable lossless and near-lossless compression of CT and MRI biomedical images. IEEE Transactions on Image Processing 2012;21:2641-2652. [PMID: 22294032] [DOI: 10.1109/tip.2012.2186147]
Abstract
We propose a new hierarchical approach to resolution-scalable lossless and near-lossless (NLS) compression. It combines the adaptability of DPCM schemes with new hierarchical oriented predictors to provide resolution scalability with better compression performance than the usual hierarchical interpolation predictor or the wavelet transform. Because the proposed hierarchical oriented prediction (HOP) is not very efficient on smooth images, we also introduce new predictors that are dynamically optimized using a least-squares criterion. Lossless compression results obtained on a large-scale medical image database are more than 4% better on CTs and 9% better on MRIs than resolution-scalable JPEG-2000 (J2K), and close to nonscalable CALIC. The HOP algorithm is also well suited for NLS compression, providing an interesting rate-distortion tradeoff compared with JPEG-LS, and equivalent or better PSNR than J2K at high bit rates on noisy (native) medical images.
Affiliation(s)
- Jonathan Taquet
- INRIA, Centre Inria Rennes Bretagne Atlantique, IRISA, 35042 Rennes, France.
|
99
|
Kieu TD, Chang CC. Reversible watermarking schemes based on expansion embedding using pixel partition strategies. The Imaging Science Journal 2012. [DOI: 10.1179/1743131x11y.0000000014] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Indexed: 10/31/2022]
|
100
|
Deligiannis N, Barbarien J, Jacobs M, Munteanu A, Skodras A, Schelkens P. Side-information-dependent correlation channel estimation in hash-based distributed video coding. IEEE Transactions on Image Processing 2012; 21:1934-1949. [PMID: 22203710 DOI: 10.1109/tip.2011.2181400] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Indexed: 05/31/2023]
Abstract
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with the classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation, leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a relevant set of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
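The SII-versus-SID distinction the abstract draws can be sketched numerically. DVC work commonly models the correlation noise n = X - Y (source minus side information) as zero-mean Laplacian; an SII model fits one global scale, while an SID model lets the scale vary with a feature of the side information. This is an illustrative sketch only: the Laplacian assumption, the quantile binning, and the feature choice are assumptions, not the paper's bit-plane refinement algorithm.

```python
import numpy as np

def sii_scale(noise):
    """Side-information-independent model: one Laplacian scale
    b = mean(|n|) for the whole frame (zero-mean assumption)."""
    return np.mean(np.abs(noise))

def sid_scales(noise, si_feature, n_bins=4):
    """Side-information-dependent model: one Laplacian scale per
    side-information bin, so noisier SI regions get a wider model."""
    edges = np.quantile(si_feature, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, si_feature, side="right") - 1,
                   0, n_bins - 1)
    return np.array([np.mean(np.abs(noise[bins == k]))
                     for k in range(n_bins)])
```

Where the noise scale genuinely varies with the SI, the per-bin scales diverge from the single global scale, which is the statistical room the paper's SID estimator exploits.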
Affiliation(s)
- Nikos Deligiannis
- Department of Electronics and Informatics, Vrije Universiteit Brussel, Brussels, Belgium.
|