1. Regoršek Ž, Gorkič A, Trost A. Parallel Lossless Compression of Raw Bayer Images on FPGA-Based High-Speed Camera. Sensors (Basel, Switzerland) 2024;24:6632. [PMID: 39460112; PMCID: PMC11510722; DOI: 10.3390/s24206632]
Abstract
Digital image compression is applied to reduce camera bandwidth and storage requirements, but real-time lossless compression on a high-speed, high-resolution camera is a challenging task. This article presents a hardware implementation of a Bayer colour filter array lossless image compression algorithm on an FPGA-based camera. The compression algorithm reduces colour and spatial redundancy and employs Golomb-Rice entropy coding. A rule limiting the maximum code length is introduced for edge cases. The proposed algorithm is based on integer operators for efficient hardware implementation. The algorithm is first verified as a C++ model and later implemented on an AMD-Xilinx Zynq UltraScale+ device using VHDL. An effective tree-like pipeline structure is proposed to concatenate the codes of compressed pixel data into a bitstream representing 16 parallel pixels. The proposed parallel compression achieves up to a 56% reduction in image size for high-resolution images. The pipelined implementation, which uses no state machine, supports operating frequencies up to 320 MHz. Parallelised operation on 16 pixels increases data throughput to 40 Gbit/s while keeping total memory requirements low thanks to real-time processing.
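As a rough software illustration of the entropy-coding stage described in this abstract, the sketch below encodes one mapped prediction residual with a Golomb-Rice code plus a simple escape rule that caps the code length. It is a minimal model under assumed parameters (MAX_PREFIX, ESC_BITS, and the zigzag mapping are illustrative assumptions), not the authors' pipelined VHDL design.

```python
# Minimal Golomb-Rice coding sketch with a capped code length
# (illustrative assumptions only; not the paper's hardware design).

MAX_PREFIX = 16   # assumed cap on the unary prefix length
ESC_BITS = 16     # assumed raw bit width emitted after an escape

def zigzag(e: int) -> int:
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,... -> 0,1,2,3,..."""
    return 2 * e if e >= 0 else -2 * e - 1

def golomb_rice_encode(value: int, k: int) -> str:
    """Encode value >= 0 with Rice parameter k; the unary prefix never exceeds MAX_PREFIX."""
    q, r = value >> k, value & ((1 << k) - 1)
    if q < MAX_PREFIX:
        return "1" * q + "0" + format(r, f"0{k}b")   # unary quotient + binary remainder
    # Edge case: a run of MAX_PREFIX ones signals an escape, then the raw value follows.
    return "1" * MAX_PREFIX + "0" + format(value, f"0{ESC_BITS}b")

print(golomb_rice_encode(zigzag(-3), k=2))   # -> "1001"
```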
Affiliation(s)
- Žan Regoršek: Faculty of Electrical Engineering, University of Ljubljana, 1000 Ljubljana, Slovenia
- Aleš Gorkič: OptoMotive, Mechatronics Ltd., 1000 Ljubljana, Slovenia
- Andrej Trost: Faculty of Electrical Engineering, University of Ljubljana, 1000 Ljubljana, Slovenia
2. Zuo R, Wei S, Wang Y, Irsch K, Kang JU. High-resolution in vivo 4D-OCT fish-eye imaging using 3D-UNet with multi-level residue decoder. Biomedical Optics Express 2024;15:5533-5546. [PMID: 39296392; PMCID: PMC11407266; DOI: 10.1364/boe.532258]
Abstract
Optical coherence tomography (OCT) allows high-resolution volumetric imaging of biological tissues in vivo. However, 3D-image acquisition often suffers from motion artifacts due to slow frame rates and the involuntary and physiological movements of living tissue. To address these issues, we implement a real-time 4D-OCT system capable of reconstructing near-distortion-free volumetric images using a deep-learning-based reconstruction algorithm. The system initially collects undersampled volumetric images at high speed and then upsamples them in real time with a convolutional neural network (CNN) that generates high-frequency features. We compare and analyze both dual-2D- and 3D-UNet-based networks for OCT 3D high-resolution image reconstruction. We refine the network architecture by incorporating multi-level information to accelerate convergence and improve accuracy, and we optimize the network by using 16-bit floating-point precision for its parameters to conserve GPU memory and enhance efficiency. The results show that the refined and optimized 3D network retrieves tissue structure more precisely and enables real-time 4D-OCT imaging at a rate greater than 10 Hz with a root mean square error (RMSE) of ∼0.03.
Affiliation(s)
- Ruizhi Zuo: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- Shuwen Wei: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- Yaning Wang: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- Kristina Irsch: CNRS, Vision Institute, Paris, France; School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Jin U Kang: Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA; School of Medicine, Johns Hopkins University, Baltimore, MD, USA
3. Lau CK, Xia M, Wong TT. Taming Reversible Halftoning via Predictive Luminance. IEEE Transactions on Visualization and Computer Graphics 2024;30:4841-4852. [PMID: 37220038; DOI: 10.1109/tvcg.2023.3278691]
Abstract
Traditional halftoning usually drops colour information when dithering images with binary dots, which makes it difficult to recover the original colours. We propose a novel halftoning technique that converts a colour image into a binary halftone with full restorability to its original version. Our base halftoning technique consists of two convolutional neural networks (CNNs) that produce the reversible halftone patterns, and a noise incentive block (NIB) that mitigates the flatness degradation issue of CNNs. Furthermore, to resolve the conflict between blue-noise quality and restoration accuracy in the base method, we propose a predictor-embedded approach that offloads predictable information from the network, which in our case is the luminance information, since it resembles the halftone pattern. This approach gives the network more flexibility to produce halftones with better blue-noise quality without compromising restoration quality. Detailed studies of the multiple-stage training method and loss weightings have been conducted. We compare the predictor-embedded method and the base method with respect to halftone spectrum analysis, halftone accuracy, restoration accuracy, and data embedding. Our entropy evaluation shows that our halftone contains less encoding information than the base method. The experiments show that the predictor-embedded method gains more flexibility to improve the blue-noise quality of halftones and maintains comparable restoration quality with a higher tolerance for disturbances.
4. Roriz R, Silva H, Dias F, Gomes T. A Survey on Data Compression Techniques for Automotive LiDAR Point Clouds. Sensors (Basel, Switzerland) 2024;24:3185. [PMID: 38794039; PMCID: PMC11125693; DOI: 10.3390/s24103185]
Abstract
In the evolving landscape of autonomous driving technology, Light Detection and Ranging (LiDAR) sensors have emerged as a pivotal instrument for enhancing environmental perception. They offer precise, high-resolution, real-time 3D representations of a vehicle's surroundings and the ability to make long-range measurements under low-light conditions. However, these advantages come at the cost of the large volume of data generated by the sensor, leading to several challenges in transmission, processing, and storage, which can currently be mitigated by applying data compression techniques to the point cloud. This article presents a survey of existing methods used to compress point cloud data for automotive LiDAR sensors. It presents a comprehensive taxonomy that categorizes these approaches into four main groups and compares and discusses them across several important metrics.
Affiliation(s)
- Tiago Gomes: Centro ALGORITMI/LASI, Escola de Engenharia, Universidade do Minho, 4800-058 Guimarães, Portugal
5. Bai Y, Liu X, Wang K, Ji X, Wu X, Gao W. Deep Lossy Plus Residual Coding for Lossless and Near-Lossless Image Compression. IEEE Transactions on Pattern Analysis and Machine Intelligence 2024;46:3577-3594. [PMID: 38163313; DOI: 10.1109/tpami.2023.3348486]
Abstract
Lossless and near-lossless image compression is of paramount importance to professional users in many technical fields, such as medicine, remote sensing, precision engineering, and scientific research. Yet despite rapidly growing research interest in learning-based image compression, no published method offers both lossless and near-lossless modes. In this paper, we propose a unified and powerful deep lossy plus residual (DLPR) coding framework for both lossless and near-lossless image compression. In the lossless mode, the DLPR coding system first performs lossy compression and then lossless coding of the residuals. We solve the joint lossy and residual compression problem in the framework of VAEs and add autoregressive context modeling of the residuals to enhance lossless compression performance. In the near-lossless mode, we quantize the original residuals to satisfy a given ℓ∞ error bound and propose a scalable near-lossless compression scheme that works for variable ℓ∞ bounds instead of training multiple networks. To expedite DLPR coding, we increase the degree of algorithm parallelization through a novel design of the coding context and accelerate the entropy coding with an adaptive residual interval. Experimental results demonstrate that the DLPR coding system achieves state-of-the-art lossless and near-lossless image compression performance with competitive coding speed.
6. Wang H, Deng K, Zhong G, Duan Y, Yin M, Meng F, Wang Y. Optimization of Internet of Things Remote Desktop Protocol for Low-Bandwidth Environments Using Convolutional Neural Networks. Sensors (Basel, Switzerland) 2024;24:1208. [PMID: 38400366; PMCID: PMC10892110; DOI: 10.3390/s24041208]
Abstract
This paper discusses optimizing desktop image quality and bandwidth consumption in remote IoT GUI desktop scenarios. Remote desktop tools, which are crucial for work efficiency, typically employ image compression techniques to manage bandwidth. Although JPEG is widely used for its efficiency in eliminating redundancy, it can introduce quality loss as compression increases. Recently, deep-learning-based compression techniques have emerged, challenging traditional methods like JPEG. This study introduces an optimized RFB (Remote Frame Buffer) protocol based on a convolutional neural network (CNN) image compression algorithm, focusing on human visual perception in desktop image processing. Compared to the unoptimized RFB protocol, the improved protocol saves 30-80% of bandwidth consumption and enhances remote desktop image quality, as evidenced by improved PSNR and MS-SSIM values between the remote desktop image and the original image, thus providing superior desktop image transmission quality.
Affiliation(s)
- Yulong Wang: Institute of Computer Application, China Academy of Engineering Physics, Mianyang 621900, China (also the affiliation of H.W., K.D., G.Z., Y.D., M.Y., and F.M.)
7. Ungureanu VI, Negirla P, Korodi A. Image-Compression Techniques: Classical and "Region-of-Interest-Based" Approaches Presented in Recent Papers. Sensors (Basel, Switzerland) 2024;24:791. [PMID: 38339507; PMCID: PMC10857028; DOI: 10.3390/s24030791]
Abstract
Image compression is a vital component of domains in which computational resources are usually scarce, such as the automotive and telemedicine fields. Moreover, in real-time systems, the large amount of data that must flow through the system can become a bottleneck, so the storage of images, alongside the compression, transmission, and decompression procedures, becomes vital. In recent years, many compression techniques have been developed that preserve only the quality of the region of interest of an image, the other parts being either discarded or compressed with major quality loss. This paper surveys relevant papers from the last decade that focus on selecting a region of interest of an image and on the compression techniques that can be applied to that area. To better highlight the novelty of the hybrid methods, classical state-of-the-art approaches are also analyzed. The work provides an overview of classical and hybrid compression methods, alongside a categorization based on compression ratio and other quality factors such as mean-square error, peak signal-to-noise ratio, and the structural similarity index measure. This overview can help researchers develop a better idea of which compression algorithms are used in certain domains and whether the reported performance parameters suit the intended purpose.
Affiliation(s)
- Vlad-Ilie Ungureanu: Automation and Applied Informatics Department, University Politehnica Timisoara, 300006 Timisoara, Romania
- Paul Negirla: Automation and Applied Informatics Department, University Politehnica Timisoara, 300006 Timisoara, Romania
8. Kizhakkumkara Muhamad R, Schretter C, Blinder D, Schelkens P. Autoregressive modeling for lossless compression of holograms. Optics Express 2023;31:38589-38609. [PMID: 38017961; DOI: 10.1364/oe.502545]
Abstract
The large number of pixels to be processed and stored by digital holographic techniques necessitates the development of effective lossless compression techniques. Use cases include archiving holograms, especially sensitive biomedical data, and improving the capacity of bandwidth-limited data transport channels where quality loss cannot be tolerated, such as display interfaces. Only a few lossless compression techniques exist for holography, and the search for an efficient technique well suited to the large numbers of pixels typically encountered is ongoing. We demonstrate the suitability of autoregressive modeling for compressing signals with limited spatial bandwidth content, such as holographic images. The applicability of such schemes to any bandlimited signal is motivated by a mathematical insight that is, to our knowledge, novel. The devised compression scheme is lossless and enables a decoding architecture with essentially only two steps. It is also highly scalable: smaller model sizes provide an effective, low-complexity mechanism for transmitting holographic data, while larger models obtain significantly higher compression ratios than state-of-the-art lossless image compression solutions for a wide selection of both computer-generated and optically acquired holograms. We also provide a detailed analysis of the various methods that can be used to determine the autoregressive model in the context of compression.
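To make the core idea concrete, the sketch below fits a causal autoregressive predictor by least squares and returns integer residuals for entropy coding. It is a minimal row-wise illustration under assumed conventions (predictor order, boundary handling), not the paper's hologram model or any of its model-determination methods; a real codec would also transmit the fitted coefficients to the decoder.

```python
# Minimal causal AR prediction sketch for lossless coding (illustrative only).
import numpy as np

def ar_residuals(img: np.ndarray, order: int = 2) -> np.ndarray:
    """Predict each pixel from `order` left neighbours per row; return integer residuals."""
    img = img.astype(np.float64)
    H, W = img.shape
    # Build a least-squares system from causal (already decoded) samples.
    cols = [img[:, order - j - 1 : W - j - 1].ravel() for j in range(order)]
    X = np.stack(cols, axis=1)                 # past samples, one row per target pixel
    y = img[:, order:].ravel()                 # target pixels
    a, *_ = np.linalg.lstsq(X, y, rcond=None)  # AR coefficients (sent to the decoder)
    res = np.zeros_like(img)
    res[:, :order] = img[:, :order]            # unpredicted boundary stored raw
    res[:, order:] = (y - X @ a).reshape(H, W - order)
    return np.rint(res)                        # small residuals -> cheap entropy coding
```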
9. Hamano G, Imaizumi S, Kiya H. Effects of JPEG Compression on Vision Transformer Image Classification for Encryption-then-Compression Images. Sensors (Basel, Switzerland) 2023;23:3400. [PMID: 37050460; PMCID: PMC10098741; DOI: 10.3390/s23073400]
Abstract
This paper evaluates the effects of JPEG compression on image classification using the Vision Transformer (ViT). In recent years, many studies have been carried out on classifying images in the encrypted domain for privacy preservation. The authors previously proposed an image classification method that encrypts both a trained ViT model and test images, where an encryption-then-compression system was employed to encrypt the test images and the ViT model was trained on plain images. The classification accuracy of that method was exactly equal to that without any encryption of the trained ViT model and test images. However, even though the encrypted test images are compressible, the practical effects of JPEG, a typical lossy compression method, had not been investigated. In this paper, we extend our previous method by compressing the encrypted test images with JPEG and verify the classification accuracy for the compressed encrypted images. Our experiments confirm that JPEG compression can significantly reduce the amount of data in the encrypted images while largely preserving classification accuracy. For example, when the quality factor is set to 85, classification accuracy is maintained at over 98% with a more than 90% reduction in the amount of image data. Additionally, the effectiveness of JPEG compression is demonstrated through comparison with linear quantization. To the best of our knowledge, this is the first study to classify JPEG-compressed encrypted images without sacrificing high accuracy; we conclude that compressed encrypted images can be classified without degrading accuracy.
Affiliation(s)
- Genki Hamano: Graduate School of Science and Engineering, Chiba University, 1-33 Yayoicho, Chiba 263-8522, Japan
- Shoko Imaizumi: Graduate School of Engineering, Chiba University, 1-33 Yayoicho, Chiba 263-8522, Japan
- Hitoshi Kiya: Faculty of System Design, Tokyo Metropolitan University, 6-6 Asahigaoka, Tokyo 191-0065, Japan
10. Žalik B, Strnad D, Kohek Š, Kolingerová I, Nerat A, Lukač N, Lipuš B, Žalik M, Podgorelec D. FLoCIC: A Few Lines of Code for Raster Image Compression. Entropy (Basel, Switzerland) 2023;25:533. [PMID: 36981421; PMCID: PMC10047997; DOI: 10.3390/e25030533]
Abstract
A new approach is proposed for lossless raster image compression employing interpolative coding. A new multifunction prediction scheme is presented first. Then interpolative coding, which has not often been applied to image compression, is explained briefly, and a simplification of the original approach is introduced. It is determined that the JPEG LS predictor reduces the information entropy slightly better than the multifunction approach. Furthermore, interpolative coding was moderately more efficient than the most frequently used arithmetic coding. Finally, our compression pipeline is compared against JPEG LS, JPEG 2000 in lossless mode, and PNG using 24 standard grayscale benchmark images. JPEG LS turned out to be the most efficient, followed by JPEG 2000, while our approach using simplified interpolative coding was moderately better than PNG. The implementation of the proposed encoder is extremely simple and can be written in fewer than 60 lines of programming code for the coder and 60 lines for the decoder, as demonstrated in the given pseudocodes.
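For reference, the JPEG LS predictor mentioned above is the median edge detector (MED) of LOCO-I, which selects between the left neighbour a, the upper neighbour b, and the planar estimate a + b - c (with c the upper-left neighbour):

```python
def med_predict(a: int, b: int, c: int) -> int:
    """JPEG LS (LOCO-I) median edge detector predictor."""
    if c >= max(a, b):
        return min(a, b)      # likely edge: pick the smaller neighbour
    if c <= min(a, b):
        return max(a, b)      # likely edge: pick the larger neighbour
    return a + b - c          # smooth region: planar estimate
```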
Affiliation(s)
- Borut Žalik, Damjan Strnad, Štefan Kohek, Andrej Nerat, Niko Lukač, Bogdan Lipuš, Mitja Žalik, David Podgorelec: Faculty of Electrical Engineering and Computer Science, University of Maribor, Koroška cesta 46, SI-2000 Maribor, Slovenia
- Ivana Kolingerová: Department of Computer Science and Engineering, University of West Bohemia, Technická 8, 306 14 Plzeň, Czech Republic
11. Xue D, Ma H, Li L, Liu D, Xiong Z. aiWave: Volumetric Image Compression With 3-D Trained Affine Wavelet-Like Transform. IEEE Transactions on Medical Imaging 2023;42:606-618. [PMID: 36201414; DOI: 10.1109/tmi.2022.3212780]
Abstract
Volumetric image compression has become an urgent task for effectively transmitting and storing images produced in biological research and clinical practice. At present, the most commonly used volumetric image compression methods are based on the wavelet transform, such as JP3D. However, JP3D employs an ideal, separable, global, and fixed wavelet basis to convert input images from the pixel domain to the frequency domain, which seriously limits its performance. In this paper, we first design a 3-D trained wavelet-like transform that enables a signal-dependent and non-separable transform. Then, an affine wavelet basis is introduced to capture the various local correlations in different regions of volumetric images. Furthermore, we embed the proposed wavelet-like transform into an end-to-end compression framework called aiWave to enable an adaptive compression scheme for various datasets. Finally, we introduce weight-sharing strategies for the affine wavelet-like transform according to the characteristics of volumetric data in the axial direction to reduce the number of parameters. The experimental results show that: 1) when combining our trained 3-D affine wavelet-like transform with a simple factorized entropy coding module, aiWave performs better than JP3D and is comparable in terms of encoding and decoding complexity; and 2) when a context module is added to further remove signal redundancy, aiWave achieves much better performance than HEVC.
12. Ulacha G, Łazoryszczak M. Lossless Image Coding Using Non-MMSE Algorithms to Calculate Linear Prediction Coefficients. Entropy (Basel, Switzerland) 2023;25:156. [PMID: 36673299; PMCID: PMC9857394; DOI: 10.3390/e25010156]
Abstract
This paper presents a lossless image compression method with a fast decoding time and flexible adjustment of coder parameters affecting its implementation complexity. Several approaches for computing non-MMSE prediction coefficients at different levels of complexity are compared. The data modeling stage of the proposed codec is based on linear prediction (calculated by the non-MMSE method) and non-linear prediction (complemented by a context-dependent constant component removal block). Prediction error coding uses a two-stage compression: an adaptive Golomb code followed by a binary arithmetic code. The proposed solution achieves 30% shorter decoding times and a lower bit average than competing solutions (7.9% lower than the popular JPEG-LS codec).
Affiliation(s)
- Grzegorz Ulacha: Correspondence: (G.U.); (M.Ł.); Tel.: +48-91-449-5542 (G.U.)
13. Schwartz C, Sander I, Bruhn F, Persson M, Ekblad J, Fuglesang C. Satellite Image Compression Guided by Regions of Interest. Sensors (Basel, Switzerland) 2023;23:730. [PMID: 36679527; PMCID: PMC9861944; DOI: 10.3390/s23020730]
Abstract
Small satellites enable a range of applications at an affordable price. Because their capacity for instruments with high power consumption or high data-rate requirements is limited, small satellite missions usually focus on specific monitoring and observation tasks. Considering that multispectral and hyperspectral sensors generate a significant amount of data subject to communication channel impairments, bandwidth constraints are an important challenge in data transmission. This issue is addressed mainly by source and channel coding techniques aiming at effective transmission. This paper targets a significant further bandwidth reduction by proposing an on-the-fly analysis on the satellite that decides which information is actually useful before coding and transmission. The images are tiled and classified using a set of detection algorithms after defining the least relevant content for general remote sensing applications. The methodology uses red-band, green-band, blue-band, and near-infrared-band measurements to classify the content, combining a cloud detection algorithm, a change detection algorithm, and a vessel detection algorithm. Experiments for a set of typical summer- and winter-day scenarios in Stockholm, Sweden were conducted, and the results show that unimportant content can be identified and discarded without compromising the predefined useful information for water and dry-land regions. For the evaluated images, only 22.3% of the information would need to be transmitted to the ground station to ensure the acquisition of all the important content, which illustrates the merits of the proposed method. Furthermore, the embedded platform's constraints regarding processing time were analyzed by running the detection algorithms on Unibap's iX10-100 space cloud platform.
Affiliation(s)
- Ingo Sander: KTH Royal Institute of Technology, 100 44 Stockholm, Sweden
- Fredrik Bruhn: Unibap AB, Kungsängsgatan 12, 753 22 Uppsala, Sweden; School of Innovation, Design and Engineering (IDT), Embedded Systems Division, Mälardalen University, P.O. Box 883, 721 23 Västerås, Sweden
- Joakim Ekblad: Saab AB, Olof Palmes Gata 17, 111 22 Stockholm, Sweden
14. Sun C, Fan X, Zhao D. Lossless Recompression of JPEG Images Using Transform Domain Intra Prediction. IEEE Transactions on Image Processing 2022;PP:88-99. [PMID: 37015483; DOI: 10.1109/tip.2022.3226409]
Abstract
JPEG, developed 30 years ago, is the most widely used image coding format, especially favored by resource-constrained devices due to its simplicity and efficiency. With the evolution of the Internet and the popularity of mobile devices, a huge number of user-generated JPEG images are uploaded to social media sites like Facebook and Flickr or stored on personal computers, which increases storage costs. However, the performance of JPEG is far from that of state-of-the-art coding methods, so lossless recompression of JPEG images, which further reduces storage cost while maintaining image fidelity, urgently needs to be studied. In this paper, a hybrid coding framework for the lossless recompression of JPEG images (LLJPEG) using transform-domain intra prediction is proposed, comprising block partition and intra prediction, transform and quantization, and entropy coding. Specifically, LLJPEG first uses intra prediction to obtain a predicted block. The predicted block is then transformed by the DCT and quantized to obtain predicted coefficients, which are subtracted from the original coefficients to form DCT coefficient residuals. Finally, the DCT residuals are entropy coded. LLJPEG introduces new coding tools for intra prediction, and its entropy coding is redesigned. The experiments show that LLJPEG reduces storage space by 29.43% and 26.40% on the Kodak and DIV2K datasets, respectively, without any loss for JPEG images, while maintaining low decoding complexity.
15. Sushma B, Aparna P. Recent developments in wireless capsule endoscopy imaging: Compression and summarization techniques. Computers in Biology and Medicine 2022;149:106087. [PMID: 36115301; DOI: 10.1016/j.compbiomed.2022.106087]
Abstract
Wireless capsule endoscopy (WCE) is an innovative technology introduced in the medical domain to directly visualize the digestive system using a battery-powered electronic capsule. It is considered a desirable substitute for conventional digestive tract diagnostic methods, offering a comfortable and painless inspection. Despite its many benefits, WCE yields poor video quality due to low frame resolution, which limits diagnostic accuracy. Many research groups have presented diverse low-complexity compression techniques to economize the battery power consumed by radio-frequency transmission of the captured video, which allows images to be captured at higher resolution. Many vision-based computational methods have also been developed to improve the diagnostic yield, including approaches for automatically detecting abnormalities and reducing the time needed for video analysis. Although various research works have been put forth in the WCE imaging field, a wide gap remains between existing techniques and current needs. Hence, this article systematically reviews recent WCE video compression and summarization techniques. The review's objectives are as follows: first, to detail the requirements, challenges, and design precepts for low-complexity WCE video compressors; second, to discuss the most recent compression methods, emphasizing simple distributed video coding methods; third, to review the most recent summarization techniques and the significance of deep neural networks; further, to provide a quantitative analysis of the state-of-the-art methods along with their advantages and drawbacks; and finally, to discuss existing problems and possible future directions for building a robust WCE imaging framework.
Affiliation(s)
- Sushma B: Image Processing and Analysis Lab (iPAL), Department of Electronics and Communication Engineering, National Institute of Technology Karnataka-Surathkal, Mangalore 575025, Karnataka, India; Department of Electronics and Communication Engineering, CMR Institute of Technology, Bengaluru 560037, Karnataka, India
- Aparna P: Image Processing and Analysis Lab (iPAL), Department of Electronics and Communication Engineering, National Institute of Technology Karnataka-Surathkal, Mangalore 575025, Karnataka, India
16. Nakahara Y, Matsushima T. Stochastic Model of Block Segmentation Based on Improper Quadtree and Optimal Code under the Bayes Criterion. Entropy (Basel, Switzerland) 2022;24:1152. [PMID: 36010816; PMCID: PMC9407622; DOI: 10.3390/e24081152]
Abstract
Most previous studies on lossless image compression have focused on improving preprocessing functions to reduce the redundancy of pixel values in real images. In contrast, we assume stochastic generative models directly on pixel values and focus on achieving the theoretical limit of the assumed models. In this study, we propose a stochastic model based on improper quadtrees and theoretically derive the optimal code for the proposed model under the Bayes criterion. In general, Bayes-optimal codes require calculation of exponential order with respect to the data length. However, by assuming a novel prior distribution, we propose an algorithm that requires only polynomial-order calculation without losing optimality.
Affiliation(s)
- Yuta Nakahara: Center for Data Science, Waseda University, 1-6-1 Nishiwaseda, Shinjuku-ku, Tokyo 169-8050, Japan
- Toshiyasu Matsushima: Department of Pure and Applied Mathematics, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan
17. Recursive Least Squares for Near-Lossless Hyperspectral Data Compression. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12147172]
Abstract
A hyperspectral image compression scheme must trade off the limited hardware resources of the on-board platform against the ever-growing resolution of optical instruments. Predictive coding attracts researchers due to its low computational complexity and moderate memory requirements. We propose a near-lossless prediction-based compression scheme that removes spatially and spectrally redundant information, thereby significantly reducing the size of hyperspectral images. The scheme predicts each target pixel's value via a linear combination of previous pixels. The weight matrix of the predictor is iteratively updated using a recursive least squares filter with a quantizer in the loop. The optimal number of bands used for prediction was determined experimentally. The results indicate that the proposed scheme outperforms state-of-the-art compression methods in terms of compression ratio and retrieval quality.
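The heart of such a scheme is the recursive least squares update itself. The sketch below shows one standard exponentially weighted RLS step for adapting the prediction weights; the paper's in-loop quantizer and band-selection logic are not reproduced, and the forgetting factor lam is an illustrative assumption.

```python
# One standard exponentially weighted RLS step (illustrative; the paper's
# in-loop quantizer and band selection are omitted).
import numpy as np

def rls_step(w, P, x, d, lam=0.999):
    """w: (n,) weights, P: (n, n) inverse correlation matrix,
    x: (n,) causal context (previous pixels/bands), d: target pixel value."""
    e = d - w @ x                    # prediction error (quantized and entropy coded)
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    w = w + k * e                    # weight update
    P = (P - np.outer(k, Px)) / lam  # inverse correlation update
    return w, P, e
```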
18. Lone MR. A high speed and memory efficient algorithm for perceptually-lossless volumetric medical image compression. Journal of King Saud University - Computer and Information Sciences 2022. [DOI: 10.1016/j.jksuci.2020.04.014]
19. Fan G, Pan Z, Zhou Q, Dong J, Zhang X. Reversible data hiding in multispectral images for satellite communications. Journal of Information Security and Applications 2022. [DOI: 10.1016/j.jisa.2022.103180]
20. de Sousa Filho FNM, Pereira de Sá VG, Brigatti E. Entropy estimation in bidimensional sequences. Physical Review E 2022;105:054116. [PMID: 35706216; DOI: 10.1103/physreve.105.054116]
Abstract
We investigate the performance of entropy estimation methods based either on block entropies or on compression approaches in the case of bidimensional sequences. We introduce a validation data set made of images produced by a large number of different natural systems, the vast majority characterized by long-range correlations, which produce a large spectrum of entropies. The results show that the framework based on lossless compressors applied to the one-dimensional projection of the data set leads to poor estimates, because higher-dimensional correlations are lost in the projection operation. Adopting compression methods that do not introduce dimensionality reduction improves the performance of this approach. By far the best estimates of the asymptotic entropy are produced by the traditional block-entropy method, owing to its faster convergence. As a by-product of our analysis, we show how a specific compressor method can be used as a potentially interesting technique for automatic detection of symmetries in textures and images.
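As a reference point for the block-entropy framework that performs best here, the sketch below computes a per-pixel Shannon entropy estimate from the frequencies of k × k patches; the asymptotic entropy is approached as k grows. The exact estimators and extrapolation used in the paper may differ.

```python
# Per-pixel Shannon entropy of k x k blocks (illustrative estimator).
from collections import Counter
import numpy as np

def block_entropy(img: np.ndarray, k: int) -> float:
    H, W = img.shape
    counts = Counter(
        img[i : i + k, j : j + k].tobytes()     # each patch as a hashable key
        for i in range(H - k + 1)
        for j in range(W - k + 1)
    )
    n = sum(counts.values())
    p = np.array(list(counts.values())) / n     # empirical patch distribution
    return float(-(p * np.log2(p)).sum()) / (k * k)   # bits per pixel
```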
Affiliation(s)
- F N M de Sousa Filho: Instituto de Computação, Universidade Federal do Rio de Janeiro, Av. Athos da Silveira Ramos, 274, 21941-916, Rio de Janeiro, RJ, Brazil
- V G Pereira de Sá: Instituto de Computação, Universidade Federal do Rio de Janeiro, Av. Athos da Silveira Ramos, 274, 21941-916, Rio de Janeiro, RJ, Brazil
- E Brigatti: Instituto de Física, Universidade Federal do Rio de Janeiro, Av. Athos da Silveira Ramos, 149, Cidade Universitária, 21941-972, Rio de Janeiro, RJ, Brazil
21. Min Q, Wang X, Huang B, Zhou Z. Lossless medical image compression based on anatomical information and deep neural networks. Biomedical Signal Processing and Control 2022. [DOI: 10.1016/j.bspc.2022.103499]
22. Chen Z, Gu S, Lu G, Xu D. Exploiting Intra-Slice and Inter-Slice Redundancy for Learning-Based Lossless Volumetric Image Compression. IEEE Transactions on Image Processing 2022;31:1697-1707. [PMID: 35081025; DOI: 10.1109/tip.2022.3140608]
Abstract
3D volumetric image processing has attracted increasing attention in recent decades; one major research area is the development of efficient lossless volumetric image compression techniques to better store and transmit images carrying massive amounts of information. In this work, we propose the first end-to-end optimized learning framework for losslessly compressing 3D volumetric data. Our approach builds upon a hierarchical compression scheme, additionally introducing intra-slice auxiliary features and estimating the entropy model from both intra-slice and inter-slice latent priors. Specifically, we first extract hierarchical intra-slice auxiliary features through multi-scale feature extraction modules. Then, an intra-slice and inter-slice conditional entropy coding module is proposed to fuse intra-slice and inter-slice information from different scales as context information. Based on this context information, we can predict the distributions of both the intra-slice auxiliary features and the slice images. To further improve lossless compression performance, we also introduce two new gating mechanisms, called Intra-Gate and Inter-Gate, to generate optimal feature representations for better information fusion. Eventually, we produce the bitstream for losslessly compressing volumetric images based on the estimated entropy model. Unlike existing lossless volumetric image codecs, our end-to-end optimized framework jointly learns intra-slice auxiliary features at different scales for each slice and inter-slice latent features from previously encoded slices for better entropy estimation. Extensive experimental results indicate that our framework outperforms state-of-the-art hand-crafted lossless volumetric image codecs (e.g., JP3D) and learning-based lossless image compression methods on four volumetric image benchmarks covering both 3D medical images and hyperspectral images.
23. Santos JM, Thomaz LA, Assuncao PAA, Cruz LADS, Tavora L, de Faria SMM. Lossless Coding of Light Fields Based on 4D Minimum Rate Predictors. IEEE Transactions on Image Processing 2022;31:1708-1722. [PMID: 35100115; DOI: 10.1109/tip.2022.3146009]
Abstract
Common representations of light fields use four-dimensional data structures, where a given pixel is closely related not only to its spatial neighbours within the same view but also to its angular neighbours, co-located in adjacent views. This structure presents increased redundancy between pixels compared with regular single-view images. These redundancies are exploited to obtain compressed representations of the light field, using prediction algorithms specifically tailored to estimate pixel values from both spatial and angular references. This paper proposes new encoding schemes that take advantage of the four-dimensional light field data structure to improve the coding performance of Minimum Rate Predictors. The proposed methods expand previous research on lossless coding beyond the current state of the art. The experimental results, obtained using both traditional and more challenging datasets, show bit-rate savings of no less than 10% compared with existing methods for lossless light field compression.
24. Nagoor OH, Whittle J, Deng J, Mora B, Jones MW. Sampling strategies for learning-based 3D medical image compression. Machine Learning with Applications 2022. [DOI: 10.1016/j.mlwa.2022.100273]
25. A Systematic Review of Hardware-Accelerated Compression of Remotely Sensed Hyperspectral Images. Sensors (Basel, Switzerland) 2021;22:263. [PMID: 35009804; PMCID: PMC8749878; DOI: 10.3390/s22010263]
Abstract
Hyperspectral imaging is an indispensable technology for many remote sensing applications, yet expensive in terms of computing resources. It requires significant processing power and large storage due to the immense size of hyperspectral data, especially in the aftermath of the recent advancements in sensor technology. Issues pertaining to bandwidth limitation also arise when seeking to transfer such data from airborne satellites to ground stations for postprocessing. This is particularly crucial for small satellite applications where the platform is confined to limited power, weight, and storage capacity. The availability of onboard data compression would help alleviate the impact of these issues while preserving the information contained in the hyperspectral image. We present herein a systematic review of hardware-accelerated compression of hyperspectral images targeting remote sensing applications. We reviewed a total of 101 papers published from 2000 to 2021. We present a comparative performance analysis of the synthesized results with an emphasis on metrics like power requirement, throughput, and compression ratio. Furthermore, we rank the best algorithms based on efficiency and elaborate on the major factors impacting the performance of hardware-accelerated compression. We conclude by highlighting some of the research gaps in the literature and recommend potential areas of future research.
26. Xin G, Fan P. Soft Compression for Lossless Image Coding Based on Shape Recognition. Entropy (Basel, Switzerland) 2021;23:1680. [PMID: 34945986; PMCID: PMC8700521; DOI: 10.3390/e23121680]
Abstract
Soft compression is a lossless image compression method committed to eliminating coding redundancy and spatial redundancy simultaneously; to do so, it uses shapes to encode an image. In this paper, we propose a compressible indicator function for images, which gives a threshold on the average number of bits required to represent a location and can be used to illustrate the working principle. We investigate and analyze soft compression for binary, gray, and multi-component images with specific algorithms and compressible indicator values. In terms of compression ratio, the soft compression algorithm outperforms the popular classical standards PNG and JPEG2000 in lossless image compression. It is expected that the bandwidth and storage space needed for transmitting and storing the same kind of images (such as medical images) can be greatly reduced by applying soft compression.
Affiliation(s)
- Gangtao Xin: Department of Electronic Engineering, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology (BNRist), Beijing 100084, China
- Pingyi Fan: Department of Electronic Engineering, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology (BNRist), Beijing 100084, China; Correspondence: Tel.: +86-010-6279-6973
27. A Reversible Data Hiding Method in Encrypted Images for Controlling Trade-Off between Hiding Capacity and Compression Efficiency. Journal of Imaging 2021;7:268. [PMID: 34940735; PMCID: PMC8703996; DOI: 10.3390/jimaging7120268]
Abstract
In this paper, we propose a new framework for reversible data hiding in encrypted images in which both the hiding capacity and the lossless compression efficiency are flexibly controlled. It has two main purposes: to provide highly efficient lossless compression under a required hiding capacity, and to enable extraction of an embedded payload from a decrypted image. The proposed method can decrypt marked encrypted images without data extraction to derive marked images. An original image is arbitrarily divided into two regions, and two different methods for reversible data hiding in encrypted images (RDH-EI) are used, one for each region. Consequently, one region can be decrypted without data extraction and losslessly compressed using image coding standards even after processing, while the other offers a significantly high hiding rate of around 1 bpp. Experimental results show the effectiveness of the proposed method in terms of hiding capacity and lossless compression efficiency.
28. Wu F, Zhou X, Chen Z, Yang B. A reversible data hiding scheme for encrypted images with pixel difference encoding. Knowledge-Based Systems 2021. [DOI: 10.1016/j.knosys.2021.107583]
29.
Abstract
Deep networks often possess a vast number of parameters, and their significant redundancy in parameterization has become a widely recognized property. This presents significant challenges and restricts many deep learning applications, so research has focused on reducing the complexity of models while maintaining their powerful performance. In this paper, we present an overview of popular methods and review recent works on compressing and accelerating deep neural networks, considering not only pruning methods but also quantization and low-rank factorization methods. This review also aims to clarify these major concepts and highlights their characteristics, advantages, and shortcomings.
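As a concrete instance of one of the reviewed families, the sketch below performs plain magnitude-based weight pruning; it is a generic illustration, not a method from any particular surveyed work.

```python
# Generic magnitude pruning: zero the smallest-magnitude weights.
import numpy as np

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a copy of w with (at least) the `sparsity` fraction of
    smallest-|value| entries set to zero."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return w * (np.abs(w) > thresh)   # ties at the threshold are pruned too
```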
30. Zhao S, Yang S, Liu Z, Feng Z, Zhang K. Sparse flow adversarial model for robust image compression. Knowledge-Based Systems 2021. [DOI: 10.1016/j.knosys.2021.107284]
31. Sharma U, Sood M, Puthooran E. A Block Adaptive Near-Lossless Compression Algorithm for Medical Image Sequences and Diagnostic Quality Assessment. Journal of Digital Imaging 2021;33:516-530. [PMID: 31659588; DOI: 10.1007/s10278-019-00283-3]
Abstract
Near-lossless compression achieves a better compression ratio than lossless compression while maintaining a maximum error limit for each pixel. It takes advantage of both lossy and lossless compression methods to provide a high compression ratio that can be used for medical images while preserving diagnostic information. The proposed algorithm uses a resolution- and modality-independent threshold-based predictor, an optimal quantization (q) level, and adaptive block-size encoding. It employs a resolution-independent gradient edge detector (RIGED) to remove inter-pixel redundancy, followed by quantization with an optimum q level and block-adaptive arithmetic encoding (BAAE) to remove coding redundancy, yielding high compression efficiency and good quality of the recovered images. The proposed method is implemented on volumetric 8-bit and 16-bit standard medical images and also validated on real-time 16-bit-depth images collected from government hospitals. The results show the proposed algorithm yields high coding performance, with a BPP (bits per pixel) of 1.37 and a high peak signal-to-noise ratio (PSNR) of 51.35 dB for the 8-bit-depth image dataset, compared with other near-lossless compression methods. Average BPP values of 3.411 and 2.609 are obtained for the 16-bit standard medical image dataset and the real-time medical dataset, respectively, with maintained image quality. The improved near-lossless predictive coding technique achieves a high compression ratio without losing diagnostic information.
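The error-bound mechanism underlying near-lossless coding can be illustrated with the standard uniform residual quantizer (the same form used by JPEG-LS in near-lossless mode): with error limit q, every reconstructed pixel differs from the original by at most q. The paper's block-adaptive selection of q is not reproduced here.

```python
# Standard near-lossless residual quantizer with per-pixel error bound q.
def quantize_residual(e: int, q: int) -> int:
    if e >= 0:
        return (e + q) // (2 * q + 1)
    return -((q - e) // (2 * q + 1))

def dequantize_residual(qe: int, q: int) -> int:
    return qe * (2 * q + 1)

# For every integer residual e, the reconstruction error is bounded by q.
assert all(abs(e - dequantize_residual(quantize_residual(e, 2), 2)) <= 2
           for e in range(-255, 256))
```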
Affiliation(s)
- Urvashi Sharma: Department of Electronics and Communication, Jaypee University of Information Technology, Waknaghat, H.P., India
- Meenakshi Sood: Department of Electronics and Communication, Jaypee University of Information Technology, Waknaghat, H.P., India
- Emjee Puthooran: Department of Electronics and Communication, Jaypee University of Information Technology, Waknaghat, H.P., India
32. A Stochastic Model for Block Segmentation of Images Based on the Quadtree and the Bayes Code for It. Entropy (Basel, Switzerland) 2021;23:991. [PMID: 34441131; PMCID: PMC8392546; DOI: 10.3390/e23080991]
Abstract
In information theory, lossless compression of general data is based on an explicit assumption of a stochastic generative model for the target data. In lossless image compression, however, researchers have mainly focused on the coding procedure that outputs a coded sequence from an input image, leaving the assumed stochastic generative model implicit. In such studies, it is difficult to discuss the difference between the expected code length and the entropy of the stochastic generative model. We solve this difficulty for a class of images that exhibit non-stationarity among segments. In this paper, we propose a novel stochastic generative model of images by making explicit the implicit model underlying a previous coding procedure. Our model is based on the quadtree, so it effectively represents variable-block-size segmentation of images. We then construct the Bayes code optimal for the proposed stochastic generative model. It requires a summation over all possible quadtrees weighted by their posterior, whose computational cost in general increases exponentially with the image size. However, we introduce an efficient algorithm that computes it in polynomial order of the image size without loss of optimality. As a result, the derived algorithm has a better average coding rate than that of JBIG.
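To make the variable-block-size idea concrete, the sketch below produces a single quadtree segmentation with a hard homogeneity test. This is only an illustration of quadtree block splitting: the Bayes code described above instead weights all possible quadtrees by their posterior rather than committing to one tree, and the threshold rule here is an assumption.

```python
# One hard quadtree segmentation (illustrative; the Bayes code averages
# over all quadtrees instead of picking one).
import numpy as np

def quadtree_blocks(img, x, y, size, thresh, out):
    block = img[y : y + size, x : x + size]
    if size == 1 or np.ptp(block) <= thresh:    # homogeneous enough: one leaf
        out.append((x, y, size))
        return
    h = size // 2                               # otherwise recurse into 4 quadrants
    for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
        quadtree_blocks(img, x + dx, y + dy, h, thresh, out)

img = np.zeros((64, 64), dtype=int)
img[32:, 32:] = 255                             # one bright quadrant
blocks = []
quadtree_blocks(img, 0, 0, 64, thresh=0, out=blocks)
print(blocks)                                   # 4 leaves of size 32
```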
33. Li L, Chang CC, Lin CC. Reversible Data Hiding in Encrypted Image Based on (7, 4) Hamming Code and UnitSmooth Detection. Entropy (Basel, Switzerland) 2021;23:790. [PMID: 34206604; PMCID: PMC8306628; DOI: 10.3390/e23070790]
Abstract
With the development of cloud storage and privacy protection, reversible data hiding in encrypted images (RDHEI) plays the dual role of privacy protection and secret information transmission, and it has good application prospects and practical value. Current RDHEI algorithms still have room for improvement in terms of hiding capacity, security, and separability. Based on the (7, 4) Hamming code and our proposed prediction/detection functions, this paper proposes a Hamming-code- and UnitSmooth-detection-based RDHEI scheme, called HUD-RDHEI for short. To evaluate its performance, two databases, BOWS-2 and BOSSBase, were used in the experiments, with peak signal-to-noise ratio (PSNR) and pure embedding rate (ER) serving as the criteria for image quality and hiding capacity. Experimental results confirm that the average pure ER of the proposed scheme reaches up to 2.556 bpp and 2.530 bpp on BOSSBase and BOWS-2, respectively, while security and separability are guaranteed. Moreover, no bits are incorrectly extracted during the data extraction phase, and the visual quality of the directly decrypted image is exactly the same as that of the cover image.
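For reference, the standard (7, 4) Hamming code that the scheme builds on encodes 4 data bits into 7, and the 3-bit syndrome locates any single flipped bit. The sketch below shows plain encoding and single-error correction; the UnitSmooth prediction/detection functions and the embedding logic itself are not reproduced.

```python
# Standard (7, 4) Hamming code: encoding and single-error correction.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # generator matrix: 4 data bits | 3 parity bits
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],   # parity-check matrix: H @ G.T == 0 (mod 2)
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    return (np.asarray(data4) @ G) % 2

def correct(code7):
    code7 = np.asarray(code7).copy()
    s = (H @ code7) % 2                            # syndrome
    if s.any():                                    # non-zero: matches one column of H
        idx = next(i for i in range(7) if (H[:, i] == s).all())
        code7[idx] ^= 1                            # flip the erroneous bit back
    return code7

c = encode([1, 0, 1, 1])
c[2] ^= 1                                          # simulate a single bit flip
assert (correct(c) == encode([1, 0, 1, 1])).all()
```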
Affiliation(s)
- Lin Li: Computer Engineering College, Ji Mei University, Xiamen 361021, China; Department of Information Engineering and Computer Science, Feng Chia University, 100 Wenhwa Road, Seatwen, Taichung 40724, Taiwan
- Chin-Chen Chang: Department of Information Engineering and Computer Science, Feng Chia University, 100 Wenhwa Road, Seatwen, Taichung 40724, Taiwan (corresponding author)
- Chia-Chen Lin: Department of Computer Science and Information Engineering, National Chin-Yi University of Technology, Taichung 411030, Taiwan (corresponding author)
34. Al-Khayyat K, Al-Shaikhli I, Al-Hagery M. Second compression for pixelated images under edge-based compression algorithms: JPEG-LS as an example. Journal of Intelligent & Fuzzy Systems 2021. [DOI: 10.3233/jifs-201563]
Abstract
This paper examines a particular case of data compression: when an edge-based compression algorithm compresses (previously compressed) pixelated images, its output still contains redundancy, and this newly created redundancy can be removed by another round of compression. This work used JPEG-LS as an example of an edge-based compression algorithm for compressing pixelated images, and its output was subjected to a second round of compression using a more robust but slower compressor (PAQ8f). The compression ratio of the second compression was, on average, 18%, which is high for random data. The results of the two successive compressions were superior to lossy JPEG: on the data set used, lossy JPEG would need to sacrifice about 10% on average to approach the total compression ratio of the two successive lossless compressions. To generalize the results, fast general-purpose compression algorithms (7z, bz2, and Gzip) were tested as well.
Affiliation(s)
- Kamal Al-Khayyat
  Kulliyyah of Information and Communications Technology, Department of Computer Science, International Islamic University Malaysia, Kuala Lumpur, Malaysia
- Imad Al-Shaikhli
  Kulliyyah of Information and Communications Technology, Department of Computer Science, International Islamic University Malaysia, Kuala Lumpur, Malaysia
- Mohammed Al-Hagery
  Department of Computer Science, College of Computer, Qassim University, Buraydah, Saudi Arabia
  BIND Research Group, College of Computer, Qassim University, Buraydah, Saudi Arabia
35
Xin G, Fan P. A lossless compression method for multi-component medical images based on big data mining. Sci Rep 2021; 11:12372. [PMID: 34117350 PMCID: PMC8196061 DOI: 10.1038/s41598-021-91920-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2021] [Accepted: 06/02/2021] [Indexed: 11/08/2022] Open
Abstract
Medical images play an important part in disease diagnosis. Their lossless compression is critical, as it directly determines the local storage space and communication bandwidth required by remote medical systems, and thus supports the diagnosis and treatment of patients. Two properties of medical images are central here: the requirement for lossless reconstruction and the strong similarity between images. How to take advantage of these two properties to reduce the information needed to represent an image is the key point of compression. In this paper, we employ big data mining to set up the image codebook, that is, to find the basic components of images. We propose a soft compression algorithm for multi-component medical images, which can exactly reflect the fundamental structure of images. A general representation framework for image compression is also put forward, and the results indicate that the developed soft compression algorithm can outperform the popular benchmarks PNG and JPEG2000 in terms of compression ratio.
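A hedged sketch of the codebook idea (the paper's soft compression algorithm is considerably more elaborate): mine the most frequent fixed-size blocks from a training set and represent an image as indices into that codebook.

    from collections import Counter
    import numpy as np

    def mine_codebook(images, size=2, top=256):
        """Collect the most frequent size x size blocks over a training set."""
        counts = Counter()
        for im in images:
            h, w = im.shape
            for y in range(0, h - size + 1, size):
                for x in range(0, w - size + 1, size):
                    counts[im[y:y+size, x:x+size].tobytes()] += 1
        return [blk for blk, _ in counts.most_common(top)]

    def encode(im, codebook, size=2):
        """Replace each block by its codebook index (escape symbol = -1)."""
        index = {blk: i for i, blk in enumerate(codebook)}
        h, w = im.shape
        return [index.get(im[y:y+size, x:x+size].tobytes(), -1)
                for y in range(0, h - size + 1, size)
                for x in range(0, w - size + 1, size)]

    rng = np.random.default_rng(1)
    train = [rng.integers(0, 2, (64, 64)).astype(np.uint8) for _ in range(4)]
    cb = mine_codebook(train)
    symbols = encode(train[0], cb)
    print(len(cb), len(symbols))   # frequent blocks become short codebook indices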
Affiliation(s)
- Gangtao Xin
  Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Pingyi Fan
  Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
36
High-Capacity Reversible Data Hiding in Encrypted Images Based on Hierarchical Quad-Tree Coding and Multi-MSB Prediction. ELECTRONICS 2021. [DOI: 10.3390/electronics10060664] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Nowadays, more and more researchers are interested in reversible data hiding in encrypted images (RDHEI), which can be applied to privacy protection and cloud storage. In this paper, a new RDHEI method based on hierarchical quad-tree coding and multi-MSB (most significant bit) prediction is proposed. The content owner performs pixel prediction to obtain a prediction-error image and explores the maximum embedding capacity of the prediction-error image by hierarchical quad-tree coding before image encryption. According to the marked bits of vacated room capacity, the data hider can embed additional data into the room-vacated image without knowing the content of the original image. Using the data hiding key and the encryption key, the legal receiver is able to conduct data extraction and image recovery separately. Experimental results show that the average embedding rates of the proposed method reach 3.504 bpp (bits per pixel), 3.394 bpp, and 2.746 bpp on three well-known databases, BOSSBase, BOWS-2, and UCID, respectively, which is higher than some state-of-the-art methods.
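The quad-tree component can be illustrated with a minimal Python sketch (an assumption-laden simplification of the paper's hierarchical coding): a block that is uniform is emitted as a single flag plus its value; otherwise a zero flag is emitted and the four quadrants are visited recursively.

    import numpy as np

    def quadtree(mask, y, x, size, out):
        """Emit a 1 bit plus the value for a uniform block,
        otherwise a 0 bit and recurse into the four quadrants."""
        block = mask[y:y+size, x:x+size]
        if block.min() == block.max() or size == 1:
            out.extend([1, int(block[0, 0])])
            return
        out.append(0)
        h = size // 2
        for dy in (0, h):
            for dx in (0, h):
                quadtree(mask, y + dy, x + dx, h, out)

    mask = np.zeros((8, 8), dtype=np.uint8)
    mask[6:, 6:] = 1                      # a small non-uniform corner
    bits = []
    quadtree(mask, 0, 0, 8, bits)
    print(len(bits), "bits instead of", mask.size)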
37
Abstract
A great deal of information is produced daily due to advances in telecommunication, and storing it on digital devices or transmitting it over the Internet is challenging; data compression is essential to managing this information well. Research on data compression has therefore become a topic of great interest, and the number of applications in this area is increasing. Over the last few decades, international organisations have developed many strategies for data compression, yet no single algorithm works well on all types of data. The compression ratio and the encoding and decoding times are the criteria mainly used to evaluate a lossless image compression algorithm. Although the compression ratio matters most for some applications, others may require higher encoding or decoding speeds or both; alternatively, all three parameters may be equally important. The main aim of this article is to analyse the most advanced lossless image compression algorithms from each of these points of view and to evaluate the strength of each algorithm for each kind of image. We also develop a technique for evaluating an image compression algorithm on the basis of more than one parameter. The findings presented in this paper may be helpful to new researchers and users in this area.
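One plausible form of such a multi-parameter evaluation — the weights and the candidate numbers below are illustrative assumptions, not the article's — normalises each criterion to [0, 1] and combines them into a single score:

    def score(candidates, weights=(0.5, 0.25, 0.25)):
        """Rank compressors on compression ratio (higher is better) and
        encode/decode times (lower is better), each normalised to [0, 1]."""
        ratios = [c["ratio"] for c in candidates]
        encs = [c["enc_s"] for c in candidates]
        decs = [c["dec_s"] for c in candidates]
        wr, we, wd = weights
        ranked = []
        for c in candidates:
            s = (wr * c["ratio"] / max(ratios)
                 + we * min(encs) / c["enc_s"]
                 + wd * min(decs) / c["dec_s"])
            ranked.append((s, c["name"]))
        return sorted(ranked, reverse=True)

    candidates = [  # illustrative numbers only
        {"name": "codec A", "ratio": 2.8, "enc_s": 1.2, "dec_s": 0.4},
        {"name": "codec B", "ratio": 2.3, "enc_s": 0.3, "dec_s": 0.2},
    ]
    print(score(candidates))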
38
Starosolski R. Employing New Hybrid Adaptive Wavelet-Based Transform and Histogram Packing to Improve JP3D Compression of Volumetric Medical Images. ENTROPY (BASEL, SWITZERLAND) 2020; 22:E1385. [PMID: 33297589 PMCID: PMC7762414 DOI: 10.3390/e22121385] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/18/2020] [Revised: 12/02/2020] [Accepted: 12/04/2020] [Indexed: 11/16/2022]
Abstract
The primary purpose of the reported research was to improve the discrete wavelet transform (DWT)-based JP3D compression of volumetric medical images by applying new methods previously used only in the compression of two-dimensional (2D) images. Namely, we applied reversible denoising and lifting steps with step skipping to the three-dimensional (3D) DWT and constructed a hybrid transform that combined the 3D-DWT with prediction. We evaluated these methods using a test set containing images of the Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Ultrasound (US) modalities. They proved effective for 3D data, resulting in over two times greater compression ratio improvements than competitive methods. While employing fast entropy estimation of the JP3D compression ratio to reduce the cost of image-adaptive parameter selection for the new methods, we found that some MRI images had sparse histograms of intensity levels. We applied classical histogram packing (HP) and found that, on average, it resulted in greater ratio improvements than the new, more sophisticated methods, and that it could be combined with them to improve ratios further. Finally, we proposed a few practical compression schemes that exploit HP, entropy estimation, and the new methods; on average, they improved the compression ratio by up to about 6.5% at an acceptable cost.
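Classical histogram packing itself is simple enough to sketch: remap the sparse set of occurring intensity levels onto consecutive integers before the transform, keeping the level table so the mapping can be inverted after decompression.

    import numpy as np

    def pack_histogram(img):
        """Map the k distinct intensity levels of img onto 0..k-1."""
        levels = np.unique(img)                 # sorted distinct values
        packed = np.searchsorted(levels, img)   # index of each pixel's level
        return packed.astype(np.uint16), levels # levels invert the mapping

    def unpack_histogram(packed, levels):
        return levels[packed]

    img = np.array([[0, 512, 512], [1024, 0, 512]], dtype=np.uint16)
    packed, levels = pack_histogram(img)
    assert np.array_equal(unpack_histogram(packed, levels), img)
    print(packed)   # [[0 1 1] [2 0 1]] -- dense levels compress better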
Affiliation(s)
- Roman Starosolski
  Department of Algorithmics and Software, Silesian University of Technology, 44-100 Gliwice, Poland
39
Hou Z, Al-Atabany W, Farag R, Vuong QC, Mokhov A, Degenaar P. A scalable data transmission scheme for implantable optogenetic visual prostheses. J Neural Eng 2020; 17:055001. [DOI: 10.1088/1741-2552/abaf2e] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
40
High-Performance Lossless Compression of Hyperspectral Remote Sensing Scenes Based on Spectral Decorrelation. REMOTE SENSING 2020. [DOI: 10.3390/rs12182955] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The capacity of the downlink channel is a major bottleneck for applications based on remote sensing hyperspectral imagery (HSI). Data compression is an essential tool to maximize the number of HSI scenes that can be retrieved on the ground. At the same time, energy and hardware constraints of spaceborne devices impose limitations on the complexity of practical compression algorithms. To avoid any distortion in the analysis of the HSI data, only lossless compression is considered in this study. This work aims at finding the most advantageous compression–complexity trade-off within the state of the art in HSI compression. To do so, a novel comparison of the most competitive spectral decorrelation approaches combined with the best-performing low-complexity compressors of the state of the art is presented. Compression performance and execution time results are obtained for a set of 47 HSI scenes produced by 14 different sensors in real remote sensing missions. Assuming only a limited amount of energy is available, the obtained data suggest that the FAPEC algorithm yields the best trade-off. When compared to the CCSDS 123.0-B-2 standard, FAPEC is 5.0 times faster and its compressed data rates are on average within 16% of the CCSDS standard. In scenarios where energy constraints can be relaxed, CCSDS 123.0-B-2 yields the best average compression results of all evaluated methods.
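A minimal example of spectral decorrelation (far simpler than the approaches compared in the study): predict each band from the previous one and keep only the residuals, reversibly.

    import numpy as np

    def spectral_decorrelate(cube):
        """cube: (bands, rows, cols). Keep band 0, replace each later band
        by its difference from the previous band (reversible)."""
        out = cube.astype(np.int32)
        out[1:] -= cube[:-1].astype(np.int32)
        return out

    def recorrelate(res):
        return np.cumsum(res, axis=0, dtype=np.int32)

    rng = np.random.default_rng(2)
    base = rng.integers(0, 4096, (1, 64, 64))
    cube = (base + rng.integers(-8, 9, (20, 64, 64))).astype(np.int32)
    res = spectral_decorrelate(cube)
    assert np.array_equal(recorrelate(res), cube)
    # Residual spread is a rough proxy for entropy-coding cost.
    print(cube.std(), res[1:].std())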
41
Ulacha G, Stasiński R, Wernik C. Extended Multi WLS Method for Lossless Image Coding. ENTROPY (BASEL, SWITZERLAND) 2020; 22:E919. [PMID: 33286688 PMCID: PMC7597162 DOI: 10.3390/e22090919] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Revised: 07/31/2020] [Accepted: 08/19/2020] [Indexed: 06/12/2023]
Abstract
In this paper, a lossless image coding method that is currently among the most efficient from the data-compaction point of view is presented. Although computationally complex, the algorithm is still more time-efficient than its main competitors. The presented cascaded method is based on the Weighted Least Squares (WLS) technique, with many improvements introduced; e.g., its main stage is followed by a two-step NLMS predictor ending with Context-Dependent Constant Component Removal. The prediction error is coded by a highly efficient binary context arithmetic coder. The performance of the new algorithm is compared to that of other coders on a set of widely used benchmark images.
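The NLMS stage can be sketched in one dimension (a simplified stand-in for the paper's cascaded 2D predictors): a linear predictor whose weights adapt from every new prediction error.

    import numpy as np

    def nlms_predict(signal, order=4, mu=0.5, eps=1e-6):
        """Predict each sample from the previous `order` samples,
        adapting the weights by normalised LMS after every sample."""
        w = np.zeros(order)
        errors = np.zeros(len(signal))
        for n in range(order, len(signal)):
            x = signal[n - order:n][::-1]      # most recent sample first
            pred = w @ x
            e = signal[n] - pred
            w += mu * e * x / (eps + x @ x)    # NLMS weight update
            errors[n] = e
        return errors                          # residuals to entropy-code

    t = np.arange(2000)
    sig = np.sin(t / 20) * 100 + np.random.default_rng(3).normal(0, 1, 2000)
    res = nlms_predict(sig)
    print(sig.std(), res.std())   # residuals are much cheaper to code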
Affiliation(s)
- Grzegorz Ulacha
  Faculty of Computer Science and Information Technology, West Pomeranian University of Technology, ul. Żołnierska 49, 71-210 Szczecin, Poland
- Ryszard Stasiński
  Department of Informatics and Telecommunications, Poznań University of Technology, ul. Piotrowo 3, 60-965 Poznań, Poland
- Cezary Wernik
  Faculty of Computer Science and Information Technology, West Pomeranian University of Technology, ul. Żołnierska 49, 71-210 Szczecin, Poland
Collapse
|
42
|
Abstract
In this article, we design and evaluate several algorithms for the computation of the optimal Rice coding parameter. We conjecture that the optimal Rice coding parameter can be bounded and verify this conjecture through numerical experiments using real data. We also describe algorithms that partition the input sequence of data into sub-sequences, such that if each sub-sequence is coded with a different Rice parameter, the overall code length is minimised. An algorithm for finding the optimal partitioning solution for Rice codes is proposed, as well as fast heuristics, based on the understanding of the problem trade-offs.
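For a non-negative value v, the Rice code with parameter k costs (v >> k) + 1 + k bits (unary quotient, stop bit, k remainder bits), so a bounded exhaustive search over k is straightforward; the partitioning algorithms in the article build on this per-block cost. A minimal sketch:

    def rice_length(v, k):
        """Code length of non-negative v under Rice parameter k."""
        return (v >> k) + 1 + k

    def best_rice_k(values, k_max=16):
        """Exhaustively test the (bounded) candidate parameters."""
        costs = [(sum(rice_length(v, k) for v in values), k)
                 for k in range(k_max + 1)]
        return min(costs)[1]

    values = [3, 7, 2, 12, 5, 9, 4, 6]
    k = best_rice_k(values)
    print(k, sum(rice_length(v, k) for v in values), "bits")   # -> 2 33 bits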
43
Starosolski R. Hybrid Adaptive Lossless Image Compression Based on Discrete Wavelet Transform. ENTROPY 2020; 22:e22070751. [PMID: 33286523 PMCID: PMC7517294 DOI: 10.3390/e22070751] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/24/2020] [Revised: 07/05/2020] [Accepted: 07/06/2020] [Indexed: 11/16/2022]
Abstract
A new hybrid transform for lossless image compression exploiting a discrete wavelet transform (DWT) and prediction is the main new contribution of this paper. Simple prediction is generally considered ineffective in conjunction with DWT, but we applied it to subbands of a DWT modified using reversible denoising and lifting steps (RDLSs) with step skipping. The new transform was constructed in an image-adaptive way using heuristics and entropy estimation. For a large and diverse test set consisting of 499 photographic and 247 non-photographic (screen content) images, we found that RDLS with step skipping allowed DWT to be effectively combined with prediction. Using prediction, we nearly doubled the JPEG 2000 compression ratio improvements that could be obtained using RDLS with step skipping. Because for some images it might be better to apply prediction instead of DWT, we propose compression schemes with various trade-offs, which are practical contributions of this study. Compared with unmodified JPEG 2000, one scheme improved the compression ratios of photographic and non-photographic images, on average, by 1.2% and 30.9%, respectively, at the cost of increasing the compression time by 2% and introducing only minimal modifications to JPEG 2000. Greater ratio improvements, exceeding 2% and 32%, respectively, are attainable at a greater cost.
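A reversible integer lifting step of the kind RDLS modifies can be sketched with the Haar (S) transform; the denoising-inside-lifting and step skipping of the paper are beyond this minimal example.

    import numpy as np

    def s_transform(x):
        """One level of the reversible integer Haar (S) transform."""
        a, b = x[0::2].astype(np.int64), x[1::2].astype(np.int64)
        d = a - b                 # highpass: detail
        s = b + (d >> 1)          # lowpass: floor of the pair average
        return s, d

    def inverse_s_transform(s, d):
        b = s - (d >> 1)
        a = b + d
        out = np.empty(2 * len(s), dtype=np.int64)
        out[0::2], out[1::2] = a, b
        return out

    x = np.array([10, 12, 200, 201, 7, 3, 0, 255])
    s, d = s_transform(x)
    assert np.array_equal(inverse_s_transform(s, d), x)
    print(s, d)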
Affiliation(s)
- Roman Starosolski
  Department of Algorithmics and Software, Silesian University of Technology, 44-100 Gliwice, Poland
44
Žalik B, Mongus D, Lukač N, Žalik KR. Can Burrows-Wheeler transform be replaced in chain code compression? Inf Sci (N Y) 2020. [DOI: 10.1016/j.ins.2020.03.073] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
45
Sharma U, Sood M, Puthooran E, Kumar Y. A Block-Based Arithmetic Entropy Encoding Scheme for Medical Images. INTERNATIONAL JOURNAL OF HEALTHCARE INFORMATION SYSTEMS AND INFORMATICS 2020. [DOI: 10.4018/ijhisi.2020070104] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The digitization of the human body, especially for the treatment of diseases, can generate a large volume of data, and the resulting medical data have high resolution and bit depth. In the field of medical diagnosis, lossless compression techniques are widely adopted for the efficient archiving and transmission of medical images. This article presents an efficient coding solution based on predictive coding. The proposed technique consists of the Resolution Independent Gradient Edge Predictor16 (RIGED16) and Block Based Arithmetic Encoding (BAAE). Its objective is to find universal threshold values for prediction and to provide an optimum block size for encoding. The validity of the proposed technique is tested on real images as well as standard images. The simulation results are compared with well-known existing compression techniques, showing that the proposed technique gives a higher coding efficiency than the others.
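RIGED16's thresholds are specific to the paper, but a gradient-edge predictor in the same spirit is the well-known MED predictor of JPEG-LS, sketched below.

    import numpy as np

    def med_predict(img):
        """Median Edge Detector: switches between horizontal and vertical
        neighbours near edges, otherwise uses a planar estimate."""
        img = img.astype(np.int32)
        pred = np.zeros_like(img)
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                a = img[y, x-1] if x else 0              # west
                b = img[y-1, x] if y else 0              # north
                c = img[y-1, x-1] if x and y else 0      # north-west
                if c >= max(a, b):
                    pred[y, x] = min(a, b)               # edge detected
                elif c <= min(a, b):
                    pred[y, x] = max(a, b)               # edge detected
                else:
                    pred[y, x] = a + b - c               # smooth region
        return img - pred                                # residuals to code

    rng = np.random.default_rng(4)
    img = np.cumsum(rng.integers(-2, 3, (64, 64)), axis=1).astype(np.int32)
    res = med_predict(img)
    print(img.std(), res.std())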
Affiliation(s)
- Meenakshi Sood
  National Institute of Technical Teachers Training and Research, India
- Yugal Kumar
  Jaypee University of Information Technology, India
46
Jiang Z, Pan WD, Shen H. Spatially and Spectrally Concatenated Neural Networks for Efficient Lossless Compression of Hyperspectral Imagery. J Imaging 2020; 6:38. [PMID: 34460584 PMCID: PMC8321058 DOI: 10.3390/jimaging6060038] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2020] [Revised: 05/24/2020] [Accepted: 05/25/2020] [Indexed: 11/16/2022] Open
Abstract
To achieve efficient lossless compression of hyperspectral images, we design a concatenated neural network, which is capable of extracting both spatial and spectral correlations for accurate pixel value prediction. Unlike conventional neural network based methods in the literature, the proposed neural network functions as an adaptive filter, thereby eliminating the need for pre-training using decompressed data. To meet the demand for low-complexity onboard processing, we use a shallow network with only two hidden layers for efficient feature extraction and predictive filtering. Extensive simulations on commonly used hyperspectral datasets and the standard CCSDS test datasets show that the proposed approach attains significant improvements over several other state-of-the-art methods, including standard compressors such as ESA, CCSDS-122, and CCSDS-123.
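A hedged sketch of the adaptive-filter idea (layer sizes, learning rate, and the toy context below are assumptions): a small two-hidden-layer network predicts each value from a causal context vector and is updated online from every prediction error, so no pre-training is needed.

    import numpy as np

    class ShallowPredictor:
        """Two hidden layers, online (per-sample) gradient updates."""
        def __init__(self, n_in, n_hidden=16, lr=0.01, seed=0):
            rng = np.random.default_rng(seed)
            self.W1 = rng.normal(0, 0.1, (n_hidden, n_in))
            self.W2 = rng.normal(0, 0.1, (n_hidden, n_hidden))
            self.w3 = rng.normal(0, 0.1, n_hidden)
            self.lr = lr

        def step(self, x, target):
            h1 = np.tanh(self.W1 @ x)
            h2 = np.tanh(self.W2 @ h1)
            pred = self.w3 @ h2
            e = target - pred
            # Backpropagate this single squared-error sample.
            d2 = -e * self.w3 * (1 - h2**2)
            d1 = (self.W2.T @ d2) * (1 - h1**2)
            self.w3 -= self.lr * (-e * h2)
            self.W2 -= self.lr * np.outer(d2, h1)
            self.W1 -= self.lr * np.outer(d1, x)
            return e                      # residual to entropy-code

    net = ShallowPredictor(n_in=4)
    rng = np.random.default_rng(5)
    ctx = rng.normal(0, 1, (5000, 4))                 # toy causal contexts
    target = ctx @ np.array([0.4, 0.3, 0.2, 0.1])
    errs = [net.step(c, t) for c, t in zip(ctx, target)]
    print(np.std(errs[:500]), np.std(errs[-500:]))    # error should shrink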
Affiliation(s)
- Zhuocheng Jiang
  Department of Electrical and Computer Engineering, University of Alabama in Huntsville, Huntsville, AL 35899, USA
- W. David Pan
  Department of Electrical and Computer Engineering, University of Alabama in Huntsville, Huntsville, AL 35899, USA
- Hongda Shen
  Chubbs Insurance Inc., New York, NY 10020, USA
Collapse
|
47
|
Sun W, Chen Z. Learned Image Downscaling for Upscaling using Content Adaptive Resampler. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 29:4027-4040. [PMID: 32031937 DOI: 10.1109/tip.2020.2970248] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Deep convolutional neural network based image super-resolution (SR) models have shown superior performance in recovering the underlying high resolution (HR) images from low resolution (LR) images obtained from predefined downscaling methods. In this paper, we propose a learned image downscaling method based on a content adaptive resampler (CAR) designed with consideration of the upscaling process. The proposed resampler network generates content adaptive image resampling kernels that are applied to the original HR input to generate pixels of the downscaled image. Moreover, a differentiable upscaling (SR) module is employed to upscale the LR result into its underlying HR counterpart. By back-propagating the reconstruction error down to the original HR input across the entire framework to adjust model parameters, the proposed framework achieves a new state-of-the-art SR performance through upscaling-guided image resamplers that adaptively preserve detailed information essential to the upscaling. Experimental results indicate that the quality of the generated LR image is comparable to that of traditional interpolation-based methods, and a significant SR performance gain is achieved by deep SR models trained jointly with the CAR model. The code is publicly available at: https://github.com/sunwj/CAR.
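The resampler's core operation can be sketched as follows (kernels are random here purely to show the mechanics; in CAR they are predicted by a network, which also estimates sampling offsets): each downscaled pixel is a normalised weighted sum of the high-resolution patch it covers.

    import numpy as np

    def apply_kernels(hr, kernels, scale):
        """hr: (H, W); kernels: (H//scale, W//scale, k, k), each summing to 1.
        Every LR pixel is a weighted sum of the HR patch it covers."""
        k = kernels.shape[-1]
        pad = (k - scale) // 2
        hrp = np.pad(hr, pad, mode="edge")
        h_lr, w_lr = kernels.shape[:2]
        lr = np.empty((h_lr, w_lr))
        for y in range(h_lr):
            for x in range(w_lr):
                patch = hrp[y*scale:y*scale+k, x*scale:x*scale+k]
                lr[y, x] = (patch * kernels[y, x]).sum()
        return lr

    rng = np.random.default_rng(6)
    hr = rng.random((32, 32))
    raw = rng.random((16, 16, 4, 4))                      # stand-in kernels
    kernels = raw / raw.sum(axis=(2, 3), keepdims=True)   # normalise to sum 1
    print(apply_kernels(hr, kernels, scale=2).shape)      # -> (16, 16)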
48
A High Efficiency Multistage Coder for Lossless Audio Compression using OLS+ and CDCCR method. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9235218] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In this paper, an improvement of the cascaded prediction method is presented. Three types of main predictor block with different levels of complexity are compared, including two complex prediction methods with backward adaptation, i.e., the extended Active Level Classification Model (ALCM+) and the extended Ordinary Least Squares (OLS+). Our own approach to implementing an effective context-dependent constant component removal block is also presented. Additionally, an improved adaptive arithmetic coder with short-, medium- and long-term adaptation is presented. In the experiments, the method is compared against other known lossless audio coders and obtains the best efficiency.
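The context-dependent constant component removal block admits a compact sketch (the contexts and the cascade are simplified assumptions here): keep a running mean of the raw predictor's error per context and add it back to the prediction.

    import numpy as np

    class ContextBiasRemover:
        """Running mean of the raw predictor's error, kept per context."""
        def __init__(self, n_contexts):
            self.sum = np.zeros(n_contexts)
            self.count = np.zeros(n_contexts)

        def correct(self, ctx, raw_prediction):
            bias = self.sum[ctx] / self.count[ctx] if self.count[ctx] else 0.0
            return raw_prediction + bias

        def update(self, ctx, raw_prediction, actual):
            self.sum[ctx] += actual - raw_prediction
            self.count[ctx] += 1

    # Toy use: in context 0 the raw predictor is consistently 2.0 too low.
    cbr = ContextBiasRemover(n_contexts=4)
    for _ in range(100):
        cbr.update(0, raw_prediction=10.0, actual=12.0)
    print(cbr.correct(0, 10.0))   # -> 12.0 once the bias is learned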
49
Dong Y, Pan WD, Wu D. Impact of Misclassification Rates on Compression Efficiency of Red Blood Cell Images of Malaria Infection Using Deep Learning. ENTROPY 2019. [PMCID: PMC7514366 DOI: 10.3390/e21111062] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Malaria is a severe public health problem worldwide, with some developing countries being most affected. Reliable remote diagnosis of malaria infection will benefit from efficient compression of high-resolution microscopic images. This paper addresses lossless compression of malaria-infected red blood cell images using deep learning. Specifically, we investigate a practical approach where images are first classified before being compressed using stacked autoencoders. We provide a probabilistic analysis of the impact of misclassification rates on compression performance in terms of the information-theoretic measure of entropy. We then use malaria infection image datasets to evaluate the relations between misclassification rates and the actually obtainable compressed bit rates using Golomb–Rice codes. Simulation results show that the joint pattern classification/compression method provides more efficient compression than several mainstream lossless compression techniques, such as JPEG2000, JPEG-LS, CALIC, and WebP, by exploiting common features extracted by deep learning on large datasets. This study provides new insight into the interplay between classification accuracy and compression bit rates. The proposed compression method can find useful telemedicine applications where efficient storage and rapid transfer of large image datasets are desirable.
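That interplay can be captured by a toy two-class model (the rates below are illustrative, not from the paper): the expected bit rate is a mixture of the matched and mismatched coder rates, weighted by the misclassification rate.

    def expected_rate(eps, r_matched, r_mismatched):
        """Average bits/pixel when a fraction eps of images is routed
        to the wrong class-specific coder (two-class toy model)."""
        return (1 - eps) * r_matched + eps * r_mismatched

    for eps in (0.0, 0.01, 0.05, 0.10):
        # illustrative rates: matched coder 2.1 bpp, mismatched 2.9 bpp
        print(f"eps={eps:.2f}: {expected_rate(eps, 2.1, 2.9):.3f} bpp")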
Affiliation(s)
- Yuhang Dong
  Department of Electrical and Computer Engineering, University of Alabama in Huntsville, Huntsville, AL 35899, USA
- W. David Pan
  Department of Electrical and Computer Engineering, University of Alabama in Huntsville, Huntsville, AL 35899, USA
  Correspondence:
- Dongsheng Wu
  Department of Mathematical Sciences, University of Alabama in Huntsville, Huntsville, AL 35899, USA
50
Lanz D, Kaup A. Graph-Based Compensated Wavelet Lifting for Scalable Lossless Coding of Dynamic Medical Data. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 29:2439-2451. [PMID: 31634838 DOI: 10.1109/tip.2019.2947138] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Lossless compression of dynamic 2-D+t and 3-D+t medical data is challenging with regard to the huge amount of data, the characteristics of the inherent noise, and the high bit depth. Beyond that, a scalable representation is often required in telemedicine applications. Motion Compensated Temporal Filtering works well for lossless compression of medical volume data and additionally provides temporal, spatial, and quality scalability. To achieve a high-quality lowpass subband, which serves as a downscaled representative of the original data, graph-based motion compensation was recently introduced to this framework. However, encoding the motion information, which is stored in adjacency matrices, has not been well investigated so far. This work focuses on coding these adjacency matrices to make graph-based motion compensation feasible for data compression. We propose a novel coding scheme based on constructing so-called motion maps. This allows, for the first time, the performance of graph-based motion compensation to be compared to traditional block- and mesh-based approaches. For high-quality lowpass subbands, our method outperforms the block- and mesh-based approaches, increasing the visual quality in terms of PSNR by 0.53 dB and 0.28 dB for CT data, and by 1.04 dB and 1.90 dB for MR data, respectively, while reducing the bit rate at the same time.
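A minimal sketch of the underlying temporal Haar lifting, with a single global integer motion vector standing in for the graph-based compensation the paper encodes as motion maps:

    import numpy as np

    def mctf_haar(f0, f1, mv):
        """Temporal Haar lifting on a frame pair with one global integer
        motion vector. Returns (lowpass, highpass); exactly invertible."""
        warped = np.roll(f0, mv, axis=(0, 1))        # compensate f0 towards f1
        h = f1.astype(np.int64) - warped             # prediction step
        l = f0 + (np.roll(h, (-mv[0], -mv[1]), axis=(0, 1)) >> 1)  # update step
        return l, h

    def inverse_mctf_haar(l, h, mv):
        f0 = l - (np.roll(h, (-mv[0], -mv[1]), axis=(0, 1)) >> 1)
        f1 = h + np.roll(f0, mv, axis=(0, 1))
        return f0, f1

    rng = np.random.default_rng(7)
    f0 = rng.integers(0, 4096, (64, 64)).astype(np.int64)
    f1 = np.roll(f0, (3, 1), axis=(0, 1)) + rng.integers(-2, 3, (64, 64))
    l, h = mctf_haar(f0, f1, mv=(3, 1))
    rf0, rf1 = inverse_mctf_haar(l, h, mv=(3, 1))
    assert np.array_equal(rf0, f0) and np.array_equal(rf1, f1)
    print(f1.std(), h.std())   # highpass is near zero when motion matches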