Lossy Compression Algorithm

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 4752 experts worldwide ranked by the ideXlab platform

Roberto Sarmiento - One of the best experts on this subject based on the ideXlab platform.

  • A new algorithm for the on-board compression of hyperspectral images
    Remote Sensing, 2018
    Co-Authors: Raul Guerra, Lucana Santos, Yubal Barrios, Maria Diaz, Sebastian Lopez, Roberto Sarmiento
    Abstract:

    Hyperspectral sensors are able to provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not free of drawbacks, especially in remote sensing environments where the hyperspectral images are collected on board satellites and need to be transferred to the Earth’s surface. In this situation, efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have traditionally been preferred, in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increase in the data rate of new-generation sensors makes higher compression ratios ever more necessary, which in turn requires the use of lossy compression techniques. A new transform-based lossy compression algorithm, namely the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed to achieve high compression ratios with good compression performance at a reasonable computational cost. An extensive set of experiments has been performed to evaluate the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for lossy compression of hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.
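
As a rough illustration of the general idea behind such transform-based compressors (greedily selecting the most informative spectral vectors and keeping only projection coefficients), the following toy Python sketch may help; it is not the published HyperLCA algorithm, and all names and sizes are illustrative:

```python
import numpy as np

def toy_greedy_compress(cube, n_vectors):
    """Toy transform-based lossy compressor: greedily select the
    spectral vectors that carry the most remaining energy, then
    represent every pixel by its projection coefficients onto an
    orthonormal basis built from them."""
    X = cube.astype(np.float64)          # (pixels, bands)
    residual = X.copy()
    basis = []
    for _ in range(n_vectors):
        idx = np.argmax(np.sum(residual ** 2, axis=1))  # most informative pixel
        v = residual[idx]
        v = v / np.linalg.norm(v)
        basis.append(v)
        residual = residual - np.outer(residual @ v, v)  # deflate its direction
    B = np.array(basis)                  # (n_vectors, bands), orthonormal rows
    coeffs = X @ B.T                     # (pixels, n_vectors)
    return B, coeffs

def toy_decompress(B, coeffs):
    # reconstruct each pixel from its stored coefficients
    return coeffs @ B

rng = np.random.default_rng(0)
cube = rng.normal(size=(100, 50))        # 100 pixels, 50 spectral bands
B, C = toy_greedy_compress(cube, 10)
recon = toy_decompress(B, C)
err = np.linalg.norm(cube - recon) / np.linalg.norm(cube)
```

Increasing `n_vectors` lowers the reconstruction error at the cost of a lower compression ratio, the same rate/quality trade-off the abstract describes.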

  • FPGA implementation of a lossy compression algorithm for hyperspectral images with a high-level synthesis tool
    Adaptive Hardware and Systems, 2013
    Co-Authors: Lucana Santos, José Fco. López, Roberto Sarmiento, Raffaele Vitulli
    Abstract:

    In this paper, we present an FPGA implementation of a novel adaptive and predictive algorithm for lossy hyperspectral image compression. This algorithm was specifically designed for on-board compression, where FPGAs are the most attractive and popular option, featuring low power and high performance. However, the traditional RTL design flow is rather time-consuming. High-level synthesis (HLS) tools, such as the well-known CatapultC, can help to shorten these times. Utilizing CatapultC, we obtain an FPGA implementation of the lossy compression algorithm directly from source code written in C, with a double motivation: demonstrating how well the lossy compression algorithm performs on an FPGA in terms of throughput and area, and at the same time showing how HLS is applied, in terms of source-code preparation and CatapultC settings, to obtain an efficient hardware implementation in a relatively short time. Place-and-route on a Virtex-5 5VFX130 shows effective results in terms of area (maximum device utilization of 14%) and frequency (80 MHz). A comparison with a previous FPGA implementation of a lossless-to-near-lossless algorithm is also provided. Results on a Virtex-4 4VLX200 show lower memory requirements and a higher frequency for the LCE algorithm.
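
The LCE algorithm's internals are not reproduced in the abstract; as a minimal sketch of the prediction-plus-quantization family it belongs to (illustrative only, not the C source fed to CatapultC), the following Python fragment encodes each sample as a quantized prediction residual and keeps encoder and decoder reconstructions in lockstep:

```python
def predictive_lossy_encode(samples, step):
    """Minimal predictive lossy coder: predict each sample from the
    previous reconstruction, uniformly quantize the prediction error,
    and track the decoder-side reconstruction so encoder and decoder
    never drift apart."""
    indices, recon = [], []
    prev = 0.0
    for s in samples:
        err = s - prev
        q = round(err / step)        # quantized prediction residual
        indices.append(q)
        prev = prev + q * step       # what the decoder will reconstruct
        recon.append(prev)
    return indices, recon

def predictive_lossy_decode(indices, step):
    out, prev = [], 0.0
    for q in indices:
        prev += q * step
        out.append(prev)
    return out

data = [10.0, 10.4, 11.1, 10.9, 12.3]
idx, recon = predictive_lossy_encode(data, step=0.5)
decoded = predictive_lossy_decode(idx, step=0.5)
```

Because the encoder predicts from the decoder's reconstruction rather than the original samples, the per-sample error stays bounded by half the quantization step.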

  • GPU implementation of a lossy compression algorithm for hyperspectral images
    2012 4th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), 2012
    Co-Authors: Lucana Santos, Raffaele Vitulli, José Fco. López, Roberto Sarmiento
    Abstract:

    In this paper, a lossy compression algorithm for on-board hyperspectral image compression is implemented on a GPU. The strategy followed to parallelize the algorithm is presented, together with the experimental results obtained when executing it on the GPU. Furthermore, we present the speedups gained by the GPU implementation with respect to the CPU implementation, which demonstrate that the computational power of GPUs is effective in obtaining compression results in short execution times.
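
The abstract does not detail the parallelization strategy; a common approach for such compressors is to map independent per-sample work to one GPU thread each. The NumPy sketch below (a CPU stand-in for a CUDA kernel, with illustrative names) contrasts the scalar loop with its data-parallel form:

```python
import numpy as np

def quantize_loop(cube, step):
    """Scalar reference: visit one element at a time, as a CPU would."""
    out = np.empty_like(cube)
    for i in range(cube.shape[0]):
        for j in range(cube.shape[1]):
            out[i, j] = np.round(cube[i, j] / step) * step
    return out

def quantize_parallel(cube, step):
    """Data-parallel form: every element is independent, so the whole
    operation maps directly onto one GPU thread per element; here
    NumPy's vectorization stands in for the kernel launch."""
    return np.round(cube / step) * step

rng = np.random.default_rng(1)
cube = rng.normal(size=(64, 32))   # illustrative tile of a hyperspectral cube
a = quantize_loop(cube, 0.1)
b = quantize_parallel(cube, 0.1)
```

The two functions produce identical results; the speedup comes purely from exposing the independence of the per-element work.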

Thakaramkunnath Ajayakumar Akshay - One of the best experts on this subject based on the ideXlab platform.

  • Investigate Redundancy In Sounding Reference Signal Based Channel Estimates
    Lunds universitet Institutionen för elektro- och informationsteknik, 2019
    Co-Authors: Bhavani Sankar Nishanth, Thakaramkunnath Ajayakumar Akshay
    Abstract:

    5G supports an enormous increase in data rate. Massive antenna beamforming is expected to play a key role in increasing capacity in the multi-user MIMO case and coverage in the single-user MIMO case. The large number of antennas in a massive MIMO system leads to an enormous amount of channel state information being stored in memory, which necessitates the use of compression techniques for efficient utilization of the limited memory. Sounding Reference Signals (SRS) are transmitted in the uplink to obtain channel estimates. In TDD-based systems, by exploiting channel reciprocity, channel estimates received in the uplink can be used in the downlink as well. The product we work on at Ericsson is a TDD-based system and uses SRS-based channel estimates to compute beamforming weights to facilitate massive antenna beamforming. SRS-based channel state information is represented by a 32-bit complex number in this system, received per Evolved Node B (eNodeB) antenna, per User Equipment (UE) transmission antenna, and per Physical Resource Block Group (PRBG). This results in a significant amount of data that needs to be stored in the eNodeB. However, memory in the Digital Unit of the eNodeB is limited. SRS-based estimates occupy a major portion of this memory and therefore limit the capacity of the eNodeB for beamforming. This thesis focuses on the evaluation and implementation of lossless and lossy compression of SRS-based channel estimates to attain space savings in the shared memory of the eNodeB. This will help achieve higher capacity for reciprocity-based beamforming and prolong the lifetime of existing hardware. The performance of various lossless data compression algorithms was analyzed based on compression ratio, speed, and complexity, and the optimal one was selected. Lossy compression of SRS-based channel estimates was also implemented for LOS UEs using linear regression by least-squares estimation. The impact on performance due to the application of the lossy compression algorithm was studied.

    In order to communicate reliably over the air, the receiver needs to estimate the quality of the wireless link. This is done by transmitting certain signals named ‘pilots’. In cellular communications, such as 5G, pilots are sent in both directions, from the base station to the user and vice versa. To support 5G systems, numerous antennas will be used at the transmitter, the receiver, or both. Such systems are called massive multiple-input multiple-output (massive MIMO) systems. Pilots need to be transmitted for each of these antennas, which leads to a significant amount of data. For efficient and reliable systems, it is important to ensure that the least amount of such information is stored in the base station, which necessitates the use of data compression techniques. The link-quality values obtained can be compressed by lossy methods, which involve the loss of some information but achieve higher compression, and by lossless methods, which lose no information but achieve lower compression. The aim of this thesis is to study a certain type of pilot, namely the Sounding Reference Signal, and to compress the obtained link-quality values. Hence, this thesis focuses on analyzing the performance of various compression techniques based on their ability to compress this data, the speed of the program, and the complexity of implementation, and on selecting the optimal technique based on that analysis. Implementing compression of these link-quality values will help prolong the lifetime of the equipment in use today and can help telecom companies save costs.
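
As a hedged sketch of the lossy scheme the thesis describes for LOS UEs (linear regression by least squares across PRBGs), the Python fragment below fits a straight line to the real and imaginary parts of a synthetic channel-estimate vector and stores only the line coefficients; the PRBG count and channel model are illustrative, not taken from the Ericsson system:

```python
import numpy as np

def compress_los_estimates(h):
    """Fit a straight line (least squares) to the real and imaginary
    parts of the per-PRBG channel estimates; only the four line
    coefficients are stored instead of the full complex vector."""
    x = np.arange(len(h))
    re = np.polyfit(x, h.real, 1)   # slope and intercept, real part
    im = np.polyfit(x, h.imag, 1)   # slope and intercept, imaginary part
    return re, im, len(h)

def decompress_los_estimates(re, im, n):
    x = np.arange(n)
    return np.polyval(re, x) + 1j * np.polyval(im, x)

# synthetic near-line-of-sight channel: slowly, linearly varying estimates
x = np.arange(24)                          # 24 PRBGs, an illustrative count
h = (0.8 + 0.01 * x) + 1j * (0.2 - 0.005 * x)
re, im, n = compress_los_estimates(h)
h_hat = decompress_los_estimates(re, im, n)
```

For a truly linear channel the reconstruction is exact; for real LOS estimates the residual after the fit is the information the lossy scheme discards.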

Tsachy Weissman - One of the best experts on this subject based on the ideXlab platform.

  • Denoising via MCMC-Based Lossy Compression
    IEEE Transactions on Signal Processing, 2012
    Co-Authors: Shirin Jalali, Tsachy Weissman
    Abstract:

    It has been established in the literature, in various theoretical and asymptotic senses, that universal lossy compression followed by simple postprocessing results in universal denoising, for the setting of a stationary ergodic source corrupted by additive white noise. However, this interesting theoretical result had not yet been tested in practice on simulated or real data. In this paper, we employ a recently developed MCMC-based universal lossy compressor to build a universal compression-based denoising algorithm. We show that applying this iterative lossy compression algorithm with an appropriately chosen distortion measure and distortion level, followed by a simple derandomization operation, results in a family of denoisers that compares favorably (both theoretically and in practice) with other MCMC-based schemes, and with the discrete universal denoiser DUDE.
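
As a toy illustration of the MCMC-based lossy compression idea (annealing over candidate reconstructions to minimize a distortion-plus-rate Lagrangian), the following Python sketch for a binary source uses zero-order empirical entropy as a crude rate proxy; it is a simplified stand-in, not the algorithm of the paper:

```python
import math
import random

def empirical_entropy(seq):
    """Zero-order empirical entropy of a sequence, in bits per symbol."""
    n = len(seq)
    h = 0.0
    for sym in set(seq):
        p = seq.count(sym) / n
        h -= p * math.log2(p)
    return h

def mcmc_lossy_compress(x, lam, iters=2000, seed=0):
    """Toy MCMC lossy compressor for a binary sequence: simulated
    annealing over candidate reconstructions y, minimizing
        E(y) = hamming(x, y) + lam * n * H0(y),
    a crude Lagrangian trade-off of distortion against a rate proxy."""
    rng = random.Random(seed)
    n = len(x)

    def energy(y):
        return sum(a != b for a, b in zip(x, y)) + lam * n * empirical_entropy(y)

    y = list(x)
    e = energy(y)
    best, best_e = list(y), e
    for t in range(1, iters + 1):
        i = rng.randrange(n)
        y[i] ^= 1                          # propose a single-site flip
        e_new = energy(y)
        beta = math.log(t + 1)             # slowly increasing inverse temperature
        if e_new <= e or rng.random() < math.exp(-beta * (e_new - e)):
            e = e_new                      # accept the move
            if e < best_e:
                best, best_e = list(y), e
        else:
            y[i] ^= 1                      # reject: undo the flip
    return best

random.seed(1)
clean = [1] * 40 + [0] * 40
noisy = [b ^ (random.random() < 0.1) for b in clean]
y = mcmc_lossy_compress(noisy, lam=1.0)
```

In the denoising application described above, the reconstruction `y` would then be passed through a derandomization step to produce the final denoised output.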

  • An MCMC Approach to Universal Lossy Compression of Analog Sources
    IEEE Transactions on Signal Processing, 2012
    Co-Authors: Dror Baron, Tsachy Weissman
    Abstract:

    Motivated by the Markov chain Monte Carlo (MCMC) approach to the compression of discrete sources developed by Jalali and Weissman, we propose a lossy compression algorithm for analog sources that relies on a finite reproduction alphabet, which grows with the input length. The algorithm achieves, in an appropriate asymptotic sense, the optimum Shannon-theoretic tradeoff between rate and distortion, universally for stationary ergodic continuous-amplitude sources. We further propose an MCMC-based algorithm that resorts to a reduced reproduction alphabet when such a reduction does not prevent achieving the Shannon limit. The latter algorithm is advantageous due to its reduced complexity and improved rates of convergence when employed on sources with a finite and small optimum reproduction alphabet.

  • DCC - An MCMC Approach to Lossy Compression of Continuous Sources
    2010 Data Compression Conference, 2010
    Co-Authors: Dror Baron, Tsachy Weissman
    Abstract:

    Motivated by the Markov chain Monte Carlo (MCMC) relaxation method of Jalali and Weissman, we propose a lossy compression algorithm for continuous-amplitude sources that relies on a finite reproduction alphabet that grows with the input length. Our algorithm asymptotically achieves the optimum rate-distortion (RD) function universally for stationary ergodic continuous-amplitude sources. However, the large alphabet slows down convergence to the RD function and is thus an impediment in practice. We thus propose an MCMC-based algorithm that uses a (smaller) adaptive reproduction alphabet. In addition to computational advantages, the reduced alphabet accelerates convergence to the RD function and is thus more suitable in practice.
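
The reduced-alphabet idea can be illustrated, very loosely, with classical scalar quantization: fit a small reproduction alphabet to the source samples and map each sample to its nearest level. The Lloyd-style Python sketch below is a stand-in for intuition only, not the MCMC algorithm of the paper:

```python
import numpy as np

def lloyd_quantizer(x, k, iters=50):
    """Fit a small reproduction alphabet of k levels to the samples
    (Lloyd/k-means iterations), then quantize every sample to the
    nearest level."""
    levels = np.quantile(x, np.linspace(0.05, 0.95, k))  # initial alphabet
    for _ in range(iters):
        idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
        for j in range(k):
            if np.any(idx == j):
                levels[j] = x[idx == j].mean()           # centroid update
    idx = np.argmin(np.abs(x[:, None] - levels[None, :]), axis=1)
    return levels, idx

rng = np.random.default_rng(0)
x = rng.normal(size=1000)            # continuous-amplitude source samples
levels, idx = lloyd_quantizer(x, k=8)
mse = np.mean((x - levels[idx]) ** 2)
```

A smaller alphabet lowers the rate needed to index the levels at the cost of higher distortion, the trade-off the abstract's reduced-alphabet variant navigates.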

Lucana Santos - One of the best experts on this subject based on the ideXlab platform.

  • A new algorithm for the on-board compression of hyperspectral images
    Remote Sensing, 2018
    Co-Authors: Raul Guerra, Lucana Santos, Yubal Barrios, Maria Diaz, Sebastian Lopez, Roberto Sarmiento
    Abstract:

    Hyperspectral sensors are able to provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not free of drawbacks, especially in remote sensing environments where the hyperspectral images are collected on board satellites and need to be transferred to the Earth’s surface. In this situation, efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have traditionally been preferred, in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increase in the data rate of new-generation sensors makes higher compression ratios ever more necessary, which in turn requires the use of lossy compression techniques. A new transform-based lossy compression algorithm, namely the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed to achieve high compression ratios with good compression performance at a reasonable computational cost. An extensive set of experiments has been performed to evaluate the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for lossy compression of hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.

  • FPGA implementation of a lossy compression algorithm for hyperspectral images with a high-level synthesis tool
    Adaptive Hardware and Systems, 2013
    Co-Authors: Lucana Santos, José Fco. López, Roberto Sarmiento, Raffaele Vitulli
    Abstract:

    In this paper, we present an FPGA implementation of a novel adaptive and predictive algorithm for lossy hyperspectral image compression. This algorithm was specifically designed for on-board compression, where FPGAs are the most attractive and popular option, featuring low power and high performance. However, the traditional RTL design flow is rather time-consuming. High-level synthesis (HLS) tools, such as the well-known CatapultC, can help to shorten these times. Utilizing CatapultC, we obtain an FPGA implementation of the lossy compression algorithm directly from source code written in C, with a double motivation: demonstrating how well the lossy compression algorithm performs on an FPGA in terms of throughput and area, and at the same time showing how HLS is applied, in terms of source-code preparation and CatapultC settings, to obtain an efficient hardware implementation in a relatively short time. Place-and-route on a Virtex-5 5VFX130 shows effective results in terms of area (maximum device utilization of 14%) and frequency (80 MHz). A comparison with a previous FPGA implementation of a lossless-to-near-lossless algorithm is also provided. Results on a Virtex-4 4VLX200 show lower memory requirements and a higher frequency for the LCE algorithm.

  • GPU implementation of a lossy compression algorithm for hyperspectral images
    2012 4th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), 2012
    Co-Authors: Lucana Santos, Raffaele Vitulli, José Fco. López, Roberto Sarmiento
    Abstract:

    In this paper, a lossy compression algorithm for on-board hyperspectral image compression is implemented on a GPU. The strategy followed to parallelize the algorithm is presented, together with the experimental results obtained when executing it on the GPU. Furthermore, we present the speedups gained by the GPU implementation with respect to the CPU implementation, which demonstrate that the computational power of GPUs is effective in obtaining compression results in short execution times.
