Wavelet Compression

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below were selected from a list of 20,355 experts worldwide, ranked by the ideXlab platform.

Antonio Ortega - One of the best experts on this subject based on the ideXlab platform.

  • A dynamic programming approach to distortion-energy optimization for distributed wavelet compression with applications to data gathering in wireless sensor networks
    International Conference on Acoustics Speech and Signal Processing, 2006
    Co-Authors: A Ciancio, Antonio Ortega
    Abstract:

    We address a scenario where energy-constrained sensors in a wireless sensor network can choose among different distributed coding schemes to encode their data. We propose a framework where the network is described as a graph, with sensors representing the nodes, where communication and processing costs are associated with the edge weights, and where the coding schemes correspond to states of operation. After describing data transitions and edge costs, we show that a shortest-path algorithm can be used to find the optimum network configuration, i.e., the one that leads to the lowest overall energy consumption.
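    A minimal sketch of this graph formulation, assuming made-up node names, coding states, and energy costs: each sensor is expanded into one vertex per coding scheme, edge weights bundle communication and processing energy, and a shortest path through the expanded graph picks the minimum-energy configuration.

```python
import heapq

def min_energy_config(graph, source, sink):
    """Dijkstra over (sensor, coding-scheme) states.

    graph: {state: [(next_state, energy_cost), ...]}
    Returns (total_energy, path) for the cheapest configuration.
    """
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == sink:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk back through predecessors to recover the chosen states.
    path, node = [], sink
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return dist[sink], path[::-1]

# Toy two-sensor chain: each sensor either forwards raw samples ("raw")
# or applies a wavelet scheme ("wav"); all costs are invented.
graph = {
    "src":    [("s1/raw", 1.0), ("s1/wav", 2.5)],
    "s1/raw": [("s2/raw", 4.0), ("s2/wav", 5.0)],
    "s1/wav": [("s2/raw", 2.0), ("s2/wav", 1.5)],
    "s2/raw": [("sink", 3.0)],
    "s2/wav": [("sink", 1.0)],
}
energy, config = min_energy_config(graph, "src", "sink")
print(energy, config)  # 5.0 ['src', 's1/wav', 's2/wav', 'sink']
```

    Here the cheaper configuration uses the wavelet scheme at both sensors, even though it costs more locally at s1, because it shrinks the downstream transmissions.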

  • Energy-efficient data representation and routing for wireless sensor networks based on a distributed wavelet compression algorithm
    Information Processing in Sensor Networks, 2006
    Co-Authors: Alexandre Ciancio, Antonio Ortega, Sundeep Pattem, Bhaskar Krishnamachari
    Abstract:

    We address the problem of energy consumption reduction for wireless sensor networks, where each of the sensors has limited power and acquires data that should be transmitted to a central node. The final goal is to have a reconstructed version of the data measurements at the central node, with the sensors spending as little energy as possible, for a given data reconstruction accuracy. In our scenario, sensors in the network have a choice of different coding schemes to achieve varying levels of compression. The compression algorithms considered are based on the lifting factorization of the wavelet transform, and exploit the natural data flow in the network to aggregate data by computing partial wavelet coefficients that are refined as data flows towards the central node. The proposed algorithm operates by first selecting a routing strategy through the network. Then, for each route, an optimal combination of data representation algorithms, i.e., an assignment at each node, is selected. A simple heuristic is used to determine the data representation technique to use once path merges are taken into consideration. We demonstrate that, by optimizing the coding algorithm selection, the overall energy consumption can be significantly reduced when compared to the case where data is simply quantized and forwarded to the central node. Moreover, the proposed algorithm provides a tool to compare different routing techniques and identify those that are most efficient overall, for given node locations. We evaluate the algorithm using both a second-order autoregressive (AR) model and empirical data from a real wireless sensor network deployment.
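    The second-order autoregressive model mentioned for evaluation can be synthesized in a few lines; the coefficients, noise level, and seed below are illustrative placeholders, not the paper's values.

```python
import random

def ar2_series(n, a1=1.5, a2=-0.7, sigma=0.1, seed=42):
    """Generate n samples of x[t] = a1*x[t-1] + a2*x[t-2] + noise."""
    rng = random.Random(seed)
    x = [0.0, 0.0]  # zero initial conditions
    for _ in range(n):
        x.append(a1 * x[-1] + a2 * x[-2] + rng.gauss(0.0, sigma))
    return x[2:]

samples = ar2_series(100)
print(len(samples))  # 100
```

    These coefficients give a stationary, strongly correlated process — the kind of smooth field on which wavelet detail coefficients stay small and compress well.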

  • A distributed wavelet compression algorithm for wireless multihop sensor networks using lifting
    International Conference on Acoustics Speech and Signal Processing, 2004
    Co-Authors: Alexandre Ciancio, Antonio Ortega
    Abstract:

    We address the problem of compression for wireless sensor networks, where each of the sensors has limited power and acquires data that should be sent to a central node. The final goal is to have a reconstructed version of the sampled field at the central node, with the sensors spending as little energy as possible. We propose a distributed compression algorithm for multihop, distributed sensor networks based on the lifting factorization of the wavelet transform that exploits the natural data flow in the network to aggregate data by computing partial wavelet coefficients that are refined as the data flows towards the central node. A key result of our work is that by performing partial computations we greatly reduce unnecessary transmission, significantly reducing the overall energy consumption.
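    A one-level lifting step illustrates the idea of partial coefficients; the LeGall 5/3 filter below is a stand-in (the paper's actual filter choice is not specified here), and boundary handling is simplified to zeros.

```python
def lifting_53(x):
    """One level of the LeGall 5/3 lifting transform (simplified borders).

    Predict: each odd sample becomes a detail coefficient using its two
    even neighbors. Update: each even sample becomes an approximation
    coefficient using the adjacent details. A node holding an odd sample
    only needs data from its immediate neighbors to compute its detail,
    so coefficients can be computed partially as data flows downstream.
    """
    n = len(x)
    d = []  # detail (high-pass) coefficients at odd positions
    for i in range(1, n - 1, 2):
        d.append(x[i] - (x[i - 1] + x[i + 1]) / 2.0)
    s = []  # approximation (low-pass) coefficients at even positions
    for j in range(0, n, 2):
        left = d[(j - 2) // 2] if j >= 2 else 0.0
        right = d[j // 2] if j // 2 < len(d) else 0.0
        s.append(x[j] + (left + right) / 4.0)
    return s, d

# A linear field is predicted perfectly, so every detail is zero and
# only the approximation coefficients need to be transmitted.
signal = [float(t) for t in range(8)]
s, d = lifting_53(signal)
print(s, d)  # [0.0, 2.0, 4.0, 6.0] [0.0, 0.0, 0.0]
```

    The near-zero details on smooth fields are what make partial, in-network computation pay off: each hop forwards mostly small numbers instead of raw samples.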

P. Jonathon Phillips - One of the best experts on this subject based on the ideXlab platform.

  • Computational and performance aspects of PCA-based face-recognition algorithms
    Perception, 2001
    Co-Authors: Hyeonjoon Moon, P. Jonathon Phillips
    Abstract:

    Algorithms based on principal component analysis (PCA) form the basis of numerous studies in the psychological and algorithmic face-recognition literature. PCA is a statistical technique and its incorporation into a face-recognition algorithm requires numerous design decisions. We explicitly state the design decisions by introducing a generic modular PCA algorithm. This allows us to investigate these decisions, including those not documented in the literature. We experimented with different implementations of each module, and evaluated the different implementations using the September 1996 FERET evaluation protocol (the de facto standard for evaluating face-recognition algorithms). We experimented with (i) changing the illumination normalization procedure; (ii) studying effects on algorithm performance of compressing images with JPEG and wavelet compression algorithms; (iii) varying the number of eigenvectors in the representation; and (iv) changing the similarity measure in the classification process. We performed two experiments. In the first experiment, we obtained performance results on the standard September 1996 FERET large-gallery image sets. In the second experiment, we examined the variability in algorithm performance on different sets of facial images. The study was performed on 100 randomly generated image sets (galleries) of the same size. Our two most significant results are (i) changing the similarity measure produced the greatest change in performance, and (ii) a difference in performance of ±10% is needed to distinguish between algorithms.
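    The similarity-measure module, which the study found most influential, is straightforward to sketch. The vectors below are toy stand-ins for projected PCA coefficients, not FERET data.

```python
import math

def l1(a, b):
    """City-block distance."""
    return sum(abs(x - y) for x, y in zip(a, b))

def l2(a, b):
    """Euclidean distance."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine(a, b):
    """Angle-based measure, returned as 1 - cos so smaller means closer."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def nearest(probe, gallery, dist):
    """Return the gallery identity whose vector is closest to the probe."""
    return min(gallery, key=lambda g: dist(probe, gallery[g]))

# The same probe can match different identities under different measures:
# B points in the same direction as the probe but is far away in magnitude.
gallery = {"A": [1.0, 0.0], "B": [4.0, 1.0]}
probe = [2.0, 0.5]
print(nearest(probe, gallery, l2))      # A
print(nearest(probe, gallery, cosine))  # B
```

    Even this toy case shows why swapping the similarity measure can reorder a ranking, consistent with it being the most performance-sensitive module in the study.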

Hyeonjoon Moon - One of the best experts on this subject based on the ideXlab platform.

  • Computational and performance aspects of PCA-based face-recognition algorithms
    Perception, 2001
    Co-Authors: Hyeonjoon Moon, P. Jonathon Phillips
    Abstract:

    Algorithms based on principal component analysis (PCA) form the basis of numerous studies in the psychological and algorithmic face-recognition literature. PCA is a statistical technique and its incorporation into a face-recognition algorithm requires numerous design decisions. We explicitly state the design decisions by introducing a generic modular PCA algorithm. This allows us to investigate these decisions, including those not documented in the literature. We experimented with different implementations of each module, and evaluated the different implementations using the September 1996 FERET evaluation protocol (the de facto standard for evaluating face-recognition algorithms). We experimented with (i) changing the illumination normalization procedure; (ii) studying effects on algorithm performance of compressing images with JPEG and wavelet compression algorithms; (iii) varying the number of eigenvectors in the representation; and (iv) changing the similarity measure in the classification process. We performed two experiments. In the first experiment, we obtained performance results on the standard September 1996 FERET large-gallery image sets. In the second experiment, we examined the variability in algorithm performance on different sets of facial images. The study was performed on 100 randomly generated image sets (galleries) of the same size. Our two most significant results are (i) changing the similarity measure produced the greatest change in performance, and (ii) a difference in performance of ±10% is needed to distinguish between algorithms.

E A Kurbatova - One of the best experts on this subject based on the ideXlab platform.

  • Wavelet compression of off-axis digital holograms using real-imaginary and amplitude-phase parts
    Scientific Reports, 2019
    Co-Authors: Pavel A. Cheremkhin, E A Kurbatova
    Abstract:

    Compression of digital holograms allows one to store, transmit, and reconstruct large sets of holographic data. There are many digital image compression methods, and wavelets are commonly used for this task. However, digital holograms have many specific properties that affect compression. As a result, it is preferable to use a set of methods that includes filtering, scalar and vector quantization, wavelet processing, etc. Used in conjunction, these methods allow one to achieve an acceptable quality of reconstructed images and significant compression ratios. In this paper, wavelet compression of the amplitude/phase and real/imaginary parts of the Fourier spectrum of filtered off-axis digital holograms is compared. The combination of frequency filtering, compression of the obtained spectral components, and extra compression of the wavelet decomposition coefficients by threshold processing and quantization is analyzed. Computer-generated and experimentally recorded digital holograms are compressed, and the quality of the obtained reconstructed images is estimated. The results demonstrate compression ratios of up to 380 using real/imaginary parts; amplitude/phase compression yields ratios a factor of 2–4 lower for similar quality of the reconstructed objects.
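    The "extra compression" stage — thresholding followed by quantization of the wavelet coefficients — can be sketched as below; a one-level Haar transform stands in for the wavelet decompositions actually compared in the paper, and the threshold and step values are arbitrary.

```python
def haar_1d(x):
    """One-level Haar transform: pairwise averages then differences."""
    avg = [(x[i] + x[i + 1]) / 2.0 for i in range(0, len(x), 2)]
    dif = [(x[i] - x[i + 1]) / 2.0 for i in range(0, len(x), 2)]
    return avg + dif

def compress(coeffs, threshold, step):
    """Zero out small coefficients, then uniformly quantize the rest."""
    out = []
    for c in coeffs:
        if abs(c) < threshold:
            out.append(0)                # threshold processing
        else:
            out.append(round(c / step))  # scalar quantization
    return out

# Slowly varying data with one jump: the detail coefficients are tiny
# and get zeroed, so only the coarse averages survive quantization.
data = [10.0, 10.2, 10.1, 9.9, 50.0, 50.3, 10.0, 10.1]
q = compress(haar_1d(data), threshold=0.5, step=0.25)
nonzero = sum(1 for c in q if c != 0)
print(q, nonzero)  # [40, 40, 201, 40, 0, 0, 0, 0] 4
```

    The run of zeros is what a subsequent entropy coder exploits; the achievable ratio depends on how aggressively the threshold and quantization step are traded against reconstruction quality.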

  • Quality of reconstruction of compressed off-axis digital holograms by frequency filtering and wavelets
    Applied optics, 2017
    Co-Authors: Pavel A. Cheremkhin, E A Kurbatova
    Abstract:

    Compression of digital holograms can significantly help with the storage, transmission, and reconstruction of objects and data in 2D and 3D form. Compression of standard images by wavelet-based methods allows high compression ratios (up to 20–50 times) with minimal loss of quality. In the case of digital holograms, applying wavelets directly does not yield high compression; however, additional preprocessing and postprocessing can afford significant compression of holograms with acceptable quality of the reconstructed images. In this paper, the application of wavelet transforms to compression of off-axis digital holograms is considered. The combined technique is based on zero- and twin-order elimination, wavelet compression of the amplitude and phase components of the obtained Fourier spectrum, and further compression of the wavelet coefficients by thresholding and quantization. Numerical experiments on reconstruction of images from the compressed holograms are performed, and a comparative analysis of the applicability of various wavelets and of methods for additional compression of wavelet coefficients is carried out. Optimum compression parameters for these methods are estimated. The size of the holographic data was reduced by up to a factor of 190.
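    The zero-order elimination step amounts to suppressing the DC region of the hologram's spectrum. In 1D, with a naive DFT and an illustrative signal (none of this is the paper's actual filtering geometry), the idea looks like this:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for tiny examples)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning complex samples."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def remove_zero_order(x, width=1):
    """Zero the DC bin (and optionally neighboring low bins) of the spectrum."""
    X = dft(x)
    n = len(X)
    for k in list(range(width)) + list(range(n - width + 1, n)):
        X[k % n] = 0
    return [v.real for v in idft(X)]

# A constant offset (the zero order) riding on a fringe pattern:
signal = [5.0 + (1.0 if t % 2 == 0 else -1.0) for t in range(8)]
filtered = remove_zero_order(signal)
print([round(v, 6) for v in filtered])  # [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
```

    After filtering, only the fringe term remains; in the 2D hologram case the same idea is applied with a mask over the spectral regions holding the zero and twin orders before the wavelet stage.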

Alexandre Ciancio - One of the best experts on this subject based on the ideXlab platform.

  • Energy-efficient data representation and routing for wireless sensor networks based on a distributed wavelet compression algorithm
    Information Processing in Sensor Networks, 2006
    Co-Authors: Alexandre Ciancio, Antonio Ortega, Sundeep Pattem, Bhaskar Krishnamachari
    Abstract:

    We address the problem of energy consumption reduction for wireless sensor networks, where each of the sensors has limited power and acquires data that should be transmitted to a central node. The final goal is to have a reconstructed version of the data measurements at the central node, with the sensors spending as little energy as possible, for a given data reconstruction accuracy. In our scenario, sensors in the network have a choice of different coding schemes to achieve varying levels of compression. The compression algorithms considered are based on the lifting factorization of the wavelet transform, and exploit the natural data flow in the network to aggregate data by computing partial wavelet coefficients that are refined as data flows towards the central node. The proposed algorithm operates by first selecting a routing strategy through the network. Then, for each route, an optimal combination of data representation algorithms, i.e., an assignment at each node, is selected. A simple heuristic is used to determine the data representation technique to use once path merges are taken into consideration. We demonstrate that, by optimizing the coding algorithm selection, the overall energy consumption can be significantly reduced when compared to the case where data is simply quantized and forwarded to the central node. Moreover, the proposed algorithm provides a tool to compare different routing techniques and identify those that are most efficient overall, for given node locations. We evaluate the algorithm using both a second-order autoregressive (AR) model and empirical data from a real wireless sensor network deployment.

  • A distributed wavelet compression algorithm for wireless multihop sensor networks using lifting
    International Conference on Acoustics Speech and Signal Processing, 2004
    Co-Authors: Alexandre Ciancio, Antonio Ortega
    Abstract:

    We address the problem of compression for wireless sensor networks, where each of the sensors has limited power and acquires data that should be sent to a central node. The final goal is to have a reconstructed version of the sampled field at the central node, with the sensors spending as little energy as possible. We propose a distributed compression algorithm for multihop, distributed sensor networks based on the lifting factorization of the wavelet transform that exploits the natural data flow in the network to aggregate data by computing partial wavelet coefficients that are refined as the data flows towards the central node. A key result of our work is that by performing partial computations we greatly reduce unnecessary transmission, significantly reducing the overall energy consumption.