Rate Vector

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 2082 Experts worldwide ranked by ideXlab platform

N. Farvardin - One of the best experts on this subject based on the ideXlab platform.

  • A structured fixed-Rate Vector quantizer derived from a variable-length scalar quantizer. I. Memoryless sources
    IEEE Transactions on Information Theory, 1993
    Co-Authors: R. Laroia, N. Farvardin
    Abstract:

    A low-complexity, fixed-Rate structured Vector quantizer for memoryless sources is described. This quantizer is referred to as the scalar-Vector quantizer (SVQ), and the structure of its codebook is derived from a variable-length scalar quantizer. Design and implementation algorithms for this quantizer are developed and bounds on its performance are provided. Simulation results show that performance close to that of the optimal entropy-constrained scalar quantizer is possible with the fixed-Rate quantizer. The SVQ is also robust against channel errors and outperforms both Lloyd-Max and entropy-constrained scalar quantizers for a wide range of channel error probabilities.

  • A structured fixed-Rate Vector quantizer derived from a variable-length scalar quantizer. II. Vector sources
    IEEE Transactions on Information Theory, 1993
    Co-Authors: R. Laroia, N. Farvardin
    Abstract:

    For Pt.I see ibid., vol.39, no.3, p.851-67 (1993). The fixed-Rate scalar-Vector quantizer (SVQ) for quantizing stationary memoryless sources is extended to a specific type of Vector source in which each component is a stationary memoryless scalar subsource independent of the other components. Algorithms for the design and implementation of the original SVQ are modified to apply to this case. The resulting SVQ, referred to as the extended SVQ (ESVQ), is then used to quantize stationary sources with memory (with known autocorrelation function). Numerical results are presented for the quantization of first-order Gauss-Markov sources using this scheme. It is shown that the ESVQ-based scheme performs very close to entropy-coded transform quantization while maintaining a fixed-Rate output and outperforms the fixed-Rate scheme that uses scalar Lloyd-Max quantization of the transform coefficients. It is also shown that this scheme performs better than implementable Vector quantizers, especially at high Rates.

R M Gray - One of the best experts on this subject based on the ideXlab platform.

  • Clustering and Finding the Number of Clusters by Unsupervised Learning of Mixture Models using Vector Quantization
    2007 IEEE International Conference on Acoustics Speech and Signal Processing - ICASSP '07, 2007
    Co-Authors: Sangho Yoon, R M Gray
    Abstract:

    A new Lagrangian formulation with entropy and codebook size was proposed to extend the Lagrangian formulation of variable-Rate Vector quantization. We use the new Lagrangian formulation to perform clustering and to find the number of clusters by fitting mixture models to data using Vector quantization. Experimental results show that the entropy and memory constrained Vector quantization outperforms the state-of-the-art model selection algorithms in the examples considered.
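The entropy-constrained assignment described above can be sketched in a minimal form: each point is assigned to the codeword minimizing squared distance plus a Lagrangian codelength penalty, and codewords that attract no points are dropped, so the surviving codebook size acts as a cluster-count estimate. This is an illustrative sketch, not the authors' algorithm; the `ecvq_cluster` name, the squared-error distortion, and the single multiplier `lam` are assumptions.

```python
import numpy as np

def ecvq_cluster(X, k_init=8, lam=0.5, iters=50, seed=0):
    """Entropy-constrained VQ clustering (illustrative sketch).

    Assign each point to the codeword minimizing
    squared distance + lam * codelength, where codelength is
    -log2 of the cluster's empirical probability. Clusters that
    lose all their points are dropped, so the surviving codebook
    size serves as an estimate of the number of clusters.
    """
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k_init, replace=False)]      # initial codebook
    p = np.full(len(C), 1.0 / len(C))                     # empirical codeword probabilities
    for _ in range(iters):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # squared distances
        cost = d2 + lam * (-np.log2(np.maximum(p, 1e-12)))    # distortion + rate penalty
        a = cost.argmin(1)                                    # Lagrangian assignment
        keep = np.array([i for i in range(len(C)) if (a == i).any()])
        C = np.stack([X[a == i].mean(0) for i in keep])       # centroid update
        p = np.array([(a == i).mean() for i in keep])         # probability update
    return C, p
```

With `lam = 0` this reduces to ordinary k-means; larger `lam` penalizes improbable codewords and tends to shrink the codebook.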

  • variable Rate Vector quantization for speech image and video compression
    IEEE Transactions on Communications, 1993
    Co-Authors: T Lookabaugh, Eve A Riskin, Philip A Chou, R M Gray
    Abstract:

    The performance of a Vector quantizer can be improved by using a variable-Rate code. Three variable-Rate Vector quantization systems are applied to speech, image, and video sources and compared to standard Vector quantization and noiseless variable-Rate coding approaches. The systems range from a simple and flexible tree-based Vector quantizer to a high-performance, but complex, jointly optimized Vector quantizer and noiseless code. The systems provide significant performance improvements for subband speech coding, predictive image coding, and motion-compensated video, but provide only marginal improvements for Vector quantization of linear predictive coefficients in speech and direct Vector quantization of images. Criteria are suggested for determining when variable-Rate Vector quantization may provide significant performance improvement over standard approaches.

  • Variable Rate Vector Quantization
    Vector Quantization and Signal Compression, 1992
    Co-Authors: Allen Gersho, R M Gray
    Abstract:

    All of the lossy compression schemes considered thus far have been fixed Rate schemes, systems that send a fixed number of bits for each input symbol or block or, equivalently, a fixed number of bits per unit time. The only variable Rate schemes considered were the noiseless codes of Chapter 9 where channel codewords of differing sizes were assigned to different input Vectors. In the noiseless code case the basic strategy is fairly obvious: by assigning short codewords to highly probable input Vectors and long codewords to rare input Vectors in an intelligent fashion, one can reduce the average number of bits sent. A similar strategy can be applied in Vector quantization: instead of sending a fixed number, R, of bits for each input Vector, one might have a code that assigns a variable number of bits to different codewords. For example, one could use several bits to describe codewords for active input Vectors such as plosives in speech or edges in an image and use fewer bits for codewords describing low activity input Vectors such as silence in speech or background in images. Unlike the lossless codes, however, it is not clear that here the best strategy is to assign shorter words to more probable Vectors and longer words to less probable since the overall goal is now to minimize average distortion and hence both probability and distortion must be taken into account.
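The trade-off described in this excerpt, weighing distortion against codeword length rather than distortion alone, can be illustrated with a small sketch. The function name is hypothetical and a squared-error distortion is assumed; `lengths` would come from a separate lossless code design (e.g. a Huffman code on codeword probabilities), which is not shown.

```python
import numpy as np

def vr_vq_encode(x, codebook, lengths, lam):
    """Variable-rate VQ encoding sketch: choose the codeword index
    minimizing distortion + lam * codelength, rather than distortion
    alone. `lengths` holds the bit length of each codeword's channel
    code; lam trades distortion against rate.
    """
    d2 = ((codebook - x) ** 2).sum(1)          # squared-error distortion per codeword
    return int(np.argmin(d2 + lam * np.asarray(lengths)))
```

With `lam = 0` this is plain nearest-neighbor (fixed-rate) encoding; as `lam` grows, short-codeword (high-probability) entries win even when they are not the nearest.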

  • Variable Rate Vector Quantization of Images Using Decision Trees
    1990 Conference Record Twenty-Fourth Asilomar Conference on Signals Systems and Computers 1990., 1990
    Co-Authors: Eve A Riskin, R M Gray, Richard A. Olshen
    Abstract:

    Techniques for clustering and the design of decision trees have been combined recently to produce codes. These tree-structured codes are efficient and easy to implement for problems of variable Rate image compression. This paper is a summary of some techniques for the resulting Vector quantizers, which are explained in the context of designing decision trees. We describe how to grow large trees by splitting nodes individually, and how to prune these large trees by an algorithm termed the generalized BFOS algorithm. Estimation based on an independent test sample and on cross-validation both figure in pruning algorithms.
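The pruning step can be sketched in a deliberately simplified form: each candidate prune is scored by its slope, distortion increase per bit of rate saved, and the flattest-slope candidates are applied first. This is a toy model, not the generalized BFOS algorithm itself: it ignores subtree nesting (real pruning must recompute candidates as subtrees merge), and the function name and tuple representation are assumptions.

```python
def bfos_prune(prunes, rate_budget, current_rate):
    """Slope-ordered pruning sketch. Each candidate in `prunes`
    is (dD, dR): the distortion increase and rate decrease from
    collapsing one subtree into its root. Candidates are applied
    in order of increasing slope dD/dR until the average rate
    meets the budget, tracing points on the lower convex hull of
    the operational distortion-rate curve.
    """
    applied = []
    for dD, dR in sorted(prunes, key=lambda c: c[0] / c[1]):
        if current_rate <= rate_budget:
            break
        current_rate -= dR        # rate saved by this prune
        applied.append((dD, dR))  # distortion paid for it
    return applied, current_rate
```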

  • Variable Rate Vector quantization of images
    1990
    Co-Authors: R M Gray, Eve A Riskin
    Abstract:

    Vector quantization is a lossy compression technique that has become popular in the last decade. Its performance for image compression applications can be significantly improved by using variable Rate codes, which are able to code active regions of an image, such as the edges, at a higher resolution. At the same time, variable Rate coding saves bits by coding the less active regions, such as the background or regions of constant intensity, at a lower resolution. In this thesis, we present several applications of a recently developed pruning technique for variable Rate tree-structured Vector quantizer design to images. Pruned tree-structured Vector quantization (PTSVQ) is particularly suitable for progressive transmission of images, in which an increasingly higher quality image can be reconstructed by the decoder. A variation of PTSVQ incorporates a predictive preprocessor that improves the performance of the coders by close to 3 dB. Next, a technique is introduced for directly designing a variable Rate tree-structured Vector quantizer. Here the tree is grown one node at a time rather than the typical one layer at a time. This is less constrained than growing a balanced tree and the resulting unbalanced tree outperforms a balanced tree of the same average Rate. When the tree is pruned, additional improvement is measured in the signal-to-noise ratio at high Rates over standard PTSVQ. The tree growing algorithm can be interpreted as a constrained inverse operation of the pruning algorithm. Finally, the pruning algorithm is applied to a bit allocation problem in which differing numbers of bits are allocated in a classified Vector quantizer application. The algorithm is conceptually very simple and has very low complexity under convexity constraints on the class quantizer functions.
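The node-at-a-time growth idea in this abstract can be sketched for 1-D data with squared-error distortion: at each step, split the leaf offering the largest distortion drop, yielding an unbalanced tree. The split rule here (a threshold at the leaf's mean, a crude stand-in for a two-codeword design) and the function names are assumptions, not the thesis's algorithm.

```python
import numpy as np

def split_gain(x):
    """Distortion drop from splitting a leaf's samples at their mean
    (a threshold split standing in for a two-codeword design)."""
    if len(x) < 2:
        return 0.0, (x, x[:0])
    lo, hi = x[x <= x.mean()], x[x > x.mean()]
    if len(lo) == 0 or len(hi) == 0:
        return 0.0, (x, x[:0])
    before = ((x - x.mean()) ** 2).sum()
    after = ((lo - lo.mean()) ** 2).sum() + ((hi - hi.mean()) ** 2).sum()
    return before - after, (lo, hi)

def grow_tsvq(x, n_leaves):
    """Node-at-a-time TSVQ growth sketch (1-D, squared error):
    repeatedly split the leaf with the largest distortion drop,
    producing an unbalanced tree with n_leaves codewords."""
    leaves = [x]
    while len(leaves) < n_leaves:
        gains = [split_gain(leaf) for leaf in leaves]
        i = int(np.argmax([g for g, _ in gains]))
        if gains[i][0] <= 0:          # no useful split remains
            break
        lo, hi = gains[i][1]
        leaves[i:i + 1] = [lo, hi]    # replace leaf by its two children
    return [float(leaf.mean()) for leaf in leaves]
```

Because each split is chosen greedily by gain rather than level by level, well-separated heavy regions get deep (high-resolution) branches while uniform regions stay shallow.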

R. Laroia - One of the best experts on this subject based on the ideXlab platform.

  • A structured fixed-Rate Vector quantizer derived from a variable-length scalar quantizer. II. Vector sources
    IEEE Transactions on Information Theory, 1993
    Co-Authors: R. Laroia, N. Farvardin
    Abstract:

    For Pt.I see ibid., vol.39, no.3, p.851-67 (1993). The fixed-Rate scalar-Vector quantizer (SVQ) for quantizing stationary memoryless sources is extended to a specific type of Vector source in which each component is a stationary memoryless scalar subsource independent of the other components. Algorithms for the design and implementation of the original SVQ are modified to apply to this case. The resulting SVQ, referred to as the extended SVQ (ESVQ), is then used to quantize stationary sources with memory (with known autocorrelation function). Numerical results are presented for the quantization of first-order Gauss-Markov sources using this scheme. It is shown that the ESVQ-based scheme performs very close to entropy-coded transform quantization while maintaining a fixed-Rate output and outperforms the fixed-Rate scheme that uses scalar Lloyd-Max quantization of the transform coefficients. It is also shown that this scheme performs better than implementable Vector quantizers, especially at high Rates.

  • A structured fixed-Rate Vector quantizer derived from a variable-length scalar quantizer. I. Memoryless sources
    IEEE Transactions on Information Theory, 1993
    Co-Authors: R. Laroia, N. Farvardin
    Abstract:

    A low-complexity, fixed-Rate structured Vector quantizer for memoryless sources is described. This quantizer is referred to as the scalar-Vector quantizer (SVQ), and the structure of its codebook is derived from a variable-length scalar quantizer. Design and implementation algorithms for this quantizer are developed and bounds on its performance are provided. Simulation results show that performance close to that of the optimal entropy-constrained scalar quantizer is possible with the fixed-Rate quantizer. The SVQ is also robust against channel errors and outperforms both Lloyd-Max and entropy-constrained scalar quantizers for a wide range of channel error probabilities.

Eve A Riskin - One of the best experts on this subject based on the ideXlab platform.

  • variable Rate Vector quantization for speech image and video compression
    IEEE Transactions on Communications, 1993
    Co-Authors: T Lookabaugh, Eve A Riskin, Philip A Chou, R M Gray
    Abstract:

    The performance of a Vector quantizer can be improved by using a variable-Rate code. Three variable-Rate Vector quantization systems are applied to speech, image, and video sources and compared to standard Vector quantization and noiseless variable-Rate coding approaches. The systems range from a simple and flexible tree-based Vector quantizer to a high-performance, but complex, jointly optimized Vector quantizer and noiseless code. The systems provide significant performance improvements for subband speech coding, predictive image coding, and motion-compensated video, but provide only marginal improvements for Vector quantization of linear predictive coefficients in speech and direct Vector quantization of images. Criteria are suggested for determining when variable-Rate Vector quantization may provide significant performance improvement over standard approaches.

  • Variable Rate Vector Quantization of Images Using Decision Trees
    1990 Conference Record Twenty-Fourth Asilomar Conference on Signals Systems and Computers 1990., 1990
    Co-Authors: Eve A Riskin, R M Gray, Richard A. Olshen
    Abstract:

    Techniques for clustering and the design of decision trees have been combined recently to produce codes. These tree-structured codes are efficient and easy to implement for problems of variable Rate image compression. This paper is a summary of some techniques for the resulting Vector quantizers, which are explained in the context of designing decision trees. We describe how to grow large trees by splitting nodes individually, and how to prune these large trees by an algorithm termed the generalized BFOS algorithm. Estimation based on an independent test sample and on cross-validation both figure in pruning algorithms.

  • Variable Rate Vector quantization of images
    1990
    Co-Authors: R M Gray, Eve A Riskin
    Abstract:

    Vector quantization is a lossy compression technique that has become popular in the last decade. Its performance for image compression applications can be significantly improved by using variable Rate codes, which are able to code active regions of an image, such as the edges, at a higher resolution. At the same time, variable Rate coding saves bits by coding the less active regions, such as the background or regions of constant intensity, at a lower resolution. In this thesis, we present several applications of a recently developed pruning technique for variable Rate tree-structured Vector quantizer design to images. Pruned tree-structured Vector quantization (PTSVQ) is particularly suitable for progressive transmission of images, in which an increasingly higher quality image can be reconstructed by the decoder. A variation of PTSVQ incorporates a predictive preprocessor that improves the performance of the coders by close to 3 dB. Next, a technique is introduced for directly designing a variable Rate tree-structured Vector quantizer. Here the tree is grown one node at a time rather than the typical one layer at a time. This is less constrained than growing a balanced tree and the resulting unbalanced tree outperforms a balanced tree of the same average Rate. When the tree is pruned, additional improvement is measured in the signal-to-noise ratio at high Rates over standard PTSVQ. The tree growing algorithm can be interpreted as a constrained inverse operation of the pruning algorithm. Finally, the pruning algorithm is applied to a bit allocation problem in which differing numbers of bits are allocated in a classified Vector quantizer application. The algorithm is conceptually very simple and has very low complexity under convexity constraints on the class quantizer functions.

  • Variable Rate Vector quantization for medical image compression
    IEEE Transactions on Medical Imaging, 1990
    Co-Authors: Eve A Riskin, T Lookabaugh, Philip A Chou, R M Gray
    Abstract:

    Three techniques for variable-Rate Vector quantizer design are applied to medical images. The first two are extensions of an algorithm for optimal pruning in tree-structured classification and regression due to Breiman et al. The code design algorithms find subtrees of a given tree-structured Vector quantizer (TSVQ), each one optimal in that it has the lowest average distortion of all subtrees of the TSVQ with the same or lesser average Rate. Since the resulting subtrees have variable depth, natural variable-Rate coders result. The third technique is a joint optimization of a Vector quantizer and a noiseless variable-Rate code. This technique is relatively complex but it has the potential to yield the highest performance of all three techniques.

T Lookabaugh - One of the best experts on this subject based on the ideXlab platform.

  • variable Rate Vector quantization for speech image and video compression
    IEEE Transactions on Communications, 1993
    Co-Authors: T Lookabaugh, Eve A Riskin, Philip A Chou, R M Gray
    Abstract:

    The performance of a Vector quantizer can be improved by using a variable-Rate code. Three variable-Rate Vector quantization systems are applied to speech, image, and video sources and compared to standard Vector quantization and noiseless variable-Rate coding approaches. The systems range from a simple and flexible tree-based Vector quantizer to a high-performance, but complex, jointly optimized Vector quantizer and noiseless code. The systems provide significant performance improvements for subband speech coding, predictive image coding, and motion-compensated video, but provide only marginal improvements for Vector quantization of linear predictive coefficients in speech and direct Vector quantization of images. Criteria are suggested for determining when variable-Rate Vector quantization may provide significant performance improvement over standard approaches.

  • Variable Rate Vector quantization for medical image compression
    IEEE Transactions on Medical Imaging, 1990
    Co-Authors: Eve A Riskin, T Lookabaugh, Philip A Chou, R M Gray
    Abstract:

    Three techniques for variable-Rate Vector quantizer design are applied to medical images. The first two are extensions of an algorithm for optimal pruning in tree-structured classification and regression due to Breiman et al. The code design algorithms find subtrees of a given tree-structured Vector quantizer (TSVQ), each one optimal in that it has the lowest average distortion of all subtrees of the TSVQ with the same or lesser average Rate. Since the resulting subtrees have variable depth, natural variable-Rate coders result. The third technique is a joint optimization of a Vector quantizer and a noiseless variable-Rate code. This technique is relatively complex but it has the potential to yield the highest performance of all three techniques.