Vector Quantizers


The Experts below are selected from a list of 279 Experts worldwide ranked by ideXlab platform

R.m. Gray - One of the best experts on this subject based on the ideXlab platform.

  • texture classification based on multiple gauss mixture Vector Quantizers
    International Conference on Multimedia and Expo, 2002
    Co-Authors: Kyungsuk Pyun, Chee Sun Won, Johan Lim, R.m. Gray
    Abstract:

    We propose a texture classification method using multiple Gauss mixture Vector Quantizers (GMVQ). We designed a separate model codebook, or Gauss mixture, for each texture from a training data set, using the generalized Lloyd algorithm with a minimum discrimination information (MDI) distortion. The multi-codebook structure of the GMVQ classifier is an extension to images of the isolated utterance speech recognizer of J.E. Shore and D. Burton (see Proc. Int. Conf. Acoust., Speech, and Sig. Processing, IEEE82Ch.1746-7, p.907-10, 1982). We applied the algorithm to the Brodatz texture database and showed it to be competitive with other texture classifiers. Its low-complexity implementation and real-time operation make the approach suitable for content-based image retrieval.
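    The generalized Lloyd codebook design at the core of this approach can be sketched as follows. This is a minimal squared-error (LBG-style) version in plain Python, standing in for the Gauss-mixture, MDI-distortion design actually used in the paper; the toy data and parameters are illustrative only.

```python
import random

def generalized_lloyd(data, codebook_size, iters=20, seed=0):
    """Minimal generalized Lloyd (LBG) codebook design under
    squared-error distortion; a stand-in for the Gauss-mixture,
    MDI-distortion design used in the paper."""
    rng = random.Random(seed)
    codebook = [list(v) for v in rng.sample(data, codebook_size)]

    def dist(x, c):
        return sum((a - b) ** 2 for a, b in zip(x, c))

    for _ in range(iters):
        # Nearest-codeword partition of the training set.
        cells = [[] for _ in range(codebook_size)]
        for x in data:
            j = min(range(codebook_size), key=lambda j: dist(x, codebook[j]))
            cells[j].append(x)
        # Centroid update; empty cells keep their old codeword.
        for j, members in enumerate(cells):
            if members:
                dim = len(members[0])
                codebook[j] = [sum(m[k] for m in members) / len(members)
                               for k in range(dim)]
    return codebook

# Toy usage: two tight clusters yield their two centroids.
data = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
codebook = sorted(generalized_lloyd(data, 2), key=lambda c: c[0])
```

    Replacing the squared-error distortion and centroid step with an MDI distortion between Gauss mixtures recovers the setting of the paper.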

  • asymptotic performance of Vector Quantizers with the perceptual distortion measure
    International Symposium on Information Theory, 1997
    Co-Authors: Jia Li, N Chaddha, R.m. Gray
    Abstract:

    Gersho's (1979) bounds on the asymptotic performance of Vector Quantizers are valid for Vector distortions which are powers of the Euclidean norm. Yamada, Tazaki, and Gray (1980) generalized the results to distortion measures that are increasing functions of the norm of their argument. In both cases, the distortion is uniquely determined by the Vector quantization error, i.e., the Euclidean difference between the original Vector and the codeword into which it is quantized. We generalize these asymptotic bounds to input-weighted quadratic distortion measures and measures that are approximately output-weighted-quadratic when the distortion is small, a class of distortion measures often claimed to be perceptually meaningful. An approximation of the asymptotic distortion based on Gersho's conjecture is derived as well. We also consider the problem of source mismatch, where the quantizer is designed using a probability density different from the true source density. The resulting asymptotic performance in terms of distortion increase in decibels is shown to be linear in the relative entropy between the true and estimated probability densities.
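    For orientation, the classical high-rate result being generalized here is the Zador/Gersho asymptotic distortion formula, stated below in its standard form (taken from the general literature, not transcribed from this paper):

```latex
% Asymptotic distortion of an optimal k-dimensional Vector quantizer
% with N codewords, under the r-th power of the Euclidean norm:
D(N) \;\approx\; C(k,r)
  \left[ \int p(x)^{\frac{k}{k+r}}\,dx \right]^{\frac{k+r}{k}}
  N^{-r/k}
% C(k,r) is Gersho's constant, tied to the conjectured optimal
% (space-filling) quantizer cell shape; p is the source density.
```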

  • finite state hierarchical table lookup Vector quantization for images
    International Conference on Acoustics Speech and Signal Processing, 1996
    Co-Authors: N Chaddha, Sanjeev Mehrotra, R.m. Gray
    Abstract:

    This paper presents an algorithm for image compression using finite state hierarchical table-lookup Vector quantization. Finite state Vector Quantizers are Vector Quantizers with memory. Finite state Vector quantization (FSVQ) takes advantage of the correlation between adjacent blocks of pixels in an image and also helps in overcoming the complexity problem of block memoryless VQ for large block sizes by using smaller block sizes for similar performance. FSVQ algorithms typically try to preserve edge and gray scale gradient continuity across block boundaries in images in order to reduce blockiness. Our algorithm combines FSVQ with hierarchical table-lookup Vector quantization. Thus the full-search encoder in an FSVQ is replaced by a table-lookup encoder. In these table lookup encoders, input Vectors to the encoder are used directly as addresses in code tables to choose the code-words. In order to preserve manageable table sizes for large dimension VQs, we use hierarchical structures to quantize the Vector successively in stages. Since both the encoder and decoder are implemented by table lookups, there are no arithmetic computations required in the final system implementation. To further improve the subjective quality of compressed images we use block transform based finite-state table-lookup Vector Quantizers with subjective distortion measures. There is no need to perform the forward or reverse transforms as they are implemented in the tables.
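    A toy illustration of the hierarchical table-lookup idea, with hypothetical two-level codebooks (the finite-state machinery, block transforms, and subjective distortion measures of the paper are omitted). All nearest-neighbor searches happen offline when the tables are built; encoding itself is lookups only.

```python
from itertools import product

# Hypothetical toy setup: scalars are quantized to 4 levels; a pair
# of scalar indices addresses a stage-1 table, and a pair of stage-1
# indices addresses a stage-2 table of 4-D codewords.
LEVELS = [0.0, 1.0, 2.0, 3.0]
STAGE1_CB = [(0.0, 0.0), (0.0, 3.0), (3.0, 0.0), (3.0, 3.0)]
STAGE2_CB = [(0.0, 0.0, 0.0, 0.0), (0.0, 0.0, 3.0, 3.0),
             (3.0, 3.0, 0.0, 0.0), (3.0, 3.0, 3.0, 3.0)]

def nearest(v, codebook):
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))

# Tables are built offline by brute-force nearest-neighbor search.
table1 = {(i, j): nearest((LEVELS[i], LEVELS[j]), STAGE1_CB)
          for i, j in product(range(4), repeat=2)}
table2 = {(a, b): nearest(STAGE1_CB[a] + STAGE1_CB[b], STAGE2_CB)
          for a, b in product(range(4), repeat=2)}

def encode(x):
    """Encode a 4-D vector by successive table lookups; no
    arithmetic beyond the initial scalar quantization."""
    idx = [nearest((s,), [(l,) for l in LEVELS]) for s in x]
    a = table1[(idx[0], idx[1])]
    b = table1[(idx[2], idx[3])]
    return table2[(a, b)]
```

    Each additional hierarchy level doubles the vector dimension handled while keeping every individual table small, which is what keeps large-dimension table-lookup VQ practical.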

  • unbalanced non binary tree structured Vector Quantizers
    Asilomar Conference on Signals Systems and Computers, 1993
    Co-Authors: T M Schmidl, Pamela C Cosman, R.m. Gray
    Abstract:

    An established method for developing unbalanced binary tree-structured Vector Quantizers is greedy growing followed by optimal pruning. These algorithms can be extended to a hybrid binary/quaternary tree structure or to a pure quaternary tree structure. The trade-off of decreased distortion for increased rate is examined for the split into two or four children at each terminal node. The trees employing quaternary splits have smaller memory requirements for the codebook and provide slightly lower mean squared error on the test sequence than a binary tree.

  • a comparison of growing and pruning balanced and unbalanced tree structured Vector Quantizers
    International Symposium on Information Theory, 1991
    Co-Authors: Eve A Riskin, R.m. Gray
    Abstract:

    We examine the question of whether growing and optimally pruning balanced or unbalanced tree-structured Vector Quantizers will result in lower average distortion. It is shown that growing an unbalanced tree will not always lead to lower distortion than growing a balanced tree, even though the balanced tree has more structure. Conditions are presented under which it is guaranteed that pruning an unbalanced tree will always outperform pruning a balanced tree of the same initial average bit rate. Finally, we present a case where pruning a balanced tree may outperform, at some bit rates, pruning an unbalanced tree of lower initial distortion.

Jean Cardinal - One of the best experts on this subject based on the ideXlab platform.

  • design of tree structured multiple description Vector Quantizers
    Data Compression Conference, 2001
    Co-Authors: Jean Cardinal
    Abstract:

    We present a new multiple description source coding scheme based on tree-structured Vector quantization (TSVQ). In this scheme, the codebook of each decoder is organized in a binary tree. The encoding is greedy and based on a sequence of binary decisions as in traditional TSVQ. Each binary decision of the encoder corresponds to adding information on one of the available channels and the encoding complexity can be shown to be proportional to the total bitrate. We describe the encoder structure for the two-channel case, and propose an entropy-constrained design algorithm based on marginal return analysis. Experimental results on a Gaussian source are presented for various design parameters and the generalization of the scheme to more than two channels is outlined.

  • multipath tree structured Vector Quantizers
    European Signal Processing Conference, 2000
    Co-Authors: Jean Cardinal
    Abstract:

    Tree-structured Vector quantization (TSVQ) is a popular means of avoiding the exponential complexity of full-search Vector Quantizers. We present two new design algorithms for TSVQ in which more than one path can be chosen at each internal node. The two algorithms differ in the way the paths are chosen. In the first algorithm the number of paths is fixed and the encoding is similar to the M-algorithm for delayed-decision coders. In the second algorithm, the paths are chosen adaptively at each node, according to a (1 + ε)-nearest-neighbor rule. We show the performance of the two algorithms on an AR(1) Gaussian process and observe that the adaptive method performs best. These methods achieve near-full-search performance at a fraction of the complexity.
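    The fixed-path variant can be sketched as a beam search down the codebook tree. The depth-2 tree and codeword values below are hypothetical; only the multipath encoding rule is illustrated, not the design algorithms of the paper.

```python
# Toy depth-2 binary codebook tree: a path tuple of bits maps to
# the codeword stored at that node (hypothetical values).
TREE = {
    (0,): -2.0, (1,): 2.0,        # level-1 codewords
    (0, 0): -3.0, (0, 1): -2.5,
    (1, 0): 1.8, (1, 1): 0.1,     # leaf codewords
}
DEPTH = 2

def multipath_encode(x, m):
    """Descend the tree keeping the m lowest-distortion paths at
    each level; m = 1 is ordinary greedy TSVQ, and larger m
    approaches a full search of the leaves."""
    paths = [()]
    for _ in range(DEPTH):
        children = [p + (b,) for p in paths for b in (0, 1)]
        children.sort(key=lambda p: (x - TREE[p]) ** 2)
        paths = children[:m]
    return paths[0]
```

    On this tree, greedy descent from x = -0.2 commits to the left subtree and ends at the leaf -2.5, while keeping two paths also explores the right subtree and finds the much closer leaf 0.1.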

  • complexity constrained tree structured Vector Quantizers
    21st Symposium on Information Theory in the Benelux, 2000
    Co-Authors: Jean Cardinal
    Abstract:

    We present a new algorithm for complexity-distortion optimization in tree-structured Vector Quantizers (TSVQ). The algorithm allows the user to specify average rate and computational complexity budgets R and C, measured in bits and multiplications per sample, respectively. The output is a TSVQ that is optimal, in a sense made precise in the paper, subject to those constraints. The complexity budget is lower-bounded by the complexity of a binary TSVQ and upper-bounded by that of a full-search entropy-constrained Vector quantizer. Experimental results for synthetic and natural sources are given.

Eve A Riskin - One of the best experts on this subject based on the ideXlab platform.

  • a comparison of growing and pruning balanced and unbalanced tree structured Vector Quantizers
    International Symposium on Information Theory, 1991
    Co-Authors: Eve A Riskin, R.m. Gray
    Abstract:

    We examine the question of whether growing and optimally pruning balanced or unbalanced tree-structured Vector Quantizers will result in lower average distortion. It is shown that growing an unbalanced tree will not always lead to lower distortion than growing a balanced tree, even though the balanced tree has more structure. Conditions are presented under which it is guaranteed that pruning an unbalanced tree will always outperform pruning a balanced tree of the same initial average bit rate. Finally, we present a case where pruning a balanced tree may outperform, at some bit rates, pruning an unbalanced tree of lower initial distortion.

  • lookahead in growing tree structured Vector Quantizers
    International Conference on Acoustics Speech and Signal Processing, 1991
    Co-Authors: Eve A Riskin, R.m. Gray
    Abstract:

    A technique is presented for directly designing an unbalanced variable-rate tree-structured Vector quantizer. The algorithm is an extension of an algorithm for decision tree design which grows the tree one node at a time rather than one layer at a time. The node that is split is the one that yields the greatest decrease in distortion per unit increase in rate; this amounts to a lookahead step of depth one. The authors then modify the growing technique to allow for lookahead of depths two and three. It is found that two- and three-step lookahead provide only slight improvement in the signal-to-noise ratio of the overall tree (on the order of 0.6 dB).
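    The depth-one growing criterion can be sketched as follows. The per-leaf statistics are hypothetical placeholders; a deeper lookahead would also evaluate splits of the prospective children before choosing.

```python
def best_split(leaves, split_gain):
    """Pick the leaf to split next. split_gain(n) returns (dD, dR):
    the distortion decrease and rate increase if leaf n is split.
    The winner is the leaf with the steepest slope dD / dR."""
    return max(leaves, key=lambda n: split_gain(n)[0] / split_gain(n)[1])

# Hypothetical per-leaf statistics for one growing step:
# slopes are a: 4.0, b: 4.5, c: 6.0, so leaf "c" is split next.
GAINS = {"a": (4.0, 1.0), "b": (9.0, 2.0), "c": (3.0, 0.5)}
chosen = best_split(list(GAINS), GAINS.get)
```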

R L Baker - One of the best experts on this subject based on the ideXlab platform.

  • recursive optimal pruning with applications to tree structured Vector Quantizers
    IEEE Transactions on Image Processing, 1992
    Co-Authors: S Z Kiang, R L Baker, Gary J Sullivan, C Y Chiu
    Abstract:

    A pruning algorithm of P.A. Chou et al. (1989) for designing optimal tree structures identifies only those codebooks which lie on the convex hull of the original codebook's operational distortion-rate function. The authors introduce a modified version of the original algorithm, which identifies a large number of codebooks having minimum average distortion, under the constraint that, in each step, only nodes having no descendants are removed from the tree. All codebooks generated by the original algorithm are also generated by this algorithm. The new algorithm generates a much larger number of codebooks in the middle- and low-rate regions. The additional codebooks permit operation near the codebook's operational distortion-rate function without time sharing, by choosing from the increased number of available bit rates. Despite the statistical mismatch which occurs when coding data outside the training sequence, these pruned codebooks retain their performance advantage over full-search Vector Quantizers (VQs) for a large range of rates.
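    A single pruning step in this family of algorithms can be sketched as the mirror image of greedy growing: remove the subtree whose deletion costs the least distortion per bit saved. The subtree statistics below are hypothetical; real implementations compute dD and dR from training-set statistics and recurse over the tree.

```python
def prune_next(candidates):
    """candidates maps a node to (dD, dR): the distortion increase
    and the rate saved if its subtree is pruned back to a leaf.
    Pruning the node with the smallest slope dD / dR removes the
    flattest segment of the operational distortion-rate curve."""
    return min(candidates, key=lambda n: candidates[n][0] / candidates[n][1])

# Hypothetical prunable-subtree statistics:
# slopes are u: 2.0, v: 0.5, w: 1.5, so subtree "v" is pruned first.
CANDS = {"u": (2.0, 1.0), "v": (0.5, 1.0), "w": (3.0, 2.0)}
```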

  • recursive optimal pruning of tree structured Vector Quantizers
    International Conference on Acoustics Speech and Signal Processing, 1991
    Co-Authors: S Z Kiang, Gary J Sullivan, Chungyen Chiu, R L Baker
    Abstract:

    The generalized BFOS (G-BFOS), a sequential pruning algorithm for designing optimal tree structures, was presented by Chou, Lookabaugh, and Gray (see IEEE Trans. Inf. Theory, vol.35, no.2, p.299, 1989), and it was applied to tree-structured Vector Quantizers (TSVQ). G-BFOS yields VQ codebooks that often outperform conventional generalized Lloyd full-search codebooks having the same rate and block size. The authors have developed a modified version of G-BFOS, called the recursive optimal pruning algorithm (ROPA), which recursively searches for the nodes to be pruned next. The sequence of these pruned codebooks includes the original optimal G-BFOS codebooks and many additional ones. The optimality of these codebooks is described, and simulations evaluate their performance.

N.m. Nasrabadi - One of the best experts on this subject based on the ideXlab platform.

  • Very low bit-rate video coding using variable block-size entropy-constrained residual Vector Quantizers
    IEEE Journal on Selected Areas in Communications, 1997
    Co-Authors: Heesung Kwon, M. Venkatramam, N.m. Nasrabadi
    Abstract:

    We present a practical video coding algorithm for use at very low bit rates. For efficient coding at very low bit rates, it is important to intelligently allocate bits within a frame, and so a powerful variable-rate algorithm is required. We use Vector quantization to encode the motion-compensated residue signal in an H.263-like framework. For a given complexity, it is well understood that structured Vector Quantizers perform better than unstructured and unconstrained Vector Quantizers. A combination of structured Vector Quantizers is used in our work to encode the video sequences. The proposed codec is a multistage residual Vector quantizer, with transform Vector Quantizers in the initial stages. The transform-VQ captures the low-frequency information, using only a small portion of the bit budget, while the later-stage residual VQ captures the high-frequency information, using the remaining bits. We used a strategy to adaptively refine only areas of high activity, using recursive decomposition and selective refinement in the later stages. An entropy constraint was used to modify the codebooks to allow better entropy coding of the indexes. We evaluate the performance of the proposed codec and compare it with that of an H.263-based codec. Experimental results show that the proposed codec delivered significantly better perceptual quality along with better quantitative performance.
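    The multistage residual structure can be sketched as follows, with hypothetical scalar stage codebooks (the paper's transform-domain stages, variable block sizes, and entropy constraint are omitted): each stage quantizes whatever error the previous stage left behind.

```python
def nearest(x, codebook):
    return min(codebook, key=lambda c: (x - c) ** 2)

def residual_vq_encode(x, stage_codebooks):
    """Quantize x stage by stage; each stage codes the residual
    of the previous one, so coarse stages spend few bits and
    later stages refine the details."""
    indices, residual = [], x
    for cb in stage_codebooks:
        c = nearest(residual, cb)
        indices.append(cb.index(c))
        residual -= c          # the next stage codes what is left
    return indices, residual

# Hypothetical coarse-to-fine stage codebooks.
STAGES = [[-4.0, 0.0, 4.0], [-1.0, 0.0, 1.0], [-0.25, 0.0, 0.25]]
idx, err = residual_vq_encode(3.3, STAGES)
```

    Encoding 3.3 picks 4.0, then -1.0, then 0.25, leaving a final error of about 0.05; the decoder simply sums the selected codewords.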

  • multi stage target recognition using modular Vector Quantizers and multilayer perceptrons
    Computer Vision and Pattern Recognition, 1996
    Co-Authors: L A Chan, N.m. Nasrabadi, V Mirelli
    Abstract:

    An automatic target recognition (ATR) classifier is proposed that uses modularly cascaded Vector Quantizers (VQs) and multilayer perceptrons (MLPs). A dedicated VQ codebook is constructed for each target class at a specific range of aspects, which is trained with the K-means algorithm and a modified learning Vector quantization (LVQ) algorithm. Each final codebook is expected to give the lowest mean squared error (MSE) for its correct target class at a given range of aspects. These MSEs are then processed by an array of window MLPs and a target MLP consecutively. In the spatial domain, target recognition rates of 90.3 and 65.3 percent are achieved for moderately and highly cluttered test sets, respectively. Using the wavelet decomposition with an adaptive and independent codebook per sub-band, the VQs alone have produced recognition rates of 98.7 and 69.0 percent on more challenging training and test sets, respectively.
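    The per-class minimum-MSE decision rule used here can be sketched as follows. Hypothetical scalar codebooks stand in for the block feature templates, and the window and target MLP stages that post-process the MSEs are omitted.

```python
def mse(x, codebook):
    """Matching error of x against a class codebook: distance to
    its best-matching codeword (template)."""
    return min((x - c) ** 2 for c in codebook)

def classify(x, class_codebooks):
    """Each class has a dedicated codebook; the class whose
    codebook represents x with the lowest MSE wins."""
    return min(class_codebooks, key=lambda k: mse(x, class_codebooks[k]))

# Hypothetical per-class codebooks (scalars instead of templates).
BOOKS = {"tank": [10.0, 12.0], "truck": [3.0, 4.0]}
```

    In the papers, these per-class MSEs are not used directly for the final decision but are fed to the MLP stages, which learn how to weigh them.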

  • automatic target recognition using modularly cascaded Vector Quantizers and multilayer perceptrons
    International Conference on Acoustics Speech and Signal Processing, 1996
    Co-Authors: L A Chan, N.m. Nasrabadi, V Mirelli
    Abstract:

    An automatic target recognition classifier is constructed of a set of Vector Quantizers (VQs) and multilayer perceptrons (MLPs) that are modularly cascaded. A dedicated VQ codebook is constructed for each target at a specific range of aspects. Each codebook is a set of block feature templates that are iteratively adapted to represent a particular target at a specific range of aspects. These templates are further trained by a modified learning Vector quantization (LVQ) algorithm that enhances their discriminatory power. The mean squared errors resulting from matching the input image with the block templates in each codebook are input to an array of window MLPs (WMLPs). Each WMLP is trained to recognize its intended target at a specific range of aspects. The outputs of the WMLPs are manipulated and fed into a target MLP (TMLP) that produces the final recognition results. A recognition rate of 65.3 percent is achieved on a highly cluttered test set.