Approximation Vector - Explore the Science & Experts | ideXlab

Approximation Vector

The experts below are selected from a list of 72 experts worldwide, ranked by the ideXlab platform

E.a.b. Da Silva – 1st expert on this subject based on the ideXlab platform

  • A rate control strategy for embedded wavelet video coders in an MPEG-4 framework
    Seamless Interconnection for Universal Services. Global Telecommunications Conference. GLOBECOM'99. (Cat. No.99CH37042), 1999
    Co-Authors: E.a.b. Da Silva, Rosângela Caetano

    Abstract:

Embedded wavelet encoders possess a number of interesting features for video encoding applications. Among them is the ability to encode a picture with precise control over the bit rate. This differs significantly from the DCT-based methods adopted in the MPEG-4 standard, in which the rate cannot be set directly, being controlled by the quantizer step size instead. We investigate rate-control strategies for embedded wavelet encoders in an MPEG-4 framework that take advantage of the precise bit-rate control obtainable in such coders. We also investigate replacing the traditionally used successive approximation scalar quantization with successive approximation vector quantization. The results are encouraging, showing that both the use of vector quantization and the adoption of an adequate rate-control strategy can improve the objective and subjective quality of video sequences encoded with embedded wavelet encoders.
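
The precise bit-rate control the abstract refers to comes from the embedded (bit-plane) structure of such coders. A minimal Python sketch of the idea, with illustrative names (`encode_bitplanes`, `truncate_to_budget`) that are not from the paper:

```python
# Hedged sketch (not the papers' actual coder): an embedded bit-plane
# representation emits bits in decreasing order of significance, so the
# stream can be cut at any bit budget and still decode to the best
# approximation that budget allows.

def encode_bitplanes(coeffs, num_planes=8):
    """Emit magnitude bit-planes of all coefficients, most significant first."""
    bits = []
    for plane in range(num_planes - 1, -1, -1):
        for c in coeffs:
            bits.append((abs(c) >> plane) & 1)
    return bits

def truncate_to_budget(bits, budget):
    """Rate control reduces to truncation: the target budget is met exactly."""
    return bits[:budget]

coeffs = [57, -3, 12, 0, 91, -44]
stream = encode_bitplanes(coeffs)
clipped = truncate_to_budget(stream, 20)
```

In a DCT-based MPEG-4 coder, by contrast, the rate can only be steered indirectly through the quantizer step size, which is what motivates the rate-control strategies studied here.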

  • Convergent algorithms for successive approximation vector quantisation with applications to wavelet image compression
    IEE Proceedings – Vision Image and Signal Processing, 1999
    Co-Authors: Marcos Craizer, E.a.b. Da Silva, E.g. Ramos

    Abstract:

Embedded wavelet coders have become very popular in image compression applications, owing to their simplicity and high coding efficiency. Most of them incorporate some form of successive approximation scalar quantisation. Recently developed algorithms for successive approximation vector quantisation have been shown to be capable of outperforming successive approximation scalar quantisation. In the paper, some algorithms for successive approximation vector quantisation are analysed. Results that were previously known only on an experimental basis are derived analytically. An improved algorithm is also developed and is proved to be convergent. These algorithms are applied to the coding of wavelet coefficients of images. Experimental results show that the improved algorithm is more stable in a rate × distortion sense, while maintaining coding performance compatible with the state of the art.
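
The convergence behaviour analysed in the paper can be illustrated numerically. This is a deliberate simplification: each stage below subtracts the best orientation codeword scaled by its projection onto the residual (matching-pursuit style), whereas the actual SA-VQ schemes quantize the magnitudes into a fixed geometric series. Codebook size and dimension are illustrative choices.

```python
import numpy as np

# Hedged numeric sketch: with a unit-norm orientation codebook, removing the
# residual's component along the best-aligned codeword shrinks the residual
# norm at every stage, since new_norm^2 = old_norm^2 - projection^2.

rng = np.random.default_rng(0)
codebook = rng.standard_normal((64, 4))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)  # unit orientations

x = rng.standard_normal(4)
residual = x.copy()
norms = [float(np.linalg.norm(residual))]
for _ in range(8):
    proj = codebook @ residual
    k = int(np.argmax(proj))                    # best-aligned orientation
    residual = residual - proj[k] * codebook[k]  # remove that component
    norms.append(float(np.linalg.norm(residual)))
```

A denser codebook (smaller covering angle) makes each stage's shrink factor smaller, which is the geometric picture behind the convergence conditions derived analytically in the paper.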

  • Successive approximation vector quantization with improved convergence
    ITS'98 Proceedings. SBT IEEE International Telecommunications Symposium (Cat. No.98EX202), 1998
    Co-Authors: E.a.b. Da Silva, Marcos Craizer

    Abstract:

Successive approximation vector quantization (SA-VQ) is a relatively recent algorithm in which each vector is represented by a series of vectors of decreasing magnitude, whose orientations are drawn from a fixed orientation codebook. It has been shown to provide good performance in wavelet coding schemes. In this paper, analytical results concerning the convergence of SA-VQ are presented in the form of two theorems. In the first, results that had previously been determined only experimentally are derived analytically. In the second, a modification to the original SA-VQ algorithm is proposed that improves its convergence properties. Image compression results obtained by applying the modified SA-VQ algorithm to the coding of wavelet transform coefficients are then presented, showing improved PSNR performance.
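
The representation described above, a series of orientation-codebook indices with geometrically decreasing magnitudes, can be sketched as an encode/decode pair. The codebook, initial magnitude, shrink factor, and stage count below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

# Hedged sketch of the SA-VQ representation: x is approximated as
# sum_k mag0 * alpha^k * codebook[i_k], so the encoder only needs to
# transmit the index sequence i_0, i_1, ...

def savq_encode(x, codebook, mag0, alpha, stages):
    """Greedily pick the best orientation at each stage; magnitudes shrink by alpha."""
    residual = np.asarray(x, dtype=float).copy()
    mag, indices = mag0, []
    for _ in range(stages):
        k = int(np.argmax(codebook @ residual))  # best-aligned orientation
        indices.append(k)
        residual -= mag * codebook[k]
        mag *= alpha                             # decreasing magnitudes
    return indices

def savq_decode(indices, codebook, mag0, alpha):
    """Rebuild the approximation from the index series alone."""
    x_hat = np.zeros(codebook.shape[1])
    mag = mag0
    for k in indices:
        x_hat += mag * codebook[k]
        mag *= alpha
    return x_hat

rng = np.random.default_rng(1)
codebook = rng.standard_normal((64, 4))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)
x = rng.standard_normal(4)
mag0 = 0.5 * float(np.linalg.norm(x))
idx = savq_encode(x, codebook, mag0, alpha=0.5, stages=6)
x_hat = savq_decode(idx, codebook, mag0, alpha=0.5)
```

Because the magnitude schedule is fixed and shared by encoder and decoder, cutting the index series short at any stage still yields a valid, coarser reconstruction, the same embedded property exploited for rate control.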

Dinh Phung – 2nd expert on this subject based on the ideXlab platform

  • Approximation Vector machines for large scale online learning
    Journal of Machine Learning Research, 2017
    Co-Authors: Trung Le, Tu Dinh Nguyen, Vu Nguyen, Dinh Phung

    Abstract:

One of the most challenging problems in kernel online learning is to bound the model size and to promote model sparsity. Sparse models not only improve computation and memory usage, but also enhance the generalization capacity, a principle that concurs with the law of parsimony. However, inappropriate sparsity modeling may also significantly degrade the performance. In this paper, we propose the Approximation Vector Machine (AVM), a model that can simultaneously encourage sparsity and safeguard against the risk of compromising the performance. In an online setting, when an incoming instance arrives, we approximate it by one of its neighbors whose distance to it is less than a predefined threshold. Our key intuition is that, since the newly seen instance is expressed by its nearby neighbor, the optimal performance can be analytically formulated and maintained. We develop theoretical foundations to support this intuition and further establish an analysis for common loss functions, including Hinge, smooth Hinge, and Logistic (for the classification task) and l1, l2, and ε-insensitive (for the regression task), to characterize the gap between the approximate and optimal solutions. This gap crucially depends on two key factors: the frequency of approximation (i.e., how frequently the approximation operation takes place) and the predefined threshold. We conducted extensive experiments for classification and regression tasks in batch and online modes using several benchmark datasets. The quantitative results show that the proposed AVM obtains predictive performance comparable to current state-of-the-art methods while achieving significant computational speed-up, owing to its ability to maintain the model size.
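
The neighbor-snapping step the abstract describes can be sketched in a few lines. The class name, the linear scan, and the coefficient-accumulation update below are illustrative simplifications, not the paper's actual kernel SGD updates:

```python
import numpy as np

# Hedged sketch of the approximation step AVM is built on: an incoming
# instance is snapped to an existing stored point within a threshold delta,
# so the model size (number of stored points) stays bounded no matter how
# many instances stream past.

class ApproxBudget:
    def __init__(self, delta):
        self.delta = delta
        self.points = []   # stored instances ("support" points)
        self.weights = []  # accumulated coefficients

    def observe(self, x, coef):
        """Fold x's update into a nearby stored point, or store x if none is close."""
        for i, p in enumerate(self.points):
            if np.linalg.norm(x - p) < self.delta:
                self.weights[i] += coef        # reuse the nearby neighbor
                return i
        self.points.append(np.asarray(x, dtype=float).copy())
        self.weights.append(coef)              # genuinely new region
        return len(self.points) - 1

model = ApproxBudget(delta=0.5)
for x in ([0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]):
    model.observe(np.array(x), 1.0)
```

The threshold delta is exactly the knob the analysis quantifies: a larger delta bounds the model size more aggressively but widens the gap to the optimal solution.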

  • Approximation Vector machines for large scale online learning
    arXiv: Learning, 2016
    Co-Authors: Trung Le, Tu Dinh Nguyen, Vu Nguyen, Dinh Phung

    Abstract:

One of the most challenging problems in kernel online learning is to bound the model size and to promote model sparsity. Sparse models not only improve computation and memory usage, but also enhance the generalization capacity, a principle that concurs with the law of parsimony. However, inappropriate sparsity modeling may also significantly degrade the performance. In this paper, we propose the Approximation Vector Machine (AVM), a model that can simultaneously encourage sparsity and safeguard against the risk of compromising the performance. When an incoming instance arrives, we approximate it by one of its neighbors whose distance to it is less than a predefined threshold. Our key intuition is that, since the newly seen instance is expressed by its nearby neighbor, the optimal performance can be analytically formulated and maintained. We develop theoretical foundations to support this intuition and further establish an analysis to characterize the gap between the approximate and optimal solutions. This gap crucially depends on the frequency of approximation and the predefined threshold. We perform the convergence analysis for a wide spectrum of loss functions including Hinge, smooth Hinge, and Logistic for the classification task, and $l_1$, $l_2$, and $\epsilon$-insensitive for the regression task. We conducted extensive experiments for the classification task in batch and online modes, and for the regression task in online mode, over several benchmark datasets. The results show that the proposed AVM achieved predictive performance comparable to current state-of-the-art methods while achieving significant computational speed-up, owing to its ability to maintain the model size.

M Ghanbari – 3rd expert on this subject based on the ideXlab platform

  • A successive approximation vector quantizer for wavelet transform image coding
    IEEE Transactions on Image Processing, 1996
    Co-Authors: E.a.b. Da Silva, Demetrios G Sampson, M Ghanbari

    Abstract:

A coding method for wavelet coefficients of images using vector quantization, called successive approximation vector quantization (SA-W-VQ), is proposed. In this method, each vector is coded by a series of vectors of decreasing magnitude until a certain distortion level is reached. The successive approximation using vectors is analyzed, and conditions for convergence are derived. It is shown that lattice codebooks are an efficient tool for meeting these conditions without the need for very large codebooks. Regular lattices offer the extra advantage of fast encoding algorithms. In SA-W-VQ, distortion equalization of the wavelet coefficients can be achieved together with a high compression ratio and precise bit-rate control. The performance of SA-W-VQ for still image coding is compared against some of the most successful image coding systems reported in the literature. The comparison shows that SA-W-VQ performs remarkably well across several bit rates and on various test images.
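
The "fast encoding" advantage of regular lattices can be made concrete with the D4 lattice (integer points with even coordinate sum): the nearest lattice point is found in linear time by rounding, with a single parity correction, the classic Conway-Sloane procedure. This illustrates the general idea; the paper's actual codebooks and scaling may differ.

```python
import numpy as np

# Hedged sketch: nearest-neighbor search in the D4 lattice needs no codebook
# scan at all. Round every coordinate; if the coordinate sum is odd (parity
# violated), nudge the coordinate with the largest rounding error toward x.

def nearest_d4(x):
    r = np.rint(x)                        # round half to even, per IEEE 754
    if int(r.sum()) % 2 != 0:             # not a D4 point: fix parity
        i = int(np.argmax(np.abs(x - r)))
        r[i] += 1.0 if x[i] > r[i] else -1.0
        # If x[i] == r[i] exactly, either direction is equally near.
    return r
```

An unstructured codebook of comparable size would need a full nearest-neighbor search per input vector, which is why lattices make the convergence conditions affordable without very large codebooks.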