Approximation Vector

The Experts below are selected from a list of 72 Experts worldwide, ranked by the ideXlab platform.

E.a.b. Da Silva - One of the best experts on this subject based on the ideXlab platform.

  • A rate control strategy for embedded wavelet video coders in an MPEG-4 framework
    Seamless Interconnection for Universal Services. Global Telecommunications Conference. GLOBECOM'99. (Cat. No.99CH37042), 1999
    Co-Authors: E.a.b. Da Silva, Rosângela Caetano
    Abstract:

    Embedded wavelet encoders possess a number of interesting features for video encoding applications. Among these is the ability to encode a picture with precise control over the bit rate. This differs significantly from the DCT-based methods adopted in the MPEG-4 standard, in which the rate cannot be set directly but is instead controlled by the quantizer step size. We investigate rate-control strategies for embedded wavelet encoders in an MPEG-4 framework that take advantage of the precise bit-rate control such coders offer. We also investigate the use of successive Approximation Vector quantization in them, replacing the traditionally used successive Approximation scalar quantization. The results are encouraging, showing that both the use of Vector quantization and the adoption of an adequate rate-control strategy can improve the objective and subjective quality of video sequences encoded with embedded wavelet encoders.
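
    Because an embedded bitstream is decodable from any prefix, hitting an exact per-frame rate amounts to truncating each frame's stream at its bit budget. The sketch below illustrates only this basic mechanism, with a uniform split of a hypothetical GOP budget; it is not the rate-control strategy proposed in the paper.

        def truncate_embedded_streams(embedded_streams, gop_bit_budget):
            """Toy rate control for embedded wavelet coders.

            Each element of `embedded_streams` is one frame's embedded bitstream
            (most significant information first, any prefix decodable), so an
            exact per-frame rate is obtained simply by truncation.  The GOP
            budget is split uniformly here purely for illustration.
            """
            per_frame = gop_bit_budget // len(embedded_streams)
            return [stream[:per_frame] for stream in embedded_streams]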

  • convergent algorithms for successive Approximation Vector quantisation with applications to wavelet image compression
    IEE Proceedings - Vision Image and Signal Processing, 1999
    Co-Authors: Marcos Craizer, E.a.b. Da Silva, E.g. Ramos
    Abstract:

    Embedded wavelet coders have become very popular in image compression applications, owing to their simplicity and high coding efficiency. Most of them incorporate some form of successive Approximation scalar quantisation. Recently developed algorithms for successive Approximation Vector quantisation have been shown to be capable of outperforming successive Approximation scalar quantisation ones. In the paper, some algorithms for successive Approximation Vector quantisation are analysed. Results that were previously known only on an experimental basis are derived analytically. An improved algorithm is also developed and is proved to be convergent. These algorithms are applied to the coding of wavelet coefficients of images. Experimental results show that the improved algorithm is more stable in a rate × distortion sense, while maintaining coding performance compatible with the state of the art.

  • Successive Approximation Vector quantization with improved convergence
    ITS'98 Proceedings. SBT IEEE International Telecommunications Symposium (Cat. No.98EX202), 1998
    Co-Authors: E.a.b. Da Silva, Marcos Craizer
    Abstract:

    Successive Approximation Vector quantization (SA-VQ) is a relatively recent algorithm in which each Vector is represented by a series of Vectors with decreasing magnitudes and with orientations drawn from a fixed orientation codebook. It has been shown to provide good performance in wavelet coding schemes. In this paper, analytical results concerning the convergence of SA-VQ are presented in the form of two theorems. In the first, results which had previously been determined only experimentally are established analytically. In the second, a modification to the original SA-VQ algorithm is proposed which improves its convergence properties. Image compression results obtained by applying the modified SA-VQ algorithm to the coding of wavelet transform coefficients are then presented, showing improved PSNR performance.
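
    The decomposition itself is simple to state: the input Vector is peeled into a sum of codewords taken from a unit-norm orientation codebook and scaled by geometrically decreasing magnitudes. The sketch below is a minimal illustration under assumed choices (initial scale equal to the input norm, fixed ratio alpha, fixed number of stages); the conditions under which such a scheme actually converges are exactly what the theorems in these papers address.

        import numpy as np

        def sa_vq_encode(v, codebook, alpha=0.5, n_stages=8):
            """Successive approximation VQ sketch.

            `codebook` is a (K, d) array of unit-norm orientation codewords.
            At each stage the codeword best aligned with the current residual
            is subtracted with a geometrically decreasing scale.  Returns the
            list of (scale, codeword index) pairs and the final residual.
            """
            residual = np.asarray(v, dtype=float).copy()
            scale = np.linalg.norm(residual)            # illustrative initial magnitude
            stages = []
            for _ in range(n_stages):
                k = int(np.argmax(codebook @ residual))  # most aligned orientation
                stages.append((scale, k))
                residual = residual - scale * codebook[k]
                scale *= alpha                           # magnitudes decrease geometrically
            return stages, residual

        def sa_vq_decode(stages, codebook, d):
            """Reconstruct the Vector from its successive approximation stages."""
            v_hat = np.zeros(d)
            for scale, k in stages:
                v_hat += scale * codebook[k]
            return v_hat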

  • Results on successive Approximation Vector quantisation
    Electronics Letters, 1998
    Co-Authors: Marcos Craizer, E.a.b. Da Silva, E.g. Ramos
    Abstract:

    Successive Approximation Vector quantisation is a new algorithm that has given very good results in coding wavelet coefficients of images. Results which had previously been obtained on an experimental basis are established analytically. After modifications derived from this analysis, the algorithm shows very good convergence properties as well as improved coding performance.

  • a successive Approximation Vector quantizer for wavelet transform image coding
    IEEE Transactions on Image Processing, 1996
    Co-Authors: E.a.b. Da Silva, Demetrios G Sampson, M Ghanbari
    Abstract:

    A coding method for wavelet coefficients of images using Vector quantization, called successive Approximation Vector quantization (SA-W-VQ), is proposed. In this method, each Vector is coded by a series of Vectors of decreasing magnitudes until a certain distortion level is reached. The successive Approximation using Vectors is analyzed, and conditions for convergence are derived. It is shown that lattice codebooks are an efficient tool for meeting these conditions without the need for very large codebooks. Regular lattices offer the extra advantage of fast encoding algorithms. In SA-W-VQ, distortion equalization of the wavelet coefficients can be achieved together with a high compression ratio and precise bit-rate control. The performance of SA-W-VQ for still image coding is compared against some of the most successful image coding systems reported in the literature. The comparison shows that SA-W-VQ performs remarkably well at several bit rates and on various test images.
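
    The appeal of regular lattices mentioned above is that the nearest codeword can be found algebraically rather than by exhaustive search. The sketch below contrasts the two for the simplest possible case, the integer lattice Z^n, where the nearest lattice point is just componentwise rounding; the lattices actually used in the paper are more structured, so this only illustrates the principle.

        import numpy as np

        def nearest_point_Zn(x):
            """Fast lattice encoding: the nearest point of the integer lattice
            Z^n is obtained by componentwise rounding, with no codebook search."""
            return np.rint(x)

        def nearest_codeword_exhaustive(x, codebook):
            """Unstructured codebook: encoding requires a full search over all
            codewords, which becomes expensive for large codebooks."""
            dists = np.sum((codebook - x) ** 2, axis=1)
            return codebook[int(np.argmin(dists))]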

Dinh Phung - One of the best experts on this subject based on the ideXlab platform.

  • Approximation Vector machines for large scale online learning
    Journal of Machine Learning Research, 2017
    Co-Authors: Trung Le, Tu Dinh Nguyen, Vu Nguyen, Dinh Phung
    Abstract:

    One of the most challenging problems in kernel online learning is to bound the model size and to promote model sparsity. Sparse models not only improve computation and memory usage, but also enhance the generalization capacity, a principle that concurs with the law of parsimony. However, inappropriate sparsity modeling may also significantly degrade the performance. In this paper, we propose the Approximation Vector Machine (AVM), a model that can simultaneously encourage sparsity and safeguard against the risk of compromising the performance. In an online setting, when an incoming instance arrives, we approximate this instance by one of its neighbors whose distance to it is less than a predefined threshold. Our key intuition is that since the newly seen instance is expressed by its nearby neighbor, the optimal performance can be analytically formulated and maintained. We develop theoretical foundations to support this intuition and further establish an analysis for the common loss functions, including Hinge, smooth Hinge, and Logistic (for the classification task) and l1, l2, and ε-insensitive (for the regression task), to characterize the gap between the Approximation and optimal solutions. This gap crucially depends on two key factors: the frequency of Approximation (i.e., how frequently the Approximation operation takes place) and the predefined threshold. We conducted extensive experiments for classification and regression tasks in batch and online modes using several benchmark datasets. The quantitative results show that the proposed AVM obtains predictive performance comparable to that of current state-of-the-art methods while achieving significant computational speed-up, owing to its ability to keep the model size bounded.
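
    The Approximation step is easy to sketch: each arriving instance is snapped to an existing core point if one lies within the threshold, and otherwise becomes a new core point; the kernel update is then carried out on that core point, which is what bounds the model size. The sketch below is a minimal illustration only; the Gaussian kernel, the hinge-loss SGD update, and the parameter names `delta` and `eta` are assumptions, not the exact formulation from the paper.

        import numpy as np

        class AVMSketch:
            """Toy approximation-vector-machine-style online binary classifier."""

            def __init__(self, gamma=1.0, delta=0.5, eta=0.1):
                self.gamma, self.delta, self.eta = gamma, delta, eta
                self.cores, self.weights = [], []    # core points and their weights

            def _kernel(self, a, b):
                return np.exp(-self.gamma * np.sum((a - b) ** 2))

            def decision(self, x):
                return sum(w * self._kernel(c, x)
                           for c, w in zip(self.cores, self.weights))

            def update(self, x, y):                  # y in {-1, +1}
                x = np.asarray(x, dtype=float)
                if self.cores:
                    dists = [np.linalg.norm(x - c) for c in self.cores]
                    j = int(np.argmin(dists))
                    if dists[j] <= self.delta:       # approximate x by its nearby core point
                        x = self.cores[j]
                    else:                            # otherwise x becomes a new core point
                        self.cores.append(x)
                        self.weights.append(0.0)
                        j = len(self.cores) - 1
                else:
                    self.cores.append(x)
                    self.weights.append(0.0)
                    j = 0
                if y * self.decision(x) < 1.0:       # hinge-loss SGD step (illustrative)
                    self.weights[j] += self.eta * y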

  • Approximation Vector machines for large scale online learning
    arXiv: Learning, 2016
    Co-Authors: Trung Le, Tu Dinh Nguyen, Vu Nguyen, Dinh Phung
    Abstract:

    One of the most challenging problems in kernel online learning is to bound the model size and to promote model sparsity. Sparse models not only improve computation and memory usage, but also enhance the generalization capacity, a principle that concurs with the law of parsimony. However, inappropriate sparsity modeling may also significantly degrade the performance. In this paper, we propose the Approximation Vector Machine (AVM), a model that can simultaneously encourage sparsity and safeguard against the risk of compromising the performance. When an incoming instance arrives, we approximate this instance by one of its neighbors whose distance to it is less than a predefined threshold. Our key intuition is that since the newly seen instance is expressed by its nearby neighbor, the optimal performance can be analytically formulated and maintained. We develop theoretical foundations to support this intuition and further establish an analysis to characterize the gap between the Approximation and optimal solutions. This gap crucially depends on the frequency of Approximation and the predefined threshold. We perform the convergence analysis for a wide spectrum of loss functions, including Hinge, smooth Hinge, and Logistic for the classification task, and $l_1$, $l_2$, and $\epsilon$-insensitive for the regression task. We conducted extensive experiments for the classification task in batch and online modes, and for the regression task in online mode, over several benchmark datasets. The results show that the proposed AVM achieves predictive performance comparable to that of current state-of-the-art methods while achieving significant computational speed-up, owing to its ability to keep the model size bounded.

M Ghanbari - One of the best experts on this subject based on the ideXlab platform.

  • a successive Approximation Vector quantizer for wavelet transform image coding
    IEEE Transactions on Image Processing, 1996
    Co-Authors: E.a.b. Da Silva, Demetrios G Sampson, M Ghanbari
    Abstract:

    A coding method for wavelet coefficients of images using Vector quantization, called successive Approximation Vector quantization (SA-W-VQ), is proposed. In this method, each Vector is coded by a series of Vectors of decreasing magnitudes until a certain distortion level is reached. The successive Approximation using Vectors is analyzed, and conditions for convergence are derived. It is shown that lattice codebooks are an efficient tool for meeting these conditions without the need for very large codebooks. Regular lattices offer the extra advantage of fast encoding algorithms. In SA-W-VQ, distortion equalization of the wavelet coefficients can be achieved together with a high compression ratio and precise bit-rate control. The performance of SA-W-VQ for still image coding is compared against some of the most successful image coding systems reported in the literature. The comparison shows that SA-W-VQ performs remarkably well at several bit rates and on various test images.

Trung Le - One of the best experts on this subject based on the ideXlab platform.

  • Approximation Vector machines for large scale online learning
    Journal of Machine Learning Research, 2017
    Co-Authors: Trung Le, Tu Dinh Nguyen, Vu Nguyen, Dinh Phung
    Abstract:

    One of the most challenging problems in kernel online learning is to bound the model size and to promote model sparsity. Sparse models not only improve computation and memory usage, but also enhance the generalization capacity, a principle that concurs with the law of parsimony. However, inappropriate sparsity modeling may also significantly degrade the performance. In this paper, we propose the Approximation Vector Machine (AVM), a model that can simultaneously encourage sparsity and safeguard against the risk of compromising the performance. In an online setting, when an incoming instance arrives, we approximate this instance by one of its neighbors whose distance to it is less than a predefined threshold. Our key intuition is that since the newly seen instance is expressed by its nearby neighbor, the optimal performance can be analytically formulated and maintained. We develop theoretical foundations to support this intuition and further establish an analysis for the common loss functions, including Hinge, smooth Hinge, and Logistic (for the classification task) and l1, l2, and ε-insensitive (for the regression task), to characterize the gap between the Approximation and optimal solutions. This gap crucially depends on two key factors: the frequency of Approximation (i.e., how frequently the Approximation operation takes place) and the predefined threshold. We conducted extensive experiments for classification and regression tasks in batch and online modes using several benchmark datasets. The quantitative results show that the proposed AVM obtains predictive performance comparable to that of current state-of-the-art methods while achieving significant computational speed-up, owing to its ability to keep the model size bounded.

  • Approximation Vector machines for large scale online learning
    arXiv: Learning, 2016
    Co-Authors: Trung Le, Tu Dinh Nguyen, Vu Nguyen, Dinh Phung
    Abstract:

    One of the most challenging problems in kernel online learning is to bound the model size and to promote model sparsity. Sparse models not only improve computation and memory usage, but also enhance the generalization capacity, a principle that concurs with the law of parsimony. However, inappropriate sparsity modeling may also significantly degrade the performance. In this paper, we propose the Approximation Vector Machine (AVM), a model that can simultaneously encourage sparsity and safeguard against the risk of compromising the performance. When an incoming instance arrives, we approximate this instance by one of its neighbors whose distance to it is less than a predefined threshold. Our key intuition is that since the newly seen instance is expressed by its nearby neighbor, the optimal performance can be analytically formulated and maintained. We develop theoretical foundations to support this intuition and further establish an analysis to characterize the gap between the Approximation and optimal solutions. This gap crucially depends on the frequency of Approximation and the predefined threshold. We perform the convergence analysis for a wide spectrum of loss functions, including Hinge, smooth Hinge, and Logistic for the classification task, and $l_1$, $l_2$, and $\epsilon$-insensitive for the regression task. We conducted extensive experiments for the classification task in batch and online modes, and for the regression task in online mode, over several benchmark datasets. The results show that the proposed AVM achieves predictive performance comparable to that of current state-of-the-art methods while achieving significant computational speed-up, owing to its ability to keep the model size bounded.

Petr Vaněk - One of the best experts on this subject based on the ideXlab platform.

  • Convergence theory for the exact interpolation scheme with Approximation Vector as the first column of the prolongator and Rayleigh quotient iteration nonlinear smoother
    Applications of Mathematics, 2017
    Co-Authors: Petr Vaněk, Ivana Pultarová
    Abstract:

    We extend the analysis of the recently proposed nonlinear EIS scheme applied to the partial eigenvalue problem. We address the case where the Rayleigh quotient iteration is used as the smoother on the fine level. Unlike in our previous theoretical results, where the smoother given by the linear inverse power method was assumed, we prove nonlinear speed-up when the Approximation becomes close to the exact solution; the speed-up is cubic. Unlike existing convergence estimates for the Rayleigh quotient iteration, our estimates take advantage of the powerful effect of the coarse space.
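
    Rayleigh quotient iteration itself is standard: each step uses the current Rayleigh quotient as a shift in an inverse iteration, which is what gives cubic local convergence for symmetric problems. A minimal sketch of the iteration as a fine-level smoother (the fixed iteration count and the dense solve are illustrative choices, not the scheme's implementation):

        import numpy as np

        def rayleigh_quotient_iteration(A, x0, n_iter=5):
            """Rayleigh quotient iteration for a symmetric matrix A.

            Each step solves (A - rho*I) z = x with the current Rayleigh
            quotient rho = x^T A x (for normalized x) as the shift; near an
            eigenpair of a symmetric A the convergence is cubic.
            """
            x = np.asarray(x0, dtype=float)
            x = x / np.linalg.norm(x)
            rho = x @ (A @ x)
            for _ in range(n_iter):
                try:
                    z = np.linalg.solve(A - rho * np.eye(A.shape[0]), x)
                except np.linalg.LinAlgError:
                    break                     # shift hit an eigenvalue exactly
                x = z / np.linalg.norm(z)
                rho = x @ (A @ x)
            return rho, x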

  • Exact interpolation scheme with Approximation Vector used as a column of the prolongator
    Numerical Linear Algebra With Applications, 2015
    Co-Authors: Roman Kužel, Petr Vaněk
    Abstract:

    Our method is a kind of exact interpolation scheme in the sense of Achi Brandt et al. In the exact interpolation scheme, with x the current fine-level Approximation of the solution, the coarse space V = V(x) is constructed so that x ∈ V. We achieve this simply by adding the Vector x as the first column of the prolongator. (The columns of the prolongator P form a computationally relevant basis of the coarse space V = Range(P).) The advantages of this construction become obvious when solving non-linear problems. The cost of enriching the coarse space V by the current Approximation x is a single dense column of the prolongator that has to be updated in each iteration. Our method can be used for multilevel acceleration of virtually any iterative method used for solving both linear and non-linear systems.
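
    The construction of the coarse space is concrete enough to sketch: the current Approximation x is prepended as a dense first column of the prolongator, and the Galerkin coarse problem is then solved over Range(P). The sketch below shows one two-level step for a symmetric positive definite linear system; the fixed coarse basis P0 and the dense coarse solve are illustrative assumptions, not the paper's full multilevel setting.

        import numpy as np

        def eis_two_level_step(A, b, x, P0):
            """One two-level step with the current approximation x used as the
            first column of the prolongator.

            P = [x | P0], so x lies in the coarse space V = Range(P).  For a
            symmetric positive definite A, the Galerkin solution over V is at
            least as good as x in the A-norm (assumes P has full column rank).
            """
            P = np.column_stack([x, P0])      # enrich the coarse basis with x
            A_c = P.T @ A @ P                 # Galerkin coarse operator
            b_c = P.T @ b
            y = np.linalg.solve(A_c, b_c)     # exact (dense) coarse solve
            return P @ y                      # new fine-level approximation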