Function Space

The experts below are selected from a list of 360 experts worldwide ranked by the ideXlab platform.

Alejandro Ribeiro - One of the best experts on this subject based on the ideXlab platform.

  • Parsimonious Online Learning with Kernels via Sparse Projections in Function Space
    Journal of Machine Learning Research, 2019
    Co-Authors: Alec Koppel, Garrett Warnell, Ethan Stump, Alejandro Ribeiro
    Abstract:

    Despite their attractiveness, popular perception is that techniques for nonparametric function approximation do not scale to streaming data due to an intractable growth in the amount of storage they require. To solve this problem in a memory-affordable way, we propose an online technique based on functional stochastic gradient descent in tandem with supervised sparsification based on greedy function subspace projections. The method, called parsimonious online learning with kernels (POLK), provides a controllable tradeoff between its solution accuracy and the amount of memory it requires. We derive conditions under which the generated function sequence converges almost surely to the optimal function, and we establish that the memory requirement remains finite. We evaluate POLK for kernel multi-class logistic regression and kernel hinge-loss classification on three canonical data sets: a synthetic Gaussian mixture model, the MNIST hand-written digits, and the Brodatz texture database. On all three tasks, we observe a favorable tradeoff of objective function evaluation, classification performance, and complexity of the nonparametric regressor extracted by the proposed method.
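
    The procedure this abstract describes alternates two operations: a functional stochastic gradient step that appends the newest sample to a kernel dictionary, and a sparsifying projection that keeps the dictionary small. Below is a minimal Python sketch of that loop under simplifying assumptions: a Gaussian kernel, squared loss, and a magnitude-based pruning rule standing in for the paper's KOMP-based subspace projection. Every name, default value, and the pruning rule itself are illustrative, not taken from the paper.

        import numpy as np

        def gaussian_kernel(X, Y, bandwidth=1.0):
            # k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)), evaluated pairwise.
            d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * bandwidth ** 2))

        class POLKSketch:
            # f(x) = sum_i w[i] * k(points[i], x); each step grows the dictionary,
            # then prunes atoms whose weight falls below a compression budget.
            def __init__(self, step=0.1, reg=1e-3, budget=1e-2, bandwidth=1.0):
                self.step, self.reg, self.budget, self.bw = step, reg, budget, bandwidth
                self.points = None            # dictionary points, shape (m, d)
                self.w = np.empty(0)          # kernel expansion weights, shape (m,)

            def predict(self, X):
                if self.w.size == 0:
                    return np.zeros(len(X))
                return gaussian_kernel(X, self.points, self.bw) @ self.w

            def step_update(self, x, y):
                # Functional SGD on squared loss with a ridge penalty: shrink the
                # existing weights, append the new point weighted by its residual.
                resid = y - self.predict(x[None, :])[0]
                self.w = np.append(self.w * (1.0 - self.step * self.reg),
                                   self.step * resid)
                if self.points is None:
                    self.points = x[None, :]
                else:
                    self.points = np.vstack([self.points, x])
                self._prune()

            def _prune(self):
                # Drop low-weight atoms; for a Gaussian kernel k(d_i, d_i) = 1, so
                # each dropped atom changes f by at most |w_i| in the RKHS norm.
                keep = np.abs(self.w) > self.budget
                if keep.any():
                    self.points, self.w = self.points[keep], self.w[keep]

    With these placeholder defaults, the dictionary size is governed by budget: a larger budget prunes more aggressively and sacrifices accuracy to save memory, which is the tradeoff the abstract refers to.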

  • Parsimonious Online Learning with Kernels via Sparse Projections in Function Space
    International Conference on Acoustics, Speech and Signal Processing, 2017
    Co-Authors: Alec Koppel, Garrett Warnell, Ethan Stump, Alejandro Ribeiro
    Abstract:

    We consider stochastic nonparametric regression problems in a reproducing kernel Hilbert space (RKHS), an extension of expected risk minimization to nonlinear function estimation. Popular perception is that kernel methods are inapplicable to online settings, since the generalization of stochastic methods to kernelized function spaces requires memory storage that is cubic in the iteration index (“the curse of kernelization”). We alleviate this intractability in two ways: (1) we consider the use of the functional stochastic gradient method (FSGD), which operates on a subset of training examples at each step; and (2) we extract parsimonious approximations of the resulting stochastic sequence via a greedy sparse subspace projection scheme based on kernel orthogonal matching pursuit (KOMP). We establish that this method converges almost surely in both diminishing and constant algorithm step-size regimes for a specific selection of the sparse approximation budget. The method is evaluated on a kernel multi-class support vector machine problem, where data samples are generated from class-dependent Gaussian mixture models.
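
    The sparse subspace projection in step (2) is what keeps memory bounded. The sketch below shows a destructive variant of kernel orthogonal matching pursuit under stated assumptions: it repeatedly removes the dictionary atom whose removal, after re-fitting the remaining weights in the RKHS norm, costs the least, and stops once no removal stays within the approximation budget. The function name, the jitter term, and the exact stopping rule are illustrative rather than quoted from the paper.

        import numpy as np

        def komp_prune(D, w, kernel, eps):
            # D: dictionary points (m, d); w: expansion weights (m,);
            # kernel(A, B): pairwise Gram matrix; eps: RKHS-norm error budget.
            K = kernel(D, D)
            norm2 = w @ K @ w                         # ||f||_H^2 of the input function
            keep, v = list(range(len(w))), w.copy()
            while len(keep) > 1:
                best_err2, best = np.inf, None
                for j in keep:
                    cand = [i for i in keep if i != j]
                    K_cc = K[np.ix_(cand, cand)] + 1e-10 * np.eye(len(cand))
                    b = K[cand, :] @ w                # <f, k(d_i, .)> for candidate atoms
                    u = np.linalg.solve(K_cc, b)      # least-squares refit of the weights
                    err2 = norm2 - b @ u              # ||f - projection onto candidates||_H^2
                    if err2 < best_err2:
                        best_err2, best = err2, (cand, u)
                if best_err2 > eps ** 2:
                    break                             # removing any further atom breaks the budget
                keep, v = best
            return D[keep], v

    For example, passing kernel=lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / 2) prunes a Gaussian-kernel expansion while keeping the RKHS-norm error below eps.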

Alec Koppel - One of the best experts on this subject based on the ideXlab platform.

  • Parsimonious Online Learning with Kernels via Sparse Projections in Function Space
    Journal of Machine Learning Research, 2019
    Co-Authors: Alec Koppel, Garrett Warnell, Ethan Stump, Alejandro Ribeiro
    Abstract:

    Despite their attractiveness, popular perception is that techniques for nonparametric function approximation do not scale to streaming data due to an intractable growth in the amount of storage they require. To solve this problem in a memory-affordable way, we propose an online technique based on functional stochastic gradient descent in tandem with supervised sparsification based on greedy function subspace projections. The method, called parsimonious online learning with kernels (POLK), provides a controllable tradeoff between its solution accuracy and the amount of memory it requires. We derive conditions under which the generated function sequence converges almost surely to the optimal function, and we establish that the memory requirement remains finite. We evaluate POLK for kernel multi-class logistic regression and kernel hinge-loss classification on three canonical data sets: a synthetic Gaussian mixture model, the MNIST hand-written digits, and the Brodatz texture database. On all three tasks, we observe a favorable tradeoff of objective function evaluation, classification performance, and complexity of the nonparametric regressor extracted by the proposed method.

  • Parsimonious Online Learning with Kernels via Sparse Projections in Function Space
    International Conference on Acoustics, Speech and Signal Processing, 2017
    Co-Authors: Alec Koppel, Garrett Warnell, Ethan Stump, Alejandro Ribeiro
    Abstract:

    We consider stochastic nonparametric regression problems in a reproducing kernel Hilbert space (RKHS), an extension of expected risk minimization to nonlinear function estimation. Popular perception is that kernel methods are inapplicable to online settings, since the generalization of stochastic methods to kernelized function spaces requires memory storage that is cubic in the iteration index (“the curse of kernelization”). We alleviate this intractability in two ways: (1) we consider the use of the functional stochastic gradient method (FSGD), which operates on a subset of training examples at each step; and (2) we extract parsimonious approximations of the resulting stochastic sequence via a greedy sparse subspace projection scheme based on kernel orthogonal matching pursuit (KOMP). We establish that this method converges almost surely in both diminishing and constant algorithm step-size regimes for a specific selection of the sparse approximation budget. The method is evaluated on a kernel multi-class support vector machine problem, where data samples are generated from class-dependent Gaussian mixture models.

Garrett Warnell - One of the best experts on this subject based on the ideXlab platform.

  • Parsimonious Online Learning with Kernels via Sparse Projections in Function Space
    Journal of Machine Learning Research, 2019
    Co-Authors: Alec Koppel, Garrett Warnell, Ethan Stump, Alejandro Ribeiro
    Abstract:

    Despite their attractiveness, popular perception is that techniques for nonparametric function approximation do not scale to streaming data due to an intractable growth in the amount of storage they require. To solve this problem in a memory-affordable way, we propose an online technique based on functional stochastic gradient descent in tandem with supervised sparsification based on greedy function subspace projections. The method, called parsimonious online learning with kernels (POLK), provides a controllable tradeoff between its solution accuracy and the amount of memory it requires. We derive conditions under which the generated function sequence converges almost surely to the optimal function, and we establish that the memory requirement remains finite. We evaluate POLK for kernel multi-class logistic regression and kernel hinge-loss classification on three canonical data sets: a synthetic Gaussian mixture model, the MNIST hand-written digits, and the Brodatz texture database. On all three tasks, we observe a favorable tradeoff of objective function evaluation, classification performance, and complexity of the nonparametric regressor extracted by the proposed method.

  • Parsimonious Online Learning with Kernels via Sparse Projections in Function Space
    International Conference on Acoustics, Speech and Signal Processing, 2017
    Co-Authors: Alec Koppel, Garrett Warnell, Ethan Stump, Alejandro Ribeiro
    Abstract:

    We consider stochastic nonparametric regression problems in a reproducing kernel Hilbert space (RKHS), an extension of expected risk minimization to nonlinear function estimation. Popular perception is that kernel methods are inapplicable to online settings, since the generalization of stochastic methods to kernelized function spaces requires memory storage that is cubic in the iteration index (“the curse of kernelization”). We alleviate this intractability in two ways: (1) we consider the use of the functional stochastic gradient method (FSGD), which operates on a subset of training examples at each step; and (2) we extract parsimonious approximations of the resulting stochastic sequence via a greedy sparse subspace projection scheme based on kernel orthogonal matching pursuit (KOMP). We establish that this method converges almost surely in both diminishing and constant algorithm step-size regimes for a specific selection of the sparse approximation budget. The method is evaluated on a kernel multi-class support vector machine problem, where data samples are generated from class-dependent Gaussian mixture models.

Ethan Stump - One of the best experts on this subject based on the ideXlab platform.

  • Parsimonious Online Learning with Kernels via Sparse Projections in Function Space
    Journal of Machine Learning Research, 2019
    Co-Authors: Alec Koppel, Garrett Warnell, Ethan Stump, Alejandro Ribeiro
    Abstract:

    Despite their attractiveness, popular perception is that techniques for nonparametric function approximation do not scale to streaming data due to an intractable growth in the amount of storage they require. To solve this problem in a memory-affordable way, we propose an online technique based on functional stochastic gradient descent in tandem with supervised sparsification based on greedy function subspace projections. The method, called parsimonious online learning with kernels (POLK), provides a controllable tradeoff between its solution accuracy and the amount of memory it requires. We derive conditions under which the generated function sequence converges almost surely to the optimal function, and we establish that the memory requirement remains finite. We evaluate POLK for kernel multi-class logistic regression and kernel hinge-loss classification on three canonical data sets: a synthetic Gaussian mixture model, the MNIST hand-written digits, and the Brodatz texture database. On all three tasks, we observe a favorable tradeoff of objective function evaluation, classification performance, and complexity of the nonparametric regressor extracted by the proposed method.

  • Parsimonious Online Learning with Kernels via Sparse Projections in Function Space
    International Conference on Acoustics, Speech and Signal Processing, 2017
    Co-Authors: Alec Koppel, Garrett Warnell, Ethan Stump, Alejandro Ribeiro
    Abstract:

    We consider stochastic nonparametric regression problems in a reproducing kernel Hilbert space (RKHS), an extension of expected risk minimization to nonlinear function estimation. Popular perception is that kernel methods are inapplicable to online settings, since the generalization of stochastic methods to kernelized function spaces requires memory storage that is cubic in the iteration index (“the curse of kernelization”). We alleviate this intractability in two ways: (1) we consider the use of the functional stochastic gradient method (FSGD), which operates on a subset of training examples at each step; and (2) we extract parsimonious approximations of the resulting stochastic sequence via a greedy sparse subspace projection scheme based on kernel orthogonal matching pursuit (KOMP). We establish that this method converges almost surely in both diminishing and constant algorithm step-size regimes for a specific selection of the sparse approximation budget. The method is evaluated on a kernel multi-class support vector machine problem, where data samples are generated from class-dependent Gaussian mixture models.

Nina Taft - One of the best experts on this subject based on the ideXlab platform.

  • Learning in a Large Function Space: Privacy-Preserving Mechanisms for SVM Learning
    Journal of Privacy and Confidentiality, 2012
    Co-Authors: Benjamin I P Rubinstein, Peter L Bartlett, Ling Huang, Nina Taft
    Abstract:

    The ubiquitous need for analyzing privacy-sensitive information, including health records, personal communications, product ratings, and social network data, is driving significant interest in privacy-preserving data analysis across several research communities. This paper explores the release of Support Vector Machine (SVM) classifiers while preserving the privacy of training data. The SVM is a popular machine learning method that maps data to a high-dimensional feature space before learning a linear decision boundary. We present efficient mechanisms for finite-dimensional feature mappings and for (potentially infinite-dimensional) mappings with translation-invariant kernels. In the latter case, our mechanism borrows a technique from large-scale learning to learn in a finite-dimensional feature space whose inner product uniformly approximates the desired feature space inner product (the desired kernel) with high probability. Differential privacy is established using algorithmic stability, a property used in learning theory to bound generalization error. Utility, meaning that the private classifier is pointwise close to the non-private classifier with high probability, is proven using the smoothness of regularized empirical risk minimization with respect to small perturbations of the feature mapping. Finally, we conclude with lower bounds on the differential privacy of any mechanism approximating the SVM.
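
    The "technique from large-scale learning" used for translation-invariant kernels is a random finite-dimensional feature map whose inner product approximates the kernel. Below is a minimal sketch for the Gaussian RBF case, assuming that kernel; the function name, feature count, and bandwidth are placeholders, not values from the paper.

        import numpy as np

        def random_fourier_features(X, n_features=200, bandwidth=1.0, seed=0):
            # Returns z(X) such that z(x) @ z(y) ~= exp(-||x - y||^2 / (2 * bandwidth^2)).
            rng = np.random.default_rng(seed)
            d = X.shape[1]
            W = rng.normal(scale=1.0 / bandwidth, size=(d, n_features))  # frequencies ~ N(0, I / bandwidth^2)
            b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)           # random phases
            return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

    A linear SVM trained on random_fourier_features(X) then stands in for the kernel SVM, and because its weight vector is finite-dimensional it can be released (after noise is added) even though the original feature space is infinite-dimensional.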

  • Learning in a Large Function Space: Privacy-Preserving Mechanisms for SVM Learning
    arXiv: Learning, 2009
    Co-Authors: Benjamin I P Rubinstein, Peter L Bartlett, Ling Huang, Nina Taft
    Abstract:

    Several recent studies in privacy-preserving learning have considered the trade-off between utility or risk and the level of differential privacy guaranteed by mechanisms for statistical query processing. In this paper we study this trade-off in private Support Vector Machine (SVM) learning. We present two efficient mechanisms, one for the case of finite-dimensional feature mappings and one for potentially infinite-dimensional feature mappings with translation-invariant kernels. For the case of translation-invariant kernels, the proposed mechanism minimizes regularized empirical risk in a random reproducing kernel Hilbert space whose kernel uniformly approximates the desired kernel with high probability. This technique, borrowed from large-scale learning, allows the mechanism to respond with a finite encoding of the classifier even when the function class has infinite VC dimension. Differential privacy is established using a proof technique from algorithmic stability. Utility, meaning that the mechanism's response function is pointwise epsilon-close to the non-private SVM with probability 1 - delta, is proven by appealing to the smoothness of regularized empirical risk minimization with respect to small perturbations of the feature mapping. We conclude with a lower bound on the optimal differential privacy of the SVM. This negative result states that, for any delta, no mechanism can be simultaneously (epsilon, delta)-useful and beta-differentially private for small epsilon and small beta.
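
    The privacy side of such mechanisms follows the output-perturbation pattern: train in the finite-dimensional feature space, then add noise calibrated to how much a single training record can move the learned weights. A schematic under that assumption is below; the sensitivity bound is left to the caller, since the paper derives its own bound from the stability of regularized ERM, and the function name is illustrative.

        import numpy as np

        def release_private_weights(weights, sensitivity, privacy_budget, seed=None):
            # Output perturbation: add Laplace noise with scale sensitivity / privacy_budget.
            # `sensitivity` must upper-bound the L1 change in `weights` when one training
            # record is replaced; `privacy_budget` is the differential-privacy level
            # (the abstract's beta).
            rng = np.random.default_rng(seed)
            noise = rng.laplace(scale=sensitivity / privacy_budget, size=weights.shape)
            return weights + noise

    In the abstract's notation this addresses the beta-differential-privacy side; the (epsilon, delta)-usefulness side is the separate requirement that the added noise leaves the released classifier pointwise close to its non-private counterpart with high probability.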