Support Vector Machines

The Experts below are selected from a list of 312,597 Experts worldwide, ranked by the ideXlab platform.

Grace Wahba - One of the best experts on this subject based on the ideXlab platform.

  • Multicategory Support Vector Machines
    Journal of the American Statistical Association, 2006
    Co-Authors: Yoonkyung Lee, Yi Lin, Grace Wahba
    Abstract:

    Two-category support vector machines (SVMs) have been very popular in the machine learning community for classification problems. Solving multicategory problems with a series of binary classifiers is common in the SVM paradigm, but this approach may fail under various circumstances. We propose the multicategory support vector machine (MSVM), which extends the binary SVM to the multicategory case and has good theoretical properties. The proposed method provides a unifying framework for both equal and unequal misclassification costs. As a tuning criterion for the MSVM, an approximate leave-one-out cross-validation function, called Generalized Approximate Cross Validation, is derived, analogous to the binary case. The effectiveness of the MSVM is demonstrated through applications to cancer classification using microarray data and cloud classification with satellite radiance profiles.

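    The Lee-Lin-Wahba MSVM itself has no stock implementation in common libraries; as a rough sketch of the single-machine idea (one joint optimization over all classes rather than a series of binary SVMs), scikit-learn's Crammer-Singer multiclass SVM is the closest built-in analogue. The dataset and parameters below are illustrative only.

```python
# Sketch: a single-machine multicategory SVM (Crammer-Singer formulation),
# shown as a stand-in for the MSVM idea; this is NOT the Lee-Lin-Wahba MSVM.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)  # three classes, four features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# multi_class='crammer_singer' solves one joint problem over all classes
# instead of reducing the task to a series of binary SVMs.
clf = LinearSVC(multi_class="crammer_singer", C=1.0, max_iter=10000)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```
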
Bart De Moor - One of the best experts on this subject based on the ideXlab platform.

  • Least Squares Support Vector Machines
    2002
    Co-Authors: Johan A. K. Suykens, Tony Van Gestel, Jos De Brabanter, Bart De Moor, Joos Vandewalle
    Abstract:

    Contents: Support Vector Machines; Basic Methods of Least Squares Support Vector Machines; Bayesian Inference for LS-SVM Models; Robustness; Large Scale Problems; LS-SVM for Unsupervised Learning; LS-SVM for Recurrent Networks and Control.

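    The defining trick of LS-SVMs, covered in the book's early chapters, is that equality constraints and a squared loss turn the SVM's quadratic program into a single linear system. A minimal NumPy sketch of the binary classifier, with illustrative toy data and hyperparameters:

```python
# LS-SVM binary classifier: solve the linear system
#   [ 0      y^T          ] [b]       [0]
#   [ y   Omega + I/gamma ] [alpha] = [1],   Omega_ij = y_i y_j K(x_i, x_j),
# then predict sign(sum_i alpha_i y_i K(x, x_i) + b).
import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = np.outer(y, y) * rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(n))))
    return sol[0], sol[1:]  # b, alpha

def lssvm_predict(X_new, X, y, b, alpha, sigma=1.0):
    return np.sign(rbf(X_new, X, sigma) @ (alpha * y) + b)

# Toy usage: two Gaussian blobs with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
b, alpha = lssvm_fit(X, y)
print("train accuracy:", (lssvm_predict(X, X, y, b, alpha) == y).mean())
```

    Note that every training point gets a nonzero alpha in general: the price of the linear system is the loss of the standard SVM's sparseness.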

  • Optimal Control by Least Squares Support Vector Machines
    Neural Networks, 2001
    Co-Authors: Johan A. K. Suykens, Joos Vandewalle, Bart De Moor
    Abstract:

    Support vector machines have been very successful in pattern recognition and function estimation problems. In this paper we introduce the use of least squares support vector machines (LS-SVMs) for the optimal control of nonlinear systems. Linear and neural full static state feedback controllers are considered. The problem is formulated so that it incorporates the N-stage optimal control problem as well as a least squares support vector machine approach for mapping the state space into the action space. The solution is characterized by a set of nonlinear equations. An alternative formulation as a constrained nonlinear optimization problem in fewer unknowns is given, together with a method for imposing local stability in the LS-SVM control scheme. The results are discussed for support vector machines with a radial basis function kernel. Advantages of LS-SVM control are that no number of hidden units has to be determined for the controller and that no centers have to be specified for the Gaussian kernels when applying Mercer's condition. In comparison with defining a regular grid of centers in classical radial basis function networks, the curse of dimensionality is avoided. This comes at the expense of taking the trajectory of state variables as additional unknowns in the optimization problem, whereas classical neural network approaches typically lead to parametric optimization problems. In the SVM methodology the number of unknowns equals the number of training data, while in the primal space the number of unknowns can be infinite dimensional. The method is illustrated on both stabilization and tracking problems, including examples on swinging up an inverted pendulum with local stabilization at the endpoint and a tracking problem for a ball and beam system.

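    The full scheme couples the controller with the N-stage optimal control problem in one set of nonlinear equations; that coupling is beyond a few lines. What can be sketched is the function-estimation ingredient: an LS-SVM regressor mapping states to actions, in which the training states themselves serve as kernel centers (the point of the "no centers to specify" remark above). The (state, action) pairs below are purely hypothetical.

```python
# LS-SVM regression (kernel ridge with a bias term) as a state-feedback map:
#   [ 0    1^T        ] [b]       [0]
#   [ 1   K + I/gamma ] [alpha] = [u],   u(x) = sum_i alpha_i K(x, x_i) + b.
# Illustrative only; not the paper's full optimal control formulation.
import numpy as np

def rbf(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_reg_fit(X, u, gamma=100.0, sigma=0.5):
    n = len(u)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], u)))
    return sol[0], sol[1:]  # b, alpha

def control(x, X, b, alpha, sigma=0.5):
    # No hidden-unit count and no RBF centers to choose: the training
    # states act as the centers, as in the LS-SVM control scheme.
    return rbf(x, X, sigma) @ alpha + b

# Hypothetical data: state (angle, velocity) -> scalar action.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (50, 2))
u = -1.5 * X[:, 0] - 0.4 * X[:, 1]  # pretend these came from a solver
b, alpha = lssvm_reg_fit(X, u)
print("max fit error:", np.abs(control(X, X, b, alpha) - u).max())
```
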
Yoonkyung Lee - One of the best experts on this subject based on the ideXlab platform.

  • Multicategory Support Vector Machines
    Journal of the American Statistical Association, 2006
    Co-Authors: Yoonkyung Lee, Yi Lin, Grace Wahba
    See the full entry under Grace Wahba above.

Johan A. K. Suykens - One of the best experts on this subject based on the ideXlab platform.

  • Least Squares Support Vector Machines
    2002
    Co-Authors: Johan A. K. Suykens, Tony Van Gestel, Jos De Brabanter, Bart De Moor, Joos Vandewalle
    See the full entry under Bart De Moor above.

  • Optimal Control by Least Squares Support Vector Machines
    Neural Networks, 2001
    Co-Authors: Johan A. K. Suykens, Joos Vandewalle, Bart De Moor
    See the full entry under Bart De Moor above.

Bernhard Schölkopf - One of the best experts on this subject based on the ideXlab platform.

  • Generalized Support Vector Machines
    2020
    Co-Authors: Alex Smola, Bernhard Schölkopf, Peter L. Bartlett, Dale Schuurmans
    Abstract:

    This chapter contains sections titled: Introduction; GSVM: The General Support Vector Machine; Quadratic Programming Support Vector Machines; Linear Programming Support Vector Machines; A Simple Illustrative Example; Conclusion; Acknowledgments.

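    Of the section topics, the QP-versus-LP contrast is the one that lends itself to a few lines of code. The sketch below is a generic kernelized LP-SVM (minimize the sum of the expansion coefficients plus slacks subject to the margin constraints), solved with scipy.optimize.linprog; it illustrates the general idea, not the chapter's GSVM formulation, and the data and parameters are illustrative.

```python
# Linear-programming SVM sketch: minimize sum(alpha) + C*sum(xi) subject to
#   y_i (sum_j alpha_j y_j K_ij + b) >= 1 - xi_i,   alpha, xi >= 0, b free.
import numpy as np
from scipy.optimize import linprog

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lp_svm_fit(X, y, C=1.0, sigma=1.0):
    n = len(y)
    K = rbf(X, X, sigma)
    # Variables z = [alpha (n), xi (n), b+, b-], all >= 0; b = b+ - b-.
    c = np.concatenate([np.ones(n), C * np.ones(n), [0.0, 0.0]])
    A_ub = np.zeros((n, 2 * n + 2))
    A_ub[:, :n] = -(y[:, None] * y[None, :] * K)  # -y_i y_j K_ij alpha_j
    A_ub[:, n:2 * n] = -np.eye(n)                 # -xi_i
    A_ub[:, 2 * n] = -y                           # -y_i b+
    A_ub[:, 2 * n + 1] = y                        # +y_i b-
    z = linprog(c, A_ub=A_ub, b_ub=-np.ones(n), bounds=(0, None)).x
    return z[:n], z[2 * n] - z[2 * n + 1]         # alpha, b

# Toy usage: two Gaussian blobs with labels in {-1, +1}.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 0.4, (15, 2)), rng.normal(1, 0.4, (15, 2))])
y = np.array([-1.0] * 15 + [1.0] * 15)
alpha, b = lp_svm_fit(X, y)
print("train accuracy:", (np.sign(rbf(X, X) @ (alpha * y) + b) == y).mean())
```
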
  • A Tutorial on ν-Support Vector Machines
    Applied Stochastic Models in Business and Industry, 2005
    Co-Authors: Paihsuen Chen, Bernhard Schölkopf
    Abstract:

    We briefly describe the main ideas of statistical learning theory, support vector machines (SVMs), and kernel feature spaces. We place particular emphasis on a description of the so-called ν-SVM, including details of the algorithm and its implementation, theoretical results, and practical applications.

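    The practical appeal of the ν-SVM described in the tutorial is that ν in (0, 1] has a direct interpretation: it upper-bounds the fraction of margin errors and lower-bounds the fraction of support vectors. A minimal scikit-learn sketch (synthetic data and the ν value are illustrative):

```python
# nu-SVM classification: the nu parameter replaces C.
from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

X, y = make_classification(n_samples=200, random_state=0)
# nu=0.2: at most ~20% margin errors, at least ~20% of points become SVs.
clf = NuSVC(nu=0.2, kernel="rbf", gamma="scale").fit(X, y)
print("support vectors:", clf.support_vectors_.shape[0], "of", len(X))
```
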
  • Predicting time series with Support Vector Machines
    2005
    Co-Authors: K. -r. Müller, J. Kohlmorgen, Alex Smola, Bernhard Schölkopf, Gunnar Rätsch, Vladimir Vapnik
    Abstract:

    Support vector machines are used for time series prediction and compared to radial basis function networks. We make use of two different cost functions for support vectors: training with (i) an ε-insensitive loss and (ii) Huber's robust loss function, and discuss how to choose the regularization parameters in these models. Two applications are considered: data from (a) a noisy (normal and uniform noise) Mackey-Glass equation and (b) the Santa Fe competition (set D). In both cases support vector machines show excellent performance. In case (b) the support vector approach improves the best known result on the benchmark by 29%.

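    The recipe in the abstract (embed the series in lag vectors, then regress with an ε-insensitive loss) can be sketched with scikit-learn's SVR; the Huber-loss variant has no stock SVR implementation there. A noisy sine below stands in for the Mackey-Glass data, and the embedding dimension and hyperparameters are illustrative.

```python
# SVM time-series prediction: lag-vector embedding + epsilon-insensitive SVR.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.arange(600)
series = np.sin(0.1 * t) + rng.normal(0, 0.1, len(t))  # noisy stand-in

lag = 6  # embedding dimension
X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]  # one-step-ahead targets

split = 500
model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```
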
  • Kernel Methods and Support Vector Machines
    2003
    Co-Authors: Bernhard Schölkopf, Alex Smola
    Abstract:

    Over the past ten years, kernel methods such as support vector machines and Gaussian processes have become a staple of modern statistical estimation and machine learning. The groundwork for this field was laid in the second half of the 20th century by Vapnik and Chervonenkis (geometrical formulation of an optimal separating hyperplane, capacity measures for margin classifiers), Mangasarian (linear separation by a convex function class), Aronszajn (Reproducing Kernel Hilbert Spaces), Aizerman, Braverman, and Rozonoer (nonlinearity via kernel feature spaces), Arsenin and Tikhonov (regularization and ill-posed problems), and Wahba (regularization in Reproducing Kernel Hilbert Spaces). However, it was not until the early 1990s that positive definite kernels became a popular and viable means of estimation. Firstly, this was due to the lack of sufficiently powerful hardware, since kernel methods require the computation of the so-called kernel matrix, which requires storage quadratic in the number of data points (a computer with at least a few megabytes of memory is needed to deal with 1000+ points). Secondly, many of the previously mentioned techniques lay dormant or existed independently, and only recently were the (in hindsight obvious) connections made that turned them into a practical estimation tool. Nowadays a variety of good reference books exist, and anyone serious about dealing with kernel methods is advised to consult one of the following works for further information [15, 5, 8, 12]. Below, we summarize the main ideas of kernel methods and support vector machines, building on the summary given in [13].

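    The storage remark above is easy to make concrete: the kernel matrix of n points occupies n x n entries, about 8 MB in double precision for n = 1000, and a positive definite kernel makes it positive semidefinite. A small NumPy check:

```python
# Kernel (Gram) matrix: quadratic storage, PSD for a positive definite kernel.
import numpy as np

n, d = 1000, 5
X = np.random.default_rng(0).normal(size=(n, d))
sq = (X ** 2).sum(axis=1)
d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T      # squared distances
K = np.exp(-d2 / 2.0)                             # RBF kernel matrix, n x n
print("storage:", K.nbytes / 1e6, "MB")           # 8.0 MB for n = 1000
print("min eigenvalue:", np.linalg.eigvalsh(K).min())  # >= 0 up to roundoff
```
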
  • Training Invariant Support Vector Machines
    Machine Learning, 2002
    Co-Authors: Dennis Decoste, Bernhard Schölkopf
    Abstract:

    Practical experience has shown that, in order to obtain the best possible performance, prior knowledge about the invariances of the classification problem at hand ought to be incorporated into the training procedure. We describe and review all known methods for doing so in support vector machines, provide experimental results, and discuss their respective merits. One of the significant new results reported in this work is our recent achievement of the lowest reported test error on the well-known MNIST digit recognition benchmark, with SVM training times that are also significantly faster than those of previous SVM methods.
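
    One of the methods reviewed in the paper, the virtual support vector (VSV) idea, fits in a few lines: train an SVM, apply known invariance transformations (here, wrap-around 1-pixel shifts, kept crude for brevity) to the support vectors only, and retrain on the augmented set. A sketch on scikit-learn's small 8x8 digits, with illustrative parameters:

```python
# Virtual support vectors: augment only the SVs with invariant transforms.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
sv, sv_y = X_tr[base.support_], y_tr[base.support_]

def shift(imgs, dx):
    # Horizontal 1-pixel (wrap-around) shift of flattened 8x8 images.
    return np.roll(imgs.reshape(-1, 8, 8), dx, axis=2).reshape(-1, 64)

X_aug = np.vstack([X_tr, shift(sv, 1), shift(sv, -1)])
y_aug = np.concatenate([y_tr, sv_y, sv_y])
vsv = SVC(kernel="rbf", gamma="scale").fit(X_aug, y_aug)

print("base:", base.score(X_te, y_te), "  vsv:", vsv.score(X_te, y_te))
```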