Support Vector

The experts below are selected from a list of 257,445 experts worldwide, ranked by the ideXlab platform.

Thomas Hofmann - One of the best experts on this subject based on the ideXlab platform.

  • Hidden Markov Support Vector Machines
    International Conference on Machine Learning, 2003
    Co-Authors: Yasemin Altun, Ioannis Tsochantaridis, Thomas Hofmann
    Abstract:

    This paper presents a novel discriminative learning technique for label sequences based on a combination of two of the most successful learning algorithms, Support Vector Machines and Hidden Markov Models, which we call the Hidden Markov Support Vector Machine (HM-SVM). The proposed architecture handles dependencies between neighboring labels using Viterbi decoding. In contrast to standard HMM training, the learning procedure is discriminative and is based on a maximum/soft-margin criterion. Compared to previous methods such as Conditional Random Fields, Maximum Entropy Markov Models, and label sequence boosting, HM-SVMs have a number of advantages. Most notably, it is possible to learn non-linear discriminant functions via kernel functions. At the same time, HM-SVMs share the key advantages of other discriminative methods, in particular the capability to deal with overlapping features. We report experimental evaluations on two tasks, named entity recognition and part-of-speech tagging, that demonstrate the competitiveness of the proposed approach.
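
    The Viterbi decoding mentioned above is standard dynamic programming over a learned sequence score. A minimal sketch, assuming hypothetical per-position emission scores and a label-transition score matrix as stand-ins for the learned (possibly kernelized) discriminant:

```python
# Viterbi decoding for a linear label-sequence model. "emission" and
# "transition" are illustrative stand-ins for the learned HM-SVM scores.
import numpy as np

def viterbi(emission, transition):
    """emission: (T, K) per-position label scores;
    transition: (K, K) scores for moving from label i to label j."""
    T, K = emission.shape
    score = np.empty((T, K))
    back = np.zeros((T, K), dtype=int)
    score[0] = emission[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + transition   # cand[i, j]: prev i -> cur j
        back[t] = np.argmax(cand, axis=0)
        score[t] = emission[t] + np.max(cand, axis=0)
    path = [int(np.argmax(score[-1]))]              # backtrack from best end label
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# toy usage: 4 positions, 3 labels
rng = np.random.default_rng(0)
print(viterbi(rng.normal(size=(4, 3)), rng.normal(size=(3, 3))))
```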

Shouyang Wang - One of the best experts on this subject based on the ideXlab platform.

  • Multiple ν Support Vector Regression Based on Spectral Risk Measure Minimization
    Neurocomputing, 2013
    Co-Authors: Yongqiao Wang, Shouyang Wang
    Abstract:

    Statistical learning theory provides the justification for the ε-insensitive loss in Support Vector regression, but offers little guidance on the determination of the critical hyper-parameter ε. Instead of predefining ε, ν-Support Vector regression selects ε automatically by forcing the fraction of deviations larger than ε to be asymptotically equal to ν. In stochastic programming terminology, the goal of ν-Support Vector regression is to minimize the conditional value-at-risk (CVaR) measure of the deviations, i.e., the expectation of the largest ν percent of deviations. This paper tackles the determination of the critical hyper-parameter ν in ν-Support Vector regression when the error term follows a complex distribution. Instead of a single ν, the paper assumes ν to be a combination of multiple, finitely or infinitely many, candidate choices. The cost function thus becomes a weighted sum of the component conditional value-at-risk measures associated with these base νs. The paper shows that this cost function can be represented as a spectral risk measure and that its minimization can be reformulated as a linear programming problem. Experiments on three artificial data sets show that this multi-ν Support Vector regression has a clear advantage over the classical ν-Support Vector regression when the error terms follow mixed polynomial distributions. Experiments on 10 real-world data sets also demonstrate that the new method can achieve better performance than ε-Support Vector regression and ν-Support Vector regression.
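
    To make the role of ν concrete, the following sketch uses scikit-learn's NuSVR (a standard ν-SVR implementation, not the paper's multi-ν extension) on a toy noisy sine dataset; ν lower-bounds the fraction of support vectors and upper-bounds the fraction of margin errors, while the tube width ε is chosen automatically:

```python
# Illustration of the ν mechanism with scikit-learn's NuSVR (not the paper's
# multi-ν method): ν bounds the fraction of support vectors / margin errors.
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.2, size=200)

for nu in (0.1, 0.5, 0.9):
    model = NuSVR(nu=nu, C=1.0, kernel="rbf").fit(X, y)
    frac_sv = model.support_.size / X.shape[0]   # lower-bounded by nu
    print(f"nu={nu}: fraction of support vectors = {frac_sv:.2f}")
```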

  • A New Fuzzy Support Vector Machine to Evaluate Credit Risk
    IEEE Transactions on Fuzzy Systems, 2005
    Co-Authors: Yongqiao Wang, Shouyang Wang
    Abstract:

    Due to recent financial crises and regulatory concerns, credit risk assessment by financial intermediaries is an area of renewed interest in both the academic world and the business community. In this paper, we propose a new fuzzy Support Vector machine to discriminate good creditors from bad ones. Because in credit scoring we usually cannot label a customer as absolutely good (certain to repay on time) or absolutely bad (certain to default), our new fuzzy Support Vector machine treats every sample as belonging to both the positive and the negative class, but with different memberships. In this way, we expect the new fuzzy Support Vector machine to have greater generalization ability, while preserving the merit of being insensitive to outliers, like the fuzzy Support Vector machine (SVM) proposed in previous work. We reformulate this two-group classification problem as a quadratic programming problem. Empirical tests on three public datasets show that it can achieve better discriminatory power than the standard Support Vector machine and the earlier fuzzy Support Vector machine if an appropriate kernel and membership-generation method are chosen.
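
    A rough sketch of the bilateral-membership idea: each sample appears once in each class, weighted by its membership, which can be emulated with a weighted soft-margin SVM. The memberships below are hypothetical placeholders, and the paper's exact QP formulation differs in detail:

```python
# Emulating "each sample belongs to both classes with different memberships"
# via scikit-learn's weighted SVC. Memberships m_good/m_bad are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))               # toy credit features
m_good = rng.uniform(0.2, 1.0, size=100)    # membership in the "good" class
m_bad = 1.0 - m_good                        # membership in the "bad" class

X2 = np.vstack([X, X])                      # each customer duplicated
y2 = np.concatenate([np.ones(100), -np.ones(100)])
w2 = np.concatenate([m_good, m_bad])        # memberships as sample weights

clf = SVC(kernel="rbf", C=10.0).fit(X2, y2, sample_weight=w2)
print(clf.predict(X[:5]))
```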

Yasemin Altun - One of the best experts on this subject based on the ideXlab platform.

  • Hidden Markov Support Vector Machines
    International Conference on Machine Learning, 2003
    Co-Authors: Yasemin Altun, Ioannis Tsochantaridis, Thomas Hofmann
    Abstract:

    This paper presents a novel discriminative learning technique for label sequences based on a combination of two of the most successful learning algorithms, Support Vector Machines and Hidden Markov Models, which we call the Hidden Markov Support Vector Machine (HM-SVM). The proposed architecture handles dependencies between neighboring labels using Viterbi decoding. In contrast to standard HMM training, the learning procedure is discriminative and is based on a maximum/soft-margin criterion. Compared to previous methods such as Conditional Random Fields, Maximum Entropy Markov Models, and label sequence boosting, HM-SVMs have a number of advantages. Most notably, it is possible to learn non-linear discriminant functions via kernel functions. At the same time, HM-SVMs share the key advantages of other discriminative methods, in particular the capability to deal with overlapping features. We report experimental evaluations on two tasks, named entity recognition and part-of-speech tagging, that demonstrate the competitiveness of the proposed approach.
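
    Complementing the decoding view (see the Viterbi sketch in the Thomas Hofmann entry above), the max-margin criterion can be written over a joint feature map Φ(x, y) with a linear discriminant w · Φ(x, y). A schematic sketch, with illustrative emission and transition features, not the paper's exact feature design:

```python
# Joint feature map for a label sequence: emission features (input-label
# co-occurrences) plus transition counts (label bigrams). All names here
# are illustrative.
import numpy as np

def joint_features(x, y, n_feats, n_labels):
    """x: (T, n_feats) input features; y: length-T list of label indices."""
    phi_emit = np.zeros((n_labels, n_feats))
    phi_trans = np.zeros((n_labels, n_labels))
    for t, label in enumerate(y):
        phi_emit[label] += x[t]
        if t > 0:
            phi_trans[y[t - 1], label] += 1
    return np.concatenate([phi_emit.ravel(), phi_trans.ravel()])

def sequence_score(w, x, y, n_feats, n_labels):
    return w @ joint_features(x, y, n_feats, n_labels)

# Training (schematically) enforces, for every incorrect sequence y':
#   score(x, y_true) - score(x, y') >= Hamming(y_true, y') - slack
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
y = [0, 1, 1, 2]
w = rng.normal(size=3 * 3 + 3 * 3)   # emission block + transition block
print(sequence_score(w, x, y, n_feats=3, n_labels=3))
```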

Mario Martin - One of the best experts on this subject based on the ideXlab platform.

  • On-line Support Vector Machine Regression
    European conference on Machine Learning, 2002
    Co-Authors: Mario Martin
    Abstract:

    This paper describes an on-line method for building ε-insensitive Support Vector machines for regression as described in [12]. The method is an extension of the method developed in [1] for building incremental Support Vector machines for classification. Machines obtained with this approach are equivalent to the ones obtained by exact methods such as quadratic programming, but they are obtained more quickly and allow the incremental addition of new points, removal of existing points, and updating of target values for existing data. This development opens the application of SVM regression to areas such as on-line prediction of time series or generalization of value functions in reinforcement learning.
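
    The exact incremental algorithm updates the QP solution while preserving the KKT conditions as points are added or removed. As a rough stand-in for the on-line setting it targets (not the paper's method), scikit-learn's SGDRegressor with the ε-insensitive loss supports one-observation-at-a-time updates:

```python
# On-line ε-insensitive linear regression via stochastic gradient updates.
# This is a sketch of the streaming setting, not the exact incremental SVR.
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(loss="epsilon_insensitive", epsilon=0.1,
                     learning_rate="constant", eta0=0.01)
rng = np.random.default_rng(0)
for t in range(500):                       # stream of (x, y) observations
    x = rng.uniform(-1, 1, size=(1, 2))
    y = np.array([3.0 * x[0, 0] - x[0, 1] + rng.normal(scale=0.05)])
    model.partial_fit(x, y)                # incremental update on one point

print(model.coef_)                         # weights move toward [3, -1]
```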

Yongqiao Wang - One of the best experts on this subject based on the ideXlab platform.

  • Multiple ν Support Vector Regression Based on Spectral Risk Measure Minimization
    Neurocomputing, 2013
    Co-Authors: Yongqiao Wang, Shouyang Wang
    Abstract:

    Statistical learning theory provides the justification for the ε-insensitive loss in Support Vector regression, but offers little guidance on the determination of the critical hyper-parameter ε. Instead of predefining ε, ν-Support Vector regression selects ε automatically by forcing the fraction of deviations larger than ε to be asymptotically equal to ν. In stochastic programming terminology, the goal of ν-Support Vector regression is to minimize the conditional value-at-risk (CVaR) measure of the deviations, i.e., the expectation of the largest ν percent of deviations. This paper tackles the determination of the critical hyper-parameter ν in ν-Support Vector regression when the error term follows a complex distribution. Instead of a single ν, the paper assumes ν to be a combination of multiple, finitely or infinitely many, candidate choices. The cost function thus becomes a weighted sum of the component conditional value-at-risk measures associated with these base νs. The paper shows that this cost function can be represented as a spectral risk measure and that its minimization can be reformulated as a linear programming problem. Experiments on three artificial data sets show that this multi-ν Support Vector regression has a clear advantage over the classical ν-Support Vector regression when the error terms follow mixed polynomial distributions. Experiments on 10 real-world data sets also demonstrate that the new method can achieve better performance than ε-Support Vector regression and ν-Support Vector regression.
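
    The structure of the cost function, a weighted sum of CVaR terms over several candidate νs, is straightforward to compute empirically; a sketch with illustrative νs and weights (the paper minimizes this via linear programming, which is not shown here):

```python
# Weighted sum of empirical CVaR terms over several candidate nu values,
# i.e. the spectral-risk-shaped cost described above. Values are illustrative.
import numpy as np

def cvar(deviations, nu):
    """Empirical CVaR: mean of the largest ceil(nu * n) absolute deviations."""
    d = np.sort(np.abs(deviations))[::-1]
    k = max(1, int(np.ceil(nu * d.size)))
    return d[:k].mean()

def spectral_cost(deviations, nus, weights):
    return sum(w * cvar(deviations, nu) for nu, w in zip(nus, weights))

rng = np.random.default_rng(0)
resid = rng.standard_t(df=3, size=1000)    # heavy-tailed residuals
print(spectral_cost(resid, nus=[0.1, 0.3, 0.5], weights=[0.5, 0.3, 0.2]))
```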

  • A New Fuzzy Support Vector Machine to Evaluate Credit Risk
    IEEE Transactions on Fuzzy Systems, 2005
    Co-Authors: Yongqiao Wang, Shouyang Wang
    Abstract:

    Due to recent financial crises and regulatory concerns, credit risk assessment by financial intermediaries is an area of renewed interest in both the academic world and the business community. In this paper, we propose a new fuzzy Support Vector machine to discriminate good creditors from bad ones. Because in credit scoring we usually cannot label a customer as absolutely good (certain to repay on time) or absolutely bad (certain to default), our new fuzzy Support Vector machine treats every sample as belonging to both the positive and the negative class, but with different memberships. In this way, we expect the new fuzzy Support Vector machine to have greater generalization ability, while preserving the merit of being insensitive to outliers, like the fuzzy Support Vector machine (SVM) proposed in previous work. We reformulate this two-group classification problem as a quadratic programming problem. Empirical tests on three public datasets show that it can achieve better discriminatory power than the standard Support Vector machine and the earlier fuzzy Support Vector machine if an appropriate kernel and membership-generation method are chosen.
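
    Since performance hinges on the membership-generation method, here is one common choice, membership decaying with distance from the class centre; this is purely illustrative and not necessarily the paper's prescription:

```python
# A common membership-generation heuristic: membership decays linearly with
# distance from the class centre, mapped into (0, 1]. Illustrative only.
import numpy as np

def center_memberships(X, eps=1e-6):
    """Assign each sample a membership based on distance to the class centre."""
    center = X.mean(axis=0)
    d = np.linalg.norm(X - center, axis=1)
    return 1.0 - d / (d.max() + eps)

rng = np.random.default_rng(0)
good = rng.normal(size=(50, 4))        # toy "good creditor" samples
m = center_memberships(good)
print(m.min(), m.max())                # outlying samples get low membership
```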