Cutting Plane Algorithm


The Experts below are selected from a list of 315 Experts worldwide ranked by ideXlab platform

Tomas Werner - One of the best experts on this subject based on the ideXlab platform.

  • High-arity interactions, polyhedral relaxations, and cutting plane algorithm for soft constraint optimisation (MAP-MRF)
    2008 IEEE Conference on Computer Vision and Pattern Recognition, 2008
    Co-Authors: Tomas Werner
    Abstract:

    The LP relaxation approach to soft constraint optimisation (i.e., MAP-MRF inference) has mostly been considered only for binary problems. We present its generalisation to n-ary problems, including a simple algorithm to optimise the LP bound, n-ary max-sum diffusion. As applications, we show that a hierarchy of gradually tighter polyhedral relaxations of MAP-MRF is obtained by adding zero interactions. We propose a cutting plane algorithm in which cuts correspond to adding zero interactions and the separation problem amounts to finding an unsatisfiable constraint satisfaction subproblem. Next, we show that certain high-arity interactions, e.g. certain global constraints, can be included in the framework in a principled way. Finally, we prove that n-ary max-sum diffusion finds the global optimum for n-ary supermodular problems.
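
As a concrete illustration of the bound-optimising subroutine, here is the pairwise (binary) special case of max-sum diffusion — a sketch with our own data layout and stopping rule, not the paper's n-ary formulation. Each sweep reparametrizes the problem so every unary potential agrees with the max-marginals of its incident pairwise potentials, and the LP upper bound is non-increasing:

```python
import numpy as np

def max_sum_diffusion(unary, pairwise, edges, iters=100):
    """Pairwise max-sum diffusion sketch.

    unary:    dict node -> 1-D array of unary potentials
    pairwise: dict (i, j) -> 2-D array of pairwise potentials
    edges:    list of (i, j) keys into `pairwise`

    Each update shifts potential mass between a node and an incident
    edge so that the unary equals the edge's max-marginal; this is a
    reparametrization, so the represented problem never changes.
    """
    unary = {i: u.astype(float).copy() for i, u in unary.items()}
    pairwise = {e: p.astype(float).copy() for e, p in pairwise.items()}
    for _ in range(iters):
        for (i, j) in edges:
            p = pairwise[(i, j)]
            # equalize node i with edge (i, j)
            delta = (p.max(axis=1) - unary[i]) / 2.0
            unary[i] += delta
            p -= delta[:, None]
            # equalize node j with edge (i, j)
            delta = (p.max(axis=0) - unary[j]) / 2.0
            unary[j] += delta
            p -= delta[None, :]
    # upper bound on the max-sum optimum: sum of independent maxima
    bound = sum(u.max() for u in unary.values()) \
          + sum(p.max() for p in pairwise.values())
    return bound, unary, pairwise
```

On a tiny supermodular two-node problem the bound coincides with the true maximum, as the paper's final result predicts for the supermodular case.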

Soren Sonnenburg - One of the best experts on this subject based on the ideXlab platform.

  • Optimized cutting plane algorithm for large-scale risk minimization
    Journal of Machine Learning Research, 2009
    Co-Authors: Vojtěch Franc, Soren Sonnenburg
    Abstract:

    We have developed an optimized cutting plane algorithm (OCA) for solving large-scale risk minimization problems. We prove that the number of iterations OCA requires to converge to an ε-precise solution is approximately linear in the sample size. We also derive OCAS, an OCA-based linear binary Support Vector Machine (SVM) solver, and OCAM, a linear multi-class SVM solver. In an extensive empirical evaluation we show that OCAS outperforms current state-of-the-art SVM solvers such as SVMlight, SVMperf and BMRM, achieving a speedup factor of more than 1,200 over SVMlight on some data sets and of 29 over SVMperf, while obtaining the same precise support vector solution. Even in the early optimization steps, OCAS often converges faster than SGD and Pegasos, the approximative methods currently prevailing in this domain. In addition, our proposed linear multi-class SVM solver, OCAM, achieves speedups of up to a factor of 10 compared to SVMmulti-class. Finally, we use OCAS and OCAM in two real-world applications: human acceptor splice site detection and malware detection. By effectively parallelizing OCAS, we achieve state-of-the-art results on an acceptor splice site recognition problem, made possible only by learning from all 50 million available examples in a 12-million-dimensional feature space. Source code, data sets and scripts to reproduce the experiments are available at http://cmp.felk.cvut.cz/~xfrancv/ocas/html/ .
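
The basic cutting-plane scheme that OCA accelerates (essentially the bundle iteration of BMRM, which the abstract compares against) can be sketched as follows. All names are ours, and the master problem is solved here with a simple projected-gradient loop on its dual simplex — a toy stand-in for the exact solver a real implementation would use:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def hinge_risk(w, X, y):
    """Average hinge loss and one subgradient at w."""
    margins = 1.0 - y * (X @ w)
    active = margins > 0
    risk = np.maximum(margins, 0).mean()
    grad = -(y[active, None] * X[active]).sum(axis=0) / len(y)
    return risk, grad

def cutting_plane_svm(X, y, lam=0.1, iters=30):
    """Basic cutting-plane method for min_w lam/2 ||w||^2 + hinge risk.

    Each iteration adds one plane R(w) >= a.w + b (a subgradient of the
    risk at the current w) and re-solves the master problem, whose dual
    is a small concave quadratic over the simplex.
    """
    w = np.zeros(X.shape[1])
    A, b = [], []
    for _ in range(iters):
        risk, a = hinge_risk(w, X, y)
        A.append(a)
        b.append(risk - a @ w)
        Am, bm = np.array(A), np.array(b)
        # dual master: max_{alpha in simplex} bm.alpha - ||Am'alpha||^2 / (2 lam)
        alpha = np.full(len(b), 1.0 / len(b))
        K = Am @ Am.T / lam
        step = 1.0 / (np.linalg.norm(K) + 1e-12)
        for _ in range(200):
            alpha = project_simplex(alpha + step * (bm - K @ alpha))
        w = -(Am.T @ alpha) / lam  # recover the primal master solution
    return w
```

A usage sketch: on a small linearly separable set, `cutting_plane_svm(X, y)` returns a weight vector that separates the two classes after a handful of planes.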

  • Optimized cutting plane algorithm for support vector machines
    International Conference on Machine Learning, 2008
    Co-Authors: Vojtěch Franc, Soren Sonnenburg
    Abstract:

    We have developed a new linear Support Vector Machine (SVM) training algorithm called OCAS. Its computational effort scales linearly with the sample size. In an extensive empirical evaluation, OCAS significantly outperforms current state-of-the-art SVM solvers such as SVMlight, SVMperf and BMRM, achieving speedups of over 1,000 over SVMlight and 20 over SVMperf on some datasets, while obtaining the same precise support vector solution. Even in the early optimization steps, OCAS often converges faster than SGD and Pegasos, the approximative methods prevailing in this domain so far. By effectively parallelizing OCAS, we were able to train on a dataset of 15 million examples (about 32 GB in size) in just 671 seconds; a competing string kernel SVM required 97,484 seconds to train on 10 million examples sub-sampled from this dataset.

Xing-yu Wang - One of the best experts on this subject based on the ideXlab platform.

  • Learning large margin nearest neighbor classifiers via cutting plane algorithm
    2010 International Conference on Machine Learning and Cybernetics, 2010
    Co-Authors: Xiang-yun Qing, Peng Ding, Xing-yu Wang
    Abstract:

    The performance of the popular and classical k-nearest neighbor classifier depends on the distance metric. Large margin nearest neighbor classifiers trained by gradient-based optimization are prone to local minima. In this paper, we present a Mahalanobis metric learning method based on a cutting plane algorithm, which greatly reduces the number of constraints in the underlying semidefinite programming problem. Experimental results on the ITC I data sets show that our method achieves promising speedups over the gradient-based method at similar training and test error rates.
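
The constraint reduction rests on a separation oracle: rather than imposing every large-margin triplet constraint in the semidefinite program up front, each round adds only the currently violated ones. A toy oracle under a given Mahalanobis matrix M (names are ours, and a real implementation would restrict j to a few target neighbors rather than all same-class points):

```python
import numpy as np

def mahalanobis_sq(M, u, v):
    """Squared Mahalanobis distance (u - v)' M (u - v)."""
    d = u - v
    return float(d @ M @ d)

def violated_triplets(M, X, y):
    """Return triplets (i, target j, impostor l) that violate the
    large-margin constraint d_M(i, l) >= d_M(i, j) + 1, together with
    the violation amount.  A cutting-plane solver adds only these
    constraints to the SDP and re-solves, instead of enumerating all
    O(n^3) triplets at once."""
    out = []
    n = len(X)
    for i in range(n):
        for j in range(n):
            if j == i or y[j] != y[i]:
                continue  # j must be a same-class target neighbor
            dij = mahalanobis_sq(M, X[i], X[j])
            for l in range(n):
                if y[l] == y[i]:
                    continue  # l must be a differently-labeled impostor
                dil = mahalanobis_sq(M, X[i], X[l])
                if dil < dij + 1.0:
                    out.append((i, j, l, dij + 1.0 - dil))
    return out
```

With well-separated classes the oracle returns nothing and the current metric is feasible; with overlapping classes it returns the cuts to add.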

Changshui Zhang - One of the best experts on this subject based on the ideXlab platform.

  • A faster cutting plane algorithm with accelerated line search for linear SVM
    Pattern Recognition, 2017
    Co-Authors: Changshui Zhang
    Abstract:

    Highlights: A faster cutting plane algorithm with accelerated line search is proposed for linear SVM. A novel linear-time line search solver replaces the existing O(m log m) strategy. An optimized explicit piecewise-linear function-finding algorithm for multiclass linear SVM is derived. The total SVM training time is provably reduced, and experiments demonstrate the effectiveness of the proposed algorithms.

    The cutting plane algorithm (CPA) is a generalization of the iterative first-order gradient method in which the objective function is successively approximated by supporting hyperplanes. CPA has been tailored to regularized loss minimization in machine learning by exploiting the regularization structure. In particular, for linear Support Vector Machines (SVMs), embedding a line search procedure effectively remedies fluctuations in the function value and speeds up convergence in practice. However, the existing line search strategy, based on sorting, takes O(m log m) time. In this paper, we propose a more efficient line search solver that takes only linear time. It extends to multiclass SVM, for which an optimized explicit piecewise-linear function-finding algorithm is derived. The total SVM training time is proved to be reduced, and experiments consistently confirm the effectiveness of the proposed algorithms.
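
The line search being accelerated can be made concrete. Along w + k·d, the SVM objective is a convex piecewise-quadratic function of k, and the standard approach sorts the m hinge breakpoints and sweeps them for the point where the derivative crosses zero — the O(m log m) sorting step is what this paper replaces with linear-time selection. A sketch of the sorting-based baseline (names and layout are ours):

```python
import numpy as np

def svm_line_search(w, d, X, y, lam):
    """Exact minimizer of g(k) = lam/2 ||w + k d||^2 + mean hinge loss.

    Standard O(m log m) version: sort the breakpoints where individual
    hinge terms switch on/off, then sweep the nondecreasing piecewise-
    linear derivative until it crosses zero.
    """
    m = len(y)
    c = y * (X @ w)          # margins at k = 0
    e = y * (X @ d)          # margin change rates along d
    wd, dd = w @ d, d @ d
    # breakpoints k_i where hinge term i switches: c_i + k e_i = 1
    mask = e != 0
    ks = (1.0 - c[mask]) / e[mask]
    js = np.abs(e[mask])     # derivative jump at each breakpoint
    order = np.argsort(ks)   # the O(m log m) step the paper removes
    ks, js = ks[order], js[order]
    # hinge-part slope as k -> -inf: terms with e_i > 0 are active
    slope_hinge = -e[e > 0].sum()
    def kstar(s):
        # zero of lam*(wd + k*dd) + s/m on the current linear piece
        return -(lam * wd + s / m) / (lam * dd)
    prev = -np.inf
    for kb, jump in zip(ks, js):
        k = kstar(slope_hinge)
        if prev <= k <= kb:          # zero lies inside this segment
            return k
        slope_hinge += jump
        if kstar(slope_hinge) < kb:  # derivative jumped past zero
            return kb
        prev = kb
    return kstar(slope_hinge)        # zero lies in the last segment
```

For example, with one training point the objective lam/2·k² + max(0, 1 − k) is minimized at k = 1/lam when lam ≥ 1, which the sweep recovers exactly.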

  • Efficient maximum margin clustering via cutting plane algorithm
    SIAM International Conference on Data Mining, 2008
    Co-Authors: Bin Zhao, Fei Wang, Changshui Zhang
    Abstract:

    Maximum margin clustering (MMC) is a recently proposed clustering method which extends support vector machine theory to the unsupervised setting and aims at finding the maximum margin hyperplane separating data from different classes. Traditionally, MMC is formulated as a non-convex integer programming problem and is thus difficult to solve. Several methods have been proposed in the literature to solve the MMC problem based on either semidefinite programming or alternating optimization. However, these methods are too time-demanding on large-scale datasets and therefore unsuitable for real-world applications. In this paper, we propose the cutting plane maximum margin clustering (CPMMC) algorithm to solve the MMC problem. Specifically, we construct a nested sequence of successively tighter relaxations of the original MMC problem, each of which can be solved efficiently using the constrained concave-convex procedure (CCCP). Moreover, we prove that the CPMMC algorithm takes O(sn) time to converge with guaranteed accuracy, where n is the total number of samples in the dataset and s is the average number of non-zero features, i.e. the sparsity. Experimental evaluations on several real-world datasets show that CPMMC outperforms existing MMC methods in both efficiency and accuracy.
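
The inner solver named here, the constrained concave-convex procedure, can be illustrated on a one-dimensional toy problem of ours (not the paper's clustering objective): write a non-convex objective as a convex function minus a convex function, linearize the concave part at the current point, and solve the resulting convex problem in closed form:

```python
def cccp_double_well(x0, iters=60):
    """Minimal CCCP illustration on f(x) = x^4 - 2x^2, a convex term x^4
    minus a convex term 2x^2 (minima at x = +1 and x = -1).

    Each step replaces -2x^2 by its tangent at x_t, giving the convex
    subproblem argmin_x x^4 - 4*x_t*x, whose stationarity condition
    4x^3 - 4*x_t = 0 has the closed-form solution x = cbrt(x_t).
    The objective value is non-increasing across iterations.
    """
    x = x0
    for _ in range(iters):
        # signed cube root of x_t
        x = (abs(x) ** (1.0 / 3.0)) * (1.0 if x >= 0 else -1.0)
    return x
```

Starting from any non-zero point, the iterates converge to the nearest well, mirroring how CPMMC's CCCP inner loop monotonically decreases each relaxed objective without guaranteeing a global optimum.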