Decomposition Algorithms

Clint Scovel - One of the best experts on this subject based on the ideXlab platform.

  • Polynomial-Time Decomposition Algorithms for Support Vector Machines
    Machine Learning, 2003
    Co-Authors: Don Hush, Clint Scovel
    Abstract:

    This paper studies the convergence properties of a general class of Decomposition Algorithms for support vector machines (SVMs). We provide a model algorithm for Decomposition, and prove necessary and sufficient conditions for stepwise improvement of this algorithm. We introduce a simple “rate certifying” condition and prove a polynomial-time bound on the rate of convergence of the model algorithm when it satisfies this condition. Although it is not clear that existing SVM Algorithms satisfy this condition, we provide a version of the model algorithm that does. For this algorithm we show that when the slack multiplier C satisfies √(1/2) ≤ C ≤ mL, where m is the number of samples and L is a matrix norm, then it takes no more than 4LC²m⁴/ε iterations to drive the criterion to within ε of its optimum.
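
As a concrete illustration of the stepwise improvement that such Decomposition methods perform, the sketch below solves the standard two-variable subproblem of the SVM dual in closed form. It is a minimal sketch of the generic scheme, not the rate-certifying variant analysed in the paper; the function name and the gradient bookkeeping are illustrative.

```python
import numpy as np

def two_variable_update(alpha, grad, y, K, C, i, j):
    """One stepwise-improvement update over the working set {i, j}.

    Dual objective (maximised): W(a) = sum_k a_k - 0.5 * sum_kl a_k a_l y_k y_l K_kl,
    subject to 0 <= a_k <= C and sum_k a_k y_k = 0.  The move
    a_i += y_i * d, a_j -= y_j * d keeps the equality constraint satisfied.
    `grad` holds grad_k = 1 - y_k * sum_l a_l y_l K_kl and is updated in place.
    """
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]        # curvature along the direction
    if eta <= 1e-12:                               # (near-)flat direction: skip this pair
        return 0.0
    d = (y[i] * grad[i] - y[j] * grad[j]) / eta    # unconstrained optimal step

    # Clip d so that both alpha_i and alpha_j stay inside the box [0, C].
    lo_i, hi_i = (-alpha[i], C - alpha[i]) if y[i] > 0 else (alpha[i] - C, alpha[i])
    lo_j, hi_j = (alpha[j] - C, alpha[j]) if y[j] > 0 else (-alpha[j], C - alpha[j])
    d = float(np.clip(d, max(lo_i, lo_j), min(hi_i, hi_j)))

    alpha[i] += y[i] * d
    alpha[j] -= y[j] * d
    grad -= y * d * (K[:, i] - K[:, j])            # cheap rank-two gradient update
    return d
```

Keeping the full dual gradient up to date costs only two kernel columns per step, which is what makes decomposition practical when m is large.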

Pavel Laskov - One of the best experts on this subject based on the ideXlab platform.

  • Feasible Direction Decomposition Algorithms for Training Support Vector Machines
    Machine Learning, 2002
    Co-Authors: Pavel Laskov
    Abstract:

    The article presents a general view of a class of Decomposition Algorithms for training Support Vector Machines (SVM) which are motivated by the method of feasible directions. The first such algorithm for the pattern recognition SVM was proposed by Joachims (1999, in Schölkopf et al. (Eds.), Advances in Kernel Methods - Support Vector Learning, pp. 185–208, MIT Press). Its extension to the regression SVM, the maximal inconsistency algorithm, was recently presented by the author (Laskov, 2000, in Solla, Leen, & Müller (Eds.), Advances in Neural Information Processing Systems 12, pp. 484–490, MIT Press). A detailed account of both Algorithms is carried out, complemented by a theoretical investigation of the relationship between the two Algorithms. It is proved that the two Algorithms are equivalent for the pattern recognition SVM, and the feasible direction interpretation of the maximal inconsistency algorithm is given for the regression SVM. The experimental results demonstrate an order of magnitude decrease of training time in comparison with training without Decomposition, and, most importantly, provide experimental evidence of the linear convergence rate of the feasible direction Decomposition Algorithms.
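
To make the feasible-direction viewpoint concrete, here is a hedged sketch of the maximal-violating-pair working-set selection rule that this family of methods uses for the pattern recognition SVM. It illustrates the idea discussed above rather than reproducing the maximal inconsistency algorithm for the regression SVM, and it reuses the gradient convention of the SVM sketch earlier in this section.

```python
import numpy as np

def maximal_violating_pair(alpha, grad, y, C, tol=1e-3):
    """Select the working set (i, j) that most violates the KKT conditions.

    grad_k = 1 - y_k * sum_l alpha_l y_l K_kl, as in the earlier sketch.
    Returns None when the KKT conditions hold to within `tol`, i.e. when no
    sufficiently improving feasible direction of this form exists.
    """
    up = ((y > 0) & (alpha < C)) | ((y < 0) & (alpha > 0))    # alpha_i may move "up"
    low = ((y > 0) & (alpha > 0)) | ((y < 0) & (alpha < C))   # alpha_j may move "down"
    if not up.any() or not low.any():
        return None
    yg = y * grad
    i = int(np.argmax(np.where(up, yg, -np.inf)))
    j = int(np.argmin(np.where(low, yg, np.inf)))
    if yg[i] - yg[j] <= tol:
        return None
    return i, j
```

Combined with a two-variable update such as the one sketched earlier, repeatedly selecting and optimising such pairs gives the kind of decomposition training loop discussed above.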

O. B. Widlund - One of the best experts on this subject based on the ideXlab platform.

  • On the Design of Small Coarse Spaces for Domain Decomposition Algorithms
    SIAM Journal on Scientific Computing, 2017
    Co-Authors: Clark R. Dohrmann, O. B. Widlund
    Abstract:

    Methods are presented for automatically constructing coarse spaces of low dimension for domain Decomposition Algorithms. These constructions use equivalence classes of nodes on the interface between the subdomains into which the domain of a given elliptic problem has been subdivided, e.g., by a mesh partitioner such as METIS; these equivalence classes already play a central role in the design, analysis, and programming of many domain Decomposition Algorithms. The coarse space elements are well defined even for irregular subdomains, are continuous, and are well suited for use in two-level or multilevel preconditioners such as overlapping Schwarz Algorithms. An analysis for scalar elliptic and linear elasticity problems reveals that significant reductions in the coarse space dimension can be achieved while not sacrificing the favorable condition number estimates for larger coarse spaces previously developed. These estimates depend primarily on the Lipschitz parameters of the subdomains. Numerical examples f...
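
The construction starts from equivalence classes of interface nodes, grouped by the exact set of subdomains that share them. A minimal sketch of that grouping step is given below; it assumes a precomputed node-to-subdomain map (for instance derived from a METIS element partition), and the function and variable names are illustrative.

```python
from collections import defaultdict

def interface_equivalence_classes(node_to_subdomains):
    """Group interface nodes by the set of subdomains whose closures contain them.

    Nodes shared by exactly two subdomains form faces (in 3D); nodes shared by
    three or more subdomains form edges and vertices. These classes are the
    building blocks from which coarse basis functions are assembled.
    """
    classes = defaultdict(list)
    for node, subs in node_to_subdomains.items():
        if len(subs) >= 2:                  # nodes interior to one subdomain are skipped
            classes[frozenset(subs)].append(node)
    return dict(classes)

# Tiny illustrative input (hypothetical numbering): node 7 is shared by four subdomains.
example = {1: {0}, 2: {0, 1}, 3: {0, 1}, 5: {0, 2}, 7: {0, 1, 2, 3}}
print(interface_equivalence_classes(example))
```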

  • An Adaptive Choice of Primal Constraints for BDDC Domain Decomposition Algorithms
    Electronic Transactions on Numerical Analysis, 2016
    Co-Authors: Juan G Calvo, O. B. Widlund
    Abstract:

    An adaptive choice based on parallel sums for the primal space of BDDC [1] deluxe methods [2] is analyzed. The primal constraints of a BDDC algorithm provide the global, coarse part of such a preconditioner and are of crucial importance for obtaining rapid convergence of these preconditioned conjugate gradient methods for the case of many subdomains. For problems in three dimensions, there is a need to develop Algorithms and results for equivalence classes with three or more elements, e.g., subdomain edges. For this purpose, parallel sums for general equivalence classes are considered. The use of parallel sums for equivalence classes with two elements (subdomain faces) has proven very successful; see [3]. An upper bound on the square of the norm of a jump operator P_D acting on the elements in a product space related to the subdomains is derived; it is known that such a bound provides an estimate of the condition number of the BDDC algorithm; see [4]. This bound is given in terms of parallel sums of single Schur complements and sums of other Schur complements. Hence, generalized eigenvalue problems with parallel sums related to the faces and edges of the subdomains are formulated. A few eigenvectors associated with the smallest eigenvalues are selected and they generate a primal constraint. These generalized eigenvalue problems are defined in terms of the relevant Schur complements and Schur complements of these Schur complements associated with a minimal energy extension, e.g., from a subdomain edge of a three-dimensional finite element problem. Numerical results for elliptic problems verify the performance of the algorithm, using a series of experiments with regular subdomains as well as subdomains generated by a METIS mesh partitioner. There is also fast convergence for problems with a quite irregular coefficient inside the subdomains.
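
The two ingredients named in the abstract, parallel sums of Schur complements and a generalized eigenvalue problem whose smallest eigenvectors generate the primal constraints, can be sketched as follows. This only shows the mechanics of the selection step under simplifying assumptions; the matrices entering the actual eigenproblems (Schur complements of Schur complements associated with minimal energy extensions) are more elaborate.

```python
import numpy as np
from scipy.linalg import eigh, pinvh

def parallel_sum(S1, S2):
    """Parallel sum S1 : S2 = S1 (S1 + S2)^+ S2 of two symmetric positive
    semidefinite matrices (a pseudo-inverse is used for safety)."""
    return S1 @ pinvh(S1 + S2) @ S2

def select_primal_vectors(S_par, S_sum, k):
    """Illustrative mechanics of the adaptive selection step: solve the
    generalized eigenvalue problem  S_par v = lambda S_sum v  and keep the k
    eigenvectors belonging to the smallest eigenvalues; these would then be
    turned into primal constraints.  S_sum is assumed symmetric positive
    definite here."""
    lam, V = eigh(S_par, S_sum)      # eigenvalues returned in ascending order
    return lam[:k], V[:, :k]
```

Note that scipy.linalg.eigh requires its second argument to be positive definite, hence the assumption stated in the docstring.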

  • Domain Decomposition Algorithms with Small Overlap
    2015
    Co-Authors: Maksymilian Dryja, O. B. Widlund
    Abstract:

    Numerical experiments have shown that two-level Schwarz methods often perform very well even if the overlap between neighboring subregions is quite small. This is true to an even greater extent for a related algorithm, due to Barry Smith, where a Schwarz algorithm is applied to the reduced linear system of equations that remains after the variables interior to the subregions have been eliminated. In this paper, a supporting theory is developed.
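
For orientation, here is a dense, illustrative sketch of how a two-level additive Schwarz preconditioner of this kind is applied to a residual. The matrix A, the coarse restriction R0, and the overlapping index sets local_dofs stand for whatever the user has assembled; dense solves are used only to keep the sketch short.

```python
import numpy as np

def two_level_schwarz_apply(A, r, local_dofs, R0):
    """Apply a two-level additive Schwarz preconditioner to a residual r:

        M^{-1} r = R0^T A0^{-1} R0 r + sum_i Ri^T Ai^{-1} Ri r,

    where each Ri restricts to the overlapping subdomain whose indices are
    listed in local_dofs[i], Ai = Ri A Ri^T, R0 is a coarse restriction matrix
    and A0 = R0 A R0^T.  In practice the local and coarse matrices would be
    factored once and reused; dense solves keep this sketch short.
    """
    A0 = R0 @ A @ R0.T                                  # coarse problem
    z = R0.T @ np.linalg.solve(A0, R0 @ r)              # coarse correction
    for dofs in local_dofs:                             # local overlapping solves
        Ai = A[np.ix_(dofs, dofs)]
        z[dofs] += np.linalg.solve(Ai, r[dofs])
    return z
```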

  • Domain Decomposition Algorithms for Indefinite Elliptic Problems
    2015
    Co-Authors: Xiaochuan Cai, O. B. Widlund
    Abstract:

    Iterative methods for linear systems of algebraic equations arising from the finite element discretization of nonsymmetric and indefinite elliptic problems are considered. Methods previously known to work well for positive definite, symmetric problems are extended to certain nonsymmetric problems, which can also have some eigenvalues in the left half plane. This paper presents an additive Schwarz method applied to linear, second order, symmetric or nonsymmetric, indefinite elliptic boundary value problems in two and three dimensions. An alternative linear system, which has the same solution as the original problem, is derived and this system is then solved by using GMRES, an iterative method of conjugate gradient type. In each iteration step, a coarse mesh finite element problem and a number of local problems are solved on small, overlapping subregions into which the original region is subdivided. The rate of convergence is shown to be independent of the number of degrees of freedom and the number of local...
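
A small self-contained sketch of the overall solver structure described above is given below: a one-dimensional indefinite model problem, an overlapping Schwarz preconditioner with an additive coarse correction, and GMRES from SciPy. The discretization, the coarse space, and all parameter values are illustrative and are not those analysed in the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Illustrative 1D indefinite model problem -u'' - sigma*u = f on (0, 1),
# discretized by central finite differences; sigma is chosen so that a few
# eigenvalues of the operator lie in the left half plane.
n, sigma = 200, 60.0
h = 1.0 / (n + 1)
A = (sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
     - sigma * sp.identity(n)).tocsc()
f = np.ones(n)

# Overlapping subdomains: blocks of about 25 unknowns extended by 2 mesh points.
blocks = [np.arange(max(0, s - 2), min(n, s + 27)) for s in range(0, n, 25)]
local_lu = [spla.splu(A[b, :][:, b].tocsc()) for b in blocks]

# Coarse space: piecewise-linear (hat) functions on a coarser mesh.
nc = 24
xf = np.arange(1, n + 1) * h
xc = np.linspace(0.0, 1.0, nc + 2)
P = np.maximum(1.0 - np.abs(xf[:, None] - xc[None, 1:-1]) / (xc[1] - xc[0]), 0.0)
A0 = P.T @ (A.toarray() @ P)

def schwarz(r):
    """Additive two-level Schwarz preconditioner: coarse solve plus local solves."""
    z = P @ np.linalg.solve(A0, P.T @ r)
    for b, lu in zip(blocks, local_lu):
        z[b] += lu.solve(r[b])
    return z

M = spla.LinearOperator((n, n), matvec=schwarz)
u, info = spla.gmres(A, f, M=M, restart=50, maxiter=100)
print("gmres info:", info,
      "relative residual:", np.linalg.norm(f - A @ u) / np.linalg.norm(f))
```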

  • Domain Decomposition Algorithms for H(curl) problems
    2010
    Co-Authors: O. B. Widlund
    Abstract:

    In this talk, we will discuss recent progress on developing domain Decomposition Algorithms for problems formulated in H(curl) and approximated by low order edge elements. We focus on self-adjoint positive definite model problems and are, in particular, interested in developing Algorithms whose performance is insensitive to large changes in the material properties.

Cesar De Prada - One of the best experts on this subject based on the ideXlab platform.

  • Improving Scenario Decomposition Algorithms for Robust Nonlinear Model Predictive Control
    Computers & Chemical Engineering, 2015
    Co-Authors: Ruben Marti, Sergio Lucia, D Sarabia, Radoslav Paulen, Sebastian Engell, Cesar De Prada
    Abstract:

    This paper deals with the efficient computation of solutions of robust nonlinear model predictive control problems that are formulated using multi-stage stochastic programming via the generation of a scenario tree. Such a formulation makes it possible to consider explicitly the concept of recourse, which is inherent to any receding horizon approach, but it results in large-scale optimization problems. One possibility to solve these problems in an efficient manner is to decompose the large-scale optimization problem into several subproblems that are iteratively modified and repeatedly solved until a solution to the original problem is achieved. In this paper we review the most common methods used for such Decomposition and apply them to solve robust nonlinear model predictive control problems in a distributed fashion. We also propose a novel method to reduce the number of iterations of the coordination algorithm needed for the Decomposition methods to converge. The performance of the different approaches is evaluated in extensive simulation studies of two nonlinear case studies.
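
To illustrate the coordination loop that such scenario Decomposition methods iterate, here is a hedged, toy-sized sketch in the progressive hedging style: each scenario subproblem is solved (in closed form for the quadratic toy cost used here), the first-stage inputs are averaged to enforce non-anticipativity, and the multipliers are updated. The real robust NMPC subproblems are nonlinear programs handled by an NLP solver, all numbers below are illustrative, and the paper's proposed acceleration of the coordination step is not reproduced here.

```python
import numpy as np

# Toy problem: each scenario s has a scalar first-stage input u and the cost
# 0.5 * a_s * (u - t_s)**2; non-anticipativity requires all scenarios to agree
# on u.  The structure (solve subproblems -> average -> multiplier update) is
# the point, not the numbers.
a = np.array([1.0, 2.0, 4.0])        # scenario curvatures/weights (illustrative)
t = np.array([0.0, 1.0, 3.0])        # scenario-optimal inputs (illustrative)
rho = 1.0                            # penalty parameter of the coordinator
u = np.zeros(3)                      # per-scenario copies of the first-stage input
w = np.zeros(3)                      # multipliers enforcing non-anticipativity
u_bar = 0.0

for it in range(50):
    # Scenario subproblems: min_u 0.5*a_s*(u - t_s)^2 + w_s*u + 0.5*rho*(u - u_bar)^2
    u = (a * t - w + rho * u_bar) / (a + rho)
    u_bar_new = u.mean()             # consensus (non-anticipative) first-stage input
    w += rho * (u - u_bar_new)       # multiplier (price) update
    if abs(u_bar_new - u_bar) < 1e-9 and np.max(np.abs(u - u_bar_new)) < 1e-9:
        u_bar = u_bar_new
        break
    u_bar = u_bar_new

print(f"consensus first-stage input after {it + 1} iterations: {u_bar:.4f}")
```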