Regularization

The experts below are selected from a list of 168,045 experts worldwide, ranked by the ideXlab platform.

René Vidal - One of the best experts on this subject based on the ideXlab platform.

  • Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace Clustering
    Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016
    Co-Authors: Chong You, Chun-guang Li, Daniel P. Robinson, René Vidal
    Abstract:

    State-of-the-art subspace clustering methods are based on expressing each data point as a linear combination of other data points while regularizing the matrix of coefficients with $\ell_1$, $\ell_2$ or nuclear norms. $\ell_1$ regularization is guaranteed to give a subspace-preserving affinity (i.e., there are no connections between points from different subspaces) under broad theoretical conditions, but the clusters may not be connected. $\ell_2$ and nuclear norm regularization often improve connectivity, but give a subspace-preserving affinity only for independent subspaces. Mixed $\ell_1$, $\ell_2$ and nuclear norm regularizations offer a balance between the subspace-preserving and connectedness properties, but this comes at the cost of increased computational complexity. This paper studies the geometry of the elastic net regularizer (a mixture of the $\ell_1$ and $\ell_2$ norms) and uses it to derive a provably correct and scalable active set method for finding the optimal coefficients. Our geometric analysis also provides a theoretical justification and a geometric interpretation for the balance between the connectedness (due to $\ell_2$ regularization) and subspace-preserving (due to $\ell_1$ regularization) properties for elastic net subspace clustering. Our experiments show that the proposed active set method not only achieves state-of-the-art clustering performance, but also efficiently handles large-scale datasets.
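
    The self-expressiveness model above is easy to prototype. Below is a minimal sketch, not the authors' implementation: each point is coded as an elastic-net-regularized combination of the other points via a plain proximal gradient loop (standing in for the paper's oracle-based active set solver), and the coefficient magnitudes form an affinity for spectral clustering. The synthetic data and the parameters lam and gamma are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import SpectralClustering

        def elastic_net_code(X, i, lam=0.9, gamma=50.0, n_iter=300):
            """Solve min_c lam*||c||_1 + (1-lam)/2*||c||_2^2 + gamma/2*||x_i - X c||_2^2
            with c_i constrained to 0, by proximal gradient descent (a simple stand-in
            for the oracle-based active set solver described in the abstract)."""
            x = X[:, i]
            c = np.zeros(X.shape[1])
            step = 1.0 / (gamma * np.linalg.norm(X, 2) ** 2)    # 1 / Lipschitz constant
            for _ in range(n_iter):
                v = c - step * gamma * (X.T @ (X @ c - x))      # gradient step on the fit term
                # prox of lam*||.||_1 + (1-lam)/2*||.||_2^2: soft-threshold, then shrink
                c = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
                c /= 1.0 + step * (1.0 - lam)
                c[i] = 0.0                                      # forbid self-expression
            return c

        # Toy data: 40 points from each of two 3-dimensional subspaces of R^20
        rng = np.random.default_rng(0)
        bases = [np.linalg.qr(rng.standard_normal((20, 3)))[0] for _ in range(2)]
        X = np.hstack([B @ rng.standard_normal((3, 40)) for B in bases])
        X /= np.linalg.norm(X, axis=0)                          # unit-norm columns

        C = np.column_stack([elastic_net_code(X, i) for i in range(X.shape[1])])
        W = np.abs(C) + np.abs(C).T                             # symmetric affinity matrix
        labels = SpectralClustering(n_clusters=2, affinity="precomputed").fit_predict(W)

    With lam close to 1 the code behaves like sparse ($\ell_1$) subspace clustering, which is subspace-preserving but may leave clusters disconnected; lowering lam adds the $\ell_2$ ingredient that densifies connections within each subspace, which is the trade-off the paper quantifies geometrically.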

Chong You - One of the best experts on this subject based on the ideXlab platform.

  • Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace Clustering
    Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016
    Co-Authors: Chong You, Chun-guang Li, Daniel P. Robinson, René Vidal
    Abstract:

    State-of-the-art subspace clustering methods are based on expressing each data point as a linear combination of other data points while regularizing the matrix of coefficients with $\ell_1$, $\ell_2$ or nuclear norms. $\ell_1$ regularization is guaranteed to give a subspace-preserving affinity (i.e., there are no connections between points from different subspaces) under broad theoretical conditions, but the clusters may not be connected. $\ell_2$ and nuclear norm regularization often improve connectivity, but give a subspace-preserving affinity only for independent subspaces. Mixed $\ell_1$, $\ell_2$ and nuclear norm regularizations offer a balance between the subspace-preserving and connectedness properties, but this comes at the cost of increased computational complexity. This paper studies the geometry of the elastic net regularizer (a mixture of the $\ell_1$ and $\ell_2$ norms) and uses it to derive a provably correct and scalable active set method for finding the optimal coefficients. Our geometric analysis also provides a theoretical justification and a geometric interpretation for the balance between the connectedness (due to $\ell_2$ regularization) and subspace-preserving (due to $\ell_1$ regularization) properties for elastic net subspace clustering. Our experiments show that the proposed active set method not only achieves state-of-the-art clustering performance, but also efficiently handles large-scale datasets.
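
    The scalability in the title comes from the active set strategy: solve a small restricted problem and grow the support only where an optimality ("oracle") check fails. The loop below is a schematic of that idea for the same elastic net objective as in the sketch above; the restricted solver, the violation tolerance, and the choice to add at most a few violators per round are assumptions, not the paper's exact algorithm.

        import numpy as np

        def solve_restricted(XT, x, lam=0.9, gamma=50.0, n_iter=300):
            """Proximal gradient solve of the elastic net code over the columns in XT only."""
            c = np.zeros(XT.shape[1])
            step = 1.0 / (gamma * np.linalg.norm(XT, 2) ** 2)
            for _ in range(n_iter):
                v = c - step * gamma * (XT.T @ (XT @ c - x))
                c = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)
                c /= 1.0 + step * (1.0 - lam)
            return c

        def active_set_code(X, i, lam=0.9, gamma=50.0, max_rounds=10, tol=1e-6):
            """Grow the active set T only where the KKT check detects a violation."""
            x = X[:, i]
            candidates = np.array([j for j in range(X.shape[1]) if j != i])
            T = [int(candidates[np.argmax(np.abs(X[:, candidates].T @ x))])]   # seed: most correlated point
            c_full = np.zeros(X.shape[1])
            for _ in range(max_rounds):
                cT = solve_restricted(X[:, T], x, lam, gamma)
                c_full[:] = 0.0
                c_full[T] = cT
                r = x - X[:, T] @ cT                        # residual of the restricted fit
                # KKT check: a coefficient held at zero is optimal iff gamma*|x_j^T r| <= lam
                scores = gamma * np.abs(X[:, candidates].T @ r)
                scores[np.isin(candidates, T)] = 0.0        # already active, nothing to add
                worst = candidates[np.argsort(-scores)[:8]]
                new = [int(j) for j in worst
                       if j not in T and gamma * abs(X[:, j] @ r) > lam + tol]
                if not new:
                    break                                   # the oracle certifies optimality
                T.extend(new)
            return c_full

    When the data are well separated, the restricted problems stay small, which is what lets the method handle datasets where solving over all columns at once would be prohibitive.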

Walid Hachem - One of the best experts on this subject based on the ideXlab platform.

  • Snake: A Stochastic Proximal Gradient Algorithm for Regularized Problems over Large Graphs
    IEEE Transactions on Automatic Control, 2019
    Co-Authors: Adil Salim, Pascal Bianchi, Walid Hachem
    Abstract:

    A regularized optimization problem over a large unstructured graph is studied, where the regularization term is tied to the graph geometry. Typical examples include the total variation and Laplacian regularizations over the graph. When the graph is a simple path without loops, efficient off-the-shelf algorithms can be used. However, when the graph is large and unstructured, such algorithms cannot be used directly. In this paper, an algorithm, referred to as “Snake,” is proposed to solve such regularized problems over general graphs. The algorithm consists of properly selecting random simple paths in the graph and performing the proximal gradient algorithm over these simple paths. This algorithm is an instance of a new general stochastic proximal gradient algorithm whose convergence is proven. Applications to trend filtering and graph inpainting are provided, among others. Numerical experiments are conducted over large graphs.

  • Snake: A Stochastic Proximal Gradient Algorithm for Regularized Problems over Large Graphs
    arXiv: Optimization and Control, 2017
    Co-Authors: Adil Salim, Pascal Bianchi, Walid Hachem
    Abstract:

    A regularized optimization problem over a large unstructured graph is studied, where the regularization term is tied to the graph geometry. Typical examples include the total variation and Laplacian regularizations over the graph. When applying the proximal gradient algorithm to solve this problem, quite affordable methods exist for implementing the proximity operator (backward step) in the special case where the graph is a simple path without loops. In this paper, an algorithm, referred to as "Snake", is proposed to solve such regularized problems over general graphs by taking advantage of these fast methods. The algorithm consists of properly selecting random simple paths in the graph and performing the proximal gradient algorithm over these simple paths. This algorithm is an instance of a new general stochastic proximal gradient algorithm whose convergence is proven. Applications to trend filtering and graph inpainting are provided, among others. Numerical experiments are conducted over large graphs.
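
    To make the path idea concrete, here is a schematic of one such iteration for graph inpainting with the quadratic Laplacian regularizer: take a gradient step on the data-fit term, then apply the proximal operator of the regularizer restricted to the edges of a randomly sampled simple path, which for the Laplacian penalty is just a small tridiagonal solve. This is only an illustration of the mechanism under assumed parameters; the Snake algorithm of the papers also handles the non-smooth total variation regularizer and comes with step-size conditions and a convergence proof that this sketch does not reproduce.

        import numpy as np
        import networkx as nx

        def random_simple_path(G, length, rng):
            """Sample a simple path by a random walk that never revisits a node."""
            v = int(rng.integers(G.number_of_nodes()))
            path, visited = [v], {v}
            for _ in range(length - 1):
                nbrs = [u for u in G.neighbors(path[-1]) if u not in visited]
                if not nbrs:
                    break
                u = int(rng.choice(nbrs))
                path.append(u)
                visited.add(u)
            return path

        def path_laplacian_prox(v, t):
            """prox of (t/2) * sum_k (z_{k+1} - z_k)^2 at v: solve (I + t*L_path) z = v."""
            m = len(v)
            L = np.zeros((m, m))
            for k in range(m - 1):
                L[k, k] += 1.0; L[k + 1, k + 1] += 1.0
                L[k, k + 1] -= 1.0; L[k + 1, k] -= 1.0
            return np.linalg.solve(np.eye(m) + t * L, v)

        # Graph inpainting: a noisy signal observed on 30% of the nodes of a 20x20 grid graph
        G = nx.convert_node_labels_to_integers(nx.grid_2d_graph(20, 20))
        n = G.number_of_nodes()
        rng = np.random.default_rng(0)
        truth = np.sin(np.arange(n) / 40.0)
        observed = rng.random(n) < 0.3
        y = np.where(observed, truth + 0.1 * rng.standard_normal(n), 0.0)

        x, mu, step = np.zeros(n), 2.0, 0.5
        for _ in range(5000):
            x -= step * np.where(observed, x - y, 0.0)          # gradient step on the data fit
            path = random_simple_path(G, length=30, rng=rng)    # random simple path in G
            x[path] = path_laplacian_prox(x[path], step * mu)   # prox along that path only

    Each iteration touches only the nodes of one path, which is what keeps the per-step cost low on very large graphs.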

Roger Lewandowski - One of the best experts on this subject based on the ideXlab platform.

  • A High Accuracy Leray-Deconvolution Model of Turbulence and Its Limiting Behavior
    Analysis and Applications, 2008
    Co-Authors: William Layton, Roger Lewandowski
    Abstract:

    In 1934, J. Leray proposed a regularization of the Navier–Stokes equations whose limits were weak solutions of the Navier–Stokes equations. Recently, a modification of the Leray model, called the Leray-alpha model, has attracted interest for turbulent flow simulations. One common drawback of Leray-type regularizations is their low accuracy. Increasing the accuracy of a simulation based on a Leray regularization requires cutting the averaging radius, i.e. remeshing and resolving on finer meshes. This article analyzes a family of Leray-type models of arbitrarily high orders of accuracy for a fixed averaging radius. We establish the basic theory of the entire family, including limiting behavior as the averaging radius decreases to zero (a simple extension of results known for the Leray model). We also give a more technically interesting result on the limit as the order of the models increases with a fixed averaging radius. Because of this property, increasing the accuracy of the model is potentially cheaper than decreasing the averaging radius (or meshwidth), and high order models are doubly interesting.

  • A High Accuracy Leray-Deconvolution Model of Turbulence and Its Limiting Behavior
    arXiv: Mathematical Physics, 2007
    Co-Authors: William Layton, Roger Lewandowski
    Abstract:

    In 1934, J. Leray proposed a regularization of the Navier-Stokes equations whose limits were weak solutions of the NSE. Recently, a modification of the Leray model, called the Leray-alpha model, has attracted study for turbulent flow simulation. One common drawback of Leray-type regularizations is their low accuracy. Increasing the accuracy of a simulation based on a Leray regularization requires cutting the averaging radius, i.e., remeshing and resolving on finer meshes. This report analyzes a family of Leray-type models of arbitrarily high orders of accuracy for a fixed averaging radius. We establish the basic theory of the entire family, including limiting behavior as the averaging radius decreases to zero (a simple extension of results known for the Leray model). We also give a more technically interesting result on the limit as the order of the models increases with a fixed averaging radius. Because of this property, increasing the accuracy of the model is potentially cheaper than decreasing the averaging radius (or meshwidth), and high order models are doubly interesting.
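
    For orientation, the model family under discussion can be written compactly. The sketch below uses the standard Helmholtz filter and van Cittert deconvolution; the notation is an assumption based on common usage in this literature, not a quotation from the articles.

        % Leray regularization (1934): advect the velocity by a smoothed field \overline{u},
        % here obtained with the Helmholtz filter of averaging radius \alpha.
        \begin{aligned}
          & u_t + (\overline{u}\cdot\nabla)\,u - \nu\,\Delta u + \nabla p = f,
            \qquad \nabla\cdot u = 0, \\
          & -\alpha^{2}\,\Delta\overline{u} + \overline{u} = u .
        \end{aligned}

        % Leray-deconvolution model of order N: replace \overline{u} by D_N\overline{u},
        % where D_N is the van Cittert approximate inverse of the filter G : u \mapsto \overline{u}.
        % For smooth u, D_N\overline{u} = u + O(\alpha^{2N+2}), so the consistency error shrinks
        % as N grows while the averaging radius \alpha stays fixed.
        \begin{aligned}
          & u_t + \bigl(D_N\overline{u}\cdot\nabla\bigr)\,u - \nu\,\Delta u + \nabla p = f,
            \qquad \nabla\cdot u = 0, \\
          & D_N := \sum_{n=0}^{N} (I - G)^{n}.
        \end{aligned}

    Setting N = 0 gives D_0 = I and recovers the classical Leray model, which is the sense in which this is a family indexed by the order of accuracy.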

Taiji Suzuki - One of the best experts on this subject based on the ideXlab platform.

  • Stochastic Dual Coordinate Ascent with Alternating Direction Method of Multipliers
    International Conference on Machine Learning, 2014
    Co-Authors: Taiji Suzuki
    Abstract:

    We propose a new stochastic dual coordinate ascent technique that can be applied to a wide range of regularized learning problems. Our method is based on the alternating direction method of multipliers (ADMM) to deal with complex regularization functions such as structured regularizations. Although the original ADMM is a batch method, the proposed method offers a stochastic update rule where each iteration requires only one or a few sample observations. Moreover, our method naturally accommodates mini-batch updates, which speeds up convergence. We show that, under mild assumptions, our method converges exponentially. Numerical experiments show that our method indeed performs efficiently.

  • Stochastic Dual Coordinate Ascent with Alternating Direction Multiplier Method
    arXiv: Machine Learning, 2013
    Co-Authors: Taiji Suzuki
    Abstract:

    We propose a new stochastic dual coordinate ascent technique that can be applied to a wide range of regularized learning problems. Our method is based on the Alternating Direction Multiplier Method (ADMM) to deal with complex regularization functions such as structured regularizations. Although the original ADMM is a batch method, the proposed method offers a stochastic update rule where each iteration requires only one or a few sample observations. Moreover, our method naturally accommodates mini-batch updates, which speeds up convergence. We show that, under mild assumptions, our method converges exponentially. Numerical experiments show that our method indeed performs efficiently.
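
    As background for the stochastic variant described above, here is a minimal batch ADMM for a generalized-lasso-style problem, $\min_x \tfrac12\|Ax-b\|_2^2 + \lambda\|Dx\|_1$, where a structured regularizer is handled through the splitting $z = Dx$. This is the classical batch scheme the abstracts take as a starting point, not the proposed stochastic dual coordinate ascent method; the matrices A and D, the data b, and the penalty parameter rho are illustrative assumptions.

        import numpy as np

        def admm_generalized_lasso(A, b, D, lam=1.0, rho=1.0, n_iter=200):
            """Batch ADMM for min_x 0.5*||Ax - b||^2 + lam*||Dx||_1 via the split z = Dx."""
            x = np.zeros(A.shape[1])
            z = np.zeros(D.shape[0])
            u = np.zeros(D.shape[0])                        # scaled dual variable
            M = A.T @ A + rho * D.T @ D                     # x-update matrix, fixed across iterations
            for _ in range(n_iter):
                x = np.linalg.solve(M, A.T @ b + rho * D.T @ (z - u))
                v = D @ x + u
                z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)   # soft-thresholding
                u = u + D @ x - z                           # dual update on the constraint Dx = z
            return x

        # Example: fused-lasso-style denoising of a noisy piecewise-constant signal
        rng = np.random.default_rng(0)
        n = 200
        b = np.concatenate([np.zeros(100), np.ones(100)]) + 0.3 * rng.standard_normal(n)
        A = np.eye(n)                                       # identity design: pure denoising
        D = np.diff(np.eye(n), axis=0)                      # first-difference (fused lasso) operator
        x_hat = admm_generalized_lasso(A, b, D, lam=2.0, rho=2.0)

    The batch x-update above needs all of A at every iteration; the point of the stochastic dual coordinate ascent variant in these papers is to replace that with updates that touch only one or a few samples (or a mini-batch) per iteration, while retaining the exponential convergence stated in the abstracts.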