René Vidal  One of the best experts on this subject based on the ideXlab platform.

Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace Clustering
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016. Co-Authors: Chong You, Daniel P. Robinson, Chunguang Li, René Vidal. Abstract: State-of-the-art subspace clustering methods are based on expressing each data point as a linear combination of other data points while regularizing the matrix of coefficients with $\ell_1$, $\ell_2$ or nuclear norms. $\ell_1$ regularization is guaranteed to give a subspace-preserving affinity (i.e., there are no connections between points from different subspaces) under broad theoretical conditions, but the clusters may not be connected. $\ell_2$ and nuclear norm regularization often improve connectivity, but give a subspace-preserving affinity only for independent subspaces. Mixed $\ell_1$, $\ell_2$ and nuclear norm regularizations offer a balance between the subspace-preserving and connectedness properties, but this comes at the cost of increased computational complexity. This paper studies the geometry of the elastic net regularizer (a mixture of the $\ell_1$ and $\ell_2$ norms) and uses it to derive a provably correct and scalable active set method for finding the optimal coefficients. Our geometric analysis also provides a theoretical justification and a geometric interpretation for the balance between the connectedness (due to $\ell_2$ regularization) and subspace-preserving (due to $\ell_1$ regularization) properties for elastic net subspace clustering. Our experiments show that the proposed active set method not only achieves state-of-the-art clustering performance, but also efficiently handles large-scale datasets.
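The per-point elastic net program in this abstract can be sketched numerically. The following is a minimal illustration only, not the paper's oracle-based active set method: it solves each column's elastic net problem by plain proximal gradient (ISTA), enforces the zero self-coefficient by projection, and assumes unit-norm data columns; all function names and parameter values here are my own.

```python
import numpy as np

def elastic_net_coeffs(X, lam=0.9, gamma=50.0, step=None, iters=200):
    """Self-expressive elastic-net coefficients: for each column x_j of X,
    minimize lam*||c||_1 + (1-lam)/2*||c||_2^2 + gamma/2*||x_j - X c||_2^2
    with c_j forced to 0, via proximal gradient (ISTA)."""
    D, N = X.shape
    if step is None:
        # 1/L, where L = gamma * ||X||_2^2 bounds the smooth term's Lipschitz constant
        step = 1.0 / (gamma * np.linalg.norm(X, 2) ** 2)
    C = np.zeros((N, N))
    for j in range(N):
        c = np.zeros(N)
        for _ in range(iters):
            grad = gamma * (X.T @ (X @ c - X[:, j]))   # gradient of the fit term
            z = c - step * grad
            # prox of step*(lam*|.|_1 + (1-lam)/2*||.||^2): soft-threshold, then shrink
            c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
            c /= 1.0 + step * (1.0 - lam)
            c[j] = 0.0                                  # exclude the trivial self-representation
        C[:, j] = c
    return C
```

The symmetrized affinity $|C| + |C|^\top$ would then feed a standard spectral clustering step.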
Chong You

Oracle Based Active Set Algorithm for Scalable Elastic Net Subspace Clustering
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016. Co-Authors: Chong You, Daniel P. Robinson, Chunguang Li, René Vidal. Abstract: State-of-the-art subspace clustering methods are based on expressing each data point as a linear combination of other data points while regularizing the matrix of coefficients with $\ell_1$, $\ell_2$ or nuclear norms. $\ell_1$ regularization is guaranteed to give a subspace-preserving affinity (i.e., there are no connections between points from different subspaces) under broad theoretical conditions, but the clusters may not be connected. $\ell_2$ and nuclear norm regularization often improve connectivity, but give a subspace-preserving affinity only for independent subspaces. Mixed $\ell_1$, $\ell_2$ and nuclear norm regularizations offer a balance between the subspace-preserving and connectedness properties, but this comes at the cost of increased computational complexity. This paper studies the geometry of the elastic net regularizer (a mixture of the $\ell_1$ and $\ell_2$ norms) and uses it to derive a provably correct and scalable active set method for finding the optimal coefficients. Our geometric analysis also provides a theoretical justification and a geometric interpretation for the balance between the connectedness (due to $\ell_2$ regularization) and subspace-preserving (due to $\ell_1$ regularization) properties for elastic net subspace clustering. Our experiments show that the proposed active set method not only achieves state-of-the-art clustering performance, but also efficiently handles large-scale datasets.
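Given a matrix of coefficients like the one described in the abstract, the final clustering step is standard normalized spectral clustering on the symmetrized affinity. A minimal sketch of that generic step (nothing here is specific to this paper; function names are mine):

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clustering(W, k, seed=0):
    """Cluster from a symmetric nonnegative affinity W:
    normalized-Laplacian embedding followed by k-means."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    # symmetric normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(W)) - (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)
    U = vecs[:, :k]                                    # k smallest eigenvectors
    U /= np.maximum(np.linalg.norm(U, axis=1, keepdims=True), 1e-12)
    _, labels = kmeans2(U, k, minit='++', seed=seed)   # k-means on the row embedding
    return labels
```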
Walid Hachem

Snake: A Stochastic Proximal Gradient Algorithm for Regularized Problems over Large Graphs
IEEE Transactions on Automatic Control, 2019. Co-Authors: Adil Salim, Pascal Bianchi, Walid Hachem. Abstract: A regularized optimization problem over a large unstructured graph is studied, where the regularization term is tied to the graph geometry. Typical regularization examples include the total variation and the Laplacian regularizations over the graph. When the graph is a simple path without loops, efficient off-the-shelf algorithms can be used. However, when the graph is large and unstructured, such algorithms cannot be used directly. In this paper, an algorithm, referred to as “Snake,” is proposed to solve such regularized problems over general graphs. The algorithm consists in properly selecting random simple paths in the graph and performing the proximal gradient algorithm over these simple paths. This algorithm is an instance of a new general stochastic proximal gradient algorithm, whose convergence is proven. Applications to trend filtering and graph inpainting are provided, among others. Numerical experiments are conducted over large graphs.
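The path-based idea can be illustrated on the Laplacian-regularization special case, where the proximity operator along a simple path reduces to a small tridiagonal linear solve. This is a toy sketch of the scheme's structure only, not the paper's Snake algorithm (its step-size schedule and convergence conditions are not reproduced); all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_simple_path(adj, length):
    """Random walk that stops early rather than revisit a node."""
    v = int(rng.integers(len(adj)))
    path = [v]
    for _ in range(length - 1):
        nbrs = [u for u in adj[path[-1]] if u not in path]
        if not nbrs:
            break
        path.append(int(rng.choice(nbrs)))
    return path

def prox_path_laplacian(v, gamma):
    """Prox of gamma * sum_i (x_i - x_{i+1})^2 along a simple path:
    solves the tridiagonal system (I + 2*gamma*L_path) x = v."""
    n = len(v)
    if n == 1:
        return v.copy()
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n difference operator
    return np.linalg.solve(np.eye(n) + 2 * gamma * D.T @ D, v)

def snake_like(y, adj, gamma=1.0, step=0.5, iters=300, path_len=8):
    """Stochastic proximal gradient: gradient step on the data fit
    0.5*||x - y||^2, then the Laplacian prox along a random simple path."""
    x = y.copy()
    for _ in range(iters):
        x = x - step * (x - y)              # gradient of the data-fit term
        p = random_simple_path(adj, path_len)
        x[p] = prox_path_laplacian(x[p], gamma)
    return x
```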

Snake: A Stochastic Proximal Gradient Algorithm for Regularized Problems over Large Graphs
arXiv: Optimization and Control, 2017. Co-Authors: Adil Salim, Pascal Bianchi, Walid Hachem. Abstract: A regularized optimization problem over a large unstructured graph is studied, where the regularization term is tied to the graph geometry. Typical regularization examples include the total variation and the Laplacian regularizations over the graph. When applying the proximal gradient algorithm to solve this problem, there exist quite affordable methods to implement the proximity operator (backward step) in the special case where the graph is a simple path without loops. In this paper, an algorithm, referred to as "Snake", is proposed to solve such regularized problems over general graphs by taking advantage of these fast methods. The algorithm consists in properly selecting random simple paths in the graph and performing the proximal gradient algorithm over these simple paths. This algorithm is an instance of a new general stochastic proximal gradient algorithm, whose convergence is proven. Applications to trend filtering and graph inpainting are provided, among others. Numerical experiments are conducted over large graphs.
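The graph-inpainting application mentioned in the abstract can be posed, in its simplest Laplacian-regularized form, as a single linear system. This is the generic formulation rather than the paper's stochastic algorithm; function names and parameter values are mine.

```python
import numpy as np

def laplacian_from_edges(n, edges):
    """Dense combinatorial graph Laplacian from an edge list."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return L

def graph_inpaint(y, observed, edges, gamma=1.0):
    """Fill in unobserved node values by solving
    min_x sum_{i observed} (x_i - y_i)^2 + gamma * x^T L x,
    i.e. the single linear system (M + gamma*L) x = M y with mask M."""
    n = len(y)
    M = np.diag(np.asarray(observed, dtype=float))
    L = laplacian_from_edges(n, edges)
    return np.linalg.solve(M + gamma * L, M @ y)
```

On a path graph with the two endpoints observed, the solution interpolates smoothly between them.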
Roger Lewandowski

A High Accuracy Leray-Deconvolution Model of Turbulence and Its Limiting Behavior
Analysis and Applications, 2008. Co-Authors: William Layton, Roger Lewandowski. Abstract: In 1934, J. Leray proposed a regularization of the Navier–Stokes equations whose limits were weak solutions of the Navier–Stokes equations. Recently, a modification of the Leray model, called the Leray-alpha model, has attracted interest for turbulent flow simulations. One common drawback of Leray-type regularizations is their low accuracy. Increasing the accuracy of a simulation based on a Leray regularization requires cutting the averaging radius, i.e., remeshing and resolving on finer meshes. This article analyzes a family of Leray-type models of arbitrarily high orders of accuracy for a fixed averaging radius. We establish the basic theory of the entire family, including limiting behavior as the averaging radius decreases to zero (a simple extension of results known for the Leray model). We also give a more technically interesting result on the limit as the order of the models increases with a fixed averaging radius. Because of this property, increasing the accuracy of the model is potentially cheaper than decreasing the averaging radius (or mesh width), and high-order models are doubly interesting.
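For context, the Leray regularization transports the velocity by a filtered field. In the commonly studied Leray-alpha form (my transcription of the standard model, not equations quoted from this paper):

```latex
% Leray-alpha model: transport by the filtered velocity \bar{u}
u_t + (\bar{u} \cdot \nabla) u - \nu \Delta u + \nabla p = f,
\qquad \nabla \cdot u = 0,
% where \bar{u} comes from Helmholtz filtering with averaging radius \alpha:
-\alpha^2 \Delta \bar{u} + \bar{u} = u .
```

The "averaging radius" in the abstract is the filter length $\alpha$; the filtering step is what smooths the flow but also limits accuracy to $O(\alpha^2)$.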

A High Accuracy Leray-Deconvolution Model of Turbulence and Its Limiting Behavior
arXiv: Mathematical Physics, 2007. Co-Authors: William Layton, Roger Lewandowski. Abstract: In 1934, J. Leray proposed a regularization of the Navier–Stokes equations whose limits were weak solutions of the Navier–Stokes equations. Recently, a modification of the Leray model, called the Leray-alpha model, has attracted study for turbulent flow simulation. One common drawback of Leray-type regularizations is their low accuracy. Increasing the accuracy of a simulation based on a Leray regularization requires cutting the averaging radius, i.e., remeshing and resolving on finer meshes. This report analyzes a family of Leray-type models of arbitrarily high orders of accuracy for a fixed averaging radius. We establish the basic theory of the entire family, including limiting behavior as the averaging radius decreases to zero (a simple extension of results known for the Leray model). We also give a more technically interesting result on the limit as the order of the models increases with a fixed averaging radius. Because of this property, increasing the accuracy of the model is potentially cheaper than decreasing the averaging radius (or mesh width), and high-order models are doubly interesting.
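The "arbitrarily high orders of accuracy" typically come from approximate deconvolution of the filter. In the standard van Cittert construction used in Leray-deconvolution models (again my transcription of the standard operator, offered as background rather than quoted from the paper):

```latex
% van Cittert approximate deconvolution of the filter G (with \bar{u} = G u):
D_N = \sum_{n=0}^{N} (I - G)^{n},
% the order-N model transports by the deconvolved field D_N \bar{u} \approx u:
u_t + (D_N \bar{u} \cdot \nabla) u - \nu \Delta u + \nabla p = f,
\qquad \nabla \cdot u = 0 .
```

Taking $N = 0$ gives $D_0 = I$ and recovers the plain Leray model; increasing $N$ raises the model's consistency order without shrinking the averaging radius.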
Taiji Suzuki

Stochastic Dual Coordinate Ascent with Alternating Direction Method of Multipliers
International Conference on Machine Learning, 2014. Co-Authors: Taiji Suzuki. Abstract: We propose a new stochastic dual coordinate ascent technique that can be applied to a wide range of regularized learning problems. Our method is based on the alternating direction method of multipliers (ADMM) to deal with complex regularization functions such as structured regularizations. Although the original ADMM is a batch method, the proposed method offers a stochastic update rule where each iteration requires only one or a few sample observations. Moreover, our method naturally affords mini-batch updates, which speed up convergence. We show that, under mild assumptions, our method converges exponentially. Numerical experiments show that our method performs efficiently in practice.
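The batch ADMM that the proposed stochastic method builds on can be sketched for a concrete regularized problem, the lasso. This is the generic batch algorithm only, not the paper's SDCA-ADMM update rule; parameter values are illustrative.

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Batch ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    split as f(x) + g(z) under the constraint x = z."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    P = np.linalg.inv(AtA + rho * np.eye(n))  # x-update matrix is fixed: invert once
    for _ in range(iters):
        x = P @ (Atb + rho * (z - u))                                     # quadratic x-update
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)   # soft-threshold
        u = u + x - z                                                     # dual update for x = z
    return z
```

The stochastic variant in the paper replaces the full-data x-update with updates driven by one or a few samples per iteration; the soft-threshold and dual steps keep the same ADMM structure.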

Stochastic Dual Coordinate Ascent with Alternating Direction Multiplier Method
arXiv: Machine Learning, 2013. Co-Authors: Taiji Suzuki. Abstract: We propose a new stochastic dual coordinate ascent technique that can be applied to a wide range of regularized learning problems. Our method is based on the Alternating Direction Multiplier Method (ADMM) to deal with complex regularization functions such as structured regularizations. Although the original ADMM is a batch method, the proposed method offers a stochastic update rule where each iteration requires only one or a few sample observations. Moreover, our method naturally affords mini-batch updates, which speed up convergence. We show that, under mild assumptions, our method converges exponentially. Numerical experiments show that our method performs efficiently in practice.