The experts below are selected from a list of 5,820 experts worldwide, ranked by the ideXlab platform.
Cho-Jui Hsieh - One of the best experts on this subject based on the ideXlab platform.
-
QUIC & DIRTY: A Quadratic Approximation Approach for Dirty Statistical Models
Neural Information Processing Systems, 2014
Co-Authors: Cho-Jui Hsieh, Inderjit S. Dhillon, Pradeep Ravikumar, Stephen Becker, Peder A. Olsen
Abstract: In this paper, we develop a family of algorithms for optimizing "superposition-structured" or "dirty" statistical estimators for high-dimensional problems involving the minimization of the sum of a smooth loss function and a hybrid regularization. Most current approaches are first-order methods, including proximal gradient and the Alternating Direction Method of Multipliers (ADMM). We propose a new family of second-order methods in which we approximate the loss function by a Quadratic Approximation. The superposition-structured regularizer then leads to a subproblem that can be solved efficiently by alternating minimization. We propose a general active subspace selection approach that speeds up the solver by exploiting the low-dimensional structure induced by the regularizers, and we provide convergence guarantees for our algorithm. Empirically, we show that our approach is more than 10 times faster than state-of-the-art first-order approaches on latent variable graphical model selection and multi-task learning problems with more than one regularizer. For these problems, our approach appears to be the first algorithm to extend active subspace ideas to multiple regularizers.
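The alternating-minimization idea in this abstract can be illustrated on a much simpler superposition-structured problem than the ones the paper targets. The sketch below is a toy, not the QUIC & DIRTY algorithm itself (which applies a quadratic approximation to a general smooth loss and uses active subspace selection): it decomposes an observed matrix into a sparse part plus a low-rank part by alternating exact proximal minimizations over the two blocks. All names and parameter values here are illustrative.

```python
import numpy as np

def soft_threshold(A, t):
    """Entrywise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def singular_value_threshold(A, t):
    """Soft-threshold the singular values: the proximal operator of t * ||.||_* (nuclear norm)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(soft_threshold(s, t)) @ Vt

def dirty_decompose(Y, lam_sparse=0.1, lam_lowrank=0.5, iters=100):
    """Alternating minimization for the 'dirty' objective
       0.5 * ||Y - (S + L)||_F^2 + lam_sparse * ||S||_1 + lam_lowrank * ||L||_*.
    With the other block held fixed, each block update is an exact proximal step."""
    S = np.zeros_like(Y)
    L = np.zeros_like(Y)
    for _ in range(iters):
        S = soft_threshold(Y - L, lam_sparse)             # exact minimizer over S, L fixed
        L = singular_value_threshold(Y - S, lam_lowrank)  # exact minimizer over L, S fixed
    return S, L
```

Because each block update solves its subproblem exactly, the objective is monotonically non-increasing; in the general smooth-loss setting the paper accelerates this kind of alternating scheme with active subspace selection.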
-
QUIC: Quadratic Approximation for Sparse Inverse Covariance Estimation
Journal of Machine Learning Research, 2014
Co-Authors: Cho-Jui Hsieh, Matyas A. Sustik, Inderjit S. Dhillon, Pradeep Ravikumar
Abstract: The l1-regularized Gaussian maximum likelihood estimator (MLE) has been shown to have strong statistical guarantees for recovering a sparse inverse covariance matrix, or equivalently the underlying graph structure of a Gaussian Markov Random Field, from very limited samples. We propose a novel algorithm for solving the resulting optimization problem, which is a regularized log-determinant program. In contrast to recent state-of-the-art methods that largely use first-order gradient information, our algorithm is based on Newton's method and employs a Quadratic Approximation, with modifications that leverage the structure of the sparse Gaussian MLE problem. We show that our method is superlinearly convergent, and we present experimental results on synthetic and real-world application data that demonstrate considerable performance improvements over previous methods.
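The Newton-with-quadratic-approximation idea can be sketched in a few lines of NumPy. This is a simplified stand-in, not the paper's QUIC implementation (which solves the lasso subproblem by coordinate descent over a free set and uses a more careful line search); here the quadratic model around the current iterate is minimized approximately by proximal gradient, followed by backtracking that keeps the iterate positive definite:

```python
import numpy as np

def soft_threshold(A, t):
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def quic_sketch(S, lam, outer=20, inner=50):
    """Toy proximal-Newton iteration for
       min_X  -log det X + tr(S X) + lam * ||X||_1   over positive-definite X.
    The smooth part is replaced by its second-order model around the iterate X,
    and the resulting lasso subproblem is solved approximately by proximal gradient."""
    p = S.shape[0]
    X = np.eye(p)
    for _ in range(outer):
        W = np.linalg.inv(X)               # gradient of the smooth part is S - W
        L = np.linalg.norm(W, 2) ** 2      # Lipschitz bound for the quadratic model
        D = np.zeros_like(X)
        for _ in range(inner):             # ISTA on the quadratic model
            G = (S - W) + W @ D @ W        # gradient of the model at step D
            D = soft_threshold(X + D - G / L, lam / L) - X
        alpha = 1.0                        # backtrack until X + alpha*D is PD
        while alpha > 1e-10:               # (a fuller version would also test descent)
            try:
                np.linalg.cholesky(X + alpha * D)
                break
            except np.linalg.LinAlgError:
                alpha *= 0.5
        X = X + alpha * D
    return X
```

The Cholesky-based backtracking mirrors the feasibility check the abstract's Newton framework needs: the log-determinant term is only defined on the positive-definite cone, so the step length must keep the iterate inside it.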
-
Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation
arXiv: Learning, 2013
Co-Authors: Cho-Jui Hsieh, Matyas A. Sustik, Inderjit S. Dhillon, Pradeep Ravikumar
Abstract: The L1-regularized Gaussian maximum likelihood estimator (MLE) has been shown to have strong statistical guarantees for recovering a sparse inverse covariance matrix, or equivalently the underlying graph structure of a Gaussian Markov Random Field, from very limited samples. We propose a novel algorithm for solving the resulting optimization problem, which is a regularized log-determinant program. In contrast to recent state-of-the-art methods that largely use first-order gradient information, our algorithm is based on Newton's method and employs a Quadratic Approximation, with modifications that leverage the structure of the sparse Gaussian MLE problem. We show that our method is superlinearly convergent, and we present experimental results on synthetic and real-world application data that demonstrate considerable performance improvements over other state-of-the-art methods.
-
Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation
Neural Information Processing Systems, 2011
Co-Authors: Cho-Jui Hsieh, Inderjit S. Dhillon, Pradeep Ravikumar, Matyas A. Sustik
Abstract: The l1-regularized Gaussian maximum likelihood estimator has been shown to have strong statistical guarantees for recovering a sparse inverse covariance matrix, or equivalently the underlying graph structure of a Gaussian Markov Random Field, from very limited samples. We propose a novel algorithm for solving the resulting optimization problem, which is a regularized log-determinant program. In contrast to other state-of-the-art methods that largely use first-order gradient information, our algorithm is based on Newton's method and employs a Quadratic Approximation, with modifications that leverage the structure of the sparse Gaussian MLE problem. We show that our method is superlinearly convergent, and we also present experimental results on synthetic and real application data that demonstrate considerable performance improvements over other state-of-the-art methods.
Pradeep Ravikumar - One of the best experts on this subject based on the ideXlab platform.
-
QUIC & DIRTY: A Quadratic Approximation Approach for Dirty Statistical Models
Neural Information Processing Systems, 2014
-
QUIC: Quadratic Approximation for Sparse Inverse Covariance Estimation
Journal of Machine Learning Research, 2014
-
Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation
arXiv: Learning, 2013
-
Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation
Neural Information Processing Systems, 2011
Amin Kargarian - One of the best experts on this subject based on the ideXlab platform.
-
Diagonal Quadratic Approximation for Decentralized Collaborative TSO+DSO Optimal Power Flow
IEEE Transactions on Smart Grid, 2019
Co-Authors: Ali Mohammadi, Mahdi Mehrtash, Amin Kargarian
Abstract: Collaborative operation of electricity transmission and distribution systems improves the economy and reliability of the entire power system. However, this is a challenging problem, given that transmission system operators (TSOs) and distribution system operators (DSOs) are autonomous entities unwilling to reveal their commercially sensitive information. This paper presents a decentralized decision-making algorithm for collaborative TSO+DSO optimal power flow (OPF) implementation. The proposed algorithm is based on analytical target cascading for multilevel hierarchical optimization in complex engineering systems. A local OPF is formulated for each TSO/DSO, taking into account interactions between the transmission and distribution systems while respecting the autonomy and information privacy of the TSO and DSOs. The local OPF of the TSO is solved at the upper level of the hierarchy, and the local OPFs of the DSOs are handled at the lower level. A diagonal Quadratic Approximation (DQA) and a truncated DQA are presented to develop iterative coordination strategies in which all local OPFs are solved in parallel with no need for a central coordinator. This parallel implementation significantly enhances the computational efficiency of the algorithm. The proposed collaborative TSO+DSO OPF is evaluated on a 6-bus and the IEEE 118-bus test systems, with promising results.
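The diagonal quadratic approximation can be shown on a deliberately tiny coordination problem. This is an illustrative sketch, not the paper's TSO+DSO OPF formulation: two agents with made-up scalar quadratic costs must agree on a shared boundary variable, and the cross term of the augmented Lagrangian penalty is replaced by a decoupled (diagonal) model built around the previous iterate, so both updates have closed forms and could run in parallel without a central coordinator.

```python
# Toy diagonal quadratic approximation (DQA) coordination between two agents.
# Agent 1 minimizes 0.5*a1*(x - c1)^2, agent 2 minimizes 0.5*a2*(y - c2)^2,
# subject to the consistency constraint x = y.  The coupled augmented
# Lagrangian penalty rho/2*(x - y)^2 is replaced by the decoupled model
# rho/2*[(x - y_k)^2 + (x_k - y)^2], which separates the two updates.
a1, c1 = 2.0, 1.0          # illustrative cost data for agent 1 (e.g. a TSO)
a2, c2 = 1.0, 4.0          # illustrative cost data for agent 2 (e.g. a DSO)
rho = 1.0                  # penalty parameter
x, y, mu = 0.0, 0.0, 0.0   # primal iterates and dual multiplier for x = y
for _ in range(2000):
    x_new = (a1 * c1 - mu + rho * y) / (a1 + rho)  # agent 1's local update
    y_new = (a2 * c2 + mu + rho * x) / (a2 + rho)  # agent 2's local update (parallel)
    x, y = x_new, y_new
    mu += rho * (x - y)                            # dual (price) update
# At consensus x = y, which for this data is (a1*c1 + a2*c2)/(a1 + a2) = 2.0.
```

The sketch performs a single decoupled update per dual step, which is in the spirit of a truncated scheme; a full DQA loop would repeat the decoupled primal updates several times before each dual update.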
Inderjit S Dhillon - One of the best experts on this subject based on the ideXlab platform.
-
QUIC & DIRTY: A Quadratic Approximation Approach for Dirty Statistical Models
Neural Information Processing Systems, 2014
-
QUIC: Quadratic Approximation for Sparse Inverse Covariance Estimation
Journal of Machine Learning Research, 2014
-
Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation
arXiv: Learning, 2013
-
Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation
Neural Information Processing Systems, 2011
Matyas A. Sustik - One of the best experts on this subject based on the ideXlab platform.
-
QUIC: Quadratic Approximation for Sparse Inverse Covariance Estimation
Journal of Machine Learning Research, 2014
-
Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation
arXiv: Learning, 2013
-
Sparse Inverse Covariance Matrix Estimation Using Quadratic Approximation
Neural Information Processing Systems, 2011