Quadratic Convergence

The Experts below are selected from a list of 26,916 Experts worldwide, ranked by the ideXlab platform

Cunqiang Miao - One of the best experts on this subject based on the ideXlab platform.

  • On Local Quadratic Convergence of Inexact Simplified Jacobi–Davidson Method for Interior Eigenpairs of Hermitian Eigenproblems
    Applied Mathematics Letters, 2017
    Co-Authors: Cunqiang Miao
    Abstract:

    For Hermitian eigenproblems, under a proper assumption on an initial approximation to the desired eigenvector, we prove local quadratic convergence of the inexact simplified Jacobi–Davidson method when the involved relaxed correction equation is solved by a standard Krylov subspace iteration; this in particular leads to local cubic convergence when the relaxed correction equation is solved to a prescribed precision proportional to the norm of the current residual. These results are valid for the interior as well as the extreme eigenpairs of the Hermitian eigenproblem and hence generalize the results of Bai and Miao (2017) from the extreme eigenpairs to the interior ones.
    (A sketch illustrating this iteration follows this publication list.)

  • On Local Quadratic Convergence of Inexact Simplified Jacobi–Davidson Method
    Linear Algebra and its Applications, 2017
    Co-Authors: Cunqiang Miao
    Abstract:

    For Hermitian eigenproblems, we prove local quadratic convergence of the inexact simplified Jacobi–Davidson method when the involved relaxed correction equation is solved by a standard Krylov subspace iteration. The method attains a local cubic convergence rate when the relaxed correction equation is solved to a prescribed precision proportional to the norm of the current residual. As a by-product, we obtain local cubic convergence of the simplified Jacobi–Davidson method. These results significantly improve the existing ones, which show only local linear convergence for the inexact simplified Jacobi–Davidson method and hence only local quadratic convergence for the simplified Jacobi–Davidson method when the tolerance of the inexact solve is set to zero. Numerical experiments confirm these theoretical results.
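
Both Miao abstracts above rely on two standard notions of local convergence order and on an inner-solve tolerance tied to the current eigenresidual. A compact reminder, in generic symbols rather than the papers' notation (the tolerance rule is paraphrased from the abstracts):

```latex
% e_k: error of the k-th eigenvector iterate; r_k: eigenresidual.
e_k = \|u_k - u_\ast\|, \qquad r_k = A u_k - \theta_k u_k,
\qquad
\underbrace{e_{k+1} \le C\, e_k^{2}}_{\text{quadratic}},
\qquad
\underbrace{e_{k+1} \le C\, e_k^{3}}_{\text{cubic}},
\qquad
\tau_k = c\,\|r_k\| \ \ (0 < c < 1).
```

A standard Krylov inner solve of the relaxed correction equation gives the quadratic rate, while stopping that inner solve at the residual-proportional tolerance tau_k gives the cubic rate.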
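
The following is a minimal, hedged sketch (NumPy/SciPy, not the authors' code) of a single-vector simplified Jacobi–Davidson iteration of the kind analyzed above: the projected correction equation (I - uuᵀ)(A - θI)(I - uuᵀ)t = -r is solved inexactly by MINRES, one choice of standard Krylov method, with a tolerance proportional to the current residual norm. The test matrix, the tolerance cap, and the iteration limit are illustrative assumptions; the printed residual history only visualizes the fast local convergence.

```python
# Illustrative sketch only: single-vector simplified Jacobi-Davidson iteration for a
# real symmetric matrix, with the projected correction equation solved inexactly by
# MINRES to a tolerance proportional to the current eigenresidual norm.
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # symmetric test matrix (assumed)

u = rng.standard_normal(n)
u /= np.linalg.norm(u)

for k in range(10):
    theta = u @ (A @ u)                # Rayleigh quotient
    r = A @ u - theta * u              # eigenresidual
    print(f"iter {k}: ||r|| = {np.linalg.norm(r):.3e}")
    if np.linalg.norm(r) < 1e-12:
        break

    def correction_op(v, u=u, theta=theta):
        # (I - u u^T)(A - theta I)(I - u u^T) v, applied matrix-free
        w = v - u * (u @ v)
        w = A @ w - theta * w
        return w - u * (u @ w)

    M = LinearOperator((n, n), matvec=correction_op, dtype=float)
    tol = min(0.5, np.linalg.norm(r))  # inner tolerance proportional to ||r|| (capped)
    try:
        t, _ = minres(M, -r, rtol=tol)     # SciPy >= 1.12
    except TypeError:
        t, _ = minres(M, -r, tol=tol)      # older SciPy
    t -= u * (u @ t)                   # keep the correction orthogonal to u

    u = u + t
    u /= np.linalg.norm(u)
```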

Shunsuke Hayashi - One of the best experts on this subject based on the ideXlab platform.

Naihua Xiu - One of the best experts on this subject based on the ideXlab platform.

  • Quadratic Convergence of Smoothing Newton's Method for 0/1 Loss Optimization
    arXiv: Optimization and Control, 2021
    Co-Authors: Shenglong Zhou, Lili Pan, Naihua Xiu
    Abstract:

    It has been widely recognized that the 0/1 loss function is one of the most natural choices for modelling classification errors, and it has a wide range of applications including support vector machines and 1-bit compressed sensing. Due to the combinatorial nature of the 0/1 loss function, methods based on convex relaxations or smoothing approximations have dominated the existing research and are often able to provide approximate solutions of good quality. However, those methods do not optimize the 0/1 loss function directly, and hence no optimality has been established for the original problem. This paper studies the optimality conditions of 0/1-loss minimization and, for the first time, develops a Newton method that directly optimizes the 0/1 loss, with local quadratic convergence under reasonable conditions. Extensive numerical experiments demonstrate its superior performance, as one would expect from Newton-type methods.
    (An illustrative smoothing-plus-Newton sketch follows this publication list.)

  • Global and Quadratic Convergence of Newton Hard-Thresholding Pursuit
    arXiv: Optimization and Control, 2019
    Co-Authors: Shenglong Zhou, Naihua Xiu
    Abstract:

    Algorithms based on the hard-thresholding principle have been well studied, with sound theoretical guarantees, in compressed sensing and, more generally, in sparsity-constrained optimization. It is widely observed in existing empirical studies that when a restricted Newton step is used (as the debiasing step), hard-thresholding algorithms tend to meet their halting conditions in a significantly smaller number of iterations and are very efficient. Hence, the resulting Newton hard-thresholding algorithms call for stronger theoretical guarantees than their simple hard-thresholding counterparts. This paper provides a theoretical justification for the use of the restricted Newton step. We build our theory and algorithm, Newton Hard-Thresholding Pursuit (NHTP), for sparsity-constrained optimization. Our main result shows that NHTP is quadratically convergent under the standard assumption of restricted strong convexity and smoothness. We also establish its global convergence to a stationary point under a weaker assumption. In the special case of compressive sensing, NHTP effectively reduces to some of the existing hard-thresholding algorithms with a Newton step. Consequently, our fast convergence result justifies why those algorithms perform better than without the Newton step. The efficiency of NHTP was demonstrated on both synthetic and real data in compressed sensing and sparse logistic regression.
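
As a rough illustration of the abstract directly above, the compressed-sensing special case can be sketched in a few lines: a hard-thresholding step selects a support of size s, and the restricted Newton (debiasing) step for the least-squares objective amounts to a small least-squares solve on that support. This is a hedged sketch of the general idea, not the authors' NHTP implementation; the problem sizes, step size, and stopping rule below are illustrative assumptions.

```python
# Illustrative sketch only: hard thresholding to pick a support, followed by a
# restricted Newton/debiasing step, for min 0.5*||Ax - b||^2 s.t. ||x||_0 <= s.
import numpy as np

rng = np.random.default_rng(1)
m, n, s = 80, 200, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)     # sensing matrix (assumed Gaussian)
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
b = A @ x_true                                   # noiseless measurements

x = np.zeros(n)
eta = 1.0                                        # gradient step size (assumed)
for k in range(50):
    z = x - eta * A.T @ (A @ x - b)              # gradient step
    T = np.argsort(np.abs(z))[-s:]               # keep the s largest entries (support)
    x_new = np.zeros(n)
    # Restricted Newton step: for the least-squares objective this is exactly a
    # least-squares solve on the selected columns A[:, T].
    x_new[T], *_ = np.linalg.lstsq(A[:, T], b, rcond=None)
    if np.linalg.norm(x_new - x) <= 1e-10:
        x = x_new
        break
    x = x_new

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```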
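
The first abstract in this list concerns a smoothing Newton method for the 0/1 loss. The sketch below (referenced from that entry) is not the authors' algorithm; it only illustrates the broader pattern of replacing the 0/1 loss by a smooth sigmoid surrogate and then applying a damped Newton iteration. The surrogate, the ridge term, the Levenberg-style damping, and all constants are illustrative assumptions.

```python
# Illustrative sketch only: smooth surrogate of the 0/1 loss plus a damped Newton
# method; labels are in {-1,+1} and the surrogate is sigma(-gamma * margin).
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_features = 200, 5
X = rng.standard_normal((n_samples, n_features))
w_true = rng.standard_normal(n_features)
y = np.sign(X @ w_true + 0.1 * rng.standard_normal(n_samples))

gamma, lam = 5.0, 1e-3                 # smoothing sharpness and ridge weight (assumed)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def objective(w):
    # Smooth surrogate: close to 1 on misclassified points, close to 0 otherwise.
    return np.sum(sigmoid(-gamma * y * (X @ w))) + 0.5 * lam * (w @ w)

w = np.zeros(n_features)
for k in range(50):
    m = y * (X @ w)                                   # margins
    s = sigmoid(-gamma * m)
    grad = X.T @ (-gamma * s * (1 - s) * y) + lam * w
    if np.linalg.norm(grad) < 1e-8:
        break
    d = gamma**2 * s * (1 - s) * (1 - 2 * s)          # per-sample curvature (may be negative)
    H = X.T @ (d[:, None] * X) + lam * np.eye(n_features)
    mu = 0.0
    while True:                                       # Levenberg-style damping until PD
        try:
            np.linalg.cholesky(H + mu * np.eye(n_features))
            break
        except np.linalg.LinAlgError:
            mu = max(2 * mu, 1e-4)
    p = np.linalg.solve(H + mu * np.eye(n_features), -grad)
    t, f0 = 1.0, objective(w)                         # crude backtracking line search
    while objective(w + t * p) > f0 and t > 1e-8:
        t *= 0.5
    w = w + t * p

print("training 0/1 errors:", int(np.sum(y * (X @ w) <= 0)))
```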

Berkant Savas - One of the best experts on this subject based on the ideXlab platform.

  • A NEWTON–GRASSMANN METHOD FOR COMPUTING THE BEST MULTILINEAR RANK-(r1, r2, r3) APPROXIMATION OF A TENSOR
    2013
    Co-Authors: Lars Elden, Berkant Savas
    Abstract:

    We derive a Newton method for computing the best rank-(r1, r2, r3) approximation of a given J × K × L tensor A. The problem is formulated as an approximation problem on a product of Grassmann manifolds. Incorporating the manifold structure into Newton's method ensures that all iterates generated by the algorithm are points on the Grassmann manifolds. We also introduce a consistent notation for matricizing a tensor, for contracted tensor products and some tensor-algebraic manipulations, which simplify the derivation of the Newton equations and enable straightforward algorithmic implementation. Experiments show a quadratic convergence rate for the Newton–Grassmann algorithm.

  • A NEWTON-GRASSMANN METHOD FOR COMPUTING THE BEST MULTI-LINEAR RANK-(R_1, R_2, R_3) APPROXIMATION OF A TENSOR
    2010
    Co-Authors: Lars Elden, Berkant Savas
    Abstract:

    We derive a Newton method for computing the best rank-(r_1, r_2, r_3) approximation of a given J × K × L tensor A. The problem is formulated as an approximation problem on a product of Grassmann manifolds. Incorporating the manifold structure into Newton's method ensures that all iterates generated by the algorithm are points on the Grassmann manifolds. We also introduce a consistent notation for matricizing a tensor, for contracted tensor products and some tensor-algebraic manipulations, which simplify the derivation of the Newton equations and enable straightforward algorithmic implementation. Experiments show a quadratic convergence rate for the Newton-Grassmann algorithm.

  • A Newton–Grassmann Method for Computing the Best Multilinear Rank-$(r_1, r_2, r_3)$ Approximation of a Tensor
    SIAM Journal on Matrix Analysis and Applications, 2009
    Co-Authors: Lars Elden, Berkant Savas
    Abstract:

    We derive a Newton method for computing the best rank-$(r_1,r_2,r_3)$ approximation of a given $J\times K\times L$ tensor $\mathcal{A}$. The problem is formulated as an approximation problem on a product of Grassmann manifolds. Incorporating the manifold structure into Newton's method ensures that all iterates generated by the algorithm are points on the Grassmann manifolds. We also introduce a consistent notation for matricizing a tensor, for contracted tensor products and some tensor-algebraic manipulations, which simplify the derivation of the Newton equations and enable straightforward algorithmic implementation. Experiments show a quadratic convergence rate for the Newton-Grassmann algorithm.
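
The three entries above formulate the best multilinear rank-(r_1, r_2, r_3) approximation as an optimization problem over a product of Grassmann manifolds. A compact statement of that formulation, in standard notation paraphrased from the tensor-approximation literature rather than copied from the paper:

```latex
% Best multilinear rank-(r_1,r_2,r_3) approximation of A in R^{J x K x L}:
\min_{\substack{U \in \mathbb{R}^{J\times r_1},\ V \in \mathbb{R}^{K\times r_2},\ W \in \mathbb{R}^{L\times r_3}\\
                U^{\mathsf T}U = I,\ V^{\mathsf T}V = I,\ W^{\mathsf T}W = I}}
\ \min_{\mathcal{S}}\ \bigl\|\mathcal{A} - \mathcal{S}\times_1 U \times_2 V \times_3 W\bigr\|_F^2,
\qquad
\mathcal{S}_{\mathrm{opt}} = \mathcal{A}\times_1 U^{\mathsf T}\times_2 V^{\mathsf T}\times_3 W^{\mathsf T},
```

```latex
% With the optimal core inserted, the problem is equivalent to the maximization
\max_{U,\,V,\,W\ \text{orthonormal}}
\ \bigl\|\mathcal{A}\times_1 U^{\mathsf T}\times_2 V^{\mathsf T}\times_3 W^{\mathsf T}\bigr\|_F^2 .
```

Because this objective is invariant under U → UQ for any orthogonal Q (and likewise for V and W), it depends only on the column spaces, i.e. on a point of Gr(J, r_1) × Gr(K, r_2) × Gr(L, r_3), which is exactly the product manifold on which the abstracts derive the Newton equations and report quadratic convergence.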
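
A small NumPy sketch (hypothetical helper names and sizes, not the paper's implementation) of the two notational ingredients the abstracts mention, mode-n matricization and the contracted product A ×1 Uᵀ ×2 Vᵀ ×3 Wᵀ, together with a check that the objective depends only on the column spaces:

```python
# Illustrative sketch only: mode-n unfolding, the multilinear objective, and its
# invariance under rotation of an orthonormal basis (a Grassmann point).
import numpy as np

rng = np.random.default_rng(3)
J, K, L = 6, 7, 8
r1, r2, r3 = 2, 3, 2
A = rng.standard_normal((J, K, L))

def unfold(T, mode):
    """Mode-n matricization: the mode-n fibers become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def multilinear_objective(U, V, W):
    """Frobenius norm of the contracted product A x_1 U^T x_2 V^T x_3 W^T."""
    core = np.einsum('jkl,ja,kb,lc->abc', A, U, V, W)
    return np.linalg.norm(core)

# Orthonormal bases from leading singular vectors of the unfoldings (HOSVD-style init).
U = np.linalg.svd(unfold(A, 0), full_matrices=False)[0][:, :r1]
V = np.linalg.svd(unfold(A, 1), full_matrices=False)[0][:, :r2]
W = np.linalg.svd(unfold(A, 2), full_matrices=False)[0][:, :r3]

# Rotating a basis by an orthogonal Q leaves the objective unchanged: only the
# column space (the Grassmann point) matters.
Q = np.linalg.qr(rng.standard_normal((r1, r1)))[0]
print(multilinear_objective(U, V, W), multilinear_objective(U @ Q, V, W))
```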

Zhou Shenglong - One of the best experts on this subject based on the ideXlab platform.

  • Global and Quadratic Convergence of Newton Hard-Thresholding Pursuit
    2021
    Co-Authors: Zhou Shenglong, Xiu Naihua, Qi Hou-duo
    Abstract:

    Algorithms based on the hard-thresholding principle have been well studied, with sound theoretical guarantees, in compressed sensing and, more generally, in sparsity-constrained optimization. It is widely observed in existing empirical studies that when a restricted Newton step is used (as the debiasing step), hard-thresholding algorithms tend to meet their halting conditions in a significantly smaller number of iterations and hence are very efficient. However, the resulting Newton hard-thresholding algorithms do not offer any better theoretical guarantees than their simple hard-thresholding counterparts. This annoying discrepancy between theory and empirical studies has been known for some time. This paper provides a theoretical justification for the use of the restricted Newton step. We build our theory and algorithm, Newton Hard-Thresholding Pursuit (NHTP), for sparsity-constrained optimization. Our main result shows that NHTP is quadratically convergent under the standard assumption of restricted strong convexity and smoothness. We also establish its global convergence to a stationary point under a weaker assumption. In the special case of compressive sensing, NHTP effectively reduces to some existing hard-thresholding algorithms with a Newton step. Consequently, our fast convergence result justifies why those algorithms perform better than without the Newton step. The efficiency of NHTP was demonstrated on both synthetic and real data in compressed sensing and sparse logistic regression.

  • Newton Method for Sparse Logistic Regression: Quadratic Convergence and Extensive Simulations
    2021
    Co-Authors: Wang Rui, Xiu Naihua, Zhou Shenglong
    Abstract:

    Sparse logistic regression, as an effective tool for classification, has developed tremendously over the past two decades, from its original $\ell_1$-regularized version to sparsity-constrained models. This paper studies sparsity-constrained logistic regression via the Newton method. We begin by establishing its first-order optimality condition, associated with a $\tau$-stationary point. This point can be equivalently interpreted as a system of equations, which is then efficiently solved by the Newton method. The method has a considerably low computational complexity and enjoys global and quadratic convergence properties. Numerical experiments on random and real data demonstrate its superior performance when compared against seven state-of-the-art solvers.
    (An illustrative restricted-Newton sketch follows this publication list.)

  • Quadratic Convergence of Newton's Method for 0/1 Loss Optimization
    2021
    Co-Authors: Zhou Shenglong, Xiu Naihua, Pan Lili, Qi Houduo
    Abstract:

    It has been widely recognized that the 0/1 loss function is one of the most natural choices for modelling classification errors, and it has a wide range of applications including support vector machines and 1-bit compressed sensing. Due to the combinatorial nature of the 0/1 loss function, methods based on convex relaxations or smoothing approximations have dominated the existing research and are often able to provide approximate solutions of good quality. However, those methods do not optimize the 0/1 loss function directly, and hence no optimality has been established for the original problem. This paper studies the optimality conditions of 0/1-loss minimization and, for the first time, develops a Newton method that directly optimizes the 0/1 loss, with local quadratic convergence under reasonable conditions. Extensive numerical experiments demonstrate its superior performance, as one would expect from Newton-type methods.
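
The sparse-logistic-regression entry above reformulates a tau-stationary point as an equation system solved by Newton's method. The sketch below, referenced from that entry, is not the authors' algorithm; it only illustrates the surrounding pattern of selecting a support of size s and taking a Newton step of the logistic loss restricted to that support. The selection rule, the ridge term, and all constants are illustrative assumptions.

```python
# Illustrative sketch only: sparsity-constrained logistic regression handled by
# alternating a support-selection step with a Newton step of the logistic loss
# restricted to that support.
import numpy as np

rng = np.random.default_rng(4)
m, n, s = 300, 40, 5
X = rng.standard_normal((m, n))
w_true = np.zeros(n)
w_true[rng.choice(n, size=s, replace=False)] = 2.0 * rng.standard_normal(s)
y = (rng.random(m) < 1.0 / (1.0 + np.exp(-(X @ w_true)))).astype(float)   # labels in {0,1}

def loss(w):
    z = X @ w
    # Numerically stable mean negative log-likelihood.
    return np.mean(np.maximum(z, 0) + np.log1p(np.exp(-np.abs(z))) - y * z)

w = np.zeros(n)
for k in range(30):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    grad = X.T @ (p - y) / m                      # logistic-loss gradient
    T = np.argsort(np.abs(w - grad))[-s:]         # support selection (illustrative rule)
    d = p * (1.0 - p) / m
    H_T = X[:, T].T @ (d[:, None] * X[:, T]) + 1e-8 * np.eye(s)
    step = np.linalg.solve(H_T, grad[T])          # restricted Newton direction
    w_new = np.zeros(n)
    t = 1.0
    while True:                                   # crude backtracking for safety
        w_new[:] = 0.0
        w_new[T] = w[T] - t * step
        if loss(w_new) <= loss(w) or t < 1e-6:
            break
        t *= 0.5
    if np.linalg.norm(w_new - w) < 1e-8:
        w = w_new
        break
    w = w_new

print("selected support:", np.sort(T))
print("true support:    ", np.sort(np.flatnonzero(w_true)))
```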