Previous Iteration

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 15,408 experts worldwide, ranked by the ideXlab platform.

Mats Werme - One of the best experts on this subject based on the ideXlab platform.

  • Sequential integer programming methods for stress constrained topology optimization
    Structural and Multidisciplinary Optimization, 2007
    Co-Authors: Krister Svanberg, Mats Werme
    Abstract:

    This paper deals with topology optimization of load-carrying structures defined on a discretized design domain where binary design variables are used to indicate material or void in the various finite elements. The main contribution is the development of two iterative methods which are guaranteed to find a local optimum with respect to a 1-neighbourhood. Each new iteration point is obtained as the optimal solution to an integer linear programming problem which is an approximation of the original problem at the previous iteration point. The proposed methods are quite general and can be applied to a variety of topology optimization problems defined by 0-1 design variables. Most of the presented numerical examples are devoted to problems involving stresses, which can be handled in a natural way since the design variables are kept binary in the subproblems.
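
The core loop described in the abstract — approximate the problem at the previous iteration point, solve a 0-1 subproblem, and repeat until no point in the 1-neighbourhood improves — can be sketched as follows. This is a minimal illustration, not the paper's method: `toy_cost` is a made-up stand-in for a discretized structural model, and the subproblem restricted to the 1-neighbourhood reduces here to choosing the single most improving flip.

```python
def sequential_ilp_minimize(f, x0):
    """Iteratively solve the subproblem restricted to the 1-neighbourhood of
    the previous iteration point; terminates at a design that no single
    material/void flip can improve."""
    x = list(x0)
    while True:
        # g[i]: change in f from flipping design variable i at the
        # previous iteration point
        g = []
        for i in range(len(x)):
            y = x[:]
            y[i] = 1 - y[i]
            g.append(f(y) - f(x))
        i_best = min(range(len(x)), key=g.__getitem__)
        if g[i_best] >= 0:           # local optimum w.r.t. the 1-neighbourhood
            return x
        x[i_best] = 1 - x[i_best]    # best point in the 1-neighbourhood

# Hypothetical 0-1 objective standing in for a structural model: a per-element
# cost plus a coupling penalty on adjacent material elements.
def toy_cost(x):
    n = len(x)
    return sum((2 * x[i] - 1) * (i - n / 2) for i in range(n)) \
        + 1.5 * sum(x[i] * x[i - 1] for i in range(1, n))
```

Since each accepted flip strictly decreases the objective over a finite design space, the loop always terminates.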

Krister Svanberg - One of the best experts on this subject based on the ideXlab platform.

  • Sequential integer programming methods for stress constrained topology optimization
    Structural and Multidisciplinary Optimization, 2007
    Co-Authors: Krister Svanberg, Mats Werme
    Abstract:

    This paper deals with topology optimization of load-carrying structures defined on a discretized design domain where binary design variables are used to indicate material or void in the various finite elements. The main contribution is the development of two iterative methods which are guaranteed to find a local optimum with respect to a 1-neighbourhood. Each new iteration point is obtained as the optimal solution to an integer linear programming problem which is an approximation of the original problem at the previous iteration point. The proposed methods are quite general and can be applied to a variety of topology optimization problems defined by 0-1 design variables. Most of the presented numerical examples are devoted to problems involving stresses, which can be handled in a natural way since the design variables are kept binary in the subproblems.

Xiaoe Ruan - One of the best experts on this subject based on the ideXlab platform.

  • A Networked Iterative Learning Control Approach with Input Packet Dropout Adjustment Factor
    2019 IEEE 8th Data Driven Control and Learning Systems Conference (DDCLS), 2019
    Co-Authors: Yamiao Zhang, Xiaoe Ruan
    Abstract:

    This paper proposes an iterative learning control approach with an input packet dropout adjustment factor for a class of discrete-time networked control systems with random packet dropout. In the scheme, a missing system output signal is replaced by the corresponding desired output signal. In addition, if the control signal generated by the iterative learning controller is successfully transmitted, a linear combination of that control signal and the actual system input at the previous iteration is used to drive the controlled system; otherwise, the actual system input at the previous iteration is used directly. It is strictly proved that under certain conditions the actual system output converges to the desired system output in the sense of expectation. Finally, an example is given to demonstrate the validity of the findings.
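
A rough numerical sketch of such a scheme is given below. All concrete values are illustrative assumptions, not the paper's: the plant is a scalar system y(t+1) = 0.5 y(t) + u(t), `L` is the learning gain, `lam` plays the role of the adjustment factor, and `p_in`/`p_out` are the input/output dropout probabilities.

```python
import random

random.seed(0)

# Toy scalar plant and ILC parameters (illustrative assumptions).
T, K = 20, 60                     # trial horizon, number of learning iterations
a, L, lam = 0.5, 0.8, 0.7         # plant pole, learning gain, adjustment factor
p_in, p_out = 0.2, 0.2            # packet dropout probabilities
yd = [t / T for t in range(T + 1)]  # desired output trajectory

def run_plant(u):
    y = [0.0] * (T + 1)
    for t in range(T):
        y[t + 1] = a * y[t] + u[t]
    return y

u_actual = [0.0] * T              # input actually applied to the plant
errs = []
for k in range(K):
    y = run_plant(u_actual)
    errs.append(max(abs(yd[t + 1] - y[t + 1]) for t in range(T)))
    # Output channel: a dropped measurement is replaced by the desired output.
    y_meas = [yd[t] if random.random() < p_out else y[t] for t in range(T + 1)]
    e = [yd[t + 1] - y_meas[t + 1] for t in range(T)]
    u_learn = [u_actual[t] + L * e[t] for t in range(T)]
    # Input channel: if the learned control packet arrives, blend it with the
    # previous actual input via the adjustment factor; otherwise reuse the
    # previous actual input directly.
    u_actual = [lam * u_learn[t] + (1 - lam) * u_actual[t]
                if random.random() >= p_in else u_actual[t]
                for t in range(T)]
```

With the fixed seed, the tracking error shrinks by orders of magnitude over the learning iterations despite both channels dropping roughly 20% of their packets.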

  • Convergence properties of two networked iterative learning control schemes for discrete-time systems with random packet dropout
    International Journal of Systems Science, 2018
    Co-Authors: Jian Liu, Xiaoe Ruan
    Abstract:

    This paper addresses the convergence of two networked iterative learning control (NILC) schemes for a class of discrete-time nonlinear systems with random packet dropout occurring in the input and output channels, modelled as a 0–1 Bernoulli-type random variable. In both NILC schemes, a dropped control input at the current iteration is substituted by the synchronous input used at the previous iteration, whilst for a dropped system output, the first replacement strategy substitutes the synchronous pre-given desired trajectory and the second substitutes the synchronous output used at the previous iteration. Using stochastic analysis techniques, we analyse the convergence properties of the two NILC schemes. It is shown that, under appropriate constraints on the learning gain and packet dropout probabilities, the tracking errors driven by the two schemes converge to zero in the expectation sense along the iteration direction. Finally, illustrative simulations are carried out to demonstrate the validity and effectiveness of the results.
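
The two output-replacement strategies can be contrasted in a toy simulation. The scalar plant, learning gain `L`, horizon, and dropout probability `p` below are illustrative assumptions, not the paper's model.

```python
import random

# Toy scalar system and ILC parameters (illustrative assumptions).
T, K, a, L, p = 20, 60, 0.5, 0.8, 0.2
yd = [t / T for t in range(T + 1)]           # desired trajectory

def simulate(strategy, seed=1):
    """Run K learning iterations. A dropped output is replaced either by the
    desired trajectory ('desired') or by the previous iteration's output
    ('previous'); a dropped input keeps the previous iteration's input."""
    rng = random.Random(seed)
    u = [0.0] * T
    y_prev = [0.0] * (T + 1)
    for _ in range(K):
        y = [0.0] * (T + 1)
        for t in range(T):
            y[t + 1] = a * y[t] + u[t]
        y_used = [
            (yd[t] if strategy == "desired" else y_prev[t])
            if rng.random() < p else y[t]     # output packet dropped?
            for t in range(T + 1)
        ]
        u = [u[t] + L * (yd[t + 1] - y_used[t + 1])
             if rng.random() >= p else u[t]   # input packet dropped?
             for t in range(T)]
        y_prev = y
    return max(abs(yd[t] - y_prev[t]) for t in range(T + 1))
```

In this toy setting both replacement strategies drive the tracking error toward zero, consistent with the convergence claim; the choice mainly affects which signal fills in during a dropout.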

Jun Liu - One of the best experts on this subject based on the ideXlab platform.

  • IDEAL - Successive Ray Refinement and Its Application to Coordinate Descent for Lasso
    Lecture Notes in Computer Science, 2016
    Co-Authors: Jun Liu, Zheng Zhao, Ruiwen Zhang
    Abstract:

    Coordinate descent is one of the most popular approaches for solving Lasso and its extensions due to its simplicity and efficiency. When applying coordinate descent to solving Lasso, we update one coordinate at a time while fixing the remaining coordinates. Such an update, which is usually easy to compute, greedily decreases the objective function value. In this paper, we aim to improve its computational efficiency by reducing the number of coordinate descent iterations. To this end, we propose a novel technique called Successive Ray Refinement (SRR). SRR makes use of the following ray-continuation property on successive iterations: for a particular coordinate, the value obtained in the next iteration almost always lies on a ray that starts at its previous iteration and passes through the current iteration. Motivated by this ray-continuation property, we propose that coordinate descent be performed not directly on the previous iteration but on a refined search point with the following properties: on one hand, it lies on a ray that starts at a history solution and passes through the previous iteration; on the other hand, it achieves the minimum objective function value among all the points on the ray. We propose a scheme for defining the search point and show that the refined search point can be efficiently obtained. Empirical results on real and synthetic data sets show that the proposed SRR can significantly reduce the number of coordinate descent iterations, especially for small Lasso regularization parameters.
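
The idea can be sketched as coordinate descent for Lasso with a ray-refinement step between sweeps. This is a simplified illustration, not the paper's algorithm: the exact minimization over the ray is replaced by a coarse grid search, and the helper names (`soft`, `lasso_obj`, `cd_sweep`, `cd_srr`) are hypothetical.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding operator used by the Lasso coordinate update."""
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_obj(A, b, lam, x):
    r = A @ x - b
    return 0.5 * r @ r + lam * np.abs(x).sum()

def cd_sweep(A, b, lam, x):
    """One full pass of exact coordinate minimization; each coordinate update
    minimizes the objective in that coordinate, so it never increases it."""
    x = x.copy()
    col_sq = (A * A).sum(axis=0)
    r = b - A @ x
    for j in range(x.size):
        r = r + A[:, j] * x[j]                # partial residual without x_j
        x[j] = soft(A[:, j] @ r, lam) / col_sq[j]
        r = r - A[:, j] * x[j]
    return x

def cd_srr(A, b, lam, sweeps=40):
    """Coordinate descent restarted each sweep from a refined search point on
    the ray from a history solution through the previous iterate."""
    x_hist = np.zeros(A.shape[1])
    x_prev = cd_sweep(A, b, lam, x_hist)
    for _ in range(sweeps):
        # t = 0 recovers x_prev itself, so the refined point never has a
        # worse objective than the previous iterate.
        cands = [x_prev + t * (x_prev - x_hist) for t in (0.0, 0.5, 1.0, 2.0)]
        x_start = min(cands, key=lambda v: lasso_obj(A, b, lam, v))
        x_hist, x_prev = x_prev, cd_sweep(A, b, lam, x_start)
    return x_prev
```

Because the refined start point is never worse than the previous iterate and each sweep is monotone, the objective decreases monotonically across sweeps; the refinement only changes where each sweep begins.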

  • Successive Ray Refinement and Its Application to Coordinate Descent for LASSO
    arXiv: Learning, 2015
    Co-Authors: Jun Liu, Zheng Zhao, Ruiwen Zhang
    Abstract:

    Coordinate descent is one of the most popular approaches for solving Lasso and its extensions due to its simplicity and efficiency. When applying coordinate descent to solving Lasso, we update one coordinate at a time while fixing the remaining coordinates. Such an update, which is usually easy to compute, greedily decreases the objective function value. In this paper, we aim to improve its computational efficiency by reducing the number of coordinate descent iterations. To this end, we propose a novel technique called Successive Ray Refinement (SRR). SRR makes use of the following ray-continuation property on successive iterations: for a particular coordinate, the value obtained in the next iteration almost always lies on a ray that starts at its previous iteration and passes through the current iteration. Motivated by this ray-continuation property, we propose that coordinate descent be performed not directly on the previous iteration but on a refined search point with the following properties: on one hand, it lies on a ray that starts at a history solution and passes through the previous iteration; on the other hand, it achieves the minimum objective function value among all the points on the ray. We propose two schemes for defining the search point and show that the refined search point can be efficiently obtained. Empirical results on real and synthetic data sets show that the proposed SRR can significantly reduce the number of coordinate descent iterations, especially for small Lasso regularization parameters.

  • An improved trust region method for unconstrained optimization
    Journal of Vibration and Control, 2012
    Co-Authors: Jun Liu
    Abstract:

    In this paper, a new trust region method for unconstrained optimization is proposed, in which the trust-region radius adjusts itself adaptively. In our algorithm, we use a convex combination of the Hessian matrices at the previous and current iterations to define a suitable trust-region radius at each iteration. Global, superlinear and quadratic convergence results for the algorithm are established under reasonable assumptions. Finally, some numerical results are given.
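
A minimal sketch of the radius rule is given below, under stated assumptions: the test function, the weight `theta`, the radius formula `||g|| / ||B||`, the Cauchy-point step, and the backtracking safeguard are all illustrative simplifications, not the paper's algorithm.

```python
import numpy as np

# Mildly nonconvex 2-D test function with a non-constant Hessian.
def f(x):
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] - x[0] ** 2) ** 2

def grad(x):
    return np.array([2.0 * (x[0] - 1.0) - 40.0 * x[0] * (x[1] - x[0] ** 2),
                     20.0 * (x[1] - x[0] ** 2)])

def hess(x):
    return np.array([[2.0 - 40.0 * (x[1] - 3.0 * x[0] ** 2), -40.0 * x[0]],
                     [-40.0 * x[0], 20.0]])

def adaptive_trust_region(x, theta=0.5, iters=300):
    H_prev = hess(x)
    for _ in range(iters):
        g, H = grad(x), hess(x)
        gn = np.linalg.norm(g)
        if gn < 1e-10:
            break
        B = theta * H_prev + (1.0 - theta) * H        # convex combination
        radius = gn / max(np.linalg.norm(B), 1e-8)    # adaptive radius
        # Cauchy point: steepest-descent minimizer of the quadratic model
        # within the trust region.
        gBg = g @ B @ g
        t = (g @ g) / gBg if gBg > 0 else radius / gn
        step = -min(t, radius / gn) * g
        # Crude safeguard in place of the usual ratio test: halve the step
        # until the true objective decreases.
        while f(x + step) > f(x) and np.linalg.norm(step) > 1e-14:
            step *= 0.5
        x = x + step
        H_prev = H
    return x
```

Blending the previous and current Hessians damps abrupt curvature changes between iterations, so the radius varies more smoothly than one computed from the current Hessian alone.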

Ruiwen Zhang - One of the best experts on this subject based on the ideXlab platform.

  • IDEAL - Successive Ray Refinement and Its Application to Coordinate Descent for Lasso
    Lecture Notes in Computer Science, 2016
    Co-Authors: Jun Liu, Zheng Zhao, Ruiwen Zhang
    Abstract:

    Coordinate descent is one of the most popular approaches for solving Lasso and its extensions due to its simplicity and efficiency. When applying coordinate descent to solving Lasso, we update one coordinate at a time while fixing the remaining coordinates. Such an update, which is usually easy to compute, greedily decreases the objective function value. In this paper, we aim to improve its computational efficiency by reducing the number of coordinate descent iterations. To this end, we propose a novel technique called Successive Ray Refinement (SRR). SRR makes use of the following ray-continuation property on successive iterations: for a particular coordinate, the value obtained in the next iteration almost always lies on a ray that starts at its previous iteration and passes through the current iteration. Motivated by this ray-continuation property, we propose that coordinate descent be performed not directly on the previous iteration but on a refined search point with the following properties: on one hand, it lies on a ray that starts at a history solution and passes through the previous iteration; on the other hand, it achieves the minimum objective function value among all the points on the ray. We propose a scheme for defining the search point and show that the refined search point can be efficiently obtained. Empirical results on real and synthetic data sets show that the proposed SRR can significantly reduce the number of coordinate descent iterations, especially for small Lasso regularization parameters.

  • Successive Ray Refinement and Its Application to Coordinate Descent for LASSO
    arXiv: Learning, 2015
    Co-Authors: Jun Liu, Zheng Zhao, Ruiwen Zhang
    Abstract:

    Coordinate descent is one of the most popular approaches for solving Lasso and its extensions due to its simplicity and efficiency. When applying coordinate descent to solving Lasso, we update one coordinate at a time while fixing the remaining coordinates. Such an update, which is usually easy to compute, greedily decreases the objective function value. In this paper, we aim to improve its computational efficiency by reducing the number of coordinate descent iterations. To this end, we propose a novel technique called Successive Ray Refinement (SRR). SRR makes use of the following ray-continuation property on successive iterations: for a particular coordinate, the value obtained in the next iteration almost always lies on a ray that starts at its previous iteration and passes through the current iteration. Motivated by this ray-continuation property, we propose that coordinate descent be performed not directly on the previous iteration but on a refined search point with the following properties: on one hand, it lies on a ray that starts at a history solution and passes through the previous iteration; on the other hand, it achieves the minimum objective function value among all the points on the ray. We propose two schemes for defining the search point and show that the refined search point can be efficiently obtained. Empirical results on real and synthetic data sets show that the proposed SRR can significantly reduce the number of coordinate descent iterations, especially for small Lasso regularization parameters.