KKT Condition

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 4524 Experts worldwide ranked by ideXlab platform

Gabriel Haeser - One of the best experts on this subject based on the ideXlab platform.

  • Optimality condition and complexity analysis for linearly constrained optimization without differentiability on the boundary
    Mathematical Programming, 2019
    Co-Authors: Gabriel Haeser, Yinyu Ye
    Abstract:

    In this paper we consider the minimization of a continuous function that is potentially not differentiable, or not twice differentiable, on the boundary of the feasible region. By exploiting an interior point technique, we present first- and second-order optimality conditions for this problem that reduce to the classical ones when the derivative on the boundary is available. For this type of problem, existing necessary conditions often rely on the notion of a subdifferential or become non-trivially weaker than the KKT condition in the (twice-)differentiable counterpart problems. In contrast, this paper presents a new set of first- and second-order necessary conditions that are derived without the use of subdifferentials and reduce to exactly the KKT condition when (twice-)differentiability holds. As a result, these conditions are stronger than some existing ones considered for the discussed minimization problem when only non-negativity constraints are present. To solve for these optimality conditions in the special but important case of linearly constrained problems, we present two novel interior point trust-region algorithms and show that their worst-case computational efficiency in achieving the potentially stronger optimality conditions matches the best known complexity bounds. Since this work considers a more general problem than those in the literature, our results also indicate that the best known complexity bounds actually hold for a wider class of nonlinear programming problems. This development is significant, since optimality conditions play a fundamental role in computational optimization, and more and more nonlinear and nonconvex problems need to be solved in practice.
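The classical KKT condition that these results reduce to can be checked numerically. As a hedged illustration (the example problem, function names and tolerances below are my own, not from the paper), the following sketch evaluates the KKT residual of a candidate point for a smooth objective under non-negativity constraints, the special case the abstract highlights:

```python
def kkt_residual(grad_f, x, lam):
    """Max violation of the KKT conditions for min f(x) s.t. x >= 0.

    Stationarity:       grad f(x) - lam = 0
    Complementarity:    lam_i * x_i = 0
    Primal/dual feas.:  x >= 0, lam >= 0
    """
    stationarity = [g - l for g, l in zip(grad_f(x), lam)]
    complementarity = [l * xi for l, xi in zip(lam, x)]
    primal = [min(xi, 0.0) for xi in x]   # negative part of x
    dual = [min(l, 0.0) for l in lam]     # negative part of lam
    return max(abs(v) for v in stationarity + complementarity + primal + dual)

# Example: min (x1 + 1)^2 + (x2 - 2)^2 s.t. x >= 0 has its minimizer at
# x* = (0, 2) with multiplier lam* = (2, 0); the residual there is zero.
grad = lambda x: [2.0 * (x[0] + 1.0), 2.0 * (x[1] - 2.0)]
```

When the objective is not differentiable on the boundary, this residual is not even defined there, which is the gap the paper's interior point conditions are designed to close.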

  • Optimality condition and complexity analysis for linearly constrained optimization without differentiability on the boundary
    arXiv: Computational Complexity, 2017
    Co-Authors: Gabriel Haeser, Yinyu Ye
    Abstract:

    In this paper we consider the minimization of a continuous function that is potentially not differentiable, or not twice differentiable, on the boundary of the feasible region. By exploiting an interior point technique, we present first- and second-order optimality conditions for this problem that reduce to the classical ones when the derivative on the boundary is available. For this type of problem, existing necessary conditions often rely on the notion of a subdifferential or become non-trivially weaker than the KKT condition in the (twice-)differentiable counterpart problems. In contrast, this paper presents a new set of first- and second-order necessary conditions that are derived without the use of subdifferentials and reduce to exactly the KKT condition when (twice-)differentiability holds. As a result, these conditions are stronger than some existing ones considered for the discussed minimization problem when only non-negativity constraints are present. To solve for these optimality conditions in the special but important case of linearly constrained problems, we present two novel interior point trust-region algorithms and show that their worst-case computational efficiency in achieving the potentially stronger optimality conditions matches the best known complexity bounds. Since this work considers a more general problem than those in the literature, our results also indicate that the best known complexity bounds hold for a wider class of nonlinear programming problems.

  • On approximate KKT condition and its extension to continuous variational inequalities
    Journal of Optimization Theory and Applications, 2011
    Co-Authors: Gabriel Haeser, Maria Laura Schuverdt
    Abstract:

    In this work, we introduce a necessary sequential Approximate Karush-Kuhn-Tucker (AKKT) condition for a point to be a solution of a continuous variational inequality, and we prove its relation with the Approximate Gradient Projection (AGP) condition of Garciga-Otero and Svaiter. We also prove that a slight variation of the AKKT condition is sufficient for a convex problem, whether a variational inequality or an optimization problem. Sequential necessary conditions are better suited to iterative methods than the usual pointwise conditions relying on constraint qualifications. The AKKT property holds at a solution independently of the fulfillment of a constraint qualification, but when a weak constraint qualification holds, we can guarantee the validity of the KKT conditions.
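A textbook example (my own, not from the paper) of why the sequential condition is more robust than the pointwise one: for min x subject to x² ≤ 0, the unique feasible point x* = 0 admits no KKT multiplier, yet AKKT holds along x_k = -1/k with multipliers λ_k = k/2. A minimal sketch:

```python
def lagrangian_grad(x, lam):
    # Problem: minimize f(x) = x subject to g(x) = x**2 <= 0.
    # Gradient of the Lagrangian: f'(x) + lam * g'(x) = 1 + 2*lam*x.
    return 1.0 + lam * 2.0 * x

# AKKT: a sequence x_k -> x* = 0 with multipliers lam_k >= 0 driving the
# Lagrangian gradient to zero, even though at x* itself the gradient is 1
# for every choice of multiplier (so the pointwise KKT condition fails).
sequence = [(-1.0 / k, k / 2.0) for k in range(1, 6)]
residuals = [abs(lagrangian_grad(x_k, lam_k)) for x_k, lam_k in sequence]
```

This is exactly the situation the abstract describes: AKKT holds at the solution with no constraint qualification, while KKT does not.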

Min Jiang - One of the best experts on this subject based on the ideXlab platform.

  • Smoothing Partially Exact Penalty Function of Biconvex Programming
    Asia-Pacific Journal of Operational Research, 2020
    Co-Authors: Rui Shen, Zhiqing Meng, Min Jiang
    Abstract:

    In this paper, a smoothing partially exact penalty function of biconvex programming is studied. First, the concepts of partial KKT point, partial optimum point, partial KKT condition, partial Slater constraint qualification and partial exactness are defined for biconvex programming. It is proved that a partial KKT point coincides with a partial optimum point under the partial Slater constraint qualification, and that the penalty function of biconvex programming is partially exact if the partial KKT condition holds. We prove error-bound properties between the smoothing penalty function and the penalty function of biconvex programming when the partial KKT condition holds, as well as error bounds between the objective value at a partial optimum point of the smoothing penalty function problem and its [Formula: see text]-feasible solution. Hence, a partial optimum point of the smoothing penalty function optimization problem is an approximate partial optimum point of biconvex programming. Second, based on the smoothing penalty function, two algorithms are presented for finding a partial optimum or an approximate [Formula: see text]-feasible solution to an inequality-constrained biconvex optimization problem, and their convergence is proved under some conditions. Finally, numerical experiments show that a satisfactory approximate solution can be obtained by the proposed algorithm.
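The notion of a partial optimum (each block minimal while the other block is fixed) can be illustrated on a tiny unconstrained biconvex function. This sketch is my own stand-in for the concept and does not implement the authors' smoothing penalty algorithm:

```python
def f(x, y):
    # Biconvex but not jointly convex: convex in x for fixed y and
    # convex in y for fixed x, due to the xy coupling inside the square.
    return (x * y - 1.0) ** 2 + 0.5 * (x * x + y * y)

def alternate(x, y, iters=50):
    """Alternating exact minimization; the limit is a partial optimum."""
    for _ in range(iters):
        x = 2.0 * y / (1.0 + 2.0 * y * y)  # argmin over x for fixed y
        y = 2.0 * x / (1.0 + 2.0 * x * x)  # argmin over y for fixed x
    return x, y
```

Starting from (1, 1) the iterates converge to x = y = 1/√2, a partial optimum: neither coordinate can be improved alone. The origin is another partial optimum with the larger value f(0, 0) = 1, showing that partial optima need not be unique, which is why conditions such as the partial KKT condition are stated pointwise.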

Tian Ming - One of the best experts on this subject based on the ideXlab platform.

  • Updated Learning Algorithm of Support Vector Data Description Based on K-Means Clustering
    Computer Engineering, 2009
    Co-Authors: Tian Ming
    Abstract:

    To address the flaw that the recognition precision of Support Vector Data Description based on K-Means clustering (KMSVDD) is lower than that of traditional Support Vector Data Description (SVDD), an improved algorithm is proposed. The algorithm learns the support vectors of every cluster and produces a middle model, then finds the data among the non-support vectors that violate the middle model's Karush-Kuhn-Tucker (KKT) condition, and obtains the final model by learning them together with all support vectors. Experimental results show that the improved algorithm has a computing expenditure similar to KMSVDD's, while its recognition accuracy is higher than KMSVDD's and close to that of traditional SVDD.
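The KKT-violation filter at the heart of the improvement can be sketched with a hypersphere stand-in for the middle model (the helper names and the (center, radius) representation are illustrative assumptions, not the paper's implementation):

```python
def violates_kkt(point, center, radius, tol=1e-9):
    # For SVDD, a non-support vector satisfies the KKT condition when it
    # lies inside (or on) the learned hypersphere; a point outside
    # violates it and would change the model if included in training.
    dist_sq = sum((p - c) ** 2 for p, c in zip(point, center))
    return dist_sq > radius ** 2 + tol

def retraining_set(support_vectors, non_support_vectors, center, radius):
    # Keep every current support vector, and add only the non-support
    # vectors that violate the middle model's KKT condition.
    violators = [p for p in non_support_vectors
                 if violates_kkt(p, center, radius)]
    return support_vectors + violators
```

Filtering this way keeps the final training set small, which is where the reported KMSVDD-like computing expenditure comes from.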

Mineo Kaneko - One of the best experts on this subject based on the ideXlab platform.

  • KKT condition inspired solution of DVFS with limited number of voltage levels
    International Symposium on Circuits and Systems, 2017
    Co-Authors: Mineo Kaneko
    Abstract:

    This paper discusses Dynamic Voltage-Frequency Scaling (DVFS) with a limited number of voltage levels (Multi-Level DVFS, ML-DVFS), and investigates the concurrent optimization of voltage levels and voltage assignment. Based on the Karush-Kuhn-Tucker (KKT) conditions for the optimum solution of our ML-DVFS optimization problem, several properties of the optimum solution are revealed. The proposed solution algorithm consists of enumerating partitions of the task set and a nested two-level bisection search on the voltage levels and on an auxiliary parameter corresponding to one of the Lagrange multipliers in the KKT conditions. Experimental results verify the performance of ML-DVFS against unlimited DVFS.
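The role of the auxiliary Lagrange-multiplier parameter can be sketched on a continuous single-processor relaxation (the energy model E_i = a_i / t_i² and all names here are illustrative assumptions; the paper's ML-DVFS problem additionally restricts voltages to a small discrete set):

```python
def schedule_by_multiplier_bisection(a, deadline, lo=1e-9, hi=1e9, iters=200):
    """Bisect on the multiplier mu of the deadline constraint.

    Stationarity of sum(a_i/t_i**2) + mu*(sum(t_i) - deadline) gives
    t_i(mu) = (2*a_i/mu)**(1/3); the total time is decreasing in mu, so
    a bisection search recovers the mu that meets the deadline exactly.
    """
    def total_time(mu):
        return sum((2.0 * ai / mu) ** (1.0 / 3.0) for ai in a)

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if total_time(mid) > deadline:
            lo = mid  # running too slow: raise the "price of time"
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    return [(2.0 * ai / mu) ** (1.0 / 3.0) for ai in a]
```

The monotonicity of total time in the multiplier is what makes a bisection search on that parameter sound, mirroring the auxiliary-parameter search the abstract describes.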

  • ISCAS - KKT-condition inspired solution of DVFS with limited number of voltage levels
    2017 IEEE International Symposium on Circuits and Systems (ISCAS), 2017
    Co-Authors: Mineo Kaneko
    Abstract:

    This paper discusses Dynamic Voltage-Frequency Scaling (DVFS) with a limited number of voltage levels (Multi-Level DVFS, ML-DVFS), and investigates the concurrent optimization of voltage levels and voltage assignment. Based on the Karush-Kuhn-Tucker (KKT) conditions for the optimum solution of our ML-DVFS optimization problem, several properties of the optimum solution are revealed. The proposed solution algorithm consists of enumerating partitions of the task set and a nested two-level bisection search on the voltage levels and on an auxiliary parameter corresponding to one of the Lagrange multipliers in the KKT conditions. Experimental results verify the performance of ML-DVFS against unlimited DVFS.

  • KKT condition based study on DVFS for heterogeneous task set
    Asia Pacific Conference on Circuits and Systems, 2016
    Co-Authors: Mineo Kaneko
    Abstract:

    Power consumption is one of the major concerns in high performance VLSI design. Given the well-known trade-off between power and speed driven by the selection of the supply voltage level, Voltage-Frequency Scaling (VFS) is one of the promising techniques for saving energy while meeting performance requirements. In this paper, we consider multiple tasks to be processed on a single processor under a single overall deadline, and ask how supply voltage levels should be determined in the VFS environment. Our solution is derived from the Karush-Kuhn-Tucker (KKT) conditions of the formulated nonlinear optimization problem, and it is then shown that the KKT conditions correspond to a kind of energy balance between tasks, where energy-vs-time efficiency plays an important role in the voltage-frequency schedule. While the discussion in this paper is limited to the single-processor case, it may serve as an important basis for various DVFS models, including DVFS for multiple-processor systems and DVFS with a limited number of voltage levels.
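The "energy balance" reading of the KKT conditions admits a closed form in a simple continuous setting (again under an assumed illustrative energy model E_i = a_i / t_i², not the paper's exact formulation): stationarity forces the marginal energy saving per unit of extra time, 2·a_i/t_i³, to be equal across all tasks.

```python
def balanced_schedule(a, deadline):
    # KKT stationarity -2*a_i/t_i**3 + mu = 0 makes t_i proportional to
    # a_i**(1/3); scaling to use the whole deadline gives the schedule.
    cbrt = [ai ** (1.0 / 3.0) for ai in a]
    total = sum(cbrt)
    return [deadline * c / total for c in cbrt]

def marginal_energy(ai, ti):
    # The quantity the KKT conditions equalize across tasks.
    return 2.0 * ai / ti ** 3
```

Tasks with a larger energy coefficient get more time, but only up to the point where every task's marginal energy is the same; that equalization is the balance the abstract refers to.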

  • APCCAS - KKT-condition based study on DVFS for heterogeneous task set
    2016 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), 2016
    Co-Authors: Mineo Kaneko
    Abstract:

    Power consumption is one of the major concerns in high performance VLSI design. Given the well-known trade-off between power and speed driven by the selection of the supply voltage level, Voltage-Frequency Scaling (VFS) is one of the promising techniques for saving energy while meeting performance requirements. In this paper, we consider multiple tasks to be processed on a single processor under a single overall deadline, and ask how supply voltage levels should be determined in the VFS environment. Our solution is derived from the Karush-Kuhn-Tucker (KKT) conditions of the formulated nonlinear optimization problem, and it is then shown that the KKT conditions correspond to a kind of energy balance between tasks, where energy-vs-time efficiency plays an important role in the voltage-frequency schedule. While the discussion in this paper is limited to the single-processor case, it may serve as an important basis for various DVFS models, including DVFS for multiple-processor systems and DVFS with a limited number of voltage levels.

Liu Qi-ming - One of the best experts on this subject based on the ideXlab platform.

  • Improved Incremental Learning Algorithm for Support Vector Data Description
    Computer Engineering, 2009
    Co-Authors: Liu Qi-ming
    Abstract:

    An improved incremental learning algorithm for Support Vector Data Description (SVDD) is presented, based on an analysis of the characteristics of old and new samples. In the course of incremental learning, the algorithm chooses as training samples the support vector set, the non-support vectors in the old samples that may be converted into support vectors, and the new samples that violate the Karush-Kuhn-Tucker (KKT) condition; the useless samples are discarded. Experimental results show that the training time is greatly reduced while the classification precision is guaranteed.