Time Policy


The experts below are selected from a list of 712,305 experts worldwide, ranked by the ideXlab platform.

Nitin Sachdeva - One of the best experts on this subject based on the ideXlab platform.

  • Generalized software release and testing stop time policy
    International Journal of Quality & Reliability Management, 2019
    Co-Authors: Avinash K. Shrivastava, Nitin Sachdeva
    Abstract:

    Purpose: Almost everything around us is either the output of software-driven machines or works with software. Software firms work hard to meet users' requirements, but developing fault-free software is not possible, and market competition discourages firms from delaying release. Early release, however, means users report more failures during operation, because more faults remain in the software. To balance these pressures, firms now release software after an adequate amount of testing, rather than delaying release until the software is highly reliable, and then issue patches post-release to improve reliability further. The paper aims to discuss these issues.

    Design/methodology/approach: The authors develop a generalized framework, assuming that testing continues beyond the software's release, to determine when to release the software and when to stop testing. Because the testing team is not always skilled, the rates of fault detection and correction may change over time; testers may also commit errors during development, increasing the number of faults. The proposed model therefore accounts for both factors. The authors also perform a sensitivity analysis on the cost-model parameters to assess their impact on the software testing and release policy.

    Findings: The proposed model indicates that it is better to release early and continue testing in the post-release phase. Using this model, firms gain the benefits of early release while users benefit from post-release software reliability assurance.

    Originality/value: The authors propose a generalized model for software release scheduling.
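
The release-time trade-off described above, where longer testing removes more faults but costs more and delays release, can be sketched numerically. The sketch below is a minimal illustration and not the authors' model: it assumes a hypothetical Goel-Okumoto fault-detection curve and made-up cost parameters, and finds the release time that minimizes total expected cost by grid search.

```python
import math

# Toy software-release cost model (illustrative only, not the paper's
# exact formulation). Assumes a Goel-Okumoto mean-value function
# m(t) = a * (1 - exp(-b*t)) for the expected faults detected by time t.
a, b = 100.0, 0.05          # hypothetical: total faults, detection rate
c_test = 10.0               # cost per unit of testing time
c_pre = 50.0                # cost of fixing a fault found before release
c_post = 200.0              # cost of a field failure after release

def m(t):
    return a * (1.0 - math.exp(-b * t))

def total_cost(T):
    # testing effort + pre-release fixes + expected post-release failures
    return c_test * T + c_pre * m(T) + c_post * (a - m(T))

# Grid search over candidate release times for the cost minimizer.
best_T = min((0.5 * i for i in range(401)), key=total_cost)
print(best_T, round(total_cost(best_T), 1))
```

Because post-release failures are costlier than pre-release fixes, the minimizer balances the marginal testing cost against the marginal saving from catching one more fault before release; with these made-up parameters the optimum lands well before all faults are found, consistent with the paper's early-release-then-keep-testing conclusion.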

Qinglai Wei - One of the best experts on this subject based on the ideXlab platform.

  • Policy Iteration for Optimal Control of Discrete-Time Nonlinear Systems
    Adaptive Dynamic Programming with Applications in Optimal Control, 2017
    Co-Authors: Derong Liu, Qinglai Wei, Xiong Yang, Ding Wang, Hongliang Li
    Abstract:

    This chapter is concerned with discrete-time policy iteration adaptive dynamic programming (ADP) methods for solving the infinite-horizon optimal control problem of nonlinear systems. The idea is to use a policy iteration ADP technique to obtain the iterative control laws that minimize the iterative value functions. The main contribution of this chapter is to analyze the convergence and stability properties of the policy iteration method for discrete-time nonlinear systems. It is shown that the iterative value function converges nonincreasingly to the optimal solution of the Bellman equation. It is also proven that any of the iterative control laws can stabilize the nonlinear system. Neural networks are used to approximate the iterative value functions and to compute the iterative control laws, facilitating the implementation of the iterative ADP algorithm; the convergence of the weight matrices is also analyzed. Finally, numerical results and analysis are presented to illustrate the performance of the presented method.
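
To make the policy evaluation / policy improvement loop concrete, here is a minimal sketch on a scalar discrete-time linear system, an illustrative special case with hypothetical parameters, not the chapter's nonlinear neural-network implementation. For a linear plant with quadratic cost the value function is exactly p*x^2, so policy evaluation reduces to a scalar Lyapunov equation with a closed-form solution.

```python
# Policy iteration for the scalar system x_{k+1} = a*x_k + b*u_k
# with stage cost q*x^2 + r*u^2 and control law u = -k*x.
a, b, q, r = 1.1, 1.0, 1.0, 1.0   # hypothetical plant and cost parameters
k = 0.5                            # initial stabilizing gain: |a - b*k| < 1

values = []
for _ in range(50):
    # Policy evaluation: p solves p = q + r*k^2 + (a - b*k)^2 * p,
    # which has the closed form below because |a - b*k| < 1.
    p = (q + r * k**2) / (1.0 - (a - b * k)**2)
    values.append(p)
    # Policy improvement: minimize q*x^2 + r*u^2 + p*(a*x + b*u)^2 over u.
    k = a * b * p / (r + b**2 * p)

print(round(values[-1], 4), round(k, 4))
```

The recorded values illustrate the chapter's two analytical claims in this special case: the iterative value function is nonincreasing from one iteration to the next, and every intermediate gain keeps the closed loop stable; the limit here is the solution of the discrete algebraic Riccati equation.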

  • Policy iteration adaptive dynamic programming algorithm for discrete-time nonlinear systems
    IEEE Transactions on Neural Networks, 2014
    Co-Authors: Derong Liu, Qinglai Wei
    Abstract:

    This paper is concerned with a new discrete-time policy iteration adaptive dynamic programming (ADP) method for solving the infinite-horizon optimal control problem of nonlinear systems. The idea is to use an iterative ADP technique to obtain the iterative control law that optimizes the iterative performance index function. The main contribution of this paper is to analyze, for the first time, the convergence and stability properties of the policy iteration method for discrete-time nonlinear systems. It is shown that the iterative performance index function converges nonincreasingly to the optimal solution of the Hamilton-Jacobi-Bellman equation. It is also proven that any of the iterative control laws can stabilize the nonlinear system. Neural networks are used to approximate the performance index function and to compute the optimal control law, facilitating the implementation of the iterative ADP algorithm; the convergence of the weight matrices is analyzed. Finally, numerical results and analysis are presented to illustrate the performance of the developed method.
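
The nonincreasing convergence of the iterative performance index can also be observed in a tabular setting, where policy evaluation is exact. The sketch below runs policy iteration on a tiny hypothetical two-state, two-action discounted MDP (invented for illustration, not from the paper) and records the cost-to-go after each evaluation step.

```python
# Tabular policy iteration on a 2-state, 2-action cost-minimization MDP.
# Deterministic transitions: action a moves every state to state a.
gamma = 0.9
nxt  = {0: {0: 0, 1: 1}, 1: {0: 0, 1: 1}}      # nxt[s][a] = next state
cost = {0: {0: 2.0, 1: 0.5}, 1: {0: 1.0, 1: 3.0}}  # cost[s][a]

def evaluate(policy):
    # Policy evaluation by fixed-point iteration (converges geometrically).
    V = {0: 0.0, 1: 0.0}
    for _ in range(500):
        V = {s: cost[s][policy[s]] + gamma * V[nxt[s][policy[s]]] for s in V}
    return V

policy = {0: 0, 1: 0}
history = []
while True:
    V = evaluate(policy)
    history.append(V)
    # Policy improvement: greedy with respect to the current cost-to-go.
    new = {s: min((0, 1), key=lambda a: cost[s][a] + gamma * V[nxt[s][a]])
           for s in V}
    if new == policy:
        break
    policy = new

print(policy, {s: round(v, 3) for s, v in V.items()})
```

Each improvement step can only lower the cost-to-go in every state, mirroring the monotonicity property the paper proves for the general nonlinear case; the loop terminates when the greedy policy no longer changes.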

Avinash K. Shrivastava - One of the best experts on this subject based on the ideXlab platform.

  • Generalized software release and testing stop time policy
    International Journal of Quality & Reliability Management, 2019
    Co-Authors: Avinash K. Shrivastava, Nitin Sachdeva

Derong Liu - One of the best experts on this subject based on the ideXlab platform.

  • Policy Iteration for Optimal Control of Discrete-Time Nonlinear Systems
    Adaptive Dynamic Programming with Applications in Optimal Control, 2017
    Co-Authors: Derong Liu, Qinglai Wei, Xiong Yang, Ding Wang, Hongliang Li

  • Policy iteration adaptive dynamic programming algorithm for discrete-time nonlinear systems
    IEEE Transactions on Neural Networks, 2014
    Co-Authors: Derong Liu, Qinglai Wei

S.h. Clearwater - One of the best experts on this subject based on the ideXlab platform.

  • CCGRID - With great reliability comes great responsibility: tradeoffs of run-time policy on high-reliability systems
    IEEE International Symposium on Cluster Computing and the Grid (CCGrid 2004), 2004
    Co-Authors: S.d. Kleban, J.r. Johnston, J.a. Ang, S.h. Clearwater
    Abstract:

    In this paper we describe a simulation study to improve performance on a large, highly utilized cluster at Sandia National Laboratories. A unique characteristic of the cluster is that there are very few constraints on job size; in particular, run time is limited only by system times, which occur about every two weeks. The major contribution of this paper is that we quantify the difference in makespan between running a single long job and its equivalent as many shorter jobs. We find that running longer jobs benefits the facility as a whole when cycle-weighted makespans are considered, and that running shorter jobs has an overall beneficial effect on the unweighted makespan for the jobs and for most users.
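
A toy model can illustrate one side of this long-versus-short-job trade-off. The sketch below is a deliberately simplified assumption-laden sketch, not the paper's simulation: it assumes equal-length jobs run back to back, and a system interrupt at a fixed interval that kills whatever job is running, losing its work and forcing a restart.

```python
def makespan(total_work, job_len, interval):
    # Toy model: jobs of length job_len run back to back; an interrupt
    # every `interval` hours kills the running job, whose work is lost
    # and which restarts after the interrupt.
    assert job_len <= interval, "a job longer than the interval never finishes"
    t, remaining = 0.0, total_work
    while remaining > 1e-9:
        job = min(job_len, remaining)      # last job may be shorter
        time_left = interval - (t % interval)
        if job <= time_left + 1e-9:
            t += job
            remaining -= job
        else:
            t += time_left                 # job killed; restart afterwards
    return t

W, I = 1000.0, 336.0                # 1000 h of work, interrupt every 2 weeks
long_j  = makespan(W, 300.0, I)     # a few long jobs, near the interval limit
short_j = makespan(W, 10.0, I)      # many short jobs
print(long_j, short_j)
```

In this toy setting, short jobs waste less time to kills near each interrupt, so the same total work finishes sooner, which is loosely consistent with the paper's finding that shorter jobs help the unweighted makespan; the cycle-weighted facility-level effect the paper measures is outside this sketch's scope.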