Control Policy

The Experts below are selected from a list of 366,489 Experts worldwide ranked by the ideXlab platform

Wai-ki Ching - One of the best experts on this subject based on the ideXlab platform.

  • On optimal Control Policy for probabilistic Boolean network: a state reduction approach
    BMC Systems Biology, 2012
    Co-Authors: Xi Chen, Hao Jiang, Yushan Qiu, Wai-ki Ching
    Abstract:

    Background: Probabilistic Boolean Networks (PBNs) are a popular model for studying genetic regulatory networks. An important and practical problem is to find the optimal Control Policy for a PBN so as to prevent the network from entering undesirable states. A number of studies have applied dynamic programming (DP) based methods; however, owing to the high computational complexity of PBNs, the DP method is computationally inefficient for large networks, so it is natural to seek approximation methods. Results: Inspired by state reduction strategies, we use dynamic programming in conjunction with a state reduction approach to lower the computational cost of the DP method. Numerical examples demonstrate both the effectiveness and the efficiency of the proposed method. Conclusions: Finding the optimal Control Policy for PBNs is meaningful; the problem has been shown to be Σ₂ᵖ-hard. By taking the state reduction approach into account, the proposed method speeds up the dynamic programming based algorithm, and it is particularly effective for larger networks.
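The idea described above can be sketched in code: run finite-horizon dynamic programming for Control of a small PBN-like Markov chain, but restrict the DP to states reachable from the initial states rather than all 2^n states. This is a minimal illustrative sketch, not the paper's algorithm; the function name, the cost structure, and the toy network below are assumptions.

```python
def pbn_dp_control(P, state_cost, control_cost, horizon, start_states):
    """P[a][s][t]: transition probability from state s to t under action a.
    Returns (value, policy) restricted to the reachable state set."""
    n_actions = len(P)
    n_states = len(P[0])
    # --- state reduction: keep only states reachable under some action ---
    reachable = set(start_states)
    frontier = set(start_states)
    while frontier:
        nxt = {t for s in frontier for a in range(n_actions)
               for t in range(n_states) if P[a][s][t] > 0}
        frontier = nxt - reachable
        reachable |= frontier
    states = sorted(reachable)
    # --- backward dynamic programming over the reduced state set ---
    V = {s: state_cost[s] for s in states}          # terminal cost
    policy = {}
    for _ in range(horizon):
        newV = {}
        for s in states:
            q = [state_cost[s] + control_cost[a]
                 + sum(P[a][s][t] * V[t] for t in states)
                 for a in range(n_actions)]
            best = min(range(n_actions), key=q.__getitem__)
            newV[s], policy[s] = q[best], best
        V = newV
    return V, policy
```

On a toy two-state chain where state 1 is undesirable, action 0 drifts into it, and action 1 (at a small control cost) forces the desirable state 0, the computed policy chooses action 1 in both states, as expected.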

Xi Chen - One of the best experts on this subject based on the ideXlab platform.

  • ICONS - A shared Control Policy for center-out movement decoding in motor Brain-machine Interface
    IFAC Proceedings Volumes, 2013
    Co-Authors: Xi Chen, Yuxi Liao, Qiaosheng Zhang, Yiwen Wang, Shaomin Zhang, Xiaoxiang Zheng
    Abstract:

    Brain-Machine Interfaces provide a new way to Control peripheral devices directly using signals from the brain. However, because of the uncertainty and instability of brain signals, the decoding method alone cannot meet the demand for accurate Control of the intended movement. We propose a shared Control Policy that incorporates environmental information into the decoding of brain signals. While the monkey manipulated a joystick in a center-out task, the trajectory was updated with a Control signal derived from the currently decoded kinematic information together with the potential targets. Our results show that combining the proposed method with the decoding process increased the correlation coefficient between the predicted trajectory and the true signals by 17.4% on average, indicating that a Control Policy that incorporates environmental information can greatly improve the performance of motor Brain-Machine Interfaces in practice.
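A shared Control Policy of this kind can be illustrated as blending the brain-decoded velocity with an assistive vector toward the candidate target best aligned with the decode. The blending weight, the cosine-similarity target-selection rule, and all names below are illustrative assumptions, not the authors' exact formulation.

```python
import math

def shared_control_step(position, decoded_velocity, targets, alpha=0.6):
    """Return the blended velocity: alpha times the brain-decoded component
    plus (1 - alpha) times a unit vector toward the target whose direction
    best matches the decoded heading (scaled to the decoded speed)."""
    px, py = position
    vx, vy = decoded_velocity
    speed = math.hypot(vx, vy) or 1e-9
    best, best_score = (0.0, 0.0), -2.0
    for tx, ty in targets:
        dx, dy = tx - px, ty - py
        dist = math.hypot(dx, dy) or 1e-9
        # cosine similarity between decoded heading and direction to target
        score = (vx * dx + vy * dy) / (speed * dist)
        if score > best_score:
            best, best_score = (dx / dist, dy / dist), score
    ax, ay = best
    return (alpha * vx + (1 - alpha) * speed * ax,
            alpha * vy + (1 - alpha) * speed * ay)
```

With a decoded velocity pointing roughly at a rightward target, the blended output keeps most of the decoded intent while pulling the heading toward that target, which is the qualitative behavior the abstract describes.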

  • On optimal Control Policy for probabilistic Boolean network: a state reduction approach
    BMC Systems Biology, 2012
    Co-Authors: Xi Chen, Hao Jiang, Yushan Qiu, Wai-ki Ching
    Abstract:

    Background: Probabilistic Boolean Networks (PBNs) are a popular model for studying genetic regulatory networks. An important and practical problem is to find the optimal Control Policy for a PBN so as to prevent the network from entering undesirable states. A number of studies have applied dynamic programming (DP) based methods; however, owing to the high computational complexity of PBNs, the DP method is computationally inefficient for large networks, so it is natural to seek approximation methods. Results: Inspired by state reduction strategies, we use dynamic programming in conjunction with a state reduction approach to lower the computational cost of the DP method. Numerical examples demonstrate both the effectiveness and the efficiency of the proposed method. Conclusions: Finding the optimal Control Policy for PBNs is meaningful; the problem has been shown to be Σ₂ᵖ-hard. By taking the state reduction approach into account, the proposed method speeds up the dynamic programming based algorithm, and it is particularly effective for larger networks.

Keyi Xing - One of the best experts on this subject based on the ideXlab platform.

  • Robust supervisory Control Policy for avoiding deadlock in automated manufacturing systems with unreliable resources
    International Journal of Production Research, 2013
    Co-Authors: Hao Yue, Keyi Xing
    Abstract:

    Developing effective supervisory Control policies that guarantee deadlock-free operation of automated manufacturing systems (AMSs) has been an active area of research for the past two decades. A great deal of work has been done for systems with reliable resources, while only a few works address unreliable resources. This paper addresses the robust deadlock supervisory Control problem in AMSs with multiple unreliable resources. The objective is to develop a robust supervisory Control Policy for AMSs under which the system can continue producing all part types that do not require any of the failed resources, without manual intervention. The Policy consists of a modified Banker’s Algorithm and a set of remaining-resource-capacity constraints; a state is feasible if and only if it satisfies both. By using an improved version of the existing Banker’s Algorithm, the Control Policy is more permissive. An example illustrates that the Policy gains an advantage over the original one...
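At the heart of a Banker's-Algorithm-style supervisory Control Policy is a safety test: a resource-allocation state is admitted only if some completion order lets every part finish. The following is a minimal sketch of the classic safety check for a single resource type; the paper's modified algorithm and its unreliable-resource constraints are not reproduced here.

```python
def is_safe(available, allocation, need):
    """Classic Banker's safety test: True if there exists an ordering in
    which every process can acquire its remaining need from the free pool
    and then release everything it currently holds."""
    n = len(allocation)
    free = available
    done = [False] * n
    progressed = True
    while progressed:
        progressed = False
        for i in range(n):
            if not done[i] and need[i] <= free:
                free += allocation[i]   # process i finishes and releases
                done[i] = True
                progressed = True
    return all(done)
```

For example, with 2 free units, allocations [1, 2, 2], and remaining needs [1, 2, 3], the state is safe (the processes can finish in order), whereas with 0 free units and needs that exceed the free pool no process can proceed and the state is rejected.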

Hao Yue - One of the best experts on this subject based on the ideXlab platform.

  • Robust supervisory Control Policy for avoiding deadlock in automated manufacturing systems with unreliable resources
    International Journal of Production Research, 2013
    Co-Authors: Hao Yue, Keyi Xing
    Abstract:

    Developing effective supervisory Control policies that guarantee deadlock-free operation of automated manufacturing systems (AMSs) has been an active area of research for the past two decades. A great deal of work has been done for systems with reliable resources, while only a few works address unreliable resources. This paper addresses the robust deadlock supervisory Control problem in AMSs with multiple unreliable resources. The objective is to develop a robust supervisory Control Policy for AMSs under which the system can continue producing all part types that do not require any of the failed resources, without manual intervention. The Policy consists of a modified Banker’s Algorithm and a set of remaining-resource-capacity constraints; a state is feasible if and only if it satisfies both. By using an improved version of the existing Banker’s Algorithm, the Control Policy is more permissive. An example illustrates that the Policy gains an advantage over the original one...

Martin A Philbert - One of the best experts on this subject based on the ideXlab platform.

  • radon smoking and lung cancer the need to refocus radon Control Policy
    American Journal of Public Health, 2013
    Co-Authors: Paula M Lantz, David Mendez, Martin A Philbert
    Abstract:

    Exposure to radon is the second leading cause of lung cancer, and the risk is significantly higher for smokers than for nonsmokers. More than 85% of radon-induced lung cancer deaths occur among smokers. The most powerful approach to reducing the public health burden of radon is shaped by two overarching principles: public communication efforts that promote residential radon testing and remediation will be most cost-effective if they are directed primarily at current and former smokers; and focusing on smoking prevention and cessation is the optimal strategy for reducing radon-induced lung cancer in terms of both public health gains and economic efficiency. Tobacco Control Policy is the most promising route to the public health goals of radon Control Policy.