Current System State

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 315,693 experts worldwide, ranked by the ideXlab platform

Melanie N. Zeilinger - One of the best experts on this subject based on the ideXlab platform.

  • Probabilistic model predictive safety certification for learning-based control
    IEEE Transactions on Automatic Control, 2021
    Co-Authors: Kim Peter Wabersich, Lukas Hewing, Andrea Carron, Melanie N. Zeilinger
    Abstract:

    Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states that is known to be safe. A novel formulation allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on Bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing an RL algorithm with safety certificates.
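
    The chance-constraint side of this idea can be made concrete with a toy one-step example: under a Gaussian disturbance model, a chance constraint on the successor state is implied by a deterministic constraint tightened by a quantile of the disturbance. The sketch below is only illustrative, assumes a scalar state with additive Gaussian noise, and uses hypothetical function names that do not come from the paper.

    ```python
    # Toy illustration (not the paper's method): certify a proposed input by
    # checking a constraint tightened so that P(x_next <= x_max) >= p holds
    # under additive Gaussian noise on a scalar state.
    from statistics import NormalDist

    def tightened_bound(x_max: float, sigma: float, p: float) -> float:
        """Deterministic bound whose satisfaction implies the chance constraint
        P(x_nominal + w <= x_max) >= p for w ~ N(0, sigma^2)."""
        return x_max - NormalDist().inv_cdf(p) * sigma

    def certified(x_next_nominal: float, x_max: float, sigma: float, p: float = 0.95) -> bool:
        """One-step stand-in for a safety certificate: accept the learning
        input only if the nominal successor meets the tightened bound."""
        return x_next_nominal <= tightened_bound(x_max, sigma, p)
    ```

    For example, with `x_max = 1.0`, `sigma = 0.1`, and `p = 0.95`, the bound tightens to roughly 0.84, so a nominal successor of 0.8 would be certified while 0.9 would be rejected.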

  • Probabilistic model predictive safety certification for learning-based control
    2019
    Co-Authors: Kim Peter Wabersich, Lukas Hewing, Andrea Carron, Melanie N. Zeilinger
    Abstract:

    Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states that is known to be safe. A novel formulation in terms of a convex receding horizon problem allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on Bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing a simple RL algorithm with safety certificates.

  • Safe exploration of nonlinear dynamical systems: A predictive safety filter for reinforcement learning.
    arXiv: Systems and Control, 2018
    Co-Authors: Kim Peter Wabersich, Melanie N. Zeilinger
    Abstract:

    The transfer of reinforcement learning (RL) techniques into real-world applications is challenged by safety requirements in the presence of physical limitations. Most RL methods, in particular the most popular algorithms, do not support explicit consideration of state and input constraints. In this paper, we address this problem for nonlinear systems with continuous state and input spaces by introducing a predictive safety filter, which turns a constrained dynamical system into an unconstrained safe system to which any RL algorithm can be applied 'out of the box'. The predictive safety filter receives the proposed learning input and decides, based on the current system state, whether it can be safely applied to the real system or whether it has to be modified. Safety is thereby established by a continuously updated safety policy, which is based on a model predictive control formulation using a data-driven system model and considering state- and input-dependent uncertainties.
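
    The filter's accept-or-modify decision can be sketched in a few lines. The toy below checks a proposed input by rolling out a nominal model under a fixed backup policy and falls back to that policy if the prediction leaves the constraint set; it is a simplified stand-in (no data-driven model, no uncertainty description), and all names are illustrative rather than taken from the paper.

    ```python
    # Simplified sketch of a predictive safety filter: forward the learning
    # input when a nominal rollout under a backup policy stays feasible,
    # otherwise substitute the backup input. (The paper uses a model
    # predictive control problem with a data-driven model; here a fixed
    # rollout stands in for that check.)
    def rollout_is_safe(x, u, step, in_bounds, backup, horizon=10):
        """Apply u once, then the backup policy, and check constraints
        along the predicted trajectory."""
        x = step(x, u)
        for _ in range(horizon):
            if not in_bounds(x):
                return False
            x = step(x, backup(x))
        return in_bounds(x)

    def safety_filter(x, u_learn, step, in_bounds, backup):
        """Return the learning input if it is certified safe, else the backup."""
        if rollout_is_safe(x, u_learn, step, in_bounds, backup):
            return u_learn
        return backup(x)
    ```

    For a scalar system `x_next = x + 0.1 * u` with constraint `|x| <= 1` and backup policy `u = -x`, a moderate learning input passes through unchanged, while an input whose predicted successor leaves the bounds is replaced by the backup action.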

Kim Peter Wabersich - One of the best experts on this subject based on the ideXlab platform.

  • Probabilistic model predictive safety certification for learning-based control
    IEEE Transactions on Automatic Control, 2021
    Co-Authors: Kim Peter Wabersich, Lukas Hewing, Andrea Carron, Melanie N. Zeilinger
    Abstract:

    Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states that is known to be safe. A novel formulation allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on Bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing an RL algorithm with safety certificates.

  • Probabilistic model predictive safety certification for learning-based control
    2019
    Co-Authors: Kim Peter Wabersich, Lukas Hewing, Andrea Carron, Melanie N. Zeilinger
    Abstract:

    Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states that is known to be safe. A novel formulation in terms of a convex receding horizon problem allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on Bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing a simple RL algorithm with safety certificates.

  • Safe exploration of nonlinear dynamical systems: A predictive safety filter for reinforcement learning.
    arXiv: Systems and Control, 2018
    Co-Authors: Kim Peter Wabersich, Melanie N. Zeilinger
    Abstract:

    The transfer of reinforcement learning (RL) techniques into real-world applications is challenged by safety requirements in the presence of physical limitations. Most RL methods, in particular the most popular algorithms, do not support explicit consideration of state and input constraints. In this paper, we address this problem for nonlinear systems with continuous state and input spaces by introducing a predictive safety filter, which turns a constrained dynamical system into an unconstrained safe system to which any RL algorithm can be applied 'out of the box'. The predictive safety filter receives the proposed learning input and decides, based on the current system state, whether it can be safely applied to the real system or whether it has to be modified. Safety is thereby established by a continuously updated safety policy, which is based on a model predictive control formulation using a data-driven system model and considering state- and input-dependent uncertainties.

James T. Lin - One of the best experts on this subject based on the ideXlab platform.

  • Deadlock prediction and avoidance based on Petri nets for zone-control automated guided vehicle systems
    International Journal of Production Research, 1995
    Co-Authors: C C Lee, James T. Lin
    Abstract:

    Deadlock problems of zone-control, uni-directional automated guided vehicle (AGV) systems are discussed in this paper. Two types of deadlock in such AGV systems are first classified from the perspective of shared resources, i.e. guide-path zones and buffers. A special class of Petri nets, attributed Petri nets (APN), is defined and used to represent the current state and to generate future states of zone-control AGV systems. We propose an algorithmic procedure to predict in real time, and to avoid, deadlocks that are caused by sharing guide-path zones in zone-control AGV systems. The proposed algorithm utilizes the current system state and predicted future states, generated from the constructed APN, to avoid deadlocks. A modular approach is employed to facilitate the construction of APN models of zone-control AGV systems.
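
    The prediction step can be illustrated with a much simpler stand-in for the APN machinery: with each guide-path zone treated as a single-capacity resource, a deadlock among vehicles corresponds to a cycle in the wait-for relation induced by their next requested zones. The sketch below is hypothetical, captures only this circular-wait condition, and does not reproduce the paper's APN state generation.

    ```python
    # Illustrative stand-in for the deadlock-prediction step: each vehicle
    # holds one zone and requests its next zone; a deadlock is a cycle in
    # the "wait-for" relation (vehicle -> holder of its requested zone).
    def has_deadlock(occupies, requests):
        """occupies: vehicle -> zone currently held.
        requests: vehicle -> zone it wants to enter next."""
        holder = {zone: v for v, zone in occupies.items()}
        waits = {v: holder.get(zone) for v, zone in requests.items()}
        for start in waits:
            seen, v = set(), start
            while v is not None and v not in seen:
                seen.add(v)
                v = waits.get(v)
            if v is not None:  # walk revisited a vehicle: circular wait
                return True
        return False
    ```

    Two vehicles each requesting the zone the other occupies form a cycle and would be flagged; an avoidance step could then deny one of the zone requests before it is granted.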

Lukas Hewing - One of the best experts on this subject based on the ideXlab platform.

  • Probabilistic model predictive safety certification for learning-based control
    IEEE Transactions on Automatic Control, 2021
    Co-Authors: Kim Peter Wabersich, Lukas Hewing, Andrea Carron, Melanie N. Zeilinger
    Abstract:

    Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states that is known to be safe. A novel formulation allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on Bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing an RL algorithm with safety certificates.

  • Probabilistic model predictive safety certification for learning-based control
    2019
    Co-Authors: Kim Peter Wabersich, Lukas Hewing, Andrea Carron, Melanie N. Zeilinger
    Abstract:

    Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states that is known to be safe. A novel formulation in terms of a convex receding horizon problem allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on Bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing a simple RL algorithm with safety certificates.

Andrea Carron - One of the best experts on this subject based on the ideXlab platform.

  • Probabilistic model predictive safety certification for learning-based control
    IEEE Transactions on Automatic Control, 2021
    Co-Authors: Kim Peter Wabersich, Lukas Hewing, Andrea Carron, Melanie N. Zeilinger
    Abstract:

    Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states that is known to be safe. A novel formulation allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on Bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing an RL algorithm with safety certificates.

  • Probabilistic model predictive safety certification for learning-based control
    2019
    Co-Authors: Kim Peter Wabersich, Lukas Hewing, Andrea Carron, Melanie N. Zeilinger
    Abstract:

    Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states that is known to be safe. A novel formulation in terms of a convex receding horizon problem allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on Bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing a simple RL algorithm with safety certificates.