The Experts below are selected from a list of 72 Experts worldwide ranked by the ideXlab platform
Calin Belta - One of the best experts on this subject based on the ideXlab platform.
-
Temporal Logic motion control using actor-critic methods
The International Journal of Robotics Research, 2015. Co-Authors: Jing Wang, Xu Chu Ding, Morteza Lahijanian, Ioannis Ch. Paschalidis, Calin Belta. Abstract: This paper considers the problem of deploying a robot from a specification given as a temporal logic statement about some properties satisfied by the regions of a large, partitioned environment. We assume that the robot has noisy sensors and actuators and model its motion through the regions of the environment as a Markov decision process (MDP). The robot control problem becomes finding the control policy which maximizes the probability of satisfying the temporal logic task on the MDP. For a large environment, obtaining transition probabilities for each state-action pair, as well as solving the necessary optimization problem for the optimal policy, are computationally intensive. To address these issues, we propose an approximate dynamic programming framework based on a least-squares temporal difference learning method of the actor-critic type. This framework operates on sample paths of the robot and optimizes a randomized control policy with respect to a small set of parameters. The transition probabilities are obtained only when needed. Simulations confirm that convergence of the parameters translates to an approximately optimal policy.
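A minimal sketch of the actor-critic idea in the abstract above, not the authors' implementation: a robot with noisy actuation (a toy MDP), a randomized policy with a small parameter vector, a temporal-difference critic, and a policy-gradient actor, all driven by sample paths. The temporal logic task is reduced here to its simplest proxy, eventually reaching a goal cell, so the learned value of the start state approximates the satisfaction probability. The corridor MDP, feature vector, and learning rates are all invented for illustration.

```python
# Illustrative sketch only: toy MDP + randomized policy + TD critic + actor update.
import math
import random

random.seed(0)

N = 5                 # 1-D corridor of N cells; cell N-1 is the "accepting" region
GOAL = N - 1
ACTIONS = (+1, -1)    # move right / move left
SLIP = 0.1            # actuator noise: with prob SLIP the commanded move is reversed

def step(s, a):
    """Noisy transition, sampled only when needed (no transition matrix stored)."""
    move = a if random.random() > SLIP else -a
    return min(max(s + move, 0), N - 1)

def features(s, a):
    """Tiny invented feature vector for the randomized policy."""
    sign = 1.0 if a == +1 else -1.0
    return [sign, sign * s / (N - 1)]

def policy_probs(s, theta):
    """Softmax policy over the two actions, parameterized by theta."""
    scores = [math.exp(sum(t * f for t, f in zip(theta, features(s, a)))) for a in ACTIONS]
    z = sum(scores)
    return [x / z for x in scores]

def run(episodes=2000, alpha=0.05, beta=0.05, gamma=0.99):
    theta = [0.0, 0.0]          # actor parameters (small set, as in the abstract)
    V = [0.0] * N               # critic: tabular TD(0) value estimates
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            probs = policy_probs(s, theta)
            i = 0 if random.random() < probs[0] else 1
            a = ACTIONS[i]
            s2 = step(s, a)
            r = 1.0 if s2 == GOAL else 0.0
            delta = r + (0.0 if s2 == GOAL else gamma * V[s2]) - V[s]  # TD error
            V[s] += alpha * delta
            # actor: grad log pi(a|s) = phi(s,a) - E_pi[phi(s,.)]
            exp_f = [probs[0] * fp + probs[1] * fm
                     for fp, fm in zip(features(s, +1), features(s, -1))]
            theta = [t + beta * delta * (f - e)
                     for t, f, e in zip(theta, features(s, a), exp_f)]
            if s2 == GOAL:
                break
            s = s2
    return theta, V

theta, V = run()
```

After training, the policy at the start cell should strongly prefer moving toward the accepting region, and the critic's value near the goal approaches the (discounted) probability of satisfying the reachability task.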
-
Receding horizon temporal logic control in dynamic environments
The International Journal of Robotics Research, 2014. Co-Authors: Alphan Ulusoy, Calin Belta. Abstract: We present a receding horizon method for controlling an autonomous vehicle that must satisfy a rich mission specification over service requests occurring at the regions of a partitioned environment. The overall mission specification consists of a temporal logic statement over a set of static, a priori known requests, a regular expression over a set of dynamic requests that can be sensed only locally, and a servicing priority order over these dynamic requests. Our approach is based on two main steps. First, we construct an abstraction for the motion of the vehicle in the environment by using input-output linearization and assignment of vector fields to the regions in the partition. Second, a receding horizon controller computes local plans within the sensing range of the vehicle such that both local and global mission specifications are satisfied. We implement and evaluate our method through experiments and simulations consisting of a quadrotor performing a persistent surveillance task over a planar grid environment.
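A hedged sketch of the receding-horizon idea described above, not the paper's controller: the vehicle plans only within its sensing range at every step, servicing locally sensed dynamic requests by priority while still making progress toward a statically known request. The grid, sensing radius, request names, and greedy one-step planner are all invented for illustration (the paper uses vector-field abstractions and formal local plans, which are omitted here).

```python
# Illustrative sketch only: priority-ordered local servicing within a sensing range.
SENSE = 2  # Chebyshev sensing radius, in cells (invented)

def sensed(pos, dynamic):
    """Dynamic requests currently visible from pos."""
    return {name: cell for name, cell in dynamic.items()
            if max(abs(cell[0] - pos[0]), abs(cell[1] - pos[1])) <= SENSE}

def step_toward(pos, target):
    """One 8-connected grid step toward target."""
    dx = (target[0] > pos[0]) - (target[0] < pos[0])
    dy = (target[1] > pos[1]) - (target[1] < pos[1])
    return (pos[0] + dx, pos[1] + dy)

def receding_horizon(start, static_goal, dynamic, priority, max_steps=50):
    """At every step: serve the highest-priority sensed dynamic request,
    otherwise head for the statically known request. Returns the trajectory."""
    pos, traj, pending = start, [start], dict(dynamic)
    for _ in range(max_steps):
        local = sensed(pos, pending)
        target = static_goal
        for name in priority:           # servicing priority order
            if name in local:
                target = local[name]
                break
        pos = step_toward(pos, target)
        traj.append(pos)
        pending = {n: c for n, c in pending.items() if c != pos}  # request serviced
        if pos == static_goal and not pending:
            break
    return traj

traj = receding_horizon(start=(0, 0), static_goal=(6, 6),
                        dynamic={"fire": (2, 1), "survivor": (5, 4)},
                        priority=["survivor", "fire"])
```

In this toy run the vehicle detours to each dynamic request as it enters the sensing range and still ends at the static goal.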
-
Receding horizon control in dynamic environments from temporal logic specifications
Robotics: Science and Systems, 2013. Co-Authors: Alphan Ulusoy, Michael Marrazzo, Calin Belta. Abstract: We present a control strategy for an autonomous vehicle that is required to satisfy a rich mission specification over service requests occurring at the regions of a partitioned environment. The overall mission specification consists of a temporal logic statement over a set of static, a priori known requests, and a servicing priority order over a set of dynamic requests that can be sensed locally. Our approach is based on two main steps. First, we construct an abstraction for the motion of the vehicle in the environment by using input-output linearization and assignment of vector fields to the regions in the partition. Second, a receding horizon controller computes local plans within the sensing range of the vehicle such that both local and global mission specifications are satisfied. We implement and evaluate our method in an experimental setup consisting of a quadrotor performing a persistent surveillance task over a planar grid environment.
-
Incremental Control Synthesis in Probabilistic Environments with Temporal Logic Constraints
arXiv: Robotics, 2012. Co-Authors: Alphan Ulusoy, Tichakorn Wongpiromsarn, Calin Belta. Abstract: In this paper, we present a method for optimal control synthesis of a plant that interacts with a set of agents in a graph-like environment. The control specification is given as a temporal logic statement about some properties that hold at the vertices of the environment. The plant is assumed to be deterministic, while the agents are probabilistic Markov models. The goal is to control the plant such that the probability of satisfying a syntactically co-safe Linear Temporal Logic formula is maximized. We propose a computationally efficient incremental approach based on the fact that temporal logic verification is computationally cheaper than synthesis. We present a case study where we compare our approach to the classical non-incremental approach in terms of computation time and memory usage.
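An invented toy example, not the paper's algorithm, of the key observation in the abstract above: checking a given plan against the specification (verification) is much cheaper than searching for one (synthesis), so probabilistic agents can be added incrementally and full synthesis re-run only when the current plan fails verification. The co-safe specification is simplified here to "reach the goal while the probability of never meeting an agent stays above a threshold", plans are monotone grid paths, and agents are independent per-cell occupancy probabilities; all of that machinery is a stand-in for the product-automaton construction in the paper.

```python
# Illustrative sketch only: verify-first, resynthesize-on-failure loop.
import itertools

def plans(start, goal):
    """All monotone right/up paths from start to goal (the synthesis search space)."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    for moves in sorted(set(itertools.permutations("R" * dx + "U" * dy))):
        pos, path = start, [start]
        for m in moves:
            pos = (pos[0] + 1, pos[1]) if m == "R" else (pos[0], pos[1] + 1)
            path.append(pos)
        yield path

def verify(path, agents, threshold=0.8):
    """Cheap check of one plan: probability the plant never meets an agent,
    with agents occupying cells independently with the given probabilities."""
    p_ok = 1.0
    for cell in path:
        for occupancy in agents:
            p_ok *= 1.0 - occupancy.get(cell, 0.0)
    return p_ok >= threshold

def synthesize(start, goal, agents):
    """Expensive step: enumerate candidate plans, return the first that verifies."""
    for path in plans(start, goal):
        if verify(path, agents):
            return path
    return None

def incremental(start, goal, agent_stream):
    """Add agents one at a time; resynthesize only on verification failure."""
    agents, plan, resyntheses = [], synthesize(start, goal, []), 0
    for agent in agent_stream:
        agents.append(agent)
        if not verify(plan, agents):                 # cheap, done every time
            plan = synthesize(start, goal, agents)   # expensive, done rarely
            resyntheses += 1
    return plan, resyntheses

plan, n = incremental((0, 0), (3, 3),
                      [{(1, 0): 0.9}, {(2, 3): 0.05}, {(2, 1): 0.9}])
```

In this run only two of the three added agents invalidate the current plan, so synthesis runs twice instead of once per agent; with larger state spaces the saved synthesis calls dominate the cost, which is the incremental approach's point.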
Alphan Ulusoy - One of the best experts on this subject based on the ideXlab platform.
-
Receding horizon temporal logic control in dynamic environments
The International Journal of Robotics Research, 2014. Co-Authors: Alphan Ulusoy, Calin Belta. Abstract: We present a receding horizon method for controlling an autonomous vehicle that must satisfy a rich mission specification over service requests occurring at the regions of a partitioned environment. The overall mission specification consists of a temporal logic statement over a set of static, a priori known requests, a regular expression over a set of dynamic requests that can be sensed only locally, and a servicing priority order over these dynamic requests. Our approach is based on two main steps. First, we construct an abstraction for the motion of the vehicle in the environment by using input-output linearization and assignment of vector fields to the regions in the partition. Second, a receding horizon controller computes local plans within the sensing range of the vehicle such that both local and global mission specifications are satisfied. We implement and evaluate our method through experiments and simulations consisting of a quadrotor performing a persistent surveillance task over a planar grid environment.
-
Receding horizon control in dynamic environments from temporal logic specifications
Robotics: Science and Systems, 2013. Co-Authors: Alphan Ulusoy, Michael Marrazzo, Calin Belta. Abstract: We present a control strategy for an autonomous vehicle that is required to satisfy a rich mission specification over service requests occurring at the regions of a partitioned environment. The overall mission specification consists of a temporal logic statement over a set of static, a priori known requests, and a servicing priority order over a set of dynamic requests that can be sensed locally. Our approach is based on two main steps. First, we construct an abstraction for the motion of the vehicle in the environment by using input-output linearization and assignment of vector fields to the regions in the partition. Second, a receding horizon controller computes local plans within the sensing range of the vehicle such that both local and global mission specifications are satisfied. We implement and evaluate our method in an experimental setup consisting of a quadrotor performing a persistent surveillance task over a planar grid environment.
-
Incremental Control Synthesis in Probabilistic Environments with Temporal Logic Constraints
arXiv: Robotics, 2012. Co-Authors: Alphan Ulusoy, Tichakorn Wongpiromsarn, Calin Belta. Abstract: In this paper, we present a method for optimal control synthesis of a plant that interacts with a set of agents in a graph-like environment. The control specification is given as a temporal logic statement about some properties that hold at the vertices of the environment. The plant is assumed to be deterministic, while the agents are probabilistic Markov models. The goal is to control the plant such that the probability of satisfying a syntactically co-safe Linear Temporal Logic formula is maximized. We propose a computationally efficient incremental approach based on the fact that temporal logic verification is computationally cheaper than synthesis. We present a case study where we compare our approach to the classical non-incremental approach in terms of computation time and memory usage.
-
CDC - Incremental control synthesis in probabilistic environments with Temporal Logic constraints
2012 IEEE 51st IEEE Conference on Decision and Control (CDC), 2012. Co-Authors: Alphan Ulusoy, Tichakorn Wongpiromsarn, Calin Belta. Abstract: In this paper, we present a method for optimal control synthesis of a plant that interacts with a set of agents in a graph-like environment. The control specification is given as a temporal logic statement about some properties that hold at the vertices of the environment. The plant is assumed to be deterministic, while the agents are probabilistic Markov models. The goal is to control the plant such that the probability of satisfying a syntactically co-safe Linear Temporal Logic formula is maximized. We propose a computationally efficient incremental approach based on the fact that temporal logic verification is computationally cheaper than synthesis. We present a case study where we compare our approach to the classical non-incremental approach in terms of computation time and memory usage.
Xu Chu Ding - One of the best experts on this subject based on the ideXlab platform.
-
Temporal Logic motion control using actor-critic methods
The International Journal of Robotics Research, 2015. Co-Authors: Jing Wang, Xu Chu Ding, Morteza Lahijanian, Ioannis Ch. Paschalidis, Calin Belta. Abstract: This paper considers the problem of deploying a robot from a specification given as a temporal logic statement about some properties satisfied by the regions of a large, partitioned environment. We assume that the robot has noisy sensors and actuators and model its motion through the regions of the environment as a Markov decision process (MDP). The robot control problem becomes finding the control policy which maximizes the probability of satisfying the temporal logic task on the MDP. For a large environment, obtaining transition probabilities for each state-action pair, as well as solving the necessary optimization problem for the optimal policy, are computationally intensive. To address these issues, we propose an approximate dynamic programming framework based on a least-squares temporal difference learning method of the actor-critic type. This framework operates on sample paths of the robot and optimizes a randomized control policy with respect to a small set of parameters. The transition probabilities are obtained only when needed. Simulations confirm that convergence of the parameters translates to an approximately optimal policy.
-
Temporal Logic Motion Control using Actor-Critic Methods
arXiv: Robotics, 2012. Co-Authors: Xu Chu Ding, Jing Wang, Morteza Lahijanian, Ioannis Ch. Paschalidis, Calin Belta. Abstract: In this paper, we consider the problem of deploying a robot from a specification given as a temporal logic statement about some properties satisfied by the regions of a large, partitioned environment. We assume that the robot has noisy sensors and actuators and model its motion through the regions of the environment as a Markov Decision Process (MDP). The robot control problem becomes finding the control policy maximizing the probability of satisfying the temporal logic task on the MDP. For a large environment, obtaining transition probabilities for each state-action pair, as well as solving the necessary optimization problem for the optimal policy, are usually not computationally feasible. To address these issues, we propose an approximate dynamic programming framework based on a least-squares temporal difference learning method of the actor-critic type. This framework operates on sample paths of the robot and optimizes a randomized control policy with respect to a small set of parameters. The transition probabilities are obtained only when needed. Hardware-in-the-loop simulations confirm that convergence of the parameters translates to an approximately optimal policy.
-
ICRA - Temporal Logic motion control using actor-critic methods
2012 IEEE International Conference on Robotics and Automation, 2012. Co-Authors: Xu Chu Ding, Jing Wang, Morteza Lahijanian, Ioannis Ch. Paschalidis, Calin Belta. Abstract: In this paper, we consider the problem of deploying a robot from a specification given as a temporal logic statement about some properties satisfied by the regions of a large, partitioned environment. We assume that the robot has noisy sensors and actuators and model its motion through the regions of the environment as a Markov Decision Process (MDP). The robot control problem becomes finding the control policy maximizing the probability of satisfying the temporal logic task on the MDP. For a large environment, obtaining transition probabilities for each state-action pair, as well as solving the necessary optimization problem for the optimal policy, are usually not computationally feasible. To address these issues, we propose an approximate dynamic programming framework based on a least-squares temporal difference learning method of the actor-critic type. This framework operates on sample paths of the robot and optimizes a randomized control policy with respect to a small set of parameters. The transition probabilities are obtained only when needed. Hardware-in-the-loop simulations confirm that convergence of the parameters translates to an approximately optimal policy.
Jing Wang - One of the best experts on this subject based on the ideXlab platform.
-
Temporal Logic motion control using actor-critic methods
The International Journal of Robotics Research, 2015. Co-Authors: Jing Wang, Xu Chu Ding, Morteza Lahijanian, Ioannis Ch. Paschalidis, Calin Belta. Abstract: This paper considers the problem of deploying a robot from a specification given as a temporal logic statement about some properties satisfied by the regions of a large, partitioned environment. We assume that the robot has noisy sensors and actuators and model its motion through the regions of the environment as a Markov decision process (MDP). The robot control problem becomes finding the control policy which maximizes the probability of satisfying the temporal logic task on the MDP. For a large environment, obtaining transition probabilities for each state-action pair, as well as solving the necessary optimization problem for the optimal policy, are computationally intensive. To address these issues, we propose an approximate dynamic programming framework based on a least-squares temporal difference learning method of the actor-critic type. This framework operates on sample paths of the robot and optimizes a randomized control policy with respect to a small set of parameters. The transition probabilities are obtained only when needed. Simulations confirm that convergence of the parameters translates to an approximately optimal policy.
-
Temporal Logic Motion Control using Actor-Critic Methods
arXiv: Robotics, 2012. Co-Authors: Xu Chu Ding, Jing Wang, Morteza Lahijanian, Ioannis Ch. Paschalidis, Calin Belta. Abstract: In this paper, we consider the problem of deploying a robot from a specification given as a temporal logic statement about some properties satisfied by the regions of a large, partitioned environment. We assume that the robot has noisy sensors and actuators and model its motion through the regions of the environment as a Markov Decision Process (MDP). The robot control problem becomes finding the control policy maximizing the probability of satisfying the temporal logic task on the MDP. For a large environment, obtaining transition probabilities for each state-action pair, as well as solving the necessary optimization problem for the optimal policy, are usually not computationally feasible. To address these issues, we propose an approximate dynamic programming framework based on a least-squares temporal difference learning method of the actor-critic type. This framework operates on sample paths of the robot and optimizes a randomized control policy with respect to a small set of parameters. The transition probabilities are obtained only when needed. Hardware-in-the-loop simulations confirm that convergence of the parameters translates to an approximately optimal policy.
-
ICRA - Temporal Logic motion control using actor-critic methods
2012 IEEE International Conference on Robotics and Automation, 2012. Co-Authors: Xu Chu Ding, Jing Wang, Morteza Lahijanian, Ioannis Ch. Paschalidis, Calin Belta. Abstract: In this paper, we consider the problem of deploying a robot from a specification given as a temporal logic statement about some properties satisfied by the regions of a large, partitioned environment. We assume that the robot has noisy sensors and actuators and model its motion through the regions of the environment as a Markov Decision Process (MDP). The robot control problem becomes finding the control policy maximizing the probability of satisfying the temporal logic task on the MDP. For a large environment, obtaining transition probabilities for each state-action pair, as well as solving the necessary optimization problem for the optimal policy, are usually not computationally feasible. To address these issues, we propose an approximate dynamic programming framework based on a least-squares temporal difference learning method of the actor-critic type. This framework operates on sample paths of the robot and optimizes a randomized control policy with respect to a small set of parameters. The transition probabilities are obtained only when needed. Hardware-in-the-loop simulations confirm that convergence of the parameters translates to an approximately optimal policy.
Tichakorn Wongpiromsarn - One of the best experts on this subject based on the ideXlab platform.
-
Incremental Control Synthesis in Probabilistic Environments with Temporal Logic Constraints
arXiv: Robotics, 2012. Co-Authors: Alphan Ulusoy, Tichakorn Wongpiromsarn, Calin Belta. Abstract: In this paper, we present a method for optimal control synthesis of a plant that interacts with a set of agents in a graph-like environment. The control specification is given as a temporal logic statement about some properties that hold at the vertices of the environment. The plant is assumed to be deterministic, while the agents are probabilistic Markov models. The goal is to control the plant such that the probability of satisfying a syntactically co-safe Linear Temporal Logic formula is maximized. We propose a computationally efficient incremental approach based on the fact that temporal logic verification is computationally cheaper than synthesis. We present a case study where we compare our approach to the classical non-incremental approach in terms of computation time and memory usage.
-
CDC - Incremental control synthesis in probabilistic environments with Temporal Logic constraints
2012 IEEE 51st IEEE Conference on Decision and Control (CDC), 2012. Co-Authors: Alphan Ulusoy, Tichakorn Wongpiromsarn, Calin Belta. Abstract: In this paper, we present a method for optimal control synthesis of a plant that interacts with a set of agents in a graph-like environment. The control specification is given as a temporal logic statement about some properties that hold at the vertices of the environment. The plant is assumed to be deterministic, while the agents are probabilistic Markov models. The goal is to control the plant such that the probability of satisfying a syntactically co-safe Linear Temporal Logic formula is maximized. We propose a computationally efficient incremental approach based on the fact that temporal logic verification is computationally cheaper than synthesis. We present a case study where we compare our approach to the classical non-incremental approach in terms of computation time and memory usage.