Logic Constraint

The experts below are selected from a list of 105 experts worldwide, ranked by the ideXlab platform.

Ufuk Topcu - One of the best experts on this subject based on the ideXlab platform.

  • Entropy Maximization for Markov Decision Processes Under Temporal Logic Constraints
    IEEE Transactions on Automatic Control, 2020
    Co-Authors: Yagiz Savas, Melkior Ornik, Murat Cubuktepe, Mustafa O. Karabag, Ufuk Topcu
    Abstract:

    We study the problem of synthesizing a policy that maximizes the entropy of a Markov decision process (MDP) subject to a temporal logic constraint. Such a policy minimizes the predictability of the paths it generates, or dually, maximizes the exploration of different paths in an MDP while ensuring the satisfaction of a temporal logic specification. We first show that the maximum entropy of an MDP can be finite, infinite, or unbounded, and we provide necessary and sufficient conditions under which each of these cases holds. We then present an algorithm, based on a convex optimization problem, to synthesize a policy that maximizes the entropy of an MDP. We also show that maximizing the entropy of an MDP is equivalent to maximizing the entropy of the paths that reach a certain set of states in the MDP. Finally, we extend the algorithm to an MDP subject to a temporal logic specification. In numerical examples, we demonstrate the proposed method on different motion planning scenarios and illustrate the relation between the restrictions imposed on the paths by a specification, the maximum entropy, and the predictability of paths.
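
    The convex-programming step can be pictured as follows. This is a minimal sketch and not the paper's formulation: it maximizes the entropy of the Markov chain induced on a small hypothetical MDP, using expected state-action counts as decision variables and a plain reachability threshold in place of a general temporal logic constraint. The MDP data, the goal set, `gamma`, and all variable names are illustrative assumptions.

    ```python
    # Sketch only: maximum-entropy policy synthesis for an absorbing MDP via cvxpy.
    # The 4-state MDP, the goal set, and the reachability threshold are hypothetical.
    import numpy as np
    import cvxpy as cp

    nS, nA = 4, 2                      # states 0-2 are transient, state 3 is an absorbing goal
    transient, goal = [0, 1, 2], [3]
    P = np.zeros((nS, nA, nS))         # P[s, a, s'] = transition probability
    P[0, 0] = [0.0, 0.7, 0.3, 0.0]
    P[0, 1] = [0.0, 0.2, 0.8, 0.0]
    P[1, 0] = [0.1, 0.0, 0.4, 0.5]
    P[1, 1] = [0.0, 0.3, 0.0, 0.7]
    P[2, 0] = [0.0, 0.5, 0.0, 0.5]
    P[2, 1] = [0.2, 0.0, 0.2, 0.6]
    P[3, :, 3] = 1.0
    init = np.array([1.0, 0.0, 0.0, 0.0])
    gamma = 1.0                        # require reaching the goal with probability >= gamma

    x = cp.Variable((nS, nA), nonneg=True)       # expected state-action counts (occupancy)
    xi = cp.sum(x, axis=1)                       # expected number of visits to each state

    constraints = []
    for s in transient:                          # flow balance on transient states
        inflow = sum(x[t, a] * P[t, a, s] for t in transient for a in range(nA))
        constraints.append(xi[s] == init[s] + inflow)
    reach = sum(x[t, a] * P[t, a, g] for t in transient for a in range(nA) for g in goal)
    constraints.append(reach >= gamma)           # reachability, a very simple temporal property

    # Path entropy of the induced chain: the local entropy at each transient state s,
    # weighted by xi[s]; written with rel_entr so the objective stays concave (DCP).
    entropy = 0
    for s in transient:
        y_s = sum(x[s, a] * P[s, a] for a in range(nA))   # expected s -> s' transition counts
        entropy -= cp.sum(cp.rel_entr(y_s, cp.hstack([xi[s]] * nS)))

    prob = cp.Problem(cp.Maximize(entropy), constraints)
    prob.solve()

    occ = np.maximum(x.value, 0.0)               # recover a stationary policy from x
    policy = occ / np.maximum(occ.sum(axis=1, keepdims=True), 1e-12)
    print("maximum entropy (nats):", round(prob.value, 4))
    print("policy (rows = transient states, columns = actions):\n", np.round(policy[transient], 3))
    ```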

  • Strategy Synthesis for Surveillance-Evasion Games with Learning-Enabled Visibility Optimization
    2019 IEEE 58th Conference on Decision and Control (CDC), 2019
    Co-Authors: Suda Bharadwaj, Louis Ly, Bo Wu, Richard Tsai, Ufuk Topcu
    Abstract:

    This paper studies a two-player game with a quantitative surveillance requirement on an adversarial target moving in a discrete state space and a secondary objective to maximize short-term visibility of the environment. We impose the surveillance requirement as a temporal logic constraint. We then use a greedy approach to determine vantage points that optimize a notion of information gain, namely, the number of newly seen states. By using a convolutional neural network trained on a class of environments, we can efficiently approximate the information gain at each potential vantage point. Each subsequent vantage point is chosen such that moving to that location will not jeopardize the surveillance requirement, regardless of any future action chosen by the target. Our method combines guarantees of correctness from formal methods with the scalability of machine learning to provide an efficient approach for surveillance-constrained visibility optimization.
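
    The greedy selection loop can be sketched as follows. Everything here is a placeholder: the grid world, the visibility model, and the `is_safe` check are hypothetical stand-ins for, respectively, the environment class, the trained convolutional network, and the game-theoretic check that the surveillance requirement remains satisfiable.

    ```python
    # Sketch only: greedy vantage-point selection with stand-in gain and safety checks.
    import numpy as np

    def visible_cells(grid, pos, radius=2):
        """Free cells within `radius` of `pos` (a crude stand-in for line of sight)."""
        r, c = pos
        rows, cols = grid.shape
        return {(i, j)
                for i in range(max(0, r - radius), min(rows, r + radius + 1))
                for j in range(max(0, c - radius), min(cols, c + radius + 1))
                if grid[i, j] == 0}                  # 0 = free space, 1 = obstacle

    def estimated_gain(grid, seen, candidate):
        """Stand-in for the CNN: how many not-yet-seen cells the candidate would reveal."""
        return len(visible_cells(grid, candidate) - seen)

    def is_safe(candidate, target_pos, horizon=4):
        """Placeholder for the game-theoretic check that the surveillance requirement
        stays satisfiable: here, simply stay within `horizon` steps of the target."""
        return abs(candidate[0] - target_pos[0]) + abs(candidate[1] - target_pos[1]) <= horizon

    def next_vantage_point(grid, seen, candidates, target_pos):
        """Greedy step: among candidates that keep surveillance satisfiable, pick the
        one with the largest estimated information gain."""
        safe = [c for c in candidates if is_safe(c, target_pos)]
        if not safe:
            return None                              # fall back to the pure surveillance strategy
        return max(safe, key=lambda c: estimated_gain(grid, seen, c))

    grid = np.zeros((8, 8), dtype=int)
    grid[3, 2:6] = 1                                 # a wall of obstacles
    seen = visible_cells(grid, (0, 0))
    candidates = [(0, 4), (2, 1), (4, 4), (6, 2)]
    print("next vantage point:", next_vantage_point(grid, seen, candidates, target_pos=(5, 5)))
    ```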

  • Entropy Maximization for Markov Decision Processes Under Temporal Logic Constraints
    arXiv: Optimization and Control, 2018
    Co-Authors: Yagiz Savas, Melkior Ornik, Murat Cubuktepe, Ufuk Topcu
    Abstract:

    We study the problem of synthesizing a policy that maximizes the entropy of a Markov decision process (MDP) subject to a temporal logic constraint. Such a policy minimizes the predictability of the paths it generates, or dually, maximizes the continual exploration of different paths in an MDP while ensuring the satisfaction of a temporal logic specification. We first show that the maximum entropy of an MDP can be finite, infinite, or unbounded, and we provide necessary and sufficient conditions under which each of these cases holds. We then present an algorithm to synthesize a policy that maximizes the entropy of an MDP. The proposed algorithm is based on a convex optimization problem and runs in time polynomial in the size of the MDP. We also show that maximizing the entropy of an MDP is equivalent to maximizing the entropy of the paths that reach a certain set of states in the MDP. Finally, we extend the algorithm to an MDP subject to a temporal logic specification. In numerical examples, we demonstrate the proposed method on different motion planning scenarios and illustrate that as the restrictions imposed on the paths by a specification increase, the maximum entropy decreases, which, in turn, increases the predictability of paths.
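
    Once a policy is fixed, the quantity being maximized has a simple closed form: for an absorbing Markov chain, the entropy of the random path until absorption equals the local entropy of each transient state weighted by its expected number of visits. The sketch below checks this standard identity numerically on a hypothetical 4-state chain; it illustrates the underlying decomposition and is not code from the paper.

    ```python
    # Sketch only: path entropy of an absorbing Markov chain as visit-weighted local entropies.
    # The chain and the initial distribution below are hypothetical.
    import numpy as np

    P = np.array([[0.0, 0.5, 0.5, 0.0],      # states 0-2 transient, state 3 absorbing
                  [0.2, 0.0, 0.3, 0.5],
                  [0.0, 0.4, 0.0, 0.6],
                  [0.0, 0.0, 0.0, 1.0]])
    nu = np.array([1.0, 0.0, 0.0])           # initial distribution over the transient states

    Q = P[:3, :3]                            # transient-to-transient block
    xi = nu @ np.linalg.inv(np.eye(3) - Q)   # expected number of visits to each transient state

    def local_entropy(row):
        p = row[row > 0]
        return -(p * np.log(p)).sum()        # entropy of one transition distribution, in nats

    H = sum(xi[s] * local_entropy(P[s]) for s in range(3))
    print("expected visits:", np.round(xi, 3))
    print("path entropy until absorption: %.4f nats" % H)
    ```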

Ronald D. Bonnell - One of the best experts on this subject based on the ideXlab platform.

  • Propositional Logic Constraint Patterns and Their Use in UML-Based Conceptual Modeling and Analysis
    IEEE Transactions on Knowledge and Data Engineering, 2007
    Co-Authors: James P. Davis, Ronald D. Bonnell
    Abstract:

    An important conceptual modeling activity in the development of database, object-oriented, and agent-oriented systems is the capture and expression of domain constraints governing underlying data and object states. UML is increasingly used for capturing conceptual models, as it supports conceptual modeling of arbitrary domains and has extensible notation allowing capture of invariant constraints both in the class diagram notation and in the separately denoted OCL syntax. However, a need exists for increased formalism in constraint capture that does not sacrifice ease of use for the analyst. In this paper, we codify a set of invariant patterns formalized for capturing a rich category of propositional constraints on class diagrams. We use tools of Boolean logic to set out the distinctions between these patterns, applying them in modeling by way of example. We use graph notation to systematically uncover constraints hidden in the diagrams. We present data collected from applications across different domains, supporting the importance of "pattern-finding" for n-variable propositional constraints using general graph-theoretic methods. This approach enriches UML-based conceptual modeling for greater completeness, consistency, and correctness by formalizing the syntax and semantics of these constraint patterns, which has not been done in a comprehensive manner before now.
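
    To make the idea of a propositional constraint pattern concrete, the sketch below checks which two-variable Boolean patterns a set of observed object states satisfies. The pattern names, the propositions, and the sample data are hypothetical illustrations; the paper itself works at the level of UML class diagrams and OCL invariants.

    ```python
    # Sketch only: two-variable propositional constraint patterns checked against
    # observed object states.  Pattern names and sample data are hypothetical.
    from itertools import product

    # p, q stand for "the optional link/attribute is present" on an object.
    PATTERNS = {
        "exclusive-or (exactly one of p, q)":  lambda p, q: p != q,
        "exclusion (never both p and q)":      lambda p, q: not (p and q),
        "prerequisite (q only if p)":          lambda p, q: p or not q,
        "coincidence (p if and only if q)":    lambda p, q: p == q,
        "disjunction (at least one of p, q)":  lambda p, q: p or q,
    }

    def satisfied_patterns(observations):
        """Return the patterns that hold for every observed (p, q) object state."""
        return [name for name, holds in PATTERNS.items()
                if all(holds(p, q) for p, q in observations)]

    # Hypothetical object states, e.g. p = "order has a shipping address",
    # q = "order has a tracking number".
    observations = [(True, True), (True, False), (False, False)]
    print("patterns consistent with the data:", satisfied_patterns(observations))

    # The truth-table view used to tell the patterns apart by Boolean logic.
    print("p q | q only if p")
    for p, q in product([False, True], repeat=2):
        print(f"{int(p)} {int(q)} |      {int(p or not q)}")
    ```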

James P. Davis - One of the best experts on this subject based on the ideXlab platform.

  • Propositional Logic Constraint Patterns and Their Use in UML-Based Conceptual Modeling and Analysis
    IEEE Transactions on Knowledge and Data Engineering, 2007
    Co-Authors: James P. Davis, Ronald D. Bonnell
    Abstract:

    An important conceptual modeling activity in the development of database, object-oriented, and agent-oriented systems is the capture and expression of domain constraints governing underlying data and object states. UML is increasingly used for capturing conceptual models, as it supports conceptual modeling of arbitrary domains and has extensible notation allowing capture of invariant constraints both in the class diagram notation and in the separately denoted OCL syntax. However, a need exists for increased formalism in constraint capture that does not sacrifice ease of use for the analyst. In this paper, we codify a set of invariant patterns formalized for capturing a rich category of propositional constraints on class diagrams. We use tools of Boolean logic to set out the distinctions between these patterns, applying them in modeling by way of example. We use graph notation to systematically uncover constraints hidden in the diagrams. We present data collected from applications across different domains, supporting the importance of "pattern-finding" for n-variable propositional constraints using general graph-theoretic methods. This approach enriches UML-based conceptual modeling for greater completeness, consistency, and correctness by formalizing the syntax and semantics of these constraint patterns, which has not been done in a comprehensive manner before now.

Yagiz Savas - One of the best experts on this subject based on the ideXlab platform.

  • Entropy Maximization for Markov Decision Processes Under Temporal Logic Constraints
    IEEE Transactions on Automatic Control, 2020
    Co-Authors: Yagiz Savas, Melkior Ornik, Murat Cubuktepe, Mustafa O. Karabag, Ufuk Topcu
    Abstract:

    We study the problem of synthesizing a policy that maximizes the entropy of a Markov decision process (MDP) subject to a temporal logic constraint. Such a policy minimizes the predictability of the paths it generates, or dually, maximizes the exploration of different paths in an MDP while ensuring the satisfaction of a temporal logic specification. We first show that the maximum entropy of an MDP can be finite, infinite, or unbounded, and we provide necessary and sufficient conditions under which each of these cases holds. We then present an algorithm, based on a convex optimization problem, to synthesize a policy that maximizes the entropy of an MDP. We also show that maximizing the entropy of an MDP is equivalent to maximizing the entropy of the paths that reach a certain set of states in the MDP. Finally, we extend the algorithm to an MDP subject to a temporal logic specification. In numerical examples, we demonstrate the proposed method on different motion planning scenarios and illustrate the relation between the restrictions imposed on the paths by a specification, the maximum entropy, and the predictability of paths.

  • Entropy Maximization for Markov Decision Processes Under Temporal Logic Constraints
    arXiv: Optimization and Control, 2018
    Co-Authors: Yagiz Savas, Melkior Ornik, Murat Cubuktepe, Ufuk Topcu
    Abstract:

    We study the problem of synthesizing a policy that maximizes the entropy of a Markov decision process (MDP) subject to a temporal logic constraint. Such a policy minimizes the predictability of the paths it generates, or dually, maximizes the continual exploration of different paths in an MDP while ensuring the satisfaction of a temporal logic specification. We first show that the maximum entropy of an MDP can be finite, infinite, or unbounded, and we provide necessary and sufficient conditions under which each of these cases holds. We then present an algorithm to synthesize a policy that maximizes the entropy of an MDP. The proposed algorithm is based on a convex optimization problem and runs in time polynomial in the size of the MDP. We also show that maximizing the entropy of an MDP is equivalent to maximizing the entropy of the paths that reach a certain set of states in the MDP. Finally, we extend the algorithm to an MDP subject to a temporal logic specification. In numerical examples, we demonstrate the proposed method on different motion planning scenarios and illustrate that as the restrictions imposed on the paths by a specification increase, the maximum entropy decreases, which, in turn, increases the predictability of paths.

Murat Cubuktepe - One of the best experts on this subject based on the ideXlab platform.

  • Entropy Maximization for Markov Decision Processes Under Temporal Logic Constraints
    IEEE Transactions on Automatic Control, 2020
    Co-Authors: Yagiz Savas, Melkior Ornik, Murat Cubuktepe, Mustafa O. Karabag, Ufuk Topcu
    Abstract:

    We study the problem of synthesizing a policy that maximizes the entropy of a Markov decision process (MDP) subject to a temporal logic constraint. Such a policy minimizes the predictability of the paths it generates, or dually, maximizes the exploration of different paths in an MDP while ensuring the satisfaction of a temporal logic specification. We first show that the maximum entropy of an MDP can be finite, infinite, or unbounded, and we provide necessary and sufficient conditions under which each of these cases holds. We then present an algorithm, based on a convex optimization problem, to synthesize a policy that maximizes the entropy of an MDP. We also show that maximizing the entropy of an MDP is equivalent to maximizing the entropy of the paths that reach a certain set of states in the MDP. Finally, we extend the algorithm to an MDP subject to a temporal logic specification. In numerical examples, we demonstrate the proposed method on different motion planning scenarios and illustrate the relation between the restrictions imposed on the paths by a specification, the maximum entropy, and the predictability of paths.

  • Entropy Maximization for Markov Decision Processes Under Temporal Logic Constraints
    arXiv: Optimization and Control, 2018
    Co-Authors: Yagiz Savas, Melkior Ornik, Murat Cubuktepe, Ufuk Topcu
    Abstract:

    We study the problem of synthesizing a policy that maximizes the entropy of a Markov decision process (MDP) subject to a temporal logic constraint. Such a policy minimizes the predictability of the paths it generates, or dually, maximizes the continual exploration of different paths in an MDP while ensuring the satisfaction of a temporal logic specification. We first show that the maximum entropy of an MDP can be finite, infinite, or unbounded, and we provide necessary and sufficient conditions under which each of these cases holds. We then present an algorithm to synthesize a policy that maximizes the entropy of an MDP. The proposed algorithm is based on a convex optimization problem and runs in time polynomial in the size of the MDP. We also show that maximizing the entropy of an MDP is equivalent to maximizing the entropy of the paths that reach a certain set of states in the MDP. Finally, we extend the algorithm to an MDP subject to a temporal logic specification. In numerical examples, we demonstrate the proposed method on different motion planning scenarios and illustrate that as the restrictions imposed on the paths by a specification increase, the maximum entropy decreases, which, in turn, increases the predictability of paths.