Adaptive Consistency

The experts below are selected from a list of 11,076 experts worldwide, ranked by the ideXlab platform.

Javier Larrosa - One of the best experts on this subject based on the ideXlab platform.

  • ECAI - Using constraints with memory to implement variable elimination
    2004
    Co-Authors: Martí Sánchez, Pedro Meseguer, Javier Larrosa
    Abstract:

    Adaptive Consistency is a solving algorithm for constraint networks. Its basic step is variable elimination: it takes a network as input, and produces an equivalent network with one less variable and one new constraint (the join of the variable bucket). This process is iterated until every variable is eliminated, and then all solutions can be computed without backtracking. A direct, naive implementation of variable elimination may use more space than needed, which renders the algorithm inapplicable in many cases. We present a more sophisticated implementation, based on the projection with memory of constraints. When a variable is projected out from a constraint, we keep the supports that the variable gave to the remaining tuples. Using this data structure, we compute a set of new factorized constraints, equivalent to the new constraint computed as the join of the variable bucket, but using less space for a wide range of problems. We provide experimental evidence of the benefits of our approach.
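
    The basic step can be sketched over extensional (table) constraints as follows. This is a minimal illustration of joining a variable's bucket and projecting the variable out, not the factorized implementation the paper proposes; the dict-of-scopes representation and the function names are assumptions.

    from itertools import product

    def eliminate_variable(var, constraints, domains):
        """One Adaptive Consistency step: join the bucket of `var`, then project `var` out.

        `constraints` maps a scope (tuple of variable names) to the set of allowed
        value tuples; `domains` maps each variable to its list of values.
        """
        bucket = {s: t for s, t in constraints.items() if var in s}
        rest = {s: t for s, t in constraints.items() if var not in s}

        # Scope of the new constraint: every neighbour of `var` in its bucket.
        new_scope = tuple(sorted({v for s in bucket for v in s if v != var}))

        allowed = set()
        for values in product(*(domains[v] for v in new_scope)):
            partial = dict(zip(new_scope, values))
            # Keep the tuple if some value of `var` satisfies every bucket constraint.
            for val in domains[var]:
                partial[var] = val
                if all(tuple(partial[v] for v in s) in t for s, t in bucket.items()):
                    allowed.add(values)
                    break
        if new_scope:
            rest[new_scope] = (allowed & rest[new_scope]) if new_scope in rest else allowed
        return rest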

  • CP - Improving the applicability of Adaptive Consistency: preliminary results
    Principles and Practice of Constraint Programming – CP 2004, 2004
    Co-Authors: Martí Sánchez, Pedro Meseguer, Javier Larrosa
    Abstract:

    We incorporate two ideas into ADC. The first, delaying variable elimination, permits performing joins in different buckets, without forcing the elimination of one variable before starting to process the bucket of another. It may yield exponential savings in space. The second, join with filtering, consists in taking into account the effect of other constraints when performing the join of two constraints: if a tuple resulting from this join would be removed by an existing constraint, that tuple is not produced. This can also yield exponential savings. We have tested these techniques on two classical problems, n-queens and Schur's lemma, with very promising results. This research is supported by the REPLI project TIC-2002-04470-C03.
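
    A rough sketch of the second idea over table constraints (illustrative names and representation, not the authors' code): while joining two constraints, any combined tuple that some already-existing constraint forbids is simply never produced.

    def join_with_filtering(scope1, tuples1, scope2, tuples2, other_constraints):
        """Join two table constraints, skipping tuples that an existing constraint
        whose scope is already fully assigned would remove anyway."""
        joined_scope = tuple(dict.fromkeys(scope1 + scope2))   # ordered union of the scopes
        shared = [v for v in scope1 if v in scope2]
        result = set()
        for t1 in tuples1:
            a1 = dict(zip(scope1, t1))
            for t2 in tuples2:
                a2 = dict(zip(scope2, t2))
                if any(a1[v] != a2[v] for v in shared):
                    continue                                   # disagree on shared variables
                combined = {**a1, **a2}
                # Filtering: consult the other constraints before materialising the tuple.
                if all(tuple(combined[v] for v in s) in t
                       for s, t in other_constraints.items()
                       if all(v in combined for v in s)):
                    result.add(tuple(combined[v] for v in joined_scope))
        return joined_scope, result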

  • Algorithms for constraint satisfaction
    INTELIGENCIA ARTIFICIAL, 2003
    Co-Authors: Javier Larrosa, Pedro Meseguer
    Abstract:

    This paper describes the main algorithms for solving constraint satisfaction problems, and includes the corresponding encodings (the main difference with respect to [3] is that, in their paper, they provide a more informal description of the algorithms). We consider three main algorithmic approaches: search, inference and hybrid methods. Search methods can be divided into systematic and non-systematic. We present backtracking as an example of systematic search, and local search as an example of non-systematic search. Inference methods can be divided into complete and incomplete. We describe Adaptive Consistency as an example of complete inference, and several local Consistency algorithms as examples of incomplete inference. We also present some examples of hybrid methods which combine search and inference.
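
    As a concrete anchor for the systematic-search side of that taxonomy, a bare-bones chronological backtracking routine might look as follows; the `consistent` predicate is a placeholder for whatever constraint check is used, and this is an illustration rather than the paper's own pseudo-code.

    def backtrack(assignment, variables, domains, consistent):
        """Chronological backtracking: extend the partial assignment one variable
        at a time and undo the last choice on failure."""
        if len(assignment) == len(variables):
            return dict(assignment)                 # every variable assigned: a solution
        var = next(v for v in variables if v not in assignment)
        for value in domains[var]:
            assignment[var] = value
            if consistent(assignment):              # prune as soon as a constraint is violated
                solution = backtrack(assignment, variables, domains, consistent)
                if solution is not None:
                    return solution
            del assignment[var]
        return None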

  • Boosting search with variable elimination
    Lecture Notes in Computer Science, 2000
    Co-Authors: Javier Larrosa
    Abstract:

    Variable elimination is the basic step of Adaptive Consistency [4]. It transforms the problem into an equivalent one, having one less variable. Unfortunately, there are many classes of problems for which it is infeasible, due to its exponential space and time complexity. However, by restricting variable elimination so that only low-arity constraints are processed and recorded, it can be effectively combined with search, because the elimination of variables reduces the search tree size. In this paper we introduce VarElimSearch(S,k), a hybrid meta-algorithm that combines search and variable elimination. The parameter S names the particular search procedure and k controls the tradeoff between the two strategies. The algorithm is space exponential in k. Regarding time, we show that its complexity is bounded by k and a structural parameter from the constraint graph. We also provide experimental evidence that the hybrid algorithm can outperform state-of-the-art algorithms on sparse binary problems. Experiments cover the tasks of finding one solution and the best solution (Max-CSP). Especially in the Max-CSP case, the advantage of our approach can be overwhelming.
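
    A self-contained sketch of just the variable-selection part of such a hybrid: variables whose degree in the current constraint graph is at most k are eliminated (their neighbours become connected by the recorded constraint), and whatever remains is left to the search procedure S. Function and parameter names are illustrative, and the sketch splits the problem once rather than interleaving elimination with search as the paper does.

    def split_by_degree(neighbors, k):
        """Greedily pick the variables a bounded-elimination hybrid would remove.

        `neighbors` maps each variable to the set of variables it shares a
        constraint with. Returns the elimination order and the variables that
        are left for search."""
        graph = {v: set(ns) for v, ns in neighbors.items()}
        eliminated = []
        while True:
            candidates = [v for v in graph if len(graph[v]) <= k]
            if not candidates:
                break
            v = min(candidates, key=lambda u: len(graph[u]))
            nbrs = graph.pop(v)
            for a in nbrs:
                graph[a].discard(v)
                graph[a].update(nbrs - {a})   # v's neighbours get linked by the recorded constraint
            eliminated.append(v)
        return eliminated, list(graph)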

Rina Dechter - One of the best experts on this subject based on the ideXlab platform.

  • Bucket elimination: A unifying framework for reasoning
    Artificial Intelligence, 1999
    Co-Authors: Rina Dechter
    Abstract:

    Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional-resolution for propositional satisfiability, Adaptive-Consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for combinatorial optimization, can all be accommodated within the bucket elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms. These include: belief updating, finding the most probable explanation, and expected utility maximization. These algorithms share the same performance guarantees; all are time and space exponential in the induced width of the problem’s interaction graph. While elimination strategies have extensive demands on memory, a contrasting class of algorithms called “conditioning search” requires only linear space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set, or a cutset. Typical examples of conditioning search algorithms are: backtracking (in constraint satisfaction), and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and deterministic reasoning tasks and shows how conditioning search can be augmented to systematically trade space for time.
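
    The schematic control flow of bucket elimination can be rendered as below. The task-specific work (join/project for constraints, sum- or max-product for probabilistic queries) is abstracted into a callback, and the names are illustrative rather than taken from the paper.

    def bucket_elimination(ordering, constraints, combine_and_project):
        """Partition constraints into buckets by their highest variable under
        `ordering`, then process buckets from last to first: each bucket is
        combined, its variable is projected out, and the result is placed in
        the bucket of its new highest variable."""
        position = {v: i for i, v in enumerate(ordering)}
        buckets = {v: [] for v in ordering}
        for scope, relation in constraints:
            buckets[max(scope, key=position.__getitem__)].append((scope, relation))

        for var in reversed(ordering):                  # top-down along the ordering
            if not buckets[var]:
                continue
            new_scope, new_relation = combine_and_project(var, buckets[var])
            if new_scope:
                buckets[max(new_scope, key=position.__getitem__)].append((new_scope, new_relation))
        return buckets      # the augmented buckets now support backtrack-free solution assembly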

  • Experimental evaluation of preprocessing algorithms for constraint satisfaction problems
    Artificial Intelligence, 1994
    Co-Authors: Rina Dechter, Itay Meiri
    Abstract:

    This paper presents an experimental evaluation of two orthogonal schemes for preprocessing constraint satisfaction problems (CSPs). The first of these schemes involves a class of local Consistency techniques that includes directional arc Consistency, directional path Consistency, and Adaptive Consistency. The other scheme concerns the prearrangement of variables in a linear order to facilitate an efficient search. In the first series of experiments, we evaluated the effect of each of the local Consistency techniques on backtracking and backjumping. Surprisingly, although Adaptive Consistency has the best worst-case complexity bounds, we have found that it exhibits the worst performance, unless the constraint graph is very sparse. Directional arc Consistency (followed by either backjumping or backtracking) and backjumping (without any preprocessing) outperformed all other techniques; moreover, the former dominated the latter in computationally intensive situations. The second series of experiments suggests that maximum cardinality and minimum width are the best preordering (i.e., static ordering) strategies, while dynamic search rearrangement is superior to all the preorderings studied.
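
    For reference, the minimum-width preordering mentioned above is the classic greedy rule: repeatedly remove a minimum-degree vertex from the constraint graph and place it last. A small sketch (variable names are illustrative):

    def min_width_ordering(neighbors):
        """Minimum-width static ordering of a constraint graph.

        `neighbors` maps each variable to the set of variables sharing a
        constraint with it."""
        remaining = {v: set(ns) for v, ns in neighbors.items()}
        order = []
        while remaining:
            v = min(remaining, key=lambda u: len(remaining[u]))   # vertex of minimum degree
            order.append(v)
            for u in remaining[v]:
                remaining[u].discard(v)
            del remaining[v]
        order.reverse()                    # vertices removed first end up last in the ordering
        return order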

Luis Veiga - One of the best experts on this subject based on the ideXlab platform.

  • Adaptive Consistency and awareness support for distributed software development
    OTM Confederated International Conferences "On the Move to Meaningful Internet Systems", 2013
    Co-Authors: Andre Pessoa Negrao, Miguel Mateus, Paulo Ferreira, Luis Veiga
    Abstract:

    We present ARCADE, a Consistency and awareness model for Distributed Software Development. In ARCADE, updates to elements of the software project considered important to a programmer are sent to him promptly. As the importance of an element decreases, the frequency with which the programmer is notified about it also decreases. This way, the system provides a selective, continuous and focused level of awareness. As a result, the bandwidth required to propagate events is reduced and intrusion caused by unimportant notifications is minimized. In this paper we present the design of ARCADE, as well as an evaluation of its effectiveness.
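
    The abstract's core rule, notification frequency scaling with the importance of a project element, could be sketched along the following lines; the scaling formula, thresholds and class names are assumptions for illustration, not ARCADE's actual policy.

    import time

    def notification_interval(importance, base=5.0, cap=600.0):
        """Illustrative scaling rule (an assumption): the more important an element
        is to a programmer (importance in (0, 1]), the shorter the delay between
        notifications about its updates."""
        importance = max(min(importance, 1.0), 1e-3)
        return min(base / importance, cap)

    class UpdateBuffer:
        """Buffers updates per project element and flushes an element's batch only
        when its importance-dependent interval has elapsed (a sketch of the
        selective-awareness idea; names and parameters are assumptions)."""
        def __init__(self, deliver=print):
            self.deliver = deliver          # how batched notifications reach the programmer
            self.pending = {}               # element -> buffered updates
            self.last_sent = {}             # element -> time of last notification

        def on_update(self, element, update, importance):
            self.pending.setdefault(element, []).append(update)
            now = time.time()
            if now - self.last_sent.get(element, 0.0) >= notification_interval(importance):
                batch, self.pending[element] = self.pending[element], []
                self.last_sent[element] = now
                self.deliver(element, batch)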

  • Adaptive Consistency for replicated state in real-time strategy multiplayer games
    Adaptive and Reflective Middleware, 2012
    Co-Authors: Manuel Cajada, Paulo Ferreira, Luis Veiga
    Abstract:

    Although massively multiplayer online games have gained popularity over the years, real-time strategy (RTS) has not been considered a strong candidate for this model because of the limited number of players supported, the large number of game entities and the strong Consistency requirements. To deal with this situation, concepts such as continuous Consistency and location-awareness have proven extremely useful for confining the areas with Consistency requirements. The combination of these two concepts results in a powerful technique in which the player's location and divergence boundaries are directly linked, providing the player with the most accurate information about the objects inside his area-of-interest. The VFC model achieves a balance between the notions of continuous Consistency and location-awareness by defining multiple zones of Consistency around the player's location (pivot), each with different divergence boundaries. In this work we propose VFC-RTS, an adaptation of the VFC model, which establishes degrees of Consistency, to the RTS scenario. We describe how the concepts of the original VFC model were adapted to the RTS paradigm and propose an architecture for a generic middleware. Finally, we apply our solution to an open-source, multi-platform RTS game with full Consistency requirements and evaluate the results to assess the success of this work.
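
    The zone-based rule can be sketched as follows. The three-component divergence bound (staleness, missed updates, value drift) follows the general VFC idea, but the field names, the zone test and the propagation rule are illustrative assumptions rather than the VFC-RTS implementation.

    from dataclasses import dataclass

    @dataclass
    class DivergenceBound:
        """Per-zone divergence limits (time in seconds, missed updates, value drift);
        the field names are illustrative."""
        max_time: float
        max_updates: int
        max_value: float

    def zone_of(distance, zone_radii):
        """Index of the Consistency zone an object falls into, given its distance
        to the player's pivot and the increasing radii of the zones."""
        for i, radius in enumerate(zone_radii):
            if distance <= radius:
                return i
        return len(zone_radii)            # outside every zone: weakest Consistency

    def must_propagate(distance, staleness, missed, drift, zone_radii, bounds):
        """An update is pushed to the replica as soon as any component of the
        accumulated divergence exceeds the bound of the object's current zone."""
        b = bounds[min(zone_of(distance, zone_radii), len(bounds) - 1)]
        return staleness >= b.max_time or missed >= b.max_updates or drift >= b.max_value

    For example, bounds = [DivergenceBound(0.05, 1, 0.0), DivergenceBound(0.5, 5, 10.0)] would keep objects near the pivot almost perfectly synchronized while letting distant objects drift.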

Rachid Guerraoui - One of the best experts on this subject based on the ideXlab platform.

  • The PCL Theorem: Transactions cannot be Parallel, Consistent, and Live
    Journal of the ACM, 2019
    Co-Authors: Victor Bushkov, Dmytro Dziuma, Panagiota Fatourou, Rachid Guerraoui
    Abstract:

    We establish a theorem called the PCL theorem, which states that it is impossible to design a transactional memory algorithm that ensures (1) parallelism, i.e., transactions do not need to synchronize unless they access the same application objects, (2) very little Consistency, i.e., a Consistency condition, called weak Adaptive Consistency, introduced here and that is weaker than snapshot isolation, processor Consistency, and any other Consistency condition stronger than them (such as opacity, serializability, causal serializability, etc.), and (3) very little liveness, i.e., transactions eventually commit if they run solo.

  • SPAA - The PCL theorem: transactions cannot be parallel, consistent and live
    Proceedings of the 26th ACM symposium on Parallelism in algorithms and architectures, 2014
    Co-Authors: Victor Bushkov, Dmytro Dziuma, Panagiota Fatourou, Rachid Guerraoui
    Abstract:

    We show that it is impossible to design a transactional memory system which ensures parallelism, i.e. transactions do not need to synchronize unless they access the same application objects, while ensuring very little Consistency, i.e. a Consistency condition, called weak Adaptive Consistency, introduced here and which is weaker than snapshot isolation, processor Consistency, and any other Consistency condition stronger than them (such as opacity, serializability, causal serializability, etc.), and very little liveness, i.e. that transactions eventually commit if they run solo.

Neil Yorke-Smith - One of the best experts on this subject based on the ideXlab platform.

  • Efficient variable elimination for semi-structured simple temporal networks with continuous domains
    Knowledge Engineering Review, 2010
    Co-Authors: Neil Yorke-Smith
    Abstract:

    The Simple Temporal Network (STN) is a widely used framework for reasoning about quantitative temporal constraints over variables with continuous or discrete domains. The inference tasks of determining Consistency and deriving the minimal network are traditionally achieved by graph algorithms (e.g. Floyd-Warshall, Johnson) or by iteration of narrowing operators (e.g. ΔSTP). None of these methods effectively exploits the tree-decomposition structure of the constraint graph of an STN. Methods based on variable elimination (e.g. Adaptive Consistency) can exploit this structure, but have not been applied to STNs to the extent they could be, in part because it is unclear how to efficiently pass the ‘messages’ over continuous domains. We show that for an STN, these messages can be represented compactly as sub-STNs. We then present an efficient message-passing scheme for computing the minimal constraints of an STN. Analysis of this algorithm, Prop-STP, provides a formal explanation of the performance of the existing STN solvers ΔSTP and SR-PC. Empirical results validate the efficiency of Prop-STP, demonstrating performance comparable to ΔSTP in cases where the constraint graph is known to have small treewidth, such as those that arise during Hierarchical Task Network planning.
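
    For contrast with the message-passing scheme, the classical all-pairs computation the abstract mentions runs Floyd-Warshall over the STN's distance graph: the network is consistent iff no negative cycle appears, and the shortest-path distances give the minimal network. A minimal sketch, with edges encoded in the usual x_j - x_i <= w convention:

    def stn_minimal_network(num_vars, edges):
        """Classical STN check: build the distance graph, run Floyd-Warshall,
        and report (consistent, distance matrix). `edges` holds entries
        (i, j, w) meaning x_j - x_i <= w."""
        INF = float("inf")
        d = [[0 if i == j else INF for j in range(num_vars)] for i in range(num_vars)]
        for i, j, w in edges:
            d[i][j] = min(d[i][j], w)          # keep the tightest bound per edge
        for k in range(num_vars):
            for i in range(num_vars):
                for j in range(num_vars):
                    if d[i][k] + d[k][j] < d[i][j]:
                        d[i][j] = d[i][k] + d[k][j]
        consistent = all(d[i][i] >= 0 for i in range(num_vars))   # negative diagonal = negative cycle
        return consistent, d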