Adaptive Consistency - Explore the Science & Experts | ideXlab



Adaptive Consistency

The experts below are selected from a list of 11,076 experts worldwide, ranked by the ideXlab platform

Javier Larrosa – One of the best experts on this subject based on the ideXlab platform.

  • Using constraints with memory to implement variable elimination
    ECAI, 2004
    Co-Authors: Martí Sánchez, Pedro Meseguer, Javier Larrosa

    Abstract:

    Adaptive Consistency is a solving algorithm for constraint networks. Its basic step is variable elimination: it takes a network as input and produces an equivalent network with one less variable and one new constraint (the join of the variable's bucket). This process is iterated until every variable is eliminated, after which all solutions can be computed without backtracking. A direct, naive implementation of variable elimination may use more space than needed, which renders the algorithm inapplicable in many cases. We present a more sophisticated implementation, based on the projection with memory of constraints. When a variable is projected out of a constraint, we keep the supports that the variable gave to the remaining tuples. Using this data structure, we compute a set of new factorized constraints, equivalent to the constraint computed as the join of the variable's bucket, but using less space for a wide range of problems. We provide experimental evidence of the benefits of our approach.
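    The basic step described in the abstract, joining the constraints in a variable's bucket and projecting the variable out, can be sketched on table constraints as follows (a minimal illustration of the naive step, not the paper's memory-efficient implementation; the representation is ours):

```python
# A table constraint is a pair (scope, tuples): `scope` is a tuple of
# variable names, `tuples` the set of allowed value combinations.

def join(a, b):
    """Natural join of two table constraints."""
    sa, ta = a
    sb, tb = b
    shared = [v for v in sb if v in sa]
    scope = sa + tuple(v for v in sb if v not in sa)
    out = set()
    for x in ta:
        for y in tb:
            if all(x[sa.index(v)] == y[sb.index(v)] for v in shared):
                out.add(x + tuple(y[sb.index(v)] for v in sb if v not in sa))
    return scope, out

def project_out(c, var):
    """Remove `var` from a constraint by projecting it away."""
    scope, tuples = c
    keep = [i for i, v in enumerate(scope) if v != var]
    return (tuple(scope[i] for i in keep),
            {tuple(t[i] for i in keep) for t in tuples})

def eliminate(constraints, var):
    """One variable-elimination step: join the bucket of `var`
    (every constraint mentioning it) and project `var` out."""
    bucket = [c for c in constraints if var in c[0]]
    rest = [c for c in constraints if var not in c[0]]
    joined = bucket[0]
    for c in bucket[1:]:
        joined = join(joined, c)
    return rest + [project_out(joined, var)]
```

    For example, eliminating Y from X ≠ Y and Y ≠ Z over domain {0, 1} yields the induced constraint X = Z.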

  • Improving the applicability of Adaptive Consistency: preliminary results
    Principles and Practice of Constraint Programming – CP 2004, 2004
    Co-Authors: Martí Sánchez, Pedro Meseguer, Javier Larrosa

    Abstract:

    We incorporate two ideas into ADC (Adaptive Consistency). The first, delaying variable elimination, permits performing joins in different buckets without forcing one variable to be eliminated before the bucket of another is processed; it can yield exponential savings in space. The second, join with filtering, takes the effect of other constraints into account when joining two constraints: if a tuple resulting from the join would be removed by an existing constraint, the tuple is not produced. This, too, can yield exponential savings. We have tested these techniques on two classical problems, n-queens and Schur's lemma, with very promising results.

    This research is supported by the REPLI project TIC-2002-04470-C03.
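    Join with filtering can be sketched as follows: while joining two table constraints, a candidate tuple is discarded as soon as some other constraint whose scope is covered by the joined scope forbids it, so the tuple is never materialized (an illustrative sketch; the names and representation are ours, not the paper's):

```python
def join_with_filter(a, b, others):
    """Join table constraints `a` and `b`, discarding any joined tuple
    that a constraint in `others` (with scope covered by the joined
    scope) already forbids."""
    sa, ta = a
    sb, tb = b
    shared = [v for v in sb if v in sa]
    scope = sa + tuple(v for v in sb if v not in sa)
    # Only constraints whose scope lies inside the joined scope can filter.
    filters = [(s, t) for s, t in others if all(v in scope for v in s)]
    out = set()
    for x in ta:
        for y in tb:
            if any(x[sa.index(v)] != y[sb.index(v)] for v in shared):
                continue
            tup = x + tuple(y[sb.index(v)] for v in sb if v not in sa)
            val = dict(zip(scope, tup))
            if all(tuple(val[v] for v in s) in t for s, t in filters):
                out.add(tup)   # the tuple survives every covering constraint
    return scope, out
```

    For instance, joining X ≠ Y with Y ≠ Z over domain {0, 1} normally yields two tuples, both with X = Z; filtering against an existing constraint X ≠ Z produces none of them.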

Rina Dechter – One of the best experts on this subject based on the ideXlab platform.

  • Bucket elimination: A unifying framework for reasoning
    Artificial Intelligence, 1999
    Co-Authors: Rina Dechter

    Abstract:

    Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional resolution for propositional satisfiability, Adaptive Consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for combinatorial optimization can all be accommodated within the bucket elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms, including belief updating, finding the most probable explanation, and expected utility maximization. These algorithms share the same performance guarantees: all are time and space exponential in the induced width of the problem's interaction graph. While elimination strategies make extensive demands on memory, a contrasting class of algorithms called "conditioning search" requires only linear space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set, or a cutset. Typical examples of conditioning search algorithms are backtracking (in constraint satisfaction) and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and deterministic reasoning tasks and shows how conditioning search can be augmented to systematically trade space for time. © 1999 Elsevier Science B.V. All rights reserved.
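    As a toy instance of the framework applied to constraint satisfaction, the following sketch runs bucket elimination on table constraints: buckets are processed from the last variable to the first, each bucket's join is projected onto the remaining variables and recorded, and a solution is then assembled front-to-back without backtracking. It is a schematic, enumeration-based version (exponential in the joined scope, consistent with the induced-width bound the abstract mentions), not Dechter's code:

```python
from itertools import product

# Table constraints: (scope, tuples), with `scope` a tuple of variable
# names and `tuples` the set of allowed value combinations.

def consistent(assign, c):
    """True unless `assign` covers the scope of `c` and violates it."""
    scope, tuples = c
    if not all(v in assign for v in scope):
        return True
    return tuple(assign[v] for v in scope) in tuples

def bucket_elimination(order, domains, constraints):
    """Eliminate variables from last to first along `order`, recording
    each bucket, then assign values front-to-back without backtracking.
    Returns a solution dict, or None if the CSP is inconsistent."""
    pending = list(constraints)
    recorded = {}
    for var in reversed(order):
        bucket = [c for c in pending if var in c[0]]
        pending = [c for c in pending if var not in c[0]]
        recorded[var] = bucket
        if not bucket:
            continue
        # Join the bucket and project `var` out, by brute-force
        # enumeration over the joined scope.
        scope = tuple({v for c in bucket for v in c[0]} - {var})
        allowed = set()
        for vals in product(*(domains[v] for v in scope)):
            assign = dict(zip(scope, vals))
            if any(all(consistent({**assign, var: d}, c) for c in bucket)
                   for d in domains[var]):
                allowed.add(vals)
        if scope:
            pending.append((scope, allowed))
    solution = {}
    for var in order:          # backtrack-free forward pass
        for d in domains[var]:
            solution[var] = d
            if all(consistent(solution, c) for c in recorded[var]):
                break
        else:
            return None
    return solution
```

    On graph coloring of a triangle, for instance, three colors yield a solution while two colors are correctly reported inconsistent.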

  • Experimental evaluation of preprocessing algorithms for constraint satisfaction problems
    Artificial Intelligence, 1994
    Co-Authors: Rina Dechter, Itay Meiri

    Abstract:

    This paper presents an experimental evaluation of two orthogonal schemes for preprocessing constraint satisfaction problems (CSPs). The first involves a class of local consistency techniques that includes directional arc consistency, directional path consistency, and Adaptive Consistency. The second concerns prearranging the variables in a linear order to facilitate an efficient search. In the first series of experiments, we evaluated the effect of each of the local consistency techniques on backtracking and backjumping. Surprisingly, although Adaptive Consistency has the best worst-case complexity bounds, it exhibited the worst performance unless the constraint graph was very sparse. Directional arc consistency (followed by either backjumping or backtracking) and backjumping (without any preprocessing) outperformed all other techniques; moreover, the former dominated the latter in computationally intensive situations. The second series of experiments suggests that maximum cardinality and minimum width are the best preordering (i.e., static ordering) strategies, while dynamic search rearrangement is superior to all the preorderings studied.
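    Directional arc consistency, the cheapest of the local consistency techniques compared above, can be sketched as follows: processing variables from last to first along the ordering, every surviving value of an earlier variable must keep at least one support in each later neighbour (the representation and names are illustrative, not taken from the paper):

```python
def directional_arc_consistency(order, domains, binary):
    """Enforce directional arc consistency along `order`.
    `binary` maps a pair (xj, xi), with xj before xi in the ordering,
    to its set of allowed (vj, vi) value pairs.  Returns the pruned
    domains; no value of an earlier variable is left unsupported."""
    doms = {v: set(domains[v]) for v in order}
    pos = {v: k for k, v in enumerate(order)}
    for xi in reversed(order):
        for (a, b), allowed in binary.items():
            if b == xi and pos[a] < pos[b]:
                # keep only values of the earlier variable `a` that
                # have a support in the current domain of `xi`
                doms[a] = {va for va in doms[a]
                           if any((va, vb) in allowed for vb in doms[xi])}
    return doms
```

    For example, with the ordering X, Y, the constraint X < Y, and Y restricted to {0, 1}, the pass prunes X down to {0}.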

Luis Veiga – One of the best experts on this subject based on the ideXlab platform.

  • Adaptive Consistency and awareness support for distributed software development
    OTM Confederated International Conferences "On the Move to Meaningful Internet Systems", 2013
    Co-Authors: Andre Pessoa Negrao, Miguel Mateus, Paulo Ferreira, Luis Veiga

    Abstract:

    We present ARCADE, a consistency and awareness model for Distributed Software Development. In ARCADE, updates to the elements of a software project that a programmer considers important are sent to that programmer promptly; as an element's importance decreases, so does the frequency with which the programmer is notified about it. In this way the system provides a selective, continuous and focused level of awareness. As a result, the bandwidth required to propagate events is reduced and the intrusion caused by unimportant notifications is minimized. In this paper we present the design of ARCADE, as well as an evaluation of its effectiveness.
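    An importance-driven notification policy of this kind could look roughly like the following (a hypothetical sketch: the class name, threshold, and formula are ours, not ARCADE's):

```python
class AwarenessFilter:
    """Sketch of a selective-awareness policy: updates to elements above
    an importance threshold are delivered immediately, while less
    important ones are delayed in inverse proportion to their
    importance, reducing bandwidth and intrusion."""

    def __init__(self, immediate_threshold=0.8, base_delay=5.0):
        self.immediate_threshold = immediate_threshold
        self.base_delay = base_delay

    def delay_for(self, importance):
        """Seconds to wait before notifying, for importance in (0, 1]."""
        if importance >= self.immediate_threshold:
            return 0.0                       # important: notify promptly
        return self.base_delay / importance  # unimportant: batch longer
```

    With the defaults above, an element of importance 0.9 is notified immediately, while one of importance 0.1 waits ten times longer than one of importance 0.5.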

  • Adaptive Consistency for replicated state in real-time strategy multiplayer games
    Adaptive and Reflective Middleware, 2012
    Co-Authors: Manuel Cajada, Paulo Ferreira, Luis Veiga

    Abstract:

    Although massively multiplayer online games have been gaining popularity over the years, real-time strategy (RTS) games have not been considered strong candidates for this model because of the limited number of players supported, the large number of game entities, and strong consistency requirements. To deal with this, concepts such as continuous consistency and location awareness have proven extremely useful for confining the areas with strict consistency requirements. Combining the two yields a powerful technique in which the player's location and the divergence bounds are directly linked, giving the player the most accurate information about the objects inside his area of interest. The VFC model balances continuous consistency and location awareness by defining multiple zones of consistency around the player's location (the pivot), each with different divergence bounds. In this work we propose VFC-RTS, an adaptation of the VFC model, which is characterized by establishing degrees of consistency, to the RTS scenario. We describe how the concepts of the original VFC model were adapted to the RTS paradigm and propose an architecture for a generic middleware. We then apply our solution to an open-source, multi-platform RTS game with full consistency requirements and evaluate the results to assess the success of this work.
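    The idea of consistency zones around the pivot can be sketched as a simple lookup: each ring around the player's position carries its own divergence bound, and an object is governed by the innermost ring that contains it (the function, ring radii, and bounds are illustrative assumptions, not the VFC-RTS implementation):

```python
import math

def divergence_bound(pivot, obj, zones):
    """VFC-style zone lookup: `zones` is a list of
    (radius, max_missed_updates) rings around the player's position
    `pivot`, ordered from innermost to outermost.  An object is
    governed by the innermost ring containing it; objects closer to
    the pivot get stricter consistency."""
    d = math.dist(pivot, obj)
    for radius, max_missed in zones:
        if d <= radius:
            return max_missed
    return float("inf")   # outside every ring: no freshness guarantee
```

    For example, with rings of radius 10, 50 and 200, an object 5 units away must never be stale, while one 300 units away may be updated arbitrarily rarely.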
