Counterexample

The experts below are selected from a list of 32,691 experts worldwide, ranked by the ideXlab platform.

Edmund M. Clarke - One of the best experts on this subject based on the ideXlab platform.

  • SAT-based Counterexample-guided abstraction refinement
    IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2004
    Co-Authors: Edmund M. Clarke, Aarti Gupta, Ofer Strichman
    Abstract:

    We describe new techniques for model checking in the counterexample-guided abstraction-refinement framework. The abstraction phase "hides" the logic of various variables, hence considering them as inputs. This type of abstraction may lead to "spurious" counterexamples, i.e., traces that cannot be simulated on the original (concrete) machine. We check whether a counterexample is real or spurious with a satisfiability (SAT) checker. We then use a combination of 0-1 integer linear programming and machine learning techniques for refining the abstraction based on the counterexample. The process is repeated until either a real counterexample is found or the property is verified. We have implemented these techniques on top of the model checker NuSMV and the SAT solver Chaff. Experimental results prove the viability of these new techniques.

  • Counterexample guided abstraction refinement
    International Symposium on Temporal Representation and Reasoning, 2003
    Co-Authors: Edmund M. Clarke
    Abstract:

    The main practical problem in model checking is the combinatorial explosion of system states commonly known as the state explosion problem. Abstraction methods attempt to reduce the size of the state space by employing knowledge about the system and the specification in order to model only relevant features in the Kripke structure. Counterexample-guided abstraction refinement is an automatic abstraction method where, starting with a relatively small skeletal representation of the system to be verified, increasingly precise abstract representations of the system are computed. The key step is to extract information from false negatives ("spurious counterexamples") due to over-approximation.

  • SPIN - SAT-Based Counterexample Guided Abstraction Refinement
    Model Checking Software, 2002
    Co-Authors: Edmund M. Clarke
    Abstract:

    We describe new techniques for model checking in the counterexample-guided abstraction/refinement framework. The abstraction phase 'hides' the logic of various variables, hence considering them as inputs. This type of abstraction may lead to 'spurious' counterexamples, i.e., traces that cannot be simulated on the original (concrete) machine. We check whether a counterexample is real or spurious with a SAT checker. We then use a combination of integer linear programming (ILP) and machine learning techniques for refining the abstraction based on the counterexample. The process is repeated until either a real counterexample is found or the property is verified. We have implemented these techniques on top of the model checker NuSMV and the SAT solver Chaff. Experimental results prove the viability of these new techniques.
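
The two CEGAR abstracts above outline the same loop: abstract, model-check, test the counterexample for spuriousness, and refine. The sketch below is a minimal schematic of that loop, not the authors' NuSMV/Chaff implementation; the helper functions (abstract_model, model_check, concretize, refine) are hypothetical placeholders for the SAT- and ILP-based machinery the papers describe.

```python
# Schematic counterexample-guided abstraction refinement (CEGAR) loop.
# All helpers passed in are hypothetical placeholders, not NuSMV/Chaff APIs.

def cegar(concrete_model, spec, abstract_model, model_check, concretize, refine):
    """Iterate abstraction refinement until the property is decided."""
    abstraction = abstract_model(concrete_model)   # hide some variables, treat them as inputs
    while True:
        holds, abstract_cex = model_check(abstraction, spec)
        if holds:
            return True, None                      # property verified on the abstraction
        concrete_trace = concretize(concrete_model, abstract_cex)
        if concrete_trace is not None:             # e.g. a SAT check finds a concrete trace
            return False, concrete_trace           # the counterexample is real
        # spurious counterexample: refine the abstraction so this trace is excluded
        abstraction = refine(abstraction, abstract_cex)
```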

Ufuk Topcu - One of the best experts on this subject based on the ideXlab platform.

  • Counterexamples for Robotic Planning Explained in Structured Language
    arXiv: Robotics, 2018
    Co-Authors: Lu Feng, Mahsa Ghasemi, Kai-wei Chang, Ufuk Topcu
    Abstract:

    Automated techniques such as model checking have been used to verify models of robotic mission plans based on Markov decision processes (MDPs) and to generate counterexamples that may help diagnose requirement violations. However, such artifacts may be too complex for humans to understand, because existing representations of counterexamples typically include a large number of paths or a complex automaton. To help improve the interpretability of counterexamples, we define a notion of explainable counterexample, which comprises a set of structured natural-language sentences describing the robotic behavior that leads to a requirement violation in an MDP model of a robotic mission plan. We propose an approach based on mixed-integer linear programming for generating explainable counterexamples that are minimal, sound, and complete. We demonstrate the usefulness of the proposed approach via a case study of warehouse robot planning.

  • ICRA - Counterexamples for Robotic Planning Explained in Structured Language
    2018 IEEE International Conference on Robotics and Automation (ICRA), 2018
    Co-Authors: Lu Feng, Mahsa Ghasemi, Kai-wei Chang, Ufuk Topcu
    Abstract:

    Automated techniques such as model checking have been used to verify models of robotic mission plans based on Markov decision processes (MDPs) and to generate counterexamples that may help diagnose requirement violations. However, such artifacts may be too complex for humans to understand, because existing representations of counterexamples typically include a large number of paths or a complex automaton. To help improve the interpretability of counterexamples, we define a notion of explainable counterexample, which comprises a set of structured natural-language sentences describing the robotic behavior that leads to a requirement violation in an MDP model of a robotic mission plan. We propose an approach based on mixed-integer linear programming for generating explainable counterexamples that are minimal, sound, and complete. We demonstrate the usefulness of the proposed approach via a case study of warehouse robot planning.
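
The two abstracts above reduce counterexample explanation to an optimization problem solved with mixed-integer linear programming. The toy sketch below illustrates only the underlying selection problem, picking a smallest set of MDP paths whose combined probability already violates a reachability bound; the path names, probabilities, and threshold are invented, and this is not the authors' MILP encoding.

```python
from itertools import combinations

# Toy illustration (not the authors' MILP encoding): choose the smallest set of
# MDP paths whose combined probability already exceeds the violated bound, the
# usual notion of a minimal counterexample for a property P(reach bad) <= threshold.
# Path names and numbers are made up.

paths = {
    "pickup->corridor->collision": 0.06,
    "pickup->shortcut->collision": 0.05,
    "charge->corridor->collision": 0.03,
}
threshold = 0.08   # the violated requirement: probability of collision <= 0.08

def minimal_counterexample(paths, threshold):
    for size in range(1, len(paths) + 1):
        for subset in combinations(paths, size):
            if sum(paths[p] for p in subset) > threshold:
                return subset                      # smallest witness of the violation
    return None

print(minimal_counterexample(paths, threshold))    # here, a two-path witness
```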

Jacques Theys - One of the best experts on this subject based on the ideXlab platform.

  • An explicit Counterexample to the Lagarias–Wang finiteness conjecture
    Advances in Mathematics, 2011
    Co-Authors: Kevin G Hare, Ian Morris, Nikita Sidorov, Jacques Theys
    Abstract:

    The joint spectral radius of a finite set of real d × d matrices is defined to be the maximum possible exponential rate of growth of long products of matrices drawn from that set. A set of matrices is said to have the finiteness property if there exists a periodic product which achieves this maximal rate of growth. J.C. Lagarias and Y. Wang conjectured in 1995 that every finite set of real d × d matrices satisfies the finiteness property. However, T. Bousch and J. Mairesse proved in 2002 that counterexamples to the finiteness conjecture exist, showing in particular that there exists a family of pairs of 2 × 2 matrices which contains a counterexample. Similar results were subsequently given by V.D. Blondel, J. Theys and A.A. Vladimirov and by V.S. Kozyakin, but no explicit counterexample to the finiteness conjecture has so far been given. The purpose of this paper is to resolve this issue by giving the first completely explicit description of a counterexample to the Lagarias–Wang finiteness conjecture. Namely, for the set $\mathsf{A}_{\alpha_*} := \left\{ \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \alpha_* \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \right\}$ we give an explicit value of $\alpha_* \simeq 0.749326546330367557943961948091344672091327370236064317358024\ldots$ such that $\mathsf{A}_{\alpha_*}$ does not satisfy the finiteness property.

  • An explicit Counterexample to the Lagarias–Wang finiteness conjecture
    arXiv: Optimization and Control, 2010
    Co-Authors: Kevin G Hare, Ian Morris, Nikita Sidorov, Jacques Theys
    Abstract:

    The joint spectral radius of a finite set of real $d \times d$ matrices is defined to be the maximum possible exponential rate of growth of long products of matrices drawn from that set. A set of matrices is said to have the \emph{finiteness property} if there exists a periodic product which achieves this maximal rate of growth. J.C. Lagarias and Y. Wang conjectured in 1995 that every finite set of real $d \times d$ matrices satisfies the finiteness property. However, T. Bousch and J. Mairesse proved in 2002 that counterexamples to the finiteness conjecture exist, showing in particular that there exists a family of pairs of $2 \times 2$ matrices which contains a counterexample. Similar results were subsequently given by V.D. Blondel, J. Theys and A.A. Vladimirov and by V.S. Kozyakin, but no explicit counterexample to the finiteness conjecture has so far been given. The purpose of this paper is to resolve this issue by giving the first completely explicit description of a counterexample to the Lagarias-Wang finiteness conjecture. Namely, for the set \[ \mathsf{A}_{\alpha_*} := \left\{ \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \alpha_* \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \right\} \] we give an explicit value of $\alpha_* \simeq 0.749326546330367557943961948091344672091327370236064317358024\ldots$ such that $\mathsf{A}_{\alpha_*}$ does not satisfy the finiteness property.
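
As a numerical companion to the abstracts above: the joint spectral radius of a set is bounded below by $\rho(P)^{1/t}$ for any product $P$ of $t$ matrices from the set, and the finiteness property asserts that some periodic product attains the supremum. The sketch below computes such lower bounds for the pair $\mathsf{A}_{\alpha_*}$, with $\alpha_*$ truncated to double precision; it illustrates the definitions only, not the construction in the paper.

```python
import numpy as np
from itertools import product

# Numerical illustration of the definitions in the abstracts. The joint
# spectral radius of {A0, A1} is bounded below by rho(P)^(1/t) for any
# product P of t matrices from the set; a periodic product attaining the
# supremum is exactly what the finiteness property asserts.

alpha = 0.7493265463303675          # truncated from the value given in the abstract
A0 = np.array([[1.0, 1.0], [0.0, 1.0]])
A1 = alpha * np.array([[1.0, 0.0], [1.0, 1.0]])

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

def best_lower_bound(matrices, max_length):
    best = 0.0
    for t in range(1, max_length + 1):
        for word in product(matrices, repeat=t):
            P = np.linalg.multi_dot(word) if t > 1 else word[0]
            best = max(best, spectral_radius(P) ** (1.0 / t))
    return best

print(best_lower_bound([A0, A1], max_length=8))   # a lower bound on the joint spectral radius
```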

Virginie Wiels - One of the best experts on this subject based on the ideXlab platform.

  • HASE - Paths to Property Violation: A Structural Approach for Analyzing Counter-Examples
    2010 IEEE 12th International Symposium on High Assurance Systems Engineering, 2010
    Co-Authors: Thomas Bochot, Pierre Virelizier, Hélène Waeselynck, Virginie Wiels
    Abstract:

    At Airbus, flight control software is developed using SCADE formal models, from which 90% of the code can be generated. Having a formal design leaves open the possibility of introducing model checking techniques. But, from our analysis of cases extracted from real software, a key issue concerns the exploitation of counterexamples showing property violation. Understanding the causes of the violation is not trivial, and the (unique) counterexample returned by a model checker is not necessarily realistic from an operational viewpoint. To address this issue, we propose an automated structural analysis that identifies paths of the model that are activated by a counterexample over time. This analysis allows us to extract relevant information to explain the observed violation. It may also serve to guide the model checker toward the search for different counterexamples, exhibiting new path activation patterns.
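
One way to picture the structural analysis described above is to replay the counterexample trace against the model and record which transitions each step activates. The sketch below does this for an invented guarded-transition model; the transition names, guards, and trace are illustrative and have no connection to the authors' SCADE tooling.

```python
# Toy illustration of replaying a counterexample against a model to see which
# transitions (paths) it activates at each step. The guarded-transition model
# and the trace are invented; this is not the authors' SCADE-based analysis.

model = {  # transition name -> guard over the state valuation
    "engage_autopilot": lambda s: s["pilot_cmd"] and not s["fault"],
    "revert_to_manual": lambda s: s["fault"],
    "hold_mode":        lambda s: not s["pilot_cmd"] and not s["fault"],
}

counterexample_trace = [
    {"pilot_cmd": True,  "fault": False},
    {"pilot_cmd": True,  "fault": True},
    {"pilot_cmd": False, "fault": True},
]

def activated_paths(model, trace):
    """For each step of the trace, list the transitions whose guards fire."""
    return [[name for name, guard in model.items() if guard(state)]
            for state in trace]

for step, active in enumerate(activated_paths(model, counterexample_trace)):
    print(f"step {step}: {active}")
```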

Efim Kinber - One of the best experts on this subject based on the ideXlab platform.

  • Learning languages from positive data and negative Counterexamples
    Journal of Computer and System Sciences, 2008
    Co-Authors: Sanjay Jain, Efim Kinber
    Abstract:

    In this paper we introduce a paradigm for learning in the limit of potentially infinite languages from all positive data and negative counterexamples provided in response to the conjectures made by the learner. Several variants of this paradigm are considered that reflect different conditions/constraints on the type and size of negative counterexamples and on the time for obtaining them. In particular, we consider the models where (1) a learner gets the least negative counterexample; (2) the size of a negative counterexample must be bounded by the size of the positive data seen so far; (3) a counterexample can be delayed. Learning power, limitations of these models, relationships between them, as well as their relationships with classical paradigms for learning languages in the limit (without negative counterexamples) are explored. Several surprising results are obtained. In particular, for Gold's model of learning, which requires a learner to syntactically stabilize on correct conjectures, learners getting negative counterexamples immediately turn out to be as powerful as the ones that do not get them for an indefinitely (but finitely) long time (or are only told that their latest conjecture is not a subset of the target language, without any specific negative counterexample). Another result shows that for behaviorally correct learning (where semantic convergence is required from a learner) with negative counterexamples, a learner making just one error in almost all its conjectures has the “ultimate power”: it can learn the class of all recursively enumerable languages. Yet another result demonstrates that sometimes positive data and negative counterexamples provided by a teacher are not enough to compensate for full positive and negative data.

  • Learning languages from positive data and a limited number of short Counterexamples
    Theoretical Computer Science, 2007
    Co-Authors: Sanjay Jain, Efim Kinber
    Abstract:

    We consider two variants of a model for learning languages in the limit from positive data and a limited number of short negative counterexamples (counterexamples are considered to be short if they are smaller than the largest element of the input seen so far). Negative counterexamples to a conjecture are examples which belong to the conjectured language but do not belong to the input language. Within this framework, we explore how and when learners using n short (arbitrary) negative counterexamples can be simulated by (or can simulate) learners using least short counterexamples or just ‘no’ answers from a teacher. We also study how a limited number of short counterexamples fares against unconstrained counterexamples, and compare their capabilities with the data that can be obtained from subset, superset, and equivalence queries (possibly with counterexamples). A surprising result is that just one short counterexample can sometimes be more useful than any bounded number of counterexamples of arbitrary sizes. Most of the results exhibit salient examples of languages learnable or not learnable within the corresponding variants of our models.

  • ALT - Learning Languages from Positive Data and Negative Counterexamples
    Lecture Notes in Computer Science, 2004
    Co-Authors: Sanjay Jain, Efim Kinber
    Abstract:

    In this paper we introduce a paradigm for learning in the limit of potentially infinite languages from all positive data and negative counterexamples provided in response to the conjectures made by the learner. Several variants of this paradigm are considered that reflect different conditions/constraints on the type and size of negative counterexamples and on the time for obtaining them. In particular, we consider the models where 1) a learner gets the least negative counterexample; 2) the size of a negative counterexample must be bounded by the size of the positive data seen so far; 3) a counterexample may be delayed. Learning power, limitations of these models, relationships between them, as well as their relationships with classical paradigms for learning languages in the limit (without negative counterexamples) are explored. Several surprising results are obtained. In particular, for Gold’s model of learning requiring a learner to syntactically stabilize on correct conjectures, learners getting negative counterexamples immediately turn out to be as powerful as the ones that do not get them for an indefinitely (but finitely) long time (or are only told that their latest conjecture is not a subset of the target language, without any specific negative counterexample). Another result shows that for behaviourally correct learning (where semantic convergence is required from a learner) with negative counterexamples, a learner making just one error in almost all its conjectures has the “ultimate power”: it can learn the class of all recursively enumerable languages. Yet another result demonstrates that sometimes positive data and negative counterexamples provided by a teacher are not enough to compensate for full positive and negative data.
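
To make the learning model concrete, the sketch below simulates a learner that sees positive data, conjectures a smallest consistent language from a finite class, and receives the least negative counterexample whenever its conjecture is not contained in the target. The class, teacher, and learner are invented toy stand-ins for the formal (recursively enumerable) setting studied in the papers above.

```python
# Toy illustration of learning in the limit from positive data plus negative
# counterexamples. The finite class of languages, the teacher, and the learner
# are invented stand-ins for the recursively enumerable setting of the papers.

CLASS = {
    "A": {0, 1},        # a wrong hypothesis the learner will try first
    "B": {0, 2},        # the target language
    "C": {0, 1, 2},
}
TARGET = "B"

def teacher(conjecture):
    """Return the least negative counterexample, or None if the conjecture is a subset of the target."""
    outside = CLASS[conjecture] - CLASS[TARGET]
    return min(outside) if outside else None

def learn(positive_stream):
    seen, ruled_out, conjecture = set(), set(), None
    for x in positive_stream:                      # positive data drawn from the target
        seen.add(x)
        candidates = [n for n, lang in CLASS.items()
                      if seen <= lang and n not in ruled_out]
        conjecture = min(candidates, key=lambda n: len(CLASS[n]))
        if (ce := teacher(conjecture)) is not None:
            ruled_out.add(conjecture)              # the counterexample eliminates this candidate
    return conjecture

print(learn([0, 2, 0]))                            # stabilizes on "B"
```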