Data Independence

The Experts below are selected from a list of 249 Experts worldwide, ranked by the ideXlab platform.

A W Roscoe - One of the best experts on this subject based on the ideXlab platform.

  • Finitary refinement checks for infinitary specifications
    2004
    Co-Authors: A W Roscoe
    Abstract:

    We see how refinement against a variety of infinite-state CSP specifications can be translated into finitary refinement checks. Methods used include turning a process into its own specification inductively, and we recall Wolper's discovery that Data Independence can be used for this purpose.
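
    A minimal Python sketch (ours, not from the paper) of the Wolper-style reduction the abstract alludes to: because a Data-independent process only copies values and never computes with them, a property quantified over an arbitrarily large Data type can be checked by distinguishing one marked value from "everything else", so a two-valued abstraction suffices. The buffer and property here are hypothetical examples.

        # Hypothetical illustration: a data-independent buffer only copies values,
        # so "the marked value is output iff it was input" can be checked
        # exhaustively over the two-value abstraction {MARK, OTHER}.
        from itertools import product

        MARK, OTHER = "mark", "other"

        def buffer_run(inputs):
            """Data-independent buffer: emits each value it receives, in order."""
            return list(inputs)          # values are copied, never inspected

        def property_holds(inputs, outputs):
            return (MARK in outputs) == (MARK in inputs)

        # Check all input sequences up to length 3 over the abstraction; by
        # data independence this covers inputs drawn from any larger type.
        ok = all(property_holds(ins, buffer_run(ins))
                 for n in range(4)
                 for ins in product([MARK, OTHER], repeat=n))
        print("property holds on two-value abstraction:", ok)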

  • Automating Data Independence
    Lecture Notes in Computer Science, 2000
    Co-Authors: Philippa J. Broadfoot, Gavin Lowe, A W Roscoe
    Abstract:

    In this paper, we generalise and fully automate the use of Data Independence techniques in the analysis of security protocols, developed in [16,17]. In [17], we successfully applied these techniques to a series of case studies; however, our scripts were carefully crafted by hand to suit each case study, a rather time-consuming and error-prone task. We have fully automated the Data Independence techniques by incorporating them into Casper, thus abstracting away from the user the complexity of the techniques, making them much more accessible.

  • Proving security protocols with model checkers by Data Independence techniques
    Journal of Computer Security, 1999
    Co-Authors: A W Roscoe, Philippa J. Broadfoot
    Abstract:

    Model checkers such as FDR have been extremely effective in checking for, and finding, attacks on cryptographic protocols. Their use in proving protocols has, on the other hand, generally been limited to showing that a given small instance, usually restricted by the finiteness of some set of resources such as keys and nonces, is free of attacks. While for specific protocols there are frequently good reasons for supposing that this will find any attack, it leaves a substantial gap in the method. The purpose of this paper is to show how techniques borrowed from Data Independence and related fields can be used to achieve the illusion that nodes can call upon an infinite supply of different nonces, keys, etc., even though the actual types used for these things remain finite. It is thus possible to create models of protocols in which nodes do not have to stop after a small number of runs, and to claim that a finite-state run on a model checker has proved that a given protocol is free from attacks which could be constructed in the model used. We develop our methods via a series of case studies, discovering several methods for restricting the number of states generated in attempted proofs, and using two distinct approaches to protocol specification.
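
    A small Python sketch (hypothetical, not the paper's CSP models) of the central trick: honest agents appear to draw on an infinite supply of fresh nonces, but a manager recycles identifiers once the runs that used them are over, so the actual nonce type stays finite and the model checker sees a finite-state system.

        # Hypothetical sketch of the "infinite supply illusion" over a finite pool:
        # nonces no longer relevant to any ongoing run are reclaimed and reissued.
        class NonceManager:
            def __init__(self, pool_size):
                self.free = set(range(pool_size))   # finite set of abstract nonce values
                self.in_play = set()

            def fresh(self):
                """Hand out a nonce not currently in play; it looks fresh to agents."""
                if not self.free:
                    raise RuntimeError("pool exhausted; retire finished runs first")
                n = self.free.pop()
                self.in_play.add(n)
                return n

            def retire(self, n):
                """Called when a protocol run is complete, so the value can be reused."""
                self.in_play.discard(n)
                self.free.add(n)

        mgr = NonceManager(pool_size=2)
        a = mgr.fresh()
        mgr.retire(a)          # the run using `a` has finished
        b = mgr.fresh()        # may reuse the same abstract value; agents cannot tell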

  • PDPTA - Formal Verification of Arbitrary Network Topologies
    1999
    Co-Authors: Sadie Creese, A W Roscoe
    Abstract:

    We show how Data Independence results can be used to generalise an inductive proof from binary to arbitrary branching tree networks. The example used is modelled on the RSVP Resource Reservation Protocol. Of particular interest is the need for a separate lower-level induction which is itself closely tied to Data Independence. The inductions combine the use of the process algebra CSP to model systems and their specifications and the FDR tool to discharge the various proof obligations.

David Nowak - One of the best experts on this subject based on the ideXlab platform.

  • TLCA - On a semantic definition of Data Independence
    Lecture Notes in Computer Science, 2003
    Co-Authors: Ranko Lazić, David Nowak
    Abstract:

    A variety of results which enable model checking of important classes of infinite-state systems are based on exploiting the property of Data Independence. The literature contains a number of definitions of variants of Data Independence, which are given by syntactic restrictions in particular formalisms. More recently, Data Independence was defined for labelled transition systems using logical relations, enabling results about Data independent systems to be proved without reference to a particular syntax. In this paper, we show that the semantic definition is sufficiently strong for this purpose. More precisely, it was known that any syntactically Data independent symbolic LTS denotes a semantically Data independent family of LTSs, but here we show that the converse also holds.

  • CONCUR - A unifying approach to Data-Independence
    CONCUR 2000 — Concurrency Theory, 2000
    Co-Authors: Ranko Lazić, David Nowak
    Abstract:

    A concurrent system is Data-independent with respect to a Data type when the only operation it can perform on values of that type is equality testing. The system can also assign, input, nondeterministically choose, and output such values. Based on this intuitive definition, syntactic restrictions which ensure Data-Independence have been formulated for a variety of different formalisms. However, it is difficult to see how these are related. We present the first semantic definition of Data-Independence which allows equality testing, and its extension which allows constant symbols and predicate symbols. Both are special cases of a definition of when a family of labelled transition systems is parametric. This provides a unified approach to Data-Independence and its extensions. The paper also contains two theorems which, given a system and a specification which are Data-independent, enable the verification for all instantiations of the Data types (and of the constant symbols and the predicate symbols, in the case of the extension) to be reduced to the verification for a finite number of finite instantiations. We illustrate the applicability of the approach to particular formalisms by a programming language similar to UNITY.
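
    As an illustration only (ours, not the paper's formalism), the Python sketch below shows a process that is Data-independent in exactly this sense: a one-place cell that stores, outputs, and equality-tests values of a parameter type but never computes with them. Its observable behaviour commutes with any injective renaming of the value type, which is the intuition behind reducing verification for all instantiations to a few small ones.

        # Hypothetical sketch: a data-independent cell (store / read / equality-test)
        # and a check that its behaviour commutes with an injective data renaming.
        def run_cell(events):
            """events: ('write', v), ('read',) or ('eq', v); returns visible outputs."""
            stored, out = None, []
            for ev in events:
                if ev[0] == "write":
                    stored = ev[1]                       # value is only copied
                elif ev[0] == "read":
                    out.append(("val", stored))          # value is only output
                else:                                    # ("eq", v)
                    out.append(("eq", stored == ev[1]))  # value is only compared
            return out

        def rename_events(events, f):
            return [(ev[0], f(ev[1])) if len(ev) == 2 else ev for ev in events]

        def rename_outputs(outputs, f):
            return [(tag, f(v)) if tag == "val" else (tag, v) for tag, v in outputs]

        f = {"a": 1, "b": 2}.get                         # an injective renaming
        trace = [("write", "a"), ("read",), ("eq", "b"), ("eq", "a")]
        assert run_cell(rename_events(trace, f)) == rename_outputs(run_cell(trace), f)
        print("behaviour commutes with the renaming")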

Martin Vechev - One of the best experts on this subject based on the ideXlab platform.

  • Verifying atomicity via Data Independence
    International Symposium on Software Testing and Analysis, 2014
    Co-Authors: Ohad Shacham, Eran Yahav, Guy Gueta, Alex Aiken, Nathan Bronson, Mooly Sagiv, Martin Vechev
    Abstract:

    We present a technique for automatically verifying atomicity of composed concurrent operations. The main observation behind our approach is that many composed concurrent operations which occur in practice are Data-independent. That is, the control-flow of the composed operation does not depend on specific input values. While verifying Data-Independence is undecidable in the general case, we provide succinct sufficient conditions that can be used to establish a composed operation as Data-independent. We show that for the common case of concurrent maps, Data-Independence reduces the hard problem of verifying linearizability to a verification problem that can be solved efficiently with a bounded number of keys and values. We implemented our approach in a tool called VINE and evaluated it on all composed operations from 57 real-world applications (112 composed operations). We show that many composed operations (49 out of 112) are Data-independent, and automatically verify 30 of them as linearizable and the other 19 as having violations of linearizability that could be repaired and then subsequently automatically verified. Moreover, we show that the remaining 63 operations are not linearizable, thus indicating that Data Independence does not limit the expressiveness of writing realistic linearizable composed operations.
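
    A hypothetical Python sketch (the names and structure are ours, not VINE's) of a Data-independent composed map operation: its control flow branches only on whether a key is present, never on the particular key or value, which is why a bounded set of keys and values suffices when checking it.

        import threading

        class ComposedMap:
            """A map with a composed operation whose control flow is data-independent."""
            def __init__(self):
                self._d = {}
                self._lock = threading.Lock()

            def put_if_absent(self, key, value):
                # Composed from a membership test and a put; the branch depends
                # only on key presence (an equality test), not on the data itself.
                with self._lock:
                    if key in self._d:
                        return self._d[key]      # existing value is only copied out
                    self._d[key] = value         # new value is only copied in
                    return None

        m = ComposedMap()
        print(m.put_if_absent("k", 1))   # None: the key was absent, 1 is stored
        print(m.put_if_absent("k", 2))   # 1: existing value returned, 2 discarded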

David A. Patterson - One of the best experts on this subject based on the ideXlab platform.

  • PIQL: success-tolerant query processing in the cloud
    Proceedings of the VLDB Endowment, 2011
    Co-Authors: Michael Armbrust, Michael J. Franklin, Kristal Curtis, Tim Kraska, Armando Fox, David A. Patterson
    Abstract:

    Newly-released web applications often succumb to a "Success Disaster," where overloaded Database machines and resulting high response times destroy a previously good user experience. Unfortunately, the Data Independence provided by a traditional relational Database system, while useful for agile development, only exacerbates the problem by hiding potentially expensive queries under simple declarative expressions. As a result, developers of these applications are increasingly abandoning relational Databases in favor of imperative code written against distributed key/value stores, losing the many benefits of Data Independence in the process. Instead, we propose PIQL, a declarative language that also provides scale Independence by calculating an upper bound on the number of key/value store operations that will be performed for any query. Coupled with a service level objective (SLO) compliance prediction model and PIQL's scalable Database architecture, these bounds make it easy for developers to write success-tolerant applications that support an arbitrarily large number of users while still providing acceptable performance. In this paper, we present the PIQL query processing system and evaluate its scale Independence on hundreds of machines using two benchmarks, TPC-W and SCADr.
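
    A rough Python sketch (our illustration, not PIQL's actual planner or API) of the scale-independence idea: for a declarative query whose plan is a bounded index scan followed by one key/value lookup per result, an upper bound on store operations can be computed statically, independent of how large the Database grows.

        # Hypothetical sketch: statically bound the key/value operations of a plan
        # consisting of an index scan with a LIMIT and one get() per returned row.
        def max_kv_ops(plan):
            cardinality = 1      # worst-case number of rows flowing downstream
            ops = 0
            for op, params in plan:
                if op == "index_scan":
                    ops += 1                          # one bounded range read
                    cardinality *= params["limit"]    # at most LIMIT rows survive
                elif op == "get_per_row":
                    ops += cardinality                # one get for each surviving row
                else:
                    raise ValueError(f"operator without a static bound: {op}")
            return ops

        # e.g. "latest 10 posts of a user, each joined with its author record":
        plan = [("index_scan", {"limit": 10}), ("get_per_row", {})]
        print(max_kv_ops(plan))   # 11 operations, regardless of total data size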

Wolfgang Lehner - One of the best experts on this subject based on the ideXlab platform.

  • Multi-schema-version Data management: Data Independence in the twenty-first century
    The VLDB Journal, 2018
    Co-Authors: Kai Herrmann, Hannes Voigt, Torben Bach Pedersen, Wolfgang Lehner
    Abstract:

    Agile software development allows us to continuously evolve and run a software system. However, this is not possible in Databases, as established methods are very expensive, error-prone, and far from agile. We present InVerDa, a multi-schema-version Database management system (MSVDB) for agile Database development. MSVDBs realize co-existing schema versions within one Database, where each schema version behaves like a regular single-schema Database and write operations are propagated between schema versions. Developers use a relationally complete and bidirectional Database evolution language (BiDEL) to easily evolve existing schema versions to new ones. BiDEL scripts are more robust, orders of magnitude shorter, and cause only a small performance overhead compared to handwritten SQL scripts. We formally guarantee Data Independence: no matter how the Data of the co-existing schema versions is physically materialized, each schema version is guaranteed to behave like a regular Database. Since the chosen physical materialization significantly determines the overall performance, we equip Database administrators with an advisor that proposes an optimized materialization for the current workload, which can improve the performance by orders of magnitude compared to naïve solutions. To the best of our knowledge, we are the first to facilitate agile evolution of production Databases with full support of co-existing schema versions and formally guaranteed Data Independence.
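
    As a hypothetical Python sketch (not InVerDa's BiDEL or its generated code), the Data-Independence guarantee can be pictured as two co-existing schema versions mapped onto one chosen physical materialization by translation functions, so that writes through either version are immediately visible through the other.

        # Hypothetical sketch: one physical store, two co-existing schema versions.
        # Version 1 exposes a single "name" column; version 2 splits it in two.
        physical = {}   # id -> {"first": ..., "last": ...}  (the chosen materialization)

        def v1_write(pk, name):                  # the old application keeps working
            first, _, last = name.partition(" ")
            physical[pk] = {"first": first, "last": last}

        def v1_read(pk):
            row = physical[pk]
            return f'{row["first"]} {row["last"]}'.strip()

        def v2_write(pk, first, last):           # the new application, new schema
            physical[pk] = {"first": first, "last": last}

        def v2_read(pk):
            row = physical[pk]
            return row["first"], row["last"]

        v1_write(1, "Ada Lovelace")
        print(v2_read(1))         # ('Ada', 'Lovelace'): visible in the new version
        v2_write(1, "Grace", "Hopper")
        print(v1_read(1))         # 'Grace Hopper': visible in the old version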

  • Logical Data Independence in the 21st Century - Co-Existing Schema Versions with InVerDa.
    arXiv: Databases, 2016
    Co-Authors: Kai Herrmann, Hannes Voigt, Andreas Behrend, Jonas Rausch, Wolfgang Lehner
    Abstract:

    We present InVerDa, a tool for end-to-end support of co-existing schema versions within one Database. While it is state of the art to run multiple versions of a continuously developed application concurrently, the same is hard for Databases. In order to keep multiple co-existing schema versions alive that all access the same Data set, developers usually employ handwritten delta code (e.g. views and triggers in SQL). This delta code is hard to write and hard to maintain: if a Database administrator decides to adapt the physical table schema, all handwritten delta code needs to be adapted as well, which is expensive and error-prone in practice. With InVerDa, developers use a simple bidirectional Database evolution language in the first place that carries enough information to generate all the delta code automatically. Without additional effort, new schema versions become immediately accessible and Data changes in any version are visible in all schema versions at the same time. We formally validate the correctness of this propagation. InVerDa also allows for easily changing the physical table designs without affecting the availability of co-existing schema versions. This greatly increases robustness (264 times fewer lines of code) and allows for significant performance optimization.
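
    A toy Python sketch (our own; BiDEL's real syntax and InVerDa's generator are richer) of deriving delta code from one declarative, bidirectional evolution step: a single column-rename description yields both the view that exposes the new schema version and the view that keeps the old one readable. Write propagation would additionally need generated triggers, which this sketch omits.

        # Hypothetical sketch: generate SQL delta code from one evolution step.
        def rename_column(table, old, new):
            """Return (forward_view_sql, backward_view_sql) for a column rename."""
            forward = (f"CREATE VIEW {table}_v2 AS "
                       f"SELECT {old} AS {new} FROM {table};")
            backward = (f"CREATE VIEW {table}_v1 AS "
                        f"SELECT {new} AS {old} FROM {table}_v2;")
            return forward, backward

        fwd, bwd = rename_column("person", old="name", new="full_name")
        print(fwd)   # CREATE VIEW person_v2 AS SELECT name AS full_name FROM person;
        print(bwd)   # CREATE VIEW person_v1 AS SELECT full_name AS name FROM person_v2;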

  • CAiSE Forum - Improving Data Independence, Efficiency and Functional Flexibility of Integration Platforms.
    2008
    Co-Authors: Matthias Böhm, Wolfgang Lehner, Jürgen Bittner, Dirk Habich, Uwe Wloka
    Abstract:

    The concept of Enterprise Application Integration (EAI) is widely used for integrating heterogeneous applications and systems via message-based communication. Typically, EAI servers provide a huge set of specific inbound and outbound adapters used for interacting with the external systems and for converting proprietary message formats. However, the main problems in currently available products are the monolithic design of these adapters and performance deficits caused by the need for Data Independence. First, we classify and discuss these open problems. Second, we introduce our model-driven DIEFOS (Data Independence, efficiency and functional flexibility using feature-oriented software engineering) approach and show how the feature-based generation of dynamic adapters can improve Data Independence, efficiency and functional flexibility. Finally, we analyze open research challenges we see in this context.
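
    A purely illustrative Python sketch (the feature names and composition style are hypothetical, not the DIEFOS model) of feature-based adapter generation: rather than shipping one monolithic adapter, only the conversion features an endpoint actually needs are composed into its adapter pipeline.

        # Hypothetical sketch: compose an adapter from selected features instead of
        # always running every conversion step of a monolithic adapter.
        def parse_csv(msg):       return [line.split(",") for line in msg.splitlines()]
        def to_records(rows):     return [dict(zip(rows[0], r)) for r in rows[1:]]
        def uppercase_keys(recs): return [{k.upper(): v for k, v in r.items()} for r in recs]

        FEATURES = {"parse_csv": parse_csv,
                    "to_records": to_records,
                    "uppercase_keys": uppercase_keys}

        def generate_adapter(selected):
            """Build an adapter as the composition of only the selected features."""
            steps = [FEATURES[name] for name in selected]
            def adapter(message):
                for step in steps:
                    message = step(message)
                return message
            return adapter

        adapter = generate_adapter(["parse_csv", "to_records"])  # no key rewriting needed
        print(adapter("id,name\n1,Ada"))   # [{'id': '1', 'name': 'Ada'}]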