Runtime Complexity

The Experts below are selected from a list of 1548 Experts worldwide, ranked by the ideXlab platform

Georg Moser - One of the best experts on this subject based on the ideXlab platform.

  • RTA - Closing the Gap Between Runtime Complexity and Polytime Computability.
    2020
    Co-Authors: Martin Avanzini, Georg Moser
    Abstract:

    In earlier work, we have shown that for confluent TRSs, innermost polynomial runtime complexity induces polytime computability of the functions defined. In this paper, we generalise this result to full rewriting; to this end, we exploit graph rewriting. We give a new proof of the adequacy of graph rewriting for full rewriting that allows for a precise control of the resources copied. In sum, we completely describe an implementation of rewriting on a Turing machine (TM for short). We show that the runtime complexity of the TRS and the runtime complexity of the TM are polynomially related. Our result strengthens the evidence that the complexity of a rewrite system is faithfully represented through the length of derivations. Moreover, our result allows the classification of non-deterministic polytime computation based on runtime complexity analysis of rewrite systems.
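
    For readers new to the notion: the runtime complexity of a TRS is the maximal derivation length from basic terms, measured in the size of the start term. Below is a minimal sketch in Python (an illustration, not the authors' construction) that counts innermost rewrite steps for Peano addition; the linear step count is the kind of polynomial bound that, by the result above, certifies polytime computability.

      # Terms are nested tuples, e.g. ('s', ('0',)).  Rules for Peano addition:
      #   plus(0, y)    -> y
      #   plus(s(x), y) -> s(plus(x, y))

      def num(n):
          t = ('0',)
          for _ in range(n):
              t = ('s', t)
          return t

      def step(term):
          # one innermost rewrite step; returns (new_term, True) or (term, False)
          head, *args = term
          for i, a in enumerate(args):          # rewrite arguments first
              new_a, applied = step(a)
              if applied:
                  return (head, *args[:i], new_a, *args[i + 1:]), True
          if head == 'plus':
              x, y = args
              if x == ('0',):
                  return y, True
              if x[0] == 's':
                  return ('s', ('plus', x[1], y)), True
          return term, False

      def derivation_length(term):
          steps = 0
          while True:
              term, applied = step(term)
              if not applied:
                  return steps
              steps += 1

      for n in range(1, 6):
          # plus(s^n(0), s^n(0)) normalises in n + 1 steps: linear in n
          print(n, derivation_length(('plus', num(n), num(n))))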

  • Technical Report: Complexity Analysis by Graph Rewriting Revisited
    2020
    Co-Authors: Martin Avanzini, Georg Moser
    Abstract:

    In this paper, we generalise our earlier result, that for confluent TRSs innermost polynomial runtime complexity induces polytime computability of the functions defined, to full rewriting. Following our previous work, we exploit graph rewriting. We give a new proof of the adequacy of graph rewriting for full rewriting that allows for a precise control of the resources copied. In sum, we completely describe an implementation of rewriting on a Turing machine (TM for short). We show that the runtime complexity of the TRS and the runtime complexity of the TM are polynomially related. Our result strengthens the evidence that the complexity of a rewrite system is faithfully represented through the length of derivations. Moreover, our result allows the classification of non-deterministic polytime computation based on runtime complexity analysis of rewrite systems.

  • RTA - A Path Order for Rewrite Systems that Compute Exponential Time Functions
    2020
    Co-Authors: Martin Avanzini, Naohi Eguchi, Georg Moser
    Abstract:

    In this paper we present a new path order for rewrite systems, the exponential path order EPO*. If a term rewrite system is compatible with EPO*, then its runtime complexity is bounded from above by an exponential function. Furthermore, the class of functions computed by rewrite systems compatible with EPO* equals the class of functions computable in exponential time on a Turing machine.
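
    To make the exponential bound concrete, here is a small sketch (the rules are illustrative, not taken from the paper) of a rewrite system whose runtime complexity is exponential, exactly the kind of system an order like EPO* is designed to accommodate:

      # Rules:  e(0)    -> a
      #         e(s(x)) -> c(e(x), e(x))
      # Each step on e(s^n(0)) duplicates the recursive call, so the derivation
      # length dl satisfies dl(0) = 1 and dl(n) = 1 + 2 * dl(n - 1).

      def dl(n):
          return 1 if n == 0 else 1 + 2 * dl(n - 1)

      for n in range(6):
          print(n, dl(n))   # 1, 3, 7, 15, 31, 63, i.e. 2**(n + 1) - 1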

  • RTA - Tyrolean Complexity Tool: Features and Usage.
    2020
    Co-Authors: Martin Avanzini, Georg Moser
    Abstract:

    The Tyrolean Complexity Tool, TCT for short, is an open-source complexity analyser for term rewrite systems. Our tool TCT features a majority of the known techniques for the automated characterisation of polynomial complexity of rewrite systems and can investigate derivational and runtime complexity, for full and innermost rewriting. This system description outlines features and provides a short introduction to the usage of TCT.

  • RTA - A Combination Framework for Complexity
    2020
    Co-Authors: Martin Avanzini, Georg Moser
    Abstract:

    In this paper we present a combination framework for the automated polynomial complexity analysis of term rewrite systems. The framework covers both derivational and runtime complexity analysis, and is employed as the theoretical foundation of the automated complexity tool TCT. We present generalisations of powerful complexity techniques, notably a generalisation of complexity pairs and (weak) dependency pairs. Finally, we also present a novel technique, called dependency graph decomposition, which greatly increases modularity in the dependency pair setting.
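
    The modularity gain can be pictured as follows: the dependency graph is split into strongly connected components that are bounded separately, and the per-component bounds are then combined. The sketch below (a schematic illustration with invented dependency pairs, not the paper's algorithm) computes such a decomposition with Kosaraju's SCC algorithm.

      def sccs(nodes, edges):
          # Kosaraju: DFS finish order, then DFS on the reversed graph.
          adj = {v: [] for v in nodes}
          radj = {v: [] for v in nodes}
          for u, v in edges:
              adj[u].append(v)
              radj[v].append(u)

          seen, order = set(), []
          def dfs1(v):
              seen.add(v)
              for w in adj[v]:
                  if w not in seen:
                      dfs1(w)
              order.append(v)
          for v in nodes:
              if v not in seen:
                  dfs1(v)

          comp = {}
          def dfs2(v, root):
              comp[v] = root
              for w in radj[v]:
                  if w not in comp:
                      dfs2(w, root)
          for v in reversed(order):
              if v not in comp:
                  dfs2(v, v)

          groups = {}
          for v, root in comp.items():
              groups.setdefault(root, []).append(v)
          return list(groups.values())

      # two hypothetical dependency pairs, each recursive, one calling the other
      pairs = ['F#', 'G#']
      edges = [('F#', 'F#'), ('F#', 'G#'), ('G#', 'G#')]
      print(sccs(pairs, edges))   # [['F#'], ['G#']] -- bound each separately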

Martin Avanzini - One of the best experts on this subject based on the ideXlab platform.

  • RTA - A Path Order for Rewrite Systems that Compute Exponential Time Functions
    2020
    Co-Authors: Martin Avanzini, Naohi Eguchi, Georg Moser
    Abstract:

    In this paper we present a new path order for rewrite systems, the exponential path order EPO*. If a term rewrite system is compatible with EPO*, then its runtime complexity is bounded from above by an exponential function. Furthermore, the class of functions computed by rewrite systems compatible with EPO* equals the class of functions computable in exponential time on a Turing machine.

  • Technical Report: Complexity Analysis by Graph Rewriting Revisited
    2020
    Co-Authors: Martin Avanzini, Georg Moser
    Abstract:

    In this paper, we generalise our earlier result, that for confluent TRSs innermost polynomial runtime complexity induces polytime computability of the functions defined, to full rewriting. Following our previous work, we exploit graph rewriting. We give a new proof of the adequacy of graph rewriting for full rewriting that allows for a precise control of the resources copied. In sum, we completely describe an implementation of rewriting on a Turing machine (TM for short). We show that the runtime complexity of the TRS and the runtime complexity of the TM are polynomially related. Our result strengthens the evidence that the complexity of a rewrite system is faithfully represented through the length of derivations. Moreover, our result allows the classification of non-deterministic polytime computation based on runtime complexity analysis of rewrite systems.

  • RTA - Closing the Gap Between Runtime Complexity and Polytime Computability.
    2020
    Co-Authors: Martin Avanzini, Georg Moser
    Abstract:

    In earlier work, we have shown that for confluent TRSs, innermost polynomial runtime complexity induces polytime computability of the functions defined. In this paper, we generalise this result to full rewriting; to this end, we exploit graph rewriting. We give a new proof of the adequacy of graph rewriting for full rewriting that allows for a precise control of the resources copied. In sum, we completely describe an implementation of rewriting on a Turing machine (TM for short). We show that the runtime complexity of the TRS and the runtime complexity of the TM are polynomially related. Our result strengthens the evidence that the complexity of a rewrite system is faithfully represented through the length of derivations. Moreover, our result allows the classification of non-deterministic polytime computation based on runtime complexity analysis of rewrite systems.

  • RTA - A Combination Framework for Complexity
    2020
    Co-Authors: Martin Avanzini, Georg Moser
    Abstract:

    In this paper we present a combination framework for the automated polynomial complexity analysis of term rewrite systems. The framework covers both derivational and runtime complexity analysis, and is employed as the theoretical foundation of the automated complexity tool TCT. We present generalisations of powerful complexity techniques, notably a generalisation of complexity pairs and (weak) dependency pairs. Finally, we also present a novel technique, called dependency graph decomposition, which greatly increases modularity in the dependency pair setting.

  • RTA - Tyrolean Complexity Tool: Features and Usage.
    2020
    Co-Authors: Martin Avanzini, Georg Moser
    Abstract:

    The Tyrolean Complexity Tool, TCT for short, is an open-source complexity analyser for term rewrite systems. Our tool TCT features a majority of the known techniques for the automated characterisation of polynomial complexity of rewrite systems and can investigate derivational and runtime complexity, for full and innermost rewriting. This system description outlines features and provides a short introduction to the usage of TCT.

Roger Zimmermann - One of the best experts on this subject based on the ideXlab platform.

  • Learning-Based Methods for Code Runtime Complexity Prediction
    European Conference on Information Retrieval, 2020
    Co-Authors: Jagriti Sikka, Kushal Satya, Yaman Kumar, Shagun Uppal, Rajiv Ratn Shah, Roger Zimmermann
    Abstract:

    Predicting the runtime complexity of program code is an arduous task. In fact, even for humans it requires subtle analysis and comprehensive knowledge of algorithms to predict time complexity with high fidelity, given any code. By Turing's halting-problem argument, exactly determining code complexity is undecidable in general. Nevertheless, an approximate solution to this task can give developers real-time feedback on the efficiency of their code. In this work, we model the problem as a machine learning task and check its feasibility with a thorough analysis. Due to the lack of any open-source dataset for this task, we propose our own annotated dataset, CoRCoD: Code Runtime Complexity Dataset, extracted from online coding platforms (the complete dataset is available at https://github.com/midas-research/corcod-dataset/blob/master/README.md). We establish baselines using two different approaches, feature engineering and code embeddings, to achieve state-of-the-art results and compare their performances. Such solutions can be highly useful in potential applications such as automatically grading coding assignments and IDE-integrated tools for static code analysis.
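
    A minimal feature-engineering baseline in the spirit of the paper (the features, toy labels, and model below are assumptions for illustration, not the authors' pipeline; this sketch parses Python with the standard ast module rather than the dataset's code):

      import ast
      from sklearn.ensemble import RandomForestClassifier

      def features(src):
          # crude syntactic features: loop count, loop-nesting depth, call count
          tree = ast.parse(src)
          loops = sum(isinstance(n, (ast.For, ast.While)) for n in ast.walk(tree))
          calls = sum(isinstance(n, ast.Call) for n in ast.walk(tree))
          def depth(node, d=0):
              return max([d] + [depth(c, d + isinstance(c, (ast.For, ast.While)))
                                for c in ast.iter_child_nodes(node)])
          return [loops, depth(tree), calls]

      samples = [
          ("for i in range(n):\n    total += i", "O(n)"),
          ("for i in range(n):\n    for j in range(n):\n        total += j", "O(n^2)"),
          ("total = n * (n - 1) // 2", "O(1)"),
      ]
      X = [features(src) for src, _ in samples]
      y = [label for _, label in samples]
      clf = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
      print(clf.predict([features("for i in range(n):\n    print(i)")]))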

  • ECIR (1) - Learning Based Methods for Code Runtime Complexity Prediction
    Lecture Notes in Computer Science, 2020
    Co-Authors: Jagriti Sikka, Kushal Satya, Yaman Kumar, Shagun Uppal, Rajiv Ratn Shah, Roger Zimmermann
    Abstract:

    Predicting the runtime complexity of program code is an arduous task. In fact, even for humans it requires subtle analysis and comprehensive knowledge of algorithms to predict time complexity with high fidelity, given any code. By Turing's halting-problem argument, exactly determining code complexity is undecidable in general. Nevertheless, an approximate solution to this task can give developers real-time feedback on the efficiency of their code. In this work, we model the problem as a machine learning task and check its feasibility with a thorough analysis. Due to the lack of any open-source dataset for this task, we propose our own annotated dataset, CoRCoD: Code Runtime Complexity Dataset, extracted from online coding platforms (the complete dataset is available at https://github.com/midas-research/corcod-dataset/blob/master/README.md). We establish baselines using two different approaches, feature engineering and code embeddings, to achieve state-of-the-art results and compare their performances. Such solutions can be highly useful in potential applications such as automatically grading coding assignments and IDE-integrated tools for static code analysis.

  • Learning-Based Methods for Code Runtime Complexity Prediction
    arXiv: Learning, 2019
    Co-Authors: Jagriti Sikka, Kushal Satya, Yaman Kumar, Shagun Uppal, Rajiv Ratn Shah, Roger Zimmermann
    Abstract:

    Predicting the runtime complexity of program code is an arduous task. In fact, even for humans it requires subtle analysis and comprehensive knowledge of algorithms to predict time complexity with high fidelity, given any code. By Turing's halting-problem argument, exactly determining code complexity is undecidable in general. Nevertheless, an approximate solution to this task can give developers real-time feedback on the efficiency of their code. In this work, we model the problem as a machine learning task and check its feasibility with a thorough analysis. Due to the lack of any open-source dataset for this task, we propose our own annotated dataset CoRCoD: Code Runtime Complexity Dataset, extracted from online judges. We establish baselines using two different approaches, feature engineering and code embeddings, to achieve state-of-the-art results and compare their performances. Such solutions can be widely useful in potential applications such as automatically grading coding assignments and IDE-integrated tools for static code analysis.

Andy Schürr - One of the best experts on this subject based on the ideXlab platform.

  • Graph Transformations and Model-Driven Engineering - Extended triple graph grammars with efficient and compatible graph translators
    Lecture Notes in Computer Science, 2020
    Co-Authors: Felix Klar, A Königs, M. Lauder, Andy Schürr
    Abstract:

    Model-based software development processes often force their users to translate instances of one modeling language into related instances of another modeling language and vice versa. The underlying data structures of such languages are usually some sort of graph. Triple graph grammars (TGGs) are a formally founded language for describing correspondence relationships between two graph languages in a declarative way. Bidirectional graph language translators, which map pairs of related graph instances onto each other, can be derived from a TGG. These translators must fulfill certain compatibility properties with respect to the correspondence relationships established by their TGG. These properties are guaranteed for the original TGG approach as published 15 years ago; however, its expressiveness is pushed to the limit in most real-world scenarios. Furthermore, the original approach relies on a parsing algorithm with exponential runtime complexity. In this contribution, we study a more expressive class of TGGs with negative application conditions and show for the first time that derived translators with polynomial runtime complexity still preserve the above-mentioned compatibility properties. For this purpose, we introduce a new characterization of well-formed TGGs together with a new translation rule scheduling algorithm that considers dangling edges of input graphs.
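
    A toy forward translator in the spirit of a TGG-derived translator (the model elements and rules below are invented for this sketch, not taken from the paper): every source element is translated exactly once and a correspondence link records the pairing, which is the intuition behind the polynomial bound; the assertion plays the role of a dangling-edge check.

      source = [("Class", "Person"), ("Attr", ("Person", "name"))]

      def forward(source):
          target, corr = [], {}
          for kind, payload in source:        # schedule: classes precede the
              if kind == "Class":             # attributes they own
                  table = ("Table", payload)
                  target.append(table)
                  corr[("Class", payload)] = table
              elif kind == "Attr":
                  owner, name = payload
                  # rule applicable only once the owning class is translated
                  assert ("Class", owner) in corr, "dangling edge"
                  column = ("Column", owner, name)
                  target.append(column)
                  corr[("Attr", payload)] = column
          return target, corr

      print(forward(source))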

  • ECMFA - Bidirectional model transformation with precedence triple graph grammars
    Modelling Foundations and Applications, 2012
    Co-Authors: M. Lauder, Anthony Anjorin, Gergely Varró, Andy Schürr
    Abstract:

    Triple Graph Grammars (TGGs) are a rule-based technique with a formal background for specifying bidirectional model transformation. In practical scenarios, the unidirectional rules needed for the forward and backward transformations are automatically derived from the TGG rules in the specification, and the overall transformation process is governed by a control algorithm. Current implementations either have a worst-case exponential runtime complexity in the number of elements to be processed, or pose such strong restrictions on the class of supported TGGs that practical real-world applications become infeasible. This paper, therefore, introduces a new class of TGGs together with a control algorithm that drops a number of practice-relevant restrictions on TGG rules and still has polynomial runtime complexity.

  • Bidirectional model transformation with precedence triple graph grammars
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2012
    Co-Authors: M. Lauder, Anthony Anjorin, Gergely Varró, Andy Schürr
    Abstract:

    Triple Graph Grammars (TGGs) are a rule-based technique with a formal background for specifying bidirectional model transformation. In practical scenarios, the unidirectional rules needed for the forward and backward transformations are automatically derived from the TGG rules in the specification, and the overall transformation process is governed by a control algorithm. Current implementations either have a worst-case exponential runtime complexity in the number of elements to be processed, or pose such strong restrictions on the class of supported TGGs that practical real-world applications become infeasible. This paper, therefore, introduces a new class of TGGs together with a control algorithm that drops a number of practice-relevant restrictions on TGG rules and still has polynomial runtime complexity.

  • Extended triple graph grammars with efficient and compatible graph translators
    Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2010
    Co-Authors: Felix Klar, A Königs, M. Lauder, Andy Schürr
    Abstract:

    Model-based software development processes often force their users to translate instances of one modeling language into related instances of another modeling language and vice versa. The underlying data structures of such languages are usually some sort of graph. Triple graph grammars (TGGs) are a formally founded language for describing correspondence relationships between two graph languages in a declarative way. Bidirectional graph language translators, which map pairs of related graph instances onto each other, can be derived from a TGG. These translators must fulfill certain compatibility properties with respect to the correspondence relationships established by their TGG. These properties are guaranteed for the original TGG approach as published 15 years ago; however, its expressiveness is pushed to the limit in most real-world scenarios. Furthermore, the original approach relies on a parsing algorithm with exponential runtime complexity. In this contribution, we study a more expressive class of TGGs with negative application conditions and show for the first time that derived translators with polynomial runtime complexity still preserve the above-mentioned compatibility properties. For this purpose, we introduce a new characterization of well-formed TGGs together with a new translation rule scheduling algorithm that considers dangling edges of input graphs.

  • Extended triple graph grammars with efficient and compatible graph translators
    Graph transformations and model-driven engineering, 2010
    Co-Authors: Felix Klar, A Königs, M. Lauder, Andy Schürr
    Abstract:

    Model-based software development processes often force their users to translate instances of one modeling language into related instances of another modeling language and vice versa. The underlying data structures of such languages are usually some sort of graph. Triple graph grammars (TGGs) are a formally founded language for describing correspondence relationships between two graph languages in a declarative way. Bidirectional graph language translators, which map pairs of related graph instances onto each other, can be derived from a TGG. These translators must fulfill certain compatibility properties with respect to the correspondence relationships established by their TGG. These properties are guaranteed for the original TGG approach as published 15 years ago; however, its expressiveness is pushed to the limit in most real-world scenarios. Furthermore, the original approach relies on a parsing algorithm with exponential runtime complexity. In this contribution, we study a more expressive class of TGGs with negative application conditions and show for the first time that derived translators with polynomial runtime complexity still preserve the above-mentioned compatibility properties. For this purpose, we introduce a new characterization of well-formed TGGs together with a new translation rule scheduling algorithm that considers dangling edges of input graphs.

Jacques Garrigue - One of the best experts on this subject based on the ideXlab platform.

  • On the Runtime Complexity of type-directed unboxing
    International Conference on Functional Programming, 1998
    Co-Authors: Yasuhiko Minamide, Jacques Garrigue
    Abstract:

    Avoiding boxing when representing native objects is essential for the efficient compilation of any programming language. For polymorphic languages this task is difficult, but several schemes have been proposed that remove boxing on the basis of type information. Leroy's type-directed unboxing transformation is one of them. One of its nicest properties is that it relies only on visible types, which makes it compatible with separate compilation. However, it has been noticed that it is not safe in terms of either time or space complexity, i.e., transforming a program may increase its complexity. We propose a refinement of this transformation, still relying only on visible types, and prove that it satisfies the safety condition for time complexity. The proof is an extension of the usual logical relation method, in which correctness and safety are proved simultaneously.
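
    To see why such a transformation can be unsafe for complexity, consider the following sketch (illustrative Python, not Leroy's translation): a deep coercion between the specialised and the generic representation of a list traverses the whole list, so inserting a coercion at every call site can turn constant-time argument passing into a linear-time operation.

      def box(v):
          # coerce a native value into the generic "boxed" representation
          return ("boxed", v)

      def unbox(b):
          tag, v = b
          return v

      def coerce_list(coerce_elem, xs):
          # deep, type-directed coercion on lists: one full traversal per
          # coercion -- the O(n) cost the safety condition must control
          return [coerce_elem(x) for x in xs]

      xs = list(range(10))
      boxed = coerce_list(box, xs)        # O(n) wrap at the call site
      back = coerce_list(unbox, boxed)    # O(n) unwrap on return
      assert back == xs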

  • ICFP - On the Runtime Complexity of type-directed unboxing
    Proceedings of the third ACM SIGPLAN international conference on Functional programming - ICFP '98, 1998
    Co-Authors: Yasuhiko Minamide, Jacques Garrigue
    Abstract:

    Avoiding boxing when representing native objects is essential for the efficient compilation of any programming language. For polymorphic languages this task is difficult, but several schemes have been proposed that remove boxing on the basis of type information. Leroy's type-directed unboxing transformation is one of them. One of its nicest properties is that it relies only on visible types, which makes it compatible with separate compilation. However, it has been noticed that it is not safe in terms of either time or space complexity, i.e., transforming a program may increase its complexity. We propose a refinement of this transformation, still relying only on visible types, and prove that it satisfies the safety condition for time complexity. The proof is an extension of the usual logical relation method, in which correctness and safety are proved simultaneously.