Compiler

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 593,055 Experts worldwide ranked by ideXlab platform

David I. August - One of the best experts on this subject based on the ideXlab platform.

  • Compiler optimization space exploration
    Symposium on Code Generation and Optimization, 2003
    Co-Authors: Spyridon Triantafyllis, Manish Vachharajani, Neil Vachharajani, David I. August
    Abstract:

    To meet the demands of modern architectures, optimizing compilers must incorporate an ever larger number of increasingly complex transformation algorithms. Since code transformations may often degrade performance or interfere with subsequent transformations, compilers employ predictive heuristics to guide optimizations by predicting their effects a priori. Unfortunately, the unpredictability of optimization interactions and the irregularity of today's wide-issue machines severely limit the accuracy of these heuristics. As a result, compiler writers may temper high-variance optimizations with overly conservative heuristics or may exclude these optimizations entirely. While this process results in a compiler capable of generating good average code quality across the target benchmark set, it comes at the cost of missed optimization opportunities in individual code segments. To replace predictive heuristics, researchers have proposed compilers that explore many optimization options, selecting the best one a posteriori. Unfortunately, these existing iterative compilation techniques are not practical for reasons of compile time and applicability. In this paper, we present the Optimization-Space Exploration (OSE) compiler organization, the first practical iterative compilation strategy applicable to optimizations in general-purpose compilers. Instead of replacing predictive heuristics, OSE uses the compiler writer's knowledge encoded in the heuristics to select a small number of promising optimization alternatives for a given code segment. Compile time is limited by evaluating only these alternatives for hot code segments using a general compile-time performance estimator. An OSE-enhanced version of Intel's highly tuned, aggressively optimizing production compiler for IA-64 yields a significant performance improvement, more than 20% in some cases, on Itanium for SPEC codes.
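
    The OSE workflow described above — prune the search space with the compiler writer's heuristics, then evaluate the surviving configurations on hot segments with a static estimator — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the configuration fields, candidate set, and cost model are all hypothetical stand-ins.

    ```python
    # Sketch of Optimization-Space Exploration (OSE): compile each hot code
    # segment under a small, pre-pruned set of optimization configurations
    # and keep the best one a posteriori, rather than trusting a single
    # predictive heuristic. All names here are illustrative.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Config:
        unroll_factor: int
        if_convert: bool

    # A small set of promising alternatives, selected ahead of time using
    # the compiler writer's knowledge (the pruning step OSE relies on).
    CANDIDATES = [
        Config(unroll_factor=1, if_convert=False),
        Config(unroll_factor=4, if_convert=False),
        Config(unroll_factor=4, if_convert=True),
    ]

    def estimate_cycles(segment: str, cfg: Config) -> float:
        """Stand-in for a static compile-time performance estimator."""
        base = len(segment) * 10.0
        branch_penalty = 0.0 if cfg.if_convert else 5.0
        return base / cfg.unroll_factor + branch_penalty

    def explore(segment: str) -> Config:
        """Evaluate only the pruned candidates; return the estimator's pick."""
        return min(CANDIDATES, key=lambda cfg: estimate_cycles(segment, cfg))

    best = explore("hot_loop_body")
    ```

    Because only a handful of alternatives are evaluated, and only for hot segments, compile time stays bounded — the property that distinguishes OSE from exhaustive iterative compilation.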

  • CGO - Compiler optimization-space exploration
    International Symposium on Code Generation and Optimization 2003. CGO 2003., 2003
    Co-Authors: Spyridon Triantafyllis, Manish Vachharajani, Neil Vachharajani, David I. August
    Abstract:

    To meet the demands of modern architectures, optimizing compilers must incorporate an ever larger number of increasingly complex transformation algorithms. Since code transformations may often degrade performance or interfere with subsequent transformations, compilers employ predictive heuristics to guide optimizations by predicting their effects a priori. Unfortunately, the unpredictability of optimization interactions and the irregularity of today's wide-issue machines severely limit the accuracy of these heuristics. As a result, compiler writers may temper high-variance optimizations with overly conservative heuristics or may exclude these optimizations entirely. While this process results in a compiler capable of generating good average code quality across the target benchmark set, it comes at the cost of missed optimization opportunities in individual code segments. To replace predictive heuristics, researchers have proposed compilers that explore many optimization options, selecting the best one a posteriori. Unfortunately, these existing iterative compilation techniques are not practical for reasons of compile time and applicability. In this paper, we present the Optimization-Space Exploration (OSE) compiler organization, the first practical iterative compilation strategy applicable to optimizations in general-purpose compilers. Instead of replacing predictive heuristics, OSE uses the compiler writer's knowledge encoded in the heuristics to select a small number of promising optimization alternatives for a given code segment. Compile time is limited by evaluating only these alternatives for hot code segments using a general compile-time performance estimator. An OSE-enhanced version of Intel's highly tuned, aggressively optimizing production compiler for IA-64 yields a significant performance improvement, more than 20% in some cases, on Itanium for SPEC codes.

Spyridon Triantafyllis - One of the best experts on this subject based on the ideXlab platform.

  • Compiler optimization space exploration
    Symposium on Code Generation and Optimization, 2003
    Co-Authors: Spyridon Triantafyllis, Manish Vachharajani, Neil Vachharajani, David I. August
    Abstract:

    To meet the demands of modern architectures, optimizing compilers must incorporate an ever larger number of increasingly complex transformation algorithms. Since code transformations may often degrade performance or interfere with subsequent transformations, compilers employ predictive heuristics to guide optimizations by predicting their effects a priori. Unfortunately, the unpredictability of optimization interactions and the irregularity of today's wide-issue machines severely limit the accuracy of these heuristics. As a result, compiler writers may temper high-variance optimizations with overly conservative heuristics or may exclude these optimizations entirely. While this process results in a compiler capable of generating good average code quality across the target benchmark set, it comes at the cost of missed optimization opportunities in individual code segments. To replace predictive heuristics, researchers have proposed compilers that explore many optimization options, selecting the best one a posteriori. Unfortunately, these existing iterative compilation techniques are not practical for reasons of compile time and applicability. In this paper, we present the Optimization-Space Exploration (OSE) compiler organization, the first practical iterative compilation strategy applicable to optimizations in general-purpose compilers. Instead of replacing predictive heuristics, OSE uses the compiler writer's knowledge encoded in the heuristics to select a small number of promising optimization alternatives for a given code segment. Compile time is limited by evaluating only these alternatives for hot code segments using a general compile-time performance estimator. An OSE-enhanced version of Intel's highly tuned, aggressively optimizing production compiler for IA-64 yields a significant performance improvement, more than 20% in some cases, on Itanium for SPEC codes.

  • CGO - Compiler optimization-space exploration
    International Symposium on Code Generation and Optimization 2003. CGO 2003., 2003
    Co-Authors: Spyridon Triantafyllis, Manish Vachharajani, Neil Vachharajani, David I. August
    Abstract:

    To meet the demands of modern architectures, optimizing compilers must incorporate an ever larger number of increasingly complex transformation algorithms. Since code transformations may often degrade performance or interfere with subsequent transformations, compilers employ predictive heuristics to guide optimizations by predicting their effects a priori. Unfortunately, the unpredictability of optimization interactions and the irregularity of today's wide-issue machines severely limit the accuracy of these heuristics. As a result, compiler writers may temper high-variance optimizations with overly conservative heuristics or may exclude these optimizations entirely. While this process results in a compiler capable of generating good average code quality across the target benchmark set, it comes at the cost of missed optimization opportunities in individual code segments. To replace predictive heuristics, researchers have proposed compilers that explore many optimization options, selecting the best one a posteriori. Unfortunately, these existing iterative compilation techniques are not practical for reasons of compile time and applicability. In this paper, we present the Optimization-Space Exploration (OSE) compiler organization, the first practical iterative compilation strategy applicable to optimizations in general-purpose compilers. Instead of replacing predictive heuristics, OSE uses the compiler writer's knowledge encoded in the heuristics to select a small number of promising optimization alternatives for a given code segment. Compile time is limited by evaluating only these alternatives for hot code segments using a general compile-time performance estimator. An OSE-enhanced version of Intel's highly tuned, aggressively optimizing production compiler for IA-64 yields a significant performance improvement, more than 20% in some cases, on Itanium for SPEC codes.

Jeffrey Considine - One of the best experts on this subject based on the ideXlab platform.

  • program representation size in an intermediate language with intersection and union types
    Lecture Notes in Computer Science, 2000
    Co-Authors: Allyn Dimock, Ian Westmacott, Robert Muller, Franklyn Turbak, J B Wells, Jeffrey Considine
    Abstract:

    The CIL compiler for core Standard ML compiles whole programs using a novel typed intermediate language (TIL) with intersection and union types and flow labels on both terms and types. The CIL term representation duplicates portions of the program where intersection types are introduced and union types are eliminated. This duplication makes it easier to represent type information and to introduce customized data representations. However, duplication incurs compile-time space costs that are potentially much greater than those incurred in TILs employing type-level abstraction or quantification. In this paper, we present empirical data on the compile-time space costs of using CIL as an intermediate language. The data show that these costs can be made tractable by using sufficiently fine-grained flow analyses together with standard hash-consing techniques. The data also suggest that non-duplicating formulations of intersection (and union) types would not achieve significantly better space complexity.
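
    The hash-consing the abstract relies on can be sketched briefly: structurally equal terms are constructed once and shared thereafter, so the duplication introduced by intersection and union types costs one node, not many. This is an illustrative sketch; the constructor names are not CIL's.

    ```python
    # Minimal hash-consing sketch: a global table maps each (tag, children)
    # shape to a single canonical node, so structurally equal type terms are
    # shared rather than duplicated in memory.
    _table: dict = {}

    def cons(tag, *children):
        """Return the unique shared node for this (tag, children) shape."""
        key = (tag, children)
        node = _table.get(key)
        if node is None:
            node = key          # the key tuple itself serves as the node
            _table[key] = node
        return node

    int_t = cons("int")
    arrow = cons("->", int_t, int_t)                       # int -> int
    inter = cons("and", arrow, cons("->", int_t, int_t))   # intersection of two copies

    # Both components of the intersection are the same object in memory,
    # so the "duplicated" branch adds no space beyond a pointer.
    assert inter[1][0] is inter[1][1]
    ```

    With fine-grained flow labels, many duplicated program fragments end up structurally identical, which is why this simple sharing scheme keeps the reported space costs tractable.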

Allyn Dimock - One of the best experts on this subject based on the ideXlab platform.

  • Type- and flow-directed compilation for specialized data representations
    2002
    Co-Authors: Stuart M. Shieber, Allyn Dimock
    Abstract:

    The combination of intersection and union types with flow types gives the compiler writer unprecedented flexibility in choosing data representations in the context of a typed intermediate language. We present the design of such a language and of a framework for exploiting the type system to support multiple representations of the same data type in a single program. The framework can transform the input term, in a type-safe way, so that different data representations can be used in the transformed term, even if they share a use site in the pre-transformed term. We have implemented a compiler using the typed intermediate language and instantiated the framework to allow specialized function representations. We test the compiler on a set of benchmarks and show that compile-time performance is reasonable. We further show that the compiled code does indeed benefit from specialized function representations.
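
    The idea of multiple representations for one source-level function type can be illustrated with a toy dispatch scheme: a general closure record alongside a specialized bare code pointer, with call sites handling whichever representation flow information permits. This sketch is hypothetical and much coarser than the paper's type-directed framework.

    ```python
    # Toy illustration of specialized function representations: the same
    # source type "function" gets two runtime shapes. Flow analysis in the
    # real system decides, type-safely, which shape each def/use pair uses;
    # here a tag stands in for that decision. All names are illustrative.

    def make_closure(code, env):
        # General representation: code pointer paired with an environment record.
        return ("closure", code, env)

    def make_direct(code):
        # Specialized representation: a bare code pointer, valid when the
        # function captures nothing (an empty environment).
        return ("direct", code)

    def apply_fn(fn, arg):
        """Call site that accepts either representation."""
        if fn[0] == "direct":
            return fn[1](arg)
        _, code, env = fn
        return code(env, arg)

    inc = make_direct(lambda x: x + 1)                          # captures nothing
    add_n = make_closure(lambda env, x: x + env["n"], {"n": 10})  # captures n
    ```

    The benchmark gains reported in the abstract come from call sites where the specialized shape removes the environment indirection entirely, which the flow-directed transformation can do even when specialized and general values would otherwise meet at a shared use site.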

  • program representation size in an intermediate language with intersection and union types
    Lecture Notes in Computer Science, 2000
    Co-Authors: Allyn Dimock, Ian Westmacott, Robert Muller, Franklyn Turbak, J B Wells, Jeffrey Considine
    Abstract:

    The CIL compiler for core Standard ML compiles whole programs using a novel typed intermediate language (TIL) with intersection and union types and flow labels on both terms and types. The CIL term representation duplicates portions of the program where intersection types are introduced and union types are eliminated. This duplication makes it easier to represent type information and to introduce customized data representations. However, duplication incurs compile-time space costs that are potentially much greater than those incurred in TILs employing type-level abstraction or quantification. In this paper, we present empirical data on the compile-time space costs of using CIL as an intermediate language. The data show that these costs can be made tractable by using sufficiently fine-grained flow analyses together with standard hash-consing techniques. The data also suggest that non-duplicating formulations of intersection (and union) types would not achieve significantly better space complexity.

Neil Vachharajani - One of the best experts on this subject based on the ideXlab platform.

  • Compiler optimization space exploration
    Symposium on Code Generation and Optimization, 2003
    Co-Authors: Spyridon Triantafyllis, Manish Vachharajani, Neil Vachharajani, David I. August
    Abstract:

    To meet the demands of modern architectures, optimizing compilers must incorporate an ever larger number of increasingly complex transformation algorithms. Since code transformations may often degrade performance or interfere with subsequent transformations, compilers employ predictive heuristics to guide optimizations by predicting their effects a priori. Unfortunately, the unpredictability of optimization interactions and the irregularity of today's wide-issue machines severely limit the accuracy of these heuristics. As a result, compiler writers may temper high-variance optimizations with overly conservative heuristics or may exclude these optimizations entirely. While this process results in a compiler capable of generating good average code quality across the target benchmark set, it comes at the cost of missed optimization opportunities in individual code segments. To replace predictive heuristics, researchers have proposed compilers that explore many optimization options, selecting the best one a posteriori. Unfortunately, these existing iterative compilation techniques are not practical for reasons of compile time and applicability. In this paper, we present the Optimization-Space Exploration (OSE) compiler organization, the first practical iterative compilation strategy applicable to optimizations in general-purpose compilers. Instead of replacing predictive heuristics, OSE uses the compiler writer's knowledge encoded in the heuristics to select a small number of promising optimization alternatives for a given code segment. Compile time is limited by evaluating only these alternatives for hot code segments using a general compile-time performance estimator. An OSE-enhanced version of Intel's highly tuned, aggressively optimizing production compiler for IA-64 yields a significant performance improvement, more than 20% in some cases, on Itanium for SPEC codes.

  • CGO - Compiler optimization-space exploration
    International Symposium on Code Generation and Optimization 2003. CGO 2003., 2003
    Co-Authors: Spyridon Triantafyllis, Manish Vachharajani, Neil Vachharajani, David I. August
    Abstract:

    To meet the demands of modern architectures, optimizing compilers must incorporate an ever larger number of increasingly complex transformation algorithms. Since code transformations may often degrade performance or interfere with subsequent transformations, compilers employ predictive heuristics to guide optimizations by predicting their effects a priori. Unfortunately, the unpredictability of optimization interactions and the irregularity of today's wide-issue machines severely limit the accuracy of these heuristics. As a result, compiler writers may temper high-variance optimizations with overly conservative heuristics or may exclude these optimizations entirely. While this process results in a compiler capable of generating good average code quality across the target benchmark set, it comes at the cost of missed optimization opportunities in individual code segments. To replace predictive heuristics, researchers have proposed compilers that explore many optimization options, selecting the best one a posteriori. Unfortunately, these existing iterative compilation techniques are not practical for reasons of compile time and applicability. In this paper, we present the Optimization-Space Exploration (OSE) compiler organization, the first practical iterative compilation strategy applicable to optimizations in general-purpose compilers. Instead of replacing predictive heuristics, OSE uses the compiler writer's knowledge encoded in the heuristics to select a small number of promising optimization alternatives for a given code segment. Compile time is limited by evaluating only these alternatives for hot code segments using a general compile-time performance estimator. An OSE-enhanced version of Intel's highly tuned, aggressively optimizing production compiler for IA-64 yields a significant performance improvement, more than 20% in some cases, on Itanium for SPEC codes.
