Object-Oriented Languages

Craig Chambers

  • A retrospective on customization: optimizing compiler technology for SELF, a dynamically-typed Object-Oriented programming language
    Programming Language Design and Implementation, 2004
    Co-Authors: Craig Chambers, David Ungar
    Abstract:

    Dynamically-typed Object-Oriented Languages please programmers, but their lack of static type information penalizes performance. Our new implementation techniques extract static type information from declaration-free programs. Our system compiles several copies of a given procedure, each customized for one receiver type, so that the type of the receiver is bound at compile time. The compiler predicts types that are statically unknown but likely, and inserts run-time type tests to verify its predictions. It splits calls, compiling a copy on each control path, optimized to the specific types on that path. Coupling these new techniques with compile-time message lookup, aggressive procedure inlining, and traditional optimizations has doubled the performance of dynamically-typed Object-Oriented Languages.
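
    As a loose illustration of the techniques this abstract names, the Java sketch below shows the shape of code a customizing compiler might emit for a predicted receiver type: a compiler-inserted run-time type test guards an inlined fast path, and the original dynamic dispatch remains as the fallback. The Shape/Circle/Square classes are invented for the example; the paper's system works on SELF, not Java.

      // Hypothetical illustration of compiler-inserted type tests and guarded
      // inlining; the Shape/Circle/Square classes are invented for the example.
      interface Shape { double area(); }

      final class Circle implements Shape {
          final double r;
          Circle(double r) { this.r = r; }
          public double area() { return Math.PI * r * r; }
      }

      final class Square implements Shape {
          final double s;
          Square(double s) { this.s = s; }
          public double area() { return s * s; }
      }

      public class GuardedInlineSketch {
          // What a customizing compiler might conceptually produce when it predicts
          // the receiver is most likely a Circle: a run-time type test guards an
          // inlined copy of Circle.area(); the uncommon path keeps the full dispatch.
          static double areaPredicted(Shape sh) {
              if (sh instanceof Circle) {          // compiler-inserted type test
                  Circle c = (Circle) sh;
                  return Math.PI * c.r * c.r;      // inlined Circle.area()
              }
              return sh.area();                    // uncommon path: dynamic dispatch
          }

          public static void main(String[] args) {
              System.out.println(areaPredicted(new Circle(2.0)));  // fast, inlined path
              System.out.println(areaPredicted(new Square(3.0)));  // falls back to dispatch
          }
      }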

  • Effective interprocedural optimization of Object-Oriented Languages
    1998
    Co-Authors: David Grove, Craig Chambers
    Abstract:

    This dissertation demonstrates that interprocedural analysis can be both practical and effective for sizeable Object-Oriented programs. Although frequent procedure calls and message sends are important structuring techniques in Object-Oriented Languages, they can also severely degrade application run-time performance. A number of analyses and transformations have been developed that attack this performance problem by enabling the compile-time replacement of message sends with procedure calls and of procedure calls with inlined copies of their callees. Despite the success of these techniques, even after they are applied it is extremely likely that some message sends and non-inlined procedure calls will remain in the program. These remaining call sites can force an optimizing compiler to make pessimistic assumptions about program behavior, causing it to miss opportunities for potentially profitable optimizations. Interprocedural analysis is one well-known technique for enabling an optimizing compiler to more precisely model the effects of non-inlined calls, thus reducing their impact on application performance. Interprocedural analysis of Object-Oriented and functional Languages is quite challenging because message sends and/or applications of computed function values complicate the construction of the program call graph, a critical data structure for interprocedural analyses. Therefore, the core of this dissertation is an in-depth examination of the call graph construction problem for Object-Oriented Languages. It consists of the following components: (1) A general parameterized algorithm that encompasses many well-known and novel call graph construction algorithms is defined. (2) The general algorithm is implemented in the Vortex compiler infrastructure, a mature, multilanguage, optimizing compiler. The Vortex implementation provides a “level playing field” for meaningful cross-algorithm performance comparisons. (3) The costs and benefits of a number of call graph construction algorithms are empirically assessed by applying their Vortex implementation to a suite of sizeable (5,000 to 50,000 lines of code) Cecil and Java programs. Two small Smalltalk programs are also considered. For many of the benchmark applications, interprocedural analysis enabled substantial speed-ups over an already highly optimized baseline. Furthermore, a significant fraction of these speed-ups can be obtained through the use of a scalable, near-linear time call graph construction algorithm.

  • Call graph construction in Object-Oriented Languages
    Conference on Object-Oriented Programming Systems Languages and Applications, 1997
    Co-Authors: David Grove, Jeffrey Dean, Greg Defouw, Craig Chambers
    Abstract:

    Interprocedural analyses enable optimizing compilers to more precisely model the effects of non-inlined procedure calls, potentially resulting in substantial increases in application performance. Applying interprocedural analysis to programs written in Object-Oriented or functional Languages is complicated by the difficulty of constructing an accurate program call graph. This paper presents a parameterized algorithmic framework for call graph construction in the presence of message sends and/or first class functions. We use this framework to describe and to implement a number of well-known and new algorithms. We then empirically assess these algorithms by applying them to a suite of medium-sized programs written in Cecil and Java, reporting on the relative cost of the analyses, the relative precision of the constructed call graphs, and the impact of this precision on the effectiveness of a number of interprocedural optimizations.
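
    One simple point in the design space the paper parameterizes is a class-hierarchy-style construction that resolves each send to every method defined on a subtype of the receiver's declared type. The Java sketch below implements that idea over a toy program representation; the Method/CallSite types, the hierarchy map, and all names are invented for the example and are much cruder than the paper's framework.

      import java.util.*;

      // Toy program representation: each method lists call sites naming a declared
      // receiver type and a selector. All names are invented for this sketch.
      public class ChaCallGraphSketch {
          record CallSite(String receiverType, String selector) {}
          record Method(String definingClass, String selector, List<CallSite> callSites) {}

          public static void main(String[] args) {
              // Class name -> its subclasses (itself included), i.e. the class hierarchy.
              Map<String, Set<String>> subclasses = Map.of(
                  "Shape", Set.of("Shape", "Circle", "Square"),
                  "Circle", Set.of("Circle"),
                  "Square", Set.of("Square"));

              List<Method> methods = List.of(
                  new Method("Main", "run", List.of(new CallSite("Shape", "area"))),
                  new Method("Circle", "area", List.of()),
                  new Method("Square", "area", List.of()));

              // Call graph: "Class.selector" -> possible callee methods.
              Map<String, Set<String>> callGraph = new LinkedHashMap<>();
              for (Method m : methods) {
                  String caller = m.definingClass() + "." + m.selector();
                  for (CallSite cs : m.callSites()) {
                      Set<String> callees =
                          callGraph.computeIfAbsent(caller, k -> new LinkedHashSet<>());
                      // CHA-style resolution: any method with this selector defined on a
                      // subtype of the declared receiver type is a possible target.
                      for (Method callee : methods) {
                          if (callee.selector().equals(cs.selector())
                                  && subclasses.getOrDefault(cs.receiverType(), Set.of())
                                               .contains(callee.definingClass())) {
                              callees.add(callee.definingClass() + "." + callee.selector());
                          }
                      }
                  }
              }
              System.out.println(callGraph);  // {Main.run=[Circle.area, Square.area]}
          }
      }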

  • OOPSLA - Vortex: an optimizing compiler for Object-Oriented Languages
    Proceedings of the 11th ACM SIGPLAN conference on Object-oriented programming systems languages and applications - OOPSLA '96, 1996
    Co-Authors: Jeffrey Dean, David Grove, Greg Defouw, Vassily Litvinov, Craig Chambers
    Abstract:

    Previously, techniques such as class hierarchy analysis and profile-guided receiver class prediction have been demonstrated to greatly improve the performance of applications written in pure Object-Oriented Languages, but the degree to which these results are transferable to applications written in hybrid Languages has been unclear. In part to answer this question, we have developed the Vortex compiler infrastructure, a language-independent optimizing compiler for Object-Oriented Languages, with front-ends for Cecil, C++, Java, and Modula-3. In this paper, we describe the Vortex compiler's intermediate language, internal structure, and optimization suite, and then we report the results of experiments assessing the effectiveness of different combinations of optimizations on sizable applications across these four Languages. We characterize the benchmark programs in terms of a collection of static and dynamic metrics, intended to quantify aspects of the "Object-Orientedness" of a program.
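
    As a small illustration of the class hierarchy analysis the abstract mentions, the Java sketch below makes the devirtualization decision CHA enables: if the hierarchy proves a call site has exactly one possible target, the message send can become a direct, inlinable call. The class hierarchy and method table are invented for the example and are not Vortex data structures.

      import java.util.*;

      // If class hierarchy analysis proves a call site has exactly one possible
      // target, the message send can be replaced by a direct, inlinable call.
      // The hierarchy and method table below are invented for the example.
      public class ChaDevirtualizeSketch {
          static String resolve(String declaredType, String selector,
                                Map<String, Set<String>> subtypes,
                                Map<String, Set<String>> definedSelectors) {
              List<String> targets = new ArrayList<>();
              for (String cls : subtypes.getOrDefault(declaredType, Set.of())) {
                  if (definedSelectors.getOrDefault(cls, Set.of()).contains(selector)) {
                      targets.add(cls + "." + selector);
                  }
              }
              return targets.size() == 1
                      ? "direct call to " + targets.get(0)
                      : "dynamic dispatch (" + targets.size() + " possible targets)";
          }

          public static void main(String[] args) {
              Map<String, Set<String>> subtypes = Map.of(
                  "Stream", Set.of("Stream", "FileStream", "SocketStream"),
                  "Symbol", Set.of("Symbol"));
              Map<String, Set<String>> definedSelectors = Map.of(
                  "FileStream", Set.of("read"),
                  "SocketStream", Set.of("read"),
                  "Symbol", Set.of("hash"));

              System.out.println(resolve("Symbol", "hash", subtypes, definedSelectors));
              System.out.println(resolve("Stream", "read", subtypes, definedSelectors));
          }
      }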

  • PLDI - Selective specialization for Object-Oriented Languages
    Proceedings of the ACM SIGPLAN 1995 conference on Programming language design and implementation - PLDI '95, 1995
    Co-Authors: Jeffrey Dean, Craig Chambers, David Grove
    Abstract:

    Dynamic dispatching is a major source of run-time overhead in Object-Oriented Languages, due both to the direct cost of method lookup and to the indirect effect of preventing other optimizations. To reduce this overhead, optimizing compilers for Object-Oriented Languages analyze the classes of objects stored in program variables, with the goal of bounding the possible classes of message receivers enough so that the compiler can uniquely determine the target of a message send at compile time and replace the message send with a direct procedure call. Specialization is one important technique for improving the precision of this static class information: by compiling multiple versions of a method, each applicable to a subset of the possible argument classes of the method, more precise static information about the classes of the method's arguments is obtained. Previous specialization strategies have not been selective about where this technique is applied, and therefore tended to significantly increase compile time and code space usage, particularly for large applications. In this paper, we present a more general framework for specialization in Object-Oriented Languages and describe a goal directed specialization algorithm that makes selective decisions to apply specialization to those cases where it provides the highest benefit. Our results show that our algorithm improves the performance of a group of sizeable programs by 65% to 275% while increasing compiled code space requirements by only 4% to 10%. Moreover, when compared to the previous state-of-the-art specialization scheme, our algorithm improves performance by 11% to 67% while simultaneously reducing code space requirements by 65% to 73%.
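
    The selection step can be pictured as a simple cost/benefit test. The Java sketch below is a toy version of such a decision, assuming a profile of call counts per argument-class tuple and invented cost numbers; the paper's goal-directed algorithm is considerably more sophisticated.

      import java.util.*;

      // Toy selective specialization decision: specialize a method only for
      // argument-class tuples whose estimated benefit (profiled call count times an
      // assumed per-call saving) outweighs an assumed code-space cost. All numbers
      // and class names are invented for the illustration.
      public class SelectiveSpecializationSketch {
          public static void main(String[] args) {
              // Profiled call counts per (receiver class, argument class) pair for
              // some hypothetical method Matrix.add(other).
              Map<List<String>, Long> profile = Map.of(
                  List.of("DenseMatrix", "DenseMatrix"), 1_200_000L,
                  List.of("DenseMatrix", "SparseMatrix"), 40_000L,
                  List.of("SparseMatrix", "SparseMatrix"), 900L);

              double savingPerCall = 5.0;        // assumed cycles saved by a specialized copy
              double codeSpaceCost = 500_000.0;  // assumed cost of one extra compiled copy

              for (var e : profile.entrySet()) {
                  double benefit = e.getValue() * savingPerCall;
                  String decision = benefit > codeSpaceCost ? "SPECIALIZE" : "share generic code";
                  System.out.printf("%-40s %s%n", e.getKey(), decision);
              }
          }
      }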

Urs Hölzle

  • Type feedback vs. concrete type inference: a comparison of optimization techniques for Object-Oriented Languages
    Conference on Object-Oriented Programming Systems Languages and Applications, 1995
    Co-Authors: Ole Agesen, Urs Hölzle
    Abstract:

    Two promising optimization techniques for Object-Oriented Languages are type feedback (profile-based receiver class prediction) and concrete type inference (static analysis). We directly compare the two techniques, evaluating their effectiveness on a suite of 23 SELF programs while keeping other factors constant. Our results show that both systems inline over 95% of all sends and deliver similar overall performance with one exception: SELF's automatic coercion of machine integers to arbitrary-precision integers upon overflow confounds type inference and slows down arithmetic-intensive benchmarks. We discuss several other issues which, given the comparable run-time performance, may influence the choice between type feedback and type inference.
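
    The profiling half of type feedback can be sketched as a per-call-site receiver-class histogram from which the compiler picks a dominant class to inline behind a type test. The Java sketch below shows that idea with an invented 95% threshold; it does not model the SELF system or the type inferencer.

      import java.util.*;

      // The profiling side of type feedback for a single call site: record the
      // concrete receiver classes actually seen, then predict the dominant class
      // so the compiler can inline it behind a type test. The 95% threshold and
      // all names are invented for the example.
      public class TypeFeedbackSketch {
          static final Map<String, Long> siteHistogram = new HashMap<>();

          static void recordReceiver(Object receiver) {
              siteHistogram.merge(receiver.getClass().getSimpleName(), 1L, Long::sum);
          }

          // Recompilation policy: predict a class only if it covers at least 95%
          // of the sends observed at this site.
          static Optional<String> predictedClass() {
              long total = siteHistogram.values().stream().mapToLong(Long::longValue).sum();
              return siteHistogram.entrySet().stream()
                      .filter(e -> total > 0 && e.getValue() >= 0.95 * total)
                      .map(Map.Entry::getKey)
                      .findFirst();
          }

          public static void main(String[] args) {
              for (int i = 0; i < 99; i++) recordReceiver("a string receiver");
              recordReceiver(42);                    // one Integer receiver, the rare case
              System.out.println(predictedClass());  // Optional[String]
          }
      }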

  • ECOOP - Do Object-Oriented Languages need special hardware support?
    Object-Oriented Programming (ECOOP), 1995
    Co-Authors: Urs Hölzle, David Ungar
    Abstract:

    Previous studies have shown that Object-Oriented programs have different execution characteristics than procedural programs, and that special Object-Oriented hardware can improve performance. The results of these studies may no longer hold because compiler optimizations can remove a large fraction of the differences. Our measurements show that SELF programs are more similar to C programs than are C++ programs, even though SELF is much more radically Object-Oriented than C++ and thus should differ much more from C. Furthermore, the benefit of tagged arithmetic instructions in the SPARC architecture (originally motivated by Smalltalk and Lisp implementations) appears to be small. Also, special hardware could hardly reduce message dispatch overhead since dispatch sequences are already very short. Two generic hardware features, instruction cache size and data cache write policy, have a much greater impact on performance.
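
    For readers unfamiliar with the tagged arithmetic the abstract refers to, the Java sketch below shows a software analogue: small integers carry a tag bit, a tagged add checks the tags and signed overflow, and the slow path would promote to an arbitrary-precision representation. The encoding (low bit 0 for small integers) is one common convention chosen for illustration, not the paper's measurement setup.

      // Software analogue of tagged integer arithmetic: the low bit distinguishes
      // small integers (tag 0, value shifted left by one) from other values. The
      // SPARC tagged-add instructions perform the tag and overflow checks in
      // hardware; here both are explicit. The encoding is a common convention
      // chosen for illustration, not taken from the paper.
      public class TaggedAddSketch {
          static long tag(long v) { return v << 1; }    // small integer: value * 2, low bit 0
          static long untag(long t) { return t >> 1; }

          // Returns the tagged sum, or throws to model the fall-back to a boxed or
          // arbitrary-precision slow path.
          static long taggedAdd(long a, long b) {
              if (((a | b) & 1L) != 0) throw new ArithmeticException("not a small integer");
              long sum = a + b;
              if (((a ^ sum) & (b ^ sum)) < 0)          // signed overflow of the tagged sum
                  throw new ArithmeticException("overflow: promote to bignum");
              return sum;
          }

          public static void main(String[] args) {
              System.out.println(untag(taggedAdd(tag(20), tag(22))));  // 42
          }
      }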

David Ungar

  • A retrospective on customization: optimizing compiler technology for SELF, a dynamically-typed Object-Oriented programming language
    Programming Language Design and Implementation, 2004
    Co-Authors: Craig Chambers, David Ungar
    Abstract:

    Dynamically-typed Object-Oriented Languages please programmers, but their lack of static type information penalizes performance. Our new implementation techniques extract static type information from declaration-free programs. Our system compiles several copies of a given procedure, each customized for one receiver type, so that the type of the receiver is bound at compile time. The compiler predicts types that are statically unknown but likely, and inserts run-time type tests to verify its predictions. It splits calls, compiling a copy on each control path, optimized to the specific types on that path. Coupling these new techniques with compile-time message lookup, aggressive procedure inlining, and traditional optimizations has doubled the performance of dynamically-typed Object-Oriented Languages.

  • OOPSLA - Making pure Object-Oriented Languages practical
    ACM SIGPLAN Notices, 1991
    Co-Authors: Craig Chambers, David Ungar
    Abstract:

    In the past, Object-Oriented language designers and programmers have been forced to choose between pure message passing and performance. Last year, our SELF system achieved close to half the speed of optimized C but suffered from impractically long compile times. Two new optimization techniques, deferred compilation of uncommon cases and non-backtracking splitting using path objects, have improved compilation speed by more than an order of magnitude. SELF now compiles about as fast as an optimizing C compiler and runs at over half the speed of optimized C. This new level of performance may make pure Object-Oriented Languages practical.
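
    Deferred compilation of uncommon cases can be pictured as leaving a stub in place of the rarely taken branch and generating its code only the first time it actually runs. The Java sketch below mimics that behavior with a lazily initialized handler; it is an analogy, not the SELF compiler's mechanism.

      import java.util.function.Supplier;

      // Analogy for deferred compilation of uncommon cases: the common path is
      // generated eagerly, while the uncommon branch is left as a stub that does
      // the expensive work only the first time it is actually taken. The Supplier
      // stands in for the compiler's uncommon-branch stub; names are invented.
      public class DeferredCompilationSketch {
          static Supplier<Runnable> uncommonStub = () -> {
              System.out.println("(compiling uncommon case on first use)");
              Runnable compiled = () -> System.out.println("handling overflow slowly");
              uncommonStub = () -> compiled;   // cache the "compiled" code for later uses
              return compiled;
          };

          static void add(int a, int b) {
              long sum = (long) a + b;
              if (sum == (int) sum) {
                  System.out.println("fast path: " + (int) sum);  // common case
              } else {
                  uncommonStub.get().run();                       // uncommon case, deferred
              }
          }

          public static void main(String[] args) {
              add(1, 2);
              add(Integer.MAX_VALUE, 1);   // triggers the deferred "compilation"
              add(Integer.MAX_VALUE, 2);   // reuses the cached result
          }
      }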

  • ECOOP - Do Object-Oriented Languages need special hardware support?
    Object-Oriented Programming (ECOOP), 1995
    Co-Authors: Urs Hölzle, David Ungar
    Abstract:

    Previous studies have shown that Object-Oriented programs have different execution characteristics than procedural programs, and that special Object-Oriented hardware can improve performance. The results of these studies may no longer hold because compiler optimizations can remove a large fraction of the differences. Our measurements show that SELF programs are more similar to C programs than are C++ programs, even though SELF is much more radically Object-Oriented than C++ and thus should differ much more from C. Furthermore, the benefit of tagged arithmetic instructions in the SPARC architecture (originally motivated by Smalltalk and Lisp implementations) appears to be small. Also, special hardware could hardly reduce message dispatch overhead since dispatch sequences are already very short. Two generic hardware features, instruction cache size and data cache write policy, have a much greater impact on performance.

Andrew A. Chien

  • POPL - Obtaining sequential efficiency for concurrent Object-Oriented Languages
    Proceedings of the 22nd ACM SIGPLAN-SIGACT symposium on Principles of programming languages - POPL '95, 1995
    Co-Authors: John Plevyak, Xingbin Zhang, Andrew A. Chien
    Abstract:

    Concurrent Object-Oriented programming (COOP) Languages focus the abstraction and encapsulation power of abstract data types on the problem of concurrency control. In particular, pure fine-grained concurrent Object-Oriented Languages (as opposed to hybrid or data-parallel ones) provide the programmer with a simple, uniform, and flexible model while exposing maximum concurrency. While such Languages promise to greatly reduce the complexity of large-scale concurrent programming, their popularity has been hampered by efficiency that is often many orders of magnitude lower than that of comparable sequential code. We present a sufficient set of techniques that enables the efficiency of fine-grained concurrent Object-Oriented Languages to equal that of traditional sequential Languages (like C) when the required data is available. These techniques are empirically validated by applying them to a COOP implementation of the Livermore Loops.
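
    The core idea, running an invocation as ordinary sequential code whenever the required data is locally available, can be sketched as a locality/availability check in front of the message send. The Java sketch below does this with an invented Counter object and scheduler queue; the real techniques operate inside the compiler and runtime, not in user code.

      import java.util.ArrayDeque;
      import java.util.Queue;

      // Simplified sketch of the "run it sequentially when the data is here" idea:
      // an invocation on an actor-like object executes as a plain method call when
      // the object is local and idle, and only falls back to queueing a concurrent
      // task otherwise. The Counter object and scheduler queue are invented.
      public class SequentialFastPathSketch {
          static final Queue<Runnable> scheduler = new ArrayDeque<>();

          static class Counter {
              boolean busy = false;   // stands in for "data or lock not available"
              boolean local = true;   // stands in for "object lives on this node"
              int value = 0;
              void increment() { value++; }
          }

          static void send(Counter c, Runnable message) {
              if (c.local && !c.busy) {
                  message.run();            // fast path: ordinary sequential call
              } else {
                  scheduler.add(message);   // slow path: schedule as a concurrent task
              }
          }

          public static void main(String[] args) {
              Counter c = new Counter();
              send(c, c::increment);        // runs inline, sequentially
              c.busy = true;
              send(c, c::increment);        // deferred to the scheduler
              System.out.println(c.value + " executed, " + scheduler.size() + " queued");
          }
      }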

  • Precise concrete type inference for Object-Oriented Languages
    Conference on Object-Oriented Programming Systems Languages and Applications, 1994
    Co-Authors: John Plevyak, Andrew A. Chien
    Abstract:

    Concrete type information is invaluable for program optimization. The determination of concrete types in Object-Oriented Languages is a flow-sensitive global data flow problem. It is made difficult by dynamic dispatch (virtual function invocation) and first-class functions (and selectors), the very program structures for whose optimization its results are most critical. Previous work has shown that constraint-based type inference systems can be used to safely approximate concrete types [15], but their use can be expensive and their results imprecise. We present an incremental constraint-based type inference that produces precise concrete type information for a much larger class of programs at lower cost. Our algorithm extends the analysis in response to discovered imprecisions, guiding the analysis effort to where it is most productive. This produces precise information at a cost proportional to the type complexity of the program. Many programs that were untypable by previous approaches, or practically untypable due to computational expense, can be precisely analyzed by our new algorithm. Performance results, precision, and running times are reported for a number of concurrent Object-Oriented programs. These results confirm the algorithm's precision and efficiency.
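
    The constraint-based starting point the paper builds on can be sketched as set-based type propagation to a fixed point: each variable carries a set of possible concrete classes and assignments induce subset constraints. The Java sketch below shows only that basic propagation with invented variable and class names; the paper's contribution, incrementally refining the analysis where it proves imprecise, is not modeled.

      import java.util.*;

      // Bare-bones set-constraint propagation: each variable has a set of possible
      // concrete classes, assignments induce subset constraints, and a worklist
      // iterates to a fixed point. Variable and class names are invented; the
      // paper's incremental refinement of imprecise results is not modeled.
      public class TypePropagationSketch {
          record Flow(String from, String to) {}   // types(from) must flow into types(to)

          public static void main(String[] args) {
              List<Flow> flows = List.of(new Flow("lit1", "x"), new Flow("lit2", "x"),
                                         new Flow("x", "y"), new Flow("y", "z"));

              Map<String, Set<String>> types = new HashMap<>();
              types.put("lit1", new HashSet<>(Set.of("Circle")));
              types.put("lit2", new HashSet<>(Set.of("Square")));

              Deque<Flow> worklist = new ArrayDeque<>(flows);
              while (!worklist.isEmpty()) {
                  Flow f = worklist.pop();
                  Set<String> src = types.getOrDefault(f.from(), Set.of());
                  Set<String> dst = types.computeIfAbsent(f.to(), k -> new HashSet<>());
                  if (dst.addAll(src)) {
                      // The destination grew, so re-examine constraints that read it.
                      for (Flow g : flows) if (g.from().equals(f.to())) worklist.push(g);
                  }
              }
              System.out.println(types.get("z"));  // [Circle, Square] (set order unspecified)
          }
      }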

  • The Concert system: compiler and runtime support for efficient fine-grained concurrent Object-Oriented programs
    1993
    Co-Authors: Andrew A. Chien, Vijay Karamcheti, John Plevyak
    Abstract:

    The introduction of concurrency complicates the already difficult task of large-scale programming. Concurrent Object-Oriented Languages provide a mechanism, encapsulation, for managing the increased complexity of large-scale concurrent programs, thereby reducing the difficulty of large-scale concurrent programming. In particular, fine-grained Object-Oriented approaches provide modularity through encapsulation while exposing large degrees of concurrency. Though fine-grained concurrent Object-Oriented Languages are attractive from a programming perspective, they have historically suffered from poor efficiency. The goal of the Concert project is to develop portable, efficient implementations of fine-grained concurrent Object-Oriented Languages. Our approach incorporates careful program analysis and information management at every stage from the compiler to the runtime system. In this document, we outline the basic elements of the Concert approach. In particular, we discuss program analyses, program transformations, their potential payoff, and how they will be embodied in the Concert system. Initial performance results and specific plans for demonstrations and system development are also detailed.

Akinori Yonezawa

  • An efficient implementation scheme of concurrent Object-Oriented Languages on stock multicomputers
    ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 1993
    Co-Authors: Kenjiro Taura, Satoshi Matsuoka, Akinori Yonezawa
    Abstract:

    Several novel techniques for efficient implementation of concurrent Object-Oriented Languages on general-purpose, stock multicomputers are presented. These techniques have been developed in implementing our concurrent Object-Oriented language ABCL on Fujitsu Laboratories' experimental multicomputer AP1000, consisting of 512 SPARC chips. The proposed intra-node scheduling mechanism reduces the cost of local message passing. The cost of intra-node asynchronous message passing is about 20 SPARC instructions in the best case, including locality checking, dynamic method lookup, and scheduling. The minimum latency of asynchronous inter-node message passing is about 9μs, or about 120 instructions, employing the self-dispatching mechanism independently proposed by von Eicken et al. A large-scale benchmark which involves 9,000,000 message passings shows a 440-fold speedup on the 512-node system compared to the sequential version of the same algorithm. We rely on simple hardware support for message passing and use no specialized architectural support for Object-Oriented computing. Thus, we are able to enjoy the benefits of future progress in standard processor technology. Our results show that concurrent Object-Oriented Languages can be implemented efficiently on conventional multicomputers.
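
    The locality check on an asynchronous send can be sketched as: if the receiver lives on this node, do the method lookup and queue the invocation locally; otherwise serialize and ship it to the owning node. The Java sketch below illustrates that split with invented node ids, a toy method table, and a print statement standing in for the network send; the paper implements this path in a handful of SPARC instructions.

      import java.util.ArrayDeque;
      import java.util.Map;
      import java.util.Queue;

      // Rough sketch of the locality check on an asynchronous send: a message to
      // an object on the local node is looked up and queued locally, while a
      // remote object gets a far more expensive serialized network send. Node ids,
      // the method table, and the network stub are invented for the illustration.
      public class LocalitySendSketch {
          record ObjRef(int nodeId, String className) {}

          static final int MY_NODE = 0;
          static final Queue<String> localRunQueue = new ArrayDeque<>();
          static final Map<String, String> methodTable =
                  Map.of("Account.deposit", "compiled code for Account.deposit");

          static void asyncSend(ObjRef receiver, String selector) {
              if (receiver.nodeId() == MY_NODE) {                        // locality check
                  String code = methodTable.get(receiver.className() + "." + selector);
                  localRunQueue.add(code);                               // intra-node scheduling
              } else {
                  System.out.println("serialize + network send to node " + receiver.nodeId());
              }
          }

          public static void main(String[] args) {
              asyncSend(new ObjRef(0, "Account"), "deposit");   // cheap local path
              asyncSend(new ObjRef(3, "Account"), "deposit");   // inter-node path
              System.out.println("locally queued: " + localRunQueue);
          }
      }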

  • Implementing concurrent Object-Oriented Languages on multicomputers
    IEEE Parallel & Distributed Technology: Systems & Applications, 1993
    Co-Authors: Akinori Yonezawa, Satoshi Matsuoka, Masahiro Yasugi, Kenjiro Taura
    Abstract:

    The implementations of ABCL (an object-based concurrent language) on two different types of multicomputers, the Electrotechnical Laboratories' EM-4 extended dataflow computer and Fujitsu's experimental AP1000, are described. ABCL/EM-4 takes advantage of that machine's packet-driven architecture to achieve very good preliminary performance results. The AP1000 does not have special hardware support for message passing, so ABCL/AP1000 includes several software technologies that are general enough for conventional parallel or concurrent Languages, again yielding promising performance. It is concluded that the results demonstrate the viability of attaining good performance with concurrent Object-Oriented Languages on current multicomputers, whether experimental or commercial.

  • PPOPP - An efficient implementation scheme of concurrent Object-Oriented Languages on stock multicomputers
    Proceedings of the fourth ACM SIGPLAN symposium on Principles and practice of parallel programming - PPOPP '93, 1993
    Co-Authors: Kenjiro Taura, Satoshi Matsuoka, Akinori Yonezawa
    Abstract:

    Several novel techniques for efficient implementation of concurrent Object-Oriented Languages on general-purpose, stock multicomputers are presented. These techniques have been developed in implementing our concurrent Object-Oriented language ABCL on Fujitsu Laboratories' experimental multicomputer AP1000, consisting of 512 SPARC chips. The proposed intra-node scheduling mechanism reduces the cost of local message passing. The cost of intra-node asynchronous message passing is about 20 SPARC instructions in the best case, including locality checking, dynamic method lookup, and scheduling. The minimum latency of asynchronous inter-node message passing is about 9μs, or about 120 instructions, employing the self-dispatching mechanism independently proposed by von Eicken et al. A large-scale benchmark which involves 9,000,000 message passings shows a 440-fold speedup on the 512-node system compared to the sequential version of the same algorithm. We rely on simple hardware support for message passing and use no specialized architectural support for Object-Oriented computing. Thus, we are able to enjoy the benefits of future progress in standard processor technology. Our results show that concurrent Object-Oriented Languages can be implemented efficiently on conventional multicomputers.