Asynchronous Method


The Experts below are selected from a list of 1305 Experts worldwide, ranked by the ideXlab platform

Ye Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Large Scale Parallel Hybrid GMRES Method for the Linear System on Grid System
    2008 International Symposium on Parallel and Distributed Computing, 2008
    Co-Authors: Ye Zhang, Guy Bergere, Serge Petiton
    Abstract:

    The GMRES Method is widely used to solve large sparse linear systems. In this paper, we present an effective parallel hybrid Asynchronous Method that combines the typical parallel GMRES Method with the Least Squares Method, which requires some eigenvalues obtained from a parallel Arnoldi process, and we apply it on the Grid computing platform Grid5000. Grid computing in general is a special type of parallel computing: it aims to deliver high-performance computing over distributed platforms for computation- and data-intensive applications by making use of a very large amount of resources. The numerical results show that this hybrid Method has an advantage over the general GMRES Method for some matrices.
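
    As a rough illustration of the two numerical ingredients named in this abstract, the Python/SciPy sketch below runs a restarted GMRES(m) solve and, separately, an Arnoldi-based estimate of a few eigenvalues (Ritz values) on a small test matrix. It is not the paper's implementation: the least-squares acceleration built from those Ritz values and the asynchronous coupling across Grid5000 are omitted.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # Build a small sparse test system A x = b (stand-in for a large sparse system).
        n = 1000
        A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        # Restarted GMRES(m): the basic Krylov solver the hybrid Method starts from.
        x, info = spla.gmres(A, b, restart=30, maxiter=200)
        print("GMRES(30) converged" if info == 0 else f"GMRES(30) stopped, info={info}")

        # Arnoldi process (via ARPACK) estimating a few extremal eigenvalues of A.
        # In the hybrid Method these Ritz values would parametrize a least-squares
        # residual polynomial applied between GMRES restarts; that step is omitted here.
        ritz_values = spla.eigs(A, k=4, which="LM", return_eigenvectors=False)
        print("Ritz value estimates:", np.sort(ritz_values.real))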

  • A Parallel Hybrid Method of GMRES on GRID System
    2007 IEEE International Parallel and Distributed Processing Symposium, 2007
    Co-Authors: Ye Zhang, Guy Bergere, Serge Petiton
    Abstract:

    Grid computing focuses on making use of a very large amount of resources from a large-scale computing environment. It aims to deliver high-performance computing over distributed platforms for computation- and data-intensive applications. In this paper, we present an effective parallel hybrid Asynchronous Method to solve large sparse linear systems using the grid computing platform Grid5000. This hybrid Method combines a parallel GMRES(m) (generalized minimal residual) algorithm with the least squares Method, which requires some eigenvalues obtained from a parallel Arnoldi algorithm. All of these algorithms run on different processors of the Grid5000 platform. Grid5000, a 5000-CPU nation-wide infrastructure for research in grid computing, is designed to provide a scientific tool for computing. We discuss the performance of this hybrid Method deployed on Grid5000 and compare it with its performance on IBM SP series supercomputers.

  • A parallel object-oriented manufacturing simulation language
    Proceedings 15th Workshop on Parallel and Distributed Simulation, 2001
    Co-Authors: Ye Zhang, S.j. Turner
    Abstract:

    When used to simulate manufacturing systems, most existing parallel simulation languages cannot easily implement some features of those systems, such as the scheduling rules of a machine or the sharing of operators by multiple machines. The paper presents the design and implementation of a parallel object-oriented manufacturing simulation language, called POMSim. A POMSim simulation is developed using the concepts of classes (entity types) and inheritance to support iterative design of efficient simulation models. POMSim completely hides all the details of parallel simulation and provides simple and direct constructs to efficiently model the scheduling rules in manufacturing simulations. It also provides Asynchronous Method invocation and synchronous function calls. POMSim libraries predefine a set of basic classes for manufacturing simulation, each of which represents a particular component in the physical manufacturing system.
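
    POMSim's actual syntax is not given in the abstract, so the following Python asyncio sketch is only a conceptual analogue of the distinction it mentions: an Asynchronous Method invocation that lets the caller continue, versus a synchronous call that returns its result immediately. The Machine class and its methods are hypothetical names invented for the example.

        import asyncio

        class Machine:
            """Toy stand-in for a simulated machine component."""

            def __init__(self, name):
                self.name = name

            async def process_job(self, job, duration):
                # Simulated processing delay.
                await asyncio.sleep(duration)
                return f"{self.name} finished {job}"

            def queue_length(self):
                # Synchronous call: returns immediately.
                return 0

        async def main():
            m = Machine("lathe-1")

            # Asynchronous Method invocation: the caller keeps running while the
            # machine processes the job; the result is collected later via the task.
            task = asyncio.create_task(m.process_job("job-42", duration=0.1))
            print("caller continues, queue length:", m.queue_length())

            # Synchronous-style call: await blocks this coroutine until completion.
            print(await task)

        asyncio.run(main())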

Einar Broch Johnsen - One of the best experts on this subject based on the ideXlab platform.

  • Reasoning about Asynchronous Method Calls and Inheritance
    2020
    Co-Authors: Johan Dovland, Einar Broch Johnsen
    Abstract:

    This paper considers the problem of reusing synchronization constraints for concurrent objects with Asynchronous Method calls. Our approach extends the Creol language with a specialized composition operator expressing synchronized merge. The use of synchronized merge allows synchronization classes to be added and combined with general-purpose classes by means of multiple inheritance. The paper presents proof rules for synchronized merge and several examples.

  • A Hoare Logic for Concurrent Objects with Asynchronous Method Calls
    2020
    Co-Authors: Johan Dovland, Einar Broch Johnsen
    Abstract:

    The Creol language proposes high level language constructs to unite object orientation and distribution in a natural way. In this report, we show how the semantics of Creol programs may be defined in terms of standard sequential constructs. The highly nondeterministic nature of distributed systems is captured by introducing communication histories to record the observable activity of the system. Hoare rules expressing partial correctness are then derived based on the semantics.
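
    The central device mentioned here, a communication history recording the observable activity of the system, can be illustrated outside Creol. The Python sketch below (invented event names, not the report's semantics or Hoare rules) logs invocation and completion events for each call into a shared history and checks a simple well-formedness property over that history.

        import asyncio

        history = []  # global communication history of observable events

        async def record_call(caller, callee, method, coro):
            # Record the invocation event, run the call, then record its completion.
            history.append(("invoke", caller, callee, method))
            result = await coro
            history.append(("complete", caller, callee, method))
            return result

        async def square(x):
            await asyncio.sleep(0.01)
            return x * x

        def wellformed(h):
            # Partial-correctness style check over the history:
            # every completion event has an earlier matching invocation event.
            pending = []
            for kind, caller, callee, method in h:
                key = (caller, callee, method)
                if kind == "invoke":
                    pending.append(key)
                elif key in pending:
                    pending.remove(key)
                else:
                    return False
            return True

        async def main():
            result = await record_call("client", "server", "square", square(7))
            print("result:", result)
            print("history well-formed:", wellformed(history))

        asyncio.run(main())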

  • ISoLA (2) - Fault Model Design Space for Cooperative Concurrency
    Leveraging Applications of Formal Methods Verification and Validation. Specialized Techniques and Applications, 2014
    Co-Authors: Ivan Lanese, Einar Broch Johnsen, Michael Lienhardt, Mario Bravetti, Rudolf Schlatte, Volker Stolz, Gianluigi Zavattaro
    Abstract:

    This paper critically discusses the different choices that have to be made when defining a fault model for an object-oriented programming language. We consider in particular the ABS language, and analyze the interplay between the fault model and the main features of ABS, namely its cooperative concurrency model, based on Asynchronous Method invocations whose results are returned via futures, and its emphasis on static analysis based on invariants.

  • COORDINATION - Fault in the future
    Lecture Notes in Computer Science, 2011
    Co-Authors: Einar Broch Johnsen, Ivan Lanese, Gianluigi Zavattaro
    Abstract:

    In this paper we consider the problem of fault handling in an object-oriented language with Asynchronous Method calls whose results are returned inside futures. We present an extension of such languages in which futures are used to return fault notifications and to coordinate error recovery between caller and callee. This can be exploited to ensure that invariants involving many objects are restored after faults.
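
    The idea of delivering faults through futures has a close analogue in mainstream libraries. The Python sketch below (illustrative only, not the language extension of the paper) shows an exception raised in the callee being stored in its future and surfacing at the caller only when the result is claimed, which is where caller-side recovery can be coordinated.

        from concurrent.futures import ThreadPoolExecutor

        def transfer(amount):
            # Callee: fails for invalid input; the exception becomes the future's outcome.
            if amount < 0:
                raise ValueError("negative amount")
            return f"transferred {amount}"

        with ThreadPoolExecutor(max_workers=1) as pool:
            fut = pool.submit(transfer, -5)   # asynchronous invocation returning a future
            # ... caller continues with other work here ...
            try:
                print(fut.result())           # fault notification is delivered here
            except ValueError as err:
                # Caller-side recovery: compensate or restore the invariant.
                print("recovering from fault:", err)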

  • Backwards Type Analysis of Asynchronous Method Calls
    The Journal of Logic and Algebraic Programming, 2008
    Co-Authors: Einar Broch Johnsen, Ingrid Chieh Yu
    Abstract:

    Asynchronous Method calls have been proposed to better integrate object orientation with distribution. In the language considered here, Asynchronous Method calls are combined with so-called processor release points in order to allow concurrent objects to adapt local scheduling to network delays in a very flexible way. However, Asynchronous Method calls complicate the type analysis by decoupling input and output information for Method calls, which can be tracked by a type and effect system. Interestingly, backwards type analysis simplifies the effect system considerably and allows analysis in a single pass. This paper presents a kernel language with Asynchronous Method calls and processor release points, a novel mechanism for local memory deallocation related to Asynchronous Method calls, an operational semantics in rewriting logic for the language, and a type and effect system for backwards analysis. Source code is translated into runtime code as an effect of the type analysis, automatically inserting inferred type information in Method invocations and operations for local memory deallocation in the process. We establish a subject reduction property, showing in particular that Method lookup errors do not occur at runtime and that the inserted deallocation operations are safe.

Antonio Giannitrapani - One of the best experts on this subject based on the ideXlab platform.

  • A distributed Asynchronous Method of multipliers for constrained nonconvex optimization
    Automatica, 2019
    Co-Authors: Francesco Farina, Andrea Garulli, Antonio Giannitrapani, Giuseppe Notarstefano
    Abstract:

    This paper presents a fully Asynchronous and distributed approach for tackling optimization problems in which both the objective function and the constraints may be nonconvex. In the considered network setting each node is active upon triggering of a local timer and has access only to a portion of the objective function and to a subset of the constraints. In the proposed technique, based on the Method of multipliers, each node performs, when it wakes up, either a descent step on a local augmented Lagrangian or an ascent step on the local multiplier vector. Nodes realize when to switch from the descent step to the ascent one through an Asynchronous distributed logic-AND, which detects when all the nodes have reached a predefined tolerance in the minimization of the augmented Lagrangian. It is shown that the resulting distributed algorithm is equivalent to a block coordinate descent for the minimization of the global augmented Lagrangian. This allows one to extend the properties of the centralized Method of multipliers to the considered distributed framework. Two application examples are presented to validate the proposed approach: a distributed source localization problem and the parameter estimation of a neural network.
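
    The two alternating steps described here, descent on an augmented Lagrangian and ascent on the multiplier vector, are easiest to see in the centralized Method of multipliers to which the distributed algorithm is shown to be equivalent. The NumPy sketch below applies them to a toy equality-constrained problem; the asynchronous node activation and the distributed logic-AND of the paper are not modeled.

        import numpy as np

        # Toy problem: minimize (x1 - 1)^2 + (x2 - 2)^2  subject to  x1 + x2 = 1.
        def f_grad(x):
            return np.array([2 * (x[0] - 1), 2 * (x[1] - 2)])

        def g(x):            # equality constraint g(x) = 0
            return x[0] + x[1] - 1

        def g_grad(x):
            return np.array([1.0, 1.0])

        rho, alpha = 10.0, 0.02       # penalty parameter and descent step size
        x, lam = np.zeros(2), 0.0

        for _ in range(100):          # outer loop: one multiplier (ascent) step per pass
            for _ in range(200):      # inner loop: descent steps on the augmented Lagrangian
                grad_L = f_grad(x) + (lam + rho * g(x)) * g_grad(x)
                x -= alpha * grad_L
            lam += rho * g(x)         # ascent step on the multiplier

        # Expected solution is approximately x = [0, 1] with multiplier lam = 2.
        print("x =", x, " constraint residual g(x) =", g(x))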

  • Distributed Constrained Nonconvex Optimization: the Asynchronous Method of Multipliers
    arXiv: Optimization and Control, 2018
    Co-Authors: Francesco Farina, Andrea Garulli, Antonio Giannitrapani, Giuseppe Notarstefano
    Abstract:

    This paper presents a fully Asynchronous and distributed approach for tackling optimization problems in which both the objective function and the constraints may be nonconvex. In the considered network setting each node is active upon triggering of a local timer and has access only to a portion of the objective function and to a subset of the constraints. In the proposed technique, based on the Method of multipliers, each node performs, when it wakes up, either a descent step on a local augmented Lagrangian or an ascent step on the local multiplier vector. Nodes realize when to switch from the descent step to the ascent one through an Asynchronous distributed logic-AND, which detects when all the nodes have reached a predefined tolerance in the minimization of the augmented Lagrangian. It is shown that the resulting distributed algorithm is equivalent to a block coordinate descent for the minimization of the global non-separable augmented Lagrangian. This allows one to extend the properties of the centralized Method of multipliers to the considered distributed framework. Two application examples are presented to validate the proposed approach: a distributed source localization problem and the parameter estimation of a neural classifier.

  • Asynchronous Distributed Learning From Constraints
    IEEE Transactions on Neural Networks and Learning Systems, 1
    Co-Authors: Francesco Farina, Andrea Garulli, Stefano Melacci, Antonio Giannitrapani
    Abstract:

    In this brief, the extension of the framework of Learning from Constraints (LfC) to a distributed setting, where multiple parties connected over the network contribute to the learning process, is studied. LfC relies on the generic notion of "constraint" to inject knowledge into the learning problem and, due to its generality, it deals with possibly nonconvex constraints, enforced either in a hard or soft way. Motivated by recent progress in the field of distributed and constrained nonconvex optimization, we apply the (distributed) Asynchronous Method of multipliers (ASYMM) to LfC. The study shows that such a Method allows us to support scenarios where selected constraints (i.e., knowledge), data, and outcomes of the learning process can be locally stored in each computational node without being shared with the rest of the network, opening the way to further investigations into privacy-preserving LfC. Constraints act as a bridge between what is shared over the net and what is private to each node, and no central authority is required. We demonstrate the applicability of these ideas in two distributed real-world settings in the context of digit recognition and document classification.

Serge Petiton - One of the best experts on this subject based on the ideXlab platform.

  • Large Scale Parallel Hybrid GMRES Method for the Linear System on Grid System
    2008 International Symposium on Parallel and Distributed Computing, 2008
    Co-Authors: Ye Zhang, Guy Bergere, Serge Petiton
    Abstract:

    The GMRES Method is widely used to solve large sparse linear systems. In this paper, we present an effective parallel hybrid Asynchronous Method that combines the typical parallel GMRES Method with the Least Squares Method, which requires some eigenvalues obtained from a parallel Arnoldi process, and we apply it on the Grid computing platform Grid5000. Grid computing in general is a special type of parallel computing: it aims to deliver high-performance computing over distributed platforms for computation- and data-intensive applications by making use of a very large amount of resources. The numerical results show that this hybrid Method has an advantage over the general GMRES Method for some matrices.

  • A Parallel Hybrid Method of GMRES on GRID System
    2007 IEEE International Parallel and Distributed Processing Symposium, 2007
    Co-Authors: Ye Zhang, Guy Bergere, Serge Petiton
    Abstract:

    Grid computing focuses on making use of a very large amount of resources from a large-scale computing environment. It aims to deliver high-performance computing over distributed platforms for computation- and data-intensive applications. In this paper, we present an effective parallel hybrid Asynchronous Method to solve large sparse linear systems using the grid computing platform Grid5000. This hybrid Method combines a parallel GMRES(m) (generalized minimal residual) algorithm with the least squares Method, which requires some eigenvalues obtained from a parallel Arnoldi algorithm. All of these algorithms run on different processors of the Grid5000 platform. Grid5000, a 5000-CPU nation-wide infrastructure for research in grid computing, is designed to provide a scientific tool for computing. We discuss the performance of this hybrid Method deployed on Grid5000 and compare it with its performance on IBM SP series supercomputers.

Wolf Zimmermann - One of the best experts on this subject based on the ideXlab platform.

  • EUROMICRO-SEAA - A Step Towards a More Practical Protocol Conformance Checking Algorithm
    2009 35th Euromicro Conference on Software Engineering and Advanced Applications, 2009
    Co-Authors: Andreas Both, Wolf Zimmermann
    Abstract:

    In previous work, we suggested an approach to verify whether, in a component-based system, the interaction behavior towards a component obeys the specified requirements. We can capture unbounded recursion, synchronous Method calls and callbacks, as well as Asynchronous Method calls and unbounded parallel behavior including synchronization. In an industrial environment we face the problem that extensive use of synchronization results in unacceptable verification time. In this paper we describe an approach leading to better practical applicability.
