Asynchronous Version - Explore the Science & Experts | ideXlab


Asynchronous Version

The experts below are selected from a list of 747 experts worldwide, ranked by the ideXlab platform.

Sandro Zampieri – 1st expert on this subject based on the ideXlab platform

  • Distributed Reactive Power Feedback Control for Voltage Regulation and Loss Minimization
    IEEE Transactions on Automatic Control, 2015
    Co-Authors: Saverio Bolognani, Ruggero Carli, Guido Cavraro, Sandro Zampieri

    Abstract:

We consider the problem of exploiting the microgenerators dispersed in the power distribution network to provide distributed reactive power compensation for power loss minimization and voltage regulation. In the proposed strategy, microgenerators are smart agents that measure their phasorial voltage, share these data with the other agents on a cyber layer, and adjust the amount of reactive power injected into the grid according to a feedback control law derived from duality-based methods applied to the optimal reactive power flow problem. Convergence to the configuration of minimum losses and feasible voltages is proved analytically for both a synchronous and an asynchronous version of the algorithm, in which agents update their state independently of one another. Simulations illustrate the performance and robustness of the algorithm, and the innovative feedback nature of the strategy is discussed.
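The asynchronous update scheme the abstract describes can be pictured with a small numerical sketch. Everything below is illustrative: it assumes a toy quadratic loss surrogate J(q) = ½qᵀBq − bᵀq with a hypothetical positive-definite sensitivity matrix B, whereas the paper's actual control law comes from duality applied to the optimal reactive power flow problem, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic loss surrogate J(q) = 0.5 q' B q - b' q, with B a
# positive-definite sensitivity matrix; the minimizer is q* = B^{-1} b.
n = 4
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)           # positive definite by construction
b = rng.standard_normal(n)
q_star = np.linalg.solve(B, b)

q = np.zeros(n)                       # reactive power injections, one per agent
step = 1.0 / np.max(np.linalg.eigvalsh(B))

for _ in range(5000):
    i = rng.integers(n)               # one agent wakes up, independently
    grad_i = B[i] @ q - b[i]          # local gradient, from shared measurements
    q[i] -= step * grad_i             # asynchronous coordinate update

assert np.allclose(q, q_star, atol=1e-6)
```

Each iteration touches a single randomly chosen coordinate, mirroring agents that update "independently of one another"; randomized coordinate descent on a strongly convex quadratic still converges to the unique minimizer.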

  • CDC – A distributed control strategy for optimal reactive power flow with power constraints
    52nd IEEE Conference on Decision and Control, 2013
    Co-Authors: Saverio Bolognani, Ruggero Carli, Guido Cavraro, Sandro Zampieri

    Abstract:

We consider the problem of exploiting the microgenerators dispersed in the power distribution network to provide distributed reactive power compensation for power loss minimization. The proposed strategy requires that all the intelligent agents, located at the microgenerator buses, measure their voltage and share these data with the other agents on a cyber layer, then actuate the physical layer by adjusting the amount of reactive power injected into the grid, according to a feedback control law derived from duality-based methods applied to the optimal reactive power flow problem subject to power constraints. Convergence is proved analytically for both a synchronous and an asynchronous version of the algorithm, in which agents update their state independently of one another. Simulations illustrate the algorithm's behavior, and the innovative feedback nature of the strategy is discussed.
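The power-constrained variant can be pictured as the same kind of asynchronous per-agent update followed by a projection onto each agent's reactive power capability interval. The box constraint and quadratic surrogate below are illustrative stand-ins; the paper handles the constraints through duality, not through this toy projection.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy quadratic loss J(q) = 0.5 q' B q - b' q, minimized subject to box
# constraints q_min <= q_i <= q_max (a stand-in for per-agent power limits).
n = 4
A = rng.standard_normal((n, n))
B = A @ A.T + n * np.eye(n)
b = rng.standard_normal(n)
q_min, q_max = -0.1, 0.1

q = np.zeros(n)
step = 1.0 / np.max(np.linalg.eigvalsh(B))

for _ in range(5000):
    i = rng.integers(n)                     # one agent updates at a time
    q[i] -= step * (B[i] @ q - b[i])        # local gradient step
    q[i] = np.clip(q[i], q_min, q_max)      # respect the power constraint

# KKT check: each interior coordinate has (near-)zero gradient, and each
# saturated coordinate's gradient pushes against its active bound.
g = B @ q - b
for i in range(n):
    assert (abs(g[i]) < 1e-6
            or (np.isclose(q[i], q_max) and g[i] < 1e-6)
            or (np.isclose(q[i], q_min) and g[i] > -1e-6))
```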

Sergey Levine – 2nd expert on this subject based on the ideXlab platform

  • Collective robot reinforcement learning with distributed asynchronous guided policy search
    IEEE International Conference on Intelligent Robots and Systems, 2017
    Co-Authors: Ali Yahya, Adrian Li, Mrinal Kalakrishnan, Yevgen Chebotar, Sergey Levine

    Abstract:

In principle, reinforcement learning and policy search methods can enable robots to learn highly complex and general skills that may allow them to function amid the complexity and diversity of the real world. However, training a policy that generalizes well across a wide range of real-world conditions requires far greater quantity and diversity of experience than is practical to collect with a single robot. Fortunately, multiple robots can share their experience with one another and thereby learn a policy collectively. In this work, we explore distributed and asynchronous policy learning as a means to achieve generalization and improved training times on challenging, real-world manipulation tasks. We propose a distributed and asynchronous version of guided policy search and use it to demonstrate collective policy learning on a vision-based door opening task using four robots. We show that it achieves better generalization, utilization, and training times than the single-robot alternative.

  • Collective robot reinforcement learning with distributed asynchronous guided policy search
    2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017
    Co-Authors: Ali Yahya, Adrian Li, Mrinal Kalakrishnan, Yevgen Chebotar, Sergey Levine

    Abstract:

Policy search methods and, more broadly, reinforcement learning can enable robots to learn highly complex and general skills that may allow them to function amid the complexity and diversity of the real world. However, training a policy that generalizes well across a wide range of real-world conditions requires far greater quantity and diversity of experience than is practical to collect with a single robot. Fortunately, multiple robots can share their experience with one another and thereby learn a policy collectively. In this work, we explore distributed and asynchronous policy learning as a means to achieve generalization and improved training times on challenging, real-world manipulation tasks. We propose a distributed and asynchronous version of guided policy search and use it to demonstrate collective policy learning on a vision-based door opening task using four robots. We describe how both policy learning and data collection can be conducted in parallel across multiple robots, and present a detailed empirical evaluation of our system. Our results indicate that distributed learning significantly improves training time, and that parallelizing policy learning and data collection substantially improves utilization. We also demonstrate substantial generalization on a challenging real-world door opening task.
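The parallelism pattern described above — data collection and policy updates running concurrently across robots — can be sketched with standard threads and a shared queue. All names and the "policy" itself are illustrative placeholders; the paper's system distributes guided policy search across real robots, not this toy loop.

```python
import threading
import queue
import random

# Several robot workers collect experience in parallel and push it to a
# shared queue, while a single learner thread consumes it and updates a
# shared policy asynchronously.
experience = queue.Queue()
policy = {"version": 0}          # stand-in for policy parameters
policy_lock = threading.Lock()
EPISODES_PER_ROBOT = 25

def robot_worker(robot_id: int) -> None:
    rng = random.Random(robot_id)
    for _ in range(EPISODES_PER_ROBOT):
        with policy_lock:
            version = policy["version"]                 # act with latest policy
        trajectory = [rng.random() for _ in range(5)]   # fake sensor data
        experience.put((robot_id, version, trajectory))

def learner(total_episodes: int) -> None:
    for _ in range(total_episodes):
        robot_id, version, trajectory = experience.get()
        with policy_lock:
            policy["version"] += 1                      # asynchronous update

robots = [threading.Thread(target=robot_worker, args=(i,)) for i in range(4)]
trainer = threading.Thread(target=learner, args=(4 * EPISODES_PER_ROBOT,))
for t in robots + [trainer]:
    t.start()
for t in robots + [trainer]:
    t.join()

assert policy["version"] == 100   # every collected episode produced one update
```

Because collection and learning overlap, no robot ever waits for the learner to finish a pass, which is the utilization benefit the abstract reports.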

M. Rabinovich – 3rd expert on this subject based on the ideXlab platform

  • ICDE – Asynchronous version advancement in a distributed three-version database
    Proceedings 14th International Conference on Data Engineering, 1998
    Co-Authors: H. V. Jagadish, I. Singh Mumick, M. Rabinovich

    Abstract:

We present an efficient protocol for multi-version concurrency control in distributed databases. The protocol creates no more than three versions of any data item, while guaranteeing that: update transactions never interfere with read-only transactions; the version advancement mechanism is completely asynchronous with (both update and read-only) user transactions; and read-only transactions do not acquire locks and do not write control information into the data items being read. This is an improvement over existing multi-versioning schemes for distributed databases, which either require a potentially unlimited number of versions or require coordination between version advancement and user transactions. Our protocol can also be applied in a centralized system, where the improvement over existing techniques is in reducing the number of versions from four to three. The proposed protocol is valuable in large applications that currently shut off access to the system while managing version advancement manually, but now need to automate this process and provide continuous access to the data.
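The three-slot idea can be illustrated with a toy in-memory item: readers take the published stable snapshot without acquiring the item's lock, writers fill an in-progress slot, and a separate advancement step promotes versions asynchronously. This is a single-process sketch (relying on Python's GIL for the lock-free read), not the paper's distributed protocol.

```python
import threading

class ThreeVersionItem:
    """Toy data item with at most three versions:
    slot 0 = stable (what readers see), slot 1 = committed, slot 2 = in-progress."""

    def __init__(self, value):
        self.versions = [value, None, None]
        self._lock = threading.Lock()   # taken by writers and the advancer only

    def read(self):
        # Read-only access: no lock, no control-information writes; readers
        # simply use the currently published stable snapshot.
        return self.versions[0]

    def write(self, value):
        with self._lock:
            self.versions[2] = value    # new in-progress version

    def advance(self):
        # Asynchronous version advancement, decoupled from user transactions:
        # committed -> stable, then in-progress -> committed.
        with self._lock:
            if self.versions[1] is not None:
                self.versions[0] = self.versions[1]
                self.versions[1] = None
            if self.versions[2] is not None:
                self.versions[1] = self.versions[2]
                self.versions[2] = None

item = ThreeVersionItem("v0")
item.write("v1")
assert item.read() == "v0"   # readers are undisturbed by the update
item.advance()               # v1 moves to the committed slot
assert item.read() == "v0"   # still the old stable snapshot
item.advance()               # committed slot becomes the stable snapshot
assert item.read() == "v1"
```

The two-step promotion is what keeps advancement independent of in-flight readers and writers: neither ever blocks on the other.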
