Priority Queue

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 9807 experts worldwide, ranked by the ideXlab platform

Philippas Tsigas - One of the best experts on this subject based on the ideXlab platform.

  • The lock-free k-LSM relaxed priority queue
    arXiv: Data Structures and Algorithms, 2015
    Co-Authors: Martin Wimmer, Jakob Gruber, Jesper Larsson Träff, Philippas Tsigas
    Abstract:

    Priority queues are data structures that store keys in an ordered fashion to allow efficient access to the minimal (maximal) key. Priority queues are essential for many applications, e.g., Dijkstra's single-source shortest-path algorithm, branch-and-bound algorithms, and prioritized schedulers. Efficient multiprocessor computing requires implementations of basic data structures that can be used concurrently and scale to large numbers of threads and cores. Lock-free data structures promise superior scalability by avoiding blocking synchronization primitives, but the delete-min operation is an inherent scalability bottleneck in concurrent priority queues. Recent work has focused on alleviating this obstacle either by batching operations or by relaxing the requirements on the delete-min operation. We present a new, lock-free priority queue that relaxes the delete-min operation so that it is allowed to delete any of the ρ+1 smallest keys, where ρ is a runtime-configurable parameter. Additionally, the behavior is identical to a non-relaxed priority queue for items added and removed by the same thread. The priority queue is built from a logarithmic number of sorted arrays, in a way similar to log-structured merge-trees. We experimentally compare our priority queue to recent state-of-the-art lock-free priority queues, both with relaxed and non-relaxed semantics, showing the high performance and good scalability of our approach.
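    The relaxed delete-min contract described above can be illustrated with a small sequential sketch. The class below is a hypothetical illustration of the semantics only, not the authors' lock-free k-LSM: delete_min is allowed to return any of the ρ+1 smallest keys.

    ```python
    import heapq
    import random

    class RelaxedPriorityQueue:
        """Sequential sketch of a rho-relaxed priority queue.

        delete_min may return any of the rho+1 smallest keys.  Names and
        internals are illustrative; the paper's k-LSM is lock-free and
        built from sorted arrays, not a binary heap.
        """

        def __init__(self, rho):
            self.rho = rho
            self.heap = []

        def insert(self, key):
            heapq.heappush(self.heap, key)

        def delete_min(self):
            # Pop up to rho+1 smallest candidates, return one of them
            # arbitrarily, and push the rest back.  This models only the
            # relaxed contract, not a scalable implementation.
            k = min(self.rho + 1, len(self.heap))
            candidates = [heapq.heappop(self.heap) for _ in range(k)]
            chosen = random.choice(candidates)
            candidates.remove(chosen)
            for c in candidates:
                heapq.heappush(self.heap, c)
            return chosen
    ```

    With ρ = 2, a delete_min on the keys {1, 3, 5, 7, 9} may legally return 1, 3, or 5; keys inserted and removed by the same thread would, per the paper, see strict priority-queue behavior.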

  • The lock-free k-LSM relaxed priority queue
    ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2015
    Co-Authors: Martin Wimmer, Jakob Gruber, Jesper Larsson Träff, Philippas Tsigas
    Abstract:

    We present a new, concurrent, lock-free priority queue that relaxes the delete-min operation to allow deletion of any of the ρ smallest keys instead of only a minimal one, where ρ is a parameter that can be configured at runtime. It is built from a logarithmic number of sorted arrays, similar to log-structured merge-trees (LSM). For keys added and removed by the same thread, the behavior is identical to a non-relaxed priority queue. We compare against state-of-the-art lock-free priority queues with both relaxed and non-relaxed semantics, showing the high performance and good scalability of our approach.
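    The LSM layout the abstract mentions — a logarithmic number of sorted arrays, merged on insertion like carries in a binary counter — can be sketched sequentially as follows. This is an illustrative assumption about the layout, not the paper's lock-free implementation:

    ```python
    class LSMPriorityQueue:
        """Sketch of an LSM-style priority queue layout.

        Keys live in a small set of sorted arrays whose sizes are distinct
        powers of two; inserting merges equal-sized arrays, so at most
        O(log n) arrays exist.  Sequential and illustrative only.
        """

        def __init__(self):
            self.levels = []  # sorted arrays, largest first

        def insert(self, key):
            block = [key]
            # Merge equal-sized arrays, like carry propagation in a
            # binary counter; each merge doubles the block size.
            while self.levels and len(self.levels[-1]) == len(block):
                block = sorted(self.levels.pop() + block)
            self.levels.append(block)
            self.levels.sort(key=len, reverse=True)

        def delete_min(self):
            # The global minimum is the head of one of the sorted arrays,
            # so a scan over O(log n) heads finds it.
            i = min(range(len(self.levels)), key=lambda j: self.levels[j][0])
            val = self.levels[i].pop(0)
            if not self.levels[i]:
                del self.levels[i]
            return val
    ```

    Deletion here breaks the power-of-two size invariant, which a real LSM queue has to handle; the sketch only shows why both insert and delete-min touch a logarithmic number of arrays.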

David A Stanford - One of the best experts on this subject based on the ideXlab platform.

Daein Jeong - One of the best experts on this subject based on the ideXlab platform.

  • Design of a generalized priority queue manager for ATM switches
    IEEE Journal on Selected Areas in Communications, 1997
    Co-Authors: H J Chao, H Cheng, Yauren Jenq, Daein Jeong
    Abstract:

    Meeting quality-of-service (QoS) requirements for various services in ATM networks has been very challenging for network designers. Various control techniques at either the call or cell level have been proposed. In this paper, we deal with cell transmission scheduling and discarding at the output buffers of an ATM switch. We propose a generalized priority queue manager (GPQM) that uses per-virtual-connection queueing to support multiple QoS requirements and achieve fairness in both cell transmission and discarding. It achieves the ultimate goal of guaranteeing the QoS requirement for each connection. The GPQM adopts the earliest due date (EDD) and self-clocked fair queueing (SCFQ) schemes for scheduling cell transmission, and a new self-calibrating pushout (SCP) scheme for discarding cells. The GPQM's performance in terms of cell loss rate and delay is presented. An implementation architecture for the GPQM is also proposed, facilitated by a new VLSI chip called the priority content-addressable memory (PCAM) chip.
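    As a rough illustration of the SCFQ tagging used for transmission scheduling, the sketch below assigns each fixed-size cell a virtual finish tag F = max(F_prev, v) + L/r for its connection's rate share r and serves cells in tag order. It assumes all cells are backlogged at time zero; all identifiers are hypothetical, and EDD/SCP are not modeled.

    ```python
    import heapq

    def scfq_schedule(cells, rates):
        """Sketch of self-clocked fair queueing (SCFQ) for fixed-size cells.

        cells: list of (arrival_order, conn_id), all backlogged at time 0.
        rates: conn_id -> allocated rate share.
        Returns the connection ids in transmission order.
        """
        CELL = 1.0        # fixed ATM cell length
        finish = {}       # last finish tag per connection
        v = 0.0           # system virtual time (tag of the cell in service)
        heap = []
        for seq, conn in cells:
            # Tag rule: F = max(previous finish tag, virtual time) + L/r.
            f = max(finish.get(conn, 0.0), v) + CELL / rates[conn]
            finish[conn] = f
            heapq.heappush(heap, (f, seq, conn))
        order = []
        while heap:
            f, seq, conn = heapq.heappop(heap)
            v = f         # self-clocking: virtual time advances to the
            order.append(conn)  # tag of the cell being served
        return order
    ```

    With connection A given twice the rate share of B, A's cells receive smaller finish tags and A is served roughly twice as often, which is the fairness property SCFQ provides.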

  • Generalized priority queue manager design for ATM switches
    International Conference on Communications, 1996
    Co-Authors: H J Chao, Daein Jeong
    Abstract:

    We address the problem of efficiently supporting multiple QoS requirements in ATM networks. A queue manager in an ATM network node schedules cell transmission based on urgency at the decision moment, while controlling buffer access based on cell loss priorities. In this paper, we propose a generalized priority queue manager (GPQM) that supports multiple QoS requirements at the class level while also guaranteeing fairness at the connection level. It adopts the self-clocked fair queueing (SCFQ) algorithm to achieve fairness and the earliest due date (EDD) scheme to meet various delay requirements, thereby supporting delay-requirement management at the class level as well as fair scheduling at the connection level. For buffer management, it adopts self-calibrating pushout (SCP) for class-level control, followed by connection-level head-of-line cell discarding. The SCP buffer-management scheme allows the buffer to be completely shared by all service classes. Moreover, it keeps the cell loss rate almost identical among connections of the same loss priority. We present a practical architecture to implement the GPQM, facilitated by a new VLSI chip (called the generalized sequencer chip), an enhanced version of the existing sequencer chip.
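    The shared-buffer pushout idea can be sketched generically as follows. This toy version evicts the head-of-line cell of the class currently holding the most cells; it does not reproduce the paper's self-calibrating (SCP) threshold logic, and all names are illustrative.

    ```python
    from collections import deque

    class SharedBufferPushout:
        """Generic pushout sketch for a fully shared output buffer.

        When the buffer is full, an arriving cell pushes out the
        head-of-line cell of the longest per-class queue, roughly
        approximating fairness across classes.  Not the paper's SCP.
        """

        def __init__(self, capacity):
            self.capacity = capacity
            self.queues = {}   # class_id -> deque of cells
            self.count = 0

        def arrive(self, cls, cell):
            if self.count == self.capacity:
                # Push out from the class currently holding the most cells.
                victim = max(self.queues, key=lambda c: len(self.queues[c]))
                self.queues[victim].popleft()   # head-of-line discard
                self.count -= 1
            self.queues.setdefault(cls, deque()).append(cell)
            self.count += 1
    ```

    Because the whole buffer is shared, a bursty class can temporarily occupy most of it, and pushout reclaims space on demand rather than partitioning the buffer statically — the property the abstract highlights for SCP.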

Martin Wimmer - One of the best experts on this subject based on the ideXlab platform.

  • The lock-free k-LSM relaxed priority queue
    arXiv: Data Structures and Algorithms, 2015
    Co-Authors: Martin Wimmer, Jakob Gruber, Jesper Larsson Träff, Philippas Tsigas
    Abstract:

    Priority queues are data structures that store keys in an ordered fashion to allow efficient access to the minimal (maximal) key. Priority queues are essential for many applications, e.g., Dijkstra's single-source shortest-path algorithm, branch-and-bound algorithms, and prioritized schedulers. Efficient multiprocessor computing requires implementations of basic data structures that can be used concurrently and scale to large numbers of threads and cores. Lock-free data structures promise superior scalability by avoiding blocking synchronization primitives, but the delete-min operation is an inherent scalability bottleneck in concurrent priority queues. Recent work has focused on alleviating this obstacle either by batching operations or by relaxing the requirements on the delete-min operation. We present a new, lock-free priority queue that relaxes the delete-min operation so that it is allowed to delete any of the ρ+1 smallest keys, where ρ is a runtime-configurable parameter. Additionally, the behavior is identical to a non-relaxed priority queue for items added and removed by the same thread. The priority queue is built from a logarithmic number of sorted arrays, in a way similar to log-structured merge-trees. We experimentally compare our priority queue to recent state-of-the-art lock-free priority queues, both with relaxed and non-relaxed semantics, showing the high performance and good scalability of our approach.

  • The lock-free k-LSM relaxed priority queue
    ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2015
    Co-Authors: Martin Wimmer, Jakob Gruber, Jesper Larsson Träff, Philippas Tsigas
    Abstract:

    We present a new, concurrent, lock-free priority queue that relaxes the delete-min operation to allow deletion of any of the ρ smallest keys instead of only a minimal one, where ρ is a parameter that can be configured at runtime. It is built from a logarithmic number of sorted arrays, similar to log-structured merge-trees (LSM). For keys added and removed by the same thread, the behavior is identical to a non-relaxed priority queue. We compare against state-of-the-art lock-free priority queues with both relaxed and non-relaxed semantics, showing the high performance and good scalability of our approach.

H J Chao - One of the best experts on this subject based on the ideXlab platform.

  • Design of a generalized priority queue manager for ATM switches
    IEEE Journal on Selected Areas in Communications, 1997
    Co-Authors: H J Chao, H Cheng, Yauren Jenq, Daein Jeong
    Abstract:

    Meeting quality-of-service (QoS) requirements for various services in ATM networks has been very challenging for network designers. Various control techniques at either the call or cell level have been proposed. In this paper, we deal with cell transmission scheduling and discarding at the output buffers of an ATM switch. We propose a generalized priority queue manager (GPQM) that uses per-virtual-connection queueing to support multiple QoS requirements and achieve fairness in both cell transmission and discarding. It achieves the ultimate goal of guaranteeing the QoS requirement for each connection. The GPQM adopts the earliest due date (EDD) and self-clocked fair queueing (SCFQ) schemes for scheduling cell transmission, and a new self-calibrating pushout (SCP) scheme for discarding cells. The GPQM's performance in terms of cell loss rate and delay is presented. An implementation architecture for the GPQM is also proposed, facilitated by a new VLSI chip called the priority content-addressable memory (PCAM) chip.

  • Generalized priority queue manager design for ATM switches
    International Conference on Communications, 1996
    Co-Authors: H J Chao, Daein Jeong
    Abstract:

    We address the problem of efficiently supporting multiple QoS requirements in ATM networks. A queue manager in an ATM network node schedules cell transmission based on urgency at the decision moment, while controlling buffer access based on cell loss priorities. In this paper, we propose a generalized priority queue manager (GPQM) that supports multiple QoS requirements at the class level while also guaranteeing fairness at the connection level. It adopts the self-clocked fair queueing (SCFQ) algorithm to achieve fairness and the earliest due date (EDD) scheme to meet various delay requirements, thereby supporting delay-requirement management at the class level as well as fair scheduling at the connection level. For buffer management, it adopts self-calibrating pushout (SCP) for class-level control, followed by connection-level head-of-line cell discarding. The SCP buffer-management scheme allows the buffer to be completely shared by all service classes. Moreover, it keeps the cell loss rate almost identical among connections of the same loss priority. We present a practical architecture to implement the GPQM, facilitated by a new VLSI chip (called the generalized sequencer chip), an enhanced version of the existing sequencer chip.