Transaction Processing System

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 90 Experts worldwide, ranked by the ideXlab platform

Haibo Chen - One of the best experts on this subject based on the ideXlab platform.

  • Fast In-Memory Transaction Processing Using RDMA and HTM
    ACM Transactions on Computer Systems, 2017
    Co-Authors: Haibo Chen, Yanzhe Chen, Zhaoguo Wang, Bin Yu Zang, Rong Chen, Haibing Guan
    Abstract:

    DrTM is a fast in-memory Transaction Processing System that exploits advanced hardware features such as remote direct memory access (RDMA) and hardware Transactional memory (HTM). To achieve high efficiency, it mostly offloads concurrency control, such as tracking read/write accesses and conflict detection, into HTM on a local machine, and leverages the strong consistency between RDMA and HTM to ensure serializability among concurrent Transactions across machines. To mitigate the high probability of HTM aborts for large Transactions, we design and implement an optimized Transaction chopping algorithm that decomposes a set of large Transactions into smaller pieces such that HTM is only required to protect each piece. We further build an efficient hash table for DrTM by leveraging HTM and RDMA to simplify the design and notably improve performance. We describe how DrTM supports common database features like read-only Transactions and logging for durability. Evaluation using typical OLTP workloads, including TPC-C and SmallBank, shows that DrTM has better single-node efficiency and scales well on a six-node cluster: it achieves over 1.51 and 34 million Transactions per second for TPC-C and SmallBank on a single node, and 5.24 and 138 million on the cluster, respectively. These numbers outperform a state-of-the-art single-node System (i.e., Silo) and a distributed Transaction System (i.e., Calvin) by at least 1.9X and 29.6X for TPC-C.
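    The Transaction chopping idea above can be sketched in a few lines: a large Transaction, viewed as a list of read/write operations, is split into pieces small enough that each piece fits within the HTM working-set capacity. The names below (`chop`, `MAX_PIECE_OPS`) are illustrative, not DrTM's actual API.

    ```python
    # Illustrative sketch of transaction chopping: each piece must stay under
    # a fixed operation budget so it can run inside one HTM region.
    MAX_PIECE_OPS = 4  # stand-in for the HTM working-set capacity

    def chop(ops, limit=MAX_PIECE_OPS):
        """Split one large transaction into pieces small enough for HTM."""
        return [ops[i:i + limit] for i in range(0, len(ops), limit)]

    big_txn = [("read", k) for k in range(6)] + [("write", k) for k in range(4)]
    pieces = chop(big_txn)
    print(len(pieces))                                    # 3
    print(max(len(p) for p in pieces) <= MAX_PIECE_OPS)   # True
    ```

    The real algorithm must also ensure the chopped pieces preserve serializability, which this size-only sketch ignores.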

  • Fast In-Memory Transaction Processing Using RDMA and HTM
    Symposium on Operating Systems Principles, 2015
    Co-Authors: Yanzhe Chen, Rong Chen, Haibo Chen
    Abstract:

    We present DrTM, a fast in-memory Transaction Processing System that exploits advanced hardware features (i.e., RDMA and HTM) to improve latency and throughput by over one order of magnitude compared to state-of-the-art distributed Transaction Systems. The high performance of DrTM is enabled by mostly offloading concurrency control within a local machine into HTM and leveraging the strong consistency between RDMA and HTM to ensure serializability among concurrent Transactions across machines. We further build an efficient hash table for DrTM by leveraging HTM and RDMA to simplify the design and notably improve performance. We describe how DrTM supports common database features like read-only Transactions and logging for durability. Evaluation using typical OLTP workloads, including TPC-C and SmallBank, shows that DrTM scales well on a 6-node cluster and achieves over 5.52 and 138 million Transactions per second for TPC-C and SmallBank, respectively. These numbers outperform a state-of-the-art distributed Transaction System (namely Calvin) by at least 17.9X for TPC-C.
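    Offloading concurrency control into HTM typically follows the standard try-then-fall-back pattern: run the Transaction optimistically, retry on abort, and serialize through a fallback lock after repeated failures. The sketch below is a hedged software analogue of that pattern; the `Abort` exception and retry budget are illustrative, since real HTM aborts come from hardware, not application code.

    ```python
    import threading

    fallback_lock = threading.Lock()

    class Abort(Exception):
        """Stand-in for a hardware-reported transaction abort."""

    def run_transaction(body, retries=3):
        for _ in range(retries):
            try:
                return body()        # optimistic "HTM" attempt
            except Abort:
                continue             # conflict detected: retry
        with fallback_lock:          # give up on HTM, serialize instead
            return body()

    counter = {"n": 0}
    def increment():
        counter["n"] += 1
        return counter["n"]

    print(run_transaction(increment))  # 1
    ```

    A real implementation must also make the fallback path and in-flight HTM Transactions mutually exclusive (e.g., by reading the lock inside the Transaction), which this sketch omits.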

Christopher R. Lumb - One of the best experts on this subject based on the ideXlab platform.

  • Towards Higher Disk Head Utilization: Extracting "Free" Bandwidth from Busy Disk Drives
    2018
    Co-Authors: Christopher R. Lumb
    Abstract:

    Freeblock scheduling is a new approach to utilizing more of a disk's potential media bandwidth. By filling rotational latency periods with useful media transfers, 20-50% of a never-idle disk's bandwidth can often be provided to background applications with no effect on foreground response times. This paper describes freeblock scheduling and demonstrates its value with two concrete applications: free segment cleaning and free data mining. Free segment cleaning often allows an LFS file System to maintain its ideal write performance when cleaning overheads would otherwise cause up to a factor-of-3 performance decrease. Free data mining can achieve 45-70 full disk scans per day on an active Transaction Processing System, with no effect on Transaction performance.

  • Towards Higher Disk Head Utilization: Extracting "Free" Bandwidth from Busy Disk Drives
    Operating Systems Design and Implementation, 2000
    Co-Authors: Christopher R. Lumb, Jiri Schindler, Gregory R. Ganger, David F Nagle, Erik Riedel
    Abstract:

    Freeblock scheduling is a new approach to utilizing more of a disk's potential media bandwidth. By filling rotational latency periods with useful media transfers, 20-50% of a never-idle disk's bandwidth can often be provided to background applications with no effect on foreground response times. This paper describes freeblock scheduling and demonstrates its value with simulation studies of two concrete applications: segment cleaning and data mining. Free segment cleaning often allows an LFS file System to maintain its ideal write performance when cleaning overheads would otherwise reduce performance by up to a factor of three. Free data mining can achieve over 47 full disk scans per day on an active Transaction Processing System, with no effect on its disk performance.
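    The core accounting behind freeblock scheduling can be illustrated simply: each foreground request leaves a rotational-latency gap before the target sector arrives under the head, and any background transfer that fits entirely inside the gap is "free". The gap and transfer times below are invented for the example, not measurements from the paper.

    ```python
    def free_transfers(gaps_ms, transfer_ms):
        """Count background transfers fully hidden in rotational-latency gaps."""
        done = 0
        for gap in gaps_ms:
            done += int(gap // transfer_ms)  # transfers that fit in this gap
        return done

    # Milliseconds of otherwise-idle head time between foreground requests.
    rotational_gaps = [3.0, 1.5, 4.2, 0.4]
    print(free_transfers(rotational_gaps, 1.0))  # 8
    ```

    Because every background transfer completes before the foreground sector rotates under the head, foreground response times are unchanged, which is the paper's key property.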

  • Towards Higher Disk Head Utilization: Extracting "Free" Bandwidth from Busy Disk Drives (CMU-CS-00-130)
    2000
    Co-Authors: Christopher R. Lumb, Jiri Schindler, Gregory R. Ganger, David F Nagle, Erik Riedel
    Abstract:

    Freeblock scheduling is a new approach to utilizing more of a disk's potential media bandwidth. By filling rotational latency periods with useful media transfers, 20-50% of a never-idle disk's bandwidth can often be provided to background applications with no effect on foreground response times. This paper describes freeblock scheduling and demonstrates its value with simulation studies of two concrete applications: segment cleaning and data mining. Free segment cleaning often allows an LFS file System to maintain its ideal write performance when cleaning overheads would otherwise reduce performance by up to a factor of three. Free data mining can achieve over 47 full disk scans per day on an active Transaction Processing System, with no effect on its disk performance.

Erik Riedel - One of the best experts on this subject based on the ideXlab platform.

  • Towards Higher Disk Head Utilization: Extracting "Free" Bandwidth from Busy Disk Drives
    Operating Systems Design and Implementation, 2000
    Co-Authors: Christopher R. Lumb, Jiri Schindler, Gregory R. Ganger, David F Nagle, Erik Riedel
    Abstract:

    Freeblock scheduling is a new approach to utilizing more of a disk's potential media bandwidth. By filling rotational latency periods with useful media transfers, 20-50% of a never-idle disk's bandwidth can often be provided to background applications with no effect on foreground response times. This paper describes freeblock scheduling and demonstrates its value with simulation studies of two concrete applications: segment cleaning and data mining. Free segment cleaning often allows an LFS file System to maintain its ideal write performance when cleaning overheads would otherwise reduce performance by up to a factor of three. Free data mining can achieve over 47 full disk scans per day on an active Transaction Processing System, with no effect on its disk performance.

  • Towards Higher Disk Head Utilization: Extracting "Free" Bandwidth from Busy Disk Drives (CMU-CS-00-130)
    2000
    Co-Authors: Christopher R. Lumb, Jiri Schindler, Gregory R. Ganger, David F Nagle, Erik Riedel
    Abstract:

    Freeblock scheduling is a new approach to utilizing more of a disk's potential media bandwidth. By filling rotational latency periods with useful media transfers, 20-50% of a never-idle disk's bandwidth can often be provided to background applications with no effect on foreground response times. This paper describes freeblock scheduling and demonstrates its value with simulation studies of two concrete applications: segment cleaning and data mining. Free segment cleaning often allows an LFS file System to maintain its ideal write performance when cleaning overheads would otherwise reduce performance by up to a factor of three. Free data mining can achieve over 47 full disk scans per day on an active Transaction Processing System, with no effect on its disk performance.

Yanzhe Chen - One of the best experts on this subject based on the ideXlab platform.

  • Fast In-Memory Transaction Processing Using RDMA and HTM
    ACM Transactions on Computer Systems, 2017
    Co-Authors: Haibo Chen, Yanzhe Chen, Zhaoguo Wang, Bin Yu Zang, Rong Chen, Haibing Guan
    Abstract:

    DrTM is a fast in-memory Transaction Processing System that exploits advanced hardware features such as remote direct memory access (RDMA) and hardware Transactional memory (HTM). To achieve high efficiency, it mostly offloads concurrency control, such as tracking read/write accesses and conflict detection, into HTM on a local machine, and leverages the strong consistency between RDMA and HTM to ensure serializability among concurrent Transactions across machines. To mitigate the high probability of HTM aborts for large Transactions, we design and implement an optimized Transaction chopping algorithm that decomposes a set of large Transactions into smaller pieces such that HTM is only required to protect each piece. We further build an efficient hash table for DrTM by leveraging HTM and RDMA to simplify the design and notably improve performance. We describe how DrTM supports common database features like read-only Transactions and logging for durability. Evaluation using typical OLTP workloads, including TPC-C and SmallBank, shows that DrTM has better single-node efficiency and scales well on a six-node cluster: it achieves over 1.51 and 34 million Transactions per second for TPC-C and SmallBank on a single node, and 5.24 and 138 million on the cluster, respectively. These numbers outperform a state-of-the-art single-node System (i.e., Silo) and a distributed Transaction System (i.e., Calvin) by at least 1.9X and 29.6X for TPC-C.

  • Fast In-Memory Transaction Processing Using RDMA and HTM
    Symposium on Operating Systems Principles, 2015
    Co-Authors: Yanzhe Chen, Rong Chen, Haibo Chen
    Abstract:

    We present DrTM, a fast in-memory Transaction Processing System that exploits advanced hardware features (i.e., RDMA and HTM) to improve latency and throughput by over one order of magnitude compared to state-of-the-art distributed Transaction Systems. The high performance of DrTM is enabled by mostly offloading concurrency control within a local machine into HTM and leveraging the strong consistency between RDMA and HTM to ensure serializability among concurrent Transactions across machines. We further build an efficient hash table for DrTM by leveraging HTM and RDMA to simplify the design and notably improve performance. We describe how DrTM supports common database features like read-only Transactions and logging for durability. Evaluation using typical OLTP workloads, including TPC-C and SmallBank, shows that DrTM scales well on a 6-node cluster and achieves over 5.52 and 138 million Transactions per second for TPC-C and SmallBank, respectively. These numbers outperform a state-of-the-art distributed Transaction System (namely Calvin) by at least 17.9X for TPC-C.

Lu Peng - One of the best experts on this subject based on the ideXlab platform.

  • Architectural Support for NVRAM Persistence in GPUs
    IEEE Transactions on Parallel and Distributed Systems, 2020
    Co-Authors: Sui Chen, Lei Liu, Weihua Zhang, Lu Peng
    Abstract:

    Non-volatile Random Access Memories (NVRAM) have emerged in recent years to bridge the performance gap between main memory and external storage devices such as Solid State Drives (SSDs). In addition to higher storage density, NVRAM provides byte-addressability, higher bandwidth, near-DRAM latency, and easier access compared to block devices such as traditional SSDs. This enables new programming paradigms that take advantage of durability and a larger memory footprint. With the range and size of GPU workloads expanding, NVRAM presents itself as a promising addition to the GPU memory hierarchy. To utilize the non-volatility of NVRAM, programs should allow durable stores that maintain consistency through a power-loss event. This is usually done through a logging mechanism that works in tandem with a Transaction execution layer, which can consist of a Transactional memory or a locking mechanism. Together, this results in a Transaction Processing System that preserves the ACID properties. GPUs are designed with high throughput in mind, leveraging high degrees of parallelism. Transactional memory proposals enable fine-grained Transactions at the GPU thread level. However, with lower write bandwidths than DRAM, using NVRAM as-is may yield sub-optimal overall System performance when threads experience long latencies. To address this problem, we propose using Helper Warps to move persistence out of the critical path of Transaction execution, alleviating the impact of these latencies. Our mechanism achieves speedups of 4.4 and 1.5 under bandwidth limits of 1.6 GB/s and 12 GB/s, respectively, and is projected to maintain its speed advantage even when NVRAM bandwidth reaches hundreds of GB/s in certain cases. Due to the speedup, our proposed method also reduces overall energy consumption.
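    The "logging mechanism plus Transaction execution layer" pattern the abstract describes is, at its core, redo logging: persist a log record first, then apply the update in place, so a crash between the two steps can be repaired by replaying the log. The sketch below is a minimal host-side analogue under stated assumptions: the `log` list stands in for an NVRAM log region and `flush()` for a persist barrier (cache-line writeback plus fence on real hardware); neither is the paper's Helper-Warp mechanism.

    ```python
    log = []      # stand-in for a persistent redo log in NVRAM
    store = {}    # stand-in for NVRAM-resident data

    def flush():
        pass      # on real hardware: cache-line writeback + memory fence

    def durable_txn(updates):
        record = ("REDO", dict(updates))
        log.append(record)     # 1. write the redo record first...
        flush()                # 2. ...make the log durable...
        store.update(updates)  # 3. ...then apply the update in place
        flush()

    def recover():
        """Replay the redo log after a crash to restore committed updates."""
        recovered = {}
        for _tag, updates in log:
            recovered.update(updates)
        return recovered

    durable_txn({"x": 1})
    durable_txn({"x": 2, "y": 7})
    print(recover())   # {'x': 2, 'y': 7}
    ```

    The paper's contribution is orthogonal to this ordering rule: Helper Warps perform steps 1-2 asynchronously so that Transaction-executing threads do not stall on slow NVRAM writes.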