Latency Penalty

14,000,000 Leading Edge Experts on the ideXlab platform



The Experts below are selected from a list of 1992 Experts worldwide ranked by the ideXlab platform

Tzicker Chiueh - One of the best experts on this subject based on the ideXlab platform.

  • Dynamic Multi-Process Information Flow Tracking for Web Application Security
    2008
    Co-Authors: Susanta Nanda, Lap-chung Lam, Tzicker Chiueh
    Abstract:

    Although there is a large body of research on the detection and prevention of memory corruption attacks such as buffer overflow, integer overflow, and format string attacks, the web application security problem receives relatively little attention from the research community by comparison. The majority of web application security problems originate from the fact that web applications fail to perform sanity checks on inputs from the network that are eventually used as operands of security-sensitive operations. Therefore, a promising approach to this problem is to apply proper checks on the tainted portions of the operands used in security-sensitive operations, where a byte is tainted if it is data/control dependent on some network packet(s). This paper presents the design, implementation, and evaluation of a dynamic checking compiler called WASC, which automatically adds checks into web applications used in three-tier internet services to protect them from the two most common types of web application attacks: SQL injection and script injection. In addition to including a taint analysis infrastructure for multi-process and multi-language applications, WASC features the use of SQL and HTML parsers to defeat evasion techniques that exploit interpretation differences between attack detection engines and target applications. Experiments with a fully operational WASC prototype show that it can indeed stop all SQL/script injection attacks that we have tested. Moreover, the end-to-end Latency Penalty associated with the checks inserted by WASC is less than 30% for the test web applications used in our performance study. Keywords: web application security, dynamic checking compiler, SQL injection, cross-site scripting, taint analysis, information flow tracking
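The taint-tracking idea above can be sketched in a few lines: mark the bytes that came from the network, then flag a query when a tainted byte lands outside a string literal and so alters the SQL token stream. This is a simplified illustration of the principle only, not WASC's implementation (WASC instruments compiled multi-process applications and uses full SQL/HTML parsers); all names below are hypothetical.

```python
import re

def build_query(template, user_input):
    """Substitute user input into the query template and record which
    bytes of the result are tainted (i.e. came from the network)."""
    start = template.index("?")
    query = template.replace("?", user_input)
    taint = [False] * len(query)
    for i in range(start, start + len(user_input)):
        taint[i] = True
    return query, taint

# Crude SQL tokenizer: string literals, words, single symbols.
TOKEN = re.compile(r"'[^']*'|\w+|\S")

def is_injection(query, taint):
    """Flag the query if any tainted byte lies in a token that is not a
    string literal -- meaning the input changed the query's parse."""
    for m in TOKEN.finditer(query):
        tok = m.group()
        if any(taint[i] for i in range(m.start(), m.end())):
            if not (tok.startswith("'") and tok.endswith("'") and len(tok) >= 2):
                return True
    return False
```

A benign input such as `alice` stays confined to a string literal, while `x' OR '1'='1` produces tainted bare tokens (`OR`) and is rejected.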

  • dynamic multi process information flow tracking for web application security
    ACM IFIP USENIX international conference on Middleware, 2007
    Co-Authors: Susanta Nanda, Tzicker Chiueh
    Abstract:

    Although there is a large body of research on the detection and prevention of memory corruption attacks such as buffer overflow, integer overflow, and format string attacks, the web application security problem receives relatively little attention from the research community by comparison. The majority of web application security problems originate from the fact that web applications fail to perform sanity checks on inputs from the network that are eventually used as operands of security-sensitive operations. Therefore, a promising approach to this problem is to apply proper checks on the tainted portions of the operands used in security-sensitive operations, where a byte is tainted if it is data/control dependent on some network packet(s). This paper presents the design, implementation, and evaluation of a dynamic checking compiler called WASC, which automatically adds checks into web applications used in three-tier internet services to protect them from the two most common types of web application attacks: SQL injection and script injection. In addition to including a taint analysis infrastructure for multi-process and multi-language applications, WASC features the use of SQL and HTML parsers to defeat evasion techniques that exploit interpretation differences between attack detection engines and target applications. Experiments with a fully operational WASC prototype show that it can indeed stop all SQL/script injection attacks that we have tested. Moreover, the end-to-end Latency Penalty associated with the checks inserted by WASC is less than 30% for the test web applications used in our performance study.

Tong Zhang - One of the best experts on this subject based on the ideXlab platform.

  • enabling nand flash memory use soft decision error correction codes at minimal read Latency overhead
    IEEE Transactions on Circuits and Systems, 2013
    Co-Authors: Guiqiang Dong, Tong Zhang
    Abstract:

    With aggressive technology scaling and the use of multi-bit-per-cell storage, NAND flash memory is subject to continuous degradation of raw storage reliability and demands increasingly powerful error correction codes (ECC). This inevitable trend makes the conventional BCH code increasingly inadequate, and iterative coding solutions such as LDPC codes become natural alternatives. However, these powerful coding solutions demand soft-decision memory sensing, which results in longer on-chip memory sensing Latency and memory-to-controller data transfer Latency. Leveraging well-established lossless data compression theories, this paper presents several simple design techniques that can reduce the Latency Penalty caused by soft-decision ECCs. Their effectiveness has been demonstrated through extensive simulations, and the results suggest that the Latency can be reduced by up to 85.3%.
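As a toy illustration of why lossless compression helps here (this is not one of the paper's specific techniques): soft-decision sensing emits extra bits per cell, but most cells read with high confidence, so the sensed data has low entropy and compresses well, shrinking the memory-to-controller transfer whose Latency the paper targets.

```python
import random
import zlib

random.seed(0)

# 3-bit soft-decision reads for 4096 cells: 0 or 7 means a confident read,
# intermediate values mean an uncertain read (rare, ~2% of cells here).
reads = bytes(
    random.choice([0, 7]) if random.random() > 0.02 else random.randrange(1, 7)
    for _ in range(4096)
)

compressed = zlib.compress(reads, 9)
# Transfer latency scales with bytes moved, so this ratio bounds the saving.
ratio = len(compressed) / len(reads)
```

With mostly two symbol values, the compressed size drops well below half the raw sensing data, and the data-transfer portion of the read Latency shrinks proportionally.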

  • on the use of soft decision error correction codes in nand flash memory
    IEEE Transactions on Circuits and Systems, 2011
    Co-Authors: Guiqiang Dong, Ningde Xie, Tong Zhang
    Abstract:

    As technology continues to scale down, NAND Flash memory has been relying increasingly on error-correction codes (ECCs) to ensure overall data storage integrity. Although advanced ECCs such as low-density parity-check (LDPC) codes can provide significantly stronger error-correction capability than the BCH codes used in current practice, their decoding requires soft-decision log-likelihood ratio (LLR) information. This results in two critical issues. First, accurate calculation of LLRs demands fine-grained memory-cell sensing, which tends to incur implementation overhead and an access Latency Penalty; hence, it is critical to minimize the fine-grained memory sensing precision. Second, accurate calculation of LLRs also demands the availability of a memory-cell threshold-voltage distribution model. As the major source of memory-cell threshold-voltage distribution distortion, cell-to-cell interference must be carefully incorporated into the model. These two critical issues have never been addressed in the open literature; this paper attempts to address them. We derive mathematical formulations to approximately model the threshold-voltage distribution of memory cells in the presence of cell-to-cell interference, based on which the calculation of LLRs is mathematically formulated. This paper also proposes a nonuniform memory sensing strategy to reduce the memory sensing precision, and thus the sensing Latency, while still maintaining good error-correction performance. In addition, we investigate these design issues in the scenario where we can also sense interfering cells and hence explicitly estimate cell-to-cell interference strength. We carry out extensive computer simulations to demonstrate the effectiveness and the tradeoffs involved, assuming the use of LDPC codes in 2-bit/cell NAND Flash memory.
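The LLR calculation described above can be sketched under a simple two-Gaussian threshold-voltage model (an illustrative assumption; the paper's model additionally incorporates cell-to-cell interference, and its sensing strategy is more elaborate):

```python
import math

def llr(v, mu0, mu1, sigma):
    """Log-likelihood ratio log(P(v|bit=0)/P(v|bit=1)) when the two bit
    values have Gaussian threshold-voltage distributions with means
    mu0, mu1 and a common standard deviation sigma. The log of the
    Gaussian ratio simplifies to a difference of squared distances."""
    return ((v - mu1) ** 2 - (v - mu0) ** 2) / (2 * sigma ** 2)

def quantize(v, thresholds):
    """Coarse (possibly nonuniform) sensing: return the index of the
    sensing region the voltage falls into, given sorted reference
    thresholds. Fewer thresholds mean less sensing Latency but coarser
    LLRs."""
    for i, t in enumerate(thresholds):
        if v < t:
            return i
    return len(thresholds)
```

At the midpoint between the two means the LLR is zero (maximal uncertainty), and it grows as the sensed voltage approaches either mean; nonuniform sensing concentrates the thresholds in this uncertain region.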

Onur Mutlu - One of the best experts on this subject based on the ideXlab platform.

  • Linearly Compressed Pages: A Main Memory Compression Framework with Low Complexity and Low Latency
    2016
    Co-Authors: Gennady Pekhimenko, Vivek Seshadri, Yoongu Kim, Hongyi Xin, Onur Mutlu, Michael A. Kozuch, Phillip B. Gibbons, Todd C. Mowry
    Abstract:

    Data compression is a promising technique to address the increasing main memory capacity demand in future systems. Unfortunately, directly applying previously proposed compression algorithms to main memory requires the memory controller to perform non-trivial computations to locate a cache line within the compressed main memory. These additional computations lead to a significant increase in access Latency, which can degrade system performance. Solutions proposed by prior work to address this performance degradation problem are either costly or energy inefficient. In this paper, we propose a new main memory compression framework that neither incurs the Latency Penalty nor requires costly or power-inefficient hardware. The key idea behind our proposal is that if all the cache lines within a page are compressed to the same size, then the location of a cache line within a compressed page is simply the product of the index of the cache line within the page and the size of a compressed cache line. We call a page compressed in such a manner a Linearly Compressed Page (LCP). LCP greatly reduces the amount of computation required to locate a cache line within the compressed page, while keeping the hardware implementation of the proposed main memory compression framework simple. We adapt two previously proposed compression algorithms, Frequent Pattern Compression and Base-Delta-Immediate compression, to fit the requirements of LCP.

  • linearly compressed pages a main memory compression framework with low complexity and low Latency
    International Conference on Parallel Architectures and Compilation Techniques, 2012
    Co-Authors: Gennady Pekhimenko, Todd C. Mowry, Onur Mutlu
    Abstract:

    Data compression is a promising technique to address the increasing main memory capacity demand in future systems. Unfortunately, directly applying previously proposed compression algorithms to main memory requires the memory controller to perform non-trivial computations to locate a cache line within the compressed main memory. These additional computations lead to a significant increase in access Latency, which can degrade system performance. Solutions proposed by prior work to address this performance degradation problem are either costly or energy inefficient. In this paper, we propose a new main memory compression framework that neither incurs the Latency Penalty nor requires costly or power-inefficient hardware. The key idea behind our proposal is that if all the cache lines within a page are compressed to the same size, then the location of a cache line within a compressed page is simply the product of the index of the cache line within the page and the size of a compressed cache line. We call a page compressed in such a manner a Linearly Compressed Page (LCP). LCP greatly reduces the amount of computation required to locate a cache line within the compressed page, while keeping the hardware implementation of the proposed main memory compression framework simple. We adapt two previously proposed compression algorithms, Frequent Pattern Compression and Base-Delta-Immediate compression, to fit the requirements of LCP. Evaluations using benchmarks from SPEC CPU 2006 and five server benchmarks show that our approach can significantly increase the effective memory capacity (69% on average). In addition to the capacity gains, we evaluate the benefit of transferring consecutive compressed cache lines between the memory controller and main memory. Our new mechanism considerably reduces the memory bandwidth requirements of most of the evaluated benchmarks (46%/48% for CPU/GPU on average), and improves overall performance (6.1%/13.9%/10.7% for single-/two-/four-core CPU workloads on average) compared to a baseline system that does not employ main memory compression.
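The address calculation at the heart of LCP can be sketched directly from the description above (function and parameter names here are illustrative, and the paper's actual page layout includes extra metadata and exception storage for incompressible lines):

```python
def lcp_cacheline_offset(page_base, line_index, comp_line_size):
    """In a Linearly Compressed Page every cache line is stored at the
    same compressed size, so locating line i is a single multiply --
    no per-line metadata lookup on the access path."""
    return page_base + line_index * comp_line_size

def variable_offset(page_base, line_index, sizes):
    """Contrast: with per-line variable compression, finding line i
    needs a prefix sum over all earlier lines' sizes (or equivalent
    metadata), which is the extra computation LCP avoids."""
    return page_base + sum(sizes[:line_index])
```

The constant-size layout is what removes the Latency Penalty: the controller computes the offset with one multiply and one add, exactly as for an uncompressed page, just with a smaller line size.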

Guiqiang Dong - One of the best experts on this subject based on the ideXlab platform.

  • enabling nand flash memory use soft decision error correction codes at minimal read Latency overhead
    IEEE Transactions on Circuits and Systems, 2013
    Co-Authors: Guiqiang Dong, Tong Zhang
    Abstract:

    With aggressive technology scaling and the use of multi-bit-per-cell storage, NAND flash memory is subject to continuous degradation of raw storage reliability and demands increasingly powerful error correction codes (ECC). This inevitable trend makes the conventional BCH code increasingly inadequate, and iterative coding solutions such as LDPC codes become natural alternatives. However, these powerful coding solutions demand soft-decision memory sensing, which results in longer on-chip memory sensing Latency and memory-to-controller data transfer Latency. Leveraging well-established lossless data compression theories, this paper presents several simple design techniques that can reduce the Latency Penalty caused by soft-decision ECCs. Their effectiveness has been demonstrated through extensive simulations, and the results suggest that the Latency can be reduced by up to 85.3%.

  • on the use of soft decision error correction codes in nand flash memory
    IEEE Transactions on Circuits and Systems, 2011
    Co-Authors: Guiqiang Dong, Ningde Xie, Tong Zhang
    Abstract:

    As technology continues to scale down, NAND Flash memory has been relying increasingly on error-correction codes (ECCs) to ensure overall data storage integrity. Although advanced ECCs such as low-density parity-check (LDPC) codes can provide significantly stronger error-correction capability than the BCH codes used in current practice, their decoding requires soft-decision log-likelihood ratio (LLR) information. This results in two critical issues. First, accurate calculation of LLRs demands fine-grained memory-cell sensing, which tends to incur implementation overhead and an access Latency Penalty; hence, it is critical to minimize the fine-grained memory sensing precision. Second, accurate calculation of LLRs also demands the availability of a memory-cell threshold-voltage distribution model. As the major source of memory-cell threshold-voltage distribution distortion, cell-to-cell interference must be carefully incorporated into the model. These two critical issues have never been addressed in the open literature; this paper attempts to address them. We derive mathematical formulations to approximately model the threshold-voltage distribution of memory cells in the presence of cell-to-cell interference, based on which the calculation of LLRs is mathematically formulated. This paper also proposes a nonuniform memory sensing strategy to reduce the memory sensing precision, and thus the sensing Latency, while still maintaining good error-correction performance. In addition, we investigate these design issues in the scenario where we can also sense interfering cells and hence explicitly estimate cell-to-cell interference strength. We carry out extensive computer simulations to demonstrate the effectiveness and the tradeoffs involved, assuming the use of LDPC codes in 2-bit/cell NAND Flash memory.

Susanta Nanda - One of the best experts on this subject based on the ideXlab platform.

  • dynamic multi process information flow tracking for web application security
    ACM IFIP USENIX international conference on Middleware, 2007
    Co-Authors: Susanta Nanda, Tzicker Chiueh
    Abstract:

    Although there is a large body of research on the detection and prevention of memory corruption attacks such as buffer overflow, integer overflow, and format string attacks, the web application security problem receives relatively little attention from the research community by comparison. The majority of web application security problems originate from the fact that web applications fail to perform sanity checks on inputs from the network that are eventually used as operands of security-sensitive operations. Therefore, a promising approach to this problem is to apply proper checks on the tainted portions of the operands used in security-sensitive operations, where a byte is tainted if it is data/control dependent on some network packet(s). This paper presents the design, implementation, and evaluation of a dynamic checking compiler called WASC, which automatically adds checks into web applications used in three-tier internet services to protect them from the two most common types of web application attacks: SQL injection and script injection. In addition to including a taint analysis infrastructure for multi-process and multi-language applications, WASC features the use of SQL and HTML parsers to defeat evasion techniques that exploit interpretation differences between attack detection engines and target applications. Experiments with a fully operational WASC prototype show that it can indeed stop all SQL/script injection attacks that we have tested. Moreover, the end-to-end Latency Penalty associated with the checks inserted by WASC is less than 30% for the test web applications used in our performance study.