Write Policy

The Experts below are selected from a list of 12,933 Experts worldwide, ranked by the ideXlab platform.

Sanjeev Setia - One of the best experts on this subject based on the ideXlab platform.

  • Analysis of the periodic update Write Policy for disk cache
    IEEE Transactions on Software Engineering, 1992
    Co-Authors: S. D. Carson, Sanjeev Setia
    Abstract:

    A disk cache is typically used in file systems to reduce the average access time for data storage and retrieval. The 'periodic update' Write Policy, widely used in existing computer systems, is one in which dirty cache blocks are written to disk on a periodic basis. The average response time for disk read requests when the periodic update Write Policy is used is determined. Read and Write load, cache-hit ratio, and the disk scheduler's ability to reduce service time under load are incorporated in the analysis, leading to design criteria that can be used to decide among competing cache Write policies. The main conclusion is that the bulk arrivals generated by the periodic update Policy cause a traffic-jam effect that results in severely degraded service. Effective use of the disk cache and disk scheduling can alleviate this problem, but only under a narrow range of operating conditions. Based on this conclusion, alternate Write policies that retain the periodic update Policy's advantages and provide uniformly better service are proposed.
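
    The periodic update Policy described above can be summarized in a short sketch. This is a minimal illustration, not the paper's model: the DiskCache class, the 30-second default interval, and the disk_queue stand-in are assumptions chosen only to make the bulk-arrival behaviour visible.

        # Minimal sketch of a "periodic update" (flush-every-T-seconds) disk-cache
        # write policy. Names such as DiskCache, flush_interval, and disk_queue are
        # illustrative; the paper analyses the policy, it does not prescribe code.
        import threading
        import time
        from collections import deque

        class DiskCache:
            def __init__(self, flush_interval=30.0):
                self.flush_interval = flush_interval  # the classic UNIX update daemon used 30 s
                self.blocks = {}                      # block_id -> data currently cached
                self.dirty = set()                    # blocks modified since the last flush
                self.disk_queue = deque()             # stand-in for the disk request queue
                self.lock = threading.Lock()

            def write(self, block_id, data):
                # Writes complete in the cache immediately; the disk sees nothing yet.
                with self.lock:
                    self.blocks[block_id] = data
                    self.dirty.add(block_id)

            def periodic_update(self):
                # Every flush_interval seconds, ALL dirty blocks are queued at once.
                # This bulk arrival is the "traffic jam" effect: reads arriving just
                # after a flush wait behind the whole batch.
                while True:
                    time.sleep(self.flush_interval)
                    with self.lock:
                        batch = [(b, self.blocks[b]) for b in self.dirty]
                        self.dirty.clear()
                    self.disk_queue.extend(batch)     # bulk arrival at the disk

    Running periodic_update in a background thread (threading.Thread(target=cache.periodic_update, daemon=True).start()) reproduces the batching behaviour the analysis models; the alternatives proposed in the paper retain the Policy's advantages while providing uniformly better service.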

S. D. Carson - One of the best experts on this subject based on the ideXlab platform.

  • Analysis of the periodic update Write Policy for disk cache
    IEEE Transactions on Software Engineering, 1992
    Co-Authors: S. D. Carson, Sanjeev Setia
    Abstract:

    A disk cache is typically used in file systems to reduce the average access time for data storage and retrieval. The 'periodic update' Write Policy, widely used in existing computer systems, is one in which dirty cache blocks are written to disk on a periodic basis. The average response time for disk read requests when the periodic update Write Policy is used is determined. Read and Write load, cache-hit ratio, and the disk scheduler's ability to reduce service time under load are incorporated in the analysis, leading to design criteria that can be used to decide among competing cache Write policies. The main conclusion is that the bulk arrivals generated by the periodic update Policy cause a traffic-jam effect that results in severely degraded service. Effective use of the disk cache and disk scheduling can alleviate this problem, but only under a narrow range of operating conditions. Based on this conclusion, alternate Write policies that retain the periodic update Policy's advantages and provide uniformly better service are proposed.

Eric Rotenberg - One of the best experts on this subject based on the ideXlab platform.

  • HPCA - Tapping ZettaRAM™ for low-power memory systems
    11th International Symposium on High-Performance Computer Architecture, 2005
    Co-Authors: Ravi K. Venkatesan, Ahmed S. Al-zawawi, Eric Rotenberg
    Abstract:

    ZettaRAM™ is a new memory technology under development by ZettaCore™ as a potential replacement for conventional DRAM. The key innovation is replacing the conventional capacitor in each DRAM cell with "charge-storage" molecules - a molecular capacitor. We look beyond ZettaRAM's manufacturing benefits, and approach it from an architectural viewpoint to discover benefits within the domain of architectural metrics. The molecular capacitor is unusual because the amount of charge deposited (critical for reliable sensing) is independent of Write voltage, i.e., there is a discrete threshold voltage above/below which the device is fully charged/discharged. Decoupling charge from voltage enables manipulation via arbitrarily small bitline swings, saving energy. However, while charge is voltage-independent, speed is voltage-dependent. Operating too close to the threshold causes molecules to overtake peripheral circuitry as the overall performance limiter. Nonetheless, ZettaRAM offers a speed/energy trade-off whereas DRAM is inflexible, introducing new dimensions for architectural management of memory. We apply architectural insights to tap the full extent of ZettaRAM's power savings without compromising performance. Several factors converge nicely to direct focus on L2 Writebacks: (i) they account for 80% of row buffer misses in the main memory, thus most of the energy savings potential, and (ii) they do not directly stall the processor and thereby offer scheduling flexibility for tolerating extended molecule latency. Accordingly, slow Writes (low energy) are applied to non-critical Writebacks and fast Writes (high energy) to critical fetches. The hybrid Write Policy is combined with two options for tolerating delayed Writebacks: large buffers with access reordering or L2-cache eager Writebacks. Eager Writebacks are remarkably synergistic with ZettaRAM: initiating Writebacks early in the L2 cache compensates for delaying them at the memory controller. Dual-speed Writes coupled with eager Writebacks yield energy savings of 34% (out of 41% with uniformly slow Writes), with less than 1% performance degradation.
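
    The hybrid Write Policy above pairs slow, low-energy Writes with non-critical Writebacks and fast Writes with critical fetches. The sketch below illustrates only that selection rule; the MemRequest fields and the two mode descriptors are assumptions for illustration, not the paper's memory-controller design.

        # Sketch of the hybrid (dual-speed) Write selection at the memory controller:
        # non-critical L2 Writebacks get slow, low-energy (near-threshold) Writes,
        # while Writes on the critical path get fast, higher-energy Writes.
        from dataclasses import dataclass

        SLOW_WRITE = {"voltage": "near-threshold", "energy": "low",  "latency": "long"}
        FAST_WRITE = {"voltage": "overdriven",     "energy": "high", "latency": "short"}

        @dataclass
        class MemRequest:
            kind: str            # "writeback" or "fetch"
            blocks_demand: bool  # True if a processor fetch is waiting on this request

        def choose_write_mode(req: MemRequest) -> dict:
            # Writebacks do not directly stall the processor, so they tolerate the
            # longer molecule-charging time of a near-threshold Write; anything on
            # the critical path gets the fast (conventional-voltage) Write.
            if req.kind == "writeback" and not req.blocks_demand:
                return SLOW_WRITE
            return FAST_WRITE

        # Example: choose_write_mode(MemRequest("writeback", False)) returns SLOW_WRITE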

Alain Greiner - One of the best experts on this subject based on the ideXlab platform.

  • RWT: Suppressing Write-Through Cost When Coherence is Not Needed
    2015 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), 2015
    Co-Authors: Hao Liu, Clement Devigne, Lucas Garcia, Quentin L. Meunier, Franck Wajsbürt, Alain Greiner
    Abstract:

    In shared-memory multicore architectures, handling a Write cache operation is more complicated than in single-processor systems. A cache line may be present in more than one private L1 cache. Any cache willing to Write this line must inform all the other sharers. Therefore, it is necessary to implement a cache coherence protocol for multicore architectures. At present, directory-based protocols are popular cache coherence protocols in both industry and academia because of their reduced coherence traffic compared to snooping protocols, at the expense of an indirection. The Write Policy -- Write-through or Write-back -- is crucial in the protocol design. The Write-through Policy reduces the available bandwidth because it increases the Write traffic in the interconnection network, and it also increases energy consumption. However, it can efficiently solve the false sharing problem via Write updates. In this paper, we introduce a new way to reduce the Write traffic of a Write-through coherence protocol by combining Write-through coherence with a Write-back Policy for non-coherent lines. The baseline Write-through protocol used as reference is a scalable hybrid invalidate/update protocol. Simulation results show that with our enhanced protocol, we can reduce the Write traffic in the interconnection network by at least 50% and gain up to 20% in performance compared with the baseline Write-through protocol.
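
    The idea above, keeping Write-through (with updates) for coherent lines while handling non-coherent lines Write-back, can be sketched as follows. How a line is classified as coherent and the L1/L2 interfaces shown here are assumptions for illustration, not the RWT hardware protocol.

        # Sketch of combining Write-through coherence with a Write-back Policy for
        # non-coherent lines.
        class L2Stub:
            def update(self, block_id, data):
                pass  # would forward the Write on the interconnection network

        class L1Line:
            def __init__(self, block_id, data, coherent):
                self.block_id = block_id
                self.data = data
                self.coherent = coherent  # potentially shared with other L1 caches
                self.dirty = False

        class L1Cache:
            def __init__(self, l2):
                self.l2 = l2

            def write(self, line, data):
                line.data = data
                if line.coherent:
                    # Write-through: propagate immediately so the directory and the
                    # other sharers see the update (hybrid invalidate/update protocol).
                    self.l2.update(line.block_id, data)
                else:
                    # Write-back: no sharers, so mark dirty locally and defer the
                    # interconnect traffic until the line is evicted.
                    line.dirty = True

            def evict(self, line):
                if line.dirty:
                    self.l2.update(line.block_id, line.data)
                    line.dirty = False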

Seyed Ghassem Miremadi - One of the best experts on this subject based on the ideXlab platform.

  • ISQED - Joint Write Policy and fault-tolerance mechanism selection for caches in DSM technologies: Energy-reliability trade-off
    2009 10th International Symposium on Quality of Electronic Design, 2009
    Co-Authors: Mehrtash Manoochehri, Alireza Ejlali, Seyed Ghassem Miremadi
    Abstract:

    Write-through caches potentially have higher reliability than Write-back caches. However, Write-back caches are more energy efficient. This paper provides a comparison between the Write-back and Write-through policies based on combined reliability and energy-consumption criteria. In the experiments, the SimpleScalar tool and the CACTI model are used to evaluate the characteristics of the caches. The results show that a Write-through cache with one parity bit per word is as reliable as a Write-back cache with a SEC-DED code per word. Furthermore, the results show that the energy saving of the Write-through cache over the Write-back cache increases if any of the following changes happens: i) a decrease in the feature size, ii) a decrease in the L2 cache size, and iii) an increase in the L1 cache size. The results also show that when the feature size is larger than 32 nm, the Write-back cache is usually more energy efficient. However, at 32 nm and smaller feature sizes, the Write-through cache can be more energy efficient.
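
    The parity-versus-SEC-DED result above rests on the Write Policy: under Write-through the L1 never holds the only copy of a word, so detecting an error and refetching a clean copy suffices, whereas a Write-back cache may hold dirty data that must be corrected in place. The sketch below illustrates that reasoning; the function names and callbacks are assumptions, not the paper's fault-tolerance mechanisms.

        # Sketch of why per-word parity can match SEC-DED reliability under a
        # Write-through Policy.
        def parity_bit(word: int) -> int:
            # Even parity over a 32-bit word: one extra bit per word.
            return bin(word & 0xFFFFFFFF).count("1") & 1

        def read_write_through(l1_word, stored_parity, refetch_from_l2):
            # Detection suffices: the L2 (or memory) always has an up-to-date copy.
            if parity_bit(l1_word) != stored_parity:
                return refetch_from_l2()   # recover by re-reading the clean copy
            return l1_word

        def read_write_back(l1_word, secded_correct):
            # A dirty line may be the only valid copy, so the code must correct
            # (SEC) as well as detect (DED) errors in place, e.g. Hamming(39,32).
            return secded_correct(l1_word)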