Cache Consistency

The experts below are selected from a list of 324 experts worldwide, ranked by the ideXlab platform.

Margo Seltzer - One of the best experts on this subject based on the ideXlab platform.

  • World-Wide Web Cache Consistency
    USENIX Annual Technical Conference, 1996
    Co-Authors: James S Gwertzman, Margo Seltzer
    Abstract:

    The bandwidth demands of the World Wide Web continue to grow at a hyper-exponential rate. Given this rocketing growth, caching of web objects as a means to reduce network bandwidth consumption is likely to be a necessity in the very near future. Unfortunately, many web caches do not satisfactorily maintain cache consistency. This paper presents a survey of contemporary cache consistency mechanisms in use on the Internet today and examines recent research in web cache consistency. Using trace-driven simulation, we show that a weak cache consistency protocol (the one used in the Alex FTP cache) reduces network bandwidth consumption and server load more than either time-to-live fields or an invalidation protocol, and can be tuned to return stale data less than 5% of the time.
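    The weak-consistency scheme credited above to the Alex FTP cache is an adaptive TTL: an object's time-to-live is set to a fixed fraction of its age at fetch time, on the reasoning that long-unchanged objects tend to stay unchanged. A minimal Python sketch, where the 10% threshold and the cache-entry layout are illustrative assumptions rather than the paper's exact parameters:

    ```python
    def adaptive_ttl(last_modified, fetched_at, threshold=0.1):
        """Adaptive TTL: proportional to the object's age at fetch time.
        The 10% threshold is an illustrative choice, tunable to trade
        bandwidth savings against the chance of returning stale data."""
        age = fetched_at - last_modified
        return max(0.0, threshold * age)

    def is_fresh(entry, now):
        # `entry` (a dict with fetched_at/ttl) is a hypothetical cache record
        return now - entry["fetched_at"] < entry["ttl"]

    entry = {"fetched_at": 1000.0, "ttl": adaptive_ttl(0.0, 1000.0)}  # ttl = 100.0
    print(is_fresh(entry, 1050.0))   # True: only half the TTL has elapsed
    print(is_fresh(entry, 1150.0))   # False: time to revalidate with the origin
    ```

    Objects that have survived unchanged for years thus get long TTLs, while a freshly modified object expires quickly, which is what lets the protocol bound staleness without server-side state.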

Scott Shenker - One of the best experts on this subject based on the ideXlab platform.

  • A Scalable Web Cache Consistency Architecture
    ACM Special Interest Group on Data Communication, 1999
    Co-Authors: Lee Breslau, Scott Shenker
    Abstract:

    The rapid increase in web usage has led to dramatically increased loads on the network infrastructure and on individual web servers. To ameliorate these mounting burdens, there has been much recent interest in web caching architectures and algorithms. Web caching reduces network load, server load, and the latency of responses. However, web caching has the disadvantage that the pages returned to clients by caches may be stale, in that they may not be consistent with the version currently on the server. In this paper we describe a scalable web cache consistency architecture that provides fairly tight bounds on the staleness of pages. Our architecture borrows heavily from the literature, and can best be described as an invalidation approach made scalable by using a caching hierarchy and application-level multicast routing to convey the invalidations. We evaluate this design with calculations and simulations, and compare it to several other approaches.
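    The invalidation-through-a-hierarchy idea described above can be illustrated with a toy model: each cache drops its stale copy and forwards the invalidation to its children, approximating application-level multicast down the tree. Class and field names here are hypothetical, not taken from the paper:

    ```python
    class CacheNode:
        """Toy node in a caching hierarchy (names are illustrative)."""
        def __init__(self, name):
            self.name = name
            self.store = {}       # url -> (version, body)
            self.children = []    # caches fed by this node

        def put(self, url, version, body):
            self.store[url] = (version, body)

        def invalidate(self, url, version):
            # Drop any older copy, then forward the invalidation to all
            # children: one message per tree edge, not per client.
            cached = self.store.get(url)
            if cached and cached[0] < version:
                del self.store[url]
            for child in self.children:
                child.invalidate(url, version)

    root, leaf = CacheNode("root"), CacheNode("leaf")
    root.children.append(leaf)
    root.put("/a", 1, "old"); leaf.put("/a", 1, "old")
    root.invalidate("/a", 2)
    print("/a" in leaf.store)   # False: the invalidation propagated down
    ```

    The hierarchy is what makes the approach scale: the server notifies only its direct children, and each level fans the message out further.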

Shansi Ren - One of the best experts on this subject based on the ideXlab platform.

  • Maintaining Strong Cache Consistency for the Domain Name System
    IEEE Transactions on Knowledge and Data Engineering, 2007
    Co-Authors: Xin Chen, Haining Wang, Shansi Ren, Xiaodong Zhang
    Abstract:

    Effective caching in the domain name system (DNS) is critical to its performance and scalability. Existing DNS only supports weak cache consistency by using the time-to-live (TTL) mechanism, which functions reasonably well in normal situations. However, maintaining strong cache consistency in DNS as an indispensable exception-handling mechanism has become more and more demanding for three important objectives: 1) to quickly respond to and handle exceptions such as sudden and dramatic Internet failures caused by natural and human disasters, 2) to adapt to increasingly frequent changes of Internet Protocol (IP) addresses due to the introduction of dynamic DNS techniques for various stationed and mobile devices on the Internet, and 3) to provide fine-grain controls for content delivery services to balance server load distributions in a timely manner. With agile adaptation to various exceptional Internet dynamics, strong DNS cache consistency improves the availability and reliability of Internet services. In this paper, we first conduct extensive Internet measurements to quantitatively characterize DNS dynamics. Then, we propose a proactive DNS cache update protocol (DNScup), running as middleware in DNS name servers, to provide strong cache consistency for DNS. The core of DNScup is an optimal lease scheme, called dynamic lease, to keep track of the local DNS name servers. We compare dynamic lease with other existing lease schemes through theoretical analysis and trace-driven simulations. Based on the DNS dynamic update protocol, we build a DNScup prototype with minor modifications to the current DNS implementation. Our system prototype demonstrates the effectiveness of DNScup and its easy and incremental deployment on the Internet.
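    The dynamic-lease idea can be pictured as follows: the authoritative server grants a lease (and thereby promises to push change notifications) only to resolvers that query frequently enough to justify the per-client tracking state; infrequent resolvers fall back to plain TTL expiry. All names, parameters, and thresholds below are illustrative assumptions, not the paper's actual scheme:

    ```python
    class LeaseTable:
        """Sketch of lease bookkeeping on an authoritative name server.
        lease_duration and min_rate are hypothetical tuning knobs."""
        def __init__(self, lease_duration=300.0, min_rate=1 / 60.0):
            self.lease_duration = lease_duration
            self.min_rate = min_rate   # queries/sec needed to earn a lease
            self.leases = {}           # resolver -> lease expiry time
            self.last_seen = {}        # resolver -> last query time

        def on_query(self, resolver, now):
            prev = self.last_seen.get(resolver)
            self.last_seen[resolver] = now
            if prev is not None and (now - prev) <= 1.0 / self.min_rate:
                # Frequent querier: worth tracking, so grant a lease.
                self.leases[resolver] = now + self.lease_duration
                return True
            return False               # infrequent: TTL-only, no state kept

        def holders(self, now):
            # Resolvers to notify proactively when the record changes.
            return [r for r, exp in self.leases.items() if exp > now]

    t = LeaseTable()
    t.on_query("res1", 0.0)
    print(t.on_query("res1", 30.0))   # True: frequent enough, lease granted
    print(t.holders(100.0))           # ['res1']
    ```

    The trade-off is exactly the one the abstract names: leases buy strong consistency for active resolvers, while the rate threshold caps how much per-client state the server must hold.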

  • DNScup: Strong Cache Consistency Protocol for DNS
    International Conference on Distributed Computing Systems, 2006
    Co-Authors: Xin Chen, Haining Wang, Shansi Ren
    Abstract:

    Effective caching in the Domain Name System (DNS) is critical to its performance and scalability. Existing DNS only supports weak cache consistency by using the Time-To-Live (TTL) mechanism, which functions reasonably well in normal situations. However, maintaining strong cache consistency in DNS as an indispensable exception-handling mechanism has become more and more demanding for three important objectives: (1) to quickly respond to and handle exceptional incidents, such as sudden and dramatic Internet failures caused by natural and human disasters, (2) to adapt to increasingly frequent changes of IP addresses due to the introduction of dynamic DNS techniques for various stationed and mobile devices on the Internet, and (3) to provide fine-grain controls for content delivery services to balance server load distributions in a timely manner. With agile adaptation to various exceptional Internet dynamics, strong DNS cache consistency improves the availability and reliability of Internet services. In this paper, we propose a proactive DNS cache update protocol, called DNScup, running as middleware in DNS name servers, to provide strong cache consistency for DNS. The core of DNScup is a dynamic lease technique to keep track of the local DNS name servers whose clients need cache coherence to avoid losing service availability. Based on the DNS Dynamic Update protocol, we have built a DNScup prototype with minor modifications to the current DNS implementation. Our trace-driven simulation and system prototype demonstrate the effectiveness of DNScup and its easy and incremental deployment on the Internet.

Craig E Wills - One of the best experts on this subject based on the ideXlab platform.

  • Evaluating a New Approach to Strong Web Cache Consistency with Snapshots of Collected Content
    The Web Conference, 2003
    Co-Authors: Mikhail Mikhailov, Craig E Wills
    Abstract:

    The problem of web cache consistency continues to be an important one. Current web caches use heuristic-based policies for determining the freshness of cached objects, often forcing content providers to unnecessarily mark their content as uncacheable simply to retain control over it. Server-driven invalidation has been proposed as a mechanism for providing strong cache consistency for web objects, but it requires servers to maintain per-client state even for infrequently changing objects. We propose an alternative approach to strong cache consistency, called MONARCH, which does not require servers to maintain per-client state. In this work we focus on a new approach for evaluation of MONARCH in comparison with current practice and other cache consistency policies. This approach uses snapshots of content collected from real web sites as input to a simulator. Results of the evaluation show MONARCH generates little more request traffic than an optimal cache coherency policy.
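    One way to picture how strong consistency is possible here without per-client server state: if the server exposes a single version for a page's whole collection of objects, the cache needs only one validation round trip to cover all of them. This is a loose, hypothetical sketch of that snapshot idea, not MONARCH's actual mechanism:

    ```python
    class OriginServer:
        """Hypothetical origin: one version number covers the whole page
        snapshot, bumped whenever any member object changes."""
        def __init__(self):
            self.version = 1
            self.members = {"/logo.png": "logo-v1", "/style.css": "css-v1"}

        def validate(self, version):
            # One stateless check answers for every object in the snapshot.
            return version == self.version

    class SnapshotCache:
        def __init__(self, server):
            self.server = server
            self.version = None
            self.objects = {}

        def get(self, url):
            if self.version is not None and self.server.validate(self.version):
                return self.objects[url]   # fresh snapshot: no per-object check
            # Cold cache or snapshot changed: fetch a new snapshot.
            self.version = self.server.version
            self.objects = dict(self.server.members)
            return self.objects[url]

    origin = OriginServer()
    cache = SnapshotCache(origin)
    print(cache.get("/logo.png"))   # logo-v1
    origin.members["/logo.png"] = "logo-v2"; origin.version = 2
    print(cache.get("/logo.png"))   # logo-v2: stale snapshot detected and refetched
    ```

    Because the server answers a stateless version check rather than remembering which clients hold which objects, consistency stays strong while the server-side burden stays flat.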

Xin Chen - One of the best experts on this subject based on the ideXlab platform.

  • Maintaining Strong Cache Consistency for the Domain Name System
    IEEE Transactions on Knowledge and Data Engineering, 2007
    Co-Authors: Xin Chen, Haining Wang, Shansi Ren, Xiaodong Zhang
    Abstract:

    Effective caching in the domain name system (DNS) is critical to its performance and scalability. Existing DNS only supports weak cache consistency by using the time-to-live (TTL) mechanism, which functions reasonably well in normal situations. However, maintaining strong cache consistency in DNS as an indispensable exception-handling mechanism has become more and more demanding for three important objectives: 1) to quickly respond to and handle exceptions such as sudden and dramatic Internet failures caused by natural and human disasters, 2) to adapt to increasingly frequent changes of Internet Protocol (IP) addresses due to the introduction of dynamic DNS techniques for various stationed and mobile devices on the Internet, and 3) to provide fine-grain controls for content delivery services to balance server load distributions in a timely manner. With agile adaptation to various exceptional Internet dynamics, strong DNS cache consistency improves the availability and reliability of Internet services. In this paper, we first conduct extensive Internet measurements to quantitatively characterize DNS dynamics. Then, we propose a proactive DNS cache update protocol (DNScup), running as middleware in DNS name servers, to provide strong cache consistency for DNS. The core of DNScup is an optimal lease scheme, called dynamic lease, to keep track of the local DNS name servers. We compare dynamic lease with other existing lease schemes through theoretical analysis and trace-driven simulations. Based on the DNS dynamic update protocol, we build a DNScup prototype with minor modifications to the current DNS implementation. Our system prototype demonstrates the effectiveness of DNScup and its easy and incremental deployment on the Internet.

  • DNScup: Strong Cache Consistency Protocol for DNS
    International Conference on Distributed Computing Systems, 2006
    Co-Authors: Xin Chen, Haining Wang, Shansi Ren
    Abstract:

    Effective caching in the Domain Name System (DNS) is critical to its performance and scalability. Existing DNS only supports weak cache consistency by using the Time-To-Live (TTL) mechanism, which functions reasonably well in normal situations. However, maintaining strong cache consistency in DNS as an indispensable exception-handling mechanism has become more and more demanding for three important objectives: (1) to quickly respond to and handle exceptional incidents, such as sudden and dramatic Internet failures caused by natural and human disasters, (2) to adapt to increasingly frequent changes of IP addresses due to the introduction of dynamic DNS techniques for various stationed and mobile devices on the Internet, and (3) to provide fine-grain controls for content delivery services to balance server load distributions in a timely manner. With agile adaptation to various exceptional Internet dynamics, strong DNS cache consistency improves the availability and reliability of Internet services. In this paper, we propose a proactive DNS cache update protocol, called DNScup, running as middleware in DNS name servers, to provide strong cache consistency for DNS. The core of DNScup is a dynamic lease technique to keep track of the local DNS name servers whose clients need cache coherence to avoid losing service availability. Based on the DNS Dynamic Update protocol, we have built a DNScup prototype with minor modifications to the current DNS implementation. Our trace-driven simulation and system prototype demonstrate the effectiveness of DNScup and its easy and incremental deployment on the Internet.
