Proxy prefetch and prefix caching

Authors:

Wei-Kuo Liao, Dept. of Communication Engineering, National Chiao Tung University, Hsinchu, Taiwan; Chung-Ta King

Abstract:

Proxy prefetch caching aims to reduce the latency of serving web requests by prefetching objects into the proxy cache in anticipation that they might be requested. When clients actually request them, these objects can then be served directly from the proxy cache without having to be fetched all the way from the remote server. A key component in proxy prefetch caching is the mechanism that determines the profitability of prefetching. A good prefetch advisor must consider not only factors such as the request probability of the prefetched objects, the network bandwidth, and the object size, but also the cache space and the objects already in the cache. A unified framework is therefore needed, under which the prefetch advisor can judge whether prefetching some objects by replacing others in the cache leads to a reduction in the average user access time. In this paper we introduce a framework that unifies prefetch caching and cache replacement with a single index called object profit. The proposed framework is general enough to apply to almost any type of web object, and it jointly considers network bandwidth, cache capacity and replacement, object size, and simplicity of analysis. We derive the object profit mathematically and then discuss how to apply the framework in practice. The proposed scheme is evaluated through simulation, which shows that the latency reduction obtained with the proposed framework can be maximized.
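
The abstract states that object profit is derived mathematically, but the derivation is not reproduced here. The following Python sketch is an illustration only: it assumes a hypothetical profit metric (request probability times the latency of fetching the object over the available bandwidth) and shows how such an index could drive a combined prefetch/replacement decision. The function names, the profit formula, and the eviction order are assumptions for illustration, not the paper's method.

from dataclasses import dataclass

@dataclass
class WebObject:
    name: str
    size_kb: float        # object size in KB
    request_prob: float   # estimated probability of a future request

def expected_saving(obj, bandwidth_kbps):
    # Hypothetical profit metric: probability of a request times the
    # latency of fetching the object from the origin over the given link.
    return obj.request_prob * (obj.size_kb / bandwidth_kbps)

def advise_prefetch(candidate, cache, capacity_kb, bandwidth_kbps):
    # Prefetch the candidate only if enough room can be made by evicting
    # lower-profit objects and the cache's total expected saving grows.
    free = capacity_kb - sum(o.size_kb for o in cache)
    if candidate.size_kb <= free:
        return True, []                      # fits without replacement
    victims, reclaimed, lost = [], free, 0.0
    for obj in sorted(cache, key=lambda o: expected_saving(o, bandwidth_kbps)):
        if reclaimed >= candidate.size_kb:
            break
        victims.append(obj)
        reclaimed += obj.size_kb
        lost += expected_saving(obj, bandwidth_kbps)
    if reclaimed < candidate.size_kb:
        return False, []                     # cannot make enough room
    gained = expected_saving(candidate, bandwidth_kbps)
    return (gained > lost), victims

In this sketch the advisor replaces cached objects only when the expected latency saved by the prefetched object exceeds the expected latency given up by the evicted ones, which mirrors the unified prefetch/replacement judgment the abstract describes.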

Published in:

2001 International Conference on Parallel Processing

Date of Conference:

3-7 Sept. 2001