Cost-Effective Caching for Mobility Support in IEEE 802.1X Frameworks

3 Author(s)
Kuang-Hui Chi (Dept. of Electr. Eng., Nat. Yunlin Univ. of Sci. & Technol.); Ji-Han Jiang; Li-Hsing Yen

This paper is concerned with caching support at access points (APs) for fast handoff within IEEE 802.11 networks. A common flavor of current schemes is to let a mobile station preauthenticate with, or proactively distribute its security context to, neighboring APs. Each target AP caches the received context beforehand and can thus avoid backend-network authentication if the station reassociates. We present an approach to improving cache effectiveness under the least recently used (LRU) replacement policy, additionally accounting for distinct cache miss penalties indicative of authentication delay. We leverage widely used LRU caching techniques in a new model where high-penalty cache entries are prevented from being prematurely evicted under the conventional replacement policy, so as to avoid frequent, expensive authentications with remote sites. This is accomplished by introducing software-generated reference requests that trigger the cache hardware machinery in APs to refresh certain entries in an automated manner. Performance evaluations are conducted using simulation and analytical modeling. Performance results show that our approach, when compared with the base LRU scheme, reduces authentication delay by more than 51 percent and cache miss ratio by over 28 percent on average. Quantitative and qualitative discussions indicate that our approach is applicable in pragmatic settings.
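The core mechanism described in the abstract — an LRU cache whose high-penalty entries are kept alive by software-generated reference requests — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the class name `PenaltyAwareLRU`, the `penalty_threshold` parameter, and the method names are assumptions chosen for clarity.

```python
from collections import OrderedDict

class PenaltyAwareLRU:
    """LRU cache of security contexts in which entries with a high
    re-authentication (miss) penalty are periodically 'touched' by
    software-generated references so plain LRU does not evict them."""

    def __init__(self, capacity, penalty_threshold):
        self.capacity = capacity
        self.penalty_threshold = penalty_threshold
        self.cache = OrderedDict()  # key -> (context, miss_penalty)

    def access(self, key, load_fn, miss_penalty):
        """Genuine reference from a reassociating station.
        Returns (context, hit?)."""
        if key in self.cache:
            self.cache.move_to_end(key)           # hit: mark most recently used
            return self.cache[key][0], True
        context = load_fn(key)                    # miss: full backend authentication
        self.cache[key] = (context, miss_penalty)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)        # evict the LRU entry
        return context, False

    def refresh_high_penalty(self):
        """Software-generated reference requests: move every entry whose
        miss penalty exceeds the threshold to the MRU end, shielding it
        from premature eviction under conventional LRU replacement."""
        for key, (_, penalty) in list(self.cache.items()):
            if penalty >= self.penalty_threshold:
                self.cache.move_to_end(key)
```

In this sketch, a background task on the AP would call `refresh_high_penalty()` periodically, so that contexts whose re-establishment requires a costly round trip to a remote authentication server stay cached even when they are referenced less often than cheap entries.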

Published in:

IEEE Transactions on Mobile Computing (Volume: 5, Issue: 11)