Efficient management of cached storage control resources has been important since the introduction of cached controllers in the early 1980s, and it continues to grow more important as technology advances. The need for cache resource management is due to the diversity of workloads that may coexist under a given controller. Some workloads may continually require the staging of new data into cache memory, with almost no benefit in terms of performance; other workloads may reap major performance benefits while requiring relatively little data staging. The sharing of resources among various workloads must therefore be controlled to ensure that workloads in the former group do not interfere too much with those in the latter. Management of cache functions is often viewed as the job of the host system to which the controller is attached. But it is now also possible for advanced controllers to perform such management functions in a stand-alone manner. Caching algorithms can change adaptively to match the workloads presented. This enables the controller to be ported across multiple platforms without dependencies on software support. This paper surveys the variety of techniques that have been used for cache resource control, and examines the rapid evolution in such techniques that is now occurring.
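The core idea above — observing which workloads actually benefit from staging and declining to cache those that do not — can be illustrated with a small sketch. This is not the algorithm of any particular controller surveyed here; the class, parameters (`min_hit_ratio`, `warmup`), and thresholds are hypothetical, chosen only to show how per-workload hit statistics might drive an adaptive bypass decision over a shared LRU cache.

```python
from collections import OrderedDict

class AdaptiveCache:
    """Illustrative sketch (all names hypothetical): a shared cache that
    tracks per-workload hit ratios and bypasses staging for workloads
    that show little caching benefit."""

    def __init__(self, capacity, min_hit_ratio=0.1, warmup=20):
        self.capacity = capacity
        self.min_hit_ratio = min_hit_ratio  # below this, stop staging
        self.warmup = warmup                # references before judging
        self.cache = OrderedDict()          # (workload, block) in LRU order
        self.stats = {}                     # workload -> [hits, references]

    def access(self, workload, block):
        """Reference one block; return True on a cache hit."""
        hits, refs = self.stats.get(workload, [0, 0])
        key = (workload, block)
        hit = key in self.cache
        self.stats[workload] = [hits + hit, refs + 1]
        if hit:
            self.cache.move_to_end(key)    # refresh LRU position
            return True
        # After a warm-up period, decline to stage data for workloads
        # whose observed hit ratio shows little benefit from caching,
        # so they cannot crowd out workloads that do benefit.
        if refs >= self.warmup and hits / refs < self.min_hit_ratio:
            return False
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        self.cache[key] = True
        return False
```

Driving this with a sequential-scan workload (every block new, no reuse) alongside a small-working-set workload shows the intended separation: once the scan's hit ratio is observed to be poor, its blocks are no longer staged, while the reused blocks remain cached and keep hitting.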
Note: The Institute of Electrical and Electronics Engineers, Incorporated is distributing this Article with permission of the International Business Machines Corporation (IBM), which is the exclusive owner. The recipient of this Article may not assign, sublicense, lease, rent or otherwise transfer, reproduce, prepare derivative works, publicly display or perform, or distribute the Article.