Source Information
Abstract
English
Caching is one of the major techniques used to achieve higher performance in operating systems, databases, and the World Wide Web. High-performance processors require memory systems with suitable access times, yet a large gap remains between processor and memory performance. Virtual memory management and hierarchical memory models play an important role in system performance. In these architectures, the cache replacement policy determines how much the memory system can be improved and thus affects the efficiency of the whole system. Different caching policies have different effects on system performance. Because replacement policies play such a prominent role, a great deal of work and many algorithms have been proposed to overcome the performance gap between processor and memory. Most of these policies are enhancements of the Least-Recently-Used (LRU) and Least-Frequently-Used (LFU) schemes. Although most of the proposed schemes address the shortcomings of LRU and LFU, they carry considerable overhead and are difficult to implement, whereas the main advantage of LRU and LFU is their simple implementation. This article proposes an adaptive replacement policy that imposes low overhead on the system and is easy to implement. The model, named the Weighting Replacement Policy (WRP), is based on ranking the pages in the cache according to three factors. Whenever a miss occurs, the page with the lowest rank is selected to be replaced by the newly requested page. The main advantage of this model is its similarity to both LRU and LFU, which means it inherits the benefits of both (e.g., in cases such as loops, where LRU fails, it switches to LFU-like behavior). Simulations show that this algorithm performs better than LRU and LFU, and that it performs similarly to LRU in the worst cases. The new approach can be applied as a replacement policy in virtual memory systems and web caching.
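The abstract states only that WRP ranks cached pages by three factors and evicts the lowest-ranked page on a miss; it does not name the factors or their weights. The following Python sketch is therefore a schematic illustration of a weighted-rank eviction policy of the kind described, using recency, access frequency, and residence time as placeholder factors with arbitrary weights, not the authors' actual formulation.

```python
import itertools


class WeightingReplacementCache:
    """Illustrative cache with weighted-rank eviction (not the paper's exact WRP).

    Each cached page carries three placeholder statistics; a weighted score is
    computed per page, and on a miss the page with the lowest score is evicted.
    """

    def __init__(self, capacity, weights=(0.5, 0.3, 0.2)):
        self.capacity = capacity
        self.weights = weights          # hypothetical factor weights
        self.clock = itertools.count()  # logical time source
        self.pages = {}                 # page -> [last_access, frequency, load_time]

    def _rank(self, stats, now):
        last_access, frequency, load_time = stats
        recency = 1.0 / (now - last_access + 1)  # higher for recently touched pages
        age = now - load_time + 1                # time spent in the cache
        w_r, w_f, w_a = self.weights
        return w_r * recency + w_f * frequency + w_a * (1.0 / age)

    def access(self, page):
        now = next(self.clock)
        if page in self.pages:                   # hit: update recency and frequency
            self.pages[page][0] = now
            self.pages[page][1] += 1
            return True
        if len(self.pages) >= self.capacity:     # miss on a full cache: evict lowest rank
            victim = min(self.pages, key=lambda p: self._rank(self.pages[p], now))
            del self.pages[victim]
        self.pages[page] = [now, 1, now]
        return False


# Tiny usage example: a loop-like reference pattern on a 3-page cache,
# the kind of workload the abstract mentions as problematic for pure LRU.
cache = WeightingReplacementCache(capacity=3)
hits = sum(cache.access(p) for p in [1, 2, 3, 4, 1, 2, 3, 4])
print("hits:", hits)
```

Because the frequency term keeps long-lived, often-reused pages ranked high while the recency term protects freshly loaded ones, this style of scoring blends LRU-like and LFU-like behavior, which is the intuition the abstract attributes to WRP.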
Table of Contents
1. Introduction
2. The Origins of the Idea
3. Concepts of the Ranking Policy by Weighting
4. The Results of Simulation
4.1. Input traces
4.2. Simulation Results
5. Summary & Conclusions
References
