Cache Memory Design Considerations to Support Languages with Dynamic Heap Allocation
Chih-Jui Peng and Gurindar S. Sohi
In this report, we consider the design of cache memories to support the execution of languages that make extensive use of a dynamic heap. To gain insight into cache memory design, we define several characteristics of dynamic heap references and measure them for several benchmark programs, using Lisp as our model heap-intensive language. We make several observations about heap referencing behavior and study its implications for cache memory design. From these observations, we conclude that conventional cache memories are likely to be inadequate for supporting dynamic heap references, and we verify this conclusion with an extensive trace-driven simulation analysis. We then present cache optimizations that exploit the peculiarities of heap references: i) an ALLOCATE operation that improves both the cache miss ratio and the data traffic ratio, ii) a biased-LRU replacement algorithm that discriminates against garbage lines and moves the miss ratio of a cache closer to that of an unrealizable optimal cache, and iii) a garbage bit attached to each cache line that eliminates unnecessary write-back operations. Using trace-driven simulation, we conclude that, with the proposed heap-specific optimizations, it is possible to design cache memories whose miss ratio and data traffic ratio are close to zero. Without these optimizations, the miss ratio and data traffic ratio of a cache organization can be extremely poor, regardless of cache size. Two of the proposed optimizations rely on a mechanism that detects garbage soon after it is created. Since cache memory performance without the proposed optimizations is very poor, we point out the need for garbage collection mechanisms that can detect garbage almost immediately after it is created, while the garbage heap cell is still resident in the cache.
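To make two of the proposed optimizations concrete, the following is a minimal sketch (not the report's actual simulator) of a single cache set that models a biased-LRU replacement policy, which prefers to evict lines marked as garbage before falling back to ordinary LRU, and a per-line garbage bit that suppresses the write-back of dirty garbage lines. The class and method names are illustrative assumptions, not taken from the report.

```python
class Line:
    """One cache line: a tag plus dirty and garbage state bits."""
    def __init__(self, tag, dirty=False):
        self.tag = tag
        self.dirty = dirty
        self.garbage = False  # set when the collector detects the heap cell is dead

class CacheSet:
    """A single set of a set-associative cache with biased-LRU replacement."""
    def __init__(self, ways):
        self.ways = ways
        self.lines = []       # index 0 = LRU position, last = MRU position
        self.writebacks = 0   # dirty evictions that actually reach memory

    def mark_garbage(self, tag):
        # Invoked by the (assumed) early garbage-detection mechanism while
        # the dead cell is still resident in the cache.
        for line in self.lines:
            if line.tag == tag:
                line.garbage = True

    def access(self, tag, write=False):
        """Reference a line; returns True on hit, False on miss."""
        for line in self.lines:
            if line.tag == tag:
                self.lines.remove(line)
                self.lines.append(line)   # move to MRU position
                line.dirty |= write
                return True
        # Miss: choose a victim if the set is full.
        if len(self.lines) == self.ways:
            garbage = [l for l in self.lines if l.garbage]
            # Biased LRU: discriminate against garbage lines first,
            # otherwise evict the ordinary LRU line.
            victim = garbage[0] if garbage else self.lines[0]
            self.lines.remove(victim)
            if victim.dirty and not victim.garbage:
                self.writebacks += 1      # garbage bit elides the write-back
        self.lines.append(Line(tag, dirty=write))
        return False
```

For example, with a 2-way set holding dirty lines A and B, marking A as garbage means a miss on C evicts A without a write-back, even though A would also have been the LRU victim; a later miss on D evicts the still-live dirty line B and does pay the write-back.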