In traditional cache-based computers, all memory references are made through the cache. However, a significant fraction of the items referenced in a program are referenced so infrequently that other cache traffic is certain to “bump” them from the cache before they are referenced again. In such cases, not only is there no benefit in placing the item in the cache, but there is the additional overhead of “bumping” some other item out of the cache to make room for this useless cache entry. Where a cache line is larger than a processor word, there is a further penalty in loading the entire line from memory into the cache when the reference could have been satisfied by a single word fetch. Simulations have shown that these effects typically degrade cache-based system performance (average reference time) by 10% to 30%. This performance loss is due to cache pollution; by simply forcing “polluting” references to access main memory directly, bypassing the cache, much of this performance can be regained. The technique proposed in this paper uses new hardware, called a Bypass-Cache, which, under program control, determines whether each reference should go through the cache or bypass it and reference main memory directly. Several inexpensive compile-time heuristics for deciding how to make each reference are given.
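The pollution effect described above can be illustrated with a minimal sketch (the trace, cache size, and heuristic are hypothetical, not from the paper): a tiny fully associative LRU simulator is run twice over the same reference stream, once caching everything and once bypassing references that a profile shows are used only once and so can never produce a hit.

```python
from collections import OrderedDict, Counter

def simulate(trace, capacity, bypass=None):
    """Fully associative LRU cache; returns (hits, misses).
    Addresses in `bypass` go straight to memory and never fill the cache."""
    bypass = bypass or set()
    cache = OrderedDict()
    hits = misses = 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)      # mark as most recently used
        else:
            misses += 1
            if addr not in bypass:
                if len(cache) >= capacity:
                    cache.popitem(last=False)  # evict the LRU entry
                cache[addr] = True
    return hits, misses

def single_use_bypass(trace):
    # Hypothetical compile-time/profile heuristic: bypass any address
    # referenced only once -- caching it can never pay off.
    counts = Counter(trace)
    return {a for a, n in counts.items() if n == 1}

# A loop reusing A and B while streaming through one-shot addresses X1..X3.
trace = ["A", "B", "X1", "A", "B", "X2", "A", "B", "X3", "A", "B"]
base  = simulate(trace, capacity=2)
tuned = simulate(trace, capacity=2, bypass=single_use_bypass(trace))
print(base, tuned)  # with bypassing, the streaming Xs stop evicting A and B
```

In this toy trace the one-shot references evict the reused pair in the baseline run, producing no hits at all, while the bypassed run hits on every reuse of A and B; this is the pollution-removal effect the paper quantifies at 10% to 30%.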
Keywords: bypass cache, cache pollution, cache, compiler analysis, compiler optimization, execution time.