We investigate the feasibility of using instruction compression at some level in a multi-level memory hierarchy to increase memory system performance. For example, compressing at main memory means that main memory and the file system would contain compressed instructions, but upstream caches would see normal uncompressed instructions. Compression effectively increases the memory size and the block size, reducing the miss rate at the expense of increased access latency due to decompression delays. We present a simple compression scheme based on the most frequently used symbols and compare it with several other compression schemes. On a SPARC processor, our scheme obtained a compression ratio of 150% for most programs. We analytically evaluate the impact of compression on the average memory access time for various memory systems and compression approaches. Our results show that the feasibility of using compression is sensitive to the miss rates and miss penalties at the point of compression and, to a lesser extent, to the amount of compression possible. For today's high-performance workstations, compression already shows promise; as miss penalties increase in the future, compression will only become more feasible.
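The trade-off described above can be sketched with the standard average-memory-access-time formula. This is an illustrative model only; all parameter values (hit time, miss rates, decompression delay) are assumptions for demonstration, not figures from the paper's evaluation.

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time in cycles: hit_time + miss_rate * miss_penalty."""
    return hit_time + miss_rate * miss_penalty

# Baseline: uncompressed memory behind a cache (assumed values).
base = amat(hit_time=1, miss_rate=0.05, miss_penalty=100)

# With compression at the next level: the effectively larger memory and
# block size lower the miss rate, but every miss also pays an assumed
# decompression delay on top of the normal penalty.
compressed = amat(hit_time=1, miss_rate=0.03, miss_penalty=100 + 20)

print(base, compressed)
```

Under these assumed numbers, compression helps (4.6 vs. 6.0 cycles); as the paper argues, the benefit grows with the miss penalty and shrinks if the decompression delay dominates the miss-rate savings.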
Keywords: memory system performance, multi-level memory system, cache, data compression.