Application of Automatic Parallelization to Modern Challenges of Scientific Computing Industries

Abstract

Sparse matrix-vector (SpMV) multiplication is a widely used kernel in scientific applications. In these applications, the SpMV kernel is usually deeply nested within multiple loops and is thus executed a large number of times. We have observed that there can be significant performance variability due to irregular memory access patterns. Static performance optimizations are difficult because the access patterns may be known only at runtime. In this paper, we propose adaptive runtime tuning mechanisms to improve parallel performance on distributed memory systems. Our adaptive iteration-to-process mapping mechanism balances computational load at runtime with negligible overhead (1% on average), and our runtime communication selection algorithm searches for the best communication method for a given data distribution and mapping. Actual runs on 26 real matrices show that our runtime tuning system reduces execution time by up to 68.8% (30.9% on average) over a base block-distributed parallel algorithm on distributed systems with 32 nodes.
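To make the idea of iteration-to-process mapping concrete, the following C sketch partitions the rows of a CSR matrix across processes so that each process receives roughly the same number of nonzeros. This is only a simplified illustration under the assumption that the nonzero count is the load proxy; it is not the paper's actual adaptive mechanism, which tunes the mapping at runtime. The function and variable names (map_rows_to_procs, row_ptr, owner) are hypothetical.

#include <stdio.h>

/* Hypothetical helper: assign CSR rows to 'nprocs' processes so that each
 * process receives roughly the same number of nonzeros.  A simplified,
 * static illustration of load-balanced iteration-to-process mapping; the
 * paper's mechanism adapts the mapping at runtime. */
void map_rows_to_procs(const int *row_ptr, int nrows, int nprocs, int *owner)
{
    int total_nnz = row_ptr[nrows];
    double target = (double)total_nnz / nprocs;  /* ideal nonzeros per process */
    int p = 0;
    double boundary = target;

    for (int i = 0; i < nrows; i++) {
        /* advance to the next process once its nonzero budget is exceeded */
        while (p < nprocs - 1 && row_ptr[i + 1] > boundary) {
            p++;
            boundary += target;
        }
        owner[i] = p;
    }
}

int main(void)
{
    /* toy CSR row pointer for a 6-row matrix with skewed nonzero counts */
    int row_ptr[] = {0, 8, 9, 10, 18, 19, 20};
    int owner[6];

    map_rows_to_procs(row_ptr, 6, 2, owner);
    for (int i = 0; i < 6; i++)
        printf("row %d -> process %d\n", i, owner[i]);
    return 0;
}

With the toy row pointer above, each of the two processes ends up with exactly ten nonzeros, whereas an even split by row count would leave one process with nearly twice the work of the other.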

Keywords

performance measures, physics, process mapping, runtime tuning, sparse matrix

Date of this Version

2008

Comments

ICS '08: Proceedings of the 22nd Annual International Conference on Supercomputing
