Programming Distributed Memory Systems Using OpenMP


OpenMP has emerged as an important model and language extension for shared-memory parallel programming. On shared-memory platforms, OpenMP offers an intuitive, incremental approach to parallel programming. In this paper, we present techniques that extend the ease of shared-memory parallel programming in OpenMP to distributed-memory platforms as well. First, we describe a combined compile-time/runtime system that uses an underlying software distributed shared memory system and exploits repetitive data access behavior in both regular and irregular program sections. We present a compiler algorithm to detect such repetitive data references and an API to an underlying software distributed shared memory system to orchestrate the learning and proactive reuse of communication patterns. Second, we introduce a direct translation of standard OpenMP into MPI message-passing programs for execution on distributed-memory systems. We present key concepts and describe techniques to analyze and efficiently handle both regular and irregular accesses to shared data. Finally, we evaluate the performance achieved by our approaches on representative OpenMP applications.


application software, communication system software, concurrent computing, distributed computing, message passing, parallel programming, runtime, software algorithms, software performance, software systems

Date of this Version
2007
IEEE International Parallel and Distributed Processing Symposium (IPDPS 2007), 26-30 March 2007, Long Beach, CA, pp. 1-8.