Automatic construction of optimizing parallelizing compilers from specification

William Eden Cohen, Purdue University

Abstract

Most programmers write in high-level languages because they want to develop their algorithms without concerning themselves with the low-level details of the computer on which the program will execute. However, the execution speed of the program remains a concern. Translating a program written in a high-level language into efficient code for a particular parallel machine requires a powerful compiler that performs transformations appropriate to that machine. Developing such a compiler requires a great deal of effort, and much of this effort must be repeated for each new target machine. A number of tools have been developed to assist in generating lexical analyzers, parsers, optimizers, parallelizers, and code generators. Each of these tools generates a part of the compiler from a specification, but the compiler writer must still combine and augment these components to synthesize a complete compiler. In contrast, this dissertation suggests that complete compilers can be mechanically derived by processing a set of specifications defining the source language; the available analysis, optimization, and parallelization phases; and the relevant attributes of the target computer. Further, an integrated framework for this process is presented, including new methods for the description and management of parallelism, the selection and ordering of analysis and transformation phases, and the computation and use of execution-time estimates. Most aspects of this approach have been prototyped in a proof-of-concept tool, the Parallel Instruction Generator Generator (PIGG), which is also described in this dissertation.
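As a purely illustrative sketch of the idea of deriving a compiler's behavior from specifications, the Python fragment below shows one way phase selection and ordering could be driven by declarative descriptions of the phases and of the target machine's attributes. The phase names, machine attributes, and data structures here are invented for this sketch and do not reflect PIGG's actual specification format.

    # Hypothetical illustration of specification-driven phase selection.
    # Names and attributes are invented; this is not PIGG's specification language.
    from dataclasses import dataclass, field

    @dataclass
    class Phase:
        name: str
        requires: set = field(default_factory=set)    # target attributes the phase needs
        depends_on: set = field(default_factory=set)  # program properties needed first
        provides: set = field(default_factory=set)    # program properties produced

    PHASES = [
        Phase("dependence_analysis", provides={"dep_info"}),
        Phase("loop_parallelization", requires={"parallel_loops"},
              depends_on={"dep_info"}, provides={"parallel_code"}),
        Phase("vectorization", requires={"vector_unit"},
              depends_on={"dep_info"}, provides={"vector_code"}),
    ]

    def order_phases(machine_attrs):
        """Keep phases the target supports, ordered so prerequisites run first."""
        usable = [p for p in PHASES if p.requires <= machine_attrs]
        ordered, available = [], set()
        while usable:
            ready = [p for p in usable if p.depends_on <= available]
            if not ready:
                break  # remaining phases have unmet prerequisites
            for p in ready:
                ordered.append(p)
                available |= p.provides
                usable.remove(p)
        return [p.name for p in ordered]

    if __name__ == "__main__":
        # A target with parallelizable loops but no vector unit (hypothetical).
        print(order_phases({"parallel_loops"}))
        # -> ['dependence_analysis', 'loop_parallelization']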

Degree

Ph.D.

Advisors

Dietz, Purdue University.

Subject Area

Electrical engineering; Computer science
