Parallel Compiler Support

Chih-Po Wen and Arvind Krishnamurthy
(Professor K. A. Yelick)
(ARPA) DABT63-92-C-0026, Lawrence Livermore National Laboratories, and Semiconductor Research Corporation

Parallelizing applications with irregular structures often requires great programming effort. In this project, we investigate advanced compilation techniques to ease the programming task without compromising performance. Our approach is demonstrated in two related compiler efforts: the first transforms sequential code into speculatively parallel code, and the second optimizes parallel code to mask network latency.

The first set of transformations takes a sequential program and produces a speculatively parallel one. Speculation increases the available parallelism by allowing possibly dependent tasks to run in parallel. Any real dependencies are caught and corrected by the runtime system, which is built to provide a shared memory abstraction. The compiler detects parallelism and performs extensive dataflow analysis to reduce the overhead of speculation and to perform other optimizations. One obstacle to exploiting parallelism is the use of abstract data types, because the function calls that represent the abstraction barrier limit the transformations. We are investigating an annotation system in which the programmer specifies certain side-effect properties of data type operations, enabling further optimizations; a sketch of such annotations appears below.

The second facet of this research addresses optimizing explicitly parallel programs by increasing the concurrency between network and processor usage. The compiler transforms programs written in a single program multiple data (SPMD) style using a shared address-space model. The normal read and write operations on this shared space are implemented using lightweight messages. The use of non-blocking read and write operations increases processor utilization by allowing communication to overlap with computation. However, because of uncertainties in the order of message delivery, these non-blocking operations may change the meaning of the program. Since it is burdensome for the programmer to specify explicitly which operations may be non-blocking, our compiler automatically transforms blocking operations into non-blocking ones; a sketch of this transformation appears below.

We are currently implementing these techniques and have begun evaluating the transformations by studying their utility in optimizing the programs in the SPLASH benchmark suite.
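To make the annotation idea concrete, the following C sketch shows the kind of side-effect information a programmer might attach to the operations of an abstract data type. The data type (Bag), the operation names, and the "effects:" comment syntax are hypothetical illustrations rather than the project's actual notation; the point is that declared effects allow the compiler to reorder or parallelize calls that the abstraction barrier would otherwise force it to treat conservatively.

    /* Hypothetical side-effect annotations on an abstract data type.
     * The "effects:" comments stand in for the proposed annotation system. */
    #include <stddef.h>

    typedef struct {
        double items[64];
        size_t count;
    } Bag;

    /* effects: modifies only *b; commutes with other bag_insert calls on the
     * same bag, so independent insertions may run in parallel */
    void bag_insert(Bag *b, double x) {
        if (b->count < 64)            /* bounds check for this self-contained sketch */
            b->items[b->count++] = x;
    }

    /* effects: reads *b only; no side effects, so calls may be reordered freely */
    double bag_sum(const Bag *b) {
        double s = 0.0;
        for (size_t i = 0; i < b->count; i++)
            s += b->items[i];
        return s;
    }

Given such annotations, the compiler can, for example, treat two bag_insert calls on the same bag as reorderable even though it cannot see through the function calls themselves.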
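The blocking-to-non-blocking transformation can be illustrated with the following C sketch. The primitives get_nb() and sync_all() are hypothetical stand-ins for the lightweight-message operations the compiler would generate; here they are mocked with local copies so that the sketch is self-contained and runnable on a single processor.

    /* Sketch of the blocking-to-non-blocking (split-phase) transformation
     * for reads of the shared address space. */
    #include <stdio.h>

    #define N 8

    static double shared_space[N] = {1, 2, 3, 4, 5, 6, 7, 8}; /* stands in for remote memory */

    /* Hypothetical non-blocking read: initiate the transfer and return immediately. */
    static void get_nb(double *dst, const double *remote_src) {
        *dst = *remote_src;   /* a real runtime would send a request message here */
    }

    /* Hypothetical synchronization: wait until all outstanding reads complete. */
    static void sync_all(void) {
        /* a real runtime would block until every reply message has arrived */
    }

    int main(void) {
        double local[N], sum = 0.0;

        /* Original (blocking) form: each read stalls the processor until its
         * reply arrives, so communication and computation never overlap.
         *
         *   for (int i = 0; i < N; i++)
         *       sum += read_blocking(&shared_space[i]);
         *
         * Transformed (split-phase) form: issue all reads, synchronize once,
         * then compute while no transfer is still needed. */
        for (int i = 0; i < N; i++)
            get_nb(&local[i], &shared_space[i]);
        sync_all();
        for (int i = 0; i < N; i++)
            sum += local[i];

        printf("sum = %g\n", sum);
        return 0;
    }

Issuing every read before synchronizing lets request and reply messages travel through the network while the processor continues issuing further reads, which is the communication/computation overlap described above; the compiler performs this rewriting only where it can show the program's meaning is preserved.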