Abstract: In this paper, we consider the problem of generating efficient, portable communication in compilers for parallel languages. We introduce the Ironman abstraction, which separates data transfer from the communication paradigm that implements it. This is done by annotating the compiler-generated code with legal ranges for data transfer, expressed as calls to the Ironman library. On each target platform, these library calls are instantiated to perform the transfer using the machine's optimal communication paradigm. Based on our experience using PVM and MPI, we confirm arguments against generating message-passing calls in the compiler -- specifically, the observation that these interfaces do not perform well on machines that are not built around a message-passing communication paradigm. The overhead of using Ironman, as opposed to a machine-specific back end, is demonstrated to be negligible. We give performance results for a number of benchmarks running with PVM, MPI, and machine-specific implementations of the Ironman abstraction, yielding improvements of up to 42% in communication time and 1-14% in total computation time.