Abstract
Exploiting the high degrees of parallelism available in certain computer architectures offers the promise of ever-increasing power for scientific and engineering users, for example in the fields of image and signal processing [11, 12], computational fluid dynamics [1, 13], finite element methods [8], optimization [7, 15] and neural networks [14, 16]. It also introduces the challenge of how to make the best use of the available power. High-level languages for these parallel machines provide a good interface between a problem and the processor, thus aiding straightforward exploitation of the parallelism. However, to obtain the very best performance, something more than minimal alteration of existing algorithms is needed. Since operations will normally be specified on sets of data items rather than on scalars, the recommended approach may be summarized as "think parallel".