In this paper, we use tensor product notation as the framework of a programming methodology for designing block recursive algorithms on various computer networks. In our previous work, we proposed a programming methodology for designing block recursive algorithms on shared-memory and distributed-memory multiprocessors without considering the interconnection of processors. Here we extend that work to block recursive algorithms on direct networks and multistage interconnection networks, using parallel prefix computation as an example to illustrate the methodology. First, we represent the prefix computation problem as a computational matrix, which may not be suitable for deriving algorithms on specific computer networks. The methodology therefore adds two steps to derive tensor product formulas of parallel prefix algorithms on computer networks: (1) decompose the computational matrix into two submatrices, and (2) construct an augmented matrix. The augmented matrix can be factorized so that each term is a tensor product formula and fits a specified network topology. With the augmented matrix, the input data is also extended: in addition to the input data, an auxiliary vector serves as temporary storage, whose content depends on the decomposition of the original computational matrix. We apply the methodology to derive parallel prefix algorithms on hypercube, omega, and baseline networks, and verify the correctness of the resulting tensor product formulas by induction.
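To make the starting point concrete, the sketch below (an illustration assuming prefix *sums*; the paper's computational matrices and their decomposition are more general) shows the prefix computation as a matrix-vector product with a lower-triangular all-ones matrix, and a log-step recursive-doubling prefix algorithm whose per-step communication pattern matches one hypercube dimension — the kind of network fit that a tensor product factorization expresses.

```python
import numpy as np

# The prefix-sum "computational matrix" is the n-by-n lower-triangular
# all-ones matrix P, so that (P @ x)[i] = x[0] + ... + x[i].
n = 8
P = np.tril(np.ones((n, n), dtype=int))
x = np.arange(1, n + 1)
assert np.array_equal(P @ x, np.cumsum(x))

# A log-step recursive-doubling prefix algorithm: at step d, each element
# combines with the element 2**d positions earlier.  Each step's exchange
# pattern corresponds to one dimension of a hypercube network.
def parallel_prefix(v):
    v = np.asarray(v)
    d = 1
    while d < len(v):
        shifted = np.concatenate([np.zeros(d, dtype=v.dtype), v[:-d]])
        v = v + shifted      # combine with neighbor 2**d positions away
        d *= 2
    return v

assert np.array_equal(parallel_prefix(x), np.cumsum(x))
```

The `log2(n)` steps of `parallel_prefix` are what a tensor product formula would express as a product of `log2(n)` factors, each built from small matrices via the Kronecker product so that one factor maps onto one stage of communication in the network.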