This paper investigates the parallelization efficiency of a batch pattern training algorithm for a multilayer perceptron on computational clusters. The multilayer perceptron model and the standard sequential batch pattern training algorithm are described theoretically, and an algorithmic description of the parallel version of the batch pattern training method is presented. The parallelization efficiency of the developed algorithm is investigated as the dimension of the parallelized problem is progressively increased. The experimental results show that (i) a cluster with an InfiniBand interconnect achieves better parallelization efficiency than a general-purpose parallel computer with a ccNUMA architecture, owing to lower communication overhead, and (ii) the parallelization efficiency of the algorithm is high enough for its practical use on the general-purpose clusters and parallel computers available within modern computational grids.
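The core idea behind batch pattern parallelization can be illustrated with a minimal sketch. The function and variable names below are illustrative assumptions, not the paper's notation, and a simple linear unit stands in for the full multilayer perceptron: each worker accumulates the gradient over its own slice of the batch, the partial sums are combined (on a cluster, via an MPI-style reduction), and a single synchronized weight update follows.

```python
def gradient(w, x, y):
    """Squared-error gradient for one training pattern of a linear unit
    (stand-in for the perceptron's backpropagated gradient)."""
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    return [2.0 * err * xi for xi in x]

def partial_grad(w, patterns):
    """Gradient summed over one worker's slice of the batch."""
    g = [0.0] * len(w)
    for x, y in patterns:
        for i, gi in enumerate(gradient(w, x, y)):
            g[i] += gi
    return g

def parallel_batch_step(w, batch, workers=4, lr=0.01):
    """One batch update: partition the patterns, reduce the partial
    gradients, then apply a single synchronized weight update."""
    slices = [batch[i::workers] for i in range(workers)]
    # On a real cluster each partial_grad call runs on its own node.
    partials = [partial_grad(w, s) for s in slices]
    # Allreduce-style sum of the partial gradients.
    total = [sum(col) for col in zip(*partials)]
    return [wi - lr * gi for wi, gi in zip(w, total)]
```

Because the batch gradient is a plain sum over patterns, the parallel step produces (up to floating-point rounding) the same update as the sequential batch algorithm; the only per-step communication is the gradient reduction, which is why interconnect latency dominates the parallelization efficiency.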