This work presents scalable algorithms for the basic construction of parallel Radial Basis Probabilistic Neural Networks (RBPNNs). The final goal is a neural network that can be implemented efficiently on distributed-memory machines. To this end, a fast and simple parallel training scheme for RBPNNs is studied. The scheme relies almost entirely on Gaussian summations, which can in turn be mapped efficiently onto both parallel and pipelined distributed machines. The proposed training scheme is evaluated for accuracy and performance; it offers simplicity, ease of parallelization, and linear speed-ups under the common parallel implementations studied here, namely neuron-level parallelism and pipelining.
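The "Gaussian summations" at the core of such a scheme can be sketched as follows. This is a minimal, hypothetical Python illustration of an RBPNN forward pass, not the paper's implementation: the function name, the per-class grouping of centers, and the shared kernel width `sigma` are all assumptions. The key point is that each class score is an independent sum of Gaussian kernel evaluations, so the per-class (or per-neuron-group) sums can be assigned to separate workers or pipeline stages.

```python
import math

def rbpnn_forward(x, centers_by_class, sigma):
    """Sketch of an RBPNN forward pass (hypothetical layout).

    The hidden layer evaluates a Gaussian kernel at each stored
    center; the second layer simply sums the kernels belonging to
    each class. These independent per-class summations dominate
    the cost and parallelize trivially (one class or neuron group
    per worker, or one summation stage per pipeline slot).
    """
    scores = []
    for centers in centers_by_class:
        s = 0.0
        for c in centers:
            d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
            s += math.exp(-d2 / (2.0 * sigma ** 2))
        scores.append(s)
    return scores  # the argmax over scores gives the predicted class

# Toy usage with two classes of hypothetical 2-D centers.
centers = [
    [(0.0, 0.0), (0.2, 0.1)],   # class 0
    [(3.0, 3.0), (2.9, 3.2)],   # class 1
]
print(rbpnn_forward((0.1, 0.0), centers, sigma=1.0))
```

Because each class score depends only on its own centers, distributing the outer loop across processes requires no communication until the final comparison of scores, which is what makes linear speed-up plausible for this kernel.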