General-purpose computing on graphics processing units (GPGPU) is a major paradigm shift in parallel computing that promises dramatic performance gains, but it also brings an unprecedented level of complexity to algorithm design and software development. In this paper we describe the challenges and design choices involved in parallelizing the Bayesian optimization algorithm (BOA) to solve complex combinatorial optimization problems on commodity NVIDIA graphics hardware using the Compute Unified Device Architecture (CUDA). BOA is a well-known multivariate estimation of distribution algorithm (EDA) that learns a Bayesian network (BN) from promising solutions and then samples the network to generate new candidate solutions. Our implementation runs on modern commodity GPUs, and we therefore call it gBOA (BOA on GPU). In the results section, we present several numerical tests and performance measurements obtained by running gBOA on an NVIDIA Tesla C1060 GPU. In the best case, we obtain a speedup of up to 13x.
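To make the select/model/sample loop that the abstract describes concrete, here is a minimal, hedged sketch of the generic EDA control flow in Python. This is not the paper's gBOA implementation: the BN-learning step is replaced by independent per-bit marginal frequencies (a UMDA-style simplification) purely to keep the sketch short, and the OneMax fitness function and all parameter values are illustrative assumptions.

```python
import random

def onemax(bits):
    # Toy fitness: number of 1-bits (assumed here only for illustration).
    return sum(bits)

def eda(n_bits=32, pop_size=200, n_gens=60, seed=1):
    """Generic EDA loop: select -> learn model -> sample new population.

    In BOA proper, the model-learning step builds a Bayesian network
    capturing variable dependencies; here it is simplified to independent
    bit marginals so the control flow stays readable.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(n_gens):
        # Selection: keep the better half (truncation selection).
        pop.sort(key=onemax, reverse=True)
        parents = pop[: pop_size // 2]
        # "Model learning": per-bit marginal frequencies (BN stand-in).
        probs = [sum(ind[i] for ind in parents) / len(parents)
                 for i in range(n_bits)]
        # Sampling: draw a fresh population from the learned model.
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
    return max(onemax(ind) for ind in pop)

best = eda()
```

In the GPU setting the paper targets, the fitness evaluation and the sampling step are the naturally data-parallel stages, since each individual can be evaluated and sampled independently.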