Led by General-Purpose computing on Graphics Processing Units (GPGPU), the parallel computing field is witnessing a rapid shift in its dominant parallel systems. A major hurdle in this shift is the Single Instruction Multiple Thread (SIMT) architecture of GPUs, which is often ill-suited to the design of legacy parallel algorithms. Genetic Algorithms (GAs) are no exception: GAs are commonly parallelized because of their high computational demands. Given the performance of GPGPUs, the need to exploit them effectively to maximize the computing efficiency of parallel GAs is growing rapidly. The goal of this paper is to shed light on the challenges that designers and programmers of parallel GAs are likely to face in this effort, and to offer practical advice on maximizing GPGPU exploitation. To that end, the paper studies the adaptation of legacy parallel GAs to GPGPU systems. It exposes the design challenges of nVidia's GPU architecture to the parallel GAs community by: discussing the features of GPUs, reviewing GPU design issues relevant to parallel GAs, designing and introducing new techniques for efficient implementations of parallel GAs, and observing the effect of the pivotal points that both capitalize on the strengths of GPUs and limit their deficiencies and overheads. The paper demonstrates the performance of designed-for-GPGPU parallel GAs, representing the entire spectrum of legacy parallel GA models, on an nVidia Tesla C1060 workstation, showing a significant performance improvement after optimizing and tuning the algorithms for the GPU.