In this paper we present a novel parallel technique to compute t-wise covering arrays. The massive computational work implied by this task when large configuration spaces are modeled is distributed over a scalable set of parallel computing resources by means of an MPI-compliant algorithm. Because the covering array problem is NP-complete, existing research on combinatorial generation algorithms commonly treats this computation as strictly sequential. By exploiting inherent combinatorial properties, however, we show that the overall workload can be scattered into several independent processing sub-tasks whose outcomes are then collected into a global solution whose size remains comparable to that of a sequentially computed one. Reported results show that this approach achieves significant speed-up over the sequential computation of the same task.
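The scatter/collect scheme described above can be illustrated with a minimal sketch. This is not the paper's MPI algorithm: a naive greedy generator stands in for the actual covering-array construction, and plain list slicing stands in for MPI scatter/gather. The idea shown is only that the set of t-wise interactions can be partitioned into independent chunks, each chunk covered separately, and the partial arrays concatenated into a globally covering array.

```python
from itertools import combinations, product

def interactions(domains, t):
    """Yield every t-wise interaction as (parameter indices, value tuple)."""
    for idx in combinations(range(len(domains)), t):
        for vals in product(*(domains[i] for i in idx)):
            yield idx, vals

def covers(row, idx, vals):
    """True if the test row assigns the given values at the given positions."""
    return all(row[i] == v for i, v in zip(idx, vals))

def greedy_cover(domains, targets):
    """Naive greedy stand-in for the real generator: add one row per pick
    until every target interaction in this chunk is covered."""
    rows, uncovered = [], set(targets)
    while uncovered:
        idx, vals = uncovered.pop()
        # Fix the picked interaction; fill the remaining parameters
        # with their first domain value.
        row = tuple(vals[idx.index(i)] if i in idx else domains[i][0]
                    for i in range(len(domains)))
        rows.append(row)
        uncovered = {t for t in uncovered if not covers(row, *t)}
    return rows

def parallel_cover(domains, t, workers):
    """Scatter the interactions over `workers` chunks, cover each chunk
    independently, then collect the partial arrays into one solution."""
    targets = list(interactions(domains, t))
    chunks = [targets[w::workers] for w in range(workers)]  # scatter
    merged = []
    for chunk in chunks:            # each chunk is an independent sub-task
        merged += greedy_cover(domains, chunk)
    return merged                   # collect
```

In the MPI setting each chunk would be handled by a separate rank and the final concatenation performed by a gather at the root; here the loop over chunks merely emulates that, but the correctness of the merged array does not depend on how the chunks were assigned.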