On multi-core architectures, large-scale scientific applications must be redesigned to use the multiple cores efficiently and deliver higher performance. Optimizing compilers rely on program analysis techniques to detect data dependences between program statements, perform optimizations, and identify code fragments that can be executed concurrently. However, most data dependence tests can analyze only linear expressions, even though non-linear expressions occur frequently in practice. As a result, considerable amounts of potential parallelism remain unexploited. To handle such complex instances of the dependence problem and increase program parallelization, we developed new program analysis techniques. Our methods are based on a set of polynomial-time techniques that can prove or disprove dependences in source code with non-linear and symbolic expressions, complex loop bounds, arrays with coupled subscripts, and if-statement constraints. We performed an experimental evaluation of several data dependence tests and compared them in terms of data dependence accuracy, compilation efficiency, effectiveness in parallelization, and program execution performance. We ran experiments using the Perfect Club Benchmarks, the SPEC benchmarks, and the LAPACK scientific library. We measured the accuracy and efficiency of each data dependence test, as well as the impact of each test on total compilation time. Finally, we measured the number of loops parallelized by each test and compared the execution performance of each benchmark on a multi-core architecture. The experimental results indicate that our dependence analysis tool is accurate, efficient, and more effective in program parallelization than previous data dependence analysis techniques. The improved parallelization resulted in higher speedups and better program execution performance in several benchmarks.
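To illustrate the kind of analysis the abstract refers to, the sketch below implements the classic GCD test, one of the simplest linear dependence tests. It is not the authors' polynomial-time method; it is a standard textbook test shown here only to make the problem concrete: it handles affine subscripts like `A[2*i]` versus `A[2*i + 1]`, but says nothing about non-linear or symbolic subscripts, which is exactly the gap the abstract's techniques target. The function name and coefficient encoding are illustrative choices.

```python
from math import gcd

def gcd_test(a, b, c, d):
    """Classic GCD dependence test for a single loop.

    Models a write to A[a*i + b] and a read of A[c*j + d] in the
    same loop.  A dependence requires integer iterations i, j with
    a*i + b == c*j + d, i.e. a*i - c*j == d - b.  A linear
    Diophantine equation a*i - c*j = k has integer solutions iff
    gcd(a, c) divides k.  Assumes a and c are nonzero integers.

    Returns False only when a dependence is provably impossible;
    True means a dependence may exist (the test is conservative).
    """
    return (d - b) % gcd(a, c) == 0

# A[2*i] written, A[2*i + 1] read: gcd(2, 2) = 2 does not divide 1,
# so the accesses never overlap and the loop can run in parallel.
print(gcd_test(2, 0, 2, 1))  # False: independence proved

# A[2*i] written, A[4*j + 2] read: gcd(2, 4) = 2 divides 2,
# so a dependence may exist and the test cannot parallelize the loop.
print(gcd_test(2, 0, 4, 2))  # True: possible dependence
```

The test is fast but conservative: when subscripts involve products of loop variables or unknown symbolic terms, the coefficients `a` and `c` are not compile-time constants and the test cannot be applied at all, which is why compilers limited to linear tests leave such loops sequential.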