To benefit from grids, scientists require grid workflow engines that automatically manage the execution of inter-related jobs on the grid infrastructure. So far, the workflow community has focused on scheduling algorithms and on interface tools. As a result, while several grid workflow engines have been deployed, little is known about their performance-related characteristics, and there are no commonly used testing practices. This situation limits the adoption of grid workflow engines and hampers their tuning and further development. In this work we propose a testing methodology for grid workflow engines that focuses on five characteristics: overhead, raw performance, stability, scalability, and reliability. Using this methodology, we evaluate in a real test environment several middleware stacks that include grid workflow engines, among them two based on DAGMan/Condor and on Karajan/Globus.
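One of the five characteristics named above, overhead, can be illustrated with a minimal sketch. Under the assumption (not stated in the abstract) that overhead is measured as the difference between a workflow's observed makespan and its ideal critical-path runtime, the computation might look like this; the function names, the DAG, and the task runtimes are all hypothetical examples:

```python
# Illustrative sketch only: one plausible way to quantify a workflow
# engine's overhead, as observed makespan minus the workflow's
# critical-path runtime. All names and values below are hypothetical.

def critical_path(dag, runtimes):
    """Longest task-runtime path through a DAG given as {task: [children]}."""
    memo = {}

    def longest_from(task):
        if task not in memo:
            memo[task] = runtimes[task] + max(
                (longest_from(c) for c in dag.get(task, [])), default=0.0
            )
        return memo[task]

    # Roots are tasks that appear as keys but never as someone's child.
    roots = set(dag) - {c for children in dag.values() for c in children}
    return max(longest_from(r) for r in roots)


def engine_overhead(observed_makespan, dag, runtimes):
    """Overhead = observed makespan minus the ideal critical-path time."""
    return observed_makespan - critical_path(dag, runtimes)


# Hypothetical diamond-shaped workflow: A -> (B, C) -> D.
dag = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
runtimes = {"A": 10.0, "B": 20.0, "C": 30.0, "D": 5.0}

# Critical path is A -> C -> D = 45.0, so a 60.0 s makespan
# implies 15.0 s of engine overhead.
print(engine_overhead(observed_makespan=60.0, dag=dag, runtimes=runtimes))
```

In practice a testing methodology would obtain the observed makespan from engine logs and the task runtimes from isolated baseline runs; this sketch only shows the arithmetic relating the two.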