Static analysis of source code is considered a powerful tool for detecting potential security vulnerabilities. However, only limited information about the current quality of static analysis tools exists: no public assessment of the capabilities of competing approaches and products is available, and neither a common benchmark nor a standard evaluation procedure has yet been defined. In this paper, we propose a general methodology for systematically evaluating static analysis tools. We document the design of an automated execution and evaluation framework that supports iterative test case design and reliable result analysis. Furthermore, we propose a methodology for creating test cases that can assess the specific capabilities of static analysis tools at a fine level of detail. We conclude the paper with a brief discussion of the experiences we collected in a practical evaluation study of six commercial static analysis products.