Web intrusion prevention systems are popular for defending web applications against common attacks, such as SQL injection and cross-site scripting, but no standardized methodology exists for evaluating and benchmarking such systems. We outline several requirements for a testing and evaluation framework for these systems, and we introduce the concept of a benchmarking testbed that automatically performs the evaluation in a standardized and reproducible way. By allowing benchmarks to draw from a corpus of installable modules that can be based on actual security vulnerabilities, members of the security community can continuously maintain and improve the benchmark, keeping it up to date as threats and defenses evolve. We developed a prototype of this testbed and determined that, to ease module development, the testbed should automate several common web testing tasks on behalf of its modules. Although our experience with the prototype suggests that developing such a testbed is viable, we identified several open questions related to benchmark coverage and performance measurement that must be resolved for the resulting benchmark to be useful to end users.
Note: As originally published, this document contained an error: due to a production error, the final versions of the papers were not submitted. The corrected final article PDF is now provided.