Access control policies are increasingly written in specification languages such as XACML. To increase confidence in the correctness of specified policies, policy developers can conduct policy testing with typical test inputs (in the form of requests) and check test outputs (in the form of responses) against expected ones. Unfortunately, manual test generation is tedious, and manually generated tests are often insufficient to exercise the various policy behaviors. In this paper, we present a novel framework and its supporting tool, called Cirg, that generates tests based on change-impact analysis. Our experimental results show that Cirg can effectively generate tests that achieve high structural coverage of policies, and that it outperforms random test generation in both structural coverage and fault-detection capability.
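To make the policy-testing workflow described above concrete, the sketch below (a simplified illustration, not Cirg itself) models a request as a set of attribute assignments, evaluates it against a toy first-applicable policy in the style of XACML, and checks each response against the expected one. All names and the toy policy are hypothetical.

```python
from typing import Dict, List, Tuple

# A request assigns values to attributes (e.g. subject role, action),
# loosely mirroring the structure of an XACML request.
Request = Dict[str, str]

def evaluate(request: Request) -> str:
    """Toy policy using first-applicable rule combining (hypothetical)."""
    rules = [
        ({"role": "admin"}, "Permit"),
        ({"role": "user", "action": "read"}, "Permit"),
        ({}, "Deny"),  # default catch-all rule
    ]
    for condition, decision in rules:
        if all(request.get(k) == v for k, v in condition.items()):
            return decision
    return "NotApplicable"

def run_tests(tests: List[Tuple[Request, str]]):
    """Compare each test request's actual response with the expected one."""
    failures = []
    for request, expected in tests:
        actual = evaluate(request)
        if actual != expected:
            failures.append((request, expected, actual))
    return failures

# Typical test inputs with expected responses, as described in the abstract.
tests = [
    ({"role": "admin", "action": "write"}, "Permit"),
    ({"role": "user", "action": "read"}, "Permit"),
    ({"role": "user", "action": "write"}, "Deny"),
]
```

A test-generation tool in this setting would aim to produce requests that exercise every rule of the policy (structural coverage), rather than relying on a developer to enumerate them by hand.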