This paper presents a game-theoretic and learning approach to security risk management, based on a model that captures the diffusion of risk in an organization with multiple technical and business processes. Of particular interest is how interdependencies between processes affect the evolution of the organization's risk profile over time, which is first developed as a probabilistic risk framework and then studied within a discrete Markov model. Using zero-sum dynamic Markov games, we analyze the interaction between a malicious adversary, whose actions increase the risk level of the organization, and a defender agent, e.g., the security and risk management division of the organization, which aims to mitigate risks. We derive min-max (saddle-point) solutions of this game to obtain optimal risk management strategies that achieve a certain level of performance for the organization. The methodology also applies to worst-case scenario analysis, where the adversary can be interpreted as a "nature" player in the game. In practice, the parameters of the Markov game may not be known, owing to the cost of collecting and processing information about the adversary as well as about the organization's own many components. We apply ideas from Q-learning to analyze the behavior of the agents when little information is available about the environment in which the attacker and defender interact. The framework developed and the results obtained are illustrated with a small example scenario and a numerical analysis.
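To make the saddle-point idea concrete, the following is a minimal sketch of value iteration for a zero-sum Markov game (Shapley iteration), under assumptions not taken from the paper: the two-state risk model (low/high risk), the action names, the payoff matrices, the transition probabilities, and the discount factor are all hypothetical illustrations, not the paper's model or parameters. The attacker is the maximizer of the discounted risk cost, the defender the minimizer, and each state's 2x2 stage game is solved in closed form.

```python
# Hypothetical two-state risk model: state 0 = low risk, state 1 = high risk.
# Attacker (row player) maximizes expected discounted risk cost to the
# organization; defender (column player) minimizes it (zero-sum game).

# Stage payoffs to the attacker: rows = {attack, wait}, cols = {patch, monitor}.
# All numbers below are illustrative assumptions, not from the paper.
R = [
    [[2.0, 4.0], [0.0, 1.0]],   # payoffs in the low-risk state
    [[6.0, 9.0], [1.0, 3.0]],   # payoffs in the high-risk state
]
# Probability of transitioning to (or remaining in) the high-risk state.
P_HIGH = [
    [[0.3, 0.7], [0.1, 0.2]],
    [[0.8, 0.95], [0.3, 0.5]],
]
GAMMA = 0.9  # discount factor

def matrix_game_value(m):
    """Value of a 2x2 zero-sum matrix game (row player maximizes)."""
    maximin = max(min(row) for row in m)
    minimax = min(max(m[0][j], m[1][j]) for j in range(2))
    if abs(maximin - minimax) < 1e-12:        # saddle point in pure strategies
        return maximin
    (a, b), (c, d) = m
    return (a * d - b * c) / (a + d - b - c)  # mixed-strategy value

def shapley_iteration(tol=1e-9):
    """Iterate V(s) = val[ R(s,i,j) + gamma * E[V(s')] ] to a fixed point."""
    v = [0.0, 0.0]
    while True:
        q = [[[R[s][i][j] + GAMMA * (P_HIGH[s][i][j] * v[1]
                                     + (1 - P_HIGH[s][i][j]) * v[0])
               for j in range(2)] for i in range(2)] for s in range(2)]
        v_new = [matrix_game_value(q[s]) for s in range(2)]
        if max(abs(v_new[s] - v[s]) for s in range(2)) < tol:
            return v_new
        v = v_new

if __name__ == "__main__":
    v = shapley_iteration()
    print(f"saddle-point values: low-risk {v[0]:.3f}, high-risk {v[1]:.3f}")
```

When the payoff and transition parameters are unknown, the same per-state matrix-game solve appears inside a minimax variant of Q-learning, with sampled transitions replacing the explicit expectation over next states.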
Date of Conference: 5-9 June 2011