In this paper we consider the problem of quickest-alarm intrusion detection for a computer network in a probabilistic setting where the number of opportunities to observe the status of a potential intruder is budgeted. Specifically, we model the activity of an intruder with a finite-state Markov chain, whose states correspond to logical or physical locations in a network, and suppose there is a state b that we would not like the intruder to enter. The intruder, on the other hand, would like to enter this sensitive part of the network and to spend as much time there as possible. The state of the intruder evolves in discrete time, and the security system has only a limited number of opportunities to make state observations over the finite horizon of the problem. This model captures the essence of intrusion detection in a variety of situations, such as hackers in a network or physical intruders in a spatial area, where power limitations constrain the number of observations one may make. We develop an optimal policy for dynamically scheduling observations to minimize the amount of time that the intruder spends in b without being discovered.
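To make the setting concrete, the following sketch simulates the model described above: an intruder moving on a small finite-state Markov chain with a sensitive state b, a finite horizon, and a budget of state observations. The transition matrix, the heuristic observation policies, and all numerical values are illustrative assumptions, not the paper's model parameters or its optimal policy; the sketch only shows how observation scheduling trades off against undetected time in b.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-state network: states 0-2 are ordinary, state b = 3 is the
# sensitive state the intruder wants to occupy (illustrative values only).
P = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.2, 0.5, 0.2],
    [0.1, 0.1, 0.2, 0.6],
])
b = 3
T = 20        # finite horizon (discrete time steps)
BUDGET = 5    # number of observation opportunities

def simulate(policy, n_runs=2000):
    """Average time the intruder spends in b undetected, under a policy.

    `policy(belief, remaining_budget, remaining_time)` returns True to
    spend an observation now. An observation reveals the true state; if
    the intruder is observed in b, it is detected and the run ends.
    """
    total = 0.0
    for _ in range(n_runs):
        x = 0                     # intruder starts in state 0 (assumption)
        belief = np.eye(4)[0]     # defender's belief over the 4 states
        k = BUDGET
        undetected = 0
        for t in range(T):
            x = rng.choice(4, p=P[x])       # intruder takes a Markov step
            belief = belief @ P             # predict step of the filter
            if k > 0 and policy(belief, k, T - t):
                k -= 1
                if x == b:
                    break                   # detected in the sensitive state
                belief = np.eye(4)[x]       # belief collapses to the truth
            elif x == b:
                undetected += 1             # in b, and nobody looked
        total += undetected
    return total / n_runs

# Two simple heuristic schedules (not the paper's optimal policy):
# observe when the believed probability of being in b exceeds a threshold,
# versus spreading observations roughly uniformly over the remaining horizon.
threshold = lambda bel, k, rem: bel[b] > 0.3
uniform = lambda bel, k, rem: rem <= k or rng.random() < k / rem

print("threshold policy, avg undetected time in b:", simulate(threshold))
print("uniform policy,   avg undetected time in b:", simulate(uniform))
```

A belief-threshold rule like the one above is a natural baseline here because the defender's only information between observations is the predicted distribution of the intruder's state; the paper's contribution is the policy that schedules the budgeted observations optimally.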