This paper describes a novel approach to the problem of automated visual surveillance. We extend an existing algorithm that uses a cognitive model of navigation to explain behaviour in a surveillance setting, and apply this model to the problem of filtering surveillance data: typically, a surveillance or CCTV installation has a limited number of operatives monitoring a large number of cameras. The proposed system filters trajectories by their inexplicability scores, on the grounds that trajectories we can explain in terms of simple goals are exactly those that are uninteresting; only those we cannot easily explain merit an operator's attention. Initial results are promising, with over 50% of uninteresting trajectories excluded.
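The filtering idea described above can be sketched minimally: score each trajectory by how poorly a simple goal model explains it, and surface only the high-scoring (inexplicable) ones to the operator. The function names, the toy endpoint-distance score, and the threshold below are illustrative assumptions, not the paper's actual cognitive model.

```python
# Hypothetical sketch of inexplicability-based filtering.
# The scoring function and data layout are assumptions for illustration;
# the paper's cognitive model of navigation is far richer than this.

def inexplicability(trajectory, goals):
    """Toy score: Manhattan distance from the trajectory's end point to the
    nearest simple goal. A trajectory that terminates at a goal (e.g. an
    exit) is 'explainable' and scores low; loitering scores high."""
    end = trajectory[-1]
    return min(abs(end[0] - g[0]) + abs(end[1] - g[1]) for g in goals)

def filter_trajectories(trajectories, goals, threshold):
    """Keep only trajectories the goal model cannot easily explain."""
    return [t for t in trajectories if inexplicability(t, goals) > threshold]

# Hypothetical scene: two goal locations (e.g. doorways) and two tracks.
goals = [(0, 0), (10, 10)]
tracks = [
    [(5, 5), (8, 9), (10, 10)],  # heads straight to a goal: explainable
    [(5, 5), (5, 6), (5, 5)],    # loiters mid-scene: inexplicable
]
flagged = filter_trajectories(tracks, goals, threshold=2)
```

Here the first track is filtered out as uninteresting while the loitering track is flagged for the operator, mirroring the paper's claim that explainable trajectories are exactly the ones safe to discard.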
Date of Conference: 15-16 Sept. 2005