Data networks often need to be monitored to ensure that they stay within acceptable parameters. Monitoring consists of measuring properties of the network and inferring an aggregate predicate from these measurements. Explicit monitoring at all times is often too complex or too expensive; in such cases, information about the evolution of the network status, in the form of integrity constraints, can allow past measurements to be used to infer future behavior, thus reducing the monitoring cost. We provide a formal description of the problem of monitoring rapidly changing data, which we call the monitoring problem. We then classify this problem according to the integrity constraints that govern the evolution of the environment, and propose different algorithms for each of these classes. For the most restricted class we give a greedy algorithm that is optimal; for the more general classes we use competitive analysis and show that measuring algorithms with optimal worst-case and average-case cost exist. We then present heuristics for low-cost, low-complexity measuring algorithms. We believe that the results of this paper can serve as a framework for further studies.
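To illustrate the general idea (not the paper's specific algorithm), the following sketch assumes a single monitored value governed by a hypothetical bounded-drift integrity constraint, |v(t+1) - v(t)| <= max_drift, and a predicate "v stays below a threshold". A greedy strategy then pays for a fresh measurement only when the constraint can no longer certify the predicate from the last measurement; all names and parameters here are illustrative assumptions.

```python
# Hedged sketch of constraint-aided monitoring: a new (costly) measurement
# is taken only when the bounded-drift constraint no longer guarantees
# that the monitored value is still below the threshold.

def greedy_monitor(measure, steps, threshold, max_drift):
    """measure(t) -> current value (the costly operation).
    Returns (number_of_measurements, list_of_alarm_times)."""
    alarms = []
    last_value = measure(0)      # initial measurement at time 0
    last_time = 0
    cost = 1
    for t in range(1, steps):
        # Largest value reachable from the last measurement under the
        # drift constraint |v(t+1) - v(t)| <= max_drift.
        upper_bound = last_value + max_drift * (t - last_time)
        if upper_bound >= threshold:
            # The constraint can no longer certify v < threshold,
            # so we must pay for a fresh measurement.
            last_value = measure(t)
            last_time = t
            cost += 1
            if last_value >= threshold:
                alarms.append(t)
    return cost, alarms

# Example: a value that never actually moves. The constraint alone lets
# the monitor skip most time steps.
cost, alarms = greedy_monitor(lambda t: 0.0, 20, 10.0, 1.0)
```

With these parameters the monitor measures at t = 0 and again at t = 10 (when the drift bound first reaches the threshold), so only 2 of 20 time steps incur measurement cost and no alarm is raised. This mirrors the abstract's point that integrity constraints let past measurements substitute for continuous monitoring.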