Computer systems that operate in close contact with humans rely heavily on machine vision as a source of perceptive information. However, images contain a massive amount of irrelevant information surrounding the few meaningful signals, and extracting those signals reliably is far beyond the reach of today's off-the-shelf processors. Reliability could be pursued by running additional observing modalities in parallel and fusing their outputs, but this approach conflicts with real-time constraints. An alternative is to inject a priori knowledge about the operative "context" and to add expectations about object appearances. Contextual information can provide a basis for selecting interesting signals more efficiently: if the "context" is known, a system can employ only those observing modalities best fitted to the current situation and "switch" to them opportunistically. In this paper we develop a framework for representing context evolution and supporting a "contextual switching" of active operators.
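The idea of activating only the operators fitted to the current context can be sketched as follows. This is a minimal illustration, not the paper's implementation; all class, function, and context names are assumptions made for the example.

```python
# A minimal sketch (not the paper's framework) of "contextual switching":
# each context activates only the observing operators best fitted to it,
# while the remaining modalities stay idle. All names are illustrative.

class ContextualSwitcher:
    """Selects which observing operators are active for the current context."""

    def __init__(self, operators_by_context):
        # Map: context name -> list of operator callables to run.
        self.operators_by_context = operators_by_context
        self.active = []

    def switch(self, context):
        # Opportunistically activate only the operators registered
        # for the detected context.
        self.active = self.operators_by_context.get(context, [])
        return [op.__name__ for op in self.active]

    def process(self, frame):
        # Run only the active subset on the incoming frame, avoiding
        # the cost of computing every modality in parallel.
        return {op.__name__: op(frame) for op in self.active}


# Hypothetical observing modalities (placeholders for real vision operators).
def edge_detector(frame):
    return f"edges({frame})"

def color_tracker(frame):
    return f"colors({frame})"

def motion_detector(frame):
    return f"motion({frame})"

switcher = ContextualSwitcher({
    "indoor":  [edge_detector, color_tracker],
    "outdoor": [motion_detector],
})

print(switcher.switch("indoor"))    # operators activated for this context
print(switcher.process("frame_0"))  # only the active subset runs
```

On a context change, a single call to `switch` replaces the active operator set, so the per-frame cost stays bounded by the operators relevant to the current situation rather than by all available modalities.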