Participants in collaborative sessions, whether real or virtual, often have difficulty interacting if all they can perceive of peer activities are actions without context. Contextual information strongly influences the evolution of collaborative interactions by creating opportunities for timely interruptions and for identifying peer intent. Computer-supported cooperative work (CSCW) researchers have demonstrated collaborative systems that allow geographically dispersed users to work together synchronously. Evaluations of this class of systems have confirmed the need to transmit at a distance, in addition to participants' actions, contextual cues, some socially relevant and others task relevant. For example, in a real-world collaborative drawing task, cues that help establish context can take the form of peer gestures, e.g., hand movements, gaze pursuits, gaze fixations, and head shifts. Unfortunately, real-time groupware systems often cannot provide resources for context building: system-specific constraints restrict user representations to artifacts that lack the expressive power to convey gestures. We therefore propose a strategy to deal with the missing contextual linkage between views of a collaborative information space that do not lend themselves easily to effective peer user representations.