We propose a multicue gaze prediction framework for open signed video content, the benefits of which include coding gains without loss of perceived quality. We investigate which cues are relevant for gaze prediction and find that shot changes, the signer's facial orientation, and face locations are the most useful. We then design a face orientation tracker based upon grid-based likelihood ratio trackers, using profile and frontal face detections. These cues are combined using a grid-based Bayesian state estimation algorithm to form a probability surface for each frame. We find that this gaze predictor outperforms both a static gaze prediction and one based on face locations within the frame.
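The core idea of grid-based Bayesian state estimation over a frame can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grid size, the random-walk motion model, the Gaussian cue likelihoods, and the example cue positions are all assumptions made for demonstration.

```python
import numpy as np

def diffuse(grid):
    """Predict step: spread probability to neighboring cells (random-walk motion model)."""
    out = grid.copy()
    for axis in (0, 1):
        for shift in (-1, 1):
            out += np.roll(grid, shift, axis=axis)
    return out / out.sum()

def update(grid, likelihood):
    """Update step: weight each cell by the combined cue likelihood, then renormalize."""
    post = grid * likelihood
    return post / post.sum()

def gaussian_likelihood(shape, center, sigma):
    """Likelihood surface peaked at a detected cue location (e.g. a face)."""
    ys, xs = np.indices(shape)
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

# Uniform prior over a coarse 18x32 grid covering the frame (hypothetical resolution)
grid = np.full((18, 32), 1.0 / (18 * 32))

# Two hypothetical cues: a strong face detection at (6, 10) and a weaker cue at (12, 20)
lik = (gaussian_likelihood(grid.shape, (6, 10), 3.0)
       + 0.5 * gaussian_likelihood(grid.shape, (12, 20), 3.0))

# One predict/update cycle produces the per-frame probability surface
grid = update(diffuse(grid), lik)
peak = np.unravel_index(np.argmax(grid), grid.shape)  # → (6, 10), the strongest cue
```

In practice each cue (shot changes, face orientation, face location) would contribute its own likelihood term, and the cycle is repeated every frame so the surface tracks the likely gaze position over time.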
Date of Publication: Jan. 2009