In this paper, we propose an approach to high-level semantic annotation of 3D camera motion in videos. Query by example is a common technique for video search and retrieval and has proven its capability even for image features. However, complex properties of video clips, especially in the temporal domain, can hardly be described via examples, as examples often fail to capture the relevant information. High-level semantic information therefore seems much more appropriate for future video search and retrieval engines built entirely on user-friendly interfaces. Well-known self-calibration techniques provide robust and reliable information on camera motion, but this rich source of information has not been well exploited for high-level semantic video annotation, search, and retrieval. We evaluate and exploit the full temporal information along the sequence to annotate meaningful high-level semantics such as turn, rotation, fast-moving camera, and transversal motion. We present a first approach to derive general motion properties of the camera within video segments and show the potential for further investigations.
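As a minimal sketch of the idea described above, the fragment below maps per-frame camera motion parameters (as a self-calibration pipeline might recover them) to semantic labels such as "turn" or "fast moving camera" by simple thresholding. All function names, parameter conventions, and threshold values are illustrative assumptions, not the paper's actual method.

```python
import math

def annotate_segment(motions,
                     turn_thresh_deg=1.0,     # assumed per-frame angle threshold
                     fast_thresh=0.5,         # assumed translation-speed threshold
                     transversal_ratio=2.0):  # lateral vs. forward motion ratio
    """Map frame-to-frame motion parameters to high-level semantic labels.

    `motions` is a list of hypothetical per-frame tuples
    (yaw_deg, pitch_deg, roll_deg, tx, ty, tz) in arbitrary scene units.
    """
    labels = set()
    for yaw, pitch, roll, tx, ty, tz in motions:
        speed = math.sqrt(tx * tx + ty * ty + tz * tz)
        # Panning/tilting rotations suggest a "turn" of the camera.
        if abs(yaw) > turn_thresh_deg or abs(pitch) > turn_thresh_deg:
            labels.add("turn")
        # Roll about the optical axis is labeled as "rotation".
        if abs(roll) > turn_thresh_deg:
            labels.add("rotation")
        if speed > fast_thresh:
            labels.add("fast moving camera")
        # Sideways translation dominating forward motion: transversal motion.
        lateral = math.sqrt(tx * tx + ty * ty)
        if lateral > transversal_ratio * abs(tz):
            labels.add("transversal motion")
    return labels or {"static"}
```

For instance, a segment whose frames show a 2-degree yaw per frame with little translation would be labeled `{"turn"}`, while a segment with no motion at all falls back to `{"static"}`. A real system would of course smooth the estimates over time before thresholding.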