
Towards Surveillance Video-and-Language Understanding: New Dataset, Baselines, and Challenges



Abstract:

Surveillance videos are important for public security. However, current surveillance video tasks mainly focus on classifying and localizing anomalous events. Although existing methods have achieved considerable performance, they remain limited to detecting and classifying predefined events, with unsatisfactory semantic understanding of video content. To address this issue, we propose a new research direction, surveillance video-and-language understanding (VALU), and construct the first multimodal surveillance video dataset: we manually annotate the real-world surveillance dataset UCF-Crime with fine-grained event content and timing. Our newly annotated dataset, UCA (UCF-Crime Annotation)¹, contains 23,542 sentences with an average length of 20 words, and its annotated videos total 110.7 hours. Furthermore, we benchmark SOTA models for four multimodal tasks on this newly created dataset, which serve as new baselines for surveillance VALU. Through experiments, we find that mainstream models that perform well on previous public datasets perform poorly on surveillance video, revealing new challenges in surveillance VALU. We also conduct experiments on multimodal anomaly detection; the results demonstrate that multimodal surveillance learning can improve anomaly detection performance. All the experiments highlight the necessity of constructing this dataset to advance surveillance AI.

¹The dataset is provided at https://xuange923.github.io/Surveillance-Video-Understanding.
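To make the dataset's annotation scheme concrete, the sketch below models one UCA-style record as a sentence description paired with its event timing, and computes the average sentence length in words (the paper reports an average of 20 words over all 23,542 sentences). The field names and sample records are illustrative assumptions, not the released file format; the project page above defines the actual data layout.

```python
from dataclasses import dataclass

@dataclass
class UCAAnnotation:
    """One UCA-style record: an event description with its timing.

    Field names are illustrative assumptions; the released annotation
    files define the actual schema.
    """
    video_id: str    # source UCF-Crime video
    start_sec: float # event start time in seconds
    end_sec: float   # event end time in seconds
    sentence: str    # fine-grained natural-language description

# Toy records standing in for real annotations (invented examples).
records = [
    UCAAnnotation("Abuse001", 3.2, 11.8,
                  "A man in a black jacket pushes another person to the ground."),
    UCAAnnotation("Normal042", 0.0, 15.0,
                  "Pedestrians walk along a corridor under a ceiling camera."),
]

# Average sentence length in words, as reported for the full dataset.
avg_len = sum(len(r.sentence.split()) for r in records) / len(records)
print(f"{avg_len:.1f} words per sentence")
```

A record like this supports both directions studied in the paper: the sentence can serve as a retrieval query for the timed clip, or the clip can be the input for generating the sentence.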
Date of Conference: 16-22 June 2024
Date Added to IEEE Xplore: 16 September 2024
Conference Location: Seattle, WA, USA

1. Introduction

Surveillance videos are crucial and indispensable for public security. In recent years, various surveillance-video-oriented tasks have been widely studied, e.g., anomaly detection and anomalous/human action recognition. However, existing surveillance video datasets [20], [25], [26], [38] provide only the category labels and timing of anomalous events, and require all categories to be predefined. The related methods are thus limited to merely detecting and classifying predefined events, lacking the capacity for semantic understanding of video content. Yet automatic understanding of surveillance video content is crucial to enhancing existing investigative measures: some surveillance applications need to search for specific event queries rather than broad categories, i.e., to use natural-language queries to retrieve events in surveillance videos. Meanwhile, intelligent surveillance exhibits a trend toward multimodal directions, especially video-and-text interaction.

Figure: Annotation examples in our UCA dataset, including fine-grained sentence queries and the corresponding timing.

