We present novel algorithms for detecting generic visual events in video. The target event models produce a binary decision for each shot about classes of events involving object actions and their interactions with the scene, such as an airplane taking off, exiting a car, or a riot. While event detection has been studied in scenarios with strong scene and imaging assumptions, the detection of generic visual events in an unconstrained domain such as broadcast news has not been explored. This work extends our recent work on event detection by (1) using a novel bag-of-features representation together with the earth mover's distance to account for temporal variations within a shot, and (2) learning the relative importance of the input modalities with a double-convex combination over both kernels and support vectors, which is in turn solved via multiple kernel learning. Experiments show that the bag-of-features representation significantly outperforms a static baseline, and that multiple kernel learning yields promising performance improvements while providing intuitive explanations of the importance of each input kernel.
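The abstract does not give implementation details for the earth mover's distance between bag-of-features shot representations. As a rough illustrative sketch (not the authors' implementation), the EMD between two weighted bags of frame-level feature vectors can be computed as a small linear program over pairwise flows, assuming a Euclidean ground distance; the function name `emd` and the uniform frame weights below are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def emd(P, wp, Q, wq):
    """Earth mover's distance between two weighted bags of feature vectors.

    P: (n, d) array of features, wp: (n,) nonnegative weights;
    Q: (m, d) array of features, wq: (m,) nonnegative weights.
    """
    n, m = len(P), len(Q)
    # Ground distance: Euclidean distance between every pair of features.
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)
    c = D.ravel()  # flow variables f[i, j] flattened row-major
    # Inequality constraints: flow out of each P point (row sums) and into
    # each Q point (column sums) must not exceed the respective weight.
    A_ub = np.zeros((n + m, n * m))
    for i in range(n):
        A_ub[i, i * m:(i + 1) * m] = 1.0   # row i of the flow matrix
    for j in range(m):
        A_ub[n + j, j::m] = 1.0            # column j of the flow matrix
    b_ub = np.concatenate([wp, wq])
    # Equality constraint: total flow equals the smaller total weight.
    total = min(wp.sum(), wq.sum())
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, n * m)), b_eq=[total],
                  bounds=(0, None))
    return res.fun / total  # normalized transportation cost

# Example: two shots represented as bags of (here, 1-D) frame features.
shot_a = np.array([[0.0], [1.0]])
shot_b = np.array([[0.0], [1.0]])
w = np.array([0.5, 0.5])  # illustrative uniform frame weights
print(emd(shot_a, w, shot_b, w))  # identical bags: distance 0
```

A distance of this kind is typically turned into a kernel, e.g. `exp(-emd(...) / sigma)`, before being combined with other modality kernels in the multiple kernel learning step.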