Video classification and retrieval are currently performed manually, with individuals adding semantic annotations or writing descriptions of the videos. Existing algorithmic methods often suffer from a semantic gap between visual content and human interpretation. This paper proposes a biologically inspired system that automatically clusters videos based on visual attributes. For feature extraction, each video frame is processed with a multi-scale, multi-orientation Gabor filter bank. The resulting Gabor-filtered sub-band images are down-sampled on a regular grid to obtain a global representation of the image. For clustering, the system employs an unsupervised, adaptive algorithm, the Self-Organizing Map (SOM), enabling automatic discovery of video content. SOMs are single-layer, two-dimensional neural networks that use the delta update rule and a competition-based online learning scheme to learn the internal relationships of the input data without supervision. The baseline framework is deployed and evaluated on a small dataset. Initial results show an effective mapping between input video frames and topological regions on the SOM.
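The pipeline described above (Gabor filter bank, grid-based pooling, SOM with winner-take-all competition and a delta-rule update) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the filter sizes, number of scales and orientations, pooling grid, map dimensions, and learning schedule are all illustrative assumptions.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size, sigma, theta, lam):
    """Real part of a Gabor kernel: Gaussian envelope times a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(frame, scales=(2.0, 4.0), n_orient=4, grid=4, ksize=9):
    """Filter a frame with a multi-scale, multi-orientation bank, then
    mean-pool each sub-band on a regular grid for a global descriptor."""
    feats = []
    for sigma in scales:
        for k in range(n_orient):
            kern = gabor_kernel(ksize, sigma, k * np.pi / n_orient, lam=2 * sigma)
            windows = sliding_window_view(frame, kern.shape)      # valid-mode filtering
            resp = np.einsum('ijkl,kl->ij', windows, kern)
            h, w = resp.shape
            resp = resp[:h // grid * grid, :w // grid * grid]     # crop to grid multiple
            cells = resp.reshape(grid, h // grid, grid, w // grid)
            feats.append(np.abs(cells).mean(axis=(1, 3)).ravel()) # grid x grid pooling
    return np.concatenate(feats)

def train_som(data, rows=5, cols=5, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Toy SOM: best-matching-unit competition plus a delta-rule pull of the
    winner's neighborhood toward each input (assumed hyperparameters)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((rows, cols, data.shape[1]))
    gy, gx = np.mgrid[0:rows, 0:cols]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)              # decaying learning rate
        sig = sigma0 * (1 - t / epochs) + 0.5    # shrinking neighborhood
        for i in rng.permutation(len(data)):
            v = data[i]
            d = np.sum((w - v) ** 2, axis=2)
            bi, bj = np.unravel_index(np.argmin(d), d.shape)  # competition: BMU
            h = np.exp(-((gy - bi) ** 2 + (gx - bj) ** 2) / (2 * sig**2))
            w += lr * h[..., None] * (v - w)     # delta-rule update
    return w

def best_unit(w, v):
    """Map a feature vector to its topological region (BMU coordinates)."""
    d = np.sum((w - v) ** 2, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, `best_unit` assigns each frame's descriptor to a map coordinate, so frames with similar Gabor statistics land in nearby regions of the SOM grid.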