In this paper, a graph-based method for video scene detection is proposed. The method represents the video as a weighted undirected graph in which each shot is a vertex, and the edge weights between vertices are computed from the spatial and temporal similarities of the corresponding shots. Using the complete information in the graph, the method detects a set of vertices that are highly similar to each other and dissimilar to the remaining vertices, subject to a temporal continuity constraint; this set forms the first detected video scene. The vertices of that scene are then removed from the graph, and the process is repeated a predetermined number of times. The resulting scenes, whose boundaries are now determined, are placed on the temporal axis, and each remaining temporal segment between two detected scenes is also accepted as a video scene.
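The iterative extraction described above can be sketched roughly as follows. This is only an illustrative assumption of how such a method might look: the shot-similarity matrix `W`, the internal-minus-external similarity score, and the greedy search over contiguous shot windows (which enforces the temporal continuity constraint) are placeholders, since the abstract does not specify the paper's actual graph-partitioning formulation.

```python
import numpy as np

def extract_scene(W, active):
    """Find the contiguous run of still-active shots whose average
    internal similarity is high and whose average similarity to the
    remaining shots is low (a stand-in for the paper's criterion)."""
    idx = [i for i in range(len(W)) if active[i]]
    best, best_score = None, -np.inf
    # Examine every contiguous window of active shots
    # (temporal continuity constraint).
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            inside = idx[a:b + 1]
            outside = [i for i in idx if i < idx[a] or i > idx[b]]
            internal = W[np.ix_(inside, inside)].mean()
            external = W[np.ix_(inside, outside)].mean() if outside else 0.0
            score = internal - external
            if score > best_score:
                best, best_score = (idx[a], idx[b]), score
    return best

def detect_scenes(W, n_scenes):
    """Repeat the extraction n_scenes times, removing each detected
    scene's vertices from the graph before the next pass."""
    active = [True] * len(W)
    scenes = []
    for _ in range(n_scenes):
        span = extract_scene(W, active)
        if span is None:
            break
        scenes.append(span)
        for i in range(span[0], span[1] + 1):
            active[i] = False
    return sorted(scenes)
```

On a toy similarity matrix with two blocks of mutually similar shots, `detect_scenes` recovers the two blocks as scene boundary pairs; the temporal gaps between detected scenes would then be labeled as scenes in a final pass, as the abstract describes.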