This paper addresses the automatic analysis of court-net sports video content. We extract information about the players and the playing field in a bottom-up way until we reach scene-level semantic concepts. Each part of our framework is general, so that the system is applicable to several kinds of sports. A central point in our framework is a camera calibration module that relates a priori information about the geometric layout, in the form of a court model, to the input image. Exploiting this information, we propose several novel algorithms, including playing-frame detection and player segmentation and tracking. To address the player-occlusion problem, we model the contour map of the player silhouettes using a nonlinear regression algorithm, which enables locating the players during occlusions between players of the same team. Additionally, a Bayesian classifier, taking a set of real-world visual features as input, recognizes predefined key events. We illustrate the performance and efficiency of the proposed system by evaluating it on a variety of sports videos covering badminton, tennis and volleyball, and we show that our algorithm operates with more than 91% accuracy in feature detection and 90% in event detection.
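The abstract does not spell out the calibration step, but relating a planar court model to the input image is commonly done with a homography estimated from point correspondences. The following is a minimal sketch of that idea using the direct linear transform (DLT); the court dimensions, point choices, and function names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def estimate_homography(court_pts, image_pts):
    """Estimate the 3x3 homography H mapping court-model points (x, y)
    to image points (u, v) via the DLT, from >= 4 correspondences."""
    A = []
    for (x, y), (u, v) in zip(court_pts, image_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h (flattened H) is the right singular vector of A with the
    # smallest singular value, i.e. the last row of V^T.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def project(H, pt):
    """Apply homography H to a 2-D court point, returning image coords."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Hypothetical example: corners of a doubles tennis court (metres)
court_corners = [(0.0, 0.0), (10.97, 0.0), (10.97, 23.77), (0.0, 23.77)]
```

Once H is estimated from detected court lines or corners, player positions found in the image can be back-projected to real-world court coordinates with the inverse homography, which is what makes the downstream visual features "real-world" as the abstract describes.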