This paper proposes an occlusion-resistant automatic fall detection framework for smart environments. The proposed method makes two major contributions. First, synchronized RGB and depth data are used together to capture both the appearance and the geometric characteristics of human silhouettes in the environment. Second, unlike existing methods, a single Kinect sensor is mounted on the ceiling and a plan view of the room is captured, avoiding occlusions caused by furniture. For each frame, the person's silhouette is extracted from the depth data. From the silhouette, the depth histogram, the bounding box, and the distributions of the average and highest depth values are computed. The system learns these parameters for different regions of the room and classifies human poses into three categories: standing, fallen, and other. Experimental results show that the proposed framework successfully detects falls in complex situations.
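The feature-extraction step described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the background model, the difference threshold, the histogram bin count, and the simple depth-threshold pose rule (`classify_pose`, `fall_margin`) are all assumptions introduced here for clarity. With a ceiling-mounted sensor, a smaller depth value means a point closer to the camera, so the silhouette's minimum depth approximates the highest body point.

```python
import numpy as np

def extract_fall_features(depth_frame, background, diff_thresh=100):
    """Extract silhouette features from a ceiling-mounted depth frame.

    depth_frame, background: 2-D depth arrays (e.g. in millimetres).
    Returns the depth histogram, bounding box, and mean/min depth of the
    silhouette, or None if no person is detected.
    """
    # Person silhouette: pixels differing notably from the empty-room background.
    diff = depth_frame.astype(np.int32) - background.astype(np.int32)
    mask = np.abs(diff) > diff_thresh
    if not mask.any():
        return None  # no person in view

    ys, xs = np.nonzero(mask)
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())  # (x0, y0, x1, y1)

    person_depths = depth_frame[mask]
    hist, _ = np.histogram(person_depths, bins=32)  # bin count is an assumption

    return {
        "bbox": bbox,
        "depth_hist": hist,
        "mean_depth": float(person_depths.mean()),
        "min_depth": float(person_depths.min()),  # highest body point
    }

def classify_pose(features, stand_depth, fall_margin=600):
    """Toy threshold rule (hypothetical, stands in for the learned
    per-region model): if the highest body point is much farther from the
    ceiling than a standing person's head, report a fall."""
    if features is None:
        return "other"
    if features["min_depth"] > stand_depth + fall_margin:
        return "fall"
    if abs(features["min_depth"] - stand_depth) < fall_margin / 2:
        return "standing"
    return "other"
```

In the paper's framework these thresholds are replaced by parameters learned separately for each region of the room, which lets the classifier account for floor-height and viewpoint differences across the plan view.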
Date of Conference: 18-20 April 2012