Skin segmentation and tracking play an important role in sign language recognition. This paper describes a framework for segmenting and tracking skin objects in signing videos. The framework consists of two main parts: a skin colour model and a skin object tracking system. The skin colour model is first built by combining support vector machine (SVM) active learning with region segmentation. This model is then integrated with motion and position information to perform segmentation and tracking. The tracking system predicts occlusions between skin objects using a Kalman filter (KF). Moreover, the skin colour model is updated with the help of tracking to handle illumination variation. Experimental evaluations on real-world gesture videos, together with comparisons against existing algorithms, demonstrate the effectiveness of the proposed framework.
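The abstract does not give the details of the SVM skin colour model, so the following is only an illustrative sketch: a linear SVM trained by Pegasos-style sub-gradient descent on the hinge loss, classifying pixels by their normalized (r, g) chromaticity. The function name `train_linear_svm`, the chromaticity feature choice, and all hyperparameters are assumptions for illustration, not the paper's actual design; in particular, the paper's active-learning loop (selecting uncertain samples for labelling) is omitted here.

```python
import numpy as np

def rg_chromaticity(rgb):
    """Map RGB pixels (n, 3) to normalized (r, g) chromaticity (n, 2).

    Chromaticity discards overall brightness, a common choice for
    skin colour modelling (assumed here, not stated in the abstract).
    """
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=1, keepdims=True) + 1e-9
    return rgb[:, :2] / s

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM by sub-gradient descent on the hinge loss.

    X: (n, d) features; y: labels in {-1, +1} (+1 = skin).
    Returns weight vector w and bias b of the decision function
    f(x) = x @ w + b; sign(f) gives the predicted class.
    """
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    w = np.zeros(X.shape[1])
    b = 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)  # Pegasos step-size schedule
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:  # inside margin or misclassified
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:           # correctly classified: only shrink w
                w = (1 - eta * lam) * w
    return w, b
```

In practice the trained `(w, b)` would be applied to every pixel's chromaticity to produce a binary skin mask, which region segmentation then cleans up into connected skin objects.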
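The KF-based occlusion prediction can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a constant-velocity state `[px, py, vx, vy]` per skin object with only the centroid observed, and flags a likely occlusion when the predicted centroids of two objects fall within a distance threshold. The class and function names (`ConstantVelocityKF`, `occlusion_predicted`) and all noise settings are hypothetical.

```python
import numpy as np

class ConstantVelocityKF:
    """Kalman filter with a constant-velocity motion model for one skin object.

    State x = [px, py, vx, vy]; measurement z = [px, py] (the centroid).
    """

    def __init__(self, px, py, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([px, py, 0.0, 0.0])
        self.P = np.eye(4) * 10.0          # large initial uncertainty
        self.F = np.array([[1, 0, dt, 0],  # state transition
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],   # observe position only
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q             # process noise
        self.R = np.eye(2) * r             # measurement noise

    def predict(self):
        """Propagate the state one frame ahead; return predicted centroid."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Correct the state with a measured centroid z = [px, py]."""
        y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def occlusion_predicted(kf_a, kf_b, threshold=20.0):
    """Flag a likely occlusion when two predicted centroids come close."""
    pa, pb = kf_a.predict(), kf_b.predict()
    return float(np.linalg.norm(pa - pb)) < threshold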