On-body sensing has enabled scalable and unobtrusive activity recognition for context-aware wearable computing. Common methods for activity recognition are based on supervised learning and require substantial amounts of labeled training data. Obtaining accurate and detailed annotations of activities is a major challenge for these approaches, limiting their applicability in real-world settings. This paper introduces a new activity recognition method that combines small amounts of labeled data with easily obtainable unlabeled data in a semi-supervised learning process. The method propagates label information through a graph that contains both labeled and unlabeled data. We propose two different ways of combining multiple graphs based on feature similarity and time. We evaluate both the quality of the label propagation process itself and the performance of classifiers trained on the propagated labels. Experimental results on two public datasets indicate that our approach outperforms a recently proposed multi-instance learning approach and in some cases even outperforms fully supervised approaches.
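To illustrate the general idea of graph-based label propagation described above, the following is a minimal sketch, not the authors' actual method: it builds an RBF feature-similarity graph, optionally mixes in a simple temporal graph linking consecutive samples (a hypothetical choice for the combination step), and iteratively propagates labels while clamping the labeled nodes. All function and parameter names are illustrative assumptions.

```python
import numpy as np

def propagate_labels(X, y, t=None, alpha=0.5, sigma=1.0, n_iter=100):
    """Sketch of semi-supervised label propagation on a combined graph.

    X     : (n, d) feature matrix
    y     : (n,) labels; -1 marks unlabeled samples
    t     : optional (n,) timestamps for a temporal graph
    alpha : mixing weight between feature and temporal graphs
    """
    n = len(y)
    # Feature-similarity graph: RBF kernel on pairwise squared distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    if t is not None:
        # Temporal graph: connect temporally adjacent samples
        # (an assumed, simplified way to encode time).
        Wt = np.zeros((n, n))
        order = np.argsort(t)
        for a, b in zip(order[:-1], order[1:]):
            Wt[a, b] = Wt[b, a] = 1.0
        W = alpha * W + (1 - alpha) * Wt
    np.fill_diagonal(W, 0.0)
    P = W / W.sum(axis=1, keepdims=True)  # row-normalized transition matrix

    # One-hot label matrix; unlabeled rows start at zero.
    classes = np.unique(y[y >= 0])
    F = np.zeros((n, len(classes)))
    labeled = y >= 0
    F[labeled, np.searchsorted(classes, y[labeled])] = 1.0
    F0 = F.copy()
    for _ in range(n_iter):
        F = P @ F            # diffuse label mass along graph edges
        F[labeled] = F0[labeled]  # clamp known labels each iteration
    return classes[F.argmax(axis=1)]
```

With one labeled sample per activity cluster, the remaining samples inherit the label of their nearest cluster, which is the behavior the abstract's label propagation step relies on; a classifier can then be trained on the propagated labels.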