We study auditory context recognition for context-aware mobile computing systems. Auditory contexts are recordings of mixtures of sounds, or ambient audio, from mobile users' everyday environments. To train a classifier, a set of recordings from different environments is segmented and labeled, and the segments are windowed into overlapping frames for feature extraction. While previous work in auditory context recognition has often treated the problem as a sequence classification task, using HMM-based classifiers to recognize sequences of consecutive per-frame MFCCs, we instead average the Mel spectrum over each segment and train an SVM-based classifier. On the same dataset, our scheme outperforms a previously reported HMM-based scheme. We also show that the feature sets used in previous work are often affected by attenuation, limiting their applicability in practice. Finally, we study the impact of segment duration on recognition accuracy.
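To make the pipeline concrete, the following is a minimal sketch of the averaged-Mel-spectrum plus SVM approach described above, using NumPy and scikit-learn. All parameter values (sample rate, frame length, hop size, number of Mel bands, SVM kernel) and the toy two-class data are illustrative assumptions, not the paper's actual settings or dataset.

```python
# Sketch: segment -> overlapping frames -> magnitude spectra -> Mel bands
# -> average over the segment -> SVM classifier. Parameters are assumed.
import numpy as np
from sklearn.svm import SVC

SR = 16000      # sample rate (assumed)
N_FFT = 512     # frame length in samples
HOP = 256       # hop size, i.e. 50% frame overlap
N_MELS = 20     # number of Mel bands (assumed)

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    """Triangular Mel filters over the rFFT bins."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

FB = mel_filterbank(SR, N_FFT, N_MELS)

def averaged_mel_spectrum(segment):
    """Window a segment into overlapping frames, take the magnitude
    spectrum of each frame, map it to Mel bands, and average over the
    segment to get one feature vector."""
    win = np.hanning(N_FFT)
    frames = [segment[s:s + N_FFT] * win
              for s in range(0, len(segment) - N_FFT + 1, HOP)]
    mag = np.abs(np.fft.rfft(frames, axis=1))   # (n_frames, n_fft//2 + 1)
    mel = mag @ FB.T                            # (n_frames, N_MELS)
    return np.log1p(mel).mean(axis=0)           # (N_MELS,)

# Toy stand-ins for two auditory contexts: a low-frequency tone vs. noise.
rng = np.random.default_rng(0)
def tone():
    t = np.arange(SR) / SR
    return np.sin(2 * np.pi * 300 * t) + 0.05 * rng.standard_normal(SR)
def noise():
    return rng.standard_normal(SR)

X = np.array([averaged_mel_spectrum(tone()) for _ in range(20)] +
             [averaged_mel_spectrum(noise()) for _ in range(20)])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
acc = clf.score(X, y)   # training accuracy on this easily separable toy set
```

Note the design point the abstract makes: instead of modeling the temporal sequence of per-frame features with an HMM, the segment is collapsed into a single averaged feature vector, which turns the task into ordinary vector classification suitable for an SVM.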