
Recognizing Daily Life Context Using Web-Collected Audio Data

3 Author(s)

This work presents an approach to modelling daily-life contexts from web-collected audio data. Available in vast quantities from many different sources, audio data from the web provides heterogeneous training data for constructing recognition systems. Crowd-sourced textual descriptions (tags) attached to individual sound samples were used in a configurable recognition system to model 23 sound context categories. We analysed our approach with different outlier filtering techniques, both on dedicated recordings of all 23 categories and in a study with 230 hours of full-day smartphone recordings from 10 participants. Depending on the outlier filtering technique, the system achieved recognition accuracies between 51% and 80%.
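Since web-collected audio is noisy, outlier filtering of training samples is a key step in the approach described above. As a rough illustration only (the paper does not specify its filtering techniques here), one simple variant removes feature vectors that lie far from their category centroid; the `filter_outliers` helper and the distance-to-centroid rule below are illustrative assumptions, not the authors' actual method.

```python
import math

def filter_outliers(features, k=2.0):
    """Keep only feature vectors whose Euclidean distance to the
    category centroid is within k standard deviations of the mean
    distance. `features` is a list of equal-length float vectors,
    e.g. per-sample audio features for one sound context category."""
    n = len(features)
    dim = len(features[0])
    # Centroid of all vectors in this category.
    centroid = [sum(v[i] for v in features) / n for i in range(dim)]
    # Distance of each vector to the centroid.
    dists = [math.dist(v, centroid) for v in features]
    mean = sum(dists) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in dists) / n)
    # Discard vectors beyond the distance threshold.
    return [v for v, d in zip(features, dists) if d <= mean + k * std]

# Example: four similar samples plus one far-off web sample.
feats = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [10.0, 10.0]]
kept = filter_outliers(feats, k=1.0)  # the distant sample is dropped
```

A single extreme sample inflates both the centroid and the standard deviation, so a tight `k` is needed when outliers are few; more robust schemes (e.g. median-based distances) address this at slightly higher cost.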

Published in:

2012 16th International Symposium on Wearable Computers

Date of Conference:

18-22 June 2012