When notifying the user about an appointment, message arrival, or other timely event, the wearable should not blindly sound an alarm (visual or auditory). Integrating with the user requires the wearable to be aware of the user's situational context. Is the user in a conversation? On the phone or with someone nearby? Who? Is the user driving a car, walking down the street, or sitting at a desk? We have developed a system that infers environmental context through audio classification. It was designed within a statistical pattern-recognition framework, Hidden Markov Models (HMMs), which allows us to recognize classes of sounds given enough training examples from each class.
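The excerpt does not give implementation details, but the classification scheme it describes can be sketched as follows: train one HMM per sound class, then label an incoming sequence of audio features with the class whose model assigns it the highest likelihood (computed with the forward algorithm). The class names, toy parameters, and one-dimensional "features" below are invented for illustration; a real system would use acoustic features such as spectral coefficients and parameters learned from training data.

```python
import numpy as np

def logsumexp(a, axis=None):
    """Numerically stable log(sum(exp(a)))."""
    m = np.max(a, axis=axis, keepdims=True)
    s = m + np.log(np.sum(np.exp(a - m), axis=axis, keepdims=True))
    return np.squeeze(s, axis=axis)

def forward_loglik(obs, start, trans, means, variances):
    """Log-likelihood of an observation sequence under a Gaussian-emission
    HMM, computed with the forward algorithm in log space.

    obs: (T, D) feature sequence; start: (S,) initial state probabilities;
    trans: (S, S) transition matrix; means, variances: (S, D) per-state
    diagonal Gaussian emission parameters.
    """
    diff = obs[:, None, :] - means[None, :, :]                      # (T, S, D)
    log_b = -0.5 * np.sum(np.log(2 * np.pi * variances) + diff**2 / variances,
                          axis=-1)                                  # (T, S)
    alpha = np.log(start) + log_b[0]
    for t in range(1, len(obs)):
        alpha = log_b[t] + logsumexp(alpha[:, None] + np.log(trans), axis=0)
    return logsumexp(alpha)

# Hypothetical per-class models with hand-set parameters (illustration only).
models = {
    "speech": dict(start=np.array([0.6, 0.4]),
                   trans=np.array([[0.7, 0.3], [0.3, 0.7]]),
                   means=np.array([[0.0], [5.0]]),
                   variances=np.array([[1.0], [1.0]])),
    "music":  dict(start=np.array([0.5, 0.5]),
                   trans=np.array([[0.9, 0.1], [0.1, 0.9]]),
                   means=np.array([[10.0], [15.0]]),
                   variances=np.array([[1.0], [1.0]])),
}

def classify(obs):
    """Assign the class whose HMM gives the sequence the highest likelihood."""
    return max(models, key=lambda c: forward_loglik(obs, **models[c]))

seq = np.array([[0.2], [4.8], [0.1], [5.2], [0.3]])
print(classify(seq))
```

With enough labeled examples per class, the hand-set parameters above would instead be estimated with Baum-Welch (expectation-maximization) training, which is the standard way HMM sound models are fit in practice.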