Machine learning techniques are applied to the task of context awareness: inferring aspects of the user's state from a stream of inputs from sensors worn by the user. We focus on the task of indoor navigation and show that, by integrating information from accelerometers, magnetometers, and temperature and light sensors, we can collect enough information to infer the user's location. However, our navigation algorithm performs very poorly, with an error rate of almost 50%, if we use only the raw sensor signals. Instead, we introduce a "data cooking" module that computes appropriate high-level features from the raw sensor data. With these high-level features, we reduce the error rate to 2% in our example environment.
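The abstract does not specify which high-level features the "data cooking" module computes, so the following is only an illustrative sketch of the general idea: sliding a window over the raw multichannel sensor stream and emitting summary statistics (per-channel mean and variance, plus the mean magnitude of the first three channels, assumed here to be the accelerometer). The function name `cook_features` and the window/step parameters are hypothetical, not taken from the paper.

```python
import numpy as np

def cook_features(raw, window=50, step=25):
    """Turn a raw sensor stream (samples x channels) into windowed
    high-level features: per-channel mean and variance, plus the mean
    magnitude of the first three channels (assumed accelerometer axes)."""
    feats = []
    for start in range(0, len(raw) - window + 1, step):
        w = raw[start:start + window]
        mean = w.mean(axis=0)                           # signal level per channel
        var = w.var(axis=0)                             # activity level per channel
        mag = np.linalg.norm(w[:, :3], axis=1).mean()   # mean acceleration magnitude
        feats.append(np.concatenate([mean, var, [mag]]))
    return np.array(feats)
```

A classifier trained on such windowed summaries, rather than on individual raw samples, sees the kind of stable, location-correlated statistics (ambient light level, typical motion energy) that plausibly account for the reported error reduction.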