Many context-aware systems using accelerometers have been proposed. The contexts they recognize are categorized into postures (e.g. sitting), behaviors (e.g. walking), and gestures (e.g. a punch). Postures and behaviors are states that last for a certain length of time, whereas gestures are sporadic, one-off actions. Finding gestures buried in other contexts has been a challenging task. In this paper, we propose a method that classifies contexts into postures, behaviors, and gestures by using the autocorrelation of the acceleration values, and then recognizes each context with an appropriate method. We evaluated the recall and precision of recognizing seven kinds of gestures performed during five kinds of behaviors: the conventional method gave values of 0.75 and 0.59, whereas our method gave 0.93 and 0.93. Our system enables users to provide input by gesturing even while they are performing a behavior.
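The core idea can be illustrated with a minimal sketch: a near-constant signal suggests a posture, a signal whose autocorrelation shows a strong periodic peak suggests a behavior such as walking, and a non-periodic burst suggests a gesture. The function names, window parameters, and thresholds below are illustrative assumptions, not the paper's actual values or algorithm.

```python
import numpy as np

def autocorr(signal):
    """Normalized autocorrelation of a 1-D acceleration window (lag 0 == 1)."""
    x = signal - np.mean(signal)
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    return ac / ac[0]

def classify_window(signal, lag_range=(10, 100),
                    periodic_thresh=0.5, var_thresh=0.05):
    """Crude three-way labeling of an acceleration window.

    Thresholds are illustrative placeholders:
    - low variance            -> 'posture'  (near-constant signal)
    - strong periodic AC peak -> 'behavior' (repetitive motion, e.g. walking)
    - otherwise               -> 'gesture'  (sporadic, one-off burst)
    """
    if np.var(signal) < var_thresh:
        return "posture"
    lo, hi = lag_range
    peak = np.max(autocorr(signal)[lo:hi])
    return "behavior" if peak > periodic_thresh else "gesture"

# Synthetic examples (200 samples per window):
t = np.arange(200)
walking = np.sin(2 * np.pi * t / 20)   # periodic, period 20 samples
sitting = np.zeros(200)                # near-constant
punch = np.zeros(200)
punch[50:55] = 2.0                     # short isolated burst
```

Repetitive behaviors produce high autocorrelation peaks at their period, while a one-off gesture correlates with itself only near lag zero, which is what makes autocorrelation a useful discriminator here.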