Hand-based gesture interaction in smart environments provides a comfortable and discreet way to interact with the environment. However, most available approaches are not robust against inaccurately performed gestures and sensor noise in such environments, which typically lowers the gesture recognition rate and degrades the overall interaction experience. To overcome these challenges and improve recognition performance, we aim to develop a generic context-aware gesture recognition framework that infers the user's interaction intention from the output of an arbitrary gesture recognizer and the current context of a smart environment. The main feature of our framework is its ability to adapt to the dynamic context of arbitrary smart environments at runtime.
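The core idea described above, fusing a (possibly noisy) recognizer's output with the current environmental context, can be sketched as a simple Bayesian re-ranking. This is a minimal illustration under assumed names and probabilities, not the paper's actual framework:

```python
# Hypothetical sketch: re-rank a gesture recognizer's confidence scores
# with context-derived priors via Bayesian fusion. All function names,
# gestures, and numbers are illustrative assumptions.

def rerank(recognizer_scores, context_priors):
    """Combine recognizer confidences with context priors.

    recognizer_scores: dict gesture -> recognizer confidence (likelihood)
    context_priors:    dict gesture -> P(gesture | current context)
    Returns a dict gesture -> normalized posterior.
    """
    gestures = set(recognizer_scores) | set(context_priors)
    fused = {g: recognizer_scores.get(g, 0.0) * context_priors.get(g, 0.0)
             for g in gestures}
    total = sum(fused.values())
    if total == 0.0:
        return fused  # no gesture is plausible in this context
    return {g: v / total for g, v in fused.items()}

# A noisy recognizer is unsure between "swipe" and "wave", but the
# current context (say, the user faces a dimmable lamp) makes "swipe"
# far more likely; the context prior resolves the ambiguity.
scores = {"swipe": 0.45, "wave": 0.55}
priors = {"swipe": 0.8, "wave": 0.2}
posterior = rerank(scores, priors)
best = max(posterior, key=posterior.get)
```

Because the priors are supplied at call time, such a re-ranking layer could in principle track a dynamically changing context without retraining the underlying recognizer, which is the adaptability the abstract emphasizes.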