For many applications, and particularly for medical intra-operative applications, exploring and navigating through 3-D image data provided by sensors such as ToF (time-of-flight) cameras, MUSTOF (multisensor time-of-flight) endoscopes, or CT (computed tomography) requires a user interface that avoids physical contact with an input device. We therefore propose a touchless user interface based on gestures classified from the data provided by a ToF camera. We describe the reasonable and necessary user interactions and introduce a suitable set of gestures for them. We then propose a user interface that interprets the current gesture and performs the assigned functionality. To evaluate the quality of the developed user interface, we considered classification rate, real-time applicability, usability, intuitiveness, and training time. Our evaluation shows that the system, which achieves a classification rate of 94.3% at a frame rate of 11 frames per second, satisfactorily addresses all of these quality requirements.