This paper proposes a human-machine interface for an assistive exoskeleton based on face analysis. The designed 4-DoF assistive robotic system is dedicated to people suffering from myopathy and aims to compensate for the loss of mobility in the upper limb. The proposed interface converts the user's head gestures and mouth expressions into suitable control commands. Moreover, we propose a visual context analysis component to produce more accurate commands. The tests conducted show that a vision-based interface is particularly well suited to disabled users. In this paper, we first describe the problem and the designed mechanical system. Next, we describe the two approaches developed for the visual sensing interface, head control and mouth expression control, with a focus on the mouth extraction algorithm. Finally, we introduce context detection for scene understanding.
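To make the interface concept concrete, the following is a minimal sketch of how face-analysis outputs (a recognized head gesture plus a mouth state) might be mapped to commands for a 4-DoF arm. The gesture labels, joint names, `ExoCommand` type, and the use of the mouth as an enable switch are all illustrative assumptions, not the system described in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExoCommand:
    """A command for one degree of freedom of the assistive arm."""
    joint: str       # hypothetical joint name (one of the 4 DoF)
    direction: int   # +1 / -1 to move, 0 to hold still

# Assumed mapping from recognized head gestures to joint motions.
HEAD_GESTURE_MAP = {
    "tilt_left":  ExoCommand("shoulder_rotation", -1),
    "tilt_right": ExoCommand("shoulder_rotation", +1),
    "nod_up":     ExoCommand("elbow_flexion", +1),
    "nod_down":   ExoCommand("elbow_flexion", -1),
}

def command_from_face(head_gesture: str, mouth_open: bool) -> ExoCommand:
    """Sketch of the gesture-to-command conversion: here the mouth
    expression acts as an enable switch, so motion commands are only
    issued while the mouth is open; otherwise the arm holds still."""
    if not mouth_open:
        return ExoCommand("none", 0)
    return HEAD_GESTURE_MAP.get(head_gesture, ExoCommand("none", 0))

# Example: tilting the head left with the mouth open rotates the shoulder.
print(command_from_face("tilt_left", True))
print(command_from_face("tilt_left", False))
```

Using the mouth state as an enable switch is one plausible design choice here: it lets the user move their head freely (e.g., to look around) without triggering unintended arm motion.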