I. Introduction
Although many advances have been made in computer vision [1] and natural language processing [2], comparatively little has been done to help the specially abled population communicate with the general population. Some work exists on static and dynamic gesture recognition for Indian Sign Language, but to date none addresses the sentence translation task. The recent work of Sruthi et al. [3] also deals only with static gestures, while the works of Neelkamal et al. [4] and Pratik et al. [5] handle both static and dynamic gestures. This work aims to build a system that can recognise Indian Sign Language sentences from video sequences in real time and can be trained in an end-to-end manner.