This paper proposes a novel real-time non-verbal communication system driven by natural language instructions, which introduces an artificial intelligence method into the networked virtual environment (NVE). We extract semantic information as an interlingua from the input text by natural language processing and then transmit this semantic feature extraction (SFE), which is in fact a parameterized action representation, to 3-D articulated humanoid models prepared at each client in remote locations. Once the SFE is received, the virtual human is animated from the synthesized SFE. Experiments on Japanese Sign Language and Chinese Sign Language show that this system provides participants with real-time avatar animations while chatting with each other, rather than relying only on text or predefined gesture icons, so the communication is more natural. The proposed system is also suitable for sign language distance training.