As in real life, interaction among users in a Collaborative Virtual Environment is not accomplished through the verbal channel alone. The response to a request can be an action, and the body movements of the users' avatars, such as pointing or gaze direction, help convey messages that may complement or even substitute for verbal communication. In this paper we propose to extend the Sentence Opener approach, by which the intention of a verbal message is identified, with nonverbal communication analysis for a better comprehension of collaborative interaction. This automatic analysis will provide an autonomous virtual tutor with the tools to scaffold interaction during a training situation for a task of a socio-technical nature in a Collaborative Virtual Environment.