The conveyance and recognition of affect and emotion partially determine how people interact with others and how they perform in their day-to-day activities. Hence, it is becoming necessary to endow technology with the ability to recognize users' affective states in order to increase its effectiveness. This paper makes three contributions to this research area. First, we demonstrate recognition models that automatically recognize affective states and affective dimensions from non-acted, rather than acted, body postures. The scenario selected for training and testing the automatic recognition models is a body-movement-based video game. Second, when attributing affective labels and dimension levels to the postures, represented as faceless avatars, observers agreed at a level above chance. Finally, using the labels and affective dimension levels assigned by the observers as ground truth, and the observers' level of agreement as the base rate, automatic recognition models grounded in low-level posture descriptions were built and tested for their ability to generalize to new observers and new postures using random repeated subsampling validation. As hypothesized, the automatic recognition models achieve recognition rates comparable to the human base rates.
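The random repeated subsampling validation mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the posture feature vectors, the nearest-centroid classifier, and all function names are hypothetical placeholders standing in for the paper's low-level posture descriptions and recognition models. The essential idea shown is only the validation loop: repeatedly draw a random train/test split, train on one part, score on the other, and average the accuracies.

```python
import random


def nearest_centroid_predict(train_X, train_y, x):
    # Toy classifier standing in for the paper's recognition models:
    # assign x to the class whose mean feature vector (centroid) is closest.
    classes = {}
    for features, label in zip(train_X, train_y):
        classes.setdefault(label, []).append(features)
    best_label, best_dist = None, float("inf")
    for label, rows in classes.items():
        centroid = [sum(col) / len(rows) for col in zip(*rows)]
        dist = sum((a - b) ** 2 for a, b in zip(centroid, x))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label


def repeated_subsampling_accuracy(X, y, n_repeats=20, test_frac=0.25, seed=0):
    # Random repeated subsampling validation: for each repeat, shuffle the
    # sample indices, hold out a random test fraction, fit on the rest,
    # and record test accuracy; return the mean over all repeats.
    rng = random.Random(seed)
    n_test = max(1, int(len(X) * test_frac))
    accuracies = []
    for _ in range(n_repeats):
        idx = list(range(len(X)))
        rng.shuffle(idx)
        test_idx, train_idx = idx[:n_test], idx[n_test:]
        train_X = [X[i] for i in train_idx]
        train_y = [y[i] for i in train_idx]
        correct = sum(
            nearest_centroid_predict(train_X, train_y, X[i]) == y[i]
            for i in test_idx
        )
        accuracies.append(correct / n_test)
    return sum(accuracies) / n_repeats
```

Unlike k-fold cross-validation, the test sets of different repeats may overlap, which is why the averaged score is compared against a base rate (here, the human observers' agreement level) rather than interpreted in isolation.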