Abstract:
Practical affect recognition needs to be efficient and unobtrusive in interactive contexts. One approach to a robust realtime system is to sense and automatically integrate multiple nonverbal sources. We investigated how users’ touch, and secondarily gaze, perform as affect-encoding modalities during physical interaction with a robot pet, in comparison to more-studied biometric channels. To elicit authentically experienced emotions, participants recounted two intense memories of opposing polarity in Stressed-Relaxed or Depressed-Excited conditions. We collected data (N=30) from a touch sensor embedded under robot fur (force magnitude and location), a robot-adjacent gaze tracker (location), and biometric sensors (skin conductance, blood volume pulse, respiration rate). Cross-validation of Random Forest classifiers achieved best-case accuracy for combined touch-with-gaze approaching that of biometric results: where training and test sets include adjacent temporal windows, subject-dependent prediction was 94% accurate. In contrast, subject-independent Leave-One-Participant-Out predictions resulted in 30% accuracy (chance 25%). Performance was best where participant information was available in both training and test sets. Addressing computational robustness for dynamic, adaptive realtime interactions, we analyzed subsets of our multimodal feature set, varying sample rates and window sizes. We summarize design directions based on these parameters for this touch-based, affective, hard-realtime robot interaction application.
Published in: IEEE Transactions on Affective Computing, Volume 14, Issue 2, April-June 2023
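
The abstract contrasts two evaluation schemes for the Random Forest classifiers: subject-dependent cross-validation, where windows from the same participant (including temporally adjacent windows) can appear in both training and test sets, and subject-independent Leave-One-Participant-Out validation. The following is a minimal illustrative sketch of that contrast using scikit-learn; it is not the authors' code, and the feature dimensions, window counts, and synthetic data are hypothetical placeholders standing in for the windowed touch, gaze, and biometric features described above.

    # Illustrative sketch (not the authors' pipeline): subject-dependent vs.
    # Leave-One-Participant-Out (LOPO) cross-validation of a Random Forest.
    # All sizes below are hypothetical placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import (LeaveOneGroupOut, StratifiedKFold,
                                         cross_val_score)

    rng = np.random.default_rng(0)

    n_participants = 30           # matches the N=30 reported in the abstract
    windows_per_participant = 40  # hypothetical number of feature windows per person
    n_features = 20               # hypothetical touch + gaze + biometric features
    n_classes = 4                 # e.g., Stressed/Relaxed/Depressed/Excited (chance = 25%)

    # Synthetic stand-in for the windowed multimodal feature matrix and labels.
    X = rng.normal(size=(n_participants * windows_per_participant, n_features))
    y = rng.integers(0, n_classes, size=X.shape[0])
    groups = np.repeat(np.arange(n_participants), windows_per_participant)  # participant IDs

    clf = RandomForestClassifier(n_estimators=100, random_state=0)

    # Subject-dependent evaluation: stratified folds over all windows, so a
    # participant's (possibly adjacent) windows can land in both train and test.
    dep_scores = cross_val_score(
        clf, X, y, cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))

    # Subject-independent evaluation: the held-out participant is never seen in training.
    lopo_scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())

    print(f"Subject-dependent accuracy:     {dep_scores.mean():.2f}")
    print(f"Leave-one-participant-out:      {lopo_scores.mean():.2f}")

On real affect data, the gap between these two numbers reflects how much the classifier relies on participant-specific signal, which is the effect the abstract reports (94% subject-dependent vs. 30% LOPO).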