Participants haptically (vs. visually) classified universal facial expressions of emotion (FEEs) depicted in simple 2-D raised-line displays. Experiments 1 and 2 established that haptic classification was well above chance; face-inversion effects further indicated that the upright orientation was privileged. Experiment 2 added a third condition in which the normal configuration of the upright features was spatially scrambled. Results confirmed that configural processing played a critical role: upright FEEs were classified more accurately and confidently than either scrambled or inverted FEEs, which did not differ from one another. Because accuracy in both the scrambled and inverted conditions remained above chance, feature processing also played a role, as confirmed by commonalities in the confusion patterns for upright, inverted, and scrambled faces. Experiment 3 required participants to assign emotional valence (positive/negative) and magnitude to upright and inverted 2-D FEE displays, both visually and haptically. Although emotional magnitude could be assigned using either modality, haptic presentation led to more variable valence judgments. We also documented a new face-inversion effect for emotional valence with visual, but not haptic, presentation. These results suggest that emotions can be interpreted from 2-D displays presented haptically as well as visually; however, emotional impact is judged more reliably by vision than by touch. Potential applications of this work are also considered.