Abstract:
A common objection to the use and development of “emotional” robots is that they are deceptive. This intuitive response assumes 1) that these robots intend to deceive, 2) that their emotions are not real, and 3) that they pretend to be a kind of entity they are not. We use these criteria to judge whether an entity is deceptive in emotional communication (good intention, emotional authenticity, and ontological authenticity). They can also be regarded as conditions of “ideal emotional communication” that saliently operate as presuppositions in our communications with other entities. While the good intention presupposition might be a bias or illusion we really need for sustaining social life, in the future we may want to dispense with the other conditions in order to facilitate cross-entity communication. What we need instead are not “authentic” but appropriate emotional responses, appropriate to relevant social contexts. Criteria for this cannot be given a priori but must be learned, by humans and by robots. In the future, we may learn to live with “emotional” robots, especially if our values change. However, contemporary robot designers who want their robots to be trusted by humans had better take into account current concerns about deception and create robots that do not evoke the threefold deception response.
Published in: IEEE Transactions on Affective Computing (Volume: 3, Issue: 4, Fourth Quarter 2012)