Abstract:
Humans are deeply affective beings that expect other human-like agents to be sensitive to and express their own affect. Hence, complex artificial agents that are not capable of affective communication will inevitably cause humans harm, which suggests that affective artificial agents should be developed. Yet, affective artificial agents with genuine affect will then themselves have the potential for suffering, which leads to the “Affect Dilemma for Artificial Agents,” and more generally, artificial systems. In this paper, we discuss this dilemma in detail and argue that we should nevertheless develop affective artificial agents; in fact, we might be morally obligated to do so if they end up being the lesser evil compared to (complex) artificial agents without affect. Specifically, we propose five independent reasons for the utility of developing artificial affective agents and also discuss some of the challenges that we have to address as part of this endeavor.
Published in: IEEE Transactions on Affective Computing (Volume: 3, Issue: 4, Fourth Quarter 2012)