
Language-Capable Robots May Inadvertently Weaken Human Moral Norms


Abstract:

Previous research in moral psychology and human-robot interaction has shown that technology shapes human morality, and research in human-robot interaction has shown that humans naturally perceive robots as moral agents. Accordingly, we propose that language-capable autonomous robots are uniquely positioned among technologies to significantly impact human morality. We therefore argue that it is imperative that language-capable robots behave according to human moral norms and communicate in such a way that their intention to adhere to those norms is clear. Unfortunately, the design of current natural language oriented robot architectures enables certain architectural components to circumvent or preempt those architectures' moral reasoning capabilities. In this paper, we show how this may occur, using clarification request generation in current dialog systems as a motivating example. Furthermore, we present experimental evidence that the types of behavior exhibited by current approaches to clarification request generation can cause robots to (1) miscommunicate their moral intentions and (2) weaken humans' perceptions of moral norms within the current context. This work strengthens previous preliminary findings, and does so within an experimental paradigm that provides increased external and ecological validity over earlier approaches.
Date of Conference: 11-14 March 2019
Date Added to IEEE Xplore: 25 March 2019
Conference Location: Daegu, Korea (South)
