
The Good Our Field Can Hope to Do, the Harm It Should Avoid

Author: R. Cowie; School of Psychology, Queen's University Belfast, Belfast, UK

This paper tries to achieve a balanced view of the ethical issues raised by emotion-oriented technology as it is, rather than as it might be imagined. A high proportion of applications seem ethically neutral. Uses in entertainment and allied areas do no great harm or good. Empowering professions may do either, but regulatory systems already exist. Ethically positive aspirations involve mitigating problems that already exist by supporting humans in emotion-related judgments, by replacing technology that treats people in dehumanized and/or demeaning ways, and by improving access for groups who struggle with existing interfaces. Emotion-oriented computing may also contribute to revaluing human faculties other than pure intellect. Many potential negatives apply to technology as a whole. Concerns specifically related to emotion involve creating a lie, by simulating emotions that the systems do not have, or promoting mechanistic conceptions of emotion. Intermediate issues arise where more general problems could be exacerbated, such as helping systems to sway human choices or encouraging humans to choose virtual worlds rather than reality. "SIIF" systems (semi-intelligent information filters) are particularly problematic: they use simplified rules to make complex judgments about people, with potentially serious consequences. The picture is one of balances to recognize and negotiate, not uniform good or evil.

Published in:

IEEE Transactions on Affective Computing (Volume: 3, Issue: 4)