Cognitive fit and an intelligent agent for a word processor: should users take all that advice?

4 Author(s)
D. F. Galletta ; Pittsburgh Univ., PA, USA ; A. Everard ; A. Durcikova ; B. Jones

While intelligent agents have been developed to provide objective, expert advice to users, most experienced users know that such advice should not be followed blindly. Cognitive fit theory was developed about ten years ago to support the notion that tools should fit the tasks for which they were designed, in light of the user's capabilities. Recently, intelligent agents have been provided to nearly every computer user as part of the Microsoft Office suite; in nearly all of its applications, suggestions pop up as the software encounters recognized patterns. Users' capabilities vary widely, however. Some users notice anomalies in the advice, and their expertise leads them to override it. The computer credibility literature predicts that other users will take the advice without questioning it; this paper asserts that this occurs when there is a lack of cognitive fit. This study examined one particular intelligent agent in Microsoft Word, the "advisor." In the experiment, 33 undergraduate students were exposed to a passage of text with five repetitions each of three error conditions: (1) errors flagged correctly, (2) items flagged by the advisor that were not truly errors, and (3) errors missed by the advisor. The hypotheses were that (1) the advisor would in general improve performance, (2) expertise in English would in general improve performance, and (3) the advisor would help those with higher English skills more than those with lower English skills. Verbal SAT scores, obtained with the subjects' permission, served as the measure of English skills. Analysis of the data showed that all three hypotheses were supported overall. The paper also provides more detailed results for each error type. The results imply the need for careful use of intelligent agents: agents are no substitute for user expertise and could indeed degrade the performance of non-expert users.

Published in:

Proceedings of the 36th Annual Hawaii International Conference on System Sciences, 2003

Date of Conference:

6-9 Jan. 2003