Do agents need understanding?

Author: K. Haase, MIT Media Lab., Heidelberg, Germany

Several important ideas and questions arise within AI work on agents. I address one of these questions by asking how much "human-like" understanding is necessary for a useful agent. One of the lessons of the new wave of AI research in the late 1980s and early 1990s was a greater appreciation for the artifacts introduced by a priori assumptions about the description of problems. Systems of surprising effectiveness, flexibility, and simplicity could solve apparently complicated problems by dropping the assumptions about identity, reference, and generality imported from their common-sense linguistic formulation. By using encodings or representations that are simultaneously more specialized in some ways (by being task-specific) and more general in others (by their systemic regard for embeddedness as a design constraint), these systems introduced a different way of thinking about representation and intelligence. The intellectual contribution of the idea of an intelligent agent lies in a similar attitude toward the tasks of helping us deal with the mass of information and responsibilities around us.

Published in:

IEEE Expert (Volume: 12, Issue: 4)