A framework for validation of rule-based systems

Authors: Knauf, R. (Fac. of Comput. Sci. & Autom., Tech. Univ. of Ilmenau, Germany); Gonzalez, A.J.; Abel, T.

We describe a complete methodology for the validation of rule-based expert systems. This methodology is presented as a five-step process with two central themes: 1) creating a minimal set of test inputs that adequately covers the domain represented in the knowledge base; and 2) a Turing Test-like methodology that evaluates the system's responses to the test inputs and compares them to the responses of human experts. The development of the minimal set of test inputs takes into consideration various criteria, both user-defined and domain-specific. These criteria are used to reduce the potentially very large set of test inputs to one that is practical, keeping in mind the nature and purpose of the developed system. The Turing Test-like evaluation methodology uses a single panel of experts both to evaluate each set of test cases and to compare the results with those of the expert system, as well as with those of the other experts. Our hypothesis is that much can be learned about the experts themselves by having them anonymously evaluate each other's responses to the same test inputs, leaving us better able to determine the validity of an expert system. Depending on its purpose, we introduce various ways to express validity, as well as a technique that uses the validity assessment to refine the rule base. Lastly, we describe a partial implementation of the test input minimization process on a small but nontrivial expert system. The effectiveness of the technique was evaluated by seeding errors into the expert system, generating the appropriate set of test inputs, and determining whether the errors could be detected by the suggested methodology.
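The test-input reduction idea can be illustrated with a minimal sketch: enumerate the exhaustive set of candidate inputs from each input attribute's value set, then prune it with user-defined and domain-specific criteria. The attribute names, values, and the sample criterion below are hypothetical illustrations, not taken from the paper.

```python
from itertools import product

# Hypothetical input attributes and their possible values; a real
# knowledge base would supply these from its rule conditions.
attributes = {
    "temperature": ["low", "normal", "high"],
    "pressure": ["low", "high"],
    "valve": ["open", "closed"],
}

def all_test_inputs(attrs):
    """Cartesian product of attribute values: the exhaustive test set."""
    names = sorted(attrs)
    return [dict(zip(names, combo)) for combo in product(*(attrs[n] for n in names))]

def reduce_test_inputs(cases, criteria):
    """Keep only the cases that satisfy every reduction criterion."""
    return [c for c in cases if all(crit(c) for crit in criteria)]

# Example domain-specific criterion (assumed for illustration): high
# pressure with an open valve is physically impossible in this domain,
# so such combinations need not be tested.
criteria = [lambda c: not (c["pressure"] == "high" and c["valve"] == "open")]

full = all_test_inputs(attributes)       # 3 * 2 * 2 = 12 candidate cases
reduced = reduce_test_inputs(full, criteria)
print(len(full), len(reduced))           # prints: 12 9
```

The reduced set would then be presented to both the expert system and the expert panel for the Turing Test-like comparison described above.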

Published in: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics (Volume: 32, Issue: 3)