Forward chaining parallel inference


Authors: Labhart, J.; Rowe, M.C.; Matney, S.; Carrow, S. (Merit Technology Inc., Plano, TX, USA)

The completed and ongoing efforts of the parallel inferencing performance evaluation and refinement project (PIPER) are described. PIPER Phase I produced an initial parallel inference engine (expert system tool kit) for the BBN Butterfly Plus, a computer consisting of up to 256 processor nodes interconnected via a butterfly switch. The Phase I inference engine is based on the Merit enhanced traversal engine (METE) algorithm, an extension of C.L. Forgy's (1979) RETE algorithm. To evaluate the efficacy of this design and implementation, an iterating 108-rule knowledge base was composed. This rule set was designed to roughly simulate the information-rich nature of the target application domain, Strategic Defense Initiative contact discrimination, and was processed on 7 to 85 Butterfly Plus processor nodes. Three uniprocessor control groups were also used to gauge speed-up. Using the control group that produced the most conservative speed-up factors, the Phase I inference engine achieved a maximum true speed-up in excess of 29.
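For readers unfamiliar with forward chaining, the loop below is a minimal sketch of the naive match-fire cycle that RETE-family algorithms (including the paper's METE extension) are designed to speed up: a naive engine re-matches every rule against all facts on every cycle, whereas RETE caches partial matches between cycles. The function and rule names (`forward_chain`, `grandparent_rule`) and the toy fact base are illustrative assumptions, not taken from the paper.

```python
def forward_chain(facts, rules):
    """Naive forward chaining: fire all rules until no new facts appear.

    facts: a set of hashable facts (here, tuples).
    rules: functions mapping the current fact set to the facts they derive.
    Note the cost RETE avoids: every rule rescans all facts on every cycle.
    """
    facts = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(facts) - facts  # match phase, re-evaluated from scratch
        if not new:                     # quiescence: no rule derives anything new
            return facts
        facts |= new                    # act phase: assert derived facts

# Toy rule: parent(a, b) and parent(b, c) imply grandparent(a, c).
def grandparent_rule(facts):
    parents = [(a, b) for (p, a, b) in facts if p == "parent"]
    return {("grandparent", a, c)
            for (a, b) in parents
            for (b2, c) in parents
            if b == b2}

facts = {("parent", "ann", "bob"), ("parent", "bob", "cal")}
result = forward_chain(facts, [grandparent_rule])
# result now contains ("grandparent", "ann", "cal")
```

Parallel engines such as the one described above distribute this match phase across processor nodes, which is where the reported speed-up comes from.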

Published in:

Proceedings of the IEEE 1990 National Aerospace and Electronics Conference (NAECON 1990)

Date of Conference:

21-25 May 1990