The most obvious architectural approach to high-speed fuzzy inference is to exploit the temporal and spatial parallelism inherent in a fuzzy inference execution. However, the rules active in any given inference execution are often only a small fraction of the total rule set. In this paper, we present a new architecture that uses fewer hardware resources by discarding non-active rules at an early pipeline stage. Implementation data demonstrate that the proposed architecture achieves good results in terms of both inference speed and chip area.
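The filtering idea is easier to see in software form. The following is a minimal C sketch, not the paper's hardware pipeline: the triangular membership functions, the two-input rule table, and all names (`tri`, `Rule`, `mf`) are illustrative assumptions. It shows how a rule whose antecedent degrees include a zero can be dropped immediately after fuzzification, so only active rules reach the inference and defuzzification stages.

```c
/* Hedged software analogue of early active-rule filtering in fuzzy
 * inference; parameters and rule table are illustrative assumptions,
 * not the paper's design. */
#include <stdio.h>

#define N_SETS  3   /* fuzzy sets per input (e.g., LOW, MED, HIGH) */
#define N_RULES 9   /* total rules: one per antecedent combination */

/* Triangular membership function with feet a,c and peak b. */
static double tri(double x, double a, double b, double c) {
    if (x <= a || x >= c) return 0.0;
    return (x < b) ? (x - a) / (b - a) : (c - x) / (c - b);
}

typedef struct { int s1, s2; double out; } Rule;  /* antecedent set indices, crisp consequent */

int main(void) {
    /* Assumed membership parameters for both inputs over [0,1]. */
    const double mf[N_SETS][3] = { {-0.5, 0.0, 0.5}, {0.0, 0.5, 1.0}, {0.5, 1.0, 1.5} };
    const Rule rules[N_RULES] = {
        {0,0,0.0},{0,1,0.2},{0,2,0.4},{1,0,0.3},{1,1,0.5},
        {1,2,0.7},{2,0,0.6},{2,1,0.8},{2,2,1.0}
    };
    double x1 = 0.3, x2 = 0.7;                    /* crisp inputs */

    /* Stage 1: fuzzification of both inputs. */
    double m1[N_SETS], m2[N_SETS];
    for (int i = 0; i < N_SETS; i++) {
        m1[i] = tri(x1, mf[i][0], mf[i][1], mf[i][2]);
        m2[i] = tri(x2, mf[i][0], mf[i][1], mf[i][2]);
    }

    /* Stage 2: discard non-active rules early -- a rule is active only
     * if every antecedent membership degree is nonzero. */
    double num = 0.0, den = 0.0;
    int active = 0;
    for (int r = 0; r < N_RULES; r++) {
        double d1 = m1[rules[r].s1], d2 = m2[rules[r].s2];
        if (d1 == 0.0 || d2 == 0.0) continue;     /* non-active: filtered here */
        double w = (d1 < d2) ? d1 : d2;           /* min firing strength */
        num += w * rules[r].out;
        den += w;
        active++;
    }

    /* Stage 3: weighted-average defuzzification over active rules only. */
    printf("active rules: %d of %d, output: %f\n",
           active, N_RULES, den > 0.0 ? num / den : 0.0);
    return 0;
}
```

With these assumed parameters only 4 of the 9 rules fire, which illustrates the observation the architecture exploits: filtering at fuzzification time means the later inference and defuzzification stages need only process the small active subset, rather than sizing hardware for the full rule base.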