The explanation ability of a fuzzy rule-based classifier is its ability to explain, in a convincing way, why an input pattern is classified as a particular class. This ability is especially important when fuzzy rule-based classifiers are used as support systems for human users, who often want to know why the current input pattern is classified as a particular class. Explanation ability looks similar to interpretability; they are, however, clearly different concepts. Whereas explanation ability is directly related to the classification of each pattern, interpretability is usually independent of classification results. Interpretability has been taken into account in the multiobjective design of fuzzy rule-based classifiers, but explanation ability has not been used in fuzzy rule-based classifier design because its quantitative definition is very difficult. In this paper, we discuss various factors related to a quantitative definition of the explanation ability of fuzzy rule-based classifiers. Using simple numerical examples, we show that minimizing the complexity of fuzzy rule-based classifiers does not always maximize their explanation ability. We also show that the accuracy of fuzzy rules is related to their explanation ability.
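To make the notion of explanation ability concrete, the following is a minimal sketch of a fuzzy rule-based classifier with single-winner classification: the rule with the highest firing strength decides the class, and its antecedent can be shown to the user as an explanation of that decision. The rules, linguistic terms, and membership functions below are hypothetical illustrations, not taken from the paper.

```python
def triangular(x, a, b, c):
    """Triangular membership function with peak at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical linguistic terms for attributes normalized to [0, 1].
TERMS = {
    "small":  (-0.5, 0.0, 0.5),
    "medium": (0.0, 0.5, 1.0),
    "large":  (0.5, 1.0, 1.5),
}

# Hypothetical rules: (antecedent term for each attribute, consequent class).
RULES = [
    (("small", "small"), "class 1"),
    (("large", "small"), "class 2"),
    (("medium", "large"), "class 2"),
]

def firing_strength(rule_terms, pattern):
    """Product of antecedent membership values for the input pattern."""
    strength = 1.0
    for term, x in zip(rule_terms, pattern):
        a, b, c = TERMS[term]
        strength *= triangular(x, a, b, c)
    return strength

def classify(pattern):
    """Single-winner classification: return the class together with the
    winner rule's antecedent, which serves as the explanation."""
    scored = [(firing_strength(terms, pattern), terms, cls)
              for terms, cls in RULES]
    _, winner_terms, winner_class = max(scored, key=lambda item: item[0])
    return winner_class, winner_terms
```

For the input pattern (0.9, 0.1), the rule "if x1 is large and x2 is small then class 2" has the highest firing strength, so the classifier can answer "class 2, because x1 is large and x2 is small". The sketch also hints at the paper's point: pruning rules to minimize complexity could remove the rule that best matches a given pattern, leaving a winner rule whose antecedent fits the pattern poorly and therefore explains it less convincingly.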