
Restricted Minimum Error Entropy Criterion for Robust Classification


Abstract:

The minimum error entropy (MEE) criterion is a powerful approach for non-Gaussian signal processing and robust machine learning. However, the instantiation of MEE for robust classification remains largely absent from the literature. The original MEE focuses purely on minimizing Renyi's quadratic entropy of the prediction errors, which can exhibit inferior capability in noisy classification tasks. To this end, we analyze the optimal error distribution in the presence of adverse outliers and introduce a specific codebook for restriction, which drives the error distribution toward the optimal case. Half-quadratic-based optimization and a convergence analysis of the proposed learning criterion, called restricted MEE (RMEE), are provided. Experimental results with logistic regression and extreme learning machines on synthetic data and UCI datasets, respectively, demonstrate the superior robustness of RMEE. Furthermore, we evaluate RMEE on a noisy electroencephalogram dataset to strengthen its practical impact.
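As background for the criterion the abstract refers to, the standard MEE loss estimates Renyi's quadratic entropy of the prediction errors with a Parzen (Gaussian kernel) density estimate and minimizes it, which is equivalent to maximizing the so-called information potential. The sketch below is a minimal, generic illustration of that classical formulation (function names and the kernel bandwidth `sigma` are our own choices, not from the paper):

```python
import numpy as np

def gaussian_kernel(x, sigma):
    """Gaussian kernel with bandwidth sigma."""
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def mee_loss(errors, sigma=1.0):
    """Parzen estimate of Renyi's quadratic entropy H2 of the errors.

    H2 = -log V, where V is the information potential
    V = (1/N^2) * sum_i sum_j G_{sigma*sqrt(2)}(e_i - e_j).
    MEE minimizes H2, i.e., maximizes V by concentrating the errors.
    """
    e = np.asarray(errors, dtype=float)
    diff = e[:, None] - e[None, :]                        # pairwise error differences
    V = gaussian_kernel(diff, np.sqrt(2) * sigma).mean()  # information potential
    return -np.log(V)
```

Tightly clustered errors yield a lower entropy than widely spread errors, which is why minimizing this loss concentrates the error distribution.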
Published in: IEEE Transactions on Neural Networks and Learning Systems ( Volume: 33, Issue: 11, November 2022)
Page(s): 6599 - 6612
Date of Publication: 02 June 2021

PubMed ID: 34077373


I. Introduction

Many tasks in machine learning require robustness, meaning that the learning process of a model is less affected by noise [1]. Unlike noise in regression, where an attribute diverges from its expected distribution, noise in classification is more intricate and can be systematically divided into two categories: attribute noise and label noise [2], [3]. Attribute (or feature) noise refers to measurement errors arising from noisy sensors, recordings, communications, and data storage, while label noise refers to incorrect labeling. Label noise can share the same sources as attribute noise [4], such as communication errors, but it mainly arises from expert-related factors [5]: 1) unreliable labeling due to insufficient information; 2) unreliable nonexpert annotators hired to reduce cost; and 3) subjective labeling. Moreover, classes are not always entirely distinct, e.g., lived versus died [6]. Outliers, a more adverse case of noise [7], usually cause serious performance degradation. We define attribute outliers as samples with large attribute values that lie on the opposite side of the expected decision boundary, and label outliers as recognizable samples that are nevertheless assigned wrong labels.
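The two outlier types described above can be made concrete with a toy example. The following sketch (entirely hypothetical data, not from the paper's experiments) injects both kinds of corruption into a one-dimensional binary classification set: label outliers flip the labels of otherwise recognizable samples, while attribute outliers place large feature values on the wrong side of the expected decision boundary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D binary data: class 0 centered at -1, class 1 centered at +1.
X = np.concatenate([rng.normal(-1, 0.3, 50), rng.normal(1, 0.3, 50)])
y = np.concatenate([np.zeros(50, dtype=int), np.ones(50, dtype=int)])

# Label outliers: recognizable samples assigned the wrong label.
flip = rng.choice(len(y), size=5, replace=False)
y_noisy = y.copy()
y_noisy[flip] = 1 - y_noisy[flip]

# Attribute outliers: large attribute values on the opposite side of the
# expected decision boundary (class-0 samples pushed deep into class-1 territory).
X_noisy = X.copy()
X_noisy[:3] = 8.0
```

Under a squared-error or cross-entropy objective, the three attribute outliers exert a disproportionately large pull on the decision boundary, which is exactly the degradation that robust criteria such as MEE aim to suppress.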
