Over the last decade, nonlinear kernel-based learning methods have been widely used in the sciences and in industry to solve, e.g., classification, regression, and ranking problems. While users are generally satisfied with the performance of this powerful technology, there is an emerging need to gain a better understanding of both the learning machine and the data analysis problem to be solved. Opening the nonlinear black box, however, is a notoriously difficult challenge. In this review, we report on a set of recent methods that can be universally used to make kernel methods more transparent. In particular, we discuss relevant dimension estimation (RDE), which allows one to assess the underlying complexity and noise structure of a learning problem and thus to distinguish between scenarios of high or low noise and high or low complexity. Moreover, we introduce a novel local technique based on RDE for quantifying the reliability of the learned predictions. Finally, we report on techniques that can explain individual nonlinear predictions. In this manner, our novel methods not only help to gain further knowledge about the nonlinear signal processing problem itself but also broaden the general usefulness of kernel methods in practical signal processing applications.
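To make the RDE idea concrete, the following is a minimal, simplified sketch (not the paper's exact estimator): the labels are projected onto the eigenbasis of the kernel matrix, and the relevant dimension is chosen as the split that best separates a signal-carrying head of the label spectrum from a noise-like tail under a two-variance model. The function names, the RBF kernel choice, and the likelihood criterion here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, width):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * width**2))

def relevant_dimension(K, y):
    """Estimate the relevant dimension of labels y w.r.t. kernel K.

    Sketch of the RDE principle: project y onto the kernel eigenbasis
    (sorted by decreasing eigenvalue) and pick the split d minimizing a
    two-variance negative log-likelihood -- the first d coefficients
    are modeled as signal, the remaining ones as noise.
    """
    n = len(y)
    evals, evecs = np.linalg.eigh(K)
    order = np.argsort(evals)[::-1]      # decreasing eigenvalue order
    evecs = evecs[:, order]
    s = evecs.T @ y                      # label spectrum
    best_d, best_nll = 1, np.inf
    for d in range(1, n):
        v_head = np.mean(s[:d] ** 2) + 1e-12   # signal+noise variance
        v_tail = np.mean(s[d:] ** 2) + 1e-12   # noise variance
        nll = d * np.log(v_head) + (n - d) * np.log(v_tail)
        if nll < best_nll:
            best_d, best_nll = d, nll
    return best_d
```

For a smooth target function with moderate noise, the label spectrum concentrates in the few leading kernel principal components, so the estimated dimension comes out small; for pure noise, no clear split exists and the estimate is uninformative, which is exactly the high/low noise distinction the abstract refers to.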