The paper presents an analysis of the kurtosis performance surface as applied to linear estimation. The analysis concentrates on a modified kurtosis (MK) function used in implementations of the least mean kurtosis (LMK) adaptive algorithm. The MK function is shown to make the LMK algorithm applicable even to Gaussian inputs, for which the plain kurtosis of the error signal is identically zero. The minimum of the MK function is derived and shown to be unique and to coincide with the Wiener solution of the mean-square-error (MSE) estimation problem. A quantitative comparison of the MSE and MK functions explains why the LMK adaptive algorithm converges faster than MSE-based algorithms during the initial learning phase but more slowly as it approaches the steady state.
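The Gaussian-input point can be made concrete with a short Monte Carlo sketch. The scalar model below (coefficient `h = 0.7`, noise level `0.3`, the sample size) is an illustrative assumption, not taken from the paper: when the input and desired signals are jointly Gaussian, the estimation error e(w) = d − w·x is Gaussian for every choice of w, so its kurtosis E[e⁴] − 3E²[e²] vanishes everywhere and offers no gradient information, whereas the MSE surface E[e²] has a unique minimum at the Wiener solution w = h. This is why an unmodified kurtosis criterion cannot drive adaptation for Gaussian signals, and why the MK modification analyzed in the paper is needed.

```python
import numpy as np

# Illustrative scalar estimation problem (hypothetical values, not from the paper):
# d = h*x + noise, with x and the noise both Gaussian, so e(w) = d - w*x is
# Gaussian for every w.
rng = np.random.default_rng(0)
N = 1_000_000
h = 0.7                                    # assumed "true" coefficient
x = rng.standard_normal(N)                 # Gaussian input
d = h * x + 0.3 * rng.standard_normal(N)   # Gaussian desired signal

def mse_and_kurtosis(w):
    """Sample MSE and sample kurtosis of the error e = d - w*x."""
    e = d - w * x
    m2 = np.mean(e**2)                     # estimate of E[e^2] (the MSE)
    m4 = np.mean(e**4)                     # estimate of E[e^4]
    return m2, m4 - 3.0 * m2**2            # kurtosis: E[e^4] - 3*E^2[e^2]

# Sweep w: the MSE varies and is smallest at the Wiener solution w = h,
# while the kurtosis stays near zero at every point of the sweep.
for w in (0.0, 0.7, 1.4):
    m2, k = mse_and_kurtosis(w)
    print(f"w={w:4.1f}  MSE={m2:.3f}  kurtosis={k:+.4f}")
```

For non-Gaussian signals the error kurtosis does vary with w, which is why kurtosis-driven adaptation is usually motivated in that setting; the paper's contribution, per the abstract, is showing that the MK function used in practical LMK implementations restores a unique minimum at the Wiener solution even in the Gaussian case.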