In time series classification, the nearest neighbor (NN) method is known to compare well against other methods over a wide range of benchmark data. However, adapting its instance-based learning format to application-specific goals, such as optimizing alternative performance measures or cost-sensitive learning, is not straightforward. In this paper, we extend the effectiveness of the NN model to practical cost settings and performance goals using a simple representation that takes the approximate density of each class as the numerical features of the data. Since the nearest neighbor model exploits such approximations to predict the class with the largest posterior probability, one can define a linear discriminative function of these features that performs identically to the 1-NN classifier. Further, the weight parameters of the function, which intuitively represent the exponential weights of examples in the density estimation, can be trained to minimize the misclassification cost or to optimize an alternative performance measure such as F1. We evaluate the proposed representation on multi-class classification and skewed class distribution problems using various public benchmarks. The results show that our approach outperforms the nearest neighbor and SVM classifiers based on the original time series features.
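A minimal sketch of the core idea, under assumptions not spelled out in the abstract: one plausible per-class density feature is a kernel applied to the distance from a query to its nearest training example of each class. With unit weights, taking the argmax of a linear function over these features reproduces the 1-NN decision, since the kernel is monotone decreasing in distance; the weights could then be retrained for a cost-sensitive objective. Function names and the Gaussian-like kernel here are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def class_density_features(x, X_train, y_train, classes):
    # Illustrative per-class density approximation: a kernel on the
    # distance from x to the nearest training example of each class.
    feats = []
    for c in classes:
        Xc = X_train[y_train == c]
        d = np.min(np.linalg.norm(Xc - x, axis=1))
        feats.append(np.exp(-d))  # larger when x lies close to class c
    return np.array(feats)

# Tiny synthetic two-class dataset (stand-in for time series features)
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y_train = np.array([0] * 20 + [1] * 20)
classes = [0, 1]

x = rng.normal(3, 1, 5)
f = class_density_features(x, X_train, y_train, classes)

# With unit weights, the linear rule argmax_c w_c * f_c picks the class
# whose nearest example is closest to x -- exactly the 1-NN prediction.
w = np.ones(len(classes))
pred_linear = classes[int(np.argmax(w * f))]

pred_1nn = y_train[int(np.argmin(np.linalg.norm(X_train - x, axis=1)))]
assert pred_linear == pred_1nn
```

Retraining `w` (rather than fixing it to ones) is what would let the same representation target misclassification cost or F1 instead of plain accuracy.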