A Study of the Neighborhood Counting Similarity

2 Author(s)
Hui Wang, Ulster Univ., Newtownabbey; F. Murtagh

The neighborhood counting measure is the number of all common neighborhoods between a pair of data points. Through the notion of neighborhood, it can be used as a similarity measure for different types of data: multivariate, sequence, and tree data. It has been shown that this measure is closely related to a secondary probability G, which is defined in terms of a primary probability P of interest to a problem, and that the G probability can be estimated by aggregating neighborhood counts. The following questions can be asked: What is the relationship between this similarity measure and the primary probability P, especially for the task of classification? How does this similarity measure compare with the Euclidean distance, since they are directly comparable? How does the G probability estimation compare with the popular kernel density estimation for the task of classification? These questions are answered in this paper, some theoretically and some experimentally. It is shown that G is a linear function of P and, therefore, a G-based Bayes classifier is equivalent to a P-based Bayes classifier. It is also shown that a weighted k-nearest neighbor classifier equipped with the neighborhood counting measure is, in fact, an approximation of the G-based Bayes classifier. It is further shown that the G probability leads to a probability estimator similar in spirit to the kernel density estimator. New experimental results are presented, showing that this measure compares favorably with the Euclidean distance not only on multivariate data but also on time-series data. New experimental results are also presented for probability/density estimation, showing that the G probability estimation can outperform the kernel density estimation in classification tasks.
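For illustration, the minimal Python sketch below shows one way a neighborhood counting similarity could be computed for multivariate data with ordinal attributes and plugged into a weighted k-nearest neighbor classifier, as the abstract describes. It assumes each attribute takes integer values in {1, ..., N_i} and treats a neighborhood as an axis-aligned product of intervals; the function names, the exact counting formula, and the toy data are illustrative assumptions, not the paper's own code.

```python
# Illustrative sketch (not the paper's exact formulation): neighborhood
# counting similarity for multivariate data whose attributes take integer
# values in {1, ..., N_i}. A "neighborhood" is assumed to be an axis-aligned
# product of intervals; the similarity of x and y is the number of such
# neighborhoods covering both points.
from collections import Counter


def neighborhood_count(x, y, domain_sizes):
    """Number of axis-aligned interval neighborhoods covering both x and y.

    For an ordinal attribute with domain {1, ..., N}, an interval [l, u]
    covers both x_i and y_i iff l <= min(x_i, y_i) and u >= max(x_i, y_i),
    giving min(x_i, y_i) * (N - max(x_i, y_i) + 1) choices; counts multiply
    across attributes.
    """
    count = 1
    for xi, yi, n in zip(x, y, domain_sizes):
        lo, hi = min(xi, yi), max(xi, yi)
        count *= lo * (n - hi + 1)
    return count


def weighted_knn_predict(query, data, labels, domain_sizes, k=5):
    """Weighted k-NN: the k most similar neighbors vote, weighted by count."""
    scored = sorted(
        ((neighborhood_count(query, x, domain_sizes), lab)
         for x, lab in zip(data, labels)),
        reverse=True,
    )[:k]
    votes = Counter()
    for sim, lab in scored:
        votes[lab] += sim
    return votes.most_common(1)[0][0]


if __name__ == "__main__":
    # Toy data: three attributes, each on the ordinal scale {1, ..., 5}.
    domain_sizes = [5, 5, 5]
    data = [(1, 2, 1), (2, 1, 2), (4, 5, 4), (5, 4, 5)]
    labels = ["a", "a", "b", "b"]
    print(weighted_knn_predict((2, 2, 2), data, labels, domain_sizes, k=3))
```

Weighting each neighbor's vote by its neighborhood count is what makes this an approximation of the G-based Bayes classifier discussed in the abstract, since aggregating the counts estimates the G probability.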

Published in:

IEEE Transactions on Knowledge and Data Engineering (Volume: 20, Issue: 4)