Intrusion detection using k-Nearest Neighbor

2 Author(s)
Govindarajan, M. (Dept. of Comput. Sci. & Eng., Annamalai Univ., Annamalai Nagar, India); Chandrasekaran, R.M.

Data mining is the use of algorithms to extract the information and patterns derived by the knowledge discovery in databases process. Classification maps data into predefined groups or classes. It is often referred to as supervised learning because the classes are determined before examining the data. In many data mining applications that address classification problems, feature selection and model selection are considered key tasks. That is, appropriate input features of the classifier must be selected from a given set of possible features, and the structure parameters of the classifier must be adapted with respect to these features and a given data set. This paper describes feature selection and model selection performed simultaneously for k-nearest neighbor (k-NN) classifiers. In order to reduce the optimization effort, techniques are integrated that accelerate and improve the classifier significantly: hybrid k-NN and comparative cross-validation. The feasibility and the benefits of the proposed approach are demonstrated by means of a data mining problem: intrusion detection in computer networks. It is shown that, compared to the earlier k-NN technique, the run time is reduced by up to 0.01 % and 0.06 %, while error rates are lowered by up to 0.002 % and 0.03 % for normal and abnormal behaviour, respectively. The algorithm is independent of specific applications, so many of its ideas and solutions can be transferred to other classifier paradigms.
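To make the classification setting concrete, the sketch below shows a minimal, generic k-NN classifier applied to toy network-connection records. This is only an illustration of the basic k-NN idea the abstract builds on; it is not the paper's hybrid k-NN or comparative cross-validation method, whose details are not given here, and the two features (duration, bytes sent) are hypothetical.

```python
import math
from collections import Counter

def knn_classify(train, labels, query, k=3):
    """Label `query` by majority vote among its k nearest training points
    (Euclidean distance). A minimal, generic k-NN sketch, not the paper's
    hybrid k-NN."""
    # Sort all training points by distance to the query.
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    # Majority vote over the k closest neighbours.
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Toy connection records: (duration in s, bytes sent) -- hypothetical features.
train = [(0.1, 200), (0.2, 180), (5.0, 9000), (6.0, 8800)]
labels = ["normal", "normal", "abnormal", "abnormal"]

print(knn_classify(train, labels, (0.15, 190)))  # -> normal
print(knn_classify(train, labels, (5.5, 9100)))  # -> abnormal
```

Feature selection, in this framing, amounts to choosing which columns of the record enter the distance computation; model selection amounts to choosing parameters such as k.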

Published in:

2009 First International Conference on Advanced Computing (ICAC 2009)

Date of Conference:

13-15 Dec. 2009