I. Introduction
The k-Nearest Neighbor (KNN) method, also known as the k-Nearest Neighbor Rule (KNNR), is a non-parametric classification method known for its simplicity, effectiveness, good performance, and robustness [1], [2]. The method works by finding the k patterns (among all the training patterns of all classes) closest to the input pattern and then determining the decision class by a voting technique. Among the weaknesses of the KNN method are its sensitivity to less relevant features and to the neighborhood size k [3], [4]. The exact value of k is relatively difficult to determine, because in some cases it must be high, while in others it must be very low. The most pressing problem in KNN is the voting technique, which yields low accuracy on several complex, randomly distributed datasets [5]. To overcome this weakness of KNN, we propose a new scheme that clusters the dataset so that the number of clusters is greater than the number of data classes. Each cluster is then selected by a commission, so the scheme does not use the voting technique of the standard KNN method.
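For concreteness, the standard KNN decision rule with majority voting discussed above can be sketched as follows. This is a minimal illustration, not the proposed method; the function and variable names are ours and do not refer to any particular library.

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    """Classify x by majority vote among its k nearest training patterns."""
    # Euclidean distance from the input pattern x to every training pattern
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k closest training patterns
    nearest = np.argsort(dists)[:k]
    # Majority vote over the class labels of those neighbors
    votes = Counter(y_train[nearest])
    return votes.most_common(1)[0][0]

# Toy usage: two classes in two dimensions
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.95, 1.0]), k=3))  # -> 1

The weaknesses cited above are visible in this sketch: the distance computation weights all features equally (sensitivity to irrelevant features), and the final vote depends entirely on the choice of k.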