Text classification continues to be one of the most actively researched problems owing to the ever-increasing volume of electronic documents and digital data. Naive Bayes is a simple and effective classifier for data mining tasks, but it does not yield satisfactory results on automatic text classification problems. In this paper, the performance of the naive Bayes classifier is analyzed by training it only on the positive features selected by CHIR, a chi-square-statistics-based feature selection method. Feature selection is the most important preprocessing step for improving the efficiency and accuracy of text classification algorithms, as it removes redundant and irrelevant terms from the training corpus. Experiments were conducted on randomly selected training sets, and the performance of the classifier with words as features was analyzed. The proposed method achieves higher classification accuracy than other naive-Bayes-based methods on the 20 Newsgroups benchmark.
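The pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it uses plain chi-square feature selection from scikit-learn as a simplified stand-in for CHIR (which refines the chi-square statistic and selects only positive features), a tiny hypothetical toy corpus in place of 20 Newsgroups, and a multinomial naive Bayes classifier over word-count features.

```python
# Sketch: chi-square feature selection feeding a naive Bayes text classifier.
# Assumptions: scikit-learn's chi2 scorer stands in for CHIR, and the toy
# corpus/labels below are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical two-class training corpus (stand-in for 20 Newsgroups).
docs = [
    "the match ended with a late goal",
    "the striker scored a goal in the match",
    "the cpu executes machine instructions",
    "the compiler translates code to instructions",
]
labels = ["sport", "sport", "tech", "tech"]

# Bag-of-words counts -> keep the k highest-scoring terms under the
# chi-square statistic -> train multinomial naive Bayes on the reduced space.
clf = make_pipeline(
    CountVectorizer(),
    SelectKBest(chi2, k=5),
    MultinomialNB(),
)
clf.fit(docs, labels)

prediction = clf.predict(["a goal decided the match"])
```

Removing low-scoring terms before training is the point of the preprocessing step: the classifier then estimates class-conditional word probabilities only over terms that discriminate between classes, rather than over the full (largely redundant) vocabulary.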