One of the major problems of modern Information Retrieval (IR) systems is the vocabulary problem: the discrepancy between the terms used to describe documents and the terms used by searchers to describe their information need. In this paper, we propose to use the well-known Latent Semantic Analysis (LSA) model with a variety of distance functions and similarity measures, namely Euclidean distance, cosine similarity, the Jaccard coefficient, and the Pearson correlation coefficient, to measure the similarity between Arabic words. LSA statistically analyzes the distribution of terms in a large textual corpus to build a semantic space in which each term is represented by a vector. In our experiments, we compare and analyze the effectiveness of this model with the measures above in two cases, with and without stemming, on two test collections: the first comprises 252 documents from several categories (Economics, Politics, and Sports), and the second contains 257 politics documents only, in order to test the influence of corpus variety. The results show that, on the one hand, a more varied corpus yields more accurate results; on the other hand, stemming improves accuracy in some cases and degrades it in others.
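The LSA pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it builds a semantic space by truncated SVD of a toy term-document count matrix and compares term vectors with the four measures named in the abstract; the matrix, the rank `k=2`, and the use of the extended (Tanimoto) form of the Jaccard coefficient for real-valued vectors are all assumptions made for the example.

```python
import numpy as np

def lsa_space(term_doc, k):
    # Truncated SVD: X ~ U_k S_k V_k^T; rows of U_k S_k are the term vectors
    # in the k-dimensional LSA semantic space.
    U, s, _ = np.linalg.svd(term_doc, full_matrices=False)
    return U[:, :k] * s[:k]

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def jaccard(a, b):
    # Extended (Tanimoto) Jaccard coefficient for real-valued vectors.
    return float(a @ b / (a @ a + b @ b - a @ b))

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Toy term-document count matrix (4 terms x 5 documents):
# terms 0 and 1 co-occur in documents 0 and 2,
# terms 2 and 3 co-occur in documents 1, 3 and 4.
X = np.array([[2., 0., 1., 0., 0.],
              [1., 0., 2., 0., 0.],
              [0., 3., 0., 1., 0.],
              [0., 1., 0., 2., 1.]])

T = lsa_space(X, k=2)
# Co-occurring terms end up close in the LSA space.
print(round(cosine(T[0], T[1]), 3))  # → 1.0
```

In a real experiment the matrix would be built from the Arabic corpus (with or without stemming), and each measure would be applied to every pair of term vectors to rank candidate similar words.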