The latent semantic indexing (LSI) methodology for information retrieval applies the singular value decomposition (SVD) to identify an eigensystem for a large matrix whose cells record the occurrence of terms (words) within documents. This methodology is used to rank text documents, such as Web pages or abstracts, by their relevance to a topic. LSI was introduced to address synonymy (different words with the same meaning) and polysemy (the same word with multiple meanings), thus mitigating the ambiguity of human language by exploiting the statistical context of words. Rather than keeping all k possible eigenvectors and eigenvalues from the SVD that approximates the original term-by-document matrix, a smaller number is used, essentially allowing a fuzzy match of a topic to the original term-by-document matrix. In this paper, we show that the choice of k affects the resulting ranking and that no value of k yields stable rankings of document-topic similarity. This is a surprising result, because prior literature indicates that eigensystems based on successively larger values of k should approximate the complete (maximum-k) eigensystem. The finding that document-query similarity rankings for larger values of k do not, in fact, remain consistent makes it difficult to assert that any particular value of k is optimal. This in turn renders LSI potentially untrustworthy for ranking text documents, even for values of k that differ by only 10% of the maximum.
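The ranking procedure the abstract describes can be sketched as follows: take the SVD of the term-by-document matrix, keep only the first k singular triplets, project the query into the resulting k-dimensional latent space, and rank documents by cosine similarity there. This is a minimal illustrative sketch using numpy; the tiny term-document matrix, the query vector, and the helper name `lsi_rank` are assumptions for demonstration, not data or code from the paper.

```python
import numpy as np

def lsi_rank(A, q, k):
    """Rank documents (columns of A) against query vector q using a
    rank-k truncated SVD of the term-by-document matrix A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vtk = U[:, :k], s[:k], Vt[:k, :]
    docs = (np.diag(sk) @ Vtk).T            # documents in k-dim latent space
    qk = q @ Uk                             # query projected into same space
    sims = docs @ qk / (np.linalg.norm(docs, axis=1)
                        * np.linalg.norm(qk) + 1e-12)
    return np.argsort(-sims)                # document indices, best match first

# Hypothetical 5-term x 4-document count matrix and a query on terms 0 and 2.
A = np.array([[2., 0., 1., 0.],
              [1., 1., 0., 0.],
              [0., 2., 0., 1.],
              [0., 0., 1., 2.],
              [1., 0., 2., 0.]])
q = np.array([1., 0., 1., 0., 0.])

# Comparing rankings across several values of k illustrates the paper's
# point of inquiry: truncation level can change the resulting order.
for k in (2, 3, 4):
    print(k, lsi_rank(A, q, k))
```

At full rank (k equal to the rank of A), this reduces to ordinary cosine-similarity ranking in the original term space; the paper's finding concerns how the order shifts as k is reduced.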
Date of Conference: 5-8 Jan. 2010