This study introduced a novel algorithm for computing similarity between natural-language sentences. Drawing on syntactic relationships derived from natural languages, the algorithm builds a semantic structural model and quantifies sentences using a word-similarity method based on WordNet and other lexical databases. Experimental results indicated that the algorithm yields the best semantic-recognition performance when applied to sentences or short texts that are grammatically complex or relatively long (more than 12 words). The contribution of this study lies in converting the grammars of different natural languages into a unified semantic structure, through which the semantic similarity of two sentences or short texts can be obtained by comparison. The study aimed to enhance the capability of computers for fuzzy-concept processing, which can be applied in search engines and artificial intelligence. In search engines, for instance, sentence- or short-text-based concepts could be semantically structured to replace keyword-based queries when executing search tasks. In artificial intelligence, this capability could be applied to intelligent agents to smooth interaction between humans and machines.
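The paper's exact semantic structural model is not specified in the abstract, but the general idea of scoring sentence similarity by aggregating WordNet-style word similarities can be sketched as follows. This is a minimal illustrative baseline, not the authors' algorithm: it greedily matches each word to its best counterpart in the other sentence and symmetrizes the two directional scores. The pairwise similarity table is a hypothetical stand-in for a WordNet measure (e.g., NLTK's `wup_similarity`), used here only to keep the example self-contained.

```python
# Sketch of greedy word-to-word similarity aggregation for two sentences.
# SIM stubs a WordNet-based word-similarity measure with hypothetical values.

SIM = {  # hypothetical word-pair similarities in [0, 1]
    ("car", "automobile"): 1.0,
    ("car", "vehicle"): 0.8,
    ("fast", "quick"): 0.9,
}

def word_sim(a: str, b: str) -> float:
    """Symmetric lookup; identical words score 1.0, unknown pairs 0.0."""
    if a == b:
        return 1.0
    return max(SIM.get((a, b), 0.0), SIM.get((b, a), 0.0))

def directional_sim(src: list[str], dst: list[str]) -> float:
    """For each word in src, take its best match in dst, then average."""
    return sum(max(word_sim(w, v) for v in dst) for w in src) / len(src)

def sentence_similarity(s1: str, s2: str) -> float:
    """Average the two directional scores to make the measure symmetric."""
    t1, t2 = s1.lower().split(), s2.lower().split()
    return (directional_sim(t1, t2) + directional_sim(t2, t1)) / 2

print(round(sentence_similarity("the car is fast", "the automobile is quick"), 3))
```

A real implementation along the lines the abstract describes would additionally weight matches by syntactic role (derived from a parse of each sentence) rather than treating all words equally.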