Abstract:
Large language models (LLMs) have been shown to struggle with complex logical reasoning tasks due to the inherent ambiguity and complexity of natural language. These challenges are further amplified when processing large and diverse datasets, increasing the likelihood of unfaithful reasoning and predictive hallucinations. However, LLMs can provide accurate responses when queries are clear and direct. Symbolic logic provides precise, well-defined rules that can help overcome ambiguity and support reasoning. In this work, we leverage symbolic logic's precision to enhance LLMs' logical reasoning capabilities by introducing the Graph of Logic (GoL) framework. GoL combines the power of graph structures with the strengths of LLMs and symbolic logic. GoL utilizes the precise rules of symbolic logic to infer new facts and detect LLM hallucinations effectively on complex datasets. Furthermore, GoL utilizes graph structures to support scalability for large datasets and tackle long dependencies, enabling efficient handling of complex reasoning tasks. We conduct extensive experiments across seven benchmark datasets, encompassing various types of reasoning, including deductive, inductive, and abductive reasoning, each testing distinct aspects of logical inference. The experimental results demonstrate GoL's advantage in improving the reasoning capabilities of LLMs. GoL outperforms the baselines by an average margin of 18.18% across the GPT-3.5 and GPT-4 models, surpassing them on all datasets with GPT-3.5 and on six out of seven datasets with GPT-4.
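
The abstract describes, at a high level, how symbolic inference over a collection of facts can derive new conclusions and flag LLM claims that the known facts do not support. The following minimal Python sketch illustrates that general idea (naive forward-chaining plus an entailment check); the Rule, forward_chain, and flag_hallucination names are hypothetical illustrations under our own assumptions and do not reproduce the paper's actual GoL implementation.

    from dataclasses import dataclass

    # Hypothetical rule form: if every premise holds, the conclusion holds.
    @dataclass(frozen=True)
    class Rule:
        premises: tuple[str, ...]
        conclusion: str

    def forward_chain(facts: set[str], rules: list[Rule]) -> set[str]:
        # Apply rules repeatedly until no new fact can be derived (fixed point).
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for rule in rules:
                if all(p in derived for p in rule.premises) and rule.conclusion not in derived:
                    derived.add(rule.conclusion)
                    changed = True
        return derived

    def flag_hallucination(claim: str, facts: set[str], rules: list[Rule]) -> bool:
        # A claim not entailed by the symbolic closure is flagged as a possible hallucination.
        return claim not in forward_chain(facts, rules)

    if __name__ == "__main__":
        facts = {"rabbit(rex)"}
        rules = [Rule(("rabbit(rex)",), "mammal(rex)"),
                 Rule(("mammal(rex)",), "animal(rex)")]
        print(forward_chain(facts, rules))                    # {'rabbit(rex)', 'mammal(rex)', 'animal(rex)'}
        print(flag_hallucination("bird(rex)", facts, rules))  # True: unsupported claim
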
Published in: 2024 IEEE International Conference on Big Data (BigData)
Date of Conference: 15-18 December 2024
Date Added to IEEE Xplore: 16 January 2025