Logic-of-Thought: Enhancing Logical Reasoning in Large Language Models through Propositional Logic Augmentation

Large Language Models (LLMs) have made significant strides in various Natural Language Processing tasks, yet they still struggle with mathematics and complex logical reasoning. Chain-of-Thought (CoT) prompting has emerged as a promising approach to enhance reasoning capabilities by incorporating intermediate steps. However, LLMs often exhibit unfaithful reasoning, where conclusions don’t align with the generated reasoning chain. This challenge has led researchers to explore more sophisticated reasoning topologies and neuro-symbolic methods. These approaches aim to simulate human reasoning processes and integrate symbolic reasoning with LLMs. Despite these advancements, existing methods face limitations, particularly the issue of information loss during the extraction of logical expressions, which can lead to incorrect intermediate reasoning processes.

Researchers have developed various approaches to enhance LLMs’ reasoning capabilities. CoT prompting and its variants, such as Zero-shot CoT and CoT with Self-Consistency, have improved logical reasoning by breaking down complex problems into intermediate steps. Other methods like Least-To-Most prompting and Divide-and-Conquer focus on problem decomposition. Tree-of-Thoughts and Graph-of-Thoughts introduce more complex reasoning topologies. Neuro-symbolic approaches combine LLMs with symbolic reasoning to address unfaithful reasoning. These include LReasoner, LogicAsker, Logic-LM, SatLM, and LINC, which integrate logical formalization, symbolic solvers, and LLMs to enhance reasoning capabilities and overcome information loss issues.

Researchers from the University of Science and Technology of China, the Institute of Automation at the Chinese Academy of Sciences, Beihang University, and JD.com present Logic-of-Thought (LoT), a prompting method designed to address the information-loss issue in existing neuro-symbolic approaches. LoT extracts propositions and logical expressions from the input context, expands them using logical reasoning laws, and translates the expanded expressions back into natural language. This extended logical description is then appended to the original input prompt, guiding the LLM’s reasoning process. By preserving the original prompt and adding logical information in natural language, LoT avoids complete reliance on symbolic solvers and mitigates information loss. The method is compatible with existing prompting techniques, allowing for seamless integration. Experiments across five logical reasoning datasets demonstrate LoT’s effectiveness in significantly boosting the performance of various prompting methods, including Chain-of-Thought, Self-Consistency, and Tree-of-Thoughts.
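To make the pipeline concrete, here is a minimal sketch of how the three LoT phases could be wired together. The prompt wording and the injected `call_llm` function are illustrative assumptions, not the paper’s exact templates, and the symbolic expansion step is stubbed out here (a fuller sketch of it follows the next paragraph).

```python
# A minimal sketch of the LoT pipeline. The prompt templates and the
# injected `call_llm` function are assumptions for illustration.

def extend_expressions(expressions: str) -> str:
    """Placeholder for Logic Extension; a fuller sketch appears below."""
    return expressions  # a real implementation applies reasoning laws

def logic_of_thought(context: str, question: str, call_llm) -> str:
    # Phase 1 -- Logic Extraction: the LLM pulls propositional symbols
    # and logical expressions (e.g. "A -> B") out of the context.
    expressions = call_llm(
        "Extract propositional symbols and logical expressions "
        f"(such as A -> B) from this passage:\n{context}"
    )
    # Phase 2 -- Logic Extension: expand the expressions symbolically
    # with logical reasoning laws (no LLM involved in this step).
    extended = extend_expressions(expressions)
    # Phase 3 -- Logic Translation: the LLM renders the expanded
    # expressions back into natural language.
    description = call_llm(
        f"Translate these logical expressions into plain English:\n{extended}"
    )
    # The original prompt is preserved; the logical description is
    # appended to it, so no information from the context is discarded.
    return call_llm(f"{context}\n{description}\n{question}")
```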

The LoT framework comprises three phases: Logic Extraction, Logic Extension, and Logic Translation. In the Logic Extraction phase, an LLM identifies sentences with conditional reasoning relationships and extracts propositional symbols and logical expressions from the input context. The Logic Extension phase employs a Python program to expand these logical expressions using predefined reasoning laws. Finally, the Logic Translation phase uses the LLM to convert the expanded logical expressions back into natural language descriptions, which are incorporated into the original input prompt to form a new, augmented prompt. This process preserves the original context while adding logical information, effectively guiding the LLM’s reasoning without relying solely on symbolic solvers or risking information loss.
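Below is a self-contained sketch of what the Logic Extension program might look like, assuming the extracted expressions arrive as (premise, conclusion) pairs meaning “premise implies conclusion.” The contraposition and transitive laws used here are the kind of reasoning laws the paper describes, but this particular encoding is an assumption for illustration.

```python
# A sketch of the Logic Extension phase. Expressions are assumed to be
# (premise, conclusion) pairs meaning "premise -> conclusion"; the set
# of laws applied (contraposition, transitivity) is illustrative.

def negate(p: str) -> str:
    """Negate a symbol, collapsing double negation (~~A becomes A)."""
    return p[1:] if p.startswith("~") else "~" + p

def extend_logic(implications: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Expand the implications to a fixed point under the reasoning laws."""
    closure = set(implications)
    while True:
        derived = set()
        for a, b in closure:
            # Contraposition law: A -> B entails ~B -> ~A.
            derived.add((negate(b), negate(a)))
            # Transitive law: A -> B and B -> C entail A -> C.
            for b2, c in closure:
                if b == b2 and a != c:
                    derived.add((a, c))
        if derived <= closure:  # nothing new: fixed point reached
            return closure
        closure |= derived

# Example: "it reads -> it is knowledgeable", "knowledgeable -> wise"
expressions = {("R", "K"), ("K", "W")}
for premise, conclusion in sorted(extend_logic(expressions)):
    print(f"{premise} -> {conclusion}")
# Among the derived facts: R -> W, ~W -> ~R, ~K -> ~R
```

The expanded expressions are then passed to the Logic Translation step rather than to a symbolic solver, which is how LoT sidesteps the information-loss failure mode of fully symbolic pipelines.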

LoT prompting significantly enhances the performance of existing methods across five logical reasoning datasets. LoT+CoT-SC(5) consistently outperforms the other methods, and LoT+SC achieves the highest accuracy on the FOLIO dataset with GPT-4. LoT improves the baseline methods in 35 out of 40 comparisons, demonstrating that it integrates seamlessly with them. Improvements are smaller when LoT is combined with CoT or CoT-SC, likely because their capabilities overlap. Some limitations appear on the RuleTaker and ProofWriter datasets with GPT-4, attributed to information-extraction issues. Overall, LoT’s standalone performance matches or exceeds that of CoT, highlighting its robust logical reasoning capabilities.

LoT is a robust symbolic-enhancement prompting approach that addresses information loss in neuro-symbolic methods. By deriving expanded logical information from input context using propositional logic, LoT augments original prompts to enhance LLMs’ logical reasoning capabilities. Its compatibility with existing prompting techniques like Chain-of-Thought, Self-Consistency, and Tree-of-Thoughts allows for seamless integration. Experiments demonstrate that LoT significantly improves the performance of various prompting methods across multiple logical reasoning datasets. Future work will focus on exploring additional logical relationships and reasoning laws, as well as supporting more prompting methods to further enhance LoT’s logical reasoning capabilities.


Check out the Paper. All credit for this research goes to the researchers of this project.



Asjad is an intern consultant at Marktechpost. He is pursuing a B.Tech in mechanical engineering at the Indian Institute of Technology, Kharagpur. Asjad is a machine learning and deep learning enthusiast who is always researching the applications of machine learning in healthcare.




