Machine reasoning framework for Large Language Models

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We explore the current state and future directions of reasoning in Large Language Models (LLMs). We review key approaches for enhancing machine reasoning capabilities, including Chain-of-Thought prompting, ReAct, self-reflection, and memory-augmented architectures. We highlight how attention mechanisms and memory modules form the foundation for information integration and context preservation, both of which are essential to any reasoning process. Further, we emphasize the computational trade-offs involved in achieving human-like reasoning within LLMs. Through analytical estimates and comparative evaluation, we show that systems aspiring to approximate the depth, coherence, and abstraction of human reasoning require exponentially greater memory, multi-step internal reflection loops, and more energy-efficient architectures. We conclude with a vision for next-generation models that balance reasoning power with computational sustainability, including quantum-inspired architectures and adaptive attention systems.
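For readers unfamiliar with the prompting techniques named in the abstract, the following minimal Python sketch illustrates the difference between a plain prompt and a zero-shot Chain-of-Thought prompt. It is an illustrative assumption, not an implementation from the paper; the function query_model is a hypothetical placeholder for whichever LLM completion API a reader might use.

```python
# Minimal illustration of Chain-of-Thought (CoT) prompting versus a plain prompt.
# `query_model` is a hypothetical stand-in for any LLM completion endpoint;
# the paper itself does not prescribe a specific implementation.

def query_model(prompt: str) -> str:
    """Placeholder for a call to an LLM completion API (assumption, not real)."""
    raise NotImplementedError("Wire this up to your model of choice.")

def plain_prompt(question: str) -> str:
    # Direct question: the model is asked to answer in one step,
    # with no visible intermediate reasoning.
    return f"Question: {question}\nAnswer:"

def chain_of_thought_prompt(question: str) -> str:
    # Zero-shot CoT prompting asks the model to externalize intermediate
    # reasoning steps before committing to a final answer, which tends to
    # help on multi-step tasks.
    return (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer on its own line."
    )

if __name__ == "__main__":
    q = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"
    # Inspect the constructed prompts; pass either one to query_model(...)
    # once a concrete LLM backend is available.
    print(plain_prompt(q))
    print(chain_of_thought_prompt(q))
```

Under this sketch, ReAct and self-reflection can be seen as extensions of the same idea: the prompt additionally interleaves reasoning steps with tool calls (ReAct) or asks the model to critique and revise its own previous answer (self-reflection).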
Original language: English
Title of host publication: 2025 International Conference Automatics, Robotics and Artificial Intelligence (ICARAI)
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 6
ISBN (Electronic): 9781665465663
ISBN (Print): 9781665465670
Publication status: Published - 3 Sept 2025
Event: 3rd International Conference Automatics, Robotics and Artificial Intelligence, ICARAI 2025 - Sozopol, Bulgaria
Duration: 13 Jun 2025 - 15 Jun 2025

Conference

Conference: 3rd International Conference Automatics, Robotics and Artificial Intelligence, ICARAI 2025
Country/Territory: Bulgaria
City: Sozopol
Period: 13/06/25 - 15/06/25

Keywords

  • reasoning
  • large language models
  • scaling
