Abstract
We explore the current state and future directions of reasoning in Large Language Models (LLMs). We review key approaches for enhancing machine reasoning, including Chain-of-Thought prompting, ReAct, self-reflection, and memory-augmented architectures, and highlight how attention mechanisms and memory modules provide the foundation for information integration and context preservation, both essential to any reasoning process. We further examine the computational trade-offs involved in achieving human-like reasoning in LLMs. Through analytical estimates and comparative evaluation, we show that systems aspiring to approximate the depth, coherence, and abstraction of human reasoning require exponentially greater memory, multi-step internal reflection loops, and more energy-efficient architectures. We conclude with a vision for next-generation models that balance reasoning power with computational sustainability, including quantum-inspired architectures and adaptive attention systems.
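To make the first surveyed approach concrete, the sketch below illustrates the standard Chain-of-Thought prompting pattern: a worked exemplar with an explicit rationale is prepended to the question, together with a "Let's think step by step" cue, so the model emits intermediate reasoning before its final answer. The `query_llm` helper is a hypothetical stand-in for any text-completion API, not a function from the paper; it is stubbed so the example runs as-is.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real API client here."""
    return "(model completion would appear here)"


def chain_of_thought(question: str) -> str:
    # One-shot CoT exemplar: the worked rationale before the answer is
    # what elicits step-by-step reasoning in the model's own completion.
    exemplar = (
        "Q: A cafe sold 23 coffees in the morning and 18 in the afternoon. "
        "How many coffees were sold in total?\n"
        "A: In the morning 23 were sold and in the afternoon 18, "
        "so 23 + 18 = 41. The answer is 41.\n\n"
    )
    prompt = exemplar + f"Q: {question}\nA: Let's think step by step."
    return query_llm(prompt)


if __name__ == "__main__":
    print(chain_of_thought(
        "If a train travels 60 km/h for 2.5 hours, how far does it go?"
    ))
```

The other surveyed approaches build on the same idea: ReAct interleaves such reasoning steps with tool calls, and self-reflection feeds the model's own output back as a critique prompt.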
| Original language | English |
|---|---|
| Title of host publication | 2025 International Conference Automatics, Robotics and Artificial Intelligence (ICARAI) |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Number of pages | 6 |
| ISBN (Electronic) | 9781665465663 |
| ISBN (Print) | 9781665465670 |
| Publication status | Published - 3 Sept 2025 |
| Event | 3rd International Conference Automatics, Robotics and Artificial Intelligence, ICARAI 2025 - Sozopol, Bulgaria. Duration: 13 Jun 2025 → 15 Jun 2025 |
Conference
| Conference | 3rd International Conference Automatics, Robotics and Artificial Intelligence, ICARAI 2025 |
|---|---|
| Country/Territory | Bulgaria |
| City | Sozopol |
| Period | 13/06/25 → 15/06/25 |
Keywords
- reasoning
- large language models
- scaling