Famous “thinking fast & slow” // mimicking humans in AI agents

Talker-Reasoner dual-agent architecture // two agents at play

sbagency
https://arxiv.org/pdf/2410.08328

Large language models have enabled agents of all kinds to interact with users through natural conversation. Consequently, agents now have two jobs: conversing and planning/reasoning. Their conversational responses must be informed by all available information, and their actions must help to achieve goals. This dichotomy between conversing with the user and doing multi-step reasoning and planning can be seen as analogous to the human systems of “thinking fast and slow” introduced by Kahneman [14]. Our approach comprises a “Talker” agent (System 1) that is fast and intuitive, and tasked with synthesizing the conversational response; and a “Reasoner” agent (System 2) that is slower, more deliberative, and more logical, and is tasked with multi-step reasoning and planning, calling tools, performing actions in the world, and thereby producing the new agent state. We describe the new Talker-Reasoner architecture and discuss its advantages, including modularity and decreased latency. We ground the discussion in the context of a sleep coaching agent, in order to demonstrate real-world relevance.

1. The Talker: The fast agent that interacts with the user via language, perceives the world, gets observations and feedback from the user, interacts with memory to prime its responses, and generates the conversational response.

2. The Reasoner: The slow and deliberative agent responsible for complex problem solving, which involves combining reasoning with actions that augment its knowledge of the real world, such as calling tools or fetching information from external databases [17]. The Reasoner is also responsible for forming and updating the beliefs that drive its decisions and the Talker’s subsequent utterances. The Reasoner is typically goal-conditioned, primed to solve a specific problem or goal [9], and hierarchical [36], dividing problems into sub-problems.
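A minimal sketch of how these two roles could be wired together, assuming a generic `llm(prompt)` chat call and a shared memory dict; the function names, prompts, and sequential scheduling are illustrative assumptions, not the paper’s implementation:

```python
# Hypothetical Talker-Reasoner loop (illustrative sketch, not the paper's code).
# `llm(prompt)` stands in for any chat-completion call that returns a string.

from typing import Callable

def talker(user_msg: str, memory: dict, llm: Callable[[str], str]) -> str:
    """System 1: fast and conversational; reads beliefs from memory to prime its reply."""
    beliefs = memory.get("beliefs", "none yet")
    prompt = (
        "You are a coaching assistant. Reply conversationally.\n"
        f"Current beliefs about the user: {beliefs}\n"
        f"User: {user_msg}\nAssistant:"
    )
    return llm(prompt)

def reasoner(user_msg: str, memory: dict, llm: Callable[[str], str]) -> None:
    """System 2: slow and deliberative; plans step by step and writes updated beliefs to memory."""
    prompt = (
        "Think step by step. Update the belief state (goals, constraints, plan) "
        f"given the new message: {user_msg}\n"
        f"Previous beliefs: {memory.get('beliefs', 'none')}\nUpdated beliefs:"
    )
    memory["beliefs"] = llm(prompt)  # the Talker reads this on its next turn

def turn(user_msg: str, memory: dict, llm: Callable[[str], str]) -> str:
    # The Talker answers immediately; the Reasoner refreshes the shared belief
    # state (run sequentially here for simplicity, asynchronously in practice).
    reply = talker(user_msg, memory, llm)
    reasoner(user_msg, memory, llm)
    return reply
```

The key design point is the shared memory: the Reasoner never talks to the user directly, it only writes beliefs that shape what the Talker says next.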

--

This paper introduces the dual-system agent framework as a possible biologically inspired architecture for foundation-model-driven intelligent agents. Inspired by the behavioral-science principles behind this framework, directions for future research include deciding when not to probe the Reasoner, and how to use it in a lower capacity most of the time, since the Talker can handle most situations. Ideally, given a user query, the Talker should automatically determine whether it requires System 2 reasoning, and therefore the Reasoner, or whether it can safely proceed with its System 1 thinking. Another direction is to extend the Talker-Reasoner architecture to multiple Reasoners, each writing belief states to a different part of the memory, for different types of reasoning.
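One way to picture that “when to probe the Reasoner” decision is a cheap gating check in front of the Reasoner. The classifier prompt and routing labels below are purely illustrative assumptions, not something the paper proposes:

```python
# Illustrative escalation gate (an assumption, not the paper's method): a cheap
# classifier call decides whether a query needs System 2 before the Reasoner runs.

from typing import Callable

def needs_reasoner(user_msg: str, llm: Callable[[str], str]) -> bool:
    """Ask a small, fast model whether the query needs multi-step planning or tools."""
    verdict = llm(
        "Does answering the following message require multi-step planning, "
        "tool use, or updating the user's plan? Answer YES or NO.\n"
        f"Message: {user_msg}"
    )
    return verdict.strip().upper().startswith("YES")

def route(user_msg: str, llm: Callable[[str], str]) -> str:
    # Default to the fast Talker path; escalate only when the gate flags the query.
    return "reasoner+talker" if needs_reasoner(user_msg, llm) else "talker-only"
```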

https://arxiv.org/pdf/2402.12875

Instructing the model to generate a sequence of intermediate steps, a.k.a. a chain of thought (CoT), is a highly effective method to improve the accuracy of large language models (LLMs) on arithmetic and symbolic reasoning tasks. However, the mechanism behind CoT remains unclear. This work provides a theoretical understanding of the power of CoT for decoder-only transformers through the lens of expressiveness. Conceptually, CoT empowers the model with the ability to perform inherently serial computation, which is otherwise lacking in transformers, especially when depth is low. Given input length n, previous works have shown that constant-depth transformers with finite precision and poly(n) embedding size can only solve problems in TC⁰ without CoT. We first show an even tighter expressiveness upper bound for constant-depth transformers with constant-bit precision, which can only solve problems in AC⁰, a proper subset of TC⁰. However, with T steps of CoT, constant-depth transformers using constant-bit precision and O(log n) embedding size can solve any problem solvable by boolean circuits of size T. Empirically, enabling CoT dramatically improves the accuracy for tasks that are hard for parallel computation, including the composition of permutation groups, iterated squaring, and circuit value problems, especially for low-depth transformers.
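To see why CoT buys serial computation, take iterated squaring, one of the paper’s hard-for-parallelism tasks: step t+1 needs the result of step t, so a model answering in a single shallow forward pass has nowhere to hold the intermediates, whereas writing out the chain turns each step into a short local computation. A toy illustration in plain Python (not a transformer):

```python
# Iterated squaring x -> x^2 -> x^4 -> ... (mod p): inherently serial,
# since each step depends on the previous one.

def iterated_squaring(x: int, t: int, p: int) -> int:
    """Return x^(2^t) mod p, printing each intermediate value like a chain of thought."""
    v = x % p
    for step in range(1, t + 1):
        v = (v * v) % p
        print(f"step {step}: value = {v}")  # the written-out 'chain of thought'
    return v

# Example: 3^(2^5) mod 97, worked out one squaring at a time.
print("answer:", iterated_squaring(3, 5, 97))
```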

Current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data

https://arxiv.org/pdf/2410.05229

Recent advancements in Large Language Models (LLMs) have sparked interest in their formal reasoning capabilities, particularly in mathematics. The GSM8K benchmark is widely used to assess the mathematical reasoning of models on grade-school-level questions. While the performance of LLMs on GSM8K has significantly improved in recent years, it remains unclear whether their mathematical reasoning capabilities have genuinely advanced, raising questions about the reliability of the reported metrics. To address these concerns, we conduct a large-scale study on several state-of-the-art open and closed models. To overcome the limitations of existing evaluations, we introduce GSM-Symbolic, an improved benchmark created from symbolic templates that allow for the generation of a diverse set of questions. GSM-Symbolic enables more controllable evaluations, providing key insights and more reliable metrics for measuring the reasoning capabilities of models. Our findings reveal that LLMs exhibit noticeable variance when responding to different instantiations of the same question. Specifically, the performance of all models declines when only the numerical values in the question are altered in the GSM-Symbolic benchmark. Furthermore, we investigate the fragility of mathematical reasoning in these models and demonstrate that their performance significantly deteriorates as the number of clauses in a question increases. We hypothesize that this decline is due to the fact that current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data. When we add a single clause that appears relevant to the question, we observe significant performance drops (up to 65%) across all state-of-the-art models, even though the added clause does not contribute to the reasoning chain needed to reach the final answer. Overall, our work provides a more nuanced understanding of LLMs’ capabilities and limitations in mathematical reasoning.
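A rough idea of what a symbolic-template evaluation looks like (a hypothetical illustration, not the benchmark’s actual templates): the surface form of the question stays fixed while names and numbers are resampled, and the gold answer is recomputed from the same formula, so an accuracy drop across instantiations points at brittleness rather than changed difficulty.

```python
import random

# Hypothetical GSM-style symbolic template: resample names/numbers, recompute the answer.
NAMES = ["Ava", "Liam", "Noah", "Mia"]

def make_instance(seed: int) -> dict:
    rng = random.Random(seed)
    name = rng.choice(NAMES)
    apples = rng.randint(3, 20)
    price = rng.randint(2, 9)
    question = (
        f"{name} buys {apples} apples at {price} dollars each. "
        f"How much does {name} spend in total?"
    )
    return {"question": question, "answer": apples * price}

# Generate a few instantiations of the same template.
for i in range(3):
    inst = make_instance(i)
    print(inst["question"], "->", inst["answer"])
```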

--

sbagency

Tech/biz consulting, analytics, research for founders, startups, corps and govs.