Can agents improve reasoning? // LLMs and reflection
Reasoning is the Achilles' heel of artificial intelligence: LLMs have no truly effective reasoning of their own. Nevertheless, some methods, such as reflection, can be applied to compensate.
If a problem genuinely cannot be solved by reasoning, then reflection or other post-hoc analysis will not help either. But let's give it a try.
Reflection is a prompting strategy used to improve the quality and success rate of agents and similar AI systems. It involves prompting an LLM to reflect on and critique its past actions, sometimes incorporating additional external information such as tool observations.
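A minimal sketch of such a generate → critique → revise loop is below. The `call_llm` helper is a placeholder of my own, not part of any library mentioned here; swap in whatever client you actually use.

```python
# Minimal reflection loop: draft an answer, ask the model to critique it,
# then revise using the critique. Repeats for a fixed number of rounds.

def call_llm(prompt: str) -> str:
    # Placeholder: replace with a real LLM call (API client, local model, ...).
    raise NotImplementedError("Replace with a real LLM call")


def reflect(task: str, max_rounds: int = 2) -> str:
    draft = call_llm(f"Solve the following task:\n{task}")
    for _ in range(max_rounds):
        critique = call_llm(
            "Critique the answer below. List concrete errors or gaps; "
            "reply 'OK' if it needs no changes.\n\n"
            f"Task: {task}\nAnswer: {draft}"
        )
        if critique.strip().upper().startswith("OK"):
            break  # the model considers its own answer acceptable
        draft = call_llm(
            f"Task: {task}\nPrevious answer: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the answer, fixing every point raised in the critique."
        )
    return draft
```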
LLMs behave like System 1 thinkers; System 2-style reasoning mechanisms, such as reflection, can be layered on top to improve output quality.
Reflection and related techniques leverage additional LLM inference to increase the likelihood of generating a higher-quality output, or of responding correctly to a more complex reasoning task. While this takes extra time, it can be appropriate when output quality matters more than response time, and if you save the trajectories to memory (or as fine-tuning data), you can update the model to avoid repeating mistakes in the future.
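As a rough illustration of that last point, here is a sketch for persisting reflection trajectories to a JSONL file so they can later serve as memory or be converted into fine-tuning data. The file name and record schema are my own assumptions, not anything prescribed by the sources quoted here.

```python
# Append one reflection trajectory (draft, critique, revision, ...) per line
# to a JSONL file for later reuse as memory or fine-tuning data.
import json
import time


def log_trajectory(task: str, steps: list[dict], path: str = "trajectories.jsonl") -> None:
    record = {"timestamp": time.time(), "task": task, "steps": steps}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")


# Example: a single trajectory with a draft, a critique, and a revision.
log_trajectory(
    "Summarise the attached report",
    [
        {"role": "draft", "text": "..."},
        {"role": "critique", "text": "The summary omits the financial figures."},
        {"role": "revision", "text": "..."},
    ],
)
```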
The ability of Large Language Models (LLMs) to critique and refine their reasoning is crucial for their application in evaluation, feedback provision, and self-improvement. This paper introduces CRITICBENCH, a comprehensive benchmark designed to assess LLMs' abilities to critique and rectify their reasoning across a variety of tasks. CRITICBENCH encompasses five reasoning domains: mathematical, commonsense, symbolic, coding, and algorithmic. It compiles 15 datasets and incorporates responses from three LLM families. Utilizing CRITICBENCH, we evaluate and dissect the performance of 17 LLMs in generation, critique, and correction reasoning, i.e., GQC reasoning, and analyze the key factors affecting LLM critical reasoning. Our findings reveal: (1) a linear relationship in GQC capabilities, with critique-focused training markedly enhancing performance; (2) a task-dependent variation in critique and correction effectiveness, with logic-oriented tasks being more amenable to correction; (3) GQC knowledge inconsistencies that decrease as model size increases; and (4) an intriguing inter-model critiquing pattern, where stronger models are better at critiquing weaker ones, while weaker models can surprisingly surpass stronger ones in their self-critique. We hope these insights into the nuanced critique-correct reasoning of LLMs will foster further research in LLM critique and self-improvement.