Problem solving with LLMs

sbagency
4 min readMay 21, 2023


Tree of Thoughts: Deliberate Problem Solving with Large Language Models

ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. [paper link]

https://twitter.com/ShunyuYao12/status/1659359706614321152
https://www.allabtai.com/the-tree-of-thoughts-prompt-template/
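The search loop behind ToT can be sketched in a few lines. In this toy version, `propose` and `evaluate` are deterministic stubs standing in for the LLM calls that generate candidate thoughts and score partial reasoning paths:

```python
# Minimal Tree-of-Thoughts-style breadth-first search (illustrative sketch).
# `propose` and `evaluate` are stand-ins for LLM calls: the real method
# prompts the model to generate next thoughts and to rate partial paths
# (e.g. "sure / maybe / impossible").

def propose(path):
    # Stub: extend a numeric path with two candidate next steps.
    last = path[-1] if path else 0
    return [path + [last + 1], path + [last + 2]]

def evaluate(path):
    # Stub: score a partial path; an LLM would judge its promise instead.
    return sum(path)

def tree_of_thoughts(root, depth=3, beam=2):
    frontier = [root]
    for _ in range(depth):
        # Expand every path in the frontier, then keep only the best `beam`
        # candidates -- this pruning step is the "deliberate" part of ToT.
        candidates = [p for path in frontier for p in propose(path)]
        frontier = sorted(candidates, key=evaluate, reverse=True)[:beam]
    return max(frontier, key=evaluate)

best = tree_of_thoughts([])
```

Dropping a path from the frontier is an implicit form of backtracking: the search abandons low-value branches and continues from higher-value ones.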

Large Language Model Guided Tree-of-Thought

The ToT technique is inspired by the human mind’s approach to solving complex reasoning tasks through trial and error. In this process, the human mind explores the solution space through a tree-like thought process, allowing for backtracking when necessary. To implement ToT as a software system, we augment an LLM with additional modules including a prompter agent, a checker module, a memory module, and a ToT controller. In order to solve a given problem, these modules engage in a multi-round conversation with the LLM. The memory module records the conversation and state history of the problem-solving process, which allows the system to backtrack to previous steps of the thought process and explore other directions from there. [paper link]

https://github.com/jieyilong/tree-of-thought-puzzle-solver
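A minimal sketch of that controller loop, with the prompter, checker, and goal test reduced to deterministic stubs (in the real system they wrap LLM calls and rule-based validity checks):

```python
# Sketch of the prompter / checker / memory / controller loop described
# above. `ask_llm` is a stub; the real prompter issues a chat-model call.

def ask_llm(state):
    # Stub: propose the next partial solution given the current state.
    return state + [len(state) + 1]

def checker(state):
    # Stub validity check; the paper's checker verifies rule conformance.
    return all(x <= 3 for x in state)

def solve(max_rounds=10):
    memory = [[]]  # conversation/state history; this is what enables backtracking
    for _ in range(max_rounds):
        state = ask_llm(memory[-1])
        if not checker(state):
            memory.pop()        # invalid step: backtrack to an earlier state
            if not memory:
                return None
            continue
        memory.append(state)
        if len(state) == 3:     # stub goal test
            return state
    return None
```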

LLM+P: Empowering Large Language Models with Optimal Planning Proficiency

Large language models (LLMs) have demonstrated remarkable zero-shot generalization abilities: state-of-the-art chatbots can provide plausible answers to many common questions that arise in daily life. However, so far, LLMs cannot reliably solve long-horizon planning problems. By contrast, classical planners, once a problem is given in a formatted way, can use efficient search algorithms to quickly identify correct, or even optimal, plans. In an effort to get the best of both worlds, this paper introduces LLM+P, the first framework that incorporates the strengths of classical planners into LLMs. LLM+P takes in a natural language description of a planning problem, then returns a correct (or optimal) plan for solving that problem in natural language. [paper link]

https://github.com/Cranial-XIX/llm-pddl
https://twitter.com/cwolferesearch/status/1657122778984660993
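The division of labor in LLM+P is easy to sketch: the LLM only translates in and out of PDDL, while a classical planner does the actual search. All three helpers below are stubs; a real system would call a chat model for the translations and a planner such as Fast Downward for the solving:

```python
# LLM+P pipeline sketch: natural language -> PDDL -> classical planner ->
# natural language. Every helper is an illustrative stub.

def nl_to_pddl(problem_nl):
    # Stub: the LLM would emit a PDDL problem file, given a domain file.
    return "(define (problem p) (:goal (on a b)))"

def classical_planner(pddl):
    # Stub: a real planner searches and returns a provably correct plan.
    return ["(pick-up a)", "(stack a b)"]

def plan_to_nl(plan):
    # Stub: the LLM verbalizes the symbolic plan for the user.
    return " then ".join(step.strip("()") for step in plan)

def llm_plus_p(problem_nl):
    return plan_to_nl(classical_planner(nl_to_pddl(problem_nl)))

answer = llm_plus_p("Stack block a on block b.")
```

The key design point is that correctness comes from the planner, not the LLM: the model never has to search the plan space itself.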

The technique is called chain-of-thought (CoT) prompting. It improves the reasoning abilities of LLMs using few-shot learning. In particular, CoT prompting inserts several examples of “chains of thought” for solving a reasoning problem into the LLM’s prompt.

https://twitter.com/cwolferesearch/status/1650988783897133059
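Mechanically, CoT prompting is just prompt construction: worked examples whose answers spell out intermediate reasoning are prepended to the new question. A minimal illustration, using a tennis-ball example in the style of the original CoT paper:

```python
# Build a few-shot chain-of-thought prompt: each demonstration answer
# shows the intermediate reasoning, not just the final number.

COT_EXAMPLES = [
    ("Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
     "Each can has 3 tennis balls. How many tennis balls does he have now?",
     "Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 balls. "
     "5 + 6 = 11. The answer is 11."),
]

def build_cot_prompt(question):
    parts = []
    for q, chain in COT_EXAMPLES:
        parts.append(f"Q: {q}\nA: {chain}")
    parts.append(f"Q: {question}\nA:")  # the model continues from here
    return "\n\n".join(parts)

prompt = build_cot_prompt("If I have 3 apples and eat 1, how many remain?")
```

Because the demonstrations end in reasoning chains, the model tends to continue the final "A:" with its own step-by-step reasoning before stating an answer.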

PAL: Program-Aided Large Language Models

Recently, large language models (LLMs) have exhibited an impressive capacity for arithmetic and symbolic reasoning when given a few examples to establish a contextual framework.

This success can be largely attributed to prompting methods such as “chain-of-thought”, which prompt LLMs to understand the problem description by breaking it down into individual steps, and then to solve each step in turn.

LLMs tend to be proficient at decomposing a problem into steps; however, they often make logic and arithmetic mistakes during the solution stage, even when the problem was broken down properly. [link]

https://arxiv.org/pdf/2211.10435.pdf
https://reasonwithpal.com/
https://github.com/luyug/pal
pip install langchain
pip install openai

# https://python.langchain.com/en/latest/modules/chains/examples/pal.html

import os
from langchain.chains import PALChain
from langchain import OpenAI

os.environ["OPENAI_API_KEY"] = "xxxxxxxxxxxxxxxxx"  # set your real API key

# temperature=0 keeps the generated program deterministic
llm = OpenAI(temperature=0, max_tokens=512, model_name="gpt-4-0314")

# from_math_prompt loads PAL's few-shot math prompt: the model writes a
# short Python program and the chain executes it to obtain the answer
pal_chain = PALChain.from_math_prompt(llm, verbose=True)

question = "Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?"

pal_chain.run(question)

Large Language Model Programs

The possibility to parameterise an LLM through such in-context examples widens their capability at a much lower cost than finetuning. We extend this line of reasoning and present a method which further expands the capabilities of an LLM by embedding it within an algorithm or program. To demonstrate the benefits of this approach, we present an illustrative example of evidence-supported question-answering. We obtain a 6.4% improvement over the chain of thought baseline through a more algorithmic approach without any finetuning. Furthermore, we highlight recent work from this perspective and discuss the advantages and disadvantages in comparison to the standard approaches. [paper link]
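The idea is easy to demonstrate: the LLM becomes a subroutine inside ordinary control flow. Below, a toy evidence-supported QA program scores each paragraph with one (stubbed) model call and answers from the best-scoring evidence; both helper functions are illustrative stand-ins, not the paper's implementation:

```python
# "LLM program" sketch: the model is called as a subroutine inside an
# ordinary algorithm instead of answering in one free-form generation.

def llm_score(question, paragraph):
    # Stub relevance call; a real program would prompt the model to rate
    # how well the paragraph supports an answer to the question.
    return sum(1 for w in question.lower().split() if w in paragraph.lower())

def llm_answer(question, paragraph):
    # Stub answer call, conditioned only on the selected evidence.
    return f"Based on: {paragraph}"

def evidence_qa(question, paragraphs):
    # The surrounding program, not the model, decides which evidence to use.
    best = max(paragraphs, key=lambda p: llm_score(question, p))
    return llm_answer(question, best)

docs = ["Paris is the capital of France.", "Berlin is in Germany."]
out = evidence_qa("What is the capital of France?", docs)
```

Filtering evidence per paragraph and answering from the winner is exactly the kind of algorithmic decomposition the paper argues improves over a single chain-of-thought pass.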

CS25 I Stanford Seminar — Transformers United 2023: Strategic Games

https://www.youtube.com/watch?v=phWxl0nkgKk
https://web.stanford.edu/class/cs25/

Exploring Chain-of-Thought Style Prompting for Text-to-SQL

https://arxiv.org/pdf/2305.14215.pdf
https://platform.openai.com/examples/default-sql-translate
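A prompt in this style interleaves the schema, the question, and an instruction to reason step by step before writing SQL. The template below is an illustrative sketch with an invented toy schema, not the paper's exact prompt:

```python
# Build a CoT-style text-to-SQL prompt. Schema and template wording are
# illustrative assumptions; the paper compares several such formats.

SCHEMA = "CREATE TABLE users(id INT, name TEXT, age INT);"

def text_to_sql_prompt(question):
    return (
        f"-- SQLite schema:\n{SCHEMA}\n"
        f"-- Question: {question}\n"
        "-- Reason step by step about the tables and columns needed,\n"
        "-- then write one SQL query.\n"
        "SELECT"
    )

prompt = text_to_sql_prompt("How many users are older than 30?")
```

Ending the prompt with a dangling `SELECT` nudges the model to complete a single query rather than produce free-form prose.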

// in progress…
