No single prompt can solve your problem, but a sequence of prompts might. There are many ideas along these lines, such as Chain-of-Thought, Tree-of-Thoughts, and now yet another one: Graph-of-Thoughts. By manipulating prompts and intermediate LLM outputs, we can get better, more correct results.
Here is a summary of the key ideas from the article “Graph of Thoughts”:
- The article proposes Graph of Thoughts (GoT), a new framework for enhancing prompting capabilities in large language models (LLMs) such as GPT-3.5.
- In GoT, the reasoning process of an LLM is modeled as an arbitrary graph, where vertices represent LLM "thoughts" (intermediate solutions) and edges indicate dependencies between thoughts.
- This graph structure allows combining and transforming thoughts in ways not possible with previous prompting schemes like Chain-of-Thought or Tree-of-Thoughts: for example, aggregating the best thoughts into a new one, or refining a thought through feedback loops (see the first sketch below).
- GoT provides finer-grained control over manipulating chains of reasoning compared to prior approaches. It also enables easy extension with new thought transformations and graph structures.
- Key advantages of GoT include the ability to break a complex task down into smaller subtasks, solve the subtasks independently, and incrementally combine the solutions (the second sketch below illustrates this pattern). This improves accuracy and reduces inference costs.
- GoT outperforms previous prompting schemes on tasks like sorting, set operations, keyword counting, and document merging. For example, it improves sorting quality by 62% over Tree-of-Thoughts while cutting costs by more than 31%.
- Overall, modeling LLM reasoning as a graph brings it closer to human thinking and to brain mechanisms such as recurrence. GoT spearheads more powerful prompting paradigms through this networked approach.
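
To make the graph idea concrete, here is a minimal Python sketch of how thoughts and GoT-style transformations could be represented. This is my own illustration, not the authors' actual framework: `call_llm` is a hypothetical stand-in for whatever LLM API you use, and the prompt templates are made up.

```python
from dataclasses import dataclass, field

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")  # hypothetical stand-in

@dataclass
class Thought:
    content: str                                             # the LLM output at this vertex
    parents: list["Thought"] = field(default_factory=list)   # edges: dependencies on prior thoughts

def generate(task: str, parent: Thought, k: int = 3) -> list[Thought]:
    """Branch: derive k candidate thoughts from one parent (also possible in ToT)."""
    return [Thought(call_llm(f"{task}\nSo far: {parent.content}\nNext step:"),
                    parents=[parent]) for _ in range(k)]

def aggregate(task: str, thoughts: list[Thought]) -> Thought:
    """Merge: combine several thoughts into one -- the graph-only transformation."""
    joined = "\n".join(t.content for t in thoughts)
    return Thought(call_llm(f"{task}\nCombine these partial solutions:\n{joined}"),
                   parents=thoughts)

def refine(task: str, thought: Thought) -> Thought:
    """Feedback loop: improve a thought; the new vertex depends on the old one."""
    return Thought(call_llm(f"{task}\nImprove this solution:\n{thought.content}"),
                   parents=[thought])
```

The point of the `parents` field is that a vertex can have many incoming edges (`aggregate`) or form a chain of revisions (`refine`), which is exactly what a tree cannot express.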
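And here is how the split/solve/merge pattern from the bullets above might look for the sorting task, reusing the primitives from the previous sketch. The chunk size and prompts are illustrative assumptions, not the paper's exact configuration.

```python
def got_sort(numbers: list[int], chunk: int = 8) -> Thought:
    # 1. Decompose: split the input into small, independently solvable sublists.
    parts = [numbers[i:i + chunk] for i in range(0, len(numbers), chunk)]
    # 2. Solve each subtask with its own prompt (these calls could run in parallel).
    sorted_parts = [Thought(call_llm(f"Sort ascending: {p}")) for p in parts]
    # 3. Incrementally aggregate pairs of sorted sublists back into one answer.
    while len(sorted_parts) > 1:
        pairs = [sorted_parts[i:i + 2] for i in range(0, len(sorted_parts), 2)]
        sorted_parts = [aggregate("Merge these sorted lists into one sorted list.", p)
                        if len(p) == 2 else p[0]
                        for p in pairs]
    return sorted_parts[0]  # final vertex; its ancestors form the thought graph
```

Each LLM call only ever sees a small sublist or a pair of sorted sublists, which is why this decomposition can improve accuracy while keeping inference costs down.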