Prompt engineering // hacks

sbagency
2 min read · Nov 2, 2023


Image generated by DALL-E 3 with the prompt “prompt engineering”

Prompt engineering best practices:

  • clarity // the prompt should be easy to understand
  • role // define a role for the assistant
  • context // docs, data, details, chat history, etc.
  • instruction // generate, summarize, write, analyze, etc.
  • don’ts // state what not to do, e.g. don’t use lists, titles, etc.
  • singularity // one prompt, one task
  • step-by-step // use a sequence of prompts
  • tone // specify what tone to use
  • criticize // analyze a prompt’s output in another prompt to improve it, e.g. the writer-editor pattern
  • output examples // provide examples of the desired output
  • format // specify format options
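The checklist above can be sketched as a small prompt builder. This is a minimal illustration, not a library API; the helper name, field wording, and example values are all assumptions.

```python
# Sketch: assemble one prompt from the checklist items
# (role, context, instruction, tone, format, don'ts, one task per prompt).

def build_prompt(role: str, context: str, instruction: str,
                 tone: str, output_format: str, donts: list[str]) -> str:
    """Combine the checklist items into a single, clearly structured prompt."""
    dont_lines = "\n".join(f"- Do not {d}" for d in donts)
    return (
        f"You are {role}.\n\n"           # role
        f"Context:\n{context}\n\n"       # context
        f"Task: {instruction}\n"         # instruction (singularity: one task)
        f"Tone: {tone}\n"                # tone
        f"Format: {output_format}\n"     # format
        f"Constraints:\n{dont_lines}"    # don'ts
    )

prompt = build_prompt(
    role="a senior technical editor",
    context="The draft below summarizes a research paper.",
    instruction="Summarize the draft in three sentences.",
    tone="neutral and concise",
    output_format="plain text",
    donts=["use bullet lists", "add titles"],
)
print(prompt)
```

For the “criticize” practice, the same builder can produce a second, editor-style prompt whose context is the first prompt’s output.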
https://arxiv.org/pdf/2310.01714.pdf

Here is a summary of the key points from the paper:

The paper introduces a new prompting approach called analogical prompting that aims to improve the reasoning capabilities of large language models (LLMs). The key ideas are:

- Inspired by how humans recall relevant past experiences when solving new problems, the approach prompts LLMs to self-generate examples of similar problems before solving a given problem.

- This eliminates the need for manually labeling reasoning exemplars for each task, which is required by existing prompting methods like few-shot chain-of-thought (CoT).

- The self-generated exemplars are tailored to the specific problem, providing more relevant guidance compared to fixed exemplars used in standard few-shot CoT.

- The approach can also prompt LLMs to generate high-level knowledge alongside exemplars, which further improves reasoning, especially for complex tasks like code generation.

- Both exemplars and knowledge are generated within a single prompt in one pass, leveraging LLMs’ in-context learning abilities.

- Experiments show the approach outperforms baselines like 0-shot CoT and few-shot CoT on math, code generation, and logical reasoning tasks.

In summary, the key idea is prompting LLMs to self-generate tailored reasoning guidance, taking inspiration from human analogical reasoning. This improves reasoning capabilities without needing manually labeled data.
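The single-pass idea can be sketched as a prompt template: the model is asked to recall related problems, state the relevant high-level knowledge, and then solve the original problem, all in one generation. The template wording here is an illustrative paraphrase, not the paper’s exact prompt.

```python
# Sketch of an analogical prompt: self-generated exemplars plus
# high-level knowledge, produced in one pass (per the paper's idea).

ANALOGICAL_TEMPLATE = """\
Problem: {problem}

Instructions:
# Recall three relevant and distinct example problems. Describe and solve each.
# State the high-level knowledge (concepts, algorithms) useful for this problem.
# Finally, solve the initial problem step by step.
"""

def make_analogical_prompt(problem: str) -> str:
    """Wrap a problem statement in the analogical-prompting template."""
    return ANALOGICAL_TEMPLATE.format(problem=problem)

prompt = make_analogical_prompt("What is the area of a square with side 5?")
print(prompt)
```

The resulting string would be sent to an LLM as a zero-shot prompt; no manually labeled exemplars are supplied, which is the contrast with few-shot CoT.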



Written by sbagency

Tech/biz consulting, analytics, research for founders, startups, corps and govs.
