
# Prompt Engineering

The art and science of communicating with LLMs. These techniques transform how models reason, from simple chain-of-thought to sophisticated graph-structured exploration of solution spaces.



| Technique | Description | Links |
| --- | --- | --- |
| CoT (Chain-of-Thought) | Prompting that elicits step-by-step reasoning in LLMs for complex problem solving. | Paper |
| CoT-SC (Self-Consistency) | Samples multiple reasoning paths and takes the majority vote for improved chain-of-thought. | Paper |
| ToT (Tree of Thoughts) | Enables deliberate problem solving via tree-structured exploration of reasoning paths. | Paper |
| GoT (Graph of Thoughts) | Generalizes chain/tree of thought into arbitrary graph structures for more flexible reasoning. | Paper |
| SoT (Skeleton-of-Thought) | Enables LLMs to do parallel decoding by first generating a skeleton, then filling in details. | Paper |
| PoT (Program of Thoughts) | Disentangles computation from reasoning by generating programs for numerical reasoning tasks. | Paper |
| AoT (Algorithm of Thoughts) | Enhances exploration of ideas in LLMs using algorithm-inspired prompting strategies. | Paper |
| Cue-CoT | Chain-of-thought prompting for responding to in-depth dialogue questions. | Paper, Code |
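Of the techniques above, self-consistency (CoT-SC) is the simplest to sketch: sample several chain-of-thought completions and majority-vote their final answers. Below is a minimal, hedged illustration in Python; `sample_reasoning_path` is a hypothetical stand-in for an LLM call, not a real API.

```python
from collections import Counter


def sample_reasoning_path(question, seed):
    # Hypothetical stand-in for an LLM call returning (reasoning, final_answer).
    # A real implementation would sample with temperature > 0 so paths differ.
    answers = ["42", "42", "41", "42", "40"]
    return f"step-by-step reasoning #{seed}", answers[seed % len(answers)]


def self_consistency(question, n_samples=5):
    """CoT-SC: sample several chain-of-thought paths, majority-vote the answers."""
    finals = [sample_reasoning_path(question, i)[1] for i in range(n_samples)]
    answer, votes = Counter(finals).most_common(1)[0]
    return answer, votes


answer, votes = self_consistency("What is 6 * 7?")
print(answer, votes)  # prints: 42 3
```

The vote aggregates only the final answers, discarding the intermediate reasoning, which is what makes CoT-SC robust to individual faulty chains.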

## Long Context and Positional Encoding

| Method | Description | Links |
| --- | --- | --- |
| RoPE (Rotary Position Embedding) | Encodes positions by rotating query/key vector pairs; widely used in modern LLMs. | - |
| LongRoPE | Extends LLM context windows beyond 2 million tokens. | Paper |
| RecurrentGPT | Interactive ultra-long text generation using recurrent prompting mechanisms. | Paper, Code |
| MEGALODON | Efficient LLM pretraining and inference with unlimited context length. | Paper, Code |
| CLongEval | Chinese benchmark for evaluating long-context LLMs. | Paper, Code |
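The core idea behind RoPE can be shown in a few lines: each adjacent pair of embedding dimensions is rotated by an angle proportional to the token's position, so attention dot products depend only on relative offsets. This is a minimal single-vector sketch (real implementations operate on batched query/key tensors), with the standard base of 10000.

```python
import math


def rope(vec, pos, base=10000.0):
    """Apply Rotary Position Embedding to one vector at position `pos`.

    Each adjacent pair (x_{2i}, x_{2i+1}) is rotated by the angle
    pos * base^(-2i/d), giving low frequencies to later pairs.
    """
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        # 2D rotation of the pair by theta.
        out += [x * c - y * s, x * s + y * c]
    return out


q = rope([1.0, 2.0, 3.0, 4.0], pos=3)
```

Because rotations are norm-preserving and compose additively, the dot product of a rotated query at position m with a rotated key at position n depends only on m - n, which is what lets RoPE generalize across absolute positions.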