Chain of Thought


Chain-of-Thought (CoT) Prompting

The introduction of intermediate "reasoning" steps improves the performance of LLMs on tasks that require complex reasoning, such as arithmetic, common-sense, and symbolic reasoning.

In their paper, "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models", Wei et al. demonstrated how LLMs naturally start reasoning when given a few examples.

In this technique, a few logical reasoning steps are added to the prompt as examples, showing the LLM how to arrive at the desired outcome.

[Figure: example Chain-of-Thought prompt from Wei et al. (2022), "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models"]
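
As a concrete illustration, here is a minimal sketch of a few-shot CoT prompt in Python. The worked exemplar is the tennis-ball problem from the Wei et al. paper; the resulting string can be sent to any text-completion LLM (no particular API is assumed, and `build_cot_prompt` is just an illustrative helper).

```python
# Minimal few-shot CoT prompt in the style of Wei et al. (2022).
# The single exemplar shows intermediate reasoning steps before the
# final answer; the model is expected to imitate that pattern.

FEW_SHOT_COT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: {question}
A:"""


def build_cot_prompt(question: str) -> str:
    """Append a new question after the worked exemplar."""
    return FEW_SHOT_COT_PROMPT.format(question=question)


if __name__ == "__main__":
    print(build_cot_prompt(
        "The cafeteria had 23 apples. If they used 20 to make lunch "
        "and bought 6 more, how many apples do they have?"
    ))
```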

Another idea, "Zero-Shot CoT", was introduced by Kojima et al. (2022): instead of adding examples as in Few-Shot CoT, we simply append "Let's think step by step" to the prompt.
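
In code, Zero-Shot CoT reduces to appending that trigger phrase. The sketch below follows the two-pass scheme from Kojima et al. (one pass to elicit the reasoning, a second to extract the final answer); `call_llm` is a hypothetical stand-in for whatever completion API you use.

```python
# Zero-Shot CoT (Kojima et al., 2022): no exemplars, just a trigger phrase.
# Kojima et al. use two passes: one to elicit reasoning, one to extract
# the final answer. `call_llm` is a hypothetical completion-API wrapper.

COT_TRIGGER = "Let's think step by step."


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: wire this to your LLM client of choice."""
    raise NotImplementedError


def zero_shot_cot(question: str) -> str:
    # Pass 1: elicit the reasoning chain.
    reasoning_prompt = f"Q: {question}\nA: {COT_TRIGGER}"
    reasoning = call_llm(reasoning_prompt)
    # Pass 2: extract the final answer from the generated reasoning.
    answer_prompt = f"{reasoning_prompt} {reasoning}\nTherefore, the answer is"
    return call_llm(answer_prompt)
```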
Automatic Chain-of-Thought (Auto-CoT)
As we saw, CoT prompting involves creating examples for the LLM. This is a manual process and introduces subjectivity. To reduce this subjectivity, Zhang et al. (2022) introduced Auto-CoT. There are two stages involved in Auto-CoT:

A: Question Clustering

[Figure: a dataset of diverse questions is clustered into k groups]

Stage A: Create clusters from a dataset of diverse questions

B: Demonstration Sampling

[Figure: one representative question is selected from each cluster, and the LLM generates its reasoning chain via Zero-Shot CoT by adding "Let's think step by step"]

Stage B: Select one question from each cluster and generate its reasoning chain using Zero-Shot CoT with simple heuristics

The questions, together with their generated reasoning chains, are then used as examples for new questions.
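
A simplified sketch of both stages is shown below. Note that the paper encodes questions with Sentence-BERT before k-means clustering; TF-IDF vectors are substituted here purely to keep the sketch self-contained, and `zero_shot_cot` is the hypothetical helper from the earlier snippet.

```python
# Simplified sketch of Auto-CoT (Zhang et al., 2022). The paper encodes
# questions with Sentence-BERT; TF-IDF vectors are substituted here so
# the sketch stays self-contained. `zero_shot_cot` is the hypothetical
# helper defined in the previous snippet.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer


def build_auto_cot_demos(questions: list[str], k: int) -> list[str]:
    # Stage A: cluster the question set into k groups.
    vectors = TfidfVectorizer().fit_transform(questions).toarray()
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)

    demos = []
    for c in range(k):
        # Stage B: take the question nearest each cluster centre as the
        # representative (the paper also applies simple heuristics, such
        # as length limits, before accepting a candidate)...
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(vectors[members] - km.cluster_centers_[c], axis=1)
        rep = questions[members[np.argmin(dists)]]
        # ...and generate its reasoning chain with Zero-Shot CoT.
        chain = zero_shot_cot(rep)  # hypothetical LLM call
        demos.append(f"Q: {rep}\nA: Let's think step by step. {chain}")
    # Prepend these demos to a new question to form the few-shot prompt.
    return demos
```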

Check out the Auto-CoT code


Benefits of Chain of Thought Prompting

- Breaks down multi-step problems into simpler components to enable more efficient solving
- Provides transparency into the model's reasoning for interpretability
- Applicable across diverse reasoning tasks like math, commonsense, and symbolic manipulation
- Easily integrated into existing models via prompting; does not require any architectural change
- Makes the model's thought process relatable to facilitate human-AI collaboration
- Adapts the complexity of the reasoning chain to task difficulty for broad applicability
- Enables error identification by exposing the model's step-by-step reasoning logic
- Teaches generalizable structured problem-solving strategies transferable across tasks

Limitations of CoT

Task Complexity

Chain-of-Thought prompting offers minimal additional value over standard prompting for tasks that lack multi-step reasoning requirements or cannot be easily decomposed. Its benefits are best realized for problems requiring sequential logic or intermediate explanatory steps.

Prompt Quality

The technique depends heavily on prompt quality to steer models through reasoning chains. Crafting prompts that provide effective stepwise guidance demands care and can prove difficult for complex domains that necessitate expert knowledge.

Scalability

While Auto-CoT tries to automate the creation of reasoning chains, producing them remains a complex and labor-intensive process. As tasks multiply, the manual effort of creating or verifying reasoning chains keeps increasing.

Model Size

Chain-of-Thought reasoning works well only on very large models with more than 100 billion parameters; on smaller models its effectiveness drops. On the other hand, the efficacy of CoT as model size increases further remains to be seen.

