12 Prompt Engineering Techniques


Prompt Engineering can be described as an art form: creating input requests for
Large Language Models (LLMs) that will lead to the envisaged output. Here are twelve
different techniques for crafting a single prompt or a sequence of prompts.

Cobus Greyling · 7 min read · Aug 2

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the
intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development
Frameworks, Data-Centric latent spaces & more.

Least-To-Most Prompting


Inference is the process of reaching a conclusion based on evidence and reasoning.
In turn, reasoning can be elicited from an LLM by providing it with a few examples
of how to reason and use evidence.

Hence a novel prompting strategy was developed, named least-to-most prompting.


This method is underpinned by the following strategy:

1. Decompose a complex problem into a series of simpler sub-problems.

2. Subsequently, solve each of these sub-problems in turn.

Solving each sub-problem is facilitated by the answers to previously solved sub-problems.

Hence least-to-most prompting is a technique of using a progressive sequence of
prompts to reach a final conclusion.
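
As a rough illustration, below is a minimal sketch of this two-stage flow in Python. It assumes a hypothetical call_llm function (any wrapper that takes a prompt string and returns the model's text completion); it is not a specific library's API.

# Minimal least-to-most sketch; call_llm is an assumed placeholder for any
# prompt -> completion function (e.g. your own wrapper around an LLM client).
def least_to_most(question: str, call_llm) -> str:
    # Stage 1: decompose the complex question into simpler sub-questions.
    decomposition = call_llm(
        "Break the following problem into a numbered list of simpler "
        f"sub-questions:\n{question}"
    )
    sub_questions = [line for line in decomposition.splitlines() if line.strip()]

    # Stage 2: solve each sub-question, feeding earlier answers back in.
    context = ""
    for sub_q in sub_questions:
        answer = call_llm(f"{context}\nQ: {sub_q}\nA:")
        context += f"\nQ: {sub_q}\nA: {answer}"

    # The accumulated sub-answers support the final conclusion.
    return call_llm(f"{context}\nGiven the answers above, answer: {question}")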

Least To Most Prompting


Least To Most Prompting for Large Language Models (LLMs)
enables the LLM to handle complex reasoning.
cobusgreyling.medium.com

Self-Ask Prompting
Self-Ask Prompting is a progression from direct prompting and Chain-Of-Thought prompting.

The interesting thing about self-ask prompting is that the LLM's reasoning is shown
explicitly, and the LLM decomposes the question into smaller follow-up questions.

The LLM knows when the final answer has been reached and can move from the
intermediate follow-up answers to a final answer.
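
As a rough sketch (assuming a hypothetical call_llm helper, not a specific library), a self-ask prompt can be set up with a single worked example that teaches the model to ask and answer its own follow-up questions:

# Self-ask sketch; the one-shot example (the canonical illustration from the
# self-ask paper) teaches the model to emit follow-up questions and to mark
# completion with "So the final answer is:".
SELF_ASK_EXAMPLE = """\
Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?
Are follow up questions needed here: Yes.
Follow up: How old was Theodor Haecker when he died?
Intermediate answer: Theodor Haecker was 65 years old when he died.
Follow up: How old was Harry Vaughan Watkins when he died?
Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.
So the final answer is: Harry Vaughan Watkins
"""

def self_ask(question: str, call_llm) -> str:
    prompt = (
        f"{SELF_ASK_EXAMPLE}\n"
        f"Question: {question}\n"
        "Are follow up questions needed here:"
    )
    completion = call_llm(prompt)
    # The reasoning (follow-ups and intermediate answers) stays visible in the
    # completion; only the text after the marker is the final answer.
    return completion.split("So the final answer is:")[-1].strip()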


Self-Ask Prompting
Self-Ask Prompting is a progression from Chain Of Thought
Prompting. Below are a few practical examples and an…
cobusgreyling.medium.com

Meta-Prompting
The key principle underpinning Meta-Prompting is to cause the agent to reflect on
its own performance and amend its own instructions accordingly, while
simultaneously using one overarching meta-prompt.
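
A minimal sketch of such a self-improvement loop is shown below, assuming a hypothetical call_llm helper; the overarching meta-prompt asks the model to reflect on its answer and rewrite its own task instructions:

# Meta-prompting sketch; call_llm is an assumed prompt -> completion placeholder.
def meta_prompt_loop(task: str, call_llm, rounds: int = 3) -> str:
    instructions = "You are a helpful assistant. Complete the task as well as you can."
    for _ in range(rounds):
        answer = call_llm(f"{instructions}\n\nTask: {task}")
        # The meta-prompt: reflect on performance, then amend the instructions.
        instructions = call_llm(
            "Here were your instructions and your answer.\n"
            f"Instructions: {instructions}\n"
            f"Answer: {answer}\n"
            "Reflect on how well the answer met the task, then rewrite the "
            "instructions so a future attempt does better. "
            "Return only the rewritten instructions."
        )
    return instructions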

Meta-Prompt
Meta-Prompt for building self-improving agents with a single
universal prompt.
cobusgreyling.medium.com

Chain-Of-Thought Prompting


Intuitively, we as humans break a larger task or problem into sub-tasks, and then
chain these sub-tasks together, using the output of one sub-task as the input for
the next.

By using chain-of-thought prompting within the OpenAI Playground, a method
wherein specific examples of chain of thought are provided as guidance, it is
possible to showcase how large language models can develop sophisticated
reasoning capabilities.

Research has shown that sufficiently large language models can enable the
emergence of reasoning abilities when prompted in this way.
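
For illustration, a few-shot chain-of-thought prompt of the kind one might paste into the Playground could look as follows; the worked example is the well-known tennis-ball problem from the chain-of-thought literature:

# A few-shot chain-of-thought prompt: the worked example demonstrates the
# reasoning steps, and the model is expected to continue in the same style.
COT_PROMPT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls.
5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more,
how many apples do they have?
A:"""
# Sending COT_PROMPT to a sufficiently large model typically yields a
# step-by-step answer ending in "The answer is 9."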

Chain-Of-Thought Prompting & LLM Reasoning


When we as humans are faced with a complicated reasoning task,
such as a multi-step math word problem, we segment our…
cobusgreyling.medium.com

ReAct
In humans, the tight synergy between reasoning and acting allows us to learn new
tasks quickly and to perform robust reasoning and decision making, even when faced
with unforeseen circumstances, new information or uncertainty.

LLMs have demonstrated impressive results in chain-of-thought (CoT) reasoning and
prompting, and in acting (the generation of action plans).

The idea of ReAct is to combine reasoning and taking action.

Reasoning enables the model to induce, track and update action plans, while actions
allow for gathering additional information from external sources.

The combination of these two ideas is named ReAct. It was applied to a diverse set
of language and decision-making tasks to demonstrate its effectiveness over
state-of-the-art baselines, in addition to improved human interpretability and
trustworthiness.
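
A minimal sketch of a ReAct-style loop follows, assuming a hypothetical call_llm helper and a generic, caller-supplied search_tool; the model interleaves Thought and Action lines, and each Action result is fed back as an Observation:

import re

# ReAct sketch; call_llm (prompt -> completion) and search_tool (query -> text)
# are assumed placeholders, not a specific framework's API.
def react(question: str, call_llm, search_tool, max_steps: int = 5) -> str:
    transcript = (
        "Answer the question by interleaving Thought, Action and Observation lines.\n"
        "Actions look like: Action: search[<query>] or Action: finish[<answer>].\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        step = call_llm(transcript)              # model emits Thought + Action
        transcript += step + "\n"
        match = re.search(r"Action: (\w+)\[(.*?)\]", step)
        if not match:
            break
        tool, argument = match.groups()
        if tool == "finish":
            return argument                      # reasoning reached a final answer
        observation = search_tool(argument)      # acting: gather external information
        transcript += f"Observation: {observation}\n"
    return transcript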

ReAct: Synergy Between Reasoning & Acting In LLMs



An element of human intelligence is the ability to seamlessly
combine task-oriented actions with verbal or inner…
cobusgreyling.medium.com

Symbolic Reasoning & PAL


LLMs should not only be able to perform mathematical reasoning, but also symbolic
reasoning which involves reasoning pertaining to colours and object types.

Consider the following question:

I have a chair, two potatoes, a cauliflower, a lettuce head, two tables, a

cabbage, two onions, and three fridges. How many vegetables do I have?

The LLM should convert the input into a dictionary with entities and values
according to their quantities, while filtering out non-vegetable entities.

Finally, the answer is the sum of the dictionary values. Below is the PAL output
from the LLM:

# note: I'm not counting the chair, tables, or fridges
vegetables_to_count = {
'potato': 2,
'cauliflower': 1,
'lettuce head': 1,
'cabbage': 1,
'onion': 2
}
answer = sum(vegetables_to_count.values())
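
In the PAL setup the model only writes this program; the arithmetic itself is offloaded to the Python runtime. A minimal sketch of that execution step (the program string below simply stands in for the model's output above):

# Execute the LLM-generated program so the interpreter, not the model,
# performs the counting.
program = """
vegetables_to_count = {
    'potato': 2,
    'cauliflower': 1,
    'lettuce head': 1,
    'cabbage': 1,
    'onion': 2
}
answer = sum(vegetables_to_count.values())
"""
namespace = {}
exec(program, namespace)      # run the generated code in an isolated namespace
print(namespace["answer"])    # -> 7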

Symbolic Reasoning & PAL: Program-Aided Large Language Models
LLMs should not only be able to perform mathematical reasoning,
but also symbolic reasoning which involves reasoning…
cobusgreyling.medium.com

Iterative Prompting


Of late the focus has shifted from LLM fine-tuning to enhanced prompt engineering:
ensuring that prompts are contextual and contain few-shot training examples and
conversation history.

Ensure the prompt holds contextual information via an iterative process.

Iterative prompting should establish a contextual chain-of-thought, which counters
the generation of irrelevant facts and hallucination, through interactive,
context-aware prompting.


In the illustrated example, knowledge at steps C1 and C2 is important for
accurately answering the question. The approach of iterative prompting contains
strong elements of chain-of-thought prompting and pipelines.
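
A minimal sketch of such an iterative, context-aware loop is shown below, assuming hypothetical call_llm and retrieve_context helpers; each turn rebuilds the prompt from freshly retrieved context plus the conversation so far:

# Iterative prompting sketch; call_llm (prompt -> completion) and
# retrieve_context (question -> supporting context) are assumed placeholders.
def iterative_prompting(questions: list[str], call_llm, retrieve_context) -> list[str]:
    history: list[str] = []
    answers: list[str] = []
    for question in questions:
        context = retrieve_context(question)     # keep each turn grounded
        prompt = (
            "Use only the context and the conversation so far; do not invent facts.\n"
            f"Context: {context}\n"
            + "\n".join(history)
            + f"\nUser: {question}\nAssistant:"
        )
        answer = call_llm(prompt)
        # carry the growing conversation history into the next turn
        history += [f"User: {question}", f"Assistant: {answer}"]
        answers.append(answer)
    return answers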

Sequential Prompting
Sequential prompting considers the possibility of building a capable recommender
with LLMs. Usually recommender systems are developed in a pipeline architecture,
consisting of multi-stage candidate generation (retrieving more relevant items) and
ranking (ranking relevant items at a higher position) procedures.

Sequential Prompting focuses on the ranking stage of recommender systems, since
LLMs are expensive to run on a large-scale candidate set.

The ranking performance is sensitive to the retrieved top-ranked candidate items,
which makes this stage well suited to examining the subtle differences in the
recommendation abilities of LLMs.
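
As a rough sketch (assuming a hypothetical call_llm helper), the ranking-stage prompt can be built from the user's interaction history plus the candidate set produced by a conventional retrieval stage:

# Sequential-prompting sketch for the ranking stage of a recommender;
# call_llm is an assumed prompt -> completion placeholder, and the candidate
# list is assumed to come from an upstream retrieval stage.
def rank_candidates(user_history: list[str], candidates: list[str], call_llm) -> str:
    prompt = (
        "I have watched the following movies, in order:\n"
        + "\n".join(f"{i + 1}. {title}" for i, title in enumerate(user_history))
        + "\n\nRank these candidate movies from most to least likely to be my next watch:\n"
        + "\n".join(f"- {title}" for title in candidates)
    )
    return call_llm(prompt)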


Self-Consistency
With chain-of-thought reasoning, a single path of thought is generated and then
followed. Self-consistency, by contrast, leverages the intuition that a complex
reasoning problem typically admits multiple different ways of thinking that lead
to the same unique correct answer.


The self-consistency method consists of three steps:

1. Prompt the LLM to generate chain-of-thought (CoT) reasoning.

2. Generate a diverse set of reasoning paths.

3. Select the most consistent output as the final answer.

The self-consistency approach can introduce increased overhead, especially if the
steps of each CoT involve calling external tools and APIs. The overhead manifests
as additional cost and time to complete the round-trips.
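
A minimal sketch of the three steps, assuming a hypothetical call_llm(prompt, temperature) helper and prompts whose reasoning paths each end with a line of the form "The answer is X":

from collections import Counter

# Self-consistency sketch; call_llm is an assumed (prompt, temperature) ->
# completion placeholder, and extract_answer assumes each chain-of-thought
# path ends with "The answer is X."
def extract_answer(completion: str) -> str:
    return completion.rsplit("The answer is", 1)[-1].strip(" .\n")

def self_consistent_answer(cot_prompt: str, call_llm, samples: int = 5) -> str:
    # Steps 1 and 2: sample a diverse set of reasoning paths (temperature > 0).
    paths = [call_llm(cot_prompt, temperature=0.7) for _ in range(samples)]
    answers = [extract_answer(path) for path in paths]
    # Step 3: keep the most consistent (majority) answer.
    return Counter(answers).most_common(1)[0][0]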

Automatic Reasoning & Tool Use (ART)


It has been illustrated that Chain Of Thought prompting elicits complex and
sequential reasoning from LLMs. It has also been proven that for each step external
tools can be used to improve the specific node’s generated output.

The premise of these methodologies is to leverage a frozen large language model
(LLM), hence augmenting a previously trained model which is time-stamped.

Automatic Reasoning and Tool-use (ART) is a framework which also leverages frozen
models to generate intermediate reasoning steps as a program.

The approach of ART is strongly reminiscent of the principle of agents: decomposing
a problem and making use of tools for each decomposed step.

With ART, a frozen LLM decomposes instances of a new task into multiple steps,
using external tools whenever appropriate.
ART is a fine-tuning-free approach that automates multi-step reasoning and
automatic tool selection and use.
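
A minimal sketch of this flow, assuming a hypothetical call_llm helper and a small, caller-supplied tool registry; the frozen model writes the multi-step plan, and tools are substituted in wherever a step calls for them:

# ART-style sketch; call_llm is an assumed prompt -> completion placeholder and
# `tools` maps tool names to callables, e.g. {"calculator": ..., "search": ...}.
def art(task: str, call_llm, tools: dict) -> str:
    plan = call_llm(
        "Decompose the task into numbered steps. Where a tool would help, "
        "write the step as tool_name(argument). Available tools: "
        + ", ".join(tools)
        + f"\nTask: {task}"
    )
    executed_steps = []
    for step in plan.splitlines():
        for name, tool in tools.items():
            if f"{name}(" in step:                      # the step requests a tool
                argument = step.split(f"{name}(", 1)[1].rsplit(")", 1)[0]
                step = f"{step} -> {tool(argument)}"
                break
        executed_steps.append(step)
    # The frozen LLM composes the final answer from the tool-augmented steps.
    return call_llm(f"Task: {task}\nSteps and results:\n" + "\n".join(executed_steps))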

Generated Knowledge
The principle of generated knowledge is that knowledge can be integrated at
inference time, showing that reference knowledge can be used instead of model
fine-tuning.

Tests were performed across multiple datasets, including common-sense reasoning
benchmarks.


The principle of generated knowledge is supported by developments like RAG,
pipelines, and more.
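
A minimal sketch of generated-knowledge prompting, assuming a hypothetical call_llm helper: knowledge statements are generated first and then injected into the prompt at inference time, with no fine-tuning involved:

# Generated-knowledge sketch; call_llm is an assumed prompt -> completion placeholder.
def generated_knowledge_answer(question: str, call_llm, n_facts: int = 3) -> str:
    # Step 1: generate reference knowledge relevant to the question.
    knowledge = call_llm(
        f"Generate {n_facts} short, relevant facts about the following question. "
        f"Facts only, no answer.\nQuestion: {question}"
    )
    # Step 2: integrate the generated knowledge at inference time.
    return call_llm(
        f"Knowledge:\n{knowledge}\n\nUsing the knowledge above, answer:\n{question}"
    )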

⭐️Follow me on LinkedIn for updates on Large Language Models ⭐️

I’m currently the Chief Evangelist @ HumanFirst. I explore & write about all things at the
intersection of AI & language; ranging from LLMs, Chatbots, Voicebots, Development
Frameworks, Data-Centric latent spaces & more.

HumanFirst — Design, test and launch custom NLU and prompts


HumanFirst makes sense of unstructured data quickly. Pairing
human-in-the-loop and AI-powered features, seamlessly…
www.humanfirst.ai


