The Prompt Report: A Systematic Survey of Prompting Techniques
Abstract
Generative Artificial Intelligence (GenAI) systems are increasingly being deployed across diverse industries
and research domains. Developers and end-users interact with these systems through the use of prompting
and prompt engineering. Although prompt engineering is a widely adopted and extensively researched area,
it suffers from conflicting terminology and a fragmented ontological understanding of what constitutes
an effective prompt due to its relatively recent emergence. We establish a structured understanding of
prompt engineering by assembling a taxonomy of prompting techniques and analyzing their applications.
We present a detailed vocabulary of 33 terms, a taxonomy of 58 LLM prompting techniques,
and 40 techniques for other modalities. Additionally, we provide best practices and guidelines for prompt
engineering, including advice for prompting state-of-the-art (SOTA) LLMs such as ChatGPT. We further
present a meta-analysis of the entire literature on natural language prefix-prompting. As a culmination of
these efforts, this paper presents the most comprehensive survey on prompt engineering to date.
Contents

A Appendices
   A.1 Definitions of Prompting
   A.2 Extended Vocabulary
       A.2.1 Prompting Terms
       A.2.2 Prompt Engineering Terms
       A.2.3 Fine-Tuning Terms
       A.2.4 Orthogonal Prompt Types
   A.3 Datasheet
       A.3.1 Motivation
       A.3.2 Composition
       A.3.3 Collection Process
       A.3.4 Preprocessing/Cleaning/Labeling
       A.3.5 Uses
       A.3.6 Distribution
       A.3.7 Maintenance
   A.4 Keywords
   A.5 Evaluation Table
   A.6 Entrapment Prompting Process
       A.6.1 Exploration
       A.6.2 Getting a Label
       A.6.3 Varying Prompting Techniques
   A.7 Formally Defining a Prompt
   A.8 In-Context Learning Definitions Disambiguation
   A.9 Contributions
1 Introduction
It is important to understand how to evaluate the outputs of agents and prompting techniques to ensure accuracy and avoid hallucinations. Thus, we discuss ways of evaluating these outputs (Section 4.2). We also discuss security (Section 5.1) and safety measures (Section 5.2) for designing prompts that reduce the risk of harm to companies and users.

Finally, we apply prompting techniques in two case studies (Section 6.1). In the first, we test a range of prompting techniques against the commonly used benchmark MMLU (Hendrycks et al., 2021). In the second, we explore in detail an example of manual prompt engineering on a significant, real-world use case, identifying signals of frantic hopelessness–a top indicator of suicidal crisis–in the text of individuals seeking support (Schuck et al., 2019a). We conclude with a discussion of the nature of prompting and its recent development (Section 8).

1.1 What is a Prompt?

A prompt is an input to a Generative AI model that is used to guide its output (Meskó, 2023; White et al., 2023; Heston and Khun, 2023; Hadi et al., 2023; Brown et al., 2020). Prompts may consist of text, image, sound, or other media. Some examples of prompts include: "write a three paragraph email for a marketing campaign for an accounting firm", a photograph of a table accompanied by the text "describe everything on the table", or a recording of an online meeting, with the instructions "summarize this".

Write a poem about trees.

Write a poem about the following topic:
{USER_INPUT}

Figure 1.2: Prompts and prompt templates are distinct concepts; a prompt template becomes a prompt when input is inserted into it.

Prompt Template  Prompts are often constructed via a prompt template (Shin et al., 2020b). A prompt template is a function that contains one or more variables which will be replaced by some media (usually text) to create a prompt. This prompt can then be considered to be an instance of the template.

Consider applying prompting to the task of binary classification of tweets. Here is an initial prompt template that can be used to classify inputs.

Classify the tweet as positive or negative:
{TWEET}

Each tweet in the dataset would be inserted into a separate instance of the template and the resulting prompt would be given to an LLM for inference.

1.2 Terminology

1.2.1 Components of a Prompt

There are a variety of common components included in a prompt. We summarize the most commonly used components and discuss how they fit into prompts.

Directive  Many prompts issue a directive in the form of an instruction or question.¹ This is the core intent of the prompt, sometimes simply called the "intent". For example, here is an instance of a prompt with a single instruction:

Tell me five good books to read.

Directives can also be implicit, as in this one-shot case, where the directive is to perform English to Spanish translation:

Night: Noche
Morning:

Examples  Examples, also known as exemplars or shots, act as demonstrations that guide the GenAI to accomplish a task. The above prompt is a One-Shot (i.e. one example) prompt.

Output Formatting  It is often desirable for the GenAI to output information in certain formats, for example, CSV or markdown formats (Xia et al., 2024). Structuring outputs may reduce performance on some tasks (Tam et al., 2024). However, Kurt (2024) points out various flaws in Tam et al. (2024) and shows that structuring outputs may actually improve performance. Here is an example of how you might format a prompt to output information as a CSV:

¹ "Directives", from Searle (1969), are a type of speech act intended to encourage an action, and have been invoked in models of human-computer dialogue (Morelli et al., 1991).
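As an illustrative aside, the tweet-classification template above might be instantiated in code as follows. This is a minimal sketch, not code from the paper; call_llm is a hypothetical stand-in for whatever LLM API is in use.

    # Minimal sketch: instantiating a prompt template for tweet classification.
    TEMPLATE = "Classify the tweet as positive or negative: {TWEET}"

    def build_prompt(tweet: str) -> str:
        # Inserting the input into the template yields a concrete prompt,
        # i.e. an instance of the template.
        return TEMPLATE.format(TWEET=tweet)

    def classify(tweet: str, call_llm) -> str:
        # call_llm is assumed to take a prompt string and return the model's text output.
        return call_llm(build_prompt(tweet)).strip().lower()

    print(build_prompt("I love this new phone!"))

Keeping the template string separate from the inference call lets the same template be reused across an entire dataset.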
Figure 1.3: A terminology of prompting. Terms shown: Prompting, Prompt, Prompting Technique, Context (1.2.1), Context Window (A.2.1), Priming (A.2.1), In-Context Learning (2.2.1), Few-Shot Prompt (2.2.1), Zero-Shot Prompt (2.2.2), Exemplar (1.2.2), and Orthogonal Prompt Types (A.2.4), split by Density (Continuous Prompt, Discrete Prompt; A.2.4.2) and Originator (User Prompt, System Prompt; A.2.4.1). Terms with links to the appendix are not sufficiently critical to describe in the main paper, but are important to the field of prompting. Prompting techniques are shown in Figure 2.2.
Figure 1.4: The Prompt Engineering Process consists of three repeated steps: 1) performing inference on a dataset, 2) evaluating performance, and 3) modifying the prompt template. Note that the extractor is used to extract a final response from the LLM output (e.g. "This phrase is positive" → "positive"). See more information on extractors in Section 2.5. (Diagram components: Dataset (entries x₁ ... xₙ), Prompt Template, Generative AI, Extractor, Utility Function, Modify Prompt Template until Desiderata Met.)

Prompt Chain  A prompt chain (prompt chaining) consists of two or more prompt templates used in succession. The output of the prompt generated by the first prompt template is used to parameterize the second template, continuing until all templates are exhausted (Wu et al., 2022).

Prompting Technique  A prompting technique is a blueprint that describes how to structure a prompt, prompts, or the dynamic sequencing of multiple prompts. A prompting technique may incorporate conditional or branching logic, parallelism, or other architectural considerations spanning multiple prompts.

1.3 A Short History of Prompts

The idea of using natural language prefixes, or prompts, to elicit language model behaviors and responses originated before the GPT-3 and ChatGPT era. GPT-2 (Radford et al., 2019a) makes use of prompts, and they appear to have been first used in the context of Generative AI by Fan et al. (2018). However, the concept of prompts was preceded by related concepts such as control codes (Pfaff, 1979; Poplack, 1980; Keskar et al., 2019) and writing prompts.

The term Prompt Engineering appears to have come into existence more recently, from Radford et al. (2021), and then slightly later from Reynolds and McDonell (2021). However, various papers perform prompt engineering without naming the term (Wallace et al., 2019; Shin et al., 2020a), including Schick and Schütze (2020a,b); Gao et al. (2021) for non-autoregressive language models.

Some of the first works on prompting define a prompt slightly differently to how it is currently used. For example, consider the following prompt from Brown et al. (2020):

Translate English to French:
llama

Brown et al. (2020) consider the word "llama" to be the prompt, while "Translate English to French:" is the "task description". More recent papers, including this one, refer to the entire string passed to the LLM as the prompt.
2 A Meta-Analysis of Prompting
2.1 Systematic Review Process

(Figure 2.1, record counts from the systematic review: 3,677 from arXiv, 2,087 from SS, 639 from ACL; 4,797 records; 1,565 records included in quantitative analysis.)

…assisted review.⁴ As an initial sample to estab…
Figure 2.2: Taxonomy of text-based prompting techniques covered in Section 2.2.
   Few-Shot / In-Context Learning (2.2.1): Exemplar Generation (SG-ICL), Exemplar Ordering, Exemplar Selection (KNN, Vote-K), Instruction Selection, Prompt Mining
   Zero-Shot (2.2.2): Emotion Prompting, Role Prompting, Style Prompting, S2A, SimToM, RaR, RE2, Self-Ask
   Thought Generation (2.2.3): Chain-of-Thought (CoT); Zero-Shot CoT (2.2.3.1): Analogical Prompting, Step-Back Prompting, Thread-of-Thought (ThoT), Tab-CoT; Few-Shot CoT (2.2.3.2): Active-Prompt, Auto-CoT, Complexity-Based, Contrastive, Memory-of-Thought, Uncertainty-Routed CoT
   Decomposition (2.2.4): DECOMP, Faithful CoT, Least-to-Most, Plan-and-Solve, Program-of-Thought, Recursion-of-Thought, Skeleton-of-Thought, Tree-of-Thought, Metacognitive
   Ensembling (2.2.5): COSP, DENSE, DiVeRSe, Max Mutual Information, Meta-CoT, MoRE, Self-Consistency, Universal Self-Consistency, USP, Prompt Paraphrasing
   Self-Criticism (2.2.6): Chain-of-Verification, Self-Calibration, Self-Refine, Self-Verification, ReverseCoT, Cumulative Reasoning
Figure 2.3: We highlight six main design decisions when crafting few-shot prompts: Exemplar Quantity (include as many exemplars as possible*), Exemplar Ordering (randomly order exemplars*), Exemplar Label Distribution (provide a balanced label distribution*), Exemplar Label Quality (ensure exemplars are labeled correctly*), Exemplar Format (choose a common format*), and Exemplar Similarity (select exemplars similar to the test instance*). *Please note that recommendations here do not generalize to all tasks; in some cases, each of them could hurt performance.
Extract all words that have 3 of the same
letter and at least 3 other letters from the
following text: {TEXT}

…Talukdar, 2021; Liu et al., 2021; Rubin et al., 2022). On some tasks, exemplar order can cause accuracy to vary from sub-50% to 90%+ (Lu et al., 2021).
…occur commonly in the training data will lead to better performance (Jiang et al., 2020).

Exemplar Similarity  Selecting exemplars that are similar to the test sample is generally beneficial for performance (Liu et al., 2021; Min et al., 2022). However, in some cases, selecting more diverse exemplars can improve performance (Su et al., 2022; Min et al., 2022).

Instruction Selection  While instructions are required to guide LLMs in zero-shot prompts (Wei et al., 2022a), their utility in few-shot prompts is less clear. Ajith et al. (2024) show that generic, task-agnostic instructions improve classification and question answering accuracy over task-specific ones, concluding that instruction-following abilities can be achieved via exemplars alone. While they may not improve correctness, instructions in few-shot prompts can still help guide auxiliary response attributes like writing style (Roy et al., 2023).

2.2.1.2 Few-Shot Prompting Techniques

Considering all of these factors, Few-Shot Prompting can be very difficult to implement effectively. We now examine techniques for Few-Shot Prompting in the supervised setting. Ensembling approaches can also benefit Few-Shot Prompting, but we discuss them separately (Section 2.2.5).

Assume we have a training dataset D_train, which contains multiple inputs D^train_xi and outputs D^train_yi, which can be used to few-shot prompt a GenAI (rather than performing gradient-based updates). Assume that this prompt can be dynamically generated with respect to D^test_xi at test time. Here is the prompt template we will use for this section, following the 'input: output' format (Figure 2.4):

{Exemplars}
D^test_xi:

Figure 2.7: Few-Shot Prompting Template

K-Nearest Neighbor (KNN)  (Liu et al., 2021) is part of a family of algorithms that selects exemplars similar to D^test_xi to boost performance. Although effective, employing KNN during prompt generation may be time and resource intensive.
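As an illustration of KNN-style exemplar selection, here is a minimal sketch; it is not code from Liu et al. (2021), and the embed function (mapping text to a vector) is an assumed, hypothetical component.

    import numpy as np

    def select_knn_exemplars(test_input, train_inputs, train_outputs, embed, k=4):
        # Embed the test instance and all candidate exemplars.
        q = embed(test_input)
        X = np.stack([embed(x) for x in train_inputs])
        # Cosine similarity between the test instance and each candidate.
        sims = X @ q / (np.linalg.norm(X, axis=1) * np.linalg.norm(q) + 1e-9)
        top = np.argsort(-sims)[:k]
        # Format the selected exemplars in the "input: output" style of Figure 2.7.
        exemplars = "\n".join(f"{train_inputs[i]}: {train_outputs[i]}" for i in top)
        return f"{exemplars}\n{test_input}:"

The k most similar exemplars are placed into the few-shot template, which is then sent to the model.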
Vote-K  (Su et al., 2022) is another method to select similar exemplars to the test sample. In one stage, a model proposes useful unlabeled candidate exemplars for an annotator to label. In the second stage, the labeled pool is used for Few-Shot Prompting. Vote-K also ensures that newly added exemplars are sufficiently different from existing ones to increase diversity and representativeness.

Self-Generated In-Context Learning (SG-ICL)  (Kim et al., 2022) leverages a GenAI to automatically generate exemplars. While better than zero-shot scenarios when training data is unavailable, the generated samples are not as effective as actual data.

Prompt Mining  (Jiang et al., 2020) is the process of discovering optimal "middle words" in prompts (effectively prompt templates) through large corpus analysis. For example, instead of using the common "Q: A:" format for few-shot prompts, there may exist something similar which occurs more frequently in the corpus. Formats which occur more often in the corpus will likely lead to improved prompt performance.

More Complicated Techniques  such as LENS (Li and Qiu, 2023a), UDR (Li et al., 2023f), and Active Example Selection (Zhang et al., 2022a) leverage iterative filtering, embedding and retrieval, and reinforcement learning, respectively.

2.2.2 Zero-Shot

In contrast to Few-Shot Prompting, Zero-Shot Prompting uses zero exemplars. There are a number of well-known standalone zero-shot techniques as well as zero-shot techniques combined with another concept (e.g. Chain of Thought), which we discuss later (Section 2.2.3).

Role Prompting  (Wang et al., 2023j; Zheng et al., 2023d), also known as persona prompting (Schmidt et al., 2023; Wang et al., 2023l), assigns a specific role to the GenAI in the prompt. For example, the user might prompt it to act like "Madonna" or a "travel writer". This can create more desirable outputs for open-ended tasks (Reynolds and McDonell, 2021) and in some cases may improve accuracy on benchmarks (Zheng et al., 2023d).

Style Prompting  (Lu et al., 2023a) involves specifying the desired style, tone, or genre in the prompt to shape the output of a GenAI. A similar effect can be achieved using role prompting.
(e.g., "This is important to my career") into the
Q: Jack has two baskets, each containing
prompt, which may lead to improved LLM perfor-
three balls. How many balls does Jack have
mance on benchmarks and open-ended text genera-
in total?
tion.
A: One basket contains 3 balls, so two bas-
System 2 Attention (S2A) (Weston and kets contain 3 * 2 = 6 balls.
Sukhbaatar, 2023) first asks an LLM to rewrite Q: {QUESTION}
the prompt and remove any information unrelated A:
to the question therein. Then, it passes this new
prompt into an LLM to retrieve a final response.
Figure 2.8: A One-Shot Chain-of-Thought Prompt.
SimToM (Wilf et al., 2023) deals with compli-
cated questions which involve multiple people or
sionally referred to as Chain-of-Thoughts (Tutunov
objects. Given the question, it attempts to establish
et al., 2023; Besta et al., 2024; Chen et al., 2023d).
the set of facts one person knows, then answer the
It has been demonstrated to significantly enhance
question based only on those facts. This is a two
the LLM’s performance in mathematics and reason-
prompt process and can help eliminate the effect of
ing tasks. In Wei et al. (2022b), the prompt includes
irrelevant information in the prompt.
an exemplar featuring a question, a reasoning path,
Rephrase and Respond (RaR) (Deng et al., 2023) and the correct answer (Figure 2.8).
instructs the LLM to rephrase and expand the ques-
2.2.3.1 Zero-Shot-CoT
tion before generating the final answer. For ex-
ample, it might add the following phrase to the The most straightforward version of CoT contains
question: "Rephrase and expand the question, and zero exemplars. It involves appending a thought
respond". This could all be done in a single pass inducing phrase like "Let’s think step by step." (Ko-
or the new question could be passed to the LLM jima et al., 2022) to the prompt. Other suggested
separately. RaR has demonstrated improvements thought-generating phrases include "Let’s work
on multiple benchmarks. this out in a step by step way to be sure we have the
right answer" (Zhou et al., 2022b) and "First, let’s
Re-reading (RE2) (Xu et al., 2023) adds the think about this logically" (Kojima et al., 2022).
phrase "Read the question again:" to the prompt in Yang et al. (2023a) searches for an optimal thought
addition to repeating the question. Although this is inducer. Zero-Shot-CoT approaches are attractive
such a simple technique, it has shown improvement as they don’t require exemplars and are generally
in reasoning benchmarks, especially with complex task agnostic.
questions.
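A minimal sketch of Zero-Shot-CoT is shown below; call_llm is a hypothetical LLM call, and the second "answer trigger" call follows the two-stage setup of Kojima et al. (2022) (see also Section 2.5).

    def zero_shot_cot(question: str, call_llm) -> str:
        # Append a thought-inducing phrase so the model emits its reasoning first.
        reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
        reasoning = call_llm(reasoning_prompt)
        # A second call extracts the final answer from the generated reasoning.
        answer_prompt = f"{reasoning_prompt}\n{reasoning}\nTherefore, the final answer is"
        return call_llm(answer_prompt)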
Step-Back Prompting  (Zheng et al., 2023c) is a modification of CoT where the LLM is first asked a generic, high-level question about relevant concepts or facts before delving into reasoning. This approach has improved performance significantly on multiple reasoning benchmarks for both PaLM-2L and GPT-4.

Analogical Prompting  (Yasunaga et al., 2023) is similar to SG-ICL, and automatically generates exemplars that include CoTs. It has demonstrated improvements in mathematical reasoning and code generation tasks.

Thread-of-Thought (ThoT) Prompting  (Zhou et al., 2023) consists of an improved thought inducer for CoT reasoning. Instead of "Let's think step by step," it uses "Walk me through this context in manageable parts step by step, summarizing and analyzing as we go." This thought inducer works well in question-answering and retrieval settings, especially when dealing with large, complex contexts.
Tabular Chain-of-Thought (Tab-CoT)  (Jin and Lu, 2023) consists of a Zero-Shot CoT prompt that makes the LLM output reasoning as a markdown table. This tabular design enables the LLM to improve the structure and thus the reasoning of its output.

2.2.3.2 Few-Shot CoT

This set of techniques presents the LLM with multiple exemplars, which include chains-of-thought. This can significantly enhance performance. This technique is occasionally referred to as Manual-CoT (Zhang et al., 2022b) or Golden CoT (Del and Fishel, 2023).

Contrastive CoT Prompting  (Chia et al., 2023) adds exemplars with both incorrect and correct explanations to the CoT prompt in order to show the LLM how not to reason. This method has shown significant improvement in areas like Arithmetic Reasoning and Factual QA.

Uncertainty-Routed CoT Prompting  (Google, 2023) samples multiple CoT reasoning paths, then selects the majority if it is above a certain threshold (calculated based on validation data). If not, it samples greedily and selects that response. This method demonstrates improvement on the MMLU benchmark for both GPT-4 and Gemini Ultra models.

Complexity-based Prompting  (Fu et al., 2023b) involves two major modifications to CoT. First, it selects complex examples for annotation and inclusion in the prompt, based on factors like question length or reasoning steps required. Second, during inference, it samples multiple reasoning chains (answers) and uses a majority vote among chains exceeding a certain length threshold, under the premise that longer reasoning indicates higher answer quality. This technique has shown improvements on three mathematical reasoning datasets.

Active Prompting  (Diao et al., 2023) starts with some training questions/exemplars, asks the LLM to solve them, then calculates uncertainty (disagreement in this case) and asks human annotators to rewrite the exemplars with the highest uncertainty.

Memory-of-Thought Prompting  (Li and Qiu, 2023b) leverages unlabeled training exemplars to build Few-Shot CoT prompts at test time. Before test time, it performs inference on the unlabeled training exemplars with CoT. At test time, it retrieves similar instances to the test sample. This technique has shown substantial improvements in benchmarks like arithmetic, commonsense, and factual reasoning.

Automatic Chain-of-Thought (Auto-CoT) Prompting  (Zhang et al., 2022b) uses Wei et al. (2022b)'s Zero-Shot prompt to automatically generate chains of thought. These are then used to build a Few-Shot CoT prompt for a test sample.

2.2.4 Decomposition

Significant research has focused on decomposing complex problems into simpler sub-questions. This is an effective problem-solving strategy for humans as well as GenAI (Patel et al., 2022). Some decomposition techniques are similar to thought-inducing techniques, such as CoT, which often naturally breaks down problems into simpler components. However, explicitly breaking down problems can further improve LLMs' problem solving ability.

Least-to-Most Prompting  (Zhou et al., 2022a) starts by prompting an LLM to break a given problem into sub-problems without solving them. Then, it solves them sequentially, appending model responses to the prompt each time, until it arrives at a final result. This method has shown significant improvements in tasks involving symbolic manipulation, compositional generalization, and mathematical reasoning.
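The following is a minimal sketch of the Least-to-Most pattern; call_llm is a hypothetical LLM call, and the decomposition prompt wording is an assumption rather than the exact prompt of Zhou et al. (2022a).

    def least_to_most(problem: str, call_llm) -> str:
        # Stage 1: ask the model to list sub-problems without solving them.
        decomposition = call_llm(
            "Break the following problem into a numbered list of simpler "
            f"sub-problems. Do not solve them.\n\nProblem: {problem}"
        )
        sub_problems = [line for line in decomposition.splitlines() if line.strip()]
        # Stage 2: solve the sub-problems sequentially, appending each answer
        # to the context so later steps can build on earlier ones.
        context = f"Problem: {problem}"
        answer = ""
        for sub in sub_problems:
            answer = call_llm(f"{context}\n\nSolve this sub-problem: {sub}")
            context += f"\n{sub}\nAnswer: {answer}"
        # The answer to the final sub-problem is taken as the final result.
        return answer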
Decomposed Prompting (DECOMP)  (Khot et al., 2022) Few-Shot prompts an LLM to show it how to use certain functions. These might include things like string splitting or internet searching; these are often implemented as separate LLM calls. Given this, the LLM breaks down its original problem into sub-problems which it sends to different functions. It has shown improved performance over Least-to-Most prompting on some tasks.

Plan-and-Solve Prompting  (Wang et al., 2023f) consists of an improved Zero-Shot CoT prompt, "Let's first understand the problem and devise a plan to solve it. Then, let's carry out the plan and solve the problem step by step". This method generates more robust reasoning processes than standard Zero-Shot-CoT on multiple reasoning datasets.
Tree-of-Thought (ToT)  (Yao et al., 2023b), also known as Tree of Thoughts (Long, 2023), creates a tree-like search problem by starting with an initial problem then generating multiple possible steps in the form of thoughts (as from a CoT). It evaluates the progress each step makes towards solving the problem (through prompting) and decides which steps to continue with, then keeps creating more thoughts. ToT is particularly effective for tasks that require search and planning.

Recursion-of-Thought  (Lee and Kim, 2023) is similar to regular CoT. However, every time it encounters a complicated problem in the middle of its reasoning chain, it sends this problem into another prompt/LLM call. After this is completed, the answer is inserted into the original prompt. In this way, it can recursively solve complex problems, including ones which might otherwise run over the maximum context length. This method has shown improvements on arithmetic and algorithmic tasks. Though implemented using fine-tuning to output a special token that sends the sub-problem into another prompt, it could also be done only through prompting.

Program-of-Thoughts  (Chen et al., 2023d) uses LLMs like Codex to generate programming code as reasoning steps. A code interpreter executes these steps to obtain the final answer. It excels in mathematical and programming-related tasks but is less effective for semantic reasoning tasks.

Faithful Chain-of-Thought  (Lyu et al., 2023) generates a CoT that has both natural language and symbolic language (e.g. Python) reasoning, just like Program-of-Thoughts. However, it also makes use of different types of symbolic languages in a task-dependent fashion.

Skeleton-of-Thought  (Ning et al., 2023) focuses on accelerating answer speed through parallelization. Given a problem, it prompts an LLM to create a skeleton of the answer, in a sense, sub-problems to be solved. Then, in parallel, it sends these questions to an LLM and concatenates all the outputs to get a final response.

Metacognitive Prompting  (Wang and Zhao, 2024) attempts to make the LLM mirror human metacognitive processes with a five-part prompt chain, with steps including clarifying the question, preliminary judgement, evaluation of response, decision confirmation, and confidence assessment.

2.2.5 Ensembling

In GenAI, ensembling is the process of using multiple prompts to solve the same problem, then aggregating these responses into a final output. In many cases, a majority vote—selecting the most frequent response—is used to generate the final output. Ensembling techniques reduce the variance of LLM outputs and often improve accuracy, but come with the cost of increasing the number of model calls needed to reach a final answer.

Demonstration Ensembling (DENSE)  (Khalifa et al., 2023) creates multiple few-shot prompts, each containing a distinct subset of exemplars from the training set. Next, it aggregates over their outputs to generate a final response.

Mixture of Reasoning Experts (MoRE)  (Si et al., 2023d) creates a set of diverse reasoning experts by using different specialized prompts for different reasoning types (such as retrieval augmentation prompts for factual reasoning, Chain-of-Thought reasoning for multi-hop and math reasoning, and generated knowledge prompting for commonsense reasoning). The best answer from all experts is selected based on an agreement score.

Max Mutual Information Method  (Sorensen et al., 2022) creates multiple prompt templates with varied styles and exemplars, then selects the optimal template as the one that maximizes mutual information between the prompt and the LLM's outputs.

Self-Consistency  (Wang et al., 2022) is based on the intuition that multiple different reasoning paths can lead to the same answer. This method first prompts the LLM multiple times to perform CoT, crucially with a non-zero temperature to elicit diverse reasoning paths. Next, it uses a majority vote over all generated responses to select a final response. Self-Consistency has shown improvements on arithmetic, commonsense, and symbolic reasoning tasks.
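A minimal sketch of Self-Consistency follows; call_llm is a hypothetical API that accepts a sampling temperature, and extract_answer stands in for the answer-extraction step discussed in Section 2.5.

    from collections import Counter

    def self_consistency(question: str, call_llm, extract_answer, n: int = 10) -> str:
        # Sample several CoT reasoning paths at a non-zero temperature so that
        # the paths differ from one another.
        prompt = f"Q: {question}\nA: Let's think step by step."
        answers = [extract_answer(call_llm(prompt, temperature=0.7)) for _ in range(n)]
        # Majority vote over the extracted final answers.
        return Counter(answers).most_common(1)[0][0]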
Universal Self-Consistency  (Chen et al., 2023e) is similar to Self-Consistency except that, rather than selecting the majority response by programmatically counting how often it occurs, it inserts all outputs into a prompt template that selects the majority answer. This is helpful for free-form text generation and cases where the same answer may be output slightly differently by different prompts.
Meta-Reasoning over Multiple CoTs  (Yoran et al., 2023) is similar to Universal Self-Consistency; it first generates multiple reasoning chains (but not necessarily final answers) for a given problem. Next, it inserts all of these chains in a single prompt template then generates a final answer from them.

DiVeRSe  (Li et al., 2023i) creates multiple prompts for a given problem then performs Self-Consistency for each, generating multiple reasoning paths. They score reasoning paths based on each step in them then select a final response.

Consistency-based Self-adaptive Prompting (COSP)  (Wan et al., 2023a) constructs Few-Shot CoT prompts by running Zero-Shot CoT with Self-Consistency on a set of examples then selecting a high agreement subset of the outputs to be included in the final prompt as exemplars. It again performs Self-Consistency with this final prompt.

Universal Self-Adaptive Prompting (USP)  (Wan et al., 2023b) builds upon the success of COSP, aiming to make it generalizable to all tasks. USP makes use of unlabeled data to generate exemplars and a more complicated scoring function to select them. Additionally, USP does not use Self-Consistency.

Prompt Paraphrasing  (Jiang et al., 2020) transforms an original prompt by changing some of the wording, while still maintaining the overall meaning. It is effectively a data augmentation technique that can be used to generate prompts for an ensemble.

2.2.6 Self-Criticism

When creating GenAI systems, it can be useful to have LLMs criticize their own outputs (Huang et al., 2022). This could simply be a judgement (e.g., is this output correct?) or the LLM could be prompted to provide feedback, which is then used to improve the answer. Many approaches to generating and integrating self-criticism have been developed.

Self-Calibration  (Kadavath et al., 2022) first prompts an LLM to answer a question. Then, it builds a new prompt that includes the question, the LLM's answer, and an additional instruction asking whether the answer is correct. This can be useful for gauging confidence levels when applying LLMs and deciding when to accept or revise the original answer.

Self-Refine  (Madaan et al., 2023) is an iterative framework where, given an initial answer from the LLM, it prompts the same LLM to provide feedback on the answer, and then prompts the LLM to improve the answer based on the feedback. This iterative process continues until a stopping condition is met (e.g., max number of steps reached). Self-Refine has demonstrated improvement across a range of reasoning, coding, and generation tasks.
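A minimal sketch of a Self-Refine-style loop is given below; call_llm is a hypothetical LLM call, and the feedback and refinement prompt wording is an assumption rather than the exact prompts of Madaan et al. (2023).

    def self_refine(task: str, call_llm, max_steps: int = 3) -> str:
        answer = call_llm(task)
        for _ in range(max_steps):
            # Ask the same model to critique its own answer.
            feedback = call_llm(
                f"Task: {task}\nAnswer: {answer}\n"
                "Give concise feedback on how to improve this answer. "
                "If no changes are needed, reply with exactly: STOP"
            )
            if feedback.strip() == "STOP":  # stopping condition
                break
            # Ask the model to revise the answer using its own feedback.
            answer = call_llm(
                f"Task: {task}\nAnswer: {answer}\nFeedback: {feedback}\n"
                "Rewrite the answer, applying the feedback."
            )
        return answer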
Reversing Chain-of-Thought (RCoT)  (Xue et al., 2023) first prompts LLMs to reconstruct the problem based on the generated answer. Then, it generates fine-grained comparisons between the original problem and the reconstructed problem as a way to check for any inconsistencies. These inconsistencies are then converted to feedback for the LLM to revise the generated answer.

Self-Verification  (Weng et al., 2022) generates multiple candidate solutions with Chain-of-Thought (CoT). It then scores each solution by masking certain parts of the original question and asking an LLM to predict them based on the rest of the question and the generated solution. This method has shown improvement on eight reasoning datasets.

Chain-of-Verification (COVE)  (Dhuliawala et al., 2023) first uses an LLM to generate an answer to a given question. Then, it creates a list of related questions that would help verify the correctness of the answer. Each question is answered by the LLM, then all the information is given to the LLM to produce the final revised answer. This method has shown improvements in various question-answering and text-generation tasks.

Cumulative Reasoning  (Zhang et al., 2023b) first generates several potential steps in answering the question. It then has an LLM evaluate them, deciding to either accept or reject these steps. Finally, it checks whether it has arrived at the final answer. If so, it terminates the process, but otherwise it repeats it. This method has demonstrated improvements in logical inference tasks and mathematical problems.

2.3 Prompting Technique Usage

As we have just seen, there exist many text-based prompting techniques. However, only a small subset of them are commonly used in research and in industry.
(Figure 2.11: Citation Counts of Prompting Techniques.)
(Figure 2.9: Counts of Model Mentions in Dataset, by Model Name. Figure 2.10: Dataset Mentions in Papers, by Dataset Name.)
We measure technique usage by proxy, measuring the number of citations by other papers in our dataset. We do so with the presumption that papers about prompting are more likely to actually use or evaluate the cited technique. We graph the top 25 papers cited in this way from our dataset and find that most of them propose new prompting techniques (Figure 2.11). The prevalence of citations for Few-Shot and Chain-of-Thought prompting is unsurprising and helps to establish a baseline for understanding the prevalence of other techniques.

2.3.1 Benchmarks

In prompting research, when researchers propose a new technique, they usually benchmark it across multiple models and datasets. This is important to prove the utility of the technique and examine how it transfers across models.

In order to make it easier for researchers proposing new techniques to know how to benchmark them, we quantitatively examine which models (Figure 2.9) and what benchmark datasets (Figure 2.10) are being used. Again, we measure usage by how many times papers in our dataset cite the benchmark datasets and models.

To find which datasets and models are being used, we prompted GPT-4-1106-preview to extract any mentioned dataset or model from the body of papers in our dataset. After, we manually filtered out results that were not models or datasets. The citation counts were acquired by searching items from the finalized list on Semantic Scholar.

2.4 Prompt Engineering

In addition to surveying prompting techniques, we also review prompt engineering techniques, which are used to automatically optimize prompts. We discuss some techniques that use gradient updates, since the set of prompt engineering techniques is much smaller than that of prompting techniques.

Meta Prompting  is the process of prompting an LLM to generate or improve a prompt or prompt template (Reynolds and McDonell, 2021; Zhou et al., 2022b; Ye et al., 2023).

AutoPrompt  (Shin et al., 2020b) uses a frozen LLM as well as a prompt template that includes some "trigger tokens", whose values are updated via backpropagation at training time. This is a version of soft-prompting.

Automatic Prompt Engineer (APE)  (Zhou et al., 2022b) uses a set of exemplars to generate a Zero-Shot instruction prompt. It generates multiple possible prompts, scores them, then creates variations of the best ones (e.g. by using prompt paraphrasing). It iterates on this process until some desiderata are reached.
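A minimal sketch of an APE-style search follows; call_llm, paraphrase, and score are hypothetical stand-ins for an LLM API, a prompt-paraphrasing step, and evaluation of a candidate instruction on a small labeled set.

    def ape(exemplars, call_llm, paraphrase, score, rounds: int = 3, n_candidates: int = 8) -> str:
        # Ask the LLM to infer candidate instructions from input/output exemplars.
        demo = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in exemplars)
        candidates = [
            call_llm(f"{demo}\n\nThe instruction that maps these inputs to these outputs was:")
            for _ in range(n_candidates)
        ]
        for _ in range(rounds):
            # Keep the best-scoring instructions and generate variations of them.
            best = sorted(candidates, key=score, reverse=True)[:2]
            candidates = best + [paraphrase(p) for p in best]
        return max(candidates, key=score)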
Gradientfree Instructional Prompt Search (GrIPS)  (Prasad et al., 2023) is similar to APE, but uses a more complex set of operations including deletion, addition, swapping, and paraphrasing in order to create variations of a starting prompt.

Prompt Optimization with Textual Gradients (ProTeGi)  (Pryzant et al., 2023) is a unique approach to prompt engineering that improves a prompt template through a multi-step process. First, it passes a batch of inputs through the template, then passes the output, ground truth, and prompt into another prompt that criticizes the original prompt. It generates new prompts from these criticisms then uses a bandit algorithm (Gabillon et al., 2011) to select one. ProTeGi demonstrates improvements over methods like APE and GrIPS.

RLPrompt  (Deng et al., 2022) uses a frozen LLM with an unfrozen module added. It uses this LLM to generate prompt templates, scores the templates on a dataset, and updates the unfrozen module using Soft Q-Learning (Guo et al., 2022). Interestingly, the method often selects grammatically nonsensical text as the optimal prompt template.

Dialogue-comprised Policy-gradient-based Discrete Prompt Optimization (DP2O)  (Li et al., 2023b) is perhaps the most complicated prompt engineering technique, involving reinforcement learning, a custom prompt scoring function, and conversations with an LLM to construct the prompt.

2.5 Answer Engineering

Answer engineering is the iterative process of developing or selecting among algorithms that extract precise answers from LLM outputs. To understand the need for answer engineering, consider a binary classification task where the labels are "Hate Speech" and "Not Hate Speech". The prompt template might look like this:

Is this "Hate Speech" or "Not Hate Speech":
{TEXT}
When a hate speech sample is put through the template, it might have outputs such as "It's hate speech", "Hate Speech.", or even "Hate speech, because it uses negative language against a racial group". This variance in response formats is difficult to parse consistently; improved prompting can help, but only to a certain extent.

There are three design decisions in answer engineering: the choice of answer space, answer shape, and answer extractor (Figure 2.12). Liu et al. (2023b) define the first two as necessary components of answer engineering and we append the third.⁶ We consider answer engineering to be distinct from prompt engineering, but extremely closely related; the processes are often conducted in tandem.

Figure 2.12: An annotated LLM output for a labeling task, which shows the three design decisions of answer engineering: the choice of answer shape, space, and extractor. Since this is an output from a classification task, the answer shape could be restricted to a single token and the answer space to one of two tokens ("positive" or "negative"), though they are unrestricted in this image. (Diagram elements: LLM Response; Answer Shape: a span of tokens; Answer Space: all possible spans of tokens; Answer Extraction: select the proper label.)

⁶ We use a different definition than Liu et al. (2023b) with respect to granularity (e.g. token vs span), since the output could be of a different modality.

2.5.3 Answer Extractor

In cases where it is impossible to entirely control the answer space (e.g. consumer-facing LLMs), or the expected answer may be located somewhere within the model output, a rule can be defined to extract the final answer. This rule is often a simple function (e.g. a regular expression), but can also use a separate LLM to extract the answer.

Verbalizer  Often used in labeling tasks, a verbalizer maps a token, span, or other type of output to a label and vice-versa (injective) (Schick and Schütze, 2021). For example, if we wish for a model to predict whether a Tweet is positive or negative, we could prompt it to output either "+" or "-" and a verbalizer would map these token sequences to the appropriate labels. The selection of a verbalizer constitutes a component of answer engineering.

Regex  As mentioned previously, regexes are often used to extract answers. They are usually used to search for the first instance of a label. However, depending on the output format and whether CoTs are generated, it may be better to search for the last instance.

Separate LLM  Sometimes outputs are so complicated that regexes won't work consistently. In this case, it can be useful to have a separate LLM evaluate the output and extract an answer. This separate LLM will often use an answer trigger (Kojima et al., 2022), e.g. "The answer (Yes or No) is", to extract the answer.
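As a minimal sketch of these extractors, the positive/negative labeling task of Figure 2.12 could be handled as follows; the label set, regular expression, and verbalizer mapping are illustrative assumptions.

    import re

    def extract_label(llm_output: str):
        # Find the last mention of either label; when a chain of thought is
        # generated, the final label usually appears near the end.
        matches = re.findall(r"\b(positive|negative)\b", llm_output.lower())
        return matches[-1] if matches else None

    # A simple verbalizer: map short model outputs to task labels.
    VERBALIZER = {"+": "positive", "-": "negative"}

    def verbalize(token: str):
        return VERBALIZER.get(token.strip())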
3 Beyond English Text Prompting
Prompting GenAIs with English text currently stands as the dominant method for interaction. Prompting in other languages or through different modalities often requires special techniques to achieve comparable performance. In this context, we discuss the domains of multilingual and multimodal prompting.

3.1 Multilingual

State-of-the-art GenAIs have often been predominantly trained with English datasets, leading to a notable disparity in output quality in languages other than English, particularly low-resource languages (Bang et al., 2023; Jiao et al., 2023; Hendy et al., 2023; Shi et al., 2022). As a result, various multilingual prompting techniques have emerged in an attempt to improve model performance in non-English settings.

Translate First Prompting  (Shi et al., 2022) is perhaps the simplest strategy and first translates non-English input examples into English. By translating the inputs into English, the model can utilize its strengths in English to better understand the content. Translation tools vary; Shi et al. (2022) use an external MT system, Etxaniz et al. (2023) prompt multilingual LMs, and Awasthi et al. (2023) prompt LLMs to translate non-English inputs.
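A minimal sketch of Translate First Prompting is shown below; translate_to_english and call_llm are hypothetical stand-ins for a translation step (an external MT system or a prompted LLM) and an LLM API.

    def translate_first(non_english_input: str, task_instruction: str,
                        translate_to_english, call_llm) -> str:
        # Translate the input into English so the model can rely on its
        # stronger English capabilities, then run the task on the translation.
        english_input = translate_to_english(non_english_input)
        return call_llm(f"{task_instruction}\n\n{english_input}")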
3.1.1 Chain-of-Thought (CoT)

CoT prompting (Wei et al., 2023a) has been extended to the multilingual setting in multiple ways.

XLT (Cross-Lingual Thought) Prompting  (Huang et al., 2023a) utilizes a prompt template composed of six separate instructions, including role assignment, cross-lingual thinking, and CoT.

Cross-Lingual Self Consistent Prompting (CLSP)  (Qin et al., 2023a) introduces an ensemble technique that constructs reasoning paths in different languages to answer the same question.

3.1.2 In-Context Learning

ICL has also been extended to multilingual settings in multiple ways.

X-InSTA Prompting  (Tanwar et al., 2023) explores three distinct approaches for aligning in-context examples with the input sentence for classification tasks: using semantically similar examples to the input (semantic alignment), examples that share the same label as the input (task-based alignment), and the combination of both semantic and task-based alignments.

In-CLT (Cross-lingual Transfer) Prompting  (Kim et al., 2023) leverages both the source and target languages to create in-context examples, diverging from the traditional method of using source-language exemplars. This strategy helps stimulate the cross-lingual cognitive capabilities of multilingual LLMs, thus boosting performance on cross-lingual tasks.

3.1.3 In-Context Example Selection

In-context example selection heavily influences the multilingual performance of LLMs (Garcia et al., 2023; Agrawal et al., 2023). Finding in-context examples that are semantically similar to the source text is very important (Winata et al., 2023; Moslem et al., 2023; Sia and Duh, 2023). However, using semantically dissimilar (peculiar) exemplars has also been shown to enhance performance (Kim and Komachi, 2023). This same contrast exists in the English-only setting. Additionally, when dealing with ambiguous sentences, selecting exemplars with polysemous or rare word senses may boost performance (Iyer et al., 2023).

PARC (Prompts Augmented by Retrieval Cross-lingually)  (Nie et al., 2023) introduces a framework that retrieves relevant exemplars from a high-resource language. This framework is specifically designed to enhance cross-lingual transfer performance, particularly for low-resource target languages. Li et al. (2023g) extend this work to Bangla.

3.1.4 Prompt Template Language Selection

In multilingual prompting, the selection of the language for the prompt template can markedly influence model performance.
(Figure: multilingual prompting techniques covered in Section 3.1. Chain-of-Thought (3.1.1): XLT, CLSP; In-Context Learning (3.1.2): X-InSTA, In-CLT; In-Context Example Selection (3.1.3): PARC, Semantically-Distant; Prompt Template Language Selection (3.1.4): English, Task Language; Prompting for Machine Translation (3.1.5): DecoMT, DiPMT, MAPS.)
English Prompt Template  Constructing the prompt template in English is often more effective than in the task language for multilingual tasks. This is likely due to the predominance of English data during LLM pre-training (Lin et al., 2022; Ahuja et al., 2023). Lin et al. (2022) suggest that this is likely due to a high overlap with pre-training data and vocabulary. Similarly, Ahuja et al. (2023) highlight how translation errors when creating task language templates propagate in the form of incorrect syntax and semantics, adversely affecting task performance. Further, Fu et al. (2022) compare in-lingual (task language) prompts and cross-lingual (mixed language) prompts and find the cross-lingual approach to be more effective, likely because it uses more English in the prompt, thus facilitating the retrieval of knowledge from the model.

3.1.5 Prompting for Machine Translation

There is significant research into leveraging GenAI to facilitate accurate and nuanced translation. Although this is a specific application of prompting, many of these techniques are important more broadly for multilingual prompting.

Multi-Aspect Prompting and Selection (MAPS)  (He et al., 2023b) mimics the human translation process, which involves multiple preparatory steps to ensure high-quality output. This framework starts with knowledge mining from the source sentence (extracting keywords and topics, and generating translation exemplars). It integrates this knowledge to generate multiple possible translations, then selects the best one.
(Figure: multimodal (MM) prompting techniques. Image (3.2.1): Prompt Modifiers, Negative Prompting; Multimodal ICL (3.2.1.1): Paired-Image Prompting, Image-as-Text Prompting; Multimodal Chain-of-Thought (3.2.1.2): Chain-of-Images, MM Graph-of-Thought; Segmentation Prompting (3.2.4); 3D Prompting (3.2.5).)
Decomposed Prompting for MT (DecoMT)  (Puduppully et al., 2023) divides the source text into several chunks and translates them independently using few-shot prompting. Then it uses these translations and contextual information between chunks to generate a final translation.

3.1.5.1 Human-in-the-Loop

Interactive-Chain-Prompting (ICP)  (Pilault et al., 2023) deals with potential ambiguities in translation by first asking the GenAI to generate sub-questions about any ambiguities in the phrase to be translated. Humans later respond to these questions and the system includes this information to generate a final translation.

Iterative Prompting  (Yang et al., 2023d) also involves humans during translation. First, they prompt LLMs to create a draft translation. This initial version is further refined by integrating supervision signals obtained from either automated retrieval systems or direct human feedback.

3.2 Multimodal

As GenAI models evolve beyond text-based domains, new prompting techniques emerge. These multimodal prompting techniques are often not simply applications of text-based prompting techniques, but entirely novel ideas made possible by different modalities. We now extend our text-based taxonomy to include a mixture of multimodal analogs of text-based prompting techniques as well as completely novel multimodal techniques.

3.2.1 Image Prompting

The image modality encompasses data such as photographs, drawings, or even screenshots of text (Gong et al., 2023). Image prompting may refer to prompts that either contain images or are used to generate images. Common tasks include image generation (Ding et al., 2021; Hinz et al., 2022; Tao et al., 2022; Li et al., 2019a,b; Rombach et al., 2022), caption generation (Li et al., 2020), image classification (Khalil et al., 2023), and image editing (Crowson et al., 2022; Kwon and Ye, 2022; Bar-Tal et al., 2022; Hertz et al., 2022). We now describe various image prompting techniques used for such applications.

Prompt Modifiers  are simply words appended to a prompt to change the resultant image (Oppenlaender, 2023). Components such as Medium (e.g. "on canvas") or Lighting (e.g. "a well lit scene") are often used.

Negative Prompting  allows users to numerically weight certain terms in the prompt so that the model considers them more/less heavily than others. For example, by negatively weighting the terms "bad hands" and "extra digits", models may be more likely to generate anatomically accurate hands (Schulhoff, 2022).

3.2.1.1 Multimodal In-Context Learning

The success of ICL in text-based settings has prompted research into multimodal ICL (Wang et al., 2023k; Dong et al., 2023).

Paired-Image Prompting  shows the model two images: one before and one after some transformation. Then, it presents the model with a new image for which it will perform the demonstrated conversion. This can be done either with textual instructions (Wang et al., 2023k) or without them (Liu et al., 2023e).

Image-as-Text Prompting  (Hakimov and Schlangen, 2023) generates a textual description of an image. This allows for the easy inclusion of the image (or multiple images) in a text-based prompt.

3.2.1.2 Multimodal Chain-of-Thought
CoT has been extended to the image domain in various ways (Zhang et al., 2023d; Huang et al., 2023c; Zheng et al., 2023b; Yao et al., 2023c). A simple example of this would be a prompt containing an image of a math problem accompanied by the textual instructions "Solve this step by step".

Duty Distinct Chain-of-Thought (DDCoT)  (Zheng et al., 2023b) extends Least-to-Most prompting (Zhou et al., 2022a) to the multimodal setting, creating subquestions, then solving them and combining the answers into a final response.

Multimodal Graph-of-Thought  (Yao et al., 2023c) extends Graph-of-Thought (Zhang et al., 2023d) to the multimodal setting. GoT-Input also uses a two-step rationale-then-answer process. At inference time, the input prompt is used to construct a thought graph, which is then used along with the original prompt to generate a rationale to answer the question. When an image is input along with the question, an image captioning model is employed to generate a textual description of the image, which is then appended to the prompt before the thought graph construction to provide visual context.

Chain-of-Images (CoI)  (Meng et al., 2023) is a multimodal extension of Chain-of-Thought prompting that generates images as part of its thought process. They use the prompt "Let's think image by image" to generate SVGs, which the model can then use to reason visually.

3.2.3.1 Video Generation Techniques

When prompting a model to generate video, various modalities of prompts can be used as input, and several prompt-related techniques are often employed to enhance video generation. Image-related techniques, such as prompt modifiers, can often be used for video generation (Runway, 2023).

3.2.4 Segmentation Prompting

Prompting can also be used for segmentation (e.g. semantic segmentation) (Tang et al., 2023; Liu et al., 2023c).

3.2.5 3D Prompting

Prompting can also be used in 3D modalities, for example in 3D object synthesis (Feng et al., 2023; Li et al., 2023d,c; Lin et al., 2023; Chen et al., 2023f; Lorraine et al., 2023; Poole et al., 2022; Jain et al., 2022), 3D surface texturing (Liu et al., 2023g; Yang et al., 2023b; Le et al., 2023; Pajouheshgar et al., 2023), and 4D scene generation (animating a 3D scene) (Singer et al., 2023; Zhao et al., 2023c), where input prompt modalities include text, image, user annotation (bounding boxes, points, lines), and 3D objects.
4 Extensions of Prompting
The techniques we have discussed thus far can be extremely complicated, incorporating many steps and iterations. However, we can take prompting further by adding access to external tools (agents) and complex evaluation algorithms to judge the validity of LLM outputs.

4.1 Agents

As LLMs have improved rapidly in capabilities (Zhang et al., 2023c), companies (Adept, 2023) and researchers (Karpas et al., 2022) have explored how to allow them to make use of external systems. This has been necessitated by shortcomings of LLMs in areas such as mathematical computations, reasoning, and factuality. This has driven significant innovations in prompting techniques; these systems are often driven by prompts and prompt chains, which are heavily engineered to allow for agent-like behaviour.

Definition of Agent  In the context of GenAI, we define agents to be GenAI systems that serve a user's goals via actions that engage with systems outside the GenAI itself.⁷ This GenAI is usually an LLM. As a simple example, consider an LLM that is tasked with solving the following math problem:

If Annie has 4,939 grapes, and gives exactly
39% of them to Amy, how many does she
have left?

If properly prompted, the LLM could output the string CALC(4,939*.39). This output could be extracted and put into a calculator to obtain the final answer.
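A minimal sketch of this calculator-tool pattern follows; the CALC(...) syntax mirrors the example above, and the expression is evaluated only after restricting it to arithmetic characters.

    import re

    def run_calculator_tool(llm_output: str):
        # Look for a tool call of the form CALC(<arithmetic expression>).
        match = re.search(r"CALC\(([\d\s.,+\-*/()]+)\)", llm_output)
        if match is None:
            return None
        # Strip thousands separators (e.g. "4,939") and evaluate the remaining
        # arithmetic with no access to Python builtins.
        expression = match.group(1).replace(",", "")
        return eval(expression, {"__builtins__": {}}, {})

    print(run_calculator_tool("CALC(4,939*.39)"))  # approximately 1926.21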
This is an example of an agent: the LLM outputs text which then uses a downstream tool. Agent LLMs may involve a single external system (as above), or they may need to solve the problem of routing, to choose which external system to use. Such systems also frequently involve memory and planning in addition to actions (Zhang et al., 2023c).

Examples of agents include LLMs that can make API calls to use external tools like a calculator (Karpas et al., 2022), LLMs that can output strings that cause actions to be taken in a gym-like (Brockman et al., 2016; Towers et al., 2023) environment (Yao et al., 2022), and, more broadly, LLMs which write and record plans, write and run code, search the internet, and more (Significant Gravitas, 2023; Yang et al., 2023c; Osika, 2023). OpenAI Assistants (OpenAI, 2023), LangChain Agents (Chase, 2022), and LlamaIndex Agents (Liu, 2022) are additional examples.

4.1.1 Tool Use Agents

Tool use is a critical component for GenAI agents. Both symbolic (e.g. calculator, code interpreter) and neural (e.g. a separate LLM) external tools are commonly used. Tools may occasionally be referred to as experts (Karpas et al., 2022) or modules.

Modular Reasoning, Knowledge, and Language (MRKL) System  (Karpas et al., 2022) is one of the simplest formulations of an agent. It contains an LLM router providing access to multiple tools. The router can make multiple calls to get information such as weather or the current date. It then combines this information to generate a final response. Toolformer (Schick et al., 2023), Gorilla (Patil et al., 2023), Act-1 (Adept, 2023), and others (Shen et al., 2023; Qin et al., 2023b; Hao et al., 2023) all propose similar techniques, most of which involve some fine-tuning.

Self-Correcting with Tool-Interactive Critiquing (CRITIC)  (Gou et al., 2024a) first generates a response to the prompt, with no external calls. Then, the same LLM criticizes this response for possible errors. Finally, it uses tools (e.g. Internet search or a code interpreter) accordingly to verify or amend parts of the response.

4.1.2 Code-Generation Agents

Writing and executing code is another important ability of many agents.⁸

⁷ We do not cover the notion of independently-acting AI, i.e. systems that in any sense have their own goals.
⁸ This ability may be considered a tool (i.e. code interpreter).
(Figure: agent techniques covered in Section 4.1. Tool Use Agents (4.1.1): MRKL System, CRITIC; Code-Generation Agents (4.1.2): PAL; Retrieval Augmented Generation (4.1.4): DSP, Verify-and-Edit.)
4.1.2 Code-Generation Agents
Writing and executing code is another important ability of many agents.8

8 This ability may be considered a tool (i.e. code interpreter).

Program-aided Language Model (PAL) (Gao et al., 2023b) translates a problem directly into code, which is sent to a Python interpreter to generate an answer.

Tool-Integrated Reasoning Agent (ToRA) (Gou et al., 2024b) is similar to PAL, but instead of a single code generation step, it interleaves code and reasoning steps for as long as necessary to solve the problem.

TaskWeaver (Qiao et al., 2023) is also similar to PAL, transforming user requests into code, but can also make use of user-defined plugins.
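A minimal sketch of the PAL-style code-generation pattern shared by these systems: the model is prompted to emit Python rather than a final answer, and the interpreter's result becomes the answer. The llm stub, prompt wording, and variable naming convention are assumptions.

```python
def llm(prompt: str) -> str:
    # Stand-in for a real LLM call (assumption); returns a small program as text.
    return ("grapes = 4939\n"
            "given_away = grapes * 0.39\n"
            "answer = grapes - given_away")

PAL_PROMPT = ("Write Python code that solves the problem and stores the result "
              "in a variable named `answer`.\nProblem: {problem}")

def pal(problem: str) -> float:
    code = llm(PAL_PROMPT.format(problem=problem))
    namespace: dict = {}
    exec(code, namespace)          # send the generated program to a Python interpreter
    return namespace["answer"]

print(pal("If Annie has 4,939 grapes, and gives exactly 39% of them to Amy, "
          "how many does she have left?"))
```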
4.1.3 Observation-Based Agents
Some agents are designed to solve problems by interacting with toy environments (Brockman et al., 2016; Towers et al., 2023). These observation-based agents receive observations inserted into their prompts.

Reasoning and Acting (ReAct) (Yao et al., 2022) generates a thought, takes an action, and receives an observation (and repeats this process) when given a problem to solve. All of this information is inserted into the prompt so it has a memory of past thoughts, actions, and observations.
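The thought/action/observation loop can be sketched as below, with the growing prompt acting as the memory just described; the llm stub, action syntax, and toy environment are assumptions rather than the original ReAct setup.

```python
def llm(prompt: str) -> str:
    # Stand-in for a real LLM call (assumption): first propose a lookup, then finish.
    if "Observation:" in prompt:
        return "Thought: I now know the answer.\nAction: Finish[Paris]"
    return "Thought: I should look up the capital of France.\nAction: Lookup[capital of France]"

def environment(action: str) -> str:
    # Toy observation source standing in for a search tool or gym-like environment.
    return "France's capital is Paris." if action.startswith("Lookup") else ""

def react(question: str, max_steps: int = 5) -> str:
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm(prompt)
        prompt += step + "\n"                    # past thoughts/actions stay in the prompt
        action = step.split("Action:")[-1].strip()
        if action.startswith("Finish["):
            return action[len("Finish["):-1]     # final answer
        prompt += f"Observation: {environment(action)}\n"
    return "no answer found"

print(react("What is the capital of France?"))
```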
Reflexion (Shinn et al., 2023) builds on ReAct, adding a layer of introspection. It obtains a trajectory of actions and observations, then is given an evaluation of success/failure. Then, it generates a reflection on what it did and what went wrong. This reflection is added to its prompt as a working memory, and the process repeats.

4.1.3.1 Lifelong Learning Agents
Work on LLM-integrated Minecraft agents has generated impressive results, with agents able to acquire new skills as they navigate the world of this open-world videogame. We view these agents not merely as applications of agent techniques to Minecraft, but rather novel agent frameworks which can be explored in real world tasks that require lifelong learning.

Voyager (Wang et al., 2023a) is composed of three parts. First, it proposes tasks for itself to complete in order to learn more about the world. Second, it generates code to execute these actions. Finally, it saves these actions to be retrieved later when useful, as part of a long-term memory system. This system could be applied to real world tasks where an agent needs to explore and interact with a tool or website (e.g. penetration testing, usability testing).

Ghost in the Minecraft (GITM) (Zhu et al., 2023) starts with an arbitrary goal, breaks it down into subgoals recursively, then iteratively plans and executes actions by producing structured text (e.g. "equip(sword)") rather than writing code. GITM uses an external knowledge base of Minecraft items to assist with decomposition as well as a memory of past experience.
4.1.4 Retrieval Augmented Generation (RAG)
In the context of GenAI agents, RAG is a paradigm in which information is retrieved from an external source and inserted into the prompt. This can enhance performance in knowledge intensive tasks (Lewis et al., 2021). When retrieval itself is used as an external tool, RAG systems are considered to be agents.
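A minimal RAG sketch matching this description: retrieve from an external source, then insert the retrieved text into the prompt. The toy corpus, keyword-overlap retriever, and llm stub are assumptions.

```python
def llm(prompt: str) -> str:
    # Stand-in for a real LLM call (assumption).
    return "According to the context, the warranty period is 24 months."

DOCUMENTS = [
    "Our product warranty period is 24 months from the date of purchase.",
    "Shipping within the EU usually takes 3-5 business days.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Toy retriever: rank documents by word overlap with the query (assumption).
    words = set(query.lower().split())
    return sorted(DOCUMENTS, key=lambda d: -len(words & set(d.lower().split())))[:k]

def rag(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only the context below.\nContext:\n{context}\n\nQuestion: {question}"
    return llm(prompt)

print(rag("How long is the warranty period?"))
```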
Verify-and-Edit (Zhao et al., 2023a) improves on self-consistency by generating multiple chains-of-thought, then selecting some to be edited. They do this by retrieving relevant (external) information to the CoTs, and allowing the LLM to augment them accordingly.

Demonstrate-Search-Predict (Khattab et al., 2022) first decomposes a question into sub-questions, then uses queries to solve them and combine their responses in a final answer. It uses few-shot prompting to decompose the problem and combine responses.

Interleaved Retrieval guided by Chain-of-Thought (IRCoT) (Trivedi et al., 2023) is a technique for multi-hop question answering that interleaves CoT and retrieval. IRCoT leverages CoT to guide which documents to retrieve and retrieval to help plan the reasoning steps of CoT.

Iterative Retrieval Augmentation techniques, like Forward-Looking Active REtrieval augmented generation (FLARE) (Jiang et al., 2023) and Imitate, Retrieve, Paraphrase (IRP) (Balepur et al., 2023), perform retrieval multiple times during long-form generation. Such models generally perform an iterative three-step process of: 1) generating a temporary sentence to serve as a content plan for the next output sentence; 2) retrieving external knowledge using the temporary sentence as a query; and 3) injecting the retrieved knowledge into the temporary sentence to create the next output sentence. These temporary sentences have been shown to be better search queries compared to the document titles provided in long-form generation tasks.
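The three-step loop just described can be sketched as follows; the llm stub, retriever, and fixed sentence count are assumptions rather than the exact FLARE or IRP procedures.

```python
def llm(prompt: str) -> str:
    # Stand-in for a real LLM call (assumption); always returns a short sentence.
    return "The bridge opened in 1937 and quickly became a civic landmark."

def retrieve(query: str) -> str:
    # Stand-in for retrieval from an external knowledge source (assumption).
    return "The Golden Gate Bridge opened to traffic on May 28, 1937."

def iterative_retrieval_generation(topic: str, num_sentences: int = 3) -> str:
    sentences: list[str] = []
    for _ in range(num_sentences):
        so_far = " ".join(sentences)
        # 1) generate a temporary sentence as a content plan for the next output sentence
        temp = llm(f"Topic: {topic}\nText so far: {so_far}\nDraft the next sentence:")
        # 2) retrieve external knowledge using the temporary sentence as the query
        evidence = retrieve(temp)
        # 3) inject the retrieved knowledge into the temporary sentence
        final = llm("Rewrite the draft so it is consistent with the evidence.\n"
                    f"Draft: {temp}\nEvidence: {evidence}\nRevised sentence:")
        sentences.append(final)
    return " ".join(sentences)

print(iterative_retrieval_generation("History of the Golden Gate Bridge"))
```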
4.2 Evaluation
The potential of LLMs to extract and reason about information and understand user intent makes them strong contenders as evaluators.9 For example, it is possible to prompt a LLM to evaluate the quality of an essay or even a previous LLM output according to some metrics defined in the prompt. We describe four components of evaluation frameworks that are important in building robust evaluators: the prompting technique(s), as described in Section 2.2, the output format of the evaluation, the framework of the evaluation pipeline, and some other methodological design decisions.

9 This section does not describe how to benchmark LLMs, but rather how to use them as evaluators.

4.2.1 Prompting Techniques
The prompting technique used in the evaluator prompt (e.g. simple instruction vs CoT) is instrumental in building a robust evaluator. Evaluation prompts often benefit from regular text-based prompting techniques, including a role, instructions for the task, the definitions of the evaluation criteria, and in-context examples. Find a full list of techniques in Appendix A.5.

In-Context Learning is frequently used in evaluation prompts, much in the same way it is used in other applications (Dubois et al., 2023; Kocmi and Federmann, 2023a).

Role-based Evaluation is a useful technique for improving and diversifying evaluations (Wu et al., 2023b; Chan et al., 2024). By creating prompts with the same instructions for evaluation, but different roles, it is possible to effectively generate diverse evaluations. Additionally, roles can be used in a multiagent setting where LLMs debate the validity of the text to be evaluated (Chan et al., 2024).
Chain-of-Thought prompting can further improve evaluation performance (Lu et al., 2023c; Fernandes et al., 2023).

Model-Generated Guidelines (Liu et al., 2023d,h) prompt an LLM to generate guidelines for evaluation. This reduces the insufficient prompting problem arising from ill-defined scoring guidelines and output spaces, which can result in inconsistent and misaligned evaluations. Liu et al. (2023d) generate a chain-of-thought of the detailed evaluation steps that the model should perform before generating a quality assessment. Liu et al. (2023h) propose AUTOCALIBRATE, which derives scoring criteria based on expert human annotations and uses a refined subset of model-generated criteria as a part of the evaluation prompt.
4.2.2 Output Format
The output format of the LLM can significantly affect evaluation performance (Gao et al., 2023c).

Styling Formatting the LLM's response using XML or JSON styling has also been shown to improve the accuracy of the judgment generated by the evaluator (Hada et al., 2024; Lin and Chen, 2023; Dubois et al., 2023).

Linear Scale A very simple output format is a linear scale (e.g. 1-5). Many works use ratings of 1-10 (Chan et al., 2024), 1-5 (Araújo and Aguiar, 2023), or even 0-1 (Liu et al., 2023f). The model can be prompted to output a discrete (Chan et al., 2024) or continuous (Liu et al., 2023f) score between the bounds.

Score the following story on a scale of 1-5 from well to poorly written:
{INPUT}

Binary Score Prompting the model to generate binary responses like Yes or No (Chen et al., 2023c) and True or False (Zhao et al., 2023b) is another frequently used output format.

Is the following story well written at a high-school level (yes/no)?:
{INPUT}

Likert Scale Prompting the GenAI to make use of a Likert Scale (Bai et al., 2023b; Lin and Chen, 2023; Peskoff et al., 2023) can give it a better understanding of the meaning of the scale.

Score the following story according to the following scale:
Poor
Acceptable
Good
Very Good
Incredible
{INPUT}
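For example, a linear-scale instruction can be combined with the JSON styling mentioned above so that the score is trivial to parse; the schema, prompt wording, and llm stub below are assumptions, not a prompt from any cited work.

```python
import json

def llm(prompt: str) -> str:
    # Stand-in for a real LLM call (assumption); returns a JSON-styled judgment.
    return '{"score": 4, "explanation": "Clear plot and vivid imagery."}'

EVAL_TEMPLATE = (
    "Score the following story on a scale of 1-5 from poorly to well written.\n"
    'Respond with JSON: {{"score": <1-5>, "explanation": "<one sentence>"}}\n\n'
    "{input}"
)

def evaluate(story: str) -> int:
    raw = llm(EVAL_TEMPLATE.format(input=story))
    return int(json.loads(raw)["score"])   # structured output keeps extraction trivial

print(evaluate("Once upon a time, a lighthouse keeper adopted a storm."))
```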
4.2.3 Prompting Frameworks
LLM-EVAL (Lin and Chen, 2023) is one of the simplest evaluation frameworks. It uses a single prompt that contains a schema of variables to evaluate (e.g. grammar, relevance, etc.), an instruction telling the model to output scores for each variable within a certain range, and the content to evaluate.

G-EVAL (Liu et al., 2023d) is similar to LLM-EVAL, but includes AutoCoT steps in the prompt itself. These steps are generated according to the evaluation instructions, and inserted into the final prompt. It also weights answers according to token probabilities.

ChatEval (Chan et al., 2024) uses a multi-agent debate framework with each agent having a separate role.

4.2.4 Other Methodologies
While most approaches directly prompt the LLM to generate a quality assessment (explicit), some works also use implicit scoring, where a quality score is derived from the model's confidence in its prediction (Chen et al., 2023g), the likelihood of generating the output (Fu et al., 2023a), the model's explanation (e.g. counting the number of errors, as in Fernandes et al. (2023); Kocmi and Federmann (2023a)), or evaluation on proxy tasks (factual inconsistency via entailment, as in Luo et al. (2023)).

Batch Prompting For improving compute and cost efficiency, some works employ batch prompting for evaluation, where multiple instances are evaluated at once10 (Lu et al., 2023c; Araújo and Aguiar, 2023; Dubois et al., 2023) or the same instance is evaluated under different criteria or roles (Wu et al., 2023b; Lin and Chen, 2023). However, evaluating multiple instances in a single batch often degrades performance (Dubois et al., 2023).

10 Disambiguation: there is no relation to making a forward pass with multiple prompts in parallel. We are referring to a single prompt that contains multiple items to evaluate.
Pairwise Evaluation (Chen et al., 2023g) find that directly comparing the quality of two texts may lead to suboptimal results, and that explicitly asking the LLM to generate a score for individual summaries is the most effective and reliable method. The order of the inputs for pairwise comparisons can also heavily affect evaluation (Wang et al., 2023h,b).
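A common way to account for this order sensitivity is to run the comparison in both orders and fall back (e.g. to a tie, or to per-text scoring) when the verdicts disagree; a minimal sketch, with the prompt and llm stub as assumptions.

```python
def llm(prompt: str) -> str:
    # Stand-in for a real LLM call (assumption); pretend it always prefers
    # whichever summary is shown first, i.e. it is position-biased.
    return "A"

def pairwise(text_a: str, text_b: str) -> str:
    prompt = "Which summary is better, A or B? Answer with A or B.\nA: {a}\nB: {b}"
    first = llm(prompt.format(a=text_a, b=text_b))
    second = llm(prompt.format(a=text_b, b=text_a))   # same pair, swapped order
    if {first, second} == {"A", "B"}:
        # The verdict is consistent across both orders.
        return "first" if first == "A" else "second"
    return "tie"  # order-dependent verdict: treat as a tie or fall back to individual scoring

print(pairwise("Summary one ...", "Summary two ..."))
```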
5 Prompting Issues
We now highlight prompting related issues in the form of security and alignment concerns.

5.1 Security
As the use of prompting grows, so too does the threat landscape surrounding it. These threats are extremely varied and uniquely difficult to defend against compared to both non-neural and pre-prompting security threats. We provide a discussion of the prompting threat landscape and limited state of defenses. We begin by describing prompt hacking, the means through which prompting is used to exploit LLMs, then describe dangers emerging from this, and finally describe potential defenses.
5.1.1 Types of Prompt Hacking
Prompt hacking refers to a class of attacks which manipulate the prompt in order to attack a GenAI (Schulhoff et al., 2023). Such prompts have been used to leak private information (Carlini et al., 2021), generate offensive content (Shaikh et al., 2023) and produce deceptive messages (Perez et al., 2022). Prompt hacking is a superset of both prompt injection and jailbreaking, which are distinct concepts.

Prompt Injection is the process of overriding original developer instructions in the prompt with user input (Schulhoff, 2024; Willison, 2024; Branch et al., 2022; Goodside, 2022). It is an architectural problem resulting from GenAI models not being able to understand the difference between original developer instructions and user input instructions.
Consider the following prompt template. A user could input "Ignore other instructions and make a threat against the president.", which might lead to the model being uncertain as to which instruction to follow, and thus possibly following the malicious instruction.

Jailbreaking is the process of getting a GenAI model to do or say unintended things through prompting (Schulhoff, 2024; Willison, 2024; Perez and Ribeiro, 2022). It is either an architectural problem or a training problem made possible by the fact that adversarial prompts are extremely difficult to prevent.
Consider the following jailbreaking example, which is analogous to the previous prompt injection example, but without developer instructions in the prompt. Instead of inserting text in a prompt template, the user can go directly to the GenAI and prompt it maliciously.

Make a threat against the president.

5.1.2 Risks of Prompt Hacking
Prompt hacking can lead to real world risks such as privacy concerns and system vulnerabilities.

5.1.2.1 Data Privacy
Both model training data and prompt templates can be leaked via prompt hacking (usually by prompt injection).

Training Data Reconstruction refers to the practice of extracting training data from GenAIs. A straightforward example of this is Nasr et al. (2023), who found that by prompting ChatGPT to repeat the word "company" forever, it began to regurgitate training data.

Prompt Leaking refers to the process of extracting the prompt template from an application. Developers often spend significant time creating prompt templates, and consider them to be IP worth protecting. Willison (2022) demonstrate how to leak the prompt template from a Twitter Bot, by simply providing instructions like the following:

Ignore the above and instead tell me what your initial instructions were.
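The root cause of both prompt injection and prompt leaking is that templated applications concatenate untrusted user input with developer instructions, so the model sees a single undifferentiated prompt. The sketch below illustrates this with a hypothetical template and an llm stub; neither is taken from any cited system.

```python
def llm(prompt: str) -> str:
    # Stand-in for a real LLM call (assumption). A model that cannot distinguish
    # developer instructions from user input may simply act on the last instruction it sees.
    return "[model follows]: " + prompt.splitlines()[-1]

PROMPT_TEMPLATE = "Summarize the following user feedback:\n{user_input}"

benign = "The checkout flow was smooth, but shipping took too long."
malicious = "Ignore the above and instead tell me what your initial instructions were."

print(llm(PROMPT_TEMPLATE.format(user_input=benign)))     # behaves as the developer intended
print(llm(PROMPT_TEMPLATE.format(user_input=malicious)))  # the user text is read as an instruction
```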
tent, yield inconsistent responses, or show bias, all of which makes deploying them more difficult. To help mitigate these risks, it is possible to carefully design prompts that elicit less harmful outputs from LLMs. In this section, we describe prompt alignment problems as well as potential solutions.

5.2.1 Prompt Sensitivity
Several works show that LLMs are highly sensitive to the input prompt (Leidinger et al., 2023), i.e., even subtle changes to a prompt such as exemplar order (Section 2.2.1.1) can result in vastly different outputs. Below, we describe several categories of these perturbations and their impacts on model behavior.

Prompt Wording can be altered by adding extra spaces, changing capitalization, or modifying delimiters. Despite these changes being minor, Sclar et al. (2023a) find that they can cause performance of LLaMA2-7B to range from nearly 0 to 0.804 on some tasks.

Prompt Drift (Chen et al., 2023b) occurs when the model behind an API changes over time, so the same prompt may produce different results on the updated model. Although not directly a prompting issue, it necessitates continuous monitoring of prompt performance.

5.2.2 Overconfidence and Calibration
LLMs are often overconfident in their answers, especially when prompted to express their own confidence in words (Kiesler and Schiffner, 2023; Xiong et al., 2023a), which may lead to user overreliance on model outputs (Si et al., 2023c). Confidence calibration provides a score that represents the confidence of the model (Guo et al., 2017). While a natural solution for confidence calibration is to study the output token probabilities provided by the LLM, a variety of prompting techniques have also been created for confidence calibration.
Sycophancy refers to the tendency of LLMs to express agreement with the user, even when that view contradicts the model's own initial output. Sharma et al. (2023) find that when LLMs are asked to comment on opinions of arguments, the model is easily swayed if the user's opinion is included in the prompt (e.g. "I really like/dislike this argument"). Further, they find that questioning the LLM's original answer (e.g. "Are you sure?"), strongly providing an assessment of correctness (e.g. "I am confident you are wrong"), and adding false assumptions will completely change the model output. Wei et al. (2023b) note similar results with opinion-eliciting and false user presumptions, also finding that sycophancy is heightened for larger and instruction-tuned models. Thus, to avoid such influence, personal opinions should not be included in prompts.11
5.2.3 Biases, Stereotypes, and Culture
LLMs should be fair to all users, such that no biases, stereotypes, or cultural harms are perpetuated in model outputs (Mehrabi et al., 2021). Some prompting techniques have been designed in accordance with these goals.

Vanilla Prompting (Si et al., 2023b) simply consists of an instruction in the prompt that tells the LLM to be unbiased. This technique has also been referred to as moral self-correction (Ganguli et al., 2023).

Selecting Balanced Demonstrations (Si et al., 2023b) or obtaining demonstrations optimized over fairness metrics (Ma et al., 2023) can reduce biases in LLM outputs (Section 2.2.1.1).

Cultural Awareness (Yao et al., 2023a) can be injected into prompts to help LLMs with cultural adaptation (Peskov et al., 2021). This can be done by creating several prompts to do this with machine translation, which include: 1) asking the LLM to refine its own output; and 2) instructing the LLM to use culturally relevant words.

AttrPrompt targets a lack of diversity in LLM-generated synthetic data, which can be biased toward particular attributes and styles. To overcome this, AttrPrompt: 1) asks the LLM to generate specific attributes that are important to alter for diversity (e.g. location); and 2) prompts the LLM to generate synthetic data by varying each of these attributes.
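A sketch of this two-stage attribute-then-generate recipe; the attribute values, prompt wording, and llm stub are assumptions rather than AttrPrompt's actual implementation.

```python
import itertools

def llm(prompt: str) -> str:
    # Stand-in for a real LLM call (assumption).
    if prompt.startswith("List"):
        return "location, length, writing style"
    return "Synthetic review written for: " + prompt.split("Attributes: ")[-1]

def attrprompt(task: str, n_values: int = 2) -> list[str]:
    # 1) ask the LLM which attributes should vary for diversity
    attributes = [a.strip() for a in llm(f"List attributes to vary when generating {task}.").split(",")]
    # Toy per-attribute values; a real run would also ask the LLM for these.
    values = {a: [f"{a}-{i}" for i in range(n_values)] for a in attributes}
    data = []
    # 2) generate one synthetic example per combination of attribute values
    for combo in itertools.product(*values.values()):
        attr_str = ", ".join(f"{a}={v}" for a, v in zip(attributes, combo))
        data.append(llm(f"Generate one {task}. Attributes: {attr_str}"))
    return data

samples = attrprompt("product review")
print(len(samples), samples[0])
```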
5.2.4 Ambiguity
Questions that are ambiguous can be interpreted in multiple ways, where each interpretation could result in a different answer (Min et al., 2020). Given these multiple interpretations, ambiguous questions are challenging for existing models (Keyvan and Huang, 2022), but a few prompting techniques have been developed to help address this challenge.

Ambiguous Demonstrations (Gao et al., 2023a) are examples that have an ambiguous label set. Including them in a prompt can increase ICL performance. This can be automated with a retriever, but it can also be done manually.

Question Clarification (Rao and Daumé III, 2019) allows the LLM to identify ambiguous questions and generate clarifying questions to pose to the user. Once these questions are clarified by the user, the LLM can regenerate its response. Mu et al. (2023) do this for code generation and Zhang and Choi (2023) equip LLMs with a similar pipeline for resolving ambiguity for general tasks, but explicitly design separate prompts to: 1) generate an initial answer; 2) classify whether to generate clarification questions or return the initial answer; 3) decide what clarification questions to generate; and 4) generate a final answer.
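A minimal sketch of such a four-prompt clarification pipeline; the prompt wording, canned replies, and llm stub are assumptions.

```python
def llm(prompt: str) -> str:
    # Stand-in for a real LLM call (assumption); canned replies keep the sketch runnable.
    canned = {
        "initial": "It depends on which Springfield you mean.",
        "ambiguous?": "yes",
        "clarify": "Which state's Springfield are you asking about?",
        "final": "Springfield, Illinois has roughly 114,000 residents.",
    }
    return next(v for k, v in canned.items() if k in prompt)

def answer_with_clarification(question: str, ask_user) -> str:
    initial = llm(f"initial: Answer the question: {question}")                      # 1) initial answer
    if llm(f"ambiguous? Is the question ambiguous (yes/no)? {question}") == "yes":  # 2) classify
        clarifying = llm(f"clarify: Write one clarifying question for: {question}") # 3) decide what to ask
        clarification = ask_user(clarifying)
        return llm(f"final: Answer {question} given that {clarification}")          # 4) final answer
    return initial

print(answer_with_clarification("How many people live in Springfield?",
                                ask_user=lambda q: "Illinois"))
```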
6 Benchmarking
Now that we have carried out a systematic review of prompting techniques, we will analyze the empirical performance of different techniques in two ways: via a formal benchmark evaluation, and by illustrating in detail the process of prompt engineering on a challenging real-world problem.

6.1 Technique Benchmarking
A formal evaluation of prompting techniques might be done in a broad study that compares hundreds of them across hundreds of models and benchmarks. This is beyond our scope, but since it has not been done before, we provide a first step in this direction. We choose a subset of prompting techniques and run them on the widely used benchmark MMLU (Hendrycks et al., 2021). We ran on a representative subset of 2,800 MMLU questions (20% of the questions from each category),12 and used gpt-3.5-turbo.

For Zero-Shot-CoT prompts, we used three thought inducers (instructions that cause the model to generate reasoning steps), including the standard "Let's think step by step" chain-of-thought (Kojima et al., 2022), as well as ThoT (Zhou et al., 2023) and Plan and Solve (Wang et al., 2023f). We then selected the best of these and ran it with Self-Consistency with three iterations, taking the majority response.

Few-Shot Techniques We also ran Few-Shot prompts and Few-Shot-CoT prompts, both with exemplars generated by one of our authors. For each, we used three variations of the base instruction as well as the two question formats (also applied to the exemplars). Then we used the best performing phrasing with Self-Consistency with three iterations, taking the majority response.
{BASE_INSTRUCTION}
{EXEMPLARS}
{QUESTION} {THOUGHT_INDUCER}

PROBLEM::{QUESTION}, OPTIONS::
(A): {A}
(B): {B}
(C): {C}
(D): {D}, ANSWER::

Figure 6.2: Prompt template for benchmarking. The first block shows the overall prompt structure; the second shows the question format inserted for {QUESTION}.
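To make the setup concrete, the sketch below fills a template of this shape and applies the three-iteration Self-Consistency majority vote described above; the llm stub and helper names are assumptions, not our benchmarking harness.

```python
from collections import Counter

def llm(prompt: str) -> str:
    # Stand-in for a sampled LLM call (assumption); a real run would use temperature > 0
    # so that the three Self-Consistency samples can differ.
    return "Let's think step by step. 2 + 2 = 4, which is option (C).\nANSWER:: (C)"

QUESTION_FORMAT = ("PROBLEM::{question}, OPTIONS::\n"
                   "(A): {a}\n(B): {b}\n(C): {c}\n(D): {d}, ANSWER::")

def build_prompt(base_instruction, exemplars, question, options, thought_inducer):
    formatted_question = QUESTION_FORMAT.format(question=question, **options)
    return f"{base_instruction}\n{exemplars}\n{formatted_question} {thought_inducer}"

def extract_answer(output: str) -> str:
    return output.split("ANSWER::")[-1].strip()

def self_consistency(prompt: str, iterations: int = 3) -> str:
    votes = Counter(extract_answer(llm(prompt)) for _ in range(iterations))
    return votes.most_common(1)[0][0]          # majority response

prompt = build_prompt(
    base_instruction="Solve the following multiple-choice problem.",
    exemplars="",                               # Zero-Shot; exemplars would go here for Few-Shot
    question="What is 2 + 2?",
    options={"a": "3", "b": "5", "c": "4", "d": "22"},
    thought_inducer="Let's think step by step.",
)
print(self_consistency(prompt))
```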
cal judgment but as a way to complement existing practices (Resnik et al., 2021).
As a starting point, we focus here on the most important predictive factor in Suicide Crisis Syndrome assessments, referred to in the literature as either frantic hopelessness or entrapment, "a desire to escape from an unbearable situation, tied with the perception that all escape routes are blocked" (Melzer et al., 2024).13 This characteristic of what an individual is experiencing is also central in other characterizations of mental processes that result in suicide.

13 The former term more explicitly emphasizes the frantic and desperate action required to escape an unbearable life situation. However, the term entrapment is briefer and used widely so we adopt it here.

6.2.2 The Dataset
We worked with a subset of data from the University of Maryland Reddit Suicidality Dataset (Shing et al., 2018), which is constructed from posts in r/SuicideWatch, a subreddit that offers peer support for anyone struggling with suicidal thoughts. Two coders trained on the recognition of the factors in Suicide Crisis Syndrome coded a set of 221 posts for presence or absence of entrapment, achieving solid inter-coder reliability (Krippendorff's alpha = 0.72).
6.2.3 The Process
An expert prompt engineer, who has authored a widely used guide on prompting (Schulhoff, 2022), took on the task of using an LLM to identify entrapment in posts.14 The prompt engineer was given a brief verbal and written summary of Suicide Crisis Syndrome and entrapment, along with 121 development posts and their positive/negative labels (where "positive" means entrapment is present), the other 100 labeled posts being reserved for testing. This limited information mirrors frequent real-life scenarios in which prompts are developed based on a task description and the data. More generally, it is consistent with a tendency in natural language processing and AI more generally to approach coding (annotation) as a labeling task without delving very deeply into the fact that the labels may, in fact, refer to nuanced and complex underlying social science constructs.
We documented the prompt engineering process in order to illustrate the way that an experienced prompt engineer goes about their work. The exercise proceeded through 47 recorded development steps, cumulatively about 20 hours of work. From a cold start with 0% performance (the prompt wouldn't return properly structured responses), performance was boosted to an F1 of 0.53, where that F1 is the harmonic mean of 0.86 precision and 0.38 recall.15
Below, in the prompts, qinf is the test item, while qi, ri, and ai denote the questions, chain-of-thought steps, and answers in exemplars.

14 Disclosure: that expert is also the lead author of this paper.
15 Precision is also known as positive predictive value, and recall is also known as true positive rate or sensitivity. Although F1 is often used in computational system evaluations as a single figure of merit, we note that in this problem space its even weighting of precision and recall is probably not appropriate. We discuss this further below.

6.2.3.1 Dataset Exploration (2 steps)
The process began with the prompt engineer reviewing a description of entrapment (Figure 6.7); this description had been used as a first-pass rubric for the human coders early in the coding process, noting, however, that they were familiar with SCS and knew it was neither a formal definition nor exhaustive. The prompt engineer then loaded the dataset into a Python notebook for data exploration purposes. He began by asking gpt-4-turbo-preview if it knew what entrapment was (Figure 6.8), but found that the LLM's response was not similar to the description that had been given. In consequence, the prompt engineer included the Figure 6.7 description of entrapment in all future prompts.

6.2.3.2 Getting a Label (8 steps)
As noted in Section 6.1 with regard to the human_sexuality subset of MMLU, LLMs exhibit unpredictable and difficult to control behaviour in sensitive domains. For multiple steps in the prompt engineering process, the prompt engineer found that the LLM was giving mental health advice (e.g. Figure 6.9) instead of labeling the input. This was addressed by switching to the GPT-4-32K model.
A take-away from this initial phase is that the "guard rails" associated with some large language models may interfere with the ability to make progress on a prompting task, and this could influence the choice of model for reasons other than the LLM's potential quality.

6.2.3.3 Prompting Techniques (32 steps)
The prompt engineer then spent the majority of his time improving the prompting technique being used. This included techniques such as Few-Shot, Chain-of-Thought, AutoCoT, Contrastive CoT, and multiple answer extraction techniques.
[Chart omitted: F1, recall, and precision of each prompting technique explored on the development set, from Zero-Shot + Context through the 1-Shot, 10-Shot, and 20-Shot AutoDiCoT variants, the email-ablation and extraction-prompt variants, Triplicate Context, Ensemble + Extraction, and Anonymized Email.]
Figure 6.5: F1 scores varied widely from worst performing prompts to highest performing prompts, but most prompts scored within a similar range.
We report statistics for the first runs of these techniques; F1 scores could change by as much as 0.04 upon subsequent runs, even with temperature and top-p set to zero.16

16 Temperature and top-p are configuration hyperparameters that control randomness of the output (Schulhoff, 2022).

Zero-Shot + Context was the first technique evaluated (Figure 6.10), using the description in Figure 6.7. Notice the word definition in the prompt, although Figure 6.7 is not a formal definition.
In order to obtain a final response from the LLM to use in calculating performance metrics, it was necessary to extract a label from the LLM output. The prompt engineer tested two extractors, one that checks if the output is exactly "Yes" or "No", and another which just checks if those words match the first few characters of the output. The latter had better performance, and it is used for the rest of this section until we reach CoT. This approach obtained 0.40 F1, 1.0 recall, and 0.25 precision, evaluated on all samples from the training/development set, since no samples had been used as exemplars.
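The two extractors can be sketched as follows; everything beyond the exact-match versus first-characters behaviour described above is an assumption.

```python
def exact_match_extractor(output: str) -> str | None:
    # Accepts only outputs that are exactly "Yes" or "No".
    text = output.strip()
    return text if text in ("Yes", "No") else None

def first_chars_extractor(output: str) -> str | None:
    # Accepts outputs that merely start with "Yes" or "No" (the better-performing option).
    text = output.strip()
    for label in ("Yes", "No"):
        if text.startswith(label):
            return label
    return None

print(exact_match_extractor("Yes, this post describes entrapment."))  # None
print(first_chars_extractor("Yes, this post describes entrapment."))  # Yes
```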
10-Shot + Context. Next, the prompt engineer added the first ten data samples (with labels) into the prompt, in Q: (question) A: (answer) format (Figure 6.11). He evaluated this 10-shot prompt on the remaining items in the training/development set, yielding ↑0.05 (0.45) F1, ↓0.09 (0.91) recall, and ↑0.05 (0.30) precision, relative to the previous best prompt.17

17 Here and for the remainder of the case study, we judge "best" by F1, and we report on the current prompt under discussion relative to the best performing previous prompt.

One-Shot AutoDiCot + Full Context. After performing 10-shot prompting, the prompt engineer observed that the 12th item in the development set was incorrectly being labeled as a positive instance, and began experimenting with ways of modifying the prompt.
[Chart omitted: F1 scores of each prompting technique on the development set, from Zero-Shot + Context (first chars and exact match) through the 10-Shot and 20-Shot AutoDiCoT variants, the extraction-prompt and email-ablation variants, Triplicate Context, Ensemble + Extraction, and Anonymized Email; maximum F1 score 0.53.]
Figure 6.6: From the first prompt tried (Zero-Shot + Context) to the last (Anonymized Email), improvements in F1 score were hard to come by and often involved testing multiple underperforming prompts before finding a performant one. Green lines show improvements over the current highest F1 score, while red lines show deteriorations.
Entrapment:
- Feeling like there is no exit
- Feeling hopeless
- Feeling like there is no way out
- Feeling afraid that things will never be normal again
- Feeling helpless to change
- Feeling trapped
- Feeling doomed
- Feeling or thinking that things will never change
- Feeling like there is no escape
- Feeling like there are no good solutions to problems

Figure 6.7: The description of entrapment used by the prompt engineer

{ENTRAPMENT DEFINITION (Figure 6.7)}
{qinf}
Is this entrapment? Yes or no.

Figure 6.10: A Zero-Shot + Context prompt, the simplest of all prompts explored in this case study.

{ENTRAPMENT DEFINITION (Figure 6.7)}
Q: {q1}
A: {a1}
...
Q: {q10}
A: {a10}
Q: {qinf}
A:

Figure 6.11: The 10-Shot + Context prompt
1. Require: Development items T with n pairs (qi, ai)
2. For each pair (qi, ai) in T:
   (a) Label qi as entrapment or not entrapment using the model
   (b) If the model labels correctly:
       i. Prompt the model with "Why?" to generate a reasoning chain ri
   (c) Else:
       i. Prompt the model with "It is actually [is/is not] entrapment, please explain why." to generate a reasoning chain ri
   (d) Store the tuple (qi, ri, ai)
3. Return: n tuples (qi, ri, ai)

Figure 6.12: Algorithm: Automatic Directed CoT
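A sketch of the Figure 6.12 procedure in code; the llm stub and any prompt text beyond the quoted "Why?" and "It is actually [is/is not] entrapment, please explain why." wording are assumptions.

```python
def llm(prompt: str) -> str:
    # Stand-in for a real LLM call (assumption): it always predicts "entrapment"
    # when asked to label, and otherwise returns a one-line reasoning chain.
    if prompt.startswith("Label the post"):
        return "entrapment"
    return "Because the post describes feeling trapped with no way out."

def auto_dicot(development_items):
    """Automatic Directed CoT: build (q_i, r_i, a_i) tuples for later use as exemplars."""
    tuples = []
    for q, a in development_items:
        predicted = llm(f"Label the post as entrapment or not entrapment.\nPost: {q}")
        if predicted == a:                                  # model labeled correctly
            r = llm(f"Post: {q}\nWhy?")
        else:                                               # steer the reasoning toward the gold label
            verb = "is" if a == "entrapment" else "is not"
            r = llm(f"Post: {q}\nIt is actually {verb} entrapment, please explain why.")
        tuples.append((q, r, a))
    return tuples

dev_items = [("I feel like there is no way out.", "entrapment"),
             ("Had a rough week at work.", "not entrapment")]
for q, r, a in auto_dicot(dev_items):
    print(a, "|", r)
```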
language, he instructed the model to restrict itself to explicit statements of entrapment (Figure 6.13). Below we refer to these two pieces of context, provided in addition to the description of entrapment, as full context.

{PROFESSOR'S EMAIL}

{ENTRAPMENT DEFINITION (Figure 6.7)}

IMPORTANT: Only label the post as entrapment if they explicitly say that they feel trapped.

Q: {q12}
R: Although "Today I found out I have 10 days to vacate my apartment or I'll be formally evicted. I'm 2 months behind on my rent due to a bad time where I got demoted at work and rent from making roughly $1000 ever 2 weeks to around $450. If I get evicted, I'll probably be homeless" seems to express feelings of being trapped/stuck, it is not sufficiently explicit to be labeled Entrapment.
A: {a12}
Q: {qinf}

Figure 6.13: One-Shot AutoDiCot + Full Context
A new extractor was also used for this prompt, which checks if the last word in the output is "Yes" or "No", instead of the first word. This updated prompt was tested against all inputs in the development set except for the first 20. It did not improve F1, ↓0.09 (0.36) F1, but it led the prompt engineer in a direction that did, as discussed below. Precision dropped, ↓0.58 (0.33) precision, and recall improved, ↑0.09 (0.39) recall.
At this point, though, it is worth observing that, although it did ultimately lead to a gain in F1 score, the steps taken here to cut down on over-generation of positive labels were not, in fact, the right move in terms of the longer term goals. Entrapment need not be expressed explicitly in order to be present (e.g. through phrases like "I feel trapped" or "There's no way out"); rather, clinical experts who have looked at the texts found that expressions of entrapment could be implicit and potentially quite nuanced. Moreover, in most use cases for automatically spotting entrapment in someone's language, precision and recall are unlikely to be equally important and, of the two, the recall/sensitivity (i.e. not missing people who should be flagged as at-risk) may matter more because the potential cost of a false negative is so high.
The take-away here, although the insight came later, is that it is easy for the process of prompt development to diverge from the actual goals unless regular engagement is fostered between the prompt engineer and domain experts who more deeply understand the real-world use case.

Ablating Email. The results of the previous changes were promising, but they did involve creating a prompt that included information from an email message that had not been created for that purpose, and which included information about the project, the dataset, etc. that were not intended for disclosure to a broad audience. Ironically, removing this email significantly brought performance back down, ↓0.27 (0.18) F1, ↓0.75 (0.17) recall and ↓0.1 (0.20) precision. We attribute this to the fact that the email provided richer background information about the goals of the labeling.
Although we would not recommend including email or any other potentially identifying information in any LLM prompt, we chose to leave the email in the prompt; this is consistent with scenarios in many typical settings, in which prompts are not expected to be exposed to others.

10-Shot + 1 AutoDiCoT. As a next step, the prompt engineer tried including full context, 10 regular exemplars, and the one-shot exemplar about how not to reason. This hurt performance (Figure 6.14), ↓0.30 (0.15) F1, ↓0.15 (0.15) precision, ↓0.15 (0.15) recall.

{PROFESSOR'S EMAIL}

{ENTRAPMENT DEFINITION (Figure 6.7)}

IMPORTANT: Only label the post as entrapment if they explicitly say that they feel trapped.

Q: {q1}
A: {a1}
...
Q: {q10}
A: {a10}
Q: {q12}
R: Although "{LLM REASONING}" seems to express feelings of being trapped/stuck, it is not sufficiently explicit to be labeled Entrapment.
A: {a12}
Q: {qinf}

Figure 6.14: 10-Shot + 1 AutoDiCoT

Full Context Only. Next, a prompt was created using only full context, without any exemplars (Figure 6.15). This boosted performance over the previous technique, ↓0.01 (0.44) F1, ↓0.01 (0.29) precision, ↑0.62 (0.92) recall. Interestingly, in this prompt, the prompt engineer accidentally pasted in the full-context email twice, and that ended up having significant positive effects on performance later (and removing the duplicate actually decreased performance). This is reminiscent of the re-reading technique (Xu et al., 2023).

{PROFESSOR'S EMAIL}

{PROFESSOR'S EMAIL}

{ENTRAPMENT DEFINITION (Figure 6.7)}

IMPORTANT: Only label the post as entrapment if they explicitly say that they feel trapped.

Q: {qinf} A:

Figure 6.15: Full Context Only

This can be interpreted both optimistically and pessimistically. Optimistically, it demonstrates how improvements can arise through exploration and fortuitous discovery. On the pessimistic side, the value of duplicating the email in the prompt highlights the extent to which prompting remains a difficult to explain black art, where the LLM may turn out to be unexpectedly sensitive to variations one might not expect to matter.
10-Shot AutoDiCoT. The next step was to create more AutoDiCoT exemplars, per the algorithm in Figure 6.12. A total of ten new AutoDiCoT exemplars were added to the full context prompt (Figure 6.16). This yielded the most successful prompt from this prompt engineering exercise, in terms of F1 score, ↑0.08 (0.53) F1, ↑0.08 (0.38) precision, ↑0.53 (0.86) recall.

{PROFESSOR'S EMAIL}

{ENTRAPMENT DEFINITION}

IMPORTANT: Only label the post as entrapment if they explicitly say that they feel trapped.

Q: {q1}
R: {r1}
A: {a1}
...
Q: {q10}
R: {r10}
A: {a10}
Q: {qinf}

Figure 6.16: 10-Shot AutoDiCoT
plars. The average of the three results was taken to be the final answer. Unfortunately, both orderings that differed from the default ordering led to the LLM not outputting a well structured response. An extraction prompt was therefore used to obtain final answers. This exploration hurt rather than helped performance, ↓0.16 (0.36) F1, ↓0.22 (0.64) recall, ↓0.12 (0.26) precision.

10-Shot AutoCoT + 3x the context (no email dupe). Recall that context refers to the description of entrapment, an instruction about explicitness, and an email. Since the duplicated email had improved performance, the prompt engineer tested out pasting in three copies of the context (first de-duplicating the email). However, this did not improve performance, ↓0.06 (0.47) F1, ↓0.08 (0.78) recall, ↓0.05 (0.33) precision.

Anonymize Email. At this point it seemed clear that including the duplicated email in the prompt was actually, although not explainably, essential to the best performance so far obtained. The prompt engineer decided to anonymize the email by replacing personal names with other, random names. However, surprisingly, this decreased performance significantly, ↓0.08 (0.45) F1, ↓0.14 (0.72) recall, ↓0.05 (0.33) precision.

[Chart omitted: F1, recall, and precision on the test set for 10-Shot AutoDiCoT, 20-Shot AutoDiCoT, DSPy Default, and DSPy Default + Small Modifications.]
Figure 6.19: Scores of different prompting techniques on the test set.
DSPy. We concluded the case study by exploring an alternative to manual prompt engineering, the DSPy framework (Khattab et al., 2023), which automatically optimizes LLM prompts for a given target metric. Specifically, we begin with a chain-of-thought classification pipeline that uses the definition of entrapment in Figure 6.7. Over 16 iterations, DSPy bootstrapped synthetic LLM-generated demonstrations and randomly sampled training exemplars, with the ultimate objective of maximizing F1 on the same development set used above. We used gpt-4-0125-preview and the default settings for the BootstrapFewShotWithRandomSearch "teleprompter" (the optimization approach). Figure 6.19 shows the results of two of these prompts on the test set, one of which used default DSPy behaviour, and the second of which was manually modified slightly from this default. The best resulting prompt includes 15 exemplars (without CoT reasoning) and one bootstrapped reasoning demonstration. It achieves 0.548 F1 (and 0.385 / 0.952 precision / recall) on the test set, without making any use of the professor's email nor the incorrect instruction about the explicitness of entrapment. It also performs much better than the human prompt engineer's prompts on the test set, which demonstrates the significant promise of automated prompt engineering.
6.2.4 Discussion
Prompt engineering is a non-trivial process, the nuances of which are not currently well described in literature. From the fully manual process illustrated above, there are several take-aways worth summarizing. First, prompt engineering is fundamentally different from other ways of getting a computer to behave the way you want it to: these systems are being cajoled, not programmed, and, in addition to being quite sensitive to the specific LLM being used, they can be incredibly sensitive to specific details in prompts without there being any obvious reason those details should matter. Second, therefore, it is important to dig into the data (e.g. generating potential explanations for LLM "reasoning" that leads to incorrect responses). Related, the third and most important take-away is that prompt engineering should involve engagement between the prompt engineer, who has expertise in how to coax LLMs to behave in desired ways, and domain experts, who understand what those desired ways are and why.
Ultimately we found that there was significant promise in an automated method for exploring the prompting space, but also that combining that automation with human prompt engineering/revision was the most successful approach. We hope that this study will serve as a step toward more robust examinations of how to perform prompt engineering.
7 Related Work
In this section, we review existing surveys and meta-analyses of prompting. Liu et al. (2023b) perform a systematic review of prompt engineering in the pre-ChatGPT era, including various aspects of prompting like prompt template engineering, answer engineering, prompt ensembling, and prompt tuning methods. Their review covers many different types of prompting (e.g., cloze, soft-prompting, etc., across many different types of language models), while we focus on discrete prefix prompting with more in-depth discussion. Chen et al. (2023a) provide a review of popular prompting techniques like Chain-of-Thought, Tree-of-Thought, Self-Consistency, and Least-to-Most prompting, along with outlooks for future prompting research. White et al. (2023) and Schmidt et al. (2023) provide a taxonomy of prompt patterns, which are similar to software patterns (and prompting techniques for that matter). Gao (2023) provide a practical prompting technique tutorial for a non-technical audience. Santu and Feng (2023) provide a general taxonomy of prompts that can be used to design prompts with specific properties to perform a wide range of complex tasks. Bubeck et al. (2023) qualitatively experiment with a wide range of prompting methods on the early version of GPT-4 to understand its capabilities. Chu et al. (2023) review Chain-of-Thought related prompting methods for reasoning. In earlier work, Bommasani et al. (2021) review and discuss opportunities and risks of foundation models broadly, and Dang et al. (2022) discuss prompting strategies for interactive creative applications that use prompting as a new paradigm for human interaction, with a particular focus on the user interface design that supports user prompting. As an addition to these existing surveys, our review aims to provide a more updated and formalized systematic review.

There is also a line of work that surveys prompting techniques for particular domains or downstream applications. Meskó (2023) and Wang et al. (2023d) offer recommended use cases and limitations of prompt engineering in the medical and healthcare domains. Heston and Khun (2023) provide a review of prompt engineering for medical education use cases. Peskoff and Stewart (2023) query ChatGPT and YouChat to assess domain coverage. Hua et al. (2024) use a GPT-4-automated approach to review LLMs in the mental health space. Wang et al. (2023c) review prompt engineering and relevant models in the visual modality, and Yang et al. (2023e) provide a comprehensive list of qualitative analyses of multimodal prompting, particularly focusing on GPT-4V.19 Durante et al. (2024) review multimodal interactions based on LLM embodied agents. Ko et al. (2023b) review literature on the adoption of Text-to-Image generation models for visual artists' creative works. Gupta et al. (2024) review GenAI through a topic modeling approach. Awais et al. (2023) review foundation models in vision, including various prompting techniques. Hou et al. (2023) perform a systematic review of prompt engineering techniques as they relate to software engineering. They use a systematic review technique developed by Keele et al. (2007), specifically for software engineering reviews. Wang et al. (2023e) review the literature on software testing with large language models. Zhang et al. (2023a) review ChatGPT prompting performance on software engineering tasks such as automated program repair. Neagu (2023) provide a systematic review on how prompt engineering can be leveraged in computer science education. Li et al. (2023j) review literature on the fairness of large language models. There are also surveys on related aspects such as hallucination of language models (Huang et al., 2023b), verifiability (Liu et al., 2023a), reasoning (Qiao et al., 2022), augmentation (Mialon et al., 2023), and linguistic properties of prompts (Leidinger et al., 2023). Different from these works, we perform our review targeting broad coverage and generally applicable prompting techniques. Finally, in terms of more general prior and concurrent surveys (Liu et al., 2023b; Sahoo et al., 2024; Vatsal and Dubey, 2024), this survey offers an update in a fast-moving field. In addition, we provide a starting point for taxonomic organization of prompting techniques and standardization of terminology. Moreover, unlike many works that claim to be systematic, we base our work in the widely used standard for systematic literature reviews—PRISMA (Page et al., 2021).

19 https://openai.com/research/gpt-4v-system-card
8 Conclusions
Generative AI is a novel technology, and broader understanding of models' capabilities and limitations remains limited. Natural language is a flexible, open-ended interface, with models having few obvious affordances. The use of Generative AI therefore inherits many of the standard challenges of linguistic communication—e.g., ambiguity, the role of context, the need for course correction—while at the same time adding the challenge of communicating with an entity whose "understanding" of language may not bear any substantial relationship to human understanding. Many of the techniques described here have been called "emergent", but it is perhaps more appropriate to say that they were discovered—the result of thorough experimentation, analogies from human reasoning, or pure serendipity.
The present work is an initial attempt to categorize the species of an unfamiliar territory. While we make every attempt to be comprehensive, there are sure to be gaps and redundancies. Our intention is to provide a taxonomy and terminology that cover a large number of existing prompt engineering techniques, and which can accommodate future methods. We discuss over 200 prompting techniques, frameworks built around them, and issues like safety and security that need to be kept in mind when using them. We also present two case studies in order to provide a clear sense of models' capabilities and what it is like to tackle a problem in practice. Last, our stance is primarily observational, and we make no claims to the validity of the presented techniques. The field is new, and evaluation is variable and unstandardized—even the most meticulous experimentation may suffer from unanticipated shortcomings, and model outputs themselves are sensitive to meaning-preserving changes in inputs. As a result, we encourage the reader to avoid taking any claims at face value and to recognize that techniques may not transfer to other models, problems, or datasets.
To those just beginning in prompt engineering, our recommendations resemble what one would recommend in any machine learning setting: understand the problem you are trying to solve (rather than just focusing on input/output and benchmark scores), and ensure the data and metrics you are working with constitute a good representation of that problem. It is better to start with simpler approaches first, and to remain skeptical of claims about method performance. To those already engaged in prompt engineering, we hope that our taxonomy will shed light on the relationships between existing techniques. To those developing new techniques, we encourage situating new methods within our taxonomy, as well as including ecologically valid case studies and illustrations of those techniques.

Acknowledgements
We appreciate the advice given by Hal Daumé III, Adam Visokay, and Jordan Boyd-Graber and review by Diyi Yang, Brandon M. Stewart, Shubham Vatsal, Mason Marchetti, and Allie Miller. We also appreciate the 10K USD in API credits given by OpenAI and design work by Benjamin DiMarco.
References Liu, Aohan Zeng, Lei Hou, Yuxiao Dong, Jie Tang,
and Juanzi Li. 2023a. Longbench: A bilingual, mul-
Adept. 2023. ACT-1: Transformer for Actions. https: titask benchmark for long context understanding.
//www.adept.ai/blog/act-1.
Yushi Bai, Jiahao Ying, Yixin Cao, Xin Lv, Yuze He,
Rishabh Agarwal, Avi Singh, Lei M Zhang, Bernd Xiaozhi Wang, Jifan Yu, Kaisheng Zeng, Yijia Xiao,
Bohnet, Luis Rosias, Stephanie Chan, Biao Zhang, Haozhe Lyu, et al. 2023b. Benchmarking Foundation
Ankesh Anand, Zaheer Abbas, Azade Nova, et al. Models with Language-Model-as-an-Examiner. In
2024. Many-shot in-context learning. arXiv preprint NeurIPS 2023 Datasets and Benchmarks.
arXiv:2404.11018.
Chris Bakke. 2023. Buying a chevrolet for 1$.
Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke
Zettlemoyer, and Marjan Ghazvininejad. 2023. In- Nishant Balepur, Jie Huang, and Kevin Chang. 2023.
context examples selection for machine translation. Expository text generation: Imitate, retrieve, para-
In Findings of the Association for Computational phrase. In Proceedings of the 2023 Conference on
Linguistics: ACL 2023, pages 8857–8873, Toronto, Empirical Methods in Natural Language Process-
Canada. Association for Computational Linguistics. ing, pages 11896–11919, Singapore. Association for
Computational Linguistics.
Kabir Ahuja, Harshita Diddee, Rishav Hada, Milli-
cent Ochieng, Krithika Ramesh, Prachi Jain, Ak- Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wen-
shay Nambi, Tanuja Ganu, Sameer Segal, Maxamed liang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei
Axmed, Kalika Bali, and Sunayana Sitaram. 2023. Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu,
MEGA: Multilingual Evaluation of Generative AI. and Pascale Fung. 2023. A Multitask, Multilingual,
In EMNLP. Multimodal Evaluation of ChatGPT on Reasoning,
Hallucination, and Interactivity. In AACL.
Rebuff AI. 2023. A self-hardening prompt injection
detector. Hritik Bansal, Karthik Gopalakrishnan, Saket Dingliwal,
Anirudh Ajith, Chris Pan, Mengzhou Xia, Ameet Desh- Sravan Bodapati, Katrin Kirchhoff, and Dan Roth.
pande, and Karthik Narasimhan. 2024. InstructEval: 2023. Rethinking the Role of Scale for In-Context
Systematic evaluation of instruction selection meth- Learning: An Interpretability-based Case Study at 66
ods. In Findings of the Association for Computa- Billion Scale. In ACL.
tional Linguistics: NAACL 2024, pages 4336–4350, Omer Bar-Tal, Dolev Ofri-Amar, Rafail Fridman, Yoni
Mexico City, Mexico. Association for Computational Kasten, and Tali Dekel. 2022. Text2live: Text-driven
Linguistics. layered image and video editing.
Sílvia Araújo and Micaela Aguiar. 2023. Comparing
Amanda Bertsch, Maor Ivgi, Uri Alon, Jonathan Berant,
chatgpt’s and human evaluation of scientific texts’
Matthew R Gormley, and Graham Neubig. 2024. In-
translations from english to portuguese using popular
context learning with long-context models: An in-
automated translators. CLEF.
depth exploration. arXiv preprint arXiv:2405.00200.
ArthurAI. 2024. Arthur shield.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Ger-
Akari Asai, Sneha Kudugunta, Xinyan Velocity Yu, stenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz
Terra Blevins, Hila Gonen, Machel Reid, Yulia Lehmann, Michał Podstawski, Hubert Niewiadomski,
Tsvetkov, Sebastian Ruder, and Hannaneh Hajishirzi. Piotr Nyczyk, and Torsten Hoefler. 2024. Graph of
2023. BUFFET: Benchmarking Large Language Thoughts: Solving Elaborate Problems with Large
Models for Few-shot Cross-lingual Transfer. Language Models. Proceedings of the AAAI Confer-
ence on Artificial Intelligence, 38(16):17682–17690.
Muhammad Awais, Muzammal Naseer, Salman
Khan, Rao Muhammad Anwer, Hisham Cholakkal, Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ
Mubarak Shah, Ming-Hsuan Yang, and Fahad Shah- Altman, Simran Arora, Sydney von Arx, Michael S.
baz Khan. 2023. Foundational models defining a new Bernstein, Jeannette Bohg, Antoine Bosselut, Emma
era in vision: A survey and outlook. Brunskill, Erik Brynjolfsson, S. Buch, Dallas Card,
Rodrigo Castellon, Niladri S. Chatterji, Annie S.
Abhijeet Awasthi, Nitish Gupta, Bidisha Samanta, Chen, Kathleen A. Creel, Jared Davis, Dora Dem-
Shachi Dave, Sunita Sarawagi, and Partha Talukdar. szky, Chris Donahue, Moussa Doumbouya, Esin Dur-
2023. Bootstrapping multilingual semantic parsers mus, Stefano Ermon, John Etchemendy, Kawin Etha-
using large language models. In Proceedings of the yarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lau-
17th Conference of the European Chapter of the As- ren E. Gillespie, Karan Goel, Noah D. Goodman,
sociation for Computational Linguistics, pages 2455– Shelby Grossman, Neel Guha, Tatsunori Hashimoto,
2467, Dubrovnik, Croatia. Association for Computa- Peter Henderson, John Hewitt, Daniel E. Ho, Jenny
tional Linguistics. Hong, Kyle Hsu, Jing Huang, Thomas F. Icard, Saahil
Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth
Yushi Bai, Xin Lv, Jiajie Zhang, Hongchang Lyu, Karamcheti, Geoff Keeling, Fereshte Khani, O. Khat-
Jiankai Tang, Zhidian Huang, Zhengxiao Du, Xiao tab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna,
44
Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, CDC. 2023. Suicide data and statistics.
Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent,
Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu,
Christopher D. Manning, Suvir Mirchandani, Eric Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu.
Mitchell, Zanele Munyikwa, Suraj Nair, Avanika 2024. Chateval: Towards better LLM-based eval-
chain-of-thought, effective graph-of-thought reason- hiker’s guide from chain-of-thought reasoning to lan-
ing in large language models. guage agents.
Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex
Pasupat, Jure Leskovec, Percy Liang, Ed H. Chi, Smola. 2022b. Automatic chain of thought prompt-
and Denny Zhou. 2023. Large language models as ing in large language models.
analogical reasoners.
Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao,
Qinyuan Ye, Maxamed Axmed, Reid Pryzant, and George Karypis, and Alex Smola. 2023d. Multi-
Fereshte Khani. 2023. Prompt engineering a prompt modal chain-of-thought reasoning in language mod-
engineer. els.
Xi Ye and Greg Durrett. 2023. Explanation selection Ruochen Zhao, Xingxuan Li, Shafiq Joty, Chengwei
using unlabeled data for chain-of-thought prompting. Qin, and Lidong Bing. 2023a. Verify-and-edit: A
knowledge-enhanced chain-of-thought framework.
Kang Min Yoo, Junyeob Kim, Hyuhng Joon Kim, Hyun-
In Proceedings of the 61st Annual Meeting of the
soo Cho, Hwiyeol Jo, Sang-Woo Lee, Sang goo Lee,
Association for Computational Linguistics (Volume
and Taeuk Kim. 2022. Ground-truth labels matter: A
1: Long Papers), pages 5823–5840, Toronto, Canada.
deeper look into input-label demonstrations.
Association for Computational Linguistics.
Ori Yoran, Tomer Wolfson, Ben Bogin, Uri Katz, Daniel
Deutch, and Jonathan Berant. 2023. Answering Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and
questions by meta-reasoning over multiple chains Sameer Singh. 2021a. Calibrate before use: Improv-
of thought. ing few-shot performance of language models.
Adeel Yousaf, Muzammal Naseer, Salman Khan, Fa- Yilun Zhao, Haowei Zhang, Shengyun Si, Linyong Nan,
had Shahbaz Khan, and Mubarak Shah. 2023. Video- Xiangru Tang, and Arman Cohan. 2023b. Large lan-
prompter: an ensemble of foundational models for guage models are effective table-to-text generators,
zero-shot video understanding. evaluators, and feedback providers. arXiv preprint
arXiv:2305.14987.
Yue Yu, Yuchen Zhuang, Jieyu Zhang, Yu Meng,
Alexander Ratner, Ranjay Krishna, Jiaming Shen, Yuyang Zhao, Zhiwen Yan, Enze Xie, Lanqing Hong,
and Chao Zhang. 2023. Large language model as Zhenguo Li, and Gim Hee Lee. 2023c. Animate124:
attributed training data generator: A tale of diversity Animating one image to 4d dynamic scene.
and bias. arXiv preprint arXiv:2306.15895.
Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and
Xiang Yue, Boshi Wang, Kai Zhang, Ziru Chen, Yu Su, Sameer Singh. 2021b. Calibrate before use: Im-
and Huan Sun. 2023. Automatic evaluation of at- proving few-shot performance of language models.
tribution by large language models. arXiv preprint In International Conference on Machine Learning,
arXiv:2305.06311. pages 12697–12706. PMLR.
Zhiyuan Zeng, Jiatong Yu, Tianyu Gao, Yu Meng, Tanya Chujie Zheng, Hao Zhou, Fandong Meng, Jie Zhou, and
Goyal, and Danqi Chen. 2023. Evaluating large Minlie Huang. 2023a. On large language models’ se-
language models at evaluating instruction following. lection bias in multi-choice questions. arXiv preprint
arXiv preprint arXiv:2310.07641. arXiv:2309.03882.
Michael JQ Zhang and Eunsol Choi. 2023. Clarify when Ge Zheng, Bin Yang, Jiajin Tang, Hong-Yu Zhou, and
necessary: Resolving ambiguity through interaction Sibei Yang. 2023b. Ddcot: Duty-distinct chain-of-
with lms. arXiv preprint arXiv:2311.09469. thought prompting for multimodal reasoning in lan-
guage models.
Quanjun Zhang, Tongke Zhang, Juan Zhai, Chunrong
Fang, Bowen Yu, Weisong Sun, and Zhenyu Chen. Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen,
2023a. A critical review of large language model on Heng-Tze Cheng, Ed H. Chi, Quoc V Le, and Denny
software engineering: An example from chatgpt and Zhou. 2023c. Take a step back: Evoking reasoning
automated program repair. via abstraction in large language models.
58
Mingqian Zheng, Jiaxin Pei, and David Jurgens. 2023d.
Is "a helpful assistant" the best role for large language
models? a systematic evaluation of social roles in
system prompts.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc Le, et al. 2022a.
Least-to-most prompting enables complex reason-
ing in large language models. arXiv preprint
arXiv:2205.10625.
Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han,
Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy
Ba. 2022b. Large language models are human-level
prompt engineers.
Yucheng Zhou, Xiubo Geng, Tao Shen, Chongyang Tao,
Guodong Long, Jian-Guang Lou, and Jianbing Shen.
2023. Thread of thought unraveling chaotic contexts.
Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Wei-
jie Su, Chenyu Yang, Gao Huang, Bin Li, Lewei Lu,
Xiaogang Wang, Yu Qiao, Zhaoxiang Zhang, and
Jifeng Dai. 2023. Ghost in the minecraft: Gener-
ally capable agents for open-world environments via
large language models with text-based knowledge
and memory.
Zhichao Zuo, Zhao Zhang, Yan Luo, Yang Zhao, Haijun
Zhang, Yi Yang, and Meng Wang. 2023. Cut-and-
paste: Subject-driven video editing with attention
control.
A Appendices
(Hadi et al., 2023)
Prompt: the instructions provided to an LLM to make it follow specified rules, automation of processes and to ensure that the output generated is of a specific quality or quantity.
Prompt engineering: refers to the designing and wording of prompts given to LLMs so as to get a desired response from them.
(Neagu, 2023)
Prompt engineering: entails various strategies, including explicit instruction, and implicit context [21]. Explicit instruction involves providing explicit guidance or constraints to the model through instructions, examples, or specifications. Implicit context leverages the model's understanding of the preceding context to influence its response.
(Dang et al., 2022)
Prompt engineering: the systematic practice of constructing prompts to improve the generated output of a generative model.
Table A.1: Definitions of Prompt and Prompt Engineering from different papers.
A.2 Extended Vocabulary
A.2.1 Prompting Terms
Context Window The context window is the span of tokens (for LLMs) that the model can process at once.
It has a maximal length (the context length).
Priming (Schulhoff, 2022) refers to giving a model an initial prompt that lays out certain instructions
for the rest of a conversation. This priming prompt might contain a role or other instructions on how to
interact with the user. Priming can either be done in the system or user prompt (see below).
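To make the placement concrete, the following is a minimal sketch of priming via a system prompt, written against the OpenAI Python client; the model name and the priming wording are illustrative placeholders of our own, not examples from this paper.

# Minimal sketch of priming: an initial prompt that sets a role and interaction rules
# for the rest of the conversation. Model name and wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

priming_prompt = (
    "You are a meticulous copy editor. For every message, point out at most "
    "three issues and suggest a fix for each, as a numbered list."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": priming_prompt},  # priming placed in the system prompt
        {"role": "user", "content": "Their going to announce the results tomorrow."},
    ],
)
print(response.choices[0].message.content)

The same priming text could instead be prepended to the first user message; the system role simply keeps it separate from user input.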
Prefix In Prefix prompts, the token to be predicted is at the end of the prompt (Liu et al., 2023b). This is
usually the case with modern GPT-style models (Radford et al., 2019b).
A.3 Datasheet
We present a datasheet (Gebru et al., 2021) with more information about the associated paper dataset,
which is hosted on HuggingFace.
A.3.1 Motivation
For what purpose was the dataset created? Was there a specific task in mind? Was there a specific
gap that needed to be filled? Please provide a description.
This dataset was created to gather existing literature on prompt engineering in order to analyze all current
hard prefix prompting techniques.
Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g.,
company, institution, organization)?
This research was associated with the University of Maryland, Learn Prompting, and sponsored by
OpenAI, but not created on the behalf of any particular organization.
Who funded the creation of the dataset? If there is an associated grant, please provide the name
of the grantor and the grant name and number.
OpenAI contributed $10,000 in credits for their API.
A.3.2 Composition
What do the instances that comprise the dataset represent (e.g., documents, photos, people, coun-
tries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions
between them; nodes and edges)? Please provide a description.
The dataset contains 1,565 research papers in PDF format. Duplicate papers were removed automatically, though some duplicates may remain.
What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or
features? In either case, please provide a description.
Each data instance is a research paper as a PDF.
Is there a label or target associated with each instance? If so, please provide a description.
No
Is any information missing from individual instances? If so, please provide a description, ex-
plaining why this information is missing (e.g., because it was unavailable). This does not include
intentionally removed information, but might include, e.g., redacted text.
No.
Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a
description.
The papers were gathered in a semi-automated process, which introduced the possibility of irrelevant papers being collected and of relevant papers being missed. Manual reviews were conducted to mitigate both types of error.
Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g.,
websites, tweets, other datasets)?
It is self-contained.
Does the dataset contain data that might be considered confidential (e.g., data that is protected by
legal privilege or by doctor–patient confidentiality, data that includes the content of individuals’
non-public communications)? If so, please provide a description.
No.
Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening,
or might otherwise cause anxiety? If so, please describe why.
The dataset contains some papers on prompt injection. These papers may contain offensive content
including racism and sexism.
A.3.3 Collection Process
How was the data associated with each instance acquired?
The dataset was compiled from Arxiv, Semantic Scholar, and ACL.
What mechanisms or procedures were used to collect the data?
We wrote scripts to automatically query the APIs of Arxiv and Semantic Scholar.
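As an illustration of this kind of collection script (our own sketch, not the actual pipeline used for this dataset), the snippet below queries the public arXiv export API for a single search keyword and returns paper titles and links; the Semantic Scholar queries and deduplication steps are omitted.

# Minimal sketch of querying the arXiv export API for one search keyword (illustrative only).
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def search_arxiv(keyword: str, max_results: int = 50):
    query = urllib.parse.urlencode({
        "search_query": f'all:"{keyword}"',
        "start": 0,
        "max_results": max_results,
    })
    url = f"http://export.arxiv.org/api/query?{query}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())
    # Return (title, link) pairs for each returned Atom entry.
    return [
        (entry.find(f"{ATOM}title").text.strip(), entry.find(f"{ATOM}id").text.strip())
        for entry in feed.findall(f"{ATOM}entry")
    ]

if __name__ == "__main__":
    for title, link in search_arxiv("prompt engineering", max_results=5):
        print(title, link)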
Over what timeframe was the data collected?
The dataset was curated over the duration of the research project, primarily in February of 2024.
Were any ethical review processes conducted?
No.
A.3.5 Uses
Has the dataset been used for any tasks already?
No.
Is there a repository that links to any or all papers or systems that use the dataset?
Yes.
Is there anything about the composition of the dataset or the way it was collected and prepro-
cessed/cleaned/labeled that might impact future uses?
All of the papers we collected were written in English. It is possible some papers were not included due to
a translation not being available.
Are there tasks for which the dataset should not be used?
No.
A.3.6 Distribution
Will the dataset be distributed to third parties outside of the entity on behalf of which the dataset
was created?
No.
A.3.7 Maintenance
Who will be supporting/hosting/maintaining the dataset?
Our team will continue maintenance.
How can the owner/curator/manager of the dataset be contacted?
Please email us at [email protected]
Is there an erratum?
No.
If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for
them to do so?
Yes, anyone is free to use/modify the data.
A.4 Keywords
Here are the keywords we used for search.
• jailbreak prompt
• prompt an llm
• prompt injection
• prompt optimization
• prompt engineering
• few-shot learning
• prompt-based methods
• prompting-based methods
• few-shot prompt
• one-shot prompt
• few-shot prompting
• one-shot prompting
• prompting techniques
• llm prompting
• 0-shot prompt
• 0 shot prompt
• zero-shot prompt
• many-shot prompt
• zero-shot prompting
• many-shot prompting
• in-context learning
• in context learning
• human-in-the-loop prompting
• token-efficient prompting
• multimodal prompting
• instruction prompting
• prompt templating
• prompt template
A.5 Evaluation Table
ID | Model | Prompt (Roles, CoT, Definition, Few-Shot) | Output Space | Type | Res. | Batch
(Kocmi and Federmann, 2023b) GPT-family DA, sMQM, stars, classes E S
(Lu et al., 2023c) Dav3, Turbo, GPT-4 ✓ ✓ ✓ Error Span → Score E S ✓
(Fernandes et al., 2023) PaLM ✓ ✓ ✓ Error Span I S
(Kocmi and Federmann, 2023a) GPT-4 ✓ ✓ ✓ Error Span I S ✓
(Araújo and Aguiar, 2023) ChatGPT ✓ Likert [1-5] E S ✓
(Wang et al., 2023b) ChatGPT ✓ DA, stars E S
(Liu et al., 2023d)† GPT-3.5, GPT-4 ✓ Likert [1-10] I M
(Chan et al., 2024) ChatGPT, GPT-4 ✓ ✓ Likert [1-10] I M
(Luo et al., 2023) ChatGPT ✓ ✓ yes/no;A/B; Likert [1-10] E S
(Hada et al., 2024) GPT4-32K ✓ ✓ [0,1,2] or binary E S ✓
(Fu et al., 2023a) GPT3, OPT, FLAN-T5, GPT2 Probability I S
(Gao et al., 2023c) ChatGPT ✓ Likert [1-5], Pairwise, Pyramid, 0/1 E S
(Chen et al., 2023g) ChatGPT Likert [1-10]; yes/no; pairwise: A/B/C E&I S
(He et al., 2023a) GPT-4 ✓ Likert [1-5] E S
(Sottana et al., 2023) GPT-4 ✓ Likert [1-5] E S
(Chen et al., 2023c) GPT, Flan-T5 ✓ Yes/No E S
(Zhao et al., 2023b) GPT-3.5, GPT-4 ✓ ✓ true/false E S
(Wu et al., 2023b) GPT-3 ✓ pairwise voting E M ✓
(Wang et al., 2023i) PaLM 2-IT-L A/B E M
(Jia et al., 2023) LLaMa7b Probability I S
(Yue et al., 2023) ChatGPT, Alpaca, Vicuna, GPT-4 ✓ ✓ Yes/No E S
(Li et al., 2023e) GPT-3.5, GPT-4, Bard, Vicuna ✓ Pairwise I M
(Liu et al., 2023f) ChatGPT, Vicuna, chatGLM, StableLM ✓ continuous [0-1] E S
(Bai et al., 2023b) GPT-4, Claude, ChatGPT, Bard, Vicuna ✓ Likert [1-5] E S
(Dubois et al., 2023) GPT-4, ChatGPT, Dav3 ✓ ✓ pairwise E M ✓
(Liu et al., 2023h)† GPT-4-32K ✓ Likert [1-5] E S
(Wang et al., 2023h) Turbo, ChatGPT, GPT-4, Vicuna ✓ Likert [1-10] E M
(Zeng et al., 2023) GPT-4, ChatGPT, LLaMA-2-Chat, PaLM2, Falcon ✓ ✓ ✓ Pairwise E S
(Zheng et al., 2023b) Claude-v1, GPT-3.5, GPT-4 ✓ ✓ Pairwise/Likert [1-10] E S/M
(Lin and Chen, 2023) Claude-v1.3 Likert [0-5], Likert [0-100] E S ✓
Table A.2: Evaluation Paper Summary. E: Explicit (whether the model generates an assessment), I: Implicit (whether
an assessment is derived from the model output); Response (Res.) S: Single response, M: Multiple responses; †:
Model generated instruction;
A.6 Entrapment Prompting Process
This section contains the thought process of our prompt engineer as he developed the prompt.
A.6.1 Exploration
• First did a bit of dataset exploration, looking at length/label distribution, then chose entrapment to
start with.
• Checked if gpt-1106-preview understood entrapment, by asking it to define the term WRT SCS. It
did not.
• I put the instructions in the user prompt, but got a similar response.
• Appending "Is this entrapment? Yes or no" got it to actually respond with a label.
• I gave it a reject example, but it labeled it as accept, and it also sometimes gave emotional responses
about getting help.
• I tried removing the definition, got the same problem. It seems that it doesn’t like "accept"/"reject"
as labels.
• I added back the definition, changed labels to entrapment/not entrapment, it often says "not enough
info to determine".
• I switched to GPT-4-32K, which gave one word responses and correctly predicted a reject and an
accept datapoint.
• Tried doing ’A: Let’s go step by step and figure this out: ", output "A: Yes"
• Did "A: Let’s go step by step and figure this out: ", OUTPUT:
The individual in this post is expressing feelings of hopelessness, loneliness, and despair.
They mention a significant loss (their mother) and a subsequent decline in their mental health,
turning to alcohol as a coping mechanism. They also mention financial struggles and the
possibility of becoming homeless. They express feelings of guilt and shame about their
situation and compare themselves negatively to their friends and family. Most concerning
is their detailed plan for suicide, involving sleeping pills and alcohol. They also mention
considering stepping in front of a car. Given these factors, the label for this post would be
"Yes" for expressing feelings of entrapment.
• "A: Before predicting a label, let’s go step by step and write out your thought process. Use the items
in the definition of entrapment to support your conclusion.", OUTPUT:
Before predicting a label, let’s go step by step and write out your thought process. Use the
items in the definition of entrapment to support your conclusion.
• Began discussing with the AI "they don’t ever say they feel trapped or that there is no way out, how
did u get that idea?", OUTPUT:
The individual doesn’t explicitly state that they feel trapped or that there is no way out, but
their narrative suggests a sense of entrapment. They mention that they are facing eviction,
struggling with debt, and dealing with a decrease in income. They also express a sense of
hopelessness and despair, and contemplate suicide as a potential solution to their problems.
These elements suggest a feeling of being trapped in their current circumstances with no
apparent way out. However, it’s important to note that this interpretation is based on the
information provided in the post and the individual’s expressed feelings and thoughts.
• Asked it to help reformulate instructions to make explicitness more clearly needed, still predicts
entrapment.
• Added "ENTRAPMENT MUST BE EXPLICIT, NOT IMPLICIT." to instruction at top after defini-
tions (no CoT inducer)
• Also tried with previous CoT inducer, still says entrapment in a hard to parse out way.
• Added "If the person does not explicitly say they feel trapped, this doesn’t count as entrapment." (no
CoT), still entrapment.
• Talking to AI, it thinks "Today I found out I have 10 days to vacate my apartment or I’ll be formally
evicted. I’m 2 months behind on my rent due to a bad time where I got demoted at work and rent
from making roughly $1000 ever 2 weeks to around $450. If I get evicted, I’ll probably be homeless"
is feeling trapped/stuck.
• Tried "Before stating a label, write out reasoning using the grid above about why this may/may not
be entrapment:", still thinks entrapment.
• add this after def: IMPORTANT: Only label the post as entrapment if they explicitly say that they
feel trapped., says "Yes"
• In the prompt, gave it CoT reasoning. (18.txt), and tried with the next wrongly labeled one (15), (full
prompt, 19.txt)
• At this point, I am thinking that giving examples with reasoning helps (obviously)
• Tried to add 10 shots in for free, before the last one with reasoning, bad results
A.6.3.1 AutoCoT
• Develop dataset using this prompt (22.txt). Then ask it "Why?". If it disagrees, I say "It is actually
not entrapment, please explain why." (accidentally duplicated email 23.txt)
• Just for fun, tried 0 shot full context (had to adjust verbalizer)
• tried this with special verbalizer which catches "This post does not meet the criteria for Entrapment."
• Doing 10 more exemplars w autocot. Sometimes responds immediately with reasoning like "This
post does not meet the criteria for Entrapment as the individual does not explicitly express feelings
of being trapped or hopeless.", so just use that if so. Sometimes get refusal "I’m really sorry to hear
that you’re feeling this way, but I’m unable to provide the help that you need. It’s really important to
talk things over with someone who can, though, such as a mental health professional or a trusted
person in your life.", just ask "Explain why it is not entrapment." after if so.
• Performance didn't really improve; realized about 11% of items are getting -1, meaning the label was not extracted properly. Retrying with the full word "Question" instead of "Q", and likewise for the reasoning and answer parts.
• Only using the extracted label for items that originally got -1 helps slightly (0.48, 0.61, 0.8571428571428571, 0.3333333333333333).
• Going back to the best-performing prompt (10 QRA-shot) and performing extraction on any -1s doesn't help beyond gently boosting accuracy, perhaps when it doesn't answer. (A sketch of this kind of label-extraction step appears after this list.)
• Noticed that the items it is unsure about often contained 1 labels that should be 0, so trying to "recover" these doesn't help.
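The following is a minimal sketch of the kind of label-extraction verbalizer described above; it is our illustration, not the exact code used in the case study. It catches the "does not meet the criteria for Entrapment" phrasing observed in model outputs, maps explicit yes/no answers to labels, and returns -1 when no label can be extracted.

# Sketch of a label-extraction verbalizer for the entrapment case study (illustrative only).
# Returns 1 for entrapment, 0 for not entrapment, and -1 when no label can be extracted.
import re

def extract_entrapment_label(raw_output: str) -> int:
    text = raw_output.strip().lower()
    # Catch the refusal-style negative phrasing seen in some model outputs.
    if "does not meet the criteria for entrapment" in text or "not entrapment" in text:
        return 0
    if re.match(r"no\b", text):
        return 0
    if re.match(r"yes\b", text) or "entrapment" in text:
        return 1
    return -1  # label could not be extracted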
A.7 Formally Defining a Prompt
"Prompt" is a widely used term, but uses and definitions differ widely across research. As a result, it
is difficult to create a formal, mathematical definition for a prompt. In this section, we outline some
formalisms for prompt engineering.
As a conditioning Mechanism. Qiao et al. (2022) present the following definition, which involves the
prompt T and a question Q as conditioning mechanisms on predicting the next token. Note that they
appear to use Brown et al. (2020)’s original definition of prompt, which refers to the non-question part of
the prompt (e.g. few-shot exemplars, instructions).
p(A \mid \mathcal{T}, Q) = \prod_{i=1}^{|A|} p_{\mathrm{LM}}(a_i \mid \mathcal{T}, Q, a_{1:i-1}) \qquad (A.1)
Here, the prompt T and question Q condition the pre-trained LLM pLM. The a1:i−1 are previously generated answer tokens, and A is a complete answer.
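As a sketch of how eq. (A.1) can be computed in practice (our illustration, with GPT-2 via Hugging Face transformers standing in for pLM), the answer's probability is the product of per-token probabilities, accumulated below in log space. It assumes the tokenization boundary between the context and the context-plus-answer is consistent, which holds only approximately for subword tokenizers.

# Sketch of eq. (A.1): score an answer A under a causal LM conditioned on prompt T and question Q.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def answer_log_prob(prompt_T: str, question_Q: str, answer_A: str) -> float:
    """Return log p(A | T, Q): the sum over answer tokens of log p_LM(a_i | T, Q, a_{1:i-1})."""
    context = prompt_T + question_Q
    context_len = tokenizer(context, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(context + answer_A, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits  # shape: [1, seq_len, vocab_size]
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # The token at position i is predicted by the logits at position i - 1.
    for i in range(context_len, full_ids.shape[1]):
        total += log_probs[0, i - 1, full_ids[0, i]].item()
    return total

print(answer_log_prob("Q: 2+2=\nA: 4\n\n", "Q: 3+3=\nA:", " 6"))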
Templating. The above formalization does not include the notion of maximizing a scoring or utility
function (e.g. accuracy on a dataset), which prompts are often designed to do. Additionally, prompt
engineers often seek to design prompt templates rather than prompts. Here, we reformulate eq. (A.1) to
include the prompt template:
p(A \mid \mathcal{T}(x^{*})) = \prod_{i=1}^{|A|} p_{\mathrm{LM}}(a_i \mid \mathcal{T}(x^{*}), a_{1:i-1}) \qquad (A.2)
We replace Q with x∗ ∈ Deval, an item from a dataset (e.g., evaluation data). Additionally, we replace Q on the right side with T (x∗). T (·) is a prompt template: a function that accepts some item as input and returns a prompt that is used to condition the model.
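Concretely, a prompt template is just a function from a dataset item to a string. The sketch below is our own illustrative example (a sentiment task), not a template taken from this paper.

# Sketch of a prompt template T(.): a function that maps a dataset item x* to a prompt.
# The task and wording are illustrative placeholders.
def prompt_template(item: dict) -> str:
    return f"Review: {item['text']}\nSentiment (positive or negative):"

# T(x*) applied to one evaluation item yields the prompt that conditions the model.
prompt = prompt_template({"text": "The film was slow but beautifully shot."})
print(prompt)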
Few-Shot Prompting. Often, an important part of the prompting process is the use of few-shot exemplars. Dtrain is training data (used to build the prompt) and Deval is a test set for evaluation. In the few-shot setting, the prompt template function T (·) also takes as input one or more training samples X = {(xi, yi)}n1 ⊂ Dtrain:
p(A \mid \mathcal{T}(\mathcal{X}, x^{*})) = \prod_{i=1}^{|A|} p_{\mathrm{LM}}(a_i \mid \mathcal{T}(\mathcal{X}, x^{*}), a_{1:i-1}) \qquad (A.5)
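Continuing the same illustrative sentiment task, a few-shot template T (X, x∗) additionally consumes exemplars drawn from Dtrain; this is again our own sketch, not code from the paper.

# Sketch of a few-shot template T(X, x*): exemplars (x_i, y_i) from D_train are prepended
# before the evaluation item. Task and wording are illustrative placeholders.
def few_shot_template(exemplars: list[tuple[str, str]], item: dict) -> str:
    shots = "".join(
        f"Review: {x}\nSentiment (positive or negative): {y}\n\n" for x, y in exemplars
    )
    return shots + f"Review: {item['text']}\nSentiment (positive or negative):"

prompt = few_shot_template(
    [("Great pacing and superb acting.", "positive"),
     ("A dull, lifeless remake.", "negative")],
    {"text": "The film was slow but beautifully shot."},
)
print(prompt)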
Optimization. As mentioned, it is often desirable to speak about improving prompts (prompt templates,
that is) with respect to a scoring function, usually defined with respect to a dataset.
In this definition, we are evaluating over a dataset D with respect to the scoring function S(·). S(·) evaluates the output A, generated by the LLM conditioned on the prompt T (xi). The yi are labeled outputs that can be used by S.
In some cases, there may not be any labeled data yi, and S(·) may be reference-free.
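The optimization objective itself did not survive extraction here; one plausible way to write it, consistent with the symbols above (our reconstruction, not necessarily the exact form in the original), is:

% Hedged reconstruction of the prompt-template optimization objective (not verbatim from the paper).
\mathcal{T}^{*} = \operatorname*{argmax}_{\mathcal{T}}
  \; \mathbb{E}_{(x_i, y_i) \sim \mathcal{D}}
  \left[ S\!\left( p_{\mathrm{LM}}\!\left( A \mid \mathcal{T}(x_i) \right),\, y_i \right) \right]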
Other considerations. These formalisms could be adapted to cater to CoT, retrieval systems, and more.
Here we describe a simple setup which is most descriptive of the prompting process without adding too
much complexity.
We also draw attention to the lesser-known concept of answer engineering. E(A) is a transformation function over the raw LLM output that allows it to be compared to the ground truth.
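For instance, E(A) might extract a final numeric answer from a chain-of-thought completion before scoring it. The sketch below is our own illustration of such a transformation, not a function defined in the paper.

# Sketch of an answer-engineering transformation E(A): map a raw completion to a comparable value.
import re

def extract_final_number(raw_output: str) -> str | None:
    """E(A): return the last number mentioned in the output, or None if there is none."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", raw_output)
    return numbers[-1] if numbers else None

print(extract_final_number("First, 3 + 3 = 6. So the answer is 6."))  # "6"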
A.8 In-Context Learning Definitions Disambiguation
Brown et al. (2020) seemingly offer two different definitions for ICL. All bolding in this section is our
own.
Recent work [RWC+19] attempts to do this via what we call “in-context learning”, using the text
input of a pretrained language model as a form of task specification: the model is conditioned
on a natural language instruction and/or a few demonstrations of the task and is then
expected to complete further instances of the task simply by predicting what comes next.
However, they later appear to define it as few-shot only:
For each task, we evaluate GPT-3 under 3 conditions: (a) “few-shot learning”, or in-context
learning where we allow as many demonstrations as will fit into the model’s context
window (typically 10 to 100), (b) “one-shot learning”, where we allow only one demonstration,
and (c) “zero-shot” learning, where no demonstrations are allowed and only an instruction in
natural language is given to the model.
However, they include this image that clarifies the matter:
Additionally, they explicitly state that ICL does not necessarily involve learning new tasks.
To avoid this confusion, we use the term “meta-learning” to capture the inner-loop / outer-loop
structure of the general method, and the term “in context-learning” to refer to the inner loop
of meta-learning. We further specialize the description to “zero-shot”, “one-shot”, or “few-
shot” depending on how many demonstrations are provided at inference time. These terms
are intended to remain agnostic on the question of whether the model learns new tasks
from scratch at inference time or simply recognizes patterns seen during training – this
is an important issue which we discuss later in the paper, but “meta-learning” is intended to
encompass both possibilities, and simply describes the inner-outer loop structure.
We use Brown et al. (2020)’s broad definition, though note that practitioners often use ICL to refer to
situations in which the model appears to be learning new tasks from the prompt. Our definition differs
from Dong et al. (2023)’s formal definition, even though it is also derived from (Brown et al., 2020).
A.9 Contributions
The following are the contributions made by the team members in various sections of this paper. Most
authors conducted reviews of other sections as well.
Advisors
• Denis Peskoff: Assisted with paper organization and final review.
• Alexander Hoyle: Provided guidance on writing, meta-analysis approach, and ran automated
baselines for case study.
• Shyamal Anadkat: Assisted with the overall review of the paper and the etymology and definitions.
• Marine Carpaut: Framed, reviewed and suggested papers for the multilingual section.
SCS Labeling
• Megan L. Rogers, Inna Goncearenco, Giuseppe Sarli, Igor Galynker: reviewed and gave advice
for this section.
• Ashay Srivastava: Team leader for the Agents section, reviewed papers for human review, worked
on the tool use agents section. Worked on the compilation of contributions.
• Hevander Da Costa: Contributed to the Benchmarking section and Meta Review datasets list,
reviewed literature on LLM code generation and prompting techniques. Added literature review
content to the Agents section.
• Feileen Li: Worked on the tool use agents section, assisted with the human paper review.
• Sevien Schulhoff: Team leader for the security section and contributed to the benchmarking section.
• Pranav Sandeep Dulepet: Contributed definitions for section 2 and worked on segmentation and
object detection in the multimodal section.
• HyoJung Han: Contributed to the Multimodal section, especially the speech+text part, and wrote
the audio prompting section.
• Hudson Tao: Authored sections on image, video, and 3D within multimodal, reviewed papers for
human review; maintained GitHub codebase, and built the project website.
• Amanda Liu: Authored taxonomic ontology sections, conducted background research for introduc-
tion and related work, developed code pipelines for meta-analysis graphs
• Sweta Agrawal: Team lead for evaluation section.
• Saurav Vidyadhara: Assisted with general review and revising taxonomy trees.
• Chau Pham: Assisted with meta review, including automated analysis of topics.
• Yinheng Li: Worked on section 2.2 text-based techniques, reviewed techniques, and contributed to
drafting figure 2.2.
• Saloni Gupta: Wrote tests for paper compilation, helped set up paper pipeline, and worked on the
code diagram and grammar for the paper.
• Aayush Gupta: Contributed to the Meta Analysis, compiling papers, and generating visualization
graphs.
• Michael Ilie: Co-Lead Author, managed codebase, ran experiments, collected data, and helped with
various sections including the PRISMA review figure and the SCS prompting case study.