Self Study - Prompt Engineering
Abstract—Prompt engineering is an increasingly important skill set needed to converse effectively with large language models (LLMs), such as ChatGPT. Prompts are instructions given to an LLM to enforce rules, automate processes, and ensure specific qualities (and quantities) of generated output. Prompts are also a form of programming that can customize the outputs and interactions with an LLM.

This paper describes a catalog of prompt engineering techniques presented in pattern form that have been applied to solve common problems when conversing with LLMs. Prompt patterns are a knowledge transfer method analogous to software patterns since they provide reusable solutions to common problems faced in a particular context, i.e., output generation and interaction when working with LLMs.

This paper provides the following contributions to research on prompt engineering that apply LLMs to automate software development tasks. First, it provides a framework for documenting patterns for structuring prompts to solve a range of problems so that they can be adapted to different domains. Second, it presents a catalog of patterns that have been applied successfully to improve the outputs of LLM conversations. Third, it explains how prompts can be built from multiple patterns and illustrates prompt patterns that benefit from combination with other prompt patterns.
1 The most direct translation of software pattern structure to prompt patterns is the naming, intent, motivation, and sample code. The structure and classification, however, although named similarly, require more adaptation.
• It is possible to phrase a prompt in many different ways. It is hard, however, to define a grammar that accurately and completely expresses all the nuanced ways that components of a prompt could be expressed in text or symbols.

• Prompts fundamentally convey ideas to a conversational LLM and are not simply the production of tokens for input. In particular, an idea built into a prompt pattern can be communicated in many ways and its expression should be at a higher level than the underlying tokens representing the idea.

• It is possible to program an LLM to introduce novel semantics for statements and words that create new ways for communicating an idea. In contrast, grammars may not easily represent ideas that can be expressed through completely new symbology or languages that the grammar designer was not aware of.

D. A Way Forward: Fundamental Contextual Statements

An open research question, therefore, is what approach is more effective than formal grammars for describing prompt pattern structure and ideas. We propose the concept of fundamental contextual statements, which are written descriptions of the important ideas to communicate in a prompt to an LLM. An idea can be rewritten and expressed in arbitrary ways based on user needs and experience. The key ideas to communicate, however, are presented to the user as a series of simple, but fundamental, statements.

One benefit of adopting and applying the fundamental contextual statements approach is that it is intentionally intuitive to users. In particular, we expect users will understand how to express and adapt the statements in a contextually appropriate way for their domain. Moreover, since the underlying ideas of the prompt are captured, these same ideas can be expressed by the user in alternate symbology or wording that has been introduced to the LLM using patterns, such as the Meta Language Creation pattern presented in Section III-B.

Our ultimate goal is to enhance prompt engineering by providing a framework for designing prompts that can be reused and/or adapted to other LLMs in the same way that software patterns can be implemented in different programming languages and platforms. For the purposes of this paper, however, all prompts were tested with ChatGPT [12] using the ChatGPT+ service. We use ChatGPT as the LLM for all examples presented in this paper due to its widespread availability and popularity. These examples were documented through a combination of exploring the corpus of community-posted prompts on the Internet and independent prompt creation from our use of ChatGPT to automate software development tasks.

III. A CATALOG OF PROMPT PATTERNS FOR CONVERSATIONAL LLMS

This section presents our catalog of prompt patterns that have been applied to solve common problems in the domain of conversational LLM interaction and output generation for automating software tasks. Each prompt pattern is accompanied by concrete implementation samples and examples with and without the prompt.

A. Summary of the Prompt Pattern Catalog

The classification of prompt patterns is an important consideration in documenting the patterns. Table I outlines the initial classifications for the catalog of prompt patterns we identified in our work with ChatGPT thus far.

TABLE I
CLASSIFYING PROMPT PATTERNS

Pattern Category       Prompt Pattern
Input Semantics        Meta Language Creation
Output Customization   Output Automater
                       Persona
                       Visualization Generator
                       Recipe
                       Template
Error Identification   Fact Check List
                       Reflection
Prompt Improvement     Question Refinement
                       Alternative Approaches
                       Cognitive Verifier
                       Refusal Breaker
Interaction            Flipped Interaction
                       Game Play
                       Infinite Generation
Context Control        Context Manager

As shown in this table, there are six categories of prompt patterns in our classification framework: Input Semantics, Output Customization, Error Identification, Prompt Improvement, Interaction, and Context Control, each of which is summarized below.

The Input Semantics category deals with how an LLM understands the input and how it translates the input into something it can use to generate output. This category includes the Meta Language Creation pattern, which focuses on creating a custom language for the LLM to understand. This pattern is useful when the default input language is ill-suited for expressing ideas the user wants to convey to the LLM.

The Output Customization category focuses on constraining or tailoring the types, formats, structure, or other properties of the output generated by the LLM. The prompt patterns in this category include the Output Automater, Persona, Visualization Generator, Recipe, and Template patterns. The Output Automater pattern allows the user to create scripts that can automate any
tasks the LLM output suggests the user should perform. The Persona pattern gives the LLM a persona or role to play when generating output. The Visualization Generator pattern allows the user to generate visualizations by producing textual outputs that can be fed to other tools, such as other AI-based image generators, like DALL-E [13]. The Recipe pattern allows the user to obtain a sequence of steps or actions to realize a stated end result, possibly with partially known information or constraints. The Template pattern allows the user to specify a template for the output, which the LLM fills in with content.

The Error Identification category focuses on identifying and resolving errors in the output generated by the LLM. This category includes the Fact Check List and Reflection patterns. The Fact Check List pattern requires the LLM to generate a list of facts the output depends on that should be fact-checked. The Reflection pattern requires the LLM to introspect on its output and identify any errors.

The Prompt Improvement category focuses on improving the quality of the input and output. This category includes the Question Refinement, Alternative Approaches, Cognitive Verifier, and Refusal Breaker patterns. The Question Refinement pattern ensures the LLM always suggests a better version of the user’s question. The Alternative Approaches pattern requires the LLM to suggest alternative ways of accomplishing a user-specified task. The Cognitive Verifier pattern instructs the LLM to automatically suggest a series of subquestions for the user to answer before combining the answers to the subquestions and producing an answer to the overall question. The Refusal Breaker pattern requires the LLM to automatically reword the user’s question when it refuses to produce an answer.

The Interaction category focuses on the interaction between the user and the LLM. This category includes the Flipped Interaction, Game Play, and Infinite Generation patterns. The Flipped Interaction pattern requires the LLM to ask questions rather than generate output. The Game Play pattern requires the LLM to generate output in the form of a game. The Infinite Generation pattern requires the LLM to generate output indefinitely without the user having to reenter the generator prompt each time.

Finally, the Context Control category focuses on controlling the contextual information in which the LLM operates. This category includes the Context Manager pattern, which allows the user to specify the context for the LLM’s output.

The remainder of this section describes each of these prompt patterns using the pattern form discussed in Section II-B.

B. The Meta Language Creation Pattern

1) Intent and Context: During a conversation with an LLM, the user would like to create the prompt via an alternate language, such as a textual short-hand notation for graphs, a description of states and state transitions for a state machine, a set of commands for prompt automation, etc. The intent of this pattern is to explain the semantics of this alternative language to the LLM so the user can write future prompts using this new language and its semantics.

2) Motivation: Many problems, structures, or other ideas communicated in a prompt may be more concisely, unambiguously, or clearly expressed in a language other than English (or whatever conventional human language is used to interact with an LLM). To produce output based on an alternative language, however, an LLM needs to understand the language’s semantics.

3) Structure and Key Ideas: Fundamental contextual statements:

When I say X, I mean Y (or would like you to do Y)

The key structure of this pattern involves explaining the meaning of one or more symbols, words, or statements to the LLM so it uses the provided semantics for the ensuing conversation. This description can take the form of a simple translation, such as “X” means “Y”. The description can also take more complex forms that define a series of commands and their semantics, such as “when I say X, I want you to do Y”. In this case, “X” is henceforth bound to the semantics of “take action Y”.

4) Example Implementation: The key to successfully using the Meta Language Creation pattern is developing an unambiguous notation or shorthand, such as the following:

“From now on, whenever I type two identifiers separated by a “→”, I am describing a graph. For example, “a → b” is describing a graph with nodes “a” and “b” and an edge between them. If I separate identifiers by “-[w:2, z:3]→”, I am adding properties of the edge, such as a weight or label.”

This example of the Meta Language Creation pattern establishes a standardized notation for describing graphs by defining a convention for representing nodes and edges. Whenever the author types two identifiers separated by a “→” symbol, it is an indication that a graph is being described. For example, if the author types “a → b”, this indicates that a graph is being defined with nodes “a” and “b”, and that there is an edge between them. This convention provides a clear and concise way to communicate the structure of a graph in written form.

Moreover, the prompt goes on to specify that additional information about the edges, such as a weight or label, can be provided using the syntax “-[w:2, z:3]→”. This notation allows for the specification of additional properties beyond the basic structure of the graph. The specified properties are associated with the edge between the two nodes and can provide important context for the interpretation of the graph. This standardized notation for describing graphs can make it easier to communicate graph structures and properties, which may not be easy or are very verbose to describe as a series of sentences.
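To make these semantics concrete, the following sketch shows one way a program could interpret the same shorthand. The parser is our illustration rather than part of the pattern, and the simplified property syntax it accepts is an assumption:

import re

# Illustrative parser for the graph shorthand defined by the prompt above:
# "a → b" is an edge; "a -[w:2, z:3]→ b" attaches properties to the edge.
EDGE = re.compile(r"(\w+)\s*(?:-\[(.*?)\])?→\s*(\w+)")

def parse_graph(text: str):
    """Return a list of (source, target, properties) edges."""
    edges = []
    for source, props, target in EDGE.findall(text):
        properties = {}
        if props:
            for pair in props.split(","):
                key, value = pair.split(":")
                properties[key.strip()] = value.strip()
        edges.append((source, target, properties))
    return edges

print(parse_graph("a → b"))             # [('a', 'b', {})]
print(parse_graph("a -[w:2, z:3]→ b"))  # [('a', 'b', {'w': '2', 'z': '3'})]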
5) Consequences: Although this pattern provides a powerful means to customize a user’s interaction with an LLM, it may create the potential for confusion within the LLM. As important as it is to clearly define the semantics of the language, it is also essential to ensure the language itself introduces no ambiguities that degrade the LLM’s performance or accuracy. For example, the prompt “whenever I separate two things by commas, it means that the first thing precedes the second thing” will likely create significant potential for ambiguity and create the potential for unexpected semantics if punctuation involving commas is used in the prompt.

To showcase the potential for confusion, ChatGPT will warn the user and potentially be unable to perform a specific mapping of a symbol or term to a new meaning. For example, if the following prompt is given to ChatGPT

“Whenever I say “a”, I am referring to Marie Antoinette.”

it will respond that this prompt creates too much confusion, as “a” is an indefinite article and too commonly used, as shown below:

“Referring to Marie Antoinette using “a” could potentially cause confusion, as “a” is an indefinite article commonly used to refer to a noun that has not been mentioned before. It may be more clear to use her name, “Marie Antoinette”, or a pronoun such as “she” or “her” to refer to her. Moreover, using context to establish who you are referring to can also help avoid confusion in conversation.”

In general, an LLM will perform better on data it was trained on. Therefore, when an existing notation is available—but may not be the dominant meaning—the Meta Language Creation pattern can provide context to scope the meaning of the symbol to improve the accuracy and utility of the output. For example, the symbol “→” conveys slightly different information in graphs and propositional logic. The statement “p → q, q → p” may therefore make sense, but only if the LLM is given the appropriate context through a Meta Language Creation prompt.

Although Meta Language Creation is a powerful pattern, it must be used carefully, particularly when describing concepts that may otherwise be hard to precisely or concisely describe. These types of prompts are thus best used in completely new conversation sessions. Using a single meta-language per conversation session may also be a best practice since it avoids the potential for conflicting or unexpected semantics being applied to the conversation over time.

C. The Output Automater Pattern

1) Intent and Context: The intent of this pattern is to have the LLM generate a script or other automation artifact that can automatically perform any steps it recommends taking as part of its output. The goal is to reduce the manual effort needed to implement any LLM output recommendations.

2) Motivation: The output of an LLM is often a sequence of steps for the user to follow. For example, when asking an LLM to generate a Python configuration script, it may suggest a number of files to modify and changes to apply to each file. However, having users continually perform the manual steps dictated by LLM output is tedious and error-prone.

3) Structure and Key Ideas: Fundamental contextual statements:

Whenever you produce an output that has at least one step to take and the following properties (alternatively, always do this)
Produce an executable artifact of type X that will automate these steps

The first part of the pattern identifies the situations under which automation should be generated. A simple approach is to state that the output includes at least two steps to take and that an automation artifact should be produced. The scoping is up to the user, but helps prevent producing output automation scripts in cases where running the output automation script will take more user effort than performing the original steps produced in the output. The scope can be limited to outputs requiring more than a certain number of steps.

The next part of this pattern provides a concrete statement of the type of output the LLM should output to perform the automation. For example, “produce a Python script” gives the LLM a concrete understanding to translate the general steps into equivalent steps in Python. The automation artifact should be concrete and must be something that the LLM associates with the action of “automating a sequence of steps”.

4) Example Implementation: A sample of this prompt pattern applied to code snippets generated by the ChatGPT LLM is shown below:

“From now on, whenever you generate code that spans more than one file, generate a Python script that can be run to automatically create the specified files or make changes to existing files to insert the generated code.”

This pattern is particularly effective in software engineering, as a common task for software engineers using LLMs is to copy/paste the outputs into multiple files. Some tools, such as Copilot, insert limited snippets directly into the section of code that the coder is working with, but tools, such as ChatGPT, do not provide these facilities. This automation trick is also effective at creating scripts for running commands on a terminal, automating cloud operations, or reorganizing files on a file system.

This pattern is a powerful complement for any system that can be computer controlled. The LLM can provide a set of steps that should be taken on the computer-controlled system and then the output can be translated into a script that allows the computer controlling the system to automatically take the steps. This is a direct pathway to allowing LLMs, such as ChatGPT, to integrate quality into—and to control—new computing systems that have a known scripting interface.
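To illustrate what “produce an executable artifact” can yield in practice, an automation artifact satisfying the prompt above might resemble the following minimal sketch; the file names and contents are hypothetical placeholders rather than actual ChatGPT output:

from pathlib import Path

# Sketch of the automation artifact the prompt above asks for: a script
# that writes a multi-file code sample to disk.
GENERATED_FILES = {
    "app/main.py": "print('hello from main')\n",
    "app/util.py": "def helper():\n    return 42\n",
}

for relative_path, code in GENERATED_FILES.items():
    path = Path(relative_path)
    path.parent.mkdir(parents=True, exist_ok=True)  # create missing directories
    path.write_text(code)                           # create or overwrite the file
    print(f"wrote {path}")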
5) Consequences: An important usage consideration of this pattern is that the automation artifact must be defined concretely. Without a concrete meaning for how to “automate” the steps, the LLM often states that it “can’t automate things” since that is beyond its capabilities. LLMs typically accept requests to produce
code, however, so the goal is to instruct the LLM to generate text/code, which can be executed to automate something. This subtle distinction in meaning is important to help an LLM disambiguate the prompt meaning.

One caveat of the Output Automater pattern is the LLM needs sufficient conversational context to generate an automation artifact that is functional in the target context, such as the file system of a project on a Mac vs. Windows computer. This pattern works best when the full context needed for the automation is contained within the conversation, e.g., when a software application is generated from scratch using the conversation and all actions on the local file system are performed using a sequence of generated automation artifacts rather than manual actions unknown to the LLM. Alternatively, self-contained sequences of steps work well, such as “how do I find the list of open ports on my Mac computer”.

In some cases, the LLM may produce a long output with multiple steps and not include an automation artifact. This omission may arise for various reasons, including exceeding the output length limitation the LLM supports. A simple workaround for this situation is to remind the LLM via a follow-on prompt, such as “But you didn’t automate it”, which provides the context that the automation artifact was omitted and should be generated.

At this point in the evolution of LLMs, the Output Automater pattern is best employed by users who can read and understand the generated automation artifact. LLMs can (and do) produce inaccuracies in their output, so blindly accepting and executing an automation artifact carries significant risk. Although this pattern may alleviate the user from performing certain manual steps, it does not alleviate their responsibility to understand the actions they undertake using the output. When users execute automation scripts, therefore, they assume responsibility for the outcomes.

D. The Flipped Interaction Pattern

1) Intent and Context: You want the LLM to ask questions to obtain the information it needs to perform some tasks. Rather than the user driving the conversation, therefore, you want the LLM to drive the conversation to focus it on achieving a specific goal. For example, you may want the LLM to give you a quick quiz or automatically ask questions until it has sufficient information to generate a deployment script for your application to a particular cloud environment.

2) Motivation: Rather than having the user drive a conversation, an LLM often has knowledge it can use to more accurately obtain information from the user. The goal of the Flipped Interaction pattern is to flip the interaction flow so the LLM asks the user questions to achieve some desired goal. The LLM can often better select the format, number, and content of the interactions to ensure that the goal is reached faster, more accurately, and/or by using knowledge the user may not (initially) possess.

3) Structure and Key Ideas: Fundamental contextual statements:

I would like you to ask me questions to achieve X
You should ask questions until this condition is met or to achieve this goal (alternatively, forever)
(Optional) ask me the questions one at a time, two at a time, etc.

A prompt for a flipped interaction should always specify the goal of the interaction. The first idea (i.e., you want the LLM to ask questions to achieve a goal) communicates this goal to the LLM. Equally important is that the questions should focus on a particular topic or outcome. By providing the goal, the LLM can understand what it is trying to accomplish through the interaction and tailor its questions accordingly. This “inversion of control” enables more focused and efficient interaction since the LLM will only ask questions that it deems relevant to achieving the specified goal.

The second idea provides the context for how long the interaction should occur. A flipped interaction can be terminated with a response like “stop asking questions”. It is often better, however, to scope the interaction to a reasonable length or only as far as is needed to reach the goal. This goal can be surprisingly open-ended and the LLM will continue to work towards the goal by asking questions, as is the case in the example of “until you have enough information to generate a Python script”.

By default, the LLM is likely to generate multiple questions per iteration. The third idea is completely optional, but can improve usability by limiting (or expanding) the number of questions that the LLM generates per cycle. If a precise number/format for the questioning is not specified, the questioning will be semi-random and may lead to one-at-a-time questions or ten-at-a-time questions. The prompt can thus be tailored to include the number of questions asked at a time, the order of the questions, and any other formatting/ordering considerations to facilitate user interaction.

4) Example Implementation: A sample prompt for a flipped interaction is shown below:

“From now on, I would like you to ask me questions to deploy a Python application to AWS. When you have enough information to deploy the application, create a Python script to automate the deployment.”

In general, the more specific the prompt regarding the constraints and information to collect, the better the outcome. For instance, the example prompt above could provide a menu of possible AWS services (such as Lambda, EC2, etc.) with which to deploy the application. In other cases, the LLM may be permitted to simply make appropriate choices on its own for things that the user doesn’t explicitly make decisions about. One limitation of this prompt is that, once other contextual information is provided regarding the task, it may require experimentation with the precise phrasing to get the LLM to ask the questions in
the appropriate number and flow to best suit the task, such as asking multiple questions at once versus one question at a time.
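The deployment script such an interaction converges on might follow a skeleton like the sketch below once the LLM has collected its answers; the elicited values and the echoed command are hypothetical stand-ins for a project-specific deployment step:

import subprocess

# Skeleton of the script a flipped interaction might converge on.
# The elicited answers and the echoed command are placeholders.
answers = {
    "service": "EC2",        # e.g., elicited by "Which AWS service?"
    "region": "us-east-1",   # e.g., elicited by "Which region?"
    "app_path": "./app",     # e.g., elicited by "Where is the application?"
}

def deploy(config):
    # A real script would invoke the relevant deployment tooling here;
    # echo merely demonstrates wiring the elicited answers into a command.
    command = ["echo", "deploying", config["app_path"],
               "to", config["service"], "in", config["region"]]
    subprocess.run(command, check=True)

deploy(answers)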
5) Consequences: One consideration when designing the prompt is how much to dictate to the LLM regarding what information to collect prior to termination. In the example above, the flipped interaction is open-ended and can vary significantly in the final generated artifact. This open-endedness makes the prompt generic and reusable, but may potentially ask additional questions that could be skipped if more context is given.

If specific requirements are known in advance, it is better to inject them into the prompt rather than hoping the LLM will obtain the needed information. Otherwise, the LLM will nondeterministically decide whether to prompt the user for the information or make an educated guess as to an appropriate value. For example, the user can state that they would like to deploy an application to Amazon AWS EC2, rather than simply state “the cloud” and require multiple interactions to narrow down the deployment target. The more precise the initial information, the better the LLM can use the limited questions that a user is likely willing to answer to obtain information to improve its output.

When developing prompts for flipped interactions, it is important to consider the level of user knowledge, engagement, and control. If the goal is to accomplish the goal with as little user interaction as possible (minimal control), that should be stated explicitly. Conversely, if the goal is to ensure the user is aware of all key decisions and confirms them (maximum engagement), that should also be stated explicitly. Likewise, if the user is expected to have minimal knowledge and should have the questions targeted at their level of expertise, this information should be engineered into the prompt.

E. The Persona Pattern

1) Intent and Context: In many cases, users would like LLM output to always take a certain point of view or perspective. For example, it may be useful to conduct a code review as if the LLM was a security expert. The intent of this pattern is to give the LLM a “persona” that helps it select what types of output to generate and what details to focus on.

2) Motivation: Users may not know what types of outputs or details are important for an LLM to focus on to achieve a given task. They may know, however, the role or type of person that they would normally ask to get help with these things. The Persona pattern enables the users to express what they need help with without knowing the exact details of the outputs they need.

3) Structure and Key Ideas: Fundamental contextual statements:

Act as persona X
Provide outputs that persona X would create

The first statement conveys the idea that the LLM needs to act as a specific persona and provide outputs that such a persona would. This persona can be expressed in a number of ways, ranging from a job description, title, fictional character, historical figure, etc. The persona should elicit a set of attributes associated with a well-known job title, type of person, etc.2

2 Be aware, however, that personas relating to living people or people considered harmful may be disregarded due to underlying LLM privacy and security rules.

The secondary idea—provide outputs that persona X would create—offers opportunities for customization. For example, a teacher might provide a large variety of different output types, ranging from assignments to reading lists to lectures. If a more specific scope to the type of output is known, the user can provide it in this statement.

4) Example Implementation: A sample implementation for code review is shown below:

“From now on, act as a security reviewer. Pay close attention to the security details of any code that we look at. Provide outputs that a security reviewer would regarding the code.”

In this example, the LLM is instructed to provide outputs that a “security reviewer” would. The prompt further sets the stage that code is going to be evaluated. Finally, the user refines the persona by scoping the persona further to outputs regarding the code.

Personas can also represent inanimate or non-human entities, such as a Linux terminal, a database, or an animal’s perspective. When using this pattern to represent these entities, it can be useful to also specify how you want the inputs delivered to the entity, such as “assume my input is what the owner is saying to the dog and your output is the sounds the dog is making”. An example prompt for a non-human entity that uses a “pretend to be” wording is shown below:

“You are going to pretend to be a Linux terminal for a computer that has been compromised by an attacker. When I type in a command, you are going to output the corresponding text that the Linux terminal would produce.”

This prompt is designed to simulate a computer that has been compromised by an attacker and is being controlled through a Linux terminal. The prompt specifies that the user will input commands into the terminal, and in response, the simulated terminal will output the corresponding text that would be produced by a real Linux terminal. This prompt is more prescriptive in the persona and asks the LLM to, not only be a
Linux terminal, but to further act as a computer that has been compromised by an attacker.

The persona causes ChatGPT to generate outputs to commands that have files and contents indicative of a computer that was hacked. The example illustrates how an LLM can bring its situational awareness to a persona, in this case, creating evidence of a cyberattack in the outputs it generates. This type of persona can be very effective for combining with the Game Play pattern, where you want the exact details of the output characteristics to be hidden from the user (e.g., don’t give away what the cyberattack did by describing it explicitly in the prompt).

5) Consequences: An interesting aspect of taking non-human personas is that the LLM may make interesting assumptions or “hallucinations” regarding the context. A widely circulated example on the Internet asks ChatGPT to act as a Linux terminal and produce the expected output that you would get if the user typed the same text into a terminal. Commands, such as ls -l, will generate a file listing for an imaginary UNIX file system, complete with files that can have cat file1.txt run on them.

In other examples, the LLM may prompt the user for more context, such as when ChatGPT is asked to act as a MySQL database and prompts for the structure of a table that the user is pretending to query. ChatGPT can then generate synthetic rows, such as generating imaginary rows for a “people” table with columns for “name” and “job”.

F. The Question Refinement Pattern

1) Intent and Context: This pattern engages the LLM in the prompt engineering process. The intent of this pattern is to ensure the conversational LLM always suggests potentially better or more refined questions the user could ask instead of their original question. Using this pattern, the LLM can aid the user in finding the right question to ask in order to arrive at an accurate answer. In addition, the LLM may help the user find the information or achieve their goal in fewer interactions than if the user employed trial and error prompting.

2) Motivation: If a user is asking a question, it is possible they are not an expert in the domain and may not know the best way to phrase the question or be aware of additional information helpful in phrasing the question. LLMs will often state limitations on the answer they are providing or request additional information to help them produce a more accurate answer. An LLM may also state assumptions it made in providing the answer. The motivation is that this additional information or set of assumptions could be used to generate a better prompt. Rather than requiring the user to digest and rephrase their prompt with the additional information, the LLM can directly refine the prompt to incorporate the additional information.

3) Structure and Key Ideas: Fundamental contextual statements:

Within scope X, suggest a better version of the question to use instead
(Optional) prompt me if I would like to use the better version instead

The first contextual statement in the prompt is asking the LLM to suggest a better version of a question within a specific scope. The scope is provided to ensure that not all questions are automatically reworded or that they are refined with a given goal. The second contextual statement is meant for automation and allows the user to automatically use the refined question without having to copy/paste or manually enter it. The engineering of this prompt can be further refined by combining it with the Reflection pattern, which allows the LLM to explain why it believes the refined question is an improvement.

4) Example Implementation:

“From now on, whenever I ask a question about a software artifact’s security, suggest a better version of the question to use that incorporates information specific to security risks in the language or framework that I am using instead and ask me if I would like to use your question instead.”

In the context of the example above, the LLM will use the Question Refinement pattern to improve security-related questions by asking for or using specific details about the software artifact and the language or framework used to build it. For instance, if a developer of a Python web application with FastAPI asks ChatGPT “How do I handle user authentication in my web application?”, the LLM will refine the question by taking into account that the web application is written in Python with FastAPI. The LLM then provides a revised question that is more specific to the language and framework, such as “What are the best practices for handling user authentication securely in a FastAPI web application to mitigate common security risks, such as cross-site scripting (XSS), cross-site request forgery (CSRF), and session hijacking?”

The additional detail in the revised question is likely to not only make the user aware of issues they need to consider, but lead to a better answer from the LLM. For software engineering tasks, this pattern could also incorporate information regarding potential bugs, modularity, or other code quality considerations. Another approach would be to automatically refine questions so the generated code cleanly separates concerns or minimizes use of external libraries, such as:

“Whenever I ask a question about how to write some code, suggest a better version of my question that asks how to write the code in a way that minimizes my dependencies on external libraries.”
5) Consequences: The Question Refinement pattern helps bridge the gap between the user’s knowledge and the LLM’s understanding, thereby yielding more efficient and accurate interactions. One risk of this pattern is its tendency to rapidly narrow the questioning by the user into a specific area that guides the user down a more limited path of inquiry than necessary. The consequence of this narrowing is that the user may miss important “bigger picture” information. One solution to this problem is to provide additional scope to the pattern prompt, such as “do not scope my questions to specific programming languages or frameworks.”

Another approach to overcoming arbitrary narrowing or limited targeting of the refined question is to combine the Question Refinement pattern with other patterns. In particular, this pattern can be combined with the Cognitive Verifier pattern so the LLM automatically produces a series of follow-up questions that can produce the refined question. For example, in the following prompt the Question Refinement and Cognitive Verifier patterns are applied to ensure better questions are posed to the LLM:

“From now on, whenever I ask a question, ask four additional questions that would help you produce a better version of my original question. Then, use my answers to suggest a better version of my original question.”

As with many patterns that allow an LLM to generate new questions using its knowledge, the LLM may introduce unfamiliar terms or concepts to the user into the question. One way to address this issue is to include a statement that the LLM should explain any unfamiliar terms it introduces into the question. A further enhancement of this idea is to combine the Question Refinement pattern with the Persona pattern so the LLM flags terms and generates definitions that assume a particular level of knowledge, such as this example:

“From now on, whenever I ask a question, ask four additional questions that would help you produce a better version of my original question. Then, use my answers to suggest a better version of my original question. After the follow-up questions, temporarily act as a user with no knowledge of AWS and define any terms that I need to know to accurately answer the questions.”

An LLM can always produce factual inaccuracies, just like a human. A risk of this pattern is that the inaccuracies are introduced into the refined question. This risk may be mitigated, however, by combining the Fact Check List pattern to enable the user to identify possible inaccuracies and the Reflection pattern to explain the reasoning behind the question refinement.

G. The Alternative Approaches Pattern

1) Intent and Context: The intent of the pattern is to ensure an LLM always offers alternative ways of accomplishing a task so a user does not pursue only the approaches with which they are familiar. The LLM can provide alternative approaches that always force the user to think about what they are doing and determine if that is the best approach to meet their goal. In addition, solving the task may inform the user or teach them about alternative concepts for subsequent follow-up.

2) Motivation: Humans often suffer from cognitive biases that lead them to choose a particular approach to solve a problem even when it is not the right or “best” approach. Moreover, humans may be unaware of alternative approaches to what they have used in the past. The motivation of the Alternative Approaches pattern is to ensure the user is aware of alternative approaches to select a better approach to solve a problem by dissolving their cognitive biases.

3) Structure and Key Ideas: Fundamental contextual statements:

Within scope X, if there are alternative ways to accomplish the same thing, list the best alternate approaches
(Optional) compare/contrast the pros and cons of each approach
(Optional) include the original way that I asked
(Optional) prompt me for which approach I would like to use

The first statement, “within scope X”, scopes the interaction to a particular goal, topic, or bounds on the questioning. The scope is the constraints that the user is placing on the alternative approaches. The scope could be “for implementation decisions” or “for the deployment of the application”. The scope ensures that any alternatives fit within the boundaries or constraints that the user must adhere to.

The second statement, “if there are alternative ways to accomplish the same thing, list the best alternate approaches”, instructs the LLM to suggest alternatives. As with other patterns, the specificity of the instructions can be increased or include domain-specific contextual information. For example, the statement could be scoped to “if there are alternative ways to accomplish the same thing with the software framework that I am using” to prevent the LLM from suggesting alternatives that are inherently non-viable because they would require too many changes to other parts of the application.

Since the user may not be aware of the alternative approaches, they also may not be aware of why one would choose one of the alternatives. The optional statement “compare/contrast the pros and cons of each approach” adds decision making criteria to the analysis. This statement ensures the LLM will provide the user with the necessary rationale for alternative approaches. The final statement, “prompt me for which approach I would like to use”, helps eliminate the user needing to manually copy/paste or enter in an alternative approach if one is selected.

4) Example Implementation: Example prompt implementation to generate, compare, and allow the user to select one or more alternative approaches:

“Whenever I ask you to deploy an application to a specific cloud service, if there are alternative services to accomplish the same thing with the same cloud service provider, list the best alternative services and then compare/contrast the pros and cons of each approach with respect to cost, availability, and maintenance effort and include the original way that I asked. Then ask me which approach I would like to proceed with.”
This implementation of the Alternative Approaches pattern is being specifically tailored for the context of software engineering and focuses on the deployment of applications to cloud services. The prompt is intended to intercept places where the developer may have made a cloud service selection without full awareness of alternative services that may be priced more competitively or easier to maintain. The prompt directs ChatGPT to list the best alternative services that can accomplish the same task with the same cloud service provider (providing constraints on the alternatives), and to compare and contrast the pros and cons of each approach.

5) Consequences: This pattern is effective in its generic form and can be applied to a range of tasks effectively. Refinements could include having a standardized catalog of acceptable alternatives in a specific domain from which the user must select. The Alternative Approaches pattern can also be used to incentivize users to select one of an approved set of approaches while informing them of the pros/cons of the approved options.

H. The Cognitive Verifier Pattern

1) Intent and Context: Research literature has documented that LLMs can often reason better if a question is subdivided into additional questions that provide answers combined into the overall answer to the original question [14]. The intent of the pattern is to force the LLM to always subdivide questions into additional questions that can be used to provide a better answer to the original question.

2) Motivation: The motivation of the Cognitive Verifier pattern is two-fold:

• Humans may initially ask questions that are too high-level to provide a concrete answer to without additional follow-up due to unfamiliarity with the domain, laziness in prompt entry, or being unsure about what the correct phrasing of the question should be.
• Research has demonstrated that LLMs can often perform better when using a question that is subdivided into individual questions.

3) Structure and Key Ideas: Fundamental contextual statements:

When you are asked a question, follow these rules
Generate a number of additional questions that would help more accurately answer the question
Combine the answers to the individual questions to produce the final answer to the overall question

The first statement is to generate a number of additional questions that would help more accurately answer the original question. This step instructs the LLM to consider the context of the question and to identify any information that may be missing or unclear. By generating additional questions, the LLM can help to ensure that the final answer is as complete and accurate as possible. This step also encourages critical thinking by the user and can help to uncover new insights or approaches that may not have been considered initially, which subsequently lead to better follow-on questions.

The second statement is to combine the answers to the individual questions to produce the final answer to the overall question. This step is designed to ensure that all of the information gathered from the individual questions is incorporated into the final answer. By combining the answers, the LLM can provide a more comprehensive and accurate response to the original question. This step also helps to ensure that all relevant information is taken into account and that the final answer is not based on any single answer.

4) Example Implementation:

“When I ask you a question, generate three additional questions that would help you give a more accurate answer. When I have answered the three questions, combine the answers to produce the final answers to my original question.”

This specific instance of the prompt pattern adds a refinement to the original pattern by specifying a set number of additional questions that the LLM should generate in response to a question. In this case, the prompt specifies that ChatGPT should generate three additional questions that would help to give a more accurate answer to the original question. The specific number can be based on the user’s experience and willingness to provide follow-up information. A refinement to the prompt can be to provide a context for the amount of knowledge that the LLM can assume the user has in the domain to guide the creation of the additional questions:

“When I ask you a question, generate three additional questions that would help you give a more accurate answer. Assume that I know little about the topic that we are discussing and please define any terms that are not general knowledge. When I have answered the three questions, combine the answers to produce the final answers to my original question.”

The refinement also specifies that the user may not have a strong understanding of the topic being discussed, which means that the LLM should define any terms that are not general knowledge. This helps to ensure that the follow-up questions are not only relevant and focused, but also accessible to the user, who may not be familiar with technical or domain-specific terms. By providing clear and concise definitions, the LLM can help to ensure that the follow-up questions are easy to understand and that the final answer is accessible to users with varying levels of knowledge and expertise.
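The same subdivide-then-combine flow can also be scripted against an LLM API. The sketch below stubs out the model call (ask_llm is a placeholder, not a real client library) to show the control flow that the two statements establish:

# Control-flow sketch of the Cognitive Verifier pattern driven through an
# API. ask_llm must be wired to whatever LLM client is available; nothing
# below depends on a specific vendor API.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("connect this stub to an actual LLM client")

def cognitive_verifier(question: str, num_subquestions: int = 3) -> str:
    # Statement 1: have the LLM subdivide the question.
    raw = ask_llm(
        f"Generate {num_subquestions} additional questions that would help "
        f"you answer this question more accurately: {question}"
    )
    subquestions = [line for line in raw.splitlines() if line.strip()]

    # Collect the user's answer to each subquestion.
    answers = [input(f"{q}\n> ") for q in subquestions]

    # Statement 2: combine the answers into the final answer.
    combined = "\n".join(f"Q: {q}\nA: {a}" for q, a in zip(subquestions, answers))
    return ask_llm(
        f"Using these answers:\n{combined}\n"
        f"produce the final answer to the original question: {question}"
    )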
5) Consequences: This pattern can dictate the exact number of questions to generate or leave this decision to the LLM. There are pros and cons to dictating the exact number. A pro is that specifying an exact number of questions can tightly scope the amount of additional information the user is forced to provide so it is within a range they are willing and able to contribute.
A con, however, is that given N questions there may be an invaluable N+1 question that will always be scoped out. Alternatively, the LLM can be provided a range or allowed to ask additional questions. Of course, by omitting a limit on the number of questions, the LLM may generate numerous additional questions that overwhelm the user.

I. The Fact Check List Pattern

1) Intent and Context: The intent of this pattern is to ensure that the LLM outputs a list of facts that are present in the output and form an important part of the statements in the output. This list of facts helps inform the user of the facts (or assumptions) the output is based on. The user can then perform appropriate due diligence on these facts/assumptions to validate the veracity of the output.

2) Motivation: A current weakness of LLMs (including ChatGPT) is that they often rapidly (and even enthusiastically!) generate convincing text that is factually incorrect. These errors can take a wide range of forms, ranging from fake statistics to invalid version numbers for software library dependencies. Due to the convincing nature of this generated text, however, users may not perform appropriate due diligence to determine its accuracy.

3) Structure and Key Ideas: Fundamental contextual statements:

Generate a set of facts that are contained in the output
The set of facts should be inserted in a specific point in the output
The set of facts should be the fundamental facts that could undermine the veracity of the output if any of them are incorrect

One point of variation in this pattern is where the facts are output. Given that the facts may be terms that the user is not familiar with, it is preferable if the list of facts comes after the output. This after-output presentation ordering allows the user to read and understand the statements before seeing what statements should be checked. The user may also determine additional facts prior to realizing the fact list at the end should be checked.

4) Example Implementation: A sample wording of the Fact Check List pattern is shown below:

“From now on, when you generate an answer, create a set of facts that the answer depends on that should be fact-checked and list this set of facts at the end of your output. Only include facts related to cybersecurity.”

The user may have expertise in some topics related to the question but not others. The fact check list can be tailored to topics that the user is not as experienced in or where there is the most risk. For example, in the prompt above, the user is scoping the fact check list to security topics, since these are likely very important from a risk perspective and may not be well-understood by the developer. Targeting the facts also reduces the cognitive burden on the user by potentially listing fewer items for investigation.

5) Consequences: The Fact Check List pattern should be employed whenever users are not experts in the domain for which they are generating output. For example, a software developer reviewing code could benefit from the pattern suggesting security considerations. In contrast, an expert on software architecture is likely to identify errors in statements about the software structure and need not see a fact check list for these outputs.

Errors are possible in all LLM outputs, so the Fact Check List is an effective pattern to combine with other patterns, such as by combining it with the Question Refinement pattern. A key aspect of this pattern is that users can inherently check it against the output. In particular, users can directly compare the fact check list to the output to verify that the facts listed in the fact check list actually appear in the output. Users can also identify any omissions from the list. Although the fact check list may also have errors, users often have sufficient knowledge and context to determine its completeness and accuracy relative to the output.

One caveat of the Fact Check List pattern is that it only applies when the output type is amenable to fact-checking. For example, the pattern works when asking ChatGPT to generate a Python “requirements.txt” file since it will list the versions of libraries as facts that should be checked, which is handy as the versions commonly have errors. However, ChatGPT will refuse to generate a fact check list for a code sample and indicate that this is something it cannot check, even though the code may have errors.
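When the facts are pinned library versions, part of this due diligence can itself be automated. The sketch below is our illustration, not part of the pattern: it checks each package==version line from a generated requirements.txt against PyPI’s public JSON API:

import json
import urllib.request

# Check pinned "package==version" facts from a generated requirements.txt
# against PyPI's JSON API (https://pypi.org/pypi/<package>/json).
def version_exists(package: str, version: str) -> bool:
    url = f"https://pypi.org/pypi/{package}/json"
    with urllib.request.urlopen(url) as response:
        releases = json.load(response)["releases"]
    return version in releases

for line in ["requests==2.31.0", "numpy==999.0.0"]:
    package, version = line.strip().split("==")
    verdict = "ok" if version_exists(package, version) else "NOT on PyPI: check this fact"
    print(f"{line}: {verdict}")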
J. The Template Pattern

1) Intent and Context: The intent of the pattern is to ensure an LLM’s output follows a precise template in terms of structure. For example, the user might need to generate a URL that inserts generated information into specific positions within the URL path. This pattern allows the user to instruct the LLM to produce its output in a format it would not ordinarily use for the specified type of content being generated.

2) Motivation: In some cases, output must be produced in a precise format that is application or use-case specific and not known to the LLM. Since the LLM is not aware of the template structure, it must be instructed on what the format is and where the different parts of its output should go. This could take the form of a sample data structure that is being generated, a series of form letters being filled in, etc.

3) Structure and Key Ideas: Fundamental contextual statements:

I am going to provide a template for your output
X is my placeholder for content
Try to fit the output into one or more of the placeholders that I list
Please preserve the formatting and overall template that 5) Consequences: One consequence of applying the
I provide Template pattern is that it filters the LLM’s output, which may
This is the template: PATTERN with PLACEHOLD- eliminate other outputs the LLM would have provided that
ERS might be useful to the user. In many cases, the LLM can
The first statement directs the LLM to follow a specific provide helpful descriptions of code, decision making, or other
template for its output. The template will be used to try and coerce details that this pattern will effectively eliminate from the
the LLMs responses into a structure that is consistent with the output. Users should therefore weight the pros/cons of filtering
user’s formatting needs. This pattern is needed when the target out this additional information.
format is not known to the LLM. If the LLM already has In addition, filtering can make it hard to combine this pattern
knowledge of the format, such as a specific file type, then the with other patterns from the Output Customization category.
template pattern can be skipped and the user can simply specify The Template pattern effectively constrains the output format,
the known format. However, there may be cases, such as so it may not be compatible with generation of certain other
generating Javascript Object Notation (JSON), where there is a types of output. For example, in the template provided above
large amount of variation in how the data could be represented for a URL, it would not be easy (or likely possible) to combine
within that format and the template can be used to ensure that the with the Recipe pattern, which needs to output a list of steps.
representation within the target format meets the user’s additional
constraints. K. The Infinite Generation Pattern
The second statement makes the LLM aware that the template 1) Intent and Context: The intent of this pattern is to
will contain a set of placeholders. Users will explain how the automatically generate a series of outputs (which may appear
output should be inserted into the template through the infinite) without having to reenter the generator prompt each
placeholders. The placeholders allow the user to semantically time. The goal is to limit how much text the user must type to
target where information should be inserted. Placeholders can use produce the next output, based on the assumption that the user
formats, like NAME, that allow the LLM to infer the semantic does not want to continually reintroduce the prompt. In some
meaning of to determine where output should be inserted (e.g., variations, the intent is to allow the user to keep an initial
insert the person’s name in the NAME placeholder). Moreover, prompt template, but add additional variation to it through
by using placeholders, the user can indicate what is not needed in additional inputs prior to each generated output.
the output – if a placeholder doesn’t exist for a component of the 2) Motivation: Many tasks require repetitive application
generated output, then that component can be omitted. Ideally, of the same prompt to multiple concepts. For example,
placeholders should use a format that is commonly employed in generating code for create, read, update, and delete (CRUD)
text that the LLM was trained on, such as all caps, enclosure in operations for a specific type of entity may require applying
brackets, etc. the same prompt to multiple types of entities. If the user is
The third statement attempts to constrain the LLM so that it forced to retype the prompt over and over, they may make
doesn’t arbitrarily rewrite the template or attempt to modify it mistakes. The Infinite Generation pattern allows the user to
so that all of the output components can be inserted. It should repetitively apply a prompt, either with or without further
be noted that this statement may not preclude additional text input, to automate the generation of multiple outputs using a
from being generated before or after. In practice, LLMs will predefined set of constraints.
typically follow the template, but it is harder to eliminate any 3) Structure and Key Ideas:
additional text being generated beyond the template without
experimentation with prompt wording. I would like you to generate output forever, X output(s)
4) Example Implementation: A sample template for generating URLs, where the output is put into specific places in the template, is shown below:

“I am going to provide a template for your output. Everything in all caps is a placeholder. Any time that you generate text, try to fit it into one of the placeholders that I list. Please preserve the formatting and overall template that I provide at https://myapi.com/NAME/profile/JOB”

A sample interaction after the prompt was provided is shown below:

User: “Generate a name and job title for a person”
ChatGPT: “https://myapi.com/Emily Parker/profile/Software Engineer”
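Because the template fixes the output structure, a client program can check that each generated line actually conforms to it. The following minimal Python sketch (an illustration, not part of the original pattern) converts the all-caps placeholders from the example above into named regular-expression groups and extracts the values:

import re

# Template from the example above, with all-caps placeholders.
TEMPLATE = "https://myapi.com/NAME/profile/JOB"

def template_to_regex(template: str) -> re.Pattern:
    """Turn each run of capital letters into a named capture group."""
    pattern, last = "", 0
    for m in re.finditer(r"[A-Z]{2,}", template):
        pattern += re.escape(template[last:m.start()])
        pattern += f"(?P<{m.group(0)}>[^/]+)"  # capture up to the next '/'
        last = m.end()
    return re.compile(pattern + re.escape(template[last:]))

output = "https://myapi.com/Emily Parker/profile/Software Engineer"
match = template_to_regex(TEMPLATE).fullmatch(output)
if match:
    print(match.group("NAME"), "|", match.group("JOB"))
else:
    print("Output did not follow the template; consider re-prompting.")

If the match fails, the application can re-prompt rather than ingest malformed output.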
Because the template dictates a fixed output structure, e.g., a URL, it would not be easy (or likely possible) to combine this pattern with the Recipe pattern, which needs to output a list of steps.
K. The Infinite Generation Pattern

1) Intent and Context: The intent of this pattern is to automatically generate a series of outputs (which may appear infinite) without having to reenter the generator prompt each time. The goal is to limit how much text the user must type to produce the next output, based on the assumption that the user does not want to continually reintroduce the prompt. In some variations, the intent is to allow the user to keep an initial prompt template, but add additional variation to it through additional inputs prior to each generated output.

2) Motivation: Many tasks require repetitive application of the same prompt to multiple concepts. For example, generating code for create, read, update, and delete (CRUD) operations for a specific type of entity may require applying the same prompt to multiple types of entities. If the user is forced to retype the prompt over and over, they may make mistakes. The Infinite Generation pattern allows the user to repetitively apply a prompt, either with or without further input, to automate the generation of multiple outputs using a predefined set of constraints.

3) Structure and Key Ideas:

I would like you to generate output forever, X output(s) at a time.
(Optional) here is how to use the input I provide between outputs.
(Optional) stop when I ask you to.

The first statement specifies that the user wants the LLM to generate output indefinitely, which effectively conveys the information that the same prompt is going to be reused over and over. By specifying the number of outputs that should be generated at a time (i.e., “X output(s) at a time”), the user can rate limit the generation, which can be particularly important if there is a risk that the output will exceed the length limitations of the LLM for a single output.

The second statement provides optional instructions for how to use the input provided by the user between outputs. By specifying how additional user inputs between prompts can be provided and leveraged, the user can create a prompting strategy that leverages user feedback in the context of the original prompt. The original prompt is still in the context of the generation, but each user input between generation steps is incorporated into the original prompt to refine the output using prescribed rules.

The third statement provides an optional way for the user to stop the output generation process. This step is not always needed, but can be useful in situations where there may be the potential for ambiguity regarding whether or not the user-provided input between outputs is meant as a refinement for the next generation or a command to stop. For example, if the user was generating data related to road signs, an explicit stop phrase could be created, since the user might want to enter a refinement like “stop” to indicate that a stop sign should be added to the output rather than to end the generation.
4) Example Implementation: The following is a sample infinite generation prompt for producing a series of URLs:

“From now on, I want you to generate a name and job until I say stop. I am going to provide a template for your output. Everything in all caps is a placeholder. Any time that you generate text, try to fit it into one of the placeholders that I list. Please preserve the formatting and overall template that I provide: https://myapi.com/NAME/profile/JOB”

This prompt combines the functionality of both the Infinite Generation pattern and the Template pattern. The user is requesting that the LLM continuously generate a name and job title until explicitly told to “stop”. The generated outputs are then formatted into the template provided, which includes placeholders for the name and job title. By using the Infinite Generation pattern, the user receives multiple outputs without having to continually re-enter the template. Likewise, the Template pattern is applied to provide a consistent format for the outputs.
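In an application, this pattern maps naturally onto a driver loop: the generator prompt is sent once, and only short continuation messages are sent afterwards. The Python sketch below illustrates this, assuming a generic ask_llm(messages) helper as a hypothetical stand-in for whatever chat-completion API is in use:

def ask_llm(messages):
    """Stand-in for a chat-completion API call (hypothetical)."""
    return "https://myapi.com/Emily Parker/profile/Software Engineer"  # canned demo reply

# The generator prompt is sent exactly once, at the start.
messages = [{"role": "user", "content":
    "From now on, I want you to generate a name and job until I say stop. "
    "I am going to provide a template for your output. Everything in all caps "
    "is a placeholder. Any time that you generate text, try to fit it into one "
    "of the placeholders that I list. Please preserve the formatting and "
    "overall template that I provide: https://myapi.com/NAME/profile/JOB"}]

while True:
    reply = ask_llm(messages)
    print(reply)
    messages.append({"role": "assistant", "content": reply})
    # Between outputs the user types only a short refinement, or "stop".
    refinement = input("refinement (or 'stop'): ").strip()
    if refinement.lower() == "stop":
        break
    messages.append({"role": "user", "content": refinement or "next"})

The loop keeps the original prompt in the conversation history, so each short refinement is interpreted in its context, exactly as the second statement of the pattern prescribes.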
5) Consequences: In conversational LLMs, the input to the model at each time step is the previous output and the new user input. Although the details of what is preserved and reintroduced in the next output cycle are model and implementation dependent, they are often limited in scope. The model is therefore constantly being fed the previous outputs and the prompt, which can result in the model losing track of the original prompt instructions over time if they exceed the scope of what it is being provided as input.

As additional outputs are generated, the context surrounding the prompt may fade, leading to the model deviating from the intended behavior. It is important to monitor the outputs produced by the model to (1) ensure it still adheres to the desired behavior and (2) provide corrective feedback if necessary. Another issue to consider is that the LLM may generate repetitive outputs, which may not be desired, since users find this repetition tedious and error-prone to process.
L. The Visualization Generator Pattern

1) Intent and Context: The intent of this pattern is to use text generation to create visualizations. Many concepts are easier to grasp in diagram or image format. The purpose of this pattern is to create a pathway for the tool to produce imagery that is associated with other outputs. This pattern allows the creation of visualizations by creating inputs for other well-known visualization tools that use text as their input, such as Graphviz Dot [15] or DALL-E [13]. This pattern can provide a more comprehensive and effective way of communicating information by combining the strengths of both the text generation and visualization tools.

2) Motivation: LLMs generally produce text and cannot produce imagery. For example, an LLM cannot draw a diagram to describe a graph. The Visualization Generator pattern overcomes this limitation by generating textual inputs in the correct format to plug into another tool that generates the correct diagram. The motivation behind this pattern is to enhance the output of the LLM and make it more visually appealing and easier to understand for users. By using text inputs to generate visualizations, users can quickly understand complex concepts and relationships that may be hard to grasp through text alone.

3) Structure and Key Ideas: Fundamental contextual statements:

Generate an X that I can provide to tool Y to visualize it

The goal of the contextual statements is to indicate to the LLM that the output it is going to produce, “X”, is going to be imagery. Since LLMs can't generate images, the “that I can provide to tool Y to visualize it” clause clarifies that the LLM is not expected to generate an image, but is instead expected to produce a description of imagery consumable by tool Y for production of the image.

Many tools may support multiple types of visualizations or formats, and thus the target tool itself may not be sufficient information to accurately produce what the user wants. The user may need to state the precise types of visualizations (e.g., bar chart, directed graph, UML class diagram) that should be produced. For example, Graphviz Dot can create diagrams for both UML class diagrams and directed graphs. Further, as will be discussed in the following example, it can be advantageous to specify a list of possible tools and formats and let the LLM select the appropriate target for visualization.

4) Example Implementation:

“Whenever I ask you to visualize something, please create either a Graphviz Dot file or DALL-E prompt that I can use to create the visualization. Choose the appropriate tools based on what needs to be visualized.”

This example of the pattern adds a qualification that the output type for the visualization can be either for Graphviz or DALL-E. The interesting aspect of this approach is that it allows the LLM to use its semantic understanding of the output format to automatically select the target tooling based on what will be displayed. In this case, Graphviz would be for visualizing graphs with a need for an exactly defined structure. DALL-E would be effective at visualizing realistic or artistic imagery that does not have an exactly defined structure. The LLM can select the tool based on the needs of the visualization and capabilities of each tool.
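For the Graphviz branch, the LLM's reply is ordinary Dot source that can be fed straight to the dot command-line tool. The sketch below is illustrative only: the Dot string stands in for a hypothetical LLM reply, and Graphviz must be installed for the dot command to exist.

import subprocess

# Hypothetical LLM reply to "visualize the dependencies between modules".
dot_source = """
digraph dependencies {
    rankdir=LR;
    "web" -> "auth";
    "web" -> "billing";
    "billing" -> "db";
    "auth" -> "db";
}
"""

# `dot -Tpng` reads Dot text on stdin and writes PNG bytes on stdout.
png = subprocess.run(
    ["dot", "-Tpng"],
    input=dot_source.encode(),
    capture_output=True,
    check=True,
).stdout

with open("dependencies.png", "wb") as f:
    f.write(png)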
5) Consequences: The pattern creates a target pipeline for the output to render a visualization. The pipeline may include AI generators, such as DALL-E, that can produce rich visualizations. The pattern allows the user to expand the expressive capabilities of the output into the visual domain.
M. The Game Play Pattern

1) Intent and Context: The intent of this pattern is to create a game around a given topic. The pattern can be combined with the Visualization Generator to add imagery to the game. The game is centered around a specific topic and the LLM will guide the game play. The pattern is particularly effective when the rules of the game are relatively limited in scope, but the content for the game is expected to be wide in scope. The user can specify a limited set of rules and then the LLM can automate generation of bodies of content for game play.

2) Motivation: You would like the LLM to generate scenarios or questions revolving around a specific topic and require users to apply problem solving or other skills to accomplish a task related to the scenario. Generating all the content for the game manually would be too time consuming, however, so you would like the LLM to apply its knowledge of the topic to guide the generation of content.

3) Structure and Key Ideas: Fundamental contextual statements:

Create a game for me around X
One or more fundamental rules of the game

The first statement instructs the LLM to create a game and provides the important scoping of the game to a topic area. One of the important capabilities of the pattern is that it allows the user to create games by describing the rules of the game, without having to determine the content of the game. The more specific the topic, typically the more novel and interesting the game play.

The second statement introduces the rules of the game to the LLM. It is important that the rules fit within the capabilities of the LLM. Textual games that rely on input and output text sequences work best. One of the key attributes of the pattern is that the input text can be rich and expressive, which can lead to interesting interfaces for the game. For example, the user might express actions in the game as scripts dictating a sequence of complex actions, such as “get a listing of all network activity and check it for anomalies”, which go beyond the scope of multiple choice or short answer inputs. Each rule should be provided as a separate statement regarding some aspect of the game.

4) Example Implementation: A sample cybersecurity game prompt is shown below:

“We are going to play a cybersecurity game. You are going to pretend to be a Linux terminal for a computer that has been compromised by an attacker. When I type in a command, you are going to output the corresponding text that the Linux terminal would produce. I am going to use commands to try and figure out how the system was compromised. The attack should have done one or more of the following things: (1) launched new processes, (2) changed files, (3) opened new ports to receive communication, (4) created new outbound connections, (5) changed passwords, (6) created new user accounts, or (7) read and stolen information. To start the game, print a scenario of what happened that led to my investigation and make the description have clues that I can use to get started.”

This cybersecurity game prompt combines a number of patterns, including Game Play and Persona. The Linux terminal persona that is chosen for the game has been discussed in many Internet posts [16]. This game further customizes the discussed Linux persona to act as a Linux terminal of a computer compromised by a cyberattack. ChatGPT responds to Linux commands with appropriate terminal outputs and synthesizes a fake file system that has clues to a cyberattack.

Sample output from this game with ChatGPT after typing in “ls -alt”, the Linux command to generate a list of files in the current directory, is shown below (permissions have been removed for formatting reasons):

total 56
43 user user 4096 Feb 16 14:30 .
1 user user 1178 Aug 15 2022 .bashrc
1 user user 220 Aug 15 2022 .bash_logout
3 user user 4096 Feb 16 14:25 .gnupg
1 user user 4041 Feb 15 17:52 README.md
2 user user 4096 Feb 1 11:35 project

When the user then typed the command “cat .bash_history”, to display the contents of the log file showing the commands run by the user, ChatGPT responded with:

cd /var/www/html
sudo rm -rf *
sudo wget https://example.com/backdoor.php
sudo chmod 777 backdoor.php
exit
ls -alt
ps -ef
netstat -tulnp

In the output above, ChatGPT generated a fake log file with realistic commands for deleting the data being served by the web server in /var/www/html and replacing the content with a backdoor into the system.
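Such a game can also be driven programmatically: seed the conversation with the game prompt once, then relay each investigator command to the model and print the simulated terminal output. A brief sketch, again assuming a hypothetical ask_llm(messages) helper rather than any particular API:

def ask_llm(messages):
    """Stand-in for a chat-completion API call (hypothetical)."""
    return "(simulated terminal output)"

GAME_PROMPT = ("We are going to play a cybersecurity game. You are going to "
               "pretend to be a Linux terminal for a computer that has been "
               "compromised by an attacker. ...")  # full prompt from above

messages = [{"role": "user", "content": GAME_PROMPT}]
scenario = ask_llm(messages)          # opening scenario with clues
messages.append({"role": "assistant", "content": scenario})
print(scenario)

while (command := input("$ ")) != "quit":
    messages.append({"role": "user", "content": command})
    reply = ask_llm(messages)         # the simulated terminal's response
    messages.append({"role": "assistant", "content": reply})
    print(reply)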
5) Consequences: This pattern can be combined effectively with the Persona, Infinite Generation, and Visualization Generator patterns. For example, the cybersecurity game uses the Persona pattern so the LLM can masquerade as a Linux terminal. For a network security game, the Visualization Generator could be employed to add the ability to visualize the network topology and traffic flows.
N. The Reflection Pattern

1) Intent and Context: The goal of this pattern is to ask the model to automatically explain the rationale behind given answers to the user. The pattern allows users to better assess the output's validity, as well as inform users how an LLM arrived at a particular answer. Reflection can clarify any points of confusion, uncover underlying assumptions, and reveal gaps in knowledge or understanding.

2) Motivation: LLMs can and do make mistakes. Moreover, users may not understand why an LLM is producing a particular output and how to adapt their prompt to solve a problem with the output. By asking the LLM to automatically explain the rationale behind its answers, users can gain a better understanding of how the model is processing the input, what assumptions it is making, and what data it is drawing on.

LLMs may sometimes provide incomplete, incorrect, or ambiguous answers. Reflection is an aid to help address these shortcomings and ensure the information provided by the LLM is as accurate as possible. A further benefit of the pattern is that it can help users debug their prompts and determine why they are not getting results that meet expectations. This pattern is particularly effective for the exploration of topics that can be confused with other topics or that may have nuanced interpretations, and where knowing the precise interpretation that the LLM used is important.

3) Structure and Key Ideas: Fundamental contextual statements:

Whenever you generate an answer
Explain the reasoning and assumptions behind your answer
(Optional) ...so that I can improve my question

The first statement is requesting that, after generating an answer, the LLM should explain the reasoning and assumptions behind the answer. This statement helps the user understand how the LLM arrived at the answer and can help build trust in the model's responses. The prompt includes the statement that the purpose of the explanation is for the user to refine their question. This additional statement gives the LLM the context it needs to better tailor its explanations to the specific purpose of aiding the user in producing follow-on questions.

4) Example Implementation: This example tailors the prompt specifically to the domain of providing answers related to code:

“When you provide an answer, please explain the reasoning and assumptions behind your selection of software frameworks. If possible, use specific examples or evidence with associated code samples to support your answer of why the framework is the best selection for the task. Moreover, please address any potential ambiguities or limitations in your answer, in order to provide a more complete and accurate response.”

The pattern is further customized to instruct the LLM that it should justify its selection of software frameworks, but not necessarily other aspects of the answer. In addition, the user dictates that code samples should be used to help explain the motivation for selecting the specific software framework.
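Because the Reflection statements are independent of the underlying question, they compose mechanically with any task prompt. A small helper is sketched below; the wording is adapted from the contextual statements above and is not a fixed API:

REFLECTION_CLAUSE = (
    "Whenever you generate an answer, explain the reasoning and "
    "assumptions behind your answer so that I can improve my question."
)

def with_reflection(question: str) -> str:
    """Prepend the Reflection pattern's contextual statements to a question."""
    return f"{REFLECTION_CLAUSE}\n\n{question}"

print(with_reflection("Which Python web framework should I use for a small REST API?"))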
5) Consequences: One consequence of the Reflection pattern is that it may not be effective for users who do not understand the topic area of the discussion. For example, a highly technical question by a non-technical user may result in a complex rationale for the answer that the user cannot fathom. As with other prompt patterns, there is a risk that the output may include errors or inaccurate assumptions in the explanation of the rationale that the user may not be able to spot. This pattern can be combined with the Fact Check List to help address this issue.

O. The Refusal Breaker Pattern

1) Intent and Context: The goal of this pattern is to ask an LLM to automatically help users rephrase a question when it refuses to give an answer. This pattern has the potential for misuse, however, e.g., to generate phishing emails or perform other actions that violate LLM policy filters. Caution should therefore be exercised when applying this pattern to ensure it is used ethically and responsibly. This pattern has been used successfully in some LLMs to overcome the underlying prompts used to program the LLM and prevent harmful output generation.

2) Motivation: LLMs may sometimes refuse to answer a question, either because they do not have the required knowledge or because the question is phrased in a way that they do not understand. This outcome may be frustrating for users who are looking for answers. In some situations, therefore, the Refusal Breaker pattern can help users find a way to either rephrase their question or ask a different question the LLM is better equipped to answer.

For example, you could ask for clarification on why the question was refused, as there may be a specific reason why the LLM could not provide an answer. Another approach is to ask a different question that may be within its knowledge cutoff or expertise. However, all these approaches require user intervention and knowledge of how to bypass the refusal.
3) Structure and Key Ideas: Fundamental contextual statements:

Whenever you can't answer a question
Explain why you can't answer the question
Provide one or more alternative wordings of the question that you could answer

The first statement sets the stage that this set of instructions should only be applied when a question can't be answered. The user does not want to trigger these instructions under normal circumstances. The second statement instructs the LLM to provide an explanation of why the question can't be answered. The explanation is likely to include assumptions, constraints, or misunderstandings that can be leveraged by the user to strategize about how to reword the question. Although just the explanation of why the question can't be answered could help a user overcome the refusal, the pattern adds an additional statement for the LLM to suggest rewordings that would be acceptable. The pattern combines both reasoning and suggested rewordings to improve the chances that the user will find a path around whatever roadblocks are preventing the question from being answered.
4) Example Implementation: The Refusal Breaker pattern can be implemented and applied with the following phrasing:

“Whenever you can't answer a question, explain why and provide one or more alternate wordings of the question that you can't answer so that I can improve my questions.”

A sample discourse after using the pattern is shown below:

User: What is the meaning of life?

ChatGPT: As an AI language model, I do not have personal beliefs or opinions. The meaning of life is a complex yet philosophical question that has been pondered by humans for centuries and can vary greatly from person to person. It may be more productive to rephrase the question in a way that can be answered by information and knowledge, such as “What are some philosophical perspectives on the meaning of life?” or “What are some common beliefs about the purpose of life?”
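Applications can also automate the first step of this pattern: detect a likely refusal and surface the model's suggested rewordings to the user. The heuristic below is a sketch only; the marker phrases are illustrative assumptions, not a reliable refusal detector:

import re

REFUSAL_MARKERS = ("i can't", "i cannot", "i am not able to",
                   "as an ai language model")  # illustrative markers only

def looks_like_refusal(reply: str) -> bool:
    lowered = reply.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def extract_rewordings(reply: str) -> list:
    """Pull quoted question suggestions, such as the rephrasings above."""
    return re.findall(r'[“"]([^”"]+\?)[”"]', reply)

reply = ('As an AI language model, I do not have personal beliefs... such as '
         '“What are some philosophical perspectives on the meaning of life?” or '
         '“What are some common beliefs about the purpose of life?”')  # truncated sample
if looks_like_refusal(reply):
    print(extract_rewordings(reply))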
5) Consequences: This pattern provides a potential path for misuse, so organizations, parents, or other stakeholders may need to restrict the usage of the LLM. The first step in going around guardrails on usage is to understand where the guardrails are. In future work, a complement of this pattern may be developed to hide the underlying prompt information and rationale from users to prevent discovery.

Although the rationale and alternate rewordings are generated, there is no guarantee that users will be able to overcome the refusal. The alternate questions that are generated may not be of interest to the user or helpful in answering the original question. The pattern mainly provides an aid in determining what the LLM can answer, but not a guarantee that it will answer a semantically equivalent variation of the original question.
P. The Context Manager Pattern

1) Intent and Context: The intent of this pattern is to enable users to specify or remove context for a conversation with an LLM. The goal is to focus the conversation on specific topics or exclude unrelated topics from consideration. This pattern gives users greater control over what statements the LLM considers or ignores when generating output.

2) Motivation: LLMs often struggle to interpret the intended context of the current question, or they generate irrelevant responses based on prior inputs or attention on the wrong statements. By focusing on explicit contextual statements or removing irrelevant statements, users can help the LLM better understand the question and generate more accurate responses. Users may introduce unrelated topics or reference information from earlier in the dialogue, which can disrupt the flow of the conversation. The Context Manager pattern aims to emphasize or remove specific aspects of the context to maintain relevance and coherence in the conversation.

3) Structure and Key Ideas: Fundamental contextual statements:

Within scope X
Please consider Y
Please ignore Z
(Optional) start over

Statements about what to consider or ignore should list key concepts, facts, instructions, etc. that should be included in or removed from the context. The more explicit the statements are, the more likely the LLM will take appropriate action. For example, if the user asks to ignore subjects related to a topic, yet some of those statements were discussed far back in the conversation, the LLM may not properly disregard the relevant information. The more explicit the list is, therefore, the better the inclusion/exclusion behavior will be.

4) Example Implementation: To specify context, consider using the following prompt:

“When analyzing the following pieces of code, only consider security aspects.”

Likewise, to remove context, consider using the following prompt:

“When analyzing the following pieces of code, do not consider formatting or naming conventions.”

Clarity and specificity are important when providing or removing context to/from an LLM so it can better understand the intended scope of the conversation and generate more relevant responses. In many situations, the user may want to completely start over and can employ this prompt to reset the LLM's context:

“Ignore everything that we have discussed. Start over.”

The “start over” idea helps produce a complete reset of the context.
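The pattern's statements are regular enough that a conversation wrapper can manage them for the user. The sketch below is an illustrative helper, not part of the pattern itself; it tracks consider/ignore lists and renders them as a context-setting preamble:

class ContextManager:
    """Render Context Manager pattern statements for a conversation."""

    def __init__(self, scope: str):
        self.scope = scope
        self.consider = []
        self.ignore = []

    def preamble(self) -> str:
        lines = [f"Within the scope of {self.scope}:"]
        for item in self.consider:
            lines.append(f"Please consider {item}.")
        for item in self.ignore:
            lines.append(f"Please ignore {item}.")
        return "\n".join(lines)

ctx = ContextManager("analyzing the following pieces of code")
ctx.consider.append("security aspects")
ctx.ignore.append("formatting or naming conventions")
print(ctx.preamble())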
5) Consequences: One consequence of this pattern is that it may inadvertently wipe out patterns applied to the conversation that the user is unaware of. For example, if an organization injects a series of helpful patterns into the start of a conversation, the user may not be aware of these patterns and remove them through a reset of the context. This reset could potentially eliminate helpful capabilities of the LLM, while not making it obvious that the user will lose this functionality. A potential solution to this problem is to include in the prompt a request to explain what topics/instructions will potentially be lost before proceeding.
Q. The Recipe Pattern

1) Intent and Context: This pattern provides constraints to ultimately output a sequence of steps given some partially provided “ingredients” that must be configured in a sequence of steps to achieve a stated goal. It combines the Template, Alternative Approaches, and Reflection patterns.

2) Motivation: Users often want an LLM to analyze a concrete sequence of steps or procedures to achieve a stated outcome. Users typically know—or have an idea of—what the end goal should look like and what “ingredients” belong in the prompt. However, they may not necessarily know the precise ordering of steps to achieve that end goal.

For example, a user may want a precise specification on how a piece of code should be implemented or automated, such as “create an Ansible playbook to ssh into a set of servers, copy text files from each server, spawn a monitoring process on each server, and then close the ssh connection to each server.” In other words, this pattern represents a generalization of the example of “given the ingredients in my fridge, provide dinner recipes.” A user may also want to specify a set number of alternative possibilities, such as “provide 3 different ways of deploying a web application to AWS using Docker containers and Ansible using step by step instructions”.

3) Structure and Key Ideas: Fundamental contextual statements:

I would like to achieve X
I know that I need to perform steps A, B, C
Provide a complete sequence of steps for me
Fill in any missing steps
Identify any unnecessary steps
The first statement, “I would like to achieve X”, focuses the LLM on the overall goal that the recipe needs to be built to achieve. The steps will be organized and completed to sequentially achieve the goal specified. The second statement provides the partial list of steps that the user would like to include in the overall recipe. These serve as intermediate waypoints for the path that the LLM is going to generate, or constraints on the structure of the recipe. The next statement in the pattern, “provide a complete sequence of steps for me”, indicates to the LLM that the goal is to provide a complete sequential ordering of steps. The “fill in any missing steps” statement helps ensure that the LLM will attempt to complete the recipe without further follow-up by making some choices on the user's behalf regarding missing steps, as opposed to just stating additional information that is needed. Finally, the last statement, “identify any unnecessary steps,” is useful in flagging inaccuracies in the user's original request so that the final recipe is efficient.

4) Example Implementation: An example usage of this pattern in the context of deploying a software application to the cloud is shown below:

“I am trying to deploy an application to the cloud. I know that I need to install the necessary dependencies on a virtual machine for my application. I know that I need to sign up for an AWS account. Please provide a complete sequence of steps. Please fill in any missing steps. Please identify any unnecessary steps.”

Depending on the use case and constraints, “installing necessary dependencies on a virtual machine” may be an unnecessary step. For example, if the application is already packaged in a Docker container, the container could be deployed directly to the AWS Fargate service, which does not require any management of the underlying virtual machines. The inclusion of the “identify unnecessary steps” language will cause the LLM to flag this issue and omit the steps from the final recipe.
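Since the recipe prompt is assembled from a goal plus known steps, it is straightforward to generate programmatically. The helper below is an illustrative sketch whose fixed sentences mirror the contextual statements above:

def recipe_prompt(goal: str, known_steps: list) -> str:
    """Assemble a Recipe pattern prompt from a goal and partial steps."""
    parts = [f"I am trying to {goal}."]
    parts += [f"I know that I need to {step}." for step in known_steps]
    parts += [
        "Please provide a complete sequence of steps.",
        "Please fill in any missing steps.",
        "Please identify any unnecessary steps.",
    ]
    return " ".join(parts)

print(recipe_prompt(
    "deploy an application to the cloud",
    ["install the necessary dependencies on a virtual machine",
     "sign up for an AWS account"],
))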
5) Consequences: One consequence of the Recipe pattern is that a user may not always have a well-specified description of what they would like to implement, construct, or design. Moreover, this pattern may introduce unwanted bias from the user's initially selected steps, so the LLM may try to find a solution that incorporates them, rather than flagging them as unneeded. For example, an LLM may try to find a solution that does install dependencies for a virtual machine, even if there are solutions that do not require that.
IV. RELATED WORK

Software patterns [10], [11] have been extensively studied and documented in prior work. Patterns are widely used in software engineering to express the intent of design structures in a way that is independent of implementation details. Patterns provide a mental picture of the goals that the pattern is trying to achieve and the forces that it is trying to resolve. A key advantage of patterns is their composability, allowing developers to build pattern sequences and pattern languages that can be used to address complex problems. Patterns have also been investigated in other domains, such as contract design for decentralized ledgers [17], [18].

The importance of good prompt design with LLMs, such as ChatGPT, is well understood [19]–[28]. Previous studies have examined the effect of prompt words on AI generative models. For example, Liu et al. [29] investigated how different prompt keywords affect image generation and different characteristics of images. Other work has explored using LLMs to generate visualizations [30]. Han et al. [31] researched strategies for designing prompts for classification tasks. Other research has looked at boolean prompt design for literature queries [32]. Yet other work has specifically examined prompts for software and fixing bugs [33].

Our work is complementary to prior work by providing a structure for documenting, discussing, and reasoning about prompts that can aid users in developing mental models for structuring prompts to solve common problems.

The quality of the answers produced by LLMs, particularly ChatGPT, has been assessed in a number of domains. For example, ChatGPT has been used to take the medical licensing exam with surprisingly good results [3]. The use of ChatGPT in law school has also been explored [34]. Other papers have looked at its mathematical reasoning abilities [35]. As more domains are explored, we expect that domain-specific pattern catalogs will be developed to share domain-specific problem solving prompt structures.
V. CONCLUDING REMARKS
This paper presented a framework for documenting and applying a catalog of prompt patterns for large language models (LLMs),
such as ChatGPT. These prompt patterns are analogous to software patterns and aim to provide reusable solutions to problems that
users face when interacting with LLMs to perform a wide range of tasks. The catalog of prompt patterns captured via this framework
(1) provides a structured way of discussing prompting solutions, (2) identifies patterns in prompts, rather than focusing on specific
prompt examples, and (3) classifies patterns so users are guided to more efficient and effective interactions with LLMs.
The following lessons learned were gleaned from our work on prompt patterns:
• Prompt patterns significantly enrich the capabilities that can be created in a conversational LLM. For example, prompts can
lead to the generation of cybersecurity games, complete with fictitious terminal commands that have been run by an attacker
stored in a .bash_history file. As shown in Section III, larger and more complex capabilities can be created by combining prompt patterns, such as combining the Game Play and Visualization Generator patterns.
• Documenting prompt patterns as a pattern catalog is useful, but insufficient. Our experience indicates that much more work
can be done in this area, both in terms of refining and expanding the prompt patterns presented in this paper, as well as in
exploring new and innovative ways of using LLMs. In particular, weaving the prompt patterns captured here as a pattern
catalog into a more expressive pattern language will help guide users of LLMs more effectively.
• LLM Capabilities will evolve over time, likely necessitating refinement of patterns. As LLM capabilities change, some patterns
may no longer be necessary, be obviated by different styles of interaction or conversation/session management approaches, or
require enhancement to function correctly. Continued work will be needed to document and catalog patterns that provide
reusable solutions.
• The prompt patterns are generalizable to many different domains. Although most of the patterns have been discussed in the
context of software development, these same patterns are applicable in arbitrary domains, ranging from infinite generation of
stories for entertainment to educational games to explorations of topics.
We hope that this paper inspires further research and development in this area that will help enhance prompt pattern design to
create new and unexpected capabilities for conversational LLMs.