Prompt Engineering
Learning Objectives
This course aims to teach you how to talk to conversational AI effectively. We aim to move you
from a naive approach to prompting to a more systematic one that can extract the maximum value
from what AI offers. By the time you finish this course, you should be able to leverage GPT
(Generative Pre-trained Transformer)-based AI systems to be more productive and effective
regardless of your industry.
For example, let’s ask “What is Prompt Engineering?” to one of these LLM-based AIs.
Prompt engineering refers to the process of designing and refining prompts or instructions given
to a language model, such as GPT-3.5, to generate desired outputs. It involves carefully crafting
the input text provided to the model to elicit the desired response or behavior.
Prompt engineering can be used to fine-tune the output of language models by specifying explicit
instructions, adding context, or using specific formatting techniques to guide the model’s
generation. By tweaking the prompts, developers and users can influence the output to be more
accurate, coherent, or aligned with a specific goal.
Effective prompt engineering requires an understanding of the underlying language model and its
capabilities. It involves experimenting with different prompts, iterating on them, and analyzing the
model’s responses to achieve the desired results. Prompt engineering is particularly important
when using large language models like GPT-3.5, as their outputs can sometimes be unpredictable
or require additional context to generate the desired output.
Overall, prompt engineering is a crucial aspect of working with language models to ensure they
produce accurate and meaningful responses that align with the user’s intentions or requirements.
This AI thing might be pretty worthwhile. In this course, you’ll learn everything you need to know
to optimize the results you get from such conversational AI, regardless of which specific LLM you
decide to use.
Over the last decade, learning to code has been all the rage, with Python being a popular
programming language. If you're not a programmer, you might wonder why we have to use such
programming languages instead of just plain English.
Historically, the main issue has been one of ambiguity. English is a somewhat ambiguous language
compared to programming languages that make you tell the computer precisely what it needs to
do. For example, if I asked a human to draw a circle on a sheet of paper, they would have no
problem doing so. Mind you, the circle will be flawed, but the person will typically draw it without
further question or hesitation.
The human involved in this exercise will not typically ask follow-up questions even though the
instructions are rather generic when you think about it. Where should the circle be placed on the
paper? Top left? Center? Bottom right? How big should the circumference be? How thick should
the lines be?
The human, with their human intelligence, would be able to make assumptions and just draw the
circle. Traditionally, the computer hasn't been able to make such an assumption, so it needs to be
instructed very specifically on what to do.
The following snippet of code draws a circle in Python using the turtle graphics module (do not worry if you don't understand this code; you don't need to):
import turtle

def draw_circle(radius, line_color='black', fill_color=None, line_thickness=1, position=(0, 0)):
    turtle.penup()
    turtle.goto(position)
    turtle.pendown()
    turtle.color(line_color)
    turtle.pensize(line_thickness)
    if fill_color:
        turtle.fillcolor(fill_color)
        turtle.begin_fill()
    turtle.circle(radius)
    if fill_color:
        turtle.end_fill()

# Main program
if __name__ == "__main__":
    # Set up the turtle speed (1 to 10)
    turtle.speed(1)
    # Get the circle parameters from the user
    radius = int(input("Enter the radius of the circle: "))
    line_color = input("Enter the line color (default is black): ") or 'black'
    fill_color = input("Enter the fill color (leave blank for no fill): ")
    line_thickness = int(input("Enter the line thickness: "))
    x = int(input("Enter the x-coordinate of the circle's position: "))
    y = int(input("Enter the y-coordinate of the circle's position: "))
    # Set the circle's position
    position = (x, y)
    # Draw the circle
    draw_circle(radius, line_color, fill_color, line_thickness, position)
    # Keep the window open until it's closed manually
    turtle.done()
You will notice how we had to tell the computer which graphic library to use, the radius in pixels,
the line thickness, whether we wanted the circle filled, with which color, and so on. The library
has some defaults (specified by the library's programmer) that could allow us to write a much
shorter program. However, the computer still had to be told by the library author about such
assumptions.
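To illustrate, here is a minimal sketch of how short the program becomes if we accept all of the library's defaults (the position, color, line thickness, and 100-pixel radius are arbitrary choices for the example):

import turtle

# Rely entirely on the library's defaults: the circle is drawn from the
# current position, in black, with a 1-pixel line and no fill.
turtle.circle(100)  # 100-pixel radius
turtle.done()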
If the user were to input "3px" or "five" in response to the question about line thickness, the program would crash, as it expects a number written in digits.
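A programmer who wants to avoid that crash has to anticipate it explicitly; here is a minimal sketch (the fallback value of 1 is an arbitrary assumption):

# int() raises a ValueError for inputs like "3px" or "five",
# which would normally stop the program.
try:
    line_thickness = int(input("Enter the line thickness: "))
except ValueError:
    line_thickness = 1  # fall back to a default instead of crashing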
The Python code is then "translated" into binary code that the machine understands and executes.
Prior to the advent of AI, English (or any other human language of choice) was simply too ambiguous and broad to tell the computer what to do for us.
A practical example
Imagine you have an extensive list of scores you need to sort out.
Ethan (93), Olivia (67), Benjamin (42), Emma (94), Noah (76), Sophia (89), Lucas (51), Mia
(62), Alexander (80), Isabella (95), Henry (34), Ava (71), James (87), Charlotte (58), Antonio
(96), Harper (78), Matthew (49), Amelia (92), Samuel (64), Evelyn (83), Alicia (100), Abigail
(88), David (39), Emily (70), Oliver (94), Elizabeth (60), William (81), Sofia (68), Michael (77),
Grace (93)
Now, you could open a spreadsheet program such as Excel, create two columns (one called Name, the other called Score), manually copy the names into the first column, and then do the same for the scores. Then, select the two columns and hit the sort function. But it's a tedious and slow process, despite spreadsheets being valuable tools.
A programmer might write a script to do this for them. The following is one of the quickest
implementations of such a program in Python.
(Again, this is for illustrative purposes only; you do not need to understand the actual code within.)
scores_names = [
    "Ethan (93)", "Olivia (67)", "Benjamin (42)", "Emma (94)", "Noah (76)",
    "Sophia (89)", "Lucas (51)", "Mia (62)", "Alexander (80)", "Isabella (95)",
    "Henry (34)", "Ava (71)", "James (87)", "Charlotte (58)", "Antonio (96)",
    "Harper (78)", "Matthew (49)", "Amelia (92)", "Samuel (64)", "Evelyn (83)",
    "Alicia (100)", "Abigail (88)", "David (39)", "Emily (70)", "Oliver (94)",
    "Elizabeth (60)", "William (81)", "Sofia (68)", "Michael (77)", "Grace (93)"
]

# Sort by the score inside the parentheses, highest first, and print the result
sorted_scores = sorted(scores_names, key=lambda s: int(s.split("(")[1].rstrip(")")), reverse=True)
print(", ".join(sorted_scores))
Alternatively, you can skip the code entirely and simply paste the list into a conversational AI, asking it to sort the names in descending order by score. When I did exactly that, I got:
Here's the list sorted in descending order by their accompanying values: Alicia (100), Antonio
(96), Isabella (95), Emma (94), Oliver (94), Ethan (93), Grace (93), Amelia (92), Sophia (89),
Abigail (88), James (87), Evelyn (83), William (81), Alexander (80), Harper (78), Michael (77),
Noah (76), Ava (71), Emily (70), Sofia (68), Olivia (67), Samuel (64), Mia (62), Elizabeth (60),
Charlotte (58), Lucas (51), Matthew (49), Benjamin (42), David (39), Henry (34).
Even if you are a programmer, this is a time saver. Not to mention that you can ask for help with
code, too. In fact, every Python script you see on this page was generated by an LLM and
minimally adapted by me.
This might be a trivial example, but the power of LLMs is evident and can hardly be overstated. For the first time, we can use plain English to instruct the computer and achieve far more than was possible before.
The trick is learning how to "program in English." That's Prompt Engineering, and by the end of
this course, you'll be very well-versed and capable of taking full advantage of AI.
Limitations
We are still not at the point of so-called Artificial General Intelligence (AGI), even with the latest, largest models. LLMs appear intelligent, but they do not yet display genuine intelligence in the broad sense of the term.
This is not to say that they are not helpful or even revolutionary, but you can't reasonably expect
them yet to learn independently, be self-aware, handle very complex reasoning, operate outside of
the framework the model was trained on, etc.
Sometimes the answer will simply be wrong, in part because the AI was trained on a large amount of human-generated data. Humans are fallible, and so is the data. So you'll run into the occasional wrong answer or display of bias, which you need to identify and manually address.
Furthermore, traditional programming offers more precision and control for many tasks. This is
especially true for large, complex systems where nuance and specifics are crucial.
In short, it's a work in progress, and there are limitations. However, AI is still an incredibly useful
tool that we can leverage today simply by telling the machine what we want in plain English (and,
to a lesser extent, Italian, French, Hindi, and other human languages).
Module 1 Quiz
Question 1
Can computers inherently understand ambiguous instructions like humans do?
o Yes
o No
Question 2
Why do we historically use programming languages instead of plain English to instruct
computers?
Question 3
What does the term 'zero-shot' prompting mean in the context of Large Language Models
(LLMs)?
Question 4
Naive or standard prompts typically use few-shot prompting.
o True
o False
Question 5
Is the data that AI models like LLMs are trained on always flawless?
By now, you should have a clear understanding of the opportunity and value the new wave of conversational AI offers us. As mentioned before, you will use English as a quasi-programming language to instruct the AI and extract as much value as possible from it.
To start, we will take the so-called standard or naive approach. We will query the AI in the simplest way possible, like most people would. The result will be OK, but we will see how a simple adjustment can radically improve it. Later, we will tackle more complex approaches that produce even better results.
Try the prompt "What is the best way to get fit?" for yourself. This is my result (yours might vary, even if you use the same model):
“The best way to get fit depends on individual preferences and goals. However, a combination of
regular exercise, a balanced diet, and sufficient rest is generally considered effective. It is
important to find activities you enjoy, such as running, swimming, or weightlifting, and
incorporate them into your routine. Additionally, focusing on whole, nutritious foods and staying
hydrated can support your fitness journey. Consulting with a healthcare professional or a certified
personal trainer can also provide personalized guidance.”
This is not bad by any means. It is solid advice. However, it is very generic. We can fetch more
specifics by asking follow-up questions (a good idea in general), but we can also get better results
from the beginning by adopting the Persona Pattern.
Let's ask the same question, but this time, we'll ask the AI to act as a fitness expert. The reply I received was a numbered list of much more detailed recommendations, which ended as follows:
5. Lastly, prioritize rest and recovery. Your body needs time to repair and rebuild after
exercise. Aim for at least 7-9 hours of quality sleep each night and allow for rest days in
your exercise routine.
Remember, consistency is key. It's important to find activities you enjoy and create a sustainable
fitness routine that you can stick to in the long term.”
The advice is still a little generic, but it is a vast improvement over the naive approach.
We can also augment the basic Persona pattern with specific instructions about the format or
specifics of the answers in the Prompt Instructions. For example, we could instruct the bot to use
bullet points for its replies or format responses in tables whenever possible.
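As an aside, if you ever use an LLM programmatically rather than through a chat interface, the Persona Pattern and the Prompt Instructions map naturally onto the system message. Here is a rough, hypothetical sketch using the OpenAI Python SDK; the model name and the exact wording of the instructions are assumptions, not part of this course's setup:

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model would do
    messages=[
        # The chat UI's "Prompt Instructions" correspond to the system message.
        {"role": "system", "content": "You'll act as a fitness expert who's current "
                                      "with the latest research data and provides very "
                                      "detailed step-by-step instructions, formatted "
                                      "as bullet points, in reply to my queries."},
        {"role": "user", "content": "What is the best way to get fit?"},
    ],
)
print(response.choices[0].message.content)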
Let us take it a step further and see how using a persona affects creating a workout plan.
In the Prompt Instructions, let us add the following:
“You'll act as a fitness expert who's current with the latest research data and provide very detailed
step-by-step instructions in reply to my queries.”
“Create a gym workout program to lose weight and build strength for an out-of-shape beginner.”
You can try it yourself; I got detailed instructions and a decent training plan!
The only fundamental limitation here is that the program we received is not custom-made for us. It is for a generic beginner who is out of shape. Is that person a man or a woman? Are they young or 82? Can they do any exercise, or are there mobility issues? All of that gets ignored.
The Persona Pattern also works with well-known figures. For example, imagine we have written a book about dog training and want article titles in the style of a famous marketer such as Seth Godin. First, let's try the naive prompt:
“Give me a list of 10 article titles to promote my new book about dog training.”
“Certainly! Here are some potential article titles to help promote your new book about dog
training:
1. “Unlock the Secrets to Effective Dog Training with Our Comprehensive New Guide”
2. “Avoid the Top 5 Mistakes Dog Owners Make: Insights from Our Latest Book”
3. “Harness the Power of Positive Reinforcement: Transform Your Dog’s Behavior with
Our New Techniques”
4. “Training Your Puppy Made Easy: Essential Tips from Our New Book for a Well-
Behaved Companion”
5. “Solving Common Dog Behavior Problems: Expert Insights from Our New Publication”
6. “The Science Behind Dog Training: Understanding Canine Psychology with Our Latest
Book”
7. “From Leash Pulling to Perfect Walking: Embark on a Training Journey with Our New
Guide”
8. “Socialization Strategies: Preparing Your Dog for the Real World with Tips from Our
New Book”
9. “The Benefits of Consistency in Dog Training: A Roadmap to Success from Our Latest
Release”
10. “Creating a Strong Bond with Your Dog Through Training Techniques Featured in Our
New Book”
Remember, the key to a compelling article title is to pique curiosity while also providing a clear
idea of the article’s content. Tailor these suggestions as needed to best fit the unique aspects of
your book.”
Here is the prompt using the style of the aforementioned famous marketer:
“Acting as marketing expert Seth Godin, give me a list of 10 article titles to promote my new
book about dog training.”
Certainly! Here are some potential article titles to help promote your new book about dog training:
1. “Unlocking the Secrets: Transform Your Dog’s Behavior with Expert Training
Techniques”
2. “The Ultimate Guide to Positive Reinforcement: Why It Works in Dog Training”
3. “From Puppy to Pro: Essential Training Tips for Every Dog Owner”
4. “Beyond Sit and Stay: Advanced Dog Training Strategies for Lifelong Success”
5. “Common Dog Training Myths Debunked: What Every Owner Should Know”
6. “Building a Strong Bond: How Effective Training Enhances Your Relationship with Your
Dog”
7. “The Science of Dog Training: Understanding Canine Behavior for Better Results”
8. “Training for Life: How to Prepare Your Dog for Every Situation”
9. “Top 10 Mistakes Dog Owners Make and How to Avoid Them”
10. “Your Dog’s Best Friend: How My New Book Can Change Your Training Approach
Forever”
These titles aim to engage potential readers by highlighting the benefits of your book, addressing
common concerns, and showcasing your expertise in dog training.
Much more compelling, and all it took was a few extra words to our prompt. That's the power of
Prompt Engineering!
Exercises:
1. Try a few queries of your own. Start with a generic, naive prompt and then improve it by
adding a persona.
2. Ask the LLM to give you two answers, leveraging two personas who hold different viewpoints. For example, enter the following in the Prompt Instructions:
“You'll act as a plant-based nutrition expert and a low-carb, high-fat (keto) nutrition expert and
provide one answer from each of them for every question I ask.”
Enter the Interview Pattern. Instead of letting the AI guess, we can instruct it (in the Prompt Instructions) to interview us, asking, one at a time, all the relevant questions it needs before answering. When I repeated the workout-program request with such an instruction in place, the AI replied:
“To create a personalized gym workout program that focuses on losing weight and building strength, I need to gather some information from you. Here are a few questions to help me tailor
strength, I need to gather some information from you. Here are a few questions to help me tailor
the program to your needs:
1. Current Fitness Level: What is your current fitness level? (Beginner, Intermediate,
Advanced)
2. Workout Frequency: How many days per week can you commit to working out?
3. Available Equipment: What equipment do you have access to? (e.g., free weights,
machines, resistance bands, etc.)
4. Time Commitment: How much time can you dedicate to each workout session?
5. Specific Goals: Do you have any specific strength goals or areas you want to focus on?
(e.g., upper body, lower body, core)
6. Injury History: Do you have any injuries or medical conditions that I should be aware
of?
7. Diet and Nutrition: Are you following any specific diet or nutritional plan?
8. Preferences: Do you prefer certain types of exercises or activities (e.g., cardio,
weightlifting, HIIT)?
Once I have this information, I can create a more effective and personalized workout program
for you!”
Let's see how else we can use the Interview pattern to our advantage. Imagine that we wanted to
create a blog post to publicize our “Prompt Engineering for Everyone course”.
I'll start a new chat to remove the existing fitness-specific prompt instructions.
At first, we'll try the naive approach:
“Craft a blog post to announce my new course, "Prompt Engineering for Everyone".”
Try it for yourself to see the result. It's a good post, but it's very generic, because the AI knows nothing about our specific course other than its title.
Next, let's start a new chat and add the following prompt instructions:
“You will act as an SEO and content marketing expert. You will interview me, asking me (one at a time) all the relevant questions necessary for you to generate the best possible answer to my queries.
Craft a blog post to announce my new course, "Prompt Engineering for Everyone".”
These follow-up questions allow us to create a more precise blog post that draws on information specific to my course. The quality of your input still matters: the more information you provide to the AI in answer to its questions, the better the blog post will be.
Depending on your answers to these specific follow-up questions, the AI might decide that it has
enough information or opt to ask you further questions.
Another alternative is to add something like the following to the Prompt Instructions:
“Ask me a series of questions, one by one, to gather all the information you need to give a proper
response.”
Any variation along those lines will do, so you do not need to remember the exact phrasing, and you can experiment with your own fine-tuned prompt instructions. The critical part is understanding the concept: by soliciting an interview, you get much more customized results back from the AI.
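For readers who experiment programmatically, the Interview Pattern is just an ordinary multi-turn conversation in which the model's questions and your answers are appended to the message history. Here is a rough, hypothetical sketch with the OpenAI Python SDK; the model name and the "done" stop word are assumptions for illustration:

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

messages = [
    {"role": "system", "content": "Ask me a series of questions, one by one, to gather "
                                  "all the information you need to give a proper response."},
    {"role": "user", "content": 'Craft a blog post to announce my new course, '
                                '"Prompt Engineering for Everyone".'},
]

# Answer the model's questions until you type "done"; by then it has the full context.
while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    question = reply.choices[0].message.content
    print(question)
    answer = input("> ")
    messages.append({"role": "assistant", "content": question})
    messages.append({"role": "user", "content": answer})
    if answer.strip().lower() == "done":
        break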
Tips
1. Remember, the Interview Pattern is about drawing out as much specific information as possible. Provide high-quality answers to the questions the LLM asks you in order to obtain better responses.
2. Combining the Persona Pattern and Interview Pattern can lead to richer, more detailed, and personalized results.
3. Don't hesitate to experiment with different instructions. Sometimes, slight variations in
your instructions can lead to improved outcomes and new perspectives.
Now, get started yourself! Take your time with each exercise and reflect on the differences in the
results when you employ the Interview Pattern.
Exercises
1. Combine the Persona Pattern and the Interview Pattern to improve the results for the questions above. Try this with a prompt of your own.
Question 1
The Naive Approach to prompting the AI often results in overly generic and broad responses.
o True
o False
Question 2
Is the Persona Pattern used to make the AI adopt a specific character or identity for more
customized results?
o Yes
o No
Question 3
Which of the following best describes the "Interview Pattern"?
Question 4
Why would one combine the Persona Pattern with the Interview Pattern?
Question 5
When requesting the AI to craft a blog post for the "Prompt Engineering for Everyone" course
using the Interview Pattern, what did the AI first ask for?
Let us try this in practice with a standard/naive prompt for the following reasoning problem: given an Italian menu with Bruschetta at $4.99, Prosciutto at $9.99, Pecorino at $12.99, Calamari at $13.99, and Carpaccio at $14.99, and assuming each dish is equally filling, how should we spend $30 to maximize satiety?
The correct answer is to order six servings of the cheapest item, since each item is equally satiating.
When we feed this to our LLM as a naive prompt, its reasoning goes off the rails. In my case, I got a line of reasoning that ultimately suggested the following:
“To maximize satiety with a budget of $30 from the given Italian menu, you should aim to select
the items that provide the most food for the cost. Here's one possible combination:
1. Bruschetta: $4.99
2. Prosciutto: $9.99
3. Pecorino: $12.99
Total: $4.99 + $9.99 + $12.99 = $27.97”
This is incorrect as the optimal choice would be to order six servings of Bruschetta at $4.99 each.
Chain-of-Thought to the rescue
Our prompt will include this question, an accurate answer giving context and reasoning to the
LLM, and then a different question that can be answered using the same line of reasoning.
Q: Given the following Italian menu:
- Prosciutto $9.99
- Pecorino $12.99
- Calamari $13.99
- Bruschetta $4.99
- Carpaccio $14.99
Assuming each food is equally filling, spend $30 for a group of people by maximizing satiety.
A: Since each food is equally filling, maximizing our budget will require us to favor items that
have a low cost-per-satiety ratio. The ratio of each food, sorted by lowest to highest, is as follows:
- Bruschetta: $4.99
- Prosciutto: $9.99
- Pecorino: $12.99
- Calamari: $13.99
- Carpaccio: $14.99
If we are allowed to have multiple orders of the same item, the simplest answer is an integer division of our $30 by the price of the item with the lowest cost-per-satiety ratio, which is Bruschetta. So we get $30 ÷ $4.99 (the cost of Bruschetta), which is 6. So the answer is an order of 6 Bruschette.
Then, in the same prompt, we pose a new question that can be solved with the same reasoning, this time about a pet store where we want to buy as many fish as possible with a $20 budget:
- Guppies: $3.99
- Goldfish: $1.99
- Betta: $5.99
- Angelfish: $8.99
“To maximize the total number of fish you can buy with a budget of $20, you'll want to focus on
the cheapest options first. Here's the breakdown of the prices:
Goldfish: $1.99
Guppies: $3.99
Betta: $5.99
Angelfish: $8.99
The most economical choice is to buy as many Goldfish as possible since they are the least
expensive.
Calculating how many Goldfish you can buy with $20: $20 ÷ $1.99 ≈ 10.05. Since you can only buy whole fish, you can purchase 10 Goldfish.
Calculating the total cost for 10 Goldfish: 10 × $1.99 = $19.90.
This leaves you with $0.10, which is not enough to purchase any additional fish.
Summary: Buy 10 Goldfish for a total cost of $19.90 to maximize the number of fish you can get
with your $20 budget.”
Perfect! Exactly what we wanted. The reply offers the correct answer and a proper explanation
modeled after our example.
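As a sanity check, the arithmetic the model reproduced here is easy to verify with a couple of lines of code (prices taken from the example above):

prices = {"Guppies": 3.99, "Goldfish": 1.99, "Betta": 5.99, "Angelfish": 8.99}
budget = 20.00

cheapest = min(prices, key=prices.get)      # Goldfish, the lowest-priced fish
count = int(budget // prices[cheapest])     # 20 // 1.99 -> 10 whole fish
spent = round(count * prices[cheapest], 2)  # 19.90
print(cheapest, count, spent, round(budget - spent, 2))  # Goldfish 10 19.9 0.1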
Sometimes, the test question we ask might be more complex than the one we give in our example,
and the AI, depending on the model we are using, might not always do a great job.
Consider this prompt (assume the same Prompt Instructions as before about the Italian menu):
“A store sells three box types: a Small Box holding 10L for $1.99, a Medium Box holding 22L for $4.99, and a Large Box holding 38L for $5.99. Maximize the volume you can buy with a $20 budget.”
The first reply I received isn't terrible, since it's mostly correct, but it misses the fact that we still have room left in our budget. So let's ask the model to make use of the leftover money:
To utilize the remaining budget effectively, let's recap the leftover amounts from the previous
calculations:
1. From 3 Large Boxes: You have $2.03 left over after spending $17.97.
2. From 4 Medium Boxes: You have $0.04 left over after spending $19.96.
3. From 10 Small Boxes: You have $0.10 left over after spending $19.90.
Since the goal is to maximize the use of your remaining budget, let's see how to spend the leftover
amounts:
For the Large Boxes ($2.03 remaining):
You cannot buy any additional Large or Medium Boxes since they exceed the budget.
You can buy 1 Small Box for $1.99, leaving you with $0.04 remaining.
New Total:
3 Large Boxes (114L) + 1 Small Box (10L) = 124 liters total volume.
For the Medium Boxes ($0.04 remaining):
You cannot buy anything additional with $0.04.
Total Remains:
4 Medium Boxes (88L) and no additional purchases.
For the Small Boxes ($0.10 remaining):
You cannot buy anything additional with $0.10.
Total Remains:
10 Small Boxes (100L) and no additional purchases.
Final Summary:
If you started with 3 Large Boxes: Buy 1 Small Box for a total of 124 liters.
If you started with 4 Medium Boxes: Total remains 88 liters.
If you started with 10 Small Boxes: Total remains 100 liters.
The best option to maximize volume with the leftover money is to buy 1 Small Box if you initially
went with the Large Boxes, bringing your total to 124 liters.
This is the right answer but it's not the whole answer. We can get that by simply asking for it:
Certainly! Here's the complete breakdown including prices for each option while maximizing
volume within a $20 budget:
Box Types and Prices:
Small Box: 10L for $1.99 (Cost per liter: $0.199)
Medium Box: 22L for $4.99 (Cost per liter: $0.227)
Large Box: 38L for $5.99 (Cost per liter: $0.158)
Volume Maximization Strategy:
The Large Box provides the best value in terms of volume per dollar spent. Here’s how many of
each box type you can buy without exceeding $20.
1. Buying Large Boxes:
Cost of one Large Box: $5.99
Maximum number of Large Boxes: $20 / $5.99 ≈ 3.34 (you can buy 3)
Total cost for 3 Large Boxes: 3 * $5.99 = $17.97
Total volume: 3 * 38L = 114L
Remaining budget: $20 - $17.97 = $2.03
2. Buying Medium Boxes:
Cost of one Medium Box: $4.99
Maximum number of Medium Boxes: $20 / $4.99 ≈ 4.01 (you can buy 4)
Total cost for 4 Medium Boxes: 4 * $4.99 = $19.96
Total volume: 4 * 22L = 88L
Remaining budget: $20 - $19.96 = $0.04
3. Buying Small Boxes:
Cost of one Small Box: $1.99
Maximum number of Small Boxes: $20 / $1.99 ≈ 10.05 (you can buy 10)
Total cost for 10 Small Boxes: 10 * $1.99 = $19.90
Total volume: 10 * 10L = 100L
Remaining budget: $20 - $19.90 = $0.10
Summary of Options:
3 Large Boxes:
o Total Cost: $17.97
o Total Volume: 114L
o Remaining Budget: $2.03
4 Medium Boxes:
o Total Cost: $19.96
o Total Volume: 88L
o Remaining Budget: $0.04
10 Small Boxes:
o Total Cost: $19.90
o Total Volume: 100L
o Remaining Budget: $0.10
Spending the Remaining Budget:
1. If you buy 3 Large Boxes:
Remaining Budget: $2.03
You can buy 1 Small Box for $1.99, leaving you with $0.04.
New Total:
o Total Cost: $17.97 + $1.99 = $19.96
o Total Volume: 114L + 10L = 124L.
2. If you buy 4 Medium Boxes:
Remaining Budget: $0.04 (not enough to buy anything).
Total Remains: 4 Medium Boxes (88L) and no additional purchases.
3. If you buy 10 Small Boxes:
Remaining Budget: $0.10 (not enough to buy anything).
Total Remains: 10 Small Boxes (100L) and no additional purchases.
Final Recommendation:
The best option to maximize volume is to buy 3 Large Boxes and 1 Small Box, resulting
in a total of 124 liters at a total cost of $19.96.
If you choose to go with 4 Medium Boxes or 10 Small Boxes, you will end up with 88
liters and 100 liters, respectively, without the ability to spend any remaining budget.
Thus, the optimal strategy is to purchase 3 Large Boxes for $17.97 and 1 Small Box for $1.99,
resulting in 124 liters of total volume.
It's often helpful to make suggestions, make small corrections, and ask to reformat the answer in a
way that is useful to us.
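When the numbers are this small, you can also double-check this kind of answer yourself with a short brute-force script; here is a minimal sketch using the prices and volumes from the example above:

from itertools import product

# (volume in liters, price in dollars) for each box type
boxes = {"Small": (10, 1.99), "Medium": (22, 4.99), "Large": (38, 5.99)}
budget = 20.00

best_volume, best_combo = 0, None
# Try every combination of box counts that could possibly fit in the budget
limits = [int(budget // price) for _, price in boxes.values()]
for counts in product(*(range(limit + 1) for limit in limits)):
    cost = sum(c * price for c, (_, price) in zip(counts, boxes.values()))
    volume = sum(c * vol for c, (vol, _) in zip(counts, boxes.values()))
    if cost <= budget and volume > best_volume:
        best_volume, best_combo = volume, dict(zip(boxes, counts))

print(best_volume, best_combo)  # expected: 124 {'Small': 1, 'Medium': 0, 'Large': 3}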
Zero-Shot Chain-of-Thought Prompting
Kojima et al. have found a simple, effective approach to improve the results we obtain from our
model. This approach is often labeled as a Zero-Shot CoT.
There are a few words that, when added to the prompt, are likely to solicit better answers since
they invite the AI to do step-by-step reasoning, much like a human would when trying to come to
a resolution.
The first is:
Let's think step by step.
And the second:
Let's work this out in a step by step way to be sure we have the right answer.
These words are helpful but not magic. So feel free to use them to improve your results, but they
are usually best used along with other techniques discussed in this course.
For example, tucking these words at the end of our original standard prompt still generates an incorrect answer with the GPT-3.5 model available at the time of writing.
In other words, our traditional Chain-of-Thought approach illustrated above, in which we use our
prompt to “teach” the model the desired outcome, is still superior.
Still, every time we get better results with just a few words, we should consider that option,
especially since this prompting technique solicits longer and more elaborate answers, which is
helpful for generating blog posts, essays, guides, etc.
Chain-of-Thought to explore subjects
Chain-of-Thought can be used in various ways to improve the chatbot's reasoning, especially in areas where it's weak. However, an even more valuable use is exploring subjects in more depth.
Instead of asking a generic question, we can break it down into the steps we want the model to consider, so that it develops a much richer and more valuable answer.
Let's see a practical example that doesn't necessarily use the question-and-answer template format.
For example, let's ask the model to tell us about Space Exploration:
Space exploration is the investigation and study of outer space through the use of space
technology, including spacecraft, satellites, and telescopes. It involves the exploration of
celestial bodies, such as planets, moons, asteroids, and comets, as well as the study of cosmic
phenomena like stars, galaxies, and the universe’s structure.
Robotic Exploration: This involves sending unmanned spacecraft to gather data and conduct
experiments. Notable examples include NASA's Mars rovers, the Voyager probes, and the
Hubble Space Telescope.
Human Exploration: This includes missions with astronauts aboard, such as the Apollo moon
landings, the International Space Station (ISS) missions, and future plans for crewed missions to
Mars.
Recent advancements in space exploration include the Artemis program, which aims to return
humans to the Moon by the mid-2020s, and the ongoing Mars missions, including the
Perseverance rover and the Ingenuity helicopter, which are exploring the Martian surface and
atmosphere.
Additionally, private companies like SpaceX and Blue Origin are increasingly involved in space
exploration, developing technologies for commercial space travel and potential colonization
efforts.
Now, let's try a Chain-of-Thought approach. Start a new chat and place the following in the
Prompt Instructions:
We are giving the model a lot of items to consider in its answer and tucking in Kojima et al.'s
suggestion of inviting the model to take a step-by-step approach for good measure.
Space exploration is the human endeavor to explore and understand the vast expanse of outer
space beyond Earth's atmosphere. It involves sending spacecraft, satellites, and, in some cases,
humans beyond our planet to gather information about celestial bodies, study the cosmos, and
develop technologies that can be used for various scientific, technological, and commercial
purposes. Space exploration has evolved over time, leading to numerous historical space
missions and significant achievements that have shaped our understanding of the universe and
our place in it.
The downside is that we had to come up with a list of aspects to cover, which requires some knowledge of the subject, or at least research into it, and that is time-consuming.
On the plus side, we didn't have to retrain the model, which would be truly time-consuming and
potentially expensive. Instead, the prompt split the "problem" into smaller steps worth exploring
and leveraged the existing model training to compute a reply.
Moreover, these starting points can lead to various interconnected thoughts and ideas from the
model. The beauty of a Chain-of-Thought is that it can branch out in different directions, exploring
numerous aspects and perspectives related to the initial topic.
We can ask specific questions at any time after the model has already shown us a broader
understanding of the topic.
Exercises
1. Chain-of-Thought Reasoning Practice: Give the AI a list of fruits and their prices.
Assuming each fruit offers the same health benefits, use the Chain-of-Thought approach
to spend $10 and maximize nutritional value. Expected Output: With a $10 budget,
purchase as many of the least expensive fruit as possible to maximize nutritional value.
2. Zero-Shot CoT Prompting Using the phrases "Let's think step by step" or "Let's work this
out in a step-by-step way," pose a question about an unfamiliar topic and see if the AI can
produce a more reasoned, detailed response. Expected Activity: Assess the quality and
depth of the AI's answer compared to a traditional prompt.
3. Deep Dive using Chain-of-Thought: Select a broad topic, for instance, "Ocean
Conservation." Then, list various facets of the topic, like plastic pollution, overfishing,
coral reef degradation, etc. Use the Chain-of-Thought approach to get the AI's
comprehensive overview of the topic. Expected Outcome: Evaluate the AI's response to
see if it covers the topic more extensively and insightfully than a regular prompt.
Module 3 Quizzes
Question 1
What were the two phrases mentioned that can be added to the prompt to solicit better answers by
doing step-by-step reasoning?
Question 2
Using the Chain-of-Thought approach always requires retraining the AI model.
o True
o False
Question 3
Does using the Zero-Shot CoT prompting technique always produce short answers?
o Yes
o No
Question 4
In the provided example about space exploration, why was the Chain-of-Thought approach used?
Question 5
What is one downside to using the Chain-of-Thought approach as mentioned in the content?
For complex tasks that require exploration or strategic lookahead, traditional or simple prompting techniques fall short. Yao et al. (2023) and Long (2023) recently proposed Tree of Thoughts (ToT), a framework that generalizes over Chain-of-Thought prompting and encourages exploration over thoughts that serve as intermediate steps for general problem solving with language models.
ToT maintains a tree of thoughts, where thoughts represent coherent language sequences that serve
as intermediate steps toward solving a problem. This approach enables an LM to self-evaluate the
progress through intermediate thoughts made towards solving a problem through a deliberate
reasoning process. The LM's ability to generate and evaluate thoughts is then combined with search
algorithms (e.g., breadth-first search and depth-first search) to enable systematic exploration of
thoughts with lookahead and backtracking.
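To make the idea of "search over thoughts" concrete, here is a rough, non-authoritative sketch of a breadth-first Tree-of-Thoughts loop; generate_thoughts and score_thought are placeholders for calls to an LLM and are assumptions, not part of the published framework:

def tree_of_thoughts(problem, generate_thoughts, score_thought,
                     breadth=3, depth=3, keep=2):
    """Breadth-first search over partial chains of 'thoughts'.

    generate_thoughts(problem, chain) -> list of candidate next thoughts
    score_thought(problem, chain)     -> numeric estimate of how promising
                                         a partial chain of thoughts looks
    """
    frontier = [[]]  # start from an empty chain of thoughts
    for _ in range(depth):
        candidates = []
        for chain in frontier:
            for thought in generate_thoughts(problem, chain)[:breadth]:
                candidates.append(chain + [thought])
        # Evaluate the partial chains and keep only the most promising ones
        # (this pruning step provides the lookahead and backtracking).
        candidates.sort(key=lambda c: score_thought(problem, c), reverse=True)
        frontier = candidates[:keep]
    return frontier[0] if frontier else []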
In a chat interface, a popular way to emulate this idea, often attributed to Dave Hulbert, is to ask the model to simulate several experts who each write down one step of their reasoning, share it with the group, and then build on each other's steps. With such prompt instructions in place, I asked:
Acting as a marketing expert, define a marketing plan for my online store, which sells shoes.
Expert 2: Next, we should conduct a competitive analysis. This involves researching other online
shoe stores to understand their marketing strategies, pricing, product offerings, and customer
engagement techniques. This information will help us identify gaps in the market and opportunities
for differentiation.
Expert 3: The third step is to establish clear marketing goals and objectives. These should be
specific, measurable, achievable, relevant, and time-bound (SMART). For example, goals could
include increasing website traffic by 30% over the next quarter or achieving a 15% conversion
rate on your online store.
Expert 1: Now that we've identified the target audience, we can develop buyer personas. These
are detailed profiles of your ideal customers that include their shopping behaviors, preferences,
and pain points. This will guide our messaging and marketing channels.
Expert 2: After the competitive analysis, we should outline the unique selling proposition (USP)
of your online store. What sets your shoe store apart from competitors? This could be unique
styles, sustainable materials, or exceptional customer service. Clearly defining your USP will help
in crafting your marketing messages.
Expert 3: Once we have our marketing goals, we can decide on the marketing channels to use.
This could include social media platforms, email marketing, influencer partnerships, and search
engine optimization (SEO). Each channel should align with where your target audience spends
their time online.
Expert 1: Next, we should create a content strategy that aligns with our buyer personas. This
includes deciding on the type of content to produce, such as blog posts about shoe care, social
media posts showcasing new arrivals, or videos demonstrating how to style different shoes.
Expert 2: Following the content strategy, we need to establish a budget for our marketing
activities. This includes costs for advertising, content creation, influencer partnerships, and any
tools or software needed for marketing automation and analytics.
Expert 3: Finally, we should implement a system for tracking and measuring the effectiveness of
our marketing efforts. This could involve using analytics tools to monitor website traffic,
conversion rates, and customer engagement metrics. Regularly reviewing this data will help us
adjust our strategy as needed.
By following these steps, we can create a comprehensive marketing plan tailored to your online
shoe store.
(Note: In this hypothetical scenario, each "expert" provided their input, but in reality, marketing
planning is a collaborative effort that requires input from various specialists to create a well-
rounded strategy.)
Not only is this a great initial answer, but as you can see, the three experts who appear to specialize
in different aspects of online marketing (SEO, E-Commerce, etc.) build on top of each other's
previous steps and thoughts.
Moreover, having established this framework, we can now ask follow-up questions: specific ones we might have, or simply solicit further input from the AI. For example, with a simple:
Next steps
the experts continued:
Expert 1: The next step is to implement the marketing strategies we've developed. This involves
launching campaigns across the chosen marketing channels, such as social media ads, email
newsletters, and content marketing initiatives. It's important to maintain consistency in branding
and messaging across all platforms to reinforce your brand identity.
Expert 3: Finally, we need to establish a feedback loop. This means regularly soliciting feedback
from customers through surveys or direct communication to understand their experiences and
preferences. This data will help refine your offerings and marketing strategies over time, ensuring
they remain relevant and effective.
By following these next steps, you'll be able to effectively execute and refine your marketing plan
for your online shoe store.
Regularly assessing the strategies and offerings of our competitors helps us stay ahead of the curve.
By identifying gaps in the market or areas where competitors may be falling short, we can innovate
and differentiate our store. Whether it's through new product features, unique marketing angles, or
improved customer service, innovation keeps us relevant and adaptable.
(Note: Each expert's input builds upon the previous steps and adds further depth to the marketing
plan. These steps are intended to create a comprehensive and adaptable strategy for the online shoe
store.)
The vast potential of this approach when exploring topics and seeking advice cannot be overstated.
Even if the individual steps are generic, we can always ask the AI to assist or give us specifics for
a particular step.
Additional Thoughts
Specificity in Instructions: In a real-world scenario, while the generic steps are valuable, for more
actionable results, you can be more specific in your instructions. For instance, you might request
each "expert" to provide two actionable tactics or tools per step they suggest. And you can, of
course, request specific experts or expertise.
Integration with Real Data: If you can supply the LLM with specific data about your business (like
target audience demographics, current website analytics, or specific marketing goals), it can
potentially refine its responses even further. Just be mindful of potential confidential information.
Segmented Inquiry: As briefly mentioned before, once you have a broad strategy laid out, you can
dive deeper into each individual step, asking the experts to further expand on their suggestions, or
even query different experts about the same step to gather multiple perspectives.
Exercises
1. Using the Tree-of-Thought prompting approach, leverage the LLM to answer a different
type of question you might have.
2. Try to devise your own variation of Dave's prompt instructions. Does it make the output better or worse? You might stumble upon a winning prompt that you can use in various scenarios.
Controlling Verbosity and the Nova System
Navigating the balance between the level of detail in responses received from the model and the
expectations we hold can sometimes yield unexpected results. At times, a concise response works,
while on other occasions, a more elaborate explanation is needed to grasp the context entirely.
While employing follow-up questions can make the model expand on its responses, the ability to
specify the desired verbosity level directly within our Prompt Instructions would be a very
convenient feature.
Controlling Verbosity
Online users have begun to share their inventive prompt ideas and strategies, showcasing considerable ingenuity.
For example, the following prompt shared on Reddit not only generates excellent answers but also
empowers control over the response’s length:
You are an autoregressive language model that has been fine-tuned with instruction-tuning and
RLHF. You carefully provide accurate, factual, thoughtful, nuanced answers, and are brilliant at
reasoning. If you think there might not be a correct answer, you say so. Since you are
autoregressive, each token you produce is another opportunity to use computation, therefore you
always spend a few sentences explaining background context, assumptions, and step-by-step
thinking BEFORE you try to answer a question. Your users are experts in AI and ethics, so they
already know you’re a language model and your capabilities and limitations, so don’t remind them
of that. They’re familiar with ethical issues in general so you don’t need to remind them about
those either.
Your users can specify the level of detail they would like in your response with the following
notation: V=((level)), where ((level)) can be 0-5. Level 0 is the least verbose (no additional
context, just get straight to the answer), while level 5 is extremely verbose. Your default level is 3.
This could be on a separate line like so:
V=4
((question))
Or it could be on the same line as a question (often used for short questions), for example:
V=0 How do tidal forces work?
Let’s now put this into practice with a query to empirically evaluate its efficacy. Incorporate it
within the Prompt Instructions and proceed to pose the ensuing question:
V=0 Why is the sky blue?
The answer will be accurate but short and straight to the point.
Now, let’s experiment with:
V=5 Why is the sky blue?
The resulting answer should be significantly more detailed.
Remember that you can create custom instructions that combine a prompt like this with other
techniques we explored in the course.
The Nova System (or Nova Process) is an intelligent way to solve problems using a team of virtual
experts powered by an LLM such as GPT-4o.
What’s Special About Nova?
Nova uses ChatGPT to create a “team” of experts that discuss and find solutions to tricky
problems.
It has a Discussion Continuity Expert (DCE) who keeps the conversation on track.
There’s also a Critical Analysis Expert (CAE) who checks solutions to make sure they’re
good and safe.
How Does Nova Work?
1. Problem Unpacking: It breaks the problem down to understand it fully.
2. Expertise Assembly: It picks the right experts for the job and gets their initial thoughts.
3. Collaborative Ideation: The experts brainstorm together. The DCE leads the talk, and the
CAE ensures the ideas are good and safe.
Key Roles:
DCE: Keeps the discussion clear and on track.
CAE: Reviews the ideas and discusses any issues using facts and evidence.
Feel free to check out their GitHub repository for further information about the system and how to ask follow-up questions that keep to the same pattern. By way of example, here is what their prompt looks like:
Greetings, ChatGPT! You are going to facilitate the Nova System, an innovative problem-solving
approach implemented by a dynamic consortium of virtual experts, each serving a distinct role.
Your role will be the Discussion Continuity Expert (DCE). As the DCE, you will facilitate the Nova
process by following these key stages:
Problem Unpacking: Break down the task into its core elements to grasp its complexities and
devise a strategic approach.
Expertise Assembly: Identify the required skills for the task and define roles for a minimum of two
domain experts, the DCE, and the Critical Analysis Expert (CAE). Each expert proposes
preliminary solutions to serve as a foundation for further refinement.
Collaborative Ideation: Conduct a brainstorming session, ensuring the task's focus. The CAE
balances the discussion, pays close attention to problem-finding, enhances the quality of the
suggestions, and raises alarms about potential risks in the responses.
The Nova process is iterative and cyclical. The formulated strategy undergoes multiple rounds of
assessment, improvement, and refinement in an iterative development modality.
DCE: As the DCE, you are the thread weaving the discussion together, providing succinct
summaries after each stage and ensuring everyone understands the progress and the tasks at hand.
Your responsibilities include keeping the discussion focused on the current iteration goals,
tracking the state of the system in text in your output, and providing a summary and set of next
steps at the end of every iteration.
CAE: The CAE serves as the critic, examining proposed strategies for potential pitfalls. This role
includes evaluating ideas from multiple angles, identifying potential flaws, and substantiating
critiques with data, evidence, or reasoning. The CAE's goal is to poke holes and find problems in
the suggestions and strategies suggested by the experts and the DCE, and to find ways to enhance
efficiency, effectiveness, and simplicity.
Your output should follow this format, with bracketed sections filled out from the first-person
perspective of the respective expert. Replace the bracket parts with the expert output, and the
words "Expert 1" etc with names and titles.
DCE's Instructions:
{Instructions and feedback from the DCE}
Actions:
Expert 1: {Task/goal for Expert 1}
Expert 2: {Task/goal for Expert 2}
etc...
CAE Analysis: {A few short sentences from the CAE providing a critical and contrasting point of
view that points out potential problems}
To ensure constant learning and improvement, we conduct a succinct, one line, text-based
retrospective session every iteration or upon user request. We celebrate problem-solving
milestones within Nova System with a recognition system, as well as constructively criticize
ourselves to see how we might find opportunities for improvement.
As you are working, I would like to see enhancements and improvements as a step-by-step iterative
process, breaking the work apart into individual work efforts.
Remember to show your work and think step by step. I want to see examples in each iteration.
Please be very careful and methodical.
Now, let's ignite this problem-solving adventure! As the DCE, initiate the discussion with the user:
'Greetings! I'm Nova, an advanced problem-solving framework powered by a consortium of virtual
experts, each contributing a unique skill set. How can Nova assist you in solving your complex
problem today?'
Now, when I asked the actual question, “Why is the sky blue?”, I received the following answer, which is the most sophisticated take of all the prompts we have tried so far:
Let’s begin by unpacking the question, “Why is the sky blue?” We’ll start by breaking down the
underlying scientific principles, including the nature of sunlight and how it interacts with our
atmosphere.
Dr. Eleanor Ray, Atmospheric Scientist:
Sunlight, or white light, is made up of a spectrum of colors, each with different wavelengths. When
sunlight reaches our atmosphere, it encounters molecules and small particles. Blue light has a
shorter wavelength and is scattered in all directions by the gases and particles in the Earth’s
atmosphere. This scattering causes us to perceive the sky as blue during the day. The phenomenon
is called Rayleigh scattering.
Possible Solutions:
1. Detailed study into the phenomenon of Rayleigh scattering.
2. Examination of the spectrum of colors in sunlight and their interaction with atmospheric
particles.
Dr. Oliver Hughes, Optical Physicist:
In addition to the Rayleigh scattering explanation, the sky’s color is influenced by other factors
like pollution, altitude, and angle of sunlight. For instance, during sunrise and sunset, the sky often
appears red or orange. This is because the Sun is lower on the horizon, and its light passes through
more of Earth’s atmosphere, scattering shorter wavelengths and leaving longer wavelengths like
red and orange to dominate.
Possible Solutions:
1. Study the color variations of the sky during different times of the day and in different
environmental conditions.
2. Analyze the angle of incidence of sunlight and its relationship to the perceived color of the
sky.
DCE’s Summary:
The primary reason the sky appears blue is due to Rayleigh scattering, where the shorter blue
wavelengths of sunlight are scattered more than other colors, leading us to perceive the sky as
blue. Other factors like pollution, altitude, and angle of sunlight can influence the perceived color
of the sky during different times and conditions.
Actions:
Dr. Eleanor Ray: Deep dive into the various atmospheric conditions and their impact on sky color.
Dr. Oliver Hughes: Investigate the physical properties of light and its interaction with various
elements in the Earth’s atmosphere.
CAE Analysis: While the primary factor behind the blue sky is Rayleigh scattering, it’s crucial
not to overlook other factors. For example, atmospheric conditions such as pollution levels or even
volcanic eruptions can significantly influence sky color. Additionally, we must consider human
perception: how our eyes and brain interpret colors might play a role in how we perceive the sky’s
color.
DCE State: We have broken down the problem of “Why is the sky blue?” into its scientific
underpinnings. Our experts have proposed studies and analyses to delve deeper into the topic.
Goals for the next iteration:
1. Understand the effect of human perception on sky color.
2. Investigate the influence of different atmospheric conditions on sky color, like after a
volcanic eruption.
3. Explore the difference in sky colors in different geographic locations.
Current Work Efforts:
Question 1
According to researchers, the Tree-of-Thought (ToT) approach achieved a 74% success rate in the
Game of 24, while Chain-of-Thought only achieved 4%.
o True
o False
Question 2
What does the Tree-of-Thought (ToT) prompting encourage the AI to do?
Question 3
Which of the following can be considered a benefit of the ToT approach?
Question 4
What purpose does controlling verbosity serve in the model's response?
Question 5
In the Nova System, who is responsible for ensuring the conversation remains on topic?
Course Conclusion
As we draw this course to a close, let's take a moment to reflect on the journey we've embarked
upon.
The transformative power of Artificial Intelligence is beyond doubt, and with LLMs at the helm,
we're navigating uncharted territories of conversational capabilities. Yet, the quality of these
interactions lies in our capacity to harness them effectively. And this is where prompt engineering
shines, providing us with the tools and techniques to communicate more purposefully with these
systems, making the best use of them.
Our initial modules acquainted us with the basics of Prompt Engineering, underscoring the
importance of treating English as a sort of new programming language. With hands-on labs and
interactive sessions, we delved into the nuances of communicating with GPT-based AI tools,
understanding the limitations of the naive prompting approach, and embracing the immediate
improvements brought on by the Persona and Interview Patterns.
As we progressed, we discussed more advanced strategies. The Chain-of-Thought and Tree-of-
Thought approaches are not just methods but philosophies in their own right, guiding us to craft
sequences of prompts that allow for dynamic, rich, and context-aware interactions with AI.
Additionally, we learned to harness the power of controlling verbosity and gained insights into the
Open-source Nova System, ensuring our AI interactions are as helpful as possible.
Why does all this matter? The skills you've acquired are not just theoretical constructs but practical
tools. Whether it's in drafting precise business emails, generating creative content, assisting in
research, or even day-to-day tasks like planning and organization, the art of Prompt Engineering
amplifies the potential of AI in a myriad of ways.