IBM - Generative AI - Prompt Engineering Basics- Course Notes

The document outlines a course on prompt engineering basics for generative AI, detailing its relevance, best practices, and techniques for creating effective prompts. It consists of three modules that cover the concept of prompt engineering, various approaches, and hands-on labs to practice skills. The course is designed for a wide audience, including professionals and students, and aims to enhance understanding and application of generative AI tools.


IBM – GENERATIVE AI: PROMPT ENGINEERING BASICS

What You’ll Learn:

- Explain the concept and relevance of prompt engineering in generative AI models
- Apply best practices for creating prompts and explore examples of impactful prompts
- Practice common prompt engineering techniques and approaches for writing effective prompts
- Explore commonly used tools to aid with prompt engineering

There are 3 Modules in this Course:


This course is designed for everyone, including professionals, executives,
students, and enthusiasts interested in leveraging effective prompt engineering
techniques to unlock the full potential of generative artificial intelligence (AI)
tools like ChatGPT.

Prompt engineering is a process to effectively guide generative AI models and
control their output to produce desired results. In this course, you will learn the
techniques, approaches, and best practices for writing effective prompts.

You will learn about prompt techniques like zero-shot and few-shot, which can
improve the reliability and quality of large language models (LLMs). You will also
explore various prompt engineering approaches like Interview Pattern, Chain-of-
Thought, and Tree-of-Thought, which aim at generating precise and relevant
responses.

You will be introduced to commonly used prompt engineering tools like IBM watsonx Prompt Lab, Spellbook, and Dust.

The hands-on labs included in the course offer an opportunity to optimize results
by creating effective prompts in the IBM Generative AI Classroom. You will also
hear from practitioners about the tools and approaches used in prompt
engineering and the art of writing effective prompts.
Page 2 of 35

MODULE 01: PROMPT ENGINEERING FOR GENERATIVE AI:

In this module, you will learn the concept of prompt engineering in generative AI.
You will also learn the best practices for writing effective prompts and assess
common prompt engineering tools.

Learning Objectives:
- Assess commonly used tools for prompt engineering
- Define a prompt and its elements
- Explain the concept and relevance of prompt engineering in generative AI
- Apply the best practices for writing effective prompts to obtain the desired
response
- Explore commonly used tools for prompt engineering

A. COURSE INTRODUCTION:
Let's get started with the Basics of Prompt Engineering.
An expert knows all the answers if you ask the right questions. Interestingly, this
is the same principle on which we design prompts for generative AI models,
prompts that we use to query and question AI applications such as chatbots,
image, audio or video generation tools, and even virtual worlds. Prompts
optimize the response of generative AI models. The power lies in the questions
that you ask. Knowing how to write a prompt that is effective and direct will allow
you to generate more precise and relevant content. A good question to ask right
now is what will I be able to do after completing this course?
This course invites all beginners, whether professionals, enthusiasts,
practitioners, or students who have a genuine interest in learning how to write
effective prompts. A course for everyone regardless of your background or
experience. By the end of this course, you'll be able to explain the concept and
relevance of prompt engineering in generative AI models, apply the best
practices for creating prompts, assess commonly used tools for prompt
engineering, and apply
common prompt engineering techniques and approaches for writing effective
prompts.
This is a focused course comprising three modules, each of which requires one to
two hours to complete.
In Module 1 of the course, you'll learn about the concept of prompt
engineering, starting with how to define a prompt and its elements. You'll learn to
apply the best practices for writing effective prompts, and assess common
prompt engineering tools such as IBM watsonx Prompt Lab, Spellbook, and Dust.
In Module 2, you'll study various prompt engineering approaches like Interview
Pattern, Chain-of-Thought, and Tree-of-Thought. You'll discover techniques for
skillfully crafting prompts such as Zero-shot and Few-shot to produce precise
and relevant responses.
Module 3 requests your participation in a final project and presents a graded quiz
to test your understanding of course concepts. You can also visit the course
glossary and receive guidance on the next steps in your learning journey.
The course is curated with a mix of concept videos and supporting
readings. Watch all the videos to capture the full potential of the learning
material. You'll enjoy hands-on labs and a final project that demonstrates how to
optimize results by creating effective prompts in the IBM generative AI
classroom.
The course includes practice quizzes to help you reinforce your learning. At the
end of the course, you'll also attempt a graded quiz. The course also offers
Page 3 of 35

discussion forums to connect with the course staff and interact with your
peers. Most interestingly, through the Expert viewpoint videos, you'll hear from
experienced practitioners who will share their perspectives on the tools
and approaches used in prompt engineering and the art of writing effective
prompts.
Are you ready to learn everything you can about writing prompts to unlock the
full potential of generative AI? Let’s get started.
B. COURSE OVERVIEW:
This course is designed for practitioners and enthusiasts passionate about
learning prompt engineering techniques to unlock the full potential of generative
artificial intelligence (AI) models.
This course explains the techniques, approaches, and best practices for writing
effective prompts. You will learn to utilize these techniques to effectively guide
generative AI models and control their output to obtain the desired results.
You will learn about prompt techniques like zero-shot and few-shot, which can
improve the reliability and quality of large language models (LLMs). You will also
explore various prompt engineering approaches like Interview Pattern, Chain-of-
Thought, and Tree-of-Thought, which aim at generating precise and relevant
responses.
You will be introduced to commonly used prompt engineering tools like IBM
watsonx Prompt Lab, Spellbook, and Dust.
The hands-on labs included in the course offer an opportunity to optimize results
by creating effective prompts in the IBM Generative AI Classroom. You will also
hear from practitioners about the tools and approaches used in prompt
engineering and the art of writing effective prompts.
After completing this course, you will be able to:
 Explain the concept and relevance of prompt engineering in Generative AI
models
 Assess commonly used tools for prompt engineering
 Apply best practices for writing effective prompts
 Apply common prompt engineering techniques and approaches for writing
prompts

Course Content
This course is divided into three modules. It is recommended that you complete
one module per week or at a pace that suits you, whether that's a few hours
every day or the entire course over a weekend or even in one day.

Week – 1: Module 1: Prompt Engineering for Generative AI


In this module, you will learn the concept of prompt engineering in generative AI.
You will also learn the best practices for writing effective prompts and assess
common prompt engineering tools.

Week – 2: Module 2: Prompt Engineering: Techniques and Approaches


In this module, you will discover techniques to craft prompts that effectively
steer generative AI models. You will also learn about various prompt engineering
approaches that can enhance the capabilities of generative AI models to produce
precise and relevant responses.

Week – 3: Module 3: Course Quiz and Wrap-Up


In this module, you will attempt a graded quiz that will test and reinforce your
understanding of the concepts covered in the course. The module also includes a
glossary to enhance your comprehension of generative AI-related terms. Finally,
this module will guide you toward the next steps in your learning journey.
This module also includes optional topics related to text-to-image prompting and
prompt engineering capabilities in IBM watsonx.

Learning Resources
The course offers a variety of learning assets: videos, readings, hands-on labs,
expert viewpoints, discussion prompts, and quizzes.
The concepts are presented through videos and readings, complemented by
hands-on learning experiences in the labs.
Expert Viewpoints are videos that feature industry practitioners showcasing real-
world applications of the skills taught in this course.
Interactive learning is encouraged through discussions where you can meet
the course staff and your peers.
Practice quizzes at the end of each module will test your understanding of
what you learned. The final graded quiz will assess your conceptual
understanding of the course.

Who Should Take this Course?


This course caters to anyone eager to delve into the concept of prompt
engineering and explore the various tools and approaches.
This course is for you if you are:
 An individual looking for an introduction to prompt engineering
 A student aiming to graduate with practical knowledge of writing effective
prompts for generative AI models, thereby increasing job readiness
 A professional seeking to enhance your professional capabilities and
efficiency by leveraging the capabilities of prompt engineering in
generative AI
 A manager or executive keen on using prompt engineering for strategic
advantages in your organization

Recommended Background
This course is relevant for anyone interested in exploring the field of generative
AI and requires no specific prerequisites.
The course uses simple, easy-to-understand language to explain the critical
concepts of generative AI without relying on technical jargon. The hands-on labs
provide an opportunity to practice the skills covered in the course and do not
require a programming background or college degree.
To derive maximum learning from this course, all that’s needed is active
participation in and completion of the various learning engagements offered
throughout the modules.
Good luck!

C. SPECIALIZATION OVERVIEW:
This course is part of multiple programs:
 Generative AI Fundamentals specialization
 Applied AI Developer Professional Certificate
 Generative AI for Data Scientists Specialization
 Generative AI for Software Developers Specialization
 Generative AI for Data Analysts Specialization
 Generative AI for Data Engineering Professional Certificate
 Generative AI for Cybersecurity Professionals Specialization
 Generative AI for Human Resources (HR) Professionals Specialization
 Generative AI for Project Managers Specialization
 Generative AI for Product Managers Specialization
 Generative AI for Business Intelligence (BI) Analysts Specialization

Generative AI Fundamentals Specialization


This specialization provides a comprehensive understanding of the fundamental
concepts, models, tools, and applications of Generative AI, empowering you to
apply and unlock its possibilities.
In this specialization, you will explore the capabilities and applications of
generative AI. You will learn about the building blocks and foundation models of
generative AI. You will explore generative AI tools and platforms for diverse use
cases. Additionally, you will learn about prompt engineering, enabling you to
optimize the outcomes produced by generative AI tools. Further, you will gain an
understanding of the ethical implications of generative AI in relation to data
privacy, security, the workforce, and the environment. Finally, the specialization
will help you recognize the potential career implications and opportunities through
generative AI.
This specialization is intended for:
 Working professionals who want to enhance their careers by leveraging
the power of generative AI
 Technophiles who wish to stay updated with the advancements in AI
 Individuals seeking an introduction to generative AI and a seamless
experience through the world of Generative AI
 Managers and executives who want to leverage generative AI in their
organizations
 Students who wish to graduate with practical AI skills that will enhance
their job-readiness

Specialization Content
The Generative AI Fundamentals specialization comprises five short courses.
Each course requires 4-6 hours of learners’ engagement time.
 Generative AI: Introduction and Applications
 Generative AI: Prompt Engineering Basics
 Generative AI: Foundation Models and Platforms
 Generative AI: Impact, Considerations, and Ethical Issues
 Generative AI: Business Transformation and Career Growth

Applied AI Developer Professional Certificate


This professional certificate gives you a firm understanding of AI technology, its
applications, and its use cases. You will also explore the capabilities and
applications of generative AI. Additionally, you will learn about prompt
engineering, enabling you to optimize the outcomes produced by generative AI
tools.
You will become familiar with concepts and tools like machine learning, data
science, natural language processing, image classification, image processing,
IBM Watson AI services, OpenCV, and APIs. Even if you have no programming
background, through this Professional Certificate, you will learn practical Python
skills to design, build, and deploy AI applications on the web. The courses will
also enable you to apply pre-built AI smarts to your products and solutions.

Certificate Content
The Applied AI Developer Professional Certificate includes the following courses:
 Introduction to Artificial Intelligence (AI)
 Generative AI: Introduction and Applications
 Generative AI: Prompt Engineering Basics
 Building AI Powered Chatbots Without Programming
 Python for Data Science, AI & Development
 Developing AI Applications with Python and Flask
 Building AI Applications with Watson APIs

Generative AI for Data Scientists Specialization


This specialization is designed to support individuals to kickstart their journey
into applying generative AI in the field of data science. It caters to both current
professionals and those aspiring to enter this field, encompassing roles such as
data scientists, data analysts, data architects, engineers, and even individuals
with a passion for working with data. In this specialization, you will understand
the basics of generative AI and its real-world applications, learn about generative
AI prompt engineering concepts and approaches, and explore commonly used
prompt engineering tools, including IBM watsonx Prompt Lab, Spellbook, and
Dust. Finally, you will also learn to apply generative AI tools and techniques
throughout the data science methodology for data augmentation, data
generation, feature engineering, model development, model refinement,
visualizations, and insights.

Specialization Content
The Generative AI for Data Scientists Specialization includes the following
courses:
 Generative AI: Introduction and Applications
 Generative AI: Prompt Engineering Basics
 Generative AI: Elevate your Data Science Career

Generative AI for Software Developers Specialization


This specialization is designed for anyone interested in leveraging the power of
generative AI in software development. This specialization is suitable for both
existing and aspiring web developers, mobile app developers, front-end
developers, back-end developers, full-stack developers, DevOps professionals,
and Site Reliability Engineers (SREs). You will learn the basics of generative AI
including its uses, models, and tools for text, code, image, audio, and video
generation, and explore various prompt engineering approaches and prompt
engineering tools including IBM watsonx Prompt Lab, Spellbook, and Dust.
You will also boost your programming skills by learning to leverage generative AI
to design, develop, translate, test, document, and launch applications and their
code, and gain hands-on experience using generative AI tools and models, such
as GitHub Copilot, OpenAI ChatGPT, and Google Gemini, for various software
engineering tasks.

Specialization Content
The Generative AI for Software Developers Specialization includes the following
courses:
 Generative AI: Introduction and Applications
 Generative AI: Prompt Engineering Basics
 Generative AI: Elevate your Software Development Career



A. CONCEPT OF PROMPT ENGINEERING

01. WHAT IS A PROMPT?


Welcome to "What Is a Prompt?"
After watching this video, you will be able to define a prompt and its
elements. You will also be able to explain the relevance of writing effective
prompts to guide generative AI models to produce the desired results.
One of the significant capabilities of generative AI models is that their output
closely resembles what a human can produce: relevant, contextual, imaginative,
nuanced, and linguistically accurate. One of the critical factors in generating
this output is the prompt.
What is a prompt? A prompt is any input you provide to a generative model to
produce a desired output. You can think of it as an instruction you provide to the
model. For example, write a small paragraph describing your favourite holiday
destination. Write HTML code to generate a dropdown selection of cities within
an online form. These are straightforward prompts used to produce a specific
output.
Prompts can also be a series of instructions that refine the output step by step to
achieve the desired result. For example, write a short story about a scientist
studying life on Mars. What were some of the challenges he faced during his
research? With these examples, it is clear that prompts contain
questions, contextual text, guiding patterns or examples, and partial input for
the model.
Based on these natural language requests submitted as prompts, generative AI
models collect information, derive inferences, and provide creative
solutions. These instructions help the model produce relevant and logical
responses or output based on provided inputs. Let's look at some more
examples to help us understand this better.
Suppose you want the model to write a short story about the struggles
and achievements of a farmer who became a successful businessman in 10
years. If your prompt is "Rich man's story from a small town, his struggles and
achievements," it will produce a generic output. This is what we call naive
prompting: asking queries of the model in the simplest possible manner. To
convey your intentions to the model, you can make simple adjustments that
radically improve the result. Your prompt should have context, a proper
structure, and be comprehensible. So you can rewrite the prompt as "Write a
short story about the struggles and achievements of a farmer who became a
rich and influential businessman in 10 years."
Let's look at another example where you want the model to generate an image
of a sunset scenery you have in mind. Writing the prompt as "Sunset
image between mountains" may not give you the desired output. The prompt is
too brief and lacks a detailed outline of the image you have in mind. You can
rewrite the prompt as "Generate an image depicting a calm sunset over
a river valley that rests amidst the mountains." To master the art of writing
effective prompts, let's understand the building blocks of a well-constructed
prompt one by one.
Instructions give the model distinct guidelines regarding the task you wish to
execute, steering the actions of the generative AI model to influence the
formation of its response. "Write an essay in 600 words analysing the
effects of global warming on marine life" is one example.
Context helps establish the circumstances that form the setting of the
instruction and provides a framework for generating relevant content.
To understand this, let's add some context to the prompt discussed in the
previous example. "In recent decades, global warming has undergone significant
shifts, leading to rising sea levels, increased storm intensity, and changing
weather patterns. These changes have had a severe impact on marine life. Write
an essay in 600 words analysing the effects of global warming on marine
life." This prompt will help the model generate output aligned with the
context. Input data is any piece of information provided by you as part of the
prompt. This can be used as a reference for the generative model to produce
responses with a specific set of details or ideas.
To provide input data, the same prompt can be reconstructed in the following
manner. "You have been provided with a data set containing temperature records
and measurements of sea levels in the Pacific Ocean. Write an essay in 600
words analysing the effects of global warming on marine life in the Pacific
Ocean."
The output indicator offers benchmarks for assessing the attributes of the output
generated by the model. It can outline the tone, style, length, and other qualities
you desire from the output. In the prompt "Write an essay in 600
words, analysing the effects of global warming on marine life," the output
indicator specifies that the output generated should be an essay of 600 words.
It will be evaluated based on the clarity of the analysis and the incorporation of
relevant data or case studies. Each of these elements plays a role in helping the
generative AI model comprehend your requirements and give you the desired
output.
In this video, you learned that a prompt is any input or series of instructions
you provide to a generative model to produce a desired output. These
instructions help in directing the creativity of the model and assist it in producing
relevant and logical responses. The building blocks of a well-structured prompt
include instruction, context, input data, and output indicators. These elements
help the model comprehend our necessities and generate relevant responses.
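The four building blocks can also be assembled programmatically. The sketch below is illustrative only (the `build_prompt` helper is hypothetical, not part of any course tool); it simply concatenates whichever elements you supply:

```python
def build_prompt(instruction, context=None, input_data=None, output_indicator=None):
    """Assemble a prompt from the building blocks: instruction, context,
    input data, and output indicator. Only the instruction is required."""
    parts = []
    if context:
        parts.append(context)            # circumstances that frame the task
    if input_data:
        parts.append(input_data)         # reference information for the model
    parts.append(instruction)            # the task itself
    if output_indicator:
        parts.append(output_indicator)   # desired length, tone, style, format
    return "\n\n".join(parts)

prompt = build_prompt(
    instruction="Write an essay analysing the effects of global warming on marine life.",
    context=("In recent decades, global warming has led to rising sea levels, "
             "increased storm intensity, and changing weather patterns."),
    output_indicator="The essay should be 600 words long.",
)
print(prompt)
```

Omitting an element simply drops it from the prompt, which makes it easy to compare how each building block changes the model's output.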

02. WHAT IS PROMPT ENGINEERING?


Welcome to What is Prompt Engineering?
After watching this video, you'll be able to define what prompt engineering is and
explain the relevance and importance of prompt engineering with respect to
generative AI models. You'll also be able to explain the process involved in
formulating effective prompts through prompt engineering to guide these models
to produce relevant responses. The process of designing effective prompts to
generate better and desired responses is called prompt engineering.
Although generative AI models have the potential to supplement human
creativity, if you fail to provide precise prompts, these models may
produce inadequate results and even false and misleading information.
Prompt engineering is a blend of critical analysis, creativity, and technical
acumen. It is not limited to asking the right question. It includes framing the
question in the right context with the right information and your expectation
of desired outcomes to elicit the most appropriate response. Let's understand
this through an example.
A ship is sailing through the Atlantic Ocean. To plan his voyage, the ship's captain
wants to know the precise weather forecast for a specific location at a specific
time. In this case, providing the model with a simple prompt, such as "weather
forecast of the Atlantic Ocean," may not get the desired results. To get the
most accurate results, the captain will
have to engineer the prompts. When designing the prompt, the captain needs
to define the context to include details such as the intended location for the
weather forecast in latitude and longitude
and the time range for prediction.


For example, the captain of a ship is planning a strategic voyage in the Atlantic
Ocean. To help the captain navigate effectively, provide weather forecasts for the
upcoming week from 28th August 2023 to 1st September 2023. The coordinates
of the target location are between 20 degrees north and 30 degrees north
latitude and 40 degrees west and 20 degrees west longitude.
The captain must also indicate whether he requires specific output, such as
whether the model should return information on other weather elements that could affect the
voyage. For example, to help plan an effective navigation in the Atlantic
Ocean, provide detailed information about expected wind patterns, wave
heights, precipitation probabilities, cloud cover, and any potential storms that
might affect the voyage during a specified time frame and location.
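The captain's engineered prompt can be expressed as a parameterized template, so the same structure (context, time range, location, output indicator) can be reused for any voyage. The `weather_prompt` function below is a hypothetical sketch, not an API from the course:

```python
def weather_prompt(start_date, end_date, lat_range, lon_range, elements):
    """Build a forecast prompt with explicit context (location, time range)
    and an output indicator (the weather elements to report)."""
    return (
        f"A ship's captain is planning a voyage in the Atlantic Ocean. "
        f"To help the captain navigate effectively, provide weather forecasts "
        f"from {start_date} to {end_date} for the area between "
        f"{lat_range[0]} and {lat_range[1]} latitude and "
        f"{lon_range[0]} and {lon_range[1]} longitude. "
        f"Include detailed information about: {', '.join(elements)}."
    )

print(weather_prompt(
    "28th August 2023", "1st September 2023",
    ("20 degrees north", "30 degrees north"),
    ("40 degrees west", "20 degrees west"),
    ["wind patterns", "wave heights", "precipitation probabilities",
     "cloud cover", "potential storms"],
))
```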
It's important to recognize that prompt engineering is a well-structured iterative
process that involves refining the prompts and experimenting with various
factors that could influence the output from the model. Let's understand the
step-by-step process involved in creating effective prompts through prompt
engineering.
Define the goal.
The initial step in the process involves establishing a distinct goal. You must
know exactly what you want the model to generate. For example: form a brief
overview of the benefits and risks associated with artificial intelligence in
automobiles.
Craft initial prompt.
Having defined the goal, it's now time to create an initial prompt. This might
manifest as a question, a directive, or even a situation depending on the
goal. For example, write an article that presents a well-rounded analysis of the
benefits and drawbacks associated with the incorporation of artificial intelligence
in the automobile industry.
Test the prompt.
The prompt you created should now be tested and analysed for its
response. While the response might be relevant, it could lack the
distinct perspective you are aiming for. For example, the response to the initial
prompt directly lays out the benefits and drawbacks of integrating artificial
intelligence in the automobile industry. It does not highlight any ethical concerns
that might arise. Further, there's no discussion on the positive and negative
implications of the same.
Analyze the response.
You must carefully review the response and examine whether it aligns with your
goals. If not, make a note of the areas where it fell short. For example, the initial
prompt used fails to cover a comprehensive range of benefits and
risks associated with artificial intelligence in the automobile industry.
Refine the prompt.
Using the knowledge acquired through testing and analysis, it's now appropriate
to modify the prompt. This might include enhancing its specificity, incorporating
additional context, or rephrasing. The initial prompt can be refined as
follows. Write an informative article discussing the role of artificial intelligence
in revolutionizing the automobile industry. Address key aspects such as benefits,
drawbacks, ethical considerations, and both positive and negative
implications. Cover specific domains like autonomous driving and real-time traffic
analysis,
while also examining potential challenges such as technical complexity and
cybersecurity concerns.
Iterate the process.
The last three steps are repeated until you're satisfied with the
response. Consequently, following several cycles of refinement, the final prompt
shall take this form:
Example: Write an article highlighting how artificial intelligence is reshaping the
automobile industry focusing on the positive advancements, particularly in
autonomous driving and real-time traffic analysis, while thoroughly exploring
concerns related to intricate technical aspects such as decision-making
algorithms and potential cybersecurity breaches. Emphasize the implications
these concerns may have on vehicle safety. Ensure that the analysis is thorough,
backed with examples and encourages critical thinking.
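The define-craft-test-analyse-refine cycle above can be sketched as a loop. This is a schematic only; the `generate`, `meets_goal`, and `refine` callables are placeholders for a real model call, your own evaluation criteria, and your own rewriting step:

```python
def refine_prompt(initial_prompt, generate, meets_goal, refine, max_iters=5):
    """Iterate the test/analyse/refine steps until the response meets the
    goal or the iteration budget runs out, then return the final prompt."""
    prompt = initial_prompt
    for _ in range(max_iters):
        response = generate(prompt)        # test the prompt
        if meets_goal(response):           # analyse the response
            break
        prompt = refine(prompt, response)  # refine the prompt
    return prompt

# Toy run: a stand-in "model" that echoes the prompt, a goal requiring
# ethical considerations to be mentioned, and a refinement that appends
# the missing requirement.
final = refine_prompt(
    "Write about AI in the automobile industry.",
    generate=lambda p: p,
    meets_goal=lambda r: "ethical" in r,
    refine=lambda p, r: p + " Address ethical considerations.",
)
print(final)
```

In practice the analyse and refine steps are done by a human reviewing real model output, so the loop is usually run by hand rather than in code.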
Now let's summarize the importance and relevance of prompt engineering in
generative AI models.
Optimizing model efficiency.
Prompt engineering helps design intelligent prompts that allow the users to
harness the full capabilities of these models without requiring extensive
retraining.
Boosting performance for specific tasks.
Prompt engineering empowers generative AI models to deliver responses that
are nuanced and contextual, rendering them more effective for specific
tasks.

Understanding model constraints.
Refining prompts through each iteration and studying the corresponding
responses of the model can help us understand its strengths and
weaknesses. This knowledge can further guide
feature enhancement or complete development of the model in the future.
Enhancing model security.
Skilled prompt engineering can prevent issues of harmful content generation
due to poorly designed prompts, thereby enhancing safe utilization of the
model.
In this video, you learned that prompt engineering is the process of designing
effective prompts to leverage the full capabilities of generative AI models and
produce optimal responses. You also learned the process of refining a prompt via
prompt engineering. Finally, you learned the importance of prompt engineering in
optimizing model efficiency, boosting performance, understanding model
constraints, and enhancing its security.

Prompt Instructions
Entering prompt instructions
Moving to the central portion, you'll find two input areas—one for your Prompt
Instructions and one to send the actual prompt.
The prompt instructions are a way to instruct the AI about the context of the
conversation that is about to happen or a way to ask it to behave in a specific
manner.

Prompt Instructions and the Prompt


The difference between prompt instructions and the prompt
Technically, you could paste your Prompt Instructions into the Prompt just before
your question for the AI. What is the difference between the two approaches?
Aside from appearing much neater visually, splitting the prompt instructions from
the prompt allows us to lock the instructions to the whole conversation without
repasting them each time we ask a question.

Occasionally, this differs from what we want, so placing such instructions directly
in the prompt textbox can be beneficial to limit them to that single question
without affecting the rest of the chat.

Experimenting with Prompts


Large language models (LLMs) are pre-trained on vast amounts of data to
generate novel content, including text, images, and video. To generate the
desired response, you should feed the input to a model as a prompt. It is
important to provide a context-based, clear, and precise prompt to guide the
model and get the desired output. The process of designing effective prompts to
generate better and desired responses is called prompt engineering.

OpenAI's gpt-4o-mini is an advanced foundation large language model. As a


powerful tool for generating coherent and contextually relevant text, the gpt-4o-
mini foundation model can be used for various applications, including chatbots,
virtual assistants, content generation, and more. It can be applied for translating
languages, answering questions in an informative way, and completing tasks
such as writing poems, code, scripts, musical pieces, emails, letters, and so on.

03. BEST PRACTICES FOR PROMPT CREATION:


Welcome to best practices for prompt creation.
After watching this video, you will learn to apply the best practices for creating
effective prompts and explain the process of drafting and refining prompts
through various examples.
Writing effective prompts is crucial for utilizing the full potential of generative AI
models to
produce relevant and accurate responses. By applying the best practices for
creating effective prompts, you can supervise the style, tone, and content of the
output generated. You can apply the best practices for crafting effective prompts
across four essential dimensions: clarity, context, precision, and role play or
persona pattern. Let's explore each of these aspects individually.
To have clarity in your prompt, keep the following points in mind. Simple and
straightforward language can convey instructions easily. Write prompts that are
unambiguous and easy to comprehend. Specialized terminologies can
potentially puzzle the model or the users, so write prompts using simple terms
that can be understood by a broad range of audiences. Vague prompts could
lead to responses that do not align with your intentions, therefore, you must
provide a clear description of the task that the model must perform. Let's
understand this through an example.
Discuss culinary processes that take place on foliaceous stipules of plants with
the help of sunlight, also mention a green thing and how light, air, and water are
important for aerial parts of the plant. This prompt has much ambiguity. Nowhere
does it explicitly mention the process you want to discuss. The prompt contains
complex terminologies which make it difficult to comprehend. It's also vague and
does not clearly describe the task at hand. To ensure clarity, you can rewrite this
prompt as, explain the process of photosynthesis in plants, detailing the role of
chlorophyll and how sunlight, carbon dioxide, and water contribute to this
biological function. The revised prompt has simple, clear, and concise language
and explicitly states that
you want to discuss the process of photosynthesis in plants.
Moving on to the next essential dimension, which is the context.
Context always helps the model understand the situation or the subject. This
could involve giving a brief introduction or explanation of the circumstances in
which the response is required. Relevant information or specific details, like
people, places, events, or concepts, help guide the understanding of the
model, so it's important to incorporate such details while writing prompts. For
example, the prompt, write what happened during the outbreak of the
Revolutionary War in 1775 does not contain adequate context and specific
details to guide the understanding of the model. To establish the right context
and include the relevant information,
you can rewrite this prompt as, describe the historical events leading to the
American Revolutionary War, focusing on key incidents like the Boston Tea
Party, Battle of Saratoga and so on. Highlight the tensions between the American
colonies and the British government and explain how these events led to the
outbreak of the Revolutionary War in 1775.
Another important dimension for creating effective prompts is precision.
Precision helps outline your request in the prompt. If you're searching for a
particular kind of response, express that with clarity. Examples
incorporated within your prompts can help the model understand what kind of
response you're looking for and direct its thought process. For example, in the
prompt talk about supply and demand and how it is affected in economics,
there's no precise outline of a particular kind of response and no examples. To
ensure precision, you can rewrite this prompt as explain the concept of supply
and demand in economics,
describe how an increase in demand can influence pricing with the help of an
illustrative example like the smartphone market.
Similarly, explain the repercussions of reduced supply on pricing by drawing
parallels with situations like disruptions in oil production. This prompt clearly
expresses that you want to explain a concept with the help of examples.
Finally, let's discuss the last dimension, which is role play or persona pattern.
Prompts written from the perspective of a specific character or persona can help
the model
generate responses aligned with that perspective. Essential contextual details
enable the model to effectively assume a particular role. If you're requesting the
model to reply from the standpoint of a historical figure, a fictional character or a
specific profession, then provide contextual details. Let's look at this example
here.
Write a log entry describing the strange flora and fauna of an uncharted alien
planet. This prompt will simply give scientific details about the alien planet and
will not explain its answer from the perspective of a professional.
You can rewrite the prompt as pretend you are an astronaut who has just
landed on an uncharted alien planet. Write a log entry describing the strange
flora and fauna you've encountered, like the color of the sky and the unfamiliar
sounds echoing through the alien landscape. Express your feelings of
excitement, curiosity, and a hint of apprehension as you
document this extraordinary journey. In this example, you explicitly
gave contextual details and assumed yourself to be an astronaut. This prompt
will generate a response aligned with the perspective of an astronaut.
In this video, you learned that writing effective prompts for generative AI models
is essential for supervising the style, tone, and content of the output. Best
practices for writing effective prompts can be implemented across four
dimensions: clarity, context, precision, and role play.
Clarity includes using simple and concise language, context provides background
and required details, precision means being specific and providing
examples. Role play can enhance responses by assuming a persona and offering
relevant context. These practices can be adapted to specific needs for optimal
results.
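As a rough illustration, the four dimensions above can be assembled programmatically. This is a minimal sketch; the function and its parameter names are invented for this example, not a standard API.

```python
# Illustrative only: combine clarity (task), context, precision (examples),
# and persona into one prompt string, mirroring the four dimensions above.

def compose_prompt(task, context="", examples="", persona=""):
    """Assemble a prompt from the four best-practice dimensions."""
    parts = []
    if persona:
        parts.append(f"Pretend you are {persona}.")   # role-play pattern
    if context:
        parts.append(context)                          # background details
    parts.append(task)                                 # clear task statement
    if examples:
        parts.append(f"For example: {examples}")       # precision via examples
    return " ".join(parts)

prompt = compose_prompt(
    task="Explain the process of photosynthesis in plants.",
    context=("Detail the role of chlorophyll and how sunlight, carbon "
             "dioxide, and water contribute to this biological function."),
    persona="a biology teacher addressing high-school students",
)
```

Any of the four arguments can be omitted; the sketch simply shows how the dimensions compose rather than prescribing an order.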
04. COMMON PROMPT ENGINEERING TOOLS:


Welcome to Common Prompt Engineering Tools.
After watching this video, you'll be able to describe the common functionalities
of prompt engineering tools and explain the capabilities of a few common tools
for prompt engineering.
Prompt engineering is the process of designing precise and
contextually appropriate prompts to interact with generative AI models
and generate relevant and accurate outputs. To aid you with this process, you have
diverse prompt engineering tools.
Prompt engineering tools provide various features and functionalities to optimize
the creation
of prompts for desired outcomes. These tools are specifically useful for users who
may not be
proficient with natural language processing or NLP techniques but want to
achieve specific outcomes when using generative AI models. Let's explore the
common functionalities offered by diverse prompt engineering tools.
Firstly, several tools for prompt engineering offer suggestions for prompts based
on a given input or the desired output. Next, these tools can suggest how to
structure prompts for better contextual communication. They help craft prompts
that provide the necessary context for the model to understand the user's
intent. You can iteratively refine prompts based on the initial responses from a
tool to find the most effective prompt. Prompt engineering tools might offer
features to help mitigate bias in the response of a generative AI model. They can
guide how to craft prompts to reduce the likelihood of biased or inappropriate
outputs. These tools can help create prompts relevant to specific domains such
as legal, medical, or technical. Some prompt engineering tools offer libraries
of predefined prompts for various use cases that can be customized for specific
needs.
Moving forward, let's explore a few common tools for prompt engineering. Let's
start with IBM Watsonx.ai, a platform of integrated tools to train, tune, deploy,
and manage foundation models easily. The platform includes the Prompt Lab
tool that enables users to experiment with
prompts based on different foundation models and build prompts based on their
needs. To help you get started, Prompt Lab provides sample prompts for different
use cases, including summarization, classification, generation, and extraction. To
create prompts specific to your needs, you can train a model by adding
instructions and examples to show the model how to respond to the input.
Moving further, let's learn about Spellbook, an integrated development
environment (IDE) by Scale AI. With Spellbook, you can build
applications based on a large language model (LLM) and experiment with prompts
for various use cases, including text generation, text extraction, classification,
question answering, auto completion, and summarization. For prompt
engineering, Spellbook includes a prompt editor that enables you to edit and test
prompts.
You can use prompt templates to leverage structured prompts for generating
text. You can also access pre-built prompts as examples.
Another tool for prompt engineering is Dust. It provides a web user interface for
writing prompts and chaining them together, and you can manage different versions
of chained prompts. It also provides a custom coding language and a set of standard
blocks for processing the outputs provided by the LLMs. Dust also supports API
integration to incorporate other models and services.
Another tool for effective prompt engineering is PromptPerfect, which can be
used to optimize prompts for different LLMs or text-to-image models. It supports
common text models such as GPT, Claude, StableLM, and Llama, and image
models such as DALL-E and Stable Diffusion.
For writing or optimizing a prompt, you first need to select the relevant model for
which you
want to optimize the prompt. Different models have different optimizing
strategies. You can also select the add-ons related to preview, quality, language,
and moderation. When you write a prompt, you can experiment with the
autocomplete feature that provides suggestions as you type. You can further
optimize the written prompt. Here you can see an example that reveals the
original prompt written by a user and the corresponding optimized prompt
generated by PromptPerfect. For a further level of optimization, you can optimize
and refine the prompt step by step in the streamline mode. You write a prompt,
optimize it, and again edit the prompt and optimize until you are satisfied with
the output.
Some other popular tools and interfaces provide resources for prompt
engineering or help you experiment with prompts. GitHub provides vast
repositories for prompt engineering and LLMs.
The guides, examples, and tools provided in these repositories help improve
prompt engineering skills.
OpenAI Playground is a web-based tool that helps you experiment with and test
prompts with various OpenAI models such as GPT. The Playground AI platform
helps you experiment with text prompts for generating images with the Stable
Diffusion model.
LangChain is a Python library that provides functionalities for building and
chaining prompts.
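The chaining idea can be sketched in plain Python without any library. This is an illustration of the concept only, not LangChain's actual API; the function and the stub LLM below are invented for the example.

```python
# Illustrative prompt chaining: each "link" formats a template with the
# previous step's output, and the completion feeds the next template.

def run_chain(steps, initial_input, llm):
    """Feed each templated prompt's completion into the next template."""
    text = initial_input
    for template in steps:
        prompt = template.format(input=text)
        text = llm(prompt)          # llm is any callable: prompt -> completion
    return text

# A stub LLM so the chain is runnable without an API key.
def fake_llm(prompt):
    return f"[response to: {prompt}]"

result = run_chain(
    ["Summarize this article: {input}", "Translate into French: {input}"],
    "Climate change findings ...",
    fake_llm,
)
```

In a real setup, `llm` would wrap an API call, and a library like LangChain adds versioning, templates, and error handling around the same pattern.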
Finally, it's interesting to learn that prompts are also available for buying and
selling. One such example of a marketplace for prompts is
PromptBase. PromptBase supports prompts for
popular generative AI tools and models, including Midjourney, ChatGPT, DALL-E,
Stable Diffusion, and Llama. Through PromptBase, you can buy prompts specific
to your requirements
and specific to a model or tool. For example, you can buy a prompt to
generate comical cartoon characters through Midjourney. Also, if you have good
prompt crafting skills, you can upload and sell prompts through PromptBase. It
also supports crafting prompts directly on the platform and selling them on its
marketplace.
In this video, you learned that prompt engineering tools provide various features
and
functionalities to optimize prompts. Some of these functionalities
include suggestions for prompts, contextual understanding, iterative refinement,
bias mitigation, domain-specific aid, and libraries of predefined prompts. A few
common tools and platforms for prompt engineering include IBM
Watsonx.ai, Prompt Lab, Spellbook, Dust, and PromptPerfect.

05. LESSON SUMMARY:


Congratulations! You have completed this lesson.
At this point, you have learned the concepts of prompts and prompt engineering
in generative AI. You have also explored the best practices for writing effective
prompts and some common prompt engineering tools.
You learned the definition of prompts and their elements. You were introduced to
prompt engineering and its relevance to generative AI models. You learned the
best practices for writing effective prompts and how to refine them. You learned
the functionalities and capabilities of common prompt engineering tools. You
even got the opportunity to experience creating prompts and learning about
naive prompting and persona patterns through hands-on lab experiences. You
were privy to what experts from the field had to say about prompt engineering.
Specifically, you learned that:
- A prompt is any input or a series of instructions you provide to a generative model to produce a desired output.
- These instructions help in directing the creativity of the model and assist it in producing relevant and logical responses.
- The building blocks of a well-structured prompt include instruction, context, input data, and output indicators.
- These elements help the model comprehend our needs and generate relevant responses.
- Prompt engineering is designing effective prompts to leverage the full capabilities of the generative AI models in producing optimal responses.
- Refining a prompt involves experimenting with various factors that could influence the output from the model.
- Prompt engineering helps optimize model efficiency, boost performance, understand model constraints, and enhance its security.
- Writing effective prompts is essential for supervising the style, tone, and content of output.
- Best practices for writing effective prompts can be implemented across four dimensions: clarity, context, precision, and role-play.
- Prompt engineering tools provide various features and functionalities to optimize prompts.
- Some of these functionalities include suggestions for prompts, contextual understanding, iterative refinement, bias mitigation, domain-specific aid, and libraries of predefined prompts.
- A few common tools and platforms for prompt engineering include IBM watsonx Prompt Lab, Spellbook, Dust, and PromptPerfect.
MODULE 02: PROMPT ENGINEERING TECHNIQUES AND APPROACHES


In this module, you will discover techniques for skillfully crafting prompts that
effectively steer generative AI models. You will also learn about various prompt
engineering approaches that can enhance the capabilities of generative AI
models to produce precise and relevant responses.

Learning Objectives:
- Apply prompt engineering techniques for writing effective text prompts.
- Apply prompt engineering approaches to optimize the response of
generative AI models.

01. TEXT-TO-TEXT PROMPT TECHNIQUES:


Welcome to text-to-text prompt techniques.
After watching this video, you'll be able to describe the techniques using which
text prompts can improve the reliability of large language models. You'll also be
able to explain the benefits of using text prompts with large language models
effectively.
In recent years, there has been a significant advancement in natural language
processing, or NLP, by using large language models, or LLMs. However, as the
size and complexity of LLMs continue to increase, questions about their
reliability, security, and potential biases have surfaced.
Using text prompts effectively is a promising solution to these concerns. Text
prompts are carefully crafted instructions that direct LLM behaviour to generate
a desired output. However, the quality and relevance of the generated output
depend on the effectiveness of the prompt and the capability of the LLM. Let's
explore the techniques that make text prompts effective and
improve the reliability of output generated by LLMs. Let's begin by exploring the
task specification technique.
Text prompts should explicitly specify the objective to the LLM to increase the
accuracy of responses.
For example, the prompt translate this English sentence into French is a clear
directive for achieving the task. Contextual guidance is a technique in which
text prompts provide
specific instructions to the LLM to generate relevant output. For example, if
you'd like the model to generate a short write up on the landmarks of New York
City, a prompt like write a short paragraph on New York City would yield a
generic response that might not cover what you want.
On the other hand, a more specific prompt like write a short paragraph on New
York City, highlighting its iconic landmarks, would generate a more appropriate
output because of the context included in the prompt. Domain expertise is also
essential to improving LLM dependability. Text prompts can use domain specific
terminology when you need LLMs to generate content in specialized fields, like
medicine, law, or engineering, where accuracy and precision are crucial.
Let's say you'd like to request medical information regarding
hypothyroidism. Your prompt could read something like, please explain the
causes, symptoms, and treatments of hypothyroidism,
including the latest research and medical guidelines.
Bias mitigation is a technique in which text prompts provide explicit
instructions to generate neutral responses. For instance, let's assume that you
are concerned about a gender bias in the model's response to a prompt for a
writeup on leadership qualities. To address this, you can use a text prompt like
this, write a 100-word paragraph on leadership traits without favouring any
gender. Provide equal examples of traits from all genders.
Framing is yet another technique by which text prompts guide LLMs to


generate responses within required boundaries. Suppose you'd like the model to
summarize a lengthy article about climate change. Your text prompt can be,
provide a summary of 100 words of the article on climate change, focusing on its
primary findings and recommendations.
Did you know that LLMs today, trained on large amounts of data and tuned to
follow instructions, can perform tasks zero-shot? Zero-shot prompting is a method
in which generative AI models generate meaningful responses to prompts
without needing prior training on these specific prompts. For example, the
prompt could be, select the adjective in this sentence.
The sentence is Anita bakes the best cakes in the neighbourhood. The output
here would be best. However, often you will not get the desired response in one
prompt, and you may need to iterate. This is where the user feedback loop
technique comes in, wherein users provide feedback to text prompts
and iteratively refine them based on the response generated by the LLM. This
loop allows users to improve the model's output quality incrementally, until the
desired state is achieved. For example, the user asks the model to write a poem
via a text prompt, the LLM generates a poem. The user says, make it more
humorous.
The LLM adjusts the poem to include more humorous elements. The user
approves the revised poem.
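The feedback loop above can be sketched as a growing conversation history, assuming the common chat-message convention of alternating user and assistant roles; the helper below is illustrative.

```python
# Sketch of the user feedback loop: keep the conversation history and append
# each refinement request, so the model sees its prior output plus the
# feedback on the next request.

history = [{"role": "user", "content": "Write a short poem about autumn."}]

def refine(history, model_reply, feedback):
    """Record the model's reply, then add the user's refinement request."""
    history.append({"role": "assistant", "content": model_reply})
    history.append({"role": "user", "content": feedback})
    return history

refine(history, "Leaves drift down in amber light...", "Make it more humorous.")
# Each iteration narrows the output toward the desired state.
```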
Similarly, for complex tasks, when you are unable to describe your needs
clearly, a technique called few-shot prompting is used. It enables in-context learning,
wherein demonstrations are provided in the prompt to steer the model to better
performance. The demonstrations act as conditioning for subsequent examples
where you would like the model to generate a response.
For instance, suppose the task for the model is to generate short travel
recommendations.
As few-shot prompts, you provide the following guided context to the
model. Recommend a summer travel destination well known for beautiful
beaches. Suggest a fall travel destination that is renowned for its beautiful
foliage. After using these few-shot prompts, the model can generate travel
recommendations for other types of vacations. For example, if the task is
recommend a city to explore, the model will generate an answer such as: consider
visiting a vibrant city like Paris, known for its rich history, art, and iconic
landmarks. This is how the model can generate travel recommendations for
different types of vacations, based on the minimal training data provided in the
few-shot prompts.
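A few-shot prompt like the one above can be assembled as plain text, with demonstrations preceding the new task. This is a minimal sketch; the Request/Response labels are an assumed convention for the example, not required by any particular model.

```python
# Sketch of few-shot prompting: input/output demonstrations condition the
# model before the actual task, which the model completes after "Response:".

def few_shot_prompt(demonstrations, task):
    """Prepend request/response demonstrations to the new task."""
    lines = []
    for request, response in demonstrations:
        lines.append(f"Request: {request}")
        lines.append(f"Response: {response}")
    lines.append(f"Request: {task}")
    lines.append("Response:")                 # the model completes from here
    return "\n".join(lines)

prompt = few_shot_prompt(
    [
        ("Recommend a summer travel destination.",
         "Try a coastal town known for beautiful beaches, like Nice."),
        ("Suggest a fall travel destination.",
         "Visit Vermont, renowned for its beautiful foliage."),
    ],
    "Recommend a city to explore.",
)
```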
There are several benefits when you use text prompts with LLMs using the
methods we just discussed. Let's look at some of them. First, the LLM's explainability
is enhanced. Explainability refers to the degree to which a user can understand
and interpret the model's decision-making process and the reasons behind its
generated outputs. Explainability helps users, developers, and stakeholders
understand how the model works, why it makes certain predictions or generates
specific text, and whether it can be trusted in various applications. Explainability
is crucial to addressing ethical concerns related to AI. It helps all stakeholders
evaluate and ensure that the LLM's behaviour is consistent with the specific domain's
ethical guidelines and legal requirements.
In addition to increasing the reliability and explainability of LLMs, effective text
prompts also foster trust between the user and the LLM. When the user can
understand how the LLM works and see the direct influence of their instructions
on the LLM's behaviour, it leads to transparent and meaningful interactions
between the user and the LLM.
In this video, you learned the various techniques by which text prompts
can improve the reliability and quality of LLMs. Specifically, you looked at task
specification, contextual guidance, domain expertise, bias mitigation, framing,
and the user feedback loop. You also learned about the zero-shot and few-shot
techniques. Finally, you learned the several benefits of using text prompts with
LLMs effectively, such as increasing the explainability of LLMs,
addressing ethical considerations, and building user trust.

02. INTERVIEW PATTERN APPROACH:


Welcome to interview pattern approach. After watching this video, you'll be able
to explain the interview pattern approach to prompt engineering. You'll also learn
to apply this concept to write more effective prompts for generative AI models to
produce more specific responses.
The interview pattern approach to prompt engineering is a strategy that
involves designing prompts by simulating a conversation or interacting with the
model in the style of an interview.
Let's learn how the interview pattern approach typically works.
This approach requires meticulous optimization of the prompt to ensure the
model generates responses that precisely meet your objectives. It involves
providing specific prompt instructions to the model, in response to which the
model asks the user the necessary follow-up questions.
Depending on the response to these follow-up questions, the model draws
information relevant to its task, processes it, and provides a well optimized
response to the user. The more information you provide, the better the result will
be.
You'll understand this better with the help of an example. Suppose you want the
model to behave like a travel consultant and plan your travel itinerary for a
vacation. How will you prompt the model to do so? You can give prompt
instructions to the model saying you will act as a seasoned travel expert. Your
objective is to engage in a comprehensive trip planning session
with me. Begin by asking a series of detailed questions, one at a time. To gather
all the essential information required to craft the most tailored and memorable
travel itinerary based on my specific preferences, interests, and budget.
In response to this prompt instruction, the model will ask all the required follow-
up questions,
such as what types of destinations do you enjoy traveling to the most? Could you
describe your ideal vacation in terms of activities and experiences? How do you
typically plan your trips,
and what factors are most important to you when choosing a destination? Do you
find any specific cultural or historical aspects intriguing when planning your
travel destination?
What kind of accommodation options do you prefer when you travel, and
why? How do you balance budget considerations with the desire for a memorable
travel experience?
In this example, each question builds upon the previous one, creating a
structured and informative conversation about travel preferences. Depending on
your response to these queries, the model will plan a memorable travel itinerary
that aligns with your preferences and needs.
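The instruction from this example can be captured as a reusable template. The wording below paraphrases the video's prompt; the template and its field names are invented for illustration.

```python
# Illustrative interview-pattern instruction, parameterized so the same
# structure can be reused with a different persona or goal.

INTERVIEW_TEMPLATE = (
    "You will act as {persona}. Your objective is to engage in a "
    "comprehensive {goal} session with me. Begin by asking a series of "
    "detailed questions, one at a time, to gather all the essential "
    "information required before giving your final answer."
)

instruction = INTERVIEW_TEMPLATE.format(
    persona="a seasoned travel expert",
    goal="trip planning",
)
```

Used as the prompt instructions (system message), this turns a single static prompt into the back-and-forth exchange the interview pattern relies on.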
In this video, you learned that the interview pattern approach is superior to the
conventional prompting approach, as it allows a more dynamic and iterative
conversation when interacting with generative AI models. Instead of providing a
single static prompt, the interview pattern involves a back-and-forth exchange of
information with the model, which helps clarify queries and guides the response
of the model in real time. This, in turn, enhances the capabilities of the users
to optimize the results obtained.

The Interview Pattern


In the previous lab, we saw the limitations of the naive approach to prompting
and how the Persona Pattern can improve results.
Still, when we asked for a training program, the results were for a generic out-of-
shape person. They were not specific to us and therefore not as useful as they
could be.
We can employ the Interview Pattern along with the Persona Pattern to
optimize these results. Let's start with the fitness program scenario, and then
consolidate the idea with a new example.
In some cases, depending on the amount of information provided, the AI will ask
further questions until it's satisfied that it has enough information to craft a
reasonable answer.

Tips on the Interview Pattern


1. Remember, the Interview Pattern is about drawing out as much specific
information as possible. Provide high-quality answers to the questions the
LLM asks you to obtain better responses.
2. Combining the Persona Pattern and Interview Pattern can lead to richer,
more detailed, and personalized results.
3. Don't hesitate to experiment with different instructions. Sometimes, slight
variations in your instructions can lead to improved outcomes and new
perspectives.

03. CHAIN OF THOUGHT APPROACH:


Welcome to Chain-of-Thought Approach.
After watching this video, you'll be able to explain the chain-of-thought approach
to prompt engineering. You'll also learn to apply this concept to develop prompts
with compelling and diverse examples, allowing generative AI models to
understand the context and generate
coherent responses with ongoing conversation.
Chain-of-thought is a prompt-based learning approach that involves
constructing a series of prompts or questions to guide the model to generate a
desired response. Using this approach, you can demonstrate the cognitive
abilities of the generative AI models and better explain their reasoning process. It
involves breaking down a complex task into smaller and easier ones through a
sequence of more straightforward prompts, with each prompt building upon the
previous one to guide the models toward the intended outcome.
Before posing a question directly to the model, you feed it related
questions along with their corresponding solutions. This chain of prompts helps
the model think about the problem
and use the same strategy to answer similar questions correctly. In simpler
words, the prompt includes a question and an accurate answer to that question to
provide the required context and step-by-step reasoning for the model; then it
poses a different question to be answered using the same line of reasoning.
You'll understand this better with the help of an example. If you pose a
mathematical problem to the model like, Matthew has six eggs. He buys two
more trays of eggs. Each tray has 12 eggs.
How many eggs does he have now? The model's reasoning can go off the rails
here, especially for a complex problem. To train the model with the appropriate
reasoning involved in solving such questions,
you can pose a similar question like this one. Mary has eight radishes. She used
five radishes to prepare the dinner. The next morning she bought 10 more
radishes. How many radishes does she have now?
Along with the questions, you must provide the correct logical solution as
well. Mary had eight radishes. She cooked dinner using five of them, so she had
8-5=3 radishes left with her. The next morning, she bought 10 more, so she has
3+10=13 radishes now. This will help the model understand the logic involved
and come up with the right solution.
Your final prompt should include the following: A related question with the
appropriate solution, and then another question that can be solved using the
same logic or reasoning. In this prompt, you can see there's a question, a logical
solution to the question, and another question that can be solved using the same
logic.
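The final prompt structure described here can be assembled as plain text, using the radish example as the worked demonstration and the egg problem as the new question. The Q/A labels are an assumed formatting convention for the sketch.

```python
# Sketch of a Chain-of-Thought prompt: a worked question-and-solution pair
# supplies the step-by-step reasoning, followed by the question we actually
# want answered with the same logic.

worked_example = (
    "Q: Mary has eight radishes. She used five radishes to prepare dinner. "
    "The next morning she bought 10 more. How many radishes does she have now?\n"
    "A: Mary had 8 radishes. She cooked with 5, so 8 - 5 = 3 were left. "
    "She bought 10 more, so she has 3 + 10 = 13 radishes now.\n"
)

new_question = (
    "Q: Matthew has six eggs. He buys two more trays of eggs, and each tray "
    "has 12 eggs. How many eggs does he have now?\nA:"
)

cot_prompt = worked_example + "\n" + new_question
# The intended reasoning for the new question: 6 + 2 * 12 = 30 eggs.
```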
In this video, you learned that the chain-of-thought approach strengthens the
cognitive abilities of generative AI models and elicits a step-by-step thinking
process. This approach involves providing the model with related questions and
their corresponding solutions to train it in the logic behind solving
these questions, so that the same logic can be applied to solve similar
questions.

The Chain-of-Thought approach in Prompt Engineering


The Chain-of-Thought (CoT) methodology significantly bolsters the cognitive
performance of AI models by segmenting complex tasks into more manageable
steps. By adopting this prompting strategy, AI models can demonstrate
heightened cognitive abilities and offer a deeper understanding of their
reasoning processes.
This approach is an example of prompt-based learning, and it requires feeding
the model with questions and their corresponding solutions before posing related
subsequent questions to it. In other words, our CoT prompt teaches the model to
reason about the problem and mimic the same reasoning to respond to further
queries correctly.
Chain-of-Thought can be used in various ways to improve the chatbot's
reasoning, especially in areas where it's feeble. However, a more valuable use is
when it comes to exploring subjects
more in-depth. Instead of asking a generic question, we can break it down into
steps we want the model to consider to develop a much richer and valuable
answer.
The beauty of a Chain-of-Thought is that it can branch out in different directions,
exploring numerous aspects and perspectives related to the initial topic.

Zero-Shot Chain-of-Thought Prompting


There are a few words that, when added to the prompt, are likely to solicit better
answers since they invite the AI to do step-by-step reasoning, much like a human
would when trying to come to a resolution.
According to researchers, two effective phrases are:
Let's think step by step.
And:
Let's work this out in a step-by-step way to be sure we have the right answer.
These words are helpful but not magic. So feel free to use them to improve your
results, but they are usually best used along with the other techniques discussed
in this course.
For example, tucking these words at the end of our original standard prompt still
generates an incorrect answer with the GPT-3.5 model available at the time of
writing.

In other words, our traditional Chain-of-Thought approach illustrated above, in
which we use our prompt to “teach” the model the desired outcome, is still
superior.
Still, every time we get better results with just a few words, we should consider
that option, especially since this prompting technique solicits longer and more
elaborate answers, which is helpful for generating blog posts, essays, guides,
etc.
Activity: Zero-Shot CoT Prompting. Using the phrases "Let's think step by step"
or "Let's work this out in a step-by-step way," pose a question about an
unfamiliar topic and see if the AI can produce a more reasoned, detailed
response. Expected outcome: Assess the quality and depth of the AI's answer
compared to a traditional prompt.
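The zero-shot variant needs no worked examples at all; a trigger phrase is simply appended to the question. A minimal sketch, using the two phrases quoted above (the helper name is invented, and no model is called):

```python
# Zero-shot chain-of-thought: append a reasoning trigger phrase to a plain
# question instead of supplying worked examples.

REASONING_TRIGGERS = [
    "Let's think step by step.",
    "Let's work this out in a step-by-step way to be sure we have the right answer.",
]

def zero_shot_cot(question, trigger_index=0):
    """Append a step-by-step trigger phrase to a plain question."""
    return f"{question}\n{REASONING_TRIGGERS[trigger_index]}"

print(zero_shot_cot(
    "If a train leaves at 3 pm and travels for 2 hours, when does it arrive?"
))
```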

Deep Dive using Chain-of-Thought


The downside is that we had to develop a list requiring knowledge of the subject
or at least research into it, and this is time-consuming.
On the plus side, we didn't have to retrain the model, which would be truly time-
consuming and potentially expensive. Instead, the prompt split the "problem"
into smaller steps worth exploring and leveraged the existing model training to
compute a reply.
Moreover, these starting points can lead to various interconnected thoughts and
ideas from the model. The beauty of a Chain-of-Thought is that it can branch out
in different directions, exploring numerous aspects and perspectives related to
the initial topic.
We can ask specific questions at any time after the model has already shown us
a broader understanding of the topic.
Select a broad topic, for instance, "Ocean Conservation." Then, list various facets
of the topic, like plastic pollution, overfishing, coral reef degradation, etc. Use the
Chain-of-Thought approach to get the AI's comprehensive overview of the
topic. Expected Outcome: Evaluate the AI's response to see if it covers the topic
more extensively and insightfully than a regular prompt.

04. TREE-OF-THOUGHT APPROACH:


Welcome to the tree-of-thought approach. After watching this video, you'll be
able to explain the tree-of-thought approach to prompt engineering. You'll also
learn to apply this approach to draft prompts for generating tailored responses.
The tree-of-thought is an innovative technique built to expand the capabilities of
the chain-of-thought prompting approach. It enables generative AI models to
demonstrate advanced reasoning capabilities. It involves hierarchically
structuring a prompt or query, akin to a tree, to specify the desired line of
thinking or reasoning for the model.
This approach is particularly useful when you want to provide explicit
instructions or constraints to the model to ensure it generates the desired
output. This method holds immense potential for unlocking new solutions
and tackling complex problems. Let's understand how the tree-of-thought
approach works.
It involves generating multiple lines of thought, resembling a decision tree, to
explore different possibilities and ideas. Unlike traditional linear approaches,
this technique allows the model to evaluate and pursue multiple paths
simultaneously. Each thought or idea branches out, creating a treelike structure
of interconnected thoughts. The model proceeds by assessing every possible
route, assigning numerical values according to its predictions of outcomes, and
eliminating less promising lines of thought, ultimately pinpointing the most
favourable choices. You will understand this better with the help of an example.
Suppose you want the model to design recruitment and retention strategies
for attracting skilled remote employees for an e-commerce business. You want
the model to employ the tree-of-thought approach to do that. You can give the
following prompt instructions to the model.
Imagine three different experts answering this question. All experts will write
down one step of their thinking and then share it with the group. Then all experts
will go on to the next step, etc.
If any expert realizes they're wrong at any point, then they leave.
Along with the prompt instruction, you will also give the original question for the
prompt: Act as a human resource specialist. Design a recruitment and retention
strategy for an e-commerce business, focusing on attracting and retaining skilled
remote employees. Building such a prompt instruction will allow the generative
AI model to consider a step-by-step process and think logically. It will also make
it consider intermediate thoughts, build upon them, and explore branches that
may or may not lead somewhere. This practice will maximize the use and
capabilities of the model, rendering more useful results.
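The "panel of experts" instruction above can be assembled programmatically. In this sketch, the instruction text paraphrases the pattern from this section, the helper name is invented, and only prompt assembly is shown; no model is called.

```python
# Build a tree-of-thought "panel of experts" prompt: a reusable instruction
# template is prefixed to the actual task.

TOT_INSTRUCTION = (
    "Imagine {n} different experts answering this question. "
    "All experts will write down one step of their thinking and then share it "
    "with the group. Then all experts will go on to the next step, and so on. "
    "If any expert realizes they're wrong at any point, they leave."
)

def build_tot_prompt(task, num_experts=3):
    """Prefix a task with the tree-of-thought panel-of-experts instruction."""
    return f"{TOT_INSTRUCTION.format(n=num_experts)}\n\nThe question is: {task}"

task = (
    "Act as a human resource specialist. Design a recruitment and retention "
    "strategy for an e-commerce business, focusing on attracting and retaining "
    "skilled remote employees."
)
print(build_tot_prompt(task))
```

Keeping the instruction and the task separate makes it easy to reuse the same panel-of-experts framing for other questions.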
In this video, you learned that the tree-of-thought approach is an innovative
technique that builds upon the chain-of-thought approach and involves
structuring prompts hierarchically, akin to a tree, to guide the model's
reasoning and generation of output. This approach is particularly valuable when
explicit instructions or constraints are necessary for desired outputs. It enables
the model to explore various possibilities and ideas simultaneously, branching
out like a decision tree.

The Tree-of-Thought approach to Prompt Engineering


At its core, Chain-of-Thought prompting solicits a step-by-step thinking process
from the LLM. Compared to the naive/standard/Input-Output prompting, we get
far better results with it.
There are some limitations, however. In a research paper (arXiv:2305.10601),
Yao et al. compared various approaches to prompting, including naive
prompting, CoT, and a new technique called Tree-of-Thought (ToT).

The researchers remarked that CoT didn't perform as well because it “lacks
mechanisms to try different clues, make changes to decisions, or backtrack.”
And that's the main limitation of CoT. When considering a complex problem,
humans (well, systematic and logical ones, at least) tend to explore a tree of
thoughts, evaluating what works and what doesn't, backtracking if needed,
jumping back to a previous “node” in the tree if it was more beneficial or
promising for the resolution of the problem.

Tree-of-thought prompting uses a similar approach that not only invites the AI to
consider a step-by-step process and to think logically but also makes it consider
intermediate thoughts, building upon them and exploring branches that may or
may not lead somewhere. This exploration maximizes the use of the LLM and its
capabilities, leading to drastically more useful results.

Tree-of-Thought (ToT) Prompting is an innovative method that expands upon and
refines the existing Chain-of-Thought prompting approach. By incorporating ToT
Prompting, LLMs can demonstrate enhanced reasoning skills. Moreover, this
technique allows these models to correct their mistakes autonomously and
continually build upon their knowledge.

Dave Hulbert suggested a few rather convincing prompts that leverage this
approach and, anecdotally, yield great results. I particularly like how he
incorporates the Persona pattern and recommend you approach ToT prompting
using his prompts or similar variations you might develop yourself.

Additional Thoughts
Specificity in Instructions: In a real-world scenario, while the generic steps are
valuable, for more actionable results, you can be more specific in your
instructions. For instance, you might request each "expert" to provide two
actionable tactics or tools per step they suggest. And you can, of course, request
specific experts or expertise.

Integration with Real Data: If you can supply the LLM with specific data about
your business (like target audience demographics, current website analytics, or
specific marketing goals), it can potentially refine its responses even further. Just
be mindful of potential confidential information.

Segmented Inquiry: As briefly mentioned before, once you have a broad


strategy laid out, you can dive deeper into each individual step, asking the
experts to further expand on their suggestions, or even query different experts
about the same step to gather multiple perspectives.

05. LESSON SUMMARY:


Congratulations! You have completed this lesson.

At this point, you have learned the techniques for skilfully crafting prompts that
effectively steer generative AI models. You now know the various prompt
engineering approaches that optimize the response of generative AI models.

You explored techniques, including zero-shot and few-shot prompting, through
which text prompts can improve the reliability of large language models (LLMs)
and yield greater benefits from their responses. You learned how using different
approaches, such as the Interview Pattern, Chain-of-Thought, and Tree-of-Thought,
to write prompts helps generative AI models produce more specific, contextual,
and customized responses to the user's needs. You also had the opportunity to
apply each of these approaches through hands-on lab experiences, and you
heard what experts from the field had to say about the role of prompt
engineering in AI.

Specifically, you learned that:


- The techniques through which text prompts can improve the reliability
and quality of the output generated from LLMs are task specification,
contextual guidance, domain expertise, bias mitigation, framing, and the
user feedback loop.

- The zero-shot prompting technique refers to the capability of LLMs to
generate meaningful responses to prompts without needing prior training.

- The few-shot prompting technique used with LLMs relies on in-context
learning, wherein demonstrations are provided in the prompt to steer the
model toward better performance.

- The benefits of using text prompts with LLMs effectively include
increasing the explainability of LLMs, addressing ethical considerations,
and building user trust.

- The interview pattern approach is superior to the conventional prompting
approach because it allows a more dynamic and iterative conversation when
interacting with generative AI models.

- The Chain-of-Thought approach strengthens the cognitive abilities of
generative AI models and solicits a step-by-step thinking process.

- The Tree-of-Thought approach is an innovative technique that builds upon
the Chain-of-Thought approach and involves structuring prompts
hierarchically, akin to a tree, to guide the model's reasoning and output
generation.

06. READING - PROMPT HACKS:


Objective
After completing this reading, you will be able to:

 Explain the concept of prompt hacks.
 Apply them to generate more effective outputs from generative AI models
for text and image generation.
 Distinguish between text prompts and prompt engineering.

Introduction
Prompt hacks in generative AI refer to techniques or strategies that involve
manipulating the prompts or inputs provided to a generative AI model, such as a
large language model (LLM) or an image generation model, to produce desired or
specific outputs. These hacks include carefully crafting the prompts to influence
the model's behavior and generate outputs that align with the user's intentions.
They improve the performance of LLMs by:
 Improving the quality and accuracy of LLM outputs: By carefully
crafting the prompt, you can guide the LLM toward the desired output and
reduce the likelihood of errors.

 Enabling LLMs to perform new and innovative tasks: By combining
prompts with other inputs, such as images or code, you can enable LLMs
to perform tasks that they would not be able to perform otherwise.

 Making LLMs more accessible and user-friendly: Prompt hacks can
make it easier for people to use LLMs effectively.

Prompt Hacks and Text-Generation


Prompt hacks can be used with LLMs to generate more effective responses from
text-generating generative AI models in the following ways:
 Using special modifiers: LLMs have special modifiers that can control
the style, tone, and other properties of the generated output. For example,
you might use the modifier "write in a humorous style" to generate a
funny poem or story.
 Providing context and examples: The more context and examples you
provide to the LLM, the better it will be able to understand what you want
and generate the desired output. For example, if you wish the LLM to
generate a code snippet for a specific task, you might provide it with a
detailed description and some examples of similar code snippets.
 Combining prompts with other inputs: Prompts can be combined with
other inputs, such as images or code, to generate more complex and
informative outputs. For example, you might prompt an LLM to generate a
description of an image while providing it with the image itself.
Here's an example of using prompt hacks with text-generating models.
You need a poem about a cat. So, you use the following prompt:

Prompt: Write a poem about a cat.


Output:
A furry little creature,
With four paws and a tail,
It loves to chase mice,
And play with its ball.
Now, this is a fairly straightforward poem about a cat. However, you require a
slightly innovative output. You can use prompt hacking to bring in more creativity
or humor. For example, you could add the following modifier to the prompt:
Prompt: Write a poem about a cat in the style of a gangsta rapper.
Output:
Yo, I'm a cat, and I'm here to say,
I'm the coolest cat around today.
I got fur so soft and eyes so green,
And I'm the best hunter you've ever seen.
You now have a creative and funny poem about a cat!

Prompt Hacks and Image Generation


Did you know you can use prompt hacking with LLMs to create images?
Directing image generation models through LLMs is an emerging method that is
gaining popularity in artificial intelligence. This approach entails employing an
LLM to guide an image generation model, like DALL-E 2 or Imagen, to produce
more precise and lifelike images.
How exactly does this work?
You provide the LLM with a text description of the image you want and then use
the LLM's output to guide the image generation model.
For example, suppose you want to generate an image of a cat sitting on a couch.
You could provide the LLM with a text prompt like this:
Prompt: "A fluffy orange cat sitting on a red couch, looking at the camera."
The LLM would then generate a response like this:
Output:
"A fluffy orange cat is sitting on a red couch. It is looking directly at the camera.
Its eyes are green, and its fur is soft and smooth."
You can then use this response to guide the image generation model to generate
an image of a cat sitting on a couch such that it matches the description
provided by the LLM.
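The two-stage flow described above can be made concrete with a short sketch. Both functions below are stand-ins for real API calls (a chat model and an image model such as DALL-E); their names, and the enriched text they return, are invented for illustration.

```python
# Two-stage prompt hack for image generation: an LLM first expands a short
# description, and its richer output is then passed to an image model.

def expand_description(short_prompt):
    # Stand-in for an LLM call that enriches the description.
    return f"{short_prompt} Its eyes are green, and its fur is soft and smooth."

def generate_image(detailed_prompt):
    # Stand-in for an image-generation call; returns a request record.
    return {"model": "image-generator", "prompt": detailed_prompt}

detailed = expand_description(
    "A fluffy orange cat sitting on a red couch, looking at the camera."
)
image_request = generate_image(detailed)
print(image_request["prompt"])
```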
Here's another example. Imagine you need a background image for the poem
"Twinkle twinkle little star."
You can use a prompt hack here and ask the LLM to suggest the prompt for
generating this image.
Prompt: Consider the poem "Twinkle twinkle little star." Can you create a text
description of an image that represents this poem?
The following image depicts the response generated by ChatGPT using this
prompt.
Output:

The prompt hack comes after this.


Prompt: Can you suggest a prompt that will be helpful to generate a relevant
image for description: "In the serene backdrop of a dark velvet sky, a single
radiant star takes center stage. The star glistens with a soft silver glow, casting a
gentle, shimmering light onto the tranquil night landscape. The surrounding
darkness is dotted with more distant stars, creating a celestial tapestry that
seems to stretch forever. The star in focus stands out like a diamond in the sky, a
beacon of hope and wonder. It symbolizes the innocence and curiosity of
childhood, reminding us of the simple joys of looking up at the night sky and
dreaming."
Output:

Now, if you need to use the same prompt for DALL-E, you can also use a prompt
hack here!

Prompt: Can you rewrite the prompt for DALL-E?

Output:

Bingo!

You now have what you want. You can use these images for the poem the way
you'd like.
Prompt Hacks and Prompt Engineering
Prompt hacking and prompt engineering are closely related fields, but they have
some key differences.
Prompt hacking is the use of prompts to manipulate the output of an LLM in a
way that is unexpected or unintended, whereas prompt engineering is the
systematic design and development of prompts for LLMs.

It is important to note that the distinction between prompt hacking and prompt
engineering is not always clear-cut, and some techniques can be used for both
purposes. For example, special modifiers can control the style and tone of the
output to generate humorous or creative results, and they can also improve the
performance of an LLM on a specific task, such as generating text in a specific
style.

Tips for Powerful Prompt Hacking


Here are some additional tips for prompt hacking:
 Be creative, and don't be afraid to try new things.
 Be specific and clear in your instructions.
 Use the LLM's documentation to learn more about its capabilities and
limitations.
 Experiment with different prompts and see what works best for you.
With some practice, you can use prompt hacking to generate high-quality and
creative outputs from LLMs.
To conclude, prompt hacking is a powerful technique that can be used to get the
most out of LLMs. However, it is a relatively new field, and there is no one-size-
fits-all approach. The best way to learn how to hack prompts is to experiment
and discover what works for you.

Summary
You learned the concept of prompt hacks in generative AI. You also learned how
they can be used with LLMs for better text and image generation. Finally, you
learned the difference between prompt hacking and prompt engineering.

MODULE 3: COURSE QUIZ, PROJECT, AND WRAP UP


This module includes a graded quiz to test and reinforce your understanding of
the concepts covered in the course, a glossary to enhance comprehension of
generative AI-related terms, and a final project that provides an opportunity to
gain hands-on experience with those concepts. It also includes optional content
covering techniques for writing effective prompts for image generation, and
introduces Prompt Lab, a prompting tool designed to maximize your prompt
engineering capabilities in IBM watsonx.

Learning Objectives:
- Apply prompt engineering techniques for writing effective prompts for
image generation
- Describe the user interface of Prompt Lab in IBM watsonx.
- Demonstrate understanding of the course concepts through the graded
quiz and project.
- Plan for the next steps in your learning journey.

01. TEXT-TO-IMAGE PROMPTS USING STABLE DIFFUSION:


Welcome to text-to-image prompt techniques.
After watching this video, you'll be able to explain common image prompting
techniques used to improve the quality and impact of images, and apply these
techniques to write better prompts for image generation.
Images are an essential part of communication and are used in fields such as
marketing, advertising, education, and journalism, among many others.
Nonetheless, certain images convey emotions more effectively than others.
An image prompt is a text description of an image that you want to generate. It
can be as simple as a single word or phrase, or it can be more detailed,
describing the composition, colors, and mood of the image. To increase the
impact of images obtained through generative AI models and make them more
convincing and compelling, you can use image prompting techniques.
These techniques aim to improve the quality, diversity, and relevance of images
produced by generative AI models. There are different image prompting
techniques that can be used to improve the impact of images. Let's learn about
these techniques one by one.
Style modifiers are descriptors used to influence the artistic style or visual
attributes of images produced by generative AI models. These descriptors can
help the model produce graphics in an innovative style while conforming to the
structure and content of the input prompt. You can modify various visual
elements of an image, like color, contrast, texture, shape, and size, to generate
output that is aesthetically appealing and visually pleasing. Your prompt can
include information about miscellaneous art styles, historical art periods,
photography techniques, types of art materials used, and even traits of
well-known brands or artists you want the model to emulate. All this information
helps the generative model understand the desired appearance or style of the
output image. Here are a few examples of style modifiers used in image
prompts; the style modifiers used in these prompts have been highlighted.
Moving on to the next image prompting technique: quality boosters. High-quality
images are more convincing and reliable than low-quality ones. Images with low
resolution frequently exhibit blurriness and pixelation, making it difficult for
viewers to discern the finer details within the image. On the other hand,
high-resolution images guarantee essential visibility and readability, and the
perceived worth of an image can be raised by using high-quality graphic design.
Quality boosters are terms used in an image prompt to enhance the visual
appeal and improve the overall fidelity and sharpness of the output. These
specific terms can direct the generative AI model to perform steps like noise
reduction, sharpening, color correction, and resolution enhancement. You can
use terms like high resolution, 2k, 4k, hyper-detailed, sharp focus,
complementary colors, and many others in your image prompts as quality
boosters. They can enhance specific features of the image, resulting in more
coherent output. Let's look at some examples to understand how quality
boosters can be used in image prompts. Terms such as "highlights the texture,"
"4k resolution," "sharp, crisp details and fine lines," "complementary colors,"
"blurred background," and "stand out" are quality boosters used in the given
image prompts.
The third image prompting technique is repetition. This technique leverages the
power of iterative sampling to enhance the diversity of images generated by the
model. Repetition involves emphasizing a particular visual element within an
image to create a sense of familiarity for the model, allowing it to focus on a
specific idea or concept you want to highlight. This can be accomplished by
repeating the same word or a similar phrase within the image prompt.
Repetition helps reinforce the message conveyed through the image and
increases its memorability. Rather than producing just one image based on a
prompt, the model generates multiple images with subtle differences, resulting
in a diverse set of potential outputs. This technique is particularly valuable
when generative models are confronted with abstract or ambiguous prompts for
which numerous valid interpretations are possible. Let's look at some examples
of repeated words used in an image prompt. Words such as tiny, dense,
enormous, vast, serene, clear, and lush have been repeated many times to
focus on a specific idea.
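Repetition is simple enough to mechanize. A minimal sketch (the helper name and prompt text are invented for illustration):

```python
# The repetition technique: repeat a key descriptor within the prompt to keep
# the model focused on that idea.

def emphasize_by_repetition(subject, descriptor, times=3):
    """Repeat a descriptor to reinforce it within an image prompt."""
    repeated = " ".join([descriptor] * times)
    return f"a {repeated} {subject}"

print(emphasize_by_repetition("forest with a hidden waterfall", "lush"))
# a lush lush lush forest with a hidden waterfall
```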
The fourth image prompting technique is weighted terms. Weighted terms are
words or phrases that can have a powerful emotional or psychological impact.
For example, words such as free, limited time offer, and guaranteed are often
used in advertising to elicit a sense of urgency, security, and trust. Similarly,
words such as luxury, premium, and exclusive create a sense of exclusivity and
sophistication.
Generative AI models allow you to give positive or negative weights to such
terms to emphasize or de-emphasize a certain emotion. Using weighted terms
in an image prompt can help create images that are memorable and convincing
and can draw emotional responses from the audience. Here are some examples
of weighted terms used in an image prompt.
As you can see in the first example, a weight of positive ten is given to the word
warm, whereas the weight of crackling is positive eight. This means the
generative model must focus more on the word warm and a little less on the
word crackling. Similarly, in the second example, a positive six weight is given
to the word shimmering, and the weight of neon-lit is positive eight, so the
model should focus more on neon-lit. In the last example, a negative weight of
six is given to the word colorful, and a positive weight of ten is given to exotic,
so the model must emphasize the word exotic and de-emphasize the word
colorful.
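The weighting idea can be sketched as a small prompt builder. The "term:+N" notation below is illustrative only, following this section's positive/negative-number convention; real image generators each use their own weighting syntax (for instance, some Stable Diffusion front ends write (term:1.2)).

```python
# Append weighted terms to an image prompt: positive weights emphasize a
# term, negative weights de-emphasize it. Notation is illustrative only.

def weight_terms(base_prompt, weights):
    """Render a base prompt plus a mapping of term -> signed weight."""
    weighted = ", ".join(f"{term}:{weight:+d}" for term, weight in weights.items())
    return f"{base_prompt}, {weighted}"

prompt = weight_terms(
    "a cabin interior with a fireplace",
    {"warm": 10, "crackling": 8},
)
print(prompt)  # a cabin interior with a fireplace, warm:+10, crackling:+8
```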
The fifth image prompting technique is fix deformed generations. This technique
is used to correct deformities or anomalies that may impact the effectiveness of
the image. Deformities in an image can include distortion, particularly of human
body parts like hands or feet, as well as pixelation or other image quality issues
that detract from the visual appeal and clarity of the image. These issues can be
mitigated to some extent by using good negative prompts. Here are some
examples of this technique used in image prompts. You can see that in all these
examples, negative terms have been used to mitigate the issues of deformed
images.
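The negative-prompt idea above can be sketched as follows. Many Stable Diffusion interfaces accept a separate negative-prompt field; the field names and the example terms here are illustrative assumptions, not a specific tool's API.

```python
# Pair a positive prompt with a negative prompt to discourage deformities
# such as distorted hands or pixelation.

def build_image_request(prompt, negative_terms):
    """Bundle a positive prompt with comma-joined negative terms."""
    return {
        "prompt": prompt,
        "negative_prompt": ", ".join(negative_terms),
    }

request = build_image_request(
    "portrait of a violinist, sharp focus, 4k",
    ["deformed hands", "extra fingers", "blurry", "pixelated"],
)
print(request["negative_prompt"])
```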
In this video, you learned that image prompting techniques play a vital role in
advancing the image generation capabilities of generative AI models. Style
modifiers, quality boosters,
repetition, weighted terms, and fix deformed generations are five techniques
that can be used to improve the impact of images produced. By incorporating
these techniques, one can create more memorable, engaging, and persuasive
visuals that can effectively communicate the intended message.

Generative AI: Prompts (Hands-on Lab)


Prompts are particularly significant in image generation, as the generative AI
model depends on textual cues to generate the desired image. There are various
types of images, and they can be classified based on attributes like color, style,
resolution, and so on. Hence, it's critical to specify the image type you require
and the details it should encompass when crafting textual prompts for image
generation.

Enhance Your Prompt with Weighted Terms for Emphasis:


Weighted terms let you enhance or diminish particular objects or emotions in an
image. Generative AI models allow you to give positive or negative weights to
such terms to emphasize or de-emphasize a certain object or emotion in the
image.

GLOSSARY: PRINTED AS ATTACHED

FINAL PROJECT: APPLYING PROMPT ENGINEERING TECHNIQUES AND


BEST PRACTICES (HANDS-ON LAB):
Tree-of-thought prompting not only invites the AI to consider a step-by-step
process and to think logically but also makes it consider intermediate thoughts,
building upon them and exploring branches that may or may not lead
somewhere. This exploration maximizes the use of large language models
(LLMs) and their capabilities, leading to drastically more useful results.

The interview pattern approach to prompt engineering involves designing


prompts by simulating a conversation or interacting with the model in the style
of an interview.

CONGRATULATIONS AND NEXT STEPS:


Congratulations on successfully completing this course! We hope you found it
enriching.
In this course, you learned about prompts and prompt engineering in generative
AI. You learned that writing effective prompts involves four key dimensions:
clarity, context, precision, and role-play.
You explored common prompt-engineering tools like IBM watsonx Prompt Lab,
Spellbook, Dust, and PromptPerfect.
You gained knowledge of text-prompt techniques like zero-shot and few-shot,
which improve the reliability and quality of large language models (LLMs).
You were also introduced to various prompt engineering approaches: Interview
Pattern, Chain-of-Thought, and Tree-of-Thought.
This foundational knowledge of prompt engineering will enable you to effectively
utilize the capabilities of generative AI to produce precise and relevant
responses.
This concludes the Generative AI: Prompt Engineering course. However, it only
marks a milestone in your ongoing journey to delve deeper into the world of
generative AI.

Next Steps:
This course is part of the Generative AI for Everyone specialization, which
provides learners with comprehensive knowledge and practical skills to leverage
the power of generative AI. This specialization helps learners enhance their
professional prospects and equips them with sought-after skills for career
opportunities in different sectors.

If you have not yet explored the other courses in the specialization, we
recommend doing so. The specialization includes the following courses:
 Course 1: Generative AI: Introduction and Applications
 Course 2: Generative AI: Prompt Engineering Basics
 Course 3: Generative AI: Foundation Models and Platforms
 Course 4: Generative AI: Impact, Considerations, and Ethical Issues
 Course 5: Generative AI: Business Transformation and Career Growth

As the next step, you can also consider exploring other role-based
specializations:

 Applied AI Professional Certificate


This professional certificate gives you a firm understanding of AI technology, its
applications, and its use cases. You will also explore the capabilities and
applications of generative AI. Additionally, you will learn about prompt
engineering, enabling you to optimize the outcomes produced by generative AI
tools.

 Generative AI for Data Scientists


In this specialization, you will understand the basics of generative AI and its
real-world applications, learn about generative AI prompt engineering concepts
and approaches, and explore commonly used prompt engineering tools. Finally,
you will also learn to apply generative AI tools and techniques throughout the
data science methodology for data augmentation, data generation, feature
engineering, model development, model refinement, visualizations, and insights.

 Generative AI for Software Developers


In this specialization, you will learn the basics of generative AI, including its
uses, models, and tools, and explore various prompt engineering approaches.
You will also boost your programming skills by learning to leverage generative
AI to design, develop, translate, test, document, and launch applications and
their code, and gain hands-on experience using generative AI tools and models,
such as GitHub Copilot, OpenAI ChatGPT, and Google Gemini, for various
software engineering tasks.
 Generative AI for Data Analysts Specialization (Coming Soon)
 Generative AI for Data Engineers Specialization (Coming Soon)
 Generative AI for Cybersecurity Professionals Specialization (Coming Soon)
 Generative AI for Customer Support Professionals Specialization (Coming
Soon)
 Generative AI for Project Managers Specialization (Coming Soon)
 Generative AI for Product Managers Specialization (Coming Soon)
 Generative AI for BI Analysts Specialization (Coming Soon)
We would love to hear from you. Your feedback and ratings will help us improve
this course.
Good luck!

READING: ABOUT THE OPTIONAL LESSON USING IBM WATSONX.AI:


In this optional lesson, you will explore the Prompt Lab in IBM watsonx.ai. This
lesson is optional because it requires access to the IBM watsonx platform, which
may not be available to everyone. Normally, a credit card is required to access
watsonx.ai; however, for learners in this course we are providing a feature code
that will enable you to sign up without one. To complete the lab, you will need to
perform the following:
1. Obtain a feature code for an IBM Cloud account.
2. Create the IBM cloud account using the feature code.
3. Provision your instance for watsonx.ai and explore the Prompt Lab.

EXPLORING IBM WATSONX PROMPT LAB:


When it comes to prompting foundation models, there isn't just one right answer.
There are usually multiple ways to prompt a foundation model for a successful
result.
The IBM watsonx Prompt Lab is a prompting tool designed to help you craft
prompts that achieve the desired or expected result from a given model and set
of parameters. In this activity, you will explore the key features of the watsonx
Prompt Lab and also learn how to prompt a foundation model.
