IBM - Generative AI - Prompt Engineering Basics- Course Notes
In this module, you will learn the concept of prompt engineering in generative AI.
You will also learn the best practices for writing effective prompts and assess
common prompt engineering tools.
Learning Objectives:
- Assess commonly used tools for prompt engineering
- Define a prompt and its elements
- Explain the concept and relevance of prompt engineering in generative AI
- Apply the best practices for writing effective prompts to obtain the desired
response
- Explore commonly used tools for prompt engineering
A. COURSE INTRODUCTION:
Let's get started with the Basics of Prompt Engineering.
An expert knows all the answers if you ask the right questions. Interestingly, this
is the same principle on which we design prompts for generative AI models,
prompts that we use to query and question AI applications such as chatbots,
image, audio or video generation tools, and even virtual worlds. Prompts
optimize the response of generative AI models. The power lies in the questions
that you ask. Knowing how to write a prompt that is effective and direct will allow
you to generate more precise and relevant content. A good question to ask right
now is what will I be able to do after completing this course?
This course invites all beginners, whether professionals, enthusiasts,
practitioners, or students who have a genuine interest in learning how to write
effective prompts. A course for everyone regardless of your background or
experience. By the end of this course, you'll be able to explain the concept and
relevance of prompt engineering in generative AI models, apply the best
practices for creating prompts, assess commonly used tools for prompt
engineering, and apply
common prompt engineering techniques and approaches for writing effective
prompts.
This is a focused course comprising three modules, each of which requires one to
two hours to complete.
In Module 1 of the course, you'll learn about the concept of prompt
engineering, starting with how to define a prompt and its elements. You'll learn to
apply the best practices for writing effective prompts, and assess common
prompt engineering tools such as IBM watsonx Prompt Lab, Spellbook, and
Dust.
In Module 2, you'll study various prompt engineering approaches like Interview
Pattern, Chain-of-Thought, and Tree-of-Thought. You'll discover techniques for
skillfully crafting prompts such as Zero-shot and Few-shot to produce precise
and relevant responses.
Module 3 requests your participation in a final project and presents a graded quiz
to test your understanding of course concepts. You can also visit the course
glossary and receive guidance on the next steps in your learning journey.
The course is curated with a mix of concept videos and supporting
readings. Watch all the videos to capture the full potential of the learning
material. You'll enjoy hands-on labs and a final project that demonstrates how to
optimize results by creating effective prompts in the IBM generative AI
classroom.
The course includes practice quizzes to help you reinforce your learning. At the
end of the course, you'll also attempt a graded quiz. The course also offers
discussion forums to connect with the course staff and interact with your
peers. Most interestingly, through the Expert viewpoint videos, you'll hear from
experienced practitioners who will share their perspectives on the tools
and approaches used in prompt engineering and the art of writing effective
prompts.
Are you ready to learn everything you can about writing prompts to unlock the
full
potential of generative AI? Let’s get started.
B. COURSE OVERVIEW:
This course is designed for practitioners and enthusiasts passionate about
learning prompt engineering techniques to unlock the full potential of generative
artificial intelligence (AI) models.
This course explains the techniques, approaches, and best practices for writing
effective prompts. You will learn to utilize these techniques to effectively guide
generative AI models and control their output to obtain the desired results.
You will learn about prompt techniques like zero-shot and few-shot, which can
improve the reliability and quality of large language models (LLMs). You will also
explore various prompt engineering approaches like Interview Pattern, Chain-of-
Thought, and Tree-of-Thought, which aim at generating precise and relevant
responses.
You will be introduced to commonly used prompt engineering tools like IBM
watsonx Prompt Lab, Spellbook, and Dust.
The hands-on labs included in the course offer an opportunity to optimize results
by creating effective prompts in the IBM Generative AI Classroom. You will also
hear from practitioners about the tools and approaches used in prompt
engineering and the art of writing effective prompts.
After completing this course, you will be able to:
Explain the concept and relevance of prompt engineering in Generative AI
models
Assess commonly used tools for prompt engineering
Apply best practices for writing effective prompts
Apply common prompt engineering techniques and approaches for writing
prompts
Course Content
This course is divided into three modules. It is recommended that you complete
one module per week or at a pace that suits you, whether it's a few hours every
day or completing the entire course over a weekend or even in one day.
Learning Resources
The course offers a variety of learning assets: videos, readings, hands-on labs,
expert viewpoints, discussion prompts, and quizzes.
The concepts are presented through videos and readings, complemented by
hands-on learning experiences in the labs.
Expert Viewpoints are videos that feature industry practitioners showcasing real-
world applications of the skills taught in this course.
Interactive learning is encouraged through discussions where you can connect
with the course staff and your peers.
Practice quizzes at the end of each module will test your understanding of
what you learned. The final graded quiz will assess your conceptual
understanding of the course.
Recommended Background
This course is relevant for anyone interested in exploring the field of generative
AI and requires no specific prerequisites.
The course uses simple, easy-to-understand language to explain the critical
concepts of generative AI without relying on technical jargon. The hands-on labs
provide an opportunity to practice the skills covered in the course and do not
require a programming background or college degree.
To derive maximum learning from this course, all that’s needed is active
participation in and completion of the various learning engagements offered
throughout the modules.
Good luck!
C. SPECIALIZATION OVERVIEW:
This course is part of multiple programs:
Generative AI Fundamentals specialization
Applied AI Developer Professional Certificate
Generative AI for Data Scientists Specialization
Generative AI for Software Developers Specialization
Specialization Content
The Generative AI Fundamentals specialization comprises five short courses.
Each course requires 4-6 hours of learners’ engagement time.
Generative AI: Introduction and Applications
Generative AI: Prompt Engineering Basics
Generative AI: Foundation Models and Platforms
Generative AI: Impact, Considerations, and Ethical Issues
Generative AI: Business Transformation and Career Growth
The Applied AI Developer Professional Certificate equips you with the skills to
design, build, and deploy AI applications on the web. The courses will also
enable you to apply pre-built AI smarts to your products and solutions.
Certificate Content
The Applied AI Developer Professional Certificate includes the following courses:
Introduction to Artificial Intelligence (AI)
Generative AI: Introduction and Applications
Generative AI: Prompt Engineering Basics
Building AI Powered Chatbots Without Programming
Python for Data Science, AI & Development
Developing AI Applications with Python and Flask
Building AI Applications with Watson APIs
Specialization Content
The Generative AI for Data Scientists Specialization includes the following
courses:
Generative AI: Introduction and Applications
Generative AI: Prompt Engineering Basics
Generative AI: Elevate your Data Science Career
Specialization Content
The Generative AI for Software Developers Specialization includes the following
courses:
Generative AI: Introduction and Applications
Generative AI: Prompt Engineering Basics
To understand this, let's add some context to the prompt discussed in the
previous example: "In recent decades, the global climate has undergone significant
shifts, leading to rising sea levels, increased storm intensity, and changing
weather patterns. These changes have had a severe impact on marine life. Write
an essay in 600 words analysing the effects of global warming on marine
life." This prompt will help the model generate output aligned with the
context. Input data is any piece of information provided by you as part of the
prompt. This can be used as a reference for the generative model to attain
responses with a specific set of details or ideas.
To provide input data, the same prompt can be reconstructed in the following
manner. "You have been provided with a data set containing temperature records
and measurements of sea levels in the Pacific Ocean. Write an essay in 600
words, analysing the effects of global warming on marine life in the Pacific
Ocean."
The output indicator offers benchmarks for assessing the attributes of the output
generated by the model. It can outline the tone, style, length, and other qualities
you desire from the output. In the prompt "Write an essay in 600
words, analysing the effects of global warming on marine life," the output
indicator specifies that the output generated should be an essay of 600 words.
It will be evaluated based on the clarity of the analysis and the incorporation of
relevant data or case studies. Each of these elements plays a role in helping the
generative AI model comprehend your requirements and give you the desired
output.
In this video, you learned that a prompt is any input or series of instructions
you provide to a generative model to produce a desired output. These
instructions help in directing the creativity of the model and assist it in producing
relevant and logical responses. The building blocks of a well-structured prompt
include instruction, context, input data, and output indicators. These elements
help the model understand your requirements and generate relevant responses.
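As a rough illustration of how these building blocks fit together, the helper below assembles the four elements into a single prompt string. This is a hypothetical sketch in plain Python, not part of any particular prompt engineering library.

```python
# Assemble a prompt from its four building blocks: instruction, context,
# input data, and output indicator. Empty elements are simply skipped.

def build_prompt(instruction, context="", input_data="", output_indicator=""):
    """Combine the four prompt elements, omitting any that are empty."""
    parts = [context, input_data, instruction, output_indicator]
    return "\n\n".join(p.strip() for p in parts if p.strip())

prompt = build_prompt(
    instruction="Write an essay analysing the effects of global warming on marine life.",
    context=("Rising sea levels, increased storm intensity, and changing "
             "weather patterns have had a severe impact on marine life."),
    input_data=("Data set: temperature records and sea-level measurements "
                "for the Pacific Ocean."),
    output_indicator="The essay should be 600 words long.",
)
print(prompt)
```

Ordering the elements as context, input data, instruction, and output indicator mirrors the reconstructed essay prompt in the text; other orderings work too, as long as the model can tell the elements apart.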
The last three steps are repeated until you're satisfied with the
response. Consequently, following several cycles of refinement, the final prompt
will take this form:
Example: Write an article highlighting how artificial intelligence is reshaping the
automobile industry focusing on the positive advancements, particularly in
autonomous driving and real-time traffic analysis, while thoroughly exploring
concerns related to intricate technical aspects such as decision-making
algorithms and potential cybersecurity breaches. Emphasize the implications
these concerns may have on vehicle safety. Ensure that the analysis is thorough,
is backed with examples, and encourages critical thinking.
Now let's summarize the importance and relevance of prompt engineering in
generative AI models.
Optimizing model efficiency.
Prompt engineering helps design intelligent prompts that allow the users to
harness the full capabilities of these models without requiring extensive
retraining.
Boosting performance for specific tasks.
Prompt engineering empowers generative AI models to deliver responses that
are nuanced and have a context, rendering them more effective for specific
tasks.
Prompt Instructions
Entering prompt instructions
Moving to the central portion, you'll find two input areas—one for your Prompt
Instructions and one to send the actual prompt.
The prompt instructions are a way to instruct the AI about the context of the
conversation that is about to happen or a way to ask it to behave in a specific
manner.
Occasionally, this differs from what we want, so placing such instructions directly
in the prompt textbox can be beneficial to limit them to that single question
without affecting the rest of the chat.
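The split described above, persistent prompt instructions versus a one-off prompt, maps onto the system/user message convention used by most chat-style model APIs. The sketch below is illustrative only; the `{"role": ..., "content": ...}` format follows the common convention, and no specific provider or library is assumed.

```python
# Pair persistent "prompt instructions" (a system message) with a single
# question (a user message). Instructions placed here shape the whole chat;
# instructions embedded in the question itself affect only that one turn.

def build_messages(instructions, question):
    """Return a chat-style message list; omit the system message if empty."""
    messages = []
    if instructions:
        messages.append({"role": "system", "content": instructions})
    messages.append({"role": "user", "content": question})
    return messages

msgs = build_messages(
    "You are a concise assistant that answers in one sentence.",
    "What is prompt engineering?",
)
print(msgs)
```

Passing an empty `instructions` string and folding the instruction into the question instead is the "single question" approach the text describes.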
PromptPerfect supports common text models such as GPT, Claude, StableLM, and
Llama, and image models such as DALL-E and Stable Diffusion.
For writing or optimizing a prompt, you first need to select the relevant model for
which you
want to optimize the prompt. Different models have different optimizing
strategies. You can also select the add-ons related to preview, quality, language,
and moderation. When you write a prompt, you can experiment with the
autocomplete feature that provides suggestions as you type. You can further
optimize the written prompt. Here you can see an example that reveals the
original prompt written by a user and the corresponding optimized prompt
generated by PromptPerfect. For a further level of optimization, you can optimize
and refine the prompt step by step in the streamline mode. You write a prompt,
optimize it, and again edit the prompt and optimize until you are satisfied with
the output.
Some other popular tools and interfaces provide resources for prompt
engineering or help you experiment with prompts. GitHub provides vast
repositories for prompt engineering and LLMs.
The guides, examples, and tools provided in these repositories help improve
prompt engineering skills.
OpenAI Playground is a web-based tool that helps to experiment and test
prompts with various models of OpenAI such as GPT. Playground AI platform
helps you experiment with text prompts for generating images with the Stable
Diffusion model.
LangChain is a Python library that provides functionalities for building and
chaining prompts.
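The chaining idea that libraries such as LangChain implement can be sketched without any dependencies: the model's response to one templated prompt becomes the input to the next. Here `fake_llm` is a stand-in for a real model call, used purely for illustration.

```python
# A library-free sketch of prompt chaining: each step's output feeds the next.

def fake_llm(prompt):
    # Placeholder: a real implementation would call an LLM API here.
    return f"<answer to: {prompt}>"

def run_chain(steps, initial_input):
    """Run each templated prompt in turn, threading the result through."""
    result = initial_input
    for template in steps:
        result = fake_llm(template.format(input=result))
    return result

chain = [
    "Summarize the following text in one sentence: {input}",
    "Translate this summary into French: {input}",
]
print(run_chain(chain, "Global warming is affecting marine life worldwide."))
```

In LangChain itself, the templates would be prompt objects and `fake_llm` a real model wrapper, but the control flow is the same.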
Finally, it's interesting to learn that prompts are also available for buying and
selling. One such example of a marketplace for prompts is
PromptBase. PromptBase supports prompts for
popular generative AI tools and models, including Midjourney, ChatGPT, DALL-E,
Stable Diffusion, and Llama. Through PromptBase, you can buy prompts specific
to your requirements
and specific to a model or tool. For example, you can buy a prompt to
generate comical cartoon characters through Midjourney. Also, if you have good
prompt crafting skills, you can upload and sell prompts through PromptBase. It
also supports crafting prompts directly on the platform and selling them on its
marketplace.
In this video, you learned that prompt engineering tools provide various features
and
functionalities to optimize prompts. Some of these functionalities
include suggestions for prompts, contextual understanding, iterative refinement,
bias mitigation, domain-specific aid, and libraries of predefined prompts. A few
common tools and platforms for prompt engineering include IBM
watsonx Prompt Lab, Spellbook, Dust, and PromptPerfect.
You experienced the application of naive prompting and persona patterns through
hands-on lab experiences. You were privy to what experts from the field had to
say about prompt engineering.
Specifically, you learned that:
A prompt is any input or a series of instructions you provide to a
generative model to produce a desired output.
These instructions help in directing the creativity of the model and assist it
in producing relevant and logical responses.
The building blocks of a well-structured prompt include instruction,
context, input data, and output indicators.
These elements help the model comprehend our necessities and generate
relevant responses.
Prompt engineering is designing effective prompts to leverage the full
capabilities of the generative AI models in producing optimal responses.
Refining a prompt involves experimenting with various factors that could
influence the output from the model.
Prompt engineering helps optimize model efficiency, boost performance,
understand model constraints, and enhance its security.
Writing effective prompts is essential for supervising the style, tone, and
content of output.
Best practices for writing effective prompts can be implemented across
four dimensions: clarity, context, precision, and role-play.
Prompt engineering tools provide various features and functionalities to
optimize prompts.
Some of these functionalities include suggestions for prompts, contextual
understanding, iterative refinement, bias mitigation, domain-specific aid,
and libraries of predefined prompts.
A few common tools and platforms for prompt engineering include IBM
watsonx Prompt Lab, Spellbook, Dust, and PromptPerfect.
Learning Objectives:
- Apply prompt engineering techniques for writing effective text prompts.
- Apply prompt engineering approaches to optimize the response of
generative AI models.
In this video, you learned the various techniques through which text prompts
can improve the reliability and quality of LLMs. Specifically, you looked at task
specification, contextual guidance, domain expertise, bias mitigation, framing,
and the user feedback loop. You also learned about the zero-shot and few-shot
techniques. Finally, you learned the several benefits of using text prompts with
LLMs effectively, such as increasing the explainability of LLMs,
addressing ethical considerations and building user trust.
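To make the zero-shot and few-shot distinction concrete, the sketch below builds both kinds of prompt for a sentiment classification task. The example reviews and labels are illustrative only.

```python
# Zero-shot: the task is described but no examples are given.
zero_shot = (
    "Classify the sentiment of this review as Positive or Negative:\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# Few-shot: labeled examples precede the query so the model can infer
# the task format and the expected labels from the demonstrations.
examples = [
    ("I love this phone, the camera is fantastic.", "Positive"),
    ("The screen cracked within a week.", "Negative"),
]

def few_shot_prompt(examples, query):
    """Prefix the query with labeled examples, one block per example."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

print(zero_shot)
print()
print(few_shot_prompt(examples, "The battery died after two days."))
```

Both prompts end with an unfinished "Sentiment:" line, inviting the model to complete it with a label.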
The user feedback loop helps refine the responses of the model in real
time. This, in turn, enhances the ability of users to optimize the results obtained.
For instance, you can pose a similar question like this one: Mary has eight
radishes. She used five radishes to prepare dinner. The next morning she bought
10 more radishes. How many radishes does she have now?
Along with the questions, you must provide the correct logical solution as
well. Mary had eight radishes. She cooked dinner using five of them, so she had
8-5=3 radishes left with her. The next morning, she bought 10 more, so she has
3+10=13 radishes now. This will help the model understand the logic involved
and come up with the right solution.
Your final prompt should include the following: A related question with the
appropriate solution, and then another question that can be solved using the
same logic or reasoning. In this prompt, you can see there's a question, a logical
solution to the question, and another question that can be solved using the same
logic.
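The final prompt described above can be written out as a string: a worked example (question plus step-by-step solution) followed by a new question to be solved with the same reasoning. The second question is a hypothetical one added for illustration.

```python
# A few-shot chain-of-thought prompt: one worked example with its reasoning,
# then a new question the model should answer using the same logic.

worked_example = (
    "Q: Mary has eight radishes. She used five radishes to prepare dinner. "
    "The next morning she bought 10 more radishes. "
    "How many radishes does she have now?\n"
    "A: Mary had 8 radishes. She cooked with 5, so 8 - 5 = 3 were left. "
    "She then bought 10 more, so 3 + 10 = 13. The answer is 13."
)

new_question = (
    "Q: Tom has 12 apples. He gave 4 to a friend, then picked 6 more. "
    "How many apples does he have now?\nA:"
)

cot_prompt = worked_example + "\n\n" + new_question
print(cot_prompt)
```

Ending the prompt with a bare "A:" cues the model to produce the same style of step-by-step solution it just saw.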
In this video, you learned that the chain-of-thought approach strengthens the
cognitive abilities of generative AI models and solicits a step-by-step thinking
process. This approach involves providing the model with related questions and
their corresponding solutions, to train the model with the logic behind solving
these questions so that the same logic can be applied to solve more such
questions.
This approach explores branching paths of thought, ultimately pinpointing the
most favourable choices. You will understand this better with the help of an example.
Suppose you want the model to design recruitment and retention strategies
for attracting skilled remote employees for an e-commerce business. You want
the model to employ the tree-of-thought approach to do that. You can give the
following prompt instructions to the model.
Imagine three different experts answering this question. All experts will write
down one step of their thinking and then share it with the group. Then all experts
will go on to the next step, etc.
If any expert realizes they're wrong at any point, then they leave.
Along with the prompt instruction, you will also give the original question for the
prompt: "Act as a human resource specialist. Design a recruitment and retention
strategy for an e-commerce business, focusing on attracting and retaining skilled
remote employees." Building such a prompt instruction will allow the generative AI
model to consider a step-by-step process and think logically. It will also make it
consider intermediate thoughts, building upon them, and exploring branches that
may or may not lead somewhere. This practice will maximize the use and
capabilities of the model, rendering more useful results.
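The tree-of-thought prompt described above can be assembled as follows. The instruction wording follows the example in the notes; the helper function itself is illustrative.

```python
# Build a tree-of-thought prompt by prepending the "three experts"
# instruction to the task-specific question.

TOT_INSTRUCTION = (
    "Imagine three different experts answering this question. "
    "All experts will write down one step of their thinking and then share it "
    "with the group. Then all experts will go on to the next step, and so on. "
    "If any expert realizes they're wrong at any point, then they leave."
)

def tree_of_thought_prompt(question):
    """Combine the reusable instruction with the actual question."""
    return f"{TOT_INSTRUCTION}\n\nThe question is: {question}"

print(tree_of_thought_prompt(
    "Act as a human resource specialist. Design a recruitment and retention "
    "strategy for an e-commerce business, focusing on attracting and retaining "
    "skilled remote employees."
))
```

Because the instruction is separated from the question, the same helper can be reused for any task that benefits from branching, step-by-step reasoning.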
In this video, you learned that the tree-of-thought approach is an
innovative technique that builds upon the chain-of-thought approach
and involves structuring prompts hierarchically, akin to a treelike structure,
to guide the model's reasoning and generation of output. This approach is
particularly valuable when explicit instructions or constraints are necessary
for desired outputs. It enables the model to explore various possibilities and
ideas simultaneously, branching out like a decision tree.
The researchers remarked that CoT didn't perform as well because it “lacks
mechanisms to try different clues, make changes to decisions, or backtrack.”
And that's the main limitation of CoT. When considering a complex problem,
humans (well, systematic and logical ones, at least) tend to explore a tree of
thought.
Tree-of-thought prompting uses a similar approach that not only invites the AI to
consider a step-by-step process and to think logically but also makes it consider
intermediate thoughts, building upon them and exploring branches that may or
may not lead somewhere. This exploration maximizes the use of LLM and its
capabilities, leading to drastically more useful results.
Dave Hulbert suggested a few rather convincing prompts that leverage this
approach and yield, anecdotally, great results. I particularly like how he
incorporates the Persona pattern and recommend you approach ToT prompting
using his prompts or similar variations you might develop yourself.
Additional Thoughts
Specificity in Instructions: In a real-world scenario, while the generic steps are
valuable, for more actionable results, you can be more specific in your
instructions. For instance, you might request each "expert" to provide two
actionable tactics or tools per step they suggest. And you can, of course, request
specific experts or expertise.
Integration with Real Data: If you can supply the LLM with specific data about
your business (like target audience demographics, current website analytics, or
specific marketing goals), it can potentially refine its responses even further. Just
be mindful of potential confidential information.
At this point, you have learned the techniques for skilfully crafting prompts that
effectively steer generative AI models. You now know the various prompt
engineering approaches that optimize the response of generative AI models.
You explored the techniques, including zero-shot and few-shot prompting, using
which text prompts can improve the reliability of large language models (LLMs)
and yield greater benefits from their responses. You learned how using different
approaches such as interview patterns, Chain-of-Thought, and Tree-of-Thought to
write prompts helps generative AI models produce more specific, contextual, and
customized responses to the user's needs. You even had the opportunity to
experience the application of each of these approaches through hands-on lab
experiences. You were privy to what experts from the field had to say about the
role of prompt engineering in AI.
- The several benefits of using text prompts with LLMs effectively are
increasing the explainability of LLMs, addressing ethical considerations,
and building user trust.
Introduction
Prompt hacks in generative AI refer to techniques or strategies that involve
manipulating the prompts or inputs provided to a generative AI model, such as a
large language model (LLM) or an image generation model, to produce desired or
specific outputs. These hacks include carefully crafting the prompts to influence
the model's behavior and generate outputs that align with the user's intentions.
They improve the performance of LLMs by:
Improving the quality and accuracy of LLM outputs: By carefully
crafting the prompt, you can guide the LLM toward the desired output and
reduce the likelihood of errors.
darkness is dotted with more distant stars, creating a celestial tapestry that
seems to stretch forever. The star in focus stands out like a diamond in the sky, a
beacon of hope and wonder. It symbolizes the innocence and curiosity of
childhood, reminding us of the simple joys of looking up at the night sky and
dreaming."
Output: [generated image]
Now, if you need to use the same prompt for DALL-E, you can also use a prompt
hack here!
Output: [generated image]
Bingo!
You now have what you want. You can use these images for the poem the way
you'd like.
Prompt Hacks and Prompt Engineering
Prompt hacking and prompt engineering are closely related fields, but they have
some key differences.
Prompt hacking is the use of prompts to manipulate the output of an LLM in a
way that is unexpected or unintended, whereas prompt engineering is the
systematic design and development of prompts for LLMs.
It is important to note that the distinction between prompt hacking and prompt
engineering is not always clear-cut. Some techniques can be used for both
purposes. For example, special modifiers that control the style and tone of
the output can be used to generate humorous or creative outputs. They can also
be used to improve the performance of an LLM on a specific task, such as
generating text in a specific style.
Summary
You learned the concept of prompt hacks in generative AI. You also learned how
they can be used with LLMs for better text and image generation. Finally, you
learned the difference between prompt hacking and prompt engineering.
Learning Objectives:
- Apply prompt engineering techniques for writing effective prompts for
image generation
- Describe the user interface for Prompt Lab in IBM watsonx.
- Demonstrate understanding of the course concepts through the graded
quiz and project.
- Plan for the next steps in your learning journey.
Next Steps:
This course is part of the Generative AI Fundamentals specialization, which
provides learners with comprehensive knowledge and practical skills to leverage
the power of generative AI. This specialization helps learners enhance their
professional prospects and equips them with sought-after skills for career
opportunities in different sectors.
If you have not yet explored the other courses in the specialization, we
recommend doing so. The specialization includes the following courses:
Course 1: Generative AI: Introduction and Applications
Course 2: Generative AI: Prompt Engineering Basics
Course 3: Generative AI: Foundation Models and Platforms
Course 4: Generative AI: Impact, Considerations, and Ethical Issues
Course 5: Generative AI: Business Transformation and Career Growth
As the next step, you can also consider exploring other role-based
specializations:
In this activity, you will explore the key features of the watsonx Prompt
Lab and learn how to prompt a foundation model.