ChatGPT Mastery - Prompt Engineering
___
Chapter 1: Introduction to Prompt Engineering
Prompt engineering is a critical skill for effectively using
large language models (LLMs) like GPT-4. It involves
designing and refining input prompts to elicit the
desired outputs from these AI systems. The quality of
the prompt directly impacts the quality of the output,
making prompt engineering both an art and a science.
• Effective prompts unlock LLM potential: A
well-crafted prompt can guide the model to
produce creative, accurate, and insightful
outputs, making it a powerful tool for various
applications.
Chapter 2: Crafting Clear and Effective Instructions
Chapter 3: Strategies for Enhanced Prompt Engineering
o Tactic: Instruct the Model to Answer With Citations From a Reference Text. Prompt the model to cite specific passages from the provided text to support its answers, increasing transparency and verifiability.
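As an illustration, here is a minimal sketch of such a prompt using the OpenAI Python client; the model name, file name, question, and citation format are assumptions for the example, not prescribed by the sources.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

document = open("reference.txt").read()  # hypothetical reference text

system_msg = (
    "You will be given a document delimited by triple quotes, then a "
    "question. Answer using only the provided document, and cite the "
    'passage(s) supporting each claim in the form ({"citation": ...}). '
    "If the document does not contain the answer, reply: "
    "Insufficient information."
)

response = client.chat.completions.create(
    model="gpt-4o",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": system_msg},
        {
            "role": "user",
            "content": f'"""{document}"""\n\nQuestion: What were the key findings?',
        },
    ],
)
print(response.choices[0].message.content)
```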
Chapter 4: Giving the Model Time to "Think"
Chapter 5: Utilizing External Tools
• Use Code Execution to Perform More
Accurate Calculations or Call External APIs:
Language models are not inherently reliable for
performing precise mathematical calculations or
executing code. To address this, we can
instruct the model to generate code for specific
tasks and then execute that code using a
dedicated engine.
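A minimal sketch of that pattern follows; the prompt wording and fence-extraction step are illustrative, and in a real system model-generated code should run in a sandbox rather than via exec().

```python
import re
from openai import OpenAI

client = OpenAI()

prompt = (
    "Write Python code enclosed in triple backticks that computes "
    "the sum of the first 100 prime numbers and prints the result."
)

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

# Pull the first fenced code block out of the model's reply.
# chr(96) is the backtick character.
fence = chr(96) * 3
match = re.search(fence + r"(?:python)?\n(.*?)" + fence, reply, re.DOTALL)
if match:
    # exec() is for illustration only; in production, run model-generated
    # code inside a sandbox (container, restricted subprocess, etc.).
    exec(match.group(1))
```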
With function calling, the model generates arguments based on the provided function schemas, and these arguments are used to execute the functions. The output from these function calls is then fed back into the model. This approach, recommended by the sources, streamlines the integration of external functionality into language model applications. The sources again point to the OpenAI Cookbook and introductory text generation guides for more information and examples.
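A minimal sketch of this loop with the OpenAI Python client; the get_weather function and its stubbed result are hypothetical stand-ins for a real API call.

```python
import json
from openai import OpenAI

client = OpenAI()

# Describe the function so the model can generate matching arguments.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical local function
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
response = client.chat.completions.create(
    model="gpt-4o", messages=messages, tools=tools
)

# Assumes the model chose to call the tool.
call = response.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)  # e.g. {"city": "Paris"}
result = {"city": args["city"], "temp_c": 18}  # stand-in for a real API call

# Feed the function output back so the model can finish its answer.
messages.append(response.choices[0].message)
messages.append({
    "role": "tool",
    "tool_call_id": call.id,
    "content": json.dumps(result),
})
final = client.chat.completions.create(model="gpt-4o", messages=messages)
print(final.choices[0].message.content)
```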
Chapter 6: Systematic Testing and Evaluation
• Computer-Based Evaluation: This approach
utilizes computers to automatically assess
model outputs based on predetermined criteria.
It's particularly effective for tasks with objective
answers, like multiple-choice questions or
factual recall. Computers can also be used to
evaluate outputs based on subjective criteria,
such as fluency or coherence, by employing
model-based queries.
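As a simple illustration of such automated scoring for objective tasks (the helper and data are made up):

```python
def exact_match_score(outputs, gold):
    """Fraction of outputs matching the gold answers after light
    normalization (case and surrounding whitespace)."""
    norm = lambda s: s.strip().lower()
    return sum(norm(o) == norm(g) for o, g in zip(outputs, gold)) / len(gold)

# Grading a small multiple-choice run (made-up data):
print(exact_match_score(["b", "C", "A "], ["B", "D", "A"]))  # ~0.67
```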
For example, one step in such a model-based check might be:
4. Indicate with a "yes" or "no" whether the citation effectively supports the fact.
Chapter 7: Strategies for Handling Long Documents and Conversations
When summarizing a long document piece by piece, carry a running summary forward so that later sections are interpreted within the context of earlier content.
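A minimal sketch of that idea, assuming a summarize(context, chunk) helper that wraps an LLM call; both names are hypothetical.

```python
def summarize_long_document(chunks, summarize):
    """Summarize a document chunk by chunk, carrying the running summary
    forward so each new chunk is read in the context of what came before.

    chunks: list of text segments in document order.
    summarize(context, chunk): any LLM call returning an updated summary.
    """
    running_summary = ""
    for chunk in chunks:
        running_summary = summarize(running_summary, chunk)
    return running_summary
```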
Chapter 8: Guiding the Model's Reasoning Process
1. Ask the model to work out its own solution to the problem first.
2. Compare the model's solution with the student's solution.
3. Evaluate the student's solution. Based on the comparison, the model can provide feedback on the student's approach.
§ Example: Consider the same math problem scenario. We can use a sequence of queries: first ask the model to solve the problem on its own, then ask it to compare its solution with the student's, and finally ask it to generate feedback for the student.
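A minimal sketch of that sequence; the problem, the student's answer, and the ask() helper are all illustrative.

```python
from openai import OpenAI

client = OpenAI()

def ask(content):
    """One chat-completion call; returns the assistant's text."""
    r = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": content}]
    )
    return r.choices[0].message.content

problem = "A train travels 120 km in 1.5 hours. What is its average speed?"
student_solution = "120 / 1.5 = 90 km/h"  # hypothetical answer with a slip

# Query 1: the model works out its own solution first.
model_solution = ask(f"Solve step by step: {problem}")

# Query 2: compare the two solutions.
comparison = ask(
    f"Problem: {problem}\nModel solution: {model_solution}\n"
    f"Student solution: {student_solution}\n"
    "Do the solutions agree? Explain any discrepancy."
)

# Query 3: turn the comparison into feedback for the student.
print(ask(f"Based on this comparison, write brief feedback for the student:\n{comparison}"))
```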
Additionally, the sources introduce a tactic for prompting the model to review its previous work, especially for tasks like information extraction from large documents: after an initial pass, ask the model whether it missed anything, prompting it to recover relevant excerpts it overlooked.
Chapter 9: Leveraging External Tools
3. Vector Search: Perform a
vector search to find the
embedded chunks from the
corpus that are closest to the
query embedding. This retrieves
the most semantically relevant
information.
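A minimal sketch of this retrieval step, assuming the corpus chunks were embedded earlier in the pipeline; the embedding model name is one current option, not mandated by the sources.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text):
    """Embed a string with an OpenAI embedding model."""
    r = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(r.data[0].embedding)

def top_k_chunks(query, chunks, chunk_embeddings, k=3):
    """Return the k corpus chunks whose embeddings are closest to the
    query embedding under cosine similarity."""
    q = embed(query)
    E = np.asarray(chunk_embeddings)
    sims = E @ q / (np.linalg.norm(E, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]
```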
The model can also call functions, and the output can be fed back into the model for further processing.
Chapter 10: Evaluating and Testing Model Performance
§ Human Evaluation: Involves
human judges assessing the
quality and accuracy of model
outputs against gold-standard
answers. This approach is
particularly valuable for
subjective or nuanced tasks
where human judgment is
crucial.
§ Computer-Based Evaluation:
Uses automated metrics and
algorithms to compare model
outputs to gold-standard
answers. This is suitable for
tasks with objective criteria and
single correct answers.
§ Model-Based Evaluation:
Employs another language
model to evaluate the outputs of
the model being tested. This can
be useful when there's a range
of acceptable outputs, and a
model can effectively judge their
quality.
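As a rough sketch of model-based evaluation (the grader prompt and model choice are illustrative):

```python
from openai import OpenAI

client = OpenAI()

def model_grade(question, gold, candidate):
    """Ask a grader model whether the candidate answer matches the
    gold-standard answer in substance. Returns True or False."""
    prompt = (
        f"Question: {question}\n"
        f"Reference answer: {gold}\n"
        f"Candidate answer: {candidate}\n"
        "Does the candidate convey the same information as the reference? "
        "Reply with exactly 'yes' or 'no'."
    )
    r = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return r.choices[0].message.content.strip().lower().startswith("yes")
```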
1. Defining Required Facts: Listing the facts that
should be present in the model's answer.
The sources also address how many test cases are needed for a trustworthy evaluation. They provide guidelines for determining the necessary sample size based on the desired statistical power and the magnitude of the difference being measured.
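As a rough illustration of such a power calculation using statsmodels; the 70%-to-80% accuracy scenario is invented for the example.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Invented scenario: detect an improvement from 70% to 80% accuracy
# with 5% significance and 80% power.
effect = proportion_effectsize(0.80, 0.70)
n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
print(round(n))  # about 146 samples per variant in this scenario
```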
Quiz
10. Provide an example of a situation where it
would be advantageous to ask the model if it
missed anything on previous passes.
Answer Key
6. External tools can augment the capabilities of
language models. Code execution engines can
handle mathematical calculations or execute
API calls, providing accurate results for tasks
that models struggle with independently.
Essay Questions
Glossary of Key Terms
Persona: Instructing the model to adopt a specific
character, tone, or style in its responses.
FAQ
LLMs can sometimes hallucinate information, especially when dealing with obscure topics. Accuracy improves when you ground the model in a reference text and ask it to answer with citations from that text, as described in Chapter 3.
Can you control the model's tone and style?
Yes, you can guide the model's tone and style by:
• Providing examples of the desired style:
Show the model samples of the tone and
language you want it to emulate.
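For instance, a minimal few-shot sketch in chat-message form, with illustrative wording: the earlier turns demonstrate the tone, and the model continues in kind.

```python
from openai import OpenAI

client = OpenAI()

# Earlier turns demonstrate the desired tone; the model imitates it.
messages = [
    {"role": "system", "content": "You answer in a warm, plain-spoken tone."},
    {"role": "user", "content": "Explain what an API is."},
    {"role": "assistant", "content": "Think of an API as a restaurant menu: "
                                     "you ask for a dish by name and the kitchen handles the rest."},
    {"role": "user", "content": "Now explain what a database index is."},
]
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)
```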