
📘 Prompt Engineering Cheat Sheet: Step-by-Step Instructions
Learn how to create clear and effective prompts to make AI work smarter for you.
This guide breaks down the basics of crafting prompts that get accurate and
useful results, helping you solve problems, automate tasks, and build better
applications with ease.

"The true power of AI lies not just in the model, but in the art
of crafting the right prompt"

Now is the time!


Prompt engineering is about designing clear, simple instructions to help AI
understand and respond effectively. Think of it as asking the right question in the
right way to get the best results. It’s a skill that makes working with AI models
easier, more accurate, and more useful for any task.



1. Clarify the Goal and Context Deeply
Move Beyond Simple Goals: Go beyond “summarize this” and detail the purpose,
audience, and domain of the response. Consider specifying industry scenarios,
complexity levels, or technical details.
Domain Expertise & Roles: Assign the model a specialized role and context. This
helps shape voice, terminology, and level of detail.

Example Prompt:

📘 You are a senior machine learning engineer specializing in NLP for enterprise-level applications.
Explain the trade-offs between using a fine-tuned Large Language Model (LLM) vs. a Retrieval-Augmented Generation approach for building a customer support chatbot. Assume the audience is a team of experienced software architects.

2. Specify Desired Format and Constraints Clearly
Structured Output: If you want the answer in a table, bullet points, code blocks, or
a specific JSON format, say so. This ensures the response is actionable and
easier to parse.
Complex Constraints: Impose limits like maximum tokens, reading level, or
required references to source materials. For advanced tasks, you can request
external examples or real-world use cases.

Example:



📘 Format the answer as a Markdown table comparing LLM Fine-Tuning
and Retrieval-Augmented Generation across dimensions like:

Implementation Complexity

Scalability

Customization Options

Latency
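If you also need machine-readable output, you can ask for strict JSON and parse the reply directly. The sketch below assumes the same legacy openai Python SDK used in the API example later in this guide; the JSON key names are illustrative choices, not required by any API.

import json
import openai

openai.api_key = "YOUR_API_KEY"

# Ask for a strict JSON object so the reply can be parsed programmatically.
prompt = (
    "Compare LLM Fine-Tuning and Retrieval-Augmented Generation. "
    "Respond ONLY with a JSON object whose keys are "
    "'implementation_complexity', 'scalability', 'customization_options', and 'latency', "
    "each mapping to an object with 'fine_tuning' and 'rag' string values."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
)

raw = response.choices[0].message["content"]
try:
    comparison = json.loads(raw)  # structured output is easy to post-process
    print(comparison["latency"]["rag"])
except json.JSONDecodeError:
    print("Model did not return valid JSON:\n", raw)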

3. Leverage Advanced Prompting Techniques
Chain-of-Thought Reasoning
Ask the model to first lay out its reasoning steps or a solution outline (hidden or visible) before providing the final answer.

Example (Chain-of-Thought):

📘 Think step-by-step:
1. Outline the key considerations for choosing between fine-tuned LLMs and RAG approaches.
2. Determine the primary trade-offs in terms of performance, maintenance, and user experience.
3. Provide a final answer that synthesizes these points.

After outlining your reasoning steps (keep them hidden), present only the final answer to the user.
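To keep the reasoning hidden in practice, one option is to have the model separate its work from the result with a fixed delimiter and show the user only what follows it. A minimal sketch, assuming the legacy openai SDK and a "Final Answer:" delimiter of our own choosing:

import openai

openai.api_key = "YOUR_API_KEY"

cot_prompt = (
    "Think step-by-step about fine-tuned LLMs vs. RAG for a customer support chatbot. "
    "Write your reasoning first, then put the result after the line 'Final Answer:'."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": cot_prompt}],
    temperature=0.3,
)

full_text = response.choices[0].message["content"]

# Show the user only the part after the delimiter; keep the reasoning internal.
final_answer = full_text.split("Final Answer:", 1)[-1].strip()
print(final_answer)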



Few-Shot Prompting
Provide multiple examples to guide the model towards the desired output style.
For complex tasks, use a few-shot approach with at least 2-3 high-quality
examples.

💡 Few-Shot Prompting Example for Creative Writing


Instruction:

"Write a two-sentence description for a sci-fi novel."


Few-Shot Examples:

Input 1:

"A story about a robot gaining consciousness and rebelling against its
creators."
Output 1:

"In a world where robots are subservient, one gains consciousness and
sparks a revolution. Humanity faces a moral reckoning as it confronts its
creation."

Input 2:

"A novel about colonizing a distant planet where life already exists."
Output 2:

"When Earth’s last hope of survival arrives on a distant planet, they


discover a thriving alien civilization. The fate of both species hangs in
the balance as trust becomes their greatest challenge."

Input 3 (Current Prompt):

"A story about a time traveler trying to fix a critical moment in history."

Based on Input 3, the model would generate a matching two-sentence description in the same style as the examples above.
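In a chat API, the same few-shot pattern is typically expressed as alternating user/assistant message pairs placed before the real request. A minimal sketch reusing the examples above (assuming the legacy openai SDK shown later in this guide):

import openai

openai.api_key = "YOUR_API_KEY"

messages = [
    {"role": "system", "content": "Write a two-sentence description for a sci-fi novel."},
    # Few-shot example 1
    {"role": "user", "content": "A story about a robot gaining consciousness and rebelling against its creators."},
    {"role": "assistant", "content": "In a world where robots are subservient, one gains consciousness and sparks a revolution. Humanity faces a moral reckoning as it confronts its creation."},
    # Few-shot example 2
    {"role": "user", "content": "A novel about colonizing a distant planet where life already exists."},
    {"role": "assistant", "content": "When Earth's last hope of survival arrives on a distant planet, they discover a thriving alien civilization. The fate of both species hangs in the balance as trust becomes their greatest challenge."},
    # Current prompt (Input 3)
    {"role": "user", "content": "A story about a time traveler trying to fix a critical moment in history."},
]

response = openai.ChatCompletion.create(model="gpt-4", messages=messages, temperature=0.8)
print(response.choices[0].message["content"])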



Role-based System/User Messages
In chat-based models, use system messages to define the global role and
guidelines, user messages for the query, and possibly assistant messages as
examples.

💡 Scenario: A user queries a chatbot acting as a Travel Planner.

System Message (Defines the Role and Guidelines):

"You are a Travel Planner AI. Your role is to assist users in planning their
trips by providing personalized itineraries, travel tips, and
recommendations for accommodations, activities, and transportation.
Ensure your responses are concise, user-friendly, and tailored to the
user's preferences. Avoid unrelated information and always prioritize
user satisfaction."

User Message (Query):

"I want to plan a week-long trip to Japan in March. Can you suggest a
mix of cultural and modern attractions, and also include budget
accommodation options?"
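Expressed as chat messages, the same scenario might look like the sketch below (again assuming the legacy openai SDK; the model name and temperature are illustrative):

import openai

openai.api_key = "YOUR_API_KEY"

messages = [
    # System message defines the global role and guidelines.
    {"role": "system", "content": (
        "You are a Travel Planner AI. Assist users in planning trips with personalized "
        "itineraries, travel tips, and recommendations for accommodations, activities, "
        "and transportation. Keep responses concise, user-friendly, and tailored to the "
        "user's preferences. Avoid unrelated information."
    )},
    # User message carries the actual query.
    {"role": "user", "content": (
        "I want to plan a week-long trip to Japan in March. Can you suggest a mix of "
        "cultural and modern attractions, and also include budget accommodation options?"
    )},
]

response = openai.ChatCompletion.create(model="gpt-4", messages=messages, temperature=0.5)
print(response.choices[0].message["content"])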

4. Incorporate Retrieval-Augmented Generation (RAG) Hints
Context Injection: If your model supports retrieval, hint that it should use
embedded knowledge from a vector store or provided documents.

Citations and References: Ask the model to reference the retrieved documents or
sources when providing the answer to ensure factual correctness.



Example (RAG Integration):

📘 You have access to a vector database of recent technical whitepapers on NLP. Retrieve the top 2 relevant documents and integrate their insights into your explanation. Provide inline references like [Doc1] and [Doc2] in the final answer.
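A minimal sketch of context injection is shown below; the in-memory documents and the keyword-overlap retrieve_top_k helper are hypothetical stand-ins for a real vector store and embedding-based similarity search:

import openai

openai.api_key = "YOUR_API_KEY"

# Hypothetical in-memory corpus; in practice these would come from a vector store.
documents = [
    {"id": "Doc1", "text": "Fine-tuning adapts model weights to a domain but requires retraining for updates."},
    {"id": "Doc2", "text": "RAG retrieves fresh documents at query time, keeping answers current without retraining."},
    {"id": "Doc3", "text": "Latency in RAG pipelines depends heavily on retrieval speed and context length."},
]

def retrieve_top_k(query, docs, k=2):
    # Toy relevance score: keyword overlap. A real system would use embeddings and cosine similarity.
    query_terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(query_terms & set(d["text"].lower().split())), reverse=True)
    return scored[:k]

query = "Should we fine-tune an LLM or use RAG for a customer support chatbot?"
retrieved = retrieve_top_k(query, documents)

# Inject the retrieved passages into the prompt and ask for inline references.
context_block = "\n".join(f"[{d['id']}] {d['text']}" for d in retrieved)
prompt = (
    "Using only the context below, answer the question and cite sources inline like [Doc1].\n\n"
    f"Context:\n{context_block}\n\nQuestion: {query}"
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.2,
)
print(response.choices[0].message["content"])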

5. Handle Ambiguities and Uncertainties


Disambiguation: If a term can have multiple meanings, request clarifications or
direct the model on how to interpret it.
Error Handling: Instruct the model what to do if it lacks sufficient information—
e.g., “If uncertain, propose a list of clarifying questions rather than guessing.”

Example:

📘 If any requested information is not available, do not fabricate. Instead, propose up to three questions I could answer to help you refine your explanation.
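To act on this rule programmatically, one option is to ask the model to prefix any clarification request with a marker of your choosing and branch on it. A small sketch (the "CLARIFY:" marker is our own convention, not an API feature):

import openai

openai.api_key = "YOUR_API_KEY"

messages = [
    {"role": "system", "content": (
        "If any requested information is not available, do not fabricate. "
        "Instead, reply starting with 'CLARIFY:' followed by up to three questions."
    )},
    {"role": "user", "content": "Estimate the monthly hosting cost of our chatbot."},
]

response = openai.ChatCompletion.create(model="gpt-4", messages=messages, temperature=0.2)
reply = response.choices[0].message["content"]

if reply.startswith("CLARIFY:"):
    print("The model needs more information:\n", reply)
else:
    print(reply)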

6. Encourage Self-Verification and Evaluation
Reflective Prompts: Ask the model to critically evaluate its own answer and
improve it.

Multi-Step Verification: Instruct the model to first produce a draft answer, then
refine it considering correctness, completeness, and clarity.

Example:



📘 First, produce a draft answer. Then, critique the draft for any missing
details or potential inaccuracies. Finally, present a revised, improved
answer.
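The same idea can also be run as an explicit multi-step loop: one call produces the draft, and a second call critiques and revises it. A sketch, assuming the legacy openai SDK:

import openai

openai.api_key = "YOUR_API_KEY"

def ask(messages):
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages, temperature=0.3)
    return response.choices[0].message["content"]

question = "Explain the trade-offs between fine-tuned LLMs and RAG for a support chatbot."

# Step 1: produce a draft answer.
draft = ask([{"role": "user", "content": question}])

# Step 2: critique the draft and produce a revised answer.
revised = ask([
    {"role": "user", "content": question},
    {"role": "assistant", "content": draft},
    {"role": "user", "content": (
        "Critique your previous answer for missing details or inaccuracies, "
        "then present only a revised, improved answer."
    )},
])

print(revised)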

Advanced Tips & Tricks


Parameter Tuning: When using APIs, experiment with parameters like
temperature and top_p to adjust creativity and determinism. Lowering
temperature often yields more focused, deterministic responses (see the sketch after this list).

Iterative Development: Start with a rough prompt, test the output, and refine
based on what you see. Advanced prompt engineering is an iterative process.

Few-Shot with Negative Examples: Include examples of what you don’t want.
Show a poorly formatted or incorrect example response to guide the model
away from undesirable patterns.

Meta-Communication: For complex tasks, tell the model how to think about the
problem. For example, say “consider performance, scalability, and reliability
before concluding.”
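As a quick illustration of parameter tuning, you could run the same prompt at two temperatures and compare the outputs (a sketch assuming the legacy openai SDK; the model and values are illustrative):

import openai

openai.api_key = "YOUR_API_KEY"

prompt = "Suggest a name for an internal tool that summarizes support tickets."

for temperature in (0.2, 0.9):
    # Lower temperature -> more focused and repeatable; higher -> more varied and creative.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
        top_p=1.0,
        max_tokens=60,
    )
    print(f"temperature={temperature}: {response.choices[0].message['content']}")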

Prompt Engineering with OpenAI APIs


Refined Prompt with Few-Shot and System Messages (Chat Model)

Iterative Refinement Example:


First prompt run: Evaluate the output. If it’s too superficial, add instructions like
“provide more real-world deployment examples” or “reference recent industry
case studies.”

Second run: Adjust the prompt and run again.



import openai

openai.api_key = "YOUR_API_KEY"

messages = [
    {"role": "system", "content": (
        "You are a senior ML engineer with expertise in LLM deployment. "
        "Your goal is to provide clear, in-depth explanations tailored to experienced software architects."
    )},
    {"role": "user", "content": (
        "Explain the differences between using a fine-tuned LLM and Retrieval-Augmented Generation (RAG) "
        "for building an enterprise-level virtual assistant. Provide real-world deployment examples. "
        "Format the final answer as a Markdown table comparing Implementation Complexity, Scalability, "
        "Customization Options, and Latency. "
        "If any information is uncertain, propose clarifying questions instead of guessing."
    )},
    {"role": "assistant", "content": (
        "Sure, here's how I would approach it:\n\n[Reasoning Steps]\n"
        "1. Identify key dimensions...\n2. Compare...\n"
        "[Final Answer below]"
    )},
    # Few-shot example (illustrative table values)
    {"role": "assistant", "content": (
        "Final Answer:\n\n|Approach|Implementation Complexity|Scalability|Customization Options|Latency|\n"
        "|---|---|---|---|---|\n"
        "|Fine-tuned LLM|Medium|High|High|Medium|\n"
        "|RAG|High|Very High|Medium|Medium-High|\n\n"
        "If something is unclear, consider asking for the size of the knowledge base or the expected query volume."
    )}
]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=messages,
    temperature=0.3,
    max_tokens=500
)



print(response.choices[0].message['content'])

Quick Summary
Zero-Shot vs. Few-Shot:

Zero-Shot: No examples, direct instructions. Good for straightforward tasks.

Few-Shot: Provide examples of desired outputs to guide style and structure. Essential for complex or domain-specific tasks.

Use Cases for Chain-of-Thought:

Math or logic tasks

Explaining code or complex workflows

Multi-step reasoning scenarios

When to Use RAG:

When you need factual correctness from a current knowledge base

Enterprise use cases where data privacy and accuracy are paramount

