Prompt Engineering Guide
"The true power of AI lies not just in the model, but in the art
of crafting the right prompt"
Example Prompt:
"Compare fine-tuned LLMs and RAG for an enterprise virtual assistant along the following dimensions: Implementation Complexity, Scalability, Customization Options, and Latency."
Example (Chain-of-Thought):
📘 Think step-by-step:
1. Outline the key considerations for choosing between fine-tuned
LLMs and RAG approaches.
After outlining your reasoning steps (keep them hidden), present only
the final answer to the user.
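A minimal sketch of how this instruction could be wired into a chat request, assuming the same openai.ChatCompletion interface (openai<1.0) used in the full example later in this guide; the model name and wording are illustrative:

import openai

openai.api_key = "YOUR_API_KEY"

# Chain-of-Thought: ask the model to reason step by step internally,
# but return only the final answer to the user.
cot_messages = [
    {"role": "system", "content": (
        "Think step-by-step about the user's question. "
        "Outline the key considerations internally, but present only the final answer."
    )},
    {"role": "user", "content": (
        "Should we choose a fine-tuned LLM or a RAG approach for our virtual assistant?"
    )},
]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=cot_messages,
    temperature=0.3,
)
print(response.choices[0].message["content"])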
Input 1:
"A story about a robot gaining consciousness and rebelling against its
creators."
Output 1:
"In a world where robots are subservient, one gains consciousness and
sparks a revolution. Humanity faces a moral reckoning as it confronts its
creation."
Input 2:
"A novel about colonizing a distant planet where life already exists."
Output 2:
"Settlers reach a distant planet only to find it teeming with native life. As colonization begins, humanity must decide whether its survival justifies displacing another civilization."
Input 3 (new input for the model to complete):
"A story about a time traveler trying to fix a critical moment in history."
"You are a Travel Planner AI. Your role is to assist users in planning their
trips by providing personalized itineraries, travel tips, and
recommendations for accommodations, activities, and transportation.
Ensure your responses are concise, user-friendly, and tailored to the
user's preferences. Avoid unrelated information and always prioritize
user satisfaction."
"I want to plan a week-long trip to Japan in March. Can you suggest a
mix of cultural and modern attractions, and also include budget
accommodation options?"
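As a rough sketch, this system/user pairing maps directly onto chat messages (again assuming the openai.ChatCompletion interface from the full example below; wording abbreviated):

import openai

openai.api_key = "YOUR_API_KEY"

# The system message fixes the assistant's role; the user message carries the request.
messages = [
    {"role": "system", "content": (
        "You are a Travel Planner AI. Provide concise, personalized itineraries, "
        "travel tips, and recommendations tailored to the user's preferences."
    )},
    {"role": "user", "content": (
        "I want to plan a week-long trip to Japan in March. Suggest a mix of cultural "
        "and modern attractions, plus budget accommodation options."
    )},
]

response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(response.choices[0].message["content"])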
4. Incorporate Retrieval-Augmented Generation (RAG) Hints
Context Injection: If your model supports retrieval, hint that it should use
embedded knowledge from a vector store or provided documents.
Citations and References: Ask the model to reference the retrieved documents or
sources when providing the answer to ensure factual correctness.
Example:
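One way such a hint might be phrased (illustrative wording):
"Using only the retrieved documents provided below, answer the user's question. Cite the source title or document ID for every factual claim. If the documents do not contain the answer, say so rather than guessing.
[Retrieved documents: ...]"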
Multi-Step Verification: Instruct the model to first produce a draft answer, then
refine it considering correctness, completeness, and clarity.
Example:
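One way such an instruction might be phrased (illustrative wording):
"First, write a draft answer. Then review the draft for correctness, completeness, and clarity, and revise it. Return only the revised final answer."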
Iterative Development: Start with a rough prompt, test the output, and refine
based on what you see. Advanced prompt engineering is an iterative process.
Few-Shot with Negative Examples: Include examples of what you don’t want.
Show a poorly formatted or incorrect example response to guide the model
away from undesirable patterns.
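A minimal sketch of pairing a good and a bad example in the prompt (same openai.ChatCompletion interface as the full example below; wording illustrative):

import openai

openai.api_key = "YOUR_API_KEY"

# Negative example: show the model a response style to avoid,
# alongside the style you actually want.
messages = [
    {"role": "system", "content": (
        "Answer as a short Markdown bullet list.\n"
        "Bad example (do NOT answer like this): 'It depends on many factors, hard to say.'\n"
        "Good example:\n"
        "- Fine-tuning suits stable, well-defined domains\n"
        "- RAG suits fast-changing knowledge bases"
    )},
    {"role": "user", "content": "When should I prefer RAG over fine-tuning?"},
]

response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(response.choices[0].message["content"])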
Meta-Communication: For complex tasks, tell the model how to think about the
problem. For example, say “consider performance, scalability, and reliability
before concluding.”
Putting several of these techniques together in a single request:

import openai  # requires openai<1.0 for the ChatCompletion interface used below

openai.api_key = "YOUR_API_KEY"

messages = [
    {"role": "system", "content": (
        "You are a senior ML engineer with expertise in LLM deployment. "
        "Your goal is to provide clear, in-depth explanations tailored to a technical audience."
    )},
    {"role": "user", "content": (
        "Explain the differences between using a fine-tuned LLM and a RAG pipeline "
        "for building an enterprise-level virtual assistant. Provide step-by-step reasoning. "
        "Format the final answer as a Markdown table comparing Implementation Complexity, "
        "Scalability, Customization Options, and Latency. "
        "If any information is uncertain, propose clarifying questions."
    )},
    # Few-shot example: the desired reasoning structure
    {"role": "assistant", "content": (
        "Sure, here's how I would approach it:\n\n[Reasoning Steps]\n"
        "1. Identify key dimensions...\n2. Compare...\n"
        "[Final Answer below]"
    )},
    # Few-shot example: the desired final table format (cell values are illustrative)
    {"role": "assistant", "content": (
        "Final Answer:\n\n"
        "|Approach|Implementation Complexity|Scalability|Customization Options|Latency|\n"
        "|---|---|---|---|---|\n"
        "|Fine-tuned LLM|Medium|High|High|Medium|\n"
        "|RAG|High|Very High|High|Medium|\n\n"
        "If something is unclear, consider asking for the size of the knowledge base "
        "or the expected query volume."
    )}
]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=messages,
    temperature=0.3,
    max_tokens=500
)

print(response.choices[0].message["content"])
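A low temperature (0.3) keeps the comparison table stable across runs; if the reasoning section grows, max_tokens may need to be raised beyond 500.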
Quick Summary
Zero-Shot vs. Few-Shot: zero-shot prompts rely on instructions alone, while few-shot prompts add worked examples to steer format and tone.
Fine-tuned LLMs and RAG both target enterprise use cases where data privacy and accuracy are paramount.