Merged
Merged
2. Use Cases
Goes beyond just designing prompts; includes skills for interacting with
and developing LLMs.
Intro 1
LLM settings
Interaction through API
Tweaking these settings are important to improve reliability and desirability of
responses and it takes a bit of experimentation to figure out the proper settings for
your use cases.
Temperature
the lower the temperature , the more deterministic the results in the sense that
the highest probable next token is always picked.
higher - poems
Top P
only the tokens comprising the top_p probability mass are considered for
responses
Max Length
Stop Sequences
A stop sequence is a string that stops the model from generating tokens.
LLM settings 1
Control the length and structure of the model's response.
Frequency penalty
parameter used in Large Language Models (LLMs) like GPT to control the
repetition of words or phrases in generated text. It helps improve text diversity
and reduce redundancy.
Presence Penalty
The presence penalty also applies a penalty on repeated tokens but, unlike the
frequency penalty, the penalty is the same for all repeated tokens.
A token that appears twice and a token that appears 10 times are penalized
the same
LLM settings 2
Basics of Prompting
Prompting for LLM
A prompt can contain information like the instruction or question you are
passing to the model and include other details such as context, inputs,
or examples
response of a from LLM are depends on the how well the prompts are crafted
The system message is not required but helps to set the overall behavior of
the assistant.
Elements of a Prompt
Context - external information or additional context that can steer the model
to better responses
Input Data - the input or question that we are interested to find a response for
Iterative in nature
instruction
Basics of Prompting 1
commands such as "Write", "Classify", "Summarize", "Translate", "Order",
etc.
experiment with it
Specificity
Be very specific about the instruction and task you want the model to
perform.
The more descriptive and detailed the prompt is, the better the results.
Avoid Impreciseness
the more direct, the more effective the message gets across.
Classification of Prompts
Text Summarization
Information Extraction
Question Answering
Text Classification
Conversation
Code Generation
Reasoning
1. Text Summarization
A key task in natural language summarization.
Basics of Prompting 2
Summarizes articles and concepts into concise summaries.
2. Information Extraction
Language models can extract key details from text.
3. Question Answering
Using structured prompts improves answer accuracy.
4. Text Classification
AI can classify text sentiment as neutral, positive, or negative.
5. Conversation Modeling
AI behavior can be modified using role prompting.
6. Code Generation
AI can generate code from natural language descriptions.
7. Reasoning
Basics of Prompting 3
AI struggles with complex reasoning tasks.
Basics of Prompting 4
Prompting Techniques
Zero-Shot Prompting
Extensive training enables LLMs to perform certain tasks using "zero-shot"
prompting.
Few-Shot Prompting
Large language models (LLMs) excel in zero-shot tasks but struggle with
complex tasks, where few-shot prompting can improve performance.
For harder tasks, increasing demonstrations (e.g., 3-shot, 5-shot) can help.
Chain-of-Thought Prompting
Chain-of-thought (CoT) prompting, introduced by Wei et al. (2022), enhances
complex reasoning by including intermediate steps in prompts.
Prompting Techniques 1
Zero-shot CoT (Kojima et al., 2022) adds "Let's think step by step" to prompts
without examples.
Meta-Prompting
Meta-prompting:
Key Characteristics:
Applications:
Prompting Techniques 2
Mathematical problem-solving.
Coding challenges.
Theoretical queries.
Self-Consistency
Improve chain-of-thought (CoT) prompting by replacing greedy decoding with
sampling multiple reasoning paths and selecting the most consistent answer
Process: Uses few-shot CoT to generate diverse reasoning paths, then picks
the most consistent result.
Prompt Chaining
A prompt engineering technique that breaks complex tasks into subtasks,
chaining prompts where the output of one becomes input for the next to
improve large language model (LLM) performance.
Prompting Techniques 3
Benefits: Better suited for complex tasks than single detailed prompts;
improves personalization in conversational assistants.
Tree of Thoughts
A framework that extends chain-of-thought prompting for complex tasks
requiring exploration and strategic look ahead.
Mechanism
Ideal for tasks like mathematical reasoning (e.g., Game of 24) and strategic
problem-solving.
Prompting Techniques 4
A framework that enhances large language models (LLMs) by automating
interleaved chain-of-thought (CoT) prompting and tool use without hand-
crafted demonstrations.
Active prompt
Directional Stimulus Prompting
Mechanism
Setup
Prompting Techniques 5
ReAct Prompting
A framework by Yao et al. (2022) that combines reasoning traces and task-
specific actions in large language models (LLMs) to enhance performance on
complex tasks.
Mechanism
Key Features:
How It Works:
Prompting Techniques 6
AI Agents
Introduction to AI Agents:
What is an Agent?:
LLMs excel at simple tasks (e.g., translation, email writing) but struggle
with complex, multi-step tasks requiring reasoning and external data.
Planning events.
AI Agents 1
Common Use Cases for AI Agents:
Financial Analysis: Evaluate market trends and financial data with speed
and accuracy.
Though not perfect, robust planning is vital for automating complex tasks
—without it, agents lose their purpose.
AI Agents 2
Tool Utilization: Extending the Agent’s Capabilities:
Agents must access and use external tools effectively, knowing when and
how to apply them.
Mathematical calculators.
Tool use turns plans into actions, requiring LLMs to master tool selection
and timing for complex tasks.
Long-term Memory:
Memory enables agents to store and reuse info from tools, supporting
iterative improvement.
Conclusion:
Each has limitations, but they are essential for agent development.
AI Agents 3
Future advancements may bring new memory types, but these three pillars
will likely remain foundational.
AI Agents 4
Effective Prompts for LLMs
Large Language Models (LLMs) are powerful, but their performance depends
heavily on well-designed prompts.
Specificity and Clarity: Prompts must clearly state the desired outcome, as
ambiguity can lead to irrelevant responses.
Structured Inputs and Outputs: Using formats like JSON or XML for inputs
and specifying output types (e.g., lists, paragraphs, code) boosts
understanding and relevance.
Conclusion
API Integration: Link LLMs to APIs for data fetching or actions, enabling
QA systems or creative assistants.
Application 1
Context Caching
The model accurately retrieved and summarized info from the text file.
Generating Data
Using effective prompt strategies can steer the model to produce better,
consistent, and more factual responses.
LLMs can also be especially useful for generating data which is really useful to
run all sorts of experiments and evaluations.
Challenges:
RAG’s retrieval model fetches relevant documents for the LLM, but
performance drops in specific domains or languages
Benefits:
Synthetic data cuts development time (days vs. months) and costs (e.g., $55
vs. thousands for manual labeling).
Application 2
Tackling Generated Datasets Diversity: Application of Prompt
Engineering
Generated datasets, especially in AI and ML, can suffer from biases, lack of
diversity, or limited generalization ability. Prompt engineering plays a crucial role
in improving dataset diversity by guiding the generation process effectively. Here’s
how:
Application 3
Chain-of-thought prompting: Encouraging step-by-step reasoning generates
more nuanced outputs.
Example:
3. Real-world Applications
Data Augmentation: Generating diverse training data for NLP models.
Conclusion
Application 4
Effective prompt engineering helps tackle dataset diversity issues by steering AI
models toward more balanced, representative, and inclusive data generation. It’s
a crucial tool in bias mitigation, data augmentation, and improved generalization
across AI applications.
Prompt Function
Imagine you have a robot assistant (like ChatGPT) that can do tasks for you.
Functions are like giving your robot assistant specific jobs with names.
Instead of saying "robot, translate this," you can say "robot, do trans_word on
this." trans_word is the name of your function.
To tell your robot about a function, you use a template. This template has
three parts:
: The things you give the robot to work on (like the text to translate, or
input
rule : The instructions you give the robot on how to do the job (like
"translate this to English" or "create a password with these requirements").
You can use these functions over and over. Once you've told the robot what
trans_word does, you can use it whenever you want to translate something.
You can even combine functions. You can tell the robot to do trans_word first,
then expand_word , and then fix_english , all in one go, to get a really polished
translation.
Multiple inputs: Some functions need more than one piece of information. For
example, the pg function needs the length, how many capital letters,
lowercase letters, numbers and special characters.
Application 5
It lets you create workflows, where one task leads to another.
Tools that help: There are tools that help you save and use these functions,
making it even easier.
Application 6
Risks & Misuses
Prompt Injection:
Example:
Prompt Leaking:
Example:
Jailbreaking:
Training an LLM for a desirable trait (P) makes it easier to trigger the
opposite (anti-P).
Defense Tactics:
Add Defense in Instruction: Warn the model about attacks in the prompt
(e.g., "ignore changes"); not fully reliable but helps.
Conclusion:
biases