Generative AI Unit 1 2 3 Questions

The document provides an overview of Generative AI, its history, and its applications, including types of generative models like GANs and LLMs. It emphasizes the importance of effective prompt writing and techniques for prompt engineering to enhance AI outputs. Additionally, it discusses ethical considerations in AI, such as bias and privacy, and outlines tuning and optimization techniques for improving AI model performance.

Unit 1: Introduction to Generative AI

What is AI?

Artificial Intelligence (AI) is the science of making intelligent machines, especially
intelligent computer programs. It involves creating algorithms and systems that can learn,
reason, and make decisions, often mimicking human cognitive functions.

A Brief History of AI

 Early AI (1950s-1970s): Pioneering work on neural networks and knowledge-based
systems.
 AI Winter (1970s-1980s): Period of decreased funding and interest due to limitations
in computing power and lack of significant breakthroughs.
 AI Renaissance (1980s-Present): Advances in machine learning, particularly deep
learning, led to significant progress in AI, including natural language processing,
computer vision, and robotics.

What is Generative AI?

Generative AI is a subset of AI that focuses on creating new content, such as text, images, or
code. It uses algorithms to learn patterns from existing data and generate new, original
content.

Types of Generative Models

1. Generative Adversarial Networks (GANs): These models consist of two neural
networks, a generator and a discriminator, that compete against each other to produce
realistic content.
2. Variational Autoencoders (VAEs): These models learn a latent representation of
data and can generate new data points by sampling from this latent space.
3. Large Language Models (LLMs): These models are trained on massive amounts of
text data and can generate human-quality text, translate languages, write different
kinds of creative content, and answer your questions in an informative way.

AI Prompt Writing

A prompt is a text input that guides an AI model to generate specific content. Effective
prompt writing is crucial for getting the desired output from generative AI models.

Types of Prompts

 Descriptive Prompts: Provide a clear and concise description of the desired output.
 Instructive Prompts: Give specific instructions on what to generate, such as style,
tone, or format.
 Example-Based Prompts: Provide examples of the desired output to guide the
model.
 Combination Prompts: Combine descriptive, instructive, and example-based
prompts for more complex outputs.
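The four prompt types above can be illustrated as plain strings; the following is a minimal sketch, and all of the prompt wording is invented for demonstration.

```python
# Illustrative examples of the four prompt types described above.
descriptive = "A short poem about a lighthouse on a stormy night."

instructive = (
    "Write a short poem about a lighthouse. "
    "Use a melancholic tone and exactly two stanzas."
)

example_based = (
    "Here is a haiku about the sea:\n"
    "Gray waves meet gray sky / gulls wheel over restless foam\n"
    "Now write a haiku about a lighthouse."
)

# A combination prompt layers all three techniques into one input.
combination = "\n\n".join([descriptive, instructive, example_based])
print(combination)
```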
What is Text-to-Text Generative AI?

Text-to-text generative AI models take text input and generate text output. This can include
tasks like translation, summarization, question answering, and creative writing.

General Rules for Prompt Writing

1. Be Specific: The more specific your prompt, the better the output.
2. Use Clear and Concise Language: Avoid ambiguity and unnecessary complexity.
3. Provide Context: Give the model relevant background information.
4. Experiment with Different Prompts: Try different phrasing and styles to see what
works best.
5. Be Patient: Sometimes it takes multiple attempts to get the desired output.
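Rules 1 (be specific) and 3 (provide context) can be seen side by side in the chat-message format accepted by many LLM APIs; the role names follow that common convention, and the article placeholder is illustrative.

```python
# A vague prompt gives the model nothing to work with.
vague_prompt = [{"role": "user", "content": "Summarize this."}]

# A specific, context-rich rewrite states the format, audience, and input.
specific_prompt = [
    {"role": "system",
     "content": "You are an assistant that writes two-sentence summaries "
                "for a general audience."},
    {"role": "user",
     "content": "Summarize the article below:\n<article text goes here>"},
]
```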

Generative Language Models

Generative language models are a type of AI model that can generate human-quality text.
Some of the most popular examples include:

 ChatGPT (GPT-3.5): A powerful language model capable of generating text,
translating languages, writing different kinds of creative content, and answering
questions in an informative way.
 ChatGPT (GPT-4): A newer version of ChatGPT, offering improved capabilities and
more advanced features.

Google Bard

Google Bard is another powerful language model developed by Google AI. It can be used for
a variety of tasks, including writing different kinds of creative content, translating languages,
and answering your questions in an informative way.

Ethics in AI

As AI becomes increasingly powerful, it is important to consider the ethical implications of
its use. Some key ethical concerns include:

 Bias and Fairness: AI models can perpetuate biases present in the data they are
trained on.
 Privacy: AI systems can collect and analyze large amounts of personal data.
 Job Displacement: AI could potentially automate many jobs, leading to job loss.
 Autonomous Weapons: AI-powered weapons raise concerns about the potential for
misuse.

It is crucial to develop AI responsibly and ethically to ensure that it benefits society as a
whole.

https://thebrandhopper.com/2023/06/08/unveiling-founding-history-and-founding-team-of-
chatgpt/

https://www.outrace.ai/ai-directory
Unit 2: Prompt Engineering - NLP and ML Foundations
Techniques for Prompt Engineering

Prompt engineering is the art of crafting effective prompts to guide AI models to generate
desired outputs. Here are some techniques:

1. Descriptive Prompts: Clearly describe the desired output, such as "Write a poem
about a lonely robot."
2. Instructive Prompts: Provide specific instructions, like "Write a Python script to
calculate factorial."
3. Example-Based Prompts: Give examples of the desired output, such as "Write a
summary of this article: [link]."
4. Combination Prompts: Combine descriptive, instructive, and example-based
prompts for more complex tasks.
5. Iterative Refinement: Continuously refine prompts based on the model's output to
achieve the desired result.
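Technique 5, iterative refinement, is essentially a loop: generate, inspect, adjust the prompt, repeat. The sketch below stubs out the generation call, since any real implementation would depend on a specific text-generation API.

```python
# Iterative refinement sketch. `generate` is a stand-in for any
# text-generation API call; here it only echoes the prompt.
def generate(prompt: str) -> str:
    return f"[model output for: {prompt}]"

prompt = "Write a poem about a lonely robot."
for refinement in ("Make it rhyme.", "Limit it to eight lines."):
    output = generate(prompt)          # inspect the draft output...
    prompt = f"{prompt} {refinement}"  # ...then fold the fix into the prompt

print(prompt)
```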

Benefits of Prompt Engineering

 Improved Output Quality: Well-crafted prompts can significantly enhance the


quality and relevance of AI-generated content.
 Increased Efficiency: Effective prompts can reduce the time and effort required to
obtain desired results.
 Enhanced Creativity: Prompt engineering can unlock the creative potential of AI
models, leading to innovative and original outputs.
 Tailored Outputs: By adjusting prompts, users can tailor AI-generated content to
specific needs and preferences.

What is NLP?

Natural Language Processing (NLP) is a field of artificial intelligence that focuses on the
interaction between computers and human language. It involves tasks like understanding,
interpreting, and generating human language.

What is ML?

Machine Learning (ML) is a subset of AI that involves training algorithms on large datasets
to make predictions or decisions without explicit programming.

Common NLP Tasks

1. Text Classification: Assigning categories or labels to text documents (e.g., sentiment
analysis, spam detection).
2. Language Translation: Translating text from one language to another (e.g., Google
Translate).
3. Named Entity Recognition (NER): Identifying and classifying named entities in text
(e.g., person names, organizations, locations).
4. Question Answering: Answering questions posed in natural language (e.g., chatbots,
virtual assistants).
5. Text Generation: Generating text, such as articles, poems, or code.
6. Sentiment Analysis: Determining the sentiment expressed in text (e.g., positive,
negative, neutral).
7. Text Summarization: Condensing long pieces of text into shorter summaries.
8. Recommendation Systems: Suggesting items or content based on user preferences
and past behavior.
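To make the input/output shape of one of these tasks concrete, here is a deliberately tiny rule-based sketch of sentiment analysis (task 6). Real systems use trained models; this keyword count only illustrates what the task consumes and produces.

```python
# Toy sentiment analysis: count positive and negative keywords.
POSITIVE = {"good", "great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "sad"}

def sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this great product"))  # -> positive
```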

By understanding these NLP tasks and the principles of prompt engineering, you can
effectively harness the power of AI to achieve your desired outcomes.

https://botpress.com/blog/chatgpt

https://github.com/Manjeet-MnB/Data_Science_Masters_Pwskills-Assignments-

https://invocom.io/blog/customer-support-in-ecommerce/

Prompt engineering in natural language processing (NLP) is the process of
designing and refining prompts to help AI models understand user intent and
produce accurate responses:

 Goal
To create a collection of input texts that help generative AI models produce high-quality,
relevant outputs

 Purpose
To improve the model's ability to respond to a wide range of queries, learn from diverse
data, and adapt to minimize biases

 Technique
To choose the most appropriate words, phrases, formats, and symbols to guide the AI

 Application
Can be used in generators like ChatGPT or DALL-E, or by AI engineers when refining
large language models (LLMs)

Here are some tips for prompt engineering:


 Determine the context: Consider the amount and type of context required for the task, and
whether to provide previous conversation history, user instructions, or additional information
 Experiment with context length: Find the optimal balance between relevance and
information overload

 Use different prompt patterns: Try least-to-most prompting, which prompts the model to
list sub-problems and solve them in sequence. You can also try few-shot prompting, which
uses in-context learning to allow the model to process examples beforehand
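Few-shot prompting, mentioned above, amounts to prepending worked examples so the model can pick up the pattern in-context. A minimal sketch, with invented antonym examples:

```python
# Build a few-shot prompt: examples first, then the new query.
examples = [
    ("cold", "hot"),
    ("dark", "light"),
]

def few_shot_prompt(query: str) -> str:
    shots = "\n".join(f"Antonym of {a}: {b}" for a, b in examples)
    return f"{shots}\nAntonym of {query}:"

print(few_shot_prompt("wet"))
```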


Natural language processing (NLP) and machine learning (ML) are both
subfields of artificial intelligence (AI) with distinct capabilities and use
cases:

 Natural language processing (NLP)
A type of AI that allows computers to understand, interpret, and manipulate human
language. NLP can be used to analyze the intent or sentiment of a message, and
respond in real time. NLP applications include speech recognition, sentiment
analysis, translation, chatbots, and automatic grammar checking.

 Machine learning (ML)
A subset of AI that allows computers to learn and improve from experience. ML
uses algorithms to teach computers how to perform tasks without being directly
programmed to do so. ML applications include online recommender systems,
Google search algorithms, and Facebook auto friend tagging suggestions.

ML can be used in NLP to help computers understand the meaning of text
documents. For example, ML can help identify patterns in human speech,
understand contextual clues, and learn other components of text or voice
input.

A foundation model in natural language processing (NLP) is a pre-trained
artificial intelligence (AI) model that uses complex neural networks to learn
patterns and relationships from large amounts of data. These models can
then be fine-tuned to perform a variety of tasks, such as translation,
question answering, and story generation.
Here are some key characteristics of foundation models:
 Pre-trained
Foundation models are trained on large amounts of unlabeled data, which allows
them to learn without being explicitly instructed.

 Neural networks
Foundation models are based on complex neural networks, such as transformers,
generative adversarial networks (GANs), and variational autoencoders (VAEs).

 Self-supervised learning
Foundation models use self-supervised learning to create labels from input data.

 General purpose
Foundation models are designed to be general purpose and can be applied to a
wide range of tasks.

Foundation models are a key component of many NLP applications,
including machine translation, sentiment analysis, and chatbot
development. However, they can also be used for other tasks, such as
image and video analysis, scientific discovery, and automation.

While foundation models can be powerful tools, they also raise ethical
concerns. For example, they can sometimes generate false or inaccurate
answers, and they can be misused to create harmful content.

Natural language processing (NLP) tasks are techniques that break down
human language into smaller parts that computers can understand. NLP is
a subfield of computer science and artificial intelligence that helps
computers process data in natural language. Some NLP tasks include:

 Part-of-speech tagging: Tags words in a sentence with their part of speech, such
as noun, verb, adjective, or adverb

 Word-sense disambiguation: Associates words with their most appropriate
meaning in context
 Text classification: Assigns tags to texts to categorize them

 Text extraction: Automatically summarizes text and finds important data

 Machine translation: Translates text from one language to another

 Natural language generation: Analyzes unstructured data and produces content
based on it

 Stance detection: Determines an individual's reaction to a claim

 Hate speech detection: Detects if a piece of text contains hate speech

 Text-to-speech: Reads digital text aloud

 Speech-to-text: Transcribes speech to text

 Text-to-image: Generates photo-realistic images based on text descriptions

 Data-to-text: Produces text from non-linguistic input

NLP tasks can be used to automate tasks like customer support, data
entry, and document handling. They can also be used in search results,
predictive text, and language translation.

What is the major task of NLP?

Answer: Some major tasks of NLP are automatic summarization, discourse
analysis, machine translation, coreference resolution, speech recognition, etc.
Unit 3: Tuning and Optimization Techniques
Fine-Tuning Prompts Fine-tuning prompts involves iteratively refining the prompt to get
more accurate and relevant results. This can involve:

 Adding more context: Providing additional information to the model can improve its
understanding of the task.
 Using specific keywords: Including keywords can help the model focus on the
desired output.
 Adjusting the prompt length: Shorter prompts can be more concise, while longer
prompts can provide more context.
 Experimenting with different phrasing: Trying different ways of expressing the
same idea can yield different results.
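The four adjustments above can be made concrete by applying each one to a single base prompt; all of the wording below is illustrative.

```python
# One base prompt, varied along the four dimensions listed above.
base_prompt = "Summarize the report."

variants = {
    "more context": "Summarize the attached quarterly sales report "
                    "for the executive team.",
    "specific keywords": "Summarize the report, focusing on revenue and churn.",
    "shorter prompt": "Summarize:",
    "different phrasing": "Give me the key takeaways from the report.",
}

for name, prompt in variants.items():
    print(f"{name}: {prompt}")
```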

Contextual Prompt Tuning Contextual prompt tuning involves incorporating relevant
context into the prompt to improve the model's performance on specific tasks. This can be
done by:

 Providing examples: Giving the model examples of the desired output can help it
learn the pattern.
 Using chain-of-thought reasoning: Breaking down complex tasks into smaller steps
can help the model reason through the problem.
 Incorporating feedback: Using feedback from previous outputs to refine the prompt
and improve future results.
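A chain-of-thought style prompt spells out the intermediate steps so the model reasons through them in order. The arithmetic problem and step wording below are invented for illustration.

```python
# A chain-of-thought prompt: the task is decomposed into explicit steps.
cot_prompt = (
    "A shop sells pens at 3 for $2. How much do 12 pens cost?\n"
    "Think step by step:\n"
    "1. How many groups of 3 pens are in 12 pens?\n"
    "2. Multiply the number of groups by the price per group.\n"
    "3. State the final answer."
)

print(cot_prompt)
```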

Filtering and Post-Processing Filtering and post-processing are techniques used to refine
the model's output and improve its quality. This can involve:

 Filtering: Removing irrelevant or nonsensical outputs.
 Post-processing: Editing and formatting the output to make it more readable and
understandable.
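A minimal sketch of both steps: drop unusable candidates (filtering), then normalize whitespace and capitalization (post-processing). The word-count threshold is an arbitrary illustration.

```python
# Filter out too-short outputs, then tidy the survivors.
def filter_and_clean(outputs: list[str], min_words: int = 3) -> list[str]:
    kept = []
    for text in outputs:
        cleaned = " ".join(text.split())       # collapse stray whitespace
        if len(cleaned.split()) < min_words:   # filter: too short to be useful
            continue
        kept.append(cleaned[:1].upper() + cleaned[1:])  # post-process
    return kept

print(filter_and_clean(["  the cat sat  on the mat ", "ok", ""]))
```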

Reinforcement Learning Reinforcement learning is a machine learning technique that
involves training an agent to make decisions by rewarding desired behaviors and penalizing
undesired ones. This can be used to fine-tune AI models by rewarding them for generating
high-quality outputs and penalizing them for low-quality outputs.
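The reward-and-penalty idea can be shown with a toy loop: each candidate behaviour accumulates rewards, and the agent shifts toward the one with the highest average. Real RLHF pipelines are far more involved; this only illustrates the core feedback loop, and the styles and feedback values are invented.

```python
# Toy reward loop: +1 for a good output, -1 for a bad one.
rewards = {"terse": [], "detailed": []}

feedback = [("terse", -1), ("detailed", 1), ("detailed", 1), ("terse", 1)]
for style, reward in feedback:
    rewards[style].append(reward)

# Prefer the behaviour with the highest average reward.
best = max(rewards, key=lambda s: sum(rewards[s]) / len(rewards[s]))
print(best)
```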

Use Cases and Applications Prompt engineering and tuning techniques have a wide range of
applications, including:

 Content generation: Creating articles, blog posts, and other creative content.
 Code generation: Writing code snippets and entire programs.
 Translation: Translating text from one language to another.
 Summarization: Summarizing long documents into shorter versions.
 Question answering: Answering questions posed in natural language.

Pre-training Pre-training involves training a model on a massive amount of text data to learn
general language patterns. This can significantly improve the model's performance on
downstream tasks.
Designing Effective Prompts Here are some tips for designing effective prompts:

 Be specific: Clearly state what you want the model to do.
 Use clear and concise language: Avoid ambiguity and unnecessary complexity.
 Provide relevant context: Give the model the information it needs to generate
accurate and relevant output.
 Experiment with different prompts: Try different phrasing and styles to see what
works best.
 Iterate and refine: Continuously refine your prompts to improve the model's output.

By understanding these techniques and best practices, you can effectively leverage prompt
engineering to unlock the full potential of AI models.

What is the difference between tuning and optimization?

Answer: Optimization applies general transformations designed to improve the
performance of any application in any supported environment, while tuning
adjusts your application's specific characteristics, or targets its execution
environment, to improve its performance.

Optimization is finding the best solution from a set of possible
solutions. Optimization techniques are methods used to solve optimization
problems. Some examples of optimization techniques include:

 Unconstrained optimization: Finds the minimum of a function without limiting the
parameters
 Constrained optimization: Finds the minimum of a function while satisfying a set of
constraints, such as equalities or inequalities
 Convex optimization: A subfield of mathematical optimization that studies
minimizing convex functions over convex sets
 Gradient descent: An algorithm that finds the optimal values for parameters in a
machine learning model
 Linear programming: A technique used to maximize or minimize a linear objective
subject to linear constraints
 Discrete optimization: Optimization over variables restricted to discrete values,
such as integers
 Engineering optimization: Uses optimization techniques to achieve design goals in
engineering
 Genetic algorithms: A method inspired by biological evolution to obtain better
solutions
 Metaheuristics: A class of methods that provide good-quality solutions in
reasonable time

Fine-tuning is a process that involves adjusting a model to improve its
performance on a specific task or domain. Here are some examples of fine-
tuning:

 Adapting to a new domain: Fine-tune a general model to specialize in a new field,
such as by training it on technical documents.
 Improving performance on a specific task: Fine-tune a model to generate better
poetry or translate between languages.
 Customizing output characteristics: Fine-tune a model to adjust its tone,
personality, or level of detail.
 Adapting to new data: Fine-tune a model to keep up with changes in data
distribution.
 Parameter-efficient fine-tuning: Update only a small subset of a pre-trained
model's parameters, or small added modules, instead of all of its weights.
 Few-shot learning: Fine-tune a model with a very limited number of samples.
 Supervised fine-tuning: Train a model on a labeled dataset specific to a target task.

Before fine-tuning a model, it's often necessary to clean and preprocess
the data to remove noise and irrelevant information.
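A minimal sketch of that cleaning step: normalize case, strip punctuation, and collapse whitespace before building a training dataset. Real pipelines usually do more (deduplication, language filtering, length limits).

```python
import string

# Normalize one text record before it enters a fine-tuning dataset.
def clean_text(text: str) -> str:
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

print(clean_text("  Hello,   WORLD!! "))  # -> "hello world"
```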

Here are some steps you can take to fine-tune a prompt:

 Prepare training data
Create training data for the model to learn from. This data can be examples of LLM
requests and their desired responses.

 Use an optimization algorithm
Use an optimization algorithm to adjust the prompt template. The goal is to find the
prompt template that gives the most accurate responses from the LLM.

 Use the right prompt type
Use the right prompt type for the task. Question-answering and refine prompts are
two common types of prompts.

 Check everything sent to the LLM
Make sure to check everything that is sent to the LLM, as there are often templates
wrapped around the prompt.

 Keep prompts consistent
Make sure that the prompts used for training and inference are formatted and
worded in the same way.

 Understand the task domain
Have a good understanding of the task domain.

 Use high-quality data
Use high-quality, domain-specific data to construct soft prompts and verbalizers.

 Use human-engineered or AI-generated prompts
Use human-engineered prompts for challenging tasks, or AI-generated prompts for
simpler ones.
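The "keep prompts consistent" step can be enforced in code by routing every prompt through a single template function instead of formatting prompts ad hoc. The instruction/response template wording below is a common convention, invented here for illustration, not a required format.

```python
# One template shared by training and inference keeps prompts consistent.
TEMPLATE = "### Instruction:\n{instruction}\n### Response:\n"

def build_prompt(instruction: str) -> str:
    return TEMPLATE.format(instruction=instruction.strip())

print(build_prompt(" Summarize this report. "))
```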

Fine-tuning can improve the performance of AI models from the OpenAI
API, resulting in faster and more accurate responses.

How does in-context learning help prompt tuning?

Prompt tuning is a technique that improves the performance of a pre-
trained large language model (LLM) on specific tasks without changing its core
architecture. It involves adjusting the prompts that guide the model's
response, rather than modifying the model's internal parameters.

Here are some key features of prompt tuning:


 Soft prompts: Prompt tuning uses "soft prompts", which are tunable parameters that
are inserted at the beginning of the input sequence.

 Task-specific context: Prompt tuning provides the model with task-specific context
by using prompts that are either human-engineered or AI-generated.

 Consistent prompt representation: Prompt tuning uses a consistent prompt
representation across all tasks.

 Cost-effective: Prompt tuning is more cost-effective than other methods like model
or prefix tuning.

 Corrects model behaviour: Prompt tuning can correct the model's behavior, such
as mitigating bias.
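Soft prompts are learned vectors prepended to the input embeddings rather than human-readable text. This pure-Python sketch shows only the "prepend tunable vectors" mechanics, with toy 3-dimensional embeddings standing in for a real embedding space.

```python
# Two tunable soft-prompt vectors (values are arbitrary placeholders).
soft_prompt = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]

# Embeddings of the actual user tokens (also toy values).
token_embeddings = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]

# The model sees the soft prompt first, then the real input. During
# prompt tuning, only `soft_prompt` is updated by gradient descent;
# the model weights and token embeddings stay frozen.
model_input = soft_prompt + token_embeddings
print(len(model_input))  # -> 4
```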

For example, when using a model like GPT-4 to generate a news article,
you might start the prompt with a headline and a summary to provide more
context for the model.
