Session 2: Introduction to Generative AI

Generative AI encompasses methodologies that create content resembling their training data, including images, text, and music. It operates through models like Generative Adversarial Networks (GANs) and Diffusion Models, which learn patterns from data to generate new instances. Real-world applications include drug discovery, content creation, and conversational AI, highlighting its potential in various fields.


Introduction to Generative AI

Generative AI and Prompt Engineering

Ram N Sangwan
What is Generative AI?

Generative AI refers to a set of AI methodologies that can create content that resembles the training data they were exposed to.

[Diagram: nested circles showing Generative AI as a subset of Deep Learning, which is a subset of Machine Learning, which is a subset of Artificial Intelligence]

Generative AI
• A type of AI that can create new content.
• Subset of Deep Learning where the models are trained to generate output on their own.
• Models that can create a wide range of outputs such as images, music, speech, text, and other types of data.
Generative AI

Machine Learning that can produce content such as audio, text, code, video, images, and other data.

[Diagram: Generative AI producing Audio, Code, Text, Images, and Video]
How does Generative AI Work?

• Learns the underlying patterns in a given data set and uses that knowledge to create new data that shares those patterns.

[Diagram: training data of dog images → the model figures out common dog patterns and features → prompt "Draw a picture of a Dog" → generated image]


Generative AI and Other AI Approaches

Predictive ML Model: Data + Labels → ML Model → Output
• Classification
• Recommendation System
• Game Playing

GenAI Model: Input → Model → Output
• Image Synthesis
• Text Generation
• Music Composition
Types of Generative AI Models

Image-Based
• Generates visual content
• Learns from a large collection of images

Text-Based
• Generates textual content
• Learns from a large collection of text data
Generative Adversarial Network (GAN)

• Generates realistic images that resemble the training data.
• Creates high-quality images and original artworks.
• Imagine you have a bunch of cat images and you want a machine learning model to create similar images. This is exactly what a GAN does.
https://www.christies.com/lot/lot-edmond-de-belamy-from-la-famille-de-6166184/
Generative Adversarial Networks (GANs)

Generator (the forger)
• Takes in random numbers as input and generates the images of interest.

Discriminator (the detective)
• Takes both the images from the generator and the real images from the data and spots the difference between them.

Adversarial Objective
• The two networks are pitted against each other: the generator creates ever more realistic synthetic images to fool the discriminator, while the discriminator tries to get better at detecting fake images.

• Both the generator and the discriminator are trained together.
• Over the course of training, the generator gets better at creating images that look real, and the discriminator gets better at spotting fakes.
• This back-and-forth forces both networks to improve until the generator can create highly realistic synthetic images that are indistinguishable from real images.
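The adversarial training loop described above can be summarized in a short script. The sketch below is a minimal, illustrative PyTorch version, assuming PyTorch is installed; the tiny MLP generator and discriminator, the random stand-in data, and all hyperparameters are assumptions for illustration rather than anything specified in the slides.

import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64            # noise size and flattened "image" size

generator = nn.Sequential(               # the forger: noise -> fake sample
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim), nn.Tanh())
discriminator = nn.Sequential(           # the detective: sample -> real/fake logit
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(512, data_dim)   # stand-in for a dataset of real images

for step in range(200):
    real = real_data[torch.randint(0, 512, (32,))]
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output "real" on fakes.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Over many iterations the two losses push against each other, which is exactly the back-and-forth dynamic the slide describes.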
Diffusion Models

• Work by adding noise to the images in the training data through a forward diffusion process, and then reversing that process to recover the original image through reverse diffusion.

• These models can be trained on large unlabeled datasets in an unsupervised manner.
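To make the forward / reverse idea concrete, here is a minimal sketch of the forward (noising) step, assuming PyTorch is available; the linear beta schedule, the number of steps, and the dummy image tensor are illustrative assumptions.

import torch

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t):
    """Jump straight to step t: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * noise."""
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t]
    return a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise, noise

x0 = torch.rand(1, 3, 32, 32)              # a dummy 32x32 RGB image in [0, 1]
x_t, noise = q_sample(x0, t=500)           # heavily noised version of the image
# A reverse-diffusion model is then trained to predict `noise` from (x_t, t),
# which lets it iteratively denoise pure noise back into an image at sampling time.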
Transformers and Large Language Models

LLMs
• Built to understand, generate, and process human language at a massive scale.

LLMs and Transformers
• Based on deep learning architectures such as Transformers.
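As a quick illustration of an LLM in use, the snippet below generates text with the Hugging Face transformers library, assuming it is installed; the choice of the small gpt2 checkpoint and the prompt are assumptions for illustration, not something named in the slides.

from transformers import pipeline

# Load a small pretrained Transformer-based language model and generate text.
generator = pipeline("text-generation", model="gpt2")
result = generator("Generative AI is", max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])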
Generative AI Real-World Use Cases

Drug Discovery
• New molecular structures

Visual
• Image generation
• Video generation
• Design

Language
• Content creation
• Code generation
• Conversational AI

Music
• Music generation
Email Spam Classification Model

Data: Examples of emails tagged as either Spam or Not Spam

Training
• Discriminative – Learns the boundary that separates “spam” vs “not spam”.
• Generative – Learns the distribution of “spam” and “not spam” emails to understand how each class generates content.

Inference
• Discriminative – Determines on which side of the boundary a new email falls.
• Generative – Based on the learned distributions, computes the likelihood of the new email being “spam” vs “not spam”.
Discriminative and Generative Models

Aspect          | Generative Models                                                        | Discriminative Models
Purpose         | Model the data distribution                                              | Model the conditional probability of labels given data
Use Cases       | Data generation, denoising, unsupervised learning                        | Classification, supervised learning tasks
Common Examples | Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs)  | Logistic Regression, Support Vector Machines, Deep Neural Networks
Training Focus  | Maximize likelihood of observed data, capture data structure             | Learn a decision boundary, differentiate between classes
Example Task    | Image generation, inpainting (e.g., GANs, VAEs)                          | Text classification, object detection (e.g., Deep Neural Networks)
Mechanics of Generative AI

Generative Models
Can generate new instances based on what they have learned, e.g. Variational Autoencoders, GANs, and RNNs.

Training Data
The quality of the dataset directly impacts the performance of the generated outputs.

Loss Functions
Mathematical functions that measure the difference between the generated output and a desired target.
• Guide the learning process by providing feedback on how well the model is performing (a combined training-step sketch follows this slide).

Optimization Algorithms
Adjust the parameters of the generative model to minimize the loss function during training, e.g. Stochastic Gradient Descent (SGD), Adam, and RMSProp.

Evaluation Metrics
Metrics such as perplexity for language models or the Inception Score for image generation tasks.

Hyperparameters and Tuning
Settings that control the behaviour of the learning process, e.g. learning rate, batch size, number of layers in the network, etc.
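The sketch below ties several of these pieces together in one toy training step, assuming PyTorch: a cross-entropy loss function, the Adam optimizer, a couple of hyperparameters, and perplexity as an evaluation metric. The miniature next-token model and the random data are illustrative assumptions.

import torch
import torch.nn as nn

vocab_size, batch_size, learning_rate = 100, 8, 1e-3    # hyperparameters
model = nn.Sequential(nn.Embedding(vocab_size, 32), nn.Flatten(),
                      nn.Linear(32 * 10, vocab_size))   # predicts the next token
loss_fn = nn.CrossEntropyLoss()                         # loss function
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

tokens = torch.randint(0, vocab_size, (batch_size, 10))  # fake token sequences
targets = torch.randint(0, vocab_size, (batch_size,))    # fake next tokens

logits = model(tokens)
loss = loss_fn(logits, targets)                            # feedback on model quality
optimizer.zero_grad(); loss.backward(); optimizer.step()   # adjust parameters

perplexity = torch.exp(loss)               # evaluation metric for language models
print(float(loss), float(perplexity))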
Mechanics of Generative AI (continued)

Regularization Techniques
Help prevent overfitting by adding constraints to the model's parameters or architecture during training, e.g. dropout, weight decay (L2), and early stopping.

Data Augmentation
Involves generating additional training data from existing instances by applying transformations such as rotation, scaling, flipping, etc. It can help improve the generalization ability of generative models.
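A short sketch of these ideas, assuming PyTorch and torchvision are installed: dropout inside the network, L2 weight decay on the optimizer, and a transform pipeline that augments image data by rotation, scaling, and flipping. All layer sizes and transform settings are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import transforms

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Dropout(p=0.5),                 # dropout regularization
    nn.Linear(256, 10))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             weight_decay=1e-4)   # L2 weight decay

augment = transforms.Compose([         # data augmentation for image inputs
    transforms.RandomRotation(15),                        # rotation
    transforms.RandomResizedCrop(32, scale=(0.8, 1.0)),   # scaling / cropping
    transforms.RandomHorizontalFlip(),                    # flipping
    transforms.ToTensor()])
# `augment` would be applied to each training image when building the dataset,
# effectively generating extra training examples from existing ones.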
Potential of Generative AI
• Low Resource Languages
• Personalized Content Generation
• AI Tutors
• Intelligent Assistants
• Accelerating Scientific Discovery
Thank You
