
Machine Learning Diploma

DEEP LEARNING 5: COMPUTER VISION


Agenda
➢ Image Augmentation
➢ Definition of Image Augmentation.
➢ Image Augmentation Techniques.
➢ Transfer Learning
➢ Definition of Transfer Learning
➢ Traditional Learning Vs Transfer Learning
➢ Different Types of Transfer Learning
➢ Common Strategy for Deep Transfer Learning
➢ Benefits of Transfer Learning
Data Augmentation
Image Augmentation
▪ Deep neural networks require a lot of training data to obtain good results and prevent
overfitting. However, it is often very difficult to get enough training samples. Multiple reasons
can make it very hard or even impossible to gather enough data:
1. To make a training dataset, you need to obtain images and then label them. For example, for an image
classification task you need to assign correct class labels; for an object detection task, you need to
draw bounding boxes around objects; and for a semantic segmentation task, you need to assign a correct
class to each input image pixel. This process requires manual labor, and labeling the training data can
be very costly. For example, correctly labeling medical images requires expensive domain experts.
2. Sometimes even collecting training images can be hard. There are many legal restrictions on
working with healthcare data, and obtaining it requires a lot of effort. In other cases, collecting the
images is feasible but expensive. For example, to get satellite images, you need to pay a satellite
operator to take those photos; to get images for road scene recognition, you need an operator to
drive a car and collect the required data.
Image Augmentation
• Image augmentation is a process of creating new training examples from the existing ones. To make a
new sample, you slightly change the original image. For instance, you could make a new image a little
brighter; you could cut a piece from the original image; you could make a new image by mirroring the
original one, etc.
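As a concrete illustration, below is a minimal sketch of such a pipeline using TensorFlow/Keras preprocessing layers; the library choice and the factor values are assumptions, since the slides do not prescribe an implementation.

import tensorflow as tf

# Minimal augmentation pipeline: each layer applies a random change at
# training time, producing a slightly altered copy of each input image.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),   # mirror the original image
    tf.keras.layers.RandomRotation(0.1),        # rotate by up to +/-10% of a full turn
    tf.keras.layers.RandomZoom(0.2),            # zoom in, effectively cropping a piece
    tf.keras.layers.RandomContrast(0.2),        # vary the contrast slightly
])

# images: a float tensor of shape (batch, height, width, channels)
# new_batch = augment(images, training=True)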
Different Image Augmentation Techniques
Image Rotation
• One of the most commonly used augmentation techniques is image rotation. Even if you rotate the image,
the information in the image remains the same. A cat is a cat, even if you see it from a different angle.

• We can use this technique to increase the size of our training data by creating multiple images rotated at
different angles, as in the sketch below.
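A minimal sketch of this idea, assuming the image is a NumPy array and using SciPy for the rotation:

from scipy import ndimage

# Create several rotated copies of one image; the label stays the same.
# `image` is assumed to be a NumPy array of shape (height, width, channels).
def rotated_copies(image, angles=(15, 30, -15, -30)):
    # reshape=False keeps each output the same size as the input
    return [ndimage.rotate(image, angle, reshape=False) for angle in angles]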
Image Flipping
• Flipping can be considered an extension of rotation. It allows us to flip the image in the left-right
direction as well as the up-down direction.
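A small sketch using NumPy (an assumed, but common, choice for this operation):

import numpy as np

# A horizontally and a vertically flipped copy of the same image.
def flipped_copies(image):
    return [np.fliplr(image),   # left-right flip
            np.flipud(image)]   # up-down flip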
Image Shifting
• By shifting the images, we can change the position of the objects in the image and hence give more variety to the model,
which can eventually result in a more generalized model.

• Image shift is a geometric transformation that maps the position of every object in the image to a new location in the
final output image. So if an object is at position (x, y) in the original image, it gets shifted to the new position
(x + dx, y + dy) in the new image, where dx and dy are the respective shifts along the two axes, as in the sketch below.
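A minimal sketch of such a shift, assuming a NumPy image array and using SciPy; the default dx and dy values are illustrative only:

from scipy import ndimage

# Move the whole image content by dx pixels horizontally and dy vertically;
# pixels shifted in from outside the frame are filled with black (cval=0).
def shifted_copy(image, dx=20, dy=10):
    # the shift is given per axis: (rows, columns, channels)
    return ndimage.shift(image, shift=(dy, dx, 0), cval=0)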
Image Noising
• Another popular image augmentation technique is image noising, where we add noise to the
image. This technique allows our model to learn how to separate the signal from the noise in the image,
and it also makes our model more robust to changes in the image.
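A minimal sketch of Gaussian noising, assuming 8-bit images with pixel values in 0-255:

import numpy as np

# Add zero-mean Gaussian noise so the model learns to separate signal
# from noise. sigma controls the noise strength.
def noisy_copy(image, sigma=10.0):
    noise = np.random.normal(0.0, sigma, size=image.shape)
    # keep pixel values in the valid 0-255 range
    return np.clip(image + noise, 0, 255).astype(image.dtype)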
Image Blurring
• Images come from different sources, and hence the quality of images will not be the same from each
source. Some images might be of very high quality and others of very low quality. In such scenarios we
can blur the original images; this makes our model more robust to the quality of the images it sees
in the test data.
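A minimal sketch using a Gaussian blur from SciPy; the sigma value is an illustrative assumption:

from scipy import ndimage

# Gaussian blur simulates low-quality sources; sigma controls the strength.
def blurred_copy(image, sigma=2.0):
    # blur spatially (rows, columns) but not across colour channels
    return ndimage.gaussian_filter(image, sigma=(sigma, sigma, 0))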
Transfer Learning
Transfer Learning
• The reuse of a pre-trained model on a new problem is known as transfer learning in machine learning. In
transfer learning, a machine uses the knowledge learned from a prior task to improve its predictions on a
new task.

• For example, when training a classifier to predict whether an image contains food, you could reuse the
knowledge it gained during training to help distinguish beverages.

• In transfer learning, the knowledge of an already trained machine learning model is transferred to a
different but closely related problem. For example, if you trained a simple classifier to predict
whether an image contains a car, you could use the knowledge the model gained during training to identify
other objects, such as trucks.
Transfer Learning
Traditional Learning Vs Transfer Learning
• Traditional learning is isolated: it occurs purely on specific tasks and datasets, with separate isolated models
trained on them. No knowledge is retained that could be transferred from one model to another. In transfer learning, you can
leverage knowledge (features, weights, etc.) from previously trained models when training newer models, and even tackle
problems like having less data for the newer task.
Transfer Learning
➢ What to transfer?
• We need to understand what knowledge is common between the source and target tasks, and what
knowledge can be transferred from the source task to the target task to help improve the performance of
the target task.

➢ When to transfer (or when not to)?
• When the source and target domains are not related at all, we should not try to apply transfer learning; in
such a scenario the performance will suffer. This type of transfer is called negative transfer. We should
apply transfer learning only when the source and target domains/tasks are related.

➢ How to transfer?
• We need to identify techniques for applying transfer learning when the source and target domains/tasks are
related. We can use inductive transfer learning, transductive transfer learning, or unsupervised transfer
learning.
Different Types of Transfer Learning
1) Inductive Transfer Learning
➢ Inductive transfer learning: same source and target domain, but different tasks
• Inductive transfer learning requires the source and target domains to be the same, though the specific tasks the model
is working on are different.

• The algorithms try to use the knowledge from the source model to improve the target task. The pre-trained
model already has expertise in the features of the domain and is at a better starting point than if we were to train it from
scratch.

• Ex: If we want a child to identify fruits, we start by showing apples of different colors, like red apples, green apples,
and pale yellow apples. We show the child different varieties of apples, like Gala, Granny Smith, and Fuji, and we show
these apples in different settings so that the child is able to identify apples in most scenarios. The same logic is
used to identify other fruits like grapes, oranges, and mangoes. Here the knowledge acquired in learning
apples is applied to learning to identify other fruits: the source and target domains both concern the identification of
fruits, but one task involves identifying apples and the other involves identifying mangoes.
1) Inductive Transfer Learning
➢ The goal of inductive transfer learning:
• The goal of inductive transfer learning is to improve the performance of the target predictive function.

• Inductive transfer learning requires some labeled data in the target domain as training data to induce
the target predictive function.

• If the source domain also has labeled data, we can perform multi-task transfer
learning.

• If the source domain has no labeled data, we can perform self-taught transfer
learning.
2) Transductive Transfer Learning
• The transductive transfer learning strategy is used in scenarios where the domains of the source and
target tasks are not exactly the same but are interrelated, so that one can derive similarities between the
source and target tasks. These scenarios usually have a lot of labeled data in the source domain, while the
target domain has only unlabeled data.

• Let's extrapolate the child example: now we want the child to learn about household objects like chairs,
tables, and beds. The child will utilize the knowledge acquired while identifying fruits to identify household
objects.

• The child might not have been shown enough household objects, but will use the knowledge of shapes,
colors, etc. learned while identifying fruits to identify household objects.

• In transductive transfer learning, no labeled data exists in the target domain, while a lot of labeled data
exists in the source domain.
2) Transductive Transfer Learning

• Transductive transfer learning can be applied when:


• the feature spaces of the source and target domains are different, or
• the feature spaces are the same but the marginal probability distributions
of the input data are different. This case is also referred to as domain adaptation.
3) Unsupervised Transfer Learning
• Unsupervised transfer learning is similar to inductive transfer learning in that the target task is
different from but related to the source task, and the domains of the source and target tasks are the
same. However, no labeled data is available in either the source or the target domain.
Transfer Learning with Deep Learning
• Compared to classical machine learning, deep learning requires significantly more training data and
training time, for example in computer vision, sequential text processing, or audio processing. We can save
the weights of our trained models and share them for others to use. Today we also have pre-trained models
that are extensively used for transfer learning, an approach referred to as deep transfer learning.

• Common strategies for Deep Transfer Learning


• Use the pre-trained model as feature extractors
• Fine-tune the pre-trained models

• Pre-trained deep neural networks for Computer Vision:


• VGG-16
• VGG-19
• Inception V3
• ResNet-50
• Xception
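As an illustration, one of these networks can be loaded with pre-trained weights in a single call; the sketch below assumes TensorFlow/Keras, which ships all five architectures:

import tensorflow as tf

# Load VGG-16 pre-trained on ImageNet. include_top=False drops the original
# 1000-class prediction head so our own layers can be attached instead.
base_model = tf.keras.applications.VGG16(
    weights="imagenet",
    include_top=False,
    input_shape=(224, 224, 3),
)

# The other architectures listed above are loaded the same way, e.g.
# tf.keras.applications.VGG19, InceptionV3, ResNet50, Xception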
Transfer Learning with Deep Learning
• In computer vision, neural networks typically detect edges in the first layers, shapes in the middle
layers, and task-specific features in the later layers. In transfer learning, the early and middle layers
are reused, and only the later layers are retrained, using the labelled data of the new task.
Common Strategy for Deep Transfer Learning
Use the pre-trained model as a feature extractor
• To implement transfer learning, we remove the last prediction layer of the pre-trained model and replace it with our
own prediction layers (FC-T1 and FC-T2), as in the sketch below.

• The weights of the pre-trained model are used as a feature extractor.


• The weights of the pre-trained model are frozen and are not updated during training.
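A minimal sketch of the feature-extractor strategy, assuming TensorFlow/Keras, VGG-16 as the pre-trained model, and a 10-class target task (all assumptions; the layer names fc_t1/fc_t2 simply mirror the FC-T1/FC-T2 layers mentioned above):

import tensorflow as tf

# Feature-extractor strategy: freeze the entire pre-trained network and
# train only the new prediction layers.
base_model = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base_model.trainable = False  # frozen: weights are not updated during training

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu", name="fc_t1"),
    tf.keras.layers.Dense(10, activation="softmax", name="fc_t2"),  # 10 target classes assumed
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])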

Fine-tune the pre-trained models
• We can use deep neural networks like VGG-16, VGG-19, Inception V3, ResNet-50, or Xception as the pre-
trained model.

• To implement transfer learning with fine-tuning, we again remove the last prediction layer of the pre-trained
model and replace it with our own prediction layers (FC-T1 and FC-T2), as in the sketch below.

• The initial lower layers of the network learn very generic features; to preserve these, the
weights of the initial layers of the pre-trained model are frozen and not updated during training.

• The higher layers are used to learn task-specific features, so the higher layers of the pre-trained model
are kept trainable and fine-tuned.

• This improves performance with less training time.
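A minimal sketch of fine-tuning, under the same assumptions as before (TensorFlow/Keras, VGG-16, 10 target classes); how many top layers to unfreeze is a design choice, four here purely for illustration:

import tensorflow as tf

base_model = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))

# Freeze the initial layers (generic features), fine-tune the last block.
for layer in base_model.layers[:-4]:
    layer.trainable = False   # lower layers: frozen
for layer in base_model.layers[-4:]:
    layer.trainable = True    # higher layers: fine-tuned

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu", name="fc_t1"),
    tf.keras.layers.Dense(10, activation="softmax", name="fc_t2"),
])
# A small learning rate avoids overwriting the pre-trained weights.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])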


Benefits of Transfer Learning
• Transfer learning brings a range of benefits to the development process of machine learning
models. The main benefits of transfer learning include the saving of resources and improved
efficiency when training new models. It can also help with training models when only
unlabeled datasets are available, as the bulk of the model will be pre-trained.
• The main benefits of transfer learning for machine learning include:
• Removing the need for a large set of labelled training data for every new model.
• Improving the efficiency of machine learning development and deployment for multiple models.
• A more generalized approach to machine problem solving, leveraging knowledge gained on one task to solve new challenges.
• Models can be trained within simulations instead of real-world environments.
End-to-End Project
Any Questions?
THANK YOU!
