Session 5
Image Rotation
• We can use this technique to increase the size of our training data by creating multiple copies of each image, rotated at different angles.
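• A minimal sketch of this, assuming the Pillow library is available and "sample.jpg" is a placeholder training image:

from PIL import Image

# Load one training image ("sample.jpg" is a placeholder path).
img = Image.open("sample.jpg")

# Create several rotated copies at different angles to enlarge the training set.
for angle in (-30, -15, 15, 30):
    rotated = img.rotate(angle, expand=True)  # expand=True keeps the whole image visible
    rotated.save(f"sample_rot_{angle}.jpg")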
Image Flipping
• Flipping can be considered an extension of rotation. It allows us to flip the image in the left-right direction as well as the up-down direction.
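• A minimal sketch using NumPy, with a dummy random array standing in for a real image:

import numpy as np

# img is assumed to be an image as a NumPy array of shape (height, width, channels).
img = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # dummy image

flipped_lr = np.fliplr(img)  # left-right (horizontal) flip
flipped_ud = np.flipud(img)  # up-down (vertical) flip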
Image Shifting
• By shifting the images, we can change the position of the objects in the image and hence give more variety to the model, which eventually can result in a more generalized model.
• Image shifting is a geometric transformation that maps every pixel of the original image to a new location in the final output image. So if an object is at position (x, y) in the original image, it gets shifted to the new position (X, Y) = (x + dx, y + dy), where dx and dy are the respective shifts along the horizontal and vertical directions.
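• A minimal sketch using SciPy's ndimage shift, with a dummy grayscale array standing in for a real image:

import numpy as np
from scipy.ndimage import shift

# img is assumed to be a grayscale image as a 2-D NumPy array.
img = np.random.rand(64, 64)  # dummy image

dx, dy = 10, 5  # shifts along the horizontal (x) and vertical (y) directions
# SciPy expects offsets in axis order (rows, columns), i.e. (dy, dx);
# a pixel at (y, x) moves to (y + dy, x + dx), and uncovered regions are filled with 0.
shifted = shift(img, shift=(dy, dx), cval=0.0)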
Image Noising
• Another popular image augmentation technique is image noising, where we add noise to the image. This technique allows our model to learn how to separate the signal from the noise in the image. It also makes our model more robust to changes in the image.
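• A minimal sketch adding Gaussian noise with NumPy, assuming the image is stored as floats in [0, 1]:

import numpy as np

# img is assumed to be a float image with values in [0, 1].
img = np.random.rand(64, 64, 3)  # dummy image

rng = np.random.default_rng(seed=0)
noise = rng.normal(loc=0.0, scale=0.05, size=img.shape)  # Gaussian noise
noisy = np.clip(img + noise, 0.0, 1.0)  # keep pixel values in the valid range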
Image Blurring
• Images come from different sources, and hence the quality of images will not be the same from each source. Some images might be of very high quality while others might be of very low quality. In such scenarios we can blur the original images; this will make our model more robust to the quality of the images used in the test data.
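• A minimal sketch using SciPy's Gaussian filter on a dummy grayscale array:

import numpy as np
from scipy.ndimage import gaussian_filter

img = np.random.rand(64, 64)  # dummy grayscale image
blurred = gaussian_filter(img, sigma=2)  # larger sigma gives a stronger blur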
Transfer Learning
• The reuse of a pre-trained model on a new problem is known as transfer learning in machine learning. In transfer learning, a machine uses the knowledge learned from a prior task to improve its predictions on a new task.
• For example, the knowledge gained while training a classifier to recognize beverages can be reused when training a classifier to predict whether an image contains food.
• In transfer learning, the knowledge of an already trained machine learning model is transferred to a different but closely related problem. For example, if you trained a simple classifier to predict whether an image contains a car, you could use the model’s training knowledge to identify other objects such as trucks.
Traditional Learning Vs Transfer Learning
• Traditional learning is isolated and occurs purely based on specific tasks and datasets, training separate isolated models on them. No knowledge is retained that can be transferred from one model to another. In transfer learning, you can leverage knowledge (features, weights, etc.) from previously trained models when training newer models, and can even tackle problems like having less data for the newer task.
Transfer Learning
➢ What to transfer?
• We need to understand what knowledge is common between the source and target tasks, i.e., what knowledge can be transferred from the source task to the target task to help improve the performance of the target task.
➢ How to transfer?
• We need to identify different techniques to apply transfer learning when the source and target domain/task are related. We can use inductive transfer learning, transductive transfer learning, or unsupervised transfer learning.
Different Types of Transfer Learning
1) Inductive Transfer Learning
➢ Inductive Transfer Learning - same source and target domain but different tasks
• Inductive Transfer Learning requires the source and target domains to be the same, though the specific tasks the model
is working on are different.
• The algorithms try to use the knowledge from the source model and apply it to improve the target task. The pre-trained
model already has expertise on the features of the domain and is at a better starting point than if we were to train it from
scratch.
• Ex: If we want a child to identify fruits, we start by showing apples of different colors, like red apples, green apples, pale yellow apples, etc. We show the child different varieties of apples, like Gala, Granny Smith, Fuji, etc. We show these apples in different settings so that the child is able to identify apples in most scenarios. The same logic is used to identify other fruits like grapes, oranges, mangoes, etc. Here the knowledge acquired in learning apples is applied to learning to identify other fruits. The source and target domains are both related to the identification of fruits, but one task involves identifying apples and the other involves identifying mangoes.
1) Inductive Transfer Learning
➢ The goal of inductive transfer learning:
• The goal of inductive transfer learning is to improve the performance of the target predictive function.
• Inductive transfer learning requires some labeled data in the target domain as training data to induce the target predictive function.
• If the source and target domains both have labeled data, then we can perform multi-task transfer learning.
• If only the target domain has labeled data and the source domain does not, then we can perform self-taught transfer learning.
2) Transductive Transfer Learning
• The transductive transfer learning strategy is used in scenarios where the domains of the source and target tasks are not exactly the same but are interrelated, so that similarities can be derived between the source and target tasks. These scenarios usually have a lot of labeled data in the source domain, while the target domain has only unlabeled data.
• Let’s extrapolate this learning: now we want the child to learn about household objects like chairs, tables, beds, etc. The child will utilize the knowledge acquired while identifying fruits to identify household objects.
• The child might not have been shown enough household objects, but will use the knowledge of shapes, colors, etc. learned while identifying fruits to identify household objects.
• In transductive transfer learning, no labeled data exists in the target domain, while a lot of labeled data exists in the source domain.
2) Transductive Transfer Learning
• To implement transfer learning with fine-tuning, we remove the last prediction layer of the pre-trained model and replace it with our own prediction layers, FC-T1 and FC-T2, as sketched in the code below.
• The initial, lower layers of the network learn very generic features from the pre-trained model. To preserve this, the weights of the initial layers of the pre-trained model are frozen and are not updated during training.
• The higher layers are used for learning task-specific features, so the higher layers of the pre-trained model are kept trainable and are fine-tuned.
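• A minimal sketch of this recipe, assuming TensorFlow/Keras, a VGG16 base pre-trained on ImageNet, and a hypothetical target task with 10 classes (FC-T1 and FC-T2 appear only as comments):

import tensorflow as tf

# Load a pre-trained convolutional base without its original prediction layer.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))

# Freeze the lower layers so their generic features are not updated during training.
base.trainable = False

# Add our own prediction layers on top of the frozen base.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),    # FC-T1
    tf.keras.layers.Dense(10, activation="softmax"),  # FC-T2 (10 target classes assumed)
])

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # train on the target-task data

# Optionally, unfreeze only the higher layers of the base and fine-tune them
# (recompile the model before training again after changing trainable flags).
base.trainable = True
for layer in base.layers[:-4]:
    layer.trainable = False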