ML-II 5
THEORY:
1. Fundamentals of Transfer Learning
1.1 Definition
Transfer Learning is a machine learning technique where a model developed for a specific task is
reused as the starting point for a model on a second task. It leverages knowledge from one
domain to improve learning in another.
Pre-trained Models: Models trained on large datasets, often on general tasks like image
classification or language understanding. They capture broad features that can be useful
for related tasks.
Freezing: Involves keeping the weights of certain layers fixed (non-trainable) to preserve
learned features while training other parts of the model.
Fine-tuning: Involves updating the weights of the pre-trained model by continuing the
training process on a new dataset. Fine-tuning adapts the model’s features to the specific
needs of the new task.
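As a concrete illustration, here is a minimal PyTorch sketch of this workflow, assuming a torchvision ResNet-18 pre-trained on ImageNet and a hypothetical 10-class target task: the backbone is frozen and only a newly added classification head is trained; full fine-tuning would instead leave all weights trainable, typically with a smaller learning rate.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained model: ResNet-18 trained on ImageNet captures broad visual features
# (weights API of torchvision >= 0.13).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freezing: keep the backbone weights fixed so the learned features are preserved.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for the hypothetical 10-class target task.
model.fc = nn.Linear(model.fc.in_features, 10)  # new parameters are trainable by default

# Fine-tuning (head only): only the new head's parameters are given to the optimizer.
# To fine-tune the whole network, unfreeze all parameters and use a smaller learning rate.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for target-domain data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```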
Inductive Transfer Learning: The model learns from labeled examples in the target domain and improves generalization by using knowledge gained from the source domain.
Transductive Transfer Learning: Transfer learning in which knowledge from the source domain helps improve learning on the same kind of task in the target domain, but with different data (a different data distribution).
One-shot and Few-shot Learning: Learning from a single example, or very few examples, in the target domain. Pre-trained models are used to generalize from this minimal data.
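One way a pre-trained model enables this is sketched below, under the assumption of a hypothetical embed() function standing in for a frozen pre-trained encoder: each class is summarised by the mean embedding (prototype) of its handful of support examples, and a new input is assigned to the nearest prototype.

```python
import numpy as np

def embed(x):
    # Hypothetical stand-in for a frozen pre-trained encoder (CNN, text model, ...).
    # In practice this would return the model's feature vector for input x.
    rng = np.random.default_rng(abs(hash(x)) % (2**32))
    return rng.normal(size=64)

def build_prototypes(support):
    # support: {class_label: [a few examples]} - one to five examples per class.
    return {label: np.mean([embed(x) for x in examples], axis=0)
            for label, examples in support.items()}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(query, prototypes):
    # Assign the query to the class whose prototype is most similar to its embedding.
    q = embed(query)
    return max(prototypes, key=lambda label: cosine(q, prototypes[label]))

support = {"cat": ["cat_1.jpg"], "dog": ["dog_1.jpg", "dog_2.jpg"]}  # hypothetical files
print(classify("mystery.jpg", build_prototypes(support)))
```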
Zero-shot Learning: Enabling the model to recognize classes that were not seen during training, by leveraging semantic information or descriptions to classify the new, unseen classes.
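A minimal sketch of the attribute-based version of this idea, using made-up attribute vectors: unseen classes are described by semantic attributes, an attribute predictor (trained only on seen classes, and mocked here) estimates those attributes for a new input, and the input is assigned to the closest class description.

```python
import numpy as np

# Semantic descriptions of classes never seen during training
# (attributes: [has_stripes, has_mane, is_domestic]) - illustrative values only.
class_attributes = {
    "zebra": np.array([1.0, 0.0, 0.0]),
    "horse": np.array([0.0, 1.0, 1.0]),
}

def predict_attributes(image):
    # Stand-in for an attribute predictor trained on *seen* classes;
    # here it just returns a hard-coded vector for illustration.
    return np.array([0.9, 0.1, 0.2])

def zero_shot_classify(image):
    # Pick the unseen class whose attribute vector is closest (cosine similarity).
    pred = predict_attributes(image)
    scores = {}
    for label, attrs in class_attributes.items():
        scores[label] = pred @ attrs / (np.linalg.norm(pred) * np.linalg.norm(attrs))
    return max(scores, key=scores.get)

print(zero_shot_classify("unknown_animal.jpg"))  # -> "zebra" for this made-up vector
```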
Multi-task Learning: Simultaneously learning multiple related tasks with a shared model. The shared representation helps improve performance across all tasks by transferring knowledge between them.
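A minimal PyTorch sketch of such a shared model, with hypothetical tasks and sizes (a 3-class classification head and a 1-dimensional regression head on one shared encoder): the combined loss updates the shared weights, which is where knowledge is transferred between the tasks.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared representation used by both tasks.
        self.encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        # Task-specific heads (hypothetical: 3-class classification and 1-D regression).
        self.cls_head = nn.Linear(64, 3)
        self.reg_head = nn.Linear(64, 1)

    def forward(self, x):
        h = self.encoder(x)
        return self.cls_head(h), self.reg_head(h)

model = MultiTaskNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a dummy batch with labels for both tasks.
x = torch.randn(16, 32)
y_cls = torch.randint(0, 3, (16,))
y_reg = torch.randn(16, 1)

logits, reg_out = model(x)
loss = nn.functional.cross_entropy(logits, y_cls) + nn.functional.mse_loss(reg_out, y_reg)

optimizer.zero_grad()
loss.backward()        # gradients from both tasks flow into the shared encoder
optimizer.step()
```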
Instance-based Transfer Learning: Directly using instances (examples) from the source domain as part of the training set for the target domain.
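A minimal scikit-learn sketch of this on synthetic data: labeled source-domain instances are pooled with a small target-domain training set, and a reduced sample weight on the source instances (the value 0.3 below is purely illustrative) keeps the target domain dominant.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins: a large labeled source set and a small target set.
X_src, y_src = make_classification(n_samples=500, n_features=10, random_state=0)
X_tgt, y_tgt = make_classification(n_samples=50, n_features=10, random_state=1)

# Pool source instances into the target training set.
X_train = np.vstack([X_src, X_tgt])
y_train = np.concatenate([y_src, y_tgt])

# Down-weight the source instances so the target domain dominates training.
weights = np.concatenate([np.full(len(y_src), 0.3), np.full(len(y_tgt), 1.0)])

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train, sample_weight=weights)
print(clf.score(X_tgt, y_tgt))  # evaluate on target-domain data
```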
Negative Transfer: Occurs when transferring knowledge from the source domain harms performance on the target domain due to significant differences between the domains.
6.1 Natural Language Processing (NLP)
Example: Using pre-trained models like BERT or GPT for tasks such as text classification, sentiment analysis, and machine translation.
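For instance, a minimal Hugging Face Transformers sketch along these lines, assuming the bert-base-uncased checkpoint and a two-label sentiment task: the pre-trained encoder is loaded together with a fresh classification head, which would then be fine-tuned on labeled sentiment data before its predictions are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Pre-trained BERT encoder plus a new, randomly initialised 2-class head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# A single forward pass; in practice the model is fine-tuned on labeled
# sentiment data before these logits are useful.
inputs = tokenizer("The movie was surprisingly good.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities from the (not yet fine-tuned) head
```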
6.2 Audio/Speech
Example: Adapting pre-trained speech models (e.g., Wav2Vec 2.0 or Whisper) to tasks such as speech recognition, speaker identification, and keyword spotting.
6.3 Computer Vision
Example: Utilizing models like ResNet or VGG, pre-trained on ImageNet, for object detection, image segmentation, and facial recognition in specialized domains.
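A minimal torchvision sketch of adapting a pre-trained detector to a specialized domain, assuming a hypothetical task with two object classes plus background: the COCO-pre-trained Faster R-CNN is kept and only its box-prediction head is replaced, after which the model would be fine-tuned on annotated domain images.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Faster R-CNN pre-trained on COCO; its backbone transfers general visual features
# (weights API of torchvision >= 0.13).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box predictor with one sized for the new domain
# (hypothetically 2 object classes + 1 background class).
num_classes = 3
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# The model is now ready to be fine-tuned on annotated images from the
# specialized domain (images plus bounding boxes and labels).
```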