
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING (DATA SCIENCE)

LAB EXPERIMENT NO.9

Name: Sayantan Mukherjee


SAP: 60009220131
Batch: D2-2
Roll: D102

AIM: To understand transfer learning strategies and their components.

THEORY:
1. Fundamentals of Transfer Learning
1.1 Definition

Transfer Learning is a machine learning technique where a model developed for a specific task is
reused as the starting point for a model on a second task. It leverages knowledge from one
domain to improve learning in another.

1.2 Pre-trained Model Approach

• Pre-trained Models: Models trained on large datasets, often on general tasks like image classification or language understanding. They capture broad features that can be useful for related tasks.
• Freezing: Involves keeping the weights of certain layers fixed (non-trainable) to preserve learned features while training other parts of the model.
• Fine-tuning: Involves updating the weights of the pre-trained model by continuing the training process on a new dataset. Fine-tuning adapts the model’s features to the specific needs of the new task. Both steps are sketched in the code below.
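
The freezing and fine-tuning steps above can be made concrete with a short PyTorch sketch. This is a minimal example, assuming torchvision is available and a hypothetical new task with 10 classes; the dataset and training loop are omitted.

import torch
import torch.nn as nn
from torchvision import models

# Load a model pre-trained on ImageNet (the pre-trained model approach).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freezing: keep the existing weights fixed (non-trainable).
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for the new 10-class task (hypothetical);
# only this new layer's weights remain trainable.
model.fc = nn.Linear(model.fc.in_features, 10)

# Fine-tuning: continue training on the new dataset. Here only the head is
# optimized; to fine-tune the whole network, skip the freezing loop above.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

In practice the new head is often trained first, after which deeper layers are unfrozen and fine-tuned with a smaller learning rate.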

2. Transfer Learning Strategies


2.1 Inductive Learning

• Definition: The model learns from labeled examples in the target domain and improves generalization by using knowledge gained from the source domain.

2.2 Inductive Transfer

• Definition: Transfer learning where knowledge from a source domain helps improve learning in the same type of target task but with different data.

2.3 Transductive Transfer Learning

• Definition: Transfer learning that improves performance on the target domain by leveraging data from the source domain. It is typically used when labeled examples are scarce or absent in the target domain.

2.4 Unsupervised Transfer Learning


• Definition: Transfer learning where the target domain does not have labeled data. The model is trained using only the source domain data, often utilizing unsupervised techniques for feature extraction and representation learning.
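
As a rough illustration, the sketch below embeds unlabeled target-domain images with a frozen pre-trained encoder and clusters the resulting features. The tensor unlabeled_images and the cluster count are assumptions; image preprocessing is omitted.

import torch
from torchvision import models
from sklearn.cluster import KMeans

# Frozen pre-trained encoder; the ImageNet classifier head is dropped.
encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()
encoder.eval()

# `unlabeled_images` is an assumed tensor of shape (N, 3, 224, 224).
with torch.no_grad():
    features = encoder(unlabeled_images)   # (N, 512) transferred representations

# Group the unlabeled target data using the transferred features.
clusters = KMeans(n_clusters=5, n_init=10).fit_predict(features.numpy())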

3. Types of Deep Transfer Learning


3.1 Domain Adaptation

• Definition: Adjusting a model trained on a source domain to perform well on a different but related target domain. It focuses on minimizing the discrepancies between the source and target domains.
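
One common way to reduce this discrepancy is to add a distribution-matching penalty to the task loss. The sketch below uses a simple linear-kernel Maximum Mean Discrepancy (MMD) term; feature_net, classifier, and the batch names are hypothetical, not a fixed API.

import torch
import torch.nn.functional as F

def linear_mmd(source_feats, target_feats):
    # Squared distance between the mean embeddings of the two domains.
    delta = source_feats.mean(dim=0) - target_feats.mean(dim=0)
    return (delta * delta).sum()

def adaptation_loss(src_x, src_y, tgt_x, lam=0.5):
    # `feature_net` and `classifier` are assumed, user-defined modules.
    src_feats = feature_net(src_x)
    tgt_feats = feature_net(tgt_x)
    task_loss = F.cross_entropy(classifier(src_feats), src_y)
    # Task loss on labeled source data plus a domain-discrepancy penalty.
    return task_loss + lam * linear_mmd(src_feats, tgt_feats)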

3.2 Domain Confusion

• Definition: Training the model to become domain-invariant by reducing the differences between source and target domains. Typically achieved through adversarial training techniques.
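
A standard adversarial recipe is the gradient reversal layer used in DANN-style training: a domain classifier learns to tell source from target, while reversed gradients push the feature extractor toward domain-invariant features. A minimal sketch, with the domain classifier assumed:

import torch

class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; flips the gradient sign in the backward pass.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def domain_logits(features, domain_classifier):
    # Minimizing the domain loss through the reversed gradients trains the
    # feature extractor to confuse the domain classifier.
    return domain_classifier(GradReverse.apply(features))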

3.3 One-shot Learning

• Definition: Learning from a single example or very few examples in the target domain. Uses pre-trained models to generalize from minimal data.
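
A minimal one-shot sketch, assuming a frozen pre-trained encoder and a hypothetical support_images tensor holding exactly one image per class: a query is assigned to the class whose single stored embedding is most similar.

import torch
import torch.nn.functional as F
from torchvision import models

encoder = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
encoder.fc = torch.nn.Identity()   # use the network as a feature extractor
encoder.eval()

# `support_images`: assumed tensor (num_classes, 3, 224, 224), one per class.
with torch.no_grad():
    prototypes = F.normalize(encoder(support_images), dim=1)

def predict(query_image):
    with torch.no_grad():
        q = F.normalize(encoder(query_image.unsqueeze(0)), dim=1)
    # Nearest prototype by cosine similarity decides the class.
    return (q @ prototypes.T).argmax(dim=1).item()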

3.4 Zero-shot Learning

• Definition: Enabling the model to recognize classes that were not seen during training. It involves leveraging semantic information or descriptions to classify new, unseen classes.
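
One common formulation scores unseen classes in a shared semantic space. The sketch below is purely illustrative: projector and class_embeddings (e.g. attribute or text-description vectors) are assumed to exist.

import torch
import torch.nn.functional as F

def zero_shot_predict(image_features, class_embeddings, projector):
    # Map image features into the semantic space where classes are described.
    projected = F.normalize(projector(image_features), dim=-1)
    classes = F.normalize(class_embeddings, dim=-1)
    # Highest cosine similarity selects a class never seen during training.
    return (projected @ classes.T).argmax(dim=-1)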

3.5 Multitask Learning

• Definition: Simultaneously learning multiple related tasks using a shared model. The shared representation helps improve performance across all tasks by transferring knowledge between them.
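
A minimal multitask sketch, assuming two toy tasks (a 10-class classification and a scalar regression) sharing one trunk; the layer sizes are arbitrary.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultitaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # shared representation
        self.head_a = nn.Linear(64, 10)  # e.g. classification head
        self.head_b = nn.Linear(64, 1)   # e.g. regression head

    def forward(self, x):
        h = self.shared(x)               # knowledge transfers through this trunk
        return self.head_a(h), self.head_b(h)

def multitask_loss(model, x, y_class, y_reg):
    out_a, out_b = model(x)
    # Summed losses update the shared trunk with signal from both tasks.
    return F.cross_entropy(out_a, y_class) + F.mse_loss(out_b.squeeze(1), y_reg)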

4. Types of Transferable Components


4.1 Instance Transfer

• Definition: Directly using instances (examples) from the source domain as part of the training set for the target domain.
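
A simple way to realize this in PyTorch is to concatenate the two datasets and down-weight the borrowed source instances with a sampler. Here source_ds and target_ds are assumed Dataset objects with compatible labels, and the weights are illustrative.

import torch
from torch.utils.data import ConcatDataset, DataLoader, WeightedRandomSampler

combined = ConcatDataset([target_ds, source_ds])   # datasets are assumed
# Sample target instances about four times as often as source instances.
weights = [1.0] * len(target_ds) + [0.25] * len(source_ds)
sampler = WeightedRandomSampler(weights, num_samples=len(combined))
loader = DataLoader(combined, batch_size=32, sampler=sampler)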

4.2 Feature-representation Transfer


• Definition: Transferring learned features or representations from the source domain to improve learning in the target domain.

4.3 Parameter Transfer

• Definition: Reusing weights and parameters from a pre-trained model to jump-start learning in the target domain.
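
In PyTorch this is typically a state-dict copy. The sketch below transfers all matching ResNet-18 weights into a new 5-class model (the class count is an assumption); the mismatched head is dropped first, since load_state_dict raises on shape mismatches even with strict=False.

import torch
from torchvision import models

target_model = models.resnet18(num_classes=5)   # new task, assumed 5 classes
pretrained = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

state = pretrained.state_dict()
state.pop("fc.weight")
state.pop("fc.bias")                               # head shapes differ, so drop them
target_model.load_state_dict(state, strict=False)  # strict=False tolerates the missing head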

4.4 Relational-knowledge Transfer

• Definition: Transferring relationships or dependencies learned in the source domain to the target domain, enhancing the model’s ability to understand complex patterns.

5. Transfer Learning Challenges


5.1 Negative Transfer

• Definition: Occurs when transferring knowledge from the source domain harms performance on the target domain due to significant differences between the domains.

5.2 Transfer Bounds

• Definition: The theoretical limits of how much performance improvement can be achieved through transfer learning. These bounds depend on the similarity between source and target domains and the amount of transferred knowledge.

6. Applications of Transfer Learning


6.1 Natural Language Processing (NLP)

• Example: Using pre-trained models like BERT or GPT for tasks such as text classification, sentiment analysis, and machine translation.
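
As a quick illustration, the Hugging Face transformers library exposes pre-trained models behind one-line pipelines; the call below downloads a default sentiment checkpoint on first use, and the printed output shown is indicative only.

from transformers import pipeline

# Reuse a pre-trained transformer for sentiment analysis with no training.
classifier = pipeline("sentiment-analysis")
print(classifier("Transfer learning makes small datasets workable."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]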

6.2 Audio/Speech

• Example: Applying models pre-trained on large speech datasets to improve speech recognition or speaker identification in new contexts.

6.3 Computer Vision

• Example: Utilizing models like ResNet or VGG, pre-trained on ImageNet, for object detection, image segmentation, and facial recognition in specialized domains.
