AAM Ans

The document discusses transfer learning, a technique that utilizes pretrained deep learning models to enhance training efficiency for new tasks, categorized into inductive, transductive, and unsupervised types. It also explains the Holdout Method for model selection, which involves splitting a dataset into training and test sets to evaluate classifier performance based on accuracy and error rates. Examples illustrate the application of these concepts in various scenarios.

Uploaded by

teresanair2

 Approaches to Transfer Learning:

 Deep learning models take a long time to train from scratch on complex tasks, which is inefficient. For example, training a model for an NLP task can take days or weeks. Transfer learning lets us largely avoid this training time.
 Reusing a pretrained deep learning model for a new task is known as transfer learning. Utilizing a pretrained model's knowledge (features, weights, etc.) to train fresh models helps you overcome issues such as having less data for the new task.
 The primary idea behind transfer learning in machine learning is to use what has been learned in one task to improve generalization in another. For example, if you initially trained a model to classify cat images, you could use that model's knowledge to recognize other images, such as dogs. We apply the weights that a network has learned on "task A" to a new "task B".
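The weight-reuse idea above can be sketched in plain Python. This is a minimal illustration, not any framework's API: the layer names, sizes, and the dict-of-lists model representation are assumptions made only for the sketch.

```python
import random

random.seed(0)

def make_model(n_features, n_classes):
    """A toy two-layer model represented as a dict of weight lists."""
    return {
        "feature_layer": [[random.random() for _ in range(n_features)]
                          for _ in range(4)],
        "output_head": [[random.random() for _ in range(4)]
                        for _ in range(n_classes)],
    }

# Pretend this model was already trained on "task A" (e.g. cat images).
model_a = make_model(n_features=8, n_classes=2)

# Build a model for "task B" (e.g. dog images, 3 classes) and transfer the
# learned feature layer; the output head stays freshly initialized because
# the new task has different classes.
model_b = make_model(n_features=8, n_classes=3)
model_b["feature_layer"] = [row[:] for row in model_a["feature_layer"]]

# In fine-tuning, transferred layers are often "frozen" (not updated during
# training on task B); here we only record that intent.
frozen_layers = {"feature_layer"}

assert model_b["feature_layer"] == model_a["feature_layer"]
```

The key point the sketch captures is that only the reusable knowledge (the feature layer's weights) is copied, while the task-specific part (the output head) is trained anew.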

Transfer learning can be broadly categorized into three main types: inductive, transductive, and unsupervised.

1. Inductive transfer learning focuses on transferring knowledge from a source task to a different target task, with labeled data available in both domains.
It is used when the source and target tasks are different, but both domains have labeled data.
Example: Using a model trained on general image recognition for medical image classification.

2. Transductive transfer learning involves transferring knowledge from a source domain to a similar but different target domain, often with labeled data only in the source domain.
It is used when the source and target tasks are the same, but the data domains are different.
Example: Applying a model trained on English sentiment analysis to a different language.

3. Unsupervised transfer learning uses unlabeled data in both the source and target domains, focusing on feature extraction or clustering.
It is used when the source and target tasks are different, and both domains have unlabeled data.
Example: Using clustering algorithms to group unlabeled data in both domains.
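The three conditions above can be restated as a small helper function. The function name and boolean flags are hypothetical, introduced only to encode the three cases just listed.

```python
def transfer_learning_type(same_task, source_labeled, target_labeled):
    """Map a source/target setup to the category described in the text."""
    if not same_task and source_labeled and target_labeled:
        return "inductive"       # different tasks, labels in both domains
    if same_task and source_labeled and not target_labeled:
        return "transductive"    # same task, labels only in the source
    if not same_task and not source_labeled and not target_labeled:
        return "unsupervised"    # different tasks, no labels anywhere
    return "unclassified"        # setups the text does not cover

# Image recognition -> medical imaging: different tasks, both labeled.
print(transfer_learning_type(same_task=False, source_labeled=True,
                             target_labeled=True))   # inductive
# English sentiment -> another language: same task, labels only in source.
print(transfer_learning_type(same_task=True, source_labeled=True,
                             target_labeled=False))  # transductive
```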

Holdout Method for model selection:

The Holdout Method is a method to evaluate a classifier. In this method, the data set (a collection of data items or examples) is separated into two sets, called the training set and the test set.
A classifier performs the function of assigning data items in a given collection to a target category or class.
Example –
E-mails in our inbox being classified into spam and non-spam.
A classifier should be evaluated to find its accuracy, error rate, and error estimates. This can be done using the Holdout Method.
Example –
If there are 20 data items, 12 are placed in the training set and the remaining 8 in the test set.
 After partitioning the data set into two sets, the training set is used to build a model/classifier.
 After construction of the classifier, we use the data items in the test set to measure the model/classifier's accuracy, error rate, and error estimate.
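The 12/8 split and evaluation steps above can be sketched as follows. The spam/ham labels and the majority-class baseline "classifier" are illustrative assumptions; any real classifier would slot into the same train/evaluate structure.

```python
import random
from collections import Counter

random.seed(42)

# 20 labeled items as (feature, label) pairs, echoing the e-mail example.
data = [(i, "spam" if i % 3 == 0 else "ham") for i in range(20)]
random.shuffle(data)

# Holdout split: 12 items for training, the remaining 8 for testing.
train_set, test_set = data[:12], data[12:]

# "Train": a trivial baseline that always predicts the most common
# training-set label. A real model would be fitted here instead.
majority = Counter(label for _, label in train_set).most_common(1)[0][0]

# Evaluate on the held-out test set.
correct = sum(1 for _, label in test_set if label == majority)
accuracy = correct / len(test_set)
error_rate = 1 - accuracy
print(f"train={len(train_set)}, test={len(test_set)}, "
      f"accuracy={accuracy:.2f}, error={error_rate:.2f}")
```

Because the test items were never seen during training, the accuracy and error rate computed on them serve as the holdout estimate of the classifier's performance.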

If more data items are placed in the training set used to construct the model/classifier, the classifier learns from more examples, so its error rate and error estimates tend to be low and its accuracy high. This is the sign of a good classifier/model.

Example –
A student 'ABC' is coached by a teacher. The teacher teaches her all possible topics that might appear in the exam. Hence, she tends to make very few mistakes in the exam, thus performing well.
A classifier built from more training data is better prepared for whatever data items the test set uses to evaluate it.
Example –
A student 'ABC' is coached by a teacher. The teacher teaches her some topics which might appear in the exam. If the student 'ABC' is then given a number of exams on the basis of this coaching, an accurate determination of the student's weak and strong points can be made.
If more test data are used to evaluate the constructed classifier, its error rate, error estimate, and accuracy can be determined more accurately.
