
Lecture #0

Machine Learning Algorithms


(CSE513)
Which of the following components is
responsible for learning in an artificial neuron?

a) Activation function
b) Bias
c) Weights
d) All of the mentioned
• Suppose a perceptron model with 3 inputs
[1,1,0], 1 output, and threshold = 2. If the
weights are [2, 3, 1] and bias = -2, what will be
the output of the model?
A. 1
B. 2
C. 0
D. None of these.
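As a quick check, the computation behind this question can be sketched in a few lines. This assumes the common convention that the perceptron outputs 1 when the weighted sum of the inputs plus the bias reaches the threshold:

```python
# Perceptron output: 1 if (w . x + bias) >= threshold, else 0
inputs = [1, 1, 0]
weights = [2, 3, 1]
bias = -2
threshold = 2

# Weighted sum: 2*1 + 3*1 + 1*0 - 2 = 3
weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
output = 1 if weighted_sum >= threshold else 0
print(output)  # 3 >= 2, so the output is 1 (answer A)
```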
• Consider a multilayer neural network having 1 input layer, 1
hidden layer and 1 output layer. The numbers of neurons at the
input layer, hidden layer and output layer are 3, 3 and 1
respectively. How many total weight and bias values are
there in the model?
A. 12
B. 15
C. 18
D. 16
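For a fully connected network, the number of weights between two consecutive layers is (neurons in) x (neurons out), and every non-input neuron carries one bias. A minimal sketch of the count for the 3-3-1 network in the question:

```python
# Parameter count for a fully connected 3-3-1 network
layers = [3, 3, 1]  # neurons in input, hidden, and output layers

# Weights: 3*3 (input->hidden) + 3*1 (hidden->output) = 12
weights = sum(n_in * n_out for n_in, n_out in zip(layers, layers[1:]))
# Biases: one per hidden and output neuron = 3 + 1 = 4
biases = sum(layers[1:])
print(weights + biases)  # 16 total parameters (answer D)
```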
Course Overview
• L T P: 3 0 0

• Text Books:
1. Machine Learning by Thomas Mitchell, McGraw Hill Education

• References:
1. Pattern Classification by Richard O. Duda, Peter E. Hart, David G. Stork, Wiley
Exam details
• Continuous Assessment

CA              Mandatory   Percentage   Allocation/submission
Term Paper      Yes         60%          3/12
Written Test 1  No          40%          5/6
Written Test 2  No          40%          9/10

• MTE (subjective)
• ETE (subjective)
Why Machine Learning?
Course Objectives
Through this course, students should be able to
• categorize machine learning problems based on learning rules.
• apply the key concepts that form the core of machine learning.
• develop key algorithms for systems that are intelligent enough to make decisions.
• contrast the statistical, computational and game-theoretic models for learning.
Application

Entertainment
Application

Intelligent Robots
Unit 1

Introduction to machine learning: learning, need of machine learning, types of learning, well-posed learning problems, designing a learning system, issues in machine learning

Formal learning model: statistical learning framework, empirical risk minimization, empirical risk minimization with inductive bias, PAC learning, general learning model
Unit 2

Decision tree learning: decision tree representation, appropriate problems for decision tree learning, CART, ID3, C4.5, information gain measure, inductive bias in decision trees, minimum description length, issues in decision tree learning, random forest
Syllabus
• Unit 3

Generative models: maximum likelihood estimator, naive Bayes, EM algorithm, Bayesian learning, Bayes theorem, Bayesian decision theory, brute-force concept learning, minimum error rate classification, discriminant function, Bayes optimal classifier, Gibbs algorithm, Bayesian belief network
Syllabus
• Unit 4

Complex prediction problems: one-versus-all, all-pairs, linear multiclass predictors, multiclass SVM and SGD, structured output predictions, ranking, multivariate performance measures

Nonparametric methods: density estimation, Parzen window, the nearest neighbour rule, K-nearest neighbour estimation
Syllabus
• Unit 5

The bias-complexity tradeoff: no free lunch theorem, error decomposition, the VC-dimension, surrogate loss functions, the Rademacher complexity, the Natarajan dimension

Nonuniform learnability: learning via uniform convergence, characterizing nonuniform learnability, structural risk minimization, minimum description length, different notions of learnability, computational complexity of learning
Syllabus
• Unit 6

Algorithm-independent machine learning: combining classifiers, re-sampling for estimating statistics, lack of inherent superiority of any classifier, comparing and estimating classifiers

Reinforcement learning: the learning task, Q learning, nondeterministic rewards and actions, temporal difference learning
What Is Learning?
• Rat Bait Shyness
– Rats learning to avoid poisonous baits
Learning by memorization
• Suppose we would like to program a machine that learns how to filter spam e-mails.
– The machine will simply memorize all previous e-mails that had been labeled as spam by the human user. When a new e-mail arrives, the machine will search for it in the set of previous spam e-mails. If it matches one of them, it will be trashed. Otherwise, it will be moved to the user's inbox folder.
– This approach lacks an important aspect of learning systems:
• the ability to label unseen e-mail messages.
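The memorization approach described above can be sketched in a few lines. The messages here are made-up placeholders, not a real dataset:

```python
# Learning by memorization: flag a message as spam only if it exactly
# matches a previously labeled spam e-mail.
known_spam = {"win a free prize now", "cheap meds online"}  # hypothetical examples

def classify(message: str) -> str:
    # Exact-match lookup against the memorized spam set
    return "spam" if message in known_spam else "inbox"

print(classify("win a free prize now"))    # "spam": seen before
print(classify("win a free prize today"))  # "inbox": no generalization to unseen mail
```

The second call shows the limitation: a one-word change defeats the filter, because nothing was learned beyond the memorized examples.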
Generalization
• A successful learner should be able to progress from individual examples to broader generalization. This is also referred to as inductive reasoning or inductive inference.
• In the bait shyness example presented previously, after the rats encounter an example of a certain type of food, they apply their attitude toward it to new, unseen examples of food of similar smell and taste.
• In the spam filtering task, the learner can scan the previously seen e-mails and extract a set of words whose appearance in an e-mail message is indicative of spam. Then, when a new e-mail arrives, the machine can check whether one of the suspicious words appears in it, and predict its label accordingly.
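The word-based generalization just described might look like the following sketch, with a hypothetical list of suspicious words standing in for what a real learner would extract from past spam:

```python
# Generalizing spam filter: flag a message if it contains any word that
# was indicative of spam in previously seen e-mails.
suspicious_words = {"prize", "free", "cheap"}  # hypothetical, extracted from past spam

def classify(message: str) -> str:
    # Check whether any suspicious word appears in the message
    words = set(message.lower().split())
    return "spam" if words & suspicious_words else "inbox"

print(classify("Claim your FREE prize today"))  # "spam", even though unseen verbatim
print(classify("Meeting moved to 3pm"))         # "inbox"
```

Unlike the memorization filter, this one labels e-mails it has never seen, which is exactly the generalization step the slide describes.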
• What distinguishes learning mechanisms that result in superstition from useful learning?
• This question is crucial to the development of automated learners.
• While human learners can rely on common sense to filter out random, meaningless learning conclusions, once we export the task of learning to a machine, we must provide well-defined, crisp principles that will protect the program from reaching senseless or useless conclusions.
• The development of such principles is a central goal of the theory of machine learning.
Inductive Bias
• We conclude that one distinguishing feature of the bait shyness learning is the incorporation of prior knowledge that biases the learning mechanism. This is also referred to as inductive bias.
• The rats' learning process is biased toward detecting some kinds of patterns while ignoring other temporal correlations between events.
When Do We Need Machine Learning?
• The problem's complexity: tasks that are too complex to program.
– Tasks performed by animals/humans: driving, speech recognition, and image understanding.
– Tasks beyond human capabilities: analysis of very large and complex data sets, such as astronomical data, turning medical archives into medical knowledge, weather prediction, analysis of genomic data, Web search engines, and electronic commerce.

• Need for adaptivity:
– programs that decode handwritten text, adapting to variations between the handwriting of different users; spam detection programs, adapting automatically to changes in the nature of spam e-mails; and speech recognition programs.
Types of Learning
• Supervised
• Unsupervised
• Reinforcement
The ______ the prior assumptions in a system,
the ______ flexible the system will be.

a) stronger, less
b) less, more
c) less, less
d) None of the mentioned
Thank You !!!
