Lecture 3 MLP

The document discusses the development and functionality of perceptrons and multi-layer perceptrons (MLPs) as classifiers in deep learning. It highlights the ability of MLPs to model complex decision boundaries and the Universal Approximation Theorem, which states that a single hidden layer can approximate any continuous function. Additionally, it addresses the capacity of MLPs and the importance of having sufficient layers and nodes for effective modeling.


Deep Learning

DS5007
Sahely Bhadra
[email protected]
Acknowledgement
Perceptron
• Developed by Frank Rosenblatt in the late 1950s and early 1960s
• The initial version was implemented in hardware (the Mark I Perceptron)
[Figure: perceptron diagrams showing weighted inputs and a bias term b; one variant replaces the hard threshold with a ReLU activation. A code sketch follows.]
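Below is a minimal sketch (mine, not from the slides) of the unit in the figure: a weighted sum of inputs plus a bias b, passed through either a hard threshold or a ReLU. All names and values are illustrative.

```python
import numpy as np

def perceptron(x, w, b, activation="threshold"):
    """One unit: weighted sum plus bias, then a nonlinearity."""
    z = np.dot(w, x) + b
    if activation == "threshold":
        return 1.0 if z >= 0 else 0.0   # classic hard-threshold unit
    return max(0.0, z)                  # ReLU variant, as in the figure

# Example with two inputs (illustrative weights)
x = np.array([0.5, -1.0])
w = np.array([1.0, 2.0])
print(perceptron(x, w, b=0.1))                      # 0.0 (z = -1.4 < 0)
print(perceptron(x, w, b=0.1, activation="relu"))   # 0.0 (ReLU of -1.4)
```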
MLP as a Classifier

● An MLP computes a function over real-valued inputs

● The MLP's output defines a complex decision boundary over the space of reals
Perceptron as a Classifier

● A perceptron is a linear classifier: it outputs 1 when w1x1 + w2x2 ≥ T and 0 otherwise, so its decision boundary is the line w1x1 + w2x2 = T (a small sketch follows)
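As a quick illustration of the rule above, here is a hypothetical 2D example (the weights and threshold are made up, not taken from the slide):

```python
w1, w2, T = 1.0, 1.0, 1.5   # illustrative values

def classify(x1, x2):
    # 1 on one side of the line w1*x1 + w2*x2 = T, 0 on the other
    return int(w1 * x1 + w2 * x2 >= T)

print(classify(1.0, 1.0))   # 1: above the boundary line
print(classify(0.2, 0.3))   # 0: below it
```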
Modeling Complex Decision Boundaries (1)

● Build an MLP that outputs 1 if the input is any point in the shaded region, else outputs 0
Modeling Complex Decision Boundaries (2)

● Build an MLP that outputs 1 if the input is any point in the shaded region, else outputs 0
● Define Booleans over reals
Modeling Complex Decision Boundaries (3)

● Build an MLP that outputs 1 if the input is any point in the shaded region, else outputs 0
● Define Booleans over reals
Modeling Complex Decision Boundaries (4)

● Build an MLP that outputs 1 if the input is any point in the shaded region, else outputs 0
● Define Booleans over reals (see the figure and sketch below)
[Figure: a pentagonal region in the (x1, x2) plane bounded by five lines. Each hidden perceptron fires on the inner side of one line, and an AND unit combines them: the sum of hidden outputs is 5 inside the pentagon and at most 4 (3 or 4) outside, so thresholding the sum at 5 outputs 1 exactly on the shaded region.]
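A hedged sketch of the construction the figure suggests: one threshold perceptron per bounding line, each firing on the inner side, and an output unit that ANDs them by checking that all five fire. The five lines below describe a made-up convex region, not the exact pentagon on the slide.

```python
import numpy as np

# Each row (w1, w2, T) encodes the half-plane w1*x1 + w2*x2 >= T.
# These five illustrative lines bound a convex region around the origin.
lines = np.array([
    [ 1.0,  0.0, -1.0],   # x1 >= -1
    [-1.0,  0.0, -1.0],   # x1 <=  1
    [ 0.0,  1.0, -1.0],   # x2 >= -1
    [ 0.0, -1.0, -1.0],   # x2 <=  1
    [ 1.0,  1.0, -1.5],   # x1 + x2 >= -1.5 (cuts one corner)
])

def in_region(x):
    fires = lines[:, :2] @ x >= lines[:, 2]   # hidden layer: 5 threshold units
    return int(fires.sum() == len(lines))     # output: AND (sum must reach 5)

print(in_region(np.array([0.0, 0.0])))   # 1: inside (all five units fire)
print(in_region(np.array([2.0, 0.0])))   # 0: outside (at most 4 fire)
```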
More Complex Decision Boundaries

[Figure: a non-convex region formed as the union of two convex pieces. Each piece is modeled by its own AND subnetwork over its bounding lines; an OR unit at the output combines the two.]
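Continuing the hypothetical sketch above, a non-convex region can be handled as the OR of two AND subnetworks, one per convex piece (the shapes below are made up):

```python
import numpy as np

def convex_and(x, lines):
    """AND subnetwork: 1 iff x satisfies every half-plane w1*x1 + w2*x2 >= T."""
    return int((lines[:, :2] @ x >= lines[:, 2]).sum() == len(lines))

# Two illustrative convex pieces: unit squares with a gap between them.
square_a = np.array([[1, 0, 0], [-1, 0, -1], [0, 1, 0], [0, -1, -1]], dtype=float)
square_b = np.array([[1, 0, 2], [-1, 0, -3], [0, 1, 0], [0, -1, -1]], dtype=float)

def in_union(x):
    # OR unit: fires if at least one AND subnetwork fires
    return int(convex_and(x, square_a) + convex_and(x, square_b) >= 1)

print(in_union(np.array([0.5, 0.5])))   # 1: inside piece A
print(in_union(np.array([2.5, 0.5])))   # 1: inside piece B
print(in_union(np.array([1.5, 0.5])))   # 0: in the gap between them
```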
More Complex Decision Boundaries

● Can we compose the decision boundary using only a single hidden layer?
Note: Capacity of an MLP (1)
● Universal Approximation Theorem (Hornik 1991)
○ “a single hidden layer neural network with a linear output unit can
approximate any continuous function arbitrarily well, given enough
hidden units”
● The result also holds for MLPs that use other types of
activation functions.
● The theorem, however, does not mean there is a learning
algorithm that can find the necessary parameter values!
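For reference, one common formal reading of the theorem (paraphrased in standard notation, not quoted from the slide):

```latex
% Hornik (1991), informally: for any continuous f on a compact set
% K \subset \mathbb{R}^d and any \varepsilon > 0, there exist N and
% parameters v_i, b_i \in \mathbb{R}, w_i \in \mathbb{R}^d such that
\Big|\, f(x) - \sum_{i=1}^{N} v_i \,\sigma\!\big(w_i^{\top} x + b_i\big) \Big| < \varepsilon
\qquad \text{for all } x \in K,
% where \sigma is a fixed non-constant, bounded, continuous activation.
```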

Note: Capacity of an MLP (2)
● A single-hidden-layer MLP is a universal function approximator
○ It can approximate any continuous function to arbitrary precision
○ But it may require an arbitrarily large (in the limit, infinite) number of hidden nodes
● Deeper networks can require far fewer nodes for the same
approximation error
○ How many layers are enough?
● Too few layers with too few nodes means insufficient
capacity.
● How can we determine whether an MLP has sufficient capacity to
model the data? (See the illustrative parameter count below.)
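One small, purely illustrative calculation of why depth can be parameter-efficient: counting weights and biases for a wide-shallow versus a deep-narrow MLP. The layer sizes are made up, and how much capacity a given problem actually needs is exactly the open question above.

```python
def n_params(layer_sizes):
    # weights (a*b) plus biases (b) for each consecutive pair of layers
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

print(n_params([2, 4096, 1]))         # one very wide hidden layer: 16,385 params
print(n_params([2, 64, 64, 64, 1]))   # three narrow hidden layers:  8,577 params
```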
