
NPTEL

Video Course on Machine Learning

Professor Carl Gustaf Jansson, KTH

Assignments for Week 6 2024

12 tasks with a total of 20 marks


Assignment tasks - Week 6 2024

Problem # 1 Correct Marks: 1 Theme: Neurons and Perceptrons

One of the logical operators described by the following truth tables is infamous for not being
linearly separable and therefore impossible to implement with a single perceptron.

Which?

P Q |  A  |  B  |  C  |  D  |  E
----+-----+-----+-----+-----+-----
T T |  T  |  T  |  F  |  F  |  F
T F |  F  |  T  |  F  |  T  |  T
F T |  F  |  T  |  F  |  T  |  T
F F |  F  |  F  |  T  |  T  |  F

Answer: E
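A minimal Python sketch (not part of the original assignment) illustrating why E (XOR) is the answer: the perceptron learning rule converges on the linearly separable AND table but can never classify all four XOR cases correctly, no matter how long it trains.

```python
# Perceptron rule on AND vs. XOR: AND is learned perfectly, XOR never is.

def train_perceptron(data, epochs=100, lr=1):
    """Perceptron rule with a bias input fixed to 1; returns final weights
    and the number of misclassified examples under those weights."""
    w = [0, 0, 0]                     # w0 (bias), w1, w2
    for _ in range(epochs):
        for (p, q), target in data:
            x = (1, p, q)             # bias first
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            w = [wi + lr * (target - out) * xi for wi, xi in zip(w, x)]
    errors = sum(
        1 for (p, q), target in data
        if (1 if w[0] + w[1] * p + w[2] * q > 0 else 0) != target
    )
    return w, errors

AND = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]
XOR = [((1, 1), 0), ((1, 0), 1), ((0, 1), 1), ((0, 0), 0)]

_, and_errors = train_perceptron(AND)
_, xor_errors = train_perceptron(XOR)
print(and_errors, xor_errors)  # AND ends with 0 errors; XOR never reaches 0
```

No linear separator exists for XOR, so at least one of its four cases is always misclassified.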

Problem # 2 Correct Marks: 2 Theme: Neurons and Perceptrons

Assume a perceptron:
• with 3 inputs (x1, x2, x3) plus a bias (x0) statically set to 1
• with weighted input = x0*w0 + x1*w1 + x2*w2 + x3*w3
• that outputs 1 if weighted input > 0, else 0
• with initial weights all set to 0
• with weight updating Wi,j+1 = Wi,j + a * (Target j - Output j) * Xi,j and a learning rate a = 1

Training set (Input j -> Target j):
1 0 0 -> 1
0 1 1 -> 1
1 1 0 -> 0
0 0 1 -> 0

Instance   Target  Weight Vector  Weighted Input  Output  New Weight Vector
1 0 0 1    1       0 0 0 0        0               0       (0 0 0 0) + (1 0 0 1) = 1 0 0 1
0 1 1 1    1       1 0 0 1        1               1       (1 0 0 1) + (0 0 0 0) = 1 0 0 1
1 1 0 1    0       1 0 0 1
0 0 1 1    0       ?

What will the final weight vector look like when all data items have been processed?

A. 1011 B. 0 0 -1 0 C. 0 -1 0 0 D. 1 0 1 0 E. None of the above

Answer: C
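The trace above can be completed with a short Python sketch of the perceptron updates; the instance vectors are ordered (x1, x2, x3, x0) with the bias last, as in the table.

```python
# Perceptron updates for Problem 2; bias appended as the fourth component.

def step(weighted):                       # threshold: 1 if weighted input > 0
    return 1 if weighted > 0 else 0

training = [((1, 0, 0, 1), 1),            # instance (x1, x2, x3, x0=1), target
            ((0, 1, 1, 1), 1),
            ((1, 1, 0, 1), 0),
            ((0, 0, 1, 1), 0)]

w = [0, 0, 0, 0]
lr = 1
for x, target in training:
    out = step(sum(wi * xi for wi, xi in zip(w, x)))
    w = [wi + lr * (target - out) * xi for wi, xi in zip(w, x)]

print(w)  # → [0, -1, 0, 0], i.e. alternative C
```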

Problem # 3 Correct Marks: 1 Theme: Neurons and Perceptrons

In a feedforward ANN, the so called Delta Rule is based on a particular error measure computed
from the target values and output values at the outputs of the ANN.

When this error measure is applied to a single neuron, what is the error for a
target value of 12 and an output value of 14?
E = ½ * (T - Y)^2

A. 4 B. 2 C. 0.5 D. 16

Answer: B
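The arithmetic behind the answer, as a one-step check:

```python
# Squared-error measure from the Delta Rule, for a single output neuron.
T, Y = 12, 14
E = 0.5 * (T - Y) ** 2
print(E)  # → 2.0, i.e. alternative B
```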
Problem # 4 Correct Marks: 2 Theme: Feed Forward Multiple Layer ANN

Assume a single neuron ANN:
• with 3 inputs (x1, x2, x3) plus a bias (x0) statically set to 1
• with weighted input = x0*w0 + x1*w1 + x2*w2 + x3*w3
• that outputs Activation function(weighted input) if weighted input > 0, else 0
• with initial weights all set to 0 and a linear Activation function f(v) = v/2
  (recovered from the figure and the worked rows below: output 0.5 for weighted input 1)
• with weight updating Wi,j+1 = Wi,j + a * (Target j - Output j) * Xi,j and a learning rate a = 1

Training set (Input j -> Target j):
1 0 1 -> 1
0 1 0 -> 0
1 0 1 -> 0
0 1 0 -> 1

Instance   Target  Weight Vector  Weighted Input  Output  New Weight Vector
1 0 1 1    1       0 0 0 0        0               0       (0 0 0 0) + (1 0 1 1) = 1 0 1 1
0 1 0 1    0       1 0 1 1        1               0.5     (1 0 1 1) + (0 -0.5 0 -0.5) = 1 -0.5 1 0.5
1 0 1 1    0       1 -0.5 1 0.5
0 1 0 1    1       ?

What will the final weight vector look like when all data items have been processed?

A. -0.25 0 0 0.25 B. 0 -0.25 0.25 0 C. 0.5 0 0.75 -0.35 D. -0.25 0.5 -0.25 0.25 E. None of the above

Answer: D
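A Python sketch of the updates in Problem 4. The linear activation is taken as f(v) = v/2, an assumption recovered from the table (output 0.5 for weighted input 1); instance vectors are ordered (x1, x2, x3, x0) with the bias last.

```python
# Single-neuron training for Problem 4, assuming activation f(v) = 0.5 * v.

def output(v):
    return 0.5 * v if v > 0 else 0.0   # activation applied only when v > 0

training = [((1, 0, 1, 1), 1),         # instance (x1, x2, x3, x0=1), target
            ((0, 1, 0, 1), 0),
            ((1, 0, 1, 1), 0),
            ((0, 1, 0, 1), 1)]

w = [0.0, 0.0, 0.0, 0.0]
lr = 1
for x, target in training:
    out = output(sum(wi * xi for wi, xi in zip(w, x)))
    w = [wi + lr * (target - out) * xi for wi, xi in zip(w, x)]

print(w)  # → [-0.25, 0.5, -0.25, 0.25], i.e. alternative D
```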

Problem # 5 Correct Marks: 3 Theme: Feed Forward Multiple Layer ANN

Bias = 0, learning rate = 1, activation function = identity function.
All weights are initially set to 0.1, input vector = (1 2), target T = 0.218.

Network (from the figure): inputs x1 = 1 and x2 = 2 feed neurons 1, 2 and 3;
neurons 1, 2 and 3 feed neurons 4 and 5; neurons 4 and 5 feed output neuron 6,
which produces the output Y. Every connection weight is initially 0.1.

Procedure:
1. Calculate outputs from neurons = feed forward: y = Sum over i of xi * wi, in all steps.
2. Calculate the error sensitivity at the output node: dj = (Tj - Yj).
3. Backpropagate the error sensitivity: di = Sum over j of (wij * dj), j = all outputs of i.
4. Update weights: wij = wij + learning rate * dj * xi.

1. What is the output value Y from neuron # 6? (1p) A. 0.118 B. 0.018 C. 1.118 D. 1.018
2. What is the backpropagated error sensitivity at neuron # 1? (1p) A. 0.0144 B. 0.040 C. 0.004 D. 0.014
3. What is the updated weight for the connection between input x1 and neuron # 1? (2p) A. 0.104 B. 0.140 C. 0.1144 D. 0.114
Answers: B, C, A
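A Python sketch of the forward and backward passes, assuming the layered structure recovered from the figure (2 inputs → neurons 1-3 → neurons 4-5 → output neuron 6, every weight 0.1, identity activation):

```python
# Feed forward, error sensitivity, and one weight update for Problem 5.

x = [1.0, 2.0]              # inputs x1, x2
w = 0.1                     # every connection weight starts at 0.1
lr = 1.0
T = 0.218

# Feed forward (identity activation, bias = 0)
h1 = [w * x[0] + w * x[1]] * 3          # neurons 1-3: each 0.3
h2 = [w * sum(h1)] * 2                  # neurons 4-5: each 0.09
Y = w * sum(h2)                         # neuron 6: 0.018

# Error sensitivities
d6 = T - Y                              # output node: 0.2
d4 = w * d6                             # neurons 4 and 5: 0.02 each
d1 = 2 * (w * d4)                       # neuron 1 collects d from both 4 and 5

# Weight update for the connection x1 -> neuron 1
w_new = w + lr * d1 * x[0]

print(round(Y, 3), round(d1, 3), round(w_new, 3))  # → 0.018 0.004 0.104
```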

Problem # 6 Correct Marks: 1 Theme: Recurrent Neural Networks

Select the main reason for extending standard RNNs to so called Bidirectional RNNs:

A. Being able to handle multiple levels of abstraction within the domains of analysis
B. Being able to get information from past (backward) and future (forward) states
simultaneously.
C. Mastering the vanishing gradient problem.
D. Mastering the long dependency problem.

Answer: B

Problem # 7 Correct Marks: 1 Theme: Recurrent Neural Networks

Which RNN network structure would best fit a text analysis task, where occurrences of references
to a specific kind of event are searched for?

A. B. C. D. E. (the options are network structure diagrams in the original figure)

Answer: C

Problem # 8 Correct Marks: 2 Theme: Hebbian Learning and Associative Memory

Hebb’s Law can be represented in the form of two rules:


1. If two neurons on either side of a connection (synapse) are activated
synchronously, then the weight of that connection is increased.
2. If two neurons on either side of a connection (synapse) are activated
asynchronously, then the weight of that connection is decreased.
Learning according to Hebb’s Law is primarily consistent with one of the following
kinds of learning.
A. Reinforcement learning
B. Unsupervised learning
C. Supervised learning

Answer: B

Problem # 9 Correct Marks: 2 Theme: Hebbian Learning and Associative Memory

Updating cycles for postsynaptic neuron outputs and connection weights in a Hebbian Learning Network.
Step 1: Initialization: Set initial synaptic weights to small random values in the interval [0, 1].
Step 2: Activation: Compute the postsynaptic neuron output Y j from the presynaptic input elements Xi,j in the data-item X j:
Y j = 1 if (Sum over i = 1..n of Xi,j * Wi,j - T) >= 0, else 0, where T is the threshold value of the neuron.
Step 3: Update the weights in the network: Wi,j+1 = Wi,j + a * Y j * Xi,j, where a is the learning rate parameter.
Step 4: Iteration: go back to Step 2.

Task: Consider a neuron with inputs from 4 neighbouring neurons. The learning rate a = 1 and the threshold T = 1.
The weights are initialised as W1 = 1 1 1 1. The training data vectors are: X1 = 1 0 0 1, X2 = 0 1 1 0 and X3 = 1 1 0 0.

X1 = 1 0 0 1   W1 = 1 1 1 1   Y1 = 1   W2 = W1 + Y1 * (1 0 0 1) = 2 1 1 2
…
…   ?
What will the final weight vector look like when the training data has been processed?

A. 3223 B. 2233 C. 3 3 2 2 D. 3 2 3 2 E. None of the above

Answer: C
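The remaining rows of the trace can be filled in with a short Python sketch of the Hebbian update cycle above:

```python
# Hebbian updates for Problem 9 (learning rate a = 1, threshold T = 1).

def postsynaptic(x, w, T=1):
    """Step 2: output 1 if the thresholded weighted sum is non-negative."""
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) - T >= 0 else 0

X = [(1, 0, 0, 1), (0, 1, 1, 0), (1, 1, 0, 0)]   # training vectors X1..X3
w = [1, 1, 1, 1]                                  # initial weights W1
a = 1
for x in X:
    y = postsynaptic(x, w)
    w = [wi + a * y * xi for wi, xi in zip(w, x)]  # Step 3

print(w)  # → [3, 3, 2, 2], i.e. alternative C
```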

Problem # 10 Correct Marks: 1 Theme: Hopfield Networks and Boltzman Machines

Hopfield Networks use a method borrowed from metallurgy to let the network states settle.
The method can be characterized as follows:
• Heat the solid metal to a high temperature.
• Cool it down very slowly according to a specific schedule.
• If the heating temperature is sufficiently high to ensure a random state and the cooling process is slow enough
to ensure thermal equilibrium, then the atoms will place themselves in a pattern that corresponds to
the global energy minimum of a perfect crystal.

What is the name of the method?

A. Casting
B. Crystallization
C. Annealing
D. Welding
E. Forging

Answer: C

Problem # 11 Correct Marks: 2 Theme: Hopfield Networks and Boltzman Machines

A 3 unit Hopfield network is trained with the following 3 vectors:


V1 = 1 1 1,  V2 = 1 -1 1,  V3 = -1 -1 -1,  N = 3. Stable unit states = -1, 1. Test case: 1 0 0.

Procedure for establishing a weight matrix and for calculating the new state of a test case using a matrix approach:
1. Define X with the three training vectors as rows.
2. Define the weight matrix W = X^T X.
3. Normalise the weight matrix W by dividing by N (the number of training vectors).
4. Evaluate the test case by multiplying it with the normalised W.
5. Apply a threshold function with threshold = 0 to all elements of the resulting vector.

     1  1  1            1  1 -1             3 1 3                  1   1/3  1
X =  1 -1  1    X^T =   1 -1 -1    X^T X =  1 3 1    Normalised =  1/3 1   1/3
    -1 -1 -1            1  1 -1             3 1 3                  1   1/3  1

S = (1 0 0) * W(normalised). What is the resulting state vector?

A. 1 -1 1   B. 1 1 -1   C. -1 1 1   D. 1 1 1

Answer: D
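The five-step matrix procedure can be sketched in plain Python (no matrix library, to keep every operation explicit):

```python
# Hopfield weight matrix and test-case evaluation for Problem 11.

X = [[1, 1, 1],
     [1, -1, 1],
     [-1, -1, -1]]          # training vectors V1..V3 as rows
N = 3

# Steps 2-3: W = X^T X, normalised by N
W = [[sum(X[k][i] * X[k][j] for k in range(3)) / N for j in range(3)]
     for i in range(3)]

# Steps 4-5: evaluate the test case and threshold at 0
test = [1, 0, 0]
s = [sum(test[i] * W[i][j] for i in range(3)) for j in range(3)]
state = [1 if v >= 0 else -1 for v in s]

print(state)  # → [1, 1, 1], i.e. alternative D
```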

Problem # 12 Correct Marks: 2 Theme: Hopfield Networks and Boltzman Machines

If the input array A looks like:     and the filter F is:     and the stride is 1,

    7 5 3 1                              1 0
    1 3 5 7                              0 1
    1 3 5 7
    7 5 3 1

how does the feature map look after the convolution of input array A with the filter F?

A.  6  4 10     B.  6  6  6     C. 10 10 10     D. 10 10 10
    6  8 10         4  8 12         4  8 12        12  8  4
    6 12 10        10 10 10         6  6  6         6  6  6

Answer: C
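The convolution can be checked with a plain Python sketch: each output cell is the element-wise product of a 2x2 window of A with the filter, summed.

```python
# 2x2 convolution of A with filter F, stride 1, for Problem 12.

A = [[7, 5, 3, 1],
     [1, 3, 5, 7],
     [1, 3, 5, 7],
     [7, 5, 3, 1]]
F = [[1, 0],
     [0, 1]]

# 4x4 input, 2x2 filter, stride 1 -> 3x3 feature map
out = [[sum(A[i + di][j + dj] * F[di][dj] for di in range(2) for dj in range(2))
        for j in range(3)]
       for i in range(3)]

print(out)  # → [[10, 10, 10], [4, 8, 12], [6, 6, 6]], i.e. alternative C
```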
