
NPTEL

Video Course on Machine Learning

Professor Carl Gustaf Jansson, KTH

Week 6: Machine Learning based on Artificial Neural Networks

Video 6.10 Assignments for Week 6


Assignments

Group 1 - questions based on video lectures


Question 1 : Fundamentals of Artificial Neural Networks
What is the term for the output from a real neuron in the context of the human brain?

A. SOMA B. DENDRITE C. AXON D. SYNAPSE

Question 2: Fundamentals of Artificial Neural Networks

The Mean Squared Error is applied in the context of feedforward networks. If the error formula for a single data item in the forward pass is applied, what is the error for a target value of 10 and an estimate of 7 for a particular output variable?

A. 3 B. 9 C. 4.5 D. 2.12
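For reference, a minimal sketch of the single-item squared-error computation; the function name is illustrative, and whether the lecture's formula halves the squared difference determines which of the two conventions below applies:

def single_item_error(target, estimate, halved=True):
    # Error for one output variable of one data item.
    # halved=True  uses E = 0.5 * (target - estimate)**2
    # halved=False uses E = (target - estimate)**2
    diff = target - estimate
    return 0.5 * diff * diff if halved else diff * diff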
Question 3: Perceptrons
Assume a perceptron:
• with 3 inputs plus 1 input for the bias
• where net = x0*w0 + x1*w1 + x2*w2 + x3*w3
• that outputs Yj = 1 if net = (Sum over i of wij * xij) > 0, otherwise Yj = 0
• with a learning rate a = 1
• with all initial weights set to 0

Weight update (delta weight): Wi,j+1 = Wi,j + a * (Tj - Yj) * Xi,j

Training set:
0 0 1 -> 0
1 1 1 -> 1
1 0 1 -> 1
0 1 1 -> 0

Instance    Target    Weight Vector    Net    Output    Delta Weight
0 0 1 1     0         0 0 0 0          0      0         0 0 0 0
1 1 1 1     1         0 0 0 0          0      0         1 1 1 1
1 0 1 1     1         1 1 1 1          3      1         0 0 0 0
0 1 1 1     0         1 1 1 1          ?      ?         ?

What will the final weight vector look like when the fourth data item has been processed?
A. 1 1 1 1 B. -1 0 1 0 C. 0 0 0 0 D. 1 0 0 0 E. None of the above
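A minimal sketch of the perceptron step used in this question (the input vector includes the constant bias input; names are illustrative):

def perceptron_step(weights, x, target, lr=1.0):
    # x and weights both include the bias component.
    net = sum(w * xi for w, xi in zip(weights, x))
    y = 1 if net > 0 else 0
    # Perceptron update: w_i <- w_i + lr * (T - Y) * x_i
    return [w + lr * (target - y) * xi for w, xi in zip(weights, x)]

Applying this step to the training items in order reproduces the first three rows of the table above.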
Question 4: Single Neuron in a Feedforward Network
Assume a single neuron:
• with 3 inputs plus 1 input for the bias
• with a learning rate a = 1
• with all initial weights set to 0
• whose activation function f is ReLU

If Sum over i of wij * ai > 0, the output of unit j is Y = f(Sum over i of wij * ai), otherwise Y = 0.

Weight update: Wji = Wji + a * (Tj - Yj) * G'(SUMj) * Xi, which simplifies to Wji = Wji + a * (Tj - Yj) * Xij if f is linear.

Training set:
0 0 1 -> 0
1 1 1 -> 1
1 0 1 -> 1
0 1 1 -> 0

Instance    Target    Weight Vector    Sum    Output    Delta Weight
0 0 1 1     0         0 0 0 0          0      0         0 0 0 0
1 1 1 1     1         0 0 0 0          0      0         1 1 1 1
1 0 1 1     1         1 1 1 1          3      3         -2 0 -2 -2
0 1 1 1     0         -1 1 -1 -1       ?      ?         ?

What will the final weight vector look like when the fourth data item has been processed?
A. -1 1 -1 -1 B. -1 0 -1 0 C. 0 0 -1 0 D. 0 1 0 -1 E. None of the above
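A minimal sketch of the corresponding single-neuron step with a ReLU activation, assuming (as the table above does) that the derivative factor is taken as 1, i.e. the simplification given for the linear case; names are illustrative:

def relu_neuron_step(weights, x, target, lr=1.0):
    s = sum(w * xi for w, xi in zip(weights, x))   # weighted sum, bias included
    y = s if s > 0 else 0.0                        # ReLU activation
    # Update rule from the slide with G'(SUM) treated as 1:
    return [w + lr * (target - y) * xi for w, xi in zip(weights, x)]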
Question 5: Backpropagation

[Figure: a feedforward network with inputs X1 = 1 and X2 = 2, a first hidden layer of neurons 1, 2 and 3 (each with output y = 1.5 and delta D = 0.375), a second hidden layer of neurons 4 and 5 (each with output y = 2.25 and delta D = 0.375), and an output neuron 6 with Z = 3 and delta D = 0.75. Bias = 0, learning rate = 1, the activation function is ReLU, and all initial weights are 0.5. The figure shows the updated weights W' on the edges from the inputs (W' = 0.5 + 1*0.375 = 0.875 from X1 and W' = 0.5 + 2*0.375 = 1.25 from X2) and on the edges between the two hidden layers (W' = 0.5 + 1.5*0.375), while the weights on the edges into neuron 6 are marked W' = ?.]

What is the adapted weight for the edges between neurons 4, 5 and 6?
A. 1.055  B. 2.555  C. 1.755  D. 2.1875  E. 1.25
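A minimal sketch of the per-edge weight update pattern the figure applies (w' = w + learning rate * upstream activation * downstream delta; the function and argument names are illustrative):

def updated_weight(w, upstream_activation, downstream_delta, lr=1.0):
    # One gradient step for a single edge; this matches the W' values
    # printed in the figure, e.g. 0.5 + 1 * 0.375 = 0.875 for an edge
    # leaving the input X1 = 1.
    return w + lr * upstream_activation * downstream_delta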
Question 6: Recurrent Neural Network
Which network structure would best fit an analysis task where a sequence of words is mapped onto a sentiment or opinion?

[The answer alternatives A-E are network structure diagrams shown on the slide.]
Question 7: Recurrent Neural Network
Which RNN architecture is well known for handling the vanishing gradient problem well?

A. Bidirectional RNN B. LSTM C. Elman network D. Echo state

Question 8: Recurrent Neural Network

Which of the following gate types is not included in the LSTM cell model?

A. Input Gate B. Remember Gate C. Forget Gate D. Output Gate


Question 9: Hebbian Learning and Associative Memory

Yj = 1 if (Sum over i = 1..n of Xi * Wij - T) >= 0, otherwise 0
Weight update: Wi,j+1 = Wi,j + a * Yj * Xi,j, with a = 1

Third iteration, unit j = 2, threshold T = 2:
Inputs: x12 = 1, x22 = 1, x32 = 0, x42 = 0
Weights: w12 = 3, w22 = 1, w32 = 3, w42 = 1
Sum = 1*3 + 1*1 + 0*3 + 0*1 - 2 = 2 >= 0, so Y2 = 1

Updated weights W13, W23, W33, W43 = ?

What will the final weight vector look like when the third data item (1 1 0 0) has been processed?
A. 3 1 3 1 B. 2 1 3 2 C. 4 2 3 1 D. 2 2 2 2 E. None of the above
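A minimal sketch of the Hebbian step used here, assuming the threshold T = 2 that appears in the sum above; names are illustrative:

def hebbian_step(weights, x, threshold=2.0, lr=1.0):
    s = sum(w * xi for w, xi in zip(weights, x)) - threshold
    y = 1 if s >= 0 else 0
    # Hebbian update: strengthen weights on active inputs when the unit fires,
    # w_i <- w_i + lr * y * x_i
    return [w + lr * y * xi for w, xi in zip(weights, x)]

With weights (3, 1, 3, 1) and input (1, 1, 0, 0) this reproduces the sum 2 >= 0 and Y2 = 1 shown above.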
Question 10: Associative Memory

Which of the following tasks CANNOT be handled by an Auto-associative Memory system?

A. Retrieving a full quotation from a subset of words from the quotation
B. Retrieving a complete image of a face from a distorted face image
C. Retrieving a complete image of an event from an image of one of the participants of that event
D. Retrieving a stored technical error case from a partial description of a new technical error case
Question 11: Hopfield Networks

A 3-unit Hopfield network is trained with the following 3 vectors (p = 3):
V1 = 1 1 1    V2 = 1 1 -1    V3 = -1 1 1
Test pattern: 1 0 0

Procedure for establishing a weight matrix and for matching a test case using a matrix approach:
1. Define X in terms of the three training vectors
2. Define the weight matrix W = X^T X
3. Normalise W by dividing by p
4. Evaluate the test pattern through matrix multiplication
5. Apply a threshold function with threshold = 0

     1  1  1           1  1 -1             3  1 -1           1    1/3  -1/3
X =  1  1 -1    X^T =  1  1  1    X^T X =  1  3  1    ->     1/3  1     1/3
    -1  1  1           1 -1  1            -1  1  3          -1/3  1/3   1

S = (1 0 -1) * [the normalised matrix above]

What is the resulting state vector?
A. 1 -1 1  B. 1 1 -1  C. -1 1 1  D. 1 1 1
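A minimal sketch of the matrix procedure in steps 1-5 above, using numpy; the variable names are illustrative:

import numpy as np

# Step 1: the training vectors as the rows of X.
X = np.array([[ 1, 1,  1],
              [ 1, 1, -1],
              [-1, 1,  1]])
p = X.shape[0]

# Steps 2-3: weight matrix W = X^T X, normalised by p.
W = X.T @ X / p

# Step 4: evaluate a test pattern by matrix multiplication
# (the pattern written in the S = ... line above).
s = np.array([1, 0, -1]) @ W

# Step 5: threshold at 0 (here values >= 0 map to 1, which is an
# assumption about how the boundary case is treated).
state = np.where(s >= 0, 1, -1)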
Question 12: Convolutional Network

[Figure: the functions f and g are each shown as graphs with y-axis ticks at 1/2 and 1 and x-axis ticks at -2, -1, 1 and 2.]

Which is the correct graph for the convolution of f and g?

[The answer alternatives A-D are graphs shown on the slide, with the same axis ticks.]
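For reference, the convolution asked about is (f*g)(t) = integral of f(tau) * g(t - tau) d tau. A minimal numerical sketch with numpy, assuming, purely for illustration, that f and g are rectangular pulses of height 1/2 on the interval [-1, 1] (the actual functions are only given in the figure):

import numpy as np

dx = 0.01
t = np.arange(-3.0, 3.0, dx)
# Illustrative assumption about the figure: pulses of height 0.5 on [-1, 1].
f = np.where(np.abs(t) <= 1, 0.5, 0.0)
g = np.where(np.abs(t) <= 1, 0.5, 0.0)
# Discrete approximation of the convolution integral (f*g)(t).
conv = np.convolve(f, g, mode='same') * dx
# conv[i] approximates (f*g)(t[i]).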
Question 13: Convolutional Network
Which technique is NOT a key ingredient in the methodology of Convolutional Neural Networks?

A. ReLU transformation B. Flattening C. Pooling D. Relaxation

Question 14: Convolutional Network


Which neurophysiologist published work fundamental for the CNN approach?

A. Hubel B. Hebb C. McCulloch D. Pribram


Question 15: Convolutional Neural Networks

If the input array A is

1 2 3 4
4 3 2 1
1 2 3 4
4 3 2 1

the filter F is

1 0
0 1

and the stride is 1, what does the feature map look like after the convolution of input array A with the filter F?

A. 4 4 4    B. 5 5 5    C. 4 4 4    D. 3 3 3
   5 5 5       6 6 6       6 6 6       6 6 6
   4 4 4       5 5 5       4 4 4       4 4 4
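A minimal sketch of 2D convolution (cross-correlation, as it is usually implemented in CNNs) with a given stride, in plain Python; names are illustrative:

def conv2d(image, kernel, stride=1):
    # Valid (no-padding) convolution: slide the kernel over the image
    # and sum the elementwise products at each position.
    kh, kw = len(kernel), len(kernel[0])
    out_h = (len(image) - kh) // stride + 1
    out_w = (len(image[0]) - kw) // stride + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for u in range(kh):
                for v in range(kw):
                    acc += image[i * stride + u][j * stride + v] * kernel[u][v]
            row.append(acc)
        out.append(row)
    return out

Calling conv2d(A, F, stride=1) with the arrays above produces the 3x3 feature map asked about.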
Question 16: Convolutional Neural Networks

If the array C output from the convolution is

1 2 3 4
5 6 7 8
1 3 2 4
5 7 6 8

the filter F is 2x2 and the stride is 2, what does the feature map look like after subsampling (pooling) array C with the filter F using average pooling?

A. 5.5 3.5    B. 5.5 4      C. 5 3.5     D. 3.5 5.5
   5   4         5   3.5       4 5.5        4   5
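A minimal sketch of average pooling with a square window and a given stride, in the same style as the previous sketch; names are illustrative:

def avg_pool2d(image, size=2, stride=2):
    # Average pooling: replace each size x size window by its mean.
    out_h = (len(image) - size) // stride + 1
    out_w = (len(image[0]) - size) // stride + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            window = [image[i * stride + u][j * stride + v]
                      for u in range(size) for v in range(size)]
            row.append(sum(window) / len(window))
        out.append(row)
    return out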
Recommendations for further reading

Warren McCulloch and Walter Pitts, 1943
"A Logical Calculus of the Ideas Immanent in Nervous Activity", Bulletin of Mathematical Biophysics, Vol 5
http://www.cse.chalmers.se/~coquand/AUTOMATA/mcp.pdf

D. O. Hebb, 1949
The Organization of Behavior: A Neuropsychological Theory, John Wiley
http://s-f-walker.org.uk/pubsebooks/pdfs/The_Organization_of_Behavior-Donald_O._Hebb.pdf

Frank Rosenblatt, 1957
"The Perceptron, a Perceiving and Recognizing Automaton", Project Para, Cornell Aeronautical Laboratory
https://blogs.umass.edu/brain-wars/files/2016/03/rosenblatt-1957.pdf

D. H. Hubel and T. N. Wiesel, 1962
"Receptive Fields, Binocular Interaction and Functional Architecture in the Cat's Visual Cortex", J. Physiol. (1962), 160
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1359523/pdf/jphysiol01247-0121.pdf

Sepp Hochreiter and Jürgen Schmidhuber, 1997
"Long Short-Term Memory", Neural Computation 9(8):1735-1780, 1997
https://www.bioinf.jku.at/publications/older/2604.pdf

John J. Hopfield, 2007
"Hopfield Networks", Scholarpedia, 2(5):2007
http://www.scholarpedia.org/article/Hopfield_network

Weibo Liu, Zidong Wang, Xiaohu Liu, Nianyin Zeng, Yurong Liu and Fuad E. Alsaadi, 2017
"A Survey of Deep Neural Network Architectures and Their Applications", Neurocomputing, Volume 234
https://bura.brunel.ac.uk/bitstream/2438/14221/1/FullText.pdf
NPTEL

Video Course on Machine Learning

Professor Carl Gustaf Jansson, KTH

Thanks for your attention!

The next week of the course will have the following theme:

Tools for Implementation and Interdisciplinary Inspiration
