Final Neural 2018 May

This document contains a final exam paper for a Neural Networks course. The exam consists of 4 questions covering various neural network architectures including LVQ, RBF, perceptron, MLP, autoencoder, convolutional neural network and LeNet. The questions involve tasks like designing and training neural networks on sample datasets, deriving learning equations, and explaining concepts like weight sharing and max pooling. Diagrams and calculations are required to fully answer some of the questions.


COLLEGE OF ENGINEERING & TECHNOLOGY

Department: Computer Engineering
Lecturer: Dr. Mohamed Waleed Fakhr
Course Name: Neural Networks
Course Code: CC524
Total Marks: 40
Date: 15/1/2018
Time allowed: 2 hrs

Final Examination Paper

Answer the following questions:

Question 1:
a- Train an LVQ neural network on the training data given in Table-1.
Each class has only 1 center, initialized at (20, 20) for class-A and (20, -20) for class-B.
Sketch the training-data locations and the center locations before and after
training (do only one pass over the training data and take learning rate = 1).
(4)
Table-1: Training Data

Example    X1 (1st dimension)    X2 (2nd dimension)    Target
A1                 10                    10               1
A2                -10                   -10               1
B1                -10                    10               0
B2                 10                   -10               0
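The one-pass LVQ1 update asked for in part (a) can be sketched as follows. Note that with learning rate 1, a correctly classified sample pulls the winning center exactly onto itself, while a misclassified sample pushes it away by the same amount:

```python
import numpy as np

# Training data from Table-1 (target 1 = class-A, target 0 = class-B)
X = np.array([[10, 10], [-10, -10], [-10, 10], [10, -10]], dtype=float)
t = np.array([1, 1, 0, 0])

# One center per class, initialized as in the question
centers = {1: np.array([20.0, 20.0]),   # class-A center
           0: np.array([20.0, -20.0])}  # class-B center

lr = 1.0
for x, target in zip(X, t):
    # Find the center nearest to x (Euclidean distance)
    winner = min(centers, key=lambda c: np.linalg.norm(x - centers[c]))
    if winner == target:                 # correct class: attract the center
        centers[winner] += lr * (x - centers[winner])
    else:                                # wrong class: repel the center
        centers[winner] -= lr * (x - centers[winner])

print(centers[1], centers[0])  # centers after one pass
```

Tracing the pass by hand gives the same result the code prints: the class-A center visits (10, 10), then (-10, -10), is then repelled by B1 to (-10, -30), and the class-B center moves to (10, -10).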

b- Using the training data in Table-1, design an RBF neural network and find suitable values for the output-layer
weights. Show clearly the truth table for the 4 inputs and the 4 neurons, and sketch the resulting
RBF neural network. (4)
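One way to approach part (b) is the exact-interpolation RBF design: place one Gaussian neuron on each of the 4 training points and solve the resulting 4-by-4 linear system for the output-layer weights. The spread value below is an assumption, since the question leaves it open:

```python
import numpy as np

# Exact-interpolation RBF: one Gaussian neuron centered on each training point
X = np.array([[10, 10], [-10, -10], [-10, 10], [10, -10]], dtype=float)
t = np.array([1.0, 1.0, 0.0, 0.0])
sigma = 10.0  # Gaussian spread: an assumed value, not given in the question

# Interpolation matrix: Phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Phi = np.exp(-d2 / (2 * sigma ** 2))

# Output-layer weights solve Phi @ w = t exactly (Phi is square and nonsingular)
w = np.linalg.solve(Phi, t)

print(np.round(Phi @ w, 6))  # reproduces the targets on the training set
```

Because there are exactly as many neurons as training points, the network interpolates the targets exactly; the "truth table" asked for is precisely the matrix Phi evaluated at the 4 inputs.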
c- Sketch the neural network structure and explain the training (learning) steps for each of the following neural
networks: (i) exact RBF, (ii) PNN, (iii) ELM, (iv) SOM. (4)
Question 2:

For the training data set for the 2-class problem given in Table-1:
a- Assume a perceptron is used to classify this data. Its initial parameters are
(w1 = 2, w2 = 1, and threshold Ө = 0), with a logsig activation function:
(i) Sketch the perceptron neural network and plot its decision boundary. (2)
(ii) Deduce and explain which data points are correctly classified. (2)
(iii) Derive the learning equations for this perceptron. (2)
(iv) Apply the learning equation for pattern A1 only, and show the final weight values (2)
(take learning rate = 1, use the initial weights above, and keep Ө constant at 0).
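A numeric sketch of parts (iii)-(iv), assuming the standard squared-error delta rule for a logsig unit; the exact update depends on the cost function chosen in the derivation of (iii):

```python
import numpy as np

def logsig(net):
    return 1.0 / (1.0 + np.exp(-net))

w = np.array([2.0, 1.0])   # initial weights w1, w2
theta = 0.0                # threshold, kept fixed per the question
lr = 1.0

x, t = np.array([10.0, 10.0]), 1.0   # pattern A1, target 1

y = logsig(w @ x - theta)
# Delta rule for a logsig unit with squared-error cost (an assumed choice):
#   dE/dw = -(t - y) * y * (1 - y) * x
w_new = w + lr * (t - y) * y * (1 - y) * x

print(y, w_new)
```

For A1 the net input is 2(10) + 1(10) = 30, so the logsig output saturates essentially at 1; the error and the derivative y(1 - y) are both tiny, and the weights barely move from (2, 1). This is a useful sanity check for the hand calculation in (iv).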
b- Design (by your own hand calculations and proper drawing) an MLP neural network to solve
the problem in Table-1. Use one hidden layer that has 2 neurons and one output neuron.
Sketch the decision boundaries for each of the 2 hidden-layer neurons, and write the truth table for them.
Find proper weights and threshold values for those 2 hidden neurons. (4)
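Part (b) is an XOR-type problem: class-A points lie where x1 and x2 share a sign. One hand-designed solution, sketched here with step activations, has hidden neuron 1 detect A1 (x1 + x2 ≥ 10) and hidden neuron 2 detect A2 (x1 + x2 ≤ -10), with the output neuron computing their OR. These weights are an assumed example, not the unique answer:

```python
import numpy as np

step = lambda net: (net >= 0).astype(float)  # hard-threshold activation

X = np.array([[10, 10], [-10, -10], [-10, 10], [10, -10]], dtype=float)
t = np.array([1.0, 1.0, 0.0, 0.0])

# Hidden layer: neuron 1 fires when x1 + x2 >= 10, neuron 2 when x1 + x2 <= -10
W_h = np.array([[1.0, 1.0],
                [-1.0, -1.0]])
b_h = np.array([-10.0, -10.0])

# Output neuron computes h1 OR h2
w_o, b_o = np.array([1.0, 1.0]), -0.5

H = step(X @ W_h.T + b_h)   # hidden truth table, one row per pattern
y = step(H @ w_o + b_o)     # network output

print(H.astype(int))
print(y.astype(int))
```

The printed hidden-layer table is exactly the "truth table" the question asks for, and the two decision boundaries are the parallel lines x1 + x2 = 10 and x1 + x2 = -10.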
Question 3:
a- For the single-hidden-layer MLP neural network shown below:
(i) Derive the back-propagation learning equation for the weight W42 (assume the neurons use logsig
activations and the MLP is trained using a cross-entropy cost function). (4)
(ii) Apply a single update to the weight W51 based on pattern A1 in Table-1 (learning rate = 1).
(2)
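The figure labeling W42 and W51 is not reproduced here, but for any weight w_kj feeding an output unit, the cross-entropy/logsig combination gives the well-known cancellation that the derivation in (i) should arrive at:

```latex
E = -\sum_k \big[\, t_k \ln y_k + (1 - t_k)\ln(1 - y_k) \,\big],
\qquad y_k = \sigma(\mathrm{net}_k), \quad \mathrm{net}_k = \sum_j w_{kj} h_j
\]
\[
\frac{\partial E}{\partial \mathrm{net}_k}
  = \frac{y_k - t_k}{y_k (1 - y_k)} \cdot y_k (1 - y_k)
  = y_k - t_k
\]
\[
\Delta w_{kj} = -\eta \,\frac{\partial E}{\partial w_{kj}}
             = \eta \,(t_k - y_k)\, h_j
```

The logsig derivative y(1 - y) cancels against the cross-entropy term, so the output-layer update uses the raw error (t - y) times the hidden activation, with no saturation factor.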

EDQMS 2/3 Page 1 of 2



(b) The daily electric load in Cairo, in MWatts, is given by [2120 2200 2320 2411 2312 2433 2544 2455 2466
2345 2894 2876 2987 2844 2566 2888 2899 2933 2811 2988].
Design an MLP neural network to predict the electric load 3 days ahead (we want to predict the value for
each of the 3 coming days) using the load values of the previous 5 days
(use a sliding window that moves by one day each time):
(i) Using the given data, write all the values in the training-data array and the target array, and show the
size of each. (2)
(ii) Sketch the neural network: in your design, show the structure of the network (its inputs,
hidden and output neurons) and what learning technique would be used. (2)
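The sliding-window construction in part (b)(i) can be sketched as follows. With 20 load values, a 5-day input window, and a 3-day target, there are 20 - 5 - 3 + 1 = 13 training examples:

```python
import numpy as np

load = [2120, 2200, 2320, 2411, 2312, 2433, 2544, 2455, 2466, 2345,
        2894, 2876, 2987, 2844, 2566, 2888, 2899, 2933, 2811, 2988]

win, horizon = 5, 3   # 5 past days in, 3 future days out
X, T = [], []
for i in range(len(load) - win - horizon + 1):   # window slides by one day
    X.append(load[i:i + win])                    # 5 input values
    T.append(load[i + win:i + win + horizon])    # the next 3 values as targets
X, T = np.array(X), np.array(T)

print(X.shape, T.shape)   # training array is 13x5, target array is 13x3
```

This fixes the network structure for part (ii): 5 inputs and 3 output neurons, with the hidden-layer size a free design choice, trained by back-propagation on these 13 input/target pairs.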
Question 4:
(a) Given training data for 2 classes, where each class has 500 examples and each example has
512 features, explain the steps required to design a deep autoencoder (using unsupervised pre-training)
that will produce a (512-256-128-64-1) MLP structure. (2)
(b) Explain the operation of (i) weight sharing, (ii) mini-batch normalization, (iii) ReLU activation,
and (iv) adaptive learning rates, and how they have been used to avoid unsupervised pre-training. (2)

(c) The (4-by-4) image shown is input to a convolutional neural network (CNN). The CNN uses
one (3-by-3) filter kernel with 9 weights and a stride of 1. All the weights were initialized to 1.
Show the size, plot, and calculate the values of the resulting feature map. (2)

(d) If max pooling (2-by-2, with stride = 1) is applied to the feature map resulting from (c), show and calculate
the values of the resulting reduced map. (2)
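The 4-by-4 image referred to in parts (c)-(d) appears in a figure that is not reproduced here, so the sketch below uses a hypothetical image purely to illustrate the arithmetic: valid convolution with a 3-by-3 all-ones kernel at stride 1 gives a (4-3+1)-by-(4-3+1) = 2-by-2 feature map, and 2-by-2 max pooling at stride 1 then reduces it to a single value:

```python
import numpy as np

# Hypothetical 4x4 input image (the exam's actual image is in a missing figure)
img = np.array([[1, 0, 2, 1],
                [0, 1, 0, 2],
                [3, 1, 1, 0],
                [0, 2, 1, 1]], dtype=float)
kernel = np.ones((3, 3))   # all 9 weights initialized to 1, as stated

# Valid convolution, stride 1: each output is the sum over one 3x3 window
fmap = np.array([[(img[r:r + 3, c:c + 3] * kernel).sum()
                  for c in range(2)] for r in range(2)])
print(fmap)        # 2x2 feature map

# Max pooling (2x2 window, stride 1) over a 2x2 map yields one value
pooled = fmap.max()
print(pooled)
```

With an all-ones kernel each feature-map entry is simply the sum of the pixels under the window, which is the pattern the hand calculation in (c) should follow regardless of the actual image values.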

(e) For the (LeNet) CNN neural network shown above: (i) How many pixels are in the (gray-level) input image?
(ii) How many convolution layers are in this CNN? (iii) How many feature maps are in the 1st convolution layer?
(iv) How many different weights are in the 1st convolution layer? (4)

