2023 - Midterm 2 Solution - Spring - AI


National University of Computer and Emerging Sciences, Lahore Campus

Course Name: Artificial Intelligence          Course Code: AI2002
Program: BS (CS), BS (DS)                     Semester: Spring 2023
Duration: 60 Minutes                          Total Marks: 50
Paper Date: 11-Apr-2023                       Weightage: 15
Section: ALL                                  Page(s): 5
Exam Type: Mid II

Question          Q1 (CLO:2)    Q2 (CLO:3)    Q3 (CLO:2,3)    Total Marks
Marks             10            15            25              50
Obtained Marks

Student Name: _____________________________ Section: __________ Roll No.________________


Instructions: Do not use pencil or red ink to answer the questions. In case of confusion or ambiguity, make a
reasonable assumption. Attempt all questions on the question paper in the space provided.
QUESTION 1: Perform alpha-beta pruning on the following min-max tree and show all working. (10)

[Figure: min-max tree annotated with the alpha-beta working. The α and β values are updated at each node as the leaves (3, 4, 5, 7, 0) are examined; branches are pruned at the two points marked α > β, and the value backed up to the root is 4.]
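For reference, the cut-off rule applied in the working above can be sketched in a few lines of Python. The tree literal below is a small hypothetical example in the spirit of the figure, not a transcription of it; nested lists are internal nodes and numbers are leaf evaluations.

import math

def alphabeta(node, alpha, beta, maximizing):
    """Return the minimax value of `node`, pruning once alpha >= beta."""
    if isinstance(node, (int, float)):              # leaf: static evaluation
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                       # cut-off: remaining children pruned
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:                       # cut-off: remaining children pruned
                break
        return value

# Hypothetical two-level tree: MAX root over three MIN nodes.
tree = [[3, 5], [4, 7], [0, 2]]
print(alphabeta(tree, -math.inf, math.inf, True))   # 4; the leaf 2 is never examined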

QUESTION 2: Suppose there are 10 chromosomes with fitnesses as shown in the table. What will be the selection
probabilities according to the proportionate and linear rank selection methods? (5 + 10)

Chromosome  Fitness   Proportionate           Linear Rank  |  Linear Rank Calculations
No.         f_i       p_i = f_i / Σ_j f_j     np_i         |  Sorted f   nf_i = (P − r_i) + 1   np_i = nf_i / Σ_j nf_j

A           50        0.05                    0.055        |  250        10                     0.182
B           25        0.025                   0.036        |  140         9                     0.164
C           25        0.025                   0.018        |  125         8                     0.145
D           100       0.1                     0.109        |  110         7                     0.127
E           75        0.075                   0.073        |  100         6                     0.109
F           125       0.125                   0.145        |  100         5                     0.091
G           250       0.250                   0.182        |   75         4                     0.073
H           110       0.110                   0.127        |   50         3                     0.055
I           140       0.140                   0.164        |   25         2                     0.036
J           100       0.1                     0.091        |   25         1                     0.018
Total       1000      1                       1            |  1000       55                     1

(P = 10 is the number of chromosomes; r_i is the rank of chromosome i, with rank 1 for the fittest.)
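As a cross-check, both probability columns can be recomputed from the fitness values alone. The snippet below is a sketch with my own variable names; ties in fitness are broken by listing order, which reproduces the ranks used in the table.

# Recompute proportionate and linear-rank selection probabilities.
fitness = {'A': 50, 'B': 25, 'C': 25, 'D': 100, 'E': 75,
           'F': 125, 'G': 250, 'H': 110, 'I': 140, 'J': 100}

total = sum(fitness.values())
proportionate = {c: f / total for c, f in fitness.items()}        # p_i = f_i / Σ f_j

# Linear rank: rank 1 = fittest; nf_i = (P − r_i) + 1, np_i = nf_i / Σ nf_j.
P = len(fitness)
ranked = sorted(fitness, key=fitness.get, reverse=True)           # stable sort keeps listing order for ties
nf = {c: (P - r) + 1 for r, c in enumerate(ranked, start=1)}
nf_total = sum(nf.values())                                       # 55 for P = 10
rank_prob = {c: nf[c] / nf_total for c in fitness}

for c in fitness:
    print(f"{c}: p_i = {proportionate[c]:.3f}, np_i = {rank_prob[c]:.3f}")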

QUESTION 3: A multi-layer feed-forward neural network with its initial weights is given below.

[Figure: network diagram with inputs x1, x2, x3, hidden units h1, h2 and outputs O1, O2; the biases shown on the diagram are b2 = 0.5, b3 = 0.5, b4 = 0.5.]

a) Do a forward pass and compute the outputs at O1 and O2. Use a linear activation function at the hidden units h1
and h2, and a sigmoid activation function at O1 and O2. All biases are 0.5, the input values are x1 = 1, x2 =
4, x3 = 5, and the target values are t1 = 0.1, t2 = 0.05. Show all the working. (3+3)

net_h1 = (0.1 × 1) + (0.3 × 4) + (0.5 × 5) + (0.5 × 1) = 4.3
O(net_h1) = 4.3

net_h2 = (0.2 × 1) + (0.4 × 4) + (0.6 × 5) + (0.5 × 1) = 5.3
O(net_h2) = 5.3

net_O1 = (0.7 × 4.3) + (0.9 × 5.3) + (0.5 × 1) = 8.28
O(net_O1) = 1 / (1 + e^(−8.28)) = 0.9997

net_O2 = (0.8 × 4.3) + (0.1 × 5.3) + (0.5 × 1) = 4.47
O(net_O2) = 1 / (1 + e^(−4.47)) = 0.9887
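The forward pass can be verified with a short script. The connection weights are read off the working above; the assumption that the bias input is 1 and the pairing of weights with particular connections follow the solution, since the figure itself is not reproduced here.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x1, x2, x3 = 1, 4, 5
bias = 0.5

net_h1 = 0.1 * x1 + 0.3 * x2 + 0.5 * x3 + bias * 1   # linear (identity) activation at h1
net_h2 = 0.2 * x1 + 0.4 * x2 + 0.6 * x3 + bias * 1   # linear (identity) activation at h2
out_h1, out_h2 = net_h1, net_h2

net_o1 = 0.7 * out_h1 + 0.9 * out_h2 + bias * 1
net_o2 = 0.8 * out_h1 + 0.1 * out_h2 + bias * 1
out_o1, out_o2 = sigmoid(net_o1), sigmoid(net_o2)

print(round(net_h1, 2), round(net_h2, 2))    # 4.3 5.3
print(round(out_o1, 4), round(out_o2, 4))    # 0.9997 0.9887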

b) What are the general weight update equations according to the delta rule for this network? (2+2)

Error_j = ½ (t_j − O_j)²
learning rate = η

Weight update equation for a hidden unit i to an output unit j:

new_w_ij = old_w_ij + Δw_ij
Δw_ij = −η (∂Error / ∂w_ij) = η [(t_j − O_j) · σ(net_j)(1 − σ(net_j)) · O(net_i)]

Weight update equation for an input unit k to a hidden unit i:

new_w_ki = old_w_ki + Δw_ki
Δw_ki = −η (∂Error / ∂w_ki) = η [ Σ_{j=1}^{2} (t_j − O_j) · σ(net_j)(1 − σ(net_j)) · w_ij ] · (1) · x_k

(The factor (1) is the derivative of the linear activation used at the hidden units.)
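Written out as code, the two update rules look as follows; this is a sketch with my own function names, and the trailing factor 1 stands for the derivative of the linear hidden activation.

def output_weight_update(eta, t_j, o_j, o_i):
    """Δw_ij for a hidden→output weight feeding a sigmoid output unit."""
    delta_j = (t_j - o_j) * o_j * (1 - o_j)          # σ'(net_j) = σ(net_j)(1 − σ(net_j))
    return eta * delta_j * o_i

def hidden_weight_update(eta, targets, outputs, w_i_to_out, x_k):
    """Δw_ki for an input→hidden weight feeding a linear hidden unit."""
    back = sum((t - o) * o * (1 - o) * w
               for t, o, w in zip(targets, outputs, w_i_to_out))
    return eta * back * 1 * x_k                      # derivative of the linear activation = 1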

c) Do a backward pass (backpropagation) and compute the updates to the weights b1, w4 and w10. Use learning rate
η = 0.01. Show all the working. (5+5+5)

new_w10 = 0.1 + 0.01 × (0.05 − 0.9887) × 0.9887 × (1 − 0.9887) × 5.3
new_w10 = 0.1 + 0.01 × (−0.9387) × 0.9887 × (0.0113) × 5.3
new_w10 = 0.1 − 0.000556 = 0.09944

new_w4 = 0.4 + 0.01 × [(0.1 − 0.9997) × 0.9997 × (1 − 0.9997) × 0.9 + (0.05 − 0.9887) × 0.9887 × (1 − 0.9887) × 0.1] × 4
new_w4 = 0.4 + 0.01 × [−0.000243 − 0.00105] × 4
new_w4 = 0.4 − 0.0000517 = 0.39995

new_b1 = 0.5 + 0.01 × [(0.1 − 0.9997) × 0.9997 × (1 − 0.9997) × 0.7 + (0.05 − 0.9887) × 0.9887 × (1 − 0.9887) × 0.8]
new_b1 = 0.5 + 0.01 × [−0.000189 − 0.00839]
new_b1 = 0.5 − 0.0000858 = 0.49991
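The three numerical updates can be reproduced directly from the part (a) results. The reading of the weight names (w10 from h2 to O2, w4 from x2 to h2, b1 as the bias at h1) is inferred from the working above, since the figure is not shown here.

eta = 0.01
t1, t2 = 0.1, 0.05
o1, o2 = 0.9997, 0.9887        # rounded outputs from part (a)
h1, h2 = 4.3, 5.3              # hidden activations from part (a)

delta_o1 = (t1 - o1) * o1 * (1 - o1)
delta_o2 = (t2 - o2) * o2 * (1 - o2)

new_w10 = 0.1 + eta * delta_o2 * h2                           # weight h2 → O2
new_w4  = 0.4 + eta * (delta_o1 * 0.9 + delta_o2 * 0.1) * 4   # weight x2 → h2, x2 = 4
new_b1  = 0.5 + eta * (delta_o1 * 0.7 + delta_o2 * 0.8) * 1   # bias at h1, bias input = 1

print(round(new_w10, 5), round(new_w4, 5), round(new_b1, 5))  # ≈ 0.09944 0.39995 0.49991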

