
CS/IT 308

IIIT Vadodara
B.Tech. CSE and IT
Mid-sem Remote Exam Winter 2020-21
CS/IT 308 - Machine Learning
Total marks: 10 Time Allowed: 40 minutes

1. (4 marks) Consider the following algorithm:


1. Consider the m data points D = {x1, x2, ..., xm}, where xi ∈ R^n. Compute
D* = {x*1, x*2, ..., x*m}, where x*i = xi / ||xi||2.
2. Let there be K clusters with cluster centers µi, for i = 1, 2, ..., K. Compute
µ*i = µi / ||µi||2.
3. Compute (x*i)^T µ*j, ∀ i, j.
4. If (x*i)^T µ*c = max_j (x*i)^T µ*j, then assign class c to data point xi.
5. Stop the algorithm if there is no change in the class assignment for any
data point; otherwise, compute the new cluster centers µi and go to step 2.
The 8 given data points in R^2 are: x1 = [−2, −1]^T, x2 = [−2, +1]^T, x3 =
[−2, +2]^T, x4 = [−1, −1]^T, x5 = [+1, −1]^T, x6 = [+1, +1]^T, x7 = [+1, +2]^T,
and x8 = [+2, +1]^T.
Now, classify the data points using µ1 = [+1, +1]^T, µ2 = [−2, +1]^T, and
µ3 = [−1, −1]^T.
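As an illustration (not part of the exam), the assignment step of this normalized ("spherical" k-means style) algorithm can be sketched in Python. The data points and initial centers below are taken from the question; the function name is my own.

```python
import numpy as np

# The 8 data points and the 3 initial cluster centers from the question.
X = np.array([[-2, -1], [-2, 1], [-2, 2], [-1, -1],
              [1, -1], [1, 1], [1, 2], [2, 1]], dtype=float)
mu = np.array([[1, 1], [-2, 1], [-1, -1]], dtype=float)

def assign_classes(X, mu):
    """One assignment pass: normalize points and centers to unit
    length, then pick the center with the largest inner product
    (i.e. the highest cosine similarity)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Mn = mu / np.linalg.norm(mu, axis=1, keepdims=True)
    sims = Xn @ Mn.T                 # sims[i, j] = (x*_i)^T mu*_j
    return np.argmax(sims, axis=1)   # 0-based index c of the winning center

# Note: x5 = [1, -1] scores exactly 0 against both mu1 and mu3, so the
# tie is broken arbitrarily (argmax picks the first, i.e. mu1).
labels = assign_classes(X, mu)
print(labels + 1)  # 1-based cluster labels
```

This only performs one assignment pass (steps 1-4); a full run would repeat it with recomputed centers until the labels stop changing, as step 5 prescribes.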
2. (4 marks) Consider a 2D Gaussian mixture probability density function
p = (1/3) N2(µ1, Σ) + (1/3) N2(µ2, Σ) + (1/3) N2(µ3, Σ).    (1)

Given µ1 = [0 0]^T, µ2 = [0 1]^T, µ3 = [2 0]^T, and Σ = [[1, 0], [0, 1]]
(the 2×2 identity matrix). It is clear that P1 = P2 = P3 = 1/3, and the
number of classes M = 3.
(a) The following observations are drawn from equation (1): x1 = [1 1]^T, x2 =
[2 1]^T, x3 = [1 2]^T, x4 = [−1 −1]^T, x5 = [−2 1]^T, x6 = [2 −1]^T. Classify
the observations using the Bayes decision rule.
(b) If the probability density function (1) is changed to
p = 0.4 N2(µ1, I) + 0.6 N2(µ3, I),    (2)
then check whether the classification decisions for observations x4 and
x6 remain the same as in part (a) or not.
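A minimal sketch of the Bayes decision rule for this setting, for checking hand calculations: since the shared covariance is the identity, the class-conditional log-density is −||x − µ||²/2 plus a constant, so the rule reduces to maximizing log(prior) − ||x − µ||²/2. Variable and function names are my own.

```python
import numpy as np

# Means and priors from equation (1); shared covariance is the identity.
mus = np.array([[0, 0], [0, 1], [2, 0]], dtype=float)
priors_a = np.array([1/3, 1/3, 1/3])

def bayes_classify(x, mus, priors):
    """Bayes decision rule for Gaussian classes with covariance I:
    maximize log(prior_j) - ||x - mu_j||^2 / 2 over classes j."""
    d2 = np.sum((mus - x) ** 2, axis=1)   # squared distances to each mean
    scores = np.log(priors) - d2 / 2.0
    return int(np.argmax(scores)) + 1     # 1-based class label

obs = np.array([[1, 1], [2, 1], [1, 2], [-1, -1], [-2, 1], [2, -1]],
               dtype=float)
print([bayes_classify(x, mus, priors_a) for x in obs])       # part (a)

# Part (b): only two components (means mu1 and mu3), priors 0.4 and 0.6;
# labels 1 and 2 here correspond to mu1 and mu3 respectively.
mus_b = np.array([[0, 0], [2, 0]], dtype=float)
priors_b = np.array([0.4, 0.6])
print([bayes_classify(x, mus_b, priors_b) for x in obs[[3, 5]]])
```

With equal priors as in part (a), the log-prior term is constant and the rule collapses to nearest-mean classification; in part (b) the unequal priors shift the decision boundary toward the less likely class.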


3. (2 marks) Consider a support vector machine (SVM) and the following
two-class training data over the two features (x1, x2): class C1 =
{(1, 1), (2, 2), (2, 0)} and class C2 = {(0, 0), (1, 0), (0, 1)}. Now,
(a) plot these six training points and construct (by inspection) the weight
vector of the optimal hyperplane and the optimal margin, and (b) identify
the support vectors.
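A candidate maximum-margin hyperplane found by inspection can be checked numerically; the w and b below are a hand-worked assumption (the line x1 + x2 = 1.5, midway between the classes), not an answer key, and the snippet simply verifies the canonical-form margin conditions.

```python
import numpy as np

# Training points and labels (+1 for C1, -1 for C2) from the question.
X = np.array([[1, 1], [2, 2], [2, 0], [0, 0], [1, 0], [0, 1]], dtype=float)
y = np.array([1, 1, 1, -1, -1, -1])

# Candidate hyperplane w.x + b = 0 in canonical form (assumed, to be
# verified): w = (2, 2), b = -3, i.e. the line x1 + x2 = 1.5.
w = np.array([2.0, 2.0])
b = -3.0

margins = y * (X @ w + b)                # functional margins y_i (w.x_i + b)
assert np.all(margins >= 1 - 1e-9)       # every point is classified correctly
support = X[np.isclose(margins, 1.0)]    # support vectors lie on the margin
width = 2.0 / np.linalg.norm(w)          # geometric margin 2 / ||w||
print(support)
print(width)
```

If all functional margins are at least 1 and several points attain exactly 1, the candidate is feasible in canonical form; the points attaining 1 are the support vectors, and the geometric margin is 2/||w||.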
