
Practice Assignment 3

CE 784: Machine Learning and Data Analytics for Civil Engineering Applications

Questions
1. For an SVM, suppose the hyperplane obtained is defined by $y = w^\top x$ with $w = (1, -2)^\top$.

(a) Check whether this hyperplane correctly classifies the following two points: $y = 1$, $x = (1, 0)^\top$ and $y = 1$, $x = (1, 1)^\top$.
(b) Determine the distance of each of the following three points from the hyperplane, given that the signed distance of a point $x$ from the hyperplane defined by $w$ is $\frac{w^\top x}{\lVert w \rVert_2}$:
• $x = (-1, 2)^\top$
• $x = (1, 2)^\top$
• $x = (1, 1)^\top$
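
A minimal numpy sketch (not part of the assignment statement) to check the arithmetic in question 1; it assumes the classifier predicts $\operatorname{sign}(w^\top x)$ and that $\lVert w \rVert_2$ is the Euclidean norm:

```python
import numpy as np

w = np.array([1.0, -2.0])  # hyperplane normal: predict sign(w^T x)

# Part (a): check the predictions for the two labelled points.
for x, y in [(np.array([1.0, 0.0]), 1), (np.array([1.0, 1.0]), 1)]:
    pred = np.sign(w @ x)
    print(x, "score:", w @ x, "predicted:", pred, "correct:", pred == y)

# Part (b): signed distance w^T x / ||w||_2 for the three points.
for x in [np.array([-1.0, 2.0]), np.array([1.0, 2.0]), np.array([1.0, 1.0])]:
    print(x, "distance:", (w @ x) / np.linalg.norm(w))
```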
2. Consider the following dataset with two classes, where each point is represented by its $(x_1, x_2)$ coordinates:
Class 1: $(1, 1), (2, 1), (1, -2), (2, -1)$
Class 2: $(4, 0), (5, 1), (5, -3), (6, 0)$

(a) Can a linear Support Vector Machine (SVM) separate the two classes? If yes,
find the equation of the separating hyperplane. If no, explain why a linear
SVM cannot separate the classes.
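
For a quick empirical check of part (a), one could fit a linear SVM with scikit-learn (assuming it is available); a large $C$ approximates a hard margin, so perfect training accuracy indicates linear separability:

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [2, 1], [1, -2], [2, -1],   # Class 1
              [4, 0], [5, 1], [5, -3], [6, 0]])   # Class 2
y = np.array([1, 1, 1, 1, -1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)       # hard-margin surrogate
print("training accuracy:", clf.score(X, y))      # 1.0 if separable
print("w =", clf.coef_[0], ", b =", clf.intercept_[0])
```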

3. Suppose you have two data points $A = (5, 3)$ and $B = (2, 6)$. Use the following function to map these points into a six-dimensional vector space:
\[
\phi(x) = \left( x_1^2, \; x_2^2, \; \sqrt{2}\, x_1 x_2, \; \sqrt{2}\, x_1, \; \sqrt{2}\, x_2, \; 1 \right)
\]

(a) Determine $\phi(A)$ and $\phi(B)$.
(b) Calculate the dot product between these two points in the space defined by $\phi$: $\phi(A)^\top \phi(B)$.
(c) Given the kernel function $K(x, y) = (x^\top y + 1)^2$, calculate $K(A, B)$.
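
A short sketch for question 3 that evaluates both routes, the explicit map $\phi$ and the kernel shortcut; the results of parts (b) and (c) should agree:

```python
import numpy as np

def phi(x):
    """Explicit degree-2 polynomial feature map into R^6."""
    x1, x2 = x
    return np.array([x1**2, x2**2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1,
                     np.sqrt(2) * x2,
                     1.0])

A, B = np.array([5.0, 3.0]), np.array([2.0, 6.0])
print("phi(A) =", phi(A))
print("phi(B) =", phi(B))
print("phi(A)^T phi(B) =", phi(A) @ phi(B))   # explicit inner product
print("K(A, B)         =", (A @ B + 1) ** 2)  # kernel shortcut, same value
```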

4. The ridge regression cost function is given by


\[
J(w) = \sum_{i=1}^{m} \left( w^\top x^{(i)} - y^{(i)} \right)^2 + \frac{\lambda}{2} \, \lVert w \rVert^2,
\]

where w is the parameter vector and λ is the regularization parameter.

(a) Find a closed-form expression for the value of w which minimizes the ridge
regression cost function.
(b) Given the ridge regression cost function above in terms of $w$, how can kernels be used to implicitly represent feature vectors in a high-dimensional (possibly infinite-dimensional) space via a feature mapping $\phi(x)$?

(c) Derive a prediction for a new input $x_{\text{new}}$ using the kernel trick, without explicitly computing the feature map $\phi(x_{\text{new}})$. Assume that $w$ can be expressed as a linear combination of the input feature vectors, i.e.,
\[
w = \sum_{i=1}^{m} \alpha_i \, \phi(x^{(i)})
\]
for some set of parameters $\alpha_i$.
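
A hedged numpy sketch of parts (a) and (c) on synthetic data (the data and the value of $\lambda$ are illustrative, not from the assignment); note the $\lambda/2$ factor inherited from this cost function:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                       # m = 20 samples, 3 features
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=20)
lam = 0.5

# (a) Setting grad J(w) = 0 gives w = (X^T X + (lam/2) I)^{-1} X^T y.
w = np.linalg.solve(X.T @ X + (lam / 2) * np.eye(X.shape[1]), X.T @ y)

# (c) Dual form with a linear kernel K(x, z) = x^T z:
#     alpha = (K + (lam/2) I)^{-1} y,  f(x_new) = sum_i alpha_i K(x^(i), x_new)
K = X @ X.T
alpha = np.linalg.solve(K + (lam / 2) * np.eye(X.shape[0]), y)

x_new = rng.normal(size=3)
print("primal prediction:", w @ x_new)
print("kernel prediction:", alpha @ (X @ x_new))   # identical up to rounding
```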


5. Prove that the Radial Basis Function (RBF) kernel $K(x, z) = e^{-\frac{(x - z)^2}{\sigma^2}}$ is well-defined [i.e., $K$ is symmetric and positive semi-definite] and corresponds to a feature map into an infinite-dimensional space.
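
One standard starting point for scalar inputs $x, z$ (a hint, not the full proof): expanding the cross term in a Taylor series exhibits $K$ as an inner product of infinite-dimensional feature vectors,
\[
e^{-\frac{(x - z)^2}{\sigma^2}}
= e^{-x^2/\sigma^2}\, e^{-z^2/\sigma^2}\, e^{2xz/\sigma^2}
= \sum_{k=0}^{\infty}
  \underbrace{e^{-x^2/\sigma^2} \sqrt{\tfrac{(2/\sigma^2)^k}{k!}}\, x^k}_{\phi_k(x)} \;
  \underbrace{e^{-z^2/\sigma^2} \sqrt{\tfrac{(2/\sigma^2)^k}{k!}}\, z^k}_{\phi_k(z)},
\]
so $\phi(x)$ has infinitely many coordinates $\phi_k(x)$.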

6. Suppose we have an arbitrary set of input vectors $x_1, x_2, \ldots, x_n$. The kernel matrix $M$ corresponding to a kernel function $K$ is an $n \times n$ matrix such that $M_{ij} = K(x_i, x_j)$. Show that the kernel matrix $M$ is symmetric and positive semi-definite.
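
A numerical sanity check (not a proof) for question 6, using the RBF kernel from question 5 on random inputs; symmetry is exact and the smallest eigenvalue should be nonnegative up to rounding:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 2))     # n = 10 arbitrary input vectors
sigma = 1.0

# Kernel matrix M_ij = K(x_i, x_j) for the RBF kernel.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
M = np.exp(-sq_dists / sigma**2)

print("symmetric:", np.allclose(M, M.T))
print("min eigenvalue:", np.linalg.eigvalsh(M).min())  # >= 0 up to rounding
```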
