
ML - Week2

Baban Gain
February 2024

1 K-Nearest Neighbour Regression

x     y
0.5   1.5
1     2
1.5   3
2     3.5
3     4

Figure 1: The inputs and corresponding target labels

Calculate the value of y at x = 2.5 using k-nearest neighbour (k-NN) regression with k = 3.

x     Distance to x = 2.5   y
0.5   2.0                   1.5
1     1.5                   2
1.5   1.0                   3
2     0.5                   3.5
3     0.5                   4

The nearest neighbours are x = 1.5, 2 and 3.
The predicted value of y at x = 2.5 is \( \frac{1}{3}(3 + 3.5 + 4) = 3.5 \).
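The steps above can be checked with a minimal sketch in plain Python (the helper name `knn_predict` is just for illustration):

```python
# k-NN regression: average the y-values of the k training points
# closest to the query x.
def knn_predict(xs, ys, x_query, k=3):
    # Sort training indices by absolute distance to the query point.
    order = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x_query))
    # Average the targets of the k nearest neighbours.
    return sum(ys[i] for i in order[:k]) / k

xs = [0.5, 1, 1.5, 2, 3]
ys = [1.5, 2, 3, 3.5, 4]
y_pred = knn_predict(xs, ys, 2.5, k=3)
print(y_pred)  # 3.5
```

The three smallest distances are 0.5, 0.5 and 1.0, so the averaged neighbours are exactly x = 2, 3 and 1.5 from the table.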

2 Derivation

\[ \hat{y}_i = \beta_0 + \beta_1 x_i \tag{1} \]
\[ RSS(\beta) = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 \tag{2} \]
\[ RSS(\beta) = \sum_{i=1}^{N} \bigl(y_i - (\beta_0 + \beta_1 x_i)\bigr)^2 \tag{3} \]
\[ RSS(\beta) = \sum_{i=1}^{N} (y_i - \beta_0 - \beta_1 x_i)^2 \tag{4} \]

2.1 Derivative w.r.t \beta_0

\[ \frac{\partial RSS}{\partial \beta_0} = 2 \cdot \sum_{i=1}^{N} (y_i - \beta_0 - \beta_1 x_i) \cdot (-1) = 0 \tag{5} \]
\[ \frac{\partial RSS}{\partial \beta_0} = -2 \cdot \sum_{i=1}^{N} (y_i - \beta_0 - \beta_1 x_i) = 0 \tag{6} \]
\[ \sum_{i=1}^{N} y_i - \sum_{i=1}^{N} \beta_0 - \sum_{i=1}^{N} \beta_1 x_i = \frac{0}{-2} = 0 \tag{7} \]
\[ \sum_{i=1}^{N} \beta_0 = \sum_{i=1}^{N} y_i - \beta_1 \sum_{i=1}^{N} x_i \tag{8} \]
\[ N \cdot \beta_0 = \sum_{i=1}^{N} y_i - \beta_1 \sum_{i=1}^{N} x_i \tag{9} \]
\[ \beta_0 = \frac{\sum_{i=1}^{N} y_i}{N} - \frac{\beta_1 \sum_{i=1}^{N} x_i}{N} \tag{10} \]
\[ \beta_0 = \bar{Y} - \beta_1 \cdot \bar{X} \tag{11} \]
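Eq. (11) can be sanity-checked numerically: with \( \beta_0 = \bar{Y} - \beta_1 \bar{X} \), the residuals sum to zero for any slope, which is exactly the condition \( \partial RSS / \partial \beta_0 = 0 \). A small sketch on the data from Figure 1 (the slope values are arbitrary):

```python
# Check Eq. (11): beta0 = ybar - beta1*xbar makes the residuals sum to zero
# for ANY choice of beta1, i.e. dRSS/dbeta0 = 0.
xs = [0.5, 1, 1.5, 2, 3]
ys = [1.5, 2, 3, 3.5, 4]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)

residual_sums = []
for beta1 in (0.0, 1.0, -2.5):  # arbitrary slopes
    beta0 = ybar - beta1 * xbar  # Eq. (11)
    residual_sums.append(sum(y - beta0 - beta1 * x for x, y in zip(xs, ys)))
print(residual_sums)
```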

2.2 Derivative w.r.t \beta_1

\[ \frac{\partial RSS}{\partial \beta_1} = 2 \cdot \sum_{i=1}^{N} (y_i - \beta_0 - \beta_1 x_i) \cdot (-x_i) = 0 \tag{12} \]
\[ \frac{\partial RSS}{\partial \beta_1} = -2 \cdot \sum_{i=1}^{N} x_i (y_i - \beta_0 - \beta_1 x_i) = 0 \tag{13} \]
\[ \sum_{i=1}^{N} x_i (y_i - \beta_0 - \beta_1 x_i) = 0 \tag{14} \]

Replacing \beta_0 with \( \bar{Y} - \beta_1 \cdot \bar{X} \),

\[ \sum_{i=1}^{N} x_i (y_i - \bar{Y} + \beta_1 \cdot \bar{X} - \beta_1 x_i) = 0 \tag{15} \]
\[ \sum_{i=1}^{N} x_i (y_i - \bar{Y}) + \sum_{i=1}^{N} x_i \cdot (\beta_1 \cdot \bar{X} - \beta_1 x_i) = 0 \tag{16} \]
\[ \sum_{i=1}^{N} x_i (y_i - \bar{Y}) = \sum_{i=1}^{N} x_i \cdot (\beta_1 x_i - \beta_1 \cdot \bar{X}) \tag{17} \]
\[ \sum_{i=1}^{N} x_i (y_i - \bar{Y}) = \beta_1 \cdot \sum_{i=1}^{N} x_i \cdot (x_i - \bar{X}) \tag{18} \]
\[ \beta_1 = \frac{\sum_{i=1}^{N} x_i (y_i - \bar{Y})}{\sum_{i=1}^{N} x_i \cdot (x_i - \bar{X})} \tag{19} \]

Subtracting \( \sum_{i=1}^{N} \bar{X}(y_i - \bar{Y}) \) from the numerator and \( \sum_{i=1}^{N} \bar{X}(x_i - \bar{X}) \) from the denominator (both sums are zero, as proved in Section 2.3),

\[ \beta_1 = \frac{\sum_{i=1}^{N} x_i (y_i - \bar{Y}) - \sum_{i=1}^{N} \bar{X} (y_i - \bar{Y})}{\sum_{i=1}^{N} x_i \cdot (x_i - \bar{X}) - \sum_{i=1}^{N} \bar{X} \cdot (x_i - \bar{X})} \tag{20} \]
\[ \beta_1 = \frac{\sum_{i=1}^{N} (x_i - \bar{X})(y_i - \bar{Y})}{\sum_{i=1}^{N} (x_i - \bar{X}) \cdot (x_i - \bar{X})} \tag{21} \]
\[ \beta_1 = \frac{\sum_{i=1}^{N} (x_i - \bar{X})(y_i - \bar{Y})}{\sum_{i=1}^{N} (x_i - \bar{X})^2} \tag{22} \]

Further simplifying,

\[ \beta_1 = \frac{\sum_{i=1}^{N} \bigl( x_i y_i - x_i \bar{Y} - \bar{X} y_i + \bar{X}\bar{Y} \bigr)}{\sum_{i=1}^{N} \bigl( x_i^2 - 2 x_i \bar{X} + \bar{X}^2 \bigr)} \tag{23} \]
\[ \beta_1 = \frac{\sum_{i=1}^{N} x_i y_i - \sum_{i=1}^{N} x_i \bar{Y} - \sum_{i=1}^{N} \bar{X} y_i + \sum_{i=1}^{N} \bar{X}\bar{Y}}{\sum_{i=1}^{N} x_i^2 - 2 \sum_{i=1}^{N} x_i \bar{X} + \sum_{i=1}^{N} \bar{X}^2} \tag{24} \]
\[ \beta_1 = \frac{\sum_{i=1}^{N} x_i y_i - N\bar{X}\bar{Y} - N\bar{X}\bar{Y} + N\bar{X}\bar{Y}}{\sum_{i=1}^{N} x_i^2 - 2N\bar{X}^2 + N\bar{X}^2} \tag{25} \]
\[ \beta_1 = \frac{\sum_{i=1}^{N} x_i y_i - N\bar{X}\bar{Y}}{\sum_{i=1}^{N} x_i^2 - N\bar{X}^2} \tag{26} \]

Multiplying numerator and denominator by N,

\[ \beta_1 = \frac{N \sum x_i y_i - \sum x_i \sum y_i}{N \sum x_i^2 - \left( \sum x_i \right)^2} \tag{27} \]
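The centered form (22) and the expanded raw-sum form (27) must agree on any data set; a quick numerical check on the data from Figure 1:

```python
# Verify that Eq. (22) and Eq. (27) give the same slope.
xs = [0.5, 1, 1.5, 2, 3]
ys = [1.5, 2, 3, 3.5, 4]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n

# Eq. (22): sums of centered products.
b1_centered = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
               / sum((x - xbar) ** 2 for x in xs))

# Eq. (27): raw sums only, no means needed.
b1_raw = ((n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys))
          / (n * sum(x * x for x in xs) - sum(xs) ** 2))

print(b1_centered, b1_raw)  # both approximately 1.0405
```

Form (27) is often preferred in hand computation because it needs only running sums of \( x_i \), \( y_i \), \( x_i y_i \) and \( x_i^2 \), with no second pass over the data.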
P P

2.3 Proofs for subtracted parts

2.3.1 Numerator

\begin{align*}
\sum_{i=1}^{N} \bar{X}(y_i - \bar{Y})
&= \sum_{i=1}^{N} \bar{X} \cdot y_i - \sum_{i=1}^{N} \bar{X} \cdot \bar{Y} \\
&= \bar{X} \sum_{i=1}^{N} y_i - N \cdot \bar{X} \cdot \bar{Y} \\
&= \bar{X} \cdot N \cdot \bar{Y} - N \cdot \bar{X} \cdot \bar{Y} \\
&= 0
\end{align*}

2.3.2 Denominator

\begin{align*}
\sum_{i=1}^{N} \bar{X} \cdot (x_i - \bar{X})
&= \bar{X} \cdot \sum_{i=1}^{N} (x_i - \bar{X}) \\
&= \bar{X} \cdot (N \cdot \bar{X} - N \cdot \bar{X}) \\
&= 0
\end{align*}
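Both subtracted parts vanish on any data set, not just this one; a quick numerical confirmation on the example data:

```python
# Both "subtracted parts" from Section 2.3 are identically zero.
xs = [0.5, 1, 1.5, 2, 3]
ys = [1.5, 2, 3, 3.5, 4]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)

numerator_part = sum(xbar * (y - ybar) for y in ys)    # Sec. 2.3.1
denominator_part = sum(xbar * (x - xbar) for x in xs)  # Sec. 2.3.2
print(numerator_part, denominator_part)
```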

3 Linear Regression

x     y
0.5   1.5
1     2
1.5   3
2     3.5
3     4

Figure 2: The inputs and corresponding target labels

3.1 Find \beta_1

\[ \bar{X} = \frac{0.5 + 1 + 1.5 + 2 + 3}{5} = \frac{8}{5} = 1.6 \]
\[ \bar{Y} = \frac{1.5 + 2 + 3 + 3.5 + 4}{5} = \frac{14}{5} = 2.8 \]
\[ \beta_1 = \frac{\sum_{i=1}^{N} (x_i - \bar{X})(y_i - \bar{Y})}{\sum_{i=1}^{N} (x_i - \bar{X})^2} \]
\[ \text{Numerator} = (-1.1 \cdot -1.3) + (-0.6 \cdot -0.8) + (-0.1 \cdot 0.2) + (0.4 \cdot 0.7) + (1.4 \cdot 1.2) = 3.85 \]
\[ \text{Denominator} = 1.21 + 0.36 + 0.01 + 0.16 + 1.96 = 3.7 \]
\[ \beta_1 = 3.85 / 3.7 = 1.0405 \]

Alternatively,

\[ \beta_1 = \frac{N \sum x_i y_i - \sum x_i \sum y_i}{N \sum x_i^2 - \left( \sum x_i \right)^2} \]
\[ \sum x_i y_i = (0.5 \cdot 1.5) + 2 + 4.5 + 7 + 12 = 26.25 \]
\[ \sum x_i = 8 \]
\[ \sum y_i = 14 \]
\[ \sum_{i=1}^{N} x_i^2 = 0.25 + 1 + 2.25 + 4 + 9 = 16.5 \]
\[ \beta_1 = \frac{(5 \cdot 26.25) - (8 \cdot 14)}{5 \cdot 16.5 - 64} = \frac{19.25}{18.5} = 1.0405 \]

3.2 Find \beta_0

\[ \beta_0 = \bar{Y} - \beta_1 \cdot \bar{X} \]
\[ \beta_0 = 2.8 - (1.0405 \cdot 1.6) = 1.1352 \]

3.3 Find y at x = 2.5

\[ y = \beta_0 + \beta_1 x \]
\[ y = 1.1352 + 1.0405 \cdot 2.5 = 3.73645 \]
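Section 3 can be recomputed end-to-end in a few lines (note the hand calculation rounds \( \beta_1 \) to four decimals before substituting; carrying full precision gives \( \beta_0 \approx 1.1351 \) and a prediction of about 3.7365):

```python
# Recompute Section 3: slope, intercept, and the prediction at x = 2.5.
xs = [0.5, 1, 1.5, 2, 3]
ys = [1.5, 2, 3, 3.5, 4]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n

beta1 = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
beta0 = ybar - beta1 * xbar
y_hat = beta0 + beta1 * 2.5
print(round(beta1, 4), round(beta0, 4), round(y_hat, 4))
```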

4 Finding \beta_0, \beta_1 and \beta_2

For two input features, with the sums taken over the mean-centered deviations \( x_1 - \bar{X_1} \), \( x_2 - \bar{X_2} \) and \( y - \bar{Y} \):

\[ \beta_0 = \bar{Y} - \beta_1 \cdot \bar{X_1} - \beta_2 \cdot \bar{X_2} \]
\[ \beta_1 = \frac{\left( \sum x_2^2 \right) \left( \sum x_1 y \right) - \left( \sum x_1 x_2 \right) \left( \sum x_2 y \right)}{\left( \sum x_1^2 \right) \left( \sum x_2^2 \right) - \left( \sum x_1 x_2 \right)^2} \]
\[ \beta_2 = \frac{\left( \sum x_1^2 \right) \left( \sum x_2 y \right) - \left( \sum x_1 x_2 \right) \left( \sum x_1 y \right)}{\left( \sum x_2^2 \right) \left( \sum x_1^2 \right) - \left( \sum x_1 x_2 \right)^2} \]
