Part (A): House Price vs. Number of Rooms, Number of Bathrooms, Number of Stories

The document describes linear regression models for predicting house prices from the number of rooms, bathrooms, and stories of a house. It gives the equations and parameters of a linear model (part a), computes the parameters with a direct method (part b), interprets the parameter values (parts c, d), applies the model to a new data point (part e), and performs a gradient-descent update of the parameters (part f).


From the figures above, (a) and (c) are examples of linear models: figure (a) shows a constant positive slope and figure (c) a constant negative slope, so the systems given by (a) and (c) are linear systems.

Part (a)

$$\text{house price} = \theta_0 + \theta_1\,(\text{number of rooms}) + \theta_2\,(\text{number of bathrooms}) + \theta_3\,(\text{number of stories})$$

In matrix form, $y = X\theta$, with

$$y = \begin{bmatrix} 1100000 \\ 650000 \\ 550000 \\ 147000 \\ 780000 \end{bmatrix},\qquad
X = \begin{bmatrix} 1 & 10 & 5 & 3 \\ 1 & 5 & 2 & 2 \\ 1 & 2 & 2 & 2 \\ 1 & 1 & 1 & 1 \\ 1 & 7 & 3 & 1 \end{bmatrix},\qquad
\theta = \begin{bmatrix} \theta_0 \\ \theta_1 \\ \theta_2 \\ \theta_3 \end{bmatrix}$$

Part (b)
Direct method

Since $X$ is not square, multiply both sides of $y = X\theta$ by the pseudo-inverse $X^{+}$: $X^{+}y = X^{+}X\theta$, so the least-squares solution is $\theta = X^{+}y$ (in MATLAB notation, X\y). In NumPy this is np.linalg.lstsq:

import numpy as np

X = np.array([[1, 10, 5, 3], [1, 5, 2, 2], [1, 2, 2, 2], [1, 1, 1, 1], [1, 7, 3, 1]])

y = np.array([1100000, 650000, 550000, 147000, 780000])

theta, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)

$$\theta = \begin{bmatrix} 75075 \\ 59300 \\ 50962.5 \\ 78512.5 \end{bmatrix}$$

Part (c)
The second weight, $\theta_1 = 59300$, is the coefficient of the number of rooms. It means that adding one room increases the predicted house price by 59300.

Part (d)
Among the feature weights, the weight for the number of stories ($\theta_3 = 78512.5$) is the largest, so the number of stories is the most influential feature.

Part (e)

$$\hat{y} = \begin{bmatrix} 1 & 7 & 4 & 2 \end{bmatrix}
\begin{bmatrix} 75075 \\ 59300 \\ 50962.5 \\ 78512.5 \end{bmatrix}
= 75075 + 7 \cdot 59300 + 4 \cdot 50962.5 + 2 \cdot 78512.5 = 851050$$

Q3

The learning rate is a hyperparameter that controls how much the model parameters change, in response to the estimated error, each time they are updated. It is essentially the size of the change applied to the model weights during each iteration of training, and it is also known as the step size. It typically takes a value between 0 and 1. A learning rate that is too large can make the estimate overshoot and converge to a suboptimal solution (or diverge), while a learning rate that is too small requires more training iterations and more steps to reach an optimal solution.
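This trade-off can be illustrated with a hypothetical one-dimensional example (not part of the assignment data): minimizing $f(\theta) = \theta^2$, whose gradient is $2\theta$, with different step sizes.

```python
def gradient_descent(lr, steps=50, theta=1.0):
    """Run plain gradient descent on f(theta) = theta**2 (gradient 2*theta)."""
    for _ in range(steps):
        theta -= lr * 2 * theta
    return theta

slow = gradient_descent(lr=0.01)  # small step: still far from the minimum at 0
fast = gradient_descent(lr=0.4)   # moderate step: essentially converged
huge = gradient_descent(lr=1.5)   # too large: each step overshoots and |theta| grows
```

With lr=0.01 the iterate shrinks by a factor 0.98 per step, with lr=0.4 by a factor 0.2, and with lr=1.5 the iterate is multiplied by −2 each step, so it diverges.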

Part (a)

$$\hat{y} = \theta_0 + \theta_1 x$$

$$e = y_i - \hat{y}_i = y_i - (\theta_0 + \theta_1 x_i)$$

$$e^2 = (y_i - \hat{y}_i)^2 = \left(y_i - (\theta_0 + \theta_1 x_i)\right)^2$$

$$\text{MSE} = \sum_{i=1}^{n} e^2 = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{n} \left(y_i - (\theta_0 + \theta_1 x_i)\right)^2$$

Part (b)

$$\frac{\partial\,\text{MSE}}{\partial \theta_0} = \sum_{i=1}^{n} 2\left(y_i - (\theta_0 + \theta_1 x_i)\right)(0 - 1) = -2\sum_{i=1}^{n} \left(y_i - (\theta_0 + \theta_1 x_i)\right)$$

Part (c)

$$\frac{\partial\,\text{MSE}}{\partial \theta_1} = \sum_{i=1}^{n} 2\left(y_i - (\theta_0 + \theta_1 x_i)\right)(0 - x_i) = -2\sum_{i=1}^{n} \left(y_i - (\theta_0 + \theta_1 x_i)\right)x_i$$

Part (d)

Expression for the gradient:

$$D = \begin{bmatrix} \dfrac{1}{n}\dfrac{\partial\,\text{MSE}}{\partial \theta_0} \\[6pt] \dfrac{1}{n}\dfrac{\partial\,\text{MSE}}{\partial \theta_1} \end{bmatrix}
= \begin{bmatrix} \dfrac{1}{n}\left(-2\displaystyle\sum_{i=1}^{n} \left(y_i - (\theta_0 + \theta_1 x_i)\right)\right) \\[6pt] \dfrac{1}{n}\left(-2\displaystyle\sum_{i=1}^{n} \left(y_i - (\theta_0 + \theta_1 x_i)\right)x_i\right) \end{bmatrix}$$

Update rule with learning rate $L$:

$$\begin{bmatrix} \theta_0 \\ \theta_1 \end{bmatrix} = \begin{bmatrix} \theta_0 \\ \theta_1 \end{bmatrix}_{\text{old}} - L\,D$$

Part (e)

Apply the update rule from part (d) with

$$\begin{bmatrix} \theta_0 \\ \theta_1 \end{bmatrix}_{\text{old}} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

The predictions are $Y_{\text{predicted}} = X\theta$:

$$X = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \\ 1 & 2 \end{bmatrix},\qquad
Y_{\text{predicted}} = \begin{bmatrix} 1 & 1 \\ 1 & 2 \\ 1 & 3 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$$

The residuals are

$$y - X\theta = \begin{bmatrix} 3 \\ 6 \\ 7 \\ 4 \end{bmatrix} - \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 5 \\ 6 \\ 3 \end{bmatrix}$$

The two gradient components are

$$\frac{1}{n}\left(-2\sum_{i=1}^{n} \left(y_i - (\theta_0 + \theta_1 x_i)\right)\right) = -\frac{2}{4}\,(2 + 5 + 6 + 3) = -\frac{32}{4} \cdot 1 = -8$$

$$\frac{1}{n}\left(-2\sum_{i=1}^{n} \left(y_i - (\theta_0 + \theta_1 x_i)\right)x_i\right) = -\frac{2}{4}\,(2 \cdot 1 + 5 \cdot 2 + 6 \cdot 3 + 3 \cdot 2) = -18$$

$$D = \begin{bmatrix} -8 \\ -18 \end{bmatrix}$$

Update with $L = 0.01$:

$$\begin{bmatrix} \theta_0 \\ \theta_1 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix} - 0.01\begin{bmatrix} -8 \\ -18 \end{bmatrix} = \begin{bmatrix} 1 + 0.08 \\ 0 + 0.18 \end{bmatrix} = \begin{bmatrix} 1.08 \\ 0.18 \end{bmatrix}$$
