100 Days of Deep Learning

1.

ML finds the relationship between the input and output columns using a
statistical function, whereas DL learns that relationship with artificial
neural networks (layers of interconnected neurons).

Ex : Input Layer -> Hidden Layer 1 -> Hidden Layer 2 -> Output

2. Types of Neural Network


a. Multi Layer Perceptron (MLP) -> supervised learning on tabular data
b. CNN (Convolutional Neural Network) -> for images
c. RNN (Recurrent Neural Network) -> NLP / sequence data
d. LSTM (Long Short-Term Memory) -> NLP / long sequences
e. Autoencoder -> compression / representation learning
f. GAN (Generative Adversarial Network) -> to generate a wide range of data
   types, including images, music and text.

3. Perceptron : a supervised-learning algorithm and the basic building block of
   neural networks.
   Each input is multiplied by a weight {w1, w2, w3}; the weighted sum plus a
   bias gives z; z is passed through an activation function (here a step
   function) to produce the output.
   The main objective of the training process is to find the weights and the bias.

Code :
import numpy as np

def step(z):
    return 1 if z > 0 else 0

def perceptron(X, y):
    # prepend a column of 1s so the bias can be learned as weights[0]
    X = np.insert(X, 0, 1, axis=1)
    weights = np.ones(X.shape[1])
    lr = 0.1

    for i in range(1000):
        # pick one random row and apply the perceptron update rule
        j = np.random.randint(0, X.shape[0])
        y_hat = step(np.dot(X[j], weights))
        weights = weights + lr * (y[j] - y_hat) * X[j]

    return weights[0], weights[1:]   # bias, weights
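
A quick way to try it (a minimal sketch; the toy data below is assumed, not from the notes):

# two linearly separable blobs: class 0 near (0,0), class 1 near (3,3)
X_toy = np.array([[0,0],[0,1],[1,0],[3,3],[3,4],[4,3]], dtype=float)
y_toy = np.array([0,0,0,1,1,1])

bias, w = perceptron(X_toy, y_toy)
print(bias, w)   # should give a line w1*x1 + w2*x2 + bias = 0 separating the two blobs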

4.
A single perceptron cannot create a non-linear decision boundary, but a
multi-layer perceptron can.
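
For example (a sketch using the perceptron() above; XOR is the classic case that is not linearly separable):

# XOR: no single straight line separates the two classes, so no choice of
# weights and bias can classify all 4 points correctly
X_xor = np.array([[0,0],[0,1],[1,0],[1,1]], dtype=float)
y_xor = np.array([0,1,1,0])

bias, w = perceptron(X_xor, y_xor)
preds = [step(bias + np.dot(w, x)) for x in X_xor]
print(preds)   # at least one of the 4 points is always misclassified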

5. Now we will do backpropagation :

   Dataset (xi1 = cgpa, xi2 = resume score, target = package in lpa):

   cgpa | resume score | package (lpa)
     8  |      8       |      4
     7  |      9       |      5
     6  |     10       |      6
     5  |     12       |      7

Code :
High-level training loop (pseudocode, not runnable as written):

epochs = 5
for i in range(epochs):
    for j in range(x.shape[0]):
        -> select 1 row (randomly)
        -> predict (using forward propagation)
        -> calculate the loss (using a loss function, e.g. MSE)
        -> update weights and bias using gradient descent:
           W_new = W_old - learning_rate * (dL/dW)
    -> calculate the average loss for the epoch
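
The snippets below use a pandas DataFrame df with columns cgpa, profile_score and placed; it is never defined in these notes, so here is an assumed toy version just so the code can run:

import numpy as np
import pandas as pd

# assumed toy data: profile_score stands in for the resume score above, and
# placed is an arbitrary 0/1 label used as the target instead of package (lpa)
df = pd.DataFrame({'cgpa':          [8, 7, 6, 5],
                   'profile_score': [8, 9, 10, 12],
                   'placed':        [1, 1, 0, 0]})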

a.
def initialize_parameters(layer_dims):
    # layer_dims e.g. [2,2,1]: 2 inputs, 2 hidden units, 1 output unit
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)

    for l in range(1, L):
        # W<l> has shape (units in layer l-1, units in layer l); the forward
        # pass uses W.T, so each column corresponds to one unit of layer l
        parameters['W' + str(l)] = np.ones((layer_dims[l-1], layer_dims[l])) * 0.1
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))

    return parameters
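
A quick shape check (sketch):

params = initialize_parameters([2, 2, 1])
for k, v in params.items():
    print(k, v.shape)   # W1 (2, 2), b1 (2, 1), W2 (2, 1), b2 (1, 1)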

b.
def sigmoid(Z):
    A = 1/(1 + np.exp(-Z))
    return A

c.
def linear_forward(A_prev, W, b):
    # W is stored as (prev layer units, this layer's units), hence the transpose
    Z = np.dot(W.T, A_prev) + b
    A = sigmoid(Z)
    return A

d.
def L_layer_forward(X, parameters):

    A = X
    L = len(parameters) // 2          # number of layers in the neural network

    for l in range(1, L + 1):
        A_prev = A
        Wl = parameters['W' + str(l)]
        bl = parameters['b' + str(l)]
        A = linear_forward(A_prev, Wl, bl)

    # returns the final output y_hat and the activations of the last hidden
    # layer (A1 for a [2,2,1] network), which update_parameters needs
    return A, A_prev

e.
def update_parameters(parameters, y, y_hat, A1, X):
    lr = 0.0001

    # output layer: for sigmoid output + log loss, dL/dz2 = -(y - y_hat)
    parameters['W2'][0][0] += lr * (y - y_hat) * A1[0][0]
    parameters['W2'][1][0] += lr * (y - y_hat) * A1[1][0]
    parameters['b2'][0][0] += lr * (y - y_hat)

    # hidden layer: error propagated through each hidden unit's sigmoid;
    # W1[i][j] links input i to hidden unit j (matching the W.T forward pass)
    parameters['W1'][0][0] += lr * (y - y_hat) * parameters['W2'][0][0] * A1[0][0] * (1 - A1[0][0]) * X[0][0]
    parameters['W1'][1][0] += lr * (y - y_hat) * parameters['W2'][0][0] * A1[0][0] * (1 - A1[0][0]) * X[1][0]
    parameters['b1'][0][0] += lr * (y - y_hat) * parameters['W2'][0][0] * A1[0][0] * (1 - A1[0][0])

    parameters['W1'][0][1] += lr * (y - y_hat) * parameters['W2'][1][0] * A1[1][0] * (1 - A1[1][0]) * X[0][0]
    parameters['W1'][1][1] += lr * (y - y_hat) * parameters['W2'][1][0] * A1[1][0] * (1 - A1[1][0]) * X[1][0]
    parameters['b1'][1][0] += lr * (y - y_hat) * parameters['W2'][1][0] * A1[1][0] * (1 - A1[1][0])
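
The same updates in vectorized form (a minimal sketch, assuming the same [2,2,1] shapes as above; the function name is mine, not from the notes):

def update_parameters_vectorized(parameters, y, y_hat, A1, X, lr=0.0001):
    dZ2 = y - y_hat                                # scalar error at the output unit
    dZ1 = dZ2 * parameters['W2'] * A1 * (1 - A1)   # (2,1) error at the hidden units
    parameters['W2'] += lr * dZ2 * A1              # dL/dW2 = -(y - y_hat) * A1
    parameters['b2'] += lr * dZ2
    parameters['W1'] += lr * np.dot(X, dZ1.T)      # (2,2): entry [i][j] is X[i] * dZ1[j]
    parameters['b1'] += lr * dZ1
    return parameters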

f.
X = df[['cgpa', 'profile_score']].values[0].reshape(2,1)  # shape: (no. of features, no. of training examples)
y = df[['placed']].values[0][0]

# Parameter initialization
parameters = initialize_parameters([2,2,1])

y_hat,A1 = L_layer_forward(X,parameters)
y_hat = y_hat[0][0]

update_parameters(parameters,y,y_hat,A1,X)

print('Loss for this student - ',-y*np.log(y_hat) - (1-y)*np.log(1-y_hat))

parameters

g.
X = df[['cgpa', 'profile_score']].values[1].reshape(2,1)  # shape: (no. of features, no. of training examples)
y = df[['placed']].values[1][0]

y_hat,A1 = L_layer_forward(X,parameters)
y_hat = y_hat[0][0]

update_parameters(parameters,y,y_hat,A1,X)
print('Loss for this student - ',-y*np.log(y_hat) - (1-y)*np.log(1-y_hat))

parameters

h.
X = df[['cgpa', 'profile_score']].values[2].reshape(2,1)  # shape: (no. of features, no. of training examples)
y = df[['placed']].values[2][0]

y_hat,A1 = L_layer_forward(X,parameters)
y_hat = y_hat[0][0]

update_parameters(parameters,y,y_hat,A1,X)

print('Loss for this student - ',-y*np.log(y_hat) - (1-y)*np.log(1-y_hat))

parameters

i.
X = df[['cgpa', 'profile_score']].values[3].reshape(2,1)  # shape: (no. of features, no. of training examples)
y = df[['placed']].values[3][0]

y_hat,A1 = L_layer_forward(X,parameters)
y_hat = y_hat[0][0]

update_parameters(parameters,y,y_hat,A1,X)

print('Loss for this student - ',-y*np.log(y_hat) - (1-y)*np.log(1-y_hat))

parameters

j.
parameters = initialize_parameters([2,2,1])
epochs = 50

for i in range(epochs):

    Loss = []

    for j in range(df.shape[0]):

        X = df[['cgpa', 'profile_score']].values[j].reshape(2,1)  # shape: (no. of features, no. of training examples)
        y = df[['placed']].values[j][0]

        # forward pass, then one gradient-descent update for this row
        y_hat, A1 = L_layer_forward(X, parameters)
        y_hat = y_hat[0][0]
        update_parameters(parameters, y, y_hat, A1, X)

        Loss.append(-y*np.log(y_hat) - (1-y)*np.log(1-y_hat))

    print('Epoch - ', i+1, 'Loss - ', np.array(Loss).mean())

parameters
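
After training, the same forward pass can be used for prediction (a sketch; the new student's values are assumed):

# assumed new student: cgpa 7.5, profile_score 9
X_new = np.array([[7.5], [9.0]])
prob, _ = L_layer_forward(X_new, parameters)
print('Probability of placement -', prob[0][0])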

Memorization :
