100 Days of Deep Learning
ML finds the relationship between the input and output columns using statistical
functions, whereas DL is based on artificial neural networks.
Ex : Input Layer -> Hidden Layer 1 -> Hidden Layer 2 -> Output
3. Perceptron : an algorithm -> supervised learning -> the basic building block
of a neural network
z = w1*x1 + w2*x2 + w3*x3 + b (weighted sum of the inputs plus a bias) -> pass z
through an activation function (here, a step function) -> output
The main objective of the training process is to find the weights and the bias
Code :
import numpy as np

def perceptron(X, y):
    X = np.insert(X, 0, 1, axis=1)   # prepend a column of 1s for the bias term
    weights = np.ones(X.shape[1])    # weights[0] acts as the bias
    lr = 0.1
    for i in range(1000):
        j = np.random.randint(0, X.shape[0])             # pick one random row
        y_hat = step(np.dot(X[j], weights))              # predict with the step activation
        weights = weights + lr * (y[j] - y_hat) * X[j]   # perceptron update rule
    return weights[0], weights[1:]   # bias, weights

def step(z):
    return 1 if z > 0 else 0
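A quick usage sketch (hypothetical data, not from the notes): AND-gate inputs are
linearly separable, so the perceptron above should converge on them.
Code :
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])      # AND gate -> linearly separable
bias, w = perceptron(X, y)
print(bias, w)                  # classify a new x with: step(np.dot(x, w) + bias)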
4. A single perceptron cannot create a non-linear decision boundary, but a
multi-layer perceptron (MLP) can - e.g. XOR, as sketched below.
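A minimal sketch of this claim (an illustration using scikit-learn, which the
notes don't use): a linear perceptron cannot fit XOR, while a small MLP typically can.
Code :
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                      # XOR -> not linearly separable

print(Perceptron().fit(X, y).score(X, y))       # at most 0.75, never perfect
mlp = MLPClassifier(hidden_layer_sizes=(4,), solver='lbfgs', random_state=1)
print(mlp.fit(X, y).score(X, y))                # usually 1.0 on this toy set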
Code :
(pseudocode - not runnable as written)
epochs = 5
for i in range(epochs):
    for j in range(X.shape[0]):
        -> select 1 row (random)
        -> predict (using forward prop)
        -> calculate loss (using a loss function -> e.g. MSE; the code below uses log loss)
        -> update weights and bias using GD:
           W_new = W_old - η * (∂L/∂W), where η is the learning rate
    -> calculate avg loss for the epoch
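A tiny worked example of one GD update, assuming MSE loss L = (y - y_hat)^2 and
a single weight (values made up):
Code :
w, lr = 0.5, 0.01
x, y = 2.0, 3.0
y_hat = w * x                    # 1.0
grad = -2 * (y - y_hat) * x      # dL/dw = -8.0
w = w - lr * grad                # 0.5 - 0.01*(-8.0) = 0.58 -> loss decreases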
a.
def initialize_parameters(layer_dims):
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)
    # missing loop filled in with a simple constant initialization (an
    # assumption; shapes match the W.T forward pass used below)
    for l in range(1, L):
        parameters['W' + str(l)] = np.ones((layer_dims[l-1], layer_dims[l])) * 0.1
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters
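Quick shape check (illustrative) for the [2,2,1] network used below:
Code :
parameters = initialize_parameters([2, 2, 1])
# parameters['W1'].shape == (2, 2), parameters['b1'].shape == (2, 1)
# parameters['W2'].shape == (2, 1), parameters['b2'].shape == (1, 1)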
b.
def sigmoid(Z):
    A = 1/(1 + np.exp(-Z))
    return A
c.
def linear_forward(A_prev, W, b):
    Z = np.dot(W.T, A_prev) + b   # W is stored as (n_prev, n_curr), hence the transpose
    A = sigmoid(Z)
    return A
d.
d.
def L_layer_forward(X, parameters):
    A = X
    L = len(parameters) // 2   # number of layers in the neural network
    # missing loop filled in: propagate through every layer, keeping the
    # last hidden activation (A_prev) for the update step below
    for l in range(1, L + 1):
        A_prev = A
        Wl = parameters['W' + str(l)]
        bl = parameters['b' + str(l)]
        A = linear_forward(A_prev, Wl, bl)
    return A, A_prev
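A quick trace of the forward pass with the constant initialization (hypothetical
input, numbers worked by hand):
Code :
X = np.array([[8.0], [8.0]])                 # one example as a column vector
parameters = initialize_parameters([2, 2, 1])
y_hat, A1 = L_layer_forward(X, parameters)
# each hidden unit: Z1 = 0.1*8 + 0.1*8 = 1.6 -> A1 ≈ 0.832
# output: Z2 = 0.1*0.832 * 2 ≈ 0.166 -> y_hat ≈ 0.54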
e.
def update_parameters(parameters, y, y_hat, A1, X):
    # GD step on the output layer only; lr = 0.0001 and (y - y_hat) = -dL/dZ2 for log loss
    parameters['W2'][0][0] = parameters['W2'][0][0] + (0.0001 * (y - y_hat) * A1[0][0])
    parameters['W2'][1][0] = parameters['W2'][1][0] + (0.0001 * (y - y_hat) * A1[1][0])
    parameters['b2'][0][0] = parameters['b2'][0][0] + (0.0001 * (y - y_hat))
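The update above touches only the output layer. A vectorized sketch of the
hidden-layer updates it leaves out (an assumption, derived for log loss with
sigmoid activations, where dL/dZ2 = y_hat - y backpropagates through W2 and the
sigmoid derivative A1*(1 - A1)):
Code :
def update_hidden_parameters(parameters, y, y_hat, A1, X, lr=0.0001):
    dZ1 = (y_hat - y) * parameters['W2'] * A1 * (1 - A1)   # error at the hidden layer, shape (2,1)
    parameters['W1'] -= lr * np.dot(X, dZ1.T)              # dL/dW1[j][i] = X[j] * dZ1[i]
    parameters['b1'] -= lr * dZ1                           # dL/db1 = dZ1
The cells below read from df, which the notes never define; a hypothetical
stand-in with the columns the code expects:
Code :
import pandas as pd
df = pd.DataFrame([[8, 8, 1], [7, 9, 1], [6, 10, 0], [5, 5, 0]],
                  columns=['cgpa', 'profile_score', 'placed'])   # made-up values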
f.
X = df[['cgpa', 'profile_score']].values[0].reshape(2,1)   # shape: (no. of features, no. of training examples)
y = df[['placed']].values[0][0]
# Parameter initialization
parameters = initialize_parameters([2,2,1])
y_hat,A1 = L_layer_forward(X,parameters)
y_hat = y_hat[0][0]
update_parameters(parameters,y,y_hat,A1,X)
parameters
g.
X = df[['cgpa', 'profile_score']].values[1].reshape(2,1)   # shape: (no. of features, no. of training examples)
y = df[['placed']].values[1][0]
y_hat,A1 = L_layer_forward(X,parameters)
y_hat = y_hat[0][0]
update_parameters(parameters,y,y_hat,A1,X)
print('Loss for this student - ',-y*np.log(y_hat) - (1-y)*np.log(1-y_hat))
parameters
h.
X = df[['cgpa', 'profile_score']].values[2].reshape(2,1)   # shape: (no. of features, no. of training examples)
y = df[['placed']].values[2][0]
y_hat,A1 = L_layer_forward(X,parameters)
y_hat = y_hat[0][0]
update_parameters(parameters,y,y_hat,A1,X)
parameters
i.
X = df[['cgpa', 'profile_score']].values[3].reshape(2,1)   # shape: (no. of features, no. of training examples)
y = df[['placed']].values[3][0]
y_hat,A1 = L_layer_forward(X,parameters)
y_hat = y_hat[0][0]
update_parameters(parameters,y,y_hat,A1,X)
parameters
j.
parameters = initialize_parameters([2,2,1])
epochs = 50

for i in range(epochs):
    Loss = []
    for j in range(df.shape[0]):
        X = df[['cgpa', 'profile_score']].values[j].reshape(2,1)   # pick row j
        y = df[['placed']].values[j][0]
        # forward prop, then one GD update on this row
        y_hat, A1 = L_layer_forward(X, parameters)
        y_hat = y_hat[0][0]
        update_parameters(parameters, y, y_hat, A1, X)
        Loss.append(-y*np.log(y_hat) - (1-y)*np.log(1-y_hat))
    print('Epoch -', i+1, 'Avg Loss -', np.array(Loss).mean())   # avg loss for the epoch
parameters
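A hypothetical check after training (input values made up): forward-pass a new
student and read the output as a placement probability.
Code :
X_new = np.array([[7.5], [8.0]])             # cgpa, profile_score as a (2,1) column
prob, _ = L_layer_forward(X_new, parameters)
print('placement probability:', prob[0][0])  # threshold at 0.5 for a hard label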
Memorization :