20102A0071 DL Experiment1

Department of Computer Engineering                                Exp. No. 1

Semester: B.E. Semester VIII – Computer Engineering
Subject: Deep Learning
Subject Professor In-charge: Prof. Kavita Shirsat
Assisting Teachers: Prof. Kavita Shirsat
Laboratory: M-301B

Student Name: Mark John
Roll Number: 20102A0071
Grade and Subject Teacher's Signature:

Experiment Number: 1
Experiment Title: Implement the McCulloch-Pitts algorithm to simulate any logic gate.
Resources / Apparatus Required: Hardware: computer system. Software: Python.
Description
The McCulloch-Pitts model of a neuron is a simple model consisting of some number (n) of binary inputs, each with a weight associated with it. An input is known as an 'inhibitory input' if its associated weight is negative, and as an 'excitatory input' if its associated weight is positive. As the inputs are binary, each can take either of the two values, 0 or 1.

Then we have a summation junction that aggregates all the weighted inputs and passes the result to the activation function. The activation function is a threshold function that outputs 1 if the sum of the weighted inputs is equal to or above the threshold value, and 0 otherwise.
So let's say we have n inputs = {X1, X2, X3, ..., Xn}
and n corresponding weights = {W1, W2, W3, ..., Wn}.
The summation of weighted inputs is then X.W = X1.W1 + X2.W2 + X3.W3 + ... + Xn.Wn

If X.W ≥ θ (threshold value):
    Output = 1
Else:
    Output = 0
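As a quick illustration, this rule can be written directly in Python. The following is a minimal sketch of our own; the function name mp_neuron is illustrative and not part of the prescribed program:

import numpy as np

def mp_neuron(x, w, theta):
    # McCulloch-Pitts neuron: weighted sum followed by a hard threshold
    total = np.dot(x, w)  # X.W = X1.W1 + X2.W2 + ... + Xn.Wn
    return 1 if total >= theta else 0

# Two excitatory inputs with unit weights and threshold 2 behave like an AND gate
print(mp_neuron([1, 1], [1, 1], 2))  # prints 1
print(mp_neuron([1, 0], [1, 1], 2))  # prints 0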

OR Function
We know that the thresholding parameter for the OR function is 1, i.e. θ = 1. The possible combinations of inputs are (0,0), (0,1), (1,0), and (1,1). Considering the OR function's aggregation equation, i.e. x_1 + x_2 ≥ 1, let us plot the graph.

The graph shows that all inputs lying ON or ABOVE the line x_1 + x_2 = 1 give output 1 (positive) when passed through the OR-function M-P neuron, and all inputs lying BELOW that line give output 0 (negative). Therefore, the McCulloch-Pitts model has made a linear decision boundary that splits the inputs into two classes, positive and negative.

AND Function
Similar to the OR function, we can plot the graph for the AND function, whose aggregation equation is x_1 + x_2 ≥ 2 (decision boundary x_1 + x_2 = 2).

Here, only the point (1,1) lies ON or ABOVE the decision boundary, so it is the only input that gives output 1 when passed through the AND function.
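To confirm both truth tables numerically, here is a short sketch (plain Python, assuming the unit weights used above) that checks all four input pairs against each threshold:

# Verify OR (theta = 1) and AND (theta = 2) over all four input pairs
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
for name, theta in [('OR', 1), ('AND', 2)]:
    outputs = [1 if x1 + x2 >= theta else 0 for x1, x2 in inputs]
    print(name, outputs)
# Expected: OR [0, 1, 1, 1] and AND [0, 0, 0, 1]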

From these examples, we can understand that as the number of inputs increases, the number of dimensions plotted on the graph also increases: if we consider 3 inputs with the OR function, we plot the points in a three-dimensional (3D) space and draw a decision boundary in 3 dimensions, as in the sketch below.
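For example, a three-input OR gate follows the same rule with θ = 1. The following is a small sketch; the unit weights are an assumption carried over from the two-input case:

import itertools

theta = 1  # threshold for a 3-input OR gate
for x1, x2, x3 in itertools.product([0, 1], repeat=3):
    print((x1, x2, x3), '->', 1 if x1 + x2 + x3 >= theta else 0)
# Only (0, 0, 0) lies below the plane x1 + x2 + x3 = 1, so only it outputs 0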

Let's take a real-world example:

A bank wants to decide whether it can sanction a loan or not. There are two parameters on which to decide: salary and credit score. So there are four scenarios to assess:
1. High salary and good credit score
2. High salary and bad credit score
3. Low salary and good credit score
4. Low salary and bad credit score
Let X1 = 1 denote a high salary and X1 = 0 a low salary, and let X2 = 1 denote a good credit score and X2 = 0 a bad credit score.
Let the threshold value be 2.

X1    X2    X1 + X2    Loan approved
1     1     2          1
1     0     1          0
0     1     1          0
0     0     0          0
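The same threshold rule reproduces this table. The following is a small sketch of our own; the variable names are illustrative:

# Loan decision: X1 = high salary, X2 = good credit score, threshold = 2
theta = 2
for x1, x2 in [(1, 1), (1, 0), (0, 1), (0, 0)]:
    approved = 1 if x1 + x2 >= theta else 0
    print(f'X1={x1}, X2={x2} -> loan approved: {approved}')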
Program
import numpy as np

np.random.seed(seed=0)
I = np.random.choice([0, 1], 3)   # generate a random input vector I, sampling from {0, 1}
W = np.random.choice([-1, 1], 3)  # generate a random weight vector W, sampling from {-1, 1}
print(f'Input vector: {I}, Weight vector: {W}')

# Truth-table inputs for a two-input gate
input_table = np.array([
    [0, 0],  # both no
    [0, 1],  # one no, one yes
    [1, 0],  # one yes, one no
    [1, 1]   # both yes
])
print(f'input table:\n{input_table}')

weights = np.array([1, 1])  # unit (excitatory) weights for both inputs
print(f'weights: {weights}')

dot_products = input_table @ weights  # weighted sum X.W for each input row
print(f'Dot products: {dot_products}')

def linear_threshold_gate(dot: int, T: float) -> int:
    '''Returns the binary threshold output'''
    if dot >= T:
        return 1
    else:
        return 0

T = 2  # threshold for the AND gate
for i in range(4):
    activation = linear_threshold_gate(dot_products[i], T)
    print(f'Activation: {activation}')

T = 1  # threshold for the OR gate
for i in range(4):
    activation = linear_threshold_gate(dot_products[i], T)
    print(f'Activation: {activation}')
Output
Conclusion
In this experiment, we learned how to implement the AND and OR gates using the McCulloch-Pitts algorithm.
