
Experiment 5

Aim: To implement the McCulloch-Pitts model for a problem.

Theory:

McCulloch and Pitts proposed a computational model that resembles the biological
neuron. Their model abstracts biological neural networks into conceptual
components for circuits that can perform computational tasks. The basic model of
the artificial neuron is founded upon the functionality of the biological neuron:
an artificial neuron is a mathematical function that mimics the behaviour of a
biological neuron.

Neuron Model:
A neuron with a scalar input and no bias appears below. The figure shows a simple
artificial neural net with n input neurons (X1, X2, ..., Xn) and one output neuron (Y).
The interconnection weights are given by W1, W2, ..., Wn.

Figure 1. Artificial Neuron Model

The output of the above model is given as,

Y = 1 if net_i = ∑j Wij * Xj ≥ T
Y = 0 if net_i = ∑j Wij * Xj < T

where 'i' represents the output neuron and 'j' represents the input neuron.
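For example, with two inputs X1 and X2, weights W1 = W2 = 1 and threshold T = 2, the input (1, 1) gives net = 1*1 + 1*1 = 2 ≥ T, so Y = 1, while (1, 0) gives net = 1 < T, so Y = 0. This single neuron therefore computes the AND of its two inputs.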
The scalar input X is transmitted through a connection that multiplies its strength by
the scalar weight W to form the product W * X, again a scalar. The weighted input
W * X is the only argument of the transfer function f, which produces the scalar
output Y. The neuron may also have a scalar bias b. You can view the bias as simply
being added to the product W * X. The bias is much like a weight, except that it has
a constant input of 1. The net input to the transfer function, again a scalar, is the sum
of the weighted input W * X and the bias b. This sum is the argument of the transfer
function f, typically a step function or a sigmoid function, which takes the net input
and produces the output Y. Note that W and b are both adjustable scalar parameters of
the neuron. The central idea of neural networks is that such parameters can be adjusted
so that the network exhibits some desired or interesting behaviour. Thus, you can train
the network to do a particular job by adjusting the weight or bias parameters, or perhaps
the network itself will adjust these parameters to achieve some desired end. As
previously noted, the bias b is an adjustable (scalar) parameter of the neuron. It is not
an input. However, the constant 1 that drives the bias is an input and must be treated as
such when you consider the linear dependence of input vectors in Linear Filters.
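As a small illustration (a minimal sketch using an assumed step transfer function, not part of the experiment's program), a single neuron with a bias can be written as:

# Minimal sketch of a single neuron with a bias b.
# The step transfer function f is an assumption for illustration.
def neuron(X, W, b):
    n = W * X + b              # net input: weighted input plus bias
    return 1 if n >= 0 else 0  # step transfer function f

# The bias behaves like a weight on a constant input of 1,
# so W * X + b is the same as W * X + b * 1.
print(neuron(0.5, W=2.0, b=-1.0))  # net n = 0.0, so the neuron fires (prints 1)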

Simulation of a NOT gate using the McCulloch & Pitts model


The truth table of the NOT gate is as follows:

Input (X)   Output (Y)
    1           0
    0           1

Let the weight be W and the threshold be T.

For the first row (input X = 1, output Y = 0), the net value is W * X = W * 1 = W.
According to the McCulloch & Pitts model, if the output is 0 the net value must be
less than the threshold, so W < T. For the second row (input X = 0, output Y = 1),
the net value is W * X = W * 0 = 0. If the output is 1, the net value must be greater
than or equal to the threshold, so 0 ≥ T.
We now have two conditions:
1. W < T
2. 0 ≥ T
Now select values of W and T such that both conditions are satisfied. One possible
choice is T = 0, W = -1. Using these values, the NOT gate is represented by a single
neuron with weight -1 and threshold 0.
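As a quick check (a minimal sketch, separate from the full program below), these values can be verified against the truth table:

# Verify that W = -1, T = 0 reproduces the NOT gate
W, T = -1, 0
for X, expected in [(1, 0), (0, 1)]:
    net = W * X
    Y = 1 if net >= T else 0
    print("X={} -> Y={} (expected {})".format(X, Y, expected))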

Conclusion: Thus, we have studied and implemented the McCulloch-Pitts model for a
logic gate.
Source Code:

class MP_Neuron:

    # firing threshold for the neuron
    threshold = 0

    # weights for the neuron
    w1 = 0
    w2 = 0

    # candidate values to search over
    possible_w1_vals = [-1, 1]
    possible_w2_vals = [-1, 1]
    possible_thresh_vals = [-2, -1, 0, 1, 2]

    def __init__(self, input_matrix):
        '''
        Example input matrix for AND gate
        | x1 | x2 | y |
        | -1 | -1 | 0 |
        | -1 | +1 | 0 |
        | +1 | -1 | 0 |
        | +1 | +1 | 1 |
        '''
        self.input_matrix = input_matrix

    def iterate_all_values(self):
        # exhaustively try every combination of weights and threshold
        for w1 in self.possible_w1_vals:
            self.w1 = w1
            for w2 in self.possible_w2_vals:
                self.w2 = w2
                for threshold in self.possible_thresh_vals:
                    self.threshold = threshold
                    if self.check_combination():
                        return True
        return False

    def check_combination(self):
        # the current parameters are valid only if every row of the
        # truth table is reproduced
        valid = True
        for (x1, x2, y) in self.input_matrix:
            if not self.compare_target(x1, x2, y):
                valid = False
        return valid

    def compare_target(self, x1, x2, target):
        return self.neuron_activate(x1, x2) == target

    def neuron_activate(self, x1, x2):
        # net input: weighted sum of the inputs
        output = self.w1 * x1 + self.w2 * x2
        # step transfer function: fire if the net input reaches the threshold
        if output >= self.threshold:
            return 1
        else:
            return 0


if __name__ == "__main__":

    AND_Matrix = [
        [-1, -1, 0],
        [-1,  1, 0],
        [ 1, -1, 0],
        [ 1,  1, 1],
    ]

    OR_Matrix = [
        [-1, -1, 0],
        [-1,  1, 1],
        [ 1, -1, 1],
        [ 1,  1, 1],
    ]

    NAND_Matrix = [
        [-1, -1, 1],
        [-1,  1, 1],
        [ 1, -1, 1],
        [ 1,  1, 0],
    ]

    XOR_Matrix = [
        [-1, -1, 0],
        [-1,  1, 1],
        [ 1, -1, 1],
        [ 1,  1, 0],
    ]

    def neuron_calculate(mp):
        if mp.iterate_all_values():
            print("Weights are : {}, {}".format(mp.w1, mp.w2))
            print("Threshold is {}".format(mp.threshold))
        else:
            print("Not linearly separable.")
        print()

    print("++ AND Gate ++")
    mp_AND = MP_Neuron(AND_Matrix)
    neuron_calculate(mp_AND)

    print("++ OR Gate ++")
    mp_OR = MP_Neuron(OR_Matrix)
    neuron_calculate(mp_OR)

    print("++ NAND Gate ++")
    mp_NAND = MP_Neuron(NAND_Matrix)
    neuron_calculate(mp_NAND)

    print("++ XOR Gate ++")
    mp_XOR = MP_Neuron(XOR_Matrix)
    neuron_calculate(mp_XOR)
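Note that the program performs an exhaustive search over 2 × 2 × 5 = 20 weight/threshold combinations per gate. For AND, OR and NAND a satisfying combination exists, so the search prints the weights and threshold it finds; for XOR no single-neuron combination can reproduce all four rows, since XOR is not linearly separable, and the program reports this.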
Output:
