Experiment 5
Aim: To Implement McCulloch and Pitts' Model for a Problem.
Theory:
McCulloch and Pitts proposed a computational model that resembles the biological
neuron. Their neurons were simplified models of biological neurons: conceptual
components from which circuits that perform computational tasks can be built. The
basic model of the artificial neuron is founded upon the functionality of the
biological neuron: an artificial neuron is a mathematical function that mimics the
behaviour of a biological neuron.
Neuron Model:
A neuron with a scalar input and no bias appears below. The figure shows a simple
artificial neural net with n input neurons (X1, X2, ..., Xn) and one output neuron (Y).
The interconnection weights are given by W1, W2, ..., Wn. The output is given by

Y = 1 if net_i = ∑j (Wij * Xj) ≥ T
Y = 0 if net_i = ∑j (Wij * Xj) < T

where 'i' represents the output neuron and 'j' represents an input neuron.
The scalar input X is transmitted through a connection that multiplies its strength by
the scalar weight W to form the product W * X, again a scalar. The weighted input
W * X is the only argument of the transfer function f , which produces the scalar
output Y. The neuron may have a scalar bias, b. You can view the bias as simply
being added to the product W * X. The bias is much like a weight, except that it has
a constant input of 1. The net input to the transfer function, n = W * X + b, is
again a scalar: the sum of the weighted input W * X and the bias b.
This sum is the argument of the transfer function f. Here f is a transfer function,
typically a step function or a sigmoid function, that takes the argument n and
produces the output Y. Note that W and b are both adjustable scalar parameters of
the neuron. The central idea of
neural networks is that such parameters can be adjusted so that the network exhibits some
desired or interesting behaviour. Thus, you can train the network to do a particular job by
adjusting the weight or bias parameters, or perhaps the network itself will adjust these
parameters to achieve some desired end. As previously noted, the bias b is an adjustable
(scalar) parameter of the neuron. It is not an input. However, the constant 1 that drives the
bias is an input and must be treated as such when you consider the linear dependence of input
vectors in Linear Filters.
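The behaviour described above, a weighted input plus a bias passed through a step transfer function, can be sketched as follows. The helper names step and neuron_output are illustrative assumptions, not part of the original listing:

```python
# Minimal sketch of a single artificial neuron: the net input is the
# weighted sum of the inputs plus the bias (a weight on a constant
# input of 1), passed through a step transfer function.

def step(n, threshold=0):
    # Step transfer function: fire (1) when the net input reaches T.
    return 1 if n >= threshold else 0

def neuron_output(weights, inputs, bias=0, threshold=0):
    # n = W * X + b, then Y = f(n).
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return step(net, threshold)

# With W = [1, 1], b = 0, T = 2, the neuron fires only when both
# inputs are 1, i.e. it realises the AND gate for binary inputs.
print(neuron_output([1, 1], [1, 1], bias=0, threshold=2))  # prints 1
print(neuron_output([1, 1], [1, 0], bias=0, threshold=2))  # prints 0
```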
Truth table for the NOT gate:

Input (X)  Output (Y)
    1          0
    0          1

The conditions to be satisfied are:
1. W * 1 < T      (so that X = 1 gives Y = 0)
2. W * 0 = 0 ≥ T  (so that X = 0 gives Y = 1)
Now select values of W and T such that the above conditions are satisfied. One
possible choice is T = 0, W = -1. Using these values, the NOT gate is realised by a
single McCulloch-Pitts neuron.
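The choice W = -1, T = 0 can be verified directly; the function name not_gate below is a hypothetical helper, not from the original listing:

```python
# NOT gate as a McCulloch-Pitts neuron with W = -1, T = 0.
# X = 1: net = -1 * 1 = -1 <  0  ->  Y = 0
# X = 0: net = -1 * 0 =  0 >= 0  ->  Y = 1
def not_gate(X, W=-1, T=0):
    net = W * X
    return 1 if net >= T else 0

for X in (1, 0):
    print("X = {} -> Y = {}".format(X, not_gate(X)))
```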
Conclusion: Thus we have studied and implemented McCulloch and Pitts' model for a
logic gate.
Source Code:
class MP_Neuron:
    '''
    McCulloch-Pitts neuron whose weights and threshold are found
    by brute-force search over small sets of candidate values.

    Example input matrix for AND gate
    | x1 | x2 | y |
    | -1 | -1 | 0 |
    | -1 | +1 | 0 |
    | +1 | -1 | 0 |
    | +1 | +1 | 1 |
    '''
    def __init__(self, input_matrix):
        self.input_matrix = input_matrix
        # Candidate values to search; this grid is large enough to
        # cover the separable gates below.
        self.possible_w1_vals = [-1, 1]
        self.possible_w2_vals = [-1, 1]
        self.possible_thresh_vals = [-2, -1, 0, 1, 2]

    def check_combination(self):
        # True if the current (w1, w2, threshold) reproduces every
        # row of the truth table.
        for x1, x2, y in self.input_matrix:
            net = self.w1 * x1 + self.w2 * x2
            output = 1 if net >= self.threshold else 0
            if output != y:
                return False
        return True

    def iterate_all_values(self):
        # Try every combination of weights and threshold.
        for w1 in self.possible_w1_vals:
            self.w1 = w1
            for w2 in self.possible_w2_vals:
                self.w2 = w2
                for threshold in self.possible_thresh_vals:
                    self.threshold = threshold
                    if self.check_combination():
                        return True
        return False


AND_Matrix = [
    [-1, -1, 0],
    [-1,  1, 0],
    [ 1, -1, 0],
    [ 1,  1, 1],
]
OR_Matrix = [
    [-1, -1, 0],
    [-1,  1, 1],
    [ 1, -1, 1],
    [ 1,  1, 1],
]
NAND_Matrix = [
    [-1, -1, 1],
    [-1,  1, 1],
    [ 1, -1, 1],
    [ 1,  1, 0],
]
XOR_Matrix = [
    [-1, -1, 0],
    [-1,  1, 1],
    [ 1, -1, 1],
    [ 1,  1, 0],
]


def neuron_calculate(mp):
    if mp.iterate_all_values():
        print("Weights are : {}, {}".format(mp.w1, mp.w2))
        print("Threshold is {}".format(mp.threshold))
    else:
        print("Not linearly separable.")
    print()
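As a quick sanity check, the same brute-force idea can be condensed into a self-contained sketch. The function name find_separator and the candidate value grids are assumptions for illustration, not part of the original listing:

```python
# Search small weight/threshold grids for a separating line that
# reproduces a gate's truth table; return None if none exists.
def find_separator(matrix, vals=(-1, 1), thresholds=(-2, -1, 0, 1, 2)):
    for w1 in vals:
        for w2 in vals:
            for t in thresholds:
                if all((1 if w1 * x1 + w2 * x2 >= t else 0) == y
                       for x1, x2, y in matrix):
                    return w1, w2, t
    return None  # not linearly separable

AND = [[-1, -1, 0], [-1, 1, 0], [1, -1, 0], [1, 1, 1]]
XOR = [[-1, -1, 0], [-1, 1, 1], [1, -1, 1], [1, 1, 0]]
print(find_separator(AND))  # a valid (w1, w2, T) triple
print(find_separator(XOR))  # None: XOR is not linearly separable
```

XOR returning None illustrates the well-known limitation of a single threshold neuron: it can only realise linearly separable functions.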