Implementing Logic Gates using Neural Networks (Part 2)

AND, NAND and XOR gate

Vedant Kumar · Published in Towards Data Science · Jul 21, 2020

Hello everyone!! Before starting with part 2 of implementing logic gates using neural networks, you may want to go through part 1 first.

From part 1, we had figured out that we have two input neurons, or an x vector, having the values x1 and x2, with 1 being the bias value. The input values x1, x2, and 1 are multiplied by their respective weights W1, W2, and W0. The weighted values are then fed to the summation neuron, which gives the summed value

Z = W0 + W1*x1 + W2*x2

[Figure: the artificial neuron — inputs x1 and x2 plus a bias of 1, weights W1, W2, and W0, a summation stage, and a sigmoid activation. Image by Author]

Now, this value is fed to a neuron which has a non-linear function (sigmoid in our case) for scaling the output to a desirable range. The scaled output of the sigmoid is 0 if the sigmoid output is less than 0.5 and 1 if it is greater than 0.5. Our main aim is to find the values of the weights, or the weight vector, that will enable the system to act as a particular gate.

Implementing AND gate
The AND gate operation is a simple multiplication of the two inputs. If either input is 0, the output is 0; in order to achieve 1 as the output, both inputs should be 1. The truth table below conveys the same information.

[Figure: truth table of the AND gate and the values of the weights that make the system act as an AND gate (W0 = -3, W1 = 2, W2 = 2) or a NAND gate (W0 = 3, W1 = -2, W2 = -2). Image by Author]

As we have 4 choices of input, the weights must be such that the condition of the AND gate is satisfied for all four input points.
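Before checking those four points one by one, here is a minimal Python sketch of the neuron described above; the function name and structure are mine, for illustration only:

    import math

    def neuron(w0, w1, w2, x1, x2):
        # Summation neuron: Z = W0*1 + W1*x1 + W2*x2
        z = w0 + w1 * x1 + w2 * x2
        # Sigmoid squashes Z into (0, 1)
        y = 1 / (1 + math.exp(-z))
        # Threshold at 0.5 (equivalently: 1 if z > 0 else 0)
        return 1 if y > 0.5 else 0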

(0,0) case
Consider a situation in which the input, or the x vector, is (0,0). The value of Z in that case will be nothing but W0. Now, W0 will have to be less than 0 so that the sigmoid of Z is less than 0.5, the output ŷ is 0, and the definition of the AND gate is satisfied. If W0 is above 0, then the value after Z has passed through the sigmoid function and the threshold will be 1, which violates the AND gate condition. Hence, we can say with conviction that W0 has to be a negative value. But what value of W0? Keep reading…

(0,1) case
Now, consider a situation in which the input, or the x vector, is (0,1). Here the value of Z will be W0 + 0 + W2*1. This, being the input to the sigmoid function, should have a value less than 0 so that the output is less than 0.5 and is classified as 0. Hence, W0 + W2 < 0. If we take the value of W0 as -3 (remember, the value of W0 has to be negative) and the value of W2 as +2, the result comes out to be -3 + 2 = -1, which satisfies the above inequality and is consistent with the condition of the AND gate.

(1,0) case
Similarly, for the (1,0) case, the value of W0 will be -3 and that of W1 can be +2, giving Z = -3 + 2 = -1 < 0 as required. Remember, you can take any values of the weights W0, W1, and W2 as long as the inequalities are preserved.

(1,1) case
In this case, the input, or the x vector, is (1,1). The value of Z will be nothing but W0 + W1 + W2. Now, the overall sum has to be greater than 0 so that the output is 1 and the definition of the AND gate is satisfied. From the previous scenarios, we had found the values of W0, W1, and W2 to be -3, 2, and 2 respectively. Placing these values in the Z equation yields Z = -3 + 2 + 2 = 1, which is greater than 0 and will therefore be classified as 1 after passing through the sigmoid function and the threshold.
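Putting the four cases together, the derived weight vector [-3, 2, 2] can be checked with the neuron sketch from earlier (again, an illustration rather than code from the article):

    # Verify that (W0, W1, W2) = (-3, 2, 2) acts as an AND gate
    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print((x1, x2), "->", neuron(-3, 2, 2, x1, x2))
    # Output: 0, 0, 0, 1 -- exactly the AND truth table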

A final note on AND and NAND implementation
The line separating the above four points will therefore be given by the equation W0 + W1*x1 + W2*x2 = 0, where W0 is -3 and both W1 and W2 are +2. The equation of the line of separation of the four points is therefore x1 + x2 = 3/2. The implementation of the NAND gate will, therefore, be similar, with just the weights changed to W0 equal to 3 and W1 and W2 equal to -2; negating every weight flips the sign of Z and hence inverts the output for every case.

Moving on to XOR gate
For the XOR gate, the truth table on the left side of the image below depicts that only when the two inputs are complements of each other will the output be 1. If the inputs are the same ((0,0) or (1,1)), then the output will be 0. The points, when plotted in the x-y plane on the right, give us the information that they are not linearly separable like in the case of the OR and AND gates (at least in two dimensions).

[Figure: XOR gate truth table and plotting of values on the x-y plane, Image by Author]

Two possible solutions
To solve the above problem of separability, two techniques can be employed: adding non-linear features (also known as the kernel trick) or adding extra layers (also known as a deep network).

XOR(x1, x2) can be thought of as NOR(NOR(x1, x2), AND(x1, x2)).
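Since this Boolean identity is what the whole construction rests on, a quick check in plain Python (my own illustration) confirms it:

    def NOR(a, b):
        return 1 - (a | b)

    def AND(a, b):
        return a & b

    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print((x1, x2), "->", NOR(NOR(x1, x2), AND(x1, x2)))
    # Output: 0, 1, 1, 0 -- the XOR truth table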

[Figure: the solution to implementing the XOR gate, Image by Author]

Here we can see that the number of layers has increased from 2 to 3, as we have added a layer where the AND and NOR operations are computed. The inputs remain the same, with an additional bias input of 1. The table on the right of the image displays the output for each of the 4 inputs. An interesting thing to notice here is that the total number of weights has increased to 9: a total of 6 weights from the input layer to the 2nd layer and a total of 3 weights from the 2nd layer to the output layer. The 2nd layer is also termed a hidden layer.

Weights of the XOR network
Talking about the weights of the overall network: from the above and the part 1 content, we have deduced the weights for the system to act as an AND gate and as a NOR gate. We will be using those weights for the implementation of the XOR gate. For layer 1, 3 of the total 6 weights would be the same as those of the NOR gate and the remaining 3 would be the same as those of the AND gate. Therefore, the weights for the input to the NOR gate would be [1, -2, -2], and those for the input to the AND gate would be [-3, 2, 2]. The weights from layer 2 to the final layer would again be the same as those of the NOR gate, [1, -2, -2].

[Figure: weights of the network for it to act as an XOR gate, Image by Author]
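Wiring the three neurons together with exactly these nine weights gives a sketch like the following (reusing the neuron helper from earlier; the wiring is the article's, the code itself is mine):

    def xor_net(x1, x2):
        # Layer 1 (hidden layer): 6 weights in total
        h_nor = neuron( 1, -2, -2, x1, x2)   # NOR neuron, weights [1, -2, -2]
        h_and = neuron(-3,  2,  2, x1, x2)   # AND neuron, weights [-3, 2, 2]
        # Layer 2 (output): 3 more weights, NOR again
        return neuron(1, -2, -2, h_nor, h_and)

    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print((x1, x2), "->", xor_net(x1, x2))
    # Output: 0, 1, 1, 0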
Universal approximation theorem
It states that any continuous function can be approximated to the desired accuracy by a neural network with a single hidden layer, provided the hidden layer has enough neurons.

Geometrical interpretation

[Figure: linear separability of the two classes in 3D, Image by Author]

With this, we can think of adding extra layers as adding extra dimensions. After visualizing in 3D, the X's and the O's now look separable. The red plane can now separate the two points, or classes. Such a plane is called a hyperplane. In conclusion, the above points are linearly separable in higher dimensions.
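To make this concrete, here is one way (my own illustration, not the article's) to realize the other route mentioned earlier, the kernel trick: adding the non-linear feature x3 = x1*x2 lifts the four points into 3D, where a single plane, and hence a single neuron, separates them:

    def xor_via_feature(x1, x2):
        x3 = x1 * x2                   # non-linear feature: the extra dimension
        z = -0.5 + x1 + x2 - 2 * x3    # hyperplane x1 + x2 - 2*x3 = 1/2
        return 1 if z > 0 else 0       # same as thresholding the sigmoid

    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print((x1, x2), "->", xor_via_feature(x1, x2))
    # Output: 0, 1, 1, 0 -- XOR from a single "neuron" in 3D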