My Research on the XOR Problem
PROBLEM STATEMENT
The XOR (exclusive OR) problem is a fundamental challenge in artificial neural networks (ANNs). A single-layer
perceptron cannot solve XOR because the XOR function is not linearly separable. However, a multi-layer
perceptron (MLP) with a hidden layer can successfully model the XOR function.
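As a quick sanity check of this claim, the sketch below (plain Python, written for this note as an illustration and not taken from the design itself) brute-forces a coarse grid of weights and biases for a single threshold neuron and finds no combination that reproduces the XOR truth table. The grid range and step size are arbitrary assumptions, so this demonstrates the point rather than proves it.

```python
# Brute-force check (illustrative): no single threshold neuron on this
# coarse weight/bias grid reproduces the XOR truth table.
XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def perceptron(a, b, w1, w2, bias):
    # Single threshold unit: outputs 1 when the weighted sum is positive.
    return 1 if (w1 * a + w2 * b + bias) > 0 else 0

grid = [x / 4 for x in range(-8, 9)]  # -2.0 to 2.0 in steps of 0.25 (assumed range)
solutions = [
    (w1, w2, bias)
    for w1 in grid for w2 in grid for bias in grid
    if all(perceptron(a, b, w1, w2, bias) == y for (a, b), y in XOR_TABLE.items())
]
print("single-neuron solutions found:", len(solutions))  # expected: 0
```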
BLOCK DIAGRAM
• 2 input neurons
• 2 hidden neurons
• 1 output neuron
• An input layer, in which the inputs A and B, along with their weights, are defined
• Then a hidden layer
• Lastly, an output layer (a short software sketch of this 2-2-1 structure follows this list)
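A rough behavioral model of this 2-2-1 structure is sketched below. The step activation and the specific weight and threshold values are illustrative assumptions chosen so the network computes XOR; they are not the values derived later in this report.

```python
# Behavioral sketch of the 2-2-1 block diagram: inputs A and B, two hidden
# neurons, one output neuron. Weights and thresholds are illustrative.
def step(x):
    return 1 if x > 0 else 0

def xor_mlp(a, b):
    h1 = step(1.0 * a + 1.0 * b - 0.5)      # hidden neuron 1: behaves like OR
    h2 = step(1.0 * a + 1.0 * b - 1.5)      # hidden neuron 2: behaves like AND
    return step(1.0 * h1 - 1.0 * h2 - 0.5)  # output: OR and not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))    # prints the XOR truth table
```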
Figure 2: Model of a single Neuron
Layman’s Analogy
Imagine following a recipe: the weighted inputs are your measured-out ingredients. The bias
is a “pinch of salt” you add at the end to finalize the taste. That final pinch can make or break
how the dish (the neuron’s output) turns out.
• Activation Function: After we sum the weighted inputs and add the bias, we run the result through a
function (such as sigmoid or ReLU); a small sketch of this choice follows this list.
o Sigmoid gives a more accurate result (up to 99%) but requires a lot of hardware.
o For a hardware-efficient solution, we will use ReLU (the most common choice nowadays), but this
gives a less accurate result, up to 92%.
• The activation function ensures the neuron can capture non-linear relationships, which is critical for
something like XOR.
• The output of this block is the final output of the neuron.
• Interface & Control: It coordinates when to read weights from memory, when to multiply, when
to add bias, and when to pass data to the activation function. In an FPGA design, this can be logic
that sequences operations in the right order.
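To make the sigmoid-versus-ReLU trade-off above concrete, the sketch below defines both activations and a small ReLU network that reproduces XOR exactly with hand-picked weights. The weights are illustrative assumptions, and the 99%/92% figures quoted above come from this report's own evaluation, not from this snippet.

```python
import math

def sigmoid(x):
    # Smooth, saturating activation; needs exp(), which is costly in hardware.
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    # Piecewise-linear activation; roughly a comparator and a mux in hardware.
    return x if x > 0 else 0.0

def xor_relu(a, b):
    # Hand-picked ReLU weights (illustrative) that realize XOR exactly.
    h1 = relu(a + b)          # counts how many inputs are active
    h2 = relu(a + b - 1.0)    # positive only when both inputs are 1
    return h1 - 2.0 * h2      # yields 0, 1, 1, 0 for the four input pairs

print("sigmoid(1.0) =", sigmoid(1.0), " relu(1.0) =", relu(1.0))
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_relu(a, b))
```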
SUMMARY
This design can then be coded in Verilog or VHDL, loaded onto an FPGA, and tested to confirm it
produces the correct XOR output.
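One way to organize that test is to keep a small software "golden model" whose outputs the FPGA results are compared against. The sketch below is a hypothetical example of such a check; the function names are placeholders and not part of the actual Verilog/VHDL test setup.

```python
# Hypothetical golden-model check: compare a device-under-test function
# (here a software stand-in) against the expected XOR truth table.
EXPECTED = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def matches_golden(dut):
    """Return True if dut(a, b) equals XOR for all four input pairs."""
    return all(dut(a, b) == y for (a, b), y in EXPECTED.items())

software_model = lambda a, b: a ^ b  # stand-in for the value read back from the FPGA
print("matches golden XOR table:", matches_golden(software_model))
```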
MATHEMATICAL EXPLANATION
Now that the block diagram of our project is understood, the next task is to set the bias values, weights, and
input values, and to mathematically derive this module for the XOR problem.
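As a generic starting point for that derivation (standard MLP notation is assumed here, not necessarily the exact symbols chosen later), the forward pass of the 2-2-1 network can be written as:

h1 = f(w11*A + w12*B + b1)
h2 = f(w21*A + w22*B + b2)
y  = f(v1*h1 + v2*h2 + b3)

where f is the chosen activation function (sigmoid or ReLU), the w and v terms are the hidden- and output-layer weights, and b1, b2, b3 are the biases.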