
FPGA IMPLEMENTATION OF ARTIFICIAL NEURAL NETWORKS

PROBLEM STATEMENT
The XOR (exclusive OR) problem is a fundamental challenge in artificial neural networks (ANNs). A simple
perceptron cannot solve XOR because the XOR function is not linearly separable. However, a multi-layer
perceptron (MLP) with a hidden layer can successfully model the XOR function.
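
For reference, the XOR truth table is:

A   B   A XOR B
0   0   0
0   1   1
1   0   1
1   1   0

The two input pairs that produce 1, (0, 1) and (1, 0), sit on opposite corners of the unit square, so no single straight line in the (A, B) plane can separate them from (0, 0) and (1, 1). This is why a single perceptron, which can only draw one such line, fails on XOR.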

GOALS OF THIS PROJECT

• Implement an artificial neural network (ANN) to solve the XOR problem.
• Describe the ANN in the hardware description language Verilog.
• Deploy the ANN on an FPGA to demonstrate real-time computation.

BLOCK DIAGRAM

Figure 1: Implementation of ANN for XOR Problem

In the above diagram, we can see:

• 2 input neurons
• 2 hidden neurons
• 1 output neuron
• An input layer, in which the inputs A and B and their weights are defined
• A hidden layer
• Lastly, an output layer
Figure 2: Model of a single Neuron

This contains the following blocks (a Verilog sketch of a single neuron is given after this list):

• Weight memory: stores the weight values for the neuron.
• Inputs: in our case A or B (for later layers, the outputs of the previous layer); a multiplexer routes them into the multiply stage.
• mul: each input is multiplied by its respective weight; the result is then combined with the bias and stored in a register to hold the value.
• comboAdd: adds up all the products from the mul blocks.
• Bias:
What Is It?
The bias is a constant value added to the weighted sum of inputs in a neuron. Think of it like an
“offset” that helps the neuron shift its decision boundary and learn more complex behaviors.
Where Does It Come From?
Typically, the bias is stored either as a fixed constant or in a small memory/register inside the
FPGA. During training (usually done offline in software), you learn a specific bias value for each
neuron. That value is then loaded into the FPGA design.
Why Do We Need It?
Without a bias, the neuron's decision boundary is forced to pass through the origin, so the output
depends only on the weighted sum of the inputs. The bias lets the neuron shift that boundary and
fire (activate) even when all weighted inputs are zero. This flexibility is crucial for learning
non-trivial functions like XOR.
• Bias Add:
What Is It?
The “Bias Add” block (or stage) is simply an adder (or summer) that adds the bias to the total sum
of weighted inputs.
Where Does It Fit in the Flow?
1. You multiply inputs by their respective weights (the “mul” stage).
2. You sum these products together (the “sum” or “comboAdd” stage).
3. Then you add the bias in the “Bias Add” stage.

Layman’s Analogy

If you imagine making a recipe, the weighted inputs are your ingredients measured out. The bias
is a “pinch of salt” you add at the end to finalize the taste. This final “salt” might make or break
how the dish (neuron output) turns out.

• Activation Function: after we sum and add the bias, we run the result through a non-linear
function (such as Sigmoid or ReLU).
o Sigmoid gives a more accurate result (up to 99%) but requires a lot of hardware.
o For a hardware-efficient solution we will use ReLU (the most widely used activation
nowadays), which gives a somewhat less accurate result (up to 92%).
• The activation function ensures the neuron can capture non-linear relationships—critical for
something like XOR.
• The output of this block is the final output of the neuron.
• Interface & Control: It coordinates when to read weights from memory, when to multiply, when
to add bias, and when to pass data to the activation function. In an FPGA design, this can be logic
that sequences operations in the right order.
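
To make the blocks above concrete, the following is a minimal Verilog sketch of one such neuron: a multiply stage, a combined sum and bias add, and a ReLU activation. The port names, the signed 8-bit widths, and the purely combinational style are assumptions for illustration only; the project's actual modules may instead read weights from the weight memory and register intermediate values under the interface and control logic.

// Minimal sketch of a single neuron (assumed names and widths, not the project's actual code).
// Datapath: mul -> comboAdd -> Bias Add -> ReLU.
module neuron #(
    parameter WIDTH = 8                           // assumed signed fixed-point width
) (
    input  wire signed [WIDTH-1:0] in_a,          // first input (A, or a hidden-layer output)
    input  wire signed [WIDTH-1:0] in_b,          // second input (B, or a hidden-layer output)
    input  wire signed [WIDTH-1:0] w_a,           // weight for in_a (from weight memory)
    input  wire signed [WIDTH-1:0] w_b,           // weight for in_b (from weight memory)
    input  wire signed [WIDTH-1:0] bias,          // bias for this neuron
    output wire signed [2*WIDTH:0] out            // neuron output after ReLU
);
    // mul stage: multiply each input by its respective weight
    wire signed [2*WIDTH-1:0] prod_a = in_a * w_a;
    wire signed [2*WIDTH-1:0] prod_b = in_b * w_b;

    // comboAdd + Bias Add stages: sum the products, then add the bias
    wire signed [2*WIDTH:0] biased_sum = prod_a + prod_b + bias;

    // Activation function: ReLU (negative sums are clamped to zero)
    assign out = biased_sum[2*WIDTH] ? {(2*WIDTH+1){1'b0}} : biased_sum;
endmodule

A pipelined version would register prod_a, prod_b, and biased_sum (the small “d” registers in the figure) so the design meets timing on the FPGA.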

Figure 3: Mathematical expression of a single neuron

SUMMARY

• The explanation above describes a single neuron in theory.
• For our project we will use two such models for the inputs, two such models in the hidden layer,
and one such model as the output (a structural Verilog sketch of this arrangement follows this list).
• For the XOR network (2 inputs, 2 hidden neurons, 1 output neuron), you will have multiple
copies of this sequence. Each neuron will:
• Take its inputs (either the raw inputs in1, in2 or outputs from a previous layer).
• Fetch its specific weights.
• Perform the multiply-and-accumulate (sum).
• Add its unique bias.
• Possibly store the result in a register (the small box “d”).
• Pass the data through the activation function to get the neuron’s output.
• The registers (“d”) help ensure the design meets timing requirements and organizes data flow
within the FPGA. The “Bias Add” ensures the bias is correctly added.
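
The structural sketch below wires three instances of the neuron module sketched earlier into the 2-2-1 topology. The weight and bias constants are placeholders (they happen to match the hand-picked values used in the mathematical section below), and the parameter and port names are assumptions rather than the project's final code.

// Structural sketch of the 2-2-1 XOR network built from three neuron instances.
module xor_mlp #(
    parameter WIDTH = 8,
    // Placeholder weights/biases (assumed values; normally loaded from offline training)
    parameter signed [WIDTH-1:0] W_H1_A = 1, W_H1_B = 1, B_H1 = 0,
    parameter signed [WIDTH-1:0] W_H2_A = 1, W_H2_B = 1, B_H2 = -1,
    parameter signed [2*WIDTH:0] W_O_1  = 1, W_O_2  = -2, B_O  = 0
) (
    input  wire signed [WIDTH-1:0]       a,   // network input A
    input  wire signed [WIDTH-1:0]       b,   // network input B
    output wire signed [2*(2*WIDTH+1):0] y    // network output (width grows layer by layer)
);
    // Hidden layer: two neurons, each fed by both inputs
    wire signed [2*WIDTH:0] h1, h2;

    neuron #(.WIDTH(WIDTH)) hidden1 (
        .in_a(a), .in_b(b), .w_a(W_H1_A), .w_b(W_H1_B), .bias(B_H1), .out(h1)
    );
    neuron #(.WIDTH(WIDTH)) hidden2 (
        .in_a(a), .in_b(b), .w_a(W_H2_A), .w_b(W_H2_B), .bias(B_H2), .out(h2)
    );

    // Output layer: one neuron fed by the two hidden-layer outputs
    neuron #(.WIDTH(2*WIDTH+1)) output_neuron (
        .in_a(h1), .in_b(h2), .w_a(W_O_1), .w_b(W_O_2), .bias(B_O), .out(y)
    );
endmodule

Note that the output width grows through the layers because each neuron widens its result; a real design would typically truncate or rescale the hidden-layer outputs back to WIDTH bits.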

SUMMARY

• One block = One neuron.
• For the XOR problem, you need 2 + 2 + 1 = 5 neurons in total (though the first “2 input neurons” can
be considered as direct inputs to the 2 hidden neurons, depending on your design approach).
• Each neuron does:
1. Read weights from memory
2. Multiply inputs by weights
3. Sum results + bias
4. Apply activation function
5. Output the final value

This design can then be coded in Verilog or VHDL, loaded onto an FPGA, and tested to confirm it
produces the correct XOR output.
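
Assuming the xor_mlp sketch above (with its default widths), a minimal simulation testbench for that confirmation step could look like the following; the module and signal names are illustrative.

// Simulation-only testbench sketch: applies all four input combinations
// and prints the network output next to the expected XOR value.
`timescale 1ns/1ps
module xor_mlp_tb;
    reg  signed [7:0]  a, b;
    wire signed [34:0] y;          // matches xor_mlp's output width for WIDTH = 8

    xor_mlp dut (.a(a), .b(b), .y(y));

    integer i;
    initial begin
        for (i = 0; i < 4; i = i + 1) begin
            a = i[1];              // drive A with 0 or 1
            b = i[0];              // drive B with 0 or 1
            #10;                   // wait for the combinational logic to settle
            $display("A=%0d B=%0d -> Y=%0d (expected %0d)", a, b, y, i[1] ^ i[0]);
        end
        $finish;
    end
endmodule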

MATHEMATICAL EXPLANATION

Now that we understand the block diagram of our project, our next task is to set the bias, weight, and input
values and mathematically derive this module for the XOR problem.
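
As one example of such an assignment (an assumption for illustration, not necessarily the values the project will finally use), a well-known hand-picked solution with ReLU activations is:

• Hidden neuron 1: weights (1, 1), bias 0, so H1 = ReLU(A + B)
• Hidden neuron 2: weights (1, 1), bias −1, so H2 = ReLU(A + B − 1)
• Output neuron: weights (1, −2), bias 0, so Y = H1 − 2·H2

Checking all four input combinations:

• A = 0, B = 0: H1 = 0, H2 = 0, Y = 0
• A = 0, B = 1: H1 = 1, H2 = 0, Y = 1
• A = 1, B = 0: H1 = 1, H2 = 0, Y = 1
• A = 1, B = 1: H1 = 2, H2 = 1, Y = 2 − 2 = 0

This matches the XOR truth table, so it is one valid set of weights and biases from which the module can be derived.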
