Game NN
Objective: Help players understand how neural networks work through an interactive maze
game where they simulate the function of neurons, layers, and the learning process.
Materials Needed:
1. Paper
2. Pencils or pens
3. Markers (optional for decoration)
4. A ruler (optional for drawing neat lines)
Output Layer:
Objective: Determine the output based on the hidden layer neuron results.
The output layer takes the activated values from the hidden neurons. Players will
calculate the output by adding up the values that come from the hidden neurons and
checking if the result meets a "target."
Example Output:
If all neurons fired, sum their outputs and determine if the target (e.g., predicting if a
person will buy a product) is met.
o If output exceeds a threshold (e.g., 50), the player concludes "Yes, they will
buy!".
o If not, "No, they won't buy."
Example:
Round 1:
o Initial weights might be set randomly, and the output is incorrect.
o The player adjusts the weights (e.g., reducing a weight by 1 for Input 3) and tries
again.
Round 2:
o The output may now be closer to the correct answer, showing the network
"learning" from the previous round.
Game Flow:
1. Round 1:
o Players start by selecting input values (age, experience, etc.).
o They pass the inputs through the network (calculating hidden layer activation,
then output).
o The player compares the output to the expected result (for example, "Will the
person buy the product?").
o If correct, the game moves to the next round; if incorrect, adjust weights and try
again.
2. Round 2+:
o In subsequent rounds, players continue adjusting their weights and improving
their results.
o The goal is to "train" the network to predict the correct output consistently.
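The game flow above can be sketched as a short loop: run the forward pass, check the answer, and adjust the weights when it is wrong. The numbers below are taken from the first worked example later in this document; the function and variable names are illustrative, not part of the game's rules.

```python
# Sketch of the game loop: forward pass, compare to the expected answer,
# nudge the hidden-to-output weights, and try again.

def step(total, threshold):
    """Step activation: fire (1) if the weighted sum exceeds the threshold."""
    return 1 if total > threshold else 0

hidden = [1, 1, 1]            # assume all three hidden neurons fired (Round 1)
weights = [2, 1, 1]           # hidden -> output weights
expected = 1                  # "Yes, they will buy"

for round_no in range(1, 10):
    output = step(sum(h * w for h, w in zip(hidden, weights)), 5)
    if output == expected:
        break                 # the network now answers correctly
    weights = [w + 0.5 for w in weights]   # adjust the weights and retry

print(round_no, output)       # converges after one adjustment: prints 2 1
```

One adjustment is enough here because the first sum (4) is only 1.5 below the threshold of 5.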
How It Works:
Neurons: Each node in the input, hidden, and output layers represents a simple neuron.
Weights: The connections between neurons have weights that influence how much input
affects the output.
Activation: Neurons are activated based on a threshold (e.g., using a step function) to
decide whether to pass information to the next layer.
Learning: Players adjust weights based on feedback, simulating how a neural network
learns through backpropagation and gradient descent.
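The four ideas above boil down to a few lines of code. This is a minimal sketch, assuming the step-function threshold of 30 used in the worked examples; the function names are my own.

```python
# One "paper game" neuron: weighted sum of inputs, then a step activation.

def step(total, threshold):
    """Step activation: fire (1) if the weighted sum exceeds the threshold."""
    return 1 if total > threshold else 0

def neuron(inputs, weights, threshold):
    """Multiply each input by its weight, sum the results, then apply the step."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return step(total, threshold)

# Hidden Neuron 1 from the first example: inputs (4, 7, 9), weights (2, 1, 3)
print(neuron([4, 7, 9], [2, 1, 3], threshold=30))  # sum = 42 > 30, fires: 1
```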
Conclusion:
At the end of the game, players should have a basic understanding of how a neural network
works, how inputs are processed through layers, and how learning occurs through adjusting
weights. This paper game simplifies the concepts behind neural networks and makes learning fun
and interactive!
Game Setup:
Input Layer:
o Neuron 1: Age
o Neuron 2: Experience
o Neuron 3: a third input (unnamed in this example)
Inputs:
Input 1 (Age) = 4
Input 2 (Experience) = 7
Input 3 = 9
Each hidden neuron receives the inputs, multiplies by their respective weights, and then sums
the results.
Hidden Neuron 1:
Sum = (4 * 2) + (7 * 1) + (9 * 3) = 8 + 7 + 27 = 42
Apply activation function (Step Function, output 1 if sum > 30, otherwise 0):
o Neuron 1 fires because 42 > 30. Output = 1.
Hidden Neuron 2:
Sum = (4 * 3) + (7 * 2) + (9 * 1) = 12 + 14 + 9 = 35
o Neuron 2 fires because 35 > 30. Output = 1.
Hidden Neuron 3:
Sum = (4 * 1) + (7 * 3) + (9 * 2) = 4 + 21 + 18 = 43
o Neuron 3 fires because 43 > 30. Output = 1.
Now, we calculate the output by combining the results of the hidden neurons. Let’s assume the
weights from the hidden layer to the output layer are as follows:
o Weight from Hidden Neuron 1 = 2
o Weight from Hidden Neuron 2 = 1
o Weight from Hidden Neuron 3 = 1
Sum to Output = (Hidden Neuron 1 * Weight) + (Hidden Neuron 2 * Weight) + (Hidden Neuron 3 * Weight)
Sum = (1 * 2) + (1 * 1) + (1 * 1) = 2 + 1 + 1 = 4
Activation of Output Neuron:
o Since 4 is less than the threshold of 5, the output neuron does not fire.
o Final Output = 0 (this means the prediction is "No", e.g., the person will not buy
the product).
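The forward pass just described can be checked with a short script. This is a sketch using the input values and weights from the sums above; the variable names are my own.

```python
# Full forward pass for the first worked example.

def step(total, threshold):
    """Step activation: fire (1) if the weighted sum exceeds the threshold."""
    return 1 if total > threshold else 0

inputs = [4, 7, 9]                      # Age, Experience, third input
hidden_weights = [[2, 1, 3],            # Hidden Neuron 1
                  [3, 2, 1],            # Hidden Neuron 2
                  [1, 3, 2]]            # Hidden Neuron 3
output_weights = [2, 1, 1]              # hidden -> output connections

# Hidden layer: weighted sum, then step activation with threshold 30
hidden = [step(sum(x * w for x, w in zip(inputs, ws)), 30)
          for ws in hidden_weights]     # sums 42, 35, 43 -> all fire: [1, 1, 1]

# Output layer: weighted sum of hidden activations, threshold 5
out_sum = sum(h * w for h, w in zip(hidden, output_weights))  # 2 + 1 + 1 = 4
print(step(out_sum, 5))                 # 4 < 5 -> 0 ("No, they won't buy")
```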
Let’s assume the expected output was 1 (i.e., "Yes, they will buy the product"). Since the output
was 0, we have an error. This is where the learning happens:
Adjust the weights based on the error. In this simplified game, you can adjust weights
slightly by adding or subtracting a small number to the weights of the neurons.
Let’s say we adjust the weights by adding 0.5 to each weight in the connection that contributed
to the output. This is a simple form of gradient descent (in a real network we would apply a
learning rate and compute proper gradients, but this works for our paper game).
Sum to Output = (Hidden Neuron 1 * Updated Weight) + (Hidden Neuron 2 * Updated Weight) + (Hidden Neuron 3 * Updated Weight)
Sum = (1 * 2.5) + (1 * 1.5) + (1 * 1.5) = 2.5 + 1.5 + 1.5 = 5.5
Output Activation:
Since 5.5 is greater than the threshold of 5, the output neuron fires.
Final Output = 1 (Prediction: "Yes, the person will buy the product").
Now the network has learned and provided the correct output!
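The weight adjustment can be verified the same way. A sketch, assuming (as above) that all three hidden neurons fired and that 0.5 is added to each hidden-to-output weight:

```python
# Adjust the hidden -> output weights after the wrong "No" answer, then rerun.

def step(total, threshold):
    """Step activation: fire (1) if the weighted sum exceeds the threshold."""
    return 1 if total > threshold else 0

hidden = [1, 1, 1]                # all three hidden neurons fired
weights = [2, 1, 1]               # original hidden -> output weights

# First try: 2 + 1 + 1 = 4, below the threshold of 5 -> wrong answer (0)
assert step(sum(h * w for h, w in zip(hidden, weights)), 5) == 0

# Adjust: add 0.5 to every contributing weight (the game's crude stand-in
# for gradient descent)
weights = [w + 0.5 for w in weights]                    # [2.5, 1.5, 1.5]
new_sum = sum(h * w for h, w in zip(hidden, weights))   # 5.5
print(step(new_sum, 5))           # 5.5 > 5 -> 1 ("Yes, they will buy")
```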
1. Neurons (represented by the circles) process inputs and "fire" if the sum exceeds a
certain threshold.
2. Weights influence how much an input affects the output. Adjusting weights allows the
network to "learn."
3. The game uses simple activation functions (like step functions) to simulate how neurons
activate and send signals.
4. Learning occurs by adjusting weights based on errors between predicted and expected
outputs (similar to how neural networks learn through backpropagation in real models).
This game provides a fun and interactive way to understand the basics of neural networks and
machine learning!
Let's walk through another example of the "Neural Network Maze" game, this time with
different inputs and weights, to give more practice with the process.
New Example:
Game Setup:
Input Layer:
o Neuron 1: Temperature
o Neuron 2: Humidity
o Neuron 3: Wind Speed
Inputs:
Input 1 (Temperature) = 5
Input 2 (Humidity) = 8
Input 3 (Wind Speed) = 3
Hidden Neuron 1:
Sum = (5 * 3) + (8 * 2) + (3 * 1) = 15 + 16 + 3 = 34
Apply activation function (Step Function, output 1 if sum > 30, otherwise 0):
o Neuron 1 fires because 34 > 30. Output = 1.
Hidden Neuron 2:
Sum = (5 * 2) + (8 * 1) + (3 * 4) = 10 + 8 + 12 = 30
o Neuron 2 fires because the sum is equal to 30 (we can treat equal as firing).
Output = 1.
Hidden Neuron 3:
Sum = (5 * 1) + (8 * 3) + (3 * 2) = 5 + 24 + 6 = 35
o Neuron 3 fires because 35 > 30. Output = 1.
The output neuron takes the outputs of the hidden layer neurons and applies weights to
generate a final prediction.
Let’s assume the weights from the hidden layer neurons to the output neuron are as follows:
o Weight from Hidden Neuron 1 = 1
o Weight from Hidden Neuron 2 = 2
o Weight from Hidden Neuron 3 = 3
Sum = (1 * 1) + (1 * 2) + (1 * 3) = 1 + 2 + 3 = 6
o Since 6 is greater than the threshold of 5, the output neuron fires.
o Final Output = 1 (Prediction: "Yes, the weather will be suitable for outdoor activities").
Let's assume the expected output was 1 (i.e., "Yes, the weather is suitable"). The prediction was
correct, so we don't need to adjust the weights this time. The network correctly predicted that
the weather is suitable.
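This second example can also be replayed in code. Note one assumption: the game treats a hidden sum equal to 30 as firing (Neuron 2's tie rule), so the hidden activation below uses >= while the output activation stays strictly greater-than.

```python
# Forward pass for the weather example (Temperature, Humidity, Wind Speed).

def hidden_step(total):
    return 1 if total >= 30 else 0     # "equal counts as firing"

def output_step(total):
    return 1 if total > 5 else 0       # output threshold is strict

inputs = [5, 8, 3]                     # Temperature, Humidity, Wind Speed
hidden_weights = [[3, 2, 1],           # sums to 34
                  [2, 1, 4],           # sums to 30 (fires on the tie rule)
                  [1, 3, 2]]           # sums to 35
output_weights = [1, 2, 3]             # hidden -> output connections

hidden = [hidden_step(sum(x * w for x, w in zip(inputs, ws)))
          for ws in hidden_weights]    # [1, 1, 1]
out_sum = sum(h * w for h, w in zip(hidden, output_weights))  # 1 + 2 + 3 = 6
print(output_step(out_sum))            # 6 > 5 -> 1 ("Yes, suitable weather")
```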
Key Concepts:
1. Neurons: Represented by the circles, these neurons process inputs and fire based on the
sum of weighted inputs.
2. Weights: The strength of connections between neurons. Changing the weights adjusts
the behavior of the network.
3. Activation Function: Determines if a neuron fires or not. In this game, we used a simple
step function where the neuron fires if the sum of inputs exceeds a threshold.
4. Learning: In this round, the output was correct, so no learning (weight adjustment) was
needed. In a real-world neural network, incorrect predictions would lead to weight
adjustments, but here we demonstrated the process with correct output.
This example shows how the inputs (temperature, humidity, wind speed) are processed through
the network and result in a prediction ("Yes, the weather will be suitable for outdoor activities").
With the feedback loop, the network would learn and adjust weights if needed for better
predictions in the future!
In the previous example, the threshold for the output neuron was set to 5. The network fires
the output neuron if the sum of the hidden layer’s outputs, after being weighted, exceeds this
threshold.
Now, let’s assume that in a different round, the sum to the output layer is below the threshold.
We'll see what happens in this case.
Let’s use the same setup, but this time the weights for the connections from the hidden layer
neurons to the output layer will be adjusted in such a way that the final sum to the output layer
will be below 5.
Setup:
o Temperature = 5
o Humidity = 8
o Wind Speed = 3
1. Hidden Neuron 1:
o Sum = (5 * 3) + (8 * 2) + (3 * 1) = 15 + 16 + 3 = 34
2. Hidden Neuron 2:
o Sum = (5 * 2) + (8 * 1) + (3 * 4) = 10 + 8 + 12 = 30
3. Hidden Neuron 3:
o Sum = (5 * 1) + (8 * 3) + (3 * 2) = 5 + 24 + 6 = 35
All three hidden neurons fire (each sum is at least 30), so each passes 1 to the output layer.
Now, we calculate the sum to the output neuron using these new weights:
Sum to Output = (Hidden Neuron 1 * Weight) + (Hidden Neuron 2 * Weight) + (Hidden Neuron 3 * Weight)
With the adjusted weights, the sum works out to 2.
Final Output:
Final Output = 0 (Prediction: "No, the weather is not suitable for outdoor activities.")
In this case, the sum to the output neuron was 2, which is below the threshold of 5. Since the
sum did not exceed the threshold, the output neuron did not fire, and the final prediction was
"No" — indicating that the weather is not suitable for outdoor activities.
Key Takeaways:
o When the sum to the output neuron is below the threshold (in this case, 5), the output is 0, meaning the network predicts "No".
o The threshold sets the decision boundary for the output neuron: if the sum exceeds the threshold, the output neuron fires (prediction 1); if it falls below, the output neuron does not fire (prediction 0).
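The takeaways reduce to a single threshold test. A sketch, reusing the sums 5.5 and 2 from the rounds above:

```python
# The output neuron is just a threshold test on the weighted sum.

def fires(weighted_sum, threshold=5):
    """Return 1 ("Yes") if the sum exceeds the threshold, else 0 ("No")."""
    return 1 if weighted_sum > threshold else 0

print(fires(5.5))   # above the threshold -> 1 ("Yes")
print(fires(2))     # below the threshold -> 0 ("No")
```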
This is a simplified way to model how neural networks make decisions based on the activation
of neurons and their weighted connections!