AND Gate Perceptron Learning
The AND gate returns 1 if and only if both inputs are 1; otherwise, it returns 0.
We can solve this problem using the perceptron learning algorithm, which learns the weights
and bias for a linear decision boundary to fit the desired outputs.
| x1 | x2 | AND(x1, x2) |
|----|----|-------------|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
The perceptron learning algorithm learns the weights (w1, w2) and bias (b) as follows:
1. Initialize w1, w2, and b (for example, to 0).
2. For each training example (x1, x2) with target t, compute the output
   y = step(w1*x1 + w2*x2 + b), where step(net) returns 1 if net > 0 and 0 otherwise.
   If y differs from t, update:
   w1 = w1 + learning_rate * (t - y) * x1
   w2 = w2 + learning_rate * (t - y) * x2
   b = b + learning_rate * (t - y)
3. Repeat until all outputs match targets or the maximum number of iterations is reached.
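The steps above can be sketched as a small training loop. This is a minimal sketch: the function names, the strictly-positive step threshold, and the default learning rate of 1 are assumptions, not taken from the text.

```python
def step(net):
    # Threshold activation: fire only when the net input is strictly positive.
    return 1 if net > 0 else 0

def train_perceptron(samples, learning_rate=1.0, max_iterations=100):
    # samples: list of ((x1, x2), target) pairs
    w1 = w2 = b = 0.0  # step 1: initialize weights and bias to 0
    for _ in range(max_iterations):
        errors = 0
        for (x1, x2), t in samples:
            y = step(w1 * x1 + w2 * x2 + b)  # step 2: compute output
            if y != t:
                # Apply the perceptron update rule from the text.
                w1 += learning_rate * (t - y) * x1
                w2 += learning_rate * (t - y) * x2
                b += learning_rate * (t - y)
                errors += 1
        if errors == 0:  # step 3: stop once all outputs match targets
            break
    return w1, w2, b

AND_SAMPLES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(AND_SAMPLES)
```

After convergence, the learned (w1, w2, b) define a line separating (1, 1) from the other three inputs, so the trained perceptron reproduces the AND truth table.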
Example: Learning weights for the AND gate, with learning_rate = 1 and w1, w2, b
all initialized to 0.
Iteration 1 (input x1 = 1, x2 = 1, target t = 1):
The net input is 0*1 + 0*1 + 0 = 0, so y = 0 and t - y = 1.
Update: w1 = 0 + 1*(1-0)*1 = 1
        w2 = 0 + 1*(1-0)*1 = 1
        b  = 0 + 1*(1-0)   = 1
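The single update above can be checked numerically. This is a minimal sketch; it assumes the weights start at 0 and the learning rate is 1, as the arithmetic in the example implies, and a step function that returns 0 for a net input of 0.

```python
learning_rate = 1
w1 = w2 = b = 0          # assumed initial values, implied by the arithmetic above
x1, x2, t = 1, 1, 1      # the input (1, 1) with target 1

# Net input is 0*1 + 0*1 + 0 = 0, which does not exceed the threshold, so y = 0.
y = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# Apply the perceptron update rule once.
w1 += learning_rate * (t - y) * x1   # 0 + 1*(1-0)*1 = 1
w2 += learning_rate * (t - y) * x2   # 0 + 1*(1-0)*1 = 1
b  += learning_rate * (t - y)        # 0 + 1*(1-0)   = 1
```

After this update the weights match the values shown in Iteration 1, though further iterations are still needed before the perceptron classifies all four inputs correctly.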